Hi,
We write a large volume of logs from our application, and all of them are indexed in Splunk. Is there a way to find the total size of all events over a particular duration?
I am able to get the size of events in a time range using the search below:
index=myindex|eval esize=len(_raw) | stats count as count avg(esize) as avg | eval bytes=count*avg | eval kb=bytes/1024 | eval mb=kb/1024 | eval gb=mb/1024 | eval tb=gb/1024 |stats values(kb) as KB | eval KB=round(KB,2)
Very simple: by default, Splunk stores raw events as UTF-8. UTF-8 is a variable-width encoding (1 to 4 bytes per character), but typical ASCII log text is one byte per character, so `len(_raw)` is a good approximation of the event size in bytes.
So you do this:
your initial search
| eval eventSize = len(_raw)/1024/1024/1024
The first division by 1024 gives you kilobytes, the second megabytes, the third gigabytes, and so on.
Don't forget to upvote if you found this useful.
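A shorter variant that avoids the count-times-average step is to total the event sizes directly with `stats sum`; a minimal sketch, assuming `myindex` stands in for your index:

```spl
index=myindex
| eval esize=len(_raw)
| stats sum(esize) as bytes
| eval MB=round(bytes/1024/1024,2), GB=round(bytes/1024/1024/1024,2)
```

Since `len(_raw)` counts characters, this estimates the raw data volume, not the compressed size on disk.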
@mattlucas I think you have a typo in your eval mb line
| eval mb=round(kb/1024/2014,2)
should be
| eval mb=round(kb/1024/1024,2)
Yes, good catch. Typo on my part
I would actually clean it up and do like so:
index=whatever
| fields _raw
| eval esize=len(_raw)
| stats count as count avg(esize) as avg
| eval bytes=count*avg
| eval kb=bytes/1024
| eval mb=round(kb/1024/2014,2)
| eval gb=round(kb/1024/1024/1024,2)
| eval tb=round(gb/1024/1024/1024/1024,2)
| stats values(kb) as KB, values(mb) AS MB, values(gb) AS GB, values(tb) AS TB
I added `| fields _raw` to ignore all fields except _raw to speed up the search, and I added the rounding for you.
I think the eval statements may be a bit off kilter here; shouldn't they be:
index=mildw
| fields _raw
| eval esize=len(_raw)
| stats count as count avg(esize) as avg
| eval bytes=count*avg
| eval kb=bytes/1024
| eval mb=round(kb/1024,2)
| eval gb=round(mb/1024,2)
| eval tb=round(gb/1024,2)
| stats values(kb) as KB, values(mb) AS MB, values(gb) AS GB, values(tb) AS TB
since kb is bytes/1024, mb is kb/1024, gb is mb/1024, and so on.
Your calculation is wrong from mb onwards; each should have one fewer division by 1024.
By "the size of all events" are you asking "how much disk space does it take to index and store those events?"
I am looking for the size of each event.
It would also be great if we could get the disk space used for indexing,
and find out how much we are logging into Splunk.
I am trying this command, but it's failing:
index=myindex| eval esize=len(_raw)|stats avg(esize) as avg [search index=myindex |stats count as co] | eval size=(avg*co)
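The subsearch is likely the problem: subsearch results are expanded into search terms, and that can't happen in the middle of a `stats` clause. It also isn't needed, since a single `stats` can produce both the count and the average; a minimal sketch keeping your field names:

```spl
index=myindex
| eval esize=len(_raw)
| stats count as co avg(esize) as avg
| eval size=avg*co
```

Here `size` is the estimated total bytes for the events in your search window.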
Hi,
Please give the command below a try:
| rest /services/data/indexes/
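On its own this lists every index; to narrow it to one index's size, something like the following may help (`myindex` is a placeholder; `currentDBSizeMB` and `totalEventCount` are standard fields returned by this endpoint):

```spl
| rest /services/data/indexes/
| search title=myindex
| fields title currentDBSizeMB totalEventCount
```

Note that `rest` requires permission to query the REST endpoint, which may be restricted in some environments.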
It's not working.
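If the `rest` endpoint is restricted in your environment, another common approach for on-disk size is `dbinspect`; a sketch, assuming `myindex` is your index and you have access to its bucket metadata:

```spl
| dbinspect index=myindex
| stats sum(sizeOnDiskMB) as diskMB
```

For daily indexed volume rather than disk usage, the license usage log in the `_internal` index is another angle, if you can read it.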