I have a few easy questions about Splunk data compression rates.
What is the typical compression rate for English ASCII-based data?
Is the compression rate different for hot / warm / cold / frozen buckets?
Do hot buckets also get compressed?
Easy, huh? Thanks for your answer!
Easy:
1 - roughly between 1 and infinity minus one.
Seriously, it depends on your data; here is a method to calculate it.
I usually see about 40%~50% compression.
In this example we look at index=_internal; please replace it with your index.
| dbinspect index=_internal
| fields state,id,rawSize,sizeOnDiskMB
| stats sum(rawSize) AS rawTotal, sum(sizeOnDiskMB) AS diskTotalinMB
| eval rawTotalinMB=(rawTotal / 1024 / 1024) | fields - rawTotal
| eval compression=tostring(round(diskTotalinMB / rawTotalinMB * 100, 2)) + "%"
| table rawTotalinMB, diskTotalinMB, compression
2 - the compression rate is identical for hot / warm / cold / frozen.
However, when a bucket is frozen, some metadata files are removed or compressed (this saves some MB); they can be recreated when the bucket is thawed.
3 - hot buckets are already written compressed.
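The arithmetic in the eval steps above can be checked outside Splunk. Here is a short Python sketch of the same formula, using made-up numbers (not from a real index): rawSize is in bytes, sizeOnDiskMB is in MB, and the result is disk usage as a percentage of raw size.

```python
# Mirror of the SPL: bytes -> MB, then disk as a percentage of raw.
# The two input numbers below are hypothetical, for illustration only.
raw_total_bytes = 5_368_709_120   # hypothetical sum(rawSize), in bytes (5 GB)
disk_total_mb = 2_560.0           # hypothetical sum(sizeOnDiskMB)

raw_total_mb = raw_total_bytes / 1024 / 1024            # bytes -> MB
compression = round(disk_total_mb / raw_total_mb * 100, 2)

print(f"{compression}%")  # 5 GB raw stored in 2.5 GB on disk -> 50.0%
```

So a value under 100% means the index takes less disk than the raw data it holds.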
Hello guys,
it looks like frozen data is about 50% smaller than hot/cold data. Is this correct?
Thanks.
With the above logic, for most of my indexers I see 200+% compression, e.g. rawTotal=42726 and diskTotalinMB=102921.
Per the documentation, compression should be around 50%, meaning diskTotalinMB should be about half of rawTotal. But in my case it is roughly 2.4 times larger. Any pointers on why it consumes more disk space?
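For reference, plugging the numbers from the question into the same formula (assuming both figures are in MB) gives the percentage directly:

```python
# Numbers quoted in the question, both assumed to be in MB.
raw_total_mb = 42726
disk_total_mb = 102921

compression = round(disk_total_mb / raw_total_mb * 100, 2)
print(f"{compression}%")  # 240.89% -> disk is ~2.4x the raw size
```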
It all depends on your data. If you are using indexed extractions on JSON data you will get virtually no reduction in total disk size, since the tsidx files will be huge compared to those for typical syslog data (on which the documentation is based).
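How much the raw data itself shrinks is also very data-dependent. As a rough illustration only (using zlib, which is not Splunk's journal format, and ignoring tsidx overhead entirely), repetitive syslog-like text compresses far better than JSON events full of unique values:

```python
import json
import random
import zlib

# Two synthetic samples: highly repetitive syslog-style lines vs.
# JSON lines with unique random values. Illustration only; Splunk's
# rawdata journal uses its own compressed format.
syslog_like = (
    "Jan 01 12:00:00 host sshd[1234]: Accepted publickey for user\n" * 1000
)

random.seed(0)  # fixed seed so the sample is reproducible
json_like = "\n".join(
    json.dumps({"id": random.getrandbits(64), "value": random.random()})
    for _ in range(1000)
)

for name, text in [("syslog-like", syslog_like), ("json-like", json_like)]:
    data = text.encode()
    ratio = len(zlib.compress(data)) / len(data) * 100
    print(f"{name}: stored at {ratio:.1f}% of raw size")
```

The repetitive sample compresses to a tiny fraction of its raw size, while the JSON sample stays comparatively large, which is one reason the documented ~50% figure does not transfer to every data source.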
Always a good idea to add new data to a test index and check:
- compression
- line-breaking
- time-stamping
before creating the input in a production environment.
No.
But you can run a test by segregating each source/sourcetype to a different index, index a significant sample, then compare with the previous search.
@yannk, is there a straightforward way to calculate compression ratios for different sources or sourcetypes within an index?