Deployment Architecture

Syslog, data storage, buckets

ihingos
Engager

I'm looking to index and store a ton of data (syslog). My question is: once Splunk has indexed the data and moved it to the various buckets, is there any dedup or compression that happens? Is there a document someplace that explains the process in more detail?

Thanks


bmacias84
Champion

Hello ihingos,

To answer your question: Splunk does not dedup raw events, but it does compress them. However, Splunk does let you dedup events in the search query language (e.g. `yoursearch | dedup _raw ...`). Depending on the cardinality of your data you can get fairly high compression ratios. Compression will also vary depending on bucket and index sizes.

In general, the formula for estimating disk usage is: ( daily average indexing rate ) x ( retention policy ) x 1/2
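As a quick sketch of that rule of thumb, here is the estimate worked through in Python. The figures (50 GB/day, 90-day retention) are hypothetical, chosen just for illustration; the 1/2 factor reflects the guideline that compressed raw data plus index files typically come to roughly half the original volume.

```python
# Hypothetical inputs, not from the thread above:
daily_indexing_rate_gb = 50   # average GB of raw syslog indexed per day (assumed)
retention_days = 90           # days the retention policy keeps data (assumed)

# ( daily average indexing rate ) x ( retention policy ) x 1/2
estimated_storage_gb = daily_indexing_rate_gb * retention_days * 0.5
print(f"Estimated disk usage: {estimated_storage_gb:.0f} GB")  # 2250 GB
```

Your actual ratio depends on how compressible your data is, so it is worth measuring real usage on a sample index before committing to hardware.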

Additional Reading:

Estimate your storage requirements

How Splunk calculates disk storage


