Getting Data In

Is there an option to tell Splunk to use round robin buffers for all log files coming in (ex: only store data for a certain time or size)?

hunterbr
Engager

Hi,

I am a Splunk newbie. I have set up Splunk in a lab environment with limited resources on an ESXi server (max. 100 GB virtual HD). I am wondering whether there is an option (if it isn't the default) to tell Splunk to use round-robin buffers for all incoming data (syslog and Sourcefire eStreamer data), e.g. store data for only 30 days and overwrite old data when the buffer size is reached. Is there an option to do that, and is it recommended, or does it have any non-obvious side effects? The environment is mainly built to play with Splunk and start learning it. The goal is to make sure the VM does not crash, not to keep all the logging.

tia,
Holger

0 Karma
1 Solution

acharlieh
Influencer

After ingesting your logs, Splunk stores these logs in indexes. The default index is main, but people often create their own, since it's at this level that you can define access controls and (applicable here) a retention policy. You might be interested in this doc: http://docs.splunk.com/Documentation/Splunk/6.2.2/Indexer/Setaretirementandarchivingpolicy

For each index, you can tell Splunk to freeze (by default this means delete, though it can be configured to mean archiving to long-term storage) data that is too old, or data beyond a size limit when the index contains too much. By default these limits are very large (7 years / 500 GB, if I remember correctly), but they are likely the parameters you're looking to set in indexes.conf.
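As an illustration, a 30-day, size-capped retention policy for a dedicated index might look something like the sketch below in indexes.conf. The index name `lab` and the 20 GB cap are placeholders chosen for this example, not values from the question:

```ini
# indexes.conf — a sketch; the index name "lab" and the size cap are
# illustrative placeholders
[lab]
homePath   = $SPLUNK_DB/lab/db
coldPath   = $SPLUNK_DB/lab/colddb
thawedPath = $SPLUNK_DB/lab/thaweddb

# Freeze (i.e. delete, unless coldToFrozenDir or coldToFrozenScript is
# configured) events older than 30 days (30 * 24 * 60 * 60 seconds)
frozenTimePeriodInSecs = 2592000

# Cap the total index size at roughly 20 GB; the oldest buckets are
# frozen first when the cap is reached
maxTotalDataSizeMB = 20480
```

Whichever limit is hit first triggers freezing, so together these two settings give the "round-robin buffer" behavior asked about: old data rolls off by age, and the index can never outgrow its size cap.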


hunterbr
Engager

thx, that was what I was looking for!

0 Karma