Getting Data In

Is there an option to tell Splunk to use round robin buffers for all log files coming in (ex: only store data for a certain time or size)?

hunterbr
Engager

Hi,

I am a Splunk newbie. I have set up Splunk in a lab environment with limited resources on an ESXi server (max. 100 GB virtual HD). I am wondering if there is an option (if it isn't the default) to tell Splunk to use round robin buffers for all data coming in (syslog and Sourcefire eStreamer data), e.g. store only 30 days of data and overwrite old data when the buffer size is reached. Is there an option to do that, and is it recommended, or does it have any non-obvious side effects? The environment is mainly built to play with Splunk and start learning it. The goal is to make sure the VM does not crash, not to keep all logging.

tia,
Holger

1 Solution

acharlieh
Influencer

After ingesting your logs, Splunk stores these logs in indexes. The default index is main, but people often create their own, since it's at this level that you can define access controls and (applicable here) a retention policy. You might be interested in this doc: http://docs.splunk.com/Documentation/Splunk/6.2.2/Indexer/Setaretirementandarchivingpolicy

For each index, you can tell Splunk to freeze data (by default this means delete, though it can be configured to mean archiving to long-term storage) when the data is too old or when the index contains too much data. By default these limits are very large (7 years and 500 GB, if I remember correctly), but they are likely the parameters you're looking to set in indexes.conf.
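As a rough sketch of what that could look like (the index name "lab" and the specific values are just placeholders, adjust them to your environment), a stanza in indexes.conf on the indexer along these lines would freeze events older than 30 days and cap the index size well below your 100 GB disk:

    # indexes.conf -- minimal sketch, hypothetical "lab" index
    [lab]
    homePath   = $SPLUNK_DB/lab/db
    coldPath   = $SPLUNK_DB/lab/colddb
    thawedPath = $SPLUNK_DB/lab/thaweddb

    # Freeze (delete, unless coldToFrozenDir/coldToFrozenScript is set) data older than 30 days
    frozenTimePeriodInSecs = 2592000

    # Cap the total size of this index at roughly 20 GB
    maxTotalDataSizeMB = 20000

After changing indexes.conf you'd restart splunkd (or manage the index through the web UI under Settings > Indexes, which writes the same settings for you).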


hunterbr
Engager

Thanks, that was what I was looking for!
