Getting Data In

Indexed data is being rolled off (deleted). Not sure why; what config am I missing on 6.2?

tmblue
Engager

Index name | Max size (MB) of entire index | Frozen archive path | Current size (in MB) | Event count | Earliest event | Latest event
t_activity | 500,000 | N/A | 149,887 | 581,087,973 | Mar 31, 2015 2:57:59 PM | Apr 21, 2015 11:50:35 AM

I have others that have data well into 2014 and twice the size of this one. If my index max size is set to 500,000 MB, why is the data continuing to roll off? What setting am I missing?

Example of an index that is much larger and is holding data as it should.

umapper | 500,000 | N/A | 499,621 | 2,998,937,781 | Dec 30, 2014 6:14:56 PM | Apr 21, 2015 11:50:30 AM

So what setting am I missing that is causing this data to be rolled off so quickly? I need at least 90 days and I'm not getting it; heck, just typing this I've lost two hours' worth of data.

So I'm obviously missing a configuration setting somewhere. My limits.conf and indexes.conf are not very exciting.

limits.conf:
[inputproc]
file_tracking_db_threshold_mb = 500

indexes.conf:
[default]

[volume:cold]
maxVolumeDataSizeMB = 250
path = /mounts/splunk_cold

[volume:home]
maxVolumeDataSizeMB = 4200000
path = $SPLUNK_DB


martin_mueller
SplunkTrust

There's a default maximum number of warm buckets per index; increase that, or (better) increase your bucket size.
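
A minimal indexes.conf sketch of either approach (the stanza name is the index from this thread, but the values are illustrative, not recommendations):

[t_activity]
# Option 1: allow more warm buckets before the oldest roll to cold (default is 300)
maxWarmDBCount = 1200
# Option 2 (usually better): use larger buckets so the same data needs fewer of them
maxDataSize = auto_high_volume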


martin_mueller
SplunkTrust

To find out where something is configured, run this from the Splunk home path on the CLI:

./bin/splunk cmd btool indexes list your_index --debug
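
With --debug, each line of btool output is prefixed with the .conf file it came from, which is what pins down where a stray coldPath or size limit is actually being set.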

gbower333
Path Finder

I had run into something like this before and adjusted my maxWarmDBCount to several thousand (default 300). That helped before when I was losing buckets fast (small cold retention, then freezing and rolling off). Running on 6.2.3.

Now I am seeing a very occasional bucket move to cold, which is fine for now, but I do not see why it is happening. I have plenty of space and my maxWarmDBCount update is still there:
./splunk cmd btool indexes list | grep -A30 '<your_index>' | grep maxWarmDBCount
Is there some need to distinguish hot and warm settings? They live in the same index homePath for me (volume:hot:*).
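
For context, hot and warm buckets do live together under an index's homePath, while cold buckets go to coldPath; a typical per-index layout using volumes looks roughly like this (index name and paths are illustrative):

[some_index]
# hot and warm buckets share homePath
homePath = volume:hot/some_index/db
# cold buckets get their own coldPath
coldPath = volume:cold/some_index/colddb
# thawedPath cannot reference a volume
thawedPath = $SPLUNK_DB/some_index/thaweddb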


martin_mueller
SplunkTrust

That froze a cold bucket, maybe because of the cold volume size.


tmblue
Engager

Martin, I think so. I'm uncovering a few things here.

One, my colddb volume had a max size of 20 MB (20), which means a bucket of 504 MB could not have been written there and was thus just deleted.

The other issue that I'm seeing and trying to fix is that this particular index has a different coldPath set inside the GUI (I can't find where it's been written to a file, and I can't change its location in the GUI). The other indexes not showing this issue have the standard default coldPath, so their data is being put into cold storage rather than deleted. The lack of volume space on the impacted indexes is causing data to roll from warm straight to frozen (deleted) instead of to cold.

So now I need to figure out where Splunk is holding this colddb path for a couple of my non-default indexes and change it; I may just have to add the index config into indexes.conf myself.
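
If it helps, a rough sketch of what an explicit per-index stanza plus a larger cold volume cap could look like (the sizes are placeholders to be tuned for the 90-day goal, not recommendations):

[volume:cold]
path = /mounts/splunk_cold
# must be big enough to hold every cold bucket you want to keep
maxVolumeDataSizeMB = 500000

[t_activity]
# pin the cold path explicitly instead of relying on whatever the GUI wrote
coldPath = volume:cold/t_activity/colddb
# 90 days x 86400 seconds = 7776000; older data is frozen (deleted if no archive dir/script is set)
frozenTimePeriodInSecs = 7776000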

Thanks for the assistance, it definitely got me looking in different places.
Tory


martin_mueller
SplunkTrust

You said you think it's rolling data; to confirm, search for BucketMover in the _internal index.
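
A search along these lines should surface the bucket moves for the index in question (the index name is just the one from this thread):

index=_internal sourcetype=splunkd BucketMover "idx=t_activity"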


tmblue
Engager

Ahh thanks

04-21-2015 14:54:06.305 -0700 INFO BucketMover - idx=t_activity Moving bucket='db_1427847473_1427843268_6051' because maximum number of warm databases exceeded, starting warm_to_cold: from='/splunkdata/t_activity/db' to='/mounts/splunk_cold/t_activity/colddb'

So is this a clue? I'm wondering why it's moving from warm to cold anyway; do I not have enough buckets allocated for hot/warm? I have nothing in my configs, so I would think it would be using the six-year rule. So now what? 🙂 And thanks!
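
The "maximum number of warm databases exceeded" in that message points at maxWarmDBCount (default 300), not at the six-year frozenTimePeriodInSecs, which is why buckets roll to cold long before the data is anywhere near six years old.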


tmblue
Engager

Okay, I bumped the bucket size to 3000; however, I'm not actually seeing any data in my colddb directory:

Plan to move COLD bucket|dir with LT=1425602427, path=/mounts/splunk_cold/torque/colddb/db_1425602427_1425592226_1507, reductionSize=526540800 (502MB)
04-21-2015 11:57:28.505 -0700 INFO BucketMover - AsyncFreezer freeze succeeded for bkt='/mounts/splunk_cold/torque/colddb/db_1425602427_1425592226_1507'
04-21-2015 16:20:43.386 -0700 INFO databasePartitionPolicy - idx=torque, Initializing, params='[3000,period=60,frozenTimePeriodInSecs=188697600,coldToFrozenScript=,coldToFrozenDir=,warmToColdScript=,maxHotBucketSize=786432000,optimizeEvery=5,syncMeta=true,maxTotalDataSizeMB=500000,maxMemoryAllocationPerHotSliceMB=5,addressCompressBits=5,isReadOnly=false,maxMergizzles=6,signatureBlockSize=0,signatureDatabase=_blocksignature,maxHotSpanSecs=7776000,maxMetadataEntries=1000000,maxHotIdleSecs=0,maxHotBuckets=3,quarantinePastSecs=77760000,quarantineFutureSecs=2592000,maxSliceSize=131072,serviceMetaPeriod=25,partialServiceMetaPeriod=0,throttleCheckPeriod=15,homePath_maxDataSizeBytes=0,coldPath_maxDataSizeBytes=0,compressionLevel=-1,fsyncInterval=18446744073709551615,maxBloomBackfillBucketAge_secs=2592000,enableOnlineBucketRepair=true,maxUnreplicatedMsecWithAcks=60000,maxUnreplacatedMsecNoAcks=300000,alwaysBloomBackfill=false,minStreamGroupQueueSize=2000,streamingTargetTsidxSyncPeriodMsec=5000,repFactor=0,hotBucketTimeRefreshInterval=10]' isSlave=false needApplyDeleteJournal=false
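
For reference, the frozenTimePeriodInSecs=188697600 in that init line works out to 188697600 / 86400 = 2184 days, roughly the six-year default mentioned above, so age-based freezing is not what's removing three-week-old data; a 90-day retention target would be 90 × 86400 = 7,776,000 seconds.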


tmblue
Engager

Sorry, I had authentication issues with the site.

Here is the missing piece of the question:

Splunk 6.2 standalone.

I have several indexes that I'm tracking, and I keep seeing the "earliest event" change, which makes me think Splunk is rolling off (deleting) data.

Index name | Max size (MB) of entire index | Frozen archive path | Current size (in MB) | Event count | Earliest event | Latest event
t_activity | 500,000 | N/A | 149,887 | 581,087,973 | Mar 31, 2015 2:57:59 PM | Apr 21, 2015 11:50:35 AM

I have others that have data well into 2014 and twice the size of this one. If my index max size is set to 500,000 MB, why is the data continuing to roll off? What setting am I missing?

Example of an index that is much larger and is holding data as it should.

umapper | 500,000 | N/A | 499,621 | 2,998,937,781 | Dec 30, 2014 6:14:56 PM | Apr 21, 2015 11:50:30 AM

So what setting am I missing that is causing this data to be rolled off so quickly? I need at least 90 days and I'm not getting it; heck, just typing this I've lost two hours' worth of data.

So I'm obviously missing a configuration setting somewhere. My limits.conf and indexes.conf are not very exciting.

limits.conf:
[inputproc]
file_tracking_db_threshold_mb = 500

indexes.conf:
[default]

[volume:cold]
maxVolumeDataSizeMB = 250
path = /mounts/splunk_cold

[volume:home]
maxVolumeDataSizeMB = 4200000
path = $SPLUNK_DB
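
One thing that stands out in this config: the [volume:cold] cap of 250 MB is smaller than a single ~500 MB bucket (the BucketMover lines above show a reductionSize around 502 MB), so any bucket that rolls warm-to-cold immediately pushes the cold volume over its limit and gets frozen; with no coldToFrozenDir or coldToFrozenScript set, frozen means deleted.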
