
DB Connect App: DB inputs stopped loading data for some period of time

Zari
New Member

Greetings,

I am having an issue with the Splunk DB Connect App where database inputs stopped loading data for some period of time.

For example, our whole time range runs from 01.01.17 until today. The DB inputs are configured with a rising column / checkpoint column (a timestamp field, epoch time converted to Java timestamp format) so that only the newest records since the last one seen are loaded.
It all worked fine until last week, when we hit this issue for the first time with a couple of DB inputs; now the same thing is happening with other DB inputs.
On top of that, only data from mid-February until now is available for searching - the data before 20.02.17 is not there / not searchable.
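
For reference, each DB input uses the usual DB Connect rising-column query pattern, roughly like this sketch (the table and column names here are placeholders, not our real schema):

SELECT *
FROM my_schema.my_events        -- placeholder table
WHERE event_ts > ?              -- DB Connect substitutes the saved checkpoint value for ?
ORDER BY event_ts ASC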

The first time we had this issue, we deleted all data from the DB inputs, defined everything for the rising column from scratch, reset the checkpoint value, and loaded all data again from 01.01.17 until now.
After that we could search normally over all our data. But this time it has happened again, and it is time consuming to redo all of these steps from scratch for each DB input.

Could it somehow be due to how buckets in Splunk store and delete data, or something along those lines? At the moment we have no clue what the problem might be.

Is anybody experiencing a similar issue with the Splunk DB Connect App?
Any assistance is appreciated.

I am using Splunk 6.5.2 (Enterprise License) and the DB Connect App to index data from Oracle databases.

Thanks,


Richfez
SplunkTrust

I suspect this has nothing to do with DB Connect. What seems most likely is that the index this data is configured to go into is itself configured in a way that deletes the data. Sometimes the settings interact in ways that aren't immediately obvious.

If you could post your indexes.conf stanzas for this index, I suspect we can spot what the issue is. But in the meantime you might be able to figure it out yourself - the settings I'd look for are some combination of the following ones.

maxDataSize = <positive integer>|auto|auto_high_volume
maxTotalDataSizeMB = <nonnegative integer>
frozenTimePeriodInSecs = <nonnegative integer>
coldPath.maxDataSizeMB = <nonnegative integer>

There could be other settings that could do this, though. My guess is that you have a maxDataSize combined with a maxTotalDataSizeMB or frozenTimePeriodInSecs that's giving you large buckets relative to how long you keep them around.
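
To illustrate the kind of interaction I mean, here's a purely hypothetical stanza (made-up index name and values, nothing from your system) that would behave that way:

[some_low_volume_index]
maxDataSize = auto_high_volume    # ~10 GB buckets; on a low-volume index one bucket can span months
frozenTimePeriodInSecs = 7776000  # 90 days; once a bucket's newest event is older than this, the whole bucket is frozen
maxTotalDataSizeMB = 20000        # a small size cap can also freeze the oldest (huge) bucket in one go

With something like that, a single bucket can cover months of events, and because no coldToFrozenDir or coldToFrozenScript is set, "frozen" means deleted - so months of data vanish at once.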

So please post your indexes.conf (unless you spot it yourself).

Or maybe I'm TOTALLY off base and these ruminations on the probable problem are completely wrong! 🙂

Happy Splunking,
Rich


Zari
New Member

Hi Rich,

I appreciate your help - thank you for taking the time to reply.

I haven't figured it out yet, but I was asking myself the same question - whether it has to do with those configs and how data is managed within the buckets - since I am a newbie at Splunk. Here is the indexes.conf located under the ..system/default directory of Splunk. We didn't change anything in indexes.conf, and if it is the cause, another question arises: why did it work well until we hit this issue? I would think we would have run into this problem before. Or maybe I'm wrong and it really does have to do with the time intervals?

################################################################################
# "global" params (not specific to individual indexes)
################################################################################
sync = 0
indexThreads = auto
memPoolMB = auto
defaultDatabase = main
enableRealtimeSearch = true
suppressBannerList =
maxRunningProcessGroups = 8
maxRunningProcessGroupsLowPriority = 1
bucketRebuildMemoryHint = auto
serviceOnlyAsNeeded = true
serviceSubtaskTimingPeriod = 30
maxBucketSizeCacheEntries = 0
processTrackerServiceInterval = 1
hotBucketTimeRefreshInterval = 10
rtRouterThreads = 0
rtRouterQueueSize = 10000

################################################################################
# index specific defaults
################################################################################
maxDataSize = auto
maxWarmDBCount = 300
frozenTimePeriodInSecs = 188697600
rotatePeriodInSecs = 60
coldToFrozenScript =
coldToFrozenDir =
compressRawdata = true
maxTotalDataSizeMB = 500000
maxMemMB = 5
maxConcurrentOptimizes = 6
maxHotSpanSecs = 7776000   # ?
maxHotIdleSecs = 0
maxHotBuckets = 3
minHotIdleSecsBeforeForceRoll = auto
quarantinePastSecs = 77760000
quarantineFutureSecs = 2592000
rawChunkSizeBytes = 131072
minRawFileSyncSecs = disable
assureUTF8 = false
serviceMetaPeriod = 25
partialServiceMetaPeriod = 0
throttleCheckPeriod = 15
syncMeta = true
maxMetaEntries = 1000000
maxBloomBackfillBucketAge = 30d
enableOnlineBucketRepair = true
enableDataIntegrityControl = false
maxTimeUnreplicatedWithAcks = 60
maxTimeUnreplicatedNoAcks = 300
minStreamGroupQueueSize = 2000
warmToColdScript=
tstatsHomePath = volume:_splunk_summaries/$_index_name/datamodel_summary
homePath.maxDataSizeMB = 0
coldPath.maxDataSizeMB = 0
streamingTargetTsidxSyncPeriodMsec = 5000
journalCompression = gzip
enableTsidxReduction = false
suspendHotRollByDeleteQuery = false
tsidxReductionCheckPeriodInSec = 600
timePeriodInSecBeforeTsidxReduction = 604800

#
# By default none of the indexes are replicated.
#
repFactor = 0

################################################################################
# index definitions
################################################################################

[main]
homePath   = $SPLUNK_DB/defaultdb/db
coldPath   = $SPLUNK_DB/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
tstatsHomePath = volume:_splunk_summaries/defaultdb/datamodel_summary
maxMemMB = 20
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxHotBuckets = 10
maxDataSize = auto_high_volume

Regards,
Zari


Richfez
SplunkTrust

Hmm, I was hoping to find something like a mixture of auto_high_volume, a small daily indexing volume, and a small maxTotalDataSizeMB that made your data really "chunky" - say 2-3 months per bucket (at roughly 100 MB/day, an auto_high_volume bucket of about 10 GB spans around three months) - combined with a short retention, so that 5 months down the road it deleted 2-3 months of data all at once.

But no, I don't think that's the case.

Can you look through your Monitoring Console at your indexes and related views? I'm not sure exactly what you're looking for yet, but maybe something will stand out.

And can you paste the DB input configuration you have?

Lastly, I'd suggest creating a different index and changing your input to send this data there. The reasons for this are myriad, and it may also fix this situation (accidentally, though!) - see the sketch below.
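
For example, a minimal stanza along these lines in a local indexes.conf would do it (the index name and values are just placeholders - adjust retention and size to what you actually need), and then you'd point the DB input's index setting at it:

[oracle_dbx]
homePath   = $SPLUNK_DB/oracle_dbx/db
coldPath   = $SPLUNK_DB/oracle_dbx/colddb
thawedPath = $SPLUNK_DB/oracle_dbx/thaweddb
# keep retention and size explicit so you know exactly when data is allowed to be frozen
frozenTimePeriodInSecs = 188697600
maxTotalDataSizeMB = 500000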


Richfez
SplunkTrust

Specifically, check:
Settings > Monitoring Console > Indexing > Indexes and Volumes > Indexes and Volumes: Instance

If you are still using the main index, click on it and in the resulting "Index Detail: Instance" page look at your earliest and latest event dates - do those make sense?
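
If the Monitoring Console isn't handy, a rough equivalent from the search bar is something like this (just a sketch - swap in your real index name):

| tstats min(_time) AS earliest_event max(_time) AS latest_event where index=main
| fieldformat earliest_event = strftime(earliest_event, "%Y-%m-%d %H:%M:%S")
| fieldformat latest_event = strftime(latest_event, "%Y-%m-%d %H:%M:%S")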

Check also "Data Age vs Frozen Age (days)" and the various "usage" metrics near it - do they make sense too?

Then, down nearer the bottom in the "Historical Charts" section, change the time frame to something long enough to cover one of the problem periods and see what the charts look like.

I'm not sure exactly what I mean by "do they make sense", but hopefully something will become obvious.
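
One more search-based sanity check (again just a sketch - change the index name) shows how much time each surviving bucket spans and where your earliest remaining data starts:

| dbinspect index=main
| eval span_days = round((endEpoch - startEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch span_days sizeOnDiskMB
| sort startEpoch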

For instance, on a low- but steady-volume index here at home I have an earliest of December 2016, which makes sense because that's about when I configured it.

My data age vs. frozen age is also a sensible 212 / 2184 - I'm 212 days into roughly a six-year retention (2184 days is exactly the default frozenTimePeriodInSecs of 188697600 seconds, the same value that appears in your pasted config). I didn't specify anything there, so that's probably right.

All the rest looks similarly sensible.
