Getting Data In

Can you help us fix a forwarding delay occurring between event time and indexed time?

Nik_Shafiq
New Member

We have set up a Splunk forwarder to forward the latest logs from the same server, but we are having an issue where there is a huge difference between the indexed time and the event time. I can see a delay of almost 17 hours when I use this query to find the indexed time, event time, and the delay in hours:

index=myindex sourcetype=mysourcetype host=myhost | eval delay=(_indextime-_time)/60/60 | eval indexed_time=strftime(_indextime, "%+") | table indexed_time, _time, delay

This is what I got from splunkd.log:

01-08-2019 11:17:31.607 +0700 INFO  TailReader -   ...continuing.
01-08-2019 11:17:33.515 +0700 INFO  TcpOutputProc - Connected to idx=*heavy_forwarder*, pset=0, reuse=1.
01-08-2019 11:17:46.608 +0700 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-08-2019 11:17:51.609 +0700 INFO  TailReader -   ...continuing.
01-08-2019 11:17:55.286 +0700 INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_*sourcehostip*_*sourcehostname*_*sourcehostmac*
01-08-2019 11:18:06.610 +0700 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...
01-08-2019 11:18:11.610 +0700 INFO  TailReader -   ...continuing.
01-08-2019 11:18:16.611 +0700 INFO  TailReader - Could not send data to output queue (structuredParsingQueue), retrying...

This issue occurs only on this particular host, so I created a limits.conf file to change the thruput limit as suggested here:

link:troubleshootingindexdelay
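
For reference, the change was along these lines on the forwarding host (the exact value is only illustrative; the universal forwarder default is 256 KBps, and 0 removes the limit):

# limits.conf on the forwarder (value is illustrative)
[thruput]
# 0 = unlimited; the universal forwarder ships with maxKBps = 256
maxKBps = 0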

Unfortunately, that did not resolve the issue, as we still see the same messages in splunkd.log.

Any idea which part I should look into next?


lakshman239
Influencer

It looks like the queues are getting full and hence blocked. Are you sending the data/logs for this sourcetype to only one indexer? Check metrics.log for pipeline issues and address them, and also review your props/transforms for possible regex improvements.
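
For example, a search over the internal logs along these lines shows which queues on that host are reporting as blocked (field names follow the standard metrics.log queue entries; adjust host= to your affected forwarder):

index=_internal host=myhost source=*metrics.log* group=queue blocked=true | stats count by host, name

If the output queue itself is among the blocked ones, the backpressure is usually coming from the indexer or the network rather than from the tailing side.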

You may also want to review https://wiki.splunk.com/Community:HowIndexingWorks for background on how events move through the indexing queues.
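
On the single-indexer question above: if everything goes through one output, spreading the load across more than one indexer can relieve blocked queues upstream. A minimal outputs.conf sketch for the forwarder, with placeholder hostnames and ports:

# outputs.conf on the forwarder (server names and ports are placeholders)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# the forwarder automatically load-balances across the servers listed here
server = idx1.example.com:9997, idx2.example.com:9997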
