Splunk Search

Real time search tuning

daniel333
Builder

hello,

Splunk 6.13/CentOS 6.4

I recently had a Splunk outage. My monitoring software showed plenty of IO, CPU, and RAM available, yet the forwarders were reporting that the TCP queues were full on the receiving indexers.

I popped into Splunk on Splunk and looked at the fill ratios for all 4 queue stages, which are normally 0. The 4th stage, the indexing queue, was maxed out, even though we actually had lower than average throughput. After some poking around I discovered a set of real-time dashboards that had been created by our NOC and sent out to the general population. Once I disabled RT, the queues went right back to 0%.

The abusive RT dashboards aside, I feel there is some performance tuning I am missing. With plenty of system resources available, I'd like to understand why these queues backed up so badly and what I can do to get better performance out of the indexing queue... ideally without installing 10 more indexers 🙂
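
For reference, the fill ratios I was watching come from the group=queue events in metrics.log, so a search along these lines should show the same picture as the SoS panels (queue and field names assumed from the standard metrics.log group=queue events):

index=_internal source=*metrics.log* group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) by name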


daniel333
Builder

I tried both traditional real-time search and indexed_realtime_use_by_default = true, and although indexed realtime performed slightly better, in both cases the queues maxed out.
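
For clarity, the indexed realtime toggle I flipped lives under the [realtime] stanza of limits.conf; a minimal sketch of what I set:

[realtime]
indexed_realtime_use_by_default = true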
