
Real time search tuning

daniel333
Builder

hello,

Splunk 6.13/CentOS 6.4

I recently had a Splunk outage. My monitoring software showed plenty of I/O, CPU, and RAM available, yet the forwarders were reporting that the TCP queues on the receiving indexers were full.

I popped into Splunk on Splunk and looked at my fill ratios for all 4 queue stages, which are normally 0. The 4th queue, the indexing queue, was maxed out, even though throughput was actually lower than average. After some poking around I discovered a set of real-time dashboards that had been created by our NOC and sent out to the general population. Once I disabled RT, the queues went right back to 0%.
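In case anyone wants to reproduce the check without Splunk on Splunk, a search along these lines against metrics.log on the indexers shows roughly the same fill ratios (a rough sketch, not exactly what SoS runs under the hood; the 1-minute span is just illustrative):

    index=_internal source=*metrics.log* group=queue
        (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
    | timechart span=1m perc90(fill_pct) by name

When the indexing queue is the bottleneck, indexqueue sits near 100% while the three queues ahead of it back up behind it.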

The abusive RT dashboard aside, I feel there is some performance tuning I am missing. With plenty of system resources available, I'd like to understand why these queues backed up so badly and what I can do to get better performance out of the indexing queue... ideally without installing 10 more indexers 🙂


daniel333
Builder

I tried both traditional real-time search and indexed_realtime_use_by_default = true. Indexed real-time performed slightly better, but in both cases the queues still maxed out.
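For reference, the indexed real-time switch lives in limits.conf on the search head, roughly like this (a minimal sketch; the disk sync delay shown is just the documented default, not a tuned value):

    # limits.conf -- sketch of the indexed real-time settings tested
    [realtime]
    # serve real-time searches from indexed data instead of tailing the ingestion pipeline
    indexed_realtime_use_by_default = true
    # example only: seconds to allow for events to be written to disk before they are searchable
    indexed_realtime_disk_sync_delay = 60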
