
Splunk App for Stream: How to resolve congestion in parsing queue after starting the universal forwarder?

kwchang_splunk
Splunk Employee

Hello Stream experts,

I'm running a stress test with streamfwd, capturing many short-lived TCP connections at over 35,000 connections per second (cps).
The Splunk App for Stream is running on a universal forwarder (UF) and sending the captured data to a remote indexer.
I configured it as follows (a rough sketch of these settings appears after the list):
- Splunk 6.3 on both sides
- Stream app version 6.3.2
- maxKBps = 0 on the UF side
- increased all UF queue sizes, including the exec queue and parsingQueue, to over 30 MB
- applied the recommended TCP kernel parameters on both the UF and the indexer node
- set streamfwd.xml (ProcessingThreads: 8, PcapBufferSize: 127108864, TcpConnectionTimeout: 1, MaxEventQueueSize: 500000)
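
For reference, here is a rough sketch of those settings. The limits.conf and server.conf stanzas are standard; the streamfwd.xml fragment shows only the elements I set (the surrounding root element is omitted, and the exact XML layout may differ by Stream version):

    # limits.conf on the UF: remove the forwarder thruput cap
    [thruput]
    maxKBps = 0

    # server.conf on the UF: enlarge the pipeline queues (repeated per queue)
    [queue=parsingQueue]
    maxSize = 30MB

    # streamfwd.xml: the elements I changed
    <ProcessingThreads>8</ProcessingThreads>
    <PcapBufferSize>127108864</PcapBufferSize>
    <TcpConnectionTimeout>1</TcpConnectionTimeout>
    <MaxEventQueueSize>500000</MaxEventQueueSize>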

As you can see from the chart in the attached image, the parsingQueue fills up very quickly right after the UF starts. Congestion in the exec queue follows immediately, and the Stream forwarder's event queue backs up as well.

Strangely, I couldn't find any congestion in the indexer queues at that moment.
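
(In case it helps anyone reproduce the chart: it comes from the standard metrics.log queue fields, with a search along these lines, where <uf_host> is a placeholder for the forwarder's hostname:)

    index=_internal source=*metrics.log group=queue host=<uf_host>
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
    | timechart span=1m perc95(fill_pct) by name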

What should I check to resolve this problem?
Thank you in advance.

[Attached image: chart of UF queue fill ratios over time]


kwchang_splunk
Splunk Employee

My problem was caused by the "compressed = true" option in outputs.conf. Under heavy load (as in this performance-test scenario), the UF's parsingQueue filled up easily with that option enabled.
The problem was resolved by setting it to false and giving outputs.conf a bigger maxQueueSize (10MB in my case).
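
In outputs.conf terms, the working configuration looked roughly like this (the group name and indexer address are placeholders, not my actual values):

    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = <indexer_host>:9997
    compressed = false
    maxQueueSize = 10MB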
