Deployment Architecture

Why are our REST limits so low, how do we change that in 6.2.6?

epleshe
Engager

Some of our Deployment Servers are going offline, with these events flooding splunkd.log:

WARN HttpListener - Can't handle request for /services/broker/connect/<GUID>/<Server>/271043/windows-x64/<port>, max thread limit for REST HTTP server is 2728, threads already in use is 2728

The servers eventually show as Offline, and when they're rebooted they resume working for a while.

Why is the limit so low? I found documentation that addresses this in 6.3 (http://docs.splunk.com/Documentation/Splunk/6.3.0/Troubleshooting/HTTPthreadlimitissues), but I can't find how to address it in 6.2.6, and we're not quite ready to upgrade.

1 Solution

afret2007
Path Finder

The workaround should be the same for 6.2.6 as for 6.3.0. The thread limit was introduced in 6.0.0, so the calculation and override are the same for all 6.x.x versions to the present.
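As a minimal sketch of what that override looks like, assuming the [httpServer] settings described in the linked 6.3 troubleshooting doc behave the same on 6.2.6: add the following to $SPLUNK_HOME/etc/system/local/server.conf and restart splunkd. A negative value removes the auto-computed cap rather than raising it; a positive value sets an explicit limit instead.

[httpServer]
# Remove the auto-computed caps on the REST HTTP server.
# Use a positive number here to set an explicit limit instead.
maxThreads = -1
maxSockets = -1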


afret2007
Path Finder

I have the same issue, except I have Splunk version 6.3.0 loaded on Windows. The workaround is for Linux users. Is there a workaround for Windows, I wonder?


epleshe
Engager

Thank you. I've set it to -1 on the server that was having issues, and I haven't seen any errors since.
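For anyone wanting to confirm the override took effect, one option (assuming a standard install with btool available) is to list the effective [httpServer] settings and the file each one comes from:

$SPLUNK_HOME/bin/splunk btool server list httpServer --debug

You can also keep watching splunkd.log to verify the "max thread limit for REST HTTP server" warnings have stopped.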
