Monitoring Splunk

HttpListener - Socket error from 127.0.0.1 while accessing : Broken Pipe

sarvesh_11
Communicator

I am frequently getting the following socket warning:

WARN HttpListener - Socket error from 127.0.0.1 while accessing /servicesNS/nobody//data/inputs/rest/*/: Broken pipe
source = /opt/splunk/var/log/splunk/splunkd.log sourcetype = splunkd

I have checked the answer below, but was unable to resolve my issue:
https://answers.splunk.com/answers/105292/what-is-the-cause-of-these-socket-errors-reported-in-splun...

limits (server.conf):
maxThreads = 0
maxSockets = 0
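For reference, these settings live under the [httpServer] stanza of server.conf; a value of 0 lets splunkd pick the limit automatically from estimated server capacity (which is where the 1365 figure logged below comes from), while a negative value removes the limit. A minimal sketch of the stanza:

[httpServer]
# 0 = let splunkd derive the limit from estimated capacity;
# a negative value would remove the limit entirely
maxThreads = 0
maxSockets = 0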

INFO loader - Limiting REST HTTP server to 1365 threads
INFO loader - Limiting REST HTTP server to 1365 sockets

ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47488
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

/etc/security/limits.conf
* hard nofile 64000
* hard nproc 8192
* hard fsize -1

1 Solution

DavidHourani
Super Champion

Hi @sarvesh_11,

Your open files (-n) limit is still at 1024. You should increase that for the Splunk user, and you should also increase the soft limit.

Try something like this in your limits.conf file:

*    hard    nofile     64000
*    soft    nofile     64000
*    hard    nproc     8192
*    soft    nproc     8192
*    hard    fsize      -1  
*    soft    fsize      -1  

Or like this if that doesn't work:
https://docs.splunk.com/Documentation/Splunk/7.2.6/Troubleshooting/ulimitErrors#Set_limits_using_the...
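Either way, after applying the change and restarting, a quick way to confirm the new limits actually apply to the Splunk account is to check them from a fresh login shell for that user. A minimal sketch, assuming the account is named splunk:

# hard and soft open-file limits as seen by a fresh login shell
su - splunk -c 'ulimit -Hn; ulimit -Sn'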

Cheers,
David


dijikul
Communicator

Why was the above answer marked as accepted when, on June 03 2019, @sarvesh_11 stated that the issue persisted?

We're seeing this too. Was this issue resolved?


sarvesh_11
Communicator

Hey @dijikul
We upgraded our Splunk Enterprise version, and that resolved this issue.
We upgraded it to 7.3.


sarvesh_11
Communicator

Hey @DavidHourani,
Thanks for dropping by.
Soon after posting the question I realized that, and made the following changes in limits.conf:

@splunk hard nofile 64000
@splunk hard nproc 8192
@splunk hard fsize -1
@splunk soft nofile 64000
@splunk soft nproc 8192
@splunk soft fsize -1
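One thing worth noting here: in /etc/security/limits.conf, a leading @ applies the entry to a group rather than a user, so "@splunk" matches members of the splunk group. A per-user entry for a user named splunk would drop the @; a sketch:

# per-user entries use the bare username; "@splunk" targets the group instead
splunk hard nofile 64000
splunk soft nofile 64000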

ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 47488
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 8192
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

Still the issue persists.
Also, while checking the app ownership I can see different users (root, splunk), so shall I change the owner to splunk for all the apps?


DavidHourani
Super Champion

You can use * for the user in /etc/security/limits.conf for now. Restart Splunk after changing that setting, and check the _internal logs for ulimit to double-check that Splunk also reads the right ulimits. If you run the following search you should get the relevant lines:

index=_internal  component=ulimit
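As a cross-check outside of Splunk's own logs, you can also read the limits that the running splunkd process actually inherited. A sketch, assuming a Linux host:

# show the open-file limits of the oldest running splunkd process
cat /proc/$(pgrep -o splunkd)/limits | grep -i 'open files'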

sarvesh_11
Communicator

@DavidHourani
Thanks so much, man!
The changes are reflected; I shall get back to you if the "Broken pipe" error is still there.

Thanks a lot for your prompt response 🙂


sarvesh_11
Communicator

@DavidHourani,
Hey David, the error is still coming. The limit changes are reflected, but I still see:

Socket error from 127.0.0.1 while accessing /servicesNS/nobody//data/inputs/rest/http_connections_*/: Broken pipe.

Or do I need to change maxSockets and maxThreads to a negative integer? Currently they are 0.
Do you have any more remedies for this?


DavidHourani
Super Champion

Yeah, try setting maxSockets and maxThreads to -1 and see if it helps. Are you getting any other errors?
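For reference, that change would look something like the following in server.conf, where a negative value removes the limit entirely; splunkd needs a restart for it to take effect:

[httpServer]
# negative values remove the thread/socket caps entirely
maxThreads = -1
maxSockets = -1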


sarvesh_11
Communicator

Yeah, a few more, and related to this only:

"ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/rest.py" HTTP Request error: 500 Server Error: Internal Server Error"

"ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_box/bin/box_service.py" InsecureRequestWarning)"


DavidHourani
Super Champion

Okay, try out the maxSockets and maxThreads change and let's see.


sarvesh_11
Communicator

@DavidHourani
Well, to do this and restart services again, I have to get a maintenance window from the business.
Ideally that should not be necessary, right? Setting the values to -1 will also impact the server's performance.


DavidHourani
Super Champion

Yeah... ideally you would need a reasonable limit to avoid overloading the server. Are you using a single SH?


sarvesh_11
Communicator

Yeah, a standalone Search Head.


sarvesh_11
Communicator

Hey @DavidHourani,
Modifying maxSockets and maxThreads to -1 has not fixed my issue. I can still see the Broken pipe messages.

Also, while browsing for this issue, I found a link where people have been struggling with such errors for a long time; strangely, no Splunk employee has tried to resolve it.
FYR: https://answers.splunk.com/answers/105292/what-is-the-cause-of-these-socket-errors-reported-in-splun...

It seems this is a loophole in older Splunk versions, as the known issue SPL-82389 has not been closed or addressed since version 6.2.9.

DavidHourani
Super Champion

Which Splunk version are you running? Have you tried reaching out to Splunk support? You MIGHT be hitting a bug then...


sarvesh_11
Communicator

We are currently on 6.6.3.
Not yet; we plan to upgrade in the near future.
