ERROR HttpListener [97417 TcpChannelThread] - Exception while processing request from x.x.x.x:63596 for /en-US/splunkd/__raw/services/search/shelper?output_mode=json&snippet=true&snippetEmbedJS=false&namespace=search&search=search%20i&useTypeahead=true&showCommandHelp=true&showCommandHistory=true&showFieldInfo=false&_=1664562934323: std::bad_alloc
Any help, please?
Hi @vikasg,
whenever you have a crash, I suggest opening a case with Splunk Support, both for your own sake and to highlight a possible issue!
Anyway, a few questions to better understand your situation:
Giuseppe
Thank you for the response. This is the latest Splunk, 9.0.1, on a single Linux box for a POC. It has 32 GB RAM and 16 CPUs, and the box acts as both search head and indexer.
Ulimit is set as per the recommendations.
THP is disabled.
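As a quick sanity check, both of these claims can be verified from the shell. This is a minimal sketch; the THP sysfs path shown is the common one on RHEL-family distributions and may differ elsewhere:

```shell
# Resource limits for the current user that matter to splunkd
ulimit -n   # max open files (Splunk's documented production recommendation is 64000)
ulimit -u   # max user processes
# THP status: "[never]" in the output means THP is disabled
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
  || echo "THP sysfs entry not present"
```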
Hi @vikasg,
your hardware and your configuration (ulimit and THP) are correct, so I suggest generating a diag (they will surely ask for one) and opening a case with Splunk Support.
They will be able to analyze your system and understand what happened.
Ciao.
Giuseppe
Since this is a POC and I am using a 60-day trial license, I do not think I can open a support case on this trial license.
I am digging into this further and found a few things that seem to cause the issue.
Findings
1) I moved to an older Splunk version, 8.2.6.1
2) Under ulimit I set the maximum memory to about 80% of the total
3) After installation I gave the machine about an hour to settle, then started ingesting data at a low rate
4) Now I am increasing the data flow rate
5) Since it was a POC I ran the Splunk service as root, but I have now created a splunk user and set ulimit accordingly
6) Tweaked some data model accelerations that were too aggressive earlier (the same machine is acting as both indexer and SH), so it is better to start slow and gradually increase
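For point 2, the 80% figure can be derived from /proc/meminfo. This is a sketch assuming a Linux box; the variable names are illustrative:

```shell
# Compute 80% of total RAM in KiB to use as a virtual-memory ulimit
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
limit_kb=$((total_kb * 80 / 100))
echo "suggested memory ulimit: ${limit_kb} KiB"
# Apply for the current shell, or persist it in /etc/security/limits.conf:
#   ulimit -v "$limit_kb"
```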
I hope this helps someone. I am not accepting my own answer yet, as I am sure I will have a few more findings to add to this comment.
Hi @vikasg.
if you are a customer, you can ask Splunk Sales to follow up on your case; if you are a Splunk partner, you can ask your Splunk Channel Manager to do the same. In other words, you will not be abandoned!
Anyway:
1) I moved to an older Splunk version, 8.2.6.1
2) Under ulimit I set the maximum memory to about 80% of the total
3) After installation I gave the machine about an hour to settle, then started ingesting data at a low rate
4) Now I am increasing the data flow rate
these are not relevant
5) Since it was a POC I ran the Splunk service as root, but I have now created a splunk user and set ulimit accordingly
/etc/security/limits.conf
user_name hard nofile 8192
user_name soft nofile 8192
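Note that limits.conf changes only apply to new login sessions. A quick way to confirm the values, first for the current shell and then for the service account (user_name is the placeholder from the snippet above):

```shell
# Open-file limits for the current shell
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit
# Then, as root, confirm the service account picked up limits.conf:
#   su - user_name -c 'ulimit -Sn; ulimit -Hn'   # both should report 8192
```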
6) Tweaked some data model accelerations that were too aggressive earlier (the same machine is acting as both indexer and SH), so it is better to start slow and gradually increase
Check whether it was a momentary lapse of reason (to quote Pink Floyd!) or something that repeats.
In other words, monitor your system to see whether the situation recurs.
You can also check your system using the Splunk Monitoring Console app (included with Splunk by default).
Ciao.
Giuseppe
I agree on the points you marked as not relevant, but somehow this is working for me. Let me explain my thinking. I am installing apps such as Eventgen, Security Essentials, CIM, and my custom apps that have data models with acceleration. What I believe is that rather than turning everything on at the same time, I should have waited at each step. As I monitored my instance I saw some search lag issues; after install, with the old approach, RAM spiked suddenly, the system hung, and splunkd crashed.
I just changed my approach and things started working. Now I have turned on all inputs and have not observed any crash for the last hour; data models are accelerating and Eventgen is generating a good amount of logs. Earlier it was failing every 10 minutes.
Regarding ulimit, it is permanently set in
/etc/security/limits.conf