Splunk Search

After upgrading to a Splunk 6.2 indexer cluster, why do searches hang with high memory consumption, forcing us to restart?

rchan11
Explorer

Hi,

We've recently upgraded to a Splunk 6.2 indexer cluster, but we're finding that searches hang and the system becomes unresponsive, forcing us to restart the entire system. Our hardware usage doesn't spike, but memory consumption is higher than normal.

Is there any diagnostic we can run after we've successfully restarted the system to find out what the root cause was?

Thanks,
Ryan

jkat54
SplunkTrust

Please run the following searches to find your needle in the haystack:

index=_internal source=*splunkd.log (WARN OR ERR*)

(or maybe it won't be splunkd.log)

index=_internal source=* (WARN OR ERR*)

You might also consider adding _index_earliest=-1h to see events that were indexed in the last hour, which helps narrow the results down to exactly when the issue occurred.
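For example, a sketch combining the two (the -1h window here is just an assumption; adjust it to the window when the hang actually happened):

index=_internal source=*splunkd.log (WARN OR ERR*) _index_earliest=-1h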

ppablo
Retired

Hi @rchan11

Just to clarify for other users: are you referring to search head clustering or indexer clustering, or both?

rchan11
Explorer

Hi ppablo,

We have 1 search head and 2 clustered indexers.

Thanks
Ryan
