Monitoring Splunk

Logging of "Timed out waiting for peer ..." event?

the_wolverine
Champion

I'd like to know the history of this issue but I cannot find any evidence in the Splunk logs. The issue appears as a UI banner or in email alerts when there is a timeout to one or more peers.

How do I get this event logged?


MuS
SplunkTrust

Hi the_wolverine,

did you check splunkd.log for DispatchThread messages?

cheers, MuS
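
If nothing turns up in the local splunkd.log, a search over the _internal index can check the logs of every instance (search head and peers) at once. A sketch, assuming the message text matches the banner verbatim; the exact wording may differ between Splunk versions:

```
index=_internal sourcetype=splunkd "Timed out waiting for peer"
| stats count min(_time) AS first_seen max(_time) AS last_seen BY host
```

If this returns nothing either, the event may be logged only at a level or in a channel not indexed by default, which is worth checking in the splunkd.log settings in log.cfg.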

the_wolverine
Champion

Yes, I checked splunkd.log and there is no logged event.


woodcock
Esteemed Legend

This error occurs when your Search Head attempts to send a search job to a Search Peer (usually one of your Indexers) and that peer does not respond within the default timeout period. The search continues without that Indexer, which of course probably means that some of your events are not returned and your results are incomplete. In my experience, the problem can often be cleared simply by restarting the Splunk instance on the Indexer in question, but sometimes you need to dig deeper. In any case, something is keeping that Indexer so busy that it cannot reliably respond to search requests even though the Splunk instance is running. This kind of thing can also commonly be caused by misconfigured or misbehaving load balancers or other identity/load-shifting equipment sitting between your Search Head and your Indexer peers.
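
If the peers are healthy but slow (or the network path is), the timeouts the Search Head uses when talking to its peers can be raised in distsearch.conf on the Search Head. A sketch only, with assumed values; the attribute names and defaults should be checked against the distsearch.conf spec for your Splunk version:

```
# $SPLUNK_HOME/etc/system/local/distsearch.conf on the Search Head
[distributedSearch]
# Seconds to wait when establishing a connection to a search peer
connectionTimeout = 30
# Seconds to wait when sending to / receiving from a search peer
sendTimeout = 120
receiveTimeout = 120
```

Raising timeouts only masks the symptom, though: if an Indexer is too busy to answer, the underlying load or network issue still needs attention.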
