Monitoring Splunk

Are there any experienced suggestions for monitoring watchdog "no response" errors?

kpkeimig
Path Finder

Are there any experienced suggestions for monitoring watchdog "no response" errors?

We are currently considering taking the average daily count for the past 30 days, multiplying it by 2, and creating an alert if the current day's count exceeds that amount. We have the response time set at the default of 8 seconds. Our Splunk Cloud indexers (24 of them) are generating roughly 14,000 of these messages a day (which seems "holy cow" high), and we found our maximum thread delay to be around 54 seconds when running normally.
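For illustration, a scheduled search along these lines could implement that "2x the 30-day average" alert. This is only a sketch: it assumes watchdog.log is indexed into _internal on your deployment (it is monitored by default on recent versions, but verify on yours), and field names such as avg_daily_count and today_count are our own, not anything Splunk defines.

index=_internal source=*watchdog/watchdog.log* log_level=ERROR earliest=-30d@d latest=@d
| bin _time span=1d
| stats count by _time
| stats avg(count) as avg_daily_count
| appendcols
    [ search index=_internal source=*watchdog/watchdog.log* log_level=ERROR earliest=@d latest=now
      | stats count as today_count ]
| where today_count > 2 * avg_daily_count

Scheduled once a day with an alert condition of "number of results > 0", this would fire only when today's count exceeds twice the trailing 30-day average.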

Given the limited documentation available on the server.conf and authorize.conf pages of the Splunk documentation website, I opened a Splunk support ticket. The response is below.

1) What does watchdog monitor?
The watchdog is a new feature introduced in the 7.2.x branch that has been exposed in Splunk logging.
The watchdog is a way for Splunk to monitor internal threads created by Splunk and receive information whenever any monitored thread exceeds the configured response time (which could be caused either by long execution of a single task or by a large number of small tasks).
It is functionality that was introduced out of a need to:
- get more information regarding potential bottlenecks and deadlocks when gathering data
- get a snapshot of all nodes (all processes, including searches) gathered at the time of the incident
- be useful during development and QA testing to find performance issues before shipping Splunk

Features include the following, all of which you can configure from the configuration files (a minimal server.conf sketch appears after the documentation link below):

- Enabling/disabling the watchdog
- Response timeout (the maximum time to respond to the watchdog)
- Invoking actions when a blocked/slow thread is observed
- Call stack creation (maximum number, interval)
- Script execution
- UI messaging
- Logging control
- Call stack (pstacks) generation, i.e. creating a stack of the blocked thread or of all registered threads when a blockage is detected
- Enabling stacks on endpoints to quickly generate call stacks of running threads

Watchdog messages are logged to $SPLUNK_HOME/var/log/watchdog/watchdog.log
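If that file is being indexed into _internal on your deployment, a rough breakdown of which threads are being flagged can be obtained with something like the search below. The rex pattern is an assumption about the message format (the watchdog errors we have seen include a thread name='...' fragment), so adjust it to match your actual events.

index=_internal source=*watchdog/watchdog.log* log_level=ERROR
| rex "thread name='(?<blocked_thread>[^']+)'"
| stats count by blocked_thread
| sort - count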

Watchdog configuration settings are in server.conf:
https://docs.splunk.com/Documentation/Splunk/7.2.0/Admin/Serverconf
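For reference, the watchdog is configured in a [watchdog] stanza in server.conf. The snippet below is only a sketch based on our reading of the spec linked above, not something provided by support; confirm the exact setting names and valid values against the server.conf reference for your Splunk version before using it.

# server.conf (sketch only -- verify against the server.conf spec for your version)
[watchdog]
disabled = false
# Maximum time, in seconds, a monitored thread may take to respond before
# the watchdog logs a "no response" error (8 is the default referred to above).
responseTimeout = 8
# Actions to invoke when a blocked/slow thread is observed, for example
# call-stack (pstacks) generation.
actions = pstacks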

2) What does slow latency imply?

Watchdog alerts trigger WHEN there is a BUSY thread with a response time greater than 8 seconds; they do NOT trigger for a busy thread with a response time under 8 seconds, nor for a successfully executed thread.

Latency just implies that Splunk took a little longer servicing the request.

3) Is slow movement through the threads user-impacting or not?

No. Even when some threads are busy, Splunk will still try to work around that to deliver results to end users.
Busy threads are expected under heavy load, and the watchdog logs may help identify which area of Splunk is affected, but on their own and without context they do not indicate a problem.

Maybe the ERROR message should be re-worded as WARN, as it is misleading and implies there is a problem when there is not.


vinhlu
Engager

Did you ever find out what these watchdog errors mean?
