Deployment Architecture

When using ntpdate I see ERROR ProcessDispatchedSearch - PROCESS_SEARCH - Error opening "": No such file or directory

Rob
Splunk Employee

I am seeing the error

ERROR ProcessDispatchedSearch - PROCESS_SEARCH - Error opening "": No such file or directory

a lot in my search head's splunkd.log, which uses a search head pooling configuration. It seems that this might be causing results to be incomplete, or searches not to run at all, which can be noted with the additional message:

Failed to create a bundles setup with server name ''. Using peer's local bundles to execute the search, results might not be correct

1 Solution

yannK
Splunk Employee

Are your servers having any clock issues?

I have seen this in cases like:
- search head pooling with shared storage
- a mounted bundle with shared storage
- a single server, but with a clock that keeps changing

In the last case, the root cause was that the sysadmin had decided to sync the clock across his deployment with a cron job calling ntpdate every 5 minutes.
This is not a good idea, because Splunk (and many other applications) relies on the clock to compare file modification times and durations, and ntpdate changes the clock immediately (so you can end up several seconds in the future or in the past, without notice).
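
For illustration only, the kind of cron entry described above would look something like this (the NTP server name is just a placeholder):

    # /etc/crontab -- steps the clock every 5 minutes; this is the problematic setup
    */5 * * * * root /usr/sbin/ntpdate pool.ntp.org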

  • In general, on Linux, use ntpdate only at system boot to set the clock once.
  • When the server is running, use a daemon like ntpd to gently sync the clock if your servers drift (a rough sketch of this setup follows below).

see http://superuser.com/questions/444733/linux-ntpd-and-ntpdate-service
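
As a rough sketch of that recommendation on a RHEL/CentOS-style box, assuming the stock ntp/ntpdate packages and init scripts are installed (commands and service names vary by distribution):

    # remove the ntpdate entry from cron, then:
    chkconfig ntpdate on   # run ntpdate once at boot, where this init script exists
    chkconfig ntpd on      # start the ntpd daemon at boot
    service ntpd start     # start ntpd now; it disciplines the clock gradually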

asetiawan
Explorer

In my personal experience, using ntpd doesn't completely eliminate the issue. Granted, the frequency goes down, from an error roughly every 10 minutes (when I ran ntpdate from cron every hour) to just a few of them with ntpd. Maybe the timestamp checking is just too strict?

jrodman
Splunk Employee

In general, frequent ntpdate (every 5 minutes) should not cause significant jumps, so you would hope software would be resilient (we should file bugs if we are not). However, there is no good reason to run ntpdate frequently: the resources software needs to handle the jumps will be greater than whatever is saved by not running ntpd constantly, and ntpd provides a monotonically increasing clock, so the software can behave more correctly.
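
If you want a quick sanity check that ntpd is keeping the offset small rather than stepping the clock, the standard ntpq tool can be used (shown here only as a hint, not part of the original answer):

    # query the local ntpd for its peers; the "offset" column (in ms)
    # should stay small if ntpd is slewing the clock rather than stepping it
    ntpq -p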
