Hi,
I am getting the following:
Search peer zdal134 has the following message: Too many search jobs found in the dispatch directory (found=3289, warning level=2000). This could negatively impact Splunk's performance, consider removing some of the old search jobs.
I've run the clean-dispatch command (which appeared to work) and restarted the search head, but the message still appears. Is there something else that needs to be done?
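For reference, clean-dispatch is normally invoked as `splunk cmd splunkd clean-dispatch <destination> <latest-time>`; it moves old jobs out of dispatch into the destination directory rather than deleting them. A minimal sketch that just prints the command so it can be checked before running (the destination directory and the -7d cutoff are example values, not the exact ones used here):

```shell
# Assumes a default install location; adjust SPLUNK_HOME for your environment.
SPLUNK_HOME=${SPLUNK_HOME:-/opt/splunk}

clean_dispatch_cmd() {
  # $1 = directory old jobs get moved into, $2 = move jobs last modified before this time
  echo "$SPLUNK_HOME/bin/splunk cmd splunkd clean-dispatch $1 $2"
}

# Print it, review it, then run it by hand, e.g.:
# clean_dispatch_cmd /opt/splunk/old-dispatch-jobs/ -7d
```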
I used to have this issue on the search heads, and managed to bring job retention back to 24 hours or less, at around 5,000 jobs.
But as of today I'm getting the same issue, this time from the two search peers (that is, not the search heads, but both indexers); each is warning about more than 2,000 jobs.
I thought the dispatch issue that many of us face (which makes one think Splunk should improve this somehow...) was isolated to wherever searches are fired from (the search heads), but it appears some searches also create jobs on the indexers, which still have default settings. What gets passed over to the indexers, and which searches count toward the total?
Just cleaning dispatch is not always enough. If your users share a lot of searches, you will have more folders in dispatch, since the TTL for shared searches is increased. You can run this command to find the directories older than 5 days and delete them, if you wish:
find $SPLUNK_HOME/var/run/splunk/dispatch -mindepth 1 -maxdepth 1 -type d -mtime +5 -exec rm -rf {} \;
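Before deleting anything, it can be worth previewing how many job directories would match. A small sketch along the same lines (the 5-day cutoff mirrors the command above; the dispatch path is the default Splunk location and is an assumption):

```shell
# Count dispatch job directories older than a given number of days,
# without deleting anything.
count_old_jobs() {
  # $1 = dispatch directory, $2 = age in days
  find "$1" -mindepth 1 -maxdepth 1 -type d -mtime +"$2" | wc -l
}

# Example (run on the affected instance):
# count_old_jobs "${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch" 5
```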
Alternatively, you can raise the warning threshold. In limits.conf, place this setting:
[search]
dispatch_dir_warning_size = 3500
This will now only give a warning when you have more than 3,500 folders in dispatch. Be cautious: raising the threshold only silences the warning, and a large dispatch directory may still negatively affect your environment depending on hardware specs.
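To see how close you are to whatever threshold you set, you can simply count the job directories currently in dispatch. A sketch, assuming the default dispatch path:

```shell
# Count the job directories currently in dispatch, for comparison
# against dispatch_dir_warning_size.
dispatch_count() {
  # $1 = dispatch directory
  find "$1" -mindepth 1 -maxdepth 1 -type d | wc -l
}

# Example:
# dispatch_count "${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"
```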