Hi,
My /apps/splunk filesystem is filling up, and the culprit appears to be dispatchtmp. What files are stored here? It appears to be related to accelerated searches (not sure). How can I tell which search is associated with each directory that gets created, and is there a way to route these tmp files elsewhere, since these are fixed disks?
To clarify, is /apps/splunk your $SPLUNK_HOME?
Based on this answer https://answers.splunk.com/answers/73297/clean-dispatchtmp.html you can take Splunk down and either move or clean out that directory, but I'd suggest moving the contents somewhere else so you can restore them if needed.
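A minimal sketch of that approach, assuming /apps/splunk is $SPLUNK_HOME and /data/scratch is a hypothetical location on another volume with enough space:

# Stop Splunk before touching dispatch directories
/apps/splunk/bin/splunk stop
# Move the temp files aside rather than deleting them, so they can
# be restored if anything turns out to need them
mkdir -p /data/scratch/dispatchtmp.bak
mv /apps/splunk/var/run/splunk/dispatchtmp/* /data/scratch/dispatchtmp.bak/
/apps/splunk/bin/splunk start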
Thanks. It appears that some of these jobs are generating content every minute, and I don't know why. I looked at all the jobs associated with the app, and they run every hour.
Can you please take the sid from the dispatchtmp directory and try to find that job in the _audit index, with something like index=_audit <sid>? If it is a scheduled search, you can then check whether or not it is scheduled to run every minute.
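For example, with a hypothetical sid of 1512542219.123 taken from a dispatchtmp subdirectory name:

index=_audit "1512542219.123"
| table _time user info search_id savedsearch_name

If savedsearch_name is populated, it is a scheduled search and you can check its cron schedule; if it is empty, the job was ad hoc.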
There's only about 4 events for that sid, but my filesystem shows files getting generated every minute:
-rw------- 1 splunk splunk 174131 Dec 6 01:36 statstmp_partition0_1512542219.16275.lrtp449.csv.gz
-rw------- 1 splunk splunk 164979 Dec 6 01:36 statstmp_partition0_1512542219.16276.lrtp449.csv.gz
-rw------- 1 splunk splunk 164752 Dec 6 01:36 statstmp_partition0_1512542219.16277.lrtp449.csv.gz
-rw------- 1 splunk splunk 164121 Dec 6 01:37 statstmp_partition0_1512542219.16278.lrtp449.csv.gz
-rw------- 1 splunk splunk 155234 Dec 6 01:37 statstmp_partition0_1512542220.16279.lrtp449.csv.gz
-rw------- 1 splunk splunk 145870 Dec 6 01:37 statstmp_partition0_1512542220.16280.lrtp449.csv.gz
-rw------- 1 splunk splunk 174143 Dec 6 01:37 statstmp_partition0_1512542220.16281.lrtp449.csv.gz
-rw------- 1 splunk splunk 162143 Dec 6 01:37 statstmp_partition0_1512542220.16282.lrtp449.csv.gz
-rw------- 1 splunk splunk 165256 Dec 6 01:37 statstmp_partition0_1512542220.16283.lrtp449.csv.gz
-rw------- 1 splunk splunk 166296 Dec 6 01:37 statstmp_partition0_1512542220.16284.lrtp449.csv.gz
-rw------- 1 splunk splunk 159039 Dec 6 01:37 statstmp_partition0_1512542220.16285.lrtp449.csv.gz
-rw------- 1 splunk splunk 165745 Dec 6 01:37 statstmp_partition0_1512542220.16286.lrtp449.csv.gz
-rw------- 1 splunk splunk 160583 Dec 6 01:37 statstmp_partition0_1512542221.16287.lrtp449.csv.gz
-rw------- 1 splunk splunk 162555 Dec 6 01:37 statstmp_partition0_1512542221.16288.lrtp449.csv.gz
-rw------- 1 splunk splunk 149885 Dec 6 01:37 statstmp_partition0_1512542221.16289.lrtp449.csv.gz
-rw------- 1 splunk splunk 168065 Dec 6 01:37 statstmp_partition0_1512542221.16290.lrtp449.csv.gz
-rw------- 1 splunk splunk 153648 Dec 6 01:37 statstmp_partition0_1512542221.16291.lrtp449.csv.gz
Looks like it is an ad hoc search. Can you please check on all of your search heads, using the command ps -ef | grep -i splunk | grep search, whether any ad hoc search is running that is trying to fetch too many events and has been running for a long time?
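A variant of that check, assuming a Linux ps that supports the etimes (elapsed-seconds) column, to surface the longest-running search processes first:

# etimes = elapsed seconds; the --id= argument in the command line
# is typically the sid of the running search
ps -eo etimes,pid,args | grep -i splunk | grep search | grep -v grep | sort -rn | head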
Nothing that's been running for a long time.
As a last resort, I'll try a rolling restart of the search heads.
Someone owes me a beer if it turns out to be a real-time search running on all instances of the search head pool. lol
Hi @a212830,
Based on the documentation (the use_dispatchtmp_dir setting in limits.conf), it looks like Splunk stores temporary job files in the dispatchtmp directory. Are you running search head pooling?
use_dispatchtmp_dir = <bool>
* Specifies if the dispatchtmp directory should be used for temporary search
time files, to write temporary files to a different directory from the
dispatch directory for the job.
* Temp files are written to $SPLUNK_HOME/var/run/splunk/dispatchtmp/<sid>/
directory.
* In search head pooling, performance can be improved by mounting dispatchtmp
to the local file system.
* Default: true, if search head pooling is enabled. Otherwise false.
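If you decide you don't need the separate temp directory at all, a minimal sketch of disabling it in $SPLUNK_HOME/etc/system/local/limits.conf (restart required; confirm the setting is supported in your version):

[search]
# Write temporary search-time files into each job's dispatch
# directory instead of dispatchtmp
use_dispatchtmp_dir = false

Alternatively, per the note above, dispatchtmp can be mounted (or symlinked) to a different local volume so it stays off the disk that is filling up.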
Yes, we are running SHP.