
limits.conf default_save_ttl = 604800 (7 days), but in /opt/splunk/var/run/splunk/dispatch there are folders older than 7 days

All,
2 Splunk admin questions:
1) We have default_save_ttl = 604800 (7 days), but in /opt/splunk/var/run/splunk/dispatch there are folders older than 7 days. According to https://docs.splunk.com/Documentation/Splunk/8.0.5/Search/ManagejobsfromtheOS, the ttl is the length of time that a job's artifacts (the output it produces) will remain on disk and available (ttl=).
 
Should we clean up the old searches manually via cron, and is that safe? For example:
find /opt/splunk/var/run/splunk/dispatch/ -maxdepth 1 -type d -mtime +8 -ls | wc -l
37
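 
For reference, this is roughly the cron entry we had in mind. It is only a sketch; the +8-day threshold, the restriction to top-level dispatch directories, and the use of plain find/rm are our own assumptions, not anything Splunk-documented:

# /etc/cron.d/splunk-dispatch-cleanup (sketch; path and age threshold are our assumptions)
# Remove only top-level dispatch directories older than 8 days.
0 3 * * * root find /opt/splunk/var/run/splunk/dispatch/ -maxdepth 1 -mindepth 1 -type d -mtime +8 -exec rm -rf {} +

If memory serves, the docs page linked above also describes a splunkd clean-dispatch command for moving aged dispatch directories out of the way, which may be safer than deleting them with rm; please correct us if that is the preferred route.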
 
Here is the output of btool for the ttl settings in limits.conf:
/opt/splunk/bin/splunk cmd btool limits list --debug|grep ttl
/opt/splunk/etc/system/default/limits.conf indexed_csv_ttl = 300
/opt/splunk/etc/system/default/limits.conf search_ttl = 2p
/opt/splunk/etc/system/default/limits.conf concurrency_message_throttle_time = 10m
/opt/splunk/etc/system/default/limits.conf max_lock_file_ttl = 86400
/opt/splunk/etc/system/default/limits.conf cache_ttl = 300
/opt/splunk/etc/system/default/limits.conf default_save_ttl = 604800
/opt/splunk/etc/system/default/limits.conf failed_job_ttl = 86400
/opt/splunk/etc/system/default/limits.conf remote_ttl = 600
/opt/splunk/etc/system/default/limits.conf replication_file_ttl = 600
/opt/splunk/etc/system/default/limits.conf srtemp_dir_ttl = 86400
/opt/splunk/etc/system/default/limits.conf ttl = 600
/opt/splunk/etc/system/default/limits.conf ttl = 300
/opt/splunk/etc/system/default/limits.conf cache_ttl_sec = 300
/opt/splunk/etc/system/default/limits.conf ttl = 86400
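 
In case it matters, this is how we would expect to override the relevant settings if the defaults turn out to be the issue. A minimal sketch for /opt/splunk/etc/system/local/limits.conf, assuming ttl and default_save_ttl sit under the [search] stanza as the spec file suggests (values shown are just the current defaults):

[search]
# lifetime of a job's dispatch artifacts, in seconds (default 600 = 10 minutes)
ttl = 600
# lifetime of jobs a user explicitly saves, in seconds (default 604800 = 7 days)
default_save_ttl = 604800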
 
2) Another question, regarding this output from ps aux | grep 1598277129:
root 29785 106 0.3 764344 282356 ? Sl 13:51 (UTC) 114:48 [splunkd pid=1771] search --id=1598277129.14351_B930C604-9D78-4B47-8E19-429E50F02A65 --maxbuckets=300 --ttl=600 --maxout=500000 --maxtime=0 --lookups=1 --reduce_freq=10 --rf=* --user=redacted --pro --roles=redacted
The above Splunk search process started/completed and has a ttl of 600 (10 minutes), and in search.log we can see CANCEL and status=3. Why is the search still running if CANCEL was issued? We see quite a few of these cases, and the CPU load is usually high on 8.0.2. We did not have this condition in Splunk 7.2.x. Any input?
 
08-24-2020 15:47:12.345 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
08-24-2020 15:47:12.345 INFO DispatchExecutor - User applied action=CANCEL while status=3
...
08-24-2020 15:47:14.344 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=CANCEL
08-24-2020 15:47:14.345 INFO DispatchExecutor - User applied action=CANCEL while status=3
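 
For completeness, if we need to kill one of these stuck jobs manually, this is the REST call we would try. A sketch only: the credentials and management port are placeholders, and the sid is taken from the ps output above:

# Force-cancel a specific search job by sid via the management port (placeholders: admin:changeme, port 8089)
curl -k -u admin:changeme https://localhost:8089/services/search/jobs/1598277129.14351_B930C604-9D78-4B47-8E19-429E50F02A65/control -d action=cancel

We would still like to understand why the in-product CANCEL does not stop the process, rather than relying on this as a workaround.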

Thanks.