Splunk Search

Why are old search jobs that should expire not being removed from the jobs directory?

anthony_copus
Explorer

Hi,

Currently, our jobs directory is overflowing. To fix this, we wanted to shorten the expiry time of jobs so they would be deleted from the jobs directory sooner (we have no need to store historic ones at all). However, setting dispatch.ttl doesn't appear to have fixed this. We have the following settings for a saved search:

[test_page_pivot]
action.email = 1
action.email.inline = 1
action.email.sendresults = 1
action.email.subject = TEST Splunk Alert: $name$
action.email.to = xxxxxxxxxxx@gmail.com
alert.digest_mode = True
alert.expires = 30m
alert.severity = 1
alert.suppress = 0
alert.track = 0
auto_summarize.dispatch.earliest_time = -1d@h
cron_schedule = */10 * * * *
dispatch.earliest_time = -24h@m
dispatch.ttl = 1p
enableSched = 1
search = | pivot TEST_page_pivot TEST_page_object count(TEST_page_object) AS "Total" SPLITCOL platform SPLITROW app_id AS "app_id" | search app_id="*" | rename VALUE AS unknown | addtotals

As you can see, dispatch.ttl is set to 1p for this query, which runs every 10 minutes. Since a ttl of "1p" means one scheduled period, the job should be reaped from the jobs directory after 10 minutes; however, this is not the case.

Inside the jobs directory, the expiry time keeps being extended, even though each job initially shows the expected expiry time. What's causing this?
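One way to see what TTL each job actually ended up with is the search jobs REST endpoint (a hedged sketch; the fields below are standard job properties, but check your version's output):

| rest /services/search/jobs | table sid, label, ttl, updated

Comparing the ttl column against the 600 seconds you expect from dispatch.ttl = 1p on a 10-minute schedule shows quickly whether something else is overriding it.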

1 Solution

sowings
Splunk Employee
Splunk Employee

The answer to your conundrum lies in "action.email = 1". Any job that triggers an alert action (email, "alert", etc.) takes on the TTL of that action. Check alert_actions.conf; its stanza headers align with the <foo> part of "action.<foo>" in savedsearches.conf.
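For example, to make emailed jobs expire on roughly the same schedule you intended, you could lower the ttl in the [email] stanza (an illustrative sketch; the value and file location depend on your deployment, typically a local alert_actions.conf):

# alert_actions.conf
[email]
# ttl is in seconds, or <n>p for n scheduled periods; jobs that fired
# this action keep their artifacts for this long instead of dispatch.ttl
ttl = 600

Note this affects every saved search that sends email, so weigh it against searches whose results you do want to keep around longer.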


kmugglet
Communicator

Hi Anthony,
Just wondering if you ever found any solution to this?
