Getting Data In

How to delete 'DONE' jobs in a Search Head Cluster

season88481
Contributor

Hi guys,

Is there a way to delete a DONE or running job in a Search Head Cluster?

Currently, some of my users are constantly hitting their disk space usage limit. I tried to delete their jobs (or let them delete their own jobs), but every time I hit the 'Delete' button on the 'Job Manager' page, nothing actually happens. I used the search query below to check whether the disk space is actually being cleaned up:


| rest splunk_server=local /services/search/jobs
| eval diskUsageMB=diskUsage/1024/1024
| rename eai:acl.owner AS owner, optimizedSearch AS searchQuery
| stats sum(diskUsageMB) AS diskUsageMB by sid owner searchQuery
| table owner searchQuery diskUsageMB
| search owner = xxx
| addcoltotals labelfield=owner

This search confirms that the jobs are still counted against the disk space quota even after 'Delete'.
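(For reference, deleting a job directly through the REST API is a DELETE on the job's endpoint. The sketch below uses placeholder host, credentials, and sid; in a SHC, the job's dispatch artifacts live on the member that ran it, so the request may need to target that specific member:)

```shell
# Hypothetical sketch: delete a finished search job by sid via the REST API.
# Host, port, credentials, and <sid> are placeholders; in a SHC, target the
# member that owns the job's dispatch directory.
curl -k -u admin:changeme \
     -X DELETE "https://sh-member01:8089/services/search/jobs/<sid>"
```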

Any help will be much appreciated.

Cheers,

naidusadanala
Communicator

Usually, search jobs expire automatically after 10 minutes.

If your users are running lots of searches, you may need to increase

srchDiskQuota in authorize.conf. It is the maximum disk space, in MB, a role can use to store search results; increase it if they are retrieving a lot of results.
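(A minimal sketch of such a change, assuming the affected users hold a role named 'power'; the role name, file location, and value are illustrative:)

```ini
# $SPLUNK_HOME/etc/system/local/authorize.conf
# Raise the per-user search disk quota for the 'power' role (value in MB).
[role_power]
srchDiskQuota = 500
```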


season88481
Contributor

Hi naidusadanala,

Thanks for your response. I know how to increase the disk quota for users, but this question is about how to delete a job in a Search Head Cluster environment.

On a single search head, when users delete their jobs, the jobs disappear immediately. But this doesn't work in a SHC.

Cheers,
Season


risgupta
Path Finder

You might need to check the dispatch jobs on your Splunk servers. You can manually remove them to free up the disk space.

season88481
Contributor

Hi risgupta,

Thanks for your response. Normal users should have the ability to delete their own jobs. They have no access to the Splunk server, so they cannot manually remove the dispatch files.

And this manual approach will not scale in a larger environment; a Splunk admin will not have the bandwidth to remove all 'DONE' jobs across a SHC.

Cheers,
Season


risgupta
Path Finder

For that, there is a specific command:
./splunk clean dispatch
which will clean the jobs for you, and you can set up a cron schedule to run this command every 10 to 15 minutes (whatever works best for you).

Along with that, you can set the dispatch ttl in limits.conf.
For more details, see:
https://www.splunk.com/blog/2012/09/12/how-long-does-my-search-live-default-search-ttl.html

Let me know how it goes.
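(A minimal sketch of such a cron entry, assuming Splunk is installed at /opt/splunk; the path and schedule are illustrative, and the command is exactly as given above, so verify its syntax with `splunk help clean` on your version before scheduling it unattended:)

```shell
# Illustrative crontab entry: run the dispatch cleanup every 15 minutes.
# /opt/splunk is an assumed install path; adjust to your $SPLUNK_HOME.
*/15 * * * * /opt/splunk/bin/splunk clean dispatch
```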
