Reporting

700+ running jobs created on Jan 1, 1970

sec_team_albara
New Member

Hello,
We have more than 700 jobs with the status "parsing" on the indexer.
We are able to delete these jobs only after stopping the Splunk service on the search head (SH), but they keep coming back once the service is started again.
We need your help.
Thanks in advance


codebuilder
Influencer

Run this and examine the output.

| rest /services/search/jobs isSaved=1

My guess is that what you are seeing are data model/report acceleration jobs, summary indexing searches, or something similar.
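If it helps to narrow things down, you could also filter that output to just the suspect jobs. Treat this as a sketch: the field names used below (author, published, dispatchState, sid, label) are what the jobs endpoint normally exposes, but they can vary by version, so check them against your own output first.

| rest /services/search/jobs
| search author="" OR published="1970-01-01*" OR dispatchState="PARSING"
| table sid author label dispatchState published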

----
An upvote would be appreciated and Accept Solution if it helps!

nickhills
Ultra Champion

Who owns the jobs - are they all the same user?
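A quick way to check could be a variation of the REST search above; the author field is an assumption to verify against your own output:

| rest /services/search/jobs
| stats count by author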

If my comment helps, please give it a thumbs up!

sec_team_albara
New Member

The job owner is set to blank: no owner is specified.


nickhills
Ultra Champion

I should also suggest opening a ticket with Splunk Support so they can take you through removing the jobs manually.
That may be the better option if this is a production instance with important jobs.

If my comment helps, please give it a thumbs up!

sec_team_albara
New Member

I have already tried that: after stopping the Splunk service on the SH, I manually deleted the folders under $SPLUNK_HOME/var/run/splunk/dispatch, but the jobs kept coming back.


nickhills
Ultra Champion

In that case, you have something scheduling them.
Find one of the jobs in the Job Inspector, grab something unique(ish) or rare from the search that is running, then grep your $SPLUNK_HOME/etc folder for user/application searches that contain that term or phrase.
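As a complement to grepping the configuration files, you could also list the scheduled searches over REST and filter on that rare term. A rough sketch: replace YOUR_RARE_TERM with your own string, and verify that the saved-search fields used here (search, is_scheduled, cron_schedule) look the same on your version.

| rest /servicesNS/-/-/saved/searches
| search is_scheduled=1 search="*YOUR_RARE_TERM*"
| table title eai:acl.app eai:acl.owner cron_schedule search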

If my comment helps, please give it a thumbs up!

nickhills
Ultra Champion

That would suggest you have some malformed jobs with invalid start times.

What is probably happening is that the job artifacts are still in the dispatch directory when you restart, so the jobs get resumed.

You could try deleting them manually.
If you understand the risks and the impact of deleting jobs, give this a try, but be careful if your currently running jobs are important to you or to your users.

The basic steps to remove these jobs:
Stop Splunk, delete the jobs, restart Splunk, and watch to see whether they come back.

The jobs you are looking for will be in $SPLUNK_HOME/var/run/splunk/dispatch. Take a look into that folder and see if you can identify just the affected jobs by their names or metadata; compare these with the 700 jobs in the Job Inspector if you can.

If there is commonality in the names or format, then those are your 'bad jobs'.

Stop Splunk on your SH
Selectively delete the job folders for your 700 bad jobs - bearing in mind that their results (which are probably not of much concern) will be lost.
Start Splunk

Check to see if any of them come back.
Take care with the delete!
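To check whether any of them come back after the restart, a quick tally from the SH may be enough; again just a sketch, assuming dispatchState is exposed as in the searches above:

| rest /services/search/jobs
| stats count by dispatchState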

If my comment helps, please give it a thumbs up!