I am looking for a search that will list the concurrent searches (jobs) that were running on the machine at a given time (ideally for a specific minute). I am trying to analyze the load on the search head in terms of concurrent searches.
You can find out how many simultaneous searches are running from the command line with ps aux | grep splunkd | grep search | wc -l (on Linux). You could implement this as a scripted lookup or custom search command, or it might be easier to run it from a crontab every minute, write the output to a log file, and index that log file.
Maybe there's an even easier splunk-internal way, though.
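The crontab approach above can be sketched as a small script. This is a minimal sketch, not a tested implementation: the log path is hypothetical, and the grep pattern assumes splunkd search processes are matched the same way as in the one-liner above.

```shell
#!/bin/sh
# Sketch of the crontab approach: count splunkd search processes and
# append a timestamped line that Splunk can later index.
# LOGFILE is a hypothetical path; adjust for your environment.
LOGFILE=/var/log/splunk_search_count.log

# The [s] trick keeps this pipeline's own grep out of the count;
# grep -c replaces the grep | wc -l pipe.
COUNT=$(ps aux | grep '[s]plunkd' | grep -c 'search')

echo "$(date '+%Y-%m-%d %H:%M:%S') concurrent_searches=$COUNT" >> "$LOGFILE"
```

Run it from cron every minute (e.g. * * * * * /path/to/script.sh) and monitor the log file as a regular file input.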
You can also run this search against Splunk's internal index to report on concurrent searches:
index=_internal source=*metrics.log group="search_concurrency" | timechart sum(active_hist_searches) as concurrent_searches
and if you want to report by user:
index=_internal source=*metrics.log group="search_concurrency" | timechart sum(active_hist_searches) as concurrent_searches by user
You can also play around with the time spans. For example:
index=_internal source=*metrics.log group="search_concurrency" | timechart span=1m sum(active_hist_searches) as concurrent_searches by user
will give you a minute-by-minute breakdown of running searches by user.
You could probably accomplish your goal using Splunk's internal information. Running jobs are normally tracked by Splunk as follows:
Saved Searches:
index="_internal" sourcetype="scheduler" earliest=02/05/2011:12:20:00 latest=02/05/2011:12:25:00
Manual Searches:
index="_internal" sourcetype="searches" earliest=02/05/2011:12:20:00 latest=02/05/2011:12:35:00
NOTE: For saved searches, there are all sorts of fields to play with, including savedsearch_name, status, app, run_time, and more.
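To illustrate, those fields can be used to summarize scheduler activity. A hedged example (the field names are from the note above; verify them against your own scheduler events):

```
index="_internal" sourcetype="scheduler" | stats count by savedsearch_name, status
```

This gives a per-search breakdown of how often each saved search ran and with what status over the selected time range.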
HTH
ron
Thanks David. Yes, that's a good idea. I noticed two processes per search.
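If each search really does show up as two splunkd processes, the raw process count can be halved to approximate the number of searches. A minimal sketch; the factor of two is an observation from this thread, not guaranteed across Splunk versions:

```shell
#!/bin/sh
# Halve the raw splunkd search-process count, assuming (per this thread)
# two processes per running search.
RAW=$(ps aux | grep '[s]plunkd' | grep -c 'search')
SEARCHES=$((RAW / 2))
echo "approx_concurrent_searches=$SEARCHES"
```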