Knowledge Management

Summary-Indexing not writing to summary-index.

Splunker
Communicator

Hi folks,

Running Splunk v4.1.4 on x64 Linux.

I have around 7 summary-indexing saved reports set to run hourly and write their results into index=summary.

They're all configured identically; they're just searching for different logs.

Each job is configured with the basic scheduler to run every hour; the summary index box is selected, and the index selected for the summary is summary (the default).
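For reference, each report boils down to a savedsearches.conf stanza along these lines (the stanza name, search, and sourcetype here are just placeholders, not my real config):

    # savedsearches.conf -- hypothetical sketch of one of the hourly reports
    [Report E - hourly summary]
    search = index=main sourcetype=access_combined | sistats count by host
    enableSched = 1
    cron_schedule = 0 * * * *
    action.summary_index = 1
    action.summary_index._name = summary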

I have about 4 graphs, and these all summarize and write to index=summary just fine. I can search index=summary and see the results.

However, I have another set of graphs configured identically, and these seem to run but refuse to write anything to index=summary.
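A quick way to see which reports are actually landing in the summary index is to count events by the search_name field that summary indexing adds to each event (at least in my setup); adjust the time range to taste:

    index=summary earliest=-24h@h latest=now
    | stats count by search_name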

I can tell they've run because index=_internal tells me that the saved search ran with status=successful and even returned results. The alert_action is also "summary_index", which tells me Splunk knows it should write to index=summary; it just does not for some of the graphs.
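For anyone wanting to check the same thing, I'm looking at the scheduler log with something like this (the report name is a placeholder):

    index=_internal sourcetype=scheduler savedsearch_name="Report E"
    | table _time, savedsearch_name, status, result_count, alert_actions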

Currently, all 7 saved reports are configured to run every hour on the hour. I'm thinking of skewing them, as I suspect I may have hit a concurrency restriction, but I would think that if that were the case I shouldn't be seeing status=successful in index=_internal.

I've looked under the Status dropdown at Scheduling errors, and that's clean.

Has anyone come across this, or does anyone have ideas for further things I can check? My next test is to skew the saved reports so they don't all run at once and see if that helps.

Any thoughts/ideas would be appreciated!

Chris.


Splunker
Communicator

Sorry folks, I forgot to update this question with the eventual answer 🙂

This turned out to be a bug (which I haven't verified has been fixed, but I assume it has) with having certain characters in the title of a saved search.

Searches that had a '/' character in the name did not run, which is why I saw what I saw. Replacing the offending character allowed the saved search to run as normal.

I believe this should be fixed in the most recent versions of Splunk.
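If you want to check whether any of your own saved searches are affected, something like this should list titles containing a '/' (I haven't re-tested this on a current version, so treat it as a rough sketch):

    | rest /servicesNS/-/-/saved/searches
    | search title="*/*"
    | table title, eai:acl.app, eai:acl.owner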


Splunker
Communicator

Thanks for the search - it's an easy-to-read view of what I was seeing.

Here's a summarized version: Reports A-D work OK; Reports E, F, and G do not write to the summary index.

Ran the above search over 7 days of data.

  Report    alert_actions    count    min(result_count)    max(result_count)
  A         summary_index    168      0                    7
  B         summary_index    168      0                    18
  C         summary_index    168      0                    24
  D         summary_index    168      0                    23
  E         summary_index    168      1                    2
  F         summary_index    168      8                    14
  G         summary_index    168      0                    0

Seems E & F have non-zero minimums, and G is min=0, max=0. Very mixed results.

I take your point on the delayed saved searches. I'll adjust the schedule as mentioned.

Thanks,

Chris.


Lowell
Super Champion

So you are saying that the result_count is always greater than 0 in the scheduler log?

Is it always the same saved searches that fail to write anything to the summary index, or is it different ones at different times?

You should be able to see that by running a search like this:

index="_internal" sourcetype="scheduler" [ search index="_internal" sourcetype="scheduler" alert_actions="*summary_index*" | fields savedsearch_name | dedup savedsearch_name | format ] | eval alert_actions=if(alert_actions=="", "None", alert_actions) | stats count, min(result_count), max(result_count) by savedsearch_name, alert_actions


BTW, staggering your saved searches is a good idea in any case. Also keep in mind that moving the scheduled time doesn't have to change the time window you are summarizing; you can set up the timeframe to still cover full-hour intervals using earliest=-1h@h latest=@h. In fact, it's a good idea NOT to run exactly on the hour when summarizing the previous hour, because running on the hour doesn't allow for any indexing delay. I normally try to make sure there's at least a 5-minute gap between the event timestamps and when the summarizing saved search runs. (This varies a lot depending on the data, but 5 minutes seems safe.)
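For example, to run at five past the hour but still summarize the previous whole hour, the schedule and dispatch window would look something like this (the stanza name is a placeholder):

    # savedsearches.conf -- run at :05, summarize the previous full hour
    [Report G - hourly summary]
    cron_schedule = 5 * * * *
    dispatch.earliest_time = -1h@h
    dispatch.latest_time = @h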
