Splunk Search

Accelerated search with a specific weekday

sansay
Contributor

I have an accelerated search which is set for a 3 months time range.
The acceleration works, I can get a whole day's logs in the past in an average of 10 seconds, where it would take forever otherwise.
I need to see the data for all occurrences of the same day of the week. But, since you can't specify a time range before an accelerated search query, you can't use "date_wday=Thursday".
And doing this:
| savedsearch "my_saved_search_name" | search date_wday=Thursday
won't help, since it forces the acceleration to fetch all the records for the whole week and then filter them afterward. This again results in an extremely lengthy search. My experiments show that the time the acceleration takes grows much faster than linearly with the time range you are looking at. Here is a little table to give you an idea of what I mean:
Days    Search time (s)
1       4
2       13
3       31
4       65
5       104
6       207
7       216
8       246
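Reading the search times as seconds, a quick back-of-the-envelope check (plain Python, not Splunk) shows the per-day cost climbing instead of staying flat, i.e. the scaling is clearly superlinear:

```python
# Search times (in seconds) per look-back window, taken from the table above.
times = {1: 4, 2: 13, 3: 31, 4: 65, 5: 104, 6: 207, 7: 216, 8: 246}

# If scaling were linear, seconds-per-day would stay constant; instead it climbs.
per_day = {days: seconds / days for days, seconds in times.items()}
print(per_day)
```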

So, as I need to look at all the Thursdays for the last 6 weeks, I end up with a search that takes more than an hour to complete.

Any suggestion on how to get this working would be very much appreciated.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Here is my answer regarding martin_mueller & lguinn's requests for the exact searches:

Actually, I disagree. The principle should apply to any search; it should not depend on my specific one. I have a search that is accelerated. I want to get the accelerated data only for a specific day of the week, say Thursday (meaning all the Thursdays), in the past 6 weeks. And as I said earlier, the way I understand accelerated searches, you can't do this without scanning the whole 6 weeks' worth of data. Unfortunately, that nullifies the value of accelerated reports.

But just to make you happier:

Accelerated search (3 months) - Name: acc_metric_ps4_create_account_all_history
Query:
index=apache uri="*/user/accounts.json" method=POST | bin _time span=1m | rex field=_raw "(?<status>\d+) \d+ \"ajp_resource\"" | stats count(eval(status=201)) as "Succ", count(eval(NOT status=201)) as "Fail", count as Total, avg(eval(response_time/1000000)) as Latency by _time

Six weeks of expected data for the next day of the week:
| savedsearch acc_metric_ps4_create_account_all_history
[search earliest=-1s | head 1 | eval date_wday=strftime(relative_time(now(), "+1d@d"), "%A") | fields date_wday | format]
| eval lat=round(Latency,2) | eval tot=round(Total) | eval succ=round(100-(Fail/Total*100),1) | eval _time=strptime(strftime(relative_time(now(), "+1d@d"), "%m/%d/%Y").strftime(_time,":%H:%M:%S"), "%m/%d/%Y:%H:%M:%S") | bucket _time span=1h
| stats max(lat) as LATENCY_MAX_100, perc99(lat) as LATENCY_MAX_99, perc98(lat) as LATENCY_MAX_98, perc97(lat) as LATENCY_MAX_97, perc95(lat) as LATENCY_MAX_95, perc90(lat) as LATENCY_MAX_90, perc80(lat) as LATENCY_MAX_80, perc70(lat) as LATENCY_MAX_70, perc30(lat) as LATENCY_MIN_30, perc20(lat) as LATENCY_MIN_20, perc10(lat) as LATENCY_MIN_10, perc5(lat) as LATENCY_MIN_5, perc3(lat) as LATENCY_MIN_3, perc2(lat) as LATENCY_MIN_2, perc1(lat) as LATENCY_MIN_1, min(lat) as LATENCY_MIN_0, stdevp(lat) as LATENCY_STD_DEV,
  max(tot) as TOTAL_MAX_100, perc99(tot) as TOTAL_MAX_99, perc98(tot) as TOTAL_MAX_98, perc97(tot) as TOTAL_MAX_97, perc95(tot) as TOTAL_MAX_95, perc90(tot) as TOTAL_MAX_90, perc80(tot) as TOTAL_MAX_80, perc70(tot) as TOTAL_MAX_70, perc30(tot) as TOTAL_MIN_30, perc20(tot) as TOTAL_MIN_20, perc10(tot) as TOTAL_MIN_10, perc5(tot) as TOTAL_MIN_5, perc3(tot) as TOTAL_MIN_3, perc2(tot) as TOTAL_MIN_2, perc1(tot) as TOTAL_MIN_1, min(tot) as TOTAL_MIN_0, stdevp(tot) as TOTAL_STD_DEV,
  max(succ) as SUCCESS_MAX_100, perc99(succ) as SUCCESS_MAX_99, perc98(succ) as SUCCESS_MAX_98, perc97(succ) as SUCCESS_MAX_97, perc95(succ) as SUCCESS_MAX_95, perc90(succ) as SUCCESS_MAX_90, perc80(succ) as SUCCESS_MAX_80, perc70(succ) as SUCCESS_MAX_70, perc30(succ) as SUCCESS_MIN_30, perc20(succ) as SUCCESS_MIN_20, perc10(succ) as SUCCESS_MIN_10, perc5(succ) as SUCCESS_MIN_5, perc3(succ) as SUCCESS_MIN_3, perc2(succ) as SUCCESS_MIN_2, perc1(succ) as SUCCESS_MIN_1, min(succ) as SUCCESS_MIN_0, stdevp(succ) as SUCCESS_STD_DEV
  by _time
| collect marker="bw_metric_ps4_create_account_all_expected"

This "expected" search takes almost 2 hours to complete. However, I have devised a new technique that doesn't use the accelerated reports and yet gets me the same results in 20 to 30 minutes.
But I would still like to know if there is something I am missing here. Thank you very much for your interest and suggestions.

1 Solution

sansay
Contributor

After numerous experiments I found a technique that solves this problem.
1. Create your accelerated search.
2. Set the acceleration time range to cover the amount of time that you know you will look back.
3. Create another saved search whose query is exactly the same as the accelerated search's, except that you can add subsequent data processing. So add something like | collect marker="events_history_d1" to write the data to the summary index.
4. For this secondary search, set the time range to something relative, such as: Start time: -7d@d End time: -6d@d
5. Do NOT check "Accelerate this search".
6. Schedule the search so that it runs close to midnight.
7. Save.
8. Create other saved searches for days further back (-13d@d, -20d@d, and so on) for whatever number of past weeks you are interested in. All the searches can be scheduled for the exact same time.
9. Then, in your dashboard, use a search that pulls the data written to the summary index.

The trick here is that the secondary saved searches, which are set with relative time range, will use the acceleration correctly and return the data for the selected past days very quickly, taking only 30 to 60 seconds.
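For example, the secondary saved search from steps 3 and 4 above would be the accelerated base query with the collect appended (a sketch using the marker name from step 3, not a tested search):

```
index=apache uri="*/user/accounts.json" method=POST
| bin _time span=1m
| rex field=_raw "(?<status>\d+) \d+ \"ajp_resource\""
| stats count(eval(status=201)) as "Succ", count(eval(NOT status=201)) as "Fail", count as Total, avg(eval(response_time/1000000)) as Latency by _time
| collect marker="events_history_d1"
```

The dashboard search in step 9 would then read it back from the summary index, e.g. something along the lines of index=summary events_history_d1.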


sansay
Contributor

However, as indicated in my own answer below, I found a way around this issue.
Also, contrary to what one might assume, the 1-minute resolution doesn't get lost by putting the data into 1-hour bins. What happens is that each record gets its _time minute set to 0, and the percentile still processes all the 1-minute records, computing the percentile value over all 60 records for each hour.
But you are right that I could have removed the Fail part; I simply forgot to clean it out. Thanks for bringing that to my attention.
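That binning behavior can be sketched outside of Splunk. Here's a minimal Python analogue (synthetic data, standard library only, values made up for illustration) of bucketing minute-level records into hour bins while the percentile still runs over all 60 per-minute values in each bin:

```python
import statistics
from collections import defaultdict

# Synthetic per-minute latency records covering two hours (values are made up).
records = [(hour, minute, float(hour * 60 + minute))
           for hour in (0, 1) for minute in range(60)]

# Like "bucket _time span=1h": the minute is zeroed out, but every record survives.
hour_bins = defaultdict(list)
for hour, minute, latency in records:
    hour_bins[hour].append(latency)

# Percentiles per hour are computed over all 60 minute-level values in the bin.
for hour in sorted(hour_bins):
    values = hour_bins[hour]
    p95 = statistics.quantiles(values, n=100)[94]  # roughly Splunk's perc95
    print(f"hour={hour} n={len(values)} min={min(values)} p95={p95:.1f} max={max(values)}")
```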


sansay
Contributor

The original accelerated query, when executed for the last 24 hours, returns all the data in 6 seconds. But it won't work that way if you try to look at the data from a day 6 weeks ago, because you can't request that without losing Splunk's recognition that it's dealing with an accelerated search. Furthermore, as I indicated earlier, all my experiments show that accelerated search performance is not linear, so pulling all the data from 6 weeks ago takes more than 1.5 hours. The real problem is the inability to specify a time range in the query that retrieves the result.


martin_mueller
SplunkTrust

Having now seen the accelerated search, I'd say it's slow because you're keeping three months' worth of to-the-minute data - that's a lot of rows to process, considering the final search throws the resolution away and only goes by hours.

A minor thought: move the eval(response_time/1000000) to after the stats. That way you'll divide each minute once rather than each event once. That won't drastically change your speeds, but still... Similarly, you could forego the count(eval(NOT status=201)) as "Fail" and compute it as Total-Succ after the stats.
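Applying both suggestions, the base search would become roughly the following (an untested sketch, keeping the same field names as the original query):

```
index=apache uri="*/user/accounts.json" method=POST
| bin _time span=1m
| rex field=_raw "(?<status>\d+) \d+ \"ajp_resource\""
| stats count(eval(status=201)) as Succ, count as Total, avg(response_time) as Latency by _time
| eval Fail=Total-Succ, Latency=Latency/1000000
```

This divides the already-averaged latency once per minute bucket and derives Fail arithmetically instead of running a second counting clause per event.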



lguinn2
Legend

Yes, if we could see the full searches, we could help optimize them; without that, it is just guessing.


martin_mueller
SplunkTrust

Do post your full searches, both the accelerated base and the final six-weeks-of-Thursdays-search.
