Return values from map command?

silentttone
New Member

So I have a function that takes a certain amount of time to run (logged as timer_value), and I'm trying to create an alert that triggers when that time starts to increase. To do this I'm calculating the slope using the linear trendline macro (http://wiki.splunk.com/Community:Plotting_a_linear_trendline). I'm using the map command because I want to calculate the slope separately for each host running this process. This works fine, and I get a table (using the table command) with the host, the slope, and a boolean that tells me whether the value is good or bad.
However, since I want to create an alert from this, I need to pass the value out of the subsearch and map command and use it in a custom condition for the alert. I'm stuck on how to do this; I've tried return as well as table. I'm assuming the problem is that the map command returns multiple instances of the field named slope?
For now I don't care which host triggered the alert; I just need it to trigger if any of the slope values exceed a certain threshold. I can pass out either the boolean or the slope value; it doesn't matter.

This is my search:

TIMER timer_function="'scene_ingest_ndvi'" | stats count by host | map [search host=$host$ | timechart span=20min avg(timer_value) as avgyvalue | where isnotnull (avgyvalue) | `lineartrend(_time,avgyvalue)` | stats first(slope) as slope | eval host=$host$ | eval err= if(slope>0.005 OR slope<-0.005,"Bad","Good") | table host err slope ] maxsearches=100

If anyone has any ideas on how to do this, they would be more than welcome. I'm also open to doing it another way, if there's something easier than map that will achieve the desired result.

Thanks in advance!!

Solution

martin_mueller
SplunkTrust

I have converted your search to this run-anywhere search:

index=_internal bytes=* | stats count by sourcetype | map [search index=_internal sourcetype=$sourcetype$ | timechart span=20min avg(bytes) as avgyvalue | where isnotnull (avgyvalue) | `linearregression(_time,avgyvalue)` | stats first(slope) as slope | eval sourcetype="$sourcetype$" | eval err= if(slope>0.005 OR slope<-0.005,"Bad","Good") | table sourcetype err slope ] maxsearches=100

That gives me this table:

   sourcetype         err   slope
1  splunk_web_access  Bad    -0.26764105
2  splunkd            Good             0
3  splunkd_access     Bad   -0.007066474

Note that I've added double quotes around the second $sourcetype$, because you want the literal string there rather than having eval treat it as a field reference.
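
To see the difference in isolation (a quick sketch; makeresults just generates a single dummy event), the unquoted form is evaluated as a reference to a field named splunkd and comes back null, while the quoted form assigns the literal string:

| makeresults | eval unquoted=splunkd, quoted="splunkd" | table unquoted quoted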

To create an alert on this you could define a custom condition where err=="Bad".
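
Equivalently (just a sketch of one alternative, assuming the same linearregression macro is defined), you could append the filter to the search itself and use the built-in "number of results is greater than 0" trigger:

index=_internal bytes=* | stats count by sourcetype
| map [search index=_internal sourcetype=$sourcetype$
    | timechart span=20min avg(bytes) as avgyvalue
    | where isnotnull(avgyvalue)
    | `linearregression(_time,avgyvalue)`
    | stats first(slope) as slope
    | eval sourcetype="$sourcetype$"
    | eval err=if(slope>0.005 OR slope<-0.005,"Bad","Good")
    | table sourcetype err slope ] maxsearches=100
| where err=="Bad"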

silentttone
New Member

Okay, interesting. Thanks!

martin_mueller
SplunkTrust
SplunkTrust

The return command turns the results into a string stored in the search field, which is meant for filtering an outer search based on a subsearch: http://docs.splunk.com/Documentation/Splunk/6.1.1/SearchReference/return

That's bound to break access to the slope field.
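
As a quick run-anywhere illustration (reusing the linearregression macro from above), ending the pipeline with return produces a single result whose search field holds a string such as slope="-0.007066474" instead of a numeric slope field, which is why an alert condition can no longer compare against slope:

index=_internal sourcetype=splunkd_access
| timechart span=20min avg(bytes) as avgyvalue
| where isnotnull(avgyvalue)
| `linearregression(_time,avgyvalue)`
| stats first(slope) as slope
| return slope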

silentttone
New Member

Huh. I tried where slope>0.005 yesterday (I think using a return) and the Alert menu wasn't able to access the value for slope. I don't know what was going on there.
Anyway, where err=="Bad" works. Thanks for your help 🙂
