Splunk Search

graphing a logs-per-second value

acidkewpie
Path Finder

I'm using this query to graph how many web requests are being logged per second:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | timechart count(event) by event

But, as with many questions here, the count() is graphed over a time interval derived from the search period. I've tried many permutations of span=1s, bucket commands, etc., but I can't work out how to plot an average per-second value over whatever period of time the graph represents.

In this question http://splunk-base.splunk.com/answers/46978/average-field-value-per-second the "per_second" data is already in the logs, but I want a per-second rate of the count of the logs themselves, so one step further removed.
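
One way to get at this (a sketch only, assuming the same index and field names as the query above) is to aggregate in two stages: bucket the raw events into one-second counts first, then let timechart average those counts at whatever span it picks:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | bin _time span=1s | stats count by _time, event | timechart avg(count) by event

One caveat: seconds with no matching events produce no rows, so the average is taken over active seconds only rather than the full span.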

1 Solution

acidkewpie
Path Finder

Yeah, that's pretty useful. I thought there needed to be another aggregation stage but couldn't work out what it might be.

I've now got this:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | timechart count by event | timechart per_second(HTTP_REQUEST) per_second(HTTP_RESPONSE)

I don't like having field values end up as static field names, but I presume that's pretty much unavoidable? There's no way to graph all the "event" values implicitly? (One possible way around this is sketched below.)

Either way, that's got me what I asked for! thanks!
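
For the implicit split-by, one option (a sketch, not tested against this data) might be to give per_second a constant field to sum and keep the by clause, which avoids naming the event values at all:

index="bigip_ltm" (event=HTTP_REQUEST OR event=HTTP_RESPONSE) client_ip=1.2.3.4 | eval n=1 | timechart per_second(n) by event

Since n is 1 on every event, per_second(n) is just the event count divided by each bucket's length in seconds, and the split-by keeps one series per "event" value without hard-coding HTTP_REQUEST or HTTP_RESPONSE.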
