Newbie here... I have an index of data that represents calls. Each event has a start_time and a duration. I've been asked to take all of these events and calculate how many concurrent calls there are per second. It was suggested that I use Python and split the calls into separate rows of a DB, but that sounds tedious.
Is there a way to take each event's start time and duration and chunk it up into seconds like this...?
See http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Concurrency
Assuming the start time is in the field _time and the duration (in seconds) is in the field duration, use the concurrency command to get the number of concurrent calls at each event, then bucket _time per second and take the maximum to find the peak concurrency per second:
index=data ...
| concurrency start=_time duration=duration
| bin _time span=1s
| stats max(concurrency) as concurrency by _time
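Since the question mentioned Python: for reference, the same per-second count can be computed outside Splunk with a sweep-line pass over the events. This is just a minimal sketch, assuming start times are integer epoch seconds and durations are whole seconds (the function name and input format are my own, not from Splunk):

```python
from collections import defaultdict

def concurrency_per_second(calls):
    """Count active calls for each second covered by at least one call.

    `calls` is a list of (start_time, duration) pairs, with start_time as an
    integer epoch second and duration in whole seconds. Returns a dict mapping
    each second to the number of calls active during it. A call that ends at
    second T is no longer counted at T, matching the 12:03:02 example above.
    """
    # Difference array: +1 when a call starts, -1 one past its last active second.
    deltas = defaultdict(int)
    for start, duration in calls:
        deltas[start] += 1
        deltas[start + duration] -= 1

    counts = {}
    active = 0
    prev = None
    for t in sorted(deltas):
        # Fill every second between the previous boundary and this one
        # with the concurrency that was in effect over that interval.
        if prev is not None and active > 0:
            for sec in range(prev, t):
                counts[sec] = active
        active += deltas[t]
        prev = t
    return counts
```

For example, `concurrency_per_second([(0, 3), (1, 2), (2, 5)])` peaks at 3 concurrent calls during second 2, then drops back to 1 once the first two calls end.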
We're after the number of active calls at any given time. So if at 12:03:01 there are 5 active calls and at 12:03:02 one ends, it needs to show 4 calls. Will this get us there?
Perfect! Ran through Power BI too (took WAAAY longer) and got the same numbers. Thanks!
Yes, this would show that.
@nacartwright,
You should use the concurrency command. It has an option for the duration in seconds.