Simple question: If I pass it a byte count, how does it calculate this value without knowing how long the event took?
per_second is only valid for the timechart command. timechart always operates on a fixed timespan - per_second in your scenario is calculated by taking the sum of the byte count for that timespan and dividing it by the number of seconds in the timespan.
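For illustration, a minimal sketch of that calculation (the sourcetype name iis and the byte-count field sc_bytes are stand-ins here, not something from your question):

    sourcetype=iis
    | timechart span=1m per_second(sc_bytes)

With span=1m, each bin's value is the sum of sc_bytes for the events that fall in that minute, divided by 60 - it never looks at how long any individual transfer took.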
I agree--I was misinterpreting what per_second meant. The IIS logs do have that information (a field called time_taken that is represented in milliseconds). So my challenge is how do I have one event add to the sum of multiple bins? For example, say I have 1-minute bins and the time taken is 5 minutes: I would need to add one minute's share of the bytes to each of the 5 bins. I've done a lot of reading of the documentation and I can't determine how to split one event into many.
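The closest I have come up with is something like this (just a sketch: time_taken is the IIS field above, while sc_bytes stands in for whatever the byte-count field is called):

    sourcetype=iis
    | eval dur_s = max(time_taken / 1000, 1)
    | eval nbins = ceil(dur_s / 60)
    | eval offset = mvrange(0, nbins)
    | mvexpand offset
    | eval _time = _time + offset * 60
    | eval bytes_share = sc_bytes / nbins
    | timechart span=1m per_second(bytes_share)

Here dur_s converts milliseconds to seconds, nbins counts the one-minute bins the event spans, mvrange and mvexpand create one row per bin, and the byte count is divided evenly among them. It assumes _time marks the start of the transfer and that the bytes flowed evenly over time_taken, so it is only an approximation.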
I don't see how per_second could operate any other way than it currently does.
If you want to determine some kind of maximum bandwidth seen during an interval, then you would need the information in the event that you mention, i.e. not only how many bytes but also how many seconds. Without that information, it is as impossible for Splunk as it is for you to calculate that.
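Given that your events do carry the duration, a per-event rate can be computed directly - a sketch, again assuming time_taken in milliseconds and a hypothetical sc_bytes byte-count field:

    sourcetype=iis
    | eval rate_Bps = sc_bytes / max(time_taken / 1000, 1)
    | timechart span=1m max(rate_Bps)

This charts the highest per-request rate observed in each minute, instead of averaging the bytes over the whole bin the way per_second does.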
Okay, I get it now, but unfortunately this means it isn't going to help determine the bandwidth as I was hoping it would. per_second assumes that all the data was transferred evenly across the timespan, which likely isn't the case. For example, all the data could have been delivered in the first second of the span, resulting in a spike closer to the sum than to the average bytes per second.