Splunk Search

Easy way to approach this? Calculating Busy hour across several time zones

msarro
Builder

Hey everyone. We are trying to figure out call distributions for our network by time zone. The call records we take in include the time zone where the user is located. All time zones are in the format 0-000000, where the first digit indicates whether daylight saving time is in effect (0 is no, 1 is yes) and the first two digits of the second group indicate the number of hours subtracted from UTC (so 0-050000 is Eastern Standard Time and 1-040000 is Eastern Daylight Time).
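
For reference, the daylight-saving flag and the hour offset could also be pulled out of that format generically with substr() and tonumber() rather than enumerating every value; the field names in this sketch are only illustrative.

... | eval is_dst=substr(AS_User_Time_Zone, 1, 1)
| eval utc_offset_hours=tonumber(substr(AS_User_Time_Zone, 3, 2))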

I have been starting by converting the time zones to easier-to-read names with a simple case() statement. Then, to get the list of peak hours, I run the timechart command to get the call counts per hour by time zone, but at that point I am not sure how to go forward. If I only wanted a single time zone it would be easy: sort by the call count for that time zone and take the head row. However, I'm not sure how to do this when there are multiple columns.
Here is the code I am currently using:

index=as host=wdv-as03-0* AS_User_Time_Zone=* NOT AS_User_Time_Zone="0+000000"
|eval time_zone=case(AS_User_Time_Zone="1-040000" OR AS_User_Time_Zone="0-050000","Eastern",AS_User_Time_Zone="1-050000" OR AS_User_Time_Zone="0-060000","Central",AS_User_Time_Zone="1-060000" OR AS_User_Time_Zone="0-070000","Mountain",AS_User_Time_Zone="1-070000" OR AS_User_Time_Zone="0-080000","Pacific")
|timechart span=1h count by time_zone

Which gets me these results: 24 rows (one per hour), each with a count per time zone:

_time,Central,Eastern,Mountain,Pacific
4/15/13 12:00:00.000 AM,1608,4632,379,1514
...

And here is where I get stuck. What I want to end up with is a table similar to this (the hour doesn't have to be formatted exactly like that; I can adjust it):

Time Zone,Busy Hour,Record Count 
Eastern,1PM,1234
Central,2PM,3535
Mountain,3PM,2342
Pacific,4PM,6464

Is anyone able to help me make that jump? I'd appreciate it. The only way I can think of is to split this into four separate searches and then append them together, but that would be messy and time-consuming.
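
One possible way to make that jump straight from the timechart output is sketched below: flatten the columns back into rows with untable and keep the highest-count row per time zone. The busy_hour field name is just a placeholder, and over a multi-day window this picks the single busiest hour overall rather than the busiest hour of the day.

... | timechart span=1h count by time_zone
| untable _time time_zone count
| eval busy_hour=strftime(_time,"%I%p")
| sort 0 -count
| dedup time_zone
| table time_zone busy_hour count
| rename time_zone AS "Time Zone", busy_hour AS "Busy Hour", count AS "Record Count"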

1 Solution

kristian_kolb
Ultra Champion

You do not really mention how you get these events. Do they originate in their respective TZ? One per call?

If so, you could use the built-in date_hour field and do a ... | top 1 date_hour by time_zone | ... (assuming that you can extract the time zone information from each event, as in your sample data).
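
A minimal sketch of what that could look like, reusing the case() from your search (top adds count and percent columns, and percent can be dropped):

index=as host=wdv-as03-0* AS_User_Time_Zone=* NOT AS_User_Time_Zone="0+000000"
| eval time_zone=case(AS_User_Time_Zone="1-040000" OR AS_User_Time_Zone="0-050000","Eastern",AS_User_Time_Zone="1-050000" OR AS_User_Time_Zone="0-060000","Central",AS_User_Time_Zone="1-060000" OR AS_User_Time_Zone="0-070000","Mountain",AS_User_Time_Zone="1-070000" OR AS_User_Time_Zone="0-080000","Pacific")
| top 1 date_hour by time_zone
| fields - percent

Note that date_hour is taken from the timestamp as it appears in the raw event, so this assumes the events are logged in (or already adjusted to) the caller's local time.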


msarro
Builder

This looks like it is working! I just have to add a ":00" after the date_hour value and rename the fields, and I should be GTG. Thank you!
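
Those finishing touches might look roughly like this (the busy_hour name is just one choice):

... | top 1 date_hour by time_zone
| eval busy_hour=date_hour.":00"
| table time_zone busy_hour count
| rename time_zone AS "Time Zone", busy_hour AS "Busy Hour", count AS "Record Count"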


msarro
Builder

Can you provide a bit more information? Sadly the explanation isn't that clear, even with the three examples provided. I'm not sure how it would give me the hour with the highest count for each time zone.


theouhuios
Motivator

I think the contingency command will solve your issue.

http://docs.splunk.com/Documentation/Splunk/5.0.2/SearchReference/Contingency
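
As a rough sketch, appended to the base search and time_zone eval from the question, that suggestion would look something like:

... | contingency date_hour time_zone

which builds a matrix of call counts with one row per hour and one column per time zone, much like the timechart output above, so the busy hour per time zone would still need to be picked out from it.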
