In my approach I move all of the messing about into the SQL query. I capture the time range picker's min and max time values as variables and convert those numeric Unix time values to datetime values. I handle the '+Infinity' value (what addinfo returns for "all time") with a CASE statement that replaces it with the current date plus 100 years (yeah, this is a bit hacky, I know). I then handle the UTC offset by calculating the difference between GETDATE() and GETUTCDATE() (the MS SQL Server approach) and adjusting both boundary times by that offset. Lastly, I eval _time to equal the timestamp field that is returned.
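If you want to sanity-check that conversion logic outside of SQL Server, here is a rough Python sketch of the same steps. The function name and signature are mine for illustration only; in the real query the offset comes from DATEDIFF(hour, GETUTCDATE(), GETDATE()) on the server:

```python
from datetime import datetime, timedelta

def to_sql_datetime(epoch_str, utc_offset_hours):
    """Mirror the T-SQL logic: convert a Splunk epoch string to a
    datetime, treating '+Infinity' as 'now plus 100 years'."""
    if epoch_str == "+Infinity":
        # Same hack as DATEADD(year, 100, GETDATE())
        base = datetime.now() + timedelta(days=365 * 100)
    else:
        # Same as DATEADD(second, CAST(<epoch> AS decimal), '1970-01-01')
        base = datetime(1970, 1, 1) + timedelta(seconds=float(epoch_str))
    # Same as DATEADD(hour, @UTCOffset, <date>)
    return base + timedelta(hours=utc_offset_hours)
```

For example, to_sql_datetime("86400", 1) gives 1970-01-02 01:00:00, matching what the declared @StartDate/@EndDate variables end up holding.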
I then save this as a saved search, so running it in the future is super simple.
Here is how I accomplished this:
| dbquery "DB DISPLAY NAME" [| stats count | head 1 | addinfo | eval query="
-- Splunk time range picker handling
declare @StartRangeTime varchar(50) = '".info_min_time."'
declare @EndRangeTime varchar(50) = '".info_max_time."'
declare @StartDate datetime = DATEADD(second, cast(@StartRangeTime as decimal),'1970-01-01')
declare @EndDate datetime = (select case
when @EndRangeTime = '+Infinity'
then DATEADD(year, 100, GETDATE())
else DATEADD(second, cast(@EndRangeTime as decimal),'1970-01-01')
end)
declare @UTCOffset smallint = (select datediff(hour, getutcdate(), getdate()))
set @StartDate = dateadd(hour, @UTCOffset, @StartDate)
set @EndDate = dateadd(hour, @UTCOffset, @EndDate)
-- End Splunk time range picker handling
-- SQL query below ----------
select FIELDS
from TABLE
where DATETIMEFIELD between @StartDate and @EndDate
-- SQL query above ----------
" | fields query | format "" "" "" "" "" ""] | eval _time = DATETIMEFIELD
And then invoking the saved search looks like this:
| savedsearch "SAVED SEARCH NAME"