Splunk Search

How can I condense this data into distinct session times? [Easy-ish search question]

nick405060
Motivator

Data:

user      Source_Network_Address      session_start         session_end
bob       10.0.0.1                    01-18-2019 00:53      01-18-2019 00:59
bob       10.0.0.1                    01-18-2019 01:23    
bob       10.0.0.1                                          01-18-2019 01:29
bob       10.0.0.1                    01-18-2019 02:50    
bob       10.0.0.1                                          01-18-2019 02:52
joe       10.0.0.201                  01-18-2019 05:57      01-18-2019 05:57
joe       10.0.0.955                  01-18-2019 05:58    
joe       10.0.0.955                                        01-18-2019 06:17

Desired data:

user      Source_Network_Address      session_start         session_end
bob       10.0.0.1                    01-18-2019 00:53      01-18-2019 00:59
bob       10.0.0.1                    01-18-2019 01:23      01-18-2019 01:29
bob       10.0.0.1                    01-18-2019 02:50      01-18-2019 02:52                        
joe       10.0.0.201                  01-18-2019 05:57      01-18-2019 05:57
joe       10.0.0.955                  01-18-2019 05:58      01-18-2019 06:17

In Python, this would be super easy as shown below. But there HAS to be a super quick way to do it in a Splunk search, right?

data = [["user","Source_Network_Address","session_start","session_end"],
        ["bob","10.0.0.1","01-18-2019 00:53","01-18-2019 00:59"],...]
result = []
new_row = []
for row in data:
    if new_row == [] and row[3] == "":
        # row has a start but no end yet: hold it until the end arrives
        new_row = row
    elif new_row == [] and row[3] != "":
        # row already has both times: keep it as-is
        result.append(row)
    else:
        # this row carries the end time for the held row: merge and emit
        new_row[3] = row[3]
        result.append(new_row)
        new_row = []
1 Solution

martin_mueller
SplunkTrust
SplunkTrust

Using the same approach as your Python code, and assuming the events are sorted correctly, you can use streamstats to carry each row's start time forward to the next row:

... | streamstats current=f window=1 global=f last(session_start) as last_session_start by user Source_Network_Address
| where isnotnull(session_end)
| eval session_start = coalesce(session_start, last_session_start)
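
To make the mechanics concrete, here is a plain-Python sketch of what that pipeline does (the dict keys user/addr/start/end are stand-ins for the Splunk field names; this is an illustration of the logic, not SPL):

```python
# Sample rows in the same order as the question's data; None marks a null field.
rows = [
    {"user": "bob", "addr": "10.0.0.1",   "start": "01-18-2019 00:53", "end": "01-18-2019 00:59"},
    {"user": "bob", "addr": "10.0.0.1",   "start": "01-18-2019 01:23", "end": None},
    {"user": "bob", "addr": "10.0.0.1",   "start": None,               "end": "01-18-2019 01:29"},
    {"user": "bob", "addr": "10.0.0.1",   "start": "01-18-2019 02:50", "end": None},
    {"user": "bob", "addr": "10.0.0.1",   "start": None,               "end": "01-18-2019 02:52"},
    {"user": "joe", "addr": "10.0.0.201", "start": "01-18-2019 05:57", "end": "01-18-2019 05:57"},
    {"user": "joe", "addr": "10.0.0.955", "start": "01-18-2019 05:58", "end": None},
    {"user": "joe", "addr": "10.0.0.955", "start": None,               "end": "01-18-2019 06:17"},
]

# streamstats current=f window=1 last(session_start) by user Source_Network_Address:
# each row sees the immediately preceding row's start within its (user, addr) group.
prev_start = {}
for row in rows:
    key = (row["user"], row["addr"])
    row["last_start"] = prev_start.get(key)  # previous row's start in this group, or None
    prev_start[key] = row["start"]

# where isnotnull(session_end), then eval session_start = coalesce(session_start, last_session_start):
sessions = [
    {"user": r["user"], "addr": r["addr"],
     "start": r["start"] if r["start"] is not None else r["last_start"],
     "end": r["end"]}
    for r in rows if r["end"] is not None
]

for s in sessions:
    print(s["user"], s["addr"], s["start"], s["end"])
```

Running this yields the five merged sessions from the "Desired data" table: rows that only carried an end time inherit the start time of the row before them, and the start-only rows drop out.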


nick405060
Motivator

The only way I can think to do it (not elegant whatsoever) is:

... | search NOT (session_start!="" AND session_end!="") | table session_end | streamstats count as row_num | outputcsv "temp.csv"
| table user Source_Network_Address | streamstats count as row_num
| join type=left row_num [| inputcsv "temp.csv"]

And then somehow join it with (session_start!="" AND session_end!="") in the original data. Any better suggestions?


nick405060
Motivator

I literally just copied and pasted this and it worked perfectly. Thank you so much!
