Splunk Search

How can I condense this data into distinct session times? [Easy-ish search question]

nick405060
Motivator

Data:

user      Source_Network_Address      session_start         session_end
bob       10.0.0.1                    01-18-2019 00:53      01-18-2019 00:59
bob       10.0.0.1                    01-18-2019 01:23    
bob       10.0.0.1                                          01-18-2019 01:29
bob       10.0.0.1                    01-18-2019 02:50    
bob       10.0.0.1                                          01-18-2019 02:52
joe       10.0.0.201                  01-18-2019 05:57      01-18-2019 05:57
joe       10.0.0.955                  01-18-2019 05:58    
joe       10.0.0.955                                        01-18-2019 06:17

Desired data:

user      Source_Network_Address      session_start         session_end
bob       10.0.0.1                    01-18-2019 00:53      01-18-2019 00:59
bob       10.0.0.1                    01-18-2019 01:23      01-18-2019 01:29
bob       10.0.0.1                    01-18-2019 02:50      01-18-2019 02:52                        
joe       10.0.0.201                  01-18-2019 05:57      01-18-2019 05:57
joe       10.0.0.955                  01-18-2019 05:58      01-18-2019 06:17

In Python, this would be super easy as shown below. But there HAS to be a super quick way to do it in a Splunk search, right?

data = [["user","Source_Network_Address","session_start","session_end"],["bob","10.0.0.1","01-18-2019 00:53","01-18-2019 00:59"],...]
result = []
new_row = []
for row in data:
    if new_row == [] and row[3] == "":
        new_row = row              # start-only row: hold it until its end arrives
    elif new_row == [] and row[3] != "":
        result.append(row)         # row is already complete
    else:
        new_row[3] = row[3]        # end-only row: fill in the held row's end time
        result.append(new_row)     # append the merged row, not the end-only fragment
        new_row = []
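For reference, here is a runnable, self-contained version of that merge over a few of the sample rows (empty strings mark the missing fields; the full data would follow the table above):

```python
# Merge each start-only row with the end-only row that follows it.
data = [
    ["bob", "10.0.0.1",   "01-18-2019 00:53", "01-18-2019 00:59"],  # complete
    ["bob", "10.0.0.1",   "01-18-2019 01:23", ""],                  # start only
    ["bob", "10.0.0.1",   "",                 "01-18-2019 01:29"],  # end only
    ["joe", "10.0.0.201", "01-18-2019 05:57", "01-18-2019 05:57"],  # complete
]

result = []
held = []  # a start-only row waiting for its matching end-only row
for row in data:
    if not held and row[3] == "":
        held = row              # hold the start until the end arrives
    elif not held and row[3] != "":
        result.append(row)      # already a complete session
    else:
        held[3] = row[3]        # copy the end time onto the held row
        result.append(held)
        held = []

for row in result:
    print(row)
```

Running this prints the three condensed sessions from the desired table.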

nick405060
Motivator

The only way I can think to do it (not elegant, whatsoever) is:

... | search NOT (session_start!="" AND session_end!="")
| table session_end
| streamstats count as row_num
| outputcsv "temp.csv"
| table user Source_Network_Address
| streamstats count as row_num
| join type=left row_num [| inputcsv "temp.csv"]

And then somehow join it with (session_start!="" AND session_end!="") in the original data. Any better suggestions?


martin_mueller
SplunkTrust

Using the same approach as your Python code, and assuming the data is sorted correctly, you could use streamstats to provide the previous row's start time to the next row:

... | streamstats current=f window=1 global=f last(session_start) as last_session_start by user Source_Network_Address
| where isnotnull(session_end)
| eval session_start = coalesce(session_start, last_session_start)
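To illustrate what the three SPL commands contribute, here is a rough Python sketch of the same logic (the dict field names and None-for-null handling are illustrative assumptions, not how Splunk represents events internally):

```python
# Rough sketch of the streamstats pipeline: per (user, source) group,
# remember the previous row's session_start, drop rows without an end,
# and backfill a missing start from the remembered value.
rows = [
    {"user": "bob", "src": "10.0.0.1", "start": "01-18-2019 01:23", "end": None},
    {"user": "bob", "src": "10.0.0.1", "start": None, "end": "01-18-2019 01:29"},
]

last_start = {}  # like: streamstats current=f window=1 last(session_start) by user src
out = []
for r in rows:
    key = (r["user"], r["src"])
    prev = last_start.get(key)       # previous row's session_start for this group
    last_start[key] = r["start"]
    if r["end"] is None:             # like: where isnotnull(session_end)
        continue
    if r["start"] is None:           # like: eval session_start = coalesce(...)
        r["start"] = prev
    out.append(r)
```

After the loop, `out` holds one complete row per session, which is why the single end-only row above comes out with the earlier row's start time attached.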

nick405060
Motivator

I literally just copied and pasted this and it worked perfectly. Thank you so much!
