Splunk Search

Correlate logins outside of business hours with incidents and change requests

jorjiana88
Path Finder

Hi,
I am trying to build a table that shows logins outside of business hours and, next to each login, whether the user had a reason to log in (a change request or an incident).

Search 1 returns login and failed-login events from Windows.

(sourcetype=*:Security ((EventCode=4624 user32 ) OR EventCode=4625)  host="*") 
 | search $t_user$
 | eval hour = tonumber(strftime(_time,"%H"))
 | eval wknd = tonumber(strftime(_time,"%w"))
 | where (hour<=8 OR hour>=19) OR (wknd==0 OR wknd==6)
 | table user, _time, src_nt_host, signature 
 | rename src_nt_host as "Server", signature as "Event details"

Result
user, _time, Server, Event details
john smith,2017-10-12 15:29:44,WIN-BFQE1D,An account was successfully logged on

Search 2 returns change requests from a CSV.

sourcetype="changes" 
| search $t_user$
| table  changeid, startdate, enddate, changecoordinator

Result
changeid, startdate, enddate, changecoordinator
CR311,2017-10-12 09:30:00,2017-10-12 19:30:00,John Smith
CR312,2017-10-12 22:30:00,2017-10-12 23:00:00,John Smith

The two searches have a common user field (user in search 1, changecoordinator in search 2), and there can be multiple changes for one user. How can I combine search 2 with search 1 and see the changeids on the same row?

Final Result:
user, _time, Server, Event details, changeid
john smith,2017-10-12 15:29:44,WIN-BFQE1D,An account was successfully logged on,CR311 and CR312

I see there are many commands in Splunk for correlating data, but I don't know which one would be best for my case.

Thanks!


DalJeanis
Legend

Like you said, there are LOTS of ways that would be just fine to do this. Here's one...

 (sourcetype=*:Security ((EventCode=4624 user32 ) OR EventCode=4625)  host="*" user=$t_user$ ) 
 OR (sourcetype="changes" changecoordinator=$t_user$)

| rename COMMENT as "Eliminate all fields we don't need"
| fields _time, user, src_nt_host, signature, changeid, startdate, enddate, changecoordinator

| rename COMMENT as "Kill logon records that are not off-hours"
| eval hour = tonumber(strftime(_time,"%H"))
| eval wknd = tonumber(strftime(_time,"%w"))
| where (sourcetype="changes") OR (hour<=8 OR hour>=19) OR (wknd==0 OR wknd==6)

Now you have all the data in one result set. If I've misunderstood what the token holds, you can take the user= and the changecoordinator= out of the two halves of the search and apply the token on its own, the way your original searches did.
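For instance, a minimal sketch of that variant, assuming $t_user$ expands to a bare value that | search can match anywhere in the event:

 (sourcetype=*:Security ((EventCode=4624 user32 ) OR EventCode=4625)  host="*") 
 OR (sourcetype="changes")
| search $t_user$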

It seems like you are looking for the last off-hours logon for each change request. We can sort the records into _time order, and use streamstats to copy down the last logon within an arbitrary window, for instance 8 hours. But first, we need to establish a single field for the user for both kinds of records. In this case, we'll use user.

| rename COMMENT as "sort records and propagate logon time"
| eval user=coalesce(user,changecoordinator)
| sort 0 user _time
| streamstats time_window=8h last(eval(case(sourcetype!="changes",_time))) as lastLogon 
    last(eval(case(sourcetype!="changes",signature))) as lastSignature by user

| rename COMMENT as "roll together all change records"
| stats count values(changeid) as changeid by user lastLogon lastSignature
| rename lastLogon as _time
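One thing to watch: your desired final table also includes the Server column, and stats drops any field it isn't told to keep, so src_nt_host would have to be carried along the same way. A sketch extending the streamstats/stats pair above (lastServer is just an illustrative name):

| streamstats time_window=8h last(eval(case(sourcetype!="changes",_time))) as lastLogon 
    last(eval(case(sourcetype!="changes",signature))) as lastSignature 
    last(eval(case(sourcetype!="changes",src_nt_host))) as lastServer by user
| stats count values(changeid) as changeid by user lastLogon lastSignature lastServer
| rename lastLogon as _time, lastServer as Server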

It might be better practice to do the elimination of on-hours records AFTER rolling the records together. That would make the time_window unnecessary, since your changecoordinator would ALWAYS have to have logged on sometime, and we can just kill the on-hours ones all at the same time.

 (sourcetype=*:Security ((EventCode=4624 user32 ) OR EventCode=4625)  host="*" user=$t_user$ ) 
 OR (sourcetype="changes" changecoordinator=$t_user$)
| rename COMMENT as "Eliminate all fields we don't need"
| fields _time, user, src_nt_host, signature, changeid, startdate, enddate, changecoordinator
| rename COMMENT as "sort records and propagate logon time"
| eval user=coalesce(user,changecoordinator)
| sort 0 user _time
| streamstats last(eval(case(sourcetype!="changes",_time))) as lastLogon 
    last(eval(case(sourcetype!="changes",signature))) as lastSignature by user
| rename COMMENT as "roll together all change records"
| stats count values(changeid) as changeid by user lastLogon lastSignature
| rename lastLogon as _time
| rename COMMENT as "Kill logon records that are not off-hours"
| eval hour = tonumber(strftime(_time,"%H"))
| eval wknd = tonumber(strftime(_time,"%w"))
| where (hour<=8 OR hour>=19) OR (wknd==0 OR wknd==6)
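Either way, values(changeid) leaves changeid as a multivalue field. If you want the single-string rendering from your desired output ("CR311 and CR312"), a final mvjoin does it; a sketch:

| rename COMMENT as "render the changeids as one string"
| eval changeid=mvjoin(changeid, " and ")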
