Security

How can I implement standard deviations for outliers in my search using Windows 4624 events to alert on crawling accounts?

nathig
Explorer

So, I have spent a ton of time looking for an easy answer to this. Either I am completely wrong in how I am looking at the problem, or it's not something that Splunk can do. I want an alert that fires whenever an account is crawling across multiple servers in a short period of time at a rate more than 1.5 standard deviations above normal. I have been looking at malwarearchaeology.com, and so far this is what I have come up with. I do not know how to implement standard deviations for the outliers in the search, though. Any help would be great. This is currently very noisy and slow to search. How would you handle it?

index=wineventlog LogName=Security EventCode=4624 NOT (host="DC") NOT (Account_Name="$" OR Account_Name="ANONYMOUS LOGON") NOT (Account_Name="Service_Account")
| eval Account_Domain=(mvindex(Account_Domain,1))
| eval Account_Name=if(Account_Name="-",(mvindex(Account_Name,1)), Account_Name)
| eval Account_Name=if(Account_Name="$",(mvindex(Account_Name,1)), Account_Name)
| eval Time=strftime(_time,"%Y/%m/%d %T")
| replace 2 with "Local Logon of Server" in Logon_Type
| replace 8 with "New Clear Text IIS Logon" in Logon_Type
| replace 9 with "RUN AS COMMAND" in Logon_Type
| replace 4 with "Batch Job" in Logon_Type
| replace 5 with "Scheduled Service" in Logon_Type
| replace 3 with "Net Use" in Logon_Type
| stats count values(Account_Domain) AS Domain, values(host) AS Host, dc(host) AS Host_Count, values(Logon_Type) AS Logon_Type, values(Workstation_Name) AS WS_Name, values(Source_Network_Address) AS Source_IP, values(Process_Name) AS Process_Name by Account_Name
| where Host_Count > 2

cmerriman
Super Champion

You could try something like this, maybe. I rewrote some of the top portion for efficiency. I added a bin command to bucket time into one-hour increments (that can be changed to however close together the events need to be). The streamstats command will look at the last 24 events (24 hours in this case) for each Account_Name to get the average and standard deviation, which are then used to find the outliers with a 1.5 multiplier.

index=wineventlog LogName=Security EventCode=4624 host!="DC" Account_Name!="$" Account_Name!="ANONYMOUS LOGON" Account_Name!="Service_Account" 
| eval Account_Domain=(mvindex(Account_Domain,1)) 
| eval Account_Name=if(Account_Name="-",(mvindex(Account_Name,1)), Account_Name) 
| eval Account_Name=if(Account_Name="$",(mvindex(Account_Name,1)), Account_Name) 
| eval Time=strftime(_time,"%Y/%m/%d %T") 
| eval Logon_Type=case(Logon_Type=2,"Local Logon of Server",Logon_Type=8,"New Clear Text IIS Logon",Logon_Type=9,"RUN AS COMMAND",Logon_Type=4,"Batch Job",Logon_Type=5,"Scheduled Service",Logon_Type=3,"Net Use",1=1,Logon_Type) 
| bin _time span=1h 
| stats count values(Account_Domain) AS Domain, values(host) AS Host, dc(host) AS Host_Count, values(Logon_Type) AS Logon_Type, values(Workstation_Name) AS WS_Name, values(Source_Network_Address) AS Source_IP, values(Process_Name) AS Process_Name by Account_Name _time 
| where Host_Count > 2
| streamstats window=24 avg(count) as avg stdev(count) as stdev by Account_Name
| eval lower_bound=avg-(stdev*1.5) 
| eval upper_bound=avg+(stdev*1.5) 
| eval isOutlier=if(count>upper_bound OR count<lower_bound,1,0) 

nathig
Explorer

A couple of things to add. How frequently would you run this search? I have been thinking about running it every 15 minutes over the last hour, or maybe the last 2 hours. Should I reduce this to just interactive logon types such as 2, 10, or 11, so that it pulls fewer events to compute? Also, what is the best way to return the outliers? You have the evals, but shouldn't there be a stats or something like that at the end, after all the evals are computed? I feel that Logon_Type 3 is very noisy, but it is a network logon, so I am not sure how to report on this event.

index=wineventlog LogName=Security EventCode=4624 host!="DC" Account_Name!="$" Account_Name!="ANONYMOUS LOGON" Account_Name!="Service_Account"
| eval Account_Domain=(mvindex(Account_Domain,1))
| eval Account_Name=if(Account_Name="-",(mvindex(Account_Name,1)), Account_Name)
| eval Account_Name=if(Account_Name="$",(mvindex(Account_Name,1)), Account_Name)
| eval Time=strftime(_time,"%Y/%m/%d %T")
| eval Logon_Type=case(Logon_Type=2,"Local Logon of Server",Logon_Type=8,"New Clear Text IIS Logon",Logon_Type=9,"RUN AS COMMAND",Logon_Type=4,"Batch Job",Logon_Type=5,"Scheduled Service",Logon_Type=3,"Net Use",1=1,Logon_Type)
| bin _time span=1h
| stats count values(Account_Domain) AS Domain, values(host) AS Host, dc(host) AS Host_Count, values(Logon_Type) AS Logon_Type, values(Workstation_Name) AS WS_Name, values(Source_Network_Address) AS Source_IP, values(Process_Name) AS Process_Name by Account_Name _time
| where Host_Count > 2
| streamstats window=2 avg(count) as avg stdev(count) as stdev by Account_Name
| eval lower_bound=avg-(stdev*1.5)
| eval upper_bound=avg+(stdev*1.5)
| eval isOutlier=if(count>upper_bound OR count<lower_bound,1,0)

cmerriman
Super Champion

Those questions really just depend on what you're trying to answer. If you only need interactive logon types, then I would look at just those and add them to the base search like this:

index=wineventlog LogName=Security EventCode=4624 host!="DC" Account_Name!="$" Account_Name!="ANONYMOUS LOGON" Account_Name!="Service_Account" (Logon_Type=2 OR Logon_Type=10 OR Logon_Type=11)

to group every 15 minutes, and find the outliers for standard deviations every hour, try this:

...
| bin _time span=15m
| stats count values(Account_Domain) AS Domain, values(host) AS Host, dc(host) AS Host_Count, values(Logon_Type) AS Logon_Type, values(Workstation_Name) AS WS_Name, values(Source_Network_Address) AS Source_IP, values(Process_Name) AS Process_Name by Account_Name _time
| where Host_Count > 2
| streamstats window=4 avg(count) as avg stdev(count) as stdev by Account_Name
...

To find just the outliers, add | search isOutlier=1 to the end of the search, and only the outlier Account_Name and time buckets will be returned.
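For completeness, a minimal sketch of how the tail of the search might look once that filter is appended (the final table command and its field list are just one possible way to present the results, not something from the original posts):

...
| streamstats window=4 avg(count) as avg stdev(count) as stdev by Account_Name
| eval lower_bound=avg-(stdev*1.5)
| eval upper_bound=avg+(stdev*1.5)
| eval isOutlier=if(count>upper_bound OR count<lower_bound,1,0)
| search isOutlier=1
| table _time Account_Name Domain Host Host_Count Logon_Type Source_IP count avg stdev

Saved as an alert that runs every 15 minutes over the last hour or two, the trigger condition can then simply be that the search returns more than zero results.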

nathig
Explorer

I realize that this seems to be comparing all accounts to all accounts, which is always going to produce outliers. Logically, how can I change this to find outliers by comparing an account to itself, based on that account's past activity? I also think I am going to split this into two different types of alerts: one for interactive logons, and the other for batch and network logons such as type 4 and type 3.

index=wineventlog LogName=Security EventCode=4624 host!="DC" Account_Name!="$" Account_Name!="ANONYMOUS LOGON" Account_Name!="Service_Account" (Logon_Type=2 OR Logon_Type=10 OR Logon_Type=11)
| eval Account_Domain=(mvindex(Account_Domain,1))
| eval Account_Name=if(Account_Name="-",(mvindex(Account_Name,1)), Account_Name)
| eval Account_Name=if(Account_Name="$",(mvindex(Account_Name,1)), Account_Name)
| eval Time=strftime(_time,"%Y/%m/%d %T")
| eval Logon_Type=case(Logon_Type=2,"Local Logon of Server",Logon_Type=8,"New Clear Text IIS Logon",Logon_Type=9,"RUN AS COMMAND",Logon_Type=4,"Batch Job",Logon_Type=5,"Scheduled Service",Logon_Type=3,"Net Use",1=1,Logon_Type)
| bin _time span=60m
| stats count values(Account_Domain) AS Domain, values(host) AS Host, dc(host) AS Host_Count, values(Logon_Type) AS Logon_Type, values(Workstation_Name) AS WS_Name, values(Source_Network_Address) AS Source_IP, values(Process_Name) AS Process_Name by Account_Name _time
| where Host_Count > 2
| streamstats window=24 avg(count) as avg stdev(count) as stdev by Account_Name
| eval lower_bound=avg-(stdev*1.5)
| eval upper_bound=avg+(stdev*1.5)
| eval isOutlier=if(count>upper_bound OR count<lower_bound,1,0)
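One way to compare each account strictly to its own history, offered only as a rough sketch (not from the thread): keep the hourly bucketing, but compute the average and standard deviation of each account's hourly counts over the whole search window with eventstats instead of a rolling streamstats. Field names follow the searches above; acct_avg and acct_stdev are made-up names.

...
| bin _time span=60m
| stats count dc(host) AS Host_Count values(host) AS Host by Account_Name _time
| where Host_Count > 2
| eventstats avg(count) as acct_avg stdev(count) as acct_stdev by Account_Name
| eval upper_bound=acct_avg+(acct_stdev*1.5)
| eval isOutlier=if(count>upper_bound,1,0)
| search isOutlier=1

The trade-off is that the search has to cover enough history (days or weeks) for each account to have a usable baseline; an account that only shows up in a single hourly bucket will have a standard deviation of 0 and will not be flagged.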
