Splunk Search

Is there an alternative search approach to using a transaction with maxspan?

smwilli1
Explorer

I'm curious if there is a way to get the same effect as transaction with maxspan, without having to use that process-intensive transaction command. That approach searches by what I refer to as a "rolling time bucket", which allows multiple results within my overall search window for whatever I am grouping by.

For example, let's say I am searching for 5-minute windows where the number of authentication failures for any user is 25 or more. With transaction, the search would look like the following:

"Authentication Failed" |transaction user maxspan=5m |where eventcount>=25

I know there is a way to "bin" the data with stats, but this doesn't give the same effect. The logs at the end of each bin would not be associated with the logs at the beginning of the next, which could miss alerts. Another problem I have with stats, is that whenever running stats sorted by user, it groups over the whole duration of my search window. This is not what I want, since I need the ability to catch multiple groups of single user failures over my main search window.

Does anyone have a suggestion on how to approach this? It would help out a lot, since I am running similar queries on very large data sets, and they turn out to be quite inefficient due to the use of the transaction command.

Thanks in advance for any help!

Edit: Here are some use cases where I use transaction with maxspan
1) Find a combination of events where a user swipes a badge in one location and, within 8 hours, attempts to authenticate from a location over 500 miles away (pulled from IP geolocation). A rough sketch of my current transaction search for this one is just after the list.
Total Search Window: 24 hours
transaction maxspan window: 8 hours

2) Find a group of 20 or more failed authentication attempts followed by a Successful authentication, all within 30 minutes of each other.
Total Search Window: 24 hours
transaction maxspan window: 30 minutes

3) Find a group of 3 or more users attempting to connect to the same host within a 1-hour window.
Total Search Window: 24 hours
transaction maxspan window: 1 hour
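
For use case 1, my current transaction-based search looks roughly like the following. I've simplified the 500-mile distance check down to "more than one distinct location", and the event text and location field are just stand-ins for my data:

("Card swipe" OR "Authentication Attempt")
| transaction user maxspan=8h
| where mvcount(mvdedup(location))>1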

I hope that gives more insight into what I am trying to do with the |transaction maxspan command. Thanks again for all the help.


DalJeanis
SplunkTrust

Streamstats with time_window will meet many of your needs. For example...

2) Find a group of 20 or more failed authentication attempts followed by a Successful authentication, all within 30 minutes of each other.
Total Search Window: 24 hours
transaction maxspan window: 30 minutes

earliest=-24h ("Authentication Failed" OR "Authentication Succeeded")
| eval authfail=if(like(_raw,"%Authentication Failed%"),1,0)
| reverse
| streamstats time_window=30m sum(authfail) as failcount by user
| where failcount>=20 AND like(_raw,"%Authentication Succeeded%")

Note - The specifics of the overall search window are irrelevant to this particular code. The | reverse puts the events in ascending time order, so if 20 or more authentication failures occur for a single user within the 30-minute window, the next success for that user will be returned.

Modify the "like" clauses to match your actual success or fail wording, or use any other convenient way of testing the same thing. You should also include the actual field names to make it more efficient.


3) Find a group of 3 or more users attempting to connect to the same host within a 1-hour window.
Total Search Window: 24 hours
transaction maxspan window: 1 hour

This one will catch the failures from the 3rd distinct user onward...

"Authentication Failed" 
| streamstats time_window=1h dc(user) as userfail by host
| where userfail > 2

To get all of them, you could run that into a subsearch, format it with the time, and feed it into a search against the whole set of failures, but there's a much easier way. Flip the order around, and the first guy of the three is now the third. Thus, the sum of the counts in each direction will be four or more.

"Authentication Failed" 
| streamstats time_window=1h dc(user) as userfail1 by host
| reverse 
| streamstats time_window=1h dc(user) as userfail2 by host
| eval userfail = userfail1 + userfail2
| where userfail>3
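
And for what it's worth, your original "25 failures in 5 minutes" example falls out of the same pattern. A minimal sketch, assuming the same placeholder event wording and a user field:

"Authentication Failed"
| streamstats time_window=5m count as failcount by user
| where failcount>=25

Any failure event whose rolling per-user count reaches 25 within the 5-minute window gets flagged, regardless of where fixed bucket boundaries would have fallen.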

aweitzman
Motivator

While this is less exact, one thing you might try is timechart with very small spans plus streamstats. So for instance, in your "25 bad login attempts in 5 minutes" example:

"Authentication Failed"
| timechart span=10s count
| streamstats window=30 sum(count) as last5minscount
| where last5minscount >= 25

Like I said, this might not be perfect, but it will at least give you a start to investigate.


jkat54
SplunkTrust

OK, so there are many ways to skin the cat... here is a method you might use that doesn't use transaction but runs a hell of a lot of searches. Very complex, but it achieves the desired effect:

Scheduled Search to Summary Index Method:
Cron: 0 */8 * * *
Time Picker = Last 8h
search index=badge_logs sourcetype=badgeReaderLogEvent source=badgeReader000* action="Card swipe"
| stats dc(location) as Count by Card_Number
| eval moreThanOneLocationLast8h=if(Count>1,"True","False")
| collect index=summary marker=badge_logs

Then run a scheduled search on top of the above one
Cron: 05 */8 * * *
Time Picker = last 8h
index=summary badge_logs moreThanOneLocationLast8h="True"
if events > 0, alert.

Some reconstruction of the above method would probably suffice for all of your use cases. You could even condense it into just one scheduled search instead of summarizing, as sketched below. Your choice.
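
For example, the condensed version of the first use case might look something like this, run as a single scheduled search over the last 8 hours (same hypothetical index and field names as above):

index=badge_logs sourcetype=badgeReaderLogEvent source=badgeReader000* action="Card swipe"
| stats dc(location) as Count by Card_Number
| where Count>1

If it returns any rows, alert.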

Sorry for such a long delay in response. I just noticed you added use cases!


lguinn2
Legend

I think @jkat54's comments are on track, although I am one of the people who says "avoid transactions if you can." There isn't any general way to do what you want other than the transaction command, AFAIK.

However, if you know more about the specific characteristics of your use case and/or your data, there certainly could be other ways to do what you want. Looking forward to the use cases and examples.


smwilli1
Explorer

Well, I have had discussions with Splunk personnel, and they always say to stay away from transaction if possible. I know certain situations where I was previously using transaction and was able to accomplish the same thing with stats, with significantly better search times. From what I know, the transaction command is built in a way that inherently runs slowly, and if you can accomplish the same thing with other commands, it is typically faster.

The only thing preventing me from using another command so far is the "maxspan" argument on |transaction. I will write out some other use cases and post them later.


jkat54
SplunkTrust

I feel that if you replicate the functionality of transaction, you're effectively replicating its inefficiencies too. Logically, it's counterproductive. If you can provide better examples of your data and the exact outcome you're looking for, perhaps there is a better method than transaction. Until then, I remain confident in my original statement: you're reinventing the wheel to achieve the same functionality as the wheel, so why not use the wheel you already have?
