Splunk Search

How do I get a large count of events over a period of time?

eli_mz
Explorer

I'm trying to write a search that counts firewall events over a 60-minute window and triggers an alarm when the count falls below 900k. However, after reviewing the job for the search string below (with the time range set in the drop-down), I noticed that the job scans 931k events before reaching the 900k count. Aside from the 31k extra events scanned, the search takes a while to run (1-2 minutes).

index=some_index sourcetype=some_stype | head 900000 | eventstats count | eval counts = tostring(count, "commas") | dedup count | table counts

I've also toyed with the metadata command to calculate traffic flow, which runs significantly faster, but I'm not familiar enough with it to know whether it's a viable solution. Any help or guidance would be appreciated.
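
For reference, what I mean by the metadata approach is something roughly like this (a sketch only; totalCount and recentTime are standard fields in the metadata command's output):

| metadata type=sourcetypes index=some_index | search sourcetype=some_stype | table sourcetype totalCount recentTime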

1 Solution

maraman_splunk
Splunk Employee
Splunk Employee

Try something like this (it uses the index metadata, so it will be fast):

| tstats count where index=myindex sourcetype=mysourcetype by _time span=60m | where count < 900000

mattymo
Splunk Employee
Splunk Employee

Definitely go with tstats. Also, what is the goal of counting up to 900k but alarming when you're under it?

Seems like a strange threshold for firewall traffic alarming...

- MattyMo

eli_mz
Explorer

The idea is to trigger an alert when the events created/forwarded fall below 900k as an indicator of potential issues. I've had instances where the box is up and seemingly running normally (no network issues), but the events are not being forwarded. Looking at one of those occurrences over time, I can see the events coming in at a decreasing rate before the count finally reaches 0.

rjthibod
Champion

First, your search will run much faster if you use tstats.

| tstats count where index=some_index sourcetype=some_stype

What is unclear, though, is the best way to use it, because I'm not quite understanding what output you want. Do you just want to trigger an alert via a saved search when the count is under 900K? Are you going to limit the search to earliest=-60m, or do you plan to search over a longer period and look in 60-minute buckets?
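
For illustration, those two options would look roughly like this (a sketch only, using the placeholder index/sourcetype names from your search; the time bound can also come from the saved search's time range instead of an earliest modifier):

| tstats count where index=some_index sourcetype=some_stype earliest=-60m | where count < 900000

| tstats count where index=some_index sourcetype=some_stype by _time span=60m | where count < 900000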

eli_mz
Explorer

The search is going to be limited to a 60-minute window, and I'm planning to run it from a saved search.

The idea is to trigger an alert when the events created/forwarded fall below 900k as an indicator of potential issues. I've had instances where the box is up and running but the events are not being forwarded.

rjthibod
Champion

Then this saved search should work. You will need to set up the alert condition to look for any results (i.e., resultCount > 0).

| tstats count where index=some_index sourcetype=some_stype | where count < 900000
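
If it helps, the equivalent alert settings in savedsearches.conf would look roughly like this (a sketch only; the stanza name and schedule are placeholders, and the same conditions can be set in the UI when you save the alert):

# savedsearches.conf - hypothetical stanza name
[firewall_volume_below_900k]
search = | tstats count where index=some_index sourcetype=some_stype | where count < 900000
# run hourly over the previous 60 minutes
cron_schedule = 0 * * * *
dispatch.earliest_time = -60m
dispatch.latest_time = now
enableSched = 1
# trigger when the search returns any results
counttype = number of events
relation = greater than
quantity = 0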

eli_mz
Explorer

Yup, this will work for my purpose. Off to do a bit more reading. Thanks!

maraman_splunk
Splunk Employee
Splunk Employee

Try something like this (it uses the index metadata, so it will be fast):

| tstats count where index=myindex sourcetype=mysourcetype by _time span=60m | where count < 900000

DalJeanis
SplunkTrust
SplunkTrust

The by _time span=60m will break up all of history into 60-minute chunks and report if ANY of them are below 900K.

Just use earliest=-1h instead.
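
In other words, something along these lines (same placeholder names as above; just a sketch, and the earliest bound could equally be set as the saved search's time range):

| tstats count where index=some_index sourcetype=some_stype earliest=-1h | where count < 900000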

eli_mz
Explorer

Excellent. This will work for my purpose.

Both rjthibod and mmodestino pointed me to that as well. Thank you all.
