Splunk Search

How do I get a large count of events over a period of time?

eli_mz
Explorer

I'm trying to write a search that counts firewall events (up to 900k) over a 60-minute window and triggers an alarm when the event count falls under 900k. However, after reviewing the job for the search string below (with the time range set in the drop-down), I noticed that the search job scans 931k events before reaching the 900k count. Aside from the 31k extra events scanned, the search takes a while to run (1-2 minutes).

index=some_index sourcetype=some_stype | head 900000 | eventstats count | eval counts = tostring(count, "commas") | dedup count | table counts

I've also toyed with the metadata command to calculate traffic flow, which runs significantly faster, but I'm not familiar enough with it to know whether it's a viable solution. Any help or guidance would be appreciated.
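
For illustration, a minimal metadata search along those lines (it reads per-sourcetype counts from the index metadata rather than scanning events, so the time range is only honored at bucket granularity):

| metadata type=sourcetypes index=some_index | fields sourcetype totalCount firstTime lastTime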

mattymo
Splunk Employee

Definitely go with tstats. Also, what is the goal of counting up to 900k but alarming when the count is under that?

Seems like a strange threshold for firewall traffic alarming...

- MattyMo

eli_mz
Explorer

The idea is to trigger an alert when the events created/forwarded fall below 900k as an indicator of potential issues. I've had instances where the box is up and seemingly running normally (no network issues), but the events are not being forwarded. Looking at one of those instances over time, I can see the events coming in at a decreasing rate before the count finally reaches 0.
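
For example, a quick way to see that declining rate is a simple timechart over the same index and sourcetype (placeholder names from the original search):

index=some_index sourcetype=some_stype | timechart span=5m count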

rjthibod
Champion

First, your search will run much faster if you use tstats.

| tstats count where index=some_index sourcetype=some_stype

What is unclear, though, is the best way to use it, because I am not quite sure what output you want. Do you just want to trigger an alert via a saved search when the count is under 900k? Are you going to limit the search to earliest=-60m, or do you plan to search over a longer period and look at 60-minute buckets?
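
For illustration, those two options would look roughly like this (same placeholder index/sourcetype; tstats accepts earliest/latest inside its where clause):

| tstats count where index=some_index sourcetype=some_stype earliest=-60m

| tstats count where index=some_index sourcetype=some_stype by _time span=60m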

eli_mz
Explorer

The search is going to be limited to a 60 minute bucket. I'm planning on running this from a saved search.

The idea is to trigger an alert when the events created/forwarded fall below 900k as an indicator of potential issues. I've had instances where the box is up and running but the events are not being forwarded.

rjthibod
Champion

Then this saved search should work. You will need to set up the alert conditions to look for any results (i.e., resultCount > 0).

| tstats count where index=some_index sourcetype=some_stype | where count < 900000
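
As a rough sketch, the corresponding alert in savedsearches.conf could look something like this (the stanza name, schedule, and email action are assumptions; counttype/relation/quantity implement the "any results" trigger):

[Firewall event volume below 900k]
search = | tstats count where index=some_index sourcetype=some_stype | where count < 900000
dispatch.earliest_time = -60m
dispatch.latest_time = now
cron_schedule = 0 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = someone@example.com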

eli_mz
Explorer

Yup, this will work for my purpose. Off to do a bit more reading. Thanks!

maraman_splunk
Splunk Employee

Try something like this (it uses the index metadata, so it will be fast):

| tstats count where index=myindex sourcetype=mysourcetype by _time span=60m | where count < 900000

DalJeanis
Legend

The by _time span=60m will break up all of history into 60-minute chunks and report if ANY of them are below 900k.

Just use earliest=-1h.
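
A minimal version of that, keeping the placeholder index/sourcetype from the question:

| tstats count where index=some_index sourcetype=some_stype earliest=-1h | where count < 900000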

eli_mz
Explorer

Excellent. This will work for my purpose.

Both rjthibod and mmodestino pointed me to that as well. Thank you all.
