Splunk Search

Why are we getting an excessive number of alerts?

danielbb
Motivator

We have an All time (real-time) alert that produced 315 alerts in the first eight hours of the day.

When we run the alert's search query over those same eight hours, we get only six events.

The alert itself is as simple as it gets -

index=<index name> AND (category="Web Attack" NOT src IN (<set of IPs>))
| table <set of fields>

What's going on here?

0 Karma
1 Solution

Sukisen1981
Champion

We perhaps need 1-2 more iterations, but I believe we are making progress 🙂
_index_earliest=-15m _index_latest=now index=<your index> | rest of the stuff...

Now this should pick up only events that were indexed from 15 minutes ago until now... a bit closer?
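To make that concrete, here is a sketch of the full alert search with those index-time bounds applied, reusing the placeholders from the question (they are not real names):

_index_earliest=-15m _index_latest=now index=<index name>
    (category="Web Attack" NOT src IN (<set of IPs>))
| table <set of fields>

The idea is that the search now selects events by when they were indexed rather than only by _time, so a periodically scheduled run still picks up events that were indexed late.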

0 Karma

danielbb
Motivator

I think that's it - love it @Sukisen1981 !!!

0 Karma

Sukisen1981
Champion

Not an issue at all. It would still be interesting to see if default_backfill=false works, though 🙂 🙂

0 Karma

danielbb
Motivator

haha - funny

0 Karma

danielbb
Motivator

@Sukisen1981 - please convert to an answer...

0 Karma

Sukisen1981
Champion

Dunno which one to do, but I will convert the last comment into an answer.
I rarely get a chance to fiddle around with the backend (.conf files) as it is maintained by a different vendor... this default_backfill=false looks interesting... maybe I will play around with it in my local instance.

0 Karma

danielbb
Motivator

_index_earliest=-15m _index_latest=now index=<your index> | rest of the stuff works like a charm so far ;-)
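For readers wiring this up outside the UI, a minimal savedsearches.conf sketch of such a scheduled alert might look as follows (the stanza name, cron schedule and 24-hour dispatch window are illustrative assumptions, not the poster's actual configuration):

[Web Attack - indexed in last 15 min]
search = _index_earliest=-15m _index_latest=now index=<index name> (category="Web Attack" NOT src IN (<set of IPs>)) | table <set of fields>
enableSched = 1
cron_schedule = */15 * * * *
# keep the _time window wide enough to cover the worst indexing lag
dispatch.earliest_time = -24h
dispatch.latest_time = now

The wide dispatch window matters because the index-time modifiers apply on top of the normal _time range, so each run still has to span the _time of any late-indexed events.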

0 Karma

Sukisen1981
Champion

Glad to know 🙂

I did notice that you had posted a question on the 'real' real-time alert issue; any good clues on that thread?
Unfortunately I got very busy with office work (on which, alas, I am dependent for my B&B) and could not get hold of the admin team to tinker with default_backfill, which I have filed away in my mind and will get to one day, the gods, winds and time permitting 🙂 🙂

0 Karma

danielbb
Motivator

Still trying to figure out this real-time alert issue ;-)

0 Karma

Sukisen1981
Champion

Hi @danielbb - can you please post the alert configuration? I'm particularly interested in the real-time look-back window.

0 Karma

danielbb
Motivator

Is this the right view @Sukisen1981?

[screenshot of the alert configuration]

0 Karma

Sukisen1981
Champion

Hi @danielbb - see this: https://docs.splunk.com/Documentation/Splunk/7.3.1/Search/Specifyrealtimewindowsinyoursearch

Try setting default_backfill to false and see?

[realtime]

default_backfill =
* Specifies if windowed real-time searches should backfill events
* Defaults to true
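If you do get access to the search head's config, a minimal sketch of that change would be the following (file path assumed; limits.conf changes typically need a restart to take effect):

# $SPLUNK_HOME/etc/system/local/limits.conf
[realtime]
default_backfill = false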

danielbb
Motivator

The doc says that for windowed real-time searches you can backfill, but we don't use windowed real-time searches.

From the UI, the only relevant option seems to be the 'Expires' setting of 10 hours. Could that have anything to do with it?

Btw, where can we set "windowed" real-time searches versus "all-time" real-time searches?

[screenshot of the alert's UI settings]
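Per the doc linked above, the difference comes down to the real-time time modifiers on the search; roughly:

earliest=rt-5m latest=rt    (windowed real-time, here a 5-minute window)
earliest=rt latest=rt       (all-time real-time, i.e. no window)

In the UI the same choice shows up in the time range picker as a windowed real-time preset versus "All time (real-time)".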

0 Karma

Sukisen1981
Champion

Hi @danielbb,
May I ask why you need a real-time alert in the first place? As a rule of thumb, it is better to avoid real-time alerts.
Going by the frequency of the hits you mentioned earlier (6 events in 8 hours), can you not make it a scheduled alert running, say, every hour, or even on a 3-minute schedule?
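For reference, those suggested schedules as cron expressions (values illustrative, paired with a matching earliest/latest window):

0 * * * *      hourly
*/3 * * * *    every 3 minutes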

danielbb
Motivator

OK, makes perfect sense; however, these events have an indexing delay that we can't avoid. For these 6 events the delay varies between 1.7 and 12.32 minutes.

So, is there a way to schedule these "regular" alerts based on _indextime? Meaning, the alert would fire for all events that got indexed in the past 15 minutes, for example.
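As a side note, one way to sketch that filter in plain SPL, as an alternative to the index-time search modifiers in the accepted answer (the 15-minute window is just an example):

index=<index name> (category="Web Attack" NOT src IN (<set of IPs>))
| where _indextime >= relative_time(now(), "-15m")
| table <set of fields>

The scheduled search's own _time range still has to be wide enough that the late-arriving events fall inside it.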

0 Karma

Sukisen1981
Champion

Interesting, try this in search:
index=yourindex | your search
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval time=strptime(indextime,"%Y-%m-%d %H:%M:%S")
| eval _time=time
| stats count by indextime, _time
Is there a 'proper' capture based on indextime or _time?
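A variant of the same diagnostic that computes the lag directly in minutes may be easier to eyeball (lag_min is just an illustrative field name):

index=yourindex | your search
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval lag_min=round((_indextime - _time)/60, 2)
| table _time, indextime, lag_min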

0 Karma

danielbb
Motivator

It shows -

[screenshot of the search results]

0 Karma

Sukisen1981
Champion

Check the Statistics tab carefully... is there any difference in minutes between indextime and _time in the table?

0 Karma

danielbb
Motivator

Not on the first page, but we have lags for some of the events.

0 Karma

Sukisen1981
Champion

OK, one last test, and sorry, I should have asked you this before. You said there are only 6 events in the last eight hours, so keep your search criteria before the evals and just add these 2 evals before your table:
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval time=strptime(indextime,"%Y-%m-%d %H:%M:%S")
In the table fields, add indextime and _time along with the rest.
What I am asking now is: we should have just 6 events, and in these 6 events, is there a difference between indextime and _time matching what you have described - a 1.7 ~ 12 minute delay?
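Put together, the check being asked for here would look roughly like this, with the placeholders from the original alert:

index=<index name> (category="Web Attack" NOT src IN (<set of IPs>))
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| eval time=strptime(indextime,"%Y-%m-%d %H:%M:%S")
| table indextime, _time, <set of fields>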

0 Karma