Alerting

Multiple alerts are triggering for the same schedule

ninugala
Engager

Hi All,

Alerts are getting triggered multiple times for the same schedule.
For example, on Saturday at 12:30 AM one alert was triggered and sent an email. The same alert is scheduled for the next week, but with the same scheduled time it sent another email on Monday at 4:30 AM. I checked the internal scheduler logs for both runs and everything is the same except thread_id.
For the first log it is:
thread_id="AlertNotifierWorker-0"
For the second:
thread_id="AlertNotifierWorker-1"

internal logs:

07-16-2018 04:37:38.706 -0700 INFO SavedSplunker - savedsearch_id="user;app_name;alert_name", search_type="", user="user", app="app_name", savedsearch_name="alert name", priority=default, status=success, digest_mode=1, scheduled_time=1531594800, window_time=0, dispatch_time=1531740901, run_time=28.905, result_count=1, alert_actions="email", sid="scheduler_c3ZjLmEuc3Bsay1hcHAyMg_dXNjc290X3BvcnRmb2xpb192MQ__RMD5d339bfb9e55cae10_at_1531594800_25270_935E5BAB-C17C-4CD8-AD68-1034D542951F", suppressed=0, thread_id="AlertNotifierWorker-0"

07-14-2018 12:12:31.035 -0700 INFO SavedSplunker - savedsearch_id="user;app_name;alert_name", search_type="", user="user", app="app_name", savedsearch_name="alert_name", priority=default, status=success, digest_mode=1, scheduled_time=1531594800, window_time=0, dispatch_time=1531595472, run_time=59.601, result_count=1, alert_actions="email", sid="scheduler_c3ZjLmEuc3Bsay1hcHAyMg_dXNjc290X3BvcnRmb2xpb192MQ__RMD5d339bfb9e55cae10_at_1531594800_25955_FC6F664D-9201-4D40-92D6-BA7B27AD9035", suppressed=0, thread_id="AlertNotifierWorker-1"

I checked that there are no duplicate alerts and no private search with the same configuration.
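
For reference, here is a sketch of a search that should show both dispatches of the same scheduled run (the savedsearch_name value is a placeholder for the real alert name):

index=_internal sourcetype=scheduler savedsearch_name="alert_name" | stats count values(thread_id) values(sid) values(dispatch_time) by scheduled_time | where count>1 | convert ctime(scheduled_time)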


pradeepkumarg
Influencer

The saved search name appears to be different in the two logs.


ninugala
Engager

It is the same saved search. To hide the actual name I renamed it to "alert name", but I didn't use the same replacement in both logs.


renjith_nair
Legend

Is it an SH cluster?

---
What goes around comes around. If it helps, hit it with Karma 🙂

ninugala
Engager

Yes, it is in a search head cluster.


renjith_nair
Legend

Just in case, check if there are any skipped searches and whether all the cluster members are in sync.
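
For example, a quick sketch for finding skipped runs in the scheduler logs (adjust the time range to cover the affected schedules):

index=_internal sourcetype=scheduler status=skipped | stats count by savedsearch_name, reason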

---
What goes around comes around. If it helps, hit it with Karma 🙂

ninugala
Engager

I checked whether there are any sync issues in the search head cluster, but there are none.
I used the search query below to check:

index=_* source=splunkd.log Error pulling configurations from captain ConfReplicationThread NOT "Connect Timeout" | stats values(message) latest(_time) as "Last_Seen" count by host | where count>0 | convert ctime(Last_Seen)

We are using Splunk version 6.6.4.

I also checked the Splunk messages and there is nothing like that there.
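
One more check I can run (a sketch; savedsearch_name is again a placeholder, and the scheduled_time is the one from the logs above) is whether the two dispatches came from the same cluster member or from different ones:

index=_internal sourcetype=scheduler scheduled_time=1531594800 savedsearch_name="alert_name" | stats values(host) values(thread_id) values(dispatch_time) by sid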
