Splunk Search

Using lookup tables to create separate alerts for prod and nonprod hosts

danbutterman
Explorer

Hello Splunk community,

My team is tasked with creating alerts for standard server monitoring metrics (CPU, memory, etc.) and separating each alert by a list of prod hosts and a list of non-prod hosts. In other words, a high CPU alert for prod hosts should trigger any time, 24/7, while a high CPU alert for non-prod hosts should only trigger between 7AM and 7PM.

Here is an example of how we're attempting to use lookup tables to narrow the alert to non-prod hosts for the 7AM to 7PM time window:

index=perfmon [| inputlookup ServerNonProd-NoSQL.csv | rename ServerHost as host ] sourcetype="Perfmon:CPU Load" counter="% Processor Time" earliest=-5m latest=now
| stats avg(Value) as metric by host
| where metric >= 80
| eval metric = round(metric, 2)
| table host, metric
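
For context, we're assuming the 7AM to 7PM window is enforced by the alert's schedule rather than by the search itself, roughly like this in savedsearches.conf (the stanza name is a placeholder, and "search" would be set to the SPL above):

[NonProd - High CPU]
enableSched = 1
# run every 5 minutes, but only between 07:00 and 18:55
cron_schedule = */5 7-18 * * *
# alert whenever the search returns at least one row
counttype = number of events
relation = greater than
quantity = 0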

My question: Is maintaining separate lookup tables the most effective way to accomplish this, or is there a more efficient or advisable approach?

Thank you for any pointers!

1 Solution

jfraiberg
Communicator

For starters, you can use a single lookup table that contains both prod and non-prod hosts; just give it an "environment" field that says prod or nonprod. Whether this is the best way depends on how many hosts are in your lookup table and how often it changes. If it holds a significant number of hosts (on the order of tens of thousands) that update frequently, you may want to move to a KV Store lookup.
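
For example, assuming a single lookup called ServerHosts.csv with ServerHost and environment columns (names here are just placeholders), the non-prod version of the search could look something like this:

index=perfmon [| inputlookup ServerHosts.csv | search environment=nonprod | rename ServerHost as host | fields host ] sourcetype="Perfmon:CPU Load" counter="% Processor Time" earliest=-5m latest=now
| stats avg(Value) as metric by host
| where metric >= 80
| eval metric = round(metric, 2)
| table host, metric

The prod alert would be the same search with environment=prod and a 24/7 schedule.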

Other than that, you could also create an index-time extracted field that adds that metadata (prod or nonprod) to the events as they are indexed. From there you could just search "index=perfmon env::prod ..."
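
If you go that route, here is a rough sketch of the pieces involved, assuming the indexed field is called env and that the environment can be derived from the hostname (the regex, stanza names, and field name are all placeholders, and the props/transforms changes need to go on the indexing tier):

# transforms.conf - tag matching hosts with an indexed field env=nonprod
[set_env_nonprod]
SOURCE_KEY = MetaData:Host
REGEX = host::(dev|qa|test)
FORMAT = env::nonprod
WRITE_META = true

# props.conf - apply the transform to the Perfmon sourcetype
[Perfmon:CPU Load]
TRANSFORMS-set_env = set_env_nonprod

# fields.conf (search head) - mark env as an indexed field
[env]
INDEXED = true

The alert search then filters on the indexed field directly:

index=perfmon env::nonprod sourcetype="Perfmon:CPU Load" counter="% Processor Time" earliest=-5m latest=now
| stats avg(Value) as metric by host
| where metric >= 80
| eval metric = round(metric, 2)
| table host, metric

Keep in mind that index-time fields only apply to data indexed after the change.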

danbutterman
Explorer

Thank you for your response. I will give this a shot.
