I want an alert for when a host is NOT there, and then to be able to pass that host name by email. Let's say I have three hosts - A, B, C - and I am displaying Service Process information by host. When all is normal I see all three hosts. If a host/service stops, it no longer appears in the results, so I cannot draw any information from it.
Is it possible to keep some kind of comparative "what I expect to see" list?
A colleague showed me how his team overcomes this problem. Now I just look at the events from the last 4 minutes and evaluate the gap since each host/instance last reported; if the gap is greater than 2 minutes, an alert is triggered.
sourcetype="Perfmon*" earliest=-4m instance="Spin.*" | stats max(_time) as LatestTime by host, instance | eval Gap=(now()-LatestTime) | search Gap>120 | fields host, instance
You can store data in a lookup and then use those values to see when an expected result does not appear.
http://docs.splunk.com/Documentation/Splunk/latest/User/CreateAndConfigureFieldLookups
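As a sketch, assuming you have saved your expected hosts in a lookup file named expected_hosts.csv with a single host column (the file name, field name, and time window here are assumptions you would adjust for your environment), you could append the lookup with a zero count and flag any host that produced no events:

sourcetype="Perfmon*" earliest=-4m | stats count by host | append [| inputlookup expected_hosts.csv | eval count=0] | stats sum(count) as count by host | where count=0 | fields host

Any host listed in the lookup but missing from the search results survives with count=0, so the search returns exactly the hosts that have gone quiet - a result set you can attach an email alert action to.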
Here's a similar splunk answer on this topic. You could also create an alert on this type of metadata search.
http://splunk-base.splunk.com/answers/3181/how-do-i-alert-when-a-host-stops-sending-data
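For reference, a metadata-based check along the lines of that answer might look like this (the index name and the 10-minute threshold are assumptions; the metadata command's recentTime field holds the timestamp of each host's most recent event):

| metadata type=hosts index=main | eval minutesSinceLastEvent=round((now()-recentTime)/60) | where minutesSinceLastEvent > 10 | fields host, minutesSinceLastEvent

Because metadata reads index summaries rather than raw events, this tends to be cheap enough to schedule frequently as an alert.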