I have the Splunk App for Unix and Linux on a trial license and have set an alert to fire if CPU is greater than 30% to test functionality. I have a server that constantly runs above 40%, but it shows no alerts yet.
If I go to the alert and run its search manually, it returns results for that server. Have I missed activating something?
The most common cause of this is the timeframe selected for the alert. Let's say the alert is set up as real-time over `now - 1 minute @ minute`, and then let's suppose there is a 5-minute lag between when the 40% spike occurs and when your data is actually indexed. In that case, when the alert search runs, it doesn't see anything in the last minute, so it never fires.
So check the frequency of the inputs used in the app for Unix and Linux and make sure your alert's timeframe matches it. I believe it runs the CPU script every 15 minutes by default, so you should expect at least a 15-minute lag; the alert should therefore look at something like the last 30 minutes, or run once an hour looking at the last hour. You certainly don't want an email every minute during a CPU spike, right? Once an hour, or even once every 15-30 minutes, should suffice.
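As a rough sketch, a scheduled (non-real-time) alert search along these lines could run every 30 minutes over the last 30 minutes. The field names here assume the Unix app's `cpu` sourcetype (where `pctIdle` is the idle percentage); adjust them to whatever your indexed data actually contains:

```spl
sourcetype=cpu
| eval cpu_used = 100 - pctIdle
| stats avg(cpu_used) AS avg_cpu BY host
| where avg_cpu > 30
```

Save this as an alert with a time range of "Last 30 minutes", a cron schedule like `*/30 * * * *`, and trigger when the number of results is greater than 0. Because the window is wider than the input's collection interval, indexing lag no longer causes missed spikes.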