We have a split environment where another tool takes care of typical monitoring such as CPU, disk and memory usage. That tool is also used to generate stats, create incidents, and decide whether someone has to be woken up in the middle of the night.
This wonderful tool can also watch logs, so my plan is that Splunk (and maybe some other custom scripts running from cron) writes alerts to a single logfile that the tool watches. In other words, if "TIMESTAMP Critical A custom message here" is logged to the file /var/log/something.alerts, someone will/should be called.
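For the cron scripts feeding that file, the writer side could be as small as this sketch (the log path, severity name and timestamp format are my own assumptions for illustration):

```shell
#!/bin/sh
# Sketch: append one alert line in the agreed "TIMESTAMP Severity Message"
# format. ALERT_LOG defaults to a /tmp path here purely for illustration;
# in the real setup it would be /var/log/something.alerts.
ALERT_LOG="${ALERT_LOG:-/tmp/something.alerts}"

log_alert() {
    # $1 = severity (e.g. Critical), $2 = free-text message
    printf '%s %s %s\n' "$(date '+%Y-%m-%dT%H:%M:%S')" "$1" "$2" >> "$ALERT_LOG"
}

log_alert Critical "A custom message here"
```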
My problem is that an alert created with savedsearches and action.script will only run on the search head itself.
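For reference, this is the kind of stanza I mean (the search and script name are placeholders, not my real config):

```ini
# savedsearches.conf -- sketch only; search and script name are placeholders.
[forward_critical_alerts]
search = index=main sourcetype=app_log log_level=CRITICAL
enableSched = 1
cron_schedule = */5 * * * *
action.script = 1
# The script must live on the search head (under $SPLUNK_HOME/bin/scripts),
# which is exactly the limitation: it never runs on the endpoints.
action.script.filename = write_alert.sh
```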
Is it possible to:
I want to implement this in a very strict environment, so there is no API I can easily reach. Having the files on the search head would break the other planned scripts populating this log.
I don't think you can achieve this with forwarders (UF, LWF), for 3 reasons:
I see only 3 ways to work around this, but they are complex to set up and may be costly.
Thanks for the suggestions! I think I am ending up with a solution that stores the alerts in a central location, with cron jobs running on the nodes themselves to pick them up once a minute. It feels really hackish, unstable and ugly though. 😞
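The pick-up side could look roughly like this (all paths are assumptions: one file per alert staged in a shared directory, consumed by a per-node cron job):

```shell
#!/bin/sh
# Sketch: consume staged alert files from a shared directory and append
# their contents to the local alert log the monitoring tool watches.
# Paths default to /tmp here purely for illustration.
STAGE_DIR="${STAGE_DIR:-/tmp/alert_stage}"
ALERT_LOG="${ALERT_LOG:-/tmp/node.alerts}"

pick_up_alerts() {
    for f in "$STAGE_DIR"/*.alert; do
        [ -e "$f" ] || continue        # glob matched nothing: no alerts staged
        cat "$f" >> "$ALERT_LOG"       # hand the line(s) to the watching tool
        rm -f "$f"                     # consume each staged alert exactly once
    done
}

# Demo: stage one alert and let the job consume it.
mkdir -p "$STAGE_DIR"
printf '2024-01-01T00:00:00 Critical staged demo alert\n' > "$STAGE_DIR/demo.alert"
pick_up_alerts
```

Scheduled from each node's crontab once a minute, e.g. `* * * * * /usr/local/bin/pick_up_alerts.sh`.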
Can you notify the other system with an API or similar?
Is the only possible way to create a file on the endpoint?