Getting Data In

Execute a script on the node itself

lsolberg
Path Finder

We have a split environment where another tool takes care of typical monitoring such as CPU, disk, and memory usage. That tool is also used to generate stats, create incidents, and decide whether someone has to be woken up in the middle of the night.

This wonderful tool can also watch logs, so my plan is to have Splunk (and maybe some other custom scripts running from cron) log alerts for it to a single logfile. In other words, if "TIMESTAMP Critical A custom message here" is written to /var/log/something.alerts, someone will (or should) be called.

My problem is that an alert created with savedsearches.conf and action.script will only run on the search head itself.

Is it possible to:

  1. Create a Splunk alert that populates a log on the universal forwarders themselves?
  2. Do the parsing (savedsearches logic) based on rules in inputs.conf on the UF?
  3. (Insert a more elegant solution here.)
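For reference, the part that works today but only on the search head: a legacy scripted alert action (action.script) that appends one "TIMESTAMP Severity message" line to the alert file. This is a minimal sketch, assuming the /var/log/something.alerts path from the question and a fixed "Critical" severity; the argv layout follows Splunk's legacy scripted-alert interface (argv[4] = saved search name, argv[5] = trigger reason).

```python
#!/usr/bin/env python
# Sketch of a legacy scripted alert action (action.script).
# Assumptions: log path, fixed "Critical" severity, legacy argv layout.
import time

ALERT_LOG = "/var/log/something.alerts"  # assumed path from the question

def format_alert(severity, message, now=None):
    # Build one "TIMESTAMP Severity message" line.
    ts = time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime(now))
    return "%s %s %s" % (ts, severity, message)

def main(argv, log_path=ALERT_LOG):
    # Splunk's legacy interface passes the saved search name as argv[4]
    # and the trigger reason as argv[5].
    search_name = argv[4] if len(argv) > 4 else "unknown-search"
    reason = argv[5] if len(argv) > 5 else "triggered"
    with open(log_path, "a") as f:
        f.write(format_alert("Critical", "%s: %s" % (search_name, reason)) + "\n")

# In a real deployment Splunk executes this file directly, so you would add:
#   if __name__ == "__main__": main(sys.argv)
```

The catch, as described above, is that this file lands on the search head, not on the forwarders.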

lsolberg
Path Finder

I want to implement this in a very strict environment, so there is no API I can easily reach. Having the files on the search head would break the other planned scripts that populate this log.


yannK
Splunk Employee

I don't think you can achieve this with forwarders (UF, LWF), for three reasons:

  • they don't have search capabilities (and no scheduled searches either)
  • they don't index events locally (and do not parse them)
  • the UF does not ship with Python.

I see only three workarounds, but they are complex to set up and may be costly:

  • Turn all your forwarders into heavy forwarders with local indexing (index and forward), and set up the searches locally (keep a very small local index if you have disk-space issues). [more costly in CPU/network/disk/maintenance]
  • Turn the forwarders into search heads (forward only, no local indexing) with the ability to search remotely on the indexers, and schedule local alerting with scripts. [more costly in maintenance/network]
  • Keep the searches/scripts on the search head, and have the search head run remote commands on the hosts based on the results. [a more classic centralized-management strategy]
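The third option above can be sketched as an alert script on the search head that pushes each alert line to the affected host over SSH. Everything here is an assumption for illustration: the host name, the log path, and that the search head has non-interactive (key-based) SSH access to the nodes.

```python
# Sketch: search head appends an alert line on a remote host via ssh.
# Assumes key-based SSH trust from the search head to the nodes.
import shlex
import subprocess

ALERT_LOG = "/var/log/something.alerts"  # assumed path from the question

def build_remote_command(host, line, log_path=ALERT_LOG):
    # Build the ssh argv that appends one alert line on the remote host.
    # shlex.quote protects the line from shell interpretation remotely.
    remote = "echo %s >> %s" % (shlex.quote(line), shlex.quote(log_path))
    return ["ssh", "-o", "BatchMode=yes", host, remote]

def dispatch(host, line):
    # check=True raises if the remote append failed.
    return subprocess.run(build_remote_command(host, line), check=True)
```

An alert script would loop over the hosts found in the search results and call dispatch() once per host.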

lsolberg
Path Finder

Thanks for the suggestions! I think I am ending up with a solution that stores the alerts in a central location, with cron jobs running on the nodes themselves to pick them up once a minute. It feels really hackish, unstable and ugly though. 😞
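The pickup side of that workaround could look roughly like this: each node's cron job scans its own subdirectory of a shared spool, appends any pending alert files to the local log, and deletes them once consumed. The spool layout, directory names, and log path are all assumptions for the sketch.

```python
# Sketch of the per-node cron pickup: drain <spool>/<hostname>/ into the
# local alert log. Run from cron, e.g. "* * * * * /usr/local/bin/pickup.py".
# All paths and the one-file-per-alert spool layout are assumptions.
import os
import socket

SPOOL_DIR = "/mnt/alert-spool"           # assumed shared (e.g. NFS) location
LOCAL_LOG = "/var/log/something.alerts"  # assumed path from the question

def pick_up(spool_dir=SPOOL_DIR, local_log=LOCAL_LOG, hostname=None):
    # Append pending alert lines for this host to the local log,
    # deleting each spool file only after its append succeeded.
    hostname = hostname or socket.gethostname()
    host_dir = os.path.join(spool_dir, hostname)
    if not os.path.isdir(host_dir):
        return 0
    consumed = 0
    for name in sorted(os.listdir(host_dir)):  # oldest first if names sort
        path = os.path.join(host_dir, name)
        with open(path) as src, open(local_log, "a") as dst:
            dst.write(src.read())
        os.remove(path)
        consumed += 1
    return consumed
```

Deleting after the append gives at-least-once delivery: a crash between write and remove would duplicate a line rather than lose it.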


dart
Splunk Employee

Can you notify the other system with an API or similar?
Is the only possible way to create a file on the endpoint?
