Getting Data In

How to make the Universal Forwarder run a script after monitoring of a directory picks up files

viramamo
Explorer

Hi,
On the Universal Forwarder (Windows), I have a scenario where I need to run my pre-processing scripts after a file from a remote machine has been successfully monitored.

inputs.conf

Step 1:

[monitor://\\UnitTest\r.xml]
disabled=false
index=test
sourcetype=r_xml_component

Output: r.xml is indexed in Splunk

Step 2: runs after Step 1 completes successfully

[script://C:\Program Files\SplunkUniversalForwarder\etc\system\bin\Run.bat]
index=test1
interval=1800
disabled = false
sourcetype=log4j
source=test

How do I link the result of Step 1 to Step 2?

Help is appreciated. Many thanks!


woodcock
Esteemed Legend

You can rename the incoming files to a custom file extension like .xyz and then create an unarchive_cmd, like this:
https://answers.splunk.com/answers/143771/whats-the-trick-to-get-unarchive-cmd-to-work-for-a-custom-...
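Roughly, the pattern in that link amounts to a props.conf like the sketch below. The .xyz extension, the script path, and the preprocess-xyz sourcetype name are all assumptions for illustration; the command named by unarchive_cmd receives the file on stdin and must write the processed events to stdout. See the linked answer for the working details.

# props.conf (sketch, assuming the renamed files end in .xyz)
[source::....xyz]
sourcetype = preprocess-xyz
unarchive_cmd = python "C:\scripts\preprocess.py"
NO_BINARY_CHECK = true

[preprocess-xyz]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false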


woodcock
Esteemed Legend

The way to do this is to have a separate directory that only splunk monitors, which starts out with nothing in it. Then set up a cron job which wakes up every few minutes, looks for new files (by comparing the soft links in splunk's directory with the files in the real directory), processes them, and then drops a soft link pointing to the now-updated file into splunk's directory. You can also look for dead soft links at this time (for when housekeeping deletes the "real" file) and remove those. I have used this process many times, and if you google answers, you will even find some cron-job code that I have posted.
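For reference, a minimal Python sketch of that cron-job logic (run from cron every few minutes). The directories and the preprocess command are made up for illustration; woodcock's actual posted code is on answers.

#!/usr/bin/env python3
# Sketch of the pattern above: REAL_DIR is where files really land,
# SPLUNK_DIR is the only directory splunk monitors. All paths and the
# preprocess command are assumptions, not from this thread.
import subprocess
from pathlib import Path

REAL_DIR = Path("/data/incoming")
SPLUNK_DIR = Path("/data/splunk_watch")

# housekeeping: remove dead soft links whose target has been deleted
for link in SPLUNK_DIR.iterdir():
    if link.is_symlink() and not link.exists():  # exists() follows the link
        link.unlink()

# any file with no soft link yet is new: process it, then link it in
for real_file in REAL_DIR.iterdir():
    link = SPLUNK_DIR / real_file.name
    if not link.exists():
        subprocess.run(["/opt/scripts/preprocess.sh", str(real_file)], check=True)
        link.symlink_to(real_file)  # now splunk can see the processed file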


viramamo
Explorer

Hi woodcock,
Thanks for your reply.

When I interpret your answer, I understand that I can create a cron job to check whether a file has been successfully monitored or not. If I am correct, that is not what I am looking for.

Let me try to put my question in simpler terms:

How do I make the universal forwarder understand that the Splunk instance has successfully indexed the file sent to it by the universal forwarder through the monitoring process?

Can the monitor stanza in the universal forwarder receive a success/failure response from the Splunk server, by which the universal forwarder would come to know whether indexing of the data source happened or not?

Many Thanks,
Vignesh


woodcock
Esteemed Legend

No, you are not getting it at all. The cron job looks for NEW files by checking whether there is a soft link for each in the other directory. For any file with no soft link, it processes the file and creates the soft link in the other directory (the one where splunk is looking), which causes splunk to see it.


gcusello
SplunkTrust

Hi viramamo,
let me understand: your script pre-processes your data, which must then be ingested into Splunk?
If this is your need, your script could write the processed files to a directory different from the original (or use a different file-naming convention), and then you use only a monitor stanza pointing to the new files.
In this way you have to schedule the script execution outside Splunk, and you don't need the second stanza in inputs.conf.
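For example, a minimal inputs.conf sketch of that, assuming the script writes its processed output to a made-up C:\Data\processed directory and reusing the index and sourcetype from the question:

[monitor://C:\Data\processed\*.xml]
disabled=false
index=test
sourcetype=r_xml_component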

Ciao.
Giuseppe


viramamo
Explorer

Hi gcusello,
If my requirement were to schedule a script and monitor the output/result of the script execution (maybe a file), then your solution would be right.
But my need is to monitor files, which is a different process altogether. Once the monitoring of the files succeeds (the success criterion may be indexing), then I need to run the script to pre-process a different file.
I want to sync process 1 (file monitoring) with process 2 (running the script) to get sequential execution: process 2 must be triggered only on completion of process 1.


gcusello
SplunkTrust

Hi viramamo,
at first, remember that file monitoring is a continuous process, not a scheduled one.
Anyway, if I understood correctly, you have two different inputs:

  • one is a file monitoring input that you ingest using a monitor stanza, and you need to check whether logs are being indexed or not (alert when not);
  • the second is the results of a pre-processing script;

is this correct?

If these are your needs,
to monitor whether logs are indexed, you can use an alert (a search on index=my_index, fired when results=0) scheduled with the frequency you like (e.g. every 5 minutes).
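A minimal sketch of such an alert search, reusing the index and sourcetype from the question (the 5-minute window is just an example):

index=test sourcetype=r_xml_component earliest=-5m
| stats count
| where count=0

Schedule it every 5 minutes and trigger when the number of results is greater than zero: a row here means nothing was indexed in the window.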

The script to pre-process a different file can be run directly by Splunk using a scripted-input stanza (like the one you shared), or it can be scheduled using Windows with its results ingested into Splunk using a file monitor like the first one: I usually prefer the second solution because it's easier.
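If you go with the Windows-scheduled variant, the schedule could look like this sketch (the task name is made up; the path reuses Run.bat from the question, and paths with spaces may need extra escaped quotes inside /TR):

schtasks /Create /SC MINUTE /MO 30 /TN "SplunkPreprocess" ^
    /TR "C:\Program Files\SplunkUniversalForwarder\etc\system\bin\Run.bat"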

Ciao.
Giuseppe


viramamo
Explorer

Hi gcusello,
Your idea of using an alert is what I had already planned to do. But the problem with using an alert is that, on a successful alert about the indexing, I need to run the script from Splunk, which I guess means I need to place the script file on the Splunk instance server, where Splunk is running. But my forwarder runs on a different machine: the Splunk instance and the universal forwarder are running on two different machines.

The script will run an application hosted on the forwarder, so it is better to have the script run on the forwarder as pre-processing. It would be:

Ex: Forwarder:
Step 1: Monitor -> file1 -> Index -> file1 -> Splunk Server
Step 2: Pre-process -> script -> trigger an application -> output log -> Splunk Server

Both need to be done on the forwarder machine; using an alert, the script would run on the Splunk server, which is not what I want.

Thanks for your reply,
Thanks,
Vignesh


gcusello
SplunkTrust

Hi viramamo,
sorry, probably I wasn't clear:

  • the monitor stanza for input1 must be on the UF, which sends the logs to the Indexer;
  • the alert that checks whether logs1 are indexed runs on the Indexer and alerts you, highlighting servers where ingestion has stopped;
  • the script to ingest input2 runs on the UF: it pre-processes logs2 and writes the results to a file on the UF, which is monitored by another stanza, ingested on the UF, and sent to the Indexer;
  • the alert that checks whether logs2 are also indexed runs on the Indexer.

Do not confuse the alert that checks on the Indexer whether indexing is OK with the script on the UF: they are two different things.

Anyway, you cannot check indexing on the UF, because indexing is done by the Indexers; you cannot be sure that indexing is complete until you can run a search on the Indexer!
Even if you could check indexing on the UF, you could not be sure, because other problems (e.g. a read error, a network problem, or Splunk server maintenance) could occur.

Why don't you want to run the check on the Indexer?
In your alert you can also check indexing host by host; it isn't a problem.
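For the host-by-host variant, a sketch (the index and the 15-minute threshold are just examples):

index=test
| stats latest(_time) as last_seen by host
| where last_seen < relative_time(now(), "-15m")

Each remaining row is a host that has sent nothing in the last 15 minutes.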

Ciao.
Giuseppe


viramamo
Explorer

Hi Giuseppe,
Thanks for your reply; I know it's been a while.
I do understand that monitoring on the UF and alerting on the Splunk server are two different processes, but that is exactly what I am trying to bridge here:
Step 1: Monitor the file on the UF
Step 2: Alert from the Splunk server
Step 3: Trigger the script

I haven't found any way to do cross-server process execution. The only solution I could find is a script-based alert action, but it runs on the Splunk server rather than on the UF.

But there is an alternative way:
Step 1: Monitor the file on the UF.
Step 2: Alert when the file gets indexed on the Splunk server; the alert also needs to execute a script on the Splunk server which sends a small sample file with a status string to the UF.
Step 3: Have a file-monitoring stanza that runs the pre-processor script on the UF (the trigger happens via the file rather than by waiting for input from the Splunk server); see the sketch below.
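One caveat on Step 3: a monitor stanza only indexes the status file, it does not execute anything, so on the UF a scripted input that polls for the status file is one way to do the actual triggering. A minimal Python sketch, with the status-file path assumed and the Run.bat path taken from the question:

#!/usr/bin/env python3
# Scripted-input sketch for Step 3: run the pre-processor once the
# status file (dropped there by the alert action) appears, then
# consume it. STATUS_FILE is an assumed path, not from this thread.
import subprocess
from pathlib import Path

STATUS_FILE = Path(r"C:\Data\status\indexed.ok")

if STATUS_FILE.exists():
    subprocess.run(
        [r"C:\Program Files\SplunkUniversalForwarder\etc\system\bin\Run.bat"],
        check=True,
    )
    STATUS_FILE.unlink()  # so the pre-processor fires only once per alert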

Again, thanks for your reply. I hope my alternative way will work. Any thoughts on that?

Thanks
Vignesh


gcusello
SplunkTrust

Hi @viramamo,
at first, you can schedule a script in Splunk and send its output directly to Splunk without writing the results to a file.

Probably I don't understand the relation between the first file monitoring and the script (if one exists).
But anyway, inputs (by file or by script) run on the UF, while alerts run on the Indexers, and it isn't possible to link the first with the second.

Ciao.
Giuseppe
