Hi,
I am a newbie with a task to implement monitoring functionality in Splunk. The requirement is for Splunk to monitor an application's live logs, where each line in the logs has a format with multiple fields, such as a timestamp.
From my understanding of Splunk so far, the best way I can think of to implement this is to:
If so, how do I handle the following requirements:
Sorry for the long post, and thanks in advance for any help!
Welcome to Splunk Answers, @hsimpson2016!
You will need to set up a Universal Forwarder on the remote server and configure an inputs.conf stanza to monitor the log files. You will then need to set up your outputs.conf file to point to your indexer. Both files live in $SPLUNK_HOME/etc/system/local. Once you do this and restart Splunk, the log files will start to roll into Splunk in near real time. To make sure each event is broken at the timestamp, you will need to add a BREAK_ONLY_BEFORE setting to your props.conf file, which lives on the indexer. You can tie these events together at index time or search time; this all depends on your setup.
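As a minimal sketch of those three files — the log path, index name, sourcetype, indexer host/port, and timestamp format below are all assumptions you would adjust to your environment:

```ini
# inputs.conf on the Universal Forwarder
# (path, index, and sourcetype are examples)
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = false

# outputs.conf on the Universal Forwarder
# (replace host:port with your indexer's receiving port)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997

# props.conf on the indexer, keyed by the same sourcetype
# (regex and time format assume lines begin like "2016-01-01 12:00:00")
[myapp:log]
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
```

Remember the indexer also has to be listening on that port (Settings > Forwarding and receiving, or a `[splunktcp://9997]` stanza in the indexer's inputs.conf).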
We have an index which holds SOAP web service calls. Each call has a request and a response with a matching unique GUID, and we treat them as separate events. We tie the events together at search time using the transaction command and have an alert set up for anything where the difference in time is greater than 300ms. So to answer your question: yes, you can send an email if an event has a duration longer than 5 minutes, assuming these events have a unique identifier tied to them.
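A sketch of a search along those lines — the index, sourcetype, field name `guid`, and the `startswith`/`endswith` markers are assumptions about your data:

```
index=soap sourcetype=soap:log
| transaction guid startswith="request" endswith="response"
| where duration > 300
| table _time guid duration
```

`transaction` reports `duration` in seconds, so "longer than 5 minutes" is `duration > 300`. You would then save this as an alert scheduled to run on whatever interval fits your needs.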
To prevent Splunk from spamming your inbox with emails, you can throttle the alerts. For example, if an alert were firing every 3 seconds, you could throttle it with a 10-minute window so you would only get one alert every 10 minutes until the issue is resolved. I'm not sure about Splunk maintaining an alert's state on its own, but you could always trigger a script which maintains the state for you.
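Throttling is a checkbox in the alert's trigger settings in the UI, but it can also be set in savedsearches.conf. A sketch, where the search name, schedule, search string, and email address are all placeholders:

```ini
# savedsearches.conf (name, schedule, search, and address are examples)
[Slow SOAP calls]
search = index=soap | transaction guid | where duration > 300
cron_schedule = */5 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = ops@example.com
# suppress repeat firings for 10 minutes after an alert triggers
alert.suppress = 1
alert.suppress.period = 10m
```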