In our environment we have syslog sources that forward data to HFs via a load balancer. I would like to get a report on the latency between the source and the HF.
So in picture form it looks like:
Endpoint (event generated) at time T1, Heavy Forwarder (the same event reached the HF) at time T2, Indexer (the same event was indexed) at time T3.
So what we need is
T2 – T1 = time taken to reach HF
T3 – T2 = time taken to get the event indexed
T3 – T1 = total time taken for the event to be usable.
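As a sketch of what that report could look like in SPL, assuming the HF's receipt time were available in a search-time field (the field name hf_time below is hypothetical; out of the box only _time and _indextime exist):

```
base search
| eval t1 = _time
| eval t2 = hf_time
| eval t3 = _indextime
| eval to_hf = t2 - t1
| eval to_index = t3 - t2
| eval total = t3 - t1
| stats avg(to_hf) avg(to_index) avg(total) by host
```

The stats-by-host at the end gives the per-endpoint sample you describe, so slow senders stand out immediately.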
When we get the above information for each endpoint (even just a sample), we will be able to get to the bottom of the problem.
Then we have to go and dig deeper to find out where the problem is:
1. the HF is retransmitting, or
2. indexer queues are full, or
3. we are running out of CPU, or
4. we are losing time reading from and writing to disk on the HF.
Thanks for your help in advance.
Maybe the following can help. It shows the difference between the capture time (_time) and the index time (_indextime):
base search
| eval diff = _indextime - _time
| eval capturetime=strftime(_time,"%Y-%m-%d %H:%M:%S")
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S")
| table capturetime indextime diff
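To turn the per-event diff into a summary, the same search could feed a stats command (a sketch; the percentile and grouping field are choices to adjust to your environment):

```
base search
| eval diff = _indextime - _time
| stats avg(diff) AS avg_lag_secs perc95(diff) AS p95_lag_secs by host
```

This gives the average and 95th-percentile indexing lag in seconds per sending host.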
Hi ddrillic,
Thank you for responding.
Yes. As adonio says in his reply, the SPL you suggested will not give me the time at which the event reached the HF.
Your SPL holds only under the assumption that there is little or no latency between the endpoint and the HF.
First, @ddrillic's comment is very valid for the "T3 - T2" you are looking for.
Wild idea here, as I never tried it and don't know how it will work.
I don't think the HF attaches a timestamp for the time it picked the event up from the endpoint, therefore I can't see how you can get your "T2 - T1" requirement.
With that being said, maybe you can "cheat" Splunk and set DATETIME_CONFIG = CURRENT in props.conf on the HF. Since the original timestamp stays in the raw event text, you would then have T1 = event-generated timestamp (in the raw event), T2 = current time on the HF (now written to _time),
T3 = indexed time (_indextime).
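A minimal props.conf sketch of that idea; the sourcetype name syslog is an assumption, substitute your own:

```
# props.conf on the Heavy Forwarder
# Hypothetical stanza name -- use your actual sourcetype.
[syslog]
# Stamp _time with the HF's wall clock instead of parsing
# the timestamp out of the event (so _time becomes T2).
DATETIME_CONFIG = CURRENT
```

Note that with this in place, T1 would have to be re-extracted from the raw event text at search time, since _time no longer carries it.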
Disclaimer: this is just theory; I never tried it. If it works, please let me know.
Hi Adonio,
Thank you for responding to the question.
This indeed is a good theory. However, I too am not sure if it works.
I'll check it. Thanks again.
Hello @bharadwaja30,
did you get a chance to try the theory?