Hi Splunker!
I am using a universal forwarder to monitor and forward data (a log file) to my Splunk instance, and I have observed data loss, i.e. certain events are missing.
What is recommended to avoid any data loss?
Is there any way to confirm that the forwarder is sending the complete data?
Hi, the image shows some gaps in log transmission because the UF cannot (and need not) send logs to the indexer all the time; throughput can vary depending on many factors, such as network connectivity and how busy the UF/HF/indexers are.
May we know how you found out that there is data loss? Just from this picture, or did you actually find events missing when running search queries? Is the UF sending a lot of logs, and how much daily license volume does this UF generate?
Splunk's best practice is to enable load balancing and indexer acknowledgement.
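As a minimal sketch, both are configured in outputs.conf on the forwarder. The indexer hostnames below are placeholders; replace them with your own. Listing multiple servers in one target group enables automatic load balancing, and useACK enables indexer acknowledgement:

```ini
# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Hypothetical indexer hostnames -- replace with your own.
# Multiple servers in one group = automatic load balancing.
server = idx1.example.com:9997, idx2.example.com:9997
# Indexer acknowledgement: the forwarder keeps data until the
# indexer confirms receipt, and re-sends anything unacknowledged.
useACK = true
```

Restart the forwarder after editing the file for the change to take effect.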
Hi,
I checked the log file in the monitored directory on the host where the UF is installed. Some events have not been collected into Splunk.
The data generated on the UF host is very large and continuous; I don't know exactly how much per day.
In splunkd.log on the UF, do you see any errors?
I see the following INFO message in splunkd.log:
"INFO ThruputProcessor - Current data throughput (262 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf."
Then I set maxKBps = 0 (unlimited), and the problem was solved.
But I don't know why. Can you explain it to me?
More info: the UF is monitoring a file that rotates. filename.log has new content appended to it and rotates every 5-10 minutes to filename.log.1, filename.log.2, etc.
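For reference, the change described above goes in limits.conf on the forwarder. Setting maxKBps to 0 removes the throughput cap entirely (the UF default is 256 KBps); a higher finite value is a more conservative alternative if you want to keep some protection against network saturation:

```ini
# limits.conf on the universal forwarder
[thruput]
# 0 = unlimited; the UF default is 256.
# A finite value such as 1024 also works if you prefer a cap.
maxKBps = 0
```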
As you are a new user on Splunk Answers, you can upvote answers/comments.
If this answer resolved your query, you can select it and "accept" it as the answer, so that this question is moved to the answered queue. Happy Splunking!
Please check this info from the limits.conf spec file:
[thruput]
maxKBps =
* The maximum speed, in kilobytes per second, that incoming data is
processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
the number of events this indexer processes to the rate (in
kilobytes per second) that you specify.
* NOTE:
* There is no guarantee that the thruput processor
will always process less than the number of kilobytes per
second that you specify with this setting. The status of
earlier processing queues in the pipeline can cause
temporary bursts of network activity that exceed what
is configured in the setting.
* The setting does not limit the amount of data that is
written to the network from the tcpoutput processor, such
as what happens when a universal forwarder sends data to
an indexer.
* The thruput processor applies the 'maxKBps' setting for each
ingestion pipeline. If you configure multiple ingestion
pipelines, the processor multiplies the 'maxKBps' value
by the number of ingestion pipelines that you have
configured.
* For more information about multiple ingestion pipelines, see
the 'parallelIngestionPipelines' setting in the
server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256
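To make the multiplication behavior in the spec concrete, here is an illustrative pair of settings (the values are examples, not recommendations). With two ingestion pipelines, the effective cap is 2 × 512 = 1024 KBps:

```ini
# server.conf on the forwarder
[general]
parallelIngestionPipelines = 2

# limits.conf on the same forwarder
[thruput]
# Applied per pipeline: effective total cap = 2 x 512 = 1024 KBps.
maxKBps = 512
```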
Hello @dailv1808
Try using indexer acknowledgement to avoid data loss. Please find the link below for reference:
https://docs.splunk.com/Documentation/Forwarder/7.2.3/Forwarder/Protectagainstthelossofin-flightdata
Thanks for your reply.
I want to know the exact source of the problem when I do not use indexer acknowledgement. Why was data lost?
Did the forwarder receive too much data, causing a queue to overflow and drop events, or was there another reason?
Hello @dailv1808
You will find the full information in splunkd.log on the universal forwarder. The log is located at $SPLUNK_HOME/var/log/splunk.
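If the forwarder sends its own internal logs to the indexers (the default behavior), you can also search them from Splunk instead of reading the file on disk. A sketch, assuming your forwarder's hostname is my-uf-host (replace with your own):

```
index=_internal host=my-uf-host source=*splunkd.log*
    (component=ThruputProcessor OR log_level=WARN OR log_level=ERROR)
```

This surfaces throttling messages like the ThruputProcessor notice above, along with any warnings or errors around the time events went missing.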
Please accept the answer if the above solution worked for you.