How do you know when to increase the bandwidth limits?

jmsiegma
Path Finder

I have a few remote Splunk Universal Forwarders that forward along a metric ton of logs that the local system receives from local firewalls via syslog, and I am unsure whether I should increase limits.conf [thruput] maxKBps above its default to make sure the forwarder can send everything downstream.
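
For reference, this is the setting I mean; a sketch of the stanza on the forwarder (256 KBps is the universal forwarder default, and the file could live in $SPLUNK_HOME/etc/system/local/ or in a deployed app, depending on the setup):

# limits.conf on the forwarder (illustrative; 256 is the default cap)
[thruput]
maxKBps = 256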

Is there a log somewhere that would say if the Universal Forwarder was getting backed up?

1 Solution

yannK
Splunk Employee

A good approach is to look at metrics.log on the forwarder itself (read it locally on the forwarder, since those logs are not monitored).
If you see that the forwarder is constantly hitting the thruput limit, you can increase it and check back.

cd $SPLUNK_HOME/var/log/splunk
grep "name=thruput" metrics.log

Example: here the instantaneous_kbps and average_kbps values are always under the 256 KBps limit, so the forwarder is not being throttled.

11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122
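
If instead those values sat right at the cap most of the time, you could raise the limit on the forwarder; something like this (512 is only an example value, and maxKBps = 0 removes the cap entirely):

# limits.conf on the forwarder -- example only, pick a value that suits your link and indexers
[thruput]
maxKBps = 512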

See this guide on how to check the speed limit in metrics:
http://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdela...

MuS
Legend

Hi jmsiegma,

If you don't have any trouble with late-arriving events on the indexer or blocked queues on the forwarder, I would not change the [thruput] setting ... it could get your indexer into trouble if all the forwarders suddenly send more data.

You could set up a persistent queue on the forwarder to protect your data.
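
A rough sketch of what that can look like, assuming the syslog data arrives on a network input on the forwarder (persistent queues work for network and scripted inputs, not for monitored files; the port and sizes below are placeholders):

# inputs.conf on the forwarder -- illustrative values only
[udp://514]
sourcetype = syslog
queueSize = 1MB
persistentQueueSize = 100MB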

If you want to know whether a universal forwarder is done reading/sending data, you can use the REST endpoint

 /services/admin/inputstatus/TailingProcessor:FileStatus

In the endpoint output you can find entries showing "open file", "finished reading", and other statuses for each file.
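
You can query it locally on the forwarder over the management port, for example (assuming the default port 8089 and admin credentials, adjust to your environment):

curl -k -u admin:yourpassword https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus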

Some details about the endpoint information when the percent is 100%:

"finished reading" means that the file has been read and forwarded till the end.

"open file" means the same, but in addition the handle on the file is still open (because it has been less than 3 seconds, or because it is being 'tailed', or the file has just being reopen for any update or rotation).

Splunk will monitor every file, because Splunk assumes that a new event can be added to any file.

hope this helps...

cheers, MuS

jmsiegma
Path Finder

This is very interesting. I will have to play with this a bit more, and the persistent queue comment was helpful.

Thank you
