Getting Data In

Memory Spike on Universal Forwarder when specific inputs.conf stanza is included

RyanPrice
Engager

Hello,

We have the universal forwarder running on many machines.  In general, memory usage stays at or below 200MB.  However, when we add the below stanza to inputs.conf, it balloons to around 3000MB (3GB) on servers where the /var/www path actually contains content.

[monitor:///var/www/.../storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

 

These logs are not plentiful or especially active, so I'm confused by the large spike in memory usage.  There are only a handful of logs and they're updated infrequently, yet the memory spike happens anyway.  I've tried to be as specific with the file path as I can (I still need the wildcard directory segment), but that doesn't seem to improve performance.

There may be a lot of files under that path, but only a handful actually match the monitor stanza criteria.  Any suggestions on what can be done?
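If it's useful for diagnosis, I can pull the list of files the forwarder is actually tracking via the CLI (a rough sketch, assuming the default universal forwarder install path and valid credentials on the forwarder):

/opt/splunkforwarder/bin/splunk list monitor

which should show the monitored directories and files the tailing processor has picked up.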

Thanks in advance. 

1 Solution

RyanPrice
Engager

Thanks for the reply.  I was able to resolve the issue today.  The issue was not load: as mentioned, there are only a handful of files that actually match the criteria, and they see infrequent updates - under 10 new entries per minute, I would estimate.

The issue ended up being that the stanza itself was too vague.  I'm not sure how Splunk expands these wildcards internally, but I believe there is room for improvement.  Once I split the single stanza into a couple that cover the majority of our use cases, memory dropped back down to around 200MB.

 

For anyone else finding this in the future: the new stanzas eliminate the recursive `...` wildcard and replace it with the single-level `*` wildcard, using a couple of stanzas for the common places these logs are found:

[monitor:///var/www/*/storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0

[monitor:///var/www/*/shared/storage/logs/laravel*.log]
index = lh-linux
sourcetype = laravel_log
disabled = 0
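If anyone wants to sanity-check which inputs.conf the forwarder is actually reading these stanzas from, btool works for that (a quick sketch, assuming the default universal forwarder install path):

/opt/splunkforwarder/bin/splunk btool inputs list --debug | grep laravel

The --debug flag prefixes each line with the file it was loaded from, so you can confirm the new stanzas are the ones in effect.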

 

Thank you kindly for taking the time to answer; there are some pieces of advice you mentioned that I wasn't as familiar with, and it was good to learn more.


kiran_panchavat
Communicator

@RyanPrice The stanza you've added monitors the log files under /var/www/.../storage/logs/laravel*.log.  If these logs are large or frequently updated, that could contribute to increased memory usage.

Verify that you have disabled transparent huge pages (THP). Refer to the Splunk documentation on it:

https://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/SplunkandTHP 
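A quick way to check THP status on the host itself (standard Linux, not Splunk-specific; the exact sysfs paths can vary slightly by distribution):

cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag

If the value shown in brackets is not [never], THP is still active on that server.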

Please also check limits.conf:

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Limitsconf 

[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is
processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
the number of events this indexer processes to the rate (in
kilobytes per second) that you specify.
* NOTE:
* There is no guarantee that the thruput processor
will always process less than the number of kilobytes per
second that you specify with this setting. The status of
earlier processing queues in the pipeline can cause
temporary bursts of network activity that exceed what
is configured in the setting.
* The setting does not limit the amount of data that is
written to the network from the tcpoutput processor, such
as what happens when a universal forwarder sends data to
an indexer.
* The thruput processor applies the 'maxKBps' setting for each
ingestion pipeline. If you configure multiple ingestion
pipelines, the processor multiplies the 'maxKBps' value
by the number of ingestion pipelines that you have
configured.
* For more information about multiple ingestion pipelines, see
the 'parallelIngestionPipelines' setting in the
server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256

The default value on a universal forwarder is 256. You might consider increasing it if throughput throttling is the actual reason the data is piling up; you can set the value to 0, which means unlimited.
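For example, to remove the throttle entirely on a forwarder, a limits.conf along these lines would do it (shown for $SPLUNK_HOME/etc/system/local/limits.conf; deploying it in an app works the same way), followed by a forwarder restart:

[thruput]
maxKBps = 0

Only do this if throttling is actually the bottleneck, since it also removes the safeguard against the forwarder saturating the network.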

 

Universal or Heavy, that is the question? | Splunk 

Splunk Universal Forwarder | Splunk

 
