Knowledge Management

How to handle a very fast-logging application log file?

dbras
New Member

Hi,

We are currently monitoring our application log files with a forwarder configured like this:

[monitor:///var/log/application/application.*]
sourcetype = log4j
index = application

The application manages its logging with log4j, which writes to a file configured like this:

<RollingFile name="LOG_FILE"
    fileName="/var/log/application/application.log"
    filePattern="/var/log/application/application.log.%i"
    append="true">
    <Policies>
        <SizeBasedTriggeringPolicy size="20MB" />
    </Policies>
    <DefaultRolloverStrategy max="10" />
</RollingFile>

But our application is really verbose; for example, today we have these files:

-rw-r-----. 1 weblogic weblogics 19223378 Apr  9 10:43 application.log
-rw-r-----. 1 weblogic weblogics 20976777 Apr  9 03:06 application.log.1
-rw-r-----. 1 weblogic weblogics 20971660 Apr  9 04:00 application.log.2
-rw-r-----. 1 weblogic weblogics 20972962 Apr  9 04:50 application.log.3
-rw-r-----. 1 weblogic weblogics 20971633 Apr  9 05:40 application.log.4
-rw-r-----. 1 weblogic weblogics 20971950 Apr  9 06:29 application.log.5
-rw-r-----. 1 weblogic weblogics 20971611 Apr  9 07:17 application.log.6
-rw-r-----. 1 weblogic weblogics 20971535 Apr  9 08:03 application.log.7
-rw-r-----. 1 weblogic weblogics 20972289 Apr  9 08:53 application.log.8
-rw-r-----. 1 weblogic weblogics 20971526 Apr  9 09:37 application.log.9
-rw-r-----. 1 weblogic weblogics 20971784 Apr  9 10:14 application.log.10

And this introduces a delay in when the logs become available in Splunk after the rotation. You can see below a graph of the event count in the sourcetype based on index time:
[graph: event count in the log4j sourcetype plotted against index time, showing periodic gaps and spikes]
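A chart like this can be built by plotting event counts against index time instead of event time, for example (the 10-minute span is an arbitrary choice):

index=application sourcetype=log4j
| eval _time=_indextime
| timechart span=10m count

Computing _indextime - _time per event also shows the indexing lag directly.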

Do you know of a solution to handle that without this delay?
I don't think that configuring log4j to log to bigger files will fix it.

Thank you for your help


FrankVl
Ultra Champion

If the gaps/spikes are related to the moments the log is rotated, then increasing the file size in log4j config would reduce the rotation frequency and as such the frequency of those gaps.
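For example, raising the size threshold (and keeping more rotated copies so nothing ages out before indexing catches up) would look roughly like this; the 100MB / 20 values are only placeholders to illustrate the idea:

<RollingFile name="LOG_FILE"
    fileName="/var/log/application/application.log"
    filePattern="/var/log/application/application.log.%i"
    append="true">
    <Policies>
        <SizeBasedTriggeringPolicy size="100MB" />
    </Policies>
    <DefaultRolloverStrategy max="20" />
</RollingFile>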

But from the list of files you shared and the screenshot, I wouldn't immediately draw the conclusion that the delays are related to the rotation. The list shows roughly one file per hour, while the graph only shows gaps every so many hours. Did you somehow confirm that the delays happen exactly when the file is rotated? Have you checked there are no queueing issues somewhere in your landscape around the times those gaps occur?
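One way to check for queueing issues is to look at the forwarder's metrics.log in the _internal index; something along these lines, where the host filter is a placeholder for your forwarder's hostname:

index=_internal host=<forwarder_host> source=*metrics.log* group=queue
| timechart span=10m max(current_size_kb) by name

Queues that flat-line at their maximum size around the times of the gaps would point at a queueing problem rather than the rotation itself.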

If it is related to the rotation, then the question is: what keeps Splunk from noticing the new application.log after rotating the previous file to application.log.1? Is this forwarder very busy processing other inputs? Perhaps you can enable an additional pipeline if the forwarder server has sufficient resources left.
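If the forwarder has CPU and memory to spare, a second ingestion pipeline can be enabled in server.conf on the forwarder, roughly like this (followed by a restart):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
parallelIngestionPipelines = 2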

I've seen forwarders process far busier files (generated by a syslog daemon receiving data from many sources), into the GBs per hour, without such behavior.
