Getting Data In

Splunk Reading a File we didn't tell it to per LSOF

daniel333
Builder

All,

I am trying to understand why Splunk is opening a file here.

When I run lsof, I see Splunk holding open a rolled-over file, "/opt/jboss-6.1.0.Final/server/default/log/jboss.log.2016-09-29":

splunkd 29966 root 48u REG 253,0 3381522238 5800576 /opt/jboss-6.1.0.Final/server/default/log/jboss.log.2016-09-29
[root@lvsp01cat001 default]#

But we don't monitor this file at all; it's not in our inputs.conf:

[root@SERVER default]# /opt/splunkforwarder/bin/splunk btool inputs list | grep -i jboss
[monitor:///opt/jboss/server/default/log/jboss.access.log.*]
index = jbossweb
sourcetype = jbossweb_access
[monitor:///opt/jboss/server/default/log/jboss.log]
[root@SERVER default]#
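
For reference, btool only shows configuration. To ask the forwarder what its tailing processor is actually tracking, there is also (on 6.x; output format varies by version):

/opt/splunkforwarder/bin/splunk list inputstatus

which lists each file the file monitor knows about along with its read position. If jboss.log.2016-09-29 appears there, the UF is still tracking it even though no stanza names it.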

My theory is that logrotate is rotating the file, the inode stays the same, and Splunk is not releasing it.
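
One way to test that theory with the numbers from the lsof output above (5800576 is the NODE, i.e. inode, column):

# What splunkd still holds open
lsof -p 29966 | grep jboss.log

# Inode numbers of the files on disk (-i prints the inode)
ls -i /opt/jboss-6.1.0.Final/server/default/log/jboss.log \
      /opt/jboss-6.1.0.Final/server/default/log/jboss.log.2016-09-29

If jboss.log.2016-09-29 comes back with inode 5800576, then the file Splunk is holding is the very file it was told to monitor, just renamed.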
/opt/splunkforwarder/bin/splunk -version
Splunk Universal Forwarder 6.3.3 (build f44afce176d0)

I am on 6.3.3, but I have seen this behavior on 6.4.x as well. Any idea what is going on here? I'm getting hounded by our Java support guys, who say this isn't "permitted of Splunk".

jtacy
Builder

Under normal conditions the Universal Forwarder will continue to read a file after it's been renamed, until it reaches EOF and no activity happens on the file for a period of time (by default 3 seconds on a 6.4.3 UF). If an external process were rotating the logs, I think you'd be right about what's going on, because JBoss might keep writing to the rolled file. Correct behavior of logrotate often relies on being able to tell the writing process to create a new log file after rotation, and I'm not aware of a way to do that with JBoss short of restarting it. However, I wouldn't expect logrotate to be used with JBoss at all, because JBoss has its own mechanism for rolling logs based on time. Are you certain that logrotate is rolling the files, and not the rolling mechanism built into JBoss?
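
If logrotate really is in play, the usual workaround for a writer that can't be told to reopen its log is copytruncate. A minimal sketch of a hypothetical /etc/logrotate.d/jboss entry (the path, schedule, and retention here are assumptions, not taken from the post):

/opt/jboss-6.1.0.Final/server/default/log/jboss.log {
    daily
    rotate 7
    compress
    # copytruncate copies the log and then truncates the original in
    # place, so JBoss keeps writing to the same inode and nothing is
    # renamed out from under splunkd.
    copytruncate
}

The tradeoff is that copytruncate can drop lines written between the copy and the truncate, which is why rename-and-reopen is preferred when the writer supports it.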

I'm also puzzled about why your Java support guys are upset; what's the impact to them? Are they not getting the logs in Splunk because of this? I would generally consider it a best practice to index the rolled log files too, to help ensure that you get all of your events even when queues fill up, etc.
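
If you did want to pick up the rolled files, a sketch of what the stanza might look like (the whitelist regex is an assumption based on your dated filenames; the UF's CRC check should keep already-read, renamed files from being re-indexed):

[monitor:///opt/jboss/server/default/log/jboss.log*]
# Matches jboss.log itself plus dated copies like jboss.log.2016-09-29,
# but not jboss.access.log, which has its own stanza above.
whitelist = jboss\.log(\.\d{4}-\d{2}-\d{2})?$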

All of that said, seeing a rolled log file held open for a long time in lsof is exactly what would happen if you hit the configured maxKBps on the UF. Your jboss.log seems to be a big file; between everything you're logging, is it possible that you're exceeding the limit? Any other signs of full queues on your indexers? If you're using the built-in JBoss log rolling, which Splunk should work well with, then slow throughput to the indexer, whether from a too-low maxKBps on the UF or from blocked queues, is where I would look next.
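
Two quick things to check on the forwarder (standard commands; the 512 below is only an example value, the UF default maxKBps is 256):

# Effective throughput limit the forwarder is running with
/opt/splunkforwarder/bin/splunk btool limits list thruput

# The message the UF logs when it hits the cap
grep "reached maxKBps" /opt/splunkforwarder/var/log/splunk/splunkd.log

# To raise the cap, set this in etc/system/local/limits.conf and restart:
# [thruput]
# maxKBps = 512    (0 means unlimited)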
