I have a problem with db-connect not forwarding data from a heavyweight forwarder. From dbx.log:
2013-11-06 08:23:12.489 dbx9169:INFO:TailDatabaseMonitor - Processed count=240000 results...
2013-11-06 08:23:12.502 dbx9169:INFO:TailDatabaseMonitor - Processed count=241000 results...
2013-11-06 08:23:12.520 dbx9169:INFO:SpoolOutputChannel - Moving temporary file /opt/splunk/heavyforwarder/splunk/var/run/tmp/dbx/kv_4356646782255266682.dbmonevt with size=2328610 to destination /opt/splunk/heavyforwarder/splunk/var/spool/dbmon/kv_1383747792520965798.dbmonevt
For some reason the data is not being sent from the spool directory; it just remains there. outputs.conf is correct, and I can see dbx.log and splunkd.log from the search head, so the forwarder is sending data OK. The index used for dbx exists on the peers. There are no errors in any log. So now I'm stuck. Any ideas?
I've figured out what happened. The inputs.conf for dbx had been changed and the monitoring of the spool directory removed. I added this back in, restarted the forwarder, and data is being sent again. The solution is always right in front of your eyes.
[batch://$SPLUNK_HOME/var/spool/dbmon/*.dbmonevt]
crcSalt =
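For reference, a fuller version of that batch stanza might look like the sketch below. `crcSalt = <SOURCE>` and `move_policy = sinkhole` are standard settings for batch inputs (sinkhole deletes each file after indexing, which is why the spool directory drains); the `index` value here is a placeholder — use whatever index your dbx configuration actually targets.

```
[batch://$SPLUNK_HOME/var/spool/dbmon/*.dbmonevt]
crcSalt = <SOURCE>
move_policy = sinkhole
disabled = 0
# index = your_dbx_index   <- placeholder, set to the index your peers host
```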
Can you open a support case so we can get a diag? It looks like everything is working correctly except, as you point out, the sinkhole directory, but it is hard to tell why. Knowing the server platform and Splunk Enterprise version would also help.