Hello,
First things first, I want to ask a question: does Splunk have any problem monitoring a log file that is updated every second? I've set up a forwarder and an indexer to perform this job, and I don't see any data flowing from the forwarder to the indexer. I've made sure inputs.conf and outputs.conf are properly set, and there's no connectivity issue between the forwarder and the indexer. Surprisingly, when I tried the oneshot command from the CLI, the data was successfully sent to the indexer (although the amount was not much). I checked splunkd.log and metrics.log and didn't find any errors.
Can you please enlighten me on this matter?
Thanks in advance
Best Regards,
Vincent
Thank you for your help and suggestions, guys. The problem has finally been solved. I ran chmod 777 on the file and everything worked smoothly. I had checked the file's permissions and the splunk user appeared to be able to read it, so I don't know why the forwarder ran into an error when trying to send the data.
Best Regards,
Vincent
Are you able to see the forwarder's internal log events when you search index=_internal from the search head?
How are you verifying that the forwarder is able to send data to the indexer?
Does the forwarder's splunkd user have read permission on the directory?
Did you enable the receiving port on the indexer, e.g. 9997?
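Each of these checks can be done quickly from the CLI (a sketch; the IP, port, and hostname placeholder are examples taken from this thread, adjust to your environment):

```
# On the indexer: enable a receiving port if none is open (e.g. 9997)
$SPLUNK_HOME/bin/splunk enable listen 9997
# ...and confirm something is listening:
netstat -an | grep 9997

# On the forwarder: confirm the indexer's port is reachable
telnet 10.37.0.197 9997

# On the search head: look for the forwarder's internal events
index=_internal host=<forwarder-hostname>
```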
You mentioned that you used the oneshot option while adding the input file. With oneshot, the file is not monitored for further updates.
To monitor the file continuously: ./splunk add monitor <path> [ -index <index name> ] [ -sourcetype <sourcetype name> ]
To add a file one time only: ./splunk add oneshot <path> [ -index <index name> ] [ -sourcetype <sourcetype name> ]
Note: the index name and sourcetype are optional.
If you want to re-index the file, you have two options:
1. Set crcSalt = <SOURCE> in the inputs.conf file
2. Clear the fishbucket
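For option 1, crcSalt goes inside the monitor stanza in inputs.conf (a sketch; the path is an example):

```
# inputs.conf -- force re-indexing even when the file's content CRC was seen before
[monitor:///var/log/example/app.log]
crcSalt = <SOURCE>
```

For option 2, the fishbucket can be cleared from the CLI with ./splunk clean eventdata -index _thefishbucket while splunkd is stopped (check the docs for your Splunk version, as this behavior has changed across releases).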
There is a discrepancy between your inputs and outputs. Note [tcpout:logloadbalancessl] vs. _TCP_ROUTING = loadloadbalancessl: the routing value 'loadloadbalancessl' does not match the tcpout group name 'logloadbalancessl' in outputs.conf.
Since the strings don't match, the output routing fails, and this might be your problem.
First, you should not need to use _TCP_ROUTING at all if you only have a single default output configuration. That setting is handy when you want to direct output to a second set of indexers, but in my experience it can be tricky and not always functional (at least in Splunk 5; see the bug notes as well).
Additionally, the host value should be passed to the output automatically, without the need for the variable in inputs.conf (host = $decideOnStartup).
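For reference, a matched pair looks like this (a minimal sketch reusing the group name from this thread; as noted above, _TCP_ROUTING can simply be omitted when there is only one output group):

```
# inputs.conf
[monitor:///path/to/CMServer.log]
sourcetype = ocscm
index = app_ocscm
_TCP_ROUTING = logloadbalancessl

# outputs.conf -- the group name after 'tcpout:' must match exactly
[tcpout:logloadbalancessl]
server = 10.37.0.197:9997
```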
I've fixed the typo, but nothing has changed. The indexers are still receiving no data.
Post your latest inputs.conf and outputs.conf from the forwarder. Also, from your indexer, can you search index=_internal and see events from that host? My guess is that your outputs.conf isn't set up correctly. Confirm with btool:
splunk btool outputs list --debug
Thanks for your help, esix. The problem has been solved now. It wasn't a matter of my configuration files; the problem lay within the monitored files themselves.
I see. That typo might be the root cause. I'm going to fix it and see what happens next. Thanks, andykuhn 🙂
Could you please clear the fishbucket (_thefishbucket) on the remote server? Also, check the splunkd logs on the UF and share the output.
If you use the same file in oneshot and monitor, Splunk will not index it again. Splunk keeps track of all indexed files, so if you want to re-index a file, either clean the fishbucket index (_thefishbucket) or use the option crcSalt = <SOURCE>.
Thanks for the response, MuS. But the problem existed before I ever used the oneshot command. At first I thought it was a connectivity problem, so I used oneshot just to make sure the forwarder could send data to the indexer, and it did. There are several files with the same name spread across several folders, and I only ran oneshot on one of them. It shouldn't affect the other files, should it?
No, it should not. Can the user running Splunk access the directory and files? Check splunkd.log on the forwarder for messages from the TailingProcessor, and/or turn on debugging for it on the forwarder by running this command:
$SPLUNK_HOME/bin/splunk set log-level TailingProcessor -level DEBUG
See the docs for more details http://docs.splunk.com/Documentation/Splunk/6.2.1/Troubleshooting/Enabledebuglogging
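The permission question above (the eventual root cause in this thread) can be sanity-checked with a small script run as the splunk user. This is a sketch, not Splunk tooling: the helper names are mine, and the temp-file demo stands in for the real monitored log path.

```python
import os
import stat
import tempfile

def readable_by_current_user(path: str) -> bool:
    """Return True if the current process can read `path` --
    the same access the forwarder's tailing processor needs."""
    return os.access(path, os.R_OK)

def mode_string(path: str) -> str:
    """Human-readable permission bits, e.g. '-rw-r--r--'."""
    return stat.filemode(os.stat(path).st_mode)

if __name__ == "__main__":
    # Demo on a temporary file; when run as the splunk user, point this
    # at the actual monitored log (e.g. the CMServer.log path) instead.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"test line\n")
        path = f.name
    print(path, mode_string(path), readable_by_current_user(path))
    os.remove(path)
```

Note that the parent directories also need execute (traverse) permission for the splunk user, which os.access on the file itself will reflect; a blanket chmod 777 works but granting read access to the splunk user or group is the safer fix.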
Please send the inputs.conf and outputs.conf details from the forwarder, and the inputs.conf from the indexer.
I have no inputs.conf configuration on the indexer, so I'll only post the .conf configuration residing on the forwarder.
inputs.conf:
[monitor:///tkwl06/fs_users/tcmwl61/J2EEServer/config/ABP-CM61/ABP-CM61-Server/logs/CMServer.log]
sourcetype = ocscm
index = app_ocscm
followTail = 0
host = $decideOnStartup
_TCP_ROUTING = loadloadbalancessl
outputs.conf:
[tcpout:logloadbalancessl]
compressed = true
server = 10.37.0.197:9997
sslCertPath = /apps/splunkforwarder/etc/auth/tselindexer.pem
sslPassword = xxxxxxxx
sslRootCAPath = /apps/splunkforwarder/etc/auth/CoreCA.pem
sslVerifyServerCert = true
I deploy the configuration via the deployment server.