Deployment Architecture

How do you fix this? ERROR TailingProcessor - File will not be read, is too small to match seekptr checksum (file=/var/bsm/20171021.bsm.log)

Hemnaath
Motivator

Hi all, we currently have an issue reported by a user: he is unable to see current data in Splunk. When we checked in Splunk, we could see data indexed only until 10:30 AM yesterday from the remote machine, under the path /var/bsm.

Inputs.conf details:
[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
disabled = 0

When we checked the remote machine test01 under the path /var/bsm/, from which Splunk reads the files, we could see that the log files below are present, but Splunk is not reading them.

/var/bsm/20171023.bsm.log
/var/bsm/20171024.bsm.log

By executing the query below, we found the following errors in splunkd.log:

index="_internal" host="test01*" log_level=ERROR

10-24-2017 11:51:28.311 -0400 ERROR TailingProcessor - File will not be read, is too small to match seekptr checksum (file=/var/bsm/20171021.bsm.log). Last time we saw this initcrc, filename was different. You may wish to use a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

10-24-2017 11:51:27.088 -0400 ERROR TailingProcessor - File will not be read, is too small to match seekptr checksum (file=/var/bsm/20171024.bsm.log). Last time we saw this initcrc, filename was different. You may wish to use a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.

Kindly let me know how to fix this issue.

0 Karma
1 Solution

gjanders
SplunkTrust

Most likely the contents of the log file are repetitive, or the file includes a long header of data that is the same between the two files mentioned.

If you refer to the inputs.conf documentation, you can adjust:

crcSalt = <SOURCE>

However, a nicer way to fix this might be:

initCrcLength = <a larger value>

It all depends on the contents of the log file: by default the first 256 bytes are read, so if they are the same between the two log files, that can trigger this error...
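
For reference, a minimal sketch of how the monitor stanza from the question might look with each option. The initCrcLength value of 1024 is only an illustrative assumption; it should be sized to cover the actual repeated header in these files.

Option A - salt the checksum with the full file path:

[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
crcSalt = <SOURCE>
disabled = 0

Option B - hash a longer portion of the file header:

[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
initCrcLength = 1024
disabled = 0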


0 Karma

Hemnaath
Motivator

Hi garethatiag, thanks for your effort on this. As I had commented, we have more than 200 servers and the issue is happening only on one particular server, test01. Will updating the stanza below have any impact on the other nodes, which are performing fine?

[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
initCrcLength = 256
disabled = 0

I also checked with the user, and he informed me that he had manually copy-pasted some of the missing log files ("20171010.bsm.log", "20171011.bsm.log", "20171015.bsm.log") into /var/bsm. I suspect this might be the reason behind the ERROR, but I am not sure.

Kindly advise whether we can push the above-mentioned stanza to all the nodes via the deployment server.
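
(For illustration, one common way to scope such a change to a single host via the deployment server is a dedicated serverclass with a whitelist. The sketch below uses made-up serverclass and app names; only the hostname and the monitored path come from this thread.)

serverclass.conf on the deployment server:

[serverClass:bsm_crc_fix]
whitelist.0 = test01

[serverClass:bsm_crc_fix:app:bsm_crc_fix_inputs]
restartSplunkd = true

The hypothetical bsm_crc_fix_inputs app would carry only the modified inputs.conf stanza, so the remaining 200+ nodes keep receiving the unchanged configuration.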

0 Karma

gjanders
SplunkTrust

"Hey I had checked with the user and he informed that he had manually copy pasted some of missing logs files "20171010.bsm.log,20171011.bsm.log, 20171015.bsm.log" under this location /var/bsm and I doubt this might be reason behind this ERROR, but not sure.
"

That would be a valid reason for this error to appear. The forwarder is reporting that a log file has appeared with that particular name, but it has already seen the contents before and will therefore not re-read this particular file.

If the aim is to re-ingest the files into Splunk, then a command like oneshot would be more appropriate.

Hemnaath
Motivator

Hi garethatiag, thanks for your effort on this. Our first priority is to make Splunk ingest the data from the source "/var/bsm/20171024.bsm.log", as the user has reported that he is unable to see the data in the Splunk console. Can we deploy the stanza below to this remote host?

Will it fix the issue? Also, the stanza should not affect the other nodes, which are already monitoring the file properly.

[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
initCrcLength = 256
disabled = 0

Kindly guide me on how to make Splunk start reading the logs from this source.

0 Karma

gjanders
SplunkTrust

If you are saying that someone copied the file there, there is no need to change the inputs.conf file.
initCrcLength = 256 is the default; if you had repetitive data in the header (for example, if each file always started with 300 bytes of data that are identical between files), then you would increase initCrcLength to, say, 512 bytes or similar...

0 Karma

Hemnaath
Motivator

Hi Garethatiag, thanks for your effort on this.

My exact problem: currently, data is not being ingested into Splunk from the node test01, and we can see "ERROR TailingProcessor - File will not be read" in splunkd.log.
The current inputs.conf stanza configured to read the file from all the remote nodes is:

[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
disabled = 0

So, by including initCrcLength = 256 in the inputs.conf stanza, will Splunk be able to read the file from this node? Kindly guide me on this.

thanks in advance.

0 Karma

gjanders
SplunkTrust

The initCrcLength = 256 will not trigger the ingestion.
Adding:

crcSalt = <SOURCE>

into your inputs.conf will result in any file that is renamed being re-indexed again, if that is what you wish to do.
If you wanted to ingest the file just once on a single server you could use:

splunk add oneshot <filename> -sourcetype <your sourcetype goes here>

But that will only send the file in once, as a one-off.

0 Karma

Hemnaath
Motivator

Hi garethatiag, thanks for your effort again. We have the below list of files to be ingested on this particular node; this data is currently missing in Splunk for the node test01, and the user wants it ingested into Splunk.

/var/bsm/
-rw-r--r-- 1 root other 3559842 Oct 24 23:30 20171024.bsm.log
-rw-r--r-- 1 root other 3476906 Oct 25 23:30 20171025.bsm.log
-rw-r--r-- 1 root other 3442653 Oct 26 23:30 20171026.bsm.log

So I need to log in to the remote server test01, go to the path /opt/splunkforwarder/bin, and execute the below commands to add the missing data in Splunk.

./splunk add oneshot 20171024.bsm.log - sourcetype = unix:host:bsm

./splunk add oneshot 20171025.bsm.log - sourcetype = unix:host:bsm

./splunk add oneshot 20171026.bsm.log - sourcetype = unix:host:bsm

Note: we can see the current day's data in Splunk ("/var/bsm/20171027.bsm.log").

kindly guide me on this please

0 Karma

gjanders
SplunkTrust
/opt/splunkforwarder/bin/splunk add oneshot /var/bsm/20171024.bsm.log -sourcetype unix:host:bsm

Should work...
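
One hedged aside that may be worth checking: the monitor stanza routes this sourcetype to index = unix, while add oneshot sends data to the default index unless an index is specified, so an extra -index argument may be needed if the events should land in the unix index. A sketch of the command plus a quick verification search, using only names already present in this thread:

/opt/splunkforwarder/bin/splunk add oneshot /var/bsm/20171024.bsm.log -sourcetype unix:host:bsm -index unix

index="unix" sourcetype="unix:host:bsm" source="/var/bsm/20171024.bsm.log" | head 5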

0 Karma

Hemnaath
Motivator

Hi garethatiag, thanks for your effort on this. I am again having an issue ingesting data into Splunk: I can see that the data from 28 Oct 2017 to 30 Oct 2017 is missing in Splunk.

When checked in splunkd.log, we could see the below errors:
1) ERROR TailingProcessor - File will not be read
2) "ERROR TcpOutputFd - Connection to host=168.x.x.x:9997 failed" - this is the indexer host instance.

We are not sure why Splunk is behaving like this, so kindly guide me on how to fix this issue.

0 Karma

gjanders
SplunkTrust

The second error is a new question; it is likely that your indexer is having an issue... is it up and running?
The queues / data ingestion could be blocked...
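
As a sketch of how one might check those two possibilities, assuming the standard _internal metrics.log data and the host naming already used in this thread:

index="_internal" host="test01*" sourcetype=splunkd log_level=ERROR ("TcpOutputFd" OR "TailingProcessor")

index="_internal" host="test01*" source="*metrics.log" group=queue blocked=true | stats count by name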

0 Karma

Hemnaath
Motivator

Hi garethatiag, thanks for your effort. All five indexer nodes were up and running fine; we had checked them.
But today we can see data being ingested into Splunk from the remote host test01.

There is no ERROR from this node when we execute the Splunk query "index="_internal" host="Test01*" sourcetype=splunkd log_level=ERROR".

Kindly let me know how to figure out what is going wrong with this node, why there are intermittent failures of data ingestion, and why we get the "ERROR TailingProcessor - File will not be read" message so frequently for this node.

0 Karma

gjanders
SplunkTrust

I think you might be better served here by logging a support case with Splunk support rather than relying on the community support website...

0 Karma

Hemnaath
Motivator

Hi Garethatiag, thanks for your effort on this. I have raised a support ticket; as it's not a P3 issue, it will take a few more days to fix.

0 Karma

Hemnaath
Motivator

Hi Garethatiag, the issue was fixed after implementing the below stanza in inputs.conf:

[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
crcSalt = <SOURCE>
disabled = 0

Hemnaath
Motivator

Hi Garethatiag, can you please guide me on this issue: can we deploy the below inputs.conf stanza to all the remote nodes?

Inputs.conf
[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
initCrcLength = 256
disabled = 0

thanks in advance.

0 Karma

ddrillic
Ultra Champion

Google it ;-) : "Last time we saw this initcrc, filename was different. You may wish to use a CRC salt"

0 Karma

Hemnaath
Motivator

Hi ddrillic, thanks for your effort on this. Yes, I had searched Google and got the below link from answers.com.
The below inputs.conf stanza is configured on more than 200 servers, but we have an issue only with one node, test01. So I am a little confused about whether we can add the crcSalt = <SOURCE> setting to the below inputs.conf stanza, as this change would be applied to all 200+ nodes, which are working fine.
Kindly guide me on this.

Inputs.conf details:
[monitor:///var/bsm]
sourcetype = unix:host:bsm
index = unix
disabled = 0

0 Karma