Hi Team,
What is the best way to monitor large rolling log files?
Currently I have the following configuration to monitor the files (there are 180+ log files):
[monitor:///apps/folders/.../xxx.out]
index=app_server
At the end of the month, the log files are deleted and new log files are created by the application.
The issue is that the log files grow to 20 GB+ by the end of the month.
Recently, when we migrated the server, we started getting the following error for some of the log files:
12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
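For reference, my understanding is that the initCrcLen setting mentioned in the error goes in the monitor stanza in inputs.conf, something like the below (1024 is only an example value; I believe the default is 256 bytes):
[monitor:///apps/folders/.../xxx.out]
index = app_server
initCrcLen = 1024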
WARN TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed
I tried the "crcSalt = &lt;SOURCE&gt;" option as well, but it made no difference.
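For clarity, this is the stanza I tried:
[monitor:///apps/folders/.../xxx.out]
index = app_server
crcSalt = <SOURCE>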
Please suggest what configuration I should use for monitoring the log files in this scenario.
Thanks.