Getting Data In

What is the best way to monitor large log files?

gaurav_maniar
Builder

Hi Team,

What is the best way to monitor large rolling log files?

As of now, I have the following configuration to monitor the files (there are 180+ log files):

 

[monitor:///apps/folders/.../xxx.out]
index=app_server

 

At the end of the month, the log files are deleted and new log files are created by the application.

The issue is that the log files grow to 20 GB+ in size by the end of the month.

Recently, when we migrated the server, we started getting the following error for some of the log files:

 

12-02-2020 19:03:58.335 +0530 ERROR TailReader - File will not be read, is too small to match seekptr checksum (file=/xxx/xxx/xxx/xxx.out). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source. Consult the documentation or file a support case online at http://www.splunk.com/page/submit_issue for more info.
WARN  TailReader - Enqueuing a very large file=<hadoop large file> in the batch reader, with bytes_to_read=4981188783, reading of other large files could be delayed

 

I tried "crcSalt = <SOURCE>" option as well, there is no. difference.

Please suggest what configuration I should use to monitor the log files in this case.

Thanks.
