Hi Team,
I have a Splunk forwarder on a Linux WebSphere machine. An archival script triggers daily at a designated time; it rotates the Splunk log and archives it (.gz). Since I have set up real-time forwarding of the data, the Splunk process is reading the log file continuously, so when the archival script fires and tries to gzip the file, it creates a race condition that causes the Splunk process to stop abruptly. I then have to restart the Splunk process manually.
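For reference, the archival step can be sketched roughly as below. This is a minimal illustration only: the function name, path, and timestamp format are assumptions (the timestamp format is inferred from the .201506180100 suffix in the log snippet), and the real script may differ.

```shell
#!/bin/sh
# Hypothetical sketch of the daily archival step (names and paths assumed).
rotate_and_gzip() {
    log="$1"                      # e.g. /logs/spswbsvc/spssplunklog.log
    stamp=$(date +%Y%m%d%H%M)     # produces a suffix like .201506180100

    mv "$log" "$log.$stamp"       # rotate the live log aside
    touch "$log"                  # recreate it so the application keeps writing

    # Splunk's tailing processor may still hold the rotated file open here;
    # gzipping it at this moment is where the race condition shows up.
    gzip "$log.$stamp"
}
```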
Please find the log snippet below:
06-18-2015 01:00:55.194 -0700 INFO WatchedFile - Will begin reading at offset=35869164 for file='/logs/spswbsvc/spssplunklog.log.201506180100'.
06-18-2015 01:00:56.154 -0700 INFO WatchedFile - Logfile truncated while open, original pathname file='/logs/spswbsvc/spssplunklog.log', will begin reading from start.
06-18-2015 01:00:57.172 -0700 WARN FileClassifierManager - Unable to open '/logs/spswbsvc/spssplunklog.log.201506180100'.
06-18-2015 01:00:57.172 -0700 WARN FileClassifierManager - The file '/logs/spswbsvc/spssplunklog.log.201506180100' is invalid. Reason: cannot_read
06-18-2015 01:00:57.192 -0700 INFO TailingProcessor - Ignoring file '/logs/spswbsvc/spssplunklog.log.201506180100' due to: cannot_read
06-18-2015 01:00:57.193 -0700 ERROR WatchedFile - About to assert due to: destroying state while still cached: state=0x0x7f9b71f4d0c0 wtf=0x0x7f9b71c7fc00 off=0 initcrc=0xb8098a8b758746ea scrc=0x0 fallbackcrc=0x0 last_eof_time=1434614455 reschedule_target=0 is_cached=343536 fd_valid=true exists=true last_char_newline=true on_block_boundary=true only_notified_once=false was_replaced=true eof_seconds=3 unowned=false always_read=false was_too_new=false is_batch=true name="/logs/spswbsvc/spssplunklog.log.201506180100"
Is there any solution whereby Splunk can keep reading the log file while it is being gzipped?
Regards
Sriram