Hello, I have a big log file that is set to be sourcetype=my_log and it basically looks like this:
--- begin_request ---
blah blah
blah blah
--- end_request ---
--- begin_request ---
blah blah
--- end_request ---
and so on. With the props.conf configuration below, events are correctly split most of the time, but sometimes they are split somewhere in the middle. This happens with both small and large events.
[my_log]
BREAK_ONLY_BEFORE_DATE = False
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = --- begin_request ---
MUST_NOT_BREAK_AFTER = --- begin_request ---
MUST_BREAK_AFTER = --- end_request ---
MAX_EVENTS = 10000
TRUNCATE = 100000
The app that produces my_log uses a buffered logger, so there are random and periodic delays in how log data is flushed to the log file, and it is definitely possible that a single event will not be flushed to disk as a whole. That might be the cause of the strange event splits, but I'm not sure.
Any ideas?
If you are using a buffered logger where there may be delays, this is likely the problem. Splunk closes a file (marking it with a _doneKey) after it has been idle at EOF for a certain time, so a partially flushed event can get split:
http://www.splunk.com/base/Documentation/latest/Admin/Inputsconf
time_before_close = <integer>
* Modtime delta required before Splunk can close a file on EOF.
* Tells the system not to close files that have been updated in past <integer> seconds.
* Defaults to 3.
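If your logger's flush interval can exceed that 3-second default, raising the value for the monitored file may help. A sketch of what that could look like in inputs.conf (the monitor path here is illustrative, not from your setup):

[monitor:///var/log/my_log.log]
sourcetype = my_log
time_before_close = 30

This tells Splunk to wait 30 seconds of inactivity before closing the file, giving the buffered logger time to flush the rest of an event.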
By the way, the following should be sufficient:
[my_log]
BREAK_ONLY_BEFORE = \-\-\-\sbegin
MAX_EVENTS = 10000
TRUNCATE = 100000
No solution to the breaking problem, but have you considered using transactions to achieve the same thing? Have Splunk index the events line by line but group them together as transactions using startswith="--- begin_request ---" and endswith="--- end_request ---".
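A sketch of what such a search might look like, assuming the raw lines are indexed as individual events under your existing sourcetype:

sourcetype=my_log
| transaction startswith="--- begin_request ---" endswith="--- end_request ---"

The transaction command stitches the lines back into one multiline event at search time, so imperfect line breaking at index time matters less; note that transaction searches are slower than searching pre-merged events.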