Getting Data In

Long Multiline events not breaking correctly

Genti
Splunk Employee

Customer has a log file that is failing to break correctly:
Some of the events in the file are single line events. Others are multiline.

Customer was using SHOULD_LINEMERGE = true and BREAK_ONLY_BEFORE = , and this was only partially working: it handled the single-line events correctly, but not the multiline ones.

I changed to and tested a different props.conf configuration with SHOULD_LINEMERGE = false and LINE_BREAKER = .
This new configuration worked for roughly 98% of the log file; however, there were still a few events that would not break correctly.
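For reference, the two approaches look roughly like this in props.conf. The sourcetype name and the regex values below are placeholders, since the actual patterns are not shown in this post; substitute the pattern that matches the start of your events:

```ini
# Approach 1: line merging (worked only for single-line events)
[my:sourcetype]
SHOULD_LINEMERGE = true
# placeholder regex -- the real pattern should match the start of each event
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}

# Approach 2: explicit line breaking (worked for ~98% of events)
[my:sourcetype]
SHOULD_LINEMERGE = false
# capture group 1 is consumed as the event boundary; placeholder pattern
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
```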

The events for which this happens are very large events, with more than 400 lines. What can I do to make sure that these events also break correctly?

1 Solution

Genti
Splunk Employee

After much testing of the regex (to make sure that it was not at fault), the only thing left to try was to find out exactly how big these events were.
Looking at splunkd.log and searching for the relevant string (where it says that Splunk was not able to parse the event at ), I was able to identify the parts of the log where this was happening.
The event itself is about 800-900 lines, and what is happening is that it is breaking approximately every 300 lines.

There is one attribute left to use in these cases: TRUNCATE. This makes sure the event doesn't get truncated at the default size.

From the docs we have:

TRUNCATE = <non-negative integer>

    * Change the default maximum line length.
    * Set to 0 if you never want truncation (very long lines are, however, often a sign of garbage data).
    * Defaults to 10000. 

Setting TRUNCATE = 0 and restarting made the log file finally break correctly.
NOTE: it appears the default of 10,000 is not lines but characters.
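Putting it together, the working stanza is along these lines. The sourcetype name and the LINE_BREAKER pattern are placeholders (the actual regex is not shown in this post); TRUNCATE = 0 is the key addition:

```ini
[my:sourcetype]
SHOULD_LINEMERGE = false
# placeholder boundary regex -- use the pattern that already breaks 98% of events
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
# disable the default 10,000-character limit so very large events are not split
TRUNCATE = 0
```

Note that with TRUNCATE = 0 a runaway event can grow without bound, so only use it when you trust the data source.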

Hope this helps someone out there, as I had a bit of a hard time finding out what was not working correctly.
Cheers,
.gz


gkanapathy
Splunk Employee

Correct, TRUNCATE does not measure lines. When you use SHOULD_LINEMERGE = false, then every event is a single "line", so counting these "lines" would not be useful.
