All Apps and Add-ons

Event not complete in Splunk with field extraction

kochera
Communicator

Hi,

We're indexing Java GC logs. Everything is fine except for some Full GC entries in the log file: they do not show up completely.

Raw data

  • 2012-07-27T03:25:46.126+0200: 1836893.272: [GC 724411K->642343K(1213440K), 0.0460068 secs]
  • 2012-07-27T03:25:46.177+0200: 1836893.323: [GC 650962K(1213440K), 0.0225285 secs]
  • 2012-07-27T03:25:46.506+0200: 1836893.653: [GC 759501K->796790K(1213440K), 0.0409750 secs]
  • 1836893.963: [Full GC 796790K->514661K(1213440K), 6.9050747 secs]
  • 2012-07-27T03:25:54.469+0200: 1836901.615: [GC 637541K(1213440K), 0.0977892 secs]
  • 2012-07-27T03:25:54.573+0200: 1836901.719: [GC 637541K->549878K(1213440K), 0.0160404 secs]
  • 2012-07-27T03:26:00.488+0200: 1836907.634: [GC 662704K(1213440K), 0.1738855 secs]

Events in Splunk

  • 2012-07-27T03:25:46.506+0200: 1836893.653: [GC 759501K->796790K(1213440K), 0.0409750 secs]
  • 2012-07-27T03:25:46.177+0200: 1836893.323: [GC 650962K(1213440K), 0.0225285 secs]
  • 2012-07-27T03:25:46.126+0200: 1836893.272: [GC 724411K->642343K(1213440K), 0.0460068 secs]
  • 1836893.963: [Full GC
  • 2012-07-27T03:25:45.605+0200: 1836892.751: [GC 660412K->608290K(1213440K), 0.2086469 secs]
  • 2012-07-27T03:25:45.049+0200: 1836892.195: [GC 650771K->537532K(1213440K), 0.0107272 secs]
  • 2012-07-27T03:25:44.287+0200: 1836891.433: [GC 647165K->527891K(1213440K), 0.0162234 secs]

Two different versions of the field extractions have been configured, and both work fine.

props.conf (Version 1)

[sun_jvm]
BREAK_ONLY_BEFORE = (^\d+)
EXTRACT-gcFields = \[(Full ){0,1}GC\s(?<JVM_HeapUsedBeforeGC>\d+)K\D+(?<JVM_HeapUsedAfterGC>\d+)K\D+(?<JVM_HeapSize>\d+)K\S+\s(?<JVM_GCTimeTaken>\d+.\d+)\s

props.conf (Version 2)

[sun_jvm]
SHOULD_LINEMERGE=TRUE
REPORT-jvm = sun_jvm_gc
TRANSFORMS-changesource = source-drop-date_YYYYMMDD_HHMMSS
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%Q

transforms.conf (Version 2)

[sun_jvm_gc]
REGEX = \[(Full ){0,1}GC\s(?<JVM_HeapUsedBeforeGC>\d+)K->(?<JVM_HeapUsedAfterGC>\d+)K\((?<JVM_HeapSize>\d+)K\),\s(?<JVM_GCTimeTaken>\d+.\d+)\ssecs\]

With both of the above configurations we see the same behaviour. Any idea why the event does not show up properly?

Cheers,
Andy

1 Solution

dwaddle
SplunkTrust

This may be Java's fault. Well, "fault" may be a little harsh, but it is a result of how certain versions of the JVM buffer GC log writes versus Splunk's assumptions of when a file is "done".

When the JVM writes these messages, they don't seem to be line-buffered, so a partial line is possible, with a delay of a few seconds before the rest of the line is written. Splunk reads the log to EOF, sees the partial line, saves it in memory, and sets a timer of some sort. The JVM doesn't write the rest of the line until after the timer expires, so Splunk assumes "that's all of that event" and forwards it on with a "done" flag. The "done" flag acts like an implicit line breaker, which makes for a broken GC log event...

I don't know if there's a way to tune Splunk to "wait longer", or a way to tell the JVM "please line-buffer these". Either would seem to help the issue.
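One knob that might be worth experimenting with on the Splunk side is time_before_close in inputs.conf, which sets how many seconds a monitored file has to stay quiet after EOF before Splunk considers it done; raising it could give the JVM time to finish a partially written line. This is only a sketch, not a confirmed fix, and the monitor path below is an illustrative placeholder:

inputs.conf (sketch)

[monitor:///var/log/app/gc.log]
# Hypothetical path - point this at the actual GC log.
sourcetype = sun_jvm
# Seconds of inactivity after EOF before Splunk closes the file (default is 3).
time_before_close = 10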

Ayn
Legend

These field extractions are applied at search time and as such should not make any difference at all to the events that have actually been indexed. I'm thinking this might rather be an issue with Splunk interpreting the number as a date for some reason and breaking the event improperly. If you do a real-time search while you're sending these kinds of events into Splunk, do you see the "missing" data then? If so, could you check what timestamp has been applied to it?
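A rough sketch of that kind of check, assuming the sourcetype is sun_jvm as in the props.conf stanzas above (run it over a real-time window while the GC log is being written):

sourcetype=sun_jvm "[Full GC"
| eval indexed_time=strftime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z")
| table indexed_time _raw

If indexed_time bears no resemblance to the timestamp in _raw, the event was assigned a wrong or default timestamp at index time.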

kochera
Communicator

Had a look but did not find any clue as to what might be happening to the event. I'll investigate further.


lguinn2
Legend

Also, if these are single-line events, you should probably set

SHOULD_LINEMERGE=false

as Splunk will parse the data more quickly. And good catch, Ayn - I did not notice that the timestamp was missing on the truncated lines.

What happens if you delete the TIME_FORMAT spec?
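A minimal sketch of what such a stanza could look like, assuming every GC entry really is a single line (LINE_BREAKER is shown with its default value):

props.conf (sketch)

[sun_jvm]
# Each line is one event, so no line merging is needed.
SHOULD_LINEMERGE = false
# Default value; events break on newlines.
LINE_BREAKER = ([\r\n]+)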


Ayn
Legend

OK. In that case it looks like you have some kind of index-time transform that changes the raw data before it's indexed. This is a long shot, but what does the source-drop-date_YYYYMMDD_HHMMSS transform do? If it's not relevant at all to this case, I'd look at any other TRANSFORMS or SEDCMD statements that might apply to this data.
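One way to see every props and transforms setting that actually applies to this data, and which file each one comes from, is btool (run from $SPLUNK_HOME/bin; the stanza names are the ones shown earlier in this thread):

splunk btool props list sun_jvm --debug
splunk btool transforms list source-drop-date_YYYYMMDD_HHMMSS --debug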


kochera
Communicator

Same problem with a real-time search:
2012-07-27T09:38:42.837+0200: 231364.383: [Full GC


kochera
Communicator

I'll check it out.
The issue with the missing timestamp has been addressed in Java 1.6.34, so hopefully it will be solved there.
