Getting Data In

What is recommended to prevent broken multiline events caused by delayed output from a command?

rune_hellem
Contributor

About

The log file is overwritten each time, so the MUST_NOT_BREAK_AFTER setting in the current definition does work, but I realize there might be better solutions. The problem is that on 1 out of 5 servers the event gets broken. My guess is that this is caused by a delay in the output, since it is generated by a command.

  • Can I somehow tell the forwarder/indexer to wait for a certain time before reading the output?
  • Or, should I instead define this as a transaction, letting each line be a separate event with the system time as its timestamp?
  • The last option, as I see it, is to let the scheduled script write the log file to a temp folder and only move it to the folder Splunk indexes when it is finished.

Current status
I have created a custom sourcetype like this

[fn:vwtool:loadstatus]
MAX_TIMESTAMP_LOOKAHEAD = 100
MUST_NOT_BREAK_AFTER = -+\w+,\s
SHOULD_LINEMERGE = true
TIME_PREFIX = -+\w+,\s
TRUNCATE = 0
MAX_EVENTS = 10000
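This doesn't cure the delayed write itself, but since every event in this file starts with a dashed timestamp header, an alternative that is often more robust than MUST_NOT_BREAK_AFTER is to break only before that header. A sketch under that assumption (the regex is illustrative and should be tested against the real file):

```ini
[fn:vwtool:loadstatus]
SHOULD_LINEMERGE = true
# Start a new event only at the dashed timestamp line, e.g.
# "---------------------------Mon, 20 Oct 2014 05:00:13----..."
BREAK_ONLY_BEFORE = ^-{5,}\w{3},\s
TIME_PREFIX = ^-+
MAX_TIMESTAMP_LOOKAHEAD = 100
TRUNCATE = 0
MAX_EVENTS = 10000
```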

An event looks like this

---------------------------Mon, 20 Oct 2014 05:00:13---------------------------
vwtool : FILENETPE11 [Server (DbDBType.Oracle Blob 1 MB) -- {pe460.000.1010.101} en ]
Outputting to file 'd:\logs\vwtool\loadstatus_region3.log' and the terminal
<vwtool:3>[ For Region 3 from: Thu, 16 Oct 2014 18:46:05, To: Mon, 20 Oct 2014 05:00:13 ]
[ Total seconds:    296048, minutes:   4934.13, hours:     82.24 ]
                                             Total   Average  Average
                                             Count   Per Min  Per Hour
# Executed Regular Steps:                    49555     10.04    602.60
# Executed System Steps:                    115642     23.44   1406.23
# Java RPCs:                                     0      0.00      0.00
# Object Service RPCs:                           0      0.00      0.00
# Work Object Inject RPCs:                    6327      1.28     76.94
# Queue Query RPCs:                        2358261    477.95  28676.90
# Roster Query RPCs:                         22258      4.51    270.66
# Lock Work Object RPCs:                     22710      4.60    276.16
# Update Work Object RPCs:                   50099     10.15    609.21
# Invoke Web Service Instructions:               0      0.00      0.00
# Receive Web Service Instructions:              0      0.00      0.00

# Lock work object errors:                      61      0.01      0.74
# email notification errors:                     0      0.00      0.00
# Transaction deadlock errors:                   0      0.00      0.00
# Database reconnect:                            0      0.00      0.00
# Timer manager update errors:                 660      0.13      8.03
# Work objects skip due to sec errors:           0      0.00      0.00
# Exceed the Work Space Cache:                   0      0.00      0.00
# Exceed the Isolated Region Cache:              0      0.00      0.00
# Authentication errors:                         0      0.00      0.00
# Authentication token timeouts:                66      0.01      0.80
<vwtool:3>Output to file turned off
---------------------------Mon, 20 Oct 2014 05:00:14---------------------------
1 Solution

rune_hellem
Contributor

I did solve it by updating the script. It now creates the log file in a temporary location first; once the command has finished executing and everything has been logged, the file is moved to its permanent location. This solved my issue, and all events are now indexed as one.

Best practice? Not sure, but it solved my problem.


gpullis
Communicator

Well, this sure worked when nothing else I tried did. Until someone comes up with a better answer, this is how it's done! 😉


sowings
Splunk Employee

Is this output coming from a command being run on some schedule? Could Splunk be tasked with running the command to retrieve the results directly?


rune_hellem
Contributor

The output is coming from a scheduled task run on the server. Reading between the lines, are you suggesting a scripted input? The script used today is a PowerShell script.
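For reference, a scripted input would let Splunk run the command on its own schedule and index stdout directly, removing the intermediate file entirely. A minimal inputs.conf sketch — the app name, wrapper script, and interval below are assumptions, not from this thread (on Windows, a .cmd wrapper would typically invoke the PowerShell script):

```ini
# Hypothetical app layout; adjust paths to your deployment.
[script://$SPLUNK_HOME\etc\apps\vwtool\bin\loadstatus.cmd]
interval = 86400
sourcetype = fn:vwtool:loadstatus
index = main
disabled = 0
```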
