I have a log file in the following format:
Section 1
a1 a
a2 b
Section 2
axx abc
aff def
I want to ignore all data coming after "Section 2" in the log file before indexing it into Splunk. Is there any way to do this in Splunk?
Please note that only a universal forwarder is configured.
There is no way to do this if you want
a1 a
a2 b
to be separate events.
The only approach I can think of that even comes close to what you want is to define your LINE_BREAKER and line-merging options so that each entire section becomes a single event, then use an index-time transform to send the Section 2 event to the nullQueue all at once. But again, that means all of Section 1 would be one large event, and I can't imagine that's what you want.
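A rough sketch of that props/transforms combination (the sourcetype name, stanza name, and section-header regex are assumptions based on the sample above, not tested settings):

```ini
# props.conf (on the parsing tier)
[mysourcetype]
SHOULD_LINEMERGE = true
# break a new event only at a "Section N" header, so each section is one event
BREAK_ONLY_BEFORE = ^Section\s+\d+
TRANSFORMS-dropsec2 = drop_section2

# transforms.conf
[drop_section2]
# any event whose first line is "Section 2" goes to the nullQueue
REGEX = ^Section\s+2
DEST_KEY = queue
FORMAT = nullQueue
```

Note this only works where parsing happens (indexer or heavy forwarder), not on a universal forwarder, and it leaves all of Section 1 as a single multi-line event.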
This seems like a prime case for pre-massaging the data before sending it to Splunk.
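As a minimal sketch of such pre-massaging, a script like the following could truncate the file at the "Section 2" marker before the universal forwarder monitors it (the script name, file path handling, and marker string are illustrative assumptions, not part of Splunk):

```python
# truncate_log.py - drop everything from the "Section 2" marker onward,
# run against the log file before the universal forwarder picks it up.
import sys

def strip_after_marker(lines, marker="Section 2"):
    """Return only the lines that appear before the marker line."""
    kept = []
    for line in lines:
        if line.strip() == marker:
            break  # discard the marker and everything after it
        kept.append(line)
    return kept

if __name__ == "__main__" and len(sys.argv) > 1:
    path = sys.argv[1]
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        f.writelines(strip_after_marker(lines))
```

Rewriting the file in place like this assumes the forwarder has not yet read it; for files being actively tailed, writing the filtered output to a separate monitored file would be safer.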
Is this a normal log file that will contain Section 1 and Section 2 more than once?
e.g.
Section 1 --> Section 2 --> Section 1 --> Section 2 --> Section 1?
If yes, you can forward your data to a so-called heavy forwarder, which is a full installation of Splunk. On that instance you can filter your data before you index it.
https://docs.splunk.com/Documentation/Splunk/6.6.3/Forwarding/Deployaheavyforwarder
For example, stripping lines containing a special keyword.
Setup for transforms.conf
[setnull]
REGEX = \s;\sDefault\s;\s
DEST_KEY = queue
FORMAT = nullQueue
Setup for props.conf
[mysourcetype]
TRANSFORMS-null= setnull
To get this working, we would need to know how the part to be ignored can be selected for deletion.
No, Section 1 and Section 2 will each appear only once.
And the number of rows within Section 1 and Section 2 is dynamic.
That's something a universal forwarder cannot do.
Thank you. Can you tell me if there is an option on the indexer side?