Monitoring Splunk

Index Service-Now MID server logs

las
Contributor

Hi.

I'm indexing the log from the ServiceNow MID Server. It should be nice and easy: most lines start with a timestamp, and a few are multiline events.

04/10/19 11:50:24 (365) LogStatusMonitor.60 stats threads: 67, memory max: 910.0mb, allocated: 402.0mb, used: 29.0mb, standard.queued: 0 probes, standard.processing: 0 probes, expedited.queued: 0 probes, expedited.processing: 0 probes, interactive.queued: 0 probes, interactive.processing: 0 probes
04/10/19 11:51:24 (373) LogStatusMonitor.60 stats threads: 67, memory max: 910.0mb, allocated: 402.0mb, used: 29.0mb, standard.queued: 0 probes, standard.processing: 0 probes, expedited.queued: 0 probes, expedited.processing: 0 probes, interactive.queued: 0 probes, interactive.processing: 0 probes
04/10/19 11:52:24 (406) LogStatusMonitor.60 stats threads: 67, memory max: 910.0mb, allocated: 402.0mb, used: 30.0mb, standard.queued: 0 probes, standard.processing: 0 probes, expedited.queued: 0 probes, expedited.processing: 0 probes, interactive.queued: 0 probes, interactive.processing: 0 probes
04/10/19 11:52:45 (197) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Worker starting: JDBCProbe source: 7f753a46db16a740c53f6c16ca9619a9
04/10/19 11:52:45 (275) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Waiting for connection semaphore
04/10/19 11:52:45 (275) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Obtained connection semaphore
04/10/19 11:52:45 (275) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af SELECT DISTINCT pco.PCO_Ref, pco.Act_Plan, pco.Description, pco.Impact, pco.Prioritet, pco.risk, pco.Txt_type, convert(varchar(MAX), pco.Txt) as Txt, pco.Urgency, status.CloseType, status.PCOClose, status.PCOPlan2, status.PCOrelease3, act.SenesteStart, act.SenesteEnd
FROM "PobPco"."dbo"."PCO" pco
join "PobPco"."dbo"."PCO_status" status on pco.PCO_Ref = status.PCO_Ref
FULL outer join "PobPco"."dbo"."Act" act on pco.PCO_Ref = act.PCO_Ref and (act.Sequence='430' OR act.Sequence='491')
WHERE (pco.Act_Plan='LEV_UDK_LEVERANCESITE' OR pco.Act_Plan='APPL_TWS_INITIERET' OR pco.Act_Plan='TEK_GENEREL_WEB')
04/10/19 11:53:24 (392) LogStatusMonitor.60 stats threads: 67, memory max: 910.0mb, allocated: 402.0mb, used: 30.0mb, standard.queued: 0 probes, standard.processing: 1 probes, expedited.queued: 0 probes, expedited.processing: 0 probes, interactive.queued: 0 probes, interactive.processing: 0 probes

On the universal forwarder there's just an inputs.conf that sets the sourcetype and index.
On the indexer I have a props.conf with this content:

[snow:mid]
SHOULD_LINEMERGE = true
NO_BINARY_CHECK = true
CHARSET = AUTO
disabled = false
TIME_FORMAT = %m/%d/%y %H:%M:%S
category = Custom
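As an aside, Splunk's TIME_FORMAT uses strptime-style codes, so the format string above can be sanity-checked against a real log timestamp outside Splunk. A minimal, illustrative Python check (not part of the Splunk setup):

```python
from datetime import datetime

# Illustrative only: parse a timestamp from the sample log with the same
# strptime format string used as TIME_FORMAT in props.conf above.
ts = datetime.strptime("04/10/19 11:50:24", "%m/%d/%y %H:%M:%S")
print(ts.isoformat())  # 2019-04-10T11:50:24
```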

My problem is that, for some reason, Splunk doesn't break the events correctly:

    10/04/2019
11:53:24.000    
04/10/19 11:53:38 (565) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Enqueuing: D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\work\monitors\ECCSender\output_s\ecc_queue.16a06aacf050000001.xml
04/10/19 11:53:38 (596) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Enqueuing: D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\work\monitors\ECCSender\output_s\ecc_queue.16a06aacf140000001.xml
04/10/19 11:53:38 (611) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Enqueuing: D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\work\monitors\ECCSender\output_s\ecc_queue.16a06aacf240000001.xml
04/10/19 11:53:38 (627) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Enqueuing: D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\work\monitors\ECCSender\output_s\ecc_queue.16a06aacf430000001.xml
04/10/19 11:53:38 (643) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Enqueuing: D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\work\monitors\ECCSender\output_s\ecc_queue.16a06aacf530000001.xml
04/10/19 11:53:38 (643) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Enqueuing: D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\work\monitors\ECCSender\output_s\ecc_queue.16a06aacf530000002.xml
04/10/19 11:53:38 (643) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af Worker completed: JDBCProbe source: 7f753a46db16a740c53f6c16ca9619a9 time: 0:00:53.368
04/10/19 11:53:38 (880) ECCSender.1 Sending ecc_queue.16a06aacf050000001.xml
04/10/19 11:53:38 (989) ECCSender.1 Sending ecc_queue.16a06aacf140000001.xml
04/10/19 11:53:39 (098) ECCSender.1 Sending ecc_queue.16a06aacf240000001.xml
04/10/19 11:53:39 (192) ECCSender.1 Sending ecc_queue.16a06aacf430000001.xml
04/10/19 11:53:39 (270) ECCSender.1 Sending ecc_queue.16a06aacf530000001.xml
04/10/19 11:53:39 (364) ECCSender.1 Sending ecc_queue.16a06aacf530000002.xml
Collapse
host =  ATP-34UMID02 source =   D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\logs\agent0.log.0 sourcetype =    snow:mid
10/04/2019
11:53:24.000    
04/10/19 11:53:24 (392) LogStatusMonitor.60 stats threads: 67, memory max: 910.0mb, allocated: 402.0mb, used: 30.0mb, standard.queued: 0 probes, standard.processing: 1 probes, expedited.queued: 0 probes, expedited.processing: 0 probes, interactive.queued: 0 probes, interactive.processing: 0 probes
host =  ATP-34UMID02 source =   D:\ServiceNow\MID_Server\atp-34umid02_dev01\agent\logs\agent0.log.0 sourcetype =    snow:mid
1 Solution

toph3r
Explorer

I am about to onboard SNow Mid logs in my env, so I will try to remember to update after I complete that.

But based on your logs, it looks like Splunk is trying to auto-break on a second date that it recognizes. Notice the difference between the event-start datestamp and the datestamp on your line 18:
04/10/19 vs. 10/04/2019. The two-digit vs. four-digit year probably means that something else is generating that datestamp, possibly in the middle of an event. Though it is difficult to tell exactly what your second text box is the output from.

I would define my LINE_BREAKER based on the datestamp that leads your event. I may end up changing it, but based on your first log block I would start with:

[snow:mid]
NO_BINARY_CHECK = true
CHARSET = AUTO
TIME_FORMAT = %m/%d/%y %H:%M:%S
SHOULD_LINEMERGE = FALSE
LINE_BREAKER=([\r\n]+)\d{2}\/\d{2}\/\d{2}\s\d{2}:\d{2}:\d{2}
TIME_PREFIX=^
MAX_TIMESTAMP_LOOKAHEAD=30
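A rough way to sanity-check that LINE_BREAKER regex outside Splunk is to emulate it in plain Python. This is illustrative only, not how Splunk itself runs: Splunk discards the text matched by the first capture group and starts the next event at the timestamp, so a lookahead mimics that behavior here. The sample is trimmed from the log lines in the question.

```python
import re

# Emulate Splunk's LINE_BREAKER: break on newlines only when the next line
# starts with a MM/DD/YY HH:MM:SS timestamp (lookahead keeps the timestamp
# as part of the next event, as Splunk does).
LINE_BREAKER = re.compile(r"([\r\n]+)(?=\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2})")

sample = (
    "04/10/19 11:52:45 (275) Worker-Standard SELECT DISTINCT pco.PCO_Ref\n"
    'FROM "PobPco"."dbo"."PCO" pco\n'
    'join "PobPco"."dbo"."PCO_status" status on pco.PCO_Ref = status.PCO_Ref\n'
    "04/10/19 11:53:24 (392) LogStatusMonitor.60 stats threads: 67\n"
)

# re.split also returns the captured newline groups; drop them.
events = [part for part in LINE_BREAKER.split(sample) if part.strip()]
print(len(events))  # 2: the multiline SQL message stays one event
```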


las
Contributor

This props.conf seems to work:

[snow:mid]
NO_BINARY_CHECK = true
CHARSET = AUTO
TIME_FORMAT = %m/%d/%y %H:%M:%S
TIME_PREFIX=^
MAX_TIMESTAMP_LOOKAHEAD=30
category=Custom

Thank you very much


las
Contributor

I had a few issues, but now it seems to work:

[snow:mid]
NO_BINARY_CHECK = true
CHARSET = AUTO
TIME_FORMAT = %m/%d/%y %H:%M:%S
TIME_PREFIX=^
TZ = Europe/Copenhagen
MAX_TIMESTAMP_LOOKAHEAD=30
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)
category=Custom

Kind regards


toph3r
Explorer

You may want to double-check your line breaking. The props.conf docs page says: "* When using LINE_BREAKER to delimit events, SHOULD_LINEMERGE should be set to false, to ensure no further combination of delimited events occurs."
SHOULD_LINEMERGE=true
LINE_BREAKER=([\r\n]+)

SHOULD_LINEMERGE says to try to merge lines together, but LINE_BREAKER=([\r\n]+) says to break the event on every line.

The test will be to check the SQL message (lines 7-11 in your original post); these lines should be interpreted as one event:
04/10/19 11:52:45 (275) Worker-Standard:JDBCProbe-69923714db3cf300e6067742399619af SELECT DISTINCT pco.PCO_Ref, pco.Act_Plan, pco.Description, pco.Impact, pco.Prioritet, pco.risk, pco.Txt_type, convert(varchar(MAX), pco.Txt) as Txt, pco.Urgency, status.CloseType, status.PCOClose, status.PCOPlan2, status.PCOrelease3, act.SenesteStart, act.SenesteEnd
FROM "PobPco"."dbo"."PCO" pco
join "PobPco"."dbo"."PCO_status" status on pco.PCO_Ref = status.PCO_Ref
FULL outer join "PobPco"."dbo"."Act" act on pco.PCO_Ref = act.PCO_Ref and (act.Sequence='430' OR act.Sequence='491')
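A hypothetical way to see the conflict is to compare the two breakers on that SQL message in plain Python (illustrative only; the sample is trimmed from the lines above). LINE_BREAKER=([\r\n]+) breaks on every newline, so the multiline query becomes several fragments, while a timestamp-anchored breaker keeps it as one event:

```python
import re

# Per-line breaker, as in LINE_BREAKER=([\r\n]+): breaks on every newline.
per_line = re.compile(r"([\r\n]+)")
# Timestamp-anchored breaker: breaks only before a MM/DD/YY HH:MM:SS line.
anchored = re.compile(r"([\r\n]+)(?=\d{2}/\d{2}/\d{2}\s\d{2}:\d{2}:\d{2})")

sql_event = (
    "04/10/19 11:52:45 (275) Worker-Standard SELECT DISTINCT pco.PCO_Ref\n"
    'FROM "PobPco"."dbo"."PCO" pco\n'
    'join "PobPco"."dbo"."PCO_status" status on pco.PCO_Ref = status.PCO_Ref\n'
    'FULL outer join "PobPco"."dbo"."Act" act on pco.PCO_Ref = act.PCO_Ref'
)

naive = [p for p in per_line.split(sql_event) if p.strip()]
whole = [p for p in anchored.split(sql_event) if p.strip()]
print(len(naive))  # 4 fragments, one per physical line
print(len(whole))  # 1 event: the whole query stays together
```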

Here is what I see as correct line breaking with the original config I posted. Was there something that was not working? It appears to work in the GDE app; I can't figure out how to get a decent-sized pic, but check out https://i.imgur.com/bCk91rM.png


las
Contributor

Sorry, I should have been clearer about the second text box.
It is a screen copy from Splunk:
Lines 1 and 2 are the Splunk timestamp as extracted from the log
Lines 3 through 15 are the Splunk event
Line 16 ("Collapse") is a Splunk UI control
Line 17 is Splunk's info on the event (host, source, and sourcetype)
Lines 18 and 19 are the Splunk timestamp as extracted from the log
Line 20 is the event
Line 21 is Splunk's info on the event (host, source, and sourcetype)

I will try your config and get back with the result.

Kind regards
