Getting Data In

Line breaks in logs vary depending on job - how to break them with props.conf?

gnovak
Builder

I haven't seen an example of this so far so I'm going to ask.

I have Backup Exec 10 with a daily job and a weekly job scheduled. The thing is, no matter which job runs, the file name is always something like BEX_HOSTNAME_00683.XML. Every time a job runs (whether weekly or daily), the number increments and a new log file is written.

I have a batch file that runs and converts these XML log files to text. The contents are appended to one log called "backup.txt", which is indexed by Splunk.

So here's my problem. I already have a props.conf entry that merges the whole daily backup log into one event using line-breaking rules. This works great for the daily job. The weekly job, however, is a different story: Splunk chops it up into 9 separate events.

Can I use props.conf entries more than once on the same sourcetype? I'm not sure of the best way to go about this.

Here's some info: This is what a daily backup log looks like:

    (04/26/13 06:00:00):bemcmd  -o31 -la"C:\Users\Administrator\Desktop\Backup Logs\backuplog.txt" s0 -f"C:\Program Files\Symantec\Backup Exec\Data\BEX_TAPEBACKUP_00106.xml"
Server name : TAPEBACKUP
Job name    : Database Maintenance
Job log     : C:\Program Files\Symantec\Backup Exec\Data\BEX_TAPEBACKUP_00106.xml
Device name : BEDB
Summary of database maintenance activity:
* Saved contents of BEDB database
* Deleted expired data for BEDB database:
     0 expired audit logs were deleted
     0 expired reports were deleted
     0 expired job histories were deleted
     2 expired alert histories were deleted
     2 expired job logs were deleted
Job started     : Friday, April 26, 2013 4:00:00 AM
Job ended       : Friday, April 26, 2013 4:00:01 AM
Completed status: Successful
RETURN VALUE: 1

And here's the props.conf entry that makes this look perfect in Splunk as one event. Originally it was being split into about 3 events:

[sym_backup]
MUST_NOT_BREAK_AFTER = \d\sexpired\sjob\slogs\swere\sdeleted
MUST_BREAK_AFTER = RETURN\sVALUE:\s\d
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = true

The weekly backup text is a lot longer, and Splunk splits it every time it sees a timestamp.

Can I have multiple MUST_NOT_BREAK_AFTER entries under the same sourcetype in props.conf? The lines that I would use for the weekly backup are not present at all in the daily backup log.

For example, I tried adding another line to props.conf for the weekly backup data, but it didn't work. One of the unwanted breaks comes right after the line in the logs that says "Job name: Incremental Backup".

MUST_NOT_BREAK_AFTER = Job\sname:\sIncremental\sBackup
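
From what I've read, if the same attribute appears twice in one stanza, only the last value takes effect, so a second MUST_NOT_BREAK_AFTER line would just silently override the first. If that's right, maybe the two patterns could be combined into a single regex with alternation, something like this (untested):

[sym_backup]
MUST_NOT_BREAK_AFTER = (\d\sexpired\sjob\slogs\swere\sdeleted|Job\sname:\sIncremental\sBackup)
MUST_BREAK_AFTER = RETURN\sVALUE:\s\d
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = true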

I'm not sure how to go about this or if it even makes sense. Any ideas here? If all backup data, whether weekly or daily, is indexed under the same sourcetype but formatted differently, how can Splunk tell the difference? Can you use multiple props.conf entries, perhaps?

I saw that LINE_BREAKER takes a pattern to match, but I don't know if that's workable in this situation.
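
I also noticed BREAK_ONLY_BEFORE in the props.conf docs. Since every job log, daily or weekly, starts with the same (MM/DD/YY HH:MM:SS):bemcmd line, maybe telling Splunk to break only before that pattern would handle both formats. A rough, untested sketch:

[sym_backup]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\(\d+/\d+/\d+\s\d+:\d+:\d+\):bemcmd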

The only other thing I can think of is having the weekly backup job write its logs somewhere else under a different name.
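
If I do end up routing the weekly output to its own file, I think I could assign it its own sourcetype in inputs.conf and then write a separate props.conf stanza for it. Roughly like this (the paths and the weekly file name here are just placeholders for whatever my batch file would actually write):

[monitor://C:\Backup Logs\backup.txt]
sourcetype = sym_backup

[monitor://C:\Backup Logs\backup_weekly.txt]
sourcetype = sym_backup_weekly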

Here is what the weekly backup looks like. I can't get anything to stop chopping this into multiple events.

(04/30/13 14:01:14):bemcmd  -o31 -la"C:\Users\Administrator.BLAHBLAH\Desktop\Backup Logs\test.txt" s0 -f"C:\Program Files\Symantec\Backup Exec\Data\BEX_UTIL_00703.xml"


            Job server: UTIL

    Job name: Incremental Backup



    Job started: Tuesday, April 30, 2013 at 1:00:02 PM

    Job type: Backup

    Job Log: BEX_UTIL_00703.xml





    Drive and media mount requested: 4/30/2013 1:00:02 PM



    Drive and media information from media mount: 4/30/2013 1:01:39 PM

    Robotic Library Name: QUANTUM 0002

    Drive Name: QUANTUM 0001

    Slot: 13

    Media Label: Tape13

    Media GUID: {1e15a167-c1b9-4b71-80ff-211e53151f2a}

    Overwrite Protected Until: 4/28/2018 1:20:51 PM

    Appendable Until: 12/31/9999 12:00:00 AM

    Targeted Media Set Name: Allow OverWrite







            Job Operation - Backup

    Media operation - Append to media, overwrite if no appendable media is available.

    Compression Type: Hardware [if available, otherwise none]

    Encryption Type: None





    AOFO: Started for resource: "E:". Advanced Open File Option used: Microsoft Volume Shadow Copy Service (VSS).

    The snapshot provider used by VSS for volume E: - Microsoft Software Shadow Copy provider 1.0 (Version 1.0.0.7).



    Family Name: "Media created 5/15/2012 3:00:11 PM"

    Backup of "E:"

    Backup set #269 on storage media #1

    Backup set description: "Incremental Backup"

    Backup Method: Incremental - Changed Files - Reset Archive Bit



    Backup started on 4/30/2013 at 1:04:26 PM.

    Backup completed on 4/30/2013 at 1:13:24 PM.



    Backed up 4800 files in 24065 directories.

    Processed 3,135,859,491 bytes in  8 minutes and  58 seconds.

    Throughput rate: 334 MB/min

    Compression Type: Hardware

    ----------------------------------------------------------------------





            Job Operation - Verify





    Verify of "E: New Volume"

    Backup set #269 on storage media #1

    Backup set description: "Incremental Backup"



    Verify started on 4/30/2013 at 1:16:56 PM.

    Verify completed on 4/30/2013 at 1:17:15 PM.



    Verified 4800 files in 24065 directories.

    Processed 3,135,859,491 bytes in  19 seconds.

    Throughput rate: 9444 MB/min

    ----------------------------------------------------------------------





            Job ended: Tuesday, April 30, 2013 at 1:18:43 PM



    Completed status: Successful

            RETURN VALUE: 1

_d_
Splunk Employee

If your events all start with the same pattern (which marks the event boundary), for example (04/26/13 06:00:00):bemcmd, then you can use the LINE_BREAKER attribute with SHOULD_LINEMERGE = false. Note that LINE_BREAKER requires a capturing group; the text matched by the first group is discarded between events:

[sym_backup]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\(\d+/\d+/\d+\s\d+:\d+:\d+\):


gnovak
Builder

:::bump::: Any other takes on this? It seems to be breaking every time there is a timestamp and I don't want it to do that.


gnovak
Builder

This didn't work. It still chops the log into many events: every time there is a timestamp, Splunk starts a new event at that line. I was wondering, can I have multiple MUST_NOT_BREAK_AFTER entries?
