Splunk Search

How to extract and compute fields from a MongoDB log?

kchongo
New Member

Hello,

I am new to Splunk. Can you help me figure out how to extract and compute fields from logs that look like the example below?

2016-10-06T21:22:15.285+0000 I COMMAND  [conn337418] command PersoTestServiceDB.$cmd command: update { update: "Test_Stage", updates: 1000, ordered: false, shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ] } keyUpdates:0 writeConflicts:0 numYields:0 reslen:232 locks:{ Global: { acquireCount: { r: 2000, w: 2000 } }, Database: { acquireCount: { w: 2000 } }, Collection: { acquireCount: { w: 1000 } }, Metadata: { acquireCount: { w: 1000 } }, oplog: { acquireCount: { w: 1000 } } } protocol:op_command 175ms

The above line is from a MongoDB log file. I am mostly interested in extracting the last field (the duration in ms) and then sorting by the largest value. I am trying to see how long queries take to complete on average, as well as identify the long-running queries from the logs. I would also like to list each long-running query next to its query time in the sorted output.

Your assistance is appreciated. Thanks.

1 Solution

sundareshr
Legend

If your data is already in Splunk, you could try this in your search
*UPDATED*

base search  NOT "sleeping" | rex "(?<dur>\d+)ms" | eventstats avg(dur) as avg_dur | sort - dur | table _time _raw dur avg_dur
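
As a rough breakdown: the rex stage pulls the number in front of "ms" into a field called dur, eventstats adds the overall average as avg_dur without dropping any events, sort - dur puts the slowest operations first, and table picks the columns to display. If you would rather see a per-operation summary than the raw events, something along these lines might also work (just a sketch; it assumes the operation name always follows the text "command:" in the event, as it does in your sample):

base search NOT "sleeping" | rex "(?<dur>\d+)ms" | rex "command: (?<op>\w+)" | stats count avg(dur) as avg_dur max(dur) as max_dur by op | sort - max_dur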


kchongo
New Member

Thanks, this gives me what I am looking for. I can build more around this starting point.

I noticed that the time seems to be shown on the graph in reverse: the latest times are the ones closest to the intersection of the x and y axes. Should this be the other way round? How can I fix this?
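
One possible way to get the chart back into chronological order (just a sketch building on the search above; timechart groups results into _time buckets in ascending order, and the 5 minute span is only an example) would be:

base search NOT "sleeping" | rex "(?<dur>\d+)ms" | timechart span=5m avg(dur) as avg_dur max(dur) as max_dur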


kchongo
New Member

Thanks, this looks good. One more thing: how can I strip out log entries like the one below that report sleep time? They are adding to the average calculation and appear at the top of the results when sorted.

2016-10-07T00:11:56.366+0000 I SHARDING [LockPinger] cluster mongodbhost1a:27019,mongodbhost1b:27019,mongodbhost1c:27022 pinged successfully at 2016-10-07T00:11:55.615+0000 by distributed lock pinger 'mongodbhost1a:27019,mongodbhost1b:27019,mongodbhost1c:27022/mongodbhost4a:27018:1469673136:466927433', sleeping for 30000ms

sundareshr
Legend

Try the updated search
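
For reference, the change in the updated search is the NOT "sleeping" term at the start, which drops any event containing that word before dur is extracted, so those entries no longer skew the average or land at the top of the sort. If matching on the word feels too broad, filtering on the component name is an alternative (a sketch that assumes those entries always carry the LockPinger tag):

base search NOT "LockPinger" | rex "(?<dur>\d+)ms" | eventstats avg(dur) as avg_dur | sort - dur | table _time _raw dur avg_dur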
