
Query JSON for most recent 3 consecutive fails

jchaudh
Explorer

Hi,

I have JSON logs in the following format. Each line is an event.

{"receivedDate":"2013-11-08 13:13:20.236", "macAddress": "11e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }
{"receivedDate":"2013-11-08 13:16:20.236", "macAddress": "11e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "passed" }
{"receivedDate":"2013-11-08 13:19:20.236", "macAddress": "12e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }
{"receivedDate":"2013-11-08 13:21:20.236", "macAddress": "14e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }
{"receivedDate":"2013-11-08 13:24:20.236", "macAddress": "12e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }
{"receivedDate":"2013-11-08 13:27:20.236", "macAddress": "12e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }
{"receivedDate":"2013-11-08 13:30:20.236", "macAddress": "11e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }
{"receivedDate":"2013-11-08 13:33:20.236", "macAddress": "11e4c90ca", "ssid": "Target Guest Wi-Fi", "status": "failed" }

I am interested in "status", "macAddress", and "receivedDate".
I want to retrieve events where status=failed for the most recent 3 consecutive times, grouped by macAddress.
So in my case that would be "macAddress": "12e4c90ca" but NOT "macAddress": "11e4c90ca", because one of its most recent 3 events was a pass.

Can you please point me in the right direction on how to achieve this?

Thanks a lot.


rsennett_splunk
Splunk Employee

Hi there...

It's not clear from the way you're describing the question whether you've got the data into Splunk... so I'm not completely sure where to start - but let's assume you do.

First... let's assume you're "polling" the last 20 minutes or so, since that's what you've got here. So we have a finite chunk of time. Splunk will, by default, show you the most recent stuff first, so you don't have to do anything on that front.

And what you've said is... if a macAddress succeeds in the time frame you're looking at, then you don't care about it.

So first you need to group your events. Then you disqualify the ones you don't want with status!="passed". Then you want to be sure how many times the failures have occurred, so you need to be able to count the lines... and the place where all of that happens is within a transaction.

This is a super simplified example:

index=blabla sourcetype=blabla {some kind of time restriction}
| transaction macAddress
| where status!="passed" AND linecount>=3
| table macAddress

Your example is also super simple... and before I put the linecount "where" test in there, there was only one other macAddress showing (you should try this out one pipe section at a time and look at the results). If the data is more complex, you might want to insert

| streamstats count

just after the transaction section and stop there. Take a look at the fields that are created.

Both transaction and streamstats will create calculated fields for you to use... read up on them in the docs.
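For example, transaction adds fields like duration and eventcount to each combined event, alongside the linecount we tested above. A quick, untested way to eyeball them, reusing the same placeholder search:

index=blabla sourcetype=blabla {some kind of time restriction}
| transaction macAddress
| table macAddress duration eventcount linecount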

If this does what you want... great. If it just opens up more questions... that's good too.

With Splunk... the answer is always "YES!". It just might require more regex than you're prepared for!

jchaudh
Explorer

Is it possible to do the same without using transaction, maybe using stats or streamstats? For large data volumes, transaction might not be a good way to do it!
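Something along these lines is what I have in mind: an untested streamstats sketch, reusing the index=tms from my test below ("recency", "events" and "fails" are just field names I made up):

index=tms
| sort 0 -_time
| streamstats count as recency by macAddress
| where recency<=3
| stats count as events count(eval(status="failed")) as fails by macAddress
| where events=3 AND fails=3
| table macAddress

Here sort 0 -_time keeps events newest-first without the default sort limit, streamstats numbers each macAddress's events in that order so the first where keeps only its 3 most recent, and the last where keeps a macAddress only if 3 such events exist and all of them failed.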


rsennett_splunk
Splunk Employee

yeap. 'zacty. : ) Glad it helped.

With Splunk... the answer is always "YES!". It just might require more regex than you're prepared for!

jchaudh
Explorer

Sorry, my bad: I was doing status="failed" instead of what you suggested, i.e. status!="passed" 🙂 After transaction combines the events, the group for 11e4c90ca still contains failed events, so status="failed" matched it anyway, while status!="passed" throws out any group that also contains a pass.
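For the record, the working version of my search, with your test swapped in, was:

index=tms
| transaction macAddress
| where status!="passed" AND linecount>=3
| table macAddress

and it now returns only 12e4c90ca on my sample.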


jchaudh
Explorer

Thanks for the reply rsennett_splunk. 🙂

I tried it and I get something different:

index=tms | transaction macAddress | where status="failed" and linecount>=3

"macAddress": "12e4c90ca"
"macAddress": "11e4c90ca"

It should only give me macAddress 12e4c90ca.
