Splunk Search

Count (all) / Count (unique) = result -> chart

dexxter275
Explorer

Hey all,

I have a logfile looking like this:

Host ----- Message
test ----- Error1
test ----- Error1
prod ----- Error2
prod ----- Error2
test ----- Error2
test ----- Error2
prod ----- Error3
prod ----- Error3

Now I want one chart with three numbers: first, a unique count of the hosts (2: test and prod); second, the full count of the messages (8); and finally, the full count divided by the unique count (8 / 2 = 4).

I tried it with transaction and where eventcount=1 to get a count without duplicates. That works very well.
I also found a way to do calculations, but I don't know how to combine the two.

Hope you can help me, thanks for all.

dexxter275

1 Solution

martin_mueller
SplunkTrust

The pattern of eventstats | stats is terrible. eventstats lifts all data from the indexers to the search head, goes through all data once, passes all data to stats, then stats goes through all data again. Instead, use this:

search | bucket span=1d _time 
| stats count as FullCount dc(machine) as UniqueCount by _time
| eval ratio = round(FullCount/UniqueCount, 2)

Now stats only needs to go over all data once, and the indexers can do the bulk of the work, returning only a tiny result set to the search head.
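
To check this against the sample data in the question, here is a self-contained sketch that rebuilds the eight sample events with makeresults (using the question's Host/Message field names; the machine field in the answer above presumably maps to Host, and the per-day bucketing is left out since the sample has no timestamps):

| makeresults count=8 ``` eight dummy events, like the eight sample lines ```
| streamstats count as row
| eval Host=case(row<=2, "test", row<=4, "prod", row<=6, "test", true(), "prod")
| eval Message=case(row<=2, "Error1", row<=6, "Error2", true(), "Error3")
| stats count as FullCount dc(Host) as UniqueCount
| eval ratio=round(FullCount/UniqueCount, 2)

This returns FullCount=8, UniqueCount=2, and ratio=4, matching the numbers in the question.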


dexxter275
Explorer

Damn, you are good. That's great and exactly what I had in mind.
Thanks!!


niketn
Legend

@dexxter275... That is why I follow @martin_mueller 🙂

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

niketn
Legend

Try the following. Use eventstats to compute the total count and add it to every event:

<Your Base Search>
| eventstats count(Message) as FullCount
| stats dc(Host) as UniqueCount last(FullCount) as FullCount
| eval ratio=round(FullCount/UniqueCount,2)
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

DalJeanis
Legend

eventstats is totally unnecessary in this one. Delete that line and on the next line, change last(FullCount) to count.
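
Applied to the search above, that simplification reads (a sketch, keeping the <Your Base Search> placeholder and the question's Host field):

<Your Base Search>
| stats dc(Host) as UniqueCount count as FullCount
| eval ratio=round(FullCount/UniqueCount,2)

stats already sees every event, so counting there directly avoids the extra pass over the data that eventstats would add.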


dexxter275
Explorer

It's me again. Your answer helped me a lot and did exactly what I wanted. Thank you for that.

Now I'm thinking about a history of the last 7 days (one line per day), and I found this question:
https://answers.splunk.com/answers/239649/need-to-get-stats-count-by-day.html
They used "bucket _time span=day" to separate the days.

Do you know how I can implement this? I tried:

<SEARCH> | bucket date span=day
| eventstats count(errormessage) as FullCount
| stats dc(machine) as UniqueCount last(FullCount) as FullCount
| eval ratio=round(FullCount/UniqueCount,2)

but it doesn't work. The field containing "16/02/2017" is called date. I'll keep searching myself, but maybe you can help me again.

Thanks so much.
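
One way to make that work (a sketch, assuming date is a string field in DD/MM/YYYY format, as "16/02/2017" suggests, and keeping the machine and errormessage field names from the attempt) is to parse date into _time with strptime and then reuse the pattern from the accepted answer:

<SEARCH>
| eval _time=strptime(date, "%d/%m/%Y") ``` turn the string date into an epoch timestamp ```
| bucket span=1d _time
| stats count(errormessage) as FullCount dc(machine) as UniqueCount by _time
| eval ratio=round(FullCount/UniqueCount, 2)

bucket needs a numeric or time field, which is why spanning over the raw date string does not work.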


niketn
Legend

@dexxter275... kindly accept if this solved your problem. Let me know otherwise.

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

dexxter275
Explorer

Wow. That works perfectly. Thanks so much 🙂
