Getting Data In

How can we speed up JSON array lookup?

vinodkd
New Member

Hi,

My JSON event is in this form:

{
  "TotalMemUsage": 887992,
  "ProcMemUsage": [
    {
      "Name": "firefox",
      "PID": 758,
      "Usg": 228972
    },
    {
      "Name": "eclipse",
      "PID": 569,
      "Usg": 19756
    }
  ]
}

I have to take the average of each process's memory usage and draw a chart.
For the time being, I use the following query:

my_search
| rename ProcMemUsage{}.Name as PNAME | rename ProcMemUsage{}.Usg as PUSAGE
| eval x=mvzip(PNAME,PUSAGE)
| mvexpand x | eval x=split(x,",")
| eval P_NAME=mvindex(x,0)
| eval P_USAGE=mvindex(x,1)
| stats avg(P_USAGE) as MU by P_NAME
| sort -MU | fields MU,P_NAME
| chart avg(MU) as AvgMU by P_NAME
| sort -AvgMU

But it takes a lot of time to complete (approximately 5 minutes for only 30K records).
Is there any way to optimize it? Can we use jsonutils somehow?


vinodkd
New Member

@lguinn2: It doesn't give a noticeable speedup :'(


lguinn2
Legend

I don't really think this will make things faster, but it might help a little:

my_search 
| rename ProcMemUsage{}.Name as PNAME | rename ProcMemUsage{}.Usg as PUSAGE
| fields PNAME PUSAGE
| eval x=mvzip(PNAME,PUSAGE) 
| mvexpand x
| eval x=split(x,",")
| eval P_NAME=mvindex(x,0)
| eval P_USAGE = mvindex(x,1)
| chart avg(P_USAGE) as AvgMU by P_NAME 
| sort -AvgMU

It's also a bit cleaner. Post back with your results.
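
Another angle worth trying, as a sketch only (I haven't benchmarked it against your data): use spath to extract just the ProcMemUsage array, expand the array elements as whole objects, and then pull Name and Usg out of each element. This skips the mvzip/split round-trip, though whether it is actually faster depends on your event sizes and field extraction settings:

my_search
| spath path=ProcMemUsage{} output=proc
| fields proc
| mvexpand proc
| spath input=proc path=Name output=P_NAME
| spath input=proc path=Usg output=P_USAGE
| chart avg(P_USAGE) as AvgMU by P_NAME
| sort -AvgMU

The `fields proc` clause before mvexpand matters for the same reason as above: it keeps mvexpand from duplicating every other field in the event.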
