Alerting

How to identify which alert is causing high memory consumption on the search head?

LeandroKopke
Explorer

I am having problems with high memory consumption on my search head.
During the periods when scheduled alerts run, RAM consumption reaches an incredible 377GB. This causes the system to terminate the Splunk process.

Is there any search I can run to find out the memory consumption of each Splunk alert, so I can identify which one is causing this high consumption?


gjanders
SplunkTrust

Either refer to the monitoring console "Search Activity: Deployment wide" view, or use the search I added to Alerts for Splunk Admins (SplunkBase) (github link):
SearchHeadLevel - Maximum memory utilisation per search

`comment("As originally found on https://answers.splunk.com/answers/500973/how-to-improve-my-search-to-identify-queries-which.html / DalJeanis with minor modifications. Max memory used per search process at search head level")`    
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| stats max(data.mem_used) AS peak_mem_usage,
    latest(data.search_props.mode) AS mode,
    latest(data.search_props.type) AS type,
    latest(data.search_props.role) AS role,
    latest(data.search_props.app) AS app,
    latest(data.search_props.user) AS user,
    latest(data.search_props.provenance) AS provenance,
    latest(data.search_props.label) AS label,
    latest(host) AS splunk_server,
    min(_time) AS min_time,
    max(_time) AS max_time
    by data.search_props.sid, host
| sort - peak_mem_usage
| head 50
| table provenance, peak_mem_usage, label, mode, type, role, app, user, min_time, max_time, data.search_props.sid, splunk_server
| eval min_time=strftime(min_time, "%+"), max_time=strftime(max_time, "%+")
| rename data.search_props.sid AS sid,
    peak_mem_usage AS "Peak Physical Memory Usage (MB)",
    min_time AS "First time seen",
    max_time AS "Last time seen"

You might want to narrow this down to your search heads with a host= filter.
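For example, a minimal sketch of the filtered base search (the host pattern sh* is a placeholder for your actual search head host names):

index=_introspection host=sh* sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*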
The monitoring console also has a view of searches using a lot of memory.


codebuilder
SplunkTrust

From the web UI on any of your search heads, go to Activity > Jobs, then sort by runtime and/or size.

That will help you quickly identify searches that are consuming a lot of resources.
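If you'd rather pull the same information with SPL, here is a rough sketch using the search jobs REST endpoint (field availability can vary by version; runDuration is in seconds and diskUsage in bytes):

| rest /services/search/jobs
| eval diskUsage=tonumber(diskUsage), runDuration=tonumber(runDuration)
| table label, author, provenance, runDuration, diskUsage, sid
| sort - diskUsage
| head 20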

----
An upvote would be appreciated and Accept Solution if it helps!

somesoni2
SplunkTrust

It seems like your instance has some really inefficient searches, and most of them may be executing at around the same time. You can set up the monitoring console to figure out what is causing it.
https://docs.splunk.com/Documentation/Splunk/7.2.5/DMC/WhatcanDMCdo
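To check whether many scheduled searches fire in the same window, a simple sketch against the scheduler logs (assuming default internal logging) is:

index=_internal sourcetype=scheduler
| timechart span=5m count by status

Spikes in the same 5-minute buckets, or a growing count of skipped or deferred statuses, point to scheduled alerts stacking up at the same time.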
