Splunk Search

limits.conf - any settings we can change from defaults to improve performance on a server with 24 CPUs and 256GB memory?

simpkins1958
Contributor

We have 9,255,277,001 events indexed across 90 days of hot/warm data, and we need to run on a single Splunk instance. Our server has 24 CPUs and 256 GB of memory, and the disks meet Splunk's IOPS specifications.

When we run this search over 30 days of data, it takes 6+ minutes.

| tstats count AS totalSessions
FROM datamodel=nmdm_flow
WHERE nmds_app_flow.c_user="" AND nmds_app_flow.d_name="" AND nmds_app_flow.app_name="" AND nmds_app_flow.dest_ip4_Country="" AND (nmds_app_flow.event!="Update")
| eval totalSessions = tostring(totalSessions, "commas")

CPU and memory usage on the server is minimal during this search.
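
If the nmdm_flow datamodel is accelerated, one variant we could try is forcing tstats to read only the accelerated summaries. This is just a sketch (with the same redacted filter values as above), not something we have benchmarked:

| tstats summariesonly=t count AS totalSessions
FROM datamodel=nmdm_flow
WHERE nmds_app_flow.c_user="" AND nmds_app_flow.d_name="" AND nmds_app_flow.app_name="" AND nmds_app_flow.dest_ip4_Country="" AND (nmds_app_flow.event!="Update")
| eval totalSessions = tostring(totalSessions, "commas")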

We are hoping that by changing some configuration settings in limits.conf (or other conf files) we can make better use of the CPU and memory during our searches. For example, we have tried raising max_mem_usage_mb from its default of 200 and are seeing improvements in the above search.
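
For reference, that change goes in limits.conf; the value below is illustrative only, not a recommendation:

[default]
# max_mem_usage_mb caps how much memory certain search processors may use.
# 2000 is a hypothetical value for testing; the shipped default is 200.
max_mem_usage_mb = 2000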


mayurr98
Super Champion

Hey, have a look at this option in limits.conf:

[thruput]
maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is 
  processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
  the number of events this indexer processes to the rate (in
  kilobytes per second) that you specify.
* NOTE:
  * There is no guarantee that the thruput processor 
    will always process less than the number of kilobytes per
    second that you specify with this setting. The status of 
    earlier processing queues in the pipeline can cause
    temporary bursts of network activity that exceed what
    is configured in the setting. 
  * The setting does not limit the amount of data that is 
    written to the network from the tcpoutput processor, such 
    as what happens when a universal forwarder sends data to 
    an indexer.  
  * The thruput processor applies the 'maxKBps' setting for each
    ingestion pipeline. If you configure multiple ingestion
    pipelines, the processor multiplies the 'maxKBps' value
    by the number of ingestion pipelines that you have
    configured.
  * For more information about multiple ingestion pipelines, see 
    the 'parallelIngestionPipelines' setting in the 
    server.conf.spec file.
* Default: 0 (unlimited)
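
As an illustration (the values here are hypothetical, not recommendations), throttling each ingestion pipeline to roughly 10 MB/s would look like this in limits.conf:

[thruput]
# Hypothetical cap: ~10 MB/s per ingestion pipeline; 0 (the default) means unlimited.
maxKBps = 10240

and, per the note above, the pipeline count that 'maxKBps' is multiplied by comes from server.conf:

[general]
# Hypothetical: with two pipelines, the effective cap is 2 x maxKBps.
parallelIngestionPipelines = 2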

let me know if this helps!
