Deployment Architecture

Slow distributed search (DistributedBundleReplicationManager?)

tawollen
Path Finder

We have two search heads. One is a lab box with lower performance (4-way, 8 GB RAM), but very few people use it. The other is a 16-way box. Our primary search head does no indexing locally at all; the lab box does some (about 10,000 events/day). For some reason, the lab box seems REAL slow when doing searches, even though system load and memory all look good.

The only thing I have found in the logs is the following. From the lab box:

12-01-2010 23:48:57.669 WARN DistributedBundleReplicationManager - bundle replication to 8 peer(s) took too long (227349ms), bundle file size=143900KB

From production:

12-01-2010 19:29:16.536 WARN DistributedBundleReplicationManager - bundle replication to 10 peer(s) took too long (10474ms), bundle file size=4130KB

I can't find any real reference to DistributedBundleReplicationManager in any documentation. The "bundle replication" is taking a lot longer, and I am wondering if the "bundle file size" is what makes the one system slower than the other.
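Doing the arithmetic on the two warnings suggests the bundle size, not the link speed, is the difference. A quick sketch using only the numbers from the log lines above:

```python
# Effective throughput for each bundle replication warning,
# using the sizes and durations from the two log lines above.
lab_kb, lab_ms = 143900, 227349    # lab search head warning
prod_kb, prod_ms = 4130, 10474     # production search head warning

lab_rate = lab_kb / (lab_ms / 1000)      # KB per second
prod_rate = prod_kb / (prod_ms / 1000)   # KB per second

print(f"lab:  {lab_rate:.0f} KB/s over {lab_ms / 1000:.0f}s")
print(f"prod: {prod_rate:.0f} KB/s over {prod_ms / 1000:.0f}s")
print(f"bundle size ratio: {lab_kb / prod_kb:.0f}x")
```

The lab box actually moves data at a comparable rate (roughly 600 KB/s vs. roughly 400 KB/s for production); the problem is that its bundle is about 35 times larger, so replication takes minutes instead of seconds.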

Any thoughts?


gkanapathy
Splunk Employee

Do you have a very large lookup file or something inside one of the apps on your slow search head?
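One way to check is to look for unusually large files under the apps directory on the slow search head. This is just a sketch, not a Splunk tool; the `/opt/splunk/etc/apps` path is the typical default install location and is an assumption, so adjust it for your environment:

```python
import os

def largest_files(root, top_n=10):
    """Walk a directory tree and return the top_n largest files
    as (size_in_bytes, path) pairs, largest first."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or are unreadable
    return sorted(sizes, reverse=True)[:top_n]

# Assumed default install path; change to your $SPLUNK_HOME/etc/apps.
for size, path in largest_files("/opt/splunk/etc/apps"):
    print(f"{size / 1024:8.0f} KB  {path}")
```

A multi-hundred-megabyte lookup CSV in any app would show up at the top of this list, and since the knowledge bundle is built from the apps directory, it would explain the large bundle file size in the warning.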

araitz
Splunk Employee

Does some content in your apps change a lot? Seems like a slow connection if it takes >10s to transfer <4 MB.


tawollen
Path Finder

We have some lookups, nothing that big, at least not in the app I am using; there might be something in one of the other apps.
