Splunk Search

Why does search head cluster replication fail with a large lookup table?

faol
Explorer

Replication is failing with the following error.

07-12-2015 21:08:45.859 +0000 WARN  ConfReplicationThread - Error pushing configurations to captain=https://server_name:8089, consecutiveErrors=1: Error in acceptPush, uploading lookup_table_file="/opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv": Non-200 status_code=413: Content-Length of 1567337721 too large (maximum is 838860800)

Is there a way to allow the replication to occur even though the file is too large?

1 Solution

bpaul_splunk
Splunk Employee

As the error states, the file (1567337721 bytes, roughly 1.5 GB) exceeds the max_content_length limit in server.conf, which here is the 800 MB default (838860800 bytes). The limit can be increased by adding the following to $SPLUNK_HOME/etc/system/local/server.conf:

[httpServer]
max_content_length = 1600000000

This could negatively affect performance, however, so it would be preferable to reduce the size of the file if at all possible. Note that splunkd must be restarted for the change to take effect.
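On trimming the file itself, a scheduled maintenance search along these lines could drop stale rows from the tracker lookup. This is only a sketch: the time field (lastTime) and the 90-day retention window are assumptions that would need to match the actual columns in localprocesses_tracker.csv.

| inputlookup localprocesses_tracker.csv
| where lastTime > relative_time(now(), "-90d@d")
| outputlookup localprocesses_tracker.csv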

lloydknight
Builder

Hello,

What are the possible impacts of doubling max_content_length?

bpaul_splunk
Splunk Employee

The reason for the limit is to prevent excessive memory consumption. In newer versions of Splunk, the default has been increased to 2 GB. This is the information from the server.conf spec file:

max_content_length =
* Measured in bytes.
* HTTP requests over this size will be rejected.
* Exists to avoid allocating an unreasonable amount of memory from web
  requests.
* Defaulted to 2147483648 or 2GB.
* In environments where indexers have enormous amounts of RAM, this
  number can be reasonably increased to handle large quantities of
  bundle data.
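As a quick way to confirm which value is actually in effect on a node, and which configuration file supplies it, Splunk's btool utility can be used:

# Show the effective httpServer settings and the file each one comes from
$SPLUNK_HOME/bin/splunk btool server list httpServer --debug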

masonmorales
Influencer

Large lookups like this should ideally be converted to a KV store. That way, MongoDB can do the replication independently of the search bundle.
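For anyone taking that route, a sketch of the migration might look like the following. Every collection and field name here is an illustrative assumption; the real columns of localprocesses_tracker.csv would need to be used instead. The collection is defined in collections.conf, exposed as a lookup in transforms.conf, and the CSV contents are copied across once with SPL.

# collections.conf -- hypothetical collection; field names are assumptions
[localprocesses_tracker_kv]
field.process = string
field.lastTime = number

# transforms.conf -- expose the collection as a KV store lookup
[localprocesses_tracker_kv]
external_type = kvstore
collection = localprocesses_tracker_kv
fields_list = _key, process, lastTime

One-time copy of the existing CSV into the collection:

| inputlookup localprocesses_tracker.csv | outputlookup localprocesses_tracker_kv

Once the data is verified in the collection, the CSV can be deleted so it no longer rides along with the replicated configuration bundle.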
