
Why does search head cluster replication fail with a large lookup table?

faol
Explorer

Replication is failing with the following error:

07-12-2015 21:08:45.859 +0000 WARN  ConfReplicationThread - Error pushing configurations to captain=https://server_name:8089, consecutiveErrors=1: Error in acceptPush, uploading lookup_table_file="/opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv": Non-200 status_code=413: Content-Length of 1567337721 too large (maximum is 838860800)

Is there a way to allow the replication to occur even though the file is too large?

1 Solution

bpaul_splunk
Splunk Employee

As stated in the error, the file exceeds the 800 MB max_content_length limit in server.conf. This can be increased by adding the following to $SPLUNK_HOME/etc/system/local/server.conf:

[httpServer]
max_content_length = 1600000000

However, this could negatively affect performance, so it would be preferable to reduce the size of the file if at all possible.
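To put the numbers in context: the rejected upload is 1,567,337,721 bytes (about 1.46 GiB), the default cap is 838,860,800 bytes (800 MiB), and the 1,600,000,000-byte override above is just large enough to accept it. Note that the 413 is returned by the captain, so the raised limit has to be in effect on the receiving instance; since captaincy can move, applying it to every search head cluster member is the safer approach. A restart is required for the change to take effect, and the file size can be double-checked beforehand (path taken from the error above):

# Check the lookup's size on disk
ls -l /opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv

# Restart so the new max_content_length takes effect
$SPLUNK_HOME/bin/splunk restart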


lloydknight
Builder

Hello,

What are the possible impacts of doubling max_content_length?


bpaul_splunk
Splunk Employee

The reason for the limit is to prevent excessive memory consumption. In newer versions of Splunk, the default has been increased to 2 GB. This is the information from the server.conf spec file:

max_content_length =
* Measured in bytes
* HTTP requests over this size will be rejected.
* Exists to avoid allocating an unreasonable amount of memory from web
  requests
* Defaulted to 2147483648 or 2GB
* In environments where indexers have enormous amounts of RAM, this
  number can be reasonably increased to handle large quantities of
  bundle data.
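After editing server.conf, btool can confirm the value Splunk will actually use and which file supplies it (btool ships with Splunk; run this on the affected instance):

$SPLUNK_HOME/bin/splunk btool server list httpServer --debug

The --debug flag prefixes each line with the file the setting came from, which makes it easy to spot a default still winning over the local override.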


masonmorales
Influencer

Large lookups like this should ideally be converted to a KV store. That way, MongoDB can do the replication independently of the search bundle.
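A minimal sketch of that conversion, assuming the lookup is rebuilt as a KV store collection; the collection name, lookup definition name, and fields_list below are illustrative and must be adjusted to the CSV's real columns.

In the app's collections.conf, define the collection:

[localprocesses_tracker]

In the app's transforms.conf, define a KV store lookup over it (field names are placeholders):

[localprocesses_tracker_kv]
external_type = kvstore
collection = localprocesses_tracker
fields_list = _key, dest, process_name

Then populate the collection once from the existing CSV:

| inputlookup localprocesses_tracker.csv | outputlookup localprocesses_tracker_kv

Searches that referenced the CSV can then point at the localprocesses_tracker_kv lookup definition instead, and the large file no longer rides along in configuration replication.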
