Replication is failing with the following error.
07-12-2015 21:08:45.859 +0000 WARN ConfReplicationThread - Error pushing configurations to captain=https://server_name:8089, consecutiveErrors=1: Error in acceptPush, uploading lookup_table_file="/opt/splunk/etc/apps/SA-EndpointProtection/lookups/localprocesses_tracker.csv": Non-200 status_code=413: Content-Length of 1567337721 too large (maximum is 838860800)
Is there a way to allow the replication to occur even though the file is too large?
As the error states, the file exceeds the max_content_length limit in server.conf, which is 800 MB (838860800 bytes) on this version. The limit can be raised by adding the following to $SPLUNK_HOME/etc/system/local/server.conf:
[httpServer]
max_content_length = 1600000000
This could negatively affect performance, however, so it would be preferable to reduce the size of the file if at all possible.
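Before restarting with a new limit, it can help to sanity-check the numbers. A minimal shell sketch using the byte counts from the error message above (the values are taken from that message; the arithmetic itself is the only logic here):

```shell
# Size of the rejected push, from the error message (bytes):
pushed_size=1567337721
# Default max_content_length on this version (bytes, ~800 MB):
default_limit=838860800
# Proposed new limit (bytes, ~1.6 GB):
new_limit=1600000000

# The push exceeds the default limit...
if [ "$pushed_size" -gt "$default_limit" ]; then
  echo "push of ${pushed_size} bytes exceeds default limit of ${default_limit}"
fi
# ...and fits under the proposed limit, with this much headroom:
if [ "$pushed_size" -lt "$new_limit" ]; then
  echo "headroom under new limit: $((new_limit - pushed_size)) bytes"
fi
```

Note that the headroom is only about 31 MB; if the lookup keeps growing (tracker lookups usually do), the new limit will be hit again fairly soon, which is another reason to shrink the file rather than just raise the ceiling.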
Hello,
What are the possible impacts of doubling max_content_length?
The reason for the limit is to prevent excessive memory consumption. In newer versions of Splunk, the default has been increased to 2 GB. This is the relevant excerpt from the server.conf spec file:
max_content_length =
* Measured in bytes
* HTTP requests over this size will be rejected.
* Exists to avoid allocating an unreasonable amount of memory from web
requests
* Defaulted to 2147483648 or 2GB
* In environments where indexers have enormous amounts of RAM, this
number can be reasonably increased to handle large quantities of
bundle data.
Large lookups like this should ideally be converted to a KV store. That way, MongoDB can do the replication independently of the search bundle.
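A rough sketch of that conversion, assuming a new collection named localprocesses_kv defined in the same app (the collection and lookup names are illustrative, and fields_list must be filled in with the CSV's actual column names):

# collections.conf
[localprocesses_kv]

# transforms.conf
[localprocesses_kv]
external_type = kvstore
collection = localprocesses_kv
# fields_list = _key plus the columns of localprocesses_tracker.csv

Then load the existing CSV into the collection once with a search:

| inputlookup localprocesses_tracker.csv | outputlookup localprocesses_kv

After that, searches reference the KV store lookup instead of the CSV, the file can be removed from the lookups directory, and it no longer travels with the configuration bundle.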