Knowledge Management

Why is the KV store failing to start in Splunk version 6.5.1?

rbal_splunk
Splunk Employee

Issue: I am running Splunk version 6.5.1 and the KV store fails to start. The mongod.log contains errors like the following:

2016-12-22T19:10:36.394Z I CONTROL  [signalProcessingThread] dbexit:  rc: 0
 warning: bind_ip of 0.0.0.0 is unnecessary; listens on all ips by default
 2016-12-22T19:13:34.668Z W CONTROL  No SSL certificate validation can be performed since no CA file has been provided; please specify an sslCAFile parameter
 2016-12-22T19:13:34.712Z I CONTROL  [initandlisten] MongoDB starting : pid=24234 port=8191 dbpath=/opt/splunk/var/lib/splunk/kvstore/mongo 64-bit host=105elmp01
 2016-12-22T19:13:34.712Z I CONTROL  [initandlisten] db version v3.0.8-splunk
 2016-12-22T19:13:34.712Z I CONTROL  [initandlisten] git version: 83d8cc25e00e42856924d84e220fbe4a839e605d
 2016-12-22T19:13:34.712Z I CONTROL  [initandlisten] build info: Linux build11.ny.cbi.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
 2016-12-22T19:13:34.712Z I CONTROL  [initandlisten] allocator: tcmalloc
 2016-12-22T19:13:34.712Z I CONTROL  [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 8191, ssl: { PEMKeyFile: "/opt/splunk/etc/auth/server.pem", PEMKeyPassword: "<password>", allowInvalidHostnames: true, mode: "preferSSL" }, unixDomainSocket: { enabled: false } }, replication: { oplogSizeMB: 200, replSet: "555D798D-3B34-4375-87F4-8AFEFF1A8AAF" }, security: { javascriptEnabled: false, keyFile: "/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key" }, setParameter: { enableLocalhostAuthBypass: "0" }, storage: { dbPath: "/opt/splunk/var/lib/splunk/kvstore/mongo", mmapv1: { smallFiles: true } }, systemLog: { timeStampFormat: "iso8601-utc" } }
 2016-12-22T19:13:34.756Z I JOURNAL  [initandlisten] journal dir=/opt/splunk/var/lib/splunk/kvstore/mongo/journal
 2016-12-22T19:13:34.756Z I JOURNAL  [initandlisten] recover : no journal files present, no recovery needed
 2016-12-22T19:13:34.774Z I JOURNAL  [durability] Durability thread started
 2016-12-22T19:13:34.774Z I JOURNAL  [journal writer] Journal writer thread started
 2016-12-22T19:13:34.774Z I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
 2016-12-22T19:13:34.774Z I CONTROL  [initandlisten]
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten]
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten]
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] **        We suggest setting it to 'never'
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten]
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten] **        We suggest setting it to 'never'
 2016-12-22T19:13:34.775Z I CONTROL  [initandlisten]
 2016-12-22T19:13:34.776Z I INDEX    [initandlisten] allocating new ns file /opt/splunk/var/lib/splunk/kvstore/mongo/local.ns, filling with zeroes...
 2016-12-22T19:13:34.800Z I STORAGE  [FileAllocator] allocating new datafile /opt/splunk/var/lib/splunk/kvstore/mongo/local.0, filling with zeroes...
 2016-12-22T19:13:34.800Z I STORAGE  [FileAllocator] creating directory /opt/splunk/var/lib/splunk/kvstore/mongo/_tmp
 2016-12-22T19:13:34.801Z I STORAGE  [FileAllocator] done allocating datafile /opt/splunk/var/lib/splunk/kvstore/mongo/local.0, size: 16MB,  took 0 secs
 2016-12-22T19:13:34.804Z I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
 2016-12-22T19:13:34.805Z I NETWORK  [initandlisten] waiting for connections on port 8191 ssl
 2016-12-22T19:13:35.643Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50139 #1 (1 connection now open)
 2016-12-22T19:13:35.668Z I ACCESS   [conn1] Successfully authenticated as principal __system on local
 2016-12-22T19:13:35.668Z I NETWORK  [conn1] end connection 127.0.0.1:50139 (0 connections now open)
 2016-12-22T19:13:36.669Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50157 #2 (1 connection now open)
 2016-12-22T19:13:36.692Z I ACCESS   [conn2] Successfully authenticated as principal __system on local
 2016-12-22T19:13:36.692Z I NETWORK  [conn2] end connection 127.0.0.1:50157 (0 connections now open)
 2016-12-22T19:13:36.692Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50158 #3 (1 connection now open)
 2016-12-22T19:13:36.715Z I ACCESS   [conn3] Successfully authenticated as principal __system on local
 2016-12-22T19:13:36.715Z I NETWORK  [conn3] end connection 127.0.0.1:50158 (0 connections now open)
 2016-12-22T19:13:36.715Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50159 #4 (1 connection now open)
 2016-12-22T19:13:36.737Z I ACCESS   [conn4] Successfully authenticated as principal __system on local
 2016-12-22T19:13:36.738Z I NETWORK  [conn4] end connection 127.0.0.1:50159 (0 connections now open)
 2016-12-22T19:13:36.738Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50160 #5 (1 connection now open)
 2016-12-22T19:13:36.760Z I ACCESS   [conn5] Successfully authenticated as principal __system on local
 2016-12-22T19:13:36.760Z I REPL     [conn5] replSetInitiate admin command received from client
 2016-12-22T19:13:36.761Z E REPL     [conn5] replSet initiate got NodeNotFound No host described in new configuration 1 for replica set 555D798D-3B34-4375-87F4-8AFEFF1A8AAF maps to this node while validating { _id: "555D798D-3B34-4375-87F4-8AFEFF1A8AAF", version: 1, members: [ { _id: 0, host: "0.0.0.0:8191", votes: 1, tags: { instance: "555D798D-3B34-4375-87F4-8AFEFF1A8AAF", all: "all" } } ] }
 2016-12-22T19:13:36.761Z I NETWORK  [conn5] end connection 127.0.0.1:50160 (0 connections now open)
 2016-12-22T19:13:38.965Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50164 #6 (1 connection now open)
 2016-12-22T19:13:38.968Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50165 #7 (2 connections now open)
 2016-12-22T19:13:38.968Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50166 #8 (3 connections now open)
 2016-12-22T19:13:38.968Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50167 #9 (4 connections now open)
 2016-12-22T19:13:38.968Z I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50168 #10 (5 connections now open)
 2016-12-22T19:13:38.993Z I ACCESS   [conn8] Successfully authenticated as principal __system on local
 2016-12-22T19:13:38.993Z I ACCESS   [conn10] Successfully authenticated as principal __system on local
 2016-12-22T19:13:38.993Z I ACCESS   [conn7] Successfully authenticated as principal __system on local
 2016-12-22T19:13:38.995Z I ACCESS   [conn9] Successfully authenticated as principal __system on local
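
Everything up to the replSetInitiate line is normal startup output; the actual failure is the NodeNotFound error, where the proposed replica set member host is 0.0.0.0:8191. A minimal check for where that bind address is coming from, assuming $SPLUNK_HOME is /opt/splunk as in the paths above:

 # Check whether a global bind address is being forced on mongod
 grep -i SPLUNK_BINDIP /opt/splunk/etc/splunk-launch.conf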

rbal_splunk
Splunk Employee

In this case the issue turned out to be splunk-launch.conf, which contained SPLUNK_BINDIP = 0.0.0.0. That bind address is passed through to mongod (the bindIp: "0.0.0.0" in the options line above) and used as the replica set member host, which cannot map back to the local node, so replSetInitiate fails with NodeNotFound. After the setting was removed, the KV store started fine.
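
For reference, a sketch of the fix, assuming a default /opt/splunk install as shown in the log. Note that the warning at the top of the log already says binding to 0.0.0.0 is unnecessary, since mongod listens on all interfaces by default:

 # /opt/splunk/etc/splunk-launch.conf
 # Remove (or comment out) this line:
 # SPLUNK_BINDIP = 0.0.0.0

 # Then restart Splunk so the KV store re-initializes its replica set:
 /opt/splunk/bin/splunk restart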

