What config changes are needed to ensure DB inputs don't run on non-captain search head nodes?

jincy_18
Path Finder

Hi All,
We are using Splunk DB Connect 2.3 in a search head clustered environment.
While setting up a new database input, we configure the input in the DBX2 app on the deployer (which is not part of the search head cluster), copy the splunk_app_db_connect app from "/apps/splunk/etc/apps/" to "/apps/splunk/etc/shcluster/apps",
and then run ./splunk apply shcluster-bundle -target https://searchheadclustermember:8089 to push it to the search heads.
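For reference, the full push from the deployer looks roughly like the sketch below (SPLUNK_HOME is /apps/splunk in our environment; the -auth credentials are placeholders):

    # On the deployer: stage the DBX app (including the new input) into the SHC bundle
    cp -R /apps/splunk/etc/apps/splunk_app_db_connect /apps/splunk/etc/shcluster/apps/

    # Push the bundle to the cluster through any search head cluster member
    /apps/splunk/bin/splunk apply shcluster-bundle \
        -target https://searchheadclustermember:8089 \
        -auth admin:changeme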

As per our understanding, DB inputs run on the search head captain. However, the logs show "action=modular_input_not_running_on_captain", i.e. DB inputs aren't always running on the captain.
Does Splunk first query each search head peer to find which one is the captain and then run the input query on the captain node? If not, what config changes are needed to ensure inputs don't run on non-captain nodes?
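In case it helps, this is roughly how we check which member currently holds the captain role and where we see the message (credentials are placeholders):

    # On any SHC member: show the current captain and member status
    /apps/splunk/bin/splunk show shcluster-status

    # Or query captain info over the REST API
    curl -k -u admin:changeme https://searchheadclustermember:8089/services/shcluster/captain/info

    # Search we use to spot the message in the internal logs
    index=_internal "modular_input_not_running_on_captain"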

Thanks,
Jincy

1 Solution

adonio
Ultra Champion

Hello jincy_18,
please refer to @martin_mueller's comment above.
Use a Heavy Forwarder when using DBX, especially in a SHC configuration.


martin_mueller
SplunkTrust

As per http://docs.splunk.com/Documentation/DBX/2.3.1/DeployDBX/Architectureandperformanceconsiderations#Se... you shouldn't run inputs on a SHC:

Splunk recommends running reports (saved searches), alerts, and lookups on the search head or search head cluster captain, and running inputs and outputs from a forwarder. This is because disruptive search head restarts and reloads are more common, and scheduled or streaming bulk data movements can impact the performance of user searches. Poor user experience, reduced performance, increased configuration replication, unwanted data duplication, or even data loss can result from running inputs and outputs on search head infrastructure. Running inputs and outputs on a search head captain does not provide extra fault tolerance or enhance availability, and is not recommended.
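At a high level, moving the inputs off the SHC and onto a heavy forwarder would look something like the sketch below (paths and hostnames are illustrative, not a prescribed procedure):

    # On the heavy forwarder: install the DBX app, including the local/ configuration
    # (connections, identities, inputs) taken from the current deployment
    scp -r deployer:/apps/splunk/etc/apps/splunk_app_db_connect /opt/splunk/etc/apps/

    # Restart the forwarder so the app and its inputs are picked up
    /opt/splunk/bin/splunk restart

    # On the deployer: disable or remove the inputs in the copy that gets pushed to
    # the SHC, then re-run "splunk apply shcluster-bundle" so the search heads stop
    # running them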


jincy_18
Path Finder

Thank you, Martin, for your valuable inputs.

We do have a plan to move to a HWF along with DBC3, but that will take some time.
Since we are currently using DBC2 in the SHC, could you provide some pointers for debugging the issue mentioned above?

Also, in the SHC we do not need to maintain the state of DB connections and reads, since even if the captain goes down, the other search heads have the latest state and can continue processing. How do we address failover in a HWF configuration?
