Hi Splunkers,
I have a very mind-rattling situation here. I have a distributed environment (non-clustered) with 2 search heads and 3 indexers. My first SH acts as the primary, while the second acts as a slave. On a normal day all three of my indexers receive roughly equal amounts of data. About three days ago I noticed that my third indexer was carrying most of the load while the others were barely getting any data, and it has since gotten worse. Today indexer3 is showing 62%, indexer2 11%, and indexer1 11%. I have also noticed that the /etc/system/local directory on the slave SH has no outputs.conf file, while the master SH does have an outputs.conf in the same directory (is this normal?). I don't know how to resolve this. Please help. Thanks
For your first question about equal distribution: you will need to configure load balancing (for example with autoLBVolume) so that data is spread evenly across the indexers. Fortunately, Splunk has its own built-in load balancer on the forwarding side.
https://docs.splunk.com/Documentation/Forwarder/7.0.2/Forwarder/Configureloadbalancing
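As a rough sketch only (the hostnames idx1/idx2/idx3, port 9997, and the numeric values are placeholders for your own environment), the outputs.conf on the sending instance could look something like this, using Splunk's built-in load balancing across all three indexers:

    # outputs.conf on the forwarding instance (example values only)
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
    # switch target after ~1 MB sent, or after 30 seconds if that volume is not reached
    autoLBVolume = 1048576
    autoLBFrequency = 30

With all three indexers in one server list, the forwarder picks a new indexer after sending roughly autoLBVolume bytes, or after autoLBFrequency seconds if that volume has not been reached, which keeps the distribution fairly even.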
Second,
It is best practice to forward search head data to the indexers (so that all instance data is kept on the indexers), but it is up to you whether to forward data from a search head or not. Since one of your search heads is already sending data, why not configure SH2 to send its data to the indexers as well?
https://docs.splunk.com/Documentation/Splunk/7.0.2/DistSearch/Forwardsearchheaddata
NB: You can use Splunk's built-in load balancing from the SHs to the indexers as well.
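For the second search head, a minimal sketch of the outputs.conf described on that docs page might look like the following (the group name and indexer addresses are placeholders); it disables local indexing on the SH and forwards everything to the same indexer group:

    # $SPLUNK_HOME/etc/system/local/outputs.conf on the search head (sketch)
    [indexAndForward]
    index = false

    [tcpout]
    defaultGroup = primary_indexers
    forwardedindex.filter.disable = true
    indexAndForward = false

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997

Listing all three indexers in the server setting gives the SH the same automatic load balancing as a regular forwarder.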
Your concern should be about the outputs.conf on the indexers, not the SHs.
outputs.conf on the indexer? Where would the indexer send its data to?
1) The outputs.conf difference on the SHs could be normal; it just depends on how they are configured.
Try: splunk btool outputs list --debug
Maybe the outputs.conf is in a different place; the --debug output shows which file each setting comes from.
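As an illustration (paths below are the defaults and may differ in your install), you could run this on each search head to see every effective outputs.conf setting together with its source file, and to find any stray copies of the file in app directories:

    cd $SPLUNK_HOME
    ./bin/splunk btool outputs list --debug
    find etc -name outputs.conf

Comparing the results from the two SHs will show whether the "missing" outputs.conf on the slave is simply living under an app instead of etc/system/local.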
2) Your indexer load distribution.
If one indexer is getting more data than the others, it could be a couple of things: