Hi
is there an easy way to find forwarders that are not sending data to all available indexers? We see that some indexers have more data than others, and we would like to find out why so we can take countermeasures and improve the overall performance of our Splunk installation.
Regards
Chris
The following query should give you a list of indexers not receiving data from all your forwarders, which I think might help as a start:
index=_internal sourcetype=splunkd source=*metrics.log group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) [
| dbinspect index=*
| stats count by splunk_server
| fields - count
| rename splunk_server as host
]
| stats values(hostname) as Forwarders, dc(hostname) as Count by host
| rename host as Indexer
| eventstats max(Count) as Max_Count
| where Count < Max_Count
If you swap host with hostname, what you get instead is a list of forwarders not sending to the maximum number of indexers available in your deployment:
index=_internal sourcetype=splunkd source=*metrics.log group=tcpin_connections (connectionType=cooked OR connectionType=cookedSSL) [
| dbinspect index=*
| stats count by splunk_server
| fields - count
| rename splunk_server as host
]
| stats values(host) as Indexers, dc(host) as Count by hostname
| rename hostname as Forwarder
| eventstats max(Count) as Max_Count
| where Count < Max_Count
Hope that helps.
Regards,
J
Insert your servers in a lookup with the column header host and then run this search:
| inputlookup perimeter.csv | eval count=0 | append [ search index=_internal | stats count by host ] | stats sum(count) AS Total BY host | where Total=0
In this way you can find the servers in your lookup that didn't connect.
Be careful with the name of the column in the lookup: it must be host, or you have to rename it before the append command.
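For reference, a minimal perimeter.csv for this search could look like the following (the hostnames are just placeholders; use the host values as they appear in your _internal data):

host
server01.example.com
server02.example.com
server03.example.com

The eval count=0 on the lookup rows combined with stats sum(count) BY host is what makes silent hosts show up with Total=0, since they contribute only the zero row from the lookup.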
If you want, you can create a Dashboard to display the server status, to do this follow the indication in https://answers.splunk.com/answers/454346/splunk-dashboard-widget-to-display-the-state-of-se.html.
Bye.
Giuseppe