Heavy Forwarder Thruput

ephemeric
Contributor

Greetz,

When using the SoS app with _internal events forwarded from our heavy forwarders, I get no results under S.o.S - Splunk on Splunk > Indexing Performance for "Estimated indexing rate" and "Fill ratio of data processing queues".

Upon inspecting the search I see we get no "group=per_sourcetype_thruput" in the metrics.log.

This metric does appear on our indexers, but that's not the thruput we want to see.
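
For reference, a quick way to check which hosts report this metric at all is a search along these lines (just a sketch, adjust the time range as needed):

    index=_internal source=*metrics.log* group=per_sourcetype_thruput
    | stats count by host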

We are "indexing" in memory on our heavy forwarders in order to save bandwidth by discarding

events at the collector.
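
For context, the discarding itself is done with the standard nullQueue routing, roughly like the sketch below (the sourcetype stanza and regex are placeholders, not our actual config):

    # props.conf on the heavy forwarder
    [my_noisy_sourcetype]
    TRANSFORMS-discard = discard_noise

    # transforms.conf
    [discard_noise]
    REGEX = DEBUG
    DEST_KEY = queue
    FORMAT = nullQueue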

We would like to see whether any events are being dropped at the collector, whether queues are blocked, etc. The reason is that we have several inputs from a 100Mbit LAN into the collector and output via a 2Mbit WAN link upstream to our three indexers.

Is it possible to get this from SoS in this configuration?

Thank you.

hexx
Splunk Employee

The lack of per_*_thruput metrics on heavy-weight forwarders is due to a core Splunk bug which will be fixed in a future release - SPL-68318.

ephemeric
Contributor

Thank you, that's what I was looking for. Confirmed.

hexx
Splunk Employee

No, this is because heavy-weight forwarders do not record per_*_thruput metrics. You should be able to see events recording the queue sizes, though.
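
For example, a search along these lines should show the queue fill reported by your forwarder (the host is a placeholder and the exact field names can vary slightly by version):

    index=_internal host=<your_hwf> source=*metrics.log* group=queue
    | eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
    | timechart max(fill_pct) by name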

ephemeric
Contributor

Yes and yes.

I get some search results like "Estimated percentage of total CPU used per Splunk processor".

"per_sourcetype_thruput" is not found even in the metrics.log on the heavy forwarder.

I'm thinking the reason is that indexing only happens in memory on the forwarder and the data is only written to disk on the receiving indexer?
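
To double-check what the forwarder does emit, a search along these lines lists the metric groups present (host is a placeholder):

    index=_internal host=<your_hwf> source=*metrics.log*
    | stats count by group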

hexx
Splunk Employee

Are you sure that you are forwarding the _internal events from your HWF to your indexers? Also, are you sure that you added your forwarder to the splunk_servers_cache.csv lookup file under the right hostname?
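
If not, forwarding of the internal indexes from a full Splunk instance is controlled by the forwardedindex filters in outputs.conf; a minimal sketch, assuming your version's defaults do not already forward _internal:

    # outputs.conf on the heavy forwarder
    [tcpout]
    forwardedindex.0.whitelist = .*
    forwardedindex.1.blacklist = _.*
    forwardedindex.2.whitelist = (_audit|_internal)
    forwardedindex.filter.disable = false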

MuS
SplunkTrust

Hi ephemeric

Check the metrics.log of your heavy forwarder for something like tcpoutput or tcp-output-generic-processor; this is where your data gets sent to the indexer. This happens via the indexQueue, so you would see trouble or blocks on the indexQueue if your WAN link could not handle the traffic.
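
For example, something like this should surface blocked queues on the heavy forwarder (the host is a placeholder):

    index=_internal host=<your_hwf> source=*metrics.log* group=queue blocked=true
    | stats count by name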

hope this helps

cheers,

MuS

ephemeric
Contributor

Thank you, lemme check...

ephemeric
Contributor

Or even just any tips and advice on how to get metrics in this type of setup, with several "high-speed" LAN inputs being forwarded via slow WAN uplinks through a heavy forwarding Splunk instance.
