I have an example of this in the savedsearches.conf of the SplunkAdmins app that I created. My search is similar to burwell's, but I wrote my version as:
index=_internal "has reached maxKBps. As a result, data forwarding may be throttled" sourcetype=splunkd
| stats count AS countPerHost by host
| where countPerHost > 1
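To run this as a scheduled alert, the search can be wrapped in a savedsearches.conf stanza. The stanza name, schedule, and time window below are only illustrative assumptions, not the exact settings shipped in the SplunkAdmins app:

```ini
# Hypothetical savedsearches.conf stanza; name and schedule are examples only
[Forwarders hitting maxKBps]
search = index=_internal "has reached maxKBps. As a result, data forwarding may be throttled" sourcetype=splunkd \
| stats count AS countPerHost by host \
| where countPerHost > 1
dispatch.earliest_time = -4h
dispatch.latest_time = now
cron_schedule = 0 * * * *
```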
Here's what I do to detect throttled forwarders. I have a scheduled search for last 4 hours (-240m to now) and then alert for any events:
index=_internal " INFO " " throttled" NOT debug source=*splunkd.log* | dedup host |sort host| table host _raw
This gives me a nice table with one row per host, so I can see which hosts are being throttled and at what throughput. Example output:
foo1.host.com 10-22-2017 18:26:28.131 +0000 INFO ThruputProcessor - Current data throughput (258 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf.
foo2.host.com 10-22-2017 18:29:28.324 +0000 INFO ThruputProcessor - Current data throughput (512 kb/s) has reached maxKBps. As a result, data forwarding may be throttled. Consider increasing the value of maxKBps in limits.conf.
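The log message itself suggests the fix: raise maxKBps in limits.conf on the throttled forwarder. The stanza below is the standard one; the value of 512 is only an example, and 0 means unlimited:

```ini
# limits.conf on the forwarder (512 is an example value; 0 = unlimited)
[thruput]
maxKBps = 512
```

A restart of the forwarder is needed for the change to take effect.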
I'm not sure of a Splunk query; I currently have no access to prod servers to form or test one. My search is from a similar post.
A good approach is to look at metrics.log on the forwarder itself (check it locally; by default these internal logs are not forwarded, so they won't be searchable from the indexer).
If you see that the forwarder is constantly hitting the thruput limit, you can increase it, and check back.
cd $SPLUNK_HOME/var/log/splunk
grep "name=thruput" metrics.log    # or metrics.log* to include rotated files
Example: the instantaneous_kbps and average_kbps values are always under 256 KBps.
11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122
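The grep-and-eyeball step above can be sketched as a small shell pipeline that counts how often the forwarder is at or near its thruput ceiling. The 256 KBps limit, the 95% threshold, and the sample log file are all assumptions for illustration:

```shell
# Minimal sketch: count thruput samples near the limit in metrics.log.
# LIMIT (256 KBps default) and the sample file path are assumptions.
LIMIT=256
cat > /tmp/metrics_sample.log <<'EOF'
11-19-2013 07:36:01.398 -0600 INFO Metrics - group=thruput, name=thruput, instantaneous_kbps=251.790673, instantaneous_eps=3.934229, average_kbps=110.691774, total_k_processed=101429722, kb=7808.000000, ev=122
EOF

# Pull the instantaneous_kbps field and count samples within 95% of the limit
grep "name=thruput" /tmp/metrics_sample.log \
  | awk -v limit="$LIMIT" -F'instantaneous_kbps=' \
      '{split($2, a, ","); if (a[1] >= limit * 0.95) hits++}
       END {printf "samples near limit: %d\n", hits+0}'
```

If the count is high relative to the total number of thruput samples, the forwarder is constantly pinned at the limit and raising maxKBps is worth trying.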
See this guide on how to check the thruput limit in metrics.log:
http://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Troubleshootingeventsindexingdela...