I have an indexer getting data from 24 hosts. We were well within our quota
until two hosts were added that, for whatever reason (misconfiguration, extremely
busy, etc.), are sending many GB. I have no control over these forwarders; I have to
wait for their admins to fix/reconfigure them. To keep from going over quota, I've
disabled port 9997 on the indexer until I can touch base with those admins. But is
there a way to stop accepting data from just those two offenders without shutting off
the other 24 forwarders? I'm at version 4.3, if that matters.
You could always block those hosts from port 9997 using iptables on the indexer ...
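For example, a minimal sketch run on the indexer, assuming the two offending forwarders are at 10.1.2.3 and 10.1.2.4 (placeholder addresses):

# Drop inbound forwarder traffic on port 9997 from the two offenders only
iptables -A INPUT -p tcp -s 10.1.2.3 --dport 9997 -j DROP
iptables -A INPUT -p tcp -s 10.1.2.4 --dport 9997 -j DROP

All other forwarders can still reach port 9997, and the blocked forwarders will simply keep retrying until the rules are removed.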
Splunk should have a configurable way to do this at the indexer level. A rogue forwarder can easily take out an indexer by sending crap data or large volumes of data.
Thanks for the answers. FWIW, these were IIS logs being written to the default index.
I used iptables to shut 'em down.
Need a little more information - are these hosts writing to a specific index? Is there a specific file source that's causing the issues?
If you know the host name, you can do something like this on the indexer side:
transforms.conf:

# Route any event matching REGEX to the null queue, i.e. discard it before indexing
[block_transform]
REGEX = DEBUG\s\[
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

# Apply the transform to every event arriving from this host
[host::yourservername]
TRANSFORMS-bad_log = block_transform
This was taken from http://splunk-base.splunk.com/answers/11617/route-unwanted-logs-to-a-null-queue
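Note that the REGEX in that example only discards DEBUG lines; to drop everything arriving from the offending hosts, the regex can be widened to match any event. A sketch, with the host names as placeholders:

transforms.conf:

# REGEX = . matches any event, so everything goes to the null queue
[drop_all_events]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[host::offender1]
TRANSFORMS-drop = drop_all_events

[host::offender2]
TRANSFORMS-drop = drop_all_events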
This is one way to do it, but it still consumes your indexer's resources to parse that data before discarding it. It's cheaper to block it from ever reaching your indexer.