We run a PCI environment with over 300 Splunk servers in stores all over the country. Because of PCI requirements, we have to log every event that passes through our firewalls. We want to monitor splunkd.log on the store servers for errors.
We enabled TCP routing for the _internal index, and the firewall events generated by the increased Splunk traffic roughly tripled our license consumption and pushed us over the limit.
I would like to know if it is possible to either a) route the _internal index but filter the events that are forwarded, or b) set up a summary index for error events and forward only that.
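For reference, option (a) is doable if the store servers are full Splunk instances or heavy forwarders (a universal forwarder cannot filter on raw event content). You can send non-error splunkd.log events to the nullQueue before they reach the output queue. A sketch, assuming a standard nullQueue routing setup; the stanza names (`setnull`, `keep_errors`) are illustrative and the REGEX should be adjusted to your splunkd.log format:

```
# props.conf (on the store server)
[source::.../var/log/splunk/splunkd.log]
TRANSFORMS-filter = setnull, keep_errors

# transforms.conf
# First send everything to the nullQueue...
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then put error/warn events back on the index queue.
[keep_errors]
REGEX = \b(ERROR|WARN)\b
DEST_KEY = queue
FORMAT = indexQueue
```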
I tried to configure option (b), but no events showed up. Here is what I did:
indexes.conf:
[splunk_errors]
coldPath = $SPLUNK_HOME/splunk_errors/colddb
homePath = $SPLUNK_HOME/splunk_errors/db
maxTotalDataSizeMB = 50000
thawedPath = $SPLUNK_HOME/splunk_errors/thaweddb
inputs.conf:
[monitor://$SPLUNK_HOME/var/log/splunk/splunkd.log]
index = _internal
outputs.conf:
[tcpout]
defaultGroup = 192.168.2.230_9997
disabled = false
indexAndForward = 0
forwardedindex.0.whitelist = splunk_errors
[tcpout:192.168.2.230_9997]
server = 192.168.2.230:9997
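One thing to check in the outputs.conf above: the forwardedindex.&lt;n&gt; filters are evaluated as an ordered chain, with later entries overriding earlier ones, and Splunk ships defaults at positions 0 through 2 (including a blacklist for internal indexes). Overriding only position 0 leaves the other defaults in play. If the intent is to forward only the summary index, it may be clearer to spell out the whole chain. A sketch, assuming clearing a position with an empty value disables the shipped default at that position, which you should verify against your version's outputs.conf reference:

```
[tcpout:192.168.2.230_9997]
server = 192.168.2.230:9997
forwardedindex.filter.disable = false
forwardedindex.0.whitelist = splunk_errors
forwardedindex.1.blacklist =
forwardedindex.2.whitelist =
```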
savedsearches.conf:
[Splunk Errors Last 5 Minutes]
action.summary_index = 1
action.summary_index._name = splunk_errors
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m@m
dispatch.latest_time = now
displayview = flashtimeline
enableSched = 1
realtime_schedule = 0
request.ui_dispatch_view = flashtimeline
search = index="_internal" " error " NOT debug source="*splunkd.log*" | table _time,host,error,log_level,message
vsid = gjc1xvvd
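As a sanity check before relying on the schedule, you can exercise the summary-index write by hand with the collect command, which is what action.summary_index uses under the hood. A sketch; `log_level` and `component` are the fields Splunk normally extracts from splunkd.log events in _internal, but verify them in your environment:

```
index=_internal source="*splunkd.log*" log_level=ERROR
| table _time, host, log_level, component, message
| collect index=splunk_errors
```

If events appear when run manually but not from the scheduled search, check that the splunk_errors index actually exists on the instance where the search runs.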
Any ideas?
That log is already being forwarded by Splunk on that server, so you cannot forward it a second time without installing another Splunk instance on the same server (which I would advise against). Instead, you might look at re-forwarding, as described here:
http://answers.splunk.com/answers/97379/forward-subset-of-logs-via-tcp-routing.html
http://docs.splunk.com/Documentation/Splunk/6.2.3/Forwarding/Enableareceiver
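The re-forwarding approach in those links boils down to: keep indexing locally on the store server and let that indexer forward data on to a central receiver, which must be enabled to receive. A sketch, with the group name `central` being illustrative:

```
# outputs.conf on the store indexer
[indexAndForward]
index = true

[tcpout]
defaultGroup = central

[tcpout:central]
server = 192.168.2.230:9997

# inputs.conf on the central instance (192.168.2.230),
# enabling it as a receiver on port 9997
[splunktcp://9997]
disabled = 0
```

Combined with an index filter like the one discussed above, this lets the store server forward only the subset you care about.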