
Unable to read Snort alert.full using universal forwarder

att35
Builder

Hi,

I am trying to bring Snort logs into Splunk using a universal forwarder. The forwarder installation on the Snort box went fine, and I followed the steps in http://docs.splunk.com/Documentation/Splunk/6.1.2/Forwarding/Deployanixdfmanually

I configured inputs.conf under /opt/splunkforwarder/etc/apps/SplunkUniversalForwarder/default:

[monitor:///var/log/snort/eth2/alert.full]
disabled = false
sourcetype = snort_alert_full
source = snort
index = _internal

I verified that the Snort alert file is being generated correctly, and I configured the receiver on Splunk Enterprise to listen on port 9997.
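
For context, the standard way to wire a universal forwarder to a receiver on 9997 is roughly the following (the indexer address below is just a placeholder for the Splunk Enterprise host):

# outputs.conf on the forwarder, e.g. /opt/splunkforwarder/etc/system/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# replace with the real Splunk Enterprise host or IP
server = 192.168.1.10:9997

On the Splunk Enterprise side, receiving can be enabled under Settings > Forwarding and receiving, or from the CLI with /opt/splunk/bin/splunk enable listen 9997.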

After starting all the services, no logs show up in Splunk. I checked with tcpdump, and the forwarder never sent any data. How can I troubleshoot this further?

Is the forwarder unable to read the file, or is something on the box preventing the logs from going out? I verified connectivity and iptables on both sides; they have the required rules. Am I editing the correct inputs.conf file?
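
In case it helps, these are the kinds of checks available from the forwarder side (paths assume the default /opt/splunkforwarder install):

# confirm the forwarder knows about the receiver and whether the connection is up
/opt/splunkforwarder/bin/splunk list forward-server

# confirm the monitor stanza survives config merging
/opt/splunkforwarder/bin/splunk btool inputs list --debug

# look for tailing or forwarding errors in the forwarder's own log
grep -i -E "TailingProcessor|TcpOutputProc|alert" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50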

OS: CentOS 6.4 on both the Snort box and the Splunk Enterprise server.

Many Thanks,

Abhi


derekarnold
Communicator

You should put your edits under $SPLUNK_HOME/etc/system/local/, or better yet, create an app directory for them. If you edit files under a default directory, your changes will be wiped out on your next version upgrade.

With that said, is there a particular reason you're using index = _internal? That index is meant for Splunk's own diagnostics and troubleshooting data, not for application data, and I don't think sending Snort events to _internal will work. Something closer to what you want is sketched below.
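
As a sketch (the app name snort_ta and the index name snort here are only examples; the index has to exist on the indexer), the monitor stanza could live in an app's local directory on the forwarder:

# /opt/splunkforwarder/etc/apps/snort_ta/local/inputs.conf
[monitor:///var/log/snort/eth2/alert.full]
disabled = false
sourcetype = snort_alert_full
source = snort
index = snort

Create the snort index on the Splunk Enterprise side (Settings > Indexes) before restarting the forwarder, otherwise the events will have nowhere to land.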
