All Apps and Add-ons

Heavy forwarder Cisco IronPort web proxy log

ashabc
Contributor

I have two servers: one is the indexer and the other is a heavy forwarder. I have set up syslog forwarding successfully from the heavy forwarder to the indexer.

I now want the IronPort proxy appliance logs to be dumped onto the heavy forwarder (via FTP to a folder, then using inputs.conf to pick up files from that folder) and then forwarded to the indexer.

My question is:

  1. Is it possible?
  2. If yes, do I need the apps on the heavy forwarder?

What I have noticed is that if I don't have the Splunk Cisco Security Suite and IronPort Web Proxy apps installed on the heavy forwarder, I get no data, only a blank line. On the other hand, if I have the above two apps installed on the heavy forwarder, I get one step further: I get only header information from the log in the index, and no data.

Obviously, both apps are already installed on indexer.

If I set up the proxy appliances to send logs directly to the indexer, it works fine. But I want the raw data to go to the heavy forwarder and then be indexed on the indexer.


jtacy
Builder

You're right that you do need the WSA app on the heavy forwarder since it has some props entries used in parsing. Probably not required on the indexer but it won't hurt. Since you got this working on your indexer, my guess is that you have some config on the heavy forwarder that's causing the sourcetype, destination index, or both, to be modified. Maybe a props entry based on source, or an ambiguous inputs stanza? I haven't noticed any tricks to routing WSA logs through a heavy forwarder but due to the bursty nature of the log uploads I use a dedicated forwarder and wouldn't have run into app conflicts.
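As a hypothetical illustration (the stanza and path below are invented, not taken from your config), a leftover source-based props entry on the heavy forwarder could silently rewrite the sourcetype before the WSA parsing rules ever run:

```
# props.conf on the heavy forwarder -- hypothetical conflicting entry
[source::/var/splunk/uploads/*]
sourcetype = syslog
```

Running `splunk btool props list cisco_wsa_squid --debug` on the forwarder shows which app each effective setting comes from, which can expose conflicts like this.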

jtacy
Builder

In addition to helping to ensure that files are completely written to before being read and deleted by the batch input, using this approach also lets you inspect the logs in cases like this. Getting headers and no data is just weird; I think being able to see the files would be helpful here. Oh, I also get a lot of files with names starting with tmon. that only have headers and I wrote the batch input to ignore those; are you sure you aren't indexing those as cisco_wsa_squid?
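If a batch stanza watches the whole directory rather than an aclog* pattern, one way to keep those header-only files out of the index is a blacklist on the input. This is a sketch assuming the tmon. prefix and that the blacklist setting applies to your batch stanza:

```
# inputs.conf -- skip files whose names start with "tmon."
[batch:///var/splunk/APDR-DMZPROXY01]
blacklist = (^|/)tmon\.
```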


jtacy
Builder

#!/bin/bash

set -eu

SRC_PATH="/opt/splunk/bigdata/inputs/wsa"
TGT_PATH="/opt/splunk/bigdata/inputs/wsabatch"

# One subdirectory per proxy host under SRC_PATH.
DIRS=($(find "$SRC_PATH" -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -n 1 basename))

# Note that all target directories must exist...
for DIR in "${DIRS[@]}"; do
    # Move all files whose inode was modified at least one minute ago...
    find "$SRC_PATH/$DIR" -type f -cmin +1 -print0 | xargs -0 -r -I file mv -t "$TGT_PATH/$DIR" -- file
done


jtacy
Builder

Here's what I use on the forwarder (is interval a valid option for a batch input?):

[batch:///opt/splunk/bigdata/inputs/wsabatch/*/aclog*]
disabled = false
move_policy = sinkhole
host_segment = 6
crcSalt =
sourcetype = cisco_wsa_squid
index = wsa

I also decided to have the WSA send its files over SCP to a folder that isn't directly monitored by Splunk, then I move the files to a folder that's watched by a batch stanza if they haven't been written to in over a minute. I create two identical directory structures then run a script with cron every minute. I'll post that separately.
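The parallel trees can be set up once, ahead of time. This is a sketch (host names invented, demonstrated in a temp directory rather than the real /opt/splunk paths):

```shell
# Create matching per-host directory trees for the SCP landing zone and the
# batch-watched folder. A temp dir stands in for /opt/splunk/bigdata/inputs.
BASE=$(mktemp -d)
SRC_PATH="$BASE/wsa"        # WSA uploads land here (not watched by Splunk)
TGT_PATH="$BASE/wsabatch"   # batch stanza watches here
for host in proxy01 proxy02; do
    mkdir -p "$SRC_PATH/$host" "$TGT_PATH/$host"
done
```

The mover script can then run from cron every minute, e.g. a crontab line like `* * * * * /usr/local/bin/move_wsa_logs.sh` (script path hypothetical).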


ashabc
Contributor

This is what my inputs file looks like on the forwarder. I use batch (so that I don't keep logs that are already indexed), but I tried with monitor as well, with no difference.

It's definitely going to the right index. I think you are right that parsing is potentially the issue. However, I do not have any other apps installed on the forwarder besides the two apps required for WSA parsing (Cisco Security Suite and IronPort Web Security).

[batch:///var/splunk/APDR-DMZPROXY01]
disabled=false
host_segment=3
sourcetype=cisco_wsa_squid
index=webproxy
interval=60
move_policy=sinkhole
crcSalt=


delink
Communicator

You will need to configure the indexer to receive forwarded data in its inputs.conf configuration. This is usually just a line like:


[splunktcp://:9997]

Once you have that in place, you will also need to have an outputs.conf file on the forwarder to tell it to send data to the indexer. This may look something like this:


[tcpout:splunk]
server = splunkindexer.example.com:9997

From here, you just need to have an inputs.conf on the forwarder to watch the directory that the FTP process writes the log files to. This will vary depending on how these logs were coming in, but this would be a start (if you haven't created a specific index for your logs, just leave the line out to use the default index):


[monitor:///var/log/ironport]
sourcetype = cisco_wsa_squid
index = proxy

You should then be able to see events in your indexer by searching for the sourcetype in splunkweb to verify it is working.
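For example, a search along these lines (index name taken from the stanza above) should start returning events once forwarding works:

```
index=proxy sourcetype=cisco_wsa_squid
| head 10
```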


ashabc
Contributor

Thank you for your response.

What you mentioned, I already have all of that in place.

As I mentioned, it works for the syslog sourcetype; I am only having issues with the cisco_wsa_squid sourcetype.
