Splunk Search

Events’ “date” field is the same for all events collected.

ribentrop
Explorer

Hi, splunkers! Some strange search results have me stuck. There is a Splunk cluster in the customer’s environment (a search head and two indexers in a cluster). We are trying to remotely monitor a folder containing archived Blue Coat logs and forward them to the indexers.
One log file is placed in the remote folder per day. For example, the log file for events that happened on 21-11-2013 is named SG_main__101121220000.log.gz. Its contents look like this:

#Software: SGOS 6.2.14.1
#Version: 1.0
#Start-Date: 2013-11-16 22:00:00
#Date: 2013-10-19 09:52:24
#Fields: date time time-taken c-ip cs-username cs-auth-group x-exception-id sc-filter-result cs-categories cs(Referer) sc-status s-action cs-method rs(Content-Type) cs-uri-scheme cs-host cs-uri-port cs-uri-path cs-uri-query cs-uri-extension cs(User-Agent) s-ip sc-bytes cs-bytes x-virus-id x-bluecoat-application-name x-bluecoat-application-operation
"#"Remark: 3911140033 "BC9000-1.domain.ru" "10.64.48.10" "main"
2013-11-21 08:54:12 1 x.x.x.x - - authentication_failed PROXIED "File Storage/Sharing" -  407 TCP_DENIED CONNECT - tcp client70.dropbox.com 443 / - - - y.y.y.y 339 103 - "Dropbox" "none"

(The header lines actually begin with # rather than "#" – there was a forum display incompatibility; now fixed. [lg])

I modified TA-bluecoat so that all fields are correctly mapped to the CIM. I adjusted props and transforms so that lines starting with “#” are ignored. When I copy some log files to a local folder on the indexer and configure the indexer’s inputs.conf to monitor that folder, I see the correct field mapping at search time, and the events from different days have different “date” fields.
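Roughly, the kind of props/transforms change I mean looks like this (the sourcetype name bluecoat_proxy is only a placeholder for illustration, not the exact name from the TA):

props.conf (on the indexers):
[bluecoat_proxy]
TRANSFORMS-drop_header_lines = bluecoat_drop_header_lines

transforms.conf (on the indexers):
[bluecoat_drop_header_lines]
REGEX = ^#
DEST_KEY = queue
FORMAT = nullQueue

This routes any line beginning with # to the nullQueue, so the header/remark lines never get indexed.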

But when I start forwarding the same logs from the remote server with a Universal Forwarder, something happens that I can’t explain: the events from the three log files (anything older than 3 days is ignored) all have the same “date” field as the events in the first log file that was forwarded (the first file processed is the oldest one – I can see it in splunkd.log on the UF). All three files are processed; I can see that in splunkd.log.

Then comes another miracle… When I change the ignoreOlderThan parameter to ignoreOlderThan=4d, the fourth (oldest) file is parsed as well, but now in search all the events for the last 4 days show the date of the last-read (4th, oldest) log file… So the “date” field is changed at search time? How can that be possible?
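For clarity, the monitor stanza on the UF looks roughly like this (the path and sourcetype here are placeholders, not the exact values from the customer’s environment):

inputs.conf (on the UF):
[monitor:///opt/logs/bluecoat]
sourcetype = bluecoat_proxy
ignoreOlderThan = 3d
disabled = false

Only ignoreOlderThan changes between the two illustrations below (3d vs. 4d).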

Now, illustration #1:

ignoreOlderThan = 3d
The following log files are processed:

SG_main__100313220000.log.gz, date modified: 13 March 2014, contains events for 13 March 2014 (the 1st file processed)

SG_main__100314220000.log.gz, date modified: 14 March 2014, contains events for 14 March 2014 (the 2nd file processed)

SG_main__100315220000.log.gz, date modified: 15 March 2014, contains events for 15 March 2014 (the 3rd file processed)

All the events have a “date” field of 13 March 2014…

Now, illustration #2:

ignoreOlderThan = 4d

Now the new file processed is SG_main__100312220000.log.gz, date modified: 12 March 2014, which contains events for 12 March 2014.

When searching, all the events for the 4 days have a date of 12 March 2014…

Placing props and transforms on the UF doesn’t help, and it shouldn’t be needed there, I suppose.

So how can it be that events from log files monitored on the indexer itself get the correct dates, while events from the same log files monitored via the UF all get the date of the oldest file?
Thanks


ribentrop
Explorer

Sorry for the delayed answer.
The likely reason is that we were not aware of the log volume. A single-day Blue Coat log is so large that it is not possible for the UF to forward it in a short period; it takes more than 24 hours to forward one day’s log file at the default 256 KBps throughput limit.
Unfortunately, I have only occasional access to the customer’s environment to retrieve the .conf files.
But I believe my explanation may well be correct. We did not expect speedy forwarding: the Blue Coat logs turned out to be too large for the customer’s environment, and the throughput was lowered to 32 KBps to temporarily keep the license from being violated.
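The throttling was applied roughly like this in limits.conf on the UF (32 instead of the default 256):

limits.conf (on the UF):
[thruput]
maxKBps = 32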


lguinn2
Legend

It would be helpful to see the following files from the forwarder: inputs.conf and props.conf if it exists on the forwarder.
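If access is limited, one quick way to capture the effective settings in a single pass is btool on the forwarder, for example (the output file names are just examples):

cd $SPLUNK_HOME/bin
./splunk btool inputs list --debug > /tmp/inputs_effective.txt
./splunk btool props list --debug > /tmp/props_effective.txt

The --debug flag shows which .conf file each setting comes from.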
