We use Splunk to forward our logs to a central server.
On this server the logs are written to a local file via a syslog stanza in outputs.conf.
Since we upgraded from 4.2.5 to 5.0.15, the log lines in the local file have a second timestamp.
Before:
Jul 1 11:25:51 source-server-fqdn log-data
Now:
Jul 1 11:25:54 127.0.0.1 source-server-fqdn Jul 1 11:25:51 source-server-fqdn log-data
Somehow the central server adds its own timestamp, the destination of the syslog stanza, and the source server FQDN to every log line.
Where can this be changed?
Edit:
Adding some details of our configuration:
source-server outputs.conf:
[tcpout]
defaultGroup = GROUP_MANAGER
forwardedindex.3.blacklist = (_audit|_internal)
[tcpout:GROUP_MANAGER]
server = group_manag_fqdn:9006
sslCertPath = /path/server.pem
sslPassword = password
sslRootCAPath = /path/cacert.pem
sslVerifyServerCert = false
The data then goes through our GROUP_MANAGER.
group-manager inputs.conf
[splunktcp-ssl://:9006]
_TCP_ROUTING = COLLECTOR
[SSL]
password = password
requireClientCert = false
serverCert = /path/server.pem.pem
group-manager outputs.conf
[tcpout:COLLECTOR]
server = collector-fq-dn:9142
compressed = true
[tcpout]
forwardedindex.3.blacklist = (_audit|_internal)
And the COLLECTOR then writes it to a local file and forwards it to a Splunk indexer via syslog.
COLLECTOR inputs.conf
[splunktcp://:9142]
compressed = true
_SYSLOG_ROUTING = SYSLOG_LOCAL, SYSLOG_INDEXER
COLLECTOR outputs.conf
[syslog:SYSLOG_LOCAL]
server = 127.0.0.1:5141
type = tcp
[syslog:SYSLOG_INDEXER]
server = indexer-fq-dn:2514
type = tcp
The extra timestamp only appears in the local file written by the SYSLOG_LOCAL stanza.
On the indexer this extra timestamp does not appear.
I don't know specifically about 5.x, but it seems reasonable that it's behaving just like the 6.x docs say it does. In the outputs.conf docs, under the section syslogSourceType = <string>
(about halfway down), it says the data has to match a certain set of criteria to already be considered in syslog format. If it doesn't match that, the docs go on to say:
Data which does not match the rules has a header, optionally a timestamp (if defined in 'timestampformat'), and a hostname added to the front of the event. This is how Splunk causes arbitrary log data to match syslog expectations.
That looks like exactly what it's doing for you.
The rest of that little section in the docs gives some indication on what to do, so please see the syslog section of those same docs, and let us know what precisely you end up having to do to resolve this!
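As a rough sketch of what the docs suggest, and untested on 5.x: if the forwarded events already look like syslog, you could tell the COLLECTOR to treat them as such by setting syslogSourceType in the syslog stanza, so Splunk skips prepending its own timestamp/hostname header. The stanza names match the question's config; the sourcetype value my_sourcetype is a placeholder for whatever sourcetype the forwarded data actually carries:

```ini
# COLLECTOR outputs.conf -- sketch only, assumes events arrive
# with sourcetype "my_sourcetype" (hypothetical name)
[syslog:SYSLOG_LOCAL]
server = 127.0.0.1:5141
type = tcp
# Events whose sourcetype matches this rule are considered to
# already be in syslog format, so no header is added in front.
syslogSourceType = sourcetype::my_sourcetype
```

Whether this applies to both syslog stanzas or only SYSLOG_LOCAL would depend on why the indexer output is unaffected, so verify against the docs for your version before rolling it out.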
Thanks for the tip, I am having a look into it.
The only thing I notice is that we don't have a source file. We get the logs forwarded by another server running Splunk.
The timestamp is different in the second line; are you sure that it isn't your syslog server which adds the first part?
We didn't change anything on the syslog side; we only updated Splunk.
Also, 127.0.0.1 is the target server of the syslog stanza, so I think it does not happen in syslog.