I have a universal forwarder pulling in a log file from a Linux server. It had been working just fine until the other day. The Summary dashboard shows events coming in, but when I search for anything related to the source, the sourcetype, or anything within the events, I get "No matching events found". I can see events from other servers with no problem. Neither the infrastructure nor the Splunk config has changed. I am an admin, so I should be seeing everything. Any ideas?
Problem solved. Splunk was reading the log timestamp incorrectly. What's odd, however, is that the timestamp format never changed; Splunk was just suddenly unable to read it correctly. I was able to configure the program sending the log to include the year instead of just the month and day.
I was able to change the timestamp produced by the program, and Splunk now identifies it correctly. It's very odd, however, that it would stop reading the correct timestamp from one minute to the next when nothing changed in the program, the logs themselves, or Splunk.
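A quick way to see why adding the year helps: a parser given a timestamp with no year has to guess one. Python's strptime (used here only as a rough stand-in for Splunk's own timestamp parsing, which this example does not claim to reproduce) fills in a default year of 1900 when the format carries no year directive. The 2013 below is an illustrative value, not taken from the original logs.

```python
from datetime import datetime

# Without a year in the timestamp, strptime defaults to 1900;
# a real parser has to guess which year the event belongs to.
no_year = datetime.strptime("07/05-10:01:01", "%m/%d-%H:%M:%S")
print(no_year.year)  # 1900

# With the year included (hypothetical value), nothing is guessed.
with_year = datetime.strptime("2013/07/05-10:01:01", "%Y/%m/%d-%H:%M:%S")
print(with_year.year)  # 2013
```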
I added this to my props.conf, but without luck.
[sourcetype]
TIME_FORMAT=%m/%d-%H:%M:%S
The event comes in as:
07/05-10:01:01
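One way to sanity-check a TIME_FORMAT string against a sample event is Python's strptime, which uses the same strptime-style directive names (note the uppercase %H, %M, and %S for hour, minute, and second; the lowercase %h, %m, and %s mean different things in strptime notation):

```python
from datetime import datetime

sample = "07/05-10:01:01"

# %m = month, %d = day, %H = 24-hour clock, %M = minute, %S = second.
ts = datetime.strptime(sample, "%m/%d-%H:%M:%S")
print(ts.month, ts.day, ts.hour)  # 7 5 10
```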
Check TIME_FORMAT in props.conf.
That actually helped me figure it out. Apparently Splunk is now reading the timestamps on the logs differently and putting events on completely different days. For example, 07/05-08:25:43 is being read as May 8, 8:25:43; it's ignoring the month. I'll try to figure out how to change that. Thanks for the help.
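The underlying ambiguity is easy to reproduce: without an explicit format, a string like 07/05 can legitimately be read as either July 5 or May 7, so a parser that falls back to guessing can land on a completely different day. This sketch doesn't reproduce Splunk's exact misreading, but it shows how the same string yields different dates depending on the assumed field order:

```python
from datetime import datetime

s = "07/05-08:25:43"

# Month-first reading: July 5.
as_month_first = datetime.strptime(s, "%m/%d-%H:%M:%S")
# Day-first reading of the very same string: May 7.
as_day_first = datetime.strptime(s, "%d/%m-%H:%M:%S")

print(as_month_first.month, as_month_first.day)  # 7 5
print(as_day_first.month, as_day_first.day)      # 5 7
```

An explicit TIME_FORMAT removes the guesswork, which is why pinning it down in props.conf (and including the year in the timestamp itself) resolves this class of problem.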
Perhaps it's writing to a different index than the one(s) you're searching?