Splunk Search

How to configure Splunk to extract event timestamps from two different fields in each log?

cesar_tomas
Explorer

Hello everyone,

I have a problem with my timestamp fields. Splunk doesn't recognize the timestamp because it comes from two different fields. Here is my data from my own application:

date_reported | id tra | date_issue | time_issue | error message
20160211|101|20160210|13|111
20160211|195|20160210|124|111
20160211|273|20160210|1135|111
20160211|280|20160210|11136|111
20160211|307|20160210|140040|111
20160211|343|20160210|45|111

The first field is the date on which an issue was reported, the second field is the ID of the transaction that failed, the third field is the date on which the issue happened, the fourth field is the time when the issue happened, and the fifth field is the message number of the error. All the fields are separated by pipes (|).

I want the timestamp to be recognized from field 3 (year, month, day) and field 4 (time as HHMMSS), but the problem is that field 4 has its leading zeros stripped. For example, the records are:

20160211|101|20160210|13|111 the time is 00:00:13, but the leading 0000 is missing
20160211|195|20160210|124|111 the time is 00:01:24, but the leading 000 is missing
20160211|273|20160210|1135|111 the time is 00:11:35, but the leading 00 is missing
20160211|280|20160210|11136|111 the time is 01:11:36, but the leading 0 is missing
20160211|307|20160210|140040|111 the time is 14:00:40, which is fine as-is
20160211|343|20160210|45|111 the time is 00:00:45, but the leading 0000 is missing, and so on.

So how can I extract the correct timestamp?

Thanks in advance

Richfez
SplunkTrust

That's a very special way of doing time. Your best option, by far, is to just ask the owners of the app to fix this. They could likely write the times as zero-padded strings and the problem would go away; it's really not a big or hard change. But surely even they can look at that "timestamp" and see that something's not right. Sometimes asking nicely....

If that isn't possible...

Some up-front admissions and caveats: I'm completely happy if someone with a more thorough understanding of how this works wants to take what little I have here, fix it, clean it up, make it totally right, and then claim the answer as their own. I'm no expert, but maybe this will steer you in the right direction. I'm sure that if you can make your way work it will be better, even if only because it bends sanity into fewer pretzels, but I think (don't know, only think) that the replacements you are thinking of doing happen after timestamp extraction is done. Maybe you can get it with SEDCMD in props.conf, though. (That's a free tip for you!)
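In case you want to try that, a SEDCMD attempt might look roughly like the sketch below. It's purely illustrative - "my_app_logs" is a placeholder sourcetype name - and, per the caveat above, these replacements may well run too late in the pipeline to affect timestamping at all.

 # props.conf -- left-pad the fourth (time) field to six digits, one rule per current length
 [my_app_logs]
 SEDCMD-pad1digit = s/\|([0-9])\|([^|]*)$/|00000\1|\2/
 SEDCMD-pad2digit = s/\|([0-9]{2})\|([^|]*)$/|0000\1|\2/
 SEDCMD-pad3digit = s/\|([0-9]{3})\|([^|]*)$/|000\1|\2/
 SEDCMD-pad4digit = s/\|([0-9]{4})\|([^|]*)$/|00\1|\2/
 SEDCMD-pad5digit = s/\|([0-9]{5})\|([^|]*)$/|0\1|\2/

Each rule matches the second-to-last pipe-delimited field by its current length and pads it to six digits; once a line has been padded, none of the other rules can match that field again, so the order they run in shouldn't matter.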

But if that doesn't work, you will likely need to create a new datetime.xml (call it, perhaps, $SPLUNK_HOME/etc/system/local/mydatetime.xml) and reference that from a props.conf entry (search for DATETIME_CONFIG in the props.conf spec). Here's an answer where someone does that.
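The props.conf side of that might look something like this (again just a sketch; "my_app_logs" is a placeholder sourcetype name, and the path is interpreted relative to $SPLUNK_HOME):

 # props.conf, e.g. in $SPLUNK_HOME/etc/system/local/
 [my_app_logs]
 # Point timestamp recognition at the custom datetime.xml described below
 DATETIME_CONFIG = /etc/system/local/mydatetime.xml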

What you'll need in there is a bit less than well documented, and *there's a big problem I'll go into in more detail later on* that I don't have time to puzzle out or test right now, but hopefully it'll just work. Anyway, what I think you'll need is a pattern for each of the three timestamp shapes in there.

^[^|]+\|[^|]+\|(\d{4})(\d{2})(\d{2})\|(\d{1,2})\|
^[^|]+\|[^|]+\|(\d{4})(\d{2})(\d{2})\|(\d{1,2})(\d{2})\|
^[^|]+\|[^|]+\|(\d{4})(\d{2})(\d{2})\|(\d{1,2})(\d{2})(\d{2})\|

Those match, in order, year month day | seconds, year month day | minutes seconds, and year month day | hours minutes seconds (the leading time component allows one or two digits because of the stripped zeros). But where do those entries go?

Well, obviously, the file is $SPLUNK_HOME/etc/system/local/mydatetime.xml but, far less obviously, its contents might be something like:

 <datetime>
     <define name="hourminsec" extract="year, month, day, hour, minute, second">
         <text><![CDATA[^[^|]+\|[^|]+\|(\d{4})(\d{2})(\d{2})\|(\d{1,2})(\d{2})(\d{2})\|]]></text>
     </define>
     <define name="minsec" extract="year, month, day, minute, second">
         <text><![CDATA[^[^|]+\|[^|]+\|(\d{4})(\d{2})(\d{2})\|(\d{1,2})(\d{2})\|]]></text>
     </define>
     <define name="sec" extract="year, month, day, second">
         <text><![CDATA[^[^|]+\|[^|]+\|(\d{4})(\d{2})(\d{2})\|(\d{1,2})\|]]></text>
     </define>
     <timePatterns>
         <use name="hourminsec"/>
         <use name="minsec"/>
         <use name="sec"/>
     </timePatterns>
     <datePatterns>
         <use name="hourminsec"/>
         <use name="minsec"/>
         <use name="sec"/>
     </datePatterns>
 </datetime>

That, along with a lot of sweating and staring at docs pages and a liberal sprinkling of hope, may do it. Barely.

I just realized, "sweating" and "swearing" are only one letter apart. Hmm...

BUT THE PROBLEM - see, you knew I'd bring this back up! I don't know a) whether having no hour will work AT ALL, nor do I know b) if it does work, whether it substitutes in the CURRENT hour and minute or substitutes in zero. My guess is that IF it works at all, it'll use the current time. But it could also substitute in the modified time of the file - I really don't know, and it's not documented as far as I can tell. It could be worth a try, though.

somesoni2
Revered Legend

IMO, it is very difficult (maybe not possible) to use field 4 for the time. You mentioned that it's generated by your own application, so do you have any control over how it's logged? If yes, can you ensure that it's formatted correctly as HH:MM:SS?

cesar_tomas
Explorer

Hi,

Thanks for your answer. Yes, it is an application belonging to the company I work for, but unfortunately I am not the owner of the application, so I cannot modify this field.

I am thinking of using an eval function to do this:

| eval long_hor=len(time_issue) | eval tt=if(long_hor=2,"0000".tostring(time_issue), if(long_hor=3,"000".tostring(time_issue), if(long_hor=4,"00".tostring(time_issue), if(long_hor=5,"0".tostring(time_issue), if(long_hor=6,time_issue,"")))))

but I don't know how to use this function when defining a new sourcetype.
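A shorter search-time variation of that padding idea (just a sketch, assuming date_issue and time_issue are already extracted as fields; it only recomputes _time at search time and does not fix the indexed timestamp) would be something like:

 | eval padded_time = substr("000000".time_issue, -6)
 | eval _time = strptime(date_issue.padded_time, "%Y%m%d%H%M%S")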
