Hi All,
I am trying to extract the timestamps from the log file names (source) and then find how many logs are produced in each 5-minute span, using "timechart span=5min", but I couldn't get it to work. It seems timechart works only with the field _time, so I tried renaming log_time (extracted from the source) to _time:
| eval _time=log_time
But that failed.
The search works correctly up to the "| fields log_time log_type" command, but timechart seems to be failing.
I also tried chart, but span does not seem to work there either.
Can anyone please help me with this?
Source file names:
/var/log/we_accesslog_extsqu_10.10.10.01_20111121_233000_32365
/var/log/mms_export_e_wms_90_10.10.10.02_20111121_232500_09678
| metadata type=sources index="cds_*" NOT index="cds_sysl*"
| fields source
| dedup source
| rex field=source "^(?P<path>[^\_]+)\/(?P<log_type>[^\/1]+)\_(?P<ip>[^\_]+)\_(?P<date>[^\_]+)\_(?P<time>[^\_]+)"
| eval log_time=round(strptime(time, "%H%M%S"), 0) | convert timeformat="%H:%M:%S" ctime(log_time)
| eval log_date=round(strptime(date, "%Y%m%e"), 0) | convert timeformat="%e/%m/%Y" ctime(log_date)
| fields log_time log_type
| timechart span=5min count by log_type --------- not working
| chart count over log_time by log_type span=5min ---------- not working
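To make the intent concrete, here is a rough Python sketch (not Splunk SPL) of the same idea: pull log_type, date, and time out of the sample file names and count per 5-minute span. The regex below is an assumption of mine, anchored on the IP address rather than on underscore counting, because log_type itself contains underscores.

```python
import re
from collections import Counter

# The two sample sources from the question.
sources = [
    "/var/log/we_accesslog_extsqu_10.10.10.01_20111121_233000_32365",
    "/var/log/mms_export_e_wms_90_10.10.10.02_20111121_232500_09678",
]

# Hypothetical pattern: path / log_type _ ip _ date(8 digits) _ time(6 digits) _ pid
pattern = re.compile(
    r"^(?P<path>.*)/(?P<log_type>.+)_(?P<ip>\d+\.\d+\.\d+\.\d+)"
    r"_(?P<date>\d{8})_(?P<time>\d{6})_\d+$"
)

counts = Counter()
for src in sources:
    m = pattern.match(src)
    if not m:
        continue
    hh, mm = int(m["time"][:2]), int(m["time"][2:4])
    # Floor the minute to a 5-minute boundary, like "timechart span=5min".
    bucket = f"{hh:02d}:{mm - mm % 5:02d}"
    counts[(bucket, m["log_type"])] += 1

print(counts)
```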
What's not working with your second attempt using chart? It looks OK to me, but I wonder whether you checked that your log_time field was actually created correctly?
I simplified your search a bit:
| metadata type=sources index="cds_*" NOT index="cds_sysl*"
| dedup source
| rex field=source "^(?<path>[^_]+)/(?<log_type>[^/1]+)_(?<ip>[^_]+)_(?<date>[^_]+)_(?<time>[^_]+)"
| eval log_time=strptime(time, "%H%M%S")
(Note that log_time gets an epoch value. This way you can eval it to _time.)
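Python's datetime is a rough analog of the C-style strptime that Splunk uses, so the "epoch value" point can be illustrated there. One caveat I should flag as a difference: with a time-only format, Python fills the missing date fields with 1900-01-01, whereas Splunk assumes the current date.

```python
from datetime import datetime, timezone

# Parse an HHMMSS string, as the strptime(time, "%H%M%S") call does.
t = datetime.strptime("233000", "%H%M%S")

# The time-of-day fields are populated as expected.
print(t.hour, t.minute, t.second)  # 23 30 0

# But Python defaults the date to 1900-01-01 (Splunk would instead
# assume today's date), so the resulting epoch value is negative.
epoch = t.replace(tzinfo=timezone.utc).timestamp()
print(epoch)
```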
Then after that you could either use timechart:
| eval _time=log_time | timechart span=5m count by log_type
Or chart, using fieldformat to show a pretty timestamp instead of the actual epoch value:
| bucket log_time span=5m | fieldformat log_time=strftime(log_time, "%H:%M:%S") | chart count over log_time by log_type
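The bucket-plus-fieldformat combination can be mimicked outside Splunk: floor an epoch value to a 5-minute boundary, then format it only for display. A small sketch of that arithmetic in Python:

```python
import calendar
import time

SPAN = 5 * 60  # seconds in a 5-minute span, matching span=5m

def bucket_5min(epoch: int) -> int:
    # Floor to the start of the 5-minute span, like `bucket log_time span=5m`.
    return epoch - epoch % SPAN

# Build an epoch value for 2011-11-21 23:32:17 UTC (timegm is tz-independent).
epoch = calendar.timegm(time.strptime("20111121_233217", "%Y%m%d_%H%M%S"))
b = bucket_5min(epoch)

# Render the bucket the way fieldformat/strftime would.
print(time.strftime("%H:%M:%S", time.gmtime(b)))  # 23:30:00
```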
I think the problem is that you are assigning a human-readable time to _time, while it should be seconds since the epoch. The good news is you already have that from strptime(), i.e. you don't need those "convert ..." commands unless you want log_time and log_date for some other purpose.
Maybe this would work better (note the strptime format must match the raw YYYYMMDD_HHMMSS fragment extracted from the file name):
... | rex field=source "^(?P<path>[^\_]+)\/(?P<log_type>[^\/1]+)\_(?P<ip>[^\_]+)\_(?P<datetime>[^\_]+\_[^\_]+)" | eval _time=strptime(datetime, "%Y%m%d_%H%M%S") | timechart span=5min count by log_type
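Since the datetime fragment rex-ed out of these file names looks like 20111121_233000, the matching strptime format is "%Y%m%d_%H%M%S" (not a slash-and-colon layout such as "%d/%m/%Y_%H:%M:%S"). A quick round-trip check in Python, whose strptime is the same C-style one Splunk uses:

```python
import calendar
import time

# A datetime fragment as extracted from a sample source file name.
raw = "20111121_233000"

# "%Y%m%d_%H%M%S" matches that layout exactly.
parsed = time.strptime(raw, "%Y%m%d_%H%M%S")
epoch = calendar.timegm(parsed)  # epoch seconds, which is what _time expects

# Round-trip to confirm the format string really fits the data.
print(time.strftime("%Y%m%d_%H%M%S", time.gmtime(epoch)))  # 20111121_233000
```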
If you want to use only log_time (i.e. to sum all dates into one), then the bucket command could help you. Simply dropping the date from the above example will make Splunk assume the current date.
Looks like I waited a bit too long and got second place in this race 🙂