My event generator reports once per minute but, for efficiency, each report bundles 10 results, taken at 6-second intervals, into multi-value fields separated by a ";" delimiter.
For example:
TIMESTAMP,A_METRIC,B_METRIC, ...
20140501 07:00:00, a1;a2;a3;a4;a5;a6;a7;a8;a9;a10, b1;b2;b3;b4;b5;b6;b7;b8;b9;b10, ...
In reality, the events occurred as follows:
20140501 07:00:00, a1, b1,
20140501 07:00:06, a2, b2,
20140501 07:00:12, a3, b3,
20140501 07:00:18, a4, b4,
20140501 07:00:24, a5, b5,
20140501 07:00:30, a6, b6,
20140501 07:00:36, a7, b7,
20140501 07:00:42, a8, b8,
20140501 07:00:48, a9, b9,
20140501 07:00:54, a10, b10,
Is there a way to get Splunk to parse the data and properly assign the events to the sub-minute time stamps?
Or is the only alternative to pre-process the data prior to it arriving into Splunk, separating it into 6-second events?
Thanks!
I don't think this is possible at index time, so your events will be indexed as they arrive...
but it would be possible to do some post-processing at search time to separate them out and change the _time field for each event, using a combination of rex, eval, and mvexpand.
The question, though, is what you want to achieve by this.
Generally, these metrics will be used for calculations anyway, in which case there's no reason to differentiate them by the second: for something like stats avg(yourfield) you can simply split the values out with the mv commands.
For example:
[your search] | makemv delim=";" A_METRIC | makemv delim=";" B_METRIC | eval metrics=mvzip(A_METRIC,B_METRIC) | mvexpand metrics | rex field=metrics "(?<A_METRIC>[^,]+),(?<B_METRIC>.+)"
Thank you for your response. As it turns out, in my case I need to report on each 6-second interval. I ended up writing a pre-processor to take the feed as outlined above and expand it so that there is one event every 6 seconds.
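For anyone taking the same pre-processing route, a minimal sketch of that expansion in Python (the function name `expand_events`, the column layout, and the `%Y%m%d %H:%M:%S` timestamp format are assumptions based on the sample data above, not the poster's actual script):

```python
import csv
from datetime import datetime, timedelta

def expand_events(lines, interval_seconds=6):
    """Expand rows whose metric columns hold ';'-delimited multi-values
    into one row per sub-interval timestamp.

    Assumes column 0 is a '%Y%m%d %H:%M:%S' timestamp and every metric
    column carries the same number of ';'-separated values.
    """
    reader = csv.reader(lines)
    header = next(reader)
    out = [header]
    for row in reader:
        base = datetime.strptime(row[0].strip(), "%Y%m%d %H:%M:%S")
        # One list of values per metric column, e.g. [[a1, a2], [b1, b2]]
        metric_sets = [col.strip().split(";") for col in row[1:]]
        # zip(*...) regroups them per interval: (a1, b1), (a2, b2), ...
        for i, values in enumerate(zip(*metric_sets)):
            stamp = base + timedelta(seconds=i * interval_seconds)
            out.append([stamp.strftime("%Y%m%d %H:%M:%S"), *values])
    return out

raw = [
    "TIMESTAMP,A_METRIC,B_METRIC",
    "20140501 07:00:00, a1;a2;a3, b1;b2;b3",
]
for row in expand_events(raw):
    print(",".join(row))
```

Feeding the expanded output to Splunk then lets the normal timestamp extraction assign each event its own sub-minute _time.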