I have a situation where, within a 10-minute span, one of the sourcetypes may send no data for one interval but then start sending data again in the next interval. When that happens I am losing data in the summary index. Any suggestion would be helpful.
Here's a part of my query:
| metadata type=sources index=abc
| search source=random
| eval earliest=lastTime - 300
| eval latest=now()
| fields earliest latest
So this random source is collecting data from all the sourcetypes.
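One common workaround (a sketch, not something confirmed in this thread) is to schedule the summary-indexing search over a fixed, delayed window instead of deriving earliest from lastTime, so late-arriving events still fall inside a window that has not been summarized yet. The index name abc comes from your query above; the 5-minute delay and the use of sistats are assumptions about your setup:

```spl
index=abc earliest=-10m@m latest=-5m@m
| sistats count BY sourcetype
```

Scheduled every 5 minutes, each run covers a window that ends 5 minutes in the past, which gives events that extra time to be indexed before their interval is summarized.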
This is why most searches of this type run at least 5 minutes behind real time, preferably an hour or more. There is really no way around it. You can measure your indexing latency with a search like this:
| tstats max(_indextime) AS indextime WHERE index=_* OR index=* BY index sourcetype _time
| stats avg(eval(indextime - _time)) AS latency BY index sourcetype
| fieldformat latency = tostring(latency, "duration")
| sort 0 - latency
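If the latency reported by the search above turns out to be larger than the delay you can afford, another hedged option is to select events by index time rather than event time, so whatever arrived since the last scheduled run gets summarized regardless of its timestamp. Again, index=abc is taken from the question, and the 5-minute run interval and 24-hour lookback are assumptions:

```spl
index=abc earliest=-24h
| where _indextime >= relative_time(now(), "-5m@m")
| sistats count BY sourcetype
```

The wide earliest only bounds event time for performance; the where clause on _indextime is what actually picks out the newly indexed events.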
I don't really understand the question. Could you please elaborate or provide an example?