Getting Data In

How can I determine the timestamp of events I am *indexing* right now?

DerekB
Splunk Employee

How can I determine the timestamp of events I am indexing right now?

1 Solution

hexx
Splunk Employee

The following search will find all events that have been indexed in the past 60 seconds (note that this does not necessarily mean the events themselves occurred in the past 60 seconds) and show, for each indexer:

* The count of events indexed during the past minute
* The time stamp of the oldest event indexed during the past minute
* The time stamp of the newest event indexed during the past minute
* The median indexing latency, expressed as the difference in seconds between the event's time stamp (_time) and the time at which Splunk indexed it (_indextime)

index=* OR index=_* earliest=0 _indextime > [stats count
  | eval sixty_seconds_ago = now() - 60
  | return $sixty_seconds_ago]
| eval latency = _indextime - _time
| stats max(eval(now())) AS now count min(_time) AS earliest_time max(_time) AS latest_time median(latency) AS median_latency by splunk_server
| convert ctime(now) AS now
| convert ctime(earliest_time)
| convert ctime(latest_time)
| rename now AS "Current time" count AS "Event count" earliest_time AS "Earliest time stamp" latest_time AS "Latest time stamp" median_latency AS "Median indexing latency" splunk_server AS "Indexer"
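The per-indexer aggregation that the `stats` pipeline performs can be sketched outside Splunk. The following is a minimal Python illustration with made-up timestamps and a hypothetical `summarize` helper (not part of any Splunk API): it mimics the `_indextime > now() - 60` filter, then computes the count, earliest/latest _time, and median latency per server.

```python
import statistics

def summarize(events, now, window=60):
    """Mimic the SPL stats step: for events whose _indextime falls within
    `window` seconds of `now`, report per-server event count, earliest and
    latest _time, and median indexing latency (_indextime - _time)."""
    recent = [e for e in events if e[1] > now - window]
    by_server = {}
    for _time, _indextime, server in recent:
        by_server.setdefault(server, []).append((_time, _indextime))
    out = {}
    for server, rows in by_server.items():
        latencies = [it - t for t, it in rows]
        out[server] = {
            "count": len(rows),
            "earliest_time": min(t for t, _ in rows),
            "latest_time": max(t for t, _ in rows),
            "median_latency": statistics.median(latencies),
        }
    return out

# Hypothetical events as (_time, _indextime, splunk_server), epoch seconds.
events = [
    (1700000000, 1700000002, "idx1"),
    (1700000010, 1700000013, "idx1"),
    (1699999700, 1700000050, "idx1"),  # indexed just now, time-stamped 5 min ago
]
print(summarize(events, now=1700000060))
```

Note how the third event is kept even though its _time is five minutes old: the filter is on _indextime, which is the whole point of the search above.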


lukejadamec
Super Champion

Try

index=main | table _time | sort -_time

Run it in a real-time 30-second window, and sort the table by time.

lukejadamec
Super Champion

I guess you're right. I tested it, and my method does not work.


hexx
Splunk Employee

Because when you ask for a real-time search with a 30s time window, you are asking Splunk to only return events whose time stamp (_time) falls between the current time and 30s before that.


lukejadamec
Super Champion

I'm going to have to test your theory on some old data tomorrow. Why would the timeframe of the search be concerned with the values contained inside the search?

index=_internal | table _indextime,_time

Select Realtime 30 Second Window


hexx
Splunk Employee

The problem with your search is that a 30s real-time window effectively constrains the results to events whose time stamp is 30 seconds old or less. As such, an event with a time stamp going back 5 minutes will be ignored, failing to answer the question: what has been indexed in the past 30s?
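That distinction can be sketched in Python with hypothetical timestamps: a real-time window tests _time, while the question calls for a test on _indextime, and a late-arriving event passes one check but not the other.

```python
now = 1700000000

# One event indexed 10 seconds ago but carrying a time stamp 5 minutes old.
event = {"_time": now - 300, "_indextime": now - 10}

# What a 30s real-time window checks: is _time within the last 30 seconds?
in_realtime_window = now - 30 <= event["_time"] <= now

# What the question actually asks: was it *indexed* in the last 30 seconds?
indexed_in_last_30s = event["_indextime"] > now - 30

print(in_realtime_window)    # the real-time window drops this event
print(indexed_in_last_30s)   # yet it was indexed in the past 30 seconds
```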


lukejadamec
Super Champion

I re-read your question, and I guess I was right the first time.

index=main | table _time

Select Realtime 30 Second Window.


lukejadamec
Super Champion

Learn something new every day. Thanks, hexx.
How about this:

index=_internal | table _indextime

Select Realtime 30 Second Window.
