Splunk Search

How to get the size of a sourcetype in an index.

splunkatl
Path Finder

How do I find the size of a sourcetype in an index on a particular day of last month?
We need to know how much data was reduced after we configured log filtering (i.e., data consumed on any day before 11/16 vs. any day after 11/16).

I have checked the Splunk deployment monitor, but did not see a search defined on sourcetype.

1 Solution

jbsplunk
Splunk Employee

You can find that search here:

http://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume

Counting event sizes over a time range

Roughly, you can run a search that looks at all (or some) data over a range of index-time values, summing the size of the raw events.

For example, where the endpoints START_TIME and END_TIME are numbers in seconds since the Unix epoch, the search would be:

_indextime>START_TIME _indextime<END_TIME | eval event_size=len(_raw) | stats sum(event_size)

This is a slow and expensive search, but it can be valuable when you really need to know. It must be run across a time range that contains all possible events indexed during that period -- that is, regardless of timestamp regularity. Typically this means it must be run over All Time. The stats computation, as well as the initial filters, can of course be adjusted to look at the problem more closely.
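For the original question -- comparing per-sourcetype volume by day before and after 11/16 -- the same idea can be extended by binning on index time and splitting the stats by sourcetype. A sketch, assuming `_indextime` as the indexed-time field and the default `main` index (the `bytes` and `day` field names are illustrative):

```
index=main _indextime>START_TIME _indextime<END_TIME
| eval event_size=len(_raw)
| bin _indextime span=1d
| eval day=strftime(_indextime, "%Y-%m-%d")
| stats sum(event_size) AS bytes BY sourcetype, day
```

Replace START_TIME and END_TIME with epoch seconds bracketing the days you want to compare, and run it over All Time for the same reason as above; the resulting table shows daily indexed bytes per sourcetype on either side of the filtering change.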

