Splunk Search

Only 30 days searchable on any index

cdstealer
Contributor

Hi,
I'm having an issue where any search will only return data from the previous 30 days. I'm not aware of any retention or limits. No matter how far back I set the "start time", I will only get 30 days returned. Any ideas?

0 Karma
1 Solution

lukejadamec
Super Champion

Check Manager>Indexes to verify you have data older than 30 days.

You can also use an epoch timestamp converter to check the time stamps on the buckets in your indexes. The buckets are named with the following format: var/lib/splunk/defaultdb/db/db_newestEvent_oldestEvent_uniqueID, where the two numbers are epoch timestamps for the newest and oldest events in the bucket.

If you don't have any data older than 30 days, check for the attribute frozenTimePeriodInSecs in your indexes.conf files. It is typically set in the etc/system/default or local directories, but it might be configured in any app's default or local directory, either as a global default or on a per-index basis.
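If it's easier to do the same checks from the search bar, here is a rough sketch (swap main for whichever index you care about): dbinspect lists the buckets with their event time ranges, and btool shows the effective frozenTimePeriodInSecs plus which .conf file sets it.

| dbinspect index=main
| stats min(startEpoch) AS oldestEvent max(endEpoch) AS newestEvent
| eval oldestEvent=strftime(oldestEvent, "%Y-%m-%d"), newestEvent=strftime(newestEvent, "%Y-%m-%d")

$SPLUNK_HOME/bin/splunk btool indexes list --debug | grep frozenTimePeriodInSecs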

cdstealer
Contributor

Hi Luke, thanks for that. I think I found the answer. The non-internal indexes go back to 2010 in cold, but some of the dashboards in use rely on the _internal index, which is not in cold and only has 30 days of history. Looking in /opt/splunk/etc/system/default/indexes.conf, frozenTimePeriodInSecs is set to 2419200, which is 28 days.
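If you want _internal to keep more than 28 days, the usual approach is an override in etc/system/local rather than editing the default file. A minimal sketch (the 90-day value here is just an example, and keeping more _internal data will of course use more disk):

# $SPLUNK_HOME/etc/system/local/indexes.conf
[_internal]
# 7776000 seconds = 90 days
frozenTimePeriodInSecs = 7776000

Splunk needs a restart to pick up the change.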

0 Karma

cdstealer
Contributor

Hi MHibbin, nothing in the Splunk logs, and Splunk has been restarted. I think this may be historical; unfortunately this is something I've inherited, so it's hard for me to say when it started.

0 Karma

MHibbin
Influencer

Have you looked in the Splunk logs for anything? (See the example search below.)
Have you restarted Splunk? Does it complain about dirty indexes?
Is this a historical problem or a recent one? Were there any changes around the time it started?
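For the first question, a quick sketch of the kind of check I mean, looking at splunkd's own errors and warnings in _internal:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count BY component
| sort - count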

0 Karma