Deployment Architecture

How to build a dataset far back in time with an incremental job process

steinroardahl
Observer

Hi,

I admit this sounds like a standard search procedure, but I want to save aggregated data in a CSV file. At the same time, I don't want to build the dataset in one long search, because of the large resource consumption.

I would like to carry out the procedure as follows:
1. Define a start date
2. Define an end date
3. Define a search window, say 5-minute steps
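A single window of this procedure might look like the following SPL sketch (the index name, the aggregation, and the lookup file name `my_history.csv` are placeholder assumptions, not from the original post):

```
index=your_index earliest=-10m@m latest=-5m@m
| stats count by sourcetype
| outputlookup append=true my_history.csv
```

Each subsequent run would shift both `earliest` and `latest` back by one step.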

The process starts with an empty CSV file.
The first run searches the period from the start date back 5 minutes and writes the results to the CSV file with the outputlookup command and append=true.
The next run starts from the start date minus 5 minutes and appends its results to the CSV file.
The next run starts from the start date minus 10 minutes, and so on.
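One way to let each scheduled run pick up where the previous one stopped is to keep a cursor (the `latest` boundary of the next window, as epoch seconds) in a small lookup. A minimal sketch, assuming a lookup `backfill_cursor.csv` seeded with a single row `cursor=<start date as epoch>` and a 300-second (5-minute) step; index name, aggregation, and file names are placeholders:

```
| inputlookup backfill_cursor.csv
| eval latest=cursor, earliest=cursor - 300
| map maxsearches=1 search="search index=your_index earliest=$earliest$ latest=$latest$
    | stats count by sourcetype
    | outputlookup append=true my_history.csv"
```

A second pipeline, `| inputlookup backfill_cursor.csv | eval cursor=cursor - 300 | outputlookup backfill_cursor.csv`, would then move the cursor one step further back after each run; stop the schedule once the cursor passes the end date.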

As you can see, I want to build a dataset with a step-by-step process from a given start date to an end date back in time.

How do I do this?
