Knowledge Management

How to deal with updated input events?

mikclrk
Explorer

I've got a bunch of data records arriving from a remote analytic system. They all have timestamps and a unique key. There are potentially tens of thousands of them arriving each day.

So receiving and storing them as CSV with a search overlay is fine. Now I can produce reports on them.

The problem comes when the remote analytic system receives some 'late' data. At this point it wants to reissue the original record that it put out a few minutes ago. The reissued record will have the same timestamp and the same key as the original record.

So.

What would happen if I reformatted all the records into JSON and dumped them into a KV store? From what I've read in the documentation, KV store collections are only searchable on the search head, so holding millions of documents with a high rate of churn and frequent search activity probably isn't a good idea. And can I even get normal visualizations and searches to work with data in a KV store?

If I want to keep them in CSV format, then it appears I can't simply tell Splunk to replace the earlier record (as I can with ELK). Any thoughts on how I could configure the ingestion process to find and erase the earlier record? Preferably without killing performance by doing a search and delete for every record that comes in. Or how I could phrase a search so it'll only find the most recent version of each event record?

Mik

1 Solution

DalJeanis
Legend

Consider just putting them into an index, regular or summary.

You can build into your search a dedup on the unique id:

Your base search here
| dedup uniqueid
...whatever else you want to do...
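One caveat, since the reissued records carry the same _time as the originals: dedup keeps the first matching event it sees, and events with identical timestamps don't have a guaranteed order, so the surviving copy could be either one. To make sure the reissued copy wins, you can sort on _indextime (the time Splunk actually received the event) using dedup's sortby clause. A sketch — uniqueid stands in for whatever your key field is called:

Your base search here
| eval itime=_indextime
| dedup uniqueid sortby -itime
...whatever else you want to do...

The eval is there because _indextime is an internal field; copying it into a regular field makes it available for sorting.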

Or you can run an occasional search-and-destroy to select all older copies of duplicate records and delete them. Note that streamstats counts events in the order they come back from the search, so it flags every copy after the first one it sees — make sure the copy you want to keep is the one that sorts first. That would look something like this...

 your base search here 
| streamstats count as dupcount by uniqueid
| where dupcount > 1
| delete
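Two things to know about | delete before scheduling that: it can only be run by a user whose role has the can_delete capability (no role, not even admin, has it by default), and it only masks events from future searches — the disk space is not reclaimed. To keep the cleanup cheap, you can scope the search to a recent time window rather than rescanning the whole index. A sketch, where index=yourdata is a placeholder for wherever these records land:

index=yourdata earliest=-24h
| streamstats count as dupcount by uniqueid
| where dupcount > 1
| delete

Since your reissues arrive within minutes of the originals, a window a bit longer than the schedule interval should be enough to catch every duplicate pair.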

