Knowledge Management

How to deal with updated input events?

mikclrk
Explorer

I've got a bunch of data records arriving from a remote analytic system. They all have timestamps and a unique key. There are potentially tens of thousands of them arriving each day.

Receiving and storing them as CSV with a search overlay works fine, and I can produce reports on them.

The problem comes when the remote analytic system receives some 'late' data. At this point it wants to reissue the original record that it put out a few minutes ago. The reissued record will have the same timestamp and the same key as the original record.

So.

What will happen if I reformat all the records into JSON and dump them into a KV store? The documentation says KV store collections are only searchable on the search head, so holding millions of documents with a high rate of churn and frequent search activity probably isn't a good idea. Can I even get normal visualizations and searches to work with data in a KV store?
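For what it's worth, the docs suggest KV store data is queried through lookup commands rather than ordinary index searches, so any reporting would have to be built on something like the sketch below (the collection-backed lookup name analytics_records and the field names here are just made up):

| inputlookup analytics_records where uniqueid="ABC123"
| table uniqueid, event_time, payload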

If I want to keep them in CSV format, it appears I can't simply tell Splunk to replace the earlier record (as I can with ELK). Any thoughts on how I could configure the ingestion process to find and erase the earlier record, preferably without killing performance by doing a search-and-delete for every record that comes in? Or how I could phrase a search so that it only finds the most recent version of each event record?

Mik

1 Solution

DalJeanis
SplunkTrust

Consider just putting them into an index, regular or summary.

You can build a dedup on the unique ID into your search:

Your base search here
| dedup uniqueid
...whatever else you want to do...
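One subtlety: dedup keeps the first matching event it sees, which in the default reverse-time sort is normally the newest copy. Since the reissued record carries the same timestamp as the original, though, _time can't tell the two copies apart, so it's safer to order explicitly on index time and keep whichever copy arrived last. A sketch, assuming the key field is literally named uniqueid:

your base search here
| eval index_time=_indextime
| sort 0 -index_time
| dedup uniqueid
| fields - index_time
...whatever else you want to do...

Here _indextime (the time the event was written to the index) stands in for "most recently received", since the event timestamps themselves are identical.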

Or you can run an occasional search-and-destroy to select all older copies of duplicate records and delete them. That would look something like this...

your base search here
| streamstats count as dupcount by uniqueid
| where dupcount > 1
| delete
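Two caveats if you go the delete route: the delete command only marks events as unsearchable (it does not reclaim disk space), and it requires the delete_by_keyword capability, which by default is not assigned to any role, not even admin; there is a built-in can_delete role for exactly this. For a scheduled cleanup it's also worth pinning the search to an explicit index and time window. A sketch, assuming the records land in an index called analytics:

index=analytics earliest=-24h@h
| streamstats count as dupcount by uniqueid
| where dupcount > 1
| delete

Since streamstats numbers events in the order the search returns them (newest first by default), the first copy of each uniqueid is kept and the older duplicates are the ones that get deleted.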
