Splunk Search

How to update a field only if there was a change or delta in the subsearch?

russell120
Communicator

Hi, I have a sample CSV called original.csv. Each day, a search is run and its results are saved to new.csv. What search do I need to add an entry to original.csv only when there's a change in new.csv? It's sorted by the field data_center.

Original.csv looks like:

data_center desktop_amount server_amount  employee_amount    date
  alpha          25             10               20       08/29/2019
  bravo          10              5                8       08/29/2019
  charlie        5               2                3       08/29/2019

When the new daily search is run and the results are exported to new.csv, it looks like this:

data_center desktop_amount server_amount  employee_amount    date
  alpha          22             10               20       08/30/2019
  bravo          10              7               10       08/30/2019
  charlie        5               2                3       08/30/2019

What search do I need to create this new table, based on the changes in new.csv for each data_center?

data_center desktop_amount server_amount  employee_amount    date
  alpha          25             10               20       08/29/2019
  alpha          22             10               20       08/30/2019
  bravo          10              5                8       08/29/2019
  bravo          10              7               10       08/30/2019
  charlie        5               2                3       08/30/2019

Since there were changes in data centers alpha and bravo (alpha's desktop_amount decreased by 3, and bravo's server_amount and employee_amount each increased by 2), those new entries were added to original.csv. Since charlie had no change, do nothing with it.

1 Solution

woodcock
Esteemed Legend

Like @somesoni2 says, you need dedup, but I would not use append for this because it has limits. Use either appendpipe, which is unlimited, or better yet, inputlookup's append=t option, like this:

| inputlookup append=t original.csv
| inputlookup append=t new.csv
| dedup data_center desktop_amount server_amount employee_amount

This approach keeps the old data (i.e., the original date for unchanged rows), because dedup keeps the first occurrence of each combination it sees. If you would instead prefer to keep the new value of date, swap the order of original.csv and new.csv.
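
For what it's worth, a minimal sketch of that swapped variant, which keeps the new date for rows that have not changed:

| inputlookup append=t new.csv
| inputlookup append=t original.csv
| dedup data_center desktop_amount server_amount employee_amount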

russell120
Communicator

Works perfectly; I don't know why I hadn't thought of this. Thanks.

somesoni2
Revered Legend

Try like this (assuming those CSV files are added as lookup table files; if not, change the search to fetch the data from those CSVs):

| inputlookup new.csv
| append [| inputlookup original.csv]
| dedup data_center desktop_amount server_amount employee_amount

This should keep the entries from both new.csv and original.csv when any of the counts has changed; if not, only the entry from new.csv is kept (the original.csv entry is removed as a duplicate, since the values are the same and only the date is updated).
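
To actually persist the merged result back to original.csv (assuming it is a lookup table file), an outputlookup can be tacked on the end. A minimal sketch, with a sort added only to mirror the data_center and date ordering in the desired table (note that date sorts as a string here):

| inputlookup new.csv
| append [| inputlookup original.csv]
| dedup data_center desktop_amount server_amount employee_amount
| sort data_center date
| outputlookup original.csv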
