Splunk Search

Is there a programmatic method to list and analyze which objects/resources (indexes, macros, lookups) are used by scheduled searches?

Olli1919
Path Finder

Hello fellow Splunkies,

is there a way to programmatically list the objects/resources used by (scheduled) searches, e.g. which indexes, lookups, and macros they rely on?

The idea is to quickly identify which searches need to be checked when, for example, a lookup is changed. This is good to know whenever more than one scheduled search relies on a specific lookup. Changing a lookup or macro tends to break things in places one does not always remember. Having such an analysis method would allow checking before breaking anything.

Right now I am not aware of a method to find out which searches are using a specific lookup, macro, etc.

Thanks for your help.

Regards,
Olli

1 Solution

Olli1919
Path Finder

For lookups, I have come up with this bread-and-butter method. It checks which scheduled searches depend on which lookups. This is done by reading the searches via the REST API and parsing the unnormalized search string for the keyword "lookup". Some validation is then done against the existing lookups, so only valid lookup names remain (in case the regex grabs more than it should).

| rest /servicesNS/-/-/saved/searches splunk_server=local count=0
 | search is_scheduled=1 dispatch.lookups=1 disabled=0
 | rex max_match=100 field=qualifiedSearch "lookup\s(?<lookup_name>\w+)[\s\|\"]"
 | search lookup_name=*
 | eval lookup_name=mvdedup(lookup_name)
 | mvexpand lookup_name
 | search [| rest /servicesNS/-/-/data/lookup-table-files splunk_server=local count=0 | dedup title | table title
    | append [| rest /servicesNS/-/-/data/transforms/lookups splunk_server=local count=0 | dedup title | table title]
    | rename title as lookup_name]
 | mvcombine lookup_name
 | table eai:acl.app, title, author, eai:acl.owner, description, lookup_name
 | sort eai:acl.app

The results look quite nice when thrown into a viz:
[screenshot: force-directed graph of the scheduled-search-to-lookup dependencies]

This solution is crude and probably buggy. If there are ways to improve it, I'd be happy to hear them. Theoretically the search normalization process should spit out which resources were pulled, but I did not go into that depth. Martin Mueller, where are you? 🙂
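The same pattern could presumably be extended to macros, since they remain backtick-quoted in the unexpanded search string. Here is a minimal sketch along the same lines; it assumes the admin/macros REST endpoint and that macro definitions may carry an argument-count suffix such as my_macro(2), which is stripped before matching:

| rest /servicesNS/-/-/saved/searches splunk_server=local count=0
 | search is_scheduled=1 disabled=0
 | rex max_match=100 field=qualifiedSearch "`(?<macro_name>\w+)[`\(]"
 | search macro_name=*
 | eval macro_name=mvdedup(macro_name)
 | mvexpand macro_name
 | search [| rest /servicesNS/-/-/admin/macros splunk_server=local count=0
    | rex field=title "^(?<macro_name>\w+)" | dedup macro_name | table macro_name]
 | mvcombine macro_name
 | table eai:acl.app, title, author, eai:acl.owner, description, macro_name
 | sort eai:acl.app

Index references could probably be handled the same way, e.g. by rexing for index\s*=\s*(?<index_name>[^\s\)\|"]+) and validating the extracted names against | rest /servicesNS/-/-/data/indexes.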

brmoote
Engager

Hi,
I am trying to do the same thing as the original question and am also getting hung up on the lookups. The search string you gave is a more complete version of what I was working towards, so it is super helpful, but it doesn't solve all of the problems I was running into. Specifically, I am having a hard time figuring out how to deal with extra options for the lookup, like append=true in the case of inputlookup/outputlookup. I was wondering if you had done any more work on this search string or had ideas on how to account for this?

Olli1919
Path Finder

Hi,

you most certainly solved it yourself, but for sake of completeness:

There is no good solution, as this would really require Splunk's SPL parser. A workaround would be to come up with a more elaborate regex parser. Of course, in the long run that would not be an ideal solution.

| makeresults count=1
| eval qualifiedSearch=" | inputlookup first_file.csv
 | eval flag=\"this is from the first file\" 
 | append 
  [| outputlookup append=true second_file.csv]"
| table qualifiedSearch
| rex max_match=100 field=qualifiedSearch "[\s\|]+inputlookup\s(?<lookup_name>[^\s\|\]]+)|outputlookup.*\s(?<lookup_name2>[^\s\|\]]+)"
| eval lookup_name_final=mvappend(lookup_name, lookup_name2)

This should parse the following SPL:

| inputlookup first_file.csv
| eval flag="this is from the first file"
| append
[| outputlookup append=true second_file.csv]

and should extract:

first_file.csv
second_file.csv

jplumsdaine22
Influencer

Can you tell me what visualisation that is?

Olli1919
Path Finder

Sorry for the late reply. It's a d3 force-directed node graph, mostly identical to the original Michael Bostock examples (see links below). Its strength is that it takes a simple table and resolves and plots the dependencies between the resources, chained across all of them (a depends on b, which depends on c). A rough sketch of shaping the table for it follows after the links.

mbostock’s block #4062045: Force-Directed Graph
mbostock’s block #1153292: Mobile Patent Suits
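In case it helps others reproduce it: the graph essentially just needs a two-column dependency table. A minimal sketch of reshaping the solution search's output into that form, appended to the end of it; the source/target column names are an assumption about what this particular viz expects, so rename them as needed:

 | mvexpand lookup_name
 | rename title AS source, lookup_name AS target
 | table source, target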

jplumsdaine22
Influencer

Cool thanks!

woodcock
Esteemed Legend

Try these EXCELLENT apps!

Knowledge Object Explorer: https://splunkbase.splunk.com/app/2871/
Data Curator: https://splunkbase.splunk.com/app/1848/

Olli1919
Path Finder

Thanks for the pointer, especially the Knowledge Object Explorer. This gives me a good starting point. It seems there is no easy answer to this.

martin_mueller
SplunkTrust

It's indeed not easy at all.
Sticking to lookups for a second, your approach looks at explicitly referenced lookups. The Knowledge Object Explorer looks at automatic lookups used for filtering. If you combine the two, you're still missing automatic lookups used just for output.
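For what it's worth, automatic lookup definitions could probably be enumerated via the props REST endpoint and then cross-checked against the explicitly referenced ones found above. A minimal sketch, assuming the data/props/lookups endpoint (which should list the LOOKUP- definitions from props.conf):

| rest /servicesNS/-/-/data/props/lookups splunk_server=local count=0
 | table title, eai:acl.app

Each entry should correspond to one LOOKUP- definition from props.conf; comparing those against the lookups found by the regex approach would at least flag automatic lookups that are otherwise missed.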
