Sample events look like:
{"title": "SavedSearch1", "action_email": "0", "action_summary_index": "0", "alert_expires": "2m", "author": "admin", "disabled": "0", "orphan": "0", "dispatch_earliest_time": "-60m@m", "dispatch_latest_time": "now", "eai_acl_app": "search", "eai_acl_owner": "admin", "eai_acl_sharing": "user", "is_scheduled": "0", "search": "| savedsearch \"SavedSearch2\" | search index=_audit | head 10", "cluster_name": "BIG_DATA"}
{"title": "Savedsearch2", "action_email": "0", "action_summary_index": "0", "alert_expires": "2m", "author": "admin", "disabled": "0", "orphan": "0", "dispatch_earliest_time": "-60m@m", "dispatch_latest_time": "now", "eai_acl_app": "search", "eai_acl_owner": "admin", "eai_acl_sharing": "user", "is_scheduled": "0", "search": "index=* | head 100", "cluster_name": "BIG_DATA"}
I have to read the saved-search list from my internal logs, check that each saved search exists, and extract a particular field's count and value from it. If a saved search uses another saved search inside the main search, I have to again check that the inner search exists and extract the same field's count and value from it, then join both results to get the final count and values of that field.
E.g.: consider index as one such field; I would have multiple fields to calculate in the same process.
SavedSearch1
| savedsearch "SavedSearch2" | search index=_audit | head 10
Savedsearch2
index=* | head 100
index=application_core sourcetype=application_log
| eval [ search index=application_core sourcetype=application_log
| eval anotherSavedSearchUseInSearch="SavedSearch2"
| where title=anotherSavedSearchUseInSearch
| rex max_match=0 field=search "index\s{0,}=\s{0,}\"{0,}\${0,}(?<indexusedinquery>[^\s\"\|]+)"
| eval indexusedinquery = if(isnull(indexusedinquery),"indexNotUsed",indexusedinquery)
| table title indexusedinquery
| fields title indexusedinquery
| eval valuesReturnedfromsecondsearch = title.",".indexusedinquery
| return valuesReturnedfromsecondsearch]
Output:
FieldName : valuesReturnedfromsecondsearch
FieldValue : Savedsearch2,*
I honestly have no idea what you mean here. I think you need a complete do-over.
To start with: iterating a search is not a great idea in Splunk, and recursion is even worse. You are better off getting everything you want in a single pass.
So, extract ALL saved searches at the same time, verify the existence of that field count on them, and then later you can decide which of the saved searches is relevant to what you want to know.
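For instance, a single-pass extraction over the saved-search events might look like the rough sketch below. The index and sourcetype names here are placeholders, not from the original post, and the rex patterns are one plausible way to pull out the references:

index=your_internal_logs sourcetype=your_savedsearch_logs
| rex max_match=0 field=search "savedsearch\s+\"(?<referencedsearch>[^\"]+)\""
| rex max_match=0 field=search "index\s*=\s*(?<indexusedinquery>[^\s\"\|]+)"
| stats values(indexusedinquery) as indexes values(referencedsearch) as referencedsearches by title

This pulls every savedsearch reference and every index= value for all saved searches at once; resolving which search uses which then becomes a lookup within one result set rather than a recursive search.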
Sort the records into _time order, and use streamstats to copy forward the most recent value for each search. Then, throw away all but the final record(s) that you want, determine which searches the values are needed from, and then format your final record(s).
Here's a run-anywhere example of one way that you might approach this.
| makeresults | eval mydata="100:search2;search5;7!!!!200:search3;NULL;12!!!!300:search6;search3;15!!!!400:search5;search2,search3;NULL!!!!500:search4;search6,search3;13!!!!600:search3;NULL;12!!!!700:search2;search5;7!!!!800:search1;search2,search3;15"|makemv mydata delim="!!!!"| mvexpand mydata
| rex field=mydata "(?<time>\d+):(?<searchname>[^;]+);(?<subsearches>[^;]+);(?<count>\d*)"
| makemv subsearches delim=","
| eval subsearches=mvfilter(subsearches!="NULL")
| eval _time = relative_time(now(),"@d")+time
| rename COMMENT as "The above just produces test data in the following format..."
| table _time searchname subsearches count
| rename COMMENT as "Put the needed counts in a field named for each search"
| eval subname="subcount_".searchname
| eval {subname}=count
| fields - subname
| rename COMMENT as "Copy them forward, then blank the ones named after the current search, and the ones not used in the current search"
| streamstats last(subcount*) as subcount*
| foreach subcount_* [ | eval <<FIELD>>=case(isnull(subsearches),null(), like(searchname,"<<MATCHSTR>>"),null(), like(subsearches,"<<MATCHSTR>>"),<<FIELD>>)]
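The example stops here; the remaining steps described above (throwing away all but the final record for each search, then formatting) could be sketched roughly as follows. This continuation is my own assumption, not part of the original run-anywhere example:

| rename COMMENT as "Keep only the latest record for each search, then present the results"
| dedup searchname sortby -_time
| table _time searchname count subcount_*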