Splunk Search

Using a value extracted with the rex command to perform a new search

splunker1981
Path Finder

Hello all,

New to Splunk and trying to figure out what I am doing wrong, or the best way to do the following. I am trying to extract a value using regex and then save that value to use in a new search, not on the already-filtered results. If I run the command shown below and instead do a stats by Forwarder, I get the correct values, which I am trying to use in the follow-on new search. However, when I pipe it to a search, I get 0 results. Also, I don't think the | search kicks off a new search; I suspect it queries based on what was already filtered or returned?

I have the following search command

index=_test log_type=test_data user="testUserA" | rex ".*FD\s+(?<FwdExtracted>\d+)" | search internal_forwarder_value = FwdExtracted

Thanks for the help in advance

0 Karma
1 Solution


woodcock
Esteemed Legend

I think what you are trying to do is chain 2 searches together like this:

 index=_test [ search index=_test log_type=test_data user="testUserA" | rex ".*FD\s+(?<FwdExtracted>\d+)" | eval internal_forwarder_value = FwdExtracted | table internal_forwarder_value ]
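
The subsearch runs first, and the list of internal_forwarder_value values it returns is handed back to the outer search as a filter. With the sample data later in this thread it would effectively expand to something like the following (illustrative only, assuming the rex pulls 342293 out of the testUserA line):

 index=_test ( internal_forwarder_value="342293" )

so the outer search returns every line carrying that ID, not just the one line that carries user="testUserA".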
0 Karma

woodcock
Esteemed Legend

You need to use where instead of search. The where command presumes that both the LHV and RHV are field names, whereas search assumes that the RHV is a string literal. If the RHV does not turn out to be a field name (its value is NULL), then where will treat it as a string literal as well. In any case, this will work for you:

index=_test log_type=test_data user="testUserA" | rex ".*FD\s+(?<FwdExtracted>\d+)" | where internal_forwarder_value = FwdExtracted
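
To make that difference concrete (illustrative only, using the field names from this thread):

 | search internal_forwarder_value=FwdExtracted   (matches events whose internal_forwarder_value is the literal string "FwdExtracted")
 | where internal_forwarder_value=FwdExtracted    (keeps events where the value of internal_forwarder_value equals the value of FwdExtracted)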
0 Karma

splunker1981
Path Finder

Thanks for the help. Unfortunately, using where only pulls out 1 line out of the 10 where the ID exists. I think that's because it's still filtering on the initial user="testUserA". What I would like to do is pull out all of the lines where that ID exists, once the ID has been found using the user as the filter.


What I am trying to do is something similar to a for loop with a grep command (one SPL equivalent is sketched after the sample data below):

for i in $(grep testUserA testFile.data | cut -d ' ' -f 6); do grep "$i" testFile.data; done

Part of the issue is that user values only exist once per transaction, but the logs can contain multiple lines of data for that one transaction, which is tracked by a unique ID. Here is a sample of the file contents - you can see that there are 2 users:

  1. testUserA
  2. testUserB

Each of those user IDs only exists on 1 line, but testUserA has a total of 10 log entries, all with the unique ID 342293, and testUserB has 6 lines:

testData.txt 22:36:04 devServerCust sequence: FS 342293 sample data test
testData.txt 22:36:04 devServerCust sequence: FS 342293 example line test
testData.txt 22:36:04 devServerCust sequence: FS 342293 random data test
testData.txt 22:36:04 devServerCust sequence: FS 342293 testUserA
testData.txt 22:36:07 devServerCust sequence: FS 342293 idfe
testData.txt 22:36:07 devServerCust sequence: FS 342293 udfe
testData.txt 22:36:07 devServerCust sequence: FS 342293 interN
testData.txt 22:36:07 devServerCust sequence: FS 342293 anerwv3
testData.txt 22:36:07 devServerCust sequence: FS 342293 Ounegdative
testData.txt 22:36:07 devServerCust sequence: FS 342293 query saved
testData.txt 22:36:14 devServerCust sequence: FS 3423345 sample data test
testData.txt 22:36:14 devServerCust sequence: FS 3423345 example line test
testData.txt 22:36:14 devServerCust sequence: FS 3423345 random data test
testData.txt 22:36:14 devServerCust sequence: FS 3423345 testUserB
testData.txt 22:36:17 devServerCust sequence: FS 3423345 idfe
testData.txt 22:36:17 devServerCust sequence: FS 3423345 udfe
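
One SPL equivalent of that grep loop (a sketch, not from this thread; it assumes the ID can be extracted from every line with the same rex, and txn_user is just an illustrative field name):

 index=_test log_type=test_data
 | rex ".*FD\s+(?<FwdExtracted>\d+)"
 | eventstats values(user) as txn_user by FwdExtracted
 | search txn_user="testUserA"

Here eventstats copies the user value seen anywhere in each FwdExtracted group onto every event in that group, so the final search keeps all lines of the transaction rather than only the one line that carries user="testUserA".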

0 Karma

somesoni2
SplunkTrust

Yes, the "| search" just adds additional filters to the existing search results. Could you provide more details on the requirement? Is it that you're getting some data from the current search and you want to ADD more results using the field extraction you're doing?

0 Karma

splunker1981
Path Finder

Yes - let me clarify.

I want to run a query and extract specific values using regex (it doesn't have to be regex if there's a better way). I would then like to perform a new search using the values captured by the regex in the first query, so that none of the data is filtered, and find all entries with that number in a specific field. Part of the reason I need to do this is that I need to pull all lines with the ID (the extracted digits) for a given user (the user only exists once, but the ID and the log entries associated with that user can be > 1). If I just search on the user, it returns only the line where the user exists, when in theory there could be 5-10 lines for that one transaction ID + user.

For example if I run the search below
index=_test log_type=test_data user="testUserA" | rex ".*FD\s+(?<FwdExtracted>\d+)" | stats count by FwdExtracted

I get the correct number of results with my test data - in this example, 4 results. I then have to click on each FwdExtracted value individually, right-click, and select "New search" for each result if I want to see all of the lines associated with that one ID. That brings up a new screen where I then have to replace the FwdExtracted field name, because it doesn't exist in the new search, with the field I actually want to search against, which in this case is internal_forwarder_value = valueFromLastQueryHere

If you think of it this way, it might also help: I am really just trying to do a for loop with two greps (an SPL sketch using map follows the sample data). In the sample data below there are two records:

  1. testUserA
  2. testUserB

for i in $(grep testUserA testData.txt | cut -d ' ' -f 6); do grep "$i" testData.txt; done

testData.txt 22:36:04 devServerCust sequence: FS 342293 sample data test
testData.txt 22:36:04 devServerCust sequence: FS 342293 example line test
testData.txt 22:36:04 devServerCust sequence: FS 342293 random data test
testData.txt 22:36:04 devServerCust sequence: FS 342293 testUserA
testData.txt 22:36:07 devServerCust sequence: FS 342293 idfe
testData.txt 22:36:07 devServerCust sequence: FS 342293 udfe
testData.txt 22:36:07 devServerCust sequence: FS 342293 interN
testData.txt 22:36:07 devServerCust sequence: FS 342293 anerwv3
testData.txt 22:36:07 devServerCust sequence: FS 342293 Ounegdative
testData.txt 22:36:07 devServerCust sequence: FS 342293 query saved
testData.txt 22:36:14 devServerCust sequence: FS 3423345 sample data test
testData.txt 22:36:14 devServerCust sequence: FS 3423345 example line test
testData.txt 22:36:14 devServerCust sequence: FS 3423345 random data test
testData.txt 22:36:14 devServerCust sequence: FS 3423345 testUserB
testData.txt 22:36:17 devServerCust sequence: FS 3423345 idfe
testData.txt 22:36:17 devServerCust sequence: FS 3423345 udfe
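
A closer SPL analog of the for loop itself is the map command, which runs one follow-on search per input row (a sketch, not from this thread; maxsearches caps the number of iterations, and map is generally slower than the subsearch approach above):

 index=_test log_type=test_data user="testUserA"
 | rex ".*FD\s+(?<FwdExtracted>\d+)"
 | stats count by FwdExtracted
 | map maxsearches=10 search="search index=_test internal_forwarder_value=$FwdExtracted$"

Each row of the stats output feeds one inner search, with $FwdExtracted$ substituted in, much like $i in the shell loop.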

0 Karma