Splunk Search

How to perform math on a field extracted where there were multiple matches

weidertc
Communicator

Hello,

I have a log entry with a variable number of possible matches for my regex; I had to use max_matches to get them all. For rows with only a single match, I can perform math on them, like round() or +1. However, for rows with multiple matches, I am unable to manipulate the values, and trying to do so results in a null value being displayed.

How can I perform math on inline field extractions when there are multiple matches?

Chris

1 Solution

DalJeanis
SplunkTrust

Here's a quick sample of a nearly trivial method...

| makeresults | eval mykey="somekey" | eval myjunk="someotherfield" | eval mydata="123.456 234.5678 345.678" | makemv mydata | eval holddata=mydata
| rename COMMENT as "The above creates test data"

| rename COMMENT as "Take them apart, round them, put them together"
| mvexpand mydata
| eval mydata=round(mydata,1) 
| mvcombine mydata

One problem with that method is illustrated by what it did to the field holddata: it has been flattened into a single plain-text value, and you'd need to run | makemv holddata to change it back into a multivalue field.

This next method might be more appropriate for certain simple data structures.

| makeresults | eval mykey="somekey" | eval myjunk="someotherfield" | eval mydata="123.456 234.5678 345.678" | makemv mydata | eval holddata=mydata
| rename COMMENT as "The above creates test data"

| rename COMMENT as "Take them apart, round them, put them together"
| mvexpand mydata
| eval mydata=round(mydata,1)
| stats values(*) as * by mykey
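
One caveat with that approach: values() sorts its results lexicographically and removes duplicates. If the original order of the rounded values matters, or duplicates must be kept, list() is the safer aggregator for the data field, something like this:

| stats list(mydata) as mydata values(myjunk) as myjunk by mykey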

The more additional data you have, and the more complicated it is, the less appropriate that method becomes. Sometimes it might be more appropriate to segregate the data and then reintegrate it later.

| makeresults | eval mykey="somekey" | eval myjunk="someotherfield" | eval mydata="123.456 234.5678 345.678" | makemv mydata | eval holddata=mydata
| rename COMMENT as "The above creates test data"

| rename COMMENT as "Take them aside, take them apart, round them, put them together flat, put them back onto the file, then make them back into an mv."
| streamstats count as recno 
| appendpipe 
    [| table recno mydata 
     | mvexpand mydata 
     | eval mydata=round(mydata,1) 
     | stats list(mydata) as mydata2 by recno 
     | nomv mydata2 
     | eval rectype="fixed"
     ]
| eventstats max(mydata2) as mydata by recno
| where isnull(rectype)
| makemv mydata
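
(Side note: on Splunk 8.0 and later, the eval function mvmap can apply math to each value of a multivalue field in place, with no mvexpand/mvcombine round trip. Using the same test data as above:)

| makeresults | eval mydata="123.456 234.5678 345.678" | makemv mydata
| eval mydata=mvmap(mydata, round(mydata,1))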


weidertc
Communicator

Wow! Thanks for the reply. I was afraid the answer would involve, as you say, segregating the data and reintegrating it later.

I need to look up several of the commands you are using, but our query actually has 3 regexes, one before and one after this one, that I will need to integrate this with. It'll take me a while, but I will attempt to make sense of this and see if I can make it work. Thanks so much for the guide!

Chris

DalJeanis
SplunkTrust
SplunkTrust

@weidertc - you're welcome. Let us know if you need any help getting that to work.
