Hi All,
I have a list of known application error strings that I want to count. I've created a CSV file containing these error strings, named knownErrorList.csv.
Sample entries in knownErrorList.csv, with headers:
component,errorString
app,Error String 1
app,Error String 2
app,Error String 3
My problem is that the lookup file keeps growing, and with the query below I'm hitting the maxsearches limit of the map command. Is there any other way to get the same result without using map?
Query:
| append [ | inputlookup knownErrorList.csv | eval errorTag=errorString ] | map search="search index=app_index source=*/application.log $errorTag$ | eval errorTag=$errorTag$" | stats count by errorTag
Sample Output:
errorTag count
Error String 1 10000
Error String 2 5
Error String 3 3
Whoa, unless I'm misinterpreting your intentions you're doing this way harder and heavier than it needs to be.
The key thing here is to make sure you get the error messages extracted into a field that you can match against (let's call it errorTag in both the lookup and the field extraction). After that you can easily get your results with a rewritten search looking something like this:
index=app_index source=*/application.log [inputlookup knownErrorList.csv | fields errorTag] | stats count by errorTag
If you're hitting the subsearch limit, you could do this:
index=app_index source=*/application.log errorTag=* | lookup knownErrorList.csv errorTag OUTPUT component | search component=* | stats count by errorTag
Again, the key thing is to create that field extraction. If the placement of these error messages is completely random you could:
a) go punch your developers in the face, and/or
b) create a field extraction that simply matches each error string explicitly - like, REGEX = (Error String 1|Error String 2|Error String 3|...)
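A minimal sketch of what that explicit-regex extraction could look like as a search-time extraction in props.conf (the sourcetype name `app_log` is an assumption, and the alternation would need to be kept in sync with the lookup):

```
[app_log]
EXTRACT-errorTag = (?<errorTag>Error String 1|Error String 2|Error String 3)
```

With this in place, errorTag is populated automatically at search time and the lookup-based searches above work as written.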
Thanks for the help. Creating a specific field for the error message is the way to go.
Again, your first step should be to setup a field extraction so you actually get the error messages in one specific field. After that you have a range of options on how to solve your problem.
I just need a count of known errors based on the search patterns found in the knownErrorList.csv file. The errorTag is just a pattern found in _raw; it is used to count the different variations of it.
Sample
xxxError String 1yyy
iiError String 1oo
The map command I've used gives me a count of 2 for Error String 1.
My first query is like this:
index=app_index source=/application.log [ | inputlookup knownErrorList.csv | eval errorQuery=""+errorTag+"*" | return $errorQuery]
The problem with this query is that I can't count by errorTag, which is why I'm using the map command.
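For what it's worth, if setting up a permanent field extraction isn't an option, the same count can be produced entirely at search time with rex (sketch only; the alternation is hard-coded here and would need to be kept in sync with the lookup):

```
index=app_index source=*/application.log
| rex field=_raw "(?<errorTag>Error String 1|Error String 2|Error String 3)"
| search errorTag=*
| stats count by errorTag
```

Because rex matches the pattern anywhere in _raw, both "xxxError String 1yyy" and "iiError String 1oo" count toward Error String 1, without invoking map at all.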