I currently have a scheduled search that runs every day at a specific time and calculates the total occurrences of a set of search terms across three different logs. The search runs over a week-to-date window, every day at the same time. I have noticed that as the week goes on, the search starts returning lower-than-expected values or zeros and completes very quickly... I am guessing this is due to the overhead of the massive search. Is there a way I could instead run the search for the given day only, write it to a table, and simply add the current day's results to the static values for the prior days that week? Here is an example of my search:
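(For reference, one rough sketch of the "search only today and add it to stored prior days" idea uses a lookup file. The name weekly_counts.csv is made up here, and this assumes one row per day; dedup guards against writing today's row twice:)

SearchTerm source="logfile.log" earliest=@d | stats count as mycount | eval Weekday=strftime(now(),"%A %n%m/%d") | inputlookup append=t weekly_counts.csv | dedup Weekday | outputlookup weekly_counts.csv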
SearchTerm source="logfile.log" | timechart count as mycount span=1day | eval Weekday=strftime(_time,"%A %n%m/%d")
| appendcols maxtime=300 [search SearchTerm2 source="logfile.log" | timechart count as mycount2 span=1day | eval Weekday=strftime(_time,"%A %n%m/%d")]
| appendcols maxtime=300 [search SearchTerm3 source="logfile.log" | timechart count as mycount3 span=1day | eval Weekday=strftime(_time,"%A %n%m/%d")]
| table Weekday, mycount, mycount2, mycount3
Have you tried simplifying your search by not using the appendcols command? (It's kicking off 3 separate searches by using appendcols.)
Try this (it assumes that the source names are each different):
((source=log1.log AND SearchTerm1) OR (source=log2.log AND SearchTerm2) OR (source=log3.log AND SearchTerm3)) | timechart count as mycount by source span=1day | eval Weekday=strftime(_time,"%A %n%m/%d")
You can match _raw against a regular expression in count(eval(...)) if you like, but extracting the field would be prettier and more reusable.
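For example (a sketch only; it assumes your two terms are literal strings, since match() takes a regular expression):

source="logfile.log" | timechart span=1d count(eval(match(_raw, "SearchTerm1"))) as mycount count(eval(match(_raw, "SearchTerm2"))) as mycount2 | eval Weekday=strftime(_time,"%A %n%m/%d")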
Thanks, that was exactly the issue! The only problem now is that I am searching for 2 distinct terms in the same log. I am trying to get a unique count for each term... I assume the only way to do that with this method would be to extract the term as a field?
Not sure in what version this changed, but you need to define span=1d right after the timechart command for it to work. That is why it was showing you counts per minute instead of per day.
((source=log1.log AND SearchTerm1) OR (source=log2.log AND SearchTerm2) OR (source=log3.log AND SearchTerm3)) | timechart span=1d count as mycount by source | eval Weekday=strftime(_time,"%A %n%m/%d")
Do your log files each have a unique source name? If they do, your results will show up like this:
_time Weekday source1count source2count source3count
Here's a simple query you can dissect:
| gentimes start=-1 increment=1h | stats values(starttime) count(eval(starttime % 1000 == 0)) as zeroes count as total
It tells you there are 24 values in total, and of those 5 are divisible by 1000.
Martin, would you be able to give me a basic example of how I would use that command in my search? I'm not all that familiar with it... anything to get me started would help.
If the count by source doesn't fit your needs, you can also ditch the grouping and instead do count(eval(some condition)) thrice.
Basically, my current search requires the count of term1 in logfile1, the count of term2 in logfile1, and the count of term3 in logfile2. When I tried the example you provided, it looks like it is returning the total count per minute on each day. Ultimately, I want to use the table command to show Weekday, count1, count2, count3. That is what I had been doing with appendcols, but I think the number of subsearches is too large, as you also suspected.
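(Putting those pieces together, a sketch of a single search that produces all three counts might look like the following. The source names and term1/term2/term3 are placeholders for your actual values; searchmatch() checks whether the event matches the given search string:)

((source="logfile1.log" (term1 OR term2)) OR (source="logfile2.log" term3)) | timechart span=1d count(eval(searchmatch("term1"))) as count1 count(eval(searchmatch("term2"))) as count2 count(eval(searchmatch("term3"))) as count3 | eval Weekday=strftime(_time,"%A %n%m/%d") | table Weekday, count1, count2, count3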