
Convert this query to a data model

DEAD_BEEF
Builder

Hello all! I am having problems understanding how to structure my query as a data model, with the ultimate goal of making it an accelerated data model, since the existing search takes much too long to run. The part I am struggling with is breaking my query out into child datasets under the root event, and knowing where to put the constraints (on the parent above, or on the child).

I would appreciate it if you could take a look at this query and provide any input so I can better understand how to build it into an accelerated data model and get these results faster. Maybe that isn't possible, or doesn't provide any benefit since I am searching across all indexes, and a scheduled search would be the better option? I use this query to find systems that have incorrect time zone settings. Perhaps I am just not understanding what data models are for and am attempting to use them in a way they were not designed for.

index=* OR index=_*
| dedup host,source
| eval lag=_time-_indextime
| search lag>1000 OR lag<-1000
| convert ctime(_indextime) AS "Index Time"
| convert ctime(_time) AS "Log Time"
| table "Log Time","Index Time",lag,index,host,source
| rename range AS TZ
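
For reference on the constraint question: in a data model, the root event dataset carries the broad constraint (the base search terms), and each child dataset adds a narrower constraint that is implicitly ANDed onto its parent's, so the wide filter goes on the parent and the narrowing filters go on the children. A rough sketch with made-up dataset names of how this search might map; note that lag would have to be added to the dataset as an eval-based calculated field, since it does not exist in the raw events:

 Data model "Indexing_Lag" (hypothetical)
   Root event dataset "Events"      constraint: index=* OR index=_*
                                    calculated field: lag = _time - _indextime
   Child dataset "Windows_Hosts"    constraint: host=win*   (ANDed onto the root's constraint)

Once such a model is accelerated, it would be read back with tstats instead of a raw index=* search, roughly like:

 | tstats avg(Events.lag) as lag from datamodel=Indexing_Lag.Events by Events.host Events.source
 | where lag>1000 OR lag<-1000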
1 Solution

tiagofbmm
Influencer

Hi

Creating a datamodel for a single query seems too much. Why don't you save it as a scheduled search and accelerate it?
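
Roughly, that comes down to something like the following savedsearches.conf stanza; the report name, schedule, and summary range below are placeholders, and the same settings are available in the UI when you edit the report's schedule and acceleration. Acceleration only applies if the search ends in a transforming command such as stats, which comes up below.

 [Indexing lag by host and source]
 # transforming version of the search, so the report is eligible for acceleration
 search = index=* OR index=_* | eval lag=_time-_indextime | stats avg(lag) AS lag by host, source | where lag>1000 OR lag<-1000
 # run hourly over the last 24 hours
 enableSched = 1
 cron_schedule = 0 * * * *
 dispatch.earliest_time = -24h
 dispatch.latest_time = now
 # report acceleration with a 7-day summary range
 auto_summarize = 1
 auto_summarize.dispatch.earliest_time = -7d@d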


DEAD_BEEF
Builder

Hmm, it looks like it can't be accelerated because it doesn't have a transforming command. I'm trying to rewrite the query to include one so that the acceleration option becomes available.
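
One possible way to do that, sketched out so the original columns survive: replace the dedup with a stats that keeps the most recent event per index/host/source (log_time and index_time are just scratch field names):

 index=* OR index=_*
 | eval lag=_time-_indextime, index_time=_indextime
 | stats latest(_time) as log_time latest(index_time) as index_time latest(lag) as lag by index, host, source
 | where lag>1000 OR lag<-1000
 | convert ctime(log_time) AS "Log Time" ctime(index_time) AS "Index Time"
 | table "Log Time","Index Time",lag,index,host,source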


tiagofbmm
Influencer

Your query doesn't seem very accurate. dedup host,source will only give you one event per (source,host) pair, so if you are trying to understand the lag of each pair, you'd want to do some stats first.

index=* OR index=_*
| eval lag=_time-_indextime
| stats avg(lag) as lag by source,host
| search lag>1000 OR lag<-1000


DEAD_BEEF
Builder

I see what you mean. I changed it around based on your suggestions but now the index and log time columns do not populate. Any idea why?

index=* OR index=_*
| convert ctime(_indextime) AS "Index Time"
| convert ctime(_time) AS "Log Time"
| eval lag=_time-_indextime
| stats avg(lag) as lag by source,host
| search lag>1000 OR lag<-1000
| table "Log Time","Index Time",lag,index,host,source

tiagofbmm
Influencer

Sorry, maybe I was not clear enough.

The stats I put there was to help you get more information than you had with dedup (which only gave you information from one event per (source,host) pair). Keep in mind that stats only keeps the fields you aggregate or group by, which is why the other columns stopped populating.

I think you have two paths here. Either you want to see the Log Time and Index Time for all the events, in which case you can do the following (which is messy, as it shows you every event):

 index=* OR index=_*
 | convert ctime(_indextime) AS "Index Time"
 | convert ctime(_time) AS "Log Time"
 | eval lag=_time-_indextime
 | search lag>1000 OR lag<-1000
 | table "Log Time","Index Time",lag,index,host,source

What I would do instead is run some stats on top of that lag:

 index=* OR index=_*
 | convert ctime(_indextime) AS "Index Time"
 | convert ctime(_time) AS "Log Time"
 | eval lag=_time-_indextime
 | bucket _time
 | stats avg(lag) as lag by index, source, host, "Index Time"
 | search lag>1000 OR lag<-1000
 | table  "Index Time",lag,index,host,source

I believe this last one gives you the information you actually are looking for.
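
Since the end goal is catching wrong time zone settings, one optional addition: expressing the lag in hours makes whole-hour offsets easy to spot (lag_hours is just an illustrative field name), e.g. appended to the search above:

 | eval lag_hours=round(lag/3600,2)
 | table "Index Time",lag,lag_hours,index,host,source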

Let me know


DEAD_BEEF
Builder

Yup, the second query was what I was looking for. Much appreciated!
