
How to convert a Sumo Logic query to Splunk?

dbashyam
Explorer

Hi, how do I convert this Sumo Logic query to Splunk?

  _collector="M2" "Memory Monitor" | parse ",DB Job-Connection-Pool:
*/*/*/" as db_job_1,db_job_2,db_job_3 | parse "Host/Appserver/Version:
*/*/*" as host,appserver,version | parse "DB General-Connection-Pool: */" as db_1 | parse ",Used File Descriptors: *," as used_fd | parse "Used Client Connections: */300," as used_client_conn // this extracts the
    # of work item threads in use | parse ",Used Work Item Threads:
*/100," as used_wit | timeslice 5m  // find the peak in 5 minutes | max(used_wit) as maxwit,  max(used_client_conn) as max_cc,   max(db_1) as max_db1    by
    _timeslice, appserver // add up all the WIT in use across the environment, count the number of appservers shown in the logs available | sum (maxwit) as env_wit, count_distinct (appserver) as appserver_count   by _timeslice // Divide the total # of work item threads in use over the number of appservers in use, express as a percentage | env_wit / appserver_count / 100  as wit_pct | fields wit_pct,_timeslice // Show the same time frames per day on the same graph | compare with timeshift 1d 7

to4kawa
Ultra Champion
| max(used_wit) as maxwit, max(used_client_conn) as max_cc, max(db_1) as max_db1 by _timeslice, appserver // add up all the WIT in use across the environment, count the number of appservers shown in the logs available
| sum(maxwit) as env_wit, count_distinct(appserver) as appserver_count by _timeslice // Divide the total # of work item threads in use over the number of appservers in use, express as a percentage
| env_wit / appserver_count / 100 as wit_pct
| fields wit_pct,_timeslice // Show the same time frames per day on the same graph
| compare with timeshift 1d 7

⇨

| bin span=5m _time
| stats max(used_wit) as maxwit, max(used_client_conn) as max_cc, max(db_1) as max_db1 by _time appserver
| stats sum(maxwit) as env_wit, dc(appserver) as appserver_count by _time
| eval wit_pct = env_wit / appserver_count / 100
| fields _time wit_pct
| bin span=1d _time

Visualization > Line Chart
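
Note that the final | bin span=1d _time step collapses the results into daily buckets. If the intent of Sumo's compare with timeshift 1d 7 is to overlay the same time-of-day curve for each of the last 7 days on one chart, Splunk's timewrap command may be a closer match. A minimal sketch, reusing the wit_pct field computed above in place of the last bin step:

| timechart span=5m max(wit_pct) as wit_pct
| timewrap 1d

With the search time range set to the last 7 days, timewrap 1d draws one series per day, similar to the Sumo Logic comparison.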

Please implement the parse extractions in props.conf and transforms.conf.
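
For example, the parse-style extractions could be declared as search-time field extractions in props.conf. A minimal sketch, assuming a hypothetical sourcetype name my_app; the regexes mirror the sample event below:

# props.conf -- the sourcetype name "my_app" is an assumption
[my_app]
EXTRACT-used_fd = Used File Descriptors: (?<used_fd>\d+),
EXTRACT-used_wit = Used Work Item Threads: (?<used_wit>\d+)/\d+
EXTRACT-used_client_conn = Used Client Connections: (?<used_client_conn>\d+)/\d+
EXTRACT-db_general = DB General-Connection-Pool: (?<db_1>\d+)/
EXTRACT-host_appserver = Host/Appserver/Version: (?<host>[^/]+)/(?<appserver>[^/]+)/(?<version>\S+)

A transforms.conf stanza (referenced via REPORT- in props.conf) is only needed for more complex cases, such as delimiter-based multi-field extraction.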

| makeresults 
| eval _raw="[08 Jan 2020 03:00:44,715] [Scheduled-System-Tasks-Thread-13] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 11538530304,Free JVM (B): 10589348776,Used JVM (B): 949181528,VSize (B): 37949321216,RSS (B): 12036202496,Used File Descriptors: 387,Used Work Item Threads: 0/100,Used Client Connections: 0/500,DB Client-Connection-Pool: 0/0/0/200/150/50,DB Job-Connection-Pool: 0/0/0/200/150/50,DB General-Connection-Pool: 4/4/0/200/150/50,Host/Appserver/Version: mmmm.rrrr.com/mmmmm-job/9.0.2.2"
| eval _raw=replace(_raw,",(?=\d+\])",".")
| extract pairdelim="]," kvdelim=":"
| rex "\[(?<time>\d\d \w+ \d{4} \d\d:\d\d:\d\d\.\d{3})\]"
| fields - _*

This is sample data.
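
If you would rather mirror the Sumo parse statements one-for-one inside the search, rex with named capture groups is an alternative; a sketch against the same sample event, with field names copied from the original query:

| rex ",Used File Descriptors: (?<used_fd>\d+),"
| rex "Used Work Item Threads: (?<used_wit>\d+)/\d+"
| rex "Used Client Connections: (?<used_client_conn>\d+)/\d+"
| rex "DB General-Connection-Pool: (?<db_1>\d+)/"
| rex "Host/Appserver/Version: (?<host>[^/]+)/(?<appserver>[^/]+)/(?<version>\S+)"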

mydog8it
Builder

Extracting fields in Splunk can be done interactively in the search operation, but it is most often performed when the data is brought into Splunk. For most products the field extraction work is already done for you in a construct called an add-on, available from Splunk's app store, splunkbase.splunk.com. If you need to create extractions for a custom log source, you can do that by creating your own configuration files so they are applied to the data for all users at search time, or directly in the Search Processing Language (SPL), similar to the search you provided. However, specifying accurate field extractions without sample data is not recommended; it is nearly impossible to do. Here are a couple of links to field extraction documentation if you want more information:
https://docs.splunk.com/Documentation/Splunk/8.0.1/Knowledge/Automatickey-valuefieldextractionsatsea...
https://docs.splunk.com/Documentation/Splunk/8.0.0/Knowledge/Createandmaintainsearch-timefieldextrac...

As for the search logic...
Some of the functionality in your search maps directly to Splunk; for the rest I will take some artistic liberty and explain how I would do it in Splunk based on your description.

Timeslice looks like bucket (an alias for the bin command) in Splunk: | bucket span=5m. Documentation--> https://docs.splunk.com/Documentation/Splunk/8.0.1/SearchReference/Bin
max() in Splunk is an aggregate function. The max() 'command' is an argument to a higher-level command (chart, stats, timechart, sparkline()), and that higher-level command affects the output/visualization of the data. Documentation--> https://docs.splunk.com/Documentation/Splunk/8.0.1/SearchReference/Aggregatefunctions
sum() is another aggregate function in Splunk and is documented on the same page as max().
Math in Splunk looks different from what you are accustomed to; again, it is processed as an argument to the eval command. Documentation--> https://docs.splunk.com/Documentation/Splunk/8.0.1/SearchReference/MathematicalFunctions
The compare function that you show in the last line could be performed within the search itself, but in Splunk I think I would build this data collection out as a metrics store and run my search across it (a combined sketch of the stats-based pieces follows below). Documentation--> https://docs.splunk.com/Documentation/Splunk/8.0.1/Metrics/Overview
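
Putting those pieces together (bucket for the timeslice, the max() and sum() aggregate functions inside stats, and eval for the math), a skeleton of the conversion might look like this; the field names are taken from the original query:

| bucket span=5m _time
| stats max(used_wit) as maxwit by _time appserver
| stats sum(maxwit) as env_wit dc(appserver) as appserver_count by _time
| eval wit_pct = env_wit / appserver_count / 100

If the data were instead ingested into a metrics index, the read side of this would be an mstats search over the stored measurements.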

I don't think this is exactly what you wanted, but it is what I can offer. I hope you find it helpful.


dbashyam
Explorer

This is the data; could you please help here? I am still trying to digest the links you gave 🙂

[08 Jan 2020 03:00:44,715] [Scheduled-System-Tasks-Thread-13] [INFO] [System:System:] [Memory Monitor] Total JVM (B): 11538530304,Free JVM (B): 10589348776,Used JVM (B): 949181528,VSize (B): 37949321216,RSS (B): 12036202496,Used File Descriptors: 387,Used Work Item Threads: 0/100,Used Client Connections: 0/500,DB Client-Connection-Pool: 0/0/0/200/150/50,DB Job-Connection-Pool: 0/0/0/200/150/50,DB General-Connection-Pool: 4/4/0/200/150/50,Host/Appserver/Version: mmmm.rrrr.com/mmmmm-job/9.0.2.2
