Splunk Search

When using the geostats command in a pie chart, how do I re-sort the data that is displayed?

avaishsplunk
Path Finder

I am trying to build a map. My data is in the format below, for multiple cities across the world:

OCode    LineCount    LineCount2    Lat1     Lat2
A        100          120           98.12    -112.12
B        100          150           98.53    -115.23

When I try to use the geostats command, the data in the pie chart comes out as:

A LineCount  100
B LineCount  100
A LineCount2 120
B LineCount2 150

However, I want it as:

A LineCount  100
A LineCount2 120
B LineCount  100
B LineCount2 150
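
For reference, here is a run-anywhere sketch that roughly reproduces this setup and feeds it to geostats. It is only an illustration: the OCode rows, counts, and coordinates are placeholders, and the Lat1/Lat2 columns from the table are treated as latitude and longitude.

| makeresults
| eval row="A,100,120,48.12,-112.12 B,100,150,38.53,-115.23"
| makemv row
| mvexpand row
| eval row=split(row,",")
| eval OCode=mvindex(row,0), LineCount=tonumber(mvindex(row,1)), LineCount2=tonumber(mvindex(row,2)), Lat=tonumber(mvindex(row,3)), Long=tonumber(mvindex(row,4))
| fields - row
| geostats latfield=Lat longfield=Long sum(LineCount) AS LineCount, sum(LineCount2) AS LineCount2 by OCode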

aaraneta_splunk
Splunk Employee

@avaishsplunk - Did one of the answers below help provide a solution to your question? If yes, please click “Accept” below the best answer to resolve this post and upvote anything that was helpful. If not, please leave a comment with more feedback. Thanks.


woodcock
Esteemed Legend

Building off of what @niketnilay said, you can use whitespace to make the alphanumeric sorting prefix invisible like this:

| rename "Count: GET"  AS "   Count: GET"
         "Sum: GET"    AS  "  Sum: GET" 
         "Count: POST" AS   " Count: POST" 
         "Sum: POST"   AS    "Sum: POST"

Here is a run-anywhere example in action:

| makeresults 
| eval myfield="one two three four" 
| makemv myfield 
| mvexpand myfield
| eval myfieldSortable = case(
    (myfield="one"),   "    " . myfield,
    (myfield="two"),   "   "  . myfield,
    (myfield="three"), "  "   . myfield,
    (myfield="four"),  " "    . myfield,
    true(),                     myfield)
| multireport
    [| eval host="leadingSpacesNO", count=1 | fields - myfieldSortable | xyseries host myfield count]
    [| eval host="leadingSpacesYES", count=2 | fields - myfield | xyseries host myfieldSortable count]

niketn
Legend

The stats fields generated by geostats are sorted alphabetically. You can pipe in rename commands to add sequence-number prefixes (1., 2., 3., and so on) in whatever order you need.

Here is an example. By default, Count: GET and Count: POST are displayed first, followed by Sum: GET and Sum: POST. The following will sort the GET fields first, followed by the POST fields:

| rename "Count: GET" as "1.Count: GET" 
| rename "Sum: GET" as "2.Sum: GET" 
| rename "Count: POST" as "3.Count: POST" 
| rename "Sum: POST" as "4.Sum: POST"
____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

avaishsplunk
Path Finder

Hello Somesoni2,

Thanks for your reply. Below is my search query:

| inputlookup abc.csv
|eval linecount=0
|eval lcount=0
|fields ORGANIZATION_CODE,linecount,Lat,Long, DESCRIPTION| append  [search index= xxx sourcetype=yyy
|search "XYXY"
|spath output=OpName path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.operationName
|spath output=EvType path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.eventTypeCode
|spath output=Header_Count path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute1
|spath output=Line_Count path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute2
|spath output=Org_Code path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute3
|spath output=status path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute4
|spath output=TimeZone path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute5
|spath output=CDC_RDC path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute6
|eval combined=mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(OpName,EvType),Header_Count),Line_Count),Org_Code),status),TimeZone),CDC_RDC)
|mvexpand combined
|eval combined=split(combined,",")
|eval  OpName=mvindex(combined,0)
|eval  EvType=mvindex(combined,1)
|eval Header_Count=mvindex(combined,2)
|eval Line_Count=tonumber(mvindex(combined,3))
|eval Org_Code =mvindex(combined,4)
|eval status =mvindex(combined,5)
|eval TimeZone=mvindex(combined,6)
|eval CDC_RDC=mvindex(combined,7)
|stats sum(Line_Count) as linecount by Org_Code
|rename Org_Code as ORGANIZATION_CODE 
|fields linecount,ORGANIZATION_CODE,Lat,Long, DESCRIPTION]
|stats sum(linecount) as "Lines", values(Lat) as "Latitude", values(Long)  as "Longitude", values(DESCRIPTION) as "OrganizationName" by ORGANIZATION_CODE
| join type=outer ORGANIZATION_CODE [|inputlookup abc.csv| eval Line_Count=0|fields ORGANIZATION_CODE,Line_Count,DESCRIPTION
| append[search index=xxx sourcetype=yyy| search "XYXY"|spath output=OpName
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.operationName|spath output=EvType
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.eventTypeCode|spath output=Header_Count
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute1|spath output=Line_Count
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute2|spath output=Org_Code
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute3|spath output=status
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute4|spath output=TimeZone
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute5|spath output=CDC_RDC
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute6|spath output=Ord_Type
path=payload.gpmGenerateEventLogs.gpmGenerateEventLog{}.attribute7
|eval combined=mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(mvzip(OpName,EvType),Header_Count),Line_Count),Org_Code),status),TimeZone),CDC_RDC),Ord_Type)
| mvexpand combined
|eval combined=split(combined,",")
|eval  OpName=mvindex(combined,0)
|eval  EvType=mvindex(combined,1)
|eval Header_Count=mvindex(combined,2)
|eval Line_Count=mvindex(combined,3)
|eval Org_Code =mvindex(combined,4)
|eval status =mvindex(combined,5)
|eval TimeZone=mvindex(combined,6)
|eval CDC_RDC=mvindex(combined,7)
|eval Ord_Type=mvindex(combined,8)
|where Ord_Type="abc" 
| rename Org_Code as ORGANIZATION_CODE] 
|stats sum(Line_Count) as "Lines1" by ORGANIZATION_CODE]
| geostats latfield=Latitude longfield=Longitude values(Lines) as "OLI", values(Lines1) as "O3P", values(OrganizationName) as "Org" by OrganizationName maxzoomlevel=9 globallimit=0

somesoni2
Revered Legend

Could you share your current search?
