Splunk Search

Custom search command: how to make "streaming" commands truly streaming?

arkadyz1
Builder

I've just read this link: Are custom search commands truly 'streaming'? The author there claims to have created a much more 'streaming' interface than the one Intersplunk provides. Has anyone seen such an implementation?

I have a command which simply needs to read the incoming events one by one (I don't even care about sending them further down the pipe) and process each event separately. This setup is extremely slow right now: it runs through the events quickly, then spends at least a minute on 'finalizing job', and sometimes hangs outright (you know when Chrome says 'The page has become unresponsive', or even switches directly to the 'Aw, Snap!' page?). All I do is call splunk.Intersplunk.getOrganizedResults(), loop with for r in results: doing some simple processing inside the loop, then pass the results on using splunk.Intersplunk.outputResults(results).
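For reference, the pattern in question looks roughly like this (a minimal sketch; the per-event processing and the 'processed' field are placeholders). Note that getOrganizedResults() buffers every incoming event before the loop ever runs, so nothing is emitted until the whole result set has been read:

    import splunk.Intersplunk

    # getOrganizedResults() reads and buffers ALL incoming events up
    # front, which is why the work only appears to happen while the
    # job sits in 'finalizing'.
    results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()

    for r in results:
        r['processed'] = '1'  # placeholder for the simple per-event processing

    splunk.Intersplunk.outputResults(results)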

Is there any better approach to this task?
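For what it's worth, the kind of interface the linked post argues for is close to what the Splunk Python SDK (splunklib) provides with StreamingCommand, which hands events to stream() in chunks rather than collecting the whole result set first. A minimal sketch, assuming splunklib is installed and the command is registered in commands.conf; the class name and the 'processed' field are illustrative:

    #!/usr/bin/env python
    import sys
    from splunklib.searchcommands import dispatch, StreamingCommand, Configuration

    @Configuration()
    class ProcessCommand(StreamingCommand):
        """Sketch of a streaming command: events are processed as they
        arrive instead of being buffered into one big list first."""
        def stream(self, records):
            for record in records:
                record['processed'] = '1'  # placeholder per-event processing
                yield record

    dispatch(ProcessCommand, sys.argv, sys.stdin, sys.stdout, __name__)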

1 Solution

arkadyz1
Builder

Found my problem: for all the seeming simplicity of the processing, my script made some JSON calls for each event (the events are all in JSON format). Those JSON methods turn out to be quite CPU-intensive and time-costly: a mere 99 events took 63 seconds to process - way too much!
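An illustrative guess at the costly pattern (the field names are made up): parsing and re-serializing each event's JSON, once per event, adds up quickly when the events are large:

    import json

    # Stand-in for one Splunk result row; the real events were much larger.
    results = [{'_raw': '{"field_a": 1, "field_b": "x"}'}]

    for r in results:
        parsed = json.loads(r['_raw'])                 # CPU-heavy per event
        parsed['field_b'] = parsed['field_b'].upper()  # placeholder rework
        r['_raw'] = json.dumps(parsed)                 # serialized again on output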

My solution was to leave only a direct copy of the relevant fields into an intermediate file, and to create another script outside of Splunk for further processing of that file. The script's running time went down from 63 seconds to 0.013 seconds - quite an improvement! Since my initial script's purpose was to create a file with the events parsed and reassembled in a specific way, to be saved as an input file for another application, the end result is fine with me.
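A minimal sketch of that fix, assuming the relevant data lives in _raw; the output path is an illustrative assumption. The search command copies field values verbatim, with no JSON calls at all:

    import splunk.Intersplunk

    results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()

    # Direct copy only - no json.loads()/json.dumps() inside the command.
    # The path and the '_raw' field are illustrative assumptions.
    with open('/tmp/events_for_export.txt', 'w') as out:
        for r in results:
            out.write(r.get('_raw', '') + '\n')

    splunk.Intersplunk.outputResults(results)

The expensive JSON parsing and reassembly then runs outside the search pipeline, in a standalone script that reads the intermediate file.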

The irony is that I could not export the events directly - they were too long for Splunk to handle (I kid you not: an error stating that the 'URL is too long' was the entire content of the .CSV file I tried to create). That is why I resorted to a script in the first place.
