Problems pulling in incident data

todd_miller
Communicator

I'm currently running v2.8.0 of the Splunk Add-on for ServiceNow on Splunk v6.3.3. The data is ingested via the API on a standalone Splunk server, which forwards it to my standalone, non-clustered indexers. All of the data pulls work, including the custom tables we have built, except for the Incident table. I verified that I can log in to the SNOW instance with the account I was provided and that it does indeed return data. The error I'm seeing in the logs is below:

2016-03-21 07:07:51,623 INFO pid=49049 tid=Thread-8 file=snow_job_factory.py:__call__:34 | Start collecting from incident.
2016-03-21 07:07:51,623 INFO pid=49049 tid=Thread-8 file=snow_data_loader.py:_do_collect:117 | start https://instancename.service-now.com/api/now/table/incident?sysparm_exclude_reference_link=true&sysp...
2016-03-21 07:08:53,682 INFO pid=49049 tid=Thread-8 file=snow_data_loader.py:_do_collect:131 | end https://instancename.service-now.com/api/now/table/incident?sysparm_exclude_reference_link=true&sysp...
2016-03-21 07:08:54,786 ERROR pid=49049 tid=Thread-8 file=snow_data_loader.py:collect_data:101 | Failed to get records from https://instancename.service-now.com/incident
2016-03-21 07:08:54,888 INFO pid=49049 tid=Thread-8 file=snow_job_factory.py:__call__:49 | End collecting from incident.
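For anyone wanting to reproduce the login/data check outside the add-on, something along these lines works against the Table API (the instance name, user, and password below are placeholders, not real values):

```python
# Minimal standalone sanity check of the ServiceNow Table API -- the same
# incident endpoint the add-on polls. Instance/user/password are placeholders.

def incident_url(instance, limit=10):
    # Same endpoint as in the add-on's log lines, but with a small
    # sysparm_limit so the test pull stays fast.
    return ("https://{0}.service-now.com/api/now/table/incident"
            "?sysparm_exclude_reference_link=true&sysparm_limit={1}"
            .format(instance, limit))

if __name__ == "__main__":
    import requests  # third-party; only needed for the live check

    resp = requests.get(incident_url("instancename"),
                        auth=("api_user", "api_password"),
                        headers={"Accept": "application/json"},
                        timeout=120)
    resp.raise_for_status()
    # The Table API wraps records in a "result" list
    print(len(resp.json().get("result", [])))
```

If this returns records but the add-on's pull still errors out, the problem is on the add-on/quota side rather than the account.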

todd_miller
Communicator

See my response to /u/markdflip

markdflip
Path Finder

Did you ever solve this issue? I am experiencing the same problem.


todd_miller
Communicator

Thanks for bumping this, Mark. Yes, I actually did.

So the first thing I tried was pulling a smaller dataset (i.e., starting only a month back rather than from all time). That seemed to work at first, but in actuality it did not.

What actually fixed it was modifying the Splunk_TA_snow/bin/snow_data_loader.py script to use the sysparm_limit parameter, as shown below:

def collect_data(self, table, timefield, count=5000):
    assert table and timefield

    objs = []
    with self._lock:
        last_timestamp = self._read_last_collection_time(table, timefield)
        # sysparm_limit caps the records returned per request, so the
        # pull finishes before the REST quota cuts it off
        params = "{0}>={1}^ORDERBY{0}&sysparm_limit={2}".format(
            timefield, last_timestamp, count)
        _, content = self._do_collect(table, params)
        if not content:
            return
We also got the SNOW folks to raise our REST quota from 60 seconds to 120 seconds.

It seemed to help us but YMMV.
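For reference, here is the query string that format call produces, as a standalone sketch (the timefield and timestamp values are just examples):

```python
# Standalone sketch of the query-string construction from the modified
# collect_data(); the timefield and timestamp below are example values.

def build_params(timefield, last_timestamp, count=5000):
    # Filter on the time field, order by it, and cap the number of
    # records returned per request with sysparm_limit
    return "{0}>={1}^ORDERBY{0}&sysparm_limit={2}".format(
        timefield, last_timestamp, count)

print(build_params("sys_updated_on", "2016-03-01 00:00:00"))
# sys_updated_on>=2016-03-01 00:00:00^ORDERBYsys_updated_on&sysparm_limit=5000
```

Without the sysparm_limit piece the instance tries to return every matching record in one response, which is what was blowing past the 60-second REST quota on large Incident tables.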

markdflip
Path Finder

Brilliant, the API params change fixed it. Thanks!
