All Apps and Add-ons

splunk_ta_o365 DLP event collection fails with: "Date range for requested content is invalid"

nickhills
Ultra Champion

I have installed and configured the splunk_ta_o365 add-on from https://splunkbase.splunk.com/app/3110/.

It retrieves and collects data without issue for all of my data sources. However, for DLP events, I am getting the following exception:

2019-02-05 15:22:35,149 level=ERROR pid=56844 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=utils.py:wrapper:67 | start_time=1549380135 datainput="365_dlp" | message="Data input was interrupted by an unhandled exception." 
Traceback (most recent call last):
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/utils.py", line 65, in wrapper
    return func(*args, **kwargs)
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 91, in run
    executor.run(adapter)
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunksdc/batch.py", line 47, in run
    for jobs in delegate.discover():
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/modinputs/management_activity.py", line 122, in discover
    for page in subscription.list_available_content(session, start_time, end_time):
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 129, in list_available_content
    response = self._list_available_content(session, _start_time, _end_time)
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 124, in _list_available_content
    return self._perform(session, 'GET', '/subscriptions/content', params)
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 158, in _perform
    return self._request(session, method, url, kwargs)
  File "/apps/splunk/etc/apps/splunk_ta_o365/bin/splunk_ta_o365/common/portal.py", line 170, in _request
    raise O365PortalError(response)
O365PortalError: 400:{"error":{"code":"AF20055","message":"Date range for requested content is invalid startTime:2019-01-29T15:22:15 endTime:2019-01-29T16:22:15."}}
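
To rule out the add-on itself, the same content listing can be replayed directly against the Management Activity API. This is only a rough sketch: it assumes the Python requests library, a valid OAuth bearer token for manage.office.com, and your Azure AD tenant GUID (both placeholders below), with the endpoint path taken from the public API documentation for listing available content.

import requests

TENANT_ID = "<your-azure-ad-tenant-guid>"        # placeholder
TOKEN = "<bearer-token-for-manage.office.com>"   # placeholder

# Replay the same window the TA requested, for the DLP.All content type.
url = "https://manage.office.com/api/v1.0/{}/activity/feed/subscriptions/content".format(TENANT_ID)
params = {
    "contentType": "DLP.All",
    "startTime": "2019-01-29T15:22:15",
    "endTime": "2019-01-29T16:22:15",
}
response = requests.get(url, params=params,
                        headers={"Authorization": "Bearer " + TOKEN})
print(response.status_code)
print(response.text)  # the same AF20055 error here would point at the API, not the TA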

What is interesting is the preceding message in the logs, which reads:

2019-02-05 15:22:15,647 level=INFO pid=56844 tid=MainThread logger=splunk_ta_o365.modinputs.management_activity pos=management_activity.py:discover:131 | start_time=1549380135 datainput="365_dlp" | message="Fresh content found." last="20190205144609625000257$20190205151959503000068$dlp_all$DLP_All" first="20190205144609625000257$20190205151959503000068$dlp_all$DLP_All"

The eagle-eyed will notice that both the epoch start_time and the 'first' parameter match the expected time, i.e. today (Tuesday 5th February 2019); however, the error message returned by the script cites the requested date as Tuesday 29th January 2019.

I can't see why or where the January date comes from.
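
For what it's worth, a quick standalone sanity check on the timestamps (nothing TA-specific here):

import datetime

# start_time from the data input's log lines is a Unix epoch
print(datetime.datetime.fromtimestamp(1549380135, tz=datetime.timezone.utc))
# -> 2019-02-05 15:22:15+00:00, i.e. "today" as expected

# ...whereas the API error cites this request window:
rejected_start = datetime.datetime(2019, 1, 29, 15, 22, 15)
input_start = datetime.datetime(2019, 2, 5, 15, 22, 15)
print(input_start - rejected_start)
# -> 7 days, 0:00:00 exactly

So the start time in the failing request sits exactly seven days behind the input's own start_time, which (if I read the API documentation correctly) is right on the boundary of how far back available content can be listed.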

I would be interested to hear from anyone who has successfully managed to get DLP events indexed, to understand whether this is a bug in the TA or a configuration issue on my side.

If my comment helps, please give it a thumbs up!
1 Solution

nickhills
Ultra Champion

I don't hold this up as a correct, or indeed a proper, solution, but here is what worked for me.

I stopped Splunk on my HF and deleted the checkpoint file for the DLP modinput from:
splunk:~/var/lib/splunk/modinputs/splunk_ta_o365_management_activity
Following that, I restarted the HF and the error cleared.

I can only assume some upgrade weirdness took place, and in my case the impact of deleting the checkpoint was a calculated low risk.
You might think twice if you had already collected historic DLP events and didn't want to re-index all of that data again; in that case it would probably be better to engage Splunk Support.
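
If you want to be more cautious than I was, you could rename the checkpoint rather than delete it. Here is a minimal sketch of that, with Splunk stopped first; it assumes SPLUNK_HOME points at your installation and that the checkpoint file in the directory above is named after the data input stanza (365_dlp in my case), so list the directory and confirm the actual file name before moving anything.

import os
import shutil

# Assumed locations and names - confirm them on your own HF first.
splunk_home = os.environ.get("SPLUNK_HOME", "/apps/splunk")
ckpt_dir = os.path.join(splunk_home, "var", "lib", "splunk",
                        "modinputs", "splunk_ta_o365_management_activity")

# See what is actually in the checkpoint directory.
for name in sorted(os.listdir(ckpt_dir)):
    print(name, os.path.getsize(os.path.join(ckpt_dir, name)))

# Back up (rather than delete) the checkpoint for the DLP input,
# assuming the file is named after the data input stanza.
ckpt = os.path.join(ckpt_dir, "365_dlp")
if os.path.exists(ckpt):
    shutil.move(ckpt, ckpt + ".bak")
    print("moved", ckpt, "->", ckpt + ".bak")

After restarting Splunk the input starts with a fresh collection window, so be aware it may re-collect (and re-index) recent content.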

If my comment helps, please give it a thumbs up!
