I have the app installed on a heavy forwarder forwarding to a clustered indexer. Every time it pulls data, it pulls everything back to the date set in inputs.conf. How can the app remember the last time it pulled? This is creating duplicate data and heavily skewing the data.
Having had an email back from the engineer, and looking through digital_shadows.py, the app tries to delete all the logs in the index it created, i.e.:

    delete_existing_pipeline(self):
        delete_query = 'search source=digital_shadows sourcetype=pipeline | delete'

and then pulls all the logs in again.
This delete can't be done when using a heavy forwarder to pull the logs, as the forwarder won't be able to access/delete the index.
Also, am I right in saying that the delete command doesn't actually delete the data, it just makes it unsearchable? If so, we are storing a huge amount of duplicated data.
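For context, my understanding is that `| delete` only flags events as unsearchable; actually reclaiming the disk space requires cleaning the index from the CLI on the indexer itself (the index name `digital_shadows` here is an assumption for illustration), something like:

```shell
# Assumed index name; clean eventdata permanently removes events on disk,
# so Splunk must be stopped first and the data cannot be recovered.
splunk stop
splunk clean eventdata -index digital_shadows -f
splunk start
```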
Any ideas what the next move is other than to rewrite the app?
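For reference, what I'd expect instead of the delete-and-repull approach is a checkpoint: persist the timestamp of the last successful pull and only request newer events on the next run. A minimal sketch (the file path and function names are hypothetical, not from digital_shadows.py; a real modular input would use the checkpoint directory Splunk hands to the script):

```python
import json
import os
from datetime import datetime, timezone

# Hypothetical checkpoint location; a real modular input would use the
# checkpoint_dir Splunk passes in via the input configuration.
CHECKPOINT_FILE = "/opt/splunk/var/lib/splunk/modinputs/digital_shadows/checkpoint.json"


def load_last_pull(path, default_start):
    """Return the timestamp of the last successful pull, or the
    inputs.conf start date on the first run / on a damaged file."""
    try:
        with open(path) as f:
            return json.load(f)["last_pull"]
    except (OSError, KeyError, ValueError):
        # First run (no checkpoint yet): fall back to the configured start date.
        return default_start


def save_last_pull(path, when=None):
    """Persist the pull time so the next run only fetches newer events."""
    when = when or datetime.now(timezone.utc).isoformat()
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump({"last_pull": when}, f)
    return when
```

The pull loop would then call `load_last_pull()` to build its API query window and `save_last_pull()` only after events are successfully indexed, so a failed run safely re-pulls the same window.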
Hi @mrlong67, we are aware of this issue, and someone from our support team should have let you know that we are still actively investigating how to resolve it. We will update your ticket directly once we have more information. One quick correction to your post above, which repeats some incorrect information we passed along to you: the app will ONLY delete older versions of an updated incident; it does not delete ALL logs.
We will reach out directly to you when we have more information.
Thanks!
@mrlong67, per the documentation you should reach out to support@digitalshadows.com or your Digital Shadows representative. Have you tried those already?
https://splunkbase.splunk.com/app/3650/#/details
@robcampbell 🙂
A ticket has been raised and I'm awaiting a response from an engineer. Thanks!