
How to distribute dynamic data from search heads to indexers for use by a custom command on the indexers?

redspot
New Member

Hi Devs/Folks,

I'm developing an alternate "lookup" command (in Python) that doesn't use the standard CSV system. I'm trying to get the custom command pushed onto each indexer so it runs locally. But each instance of the command needs some data: a shared list object/blob/file/thingy, which is not very big.

How can I distribute that data to each indexer? The information will change over time, perhaps hourly.
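For context, here's roughly the shape of the command. It resolves its data file relative to the script itself, so whatever lands next to it on the indexer gets picked up. This is a simplified sketch using plain CSV on stdin/stdout the way an external lookup script would; all the names (lookup_data.json, the "key"/"enriched" fields) are made up:

import csv
import json
import os
import sys

# The script ships in the app's bin directory, so when the app reaches an
# indexer the data file travels with it; resolve the file relative to this
# script instead of hard-coding $SPLUNK_HOME paths.
APP_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
DATA_FILE = os.path.join(APP_DIR, "lookups", "lookup_data.json")  # made-up name

def main():
    with open(DATA_FILE) as f:
        data = json.load(f)  # the small shared blob, keyed by lookup value

    reader = csv.DictReader(sys.stdin)
    fields = list(reader.fieldnames or []) + ["enriched"]
    writer = csv.DictWriter(sys.stdout, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        # Hypothetical enrichment: look the key field up in the shared blob.
        row["enriched"] = data.get(row.get("key", ""), "")
        writer.writerow(row)

if __name__ == "__main__":
    main()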

Thanks,
Wilson


Ayn
Legend

I agree that the docs could be clearer on this. There's a sort of implicit TTL on bundles, which you can derive from settings in limits.conf:

replication_period_sec  = <int>
* The minimum amount of time in seconds between two successive bundle replications.
* Defaults to 60

replication_file_ttl = <int>
* The TTL (in seconds) of bundle replication tarballs, i.e. *.bundle files.
* Defaults to 600 (10m)

sync_bundle_replication = [0|1|auto]
* Flag indicating whether configuration file replication blocks searches or is run asynchronously.
* When set to auto, Splunk uses asynchronous replication only if all the peers
* support async bundle replication; otherwise it falls back to sync replication.
* Defaults to auto
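So if you need the indexers to pick up changes faster, you can tune these on the search head. I believe they live in the [search] stanza; something like this (the values are illustrative, not a recommendation):

# limits.conf on the search head (illustrative values)
[search]
# Allow bundle replication to run as often as every 30 seconds
replication_period_sec = 30
# Keep replication tarballs (*.bundle files) around for 10 minutes
replication_file_ttl = 600
# Let Splunk pick async replication when all peers support it
sync_bundle_replication = auto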

Other than that, for some things in our environment we're using plain old scripts that wget files from a central HTTP repository. You could also use rsync, or perhaps lsyncd (http://ampledata.org/splunk_search_peer_synchronization_with_lsyncd.html ), but really the best option in my opinion is the built-in bundle replication mechanism, unless you have good reasons not to use it.
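If you do go the pull-script route, writing to a temp file and renaming it into place sidesteps your locking worry, since readers never see a half-written file. A rough sketch you'd run from cron on each indexer; the URL and paths are placeholders:

#!/usr/bin/env python
"""Pull the shared blob from a central repo onto this indexer.
All names, URLs, and paths below are placeholders."""
import os
import tempfile
import urllib.request

SOURCE_URL = "http://repo.example.com/lookup_data.json"  # hypothetical
DEST = "/opt/splunk/etc/apps/myapp/lookups/lookup_data.json"  # hypothetical

def fetch():
    with urllib.request.urlopen(SOURCE_URL, timeout=30) as resp:
        payload = resp.read()
    # Write to a temp file in the same directory, then atomically rename,
    # so a command reading DEST never sees a partially written file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(DEST))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
        os.replace(tmp, DEST)  # atomic on POSIX
    except Exception:
        os.unlink(tmp)
        raise

if __name__ == "__main__":
    fetch()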


redspot
New Member

Because the documentation isn't clear on when replication occurs once files have changed, what the file locking is like on the indexers, etc.

When and how often are changed files pushed? Is there a read/write lock on the changed files held by whatever pushes them? Is there a read/write lock on the files as they are written into the app?

My concerns are about timing and concurrency.


Ayn
Legend

Hi,

A couple of questions back:
- Why build a custom command instead of building something that will serve as a dynamic lookup? (See the sketch after this list.)
- Did you already consider using the regular bundle replication in distributed search for this? This is pretty much what it's for - distributing knowledge objects.
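On the first point, an external (dynamic) lookup is just a script in your app's bin directory plus a transforms.conf stanza, and bundle replication carries both out to the search peers. A minimal sketch, with made-up stanza and field names:

# transforms.conf in your app (names are illustrative)
[mylookup]
external_cmd = mylookup.py key enriched
fields_list = key, enriched

The script receives CSV rows with those fields on stdin and writes the filled-in rows back on stdout, and you'd invoke it as ... | lookup mylookup key OUTPUT enriched.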
