Hi.
We have a client that strictly enforces a policy under which the only open ports permitted are established listening ones, so the default Splunk Forwarder -> Splunk Server flow (forwarder-initiated connections) won't work.
Is there any way to configure the Splunk Server to actively poll the Forwarders for new data, and thus initiate the collection process itself?
No, there is not; polling forwarders from the server would not scale well. However, you could use a different approach, such as running scripts on the server that pull the files from the remote machines (without using a forwarder). You could also possibly mount the remote files over NFS or SMB instead.
Every forwarder could index locally, and a central search head could query all of those indexers via Distributed Search. This may be a nightmare to manage: the Deployment Server (and the License Master in 4.2.x) require inbound connections to the central server, so configuration and licensing would have to be handled individually or through another process. (Not to mention the storage and processing requirements on each individual server.)
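For reference, the peer list on the central search head lives in distsearch.conf; a minimal sketch with placeholder hostnames (each peer must still listen on its management port, 8089 by default, and trust is established by distributing the search head's key to the peers):

```ini
# distsearch.conf on the central search head (hostnames are placeholders)
[distributedSearch]
servers = app01.example.com:8089, app02.example.com:8089
```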
Thanks gk.
I understand the scalability issue, but wouldn't adding that option, even for a small number of hosts, broaden Splunk's applicability? Some NMSs do this.
How will enterprises with strict security policies like this get to know Splunk?
What kind of scripts are you suggesting? rsync/scp/sftp transfer?
How would I guarantee transferring only the new bytes in each log?
The cleanliness of a Splunk-only solution would be preferable.
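On transferring only the new bytes: rsync's delta algorithm already limits what crosses the wire, but if you script the collection yourself you can also track a per-file byte offset explicitly. A minimal sketch (the function name and state-file convention are my own, not anything Splunk provides):

```shell
#!/bin/sh
# Emit only the bytes appended to a log file since the last invocation,
# remembering the position in a small per-file state file.
collect_new_bytes() {
    logfile=$1
    statefile=$2
    offset=0
    if [ -f "$statefile" ]; then
        offset=$(cat "$statefile")
    fi
    # Current size in bytes (tr strips the padding some wc builds emit).
    size=$(wc -c < "$logfile" | tr -d '[:space:]')
    if [ "$size" -lt "$offset" ]; then
        # File shrank: it was rotated or truncated, so start over.
        offset=0
    fi
    # Print everything past the saved offset (tail -c +N starts at byte N).
    tail -c +$((offset + 1)) "$logfile"
    echo "$size" > "$statefile"
}
```

Each run ships only the delta, and a rotation resets the offset so nothing is lost. Note that this re-implements, crudely, the seek-pointer tracking the forwarder's file monitor does for you, which is part of why a Splunk-only solution would be cleaner.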