We currently have a single combined Splunk search head/indexer locally, plus 4 remote indexers in different countries.
While setting up a new dedicated search head, we noticed that each remote indexer we added made searches slower. These remote indexers store far less data than the local indexer, yet including them increases search runtime roughly 10x.
Is this caused by latency/bandwidth to the remote indexers? If so, is it possible to install remote search heads purely to help with searching those remote indexers, so that the local search head requests data from a remote search head rather than querying each remote indexer individually?
What is the bandwidth to those sites? Since the remote indexers do not store much data, you may want to forward that data to the local indexer (or to a separate indexer set up locally) so that searches run against a server on a high-bandwidth link. If you forward the data with splunktcp from the remote indexers, you can be assured the information gets there eventually, even over the slower links.
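The forwarding suggested above is set up on each remote indexer's outputs.conf and the local indexer's inputs.conf. A minimal sketch, assuming a placeholder hostname (local-indexer.example.com) and the conventional receiving port 9997:

```ini
# outputs.conf on each remote indexer
# (local-indexer.example.com is a hypothetical hostname)
[tcpout]
defaultGroup = local_indexer

[tcpout:local_indexer]
server = local-indexer.example.com:9997
# Request indexer acknowledgement so the forwarder retransmits
# events dropped by the slow/lossy WAN link
useACK = true

# inputs.conf on the local indexer: listen for splunktcp traffic
[splunktcp://9997]
```

With useACK enabled, the forwarder holds events in its wait queue until the receiver acknowledges indexing them, which is what makes "gets there eventually" hold over high-latency links.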
It sounds very strange. If you search for *, you get all the raw data, which you could then export. So leaving the data at the remote indexers while still allowing searches against them provides no security whatsoever.
Bandwidth isn't too bad (up to 1 Mbit/s) but can be worse at times. It's mainly the latency, which is a good 300-400 ms, and there can be packet loss at times.
We can't forward data out of those sites for ownership/security reasons (as strange as it sounds) or we wouldn't have used remote indexers in the first place.