Getting Data In

HTTP Event Collector and Splunk indexer cluster

sim_tcr
Communicator

Hello,

We are running Splunk version 6.3.3 with indexer clustering enabled.
We have 3 indexers in the cluster.
We have a separate Splunk instance running as the master server.

We need to enable HTTP Event Collector (HEC) on these indexers. I am referring to the documentation at http://dev.splunk.com/view/event-collector/SP-CAAAE73, which says:

Note: Using HTTP Event Collector in a distributed deployment is incompatible with indexer clustering. Specifically, cluster peers are not supported as deployment clients.

Does that mean HEC is not supported on clustered indexers?

We have tested this with a single indexer running as the HEC endpoint, and below are the options we passed to the docker run command.

  --log-driver=splunk \
  --log-opt splunk-token=<our-token> \
  --log-opt splunk-url=<our-url> \
  --log-opt splunk-insecureskipverify=true

But when there are multiple indexers behind a load balancer, we will be specifying splunk-url=<our-load-balancer-url>.
But what about --log-opt splunk-token? How can we specify all the indexers' tokens there?
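
For illustration, this is roughly what we are hoping the full invocation would look like against the load balancer, assuming a single token could somehow be shared across the indexers (the image name and placeholder values are ours to fill in; 8088 is HEC's default port):

  docker run \
    --log-driver=splunk \
    --log-opt splunk-token=<shared-hec-token> \
    --log-opt splunk-url=https://<our-load-balancer-url>:8088 \
    --log-opt splunk-insecureskipverify=true \
    <our-image>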

Thanks,
Simon Mandy

1 Solution

sim_tcr
Communicator

Hello Burch,

Here is what we finally decided.

  • 3 heavy forwarders with HEC enabled, behind a load balancer.
  • The outputs.conf on those 3 heavy forwarders will list all of our indexers.
  • These 3 heavy forwarders will be added as deployment clients on our deployment server, under a server class named hechf.
  • On the deployment server we will have an app named hechf. Whenever a new client wants to use HEC, we will add a stanza to hechf/local/inputs.conf on the deployment server and push it to the HFs with the reload command (see the sketch after this list).
  • New indexes will be created via the master, as we are currently doing.
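
As a rough sketch of what those pieces could look like (the hostnames, ports, and token GUID below are hypothetical placeholders, not our real values):

  # hechf/local/inputs.conf on each heavy forwarder: enable the HEC
  # listener and define one token per client
  [http]
  disabled = 0
  port = 8088

  [http://client_app_one]
  token = 11111111-2222-3333-4444-555555555555
  index = client_app_one

  # outputs.conf on each heavy forwarder: forward to all three
  # clustered indexers using Splunk's built-in load balancing
  [tcpout]
  defaultGroup = indexer_cluster

  [tcpout:indexer_cluster]
  server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997

Pushing a new token stanza out from the deployment server then becomes:

  $SPLUNK_HOME/bin/splunk reload deploy-server -class hechf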

Question,
Instead of using the deployment server, can we use a search head deployer to push these configs? The advantage I see is that the HFs would do a rolling restart rather than restarting all at once as they do with the deployment server.

Thanks,
Simon



sloshburch
Splunk Employee

Oh, that's a much better implementation. You can then set up a load balancer or DNS name to front the HEC heavy forwarders as they scale horizontally, which simplifies the clients that communicate with them.

Make sure to turn off Splunk Web on the indexers, both to discourage making changes on individual indexers AND to return compute to indexing activities.
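
A minimal sketch of that, whether set in etc/system/local or in an app distributed from the master's master-apps:

  # web.conf on each indexer
  [settings]
  startwebserver = 0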

HEC can NOT be managed by a deployer, unfortunately. You can set the app not to force a restart and then trigger the restart manually. I feel like I know another solution for rolling restarts on clients, but I am forgetting it at this time. 😞
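
A sketch of the no-forced-restart setting in serverclass.conf on the deployment server, reusing the hechf names from the post above:

  [serverClass:hechf:app:hechf]
  # deploy the app without restarting splunkd on the clients;
  # restart them manually afterwards, one at a time
  restartSplunkd = false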


nickhills
Ultra Champion

No. What this means is that indexer cluster peers cannot be managed by a deployment server.

Normally, if you wanted to "share" HEC config, you would use a deployment server to push the config out to the systems, but since you can't (shouldn't) do this with a cluster, that is what the warning is about.

First, I would suggest you are better off not using your indexers as HEC collectors; I would install some heavy forwarders to do the job instead. However, if that is not an option:

If you do need to use the indexers, you can still use the peers to receive HEC events, but you will need to either:
a) manually configure HEC on each peer and share the token, or
b) use the cluster master's master-apps to push a preconfigured app with your HEC config (a sketch follows below).
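
A sketch of option b, with a hypothetical app name and token (master-apps is the standard staging location on the master node):

  # On the master: $SPLUNK_HOME/etc/master-apps/hec_inputs/local/inputs.conf
  [http]
  disabled = 0
  port = 8088

  [http://shared_token]
  token = 11111111-2222-3333-4444-555555555555
  index = main

Then push the bundle out to the peers:

  $SPLUNK_HOME/bin/splunk apply cluster-bundle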

If my comment helps, please give it a thumbs up!

sim_tcr
Communicator

I enabled HEC manually on each indexer.
When there are multiple indexers behind a load balancer, we will be specifying splunk-url=<our-load-balancer-url>.
But what about --log-opt splunk-token? How can we specify all the indexers' tokens there?


sloshburch
Splunk Employee

Just so you are aware, putting multiple indexers behind a load balancer is a very bad practice: forwarders should send to indexers using Splunk's native load-balancing features.
When using HEC, you will be more successful putting the load balancer in front of multiple HEC heavy forwarders, whose config you can control centrally (via the deployment server). Since you configured each indexer individually, they are each using a different token. But remember that clustered indexers need to be managed by a master node, not the deployment server.
For a more active architecture conversation, you should check with your account team.
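
To illustrate the native load balancing, a minimal outputs.conf sketch for a forwarder (the hostnames and frequency value are hypothetical):

  [tcpout:indexer_cluster]
  server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
  # switch to the next indexer in the list roughly every 30 seconds
  autoLBFrequency = 30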


nickhills
Ultra Champion

The easiest way is to modify local/inputs.conf. You probably already have a token on each indexer from the HEC input you configured via the UI.
Simply replace that token with the one you want to use across the board.
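
For reference, tokens created through the UI typically live in the splunk_httpinput app, so the edit is a one-liner per indexer (the GUID here is a placeholder):

  # $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
  [http://my_token_name]
  # set the same value on every indexer
  token = 11111111-2222-3333-4444-555555555555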

If my comment helps, please give it a thumbs up!

sloshburch
Splunk Employee

Agreed that you should definitely avoid making the indexers HEC receivers. Go with a HF and consider it the start of your data collection tier, where you can push/pull data of all types.
