Security

Access Control for ports?

tomoyagoto
Explorer

I've got a relatively noob question.
If you have a Splunk indexer set up, can't anybody with IP reachability set up tons of forwarders pointing at the default indexer port (9997) and DDoS the indexer?
What is the best way to prevent that?

I'm new to Splunk and have to design a large-scale Splunk deployment.
Thank you in advance 🙂

------------ added -----------------
I'm expecting several tenants (each with its own forwarders),
so I need to be able to distinguish between those tenants.

tenantA must not be able to send logs to the index set up for tenantB, and vice versa.

dwaddle
SplunkTrust

One approach you could look at is a series of Heavy forwarders configured as intermediate forwarders. Basically, each tenant gets two heavy forwarder instances dedicated to them. Those intermediate heavy forwarders offload parsing from your indexers and can do simple parsing and access control steps.

One access control step at the intermediate would be a different SSL certificate chain per tenant -- making it difficult for anyone but tenantA to fully establish a connection to tenantA's intermediate. This may still not help with denial of service proper, because regardless of the SSL configuration, a few rogue clients can establish more than enough connections to cripple it. (These connections would not necessarily even have to complete the SSL handshake if they were coming in sufficiently fast.) You will have to look beyond Splunk for airtight denial-of-service mitigation -- and even then the attacker will almost always have the advantage.

Another access control step at the intermediate would be a props.conf / transforms.conf rule forcing all data coming through that forwarder to be routed to a specific index. So, if by SSL configuration, only tenantA systems have the proper certificate to connect to the tenantA intermediate forwarder and the tenantA intermediate forces everything into the tenantA index ... you have a reasonably strong guarantee.
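
As a rough sketch of that second step (the index name, stanza names, and file placement here are hypothetical, and exact behavior can vary by Splunk version), the tenantA intermediate forwarder could carry something like this:

   # props.conf on the tenantA intermediate forwarder
   [default]
   TRANSFORMS-tenant = force_index_tenantA

   # transforms.conf on the same instance -- rewrite the index on every event
   [force_index_tenantA]
   REGEX = .
   DEST_KEY = _MetaData:Index
   FORMAT = indexTenantA

With that in place, whatever index a tenantA forwarder requests in its own inputs.conf, the intermediate rewrites it to indexTenantA before the data ever reaches the indexers.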

Since you are "new to Splunk and have to design a large-scale Splunk deployment" -- especially a multi-tenant one -- I'd suggest you get your proposed configuration checked out and validated by Splunk Professional Services or a Splunk partner. This is no substitute for doing your own homework and being prepared, but they can help you avoid traps they've seen before.

tomoyagoto
Explorer

Thank you, dwaddle.

Building intermediate forwarders was a completely new idea to me.
I'm going to try it.

So the key points are:
- build an intermediate forwarder dedicated to each tenant
- use SSL certificates and/or props.conf & transforms.conf to keep rogue clients out

I'm also considering support from a Splunk partner...
Thank you!

Ayn
Legend

One way to limit which systems can send logs to your indexer is to require client certificates in the SSL session that is set up between the forwarder and the indexer. That way, only clients holding the correct client certificates will be able to connect.
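
A minimal sketch of what that could look like, assuming the certificates already exist under $SPLUNK_HOME/etc/auth (the file names, password placeholders, and port are illustrative, and setting names vary somewhat across Splunk versions):

   # inputs.conf on the indexer
   [splunktcp-ssl:9997]

   [SSL]
   serverCert = $SPLUNK_HOME/etc/auth/server.pem
   sslPassword = <server certificate password>
   requireClientCert = true

   # outputs.conf on each authorized forwarder
   [tcpout:indexers]
   server = indexer.example.com:9997
   clientCert = $SPLUNK_HOME/etc/auth/forwarder.pem
   sslPassword = <client certificate password>

A forwarder that cannot present a certificate the indexer trusts fails the SSL handshake and never gets to deliver data.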

Ayn
Legend

As far as I know, there is no way to implement access control in such a granular way. It's pretty much an all-or-nothing thing. You could probably achieve some of it by writing nullQueue transforms on your indexer ("if host matches this and index matches this, then carry on, otherwise drop it"), but as far as SSL certificate checking goes, that can only control whether a client can perform the initial connect or not.
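
A rough sketch of that kind of rule on the indexer, assuming tenantA hosts follow a naming convention like tenantA-* (the stanza names and host pattern are hypothetical):

   # props.conf -- apply both transforms, in order, to all incoming data
   [default]
   TRANSFORMS-tenantfilter = drop_everything, keep_tenantA_hosts

   # transforms.conf
   [drop_everything]
   REGEX = .
   DEST_KEY = queue
   FORMAT = nullQueue

   [keep_tenantA_hosts]
   SOURCE_KEY = MetaData:Host
   REGEX = tenantA-
   DEST_KEY = queue
   FORMAT = indexQueue

The transforms run in the listed order: everything is first routed to the nullQueue, and only events whose host matches the tenantA pattern are put back onto the indexQueue.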

tomoyagoto
Explorer

Thank you, Ayn.
SSL could make my day. I have one more question regarding SSL certificates.

My original question wasn't specific enough.
I'm expecting a lot of forwarders -- actually, tenants.
Can the SSL certificate method distinguish tenants?

I'm expecting something like this:

  • there are 2 tenants.
  • tenantA can send logs to tcp:9998 and index "indexTenantA"
  • tenantB can send logs to tcp:9998 and index "indexTenantB"
  • tenantA can't send logs to indexTenantB even by editing inputs.conf
  • tenantB can't send logs to indexTenantA even by...

jbsplunk
Splunk Employee

Well, per inputs.conf you can configure things in this manner:

   [splunktcp://[remote server]:port]
   * This input stanza is used with Splunk instances receiving data from forwarders ("receivers"). See the topic
     http://docs.splunk.com/Documentation/Splunk/latest/deploy/Aboutforwardingandreceivingdata for more information.
   * This is the same as TCP, except the remote server is assumed to be a Splunk instance, most likely a forwarder.
   * 'remote server' is optional. If specified, will only listen for data from 'remote server'.
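
For example, to accept forwarder traffic only from one known host (10.0.0.5 is a hypothetical forwarder address), the indexer's inputs.conf could contain:

   # inputs.conf on the indexer -- one stanza per allowed forwarder
   [splunktcp://10.0.0.5:9997]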

If you have a lot of forwarders, that might not be the best solution. My first instinct would actually be to use a local firewall to accept connections on tcp/9997 from only known forwarders and keep security separate from Splunk.

tomoyagoto
Explorer

Thanks, jbsplunk!

I thought about a firewall too,
but that would be more about controlling access at the indexer itself.
Also, I'm expecting a lot of forwarders, so as you said, firewalling may not be the best solution.

Thank you!

jbsplunk
Splunk Employee

Just noticed that myself; I've updated the answer to reflect splunktcp, which functions in the same manner.

Ayn
Legend

This is for raw TCP inputs though, not splunktcp.
