Getting Data In

UF config works on CentOS/RedHat, but not on Ubuntu server

grijhwani
Motivator

Looking for suggestions for the obvious that I might have overlooked as to why a UF config distributed by Deployment Server (and known to reach all endpoints) works on CentOS and RedHat, but on Ubuntu, despite the fact that the UF is definitely communicating with the target indexer (as evidenced by tcpdump), only seems to send occasional keep-alives, and no logs.

The config includes a monitor for /var/log, so despite being different platforms, there should be some activity on all of them.
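For context, the monitor stanza in question is of this general shape; the index and sourcetype below are illustrative assumptions, not the exact deployed settings:

# inputs.conf, pushed out in the Deployment Server app
[monitor:///var/log]
# recursively monitor everything under /var/log
disabled = false
# assumed destination index and sourcetype; the real app may differ
index = main
sourcetype = syslog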

I have been scratching my head over this for two weeks, now.

Later edit -

I omitted to mention that in the splunkd log I see (only once):

TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
TcpOutputProc - Connected to idx=x.x.x.x:yyyy

The indexer address and port are correct.
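For comparison, the corresponding forwarder output config would look something like this; the address and port are placeholders matching the log line above:

# outputs.conf on the forwarder
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# the idx=x.x.x.x:yyyy in the splunkd.log line should match this
server = x.x.x.x:yyyy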

1 Solution

grijhwani
Motivator

The difference between the machines is that most of the Ubuntu ones have had the default SplunkUniversalForwarder app bundle deleted. The handful of newer ones which have not are working fine. So the obvious conclusion is that something critical is missing from the tailored configurations which is present in the UF config installed by default.

Now to find out what, and determine how to fix it without being reliant on the default configuration.
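One way to pin that down is btool, which prints the merged effective configuration along with the file each setting comes from; diffing a working host (default app intact) against a broken one should reveal what the default app was contributing. Paths assume the default UF install location:

# show effective settings and the file each one comes from
/opt/splunkforwarder/bin/splunk btool outputs list --debug
/opt/splunkforwarder/bin/splunk btool inputs list --debug
/opt/splunkforwarder/bin/splunk btool limits list --debug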



martin_mueller
SplunkTrust

Are you seeing any data forwarded into _internal on your indexers?
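A search along these lines on the indexers will show whether a given forwarder is shipping even its own internal logs (<ubuntu-host> is a placeholder):

index=_internal source=*splunkd.log host=<ubuntu-host>

If that returns nothing at all, the output path itself is broken, not just the /var/log monitor.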


grijhwani
Motivator

Something has come to light. Some more machines I added yesterday ARE reporting in.


acharlieh
Influencer

Is the splunkd process for the UF running as the "root" user on all systems, or is it running as a limited "splunk" or other user on some or all? If running as a limited user, are there permission differences on /var/log and the files within? This may manifest as "Permission denied" messages in the _internal index ($SPLUNK_HOME/var/log/splunk/splunkd.log).
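A quick way to check on each host (assuming the default /opt/splunkforwarder install path):

# which user is running the forwarder?
ps -eo user,comm | grep splunkd

# can that user read the monitored directory and files?
ls -ld /var/log
ls -l /var/log/syslog /var/log/auth.log

# any permission errors in the forwarder's own log?
grep -i "permission denied" /opt/splunkforwarder/var/log/splunk/splunkd.log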


grijhwani
Motivator

This is actually one of my major bugbears about Splunk - that it assumes root privilege by default, rather than spawning a root-capable agent for parsing local system files and dropping privileges for everything else.

Nope. Running as root across the board.
