Errno 32 Broken pipe in Hydra worker log

Smile172
Explorer

Hi,

I have set up the app with a heavy forwarder, but data comes in only intermittently: sometimes it arrives, sometimes it doesn't.
I found some errors in the Hydra worker's log:
ERROR [ta_ontap_collection_worker://gamma:6012] [ProcessorPerfHandler] Problem collecting processor from server=netapp.domain : [Errno 32] Broken pipe
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_ontap/bin/ta_ontap/handlers.py", line 93, in runPerf
    time=datetime.datetime.fromtimestamp(int(results[object_name]['timestamp'])))
  File "/opt/splunk/etc/apps/SA-Hydra/bin/hydra/__init__.py", line 226, in sendData
    self.out.flush()
IOError: [Errno 32] Broken pipe
ERROR [ta_ontap_collection_worker://gamma:6012] [MegaPerfHandler] failed sub job run on sub_handler= server=netapp.domain
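
For context on the error itself: [Errno 32] (EPIPE) is raised when a process writes to a socket whose peer has already closed the connection. Below is a minimal, self-contained Python sketch of that failure mode; it is illustrative only, not the TA's actual code, and the local server here just stands in for a receiver that drops the connection:

    import socket
    import time

    # Stand-in receiver: accept one connection, then drop it immediately.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))        # bind to any free port
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    conn, _ = server.accept()
    conn.close()                         # peer goes away mid-conversation

    out = client.makefile("wb")          # file-like wrapper around the socket
    try:
        out.write(b"perf data\n")
        out.flush()                      # delivered; the closed peer answers with RST
        time.sleep(0.1)                  # give the RST time to arrive
        out.write(b"more data\n")
        out.flush()                      # this flush raises [Errno 32] Broken pipe
    except IOError as err:               # BrokenPipeError on Python 3
        print("caught:", err)

In the traceback above, self.out.flush() in sendData plays the role of the second flush: the worker's outbound connection was already gone by the time it tried to send.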

Forwarder's OS version: Ubuntu 14.04 LTS

Thank you!

1 Solution

Smile172
Explorer

halr9000, thank you for the answer!

After I disabled IPv6 on the heavy forwarder, the error messages disappeared and data now flows seamlessly.

Smile172
Explorer

Sometimes I saw data in the NetApp app, sometimes not. At the beginning the heavy forwarder had two IP addresses, one IPv4 and one IPv6. I disabled IPv6 to force binding to the IPv4 address, and the connection was restored 🙂
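
For anyone else hitting the same symptom: one common way to disable IPv6 system-wide on Ubuntu 14.04 is via sysctl. This is a sketch of the general approach, and the exact steps may differ in your environment:

    # Apply immediately:
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
    sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1

    # To persist across reboots, add the same keys to /etc/sysctl.conf:
    #   net.ipv6.conf.all.disable_ipv6 = 1
    #   net.ipv6.conf.default.disable_ipv6 = 1
    # then reload with:
    sudo sysctl -p

The effect is what the post above describes: with no IPv6 address available on the forwarder, connections can only bind to the IPv4 address.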

halr9000
Motivator

Glad that worked out. What led you to that solution? I think some more detail in this answer would be helpful to others who may come after you with the same issue. TIA

halr9000
Motivator

Can you try this on RHEL or CentOS? Those are the only supported platforms for a DCN according to the docs (http://docs.splunk.com/Documentation/NetApp/latest/DeployNetapp/Platformandhardwarerequirements#Splu...). If it's still broken there, then open a support case so that we can get your diag logs.
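
For reference, the diag mentioned here is typically generated on the affected node with the splunk CLI (the output file name and available options vary by version):

    $SPLUNK_HOME/bin/splunk diag

This writes a tar.gz of logs and configuration suitable for attaching to the support case.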
