All Apps and Add-ons

Is it best practice to move Hadoop logs to HDFS when they rotate to allow them to be visible through Hunk?

alexmc
Explorer

Hadoop generates lots of logs. It struck me recently that when the logs rotate, I might just move them to HDFS and allow them to be visible through Hunk.

Is this what many people do?

I guess I should change the log4j config so that rotated log files all carry the date in their name, rather than just a numeric ".<digit>" suffix that gets reused on each rotation.
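For what it's worth, Hadoop's stock log4j.properties already ships a DailyRollingFileAppender that does this, naming each rotated file with a date suffix (e.g. hadoop.log.2014-06-01). A minimal sketch of the relevant appender settings (property values here follow the Hadoop defaults; adjust the pattern to taste):

```
# log4j.properties sketch: DailyRollingFileAppender rotates daily and
# appends the DatePattern to the filename, e.g. hadoop.log.2014-06-01
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```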

This might be bad if the Hadoop cluster itself goes down, but I hope that in that case the current (un-rotated) log files on local disk would be enough to diagnose the problem.
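The move itself could be a simple scheduled job. A hypothetical crontab fragment (the paths, HDFS target directory, and date-stamped filename pattern are my assumptions, not Hadoop defaults):

```
# crontab fragment (illustrative): shortly after midnight, push yesterday's
# rotated, date-stamped logs into HDFS where Hunk can search them.
# Note: % must be escaped as \% inside a crontab entry.
30 0 * * * hdfs dfs -put /var/log/hadoop/*.log.$(date -d yesterday +\%Y-\%m-\%d) /logs/hadoop/
```

Leaving the local copy in place until the put succeeds (or removing it only afterwards) guards against losing logs if HDFS is unavailable at that moment.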

Is this "best practice"?


rdagan_splunk
Splunk Employee

So far I am aware of only one other customer who is using Hunk to monitor Hadoop.

Customer use case = http://www.slideshare.net/Hadoop_Summit/enabling-exploratory-analytics-of-data-in-sharedservice-hado...
In addition, here is a good blog on the subject: http://blogs.splunk.com/2014/05/14/hunkonhunk/
