Getting Data In

How do you know if a forwarder isn't forwarding?

mfalk
Engager

What's a best-practice way to determine whether a forwarder isn't forwarding?

We have a setup of about 100 hosts all forwarding to a single indexer. How can I be sure that one of the forwarders hasn't stopped forwarding for some reason? I can think of a couple of options, like running a saved search and checking whether the event count is 0.

What is everyone else doing?
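
For the saved-search option, here is a minimal sketch of an alert query; the index, the host placeholder, and the 60-minute window are illustrative, not from any particular setup:

index=* host=#HostICareAbout# earliest=-60m
| stats count
| where count=0

Scheduled as an alert, this returns a single row (count=0) only when the host sent nothing in the window, so the alert fires exactly when that forwarder has gone quiet.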

1 Solution

mfalk
Engager

Looks good. I'll check out the monitor app.

In the short term I've come up with:

| metadata type=hosts
| search host=#HostICareAbout# OR host=#HostICareAbout#
| eval mytime=strftime(recentTime, "%y-%m-%d %H:%M:%S")
| eval currentTime=strftime(now(), "%y-%m-%d %H:%M:%S")
| eval minutesAgo=round((now()-recentTime)/60, 0)
| table host, lastTime, recentTime, mytime, currentTime, minutesAgo
| where abs(minutesAgo) > 60

This query returns the list of hosts I care about that haven't sent any events within the last 60 minutes (the abs() is for catching hosts in other time zones that aren't configured properly). We're thinking of adding a local Splunk metric file to be monitored, so that a system that happens to have nothing to forward will still forward an entry.

We're trying to figure out a simple file to monitor that won't impact our indexing volume.
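
A minimal sketch of that heartbeat idea, assuming a Unix host; the file path, cron interval, and sourcetype below are hypothetical, not from the thread:

# cron entry on each forwarder: append one small timestamped line every 5 minutes
# (hypothetical path; rotate the file occasionally so it stays small)
*/5 * * * * /bin/date >> /var/log/splunk_heartbeat.log

# inputs.conf on the forwarder: monitor the heartbeat file
[monitor:///var/log/splunk_heartbeat.log]
sourcetype = heartbeat

At one short line every 5 minutes, 100 hosts add well under 1 MB/day of indexing volume, and every host is then guaranteed at least one recent event for the metadata search above to see.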

Damien_Dallimor
Ultra Champion

Use the Splunk Deployment Monitor App.

Refer to this other post I recently answered.
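
For reference, one common way to check forwarder connections without an app is to query the indexer's internal metrics directly. A minimal sketch, assuming the default _internal index is available and run on the indexer:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS lastConnect BY hostname
| eval minutesAgo=round((now()-lastConnect)/60, 0)
| where minutesAgo > 60

This lists forwarders (by the hostname field in the tcpin_connections metrics) that haven't connected in the last hour; the 60-minute threshold mirrors the one used above.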
