Getting Data In

Why am I unable to see the resource usage of my indexers via the DMC?

tattersp
Explorer

I am running 6.3.1 on my search head and 6.3.1 on my 3 indexers. On the resource usage per instance view I can see the historic values of my search head's "Median Physical Memory Usage by Process Class", but when I select any of the indexers, no data is available.

On the indexers I can see the file /opt/splunk/var/log/introspection/resource_usage.log, but even though the directory is selected for consumption under the files and directories data inputs, only kvstore.log is present in the _introspection index.

In the _internal index I can see an entry when I disable and then re-enable these inputs, as follows:

10-24-2016 14:29:29.024 +0100 ERROR TailReader - Ignoring path="/opt/splunk/var/log/introspection/resource_usage.log" due to: Invalid indexed extractions configuration - see prior error messages

the0duke0
Path Finder

I was seeing the same behavior. Neither resource_usage.log nor disk_objects.log was being collected, so many DMC panels were blank (the DMC can be tricky that way). I found that somehow we had two invalid props.conf entries in our _cluster app. To find this:

1. In splunkd.log, look for the log entry that precedes the TailReader error you are seeing above. In our case we saw:

01-20-2017 13:04:38.063 -0500 ERROR IndexedExtractionsConfig - Invalid value='' for parameter='INDEXED_EXTRACTIONS'
01-20-2017 13:04:38.063 -0500 ERROR TailReader - Ignoring path="C:\Program Files\Splunk\var\log\introspection\resource_usage.log.3" due to: Invalid indexed extractions configuration - see prior error messages

Notice that the invalid parameter is INDEXED_EXTRACTIONS and that it claims the value is ''.
2. On one of your peer indexers, run the following command. You may want to redirect the output to a file to make it easier to review; a short grep sketch is included after this list.

splunk btool props list --debug

3. In the output from above, find the line where the invalid parameter is set. In your case, as in ours, it should be under the [splunk_resource_usage] and [splunk_disk_objects] stanzas.
4. Once you find the line, the debug output should point you to the config file that contains the bad value. In our case this was etc\slave-apps\_cluster\local\props.conf, so we fixed the copy of this file in master-apps on the cluster master and redistributed the cluster bundle (sketched after this list). For some reason we had the following four lines in that file:

[splunk_disk_objects]
INDEXED_EXTRACTIONS =
[splunk_resource_usage]
INDEXED_EXTRACTIONS =

We removed all four lines.
5. The change did not cause the indexers to restart, so I had to manually run a rolling restart:

splunk rolling-restart cluster-peers

Once the peers restarted, all of the panels that rely on these logs started rendering data again. A quick verification search is sketched below.
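
To make steps 2 and 3 easier on a busy system, you can filter the btool dump down to just the relevant stanzas. A rough sketch for a Linux peer, assuming $SPLUNK_HOME is /opt/splunk (adjust the paths for Windows peers):

# Dump the merged props.conf, annotated with the file each setting comes from,
# then pull out the introspection stanzas and any INDEXED_EXTRACTIONS lines.
/opt/splunk/bin/splunk btool props list --debug > /tmp/props_debug.txt
grep -E 'splunk_resource_usage|splunk_disk_objects|INDEXED_EXTRACTIONS' /tmp/props_debug.txt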
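
For step 4, the edit happens on the cluster master and is then pushed out to the peers. A minimal sketch, assuming a Linux cluster master with $SPLUNK_HOME at /opt/splunk and the bad settings living in the _cluster app as they did for us:

# Remove the empty INDEXED_EXTRACTIONS lines (and their stanza headers, if nothing
# else remains under them) from the master-apps copy of the file.
vi /opt/splunk/etc/master-apps/_cluster/local/props.conf

# Check the bundle, then push it to the peers.
/opt/splunk/bin/splunk validate cluster-bundle
/opt/splunk/bin/splunk apply cluster-bundle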
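
After the rolling restart in step 5, a quick way to confirm the data is flowing again is to search _introspection from the DMC search head. A sketch via the CLI; the sourcetype names here match the stanzas from step 3, so adjust them and the time range if yours differ:

# Count fresh introspection events per indexer over the last hour.
/opt/splunk/bin/splunk search 'index=_introspection (sourcetype=splunk_resource_usage OR sourcetype=splunk_disk_objects) earliest=-1h | stats count by host, sourcetype'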

Hope this helps,
Patrick
