A reboot cured the issue described in the title, but that is far from an ideal fix.
Below are the lines logged in splunkd.log on the indexer before the server was rebooted:
03-11-2013 17:31:04.383 +0000 WARN AggregatorMiningProcessor - Breaking event because limit of 256 has been exceeded
03-11-2013 17:31:12.973 +0000 ERROR ExecProcessor - message from "python /opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py" Traceback (most recent call last):
03-11-2013 17:31:12.973 +0000 ERROR ExecProcessor - message from "python /opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py" File "/opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py", line 161, in
03-11-2013 17:31:12.973 +0000 ERROR ExecProcessor - message from "python /opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py" next(token)
03-11-2013 17:31:12.973 +0000 ERROR ExecProcessor - message from "python /opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py" File "/opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py", line 67, in next
03-11-2013 17:31:12.973 +0000 ERROR ExecProcessor - message from "python /opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py" en = search(token, srch='status=1')
And a snippet from after the reboot:
03-12-2013 13:47:22.800 +0000 ERROR ExecProcessor - message from "python /opt/splunk_uat/etc/apps/SplunkDeploymentMonitor-old1/bin/scripted_inputs/dm_backfill_factory.py" splunk.ResourceNotFound: [HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentMonitor/backfill/dm_backfill?sort_key=seed&search=status%3D1; [{'text': 'Application is disabled: SplunkDeploymentMonitor', 'code': None, 'type': 'ERROR'}]
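For reference, the HTTP 404 above can be reproduced by hand against the management port with a small Python snippet along these lines (this is only a sketch; the admin credentials and the certificate handling are assumptions about this environment, not taken from the logs):

import base64
import ssl
import urllib.error
import urllib.request

# Same endpoint that dm_backfill_factory.py calls, copied from the error above.
url = ("https://127.0.0.1:8089/servicesNS/nobody/SplunkDeploymentMonitor/"
       "backfill/dm_backfill?sort_key=seed&search=status%3D1")
req = urllib.request.Request(url)
# Assumed admin credentials for the management port; replace with real ones.
req.add_header("Authorization", "Basic " + base64.b64encode(b"admin:changeme").decode())
# The management port typically uses a self-signed certificate, so skip verification here.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    print(urllib.request.urlopen(req, context=ctx).read())
except urllib.error.HTTPError as e:
    # Expect HTTP 404 with "Application is disabled: SplunkDeploymentMonitor".
    print(e.code, e.read())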
Does anyone know whether the above errors can cause the indexer to slow down and the replication status to move to a 'failed' state?
Thanks