We are seeing these messages in splunkd.log:
07-29-2010 14:26:34.729 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/historydb/db/.metaManifest (No such file or directory)
What does it mean? Does it need to be addressed? If so, what should we do?
This is a problem with 4.1.5. As long as the file exists, and you aren't experiencing any other symptoms, the message can be ignored. It will be resolved in 4.1.6.
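Since the answer hinges on whether each index's `.metaManifest` actually exists, a quick way to check every index at once is a small shell loop. This is a sketch, not an official Splunk tool; the default path below is the one from the log messages above (`check_meta_manifest` is a name made up for this example — adjust the path for your own install).

```shell
# List index databases that are missing their .metaManifest file.
# The default path matches the one in the splunkd.log errors above;
# pass your own SPLUNK_DB location as the first argument if it differs.
check_meta_manifest() {
  splunk_db="${1:-/u01/app/splunk/var/lib/splunk}"
  for db in "$splunk_db"/*/db; do
    [ -d "$db" ] || continue          # skip if the glob matched nothing
    if [ ! -f "$db/.metaManifest" ]; then
      echo "missing: $db/.metaManifest"
    fi
  done
}
```

Calling `check_meta_manifest` with no argument checks the path seen in the errors; if it prints nothing, every index directory has its manifest and the 4.1.5 log noise can be ignored per the answer above.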
# 1 7/29/10 3:22:01.171 PM
07-29-2010 15:22:01.171 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/os/db/.metaManifest (No such file or directory)
* host=plaxslcs001.apollogrp.edu
* sourcetype=splunkd
* source=/u01/app/splunk/var/log/splunk/splunkd.log
# 2 7/29/10 2:26:34.805 PM
07-29-2010 14:26:34.805 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/summarydb/db/.metaManifest (No such file or directory)
# 3 7/29/10 2:26:34.780 PM
07-29-2010 14:26:34.780 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/sample/db/.metaManifest (No such file or directory)
# 4 7/29/10 2:26:34.753 PM
07-29-2010 14:26:34.753 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/defaultdb/db/.metaManifest (No such file or directory)
# 5 7/29/10 2:26:34.729 PM
07-29-2010 14:26:34.729 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/historydb/db/.metaManifest (No such file or directory)
# 6 7/29/10 2:26:34.704 PM
07-29-2010 14:26:34.704 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/fishbucket/db/.metaManifest (No such file or directory)
# 7 7/29/10 2:26:34.679 PM
07-29-2010 14:26:34.679 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/_internaldb/db/.metaManifest (No such file or directory)
# 8 7/29/10 2:26:34.654 PM
07-29-2010 14:26:34.654 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/blockSignature/db/.metaManifest (No such file or directory)
# 9 7/29/10 2:26:34.629 PM
07-29-2010 14:26:34.629 ERROR databasePartitionPolicy - unable to open file: /u01/app/splunk/var/lib/splunk/audit/db/.metaManifest (No such file or directory)

(All events share the same host, sourcetype, and source as event 1.)
This is happening during normal operations. We are importing a ton of data into Splunk; so far we have indexed about 5 billion events, and for some reason the index is now choking. I can see that it processed files from a specific date, yet no data shows up when I search that date or date range. I think the two problems are correlated.
If you can't get data from your index, then I would suggest noting that in the title of your question. You may get a faster response.
Also, do you see this for any indexes other than "historydb" (which I don't think is used anymore)?
Did this happen when starting up splunkd, or just in the middle of normal operations?
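One way to answer the startup-vs-runtime question is to pull both the splunkd startup banner and the partition-policy errors out of splunkd.log and look at their relative timestamps. A sketch, with caveats: `show_startup_errors` is a name invented here, and the exact text of the startup banner varies by Splunk version, so adjust the pattern to match what your splunkd.log actually prints.

```shell
# Print startup banners and databasePartitionPolicy errors in log order,
# so you can see whether the errors appear only right after splunkd starts.
# Usage: show_startup_errors /u01/app/splunk/var/log/splunk/splunkd.log
show_startup_errors() {
  grep -E 'Splunkd starting|databasePartitionPolicy' "$1"
}
```

If every error line sits immediately after a startup banner, the messages are a startup artifact; if they also appear mid-run, something else is going on.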