
Why do writes to KV Store fail?

wpreston
Motivator

I've been trying to write about 900k records to a KV Store using Splunk SPL, but the write only partially succeeds. Looking at search.log for the attempted input, I see the following errors:

11-19-2015 10:40:42.697 INFO  DispatchThread - Disk quota = 10485760000
11-19-2015 10:47:04.202 ERROR KVStorageProvider - An error occurred during the last operation ('saveBatchData', domain: '2', code: '4'): Failed to read 4 bytes from socket within 300000 milliseconds.
11-19-2015 10:47:04.226 ERROR KVStoreLookup - KV Store output failed with code -1 and message '[ "{ \"ErrorMessage\" : \"Failed to read 4 bytes from socket within 300000 milliseconds.\" }" ]'
11-19-2015 10:47:04.226 ERROR SearchResults - An error occurred while saving to the KV Store. Look at search.log for more information.
11-19-2015 10:47:04.226 ERROR outputcsv - An error occurred during outputlookup, managed to write 598001 rows
11-19-2015 10:47:04.226 ERROR outputcsv - Error in 'outputlookup' command: Could not append to collection 'Incident_Collection': An error occurred while saving to the KV Store. Look at search.log for more information..

I've tried writing from search results and from a .csv file via outputlookup (roughly like the sketch below), but both approaches give these errors. I've also restarted both Splunk and the server that Splunk runs on. This is a standalone indexer/search head, and there was no other search activity at the time. The mongod.log file shows no errors.
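For reference, a minimal sketch of the CSV-based attempt; incident_data.csv and incident_lookup are placeholder names here, with incident_lookup assumed to be a lookup definition pointing at the Incident_Collection collection:

    | inputlookup incident_data.csv
    | outputlookup append=true incident_lookup

The search-results variant was the same idea: a search producing ~900k rows piped into outputlookup append=true.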

Any ideas?

1 Solution

wpreston
Motivator

For anyone interested, I got this working. The disk that Splunk was writing to was extremely fragmented. I cleaned the collection with a splunk clean kvstore ... command, defragmented the disk and tried again. This time it worked like a charm.
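For reference, the general shape of that CLI sequence (the KV store can only be cleaned while Splunk is stopped, and the app/collection names below are placeholders; check splunk help clean on your version):

    ./splunk stop
    ./splunk clean kvstore -app <app_name> -collection <collection_name>
    ./splunk start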

wpreston
Motivator

I should mention that it does write some of the records to the KV store. The last time I attempted this, it wrote about 650k of the 900k records to the store. I tried reducing the number of records being input to about 150k, and it failed after about 110k records.
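In case it helps anyone hitting the same saveBatchData timeout before fixing the underlying disk issue: limits.conf has a [kvstore] stanza with batch-save settings (names per limits.conf.spec; verify they exist on your version) that control how much data goes into each save operation, e.g.:

    [kvstore]
    # Fewer documents per saveBatchData call; the spec lists 1000 as the default.
    max_documents_per_batch_save = 500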
