Installation

Getting error "JournalSliceDirectory: Cannot seek to 0" and error #2032 after upgrading to Splunk 6.3.

iKate
Builder

Hi Splunkers!

I've just watched the .conf2015 keynote and, full of inspiration, we upgraded our Splunk Enterprise.
But right after that, some of our advanced XML dashboards started showing a red-label error:

JournalSliceDirectory: Cannot seek to 0 

Several charts display Error #2032.

And others show "No results found" for searches that returned results before the upgrade.

Can we solve it?

Thanks in advance.

P.S. I cannot create a new tag "splunk6.3" as I have no permissions for this, nice(

2015-09-24 16:42:36,335 ERROR [5603fdcc467fe464055750] proxy:190 - [HTTP 400] Bad Request; [{'text': 'JournalSliceDirectory: Cannot seek to 0', 'code': None, 'type': 'FATAL'}]
Traceback (most recent call last):
  File "/opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/proxy.py", line 168, in index
    timeout=simpleRequestTimeout
  File "/opt/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 565, in simpleRequest
    raise splunk.BadRequest, (None, serverResponse.messages)
BadRequest: [HTTP 400] Bad Request; [{'text': 'JournalSliceDirectory: Cannot seek to 0', 'code': None, 'type': 'FATAL'}]

Update:
Finally, we've upgraded Splunk to version 6.3.3. We still had the same error, but I used a workaround found in one of the answers: data from some indexes could not be retrieved for "All time", yet searches still worked when a smaller custom time range was selected. So I just added earliest=... to the affected searches.
And luckily we haven't seen any other problems with the new version so far.
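
For illustration, the workaround amounted to pinning an explicit earliest time in each broken panel's search. The index, sourcetype, and span below are hypothetical placeholders, not our real searches:

index=web sourcetype=access_combined earliest=-30d@d latest=now
| timechart span=1d count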

1 Solution

twollenslegel_s
Splunk Employee

I know it won't help anymore, but for reference:

If you are having this issue, you may have had a crash or an unclean shutdown and need to repair buckets.

Please take a look at this wiki:
https://wiki.splunk.com/Community:PostCrashFsckRepair

"splunk fsck --all" should show you what buckets are bad, you can either remove them, or try to repair the bucket

Useful options are --include-hots, --log-to--splunkd-log, and --ignore-read-error.

USAGE

Supported modes are: scan, repair, clear-bloomfilter, check-integrity, generate-hash-files

<bucketSelector> := --one-bucket|--all-buckets-one-index|--all-buckets-all-indexes
    [--index-name=<name>] [--bucket-name=<name>] [--bucket-path=<path>]
    [--include-hots]
    [--local-id=<id>] [--origin-guid=<guid>]
    [--min-ET=<time>] [--max-LT=<time>]

<generalSwitches> := [--try-warm-then-cold] [--log-to--splunkd-log] [--debug] [--v]

fsck repair <bucketSelector> <generalSwitches> [--bloomfilter-only]
    [--backfill-always|--backfill-never] [--bloomfilter-output-path=<path>]
    [--raw-size-only] [--metadata] [--ignore-read-error]

fsck scan <bucketSelector> <generalSwitches> [--metadata] [--check-bloomfilter-presence-always]

fsck clear-bloomfilter <bucketSelector> <generalSwitches>

fsck check-integrity <bucketSelector> <generalSwitches>
fsck generate-hash-files <bucketSelector> <generalSwitches>

fsck check-rawdata-format <bucketSelector>

fsck minify-tsidx --one-bucket --bucket-path=<path> --dont-update-manifest|--home-path=<path>

Notes:
The mode verb 'make-searchable' is a synonym for 'repair'.
The mode 'check-integrity' will verify data integrity for buckets created with the integrity-check feature enabled.
The mode 'generate-hash-files' will create or update bucket-level hashes for buckets which were generated with the integrity-check feature enabled.
The mode 'check-rawdata-format' verifies that the journal format is intact for the selected index buckets (the journal is stored in a valid gzip container and has valid journal structure).
Flag --log-to--splunkd-log is intended for calls from within splunkd.
If neither --backfill-always nor --backfill-never are given, backfill decisions will be made per indexes.conf 'maxBloomBackfillBucketAge' and 'createBloomfilter' parameters.
Values of 'homePath' and 'coldPath' will always be read from config; if config is not available, use --one-bucket and --bucket-path but not --index-name.
All constraints supplied are implicitly ANDed.
Flag --metadata is only applicable when migrating from 4.2 release.
If giving --include-hots, please recall that hot buckets have no bloomfilters.
Not all argument combinations are valid.
If --help found in any argument position, prints this message & quits.
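
To make this concrete, a typical post-crash check could look like the following. This is only a sketch; the install path, index name, and bucket directory name are illustrative, not taken from this thread:

# Scan all buckets in all indexes and report damaged ones
/opt/splunk/bin/splunk fsck scan --all-buckets-all-indexes

# Or limit the scan to one index
/opt/splunk/bin/splunk fsck scan --all-buckets-one-index --index-name=main

# Try to repair a single damaged bucket reported by the scan
/opt/splunk/bin/splunk fsck repair --one-bucket --bucket-path=/opt/splunk/var/lib/splunk/defaultdb/db/db_1443100000_1443000000_42

Depending on the mode, you may need to stop splunkd first; the wiki linked above walks through when that is required.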


iKate
Builder

Thanks for answering! Indeed, these actions could have shed light on the problem at the time.


the_wolverine
Champion

Saw this error in 6.3. We didn't contact Splunk, but we found bucket issues while someone was running an expensive search; deleting the bad buckets made the error go away. Still wish Splunk would respond to the issue and confirm.


rozmar564
Explorer

We are now on 6.4 and the above-mentioned errors went away. It might have been a combination of expired data and a few reboots on a few indexers. Bottom line: they went away without us finding the cause ;(


selim
Path Finder

Hi, did you ever get a response regarding this problem? I'm facing the same problem with data that I put in the thaweddb folder. I was able to search it just fine with 6.2.5, but I receive this error message for some of the old data.

Also, I noticed that this issue occurred when I set earliest to @y, but when I put in 01/01/2015 (the beginning of the year) it seems to work. So far I'm seeing inconsistencies, and I suspect this may be due to background acceleration/summary-indexing processes. However, I really do not have a clue at this point 🙂
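
One way to narrow this down is to list the buckets that cover the affected time range and check whether the failures line up with thawed buckets. A sketch (the index name is a placeholder):

| dbinspect index=main
| eval startTime=strftime(startEpoch, "%Y-%m-%d %H:%M:%S"), endTime=strftime(endEpoch, "%Y-%m-%d %H:%M:%S")
| table bucketId state startTime endTime path

If a bucket spanning the failing time range shows an unexpected state or path, it is a candidate for the fsck repair described in the accepted answer.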


iKate
Builder

At that time we got no response, so we moved back to the previous version. Next week we'll try to upgrade again, this time to 6.3.3. Did you solve the problem?


sgundeti
Path Finder

Any updates on this issue, please?


iKate
Builder

Just what is written in the Update part of the question.


rozmar564
Explorer

I am experiencing the same issue and even posted a question about it: https://answers.splunk.com/answers/326929/errors-during-search-missing-data-after-upgrade-to.html

Seems like nobody knows what is going on. Thankfully we have a license that comes with Splunk support, so once they answer me, I will post the solution.


tjj9309
Engager

I am getting this error as well...
