Getting Data In

Splunk Forwarder Crashes

clete2
Path Finder

I am having an issue where the Splunk forwarder on my Linux machine crashes shortly after startup. I have been unable to run splunk fsck because I can't seem to satisfy all of its configuration requirements. For example:

cleteNAS bin # ./splunk fsck --all-buckets-all-indexes repair
stanza=default Required parameter=blockSignatureDatabase not configured
terminate called after throwing an instance of 'IndexConfigException'
  what():  stanza=default Required parameter=blockSignatureDatabase not configured
ERROR: pid 28921 terminated with signal 6

I set this parameter and I just keep getting more errors. I found an answer that talks about repairing indexes, but I don't see any of the files it mentions.

Below is my crash log. I did not include the stderr output separately because the crash log already contains the relevant lines from it.

cleteNAS splunk # cat crash-2013-11-27-13\:12\:53.log
[build 182037] 2013-11-27 13:12:53
Received fatal signal 6 (Aborted).
 Cause:
   Signal sent by PID 11662 running under UID 0.
 Crashing thread: archivereader
 Registers:
    RIP:  [0x00007F0038C22395] gsignal + 53 (/lib64/libc.so.6)
    RDI:  [0x0000000000002D8E]
    RSI:  [0x0000000000002DA4]
    RBP:  [0x00000000012995F8]
    RSP:  [0x00007F00329E7018]
    RAX:  [0x0000000000000000]
    RBX:  [0x00007F0039FDC000]
    RCX:  [0xFFFFFFFFFFFFFFFF]
    RDX:  [0x0000000000000006]
    R8:  [0xFEFEFEFEFEFEFEFF]
    R9:  [0x00007F003A02FF60]
    R10:  [0x0000000000000008]
    R11:  [0x0000000000000206]
    R12:  [0x0000000001299678]
    R13:  [0x000000000129A300]
    R14:  [0x00007F00341746A0]
    R15:  [0x00007F00348434DB]
    EFL:  [0x0000000000000206]
    TRAPNO:  [0x0000000000000000]
    ERR:  [0x0000000000000000]
    CSGSFS:  [0x0000000000000033]
    OLDMASK:  [0x0000000000000000]

 OS: Linux
 Arch: x86-64

 Backtrace:
  [0x00007F0038C22395] gsignal + 53 (/lib64/libc.so.6)
  [0x00007F0038C23865] abort + 389 (/lib64/libc.so.6)
  [0x00007F0038C1B39E] ? (/lib64/libc.so.6)
  [0x00007F0038C1B442] ? (/lib64/libc.so.6)
  [0x000000000083AA16] _ZN17ArchiveCrcChecker21seekAndComputeSeekCrcEv + 598 (splunkd)
  [0x000000000083D345] _ZN17ArchiveCrcChecker5writeEPKcm + 357 (splunkd)
  [0x0000000000AA0717] _ZN14ArchiveContext7processERK8PathnameP13ISourceWriter + 855 (splunkd)
  [0x0000000000AA0E95] _ZN14ArchiveContext9readFullyEP13ISourceWriterRb + 1221 (splunkd)
  [0x000000000083CFA2] _ZN16ArchiveProcessor20haveReadAsNonArchiveE14FileDescriptorlPK3Str + 578 (splunkd)
  [0x000000000083EE53] _ZN16ArchiveProcessor4mainEv + 2755 (splunkd)
  [0x0000000000D81A2D] _ZN6Thread8callMainEPv + 61 (splunkd)
  [0x00007F0038FA3FC7] ? (/lib64/libpthread.so.0)
  [0x00007F0038CDA4ED] clone + 109 (/lib64/libc.so.6)
 Linux / cleteNAS / 3.4.3-gentoo / #3 SMP Fri Feb 1 17:38:44 CST 2013 / x86_64
 Last few lines of stderr (may contain info on assertion failure, but also could be old):
    2013-10-12 15:56:00.794 -0500 splunkd started (build 182037)
    Dying on signal #15 (si_code=0), sent by PID 26392 (UID 0)
    2013-11-20 16:48:56.744 -0600 splunkd started (build 182037)
    splunkd: /opt/splunk/p4/splunk/branches/6.0.0/src/pipeline/input/ArchiveProcessor.cpp:1044: bool ArchiveCrcChecker::seekAndComputeSeekCrc(): Assertion `(file_offset_t)_seekPtr >= dp->curPos()' failed.
    2013-11-27 13:10:48.343 -0600 splunkd started (build 182037)
    splunkd: /opt/splunk/p4/splunk/branches/6.0.0/src/pipeline/input/ArchiveProcessor.cpp:1044: bool ArchiveCrcChecker::seekAndComputeSeekCrc(): Assertion `(file_offset_t)_seekPtr >= dp->curPos()' failed.

 /etc/gentoo-release: Gentoo Base System release 2.2
 glibc version: 2.17
 glibc release: stable
Last errno: 0
Threads running: 27
argv: [splunkd -p 8089 start]
Thread: "archivereader", did_join=0, ready_to_run=Y, main_thread=N
First 8 bytes of Thread token @0x7f0034843330:
00000000  00 87 9f 32 00 7f 00 00                           |...2....|
00000008

x86 CPUID registers:
         0: 00000006 68747541 444D4163 69746E65
         1: 00500F10 00020800 00802209 178BFBFF
         2: 00000000 00000000 00000000 00000000
         3: 00000000 00000000 00000000 00000000
         4: 00000000 00000000 00000000 00000000
         5: 00000040 00000040 00000003 00000000
         6: 00000000 00000000 00000001 00000000
  80000000: 8000001B 68747541 444D4163 69746E65
  80000001: 00500F10 00001242 000035FF 2FD3FBFF
  80000002: 20444D41 35332D45 72502030 7365636F
  80000003: 00726F73 00000000 00000000 00000000
  80000004: 00000000 00000000 00000000 00000000
  80000005: FF08FF08 FF280000 20080140 20020140
  80000006: 00000000 42004200 02008140 00000000
  80000007: 00000000 00000000 00000000 000001F9
  80000008: 00003024 00000000 00001001 00000000
  80000009: 00000000 00000000 00000000 00000000
  8000000A: 00000001 00000008 00000000 0000060F
  8000000B: 00000000 00000000 00000000 00000000
  8000000C: 00000000 00000000 00000000 00000000
  8000000D: 00000000 00000000 00000000 00000000
  8000000E: 00000000 00000000 00000000 00000000
  8000000F: 00000000 00000000 00000000 00000000
  80000010: 00000000 00000000 00000000 00000000
  80000011: 00000000 00000000 00000000 00000000
  80000012: 00000000 00000000 00000000 00000000
  80000013: 00000000 00000000 00000000 00000000
  80000014: 00000000 00000000 00000000 00000000
  80000015: 00000000 00000000 00000000 00000000
  80000016: 00000000 00000000 00000000 00000000
  80000017: 00000000 00000000 00000000 00000000
  80000018: 00000000 00000000 00000000 00000000
  80000019: 00000000 00000000 00000000 00000000
  8000001A: 00000000 00000000 00000000 00000000
  8000001B: 000000FF 00000000 00000000 00000000
terminating...
1 Solution

yannK
Splunk Employee

Forwarders do not have indexes (except the fishbucket), so the bucket repair will fail.

The component that is crashing is the "archivereader", which processes compressed log files.
Verify that you have enough memory available to uncompress them.
To isolate the issue, disable your inputs and re-enable them one by one.
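
Disabling inputs one at a time can be done per stanza in inputs.conf. A minimal sketch (the monitored path below is a placeholder, not from this thread):

```ini
# Set disabled = true on one stanza at a time, restart the forwarder,
# and watch whether the archivereader assertion still fires. The input
# that reproduces the crash points at the offending file.
[monitor:///var/log/some_app]
disabled = true
```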


clete2
Path Finder

I have enough memory to do so. I reinstalled the forwarder and copied my inputs.conf back into the search app so that my setup stayed identical. It is now working.

I just wish I did not have to reinstall.

Thanks for the information about the indexes. I am new to Splunk and am experimenting with a basic setup.


lukejadamec
Super Champion

Running splunk fsck won't do anything on a lightweight forwarder, because it is used to repair indexes, which don't exist on a lightweight forwarder.
Is this a new install? Have you tried reinstalling the forwarder?

clete2
Path Finder

It is a lightweight forwarder. The only setup I performed was to add files to monitor and a server to forward to.
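
For reference, that kind of minimal forwarder setup usually amounts to two small config files. A sketch only; the monitored path and the indexer address below are placeholders, not details from this thread:

```ini
# inputs.conf -- what to monitor (path is a placeholder)
[monitor:///var/log/messages]
index = main

# outputs.conf -- where to forward (host is a placeholder;
# 9997 is the conventional Splunk receiving port)
[tcpout]
defaultGroup = primary

[tcpout:primary]
server = splunk-server.example.com:9997
```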

There are no errors when it starts up and forks into the background.
I see this message immediately in the stderr log:
2013-11-28 09:43:51.111 -0600 splunkd started (build 182037)

I notice that my Splunk server is reporting lines received. Then, it crashes ~1 minute later:

splunkd: /opt/splunk/p4/splunk/branches/6.0.0/src/pipeline/input/ArchiveProcessor.cpp:1044: bool ArchiveCrcChecker::seekAndComputeSeekCrc(): Assertion `(file_offset_t)_seekPtr >= dp->curPos()' failed.


lukejadamec
Super Champion

When you start the forwarder from the command prompt with
splunk/bin/splunk restart
what errors do you get?
Is this a heavy forwarder that is indexing data?
