This morning I opened a dashboard and was greeted with "results not found."
I thought this was peculiar, so I started doing some digging and found that the server I was forwarding from had this in its log file:
08-10-2011 09:10:49.476 -0400 INFO TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
08-10-2011 09:10:53.799 -0400 INFO BatchReader - Could not send data to output queue (parsingQueue), retrying...
So I began poking around and could not figure out what was happening.
Finally, I started to suspect the receiver on the indexer, so I tried to hit the listening port:
jgauthier$ telnet 192.168.74.45 9997
Trying 192.168.74.45...
Nothing. I tried from another system: still nothing. It wasn't a "Connection refused", the port simply wasn't accepting connections at all. I restarted splunkd, and everything started working.
I could not find anything useful in the log files on the Splunk indexer, because the problem started a few days ago and the relevant entries have since rolled out of the logs.
Any suggestions?
I had a similar problem where the issue was that the Splunk server was running into its 1024 open file limit. I edited /etc/security/limits.conf to allow a soft limit of 2048 and a hard limit of 4096 on "nofile" and restarted. Check with ulimit -a whether the new setting has actually taken effect.
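For reference, the limits.conf entries look roughly like this, assuming splunkd runs as a dedicated "splunk" user (substitute whatever account your instance actually runs under):

# /etc/security/limits.conf -- raise the open-file limit for the splunkd user
splunk    soft    nofile    2048
splunk    hard    nofile    4096

PAM applies limits.conf at login, so the new values only take effect for sessions started afterwards. Logged in as that user, ulimit -n should now report 2048; on Linux you can also inspect the limits of the already-running process with cat /proc/<splunkd pid>/limits.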
Obviously, this only applies if your receiving Splunk server is a Linux server.
Yup. My Splunk server is Windows (latest version of both).