On my development Splunk v6.2.6 instance I developed a new custom app called auditing, with one index, and it deploys successfully alongside an existing app called logging, which also has one index.
But when I deploy my new auditing app to my testing splunk v6.2.6, after a while both apps' indexes stop indexing. If I remove the new auditing app from splunk, the old logging index resumes logging events from the point it left off and works as expected. Moving the auditing app back into splunk stops the indexing for both again.
I've gone through Splunk Answers to eliminate issues like missing source files, bad inputs.conf stanzas, etc., but so far no luck. I also started Splunk with --debug but cannot tell from splunkd.log what is breaking the indexing.
Are there any other settings or specific debug messages I should look for?
Thanks
Thanks. All indexing stopped when I moved the new app back into Splunk, for both external and internal indexes. After more troubleshooting I determined the root cause: my new auditing app had a forwarding stanza in /opt/splunk/etc/apps/auditing/local/outputs.conf for forwarding to another Splunk instance, which is exactly what I need this app to do.
When I removed that stanza indexing resumed for all indexes.
But I need the old logging app to index only locally and the new auditing app to forward only. I tried all the documented settings for selective forwarding but could not get them to work.
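For reference, the selective-forwarding setup I was attempting followed the routing docs, roughly like this (the monitor path and group name here are placeholders, not my real config):

/opt/splunk/etc/apps/auditing/local/inputs.conf:
[monitor:///var/log/audit/audit.log]
index = auditing
_TCP_ROUTING = cef-group

/opt/splunk/etc/apps/auditing/local/outputs.conf:
[tcpout]
# no defaultGroup, so in theory only inputs with _TCP_ROUTING should forward
[tcpout:cef-group]
server = cefserver:51515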
So I got this to work instead:
1) adding indexAndForward = true to my new auditing app's outputs.conf here:
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = true
[tcpout:default-autolb-group]
server = cefserver
[tcpout-server://cefserver:51515]
2) appending isReadOnly = true to my new auditing app's index stanza in /opt/splunk/etc/apps/auditing/local/indexes.conf
3) On the receiving splunk, which does not have either of these apps, I created a new read-only index (isReadOnly = true) named 'logging' to match the old logging app's index of the same name. I had to do this after I got forwarding working, because the receiving splunk started throwing new warnings about "received event for unconfigured/disabled/deleted index='logging'".
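For completeness, the indexes.conf entries for steps 2 and 3 ended up looking roughly like this (I'm calling the app's index 'auditing' here, and the paths are illustrative, typed from memory):

/opt/splunk/etc/apps/auditing/local/indexes.conf:
[auditing]
homePath = $SPLUNK_DB/auditing/db
coldPath = $SPLUNK_DB/auditing/colddb
thawedPath = $SPLUNK_DB/auditing/thaweddb
isReadOnly = true

On the receiving splunk's indexes.conf:
[logging]
homePath = $SPLUNK_DB/logging/db
coldPath = $SPLUNK_DB/logging/colddb
thawedPath = $SPLUNK_DB/logging/thaweddb
isReadOnly = true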
You probably have a "broken" stanza somewhere, something like this:
sourcetypeName]
Or this:
[stanzaName
Check all the .conf files in your new app.
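If it helps, btool's validation mode can usually flag a stanza it cannot parse; something like this, run on the Splunk host:

/opt/splunk/bin/splunk btool check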
I believe you will find the events that are "missing" from both indexes in the _internal index or in another one of your indexes.
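A quick search like this should show where the events are actually landing:

index=* OR index=_* | stats count by index, sourcetype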
Just discovered that when I enable forwarding in the UI under the auditing app URL .../manager/auditing/data/outputs/tcp/server, pointing at cefhost:51515 on another Splunk instance, all indexing on the testing splunk stops, including _internal. All the missing events start getting indexed on the target cefserver splunk, including _internal.
If I delete the new forwarding entry in /opt/splunk/etc/system/local/outputs.conf (not sure why the entry is not created in .../etc/apps/auditing/local/outputs.conf) and restart splunk, all indexing resumes on the testing splunk.
Not sure what I have misconfigured. Here are some conf files in case they help (there may be typos, since these are retyped):
/opt/splunk/etc/system/local/outputs.conf (commenting out this file and restarting restored indexing):
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = cefserver
[tcpout-server://cefserver:51515]
/opt/splunk/etc/apps/auditing/local/outputs.conf (no file)
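If I later need this forwarding config scoped to the app instead of system/local, I assume I can just create that file by hand with the same stanzas, e.g.:

/opt/splunk/etc/apps/auditing/local/outputs.conf:
[tcpout]
defaultGroup = default-autolb-group
[tcpout:default-autolb-group]
server = cefserver
[tcpout-server://cefserver:51515]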