In the system/local directory, the configuration is as below:
[monitor://{Log Location}]
sourcetype = test
index = chilqa
disabled = false
But it is surprising that data is still being sent to the main index.
Is there any other configuration location that could be overriding the index and routing the data to main?
Thanks.
Vikram.
Hi vikram_m,
run /opt/splunk/bin/splunk cmd btool inputs list --debug > inputs_list.txt
on your forwarder or target server.
This way you get a list of all configured inputs and can check whether any other configuration contains the same monitor stanza.
Bye.
Giuseppe
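Once the btool output is saved, a quick grep can show whether the same monitor path appears in more than one stanza and which index each one uses. The file contents below are fabricated purely to illustrate; the real inputs_list.txt would come from the btool command above.

```shell
# Create a sample btool output file (fabricated content, for illustration only)
cat > /tmp/inputs_list.txt <<'EOF'
[monitor:///var/log/app/app.log]
index = chilqa
sourcetype = test
[monitor:///var/log/app]
index = default
EOF

# List every monitor stanza together with its index, to spot duplicates/overlaps
grep -E '^\[monitor|^index' /tmp/inputs_list.txt
```

If two stanzas cover overlapping paths with different index values, that alone can explain data landing in an unexpected index.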
You need to send this to the forwarding server and restart the Splunk instance there. Then search only for events that were forwarded and indexed AFTER the point the forwarder was restarted (old events will obviously remain in main). If new events still go into main, then you must not have the index chilqa defined in indexes.conf (or you have not deployed it to your indexer tier, or have not restarted the Splunk instances there), and you have a last-chance index defined as main, which is why the data is ending up there.
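As a rough sketch, the index has to exist on the indexer tier in indexes.conf for events to land in it; the paths below are illustrative defaults, and the last-chance setting (available in newer Splunk versions) is what silently redirects events for undefined indexes:

```ini
# indexes.conf on the indexer (illustrative sketch; paths are example defaults)
# Global setting, newer Splunk versions only:
# lastChanceIndex = main

[chilqa]
homePath   = $SPLUNK_DB/chilqa/db
coldPath   = $SPLUNK_DB/chilqa/colddb
thawedPath = $SPLUNK_DB/chilqa/thaweddb
```

After deploying such a stanza, the indexers need a restart (or a rolling restart on a cluster) before the index is usable.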
Very likely you didn't configure the index 'chilqa' on your indexer. Take a look at splunkd.log on your indexer and you might find a message like this: Received event for unconfigured/disabled/deleted index='chilqa' with source='<yourlogsource>' host='<your forwarder host>' sourcetype='sourcetype::test' (1 missing total)
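A quick way to check for that message is to grep splunkd.log for the "unconfigured/disabled/deleted index" string. The sample log line below is fabricated for illustration; on a real indexer you would point grep at the actual log path.

```shell
# Write a fabricated splunkd.log sample to illustrate the message format
cat > /tmp/splunkd_sample.log <<'EOF'
09-14-2017 10:30:01.123 +0000 WARN  IndexProcessor - Received event for unconfigured/disabled/deleted index='chilqa' with source='/var/log/app/app.log' host='db-containers' sourcetype='test' (1 missing total)
EOF

# On a real indexer this would be:
#   grep "unconfigured/disabled/deleted index" /opt/splunk/var/log/splunk/splunkd.log
grep "unconfigured/disabled/deleted index" /tmp/splunkd_sample.log
```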
Hello Ssiever,
I cannot find anything for the host name in splunkd.log.
However, I can only find the lines below in the /var/log/splunk directory.
metrics.log:09-14-2017 10:32:17.333 +0000 INFO Metrics - group=per_host_thruput, series="db-containers", kbps=7.893489, eps=34.709250, kb=244.701172, ev=1076, avg_age=5464.828067, max_age=29468
metrics.log:09-14-2017 10:32:17.334 +0000 INFO Metrics - group=tcpin_connections, 104.45.237.119:19163:9998, connectionType=cooked, sourcePort=19163, sourceHost=, sourceIp=104.45.237.119, destPort=9998, kb=327.58, _tcp_Bps=26608.31, _tcp_KBps=25.98, _tcp_avg_thruput=25.98, _tcp_Kprocessed=327.58, _tcp_eps=52.19, _process_time_ms=1, chan_new_kBps=0.08, evt_misc_kBps=1.19, evt_raw_kBps=19.51, evt_fields_kBps=5.00, evt_fn_kBps=1.27, evt_fv_kBps=3.73, evt_fn_str_kBps=1.19, evt_fn_meta_dyn_kBps=0.00, evt_fn_meta_predef_kBps=0.00, evt_fn_meta_str_kBps=0.00, evt_fv_num_kBps=0.00, evt_fv_str_kBps=3.73, evt_fv_predef_kBps=0.00, evt_fv_offlen_kBps=0.00, build=4b804538c686, version=6.6.2, os=Windows, arch=x64, hostname=db-containers, guid=A9AADA66-57BB-4410-A075-328AE2C24FA3, fwdType=uf, ssl=false, lastIndexer=None, ack=false
On executing this command, I found that in the system/default directory the index was set to default.
I changed the index there to chilqa and restarted the UF.
This resolved the issue.
But what surprised me is that, per Splunk's configuration file precedence, the local directory should have the highest priority, and its index and other configuration values should be the ones picked up.
Why, then, were the default directory's values used, sending the data to the "main" index?
Thanks.
Vikram.
But one more problem I can see here is that some of the data from the UF is going to main while some of it is going to chilqa. It seems I need to debug the issue further. Please help if you have seen a similar issue before.
Thanks.
Vikram.
Probably it's a similar problem: first identify which logs are being indexed in the main index (find their hosts and sourcetypes), then debug your inputs.conf stanzas in the same way.
Bye.
Giuseppe
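For instance, a search along these lines (an SPL sketch; adjust the time range as needed) would show which hosts and sourcetypes are landing in main:

```
index=main earliest=-1h
| stats count by host, source, sourcetype
```

Any host or sourcetype that appears here but should be in chilqa points to the input stanza worth re-checking with btool.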
Now I have created one more problem for myself.
Hoping to get a definitive answer, I found that the UF version was 6.6.2 while our Enterprise instance is 6.5.3.
So I uninstalled the UF, restarted the UF server, installed version 6.5.2 of the UF, and configured it in a similar way.
Now the UF has completely stopped sending data to the Enterprise instance.
I feel I am in big trouble; please help.
There are really no issues with running different versions of UF, it is in fact very common. Here is the documentation reference.
The first thing to always check is your forwarder's splunkd.log. If you are on Linux, it's at /opt/splunkforwarder/var/log/splunk/splunkd.log. Check for any error messages there. Feel free to share what you find, if you can't make sense of it.
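To surface recent errors and warnings quickly, grepping for the ERROR and WARN levels usually narrows things down. The sample log content below is fabricated for illustration; on the forwarder you would point grep at the real splunkd.log path.

```shell
# Fabricated forwarder splunkd.log sample for illustration
cat > /tmp/uf_splunkd.log <<'EOF'
09-15-2017 08:00:01.000 +0000 INFO  TailingProcessor - Parsing configuration stanza: monitor:///var/log/app.
09-15-2017 08:00:05.000 +0000 WARN  TcpOutputProc - Cooked connection to ip=10.0.0.5:9998 timed out
09-15-2017 08:00:09.000 +0000 ERROR TcpOutputFd - Connection to host=10.0.0.5:9998 failed
EOF

# On the forwarder this would be:
#   grep -E ' (ERROR|WARN) ' /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -50
grep -E ' (ERROR|WARN) ' /tmp/uf_splunkd.log
```

Messages from TcpOutputProc or TcpOutputFd in particular tend to indicate forwarding/connectivity problems rather than input problems.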
If you can run /opt/splunkforwarder/bin/splunk cmd btool inputs list --debug
and /opt/splunkforwarder/bin/splunk cmd btool outputs list --debug
and share the output of both, we may be better able to help you.
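For reference, a minimal forwarder outputs.conf typically looks something like the sketch below; the group name is arbitrary and the indexer host is a placeholder, though the port matches the 9998 seen in your metrics.log output:

```ini
# outputs.conf on the universal forwarder (illustrative sketch)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = <indexer-host>:9998
```

If the btool output shows no tcpout stanza at all, or a wrong host/port, that would fully explain the forwarder going silent after the reinstall.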