The Helm deploy is successful. I checked that the related Splunk fluentd pods are up and running:
splunk-splunk-kubernetes-logging-v6gbg 1/1 Running 0 3m57s
splunk-splunk-kubernetes-metrics-agg-d7dc75b4c-pm6gv 1/1 Running 0 3m57s
splunk-splunk-kubernetes-metrics-dpnpw 1/1 Running 0 3m57s
splunk-splunk-kubernetes-objects-7c94bdccc-tlzzb 1/1 Running 0 3m57s
LogA:
2020-02-21 20:07:05 +0000 [info]: #0 starting fluentd worker pid=17 ppid=6 worker=0
2020-02-21 20:07:05 +0000 [info]: #0 fluentd worker is now running worker=0
LogB:
2020-02-21 20:06:55 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2020-02-21 20:06:55 +0000 [warn]: #0 /var/log/kube-apiserver-audit.log not found. Continuing without tailing it.
2020-02-21 20:06:55 +0000 [info]: #0 fluentd worker is now running worker=0
LogC:
2020-02-21 20:07:06 +0000 [info]: #0 starting fluentd worker pid=16 ppid=6 worker=0
2020-02-21 20:07:06 +0000 [info]: #0 fluentd worker is now running worker=0
LogD:
2020-02-21 20:06:53 +0000 [info]: #0 starting fluentd worker pid=15 ppid=6 worker=0
2020-02-21 20:06:53 +0000 [info]: #0 fluentd worker is now running worker=0
But entities cannot be discovered on the Splunk instance (on-prem), even though some data has already been forwarded:
21/02/2020 15:15:07.742   metric
host = k8s-node1 | source = kube.node.tasks_stats.nr_io_wait | sourcetype = httpevent
I am currently blocked from using this app by this issue. Any help is appreciated, and thanks in advance 🙂
I found the issue: the index type was wrong. The em_metrics index should be created as a metrics-type index.
This issue has now been resolved.
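For anyone hitting the same symptom, a sketch of the fix on the Splunk side: define the index with `datatype = metric` in indexes.conf (the name `em_metrics` here matches the chart's default metrics index; adjust it if your values.yaml overrides the index name, and the path settings below are just the usual defaults):

```ini
# indexes.conf on the on-prem Splunk instance
# em_metrics must be a metrics-type index, not the default event type,
# or the forwarded metric events will not be searchable as metrics
[em_metrics]
datatype   = metric
homePath   = $SPLUNK_DB/em_metrics/db
coldPath   = $SPLUNK_DB/em_metrics/colddb
thawedPath = $SPLUNK_DB/em_metrics/thaweddb
```

The same index can also be created in Splunk Web (Settings → Indexes → New Index, with Index Data Type set to Metrics).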