The environment has one search head and two indexers running Splunk 7.2.1. The Splunk App for AWS (5.1.2) and the Splunk Add-on for AWS (4.6.0) are installed; the add-on is loaded on the search head to manage the inputs.
We have set up multiple inputs that are functioning correctly, including the CloudWatch AWS/EC2, AWS/EBS, and AWS/RDS inputs. However, when we attempt to set up the CloudWatch AWS/S3 namespace, we are unable to ingest any data.
Following the documentation, we have verified that all index macros are configured correctly and that the AWS permissions are set correctly.
We see no errors in the aws:cloudwatch:log events:
(index=_internal sourcetype=aws:cloudwatch:log)
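A search along these lines is how we checked (the ERROR/WARN keyword filter is just illustrative):
(index=_internal sourcetype=aws:cloudwatch:log (ERROR OR WARN))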
In the logs we see the following steps complete for the AWS/S3 inputs without error (yet, as the search after this list shows, no events arrive):
Create task for data input.
Start querying data points.
Start running batches.
Batches completed.
Querying data points finished.
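Despite those steps reporting success, a search for the S3 metrics in the target index, for example the following (the metric-name keyword is just a sketch), returns zero events:
index=aws sourcetype=aws:cloudwatch "NumberOfObjects"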
Our inputs.conf is listed below. Is something missing from our configuration, or are there any additional troubleshooting steps we could attempt?
Thanks for any help.
inputs.conf:
[aws_cloudwatch://XXXXXXXX_aws_cloudwatch_56f74518-6038-4bcd-bf6d-c51f687860aa]
aws_account = XXXXXXXX
aws_region = us-west-1
index = aws
metric_dimensions = [{"StorageType":["AllStorageTypes"],"BucketName":["rmp-files"]}]
metric_names = ["NumberOfObjects"]
metric_namespace = AWS/S3
period = 60
polling_interval = 3600
sourcetype = aws:cloudwatch
statistics = ["Average"]
use_metric_format = false

[aws_cloudwatch://XXXXXXXX_aws_cloudwatch_6ff56c72-8097-460d-bbfc-a0e126f6953c]
aws_account = XXXXXXXX
aws_region = us-west-1
index = aws
metric_dimensions = [{"StorageType":["StandardStorage"],"BucketName":["rmp-files"]}]
metric_names = ["BucketSizeBytes"]
metric_namespace = AWS/S3
period = 60
polling_interval = 3600
sourcetype = aws:cloudwatch
statistics = ["Average"]
use_metric_format = false
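For comparison, a breakdown such as the following (a sketch; we are assuming the add-on stamps each CloudWatch event's source with its region and namespace, which matches what we see for the working inputs) shows events for the EC2/EBS/RDS namespaces but nothing for AWS/S3:
index=aws sourcetype=aws:cloudwatch | stats count by source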