I installed the Splunk Add-on for AWS 4.6.0 (the latest version as of this writing) and am having problems ingesting some inputs.
[aws_sqs_based_s3://My_Logging_Config]
aws_account = splunk_ec2_role
s3_file_decoder = Config
sourcetype = aws:config
sqs_batch_size = 10
sqs_queue_url = https://queue.amazonaws.com/6436/Config-US-East-100-sqs
I see errors like message="Failed to ingest file.":
2019-01-17 03:11:02,661 level=ERROR pid=2286 tid=Thread-1 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:ingestfile:285 | start_time=1547691410 datainput="My_Logging_Config", message_id="82aa59-c0f4-4016-9f9c-cd06122b1e36" job_id=35a18f-f3d2-451d-9949-1262cf382e76 created=154769662.24 ttl=300 | message="Failed to ingest file." uri="s3://supp-prd-config-logs/AWSLogs/773569193/Config/us-east-2/2019/1/17/OversizedChangeNotification/AWS::EC2::VPC/vpc-face/773670569193_Config_us-east-2_ChangeNotification_AWS::EC2::VPC_vpc-face_20190117T031101Z_1547694661542.json.gz"
The error message tells us that ingestion fails for "OversizedChangeNotification" data.
This is not yet supported by the latest Add-on for AWS; support is being worked on under ADDON-20112. For more information, please contact Splunk Support.
We are on version 6.3.0 of the Add-on for AWS, and we are still seeing the errors below in the splunk_ta_aws_aws_sqs_based_s3_ess-p-sys-awsconfig.log file.
2023-03-16 02:55:41,201 level=ERROR pid=72620 tid=Thread-8 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_ingest_file:514 | datainput="ess-p-sys-awsconfig" start_time=1678849700, message_id="39693d02-55gg-4e62-9895-9796fa51ed2f" created=1678902941.093989 ttl=300 job_id=95c972ab-5316-4bda-bfaf-a337ad5effbe | message="Failed to ingest file." uri="s3://essaws-p-system/aws/config/AWSLogs/385473250182/Config/ap-northeast-1/2023/3/15/OversizedChangeNotification/AWS::EC2::SecurityGroup/sg-03e7b8de81918c141/385473250182_Config_ap-northeast-1_ChangeNotification_AWS::EC2::SecurityGroup_sg-03e7b8de81918c141_20230315T171015Z_1678900215945.json.gz"
2023-03-16 02:55:41,201 level=CRITICAL pid=72620 tid=Thread-8 logger=splunk_ta_aws.modinputs.sqs_based_s3.handler pos=handler.py:_process:442 | datainput="ess-p-sys-awsconfig" start_time=1678849700, message_id="39693d02-55gg-4e62-9895-9796fa51ed2f" created=1678902941.093989 ttl=300 job_id=95c972ab-5316-4bda-fgff-ad5effbe | message="An error occurred while processing the message."
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 431, in _process
    self._parse_csv_with_delimiter,
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/modinputs/sqs_based_s3/handler.py", line 478, in _ingest_file
    for records, metadata in self._decode(fileobj, source):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/splunk_ta_aws/common/decoder.py", line 95, in __call__
    records = document["configurationItems"]
KeyError: 'configurationItems'
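The traceback shows why the add-on chokes on these files: its decoder reads `document["configurationItems"]` unconditionally, but when AWS Config emits an oversized change notification the payload is delivered differently and that key is absent, so the lookup raises `KeyError`. A minimal sketch of the failure mode (this is illustrative code, not the add-on's actual decoder; the function name and fallback behavior are assumptions):

```python
import gzip
import io
import json


def decode_config_records(fileobj):
    """Decode a gzipped AWS Config notification file.

    Returns the list under "configurationItems" when present. Oversized
    change notifications omit that key, so we fall back to an empty list
    instead of raising KeyError (the add-on's decoder raises, producing
    the traceback above).
    """
    with gzip.GzipFile(fileobj=fileobj) as gz:
        document = json.loads(gz.read().decode("utf-8"))
    # .get() with a default is what avoids the KeyError seen in the logs.
    return document.get("configurationItems", [])


def to_gz(doc):
    """Helper: serialize a dict to an in-memory gzipped JSON file."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write(json.dumps(doc).encode("utf-8"))
    buf.seek(0)
    return buf


# A normal notification yields its items; an oversized one yields none.
normal = {"configurationItems": [{"resourceType": "AWS::EC2::VPC"}]}
oversized = {"messageType": "OversizedConfigurationItemChangeNotification"}

print(len(decode_config_records(to_gz(normal))))     # 1
print(len(decode_config_records(to_gz(oversized))))  # 0
```

This only illustrates the shape of the bug; skipping the records silently (as the sketch does) would mean the oversized configuration data is still not ingested, which is why the real fix has to come from the add-on itself.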
Has anyone tried the SQS input on the new Data Manager add-on? Instead of pulling directly (like the legacy AWS Add-on), the new Data Manager uses a Firehose "push".
Also, regarding the legacy Add-on's issue: I know I previously posted about my own experience (circa 2019), but my recollection now is that this was only ever a cosmetic issue, i.e., there was no actual data loss. Is that correct? Or am I perhaps confusing this issue with another?
Well it is 2022 and I am still seeing this issue. Has anyone found a way around this?
Hi.
Could you solve this problem?
Thanks
I'm facing the same problem. Did anyone figure out a way around it?
Are you using the legacy TA or the new Data Manager?
It's 2023, and I'm having this very same issue with AWS Config.
I am experiencing this error too, two years after it was apparently recognized as an issue in the AWS TA.
Surely there's some workaround? If I manually delete the 'oversize' files from the S3 bucket, will that stop the error-logging ... or will it simply substitute a new different error, e.g. "object not found"?
Or, will the SQS message eventually time-out? (I don't think I have the ability to selectively delete SQS messages?)
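On the SQS questions: messages do expire on their own once the queue's retention period elapses (default 4 days, configurable up to 14), and you *can* delete them selectively via the API by receiving a batch and calling `delete_message` with each receipt handle. A hedged sketch, assuming the queue carries raw S3 event notifications (the key layout comes from the logs above; `purge_oversized_messages` and its queue URL argument are hypothetical names, and if your events are SNS-wrapped you would unwrap the "Message" field first):

```python
import json


def is_oversized_notification(body: str) -> bool:
    """Return True when an SQS message body points at an
    OversizedChangeNotification object in S3.

    Assumes the raw S3 event notification format ({"Records": [...]}).
    """
    try:
        records = json.loads(body).get("Records", [])
    except (ValueError, AttributeError):
        return False
    return any(
        "OversizedChangeNotification"
        in rec.get("s3", {}).get("object", {}).get("key", "")
        for rec in records
    )


def purge_oversized_messages(queue_url: str) -> int:
    """Drain the queue in batches, deleting only oversized notifications.

    Requires boto3 and AWS credentials; queue_url is a placeholder for
    your own queue. Messages left alone reappear after the visibility
    timeout, so the add-on can still process the normal ones.
    """
    import boto3  # imported here so the pure helper above needs no SDK

    sqs = boto3.client("sqs")
    deleted = 0
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=1
        )
        messages = resp.get("Messages", [])
        if not messages:
            return deleted
        for msg in messages:
            if is_oversized_notification(msg["Body"]):
                sqs.delete_message(
                    QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
                )
                deleted += 1
```

This wouldn't fix the missing data, but it would stop the same messages from being retried and re-logged until the TA gains proper support.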
Curious to know if anyone has a workaround, as the error logs are massive because of this. Perhaps writing those errors to a different bucket is possible, although I have not tried it. Potentially adding a filter on the bucket may work as well.
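One caveat on filtering at the bucket: S3 event notifications and lifecycle rules only match on key prefix and suffix, and "OversizedChangeNotification" sits in the middle of Config's key layout (see the URIs in the logs above), so the match has to happen client-side. A sketch of a one-off cleanup of objects already in the bucket (bucket and prefix are placeholders; requires boto3 and credentials, and the delete function is defined but deliberately never called here):

```python
def is_oversized_key(key: str) -> bool:
    """Match Config's oversized-notification path segment in an S3 key."""
    return "/OversizedChangeNotification/" in key


def delete_oversized_objects(bucket: str, prefix: str = "") -> int:
    """Delete OversizedChangeNotification objects from the bucket.

    Uses a paginator so buckets with more than 1000 keys are handled.
    Pass a prefix like "AWSLogs/" to narrow the scan.
    """
    import boto3

    s3 = boto3.client("s3")
    deleted = 0
    for page in s3.get_paginator("list_objects_v2").paginate(
        Bucket=bucket, Prefix=prefix
    ):
        for obj in page.get("Contents", []):
            if is_oversized_key(obj["Key"]):
                s3.delete_object(Bucket=bucket, Key=obj["Key"])
                deleted += 1
    return deleted
```

Note that deleting the object does not remove any SQS message that still points at it; if the message is processed afterwards the add-on would likely log a fetch failure instead, so cleaning the queue and the bucket go together.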