Columns: Question, Description, Answer, Link
How do I troubleshoot SMS delivery delays in Amazon SNS?
I get delivery delays of mobile text messaging (SMS) to destination numbers in Amazon Simple Notification Service (Amazon SNS).
"I get delivery delays of mobile text messaging (SMS) to destination numbers in Amazon Simple Notification Service (Amazon SNS).Short descriptionSMS delivery could be delayed for the following reasons:The phone number is temporarily out of the coverage area.The phone number is in a roaming network.There's increased network traffic for a particular carrier.The phone was turned off when a carrier tried delivering the message.ResolutionTroubleshoot single device issuesRestart the device so that it's connected to the nearest network base station.Change the SIM slot to check for a device issue.Check if the device can receive SMS messages from other sources.Troubleshoot multiple device issuesIf delayed SMS delivery is affecting multiple devices, there could be issues with downstream providers and carriers.To troubleshoot potential downstream issues, create a support case for Amazon SNS. Provide the following information in your support case:The AWS Region you're using to send SMS messagesA timestamp of when the issue startedThree samples of SMS logs with the message IDs of failed SMS messages to different numbers not older than three daysNote: SMS deliveries from Amazon CloudWatch logs don't always provide accurate SMS delivery times. In some cases, SMS messages can be delivered before CloudWatch logs are received. The dwellTimeMsUntilDeviceAck value in the delivery logs shows when the carrier received the Delivery Report (DLR), but doesn't provide information on delayed SMS messages.Follow"
https://repost.aws/knowledge-center/sns-sms-delivery-delays
How do I stream data from CloudWatch Logs to a VPC-based Amazon OpenSearch Service cluster in a different account?
"I'm trying to stream data from Amazon CloudWatch Logs to an Amazon OpenSearch Service cluster using a virtual private cloud (VPC) in another account. However, I receive an "Enter a valid Amazon OpenSearch Service Endpoint" error message."
"I'm trying to stream data from Amazon CloudWatch Logs to an Amazon OpenSearch Service cluster using a virtual private cloud (VPC) in another account. However, I receive an "Enter a valid Amazon OpenSearch Service Endpoint" error message.Short descriptionTo stream data from CloudWatch Logs to an OpenSearch Service cluster in another account, perform the following steps:1.    Set up CloudWatch Logs in Account A.2.    Configure AWS Lambda in Account A.3.    Configure Amazon Virtual Private Cloud (Amazon VPC) peering between accounts.ResolutionSet up CloudWatch Logs in Account A1.    Open the CloudWatch Logs console in Account A and select your log group.2.    Choose Actions.3.    Choose the Create OpenSearch subscription filter.4.    For the Select Account option, select This account.5.    For the OpenSearch Service cluster dropdown list, choose an existing cluster for Account A.6.    Choose the Lambda IAM Execution Role that has permissions to make calls to the selected OpenSearch Service cluster.7.    Attach the AWSLambdaVPCAccessExecutionRole policy to your role.8.    In Configure log format and filters, select your Log Format and Subscription Filter Pattern.9.    Choose Next.10.    Enter the Subscription filter name and choose Start Streaming. For more information about streaming, see Streaming CloudWatch Logs data to Amazon OpenSearch Service.Configure Lambda in Account A1.    In Account A, open the Lambda console.2.    Select the Lambda function you created to stream the log.3.    In the function code, update the endpoint variable of the OpenSearch Service cluster in Account B. This update allows the Lambda function to send data to the OpenSearch Service domain in Account B.4.    Choose Configuration.5.    Choose VPC.6.    Under VPC, choose Edit.7.    Select your VPC, subnets, and security groups.Note: This selection makes sure that the Lambda function runs inside a VPC, using VPC routing to send data back to the OpenSearch Service domain. For more information about Amazon Virtual Private Cloud (Amazon VPC) configurations, see Configuring a Lambda function to access resources in a VPC.8.    Choose Save.Configure VPC peering between accounts1.    Open the Amazon VPC console in Account A and Account B.Note: Be sure that your VPC doesn't have overlapping CIDR blocks.2.    Create a VPC peering session between the two custom VPCs (Lambda and OpenSearch Service). This VPC peering session allows Lambda to send data to your OpenSearch Service domain. For more information about VPC peering connections, see Create a VPC peering connection.3.    Update the route table for both VPCs. For more information about route tables, see Update your route tables for a VPC peering connection.4.    In Account A, go to Security Groups.5.    Select the security group assigned to the subnet where Lambda is set up.Note: In this instance, "security group" refers to a subnet network ACL.6.    Add the inbound rule to allow traffic from the OpenSearch Service subnets.7.    In Account B, select the security group assigned to the subnet where OpenSearch Service is set up.8.    Add the inbound rule to allow traffic from the Lambda subnets.9.    In Account B, open the OpenSearch Service console.10.    Choose Actions.11.    
Choose Modify access policy, and then append the following policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<AWS Account A>:role/<Lambda Execution Role>" }, "Action": "es:*", "Resource": "arn:aws:es:us-east-1:<AWS Account B>:domain/<OpenSearch Domain Name>/*" } ] }
This policy allows calls to OpenSearch Service from the Lambda function's execution role.
12.    Check the Error count and success rate metric in the Lambda console. This metric verifies whether logs are successfully delivered to OpenSearch Service.
13.    Check the Indexing rate metric in OpenSearch Service to confirm whether the data was sent. CloudWatch Logs now streams across both accounts in your Amazon VPC."
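For reference, the VPC peering steps can also be performed from the AWS CLI (a sketch with placeholder VPC, account, route table, and peering connection IDs; adjust Regions and CIDR blocks to your own environment):
# From Account A: request peering with the OpenSearch Service VPC in Account B
aws ec2 create-vpc-peering-connection --vpc-id vpc-0aaa1111 --peer-vpc-id vpc-0bbb2222 --peer-owner-id 222222222222 --peer-region us-east-1
# From Account B: accept the request
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0ccc3333
# In each account: add a route to the other VPC's CIDR through the peering connection
aws ec2 create-route --route-table-id rtb-0ddd4444 --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-0ccc3333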
https://repost.aws/knowledge-center/opensearch-stream-data-cloudwatch
How does throttling on my global secondary index affect my Amazon DynamoDB table?
My global secondary index (GSI) is being throttled. How does this affect the base Amazon DynamoDB table?
"My global secondary index (GSI) is being throttled. How does this affect the base Amazon DynamoDB table?Short descriptionThrottling on a GSI affects the base table in different ways, depending on whether the throttling is for read or for write activity:When a GSI has insufficient read capacity, the base table isn't affected.When a GSI has insufficient write capacity, write operations don't succeed on the base table or any of its GSIs.For more information, see Using global secondary indexes in DynamoDB.ResolutionTo prevent throttling, do the following:Be sure that the provisioned write capacity for each GSI is equal to or greater than the provisioned write capacity of the base table. To modify the provisioned throughput of a GSI, use the UpdateTable operation. If automatic scaling is turned on for the base table, then it's a best practice to apply the same settings to the GSI. You can do this by choosing Copy from base table in the DynamoDB console. For best performance, be sure to turn on Use the same read/write capacity settings for all global secondary indexes. This option allows DynamoDB auto scaling to uniformly scale all the global secondary indexes on the base table. For more information, see Enabling DynamoDB auto scaling on existing tables.Be sure that the GSI's partition key distributes read and write operations as evenly as possible across partitions. This helps prevent hot partitions, which can lead to throttling. For more information, see Designing partition keys to distribute your workload evenly.Use Amazon CloudWatch Contributor Insights for DynamoDB to identify the most frequently throttled keys.Follow"
https://repost.aws/knowledge-center/dynamodb-gsi-throttling-table
How do I troubleshoot Amazon Kinesis Agent issues on a Linux machine?
"I'm trying to use Amazon Kinesis Agent on a Linux machine. However, I'm encountering an issue. How do I resolve this?"
"I'm trying to use Amazon Kinesis Agent on a Linux machine. However, I'm encountering an issue. How do I resolve this?Short descriptionThis article covers the following issues:Kinesis Agent is sending duplicate events.Kinesis Agent is causing write throttles and failed records on my Amazon Kinesis stream.Kinesis Agent is unable to read or stream log files.My Amazon Elastic Computing (Amazon EC2) server keeps failing because of insufficient Java heap size.My Amazon EC2 CPU utilization is very high.ResolutionKinesis Agent is sending duplicate eventsIf you receive duplicates whenever you send logs from Kinesis Agent, there's likely a file rotation in place where the match pattern isn't correctly qualified. Whenever you send a log, Kinesis Agent checks the latestUpdateTimestamp of each file that matches the file pattern. By default, Kinesis Agent chooses the most recently updated file, identifying an active file that matches the rotation pattern. If more than one file is updated at the same time, Kinesis Agent can't determine the active file to track. Therefore, Kinesis Agent begins to tail the updated files from the beginning, causing several duplicates.To avoid this issue, create different file flows for each individual file, making sure that your file pattern tracks the rotations instead.Note: If you're tracking a rotation, it's a best practice to use either the create or rename log rotate settings, instead of copytruncate.For example, you can use a file flow that's similar to this one:"flows": [ { "filePattern": "/tmp/app1.log*", "kinesisStream": "yourkinesisstream1" }, { "filePattern": "/tmp/app2.log*", "kinesisStream": "yourkinesisstream2" } ]Kinesis Agent also retries any records that it fails to send back when there are intermittent network issues. If Kinesis Agent fails to receive server-side acknowledgement, it tries again, creating duplicates. In this example, the downstream application must de-duplicate.Duplicates can also occur when the checkpoint file is tempered or removed. If a checkpoint file is stored in /var/run/aws-kinesis-agent, then the file might get cleaned up during a reinstallation or instance reboot. When you run Kinesis Agent again, the application fails as soon as the file is read, causing duplicates. Therefore, keep the checkpoint in the main Agent directory and update the Kinesis Agent configuration with a new location.For example:"checkpointFile": "/aws-kinesis-agent-checkpoints/checkpoints"Kinesis Agent is causing write throttles and failed records on my Amazon Kinesis data streamBy default, Kinesis Agent tries to send the log files as quickly as possible, breaching Kinesis' throughput thresholds. However, failed records are re-queued, and are continuously retried to prevent any data loss. When the queue is full, Kinesis Agent stops tailing the file, which can cause the application to lag.For example, if the queue is full, your log looks similar to this:com.amazon.kinesis.streaming.agent.Agent [WARN] Agent: Tailing is 745.005859 MB (781195567 bytes) behind.Note: The queue size is determined by the publishQueueCapacity parameter (with the default value set to "100").To investigate any failed records or performance issues on your Kinesis data stream, try the following:Monitor the RecordSendErrors metric in Amazon CloudWatch.Review your Kinesis Agent logs to check if any lags occurred. The ProvisionedThroughputExceededException entry is visible only under the DEBUG log level. 
During this time, Kinesis Agent's record sending speed can be slower if most of the CPU is used to parse and transform data. If you see that Kinesis Agent is falling behind, then consider scaling up your Amazon Kinesis data stream.
Kinesis Agent is unable to read or stream log files
Make sure that the Amazon EC2 instance that your Kinesis Agent is running on has proper permissions to access your destination Kinesis delivery stream. If Kinesis Agent fails to read the log file, then check whether Kinesis Agent has read permissions for that file. For all files matching this pattern, read permission must be granted to aws-kinesis-agent-user. For the directory containing the files, read and execute permissions must also be granted to aws-kinesis-agent-user. Otherwise, you get an Access Denied error or Java Runtime Exception.
My Amazon EC2 server keeps failing because of insufficient Java heap size
If your Amazon EC2 server keeps failing because of insufficient Java heap size, then increase the heap size allotted to Amazon Kinesis Agent. To configure the amount of memory available to Kinesis Agent, update the “start-aws-kinesis-agent” file. Increase the set values for the following parameters:
- JAVA_START_HEAP
- JAVA_MAX_HEAP
Note: On Linux, the file path for “start-aws-kinesis-agent” is “/usr/bin/start-aws-kinesis-agent”.
My Amazon EC2 CPU utilization is very high
CPU utilization can spike if Kinesis Agent is performing sub-optimized regex pattern matching and log transformation. If you already configured Kinesis Agent, try removing all the regular expression (regex) pattern matches and transformations. Then, check whether you're still experiencing CPU issues.
If you still experience CPU issues, then consider tuning the threads and records that are buffered in memory. Or, update some of the default parameters in the /etc/aws-kinesis/agent.json configuration settings file. You can also lower several parameters in the Kinesis Agent configuration file.
Here are the general configuration parameters that you can try lowering:
- sendingThreadsMaxQueueSize: The workQueue size of the threadPool for sending data to the destination. The default value is 100.
- maxSendingThreads: The number of threads for sending data to the destination. The minimum value is 2. The default value is 12 times the number of cores for your machine.
- maxSendingThreadsPerCore: The number of threads per core for sending data to the destination. The default value is 12.
Here are the flow configuration parameters that you can try lowering:
- publishQueueCapacity: The maximum number of buffers of records that can be queued up before they are sent to the destination. The default value is 100.
- minTimeBetweenFilePollsMillis: The time interval at which the tracked file is polled for new data to parse. The default value is 100."
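Putting these settings together, a trimmed /etc/aws-kinesis/agent.json might look like the following (a sketch; the stream name, file pattern, and lowered values are illustrative only, not recommendations):
{
  "checkpointFile": "/aws-kinesis-agent-checkpoints/checkpoints",
  "maxSendingThreads": 4,
  "sendingThreadsMaxQueueSize": 50,
  "flows": [
    {
      "filePattern": "/tmp/app1.log*",
      "kinesisStream": "yourkinesisstream1",
      "publishQueueCapacity": 50,
      "minTimeBetweenFilePollsMillis": 500
    }
  ]
}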
https://repost.aws/knowledge-center/troubleshoot-kinesis-agent-linux
How do I revert to a known stable kernel after an update prevents my Amazon EC2 instance from rebooting successfully?
How do I revert to a stable kernel after an update prevents my Amazon Elastic Compute Cloud (Amazon EC2) instance from rebooting successfully?
"How do I revert to a stable kernel after an update prevents my Amazon Elastic Compute Cloud (Amazon EC2) instance from rebooting successfully?Short descriptionIf you performed a kernel update to your EC2 Linux instance but the kernel is now corrupt, then the instance can't reboot. You can't use SSH to connect to the impaired instance.To revert to the previous versions, do the following:1.    Access the instance's root volume.2.    Update the default kernel in the GRUB bootloader.ResolutionAccess the instance's root volumeThere are two methods to access the root volume:Method 1: Use the EC2 Serial ConsoleIf you enabled EC2 Serial Console for Linux, then you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, grant access to it at the account level. Then create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, then follow the instructions in Method 2. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Method 2: Use a rescue instanceCreate a temporary rescue instance, and then remount your Amazon Elastic Block Store (Amazon EBS) volume on the rescue instance. From the rescue instance, you can configure your GRUB to take the previous kernel for booting.Important: Don't perform this procedure on an instance store-backed instance. Because the recovery procedure requires a stop and start of your instance, any data on that instance is lost. For more information, see Determine the root device type of your instance.1.    Create an EBS snapshot of the root volume. For more information, see Create Amazon EBS snapshots.2.    Open the Amazon EC2 console.Note: Be sure that you're in the correct Region.3.    Select Instances from the navigation pane, and then choose the impaired instance.4.    Choose Instance State, Stop instance, and then select Stop.5.    In the Storage tab, under Block devices, select the Volume ID for /dev/sda1 or /dev/xvda.Note: The root device differs by AMI, but /dev/xvda or /dev/sda1 are reserved for the root device. For example, Amazon Linux 1 and 2 use /dev/xvda. Other distributions, such as Ubuntu 14, 16, 18, CentOS 7, and RHEL 7.5, use /dev/sda1.6.    Choose Actions, Detach Volume, and then select Yes, Detach. Note the Availability Zone.Note: You can tag the EBS volume before detaching it to help identify it in later steps.7.    Launch a rescue EC2 instance in the same Availability Zone.Note: Depending on the product code, you might be required to launch an EC2 instance of the same OS type. For example, if the impaired EC2 instance is a paid RHEL AMI, you must launch an AMI with the same product code. For more information, see Get the product code for your instance.If the original instance is running SELinux (RHEL, CentOS 7 or 8, for example), launch the rescue instance from an AMI that uses SELinux. 
If you select an AMI running a different OS, such as Amazon Linux 2, any modified file on the original instance has broken SELinux labels.
8.    After the rescue instance launches, choose Volumes from the navigation pane, and then choose the detached root volume of the impaired instance.
9.    Choose Actions, Attach Volume.
10.    Choose the rescue instance ID (id-xxxxx), and then set an unused device. In this example, /dev/sdf.
11.    Use SSH to connect to the rescue instance.
12.    Run the lsblk command to view your available disk devices:
lsblk
The following is an example of the output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 15G 0 disk
└─xvda1 202:1 0 15G 0 part /
xvdf 202:0 0 15G 0 disk
└─xvdf1 202:1 0 15G 0 part
Note: Nitro-based instances expose EBS volumes as NVMe block devices. The output generated by the lsblk command on Nitro-based instances shows the disk names as nvme[0-26]n1. For more information, see Amazon EBS and NVMe on Linux instances. The following is an example of the lsblk command output on a Nitro-based instance:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:0 0 8G 0 disk
└─nvme0n1p1 259:1 0 8G 0 part /
└─nvme0n1p128 259:2 0 1M 0 part
nvme1n1 259:3 0 100G 0 disk
└─nvme1n1p1 259:4 0 100G 0 part /
13.    Run the following command to become root:
sudo -i
14.    Mount the root partition of the mounted volume to /mnt. In the preceding example, /dev/xvdf1 or /dev/nvme1n1p1 is the root partition of the mounted volume. For more information, see Make an Amazon EBS volume available for use on Linux. Note: In the following example, replace /dev/xvdf1 with the correct root partition for your volume.
mount -o nouuid /dev/xvdf1 /mnt
Note: If /mnt doesn't exist on your configuration, create a mount directory, and then mount the root partition of the mounted volume to this new directory.
mkdir /mnt
mount -o nouuid /dev/xvdf1 /mnt
You can now access the data of the impaired instance through the mount directory.
15.    Mount /dev, /run, /proc, and /sys of the rescue instance to the same paths as the newly mounted volume:
for m in dev proc run sys; do mount -o bind {,/mnt}/$m; done
Call the chroot function to change into the mount directory.
Note: If you have a separate /boot partition, mount it to /mnt/boot before running the following command.
chroot /mnt
Update the default kernel in the GRUB bootloader
The current corrupt kernel is in position 0 (zero) in the list. The last stable kernel is in position 1. To replace the corrupt kernel with the stable kernel, use one of the following procedures, based on your distribution:
- GRUB1 (Legacy GRUB) for Red Hat 6 and Amazon Linux 1
- GRUB2 for Ubuntu 14 LTS, 16.04, and 18.04
- GRUB2 for RHEL 7 and Amazon Linux 2
- GRUB2 for RHEL 8 and CentOS 8
GRUB1 (Legacy GRUB) for Red Hat 6 and Amazon Linux 1
Use the sed command to replace the corrupt kernel with the stable kernel in the /boot/grub/grub.conf file:
sed -i '/^default/ s/0/1/' /boot/grub/grub.conf
GRUB2 for Ubuntu 14 LTS, 16.04, and 18.04
1.    Replace the corrupt GRUB_DEFAULT=0 default menu entry with the GRUB_DEFAULT=saved value in the /etc/default/grub file:
sed -i 's/GRUB_DEFAULT=0/GRUB_DEFAULT=saved/g' /etc/default/grub
2.    Run the update-grub command so that GRUB recognizes the change:
update-grub
3.    Run the grub-set-default command so that the stable kernel loads at the next reboot. In this example, the stable kernel is at index 1:
grub-set-default 1
GRUB2 for RHEL 7 and Amazon Linux 2
1.    Replace the corrupt GRUB_DEFAULT=0 default menu entry with the GRUB_DEFAULT=saved value in the /etc/default/grub file:
sed -i 's/GRUB_DEFAULT=0/GRUB_DEFAULT=saved/g' /etc/default/grub
2.    Update GRUB to regenerate the /boot/grub2/grub.cfg file:
grub2-mkconfig -o /boot/grub2/grub.cfg
3.    Run the grub2-set-default command so that the stable kernel loads at the next reboot. In this example, the stable kernel is at index 1:
grub2-set-default 1
GRUB2 for RHEL 8 and CentOS 8
GRUB2 in RHEL 8 and CentOS 8 uses blscfg files and entries in /boot/loader for the boot configuration, instead of the previous grub.cfg format. It's a best practice to use the grubby tool for managing the blscfg files and retrieving information from /boot/loader/entries/. If the blscfg files are missing from this location or corrupted, grubby doesn't show any results, and you must regenerate the files to recover functionality. The indexing of the kernels depends on the .conf files located under /boot/loader/entries and on the kernel versions. Indexing is configured to keep the latest kernel with the lowest index. For information on how to regenerate BLS configuration files, see How can I recover my Red Hat 8 or CentOS 8 instance that is failing to boot due to issues with the Grub2 BLS configuration file?
1.    Run the grubby --default-kernel command to see the current default kernel:
grubby --default-kernel
2.    Run the grubby --info=ALL command to see all available kernels and their indexes:
grubby --info=ALL
The following is example output from the --info=ALL command:
[root@ip-172-31-29-221 /]# grubby --info=ALL
index=0
kernel="/boot/vmlinuz-4.18.0-305.el8.x86_64"
args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto $tuned_params"
root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"
initrd="/boot/initramfs-4.18.0-305.el8.x86_64.img $tuned_initrd"
title="Red Hat Enterprise Linux (4.18.0-305.el8.x86_64) 8.4 (Ootpa)"
id="0c75beb2b6ca4d78b335e92f0002b619-4.18.0-305.el8.x86_64"
index=1
kernel="/boot/vmlinuz-0-rescue-0c75beb2b6ca4d78b335e92f0002b619"
args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto"
root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"
initrd="/boot/initramfs-0-rescue-0c75beb2b6ca4d78b335e92f0002b619.img"
title="Red Hat Enterprise Linux (0-rescue-0c75beb2b6ca4d78b335e92f0002b619) 8.4 (Ootpa)"
id="0c75beb2b6ca4d78b335e92f0002b619-0-rescue"
index=2
kernel="/boot/vmlinuz-4.18.0-305.3.1.el8_4.x86_64"
args="ro console=ttyS0,115200n8 console=tty0 net.ifnames=0 rd.blacklist=nouveau nvme_core.io_timeout=4294967295 crashkernel=auto $tuned_params"
root="UUID=d35fe619-1d06-4ace-9fe3-169baad3e421"
initrd="/boot/initramfs-4.18.0-305.3.1.el8_4.x86_64.img $tuned_initrd"
title="Red Hat Enterprise Linux (4.18.0-305.3.1.el8_4.x86_64) 8.4 (Ootpa)"
id="ec2fa869f66b627b3c98f33dfa6bc44d-4.18.0-305.3.1.el8_4.x86_64"
Note the path of the kernel that you want to set as the default for your instance. In the preceding example, the path for the kernel at index 2 is /boot/vmlinuz-4.18.0-305.3.1.el8_4.x86_64.
3.    Run the grubby --set-default command to change the default kernel of the instance:
grubby --set-default=/boot/vmlinuz-4.18.0-305.3.1.el8_4.x86_64
Note: Replace 4.18.0-305.3.1.el8_4.x86_64 with your kernel's version number.
4.    Run the grubby --default-kernel command to verify that the preceding command worked:
grubby --default-kernel
If you're accessing the instance using the EC2 Serial Console, then the stable kernel now loads and you can reboot the instance.
If you're using a rescue instance, then complete the steps in the following section.
Unmount volumes, detach the root volume from the rescue instance, and then attach the volume to the impaired instance
Note: Complete the following steps if you used Method 2: Use a rescue instance to access the root volume.
1.    Exit from chroot, and unmount /dev, /run, /proc, and /sys:
exit
umount /mnt/{dev,proc,run,sys,}
2.    From the Amazon EC2 console, choose Instances, and then choose the rescue instance.
3.    Choose Instance State, Stop instance, and then select Yes, Stop.
4.    Detach the root volume id-xxxxx (the volume from the impaired instance) from the rescue instance.
5.    Attach the root volume you detached in step 4 to the impaired instance as the root volume (/dev/sda1), and then start the instance.
Note: The root device differs by AMI. The names /dev/xvda or /dev/sda1 are reserved for the root device. For example, Amazon Linux 1 and 2 use /dev/xvda. Other distributions, such as Ubuntu 14, 16, 18, CentOS 7, and RHEL 7.5, use /dev/sda1.
The stable kernel now loads and your instance reboots."
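If you prefer the AWS CLI for the detach/attach steps described above, the sequence looks roughly like this (a sketch; the volume ID, instance ID, and device name are placeholders):
# Detach the repaired root volume from the stopped rescue instance
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
# Attach it to the impaired instance as the root device, then start the instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0123456789abcdef0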
https://repost.aws/knowledge-center/revert-stable-kernel-ec2-reboot
How can I set the number or size of files when I run a CTAS query in Athena?
"When I run a CREATE TABLE AS SELECT (CTAS) query in Amazon Athena, I want to define the number of files or the amount of data per file."
"When I run a CREATE TABLE AS SELECT (CTAS) query in Amazon Athena, I want to define the number of files or the amount of data per file.ResolutionUse bucketing to set the file size or number of files in a CTAS query.Note: The following steps use the Global Historical Climatology Network Daily public dataset (s3://noaa-ghcn-pds/csv.gz/) to illustrate the solution. For more information about this dataset, see Visualize over 200 years of global climate data using Amazon Athena and Amazon QuickSight. These steps show how to examine your dataset, create the environment, and then modify the dataset:Modify the number of files in the Amazon Simple Storage Service (Amazon S3) dataset.Set the approximate size of each file.Convert the data format and set the approximate file size.Examine the datasetRun the following AWS Command Line Interface (AWS CLI) to verify the number of files and the size of the dataset:Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.aws s3 ls s3://noaa-ghcn-pds/csv.gz/ --summarize --recursive --human-readableThe output looks similar to the following:2019-11-30 01:58:05 3.3 KiB csv.gz/1763.csv.gz2019-11-30 01:58:06 3.2 KiB csv.gz/1764.csv.gz2019-11-30 01:58:06 3.3 KiB csv.gz/1765.csv.gz2019-11-30 01:58:07 3.3 KiB csv.gz/1766.csv.gz...2019-11-30 02:05:43 199.7 MiB csv.gz/2016.csv.gz2019-11-30 02:05:50 197.7 MiB csv.gz/2017.csv.gz2019-11-30 02:05:54 197.0 MiB csv.gz/2018.csv.gz2019-11-30 02:05:57 168.8 MiB csv.gz/2019.csv.gzTotal Objects: 257Total Size: 15.4 GiBCreate the environment1.    Run a statement similar to the following to create a table:CREATE EXTERNAL TABLE historic_climate_gz( id string, yearmonthday int, element string, temperature int, m_flag string, q_flag string, s_flag string, obs_time int)ROW FORMAT DELIMITED FIELDS TERMINATED BY ','STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'LOCATION 's3://noaa-ghcn-pds/csv.gz/'2.    Run the following command to test the table:SELECT * FROM historic_climate_gz LIMIT 10The output shows ten lines from the dataset. After the environment is created, use one or more of the following methods to modify the dataset when you run CTAS queries.Modify the number of files in the datasetIt's a best practice to bucket data by a column that has high cardinality and evenly distributed values. For more information, see Bucketing vs Partitioning. In the following example, we use the yearmonthday field.1.    To convert the dataset into 20 files, run a statement similar to the following:CREATE TABLE "historic_climate_gz_20_files"WITH ( external_location = 's3://awsexamplebucket/historic_climate_gz_20_files/', format = 'TEXTFILE', bucket_count=20, bucketed_by = ARRAY['yearmonthday'] ) ASSELECT * FROM historic_climate_gzReplace the following values in the query:external_location: Amazon S3 location where Athena saves your CTAS queryformat: format that you want for the output (such as ORC, PARQUET, AVRO, JSON, or TEXTFILE)bucket_count: number of files that you want (for example, 20)bucketed_by: field for hashing and saving the data in the bucket (for example, yearmonthday)2.    Run the following command to confirm that the bucket contains the desired number of files:aws s3 ls s3://awsexamplebucket/historic_climate_gz_20_files/ --summarize --recursive --human-readableTotal Objects: 20Total Size: 15.6 GibSet the approximate size of each file1.    
1.    Determine how many files you need to achieve the desired file size. For example, to split the 15.4 GB dataset into 2 GB files, you need 8 files (15.4 / 2 = 7.7, rounded up to 8).
2.    Run a statement similar to the following:
CREATE TABLE "historic_climate_gz_2GB_files"
WITH ( external_location = 's3://awsexamplebucket/historic_climate_gz_2GB_file/', format = 'TEXTFILE', bucket_count=8, bucketed_by = ARRAY['yearmonthday']) AS
SELECT * FROM historic_climate_gz
Replace the following values in the query:
- external_location: Amazon S3 location where Athena saves your CTAS query
- format: must be the same format as the source data (such as ORC, PARQUET, AVRO, JSON, or TEXTFILE)
- bucket_count: number of files that you want (for example, 8)
- bucketed_by: field for hashing and saving the data in the bucket. Choose a field with high cardinality.
3.    Run the following command to confirm that the dataset contains the desired number of files:
aws s3 ls s3://awsexamplebucket/historic_climate_gz_2GB_file/ --summarize --recursive --human-readable
The output looks similar to the following:
2019-09-03 10:59:20 1.7 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00000.gz
2019-09-03 10:59:20 2.0 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00001.gz
2019-09-03 10:59:20 2.0 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00002.gz
2019-09-03 10:59:19 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00003.gz
2019-09-03 10:59:17 1.7 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00004.gz
2019-09-03 10:59:21 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00005.gz
2019-09-03 10:59:18 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00006.gz
2019-09-03 10:59:17 1.9 GiB historic_climate_gz_2GB_file/20190903_085819_00005_bzbtg_bucket-00007.gz
Total Objects: 8
Total Size: 15.0 GiB
Convert the data format and set the approximate file size
1.    Run a statement similar to the following to convert the data to a different format:
CREATE TABLE "historic_climate_parquet"
WITH ( external_location = 's3://awsexamplebucket/historic_climate_parquet/', format = 'PARQUET') AS
SELECT * FROM historic_climate_gz
Replace the following values in the query:
- external_location: Amazon S3 location where Athena saves your CTAS query
- format: format that you want to convert to (ORC, PARQUET, AVRO, JSON, or TEXTFILE)
2.    Run the following command to confirm the size of the dataset:
aws s3 ls s3://awsexamplebucket/historic_climate_parquet/ --summarize --recursive --human-readable
The output looks similar to the following:
Total Objects: 30
Total Size: 9.8 GiB
3.    Determine how many files you need to achieve the desired file size. For example, if you want 500 MB files and the dataset is 9.8 GB, then you need 20 files (9,800 / 500 = 19.6, rounded up to 20).
4.    To convert the dataset into 500 MB files, run a statement similar to the following:
CREATE TABLE "historic_climate_parquet_500mb"
WITH ( external_location = 's3://awsexamplebucket/historic_climate_parquet_500mb/', format = 'PARQUET', bucket_count=20, bucketed_by = ARRAY['yearmonthday'] ) AS
SELECT * FROM historic_climate_parquet
Replace the following values in the query:
- external_location: Amazon S3 location where Athena saves your CTAS query
- bucket_count: number of files that you want (for example, 20)
- bucketed_by: field for hashing and saving the data in the bucket. Choose a field with high cardinality.
5.    Run the following command to confirm that the dataset contains the desired number of files:
aws s3 ls s3://awsexamplebucket/historic_climate_parquet_500mb/ --summarize --recursive --human-readable
The output looks similar to the following:
2019-09-03 12:01:45 333.9 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00000
2019-09-03 12:01:01 666.7 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00001
2019-09-03 12:01:00 665.6 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00002
2019-09-03 12:01:06 666.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00003
2019-09-03 12:00:59 667.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00004
2019-09-03 12:01:27 666.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00005
2019-09-03 12:01:10 666.5 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00006
2019-09-03 12:01:12 668.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00007
2019-09-03 12:01:03 666.8 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00008
2019-09-03 12:01:10 646.4 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00009
2019-09-03 12:01:35 639.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00010
2019-09-03 12:00:52 529.5 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00011
2019-09-03 12:01:29 334.2 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00012
2019-09-03 12:01:32 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00013
2019-09-03 12:01:34 332.2 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00014
2019-09-03 12:01:44 333.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00015
2019-09-03 12:01:51 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00016
2019-09-03 12:01:39 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00017
2019-09-03 12:01:47 333.0 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00018
2019-09-03 12:01:49 332.3 MiB historic_climate_parquet_500mb/20190903_095742_00001_uipqt_bucket-00019
Total Objects: 20
Total Size: 9.9 GiB
Note: The INSERT INTO statement isn't supported on bucketed tables. For more information, see Bucketed tables not supported.
Related information
Examples of CTAS queries
Considerations and limitations for CTAS queries"
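If you want to run these CTAS statements outside the Athena console, the same query can be submitted with the AWS CLI (a sketch; the database, workgroup, and bucket names are placeholders):
aws athena start-query-execution \
  --query-string "CREATE TABLE historic_climate_parquet_500mb WITH (external_location='s3://awsexamplebucket/historic_climate_parquet_500mb/', format='PARQUET', bucket_count=20, bucketed_by=ARRAY['yearmonthday']) AS SELECT * FROM historic_climate_parquet" \
  --query-execution-context Database=default \
  --work-group primary \
  --result-configuration OutputLocation=s3://awsexamplebucket/athena-results/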
https://repost.aws/knowledge-center/set-file-number-size-ctas-athena
What support is available for AWS Marketplace rule groups for AWS WAF?
I need support related to the AWS Marketplace AWS WAF rules. Whom can I contact?
"I need support related to the AWS Marketplace AWS WAF rules. Whom can I contact?Short descriptionAWS WAF provides AWS Marketplace rule groups to help you protect your resources. AWS Marketplace rule groups are collections of predefined, ready-to-use rules that are written and updated by AWS Marketplace sellers. For issues related to the AWS WAF Marketplace rules regarding false positives or false negatives, contact the Marketplace seller for further troubleshooting.ResolutionAWS Marketplace managed rule groups are available by subscription through the AWS Marketplace. After you subscribe to an AWS Marketplace managed rule group, you can use it in AWS WAF. To use an AWS Marketplace rule group in an AWS Firewall Manager AWS WAF policy, each account in your organization must subscribe to it.TroubleshootingIf an AWS Marketplace rule group is blocking legitimate traffic, then follow these steps:Exclude specific rules that are blocking legitimate trafficYou can identify the rules that are blocking the requests using either the AWS WAF sampled requests or logging for a web ACL. You can identify the rules in a rule group by viewing the Rule inside rule group in the sampled request or the ruleGroupId field in the web ACL log. For more information, see Access to the rules in an AWS Marketplace rule group.Identify the rule using this pattern:<SellerName>#<RuleGroupName>#<RuleName>Change the action for the AWS Marketplace rule groupIf excluding specific rules doesn't solve the problem, then override a rule group's action from No override to Override to count. Doing this allows the web request to pass through, regardless of the individual rule actions within the rule group. Doing this also provides you with Amazon CloudWatch metrics for the rule group.If the issue continues after setting the AWS Marketplace rule group action to Override to count, then contact the rule group provider's customer support team.Note: For problems with a rule group that is managed by an AWS Marketplace seller, you must contact the provider’s customer support team for further troubleshooting.AWS WAF Marketplace seller's contact informationCloudbric Corp.For issues related to Cloudbric Corp. managed rule groups, see the Cloudbric help center to submit a request.Cyber Security Cloud Inc.For issues related to Cyber Security Cloud managed rules, see Contact Cyber Security Cloud support.F5 (DevCentral)F5 rules for AWS WAF are supported on DevCentral. For issues related to F5 managed rules, submit a question with the tag F5 rules for AWS WAF to DevCentral's technical forum.For more information about F5 rules for AWS WAF, see Overview of F5 rule groups for AWS WAF on the AskF5 website.FortinetFor issues related to Fortinet managed rule groups, send an email to the Fortinet support team.For more information about deploying Fortinet rules for AWS WAF, see Technical tip: deploying Fortinet AWS WAF partner rule groups on the Fortinet community website.GeoGuardFor issues related to GeoGuard managed rule groups, send an email to GeoGuard support.ImpervaFor issues related to an Imperva managed rule groups, send an email to Imperva support.For more information about Imperva rules for AWS WAF, see Get started with Imperva managed rules on AWS WAF on the Imperva documentation website.ThreatSTOPFor issues related to ThreatSTOP managed rule groups, see Contact ThreatSTOP.Related informationAWS Marketplace managed rule groupsFollow"
https://repost.aws/knowledge-center/waf-marketplace-support
Why are some of my AWS Glue tables missing in Athena?
Some of the tables that I see in the AWS Glue console aren't visible in the Amazon Athena console.
"Some of the tables that I see in the AWS Glue console aren't visible in the Amazon Athena console.ResolutionYou might see more tables in the AWS Glue console than in the Athena console for the following reasons:Different data sourcesIf you created tables that point to different data sources, then the consoles show tables from different sets of data. The Athena console shows only those tables that point to Amazon Simple Storage Service (Amazon S3) paths. AWS Glue lists tables that point to different data sources, such as Amazon Relational Database Service (Amazon RDS) DB instances and Amazon DynamoDB tables. For more information on using Athena to query data from different sources, see Connecting to data sources and Using Amazon Athena Federated Query.Unsupported table formatsYour tables doesn't appear in the Athena console if you created them in formats that aren't supported by Athena, such as XML. These tables appear in the AWS Glue Data Catalog, but not in the Athena console. For a list of supported formats, see Supported SerDes and data formats.Unavailable resources from AWS Lake FormationResources in Lake Formation aren't automatically shared with Athena or granted permissions. To make sure that resources are accessible between these services, create policies that allow your resources permission to Athena. For managing resource policies at scale within a single account, use tag-based asset control. For a detailed guide of this process, see Easily manage your data lake at scale using AWS Lake Formation Tag-based access control.For managing resource policies across accounts, you can use tag-based asset control or named resources. For a detailed guide of both options, see Securely share your data across AWS accounts using AWS Lake Formation.Related informationWhat is Amazon Athena?Adding classifiers to a crawler in AWS GlueFollow"
https://repost.aws/knowledge-center/athena-glue-tables-not-visible
Why can't my EC2 instances access the internet using a NAT gateway?
"I created a network address translation (NAT) gateway so that my Amazon Elastic Compute Cloud (Amazon EC2) instances can connect to the internet. However, I can't access the internet from my EC2 instances. Why can't my EC2 instances access the internet using a NAT gateway?"
"I created a network address translation (NAT) gateway so that my Amazon Elastic Compute Cloud (Amazon EC2) instances can connect to the internet. However, I can't access the internet from my EC2 instances. Why can't my EC2 instances access the internet using a NAT gateway?ResolutionInternet connectivity issues with NAT gateways are typically caused by subnet misconfigurations or missing routes. To troubleshoot issues connecting to the internet with your NAT gateway, verify the following:The subnet where the NAT gateway is launched is associated with a route table that has a default route to an internet gateway.The subnet where your EC2 instances are launched is associated with a route table that has a default route to the NAT gateway.Outbound internet traffic is allowed in both the security groups and the network access control list (ACL) that is associated with your source instance.The network ACL associated with the subnet where the NAT gateway is launched allows inbound traffic from the EC2 instances and the internet hosts. Also verify that the network ACL allows outbound traffic to the internet hosts and to the EC2 instances. For example, to allow your EC2 instances to access an HTTPS website, the network ACL associated with the NAT gateway subnet must have the rules as listed in this table.Inbound rules:SourceProtocolPort RangeAllow / DenyVPC CIDRTCP443ALLOWInternet IPTCP1024-65535ALLOWOutbound rules:DestinationProtocolPort RangeAllow / DenyInternet IPTCP443ALLOWVPC CIDRTCP1024-65535ALLOWRelated informationHow do I set up a NAT gateway for a private subnet in Amazon VPC?Work with NAT gatewaysAccess the internet from a private subnetHow do I use the VPC Reachability Analyzer to troubleshoot connectivity issues with an Amazon VPC resource?Follow"
https://repost.aws/knowledge-center/ec2-access-internet-with-NAT-gateway
How do I assign a new parameter group to my existing Amazon ElastiCache cluster without restarting the cluster?
How do I assign a new parameter group to my existing Amazon ElastiCache cluster without restarting the cluster?
"How do I assign a new parameter group to my existing Amazon ElastiCache cluster without restarting the cluster?Short descriptionA parameter group acts as a container for engine configuration values. You can apply a parameter group to one or more cache clusters. ElastiCache uses this parameter group to control the runtime properties of your nodes and clusters. You can modify the values in a parameter group for an existing cluster without restarting the cluster.ResolutionCreate a new parameter group. For detailed steps, see Creating a parameter group.Modify the new parameter group and the parameters. For more information, see Modifying a parameter group.Modify the cluster to use the new parameter group. For detailed steps, see Modifying an ElastiCache cluster.Related informationParameter managementRedis-specific parametersMemcached specific parametersFollow"
https://repost.aws/knowledge-center/elasticache-assign-parameter-group
How can I download the full SQL text from Performance Insights for my Aurora PostgreSQL-Compatible instance?
I want to download the full SQL text from Performance Insights for my Amazon Aurora PostgreSQL-Compatible Edition DB instance.
"I want to download the full SQL text from Performance Insights for my Amazon Aurora PostgreSQL-Compatible Edition DB instance.Short descriptionAurora PostgreSQL-Compatible handles text in Performance Insights differently from other engine types, like Aurora MySQL-Compatible. By default, each row under the Top SQL tab on the Performance Insights dashboard shows 500 bytes of SQL text for each SQL statement. When a SQL statement exceeds 500 bytes, you can view more text in the SQL text section that's below the Top SQL table. The maximum length for the text displayed in the SQL text section is 4 KB. If the SQL statement exceeds 4096 characters, then the truncated version is displayed on the SQL text section. But, you can download the full SQL text from the SQL text section of the TOP SQL tab.The track_activity_query_size DB parameter specifies the amount of memory that's reserved to store the text of the currently running command for each active session. This determines the maximum query length to display in the pg_stat_activity query column. To set the text limit size for SQL statements and store that limit on the database, modify the track_activity_query_size parameter. You can modify this parameter at the instance or cluster parameter group level. See the minimum and maximum allowed values for the text limit size for SQL statements:Aurora_Postgres_VersionMinimumMaximum10.x10010240011.x10010240012.x10010240013.x100104857614.x1001048576ResolutionYou can download the full SQL text from Performance Insights using the Amazon Relational Database Service (Amazon RDS) console. If the full SQL text size exceeds the value of track_activity_query_size, then increase the value of track_activity_query_size before you download the SQL text. The track_activity_query_size parameter is static, so you must reboot the cluster after you've changed its value.For example, the SQL text size might be set to 1 MB, and track_activity_query_size is set to the default value of 4096 bytes. In this case, the full SQL can't be downloaded. When the engine runs the SQL text to Performance Insights, the Amazon RDS console displays only the first 4 KB. Increase the value of track_activity_query_size to 1 MB or larger, and then download the full query. In this case, viewing and downloading the SQL text return a different number of bytes.In the Performance Insights dashboard, you can view or download the full SQL text by following these steps:1.    Open the Amazon RDS console.2.    In the navigation pane, choose Performance Insights.3.    Choose the DB instance that you want to view Performance Insights for.4.    From the Top SQL tab, choose the SQL statement that you want to view.5.    Under the SQL text tab, you can view up to 4,096 bytes for each SQL statement. If the SQL statement falls within this limit, then choose Copy to copy the SQL.6.    If the SQL statement is larger than 4,096, then it's truncated in this view. Choose Download to download the full SQL.Note: Be sure that the track_activity_query_size parameter is set to a larger value than the SQL statement that you want to download.Related informationViewing Aurora PostgreSQL DB cluster and DB parametersRebooting an Aurora cluster (Aurora PostgreSQL and Aurora MySQL before version 2.10)Follow"
https://repost.aws/knowledge-center/aurora-postgresql-performance-insights
How do I troubleshoot the error "READONLY You can't write against a read only replica" after failover of my Redis (cluster mode disabled) cluster?
Why am I receiving the "READONLY You can't write against a read only replica" error in my Amazon ElastiCache for Redis (cluster mode disabled) cluster after failover?
"Why am I receiving the "READONLY You can't write against a read only replica" error in my Amazon ElastiCache for Redis (cluster mode disabled) cluster after failover?Short descriptionIf the primary node failed over to the replica nodes in your Amazon ElastiCache cluster, then the replica takes the role of primary node to serve incoming requests. However, in the following scenarios you receive the READONLY error:You're using a node endpoint instead of primary endpoint of the cluster in your application.-or-DNS caching in the application node routes traffic to the old primary node.Resolution1.    Verify that the cluster is cluster mode disabled. To do this:Open the ElastiCache console, and then select Redis clusters. Verify that the Cluster Mode for the cluster is off.Note: If the Cluster Mode is on, see I'm using ElastiCache or Redis. Why are my Redis client read requests always read from or redirected to the primary node of a shard?2.    Verify that you're sending the write command to the primary endpoint instead of the node endpoint. To validate that the write command is going to the primary node, do one of the following:Option 1Connect to the Redis cluster using the redis-cli and then run the get key command for the updated key. Then, verify the command output to verify that the key value updated after the last command.For example, the following command sets the key1 value to hello:set key1 "hello" OKTo verify that the key set to correctly, run the get command:get key1"hello"Option 2Connect to the Redis cluster using the redis-cli and then run command the MONITOR command. This lists all commands coming to the cluster. Keep in mind that running a single MONITOR client might reduce throughput by more than 50%.3.    To avoid DNS caching issues, turn on retry logic in your application following the guidelines for the Redis client library that your application uses.Follow"
https://repost.aws/knowledge-center/elasticache-correct-readonly-error
How do I upgrade my Elastic Beanstalk environment platform from a deprecated or retired version to the latest version?
"I received a notification that my AWS Elastic Beanstalk platform is a deprecated version. Or, I received a notification that my platform version is marked for retirement."
"I received a notification that my AWS Elastic Beanstalk platform is a deprecated version. Or, I received a notification that my platform version is marked for retirement.Short descriptionDeprecated platform versions are the old platform versions or branches that are available to customers, but aren't recommended by AWS. Deprecated versions might have missing security updates, hot fixes, or the latest versions of other components, such as the web server.Elastic Beanstalk marks platform branches as retired when a component of a supported platform branch is marked End of Life (EOL) by its supplier. Components of a platform branch might be the operating system, runtime, application server, or web server.When a platform branch is marked as retired, it's no longer available to new Elastic Beanstalk customers for deployments to new environments. There's a 90-day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches. When a platform version is mark deprecated, it's available for customers to use until it's marked for retirement.ResolutionMigrate from a retired platformTo upgrade to the latest platform, perform a blue/green deployment. Blue/green deployments deploy a separate environment with the latest platform branch and version. Then, swap the CNAMEs of the two environments to redirect traffic from the old environment to the new environment.Note: Both environments must be in same application and in a working state to swap CNAMEs.For more information, see Blue/Green deployments with Elastic Beanstalk.To check for the retired platform branches, see Elastic Beanstalk platform versions scheduled for retirement.Migrate from a deprecated platformA platform version might be marked for deprecation due to kernel changes, web server changes, security fixes, hot fixes, and so on. These changes are categorized as follows:Patch: Patch version updates provide bug fixes and performance improvements. Patch updates might include minor configuration changes to the on-instance software, scripts, and configuration options.Minor: Minor version updates provide support for new Elastic Beanstalk features.Major: Major version updates provide different kernels, web servers, application servers, and so on.Based on the changes being made, use one of the following migration methods:Minor or patch updatesWith minor or patch changes, your platform branch remains the same. For instructions, see Method 1 - Update your environment's platform version.You can also have Elastic Beanstalk manage platform updates for you. For more information, see Managed platform updates.Major updatesYour platform branch changes in major updates. When you switch platform branches, you must perform a blue/green deployment. You must also use blue/green deployments when migrating from Amazon Linux 1 to Amazon Linux 2 or from a legacy platform to a current platform. For more information, see Method 2 - Perform a blue/green deployment.Related informationUpdating your Elastic Beanstalk environment's platform versionFollow"
https://repost.aws/knowledge-center/elastic-beanstalk-upgrade-platform
How do I use AWS CloudTrail to track API calls to my Amazon EC2 instances?
"I want to track API calls that run, stop, start, and terminate my Amazon Elastic Compute Cloud (Amazon EC2) instances. How do I search for API calls to my Amazon EC2 instances using AWS CloudTrail?"
"I want to track API calls that run, stop, start, and terminate my Amazon Elastic Compute Cloud (Amazon EC2) instances. How do I search for API calls to my Amazon EC2 instances using AWS CloudTrail?Short descriptionAWS CloudTrail allows you to identify and track four types of API calls (event types) made to your AWS account:RunInstancesStopInstancesStartInstancesTerminateInstancesTo review these types of API calls after they've been made to your account, you can use any of the following methods.Note: You can view event history for your account up to the last 90 days.ResolutionTo track API calls using CloudTrail event history1.    Open the CloudTrail console.2.    Choose Event history.3.    For Filter, select Event name from the dropdown list.4.    For Enter event name, enter the event type that you want to search for. Then, choose the event type.5.    For Time range, enter the desired time range that you want to track the event type for.6.    Choose Apply.For more information, see Viewing events with CloudTrail event history and Viewing Cloudtrail events in the CloudTrail console.To track API calls using Amazon Athena queriesFollow the instructions in How do I automatically create tables in Amazon Athena to search through AWS CloudTrail logs?The following are example queries for the RunInstances API call. You can use similar queries for any of the supported event types.Important: Replace cloudtrail-logs with your Athena table name before running any of the following query examples.Example query to return all available event information for the RunInstances API callSELECT *FROM cloudtrail-logsWHERE eventName = 'RunInstances'Example query to return filtered event information for the RunInstances API callSELECT userIdentity.username, eventTime, eventNameFROM cloudtrail-logsWHERE eventName = 'RunInstances'Example query to return event information for the APIs that end with the string "Instances" from a point in time to the current dateImportant: Replace '2021-07-01T00:00:01Z' with the point in time you'd like to return event information from.SELECT userIdentity.username, eventTime, eventNameFROM cloudtrail-logsWHERE (eventName LIKE '%Instances') AND eventTime > '2021-07-01T00:00:01Z'To track API calls using archived Amazon CloudWatch Logs in Amazon Simple Storage Service (Amazon S3)Important: To log events to an Amazon S3 bucket, you must first create a CloudWatch trail.1.    Access your CloudTrail log files by following the instructions in Finding your CloudTrail log files.2.    Download your log files by following the instructions in Downloading your CloudTrail log files.3.    Search through the logs for the event types that you want to track using jq or another JSON command line processor.Example jq procedure for searching CloudWatch logs downloaded from Amazon S3 for specific event types1.    Open a Bash terminal. Then, create the following directory to store the log files:$ mkdir cloudtrail-logs4.    Navigate to the new directory. Then, download the CloudTrail logs by running the following command:Important: Replace the example my_cloudtrail_bucket with your Amazon S3 bucket.$ cd cloudtrail-logs$ aws s3 cp s3://my_cloudtrail_bucket/AWSLogs/012345678901/CloudTrail/eu-west-1/2019/08/07 ./ --recursive5.    Decompress the log files by running the following gzip command:Important: Replace * with the file name that you want to decompress.$ gzip -d *6.    
Run a jq query for the event types that you want to search for.Example jq query to return all available event information for the RunInstances API callcat * | jq '.Records[] | select(.eventName=="RunInstances")'Example jq query to return all available event information for the StopInstances and TerminateInstances API callscat * | jq '.Records[] | select(.eventName=="StopInstances" or .eventName=="TerminateInstances" )'Related informationHow can I use CloudTrail to review what API calls and actions have occurred in my AWS account?Creating metrics from log events using filtersAWS Config console now displays API events associated with configuration changesFollow"
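A possible AWS CLI variant of the event history method above, for cases where you prefer not to use the console: the lookup-events command filters the last 90 days of management events by event name. The time range and output fields below are illustrative placeholders that you would replace with your own values.
$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=RunInstances \
    --start-time 2023-07-01T00:00:00Z --end-time 2023-07-02T00:00:00Z \
    --query 'Events[].{Time:EventTime,Name:EventName,User:Username}' \
    --output table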
https://repost.aws/knowledge-center/cloudtrail-search-api-calls
How do I verify an email address or domain in Amazon SES?
I want to verify an email address or domain that I'm using with Amazon Simple Email Service (Amazon SES). How can I do that?
"I want to verify an email address or domain that I'm using with Amazon Simple Email Service (Amazon SES). How can I do that?ResolutionTo verify an email address, see Verifying an email address identity.To verify a domain, see Verifying a DKIM domain identity with your DNS provider for instructions.Note: If you're using Amazon Route 53 as your DNS provider, then Amazon SES can automatically add your domain or DKIM verification CNAME records to your DNS records. If you aren't using Route 53, you must work with your DNS provider to update your DNS records.Related informationVerified identities in Amazon SESFollow"
https://repost.aws/knowledge-center/ses-verify-email-domain
How do I view objects that failed replication from one Amazon S3 bucket to another?
I want to retrieve a list of objects that failed replication when setting up replication from one Amazon Simple Storage Service (Amazon S3) bucket to another bucket.
"I want to retrieve a list of objects that failed replication when setting up replication from one Amazon Simple Storage Service (Amazon S3) bucket to another bucket.Short descriptionYou can turn on S3 Replication Time Control (S3 RTC) to set up event notifications for eligible objects that failed replication. You can also use S3 RTC to set up notifications for eligible objects that take longer than 15 minutes to replicate. Additionally, you can get a list of objects that failed replication in one of the following ways:Reviewing the Amazon S3 inventory reportRunning the HeadObject API callResolutionAmazon S3 inventory reportAmazon S3 inventory reports list your objects and their metadata on a daily or weekly basis. The replication status of an object can be PENDING, COMPLETED, FAILED, or REPLICA.To find objects that failed replication, filter a recent report for objects with the replication status of FAILED. Then, you can initiate a manual copy of the objects to the destination bucket. You can also re-upload the objects to the source bucket (after rectifying the permissions) to initiate replication.You can also use Amazon Athena to query the inventory report for replication statuses.HeadObject API callFor a list of the objects in the source bucket that are set for replication, you can run the HeadObject API call on the objects. HeadObject returns the PENDING, COMPLETED, or FAILED replication status of an object. In a response to a HeadObject API call, the replication status is found in the x-amz-replication-status element.Note: To run HeadObject, you must have read access to the object that you're requesting. A HEAD request has the same options as a GET request, but without performing a GET.After HeadObject returns the objects with a FAILED replication status, you can initiate a manual copy of the objects to the destination bucket. You can also re-upload the objects to the source bucket (after rectifying the permissions) to initiate replication.Important: If you manually copy objects into the destination bucket, then the Amazon S3 inventory report and HeadObject API calls return a FAILED replication status. This replication status is for the objects in the source bucket. To change the replication status of an object and initiate replication, you must re-upload the object to the source bucket. If the new replication is successful, then the object's replication status changes to COMPLETED. If you must manually copy objects into the destination bucket, then be sure to note the date of the manual copy. Then, filter objects with a FAILED replication status by the last modified date. Doing this lets you to identify which objects are or aren't copied to the destination bucket.Follow"
https://repost.aws/knowledge-center/s3-list-objects-failed-replication
How do I upload my Windows logs to CloudWatch?
I want to upload my Windows logs to Amazon CloudWatch.
"I want to upload my Windows logs to Amazon CloudWatch.ResolutionUpload your Windows logs to CloudWatch with AWS Systems Manager and Amazon CloudWatch agent. Then, store the configuration file in the SSM Parameter Store, a capability of AWS Systems Manager.Create IAM rolesCreate server and administrator AWS Identity and Access Management (IAM) roles to use with the CloudWatch agent. The server role allows instances to upload metrics and logs to CloudWatch. The administrator role creates and stores the CloudWatch configuration template in the Systems Manager Parameter Store.Note: Be sure to follow both IAM role creation procedures to limit access to the admin role.Attach the server roleAttach the server role to any Elastic Compute Cloud (Amazon EC2) instances that you want to upload your logs for.Attach the administrator roleAttach the administrator role to your administrator configuration instance.Install the CloudWatch agent packageDownload and install the CloudWatch agent package with AWS Systems Manager Run Command. In the Targets area, choose your server instances and your administrator instance.Note: Before you install the CloudWatch agent, be sure to update or install SSM agent on the instance.Create the CloudWatch agent configuration fileCreate the CloudWatch agent configuration file on your administrator instance using the configuration wizard. Store the file in the Parameter Store. Record the Parameter Store name that you choose. For an example configuration with logs, see CloudWatch agent configuration file: Logs section.To create your configuration file, complete the following steps:Run PowerShell as an administrator.To start the configuration wizard, open Command Prompt. Then, run the .exe file that's located at C:\Program Files\Amazon\AmazonCloudWatchAgent\amazon-cloudwatch-agent-config-wizard.exe.To create the configuration file, answer the following questions in the configuration wizard:On which OS are you planning to use the agent?Select Windows.Are you using EC2 or On-Premises hosts?Select Ec2.Do you have any existing CloudWatch Log Agent configuration file to import for migration?Select No.Do you want to monitor any host metrics?If you want to push only logs, then select No.Do you want to monitor any customized log files?If you want to push only default Windows Event Logs, then select No. If you also want to push custom logs, then select Yes.Do you want to monitor any Windows event log?If you want to push Windows Event Logs, then select Yes.When the configuration wizard prompts you to store your file in Parameter Store, select Yes to use the parameter in SSM.Apply your configurationTo apply the configuration to the server instances and start uploading logs, start the CloudWatch agent using Systems Manager Run Command.For Targets, choose your server instances.For Optional Configuration Location, enter the Parameter Store name that you chose in the wizard.Related informationCollect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agentQuick Start: Install and configure the CloudWatch Logs agent on a running EC2 Linux instanceFollow"
https://repost.aws/knowledge-center/cloudwatch-upload-windows-logs
Why can't I resolve service domain names for an interface VPC endpoint?
"I'm using an interface Amazon Virtual Private Cloud (Amazon VPC) endpoint for an AWS service. I want to use the default service domain name (for example, ec2.us-east-1.amazonaws.com) to access the service through the VPC interface endpoint. Why can't I resolve service domain names for an interface VPC endpoint?"
"I'm using an interface Amazon Virtual Private Cloud (Amazon VPC) endpoint for an AWS service. I want to use the default service domain name (for example, ec2.us-east-1.amazonaws.com) to access the service through the VPC interface endpoint. Why can't I resolve service domain names for an interface VPC endpoint?ResolutionTo resolve service domain names (for example, ec2.us-east-2-amazonaws.com) for an interface VPC endpoint, keep the following in mind:To resolve service domain names to the interface VPC endpoint's private IPs, you must send the DNS queries to the Amazon-provided DNS of the VPC where the interface endpoint is created. The Amazon-provided DNS is the base of the VPC CIDR plus two.On the VPC where you created the interface VPC endpoint, verify that both DNS attributes of the VPC, DNS Hostnames and DNS Resolution, are turned on.When using interface VPC endpoints to access available AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2), you can turn on private DNS names on the endpoint. When you have this parameter turned on, queries for the service domain name resolve to private IP addresses. These private IP addresses are the IP addresses of the elastic network interfaces created in each of the associated subnets for a given interface endpoint.With private DNS names turned on, you can run AWS API calls using the service domain name (for example, ec2.us-east-1.amazonaws.com) over AWS PrivateLink.For the interface VPC endpoint, verify that private DNS names is turned on. If private DNS names isn't turned on, the service domain name or endpoint domain name resolves to regional public IPs. For steps to turn on private DNS names, see Modify an interface endpoint.You can designate custom domain name servers in the DHCP Option Set for the VPC. When using custom domain name servers, the DNS queries for the service domain names are sent to the custom domain name servers for resolution. The custom domain name servers might be located within the VPC or outside of the VPC.Custom domain name servers must forward the service domain name to the Amazon-provided DNS server of the VPC where the interface endpoints are created.If you're trying to access an interface endpoint from outside of the VPC (cross-VPC or on-premises), make sure that you have the DNS architecture in place. The DNS architecture should forward the DNS queries for the service domain name to the Amazon-provided DNS server of the VPC where the interface endpoints are created.You can use tools such as nslookup or dig against the service domain name from the source network to confirm the IPs that it's resolving to.Or, you can use regional endpoint domain names on your SDK to execute API calls. The regional endpoint domain names of the interface endpoints are resolvable from any network. The following is an example for performing a describe call using the AWS Command Line Interface (AWS CLI):$aws ec2 describe-instances --endpoint-url https://vpce-aaaabbbbcccc-dddd.vpce-svc-12345678.us-east-1.vpce.amazonaws.comIf you created an Amazon Route 53 private hosted zone for the service domain name, make sure that you attach the correct source VPC to the hosted zone. 
For more information, see How can I troubleshoot DNS resolution issues with my Route 53 private hosted zone?Note: You must establish connectivity from the network to the VPC using VPC peering, AWS Transit Gateway, and so on, for routing DNS queries.Related informationHow do I configure a Route 53 Resolver inbound endpoint to resolve DNS records in my private hosted zone from my remote network?How do I configure a Route 53 Resolver outbound endpoint to resolve DNS records hosted on a remote network from resources in my VPC?Follow"
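As a quick check from an instance inside the VPC (or from the remote network), you can confirm what the names resolve to; private IP addresses from your endpoint subnets indicate that the interface endpoint is being used, while public IP addresses indicate that private DNS isn't in effect on that resolution path. The endpoint-specific name below reuses the example value from above.
$ dig +short ec2.us-east-1.amazonaws.com
$ nslookup vpce-aaaabbbbcccc-dddd.vpce-svc-12345678.us-east-1.vpce.amazonaws.com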
https://repost.aws/knowledge-center/vpc-interface-configure-dns
How do I configure logging levels manually for specific resources in AWS IoT Core?
I want to configure resource-specific logging manually for my AWS IoT Core logs.
"I want to configure resource-specific logging manually for my AWS IoT Core logs.Short descriptionNote: This article relates only to V2 of AWS IoT Core logs.AWS IoT Core logs allows you to set resource-specific logging levels for:Clients registered as thingsClients not registered as thingsThis is done by creating a logging level for a specific target type and configuring its verbosity level. Target types include THING_GROUP, CLIENT_ID, SOURCE_IP, or PRINCIPAL_ID. It's a best practice to configure default logging to a lower verbosity level and configure resource-specific logging to a higher verbosity level.Log verbosity levels include DISABLED (lowest), ERROR, WARN, INFO, and DEBUG (highest).Important: Depending on your AWS IoT Core fleet size, turning on more verbose log levels can incur high costs and make troubleshooting more difficult. Turning on verbose logging also creates higher data traffic. INFO or DEBUG should only be used as a temporary measure while troubleshooting. After troubleshooting is complete, logging levels should be set back to a less verbose setting.ResolutionPrerequisiteMake sure that you have the AWS Command Line Interface (AWS CLI) installed locally with IoT admin permission credentials. The default AWS Region for AWS CLI must point towards the targeted AWS Region. You must have clients connected to and interacting with your AWS IoT Core endpoints, either as registered or non-registered IoT things.Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.Configure manual logging for clients registered as thingsYou can manage resource-specific logging for multiple things at a defined logging level, and then add or remove things from the thing group manually. Your devices and clients must be registered as IoT things in AWS IoT Core and must connect using the same client ID associated thing name. You can then use a static thing group with a target type of THING_GROUP to manage the thing group. If you configure a parent thing group within a hierarchy, then the configuration applies to the child thing groups of the hierarchy as well.Note: If you use static thing groups as a target type, then you must consider their quota limits. For more information, see AWS IoT Core thing group resource limits and quotas.1.    Create two static thing groups. You can do this using the AWS IoT console or using the create-thing-group command in the AWS CLI. In this example, the AWS CLI is used.aws iot create-thing-group --thing-group-name logging_level_infoaws iot create-thing-group --thing-group-name logging_level_debugNote: If you are using existing thing groups, then replace logging_level_info and logging_level_debug with the names of your thing groups.The output looks similar to the following message:{ "thingGroupName": "logging_level_info", "thingGroupArn": "arn:aws:iot:eu-west1-1:123456789012:thinggroup/logging_level_info", "thingGroupId": "58dd497e-97fc-47d2-8745-422bb21234AA"}{ "thingGroupName": "logging_level_debug", "thingGroupArn": "arn:aws:iot:eu-west-1:123456789012:thinggroup/logging_level_debug", "thingGroupId": "2a9dc698-9a40-4487-81ec-2cb4101234BB"}2.    
Run the SetV2LoggingLevel command to set the logging levels for the thing groups: Note: It can take up to 10 minutes for log level configuration changes to be reflected.aws iot set-v2-logging-level \ --log-target targetType=THING_GROUP,targetName=logging_level_info \ --log-level INFOaws iot set-v2-logging-level \--log-target targetType=THING_GROUP,targetName=logging_level_debug \--log-level DEBUGNote: Replace INFO and DEBUG with the log levels that you want to set for each thing group.3.    Run the following command to confirm that the logging levels are configured correctly:aws iot list-v2-logging-levelsThe output looks similar to the following message:{ "logTargetConfigurations": [ { "logTarget": { "targetType": "DEFAULT" }, "logLevel": "WARN" }, { "logTarget": { "targetType": "THING_GROUP", "targetName": "logging_level_debug" }, "logLevel": "DEBUG" }, { "logTarget": { "targetType": "THING_GROUP", "targetName": "logging_level_info" }, "logLevel": "INFO" } ]}4.    Run the AddThingToThingGroup command to add a thing to the appropriate things group:aws iot add-thing-to-thing-group \ --thing-name YourThingName1 \ --thing-group-name logging_level_infoNote: Replace YourThingName1 with the name of the thing that you are adding to the thing group.Configure manual logging for clients not registered as thingsIf you don't register your things to AWS IoT Core, you can still add resource-specific logging levels for multiple target types. These target types are client attributes and include CLIENT_ID, SOURCE_IP, or PRINCIPAL_ID. If your device is already registered as an AWS IoT Core thing, you can still use these client attributes to manage logging levels.1.    Run the SetV2LoggingLevel command to set the logging level for a specific client:aws iot set-v2-logging-level \ --log-target targetType=CLIENT_ID,targetName=YourClientId \ --log-level YourLogLevelNote: To use a different target type, replace CLIENT_ID with a supported value that is used by the targeted client, such as SOURCE_IP or PRINCIPAL_ID.2.    Run the following command to confirm the logging levels are configured correctly:aws iot list-v2-logging-levelsThe output looks similar to the following message:... { "logTarget": { "targetType": "CLIENT_ID", "targetName": "YourClientId" }, "logLevel": "YourLogLevel" }...Monitoring generated logsIt's a best practice to monitor your IoT logs for issues or problems. You can use either the Amazon CloudWatch Logs Console or the AWS CLI to monitor your AWS IoT Core logs.  For more information, see the "Monitoring log entries" section of How do I best manage the logging levels of my AWS IoT logs in AWS IoT Core?Related informationMonitoring AWS IoTHow do I configure the default logging settings for AWS IoT Core?How do I configure logging levels dynamically for specific resources in AWS IoT Core?Follow"
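When troubleshooting is finished, you can return to a quieter configuration. The following commands are one possible way to review the account-level default, remove a resource-specific level, and set the default back to WARN; the role ARN is a placeholder for a logging role that already exists in your account.
$ aws iot get-v2-logging-options
$ aws iot delete-v2-logging-level --target-type THING_GROUP --target-name logging_level_debug
$ aws iot set-v2-logging-options --default-log-level WARN --role-arn arn:aws:iam::123456789012:role/IoTLoggingRole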
https://repost.aws/knowledge-center/aws-iot-core-configure-manual-logging
How do I resolve the PHP fatal error that I receive when deploying an application on an Elastic Beanstalk PHP platform that connects to a Microsoft SQL Server database?
I receive a PHP fatal error when deploying an application on an AWS Elastic Beanstalk PHP platform that connects to a Microsoft SQL Server database.
"I receive a PHP fatal error when deploying an application on an AWS Elastic Beanstalk PHP platform that connects to a Microsoft SQL Server database.Short descriptionWhen deploying an application on an Elastic Beanstalk PHP platform that connects to a Microsoft SQL server database, you may receive the following error:"PHP Fatal error: Uncaught Error: Call to undefined function sqlsrv_connect() in /var/app/current/DB/"To connect PHP to a Microsoft SQL Server database, you must first install and configure the SQLSRV library and its PDO extension. By default, this library and extension aren't installed and configured.To install the SQLSRV library and PDO extension, you can use a .ebextensions configuration file that runs a script in your Amazon Elastic Compute Cloud (Amazon EC2) instances. The script does the following:Installs the correct driver and tools for Microsoft SQL ServerTurns on the required PHP libraries and extensionsResolutionNote: The following steps apply to Elastic Beanstalk environments with any PHP platform versions.1.    In the root of your application bundle, create a directory named .ebextensions.2.    Create a .ebextensions configuration file, such as the following .ebextensions/pdo_sqlsrv.config. file:Important: The .ebextensions file used in the following resolution is applicable only for Amazon Linux 1 and Amazon Linux 2 Amazon Machine Image (AMI) instances. The .ebextensions file doesn't apply to Windows or custom Ubuntu AMI instances in Elastic Beanstalk.####################################################################################################### Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.#### #### Permission is hereby granted, free of charge, to any person obtaining a copy of this#### software and associated documentation files (the "Software"), to deal in the Software#### without restriction, including without limitation the rights to use, copy, modify,#### merge, publish, distribute, sublicense, and/or sell copies of the Software, and to#### permit persons to whom the Software is furnished to do so.########################################################################################################################################################################################################## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,#### INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A#### PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT#### HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION#### OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE#### SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.###################################################################################################commands: install_mssql: command: | #!/bin/bash set -x # 0. EXIT if pdo_sqlsrv is already installed if php -m | grep -q 'pdo_sqlsrv' then echo 'pdo_sqlsrv is already installed' else # 1. Install libtool-ltdl-devel yum -y install libtool-ltdl-devel # 2. Register the Microsoft Linux repository wget https://packages.microsoft.com/config/rhel/8/prod.repo -O /etc/yum.repos.d/msprod.repo # 3. Install MSSQL and tools ACCEPT_EULA=N yum install mssql-tools msodbcsql17 unixODBC-devel -y --disablerepo=amzn* # The license terms for this product can be downloaded from http://go.microsoft.com/fwlink/?LinkId=746949 and found in /usr/share/doc/mssql-tools/LICENSE.txt . 
By changing "ACCEPT_EULA=N" to "ACCEPT_EULA=Y", you indicate that you accept the license terms. echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc source ~/.bashrc # 4. Install SQLSRV and its PDO extension, and stop pecl/pecl7 from overwriting php.ini cp -f "/etc/php.ini" "/tmp/php.ini.bk" pecl7 install sqlsrv pdo_sqlsrv || pecl install sqlsrv pdo_sqlsrv cp -f "/tmp/php.ini.bk" "/etc/php.ini" # 5. Manually add the extensions to the proper php.ini.d file and fix parameters sqlvar=$(php -r "echo ini_get('extension_dir');") && chmod 0755 $sqlvar/sqlsrv.so && chmod 0755 $sqlvar/pdo_sqlsrv.so echo extension=pdo_sqlsrv.so >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/30-pdo_sqlsrv.ini echo extension=sqlsrv.so >> `php --ini | grep "Scan for additional .ini files" | sed -e "s|.*:\s*||"`/20-sqlsrv.ini fi3.    Create an application source bundle that includes your .ebextensions file from step 2.4.    Deploy your updated Elastic Beanstalk application.Follow"
https://repost.aws/knowledge-center/elastic-beanstalk-deployment-php-sql
How do I change billing information on the PDF version of the AWS invoice that I receive by email?
I want to change the billing information on the PDF version of the AWS invoice that I receive by email.
"I want to change the billing information on the PDF version of the AWS invoice that I receive by email.ResolutionYour PDF invoice uses the billing information that's associated with your payment method.To update your billing information, complete the following steps:1.    Sign in to the Billing and Cost Management console, and then choose Payment preferences from the navigation pane.2.    Find your default payment method under Default payment preferences, and then choose Edit.3.    Update the information for your current payment method.Note: The name on the Billing Address appears in the ATTN: section of the invoice.4.    Choose Save changes to save your changes.Your updated billing information affects only future invoices. To get an updated PDF invoice for a particular billing period, update the billing information that's associated with your payment method. Then, open a case with AWS Support.Follow"
https://repost.aws/knowledge-center/change-pdf-billing-address
How am I billed for my Amazon EBS snapshots?
I want to know how I'm billed for Amazon Elastic Block Store (Amazon EBS) snapshots.
"I want to know how I'm billed for Amazon Elastic Block Store (Amazon EBS) snapshots.ResolutionCharges for Amazon EBS snapshots are calculated by the gigabyte-month. That is, you are billed for how large the snapshot is and how long you keep the snapshot.Pricing varies depending on the storage tier. For the Standard tier, you're billed only for changed blocks that are stored. For the Archive tier, you're billed for all snapshot blocks that are stored. You're also billed for retrieving snapshots from the Archive tier.The following are example scenarios for each storage tier:Standard tier: You have a volume that's storing 100 GB of data. You're billed for the full 100 GB of data for the first snapshot (snap A). At the time of the next snapshot (snap B), you have 105 GB of data. You're then billed for only the additional 5 GB of storage for incremental snap B.Archive tier: You archive snap B. The snapshot is then moved to the Archive tier, and you're billed for the full 105-GB snapshot block.For detailed pricing information, see Amazon EBS pricing.To view the charges for your EBS snapshots, follow these steps:Open the AWS Billing Dashboard.In the navigation pane, choose Bills.In the Details section, expand Elastic Compute Cloud.You can also use cost allocation tags to track and manage your snapshot costs.Note:You aren't billed for snapshots that another AWS account owns and shares with your account. You're billed only when you copy the shared snapshot to your account. You're also billed for EBS volumes that you create from the shared snapshot.If a snapshot (snap A) is referenced by another snapshot (snap B), then deleting snap B might not reduce the storage costs. When you delete a snapshot, only the data that's unique to that snapshot is removed. Data that's referenced by other snapshots remain, and you are billed for this referenced data. To delete an incremental snapshot, see Incremental snapshot deletion.Related informationWhy did my storage costs not reduce after I deleted a snapshot of my EBS volume and then deleted the volume itself?Why am I being charged for Amazon EBS when all my instances are stopped?Follow"
https://repost.aws/knowledge-center/ebs-snapshot-billing
How do I deploy artifacts to Amazon S3 in a different AWS account using CodePipeline?
I want to deploy artifacts to an Amazon Simple Storage Service (Amazon S3) bucket in a different account. I also want to set the destination account as the object owner. Is there a way to do that using AWS CodePipeline with an Amazon S3 deploy action provider?
"I want to deploy artifacts to an Amazon Simple Storage Service (Amazon S3) bucket in a different account. I also want to set the destination account as the object owner. Is there a way to do that using AWS CodePipeline with an Amazon S3 deploy action provider?ResolutionNote: The following example procedure assumes the following:You have two accounts: a development account and a production account.The input bucket in the development account is named codepipeline-input-bucket (with versioning activated).The default artifact bucket in the development account is named codepipeline-us-east-1-0123456789.The output bucket in the production account is named codepipeline-output-bucket.You're deploying artifacts from the development account to an S3 bucket in the production account.You're assuming a cross-account role created in the production account to deploy the artifacts. The role makes the production account the object owner instead of the development account. To provide the bucket owner in the production account with access to the objects owned by the development account, see the following article: How do I deploy artifacts to Amazon S3 in a different AWS account using CodePipeline and a canned ACL?Create an AWS KMS key to use with CodePipeline in the development accountImportant: You must use the AWS Key Management Service (AWS KMS) customer managed key for cross-account deployments. If the key isn't configured, then CodePipeline encrypts the objects with default encryption, which can't be decrypted by the role in the destination account.1.    Open the AWS KMS console in the development account.2.    In the navigation pane, choose Customer managed keys.3.    Choose Create Key.4.    For Key type, choose Symmetric Key.5.    Expand Advanced Options.6.    For Key material origin, choose KMS. Then, choose Next.7.    For Alias, enter your key's alias. For example: s3deploykey.8.    Choose Next. The Define key administrative permissions page opens.9.    In the Key administrators section, select an AWS Identity and Access Management (IAM) user or role as your key administrator.Choose Next. The Define key usage permissions page opens.11.    In the Other AWS accounts section, choose Add another AWS account.12.    In the text box that appears, add the account ID of the production account. Then, choose Next.Note: You can also select an existing service role in the This Account section. If you select an existing service role, skip the steps in the Update the KMS usage policy in the development account section.13.    Review the key policy. Then, choose Finish.Create a CodePipeline in the development account1.    Open the CodePipeline console. Then, choose Create pipeline.2.    For Pipeline name, enter a name for your pipeline. For example: crossaccountdeploy.Note: The Role name text box is populated automatically with the service role name AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy. You can also choose another, existing service role with access to the KMS key.3.    Expand the Advanced settings section.4.    For Artifact store, select Default location.Note: You can select Custom location if that's required for your use case.5.    For Encryption key, select Customer Managed Key.6.    For KMS customer managed key, select your key's alias from the list (s3deploykey, for this example).Then, choose Next. The Add source stage page opens.For Source provider, choose Amazon S3.8.    For Bucket, enter the name of your development input S3 bucket. 
For example: codepipeline-input-bucket.Important: The input bucket must have versioning activated to work with CodePipeline.9.    For S3 object key, enter sample-website.zip.Important: To use a sample AWS website instead of your own website, see Tutorial: Create a pipeline that uses Amazon S3 as a deployment provider. Then, search for "sample static website" in the Prerequisites of the 1: Deploy Static Website Files to Amazon S3 section.10.    For Change detection options, choose Amazon CloudWatch Events (recommended). Then, choose Next.11.    On the Add build stage page, choose Skip build stage. Then, choose Skip.12.    On the Add deploy stage page, for Deploy provider, choose Amazon S3.13.    For Region, choose the AWS Region that your production output S3 bucket is in. For example: US East (N. Virginia).Important: If the production output bucket's Region is different than your pipeline's Region, then you must also verify the following:You're using an AWS KMS multi-Region key with multiple replicas.Your pipeline has artifact stores in both Regions.14.    For Bucket, enter the name of your production output S3 bucket. For example: codepipeline-output-bucket.15.    Select the Extract file before deploy check box.Note: If needed, enter a path for Deployment path.16.    Choose Next.17.    Choose Create pipeline. The pipeline runs, but the source stage fails. The following error appears: "The object with key 'sample-website.zip' does not exist."The Upload the sample website to the input bucket section of this article describes how to resolve this error.Update the KMS usage policy in the development accountImportant: Skip this section if you're using an existing CodePipeline service role.1.    Open the AWS KMS console in the development account.2.    Select your key's alias (s3deploykey, for this example).3.    In the Key users section, choose Add.4.    In the search box, enter the service role AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy.5.    Choose Add.Configure a cross-account role in the production accountCreate an IAM policy for the role that grants Amazon S3 permissions to your production output S3 bucket1.    Open the IAM console in the production account.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. Then, enter the following policy in the JSON editor:Important: Replace codepipeline-output-bucket with your production output S3 bucket's name.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:Put*" ], "Resource": [ "arn:aws:s3:::codepipeline-output-bucket/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-output-bucket" ] } ]}4.    Choose Review policy.5.    For Name, enter a name for the policy. For example: outputbucketdeployaccess.6.    Choose Create policy.Create an IAM policy for the role that grants the required KMS permissions1.    In the IAM console, choose Create policy.2.    Choose the JSON tab. Then, enter the following policy in the JSON editor:Note: Replace the ARN of the KMS key that you created. 
Replace codepipeline-us-east-1-0123456789 with the name of the artifact bucket in the development account.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-east-1:<dev-account-id>:key/<key id>" ] }, { "Effect": "Allow", "Action": [ "s3:Get*" ], "Resource": [ "arn:aws:s3:::codepipeline-us-east-1-0123456789/*" ] }, { "Effect": "Allow", "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::codepipeline-us-east-1-0123456789" ] } ]}3.    Choose Review policy.4.    For Name, enter a name for the policy. For example: devkmss3access.5.    Choose Create policy.Create a cross-account role that the development account can assume to deploy the artifacts1.    Open the IAM console in the production account.2.    In the navigation pane, choose Roles. Then, choose Create role.3.    Choose Another AWS account.4.    For Account ID, enter the development account's AWS account ID.5.    Choose Next: Permissions.6.    From the list of policies, select outputbucketdeployaccess and devkmss3access.7.    Choose Next: Tags.8.    (Optional) Add tags, and then choose Next: Review.9.    For Role name, enter prods3role.10.    Choose Create role.11.    From the list of roles, choose prods3role.12.    Choose the Trust relationship. Then, choose Edit Trust relationship.13.    In the Policy Document editor, enter the following policy:Important: Replace dev-account-id with your development account's AWS account ID. Replace AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy with the name of the service role for your pipeline.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": [ "arn:aws:iam::<dev-account-id>:role/service-role/AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy" ] }, "Action": "sts:AssumeRole", "Condition": {} } ]}14.    Choose Update Trust Policy.Update the bucket policy for the CodePipeline artifact bucket in the development account1.    Open the Amazon S3 console in the development account.2.    In the Bucket name list, choose the name of your artifact bucket in your development account (for this example, codepipeline-us-east-1-0123456789).3.    Choose Permissions. Then, choose Bucket Policy.4.    In the text editor, update your existing policy to include the following policy statements:Important: To align with proper JSON formatting, add a comma after the existing statements. Replace prod-account-id with your production account's AWS account ID. Replace codepipeline-us-east-1-0123456789 with your artifact bucket's name.{ "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<prod-account-id>:root" }, "Action": [ "s3:Get*", "s3:Put*" ], "Resource": "arn:aws:s3:::codepipeline-us-east-1-0123456789/*"},{ "Sid": "", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<prod-account-id>:root" }, "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::codepipeline-us-east-1-0123456789"}5.    Choose Save.Attach a policy to your CodePipeline service role in the development account that allows it to assume the cross-account role that you created1.    Open the IAM console in the development account.2.    In the navigation pane, choose Policies. Then, choose Create policy.3.    Choose the JSON tab. 
Then, enter the following policy in the JSON editor:Important: Replace prod-account-id with your production account's AWS account ID.{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": [ "arn:aws:iam::<prod-account-id>:role/prods3role" ] }}4.    Choose Review policy.5.    For Name, enter assumeprods3role.6.    Choose Create policy.7.    In the navigation pane, choose Roles. Then, choose the name of the service role for your pipeline (for this example, AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy).8.    Choose Attach Policies. Then, select assumeprods3role.9.    Choose Attach Policy.Update your pipeline to use the cross-account role in the development accountNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.1.    Retrieve the pipeline definition as a file named codepipeline.json by running the following AWS CLI command:Important: Replace crossaccountdeploy with your pipeline's name.aws codepipeline get-pipeline --name crossaccountdeploy > codepipeline.json2.    Add the cross-account IAM role ARN (roleArn) to the deploy action section of the codepipeline.json file. For more information, see the CodePipeline pipeline structure reference in the CodePipeline User Guide.Example cross-account IAM roleArn"roleArn": "arn:aws:iam::your-prod-account id:role/prods3role",Example deploy action that includes a cross-account IAM role ARNImportant: Replace the prod-account-id with your production account's AWS account ID.{ "name": "Deploy", "actions": [ { "name": "Deploy", "actionTypeId": { "category": "Deploy", "owner": "AWS", "provider": "S3", "version": "1" }, "runOrder": 1, "configuration": { "BucketName": "codepipeline-output-bucket", "Extract": "true" }, "outputArtifacts": [], "inputArtifacts": [ { "name": "SourceArtifact" } ], "roleArn": "arn:aws:iam::<prod-account-id>:role/prods3role", "region": "us-east-1", "namespace": "DeployVariables" } ]}3.    Remove the metadata section at the end of the codepipeline.json file.Important: Make sure that you also remove the comma that's before the metadata section.Example metadata section"metadata": { "pipelineArn": "arn:aws:codepipeline:us-east-1:<dev-account-id>:crossaccountdeploy", "created": 1587527378.629, "updated": 1587534327.983}4.    Update the pipeline by running the following command:aws codepipeline update-pipeline --cli-input-json file://codepipeline.jsonUpload the sample website to the input bucket1.    Open the Amazon S3 console in the development account.2.    In the Bucket name list, choose your development input S3 bucket. For example: codepipeline-input-bucket.3.    Choose Upload. Then, choose Add files.4.    Select the sample-website.zip file that you downloaded earlier.5.    Choose Upload to run the pipeline. When the pipeline runs, the following occurs:The source action selects the sample-website.zip from the development input S3 bucket (codepipeline-input-bucket). Then, the source action places the zip file as a source artifact inside the artifact bucket in the development account ( codepipeline-us-east-1-0123456789).In the deploy action, the CodePipeline service role (AWSCodePipelineServiceRole-us-east-1-crossaccountdeploy) assumes the cross-account role (prods3role) in the production account.CodePipeline uses the cross account role (prods3role) to access the KMS key and artifact bucket in the development account. 
Then, CodePipeline deploys the extracted files to the production output S3 bucket (codepipeline-output-bucket) in the production account.Note: The production account is the owner of the extracted objects in the production output S3 bucket (codepipeline-output-bucket).Follow"
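To confirm the result from the production account, you can check that the extracted files exist and that the production account is now the object owner. The following calls are a sketch; the key shown is a placeholder for one of the extracted website files.
$ aws s3 ls s3://codepipeline-output-bucket/ --recursive
$ aws s3api get-object-acl --bucket codepipeline-output-bucket --key index.html --query 'Owner'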
https://repost.aws/knowledge-center/codepipeline-artifacts-s3
How do I resolve "KMSAccessDeniedException" errors from AWS Lambda?
My AWS Lambda function returned a "KMSAccessDeniedException" error.
"My AWS Lambda function returned a "KMSAccessDeniedException" error.Short descriptionUpdate the AWS Key Management Service (AWS KMS) permissions of your AWS Identity and Access Management (IAM) identity based on the error message.Important: If the AWS KMS key and IAM role belong to different AWS accounts, then both the IAM policy and AWS KMS key policy must be updated.For more information about AWS KMS keys and policy management, see AWS KMS keys.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.To resolve "KMS Exception: UnrecognizedClientExceptionKMS Message" errorsThe following error usually occurs when a Lambda function's execution role is deleted and then recreated using the same name, but with a different principal:Calling the invoke API action failed with this message: Lambda was unable to decrypt the environment variables because KMS access was denied. Please check the function's AWS KMS key settings. KMS Exception: UnrecognizedClientExceptionKMS Message: The security token included in the request is invalid.To resolve the error, you must reset the AWS KMS grant for the function's execution role by doing the following:Note: The IAM user that creates and updates the Lambda function must have permission to use the AWS KMS key.1.    Get the Amazon Resource Name (ARN) of the function's current execution role and AWS KMS key, by running the following AWS CLI command:Note: Replace yourFunctionName with your function's name.$ aws lambda get-function-configuration --function-name yourFunctionName2.    Reset the AWS KMS grant by doing one of the following:Update the function's execution role to a different, temporary value, by running the following update-function-configuration command:Important: Replace temporaryValue with the temporary execution role ARN.$ aws lambda update-function-configuration --function-name yourFunctionName --role temporaryValueThen, update the function's execution role back to the original execution role by running the following command:Important: Replace originalValue with the original execution role ARN.$ aws lambda update-function-configuration --function-name yourFunctionName --role originalValue-or-Update the function's AWS KMS key to a different, temporary value, by running the following update-function-configuration command:Important: Replace temporaryValue with a temporary AWS KMS key ARN. To use a default service key, set the kms-key-arn parameter to "".$ aws lambda update-function-configuration --function-name yourFunctionName --kms-key-arn temporaryValueThen, update the function's AWS KMS key back to the original AWS KMS key ARN by running the following command:Important: Replace originalValue with the original AWS KMS key ARN.$ aws lambda update-function-configuration --function-name yourFunctionName --kms-key-arn originalValueFor more information, see Key policies in AWS KMS.To resolve "KMS Exception: AccessDeniedException KMS Message" errorsThe following error indicates that your IAM identity doesn't have the permissions required to perform the kms:Decrypt API action:Lambda was unable to decrypt your environment variables because the KMS access was denied. Please check your KMS permissions. 
KMS Exception: AccessDeniedException KMS Message: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.To resolve the error, add the following policy statement to your IAM user or role:Important: Replace "your-KMS-key-arn" with your AWS KMS key ARN.{ "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "your-KMS-key-arn" } ]}For instructions, see Adding permissions to a user (console) or Modifying a role permissions policy (console), based on your use case.To resolve "You are not authorized to perform" errorsThe following errors indicate that your IAM identity doesn't have one of the permissions required to access the AWS KMS key:You are not authorized to perform: kms:Encrypt.You are not authorized to perform: kms:CreateGrant.User: user-arn is not authorized to perform: kms:ListAliases on resource: * with an explicit deny.Note: AWS KMS permissions aren't required for your IAM identity or the function's execution role if you use the default key policy.To resolve these types of errors, verify that your IAM user or role has the permissions required to perform the following AWS KMS API actions:ListAliasesCreateGrantEncryptDecryptFor instructions, see Adding permissions to a user (console) or Modifying a role permissions policy (console), based on your use case.Example IAM policy statement that grants the permissions required to access a customer-managed AWS KMS keyImportant: The Resource value must be "*". The kms:ListAliases action doesn't support low-level permissions. Also, make sure that you replace "your-kms-key-arn" with your AWS KMS key ARN.{ "Version": "2012-10-17", "Statement": [ { "Sid": "statement1", "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:Encrypt", "kms:CreateGrant" ], "Resource": "your-kms-key-arn" }, { "Sid": "statement2", "Effect": "Allow", "Action": "kms:ListAliases", "Resource": "*" } ]}To resolve "Access to KMS is not allowed" errorsThe following error indicates that an IAM entity doesn't have permissions to get AWS Secrets Manager secrets:Access to KMS is not allowed (Service: AWSSecretsManager; Status Code: 400; Error Code: AccessDeniedException; Request ID: 123a4bcd-56e7-89fg-hij0-1kl2m3456n78)Make sure that your IAM user or role has permissions required to make the following AWS KMS API actions:DecryptGenerateDataKeyFor more information, see How can I resolve issues accessing an encrypted AWS Secrets Manager secret?Related informationHow do I troubleshoot HTTP 502 and HTTP 500 status code (server-side) errors from AWS Lambda?How do I troubleshoot Lambda function failures?Follow"
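Before changing permissions, it can help to confirm which key the function is actually configured with; a null KMSKeyArn means the function uses the default AWS managed key. The function name is a placeholder.
$ aws lambda get-function-configuration --function-name yourFunctionName --query '{Role:Role,KMSKeyArn:KMSKeyArn}'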
https://repost.aws/knowledge-center/lambda-kmsaccessdeniedexception-errors
Why is my API Gateway proxy resource with a Lambda authorizer that has caching activated returning HTTP 403 "User is not authorized to access this resource" errors?
"My Amazon API Gateway proxy resource with an AWS Lambda authorizer that has caching activated returns the following HTTP 403 error message: "User is not authorized to access this resource". Why is this happening, and how do I resolve the error?"
"My Amazon API Gateway proxy resource with an AWS Lambda authorizer that has caching activated returns the following HTTP 403 error message: "User is not authorized to access this resource". Why is this happening, and how do I resolve the error?Short descriptionNote: API Gateway can return 403 User is not authorized to access this resource errors for a variety of reasons. This article addresses 403 errors related to API Gateway proxy resources with a Lambda authorizer that has caching activated only. For information on troubleshooting other types of 403 errors, see How do I troubleshoot HTTP 403 errors from API Gateway?A Lambda authorizer's output returns an AWS Identity and Access Management (IAM) policy to API Gateway. The IAM policy includes an explicit API Gateway API "Resource" element that's in the following format:"arn:aws:execute-api:<region>:<account>:<API_id>/<stage>/<http-method>/[<resource-path-name>/[<child-resources-path>]"When Authorization Caching is activated on a Lambda authorizer, the returned IAM policy is cached. The cached IAM policy is then applied to any additional API requests made within the cache's specified time-to-live (TTL) period.If the API has a proxy resource with a greedy path variable of {proxy+}, the first authorization succeeds. Any additional API requests made to a different path within the cache's TTL period fail and return the following error:"message": "User is not authorized to access this resource"The additional requests fail, because the paths don't match the explicit API Gateway API "Resource" element defined in the cached IAM policy.To resolve the issue, you can modify the Lambda authorizer function's code to return a wildcard (*/*) resource in the output instead. For more information, see Resources and conditions for Lambda actions.Note: To activate authorizer caching, your authorizer must return a policy that is applicable to all methods across an API Gateway. The Lambda authorizer function's code must return a wildcard (*/*) resource in the output to allow all resources. The cache policy expects the same resource path cached, unless you made the same request twice on the same resource-path.ResolutionNote: Modify the example Lambda authorizer function code snippets in this article to fit your use case.In the following example setups, the Lambda functions extract the API Gateway's id value from the method's Amazon Resource Name (ARN) ( "event.methodArn"). 
Then, the functions define a wildcard "Resource" variable by combining the method ARN's paths with the API's id value and a wildcard ( */*).Example token-based Lambda authorizer function code that returns a wildcard "Resource" variableexports.handler = function(event, context, callback) { var token = event.authorizationToken; var tmp = event.methodArn.split(':'); var apiGatewayArnTmp = tmp[5].split('/'); // Create wildcard resource var resource = tmp[0] + ":" + tmp[1] + ":" + tmp[2] + ":" + tmp[3] + ":" + tmp[4] + ":" + apiGatewayArnTmp[0] + '/*/*'; switch (token) { case 'allow': callback(null, generatePolicy('user', 'Allow', resource)); break; case 'deny': callback(null, generatePolicy('user', 'Deny', resource)); break; case 'unauthorized': callback("Unauthorized"); // Return a 401 Unauthorized response break; default: callback("Error: Invalid token"); // Return a 500 Invalid token response }};// Help function to generate an IAM policyvar generatePolicy = function(principalId, effect, resource) { var authResponse = {}; authResponse.principalId = principalId; if (effect && resource) { var policyDocument = {}; policyDocument.Version = '2012-10-17'; policyDocument.Statement = []; var statementOne = {}; statementOne.Action = 'execute-api:Invoke'; statementOne.Effect = effect; statementOne.Resource = resource; policyDocument.Statement[0] = statementOne; authResponse.policyDocument = policyDocument; } // Optional output with custom properties of the String, Number or Boolean type. authResponse.context = { "stringKey": "stringval", "numberKey": 123, "booleanKey": true }; return authResponse;}Example request parameter-based Lambda authorizer function code that returns a wildcard "Resource" variableexports.handler = function(event, context, callback) { // Retrieve request parameters from the Lambda function input: var headers = event.headers; var queryStringParameters = event.queryStringParameters; var pathParameters = event.pathParameters; var stageVariables = event.stageVariables; // Parse the input for the parameter values var tmp = event.methodArn.split(':'); var apiGatewayArnTmp = tmp[5].split('/'); // Create wildcard resource var resource = tmp[0] + ":" + tmp[1] + ":" + tmp[2] + ":" + tmp[3] + ":" + tmp[4] + ":" + apiGatewayArnTmp[0] + '/*/*'; console.log("resource: " + resource); // if (apiGatewayArnTmp[3]) { // resource += apiGatewayArnTmp[3]; // } // Perform authorization to return the Allow policy for correct parameters and // the 'Unauthorized' error, otherwise. 
var authResponse = {}; var condition = {}; condition.IpAddress = {}; if (headers.headerauth1 === "headerValue1" && queryStringParameters.QueryString1 === "queryValue1" && stageVariables.StageVar1 === "stageValue1") { callback(null, generateAllow('me', resource)); } else { callback("Unauthorized"); }} // Help function to generate an IAM policyvar generatePolicy = function(principalId, effect, resource) { // Required output: console.log("Resource in generatePolicy(): " + resource); var authResponse = {}; authResponse.principalId = principalId; if (effect && resource) { var policyDocument = {}; policyDocument.Version = '2012-10-17'; // default version policyDocument.Statement = []; var statementOne = {}; statementOne.Action = 'execute-api:Invoke'; // default action statementOne.Effect = effect; statementOne.Resource = resource; console.log("***Resource*** " + resource); policyDocument.Statement[0] = statementOne; console.log("***Generated Policy*** "); console.log(policyDocument); authResponse.policyDocument = policyDocument; } // Optional output with custom properties of the String, Number or Boolean type. authResponse.context = { "stringKey": "stringval", "numberKey": 123, "booleanKey": true }; return authResponse;} var generateAllow = function(principalId, resource) { return generatePolicy(principalId, 'Allow', resource);} var generateDeny = function(principalId, resource) { return generatePolicy(principalId, 'Deny', resource);}For more information on how to edit Lambda function code, see Deploying Lambda functions defined as .zip file archives.Related informationEdit code using the console editorFollow"
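After deploying the updated authorizer code, previously cached policies can still return 403 responses until the TTL expires. One way to clear them immediately is to flush the stage's authorizer cache from the AWS CLI; the API ID and stage name below are placeholders.
$ aws apigateway flush-stage-authorizers-cache --rest-api-id a1b2c3d4e5 --stage-name prod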
https://repost.aws/knowledge-center/api-gateway-lambda-authorization-errors
How do I monitor the performance of my Amazon RDS for MySQL DB instance?
I want to monitor the performance of my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance. What's the best way to do this?
"I want to monitor the performance of my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance. What's the best way to do this?Short descriptionThere are several ways that you can monitor your Amazon RDS for MySQL DB instance:Amazon CloudWatchEnhanced MonitoringRDS Performance InsightsSlow query logsTo troubleshoot any issues or multi-point failures, it's a best practice to monitor your DB instance using a variety of these monitoring methods.ResolutionAmazon CloudWatchAmazon CloudWatch provides real-time metrics of your Amazon RDS for MySQL database instance. By default, Amazon RDS metrics are automatically sent to Amazon CloudWatch every 60 seconds. You can also create a usage alarm to watch a single Amazon RDS metric over a specific time period.To monitor Amazon RDS metrics with Amazon CloudWatch, perform the following steps:Note: Metrics are first grouped by the service namespace, and then by the various dimension combinations within each namespace.1.    Open the Amazon CloudWatch console.2.    (Optional) Update your AWS Region. From the navigation bar, choose the AWS Region where your AWS resources exist. For more information, see Regions and endpoints.3.    In the navigation pane, choose Metrics.4.    Choose the RDS metric namespace.5.    Select a metric dimension.6.    (Optional) Sort, filter, update the display of your metrics:To sort your metrics, use the column heading.To create graph view of your metric, select the check box next to the metric.To filter by resource, choose the resource ID, and then choose Add to search.To filter by metric, choose the metric name, and then choose Add to search.Enhanced Monitoring (within 1-5 seconds of granularity interval)When you use Enhanced Monitoring in Amazon RDS, you can view real-time metrics of an operating system that your DB instance runs on.Note: You must create an AWS Identity Access Management (IAM) role that allows Amazon RDS to communicate with Amazon CloudWatch Logs.To enable Enhanced Monitoring in Amazon RDS, perform the following steps:1.    Scroll to the Monitoring section.2.    Choose Enable enhanced monitoring for your DB instance or read replica.3.    For Monitoring Role, specify the IAM role that you created.4.    Choose Default to have Amazon RDS create the rds-monitoring-role role for you.5.    Set the Granularity property to the interval, in seconds, between points when metrics are collected for your DB instance or read replica. The Granularity property can be set to one of the following values: 1, 5, 10, 15, 30, or 60.RDS Performance InsightsNote: If Performance Insights is manually enabled after creating the DB instance, a reboot instance is required to enable Performance Schema. Performance Schema is disabled when the parameter is set to "0" or "1" or the Source column for the parameter is set to "user". When the performance_schema parameter is disabled, Performance Insights displays a DB load that is categorized by the list state of the Amazon RDS MySQL process. To enable the performance_schema parameter, use reset performance_schema parameter.When you use RDS Performance Insights, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. This way, you can identify which queries are causing issues and view the wait type and wait events associated to that query.You can enable Performance Insights for Amazon RDS for MySQL in the Amazon RDS console.Slow query loggingYou can enable your slow query log by setting the slow_query_log value to "1". 
(The default value is "0", which means that your slow query log is disabled.) A slow query log records any queries that run longer than the number of seconds specified for the long_query_time metric. (The default value for the long_query_time metric is "10".) For example, to log queries that run longer than two seconds, you can update the number of seconds for the long_query_time metric to a value such as "2".To enable slow query logs for Amazon RDS for MySQL using a custom parameter group, perform the following:1.    Open the Amazon RDS console.2.    From the navigation pane, choose Parameter groups.3.    Choose the parameter group that you want to modify.4.    Choose Parameter group actions.5.    Choose Edit.6.    Choose Edit parameters.7.    Update the following parameters:log_output: If the general log or slow query log is enabled, update the value to "file" for write logs to the file system. Log files are rotated hourly.long_query_time: Update the value to "2" or greater, to log queries that run longer than two seconds or more.slow_query_log: Update the value to "1" to enable logging. (The default value is "0", which means that logging is disabled.)8.    Choose Save Changes.Note: You can modify the parameter values in a custom DB group, but not a default DB parameter group. If you're unable to modify the parameter in a custom DB parameter group, check whether the Value type is set to "Modifiable". For information on how to publish MySQL logs to an Amazon CloudWatch log group, see Publishing MySQL logs to Amazon CloudWatch Logs.Related informationAccessing MySQL database log filesBest practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performanceHow can I troubleshoot and resolve high CPU utilization on my Amazon RDS for MySQL, MariaDB, or Aurora for MySQL instances?How do I enable and monitor logs for an Amazon RDS MySQL DB instance?Follow"
https://repost.aws/knowledge-center/rds-mysql-db-performance
How can I encrypt a specific folder in my Amazon S3 bucket using AWS KMS?
I want to encrypt a specific folder in my Amazon Simple Storage Service (Amazon S3) bucket with an AWS Key Management Service (AWS KMS) key. How can I do that?
"I want to encrypt a specific folder in my Amazon Simple Storage Service (Amazon S3) bucket with an AWS Key Management Service (AWS KMS) key. How can I do that?ResolutionEncrypting a folder using the Amazon S3 console1.    Open the Amazon S3 console.2.    Navigate to the folder that you want to encrypt.Warning: If your folder contains a large number of objects, you might experience a throttling error. To avoid throttling errors, consider increasing your Amazon S3 request limits on your Amazon S3 bucket. For more troubleshooting tips on throttling errors, see Why am I receiving a ThrottlingExceptions error when making requests to AWS KMS?3.    Select the folder, and then choose Actions.4.    Choose Edit server-side encryption.5.    Select Enable for Enabling Server-side encryption.6.    Choose Encryption key type for your AWS Key Management Service key (SSE-KMS).7.    Select the AWS KMS key that you want to use for folder encryption.Note: The key named aws/s3 is a default key managed by AWS KMS. You can encrypt the folder with either the default key or a custom key.8.    Choose Save changes.Encrypting a folder using the AWS CLINote: You can't change the encryption of an existing folder using an AWS Command Line Interface (AWS CLI) command. Instead, you can run a command that copies the folder over itself with AWS KMS encryption enabled.To encrypt the files using the default AWS KMS key (aws/s3), run the following command:aws s3 cp s3://awsexamplebucket/abc s3://awsexamplebucket/abc --recursive --sse aws:kmsThis command syntax copies the folder over itself with AWS KMS encryption.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.To encrypt the files using a custom AWS KMS key, run the following command:aws s3 cp s3://awsexamplebucket/abc s3://awsexamplebucket/abc --recursive --sse aws:kms --sse-kms-key-id a1b2c3d4-e5f6-7890-g1h2-123456789abcMake sure to specify your own key ID for --sse-kms-key-id.Requiring that future uploads encrypt objects with AWS KMSAfter you change encryption, only the objects that are already in the folder are encrypted. Objects added to the folder after you change encryption can be uploaded without encryption. You can use a bucket policy to require that future uploads encrypt objects with AWS KMS.For example:{ "Version": "2012-10-17", "Id": "PutObjPolicy", "Statement": [ { "Sid": "DenyIncorrectEncryptionHeader", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::awsexamplebucket/awsexamplefolder/*", "Condition": { "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" } } }, { "Sid": "DenyUnEncryptedObjectUploads", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::awsexamplebucket/awsexamplefolder/*", "Condition": { "Null": { "s3:x-amz-server-side-encryption": true } } } ]}This bucket policy denies access to s3:PutObject on docexamplebucket/docexamplefolder/* unless the request includes server-side encryption with AWS KMS.Related informationProtecting data using server-side encryption with AWS KMS CMKS (SSE-KMS)Follow"
https://repost.aws/knowledge-center/s3-encrypt-specific-folder
How do I verify the authenticity of Amazon SNS messages that are sent to HTTP and HTTPS endpoints?
"I'm sending notifications to an HTTPS—or HTTP—endpoint using Amazon Simple Notification Service (Amazon SNS). I want to prevent spoofing attacks, so how do I verify the authenticity of the Amazon SNS messages that my endpoint receives?"
"I'm sending notifications to an HTTPS—or HTTP—endpoint using Amazon Simple Notification Service (Amazon SNS). I want to prevent spoofing attacks, so how do I verify the authenticity of the Amazon SNS messages that my endpoint receives?ResolutionIt's a best practice to use certificate-based signature validation when verifying the authenticity of an Amazon SNS notification. For instructions, see Verifying the signatures of Amazon SNS messages in the Amazon SNS Developer Guide.To help prevent spoofing attacks, make sure that you do the following when verifying Amazon SNS message signatures:Always use HTTPS to get the certificate from Amazon SNS.Validate the authenticity of the certificate.Verify that the certificate was sent from Amazon SNS.(When possible) Use one of the supported AWS SDKs for Amazon SNS to validate and verify messages.Example message bodyThe following is an example message payload string sent from Amazon SNS:{"Type" : "Notification","MessageId" : "e1f2a232-e8ce-5f0a-b5d3-fbebXXXXXXXX","TopicArn" : "arn:aws:sns:us-east-1:XXXXXXXX:SNSHTTPSTEST","Subject" : "Test","Message" : "TestHTTPS","Timestamp" : "2021-10-07T18:55:19.793Z","SignatureVersion" : "1","Signature" : "VetoDxbYMh0Ii/87swLEGZt6FB0ZzGRjlW5BiVmKK1OLiV8B8NaVlADa6ThbWd1s89A4WX1WQwJMayucR8oYzEcWEH6//VxXCMQxWD80rG/NrxLeoyas4IHXhneiqBglLXh/R9nDZcMAmjPETOW61N8AnLh7nQ27O8Z+HCwY1wjxiShwElH5/+2cZvwCoD+oka3Gweu2tQyZAA9ergdJmXA9ukVnfieEEinhb8wuaemihvKLwGOTVoW/9IRMnixrDsOYOzFt+PXYuKQ6KGXpzV8U/fuJDsWiFa/lPHWw9pqfeA8lqUJwrgdbBS9vjOJIL+u2c49kzlei8zCelK3n7w==","SigningCertURL" : "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-7ff5318490ec183fbaddaa2aXXXXXXXX.pem","UnsubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:XXXXXXXX:SNSHTTPSTEST:b5ab2db8-7775-4852-bd1a-2520XXXXXXXX","MessageAttributes" : {"surname" : {"Type":"String","Value":"SNSHTTPSTest"}}}For more information on message formats that Amazon SNS uses, refer to Parsing message formats.Related informationFanout to HTTP/S endpointsUsing AWS Lamba with Amazon SNSWhat's the Amazon SNS IP address range?Follow"
https://repost.aws/knowledge-center/sns-verify-message-authenticity
How do I migrate from a NAT instance to a NAT gateway?
"I need to migrate from a NAT instance to a NAT gateway, and I want the migration done with minimal downtime."
"I need to migrate from a NAT instance to a NAT gateway, and I want the migration done with minimal downtime.Short descriptionWhen creating a migration plan, consider the following:Do you plan to use the same Elastic IP address for the NAT gateway as currently used by the NAT instance? A new Elastic IP address might not be recognized by external clients.Is your NAT instance performing other functions, such as port forwarding, custom scripts, providing VPN services, or acting as bastion host? A NAT gateway allows instances in a private subnet to connect to the internet or other AWS services. Internet connections towards the NAT gateway are not allowed. It can’t be used for any other functions.Have you configured your NAT instance security groups and your NAT gateway network access control lists (network ACLs) appropriately? You can use security groups on the NAT instance and network ACLs on the NAT instance subnet to control traffic to and from the NAT subnet. You can only use a network ACL to control the traffic to and from the subnet in which the NAT gateway is located.Do your current NAT instances provide high availability across Availability Zones? If so, you might want to create a Multi-AZ architecture. You can do this by creating a NAT gateway in each Availability Zone. Next, configure your private subnet route-tables in a specific Availability Zone to use the NAT gateway from the same Availability Zone. Multi-AZ is useful if you want to avoid charges for inter-AZ traffic.Do you have tasks running through the NAT instance? When the routing is changed from the NAT instance, existing connections are dropped, and the connections must be reestablished.Does your architecture support testing the instance migrations individually? If so, migrate one NAT instance to a NAT gateway and check the connectivity before migrating other instances.Do you allow incoming traffic from ports 1024 - 65535 on the NAT instance's network ACL? You must allow traffic from ports 1024 - 65535 because the NAT gateway uses these as source ports. To learn more, see VPC with public and private subnets (NAT).ResolutionDisassociate the Elastic IP address from the existing NAT instance.Create a NAT gateway in the public subnet for the NAT instance that you want to replace. You can do this with the disassociated Elastic IP address, or with a new Elastic IP address.Review the route tables that refer to the NAT instance or the elastic network interface of the NAT instance. Then edit the route to point to the newly created NAT gateway instead.Note: Repeat this process for every NAT instance and subnet that you want to migrate.Access one of the Amazon Elastic Compute Cloud (Amazon EC2) instances in the private subnet and verify connectivity to the internet.After you have successfully migrated to the NAT gateway and have verified connectivity, you can terminate the NAT instances.Related informationCompare NAT gateways and NAT instancesMigrate from a NAT instance to a NAT gatewayNAT gatewaysHow do I set up a NAT gateway for a private subnet in Amazon VPC?Troubleshoot NAT gatewaysFollow"
https://repost.aws/knowledge-center/migrate-nat-instance-gateway
Why am I experiencing intermittent connectivity issues with my Amazon Redshift cluster?
I'm experiencing intermittent connectivity issues when I try to connect to my Amazon Redshift cluster. Why is this happening and how do I troubleshoot this?
"I'm experiencing intermittent connectivity issues when I try to connect to my Amazon Redshift cluster. Why is this happening and how do I troubleshoot this?Short descriptionIntermittent connectivity issues in your Amazon Redshift cluster are caused by the following:Restricted access for a particular IP address or CIDR blockMaintenance window updatesNode failures or scheduled administration tasksEncryption key rotationsToo many active network connectionsHigh CPU utilization of the leader nodeClient-side connection issuesResolutionRestricted access for a particular IP address or CIDR blockCheck to see if there is restricted access for a particular IP address or CIDR block in your security group. Because of DHCP configuration, your client IP address can change, which can cause connectivity issues. Additionally, if you aren't using elastic IP addresses for your Amazon Redshift cluster, the AWS managed IP address of your cluster nodes might change. For example, your IP address can change when you delete your cluster and then recreate it from a snapshot, or when you resume a paused cluster.Note: Public IP addresses are rotated when the Amazon Redshift cluster is deleted and recreated. Private IP addresses change whenever nodes are replaced.To resolve any network restrictions, consider the following approaches:If your application is caching the public IP address behind a cluster endpoint, be sure to use this endpoint for your Amazon Redshift connection. To be sure of the stability and security in your network connection, avoid using a DNS cache for your connection.It's a best practice to use an elastic IP address for your Amazon Redshift cluster. An elastic IP address allows you to change your underlying configuration without affecting the IP address that clients use to connect to your cluster. This approach is helpful if you are recovering a cluster after a failure. For more information, see Managing clusters in a VPC.If you're using a private IP address to connect to a leader node or compute node, be sure to use the new IP address. For example, if you performed SSH ingestion or have an Amazon EMR configuration that uses the compute node, update your settings with the new IP address. A new private IP address is granted to new nodes after a node replacement.Maintenance window updatesCheck the maintenance window for your Amazon Redshift cluster. During a maintenance window, your Amazon Redshift cluster is unable to process read or write operations. If a maintenance event is scheduled for a given week, it starts during the assigned 30-minute maintenance window. While Amazon Redshift is performing maintenance, any queries or other operations that are in progress are shut down. You can change the scheduled maintenance window from the Amazon Redshift console.Node failures or scheduled administration tasksFrom the Amazon Redshift console, check the Events tab for any node failures or scheduled administration tasks (such as a cluster resize or reboot).If there is a hardware failure, Amazon Redshift might be unavailable for a short period, which can result in failed queries. When a query fails, you see an Events description such as the following:"A hardware issue was detected on Amazon Redshift cluster [cluster name]. A replacement request was initiated at [time]."Or, if an account administrator scheduled a restart or resize operation on your Amazon Redshift cluster, intermittent connectivity issues can occur. 
Your Events description then indicates the following:"Cluster [cluster name] began restart at [time].""Cluster [cluster name] completed restart at [time]."For more information, see Amazon Redshift event categories and event messages.Encryption key rotationsCheck your key management settings for your Amazon Redshift cluster. Verify whether you are using AWS Key Management Service (AWS KMS) key encryption and key encryption rotation.If your encryption key is enabled and the encryption key is being rotated, then your Amazon Redshift cluster is unavailable during this time. As a result, you receive the following error message:"pg_query(): Query failed: SSL SYSCALL error: EOF detected"The frequency of your key rotation depends on your environment's policies for data security and standards. Rotate the keys as often as needed or whenever the encrypted key might be compromised. Also, be sure to have a key management plan that supports both your security and cluster availability needs.Too many active connectionsIn Amazon Redshift, all connections to your cluster are sent to the leader node, and there is a maximum limit for active connections. The maximum limit that your Amazon Redshift cluster can support is determined by node type (instead of node count).When there are too many active connections in your Amazon Redshift cluster, you receive the following error:"[Amazon](500310) Invalid operation: connection limit "500" exceeded for non-bootstrap users"If you receive an Invalid operation error while connecting to your Amazon Redshift cluster, it indicates that you have reached the connection limit. You can check the number of active connections for your cluster by looking at the DatabaseConnections metric in Amazon CloudWatch.If you notice a spike in your database connections, there might be a number of idle connections in your Amazon Redshift cluster. To check the number of idle connections, run the following SQL query:select trim(a.user_name) as user_name, a.usesysid, a.starttime, datediff(s,a.starttime,sysdate) as session_dur, b.last_end, datediff(s,case when b.last_end is not null then b.last_end else a.starttime end,sysdate) idle_dur FROM(select starttime,process,u.usesysid,user_name from stv_sessions s, pg_user u where s.user_name = u.usename and u.usesysid>1 and process NOT IN (select pid from stv_inflight where userid>1 union select pid from stv_recents where status != 'Done' and userid>1)) a LEFT OUTER JOIN (select userid,pid,max(endtime) as last_end from svl_statementtext where userid>1 and sequence=0 group by 1,2) b ON a.usesysid = b.userid AND a.process = b.pid WHERE (b.last_end > a.starttime OR b.last_end is null) ORDER BY idle_dur;The output looks like this example:process | user_name | usesysid | starttime | session_dur | last_end | idle_dur---------+------------+----------+---------------------+-------------+----------+---------- 14684 | myuser | 100 | 2020-06-04 07:02:36 | 6 | | 6(1 row)When the idle connections are identified, the connection can be shut down using the following command syntax:select pg_terminate_backend(process);The output looks like this example:pg_terminate_backend ---------------------- 1(1 row)High CPU utilization of the leader nodeAll clients connect to an Amazon Redshift cluster using a leader node. 
High CPU utilization of the leader node can result in intermittent connection issues.If you try to connect to your Amazon Redshift cluster and the leader node is consuming high CPU, you receive the following error message:"Error setting/closing connection"To confirm whether your leader node has reached high CPU utilization, check the CPUUtilization metric in Amazon CloudWatch. For more information, see Amazon Redshift metrics.Client-side connection issuesCheck for a connection issue between the client (such as SQL Workbench/J or a PostgreSQL client) and the server (your Amazon Redshift cluster). A client-side connection reset might occur if your client is trying to send a request from a port that has been released. As a result, the connection reset can cause intermittent connection issues.To prevent these client-side connection issues, consider the following approaches:Use the keepalive feature in Amazon Redshift to check that the connection between the client and server is operating correctly. The keepalive feature also helps to prevent any connection links from being broken. To check or configure the values for keepalive, see Change TCP/IP timeout settings and Change DSN timeout settings.Check the maximum transmission unit (MTU) if your queries appear to be running but hang in the SQL client tool. Sometimes, the queries fail to appear in Amazon Redshift because of a packet drop. A packet drop occurs when there are different MTU sizes in the network paths between two IP hosts. For more information about how to manage packet drop issues, see Queries appear to hang and sometimes fail to reach the cluster.Follow"
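To see whether you are approaching the connection limit, you can also pull the DatabaseConnections metric with the AWS CLI. The following is a minimal sketch, assuming a hypothetical cluster identifier of my-redshift-cluster and an example six-hour window:

aws cloudwatch get-metric-statistics \
  --namespace AWS/Redshift \
  --metric-name DatabaseConnections \
  --dimensions Name=ClusterIdentifier,Value=my-redshift-cluster \
  --start-time 2023-01-01T00:00:00Z --end-time 2023-01-01T06:00:00Z \
  --period 300 --statistics Maximum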
https://repost.aws/knowledge-center/redshift-intermittent-connectivity
"Why am I getting the error “The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access” when I download or copy an object from my Amazon S3 bucket?"
"I'm getting the following error when I try to download or copy an object from my Amazon Simple Storage Service (Amazon S3) bucket: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
"I'm getting the following error when I try to download or copy an object from my Amazon Simple Storage Service (Amazon S3) bucket: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.ResolutionYou get this error when both the following conditions are true:The object that's stored in the bucket where you are making requests to is encrypted with an AWS Key Management Service (AWS KMS) key.The AWS Identity and Access Management (IAM) role or user that's making the requests doesn't have sufficient permissions to access the AWS KMS key that's used to encrypt the objects.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent version of the AWS CLI.You can check the encryption on an object using the AWS CLI command head-object:aws s3api head-object --bucket my-bucket --key my-objectBe sure to do the following in the preceding command:Replace my-bucket with the name of your bucket.Replace my-object with the name of your object.The output for this command looks like the following:{ "AcceptRanges": "bytes", "ContentType": "text/html", "LastModified": "Thu, 16 Apr 2015 18:19:14 GMT", "ContentLength": 77, "VersionId": "null", "ETag": "\"30a6ec7e1a9ad79c203d05a589c8b400\"", "ServerSideEncryption": "aws:kms", "Metadata": {}, "SSEKMSKeyId": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", "BucketKeyEnabled": true}The SSEKMSKeyId field in the output specifies the AWS KMS key that was used to encrypt the object.To resolve this error, do either of the following:Be sure that the policy that's attached to the IAM user or role has the required permissions. Example:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "kms:DescribeKey", "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": [ "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" ] }}Be sure that the AWS KMS policy has the required permissions. Example:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AWS-account-ID:user/user-name-1" }, "Action": [ "kms:DescribeKey", "kms:GenerateDataKey", "kms:Decrypt" ], "Resource": "*" }}If the IAM user or role and AWS KMS key are from different AWS accounts, then be sure of the following:The policy that's attached to the IAM entity has the required AWS KMS permissions.The AWS KMS key policy grants the required permissions to the IAM entity.Important: You can't use the AWS managed keys in cross-account use cases because the AWS managed key policies can't be modified.To get detailed information about an AWS KMS key, run the describe-key command:aws kms describe-key --key-id arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890abYou can also use the AWS KMS console to view details about an AWS KMS key.Note: Be sure that the AWS KMS key that's used to encrypt the object is enabled.Related informationMy Amazon S3 bucket has default encryption using a custom AWS KMS key. How can I allow users to download from and upload to the bucket?Do I need to specify the AWS KMS key when I download a KMS-encrypted object from Amazon S3?Follow"
https://repost.aws/knowledge-center/s3-download-object-ciphertext-error
How do I configure GoSH on an Amazon RDS instance that is running MySQL?
I have an Amazon Relational Database Service (Amazon RDS) instance that is running MySQL. I want to turn on and configure Global Status History (GoSH) on my RDS DB instance. How can I do this?
"I have an Amazon Relational Database Service (Amazon RDS) instance that is running MySQL. I want to turn on and configure Global Status History (GoSH) on my RDS DB instance. How can I do this?Short descriptionYou can use GoSH to maintain the history of different status variables in Amazon RDS for MySQL. First, you must turn on an event scheduler before you can use GoSH. Then, you can modify GoSH to run at specific intervals and to rotate tables regularly. By default, the GoSH information is collected every five minutes, stored in the mysql.rds_global_status_history table, and the table is rotated every seven days.Resolution1.    Modify the custom DB parameter group attached to the instance so that event_scheduler is set to ON.2.    Log in to your DB instance, and then run this command:SHOW PROCESSLIST;SHOW GLOBAL VARIABLES LIKE 'event_scheduler';3.    Turn on GoSH by running this command:CALL mysql.rds_enable_gsh_collector;4.    To modify the monitoring interval to one minute, run this command:CALL mysql.rds_set_gsh_collector(1);5.    Turn on rotation for the GoSH tables by running this command:CALL mysql.rds_enable_gsh_rotation;6.    Modify the rotation by running this command:CALL mysql.rds_set_gsh_rotation(5);Query the GoSH tables to fetch information about specific operations. For example, the following query provides details about the number of Data Manipulation Language (DML) operations performed on the instance every minute.SELECT collection_start, collection_end, sum(value) AS 'DML Queries Count' from (select collection_start, collection_end, "INSERTS" as "Operation", variable_Delta as "value" from mysql.rds_global_status_history where variable_name = 'com_insert' union select collection_start, collection_end, "UPDATES" as "Operation", variable_Delta as "value" from mysql.rds_global_status_history where variable_name = 'com_update' union select collection_start, collection_end, "DELETES" as "Operation", variable_Delta as "value" from mysql.rds_global_status_history where variable_name = 'com_delete') a group by 1,2;Note: This query is not applicable for MySQL 8.0.Related informationCommon DBA tasks for MySQL DB instancesManaging the global status historyFollow"
https://repost.aws/knowledge-center/enable-gosh-rds-mysql
How do I allow access to my Amazon S3 buckets to customers who do not use TLS 1.2 or higher?
"My customers don't use TLS versions 1.2 or higher, so they can't access content that's stored in my Amazon Simple Storage Service (Amazon S3) buckets. I want to allow these customers to access content in my Amazon S3 buckets using TLS 1.0 or 1.1."
"My customers don't use TLS versions 1.2 or higher, so they can't access content that's stored in my Amazon Simple Storage Service (Amazon S3) buckets. I want to allow these customers to access content in my Amazon S3 buckets using TLS 1.0 or 1.1.Short descriptionAWS enforces the use of TLS 1.2 or higher on all AWS API endpoints. To continue to connect to AWS services, update all software that uses TLS 1.0 or 1.1.ResolutionAmazon CloudFront allows the use of older TLS versions by abstracting customers from the TLS protocol that's used between your CloudFront distribution and Amazon S3.Create a CloudFront distribution with OACWith CloudFront, you can support anonymous and public requests to your S3 buckets. Or, you can make your S3 buckets private and accessible through CloudFront only by requiring signed requests to access your S3 buckets.Support anonymous and public requests to your S3 bucketsNote: The following example assumes that you already have an S3 bucket in use. If you don't have an S3 bucket, then create one.To create the CloudFront distribution, follow these steps:Open the CloudFront console.Choose Create Distribution.Under Origin, for Origin domain, choose your S3 bucket's REST API endpoint from the dropdown list.For Viewer protocol policy, select Redirect HTTP to HTTPS.For Allowed HTTP endpoints, select GET, HEAD, OPTIONS to support read requests.In the Origin access section, select Origin access control settings (recommended).Select Create control setting, and use the default name. For the signing behavior, select Sign requests (recommended), and select Create. The OAC recommended settings automatically authenticates the viewer's request.Select the identity in the dropdown list. After the distribution is created, update the bucket policy to restrict access to OAC.Under Default cache behavior, Viewer, select Redirect HTTP to HTTPS for Viewer Protocol Policy, and leave the other settings as default.Under Cache key and origin requests, select Cache policy and origin request policy (recommended). Then, use CachingOptimized for the Cache policy and CORS-S3Origin for the Origin request policy.Select Create distribution, and then wait for its status to update to Enabled.Require signed requests to access your S3 bucketsAdd security to your S3 buckets by supporting signed requests only. 
With signed requests, OAC follows your authentication parameters and forwards them to the S3 origin, which then denies anonymous requests.To create a CloudFront distribution that requires signed requests to access your S3 buckets, follow these steps:Open the CloudFront console.Choose Create Distribution.Under Origin, for Origin domain, choose your S3 bucket's REST API endpoint from the dropdown list.For Viewer protocol policy, select Redirect HTTP to HTTPS.For Allowed HTTP endpoints, select GET, HEAD, OPTIONS to support read requests.In the Origin access section, select Origin access control settings (recommended).Block all unsigned requests by checking the Do not sign requests option.Note: Blocking unsigned requests requires every customer to sign their requests so that the S3 origin can evaluate the permissions.Create a custom cache policy to forward the customer's Authorization header to the origin.Under Cache key and origin requests, select Cache policy and origin request policy (recommended).Select Create Policy.Enter a name for the cache policy in the Name section.Under Cache key settings, go to Headers, and select Include the following headers.Under Add Header, select Authorization.Select Create.Control the customer's security policyTo control a security policy in CloudFront, you must have a custom domain. It's a best practice to specify an alternate domain name for your distribution. It's also a best practice to use a custom SSL certificate that's configured in AWS Certificate Manager (ACM). Doing so gives you more control over the security policy, and allows customers to continue to use TLS 1.0. For more information, see Supported protocols and ciphers between viewers and CloudFront.If you use the default *.cloudfront.net domain name, then CloudFront automatically provisions a certificate and sets the security policy to allow TLS 1.0 and 1.1. For more information, see Distribution settings.To configure an alternate domain name for your CloudFront distribution, follow these steps:Sign in to the AWS Management Console, and then open the CloudFront console.Choose the ID for the distribution that you want to update.On the General tab, choose Edit.For Alternate Domain Names (CNAMEs), choose Add item, and enter your domain name.Note: It's a best practice to use a custom canonical name record (CNAME) to access your resources. Using a CNAME gives you greater control over routing, and allows a better transition for your customers.For Custom SSL Certificate, choose the custom SSL certificate from the dropdown list that covers your CNAME to assign it to the distribution.Note: For more information on installing a certificate, see How do I configure my CloudFront distribution to use an SSL/TLS certificate?Choose Create distribution, and wait for its status to update to Enabled.After you create the distribution, you must allow OAC to access your bucket. Complete the following steps:Navigate to the CloudFront console page, and open your CloudFront distribution.Select the Origins tab, select your origin, and then click Edit.Choose Copy policy, open the bucket permission, and update your bucket policy.Open the Go to S3 bucket permissions page.Under Bucket policy, choose Edit. Paste the policy that you copied earlier, and then choose Save. If your bucket policy requires more than reading from S3, then you can add the required APIs.If you use a custom domain name, then change your DNS entries to use the new CloudFront distribution URL. 
If you don't use a custom domain name, then you must provide the new CloudFront distribution URL to your users. Also, you must update any client or device software that uses the old URL.If you're using an AWS SDK to access Amazon S3 objects, then you must change your code to use regular HTTPS endpoints. Also, make sure that you use the new CloudFront URL. If the objects aren't public and require better control, then you can serve private content with signed URLs and signed cookies.Use S3 presigned URLs to access objectsIf your workflow relies on S3 presigned URLs, then use a CloudFront distribution to relay your query to the S3 origin. First, generate a presigned URL for the object you want. Then, replace the host in the URL with the CloudFront endpoint to deliver the call through CloudFront and automatically upgrade the encryption protocol. To test and generate a presigned URL, run the following CLI command:aws s3 presign s3://BUCKET_NAME/test.jpgExample output:https://bucket_name.s3.us-east-1.amazonaws.com/test.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=%5b...%5d%2F20220901%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=%5b...%5d&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature">https://BUCKET_NAME.s3.us-east-1.amazonaws.com/test.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=[...]%2F20220901%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=[...]&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature =[...]Now change the S3 URL to the new CloudFront endpoint. For example, replace this S3 URL:BUCKET_NAME.s3.eu-west-1.amazonaws.comwith this endpoint:https://DISTRIBUTION_ID.cloudfront.net.Example output:https://<DISTRIBUTION_ID>.cloudfront.net /test.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=[...]%2F20220901%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=[...]&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=[...]To use presigned URLs, apply the following CloudFront settings:Set the OAC signing behavior to Do not sign requests.Set the CloudFront distribution origin request policy to Origin request settings: Headers – None; Cookies – None; Query strings – All.Set the cache policy to Headers – None; Cookies – None; Query strings – None.In AWS CloudTrail, the GET request to download from an S3 presigned URL shows as the identity that generated the presigned URL.If you're using an AWS SDK to access S3 objects, then you must change your code to use the presigned URL. Use a regular HTTPS request instead, and use the new CloudFront URL.Confirm that you're using modern encryption protocols for Amazon S3To test your new policy, use the following example curl command to make HTTPS requests using a specific legacy protocol:curl https://${CloudFront_Domain}/image.png -v --tlsv1.0 --tls-max 1.0The example curl command makes a request to CloudFront using TLS 1.0. This connects to the S3 origin using TLS 1.2 and successfully downloads the file.It's a best practice to use AWS CloudTrail Lake to identify older TLS connections to AWS service endpoints. You can configure the CloudTrail Lake event data store to capture management events or data events. The corresponding CloudTrail event in CloudTrail Lake shows TLS version 1.2, confirming that your customers use modern security policy to connect to Amazon S3.Follow"
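The host swap for presigned URLs can be scripted so that clients never see the S3 endpoint. The following is a rough bash sketch that reuses the BUCKET_NAME and DISTRIBUTION_ID placeholders from above; the regional endpoint in your presigned URL depends on your bucket's Region:

# Generate a presigned URL that is valid for one hour
url=$(aws s3 presign s3://BUCKET_NAME/test.jpg --expires-in 3600)

# Replace the S3 REST endpoint with the CloudFront endpoint before sharing the URL
echo "${url/BUCKET_NAME.s3.us-east-1.amazonaws.com/DISTRIBUTION_ID.cloudfront.net}"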
https://repost.aws/knowledge-center/s3-access-old-tls
"After I use AWS Organizations to create a member account, how do I access that account?"
"I used AWS Organizations to create a member account in my Organization, and I want to access that account."
"I used AWS Organizations to create a member account in my Organization, and I want to access that account.Short descriptionWhen you create a member account with Organizations, you must specify an email address, an AWS Identity and Access Management (IAM) role, and an account name. If a role name isn't specified, then a default name is assigned: OrganizationAccountAccessRole. To switch to the IAM role and access the member account, use the Organizations console.ResolutionIn the Organizations console, member accounts appear under the Accounts tab. Note the account number, email address, and IAM role name of the member account that you want to access. You can access the member account using either the IAM role or the AWS account root user credentials.Option one: Use the IAM Role1.    Open the AWS Management Console using IAM user credentials.2.    Choose your account name at the top of the page, and then select Switch role.Important: If you signed in with root user credentials, then you can't switch roles. You must sign in as an IAM user or role. For more information, see Switching to a role (console).3.    Enter the account number and role name for the member account.4.    (Optional) You can also enter a custom display name (maximum 64 characters) and a display color for the member account.5.    Choose Switch role.Option two: Use the root user credentialsWhen you create a new member account, Organizations sets an initial password for that account that can't be retrieved. To access the account as the root user for the first time, follow these instructions to reset the initial password:1.    Follow the instructions for Accessing a member account as the root user.2.    After you receive the reset password email, choose the Reset password link.3.    Open the AWS Management Console using the root user name and the new password.For more information, see How do I recover a lost or forgotten AWS password?Note: It's a best practice to use the root user only to create IAM users, groups, and roles. It's also a best practice to use multi-factor authentication for your root user.Related informationAccessing and administering the member accounts in your organizationRemoving a member account from your organizationI can't assume a roleFollow"
https://repost.aws/knowledge-center/organizations-member-account-access
Why am I unable to change the maintenance track for my Amazon Redshift provisioned cluster?
I'm unable to change the maintenance track for my Amazon Redshift provisioned cluster.
"I'm unable to change the maintenance track for my Amazon Redshift provisioned cluster.Short descriptionAmazon Redshift periodically performs maintenance to apply upgrades to your cluster hardware or to perform software patches. During these updates, your Amazon Redshift cluster isn't available for normal operations. If a scheduled maintenance occurs while a query is running, then the query is terminated and rolled back.Note: The following is applicable to only provisioned Amazon Redshift clusters.If you have planned deployments for large data loads, ANALYZE, or VACUUM operations, you can defer maintenance for up to 45 days.Important: You can't defer maintenance after the maintenance window has started.You can make changes to the maintenance track to control the cluster version applied during a maintenance window. There are three maintenance tracks to choose from:Current – Use the most current approved cluster version.Trailing – Use the cluster version before the current version.Preview – Use the cluster version that contains new features available for preview.Changes to maintenance tracks aren't allowed for the following situations:A Redshift cluster requires a hardware upgrade or a node of a Redshift cluster needs to be replaced.Mandatory upgrades or patches are required for a Redshift cluster.The maintenance track can't be set to Trailing for a Redshift cluster with the most current cluster version.If the Redshift provisioned cluster maintenance track is set to Preview, then changes from one Preview to another Preview track isn’t allowed.If the Redshift provisioned cluster track is set to Current or Trailing, then you can't change the maintenance track to Preview.ResolutionNote: If a mandatory maintenance window is required for your Redshift cluster, AWS will send a notification before the start of the maintenance window.Redshift cluster requires hardware upgrade or a node of a Redshift cluster needs to be replacedEach new version change can include updates to the operating system, security, and functionality. AWS will send a notification and make the required changes. This happens automatically when there's a hardware update, or another mandatory update, and the cluster maintenance track is set to Current.Your Amazon Redshift cluster isn't available during the maintenance window.Mandatory upgrades or patches are required for Redshift clusterMandatory upgrades or patches are deployed for a particular cluster or for all clusters in an AWS Region. You will receive a notification before the mandatory upgrade or required patch.AWS requires at least a 30-minute window in your instance's weekly schedule to confirm that all instances have the latest patches and upgrades. During the maintenance window, tasks are performed on clusters and instances. For the security and stability of your data, maintenance can cause instances to be unavailable.Maintenance track can't be set to trailing for a Redshift cluster with the most current cluster versionIf your cluster maintenance track isn't changing to Trailing, it's because your cluster is already using the most current approved cluster version. You must wait until the next new release becomes available for the current version to trail. After a new cluster version is released, you can change your cluster's maintenance track to Trailing and it will stay in Trailing for future maintenance. 
For more information, see Choosing cluster maintenance tracks.Redshift provisioned cluster maintenance track set to previewIf your cluster maintenance track is set to use the Preview track, then switching from one Preview track to another isn't allowed.If you restore a new Redshift cluster from a snapshot of an older cluster that used the Preview track, the following happens:The restored Redshift cluster inherits the source cluster’s maintenance track.The restored Redshift cluster can’t be changed to a different type of Preview maintenance track.Redshift provisioned cluster maintenance track set to current or trailingIf Current or Trailing is selected for a provisioned Redshift cluster, then the maintenance track can't be changed to the Preview track.Related informationRolling back the cluster versionWhy didn't Amazon Redshift perform any upgrades during the maintenance window?Follow"
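You can also check and change the maintenance track with the AWS CLI. The following is a minimal sketch, assuming a provisioned cluster with the hypothetical identifier my-redshift-cluster; the change succeeds only when none of the restrictions described above apply:

# Check the cluster's current maintenance track
aws redshift describe-clusters --cluster-identifier my-redshift-cluster \
  --query "Clusters[0].MaintenanceTrackName"

# Switch the cluster to the trailing track
aws redshift modify-cluster --cluster-identifier my-redshift-cluster \
  --maintenance-track-name trailing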
https://repost.aws/knowledge-center/redshift-change-maintenance-track
How can I serve multiple domains from a CloudFront distribution over HTTPS?
I want to serve multiple domains from an Amazon CloudFront distribution over HTTPS.
"I want to serve multiple domains from an Amazon CloudFront distribution over HTTPS.ResolutionTo serve multiple domains from CloudFront over HTTPS, add the following values to your distribution settings:Enter all domain names in the Alternate Domain Names (CNAMEs) field. For example, to use the domain names example1.com and example2.com, enter both domain names in Alternate Domain Names (CNAMEs).Note: Choose Add item to add each domain name on a new line.Add your SSL certificate that covers all the domain names. You can add a certificate that's requested with AWS Certificate Manager (ACM). Or, you can add a certificate that's imported to either AWS Identity and Access Management (IAM) or ACM.Note: It's a best practice to import your certificate to ACM. However, you can also import your certificate in the IAM certificate store.For each the domain name, configure your DNS service so that the alternate domain names route traffic to the CloudFront domain name for your distribution. For example, configure example1.com and example2.com to route traffic to d111111abcdef8.cloudfront.net.Note: You can't use CloudFront to route to a specific origin based on the alternate domain name. CloudFront natively supports routing to a specific origin based only on the path pattern. However, you can use Lambda@Edge to route to an origin based on the Host header. For more information, see Dynamically route viewer requests to any origin using Lambda@Edge.Related informationValues that you specify when you create or update a distributionUsing custom URLs by adding alternate domain names (CNAMEs)Follow"
https://repost.aws/knowledge-center/multiple-domains-https-cloudfront
"How can I determine why I was charged for CloudWatch usage, and then how can I reduce future charges?"
"I'm seeing high Amazon CloudWatch charges in my AWS bill. How can I see why I was charged for CloudWatch usage, and then how can I reduce future charges?"
"I'm seeing high Amazon CloudWatch charges in my AWS bill. How can I see why I was charged for CloudWatch usage, and then how can I reduce future charges?Short descriptionReview your AWS Cost and Usage reports to understand your CloudWatch charges. Look for charges for the following services.Note: Items in bold are similar to what you might see in your reports. In your reports, region represents the abbreviation for your AWS Regions.Custom metrics: MetricStorage region-CW:MetricMonitorUsageCloudWatch metric API calls:API Name region-CW:RequestsGetMetricData region-CW:GMD-Requests/MetricsCloudWatch alarms:Unknown region-CW:AlarmMonitorUsageUnknown region-CW:HighResAlarmMonitorUsageCloudWatch dashboards: DashboardHour DashboardsUsageHour(-Basic)CloudWatch Logs:PutLogEvents region-DataProcessing-BytesPutLogEvents region-VendedLog-BytesHourlyStorageMetering region-TimedStorage-ByteHrsCloudWatch Contributor Insights:Contributor Insights Rules: region-CW:ContributorInsightRulesContributor Insights matched log events: region-CW:ContributorInsightEventsCloudWatch Synthetics canary runs: region-CW:Canary-runsWhen you understand what you were charged for and why, use the following recommendations to reduce future costs by adjusting your CloudWatch configuration.To easily monitor your AWS costs in the future, enable billing alerts.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Detailed monitoringCharges are incurred by detailed CloudWatch monitoring for Amazon Elastic Compute Cloud (Amazon EC2) instances, Auto Scaling group launch configurations, or API gateways.To reduce costs, turn off detailed monitoring of instances, Auto Scaling group launch configurations, or API gateways, as appropriate.Custom metricsCharges are incurred by monitoring more than ten custom metrics. Custom metrics include those that you created as well as those used by tools such as the CloudWatch agent and application or OS data from EC2 instances.Request metrics for Amazon Simple Storage Service (Amazon S3) and Amazon Simple Email Service (Amazon SES) events sent to CloudWatch incur charges.PutMetricData calls for a custom metric can also incur charges.Amazon Kinesis Data Streams enhanced (shard-level) metrics and AWS Elastic Beanstalk enhanced health reporting metrics sent to CloudWatch incur charges.To reduce costs, turn off monitoring of custom metrics, as appropriate. To show custom metrics only, enter NOT AWS in Search for any metric, dimension or resource ID box of the CloudWatch console.CloudWatch metric API callsCharges vary by CloudWatch metric API. API calls that exceed the AWS Free Tier limit incur charges. GetMetricData and GetMetricWidgetImage aren't counted in the AWS Free Tier.Third-party monitoring tools can increase costs because they perform frequent API calls.To reduce costs:Make ListMetrics calls through the console for free rather than making them through the AWS CLI.Batch multiple PutMetricData requests into one API call. Also consider pre-aggregating metric data into a StatisticSet. Using these best practices reduces the API call volume and corresponding charges are reduced.In use cases involving a third-party monitoring tool, make sure that you are retrieving only metrics that are actively being monitored or that are being used by workloads. Reducing the retrieved metrics reduces the amount charged. 
You can also consider using metric streams as an alternative solution, and then evaluate which deployment is the most cost effective. For more information, see Should I use GetMetricData or GetMetricStatistics for CloudWatch metrics? Also be sure to review costs incurred by third-party monitoring tools.CloudWatch alarmsCharges are incurred by the number of metrics associated with a CloudWatch alarm. For example, if you have a single alarm with multiple metrics, you're charged for each metric.To reduce costs, remove unnecessary alarms.CloudWatch dashboardsCharges are incurred when you exceed three dashboards (with up to 50 metrics).Calls to dashboard-related APIs through the AWS CLI or an SDK also incur charges after requests exceed the AWS Free Tier limit.Exception: GetMetricWidgetImage always incurs charges.To reduce costs, delete unnecessary dashboards. If you're using the AWS Free Tier, keep your total number of dashboards to three or less. Also be sure to keep the total number of metrics across all dashboards to less than 50. Make dashboard-related API calls through the console for free rather than making them through the AWS CLI or an SDK.CloudWatch LogsCharges are incurred by ingestion, archival storage, and analysis of Amazon CloudWatch Logs.Ingestion charges reflect the volume of log data ingested by the CloudWatch Logs service. The CloudWatch metric IncomingBytes reports on the volume of log data processed by the service. By visualizing this metric in a CloudWatch graph or dashboard, you can monitor the volume of logs generated by various workloads. If high CloudWatch Logs ingestion charges occur, follow the guidance in Which Log Group is causing a sudden increase in my CloudWatch Logs bill?To reduce ingestion costs, you can re-evaluate logging levels and eliminate the ingestion of unnecessary logs. Archival charges are related to the log storage costs over time. The retention policy determines how long CloudWatch Logs keeps the data. You can create a retention policy so that CloudWatch automatically deletes data older than the set retention period. This limits the data retained over time. The default retention policy on log groups is set to Never Expire. This setting means that CloudWatch retains data indefinitely. To reduce storage costs, consider changing the retention policy (for example, you can set the retention policy to keep data for 1 week, 1 month, and so on).Analysis charges occur when Log Insights is used to query logs. The charge is based on the volume of data scanned in order to provide query results. The Log Insights console provides a history of previously run queries. To reduce analysis charges, you can review the Log Insights query history and set queries to run over shorter timeframes. This reduces the amount of data scanned.CloudWatch Contributor InsightsCharges are incurred when you exceed one Contributor Insights rule per month, or more than 1 million log events match the rule per month.To reduce costs, view your Contributor Insights reports and remove any unnecessary rules.CloudWatch SyntheticsCharges are incurred when you exceed 100 canary runs per month using CloudWatch Synthetics.To reduce costs, delete any unnecessary canaries.Related informationAmazon CloudWatch pricingAWS services that publish CloudWatch metricsMonitoring metrics with Amazon CloudWatchHow can I determine why I was charged for EventBridge usage, and then how can I reduce future charges?Follow"
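As an example of reducing archival charges, you can set a retention policy from the AWS CLI. The following is a minimal sketch, assuming a hypothetical log group named my-log-group:

# List log groups with their stored bytes to find the largest ones
aws logs describe-log-groups --query "logGroups[*].[logGroupName,storedBytes]" --output table

# Keep log events for 30 days instead of the default Never Expire setting
aws logs put-retention-policy --log-group-name my-log-group --retention-in-days 30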
https://repost.aws/knowledge-center/cloudwatch-understand-and-reduce-charges
How do I troubleshoot a failed Spark step in Amazon EMR?
I want to troubleshoot a failed Apache Spark step in Amazon EMR.
"I want to troubleshoot a failed Apache Spark step in Amazon EMR.Short descriptionTo troubleshoot failed Spark steps:For Spark jobs submitted with --deploy-mode client: Check the step logs to identify the root cause of the step failure.For Spark jobs submitted with --deploy-mode cluster: Check the step logs to identify the application ID. Then, check the application master logs to identify the root cause of the step failure.ResolutionClient mode jobsWhen a Spark job is deployed in client mode, the step logs provide the job parameters and step error messages. These logs are archived to Amazon Simple Storage Service (Amazon S3). For example:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/controller.gz: This file contains the spark-submit command. Check this log to see the parameters for the job.s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/stderr.gz: This file provides the driver logs. (When the Spark job runs in client mode, the Spark driver runs on the master node.)To find the root cause of the step failure, run the following commands to download the step logs to an Amazon Elastic Compute Cloud (Amazon EC2) instance. Then, search for warnings and errors:#Download the step logs:aws s3 sync s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/ s-2M809TD67U2IA/#Open the step log folder:cd s-2M809TD67U2IA/#Uncompress the log file:find . -type f -exec gunzip {} \;#Get the yarn application id from the cluster mode log:grep "Client: Application report for" * | tail -n 1#Get the errors and warnings from the client mode log:egrep "WARN|ERROR" *For example, this file:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stderr.gzindicates a memory problem:19/11/04 05:24:45 ERROR SparkContext: Error initializing SparkContext.java.lang.IllegalArgumentException: Executor memory 134217728 must be at least 471859200. Please increase executor memory using the --executor-memory option or spark.executor.memory in Spark configuration.Use the information in the logs to resolve the error.For example, to resolve the memory issue, submit a job with more executor memory:spark-submit --deploy-mode client --executor-memory 4g --class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jarCluster mode jobs1.    Check the stderr step log to identify the ID of the application that's associated with the failed step. The step logs are archived to Amazon S3. For example, this log:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/steps/s-2M809TD67U2IA/stderr.gzidentifies application_1572839353552_0008:19/11/04 05:24:42 INFO Client: Application report for application_1572839353552_0008 (state: ACCEPTED)2.    Identify the application master logs. When the Spark job runs in cluster mode, the Spark driver runs inside the application master. The application master is the first container that runs when the Spark job executes. 
The following is an example list of Spark application logs.In this list, container_1572839353552_0008_01_000001 is the first container, which means that it's the application master.s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stderr.gzs3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stdout.gzs3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000002/stderr.gzs3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000002/stdout.gz3.    After you identify the application master logs, download the logs to an Amazon EC2 instance. Then, search for warnings and errors. For example:#Download the Spark application logs:aws s3 sync s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/ application_1572839353552_0008/#Open the Spark application log folder:cd application_1572839353552_0008/ #Uncompress the log file:find . -type f -exec gunzip {} \;#Search for warning and errors inside all the container logs. Then, open the container logs returned in the output of this command.egrep -Ril "ERROR|WARN" . | xargs egrep "WARN|ERROR"For example, this log:s3://aws-logs-111111111111-us-east-1/elasticmapreduce/j-35PUYZBQVIJNM/containers/application_1572839353552_0008/container_1572839353552_0008_01_000001/stderr.gzindicates a memory problem:19/11/04 05:24:45 ERROR SparkContext: Error initializing SparkContext.java.lang.IllegalArgumentException: Executor memory 134217728 must be at least 471859200. Please increase executor memory using the --executor-memory option or spark.executor.memory in Spark configuration.4.    Resolve the issue identified in the logs. For example, to fix the memory issue, submit a job with more executor memory:spark-submit --deploy-mode cluster --executor-memory 4g --class org.apache.spark.examples.SparkPi /usr/lib/spark/examples/jars/spark-examples.jar 1000Related informationAdding a Spark stepFollow"
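Before digging through the archived logs, it can also help to pull the step status and failure summary directly. The following is a minimal sketch that reuses the example cluster and step IDs from above:

aws emr describe-step --cluster-id j-35PUYZBQVIJNM --step-id s-2M809TD67U2IA \
  --query "Step.Status"

For a failed step, the FailureDetails section of the output points to the reason, message, and log file location.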
https://repost.aws/knowledge-center/emr-spark-failed-step
How can I configure on-premises servers to use temporary credentials with SSM Agent and unified CloudWatch Agent?
I have a hybrid environment with on-premises servers that use AWS Systems Manager Agent (SSM Agent) and the unified Amazon CloudWatch Agent installed. How can I configure my on-premises servers to use only temporary credentials?
"I have a hybrid environment with on-premises servers that use AWS Systems Manager Agent (SSM Agent) and the unified Amazon CloudWatch Agent installed. How can I configure my on-premises servers to use only temporary credentials?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.The unified CloudWatch Agent can be installed to on-premises hosts for improved performance monitoring. You can do this by specifying AWS Identity and Accesses Management (IAM) credentials that are written to a configuration file.However, some use cases might require the greater security of rotating credentials that aren’t saved to local files.In this more secure deployment scenario, the SSM Agent allows the on-premises host to assume an IAM role. Then the unified CloudWatch Agent can be configured to use this IAM role to publish metrics and logs to CloudWatch.To configure your on-premises servers to use only temporary credentials:1.    Integrate the on-premises host with AWS System Manager.2.    Attach the AWS managed IAM CloudWatchAgentServerPolicy to the IAM Service Role for a Hybrid Environment. Now the unified CloudWatch Agent has the permissions to post metrics and logs to CloudWatch.3.    Install or update the AWS CLI.4.    Confirm that the IAM Role is attached to the on-premises host:$ aws sts get-caller-identity{ "UserId": "AROAJXQ3RVCBOTUDZ2AWM:mi-070c8d5758243078f", "Account": "123456789012", "Arn": "arn:aws:sts::123456789012:assumed-role/SSMServiceRole/mi-070c8d5758243078f"}5.    Install the unified CloudWatch Agent.6.    Update the common-config.toml file to:Point to the credentials generated by SSM AgentSet a proxy configuration (if applicable)Note: These credentials are refreshed by the SSM Agent every 30 minutes.Linux:/opt/aws/amazon-cloudwatch-agent/etc/common-config.toml/etc/amazon/amazon-cloudwatch-agent/common-config.toml[credentials] shared_credential_profile = "default" shared_credential_file = "/root/.aws/credentials"Windows:$Env:ProgramData\Amazon\AmazonCloudWatchAgent\common-config.toml[credentials] shared_credential_profile = "default" shared_credential_file = "C:\\Windows\\System32\\config\\systemprofile\\.aws\\credentials"7.    Choose the AWS Region that the unified CloudWatch Agent metrics will post to.8.    Add the region in the credential file referenced by the SSM Agent in Step 5. This corresponds to the file associated with the shared_credential_file.$ cat /root/.aws/config [default]region = "eu-west-1"Note: Be sure to replace eu-west-1 with your target Region.9.    Depending on your host operating system, you might have to update permissions to allow the unified CloudWatch Agent to read the SSM Agent credentials file. Windows hosts run both agents as SYSTEM user and no further action is required.For Linux hosts, by default the unified CloudWatch Agent runs as the root user. The unified CloudWatch Agent can be configured to run as a non-privileged user with the run_as_user option. When using this option, you must grant the unified CloudWatch Agent access to the credentials file.10.    (Windows only) Change the Startup type of the unified CloudWatch Agent service to Automatic (Delayed). 
This starts the unified CloudWatch Agent service after the SSM Agent service during boot.Related informationSetting up AWS Systems Manager for hybrid environmentsDownload the CloudWatch agent on an on-premises serverInstall and configure the unified CloudWatch Agent to push metrics and logs from an EC2 instance to CloudWatchFollow"
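For step 9 on Linux, the following is a minimal sketch of one way to grant a non-root unified CloudWatch Agent read access to the SSM Agent credentials file. It assumes the agent is configured with run_as_user set to a cwagent user and that the credentials file is /root/.aws/credentials, as in the example above; adjust the user name and path for your host.

#Allow the cwagent user to traverse the directories and read the credentials file:
sudo setfacl -m u:cwagent:x /root /root/.aws
sudo setfacl -m u:cwagent:r /root/.aws/credentials

#Verify that the agent user can read the file:
sudo -u cwagent cat /root/.aws/credentials

#Restart the agent so that it picks up the configuration:
sudo systemctl restart amazon-cloudwatch-agent

Because the SSM Agent rewrites this file when it rotates credentials, confirm that the agent can still read the file after a rotation.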
https://repost.aws/knowledge-center/cloudwatch-on-premises-temp-credentials
Why can't I connect to my resources over a Transit Gateway peering connection?
"I have inter-Region AWS Transit Gateway peering set up between my source virtual private cloud (VPC) and remote VPC. However, I am unable to connect my VPC resources over the peering connection. How can I troubleshoot this?"
"I have inter-Region AWS Transit Gateway peering set up between my source virtual private cloud (VPC) and remote VPC. However, I am unable to connect my VPC resources over the peering connection. How can I troubleshoot this?ResolutionConfirm that the source and remote VPCs are attached to the correct transit gatewayUse the following steps at the source VPC and the remote VPC:Open the Amazon Virtual Private Cloud (Amazon VPC) console.From the navigation pane, choose Transit gateway attachments.Confirm that:The VPC attachments are associated with the correct Transit gateway ID that you used to set up peering.The source VPC and the transit gateway that it's attached to are in the same Region.The remote VPC and the transit gateway that it's attached to are in the the same Region.Find the transit gateway route table that the source and the remote VPC attachments are associated withOpen the Amazon VPC console and choose Transit gateway attachments.Select the VPC attachment.In the Associated route table ID column, note the transit gateway route table ID.Find the transit gateway route table that the source and the remote peering attachments are associated withOpen the Amazon VPC console and choose Transit gateway attachments.Select the Peering attachment.In the Associated route table ID column, note the value transit gateway route table ID.Confirm that source VPC attachment associated with a transit gateway has a static route for remote VPC that points to the transit gateway peering attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the Route table. This is the value that you noted in the section Find the transit gateway route table that the source and the remote VPC attachments are associated withChoose the Routes tab.Verify the routes for the remote VPC CIDR block that point to the transit gateway peering attachment.Confirm that remote VPC attachment associated with a transit gateway route table has a static route for source VPC that points to the transit gateway peering attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the Route table. This is the value that you noted in the section Find the transit gateway route table that the source and the remote VPC attachments are associated with.Choose the Routes tab.Verify the routes for the source VPC CIDR block that point to the transit gateway peering attachment.Note: To route traffic between the peered transit gateways, add a static route to the transit gateway route table that points to the transit gateway peering attachment.Confirm that the source peering attachment associated transit gateway route table has a route for the source VPC that points to the source VPC attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the route table. This is the value that you noted in the section Find the transit gateway route table that the source and the remote peering attachments are associated with.Choose the Routes tab.Verify the routes for the source VPC CIDR block pointing to source VPC attachment.Confirm that the remote peering attachment associated transit gateway route table has a route for the remote VPC that points to the remote VPC attachmentOpen the Amazon VPC console and choose Transit gateway route tables.Select the route table. 
This is the value that you noted in the section Find the transit gateway route table that the source and the remote peering attachments are associated with.Choose the Routes tab.Verify that there are routes for the remote VPC CIDR block pointing to the remote VPC attachment.Confirm that the routes for the source and remote VPCs are in the VPC subnet route table with the gateway set to Transit GatewayOpen the Amazon VPC console.From the navigation pane, choose Route tables.Select the route table used by the instance.Choose the Routes tab.Under Destination, verify that there's a route for the source/remote VPC CIDR block. Then, verify that Target is set to the Transit Gateway ID.Confirm that the source and remote Amazon EC2 instances' security groups and network access control lists (ACLs) allow trafficOpen the Amazon EC2 console.From the navigation pane, choose Instances.Select the instance where you're performing the connectivity test.Choose the Security tab.Verify that the Inbound rules and Outbound rules allow traffic.Open the Amazon VPC console.From the navigation pane, choose Network ACLs.Select the network ACL that's associated with the subnet where your instance is located.Select the Inbound rules and Outbound rules. Verify that the rules allow the traffic needed by your use case.Confirm that the network ACL associated with the transit gateway network interface allows trafficOpen the Amazon EC2 console.From the navigation pane, choose Network Interfaces.In the search bar, enter Transit gateway. The results show all of the transit gateway's network interfaces.Note the Subnet ID that's associated with the location where the transit gateway interfaces were created.Open the Amazon VPC console.From the navigation pane, choose Network ACLs.In the Filter network ACLs search bar, enter the subnet ID that you noted earlier. This shows the network ACL associated with the subnet.Confirm that the Inbound rules and Outbound rules of the network ACL allow traffic to or from the source or remote VPC.Follow"
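The route table checks above can also be performed with the AWS CLI instead of the console. The following is a minimal sketch; the transit gateway route table ID and the CIDR block are hypothetical placeholders, and each command must run in the Region of the transit gateway that owns the route table.

#List the attachments that are associated with a transit gateway route table:
aws ec2 get-transit-gateway-route-table-associations --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0

#Check whether the route table contains a route that covers the remote VPC CIDR block, and note which attachment it points to:
aws ec2 search-transit-gateway-routes --transit-gateway-route-table-id tgw-rtb-0123456789abcdef0 --filters "Name=route-search.longest-prefix-match,Values=10.1.0.0/16"

Run the same checks against each of the route tables that you noted earlier, on both sides of the peering connection, and confirm that the routes point to the expected peering or VPC attachments.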
https://repost.aws/knowledge-center/transit-gateway-peering-connection