Question | Description | Answer | Link
---|---|---|---|
How do I resolve the DELETE_FAILED error when deleting the capacity provider in Amazon ECS? | I get an error when I use the AWS Command Line Interface (AWS CLI) or an API to delete a capacity provider for my Amazon Elastic Container Service (Amazon ECS) cluster. | "I get an error when I use the AWS Command Line Interface (AWS CLI) or an API to delete a capacity provider for my Amazon Elastic Container Service (Amazon ECS) cluster.Short descriptionIf you try to delete a capacity provider for your cluster using either the AWS CLI or an API, you might receive one of the following errors:"updateStatus": "DELETE_FAILED""updateStatusReason": "The capacity provider cannot be deleted because it is associated with cluster: your-cluster-name. Remove the capacity provider from the cluster and try again."You might receive these errors for the following reasons:The capacity provider that you're trying to delete is in use by an Amazon ECS service in the capacity provider strategy. The AWS Management Console doesn't let you delete a capacity provider that's in use by an Amazon ECS service. In this scenario, you receive this error message: "The specified capacity provider is in use and cannot be removed" in the console. You can disassociate an existing capacity provider from a cluster only if it's not in use by any existing tasks. If you run the DeleteCapacityProvider AWS CLI command, then the capacity provider transitions into DELETE_FAILED status. To resolve this issue, complete the steps in the Check if your capacity provider is in use by an Amazon ECS service in the capacity provider strategy section.Your capacity provider is used by the default strategy. If you don't choose a capacity provider strategy or launch type when you run a task or create a service, then a capacity provider strategy is associated with your cluster by default. However, the association happens only if the capacity provider is set as the default capacity provider strategy for the cluster. You can delete only the capacity providers that aren't associated with a cluster. To resolve this issue, complete the steps in the Check if your capacity provider is set in the default capacity provider strategy for the cluster section.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.ResolutionCheck if your capacity provider is in use by an Amazon ECS service in the capacity provider strategy1. If you have multiple services in a cluster, then use the following script to verify the services that are using the capacity provider.Before you run the script, do the following:Set the cluster and capacity provider parameters to your values. Set your AWS CLI credentials to your AWS Region. Install jq from the jq website.#! /bin/bashcluster=clustername capacityprovider=capacityprovidernameservices=$(aws ecs list-services --cluster ${cluster} | jq --raw-output '.serviceArns[]')aws ecs describe-services \ --cluster ${cluster} \ --services ${services} \ | jq -r --arg capacityprovider "${capacityprovider}" \ '.services[] | select(.capacityProviderStrategy[]?.capacityProvider == $capacityprovider) | .serviceName'Note: If the script returns a blank output, then none of the services in the cluster are using the capacity provider. Skip to the Check if your capacity provider is set in the default capacity provider strategy for the cluster section.2. Update the services that are returned in the output from the script with a new capacity provider.3. 
Delete the old capacity provider.Important: You can't update a service using a capacity provider strategy or launch type. You must update the service with another capacity provider.Check if your capacity provider is set in the default capacity provider strategy for the cluster1. To find the default capacity provider for your cluster, run the following command:$ aws ecs describe-clusters --cluster mycluster | jq '.clusters[].defaultCapacityProviderStrategy'[ { "capacityProvider": "oldCP", "weight": 0, "base": 0 }]2. To delete the capacity provider, you must modify the default capacity provider strategy for your cluster using either the Amazon ECS console or the AWS CLI.Using the Amazon ECS console:1. Open the Amazon ECS console.2. In the navigation pane, choose Clusters, and then choose your cluster.3. Choose Update Cluster.Using the AWS CLI:$ aws ecs put-cluster-capacity-providers \ --cluster mycluster \ --capacity-providers newCP \ --default-capacity-provider-strategy capacityProvider=newCP \ --region us-east-1$ aws ecs delete-capacity-provider --capacity-provider oldCP$ aws ecs describe-capacity-providers --capacity-provider oldCPNote: In the preceding code example, replace mycluster with your cluster. Replace newCP with the new capacity provider that you want to add. Replace oldCP with the capacity provider that you want to delete.4. Delete the old capacity provider.Any existing capacity providers associated with a cluster that are omitted from the PutClusterCapacityProviders API call are disassociated from the cluster. The same rules apply to the cluster's default capacity provider strategy.Follow" | https://repost.aws/knowledge-center/ecs-capacity-provider-error |
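As a rough AWS CLI sketch of steps 2 and 3 above (moving each affected service onto a new capacity provider, then deleting the old one); the cluster, service, and capacity provider names are placeholder assumptions:

```bash
# Point the service at the new capacity provider; --force-new-deployment
# replaces running tasks so none remain on the old provider.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --capacity-provider-strategy capacityProvider=newCP,weight=1 \
  --force-new-deployment

# Once no service or default strategy references it, delete the old provider
# and confirm that it no longer shows DELETE_FAILED.
aws ecs delete-capacity-provider --capacity-provider oldCP
aws ecs describe-capacity-providers --capacity-providers oldCP
```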
How do I set up cross-account access from Amazon QuickSight to an Amazon S3 bucket in another account? | I'm trying to create a dataset in Amazon QuickSight using data from an Amazon Simple Storage Service (Amazon S3) bucket in another account. How can I do this? | "I'm trying to create a dataset in Amazon QuickSight using data from an Amazon Simple Storage Service (Amazon S3) bucket in another account. How can I do this?Short descriptionComplete the following steps to create cross-account access from Amazon QuickSight (Account A) to an encrypted Amazon S3 bucket in another account (Account B):Update your S3 bucket policy in Account B (where your S3 bucket resides).Add the S3 bucket as a resource that the QuickSight service role (Account A) can access.Allow the QuickSight service role access to the AWS Key Management Service (KMS) key for the S3 bucket.Note: This article assumes that your S3 bucket is encrypted. It's also a best practice to encrypt your S3 bucket with an AWS KMS key. For more information about how to enable default encryption for Amazon S3, see Enabling Amazon S3 default bucket encryption.ResolutionUpdate your S3 bucket policy in Account BTo set up cross-account access from QuickSight to Amazon S3, complete the following steps:1. Update the bucket policy of your S3 bucket in Account B. For example:{ "Version": "2012-10-17", "Id": "BucketPolicy", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<Account A>:role/service-role/aws-quicksight-service-role-v0" }, "Action": [ "s3:ListBucket", "s3:GetObject", "s3:GetObjectVersion" ], "Resource": [ "arn:aws:s3:::cross-account-qstest-bucket", "arn:aws:s3:::cross-account-qstest-bucket/*" ] } ]}Note: If the aws-quicksight-s3-consumers-role-v0 role exists in Account A, then make sure to use this role instead. Replace aws-quicksight-service-role-v0 with aws-quicksight-s3-consumers-role-v0 to avoid connection issues with Amazon S3.2. Add the QuickSight service role from Account A to the list of users that can access the S3 bucket's AWS KMS key:aws kms create-grant --key-id aws_kms_key_arn --grantee-principal quickSight_role_arn --operations DecryptNote: Replace aws_kms_key_arn with your AWS KMS key's ARN, and quicksight_role_arn with your QuickSight role's ARN.To get your AWS KMS key ARN:Open the Amazon S3 console.Go to the S3 bucket that contains your data file.Choose the Properties tab. The AWS KMS key ARN is located under Default encryption.To get your QuickSight service role ARN:Open the AWS Identity Access Management (IAM) console in Account A.In the left navigation pane, choose Roles.Search for aws-quicksight-service-role.Select your QuickSight service role, and copy its ARN.Note: If the aws-quicksight-s3-consumers-role-v0 role exists in Account A, make sure to use this role instead. 
Otherwise, you might receive an error when you try to connect to Amazon S3.Add the S3 bucket as a resource that the QuickSight service role can accessTo allow the QuickSight service role access to the S3 bucket in Account B, complete the following steps:Open your Amazon QuickSight console.Choose Manage QuickSight.Choose Security & permissions.Choose Add or remove.Choose Details.Choose Select S3 buckets.Choose the S3 buckets that you can access across AWS tab to verify that your S3 bucket is listed for QuickSight access.(Optional) If your S3 bucket isn't listed, then add your bucket under Use a different bucket.Choose Finish.Allow the QuickSight service role access to the AWS KMS key for the S3 bucketAdd the following inline IAM policy to the QuickSight service role in Account A:{ "Version": "2012-10-17", "Statement": [ { "Sid": "ExampleStmt3", "Effect": "Allow", "Action": [ "kms:Decrypt" ], "Resource": ""arn:aws:kms:us-east-1:<account ID of your S3 bucket>:key/<KEYID>" } ]}Note: The preceding inline policy allows the QuickSight service role to access your AWS KMS key in Account B. Replace ExampleStmt3 with your statement ID.Important: If the aws-quicksight-s3-consumers-role-v0 role exists in Account A, then you must attach the AWS KMS policy to the role. The AWS KMS policy decrypts the data in your S3 bucket. If you attach the updated role policy to your QuickSight service role instead, then you might encounter a permissions error. For information on how to resolve the permissions error, see How do I troubleshoot AWS resource permission errors in Amazon QuickSight?Additional considerationsWhen you're setting up cross-account access from QuickSight to an S3 bucket in another account, consider the following:Check the IAM policy assignments in your QuickSight account. The IAM role policies must grant the QuickSight service role access to the S3 bucket. For more information about viewing your policy assignments, see Setting granular access to AWS services through IAM.Use your manifest file to connect to your S3 bucket, and create a dataset using S3 files. Make sure to use a supported format for your S3 manifest file.Related informationEditing keysI can't connect to Amazon S3Troubleshooting Amazon QuickSightViewing a key policy (console)Follow" | https://repost.aws/knowledge-center/quicksight-cross-account-s3 |
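A brief CLI sketch of the two lookups described above (the bucket's KMS key and the QuickSight role ARN). The bucket name comes from the example policy, and the role name is the default one, so query aws-quicksight-s3-consumers-role-v0 instead if that role exists in Account A:

```bash
# In Account B: find the KMS key ARN used for the bucket's default encryption.
aws s3api get-bucket-encryption --bucket cross-account-qstest-bucket

# In Account A: get the QuickSight service role ARN to use as the grantee
# principal in the kms create-grant call.
aws iam get-role \
  --role-name aws-quicksight-service-role-v0 \
  --query 'Role.Arn' --output text
```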
What do I do if I get the "Maximum sending rate exceeded" or "Daily sending quota exceeded" error message from Amazon SES? | What do I do if I get the "Maximum sending rate exceeded" or "Daily sending quota exceeded" error message from Amazon Simple Email Service (Amazon SES)? | "What do I do if I get the "Maximum sending rate exceeded" or "Daily sending quota exceeded" error message from Amazon Simple Email Service (Amazon SES)? Resolution: If you exceed the maximum sending quota per day or per second, Amazon SES returns the "Maximum sending rate exceeded" or "Daily sending quota exceeded" error message. To address the error: Use the Amazon SES console to review your account's current sending quotas. Review your sending application and requirements to see if you can stay within the quotas by downscaling or implementing an exponential backoff mechanism. Note: If you receive a "Maximum sending rate exceeded" error, you can retry your request. If your current sending quotas don't meet your sending requirements, you can do one of the following: Move out of the Amazon SES sandbox: All new Amazon SES accounts are in the Amazon SES sandbox. To see if your account is in the sandbox, use the Amazon SES console. In the sandbox, your account can send a maximum of 200 messages per day and a maximum of one message per second. Note: If your account is in the Amazon SES sandbox, then you can send messages only to and from verified email addresses. To request to move your account out of the sandbox, open a support case. In the same support case, you can also request an increase to your sending quota by filling out the Requests section. Request an increase in your sending quota: If your account is in production mode (not sandbox), you can monitor your quota usage and request a sending quota increase when you need it. To request an increase, open a support case and complete the fields in the Requests section. Keep the following considerations in mind: If your Amazon SES account is in production mode and you're sending high-quality emails, you might qualify for an automatic increase in your sending quota. When you request a new sending quota, request only the amount that you need; the requested increase isn't guaranteed. Your sending quota increase can be denied if your Amazon SES account is under review." | https://repost.aws/knowledge-center/ses-sending-rate-quota-exceeded |
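To check the quotas mentioned above from the AWS CLI, a minimal sketch:

```bash
# Returns Max24HourSend (daily quota), MaxSendRate (messages per second),
# and SentLast24Hours for the current account and Region.
aws ses get-send-quota
```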
How can I use Data Pipeline to back up a DynamoDB table to an S3 bucket that's in a different account? | I want to use AWS Data Pipeline to back up an Amazon DynamoDB table to an Amazon Simple Storage Service (Amazon S3) bucket that's in a different AWS account. | "I want to use AWS Data Pipeline to back up an Amazon DynamoDB table to an Amazon Simple Storage Service (Amazon S3) bucket that's in a different AWS account.Short descriptionNote: The source account is the account where the DynamoDB table exists. The destination account is the account where the Amazon S3 bucket exists.In the source account, attach an AWS Identity and Access Management (IAM) policy that grants Amazon S3 permissions to the DataPipeline service role and DataPipeline resource role.In the destination account, create a bucket policy that allows the DataPipeline service role and DataPipeline resource role in the source account to access the S3 bucket.In the source account, create a pipeline using the Export DynamoDB table to S3 Data Pipeline template.Add the BucketOwnerFullControl or AuthenticatedRead canned access control list (ACL) to the Step field of the pipeline's EmrActivity object.Activate the pipeline to back up the DynamoDB table to the S3 bucket in the destination account.Create a DynamoDB table in the destination account.To restore the source table to the destination table, create a pipeline using the Import DynamoDB backup data from S3 Data Pipeline template.ResolutionAttach an IAM policy to the Data Pipeline default roles1. In the source account, open the IAM console.2. Choose Policies, and then choose Create policy.3. Choose the JSON tab, and then enter an IAM policy similar to the following. Replace awsdoc-example-bucket with the name of the S3 bucket in the destination account.{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Action": [ "s3:AbortMultipartUpload", "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket", "s3:ListBucketMultipartUploads", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::awsdoc-example-bucket/*", "arn:aws:s3:::awsdoc-example-bucket" ] } ]}4. Choose Review policy.5. Enter a Name for the policy, and then choose Create policy.6. In the list of policies, select the check box next to the name of the policy that you just created. You can use the Filter menu and the search box to filter the list of policies.7. Choose Policy actions, and then choose Attach.8. Select the DataPipeline service role and DataPipeline resource role, and then choose Attach policy.Add a bucket policy to the S3 bucketIn the destination account, create a bucket policy similar to the following. Replace these values in the following example:111122223333: the ID of the Data Pipeline account. For more information, see Finding your AWS account ID.awsdoc-example-bucket: the name of the S3 bucket{ "Version": "2012-10-17", "Id": "", "Statement": [ { "Sid": "", "Effect": "Allow", "Action": "s3:*", "Principal": { "AWS": [ "arn:aws:iam::111122223333:role/DataPipelineDefaultRole", "arn:aws:iam::111122223333:role/DataPipelineDefaultResourceRole" ] }, "Resource": [ "arn:aws:s3:::awsdoc-example-bucket", "arn:aws:s3:::awsdoc-example-bucket/*" ] } ]}Create and activate the pipeline1. In the source account, create a pipeline using the Export DynamoDB table to S3 Data Pipeline template:In the Parameters section, enter the Source DynamoDB table name and the Output S3 folder. Use the format s3://awsdoc-example-bucket/ for the bucket.In the Security/Access section, for IAM roles, choose Default.2. 
Before you Activate the pipeline, choose Edit in Architect.3. Open the Activities section, and then find the EmrActivity object.4. In the Step field, add the BucketOwnerFullControl or AuthenticatedRead canned access control list (ACL). These canned ACLs give the Amazon EMR Apache Hadoop job permissions to write to the S3 bucket in the destination account. Be sure to use the format -Dfs.s3.canned.acl=BucketOwnerFullControl. Put the statement between org.apache.hadoop.dynamodb.tools.DynamoDbExport and #{output.directoryPath}. Example:s3://dynamodb-dpl-#{myDDBRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBExport,-Dfs.s3.canned.acl=BucketOwnerFullControl,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}5. Choose Save, and then choose Activate to activate the pipeline and back up the DynamoDB table to the S3 bucket in the destination account.(Optional) Restore the backup in the destination accountIn the destination account, create a DynamoDB table. The table doesn't have to be empty. However, the import process replaces items that have the same keys as the items in the export file.Create a pipeline using the Import DynamoDB backup data from S3 Data Pipeline template:In the Parameters section, for Input S3 folder, enter the S3 bucket where the DynamoDB backup is stored.In the Security/Access section, for IAM roles, choose Default.Activate the pipeline to restore the backup to the destination table.Related informationBucket owner granting cross-account bucket permissionsManaging IAM policiesFollow" | https://repost.aws/knowledge-center/data-pipeline-account-access-dynamodb-s3 |
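A hedged CLI sketch of the permission steps above; the policy ARN, policy file name, and account ID are placeholder assumptions that stand in for the documents shown in this row:

```bash
# Source account: attach the S3 access policy to both Data Pipeline roles.
aws iam attach-role-policy \
  --role-name DataPipelineDefaultRole \
  --policy-arn arn:aws:iam::111122223333:policy/datapipeline-s3-access
aws iam attach-role-policy \
  --role-name DataPipelineDefaultResourceRole \
  --policy-arn arn:aws:iam::111122223333:policy/datapipeline-s3-access

# Destination account: apply the bucket policy that trusts those roles.
aws s3api put-bucket-policy \
  --bucket awsdoc-example-bucket \
  --policy file://bucket-policy.json
```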
How do I see a list of my Amazon EC2 instances that are connected to Amazon EFS? | I want to see a list of my Amazon Elastic Compute Cloud (Amazon EC2) instances that have mounted an Amazon Elastic File System (Amazon EFS). How do I do that? | "I want to see a list of my Amazon Elastic Compute Cloud (Amazon EC2) instances that have mounted an Amazon Elastic File System (Amazon EFS). How do I do that?Short descriptionThe VPC flow logs are used to track the traffic on the elastic network interface of each Amazon EFS mount target. The flow logs can be pushed to Amazon CloudWatch logs. Using CloudWatch logs insights, the traffic flow on the mount target's elastic network interface is filtered to provide the list of Amazon EC2 instances that have mounted an Amazon EFS in a specific timestamp.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Perform the following steps once. After completing these steps, each time you want to list the IP addresses of the clients mounting the Amazon EFS, run a query to create a current list.Create a log groupOpen the CloudWatch console.In the navigation pane, choose Logs, and then choose Log groups.Choose Create log group.Enter a Log group name, Retention setting and an optional KMS key ARN. You can also add Tags here.Choose Create.Create an Identity and Access Management (IAM) role with permission for publishing flow logs to CloudWatch LogsOpen the IAM console.In the navigation pane, under Access management, choose Roles.Choose Create role and create a new IAM role.The IAM policy that's attached to your IAM role must include the permissions to publish the VPC flow logs to CloudWatch. Similarly, it must have a trust relationship that allows the flow logs service to assume the role.Get the list of elastic network interfaces used by the mount target of your Amazon EFSNote: Amazon EFS has a different mount target for each Availability Zone.Open the Amazon EFS console.Under File systems, choose the specific Amazon EFS, and then choose View details.Click on Network, and note the Network Interface ID for each mount target.Create the flow logsOpen the Amazon EC2 console.Choose Network & Security, and then choose Network Interfaces.Choose all the elastic network interfaces that are being used by the mount target.From the Actions menu, choose Create flow log. Use the following values when creating the flow log:Name: OptionalFilter: Select AllMaximum aggregation interval: Choose from default 10 mins or 1 minDestination: Select Send to CloudWatch logsDestination log group: Choose the log group you createdIAM role: Choose the IAM Role you createdLog record format: Chose from AWS default format or Custom format.Tag: OptionalChoose Create.Monitor the flow log status by choosing the specific elastic network interface that you created a flow log for. At the bottom of the screen, choose Flow logs. Verify that the Status is Active.The first flow log are pushed to CloudWatch Logs after about 10 minutes.Verify that the flow logs are in CloudWatch LogsOpen the CloudWatch console, and then choose Logs.Choose the Log groups created in step 1.Verify that all the log streams you created now appear. 
Each elastic network interface has a different log stream.Run a queryTo run a query in CloudWatch Logs Insights:In the CloudWatch console, choose Logs, and then choose Logs Insights.Choose the log groups that you created from the drop-down menu.Choose the duration that you want to review the flow logs for (5m, 30m, 1h, 3h, 12, or Custom).Enter the query below:fields @timestamp, @message | filter dstPort="2049" | stats count(*) as FlowLogEntries by srcAddr | sort FlowLogEntries descNote: This query reviews all flow logs generated for all mount targets. It filters the logs that have a destination port set to Port=2049 (Amazon EFS clients connect to mount targets on NFS port 2049). It retrieves all unique source IPs (Amazon EFS client IPs), and sorts them by the most active client connections. Activity is determined by the number of entries in the flow log.Choose Run query. The output contains the list of private IPs of all the Amazon EC2 instances where you mounted Amazon EFS.The following is an example of the query output:# srcAddr FlowLogEntries1 111.22.33.44 782 111.55.66.77 363 111.88.99.000 33Run a query using the AWS CLI <br>To run a query from the AWS CLI, follow these steps:After the VPC flow log is set up, you can use an AWS CLI command to run the query.Check that the AWS CLI is updated to the latest version:$ pip install --upgrade awscliCheck that jq is installed:yum install -y jqUse the following AWS CLI query using these query parameters:log-group-name: Enter the log group name you created.start-time / end-time: These values are in Unix/Epoch time. Use the epoch converter to convert human-readable timestamps to Unix/Epoch time.test.json: You can optionally change the json file name each time you run this command. Changing the name makes sure that the previous output isn't merged with the new output.sleep: This value (in seconds) is used as a delay while the CloudWatch Logs insights query is carried out. The value entered depends on how long you want to review the flow logs. If you want to review the logs for a longer duration, such as weeks, then increase the sleep time.aws logs start-query --log-group-name EFS-ENI-Flowlogs --start-time 1643127618 --end-time 1643128901 --query-string 'filter dstPort="2049" | stats count(*) as FlowLogEntries by srcAddr | sort FlowLogEntries desc' > test.json && sleep 10 && jq .queryId test.json | xargs aws logs get-query-results --query-idFollow" | https://repost.aws/knowledge-center/list-instances-connected-to-efs |
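A CLI sketch of the setup above (finding the mount-target network interfaces and creating the flow logs). The file system ID, interface IDs, and IAM role ARN are placeholder assumptions; the log group name matches the example query in this row:

```bash
# List the elastic network interface for each EFS mount target.
aws efs describe-mount-targets \
  --file-system-id fs-12345678 \
  --query 'MountTargets[].NetworkInterfaceId'

# Create flow logs for those interfaces, delivered to the CloudWatch log group.
aws ec2 create-flow-logs \
  --resource-type NetworkInterface \
  --resource-ids eni-0abc1234 eni-0def5678 \
  --traffic-type ALL \
  --log-group-name EFS-ENI-Flowlogs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role
```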
"How do I turn on functions, procedures, and triggers for my Amazon RDS for MySQL DB instance?" | "I want to turn on functions, procedures, and triggers for my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance." | "I want to turn on functions, procedures, and triggers for my Amazon Relational Database Service (Amazon RDS) for MySQL DB instance.ResolutionAmazon RDS is a managed service and doesn't provide system access (SUPER privileges). If you turn on binary logging, then set log_bin_trust_function_creators to true in the custom database (DB) parameter group for your DB instance.If you create a DB instance without specifying a DB parameter group, then Amazon RDS creates a new default DB parameter group. For more information, see Working with parameter groups.Create a DB parameter group.Modify the custom DB parameter group, and then set the parameter: log_bin_trust_function_creators=1Choose Save Changes.Note: Before using the DB parameter group with a DB instance, wait at least 5 minutes.In the navigation pane, choose Databases.Choose the DB instance that you want to associate with the DB parameter group.Choose Modify.Select the parameter group that you want to associate with the DB instance.Reboot the DB instance.Note: The parameter group name changes immediately, but parameter group changes aren't applied until you reboot the instance without failover.If you're already using a custom parameter group, then complete only steps 2–3. The parameter log_bin_trust_function_creators is a dynamic parameter that doesn't require a DB reboot.When you turn on automated backup for a MySQL DB instance, you also turn on binary logging. When creating a trigger, you might receive the following error message:"ERROR 1419 (HY000): You don't have the SUPER privilege and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)"If you receive this error, then modify the log_bin_trust_function_creators parameter to 1. This allows functions, procedures, and triggers on your DB instance.Note: When you set log_bin_trust_function_creators=1, unsafe events might be written to the binary log. Binary logging is statement based (binlog_format=STATEMENT).For more details about the parameter log_bin_trust_function_creators, see log_bin_trust_function_creators and Stored program binary logging in the MySQL documentation.Related informationHow can I resolve 1227 and definer errors when importing data to my Amazon RDS for MySQL DB instance using mysqldump?Modifying parameters in a DB cluster parameter groupFollow" | https://repost.aws/knowledge-center/rds-mysql-functions |
How do I mount an Amazon EFS file system on an Amazon ECS container or task running on Fargate? | I want to mount an Amazon Elastic File System (Amazon EFS) file system on an Amazon Elastic Container Service (Amazon ECS) container or task running on AWS Fargate. | "I want to mount an Amazon Elastic File System (Amazon EFS) file system on an Amazon Elastic Container Service (Amazon ECS) container or task running on AWS Fargate.Short descriptionTo mount an Amazon EFS file system on a Fargate task or container, you must first create a task definition. Then, make that task definition available to the containers in your task across all Availability Zones in your AWS Region. Then, your Fargate tasks use Amazon EFS to automatically mount the file system to the tasks that you specify in your task definition.Important: The following resolution applies to the Fargate platform version 1.4.0 or later, which has persistent storage that you can define at the task and container level in Amazon ECS. Fargate platform versions 1.3.0 or earlier don't support persistent storage using Amazon EFS.Before you complete the steps in the Resolution section, you must have the following:Amazon ECS clusterAmazon Virtual Private Cloud (Amazon VPC)Amazon EFS file systemResolutionCreate and configure an Amazon EFS file system1. Create an Amazon EFS file system, and then note the EFS ID and security group ID.Note: Your Amazon EFS file system, Amazon ECS cluster, and Fargate tasks must all be in the same VPC.2. To allow inbound connections on port 2049 (Network File System, or NFS) from the security group associated with your Fargate task or service, edit the security group rules of your EFS file system.3. Update the security group of your Amazon ECS service to allow outbound connections on port 2049 to your Amazon EFS file system's security group.Create a task definition1. Open the Amazon ECS console.2. From the navigation pane, choose Task Definitions, and then choose Create new Task Definition.3. In the Select launch type compatibility section, choose FARGATE, and choose Next Step.4. In the Configure task and container definitions section, for Task Definition Name, enter a name for your task definition.5. In the Volumes section, choose Add volume.6. For Name, enter a name for your volume.7. For Volume type, enter EFS.8. For File system ID, enter the ID for your Amazon EFS file system.Note: You can specify custom options for Root directory, Encryption in transit, and EFS IAM authorization. Or, you can accept the default, where "/" is the root directory.9. Choose Add.10. In the Containers Definition section, choose Add container.11. In the STORAGE AND LOGGING section, in the Mount points sub-section, select the volume that you created for Source volume in step 5.12. For Container path, choose your container path.13. (Optional) In the ENVIRONMENT section, for Entry point, enter your entry point.14. For Command, enter the [df ,-h] command to display the mounted file system. Note: You can use the entry point and command to test if your Amazon EFS file system is mounted successfully. By default, the container exits after the df -h command executes successfully. The JSON task definition example in step 16 uses an infinite while loop to keep the task running.15. Choose Add.16. Fill out the remaining fields in the task definition wizard, and then choose Create.In the following example, the task definition creates a data volume named efs-test. 
The nginx container mounts the host data volume at the Any_Container_Path path.{ "family": "sample-fargate-test", "networkMode": "awsvpc", "executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole", "containerDefinitions": [ { "name": "fargate-app", "image": "nginx", "portMappings": [ { "containerPort": 80, "hostPort": 80, "protocol": "tcp" } ], "essential": true, "entryPoint": [ "sh","-c" ], "command": [ "df -h && while true; do echo \"RUNNING\"; done" ], "mountPoints": [ { "sourceVolume": "efs-test", "containerPath": "Any_Container_Path" } ], "logConfiguration": { "logDriver": "awslogs", "options": { "awslogs-group": "AWS_LOG_GROUP_PATH", "awslogs-region": "AWS_REGION", "awslogs-stream-prefix": "AWS_STREAM_PREFIX" } } } ], "volumes": [ { "name": "efs-test", "efsVolumeConfiguration": { "fileSystemId": "fs-123xx4x5" } } ], "requiresCompatibilities": [ "FARGATE" ], "cpu": "256", "memory": "512"}Note: ReplacefileSystemId,logConfiguration,containerPath, and other placeholder values with values for your custom configuration. Also, confirm that your task definition has an execution role Amazon Resource Name (ARN) to support theawslogs log driver.Run a Fargate task and check your task logs1. Run a Fargate task using the task definition that you created earlier.Important: Be sure to run your task on the Fargate platform version 1.4.0.2. To verify that your Amazon EFS file system is successfully mounted to your Fargate container, check your task logs.The output of df-h looks similar to the following:2020-10-27 15:15:35Filesystem 1K-blocks Used Available Use% Mounted on2020-10-27 15:15:35overlay 30832548 9859324 19383976 34% /2020-10-27 15:15:35tmpfs 65536 0 65536 0% /dev2020-10-27 15:15:35shm 2018788 0 2018788 0% /dev/shm2020-10-27 15:15:35tmpfs 2018788 0 2018788 0% /sys/fs/cgroup2020-10-27 15:15:35fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ 9007199254739968 0 9007199254739968 0% /Any_Container_Path2020-10-27 15:15:35/dev/xvdcz 30832548 9859324 19383976 34% /etc/hosts2020-10-27 15:15:35tmpfs 2018788 0 2018788 0% /proc/acpi2020-10-27 15:15:35tmpfs 2018788 0 2018788 0% /sys/firmware2020-10-27 15:15:35tmpfs 2018788 0 2018788 0% /proc/scsi RUNNINGRelated informationSecurity in Amazon EFSTutorial: Using Amazon EFS file systems with Amazon ECS using the classic consoleFollow" | https://repost.aws/knowledge-center/ecs-fargate-mount-efs-containers-tasks |
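A CLI sketch of the final "run a Fargate task" step, assuming the JSON task definition above is saved locally as taskdef.json; the cluster, subnet, and security group IDs are placeholders:

```bash
# Register the task definition that declares the EFS volume.
aws ecs register-task-definition --cli-input-json file://taskdef.json

# Run it on Fargate platform version 1.4.0, which is required for EFS support.
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --platform-version 1.4.0 \
  --task-definition sample-fargate-test \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=ENABLED}"
```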
How can I allow the tasks in my Amazon ECS services to communicate with each other? | I want to allow the tasks in my Amazon Elastic Container Service (Amazon ECS) services to communicate with each other. | "I want to allow the tasks in my Amazon Elastic Container Service (Amazon ECS) services to communicate with each other. Short description: You can allow your tasks to communicate with each other using service discovery. Service discovery helps manage HTTP and DNS namespaces for your Amazon ECS services. Note: Service discovery supports the A and SRV DNS record types. DNS records are automatically added or removed as tasks start or stop in the Amazon ECS service. Tasks or applications that need to connect to your Amazon ECS service can locate an existing task from the DNS record. Resolution: Important: You can't update an existing service to use service discovery, or modify the service discovery configuration once your service is created. You can only configure service discovery when you create a new service. Before you get started with service discovery, see Service discovery and review Service discovery considerations. Configure service discovery using either the Amazon ECS console or the AWS Command Line Interface (AWS CLI). Important: After you have configured service discovery, use the YourServiceDiscoveryServiceName.YourServiceDiscoveryNamespace format to query DNS records for a service discovery service within your Amazon Virtual Private Cloud (Amazon VPC)." | https://repost.aws/knowledge-center/ecs-tasks-services-communication |
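A hedged sketch of the same configuration done with the AWS CLI instead of the console; the namespace, service names, VPC ID, subnet, security group, and ARNs are placeholder assumptions:

```bash
# Create a private DNS namespace in the cluster's VPC (the namespace ID is
# available once the returned operation completes).
aws servicediscovery create-private-dns-namespace \
  --name example.local \
  --vpc vpc-0abc1234

# Create a service discovery service with an A record in that namespace.
aws servicediscovery create-service \
  --name backend \
  --dns-config "NamespaceId=ns-0abc1234,DnsRecords=[{Type=A,TTL=60}]"

# Reference the registry ARN when creating the ECS service; service discovery
# can only be set at service creation time, as noted above.
aws ecs create-service \
  --cluster my-cluster \
  --service-name backend \
  --task-definition backend:1 \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234]}" \
  --service-registries "registryArn=arn:aws:servicediscovery:us-east-1:111122223333:service/srv-0abc1234"
```

Tasks can then reach each other at backend.example.local from within the VPC.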
How do I troubleshoot VPC-to-VPC connectivity through a transit gateway? | "My virtual private clouds (VPCs) are attached to the same AWS Transit Gateway. However, I'm experiencing connectivity issues between the VPCs. How do I troubleshoot this?" | "My virtual private clouds (VPCs) are attached to the same AWS Transit Gateway. However, I'm experiencing connectivity issues between the VPCs. How do I troubleshoot this?Short descriptionTo troubleshoot connectivity between VPCs attached to the same AWS Transit Gateway, you can either:Check the routing configuration for the AWS Transit Gateway, VPC, and the Amazon EC2 instance.Use Route Analyzer in AWS Network Manager.ResolutionConfirm your routing configurationsConfirm that the VPCs are attached to the same transit gatewayOpen the Amazon Virtual Private Cloud (Amazon VPC) console.From the navigation pane, choose Transit Gateway Attachments.Verify that the VPC attachments are associated with the same Transit Gateway ID.Confirm that the Transit Gateway route table is associated with a VPC attachmentOpen the Amazon VPC console.From the navigation pane, choose Transit gateway route tables.Choose the route tables that are associated with the transit gateway VPC attachment of the source VPC.Choose the Routes tab.Verify that there is a route for Remote VPC IP range with Target as TGW VPC attachment that corresponds to the value for Remote VPC.Choose the route tables that are associated with the transit gateway VPC attachment of the remote VPC.Choose the Routes tab.Verify that there is a route for Source VPC IP range with Target as TGW VPC attachment. Verify that the route corresponds to the value for Source VPC.Confirm that the VPC route table of the source VPC has a route for remote VPC IP range with the gateway set to Transit Gateway.Open the Amazon VPC console.From the navigation pane, choose Route Tables.Select the route table used by the source EC2 instance.Choose the Routes tab.Verify that there's a route for the Remote VPC CIDR block under Destination. Then, verify that the Target is set to Transit Gateway ID.Confirm that the VPC route table of the remote VPC has a route for source VPC IP range with the gateway set to Transit Gateway.Open the Amazon VPC console.From the navigation pane, choose Route Tables.Select the route table that's used by the source EC2 instance.Choose the Routes tab.Verify that there's a route for the Remote VPC CIDR block under Destination. Then, verify that the Target is set to Transit Gateway ID.Check the Availability Zones for the transit gateway VPC attachment for the source and remote VPCsOpen the Amazon VPC console.From the navigation pane, choose Transit Gateway Attachments.Choose the source VPC attachment.Under Details, find the Subnet IDs. Verify that a subnet from the source EC2 instance's Availability Zone is selected.Return to Transit Gateway Attachments. Then, choose the remote VPC attachment.Under Details, find the Subnet IDs. Verify that a subnet from the remote EC2 instance's Availability Zone is selected.To add an Availability Zone to a VPC attachment, choose Actions. 
Then, modify the Transit Gateway attachment and select any subnet from required Availability Zone.Note: Adding or modifying a VPC attachment subnet can impact data traffic while the attachment is in a Modifying state.Confirm that the Amazon EC2 instance's security group and network access control list (ACL) allows the trafficOpen the Amazon EC2 console.From the navigation pane, choose Instances.Select the instance where you're performing the connectivity test.Choose the Security tab.Verify that the Inbound rules and Outbound rules allow the traffic.Open the Amazon VPC console.From the navigation pane, choose Network ACLs.Select the network ACL that's associated with the subnet where you have the instance.Select the Inbound rules and Outbound rules to verify that the rules allow the traffic.Confirm that the network ACL associated with the transit gateway network interface allows the trafficOpen the Amazon EC2 console.From the navigation pane, choose Network Interfaces.In the search bar, enter Transit Gateway. All network interfaces of the transit gateway appear. Note the Subnet ID associated with the location where the transit gateway interfaces were created.Open the Amazon VPC console.From the navigation pane, choose Network ACLs.In the search bar, enter the subnet ID that you noted in step 3. The network ACL associated with the subnet displays.Confirm that the Inbound rules and Outbound rules of the network ACL allow the remote VPC traffic.Use Route AnalyzerPrerequisite: Complete the steps in Getting started with AWS Network Manager for Transit Gateway networks before proceeding in this section.Once you have created a global network and registered your transit gateway:Access the Amazon VPC console.From the navigation pane, choose Network Manager.Choose the global network where your transit gateway is registered.From the navigation pane, choose Transit Gateway Network. Then, choose Route Analyzer.Fill in the Source and Destination information as needed. Confirm that both Source and Destination have the same transit gateway.Choose Run route analysis.Route Analyzer performs routing analysis and indicates a status of Connected or Not Connected. If the status is Not Connected, then Route Analyzer gives you a routing recommendation. Use the recommendations, then run again the test to confirm connectivity. If connectivity issues continue, see the Confirm your routing configurations section of this article for more troubleshooting steps.Related informationMonitor your transit gatewaysDiagnosing traffic disruption using AWS Transit Gateway Network Manager Route AnalyzerFollow" | https://repost.aws/knowledge-center/transit-gateway-fix-vpc-connection |
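The transit gateway route checks above can also be run from the AWS CLI; a short sketch with the transit gateway ID, route table ID, and remote VPC CIDR as placeholder assumptions:

```bash
# Confirm that both VPC attachments belong to the same transit gateway.
aws ec2 describe-transit-gateway-attachments \
  --filters "Name=transit-gateway-id,Values=tgw-0abc1234" \
            "Name=resource-type,Values=vpc"

# Check the transit gateway route table for a route to the remote VPC CIDR.
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id tgw-rtb-0abc1234 \
  --filters "Name=route-search.exact-match,Values=10.1.0.0/16"
```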
Why can't I connect to my Amazon RDS DB or Amazon Aurora DB instance using RDS Proxy? | I can't connect to my Amazon Relational Database Service (Amazon RDS) or Amazon Aurora DB instance through Amazon RDS Proxy. | "I can't connect to my Amazon Relational Database Service (Amazon RDS) or Amazon Aurora DB instance through Amazon RDS Proxy.Short descriptionYou might experience connection failures with RDS Proxy for multiple reasons. The following issues are common causes for RDS Proxy connection failures, even when RDS Proxy is in the Available state:Security group rules, either at the DB instance or at the RDS Proxy, prevent the connection.RDS Proxy works only within a virtual private cloud (VPC), so connections from outside the private network fail.The DB instance doesn't accept the connection because of modification or because it's in a non-available state.For native user name and password mode: you used incorrect authentication credentials.For AWS Identity and Access Management (IAM) DB authentication: the IAM user or role that's associated with the client isn't authorized to connect with RDS Proxy.ResolutionNote: If you use RDS Proxy with an RDS DB instance or Aurora DB cluster that uses IAM authentication, then all users must authenticate their connections. Make sure that all users who connect through a proxy authenticate their connection with user names and passwords. See Setting up IAM policies for more information about IAM support in RDS Proxy.Check that the client can reach RDS Proxy within the private network of a VPCRDS Proxy can be used only within a VPC, and can't be publicly accessible (although the DB instance can be). If you connect from outside a private network, then your connection times out. Note the following attributes for connecting within a VPC:If the client is from the same VPC, then check that your RDS Proxy's security group allows connections from the client on the default port. The default ports are 3306 for MySQL and 5432 for PostgreSQL. Add rules to the security group associated with the VPC to allow the required traffic.If the client is from another VPC, then use VPC peering. To manage the traffic from the other VPC, review the security group and route tables.If your client is from a corporate network, then use AWS Direct Connect or AWS Site-to-Site VPN to connect directly to the VPC.If your client must connect through the public internet, then use SSH tunneling as an intermediate host. This allows you to connect into the RDS Proxy within the same VPC.Check that RDS Proxy can connect with the DB instanceTo manage the connection pool, RDS Proxy must establish a connection with your DB instance. This connection uses the user name and password that's stored in the AWS Secrets Manager. Use the following best practices to make sure that RDS Proxy can connect with your DB instance:Check that the credentials in the Secrets Manager are valid and can connect to the DB instance.Make sure that your DB instance's security group allows traffic from the RDS Proxy. 
To do this, first determine the security group of the DB instance and RDS Proxy.If the RDS Proxy and DB instance use the same security group, then verify that the security group's inheritance rule is in the inbound rules:Inbound rules for the RDS instance in order to allow connections from RDS proxy:Protocol : TCPPort Range : Port on which the DB engine is running on the RDS instanceSource : Common security group (for self referencing the security group)If they use different security groups, then mention the RDS Proxy's security group in the inbound rule of the DB instance's security group:Inbound rules for the RDS instance in order to allow connections from RDS proxy:Protocol : TCPPort range : Port on which the DB engine is running on the DB instanceSource : Security group of RDS ProxyThe RDS Proxy initiates the connection to manage the pool. Therefore, you must allow outbound traffic to reach the DB instance. To do this, RDS Proxy security group must allow the required traffic in its outbound rule:Protocol : TCPPort range : Port on which the DB engine is running on the RDS instanceDestination : Security group of DB instanceNote: If you already have the following outbound rules attached to the security group of the RDS Proxy, then there is no need to explicitly add the security group. Outbound rules: ALL --- 0.0.0.0/0Check that the IAM role associated with the RDS Proxy can fetch and use the credentials that are required for connections:The IAM role must have the trust policy for rds.amazonaws.com.The IAM policy must have access to call the secretsmanager:GetSecretValue action on the secret.The IAM policy must have access to call the kms:Decrypt action on the AWS Key Management Service (AWS KMS) key that encrypted the secret. You can get the details of the KMS key that's used by Secrets Manager from the AWS KMS console. Note that the KMS key ID must be used for the Resource section. See the following example policy:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "secretsmanager:GetSecretValue", "Resource": [ "arn:aws:secretsmanager:region:account_id:secret:secret_name" ] }, { "Effect": "Allow", "Action": "kms:Decrypt", "Resource": "arn:aws:kms:region:account_id:key/key_id", "Condition": { "StringEquals": { "kms:ViaService": "secretsmanager.region.amazonaws.com" } } } ]}Note: Be sure to replace account_id, secret_name, region, account_id, and key_id with your relevant values.For more information on what might prevent the proxy from connecting to the DB instance, run the describe-db-proxy-targets command. Then, review the TargetHealth structure in the output. Review the State, Reason, and Description fields for more information on the connection health of the RDS Proxy target:aws rds describe-db-proxy-targets --db-proxy-name $DB_PROXY_NAMEFor more information, see Verifying connectivity for a proxy.Check that the DB instance currently accepts connectionsReview the current status of your DB instance and confirm that it's in the AVAILABLE state. For more information on reviewing the status of your DB instance, see the Amazon RDS and Aurora documentation for DB instance status.Check that the IAM user/role is associated with a client with required permissionsNote: This step is required only if you activated IAM DB Authentication on your RDS Proxy.The client must generate a token to authorize the connection request. To do this, the IAM user and IAM role that's associated with this client must have the rds-db:connect IAM policy. 
Also, make sure to use the RDS Proxy ID in the ARN for the Resources attribute in the policy:"Resource": "arn:aws:rds-db:us-east-2:1234567890:dbuser:prx-ABCDEFGHIJKL01234/db_user"For more information, see Creating and using an IAM policy for IAM database access.Review the RDS Proxy logsTurn on the Enhanced Logging feature of RDS Proxy. Logging gives detailed information about the SQL statements. These logs are a useful resource to help you understand certain authentication issues. Because this adds to performance overhead, it's a best practice to turn them on only for debugging. To minimize overhead, RDS Proxy automatically turns this setting off 24 hours after you turn it on.Related informationUsing Amazon RDS ProxySet up shared database connections with Amazon RDS ProxyFollow" | https://repost.aws/knowledge-center/rds-proxy-connection-issues |
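A short CLI sketch that combines the checks mentioned above; the proxy name, endpoint, and database user are placeholder assumptions:

```bash
# Overall proxy status (should be "available").
aws rds describe-db-proxies --db-proxy-name my-proxy

# Per-target connection health: review the State, Reason, and Description fields.
aws rds describe-db-proxy-targets --db-proxy-name my-proxy

# For IAM DB authentication: generate a token against the proxy endpoint,
# then use it as the password when the client connects.
aws rds generate-db-auth-token \
  --hostname my-proxy.proxy-abcdefghijkl.us-east-2.rds.amazonaws.com \
  --port 3306 \
  --username db_user
```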
How do I re-create a terminated EC2 instance? | "My Amazon Elastic Compute Cloud (Amazon EC2) instance was terminated, but I want to recover or restore data from that instance. Can I re-create a terminated EC2 instance?" | "My Amazon Elastic Compute Cloud (Amazon EC2) instance was terminated, but I want to recover or restore data from that instance. Can I re-create a terminated EC2 instance? Resolution: As part of an Amazon EC2 instance termination, the data on any instance store volumes associated with that instance is deleted. By default, the root Amazon Elastic Block Store (Amazon EBS) volume is also deleted automatically. It's not possible to recover either the original Amazon EC2 instance or any volumes that were deleted as part of the termination process. However, you can use the following methods to re-create the terminated instance: Launch a replacement EC2 instance from Amazon EBS snapshots or Amazon Machine Image (AMI) backups that were created from the terminated instance. Attach a surviving EBS volume from the terminated instance to another EC2 instance, and then access the data on that volume. Use the following methods to prevent instance termination and volume deletion: Turn off the Delete on termination setting for the root EBS volume when you launch an EC2 instance. If this setting is off and the instance is later terminated, then the EBS root volume isn't deleted, and you can launch a new EC2 instance from the remaining root volume. For more information, see How can I prevent my Amazon EBS volumes from being deleted when I terminate Amazon EC2 instances? Set the instance shutdown behavior to stop the instance instead of terminating it. For more information, see Change the instance initiated shutdown behavior. If the instance is part of an Amazon EC2 Auto Scaling group, you can customize the termination policy or use scale-in protection. For more information, see Controlling which Auto Scaling instances terminate during scale in. In addition to regularly taking snapshots and AMIs to back up critical data, consider using termination protection to help prevent this issue in the future. You can also automate snapshots with Amazon Data Lifecycle Manager (Amazon DLM) and AWS Backup. Related information: How can I recover an accidentally deleted AMI? My Spot Instance was terminated. Can I recover it? How do I protect my data against accidental Amazon EC2 instance termination?" | https://repost.aws/knowledge-center/recovery-terminated-instance |
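A hedged CLI sketch of two of the preventive settings described above, applied to an existing instance; the instance ID and device name are placeholder assumptions:

```bash
# Keep the root EBS volume when the instance is terminated.
aws ec2 modify-instance-attribute \
  --instance-id i-0abc1234 \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'

# Turn on termination protection so the instance can't be terminated by API calls.
aws ec2 modify-instance-attribute \
  --instance-id i-0abc1234 \
  --disable-api-termination
```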
How do I launch Amazon WorkSpaces with a directory that is currently running in another Region from the same account? | "I want to use Amazon WorkSpaces, but the service isn’t yet available in the AWS Region that I currently use for other AWS services. How can I use my Microsoft Active Directory setup in one Region to use Amazon WorkSpaces in a different Region where the service is available?" | "I want to use Amazon WorkSpaces, but the service isn’t yet available in the AWS Region that I currently use for other AWS services. How can I use my Microsoft Active Directory setup in one Region to use Amazon WorkSpaces in a different Region where the service is available? Resolution: To launch Amazon WorkSpaces using a directory in another Region of the same AWS account, follow the steps below. Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, then make sure that you’re using the most recent AWS CLI version. Create virtual private cloud (VPC) peering with another VPC in your account: Create a VPC peering connection with a VPC in a different Region, and then accept the VPC peering connection. The VPC peering connection is now active. You can view your VPC peering connections using the Amazon VPC console, the AWS CLI, or an API. Update route tables for VPC peering in both Regions: Update your route tables to turn on communication with the peer VPC over IPv4 or IPv6. You now have two VPCs in your account that are in different Regions and can communicate with each other. Create an AD Connector and register Amazon WorkSpaces: Review the AD Connector prerequisites, and then connect your existing directory with AD Connector. When the AD Connector status changes to Active, open the AWS Directory Service console, and then choose the hyperlink for your Directory ID. For AWS apps & services, choose Amazon WorkSpaces to turn on access for Amazon WorkSpaces on this directory. Register the directory with Amazon WorkSpaces. When the value of Registered changes to Yes, you can launch a WorkSpace. Related information: Create with VPCs in different accounts and Regions" | https://repost.aws/knowledge-center/workspaces-ad-different-region |
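A CLI sketch of the peering and routing steps above; the VPC IDs, peering connection ID, route table ID, CIDR, and Regions are placeholder assumptions:

```bash
# Request peering from the directory VPC to the WorkSpaces VPC in the other Region.
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaa1111 \
  --peer-vpc-id vpc-0bbb2222 \
  --peer-region us-west-2

# Accept the connection in the peer Region.
aws ec2 accept-vpc-peering-connection \
  --vpc-peering-connection-id pcx-0abc1234 \
  --region us-west-2

# Add a route to the peer VPC CIDR (repeat in the route tables of both VPCs).
aws ec2 create-route \
  --route-table-id rtb-0abc1234 \
  --destination-cidr-block 10.1.0.0/16 \
  --vpc-peering-connection-id pcx-0abc1234
```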
How can I troubleshoot the AWS STS error “the security token included in the request is expired” when using the AWS CLI to assume an IAM role? | "I tried to assume an AWS Identity and Access Management (IAM) role by using the AWS Command Line Interface (AWS CLI). However, I received an error similar to the following:"The security token included in the request is expired."" | "I tried to assume an AWS Identity and Access Management (IAM) role by using the AWS Command Line Interface (AWS CLI). However, I received an error similar to the following:"The security token included in the request is expired."Short descriptionTemporary security credentials for IAM users are requested using the AWS Security Token Service (AWS STS) service. Temporary credentials created with the AssumeRole API action last for one hour by default. After temporary credentials expire, they can't be reused. For more information, see Temporary security credentials in IAM.ResolutionUse the following troubleshooting steps for your use case.If you receive errors when running AWS CLI commands, make sure that you’re using the most recent AWS CLI version.Make sure that your temporary security credential requests can reach AWS endpointsEstablishing credentials for a role requires an access key ID, secret access key, and session token. Requests sent must reach the AWS endpoint within five minutes of the timestamp on the request or the request is denied. For more information, see Why requests are signed.Using profiles to assume an IAM roleA named profile is a collection of settings and credentials that you can apply to an AWS CLI command. You must verify that you're using the correct credentials.The following AWS CLI command uses the default profile credentials:aws s3 lsThis example command uses the project1 profile credentials configured in the .config file:aws s3 ls --profile project1Example output using expired credentials:"An error occurred (ExpiredToken) when calling the ListBuckets operation: The provided token has expired."These profiles are defined in your .aws folder containing the .credentials and .config files.The config file is located at ~/.aws/config for Linux/macOS and C:\Users%USERPROFILE%.aws\config for Windows. 
The credentials file is located at ~/.aws/credentials for Linux/macOS and C:\Users%USERPROFILE%.aws\credentials for Windows.To check your default profile credentials, run the following command:aws configure list --profile defaultExample output:Name Value Type Location---- ----- ---- --------profile default manual —profileaccess_key TGN7 shared-credentials-filesecret_key SbXb shared-credentials-fileregion us-east-1 config-file ~/.aws/configTo confirm that the same credentials are used for the profile project1, run the following command:aws configure list --profile project1Example output:Name Value Type Location---- ----- ---- --------profile project1 manual —profileaccess_key QN2X config-filesecret_key LPYI config-fileregion eu-west-1 config-file ~/.aws/configIn the example output, note that different credentials might be configured for the default and project1 profiles.You can create a profile in your .aws/config file using the following format:[profile project1]region = eu-west-1aws_access_key_id = <access-Key-for-an-IAM-role>aws_secret_access_key = <secret-access-Key-for-an-IAM-role>aws_session_token = <session-token>These credentials are provided to you when you run the AWS STS assume-role command similar to the following:aws sts assume-role --role-arn arn:aws:iam::<account-number>:role/Prod-Role --role-session-name environment-prodExample output:{"AssumedRoleUser": {"AssumedRoleId": "AROAXXXXXXXXXXXX:environment-prod","Arn": "arn:aws:sts::<account-number>:assumed-role/Prod-Role/environment-prod"},"Credentials": {"SecretAccessKey": "<secret-access-Key-for-an-IAM-role>,"SessionToken": "<session-token>","Expiration": "2020-03-31T17:17:53Z","AccessKeyId": "<access-Key-for-an-IAM-role>"}Note: You can increase the maximum session duration expiration for temporary credentials for IAM roles using the DurationSeconds parameter for your use case.The new assume-role API call then retrieves a new set of valid credentials. Following the API call, you must manually update the ~/.aws/config file with the new temporary credentials.You can avoid updating the config file every time a session expires. Define a profile for the IAM role along with the user that assumes the role in the ~/.aws/config or ~/.aws/credentials file similar to the following:[profile project1]role_arn = <arn-of-IAM-role>source_profile = user1region = <region>Note that user1 is defined in your ~/.aws/credentials file similar to the following:[user1]aws_access_key_id=AKIAIOSFODNN7EXAMPLEaws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEYDefining the source_profile means that you don't have to update your temporary credentials in the ~/.aws/config or ~/.aws/credentials file.The following AWS CLI command lists the Amazon Simple Storage Service (Amazon S3) buckets with credentials for user1 located in the ~/.aws/credentials file.aws s3 ls --profile project1If you're using the AWS CLI with a source_profile element, the API call assume-role puts credentials in the .aws/cli/cache file. Expired credentials are automatically updated in the .aws/cli/cache file. 
If you receive an error for expired credentials, you can clear the cache with the following commands:Linux/macOS:$ rm -r ~/.aws/cli/cacheWindows:C:\> del /s /q %UserProfile%\.aws\cli\cacheThe AWS CLI creates new credentials in the cache.Create environment variables to assume the IAM role and then verify accessYou can use IAM role credentials to create three environment variables to assume the IAM role similar to the following:Linux/macOS:export AWS_ACCESS_KEY_ID=RoleAccessKeyIDexport AWS_SECRET_ACCESS_KEY=RoleSecretKeyexport AWS_SESSION_TOKEN=RoleSessionTokenWindows:C:\> setx AWS_ACCESS_KEY_ID RoleAccessKeyIDC:\> setx AWS_SECRET_ACCESS_KEY RoleSecretKeyC:\> setx AWS_SESSION_TOKEN RoleSessionTokenTo verify that you assumed the correct IAM role, run the following command:aws sts get-caller-identityThe get-caller-identity command displays information about the IAM identity used to authenticate the request. For more information, see How do I assume an IAM role using the AWS CLI?Environment variables hold temporary cached credentials even after they expire and aren't renewed automatically. Use the following commands to check if credential environment variables are set:Linux/macOS:$ printenv | grep AWSWindows:C:\>set AWSYou can remove expired environment variables with the following commands:Linux/macOS:$ unset AWS_ACCESS_KEY_ID$ unset AWS_SECRET_ACCESS_KEY$ unset AWS_SESSION_TOKENWindows:C:\>set AWS_ACCESS_KEY_ID=C:\>set AWS_SECRET_ACCESS_KEY=C:\>set AWS_SESSION_TOKEN=You can now use the assume-role API call again to get new, valid credentials and set the environment variables again.Important: The .aws/credentials and .aws/config files contain credential details for your IAM entities. When managing your credentials, make sure that you follow security best practices in IAM.Related informationRequesting temporary security credentialsHow do I resolve the error ‘The security token included in the request is expired’ when running Java applications on Amazon EC2?Follow" | https://repost.aws/knowledge-center/sts-iam-token-expired |
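The following is a minimal end-to-end sketch of the source_profile pattern described in the article above. The account ID 111122223333, the role name Prod-Role, and the profile names user1 and project1 are placeholders, not values from your environment.

# ~/.aws/credentials (long-term IAM user keys)
[user1]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config (profile that assumes the role; the AWS CLI refreshes the
# temporary credentials in ~/.aws/cli/cache automatically)
[profile project1]
role_arn = arn:aws:iam::111122223333:role/Prod-Role
source_profile = user1
region = eu-west-1

# Verify which identity the profile resolves to before troubleshooting further
aws sts get-caller-identity --profile project1

If get-caller-identity returns the assumed-role ARN, expired-token errors usually point at stale environment variables or a stale cache rather than at the profile itself.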
How does DNS work with my AWS Client VPN endpoint? | I'm creating an AWS Client VPN endpoint. I need to specify the DNS servers that my end users (clients connected to AWS Client VPN) should query for domain name resolution. How does DNS work with my AWS Client VPN endpoint? | "I'm creating an AWS Client VPN endpoint. I need to specify the DNS servers that my end users (clients connected to AWS Client VPN) should query for domain name resolution. How does DNS work with my AWS Client VPN endpoint?ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.You can specify DNS server IP addresses when you create a new Client VPN endpoint. To do this, specify the IP addresses in the "DNS Server IP address" parameter using the AWS Management Console, the AWS CLI, or the API.You can also modify an existing Client VPN endpoint to specify DNS server IP addresses. To do this, modify the "DNS Server IP address" parameter using the AWS Management Console, AWS CLI, or the API.Considerations for configuring the "DNS Server IP address" parameterFor high availability, it's a best practice to specify two DNS servers. If the primary DNS server becomes unreachable, the end user device resends the query to the secondary DNS server.Note: If the primary DNS server responds with "SERVFAIL," the DNS request isn't sent again to the secondary DNS server.Confirm that both of the specified DNS servers can be reached by end users after they connect to the Client VPN endpoint. DNS lookups depend on the "DNS Server IP address" parameter. If the DNS servers are unreachable, then DNS requests might fail and cause connectivity problems.The "DNS Server IP address" parameter is optional. If there's no DNS server specified, then the DNS IP address configured on the end user's device is used to resolve DNS queries.When using AmazonProvidedDNS (or the Route 53 Resolver inbound endpoint) as the Client VPN DNS server:You can resolve the resource records of the Amazon Route 53 private hosted zone associated with the VPC.The Amazon Relational Database Service (Amazon RDS) public hostnames and AWS service endpoint names that are accessible from the VPC interface endpoint (with "Private DNS" enabled) resolve to a private IP address.Note: Be sure that "DNS Resolution" and "DNS Hostnames" are enabled for the associated VPC.Remember that the Client VPN endpoint uses the source NAT to connect to resources in the associated VPCs.After the client device establishes the Client VPN tunnel, the "DNS Server IP address" parameter is applied. It is applied whether it's full-tunnel or split-tunnel.Full-tunnel: After the client device establishes the tunnel, a route for all traffic through the VPN tunnel is added to the end user device's route table. This causes all traffic (including DNS traffic) to be routed through the Client VPN tunnel. The DNS lookup might fail if the Client VPN's associated VPC (subnet) and the Client VPN route table don't have an appropriate route to reach the configured DNS servers.Split-tunnel: When the Client VPN tunnel is established, only the routes present in the Client VPN route table are added to the end user device's route table. If you can reach the DNS server through the Client VPN's associated VPC, be sure to add a route for the DNS server IP addresses in the Client VPN route table.Note: The following examples demonstrate how DNS works in a few common scenarios. 
These examples apply to both Windows and Linux environments. However, in a Linux environment, the examples function as described only if the end user's host machine uses the generic networking setting.Scenario #1: Full-tunnel with the "DNS Server IP address" parameter disabledExample 1:The end user client's IPv4 CIDR = 192.168.0.0/16.The Client VPN endpoint VPC's CIDR = 10.123.0.0/16.Local DNS server IP address = 192.168.1.1.The "DNS Server IP address" parameter is disabled (there are no DNS server IP addresses specified).Because the "DNS Server IP address" parameter is disabled, the end user's host machine uses the local DNS server to resolve DNS queries.This Client VPN is configured in full-tunnel mode. A route pointing to the virtual adapter is added to send all traffic over the VPN (destination 0/1 over utun1). However, DNS traffic still doesn't travel over the VPN, because the "DNS Server IP address" parameter is not configured. DNS traffic between the client and the DNS server remains local. The client machine already has a preferred static route to the local DNS server IP (dest. 192.168.1.1/32 over en0) to make that the local DNS resolver is reachable. After the domain name is resolved to the respective IP, the application traffic to the resolved IP travels over the VPN tunnel.Below is a snippet of this example:$ cat /etc/resolv.conf | grep nameservernameserver 192.168.1.1$ netstat -nr -f inet | grep -E 'utun1|192.168.1.1'0/1 192.168.0.1 UGSc 16 0 utun1192.168.1.1/32 link#4 UCS 1 0 en0(...)$ dig amazon.com;; ANSWER SECTION:amazon.com.32INA176.32.98.166;; SERVER: 192.168.1.1#53(192.168.1.1)(...)Example 2:The end user client's IPv4 CIDR = 192.168.0.0/16.The Client VPN endpoint VPC's CIDR = 10.123.0.0/16.The local DNS server IP address is set to public IP = 8.8.8.8.The "DNS Server IP address" parameter is disabled (there are no DNS server IP addresses specified).In this scenario, rather than using the local DNS server at 198.168.1.1, the client uses a public DNS as their local DNS server IP address (in this example, 8.8.8.8). Because there's no static route for 8.8.8.8 using en0, traffic destined to 8.8.8.8 travels over the Client VPN tunnel. If the Client VPN endpoint isn't configured to access the internet, then the public DNS (8.8.8.8) is unreachable and request queries time out.$ cat /etc/resolv.conf | grep nameservernameserver 8.8.8.8$ netstat -nr -f inet | grep -E 'utun1|8.8.8.8'0/1 192.168.0.1 UGSc 5 0 utun1$ dig amazon.com(...);; connection timed out; no servers could be reachedScenario #2: Split-tunnel with the "DNS Server IP address" parameter enabledIn this example:The end user client's IPv4 CIDR = 192.168.0.0/16.The Client VPN endpoint's VPC CIDR = 10.123.0.0/16.The "DNS Server IP address" parameter is enabled and set to 10.10.1.228 and 10.10.0.43. These IP addresses represent the IP addresses of the Route 53 Resolver's inbound endpoints that are present in another VPC (10.10.0.0/16) connected with a transit gateway to the Client VPN endpoint's associated VPC.The associated VPC has "DNS Hostnames" and "DNS Support" enabled, and has an associated Route 53 private hosted zone (example.local).This Client VPN is configured in split-tunnel mode. 
The routes in the Client VPN route table are added to the route table of the end user's host machine:$ netstat -nr -f inet | grep utun1(...)10.10/16 192.168.0.1 UGSc 2 0 utun1 # Route 53 Resolver inbound endpoints VPC CIDR10.123/16 192.168.0.1 UGSc 0 0 utun1 # Client VPN VPC CIDR(...)Because the "DNS Server IP address" parameter is enabled, and 10.10.1.228 and 10.10.0.43 are configured as the DNS servers, these DNS server parameters are pushed to the end user's host machine when the client establishes the VPN tunnel:$ cat /etc/resolv.conf | grep nameservernameserver 10.10.1.228 # Primary DNS server nameserver 10.10.0.43 # Secondary DNS serverA DNS query issued by the client machine travels over the VPN tunnel to the Client VPN VPC. Next, the DNS request is source NATed and forwarded to the Amazon Route 53 Resolver endpoint over a transit gateway. After the domain is resolved to an IP address, application traffic also travels over the established VPN tunnel (as long as the resolved destination IP matches a route from the Client VPN endpoint route table).Using this configuration, end users can resolve:External domain names using standard DNS resolution.Records from private hosted zones associated with the Route 53 Resolver VPC.Interface endpoint DNS names and EC2 public DNS hostnames.$ dig amazon.com;; ANSWER SECTION:amazon.com.8INA176.32.103.205;; SERVER: 10.10.1.228#53(10.10.1.228)(...)$ dig test.example.local # Route 53 private hosted zone record ;; ANSWER SECTION:test.example.local. 10 IN A 10.123.2.1;; SERVER: 10.10.1.228#53(10.10.1.228)(...)$ dig ec2.ap-southeast-2.amazonaws.com # VPC interface endpoint to EC2 service in Route 53 Resolver VPC;; ANSWER SECTION:ec2.ap-southeast-2.amazonaws.com. 60 INA10.10.0.33;; SERVER: 10.10.1.228#53(10.10.1.228)(...)$ dig ec2-13-211-254-134.ap-southeast-2.compute.amazonaws.com # EC2 instance public DNS hostname running in Route 53 Resolver VPC;; ANSWER SECTION:ec2-13-211-254-134.ap-southeast-2.compute.amazonaws.com. 20 IN A 10.10.1.11;; SERVER: 10.10.1.228#53(10.10.1.228)(...)Follow" | https://repost.aws/knowledge-center/client-vpn-how-dns-works-with-endpoint |
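As a sketch of how the "DNS Server IP address" parameter can be set from the AWS CLI, the following commands update an existing endpoint and add a split-tunnel route toward the resolver VPC. The endpoint ID and subnet ID are placeholders; the 10.10.x.x addresses come from the scenarios above.

# Point the Client VPN endpoint at the Route 53 Resolver inbound endpoint IPs
aws ec2 modify-client-vpn-endpoint \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --dns-servers '{"CustomDnsServers":["10.10.1.228","10.10.0.43"],"Enabled":true}'

# For split-tunnel, make sure the DNS server CIDR is routable through the VPN
aws ec2 create-client-vpn-route \
    --client-vpn-endpoint-id cvpn-endpoint-0123456789abcdef0 \
    --destination-cidr-block 10.10.0.0/16 \
    --target-vpc-subnet-id subnet-0123456789abcdef0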
How do I migrate my SSL certificate to the US East (N. Virginia) Region for use with my CloudFront distribution? | "I have an SSL certificate in AWS Certificate Manager (ACM) that I want to associate with my Amazon CloudFront distribution. However, I can't associate the certificate with the distribution because it's not in the US East (N. Virginia) (us-east-1) Region. Can I move the certificate to US East (N. Virginia)?" | "I have an SSL certificate in AWS Certificate Manager (ACM) that I want to associate with my Amazon CloudFront distribution. However, I can't associate the certificate with the distribution because it's not in the US East (N. Virginia) (us-east-1) Region. Can I move the certificate to US East (N. Virginia)?ResolutionYou can't migrate an existing certificate in ACM from one AWS Region to another. Instead, you must import or create a certificate in the target Region.To associate an ACM certificate with a CloudFront distribution, you must import or create the certificate in US East (N. Virginia). Additionally, the certificate must meet the CloudFront requirements.Follow these steps to import or create an ACM certificate for use with a CloudFront distribution:Open the ACM console in the US East (N. Virginia) Region.Note: From the AWS Region selector in the navigation bar, confirm that N. Virginia is selected.Proceed with the steps to import a certificate using the console, or request a certificate using the console.After the certificate is imported or validated successfully, you can associate the certificate and alternate domain names (CNAMEs) with your CloudFront distribution.Related informationIssuing and managing certificatesFollow" | https://repost.aws/knowledge-center/migrate-ssl-cert-us-east |
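For example, requesting or importing the certificate directly in us-east-1 from the AWS CLI might look like the following; the domain name and file names are placeholders.

# Request a new public certificate in US East (N. Virginia)
aws acm request-certificate \
    --domain-name www.example.com \
    --validation-method DNS \
    --region us-east-1

# Or import an existing certificate into us-east-1
aws acm import-certificate \
    --certificate fileb://certificate.pem \
    --private-key fileb://private-key.pem \
    --certificate-chain fileb://certificate-chain.pem \
    --region us-east-1

# Confirm the certificate exists in us-east-1 before attaching it to CloudFront
aws acm list-certificates --region us-east-1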
How do I upload an image or PDF file to Amazon S3 through API Gateway? | I want to upload an image or PDF file to Amazon Simple Storage Service (Amazon S3) through Amazon API Gateway. I also want to retrieve an image or PDF file. | "I want to upload an image or PDF file to Amazon Simple Storage Service (Amazon S3) through Amazon API Gateway. I also want to retrieve an image or PDF file.Short descriptionTo upload an image or PDF as a binary file to an Amazon S3 bucket through API Gateway, activate binary support.To grant your API access to your S3 bucket, create an AWS Identity and Access Management (IAM) role. The IAM role must include permissions for API Gateway to perform the PutObject and GetObject actions on your S3 bucket.ResolutionCreate an IAM role for API Gateway1. Open the IAM console.2. In the navigation pane, choose Roles.3. Choose Create role.4. In the Select type of trusted entity section, choose AWS service.5. In the Choose a use case section, choose API Gateway.6. In the Select your use case section, choose API Gateway.7. Choose Next: Permissions.Note: This section shows the AWS managed service that permits API Gateway to push logs to a user's account. You add permissions for Amazon S3 later.8. (Optional) Choose Next: Tags to add tags.9. Choose Next: Review.10. For Role name, enter a name for your policy. For example: api-gateway-upload-to-s3.11. Choose Create role.Create and attach an IAM policy to the API Gateway role1. Open the IAM console.2. In the navigation pane, choose Roles.3. In the search box, enter the name of the new API Gateway role that you created. Then, choose that role from the Role name column.4. On the Roles detail page tab, choose Add permissions.5. Choose Create inline policy.6. On the Visual editor tab, in the Select a service section, choose Choose a service.7. Enter S3, and then choose S3.8. In the Specify the actions allowed in S3 box, enter PutObject, and then choose PutObject.9. Enter GetObject, and then choose GetObject.10. Expand Resources, and then choose Specific.11. Choose Add ARN.12. For Bucket name, enter the name of your bucket. Include the prefix, if applicable.13. For Object name, enter an object name.Note: The bucket name specifies the location of the uploaded files. The object name specifies the pattern that the object must adhere to for policy alignment. For more information, see Bucket naming rules and Amazon S3 objects overview.14. Choose Add.15. (Optional) Choose Next: Tags to add tags.16. Choose Next: Review.17. For Name, enter the name of your policy.18. Choose Create policy.19. In the policy search box, enter the name of the policy that you created in step 17, and then select that policy.20. Choose Policy actions, and then choose Attach. A list of IAM roles appears.21. Search for the API Gateway role that you created earlier. Then, select the role.22. Choose Attach policy.Create an API Gateway REST APICreate an API to serve your requests1. Open the API Gateway console.2. In the navigation pane, choose APIs.3. Choose Create API.4. In the Choose an API type section, for REST API, choose Build.5. For API Name, enter a name for your API, and then choose Next.6. Choose Create API.Create resources for your API1. In the Resources panel of your API page, select /.2. For Actions, choose Create Resource.3. For Resource Name, enter folder.4. For Resource Path, enter {folder}.5. Choose Create Resource.6. In the Resources panel, select the /{folder} resource that you created in step 5.7. Choose Actions, and then choose Create Resource.8. 
For Resource Name, enter object.9. For Resource Path, enter {object}.10. Choose Create Resource.Create a PUT method for your API for uploading image or PDF1. Choose Actions, and then choose Create Method.2. From the dropdown list, choose PUT, and then choose the check mark icon.3. Under the Integration type category, choose AWS Service.4. For AWS Region, choose us-east-1 or the AWS Region you see on the Bucket properties page.5. For AWS Service, choose Simple Storage Service (S3).6. Keep AWS Subdomain empty.7. For HTTP method, choose PUT.8. For Action Type, choose Use path override.9. For Path override (optional), enter {bucket}/{key}.10. For Execution role, enter the ARN for the IAM role that you created earlier. ARN creation is part of the Create and attach an IAM policy to the API Gateway role section.11. For Content Handling, choose Passthrough.12. Choose Save.Configure parameter mappings for the PUT method1. In the Resources panel of your API page, choose PUT.2. Choose Integration Request.3. Expand URL Path Parameters.4. Choose Add path.5. For Name, enter bucket.6. For Mapped from, enter method.request.path.folder.7. Choose the check mark icon at the end of the row.8. Repeat steps 4–7. In step 5, set Name to key. In step 6, set Mapped from to method.request.path.object.Create a GET method for your API for retrieving an image1. In the Resources panel of your API page, choose /{object}.2. Choose Actions, and then choose Create Method.3. From the dropdown list, choose GET, and then choose the check mark icon.4. Under the Integration type category, choose AWS Service.5. For AWS Region, choose us-east-1, or the Region that you see on the Bucket properties page.6. For AWS Service, choose Simple Storage Service (S3).7. Keep AWS Subdomain empty.8. For HTTP method, choose GET.9. For Action Type, choose Use path override.10. For Path override (optional), enter {bucket}/{key}.11. For Execution role, enter the ARN for the IAM role that you created earlier. ARN creation is part of the Create and attach an IAM policy to the API Gateway role section.12. For Content Handling, choose Passthrough.13. Choose Save.Configure parameter mappings for the GET method1. In the Resources panel of your API page, choose GET.2. Choose Integration Request.3. Expand URL Path Parameters.4. Choose Add path.5. For Name, enter bucket.6. For Mapped from, enter method.request.path.folder.7. Choose the check mark icon at the end of the row.8. Repeat steps 4–7. In step 5, set Name to key. In step 6, set Mapped from to method.request.path.object.Set up response mapping to see the image or PDF in browser1. In the Resources panel of your API page, choose GET.2. Choose Method Response.3. Expand 200.4. Under Response Body for 200, remove application/json.5. Under Response headers for 200, choose Add header.6. For Name, enter content-type.7. Choose the check mark icon to save.8. Choose Method execution to go back to the Method Execution pane.9. Choose Integration Response.10. Expand 200, and then expand Header Mappings.11. Choose the pencil icon at end of the row named content-type.12. Enter 'image/jpeg' to see an image file.-or-Enter 'application/pdf' to see a PDF file.Set up binary media types for the API1. In the navigation pane of your API page, choose Settings.2. In the Binary Media Types section, choose Add Binary Media Type.3. In the text box, add the following string: */*Note: Don't put the string in quotes. 
You can substitute a wildcard for a particular Multipurpose Internet Mail Extensions (MIME) type that you want to treat as a binary media type. For example, to have API Gateway treat JPEG images as binary media types, choose 'image/jpeg'. If you add */*, then API Gateway treats all media types as binary media types.4. Choose Save Changes.Resolve CORS error with binary settings for API1. If you use the previously noted APIs (PUT and GET) in a web application, then you might encounter a CORS error. For more information, see CORS errors on the Mozilla website.2. To resolve the CORS error with binary settings turned on, start CORS from the API Gateway console.3. In the Resources panel of your API page, select /{object}.4. For Actions, choose Enable CORS.5. Choose Enable CORS and replace existing CORS headers.6. To update the content handling property, run the update-integration CLI command. This update allows the binary content to handle preflight options requests with mock integration.7. Update the API ID, Resource ID, and AWS Region to run the following two CLI commands. To obtain the API ID and Resource ID, select the {object} resource from the top of the Gateway API console.aws apigateway update-integration --rest-api-id API_ID --resource-id RESOURCE_id --http-method OPTIONS --patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_TEXT' --region AWS_REGIONaws apigateway update-integration-response --rest-api-id API_ID --resource-id RESOURCE_id --http-method OPTIONS --status-code 200 --patch-operations op='replace',path='/contentHandling',value='CONVERT_TO_TEXT' --region AWS_REGIONDeploy your API1. In the navigation pane on your API page, choose Resources.2. In the Resources pane, choose Actions, and then choose Deploy API.3. In the Deploy API window, for Deployment stage, choose [New Stage].4. For Stage name, enter v1.5. Choose Deploy.6. In the navigation pane, choose Stages.7. Choose the v1 stage. The invoke URL for making requests to the deployed API snapshot appears.8. Copy the invoke URL.Note: For more information, see Deploying a REST API in Amazon API Gateway.Invoke your API to upload an image file to S3Append the bucket name and file name of the object to your API's invoke URL. Then, make a PUT HTTP request with a client of your choice. For example, with the Postman external application, choose PUT method from the dropdown. Choose Body, and then choose binary. When the Select File button appears, select a local file to upload.For more information, see Invoking a REST API in Amazon API Gateway.Example curl command to make a PUT HTTP request to upload an image or PDFIn the following example, abc12345 is your API ID, testfolder is your S3 bucket, and testimage.jpeg is the local file that you upload:curl -i --location --request PUT 'https://abc12345.execute-api.us-west-2.amazonaws.com/v1/testfolder/testimage.jpeg' --header 'Content-Type: text/plain' --data-binary '@/Path/to/file/image.jpeg'Important: If */* is included in the binary media types list, then you can make a PUT request to upload the file. If image.jpeg is included in the binary media types list, then you must add Content-Type header to your PUT request. You must set Content-Type header to 'image/jpeg'.You now see the image or PDF in a web browser with the same URL. This is because the web browser makes a GET request.Related informationActivating binary support using the API Gateway REST APIFollow" | https://repost.aws/knowledge-center/api-gateway-upload-image-s3 |
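The binary media type and deployment steps described above can also be scripted. The following sketch assumes the hypothetical API ID abc12345 used in the article; the ~1 sequence is the escaped form of the / character in the */* media type.

# Add */* as a binary media type (equivalent to the console steps above)
aws apigateway update-rest-api \
    --rest-api-id abc12345 \
    --patch-operations op='add',path='/binaryMediaTypes/*~1*'

# Redeploy so the change takes effect on the v1 stage
aws apigateway create-deployment --rest-api-id abc12345 --stage-name v1

# Retrieve the uploaded object through the GET method
curl -o downloaded.jpeg \
    https://abc12345.execute-api.us-west-2.amazonaws.com/v1/testfolder/testimage.jpeg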
How can I troubleshoot connectivity failures between AWS DMS and a MongoDB source endpoint? | The connectivity between my AWS Database Migration Service (AWS DMS) replication instance and my MongoDB source endpoint failed. How can I troubleshoot "Test Endpoint failed" errors when using MongoDB as the source endpoint? | "The connectivity between my AWS Database Migration Service (AWS DMS) replication instance and my MongoDB source endpoint failed. How can I troubleshoot "Test Endpoint failed" errors when using MongoDB as the source endpoint?ResolutionA MongoDB source endpoint can fail to connect for several reasons. See the following common errors and their resolutions.Connection timeout calling errorsIf an AWS DMS replication instance can't connect to the MongoDB database that is specified, you receive the following error:Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to create new client connection Failed to connect to database., Application-Detailed-Message: Error verifying connection: 'No suitable servers found (`serverSelectionTryOnce` set): [connection timeout calling ismaster on 'mongodbtest.us-west-2.compute.amazonaws.com:27017']' Failed to connect to database.This error occurs when connectivity can't be established between the AWS DMS replication instance and the MongoDB database. Most often, this is caused by configuration issues in the security groups, network access control lists (network ACLs), or on-premises firewalls and IP address tables. To resolve this error, confirm that your network is configured to meet the connectivity requirements for AWS DMS replication instances.Connection refused calling errorsWhen the connection request from the AWS DMS replication instance is refused by the MongoDB instance, you receive the following error:Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to create new client connection Failed to connect to database., Application-Detailed-Message: Error verifying connection: 'No suitable servers found (`serverSelectionTryOnce` set): [connection refused calling ismaster on 'mongodbtest.us-west-2.compute.amazonaws.com:27017']' Failed to connect to database.This error occurs when the bindIp settings on the MongoDB database don't allow access to connections from replication instances. To resolve this error, modify the bindIp configuration on the MongoDB instance to allow connections from replication instances. For more information, see the MongoDB documentation for IP Binding.Authentication failed errorsWhen the credentials provided aren't correct or use a special character, you receive the following error:Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to create new client connection Failed to connect to database., Application-Detailed-Message: Error verifying connection: 'Authentication failed.' Failed to connect to database.This error occurs when the user name or password provided in the endpoint are incorrect, the authentication source database provided for the user name field is incorrect, or you used a special character in your password that MongoDB doesn't accept. 
For more information, see Creating Source and Target Endpoints.To resolve this error, confirm that you have the correct authentication credentials by connecting to the MongoDB database using the user name and password provided in the endpoint.Libmongoc version errorsWhen you use a version of MongoDB that isn't supported for AWS DMS replication, you receive the following error:Test Endpoint failed: Application-Status: 1020912, Application-Message: Failed to create new client connection Failed to connect to database., Application-Detailed-Message: Error verifying connection: 'Server at ec2-35-166-73-109.us-west-2.compute.amazonaws.com:27017 reports wire version 2, but this version of libmongoc requires at least 3 (MongoDB 3.0)' Failed to connect to database.To resolve this error, upgrade the source MongoDB database to a version of MongoDB that is supported by AWS DMS.Related informationUsing MongoDB as a Source for AWS DMSHow can I troubleshoot AWS DMS endpoint connectivity failures?Follow" | https://repost.aws/knowledge-center/dms-connectivity-failures-mongodb |
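To reproduce the endpoint test outside the console, you can rerun the connection test from the AWS CLI and, separately, confirm the credentials with a MongoDB client. The ARNs, user name, password, and host name below are placeholders.

# Re-run the endpoint test from the replication instance
aws dms test-connection \
    --replication-instance-arn arn:aws:dms:us-west-2:111122223333:rep:EXAMPLE \
    --endpoint-arn arn:aws:dms:us-west-2:111122223333:endpoint:EXAMPLE

# Check the result and the failure message, if any
aws dms describe-connections \
    --filters Name=endpoint-arn,Values=arn:aws:dms:us-west-2:111122223333:endpoint:EXAMPLE

# Independently verify the user name, password, and authSource with a MongoDB client
mongosh "mongodb://dmsuser:dmspassword@mongodbtest.us-west-2.compute.amazonaws.com:27017/?authSource=admin"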
How do I use CloudTrail to review what API calls and actions have occurred in my AWS account? | "How do I review actions that occurred in my AWS account, such as console logins or terminating an instance?" | "How do I review actions that occurred in my AWS account, such as console logins or terminating an instance?Short descriptionYou can use AWS CloudTrail data to view and track API calls made to your account using the following:CloudTrail Event historyCloudTrail LakeAmazon CloudWatch LogsAmazon Athena queriesAmazon Simple Storage Service (Amazon S3) archived log filesNote: Not all AWS services have logs recorded and available with CloudTrail. For a list of AWS services integrated with CloudTrail, see AWS service topics for CloudTrail.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.CloudTrail Event historyReviewing CloudTrail Event history using the CloudTrail consoleYou can view all supported services and integrations and event types (create, modify, delete, and non-mutable activities) from the past 90 days. You don't need to set up a trail to use CloudTrail Event history.For instructions, see Viewing CloudTrail events in the CloudTrail console.Reviewing CloudTrail Event history using the AWS CLINote: To search for events using the AWS CLI, you must have a trail created and configured to log to CloudWatch Logs. For more information, see Creating a trail. Also, Sending events to CloudWatch Logs.Use the filter-log-events command to apply metric filters to search for specific terms, phrases, and values in your log events. Then, you can transform them into CloudWatch metrics and alarms.For more information, see Filter and pattern syntax.Note: To use the filter-log-events command at scale (for example, automation or a script), it's a best practice to use CloudWatch Logs subscription filters. This is because the filter-log-events API action has API limits. Subscription filters have no such limits. Subscription filters also provide the ability to process large amounts of log data in real time. For more information, see CloudWatch Logs quotas.CloudTrail LakeCloudTrail Lake allows you to aggregate, immutably store, and run SQL-based queries on your events. You can store even data in CloudTrail Lake for up to seven years, or 2,555 days.For more information, see Working with AWS CloudTrail Lake.Amazon CloudWatch LogsNote: To use CloudWatch Logs, you must have a trail created and configured to log to CloudWatch Logs. For more information, see Creating a trail. Also, Sending events to CloudWatch Logs.You can use CloudWatch Logs to search for operations that change the state of a resource (for example, StopInstances). You can also use CloudWatch Logs to search for operations that don't change the state of a resource (for example, DescribeInstances). For instructions, see View log data sent to CloudWatch Logs.Keep in mind the following:You must explicitly configure CloudTrail to send events to CloudWatch Logs, even if you already created a trail.You can't review activity from before the logs were configured.There can be multiple log streams, depending on the size and volume of events. 
To search across all streams, choose Search Log Group before selecting an individual stream.Because CloudWatch Logs has an event size limitation of 256 KB, CloudTrail doesn't send events larger than 256 KB to CloudWatch Logs.Amazon Athena queriesYou can use Amazon Athena to view CloudTrail data events and management events stored in your Amazon S3 bucket.For more information, see How do I automatically create tables in Amazon Athena to search through AWS CloudTrail logs? Also, Creating the table for CloudTrail logs in Athena using manual partitioning.Amazon S3 archived log filesNote: To view Amazon S3 archived log files, you must have a trail created and configured to log to an S3 bucket. For more information, see Creating a trail.You can see all events captured by CloudTrail in the Amazon S3 log files. You can also manually parse the log files from the S3 bucket Using the CloudTrail Processing Library, the AWS CLI, or send logs to AWS CloudTrail partners.For instructions, see Amazon S3 CloudTrail events.Note: You must have a trail activated to log to an S3 bucket.Related informationWhat is Amazon CloudWatch Logs?Creating metrics from log events using filtersAWS Config console now displays API events associated with configuration changesCreating CloudWatch alarms for CloudTrail events: examplesFollow" | https://repost.aws/knowledge-center/cloudtrail-track-api |
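As a quick sketch of querying Event history from the AWS CLI, lookup-events can filter the last 90 days by attributes such as event name; the time window below is illustrative.

# Who terminated instances during January 2024?
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-31T23:59:59Z \
    --max-results 10

# Console sign-in events for the same period
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
    --start-time 2024-01-01T00:00:00Z \
    --end-time 2024-01-31T23:59:59Z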
What are the differences between Amazon EC2 and Amazon Lightsail? | I want to use virtual servers to run my applications. What are the differences between Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Lightsail? | "I want to use virtual servers to run my applications. What are the differences between Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Lightsail?ResolutionAWS offers Amazon EC2 and Lightsail for hosting applications. Amazon EC2 is a mix of multiple services and has its own individual features used to create a single architecture. Amazon EC2 instances are meant for small to complex architecture. Lightsail, on other hand, is an integrated product of services offered by AWS. Lightsail is better for small to medium scale workloads.Choosing the right architecture is always based on your application's requirements. There are few points that you can use to help you decide which service will work best for your needs. The following are the main differences between Amazon EC2 and Lightsail:Amazon LightsailAmazon EC2UsageUsed for simple web applications and websites, including custom code, and common CMS.Used for small scale to Enterprise applications such as HPC, big data, and analytics workloads.PerformanceUsed for applications with workloads ranging from small to medium.Used for small scale to higher workloads with complex architecture.EaseDeploying applications in Lightsail can be done with few clicks.Provides an all-in-one experience.Deploying application in Amazon EC2 varies due to multiple factors, such as the type of application, the type of components used, and so on. Each component has its own characteristics and features that can be modified in their respective consoles.Administrative supportLess system admin and system architect efforts are needed in Lightsail.Based on the type of environment, the administrative effort varies. Most of the services in EC2 require a thorough understanding about the components.NetworkManaged by AWS. Customers can add rules to Lightsail firewall.Managed by the customer using VPC and related components.SubnetsLightsail has no concept of private subnets.Customers can create subnets as public or private based on their application needs.ScalabilityAutomatic instance scalability isn't supported in Lightsail.Instances can't be modified after launch. 
You must launch a new instance to change your plan.Instances can be scaled automatically using an Amazon EC2 Auto Scaling group.EC2 instances can be modified to a new type or to a new virtualization.Flexibility in managing resourcesMinimal flexibility in managing resources such as network, hard disk, load balancer, and so on.Customers can manage all the related components based on the application demands.Elastic volumesNot supportedSupportedResource managementAll resources are managed from the same dashboard.Each resource has its own console and options.PricingPrices are low and there is a fixed pricing model.Pricing follows the pay as you go model.Load balancingThe Lightsail load balancer is available for use with Lightsail instances.There are multiple types of load balancers available.MonitoringMonitoring is available, but is restricted to a few options.Detailed monitoring options are available using Amazon Cloudwatch.BackupBackups are available by using Lightsail snapshots.Backups are available as snapshots and AMIs.EncryptionEncryption is enabled by default and is managed by AWS.Customers can choose to enable or disable encryption.Free tierThe free tier is available for 3 months from the day of signing up.The free tier is available for 12 months from the day of signing up.SupportSupport is provided by the AWS Support team. The scope of troubleshooting application level issues is limited.Support is provided by the AWS Support team. The scope of troubleshooting application level issues is limited.For more information, see Amazon Lightsail or Amazon EC2.For more information about limitations in Lightsail, see What should I consider before choosing Lightsail over Amazon EC2?Follow" | https://repost.aws/knowledge-center/lightsail-differences-from-ec2 |
How do I remove rate limiting for bulk resource record operations in a Route 53 hosted zone? | My account is rate throttled (API throttled) when I perform bulk resource record operations on my Amazon Route 53 hosted zone. | "My account is rate throttled (API throttled) when I perform bulk resource record operations on my Amazon Route 53 hosted zone.Short descriptionWhen you are performing bulk resource record operations for your hosted zone in Route 53, you might receive an HTTP 400 (Bad Request) error. A response header that contains a Code element with a value of Throttling and a Message element with a value of Rate exceeded indicates rate throttling.Warning: Rate throttling happens when the number of API requests is greater than the hard limit of five requests per second (per account).If Route 53 can't process the request before the next request for the same hosted zone arrives, it rejects subsequent requests with another HTTP 400 error. The response header contains a Code element with a value of PriorRequestNotComplete and a Message element with a value of the request was rejected because Route 53 was still processing a prior request.Note: API calls by IAM users within the same account count towards global rate throttling for the account and affect API calls made from the AWS Management Console.ResolutionYou can use the following methods to avoid rate throttling:Batching requestsGroup individual operations of the same type into one change batch operation to reduce API calls.Note: UPSERT requests (update and insert) are counted twice.For example, you could request to CREATE, DELETE, or LIST many records with one batch operation.Using error retries and exponential backoffAdd error retries and exponential backoff to your Route 53 API calls.For example, use a simple exponential backoff algorithm that retries the call in 2^i seconds, where i is the number of retries.Randomizing start timesRandomize the start time for calling Route 53 APIs. Be sure that there aren't multiple applications executing the logic at the same time, because simultaneous requests can cause throttling.Introducing "sleep time" between callsIf the code function calls to Route 53 APIs are consecutive, add "sleep time" in between two calls to minimize throttling risk.Note: If your account is still rate throttled after using these troubleshooting methods, open a support case with Route 53 for assistance to locate the source IP address of the API calls that exceed the per second threshold. Then, you can shut off unnecessary sources, or use these troubleshooting methods to resolve the issue.Related informationLimits – Amazon Route 53Follow" | https://repost.aws/knowledge-center/remove-rate-limiting |
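A sketch of the batching and backoff advice above: group many record changes into a single change batch, and retry the call in 2^i seconds if it's throttled. The hosted zone ID and record names are placeholders.

# Write one change batch that covers several records (each UPSERT counts as two changes)
cat > changes.json <<'EOF'
{
  "Comment": "Batched record operations",
  "Changes": [
    {"Action": "UPSERT", "ResourceRecordSet": {"Name": "app1.example.com", "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": "192.0.2.10"}]}},
    {"Action": "UPSERT", "ResourceRecordSet": {"Name": "app2.example.com", "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": "192.0.2.11"}]}}
  ]
}
EOF

# Simple exponential backoff around the single batched call
for i in 1 2 3 4 5; do
  aws route53 change-resource-record-sets \
      --hosted-zone-id Z0123456789EXAMPLE \
      --change-batch file://changes.json && break
  sleep $((2**i))
done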
How do I resolve the error "unable to create input format" in Athena? | "When I run a query in Amazon Athena, I get the error "unable to create input format"." | "When I run a query in Amazon Athena, I get the error "unable to create input format".ResolutionThere are multiple causes of this error. Here are some common scenarios and solutions:The AWS Glue crawler can't classify the data formatThe data is stored in Amazon Simple Storage Service (Amazon S3).You run an AWS Glue crawler with a built-in classifier to detect the table schema. The crawler returns a classification of UNKNOWN. At least one column is detected, but the schema is incorrect.When you query the table from Athena, the query fails with the error "HIVE_UNKNOWN_ERROR: Unable to create input format".To resolve this error, use a data type that is supported by a built-in classifier. If the data format can't be classified by a built-in classifier, then consider using a custom classifier.Athena doesn't support the data formatThe data is stored in Amazon S3.You run a crawler to create the table. The crawler classifies the table in a format that Athena doesn't support, such as ion or xml.When you query the table from Athena, the query fails with the error "HIVE_UNKNOWN_ERROR: Unable to create input format".To resolve this error, use a data format that Athena supports.One or more of the AWS Glue table definition properties are emptyThe AWS Glue table isn't created in Athena or by an AWS Glue crawler. The table is created using any other method. For example, the table is created manually on the AWS Glue console.When you query the table from Athena, the query fails with the error "HIVE_UNKNOWN_ERROR: Unable to create input format".This error occurs because one or more of the following properties in the AWS Glue table definition are empty:Input formatOutput formatSerde nameConfirm that these properties are set correctly for the SerDe and data format. Keep in mind that the SerDe that you specify defines the table schema. The SerDe can override the DDL configuration that you specify in Athena when you create your table.To update the table definition properties, do the following:Open the AWS Glue console.Select the table that you want to update.Choose Action, and then choose View details.Choose Edit table.Update the settings for Input format, Output format, or Serde name.Choose Apply.The data source in your Athena query is not supportedAthena supports querying tables only if the tables are stored in Amazon S3. You might get the "unable to create input format" error if you query a data source that's not supported by Athena.To resolve this error, use Athena Query Federation SDK. The SDK allows you to customize Athena with your own code. With Athena Federation SDK, you can integrate with different data sources and proprietary data formats. You can also build new user-defined functions. For more information, see Query any data source with Amazon Athena’s new federated query.Related informationAdding classifiers to a crawlerUsing a SerDeFollow" | https://repost.aws/knowledge-center/athena-unable-to-create-input-format |
How do I resolve the error "Container killed by YARN for exceeding memory limits" in Spark on Amazon EMR? | I want to troubleshoot the error "Container killed by YARN for exceeding memory limits" in Spark on Amazon EMR. | "I want to troubleshoot the error "Container killed by YARN for exceeding memory limits" in Spark on Amazon EMR.Short descriptionUse one of the following methods to resolve this error:Increase memory overhead.Reduce the number of executor cores.Increase the number of partitions.Increase driver and executor memory.ResolutionThe root cause and the appropriate solution for this error depends on your workload. You might have to try each of the following methods, in the following order, to troubleshoot the error. Before you continue to the next method in this sequence, reverse any changes that you made to spark-defaults.conf in the preceding section.Increase memory overheadMemory overhead is the amount of off-heap memory allocated to each executor. By default, memory overhead is set to either 10% of executor memory or 384, whichever is higher. Memory overhead is used for Java NIO direct buffers, thread stacks, shared native libraries, or memory mapped files.Consider making gradual increases in memory overhead, up to 25%. The sum of the driver or executor memory plus the memory overhead must be less than the yarn.nodemanager.resource.memory-mb for your instance type.spark.driver/executor.memory + spark.driver/executor.memoryOverhead < yarn.nodemanager.resource.memory-mbIf the error occurs in the driver container or executor container, consider increasing memory overhead for that container only. You can increase memory overhead while the cluster is running, when you launch a new cluster, or when you submit a job.On a running cluster:Modify spark-defaults.conf on the master node.For example:sudo vim /etc/spark/conf/spark-defaults.confspark.driver.memoryOverhead 512spark.executor.memoryOverhead 512On a new cluster:Add a configuration object similar to the following when you launch a cluster:[ { "Classification": "spark-defaults", "Properties": { "spark.driver.memoryOverhead": "512", "spark.executor.memoryOverhead": "512" } }]For a single job:Use the --conf option to increase memory overhead when you run spark-submit.For example:spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --conf spark.driver.memoryOverhead=512 --conf spark.executor.memoryOverhead=512 /usr/lib/spark/examples/jars/spark-examples.jar 100If increasing the memory overhead doesn't solve the problem, then reduce the number of executor cores.Reduce the number of executor coresThis reduces the maximum number of tasks that the executor can perform, which reduces the amount of memory required. 
Depending on the driver container throwing this error or the other executor container that's getting this error, consider decreasing cores for the driver or the executor.On a running cluster:Modify spark-defaults.conf on the master node.For example:sudo vim /etc/spark/conf/spark-defaults.confspark.driver.cores 3spark.executor.cores 3On a new cluster:Add a configuration object similar to the following when you launch a cluster:[ { "Classification": "spark-defaults", "Properties": {"spark.driver.cores" : "3", "spark.executor.cores": "3" } }]For a single job:Use the --executor-cores option to reduce the number of executor cores when you run spark-submit.For example:spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --executor-cores 3 --driver-cores 3 /usr/lib/spark/examples/jars/spark-examples.jar 100If you still get the error message, increase the number of partitions.Increase the number of partitionsTo increase the number of partitions, increase the value of spark.default.parallelism for raw Resilient Distributed Datasets, or run a .repartition() operation. Increasing the number of partitions reduces the amount of memory required per partition. Spark heavily uses cluster RAM as an effective way to maximize speed. Therefore, you must monitor memory usage with Ganglia, and then verify that your cluster settings and partitioning strategy meet your growing data needs. If you still get the "Container killed by YARN for exceeding memory limits" error message, increase the driver and executor memory.Increase driver and executor memoryIf the error occurs in either a driver container or an executor container, consider increasing memory for either the driver or the executor, but not both. Be sure that the sum of driver or executor memory plus driver or executor memory overhead is always less than the value of yarn.nodemanager.resource.memory-mb for your EC2 instance type:spark.driver/executor.memory + spark.driver/executor.memoryOverhead < yarn.nodemanager.resource.memory-mbOn a running cluster:Modify spark-defaults.conf on the master node.Example:sudo vim /etc/spark/conf/spark-defaults.confspark.executor.memory 1gspark.driver.memory 1gOn a new cluster:Add a configuration object similar to the following when you launch a cluster:[ { "Classification": "spark-defaults", "Properties": { "spark.executor.memory": "1g", "spark.driver.memory":"1g", } }]For a single job:Use the --executor-memory and --driver-memory options to increase memory when you run spark-submit.For example:spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --executor-memory 1g --driver-memory 1g /usr/lib/spark/examples/jars/spark-examples.jar 100Other solution optionsIf you still get the error message, try the following:Benchmarking: It's a best practice to run your application against a sample dataset. Doing so can help you spot slowdowns and skewed partitions that can lead to memory problems.Data filtration: Be sure that you are processing the minimum amount of data. If you don't filter your data, or if you filter late in the application run, then excess data might slow down the application. This can increase the chance of a memory exception.Dataset size: It's a best practice to process the minimum required data. Partition your data so that only the required data is ingested.Partitioning strategy: Consider using a different partitioning strategy. 
For example, partition on an alternate key to avoid large partitions and skewed partitions.EC2 instance type: It's possible that your EC2 instance doesn't have the memory resources required for your workload. Switching to a larger memory-optimized instance type might resolve the error. If you still get memory exceptions after changing instance types, try the troubleshooting methods on the new instance.Related informationSpark configurationHow do I resolve the "java.lang.ClassNotFoundException" in Spark on Amazon EMR?Follow" | https://repost.aws/knowledge-center/emr-spark-yarn-memory-limit |
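Before raising any of the settings above, it can help to read the YARN per-node limit directly on the cluster and check the inequality by hand; the numbers in the comment are placeholders, not EMR defaults.

# On the EMR primary node: the per-node memory available to YARN containers
grep -A 2 'yarn.nodemanager.resource.memory-mb' /etc/hadoop/conf/yarn-site.xml

# Example check with placeholder values:
#   spark.executor.memory (4096 MB) + spark.executor.memoryOverhead (1024 MB) = 5120 MB
#   5120 MB must stay below the yarn.nodemanager.resource.memory-mb value printed above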
How can I set up cross-account access for Amazon EMRFS? | I want to use the Amazon EMR File System (EMRFS) to write to Amazon Simple Storage Service (Amazon S3) buckets that are in a different AWS account. | "I want to use the Amazon EMR File System (EMRFS) to write to Amazon Simple Storage Service (Amazon S3) buckets that are in a different AWS account.Short descriptionUse one of the following options to set up cross-account access for EMRFS:Add a bucket policy. Add a bucket policy for the destination bucket that grants access to the Amazon EMR account. This is easiest option. However, the destination account doesn't own the objects that EMRFS writes to the destination bucket.Use a custom credentials provider. This option allows you to assume an AWS Identity and Access Management (IAM) role in the destination bucket account. This means that the destination account owns objects that EMRFS writes to the destination bucket.Use role mappings in a security configuration. This option also allows EMRFS to assume an IAM role in the destination bucket account. This is the method that's discussed in this article.ResolutionWhen you use a security configuration to specify IAM roles for EMRFS, you set up role mappings. A role mapping specifies an IAM role that corresponds to an identifier. An identifier determines the basis for access to Amazon S3 through EMRFS. Identifiers can be users, groups, or Amazon S3 prefixes that indicate a data location. When EMRFS makes a request that matches the basis for access, EMRFS has cluster EC2 instances assume the corresponding IAM role for the request. The IAM permissions attached to that role apply, instead of the IAM permissions attached to the service role for cluster EC2 instances. For more information, see Configure IAM roles for EMRFS requests to Amazon S3.In the following steps, an identifier is specified as an Amazon S3 prefix that is accessed through EMRFS. This creates cross-account access for EMRFS using a security configuration with role mapping:1. Create an IAM role in the destination account. This is the role that you will assume from the EMR cluster.2. Add a trust policy similar to the following. The trust policy must allow the Amazon Elastic Compute Cloud (Amazon EC2) role for Amazon EMR to assume the role that you created in step 1. For more information, see Configure roles.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::EMRFSAcctID:role/EMR_EC2_DefaultRole" }, "Action": "sts:AssumeRole" } ]}3. Use the AWS Command Line Interface (AWS CLI) to create a security configuration with a role mapping. The role mapping must specify the role in the destination account (the role that you created in step 1).Note: You must use the AWS CLI or an SDK to create the security configuration. The console doesn't list roles in other accounts, even if you have permissions to assume those roles. If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.Supply a JSON object similar to the following for the role mapping. 
Replace these values in the example:arn:aws:iam::DestinationAcctID:role/role_in_destination_account: the Amazon Resource Name (ARN) of the role that you created in step 1s3://doc-example-bucket/: the bucket that you want EMRFS to write to{ "AuthorizationConfiguration": { "EmrFsConfiguration": { "RoleMappings": [ { "Role": "arn:aws:iam::DestinationAcctID:role/role_in_destination_account", "IdentifierType": "Prefix", "Identifiers": [ "s3://doc-example-bucket/" ] } ] } }}4. Create an IAM policy and then attach it to the Amazon EMR EC2 instance profile (for example, EMR_EC2_DefaultRole).The following example policy allows AWS Security Token Service (STS) to assume all roles. At a minimum, your policy must allow STS to assume the role that you created in step 1. For more information, see Granting permissions to create temporary security credentials.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "*" } ]}5. Launch an EMR cluster and specify the security configuration that you created in step 3.Note: If the destination bucket uses server-side encryption with AWS Key Management Service (AWS KMS), then the assumed role must be a key user in the AWS KMS customer managed key. You can't access the bucket if the role isn't listed in the AWS KMS key.Related informationCreate a security configurationFollow" | https://repost.aws/knowledge-center/emrfs-cross-account-access |
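The security configuration and cluster launch can be scripted with commands similar to the following; the configuration name, release label, instance settings, and subnet are placeholders, and emrfs-roles.json is the JSON object shown in step 3.

# Create the security configuration that holds the EMRFS role mapping (step 3)
aws emr create-security-configuration \
    --name emrfs-cross-account \
    --security-configuration file://emrfs-roles.json

# Launch a cluster that uses it (step 5)
aws emr create-cluster \
    --name "emrfs-cross-account-demo" \
    --release-label emr-6.10.0 \
    --applications Name=Spark \
    --instance-type m5.xlarge \
    --instance-count 3 \
    --use-default-roles \
    --security-configuration emrfs-cross-account \
    --ec2-attributes SubnetId=subnet-0123456789abcdef0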
Why can't I connect my EMR notebook to the cluster? | I can't connect my Amazon EMR notebook to my EMR cluster. | "I can't connect my Amazon EMR notebook to my EMR cluster.Short descriptionWhen connecting an EMR notebook to the EMR cluster, you might receive errors similar to the following:Unable to attach to cluster j-XXXXXXXXXXX. Reason: Attaching the workspace(notebook) failed. Internal error.Notebook is not supported in the chosen Availability Zone. Please try using a cluster in another availability zone.Attaching the workspace(notebook) failed. Invalid configuration.Workspace(notebook) is stopped. Cluster j-XXXXXXXXXX does not have JupyterEnterpriseGateway application installed. Please retry with another cluster.Workspace errors: Not able to attached EMR notebook to running cluster. Error starting kernel. HTTP 403: Forbidden (Workspace is not attached to cluster. Click 'Ok' to continue.)ResolutionVerify that the attached cluster is compatible and meets all cluster requirementsCluster requirements for EMR notebooks are as follows:1. Only clusters created using Amazon EMR release version 5.18.0 and later are supported.2. Clusters created using Amazon Elastic Compute Cloud (Amazon EC2) instances with AMD EPYC processors aren't supported. For example, m5a.* and r5a.* instance types aren't supported.3. EMR notebooks work only with clusters created with the VisibleToAllUsers variable set to true. VisibleToAllUsers is set to true by default.4. The cluster must be launched within an EC2 Amazon Virtual Private Cloud (Amazon VPC). Public and private subnets are supported.5. EMR notebooks currently support Apache Spark clusters only.6. For EMR release versions 5.32.0 and later, or 6.2.0 and later, your cluster must be running the Jupyter Enterprise Gateway application.7. Clusters using Kerberos authentication aren't supported.8. Clusters integrated with AWS Lake Formation support the installation of notebook-scoped libraries only. Installing kernels and libraries on the cluster isn't supported.9. Clusters with multiple primary nodes aren't supported.10. Clusters using Amazon EC2 instances based on AWS Graviton2 aren't supported.For more information, see Cluster requirements.Error: Unable to attach to cluster j-XXXXXXXXXXX. Reason: Attaching the workspace(notebook) failed. Internal errorThis occurs on EMR clusters with Apache Livy impersonation turned on. This means the livy.impersonation.enabled variable is set to true. On Amazon EMR 6.4.0 Livy impersonation is set to true by default. The EMR notebooks feature with Livy user impersonation turned off also have HttpFS turned off by default. This means that the EMR notebook can't connect to clusters that have Livy impersonation turned on. For more information, see Amazon EMR release 6.4.0.To avoid this problem, do the following:You can use any older version or newer version of EMR 6.4.0 where the hadoop-httpfs service is running.-or-Restart the hadoop-httpfs service on cluster by doing the following:1. Use SSH to connect to the EMR primary node.2. 
Run the following command to start the hadoop-httpfs service:sudo systemctl start hadoop-httpfsOr, you can start the hadoop-httpfs service using an EMR step:==========JAR location: command-runner.jarMain class: NoneArguments: bash -c "sudo systemctl start hadoop-httpfs"Action on failure: Continue==========Run the following command to verify the status of HttpFS:$ sudo systemctl status hadoop-httpfs hadoop-httpfs.service - Hadoop httpfs Loaded: loaded (/etc/systemd/system/hadoop-httpfs.service; disabled; vendor preset: disabled) Active: active (running)...3. Reattach the EMR cluster.Error: Workspace errorsThe following are common workspace errors when trying to connect your EMR cluster to an EMR notebook:Not able to attached EMR notebook to running cluster.Error Starting Kernel.HTTP 403: Forbidden (Workspace is not attached to cluster. Choose 'Ok' to continue.)These errors occur because the AWS root account isn't authorized to attach EMR notebooks to EMR clusters. The root user is considered an unauthorized user for starting kernels. If the value of KERNEL_USERNAME appears in the unauthorized_users list, the request to connect fails. For more information, see Security features.To avoid workspace errors, create an AWS Identity and Access Management (IAM) user, and then attach the cluster to the notebook. For more information, see Creating an IAM user in your AWS account.Follow" | https://repost.aws/knowledge-center/emr-notebook-connection-issues |
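The hadoop-httpfs restart in the article above can also be submitted without an SSH session, as an EMR step from the AWS CLI; the cluster ID is a placeholder, and shell quoting of the Args list may need adjusting for your shell.

# Submit the restart as a step instead of connecting over SSH
aws emr add-steps \
    --cluster-id j-XXXXXXXXXXXX \
    --steps 'Type=CUSTOM_JAR,Name=StartHttpFS,Jar=command-runner.jar,ActionOnFailure=CONTINUE,Args=[bash,-c,"sudo systemctl start hadoop-httpfs"]'

# Confirm the step completed before reattaching the notebook
aws emr list-steps --cluster-id j-XXXXXXXXXXXX --step-states COMPLETED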
"How do I turn on AWS WAF logging and send logs to CloudWatch, Amazon S3, or Kinesis Data Firehose?" | "I want to turn on logging for AWS WAF and send the logs to Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), or Amazon Kinesis Data Firehose. How do I turn on AWS WAF logs, and what are the required permissions?" | "I want to turn on logging for AWS WAF and send the logs to Amazon CloudWatch, Amazon Simple Storage Service (Amazon S3), or Amazon Kinesis Data Firehose. How do I turn on AWS WAF logs, and what are the required permissions?Short descriptionFirst, choose a supported destination for your AWS WAF web ACL. AWS WAF supports the following log destinations:A log group from Amazon CloudWatch LogsAn S3 bucket from Amazon Simple Storage ServiceAmazon Kinesis Data Firehose destinationsBe sure you have the necessary resource permissions to turn on AWS WAF logs. Then, turn on AWS WAF logs using your chosen destination.ResolutionThe following destinations are supported for storing your AWS WAF logs:Amazon CloudWatch LogsTo send the logs to a CloudWatch Logs log group, choose CloudWatch Logs log group as the destination when turning on AWS WAF logs.Either create a new log group or use an existing log group. When turned on, AWS WAF logs are sent to log groups in log streams. You can analyze these logs using Logs Insights. For more information, see What are my options to analyze AWS WAF logs stored in CloudWatch or Amazon S3?Consider the following when using CloudWatch logs:Log group names must start with the prefix aws-waf-logs-.Log groups must be in the same AWS account and Region as your web ACL. For Global web ACLs associated to CloudFront, the log group must be in US East (N. Virginia) Region.Log groups have quotas for log groups when storing logs.Log streams created in log groups have the following format:Region_web-acl-name_log-stream-numberNecessary permissionsThe account turning on the AWS WAF logs using CloudWatch Logs log group, must have the following permissions:wafv2:PutLoggingConfigurationwafv2:DeleteLoggingConfigurationlogs:CreateLogDeliverylogs:DeleteLogDeliverylogs:PutResourcePolicylogs:DescribeResourcePolicieslogs:DescribeLogGroupsThese permissions are necessary to change the web ACL logging configuration, configure log delivery, and to retrieve and edit permissions for a log group. These permissions must be attached to the user that is managing AWS WAF.When these permissions are assigned, AWS automatically adds the following policy in the resource-based policies of CloudWatch Logs. This allows delivery services to push logs to a CloudWatch Logs log group.Note: The account number and Amazon resource name (ARN) will be specific to your account for the following policy.{ "Version": "2012-10-17", "Statement": [ { "Sid": "AWSLogDeliveryWrite20150319", "Effect": "Allow", "Principal": { "Service": ["delivery.logs.amazonaws.com"] }, "Action": [ "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": ["arn:aws:logs:us-east-1:0123456789:log-group:my-log-group:log-stream:*"], "Condition": { "StringEquals": { "aws:SourceAccount": ["0123456789"] }, "ArnLike": { "aws:SourceArn": ["arn:aws:logs:us-east-1:0123456789:*"] } } } ]}If you don't see logs in your log group, check if the preceding necessary permissions are added to your log group's resource based policy using DescribeResourcePolicies API. 
You can edit the resource based policy for logs services using PutResourcePolicy.For more information on log group permissions, see Enabling logging from certain AWS services.Amazon S3 bucketTo send the logs to an Amazon S3 bucket, choose S3 bucket as the destination when turning on AWS WAF logs.Web ACLs publish the log files to an S3 bucket at five minute intervals. The maximum file size is 75 megabytes (MB). If the file size exceeds the maximum, then a new file is logged. When logs are turned on, you can analyze them using Amazon Athena. For more information, see Querying AWS WAF logs.S3 bucket names for AWS WAF logging must start with the prefix aws-waf-logs-.Necessary permissionsThe account turning on the AWS WAF logs using an S3 bucket, must have the following permissions:wafv2:PutLoggingConfigurationwafv2:DeleteLoggingConfigurationlogs:CreateLogDeliverylogs:DeleteLogDeliverys3:PutBucketPolicys3:GetBucketPolicyThese permissions are necessary to turn on AWS WAF logging and to configure log delivery for an S3 bucket. They are also needed to retrieve and edit the bucket policy to allow AWS WAF log delivery to an S3 bucket.When these permissions are assigned, the following example policy is automatically added in the Bucket policy to allow delivery of logs to the S3 bucket:Note: The account number and Amazon Resource Name (ARN) are specific to your account for the following policy.{ "Version": "2012-10-17", "Id": "AWSLogDeliveryWrite20150319", "Statement": [ { "Sid": "AWSLogDeliveryAclCheck", "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "s3:GetBucketAcl", "Resource": "arn:aws:s3:::my-bucket", "Condition": { "StringEquals": { "aws:SourceAccount": [ "0123456789" ] }, "ArnLike": { "aws:SourceArn": [ "arn:aws:logs:us-east-1:0123456789:*" ] } } }, { "Sid": "AWSLogDeliveryWrite", "Effect": "Allow", "Principal": { "Service": "delivery.logs.amazonaws.com" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::my-bucket/AWSLogs/account-ID/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control", "aws:SourceAccount": [ "0123456789" ] }, "ArnLike": { "aws:SourceArn": [ "arn:aws:logs:us-east-1:0123456789:*" ] } } } ]}If you don't see the AWS WAF logs in the S3 bucket, then check if the necessary permissions are present in the bucket policy using GetBucketPolicy API. You can edit the bucket policy using PutBucketPolicy API.To send logs to another AWS account or Region, see How do I send AWS WAF logs to an Amazon S3 bucket in a centralized logging account?Amazon Kinesis Data FirehoseTo send the AWS WAF logs to Kinesis Data Firehose stream, you must create a delivery stream. The delivery stream has different destinations to store logs.Consider the following when using Kinesis Data Firehose:The Kinesis Data Firehose name must start with the prefix aws-waf-logs-.The Kinesis Data Firehose delivery stream must be in the same AWS account and Region as your web ACL. For Global web ACLs associated to CloudFront, the Kinesis Data Firehose must be in the US East (N. Virginia) Region.One AWS WAF log is equivalent to one Kinesis Data Firehose record and is subject to Amazon Kinesis Data Firehose Quotas.Important: If you receive more than 10,000 requests per second, your data is throttled and not all requests are logged. 
To prevent throttling, you must request an increase in the quota for the Kinesis Data Firehose.Necessary permissionsThe account turning on the AWS WAF logs using the Kinesis Data Firehose destination must have the following permissions:wafv2:PutLoggingConfigurationwafv2:DeleteLoggingConfigurationiam:CreateServiceLinkedRolefirehose:ListDeliveryStreamsFor information about service-linked roles and the iam:CreateServiceLinkedRole permission, see Using service-linked roles for AWS WAF.To create a Kinesis Data Firehose delivery stream, follow these steps:Open the Amazon Kinesis console.For Region, select the AWS Region where you created your web ACL.Note: Select Global if your web ACL is set up for Amazon CloudFront.In the navigation pane, choose Delivery streams.Choose Create delivery stream.For Source, choose Direct PUT.For Destination, choose from the available destinations for Kinesis Firehose.For Delivery stream name, enter a name for your delivery stream starting with aws-waf-logs-.Confirm Data transformation and Record format conversion are both Disabled.Enter the Destination settings based on your destination method chosen in step 6.(Optional) For Buffer hints, compression and encryption, configure to your specifications or keep the default settings.(Optional) For Advanced settings, configure to your specifications or keep the default settings.Review the settings for the delivery stream. If the settings match your specifications, choose Create delivery stream.Turn on AWS WAF logsAfter you decide the destination where you want to send your AWS WAF logs, turn on AWS WAF logging by doing the following:Open the AWS WAF console.For Region, select the AWS Region where you created your web ACL.Note: Select Global if your web ACL is set up for Amazon CloudFront.Select your web ACL.Choose Logging and Metrics, then choose Enable.Choose the Destination of where you want to store the AWS WAF logs from the supported destinations.For Redacted fields, select the fields you want to omit from the logs.For Filter logs, add the filter to control which requests you want to store.Choose Save.Follow" | https://repost.aws/knowledge-center/waf-turn-on-logging |
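As a rough CLI equivalent of the console steps above, logging can also be turned on with the wafv2 put-logging-configuration call; both ARNs below are placeholders.
# Turn on logging for a regional web ACL and send the logs to a CloudWatch Logs log group
aws wafv2 put-logging-configuration \
  --region us-east-1 \
  --logging-configuration '{"ResourceArn":"arn:aws:wafv2:us-east-1:111122223333:regional/webacl/my-web-acl/EXAMPLE-1111-2222-3333-abcdefghijkl","LogDestinationConfigs":["arn:aws:logs:us-east-1:111122223333:log-group:aws-waf-logs-my-group"]}'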
How do I terminate active resources that I no longer need on my AWS account? | I want to terminate active resources that I no longer need. | "I want to terminate active resources that I no longer need.ResolutionClosing your account doesn't automatically terminate all your active resources. If you're closing your account, first confirm that all active resources are terminated to prevent unexpected charges.SubscriptionsYou will continue to incur charges for Savings Plans, Reserved Instances, or other Subscriptions even after you close or suspend your account.If you have Reserved Instances (RIs) with a monthly charge on your account, then you're billed for these subscriptions until the plan term ends. This applies to monthly charges such as:Amazon Elastic Compute Cloud (Amazon EC2) RIsAmazon Relational Database Service (Amazon RDS) RIsAmazon Redshift RIsAmazon ElastiCache Reserved Cache NodesAWS can't cancel an RI before the subscription term ends. You can list your Amazon EC2 RIs for sale on the EC2 Reserved Instance Marketplace. For more information, see Sell in the Reserved Instance Marketplace.If you have active AWS Marketplace subscriptions, these subscriptions aren't automatically canceled on account closure. You must first terminate all instances of your software in the subscriptions. Then, cancel subscriptions on the Manage subscriptions page of the AWS Marketplace console.If you signed up for a Savings Plan, then you're charged for the compute usage covered under the Savings Plan until the plan term ends.ResourcesTo find your active resources, see How do I check for active resources that I no longer need on my AWS account?To terminate active resources under different services, do the following:Open the AWS Management Console.Open the console for the service that contains the resources that you want to terminate (for example, Amazon Simple Storage Service). You can find a specific service by entering the service name in the search bar.After opening the service console, terminate all your active resources. Be sure to check each Region where you have allocated resources.Tip: You can change the Region with the Region selector in the navigation bar.To terminate your active resources for some commonly used AWS services, do the following:Amazon EC2To delete active Amazon EC2 instances, see How do I delete or terminate my Amazon EC2 resources?You might be charged for Elastic IP addresses even after you terminate all your EC2 instances. You can stop the charges by releasing the IP address. For more information, see Why am I being billed for Elastic IP addresses when all my Amazon EC2 instances are terminated?Even after terminating your EC2 instances, you will still incur charges for the Amazon Elastic Block Store (Amazon EBS) volumes attached to your EC2 instances. Be sure to delete the Amazon EBS volumes and Amazon EBS snapshots. For more information, see Why am I being charged for EBS when all my instances are stopped?Note: The Amazon EBS root device volumes are automatically deleted when you terminate your EC2 instances.Amazon S3To delete Amazon Simple Storage Service (Amazon S3) objects and buckets, see How do I delete Amazon S3 objects and buckets?Note: Be sure that the Server access logging for your Amazon S3 bucket is deactivated before deleting the objects in the bucket. Otherwise, logs might be immediately written to your bucket after you delete your bucket's objects. 
For more information, see Turning on Amazon S3 server access logging.If you have an Amazon S3 bucket that was created by AWS Elastic Beanstalk, you must first delete the Bucket Policy. The Bucket Policy is located in the Permissions section of the bucket properties in the Amazon S3 console. For more information, see Using Elastic Beanstalk with Amazon S3.Note: If you delete a bucket created by Elastic Beanstalk and you have existing applications in the corresponding Region, your applications might not function accurately.Amazon RDSTo delete Amazon RDS Amazon Relational Database Service (Amazon RDS) resources, see Deleting a DB instance.To delete Amazon RDS snapshots, see Deleting a snapshot.To delete retained automated backups of DB instances, see Working with backups.Important: To delete a DB instance or DB snapshot that has deletion protection activated, you must modify the instance and deactivate deletion protection.Amazon LightsailTo delete your active Lightsail resources, see How do I shut down my Amazon Lightsail resources?Other servicesIt's a best practice to delete your active AWS Directory Service directories before account closure. For more information, see How do I delete an AWS Directory Service directory?To terminate resources for other services not listed in this article, see the AWS Documentation for that service.Related informationWhy did I receive a bill after I closed my AWS account?How do I terminate an Amazon EMR cluster?How do I delete or terminate an Amazon ECS service?How do I delete or terminate a Client VPN endpoint or connection on AWS Client VPN?How do I delete a NAT Gateway in Amazon VPC?Follow" | https://repost.aws/knowledge-center/terminate-resources-account-closure |
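As a sketch of the Amazon EC2 cleanup described above, the following AWS CLI commands list Elastic IP addresses and unattached EBS volumes, then release or delete one of them; the resource IDs are placeholders.
# List Elastic IP addresses; entries with an empty AssociationId aren't in use
aws ec2 describe-addresses --query "Addresses[].[PublicIp,AllocationId,AssociationId]" --output table
# Release an unused Elastic IP address
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
# List EBS volumes that aren't attached to any instance, then delete one
aws ec2 describe-volumes --filters Name=status,Values=available --query "Volumes[].[VolumeId,Size]" --output table
aws ec2 delete-volume --volume-id vol-0123456789abcdef0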
"Why did my Amazon RDS DB instance restart, recover, or failover?" | "I want to know the root cause for the restart, recover, or failover of my Amazon Relational Database Service (Amazon RDS) DB instance." | "I want to know the root cause for the restart, recover, or failover of my Amazon Relational Database Service (Amazon RDS) DB instance.Short descriptionThe Amazon RDS database instance automatically performs a restart under the following conditions:There is loss of availability in primary Availability Zone or excessive workload due to performance bottleneck and resource contention.There is an underlying infrastructure issue with the primary instance, such as loss of network connectivity to the primary instance, compute unit issue on primary, or storage issue on primary.The DB instance class type is changed as part of the DB instance vertical scaling activity.The underlying host of the RDS DB instance is undergoing software patching during a specific maintenance window. For more information, see Maintaining a DB instance and Upgrading a DB instance engine version.You initiated a manual reboot of the DB instance using the options Reboot or Reboot with failover.When the DB instance shows potential issues and fails to respond to RDS health checks, RDS automatically initiates a Single-AZ recovery for the Single-AZ deployment and a Multi-AZ failover for the Multi-AZ deployment. Then, the DB instance is restarted so that you can resume database operations as quickly as possible without administrative intervention.ResolutionTo identify the cause of the outage, check the following logs and metrics for your RDS DB instance.Amazon RDS eventsTo identify the root cause of an unplanned outage in your instance, view all the Amazon RDS events for the last 24 hours. All the events are registered in the UTC/GMT time by default. To store events a longer time, send the Amazon RDS events to Amazon CloudWatch Events. For more information, see Creating a rule that triggers on an Amazon RDS event. When your instance restarts, you see one of the following messages in RDS event notifications:The RDS instance was modified by customer: This RDS event message indicates that the failover was initiated by an RDS instance modification.Applying modification to database instance class: This RDS event message indicates that the DB instance class type is changed.Single-AZ deployments become unavailable for a few minutes during this scaling operation.Multi-AZ deployments are unavailable during the time that it takes for the instance to failover. This duration is usually about 60 seconds. This is because the standby database is upgraded before the newly sized database experiences a failover. Then, your database is restarted, and the engine performs recovery to make sure that your database remains in a consistent state.The user requested a failover of the DB instance: This message indicates that you initiated a manual reboot of the DB instance using the option Reboot or Reboot with failover.The primary host of the RDS Multi-AZ instance is unhealthy: This reason indicates a transient underlying hardware issue that led to the loss of communication to the primary instance. 
This issue might have rendered the instance unhealthy because the RDS monitoring system couldn't communicate with the RDS instance for performing the health checks.The primary host of the RDS Multi-AZ instance is unreachable due to loss of network connectivity: This reason indicates that the Multi-AZ failover and database instance restart were caused by a transient network issue that affected the primary host of your Multi-AZ deployment. The internal monitoring system detected this issue and initiated a failover.The RDS Multi-AZ primary instance is busy and unresponsive, the Multi-AZ instance activation started, or the Multi-AZ instance activation completed: The event log shows these messages under the following situations:The primary DB instance is unresponsive.A memory crunch after an excessive memory consumption in the database prevented the RDS monitoring system from contacting the underlying host. Hence the database restarts by our monitoring system as a proactive measure.The DB instance experienced intermittent network issues with the underlying host.The instance experienced a database load. In this case, you might notice spikes in CloudWatch metrics CPUUtilization, DatabaseConnections, IOPS metrics, and Throughput details. You might also notice depletion of Freeablememory.Database instance patched: This message indicates that the DB instance underwent a minor version upgrade during a maintenance window because the setting Auto minor version upgrade is enabled on the instance.CloudWatch metricsView the CloudWatch metrics for your Amazon RDS instance to check if the database load issue caused the outage. For more information, see Monitoring Amazon RDS metrics with Amazon CloudWatch. Check for spikes in the following key metrics that indicate the availability and health status of your RDS instance:DatabaseConnectionsCPUUtilizationFreeableMemoryWriteIOPSReadIOPSReadThroughputWriteThroughputDiskQueueDepthEnhanced MonitoringAmazon RDS delivers metrics from Enhanced Monitoring into your Amazon CloudWatch Logs account. This provides metrics in real time for the operating system that your DB instance runs on. You can view all the system metrics and process information for your DB instances on the console.You can set the granularity for the Enhanced Monitoring feature to 1, 5, 10, 15, 30, or 60.To turn on Enhanced Monitoring for your Amazon RDS instance, see Setting up and enabling Enhanced Monitoring.Performance InsightsThe Performance Insights dashboard contains information related to database performance that can help you analyze and troubleshoot performance issues. You can also identify queries and wait events that consume excessive resources on your DB instance. Performance Insights collects data at the database level and displays the data on the Performance Insights dashboard. For more information, see Monitoring DB load with Performance Insights on Amazon RDS. When an increase in resource consumption is generated from the application side, use the Support SQL ID from your Performance Insights dashboard and match it to the corresponding query. It's a best practice to use this information to tune the performance of the query and optimize your workload using the guidance of your DBA:Open the Amazon RDS console.In the navigation pane, choose Performance insights.On the Performance Insights page, select your DB instance. 
You can view the Performance Insights dashboard for this DB instance.Select the time frame when the issue occurred.Choose the Top SQL tab.Choose the settings icon, and then turn on Support ID.Choose Save.RDS database logsTo troubleshoot the cause of the outage for your Amazon RDS DB instance, you can view, download, or watch database log files using the Amazon RDS console or Amazon RDS API operations. You can also query the database log files that are loaded into database tables. For more information, see Monitoring Amazon RDS log files.Keep the following best practices in mind when dealing with RDS instance outages:Enable Multi-AZ deployment on your instance to reduce downtime during an outage. With a Multi-AZ deployment, RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone or two readable standbys. For more information, see Amazon RDS Multi-AZ.Adjust the DB instance maintenance window according to your preference. The DB instance is unavailable during this time only if the system changes, such as a change in DB instance class, are being applied and require an outage, and for only the minimum amount of time required to make the necessary changes. For more information, see Maintaining a DB instance. If you don’t want your instances to go through automatic minor version upgrades, you can turn off this option. For more information, see Automatically upgrading the minor engine version.Be sure that you have enough resources allocated to your database to run queries. With Amazon RDS, the amount of resources allocated depends on the instance type. Also, certain queries, such as stored procedures, might take an unlimited amount of memory. Therefore, if the instance restarts frequently due to lack of resources, consider scaling up your database instance class to keep up with the increasing demands of your applications.To avoid instance throttling, configure Amazon CloudWatch alarms on RDS key metrics that indicate the availability and health status of your RDS instances. For example, you can set a CloudWatch alarm on the FreeableMemory metric so that you receive a notification when available memory reaches 95%. It's a best practice to keep at least 5% of the instance memory free. For more information, see How can I filter Enhanced Monitoring CloudWatch logs to generate automated custom metrics for Amazon RDS?To be notified whenever there is a failover on your RDS instance, subscribe to Amazon RDS event notifications. For more information, see How do I create an Amazon RDS event subscription?To optimize database performance, make sure that your queries are properly tuned. Otherwise, you might experience performance issues and extended wait times.To troubleshoot any kind of load in terms of CPU, memory or any other resource crunch. see How can I troubleshoot high CPU utilization for Amazon RDS or Amazon Aurora PostgreSQL?Related informationBest practices for Amazon RDSWhat factors affect my downtime or database performance in Amazon RDS?Why did my Amazon RDS DB instance fail over?How do I minimize downtime during required Amazon RDS maintenance?How do I check running queries and diagnose resource consumption issues for my Amazon RDS or Amazon Aurora PostgreSQL-Compatible Edition DB instance?Follow" | https://repost.aws/knowledge-center/rds-multi-az-failover-restart |
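The same 24-hour event history discussed in the Amazon RDS events section above can also be pulled with the AWS CLI; the DB instance identifier is a placeholder.
# List events for a DB instance from the last 24 hours (1440 minutes)
aws rds describe-events --source-identifier mydbinstance --source-type db-instance --duration 1440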
How do I calculate the time for a contact in the queue in Amazon Connect? | I want to calculate the time for a contact being spent in the queue in Amazon Connect. | "I want to calculate the time for a contact being spent in the queue in Amazon Connect.Short descriptionYou can calculate the time for a contact in the queue in Amazon Connect for active and completed contacts.To calculate time spent in the queue for active contacts, use the following methods:Track Amazon Connect metrics sent to CloudWatch for QueueSize and LongestQueueWaitTimeUse the GetCurrentMetricData API to track CONTACTS_IN_QUEUE and OLDEST_CONTACT_AGEUse Amazon Connect contact events for tracking individual contactsTo calculate time spent in the queue for completed contacts, use the following methods:Track maximum queued time using historical metricsTrack duration using the QueueInfo data in contact records for individual contactsResolutionFor active contactsTrack QueueSize and LongestQueueWaitTime metricsOpen the Amazon CloudWatch console.In the navigation pane, choose Metrics, and then choose All metrics.On the Metrics tab, choose Connect, and then choose Queue metrics.Select the metrics QueueSize and LongestQueueWaitTime.Choose the Graphed metrics tab. Then, for Statistic, choose Maximum.Review both the QueueSize and LongestQueueWaitTime.The QueueSize is the number of contacts in the queue. The LongestQueueWaitTime shows the longest amount of time in seconds that a contact waited in a queue. For more information, see Monitoring your instance using CloudWatch.Tip: You can set a CloudWatch alarm on the LongestQueueWaitTime metric to get a notification if it reaches a certain threshold. For additional information, see Creating an alarm from a metric on a graph.Use the GetCurrentMetricData API to track CONTACTS_IN_QUEUE and OLDEST_CONTACT_AGEFirst, to find the QueueID and InstanceID for the API request parameters, do the following:Log in to your Amazon Connect instance using your access URL (https://alias.awsapps.com/connect/login -or- https://domain.my.connect.aws). You must use the administrator account or the emergency access Amazon Connect instance login.In the navigation menu, choose Routing, and then choose Queues.Choose the name of the queue that you want to review.In the Queue Details, choose the show additional queue information.Find the queue ARN shown as arn:aws:connect:region:account-id:instance/instance-id/queue/queue-id. Make note of the AWS Region, instance-id, and queue-id for the next steps.Then, to run the GetCurrentMetricData API, do the following:1. Navigate to AWS CloudShell.2. 
Run the following AWS Command Line Interface (AWS CLI) command:Note: Replace queue-id, instance-id, and region with your values.aws connect get-current-metric-data --filters Queues=<queue-id> --instance-id <instance-id> --current-metrics Name=CONTACTS_IN_QUEUE,Unit=COUNT Name=OLDEST_CONTACT_AGE,Unit=SECONDS --groupings QUEUE --region <region>Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.You receive an output similar to the following:{ "MetricResults": [ { "Dimensions": { "Queue": { "Id": "<queue-id>", "Arn": "<queue-arn>" } }, "Collections": [ { "Metric": { "Name": "CONTACTS_IN_QUEUE", "Unit": "COUNT" }, "Value": 0.0 }, { "Metric": { "Name": "OLDEST_CONTACT_AGE", "Unit": "SECONDS" }, "Value": 0.0 } ] } ], "DataSnapshotTime": "<The time at which the metrics were retrieved and cached for pagination.>"}Use contact events for tracking individual contacts1. Open the Amazon EventBridge console.2. In the navigation pane, choose Rules.3. Choose Create rule.4. For Rule type, choose Rule with an event pattern.5. Choose Next.6. In the Creation Method, choose Use pattern form.7. In the Event pattern, select Event source as AWS Services, AWS Service as Amazon Connect, and Event Type as Amazon Connect Contact Event.8. Under Target1, choose Target type as AWS Service.9. Under Select a target, choose Lambda function. For the function, do the following:Create a Lambda function with the console, using runtime Python 3.8.For the Lambda function code, use the following:import jsondef lambda_handler(event, context): # TODO implement print(event) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') }Note: The Lambda function prints all events and is intended for testing. The process to calculate the time spent by a specific contact in a queue must be set up manually.10. Choose Skip to Review and create, and then choose Create rule.11. Access the Amazon CloudWatch logs for AWS Lambda to see a near real-time stream of contacts, such as, voice calls, chat, and task events. For example, you can see if a call is queued in your Amazon Connect contact center.Note: Available contact events are INITIATED, CONNECTED_TO_SYSTEM, QUEUED, CONNECTED_TO_AGENT, and DISCONNECTED. Events are released on a best effort basis.12. To determine the time spent by a specific contact in the queue, first locate the following information:The QUEUED event timestamp for a specific contact ID.The CONNECTED_TO_AGENT event timestamp for the same contact ID.13. To calculate the time spent by the specific contact in the queue, subtract the QUEUED timestamp from the CONNECTED_TO_AGENT timestamp.For completed contactsTrack queued time using historical metricsTo view the historical metric reports, do the following:Log in to your Amazon Connect instance using your access URL (https://alias.awsapps.com/connect/login -or- https://domain.my.connect.aws).Important: You must log in as a user that has sufficient permissions required to view historical metrics reports.In the navigation menu, choose Analytics and optimization, Historical metrics.Choose the Queues report type.Choose the gear icon.On the Metrics tab, choose Maximum queued time.On the Interval & Time range tab, set the Interval, Time Zone, and Time range.When you're finished customizing your report, choose Apply. 
The Maximum queued time shows the longest time that a contact spent waiting in the queue for the selected interval and time range.(Optional) To save your report for future use, choose Save, provide a name for the report, and then choose Save.Tip: You can schedule a historical metrics report for future use.You can also use the GetMetricData API to track QUEUED_TIME. The GetMetricData API metrics are available for only a 24 hour duration.Track duration in the QueueInfo using the contact search for individual contactsTo use the contact search, do the following:View a contact record in the UI to open the contact trace record (CTR) that you want to view.If the contact was queued, then the Queue section populates and lists the duration of time the contact spent in the queue. Note: Data retention for CTR is 24 months upon contact initiation.To retain contact data for longer than 24 months, stream the CTRs using the following method:Create an Amazon Kinesis Data Firehose delivery stream or an Amazon Kinesis data stream. Then, activate data streaming in your instance.Note: For an alternate method, see Analyze Amazon Connect contact trace record with Amazon Athena and Amazon QuickSight.Follow" | https://repost.aws/knowledge-center/connect-calculate-contact-time-in-queue |
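As a hedged sketch of the GetMetricData call mentioned above for QUEUED_TIME, the instance ID, queue ID, Region, and time window below are placeholders; keep in mind that the API only covers the trailing 24 hours.
# Retrieve the maximum queued time, in seconds, for a queue
aws connect get-metric-data \
  --instance-id <instance-id> \
  --start-time 2023-01-01T00:00:00Z \
  --end-time 2023-01-01T23:55:00Z \
  --filters Queues=<queue-id> \
  --historical-metrics Name=QUEUED_TIME,Statistic=MAX,Unit=SECONDS \
  --groupings QUEUE \
  --region <region>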
Why aren't messages that I publish to my Amazon SNS topic getting delivered to my subscribed Amazon SQS queue that has server-side encryption activated? | "When I publish messages to my Amazon Simple Notification Service (Amazon SNS) topic, they're not delivered to my Amazon Simple Queue Service (Amazon SQS) queue. How do I fix this issue if my Amazon SNS topic or Amazon SQS queue—or both—have server-side encryption (SSE) activated?" | "When I publish messages to my Amazon Simple Notification Service (Amazon SNS) topic, they're not delivered to my Amazon Simple Queue Service (Amazon SQS) queue. How do I fix this issue if my Amazon SNS topic or Amazon SQS queue—or both—have server-side encryption (SSE) activated?Short descriptionYour Amazon SQS queue must use an AWS KMS key (KMS key) that is customer managed. This KMS key must include a custom key policy that gives Amazon SNS sufficient key usage permissions.Note: The required permissions aren't included in the default key policy of the AWS managed KMS key for Amazon SQS, and you can't modify this policy.If your topic has SSE activated, you must also do the following:Configure AWS Key Management Service (AWS KMS) permissions that allow your publisher to publish messages to your encrypted topic.Resolution1. Create a new customer managed KMS key with a key policy that has the required permissions for Amazon SNS.2. Configure SSE for your Amazon SQS queue using the custom KMS key you just created.3. (If your Amazon SNS topic has SSE activated) Configure AWS KMS permissions that allow your publisher to publish messages to your encrypted topic.For more information, see Activating server-side encryption (SSE) for an Amazon SNS topic with an encrypted Amazon SQS queue subscribed.Note: To troubleshoot other message delivery issues, see Amazon SNS message delivery status.Related informationEncryption at rest for Amazon SQSEncryption at rest for Amazon SNS dataConfiguring server-side encryption (SSE) for an SNS topicUsing key policies in AWS KMSEncrypting messages published to Amazon SNS with AWS KMSFollow" | https://repost.aws/knowledge-center/sns-topic-sqs-queue-sse-cmk-policy |
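To verify the setup described above, it can help to check which KMS key the queue uses and then review that key's policy; the queue URL and key alias below are placeholders.
# See which KMS key the subscribed queue is configured with
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/my-queue --attribute-names KmsMasterKeyId
# Review the key policy of that customer managed key and confirm it allows the
# sns.amazonaws.com service principal to call kms:GenerateDataKey and kms:Decrypt
aws kms get-key-policy --key-id alias/my-sqs-key --policy-name default --output text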
Why aren't my Amazon S3 objects replicating when I set up replication between my buckets? | "I set up cross-Region replication (CRR) or same-Region replication (SRR) between my Amazon Simple Storage Service (Amazon S3) buckets. However, objects aren't replicating to the destination bucket." | "I set up cross-Region replication (CRR) or same-Region replication (SRR) between my Amazon Simple Storage Service (Amazon S3) buckets. However, objects aren't replicating to the destination bucket.ResolutionTo troubleshoot S3 objects that aren't replicating to the destination bucket, check the different types of permissions for your bucket. Also, check the public access settings and bucket ownership settings.Tip: Upload an object to the source bucket to test the replication after each configuration change. It's a best practice to make one configuration change at a time to identify any replication setup issues.After you resolve the issues causing replication to fail, there might be objects in the source bucket that weren't replicated. By default, S3 replication doesn't replicate existing objects or objects with a replication status of FAILED or REPLICA. To check the replication status for objects that didn't replicate, see Retrieve list of S3 objects that failed replication. Use S3 batch replication to replicate these objects.Minimum Amazon S3 permissionsConfirm that the AWS Identity Access Management (IAM) role that you used in the replication rule has the correct permissions. If the source and destination buckets are in different accounts, confirm that the destination account's bucket policy grants sufficient permissions to the replication role.The following example shows an IAM policy with the minimum permissions that are required for replication. Based on the replication rule options (example: encryption with SSE-KMS), you might need to grant additional permissions.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetReplicationConfiguration", "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::SourceBucket" ] }, { "Effect": "Allow", "Action": [ "s3:GetObjectVersionForReplication", "s3:GetObjectVersionAcl", "s3:GetObjectVersionTagging" ], "Resource": [ "arn:aws:s3:::SourceBucket/*" ] }, { "Effect": "Allow", "Action": [ "s3:ReplicateObject", "s3:ReplicateDelete", "s3:ReplicateTags" ], "Resource": "arn:aws:s3:::DestinationBucket/*" } ]}Note: Replace SourceBucket and DestinationBucket with the names of your S3 buckets.The IAM role must have a trust policy that allows Amazon S3 to assume the role to replicate objects.Example:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "s3.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}If the destination bucket is in another account, then the destination bucket policy must grant the following permissions:{ "Version": "2012-10-17", "Id": "PolicyForDestinationBucket", "Statement": [ { "Sid": "Permissions on objects", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role" }, "Action": [ "s3:ReplicateTags", "s3:ReplicateDelete", "s3:ReplicateObject" ], "Resource": "arn:aws:s3:::DestinationBucket/*" } ]}Note: Replace arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role with the ARN of your replication role.Additional Amazon S3 permissionsIf the replication rule is set to Change object ownership to the destination bucket owner, then the IAM role must have the s3:ObjectOwnerOverrideToBucketOwner permissions. 
This permission is placed on the S3 object resource.Example:{ "Effect":"Allow", "Action":[ "s3:ObjectOwnerOverrideToBucketOwner" ], "Resource":"arn:aws:s3:::DestinationBucket/*"}The destination account must also grant the s3:ObjectOwnerOverrideToBucketOwner permission through the bucket policy:{ "Sid":"1", "Effect":"Allow", "Principal":{"AWS":"arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role"}, "Action":["s3:ObjectOwnerOverrideToBucketOwner"], "Resource":"arn:aws:s3:::DestinationBucket/*"}Note: If the destination bucket's object ownership settings include Bucket owner enforced, then you don't need Change object ownership to the destination bucket owner in the replication rule. This change occurs by default.If the replication rule has delete marker replication activated, then the IAM role must have the s3:ReplicateDelete permissions. If the destination bucket is in another account, then the destination bucket owner must also grant this permission through the bucket policy. For example:{ "Effect":"Allow", "Action":["s3:ReplicateDelete"], "Resource":"arn:aws:s3:::DestinationBucket/*"}Note: Replace DestinationBucket with the name of your S3 bucket.After you add the additional S3 permissions to the IAM role, the same permissions must be granted through the bucket policy on the destination bucket:{ "Version": "2012-10-17", "Id": "PolicyForDestinationBucket", "Statement": [ { "Sid": "Stmt1644945277847", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role" }, "Action": [ "s3:ReplicateObject", "s3:ReplicateTags", "s3:ObjectOwnerOverrideToBucketOwner", "s3:ReplicateDelete" ], "Resource": "arn:aws:s3:::DestinationBucket/*" } ]}AWS KMS permissionsIf a bucket's source objects are encrypted with an AWS KMS key, then the replication rule must be configured to include AWS KMS-encrypted objects.To include objects encrypted with AWS KMS, do the following:1. Open the Amazon S3 console.2. Choose the S3 bucket that contains the source objects.3. On the Management tab, select a replication rule.5. Choose Edit.6. Under Encryption, choose Replicate objects encrypted with AWS KMS.7. Under AWS KMS key for encrypting destination objects, select an AWS KMS key. The default option is to use the AWS KMS key (aws/S3).Examples: Example policies – Using SSE-S3 and SSE-KMS with replicationImportant: If the destination bucket is in a different AWS account, then specify an AWS KMS customer managed key that's owned by the destination account. Don't use the default aws/S3 key. This encrypts the objects with the AWS managed key that's owned by the source account and can't be shared with another account. As a result, the destination account can't access the objects in the destination bucket.AWS KMS additional permissions for cross-account scenariosTo use an AWS KMS key that belongs to the destination account to encrypt the destination objects, the destination account must grant the replication role in the AWS KMS key policy:{ "Sid": "AllowS3ReplicationSourceRoleToUseTheKey", "Effect": "Allow", "Principal": {"AWS": "arn:aws:iam::SourceBucket-account-ID:role/service-role/source-account-IAM-role"}, "Action": ["kms:GenerateDataKey", "kms:Encrypt"], "Resource": "*"}Note: Suppose that you use an asterisk (*) for Resource in the AWS KMS key policy. In this case, the policy grants permission for the AWS KMS key only to the replication role. 
The policy doesn't allow the replication role to elevate its permissions.Additionally, the source account must add the following minimum permissions to the replication role's IAM policy:{ "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Resource": [ "SourceKmsKeyArn" ]},{ "Effect": "Allow", "Action": [ "kms:GenerateDataKey", "kms:Encrypt" ], "Resource": [ "DestinationKmsKeyArn" ]}By default, the AWS KMS key policy grants the root user full permissions to the key. You can delegate these permissions to other users in the same account. You can use an IAM policy to grant the replication role permissions to the source KMS key. This is sufficient unless there are deny statements in the source KMS key policy.Explicit deny and conditional allow statementsIf your objects still aren't replicating after you've validated the permissions, then check for any explicit deny statements:Deny statements in the destination bucket policy or AWS KMS key policies that restrict access to the following might cause replication to fail.Specific CIDR rangesVirtual Private Cloud (VPC) endpointsS3 access pointsDeny statements or permissions boundaries that are attached to the IAM role might cause replication to fail.Deny statements in AWS Organizations service control policies (SCPs) that are attached to either the source or destination accounts might cause replication to fail.Tip: Before removing any explicit deny statements, confirm the reason for using the deny and determine whether the statement has an impact on data security.Amazon S3 bucket keys and replication considerationsIf either the source or destination KMS keys grant permissions based on the encryption context, then confirm that the S3 Bucket Keys are turned on for the buckets. If the buckets have Bucket Keys turned on, then the encryption context must be for the bucket level resource:"kms:EncryptionContext:aws:s3:arn": [ "arn:aws:s3:::SOURCE_BUCKET_NAME" ]"kms:EncryptionContext:aws:s3:arn": [ "arn:aws:s3:::DESTINATION_BUCKET_NAME" ]Note: Replace SOURCE_BUCKET_NAME and DESTINATION_BUCKET_NAME with the names of your source and destination buckets. If Bucket Keys aren't turned on for the source or destination buckets, then the encryption context must be the object level resource:"kms:EncryptionContext:aws:s3:arn": [ "arn:aws:s3:::SOURCE_BUCKET_NAME/*" ]"kms:EncryptionContext:aws:s3:arn": [ "arn:aws:s3:::DESTINATION_BUCKET_NAME/*" ]Note: Replace SOURCE_BUCKET_NAME and DESTINATION_BUCKET_NAME with the names of your source and destination buckets.Object ACLs and Block Public AccessCheck whether the source and destination buckets are using access control lists (ACLs). If the object includes an ACL that allows public access but the destination bucket is using block public access, then replication fails.Source object ownershipIf the objects in the source bucket were uploaded by another AWS account, then the source account might not have permission to those objects. Check the source bucket to see if ACLs are deactivated. If the source bucket has ACLs deactivated, then the source account is the owner of all objects in the bucket. If the source bucket doesn't have ACLs deactivated, then check to see if object ownership is set to Object owner preferred or Bucket owner preferred. 
If the bucket is set to Bucket owner preferred, then the source bucket objects must have the bucket-owner-full-control ACL for the bucket owner to become the object owner.The source account can take ownership of all objects in their bucket by deactivating ACLs. Most use cases don't require using ACLs to manage access. It's a best practice to use IAM and bucket policies to manage access to S3 resources. To deactivate ACLs on your S3 bucket, see Controlling ownership of objects and disabling ACLs for your bucket. Be sure to evaluate the current usage of ACLs on your bucket and objects. Your current bucket and IAM policies must grant sufficient permissions so that you can deactivate ACLs without impacting Amazon S3 access.Replication rule filterBe sure that you specified the replication rule filter correctly.If you specify a rule filter with a combination of a key prefix and object tags, S3 performs a logical AND operation to combine these filters. In other words, the rule applies to a subset of objects with a specific key prefix and specific tags.Related informationWalkthroughs: Examples for configuring replicationFollow" | https://repost.aws/knowledge-center/s3-troubleshoot-replication |
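To confirm whether an individual source object replicated, as discussed above, its replication status can be checked directly; the bucket and key names are placeholders.
# ReplicationStatus is PENDING, COMPLETED, or FAILED on the source object, and REPLICA on the destination copy
aws s3api head-object --bucket SourceBucket --key path/to/object --query ReplicationStatus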
Why can’t I push log data to CloudWatch Logs with the awslogs agent? | I'm unable to push log data to Amazon CloudWatch Logs using the CloudWatch Logs agent (awslogs). | "I'm unable to push log data to Amazon CloudWatch Logs using the CloudWatch Logs agent (awslogs).ResolutionBefore you begin, confirm that the awslogs agent can connect to the CloudWatch Logs API endpoint.Be sure that your configuration has the following attributes:Internet connectivityValid security group configurationsValid network access control lists (network ACLs)Important: This reference is for the earlier CloudWatch Logs agent that is no longer supported. If you use Instance Metadata Service Version 2 (IMDSv2), then you must use the new unified CloudWatch agent. Even if you aren't using IMDSv2, it's a best practice to use the newer unified CloudWatch agent instead of the logs agent.Fingerprinting issuesReview the header lines of the source log file. You set this file's path when configuring the data to be pushed to CloudWatch.If the first few lines are blank or contain non-event data that stays the same, there might be issues with the log-identifying hash.If the header lines are the same, then update the file_fingerprint_lines option in the agent configuration file. Be sure to specify the lines in each file that are used for generating the identifying hash.Check the awslogs log file for errorsReview the /var/log/awslogs.log log file. Be sure to note any error messages.Permissions errors include:NoCredentialsError: Unable to locate credentials – If you didn't add an AWS Identity and Access Management (IAM) role to the instance, create and attach an IAM role. If you already added an IAM role to the instance, then update the IAM user credentials in the /etc/awslogs/awscli.conf file.ClientError: An error occurred (AccessDeniedException) when calling the PutLogEvents operation: User: arn:aws:iam::012345678910: / is not authorized to perform: logs:PutLogEvents[...] – Configure the IAM role or user with the required permissions for CloudWatch Logs.Timestamp errors include:Fall back to previous event time: {'timestamp': 1492395793000, 'start_position': 17280L, 'end_position': 17389L}, previousEventTime: 1492395793000, reason: timestamp could not be parsed from message. – Confirm that the log events begin with a timestamp. Check if the datetime_format specified in /etc/awslogs/awslogs.conf matches the timestamp format of the log events. Change the datetime_format to match the timestamp format as needed.No file is found with given path ' ' – Update the log file path in the agent configuration file to the correct path.Other awslogs issuesIf logs stopped pushing after a log rotation, check the supported log rotation methods. For more information, see CloudWatch Logs agent FAQ.If logs are pushed briefly only after the awslogs agent is restarted, then check for duplicates in the [logstream] section of the agent configuration file. Each section must have a unique name.If the awslogs.log log file takes up too much disk space, then check the log file for errors, and then correct them. If the log file contains only informational messages, then specify a lower logging level for the logging_config_file option in the agent configuration file.Further troubleshootingFor further troubleshooting, note the instance-id (your instance's ID). 
Then, collect and review the following based on your configuration.Yum installations:yum version$ yum info awslogs$ yum info aws-cli-plugin-cloudwatch-logs/etc/awslogs/awslogs.conf file/etc/awslogs/awscli.conf fileOther relevant files in /etc/awslogs//var/log/awslogs.log fileScript-based installations:The awslogs version, obtained with the following command:$ /var/awslogs/bin/awslogs-version.sh/var/awslogs/etc/awslogs.conf file/var/awslogs/etc/awscli.conf fileOther relevant files in /var/awslogs/etc//var/log/awslogs.log/var/log/awslogs-agent-setup.logFor rotation-related issues, collect and review:A snippet of the source logsA list of the monitoring target directory's contents. Use the command ls -la with the directory path to obtain this:$ ls -la <Monitoring-Target-Directory-Path>Related informationGetting started with CloudWatch LogsFollow" | https://repost.aws/knowledge-center/push-log-data-cloudwatch-awslogs |
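A few shell checks along the lines described above, assuming an Amazon Linux instance running the earlier awslogs agent:
# Confirm the agent is running and restart it if needed
# (on Amazon Linux 2 the service name is awslogsd)
sudo service awslogs status
sudo service awslogs restart
# Scan the agent log for recent errors or failures
grep -iE "error|fail" /var/log/awslogs.log | tail -n 20
# Confirm the instance can reach the CloudWatch Logs endpoint for its Region
curl -sv https://logs.us-east-1.amazonaws.com 2>&1 | head -n 5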
How do I use pivoted data after an AWS Glue relationalize transformation? | I want to use the AWS Glue relationalize transform to flatten my data. Which fields can I use as partitions to store the pivoted data in Amazon Simple Storage Service (Amazon S3)? | "I want to use the AWS Glue relationalize transform to flatten my data. Which fields can I use as partitions to store the pivoted data in Amazon Simple Storage Service (Amazon S3)?Short descriptionThe relationalize transform makes it possible to use NoSQL data structures, such as arrays and structs, in relational databases. The relationalize transform returns a collection of DynamicFrames (a DynamicFrameCollection in Python and an array in Scala). All DynamicFrames returned by a relationalize transform can be accessed through their individual names in Python, and through array indexes in Scala.ResolutionRelationalize the dataThis tutorial uses the following schema:|-- family_name: string|-- name: string|-- gender: string|-- image: string|-- images: array| |-- element: struct| | |-- url: stringUse the following relationalize syntax for Python:# AWS Glue Data Catalog: database and table namesdb_name = "us-legislators"tempDir = "s3://awsexamplebucket/temp_dir/"# Create dynamic frames from the source tablespersons = glueContext.create_dynamic_frame.from_catalog(database=db_name, table_name=tbl_persons)# Relationalize transformationdfc = persons.relationalize("root", tempDir)dfc.select('root_images').printSchema()dfc.select('root_images').show()Use the following relationalize syntax for Scala:// AWS Glue Data Catalog: database and table namesval dbName = "us-legislators"val tblPersons = "persons_json"// Output Amazon S3 temp directoryval tempDir = "s3://awsexamplebucket/temp_dir"val persons: DynamicFrame = glueContext.getCatalogSource(database = dbName, tableName = tblPersons).getDynamicFrame()val personRelationalize = persons.relationalize(rootTableName = "root", stagingPath = tempDir)personRelationalize(2).printSchema()personRelationalize(2).show()Interpret the pivoted dataThis relationalize transform produces two schemas: root and root_images.root:|-- family_name: string|-- name: string|-- gender: string|-- image: string|-- images: longroot_images:|-- id: long|-- index: int|-- images.val.url: stringid: order of the array element (1, 2, or 3)index: index position for each element in an arrayimages.val.url: value for images.val.url in root_imagesThese are the only fields that can be used as partition fields to store this pivoted data in Amazon S3. Specifying root table fields, such as name, doesn't work because those fields don't exist in root_images.Join the relationalized data to get the normalized dataThe id attribute in root_images is the order the arrays (1, 2, or 3) in the dataset. The images attribute in root holds the value of the array index. This means that you must use images and id to join root and root_images. 
You can run dynamicFrame.show() to verify the order of the arrays and the value of the array index.To join root and root_images:Python:joined_root_root_images = Join.apply(dfc.select('root'), dfc.select('root_images'), 'images', 'id')Scala:val joined_root_root_images = personRelationalize(0).join(keys1 = Seq("images"), keys2 = Seq("id"), frame2 = personRelationalize(1))Store the pivoted dataTo store the pivoted data in Amazon S3 with partitions:Python:datasink4 = glueContext.write_dynamic_frame.from_options(frame = dfc.select('root_images'), connection_type = "s3", connection_options = {"path": outputHistoryDir,"partitionKeys":["id"]}, format = "csv",transformation_ctx = "datasink4")Scala:Note: In the following example, personRelationalize(2) is the root_images pivoted data table.glueContext.getSinkWithFormat(connectionType = "s3", options = JsonOptions(Map("path" -> paths, "partitionKeys" -> List("id"))), format = "csv", transformationContext = "").writeDynamicFrame(personRelationalize(2))To store the pivoted data in Amazon S3 without partitions:Python:datasink5 = glueContext.write_dynamic_frame.from_options(frame = dfc.select('root_images'), connection_type = "s3", connection_options = {"path": outputHistoryDir}, format = "csv",transformation_ctx = "datasink5"Scala:Note: In the following example, personRelationalize(2) is the root_images pivoted data table.glueContext.getSinkWithFormat(connectionType = "s3", options = JsonOptions(Map("path" -> paths)), format = "csv", transformationContext = "").writeDynamicFrame(personRelationalize(2))After you write the data to Amazon S3, query the data in Amazon Athena or use a DynamicFrame to write the data to a relational database, such as Amazon Redshift.Related informationSimplify querying nested JSON with the AWS Glue relationalize transformCode example: Joining and relationalizing DataFollow" | https://repost.aws/knowledge-center/glue-pivoted-data-relationalize |
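If the pivoted output written to Amazon S3 is queried with Athena as suggested above, a minimal CLI sketch could look like the following; the database name, table name, and results location are assumptions, not values from this article.
# Query the root_images output (assumes a table was already created over the CSV files)
aws athena start-query-execution \
  --query-string 'SELECT id, "index", "images.val.url" FROM root_images LIMIT 10' \
  --query-execution-context Database=us_legislators_flat \
  --result-configuration OutputLocation=s3://awsexamplebucket/athena-results/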
How do I ingest and visualize the AWS Cost and Usage Report (CUR) into Amazon QuickSight? | I want to ingest the AWS Cost and Usage Report (CUR) from Amazon Simple Storage Service (Amazon S3) into Amazon QuickSight. How do I import and visualize my monthly CUR results? | "I want to ingest the AWS Cost and Usage Report (CUR) from Amazon Simple Storage Service (Amazon S3) into Amazon QuickSight. How do I import and visualize my monthly CUR results?ResolutionYou can ingest and visualize your AWS Cost and Usage Report (CUR) by doing the following:1. Create a Cost and Usage Report.2. Authorize QuickSight to access your Amazon S3 bucket.3. Open the QuickSight console.4. Choose Manage Data.5. Choose New Data Set.6. Choose S3. This opens the Amazon S3 data source dialog box.7. In the Amazon S3 data source dialog box, enter a data source name.Important: Make sure that you've set the proper permissions for QuickSight to access the S3 bucket. Otherwise, you can't access any data in QuickSight.8. In the Upload a Manifest field, choose Upload, or enter the URL of your AWS Cost and Usage Report manifest file like this:s3://awscostusagereport-quicksight/report_path_example/QuickSight/AWS_CUR_QS_ReportName-20200601-20200701-QuickSightManifest.jsonReplace the following variables to indicate the S3 folder and JSON file that you created in Step 1:awscostusagereport-quicksight: Your S3 bucketreport_path_example: The CUR Report path prefixAWS_CUR_QS_ReportName: Your CUR Report name20200601-20200701: The date range of your CUR Report9. Choose Connect. This creates and opens the new dataset.10. After you create the new dataset, choose Visualize to display all the data fields in your AWS Cost and Usage Report. For more information about the CUR data field definitions, see the Data dictionary.Additional troubleshootingIf you can't find the QuickSight folder under your S3 bucket, try the following:Allow 24 hours before attempting to regenerate the CUR report and manifest file.Verify that you selected QuickSight for the Enable report data integration for field.If you get a "We can't parse the manifest file as a valid JSON" error message, try the following:Check whether you've correctly authorized QuickSight to access your S3 bucket.Verify that the URL of the manifest file that was copied over is located in the QuickSight folder. Each service folder must have its own manifest file.If you have multiple AWS accounts under the same organization, make sure that you have proper access rights from the management account. This report is available only to the management account owner and any users who are granted access by the management account. For more information, see Consolidated billing for AWS Organizations.Related informationWhat are AWS Cost and Usage Reports?I can't connect to Amazon S3Follow" | https://repost.aws/knowledge-center/quicksight-cost-usage-report |
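Before connecting, it can help to confirm that the QuickSight manifest exists at the expected path; the bucket and prefix below are the same placeholders used in the steps above.
# List the QuickSight manifest files generated for the report
aws s3 ls s3://awscostusagereport-quicksight/report_path_example/QuickSight/ --recursive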
Why am I not able to see the Amazon CloudWatch metrics for my AWS Glue ETL job even after I enabled job metrics? | "I've enabled the option to create job metrics for my AWS Glue extract, transform, and load (ETL) job. However, I can't see the job metrics in Amazon CloudWatch." | "I've enabled the option to create job metrics for my AWS Glue extract, transform, and load (ETL) job. However, I can't see the job metrics in Amazon CloudWatch.Short descriptionAWS Glue sends metrics to CloudWatch every 30 seconds, and the CloudWatch console dashboard is configured to display these metrics every minute. The AWS Glue metrics represent delta values from previously reported values. The metrics dashboard aggregates the 30-second values to obtain a value for the last minute. The job metrics for your job are enabled with the initialization of a GlueContext in the job script. The metrics are updated only at the end of an Apache Spark task. The job metrics represent the aggregate values across all completed Spark tasks.ResolutionIncrease the run time of your AWS Glue job: The CloudWatch metrics are reported every 30 seconds. Therefore, if the run time of your job is less than 30 seconds, then the job metrics aren't sent to CloudWatch. AWS Glue uses the metrics from Spark, and Spark uses the DropWizard metrics library for publishing metrics. To get the AWS Glue metrics, your job must run for at least 30 seconds. Updating your job to process more data can help increase the run time of your job. However, you can use a temporary workaround to see the job metrics. You can increase the run time of your AWS Glue job by including the function time.sleep() in your job. You can include time.sleep() in your job at the start or end of the code based on your use case.**Important:**Using the time.sleep() function is not a coding best practice.For Python:import timetime.sleep(30)For Scala:Thread.sleep(30000)Be sure that the job completed the Spark tasks: Job metrics are reported after the Spark tasks are complete. Therefore, check and confirm that the Spark tasks for your job are completed, and the job did not fail.Be sure that GlueContext is initialized in the job script: The GlueContext class in your job script enables writing metrics into CloudWatch. If you're using a custom script that uses only a DataFrame and not a DynamicFrame, the GlueContext class might not be initialized. This might result in the metrics not getting written to CloudWatch. If you're using a custom script, be sure to update your job to initialize the GlueContext class.Be sure that the AWS Glue IAM role has the required permission: Check and confirm that the IAM role attached to the ETL job has the cloudwatch:PutMetricData permission to create metrics in CloudWatch. If you're using a custom role, then be sure that the role has the permission to write the job metrics into CloudWatch.Note: It's a best practice to use the AWS managed policy AWSGlueServiceRole to manage permissions.Related informationMonitoring AWS Glue using Amazon CloudWatch metricsJob monitoring and debuggingFollow" | https://repost.aws/knowledge-center/glue-job-metrics-cloudwatch |
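To confirm whether AWS Glue published anything at all for a given job, you can list the metrics in the Glue CloudWatch namespace. The following Python (boto3) sketch uses a hypothetical job name and assumes AWS Glue publishes under the "Glue" namespace with a JobName dimension.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical job name -- replace with your ETL job's name.
job_name = "my-etl-job"

# An empty result usually means the job ran for under 30 seconds, GlueContext
# was never initialized, or the job role lacks cloudwatch:PutMetricData.
paginator = cloudwatch.get_paginator("list_metrics")
for page in paginator.paginate(
    Namespace="Glue",
    Dimensions=[{"Name": "JobName", "Value": job_name}],
):
    for metric in page["Metrics"]:
        print(metric["MetricName"], metric["Dimensions"])
```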
Why did my Amazon EMR cluster terminate with an "application provisioning failed" error? | My Amazon EMR cluster terminated with an "application provisioning failed" error. What does this error mean and how can I fix it? | "My Amazon EMR cluster terminated with an "application provisioning failed" error. What does this error mean and how can I fix it?ResolutionYou can see an "application provisioning failed" error when Amazon EMR can't install, configure, or start specified software when it's launching an EMR cluster. The following sections show you how to find and review provisioning logs. They also show you different types of errors and steps you can take to resolve those errors.Review Amazon EMR provisioning logs stored in Amazon S3Amazon EMR provisioning logs are stored in an Amazon Simple Storage Service (Amazon S3) bucket specified at cluster launch. The storage location of the logs uses the following Amazon S3 URI syntax:s3://example-log-location/example-cluster-ID/node/example-primary-node-ID/provision-node/apps-phase/0/example-UUID/puppet.log.gzNote: Replace example-log-location, example-cluster-ID, example-primary-node-ID, and example-UUID with your system's naming.Open the Amazon EMR console. In the navigation pane, choose Clusters. Then, choose the failed EMR cluster to see the cluster details.In the Summary section, choose "Terminated with errors" and note the primary node ID included in the error message.In the Cluster logs section, choose the Amazon S3 location URL to be redirected to the cluster logs in the Amazon S3 console.Navigate to your UUID folder by following this path: node/example-primary-node-ID/provision-node/apps-phase/0/example-UUID/.Note: Replace example-primary-node-ID and example-UUID with your system's naming.In the resulting list, select puppet.log.gz and choose Open to see the provisioning on a new browser tab.Identify the reasons for failures in provisioning logsUnsupported configuration parameters can cause errors. Errors can also be caused by wrong hostnames, incorrect passwords, or general operating system issues. Search logs for related keywords, including "error" or "fail."The following is a list of common error types:Issues connecting to an external metastore with an Amazon Relational Database Service (Amazon RDS) instance.Issues connecting to an external key distribution center (KDC).Issues when starting services, such as YARN ResourceManager and Hadoop NameNode.Issues when downloading or installing applications.S3 logs aren't available.Issues connecting to an external metastore with an Amazon RDS instanceSome Amazon EMR applications, such as Hive, Hue, or Oozie, can be configured to store data in an external database, such as Amazon RDS. When there's an issue with a connection, a message appears.The following is an example error message from Hive:2022-11-26 02:59:36 +0000 /Stage[main]/Hadoop_hive::Init_metastore_schema/Exec[init hive-metastore schema]/returns (notice): org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.2022-11-26 02:59:36 +0000 /Stage[main]/Hadoop_hive::Init_metastore_schema/Exec[init hive-metastore schema]/returns (notice): Underlying cause: java.sql.SQLNonTransientConnectionException : Could not connect to address=(host=hostname)(port=3306)(type=master) : Socket fail to connect to host:hostname, port:3306. 
hostname2022-11-26 02:59:36 +0000 /Stage[main]/Hadoop_hive::Init_metastore_schema/Exec[init hive-metastore schema]/returns (notice): SQL Error code: -1To resolve this type of error:Verify that the RDS instance hostname, user, password, and database are correct.Verify that the RDS instance security group inbound rules allow connections from the Amazon EMR primary node security group.Issues connecting to an external KDCAmazon EMR lets you configure an external KDC to add an additional layer of security. You can also create a trust relationship with an Active Directory server. When there's an issue with contacting the KDC or joining a domain, a message appears.The following is an example error message from Puppet:2022-11-26 03:02:01 +0000 Puppet (err): 'echo "${AD_DOMAIN_JOIN_PASSWORD}" | realm join -v -U "${AD_DOMAIN_JOIN_USER}"@"${CROSS_REALM_TRUST_REALM}" "${CROSS_REALM_TRUST_DOMAIN}"' returned 1 instead of one of [0]2022-11-26 03:02:01 +0000 /Stage[main]/Kerberos::Ad_joiner/Exec[realm_join]/returns (err): change from 'notrun' to ['0'] failed: 'echo "${AD_DOMAIN_JOIN_PASSWORD}" | realm join -v -U "${AD_DOMAIN_JOIN_USER}"@"${CROSS_REALM_TRUST_REALM}" "${CROSS_REALM_TRUST_DOMAIN}"' returned 1 instead of one of [0]To resolve this type of error:Verify that the Kerberos realm is spelled correctly.Verify that the KDC administrative password is spelled correctly.Verify that the Active Directory join user and password are spelled correctly.Verify that the Active Directory join user exists in Active Directory and has the correct permissions.Verify that KDC and Active Directory servers are on Amazon EC2. Then, verify that the KDC and Active Directory security group inbound rules allow connections from the Amazon EMR primary node security group.Verify that KDC and Active Directory aren't on Amazon EC2. Then, verify that KDC and Active Directory allow connections from the EMR cluster virtual private cloud (VPC) and subnet.Issues when starting services, such as YARN ResourceManager, Hadoop NameNode, or Spark History ServerAmazon EMR allows the custom configuration of all applications at EMR cluster launch. But, sometimes these configurations prevent services from starting. When there's an issue preventing a service from starting a message appears.The following is an example error message from Spark History Server:2022-11-26 03:34:13 +0000 Puppet (err): Systemd start for spark-history-server failed!journalctl log for spark-history-server:-- Logs begin at Sat 2022-11-26 03:27:57 UTC, end at Sat 2022-11-26 03:34:13 UTC. --Nov 26 03:34:10 ip-192-168-1-32 systemd[1]: Starting Spark history-server...Nov 26 03:34:10 ip-192-168-1-32 spark-history-server[1076]: Starting Spark history-server (spark-history-server):[OK]Nov 26 03:34:10 ip-192-168-1-32 su[1112]: (to spark) root on noneNov 26 03:34:13 ip-192-168-1-32 systemd[1]: spark-history-server.service: control process exited, code=exited status=1Nov 26 03:34:13 ip-192-168-1-32 systemd[1]: Failed to start Spark history-server.Nov 26 03:34:13 ip-192-168-1-32 systemd[1]: Unit spark-history-server.service entered failed state.Nov 26 03:34:13 ip-192-168-1-32 systemd[1]: spark-history-server.service failed.2022-11-26 03:34:13 +0000 /Stage[main]/Spark::History_server/Service[spark-history-server]/ensure (err): change from 'stopped' to 'running' failed: Systemd start for spark-history-server failed!journalctl log for spark-history-server:To resolve this type of error:Verify the service that failed to start. 
Check if the provided configurations are spelled correctly.Navigate the following path to see the S3 log to investigate the reason for the failure: s3://example-log-location/example-cluster-ID/node/example-primary-node-ID/applications/example-failed-application/example-failed-service.gz.Note: Replace example-log-location, example-cluster-ID, example-primary-node-ID, example-failed-application, and example-failed-service with your system's naming.Issues when downloading or installing applicationsAmazon EMR can install many applications. But, sometimes there's an issue when one application can't be downloaded or installed. This can cause the EMR cluster to fail. When this failure happens, the provisioning logs don't complete. You must review the stderr.gz log instead to find similar messages caused by failed yum installations.The following is an example error message from stderr.gz:stderr.gzError Summary-------------Disk Requirements: At least 2176MB more space needed on the / filesystem. 2022-11-26 03:18:44,662 ERROR Program: Encountered a problem while provisioningjava.lang.RuntimeException: Amazon-linux-extras topics enabling or yum packages installation failed.To resolve this type of error, increase the root Amazon Elastic Block Store (Amazon EBS) volume during the EMR cluster launch.S3 logs aren't availableAmazon EMR fails to provision applications, and there aren't any logs generated in Amazon S3. In this scenario, it's likely that a network error caused S3 logging to fail.To resolve this type of error:Verify that the Logging option is turned on during the EMR cluster launch. For more information, see Configure cluster logging and debugging.When using a custom AMI, verify that there are no firewall rules interfering with the required Amazon EMR network settings. For more information, see Working with Amazon EMR-managed security groups.When using a custom AMI, check to see if there are any failed primary nodes. Open the Amazon EMR console, and in the navigation pane, choose Hardware to determine if clusters couldn't launch any primary nodes.When using a custom AMI, verify that you're following best practices. For more information, see Using a custom AMI.Related informationEMR cluster failed to provisionFollow" | https://repost.aws/knowledge-center/emr-application-provisioning-error |
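If you'd rather scan the provisioning log programmatically than in the console, the following Python (boto3) sketch downloads puppet.log.gz and prints lines that contain "error" or "fail". The bucket and key are the placeholder path components described above; replace them with your own values.

```python
import gzip
import io
import boto3

# Placeholder log location -- substitute your log bucket, cluster ID,
# primary node ID, and UUID from the path described above.
bucket = "example-log-location"
key = "example-cluster-ID/node/example-primary-node-ID/provision-node/apps-phase/0/example-UUID/puppet.log.gz"

s3 = boto3.client("s3")
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# Decompress in memory and print only the lines that look like failures.
with gzip.open(io.BytesIO(body), mode="rt", errors="replace") as log:
    for line in log:
        lowered = line.lower()
        if "error" in lowered or "fail" in lowered:
            print(line.rstrip())
```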
How do I troubleshoot Application Load Balancer HTTP 403 forbidden errors? | My Application Load Balancer is returning HTTP 403 forbidden errors. How can I troubleshoot this? | "My Application Load Balancer is returning HTTP 403 forbidden errors. How can I troubleshoot this?ResolutionFollow these troubleshooting steps for your scenario.Important: Before you begin, make sure that you have access logging enabled for your Application Load Balancer. For instructions, see Enable access logging.An AWS WAF web access control list (web ACL) is configured to monitor requests to your Application Load Balancer and it blocked a request.The load balancer sends HTTP errors to access logs and increments the HTTPCode_ELB_4XX_Count metric similar to the following:elb_status_code = 403target_status_code = -actions_executed = wafThis means that the load balancer forwarded the request to AWS WAF to determine whether the request should be forwarded to the target. Then, AWS WAF determined that the request should be rejected. To diagnose the rule configuration, review the AWS WAF logs. For more information, see Managing logging for a web ACL.The Application Load Balancer might have a rule configured with a fixed-response action to provide an HTTP 403 response.Check the access logs for a fixed-response action similar to the following:elb_status_code = 403target_status_code = -actions_executed = fixed-responseThis log indicates that the rule configuration has a fixed-response action to provide an HTTP 403 error.The target responded with an HTTP 403 error and the Application Load Balancer is forwarding this response to the client.Check the access logs for 403 entries for values similar to the following:elb_status_code = 403target_status_code = 403If the target_status_code and elb_status_code values match, then the target application sent the HTTP 403 response. To determine why the target application generated the HTTP 403 forbidden error, check with your application vendor. You can also use the X-Amzn-Trace-Id header to trace requests through the Application Load Balancer. For more information, see How do I trace an Application Load Balancer request using X-Amzn-Trace-Id?Related informationTroubleshoot your Application Load BalancersHTTP 403: ForbiddenFollow" | https://repost.aws/knowledge-center/alb-http-403-error |
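To find the 403 entries without reading the raw logs by hand, you can filter a downloaded access log file. The following Python sketch assumes the standard ALB access log layout, in which elb_status_code and target_status_code are the ninth and tenth space-separated fields; the file name is a placeholder.

```python
import gzip
import shlex

# Placeholder local copy of an ALB access log file (they are gzipped in S3).
log_file = "example-alb-access-log.gz"

with gzip.open(log_file, mode="rt", errors="replace") as log:
    for line in log:
        try:
            fields = shlex.split(line)  # honors the quoted request and user-agent fields
        except ValueError:
            continue
        if len(fields) < 10:
            continue
        elb_status, target_status = fields[8], fields[9]
        if elb_status == "403":
            # Printing both codes plus the raw line helps tell WAF blocks,
            # fixed-response rules, and 403s from the target apart.
            print(elb_status, target_status, line.rstrip())
```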
How do I configure an EC2 Windows instance to allow file downloads using Internet Explorer? | I need to download third-party software from the internet to my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance. The Internet Explorer security configuration is blocking my attempts. How can I enable downloads? | "I need to download third-party software from the internet to my Amazon Elastic Compute Cloud (Amazon EC2) Windows instance. The Internet Explorer security configuration is blocking my attempts. How can I enable downloads?Short descriptionAmazon Machine Images (AMIs) for Windows Server enable Internet Explorer Enhanced Security Configuration by default, because web browsing isn't a best practice on a server. Internet Explorer Enhanced Security Configuration disables file downloads using Internet Explorer.If you want to download and install tools from the internet, you can change the security configuration to enable downloads.Note: If you enable downloads on your EC2 Windows instance, be sure to download files only from trusted sources. After you download and install the software, it's a best practice to re-enable Internet Explorer Enhanced Security Configuration.ResolutionConnect to your EC2 Windows instance.Open the Windows Start menu, and then open Server Manager.Follow the instructions for the Windows Server version that your EC2 Windows instance is running: Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019: Choose Local Server from the left navigation pane. For IE Enhanced Security Configuration, choose On.Windows Server 2008 R2: Choose Server Manager from the navigation pane. In the Server Summary - Security Information section, choose Configure IE ESC.For Administrators, select Off.For Users, select Off.Choose OK.Close Server Manager.Note: The security configuration change takes effect when you open a new Internet Explorer session. If Internet Explorer is already open, open a new tab. Or, close the browser, and then re-open it.Related informationTutorials for Amazon EC2 instances running Windows ServerTutorial: Get started with Amazon EC2 Windows instancesFollow" | https://repost.aws/knowledge-center/ec2-windows-file-download-ie |
How do I remove the "clientTransferProhibited" status from my Route 53 domain? | The status of my Amazon Route 53 domain is "clientTransferProhibited". How do I remove this status? | "The status of my Amazon Route 53 domain is "clientTransferProhibited". How do I remove this status?Short descriptionThe domain registries for all generic top-level domains (TLDs) and several geographic TLDs provide the option to lock your domain. Locking a domain prevents someone from transferring the domain to another registrar without your permission. If you enable transfer lock for a domain, then the status is updated to "clientTransferProhibited". To remove the status, disable the transfer lock.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Open the Route 53 console.In the navigation pane, choose Registered domains.Select the name of the domain that you want to update.For Transfer Lock, choose Disable.Choose Save.Or, you can run the following command in the AWS CLI:aws route53domains disable-domain-transfer-lock \ --region us-east-1 \ --domain-name example.comNote: Be sure to replace the Region and domain name placeholders with your corresponding values.Related informationLocking a domain to prevent unauthorized transfer to another registrardisable-domain-transfer-lockFollow" | https://repost.aws/knowledge-center/route-53-fix-clienttransferprohibited |
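If you prefer the AWS SDK over the AWS CLI, the same call is available in Python (boto3). The following sketch assumes your credentials can manage the domain; example.com is a placeholder.

```python
import boto3

# The Route 53 Domains API is only available in the us-east-1 Region.
client = boto3.client("route53domains", region_name="us-east-1")

# Disable the transfer lock, which removes the clientTransferProhibited status.
response = client.disable_domain_transfer_lock(DomainName="example.com")
print("Operation ID:", response["OperationId"])
```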
How do I set up a Spark SQL JDBC connection on Amazon EMR? | I want to configure a Java Database Connectivity (JDBC) driver for Spark Thrift Server so that I can run SQL queries from a SQL client on my Amazon EMR cluster. | "I want to configure a Java Database Connectivity (JDBC) driver for Spark Thrift Server so that I can run SQL queries from a SQL client on my Amazon EMR cluster.Resolution1. Download and install SQuirrel SQL Client.2. Connect to the master node using SSH.3. On the master node, run the following command to start Spark Thrift Server:sudo /usr/lib/spark/sbin/start-thriftserver.sh4. Copy all .jar files from the /usr/lib/spark/jars directory on the master node to your local machine.5. Open SQuirrel SQL Client and create a new driver:For Name, enter Spark JDBC Driver.For Example URL, enter jdbc:hive2://localhost:10001.6. On the Extra Class Path tab, choose Add.7. In the dialog box, navigate to the directory where you copied the .jar files in step 4, and then select all the files.8. In the Class Name field, enter org.apache.hive.jdbc.HiveDriver, and then choose OK.9. On your local machine, set up an SSH tunnel using local port forwarding:ssh -o ServerAliveInterval=10 -i path-to-key-file -N -L 10001:localhost:10001 hadoop@master-public-dns-name10. To connect to the Spark Thrift Server, create a new alias in SQuirrel SQL Client:For Name, enter Spark JDBC.For Driver, enter Spark JDBC Driver.For URL, enter jdbc:hive2://localhost:10001.For Username, enter hadoop.11. Run queries from SQuirrel SQL Client.Follow" | https://repost.aws/knowledge-center/jdbc-connection-emr |
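If you want to test the Thrift endpoint from code instead of SQuirreL SQL Client, the following Python sketch is one option. It assumes the PyHive package is installed locally (an assumption, not part of the original steps) and that the SSH tunnel from step 9 is already forwarding port 10001.

```python
from pyhive import hive  # assumption: pip install "pyhive[hive]" on your local machine

# The SSH tunnel from step 9 forwards local port 10001 to the
# Spark Thrift Server running on the master node.
connection = hive.connect(host="localhost", port=10001, username="hadoop")
cursor = connection.cursor()

# Run a simple query to confirm the connection works.
cursor.execute("SHOW DATABASES")
for row in cursor.fetchall():
    print(row)
```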
I'm trying to upload a large file to Amazon S3 with encryption using an AWS KMS key. Why is the upload failing? | "I'm trying to upload a large file to my Amazon Simple Storage Service (Amazon S3) bucket. In my upload request, I'm including encryption information using an AWS Key Management Service (AWS KMS) key. However, I get an Access Denied error. Meanwhile, when I upload a smaller file with encryption information, the upload succeeds. How can I fix this?" | "I'm trying to upload a large file to my Amazon Simple Storage Service (Amazon S3) bucket. In my upload request, I'm including encryption information using an AWS Key Management Service (AWS KMS) key. However, I get an Access Denied error. Meanwhile, when I upload a smaller file with encryption information, the upload succeeds. How can I fix this?Short descriptionConfirm that you have the permission to perform kms:Decrypt actions on the AWS KMS key that you're using to encrypt the object.The AWS CLI (aws s3 commands), AWS SDKs, and many third-party programs automatically perform a multipart upload when the file is large. To perform a multipart upload with encryption using an AWS KMS key, the requester must have kms:GenerateDataKey and kms:Decrypt permissions. The kms:GenerateDataKey permissions allow the requester to initiate the upload. With kms:Decrypt permissions, newly uploaded parts can be encrypted with the same key used for previous parts of the same object.Note: After all the parts are uploaded successfully, the uploaded parts must be assembled to complete the multipart upload operation. Because the uploaded parts are server-side encrypted using a KMS key, object parts must be decrypted before they can be assembled. For this reason, the requester must have kms:Decrypt permissions for multipart upload requests using server-side encryption with KMS CMKs (SSE-KMS).ResolutionIf your AWS Identity and Access Management (IAM) role and key are in the same account, then kms:Decrypt permissions must be specified in the key policy. If your IAM role belongs to a different account than the key, kms:Decrypt permissions must be specified in both the key and IAM policy.Key policyReview the AWS KMS key policy by using the AWS Management Console policy view.In the key policy, search for statements where the Amazon Resource Name (ARN) of your IAM user or role is listed as an AWS principal. The ARN is in the format: arn:aws:iam::111122223333:user/john.Then, check the list of actions allowed by the statements associated with your IAM user or role. The list of allowed actions must include kms:Decrypt, using an SSE-KMS, for multipart uploads to work.For example, this statement in a key policy allows the user John to perform the kms:Decrypt and kms:GenerateDataKey actions:{ "Sid": "Allow use of the key", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::111122223333:user/john" }, "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Resource": "*" },IAM permissionsTo review your IAM permissions, open the IAM console, and then open your IAM user or role.Review the list of permissions policies applied to your IAM user or role. 
Make sure that there's an applied policy that allows you to perform the kms:Decrypt action on the key used to encrypt the object.For example:{ "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": [ "kms:Decrypt", "kms:GenerateDataKey" ], "Resource": [ "arn:aws:kms:example-region-1:123456789098:key/111aa2bb-333c-4d44-5555-a111bb2c33dd" ] }}This example statement grants the IAM user access to perform kms:Decrypt and kms:GenerateDataKey on the key (arn:aws:kms:example-region-1:123456789098:key/111aa2bb-333c-4d44-5555-a111bb2c33dd).For instructions on how to update your IAM permissions, see Changing permissions for an IAM user.Related informationAWS Policy GeneratorFollow" | https://repost.aws/knowledge-center/s3-large-file-encryption-kms-key |
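To confirm that a multipart upload with SSE-KMS works end to end after you update the policies, you can force one from Python (boto3). In the following sketch, the bucket name, file name, and KMS key ARN are placeholders.

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Placeholder names -- replace with your bucket, local file, and KMS key ARN.
bucket = "amzn-s3-demo-bucket"
key_arn = "arn:aws:kms:example-region-1:123456789098:key/111aa2bb-333c-4d44-5555-a111bb2c33dd"

# Force a multipart upload for anything over 8 MB so that both
# kms:GenerateDataKey and kms:Decrypt permissions are exercised.
config = TransferConfig(multipart_threshold=8 * 1024 * 1024)

s3.upload_file(
    Filename="large-file.bin",
    Bucket=bucket,
    Key="large-file.bin",
    ExtraArgs={"ServerSideEncryption": "aws:kms", "SSEKMSKeyId": key_arn},
    Config=config,
)
print("Multipart upload with SSE-KMS succeeded")
```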
Why am I getting an Access Denied error for ListObjectsV2 when I run the sync command on my Amazon S3 bucket? | "I'm running the aws s3 sync command to copy objects to or from an Amazon Simple Storage Service (Amazon S3) bucket. However, I'm getting an Access Denied error when I call the ListObjectsV2 operation. How do I resolve this?" | "I'm running the aws s3 sync command to copy objects to or from an Amazon Simple Storage Service (Amazon S3) bucket. However, I'm getting an Access Denied error when I call the ListObjectsV2 operation. How do I resolve this?Short descriptionWhen you run the aws s3 sync command, Amazon S3 issues the following API calls: ListObjectsV2, CopyObject, GetObject, and PutObject.More specifically, the following happens:1. Amazon S3 lists the source and destination to check whether the object exists.2. Amazon S3 then performs the following API calls:CopyObject call for a bucket to bucket operationGetObject for a bucket to local operationPutObject for a local to bucket operationNote: This resolution assumes that the GetObject and PutObject calls are already granted to the AWS Identity Access Management (IAM) user or role. This resolution addresses how to resolve the Access Denied error caused by improper ListBucket permissions or using incorrect sync command syntax with Requester Pays.ResolutionConfiguring the IAM policyVerify that you have the permission for s3:ListBucket on the Amazon S3 buckets that you're copying objects to or from. You must have this permission to perform ListObjectsV2 actions.Note: s3:ListBucket is the name of the permission that allows a user to list the objects in a bucket. ListObjectsV2 is the name of the API call that lists the objects in a bucket.If your IAM user or role belong to another AWS account, then check whether your IAM and bucket policies permit the s3:ListBucket action. You must have permission to s3:ListBucket on both your IAM policy and bucket policy.If your user or role belongs to the bucket owner's account, then you don't need both the IAM and bucket policies to allow s3:ListBucket. You need only one of them to allow the action.Important: If either the IAM policy or bucket policy already allow the s3:ListBucket action, then check the other policy for statements that explicitly deny the action. An explicit deny statement overrides an allow statement.The following is an example IAM policy that grants access to s3:ListBucket:{ "Version": "2012-10-17", "Statement": [{ "Sid": "Stmt1546506260896", "Action": "s3:ListBucket", "Effect": "Allow", "Resource": "arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET" }]}The following is an example bucket policy that grants the user arn:aws:iam::123456789012:user/testuser access to s3:ListBucket:{ "Id": "Policy1546414473940", "Version": "2012-10-17", "Statement": [{ "Sid": "Stmt1546414471931", "Action": "s3:ListBucket", "Effect": "Allow", "Resource": "arn:aws:s3:::AWSDOC-EXAMPLE-BUCKET", "Principal": { "AWS": [ "arn:aws:iam::123456789012:user/testuser" ] } }]}Using the sync command with Requester PaysIf your bucket belongs to another AWS account and has Requester Pays enabled, verify that your bucket policy and IAM permissions both grant ListObjectsV2 permissions. If the ListObjectsV2 permissions are properly granted, then check your sync command syntax. When using the sync command, you must include the --request-payer requester option. 
Otherwise, you receive an Access Denied error.For example:aws s3 sync ./ s3://requester-pays-bucket/ --request-payer requesterRelated informationBucket owner granting cross-account bucket permissionsBucket policies and user policiesFollow" | https://repost.aws/knowledge-center/s3-access-denied-listobjects-sync |
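After you update the policies or the sync syntax, you can confirm the ListObjectsV2 permission in isolation with Python (boto3). In the following sketch, the bucket name is the placeholder from the example above; drop the RequestPayer argument if the bucket doesn't have Requester Pays enabled.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "requester-pays-bucket"  # placeholder bucket name from the example above

try:
    response = s3.list_objects_v2(Bucket=bucket, MaxKeys=5, RequestPayer="requester")
    print("ListObjectsV2 succeeded, sample keys:",
          [item["Key"] for item in response.get("Contents", [])])
except ClientError as error:
    # An AccessDenied error here points at the s3:ListBucket permission
    # (or a missing RequestPayer setting), not at GetObject or PutObject.
    print("ListObjectsV2 failed:", error.response["Error"]["Code"])
```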
Why is OpenSearch Dashboards in red status on my Amazon OpenSearch Service domain? | OpenSearch Dashboards keeps showing red status on my Amazon OpenSearch Service domain. Why is this happening and how do I troubleshoot this? | "OpenSearch Dashboards keeps showing red status on my Amazon OpenSearch Service domain. Why is this happening and how do I troubleshoot this?Short descriptionOpenSearch Dashboards shows a green status when all health checks pass on every node of the OpenSearch Service cluster. If a health check fails, then OpenSearch Dashboards enters red status. OpenSearch Dashboards also shows red status when OpenSearch Service is in red cluster status. The OpenSearch Dashboards status can turn red for the following reasons:Node failure caused by an issue with an Amazon Elastic Compute Cloud (Amazon EC2) instance or Amazon Elastic Block Store (Amazon EBS) volume. For more information about node crashes, see Why did my OpenSearch Service node crash?Insufficient memory for your nodes.Upgrading OpenSearch Service to a newer version.Incompatibility between OpenSearch Dashboards and OpenSearch Service versions.A single-node cluster is running with a heavy load and no dedicated leader nodes. The dedicated leader node could also be unreachable. For more information about how OpenSearch Service increases cluster stability, see Dedicated leader nodes.ResolutionUse one or more of the following methods to resolve the OpenSearch Dashboards red status for your OpenSearch Service domain.Note: If your cluster shows a circuit breaker exception, first increase the circuit breaker limit. If you don't have a circuit breaker exception, try the other methods before you increase the circuit breaker limit.Tune queriesIf you're running complex queries (such as heavy aggregations), then tune the queries for maximum performance. Sudden spikes in heap memory consumption can be caused by the field data or data structures that are used for aggregation queries.Review the following API calls to identify the cause of the spike, replacing os-endpoint with your domain endpoint:$curl os-endpoint/_nodes/stats/breaker?pretty$curl "os-endpoint/_nodes/stats/indices/fielddata?level=indices&fields=*"For more information about managing memory usage, see Tune for search speed on the Elasticsearch website.Use dedicated leader nodesIt's a best practice to allocate three dedicated leader nodes for each OpenSearch Service domain. For more information about improving cluster stability, see Get started with OpenSearch Service: Use dedicated leader instances to improve cluster stability.Scale upTo scale up your domain, increase the number of nodes or choose an Amazon EC2 instance type that holds more memory. For more information about scaling, see How can I scale up or scale out my OpenSearch Service domain?Check your shard distributionCheck the index that your shards are ingesting into to confirm that they are evenly distributed across all data nodes. If your shards are unevenly distributed, one or more of the data nodes could run out of storage space.Use the following formula to confirm that the shards are distributed evenly:Total number of shards = shards per node * number of data nodesFor example, if there are 24 shards in the index, and there are eight data nodes, then you have three shards per node. 
For more information about the number of shards needed, see Get started with OpenSearch Service: How many shards do I need?Check your versionsImportant: Your OpenSearch Dashboards and OpenSearch Service versions must be compatible.Run the following API call to confirm that your versions are compatible, replacing os-endpoint with your domain endpoint:$curl os-endpoint/.kibana/config/_search?prettyNote: An unsuccessful command can indicate compatibility issues between OpenSearch Dashboards and Supported OpenSearch Service versions. For more information about compatible OpenSearch Dashboards and Elasticsearch versions, see Set up on the Elasticsearch website.Monitor resourcesSet up Amazon CloudWatch alarms that notify you when resources are used above a certain threshold. For example, if you set an alarm for JVM memory pressure, then take action before the pressure reaches 100%. For more information about CloudWatch alarms, see Recommended CloudWatch alarms and Improve the operational efficiency of OpenSearch Service domains with automated alarms using CloudWatch.Increase the circuit breaker limitTo prevent the cluster from running out of memory, try increasing the parent or field data circuit breaker limit. For more information about field data circuit breaker limits, see Circuit breaker settings on the Elasticsearch website.Related informationCan't access OpenSearch DashboardsHow do I resolve the "Courier fetch: n of m shards failed" error in OpenSearch Dashboards on Amazon OpenSearch Service?How do I resolve the "cannot restore index [.kibana] because it's open" error in Amazon OpenSearch Service?Troubleshooting an upgradeFollow" | https://repost.aws/knowledge-center/opensearch-dashboards-red-status |
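To check the cluster status and circuit breaker counters referenced above from a script, you can call the same APIs over HTTPS. The following Python sketch uses the requests library and assumes a domain with fine-grained access control and HTTP basic authentication; the endpoint and credentials are placeholders, and IAM-only domains need a SigV4-signed request instead (for example, with requests-aws4auth).

```python
import requests

# Placeholder endpoint and credentials -- replace with your own values.
endpoint = "https://search-mydomain.us-east-1.es.amazonaws.com"
auth = ("admin", "admin-password")

# Cluster health: red status here also turns OpenSearch Dashboards red.
health = requests.get(endpoint + "/_cluster/health", auth=auth, timeout=30).json()
print("Cluster status:", health.get("status"))

# Circuit breaker counters per node, parsed defensively in case the
# response structure differs across versions.
breakers = requests.get(endpoint + "/_nodes/stats/breaker", auth=auth, timeout=30).json()
for node_id, node in breakers.get("nodes", {}).items():
    parent = node.get("breakers", {}).get("parent", {})
    print(node.get("name", node_id), "parent breaker tripped:", parent.get("tripped"))
```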
How do I troubleshoot connectivity issues between my Amazon ECS tasks for an Amazon EC2 launch type and an Amazon RDS database? | My application is running as a set of tasks launched by Amazon Elastic Container Service (Amazon ECS) on Amazon Elastic Compute Cloud (Amazon EC2) instances. My application can't communicate with the Amazon Relational Database Service (Amazon RDS) database. | "My application is running as a set of tasks launched by Amazon Elastic Container Service (Amazon ECS) on Amazon Elastic Compute Cloud (Amazon EC2) instances. My application can't communicate with the Amazon Relational Database Service (Amazon RDS) database.ResolutionVerify your network configurationsTo verify if a container instance can establish a connection to the database, complete the following steps for either Linux-based or Windows-based container instances:For Linux-based container instances:1. Use SSH to connect to the container instance where your task is placed.2. To connect to your RDS database, run the following command:$ telnet test.ab12cde3fg4.us-east-1.rds.amazonaws.com 3306Note: Replace test.ab12cde3fg4.us-east-1.rds.amazonaws.com with your database endpoint. Replace 3306 with your database port.The output looks similar to the following:> Trying 172.31.122.28 > Connected to test.ab12cde3fg4.us-east-1.rds.amazonaws.com > Escape character is '^]'.Important: Telnet isn't pre-installed on Amazon ECS-optimized Amazon Machine Images (AMIs). To install Telnet, run the sudo yum install telnet -y command.For Windows-based container instances:1. Use the Remote Desktop Protocol (RDP) to connect to the container instance where your task is placed.2. To connect to your RDS database, run the following command using either the Windows command prompt or Windows PowerShell:$ telnet test.ab12cde3fg4.us-east-1.rds.amazonaws.com 3306Note: Replace test.ab12cde3fg4.us-east-1.rds.amazonaws.com with your database endpoint. Replace 3306 with your database port.Important: Telnet isn't pre-installed on Amazon ECS-optimized Windows AMIs. To install Telnet, run the Install-WindowsFeature -Name Telnet-Client command using PowerShell as administrator.If the connection is established, a blank page appears.If the connection isn't established and you receive "Connection Timed Out" or "Connect failed' errors, then complete the following steps:1. Check if the attached security groups allow access to the RDS database. You can use either the DescribeInstances API call, or the Description tab for your selected instance ID in the Amazon EC2 console.Note: In the bridge and host networking mode, security groups attached to the container instance govern access to the database. In the awsvpc network mode, the security groups associated during the launch of the service or task govern access.Tip: As a best practice, create a security group that allows incoming traffic from the database port. Then, attach the security group to the database and container instance, or associate the security group with tasks based on awsvpc.2. Check if the network access control list (network ACL) and route table associated with the subnet allow access to the database.Verify the database connection parameters1. In the environment section of your container definition, pass your environment variables securely. 
To pass them securely, reference your environment variables from AWS Systems Manager Parameter Store or AWS Secrets Manager.Note: An application uses parameters (such as database endpoint, database port, and database access credentials) to establish a connection with the database. These parameters are usually passed as environment variables to the task.2. If the container in your task can establish a connection with the database, but can't authenticate due to incorrect connection parameters (such as database user name or database password), then reset your database password.3. Remove any leading or trailing character spaces from your connection parameters.Note: Syntax errors can result in a failed connection between your container and the RDS database.Follow" | https://repost.aws/knowledge-center/ecs-task-connect-rds-database |
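If telnet isn't available in the container or on the instance, you can run the same reachability check from Python with only the standard library. In the following sketch, the endpoint and port are the placeholder values used above.

```python
import socket

# Placeholder endpoint and port -- replace with your RDS endpoint and database port.
host = "test.ab12cde3fg4.us-east-1.rds.amazonaws.com"
port = 3306

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"TCP connection to {host}:{port} succeeded")
except OSError as error:
    # A timeout here usually points at security groups, network ACLs, or routing,
    # the same causes as the telnet "Connection Timed Out" case described above.
    print(f"TCP connection to {host}:{port} failed: {error}")
```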
How do I use the pgaudit extension to audit my Amazon RDS DB instance that is running PostgreSQL? | "I want to audit all my databases, roles, relations, or columns, and I want to provide different levels of auditing to different roles. How can I configure the pgaudit extension for different role levels on an Amazon Relational Database Service (Amazon RDS) DB instance that is running Amazon RDS for PostgreSQL?" | "I want to audit all my databases, roles, relations, or columns, and I want to provide different levels of auditing to different roles. How can I configure the pgaudit extension for different role levels on an Amazon Relational Database Service (Amazon RDS) DB instance that is running Amazon RDS for PostgreSQL?ResolutionThere are different parameters that you can set to log activity on your PostgreSQL DB instance. To audit different databases, roles, tables, or columns, you can use the pgaudit extension. After you activate the pgaudit extension, you can configure the pgaudit.log parameter to audit specific databases, roles, tables, and columns.Activating the pgaudit extension on an Amazon RDS instance running PostgreSQL1. Create a specific database role called rds_pgaudit by running the following command:CREATE ROLE rds_pgaudit;CREATE ROLE2. Modify the below parameters in your custom DB parameter group that is associated with your DB instance:Add or append pgaudit to shared_preload_librariesConfigure pgaudit.role to rds_pgaudit, the role created in step 13. Reboot the instance so that the changes to the parameter group are applied to the instance.4. Confirm that pgaudit is initialized by running the following command:show shared_preload_libraries;shared_preload_libraries --------------------------rdsutils,pgaudit(1 row)5. Create the pgaudit extension by running the following command:CREATE EXTENSION pgaudit;CREATE EXTENSION6. Confirm that pgaudit.role is set to rds_pgaudit by running the following command:show pgaudit.role;pgaudit.role------------------rds_pgaudit7. Configure the pgaudit.log parameter to audit any of the following:ALL audits the following commands.MISC audits miscellaneous commands, such as DISCARD, FETCH, CHECKPOINT, VACUUM, SET.DDL audits all data description language (DDL) that is not included in the ROLE class.ROLE audits statements related to roles and privileges, such as GRANT, REVOKE, CREATE/ALTER/DROP ROLE.FUNCTION audits function calls and DO blocks.WRITE audits INSERT, UPDATE, DELETE, TRUNCATE, and COPY when the destination is a relation.READ audits SELECT and COPY when the source is a relation or a query.Depending on what you want to audit, set the value of the pgaudit.log parameter for a database, role, or table.Using the pgaudit extension to audit databases1. To set the value of the pgaudit.log parameter for a database, role, or table, set the parameter pgaudit.log to none at the parameter group level:> show pgaudit.log;+---------------+| pgaudit.log ||---------------|| none |+---------------+SHOWRun the following command to override the system configuration for this parameter at this database only:ALTER DATABASE test_database set pgaudit.log='All';This changes the value of parameterpgaudit.log toAll, so thattest_database is the only database that is audited in the RDS DB instance.3. 
Connect to test_database and run the following query:select * from test_table;The output of the error log is similar to the following:2019-06-25 19:21:35 UTC:192.0.2.7(39330):testpar@test_database:[21638]:LOG: AUDIT: SESSION,2,1,READ,SELECT,,,select * from test_table;,<not logged>Using the pgaudit extension to audit rolesSimilarly to configuring the pgaudit.log parameter at the database level, the role is modified to have a different value for the pgaudit.log parameter. In the following example commands, the roles test1 and test2 are altered to have different pgaudit.log configurations.1. Set different values for the pgaudit.log parameter for test1 and test2 by running the following commands:ALTER ROLE test1 set pgaudit.log='All';ALTER ROLE test2 set pgaudit.log='DDL';2. Check that the modifications are made at the role level by running the following query:> select rolname,rolconfig from pg_roles where rolname in ('test1',' test2');+-----------+----------------------+| rolname | rolconfig ||-----------+----------------------|| test1 | [u'pgaudit.log=All'] || test2 | [u'pgaudit.log=DDL'] |+-----------+----------------------+SELECT 2Time: 0.010s3. Run the following queries for both test1 and test2:CREATE TABLE test_table (id int);CREATE TABLEselect * from test_table;id ----(0 rows)The log output is similar to the following for test1:...2019-06-26 14:51:12 UTC:192.0.2.7(44754):test1@postgres:[3547]:LOG: AUDIT: SESSION,1,1,DDL,CREATE TABLE,,,CREATE TABLE test_table (id int);,<not logged>2019-06-26 14:51:18 UTC:192.0.2.7(44754):test1@postgres:[3547]:LOG: AUDIT: SESSION,2,1,READ,SELECT,,,select * from test_table;,<not logged>...The log output is similar to the following for test2 after running the same queries:...2019-06-26 14:53:54 UTC:192.0.2.7(44772):test2@postgres:[5517]:LOG: AUDIT: SESSION,1,1,DDL,CREATE TABLE,,,CREATE TABLE test_table (id int);,<not logged>...Note: There is no audit entry for the SELECT query because the pgaudit.log parameter for test2 is configured to DDL only.Using the pgaudit extension to audit tablesConfiguring the pgaudit.log parameter audits and logs statements that affect a specific relation. Only SELECT, INSERT, UPDATE, and DELETE commands can be logged by the pgaudit extension. TRUNCATE isn't included in the object audit logging. If you grant the rds_pgaudit role access to an operation (such as SELECT, DELETE, INSERT, or UPDATE) on the table that you want to audit, any grant audit logs the corresponding statement. The following example grants the rds_pgaudit role access to SELECT and DELETE, so that all the SELECT and DELETE statements on test_table are audited.1. Grant the rds_pgaudit role access to SELECT and DELETE by running the following command:grant select, delete on test_table to rds_pgaudit;2. Test that the audit logging is configured correctly by running a DELETE statement on test_table:Time: 0.008s DELETE 1>delete from test_table where pid=5050;The output for the DELETE statement is similar to the following:2019-06-25 17:13:02UTC:192.0.2.7(41810):postgresql104saz@postgresql104saz:[24976]:LOG: AUDIT: OBJECT,3,1,WRITE,DELETE,TABLE,public.t1,delete from test_table where pid=5050,<not logged>Using the pgaudit extension to audit columnsYou can also set the auditing at a column level for a specific table. For example, when sensitive data exists in one column only. In the following example command, a payroll table is created, and the table has a sensitive column that includes salary data that must be audited:create table payroll( name text, salary text);1. 
Grant the rds_pgaudit role access to SELECT on the salary column so that any SELECT on this column is audited:grant select (salary) on payroll to rds_pgaudit;2. SELECT all the columns in the table, including the salary column:select * from payroll;In the following example output, any SELECT that includes the salary column is audited. However, a SELECT that doesn't contain the salary column isn't audited.2019-06-25 18:25:02UTC:192.0.2.7(42056):postgresql104saz@postgresql104saz:[4118]:LOG: AUDIT: OBJECT,2,1,READ,SELECT,TABLE,public.payroll,select * from payroll,<not logged>Related informationCommon DBA tasks for PostgreSQLFollow" | https://repost.aws/knowledge-center/rds-postgresql-pgaudit |
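The database-, role-, and table-level settings above can also be applied from a script. The following Python sketch uses psycopg2 (an assumption about your client library) and the object names from the examples above; the endpoint and credentials are placeholders.

```python
import psycopg2

# Placeholder connection details -- replace with your RDS endpoint and credentials.
connection = psycopg2.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="postgres",
    password="master-password",
)
connection.autocommit = True

with connection.cursor() as cursor:
    # Audit everything in one database, DDL only for one role, and
    # SELECT/DELETE on one table through the rds_pgaudit role.
    cursor.execute("ALTER DATABASE test_database SET pgaudit.log = 'All'")
    cursor.execute("ALTER ROLE test2 SET pgaudit.log = 'DDL'")
    cursor.execute("GRANT SELECT, DELETE ON test_table TO rds_pgaudit")

connection.close()
```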
How can I update the cross-realm trust principal password for an existing Amazon EMR cluster? | I set up cross-realm trust with an Active Directory domain on a Kerberized Amazon EMR cluster. I need to change the principal password. | "I set up cross-realm trust with an Active Directory domain on a Kerberized Amazon EMR cluster. I need to change the principal password.ResolutionAmazon EMR creates a krbtgt principal using the cross-realm trust principal password that you specify at cluster launch. This principal is stored in the key distribution center (KDC) on the master node. It looks like this:krbtgt/ADTrustRealm@KerberosRealmTo update the cross-realm trust principal password:1. Connect to the master node using SSH.2. Open the kadmin.local tool:sudo kadmin.local3. List all principals to find the principal that you want to update (for example, krbtgt/MYADDOMAIN.COM@MYEMRDOMAIN.COM):list_principals4. Run the following command to update the password for the cross-realm trust principal. In the following example, replace krbtgt/MYADDOMAIN.COM@MYEMRDOMAIN.COM with your principal.change_password krbtgt/MYADDOMAIN.COM@MYEMRDOMAIN.COM5. Exit the kadmin.local tool:exit6. To confirm that the new password works, obtain a Kerberos ticket for an Active Directory user and then list HDFS files. Example:kinit myaduser@MYADDOMAIN.COMhdfs dfs -ls /tmpRelated informationTutorial: Configure a cross-realm trust with an Active Directory domainCross-realm trustHow can I renew an expired Kerberos ticket that I'm using for Amazon EMR authentication?Follow" | https://repost.aws/knowledge-center/emr-change-cross-realm-trust-password |
How do I create another user with the same privileges as a master user for my Amazon RDS or Aurora DB instance that's running PostgreSQL? | I have an Amazon Relational Database Service (Amazon RDS) or an Amazon Aurora DB instance that runs PostgreSQL. I want another user to have the same permissions as the master user for my DB instance. How do I duplicate or clone the master user? | "I have an Amazon Relational Database Service (Amazon RDS) or an Amazon Aurora DB instance that runs PostgreSQL. I want another user to have the same permissions as the master user for my DB instance. How do I duplicate or clone the master user?ResolutionA DB instance that runs PostgreSQL has only one master user that is created when the instance is created. However, you can create another user that has all the same permissions as the master user.Note: PostgreSQL logs passwords in cleartext in the log files. To prevent this, review How can I stop Amazon RDS for PostgreSQL from logging my passwords in clear-text in the log files?Important: The rds_superuser role has the most privileges for an RDS DB instance. Don't assign this role to a user unless they need the most access to the RDS DB instance.1. Create a new user by running the CREATE ROLE command:postgres=> CREATE ROLE new_master WITH PASSWORD 'password' CREATEDB CREATEROLE LOGIN;Note: Replace new_master and password with your user name and password.2. Grant the role that you created rds_superuser permissions:postgres=> GRANT rds_superuser TO new_master;Note: Replace new_master with your user name.The new user now has the same permissions as the master user.Related informationCreating RolesMaster User Account PrivilegesHow do I allow users to connect to Amazon RDS with IAM credentials?Follow" | https://repost.aws/knowledge-center/rds-aurora-postgresql-clone-master-user |
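The two statements above can also be run from a script. The following Python sketch uses psycopg2 (an assumption about your client library) with placeholder connection details and a placeholder password; as noted above, PostgreSQL can log the CREATE ROLE statement, including the password, in cleartext.

```python
import psycopg2

# Placeholder connection details -- connect as the existing master user.
connection = psycopg2.connect(
    host="mydb.abc123.us-east-1.rds.amazonaws.com",
    dbname="postgres",
    user="postgres",
    password="master-password",
)
connection.autocommit = True

with connection.cursor() as cursor:
    # Create the new user and grant it the rds_superuser role.
    cursor.execute(
        "CREATE ROLE new_master WITH PASSWORD 'ChangeMe123!' "
        "CREATEDB CREATEROLE LOGIN"
    )
    cursor.execute("GRANT rds_superuser TO new_master")

connection.close()
```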
How do I mount Amazon EFS using the EFS DNS name on a Linux Machine that is joined with AWS Managed Microsoft AD? | "I am using the AWS Directory Service for AWS Managed Microsoft AD. I joined my Amazon Elastic Compute Cloud (Amazon EC2) Linux instances into the Active Directory Domain. As a result, I am unable to mount the Amazon Elastic File System (Amazon EFS) using the EFS DNS name. How can I resolve this issue?" | "I am using the AWS Directory Service for AWS Managed Microsoft AD. I joined my Amazon Elastic Compute Cloud (Amazon EC2) Linux instances into the Active Directory Domain. As a result, I am unable to mount the Amazon Elastic File System (Amazon EFS) using the EFS DNS name. How can I resolve this issue?Short descriptionWhen you join your Linux machine with AWS Managed Microsoft AD, you configure your instance to use the DNS servers for your Active Directory.For AWS Managed Microsoft AD, all DNS requests are forwarded to the IP address of the Amazon provided DNS servers for your VPC. These DNS servers resolve names that are configured in your Amazon Route 53 private hosted zones. If you aren't using Route 53 private hosted zones, then your DNS requests are forwarded to public DNS servers. If no private hosted zone exists for your AWS services, then DNS requests are forwarded to public DNS servers. This means that they can only resolve AWS services DNS to public IP addresses. For more information, see Configure DNS.With Amazon EFS, the file system DNS name automatically resolves to the mount target’s IP address in the Availability Zone of the connecting Amazon EC2 instance. This is a private IP address, which can only be resolved within the same VPC. By changing the DNS servers from the defaulted VPC-provided DNS, the EFS is no longer able to resolve the IP address so mounting by DNS fails. For more information, see Mounting on Amazon EC2 with a DNS name.Example of issueThis example uses the AWS Managed Microsoft AD. The DNS servers provided are 172.31.28.100 and 172.31.4.147. The EFS file system was created in the same VPC with mount target 172.31.47.69.1. Using netcat, confirm that the EC2 instance can establish a connection with the EFS mount target 172.31.47.69:$ nc -vz 172.31.47.69 2049Ncat: Version 7.50 ( https://nmap.org/ncat )Ncat: Connected to 172.31.47.69:2049.Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.2. On the EC2 Linux server, you can mount the EFS using the DNS name. The EFS is then unmounted.sudo mount -t efs -o tls fs-123456:/ efsdf -ThFilesystem Type Size Used Avail Use% Mounted ondevtmpfs devtmpfs 475M 0 475M 0% /devtmpfs tmpfs 483M 0 483M 0% /dev/shmtmpfs tmpfs 483M 516K 483M 1% /runtmpfs tmpfs 483M 0 483M 0% /sys/fs/cgroup/dev/xvda1 xfs 8.0G 1.6G 6.5G 19% /tmpfs tmpfs 97M 0 7M 0% /run/user/0tmpfs tmpfs 97M 0 97M 0% /run/user/1000127.0.0.1:/ nfs4 8.0E 0 8.0E 0% /home/ec2-user/efssudo umount /efs3. The /etc/resolv.conf file shows the Amazon provided DNS and nameserver:cat /etc/resolv.conf ; generated by /usr/sbin/dhclient-script search eu-west-2.compute.internal options timeout:2 attempts:5 nameserver 172.31.0.24. On the EC2 Linux server, integrate Microsoft AD, and then configure the Active Directory DNS servers:echo 'supersede domain-name-servers 172.31.28.100, 172.31.4.147;' | sudo tee --append /etc/dhcp/dhclient.confecho 'supersede domain-search "rachel.com";' | sudo tee --append /etc/dhcp/dhclient.confsudo dhclient -rsudo dhclient5. 
Confirm that the DNS servers are configured by checking the resolv.conf file:cat /etc/resolv.conf options timeout:2 attempts:5; generated by /usr/sbin/dhclient-scriptsearch rachel.com. eu-west-2.compute.internalnameserver 172.31.28.100nameserver 172.31.4.1476. Run dig on the file system to see that the mount target private IP isn't returned:$ dig fs-123456.efs.eu-west-2.amazonaws.com ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> fs-123456.efs.eu-west-2.amazonaws.com;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57378;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4000;; QUESTION SECTION:;fs-123456.efs.eu-west-2.amazonaws.com. IN ANote: The DNS request doesn't resolve to any A record, and the status shows NXDOMAIN.7. Mounting the EFS using DNS name fails:sudo mount -t efs -o tls fs-123456:/ efsFailed to resolve "fs-123456.efs.eu-west-2.amazonaws.com" - check that your file system ID is correct, and ensure that the VPC has an EFS mount target for this file system ID.See https://docs.aws.amazon.com/console/efs/mount-dns-name for more detail.Attempting to lookup mount target ip address using botocore. Failed to import necessary dependency botocore, please install botocore first.If you use the Amazon provided name server for the VPC, then note that it successfully resolves:dig @172.31.0.2 fs-123456.efs.eu-west-2.amazonaws.com; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> fs-123456.efs.eu-west-2.amazonaws.com;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24926;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4000;; QUESTION SECTION:;fs-123456.efs.eu-west-2.amazonaws.com. IN A;; ANSWER SECTION:fs-123456.efs.eu-west-2.amazonaws.com. 60 INA 172.31.25.79ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.Configure conditional forwarders for your Microsoft AD to forward requests to the Amazon VPC-provided DNS. This method also works for resolving other AWS services DNS to their private IP addresses, if you use the Active Directory provided DNS.To do this, use the AWS CLI command to create a conditional forwarder rule. This forwards all sub-domains of a domain to a specific DNS server IP. For example, you can forward all DNS requests for sub-domains of amazonaws.com to the private IP of the Amazon VPC provided DNS.Note: The Amazon VPC provided DNS IP is the reserved IP address at the base of the VPC IPv4 network range plus two.To create the conditional forwarder rule, run the AWS CLI command create-conditional-forwarder on the command line of the Linux instance that you want to mount the EFS on:aws ds create-conditional-forwarder --directory-id d-9c671fb35f --remote-domain-name amazonaws.com --dns-ip-addrs 172.31.0.2 --region eu-west-2Use the following parameters:directory-id - Enter the AD directory ID.remote-domain-name - You can specify any domain. 
This rule is applied to all FQDN matching this domain or sub-domains.dns-ip-addrs - Enter the Amazon VPC provided DNS IP.This allows for DNS resolution of the EFS DNS and the EFS can now be mounted with the DNS name.dig fs-123456.efs.eu-west-2.amazonaws.com ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.5.2 <<>> fs-123456.efs.eu-west-2.amazonaws.com;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24926;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4000;; QUESTION SECTION:;fs-123456.efs.eu-west-2.amazonaws.com. IN A;; ANSWER SECTION:fs-123456.efs.eu-west-2.amazonaws.com. 60 INA 172.31.25.79The EFS can now be mounted using the DNS name.sudo mount -t efs -o tls fs-123456:/ efs[ec2-user@ip-172-31-35-167 ~]$ df -ThFilesystem Type Size Used Avail Use% Mounted ondevtmpfs devtmpfs 475M 0 475M 0% /devtmpfs tmpfs 483M 0 483M 0% /dev/shmtmpfs tmpfs 483M 520K 483M 1% /runtmpfs tmpfs 483M 0 483M 0% /sys/fs/cgroup/dev/xvda1 xfs 8.0G 1.6G 6.5G 19% /tmpfs tmpfs 97M 0 97M 0% /run/user/0tmpfs tmpfs 97M 0 97M 0% /run/user/1000127.0.0.1:/ nfs4 8.0E 0 8.0E 0% /home/ec2-user/efsRelated informationHow do I see a list of my Amazon EC2 instances that are connected to Amazon EFS?How can I mount an Amazon EFS volume to AWS Batch in a managed compute environment?Follow" | https://repost.aws/knowledge-center/efs-mount-fqdn-microsoft-ad |
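The same conditional forwarder can be created with the AWS SDK instead of the AWS CLI. The following Python (boto3) sketch reuses the directory ID, domain, and DNS IP from the example above; replace them with your own values.

```python
import boto3

ds = boto3.client("ds", region_name="eu-west-2")

# Values from the example above -- use your directory ID and the
# Amazon VPC-provided DNS IP (the VPC IPv4 network range base plus two).
ds.create_conditional_forwarder(
    DirectoryId="d-9c671fb35f",
    RemoteDomainName="amazonaws.com",
    DnsIpAddrs=["172.31.0.2"],
)
print("Conditional forwarder created")
```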
How do I resolve search or write rejections in OpenSearch Service? | "When I submit a search or write request to my Amazon OpenSearch Service cluster, the requests are rejected." | "When I submit a search or write request to my Amazon OpenSearch Service cluster, the requests are rejected.Short descriptionWhen you write or search for data in your OpenSearch Service cluster, you might receive the following HTTP 429 error or es_rejected_execution_exception:error":"elastic: Error 429 (Too Many Requests): rejected execution of org.elasticsearch.transport.TransportService$7@b25fff4 on EsThreadPoolExecutor[bulk, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@768d4a66[Running, pool size = 2, active threads = 2, queued tasks = 200, completed tasks = 820898]] [type=es_rejected_execution_exception]"Reason={"type":"es_rejected_execution_exception","reason":"rejected execution of org.elasticsearch.transport.TcpTransport$RequestHandler@3ad6b683 on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@bef81a5[Running, pool size = 25, active threads = 23, queued tasks = 1000, completed tasks = 440066695]]"The following variables can contribute to an HTTP 429 error or es_rejected_execution_exception:Data node instance types and search or write limitsHigh values for instance metricsActive and Queue threadsHigh CPU utilization and JVM memory pressureHTTP 429 errors can occur because of search and write requests to your cluster. Rejections can also come from a single node or multiple nodes of your cluster.Note: Different versions of Elasticsearch use different thread pools to process calls to the _index API. Elasticsearch versions 1.5 and 2.3 use the index thread pool. Elasticsearch versions 5.x, 6.0, and 6.2 use the bulk thread pool. Elasticsearch versions 6.3 and later use the write thread pool. For more information, see Thread pool on the Elasticsearch website.ResolutionData node instance types and search or write limitsA data node instance type has fixed virtual CPUs (vCPUs). Plug the vCPU count into your formula to retrieve the concurrent search or write operations that your node can perform before the node enters a queue. If an active thread becomes full, then the thread spills over to a queue and is eventually rejected. For more information about the relationship between vCPUs and node types, see OpenSearch Service pricing.Additionally, there is a limit to how many searches per node or writes per node that you can perform. This limit is based on the thread pool definition and Elasticsearch version number. 
For more information, see Thread pool on the Elasticsearch website.For example, if you choose the R5.2xlarge node type for five nodes in your Elasticsearch cluster (version 7.4), then the node will have 8 vCPUs.Use the following formula to calculate maximum active threads for search requests:int ((# of available_processors * 3) / 2) + 1Use the following formula to calculate maximum active threads for write requests:int (# of available_processors)For an R5.2xlarge node, you can perform a maximum of 13 search operations:(8 VCPUs * 3) / 2 + 1 = 13 operationsFor an R5.2xlarge node, you can perform a maximum of 8 write operations:8 VCPUs = 8 operationsFor an OpenSearch Service cluster with five nodes, you can perform a maximum of 65 search operations:5 nodes * 13 = 65 operationsFor an OpenSearch Service cluster with five nodes, you can perform a maximum of 40 write operations:5 nodes * 8 = 40 operationsHigh values for instance metricsTo troubleshoot your 429 exception, check the following Amazon CloudWatch metrics for your cluster:IndexingRate: The number of indexing operations per minute. A single call to the _bulk API that adds two documents and updates two counts as four operations that might spread across nodes. If that index has one or more replicas, other nodes in the cluster also record a total of four indexing operations. Document deletions don't count towards the IndexingRate metric.SearchRate: The total number of search requests per minute for all shards on a data node. A single call to the _search API might return results from many different shards. If five different shards are on one node, then the node reports "5" for this metric, even if the client only made one request.CoordinatingWriteRejected: The total number of rejections that occurred on the coordinating node. These rejections are caused by the indexing pressure that accumulated since the OpenSearch Service startup.PrimaryWriteRejected: The total number of rejections that occurred on the primary shards. These rejections are caused by indexing pressure that accumulated since the last OpenSearch Service startup.ReplicaWriteRejected: The total number of rejections that occurred on the replica shards because of indexing pressure. These rejections are caused by indexing pressure that accumulated since the last OpenSearch Service startup.ThreadpoolWriteQueue: The number of queued tasks in the write thread pool. This metric tells you whether a request is being rejected because of high CPU usage or high indexing concurrency.ThreadpoolWriteRejected: The number of rejected tasks in the write thread pool.Note: The default write queue size was increased from 200 to 10,000 in OpenSearch Service version 7.9. As a result, this metric is no longer the only indicator of rejections from OpenSearch Service. Use the CoordinatingWriteRejected, PrimaryWriteRejected, and ReplicaWriteRejected metrics to monitor rejections in versions 7.9 and later.ThreadpoolSearchQueue: The number of queued tasks in the search thread pool. If the queue size is consistently high, then consider scaling your cluster. The maximum search queue size is 1,000.ThreadpoolSearchRejected: The number of rejected tasks in the search thread pool. If this number continually grows, then consider scaling your cluster.JVMMemoryPressure: The JVM memory pressure specifies the percentage of the Java heap in a cluster node. If JVM memory pressure reaches 75%, OpenSearch Service initiates the Concurrent Mark Sweep (CMS) garbage collector. 
The garbage collection is a CPU-intensive process. If JVM memory pressure stays at this percentage for a few minutes, then you might encounter cluster performance issues. For more information, see How do I troubleshoot high JVM memory pressure on my Amazon OpenSearch Service cluster?Note: The thread pool metrics that are listed help inform you about the IndexingRate and SearchRate.For more information about monitoring your OpenSearch Service cluster with CloudWatch, see Instance metrics.Active and Queue threadsIf there is a lack of CPUs or high request concurrency, then a queue can fill up quickly, resulting in an HTTP 429 error. To monitor your queue threads, check the ThreadpoolSearchQueue and ThreadpoolWriteQueue metrics in CloudWatch.To check your Active and Queue threads for any search rejections, use the following command:GET /_cat/thread_pool/search?v&h=id,name,active,queue,rejected,completedTo check Active and Queue threads for write rejections, replace "search" with "write". The rejected and completed values in the output are cumulative node counters, which are reset when new nodes are launched. For more information, see the Example with explicit columns section of cat thread pool API on the Elasticsearch website.Note: The bulk queue on each node can hold between 50 and 200 requests, depending on which Elasticsearch version you are using. When the queue is full, new requests are rejected.Errors for search and write rejectionsSearch rejectionsA search rejection error indicates that active threads are busy and that queues are filled up to the maximum number of tasks. As a result, your search request can be rejected. You can configure OpenSearch Service logs so that these error messages appear in your search slow logs.Note: To avoid extra overhead, set your slow log threshold to a generous amount. For example, if most of your queries take 11 seconds and your threshold is "10", then OpenSearch Service takes more time to write a log. You can avoid this overhead by setting your slow log threshold to 20 seconds. Then, only a small percentage of the slower queries (that take longer than 11 seconds) is logged.After your cluster is configured to push search slow logs to CloudWatch, set a specific threshold for slow log generation. You can set a specific threshold for slow log generation with the following HTTP POST call:curl -XPUT http://<your domain’s endpoint>/index/_settings -d '{"index.search.slowlog.threshold.query.<level>":"10s"}'Write rejectionsA 429 error message as a write rejection indicates a bulk queue error. The es_rejected_execution_exception[bulk] indicates that your queue is full and that any new requests are rejected. This bulk queue error occurs when the number of requests to the cluster exceeds the bulk queue size (threadpool.bulk.queue_size). A bulk queue on each node can hold between 50 and 200 requests, depending on which Elasticsearch version you are using.You can configure OpenSearch Service logs so that these error messages appear in your index slow logs.Note: To avoid extra overhead, set your slow log threshold to a generous amount. For example, if most of your queries take 11 seconds and your threshold is "10", then OpenSearch Service will take additional time to write a log. You can avoid this overhead by setting your slow log threshold to 20 seconds. 
Then, only a small percentage of the slower queries (that take longer than 11 seconds) is logged.After your cluster is configured to push search slow logs to CloudWatch, set a specific threshold for slow log generation. To set a specific threshold for slow log generation, use the following HTTP POST call:curl -XPUT http://<your domain’s endpoint>/index/_settings -d '{"index.indexing.slowlog.threshold.query.<level>":"10s"}'Write rejection best practicesHere are some best practices that mitigate write rejections:When documents are indexed faster, the write queue is less likely to reach capacity.Tune bulk size according to your workload and desired performance. For more information, see Tune for indexing speed on the Elasticsearch website.Add exponential retry logic in your application logic. The exponential retry logic makes sure that failed requests are automatically retried.Note: If your cluster continuously experiences high concurrent requests, then the exponential retry logic won't help resolve the 429 error. Use this best practice when there is a sudden or occasional spike of traffic.If you are ingesting data from Logstash, then tune the worker count and bulk size. It's a best practice to set your bulk size between 3-5 MB.For more information about indexing performance tuning, see How can I improve the indexing performance on my OpenSearch Service cluster?Search rejection best practicesHere are some best practices that mitigate search rejections:Switch to a larger instance type. OpenSearch Service relies heavily on the filesystem cache for faster search results. The number of threads in the thread pool on each node for search requests is equal to the following: int((# of available_processors * 3) / 2) + 1. Switch to an instance with more vCPUs to get more threads to process search requests.Turn on search slow logs for a given index or for all indices with a reasonable threshold value. Check to see which queries are taking longer to run and implement search performance strategies for your queries. For more information, see Troubleshooting Elasticsearch searches, for beginners or Advanced tuning: Finding and fixing slow Elasticsearch queries on the Elasticsearch website.Follow" | https://repost.aws/knowledge-center/opensearch-resolve-429-error |
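To make the thread pool math above concrete, here is a small Python sketch of the quoted formulas for maximum active search and write threads per node and per cluster. The node type and vCPU count come from the article's r5.2xlarge example; treat the output as an upper bound on concurrency, not a guaranteed throughput figure.

```python
# Sketch of the capacity formulas quoted in the article.
def max_search_threads(vcpus: int) -> int:
    # int((# of available_processors * 3) / 2) + 1
    return int((vcpus * 3) / 2) + 1

def max_write_threads(vcpus: int) -> int:
    # int(# of available_processors)
    return int(vcpus)

def cluster_capacity(vcpus_per_node: int, node_count: int) -> dict:
    return {
        "search_ops": node_count * max_search_threads(vcpus_per_node),
        "write_ops": node_count * max_write_threads(vcpus_per_node),
    }

# Example from the article: five R5.2xlarge data nodes with 8 vCPUs each
print(cluster_capacity(vcpus_per_node=8, node_count=5))
# -> {'search_ops': 65, 'write_ops': 40}
```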
How do I resolve the CNAMEAlreadyExists error when I set up a CNAME alias for my CloudFront distribution? | "When I set up a Canonical Name record (CNAME) alias for my Amazon CloudFront distribution, I get a "CNAMEAlreadyExists" error." | "When I set up a Canonical Name record (CNAME) alias for my Amazon CloudFront distribution, I get a "CNAMEAlreadyExists" error.Short descriptionYou can't use the same CNAME alias for more than one CloudFront distribution. When the CNAME alias that you're trying to add is already associated with another CloudFront distribution, you receive an error:"One or more of the CNAMEs you provided are already associated with a different resource. (Service: AmazonCloudFront; Status Code: 409; Error Code: CNAMEAlreadyExists; Request ID: a123456b-c78d-90e1-23f4-gh5i67890jkl*"If you have access to both source and target distributions, then manually remove the CNAME association from the existing CloudFront distribution. Then, associate the CNAME with the new CloudFront distribution.Note: If you want to manually associate the CNAME, then you might need to wait until the old distribution's status is Deployed before you proceed.If you don't know the distribution ID, then use the ListConflictingAliases CloudFront API. This allows you to find partial information about the distribution and the account ID for the conflicting CNAME alias. Then, use AssociateAlias API to move your CNAME from the existing distribution (source distribution) to the new distribution (target distribution). Use one of the following resolutions based on your scenario:For source and target distributions that are in the same account, refer to the Use the AssociateAlias API to move your CNAME section.For cross-account source and target distributions, refer to the Deactivate source distribution with the conflicting CNAME section.If you can't deactivate the source distribution because of downtime to existing traffic, then refer to Use a wildcard to move the alternate domain name.Note: You can't use a wildcard to move an apex domain (example.com). To move an apex domain when the source and target distributions are in different AWS accounts, contact AWS Support to move an alternate domain name.ResolutionUse the AssociateAlias API to move your CNAMENote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.1. In the AWS Identity and Access Management (IAM) policy of the user or role that's making the API request, add the following resource-level permissions:Important: The IAM user or role that makes the request must have these resource-level permissions in both the source distribution and target distribution.{ "Version": "2012-10-17", "Statement": [ { "Sid": "CloudFrontCnameSwapSameAcc", "Effect": "Allow", "Action": [ "cloudfront:GetDistribution", "cloudfront:ListConflictingAliases", "cloudfront:AssociateAlias", "cloudfront:UpdateDistribution" ], "Resource": [ "arn:aws:cloudfront::SourceAcccount:distribution/SourceDistroID", "arn:aws:cloudfront::TargetAccount:distribution/TargetDistroID" ] } ]}Note: Replace SourceAcccount with the AWS account number of the source distribution. Replace SourceDistroID with the source distribution ID. Replace TargetAccountID with the AWS account number of the target distribution. Replace TargetDistroID with target distribution ID.2. Identify the distribution with the conflicting CNAME. 
If you don't know which distribution has the conflicting CNAME, then use the ListConflictingAliases API to find that distribution:$ aws cloudfront list-conflicting-aliases --distribution-id YourDistributionID --alias YourCNAMENote: Replace YourDistributionID with the ID of your distribution. Replace YourCNAME with the conflicting CNAME alias.3. To verify ownership of the domain, you must have read access to YourDistributionID. You must also have an SSL certificate associated with the CloudFront distribution that secures the conflicting CNAME.When you're ready to verify ownership, create a DNS TXT record for the CNAME that resolves to the target distribution's canonical name. Your TXT record must include an underscore before the CNAME, Apex, or Wildcard:_.example.com. 900 IN TXT "dexample123456.cloudfront.net"_cname.example.com. 900 IN TXT "dexample123456.cloudfront.net"_*.example.com. 900 IN TXT "dexample123456.cloudfront.net"4. Verify that the target distribution has a valid SSL certificate.Note: The subject name or subject alternative name must match or overlap with the given CNAME alias. It's a best practice to have a valid certificate issued from a trusted CA that's listed in Mozilla's CA certificate list or AWS Certificate Manager.5. Run the AssociateAlias API request from the account that owns the target distribution:$ aws cloudfront associate-alias --target-distribution-id YourTargeDistributiontID --alias your_cname.example.comDeactivate the source distribution with the conflicting CNAMEIf your source distribution and target distribution are in different AWS accounts, then first disable the source distribution that's associated with the conflicting domain. Then, use the AssociateAlias API to move the CNAME.You can use the associate-alias command to move Apex domains between different AWS accounts1. Open the CloudFront console.2. On the navigation pane, choose Distributions3. Select the source distribution, and then choose Disable.If you don't know which distribution has the conflicting CNAME, then use the ListConflictingAliases API to find that distribution. Replace YourDistributionID with the ID of your distribution and YourCNAME with the name of the conflicting CNAME:$ aws cloudfront list-conflicting-aliases --distribution-id YourDistributionID --alias YourCNAMENote: The ListConflictingAliases API requires the GetDistribution and ListConflictingAliases permissions.After you deactivate the source distribution, follow the steps in the Use the AssociateAlias API to move your CNAME section.If you don't have access to the AWS account with the source distribution, or if you can't deactivate the source distribution, then contact AWS Support.Use a wildcard to move the alternate domain nameIf your source distribution and target distribution are in different accounts but you can't deactivate the source distribution, then move the CNAME. To do this, use a wildcard. You must have access to both the source distribution and target distribution for this process.This process involves multiple updates to both source and target distributions. Wait for each distribution to fully deploy the latest change before you proceed to the next step.1. Update the target distribution to add a wildcard CNAME that covers the alternate domain name that you want to move. If your domain is www.example.com, then add the wildcard alternate domain name *.example.com to the target distribution.Note: You must have an SSL/TLS certificate on the target distribution that secures the wildcard domain name2. 
Update the DNS settings for the CNAME to point to the target distribution's canonical name. For example, if your domain is www.example.com, then update the DNS record for www.example.com to route traffic to the target distribution's canonical name:www.example.com. 86400 IN CNAME "dexample123456.cloudfront.net"Note: Even after you update the DNS settings, the source distribution serves requests that use the alternate domain name. This is because the alternate domain name is still associated to the source distribution.3. Update the source distribution to remove the alternate domain name.Note: During this step, there's no interruption to the live traffic. Because the requested domain name matches the wildcard domain that's added to the target distribution, live traffic uses the target distribution settings.4. To add the alternate domain name that you want to move, update the target distribution.5. To validate the DNS record for the CNAME, use dig or a similar DNS query tool:dig CNAME www.example.com +shortnslookup example.com6. (Optional) To remove the wildcard alternate domain name, update the target distribution.Follow" | https://repost.aws/knowledge-center/resolve-cnamealreadyexists-error |
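For teams that prefer an SDK over the AWS CLI, the following boto3 sketch mirrors the ListConflictingAliases and AssociateAlias steps described above. The distribution IDs and domain name are placeholders, and the response field names reflect my reading of the CloudFront API; verify them against the current SDK documentation before relying on this.

```python
# Sketch: find the distribution holding a conflicting CNAME, then move the alias.
import boto3

cloudfront = boto3.client("cloudfront")

def find_conflicts(distribution_id: str, alias: str):
    # Returns partial distribution IDs and account IDs of conflicting aliases (assumed response shape).
    resp = cloudfront.list_conflicting_aliases(DistributionId=distribution_id, Alias=alias)
    return resp["ConflictingAliasesList"].get("Items", [])

def move_alias(target_distribution_id: str, alias: str) -> None:
    # Requires the TXT ownership record and a matching certificate, as described above.
    cloudfront.associate_alias(TargetDistributionId=target_distribution_id, Alias=alias)

for item in find_conflicts("EDFDVBD6EXAMPLE", "www.example.com"):
    print(item)

# Once the TXT record and certificate checks above pass:
move_alias("EDFDVBD6EXAMPLE", "www.example.com")
```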
Why am I getting a "403 Forbidden" error when I try to upload files in Amazon S3? | "I'm trying to upload files to my Amazon Simple Storage Service (Amazon S3) bucket using the Amazon S3 console. However, I'm getting a "403 Forbidden" error." | "I'm trying to upload files to my Amazon Simple Storage Service (Amazon S3) bucket using the Amazon S3 console. However, I'm getting a "403 Forbidden" error.Short descriptionThe "403 Forbidden" error can occur due to the following reasons:Permissions are missing for s3:PutObject to add an object or s3:PutObjectAcl to modify the object's ACL.You don't have permission to use an AWS Key Management Service (AWS KMS) key.There is an explicit deny statement in the bucket policy.Amazon S3 Block Public Access is turned on.An AWS Organizations service control policy doesn't allow access to Amazon S3.ResolutionCheck your permissions for s3:PutObject or s3:PutObjectAclFollow these steps:Open the AWS Identity and Access Management (IAM) console.Navigate to the identity that's used to access the bucket, such as User or Role. Choose the name of the identity.Choose the Permissions tab, and expand each policy to view its JSON policy document.In the JSON policy documents, search for policies related to Amazon S3 access. Then, confirm that you have permissions for the s3:PutObject or s3:PutObjectAcl actions on the bucket.Ask for permission to use an AWS KMS keyTo upload objects that are encrypted with AWS KMS, you must have permissions to perform AWS KMS actions. You must be able to perform kms:Decrypt and kms:GenerateDataKey actions at minimum.Important: If you're uploading an object to a bucket in a different account, you can't use the AWS managed key aws/S3 as the default encryption key. This is because the AWS managed key policy can't be modified.Check the bucket policy for explicit deny statementsFollow these steps:Open the Amazon S3 console.From the list of buckets, open the bucket you want to upload files to.Choose the Permissions tab.Choose Bucket policy.Search for statements with "Effect": "Deny".Review these statements and make sure that they don't prevent uploads to the bucket.Important: Before saving a bucket policy with "Effect": "Deny", make sure to check for any statements that deny access to the S3 bucket. If you get locked out, see I accidentally denied everyone access to my Amazon S3 bucket. How do I regain access?The following example statement explicitly denies access to s3:PutObject on example-bucket unless the upload request encrypts the object with the AWS KMS key whose ARN matches arn:aws:kms:us-east-1:111122223333:key:{ "Version": "2012-10-17", "Statement": [ { "Sid": "ExampleStmt", "Action": [ "s3:PutObject" ], "Effect": "Deny", "Resource": "arn:aws:s3:::example-bucket/*", "Condition": { "StringNotLikeIfExists": { "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:111122223333:key/*" } }, "Principal": "*" } ]}Remove the public ACL from your request or disable S3 Block Public AccessIf you're passing a public ACL, such as public-read or authenticated-read in your PUT request, it makes the S3 object public. If the S3 Block Public Access feature is turned on for this account or bucket, then your upload request is denied.Note: It's not a best practice to make an object public unless your use case requires it.To successfully upload the object as a publicly available object, modify the S3 Block Access feature as required. 
If your use case doesn't require making the object publicly available, then remove the mentioned public ACL from the PUT request.For configuring the S3 Block Public Access settings at the account level, see Configuring block public access settings for your account. For configuring settings at the bucket level, see Configuring block public access settings for your S3 buckets. Also, see The meaning of "public".Review service control policies for AWS OrganizationsIf you use AWS Organizations, check if the service control policies explicitly deny S3 actions. If so, modify the policy as desired.Related informationHow do I troubleshoot 403 Access Denied errors from Amazon S3?How do I troubleshoot the error "You don't have permissions to edit bucket policy" when I try to modify a bucket policy in Amazon S3?Follow" | https://repost.aws/knowledge-center/s3-403-forbidden-error |
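The following boto3 sketch reproduces two of the checks above from code: it inspects the bucket's Block Public Access configuration and attempts an upload that satisfies a deny-unless-KMS bucket policy like the example statement. The bucket name, object key, and KMS key ARN are placeholders.

```python
# Sketch: check Block Public Access, then upload with SSE-KMS and surface 403s.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "example-bucket"
KMS_KEY = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

try:
    print(s3.get_public_access_block(Bucket=BUCKET)["PublicAccessBlockConfiguration"])
except ClientError as err:
    print("No bucket-level block configuration or missing permission:", err)

try:
    s3.put_object(
        Bucket=BUCKET,
        Key="reports/example.txt",
        Body=b"hello",
        ServerSideEncryption="aws:kms",  # matches the example bucket policy condition
        SSEKMSKeyId=KMS_KEY,
    )
    print("Upload succeeded")
except ClientError as err:
    if err.response["Error"]["Code"] == "AccessDenied":
        print("403 Forbidden - check IAM permissions, bucket policy, and KMS key access")
    else:
        raise
```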
How can I verify that authenticated encryption with associated data encryption is used when calling AWS KMS APIs? | "How can I verify that authenticated encryption with associated data encryption is used when calling AWS Key Management Service (AWS KMS) Encrypt, Decrypt, and ReEncrypt APIs?" | "How can I verify that authenticated encryption with associated data encryption is used when calling AWS Key Management Service (AWS KMS) Encrypt, Decrypt, and ReEncrypt APIs?Short descriptionAWS KMS provides an encryption context that you can use to verify the authenticity of AWS KMS API calls, and the integrity of the ciphertext returned by the AWS Decrypt API.ResolutionTo verify the integrity of data encrypted with the AWS KMS APIs, you pass a set of key-value pairs as an encryption context during AWS KMS encryption, and again when you call the Decrypt or ReEncrypt APIs. If the encryption context that you pass to the Decrypt API is identical to the encryption context that you pass to the Encrypt or ReEncrypt APIs, the integrity of the ciphertext returned is protected.To learn more about using encryption context to protect the integrity of encrypted data, see the AWS Security Blog How to protect the integrity of your encrypted data by using AWS KMS and EncryptionContext.Follow" | https://repost.aws/knowledge-center/kms-encryption-context |
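A minimal boto3 sketch of the pattern described above: pass the same EncryptionContext key-value pairs to Encrypt and Decrypt, and observe that decryption with a different context is rejected. The key alias and context values are placeholder assumptions.

```python
# Sketch: encryption context must match between Encrypt and Decrypt.
import boto3
from botocore.exceptions import ClientError

kms = boto3.client("kms")
KEY_ID = "alias/example-key"  # placeholder
CONTEXT = {"department": "finance", "purpose": "payroll"}

ciphertext = kms.encrypt(
    KeyId=KEY_ID, Plaintext=b"secret", EncryptionContext=CONTEXT
)["CiphertextBlob"]

# Decrypt succeeds only when the identical context is supplied again.
plaintext = kms.decrypt(CiphertextBlob=ciphertext, EncryptionContext=CONTEXT)["Plaintext"]
assert plaintext == b"secret"

try:
    kms.decrypt(CiphertextBlob=ciphertext, EncryptionContext={"department": "hr"})
except ClientError as err:
    print("Decryption rejected:", err.response["Error"]["Code"])  # expected: InvalidCiphertextException
```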
How can I enable Elastic IP addresses on my AWS Transfer Family SFTP-enabled server endpoint with a custom listener port? | "I want to make my AWS Transfer Family SFTP-enabled server accessible using Elastic IP addresses, and the listener port can't be port 22." | "I want to make my AWS Transfer Family SFTP-enabled server accessible using Elastic IP addresses, and the listener port can't be port 22.ResolutionIf you can use port 22 as your listener port, then create an internet-facing endpoint for your server.However, if you must change the listener port to a port other than port 22 (for migration), then follow these steps:Create an Amazon Virtual Private Cloud (Amazon VPC) and allocate IP addressesCreate an Amazon VPC in the same AWS Region as your server.Create subnets in your VPC within Availability Zones that you want to use your server in.Note: One AWS Transfer Family server can support up to three Availability Zones.Allocate up to three Elastic IP addresses in the same Region as your server. Or, you can choose to bring your own IP address range (BYOIP).Note: The number of Elastic IP addresses must match the number of Availability Zones that you use with your server endpoints.Create an AWS Transfer Family SFTP-enabled server with an internal VPC endpoint typeFollow the steps to create a server endpoint that's accessible only from within your VPC.After you create the server, view the server's details from the AWS Transfer Family console. Under Endpoint configuration, note the Private IPv4 Addresses. You need these IP addresses for the steps to create a Network Load Balancer.Create a Network Load Balancer and define the VPC endpoint of the server as the load balancer's targetOpen the Amazon Elastic Compute Cloud (Amazon EC2) console.From the navigation pane, choose Load Balancers.Choose Create Load Balancer.Under Network Load Balancer, choose Create.For Step 1: Configure Load Balancer, enter the following:For Name, enter a name for the load balancer.For Scheme, select internet-facing.For Listeners, keep Load Balancer Protocol as TCP. Then, change the associated Load Balancer Port to your custom listener port.For VPC, select the Amazon VPC that you created.For Availability Zones, select the Availability Zones associated with the public subnets that are available in the same VPC you use with your server endpoints.For the IPv4 address of each subnet, select one of the Elastic IP addresses that you allocated.Choose Next: Configure Security Settings.Choose Next: Configure Routing.For Step 3: Configure Routing, enter the following:For Target group, select New target group.For Name, enter a name for the target group.For Target type, select IP.For Protocol, select TCP.For Port, enter 22.Note: The AWS Transfer Family servers support traffic only over port 22. The load balancer must communicate to the server over port 22.Under Health checks, for Protocol, select TCP.Choose Next: Register Targets.For Step 4: Register Targets, enter the following:For Network, confirm that the Amazon VPC you want to use is selected.For IP, enter the private IPv4 addresses of your server's endpoints. You copied these IP addresses after creating the server.Choose Add to list.Repeat steps 10 and 11 until you've entered the private IP addresses for all of your server's endpoints.Choose Next: Review.Choose Create.After you set up the server and load balancer, clients communicate to the load balancer over the custom port listener. 
Then, the load balancer communicates to the server over port 22.Test access to the server from an Elastic IP addressConnect to the server over the custom port using an Elastic IP address or the DNS name of the Network Load Balancer. For example, the following OpenSSH command connects to the server using an Elastic IP address and a custom port:Note: Replace [port] with your custom port. Then, replace 192.0.2.3 with an Elastic IP address that you allocated.sftp -i sftpuserkey -P [port] sftpuser@192.0.2.3Important: Manage access to your server from client IP addresses using the network access control lists (network ACLs) for the subnets configured on the load balancer. Network ACL permissions are set at the subnet level, so the rules apply to all resources using the subnet. You can't control access from client IP addresses using security groups because the load balancer's target type is set to IP instead of Instance. This means that the load balancer doesn't preserve source IP addresses. If the Network Load Balancer's health checks fail, this means the load balancer can't connect to the server endpoint. To troubleshoot this, check the following:Confirm that the server endpoint's associated security group allows inbound connections from the subnets configured on the load balancer. The load balancer must be able to connect to the server endpoint over port 22.Confirm that the server's State is Online.Related informationLift and shift migration of SFTP servers to AWSFollow" | https://repost.aws/knowledge-center/sftp-enable-elastic-ip-custom-port |
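The console steps above can also be scripted. The following boto3 sketch creates the internet-facing Network Load Balancer with Elastic IP subnet mappings, a TCP:22 IP target group pointing at the server endpoint's private IPs, and a listener on a custom port. Every ID, IP address, and the custom port value are placeholder assumptions for illustration.

```python
# Sketch: NLB in front of a Transfer Family VPC endpoint with a custom listener port.
import boto3

elbv2 = boto3.client("elbv2")
CUSTOM_PORT = 2222
SUBNET_EIP = [("subnet-aaaa1111", "eipalloc-0abc"), ("subnet-bbbb2222", "eipalloc-0def")]
SERVER_ENDPOINT_IPS = ["10.0.1.25", "10.0.2.31"]  # private IPv4 addresses of the server endpoint

tg = elbv2.create_target_group(
    Name="sftp-tcp22", Protocol="TCP", Port=22, VpcId="vpc-0123456789abcdef0",
    TargetType="ip", HealthCheckProtocol="TCP",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": ip, "Port": 22} for ip in SERVER_ENDPOINT_IPS],
)

nlb = elbv2.create_load_balancer(
    Name="sftp-nlb", Type="network", Scheme="internet-facing",
    SubnetMappings=[{"SubnetId": s, "AllocationId": a} for s, a in SUBNET_EIP],
)["LoadBalancers"][0]

elbv2.create_listener(
    LoadBalancerArn=nlb["LoadBalancerArn"], Protocol="TCP", Port=CUSTOM_PORT,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
print("Clients connect on port", CUSTOM_PORT, "and the NLB forwards to port 22")
```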
How do I configure geoproximity routing using the Route 53 console? | I want to configure geoproximity routing using the Amazon Route 53 console. | "I want to configure geoproximity routing using the Amazon Route 53 console.Short descriptionGeoproximity routing allows Amazon Route 53 to route traffic based on the geographic location of users and resources. Using an element called bias, you can optionally increase or decrease the size of a geographic area.Note: You must use Route 53 traffic flow to use a geoproximity routing policy.ResolutionCreate a traffic policy with geoproximity routingCreate a traffic policy.For Start point, choose the desired record type, such as A or MX.For Connect to, choose Geoproximity rule.Choose your Endpoint Location. If you choose Custom, you must enter the location’s latitude and longitude Coordinates. Otherwise, choose the Amazon Elastic Compute Cloud (Amazon EC2) Region where your endpoint is located, such as US East (N. Virginia).(Optional) For Bias, enter the desired bias value. Choose Show geoproximity map for a visual representation of the effects the bias that you're configuring.(Optional) Under Health checks, select or clear Evaluate target health. Then, select a health check to associate with the record.For Connect to, choose New endpoint.For Type, choose Value.For Value, enter the IP address of your endpoint.For each additional endpoint, choose Add another geoproximity location, and then repeat steps 4-9.Choose Create traffic policy.(Optional) Create a policy recordUsing a policy record, you can route traffic from the internet to the resources specified in your traffic policy. When you create a policy record, you specify the traffic policy to use and the hosted zone where it's created. For more information, see Creating policy records.(Optional) Test your geoproximity routing policyTo test the DNS response using the Route 53 console, see Checking DNS responses from Route 53.Important: Amazon Route 53 uses the edns-client-subnet extension of EDNS0 to improve the accuracy of the responses returned.For Amazon Route 53 to use EDNS0 when making routing decisions, the resolver used to perform the query must support EDNS0.If the resolver doesn't support EDNS0: The IP address of the resolver is used to make a routing decision.If the resolver supports EDNS0: A truncated version of the client IP address making the original request is passed to Amazon Route 53 and used to make a decision.When testing how Route 53 is responding to geoproximity queries, you must determine if the resolver supports EDNS0. For more information, see How can I determine if my public DNS resolver supports the EDNS Client Subnet extension?Follow" | https://repost.aws/knowledge-center/route-53-geoproximity-routing-console |
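For the optional testing step, the following boto3 sketch uses the TestDNSAnswer API with an EDNS0 client subnet to see which endpoint Route 53 would return for clients in different locations. The hosted zone ID, record name, and client IP addresses are placeholders.

```python
# Sketch: simulate geoproximity answers for clients in different subnets.
import boto3

route53 = boto3.client("route53")

def test_from(client_ip: str):
    resp = route53.test_dns_answer(
        HostedZoneId="Z0123456789EXAMPLE",
        RecordName="app.example.com",
        RecordType="A",
        EDNS0ClientSubnetIP=client_ip,
        EDNS0ClientSubnetMask="24",
    )
    return resp["RecordData"]

for ip in ("203.0.113.10", "198.51.100.10"):  # clients assumed to be in two different regions
    print(ip, "->", test_from(ip))
```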
How do I troubleshoot IRSA errors in Amazon EKS? | "When I use AWS Identity and Access Management (IAM) roles for service accounts (IRSA) with Amazon Elastic Kubernetes Service (Amazon EKS), I get errors." | "When I use AWS Identity and Access Management (IAM) roles for service accounts (IRSA) with Amazon Elastic Kubernetes Service (Amazon EKS), I get errors.Short descriptionTo troubleshoot issues with IRSA in Amazon EKS, complete one or more of the following actions based on your use case:Check the formatting of the IAM Amazon Resource Name (ARN).Check whether you have an IAM OpenID Connect (OIDC) provider for your AWS account.Verify the audience of the OIDC provider.Verify that you created the OIDC resource with a root certificate thumbprint.Check the configuration of your IAM role's trust policy.Verify that your pod identity webhook configuration exists and is valid.Verify that your pod identity webhook is injecting environment variables to your pods using IRSA.Verify that you're using supported AWS SDKs.ResolutionCheck the formatting of the IAM ARNIf you set your IAM ARN in the relative service account annotation with incorrect formatting, then you get the following error:An error occurred (ValidationError) when calling the AssumeRoleWithWebIdentityoperation: Request ARN is invalidHere's an example of an incorrect ARN format: eks.amazonaws.com/role-arn: arn:aws:iam::::1234567890:role/exampleThis ARN format is incorrect because it has an extra colon ( : ). To verify correct ARN format, see IAM ARNs.Check if you have an IAM OIDC provider for your AWS accountIf you didn't create an OIDC provider, then you get the following error:An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: No OpenIDConnect provider found in your account for https://oidc.eks.region.amazonaws.com/id/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxTo troubleshoot this, get the IAM OIDC provider URL:aws eks describe-cluster --name cluster name --query "cluster.identity.oidc.issuer" --output textNote: Replace cluster name with your cluster name.You get an output that's similar to the following example:https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041ETo list the IAM OIDC providers, run the following command:aws iam list-open-id-connect-providers | grep EXAMPLED539D4633E53DE1B716D3041ENote: Replace EXAMPLED539D4633E53DE1B716D3041E with the value that the previous command returned.If the OIDC provider doesn't exist, then use the following eksctl command to create one:eksctl utils associate-iam-oidc-provider --cluster cluster name --approveNote: Replace cluster name with your cluster name.You can also use the AWS Management Console to create an IAM OIDC provider for your cluster.Verify the audience of the IAM OIDC providerWhen you create an IAM OIDC provider, you must use sts.amazonaws.com as your audience. If the audience is incorrect, then you get the following error:An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: Incorrect token audienceTo check the audience of the IAM OIDC provider, run the following command:aws iam get-open-id-connect-provider --open-id-connect-provider-arn ARN-of-OIDC-providerNote: Replace ARN-of-OIDC-provider with the ARN of your OIDC provider.-or-Complete the following steps:1. Open the Amazon EKS console.2. Select the name of your cluster, and then choose the Configuration tab.3. In the Details section, note the value of the OpenID Connect provider URL.4. Open the IAM console.5. 
In the navigation pane, under Access Management, choose Identity Providers.6. Select the provider that matches the URL for your cluster.To change the audience, complete the following steps:1. Open the IAM console.2. In the navigation pane, under Access Management, choose Identity Providers.3. Select the provider that matches the URL for your cluster.4. Choose Actions, and then choose Add audience.5. Add sts.amazonaws.com.Verify that you created the IAM OIDC resource with a root certificate thumbprintIf you didn't use a root certificate thumbprint to create the OIDC provider, then you get the following error:An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: OpenIDConnect provider's HTTPS certificate doesn't match configured thumbprintNote: Non-root certificate thumbprints renew yearly, and root certificate thumbprints renew every decade. It's a best practice to use a root certificate thumbprint when you create an IAM OIDC.Suppose that you used one of the following to create your IAM OIDC:AWS Command Line Interface (AWS CLI)AWS Tools for PowerShellIAM API to create your IAM OIDCIn this case, you must manually obtain the thumbprint. If you created your IAM OIDC in the IAM console, then it's a best practice to manually obtain the thumbprint. With this thumbprint, you can verify that the console fetched the correct IAM OIDC.Obtain the root certificate thumbprint and its expiration date:echo | openssl s_client -servername oidc.eks.your-region-code.amazonaws.com -showcerts -connect oidc.eks.your-region-code.amazonaws.com:443 2>/dev/null | awk '/-----BEGIN CERTIFICATE-----/{cert=""} {cert=cert $0 "\n"} /-----END CERTIFICATE-----/{last_cert=cert} END{printf "%s", last_cert}' | openssl x509 -fingerprint -noout -dates | sed 's/://g' | awk -F= '{print tolower($2)}'Note: Replace your-region-code with the AWS Region that your cluster is located in.You receive an output that's similar to the following example:9e99a48a9960b14926bb7f3b02e22da2b0ab7280 sep 2 000000 2009 gmt jun 28 173916 2034 gmtIn this output, 9e99a48a9960b14926bb7f3b02e22da2b0ab7280 is the thumbprint, sep 2 000000 2009 gmt is the certificate start date, and jun 28 173916 2034 is the certificate expiration date.Check the configuration of your IAM role's trust policyIf the trust policy of the IAM role is misconfigured, then you get the following error:An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentityTo resolve this issue, make sure that you're using the correct IAM OIDC provider. If the IAM OIDC provider is correct, then use the IAM role configuration guide to check if the trust policy's conditions are correctly configured.Verify that your pod identity webhook configuration exists and is validThe pod identity webhook is responsible for injecting the necessary environment variables and projected volume. 
If you accidentally deleted or changed your webhook configuration, then IRSA stops working.To verify that your webhook configuration exists and is valid, run the following command:kubectl get mutatingwebhookconfiguration pod-identity-webhook -o yamlVerify that your pod identity webhook is injecting environment variables to your pods that use IRSARun one of the following commands:kubectl get pod <pod-name> -n <ns> -o yaml | grep aws-iam-token-or-kubectl get pod <pod-name> -n <ns> -o yaml | grep AWS_WEB_IDENTITY_TOKEN_FILEVerify that you're using supported AWS SDKsMake sure that you're using an AWS SDK version that supports assuming an IAM role through the OIDC web identity token file.Related informationWhy can't I use an IAM role for the service account in my Amazon EKS pod?How do I troubleshoot an OIDC provider and IRSA in Amazon EKS?Follow" | https://repost.aws/knowledge-center/eks-troubleshoot-irsa-errors |
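As a complement to the kubectl checks above, this short in-pod Python sketch verifies that the pod identity webhook injected the IRSA environment variables and that a supported SDK (boto3 here, assumed to be present in the container image) can actually assume the role.

```python
# Sketch: run inside the pod to confirm IRSA is wired up end to end.
import os
import boto3

for var in ("AWS_ROLE_ARN", "AWS_WEB_IDENTITY_TOKEN_FILE"):
    value = os.environ.get(var)
    print(f"{var} = {value}")
    if not value:
        raise SystemExit(f"{var} missing - check the pod identity webhook and service account annotation")

if not os.path.isfile(os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"]):
    raise SystemExit("Projected web identity token file is not mounted")

# A supported SDK version performs AssumeRoleWithWebIdentity automatically.
print(boto3.client("sts").get_caller_identity()["Arn"])
```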
How can I handle a high bounce rate with emails that I send using Amazon SES? | The bounce rate of my Amazon Simple Email Service (Amazon SES) account is higher than normal. It's putting my account at risk for an account review or a sending pause. How can I identify what's causing the increase in bounce rate and resolve it? | "The bounce rate of my Amazon Simple Email Service (Amazon SES) account is higher than normal. It's putting my account at risk for an account review or a sending pause. How can I identify what's causing the increase in bounce rate and resolve it?ResolutionUse a bounce and complaint monitoring systemTo avoid a high bounce rate or quickly address increasing bounces, it's a best practice to implement a bounce and complaint monitoring system. The following are mechanisms that you can use to monitor bounce and complaint sending activity and rates.Amazon SNSYou can use Amazon Simple Notification Service (Amazon SNS) to monitor bounces, complaints, and deliveries. You must configure Amazon SNS notifications on each identity that you intend to monitor. To view example notifications, go to Amazon SNS notification examples for Amazon SES.Event publishingTo use event publishing to monitor email sending, you must create a configuration set. Event publishing works only when you send the message using a configuration set. You can configure a default configuration set for each active sending identity, or specify your configuration set when you send the message.Also, set up an Amazon SNS event destination or an Amazon Kinesis Data Firehose event destination. Setting up an event destination allows you to get detailed information in JSON format. See example event records for Amazon SNS and Kinesis Data Firehose.Email feedback forwardingEmail feedback forwarding monitors only bounces and complaints. Email feedback forwarding is turned on by default, but you can turn it off if you're using an alternative monitoring system. The information that you receive from feedback email is limited compared with Amazon SNS notifications or event publishing.To get the most information about your bounces and complaints, it's a best practice to use Amazon SNS or event publishing as your monitoring system.Reputation metricsYou can use the Reputation metrics dashboard in the Amazon SES console to monitor bounce and complaint rates. Create reputation monitoring alarms using Amazon CloudWatch. It's a best practice to set up CloudWatch alarms and AWS Lambda to automatically pause email sending when the bounce or complaint rate exceeds a certain threshold. Pausing allows you to investigate the cause of the increased bounce or complaint rate. It's a best practice to maintain a bounce rate that's under 5% and a complaint rate that's under 0.1%.Identify what's causing the increased bounce rateTo stop the bounce rate from increasing, you must first identify the email addresses that are causing the bounces. Use the information from the Amazon SNS notifications or Amazon SES event data to find the email addresses.To identify what's causing the increased bounce rate, review the bounceType and diagnosticCode fields in the Amazon SNS bounce notifications or event data. For information on bounce events, see Bounce types.Stop sending email to the addresses that are increasing your bounce rateAfter identifying the email addresses that are resulting in bounces, stop sending messages to those addresses so that your bounce rate doesn't continue to increase. 
Be sure to remove the email addresses from your email recipient lists.Turn on the Amazon SES account-level suppression listWhen the account-level suppression list is activated, the email addresses that result in a hard bounce are added to the suppression list. Subsequent messages to these addresses aren't sent, resulting in a lower bounce rate.Resolve the root cause of the increased bounce rateThe following are some of the most common reasons for an increase or sudden spike in bounce rate:You send messages to an email list that contains recipients with invalid mailboxes. This can happen if you use an incorrect or out-of-date list of recipients. You must remove any invalid email addresses from the list before sending messages again. For more information on managing email lists, see Amazon SES best practices: Top 5 best practices for list management.Based on your reputation metrics, mailbox providers can block the IP address that you're sending email from. If there's a large number of subscriber complaints, then a mailbox provider might block your messages. Follow the guidance that you find on the mailbox provider's postmaster site to remove the block.Mailbox providers can use a third-party blocklist to filter email. If you continue to send messages to a mailbox after your messages are blocked, then your bounce rate can increase. You can contact the mailbox provider or blocklist provider to get information on why your messages are blocked. After you're removed from a blocklist, you must address the issues and change your sending habits accordingly. Be sure to review the policies for removing Amazon SES IP addresses from blocklists.Review the design of your sending applicationAfter you identify the root cause of the increase in bounces, review the design of your sending application. The design can impact your bounce metrics. For example, your application sends a confirmation email to a user when they sign up, and the user enters an invalid email address. The confirmation email then results in a hard bounce. Another example is setting up your application to retry sending emails whenever there's a failure. If there are issues on the recipient's mailbox, then you can continue to get bounces.For more best practices and actions that you can take for successful email sending, see Building and maintaining your lists. For more information on ways to keep a low bounce rate, see What can I do to minimize bounces?Automate how bounces are processedYou can also implement a solution to automate how bounces are processed. See the following examples:Handling bounces and complaintsMaintaining a healthy email database with AWS Lambda, Amazon SNS, and Amazon DynamoDBRelated informationBest practices for sending email using Amazon SESWhat is considered a soft bounce on Amazon SES, and how can I monitor soft bounces?Configuring notifications using the Amazon SES consoleUnderstand email delivery issuesFollow" | https://repost.aws/knowledge-center/ses-high-bounce-rate |
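As one way to automate bounce processing as suggested above, the following Lambda handler sketch parses an Amazon SNS bounce notification and adds permanently bouncing addresses to the account-level suppression list. It assumes the function is subscribed to the identity's bounce topic and has permission to call the SES v2 PutSuppressedDestination API; the field names follow the published SNS bounce notification format.

```python
# Sketch: suppress hard-bounced recipients from SNS bounce notifications.
import json
import boto3

sesv2 = boto3.client("sesv2")

def lambda_handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        bounce = message.get("bounce", {})
        if bounce.get("bounceType") != "Permanent":
            continue  # only hard bounces go on the suppression list
        for recipient in bounce.get("bouncedRecipients", []):
            address = recipient["emailAddress"]
            sesv2.put_suppressed_destination(EmailAddress=address, Reason="BOUNCE")
            print(f"Suppressed {address}: {recipient.get('diagnosticCode', 'no diagnostic code')}")
```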
How can I override the default error handling in Amazon Lex? | I want to modify the default error handling of my Amazon Lex bot. How can I do this? | "I want to modify the default error handling of my Amazon Lex bot. How can I do this?Short descriptionAmazon Lex offers default error handling in the form of predefined prompts. But by using the fallback intent, you can get a greater level of control over how your bot reacts to situations where user input isn't matched. You can use the fallback intent to manage the conversation flow, use business logic, or handover your bot conversations to a human agent. You can also design the fallback intent to trigger an AWS Lambda function, and provide responses.ResolutionConfigure fallback intent using the Amazon Lex V1 consoleNote: If you want to switch from the Amazon Lex V2 console to the Amazon Lex V1 console, from the navigation pane, choose Return to V1 console.Open the Amazon Lex V1 console, and then choose the bot that you want to configure fallback intent for.From the Intents section, choose the + sign.Search for AMAZON.Fallback in the existing intents.Enter a name for the built-in intent, and then create the intent.Optionally, you can add a Lambda function in the fulfillment code hook of the newly created fallback intent. This triggers the Lambda function when the fallback intent is fulfilled.Note: You can add a fallback intent by adding the built-in AMAZON.Fallback intent type to your bot using the console. You can also specify the intent using the PutBot operation or choose the intent from the list of built-in intents in the console. Configure fallback intent using the Amazon Lex V2 consoleOpen the Amazon Lex V2 console, and then choose the bot that you want to configure fallback intent for.From the Language section, under the specific language that your bot uses, choose intents.Choose Fallback intent.Optionally, enable a Lambda function for fulfillment using the advanced options for fulfillment. To use a specific Lambda function, attach the function to your bot alias. The same Lambda function is used for all intents in a language supported by the bot.Note: The built-in AMAZON.Fallback intent type is added to your bot automatically when you create a bot using the console. If you use the API, then specify the intent using the CreateBot operation.You can't add these items to a fallback intent:UtterancesSlotsConfirmation promptsRelated informationConfiguring fulfillment progress updatesUsing an AWS Lambda functionAMAZON.FallbackIntentFollow" | https://repost.aws/knowledge-center/lex-override-default-handling |
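If you attach a Lambda function to the fallback intent's fulfillment code hook, a handler along these lines can log the unmatched utterance and return a custom reply. This is a sketch based on my understanding of the Lex V2 Lambda input and output event shapes; confirm the exact fields against the current Lex V2 documentation, and note that the reply text is illustrative.

```python
# Sketch: fulfillment handler for the fallback intent (assumed Lex V2 event shape).
def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    user_text = event.get("inputTranscript", "")
    print(f"Fallback triggered for unmatched input: {user_text!r}")

    intent["state"] = "Fulfilled"
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": intent,
        },
        "messages": [{
            "contentType": "PlainText",
            "content": "Sorry, I didn't understand that. You can ask about orders or say 'agent' for help.",
        }],
    }
```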
How can I use and override reverse DNS rules with Route 53 Resolver? | How do I use and override auto-defined reverse DNS rules with Amazon Route 53 Resolver? | "How do I use and override auto-defined reverse DNS rules with Amazon Route 53 Resolver?ResolutionTo use Resolver rules:You must turn on the DNS resolution and DNS hostnames attributes of the virtual private cloud (VPC).DNS queries must be sent to the Amazon-provided DNS resolver of that VPC.After "DNSHostname" is turned on, Resolver automatically creates auto-defined system rules that define how queries for selected domains are resolved by default. To override an auto-defined rule, create a forwarding rule (Resolver rule) for the domain name. Reverse DNS name resolution with Resolver depends on auto-defined rules, Resolver rules, and private hosted zone configurations.The Amazon-provided DNS resolver evaluates the "most specific domain name" rule in the following priority order:Resolver rules – Rules that are manually configured for the domain name that the Resolver forwards to the target IP address.Rules for private hosted zones – For each private hosted zone that you associate with a VPC, Resolver creates a rule and associates it with the DNS resolver of the VPC. If you associate the private hosted zone with multiple VPCs, Resolver associates the rule with each VPC's DNS resolver.Auto-defined rules for reverse DNS – Resolver creates auto-defined rules for reverse DNS lookup and localhost-related domains when you set "enableDnsHostnames" for the associated VPC to "true."Rules apply to the CIDR block ranges of a VPC and all connected VPCs with DNS support enabled. Resolver creates the most generic rules possible given the CIDR block range.Example of how to override auto-defined rulesThe resources in this example are as follows:DNS query source VPC1 with CIDR 10.237.52.0/22DNSHostname attribute = EnabledDNSSupport attribute = EnabledConnected VPC2 (connected through a transit gateway or VPC peering with DNS support enabled) with CIDR 10.104.2.0/24VPC DNS resolver = Amazon-provided DNS resolverRoute 53 Resolver outbound endpoint with connectivity to 192.168.1.4/32 (DNS server located in another network)The following auto-defined system rules were then created by Resolver:Rules for private IP addressesRules for VPC1 CIDRRules for VPC2 CIDR (Peered VPC)10.in-addr.arpa.52.237.10.in-addr.arpa.2.104.10.in-addr.arpa.16.172.in-addr.arpa. through 31.172.in-addr.arpa53.237.10.in-addr.arpa.168.192.in-addr.arpa.54.237.10.in-addr.arpa.254.169.254.169.in-addr.arpa.55.237.10.in-addr.arpa.The DNS resolution requirements for the environment where queries are forwarded are:Priority numberCIDR for reverse DNS queryDestination DNS server110.237.53.0/24192.168.1.4/32 (another network)210.237.52.0/22 except 10.237.53.0/24Amazon-provided DNS resolver310.104.2.0/24Private hosted zone410.0.0.0/8 except all of the above192.168.1.4/32 (another network)The following steps achieve the preceding configuration:Note: The source performing the DNS query is VPC1 and all requests are sent to the Amazon-provided DNS IP address.Because the IP address range 10.237.53.0/24 is part of VPC1 CIDR 10.237.52.0/22, there are auto-defined system rules that apply to this IP address range. Create a Resolver rule for domain 53.237.10.in-addr.arpa to override the auto-defined system rule for IP addresses in the 10.237.53.0/24 range. 
Set the target IP address to 192.168.1.4/32.For IP addresses in the 10.237.52.0/22 except 10.237.53.0/24 range, auto-defined system rules are available. The Amazon-provided DNS resolver resolves these DNS queries.For IP addresses in the 10.104.2.0/24 range, there's already an auto-defined most specific rule available for VPC2 CIDR. However, because rules for private hosted zones have higher priority than auto-defined rules, a private hosted zone for domain name 2.104.10.in-addr.arpa must be created.Create a Resolver rule for domain name 10.in-addr.arpa. This rule sends reverse DNS queries for IP addresses in the 10.0.0.0/8 range (except IP addresses in the 10.237.52.0/22 and 10.104.2.0/24 ranges) to a DNS server in another network with an IP address of 192.168.1.4/32. The rule also overrides the auto-defined system rule.The following rules now meet the requirements and are considered by the Resolver based on priority:Custom Resolver rules: 53.237.10.in-addr.arpa. and 10.in-addr.arpa.Rule created for private hosted zone: 2.104.10.in-addr.arpa.The reverse DNS queries for IP addresses in the 10.0.0.0/8 range are resolved based on Resolver rule priority. The rule for the private hosted zone and the auto-defined rules based on the most specific domain name rule are as follows:Priority numberIP address range for reverse DNS queryDestination DNS server110.237.53.0/24By 192.168.1.4/32 using "most specific Resolver rule"210.237.52.0/22 except 10.237.53.0/24By Amazon-provided DNS resolver using default rules ("most specific system rule")310.104.2.0/24By Amazon-provided DNS resolver using default rules created for the private hosted zone410.0.0.0/8 except all of the aboveBy 192.168.1.4/32 using Resolver rule (There are no other more specific rules available. Resolver rule with domain name 10.in-addr.arpa. has higher priority than auto-defined rules for the same domain name.)You can also disable default reverse DNS rules with Route 53 Resolver. For more information, see Forwarding rules for reverse DNS queries in Resolver.Related informationResolving DNS queries between VPCs and your networkForwarding outbound DNS queries to your networkFollow" | https://repost.aws/knowledge-center/route-53-override-reverse-dns-rules |
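The override rule from step 1 of the example can also be created with boto3. In this sketch, the forwarding rule for 53.237.10.in-addr.arpa targets the on-premises DNS server and is associated with VPC1; the outbound endpoint ID and VPC ID are placeholders.

```python
# Sketch: create and associate a Resolver FORWARD rule that overrides the auto-defined reverse DNS rule.
import uuid
import boto3

resolver = boto3.client("route53resolver")

rule = resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    Name="override-reverse-10-237-53",
    RuleType="FORWARD",
    DomainName="53.237.10.in-addr.arpa",
    TargetIps=[{"Ip": "192.168.1.4", "Port": 53}],
    ResolverEndpointId="rslvr-out-0123456789abcdef0",  # existing outbound endpoint (placeholder)
)["ResolverRule"]

resolver.associate_resolver_rule(
    ResolverRuleId=rule["Id"],
    VPCId="vpc-0123456789abcdef0",  # VPC1 (placeholder)
    Name="vpc1-association",
)
```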
How do I use AWS AppSync to access private resources in my VPC? | "I want to access Amazon Virtual Private Cloud (Amazon VPC) resources from my AWS AppSync GraphQL API, but I don't know how to do that." | "I want to access Amazon Virtual Private Cloud (Amazon VPC) resources from my AWS AppSync GraphQL API, but I don't know how to do that.Short descriptionTo access Amazon VPC resources from an AWS AppSync GraphQL API, follow these steps:Create an AWS Lambda function to run inside the same Amazon VPC as the resources that you want to access.Create an AWS AppSync API, and then attach the Lambda function as the data source.Configure the AWS AppSync API schema.Attach either a Lambda resolver or a direct Lambda resolver to the target GraphQL field.Test the GraphQL field.Configuring a Lambda function to connect to private subnets in an Amazon VPC lets you access the private resources in the Amazon VPC. The results of the downstream private resources are then passed back to the AWS AppSync GraphQL API and returned to the client. Private resources include databases, containers, private APIs, private Amazon OpenSearch Service domains, or other private services behind an Application or Network Load Balancer.AWS AppSync invokes the Lambda API through Lambda public service endpoints. The invoked API is secured by the Signature Version 4 signing process. This allows AWS AppSync to assume an AWS Identity and Access Management (IAM) role with permissions to invoke the target Lambda function. For information on Lambda functions that are configured to access your Amazon VPC, see VPC networking for Lambda.Note:Only public endpoints are supported by AWS AppSync. As a workaround, use Lambda resolvers as an entry point to Amazon VPC resources.Adding a Lambda function between your Amazon VPC private resources and the AWS AppSync API introduces a new application layer with a minor latency overhead.If you use an Amazon VPC with Lambda functions, then you incur additional charges. For more information, see Lambda pricing.ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you're using the most recent AWS CLI version.Create an AWS Lambda function to run inside the same Amazon VPC as the resources you want to accessIn the following example scenario, an existing internal Application Load Balancer is called from a Python Lambda function. 
An id value is obtained from the arguments parameter that's passed in the event object that the AWS AppSync invocation sends to Lambda.import urllib.requestimport jsondef lambda_handler(event, context): URL = 'http://XXXXXXXXXXX.us-east-1.elb.amazonaws.com/' + event['arguments']['id'] #Open the JSON reponse, read it, convert it to JSON object and return it response = urllib.request.urlopen(URL) raw_json = response.read() loaded_json = json.loads(raw_json) return loaded_jsonNote: For direct Lambda resolvers, the function receives a payload that consists of the entire context object.Example response from the Lambda function:{ "id": "25", "name": "hamburger", "available": true}Create an AWS AppSync API, and attach the Lambda function as the data sourceUsing the AWS Management ConsoleTo create the GraphQL API in AWS AppSync:Open the AWS AppSync console.On the dashboard, choose Create API.On the Getting Started page, under Customize your API or import from Amazon DynamoDB, choose Build from scratch.Choose Start.In the API name field, enter a name for the API.Choose Create.Note: The preceding steps automatically create an API key for authorization that's valid for seven days. However, you can use any authorization type.To create the Lambda data source:Open the AWS AppSync console.Choose Data Sources.Choose Create data source.Under Create new Data Source, enter the data source name that you want to define.For Data source type, select Amazon Lambda function.For Region, select the AWS Region that contains the Lambda function.For Function ARN, select the function that you created.Provide an existing role to allow AWS AppSync to manage the Lambda function. You can also let the wizard create one for you.Choose Create.Using the AWS Command Line Interface (AWS CLI)1. Create the GraphQL API using API_KEY as the default authentication type for testing:$ aws appsync create-graphql-api --name api-name --authentication-type API_KEYNote: Replace api name with your API's name.2. Create an API key:$ aws appsync create-api-key --api-id apiIdNote: Replace apiId with you API's ID.3. Create the IAM role that's used by the data source, and then attach the trust policy that allows AWS AppSync to assume the role:$ aws iam create-role --role-name Test-Role-for-AppSync --assume-role-policy-document file://trustpolicy.jsonTrust policy JSON:{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "appsync.amazonaws.com" }, "Action": "sts:AssumeRole" } ]}4. Embed the permissions policy to the role:$ aws iam put-role-policy --role-name Test-Role-for-AppSync --policy-name Permissions-Policy-For-AppSync --policy-document file://permissionspolicy.jsonPermissions policy JSON:{ "Version": "2012-10-17", "Statement": [ { "Sid": "AllowFunctionInvocation", "Effect": "Allow", "Action": "lambda:InvokeFunction", "Resource": [ "lambda-ARN" ] } ]}Note: Replace lambda-ARN with your Lambda ARN.5. Create the data source, and indicate the ARN of the IAM role and the Lambda function:$ aws appsync create-data-source --api-id apiId \--name LambdaDataSourceName \--type AWS_LAMBDA \--service-role-arn IAM-role-ARN \--lambda-config lambdaFunctionArn=Lambda-function-ARNNote: Replace LambdaDataSourceName with the name of your data source, apiId with your API's ID, IAM-role-ARN with your IAM role's ARN, and Lambda-function-ARN with your Lambda function's ARN.Configure the AWS AppSync API schemaUsing the AWS Management ConsoleConfigure the GraphQL schema definition to access the Lambda function response:1. 
Open the AWS AppSync console.2. On the left panel, choose Schema.3. Copy and paste the provided schema into the editor.The following example schema has a response query that returns data with the Product type structure:type Product { id: ID! name: String available: Boolean}type Query { response(id: ID!): Product}schema { query: Query}4. Choose Save Schema.Using the AWS CLI1. Save the preceding GraphQL schema as schema.graphql.2. Create the schema in AWS AppSync:$ aws appsync start-schema-creation --api-id "apiId" --definition fileb://schema.graphqlNote: Replace apiId with your API's ID.Attach either a Lambda resolver or a direct Lambda resolver to a GraphQL fieldUsing the AWS Management ConsoleAttach the resolver:Open the AWS AppSync console.On the Schema page of your API, under Resolvers, scroll to the response query. Or, filter by query in the resolver types.Next to the Response field, choose Attach.On the Create new Resolver page, for Data source name, select the name of the Lambda data source. Note: Lambda direct integration is used for the example scenario, so configuring the mapping templates isn't necessary.Choose Save Resolvers.Using the AWS CLICreate the direct Lambda resolver, and specify the data source name from the preceding steps:$ aws appsync create-resolver \--field-name response \--type-name Query \--api-id "apiId" \--data-source-name LambdaDataSourceNameNote: Replace apiId with your API's ID and LambdaDataSourceName with the name of the created data source.Test the GraphQL fieldTo test the GraphQL field:1. Open the AWS AppSync console.2. In the left navigation pane, choose Queries.3. In the Query editor, design your GraphQL query.Example GraphQL query:query MyQuery { response(id: "1") { available name id }}The preceding query gets the product ID 1 from the Application Load Balancer.4. To run the test query, choose Play.Example response from the API:{ "data": { "response": { "available": false, "id": "1", "name": "pizza" } }}Using the terminalRun the query by calling the POST method. The following example uses the curl command:$ curl --request POST 'https://<GraphQL-API-endpoint>/graphql' \--header 'Content-Type: application/graphql' \--header 'x-api-key: api-key' \--data-raw '{"query":"query MyQuery {\n response(id: \"1\") {\n available\n id\n name\n }\n}\n","variables":{}}'Note: Replace api-key with your API key.Related informationSimplify access to multiple microservices with AWS AppSync and AWS AmplifyBuilding a presence API using AWS AppSync, AWS Lambda, Amazon ElastiCache and Amazon EventBridgeSecurity best practices for AWS AppSyncProvisioned concurrency for Lambda functionsFollow" | https://repost.aws/knowledge-center/appsync-access-private-resources-in-vpc |
How do I use the Oracle Instant Client to run a Data Pump import or export for my Amazon RDS for Oracle DB instance? | I want to use the impdp and expdp utilities to perform an export and import into my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance. | "I want to use the impdp and expdp utilities to perform an export and import into my Amazon Relational Database Service (Amazon RDS) for Oracle DB instance.Short descriptionThere are several ways to perform an export or import into an Amazon RDS for Oracle DB instance.After setting up the environment, you can:Import tables from a source Oracle RDS instance to a target Oracle RDS instance.Export data from an Oracle RDS instance and create a dump file locally on an Amazon Elastic Compute Cloud (EC2) instance or remote host.Export data from an Oracle RDS instance and store the dump file on the RDS host.Import a dump file located on an RDS host.Transfer dump files between your RDS for Oracle DB instance and an Amazon Simple Storage Service (Amazon S3) bucket using the S3 Integration option.ResolutionTo deliver a managed service experience, host-level access to make use of the impdp and expdp utilities on the RDS host isn't allowed. An alternate option is to use the Data Pump API (DBMS_DATAPUMP) to perform the imports or exports. However, you can perform this task by using the Data Pump utilities on a remote host.Oracle Instant Client is a lightweight client that you can install on your computer or on an Amazon EC2 instance. Oracle Instant Client includes the impdp and expdp utilities that you can use to perform the export and import operations from the command line.PrerequisitesDo the following before using the Oracle Instant Client:Review Doc ID 553337.1 to check whether the binary that you're downloading is compatible with the source and target versions. Exporting from a client with an equal or a later version is usually supported. Importing using a client version that's the same as the target Amazon RDS major version is supported. For example, if the version of the source instance is 12.2 and the version of the target instance is 19c, then you can install the latest 19c version of the Oracle Instant Client.To use Data Pump, install the Tools package on top of the Basic package. To install the packages, see the Oracle Instant Client documentation.Make sure that the Daylight Saving Time (DST) version of the target RDS instance is equal to or later than that of the source instance. Otherwise, you get the following error while running the import: ORA-39405. Use the following query to check the current DST version of your instance. To update the DST version to the latest available version in an Oracle RDS instance, use the TIMEZONE_FILE_AUTOUPGRADE option.SELECT * FROM V$TIMEZONE_FILE;To test the Data Pump Import or Export from a database link using an Oracle Instant Client, do the following:1. Create a test Amazon EC2 instance using the Amazon Linux 2 operating system.2. Download the Basic (RPM) package, Tools (RPM) package, and the SQL*Plus (RPM) package. In this article, the following RPM's are the latest available downloads:oracle-instantclient19.16-basic-19.16.0.0.0-1.x86_64.rpmoracle-instantclient19.16-tools-19.16.0.0.0-1.x86_64.rpmoracle-instantclient19.16-sqlplus-19.16.0.0.0-1.x86_64.rpm3. Transfer the binaries to the EC2 instance. For more information, see Transfer files to Linux instances using an SCP client.4. Follow the instructions in the Oracle documentation for Installing Oracle Instant Client Using RPM. 
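On Amazon Linux 2, the installation typically comes down to installing the Basic and Tools packages that you transferred in step 3, with commands similar to the following. The file names here match the 19.16 packages listed earlier; adjust them if you downloaded different versions:
sudo yum install oracle-instantclient19.16-basic-19.16.0.0.0-1.x86_64.rpm
sudo yum install oracle-instantclient19.16-tools-19.16.0.0.0-1.x86_64.rpm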
This process installs the binaries in the default location /usr/lib/oracle/example-client-version/client64. For example, if you download the binaries for version 19.16, then the default binary location for the installation is /usr/lib/oracle/19.16/client64/bin.5. Install the SQL*Plus Package (RPM) package. SQL*Plus is used to test the connectivity between the EC2 instance and RDS instance.Example:sudo yum install oracle-instantclient19.16-sqlplus-19.16.0.0.0-1.x86_64.rpm6. Set or update the following environmental variables, as seen in this example:export PATH=$PATH:/usr/lib/oracle/19.16/client64/binexport LD_LIBRARY_PATH=/usr/lib/oracle/19.16/client64/lib7. Create your configuration files, such as tnsnames.ora and sqlnet.ora, in the following location: /usr/lib/oracle/ example-client-version/client64/lib/network/admin. In this example, the location will be: /usr/lib/oracle/19.16/client64/lib/network/admin.Setting up the environment1. Add the required TNS entries for the Data Pump Import or Export to the tnsnames.ora file.Example of an entry in the tnsnames.ora file:target = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP) (HOST = xxxx.rxrxrmwt1x471wi.eu-central-1.rds.amazonaws.com) (PORT = 1521)))(CONNECT_DATA = (SID = orcl)))For more information, see Configuring SQL*Plus to use SSL with an Oracle DB instance.Update the inbound rules for the security group of the source and target RDS instances to allow connections from the EC2 instance.Create test tables in the source RDS instance to perform the export by running queries similar to the following:CREATE TABLE TEST1 AS SELECT * FROM DBA_TABLES;CREATE TABLE TEST2 AS SELECT * FROM DBA_OBJECTS;CREATE TABLE TEST3 AS SELECT * FROM DBA_DATA_FILES;Import tables from a source Oracle RDS instance to a target Oracle RDS instanceTo import the tables from a source Oracle RDS instance into a target Oracle RDS instance, do the following:1. Run a query similar to the example below to create a database link between the source and target databases. This is used with the network_link parameter:CREATE DATABASE LINK sample_conn CONNECT TO example-username IDENTIFIED BY example-password USING '(DESCRIPTION = (ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = example-hostname)(PORT = example-port)))(CONNECT_DATA =(SERVICE_NAME = example-service-name)))';The database link connecting the target instance to the source instance has inbound rules that allow connections of the target instance.2. Complete the prerequisites and setup outlined in this article before running the impdp command.3. Log in to the EC2 instance that contains the Oracle instant client.4. To import data from the source instance to the target instance, run a command similar to the following:impdp admin@target directory=data_pump_dir logfile=imp_test_tables_using_nw_link.log tables=admin.test1,admin.test2,admin.test3 network_link=sample_connExample output:Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - ProductionStarting "ADMIN"."SYS_IMPORT_TABLE_01": admin/********@target directory=data_pump_dir logfile=imp_test_tables_using_nw_link.log tables=admin.test1,admin.test2,admin.test3 network_link=sample_connEstimate in progress using BLOCKS method...Processing object type TABLE_EXPORT/TABLE/TABLE_DATATotal estimation using BLOCKS method: 3.625 MBProcessing object type TABLE_EXPORT/TABLE/TABLE. . imported "ADMIN"."TEST2" 20634 rows. . imported "ADMIN"."TEST1" 1537 rows. . 
imported "ADMIN"."TEST3" 6 rowsProcessing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICSProcessing object type TABLE_EXPORT/TABLE/STATISTICS/MARKERJob "ADMIN"."SYS_IMPORT_TABLE_01" successfully completed at Wed Oct 14 23:57:28 2020 elapsed 0 00:01:06Export data from an Oracle RDS instance and create a dump file locally on a remote hostTo export the data from an Oracle RDS instance and create a dump file locally, do the following:Install an Oracle database on an EC2 instance or remote host. In the following example, Oracle XE is installed on a Windows EC2 instance. For more information on Oracle XE, see Oracle Database XE Quick Start.Update the inbound rules for the security group of the source RDS instances to allow connections from the EC2 instance.1. Log in to the XE database with an Oracle Client, such as SQL*Plus. Then, create a directory on the Oracle XE database. This directory will reference the directory where you want to create the dump file on the EC2 instance. Run a query similar to the following:create directory exp_dir as 'C:\TEMP\';2. On the XE database, create a database link to your source RDS database using a command similar to this example:CREATE DATABASE LINK exp_rds CONNECT TO admin identified by example_password USING '(DESCRIPTION = (ADDRESS_LIST =(ADDRESS = (PROTOCOL = TCP)(HOST = example-hostname)(PORT=example-port)))(CONNECT_DATA =(SERVICE_NAME = example-service-name)))';3. Test the database link similar to the following:select sysdate from dual@exp_rds;4. To create the dump file on the EC2 instance, run a command similar to the following:expdp system network_link=exp_rds directory=exp_dir dumpfile=table_dump.dmp logfile=expdp_table_dump.log tables=admin.test1,admin.test2,admin.test3Example output:Connected to: Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database. Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** network_link=exp_rds directory=exp_dir dumpfile=table_dump.dmp logfile=expdp_table_dump.log tables=admin.test1,admin.test2,admin.test3 Processing object type TABLE_EXPORT/TABLE/TABLE_DATA Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKERProcessing object type TABLE_EXPORT/TABLE/TABLE . . exported "ADMIN"."TEST2" 2.713 MB 23814 rows. . exported "ADMIN"."TEST1" 677.1 KB 1814 rows. . exported "ADMIN"."TEST3" 15.98 KB 5 rows Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded ****************************************************************************** Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is: C:\TEMP\TABLE_DUMP.DMPJob "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Wed Aug 24 18:15:25 2022 elapsed 0 00:00:18Export data from an Oracle RDS instance and store the dump file on the RDS hostTo export data from an Oracle RDS instance and store the dump file on the RDS host, do the following:1. Complete the prerequisites and setup outlined in this article before running the expdp command.2. Log in to the EC2 instance that contains the Oracle instant client.3. 
Create a dump file on the RDS instance by running a command similar to the following:expdp admin@target dumpfile=table_dump.dmp logfile=expdp_table_dump.log tables=admin.test1,admin.test2,admin.test3Example output:Export: Release 19.0.0.0.0 - Production on Wed Aug 24 16:18:58 2022Version 19.16.0.0.0Copyright (c) 1982, 2022, Oracle and/or its affiliates. All rights reserved.Password:Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - ProductionStarting "ADMIN"."SYS_EXPORT_TABLE_01": admin/********@target dumpfile=table_dump.dmp logfile=expdp_table_dump.log tables=admin.test1,admin.test2,admin.test3Processing object type TABLE_EXPORT/TABLE/TABLE_DATAProcessing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICSProcessing object type TABLE_EXPORT/TABLE/STATISTICS/MARKERProcessing object type TABLE_EXPORT/TABLE/TABLE. . exported "ADMIN"."TEST2" 2.713 MB 23814 rows. . exported "ADMIN"."TEST1" 677.1 KB 1814 rows. . exported "ADMIN"."TEST3" 15.98 KB 5 rowsMaster table "ADMIN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded******************************************************************************Dump file set for ADMIN.SYS_EXPORT_TABLE_01 is: /rdsdbdata/datapump/table_dump.dmpJob "ADMIN"."SYS_EXPORT_TABLE_01" successfully completed at Wed Aug 24 16:19:20 2022 elapsed 0 00:00:15Import dump file located on the RDS hostTo import a dump file that is stored on the RDS host, do the following:Note: In this example, the data exists in the DATA_PUMP_DIR on the RDS host.1. Complete the prerequisites and setup outlined in this article before running the impdp command.2. Log in to the EC2 instance that contains the instant client.3. Run a command similar to the following on the EC2 instance to import the dump file located on the RDS host.Note: In this example, the tables are truncated before the data is imported.impdp admin@target directory=DATA_PUMP_DIR dumpfile=table_dump.dmp logfile=impdp_table_dump.log tables=admin.test1,admin.test2,admin.test3 table_exists_action=truncateExample output:import: Release 19.0.0.0.0 - Production on Thu Sep 8 13:24:44 2022Version 19.16.0.0.0Copyright (c) 1982, 2022, Oracle and/or its affiliates. All rights reserved.Password:Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - ProductionMaster table "ADMIN"."SYS_IMPORT_TABLE_01" successfully loaded/unloadedStarting "ADMIN"."SYS_IMPORT_TABLE_01": admin/********@target directory=DATA_PUMP_DIR dumpfile=table_dump.dmp logfile=impdp_table_dump.log tables=admin.test1,admin.test2,admin.test3 table_exists_action=truncateProcessing object type TABLE_EXPORT/TABLE/TABLETable "ADMIN"."TEST2" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncateTable "ADMIN"."TEST3" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncateTable "ADMIN"."TEST1" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncateProcessing object type TABLE_EXPORT/TABLE/TABLE_DATA. . imported "ADMIN"."TEST2" 2.749 MB 24059 rows. . imported "ADMIN"."TEST1" 677.2 KB 1814 rows. . 
imported "ADMIN"."TEST3" 15.98 KB 5 rowsProcessing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICSProcessing object type TABLE_EXPORT/TABLE/STATISTICS/MARKERJob "ADMIN"."SYS_IMPORT_TABLE_01" successfully completed at Thu Sep 8 13:24:54 2022 elapsed 0 00:00:06Transferring dump files between your RDS for Oracle DB instance and an Amazon S3 bucketTo transfer dump files between an RDS Oracle DB instance and an Amazon S3 bucket, you can use the S3 Integration option. For more information, see Transferring files between Amazon RDS for Oracle and an Amazon S3 bucket.Related informationOverview of Oracle Data PumpDBMS_DATAPUMPFollow" | https://repost.aws/knowledge-center/rds-oracle-instant-client-datapump |
Why is my Amazon EKS pod stuck in the ContainerCreating state with the error "failed to create pod sandbox"? | My Amazon Elastic Kubernetes Service (Amazon EKS) pod is stuck in the ContainerCreating state with the error "failed to create pod sandbox". | "My Amazon Elastic Kubernetes Service (Amazon EKS) pod is stuck in the ContainerCreating state with the error "failed to create pod sandbox".ResolutionYour Amazon EKS pods might be stuck in the ContainerCreating state with a network connectivity error for several reasons. Use the following troubleshooting steps based on the error message that you get.Error response from daemon: failed to start shim: fork/exec /usr/bin/containerd-shim: resource temporarily unavailable: unknownThis error occurs because of an operating system limitation that's caused by the defined kernel settings for maximum PID or maximum number of files.Run the following command to get information about your pod:$ kubectl describe pod example_podExample output:kubelet, ip-xx-xx-xx-xx.xx-xxxxx-x.compute.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "example_pod": Error response from daemon: failed to start shim: fork/exec /usr/bin/containerd-shim: resource temporarily unavailable: unknownTo temporarily resolve the issue, restart the node.To troubleshoot the issue, do the following:Gather the node logs.Review the Docker logs for the error "dockerd[4597]: runtime/cgo: pthread_create failed: Resource temporarily unavailable".Review the Kubelet log for the following errors:"kubelet[5267]: runtime: failed to create new OS thread (have 2 already; errno=11)""kubelet[5267]: runtime: may need to increase max user processes (ulimit -u)".Identify the zombie processes by running the ps command. All the processes listed with the Z state in the output are the zombie processes.Network plugin cni failed to set up pod network: add cmd: failed to assign an IP address to containerThis error indicates that the Container Network Interface (CNI) can't assign an IP address for the newly provisioned pod.The following are reasons why the CNI fails to provide an IP address to the newly created pod:The instance used the maximum allowed elastic network interfaces and IP addresses.The Amazon Virtual Private Cloud (Amazon VPC) subnets have an available IP address count of zero.The following is an example of network interface IP address exhaustion for the t3.medium instance type: Maximum network interfaces: 3, Private IPv4 addresses per interface: 6, IPv6 addresses per interface: 6.In this example, the instance t3.medium has a maximum of 3 network interfaces, and each network interface has a maximum of 6 IP addresses. The first IP address is used for the node and isn't assignable. 
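To confirm the network interface and IP address limits for your own instance type, you can run a command similar to the following (t3.medium is used here only as an example):
$ aws ec2 describe-instance-types --instance-types t3.medium \
--query "InstanceTypes[].NetworkInfo.[MaximumNetworkInterfaces,Ipv4AddressesPerInterface]" --output text
For a t3.medium, that works out to 3 network interfaces with 6 IPv4 addresses each, or 18 addresses in total.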
This leaves 17 IP addresses that the network interface can allocate.The Local IP Address Management daemon (ipamD) logs show the following message when the network interface runs out of IP addresses:"ipamd/ipamd.go:1285","msg":"Total number of interfaces found: 3 ""AssignIPv4Address: IP address pool stats: total: 17, assigned 17""AssignPodIPv4Address: ENI eni-abc123 does not have available addresses"Run the following command to get information about your pod:$ kubectl describe pod example_podExample output:Warning FailedCreatePodSandBox 23m (x2203 over 113m) kubelet, ip-xx-xx-xx-xx.xx-xxxxx-x.compute.internal (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" network for pod "provisioning-XXXXXXXXXXXXXXX": networkPlugin cni failed to set up pod "provisioning-XXXXXXXXXXXXXXX" network: add cmd: failed to assign an IP address to containerReview the subnet to identify if the subnet ran out of free IP addresses. You can view available IP addresses for each subnet in the Amazon VPC console under the Subnets section.Subnet: XXXXXXXXXXIPv4 CIDR Block 10.2.1.0/24 Number of allocated ips 254 Free address count 0To resolve this issue, scale down some of the workload to free up available IP addresses. If additional subnet capacity is available, then you can scale the node. You can also create an additional subnet. For more information, see How do I use multiple CIDR ranges with Amazon EKS? Follow the instructions in the Create subnets with a new CIDR range section.Error while dialing dial tcp 127.0.0.1:50051: connect: connection refusedThis error indicates that the aws-node pod failed to communicate with IPAM because the aws-node pod failed to run on the node.Run the following commands to get information about the pod:$ kubectl describe pod example_pod$ kubectl describe pod/aws-node-XXXXX -n kube-systemExample outputs:Warning FailedCreatePodSandBox 51s kubelet, ip-xx-xx-xx-xx.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" network for pod "example_pod": NetworkPlugin cni failed to set up pod "example_pod" network: add cmd: Error received from AddNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", failed to clean up sandbox container"XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" network for pod "example_pod": NetworkPlugin cni failed to teardown pod "example_pod" network: del cmd: error received from DelNetwork gRPC call: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"]To troubleshoot this issue, verify that the aws-node pod is deployed and is in the Running state:kubectl get pods --selector=k8s-app=aws-node -n kube-systemNote: Make sure that you're running the correct version of the VPC CNI plugin for the cluster version.The pods might be in Pending state due to Liveness and Readiness probe errors. 
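One way to check the VPC CNI version that's currently running on the cluster is to inspect the image tag of the aws-node DaemonSet with a command similar to the following:
kubectl describe daemonset aws-node -n kube-system | grep Image
The image tag in the output contains the plugin version (for example, amazon-k8s-cni:v1.x.x).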
Be sure that you have the latest recommended VPC CNI add-on version according to the compatibility table.Run the following command to view the last log message from the aws-node pod:kubectl -n kube-system exec -it aws-node-XXX -- tail -f /host/var/log/aws-routed-eni/ipamd.log | tee ipamd.logThe issue might also occur because the Dockershim mount point fails to mount. The following is an example message that you can receive when this issue occurs:Getting running pod sandboxes from \"unix:///var/run/dockershim.sock\"Not able to get local pod sandboxes yet (attempt 1/5): rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"The preceding message indicates that the pod didn't mount var/run/dockershim.sock.To resolve this issue, try the following:Restart the aws-node pod to remap the mount point.Cordon the node, and scale the nodes in the node group.Upgrade the Amazon VPC network interface to the latest cluster version that's supported.If you added the CNI as a managed plugin in the AWS Management Console, then the aws-node fails the probes. Managed plugins overwrite the service account. However, the service account isn't configured with the selected role. To resolve this issue, turn off the plugin from the AWS Management Console, and create the service account using a manifest file. Or, edit the current aws-node service account to add the role that's used on the managed plugin.Network plugin cni failed to set up pod "my-app-xxbz-zz" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4AddressYou get this error because of either of the following reasons:The pod isn't running properly.The certificate that the pod is using isn't created successfully.This error relates to the Amazon VPC admission controller webhook that's required on Amazon EKS clusters to run Windows workloads. The webhook is a plugin that runs a pod in the kube-system namespace. The component runs on Linux nodes and allows networking for incoming pods on Windows nodes.Run the following command to get the list of pods that are affected:kubectl get podsExample output:my-app-xxx-zz 0/1 ContainerCreating 0 58m <none> ip-XXXXXXX.compute.internal <none>my-app-xxbz-zz 0/1 ContainerCreating 0 58m <none>Run the following command to get information about the pod:$ kubectl describe pod my-app-xxbz-zzExample output:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" network for pod "<POD_NAME>": networkPlugin cni failed to set up pod "example_pod" network: failed to parse Kubernetes args: pod does not have label vpc.amazonaws.com/PrivateIPv4AddressReconciler worker 1 starting processing node ip-XXXXXXX.compute.internal.Reconciler checking resource vpc.amazonaws.com/PrivateIPv4Address warmpool size 1 desired 3 on node ip-XXXXXXX.compute.internal.Reconciler creating resource vpc.amazonaws.com/PrivateIPv4Address on node ip-XXXXXXX.compute.internal.Reconciler failed to create resource vpc.amazonaws.com/PrivateIPv4Address on node ip-XXXXXXX.compute.internal: node has no open IP address slots.Windows nodes support one network interface per node. Each Windows node can run as many pods as the available IP addresses per network interface, minus one. 
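To see how many vpc.amazonaws.com/PrivateIPv4Address slots a Windows node advertises and how many are already in use, you can describe the node with a command similar to the following (replace the node name with your own):
kubectl describe node ip-XXXXXXX.compute.internal | grep -i privateipv4address
The resource should appear under the node's Capacity and Allocatable sections.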
To resolve this issue, scale up the number of Windows nodes.If the IP addresses aren't the issue, then review the Amazon VPC admission controller pod event and logs.Run the following command to confirm that the Amazon VPC admission controller pod is created:$ kubectl get pods -n kube-system OR kubectl get pods -n kube-system | grep "vpc-admission"Example output:vpc-admission-webhook-5bfd555984-fkj8z 1/1 Running 0 25mRun the following command to get information about the pod:$ kubectl describe pod vpc-admission-webhook-5bfd555984-fkj8z -n kube-systemExample output: Normal Scheduled 27m default-scheduler Successfully assigned kube-system/vpc-admission-webhook-5bfd555984-fkj8z to ip-xx-xx-xx-xx.ec2.internal Normal Pulling 27m kubelet Pulling image "xxxxxxx.dkr.ecr.xxxx.amazonaws.com/eks/vpc-admission-webhook:v0.2.7" Normal Pulled 27m kubelet Successfully pulled image "xxxxxxx.dkr.ecr.xxxx.amazonaws.com/eks/vpc-admission-webhook:v0.2.7" in 1.299938222s Normal Created 27m kubelet Created container vpc-admission-webhook Normal Started 27m kubelet Started container vpc-admission-webhookRun the following command to check the pod logs for any configuration issues:$ kubectl logs vpc-admission-webhook-5bfd555984-fkj8z -n kube-systemExample output:I1109 07:32:59.352298 1 main.go:72] Initializing vpc-admission-webhook version v0.2.7.I1109 07:32:59.352866 1 webhook.go:145] Setting up webhook with OSLabelSelectorOverride: windows.I1109 07:32:59.352908 1 main.go:105] Webhook Server started.I1109 07:32:59.352933 1 main.go:96] Listening on :61800 for metrics and healthzI1109 07:39:25.778144 1 webhook.go:289] Skip mutation for as the target platform is .The preceding output shows that the container started successfully. The pod then adds the vpc.amazonaws.com/PrivateIPv4Address label to the application pod. However, the manifest for the application pod must contain a node selector or affinity so that the pod is scheduled on the Windows nodes.Other options to troubleshoot the issue include verifying the following:You deployed the Amazon VPC admission controller pod in the kube-system namespace.Logs or events aren't pointing to an expired certificate. If the certificate is expired and Windows pods are stuck in the Container creating state, then you must delete and redeploy the pods.There aren't any timeouts or DNS-related issues.If you don't create the Amazon VPC admission controller, then turn on Windows support for your cluster.Important: Amazon EKS doesn't require you to turn on the Amazon VPC admission controller to support Windows node groups. If you turned on the Amazon VPC admission controller, then remove legacy Windows support from your data plane.Related informationAmazon EKS networkingFollow" | https://repost.aws/knowledge-center/eks-failed-create-pod-sandbox |
Why can't I connect to my S3 bucket using interface VPC endpoints? | I can't connect to my Amazon Simple Storage Service (Amazon S3) bucket using interface Amazon Virtual Private Cloud (Amazon VPC) endpoints. How can I troubleshoot this? | "I can't connect to my Amazon Simple Storage Service (Amazon S3) bucket using interface Amazon Virtual Private Cloud (Amazon VPC) endpoints. How can I troubleshoot this?Short descriptionTo troubleshoot this error, check the following:Verify the policy associated with the interface VPC endpoint and the S3 bucket.Verify that your network can connect to the S3 endpoints.Verify that your DNS can resolve to the S3 endpoint IP addresses.Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.ResolutionVerify the policy associated with the interface VPC endpoint and the S3 bucketBy default, an S3 bucket doesn't have a policy associated with it when you create a bucket. A policy that's associated with an S3 interface endpoint at the time of creation allows any action on any S3 bucket by default. For information on viewing the policy associated with your endpoint, see View your interface endpoint.Verify that your network can connect to the S3 endpointsCheck connectivity between the source and the destination. For example, check the network access control list (ACL) and the security group associated with the S3 interface endpoints to confirm that traffic is allowed to the interface endpoint.Use the following telnet command to test connectivity between the AWS resource or an on-premises host and the S3 endpoint. In the following command, replace S3_interface_endpoint_DNS with the DNS of your S3 interface endpoint.telnet bucket.S3_interface_endpoint_DNS 443Trying a.b.x.y...Connected to bucket.vpce-0a1b2c3d4e5f6g-m7o5iqbh.s3.us-east-2.vpce.amazonaws.comYou can also test telnet connectivity using a test Amazon Elastic Compute Cloud (Amazon EC2) instance. Test the connectivity in the subnet where you have the endpoint from the source (on-premises host or another instance) to verify that layer 3 connectivity exists from the source to the destination AWS resource. Make sure that you use the same security group in the test instance that's associated with the S3 interface endpoint. Testing this connectivity helps to determine if the issue is with the security group or the network ACL.Verify that the DNS can resolve to the S3 endpointsMake sure that you can resolve the interface endpoint DNS from the source. You can use tools such as nslookup, dig, and so on to do this. The following example uses dig. In the following command, replace S3_interface_endpoint_DNS with the DNS of your S3 interface endpoint.dig *s3_interface_endpoint_DNS @local_nameserverNote: The Amazon-provided DNS server is the .2 IP address of the VPC CIDR. For your on-premises host, the local name server is the one listed in the /etc/resolv.conf file.Related informationAccessing buckets and S3 access points from S3 interface endpointsFollow" | https://repost.aws/knowledge-center/vpc-interface-endpoints-connect-s3 |
How can I restrict users in certain locations from accessing web content served by my CloudFront distribution? | I want to restrict users in certain countries from accessing the web content served by my Amazon CloudFront distribution. | "I want to restrict users in certain countries from accessing the web content served by my Amazon CloudFront distribution.ResolutionTurn on CloudFront geo restriction for your distribution by following these steps:Open the CloudFront console.Choose the distribution that you want to apply geo restriction to.Choose the Geographic Restrictions tab.Choose Edit.To allow access to countries, for Restriction type choose Allow List. To block access from certain countries, choose Block List.For Countries, select the countries that you want to allow or block. Then, choose Add.Choose Save Changes.Note: You can set your CloudFront distribution to return a custom error message when a user from a blocked country tries to access content.Consider these additional ways to restrict access to your content served through CloudFront:Be sure that any AWS security groups on your CloudFront origin have restricted HTTP or HTTPS access to the CloudFront IP address ranges. This prevents access to those IP addresses from outside of CloudFront. For more information, see Automatically update security groups for Amazon CloudFront IP ranges using AWS Lambda.You can use AWS WAF to monitor and restrict HTTP and HTTPS requests, and to control access to your content.Related informationHow AWS Shield worksFollow" | https://repost.aws/knowledge-center/cloudfront-geo-restriction |
How do I increase security in Amazon Cognito by using MFA settings for users and user pools? | I want to increase security for my Amazon Cognito users and user pools by implementing multi-factor authentication (MFA). How do I do this? | "I want to increase security for my Amazon Cognito users and user pools by implementing multi-factor authentication (MFA). How do I do this?Short descriptionMFA settings for Amazon Cognito can be set to off, optional, or required for users and user pools.If MFA is off, then no users are prompted with an MFA challenge during sign in. If MFA is optional, then MFA is added at the user level. Only users who have MFA configured are prompted with an MFA challenge during sign in. If MFA is required, then each user is prompted with an MFA challenge during sign in.SMS text messages and time-based one-time passwords (TOTP) are both second authentication factor options for Amazon Cognito users and user pools.Resolution1. Set up MFA for your Amazon Cognito user pool.Important: The user pool’s MFA settings can change the authentication flow. For more information, see User pool authentication flow.2. Set up a second authentication factor for Amazon Cognito users.To configure SMS text messages as the second factor for users:Make sure that each user has a phone number. Phone numbers are marked as verified after SMS text messages are acknowledged.Set up an SMS text message MFA.Set up SMS text messages in Amazon Cognito user pools.Use the AdminSetUserMFAPreference API or the SetUserMFAPreference API, depending on the use case. For SMSMfaSettings, set both Enabled and PreferredMfa to True.Provide any other required parameters depending on the API, then invoke the API.To configure TOTP as the second factor for users:Set up a TOTP software token MFA.Use the AdminSetUserMFAPreference API or the SetUserMFAPreference API, depending on the use case. For SoftwareTokenMfaSettings, set both Enabled and PreferredMfa to True.Provide any other required parameters depending on the API, then invoke the API.Note: You can use the AWS Command Line Interface (AWS CLI) to associate a TOTP software token MFA and then set TOTP as the second authentication factor. For more information, see How do I activate TOTP MFA for Amazon Cognito user pools?Follow" | https://repost.aws/knowledge-center/cognito-mfa-settings-users-and-pools |
How do I restore a CloudHSM cluster from a previous backup? | I want to restore my hardware security module (HSM) in my AWS CloudHSM cluster to a previous version. | "I want to restore my hardware security module (HSM) in my AWS CloudHSM cluster to a previous version.Short descriptionCloudHSM periodically makes a backup of your cluster, and also automatically creates a backup when you delete an HSM. The backups are stored in an Amazon Simple Storage Service (Amazon S3) bucket in the same AWS Region as your cluster.If you don't need your HSM, you can delete it. When you are ready to use that HSM again, you can create a new HSM in the same cluster as the HSM that you deleted. If you deleted your cluster, you can create a new one from the backup. For more information, see AWS CloudHSM cluster backups.ResolutionFollow the instructions for Creating AWS CloudHSM cluster from backups. Then, create an HSM in that cluster.Follow" | https://repost.aws/knowledge-center/restore-cloudhsm-cluster |
Why can't I delete my Amazon SNS topic subscription? | "I want to delete my Amazon Simple Notification Service (Amazon SNS) topic subscription. However, I either get an error message or the option to delete the subscription is deactivated in the console." | "I want to delete my Amazon Simple Notification Service (Amazon SNS) topic subscription. However, I either get an error message or the option to delete the subscription is deactivated in the console.Short descriptionThere are three situations when Amazon SNS doesn't let you delete your Amazon SNS topic subscription:Your topic subscription is in the Pending Confirmation status.Your topic subscription is in the Deleted status.The AWS Identity and Access Management (IAM) entity that's trying to delete your topic subscription doesn't have the required permissions to unsubscribe.Note: After three days, Amazon SNS automatically removes subscriptions that are in the Deleted and Pending Confirmation status from your account.If your topic subscription is in the Pending Confirmation status, then the Delete button is deactivated in the Amazon SNS console.If your topic subscription is in the Deleted status and you try to delete the subscription, then Amazon SNS returns the following error message:"Error code: InvalidParameter - Error message: Invalid parameter: SubscriptionArn Reason: An ARN must have at least 6 elements, not 1”If the IAM entity that's trying to delete your subscription doesn't have the required permissions to unsubscribe, then Amazon SNS returns a Permissions Denied error.ResolutionCheck if your Amazon SNS topic subscription is in the Deleted or Pending Confirmation statusImportant: If subscriptions are in the Deleted or Pending Confirmation status when you delete their topic, then you can't manually remove the subscriptions from your account. You must wait three days for Amazon SNS to automatically remove the subscriptions from your account.1. Open the Amazon SNS console.2. In the left navigation pane, choose Subscriptions.3. On the Subscriptions page, find the subscription that you want to delete. Then, in the Status column, check if the subscription is in either the Deleted or Pending Confirmation status.4. Complete the steps in one of the following sections depending on whether your subscription is in the Deleted or Pending Confirmation status. If your subscription isn't in Deleted or Pending Confirmation status, then complete the steps in the To troubleshoot Permissions Denied errors section.Your Amazon SNS topic subscription is in the Deleted statusThere are two reasons that a topic subscription is in the Deleted status without being removed from your account:A member of an Amazon SNS topic mailing list selects the unsubscribe link in an email message sent from the topic.An Amazon Simple Queue Service (Amazon SQS) queue in another AWS account that's subscribed to the topic deletes the cross-account subscription.A member of the topic mailing list selects the unsubscribe link in an email sent from the topicTake one of the following actions:1. In your email inbox, open the email that has the following subject line: AWS Notification - Unsubscribe Confirmation.2. At the bottom of the email, select the Resubscribe link. After you select the Resubscribe link, the email subscription is reconfirmed and you can delete it from the Amazon SNS console.-or-Recreate the deleted email subscription, and then confirm it. 
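For example, you can recreate the email subscription with an AWS CLI command similar to the following. The topic ARN and email address are placeholders that you must replace with your own values:
$ aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:MyTopic --protocol email --notification-endpoint user@example.com
Amazon SNS then sends a new confirmation email to the endpoint address.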
After you create and confirm the subscription, you can delete it from the Amazon SNS console.Note: Email spam filters can also unsubscribe the mailing list's email address.An Amazon SQS queue in another account that's subscribed to the topic deletes the cross-account subscriptionComplete the following steps:1. Follow the instructions in How do I recreate a "Deleted" Amazon SNS topic subscription for an Amazon SQS queue in another AWS account?2. Use the AWS account that owns the subscription to delete the subscription.Your Amazon SNS topic subscription is in the Pending Confirmation statusThere are four reasons that a topic subscription is in the Pending Confirmation status without being removed from your account:The subscription is added but isn't yet confirmed.The email address that's added to the subscription isn't valid.The delivery rate for email messages exceeds the default quota of 10 messages per second.The HTTP or HTTPS endpoint isn't automatically processing the Subscription Confirmation request that Amazon SNS made.The HTTP or HTTPS endpoint isn't valid.When any of the following endpoint types are subscribed to an SNS topic, the subscription remains in the Pending Confirmation status until it's confirmed:EmailHTTPHTTPSCross-account Amazon SQSTo confirm a subscription that's associated with the preceding types of endpoints, select the Confirm Subscription link that was sent to the endpoint. After the subscription is confirmed, you can delete it from the Amazon SNS console.For all other scenarios, you must resubscribe the endpoint to the SNS topic, and then complete the following steps to delete it:1. Open the Amazon SNS console.2. In the left navigation pane, choose Subscriptions.3. On the Subscriptions page, find the subscription that you want to delete. Then, choose Request confirmation. A confirmation request is sent to the designated endpoint.4. Based on the type of endpoint that you're using, take one of the following actions to confirm the subscription:For email endpointsIn your email inbox, open the email that has the following subject line: AWS Notification - Subscription Confirmation. Then, choose Confirm Subscription.Note: If you don't see the subscription confirmation email, then check your email's spam and junk folders.For cross-account Amazon SQS endpointsFind the subscription confirmation message in the Amazon SQS queue. Then, send an HTTP GET request to the SubscribeURL that's in the message body. For more information, see Sending Amazon SNS messages to an Amazon SQS queue in a different account.For HTTP and HTTPS endpointsMake sure that your endpoint can handle the HTTP POST requests that Amazon SNS uses to send subscription confirmation and notification messages. For more information, see Make sure your endpoint is ready to process Amazon SNS messages.5. When the subscription is in the Confirmed status, delete the subscription.Note: For email, HTTP, and HTTPS endpoints, three days must elapse before the subscription is removed from your account after you delete it.To troubleshoot Permissions Denied errorsCheck unsubscribe permissionComplete the following steps to confirm that the IAM entity that's trying to delete your topic subscription has the required permissions to unsubscribe:1. Open the IAM Policy Simulator console.2. In the left Users, Groups, and Roles pane, choose the IAM entity that you're using to delete the topic subscription.3. In the Policy Simulator pane, for the Select service dropdown list, select SNS.4. 
For the Select actions dropdown list, select Unsubscribe.5. Choose Run Simulation.6. Under Action Settings and Results, in the Permission column, check if the unsubscribe permission is Allowed or Denied.If your IAM entity doesn't allow the sns:Unsubscribe action, contact your system administrator and ask them to add the required permissions. For more information, see Adding and removing IAM identity permissions.Follow" | https://repost.aws/knowledge-center/sns-cannot-delete-topic-subscription |
How do I sign in to the AWS Management Console? | I want to access AWS resources through the AWS Management Console. How do I sign in? | "I want to access AWS resources through the AWS Management Console. How do I sign in?ResolutionSigning in as the AWS account root userIf you're a root user, open the Sign in page, select Root user, and sign in using your AWS account root user credentials.Signing in as the AWS Identity and Access Management (IAM) user with a custom URLSign in using a custom URL https://account_alias_or_id.signin.aws.amazon.com/console/. You must replace account_alias_or_id with the account alias or account ID provided by the root user.Signing in as the IAM user on the Sign-in pageIf you have previously signed in as the IAM user on the browser, you might see the Sign in as IAM user page when you open the Sign-in page. Your account ID or account alias might already be saved. In that case, just enter your IAM user credentials, and then choose Sign in.If you are signing in on the browser for the first time, open the Sign in page, select IAM user, and then enter the 12-digit AWS account ID or account alias. Choose Next. In the Sign in as IAM user page, enter your IAM user credentials, and then choose Sign in.If you have trouble signing in as the IAM user, contact your account administrator for the specialized URL and account credentials to use.For the AWS GovCloud (US) Region, sign in to the AWS Management Console for the AWS GovCloud (US) using your IAM account credentials.If you have trouble signing in to your AWS account, see I'm having trouble signing in to or accessing my AWS account.Note: For security purposes, AWS Support doesn't have access to view, provide, or change your account credentials.Related informationWhat is the AWS Management Console?Signing in to the AWS Management Console as an IAM user or root userAWS Management Console for the AWS GovCloud (US) RegionTroubleshooting AWS sign-in or account issuesEnable access to the AWS Management Console with AD credentialsFollow" | https://repost.aws/knowledge-center/sign-in-console |
Why can't I configure ACM certificates for my website hosted on an EC2 instance? | I want to configure AWS Certificate Manager (ACM) certificates for my website hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance. | "I want to configure AWS Certificate Manager (ACM) certificates for my website hosted on an Amazon Elastic Compute Cloud (Amazon EC2) instance.Short descriptionConfiguring an Amazon Issued ACM public certificate for a website that's hosted on an EC2 instance requires exporting the certificate. However, you can't export the certificate because ACM manages the private key that signs and creates the certificate. For more information, see Security for certificate private keys.Instead, you can associate an ACM certificate with a load balancer or an ACM SSL/TLS certificate with a CloudFront distribution. Before you begin, follow the instructions for requesting a public certificate.Note: You must request or import an ACM certificate in the same AWS Region as your load balancer. CloudFront distributions must request the certificate in the US East (N. Virginia) Region.ResolutionFollow these steps to associate your certificate:Create an Application Load Balancer, Network Load Balancer, Classic Load Balancer, or CloudFront distribution.Note: If you already have an Application Load Balancer, Network Load Balancer, Classic Load Balancer, or CloudFront distribution, then you can skip this step.Associate the certificate with your ELB, or configure a CloudFront distribution to use an SSL/TLS certificate.Put the EC2 instance behind your ELB or CloudFront distribution.Route traffic to your ELB or CloudFront distribution.Create an ELB or CloudFront distributionFollow the instructions for your use case:Create an Application Load BalancerCreate a Network Load BalancerCreate a Classic Load BalancerCreate a CloudFront distributionAssociate the certificate with ELB or configure it with a CloudFront distributionFollow the instructions for your use case:Associate the certificate with a Classic, Application, or Network Load BalancerConfigure your CloudFront distribution to use an SSL/TLS certificatePut the EC2 instance behind your ELB or CloudFront distributionFollow the instructions for your use case:Register targets with your target group for your Application or Network Load BalancerRegister or deregister EC2 instances for your Classic Load BalancerUse Amazon EC2 with CloudFront distributionsRoute traffic to your ELB or CloudFront distributionFollow the instructions for your use case:Route traffic to a load balancerRoute traffic to a CloudFront distributionNote: Public ACM certificates can be installed on Amazon EC2 instances that are connected to a Nitro Enclave, but not to other Amazon EC2 instances.Related informationEmail validationDNS validationMaking Amazon Route 53 the DNS service for an existing domainFollow" | https://repost.aws/knowledge-center/configure-acm-certificates-ec2 |
How do I set up a Lambda function to invoke when a state changes in AWS Step Functions? | I want to invoke an AWS Lambda function whenever a state changes in AWS Step Functions. How do I do that? | "I want to invoke an AWS Lambda function whenever a state changes in AWS Step Functions. How do I do that?ResolutionNote: These instructions describe how to use an Amazon EventBridge events rule to invoke a Lambda function whenever a state changes in Step Functions. As you follow the steps, make sure that you do the following:Confirm the event change that you use to invoke the Lambda function is a supported API action.Create the Step Functions state machine, Lambda function, and EventBridge events rule in the same AWS Region.Create IAM roles for Step Functions and Lambda1. Create an AWS Identity and Access Management (IAM) role for Step Functions. When you create the IAM role, do the following:Grant the IAM role permissions to perform any actions required for your use case.Allow the action, lambda:InvokeFunction, to have your state machine invoke your Lambda function.Note: The managed policy, AWSLambdaRole, includes the permissions required to invoke Lambda functions.2. Create a Lambda execution role that grants your function permission to upload logs to Amazon CloudWatch.Note: The managed policy, AWSLambdaBasicExecutionRole, grants your function the basic permissions to upload logs to CloudWatch.Create a Step Functions state machineCreate a state machine in the Step Functions console. For IAM role for executions, choose the existing role that you created for Step Functions.For more information, see What is AWS Step Functions?Create a Lambda function that's configured to print the event it receives1. Create a function in the Lambda console. For Execution role, choose the existing role that you created for Lambda.2. In the Lambda console, use the code editor to update the function code so that when it runs, the function prints the event that it receives.Example Python code that tells a Lambda function to print the events it receivesimport jsondef lambda_handler(event, context):print("Received event: " + json.dumps(event)) return {'statusCode': 200,'body': json.dumps("Hello")}For more information, see Building Lambda functions with Python.Create an EventBridge events rule that invokes your Lambda function whenever a state changes in Step Functions1. Open the EventBridge console.2. In the left navigation pane, under Events, choose Rules.3. Choose Create rule.4. For Name, enter a name for the rule.5. For Define pattern, choose Event Pattern.6. For Event matching pattern, choose Pre-defined pattern by service.7. For Service provider, choose AWS.8. For Service Name, choose Step Functions.9. For Event Type, choose Step Functions Execution Status Change.Note: You can also choose to have All Events for Step Functions initiate the rule. Or, you can choose AWS API Call through CloudTrail to initiate the rule for certain Step Functions API call events, such as StartExecution. For more information, see Events from AWS services.10. Choose the statuses, state machine Amazon Resource Names (ARNs), and execution ARNs that you want to initiate the event. You can choose Any for each type of trigger, or identify Specific statuses or ARNs for each trigger.11. Under Select targets, confirm that Lambda function is the target type.12. For Function, choose the Lambda function that you created13. 
Choose Create rule.For more information, see Amazon EventBridge events and EventBridge for Step Functions execution status changes.Test your setup1. In the Step Functions console, start a new execution of your state machine.2. In the CloudWatch console, in the left navigation pane, under Logs, choose Log groups.3. Choose the log stream created by your Lambda function.4. Verify the event details in the log stream.Note: It can take several minutes for the log stream to appear after the new execution starts.Related informationMonitoring Step Functions using CloudWatchCreating a Step Functions state machine that uses LambdaFollow" | https://repost.aws/knowledge-center/lambda-state-change-step-functions |
How do I troubleshoot NXDOMAIN responses when using Route 53 as the DNS service? | "I'm receiving an NXDOMAIN response from the DNS resolver, or a DNS_PROBE_FINISHED_NXDOMAIN error when resolving Amazon Route 53 records." | "I'm receiving an NXDOMAIN response from the DNS resolver, or a DNS_PROBE_FINISHED_NXDOMAIN error when resolving Amazon Route 53 records.ResolutionDetermine if the domain is in the active or suspended state1. Run a whois query against the domain.Note: Make sure that whois is installed before running the following commands.For Windows: Open a Windows command prompt, and then enter whois -v example.com.For Linux: Open your SSH client. In the command prompt, enter whois example.com.Note: If the domain is registered with Amazon Registrar, then you can use the Amazon Registrar whois lookup tool.2. Check the status of the domain. If the value of Domain Status is clientHold, then the domain is suspended.Example whois output:whois example.com Domain Name: EXAMPLE.COM Registry Domain ID: 87023946\_DOMAIN\_COM-VRSN Registrar WHOIS Server: whois.godaddy.com Registrar URL: http://www.godaddy.com Updated Date: 2020-05-08T10:05:49Z Creation Date: 2002-05-28T18:22:16Z Registry Expiry Date: 2021-05-28T18:22:16Z Registrar: GoDaddy.com, LLC Registrar IANA ID: 146 Registrar Abuse Contact Email: abuse@godaddy.com Registrar Abuse Contact Phone: 480-624-2505 Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited Domain Status: clientHold https://icann.org/epp#clientHold Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited Name Server: ns-1470.awsdns-55.org. Name Server: ns-1969.awsdns-54.co.uk. Name Server: ns-736.awsdns-28.net. Name Server: ns-316.awsdns-39.com.To make a domain available on the internet again, remove it from suspended status. The following are the most common reasons that a domain might be suspended:You registered a new domain, but you didn't click the link in the confirmation email.You turned off automatic renewal for the domain, and the domain expired.You changed the email address for the registrant contact, but you didn't verify that the new email address is valid.For more information, see My domain is suspended (status is ClientHold).Confirm that the correct name servers are configured on the domain registrar1. In the whois output, note the Name Servers that are authoritative for your domain. See the preceding whois output for an example.You can also use the dig utility to check the configured name servers.Example dig +trace output:dig +trace example.com; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.amzn2.2 <<>> +trace example.com;; global options: +cmd. 518400 IN NS H.ROOT-SERVERS.NET.. 518400 IN NS I.ROOT-SERVERS.NET.. 518400 IN NS J.ROOT-SERVERS.NET.. 518400 IN NS K.ROOT-SERVERS.NET.;; Received 239 bytes from 10.0.0.2#53(10.0.0.2) in 0 mscom. 172800 IN NS a.gtld-servers.net.com. 172800 IN NS m.gtld-servers.net.com. 172800 IN NS h.gtld-servers.net.C41A5766com. 86400 IN RRSIG DS 8 1 86400 20210329220000 20210316210000 42351 . ;; Received 1174 bytes from 192.112.36.4#53(G.ROOT-SERVERS.NET) in 104 msexample.com. 172800 IN NS ns-1470.awsdns-55.org. ------>Name servers of interest.example.com. 172800 IN NS ns-1969.awsdns-54.co.uk.example.com. 172800 IN NS ns-736.awsdns-28.net.example.com. 172800 IN NS ns-316.awsdns-39.com.;; Received 732 bytes from 192.33.14.30#53(b.gtld-servers.net) in 91 msexample.com. 
3600 IN A 104.200.22.130example.com. 3600 IN A 104.200.23.95example.com. 3600 IN NS ns-1470.awsdns-55.org.example.com. 3600 IN NS ns-1969.awsdns-54.co.uk.example.com. 3600 IN NS ns-736.awsdns-28.net.example.com. 3600 IN NS ns-316.awsdns-39.com.;; Received 127 bytes from 173.201.72.25#53(ns-1470.awsdns-55.org) in 90 ms2. Open the Route 53 console.3. In the navigation pane, choose Hosted zones.4. On the Hosted zones page, select the radio button (not the name) for the hosted zone. Then, choose View details.5. On the details page for the hosted zone, choose Hosted zone details.6. Confirm that the Name Servers listed in the hosted zone details are identical to the Name Servers in the whois or dig +trace output.Important: If the name servers aren't identical, then update them at the domain registrar. For domains registered with Route 53, see Adding or changing name servers and glue records for a domain. For domains registered with a third party, refer to the provider's documentation for steps on how to update the name servers.Confirm that the requested record existsCheck that the hosted zone for the domain contains the requested record. For example, if you're receiving an NXDOMAIN response when trying to resolve www.example.com, then check the example.com hosted zone for the www.example.com record. For steps on how to list records in Route 53, see Listing records.If you have a CNAME record pointing to another domain name, make sure that the canonical name exists and is resolvable.Exampleexample.com CNAME record is configured with a value of blog.example.com. In this case, verify that the record blog.example.com exists and is resolvable.Check for subdomain delegation issues1. Check the parent hosted zone for a Name Server (NS) record for the domain name that you're resolving. If an NS record for a subdomain exists, then the authority for the domain and its subdomains is delegated to another zone. For example, if an NS record for www.example.com exists, then the authority for www is delegated to the name servers in the NS record. If the delegation is valid, then you must create the record for the domain in the delegated zone, not the parent zone of example.com.2. If the delegation isn't valid, then delete the NS record for the domain. Confirm that the parent hosted zone (example.com) contains a record for the domain name that you're trying to resolve.3. Resolvers that implement QNAME minimization include minimum detail in each query, as is required for that step in the resolution process. This might cause an NXDOMAIN issue in some resolvers. When you configure multiple levels of subdomain delegation, follow strict delegation at every level. For more information, see Routing traffic for additional levels of subdomains.Determine if the DNS resolution issue exists only in the VPC1. Check the resolver IP address that's configured on the client operating system (OS). For Linux, check the /etc/resolv.conf file. For Windows, check the DNS servers in the ipconfig /all output. Look for the default virtual private cloud (VPC) DNS resolver (VPC CIDR+2). For example, if the VPC CIDR is 10.0.0.0/8, then the DNS resolver IP address is 10.0.0.2. If you don't see the VPC DNS resolver in /etc/resolv.conf, then check the custom DNS resolver.2. If you're using the VPC DNS resolver, then check the private hosted zones and Route 53 resolver rules.When using resolver rules and private hosted zonesIf the resolver rule and private hosted zone domain name match, then the resolver rule takes precedence. 
For more information, see Considerations when working with a private hosted zone. In this case, the DNS query is sent to the target IP address that's configured as the target in the resolver rule.When using a private hosted zone and no resolver ruleVerify that there's a private hosted zone with matching domain names associated with the VPC. For example, you might have a public hosted zone and a private hosted zone for the domain that's associated with a VPC. This is a split-view or split-horizon DNS. In this case, clients in the VPC can't resolve records created in the public hosted zone. If the record isn't present in the private hosted zone, then the VPC DNS doesn't fall back to the public hosted zone.When using only resolver rules and no private hosted zoneCheck the Route 53 resolver rules. If there's a rule that matches the domain name, then the query for the domain routes to the configured target IP addresses. This means that the query isn't routed to the default public resolvers.Determine if your issue is the result of negative cachingNegative caching is the process of storing a negative response from an authoritative name server in the cache. An NXDOMAIN response is considered a negative response. Consider the following example:A client makes a DNS query for neg.example.com and receives a response code of NXDOMAIN because the record neg.example.com doesn't exist.This user also owns example.com, so they create a new record for neg.example.com. The user continues to receive an NXDOMAIN response when users in other networks can successfully resolve the record.When the user makes a query to neg.example.com before creating the new record, they receive an NXDOMAIN response. If the user turned on negative caching in their resolver settings, then the resolver caches this response. After the user creates the new record, they make the query again. The resolver previously received this query and cached it, so it returned the response from the cache.There's no record returned in the answer of a negative response, so there's no Time to Live (TTL) value compared to a positive response. In this case, the resolver uses the lowest value from the following:The minimum TTL value of the Start of Authority (SOA) record.The TTL value of the SOA record to cache the NXDOMAIN response.To confirm this issue, send a query directly to the name server to see if you're getting a response. For example:dig www.example.com @ns-1470.awsdns-55.orgFollow" | https://repost.aws/knowledge-center/route-53-troubleshoot-nxdomain-responses |
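To run the checks above from one place, here is a minimal bash sketch that compares the local resolver's answer with the authoritative answer and prints the SOA record that governs negative caching. The domain and record names are placeholders; the script assumes dig is installed:
#!/bin/bash
# Compare local-resolver and authoritative answers for a record, then show the SOA
# values that control how long an NXDOMAIN can sit in a resolver's negative cache.
DOMAIN="example.com"        # placeholder
RECORD="www.example.com"    # placeholder

# One authoritative name server as delegated by the parent zone
NS=$(dig +short NS "$DOMAIN" | head -n 1)

echo "--- Local resolver ---"
dig +noall +comments +answer "$RECORD" A

echo "--- Authoritative answer from $NS ---"
dig +noall +comments +answer "$RECORD" A @"$NS"

echo "--- SOA record (the last field is the negative-caching TTL) ---"
dig +noall +answer SOA "$DOMAIN" @"$NS"
If the authoritative server answers but your local resolver still returns NXDOMAIN, suspect stale negative caching or a resolver rule or private hosted zone in the VPC, as described above.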
Why am I seeing high CPU usage on my Amazon Redshift leader node? | My Amazon Redshift cluster's leader node is experiencing high CPU utilization. Why is this happening? | "My Amazon Redshift cluster's leader node is experiencing high CPU utilization. Why is this happening?Short descriptionYour Amazon Redshift cluster's leader node parses and develops execution plans to carry out database operations. The leader node also performs final processing of queries and merging or sorting of data before returning that data to the client. Depending on how complex or resource-intensive the database operations are, the CPU utilization can spike for your cluster's leader node.In particular, your leader node's CPU utilization can spike for the following reasons:Increase in database connectionsQuery compilation and recompilationHigh number of concurrent connectionsHigh number of concurrent queries running in WLMLeader node-only functions and catalog queriesNote: You can't check for specific processes that occupy your leader node. Use the STV_RECENTS table to check for the queries that are running at a particular time.ResolutionIncrease in database connectionsThe client server communicates with the Amazon Redshift cluster through the leader node. If there is a growing number of database connections, the CPU utilization increases to process those connections. Check Amazon CloudWatch metrics to make sure the DatabaseConnections limit hasn't been exceeded.Query compilation and recompilationAmazon Redshift generates and compiles code for each query execution plan. Query compilation and recompilation are resource-intensive operations, and this can result in high CPU usage of the leader node. However, CPU performance will return to normal when the query compilation or recompilation operations are complete.Note also that Amazon Redshift caches compiled code. When a query is submitted, Amazon Redshift reuses whatever segments are available while the remaining segments are recompiled. As a result, this process can contribute to high CPU usage of the leader node.Note: After an Amazon Redshift cluster reboots, the compiled segment still persists, even though the results cache gets cleared. Amazon Redshift doesn't run the query if your query was previously cached. All caches are removed when a patch is applied.To check the compilation time (in seconds) and segment execution location for each query segment, use the SVL_COMPILE system view:select userid,xid,pid,query,segment,locus,starttime, endtime,datediff(second,starttime,endtime) as TimetoCompile,compile from svl_compile;High number of concurrent connectionsMore connections can lead to a higher concurrency and an increase in transactions of your Amazon Redshift cluster. The increase in transactions can result in high CPU utilization of the leader node.To check for concurrent connections, run the following query:
select s.process as process_id,
c.remotehost || ':' || c.remoteport as remote_address,
s.user_name as username,
s.starttime as session_start_time,
s.db_name,
i.starttime as current_query_time,
i.text as query
from stv_sessions s
left join pg_user u on u.usename = s.user_name
left join stl_connection_log c on c.pid = s.process and c.event = 'authenticated'
left join stv_inflight i on u.usesysid = i.userid and s.process = i.pid
where username <> 'rdsdb'
order by session_start_time desc;
Then, use PG_TERMINATE_BACKEND to close any active sessions.High number of concurrent queries running in WLMAll client connections are processed through the leader node. 
Before returning data to the client server, Amazon Redshift's leader node parses, optimizes, and then compiles queries. The leader node also distributes tasks to compute nodes, performing final sorting or aggregation. With high query concurrency, CPU usage can increase at the leader node level. Additionally, some database operations can be applied only at the leader node level. For example, a query with a LIMIT clause might consume high CPU because the limit is applied to the leader node before data is redistributed.To confirm whether there is correlation between the number of concurrent queries and CPU usage, first check the WLMRunningQueries and CPUUtilization metrics in Amazon CloudWatch.Then, check to see which queries are consuming high CPU:
SELECT userid, query, xid, aborted,
ROUND(query_cpu_time::decimal,3),
query_execution_time,
segment_execution_time,
substring(querytxt,1,250)
FROM stl_query
JOIN
(SELECT query,
query_cpu_time,
query_execution_time,
segment_execution_time
FROM svl_query_metrics_summary
ORDER BY 2 DESC) a USING (query)
WHERE userid>1
AND starttime BETWEEN '2019-12-02 22:00:00' and '2019-12-05 23:59:59'
ORDER BY 5 DESC;
Review the output to confirm which queries are processed by the leader node and any other outlier queries that increase CPU usage.Note: It's a best practice to tune query performance for your queries. Consider increasing your leader node capacity and choosing large node types (rather than adding more compute nodes).Leader node-only functions and catalog queriesAmazon Redshift implements certain SQL functions supported on the leader node. If there are complex queries with leader node functions and overloading catalog queries, then CPU utilization can spike on a leader node. For more information, see SQL functions supported on the leader node.To identify steps referencing catalog tables (which are executed only on a leader node), check the EXPLAIN plan:
explain select * from pg_class;
 QUERY PLAN
----------------------------------------------------------------
 LD Seq Scan on pg_class (cost=0.00..24.57 rows=557 width=243)
Check for the LD prefix in your output. In this example, the LD prefix appears in "LD Seq Scan on pg_class (cost=0.00..24.57 rows=557 width=243)". The LD prefix indicates that a query is running exclusively on a leader node, which can cause a spike in your CPU usage.Follow" | https://repost.aws/knowledge-center/redshift-high-cpu
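For the CloudWatch side of this check, the following is a minimal bash sketch. The cluster identifier is a placeholder, it assumes GNU date for the timestamp arithmetic, and it relies on Redshift publishing a leader-node CPU dimension (NodeID=Leader):
#!/bin/bash
# Pull leader-node CPU and cluster-wide connection counts so spikes can be lined up.
CLUSTER="my-redshift-cluster"   # placeholder
START="$(date -u -d '3 hours ago' +%Y-%m-%dT%H:%M:%SZ)"
END="$(date -u +%Y-%m-%dT%H:%M:%SZ)"

echo "=== Leader node CPUUtilization ==="
aws cloudwatch get-metric-statistics \
  --namespace AWS/Redshift --metric-name CPUUtilization \
  --dimensions Name=ClusterIdentifier,Value="$CLUSTER" Name=NodeID,Value=Leader \
  --start-time "$START" --end-time "$END" --period 300 \
  --statistics Average Maximum --output table

echo "=== DatabaseConnections ==="
aws cloudwatch get-metric-statistics \
  --namespace AWS/Redshift --metric-name DatabaseConnections \
  --dimensions Name=ClusterIdentifier,Value="$CLUSTER" \
  --start-time "$START" --end-time "$END" --period 300 \
  --statistics Average Maximum --output table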
How long does object replication take on Amazon S3? | I configured object replication from one Amazon Simple Storage Service (Amazon S3) bucket to another bucket. How long does it take to replicate objects? | "I configured object replication from one Amazon Simple Storage Service (Amazon S3) bucket to another bucket. How long does it take to replicate objects?ResolutionMost objects replicate within 15 minutes. However, sometimes replication can take a couple of hours. In rare cases, the replication can take longer.There are several factors that can affect replication time, including (but not limited to):The size of the objects to be replicated.For cross-Region replication, the pairing of the source and destination AWS Regions.If you don't see object replicas in the destination bucket after you configure replication, see Troubleshooting replication.Follow" | https://repost.aws/knowledge-center/s3-object-replication-time |
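A quick way to check the state of a single object is to read its replication status from both buckets. The following bash sketch uses placeholder bucket and key names, and assumes your credentials can read both buckets (the destination is often in a different account, so you might need a second profile):
#!/bin/bash
# Source object shows PENDING, COMPLETED, or FAILED; the destination copy shows REPLICA.
SRC_BUCKET="source-bucket"        # placeholder
DST_BUCKET="destination-bucket"   # placeholder
KEY="path/to/object.txt"          # placeholder

echo "Source object replication status:"
aws s3api head-object --bucket "$SRC_BUCKET" --key "$KEY" \
  --query ReplicationStatus --output text

echo "Destination object replication status:"
aws s3api head-object --bucket "$DST_BUCKET" --key "$KEY" \
  --query ReplicationStatus --output text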
How can I resolve connection issues between the CloudHSM client and the CloudHSM cluster? | I want to troubleshoot and resolve connection issues between my AWS CloudHSM cluster and HSM client. | "I want to troubleshoot and resolve connection issues between my AWS CloudHSM cluster and HSM client.ResolutionVerify that the CloudHSM client package installedCloudHSM clients must have the CloudHSM client software installed to communicate with the HSM. Verify that the CloudHSM client package is installed using one of the following commands:Red Hat Enterprise Linux (RHEL) and Amazon Linux:rpm -qa | grep cloudhsmUbuntu:dpkg --list | grep cloudhsmWindows PowerShell:Get-Service -Name AWSCloudHSMClientIf the CloudHSM client software isn't installed, follow the instructions to install it. For Linux distributions, see Install and configure the AWS CloudHSM client (Linux). For Windows, see Install and configure the AWS CloudHSM client (Windows).Verify that the CloudHSM security group is associated with the CloudHSM client instanceWhen you create a cluster, CloudHSM automatically creates a security group named cloudhsm-cluster-clusterID-sg, and then associates the groups with the cluster. Client instances must be associated with this cluster security group to access the HSM.1. Open the CloudHSM console, and choose Clusters.2. Choose the Cluster ID.3. In General configuration under Security group, note the cloudhsm-cluster-clusterID-sg security group ID.4. Open the Amazon EC2 console, and then choose Instances.5. Choose your Instance ID, and then choose the Description tab.6. Check the Security groups associated with the instance.7. If the cloudhsm-cluster-clusterID-sg security group ID isn't associated with the EC2 instance, follow the instructions to Connect the Amazon EC2 instance to the AWS CloudHSM cluster.Verify that the CloudHSM client daemon is runningIf the CloudHSM client daemon is not running, then application hosts can't connect to HSMs. Verify that the CloudHSM client daemon is running using one of the following commands:Amazon Linux 2, CentOS 7, RHEL 7, and Ubuntu 16.04 LTS:sudo systemctl is-active cloudhsm-clientCentOS 6, Amazon Linux, and RHEL 6:sudo status cloudhsm-clientWindows PowerShell:Get-Service -Name AWSCloudHSMClient | Format-Table DisplayName,Status -AutoSizeIf the output shows the CloudHSM client daemon status as stopped, then follow the instructions to Start the AWS CloudHSM Client.Update the configuration file for the CloudHSM client elastic network interface IP address1. Open the CloudHSM console, and then choose Clusters.2. Choose Cluster ID.3. Choose the HSMs tab, and then note ENI IP address.Note: You can also use the AWS Command Line Interface (AWS CLI) describe-clusters command.4. For instructions to update the client's configuration file with the ENI IP address from step 3, see Lost connection to the cluster.For more information, see Troubleshooting AWS CloudHSM.Related informationWhich CloudHSM certificates are used for the client-server end-to-end encrypted connection?Follow" | https://repost.aws/knowledge-center/resolve-cloudhsm-connection-issues |
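To combine the ENI lookup and the network check, here is a minimal bash sketch. The cluster ID is a placeholder, and it assumes the CloudHSM client communicates on TCP ports 2223-2225 (the ports the cluster security group opens):
#!/bin/bash
# Look up the HSM elastic network interface IPs, then test TCP reachability from the client instance.
CLUSTER_ID="cluster-1234567abcd"   # placeholder

for IP in $(aws cloudhsmv2 describe-clusters \
              --filters clusterIds="$CLUSTER_ID" \
              --query 'Clusters[].Hsms[].EniIp' --output text); do
  for PORT in 2223 2224 2225; do
    timeout 3 bash -c "</dev/tcp/${IP}/${PORT}" \
      && echo "$IP:$PORT reachable" \
      || echo "$IP:$PORT NOT reachable - check the cluster security group association"
  done
done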
How can I troubleshoot high replica lag with Amazon RDS for MySQL? | I want to find the cause of replica lag when using Amazon Relational Database Service (Amazon RDS) for MySQL. How can I do this? | "I want to find the cause of replica lag when using Amazon Relational Database Service (Amazon RDS) for MySQL. How can I do this?Short descriptionAmazon RDS for MySQL uses asynchronous replication. This means that sometimes, the replica isn't able to keep up with the primary DB instance. As a result, replication lag can occur.When you use an Amazon RDS for MySQL read replica with binary log file position-based replication, you can monitor replication lag. In Amazon CloudWatch, check the ReplicaLag metric for Amazon RDS. The ReplicaLag metric reports the value of the Seconds_Behind_Master field of the SHOW SLAVE STATUS command.The Seconds_Behind_Master field shows the difference between the current timestamp on the replica DB instance. The original timestamp logged on the primary DB instance for the event that is being processed on the replica DB instance is also shown.MySQL replication works with three threads: the Binlog Dump thread, the IO_THREAD, and the SQL_THREAD. For more information about how these threads function, see the MySQL documentation for Replication threads. If there is a delay in replication, identify whether the lag is caused by the replica IO_THREAD or the replica SQL_THREAD. Then, you can identify the root cause of the lag.ResolutionTo identify which replication thread is lagging, see the following examples:1. Run the SHOW MASTER STATUS command on the primary DB instance, and review the output:mysql> SHOW MASTER STATUS;+----------------------------+----------+--------------+------------------+-------------------+| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |+----------------------------+----------+--------------+------------------+-------------------+| mysql-bin-changelog.066552| 521 | | | |+----------------------------+----------+--------------+------------------+-------------------+1 row in set (0.00 sec)Note: In the example output, the source or primary DB instance is writing the binary logs to the file mysql-bin.066552.2. Run the SHOW SLAVE STATUS command on the replica DB instance and review the output:Example 1:mysql> SHOW SLAVE STATUS\G;*************************** 1. row ***************************Master_Log_File: mysql-bin.066548Read_Master_Log_Pos: 10050480Relay_Master_Log_File: mysql-bin.066548Exec_Master_Log_Pos: 10050300Slave_IO_Running: YesSlave_SQL_Running: YesIn Example 1, the Master_Log_File: mysql-bin.066548 indicates that the replica IO_THREAD is reading from the binary log file mysql-bin.066548. The primary DB instance is writing the binary logs to the mysql-bin.066552 file. This output shows that the replica IO_THREAD is behind by four binlogs. But, the Relay_Master_Log_File is mysql-bin.066548, which indicates that the replica SQL_THREAD is reading from same file as the IO_THREAD. This means that the replica SQL_THREAD is keeping up, but the replica IO_THREAD is lagging behind.Example 2:mysql> SHOW SLAVE STATUS\G*************************** 1. row ***************************Master_Log_File: mysql-bin.066552Read_Master_Log_Pos: 430Relay_Master_Log_File: mysql-bin.066530Exec_Master_Log_Pos: 50360Slave_IO_Running: YesSlave_SQL_Running: YesExample 2 shows that the primary instance's log file is mysql-bin-changelog.066552. The output shows that IO_THREAD is keeping up with the primary DB instance. 
In the replica output, the SQL thread is performing Relay_Master_Log_File: mysql-bin-changelog.066530. As a result, SQL_THREAD is lagging behind by 22 binary logs.Normally, IO_THREAD doesn't cause large replication delays, because the IO_THREAD only reads the binary logs from the primary or source instance. But, network connectivity and network latency can affect the speed of the reads between the servers. The IO_THREAD replica might be performing slower because of high bandwidth usage.If the replica SQL_THREAD is the source of replication delays, then those delays could be caused by the following:Long-running queries on the primary DB instanceInsufficient DB instance class size or storageParallel queries run on the primary DB instanceBinary logs synced to the disk on the replica DB instanceBinlog_format on the replica is set to ROWReplica creation lagLong-running queries on the primary instanceLong-running queries on the primary DB instance that take an equal amount of time to run on the replica DB instance can increase seconds_behind_master. For example, if you initiate a change on the primary instance and it takes an hour to run, then the lag is one hour. Because the change might also take one hour to complete on the replica, by the time the change is complete, the total lag is approximately two hours. This is an expected delay, but you can minimize this lag by monitoring the slow query log on the primary instance. You can also identify long-running statements to reduce lag. Then, break long-running statements into smaller statements or transactions.Insufficient DB instance class size or storageIf the replica DB instance class or storage configuration is lower than the primary, then the replica might throttle because of insufficient resources. The replica is unable to keep up with the changes made on the primary instance. Make sure that the DB instance type of the replica is the same or higher than the primary DB instance. For replication to operate effectively, each read replica requires the same amount of compute and storage resources as the source DB instance. For more information, see DB instance classes.Parallel queries run on the primary DB instanceIf you run queries in parallel on the primary, they are committed in a serial order on the replica. This is because the MySQL replication is single threaded (SQL_THREAD), by default. If a high volume of writes to the source DB instance occurs in parallel, then the writes to the read replica are serialized using a single SQL_THREAD. This can cause a lag between the source DB instance and read replica.Multi-threaded (parallel) replication is available for MySQL 5.6, MySQL 5.7, and higher versions. For more information about multi-threaded replication, see the MySQL documentation for Binary logging options and variables.Multi-threaded replication can cause gaps in replication. For example, multi-threaded replication isn't a best practice when skipping the replication errors, because it's difficult to identify which transactions you are skipping. This can lead to gaps in data consistency between the primary and replica DB instances.Binary logs synced to the disk on the replica DB instanceTurning on automatic backups on the replica might result in overhead to sync the binary logs to the disk on the replica. The default value of the parameter sync_binlog is set to 1. If you change this value to 0, then you also turn off the synchronization of the binary log to the disk by the MySQL server. 
Instead of logging to the disk, the operating system (OS) occasionally flushes the binary logs to disk.Turning off the binary log synchronization can reduce the performance overhead required to sync the binary logs to disk on every commit. But, if there is a power failure or the OS crashes, some of the commits might not be synchronized to the binary logs. This asynchronization can affect point in time restore (PITR) capabilities. For more information, see the MySQL documentation for sync_binlog.Binlog_format is set to ROWIf you set binlog_format on the primary DB instance to ROW, and the source table is missing a primary key, then the SQL thread will perform a full table scan on replica. This is because the default value of parameter slave_rows_search_algorithms is TABLE_SCAN,INDEX_SCAN. To resolve this issue in the short term, change the search algorithm to INDEX_SCAN,HASH_SCAN to reduce the overhead of full table scan. For the long term, it's a best practice to add an explicit primary key to each table.For more information about the slave-rows-search-algorithms parameter, see the MySQL documentation for slave_rows_search_algorithms.Replica creation lagAmazon RDS creates a read replica of a MySQL primary instance by taking a DB snapshot. Then, Amazon RDS restores the snapshot to create a new DB instance (replica) and establishes replication between the two.Amazon RDS takes time to create new read replicas. When the replication is established, there is a lag for the duration of the time that it takes to create a backup of the primary. To minimize this lag, create a manual backup before calling for the replica creation. Then, the snapshot taken by the replica creation process is an incremental backup, which is faster.When you restore a read replica from a snapshot, the replica doesn't wait for all the data to be transferred from the source. The replica DB instance is available to perform the DB operations. The new volume is created from existing Amazon Elastic Block Store (Amazon EBS) snapshot loads in the background.Note: For Amazon RDS for MySQL replicas (EBS-based volumes), the replica lag can increase initially. This is because the lazy loading effect can influence the replication performance.Consider turning on the InnoDB cache warming feature. This can provide performance gains by saving the current state of the buffer pool of the primary DB instance. Then, reload the buffer pool on a restored read replica.Related informationWorking with MySQL replication in Amazon RDSWorking with MySQL read replicasFollow" | https://repost.aws/knowledge-center/rds-mysql-high-replica-lag |
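To watch the lag while you investigate the causes above, the following bash sketch pulls the ReplicaLag metric and, if a MySQL client and credentials are available, the relevant SHOW SLAVE STATUS fields. The instance identifier, endpoint, and admin user are placeholders, and GNU date is assumed:
#!/bin/bash
REPLICA_ID="my-replica-instance"                                    # placeholder
REPLICA_ENDPOINT="my-replica.abc123.us-east-1.rds.amazonaws.com"    # placeholder

# CloudWatch view of Seconds_Behind_Master
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS --metric-name ReplicaLag \
  --dimensions Name=DBInstanceIdentifier,Value="$REPLICA_ID" \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 60 --statistics Maximum --output table

# Direct view of the IO and SQL thread positions on the replica
mysql -h "$REPLICA_ENDPOINT" -u admin -p \
  -e "SHOW SLAVE STATUS\G" \
  | grep -E 'Master_Log_File|Relay_Master_Log_File|Seconds_Behind_Master|Slave_(IO|SQL)_Running'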
How do I troubleshoot connecting to my EC2 Linux instance using a secondary IP address? | I can't connect to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance using a secondary IP address. How do I troubleshoot this? | "I can't connect to my Amazon Elastic Compute Cloud (Amazon EC2) Linux instance using a secondary IP address. How do I troubleshoot this?Short descriptionTo connect to your instance through SSH using a secondary IP address, make sure that your instance meets the following prerequisites:If you launched your instances in a private subnet, then connect through SSH using the secondary private IP address. Make sure to choose the secondary IPv4 address from the IPv4 CIDR block range of the subnet for the network interface.If you launched your instances in a public subnet, then allocate an Elastic IP address to the secondary private IP address attached to your instance.Make sure that the operating system on your EC2 instance recognizes the secondary private IP address.For Amazon Linux, see Configure the operating system on your instance to recognize secondary private IPv4 addresses.For Ubuntu, see How can I make my secondary network interface work in my Ubuntu EC2 instance?For other Linux distributions, see the network configuration documentation for your distribution.If your instance meets these prerequisites, then do the following to troubleshoot connecting through SSH:Connect through SSH with verbose messaging on to identify the error.Review the system logs to look for errors.ResolutionNote: Review the general connection prerequisites before beginning.Connect through SSH with verbose messaging on to identify the errorFor detailed instructions, see How do I troubleshoot problems connecting to my Amazon EC2 Linux instance using SSH?Review the system logs to look for errorsIf the preceding steps don't resolve the issue, then review the instance's system logs. There are two methods for accessing the logs on your instance:Method 1: Use the EC2 Serial ConsoleIf you enabled EC2 Serial Console for Linux, then you can use it to troubleshoot supported Nitro-based instance types. The serial console helps you troubleshoot boot issues, network configuration, and SSH configuration issues. The serial console connects to your instance without the need for a working network connection. You can access the serial console using the Amazon EC2 console or the AWS Command Line Interface (AWS CLI).Before using the serial console, you must grant access to it at the account level and create AWS Identity and Access Management (IAM) policies granting access to your IAM users. Also, every instance using the serial console must include at least one password-based user. If your instance is unreachable and you haven’t configured access to the serial console, follow the instructions in Method 2. For information on configuring the EC2 Serial Console for Linux, see Configure access to the EC2 Serial Console.Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.Method 2: Access the logs using a rescue instanceWarning: Before starting this procedure, be aware that:Stopping the instance erases any data on instance store volumes. Be sure that you backup any data on the instance store volume that you want to keep. For more information, see Determine the root device type of your instance.Stopping and then starting the instance changes the public IP address of your instance. 
It's a best practice to use an Elastic IP address instead of a public IP address when routing external traffic to your instance.1. Open the Amazon EC2 console.2. Choose Instances from the navigation pane, and select instance that you're trying to connect to.3. Choose Instance State, Stop Instance, and then select Stop. Make a note of the Instance ID.Note: If you're not using the New EC2 Experience, then select the instance that you're trying to connect to. Then choose Actions, Instance State, Stop, Stop.4. Detach the root Amazon Elastic Block Store (Amazon EBS) volume from the stopped instance. Make a note of the device name of the root EBS volume. The device name is required when you reattach the volume after troubleshooting.5. Launch a new EC2 instance in the same Availability Zone as the original instance. The new instance becomes your "rescue" instance.Note: It's a best practice to use an Amazon Linux 2 instance as the rescue instance. Using an Amazon Linux 2 instance prevents the rescue instance from booting up from the attached EBS volume because the UUID or name of the EBS volume are the same.6. After the rescue instance launches, choose Volumes from the navigation pane, and then select the detached root volume of the impaired instance.Note: If the impaired instance's root volume has Marketplace codes and the rescue instance isn't Amazon Linux, then stop the rescue instance before attaching the root EBS volume. An instance might have Marketplace codes if you launched the instance from an official RHEL or CentOS AMI, for example.7. Choose Actions, Attach Volume.8. Select Instances in the navigation pane, and then select the rescue instance.9. Choose Instance state, Start instance.Note: If you're not using the New EC2 Experience console, then select the instance that you're trying to connect to, then choose Actions, Instance State, Start.11. Connect to the rescue instance through SSH.12. Run the following command to verify that the EBS volume successfully attached to the rescue instance. In the following command, the volume attached as /dev/sdf.$ lsblkThe following is an example of the command output:NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTxvda 202:0 0 20G 0 disk └─xvda1 202:1 0 20G 0 part /xvdf 202:80 0 100G 0 disk13. Use the following commands to create a mount point directory, and then mount the attached volume to the rescue instance. In the following example, the mount point directory is /test.$ sudo su$ mkdir /test$ mount /dev/xvdf1 /test$ df -h$ cd /test14. Locate errors in the system logs and authentication-related logs that are timestamped with the times that you attempted access.Amazon Linux, RHEL, CentOS$ sudo cat /test/var/log/messagesAmazon Linux, RHEL, CentOS (Authentication-related issues)$ sudo cat /test/var/log/secureUbuntu, Debian (System Logs)$ sudo cat /test/var/log/syslogUbuntu, Debian (Authentication-related issues)$ sudo cat /test/var/log/auth.log15. After reviewing the configurations and addressing any errors, unmount and detach the EBS root volume from the rescue instance.$ umount /test16. Attach the volume to the original instance. The device name is /dev/xvda.Related informationConnect to your Linux instance using an SSH clientFollow" | https://repost.aws/knowledge-center/ec2-instance-connect-with-secondary-ip |
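Before going through the rescue-instance steps, it can help to confirm both what AWS has assigned and what the operating system has configured. A minimal bash sketch, with the instance ID as a placeholder (run the last command on the instance itself):
#!/bin/bash
# List primary/secondary private IPs and any Elastic IP associations for the instance.
INSTANCE_ID="i-0123456789abcdef0"   # placeholder

aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddresses[].[Primary,PrivateIpAddress,Association.PublicIp]' \
  --output table

# On the instance: the secondary private IP must appear here, or SSH to it will fail.
ip addr show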
How do I troubleshoot packet loss on my AWS VPN connection? | I'm having constant or intermittent packet loss and high latency issues with my AWS Virtual Private Network (AWS VPN) connection. What tests can I run to be sure that the issue isn't occurring inside my Amazon Virtual Private Cloud (Amazon VPC)? | "I'm having constant or intermittent packet loss and high latency issues with my AWS Virtual Private Network (AWS VPN) connection. What tests can I run to be sure that the issue isn't occurring inside my Amazon Virtual Private Cloud (Amazon VPC)?Short descriptionPacket loss issues vary with AWS VPN Internet traffic hops between the on-premises network and the Amazon VPC. It's a best practice to isolate and confirm where the packet loss is coming from.ResolutionCheck the source and destination hosts for resource utilization issues such as CPUUtilization, NetworkIn/NetworkOut, NetworkPacketsIn/NetworkPacketsOut to verify that you aren't hitting network limits.Use MTR to check for ICMP or TCP packet loss and latencyMTR provides a continuous updated output that allows you to analyze network performance over time. It combines the functionality of traceroute and ping in a single network diagnostic tool.Install the MTR network tool on your EC2 instance in the VPC to check for ICMP or TCP packet loss and latency.Amazon Linux:sudo yum install mtrUbuntu:sudo apt-get install mtrWindows:Download and install WinMTR.Note: For Windows OS, WinMTR doesn't support TCP-based MTR.Run the following tests between the private and public IP address for your EC2 instances and on-premises host bi-directionally. The path between nodes on a TCP/IP network can change when the direction is reversed. It's a best practice to get MTR results bi-directionally.Note:Make sure that the security group and NACL rules allow ICMP traffic from the source instance.Make sure that the test port is open on the destination instance, and the security group and NACL rules allow traffic from the source on the protocol and port.The TCP-based result determine if there is application-based packet loss or latency on the connection. MTR version 0.85 and higher have the TCP option.Private IP EC2 instance on-premises host report:mtr -n -c 200Private IP EC2 instance on-premises host report:mtr -n -T -c 200 -P 443 -m 60Public IP EC2 instance on-premises host report:mtr -n -c 200Public IP EC2 instance on-premises host report:mtr -n -T -c 200 -P 443 -m 60Use traceroute to determine latency or routing issuesThe Linux traceroute utility identifies the path taken from a client node to the destination node. The utility records the time in milliseconds for each router to respond to the request. The traceroute utility also calculates the amount of time each hop takes before reaching its destination.To install traceroute, run the following commands:Amazon Linux:sudo yum install tracerouteUbuntu:sudo apt-get install traceroutePrivate IP address of EC2 instance and on-premises host test:Amazon Linux:sudo traceroutesudo traceroute -T -p 80Windows:tracerttracetcpNote: The arguments -T -p 80 -n perform a TCP-based trace on port 80. Be sure that you have port 80 or the port that you are testing open bi-directionally.The Linux traceroute option to specify a TCP-based trace instead of ICMP is useful because most internet devices deprioritize ICMP-based trace requests. A few timed-out requests are common, so watch for packet loss to the destination or in the last hop of the route. 
Packet loss over several hops might indicate an issue.Note: It's a best practice to run the traceroute command bi-directionally from the client to the server and then from the server back to the client.Use hping3 to determine end-to-end TCP packet loss and latency problemsHping3 is a command-line oriented TCP/IP packet assembler and analyzer that measures end-to-end packet loss and latency over a TCP connection. In addition to ICMP echo requests, hping3 supports TCP, UDP, and RAW-IP protocols. Hping3 also includes a traceroute mode that can send files between a covered channel. Hping3 is designed to scan hosts, assist with penetration testing, test intrusion detection systems, and send files between hosts.MTRs and traceroute capture per-hop latency. However, hping3 yields results that show end-to-end min/avg/max latency over TCP in addition to packet loss. To install hping3, run the following commands:Amazon Linux:sudo yum --enablerepo=epel install hping3Ubuntu:sudo apt-get install hping3Run the following commands:hping3 -S -c 50 -V <Public IP of EC2 instance or on-premises host>hping3 -S -c 50 -V <Private IP of EC2 instance or on-premises host>Note: By default, hping3 sends TCP headers to the target host's port 0 with a winsize of 64 without the tcp flag on.Packet capture samples using tcpdump or WiresharkPerforming simultaneous packet captures between your test EC2 instance in the VPC and your on-premises host when duplicating the issue helps to determine if there are any application or network layer issues on the VPN connection. You can install tcpdump on your Linux instance or Wireshark on a Windows instance to perform packet captures.Install tcpdump on Amazon Linux:sudo yum install tcpdumpInstall tcpdump on Ubuntu:sudo apt-get install tcpdumpInstall Wireshark on Windows OS:Install Wireshark and take a packet capture.Explicit Congestion Notification (ECN)For connecting to Windows instances, enabling ECN might cause packet losses or performance issues. Disable ECN to improve performance.Run the following command to determine if ECN capability is enabled:netsh interface tcp show globalIf ECN capability is enabled, run the following command to disable it:netsh interface tcp set global ecncapability=disabledRelated informationHow do I troubleshoot network performance issues between Amazon EC2 Linux instances in a VPC and an on-premises host over the internet gateway?How can I determine whether my DNS queries to the Amazon provided DNS server are failing due to VPC DNS throttling?Follow" | https://repost.aws/knowledge-center/vpn-packet-loss |
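Because the loss is often intermittent, it helps to capture the MTR reports on a schedule so they can be matched against CloudWatch metrics later. A minimal bash sketch, with the far-end private IP and test port as placeholders; run it from both sides of the tunnel:
#!/bin/bash
# Append timestamped ICMP and TCP MTR reports to a log file every 5 minutes.
TARGET="10.0.1.10"   # placeholder: private IP of the host on the other side of the VPN
PORT=443             # placeholder
LOG=/tmp/vpn-mtr.log

while true; do
  {
    echo "===== $(date -u +%Y-%m-%dT%H:%M:%SZ) ICMP ====="
    mtr -n -r -c 100 "$TARGET"
    echo "===== $(date -u +%Y-%m-%dT%H:%M:%SZ) TCP/$PORT ====="
    mtr -n -r -T -P "$PORT" -c 100 "$TARGET"
  } >> "$LOG" 2>&1
  sleep 300
done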
How do I set up an AWS Network Firewall with a NAT gateway? | I want to configure my AWS Network Firewall to inspect traffic using a NAT gateway. | "I want to configure my AWS Network Firewall to inspect traffic using a NAT gateway.Short descriptionAWS Network Firewall provides more granular control over traffic to and from the resources inside your Amazon Virtual Private Cloud (Amazon VPC). To protect your Amazon VPC resources, you can deploy your Network Firewall endpoints in their own subnets and route the workload instance traffic through them. This can be done by:Creating a VPCCreating a firewallConfiguring the traffic routingNote: Network Firewall cannot inspect workloads in the same subnet where the firewall endpoints are deployed.ResolutionCreating a VPCOpen the Amazon VPC console.On the VPC dashboard, click Create VPC.Under VPC settings, enter the following:Choose VPC and more.Under Name tag auto-generation, enter a name for the VPC. For this example, the VPC is named Protected_VPC_10.0.0.0_16-vpc. If the Auto-generate option is selected, the name will be added as a name tag to all resources in the VPC.For IPv4 CIDR block, enter 10.0.0.0/16.For IPv6 CIDR block, choose No IPv6 CIDR block.For Tenancy, choose Default.For Number of Availability Zones (AZs), choose 2.Under Customize AZs, choose two Availability Zones. For this example, us-east-2a and us-east-2b are selected.For Number of public subnets, choose 2.For Number of private subnets, choose 4. Two of the private subnets are for the firewall and two are for the workload subnets.For NAT gateways ($), choose 1 per AZ. The NAT gateways are deployed in the public subnets automatically.For VPC endpoints, choose None.Choose Create VPC.Name the subnets according to their purpose:The two public subnets are for the NAT gateways and are named Public_Subnet_AZa and Public_Subnet_AZb for this example.For the private subnets, two are for the firewall endpoints and are named Firewall_Subnet_AZa and Firewall_Subnet_AZb for this example.The other two private subnets are for the workload endpoints and are named Private_Subnet_AZa and Private_Subnet_AZb for this example.Create a firewallOn the navigation pane, under Network Firewall, choose Firewalls.Choose Create firewall.Under Create firewall, enter the following:Enter a name for the firewall. For this example, the firewall is named Network-Firewall-Test.For VPC, choose Protected_VPC_10.0.0.0_16-vpc.For Firewall subnets, choose the first Availability Zone (us-east-2a) and choose Firewall_Subnet_AZa for the subnet. Then, choose Add new subnet and repeat for the second Availability Zone (us-east-2b) and choose Firewall_Subnet_AZb for the subnet.For Associated firewall policy, choose Create and associate an empty firewall policy.For New firewall policy name, enter a name for the new policy.Choose Create firewall. Each subnet must have a unique routing table. The four private subnets have a unique routing table associated with it, while the public subnets share a routing table. 
You must create a new routing table with a static route to an internet gateway and associate it to one of the public subnets.Configure the traffic routingThe traffic flows as follows:Traffic initiated from workload instance in AZa is forwarded to firewall endpoint in AZa.The firewall endpoint in AZa will route the traffic to the NAT gateway in AZa.NAT gateway in AZa forwards the traffic to the internet gateway associated with the VPC.The internet gateway forwards the traffic out to the internet.The reverse traffic follows the same path in the opposite direction:Return traffic from the internet reaches the internet gateway attached to the VPC. There can only be one internet gateway attached to a VPC.The internet gateway forwards the traffic to the NAT gateway in AZa. The internet gateway makes this decision based on the workload Availability Zone. Because the destination of the traffic is in AZa, the internet gateway picks the NAT gateway in AZa to forward the traffic. There is no need to maintain a route table for internet gateway.The NAT gateway in AZa forward the traffic to the firewall endpoint in AZa.The firewall endpoint in AZa forward the traffic to the workload in AZa.Note: Internet gateways can identify the NAT gateway for packets returning from the internet to the workload instances.After creating the VPC and firewall, you must configure the routing tables. When configuring the routing tables, keep the following in mind:The private subnet in AZa (Private_Subnet_AZa) forwards all traffic destined to the internet to the firewall endpoint in AZa (Firewall_Subnet_AZa). This is repeated with the private subnet in AZb and the firewall endpoint in AZb.The firewall subnet in AZa (Firewall_Subnet_AZa) forwards all traffic destined to the internet to a NAT gateway in AZa ( Public_Subnet_AZa). This is repeated with the firewall subnet in AZb and the NAT gateway in AZb.The public subnet in AZa (Public_Subnet_AZa) forwards all traffic to the internet gateway attached to the VPC.Return traffic follows the same path in reverse.Note: Traffic is kept in the same Availability Zone so that the network firewall has both the egress and ingress traffic route through the same firewall endpoint. 
This allows the firewall endpoints in each Availability Zone to make stateful inspections of the packets.The following are example configurations of the routing tables:Public_Subnet_RouteTable_AZa (Subnet association: Public_Subnet_AZa)DestinationTarget0.0.0.0/0Internet gateway10.0.0.0/16Local10.0.128.0/20Firewall endpoint in AZaNote: In this example, 10.0.128.0/20 is the CIDR of Private_Subnet_AZa.Public_Subnet_RouteTable_AZb (Subnet association: Public_Subnet_AZb)DestinationTarget0.0.0.0/0Internet gateway10.0.0.0/16Local10.0.16.0/20Firewall endpoint in AZbNote: In this example, 10.0.16.0/20 is the CIDR of Private_Subnet_AZb.Firewall_Subnet_RouteTable_AZa (Subnet association: Firewall_Subnet_AZa)DestinationTarget0.0.0.0/0NAT gateway in Public_Subnet_AZa10.0.0.0/16LocalFirewall_Subnet_RouteTable_AZb (Subnet association: Firewall_Subnet_AZb)DestinationTarget0.0.0.0/0NAT gateway in Public_Subnet_AZb10.0.0.0/16LocalPrivate_Subnet_RouteTable_AZa (Subnet association: Private_Subnet_AZa)DestinationTarget0.0.0.0/0Firewall endpoint in AZa10.0.0.0/16LocalPrivate_Subnet_RouteTable_AZb (Subnet association: Private_Subnet_AZb)DestinationTarget0.0.0.0/0Firewall endpoint in AZb10.0.0.0/16LocalTo verify if your routing was configured correctly, you can deploy an EC2 instance in one of your private subnets to test your internet connectivity. Without any rules configured in the network firewall policy, traffic will not be inspected and can reach the internet. After confirming your routing, security group, and network access control lists (network ACLs) are configured, add rules to your firewall policy.Note: You can also set up Network Firewall to route traffic from the internet through the firewall, then the NAT gateway. For more information, see Architecture with an internet gateway and a NAT gateway.Related informationLogging and monitoring in AWS Network FirewallRoute table conceptsDeployment models for AWS Network Firewall with VPC routing enhancementsFollow" | https://repost.aws/knowledge-center/network-firewall-set-up-with-nat-gateway |
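The route tables above can also be built from the CLI once the firewall endpoints exist. The following bash sketch looks up the endpoint ID for one Availability Zone and points a private subnet's default route at it, as in the Private_Subnet_RouteTable_AZa example; the firewall name, AZ, and route table ID are placeholders. Repeat per AZ so traffic stays symmetric:
#!/bin/bash
FIREWALL_NAME="Network-Firewall-Test"   # placeholder
AZ="us-east-2a"                         # placeholder
PRIVATE_RTB="rtb-0123456789abcdef0"     # placeholder

# Find the firewall endpoint (a VPC endpoint) deployed in this AZ
VPCE_ID=$(aws network-firewall describe-firewall \
  --firewall-name "$FIREWALL_NAME" \
  --query "FirewallStatus.SyncStates.\"$AZ\".Attachment.EndpointId" \
  --output text)
echo "Firewall endpoint in $AZ: $VPCE_ID"

# Default route for the private subnet goes to the firewall endpoint
aws ec2 create-route \
  --route-table-id "$PRIVATE_RTB" \
  --destination-cidr-block 0.0.0.0/0 \
  --vpc-endpoint-id "$VPCE_ID"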
How do I assign a static hostname to an Amazon EC2 instance running SLES? | "I changed the hostname of my Amazon Elastic Compute Cloud (Amazon EC2) instance. However, when I reboot or stop and then start the instance, the hostname changes back. How do I make the hostname persist?" | "I changed the hostname of my Amazon Elastic Compute Cloud (Amazon EC2) instance. However, when I reboot or stop and then start the instance, the hostname changes back. How do I make the hostname persist?Short descriptionTo make a hostname persist when rebooting or stopping and starting your EC2 instance, add the hostname to the appropriate configuration files on the instance.Note: The following steps apply to SLES. For instructions that apply to other distributions, see one of the following:Changing the system hostnameHow do I assign a static hostname to an Amazon EC2 instance running RHEL 5 or 6, CentOS 5 or 6, or Amazon Linux?How do I assign a static hostname to an Amazon EC2 instance running Ubuntu Linux?How do I assign a static hostname to an Amazon EC2 instance running RHEL 7 or CentOS 7?Resolution1. Connect to your EC2 Linux instance using SSH. For more information, see Connecting to your Linux instance using SSH.2. Switch to the root user.sudo su3. Use the hostnamectl command to set the new hostname. Replace new-hostname with your hostname.SLES 11:hostname new-hostnameSLES 12 and SLES 15:hostnamectl set-hostname new-hostname4. Use the vim editor to update the /etc/hosts file with the new hostname.vim /etc/hosts5. Find the localhost string and append the new hostname. Again, replace new-hostname with your hostname.127.0.0.1 localhost new-hostname6. Save and exit the vim editor by pressing Shift + : (colon) to open a new command entry box in the vim editor. Type wq and then press Enter to save changes and exit the vim editor.SLES 11 (additional step for this OS version only)Use the vim editor to update the /etc/HOSTNAME file with the new hostname.vim /etc/HOSTNAMEFind the current hostname string and replace it with the new hostname.Press Shift + : (colon) to open a new command entry box in the vim editor, type wq, and then press Enter to save changes and exit vim.7. Use the vim editor to update the /etc/cloud/cloud.cfg file on your SLES Linux instance.vim /etc/cloud/cloud.cfg8. Find the preserve_hostname string and change the default setting to true so that the hostname is preserved between restarts or reboots.preserve_hostname: true9. Save and exit the vim editor by pressing Shift + : (colon) to open a new command entry box in the vim editor. Type wq and then press Enter to save changes and exit the vim editor.10. Reboot the instance.sudo reboot11. Connect to your EC2 instance, and then run the Linux hostname command without any parameters to verify that the hostname change persisted.hostnameThe command returns the new hostname.Related informationChanging the hostname of your Linux instanceFollow" | https://repost.aws/knowledge-center/linux-static-hostname-suse |
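For SLES 12 and 15, the steps above can be collapsed into one script. This is a rough sketch to run as root; the hostname is a placeholder, and it assumes the standard 127.0.0.1 localhost entry and a cloud-init config at /etc/cloud/cloud.cfg (running it more than once appends the hostname to /etc/hosts again):
#!/bin/bash
NEW_HOSTNAME="new-hostname"   # placeholder

hostnamectl set-hostname "$NEW_HOSTNAME"

# Append the hostname to the localhost entry in /etc/hosts
sed -i "s/^127\.0\.0\.1.*localhost.*/& ${NEW_HOSTNAME}/" /etc/hosts

# Keep cloud-init from resetting the hostname on reboot
grep -q '^preserve_hostname' /etc/cloud/cloud.cfg \
  && sed -i 's/^preserve_hostname:.*/preserve_hostname: true/' /etc/cloud/cloud.cfg \
  || echo 'preserve_hostname: true' >> /etc/cloud/cloud.cfg

reboot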
Why do I get Access Denied errors when I use a Lambda function to upload files to an Amazon S3 bucket in another AWS account? | I get an Access Denied error when I use an AWS Lambda function to upload files to an Amazon Simple Storage Service (Amazon S3) bucket. The Amazon S3 bucket is in another AWS account. | "I get an Access Denied error when I use an AWS Lambda function to upload files to an Amazon Simple Storage Service (Amazon S3) bucket. The Amazon S3 bucket is in another AWS account. Short descriptionIf the permissions between a Lambda function and an Amazon S3 bucket are incomplete or incorrect, then Lambda returns an Access Denied error.To set up permissions between a Lambda function in one account (account 1) and an S3 bucket in another account (account 2), do the following:1. (In account 1) Create a Lambda execution role that allows the Lambda function to upload objects to Amazon S3.2. (In account 2) Modify the S3 bucket's bucket policy to allow the Lambda function to upload objects to the bucket.ResolutionImportant: The following solution requires a Lambda function in one AWS account and an S3 bucket in another account.Example code for a Lambda function that uploads files to an S3 bucket (Python version 3.8)
import json
import boto3

s3 = boto3.client('s3')

def lambda_handler(event,context):
    bucket = 'AccountBBucketName'
    transactionToUpload = {}
    transactionToUpload['transactionId'] = '12345'
    transactionToUpload['type'] = 'PURCHASE'
    transactionToUpload['amount'] = 20
    transactionToUpload['customerId'] = 'CID-1111'
    filename = 'CID-1111'+'.json'
    uploadByteStream = bytes(json.dumps(transactionToUpload).encode('UTF-8'))
    s3.put_object(Bucket=bucket, Key=filename, Body=uploadByteStream, ACL='bucket-owner-full-control')
    print("Put Complete")
Note: Before passing the bucket-owner-full-control ACL in the upload request, confirm that ACLs aren't deactivated on the bucket. Do this in the ownership settings of the S3 bucket. For more information, see Controlling ownership of objects and disabling ACLs for your bucket. (In account 1) Create a Lambda execution role that allows the Lambda function to upload objects to Amazon S31. Create an AWS Identity and Access Management (IAM) role for your Lambda function.2. Copy the IAM role's Amazon Resource Name (ARN).Note: You must get the IAM role's ARN before you can update the S3 bucket's bucket policy. One way to get the IAM role's ARN is to run the AWS Command Line Interface (AWS CLI) get-role command. If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.3. Attach a policy to the IAM role that grants the permission to upload objects (s3:PutObject) to the bucket in Account 2.Example IAM policy that grants an IAM role s3:PutObject and s3:PutObjectAcl permissions{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::AccountBBucketName/*" } ]}4. Change your Lambda function's execution role to the IAM role that you created. 
For instructions, see Configuring Lambda function options.(In account 2) Modify the S3 bucket's bucket policy to allow the Lambda function to upload objects to the bucketUpdate the bucket policy so that it specifies the Lambda execution role's ARN as a Principal that has access to the action s3:PutObject.Example S3 bucket policy that allows a Lambda function to upload objects to the bucketNote: The following policy also grants the Lambda function's execution role the permission to s3:PutObjectAcl.{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::AccountA:role/AccountARole" }, "Action": [ "s3:PutObject", "s3:PutObjectAcl" ], "Resource": "arn:aws:s3:::AccountBBucketName/*", "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } } } ] }Related informationHow do I troubleshoot 403 Access Denied errors from Amazon S3?Follow" | https://repost.aws/knowledge-center/access-denied-lambda-s3-bucket |
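Two quick CLI checks can narrow down which side is missing permissions before you change any code. A bash sketch with the role ARN and bucket name as placeholders; run the second command with credentials from account 2, and note that the policy simulation only evaluates the role's identity-based policies, not the bucket policy:
#!/bin/bash
ROLE_ARN="arn:aws:iam::111111111111:role/AccountARole"   # placeholder (account 1)
BUCKET="AccountBBucketName"                              # placeholder (account 2)

# Does the execution role's own policy allow the S3 actions?
aws iam simulate-principal-policy \
  --policy-source-arn "$ROLE_ARN" \
  --action-names s3:PutObject s3:PutObjectAcl \
  --resource-arns "arn:aws:s3:::${BUCKET}/*" \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' --output table

# BucketOwnerEnforced means ACLs are disabled, so don't send x-amz-acl in the upload.
aws s3api get-bucket-ownership-controls --bucket "$BUCKET" \
  --query 'OwnershipControls.Rules[].ObjectOwnership' --output text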
How do I build a Lambda deployment package for C# .NET? | "I created an AWS Lambda function deployment package in C#. However, when I try to invoke the function, Lambda returns one of the following errors: "module not found," "module cannot be loaded," or "cannot find class." How do I troubleshoot the issue?" | "I created an AWS Lambda function deployment package in C#. However, when I try to invoke the function, Lambda returns one of the following errors: "module not found," "module cannot be loaded," or "cannot find class." How do I troubleshoot the issue?Short descriptionIf your C# Lambda function returns any of the following errors, then the function's deployment package's folder structure isn't configured correctly:module not foundmodule cannot be loadedcannot find classTo resolve the issue, you must build a C# Lambda function deployment package that has the correct folder structure. There are two ways you can build and deploy a C# Lambda function deployment package with the correct folder structure:The .NET Core command line interface (.NET Core CLI) and the Amazon.Lambda.Tools extensionThe AWS Toolkit for Visual StudioResolutionTo use the .NET Core CLI and the Amazon.Lambda.Tools extension1. Install the default Lambda .NET templates and add the Amazon.Lambda.Tools extension to the .NET Core CLI by running the following command:dotnet new -i 'Amazon.Lambda.Templates::*'2. Either create a new Lambda function using one of the templates you installed, or add the Amazon.Lambda.Tools extension to an existing project.To create a new Lambda function using one of the templates you installedFrom the .NET Core CLI in the Lambda function's project root directory, run the following command:Important: Replace {function-name} with your function's name. Replace {aws-region} with the AWS Region that you want your function in.dotnet new lambda.EmptyFunction --name {function-name} --profile default --region {aws-region}To add the Amazon.Lambda.Tools extension to an existing projectFrom the .NET Core CLI in the Lambda function's project root directory, run the following command:dotnet tool install -g Amazon.Lambda.ToolsNote: The Amazon.Lambda.Tools extension will prompt you to provide any required parameters that are missing.3. Download your deployment package's dependencies by running the following command:Important: Replace {your-function-directory} with your function directory's name.cd {your-function-directory}dotnet restoreNote: If you get a not compatible error, make sure that you're using a version of .NET Core that's compatible with Lambda tools. To download earlier versions of .NET Core, see the .NET Download Archives website.4. Build your Lambda deployment package by running the following command:dotnet lambda deploy-functionNote: Or, you can build a Lambda deployment package from scratch and deploy it separately. For instructions, see Deploying an AWS Lambda project with the .NET Core CLI.5. The .NET Core CLI prompts you to enter a function name and assign an AWS Identity and Access Management (IAM) role to the function. Enter a name for your function and assign the function an IAM role. Your function is then created.To use the AWS Toolkit for Visual Studio1. Download and install the AWS Toolkit for Visual Studio.2. Create and build an AWS Lambda Project (.NET Core) project. 
For instructions, see Using the AWS Lambda templates in the AWS Toolkit for Visual Studio and AWS Toolkit for Visual Studio in the AWS Lambda developer guide.Important: Make sure that the function handler signature is in the following format:ASSEMBLY::TYPE::METHODTo confirm that the function is formatted correctly, review the files under your function's src/{function-name} directory. For more information, see .NET Core CLI and AWS Lambda function handler in C#.Follow" | https://repost.aws/knowledge-center/build-lambda-deployment-dotnet |
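When the handler errors persist, inspecting the zip layout usually reveals the problem: the assembly and its .deps.json must sit at the root of the package, not inside a nested folder. A bash sketch, with the project name as a placeholder and the standard src/{function-name} layout assumed:
#!/bin/bash
PROJECT="my-function"   # placeholder

cd "src/${PROJECT}" || exit 1

# Build the deployment package with the Amazon.Lambda.Tools CLI
dotnet lambda package --configuration Release --output-package "/tmp/${PROJECT}.zip"

# The assembly, <project>.deps.json, and <project>.runtimeconfig.json should appear
# at the top level of the zip.
unzip -l "/tmp/${PROJECT}.zip" | head -n 20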
How can I troubleshoot the pod status in Amazon EKS? | My Amazon Elastic Kubernetes Service (Amazon EKS) pods that are running on Amazon Elastic Compute Cloud (Amazon EC2) instances or on a managed node group are stuck. I want to get my pods in the Running state. | "My Amazon Elastic Kubernetes Service (Amazon EKS) pods that are running on Amazon Elastic Compute Cloud (Amazon EC2) instances or on a managed node group are stuck. I want to get my pods in the Running state.ResolutionImportant: The following steps apply only to pods launched on Amazon EC2 instances or a managed node group. These steps don't apply to pods launched on AWS Fargate.Find out the status of your pod1. To get the status of your pod, run the following command:$ kubectl get pod2. To get information from the Events history of your pod, run the following command:$ kubectl describe pod YOUR_POD_NAMENote: The example commands covered in the following steps are in the default namespace. For other namespaces, append the command with -n YOURNAMESPACE.3. Based on the status of your pod, complete the steps in one of the following sections: Your pod is in the Pending state, Your pod is in the Waiting state, or Your pod is in the CrashLoopBackOff state.Your pod is in the Pending statePods in the Pending state can't be scheduled onto a node. This can occur due to insufficient resources or with the use of hostPort. For more information, see Pod phase in the Kubernetes documentation.If you have insufficient resources available on the worker nodes, then consider deleting unnecessary pods. You can also add more resources on the worker nodes. You can use the Kubernetes Cluster Autoscaler to automatically scale your worker node group when resources in your cluster are scarce.Insufficient CPU$ kubectl describe pod frontend-cpu Name: frontend-cpuNamespace: defaultPriority: 0Node: <none>Labels: <none>Annotations: kubernetes.io/psp: eks.privilegedStatus: Pending...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 22s (x14 over 13m) default-scheduler 0/3 nodes are available: 3 Insufficient cpu.Insufficient Memory$ kubectl describe pod frontend-memoryName: frontend-memoryNamespace: defaultPriority: 0Node: <none>Labels: <none>Annotations: kubernetes.io/psp: eks.privilegedStatus: Pending...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 80s (x14 over 15m) default-scheduler 0/3 nodes are available: 3 Insufficient memory.If you're defined a hostPort for your pod, then follow these best practices:Don't specify a hostPort unless it's necessary, because the hostIP, hostPort, and protocol combination must be unique.If you specify a hostPort, then schedule the same number of pods as there are worker nodes.Note: There is a limited number of places that a pod can be scheduled when you bind a pod to a hostPort.The following example shows the output of the describe command for frontend-port-77f67cff67-2bv7w, which is in the Pending state. 
The pod is unscheduled because the requested host port isn't available for worker nodes in the cluster.Port unavailable$ kubectl describe pod frontend-port-77f67cff67-2bv7w Name: frontend-port-77f67cff67-2bv7wNamespace: defaultPriority: 0Node: <none>Labels: app=frontend-port pod-template-hash=77f67cff67Annotations: kubernetes.io/psp: eks.privilegedStatus: PendingIP: IPs: <none>Controlled By: ReplicaSet/frontend-port-77f67cff67Containers: app: Image: nginx Port: 80/TCP Host Port: 80/TCP...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 11s (x7 over 6m22s) default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.If the pods are unable to schedule because the nodes have taints that the pod can't allow, then the example output is similar to the following:$ kubectl describe pod nginx Name: nginxNamespace: defaultPriority: 0Node: <none>Labels: run=nginxAnnotations: kubernetes.io/psp: eks.privilegedStatus: Pending...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 8s (x10 over 9m22s) default-scheduler 0/3 nodes are available: 3 node(s) had taint {key1: value1}, that the pod didn't tolerate.You can check your nodes taints with following command:$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints NAME TAINTSip-192-168-4-78.ap-southeast-2.compute.internal [map[effect:NoSchedule key:key1 value:value1]]ip-192-168-56-162.ap-southeast-2.compute.internal [map[effect:NoSchedule key:key1 value:value1]]ip-192-168-91-249.ap-southeast-2.compute.internal [map[effect:NoSchedule key:key1 value:value1]]If you want to retain your node taints, then you can specify a toleration for a pod in the PodSpec. For more information, see the Concepts section in the Kubernetes documentation.-or-Remove the node taint by appending - at the end of taint value:$ kubectl taint nodes NODE_Name key1=value1:NoSchedule-If your pods are still in the Pending state after trying the preceding steps, then complete the steps in the Additional troubleshooting section.Your container is in the Waiting stateA container in the Waiting state is scheduled on a worker node (for example, an EC2 instance), but can't run on that node.Your container can be in the Waiting state because of an incorrect Docker image or incorrect repository name. Or, your pod could be in the Waiting state because the image doesn't exist or you lack permissions.If you have the incorrect Docker image or repository name, then complete the following:1. Confirm that the image and repository name is correct by logging into Docker Hub, Amazon Elastic Container Registry (Amazon ECR), or another container image repository.2. Compare the repository or image from the repository with the repository or image name specified in the pod specification.If the image doesn't exist or you lack permissions, then complete the following:1. Verify that the image specified is available in the repository and that the correct permissions are configured to allow the image to be pulled.2. To confirm that image pull is possible and to rule out general networking and repository permission issues, manually pull the image. You must pull the image from the Amazon EKS worker nodes with Docker. For example:$ docker pull yourImageURI:yourImageTag3. 
To verify that the image exists, check that both the image and tag are present in either Docker Hub or Amazon ECR.Note: If you're using Amazon ECR, then verify that the repository policy allows image pull for the NodeInstanceRole. Or, verify that the AmazonEC2ContainerRegistryReadOnly role is attached to the policy.The following example shows a pod in the Pending state with the container in the Waiting state because of an image pull error:$ kubectl describe po web-testName: web-testNamespace: defaultPriority: 0PriorityClassName: <none>Node: ip-192-168-6-51.us-east-2.compute.internal/192.168.6.51Start Time: Wed, 22 Jul 2021 08:18:16 +0200Labels: app=web-testAnnotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"web-test"},"name":"web-test","namespace":"default"},"spec":{... kubernetes.io/psp: eks.privilegedStatus: PendingIP: 192.168.1.143Containers: web-test: Container ID: Image: somerandomnonexistentimage Image ID: Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: ErrImagePull...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 66s default-scheduler Successfully assigned default/web-test to ip-192-168-6-51.us-east-2.compute.internal Normal Pulling 14s (x3 over 65s) kubelet, ip-192-168-6-51.us-east-2.compute.internal Pulling image "somerandomnonexistentimage" Warning Failed 14s (x3 over 55s) kubelet, ip-192-168-6-51.us-east-2.compute.internal Failed to pull image "somerandomnonexistentimage": rpc error: code = Unknown desc = Error response from daemon: pull access denied for somerandomnonexistentimage, repository does not exist or may require 'docker login' Warning Failed 14s (x3 over 55s) kubelet, ip-192-168-6-51.us-east-2.compute.internal Error: ErrImagePullIf your containers are still in the Waiting state after trying the preceding steps, then complete the steps in the Additional troubleshooting section.Your pod is in the CrashLoopBackOff statePods stuck in CrashLoopBackOff are starting and crashing repeatedly.If you receive the "Back-Off restarting failed container" output message, then your container probably exited soon after Kubernetes started the container.To look for errors in the logs of the current pod, run the following command:$ kubectl logs YOUR_POD_NAMETo look for errors in the logs of the previous pod that crashed, run the following command:$ kubectl logs --previous YOUR-POD_NAMENote: For a multi-container pod, you can append the container name at the end. For example:$ kubectl logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]If the Liveness probe isn't returning a successful status, then verify that the Liveness probe is configured correctly for the application. 
For more information, see Configure Probes in the Kubernetes documentation.The following example shows a pod in a CrashLoopBackOff state because the application exits after starting, Notice State, Last State, Reason, Exit Code and Restart Count along with Events.$ kubectl describe pod crash-app-b9cf4587-66ftw Name: crash-app-b9cf4587-66ftwNamespace: defaultPriority: 0Node: ip-192-168-91-249.ap-southeast-2.compute.internal/192.168.91.249Start Time: Tue, 12 Oct 2021 12:24:44 +1100Labels: app=crash-app pod-template-hash=b9cf4587Annotations: kubernetes.io/psp: eks.privilegedStatus: RunningIP: 192.168.82.93IPs: IP: 192.168.82.93Controlled By: ReplicaSet/crash-app-b9cf4587Containers: alpine: Container ID: containerd://a36709d9520db92d7f6d9ee02ab80125a384fee178f003ee0b0fcfec303c2e58 Image: alpine Image ID: docker.io/library/alpine@sha256:e1c082e3d3c45cccac829840a25941e679c25d438cc8412c2fa221cf1a824e6a Port: <none> Host Port: <none> State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 12 Oct 2021 12:26:21 +1100 Finished: Tue, 12 Oct 2021 12:26:21 +1100 Ready: False Restart Count: 4 ...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m30s default-scheduler Successfully assigned default/crash-app-b9cf4587-66ftw to ip-192-168-91-249.ap-southeast-2.compute.internal Normal Pulled 2m25s kubelet Successfully pulled image "alpine" in 5.121853269s Normal Pulled 2m22s kubelet Successfully pulled image "alpine" in 1.894443044s Normal Pulled 2m3s kubelet Successfully pulled image "alpine" in 1.878057673s Normal Created 97s (x4 over 2m25s) kubelet Created container alpine Normal Started 97s (x4 over 2m25s) kubelet Started container alpine Normal Pulled 97s kubelet Successfully pulled image "alpine" in 1.872870869s Warning BackOff 69s (x7 over 2m21s) kubelet Back-off restarting failed container Normal Pulling 55s (x5 over 2m30s) kubelet Pulling image "alpine" Normal Pulled 53s kubelet Successfully pulled image "alpine" in 1.858871422sExample of liveness probe failing for the pod:$ kubectl describe pod nginxName: nginxNamespace: defaultPriority: 0Node: ip-192-168-91-249.ap-southeast-2.compute.internal/192.168.91.249Start Time: Tue, 12 Oct 2021 13:07:55 +1100Labels: app=nginxAnnotations: kubernetes.io/psp: eks.privilegedStatus: RunningIP: 192.168.79.220IPs: IP: 192.168.79.220Containers: nginx: Container ID: containerd://950740197c425fa281c205a527a11867301b8ec7a0f2a12f5f49d8687a0ee911 Image: nginx Image ID: docker.io/library/nginx@sha256:06e4235e95299b1d6d595c5ef4c41a9b12641f6683136c18394b858967cd1506 Port: 80/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Completed Exit Code: 0 Started: Tue, 12 Oct 2021 13:10:06 +1100 Finished: Tue, 12 Oct 2021 13:10:13 +1100 Ready: False Restart Count: 5 Liveness: http-get http://:8080/ delay=3s timeout=1s period=2s #success=1 #failure=3 ...Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m47s default-scheduler Successfully assigned default/nginx to ip-192-168-91-249.ap-southeast-2.compute.internal Normal Pulled 2m44s kubelet Successfully pulled image "nginx" in 1.891238002s Normal Pulled 2m35s kubelet Successfully pulled image "nginx" in 1.878230117s Normal Created 2m25s (x3 over 2m44s) kubelet Created container nginx Normal Started 2m25s (x3 over 2m44s) kubelet Started container nginx Normal Pulled 2m25s kubelet Successfully pulled image "nginx" in 1.876232575s Warning Unhealthy 2m17s (x9 over 2m41s) kubelet 
Liveness probe failed: Get "http://192.168.79.220:8080/": dial tcp 192.168.79.220:8080: connect: connection refused Normal Killing 2m17s (x3 over 2m37s) kubelet Container nginx failed liveness probe, will be restarted Normal Pulling 2m17s (x4 over 2m46s) kubelet Pulling image "nginx"If your pods are still in the CrashLoopBackOff state after trying the preceding steps, then complete the steps in the Additional troubleshooting section.Additional troubleshootingIf your pod is still stuck after completing steps in the previous sections, then try the following steps:1. To confirm that worker nodes exist in the cluster and are in Ready status, run the following command:$ kubectl get nodesExample output:NAME STATUS ROLES AGE VERSIONip-192-168-6-51.us-east-2.compute.internal Ready <none> 25d v1.21.2-eks-5047edip-192-168-86-33.us-east-2.compute.internal Ready <none> 25d v1.21.2-eks-5047edIf the nodes are NotReady, then see How can I change the status of my nodes from NotReady or Unknown status to Ready status? or can't join the cluster, How can I get my worker nodes to join my Amazon EKS cluster?2. To check the version of the Kubernetes cluster, run the following command:$ kubectl version --shortExample output:Client Version: v1.21.2-eks-5047edServer Version: v1.21.2-eks-c0eccc3. To check the version of the Kubernetes worker node, run the following command:$ kubectl get node -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersionExample output:NAME VERSIONip-192-168-6-51.us-east-2.compute.internal v1.21.2-eks-5047edip-192-168-86-33.us-east-2.compute.internal v1.21.2-eks-5047ed4. Confirm that the Kubernetes server version for the cluster matches the version of the worker nodes within an acceptable version skew (from the Kubernetes documentation). Use the output from the preceding steps 2 and 3 as the basis for this comparison.Important: The patch versions can be different (for example, v1.21.x for the cluster vs. v1.21.y for the worker node).If the cluster and worker node versions are incompatible, then create a new node group with eksctl (see the eksctl tab) or AWS CloudFormation (see the Self-managed nodes tab).-or-Create a new managed node group (Kubernetes: v1.21, platform: eks.1 and above) using a compatible Kubernetes version. Then, delete the node group with the incompatible Kubernetes version.5. Confirm that the Kubernetes control plane can communicate with the worker nodes by verifying firewall rules against recommended rules in Amazon EKS security group considerations. Then, verify that the nodes are in Ready status.Follow" | https://repost.aws/knowledge-center/eks-pod-status-troubleshooting |
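As a quick companion to the checks described above, the following shell sketch strings the same kubectl commands together. The pod name and namespace are placeholders; adjust them before running:

```bash
#!/bin/bash
# Minimal troubleshooting sketch for a stuck pod; POD_NAME and NAMESPACE are placeholders.
POD_NAME=your-pod-name
NAMESPACE=default

# Pod status and recent events
kubectl get pod "$POD_NAME" -n "$NAMESPACE"
kubectl describe pod "$POD_NAME" -n "$NAMESPACE" | grep -A 20 "Events:"

# Logs from the current container and the previously crashed one (CrashLoopBackOff)
kubectl logs "$POD_NAME" -n "$NAMESPACE"
kubectl logs --previous "$POD_NAME" -n "$NAMESPACE"

# Node readiness, taints, and version skew against the control plane
kubectl get nodes
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
kubectl version --short
kubectl get node -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
```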
How can I configure NAT on my VPC CIDR for traffic traversing a VPN connection? | I have an AWS VPN connection to a VPC that's managed by Amazon Virtual Private Cloud (Amazon VPC) where the network CIDRs overlap. I want to configure NAT for my AWS VPN. | "I have an AWS VPN connection to a VPC that's managed by Amazon Virtual Private Cloud (Amazon VPC) where the network CIDRs overlap. I want to configure NAT for my AWS VPN.Short descriptionAWS VPN doesn't provide a managed option to apply NAT to VPN traffic. Instead, manually configure NAT using a software-based VPN solution. There are many of these VPN solutions in the AWS Marketplace.NAT can also be manually configured on the Amazon Elastic Compute Cloud (EC2) Linux instance that is running a software-based VPN solution along with iptables.ResolutionThis example configuration uses two VPCs. The first is an AWS managed VPN and the second is a software-based VPN solution that is used as the customer gateway.Before you begin, confirm that you set up an AWS Site-to-Site VPN connection. Then, install your selected VPN solution on the EC2 Linux instance by using your distribution's package manager.Allow VPN trafficConfigure your VPC route table, security groups, and network ACLs to allow VPN traffic:1. Enter the route towards the destination network into your route table. Set the elastic network interface of your software VPN EC2 instance as the target.2. Confirm that your route table has a default route with a target of an internet gateway.3. Allow inbound traffic using UDP port 500 (ISAKMP) and 4500 (IPsec NAT-Traversal) in the instance's security group rules.4. Turn off source/destination checks to allow the instance to forward IP packets.Configure VPN connectionConfigure the Site-to-Site VPN connection for your relevant solution. AWS offers downloadable example configuration files based on device vendor and model.Configure iptablesConfigure your iptables rules for source NAT or destination NAT.For source NAT, use the following string, filling in appropriate values in place of the brackets:sudo iptables -t nat -A POSTROUTING -d <Destination address or CIDR> -j SNAT --to-source <Your desired IP address>For destination NAT, use the following string, filling in appropriate values in place of the brackets:sudo iptables -t nat -A PREROUTING -j DNAT --to-destination <Your desired IP address>To save your running iptables configuration to a file, use the following command:sudo iptables-save > /etc/iptables.confTo load this configuration on boot, enter the following line in /etc/rc.local before the exit 0 statement:iptables-restore < /etc/iptables.confOptional: Test your AWS Site-to-Site VPN connection. If the test is successful, the traffic is appropriately translated based on the iptables configuration.Related informationNAT instancesFollow" | https://repost.aws/knowledge-center/configure-nat-for-vpn-traffic |
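To make the iptables guidance above concrete, here is a hedged sketch with hypothetical values: it assumes the remote network is 10.0.0.0/16 and that you want VPN-bound traffic to appear to come from 192.168.1.10. Substitute your own CIDR and address:

```bash
# Allow the instance to forward packets (commonly required; confirm for your VPN software)
sudo sysctl -w net.ipv4.ip_forward=1

# Source NAT: traffic destined for the remote 10.0.0.0/16 network leaves the tunnel
# with the source address 192.168.1.10 (both values are placeholders)
sudo iptables -t nat -A POSTROUTING -d 10.0.0.0/16 -j SNAT --to-source 192.168.1.10

# Persist the rules and reload them at boot, as described in the article
sudo iptables-save | sudo tee /etc/iptables.conf
# Then add this line to /etc/rc.local before the "exit 0" statement:
#   iptables-restore < /etc/iptables.conf
```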
How can I restrict access to launch Amazon EC2 instances from only tagged AMIs? | I want to restrict users' access so that they can launch Amazon Elastic Compute Cloud (Amazon EC2) instances only from tagged Amazon Machine Images (AMIs). How can I restrict access to launch EC2 instances by using AMI tags? | "I want to restrict users' access so that they can launch Amazon Elastic Compute Cloud (Amazon EC2) instances only from tagged Amazon Machine Images (AMIs). How can I restrict access to launch EC2 instances by using AMI tags?ResolutionTo restrict users' access to launch EC2 instances using tagged AMIs, create an AMI from an existing instance—or use an existing AMI—and then add a tag to the AMI. Then, create a custom AWS Identity and Access Management (IAM) policy with a tag condition that restricts users' permissions to launch only instances that use the tagged AMI.In this example IAM policy, there are three statement IDs (Sids):Sid ReadOnlyAccess allows users to view any EC2 resources in your account using Describe*, which includes all the EC2 actions that begin with Describe. Sid ReadOnlyAccess also allows users to get console output and screenshots of an EC2 instance. For more information, see GetConsoleOutput and GetConsoleScreenshot. The Amazon CloudWatch permissions for DescribeAlarms and GetMetricStatistics allow basic health information about EC2 instances to appear in the Amazon EC2 console. The IAM permission for ListInstanceProfiles allows the existing instance profiles to display in the IAM role list on the Configure Instance Details page when launching an EC2 instance. However, the ListInstanceProfiles API doesn't allow users to attach an IAM role to an EC2 instance.Sid ActionsRequiredtoRunInstancesInVPC grants users permission to perform the RunInstances API using any instance, key pair, security group, volume, network interface, or subnet in the us-east-1 Region using resource-level permissions by specifying the ARN for each resource.Sid LaunchingEC2withAMIsAndTags allows users to launch EC2 instances using an AMI if the AMI has a tag Environment with value set to Prod, and the AMI is in the us-east-1 Region. Resource-level permission is set to an ARN for any AMI that is in us-east-1 Region, and the condition matches the value of EC2:ResourceTag/Environment tag key and key value Prod.The following IAM policy uses resource-level permissions for the required resources for the RunInstances API action. For more information about the required resources for RunInstances, see Supported resource-level permissions.Note:This policy allows users to list roles when launching an EC2 instance, but users aren't able to launch an instance with a role attached unless they have the iam:PassRole permission.This policy doesn't allow users to create new security groups. Users must select an existing security group to launch an EC2 instance unless users have the EC2 CreateSecurityGroup permission. The EC2:CreateSecurityGroup API action grants access to create only a security group—this action doesn't add or modify any rules. 
To add inbound rules, users must have permissions to the inbound EC2 AuthorizeSecurityGroupIngress API action and the outbound EC2 AuthorizeSecurityGroupEgress API action.{ "Version": "2012-10-17", "Statement": [ { "Sid": "ReadOnlyAccess", "Effect": "Allow", "Action": [ "ec2:Describe*", "ec2:GetConsole*", "cloudwatch:DescribeAlarms", "cloudwatch:GetMetricStatistics", "iam:ListInstanceProfiles" ], "Resource": "*" }, { "Sid": "ActionsRequiredtoRunInstancesInVPC", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": [ "arn:aws:ec2:us-east-1:AccountId:instance/*", "arn:aws:ec2:us-east-1:AccountId:key-pair/*", "arn:aws:ec2:us-east-1:AccountId:security-group/*", "arn:aws:ec2:us-east-1:AccountId:volume/*", "arn:aws:ec2:us-east-1:AccountId:network-interface/*", "arn:aws:ec2:us-east-1:AccountId:subnet/*" ] }, { "Sid": "LaunchingEC2withAMIsAndTags", "Effect": "Allow", "Action": "ec2:RunInstances", "Resource": "arn:aws:ec2:us-east-1::image/ami-*", "Condition": { "StringEquals": { "ec2:ResourceTag/Environment": "Prod" } } } ]}Follow" | https://repost.aws/knowledge-center/restrict-launch-tagged-ami |
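If you want to test the policy above without launching anything, the following sketch uses hypothetical resource IDs: it tags an AMI with Environment=Prod, then issues a dry-run RunInstances call as the restricted user. A DryRunOperation response means the policy would allow the launch; UnauthorizedOperation means it would be denied:

```bash
# Tag the AMI that restricted users are allowed to launch from (AMI ID is a placeholder)
aws ec2 create-tags \
  --resources ami-0abcd1234example \
  --tags Key=Environment,Value=Prod \
  --region us-east-1

# As the restricted user, verify the permissions without starting an instance
aws ec2 run-instances \
  --image-id ami-0abcd1234example \
  --instance-type t3.micro \
  --subnet-id subnet-0123456789example \
  --security-group-ids sg-0123456789example \
  --key-name my-key-pair \
  --region us-east-1 \
  --dry-run
```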
How can I use AWS RAM to share Route 53 Resolver rules across multiple VPCs and AWS accounts? | I want to use AWS Resource Access Manager (AWS RAM) to share Amazon Route 53 Resolver rules across multiple virtual private clouds (VPCs) or AWS accounts. | "I want to use AWS Resource Access Manager (AWS RAM) to share Amazon Route 53 Resolver rules across multiple virtual private clouds (VPCs) or AWS accounts.ResolutionCreate the Route 53 Resolver rules (if you don't already have rules)Before you begin, consider the following:Route 53 Resolver is a Regional service. You can only share and associate VPCs in the same Region where you created the rules.You must have permissions to use the PutResolverRulePolicy action to share rules across AWS accounts.The account that you share rules with can't change or delete the shared rule.In Account A, create Route 53 Resolver rules to share with other accounts and VPCs.Share the Route 53 Resolver rules with AWS RAMOpen the Route 53 console in Account A.In the navigation pane, choose Rules.Select the rule that you want to share.Choose Share.For Name, enter a descriptive name for the resource share.For Select Resource Type, choose Resolver Rules.Select the Resolver Rule ID to share.Specify the Principal to share. The Principal can be a single account or an organization.(Optional) Complete the Tags section.Accept the shared Route 53 Resolver rules in AWS RAMOpen the AWS RAM console.In the navigation pane, choose Shared with me, Resource shares.Select the resource share ID for the Route 53 Resolver rules.Choose Accept resource share.Associate the Route 53 Resolver rules with a VPCOpen the Route 53 console in Account B.In the navigation pane, choose Rules.Select the rule that you just shared.Choose Associate VPC.Select the VPC from the drop-down list, and then choose Add.DNS queries from the VPC now use the outbound endpoint for the shared rule from Account A. AWS RAM manages connectivity between the VPC and the outbound endpoint for the rule from Account A.Related informationManaging forwarding rulesFollow" | https://repost.aws/knowledge-center/route-53-share-resolver-rules-with-ram |
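The same workflow can be scripted with the AWS CLI. The following sketch uses placeholder account IDs, a placeholder resolver rule ID, and placeholder ARNs, and assumes both accounts are in us-east-1:

```bash
# In Account A (111111111111): share the resolver rule with Account B (222222222222)
aws ram create-resource-share \
  --name route53-resolver-rules \
  --resource-arns arn:aws:route53resolver:us-east-1:111111111111:resolver-rule/rslvr-rr-exampleid \
  --principals 222222222222 \
  --region us-east-1

# In Account B: find and accept the invitation
aws ram get-resource-share-invitations --region us-east-1
aws ram accept-resource-share-invitation \
  --resource-share-invitation-arn arn:aws:ram:us-east-1:111111111111:resource-share-invitation/example \
  --region us-east-1

# In Account B: associate the shared rule with a VPC
aws route53resolver associate-resolver-rule \
  --resolver-rule-id rslvr-rr-exampleid \
  --vpc-id vpc-0123456789example \
  --region us-east-1
```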
What happens when I turn on or turn off the Amazon S3 Block Public Access setting? | I want to know how turning the Amazon Simple Storage Service (Amazon S3) Block Public Access setting on or off affects the Amazon S3 operations. | "I want to know how turning the Amazon Simple Storage Service (Amazon S3) Block Public Access setting on or off affects the Amazon S3 operations.ResolutionAvailable settingsAmazon S3 Block Public Access settings have different levels of restrictions that you can apply through four configurable options:Block public access that's granted through new access control lists (ACLs): Amazon S3 blocks public access permissions that you apply to newly added buckets or objects. S3 also prevents the creation of new public access ACLs for existing buckets and objects. This setting doesn't change existing permissions that allow public access to S3 resources using ACLs.Block public access that's granted through any ACL: S3 ignores all ACLs that grant public access to buckets and objects.Block public access that's granted through new public bucket or access point policies: S3 blocks new bucket and access point policies that grant public access to buckets and objects. This setting doesn't change existing policies that allow public access to S3 resources.Block public and cross-account access that's granted through public bucket or access point policies: S3 ignores public and cross-account access for buckets or access points with policies that grant public access to buckets and objects.The first and third options are intended to prevent new updates to S3 bucket policies or object ACLs that grant public access. These settings don't change existing policies or ACLs that currently grant public access.The second and fourth options are intended to prevent and ignore new and existing bucket policies or object ACLs that grant public access. For more information, see Block public access settings.Turning on or off S3 Block Public Access settingsNote: As of April 2023, all newly-created S3 buckets have S3 Block Public Access turned on by default.You can turn on this setting at the account level, bucket level, or both.To block public access settings for all the S3 buckets in an account, see Configuring block public access settings for your account.To block public access settings for a specific S3 bucket and access point, see Configuring block public access settings for your S3 buckets.After you turn on block public access settings for a bucket, the following happens:Anonymous and unauthenticated requests are denied with no exceptions. S3 URIs and URLs that are accessed using a web browser return HTTP 403 Access Denied errors with the corresponding request ID.Any public ACL that's applied to S3 objects is ignored, resulting in revoked access for users that rely on this ACL for object access.After you turn off block public access settings for a bucket, the following happens:An object with public bucket policy or public ACL access is now accessible to anyone on the internet with a link to the object's path. This includes web crawlers and unauthorized users.You might incur increased costs that are associated with S3 requests, such as LIST or GET. An anonymous request that's made against the public bucket or object is charged to the bucket owner.Applicable AWS Config rules and AWS Identity and Access Management (IAM) Access Analyzer for S3 generate warnings about your bucket's public status. 
To be compliant with these rules, you must turn on block public access settings.When you turn off the block public access setting, your S3 bucket's Access column shows one of the following on the console:Objects can be public: The bucket isn't public, but anyone with the appropriate permissions can grant public access to objects.Buckets and objects not public: The bucket and objects don't have public access.Only authorized users of this account: Access is isolated to IAM account users and roles and AWS service principals because there is a policy that grants public access.Public: Everyone has access to one or more list objects, write objects, and read and write permissions.Required PermissionsTo turn on or turn off S3 Block Public Access settings, your IAM role or user must have the following S3 permissions:Account level: s3:PutAccountPublicAccessBlockBucket level: s3:PutBucketPublicAccessBlock To view your current S3 Block Public Access settings, your IAM role or user must have the following S3 permissions:Account level: s3:GetAccountPublicAccessBlockBucket level: s3:GetBucketPublicAccessBlockFor more information, see Permissions.Troubleshooting errorsYou might get an Access Denied error when you try to turn on or turn off the Block Public Access settings on your S3 bucket. To troubleshoot this error, try the following:Verify that service control policies don't include organizational policies that prevent modifying the S3 Block Public Access settings at either account or bucket level. Check Deny statements for s3:PutBucketPublicAccessBlock and s3:PutAccountPublicAccessBlock actions.Verify that the IAM user or role has the required permissions for the resources.Verify that the S3 bucket where you want to modify the settings doesn't have an existing public S3 bucket policy (any bucket policy statements with Principal: "*").Identifying the userTo identify which IAM entity modified this setting on your bucket or account, use AWS CloudTrail events. You can filter these events for the following EventNames in your CloudTrail console:For account level, look for PutAccountPublicAccessBlock.For bucket level, look for PutBucketPublicAccessBlock.To identify the caller ARN, check against the UserIdentity field in the log: "userIdentity": { "type": "AssumedRole", "principalId": "[AccountID]:[RoleName]", "arn": "arn:aws:sts::[AccountID]:assumed-role/[RoleName]/[RoleSession]",Then, verify the S3 bucket resource that you want to check:"requestParameters": { "publicAccessBlock": "", "bucketName": "[BucketName]"Other considerationsBucket policies that grant access on the aws:SourceIp condition key with broad IP address ranges (for example, 0.0.0.0/1) are evaluated as public.You can use IAM Access Analyzer for S3 to review buckets with bucket ACLs, bucket policies, or access point policies that grant public access. If your bucket shows Error against its Access column in your S3 console, then your IAM role or user lacks sufficient permissions to list your buckets and their public access settings. Make sure to add the following permissions to your user or role policy:s3:GetAccountPublicAccessBlocks3:GetBucketPublicAccessBlocks3:GetBucketPolicyStatuss3:GetBucketLocations3:GetBucketAcls3:ListAccessPointss3:ListAllMyBucketsAmazon S3 doesn't support Block Public Access settings on a per-object basis.When you apply block public access settings to an account, the settings apply to all AWS Regions globally. 
The settings might not take effect in all Regions immediately or simultaneously, but they eventually propagate to all Regions.Follow" | https://repost.aws/knowledge-center/s3-block-public-access-setting |
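If you prefer the AWS CLI to the console for the steps above, the following sketch shows the equivalent calls; the bucket name and account ID are placeholders:

```bash
# Bucket level: view the current setting, then turn on all four options
aws s3api get-public-access-block --bucket my-example-bucket
aws s3api put-public-access-block \
  --bucket my-example-bucket \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

# Account level: same idea through the s3control API (123456789012 is a placeholder)
aws s3control get-public-access-block --account-id 123456789012
aws s3control put-public-access-block \
  --account-id 123456789012 \
  --public-access-block-configuration \
  "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```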
How can I use the DMS batch apply feature to improve CDC replication performance? | "I'm running a full load and a change data capture (CDC) AWS Database Migration Service (AWS DMS) task. The source latency isn't high, but the target latency is high or it's increasing. How can I speed up the CDC replication phase?" | "I'm running a full load and a change data capture (CDC) AWS Database Migration Service (AWS DMS) task. The source latency isn't high, but the target latency is high or it's increasing. How can I speed up the CDC replication phase?Short descriptionAWS DMS uses the following methods to replicate data in the change data capture (CDC) phase:Transactional applyBatch applyThe AWS DMS CDC process is single threaded, by default (transactional apply). This is the same method used for SQL replication as for all other online transactional processing (OLTP) database engines. DMS CDC replication is dependent on the source database transaction logs. During the ongoing replication phase, DMS applies changes using a transactional apply method, as follows:DMS reads changes from the transaction log, from the source into the replication DB instance memory.DMS translates changes, and then passes them on to a sorter component.The sorter component sorts transactions in commit order, and then forwards them to the target, sequentially.If the rate of change is high on the source DB, then this process can take time. You might see a spike in CDC target latency metrics when DMS receives high incoming workload from source DB.DMS uses a single threaded replication method to process the CDC changes. DMS provides the task level setting BatchApplyEnabled to quickly process changes on a target using batches. BatchApplyEnabled is useful if you have high workload on the source DB, and a task with high target CDC latency. By default, DMS deactivates BatchApplySetting. You can activate this using AWS Command Line Interface (AWS CLI).How batch apply worksIf you run a task with BatchApplyEnabled, DMS processes changes in the following way:DMS collects the changes in batch from the source DB transaction logs.DMS creates a table called the net changes table, with all changes from the batch.This table resides in the memory of the replication DB instance, and is passed on to the target DB instance.DMS applies a net changes algorithm that nets out all changes from the net changes table to actual target table.For example, if you run a DMS task with BatchApplyEnabled, and you have a new row insert, ten updates to that row, and a delete for that row in a single batch, then DMS nets out all these transactions and doesn’t carry them over. It does this because the row is eventually deleted and no longer exists. This process reduces the number of actual transactions that are applied on the target.BatchApplyEnabled applies the net changes algorithm in row level of a table within a batch of a particular task. So, if the source database has frequent changes (update, delete, and insert) or a combination of those workloads on the same rows, you can then get optimal use from the BatchApplyEnabled. This minimizes the changes to be applied to the target. If the collected batch is unique in changes (update/delete/insert changes for different row records), then the net change table algorithm process can't filter any events. As a result, all batch events are applied on the target in batch mode. 
Tables must have either a primary key or a unique key for batch apply to work.DMS also provides the BatchApplyPreserveTransaction setting for change-processing tuning. If you activate BatchApplyEnabled, then BatchApplyPreserveTransaction turns on, by default. If you set it to true, then transactional integrity is preserved. A batch is guaranteed to contain all the changes within a transaction from the source. This setting applies only to Oracle target endpoints.Note: Pay attention to the advantages and disadvantages of this setting. When the BatchApplyPreserveTransaction setting is true, DMS captures the entire long-running transaction in the memory of the replication DB instance. It does this according to the task settings MemoryLimitTotal and MemoryKeepTime, and swaps as needed, before it sends changes to the net changes table. When the BatchApplyPreserveTransaction setting is false, changes from a single transaction can span across multiple batches. This can lead to data loss when partially applied, for example, due to target database unavailability.For more information about DMS latency and the batch apply process, see Part 2 and Part 3 of the Debugging your AWS DMS migrations blogs.Use cases for batch applyYou can use batch apply in the following circumstances:The task has a high number of transactions captured from the source and this is causing target latency.The task has a workload from source that is a combination of insert, update, and delete on the same rows.No requirement to keep strict referential integrity on the target (disabled FKs).LimitationsBatch apply currently has the following limitations:The Amazon Redshift target uses batch apply, by default. The Amazon Simple Storage Service (Amazon S3) target is forced to use transactional apply.Batch apply works only on tables with a primary key/unique index. For tables with no primary key/unique index, batch apply applies only inserts in bulk mode, but performs updates and deletes one-by-one. If the table has a primary key/unique index but you observe a switch to one-by-one mode, see How can I troubleshoot why Amazon Redshift switched to one-by-one mode because a bulk operation failed during an AWS DMS task?When LOB columns are included in the replication, you can use BatchApplyEnabled in limited LOB mode only. For more information, see Target metadata task settings.When BatchApplyEnabled is set to true, AWS DMS generates an error message if a target table has a unique constraint.ResolutionNote: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you’re using the most recent AWS CLI version.BatchApplySetting is disabled by default. You can activate this setting using either the AWS CLI or the AWS DMS Console. Complete the following setup tasks on your system before you activate the batch setting:Install and configure the latest version of the AWS CLI.Create an IAM user with programmatic access.Check the batch setting status of an existing taskOpen the AWS DMS Console.From the Navigation panel, choose Database migration tasks.Choose your task, and then choose Task Setting (JSON). 
In the JSON, the BatchApplyEnabled is listed in the disabled status.Activate batch setting using the AWS CLIOpen the system with AWS CLI installed.Run the aws configure command to open the AWS CLI prompt.Enter your AWS access key ID and then press Enter.Enter your AWS secret access key and then press Enter.Enter the Region name of your DMS resources and then press Enter.Enter the output format and then press Enter.Run the modify-replication-task command with task ARN and batch setting conditions.Note: Confirm that the task is in the stopped state before you modify the task. Change the ARN in the following command based on your task, and then run it to change the task setting.After the command has run successfully in the AWS CLI, open the DMS console and check the batch setting status of your task again. The BatchApplyEnabled is now listed as "enabled" in the Task Setting (JSON).You can now start the DMS task and observe the migration performance.aws dms modify-replication-task --replication-task-arn arn:aws:dms:us-east-1:123456789123:task:4VUCZ6ROH4ZYRIA25M3SE6NXCM --replication-task-settings "{\"TargetMetadata\":{\"BatchApplyEnabled\":true}}"Activate batch setting using the AWS DMS ConsoleOpen the AWS DMS Console.From the navigation panel, choose Database migration task.Choose your task, and then choose Modify.From the Task settings section, choose JSON editor.Modify the task settings that you want to change. For example, from the TargetMetadata section, change BatchApplyEnabled to true (default is false).Choose Save to modify the task.Verify that the changes have taken effect by following these steps:From the Task list page, choose the task you modified.From the Overview details tab, expand Task settings (JSON).Review the task settings for the task.Troubleshoot CDCLatencyTarget high after running task in batch modeIf the CDCLatencyTarget is high after running the task in batch mode, the latency could be caused by the following:Long-running transaction on target due to lack of primary and secondary indexInsufficient resource availability to process the workload on targetHigh resource contention on DMS replication instanceFollow the DMS best practices to troubleshoot these issues.Related informationMonitoring AWS DMS tasksHow to script a database migrationAutomating AWS DMS migration tasksHow do I create source or target endpoints using AWS DMS?Change processing tuning settingsFollow" | https://repost.aws/knowledge-center/dms-batch-apply-cdc-replication |
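To confirm the setting from the AWS CLI instead of the console, the following sketch reads the task settings back and extracts BatchApplyEnabled. The task ARN is a placeholder, and jq is assumed to be installed:

```bash
# Placeholder task ARN; the task settings are returned as a JSON string
TASK_ARN=arn:aws:dms:us-east-1:123456789012:task:EXAMPLETASKID

aws dms describe-replication-tasks \
  --filters Name=replication-task-arn,Values=$TASK_ARN \
  --query 'ReplicationTasks[0].ReplicationTaskSettings' \
  --output text | jq '.TargetMetadata.BatchApplyEnabled'
```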
What is the difference between a multivalue answer routing policy and a simple routing policy? | I'm creating an Amazon Route 53 record for my domain and I need to choose a routing policy. Should I use a simple routing policy or a multivalue answer routing policy? | "I'm creating an Amazon Route 53 record for my domain and I need to choose a routing policy. Should I use a simple routing policy or a multivalue answer routing policy?Short descriptionUse a simple routing policy for traffic that requires only standard DNS records and that doesn't require special options such as weighted routing or latency routing. For example, use simple routing when you need to route traffic to a single resource. You can't use multiple records of the same name and type with simple routing. However, a single record can contain multiple values (such as IP addresses).Use a multivalue answer routing policy to help distribute DNS responses across multiple resources. For example, use multivalue answer routing when you want to associate your routing records with a Route 53 health check. For example, use multivalue answer routing when you need to return multiple values for a DNS query and route traffic to multiple IP addresses.ResolutionSimple RoutingUse a simple routing policy when you're:Creating only one basic record of each name and typeRouting traffic to a single resource (such as your website's web server)Routing traffic to a single record with multiple values (such as an A record that specifies multiple IP addresses)Note: Route 53 returns the values in a random order to the client, and you can't weight or otherwise determine the order with a simple routing policy.In the following example, all the records for a domain are created in one resource record set:NameTypeValueTTLwww.example.comA Record192.0.2.160198.51.100.160203.0.113.160When a client makes a DNS request, Route 53 returns all three listed IP addresses.Note: You can't attach a health check to a simple routing policy. Instead, Route 53 returns all values to the client regardless the status of an IP address. When an unhealthy IP address is returned, the user's client tries to connect to the unhealthy IP and the user experiences downtime.Multivalue Answer RoutingUse a multivalue answer routing policy when you're:Creating more than one record of the same name and typeRouting traffic to multiple resourcesAssociating a Route 53 health check with recordsWhen a client makes a DNS request with multivalue answer routing, Route 53 responds to DNS queries with up to eight healthy records selected at random for the particular domain name. These records can each be attached to a Route 53 health check, which helps prevent clients from receiving a DNS response that is not reachable.Multivalue answer routing distributes DNS responses across multiple IP addresses. If a web server becomes unavailable after a resolver caches a response, a client can try up to eight other IP addresses from the response to avoid downtime.Note: Multivalue answer routing is not a substitute for Elastic Load Balancing (ELB). Route 53 randomly selects any eight records. When you perform dig (on Linux) or nslookup (on Windows) on your domain name multiple times, you might notice that the IP addresses rotate. This rotation improves availability and provides some load balancing functionality. 
Your operating system performs this round-robin DNS for cached responses, not Route 53.When you want to enter more than one value in a multivalue answer record set, you must create a new resource record with the same name, and then enter each value separately. If you don't do this, you receive the following error: Getting error: The record set could not be saved because: - Each Multivalue answer record can have only one value. (Route 53 returns one answer from multiple records.).In the following example, there are multiple A Records, each with different values:NameTypeValueTTLSet IDHealth Checkwww.example.comA Record192.0.2.260Web1Awww.example.comA Record198.51.100.260Web2Bwww.example.comA Record203.0.113.260Web3CNote: If you create two or more multivalue answer routing records with the same name and type, and then specify different values for TTL, Route 53 changes the TTL value of all the records to the last specified value.Related informationchange-resource-record-setsQuotas on recordsFollow" | https://repost.aws/knowledge-center/multivalue-versus-simple-policies |
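As an illustration of the record layout in the second table, the following hypothetical AWS CLI call creates one of the multivalue answer records with its health check attached; the hosted zone ID, health check ID, and IP address are placeholders:

```bash
# Each multivalue answer record holds one value and needs its own SetIdentifier
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "TTL": 60,
        "SetIdentifier": "Web1",
        "MultiValueAnswer": true,
        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        "ResourceRecords": [{ "Value": "192.0.2.2" }]
      }
    }]
  }'
```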
Why am I not getting Amazon SES bounce notifications from Amazon SNS? | "I turned on Amazon Simple Notification Service (Amazon SNS) notifications for whenever email that I send using Amazon Simple Email Service (Amazon SES) results in a bounce. However, I'm not getting bounce notifications from Amazon SNS." | "I turned on Amazon Simple Notification Service (Amazon SNS) notifications for whenever email that I send using Amazon Simple Email Service (Amazon SES) results in a bounce. However, I'm not getting bounce notifications from Amazon SNS.ResolutionCheck the following:Verify that your Amazon SNS topic is in the same AWS Region as your Amazon SES identity (domain or email address). The topic and the identity must be in the same Region for bounce notifications to work.If you set up Amazon SNS notifications for a verified domain, then check whether email addresses within the domain are verified as separate identities. SNS notifications apply only to an individual Amazon SES identity that's verified in your account.In addition to creating the SNS topic, confirm that you also subscribed an endpoint to the topic to receive notifications.When you subscribe an email address to an SNS topic, you receive an email from Amazon SNS asking you to confirm the subscription. Check your email for this message, and then confirm the subscription.If the SNS topic is using AWS Key Management Service (AWS KMS) encryption, then confirm that Amazon SES has permissions to the encryption key.Important: Use the Amazon SES mailbox simulator email address to test or troubleshoot event notifications because these emails addresses are designed for testing. Bounces from the mailbox simulator address aren't counted in your account's metrics.Follow" | https://repost.aws/knowledge-center/ses-resolve-sns-bounce-notifications |
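To check and fix the topic association from the AWS CLI, the following sketch uses placeholder identities and a placeholder topic ARN in us-east-1. The last command sends a test message to the mailbox simulator bounce address so that a bounce notification is generated without affecting your sending reputation:

```bash
# Confirm which SNS topic (if any) is attached to each identity for bounces
aws ses get-identity-notification-attributes \
  --identities example.com sender@example.com \
  --region us-east-1

# Attach (or re-attach) the bounce topic to a verified identity in the same Region
aws ses set-identity-notification-topic \
  --identity sender@example.com \
  --notification-type Bounce \
  --sns-topic arn:aws:sns:us-east-1:123456789012:ses-bounces \
  --region us-east-1

# Trigger a test bounce with the mailbox simulator
aws ses send-email \
  --from sender@example.com \
  --destination ToAddresses=bounce@simulator.amazonses.com \
  --message 'Subject={Data="Bounce test"},Body={Text={Data="Test"}}' \
  --region us-east-1
```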
How can I reduce data transfer charges for my NAT gateway? | I need to reduce data transfer charges on my bill for traffic going through a NAT gateway in my Amazon VPC. | "I need to reduce data transfer charges on my bill for traffic going through a NAT gateway in my Amazon VPC.ResolutionFirst, determine the major sources of traffic through your NAT gateway. Then, to reduce data transfer and processing charges, consider the following strategies:Determine whether the instances sending the most traffic are in the same Availability Zone (AZ) as the NAT gateway. If they're not, then create new NAT gateways in the same AZ as the resource to reduce cross-AZ data transfer charges.Determine whether the majority of your NAT gateway charges are from traffic to Amazon Simple Storage Service or Amazon DynamoDB in the same Region. If they are, then set up a gateway VPC endpoint. Route traffic to and from the AWS resource through the gateway VPC endpoint, rather than through the NAT gateway. There's no processing or hourly charges for using gateway VPC endpoints.If most traffic through your NAT gateway is to AWS services that support interface VPC endpoints, then create an interface VPC endpoint for the services. See pricing details for interface VPC endpoints to determine the potential cost savings.Note: You can set up alarms to monitor use of your NAT gateway in the future using Amazon CloudWatch.Related informationAmazon VPC pricingFollow" | https://repost.aws/knowledge-center/vpc-reduce-nat-gateway-transfer-costs |
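For the gateway endpoint option, the following sketch shows the AWS CLI call with placeholder VPC and route table IDs in us-east-1:

```bash
# A gateway endpoint for S3 keeps that traffic off the NAT gateway entirely
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789example \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789example \
  --region us-east-1

# The same pattern works for DynamoDB:
#   --service-name com.amazonaws.us-east-1.dynamodb
```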
Why do I receive the error "You can only join an organization whose Seller of Record is same as your account" when I'm trying to join an organization? | "When I try to join an AWS Organization, I receive the error "You can only join an organization whose Seller of Record is same as your account." What does this error mean?" | "When I try to join an AWS Organization, I receive the error "You can only join an organization whose Seller of Record is same as your account." What does this error mean?ResolutionThis error indicates that your seller of record or AWS partition is not the same as the management account in the AWS Organization.The seller of record refers to whether your account is with Amazon Web Services, Inc. (AWS Inc.), Amazon Web Services EMEA SARL (AWS Europe), Amazon Web Services India Private Limited (AWS India), or another seller of record. Due to billing constraints, AWS India accounts can't be in the same organization as accounts with other AWS sellers of record.An AWS partition is a group of Regions. The AWS supported partitions are AWS Regions, AWS China Regions, and AWS GovCloud (US) Regions. Due to legal and billing constraints, you can join an organization only if it's in the same partition as your account. Accounts in the AWS Regions partition can't be in an organization with accounts from the AWS China Regions partition or AWS GovCloud (US) Regions partition.For example, in an AWS Europe organization, you can have both an AWS Inc. and an Amazon Web Services Canada, Inc. (AWS Canada) account.If you're not sure the seller of record that your account is with, see Determining which company your account is with.If you're not sure the seller of record that the management account of the organization is with, contact the owner of the management account. To see the contact details of the management account:Sign in to the AWS Management Console.On the navigation bar, choose your account name, and then choose My Organization.Related informationWhy can't I join an organization in AWS Organizations?AWS GovCloud (US) compared to standard AWS RegionsFollow" | https://repost.aws/knowledge-center/organizations-seller-of-record-error |
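If you were invited to the organization and want to identify its management account from the command line, the following sketch lists the pending invitation, which includes the inviting (management) account's ID and email. This is an illustrative alternative to the console steps above:

```bash
# Run from the invited account; the handshake details show who sent the invitation
aws organizations list-handshakes-for-account \
  --filter ActionType=INVITE
```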
How can I update database parameters in a Lightsail MySQL or PostgreSQL database? | How can I update database parameters in an Amazon Lightsail MySQL or PostgreSQL database? | "How can I update database parameters in an Amazon Lightsail MySQL or PostgreSQL database?Short descriptionWhen a Lightsail database is created, it uses a custom parameter group that is named after the instance endpoint, unlike Amazon Relational Database Service (Amazon RDS) DB instances that use a default DB parameter group. To modify the database parameters for a Lightsail database instance, use the AWS Command Line Interface (AWS CLI).ResolutionNote: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.1. Install the AWS CLI, and then configure it with the same AWS Region as the Lightsail database.2. Get a list of the available database parameters that can be modified.3. After you identify the parameter that you want to change, update the parameter.Note: If you update a static parameter and the apply method is set to pending-reboot, then the parameter update is applied only after the instance is rebooted.The following is an example command for Lightsail Amazon RDS MySQL 5.7.26:aws lightsail update-relational-database-parameters --relational-database-name Lightsail-Database-Ireland-1 --parameters "parameterName=connect_timeout,parameterValue=30,applyMethod=immediate"See the following example output for this operation:{ "operations": [ { "status": "Succeeded", "resourceType": "RelationalDatabase", "isTerminal": true, "statusChangedAt": 1579868316.024, "location": { "availabilityZone": "eu-west-1a", "regionName": "eu-west-1" }, "operationType": "UpdateRelationalDatabaseParameters", "resourceName": "Lightsail-Database-Ireland-1", "id": "23a7de77-aa6c-4831-8525-8c6d97921676", "createdAt": 1579868316.024 } ]}The following is an example command for Lightsail Amazon RDS PostgreSQL 10.10:aws lightsail update-relational-database-parameters --relational-database-name lightsail-postgres --parameters "parameterName=deadlock_timeout,parameterValue=30,applyMethod=immediate"See the following example output for this operation:{ "operations": [ { "status": "Succeeded", "resourceType": "RelationalDatabase", "isTerminal": true, "statusChangedAt": 1579869403.669, "location": { "availabilityZone": "eu-west-1a", "regionName": "eu-west-1" }, "operationType": "UpdateRelationalDatabaseParameters", "resourceName": "lightsail-postgres", "id": "e18a2827-b140-4872-b90c-ab7850a7b6df", "createdAt": 1579869403.669 } ]}Related informationCreating a database in Amazon LightsailFollow" | https://repost.aws/knowledge-center/lightsail-update-database-parameter |
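Before step 3, it can help to confirm that the parameter you want to change is modifiable and whether it is dynamic or static. The following sketch lists that information for the example database used above; jq is assumed to be installed:

```bash
# List modifiable parameters with their apply type (dynamic vs. static)
aws lightsail get-relational-database-parameters \
  --relational-database-name Lightsail-Database-Ireland-1 \
  --region eu-west-1 \
  | jq -r '.parameters[] | select(.isModifiable==true) | [.parameterName, .applyType] | @tsv'
```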
Why did my AWS DMS task fail with no errors? | I am using AWS Database Migration Service (AWS DMS) to migrate my data from a source engine to a target engine. But the task is failing without any errors. How do I troubleshoot this issue? | "I am using AWS Database Migration Service (AWS DMS) to migrate my data from a source engine to a target engine. But the task is failing without any errors. How do I troubleshoot this issue?Short descriptionWhen an AWS DMS task fails, the task logs provide information about the cause of the failure with either error messages (]E:) or warning messages, (]W:). In some cases, an AWS DMS task can fail without any errors or warnings, which can make it difficult to troubleshoot the cause. Most often, this is caused by one of the three following reasons:1. Resource contention on the replication instanceCPU and memory are the two most important resources that are required for a migration task:CPU is required to convert the source datatype to the AWS DMS type data type, and then finally to the target data type.Memory is required because AWS DMS creates streams to the source and target. AWS DMS stores information in the stream buffers in memory on the replication instance.CPU and memory are also used by the internal monitoring system to monitor the replication instance. Any contention on either CPU or memory can cause a migration task to fail silently.2. Storage Full status on the replication instanceIf the replication instance storage is full, a migration task can fail silently with no errors. For more information, see Why is my AWS DMS replication instance in a storage-full status?3. An internal error occurredAWS DMS tasks can also fail silently if there are internal errors, which aren't visible in task logs that are logged by default.ResolutionFirst, check the time of the last entry in the task logs after the task failed silently. Then, verify the CPU, memory, and disk utilization on the replication instance around the same time that the failure was logged. If you see a combination of the low FreeableMemory and high SwapUsage, then there might be memory contention on the replication instance. Be sure to check both metrics. For more information, see Data Migration Service metrics.To view the CloudWatch metrics, follow these steps: Open the AWS DMS console, and choose Database migration tasks from the navigation pane.Choose the name of task that failed.Note the name of the Replication instance from the Overview details section.Choose Replication instances from the navigation pane.Choose the name of the replication instance noted in the step 3.In the Migration task metrics section, you can view the CPUUtilization, SwapUsage, FreeableMemory,and FreeStorageSpace metrics.To view more details, hover over the metric, and choose the more options icon (three vertical dots).Choose View in metrics.This opens the CloudWatch console where you can view the metric's utilization at the time that the task failed.If you see constant CPU or memory contention, consider reducing the number of tasks that are running on the replication instance. You can do this by launching new replication instances and distributing the tasks across multiple replication instances. Or, consider scaling up the replication instance to a larger instance type.Note: T2 instances provide a baseline performance after the CPU credits are exhausted. For example, a T2.micro instance provides a baseline performance of 10%. 
For this reason, take into account the instance type that is used and verify the CPU utilization accordingly. For more information about CPU credits and baseline performance, see CPU Credits and Baseline Performance for Burstable Performance Instances.After you identify the source of the silent failure, restart the task. If there isn't contention on CPU, memory, or disk space, then the task most likely failed because of an internal error. To troubleshoot internal errors, enable detailed debugging on all the five log components. After detailed debugging is enabled, restart the task and review the task logs to identify why the task failed.Related informationTroubleshooting migration tasks in AWS Database Migration ServiceChoosing the optimum size for a replication instanceFollow" | https://repost.aws/knowledge-center/dms-task-failed-no-errors |
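To pull the same metrics from the AWS CLI instead of hovering over console graphs, the following sketch uses a placeholder replication instance identifier and time window:

```bash
# Placeholder time window around the silent failure
START=2023-01-01T00:00:00Z
END=2023-01-01T06:00:00Z

for METRIC in FreeableMemory SwapUsage CPUUtilization FreeStorageSpace; do
  aws cloudwatch get-metric-statistics \
    --namespace "AWS/DMS" \
    --metric-name "$METRIC" \
    --dimensions Name=ReplicationInstanceIdentifier,Value=my-replication-instance \
    --start-time "$START" --end-time "$END" \
    --period 300 --statistics Average \
    --query 'Datapoints | sort_by(@, &Timestamp)[].[Timestamp,Average]' \
    --output table
done
```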
How do I import my keys into AWS Key Management Service? | I want to import my key material into AWS Key Management Service (AWS KMS) so I can use 256-bit symmetric keys with AWS services. | "I want to import my key material into AWS Key Management Service (AWS KMS) so I can use 256-bit symmetric keys with AWS services.ResolutionAWS KMS allows you to import your key material into an AWS KMS key. You can then use this key with AWS services that are supported by AWS KMS.Follow these steps to import your key material into AWS KMS:1. Create an AWS KMS key with no key material. Note your AWS KMS key's ID.Note: For Define Key Administrative Permissions and Define Key Usage Permissions, it's a best practice to separate the key administrator and key roles. This limits the impact if either credential is exposed.2. Open a terminal on your local machine or Amazon Elastic Compute Cloud (Amazon EC2) instance with OpenSSL installed. For more information, see the OpenSSL website.3. To generate a 256-bit symmetric key, run the following command:openssl rand -out PlaintextKeyMaterial.bin 324. To describe the key and get the parameters for the import, run the following AWS Command Line Interface (AWS CLI) commands:Note: The commands store the public key and import token parameters into a variable.export KEY=`aws kms --region eu-west-2 get-parameters-for-import --key-id example1-2345-67ab-9123-456789abcdef --wrapping-algorithm RSAES_OAEP_SHA_256 --wrapping-key-spec RSA_2048 --query '{Key:PublicKey,Token:ImportToken}' --output text`5. To place the public key and import token into separate base64-encoded files, run the following command:echo $KEY | awk '{print $1}' > PublicKey.b64echo $KEY | awk '{print $2}' > ImportToken.b646. To convert the base64-encoded file into binary files to import, run the following commands:openssl enc -d -base64 -A -in PublicKey.b64 -out PublicKey.binopenssl enc -d -base64 -A -in ImportToken.b64 -out ImportToken.bin7. To encrypt the key material with the public key that was converted to a binary file, run the following command:openssl pkeyutl -in PlaintextKeyMaterial.bin -out EncryptedKeyMaterial.bin -inkey PublicKey.bin -keyform DER -pubin -encrypt -pkeyopt rsa_padding_mode:oaep -pkeyopt rsa_oaep_md:sha2568. To import the encrypted key material into AWS KMS, run the following command:Note: This example specifies that the key material doesn't expire, but you can set an expiration date for your key material. For more information, see ExpirationModel.aws kms --region eu-west-2 import-key-material --key-id example1-2345-67ab-9123-456789abcdef --encrypted-key-material fileb://EncryptedKeyMaterial.bin --import-token fileb://ImportToken.bin --expiration-model KEY_MATERIAL_DOES_NOT_EXPIRE9. Verify that the imported key status is set to Enabled. To do this, review the key in the AWS KMS console, or run the DescribeKey API action.If you can't import your key, then note the following causes and resolutions:You waited longer than 24 hours and the import token is expired. To resolve this, download the wrapping key and import token again to re-encrypt the key material.Your key material is not a 256-bit symmetric key. To resolve this, verify that the file size of the encrypted key material is 32 bytes. 
To check the file size, run one of the following commands:Linuxwc -c <filename>.binWindowsdir <filename>.binFor more information, see Importing key material in AWS KMS keys.Related informationAbout imported key materialI'm using OpenSSL to import my key into AWS KMS, but I'm getting an "InvalidCiphertext" error. How can I fix this?Follow" | https://repost.aws/knowledge-center/import-keys-kms |
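A quick way to confirm step 9 of the row above from the CLI instead of the console; the Region and key ID are the same placeholders used in that row, and an imported key should report KeyState Enabled with Origin EXTERNAL.

aws kms describe-key --region eu-west-2 --key-id example1-2345-67ab-9123-456789abcdef \
  --query 'KeyMetadata.{State:KeyState,Origin:Origin,Expiration:ExpirationModel}'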
Why was my credit card declined when I tried to pay my AWS bill? | "I tried to pay an AWS bill or purchase a subscription, such as an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance, but the charge was unsuccessful. How do I fix this?" | "I tried to pay an AWS bill or purchase a subscription, such as an Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instance, but the charge was unsuccessful. How do I fix this?ResolutionIf your payment method is declined when you try to pay your AWS bill, check the following:Account information associated with your payment methodCheck Payment Methods in the Billing and Cost Management console to confirm that your credit card number, expiration date, billing address, and phone number are correct. Important: Confirm that the billing address and phone number match the information on file with the institution that issued your credit card.Confirm that your credit card is accepted by the AWS seller providing services to you. The name of the AWS seller is noted on your invoice.Transaction and balance limits on your payment methodIf the information associated with your payment method is correct, contact your bank and check the following:Confirm that your credit card has enough available credit to pay your AWS invoice. If not, ask your bank to adjust your withdrawal or purchase limits.Confirm that you authorize the charge for AWS services. The charge descriptor ends in aws.amazon.com for AWS Inc. and AWS Europe accounts.Payment issuer verification or authenticationIf your card issuer requires payment verification or authentication, review the following:AWS doesn’t support card verification value (CVV). If your card issuer requires the use of CVV, then check with the card issuer to waive the requirement.AWS doesn't support 3D Secure authentication. If you're an Amazon Web Services India Private Limited (AWS India) customer, and your card issuer requires 3D Secure authentication, then check with the card issuer to waive the requirement.If you're an AWS Europe customer, you might need to verify your credit card payment with the card issuer. For more information, see Managing your AWS Europe credit card payment verification.Retry the paymentIf the issue is resolved retry the charges.If you're in the process of purchasing a subscription and the payment is declined, create a case with AWS Support and then select Account and billing support.When creating a case with AWS Support to retry your payment, keep the following in mind:AWS Support can retry the charges for a Reserved Instance or Savings Plans only during the month of purchase.AWS Support can't retry payments for Amazon Route 53 domain registration, renewal, or transfer. To resubmit the domain registration, renewal, or transfer, see Transferring registration for a domain to Amazon Route 53.Related informationHow do I add a new payment method to my AWS account?How do I change the default payment method associated with my AWS account?How do I view past or current AWS payments?Follow" | https://repost.aws/knowledge-center/credit-card-declined |
How do I troubleshoot issues when connecting to my Amazon MSK cluster? | I'm experiencing issues when I try to connect to my Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. | "I'm experiencing issues when I try to connect to my Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster.ResolutionWhen you try to connect to an Amazon MSK cluster, you might get the following types of errors:Errors that are not specific to the authentication type of the clusterErrors that are specific to TLS client authenticationErrors that are specific to AWS Identity and Access Management (IAM) client authenticationErrors that are specific to Simple Authentication and Security Layer/Salted Challenge Response Mechanism (SASL/SCRAM) client authenticationErrors that are not related to a specific authentication typeWhen you try to connect to your Amazon MSK cluster, you might get one of the following errors irrespective of the authentication type enabled for your cluster.java.lang.OutOfMemoryError: Java heap spaceYou get this error when you don't specify the client properties file when running a command for cluster operations using any type of authentication:For example, you get the OutOfMemoryError when you run the following command with the IAM authentication port:./kafka-topics.sh --create --bootstrap-server $BOOTSTRAP:9098 --replication-factor 3 --partitions 1 --topic TestTopicHowever, the following command runs successfully when you run it with the IAM authentication port:./kafka-topics.sh --create --bootstrap-server $BOOTSTRAP:9098 --command-config client.properties --replication-factor 3 --partitions 1 --topic TestTopicTo resolve this error, be sure to include appropriate properties based on the type of authentication in the client.properties file (a sample client.properties for IAM access control is shown after this row).org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopicsYou typically get this error when there is a network misconfiguration between the client application and the Amazon MSK cluster.To troubleshoot this issue, check the network connectivity by performing the following connectivity test.Run the command from the client machine.telnet bootstrap-broker port-numberBe sure to do the following:Replace bootstrap-broker with one of the broker addresses from your Amazon MSK cluster.Replace port-number with the appropriate port value based on the authentication that's turned on for your cluster.If the client machine is able to access the brokers, then there are no connectivity issues.
If not, review the network connectivity, especially the inbound and outbound rules for the security group.org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [test_topic]You get this error when you're using IAM authentication and your access policy blocks topic operations, such as WriteData and ReadData.Note that permission boundaries and service control policies also block users attempting to connect to the cluster without the required authorization.If you're using non-IAM authentication, then check if you added topic-level access control lists (ACLs) that block operations.Run the following command to list the ACLs that are applied on a topic:bin/kafka-acls.sh --bootstrap-server $BOOTSTRAP:PORT --command-config adminclient-configs.conf --list --topic testtopicConnection to node -1 (b-1-testcluster.abc123.c7.kafka.us-east-1.amazonaws.com/3.11.111.123:9098) failed authentication due to: Client SASL mechanism 'SCRAM-SHA-512' not enabled in the server, enabled mechanisms are [AWS_MSK_IAM]-or-Connection to node -1 (b-1-testcluster.abc123.c7.kafka.us-east-1.amazonaws.com/3.11.111.123:9096) failed authentication due to: Client SASL mechanism 'AWS_MSK_IAM' not enabled in the server, enabled mechanisms are [SCRAM-SHA-512]You get these errors when you're using a port number that doesn't match the SASL mechanism or protocol in the client properties file. This is the properties file that you used in the command to run cluster operations.To communicate with brokers in a cluster that's set up to use SASL/SCRAM, use the following ports: 9096 for access from within AWS and 9196 for public accessTo communicate with brokers in a cluster that's set up to use IAM access control, use the following ports: 9098 for access from within AWS and 9198 for public accessTimed out waiting for connection while in state: CONNECTINGYou might get this error when the client tries to connect to the cluster through the Apache ZooKeeper string, and the connection can't be established.
This error might also result when the Apache ZooKeeper string is wrong.You get the following error when you use the incorrect Apache ZooKeeper string to connect to the cluster:./kafka-topics.sh --zookeeper z-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:2181,z-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:2181,z-3.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:2181 --list[2020-04-10 23:58:47,963] WARN Client session timed out, have not heard from server in 10756ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)[2020-04-10 23:58:58,581] WARN Client session timed out, have not heard from server in 10508ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)[2020-04-10 23:59:08,689] WARN Client session timed out, have not heard from server in 10004ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)Exception in thread "main" kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTINGat kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:259)at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:255)at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:113)at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1858)at kafka.admin.TopicCommand$ZookeeperTopicService$.apply(TopicCommand.scala:321)at kafka.admin.TopicCommand$.main(TopicCommand.scala:54)at kafka.admin.TopicCommand.main(TopicCommand.scala)To resolve this error, do the following:Verify that the Apache ZooKeeper string used is correct.Be sure that the security group for your Amazon MSK cluster allows inbound traffic from the client's security group on the Apache ZooKeeper ports.Topic 'topicName' not present in metadata after 60000 ms. or Connection to node -<node-id> (<broker-host>/<broker-ip>:<port>) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)You might get this error under either of the following conditions:The producer or consumer is unable to connect to the broker host and port.The broker string is not valid.If you get this error even though the connectivity of the client or broker was working initially, then the broker might be down.You get the following error when you try to access the cluster from outside the virtual private cloud (VPC) using the broker string for producing data:./kafka-console-producer.sh --broker-list b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9092,b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9092 --topic test[2020-04-10 23:51:57,668] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)org.apache.kafka.common.errors.TimeoutException: Topic test not present in metadata after 60000 ms.You get the following error when you try to access the cluster from outside the VPC for consuming data using broker string:./kafka-console-consumer.sh --bootstrap-server b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9092,b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9092 --topic test[2020-04-11 00:03:21,157] WARN [Consumer clientId=consumer-console-consumer-88994-1, groupId=console-consumer-88994] Connection to node -1 (b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com/172.31.6.19:9092) could not be established. Broker may notbe available. 
(org.apache.kafka.clients.NetworkClient)[2020-04-11 00:04:36,818] WARN [Consumer clientId=consumer-console-consumer-88994-1, groupId=console-consumer-88994] Connection to node -2 (b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com/172.31.44.252:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)[2020-04-11 00:05:53,228] WARN [Consumer clientId=consumer-console-consumer-88994-1, groupId=console-consumer-88994] Connection to node -1 (b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com/172.31.6.19:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)To troubleshoot these errors, do the following:Be sure that the correct broker string and port are used.If the error is caused by the broker being down, check the Amazon CloudWatch metric ActiveControllerCount to verify that the controller was active throughout the period. The value of this metric must be 1. Any other value might indicate that one of the brokers in the cluster is unavailable. Also, check the metric ZooKeeperSessionState to confirm that the brokers were constantly communicating with the Apache ZooKeeper nodes. To understand why the broker failed, view the KafkaDataLogsDiskUsed metric and check if the broker ran out of storage space. For more information on Amazon MSK metrics and the expected values, see Amazon MSK metrics for monitoring with CloudWatch.Be sure that the error is not caused by the network configuration. Amazon MSK resources are provisioned within the VPC. Therefore, by default, clients are expected to connect to the Amazon MSK cluster or produce and consume from the cluster over a private network in the same VPC. If you access the cluster from outside the VPC, then you might get these errors. For information on troubleshooting errors when the client is in the same VPC as the cluster, see Unable to access cluster from within AWS: networking issues. For information on accessing the cluster from outside the VPC, see How do I connect to my Amazon MSK cluster outside of the VPC?Errors that are specific to TLS client authenticationYou might get the following errors when you try to connect to a cluster that has TLS client authentication enabled.
These errors might be caused due to issues with SSL related configuration.Bootstrap broker <broker-host>:9094 (id: -<broker-id> rack: null) disconnectedYou might get this error when the producer or consumer tries to connect to a TLS-encrypted cluster over TLS port 9094 without passing the SSL configuration.You might get the following error when the producer tries to connect to the cluster:./kafka-console-producer.sh --broker-list b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094,b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 --topic test[2020-04-10 18:57:58,019] WARN [Producer clientId=console-producer] Bootstrap broker b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)[2020-04-10 18:57:58,342] WARN [Producer clientId=console-producer] Bootstrap broker b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)[2020-04-10 18:57:58,666] WARN [Producer clientId=console-producer] Bootstrap broker b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)You might get the following error when the consumer tries to connect to the cluster:./kafka-console-consumer.sh --bootstrap-server b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094,b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 --topic test[2020-04-10 19:09:03,277] WARN [Consumer clientId=consumer-console-consumer-79102-1, groupId=console-consumer-79102] Bootstrap broker b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)[2020-04-10 19:09:03,596] WARN [Consumer clientId=consumer-console-consumer-79102-1, groupId=console-consumer-79102] Bootstrap broker b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)[2020-04-10 19:09:03,918] WARN [Consumer clientId=consumer-console-consumer-79102-1, groupId=console-consumer-79102] Bootstrap broker b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 (id: -2 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)To resolve this error, set up the SSL configuration. For more information, see How do I get started with encryption?If client authentication is enabled for your cluster, then you must add additional parameters related to your ACM Private CA certificate. For more information, see Mutual TLS authentication.ERROR Modification time of key store could not be obtained: <configure-path-to-truststore>-or-Failed to load keystoreIf there is an issue with the truststore configuration, then this error can occur when truststore files are loaded for the producer and consumer. 
You might view information similar to the following in the logs:./kafka-console-consumer --bootstrap-server b-2.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094,b-1.encryption.3a3zuy.c7.kafka.us-east-1.amazonaws.com:9094 --topic test --consumer.config /home/ec2-user/ssl.config[2020-04-11 10:39:12,194] ERROR Modification time of key store could not be obtained: /home/ec2-ser/certs/kafka.client.truststore.jks (org.apache.kafka.common.security.ssl.SslEngineBuilder)java.nio.file.NoSuchFileException: /home/ec2-ser/certs/kafka.client.truststore.jks[2020-04-11 10:39:12,253] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)Caused by: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /home/ec2-ser/certs/kafka.client.truststore.jks of type JKSIn this case, the logs indicate an issue with loading the truststore file. The path to the truststore file is wrongly configured in the SSL configuration. You can resolve this error by providing the correct path for the truststore file in the SSL configuration.This error might also occur due the following conditions:Your truststore or key store file is corrupted.The password of the truststore file is incorrect.Error when sending message to topic test with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed-or-Connection to node -<broker-id> (<broker-hostname>/<broker-hostname>:9094) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)You might get the following error when there is an issue with the key store configuration of the producer leading to the authentication failure:./kafka-console-producer --broker-list b-2.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com:9094,b-1.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com:9094,b-4.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com:9094 --topic example --producer.config/home/ec2-user/ssl.config[2020-04-11 11:13:19,286] ERROR [Producer clientId=console-producer] Connection to node -3 (b-4.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com/172.31.6.195:9094) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)You might get the following error when there is an issue with the key store configuration of the consumer leading to the authentication failure:./kafka-console-consumer --bootstrap-server b-2.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com:9094,b-1.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com:9094,b-4.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com:9094 --topic example --consumer.config/home/ec2-user/ssl.config[2020-04-11 11:14:46,958] ERROR [Consumer clientId=consumer-1, groupId=console-consumer-46876] Connection to node -1 (b-2.tlscluster.5818ll.c7.kafka.us-east-1.amazonaws.com/172.31.15.140:9094) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)[2020-04-11 11:14:46,961] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failedTo resolve this error, be sure that you have correctly configured the key store related configuration.java.io.IOException: keystore password was incorrectYou might get this error when the password for the key store or truststore is incorrect.To troubleshoot this error, do the 
following:Check whether the key store or truststore password is correct by running the following command:keytool -list -keystore kafka.client.keystore.jksEnter keystore password:Keystore type: PKCS12Keystore provider: SUNYour keystore contains 1 entryschema-reg, Jan 15, 2020, PrivateKeyEntry,Certificate fingerprint (SHA1): 4A:F3:2C:6A:5D:50:87:3A:37:6C:94:5E:05:22:5A:1A:D5:8B:95:EDIf the password for the key store or truststore is incorrect, then you might see the following error:keytool -list -keystore kafka.client.keystore.jksEnter keystore password:keytool error: java.io.IOException: keystore password was incorrectYou can view the verbose output of the above command by adding the -v flag:keytool -list -v -keystore kafka.client.keystore.jksYou can also use these commands to check if the key store is corrupted.You might also get this error when the secret key associated with the alias is incorrectly configured in the SSL configuration of the producer and consumer. To verify this root cause, run the following command:keytool -keypasswd -alias schema-reg -keystore kafka.client.keystore.jksEnter keystore password:Enter key password for <schema-reg>New key password for <schema-reg>:Re-enter new key password for <schema-reg>:If your password for the secret of the alias (example: schema-reg) is correct, then the command asks you to enter a new password for the secret key else. Otherwise, the command fails with the following message:keytool -keypasswd -alias schema-reg -keystore kafka.client.keystore.jksEnter keystore password:Enter key password for <schema-reg>keytool error: java.security.UnrecoverableKeyException: Get Key failed: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.You can also verify if a particular alias is part of the key store by running the following command:keytool -list -keystore kafka.client.keystore.jks -alias schema-regEnter keystore password:schema-reg, Jan 15, 2020, PrivateKeyEntry,Certificate fingerprint (SHA1): 4A:F3:2C:6A:5D:50:87:3A:37:6C:94:5E:05:22:5A:1A:D5:8B:95:EDErrors that are specific to IAM client authenticationConnection to node -1 (b-1.testcluster.abc123.c2.kafka.us-east-1.amazonaws.com/10.11.111.123:9098) failed authentication due to: Access denied-or-org.apache.kafka.common.errors.SaslAuthenticationException: Access deniedBe sure that the IAM role that accesses the Amazon MSK cluster allows cluster operations as mentioned in IAM access control.In addition to access policies, permission boundaries and service control policies block the user that attempts to connect to the cluster, but fails to pass on the required authorization.org.apache.kafka.common.errors.SaslAuthenticationException: Too many connects-or-org.apache.kafka.common.errors.SaslAuthenticationException: Internal errorYou get these errors when your cluster is running on a kafka.t3.small broker type with IAM access control and you exceeded the connection limit. The kafka.t3.small instance type accepts only one TCP connection per broker per second. When this connection limit is exceeded, your creation test fails and you get this error, indicating invalid credentials. For more information, see How Amazon MSK works with IAM.To resolve this error, consider doing the following:In your Amazon MSK Connect worker configuration, update the values for reconnect.backoff.ms and reconnect.backoff.max.ms to 1000 or higher.Upgrade to a larger broker instance type (such as kafka.m5.large or higher). 
For more information, see Broker types and Right-size your cluster: Number of partitions per broker.Errors that are specific to SASL/SCRAM client authenticationConnection to node -1 (b-3.testcluster.abc123.c2.kafka.us-east-1.amazonaws.com/10.11.111.123:9096) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-512Be sure that you stored the user credentials in AWS Secrets Manager and associated these credentials with the Amazon MSK cluster.When you access the cluster over port 9096, be sure that the user and password used in AWS Secrets Manager are the same as those in the client properties.When you try to retrieve the secrets using the get-secret-value API, make sure that the password used in AWS Secrets Manager doesn't contain any special characters, such as (/]).org.apache.kafka.common.errors.ClusterAuthorizationException: Request Request(processor=11, connectionId=INTERNAL_IP-INTERNAL_IP-0, session=Session(User:ANONYMOUS,/INTERNAL_IP), listenerName=ListenerName(REPLICATION_SECURE), securityProtocol=SSL, buffer=null) is not authorizedYou get this error when both the following conditions are true:You turned on SASL/SCRAM authentication for your Amazon MSK cluster.You've set resourceType=CLUSTER and operation=CLUSTER_ACTION in the ACLs for your cluster.The Amazon MSK cluster doesn't support this setting because this setting prevents the internal Apache Kafka replication. With this setting, the identity of the brokers appears as ANONYMOUS for inter-broker communication. If you need your cluster to support these ACLs while using SASL/SCRAM authentication, then you must grant the permissions for ALL operations to the ANONYMOUS user. This prevents the restriction of replication between the brokers.Run the following command to grant this permission to the ANONYMOUS user:./kafka-acls.sh --authorizer-properties zookeeper.connect=example-ZookeeperConnectString --add --allow-principal User:ANONYMOUS --operation ALL --clusterRelated informationConnecting to an Amazon MSK clusterHow do I troubleshoot common issues when using my Amazon MSK cluster with SASL/SCRAM authentication?Follow" | https://repost.aws/knowledge-center/msk-cluster-connection-issues
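As referenced in the row above, the following is a minimal client.properties sketch for IAM access control (brokers on port 9098), assuming the client has the aws-msk-iam-auth library on its classpath; the bootstrap variable is a placeholder and the property names follow that library's documented configuration.

# client.properties for IAM access control (port 9098)
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler

# Example usage with the Kafka CLI tools
./kafka-topics.sh --list --bootstrap-server $BOOTSTRAP:9098 --command-config client.properties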
What steps do I need to take before changing the instance type of my EC2 Linux instance? | My system requires more CPU or memory than is available on my current Amazon Elastic Compute Cloud (Amazon EC2) Linux instance. I want to know what steps I need to take before I change my instance type. | "My system requires more CPU or memory than is available on my current Amazon Elastic Compute Cloud (Amazon EC2) Linux instance. I want to know what steps I need to take before I change my instance type.Short descriptionTo optimize your Amazon EC2 Linux instance for your workload, change the instance type. Changing the instance type lets you modify the following configurations for your workload:Number of CPU coresAmount of RAMAmount of assigned instance store spaceAmazon Elastic Block Store (Amazon EBS) optimizationEnhanced networkingGPU coresFPGAsMachine learning acceleratorsNote: It's a best practice to maintain backups of your instances and data. Before you change your infrastructure, create an AMI or create snapshots of your EBS volumes.ResolutionVerify that your current instance type is compatible with the new instance typeBefore you change instance types or instance families, verify that the current instance type and the new instance type are compatible. For a list of compatibility issues, see Compatibility for changing the instance type.After you verify compatibility, you can change the instance type of your Amazon EBS-backed instance.Stop your instanceBefore you change instance types, you must stop your instance.Important:If your instance is instance store backed or has instance store volumes that contain data, then the data is lost when you stop the instance. If you're moving from one instance store-backed instance to another instance store-backed instance, then you must migrate the instance. For more information, see Change the instance type of an instance store-backed instance.If your instance is part of an Amazon EC2 Auto Scaling group, then stopping the instance might terminate the instance. If you launched the instance with Amazon EMR, AWS CloudFormation, or AWS Elastic Beanstalk, then your instance might be part of an AWS Auto Scaling group. Instance termination in this scenario depends on the instance scale-in protection settings for your Auto Scaling group. If your instance is part of an Auto Scaling group, then temporarily remove the instance from the Auto Scaling group before starting the resolution steps.If you're not using an Elastic IP address, then stopping and starting the instance changes the public IP address of your instance. It's a best practice to use an Elastic IP address instead of a public IP address when routing external traffic to your instance. If you're using Amazon Route 53, then you might have to update the Route 53 DNS records when the public IP changes.Enhanced networkingIf you're converting to an instance that supports enhanced networking, then install any required drivers, and turn on enhanced networking on your current instance. For more information, see Enhanced networking on Linux.Nitro-based instance typesIf you're changing your instance to a Nitro-based instance type, then take the following actions:Confirm that you installed the NVMe and ENA modules on your instance.Check that any block devices that are listed in /etc/fstab are compatible with NVMe block device names (/dev/nvme1, /dev/nvme2, and so on).Amazon EBS volumes are exposed as NVMe devices to these instance types, and the device names are changed on a stop or start event. 
To avoid volume mismatch, use file system UUIDs or labels to mount the file systems.To automate these checks, run the NitroInstanceChecks script. For more information, see Why is my Linux instance not booting after I changed its type to a Nitro-based instance type? Follow the instructions in the Run the NitroInstanceChecks script section.After the script runs and you make necessary updates, verify that the DRIVERS entry in /etc/udev/rules.d/70-persistent-net.rules is set to ?* or ena.Use a text editor to access the file. The following example uses the vi editor.vi /etc/udev/rules.d/70-persistent-net.rulesThe correct entry appears as follows:SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="01:23:45:67:89:ab", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"Networking on current generation instancesCurrent generation instances launch only in a virtual private cloud (VPC). If your current instance is an EC2-Classic instance, then migrate the instance to a Linux instance in a VPC.Mixing EC2 architecturesIf your instance's source AMI is built for a specific architecture, then you're restricted to creating instance types of the same architecture. Examples of AMIs that are built for a specific architecture include 32-bit (i386), 64-bit (x86_64), or 64-bit ARM (arm64). This is also the case if your instance is running an AMI that's created for the mac1 instance type. You can't move these images between instance types.Related informationStatus checks for your instancesAmazon EC2 pricingWhat do I need to do before migrating my EC2 instance to a sixth generation instance to make sure that I get maximum network performance?Follow" | https://repost.aws/knowledge-center/resize-instance
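A hedged CLI sketch of the stop, modify, and start sequence that the row above describes; the instance ID and target instance type are placeholders, and the compatibility and backup checks above still apply before you run it.

aws ec2 stop-instances --instance-ids i-0abcd1234example
aws ec2 wait instance-stopped --instance-ids i-0abcd1234example
aws ec2 modify-instance-attribute --instance-id i-0abcd1234example --instance-type "{\"Value\": \"m5.xlarge\"}"
aws ec2 start-instances --instance-ids i-0abcd1234example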
How can I be sure that manually installed libraries persist in Amazon SageMaker if my lifecycle configuration times out when I try to install the libraries? | "When I try to install additional libraries, my lifecycle configuration scripts run for more than five minutes. This causes the Amazon SageMaker notebook instance to time out. I want to resolve this issue. Also, I want to be sure that my manually installed libraries persist between notebook instance sessions." | "When I try to install additional libraries, my lifecycle configuration scripts run for more than five minutes. This causes the Amazon SageMaker notebook instance to time out. I want to resolve this issue. Also, I want to be sure that my manually installed libraries persist between notebook instance sessions.Short descriptionIf a lifecycle configuration script runs for longer than five minutes, the script fails, and the notebook instance isn't created or started.Use one of the following methods to resolve this issue:nohup: The nohup command that's short for 'no hangup' is a Linux command that ignores the hang up signal. Using this command with the ampersand symbol at the end forces the lifecycle configuration script to run in the background until the packages are installed. This method is a best practice for less technical users, and is more appropriate as a short-term solution.Note: The nohup command ignores the hang up signal. Therefore, you must use it with the ampersand symbol for the script to continue to run in the background. The shell that runs the lifecycle configuration script is terminated at the end of the script. Therefore, add nohup in the beginning of the command and & at the end of the command to force the lifecycle configuration script to run in the background.Create a custom, persistent Conda installation on the notebook instance's Amazon Elastic Block Store (Amazon EBS) volume: Run the on-create script in the terminal of an existing notebook instance. This script uses Miniconda to create a separate Conda installation on the EBS volume ( /home/ec2-user/SageMaker/). Then, run the on-start script as a lifecycle configuration to make the custom environment available as a kernel in Jupyter. This method is a best practice for more technical users, and it is a better long-term solution.ResolutionUse one of the following methods to resolve lifecycle configuration timeouts.Run the nohup commandUse the nohup command to force the lifecycle configuration script to continue running in the background even after the five-minute timeout period expires. Be sure to add the ampersand (&) at the end of the command.Example:#!/bin/bashset -enohup pip install xgboost &The script stops running after the libraries are installed. You aren't notified when this happens, but you can use the ps command to find out if the script is still running.Note: You can also use the nohup command if your lifecycle configuration script times out in other scenarios, such as when you download large Amazon Simple Storage Service (Amazon S3) objects.Create a custom persistent Conda installation on the notebook instance's EBS volume1. In the terminal of an existing notebook instance, create a .sh file using your preferred editor.Example:vim custom-script.sh2. Copy the contents of the on-create script into the .sh file. This script creates a new Conda environment in a custom Conda installation. 
This script also installs NumPy and Boto3 in the new Conda environment.Note: The notebook instance must have internet connectivity to download the Miniconda installer and ipykernel.3. Mark the script as executable, and then run it.Example:chmod +x custom-script.sh./custom-script.sh4. When installation is complete, stop the notebook instance.5. Copy the on-start script into a .sh file.#!/bin/bashset -e# OVERVIEW# This script installs a custom, persistent installation of conda on the Notebook Instance's EBS volume, and ensures# that these custom environments are available as kernels in Jupyter.# # The on-start script uses the custom conda environment created in the on-create script and uses the ipykernel package# to add that as a kernel in Jupyter.## For another example, see:# https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-add-external.html#nbi-isolated-environmentsudo -u ec2-user -i <<'EOF'unset SUDO_UIDWORKING_DIR=/home/ec2-user/SageMaker/custom-miniconda/source "$WORKING_DIR/miniconda/bin/activate"for env in $WORKING_DIR/miniconda/envs/*; doBASENAME=$(basename "$env")source activate "$BASENAME"python -m ipykernel install --user --name "$BASENAME" --display-name "Custom ($BASENAME)"done# Optionally, uncomment these lines to disable SageMaker-provided Conda functionality.# echo "c.EnvironmentKernelSpecManager.use_conda_directly = False" >> /home/ec2-user/.jupyter/jupyter_notebook_config.py# rm /home/ec2-user/.condarcEOFecho "Restarting the Jupyter server.."# For notebook instance with alinux (notebook-al1-v1)initctl restart jupyter-server --no-wait# Use this instead for notebook instance with alinux2 (notebook-al2-v1)systemctl restart jupyter-server6. On the stopped notebook instance, add the on-start script as a lifecycle configuration. This script makes the custom environment available as a kernel in Jupyter every time that you start the notebook instance.7. Start the notebook instance, and then install your custom libraries in the custom environment.For example, to install pyarrow:import sys!conda install --yes --prefix {sys.prefix} -c conda-forge pyarrowIf you get an error message that says that you need to update Conda, run the following commands. Then, try installing the custom libraries again.!conda install -p "/home/ec2-user/anaconda3" "conda>=4.8" --yes!conda install -p "/home/ec2-user/SageMaker/custom-miniconda/miniconda" "conda>=4.8" --yesIf you stop and then start your notebook instance, your custom Conda environment and libraries are still available. You don't have to install them again.Note: You can use the Amazon CloudWatch logs to troubleshoot issues with lifecycle configuration scripts. You can view the script execution logs in the log stream LifecycleConfigOnStart under the aws/sagemaker/studio namespace.Related informationAmazon SageMaker notebook instance lifecycle configuration samplesLifecycle configuration best practicesDebugging lifecycle configurationsFollow" | https://repost.aws/knowledge-center/sagemaker-lifecycle-script-timeout |
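A sketch of attaching the on-start script from the row above as a lifecycle configuration with the AWS CLI instead of the console; the configuration and notebook instance names are placeholders, the notebook instance must be stopped for the update call, and the base64 flag differs on macOS (use -b 0 instead of -w 0).

aws sagemaker create-notebook-instance-lifecycle-config \
  --notebook-instance-lifecycle-config-name custom-conda-on-start \
  --on-start Content="$(base64 -w 0 on-start.sh)"

aws sagemaker update-notebook-instance \
  --notebook-instance-name my-notebook-instance \
  --lifecycle-config-name custom-conda-on-start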
How can I troubleshoot low freeable memory in an Amazon RDS for MySQL database? | "I'm running an Amazon Relational Database Service (Amazon RDS) for MySQL instance. I see that my available memory is low, my database is out of memory, or low memory is causing latency issues in my application. How do I identify the source of the memory utilization, and how can I troubleshoot low freeable memory?" | "I'm running an Amazon Relational Database Service (Amazon RDS) for MySQL instance. I see that my available memory is low, my database is out of memory, or low memory is causing latency issues in my application. How do I identify the source of the memory utilization, and how can I troubleshoot low freeable memory?Short descriptionIn Amazon RDS for MySQL, you can monitor four memory statuses:Active: The memory that's actively being consumed by database processes or threads.Buffer: A buffer is a temporary space in memory that's used to hold a block of data.Free Memory: The memory that's available for use.Cache: Caching is a technique where data is temporarily stored in memory, enabling fast retrieval of data.By default, when you create an Amazon RDS for MySQL instance, buffers and caches are allocated to improve database operations. Amazon RDS for MySQL also has an internal memory component (such as key_buffers_size or query_cache_size) that creates internal temporary tables to perform certain operations.When you're using Amazon RDS for MySQL, make sure to understand how MySQL uses and allocates memory. After you identify the components that are using memory, you can look for bottlenecks at the instance and database level. Then, monitor those specific metrics and configure your sessions for optimal performance.ResolutionHow MySQL uses memoryIn Amazon RDS for MySQL, 80% to 90% of the available memory on an instance is allocated with the default parameters. This allocation is optimal for performance, but if you set parameters that use more memory, then modify other parameters to use less memory to compensate.You can calculate the approximate memory usage for your RDS for MySQL DB instance like this:Maximum MySQL Memory Usage = innodb_buffer_pool_size + key_buffer_size + ((read_buffer_size + read_rnd_buffer_size + sort_buffer_size + join_buffer_size) X max_connections)Buffer poolsGlobal buffers and caches include components like Innodb_buffer_pool_size, Innodb_log_buffer_size, key_buffer_size, and query_cache_size. The innodb_buffer_pool_size parameter is the memory area for RAM where innodb caches the database tables and index-related data. A larger buffer pool requires less I/O operation diverted back to the disk. By default, the innodb_buffer_pool_size uses a maximum of 75% of available memory allocated to the Amazon RDS DB instance:innodb_buffer_pool_size = {DBInstanceClassMemory*3/4}Make sure to review this parameter first to identify the source of memory usage. Then, consider reducing the value for innodb_buffer_pool_size by modifying the parameter value in your custom parameter group.For example, the default DBInstanceClassMemory*3/4 can be reduced to *5/8 or *1/2. Make sure that the instance's BufferCacheHitRatio value isn't too low. If the BufferCacheHitRatio value is low, you might need to increase the instance size for more RAM. For more information, see Best practices for configuring parameters for Amazon RDS for MySQL, part 1: Parameters related to performance.MySQL threadsMemory is also allocated for each MySQL thread that's connected to a MySQL DB instance. 
The following threads require allocated memory:thread_stack, net_buffer_length, read_buffer_size, sort_buffer_size, join_buffer_size, max_heap_table_size, and tmp_table_size.Additionally, MySQL creates internal temporary tables to perform some operations. These tables are created initially as memory-based tables. When the tables reach the size specified by tmp_table_size or max_heap_table_size (whichever has the lowest value), then the table is converted to a disk-based table. When multiple sessions create internal temporary tables, you might see increases in memory utilization. To reduce memory utilization, avoid using temporary tables in your queries.Note: When you increase the limits tmp_table_size and max_heap_table_size, larger temporary tables are able to live in-memory. To confirm whether an implicit temporary table has been created, use the created_tmp_tables variable. For more information about this variable, see created_tmp_tables on the MySQL website.JOIN and SORT operationsMemory usage will increase if multiple buffers of the same type, such as join_buffer_size or sort_buffer_size, are allocated during a JOIN or SORT operation. For example, MySQL allocates one JOIN buffer to perform JOIN between two tables. If a query involves multi-table JOINs and all the queries require a JOIN buffer, then MySQL allocates one fewer JOIN buffer than the total number of tables. Configuring your session variables with a value that is too high can cause issues if the queries aren't optimized. You can allocate the minimum memory to session-level variables such as join_buffer_size and sort_buffer_size. For more information, see Working with DB parameter groups.If you perform bulk inserts to MyISAM tables, then bulk_insert_buffer_size bytes of memory are used. For more information, see Best practices for working with MySQL storage engines.The Performance SchemaMemory can be consumed by the Performance Schema if you enabled the Performance Schema for Performance Insights on Amazon RDS for MySQL. When the Performance Schema is enabled, then MySQL allocates internal buffers when the instance is started and during server operations. For more information about how the Performance Schema uses memory, see the MySQL Documentation for The Performance Schema memory-allocation model.Along with the Performance Schema tables, you can also use MySQL sys schema. For example, you can use the performance_schema event to show how much memory is allocated for internal buffers that are used by Performance Schema. Or, you can run a query like this to see how much memory is allocated:SELECT * FROM performance_schema.memory_summary_global_by_event_name WHERE EVENT_NAME LIKE 'memory/performance_schema/%';Memory instruments are listed in the setup_instruments table, following a "memory/code_area/instrument_name" format. To enable memory instrumentation, update the ENABLED column of the relevant instruments in the setup_instruments table:UPDATE performance_schema.setup_instruments SET ENABLED = 'YES' WHERE NAME LIKE 'memory/%';Monitoring memory usage on your instanceAmazon CloudWatch metricsMonitor the Amazon CloudWatch metrics for DatabaseConnections, CPUUtilization, ReadIOPS, and WriteIOPS when available memory is low.For DatabaseConnections, it's important to note that each connection made to the database needs some amount of memory allocated to it. Therefore, a spike in database connections can cause a drop in freeable memory.
In Amazon RDS, the soft limit for max_connections is calculated like this:{DBInstanceClassMemory/12582880}Monitor whether you're exceeding this soft limit by checking the DatabaseConnections metric in Amazon CloudWatch.Additionally, check for memory pressure by monitoring the CloudWatch metrics for SwapUsage in addition to FreeableMemory. If you see that a large amount of swap is used and you have low FreeableMemory, then your instance might be under high memory pressure. High memory pressure affects database performance. It's a best practice to keep memory pressure levels below 95%. For more information, see Why is my Amazon RDS instance using swap memory when I have sufficient memory?Enhanced MonitoringTo monitor the resource utilization on a DB instance, enable Enhanced Monitoring. Then, set a granularity of one or five seconds (the default is 60 seconds). With Enhanced Monitoring, you can monitor the freeable and active memory in real time.You can also monitor the threads that are consuming maximum CPU and memory by listing the threads for your DB instance:mysql> select THREAD_ID, PROCESSLIST_ID, THREAD_OS_ID from performance_schema.threads;Then, map the thread_OS_ID to the thread_ID:select p.* from information_schema.processlist p, performance_schema.threads t where p.id=t.processlist_id and t.thread_os_id=<Thread ID from EM processlist>;Troubleshooting low freeable memoryIf you're experiencing low freeable memory issues, consider the following troubleshooting tips:Make sure that you have enough resources allocated to your database to run your queries. With Amazon RDS, the amount of resources allocated depends on the instance type. Also, certain queries, such as stored procedures, can take an unlimited amount of memory while being run.Avoid any long-running transactions by breaking up large queries into smaller queries.To view all active connections and queries in your database, use the SHOW FULL PROCESSLIST command. If you observe a long-running query with JOIN or SORT operations, then you must have enough RAM for the optimizer to calculate the plan. Also, if you identify a query that needs a temporary table, you must have additional memory to allocate to the table.To view long-running transactions, memory utilization statistics, and locks, use the SHOW ENGINE INNODB STATUS command. Review the output and check the BUFFER POOL AND MEMORY entries. The BUFFER POOL AND MEMORY entry provides information about memory allocation for InnoDB, such as “Total Memory Allocated”, “Internal Hash Tables”, and “Buffer Pool Size”. The InnoDB Status also helps to provide additional information regarding latches, locks, and deadlocks.If your workload often encounters deadlocks, then modify the innodb_lock_wait_timeout parameter in your custom parameter group. InnoDB relies on the innodb_lock_wait_timeout setting to roll back transactions when a deadlock occurs.To optimize database performance, make sure that your queries are properly tuned. Otherwise, you might experience performance issues and extended wait times.Use Amazon RDS Performance Insights to help you monitor DB instances and identify any problematic queries.Monitor Amazon CloudWatch metrics such as CPU utilization, IOPS, memory, and swap usage so that the instance doesn't throttle.Set a CloudWatch alarm on the FreeableMemory metric so that you receive a notification when freeable memory drops below 5% of the instance memory (see the example put-metric-alarm command after this row). It's a best practice to keep at least 5% of the instance memory free.Regularly upgrade your instance to a more recent minor version of MySQL.
Older minor versions are more likely to contain memory leak-related bugs.Related informationOverview of monitoring Amazon RDSWhy is my Amazon RDS DB instance using swap memory when I have sufficient memory?Follow" | https://repost.aws/knowledge-center/low-freeable-memory-rds-mysql-mariadb |
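A hedged sketch of the FreeableMemory alarm recommended in the row above; the DB instance identifier, threshold (in bytes, here roughly 5% of an 8 GiB instance), and SNS topic ARN are placeholders to adjust for your instance class.

aws cloudwatch put-metric-alarm \
  --alarm-name rds-low-freeable-memory \
  --namespace AWS/RDS \
  --metric-name FreeableMemory \
  --dimensions Name=DBInstanceIdentifier,Value=mydbinstance \
  --statistic Average \
  --period 300 \
  --evaluation-periods 3 \
  --threshold 430000000 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:rds-alerts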
How do I create a new Direct Connect connection with physical redundancy? | I want to create a new AWS Direct Connect connection with physical redundancy. | "I want to create a new AWS Direct Connect connection with physical redundancy.Short descriptionDirect Connect offers the following physical redundancy (location, path, and device) options:Development and test - This option is for non-critical workloads or development workloads and uses separate connections that terminate on separate devices in one location. It provides resiliency against device failure, but doesn't provide resiliency against location failure.High resiliency - This option is for critical workloads and uses two single connections to multiple locations. It provides resiliency against connectivity failures caused by a fiber cut or a device failure and also protects against a complete location failure. With this option, you can order dedicated connections to achieve an SLA of 99.9%.Maximum resiliency - This option is for critical workloads and uses separate connections that terminate on separate devices in more than one location. It provides resiliency against device, connectivity, and complete location failures. With this option, you can order dedicated connections to achieve an SLA of 99.99%.Note: It's a best practice to use the Connection wizard in the Direct Connect Resiliency Toolkit to order the dedicated connections to achieve your SLA objective.ResolutionNote: The following steps apply to dedicated connections. For hosted connections, contact an AWS Direct Connect Partner.1. Sign in to the AWS Management Console, and then open your Direct Connect console.2. Choose Create connection.3. Under Connection ordering type, choose Connection wizard.4. Under Resiliency level, select the option that you want, and then choose Next.5. Under Connection settings, select the appropriate Bandwidth, Location, and Service provider.6. (Optional) To configure MACsec support and Tags, choose Additional settings, and then configure the settings.7. Choose Next to go to the Review and create page. Review your order, and then choose Create.8. Download your Letter of Authorization and Connecting Facility Assignment (LOA-CFA).Related informationDedicated connectionsHosted connectionsUsing the AWS Direct Connect Resiliency Toolkit to get startedAWS Direct Connect Service Level AgreementAWS Direct Connect MAC SecurityFollow" | https://repost.aws/knowledge-center/direct-connect-physical-redundancy
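For reference, the console steps in the row above have AWS CLI equivalents for dedicated connections; this is a sketch with placeholder location code, bandwidth, connection name, and connection ID. Note that create-connection orders a single connection, so for redundancy you would repeat it across devices or locations (or use the Resiliency Toolkit wizard as recommended above). The describe-loa response returns the LOA-CFA as base64-encoded content.

aws directconnect create-connection --location EqDC2 --bandwidth 1Gbps --connection-name dx-primary-1
aws directconnect describe-loa --connection-id dxcon-fexample1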
How do I troubleshoot Lambda function failures? | "When I try to invoke my AWS Lambda function, it fails and returns an error." | "When I try to invoke my AWS Lambda function, it fails and returns an error.
Resolution
To troubleshoot Lambda function failures, determine what's causing the error by using one or more of the AWS services and features listed in this article. Then, follow the links provided to review the troubleshooting best practices for each issue.
Identify and troubleshoot any networking errors
If there are issues with your Lambda networking configuration, you can see many types of errors. The following are some of the most common Lambda networking-related errors:
If your function isn't in a virtual private cloud (VPC) and you tried to access resources by using a private DNS name, then you see one of the following errors:
UnknownHostException
Error: getaddrinfo ENOTFOUND
If your function is in a VPC and then loses internet access or times out, you see one of the following errors:
connect ETIMEDOUT 176.32.98.189:443
Task timed out after 10.00 seconds
If the VPC that your function is in reaches its elastic network interface limit, you see the following error:
ENILimitReachedException: The elastic network interface limit was reached for the function's VPC.
If the Transmission Control Protocol (TCP) connection is dropped, you see one of the following errors:
Connection reset by peer
ECONNRESET
ECONNREFUSED
To troubleshoot Lambda networking errors:
1. Confirm that there's a valid network path from your Amazon Virtual Private Cloud (Amazon VPC) to the endpoint that your function is trying to reach. For more information, see Configuring a Lambda Function to Access Resources in a VPC.
2. Confirm that your function has access to the internet. For more information, see How do I give internet access to a function that's connected to an Amazon VPC? Also, see How do I troubleshoot timeout issues with a Lambda function that's in an Amazon VPC?
3. To troubleshoot DNS resolution issues, make sure that the VPC is configured for private resource access. If you're not using the AWS-provided DNS, use an Amazon EC2 instance to confirm that the custom DHCP options set resolves DNS names correctly. For more information, see How does DNS work and how do I troubleshoot partial or intermittent DNS failures?
Note: If you can't determine why your function code isn't reaching a public endpoint after reviewing your VPC configuration, turn on VPC Flow Logs. VPC Flow Logs let you see all the network traffic flowing to and from a VPC, and help you determine why a specific request was denied or didn't route. For more information, see Troubleshoot networking issues in Lambda.
Identify and troubleshoot any permission errors
If the security permissions for your Lambda deployment package are incorrect, you see one of the following errors:
EACCES: permission denied, open '/var/task/index.js'
cannot load such file -- function
[Errno 13] Permission denied: '/var/task/function.py'
The Lambda runtime needs permission to read the files in your deployment package. You can use the chmod command to change the file mode. The following example command makes all files and folders in the current directory readable by any user:
chmod -R o+rX .
For more information, see Troubleshoot deployment issues in Lambda.
If your AWS Identity and Access Management (IAM) identities don't have permission to invoke a function, then you receive the following error:
User: arn:aws:iam::123456789012:user/developer is not authorized to perform: lambda:InvokeFunction on resource: my-function
To troubleshoot Lambda permissions errors:
Review your Lambda log file entries in AWS CloudTrail. The requester that makes calls to Lambda must have the IAM permissions required to invoke your function. To grant the required permissions, update your Lambda function permissions (an example AWS CLI sketch for granting invoke permission follows this entry).
For more information, see the following topics:
Understanding AWS Lambda log file entries
Troubleshooting AWS Lambda identity and access
IAM: lambda:InvokeFunction not authorized
Identify and troubleshoot any code errors
If there are issues with your Lambda code, you can see many types of errors. The following are some of the more common Lambda code-related errors:
Unable to marshal response: Object of type AttributeError is not JSON serializable
Issue: The AWS SDK included on the runtime is not the latest version
(Node.js) Function returns before code finishes executing
KeyError
To troubleshoot Lambda code errors:
1. Review your Amazon CloudWatch Logs for Lambda. You can use CloudWatch to view all logs generated by your function's code and identify potential issues (a sketch that invokes a function and prints its log tail follows this entry). For more information, see Accessing Amazon CloudWatch Logs for AWS Lambda. For details on function logging, see the Lambda function logging instructions for the programming language that you're using: Python, Node.js, Java, Go, C#, PowerShell, or Ruby.
Note: If your function returns a stack trace, then the error message in the stack trace specifies what's causing the error.
2. Use AWS X-Ray to identify any code performance bottlenecks. If your Lambda function uses downstream AWS resources, microservices, databases, or HTTP web APIs, then you can use AWS X-Ray to troubleshoot code performance issues. For more information, see Using AWS Lambda with AWS X-Ray.
3. Confirm that your function's deployment package can import any required dependencies. Follow the Lambda deployment package instructions for the programming language that you're using: Python, Node.js, Java, Go, C#, PowerShell, or Ruby.
Note: You can also use Lambda layers to add dependencies that are outside of your deployment package.
4. (For code deployed as a container image) Confirm that you're installing the runtime interface client and deploying the image correctly. Follow the Lambda container image instructions for the programming language that you're using: Python, Node.js, Java, Go, C#, or Ruby.
Identify and troubleshoot any throttling errors
If your function gets throttled, you see one of the following errors:
Rate exceeded
429 TooManyRequestsException
To troubleshoot Lambda throttling errors:
Review your CloudWatch metrics for Lambda. For more information, see Working with Lambda function metrics. Key metrics to monitor are ConcurrentExecutions, UnreservedConcurrentExecutions, and Throttles (a CLI sketch for pulling these numbers follows this entry).
Note: If requests to invoke your function arrive faster than the function can scale, or if they exceed your concurrency limit, then the requests fail with a 429 throttling error. For more information, see Lambda function scaling. Also, see How do I troubleshoot Lambda function throttling with "Rate exceeded" and 429 "TooManyRequestsException" errors?
Identify and troubleshoot any Invoke API 500 and 502 errors
If your invoke request fails, then you see any of the following 500 or 502 server-side errors:
InvalidRuntimeException
InvalidSecurityGroupIDException
InvalidZipFileException
KMSAccessDeniedException
KMSNotFoundException
You have exceeded the maximum limit for Hyperplane ENIs for your account
SubnetIPAddressLimitReachedException
To troubleshoot Lambda Invoke API 500 and 502 errors:
Follow the instructions in How do I troubleshoot HTTP 502 and HTTP 500 status code (server-side) errors from AWS Lambda? For a list of possible errors with descriptions, see Errors in the Lambda Invoke API reference.
Identify and troubleshoot any container image errors
If you're using container images and there's an issue with a container image, you see either of the following errors:
"errorType": "Runtime.InvalidEntrypoint"
Error: You are using an AWS CloudFormation template, and your container ENTRYPOINT is being overridden with a null or empty value.
To troubleshoot Lambda container image errors:
Follow the instructions in Troubleshoot container image issues in Lambda.
Related information
Monitoring and troubleshooting Lambda applications
Error handling and automatic retries in Lambda" | https://repost.aws/knowledge-center/lambda-troubleshoot-function-failures |
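The permission-error guidance above says to update your Lambda function permissions but doesn't show a command. The following is a minimal sketch, not the article's own procedure, of one way to grant an IAM user the lambda:InvokeFunction permission with the AWS CLI. The user name, account ID, Region, function name, and policy name are hypothetical placeholders modeled on the example error message.

```bash
# Hypothetical identity-based policy that allows invoking a single function.
# Replace the account ID, Region, and function name with your own values.
cat > invoke-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function"
    }
  ]
}
EOF

# Attach the policy inline to the IAM user named in the example error message.
aws iam put-user-policy \
  --user-name developer \
  --policy-name AllowInvokeMyFunction \
  --policy-document file://invoke-policy.json
```

If the caller is another AWS service or account rather than an IAM identity, a resource-based policy added with aws lambda add-permission is usually the better fit.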
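To make the log-review step in the code-errors section concrete, here is a sketch that invokes a function once from the AWS CLI (version 2) and prints the tail of its execution log. The function name and test payload are placeholder values, not anything from the original article.

```bash
# Invoke the function once; --log-type Tail returns the last 4 KB of the
# execution log, base64-encoded, in the LogResult field of the response.
aws lambda invoke \
  --function-name my-function \
  --payload '{"test": true}' \
  --cli-binary-format raw-in-base64-out \
  --log-type Tail \
  --query 'LogResult' \
  --output text \
  response.json | base64 --decode

# The function's return value is written to response.json.
cat response.json
```

The decoded log tail includes the START, END, and REPORT lines plus any stack trace, which usually points directly at the failing statement.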
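For the throttling section, the following sketch shows one way to pull the key concurrency numbers from the command line: the account-level concurrency limit, any reserved concurrency on the function, and the Throttles metric for the past hour. The function name is a placeholder, and the date arithmetic assumes GNU date (Linux).

```bash
FUNCTION_NAME="my-function"   # placeholder

# Account-wide concurrent execution limit for the Region.
aws lambda get-account-settings --query 'AccountLimit.ConcurrentExecutions'

# Reserved concurrency on this function (empty output if none is configured).
aws lambda get-function-concurrency --function-name "$FUNCTION_NAME"

# Total throttled invocations over the last hour, in 5-minute buckets.
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda \
  --metric-name Throttles \
  --dimensions Name=FunctionName,Value="$FUNCTION_NAME" \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Sum
```

A nonzero Throttles sum alongside ConcurrentExecutions near the account limit is the usual signature of a 429 "Rate exceeded" error.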
How do I create a validation only AWS DMS task? | "I want to create an AWS Database Migration Service (AWS DMS) task to use for validation purposes, such as for previewing and validating data." | "I want to create an AWS Database Migration Service (AWS DMS) task to use for validation purposes, such as for previewing and validating data.
Short description
AWS DMS allows you to create validation only tasks using either the AWS DMS console or the AWS Command Line Interface (AWS CLI). You can use validation only tasks to validate your data without performing any migration or data replication. When you use validation only tasks, there's no overhead on your existing migration task because the validation itself is decoupled from the migration.
There are two types of validation only tasks: full load validation only tasks and change data capture (CDC) validation only tasks.
Full load validation only tasks complete much faster than their CDC equivalent when many failures are reported. But, in full load mode, changes made to the source or target are reported as failures, which can be a disadvantage.
CDC validation only tasks delay validation based on average latency, and they retry failures multiple times before reporting them. If most of the data comparisons result in failures, then a CDC validation only task is very slow, which is a potential disadvantage.
For more information on how you can use validation only tasks, see the Validation only use cases section of Validation only tasks.
Resolution
Note: If you receive errors when running AWS CLI commands, make sure that you're using the most recent version of the AWS CLI.
Create a validation only task using the AWS DMS console
1. Open the AWS DMS console, and then from the navigation pane, choose Database migration tasks.
2. Choose Create task, and then under Task configuration, enter the details for your task.
3. Under Migration type, choose the type that matches your use case: for a full load validation only task, choose Migrate existing data; for a CDC validation only task, choose Replicate data changes only.
4. Under Task settings, select the JSON editor, and then change these settings:
"EnableValidation": true,
"ValidationOnly": true,
5. Under Migration task startup configuration, choose Manually later. This lets you verify the task settings before you start the task.
Note: The default TargetTablePrepMode is DO_NOTHING. If TargetTablePrepMode has been modified, set it back to DO_NOTHING.
6. Choose Create task.
Create a validation only task using the AWS CLI
1. For Linux and Windows environments, run the create-replication-task command to create a validation only task. You can also specify a cdc-start-time, which is useful if you need to start validation from a specific timestamp.
See the following examples:
Linux:
aws dms create-replication-task --replication-task-identifier validation-only-task --replication-task-settings '{"FullLoadSettings":{"TargetTablePrepMode":"DO_NOTHING"},"ValidationSettings":{"EnableValidation":true,"ValidationOnly":true}}' --replication-instance-arn arn:aws:dms:us-east-1:xxxxxxxxxxx:rep:ABCDEFGH12346 --source-endpoint-arn arn:aws:dms:us-east-1:xxxxxxxxxxxx:endpoint:KSXGO6KATGOXBDZXKRV3QNIZV4 --target-endpoint-arn arn:aws:dms:us-east-1:xxxxxxxxxxxxxxx:endpoint:7SIYPBZTE2X3CZ7FPN7KKOAV6Q --migration-type cdc --cdc-start-time "2022-06-08T00:12:12" --table-mappings file://Table-mappings.json
Windows (same arguments, with the JSON settings quoted for the Windows command prompt):
aws dms create-replication-task --replication-task-identifier validation-only-task --replication-task-settings "{\"FullLoadSettings\":{\"TargetTablePrepMode\":\"DO_NOTHING\"},\"ValidationSettings\":{\"EnableValidation\":true,\"ValidationOnly\":true}}" --replication-instance-arn arn:aws:dms:us-east-1:xxxxxxxxxxx:rep:ABCDEFGH12346 --source-endpoint-arn arn:aws:dms:us-east-1:xxxxxxxxxxxx:endpoint:KSXGO6KATGOXBDZXKRV3QNIZV4 --target-endpoint-arn arn:aws:dms:us-east-1:xxxxxxxxxxxxxxx:endpoint:7SIYPBZTE2X3CZ7FPN7KKOAV6Q --migration-type cdc --cdc-start-time "2022-06-08T00:12:12" --table-mappings file://Table-mappings.json
2. Open the AWS DMS console, and then from the navigation pane, choose Database migration tasks.
3. Confirm that the task that you created with the AWS CLI is listed.
4. From the Overview details section, expand Task settings (JSON), and then confirm that these settings are in place:
"EnableValidation": true,
"ValidationOnly": true,
"TargetTablePrepMode": "DO_NOTHING",
These examples create a CDC validation only task. Use the same settings for a full load validation only task, but change the --migration-type to full-load when you run the create-replication-task command. (A sketch for starting the task and checking its validation results follows this entry.)
Related information
Task settings example
Limitations" | https://repost.aws/knowledge-center/dms-validation-only-task |
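Once the validation only task exists, you start it and read its results in the same way as any other AWS DMS task. The following sketch, with a placeholder task ARN, shows one way to start the task and then check per-table validation state from the AWS CLI; jq is used only to format the output and is optional.

```bash
TASK_ARN="arn:aws:dms:us-east-1:123456789012:task:EXAMPLEVALIDATIONTASK"   # placeholder

# Start the validation only task.
aws dms start-replication-task \
  --replication-task-arn "$TASK_ARN" \
  --start-replication-task-type start-replication

# Check per-table validation progress and results.
aws dms describe-table-statistics \
  --replication-task-arn "$TASK_ARN" \
  | jq -r '.TableStatistics[] | [.SchemaName, .TableName, .ValidationState, (.ValidationFailedRecords | tostring)] | @tsv'
```

Tables that validate cleanly typically report a ValidationState of Validated; rows that don't match show up in ValidationFailedRecords and in the awsdms_validation_failures_v1 table on the target.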