[
{
"question": "A company is creating a prototype of an ecommerce website on AWS. The website consists of an Application Load Balancer, an Auto Scaling group of Amazon EC2 instances for web servers, and an Amazon RDS for MySQL DB instance that runs with the Single-AZ configuration. The website is slow to respond during searches of the product catalog. The product catalog is a group of tables in the MySQL database that the company does not ate frequently. A solutions architect has determined that the CPU utilization on the DB instance is high when product catalog searches occur. What should the solutions architect recommend to improve the performance of the website during searches of the product catalog?",
"options": [
"Migrate the product catalog to an Amazon Redshift database. Use the COPY command to load the product catalog tables.",
"Implement an Amazon ElastiCache for Redis cluster to cache the product catalog. Use lazy loading to populate the cache.",
"Add an additional scaling policy to the Auto Scaling group to launch additional EC2 instances when database response is slow.",
"Turn on the Multi-AZ configuration for the DB instance. Configure the EC2 instances to throttle the product catalog queries that are sent to the database."
],
"correct": [
"B"
]
},
{
"question": "An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic. Which solution resolves this issue in the MOST cost-effective way?",
"options": [
"Migrate the PostgreSQL database to Amazon Aurora Serverless v2.",
"Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.",
"Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type",
"Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage"
],
"correct": [
"A"
]
},
{
"question": "A company is migrating a new application from an on-premises data center to a new VPC in the AWS Cloud. The company has multiple AWS accounts and VPCs that share many subnets and applications. The company wants to have fine-grained access control for the new application. The company wants to ensure that all network resources across accounts and VPCs that are granted permission to access the new application can access the application. Which solution will meet these requirements?",
"options": [
"Set up a VPC peering connection for each VPC that needs access to the new application VPC. Update route tables in each VPC to enable connectivity.",
"Deploy a transit gateway in the account that hosts the new application. Share the transit gateway with each account that needs to connect to the application. Update route tables in the VPC that hosts the new application and in the transit gateway to enable connectivity.",
"Use an AWS PrivateLink endpoint service to make the new application accessible to other VPCs. Control access to the application by using an endpoint policy.",
"Use an Application Load Balancer (ALB) to expose the new application to the internet. Configure authentication and authorization processes to ensure that only specified VPCs can access the application."
],
"correct": [
"B"
]
},
{
"question": "A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead. What should a solutions architect do to meet these requirements?",
"options": [
"Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2.",
"Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu- west- 2.",
"Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC.",
"Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways of the VPCs in each Region to the Direct Connect gateway."
],
"correct": [
"D"
]
},
{
"question": "A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket A series ot data preparation jobs aggregate the data for reporting The data preparation jobs need to run at regular intervals in parallel A few jobs need to run in a specific order later The company wants to remove the operational overhead of job error handling retry logic, and state management Which solution will meet these requirements?",
"options": [
"Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket Invoke Other Lambda functions at regularly scheduled intervals",
"Use Amazon Athena to process the data Use Amazon EventBndge Scheduler to invoke Athena on a regular internal",
"Use AWS Glue DataBrew to process the data Use an AWS Step Functions state machine to run the DataBrew data preparation jobs",
"Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight."
],
"correct": [
"C"
]
},
{
"question": "A company uses Amazon FSx for NetApp ONTAP in its primary AWS Region for CIFS and NFS file shares. Applications that run on Amazon EC2 instances access the file shares The company needs a storage disaster recovery (OR) solution in a secondary Region. The data that is replicated in the secondary Region needs to be accessed by using the same protocols as the primary Region. Which solution will meet these requirements with the LEAST operational overhead?",
"options": [
"Create an AWS Lambda function lo copy the data to an Amazon S3 bucket. Replicate the S3 bucket (o the secondary Region.",
"Create a backup of the FSx for ONTAP volumes by using AWS Backup. Copy the volumes to the secondary Region. Create a new FSx for ONTAP instance from the backup.",
"Create an FSx for ONTAP instance in the secondary Region. Use NetApp SnapMirror to replicate data from the primary Region to the secondary Region.",
"Create an Amazon Elastic File System (Amazon EFS) volume. Migrate the current data to the volume.Replicate the volume to the secondary Region."
],
"correct": [
"C"
]
},
{
"question": "A company stores data in PDF format in an Amazon S3 bucket The company must follow a legal requirement to retain all new and existing data in Amazon S3 for 7 years. Which solution will meet these requirements with the LEAST operational overhead?",
"options": [
"Turn on the S3 Versionmg feature for the S3 bucket Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor authentication (MFA) delete for all S3 objects.",
"Turn on S3 Object Lock with governance retention mode for the S3 bucket Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance",
"Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance",
"Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance"
],
"correct": [
"C"
]
},
{
"question": "A company is designing an event-driven order processing system Each order requires multiple validation steps after the order is created. An independent AWS Lambda function performs each validation step. Each validation step is independent from the other validation steps Individual validation steps need only a subset of the order event information. The company wants to ensure that each validation step Lambda function has access to only the information from the order event that the function requires The components of the order processing system should be loosely coupled to accommodate future business changes. Which solution will meet these requirements?",
"options": [
"Create an Amazon Simple Queue Service (Amazon SQS> queue for each validation step. Create a new Lambda function to transform the order data to the format that each validation step requires and to publish the messages to the appropriate SQS queues Subscribe each validation step Lambda function to its corresponding SQS queue",
"Create an Amazon Simple Notification Service {Amazon SNS) topic. Subscribe the validation step Lambda functions to the SNS topic. Use message body filtering to send only the required data to each subscribed Lambda function.",
"Create an Amazon EventBridge event bus. Create an event rule for each validation step Configure the input transformer to send only the required data to each target validation step Lambda function.",
"Create an Amazon Simple Queue Service {Amazon SQS) queue Create a new Lambda function to subscribe to the SQS queue and to transform the order data to the format that each validation step requires. Use the new Lambda function to perform synchronous invocations of the validation step Lambda functions in parallel on separate threads."
],
"correct": [
"C"
]
},
{
"question": "A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public subnet. However, the company wants a solution that will reduce the data output costs. Which solution will meet these requirements MOST cost-effectively?",
"options": [
"Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network interface of this instance as the destination for all S3 traffic.",
"Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network interface of this instance as the destination for all S3 traffic.",
"Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic.",
"Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3 traffic."
],
"correct": [
"C"
]
},
{
"question": "A company is developing a social media application that must scale to meet demand spikes and handle ordered processes. Which AWS services meet these requirements?",
"options": [
"ECS with Fargate, RDS, and SQS for decoupling.",
"ECS with Fargate, RDS, and SNS for decoupling.",
"DynamoDB, Lambda, DynamoDB Streams, and Step Functions.",
"Elastic Beanstalk, RDS, and SNS for decoupling. Answer: A * Option A combines ECS with Fargate for scalability, RDS for relational data, and SQS for decoupling with message ordering (FIFO queues). * Option B uses SNS, which does not maintain message order. * Option C is suitable for serverless workflows but not relational data. * Option D relies on Elastic Beanstalk, which offers less flexibility for scaling. NO.11 A company is planning to migrate data to an Amazon S3 bucket The data must be encrypted at rest within the S3 bucket The encryption key must be rotated automatically every year. Which solution will meet these requirements with the LEAST operational overhead? A. Migrate the data to the S3 bucket. Use server-side encryption with Amazon S3 managed keys (SSE- S3). Use the built-in key rotation behavior of SSE-S3 encryption keys. B. Create an AWS Key Management Service (AWS KMS) customer managed key Enable automatic key rotation Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Migrate the data to the S3 bucket. C. Create an AWS Key Management Service (AWS KMS) customer managed key Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Migrate the data to the S3 bucket. Manually rotate the KMS key every year. D. Use customer key material to encrypt the data Migrate the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material Import the customer key material into the KMS key. Enable automatic key rotation."
],
"correct": [
"B"
]
},
{
"question": "A company is using AWS to design a web application that will process insurance quotes Users will request quotes from the application Quotes must be separated by quote type, must be responded to within 24 hours, and must not get lost The solution must maximize operational efficiency and must minimize maintenance. Which solution meets these requirements?",
"options": [
"Create multiple Amazon Kinesis data streams based on the quote type Configure the web application to send messages to the proper data stream Configure each backend group of application servers to use the Kinesis Client Library (KCL) to pool messages from its own data stream",
"Create an AWS Lambda function and an Amazon Simple Notification Service (Amazon SNS) topic for each quote type Subscribe the Lambda function to its associated SNS topic Configure the application to publish requests tot quotes to the appropriate SNS topic",
"Create a single Amazon Simple Notification Service (Amazon SNS) topic Subscribe Amazon Simple Queue Service (Amazon SQS) queues to the SNS topic Configure SNS message filtering to publish messages to the proper SQS queue based on the quote type Configure each backend application server to use its own SQS queue",
"Create multiple Amazon Kinesis Data Firehose delivery streams based on the quote type to deliver data streams to an Amazon Elasucsearch Service (Amazon ES) cluster Configure the application to send messages to the proper delivery stream Configure each backend group of application servers to search for the messages from Amazon ES and process them accordingly"
],
"correct": [
"C"
]
},
{
"question": "A company needs to optimize the cost of its Amazon EC2 Instances. The company also needs to change the type and family of its EC2 instances every 2-3 months. What should the company do lo meet these requirements?",
"options": [
"Purchase Partial Upfront Reserved Instances tor a 3-year term.",
"Purchase a No Upfront Compute Savings Plan for a 1-year term.",
"Purchase All Upfront Reserved Instances for a 1 -year term.",
"Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term."
],
"correct": [
"B"
]
},
{
"question": "A company is deploying a two-tier web application in a VPC. The web tier is using an Amazon EC2 Auto Scaling group with public subnets that span multiple Availability Zones. The database tier consists of an Amazon RDS for MySQL DB instance in separate private subnets. The web tier requires access to the database to retrieve product information. The web application is not working as intended. The web application reports that it cannot connect to the database. The database is confirmed to be up and running. All configurations for the network ACLs. security groups, and route tables are still in their default states. What should a solutions architect recommend to fix the application?",
"options": [
"Add an explicit rule to the private subnet's network ACL to allow traffic from the web tier's EC2 instances.",
"Add a route in the VPC route table to allow traffic between the web tier's EC2 instances and Ihe database tier.",
"Deploy the web tier's EC2 instances and the database tier's RDS instance into two separate VPCs. and configure VPC peering.",
"Add an inbound rule to the security group of the database tier's RDS instance to allow traffic from the web tier's security group."
],
"correct": [
"D"
]
},
{
"question": "A solutions architect is designing an application that helps users fill out and submit registration forms. The solutions architect plans to use a two-tier architecture that includes a web application server tier and a worker tier. The application needs to process submitted forms quickly. The application needs to process each form exactly once. The solution must ensure that no data is lost. Which solution will meet these requirements?",
"options": [
"Use an Amazon Simple Queue Service {Amazon SQS) FIFO queue between the web application server tier and the worker tier to store and forward form data.",
"Use an Amazon API Gateway HTTP API between the web application server tier and the worker tier to store and forward form data.",
"Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web application server tier and the worker tier to store and forward form data.",
"Use an AWS Step Functions workflow. Create a synchronous workflow between the web application server tier and the worker tier that stores and forwards form data."
],
"correct": [
"A"
]
},
{
"question": "A digital image processing company wants to migrate its on-premises monolithic application to the AWS Cloud. The company processes thousands of images and generates large files as part of the processing workflow. The company needs a solution to manage the growing number of image processing jobs. The solution must also reduce the manual tasks in the image processing workflow. The company does not want to manage the underlying infrastructure of the solution. Which solution will meet these requirements with the LEAST operational overhead?",
"options": [
"Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 Spot Instances to process the images. Configure Amazon Simple Queue Service (Amazon SQS) to orchestrate the workflow. Store the processed files in Amazon Elastic File System (Amazon EFS)",
"Use AWS Batch jobs to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon S3 bucket.",
"Use AWS Lambda functions and Amazon EC2 Spot Instances lo process the images. Store the processed files in Amazon FSx.",
"Deploy a group of Amazon EC2 instances to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon Elastic Block Store (Amazon EBS) volume."
],
"correct": [
"B"
]
},
{
"question": "A company has a web application in the AWS Cloud and wants to collect transaction data in real time. The company wants to prevent data duplication and does not want to manage infrastructure. The company wants to perform additional processing on the data after the data is collected. Which solution will meet these requirements?",
"options": [
"Configure an Amazon Simple Queue Service (Amazon SOS) FIFO queue. Configure an AWS Lambda function with an event source mapping for the FIFO queue to process the data.",
"Configure an Amazon Simple Queue Service (Amazon SQS) FIFO queue Use an AWS Batch job to remove duplicate data from the queue Configure an AWS Lambda function to process the data.",
"Use Amazon Kinesis Data Streams to send the Incoming transaction data to an AWS Batch job that removes duplicate data. Launch an Amazon EC2 instance that runs a custom script lo process the data. 11",
"Set up an AWS Step Functions state machine to send incoming transaction data to an AWS Lambda function to remove duplicate data. Launch an Amazon EC2 instance that runs a custom script to process the data."
],
"correct": [
"A"
]
},
{
"question": "A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team's account. The other company wants to poll the queue without giving up its own account permissions to do so. How should a solutions architect provide access to the SQS queue?",
"options": [
"Create an instance profile that provides the other company access to the SQS queue.",
"Create an 1AM policy that provides the other company access to the SQS queue.",
"Create an SQS access policy that provides the other company access to the SQS queue.",
"Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company access to the SQS queue."
],
"correct": [
"C"
]
},
{
"question": "A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of itsAWS costs. The company recently identified unusual spending. The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending. Which solution will meet these requirements?",
"options": [
"Use an AWS Budgets template to create a zero spend budget",
"Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console.",
"CreateAWS Pricing Calculator estimates for the current running workload pricing details_",
"Use Amazon CloudWatch to monitor costs and to identify unusual spending"
],
"correct": [
"B"
]
},
{
"question": "A company is building a data analysis platform on AWS by using AWS Lake Formation. The platform will ingest data from different sources such as Amazon S3 and Amazon RDS. The company needs a secure solution to prevent access to portions of the data that contain sensitive information.",
"options": [
"Create an IAM role that includes permissions to access Lake Formation tables.",
"Create data filters to implement row-level security and cell-level security.",
"Create an AWS Lambda function that removes sensitive information before Lake Formation ingests re data.",
"Create an AWS Lambda function that perodically Queries and removes sensitive information from Lake Formation tables."
],
"correct": [
"B"
]
},
{
"question": "A company has a data ingestion workflow that consists the following: * An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries * An AWS Lambda function to process the data and record metadata The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job. Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Select TWO.)",
"options": [
"Configure the Lambda function In multiple Availability Zones.",
"Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe It to me SNS topic.",
"Increase the CPU and memory that are allocated to the Lambda function.",
"Increase provisioned throughput for the Lambda function.",
"Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue"
],
"correct": [
"B",
"E"
]
},
{
"question": "A company is moving its data management application to AWS. The company wants to transition to an event- driven architecture. The architecture needs to the more distributed and to use serverless concepts whit performing the different aspects of the workflow. The company also wants to minimize operational overhead. Which solution will meet these requirements?",
"options": [
"Build out the workflow in AWS Glue Use AWS Glue to invoke AWS Lambda functions to process the workflow slaps",
"Build out the workflow in AWS Step Functions Deploy the application on Amazon EC2 Instances Use Step Functions to invoke the workflow steps on the EC2 instances",
"Build out the workflow in Amazon EventBridge. Use EventBridge to invoke AWS Lambda functions on a schedule to process the workflow steps.",
"Build out the workflow m AWS Step Functions Use Step Functions to create a stale machine Use the stale machine to invoke AWS Lambda functions to process the workflow steps"
],
"correct": [
"D"
]
},
{
"question": "A company runs an application that uses Amazon RDS for PostgreSQL The application receives traffic only on weekdays during business hours The company wants to optimize costs and reduce operational overhead based on this usage. Which solution will meet these requirements?",
"options": [
"Use the Instance Scheduler on AWS to configure start and stop schedules.",
"Turn off automatic backups. Create weekly manual snapshots of the database.",
"Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.",
"Purchase All Upfront reserved DB instances"
],
"correct": [
"A"
]
},
{
"question": "A company runs an application on EC2 instances that need access to RDS credentials stored in AWS Secrets Manager. Which solution meets this requirement?",
"options": [
"Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the role access to the secret.",
"Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the user access to the secret.",
"Create a resource-based policy for the secret. Use EC2 Instance Connect to access the secret.",
"Create an identity-based policy for the secret. Grant direct access to the EC2 instances. Answer: A * Option A uses an IAM role attached to the EC2 instance profile, enabling secure and automated access to Secrets Manager. This is the recommended approach. * Option B uses IAM users, which is less secure and harder to manage. * Option C is not practical for accessing secrets programmatically. * Option D violates best practices by granting direct access to the EC2 instance. NO.25 A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS. What should a solutions architect do to meet this requirement? A. Update the ALB's network ACL to accept only HTTPS traffic B. Create a rule that replaces the HTTP in the URL with HTTPS. C. Create a listener rule on the ALB to redirect HTTP traffic to HTTPS. D. Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI)."
],
"correct": [
"C"
]
},
{
"question": "A company has a website hosted on AWS. The website is behind an Application Load Balancer (ALB) that is configured to handle HTTP and HTTPS separately. The company wants to forward all requests to the website so that the requests will use HTTPS. What should a solutions architect do to meet this requirement?",
"options": [
"Update the ALB's network ACL to accept only HTTPS traffic",
"Create a rule that replaces the HTTP in the URL with HTTPS.",
"Create a listener rule on the ALB to redirect HTTP traffic to HTTPS.",
"Replace the ALB with a Network Load Balancer configured to use Server Name Indication (SNI)."
],
"correct": [
"C"
]
},
{
"question": "A company is migrating its databases to Amazon RDS for PostgreSQL. The company is migrating its applications to Amazon EC2 instances. The company wants to optimize costs for long- running workloads. Which solution will meet this requirement MOST cost-effectively?",
"options": [
"Use On-Demand Instances for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year Compute Savings Plan with the No Upfront option for the EC2 instances.",
"Purchase Reserved Instances for a 1 year term with the No Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the No Upfront option for the EC2 instances.",
"Purchase Reserved Instances for a 1 year term with the Partial Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the Partial Upfront option for the EC2 instances.",
"Purchase Reserved Instances for a 3 year term with the All Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 3 year EC2 Instance Savings Plan with the All Upfront option for the EC2 instances. Answer: D NO.27 A company hosts its core network services, including directory services and DNS, in its on- premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services. What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead? A. Create a DX connection in each new account. Route the network traffic to the on-premises servers. B. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on- premises servers. C. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers. D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers."
],
"correct": [
"D"
]
},
{
"question": "A company hosts its core network services, including directory services and DNS, in its on- premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services. What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?",
"options": [
"Create a DX connection in each new account. Route the network traffic to the on-premises servers.",
"Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on- premises servers.",
"Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.",
"Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers."
],
"correct": [
"D"
]
},
{
"question": "A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service for development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes. Which solution will meet these requirements MOST cost-effectively?",
"options": [
"Copy the records from the application into an Amazon Redshift cluster.",
"Copy the records from the application into an Amazon Neptune cluster.",
"Copy the records from the application into an Amazon Timestream database.",
"Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger."
],
"correct": [
"D"
]
},
{
"question": "A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users also can post photos and videos from inside the app. Users access content often in the first minutes after the content is posted. New content quickly replaces older content, and then the older content disappears. The local nature of the news means that users consume 90% of the content within the AWS Region where it is uploaded. Which solution will optimize the user experience by providing the LOWEST latency for content uploads?",
"options": [
"Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.",
"Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.",
"Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon S3.",
"Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of Amazon CloudFront."
],
"correct": [
"B"
]
},
{
"question": "A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job. Which solution will meet these requirements?",
"options": [
"Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.",
"Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.",
"Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.",
"Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts."
],
"correct": [
"B"
]
},
{
"question": "A company needs a solution to automate email ingestion. The company needs to automatically parse email messages, look for email attachments, and save any attachments to an Amazon S3 bucket in near real time. Email volume varies significantly from day to day. Which solution will meet these requirements?",
"options": [
"Set up email receiving in Amazon Simple Email Service {Amazon SES). Create a rule set and a receipt rule. Create an AWS Lambda function that Amazon SES can invoke to process the email bodies and attachments.",
"Set up email content filtering in Amazon Simple Email Service (Amazon SES). Create a content filtering rule based on sender, recipient, message body, and attachments.",
"Set up email receiving in Amazon Simple Email Service (Amazon SES). Configure Amazon SES and S3 Event Notifications to process the email bodies and attachments.",
"Create an AWS Lambda function to process the email bodies and attachments. Use Amazon EventBridge to invoke the Lambda function. Configure an EventBridge rule to listen for incoming emails."
],
"correct": [
"A"
]
},
{
"question": "A company recently migrated a monolithic application to an Amazon EC2 instance and Amazon RDS. The application has tightly coupled modules. The existing design of the application gives the application the ability to run on only a single EC2 instance. The company has noticed high CPU utilization on the EC2 instance during peak usage times. The high CPU utilization corresponds to degraded performance on Amazon RDS for read requests. The company wants to reduce the high CPU utilization and improve read request performance. Which solution will meet these requirements?",
"options": [
"Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Configure an RDS read replica for read requests.",
"Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Add an RDS read replica and redirect all read /write traffic to the replica.",
"Configure an Auto Scaling group with a minimum size of 1 and maximum size of 2. Resize the RDS DB instance to an instance type that has more CPU capacity.",
"Resize the EC2 instance to an EC2 instance type that has more CPU capacity Configure an Auto Scaling group with a minimum and maximum size of 1. Resize the RDS DB instance to an instance type that has more CPU capacity."
],
"correct": [
"A"
]
},
{
"question": "A company is building an ecommerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API to process. The company wants to ensure that orders are processed in the order that they are received. Which solution will meet these requirements?",
"options": [
"Use an API Gateway integration to publish a message to an Amazon Simple Notification Service (Amazon SNS) topic when the application receives an order. Subscribe an AWS Lambda function to the topic to perform processing.",
"Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) FIFO queue when the application receives an order. Configure the SQS FIFO queue to invoke an AWS Lambda function for processing.",
"Use an API Gateway authorizer to block any requests while the application processes an order.",
"Use an API Gateway integration to send a message to an Amazon Simple Queue Service (Amazon SQS) standard queue when the application receives an order. Configure the SQS standard queue to invoke an AWS Lambda function for processing."
],
"correct": [
"B"
]
},
{
"question": "A company is subscribed to the AWS Business Support plan. Compliance rules require the company to check on AWS infrastructure health before deployments can proceed. The company needs a programmatic and automated way to check on infrastructure health at the beginning of new deployments. Which solution will meet these requirements?",
"options": [
"Use the AWS Trusted Advisor API at the start of each deployment. Pause all new deployments if the API returns any issues.",
"Use the AWS Health API at the start of each deployment. Pause all new deployments if the API returns any issues.",
"Query the AWS Support API at the start of each deployment. Pause all new deployments if the API returns any open issues.",
"Send an API call to each workload ahead of deployment. Pause the deployments if the API call 22 fails."
],
"correct": [
"B"
]
},
{
"question": "An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS. The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead. Which solution will meet these requirements?",
"options": [
"Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.",
"Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.",
"Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.",
"Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access."
],
"correct": [
"C"
]
}
]