Free Sample — 15 Practice Questions
Preview 15 of 1003 questions from the SAA-C03 exam.
Try before you buy — purchase the full study guide for all 1003 questions with answers and explanations.
Question 171
A company hosts an application on Amazon EC2 instances that run in a single Availability Zone. The application is accessible by using the transport layer of the Open Systems Interconnection (OSI) model. The company needs the application architecture to have high availability.
Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)
A. Configure new EC2 instances in a different Availability Zone. Use Amazon Route 53 to route traffic to all instances.
B. Configure a Network Load Balancer in front of the EC2 instances.
C. Configure a Network Load Balancer for TCP traffic to the instances. Configure an Application Load Balancer for HTTP and HTTPS traffic to the instances.
D. Create an Auto Scaling group for the EC2 instances. Configure the Auto Scaling group to use multiple Availability Zones. Configure the Auto Scaling group to run application health checks on the instances.
E. Create an Amazon CloudWatch alarm. Configure the alarm to restart EC2 instances that transition to a stopped state.
Correct Answer: B, D
Explanation:
High availability requires eliminating the single Availability Zone as a point of failure. Using an Auto Scaling group across multiple AZs (D) provides instance-level and AZ-level resilience in a cost-effective, managed way. Because the application is accessed at the transport layer (Layer 4/TCP), a Network Load Balancer (B) is the appropriate and lowest-cost load balancer to distribute TCP traffic and support high availability. An ALB is unnecessary for Layer 4 traffic, and Route 53 alone does not provide load balancing or health-based traffic distribution at the instance level.
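The two correct steps map to a small set of API parameters. A minimal boto3-style sketch follows; the names, subnet IDs, and ARN are hypothetical placeholders, and the parameter dicts are built but not sent, so no AWS call is made:

```python
# Sketch of options B and D as request parameters for boto3's "elbv2" and
# "autoscaling" clients. All identifiers below are hypothetical placeholders.

def nlb_params(subnet_ids):
    """Parameters for elbv2.create_load_balancer: a Layer 4 Network Load Balancer."""
    return {
        "Name": "app-nlb",
        "Type": "network",            # Layer 4 (TCP) load balancing
        "Subnets": subnet_ids,        # one subnet per Availability Zone
        "Scheme": "internet-facing",
    }

def asg_params(subnet_ids, target_group_arn):
    """Parameters for autoscaling.create_auto_scaling_group spanning multiple AZs."""
    return {
        "AutoScalingGroupName": "app-asg",
        "MinSize": 2,
        "MaxSize": 4,
        "VPCZoneIdentifier": ",".join(subnet_ids),  # comma-separated multi-AZ subnets
        "TargetGroupARNs": [target_group_arn],
        "HealthCheckType": "ELB",     # replace instances that fail NLB health checks
    }

subnets = ["subnet-aaa1", "subnet-bbb2"]  # two different Availability Zones
print(nlb_params(subnets)["Type"])
print(asg_params(subnets, "arn:aws:elasticloadbalancing:...:targetgroup/app")["HealthCheckType"])
```

Setting `HealthCheckType` to `ELB` is what ties B and D together: the Auto Scaling group replaces instances that the load balancer marks unhealthy.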
Question 6
A company's image-hosting website gives users around the world the ability to upload, view, and download images from their mobile devices. The company currently hosts the static website in an Amazon S3 bucket.
Because of the website's growing popularity, the website's performance has decreased. Users have reported latency issues when they upload and download images.
The company must improve the performance of the website.
Which solution will meet these requirements with the LEAST implementation effort?
A. Configure an Amazon CloudFront distribution for the S3 bucket to improve the download performance. Enable S3 Transfer Acceleration to improve the upload performance.
B. Configure Amazon EC2 instances of the right sizes in multiple AWS Regions. Migrate the application to the EC2 instances. Use an Application Load Balancer to distribute the website traffic equally among the EC2 instances. Configure AWS Global Accelerator to address global demand with low latency.
C. Configure an Amazon CloudFront distribution that uses the S3 bucket as an origin to improve the download performance. Configure the application to use CloudFront to upload images to improve the upload performance. Create S3 buckets in multiple AWS Regions. Configure replication rules for the buckets to replicate users' data based on the users' location. Redirect downloads to the S3 bucket that is closest to each user's location.
D. Configure AWS Global Accelerator for the S3 bucket to improve network performance. Create an endpoint for the application to use Global Accelerator instead of the S3 bucket.
Correct Answer: A
Explanation:
Amazon CloudFront in front of the S3 bucket caches and serves images from edge locations, significantly reducing download latency for global users. Enabling S3 Transfer Acceleration leverages the same edge network to optimize upload paths into S3, improving upload performance. This solution directly addresses both upload and download latency with minimal architectural change and the least implementation effort compared to multi-Region replication, EC2 migrations, or Global Accelerator setups.
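Both halves of option A are small configuration changes. A sketch of the relevant pieces, assuming a hypothetical bucket name; the first function builds the `put_bucket_accelerate_configuration` parameters and the second shows the dedicated accelerate endpoint uploads must target:

```python
def accelerate_config_params(bucket):
    """Parameters for s3.put_bucket_accelerate_configuration (bucket owner, one-time)."""
    return {
        "Bucket": bucket,
        "AccelerateConfiguration": {"Status": "Enabled"},
    }

def accelerate_upload_url(bucket, key):
    """Uploads must use the s3-accelerate endpoint instead of the Regional endpoint
    to be routed through CloudFront edge locations."""
    return f"https://{bucket}.s3-accelerate.amazonaws.com/{key}"

# "image-site-bucket" is a hypothetical placeholder.
print(accelerate_upload_url("image-site-bucket", "photos/cat.jpg"))
```

Downloads need no client change at all once the CloudFront distribution fronts the bucket; only the upload path switches endpoints.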
Question 846
An ecommerce company has an order-processing application that uses Amazon API Gateway and an AWS Lambda function. The application stores data in an Amazon Aurora PostgreSQL database. During a recent sales event, a sudden surge in customer orders occurred. Some customers experienced timeouts, and the application did not process the orders of those customers.
A solutions architect determined that the CPU utilization and memory utilization were high on the database because of a large number of open connections. The solutions architect needs to prevent the timeout errors while making the least possible changes to the application.
Which solution will meet these requirements?
A. Configure provisioned concurrency for the Lambda function. Modify the database to be a global database in multiple AWS Regions.
B. Use Amazon RDS Proxy to create a proxy for the database. Modify the Lambda function to use the RDS Proxy endpoint instead of the database endpoint.
C. Create a read replica for the database in a different AWS Region. Use query string parameters in API Gateway to route traffic to the read replica.
D. Migrate the data from Aurora PostgreSQL to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS). Modify the Lambda function to use the DynamoDB table.
Correct Answer: B
Explanation:
The timeouts are caused by too many concurrent database connections from Lambda, leading to high CPU and memory utilization on Aurora PostgreSQL. Amazon RDS Proxy provides connection pooling and reuses existing database connections, which is specifically designed for serverless workloads like Lambda. Switching the Lambda function to use the RDS Proxy endpoint requires minimal application changes and directly addresses the root cause. Other options either do not solve the connection exhaustion problem or require significant architectural changes.
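The "least possible changes" claim is concrete: only the hostname the Lambda function connects to changes. A sketch with hypothetical endpoint names; the dict is the kind of settings a handler would pass to a PostgreSQL driver:

```python
# The application change is limited to the connection endpoint: point the Lambda
# function at the RDS Proxy hostname instead of the Aurora cluster hostname.
# Both hostnames below are hypothetical placeholders.

DB_ENDPOINT = "orders.cluster-abc123.us-east-1.rds.amazonaws.com"
PROXY_ENDPOINT = "orders-proxy.proxy-abc123.us-east-1.rds.amazonaws.com"

def connection_config(use_proxy=True):
    """Connection settings a Lambda handler might pass to a PostgreSQL driver."""
    return {
        "host": PROXY_ENDPOINT if use_proxy else DB_ENDPOINT,  # the only line that changes
        "port": 5432,
        "dbname": "orders",
        # RDS Proxy pools and reuses connections on the database side, so bursts
        # of concurrent Lambda invocations no longer exhaust Aurora connections.
    }

print(connection_config()["host"])
```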
Question 493
A company wants to add its existing AWS usage cost to its operation cost dashboard. A solutions architect needs to recommend a solution that will give the company access to its usage cost programmatically. The company must be able to access cost data for the current year and forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
Correct Answer: A
Explanation:
The requirement is to access AWS usage and cost data programmatically, include historical data for the current year, and generate a 12‑month cost forecast with the least operational overhead. AWS Cost Explorer API directly fulfills all of these needs: it provides programmatic access, supports historical cost and usage queries, and includes built‑in forecasting capabilities up to 12 months ahead. Using the API avoids managing file downloads, transfers, or email workflows, resulting in minimal operational overhead. The other options rely on manual reports, file handling, or notification mechanisms and do not provide native forecasting through an API.
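Option A explicitly mentions pagination, which with the Cost Explorer API means following `NextPageToken` until it is absent. A runnable sketch of that loop; a stub client stands in for boto3's `ce` client so the pagination logic itself executes (the forecast side would use `get_cost_forecast` with the same client):

```python
# Sketch of paginated Cost Explorer queries. `client` is any object exposing a
# get_cost_and_usage method (boto3.client("ce") in practice); a stub returning
# two pages is used here so the loop is runnable without AWS credentials.

def iter_cost_and_usage(client, start, end):
    """Yield result pages, following NextPageToken until exhausted."""
    params = {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
    }
    while True:
        page = client.get_cost_and_usage(**params)
        yield page
        token = page.get("NextPageToken")
        if not token:
            break
        params["NextPageToken"] = token

class StubCE:
    """Stand-in for boto3.client('ce') that returns two pages."""
    def __init__(self):
        self.calls = 0

    def get_cost_and_usage(self, **kwargs):
        self.calls += 1
        if self.calls == 1:
            return {"ResultsByTime": [], "NextPageToken": "t1"}
        return {"ResultsByTime": []}

pages = list(iter_cost_and_usage(StubCE(), "2024-01-01", "2024-12-31"))
print(len(pages))  # 2
```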
Question 993
A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance.
Which solution meets these requirements MOST cost-effectively?
A. Stop the DB instance when tests are completed. Restart the DB instance when required.
B. Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
C. Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
D. Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.
Correct Answer: C
Explanation:
The tests run only 48 hours per month, so the biggest cost to eliminate is ongoing DB instance compute. Stopping an RDS instance still incurs storage (and possibly provisioned IOPS) costs and is limited in duration, so savings are modest. Auto Scaling is not applicable to RDS instances, and downsizing the instance would reduce compute/memory attributes, which is not allowed. Creating a snapshot, terminating the DB instance, and restoring it when needed removes nearly all compute costs while retaining the same instance class on restore, making it the most cost‑effective option.
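The monthly cycle in option C is three RDS API calls. A parameter sketch for boto3's `rds` client; the identifiers and instance class are hypothetical placeholders, and the dicts are built but never sent:

```python
# Parameter sketch for the monthly snapshot / terminate / restore cycle.
# Identifiers and the instance class below are hypothetical placeholders.

def teardown_params(instance_id, snapshot_id):
    """After the 48-hour test window: snapshot the data, then delete the instance."""
    return [
        ("create_db_snapshot", {"DBInstanceIdentifier": instance_id,
                                "DBSnapshotIdentifier": snapshot_id}),
        ("delete_db_instance", {"DBInstanceIdentifier": instance_id,
                                "SkipFinalSnapshot": True}),  # snapshot already taken above
    ]

def restore_params(instance_id, snapshot_id):
    """Before the next window: restore with the same compute/memory class."""
    return ("restore_db_instance_from_db_snapshot",
            {"DBInstanceIdentifier": instance_id,
             "DBSnapshotIdentifier": snapshot_id,
             "DBInstanceClass": "db.m5.xlarge"})  # unchanged instance class

print([name for name, _ in teardown_params("test-db", "test-db-2024-06")])
```

Between windows the company pays only for snapshot storage, not for instance hours or allocated instance storage.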
Question 536
A company wants to securely exchange data between its Salesforce software as a service (SaaS) account and Amazon S3. The company must encrypt the data at rest by using AWS Key Management Service (AWS KMS) customer managed keys (CMKs). The company must also encrypt the data in transit. The company has enabled API access for the Salesforce account.
Which solution will meet these requirements?
A. Create AWS Lambda functions to transfer the data securely from Salesforce to Amazon S3.
B. Create an AWS Step Functions workflow. Define the task to transfer the data securely from Salesforce to Amazon S3.
C. Create Amazon AppFlow flows to transfer the data securely from Salesforce to Amazon S3.
D. Create a custom connector for Salesforce to transfer the data securely from Salesforce to Amazon S3.
Correct Answer: C
Explanation:
Amazon AppFlow is a fully managed service designed to securely transfer data between SaaS applications like Salesforce and AWS services such as Amazon S3. It supports encryption in transit using HTTPS/TLS and encryption at rest in S3 using AWS KMS customer managed keys, meeting the security requirements without custom code.
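The KMS requirement surfaces directly in the AppFlow API as the `kmsArn` field on `create_flow`. A parameter sketch; the flow name, connector profile, Salesforce object, bucket, and key ARN are all hypothetical placeholders, and the field-mapping tasks are elided:

```python
def appflow_params(kms_key_arn, bucket):
    """Parameter sketch for appflow.create_flow: Salesforce -> S3, encrypted at
    rest under a customer managed KMS key. All names are placeholders."""
    return {
        "flowName": "salesforce-to-s3",
        "kmsArn": kms_key_arn,  # customer managed key, not the AWS managed default
        "triggerConfig": {"triggerType": "OnDemand"},
        "sourceFlowConfig": {
            "connectorType": "Salesforce",
            "connectorProfileName": "sf-profile",
            "sourceConnectorProperties": {"Salesforce": {"object": "Account"}},
        },
        "destinationFlowConfigList": [{
            "connectorType": "S3",
            "destinationConnectorProperties": {"S3": {"bucketName": bucket}},
        }],
        "tasks": [],  # field mappings elided for brevity
    }

p = appflow_params("arn:aws:kms:us-east-1:111122223333:key/hypothetical",
                   "exchange-bucket")
print(p["kmsArn"])
```

Encryption in transit needs no extra configuration: AppFlow transfers over TLS.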
Question 96
A company hosts an ecommerce application that stores all data in a single Amazon RDS for MySQL DB instance that is fully managed by AWS. The company needs to mitigate the risk of a single point of failure.
Which solution will meet these requirements with the LEAST implementation effort?
A. Modify the RDS DB instance to use a Multi-AZ deployment. Apply the changes during the next maintenance window.
B. Migrate the current database to a new Amazon DynamoDB Multi-AZ deployment. Use AWS Database Migration Service (AWS DMS) with a heterogeneous migration strategy to migrate the current RDS DB instance to DynamoDB tables.
C. Create a new RDS DB instance in a Multi-AZ deployment. Manually restore the data from the existing RDS DB instance from the most recent snapshot.
D. Configure the DB instance in an Amazon EC2 Auto Scaling group with a minimum group size of three. Use Amazon Route 53 simple routing to distribute requests to all DB instances.
Correct Answer: A
Explanation:
A Multi-AZ deployment for Amazon RDS provides synchronous replication to a standby instance in another Availability Zone with automatic failover. It directly mitigates the single point of failure while keeping the same database engine and configuration, requiring only a modification of the existing instance. This is the least implementation effort compared to redesigning the data layer, migrating to a different database, or manually managing multiple instances.
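"Only a modification of the existing instance" amounts to a single API call. A parameter sketch for boto3's `rds.modify_db_instance`, with a hypothetical instance identifier:

```python
def multi_az_params(instance_id):
    """Parameters for rds.modify_db_instance enabling Multi-AZ, deferred to the
    next maintenance window as option A specifies."""
    return {
        "DBInstanceIdentifier": instance_id,
        "MultiAZ": True,
        "ApplyImmediately": False,  # apply during the next maintenance window
    }

print(multi_az_params("ecommerce-db"))
```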
Question 119
A company is migrating five on-premises applications to VPCs in the AWS Cloud. Each application is currently deployed in isolated virtual networks on premises and should be deployed similarly in the AWS Cloud. The applications need to reach a shared services VPC. All the applications must be able to communicate with each other.
If the migration is successful, the company will repeat the migration process for more than 100 applications.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Deploy software VPN tunnels between the application VPCs and the shared services VPC. Add routes between the application VPCs in their subnets to the shared services VPC.
B. Deploy VPC peering connections between the application VPCs and the shared services VPC. Add routes between the application VPCs in their subnets to the shared services VPC through the peering connection.
C. Deploy an AWS Direct Connect connection between the application VPCs and the shared services VPC. Add routes from the application VPCs in their subnets to the shared services VPC and the application VPCs. Add routes from the shared services VPC subnets to the application VPCs.
D. Deploy a transit gateway with associations between the transit gateway and the application VPCs and the shared services VPC. Add routes in the application VPC subnets to the other application VPCs and to the shared services VPC through the transit gateway.
Correct Answer: D
Explanation:
The requirements call for isolated VPCs per application, full inter-VPC communication, access to a shared services VPC, and the ability to scale beyond 100 applications with the least administrative overhead. AWS Transit Gateway provides a hub-and-spoke model that centralizes routing and connectivity, allowing all application VPCs and the shared services VPC to communicate through a single managed gateway. This avoids the exponential growth and management complexity of VPC peering or VPN tunnels and is far more appropriate than Direct Connect for VPC-to-VPC connectivity within AWS.
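The "exponential growth" of peering is easy to quantify: full any-to-any peering needs a connection per VPC pair, while a transit gateway needs one attachment per VPC. A small runnable comparison:

```python
# Why a transit gateway scales: connection counts for full any-to-any
# connectivity among n VPCs.

def peering_connections(n_vpcs):
    """Peering connections for a full mesh: n choose 2."""
    return n_vpcs * (n_vpcs - 1) // 2

def tgw_attachments(n_vpcs):
    """Transit gateway attachments: one per VPC (shared services VPC included)."""
    return n_vpcs

# 100 application VPCs plus 1 shared services VPC:
print(peering_connections(101), tgw_attachments(101))  # 5050 vs 101
```

Managing 101 attachments with centralized route tables, instead of 5,050 peering connections each with its own routes, is the administrative-overhead difference the question is testing.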
Question 274
A company’s website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from accessing the website.
What should a solutions architect do to protect the application?
A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address.
B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address.
C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
Correct Answer: B
Explanation:
AWS WAF is already in use with the CloudFront distribution and is the correct layer to block malicious client IP addresses. Adding an IP set (IP match condition) to the WAF web ACL will block the malicious IP at the edge before requests reach CloudFront, the ALB, or EC2 instances. CloudFront does not use network ACLs, security groups do not support deny rules, and blocking at the EC2 or subnet level is less effective and not appropriate for traffic coming through CloudFront.
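The block is implemented as a WAF IP set referenced by a blocking rule in the web ACL. A parameter sketch for `wafv2.create_ip_set`; the set name and IP address are hypothetical placeholders:

```python
def ip_set_params(malicious_ip):
    """Parameter sketch for wafv2.create_ip_set. A web ACL rule with a Block
    action would then reference this set via an IPSetReferenceStatement."""
    return {
        "Name": "blocked-ips",
        "Scope": "CLOUDFRONT",  # web ACLs attached to CloudFront use this scope
        "IPAddressVersion": "IPV4",
        "Addresses": [f"{malicious_ip}/32"],  # WAF IP sets take CIDR notation
    }

p = ip_set_params("203.0.113.17")  # documentation-range example address
print(p["Addresses"])
```

Note the `CLOUDFRONT` scope: because the web ACL sits on the distribution, the block is enforced at the edge, before traffic ever reaches the ALB.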
Question 737
A company has hundreds of Amazon EC2 Linux-based instances in the AWS Cloud. Systems administrators have used shared SSH keys to manage the instances. After a recent audit, the company’s security team is mandating the removal of all shared keys. A solutions architect must design a solution that provides secure access to the EC2 instances.
Which solution will meet this requirement with the LEAST amount of administrative overhead?
A. Use AWS Systems Manager Session Manager to connect to the EC2 instances.
B. Use AWS Security Token Service (AWS STS) to generate one-time SSH keys on demand.
C. Allow shared SSH access to a set of bastion instances. Configure all other instances to allow only SSH access from the bastion instances.
D. Use an Amazon Cognito custom authorizer to authenticate users. Invoke an AWS Lambda function to generate a temporary SSH key.
Correct Answer: A
Explanation:
AWS Systems Manager Session Manager provides secure, auditable access to EC2 instances without SSH keys, inbound ports, or bastion hosts. Access is controlled via IAM, eliminating shared keys and minimizing administrative overhead compared to custom key generation or bastion-based approaches.
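Access control moves from SSH keys to IAM. A minimal policy sketch, assuming a hypothetical `Team` tag used to scope which instances an administrator may reach:

```python
import json

# Minimal IAM policy sketch allowing Session Manager access to tagged instances.
# The tag key/value pair is a hypothetical example of scoping access.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:StartSession",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ssm:resourceTag/Team": "platform"}},
    }],
}

# Operators then connect with no SSH key, no bastion, and no open port 22:
#   aws ssm start-session --target i-0123456789abcdef0
print(json.dumps(policy, indent=2))
```

The instances need the SSM Agent (preinstalled on most Amazon-provided AMIs) and an instance profile permitting Systems Manager registration.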
Question 924
A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?
A. Configure the Requester Pays feature on the company's S3 bucket.
B. Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.
C. Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket.
D. Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.
Correct Answer: A
Explanation:
The requirement is to keep the company's data transfer costs as low as possible while sharing a large and growing S3 dataset with another organization. Enabling Requester Pays shifts S3 request and data transfer charges to the requester (the European marketing firm), resulting in $0 data transfer cost for the bucket owner. Cross-Region Replication or syncing would cause the company to pay ongoing transfer and additional storage costs, and cross-account access alone does not change who pays for data transfer.
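Requester Pays has two sides: the owner enables it once on the bucket, and every requester must explicitly acknowledge the charges on each request. A parameter sketch with hypothetical bucket and key names:

```python
def request_payment_params(bucket):
    """Bucket owner side: parameters for s3.put_bucket_request_payment."""
    return {
        "Bucket": bucket,
        "RequestPaymentConfiguration": {"Payer": "Requester"},
    }

def requester_get_params(bucket, key):
    """Requester side: every request must carry RequestPayer='requester',
    otherwise S3 rejects it with a 403."""
    return {"Bucket": bucket, "Key": key, "RequestPayer": "requester"}

print(requester_get_params("survey-data", "us/2023.csv")["RequestPayer"])
```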
Question 451
A company runs an application on AWS. The application receives inconsistent amounts of usage. The application uses AWS Direct Connect to connect to an on-premises MySQL-compatible database. The on-premises database consistently uses a minimum of 2 GiB of memory.
The company wants to migrate the on-premises database to a managed AWS service. The company wants to use auto scaling capabilities to manage unexpected workload increases.
Which solution will meet these requirements with the LEAST administrative overhead?
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.
Correct Answer: C
Explanation:
The database must be MySQL-compatible, handle unpredictable workloads, and scale automatically with minimal administrative effort. Amazon Aurora Serverless v2 is MySQL-compatible and provides fine-grained, automatic scaling of compute capacity using Aurora Capacity Units (ACUs) without manual instance management. This directly addresses unexpected workload increases and minimizes operational overhead. DynamoDB is not MySQL-compatible, standard RDS for MySQL does not support automatic compute scaling, and provisioned Aurora requires manual capacity management.
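The 1-ACU minimum from the question maps onto a cluster parameter sketch; an ACU corresponds to about 2 GiB of memory, matching the on-premises baseline. Identifiers and the maximum capacity below are hypothetical placeholders:

```python
def serverless_v2_params():
    """Parameter sketch for rds.create_db_cluster with Aurora Serverless v2
    scaling. The cluster identifier and MaxCapacity are assumed values."""
    return {
        "DBClusterIdentifier": "app-cluster",
        "Engine": "aurora-mysql",     # MySQL-compatible edition
        "EngineMode": "provisioned",  # Serverless v2 uses provisioned mode
                                      # plus a scaling configuration
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": 1.0,       # 1 ACU, about 2 GiB of memory
            "MaxCapacity": 16.0,      # headroom for unexpected spikes (assumed)
        },
    }

p = serverless_v2_params()
print(p["ServerlessV2ScalingConfiguration"]["MinCapacity"])
```

Capacity then scales automatically between the two bounds with no instance-class changes to manage.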
Question 898
A company wants to build a scalable key management infrastructure to support developers who need to encrypt data in their applications.
What should a solutions architect do to reduce the operational burden?
A. Use multi-factor authentication (MFA) to protect the encryption keys.
B. Use AWS Key Management Service (AWS KMS) to protect the encryption keys.
C. Use AWS Certificate Manager (ACM) to create, store, and assign the encryption keys.
D. Use an IAM policy to limit the scope of users who have access permissions to protect the encryption keys.
Correct Answer: B
Explanation:
The goal is to build a scalable key management infrastructure while reducing operational burden. AWS Key Management Service (AWS KMS) is a fully managed service that handles key creation, storage, rotation, auditing, and secure hardware (HSMs) automatically. It integrates natively with many AWS services and scales without customer-managed infrastructure, which directly reduces operational overhead. The other options improve security or access control but do not provide a managed, scalable key management solution.
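From a developer's perspective, KMS reduces application-level encryption to an envelope-encryption flow around `generate_data_key`. A parameter sketch, with a hypothetical key alias:

```python
def data_key_params(cmk_id):
    """Parameters for kms.generate_data_key. KMS returns both a plaintext data
    key for local encryption and the same key encrypted under the CMK."""
    return {"KeyId": cmk_id, "KeySpec": "AES_256"}

# Envelope encryption flow, the part KMS manages so developers do not have to:
#  1. generate_data_key -> {Plaintext, CiphertextBlob}
#  2. encrypt the application data locally with Plaintext, then discard it
#  3. store CiphertextBlob alongside the data; call kms.decrypt later to
#     recover the data key
print(data_key_params("alias/app-data"))  # "alias/app-data" is a placeholder
```

Key storage in HSMs, rotation, and CloudTrail auditing all come with the service rather than with infrastructure the team must run.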
Question 192
A company uses Salesforce. The company needs to load existing data and ongoing data changes from Salesforce to Amazon Redshift for analysis. The company does not want the data to travel over the public internet.
Which solution will meet these requirements with the LEAST development effort?
A. Establish a VPN connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.
B. Establish an AWS Direct Connect connection from the VPC to Salesforce. Use AWS Glue DataBrew to transfer data.
C. Create an AWS PrivateLink connection in the VPC to Salesforce. Use Amazon AppFlow to transfer data.
D. Create a VPC peering connection to Salesforce. Use Amazon AppFlow to transfer data.
Correct Answer: C
Explanation:
Amazon AppFlow is a fully managed service that natively supports Salesforce as a source and Amazon Redshift as a destination, handling both initial loads and ongoing data changes with minimal development effort. When combined with AWS PrivateLink, the data transfer remains on the AWS private network and does not traverse the public internet. VPN and Direct Connect require more setup and do not reduce development effort, AWS Glue DataBrew is not designed for continuous change data capture, and VPC peering cannot be used with an external SaaS provider like Salesforce.
Question 50
A global ecommerce company uses a monolithic architecture. The company needs a solution to manage the increasing volume of product data. The solution must be scalable and have a modular service architecture. The company needs to maintain its structured database schemas. The company also needs a storage solution to store product data and product images.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use an Amazon EC2 instance in an Auto Scaling group to deploy a containerized application. Use an Application Load Balancer to distribute web traffic. Use an Amazon RDS DB instance to store product data and product images.
B. Use AWS Lambda functions to manage the existing monolithic application. Use Amazon DynamoDB to store product data and product images. Use Amazon Simple Notification Service (Amazon SNS) for event-driven communication between the Lambda functions.
C. Use Amazon Elastic Kubernetes Service (Amazon EKS) with an Amazon EC2 deployment to deploy a containerized application. Use an Amazon Aurora cluster to store the product data. Use AWS Step Functions to manage workflows. Store the product images in Amazon S3 Glacier Deep Archive.
D. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate to deploy a containerized application. Use Amazon RDS with a Multi-AZ deployment to store the product data. Store the product images in an Amazon S3 bucket.
Correct Answer: D
Explanation:
The requirements call for scalability, a modular/service-based architecture, retention of structured schemas, storage for both product data and images, and the least operational overhead. Amazon ECS with AWS Fargate removes server and cluster management while supporting containerized, modular services. Amazon RDS preserves structured relational schemas with managed operations (Multi-AZ for availability), and Amazon S3 is the optimal, low-overhead service for storing product images. Other options introduce higher operational complexity (EC2, EKS) or do not meet schema or use-case needs (DynamoDB, Glacier).
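A Fargate service is defined mostly by its task definition. A parameter sketch for `ecs.register_task_definition`; the family name, image URI, role ARN, and sizing are hypothetical placeholders (512 CPU units with 1024 MiB is one valid Fargate pairing):

```python
def task_definition_params(image_uri):
    """Parameter sketch for ecs.register_task_definition targeting Fargate.
    All identifiers below are hypothetical placeholders."""
    return {
        "family": "product-service",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",  # required for Fargate tasks
        "cpu": "512",             # Fargate sizes are strings (CPU units)
        "memory": "1024",         # MiB; must pair with a valid cpu value
        "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
        "containerDefinitions": [{
            "name": "web",
            "image": image_uri,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        }],
    }

p = task_definition_params(
    "111122223333.dkr.ecr.us-east-1.amazonaws.com/product:latest")
print(p["requiresCompatibilities"])
```

With Fargate there are no instances or clusters to patch or scale; the service scales task counts, RDS handles the relational data, and S3 holds the images.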