Amazon

SCS-C02 — AWS Certified Security - Specialty Study Guide

306 practice questions · Updated 2026-02-20 · $19 (70% off) · HTML + PDF formats

SCS-C02 Exam Overview

Prepare for the Amazon SCS-C02 certification exam with our comprehensive study guide. This study material contains 306 practice questions sourced from real exams and expert-verified for accuracy. Each question includes the correct answer and a detailed explanation to help you understand the material thoroughly.

The SCS-C02 exam — AWS Certified Security - Specialty — is offered by Amazon. Our study materials were last updated on 2026-02-20 to reflect the most recent exam objectives and content.

What You Get

306 Practice Questions

Complete question bank covering all exam domains and objectives.

HTML + PDF Formats

Interactive HTML file (recommended) for screen study and a print-ready PDF.

Instant Download

Access your study materials immediately after purchase.

Email with Permanent Download Links

You will receive a confirmation email with permanent download links so you can re-download your files at any time.

Why Choose CheapestExamDumps?

Lowest Price Available

Only $19 per exam — competitors charge $50-$300 for similar content.

Updated Monthly

Study materials refreshed within 30 days of any exam content changes.

Free Preview

Try 15 real practice questions before you buy — no signup required.

Instant Access

Download HTML + PDF immediately after payment. No waiting, no account needed.

$19 (was $63)

One-time payment · HTML + PDF · Instant download · 306 questions

Free Sample — 15 Practice Questions

Preview 15 of 306 questions from the SCS-C02 exam. Try before you buy — purchase the full study guide for all 306 questions with answers and explanations.

Question 140

A company uses AWS Organizations. The company has more than 100 AWS accounts and will increase the number of accounts. The company also uses an external corporate identity provider (IdP). The company needs to provide users with role-based access to the accounts. The solution must maximize scalability and operational efficiency. Which solution will meet these requirements?

A. In each account, create a set of dedicated IAM users. Ensure that all users assume these IAM users through federation with the existing IdP.
B. Deploy an IAM role in a central identity account. Allow users to assume the role through federation with the existing IdP. In each account, deploy a set of IAM roles that match the desired access patterns. Include a trust policy that allows access from the central identity account. Edit the permissions policy for the role in each account to match user access requirements.
C. Enable AWS IAM Identity Center. Integrate IAM Identity Center with the company's existing IdP. Create permission sets that match the desired access patterns. Assign permissions to match user access requirements.
D. In each account, deploy a set of IAM roles that match the desired access patterns. Create a trust policy with the existing IdP. Update each role's permissions policy to use SAML-based IAM condition keys that are based on user access requirements.
Correct Answer: C
Explanation:
The requirement is to provide scalable, role-based access across more than 100 AWS accounts while integrating with an external IdP and minimizing operational overhead. AWS IAM Identity Center (successor to AWS SSO) is purpose-built for this scenario. It integrates directly with external IdPs, centrally manages identities and permission sets, and automatically provisions access across all accounts in AWS Organizations. This avoids per-account role and policy management, making it far more scalable and operationally efficient than manually configuring IAM roles or users in each account.

Question 291

A company hosts a web application on an Apache web server. The application runs on Amazon EC2 instances that are in an Auto Scaling group. The company configured the EC2 instances to send the Apache web server logs to an Amazon CloudWatch Logs group that the company has configured to expire after 1 year. Recently, the company discovered in the Apache web server logs that a specific IP address is sending suspicious requests to the web application. A security engineer wants to analyze the past week of Apache web server logs to determine how many requests the IP address sent and the corresponding URLs that the IP address requested. What should the security engineer do to meet these requirements with the LEAST effort?

A. Export the CloudWatch Logs group data to Amazon S3. Use Amazon Macie to query the logs for the specific IP address and the requested URL.
B. Configure a CloudWatch Logs subscription to stream the log group to an Amazon OpenSearch Service cluster. Use OpenSearch Service to analyze the logs for the specific IP address and the requested URLs.
C. Use CloudWatch Logs Insights and a custom query syntax to analyze the CloudWatch logs for the specific IP address and the requested URLs.
D. Export the CloudWatch Logs group data to Amazon S3. Use AWS Glue to crawl the S3 bucket for only the log entries that contain the specific IP address. Use AWS Glue to view the results.
Correct Answer: C
Explanation:
The logs are already stored in Amazon CloudWatch Logs. CloudWatch Logs Insights can query historical log data directly with minimal setup, using built-in query syntax for Apache access logs to filter by a specific IP address, count requests, and list requested URLs. This meets the requirement to analyze the past week of logs with the least effort, without exporting data or configuring additional services.
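As a concrete illustration, a Logs Insights query along these lines could count the requests and list the URLs for one source IP. This is a sketch only: the parse pattern assumes the Apache common/combined log format, and 203.0.113.10 is a placeholder for the suspicious address.

```python
# Sketch of a CloudWatch Logs Insights query for Apache access logs.
# The parse pattern assumes the Apache common/combined log format, and
# 203.0.113.10 is a placeholder for the suspicious address.
SUSPICIOUS_IP = "203.0.113.10"

insights_query = f"""
fields @timestamp, @message
| parse @message '* - - [*] "* * *"' as client_ip, request_time, method, url, protocol
| filter client_ip = "{SUSPICIOUS_IP}"
| stats count(*) as request_count by url
| sort request_count desc
"""

# With boto3, this string would be passed to logs.start_query() along with
# the log group name and a start/end time covering the past week.
print(insights_query)
```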

Question 60

A consultant agency needs to perform a security audit for a company’s production AWS account. Several consultants need access to the account. The consultant agency already has its own AWS account. The company requires multi-factor authentication (MFA) for all access to its production account. The company also forbids the use of long-term credentials. Which solution will provide the consultant agency with access that meets these requirements?

A. Create an IAM group. Create an IAM user for each consultant. Add each user to the group. Turn on MFA for each consultant.
B. Configure Amazon Cognito on the company’s production account to authenticate against the consultant agency’s identity provider (IdP). Add MFA to a Cognito user pool.
C. Create an IAM role in the consultant agency’s AWS account. Define a trust policy that requires MFA. In the trust policy, specify the company’s production account as the principal. Attach the trust policy to the role.
D. Create an IAM role in the company’s production account. Define a trust policy that requires MFA. In the trust policy, specify the consultant agency’s AWS account as the principal. Attach the trust policy to the role.
Correct Answer: D
Explanation:
Creating an IAM role in the company’s production account and trusting the consultant agency’s AWS account allows cross-account access using temporary credentials via STS, satisfying the requirement to forbid long-term credentials. The role’s trust policy can require MFA (aws:MultiFactorAuthPresent), enforcing MFA for all access. Permissions are controlled centrally in the production account, providing least-privilege, auditable, and time-limited access for consultants.
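The trust policy described above could look roughly like the following sketch, where 111111111111 is a placeholder for the consultant agency's account ID and the role itself lives in the company's production account.

```python
import json

# Trust policy for the role in the company's production account.
# 111111111111 is a placeholder for the consultant agency's account ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "sts:AssumeRole",
            # Deny role assumption unless the caller authenticated with MFA.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Consultants then receive only temporary STS credentials when they assume the role, which satisfies the ban on long-term credentials.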

Question 173

A company has an application that needs to get objects from an Amazon S3 bucket. The application runs on Amazon EC2 instances. All the objects in the S3 bucket are encrypted with an AWS Key Management Service (AWS KMS) customer managed key. The resources in the VPC do not have access to the internet and use a gateway VPC endpoint to access Amazon S3. The company discovers that the application is unable to get objects from the S3 bucket. Which factors could cause this issue? (Choose three.)

A. The IAM instance profile that is attached to the EC2 instances does not allow the s3:ListBucket action for the S3 bucket.
B. The IAM instance profile that is attached to the EC2 instances does not allow the s3:ListParts action for the S3 bucket.
C. The KMS key policy that encrypts the objects in the S3 bucket does not allow the kms:ListKeys action to the EC2 instance profile ARN.
D. The KMS key policy that encrypts the objects in the S3 bucket does not allow the kms:Decrypt action to the EC2 instance profile ARN.
E. The S3 bucket policy does not allow access from the gateway VPC endpoint.
F. The security group that is attached to the EC2 instances is missing an inbound rule from the S3 managed prefix list over port 443.
Correct Answer: A, D, E
Explanation:
To retrieve KMS-encrypted objects from Amazon S3, three independent permission layers must be satisfied: IAM, KMS, and S3 networking/policy controls. A is correct: If the application workflow requires listing objects (common before downloads), lack of s3:ListBucket permission on the bucket will prevent object retrieval operations. D is correct: Objects encrypted with a customer managed KMS key require the caller to have kms:Decrypt permission on that key. Without this, S3 cannot return the object even if S3 permissions are correct. E is correct: When using a gateway VPC endpoint for S3, the bucket policy must explicitly allow access from that VPC endpoint. If it does not, requests from the EC2 instances will be denied. The other options are incorrect because ListParts is only for multipart uploads, kms:ListKeys is not required for decryption, and security groups do not control traffic to gateway VPC endpoints.
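A minimal sketch of the three policy layers, using placeholder ARNs, a placeholder bucket name, and a placeholder VPC endpoint ID; all three must allow the request for a GetObject call to succeed.

```python
import json

# Placeholder identifiers for illustration only.
ROLE_ARN = "arn:aws:iam::111111111111:role/app-instance-role"
BUCKET = "example-app-bucket"
VPCE_ID = "vpce-0123456789abcdef0"

# Layer 1: the instance profile role needs list and get permissions.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListBucket",
         "Resource": f"arn:aws:s3:::{BUCKET}"},
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": f"arn:aws:s3:::{BUCKET}/*"},
    ],
}

# Layer 2: the KMS key policy must allow the role to decrypt.
kms_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": ROLE_ARN},
    "Action": ["kms:Decrypt"],
    "Resource": "*",
}

# Layer 3: the bucket policy must permit requests arriving through the
# gateway VPC endpoint.
bucket_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": ROLE_ARN},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    "Condition": {"StringEquals": {"aws:sourceVpce": VPCE_ID}},
}

print(json.dumps([iam_policy, kms_statement, bucket_statement], indent=2))
```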

Question 235

A security team is working on a solution that will use Amazon EventBridge to monitor new Amazon S3 objects. The solution will monitor for public access and for changes to any S3 bucket policy or setting that result in public access. The security team configures EventBridge to watch for specific API calls that are logged from AWS CloudTrail. EventBridge has an action to send an email notification through Amazon Simple Notification Service (Amazon SNS) to the security team immediately with details of the API call. Specifically, the security team wants EventBridge to watch for the s3:PutObjectAcl, s3:DeleteBucketPolicy, and s3:PutBucketPolicy API invocation logs from CloudTrail. While developing the solution in a single account, the security team discovers that the s3:PutObjectAcl API call does not invoke an EventBridge event. However, the s3:DeleteBucketPolicy API call and the s3:PutBucketPolicy API call do invoke an event. The security team has enabled CloudTrail for AWS management events with a basic configuration in the AWS Region in which EventBridge is being tested. Verification of the EventBridge event pattern indicates that the pattern is set up correctly. The security team must implement a solution so that the s3:PutObjectAcl API call will invoke an EventBridge event. The solution must not generate false notifications. Which solution will meet these requirements?

A. Modify the EventBridge event pattern by selecting Amazon S3. Select All Events as the event type.
B. Modify the EventBridge event pattern by selecting Amazon S3. Select Bucket Level Operations as the event type.
C. Enable CloudTrail Insights to identify unusual API activity.
D. Enable CloudTrail to monitor data events for read and write operations to S3 buckets.
Correct Answer: D
Explanation:
The s3:PutObjectAcl API is an object-level operation, which CloudTrail records as an S3 data event, not a management event. Only management events were enabled, which is why bucket-level calls (PutBucketPolicy, DeleteBucketPolicy) triggered EventBridge events but PutObjectAcl did not. Enabling CloudTrail data events for S3 read/write operations will log PutObjectAcl and allow EventBridge to match it without generating false notifications.
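Once S3 data events are enabled, an EventBridge event pattern along these lines would match all three API calls. This is a sketch; the field values follow the standard shape of CloudTrail API-call events delivered to EventBridge.

```python
import json

# Event pattern for CloudTrail-logged S3 API calls delivered to EventBridge.
# PutObjectAcl only appears once S3 data events are enabled in CloudTrail.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObjectAcl", "PutBucketPolicy", "DeleteBucketPolicy"],
    },
}

print(json.dumps(event_pattern, indent=2))
```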

Question 187

A company is designing a new application stack. The design includes web servers and backend servers that are hosted on Amazon EC2 instances. The design also includes an Amazon Aurora MySQL DB cluster. The EC2 instances are in an Auto Scaling group that uses launch templates. The EC2 instances for the web layer and the backend layer are backed by Amazon Elastic Block Store (Amazon EBS) volumes. No layers are encrypted at rest. A security engineer needs to implement encryption at rest. Which combination of steps will meet these requirements? (Choose two.)

A. Modify EBS default encryption settings in the target AWS Region to enable encryption. Use an Auto Scaling group instance refresh.
B. Modify the launch templates for the web layer and the backend layer to add AWS Certificate Manager (ACM) encryption for the attached EBS volumes. Use an Auto Scaling group instance refresh.
C. Create a new AWS Key Management Service (AWS KMS) encrypted DB cluster from a snapshot of the existing DB cluster.
D. Apply AWS Key Management Service (AWS KMS) encryption to the existing DB cluster.
E. Apply AWS Certificate Manager (ACM) encryption to the existing DB cluster.
Correct Answer: A, C
Explanation:
For the EC2 layers, enabling EBS default encryption in the Region ensures all newly created EBS volumes are encrypted with AWS KMS. An Auto Scaling group instance refresh replaces existing instances so they relaunch with encrypted volumes, satisfying encryption at rest for web and backend servers (A). For Amazon Aurora, encryption at rest cannot be enabled on an existing unencrypted cluster. The correct approach is to take a snapshot and restore it to a new KMS-encrypted DB cluster (C). Options involving ACM are incorrect because ACM is for TLS certificates, not storage encryption, and existing Aurora clusters cannot be encrypted in place.

Question 243

A company is using Amazon Route 53 Resolver for its hybrid DNS infrastructure. The company has set up Route 53 Resolver forwarding rules for authoritative domains that are hosted on on-premises DNS servers. A new security mandate requires the company to implement a solution to log and query DNS traffic that goes to the on-premises DNS servers. The logs must show details of the source IP address of the instance from which the query originated. The logs also must show the DNS name that was requested in Route 53 Resolver. Which solution will meet these requirements?

A. Use VPC Traffic Mirroring. Configure all relevant elastic network interfaces as the traffic source, include amazon-dns in the mirror filter, and set Amazon CloudWatch Logs as the mirror target. Use CloudWatch Insights on the mirror session logs to run queries on the source IP address and DNS name.
B. Configure VPC flow logs on all relevant VPCs. Send the logs to an Amazon S3 bucket. Use Amazon Athena to run SQL queries on the source IP address and DNS name.
C. Configure Route 53 Resolver query logging on all relevant VPCs. Send the logs to Amazon CloudWatch Logs. Use CloudWatch Insights to run queries on the source IP address and DNS name.
D. Modify the Route 53 Resolver rules on the authoritative domains that forward to the on-premises DNS servers. Send the logs to an Amazon S3 bucket. Use Amazon Athena to run SQL queries on the source IP address and DNS name.
Correct Answer: C
Explanation:
Route 53 Resolver query logging is specifically designed to log DNS queries handled by Route 53 Resolver, including queries forwarded to on-premises DNS servers. The logs include the source IP address of the originating instance and the DNS name requested. Sending these logs to Amazon CloudWatch Logs allows querying with CloudWatch Logs Insights to meet the security and auditing requirements. Other options (VPC Flow Logs, Traffic Mirroring, or modifying resolver rules) do not provide DNS-level details such as the queried DNS name.
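A Logs Insights query over the Resolver query logs might look like the following sketch; corp.example.com is a placeholder for the on-premises domain, and the field names (srcaddr, query_name, srcids.instance) follow the Resolver query log format.

```python
# Sketch of a CloudWatch Logs Insights query over Route 53 Resolver query
# logs. corp.example.com is a placeholder for the on-premises domain.
resolver_query = """
fields query_timestamp, srcaddr, srcids.instance, query_name
| filter query_name like /corp.example.com/
| stats count(*) as query_count by srcaddr, query_name
| sort query_count desc
"""

print(resolver_query)
```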

Question 70

A security engineer uses Amazon Macie to scan a company’s Amazon S3 buckets for sensitive data. The company has many S3 buckets and many objects stored in the S3 buckets. The security engineer must identify S3 buckets that contain sensitive data and must perform additional scanning on those S3 buckets. Which solution will meet these requirements with the LEAST administrative overhead?

A. Configure S3 Cross-Region Replication (CRR) on the S3 buckets to replicate the objects to a second AWS Region. Configure Macie in the second Region to scan the replicated objects daily.
B. Create an AWS Lambda function as an S3 event destination for the S3 buckets. Configure the Lambda function to start a Macie scan of an object when the object is uploaded to an S3 bucket.
C. Configure Macie automated discovery to continuously sample data from the S3 buckets. Perform full scans of the S3 buckets where Macie discovers sensitive data.
D. Configure Macie scans to run on the S3 buckets. Aggregate the results of the scans in an Amazon DynamoDB table. Use the DynamoDB table for queries.
Correct Answer: C
Explanation:
Amazon Macie automated discovery continuously samples data across many S3 buckets with minimal configuration, identifying where sensitive data exists. The engineer can then run targeted full scans only on those buckets, avoiding custom code, replication, or manual aggregation and resulting in the least administrative overhead.

Question 131

A security engineer is designing an IAM policy for a script that will use the AWS CLI. The script currently assumes an IAM role that is attached to three AWS managed IAM policies: AmazonEC2FullAccess, AmazonDynamoDBFullAccess, and AmazonVPCFullAccess. The security engineer needs to construct a least privilege IAM policy that will replace the AWS managed IAM policies that are attached to this role. Which solution will meet these requirements in the MOST operationally efficient way?

A. In AWS CloudTrail, create a trail for management events. Run the script with the existing AWS managed IAM policies. Use IAM Access Analyzer to generate a new IAM policy that is based on access activity in the trail. Replace the existing AWS managed IAM policies with the generated IAM policy for the role.
B. Remove the existing AWS managed IAM policies from the role. Attach the IAM Access Analyzer Role Policy Generator to the role. Run the script. Return to IAM Access Analyzer and generate a least privilege IAM policy. Attach the new IAM policy to the role.
C. Create an account analyzer in IAM Access Analyzer. Create an archive rule that has a filter that checks whether the PrincipalArn value matches the ARN of the role. Run the script. Remove the existing AWS managed IAM policies from the role.
D. In AWS CloudTrail, create a trail for management events. Remove the existing AWS managed IAM policies from the role. Run the script. Find the authorization failure in the trail event that is associated with the script. Create a new IAM policy that includes the action and resource that caused the authorization failure. Repeat the process until the script succeeds. Attach the new IAM policy to the role.
Correct Answer: A
Explanation:
Option A is the most operationally efficient because it leverages existing permissions to observe real access behavior and automatically generate a least-privilege policy. By running the script with the current AWS managed policies and using IAM Access Analyzer with CloudTrail management events, AWS can analyze actual API calls and produce a policy that includes only the required actions and resources. This minimizes manual effort, avoids trial-and-error, and aligns directly with AWS-recommended tooling for least-privilege policy generation.

Question 281

An ecommerce company has a web application architecture that runs primarily on containers. The application containers are deployed on Amazon Elastic Container Service (Amazon ECS). The container images for the application are stored in Amazon Elastic Container Registry (Amazon ECR). The company's security team is performing an audit of components of the application architecture. The security team identifies issues with some container images that are stored in the container repositories. The security team wants to address these issues by implementing continual scanning and on-push scanning of the container images. The security team needs to implement a solution that makes any findings from these scans visible in a centralized dashboard. The security team plans to use the dashboard to view these findings along with other security-related findings that they intend to generate in the future. There are specific repositories that the security team needs to exclude from the scanning process. Which solution will meet these requirements?

A. Use Amazon Inspector. Create inclusion rules in Amazon ECR to match repositories that need to be scanned. Push Amazon Inspector findings to AWS Security Hub.
B. Use ECR basic scanning of container images. Create inclusion rules in Amazon ECR to match repositories that need to be scanned. Push findings to AWS Security Hub.
C. Use ECR basic scanning of container images. Create inclusion rules in Amazon ECR to match repositories that need to be scanned. Push findings to Amazon Inspector.
D. Use Amazon Inspector. Create inclusion rules in Amazon Inspector to match repositories that need to be scanned. Push Amazon Inspector findings to AWS Config.
Correct Answer: A
Explanation:
Amazon Inspector provides enhanced container image scanning for Amazon ECR, supporting both on-push and continual scanning. It allows configuration of inclusion rules to control which repositories are scanned, meeting the requirement to exclude specific repositories. Amazon Inspector integrates natively with AWS Security Hub, which serves as a centralized dashboard to aggregate Inspector findings alongside other current and future security findings. ECR basic scanning lacks continual scanning and broader integration, and AWS Config is not intended as a centralized vulnerability findings dashboard.

Question 117

An application team wants to use AWS Certificate Manager (ACM) to request public certificates to ensure that data is secured in transit. The domains that are being used are not currently hosted on Amazon Route 53. The application team wants to use an AWS managed distribution and caching solution to optimize requests to its systems and provide better points of presence to customers. The distribution solution will use a primary domain name that is customized. The distribution solution also will use several alternative domain names. The certificates must renew automatically over an indefinite period of time. Which combination of steps should the application team take to deploy this architecture? (Choose three.)

A. Request a certificate from ACM in the us-west-2 Region. Add the domain names that the certificate will secure.
B. Send an email message to the domain administrators to request validation of the domains for ACM.
C. Request validation of the domains for ACM through DNS. Insert CNAME records into each domain's DNS zone.
D. Create an Application Load Balancer for the caching solution. Select the newly requested certificate from ACM to be used for secure connections.
E. Create an Amazon CloudFront distribution for the caching solution. Enter the main CNAME record as the Origin Name. Enter the subdomain names or alternate names in the Alternate Domain Names Distribution Settings. Select the newly requested certificate from ACM to be used for secure connections.
F. Request a certificate from ACM in the us-east-1 Region. Add the domain names that the certificate will secure.
Correct Answer: C, E, F
Explanation:
The managed distribution and caching solution is Amazon CloudFront. CloudFront requires its ACM certificate to be requested in the us-east-1 Region, so F is required and A is incorrect. Because the domains are not in Route 53 and certificates must renew automatically indefinitely, DNS validation with CNAME records is the correct validation method, making C correct and B incorrect. To provide global caching, custom domain names, and HTTPS using the ACM certificate, the team must create a CloudFront distribution and associate the certificate and alternate domain names, which is E. An Application Load Balancer is not a caching/distribution service, so D is incorrect.

Question 274

A company discovers a billing anomaly in its AWS account. A security consultant investigates the anomaly and discovers that an employee who left the company 30 days ago still has access to the account. The company has not monitored account activity in the past. The security consultant needs to determine which resources have been deployed or reconfigured by the employee as quickly as possible. Which solution will meet these requirements?

A. In AWS Cost Explorer, filter chart data to display results from the past 30 days. Export the results to a data table. Group the data table by resource.
B. Use AWS Cost Anomaly Detection to create a cost monitor. Access the detection history. Set the time frame to Last 30 days. In the search area, choose the service category.
C. In AWS CloudTrail, filter the event history to display results from the past 30 days. Create an Amazon Athena table that contains the data. Partition the table by event source.
D. Use AWS Audit Manager to create an assessment for the past 30 days. Apply a usage-based framework to the assessment. Configure the assessment to assess by resource.
Correct Answer: C
Explanation:
The requirement is to quickly determine which resources were deployed or reconfigured by a specific employee over the past 30 days. AWS CloudTrail records API calls for resource creation, modification, and deletion, including the identity that made the call. Filtering CloudTrail event history for the last 30 days directly shows what actions the former employee performed. Using Amazon Athena to query CloudTrail logs enables fast, flexible analysis at scale. Cost Explorer and Cost Anomaly Detection focus on spend, not configuration changes, and AWS Audit Manager is for compliance assessments, not rapid forensic investigation.
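A sketch of the kind of Athena query the consultant might run; the table name cloudtrail_logs and the user name jdoe are placeholders, and the column names follow the standard CloudTrail table DDL.

```python
# Sketch of an Athena query over a CloudTrail table. The table name
# (cloudtrail_logs) and the user name (jdoe) are placeholders.
FORMER_EMPLOYEE = "jdoe"

athena_sql = f"""
SELECT eventtime, eventsource, eventname, requestparameters
FROM cloudtrail_logs
WHERE useridentity.username = '{FORMER_EMPLOYEE}'
  AND readonly = 'false'
  AND from_iso8601_timestamp(eventtime) > current_timestamp - interval '30' day
ORDER BY eventtime
"""

print(athena_sql)
```

Filtering on readonly = 'false' keeps only mutating calls, i.e. resources the employee deployed or reconfigured.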

Question 79

A company’s engineering team is developing a new application that creates AWS Key Management Service (AWS KMS) customer managed key grants for users. Immediately after a grant is created, users must be able to use the KMS key to encrypt a 512-byte payload. During load testing, AccessDeniedException errors occur occasionally when a user first attempts to use the key to encrypt. Which solution should the company’s security specialist recommend to eliminate these AccessDeniedException errors?

A. Instruct users to implement a retry mechanism every 2 minutes until the call succeeds.
B. Instruct the engineering team to consume a random grant token from users and to call the CreateGrant operation by passing the grant token to the operation. Instruct users to use that grant token in their call to encrypt.
C. Instruct the engineering team to create a random name for the grant when calling the CreateGrant operation. Return the name to the users and instruct them to provide the name as the grant token in the call to encrypt.
D. Instruct the engineering team to pass the grant token returned in the CreateGrant response to users. Instruct users to use that grant token in their call to encrypt.
Correct Answer: D
Explanation:
AWS KMS grants are eventually consistent, so immediately after CreateGrant a user might receive AccessDeniedException. To allow immediate use, AWS KMS provides a grant token in the CreateGrant response. When the user includes this grant token in the Encrypt call, KMS honors the permissions before full propagation. This directly eliminates the intermittent errors without retries or custom tokens.
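The request shape is simple: the GrantToken string from the CreateGrant response goes into the GrantTokens list of the Encrypt call. A minimal sketch of how a client might assemble that request (no AWS call is made here; with boto3 the dict would be passed as kms.encrypt(**request)):

```python
def build_encrypt_request(key_id, payload, grant_token):
    """Return keyword arguments for an AWS KMS Encrypt call that includes
    the grant token from the CreateGrant response. Passing the token lets
    KMS honor the new grant before it has fully propagated."""
    return {
        "KeyId": key_id,
        "Plaintext": payload,
        "GrantTokens": [grant_token],
    }

# Example: a 512-byte payload encrypted immediately after grant creation.
# The key alias and token value are placeholders.
request = build_encrypt_request("alias/app-key", b"\x00" * 512,
                                "grant-token-from-create-grant")
print(sorted(request))
```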

Question 90

A company uses Amazon Elastic Kubernetes Service (Amazon EKS) clusters to run its Kubernetes-based applications. The company uses Amazon GuardDuty to protect the applications. EKS Protection is enabled in GuardDuty. However, the corresponding GuardDuty feature is not monitoring the Kubernetes-based applications. Which solution will cause GuardDuty to monitor the Kubernetes-based applications?

A. Enable VPC flow logs for the VPC that hosts the EKS clusters.
B. Assign the CloudWatchEventsFullAccess AWS managed policy to the EKS clusters.
C. Ensure that the AmazonGuardDutyFullAccess AWS managed policy is attached to the GuardDuty service role.
D. Enable the control plane logs in Amazon EKS. Ensure that the logs are ingested into Amazon CloudWatch.
Correct Answer: D
Explanation:
Amazon GuardDuty EKS Protection relies on Amazon EKS control plane (especially audit) logs to analyze Kubernetes API activity and detect threats. If these logs are not enabled and delivered to Amazon CloudWatch Logs, GuardDuty cannot monitor Kubernetes-based applications even if EKS Protection is turned on. Enabling EKS control plane logs and ensuring they are ingested into CloudWatch allows GuardDuty to perform continuous threat detection.

Question 305

A company has deployed Amazon GuardDuty and now wants to implement automation for potential threats. The company has decided to start with RDP brute force attacks that come from Amazon EC2 instances in the company's AWS environment. A security engineer needs to implement a solution that blocks the detected communication from a suspicious instance until investigation and potential remediation can occur. Which solution will meet these requirements?

A. Configure GuardDuty to send the event to an Amazon Kinesis data stream. Process the event with an Amazon Kinesis Data Analytics for Apache Flink application that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS). Add rules to the network ACL to block traffic to and from the suspicious instance.
B. Configure GuardDuty to send the event to Amazon EventBridge. Deploy an AWS WAF web ACL. Process the event with an AWS Lambda function that sends a notification to the company through Amazon Simple Notification Service (Amazon SNS) and adds a web ACL rule to block traffic to and from the suspicious instance.
C. Enable AWS Security Hub to ingest GuardDuty findings and send the event to Amazon EventBridge. Deploy AWS Network Firewall. Process the event with an AWS Lambda function that adds a rule to a Network Firewall firewall policy to block traffic to and from the suspicious instance.
D. Enable AWS Security Hub to ingest GuardDuty findings. Configure an Amazon Kinesis data stream as an event destination for Security Hub. Process the event with an AWS Lambda function that replaces the security group of the suspicious instance with a security group that does not allow any connections.
Correct Answer: C
Explanation:
The requirement is to automatically block RDP (TCP 3389) communication from a suspicious EC2 instance detected by GuardDuty. This is a Layer 3/4 control, not web (Layer 7). GuardDuty findings can be sent to EventBridge (directly or via Security Hub) to trigger a Lambda function. AWS Network Firewall operates at Layers 3 and 4 and can dynamically block traffic to and from specific IPs or instances by updating firewall policies, effectively isolating the instance while investigation occurs. Option A is unnecessarily complex and relies on NACL changes, which are coarse-grained and risky. Option B is incorrect because AWS WAF only protects HTTP/HTTPS (L7) traffic, not RDP. Option D relies on replacing security groups; security groups are stateful, lack explicit deny rules, and are less reliable for immediately stopping active or outbound attack traffic. Therefore, Option C best meets the requirements.
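A sketch of the EventBridge event pattern scoped to this specific GuardDuty finding type, so the Lambda function fires only for RDP brute force findings and not for unrelated activity:

```python
import json

# Event pattern matching only the RDP brute force GuardDuty finding type.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"type": ["UnauthorizedAccess:EC2/RDPBruteForce"]},
}

print(json.dumps(guardduty_pattern, indent=2))
```

The Lambda target would then read the instance details from the finding and update the Network Firewall policy to block its traffic.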

$19 (was $63)

Get all 306 questions with detailed answers and explanations

SCS-C02 — Frequently Asked Questions

What is the Amazon SCS-C02 exam?

The Amazon SCS-C02 exam — AWS Certified Security - Specialty — is a professional IT certification exam offered by Amazon.

How many practice questions are included?

This study guide contains 306 practice questions, each with an expert-verified correct answer and a detailed explanation. Questions cover all exam domains and objectives.

Is there a free sample available?

Yes! We provide a free sample of 15 practice questions from the SCS-C02 exam right on this page. Scroll up to preview them and evaluate the quality of our materials before purchasing.

When was this SCS-C02 study guide last updated?

This study guide was last updated on 2026-02-20. We regularly refresh our materials to reflect the latest exam content and objectives so you're always studying current material.

What file formats do I receive?

After purchase you receive two files: an interactive HTML file with show/hide answer toggles (ideal for studying on screen) and a PDF file (ideal for printing or offline study). Both work on any device — desktop, tablet, or phone.

How much does the SCS-C02 study guide cost?

The Amazon SCS-C02 study guide costs $19 (discounted from $63). This is a one-time payment with no subscriptions or hidden fees.

How do I get my files after payment?

After successful payment via Stripe, you are immediately redirected to a download page with links to your HTML and PDF files. We also send the download links to your email address as a backup, so you'll always have access.

Why choose CheapestExamDumps over other providers?

CheapestExamDumps offers the lowest price at $19 per exam — competitors charge $50-$300 for similar content. All study materials are expert-verified, updated monthly, and include a free 15-question preview with no signup required. You get instant access to both HTML and PDF formats after payment.