Amazon

DOP-C02 — AWS Certified DevOps Engineer - Professional Study Guide

386 practice questions · Updated 2026-02-19 · $19 (70% off) · HTML + PDF formats

DOP-C02 Exam Overview

Prepare for the Amazon DOP-C02 certification exam with our comprehensive study guide. This study material contains 386 practice questions sourced from real exams and expert-verified for accuracy. Each question includes the correct answer and a detailed explanation to help you understand the material thoroughly.

The DOP-C02 exam — AWS Certified DevOps Engineer - Professional — is offered by Amazon. Our study materials were last updated on 2026-02-19 to reflect the most recent exam objectives and content.

What You Get

386 Practice Questions

Complete question bank covering all exam domains and objectives.

HTML + PDF Formats

Interactive HTML file (recommended) for screen study and a print-ready PDF.

Instant Download

Access your study materials immediately after purchase.

Email with Permanent Download Links

You will receive a confirmation email with permanent download links, so you can re-download the files at any time.

Why Choose CheapestExamDumps?

Lowest Price Available

Only $19 per exam — competitors charge $50-$300 for similar content.

Updated Monthly

Study materials refreshed within 30 days of any exam content changes.

Free Preview

Try 15 real practice questions before you buy — no signup required.

Instant Access

Download HTML + PDF immediately after payment. No waiting, no account needed.

$19 (regular price $63)

One-time payment · HTML + PDF · Instant download · 386 questions

Free Sample — 15 Practice Questions

Preview 15 of 386 questions from the DOP-C02 exam. Try before you buy — purchase the full study guide for all 386 questions with answers and explanations.

Question 390

A company uses AWS Key Management Service (AWS KMS) keys and manual key rotation to meet regulatory compliance requirements. The security team wants to be notified when any keys have not been rotated after 90 days. Which solution will accomplish this?

A. Configure AWS KMS to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
B. Configure an Amazon EventBridge event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon Simple Notification Service (Amazon SNS) topic.
C. Develop an AWS Config custom rule that publishes to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
D. Configure AWS Security Hub to publish to an Amazon Simple Notification Service (Amazon SNS) topic when keys are more than 90 days old.
Show Answer
Correct Answer: C
Explanation:
AWS KMS does not natively emit notifications when keys exceed a certain age, and neither Trusted Advisor nor Security Hub provides a check for manual KMS key rotation age. AWS Config is designed to evaluate resource compliance over time. By creating an AWS Config custom rule that evaluates KMS keys and checks whether manual rotation has occurred within 90 days, the company can detect noncompliant keys and trigger notifications through Amazon SNS.
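To illustrate the notification side of this pattern, here is a hedged sketch of an Amazon EventBridge event pattern that matches compliance-change events from the custom AWS Config rule and could route them to an SNS topic target. The rule name is hypothetical; the `source` and `detail-type` values are the standard ones AWS Config emits for rule compliance changes.

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Rules Compliance Change"],
  "detail": {
    "configRuleName": ["kms-key-rotated-within-90-days"],
    "newEvaluationResult": {
      "complianceType": ["NON_COMPLIANT"]
    }
  }
}
```

Attaching this pattern to an EventBridge rule with the security team's SNS topic as the target delivers a notification each time the Config rule marks a key noncompliant.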

Question 117

A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record's processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less. A limitation of the current system is that if any steps fail, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application must reprocess only the failed steps. What is the MOST operationally efficient solution that meets these requirements?

A. Create a web application to write records to Amazon S3. Use S3 Event Notifications to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Use an EC2 instance to poll Amazon SNS and start processing. Save intermediate results to Amazon S3 to pass on to the next step.
B. Perform the processing steps by using logic in the application. Convert the application code to run in a container. Use AWS Fargate to manage the container instances. Configure the container to invoke itself to pass the state from one step to the next.
C. Create a web application to pass records to an Amazon Kinesis data stream. Decouple the processing by using the Kinesis data stream and AWS Lambda functions.
D. Create a web application to pass records to AWS Step Functions. Decouple the processing into Step Functions tasks and AWS Lambda functions.
Show Answer
Correct Answer: D
Explanation:
AWS Step Functions is designed to orchestrate multi-step, sequential workflows with built-in state management, error handling, and retry logic. By breaking the record processing into Step Functions tasks backed by AWS Lambda, each step’s state is preserved. If a step fails, only that step is retried or reprocessed instead of restarting the entire workflow. This serverless, managed approach removes the need to build custom retry and state-passing logic, making it the most operationally efficient solution.
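As a sketch of what per-step retry looks like, here is a minimal Amazon States Language definition with two sequential Lambda-backed tasks. The function ARNs and state names are illustrative; the `Retry` block is what confines reprocessing to the failed step rather than the whole record.

```json
{
  "StartAt": "StepOne",
  "States": {
    "StepOne": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:step-one",
      "Retry": [
        {
          "ErrorEquals": ["States.ALL"],
          "IntervalSeconds": 5,
          "MaxAttempts": 3,
          "BackoffRate": 2
        }
      ],
      "Next": "StepTwo"
    },
    "StepTwo": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:step-two",
      "End": true
    }
  }
}
```

If StepOne fails, Step Functions retries only StepOne with exponential backoff; StepTwo runs only after StepOne succeeds, and its input is StepOne's output.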

Question 77

A company has multiple AWS accounts in an organization in AWS Organizations that has all features enabled. The company’s DevOps administrator needs to improve security across all the company's AWS accounts. The administrator needs to identify the top users and roles in use across all accounts. Which solution will meet these requirements with the MOST operational efficiency?

A. Create a new organization trail in AWS CloudTrail. Configure the trail to send log events to Amazon CloudWatch Logs. Create a CloudWatch Contributor Insights rule for the userIdentity.arn log field. View the results in CloudWatch Contributor Insights.
B. Create an unused access analysis for the organization by using AWS Identity and Access Management Access Analyzer. Review the analyzer results and determine if each finding has the intended level of permissions required for the workload.
C. Create a new organization trail in AWS CloudTrail. Create a table in Amazon Athena that uses partition projection. Load the Athena table with CloudTrail data. Query the Athena table to find the top users and roles.
D. Generate a Service access report for each account by using Organizations. From the results, pull the last accessed date and last accessed by account fields to find the top users and roles.
Show Answer
Correct Answer: A
Explanation:
The requirement is to identify the top users and roles across all accounts with the MOST operational efficiency. An organization-wide CloudTrail combined with CloudWatch Contributor Insights directly aggregates and ranks contributors (for example, userIdentity.arn) without building data pipelines, managing schemas, or running queries. This provides near-real-time insights with minimal setup and ongoing maintenance. Athena-based analysis works but requires ongoing query management and data handling, which is less operationally efficient. The other options do not identify top users and roles in active use across accounts.
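A hedged sketch of the CloudWatch Contributor Insights rule body for this scenario follows. The log group name is an assumption (it would be wherever the organization trail delivers events); the `Keys` path targets the `userIdentity.arn` field in CloudTrail's JSON log events.

```json
{
  "Schema": { "Name": "CloudWatchLogRule", "Version": 1 },
  "LogGroupNames": ["org-cloudtrail-logs"],
  "LogFormat": "JSON",
  "Contribution": {
    "Keys": ["$.userIdentity.arn"],
    "Filters": []
  },
  "AggregateOn": "Count"
}
```

Contributor Insights then ranks the principals by event count automatically, with no Athena tables or queries to maintain.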

Question 156

A company has deployed a new platform that runs on Amazon Elastic Kubernetes Service (Amazon EKS). The new platform hosts web applications that users frequently update. The application developers build the Docker images for the applications and deploy the Docker images manually to the platform. The platform usage has increased to more than 500 users every day. Frequent updates, building the updated Docker images for the applications, and deploying the Docker images on the platform manually have all become difficult to manage. The company needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if Docker image scanning returns any HIGH or CRITICAL findings for operating system or programming language package vulnerabilities. Which combination of steps will meet these requirements? (Choose two.)

A. Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon S3 event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
B. Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon EventBridge event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
C. Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on basic scanning for the ECR repository. Create an Amazon EventBridge rule that monitors Amazon GuardDuty events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
D. Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on enhanced scanning for the ECR repository. Create an Amazon EventBridge rule that monitors ECR image scan events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
E. Create an AWS CodeBuild project that scans the Dockerfile. Configure the project to build the Docker images and store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository if the scan is successful. Configure an SNS topic to provide notification if the scan returns any vulnerabilities.
Show Answer
Correct Answer: B, D
Explanation:
The company needs an automated CI/CD workflow and vulnerability notifications for container images. Option B correctly automates builds by using AWS CodeCommit with AWS CodePipeline triggered by Amazon EventBridge on commits, which is the appropriate native mechanism for reacting to repository changes. Option D correctly handles the security requirement by building images with AWS CodeBuild, storing them in Amazon ECR, enabling enhanced scanning, and using Amazon EventBridge to detect ECR image scan results and notify an SNS topic when HIGH or CRITICAL vulnerabilities are found. Other options either use incorrect triggers, incorrect services (GuardDuty), or unsupported scanning approaches.
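The notification half of option D can be sketched as an EventBridge event pattern like the one below, which matches ECR image scan results reporting at least one CRITICAL or HIGH finding. The exact event shape can vary between basic and enhanced scanning, so treat this as an assumption-laden sketch rather than a verbatim pattern.

```json
{
  "source": ["aws.ecr"],
  "detail-type": ["ECR Image Scan"],
  "detail": {
    "finding-severity-counts": {
      "CRITICAL": [{ "numeric": [">", 0] }],
      "HIGH": [{ "numeric": [">", 0] }]
    }
  }
}
```

An EventBridge rule with this pattern and the SNS topic as its target completes the alerting path from image push, through scan, to notification.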

Question 17

A company is running its ecommerce website on AWS. The website is currently hosted on a single Amazon EC2 instance in one Availability Zone. A MySQL database runs on the same EC2 instance. The company needs to eliminate single points of failure in the architecture to improve the website's availability and resilience. Which solution will meet these requirements with the LEAST configuration changes to the website?

A. Deploy the application by using AWS Fargate containers. Migrate the database to Amazon DynamoDB. Use Amazon API Gateway to route requests.
B. Deploy the application on EC2 instances across multiple Availability Zones. Put the EC2 instances into an Auto Scaling group behind an Application Load Balancer. Migrate the database to Amazon Aurora Multi-AZ. Use Amazon CloudFront for content delivery.
C. Use AWS Elastic Beanstalk to deploy the application across multiple AWS Regions. Migrate the database to Amazon Redshift. Use Amazon ElastiCache for session management.
D. Migrate the application to AWS Lambda functions. Use Amazon S3 for static content hosting. Migrate the database to Amazon DocumentDB (with MongoDB compatibility).
Show Answer
Correct Answer: B
Explanation:
Option B removes the single points of failure with minimal changes to the existing EC2/MySQL architecture. Placing EC2 instances in an Auto Scaling group across multiple Availability Zones behind an Application Load Balancer provides high availability for the application tier. Migrating MySQL to Amazon Aurora Multi-AZ improves database resilience with low application impact due to MySQL compatibility. The other options require major architectural rewrites or inappropriate services, making them less suitable.

Question 338

A company has chosen AWS to host a new application. The company needs to implement a multi-account strategy. A DevOps engineer creates a new AWS account and an organization in AWS Organizations. The DevOps engineer also creates the OU structure for the organization and sets up a landing zone by using AWS Control Tower. The DevOps engineer must implement a solution that automatically deploys resources for new accounts that users create through AWS Control Tower Account Factory. When a user creates a new account, the solution must apply AWS CloudFormation templates and SCPs that are customized for the OU or the account to automatically deploy all the resources that are attached to the account. All the OUs are enrolled in AWS Control Tower. Which solution will meet these requirements in the MOST automated way?

A. Use AWS Service Catalog with AWS Control Tower. Create portfolios and products in AWS Service Catalog. Grant granular permissions to provision these resources. Deploy SCPs by using the AWS CLI and JSON documents.
B. Deploy CloudFormation stack sets by using the required templates. Enable automatic deployment. Deploy stack instances to the required accounts. Deploy a CloudFormation stack set to the organization’s management account to deploy SCPs.
C. Create an Amazon EventBridge rule to detect the CreateManagedAccount event. Configure AWS Service Catalog as the target to deploy resources to any new accounts. Deploy SCPs by using the AWS CLI and JSON documents.
D. Deploy the Customizations for AWS Control Tower (CfCT) solution. Use an AWS CodeCommit repository as the source. In the repository, create a custom package that includes the CloudFormation templates and the SCP JSON documents.
Show Answer
Correct Answer: D
Explanation:
Customizations for AWS Control Tower (CfCT) is purpose-built to extend AWS Control Tower by automatically applying CloudFormation templates and service control policies (SCPs) when new accounts are created through Account Factory. CfCT integrates with Control Tower lifecycle events, supports OU- and account-level customization, and uses a source repository (such as CodeCommit) to manage templates and SCPs declaratively. This provides the most automated, native, and scalable solution for deploying customized resources and policies across all Control Tower–managed accounts.
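A hedged sketch of the CfCT `manifest.yaml` that would live in the CodeCommit repository is shown below. Resource names, file paths, and the OU name are all illustrative; check the CfCT documentation for the current manifest schema version.

```yaml
region: us-east-1
version: 2021-03-15
resources:
  - name: baseline-iam-roles
    resource_file: templates/baseline-iam-roles.yaml
    deploy_method: stack_set
    deployment_targets:
      organizational_units:
        - Workloads
  - name: deny-root-user-scp
    resource_file: policies/deny-root-user.json
    deploy_method: scp
    deployment_targets:
      organizational_units:
        - Workloads
```

When Account Factory enrolls a new account into the Workloads OU, CfCT's lifecycle-event integration applies both the stack set and the SCP without any manual steps.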

Question 170

A DevOps engineer manages a company's Amazon Elastic Container Service (Amazon ECS) cluster. The cluster runs on several Amazon EC2 instances that are in an Auto Scaling group. The DevOps engineer must implement a solution that logs and reviews all stopped tasks for errors. Which solution will meet these requirements?

A. Create an Amazon EventBridge rule to capture task state changes. Send the event to Amazon CloudWatch Logs. Use CloudWatch Logs Insights to investigate stopped tasks.
B. Configure tasks to write log data in the embedded metric format. Store the logs in Amazon CloudWatch Logs. Monitor the ContainerInstanceCount metric for changes.
C. Configure the EC2 instances to store logs in Amazon CloudWatch Logs. Create a CloudWatch Contributor Insights rule that uses the EC2 instance log data. Use the Contributor Insights rule to investigate stopped tasks.
D. Configure an EC2 Auto Scaling lifecycle hook for the EC2_INSTANCE_TERMINATING scale-in event. Write the SystemEventLog file to Amazon S3. Use Amazon Athena to query the log file for errors.
Show Answer
Correct Answer: A
Explanation:
Amazon ECS emits task state change events (including STOPPED with stop reasons and exit codes) to Amazon EventBridge. Creating an EventBridge rule to capture these events and send them to CloudWatch Logs provides a centralized, structured record of all stopped tasks. CloudWatch Logs Insights can then be used to query and analyze errors across tasks. The other options focus on EC2-level logs, metrics, or Auto Scaling events and do not directly or comprehensively capture ECS task stop reasons.
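The EventBridge rule for capturing stopped tasks can be sketched with an event pattern like the following (the cluster ARN is illustrative; omit it to capture stopped tasks from all clusters):

```json
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "lastStatus": ["STOPPED"],
    "clusterArn": ["arn:aws:ecs:us-east-1:111122223333:cluster/app-cluster"]
  }
}
```

With a CloudWatch Logs log group as the rule target, a Logs Insights query such as `filter detail.lastStatus = "STOPPED" | stats count(*) by detail.stoppedReason` can then summarize why tasks stopped.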

Question 40

A company's DevOps team uses Node Package Manager (NPM) open source libraries to build applications. The DevOps team runs its application build process in an AWS CodeBuild project that downloads the NPM libraries from public NPM repositories. The company wants to host the NPM libraries in private NPM repositories. The company also needs to be able to run checks on new versions of the libraries before the DevOps team uses the libraries. Which solution will meet these requirements with the LEAST operational effort?

A. Create an AWS CodeArtifact repository with an upstream repository named npm-store. Configure the application build process to use the CodeArtifact repository as the default source for NPM. Create an AWS CodePipeline pipeline to perform the required checks on package versions in the CodeArtifact repository. Set the package status to unlisted if a failure occurs.
B. Enable Amazon S3 caching in the CodeBuild project configuration. Add a step in the buildspec.yaml config file to perform the required checks on the package versions in the cache.
C. Create an AWS CodeCommit repository for each library. Clone the required NPM libraries to the appropriate CodeCommit repository. Modify the CodeBuild appspec.yaml config file to use the private CodeCommit repositories. Add a step to perform the required checks on the package versions.
D. Create an AWS CodeCommit repository for each library. Clone the required NPM libraries to the appropriate CodeCommit repository. Modify the CodeBuild buildspec.yaml config file so that NPM uses the private CodeCommit repositories. Add an AWS CodePipeline pipeline that performs the required checks on the package versions for each new commit to the repositories. Configure the pipeline to revert to the most recent commit in the event of a failure.
Show Answer
Correct Answer: A
Explanation:
AWS CodeArtifact natively supports private package repositories for NPM with upstream connections to public npmjs.org, allowing the company to host dependencies privately while automatically proxying and caching public packages. This requires minimal operational effort compared to manually cloning or managing packages. CodeArtifact integrates directly with CodeBuild as the default NPM source, and package version checks can be automated with CodePipeline. Packages that fail validation can be marked unlisted to prevent usage, fully meeting the requirements with the least overhead.
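The CodeBuild side of this setup can be sketched in a `buildspec.yaml` like the one below. The domain and repository names are assumptions; `aws codeartifact login --tool npm` rewrites the local npm configuration to pull from the private repository.

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Point npm at the private CodeArtifact repository
      # (domain and repository names are illustrative)
      - aws codeartifact login --tool npm --domain my-domain --repository npm-private
  build:
    commands:
      - npm ci
      - npm run build
```

Because the CodeArtifact repository has `npm-store` as an upstream, any package not yet cached privately is fetched from the public registry and retained, so builds keep working while new versions wait on the validation pipeline.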

Question 130

A company uses the AWS Cloud Development Kit (AWS CDK) to define its application. The company uses a pipeline that consists of AWS CodePipeline and AWS CodeBuild to deploy the CDK application. The company wants to introduce unit tests to the pipeline to test various infrastructure components. The company wants to ensure that a deployment proceeds if no unit tests result in a failure. Which combination of steps will enforce the testing requirement in the pipeline? (Choose two.)

A. Update the CodeBuild build phase commands to run the tests then to deploy the application. Set the OnFailure phase property to ABORT.
B. Update the CodeBuild build phase commands to run the tests then to deploy the application. Add the --rollback true flag to the cdk deploy command.
C. Update the CodeBuild build phase commands to run the tests then to deploy the application. Add the --require-approval any-change flag to the cdk deploy command.
D. Create a test that uses the AWS CDK assertions module. Use the template.hasResourceProperties assertion to test that resources have the expected properties.
E. Create a test that uses the cdk diff command. Configure the test to fail if any resources have changed.
Show Answer
Correct Answer: A, D
Explanation:
The pipeline must both run unit tests and stop deployment if those tests fail. Updating the CodeBuild build phase to run tests before deployment and configuring failure handling to abort the build ensures the pipeline only continues when tests pass (A). Writing unit tests using the AWS CDK assertions module, such as template.hasResourceProperties, provides proper unit-level validation of the synthesized infrastructure (D). The other options do not enforce unit test execution or failure-based gating of deployments.
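A hedged sketch of the CodeBuild `buildspec.yaml` for option A follows; the test and deploy commands are assumptions about the project's npm scripts. The key element is the `on-failure: ABORT` phase property, which stops the build (and therefore the pipeline stage) as soon as any command, including the unit tests, fails.

```yaml
version: 0.2
phases:
  build:
    on-failure: ABORT   # abort the build if the tests or any other command fails
    commands:
      - npm ci
      - npm test        # CDK assertions-module unit tests run here
      - npx cdk deploy --require-approval never
```

Because `cdk deploy` appears after `npm test` in the same phase, the deployment is reached only when every test passes.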

Question 70

A company uses an organization in AWS Organizations to manage 10 AWS accounts. All features are enabled, and trusted access for AWS CloudFormation is enabled. A DevOps engineer needs to use CloudFormation to deploy an IAM role to the Organizations management account and all member accounts in the organization. Which solution will meet these requirements with the LEAST operational overhead?

A. Create a CloudFormation StackSet that has service-managed permissions. Set the root OU as a deployment target.
B. Create a CloudFormation StackSet that has service-managed permissions. Set the root OU as a deployment target. Deploy a separate CloudFormation stack in the Organizations management account.
C. Create a CloudFormation StackSet that has self-managed permissions. Set the root OU as a deployment target.
D. Create a CloudFormation StackSet that has self-managed permissions. Set the root OU as a deployment target. Deploy a separate CloudFormation stack in the Organizations management account.
Show Answer
Correct Answer: B
Explanation:
CloudFormation StackSets with service-managed permissions cannot deploy stack instances to the AWS Organizations management account, even if the management account is included in the root OU. Therefore, using only a StackSet (option A) would deploy the IAM role to all member accounts but not to the management account. To meet the requirement of deploying the IAM role to both the management account and all member accounts with the least operational overhead, the correct approach is to use a service-managed StackSet targeting the root OU for member accounts and deploy a separate CloudFormation stack directly in the management account.

Question 309

A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package an AWS Lambda function. The second stage consists of deployment actions that operate on two different AWS accounts: a development environment account and a production environment account. The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy the infrastructure that the Lambda function requires. A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3 (the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the CloudFormation actions fail with an access denied error. Which combination of actions must the DevOps engineer perform to resolve this error? (Choose two.)

A. Create an S3 bucket in each AWS account for the artifacts. Allow the pipeline to write to the S3 buckets. Create a CodePipeline S3 action to copy the artifacts to the S3 bucket in each AWS account. Update the CloudFormation actions to reference the artifacts S3 bucket in the production account.
B. Create a customer managed KMS key. Configure the KMS key policy to allow the IAM roles used by the CloudFormation action to perform decrypt operations. Modify the pipeline to use the customer managed KMS key to encrypt artifacts.
C. Create an AWS managed KMS key. Configure the KMS key policy to allow the development account and the production account to perform decrypt operations. Modify the pipeline to use the KMS key to encrypt artifacts.
D. In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, configure the CodePipeline CloudFormation action to use the roles.
E. In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, modify the artifacts S3 bucket policy to allow the roles access. Configure the CodePipeline CloudFormation action to use the roles.
Show Answer
Correct Answer: B, E
Explanation:
The failure occurs because the pipeline encrypts artifacts with the AWS managed aws/s3 KMS key, which cannot be used for cross-account access since its key policy cannot be modified. To fix this, the engineer must switch to a customer managed KMS key and explicitly allow the CloudFormation execution roles in the development and production accounts to decrypt artifacts (B). Additionally, for cross-account access, CodePipeline must assume roles in the target accounts, and the artifacts S3 bucket policy must allow those roles to access and decrypt the artifacts; the CloudFormation actions must be configured to use these roles (E).
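The decisive difference for option B is that a customer managed key has an editable key policy. A hedged sketch of the policy statement granting decrypt access to the cross-account roles is shown below; account IDs and role names are hypothetical.

```json
{
  "Sid": "AllowCrossAccountDecryptForCloudFormationActionRoles",
  "Effect": "Allow",
  "Principal": {
    "AWS": [
      "arn:aws:iam::222222222222:role/CodePipelineCrossAccountRole",
      "arn:aws:iam::333333333333:role/CodePipelineCrossAccountRole"
    ]
  },
  "Action": ["kms:Decrypt", "kms:DescribeKey"],
  "Resource": "*"
}
```

A parallel statement in the artifacts S3 bucket policy (option E) grants the same roles `s3:GetObject` on the artifact objects, completing the cross-account path.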

Question 325

A company has many applications. Different teams in the company developed the applications by using multiple languages and frameworks. The applications run on premises and on different servers with different operating systems. Each team has its own release protocol and process. The company wants to reduce the complexity of the release and maintenance of these applications. The company is migrating its technology stacks, including these applications, to AWS. The company wants centralized control of source code, a consistent and automatic delivery pipeline, and as few maintenance tasks as possible on the underlying infrastructure. What should a DevOps engineer do to meet these requirements?

A. Create one AWS CodeCommit repository for all applications. Put each application's code in a different branch. Merge the branches, and use AWS CodeBuild to build the applications. Use AWS CodeDeploy to deploy the applications to one centralized application server.
B. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time. Use AWS CodeDeploy to deploy the applications to one centralized application server.
C. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build the applications one at a time and to create one AMI for each server. Use AWS CloudFormation StackSets to automatically provision and decommission Amazon EC2 fleets by using these AMIs.
D. Create one AWS CodeCommit repository for each of the applications. Use AWS CodeBuild to build one Docker image for each application in Amazon Elastic Container Registry (Amazon ECR). Use AWS CodeDeploy to deploy the applications to Amazon Elastic Container Service (Amazon ECS) on infrastructure that AWS Fargate manages.
Show Answer
Correct Answer: D
Explanation:
The company needs centralized source control, a consistent automated delivery pipeline, and minimal infrastructure maintenance across heterogeneous applications. Using separate CodeCommit repositories per application supports independent teams while maintaining centralized control. Building Docker images with CodeBuild standardizes packaging across languages and OS differences. Storing images in ECR and deploying to ECS on AWS Fargate eliminates server management, patching, and OS maintenance, meeting the requirement for the fewest possible infrastructure tasks. CodeDeploy integrates with ECS to provide a consistent, automated deployment process.

Question 196

A company has deployed a complex container-based workload on AWS. The workload uses Amazon Managed Service for Prometheus for monitoring. The workload runs in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in an AWS account. The company’s DevOps team wants to receive workload alerts by using the company’s Amazon Simple Notification Service (Amazon SNS) topic. The SNS topic is in the same AWS account as the EKS cluster. Which combination of steps will meet these requirements? (Choose three.)

A. Use the Amazon Managed Service for Prometheus remote write URL to send alerts to the SNS topic
B. Create an alerting rule that checks the availability of each of the workload’s containers.
C. Create an alert manager configuration for the SNS topic.
D. Modify the access policy of the SNS topic. Grant the aps.amazonaws.com service principal the sns:Publish permission and the sns:GetTopicAttributes permission for the SNS topic.
E. Modify the IAM role that Amazon Managed Service for Prometheus uses. Grant the role the sns:Publish permission and the sns:GetTopicAttributes permission for the SNS topic.
F. Create an OpenID Connect (OIDC) provider for the EKS cluster. Create a cluster service account. Grant the account the sns:Publish permission and the sns:GetTopicAttributes permission by using an IAM role.
Show Answer
Correct Answer: B, C, D
Explanation:
To receive alerts from Amazon Managed Service for Prometheus (AMP) into an Amazon SNS topic, three elements are required. First, alerting rules must exist so that conditions in the workload generate alerts (B). Second, AMP uses Alertmanager to route alerts, so an Alertmanager configuration with an SNS receiver that references the SNS topic ARN is required (C). Third, AMP itself publishes notifications to SNS, so the SNS topic resource policy must allow the aps.amazonaws.com service principal to publish and read topic attributes (D). The other options either misuse Prometheus features (A) or incorrectly rely on IAM roles or EKS service accounts, which are not how AMP publishes alerts to SNS (E, F).
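A hedged sketch of the Amazon Managed Service for Prometheus Alertmanager definition for step C follows. The topic ARN, receiver name, and Region are assumptions; the SNS receiver uses SigV4 signing because AMP publishes to SNS on the workspace's behalf.

```yaml
alertmanager_config: |
  route:
    receiver: sns-devops
  receivers:
    - name: sns-devops
      sns_configs:
        - topic_arn: arn:aws:sns:us-east-1:111122223333:workload-alerts
          sigv4:
            region: us-east-1
```

Combined with an alerting rule (B) and the SNS topic policy granting `aps.amazonaws.com` publish access (D), firing alerts flow from the workspace straight to the team's topic.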

Question 234

A company has configured an Amazon S3 event source on an AWS Lambda function. The company needs the Lambda function to run when a new object is created or an existing object is modified in a particular S3 bucket. The Lambda function will use the S3 bucket name and the S3 object key of the incoming event to read the contents of the created or modified S3 object. The Lambda function will parse the contents and save the parsed contents to an Amazon DynamoDB table. The Lambda function's execution role has permissions to read from the S3 bucket and to write to the DynamoDB table. During testing, a DevOps engineer discovers that the Lambda function does not run when objects are added to the S3 bucket or when existing objects are modified. Which solution will resolve this problem?

A. Increase the memory of the Lambda function to give the function the ability to process large files from the S3 bucket.
B. Create a resource policy on the Lambda function to grant Amazon S3 the permission to invoke the Lambda function for the S3 bucket.
C. Configure an Amazon Simple Queue Service (Amazon SQS) queue as an OnFailure destination for the Lambda function.
D. Provision space in the /tmp folder of the Lambda function to give the function the ability to process large files from the S3 bucket.
Show Answer
Correct Answer: B
Explanation:
For Amazon S3 to trigger a Lambda function, the Lambda function must have a resource-based policy that explicitly allows the S3 service (and the specific bucket) to invoke it. Even if the Lambda execution role has permissions to read S3 and write to DynamoDB, the function will not run unless S3 is granted invoke permissions. The absence of this permission causes the function to never be triggered on object creation or modification events.
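The missing resource-based policy statement can be sketched as follows; the function name, bucket name, and account ID are illustrative. In practice this statement is typically added with `aws lambda add-permission` rather than written by hand.

```json
{
  "Sid": "AllowS3Invoke",
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:111122223333:function:parse-objects",
  "Condition": {
    "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::example-bucket" },
    "StringEquals": { "AWS:SourceAccount": "111122223333" }
  }
}
```

The `SourceArn` and `SourceAccount` conditions scope the permission so that only events from the intended bucket in the intended account can invoke the function.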

Question 355

A company that uses electronic health records is running a fleet of Amazon EC2 instances with an Amazon Linux operating system. As part of patient privacy requirements, the company must ensure continuous compliance for patches for operating system and applications running on the EC2 instances. How can the deployments of the operating system and application patches be automated using a default and custom repository?

A. Use AWS Systems Manager to create a new patch baseline including the custom repository. Run the AWS-RunPatchBaseline document using the run command to verify and install patches.
B. Use AWS Direct Connect to integrate the corporate repository and deploy the patches using Amazon CloudWatch scheduled events, then use the CloudWatch dashboard to create reports.
C. Use yum-config-manager to add the custom repository under /etc/yum.repos.d and run yum-config-manager --enable to activate the repository.
D. Use AWS Systems Manager to create a new patch baseline including the corporate repository. Run the AWS-AmazonLinuxDefaultPatchBaseline document using the run command to verify and install patches.
Show Answer
Correct Answer: A
Explanation:
AWS Systems Manager Patch Manager is designed to automate OS and application patching on EC2 instances. You can create a custom patch baseline that includes both the default Amazon Linux repositories and an additional custom/corporate repository. The AWS-RunPatchBaseline SSM document is then used to scan for compliance and install approved patches automatically. Other options either do not provide compliance reporting, automation, or cannot work with custom repositories.
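The custom repository is attached to the patch baseline through an alternative patch source. A hedged sketch of one `Sources` entry (as passed to `create-patch-baseline`) is shown below; the repository name, URL, and product string are assumptions to be checked against the Patch Manager documentation for your OS version.

```json
{
  "Name": "corp-repo",
  "Products": ["AmazonLinux2"],
  "Configuration": "[corp-repo]\nname=Corporate Repository\nbaseurl=https://repo.example.com/amazonlinux/2\nenabled=1"
}
```

The `Configuration` string is a standard yum repo definition, so instances patched with `AWS-RunPatchBaseline` pull from both the default Amazon Linux repositories and the corporate one.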

$19 (regular price $63)

Get all 386 questions with detailed answers and explanations

DOP-C02 — Frequently Asked Questions

What is the Amazon DOP-C02 exam?

The Amazon DOP-C02 exam — AWS Certified DevOps Engineer - Professional — is a professional IT certification exam offered by Amazon.

How many practice questions are included?

This study guide contains 386 practice questions, each with an expert-verified correct answer and a detailed explanation. Questions cover all exam domains and objectives.

Is there a free sample available?

Yes! We provide a free sample of 15 practice questions from the DOP-C02 exam right on this page. Scroll up to preview them and evaluate the quality of our materials before purchasing.

When was this DOP-C02 study guide last updated?

This study guide was last updated on 2026-02-19. We regularly refresh our materials to reflect the latest exam content and objectives so you're always studying current material.

What file formats do I receive?

After purchase you receive two files: an interactive HTML file with show/hide answer toggles (ideal for studying on screen) and a PDF file (ideal for printing or offline study). Both work on any device — desktop, tablet, or phone.

How much does the DOP-C02 study guide cost?

The Amazon DOP-C02 study guide costs $19 (discounted from $63). This is a one-time payment with no subscriptions or hidden fees.

How do I get my files after payment?

After successful payment via Stripe, you are immediately redirected to a download page with links to your HTML and PDF files. We also send the download links to your email address as a backup, so you'll always have access.

Why choose CheapestExamDumps over other providers?

CheapestExamDumps offers the lowest price at $19 per exam — competitors charge $50-$300 for similar content. All study materials are expert-verified, updated monthly, and include a free 15-question preview with no signup required. You get instant access to both HTML and PDF formats after payment.