Free Sample — 15 Practice Questions
Preview 15 of 555 questions from the DVA-C02 exam.
Try before you buy — purchase the full study guide for all 555 questions with answers and explanations.
Question 5
A developer built an application that uses AWS Lambda functions to process images. The developer wants to improve image processing times throughout the day.
The developer needs to create an Amazon CloudWatch Logs Insights query that shows the average, slowest, and fastest processing time in 1-minute intervals.
Which query will meet these requirements?
Show Answer
Correct Answer: A
Explanation:
AWS Lambda writes a REPORT log line for each invocation that includes the execution Duration. A correct CloudWatch Logs Insights query must filter for REPORT entries, then calculate avg(Duration), max(Duration), and min(Duration) grouped into 1-minute bins (bin(1m)). Option A does exactly this, producing average, slowest, and fastest processing times per minute. The other options either filter the wrong log entries or do not aggregate duration correctly.
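The answer options themselves are not reproduced in this free sample, but the query the explanation describes can be sketched. The snippet below shows the assumed shape of the correct Logs Insights query (run via the CloudWatch Logs `StartQuery` API or console) plus a small local function that mimics the stats step:

```python
# Assumed text of the correct query (option A is not shown in this sample):
# filter REPORT entries, then aggregate @duration in 1-minute bins.
QUERY = (
    'filter @type = "REPORT"'
    " | stats avg(@duration), max(@duration), min(@duration) by bin(1m)"
)

def summarize_durations(durations_ms):
    """Locally mimic the stats step: average, slowest, and fastest times."""
    return {
        "avg": sum(durations_ms) / len(durations_ms),
        "max": max(durations_ms),
        "min": min(durations_ms),
    }

print(summarize_durations([120.0, 480.0, 300.0]))  # {'avg': 300.0, 'max': 480.0, 'min': 120.0}
```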
Question 546
A company is building a scalable data management solution by using AWS services to improve the speed and agility of development. The solution will ingest large volumes of data from various sources and will process this data through multiple business rules and transformations.
The solution requires business rules to run in sequence and to handle reprocessing of data if errors occur when the business rules run. The company needs the solution to be scalable and to require the least possible maintenance.
Which AWS service should the company use to manage and automate the orchestration of the data flows to meet these requirements?
A. AWS Batch
B. AWS Step Functions
C. AWS Glue
D. AWS Lambda
Show Answer
Correct Answer: B
Explanation:
AWS Step Functions is a fully managed workflow orchestration service designed to coordinate multiple steps in sequence, apply business rules, handle retries and error handling, and support reprocessing when failures occur. It scales automatically and minimizes operational overhead, making it ideal for orchestrating complex data processing pipelines across AWS services.
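A minimal sketch of the pattern the explanation describes, as an Amazon States Language definition built in Python. The state names and Lambda ARNs are illustrative, not from the question; in practice the JSON form of this dict is passed to Step Functions' `CreateStateMachine` API:

```python
# Two business rules run in sequence; transient errors are retried, and a
# Catch routes failed input to a reprocessing step. All names are placeholders.
definition = {
    "StartAt": "RuleOne",
    "States": {
        "RuleOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:rule-one",
            "Retry": [
                {"ErrorEquals": ["States.ALL"], "MaxAttempts": 3, "IntervalSeconds": 5}
            ],
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "Reprocess"}],
            "Next": "RuleTwo",
        },
        "RuleTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:rule-two",
            "End": True,
        },
        "Reprocess": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:reprocess",
            "End": True,
        },
    },
}
```

The `Retry` and `Catch` fields are what make error handling and reprocessing declarative, with no maintenance code of your own.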
Question 37
A company is building a social media application. A developer is modifying an AWS Lambda function that updates a database with data that tracks each user's online activity. A web application server uses the AWS SDK to invoke the Lambda function.
The developer has tested the new Lambda code and is ready to deploy the code into production. However, the developer wants to allow only a small percentage of the invocations from the AWS SDK to call the new code.
Which solution will meet these requirements?
A. Configure a Lambda version that has a specific weight value for the updated Lambda function.
B. Create an alias for the Lambda function. Configure a specific weight value for the updated version.
C. Create an Application Load Balancer. Specify weighted target groups for the original Lambda function and the updated Lambda function.
D. Create a Network Load Balancer. Specify weighted target groups for the original Lambda function and the updated Lambda function.
Show Answer
Correct Answer: B
Explanation:
AWS Lambda supports traffic shifting by using versions and aliases. An alias can be configured with a routing configuration that sends a small, specified percentage of invocations to a new version while the rest continue to use the existing version. This enables gradual or canary deployments directly when the function is invoked via the AWS SDK. Versions alone cannot receive traffic without an alias, and load balancers are unnecessary and not appropriate for SDK-based Lambda invocations.
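As a sketch, the alias routing configuration looks like this when sent through the Lambda `UpdateAlias` API (via boto3's `lambda_client.update_alias(**params)`); the function name, alias name, and version numbers are illustrative:

```python
# Weighted alias routing: version 1 keeps most traffic, version 2 (the new
# code) receives 5% of invocations. SDK callers invoke the alias, not a version.
params = {
    "FunctionName": "track-user-activity",
    "Name": "live",
    "FunctionVersion": "1",  # primary version
    "RoutingConfig": {
        "AdditionalVersionWeights": {"2": 0.05}  # 5% canary traffic to version 2
    },
}

def total_extra_weight(routing_config):
    """Weights on additional versions must sum to no more than 1.0."""
    return sum(routing_config["AdditionalVersionWeights"].values())

print(total_extra_weight(params["RoutingConfig"]))  # 0.05
```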
Question 417
A developer wants to add request validation to a production environment Amazon API Gateway API. The developer needs to test the changes before the API is deployed to the production environment. For the test, the developer will send test requests to the API through a testing tool.
Which solution will meet these requirements with the LEAST operational overhead?
A. Export the existing API to an OpenAPI file. Create a new API. Import the OpenAPI file. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
B. Modify the existing API to add request validation. Deploy the updated API to a new API Gateway stage. Perform the tests. Deploy the updated API to the API Gateway production stage.
C. Create a new API. Add the necessary resources and methods, including new request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
D. Clone the existing API. Modify the new API to add request validation. Perform the tests. Modify the existing API to add request validation. Deploy the existing API to production.
Show Answer
Correct Answer: B
Explanation:
API Gateway stages are designed to support multiple environments from the same API configuration. By adding request validation to the existing API and deploying it to a new, non-production stage, the developer can test the changes safely using a testing tool without impacting production traffic. Once validated, the same configuration can be promoted to the production stage. This avoids creating or cloning APIs and therefore has the least operational overhead.
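The two-stage flow can be sketched as parameters for API Gateway's `CreateDeployment` call (boto3 `apigateway` client); the REST API ID and stage names are placeholders:

```python
# Same API, two stages: deploy the validation changes to a test stage first,
# then promote the identical configuration to production after the tests pass.
test_deploy = {
    "restApiId": "a1b2c3",
    "stageName": "test",
    "description": "request validation - pre-production test",
}
prod_deploy = {
    "restApiId": "a1b2c3",  # same API; no clone or re-import needed
    "stageName": "prod",
    "description": "request validation - promoted after tests",
}
# In practice: apigw.create_deployment(**test_deploy), run the testing tool
# against the test stage URL, then apigw.create_deployment(**prod_deploy).
```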
Question 269
A developer is creating an AWS Lambda function that consumes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The developer notices that the Lambda function processes some messages multiple times.
How should the developer resolve this issue MOST cost-effectively?
A. Change the Amazon SQS standard queue to an Amazon SQS FIFO queue by using the Amazon SQS message deduplication ID.
B. Set up a dead-letter queue.
C. Set the maximum concurrency limit of the AWS Lambda function to 1.
D. Change the message processing to use Amazon Kinesis Data Streams instead of Amazon SQS.
Show Answer
Correct Answer: A
Explanation:
Amazon SQS standard queues provide at-least-once delivery, so duplicate message processing is expected behavior. An SQS FIFO queue with message deduplication IDs prevents duplicate deliveries within the 5-minute deduplication window, directly addressing the duplicate processing. This is more appropriate and cost-effective than limiting Lambda concurrency, adding a dead-letter queue, or replacing SQS with another service.
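The deduplication-window behavior can be illustrated with a small local simulation (this is a model of the semantics, not the SQS implementation; timestamps are passed in explicitly to keep it deterministic):

```python
# Model of FIFO deduplication: a message whose deduplication ID was already
# accepted within the 5-minute window is dropped as a duplicate.
DEDUP_WINDOW_SECONDS = 300

class FifoDeduplicator:
    def __init__(self):
        self._seen = {}  # dedup_id -> time of last accepted delivery

    def accept(self, dedup_id, now):
        """Return True if the message should be processed, False if duplicate."""
        last = self._seen.get(dedup_id)
        if last is not None and now - last < DEDUP_WINDOW_SECONDS:
            return False
        self._seen[dedup_id] = now
        return True

q = FifoDeduplicator()
print(q.accept("msg-1", 0))    # True  (first delivery)
print(q.accept("msg-1", 60))   # False (duplicate inside the window)
print(q.accept("msg-1", 400))  # True  (window has elapsed)
```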
Question 251
A developer is creating a serverless application that uses an AWS Lambda function. The developer will use AWS CloudFormation to deploy the application. The application will write logs to Amazon CloudWatch Logs. The developer has created a log group in a CloudFormation template for the application to use. The developer needs to modify the CloudFormation template to make the name of the log group available to the application at runtime.
Which solution will meet this requirement?
A. Use the AWS::Include transform in CloudFormation to provide the log group's name to the application.
B. Pass the log group's name to the application in the user data section of the CloudFormation template.
C. Use the CloudFormation template's Mappings section to specify the log group's name for the application.
D. Pass the log group's Amazon Resource Name (ARN) as an environment variable to the Lambda function.
Show Answer
Correct Answer: D
Explanation:
To make the log group name available to a Lambda function at runtime, the value must be passed into the function’s execution environment. CloudFormation can reference the created log group and inject its ARN (or name) as an environment variable on the Lambda function, which the code can read at runtime. AWS::Include is for importing template fragments, user data applies to EC2 (not Lambda), and Mappings are static and not exposed to the function at runtime.
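A minimal handler sketch of the runtime side: CloudFormation sets an environment variable on the function (for example with `!Ref` or `!GetAtt ...Arn` on the `AWS::Logs::LogGroup` resource), and the code reads it with `os.environ`. The variable name `LOG_GROUP_NAME` is an assumption, not from the question:

```python
import os

def handler(event, context):
    # CloudFormation injects this value via the function's Environment block.
    log_group = os.environ["LOG_GROUP_NAME"]
    return {"logGroup": log_group}

# Simulate what CloudFormation would set before the function is invoked:
os.environ["LOG_GROUP_NAME"] = "/aws/lambda/my-app"
print(handler({}, None))  # {'logGroup': '/aws/lambda/my-app'}
```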
Question 514
A developer is using an AWS Lambda function to generate avatars for profile pictures that are uploaded to an Amazon S3 bucket. The Lambda function is automatically invoked for profile pictures that are saved under the /original/ S3 prefix. The developer notices that some pictures cause the Lambda function to time out. The developer wants to implement a fallback mechanism by using another Lambda function that resizes the profile picture.
Which solution will meet these requirements with the LEAST development effort?
A. Set the image resize Lambda function as a destination of the avatar generator Lambda function for the events that fail processing.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue. Set the SQS queue as a destination with an on failure condition for the avatar generator Lambda function. Configure the image resize Lambda function to poll from the SQS queue.
C. Create an AWS Step Functions state machine that invokes the avatar generator Lambda function and uses the image resize Lambda function as a fallback. Create an Amazon EventBridge rule that matches events from the S3 bucket to invoke the state machine.
D. Create an Amazon Simple Notification Service (Amazon SNS) topic. Set the SNS topic as a destination with an on failure condition for the avatar generator Lambda function. Subscribe the image resize Lambda function to the SNS topic.
Show Answer
Correct Answer: A
Explanation:
AWS Lambda Destinations natively support routing failed asynchronous invocations directly to another Lambda function. Configuring the image resize Lambda as the failure destination requires no additional infrastructure, no polling logic, and no orchestration code. This directly satisfies the fallback requirement with the least development effort compared to adding SQS, SNS, or Step Functions.
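The destination wiring amounts to one API call, `PutFunctionEventInvokeConfig` (boto3 `lambda_client.put_function_event_invoke_config(**params)`); the function names and ARN below are placeholders:

```python
# Route failed asynchronous invocations of the avatar generator directly to
# the image-resize fallback function -- no queue, topic, or state machine.
params = {
    "FunctionName": "avatar-generator",
    "DestinationConfig": {
        "OnFailure": {
            "Destination": "arn:aws:lambda:us-east-1:123456789012:function:image-resize"
        }
    },
}
```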
Question 152
A developer needs to store files in an Amazon S3 bucket for a company's application. Each S3 object can have multiple versions. The objects must be permanently removed 1 year after object creation.
The developer creates an S3 bucket that has versioning enabled.
What should the developer do next to meet the data retention requirements?
A. Create an S3 Lifecycle rule on the S3 bucket. Configure the rule to expire current versions of objects and permanently delete noncurrent versions 1 year after object creation.
B. Create an event notification for all object creation events in the S3 bucket. Configure the event notification to invoke an AWS Lambda function. Program the Lambda function to check the object creation date and to delete the object if the object is older than 1 year.
C. Create an event notification for all object removal events in the S3 bucket. Configure the event notification to invoke an AWS Lambda function. Program the Lambda function to check the object creation date and to delete the object if the object is older than 1 year.
D. Create an S3 Lifecycle rule on the S3 bucket. Configure the rule to delete expired object delete markers and permanently delete noncurrent versions 1 year after object creation.
Show Answer
Correct Answer: A
Explanation:
With S3 versioning enabled, lifecycle rules are the native way to enforce time-based retention. To permanently remove data after 1 year from object creation, the rule must both expire current versions and permanently delete noncurrent versions after 1 year. This ensures all versions are removed without custom code. Event notifications with Lambda are unnecessary and less reliable for retention, and delete markers alone would not remove current versions.
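A sketch of the lifecycle rule from option A, shaped as the payload for `PutBucketLifecycleConfiguration` (boto3 `s3` client); the rule ID is a placeholder:

```python
# One rule covering all objects: expire current versions after 365 days and
# permanently delete noncurrent versions 365 days after they become noncurrent.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-after-one-year",
            "Status": "Enabled",
            "Filter": {},  # empty filter applies the rule to every object
            "Expiration": {"Days": 365},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}
```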
Question 466
A company has hundreds of AWS Lambda functions that the company's QA team needs to test by using the Lambda function URLs. A developer needs to configure the authentication of the Lambda functions to allow access so that the QA IAM group can invoke the Lambda functions by using the public URLs.
Which solution will meet these requirements?
A. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the AWS_IAM auth type. Run another script to create an IAM identity-based policy that allows the lambda:InvokeFunctionUrl action to all the Lambda function Amazon Resource Names (ARNs). Attach the policy to the QA IAM group.
B. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the NONE auth type. Run another script to create an IAM resource-based policy that allows the lambda:InvokeFunctionUrl action to all the Lambda function Amazon Resource Names (ARNs). Attach the policy to the QA IAM group.
C. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the AWS_IAM auth type. Run another script to loop on the Lambda functions to create an IAM identity-based policy that allows the lambda:InvokeFunctionUrl action from the QA IAM group's Amazon Resource Name (ARN).
D. Create a CLI script that loops on the Lambda functions to add a Lambda function URL with the NONE auth type. Run another script to loop on the Lambda functions to create an IAM resource-based policy that allows the lambda:InvokeFunctionUrl action from the QA IAM group's Amazon Resource Name (ARN).
Show Answer
Correct Answer: A
Explanation:
Lambda function URLs should use the AWS_IAM auth type so access is restricted to authenticated IAM principals. For same-account access, AWS allows authorization via identity-based policies without requiring a resource-based policy on each function. Creating one identity-based policy that allows lambda:InvokeFunctionUrl on the relevant Lambda ARNs and attaching it to the QA IAM group scales well to hundreds of functions and cleanly grants the required access. Options B and D make the URLs public (NONE auth). Option C describes an identity-based policy that names a principal (the group's ARN), which identity-based policies cannot do, and implies unnecessary per-function policy handling.
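The group policy can be sketched as follows; the ARN pattern (all functions in the account) and the condition key scoping to AWS_IAM-authenticated URLs are assumptions about a reasonable implementation, not text from the question:

```python
# Identity-based policy for the QA IAM group: allow invoking function URLs
# on every Lambda function in the account (account ID is a placeholder).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "lambda:InvokeFunctionUrl",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:*",
            # Restrict the grant to URLs configured with IAM authentication.
            "Condition": {"StringEquals": {"lambda:FunctionUrlAuthType": "AWS_IAM"}},
        }
    ],
}
```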
Question 73
A developer is building the authentication mechanism for a new mobile app. Users need to be able to sign up, sign in, and access secured backend AWS resources.
Which solution will meet these requirements?
A. Use AWS Identity and Access Management Access Analyzer to generate IAM policies. Create an IAM role. Attach the policies to the role. Integrate the IAM role with an identity provider that the mobile app uses.
B. Create an IAM policy that grants access to the backend resources. Create an IAM role. Attach the policy to the role. Create an Amazon API Gateway endpoint. Attach the role to the endpoint. Integrate the endpoint with the mobile app.
C. Create an Amazon Cognito identity pool. Configure permissions by choosing a default IAM role for authenticated users or guest users in the identity pool. Associate the identity pool with an identity provider. Integrate the identity pool with the mobile app.
D. Create an Amazon Cognito user pool. Configure the security requirements by choosing a password policy, multi-factor authentication (MFA) requirements, and user account recovery options. Create an app client. Integrate the app client with the mobile app.
Show Answer
Correct Answer: D
Explanation:
Amazon Cognito user pools provide a fully managed authentication service with built-in user sign-up and sign-in, password policies, MFA, and account recovery, and integrate easily with mobile apps. The app can then use the issued tokens to access secured backend AWS resources. Identity pools alone do not handle user sign-up/sign-in, and the other options do not provide a complete mobile authentication solution.
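The user pool setup the answer describes maps onto the `CreateUserPool` parameters (boto3 `cognito-idp` client); the pool name and specific policy values below are illustrative choices, and the app client would be created separately with `create_user_pool_client`:

```python
# Security requirements configured at pool creation: password policy, MFA,
# and account recovery. All concrete values here are placeholders.
params = {
    "PoolName": "mobile-app-users",
    "Policies": {
        "PasswordPolicy": {
            "MinimumLength": 12,
            "RequireNumbers": True,
            "RequireSymbols": True,
        }
    },
    "MfaConfiguration": "OPTIONAL",
    "AccountRecoverySetting": {
        "RecoveryMechanisms": [{"Priority": 1, "Name": "verified_email"}]
    },
}
```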
Question 160
A developer creates an Amazon DynamoDB table. The table has OrderID as the partition key and NumberOfItemsPurchased as the sort key. The data type of the partition key and the sort key is Number.
When the developer queries the table, the results are sorted by NumberOfItemsPurchased in ascending order. The developer needs the query results to be sorted by NumberOfItemsPurchased in descending order.
Which solution will meet this requirement?
A. Create a local secondary index (LSI) on the NumberOfItemsPurchased sort key.
B. Change the sort key from NumberOfItemsPurchased to NumberOfItemsPurchasedDescending.
C. In the Query operation, set the ScanIndexForward parameter to false.
D. In the Query operation, set the KeyConditionExpression parameter to false.
Show Answer
Correct Answer: C
Explanation:
DynamoDB Query results are sorted by the sort key in ascending order by default. To retrieve results in descending order without changing the table schema or indexes, the Query operation provides the ScanIndexForward parameter. Setting ScanIndexForward to false reverses the sort order on the sort key (NumberOfItemsPurchased), returning results in descending order.
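A sketch of the low-level Query request (boto3 `dynamodb` client shape); the table name and key value are illustrative, while the attribute names come from the question:

```python
# ScanIndexForward=False reverses the sort-key order, so items come back
# with NumberOfItemsPurchased descending. No schema or index change needed.
params = {
    "TableName": "Orders",  # placeholder table name
    "KeyConditionExpression": "OrderID = :oid",
    "ExpressionAttributeValues": {":oid": {"N": "1001"}},
    "ScanIndexForward": False,
}

# Local illustration of the effect on ordering:
items = [5, 12, 2]
ascending = sorted(items)                  # default Query behavior
descending = sorted(items, reverse=True)   # what ScanIndexForward=False returns
print(descending)  # [12, 5, 2]
```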
Question 134
A company has an application that uses an AWS Lambda function to consume messages from an Amazon Simple Queue Service (Amazon SQS) queue. The SQS queue is configured with a dead-letter queue. Due to a defect in the application, AWS Lambda failed to process some messages. A developer fixed the bug and wants to process the failed messages again.
How should the developer resolve this issue?
A. Use the SendMessageBatch API to send messages from the dead-letter queue to the original SQS queue.
B. Use the ChangeMessageVisibility API to configure messages in the dead-letter queue to be visible in the original SQS queue.
C. Use the StartMessageMoveTask API to move messages from the dead-letter queue to the original SQS queue.
D. Use the PurgeQueue API to remove messages from the dead-letter queue and return the messages to the original SQS queue.
Show Answer
Correct Answer: C
Explanation:
Amazon SQS provides the StartMessageMoveTask API specifically to move messages from a dead-letter queue back to the source queue (or another queue) after issues are resolved. It safely re-drives failed messages for reprocessing without manual resend logic. Other options are incorrect because ChangeMessageVisibility cannot move messages between queues, PurgeQueue deletes messages permanently, and SendMessageBatch would require custom logic and does not preserve DLQ handling automatically.
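The redrive is a single call, `StartMessageMoveTask` (boto3 `sqs` client); the queue ARN is a placeholder, and omitting `DestinationArn` moves messages back to their original source queue:

```python
# Redrive failed messages from the DLQ back to the source queue after the
# bug fix, optionally throttled to a maximum rate.
params = {
    "SourceArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
    "MaxNumberOfMessagesPerSecond": 50,  # optional throttle
}
# In practice: sqs.start_message_move_task(**params)
```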
Question 516
A developer is writing an AWS Lambda function. The developer wants to log key events that occur while the Lambda function runs. The developer wants to include a unique identifier to associate the events with a specific function invocation. The developer adds the following code to the Lambda function:
Which solution will meet this requirement?
A. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to standard output.
B. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to a file.
C. Obtain the request identifier from the AWS request ID field in the event object. Configure the application to write logs to standard output.
D. Obtain the request identifier from the AWS request ID field in the context object. Configure the application to write logs to a file.
Show Answer
Correct Answer: A
Explanation:
Each Lambda invocation has a unique AWS request ID available in the context object, not the event object. Logging to standard output is the recommended approach because Lambda automatically sends stdout/stderr to Amazon CloudWatch Logs, making logs durable and searchable. Writing logs to files is not appropriate because the Lambda filesystem is ephemeral and not automatically exported.
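A minimal handler sketch of the pattern in option A, exercised locally with a fake context object (the request ID value is a placeholder):

```python
import json

def handler(event, context):
    # The unique per-invocation ID lives on the context object; anything
    # printed to stdout is captured automatically by CloudWatch Logs.
    print(json.dumps({"requestId": context.aws_request_id, "msg": "processing started"}))
    return context.aws_request_id

# A fake context is enough to run the handler outside Lambda:
class FakeContext:
    aws_request_id = "8f5a1b2c-0000-0000-0000-000000000000"

print(handler({}, FakeContext()))
```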
Question 6
A developer is building an application that stores sensitive user data. The application includes an Amazon CloudFront distribution and multiple AWS Lambda functions that handle user requests.
The user requests contain over 20 data fields. Each application transaction contains sensitive data that must be encrypted. Only specific parts of the application need to have the ability to decrypt the data.
Which solution will meet these requirements?
A. Associate the CloudFront distribution with a Lambda@Edge function. Configure the function to perform field-level asymmetric encryption by using a user-defined RSA public key that is stored in AWS Key Management Service (AWS KMS).
B. Integrate AWS WAF with CloudFront to protect the sensitive data. Use a Lambda function and self-managed keys to perform the encryption and decryption processes.
C. Configure the CloudFront distribution to use WebSockets by forwarding all viewer request headers to the origin. Create an asymmetric AWS KMS key. Configure the CloudFront distribution to use field-level encryption. Use the AWS KMS key.
D. Configure the cache behavior in the CloudFront distribution to require HTTPS for communication between viewers and CloudFront. Configure CloudFront to require users to access the files by using either signed URLs or signed cookies.
Show Answer
Correct Answer: A
Explanation:
The requirement is to encrypt only specific fields in user requests so that only designated parts of the application can decrypt them. CloudFront field-level encryption is designed for this use case: it encrypts selected request fields at the edge using an asymmetric public key, and only trusted application components with the private key can decrypt the data. Using Lambda@Edge to apply field-level asymmetric encryption with a user-defined RSA public key satisfies selective encryption and controlled decryption. The other options either do not provide field-level encryption, misuse KMS with CloudFront field-level encryption, or only secure transport rather than encrypting data fields.
Question 433
A company uses AWS Lambda functions and an Amazon S3 trigger to process images into an S3 bucket. A development team set up multiple environments in a single AWS account.
After a recent production deployment, the development team observed that the development S3 buckets invoked the production environment Lambda functions. These invocations caused unwanted execution of development S3 files by using production Lambda functions. The development team must prevent these invocations. The team must follow security best practices.
Which solution will meet these requirements?
A. Update the Lambda execution role for the production Lambda function to add a policy that allows the execution role to read from only the production environment S3 bucket.
B. Move the development and production environments into separate AWS accounts. Add a resource policy to each Lambda function to allow only S3 buckets that are within the same account to invoke the function.
C. Add a resource policy to the production Lambda function to allow only the production environment S3 bucket to invoke the function.
D. Move the development and production environments into separate AWS accounts. Update the Lambda execution role for each function to add a policy that allows the execution role to read from the S3 bucket that is within the same account.
Show Answer
Correct Answer: C
Explanation:
The issue is that development S3 buckets are able to invoke the production Lambda function. S3 can invoke a Lambda function only if the function's resource-based policy allows it, so the correct fix is to restrict the production Lambda function's resource policy to permit invocation only from the production S3 bucket. Option C directly enforces least privilege, prevents the unwanted invocations, and aligns with AWS security best practices. Changing execution roles (A, D) does not control who can invoke the function, and moving to separate accounts (B) is a broader architectural change that is not required to solve the stated problem.
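The resource-policy restriction maps onto Lambda's `AddPermission` API (boto3 `lambda_client.add_permission(**params)`); the function name, bucket name, and account ID are placeholders:

```python
# Allow only the production bucket (in this account) to invoke the
# production function; development buckets no longer match the SourceArn.
params = {
    "FunctionName": "process-images-prod",
    "StatementId": "AllowProdBucketOnly",
    "Action": "lambda:InvokeFunction",
    "Principal": "s3.amazonaws.com",
    "SourceArn": "arn:aws:s3:::prod-images-bucket",
    "SourceAccount": "123456789012",  # guards against cross-account bucket-name reuse
}
```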