Free Sample — 15 Practice Questions
Preview 15 of 445 questions from the AZ-204 exam.
Try before you buy — purchase the full study guide for all 445 questions with answers and explanations.
Question 182
You are developing an application to store business-critical data in Azure Blob storage.
The application must meet the following requirements:
• Data must not be modified or deleted for a user-specified interval.
• Data must be protected from overwrites and deletes.
• Data must be written once and allowed to be read many times.
You need to protect the data in the Azure Blob storage account.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. Configure a time-based retention policy for the storage account.
B. Create an account shared-access signature (SAS).
C. Enable the blob change feed for the storage account.
D. Enable version-level immutability support for the storage account.
E. Enable point-in-time restore for containers in the storage account.
F. Create a service shared-access signature (SAS).
Show Answer
Correct Answer: A, D
Explanation:
The requirements describe Write-Once, Read-Many (WORM) protection: data cannot be modified or deleted for a specified interval and must be protected from overwrites and deletes. A time-based retention policy enforces WORM by preventing modification and deletion for a user-defined period. To apply immutability at the blob/version level, the storage account must have version-level immutability support enabled, which is required to enforce immutable retention on blob versions.
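The two selected actions can be sketched with the Azure CLI. This is illustrative only: the account, resource group, and container names are made up, and the exact flags (`--enable-alw` for account-level version immutability support, `az storage container immutability-policy create` for the time-based retention policy) should be verified against the current `az` documentation.

```shell
# Enable version-level immutability support when creating the account
# (names are placeholders; --enable-alw turns on immutability with versioning)
az storage account create \
  --name mystorageacct \
  --resource-group my-rg \
  --enable-alw true

# Apply a time-based (WORM) retention policy on a container: blobs cannot
# be modified or deleted for the stated number of days
az storage container immutability-policy create \
  --account-name mystorageacct \
  --container-name business-critical \
  --period 30
```

Until the policy is locked it remains in an unlocked state and can still be adjusted; locking it makes the retention interval irreversible, which is what compliance-grade WORM requires.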
Question 456
HOTSPOT -
You are developing an ASP.NET Core web application. You plan to deploy the application to Azure Web App for Containers.
The application needs to store runtime diagnostic data that must be persisted across application restarts. You have the following code:
You need to configure the application settings so that diagnostic data is stored as required.
How should you configure the web app's settings? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Show Answer
Correct Answer: WEBSITES_ENABLE_APP_SERVICE_STORAGE = true
DIAGDATA = /home
Explanation:
Azure Web App for Containers requires persistent storage to be explicitly enabled so data survives restarts. Setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to true mounts shared storage, and on Linux containers this storage is exposed at /home. Pointing the DIAGDATA environment variable to /home ensures diagnostic files are persisted.
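The answer's two settings can be applied with a single Azure CLI command. The web app and resource group names below are placeholders; `DIAGDATA` is the application-specific variable from the question, not a built-in App Service setting.

```shell
# Enable persistent /home storage for the container and point the app's
# diagnostic-data variable at it (app and group names are illustrative)
az webapp config appsettings set \
  --name my-webapp \
  --resource-group my-rg \
  --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true DIAGDATA=/home
```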
Question 357
HOTSPOT -
You need to implement the Log policy.
How should you complete the EnsureLogging method in EventGridController.cs? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Show Answer
Correct Answer: Box 1: logdrop
Box 2: 15
Box 3: UpdateApplicationSetting
Explanation:
Application logging to Azure Blob Storage is configured through app settings. The SAS URL for the blob container (here, the logdrop container) is set in DIAGNOSTICS_AZUREBLOBCONTAINERSASURL, retention is configured via DIAGNOSTICS_AZUREBLOBRETENTIONINDAYS (15 days), and both values are applied with the fluent SDK method WebApps.UpdateApplicationSetting.
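The question targets the fluent .NET management SDK, but the same two settings can be written with the Azure CLI, which may help make the answer boxes concrete. The app name, account name, and SAS token below are placeholders.

```shell
# Equivalent of the answer's app settings, set via CLI
# (the SAS URL must point at the logdrop container; token is a placeholder)
az webapp config appsettings set \
  --name my-webapp \
  --resource-group my-rg \
  --settings \
    DIAGNOSTICS_AZUREBLOBCONTAINERSASURL="https://<account>.blob.core.windows.net/logdrop?<sas-token>" \
    DIAGNOSTICS_AZUREBLOBRETENTIONINDAYS=15
```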
Question 478
You are developing a solution that will use Azure messaging services.
You need to ensure that the solution uses a publish-subscribe model and eliminates the need for constant polling.
What are two possible ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Service Bus
B. Event Hub
C. Event Grid
D. Queue
Show Answer
Correct Answer: A, C
Explanation:
The requirement is a publish-subscribe model that eliminates the need for constant polling. Azure Service Bus (using topics and subscriptions) supports true pub/sub with push-based message delivery. Azure Event Grid is explicitly designed for event-based pub/sub with push notifications to subscribers. Event Hubs, by contrast, is optimized for high-throughput event ingestion, and its consumers typically pull events rather than receive pushes. Azure Queue storage does not support pub/sub at all.
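A minimal sketch of the Service Bus pub/sub topology from answer A, using illustrative namespace, topic, and subscription names (topics require the Standard tier or higher):

```shell
# Namespace must be Standard or Premium for topics (pub/sub)
az servicebus namespace create \
  --name my-sb-ns --resource-group my-rg --sku Standard

# One topic, many independent subscribers: classic pub/sub
az servicebus topic create \
  --namespace-name my-sb-ns --resource-group my-rg --name orders

az servicebus topic subscription create \
  --namespace-name my-sb-ns --resource-group my-rg \
  --topic-name orders --name billing-sub
```

Each subscription receives its own copy of every published message, and SDK clients receive messages over a push-style AMQP connection rather than polling.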
Question 103
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You deploy an Azure Container Apps app and disable ingress on the container app.
Users report that they are unable to access the container app. You investigate and observe that the app has scaled to 0 instances.
You need to resolve the issue with the container app.
Solution: Enable ingress, create a custom scale rule, and apply the rule to the container app.
Does the solution meet the goal?
Show Answer
Correct Answer: B
Explanation:
The issue is that the container app scaled to zero because ingress was disabled and no minimum replicas or appropriate scale trigger existed. While enabling ingress restores an endpoint, simply creating a custom scale rule does not guarantee the app will start unless the rule actively triggers scaling or minReplicas is set. The stated solution does not explicitly ensure a running instance, so it does not reliably resolve the problem.
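A fix that does deterministically keep an instance running combines re-enabling ingress with a minimum replica count, as sketched below with placeholder names and port:

```shell
# Restore an endpoint for users (app name, group, and port are illustrative)
az containerapp ingress enable \
  --name my-app --resource-group my-rg \
  --type external --target-port 8080

# Guarantee at least one running replica instead of relying on a scale rule
az containerapp update \
  --name my-app --resource-group my-rg \
  --min-replicas 1
```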
Question 224
You need to reduce read latency for the retail store solution.
What are two possible ways to achieve the goal? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.
A. Create a new composite index for the store location data queries in Azure Cosmos DB. Modify the queries to support parameterized SQL and update the Azure Function app to call the new queries.
B. Provision an Azure Cosmos DB dedicated gateway. Update the Azure Function app connection string to use the new dedicated gateway endpoint.
C. Configure Azure Cosmos DB consistency to session consistency. Cache session tokens in a new Azure Redis cache instance after every write. Update reads to use the session token stored in Azure Redis.
D. Provision an Azure Cosmos DB dedicated gateway. Update blob storage to use the new dedicated gateway endpoint.
E. Configure Azure Cosmos DB consistency to strong consistency. Increase the RUs for the container supporting store location data.
Show Answer
Correct Answer: A, B
Explanation:
To reduce read latency in Azure Cosmos DB, optimizing query execution and minimizing network latency are key. Creating an appropriate composite index directly improves query performance by allowing Cosmos DB to efficiently serve the specific store location queries with fewer RUs and faster execution. Provisioning a dedicated gateway provides a closer, optimized endpoint with integrated caching, reducing network hops and delivering lower and more predictable read latency for applications such as Azure Functions. Other options either add unnecessary complexity, do not directly reduce read latency, or can actually increase latency.
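As a sketch of answer A, a composite index is declared in the container's indexing policy. The property paths (`/storeId`, `/city`), database, and container names are invented for illustration; the `--idx` flag on `az cosmosdb sql container update` accepts the policy as JSON.

```shell
# Indexing policy with a composite index matching a common ORDER BY /
# multi-property filter pattern (paths are illustrative)
cat > policy.json <<'EOF'
{
  "indexingMode": "consistent",
  "includedPaths": [ { "path": "/*" } ],
  "compositeIndexes": [
    [
      { "path": "/storeId", "order": "ascending" },
      { "path": "/city", "order": "ascending" }
    ]
  ]
}
EOF

az cosmosdb sql container update \
  --account-name my-cosmos --resource-group my-rg \
  --database-name retail --name storeLocations \
  --idx @policy.json
```

Composite indexes pay off when queries filter or sort on multiple properties together, which is the "store location data queries" scenario the answer assumes.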
Question 69
HOTSPOT
-
A company has an Azure Cosmos DB for NoSQL account. The account is configured for session consistency. Data is written to a single Azure region and data can be read from three Azure regions.
An application that will access the Azure Cosmos DB for NoSQL container data using an SDK has the following requirements:
• Reads from the application must return the most recent committed version of an item from any Azure region.
• The container items should not automatically be deleted.
You need to implement the changes to the Azure Cosmos DB for NoSQL account.
What should you do? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Show Answer
Correct Answer: Change the default consistency level on the account.
Change the Time to Live (TTL) property on items to -1.
Explanation:
To guarantee reads always return the most recent committed version from any region, the account must use Strong consistency, which requires changing the default consistency level at the account level (stronger consistency cannot be set via the SDK). To prevent automatic deletion of items, TTL must be set to -1, which disables expiration.
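Both changes in the answer can be sketched at the management plane, using placeholder account, database, and container names. Note that a per-item TTL of -1 is typically set in the item body via the SDK; the container-level equivalent shown here (`--ttl -1`) likewise disables automatic expiration.

```shell
# Raise the account default consistency so reads in every region
# return the most recent committed write
az cosmosdb update \
  --name my-cosmos --resource-group my-rg \
  --default-consistency-level Strong

# Disable automatic item expiration on the container
az cosmosdb sql container update \
  --account-name my-cosmos --resource-group my-rg \
  --database-name appdb --name items \
  --ttl -1
```

The SDK can only request a consistency level weaker than the account default, never stronger, which is why the account-level change is required.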
Question 139
You are developing several Azure API Management (APIM) hosted APIs.
The APIs have the following requirements:
• Require a subscription key to access all APIs.
• Include terms of use that subscribers must accept to use the APIs.
• Administrators must review and accept or reject subscription attempts.
• Limit the count of multiple simultaneous subscriptions.
You need to implement the APIs.
What should you do?
A. Configure and apply header-based versioning.
B. Create and publish a product.
C. Configure and apply query string-based versioning.
D. Add a new revision to all APIs. Make the revisions current and add a change log entry.
Show Answer
Correct Answer: B
Explanation:
In Azure API Management, products are the mechanism that enforce subscriptions and subscription keys, define and require acceptance of terms of use, allow administrator approval of subscription requests, and can limit the number of subscriptions. Versioning or revisions do not address these access control and governance requirements. Therefore, creating and publishing a product meets all stated requirements.
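All four requirements map onto flags of a single product-creation call, sketched below with invented service and product names; verify the exact flag names against the current `az apim product create` reference.

```shell
# One APIM product covers all four requirements:
#   subscription key required, terms of use, admin approval, subscription cap
az apim product create \
  --resource-group my-rg --service-name my-apim \
  --product-id internal-apis --product-name "Internal APIs" \
  --legal-terms "Use of these APIs implies acceptance of the terms of use." \
  --subscription-required true \
  --approval-required true \
  --subscriptions-limit 1 \
  --state published
```

APIs are then added to the product, and every subscriber must hold an approved subscription to that product before their key grants access.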
Question 24
You are developing a microservices-based application that uses Azure Container Apps. The application consists of several containerized services that handle tasks, such as processing orders, managing inventory, and generating reports. You deploy two microservices named serviceA and serviceB to support managing inventory.
You have the following requirements:
• serviceA and serviceB must publish events to a single Azure Event Hub by using the Event Hubs SDK.
• serviceA must publish 1,000 events per second.
• serviceB must publish 3,000 events per second.
• Costs must be minimized.
You need to support the publishing of events.
What should you do?
A. Create four partitions. Update serviceA to use one partition and serviceB to use three partitions.
B. Enable and configure Azure Event Hubs Capture.
C. Create an Azure Event Hubs dedicated cluster. Configure the capacity units to one and the scaling units to two.
D. Create and configure an Azure Schema Registry in Event Hubs. Update serviceA and serviceB to validate message schemas.
E. Create four consumer groups. Update serviceA to use one consumer group and serviceB to use three consumer groups.
Show Answer
Correct Answer: A
Explanation:
Azure Event Hubs ingestion throughput scales with partitions and throughput units. In the Standard tier, each partition supports up to ~1,000 events per second (or 1 MB/s). To handle a combined 4,000 events per second at minimum cost, you should create four partitions. Assigning one partition’s worth of throughput to serviceA (1,000 events/sec) and three partitions’ worth to serviceB (3,000 events/sec) satisfies the throughput requirements without unnecessary features or higher-cost options. Other options do not increase ingress throughput or would significantly increase cost.
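The answer's four-partition hub can be sketched as follows, with placeholder namespace and hub names; the Standard tier avoids the cost of a dedicated cluster.

```shell
az eventhubs namespace create \
  --name my-eh-ns --resource-group my-rg --sku Standard

# Four partitions: roughly 1,000 events/sec each, 4,000 combined
az eventhubs eventhub create \
  --namespace-name my-eh-ns --resource-group my-rg \
  --name inventory-events --partition-count 4
```

In the SDK, serviceA and serviceB would then target their assigned partitions, for example by supplying a partition ID or partition key when creating event batches.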
Question 269
You are a developer at your company.
You need to edit the workflows for an existing Logic App.
What should you use?
A. the Enterprise Integration Pack (EIP)
B. the Logic App Code View
C. the API Connections
D. the Logic Apps Designer
Show Answer
Correct Answer: D
Explanation:
To edit workflows in an existing Azure Logic App, you should use the Logic Apps Designer. The Designer is the primary tool for creating and modifying workflows through a visual interface, allowing you to add, remove, and configure triggers and actions easily. Code View can edit the underlying JSON, but the standard and recommended way to edit workflows themselves is the Logic Apps Designer.
Question 470
You are developing an application that uses Azure Blob storage.
The application must read the transaction logs of all the changes that occur to the blobs and the blob metadata in the storage account for auditing purposes. The changes must be in the order in which they occurred, include only create, update, delete, and copy operations and be retained for compliance reasons.
You need to process the transaction logs asynchronously.
What should you do?
A. Process all Azure Blob storage events by using Azure Event Grid with a subscriber Azure Function app.
B. Enable the change feed on the storage account and process all changes for available events.
C. Process all Azure Storage Analytics logs for successful blob events.
D. Use the Azure Monitor HTTP Data Collector API and scan the request body for successful blob events.
Show Answer
Correct Answer: B
Explanation:
Azure Blob Storage change feed provides an ordered, immutable, durable, read-only transaction log of blob and blob metadata changes, including create, update, delete, and copy operations. It supports asynchronous processing and long-term retention for auditing and compliance, which matches all stated requirements better than event-driven notifications or analytics logs.
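Enabling the change feed is a blob-service-properties change on the account, sketched here with placeholder names; the retention flag shown is an assumption to check against the current CLI reference (omitting it keeps change feed records indefinitely, which suits the compliance requirement).

```shell
# Turn on the ordered, durable change feed for the storage account
az storage account blob-service-properties update \
  --account-name mystorageacct --resource-group my-rg \
  --enable-change-feed true \
  --change-feed-retention-days 365
```

The application then reads the change feed records asynchronously, for example with the change feed processor in the Blob Storage SDK, in guaranteed order.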
Question 313
DRAG DROP -
You develop an Azure solution that uses Cosmos DB.
The current Cosmos DB container must be replicated and must use a partition key that is optimized for queries.
You need to implement a change feed processor solution.
Which change feed processor components should you use? To answer, drag the appropriate components to the correct requirements. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view the content.
NOTE: Each correct selection is worth one point.
Select and Place:
Show Answer
Correct Answer: Monitored container
Lease container
Host
Delegate
Explanation:
The monitored container stores the source data that generates the change feed. The lease container coordinates and tracks progress across multiple workers. The host is the compute instance that runs the change feed processor and listens for changes. The delegate contains the handler logic that processes each batch of changes.
Question 477
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing an Azure solution to collect point-of-sale (POS) device data from 2,000 stores located throughout the world. A single device can produce 2 megabytes (MB) of data every 24 hours. Each store location has one to five devices that send data.
You must store the device data in Azure Blob storage. Device data must be correlated based on a device identifier. Additional stores are expected to open in the future.
You need to implement a solution to receive the device data.
Solution: Provision an Azure Event Grid. Configure event filtering to evaluate the device identifier.
Does the solution meet the goal?
Show Answer
Correct Answer: B
Explanation:
No. Azure Event Grid is designed for lightweight event notifications, not for ingesting and reliably transporting device data streams. POS devices generate message-based data that must not be lost and must be correlated by device ID. Event Grid has payload size limits and lacks features like durable ingestion, ordering, and back-pressure handling. Services such as Azure Event Hubs, IoT Hub, or Service Bus are appropriate for receiving POS device data and routing it to Azure Blob Storage.
Question 9
You have an on-premises, public-facing website named www.contoso.com.
You plan to test availability of www.contoso.com by using Application Insights availability tests.
You need to configure a test that will generate HTTP POST requests that include custom headers. Your solution must minimize development effort.
Which type of test should you configure?
A. Multi-step web test
B. Standard test
C. URL ping test
D. Custom TrackAvailability test
Show Answer
Correct Answer: B
Explanation:
Application Insights Standard availability tests support HTTP POST requests with custom headers and request body, while requiring no custom code. Multi-step web tests are deprecated and require more setup, URL ping tests are limited to simple GETs, and Custom TrackAvailability requires development effort.
Question 94
Case study -
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study -
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. When you are ready to answer a question, click the Question button to return to the question.
Background -
Munson’s Pickles and Preserves Farm is an agricultural cooperative corporation based in Washington, US, with farms located across the United States. The company supports agricultural production resources by distributing seeds, fertilizers, chemicals, fuel, and farm machinery to the farms.
Current Environment -
The company is migrating all applications from an on-premises datacenter to Microsoft Azure. Applications support distributors, farmers, and internal company staff.
Corporate website -
• The company hosts a public website located at http://www.munsonspicklesandpreservesfarm.com. The site supports farmers and distributors who request agricultural production resources.
Farms -
• The company created a new customer tenant in the Microsoft Entra admin center to support authentication and authorization for applications.
Distributors -
• Distributors integrate their applications with data that is accessible by using APIs hosted at http://www.munsonspicklesandpreservesfarm.com/api to receive and update resource data.
Requirements -
The application components must meet the following requirements:
Corporate website -
• The site must be migrated to Azure App Service.
• Costs must be minimized when hosting in Azure.
• Applications must automatically scale independent of the compute resources.
• All code changes must be validated by internal staff before release to production.
• File transfer speeds must improve, and webpage-load performance must increase.
• All site settings must be centrally stored, secured without using secrets, and encrypted at rest and in transit.
• A queue-based load leveling pattern must be implemented by using Azure Service Bus queues to support high volumes of website agricultural production resource requests.
Farms -
• Farmers must authenticate to applications by using Microsoft Entra ID.
Distributors -
• The company must track a custom telemetry value with each API call and monitor performance of all APIs.
• API telemetry values must be charted to evaluate variations and trends for resource data.
Internal staff -
• App and API updates must be validated before release to production.
• Staff must be able to select a link to direct them back to the production app when validating an app or API update.
• Staff profile photos and email must be displayed on the website once they authenticate to applications by using their Microsoft Entra ID.
Security -
• All web communications must be secured by using TLS/HTTPS.
• Web content must be restricted by country/region to support corporate compliance standards.
• The principle of least privilege must be applied when providing any user rights or process access rights.
• Managed identities for Azure resources must be used to authenticate services that support Microsoft Entra ID authentication.
Issues -
Corporate website -
• Farmers report HTTP 503 errors at the same time as internal staff report that CPU and memory usage are high.
• Distributors report HTTP 502 errors at the same time as internal staff report that average response times and networking traffic are high.
• Internal staff report webpage load sizes are large and take a long time to load.
• Developers receive authentication errors to Service Bus when they debug locally.
Distributors -
• Many API telemetry values are sent in a short period of time. Telemetry traffic, data costs, and storage costs must be reduced while preserving a statistically correct analysis of the data points sent by the APIs.
You need to secure the corporate website to meet the security requirements.
What should you do?
A. Create an Azure Cache for Redis instance. Update the code to support the cache.
B. Create an Azure Content Delivery Network profile and endpoint. Configure the endpoint.
C. Create an App Service instance with a standard plan. Configure the custom domain with a TLS/SSL certificate.
D. Create an Azure Application Gateway with a Web Application Firewall (WAF). Configure end-to-end TLS encryption and the WAF.
Show Answer
Correct Answer: D
Explanation:
The primary goal is to secure the corporate website. Azure Application Gateway with Web Application Firewall (WAF) directly addresses the security requirements: it enforces TLS/HTTPS end-to-end, provides protection against common web exploits, supports geo-based access restrictions via WAF rules, and offers centralized traffic inspection consistent with least-privilege principles. Other options focus on performance or hosting but do not comprehensively meet the stated security controls.
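A sketch of the selected solution, with invented resource names; the flags shown are a plausible subset of `az network application-gateway create` and should be checked against the current reference, and the country/region restriction would be added afterwards as a WAF custom rule with a GeoMatch condition.

```shell
# WAF policy holds the managed rule set plus any geo-match custom rules
az network application-gateway waf-policy create \
  --name my-waf-policy --resource-group my-rg

# WAF_v2 gateway in front of the App Service, with the policy attached
az network application-gateway create \
  --name my-appgw --resource-group my-rg \
  --sku WAF_v2 --capacity 2 \
  --vnet-name my-vnet --subnet appgw-subnet \
  --public-ip-address my-appgw-pip \
  --waf-policy my-waf-policy \
  --priority 100
```

End-to-end TLS is then completed by binding a certificate on the gateway listener and configuring HTTPS to the backend, so traffic is encrypted on both hops.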