
Professional Cloud DevOps Engineer — Google Cloud Certified - Professional Cloud DevOps Engineer Study Guide

199 practice questions · Updated 2026-02-19 · $19 (70% off) · HTML + PDF formats

Professional Cloud DevOps Engineer Exam Overview

Prepare for the Google Professional Cloud DevOps Engineer certification exam with our comprehensive study guide. This study material contains 199 practice questions sourced from real exams and expert-verified for accuracy. Each question includes the correct answer and a detailed explanation to help you understand the material thoroughly.

The Professional Cloud DevOps Engineer exam — Google Cloud Certified - Professional Cloud DevOps Engineer — is offered by Google. Our study materials were last updated on 2026-02-19 to reflect the most recent exam objectives and content.

What You Get

199 Practice Questions

Complete question bank covering all exam domains and objectives.

HTML + PDF Formats

Interactive HTML file (recommended) for screen study and a print-ready PDF.

Instant Download

Access your study materials immediately after purchase.

Email with Permanent Download Links

You will receive a confirmation email with permanent download links, so you can re-download your files at any time.

Why Choose CheapestExamDumps?

Lowest Price Available

Only $19 per exam — competitors charge $50-$300 for similar content.

Updated Monthly

Study materials refreshed within 30 days of any exam content changes.

Free Preview

Try 15 real practice questions before you buy — no signup required.

Instant Access

Download HTML + PDF immediately after payment. No waiting, no account needed.

$19 (regular price $63)

One-time payment · HTML + PDF · Instant download · 199 questions

Free Sample — 15 Practice Questions

Preview 15 of 199 questions from the Professional Cloud DevOps Engineer exam. Try before you buy — purchase the full study guide for all 199 questions with answers and explanations.

Question 90

You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?

A. Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
B. Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.
C. Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
D. Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
Correct Answer: D
Explanation:
Cloud Run and Cloud Functions already write logs to Cloud Logging by default. Creating a dedicated logs bucket with a 365‑day retention and routing logs to it via a logging sink requires no application code changes, meets the one‑year retention requirement, and allows controlled access for the client to export/import logs. Other options require code changes or are not the recommended logging architecture.

Question 59

Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do?

A. Modify the application to use Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field.
B. Install a Fluent Bit sidecar container, and use a JSON parser.
C. Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging.
D. Configure the log agent to convert log text payload to JSON payload.
Correct Answer: A
Explanation:
In Cloud Run, logs are automatically collected from stdout/stderr by the platform, and you cannot install or configure a Logging agent. To get JSON-based structured logs, the recommended and supported approach is to have the application emit structured logs directly. Using the Cloud Logging SDK (or supported logging libraries) allows the application to write log entries with a jsonPayload, which Cloud Logging natively understands as structured logs. Options involving log agents or payload conversion (C and D) are not applicable to Cloud Run, and a Fluent Bit sidecar (B) adds unnecessary complexity when you control the application code.
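The recommended approach (option A) can also be illustrated at the wire level: Cloud Logging treats a single-line JSON object written to stdout by a Cloud Run service as a structured entry with a jsonPayload, mapping the special "severity" and "message" keys to entry metadata. A minimal sketch without the SDK, using hypothetical field names (`order_id`, `latency_ms`):

```python
import json
import sys

def log_structured(message, severity="INFO", **fields):
    """Emit a one-line JSON log entry to stdout.

    Cloud Run forwards stdout to Cloud Logging, which parses a
    single-line JSON object into the entry's jsonPayload; "severity"
    and "message" are promoted to entry metadata.
    """
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

# Hypothetical application fields alongside the standard keys:
log_structured("checkout complete", severity="NOTICE",
               order_id="A-1001", latency_ms=87)
```

Using the Cloud Logging client library, as the answer suggests, gives the same structured result with less hand-rolled formatting.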

Question 172

You support a multi-region web service running on Google Kubernetes Engine (GKE) behind a Global HTTP/S Cloud Load Balancer (CLB). For legacy reasons, user requests first go through a third-party Content Delivery Network (CDN), which then routes traffic to the CLB. You have already implemented an availability Service Level Indicator (SLI) at the CLB level. However, you want to increase coverage in case of a potential load balancer misconfiguration, CDN failure, or other global networking catastrophe. Where should you measure this new SLI? (Choose two.)

A. Your application servers' logs.
B. Instrumentation coded directly in the client.
C. Metrics exported from the application servers.
D. GKE health checks for your application servers.
E. A synthetic client that periodically sends simulated user requests.
Correct Answer: B, E
Explanation:
To detect failures beyond the CDN and Cloud Load Balancer (such as CDN outages, global routing issues, or misconfigurations), the SLI must be measured from the user’s perspective. Client-side instrumentation captures real end-user experience before traffic reaches your infrastructure, while synthetic clients actively probe the full request path and can detect global networking or CDN/CLB failures. Server-side logs, metrics, or GKE health checks are too far downstream and miss these classes of failures.
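A synthetic client (option E) can be sketched as a loop that exercises the same public URL real users hit and reports the success ratio. In this hedged sketch, `send_request` stands in for whatever end-to-end check you run against the CDN-fronted endpoint:

```python
import time

def run_synthetic_probe(send_request, num_probes=10, interval_s=0.0):
    """Send simulated user requests through the full path (CDN -> CLB -> app)
    and return the fraction that succeed: a black-box availability SLI.

    `send_request` is any callable performing one end-to-end request and
    returning True on success; pointed at the public URL, it counts CDN
    and load-balancer failures that server-side metrics would miss.
    """
    successes = 0
    for _ in range(num_probes):
        try:
            if send_request():
                successes += 1
        except Exception:
            pass  # a failed or erroring probe counts against availability
        time.sleep(interval_s)
    return successes / num_probes

# Example with a stand-in request function that always succeeds:
availability = run_synthetic_probe(lambda: True, num_probes=5)
```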

Question 31

You are designing a new multi-tenant Google Kubernetes Engine (GKE) cluster for a customer. Your customer is concerned with the risks associated with long-lived credentials use. The customer requires that each GKE workload has the minimum Identity and Access Management (IAM) permissions set following the principle of least privilege (PoLP). You need to design an IAM impersonation solution while following Google-recommended practices. What should you do?

A. 1. Create a Google service account. 2. Create a node pool, and set the Google service account as the default identity. 3. Ensure that workloads can only run on the designated node pool by using node selectors, taints, and tolerations. 4. Repeat for each workload.
B. 1. Create a Google service account. 2. Create a node pool without taints, and set the Google service account as the default identity. 3. Grant IAM permissions to the Google service account.
C. 1. Create a Google service account. 2. Create a Kubernetes service account in a Workload Identity-enabled cluster. 3. Link the Google service account with the Kubernetes service account by using the roles/iam.workloadIdentityUser role and iam.gke.io/gcp-service-account annotation. 4. Map the Kubernetes service account to the workload. 5. Repeat for each workload.
D. 1. Create a Google service account. 2. Create a service account key for the Google service account. 3. Create a Kubernetes secret with a service account key. 4. Ensure that workload mounts the secret and set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point at the mount path. 5. Repeat for each workload.
Correct Answer: C
Explanation:
The requirement is workload-level IAM with least privilege and avoidance of long-lived credentials. Google-recommended practice for this in GKE is Workload Identity. Option C uses a Workload Identity–enabled cluster, maps each Kubernetes service account to a dedicated Google service account via roles/iam.workloadIdentityUser and the iam.gke.io/gcp-service-account annotation, and assigns that KSA to the workload. This provides fine-grained, per-workload IAM impersonation without service account keys. The other options either grant permissions at the node level (A, B), violating least privilege, or rely on long-lived service account keys (D), which is explicitly discouraged.
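Steps 2–4 of option C hinge on a single annotation that links the Kubernetes service account (KSA) to the Google service account (GSA). A sketch using hypothetical names (`app-ksa`, `app-ns`, `app-gsa`, project `my-project`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa
  namespace: app-ns
  annotations:
    # Workload Identity: pods using this KSA impersonate this GSA
    iam.gke.io/gcp-service-account: app-gsa@my-project.iam.gserviceaccount.com
```

The workload's Pod spec then sets `serviceAccountName: app-ksa`, and the GSA grants the KSA `roles/iam.workloadIdentityUser`; no key file ever exists.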

Question 149

Your company is developing applications that are deployed on Google Kubernetes Engine (GKE). Each team manages a different application. You need to create the development and production environments for each team, while minimizing costs. Different teams should not be able to access other teams' environments. What should you do?

A. Create one GCP Project per team. In each project, create a cluster for Development and one for Production. Grant the teams IAM access to their respective clusters.
B. Create one GCP Project per team. In each project, create a cluster with a Kubernetes namespace for Development and one for Production. Grant the teams IAM access to their respective clusters.
C. Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Identity Aware Proxy so that each team can only access its own namespace.
D. Create a Development and a Production GKE cluster in separate projects. In each cluster, create a Kubernetes namespace per team, and then configure Kubernetes Role-based access control (RBAC) so that each team can only access its own namespace.
Correct Answer: D
Explanation:
To minimize costs while isolating teams and environments, you should share GKE clusters where possible and enforce isolation at the Kubernetes level. Creating one Development cluster and one Production cluster (in separate projects) avoids the cost of per-team clusters. Using Kubernetes namespaces per team provides logical separation, and Kubernetes RBAC is the correct mechanism to restrict each team’s access to only its own namespace. IAM or IAP are not designed for fine-grained, in-cluster authorization. This aligns with GKE multi-tenancy best practices.
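The per-namespace restriction in option D can be expressed with a single RoleBinding that scopes the built-in `edit` ClusterRole to one team's namespace. A sketch with hypothetical names (`team-a` namespace, `team-a-devs@example.com` group):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a               # each team gets its own namespace
subjects:
- kind: Group
  name: team-a-devs@example.com   # the team's Google Group (hypothetical)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                      # built-in role, scoped here to one namespace
  apiGroup: rbac.authorization.k8s.io
```

Binding a ClusterRole through a namespaced RoleBinding grants its permissions only within that namespace, which is exactly the isolation the question asks for.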

Question 96

You are building an application that runs on Cloud Run. The application needs to access a third-party API by using an API key. You need to determine a secure way to store and use the API key in your application by following Google-recommended practices. What should you do?

A. Save the API key in Secret Manager as a secret. Reference the secret as an environment variable in the Cloud Run application.
B. Save the API key in Secret Manager as a secret key. Mount the secret key under the /sys/api_key directory, and decrypt the key in the Cloud Run application.
C. Save the API key in Cloud Key Management Service (Cloud KMS) as a key. Reference the key as an environment variable in the Cloud Run application.
D. Encrypt the API key by using Cloud Key Management Service (Cloud KMS), and pass the key to Cloud Run as an environment variable. Decrypt and use the key in Cloud Run.
Correct Answer: A
Explanation:
Google-recommended practice for Cloud Run is to store sensitive values like API keys in Secret Manager and inject them into the service as environment variables. This keeps secrets out of source code and images, provides IAM-based access control and auditing, and is natively supported by Cloud Run. Cloud KMS is for key management and encryption, not for directly storing and injecting application secrets, and mounting or manually decrypting secrets is unnecessary and unsupported in the described ways.
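From the application's point of view, option A reduces to reading an ordinary environment variable; the `--set-secrets` flag on `gcloud run deploy` does the injection at deploy time. A minimal sketch (the secret name `my-api-key` and variable name `API_KEY` are hypothetical):

```python
import os

def get_api_key(env_var="API_KEY"):
    """Read the API key that Cloud Run injects from Secret Manager.

    With `gcloud run deploy ... --set-secrets=API_KEY=my-api-key:latest`
    the secret's value appears to the container as a plain environment
    variable; the application never talks to Secret Manager directly.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; check the --set-secrets mapping")
    return key
```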

Question 116

You are configuring the frontend tier of an application deployed in Google Cloud. The frontend tier is hosted in nginx and deployed using a managed instance group with an Envoy-based external HTTP(S) load balancer in front. The application is deployed entirely within the europe-west2 region, and only serves users based in the United Kingdom. You need to choose the most cost-effective network tier and load balancing configuration. What should you use?

A. Premium Tier with a global load balancer
B. Premium Tier with a regional load balancer
C. Standard Tier with a global load balancer
D. Standard Tier with a regional load balancer
Correct Answer: D
Explanation:
The application serves users only in a single region (europe-west2) and does not require Google’s global Anycast or premium backbone. Standard Tier is the most cost-effective option, and a regional external HTTP(S) load balancer is sufficient for single-region traffic. Using Premium Tier or a global load balancer would add unnecessary cost without benefits in this scenario.

Question 137

You support a popular mobile game application deployed on Google Kubernetes Engine (GKE) across several Google Cloud regions. Each region has multiple Kubernetes clusters. You receive a report that none of the users in a specific region can connect to the application. You want to resolve the incident while following Site Reliability Engineering practices. What should you do first?

A. Reroute the user traffic from the affected region to other regions that don't report issues.
B. Use Stackdriver Monitoring to check for a spike in CPU or memory usage for the affected region.
C. Add an extra node pool that consists of high memory and high CPU machine type instances to the cluster.
D. Use Stackdriver Logging to filter on the clusters in the affected region, and inspect error messages in the logs.
Correct Answer: A
Explanation:
In Site Reliability Engineering, the first priority during an incident is to mitigate user impact and restore service availability before performing root cause analysis. Since an entire region is unavailable but other regions are healthy, rerouting traffic away from the affected region immediately restores connectivity for users. Investigating metrics or logs and adding capacity are secondary steps once the incident impact is contained.

Question 45

As part of your company's initiative to shift left on security, the InfoSec team is asking all teams to implement guard rails on all the Google Kubernetes Engine (GKE) clusters to only allow the deployment of trusted and approved images. You need to determine how to satisfy the InfoSec team's goal of shifting left on security. What should you do?

A. Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images
B. Use Binary Authorization to attest images during your CI/CD pipeline
C. Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters.
D. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods
Correct Answer: B
Explanation:
The requirement is to enforce guardrails that only trusted and approved container images can be deployed to GKE, as part of a shift-left security approach. Binary Authorization is designed for this purpose: it enforces admission-time policies on GKE that allow only images with valid attestations, typically generated during the CI/CD pipeline, to run. This prevents unapproved images from ever being deployed. The other options either focus on vulnerability scanning (A), access control (C), or runtime monitoring (D), none of which enforce image trust at deployment time.

Question 176

You are part of an organization that follows SRE practices and principles. You are taking over the management of a new service from the Development Team, and you conduct a Production Readiness Review (PRR). After the PRR analysis phase, you determine that the service cannot currently meet its Service Level Objectives (SLOs). You want to ensure that the service can meet its SLOs in production. What should you do next?

A. Adjust the SLO targets to be achievable by the service so you can bring it into production.
B. Notify the development team that they will have to provide production support for the service.
C. Identify recommended reliability improvements to the service to be completed before handover.
D. Bring the service into production with no SLOs and build them when you have collected operational data.
Correct Answer: C
Explanation:
In SRE practice, if a service cannot meet its defined SLOs during a Production Readiness Review, the correct next step is to identify and require specific reliability improvements before handover. Adjusting SLOs to fit poor reliability, shipping without SLOs, or pushing ops back to development all undermine SRE principles. The PRR explicitly exists to surface gaps and drive reliability work so the service can sustainably meet its agreed SLOs in production.

Question 64

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

A.–D. (Each option is a table assigning infrastructure, code, SLO, and incident responsibilities between the two teams; the tables are not reproduced in this preview.)
Correct Answer: C
Explanation:
In a joint-ownership (DevOps) model, reliability and service health are shared, while core specialties remain distinct. The DevOps team should continue managing service infrastructure and operational tooling. The Software Development team should remain responsible for writing and submitting code. Both teams should jointly define, adopt, and publish SLOs and share accountability during incidents, ensuring collaboration rather than siloed ownership. Option C best reflects this balanced shared-responsibility model.

Question 66

Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?

A. Store public and private charts in OCI format by using Artifact Registry.
B. Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider.
C. Store public and private charts by using Git repository. Configure Cloud Build to synchronize contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage-googleapis.com/[helmchart] as the Helm repository.
D. Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with Cloud Storage bucket as the storage backend.
Correct Answer: A
Explanation:
Artifact Registry natively supports Helm charts in OCI format and integrates with Google Cloud IAM and VPC Service Controls. By mirroring both public and private charts into Artifact Registry, you eliminate reliance on external public repositories while managing all charts uniformly with centralized access control, auditability, and network security. The other options either lack native Helm support with VPC Service Controls or add unnecessary operational complexity.

Question 150

You support a stateless web-based API that is deployed on a single Compute Engine instance in the europe-west2-a zone. The Service Level Indicator (SLI) for service availability is below the specified Service Level Objective (SLO). A postmortem has revealed that requests to the API regularly time out because the API receives a high number of requests and runs out of memory. You want to improve service availability. What should you do?

A. Change the specified SLO to match the measured SLI
B. Move the service to higher-specification compute instances with more memory
C. Set up additional service instances in other zones and load balance the traffic between all instances
D. Set up additional service instances in other zones and use them as a failover in case the primary instance is unavailable
Correct Answer: C
Explanation:
The availability issue is caused by load-related timeouts and memory exhaustion on a single instance. Horizontally scaling the stateless API across multiple instances and distributing traffic with a load balancer directly addresses the root cause by reducing per-instance load and improving availability. Changing the SLO does not fix the problem, vertical scaling alone does not address availability or traffic spikes, and passive failover does not help with ongoing high load.

Question 77

You need to create a Cloud Monitoring SLO for a service that will be published soon. You want to verify that requests to the service will be addressed in fewer than 300 ms at least 90% of the time per calendar month. You need to identify the metric and evaluation method to use. What should you do?

A. Select a latency metric for a request-based method of evaluation.
B. Select a latency metric for a window-based method of evaluation.
C. Select an availability metric for a request-based method of evaluation.
D. Select an availability metric for a window-based method of evaluation.
Correct Answer: A
Explanation:
The objective specifies a latency target (requests handled in under 300 ms) and a proportion of requests meeting that target (90%) over a calendar month. Latency rules out availability metrics, and measuring the percentage of requests that meet a latency threshold is exactly what a request-based SLO evaluation does. Window-based evaluation measures the percentage of time windows that are good, not the percentage of requests.
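The request-based arithmetic is simple enough to sketch: divide the count of requests under the latency threshold by the total request count for the period.

```python
def request_based_latency_sli(latencies_ms, threshold_ms=300):
    """Fraction of requests faster than the threshold (a request-based SLI).

    An SLO of "90% of requests under 300 ms per calendar month" is met when
    this ratio, computed over the month's requests, is >= 0.90. A
    window-based method would instead count good *time windows*, not requests.
    """
    good = sum(1 for latency in latencies_ms if latency < threshold_ms)
    return good / len(latencies_ms)

# 9 of 10 sample requests are under 300 ms -> SLI = 0.9, exactly on target
sample = [120, 95, 210, 280, 150, 299, 30, 180, 250, 450]
assert request_based_latency_sli(sample) == 0.9
```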

Question 196

You use a multi-step Cloud Build pipeline to build and deploy your application to Google Kubernetes Engine (GKE). You want to integrate with a third-party monitoring platform by performing an HTTP POST of the build information to a webhook. You want to minimize the development effort. What should you do?

A. Add logic to each Cloud Build step to HTTP POST the build information to a webhook.
B. Add a new step at the end of the pipeline in Cloud Build to HTTP POST the build information to a webhook.
C. Use Stackdriver Logging to create a logs-based metric from the Cloud Build logs. Create an Alert with a Webhook notification type.
D. Create a Cloud Pub/Sub push subscription to the Cloud Build cloud-builds PubSub topic to HTTP POST the build information to a webhook.
Correct Answer: D
Explanation:
Cloud Build natively publishes build events to a Cloud Pub/Sub topic. Creating a Pub/Sub push subscription that forwards these events to a webhook allows you to send build information via HTTP POST without modifying the build pipeline itself. This minimizes development effort compared to adding custom logic or steps in Cloud Build, and is the recommended integration pattern for external systems.
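A push subscription delivers each build event as a JSON envelope whose `message.data` field is the base64-encoded build resource, with attributes such as `buildId` and `status`. A sketch of the receiving webhook's decoding step (the actual forwarding to the monitoring platform is omitted):

```python
import base64
import json

def handle_push(envelope):
    """Decode a Pub/Sub push delivery from the Cloud Build cloud-builds topic.

    A push subscription POSTs a JSON envelope; `message.data` holds the
    base64-encoded build resource and `message.attributes` carries
    `buildId` and `status`. This extracts the build info that would then
    be POSTed onward to the third-party webhook.
    """
    message = envelope["message"]
    build = json.loads(base64.b64decode(message["data"]).decode("utf-8"))
    return {
        "build_id": message.get("attributes", {}).get("buildId"),
        "status": build.get("status"),
        "images": build.get("images", []),
    }
```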

$19 (regular price $63)

Get all 199 questions with detailed answers and explanations

Professional Cloud DevOps Engineer — Frequently Asked Questions

What is the Google Professional Cloud DevOps Engineer exam?

The Google Professional Cloud DevOps Engineer exam — Google Cloud Certified - Professional Cloud DevOps Engineer — is a professional IT certification exam offered by Google.

How many practice questions are included?

This study guide contains 199 practice questions, each with an expert-verified correct answer and a detailed explanation. Questions cover all exam domains and objectives.

Is there a free sample available?

Yes! We provide a free sample of 15 practice questions from the Professional Cloud DevOps Engineer exam right on this page. Scroll up to preview them and evaluate the quality of our materials before purchasing.

When was this Professional Cloud DevOps Engineer study guide last updated?

This study guide was last updated on 2026-02-19. We regularly refresh our materials to reflect the latest exam content and objectives so you're always studying current material.

What file formats do I receive?

After purchase you receive two files: an interactive HTML file with show/hide answer toggles (ideal for studying on screen) and a PDF file (ideal for printing or offline study). Both work on any device — desktop, tablet, or phone.

How much does the Professional Cloud DevOps Engineer study guide cost?

The Google Professional Cloud DevOps Engineer study guide costs $19 (discounted from $63). This is a one-time payment with no subscriptions or hidden fees.

How do I get my files after payment?

After successful payment via Stripe, you are immediately redirected to a download page with links to your HTML and PDF files. We also send the download links to your email address as a backup, so you'll always have access.

Why choose CheapestExamDumps over other providers?

CheapestExamDumps offers the lowest price at $19 per exam — competitors charge $50-$300 for similar content. All study materials are expert-verified, updated monthly, and include a free 15-question preview with no signup required. You get instant access to both HTML and PDF formats after payment.