
Professional Cloud Developer — Google Cloud Certified - Professional Cloud Developer Study Guide

357 practice questions · Updated 2026-02-19 · $19 (70% off) · HTML + PDF formats

Professional Cloud Developer Exam Overview

Prepare for the Google Professional Cloud Developer certification exam with our comprehensive study guide. This study material contains 357 practice questions sourced from real exams and expert-verified for accuracy. Each question includes the correct answer and a detailed explanation to help you understand the material thoroughly.

The Professional Cloud Developer exam — Google Cloud Certified - Professional Cloud Developer — is offered by Google. Our study materials were last updated on 2026-02-19 to reflect the most recent exam objectives and content.

What You Get

357 Practice Questions

Complete question bank covering all exam domains and objectives.

HTML + PDF Formats

Interactive HTML file (recommended) for screen study and a print-ready PDF.

Instant Download

Access your study materials immediately after purchase.

Email with Permanent Download Links

You will receive a confirmation email with permanent download links, so you can re-download the files at any time.

Why Choose CheapestExamDumps?

Lowest Price Available

Only $19 per exam — competitors charge $50-$300 for similar content.

Updated Monthly

Study materials refreshed within 30 days of any exam content changes.

Free Preview

Try 15 real practice questions before you buy — no signup required.

Instant Access

Download HTML + PDF immediately after payment. No waiting, no account needed.

$19 (was $63)

One-time payment · HTML + PDF · Instant download · 357 questions

Free Sample — 15 Practice Questions

Preview 15 of 357 questions from the Professional Cloud Developer exam. Try before you buy — purchase the full study guide for all 357 questions with answers and explanations.

Question 291

Your API backend is running on multiple cloud providers. You want to generate reports for the network latency of your API. Which two steps should you take? (Choose two.)

A. Use Zipkin collector to gather data.
B. Use Fluentd agent to gather data.
C. Use Stackdriver Trace to generate reports.
D. Use Stackdriver Debugger to generate reports.
E. Use Stackdriver Profiler to generate reports.
Correct Answer: A, C
Explanation:
To generate network latency reports for an API running across multiple cloud providers, you need distributed tracing. A Zipkin collector can gather trace and latency data from services regardless of where they run. Stackdriver Trace can then ingest this trace data and generate latency analysis and reports. Fluentd focuses on logs, while Debugger and Profiler are for code inspection and CPU/memory profiling, not network latency reporting.

Question 268

You have written a Cloud Function that accesses other Google Cloud resources. You want to secure the environment using the principle of least privilege. What should you do?

A. Create a new service account that has Editor authority to access the resources. The deployer is given permission to get the access token.
B. Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to get the access token.
C. Create a new service account that has Editor authority to access the resources. The deployer is given permission to act as the new service account.
D. Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to act as the new service account.
Correct Answer: D
Explanation:
To apply the principle of least privilege, the Cloud Function should run as a dedicated service account that has only the minimum permissions required, which is best achieved with a custom IAM role. When deploying a Cloud Function with a user-managed service account, the deployer must have the iam.serviceAccounts.actAs permission on that service account so the function can run as it. Granting broad roles like Editor or using access token permissions is unnecessary and violates least privilege.

Question 196

You have an application written in Python running in production on Cloud Run. Your application needs to read/write data stored in a Cloud Storage bucket in the same project. You want to grant access to your application following the principle of least privilege. What should you do?

A. Create a user-managed service account with a custom Identity and Access Management (IAM) role.
B. Create a user-managed service account with the Storage Admin Identity and Access Management (IAM) role.
C. Create a user-managed service account with the Project Editor Identity and Access Management (IAM) role.
D. Use the default service account linked to the Cloud Run revision in production.
Correct Answer: A
Explanation:
To follow the principle of least privilege, the Cloud Run service should use a dedicated user-managed service account that has only the exact permissions required to read and write objects in the specific Cloud Storage bucket. Creating a custom IAM role (or alternatively assigning the minimal predefined storage roles at the bucket level) avoids granting broad permissions like Storage Admin, Project Editor, or the default service account, all of which provide excessive access beyond what the application needs.

Question 188

You are running a web application on Google Kubernetes Engine that you inherited. You want to determine whether the application is using libraries with known vulnerabilities or is vulnerable to XSS attacks. Which service should you use?

A. Google Cloud Armor
B. Debugger
C. Web Security Scanner
D. Error Reporting
Correct Answer: C
Explanation:
Web Security Scanner actively scans web applications running on GCP (including GKE services) to detect common vulnerabilities such as XSS, use of vulnerable libraries, and other OWASP Top 10 issues. The other options do not perform vulnerability scanning.

Question 121

Your application stores customers’ content in a Cloud Storage bucket, with each object being encrypted with the customer's encryption key. The key for each object in Cloud Storage is entered into your application by the customer. You discover that your application is receiving an HTTP 4xx error when reading the object from Cloud Storage. What is a possible cause of this error?

A. You attempted the read operation on the object with the customer's base64-encoded key.
B. You attempted the read operation without the base64-encoded SHA256 hash of the encryption key.
C. You entered the same encryption algorithm specified by the customer when attempting the read operation.
D. You attempted the read operation on the object with the base64-encoded SHA256 hash of the customer's key.
Correct Answer: B
Explanation:
When using customer-supplied encryption keys (CSEK) with Cloud Storage, read requests must include both the base64-encoded raw encryption key and the base64-encoded SHA256 hash of that key. If the SHA256 hash is missing, invalid, or not provided, Cloud Storage cannot verify the key and returns an HTTP 4xx error (typically 400). Therefore, attempting to read the object without supplying the base64-encoded SHA256 hash of the encryption key is a valid cause of the error.
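As a concrete illustration, the headers a CSEK request must carry can be built with nothing but the standard library. The three header names below are the documented Cloud Storage CSEK headers; the key itself is a randomly generated stand-in:

```python
import base64
import hashlib
import os

def csek_headers(raw_key: bytes) -> dict:
    """Build the three headers a CSEK read/write request must carry."""
    if len(raw_key) != 32:  # AES-256 keys are exactly 32 bytes
        raise ValueError("CSEK keys must be 32 bytes")
    return {
        "x-goog-encryption-algorithm": "AES256",
        "x-goog-encryption-key": base64.b64encode(raw_key).decode(),
        # Omitting this hash is what triggers the HTTP 4xx in the question.
        "x-goog-encryption-key-sha256": base64.b64encode(
            hashlib.sha256(raw_key).digest()
        ).decode(),
    }

headers = csek_headers(os.urandom(32))
```

Note that Cloud Storage stores only the hash, never the key, which is why every subsequent read must present both values again.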

Question 54

You are developing an external-facing application on GKE that provides a streaming API to users. You want to offer two subscription tiers, "basic" and "premium", to users based on the number of API requests that each client application is allowed to make each day. You want to design the application architecture to provide subscription tiers to users while following Google-recommended practices. What should you do?

A. 1. Configure the service on GKE as a backend to an Apigee proxy. 2. Provide API keys to users to identify client applications. 3. Configure a Quota policy in Apigee for API keys based on the subscription tier.
B. 1. Configure the service on GKE as a backend to an Apigee proxy. 2. Provide API keys to users to identify client applications. 3. Configure a SpikeArrest policy in Apigee for API keys based on the subscription tier.
C. 1. Configure the service on GKE as a backend to two new projects, each with a separate Application Load Balancer. 2. Configure the quota "Queries per second (QPS) per region per network" for each project individually. 3. Provide users with API endpoints based on the subscription tier.
D. 1. Deploy the application to two GKE clusters, one for each subscription tier. Configure each cluster to have a separate Ingress. 2. Configure each cluster as a backend to an Apigee proxy. 3. Provide API keys to users to identify client applications. 4. Configure separate rate limits for client applications based on the subscription tier.
Correct Answer: A
Explanation:
Apigee is Google-recommended for external API management. Using Apigee in front of GKE lets you identify client applications with API keys and enforce per-client, per-day request limits via Quota policies, which directly map to basic and premium subscription tiers. SpikeArrest is for short-term traffic bursts, not daily quotas, and the other options add unnecessary infrastructure complexity and do not follow best practices for API subscription management.
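The per-tier daily limit that an Apigee Quota policy enforces boils down to counting requests per API key per day. Here is a minimal Python sketch of that behavior, assuming invented tier names, limits, and keys (this is the concept, not Apigee configuration):

```python
from collections import defaultdict

TIER_DAILY_QUOTA = {"basic": 1_000, "premium": 50_000}   # illustrative limits
API_KEY_TIER = {"key-abc": "basic", "key-xyz": "premium"}  # issued per client app

usage = defaultdict(int)  # requests counted per (api_key, day)

def allow_request(api_key: str, day: str) -> bool:
    """Daily quota check, the behavior an Apigee Quota policy enforces."""
    tier = API_KEY_TIER.get(api_key)
    if tier is None:
        return False  # unknown key: reject
    if usage[(api_key, day)] >= TIER_DAILY_QUOTA[tier]:
        return False  # quota exhausted for today
    usage[(api_key, day)] += 1
    return True

# A basic-tier client bursts past its 1,000-per-day allowance.
allowed = sum(allow_request("key-abc", "2026-02-19") for _ in range(1_500))
```

A SpikeArrest policy, by contrast, would smooth per-second bursts but never enforce a calendar-day allowance like this.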

Question 350

You are building a new API. You want to minimize the cost of storing and reduce the latency of serving images. Which architecture should you use?

A. App Engine backed by Cloud Storage
B. Compute Engine backed by Persistent Disk
C. Transfer Appliance backed by Cloud Filestore
D. Cloud Content Delivery Network (CDN) backed by Cloud Storage
Correct Answer: D
Explanation:
The goal is to minimize storage cost and reduce image-serving latency. Cloud Storage provides low-cost, highly durable object storage ideal for images, and Cloud CDN caches those images at Google’s global edge locations, dramatically reducing latency for end users. The other options either rely on more expensive compute or disk resources, or are not designed for serving content at scale. A CDN can serve static image content directly from Cloud Storage without an application layer in between.

Question 109

You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the application and identify which HTTP requests have a significantly high latency span in user requests. What should you do?

A. Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster.
B. Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths.
C. Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests.
D. Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the data files using Wireshark to determine the cause of high latency.
Correct Answer: C
Explanation:
To identify high-latency HTTP requests in a distributed microservices application, you need end-to-end request visibility. Distributed tracing with OpenTelemetry and Cloud Trace is purpose-built for this use case, allowing you to inspect individual request spans, see latency breakdowns across services, and pinpoint bottlenecks. Metrics alone lack per-request detail, logs are inefficient for latency analysis at scale, and packet captures are impractical and low-level for application performance troubleshooting.
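To make the idea of latency spans concrete, here is a minimal stdlib sketch that times nested spans the way a tracer does. A real deployment would use the OpenTelemetry SDK with the Cloud Trace exporter; the request path and service names below are invented:

```python
import time
from contextlib import contextmanager

SPANS = []  # collected (name, duration_ms) pairs, a stand-in for a trace exporter

@contextmanager
def span(name: str):
    """Record how long the wrapped block takes, like a tracing span."""
    start = time.perf_counter()
    try:
        yield
    finally:
        SPANS.append((name, (time.perf_counter() - start) * 1000))

# Simulate one user request fanning out to two microservices.
with span("GET /lobby"):
    with span("matchmaking-service"):
        time.sleep(0.02)   # the slow dependency we want Trace to expose
    with span("profile-service"):
        time.sleep(0.001)

# The child span with the largest share of the request's latency.
slowest_child = max((s for s in SPANS if s[0].endswith("-service")),
                    key=lambda s: s[1])
```

Cloud Trace performs exactly this kind of breakdown automatically once spans are exported, which is why option C pinpoints the bottleneck where metrics and logs cannot.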

Question 50

You are developing a secure document sharing platform. The platform allows users to share documents with other users who may be external to their organization. Access to these documents should be revoked after a configurable time period. The documents are stored in Cloud Storage. How should you configure Cloud Storage to support this functionality?

A. Create signed policy documents on the Cloud Storage bucket.
B. Apply access control list (ACL) permissions to the Cloud Storage bucket.
C. Generate a signed URL for each document the user wants to share.
D. Grant the Storage Object Viewer IAM role to all authenticated users.
Correct Answer: C
Explanation:
Signed URLs provide time-limited, revocable access to specific Cloud Storage objects without requiring the recipient to have a Google account or IAM permissions. This directly supports secure sharing with external users and automatic access expiration. ACLs and IAM are not time-bound per object, and signed policy documents are for uploads, not controlled downloads.
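With the google-cloud-storage client this is essentially one call to `blob.generate_signed_url` with `version="v4"` and an `expiration`. To show why expiring access needs no cleanup job, here is a stdlib sketch of the general signed-URL mechanism, an expiry timestamp plus an HMAC over the path. This illustrates the concept only; it is not Google's actual V4 signing algorithm:

```python
import hmac
import hashlib

SECRET = b"server-side-signing-key"  # illustrative; GCS signs with a service account key

def sign_url(path: str, ttl_seconds: int, now: float) -> str:
    """Append an expiry and an HMAC so the link is self-expiring."""
    expires = int(now + ttl_seconds)
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_url(url: str, now: float) -> bool:
    path_expires, _, sig = url.rpartition("&sig=")
    expected = hmac.new(SECRET, path_expires.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # tampered path or expiry
    expires = int(path_expires.rpartition("expires=")[2])
    return now <= expires  # expired links simply stop verifying

url = sign_url("/docs/contract.pdf", ttl_seconds=3600, now=1_700_000_000)
```

Because the expiry is baked into the signature, the recipient needs no account and the server revokes nothing; the link just stops working.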

Question 287

Case study

This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study. At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section. To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.

Company Overview

HipLocal is a community application designed to facilitate communication between people in close proximity. It is used for event planning and organizing sporting events, and for businesses to connect with their local communities. HipLocal launched recently in a few neighborhoods in Dallas and is rapidly growing into a global phenomenon. Its unique style of hyper-local community communication and business outreach is in demand around the world.

Executive Statement

We are the number one local community app; it's time to take our local community services global. Our venture capital investors want to see rapid growth and the same great experience for new local and virtual communities that come online, whether their members are 10 or 10000 miles away from each other.

Solution Concept

HipLocal wants to expand their existing service, with updated functionality, in new regions to better serve their global customers. They want to hire and train a new team to support these regions in their time zones. They will need to ensure that the application scales smoothly and provides clear uptime data.

Existing Technical Environment

HipLocal's environment is a mix of on-premises hardware and infrastructure running in Google Cloud Platform. The HipLocal team understands their application well, but has limited experience in global scale applications. Their existing technical environment is as follows:

* Existing APIs run on Compute Engine virtual machine instances hosted in GCP.
* State is stored in a single instance MySQL database in GCP.
* Data is exported to an on-premises Teradata/Vertica data warehouse.
* Data analytics is performed in an on-premises Hadoop environment.
* The application has no logging.
* There are basic indicators of uptime; alerts are frequently fired when the APIs are unresponsive.

Business Requirements

HipLocal's investors want to expand their footprint and support the increase in demand they are seeing. Their requirements are:

* Expand availability of the application to new regions.
* Increase the number of concurrent users that can be supported.
* Ensure a consistent experience for users when they travel to different regions.
* Obtain user activity metrics to better understand how to monetize their product.
* Ensure compliance with regulations in the new regions (for example, GDPR).
* Reduce infrastructure management time and cost.
* Adopt the Google-recommended practices for cloud computing.

Technical Requirements

* The application and backend must provide usage metrics and monitoring.
* APIs require strong authentication and authorization.
* Logging must be increased, and data should be stored in a cloud analytics platform.
* Move to serverless architecture to facilitate elastic scaling.
* Provide authorized access to internal apps in a secure manner.

Which database should HipLocal use for storing user activity?

A. BigQuery
B. Cloud SQL
C. Cloud Spanner
D. Cloud Datastore
Correct Answer: A
Explanation:
The requirement is to store and analyze user activity metrics at global scale to understand usage and monetization. This is analytical, append-heavy data rather than transactional user state. BigQuery is a fully managed, serverless analytics data warehouse designed for large volumes of event and activity data, provides powerful querying and aggregation, integrates with logging and monitoring, supports compliance needs, and minimizes infrastructure management. The other options (Cloud SQL, Cloud Spanner, Datastore) are optimized for transactional or operational workloads, not large-scale analytics.

Question 176

You are deploying a microservices application to Google Kubernetes Engine (GKE) that will broadcast livestreams. You expect unpredictable traffic patterns and large variations in the number of concurrent users. Your application must meet the following requirements:

• Scales automatically during popular events and maintains high availability
• Is resilient in the event of hardware failures

How should you configure the deployment parameters? (Choose two.)

A. Distribute your workload evenly using a multi-zonal node pool.
B. Distribute your workload evenly using multiple zonal node pools.
C. Use cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload.
D. Create a managed instance group for Compute Engine with the cluster nodes. Configure autoscaling rules for the managed instance group.
E. Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization. Ask an on-duty engineer to scale the workload by executing a script when CPU and memory usage exceed predefined thresholds.
Correct Answer: A, C
Explanation:
To handle unpredictable livestream traffic and ensure high availability, you should distribute workloads across multiple zones and enable automatic scaling. Using a multi-zonal node pool spreads nodes evenly across zones, providing resilience against zonal or hardware failures. Combining the Cluster Autoscaler (to scale nodes) with the Horizontal Pod Autoscaler (to scale pods based on load) allows the application to automatically scale during popular events while maintaining availability.
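The Horizontal Pod Autoscaler's core decision is a documented formula: desired replicas = ceil(current replicas × current metric / target metric), clamped to the configured min/max. A small sketch of that rule (replica bounds and CPU numbers are illustrative):

```python
import math

def desired_replicas(current_replicas: int, current_value: float,
                     target_value: float, min_r: int = 1, max_r: int = 50) -> int:
    """The HPA scaling rule from the Kubernetes docs:
    desired = ceil(current * currentMetric / targetMetric), clamped to bounds."""
    desired = math.ceil(current_replicas * current_value / target_value)
    return max(min_r, min(max_r, desired))

# A livestream spike: average CPU per pod jumps to 180% against a 60% target.
scaled = desired_replicas(current_replicas=4, current_value=180, target_value=60)
```

When the HPA adds pods that no node can fit, the cluster autoscaler then adds nodes, which is why the two mechanisms are used together in option C.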

Question 203

Users are complaining that your Cloud Run-hosted website responds too slowly during traffic spikes. You want to provide a better user experience during traffic peaks. What should you do?

A. Read application configuration and static data from the database on application startup.
B. Package application configuration and static data into the application image during build time.
C. Perform as much work as possible in the background after the response has been returned to the user.
D. Ensure that timeout exceptions and errors cause the Cloud Run instance to exit quickly so a replacement instance can be started.
Correct Answer: B
Explanation:
Cloud Run scales by starting new instances during traffic spikes, so cold start performance matters. Packaging application configuration and static data into the container image avoids extra network calls (for example to databases or storage) during startup and request handling, reducing latency when new instances spin up. The other options either add load during spikes, do not directly reduce request latency, or address failure handling rather than performance.
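The pattern behind the correct answer can be sketched in a few lines: load configuration once when the instance starts, not on every request. The file name and config keys below are invented for illustration:

```python
import json
import pathlib
import tempfile

# Simulate a config file baked into the container image at build time.
cfg_path = pathlib.Path(tempfile.mkdtemp()) / "config.json"
cfg_path.write_text(json.dumps({"max_image_px": 4096}))

LOAD_COUNT = 0  # counts how many times we actually hit "disk"

def load_config(path: pathlib.Path) -> dict:
    global LOAD_COUNT
    LOAD_COUNT += 1
    return json.loads(path.read_text())

# Done once, at module import time (i.e. during instance startup) ...
CONFIG = load_config(cfg_path)

def handle_request() -> int:
    # ... so each request reads memory instead of disk, a database, or a bucket.
    return CONFIG["max_image_px"]

results = [handle_request() for _ in range(100)]
```

During a spike every new Cloud Run instance pays the load cost exactly once, instead of every request paying it, which is what users experience as slowness.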

Question 336

You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service will exist within the same GKE cluster. You want clients to be able to get the IP address of the service. What should you do?

A. Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster IP address.
B. Define a GKE Service. Clients should use the service name in the URL to connect to the service.
C. Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container.
D. Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.
Correct Answer: B
Explanation:
Within the same GKE cluster, the correct and Kubernetes-native way to expose and discover a service is to define a Kubernetes Service. Kubernetes automatically creates an internal DNS entry that resolves the service name to its cluster IP, so clients do not need to retrieve or manage the IP address directly; they simply use the service name in the URL and DNS resolves it. There is no "GKE Endpoint" resource for clients to define: Kubernetes Endpoints objects are created and managed automatically by the Service, and are not a discovery mechanism that clients consume by name.
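For reference, the DNS name Kubernetes gives every Service follows a fixed convention, so a client only ever needs the service name. A tiny sketch, assuming a hypothetical `image-resizer` Service in the `default` namespace and the default `cluster.local` cluster domain:

```python
def service_url(service: str, namespace: str = "default", port: int = 80) -> str:
    """Fully qualified in-cluster URL for a Kubernetes Service.
    From within the same namespace, the bare service name (http://<service>)
    resolves as well; the long form works from any namespace."""
    return f"http://{service}.{namespace}.svc.cluster.local:{port}"

url = service_url("image-resizer", namespace="default", port=8080)
```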

Question 64

You are developing a container build pipeline for an application hosted on GKE. You have the following requirements:

• Only images that are created using your build pipeline should be deployed on your GKE cluster.
• All code and build artifacts should remain within your environment and protected from data exfiltration.

How should you build the pipeline?

A. 1. Create a build pipeline by using Cloud Build with the default worker pool. 2. Deploy container images to a private container registry in your VPC. 3. Create a VPC firewall policy in your project that denies all egress and ingress traffic to public networks.
B. 1. Create a build pipeline by using Cloud Build with a private worker pool. 2. Use VPC Service Controls to place all components and services in your CI/CD pipeline inside a security perimeter. 3. Configure your GKE cluster to only allow container images signed by Binary Authorization.
C. 1. Create a build pipeline by using Cloud Build with a private worker pool. 2. Configure the CI/CD pipeline to build container images and store them in Artifact Registry. 3. Configure Artifact Registry to encrypt container images by using customer-managed encryption keys (CMEK).
D. 1. Create a build pipeline by using Cloud Build with the default worker pool. 2. Configure the CI/CD pipeline to build container images and store them in Artifact Registry. 3. Configure your GKE cluster to only allow container images signed by Binary Authorization.
Correct Answer: B
Explanation:
The requirements are to ensure only pipeline-built images can run on GKE and to prevent data exfiltration. Using Cloud Build with a private worker pool keeps builds inside your VPC. VPC Service Controls place CI/CD services within a security perimeter to mitigate data exfiltration. Binary Authorization enforces that only trusted, pipeline-signed images are deployable to GKE. Other options either lack strong exfiltration protection or do not enforce image provenance on the cluster.
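Conceptually, Binary Authorization acts as an admission check: a pod is admitted only if its image is referenced by immutable digest and that digest carries an attestation from the build pipeline. A simplified sketch of that decision (registry name and digests are invented):

```python
# Digests the CI/CD pipeline attested after a successful build (illustrative).
ATTESTED = {"sha256:aaa111", "sha256:bbb222"}

def admission_decision(image_ref: str) -> bool:
    """Admit only images pinned by digest AND attested by the pipeline."""
    if "@sha256:" not in image_ref:
        return False  # tags are mutable, so require digest pinning
    digest = "sha256:" + image_ref.split("@sha256:")[1]
    return digest in ATTESTED

ok = admission_decision("registry.internal/app@sha256:aaa111")
```

The real enforcement is done by the Binary Authorization admission controller against signed attestations, not an in-cluster set like this, but the allow/deny logic it applies is the same shape.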

Question 83

You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which function is consuming the most CPU and memory resources. What should you do?

A. Add print commands to the application source code to log when each function is called, and redeploy the application.
B. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
C. Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify which functions cause the most latency.
D. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions.
Correct Answer: D
Explanation:
To identify which functions consume the most CPU and memory in a Go application running on GKE, you need continuous, low-overhead profiling. Google Cloud Profiler is designed for this purpose: after importing and initializing the Profiler agent, it collects CPU and memory profiles from production workloads and presents them as flame graphs in the Cloud Console. These flame graphs clearly show which functions are responsible for the highest CPU time and memory usage. The other options rely on logging or tracing, which are not suitable for detailed CPU and memory attribution.
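In Go, this means importing `cloud.google.com/go/profiler` and calling `profiler.Start` near the top of `main`. As a runnable local analogue of what the flame graph shows, CPU time attributed to individual functions, here is the same idea with Python's stdlib `cProfile` (function names are invented):

```python
import cProfile
import io
import pstats

def cheap_helper():
    return sum(range(100))

def hot_function():
    # Deliberately heavy so it dominates the profile, like a flame graph's widest frame.
    return sum(i * i for i in range(200_000))

def handle_request():
    cheap_helper()
    return hot_function()

profiler = cProfile.Profile()
profiler.enable()
for _ in range(20):
    handle_request()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()  # per-function call counts and cumulative CPU time
```

Unlike this local run, Cloud Profiler samples continuously in production with low overhead, which is why it is the right tool for a live GKE workload.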

$19 (was $63)

Get all 357 questions with detailed answers and explanations

Professional Cloud Developer — Frequently Asked Questions

What is the Google Professional Cloud Developer exam?

The Google Professional Cloud Developer exam — Google Cloud Certified - Professional Cloud Developer — is a professional IT certification exam offered by Google.

How many practice questions are included?

This study guide contains 357 practice questions, each with an expert-verified correct answer and a detailed explanation. Questions cover all exam domains and objectives.

Is there a free sample available?

Yes! We provide a free sample of 15 practice questions from the Professional Cloud Developer exam right on this page. Scroll up to preview them and evaluate the quality of our materials before purchasing.

When was this Professional Cloud Developer study guide last updated?

This study guide was last updated on 2026-02-19. We regularly refresh our materials to reflect the latest exam content and objectives so you're always studying current material.

What file formats do I receive?

After purchase you receive two files: an interactive HTML file with show/hide answer toggles (ideal for studying on screen) and a PDF file (ideal for printing or offline study). Both work on any device — desktop, tablet, or phone.

How much does the Professional Cloud Developer study guide cost?

The Google Professional Cloud Developer study guide costs $19 (discounted from $63). This is a one-time payment with no subscriptions or hidden fees.

How do I get my files after payment?

After successful payment via Stripe, you are immediately redirected to a download page with links to your HTML and PDF files. We also send the download links to your email address as a backup, so you'll always have access.

Why choose CheapestExamDumps over other providers?

CheapestExamDumps offers the lowest price at $19 per exam — competitors charge $50-$300 for similar content. All study materials are expert-verified, updated monthly, and include a free 15-question preview with no signup required. You get instant access to both HTML and PDF formats after payment.