Free Sample — 15 Practice Questions
Preview 15 of 296 questions from the Professional Cloud Architect exam.
Try before you buy — purchase the full study guide for all 296 questions with answers and explanations.
Question 26
Your company runs a critical, revenue-generating ecommerce application that is served by a regional managed instance group (MIG) behind an external HTTP(S) Load Balancer. The operations team is currently overwhelmed with low-priority notifications and is starting to ignore alerts. Your team's service level objective (SLO) is to maintain 99.9% availability, which is measured by the ratio of successful requests (2xx status codes) to total requests. You want to minimize noise from non-critical events and ensure that the team is only notified of issues that are actionable and threaten the SLO. What should you do?
A. Focus on cause-based alerts, creating alerting policies with thresholds for the Compute Engine instances, including CPU utilization, memory usage, disk I/O, and network traffic.
B. Create log-based alerts for only the WARN and ERROR log entries generated by the application to ensure that no potential issue is missed.
C. Implement an error budget policy based on the availability SLO. Create a "page" alert that triggers only when the error budget burn rate predicts full exhaustion within the next 24 hours.
D. Configure alerts based on predictive metrics. Use the instance count of the MIG as the primary metric to trigger an alert.
Show Answer
Correct Answer: C
Explanation:
The goal is to reduce alert noise and notify only on issues that threaten the 99.9% availability SLO, which is defined by successful requests. Error-budget burn rate alerting is an SRE best practice for this scenario because it is symptom-based and directly tied to user impact. A burn-rate alert that predicts full error budget exhaustion within 24 hours pages the team only when the SLO is genuinely at risk and human intervention is required. The other options are noisy or indirect: infrastructure metrics and logs are cause-based and not necessarily correlated with user impact, and MIG instance count changes can be normal and not affect availability.
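For illustration, a minimal sketch of how such a burn-rate alerting policy could be created, assuming an availability SLO has already been defined in Cloud Monitoring; the project, service, and SLO IDs and the threshold below are placeholders, not values from the question:

```bash
# Hypothetical burn-rate alerting policy against an existing availability SLO.
# For a 30-day SLO window, a sustained burn rate of ~30 would exhaust the
# error budget in roughly 24 hours; tune the threshold for your own window.
cat > burn-rate-policy.json <<'EOF'
{
  "displayName": "Availability SLO burn rate - page",
  "combiner": "OR",
  "conditions": [{
    "displayName": "Error budget exhaustion predicted within 24h",
    "conditionThreshold": {
      "filter": "select_slo_burn_rate(\"projects/PROJECT_ID/services/SERVICE_ID/serviceLevelObjectives/SLO_ID\", \"1h\")",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 30,
      "duration": "0s"
    }
  }]
}
EOF
gcloud alpha monitoring policies create --policy-from-file=burn-rate-policy.json
```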
Question 236
Your web application must comply with the requirements of the European Union's General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you do?
A. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features.
B. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application.
C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.
D. Define a design for the security of data in your web application that meets GDPR requirements.
Show Answer
Correct Answer: D
Explanation:
GDPR compliance is ultimately the responsibility of the application owner. While cloud providers offer compliant infrastructure and tools, they do not make an application automatically GDPR-compliant. As the technical architect, you must design how personal data is collected, processed, stored, protected, and deleted in accordance with GDPR requirements (e.g., data minimization, security, access control, retention, and user rights). The other options are helpful but insufficient on their own.
Question 23
Company Overview -
Altostrat is a prominent player in the media industry, with an extensive collection of audio and video content that comprises podcasts, interviews, news broadcasts, and documentaries. Their success in delivering premium content to a diverse audience requires a content management system that can keep pace with the dynamic media landscape.
Solution Concept -
Altostrat seeks to modernize its content management and user engagement strategies using Google Cloud's generative AI. They want a platform that empowers customers with personalized recommendations, natural language interactions, and seamless self-service support. Simultaneously, they want to drive revenue growth through dynamic pricing, targeted marketing, and personalized product suggestions.
The seamless integration of AI-powered tools into the existing Google Cloud environment will enable Altostrat to efficiently manage their vast media library, enhance user experiences, and unlock new revenue streams. Google Cloud's generative AI will solidify their leadership in the media industry.
Existing Technical Environment -
Altostrat’s content management and delivery platform leverages GKE for scalability and high availability, essential for handling their vast media library. Their extensive media library, spanning various document, audio, and video formats, is stored in Cloud Storage. To gain valuable insights into user behavior, content consumption patterns, and audience demographics, Altostrat leverages BigQuery as their primary data warehouse. Additionally, they use Cloud Run functions for serverless execution of event-driven tasks such as video transcoding, metadata extraction, and personalized content recommendations.
While Altostrat has made significant strides in cloud adoption, they also maintain some legacy on-premises systems for specific workflows like content ingestion and archival. These systems are slated for modernization and migration to Google Cloud in the near future. User management and authentication are currently handled through a combination of Google Identity and third-party identity providers. For monitoring and observability, Altostrat relies on a mix of native Google Cloud tools like Cloud Monitoring and open-source solutions like Prometheus, with alerts primarily delivered via email notifications.
Business Requirements -
• Accelerate and enhance the reliability of operational workflows across all environments. [Google Cloud + On-premises]
• Simplify infrastructure management for rapid application deployment.
• Optimize cloud storage costs while maintaining high availability and scalability for media content.
• Enable natural language interaction with the platform with 24/7 user support.
• Automatically generate concise summaries of media content.
• Extract rich metadata from media assets using NLP and computer vision.
• Detect and filter inappropriate content.
• Analyze media content to identify trends and extract insights.
• Inform content strategy and decision making with data.
Technical Requirements -
• Modernize CI/CD for containerized deployments with a centralized management platform.
• Secure, high-performance hybrid cloud connectivity for data ingestion.
• Provide scalable, performant Kubernetes environments both on-premises and in the cloud.
• Optimize cloud storage costs for growing media volumes.
• Design AI-powered detection of harmful content.
• Ensure that AI systems are auditable and their decisions can be explained.
• Leverage LLMs and conversational AI for personalized experiences and content virality.
• Develop advanced chatbots with natural language understanding to provide personalized assistance.
• Automated summarization for diverse media.
Executive Statement -
At Altostrat, we are embracing the next frontier of artificial intelligence to revolutionize our content strategy. By harnessing the power of generative AI, we will create an unparalleled user experience by empowering our audience with intelligent tools for content discovery, personalized recommendations, and seamless interaction. Reliability and cost management are our top priorities. This strategic initiative will deepen engagement, foster customer loyalty, and unlock new revenue streams through targeted marketing and tailored content offerings. We see a future where AI-driven innovation is central to our business, leading to greater success for our company and delivering exceptional value to our customers.
For this question, refer to the Altostrat Media case study. Altostrat is concerned about sophisticated, multi-vector Distributed Denial of Service (DDoS) attacks targeting various layers of their infrastructure. DDoS attacks could potentially disrupt video streaming and cause financial losses. You need to mitigate this risk. What should you do?
A. Set up VPC Service Controls to restrict access to sensitive resources and prevent data exfiltration.
B. Configure Cloud Next Generation Firewall (NGFW) with custom rules to filter malicious traffic at the network level.
C. Deploy Google Cloud Armor with pre-configured and custom rules for L3/L4 and L7 protection.
D. Activate Security Command Center to monitor security posture and detect potential threats.
Show Answer
Correct Answer: C
Explanation:
Mitigating sophisticated, multi-vector DDoS attacks requires an active, in-line protection service that operates across network and application layers. Google Cloud Armor provides managed DDoS protection with preconfigured and custom rules for L3/L4 and L7, integrates with Google’s global edge, and is designed to protect services like video streaming from volumetric and application-layer attacks. The other options focus on access control (VPC Service Controls), general firewalling without managed DDoS mitigation (NGFW), or monitoring and detection rather than prevention (Security Command Center).
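As a hedged sketch of what this could look like against the existing external HTTP(S) load balancer; the policy, rule, and backend service names are placeholders:

```bash
# Create a Cloud Armor security policy for the streaming frontend.
gcloud compute security-policies create streaming-ddos-policy \
    --description="L7 protection for video streaming"

# Example rule using a preconfigured WAF expression to deny a common attack pattern.
gcloud compute security-policies rules create 1000 \
    --security-policy=streaming-ddos-policy \
    --expression="evaluatePreconfiguredExpr('sqli-stable')" \
    --action=deny-403

# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update video-backend \
    --security-policy=streaming-ddos-policy \
    --global
```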
Question 90
For this question, refer to the TerramEarth case study. TerramEarth has a legacy web application that you cannot migrate to cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost. What should you do?
A. Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
B. Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.
D. Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
Show Answer
Correct Answer: C
Explanation:
Cloud Monitoring uptime checks can monitor external/on‑premises URLs without installing agents. They are purpose‑built, highly reliable, and low cost compared to custom polling jobs. An uptime check can trigger an alert that publishes to Pub/Sub, which in turn invokes a Cloud Function to switch traffic to a "Site is unavailable" page and notify Ops. This is fully cloud‑native, event‑driven (runs only on failure), and cheaper and more reliable than running scheduled containers or VMs. Cloud Error Reporting is not designed for availability checks.
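A minimal sketch of the notification plumbing, assuming the uptime check and its alerting policy are created in Cloud Monitoring and pointed at the Pub/Sub notification channel below; topic, function, and handler names are placeholders:

```bash
# Topic that the uptime-check alerting policy publishes failures to.
gcloud pubsub topics create site-down-events

# Pub/Sub notification channel for the alerting policy.
gcloud beta monitoring channels create \
    --display-name="Site down" \
    --type=pubsub \
    --channel-labels=topic=projects/PROJECT_ID/topics/site-down-events

# Function that switches the URL to the "Site is unavailable" page and notifies Ops.
gcloud functions deploy switch-to-maintenance-page \
    --runtime=python311 \
    --trigger-topic=site-down-events \
    --entry-point=handle_outage \
    --source=./outage-handler
```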
Question 261
Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.
Which product should you use?
A. Google Cloud Dataflow
B. Google Cloud Dataproc
C. Google Compute Engine
D. Google Kubernetes Engine
Show Answer
Correct Answer: B
Explanation:
The requirement is to scale Apache Spark and Hadoop workloads to the cloud with minimal operational effort and minimal code changes. Google Cloud Dataproc is a fully managed service specifically for running Spark and Hadoop, allowing you to lift and shift existing jobs, quickly create and scale clusters, and offload cluster management. The other options either require significant re-architecture (Dataflow), low-level VM management (Compute Engine), or additional platform complexity without native Hadoop/Spark management (GKE).
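A hedged sketch of the lift-and-shift, assuming an existing Spark job packaged as a JAR; cluster name, region, bucket, and class are placeholders:

```bash
# Create a managed Dataproc cluster sized for the forecast demand.
gcloud dataproc clusters create spark-migration-cluster \
    --region=us-central1 \
    --num-workers=4 \
    --enable-component-gateway

# Submit the existing Spark job without code changes.
gcloud dataproc jobs submit spark \
    --cluster=spark-migration-cluster \
    --region=us-central1 \
    --jars=gs://my-bucket/jobs/existing-etl.jar \
    --class=com.example.ExistingEtlJob
```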
Question 134
Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
Show Answer
Correct Answer: C
Explanation:
The requirement is an automated CI/CD process that triggers on pushes to the develop branch, runs tests, builds container images, and deploys successfully tested code automatically to the development environment, while preventing manual or bypassed deployments. A Cloud Build trigger satisfies this by centrally enforcing build and test steps. Separating build from deployment with a deployment pipeline that watches for new images and restricting deploy permissions ensures only validated artifacts are deployed. Options A and B rely on developer actions and cannot guarantee enforcement. Option D is incorrect because vulnerability scanning does not replace running tests.
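A minimal sketch of the trigger and build config, assuming a GitHub repository and a Python service; repository, owner, image, and test commands are placeholders:

```bash
# Build config: run tests first, then build and push the image for the
# deployment pipeline to pick up.
cat > cloudbuild.yaml <<'EOF'
steps:
  # Unit tests run before anything is built or pushed.
  - name: python:3.11
    entrypoint: bash
    args: ['-c', 'pip install -r requirements.txt && pytest']
  # Build the container image only if the tests pass.
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-service:$SHORT_SHA'
EOF

# Trigger the build on every push to the develop branch.
gcloud builds triggers create github \
    --name=develop-ci \
    --repo-owner=my-org \
    --repo-name=my-repo \
    --branch-pattern='^develop$' \
    --build-config=cloudbuild.yaml
```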
Question 68
You are designing an application for use only during business hours. For the minimum viable product release, you'd like to use a managed product that automatically `scales to zero` so you don't incur costs when there is no activity.
Which primary compute resource should you choose?
A. Cloud Functions
B. Compute Engine
C. Google Kubernetes Engine
D. App Engine flexible environment
Show Answer
Correct Answer: A
Explanation:
The requirement is a managed compute service that can automatically scale to zero so there is no cost when idle. Cloud Functions is a fully managed serverless offering that scales down to zero when there are no invocations. Compute Engine always incurs costs unless manually stopped, GKE has ongoing control-plane costs and does not truly scale all components to zero, and App Engine flexible environment requires at least one running instance. Therefore, Cloud Functions best meets the requirement.
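A minimal deployment sketch; the function name, runtime, region, and entry point are placeholders:

```bash
# HTTP-triggered function that scales to zero when there is no traffic.
gcloud functions deploy business-hours-api \
    --runtime=python311 \
    --trigger-http \
    --entry-point=handle_request \
    --region=us-central1 \
    --allow-unauthenticated
```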
Question 4
You are developing a deep learning model that requires high-performance access to large volumes of media data currently stored in Cloud Storage. Model training will be executed on multiple VM instances with GPUs attached, but your application must interact with the data as if it were on the local file system. You need to minimize complexity and cost. What should you do?
A. Copy the data from Cloud Storage to Filestore, and then mount the Filestore volume as a local file system on your VM instances.
B. Create a shared persistent disk, attach the disk to your VM instances, and load data from the Cloud Storage bucket.
C. Use the gcsfuse command line tool to mount the Cloud Storage bucket as a local file system, and perform read/write operations in your bucket using standard file system semantics.
D. Use the gsutil command line tool to download the data to your VM instances.
Show Answer
Correct Answer: C
Explanation:
The requirement is to access Cloud Storage data as if it were on a local file system across multiple GPU VMs, while minimizing cost and complexity. gcsfuse mounts a Cloud Storage bucket directly as a file system, allowing standard POSIX-like read/write access without copying or duplicating data. Filestore adds extra cost and management overhead, shared persistent disks are not designed for high‑throughput shared training data and still require data staging, and gsutil requires explicitly downloading data to each VM, increasing complexity and storage usage.
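A hedged sketch of the mount on each training VM; the bucket name and mount point are placeholders:

```bash
# Mount the Cloud Storage bucket as a local file system.
sudo mkdir -p /mnt/training-data
gcsfuse --implicit-dirs media-training-bucket /mnt/training-data

# Training code can now use ordinary file paths, for example:
# open('/mnt/training-data/videos/clip_0001.mp4', 'rb')
```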
Question 80
Your company has a Google Workspace account and Google Cloud Organization. Some developers in the company have created Google Cloud projects outside of the Google Cloud Organization.
You want to create an Organization structure that allows developers to create projects, but prevents them from modifying production projects. You want to manage policies for all projects centrally and be able to set more restrictive policies for production projects.
You want to minimize disruption to users and developers when business needs change in the future. You want to follow Google-recommended practices. How should you design the Organization structure?
A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on both Organizations. 5. Additionally, set the production policies on the original Organization.
B. 1. Create a folder under the Organization resource named "Production." 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
C. 1. Create folders under the Organization resource named "Development" and "Production." 2. Grant all developers the Project Creator IAM role on the "Development" folder. 3. Move the developer projects into the "Development" folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project Creator IAM role on the Organization. 3. Create development projects outside of the Organization using the developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the individual production projects.
Show Answer
Correct Answer: C
Explanation:
Use a single Organization with folders to separate environments. Creating Development and Production folders lets you grant Project Creator only in Development, centrally manage org-level policies, and apply stricter policies to Production via folder-level constraints. This follows Google best practices, avoids multiple organizations, allows easy project movement between environments, and minimizes future disruption.
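A minimal sketch of the resulting layout, assuming the developer group and the IDs below (all placeholders):

```bash
# Create the environment folders under the Organization.
gcloud resource-manager folders create \
    --display-name="Development" --organization=ORG_ID
gcloud resource-manager folders create \
    --display-name="Production" --organization=ORG_ID

# Developers may create projects only under the Development folder.
gcloud resource-manager folders add-iam-policy-binding DEV_FOLDER_ID \
    --member="group:developers@example.com" \
    --role="roles/resourcemanager.projectCreator"

# Move an existing developer project into the Development folder.
gcloud beta projects move DEV_PROJECT_ID --folder=DEV_FOLDER_ID
```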
Question 163
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team.
Which three actions should you take? (Choose three.)
A. Use Stackdriver Logging to search for the module log entries
B. Read the debug GCE Activity log using the API or Cloud Console
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs
D. Identify whether a live migration event of the failed server occurred, using the activity log
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics
F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen
Show Answer
Correct Answer: A, C, E
Explanation:
To collect actionable failure details from Linux kernel module issues on GCE VMs, you need OS-level logs, low-level kernel output, and correlated system behavior around the failure window. Cloud Logging (Stackdriver Logging) lets you search syslog and kernel messages emitted by the module. The serial console provides access to kernel and boot-time messages even when the VM is unstable or unreachable, which is critical for kernel module failures. Stackdriver Monitoring metrics, aligned to the failure timeframe, help correlate resource anomalies (CPU, memory, I/O) with the crashes. Activity/Audit logs focus on control-plane events and do not capture kernel-level failures, and exporting a VM image would not reliably reproduce an intermittent kernel issue.
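A hedged sketch of how that evidence could be collected; the instance name, zone, and module name are placeholders:

```bash
# Dump the serial port output (kernel and boot messages) without logging in.
gcloud compute instances get-serial-port-output batch-server-01 \
    --zone=us-central1-a

# Or attach to the serial console interactively (requires serial console
# access to be enabled on the instance or project).
gcloud compute connect-to-serial-port batch-server-01 --zone=us-central1-a

# Search Cloud Logging for entries mentioning the new kernel module.
gcloud logging read \
    'resource.type="gce_instance" AND textPayload:"MODULE_NAME"' \
    --freshness=2d --limit=100
```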
Question 278
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?
A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_NAME --size=10
B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags=enable-autoscaling,max-nodes-10
C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10, and redeploy your application
Show Answer
Correct Answer: C
Explanation:
To enable automatic scaling on an already running Google Kubernetes Engine cluster, you must update the existing cluster (or its node pool) to enable Cluster Autoscaler and define minimum and maximum node counts. Option C does exactly this by enabling autoscaling with specified limits, allowing the cluster to grow or shrink based on demand. Option A only resizes the cluster to a fixed size, B is unrelated to autoscaling, and D unnecessarily recreates the cluster.
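For reference, the same change with current (non-alpha) gcloud syntax, assuming the default node pool; the cluster, node pool, and zone are placeholders:

```bash
# Enable Cluster Autoscaler on a running cluster's node pool.
gcloud container clusters update mycluster \
    --enable-autoscaling \
    --min-nodes=1 --max-nodes=10 \
    --node-pool=default-pool \
    --zone=us-central1-a
```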
Question 63
Your company and one of its partners each have a Google Cloud project in separate organizations. Your company's project (prj-a) runs in Virtual Private Cloud
(vpc-a). The partner's project (prj-b) runs in vpc-b. There are two instances running on vpc-a and one instance running on vpc-b. Subnets defined in both VPCs are not overlapping. You need to ensure that all instances communicate with each other via internal IPs, minimizing latency and maximizing throughput. What should you do?
A. Set up a network peering between vpc-a and vpc-b.
B. Set up a VPN between vpc-a and vpc-b using Cloud VPN.
C. Configure IAP TCP forwarding on the instance in vpc-b, and then run the following gcloud command from one of the instances in vpc-a: gcloud compute start-iap-tunnel INSTANCE_NAME_IN_VPC_B 22 --local-host-port=localhost:22
D. 1. Create an additional instance in vpc-a. 2. Create an additional instance in vpc-b. 3. Install OpenVPN in newly created instances. 4. Configure a VPN tunnel between vpc-a and vpc-b with the help of OpenVPN.
Show Answer
Correct Answer: A
Explanation:
VPC Network Peering provides direct, private connectivity between VPCs using Google’s internal network. It allows instances in different projects and even different organizations to communicate using internal IP addresses with low latency and high throughput, as long as subnet CIDR ranges do not overlap. This exactly matches the requirement. Cloud VPN or OpenVPN add encryption overhead and higher latency, and IAP TCP forwarding is for administrative access, not full network connectivity.
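A minimal sketch of the peering setup; the peering must be configured from both projects before it becomes active (peering names are placeholders):

```bash
# From prj-a: peer vpc-a with the partner's vpc-b.
gcloud compute networks peerings create peer-a-to-b \
    --project=prj-a --network=vpc-a \
    --peer-project=prj-b --peer-network=vpc-b

# From prj-b: create the matching peering back to vpc-a.
gcloud compute networks peerings create peer-b-to-a \
    --project=prj-b --network=vpc-b \
    --peer-project=prj-a --peer-network=vpc-a
```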
Question 165
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app's functionality.
Which two actions should you take? (Choose two.)
A. Remove Python after running pip
B. Remove dependencies from requirements.txt
C. Use a slimmed-down base image like Alpine Linux
D. Use larger machine types for your Google Container Engine node pools
E. Copy the source after the package dependencies (Python and pip) are installed
Show Answer
Correct Answer: C, E
Explanation:
C: Using a slimmed-down base image (e.g., Alpine) reduces image size, leading to faster image pulls and container startup during deployments.
E: Ordering Dockerfile steps to install dependencies before copying application source maximizes Docker layer caching, so code changes don’t invalidate dependency layers, significantly speeding up rebuilds and deployments.
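Since the developer's Dockerfile is not reproduced in this sample, the following is a generic sketch of the recommended structure on a slim base image; the runtime, entry point, and dependency file are assumptions:

```bash
# Illustrative Dockerfile only: dependencies are installed before the source
# is copied, so code changes do not invalidate the cached dependency layer.
cat > Dockerfile <<'EOF'
FROM python:3.11-alpine
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
EOF
```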
Question 19
Company Overview -
Altostrat is a prominent player in the media industry, with an extensive collection of audio and video content that comprises podcasts, interviews, news broadcasts, and documentaries. Their success in delivering premium content to a diverse audience requires a content management system that can keep pace with the dynamic media landscape.
Solution Concept -
Altostrat seeks to modernize its content management and user engagement strategies using Google Cloud's generative AI. They want a platform that empowers customers with personalized recommendations, natural language interactions, and seamless self-service support. Simultaneously, they want to drive revenue growth through dynamic pricing, targeted marketing, and personalized product suggestions.
The seamless integration of AI-powered tools into the existing Google Cloud environment will enable Altostrat to efficiently manage their vast media library, enhance user experiences, and unlock new revenue streams. Google Cloud's generative AI will solidify their leadership in the media industry.
Existing Technical Environment -
Altostrat’s content management and delivery platform leverages GKE for scalability and high availability, essential for handling their vast media library. Their extensive media library, spanning various document, audio, and video formats, is stored in Cloud Storage. To gain valuable insights into user behavior, content consumption patterns, and audience demographics, Altostrat leverages BigQuery as their primary data warehouse. Additionally, they use Cloud Run functions for serverless execution of event-driven tasks such as video transcoding, metadata extraction, and personalized content recommendations.
While Altostrat has made significant strides in cloud adoption, they also maintain some legacy on-premises systems for specific workflows like content ingestion and archival. These systems are slated for modernization and migration to Google Cloud in the near future. User management and authentication are currently handled through a combination of Google Identity and third-party identity providers. For monitoring and observability, Altostrat relies on a mix of native Google Cloud tools like Cloud Monitoring and open-source solutions like Prometheus, with alerts primarily delivered via email notifications.
Business Requirements -
• Accelerate and enhance the reliability of operational workflows across all environments. [Google Cloud + On-premises]
• Simplify infrastructure management for rapid application deployment.
• Optimize cloud storage costs while maintaining high availability and scalability for media content.
• Enable natural language interaction with the platform with 24/7 user support.
• Automatically generate concise summaries of media content.
• Extract rich metadata from media assets using NLP and computer vision.
• Detect and filter inappropriate content.
• Analyze media content to identify trends and extract insights.
• Inform content strategy and decision making with data.
Technical Requirements -
• Modernize CI/CD for containerized deployments with a centralized management platform.
• Secure, high-performance hybrid cloud connectivity for data ingestion.
• Provide scalable, performant Kubernetes environments both on-premises and in the cloud.
• Optimize cloud storage costs for growing media volumes.
• Design AI-powered detection of harmful content.
• Ensure that AI systems are auditable and their decisions can be explained.
• Leverage LLMs and conversational AI for personalized experiences and content virality.
• Develop advanced chatbots with natural language understanding to provide personalized assistance.
• Automated summarization for diverse media.
Executive Statement -
At Altostrat, we are embracing the next frontier of artificial intelligence to revolutionize our content strategy. By harnessing the power of generative AI, we will create an unparalleled user experience by empowering our audience with intelligent tools for content discovery, personalized recommendations, and seamless interaction. Reliability and cost management are our top priorities. This strategic initiative will deepen engagement, foster customer loyalty, and unlock new revenue streams through targeted marketing and tailored content offerings. We see a future where AI-driven innovation is central to our business, leading to greater success for our company and delivering exceptional value to our customers.
For this question, refer to the Altostrat Media case study. Altostrat stores a large library of media content, including sensitive interviews and documentaries, in Cloud Storage. They are concerned about the confidentiality of this content and want to protect it from unauthorized access. You need to implement a Google-recommended solution that is easy to integrate and provides Altostrat with control and auditability of the encryption keys. What should you do?
A. Configure Cloud Storage to use server-side encryption with Google-managed encryption keys. Create a bucket policy to restrict access to only authorized Google groups and required service accounts.
B. Use Cloud Storage default encryption at rest. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.
C. Implement client-side encryption before uploading it to Cloud Storage. Store the encryption keys in a HashiCorp Vault instance deployed on Google Kubernetes Engine (GKE). Implement fine-grained access control to sensitive Cloud Storage buckets using IAM roles.
D. Use customer-managed encryption keys (CMEK) for all Cloud Storage buckets storing sensitive media content. Implement fine-grained access control using IAM roles and groups to restrict access to sensitive buckets.
Show Answer
Correct Answer: D
Explanation:
Altostrat requires strong confidentiality, control over encryption keys, auditability, and easy integration using Google‑recommended services. Customer‑managed encryption keys (CMEK) with Cloud KMS allow Altostrat to control key lifecycle, enable detailed audit logs of key usage, and integrate natively with Cloud Storage without operational overhead. Combining CMEK with fine‑grained IAM access control meets security, compliance, and simplicity requirements. Google‑managed keys (A, B) do not provide customer key control, and client‑side encryption with third‑party key management (C) adds complexity and is not the recommended native approach on Google Cloud.
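A hedged sketch of the CMEK setup, assuming Cloud KMS in the same location as the bucket; key ring, key, bucket, and project values are placeholders:

```bash
# Create a customer-managed key for the sensitive media content.
gcloud kms keyrings create media-keys --location=us
gcloud kms keys create sensitive-media-key \
    --keyring=media-keys --location=us --purpose=encryption

# Allow the Cloud Storage service agent to encrypt/decrypt with the key.
gcloud kms keys add-iam-policy-binding sensitive-media-key \
    --keyring=media-keys --location=us \
    --member="serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"

# Make the key the bucket's default encryption key.
gcloud storage buckets update gs://sensitive-media-bucket \
    --default-encryption-key=projects/PROJECT_ID/locations/us/keyRings/media-keys/cryptoKeys/sensitive-media-key
```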
Question 114
Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?
A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use Cloud VPN to join the two VPCs.
B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use VPC Peering to join the two VPCs.
Show Answer
Correct Answer: C
Explanation:
The requirement is to strictly separate network administration from compute administration while still allowing applications to run on shared network resources. In a single project (option B), the Compute Admin role includes compute.network.* permissions, which would allow the development team to modify networking resources—violating the requirement that only the networking team manages networks. Shared VPC is designed for this exact scenario: the networking team controls the VPC in a host project, while the development team manages Compute Engine instances in a service project that has no VPC of its own. This ensures the networking team cannot access instance data, and the development team cannot manage network resources.
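A minimal sketch of the host/service project wiring and role separation; project IDs and group names are placeholders:

```bash
# Designate the networking team's project as the Shared VPC host project.
gcloud compute shared-vpc enable network-host-project

# Attach the development project as a service project with no VPC of its own.
gcloud compute shared-vpc associated-projects add dev-service-project \
    --host-project=network-host-project

# Networking team administers the VPC in the host project ...
gcloud projects add-iam-policy-binding network-host-project \
    --member="group:network-team@example.com" \
    --role="roles/compute.networkAdmin"

# ... while the development team administers instances only in the service project.
gcloud projects add-iam-policy-binding dev-service-project \
    --member="group:dev-team@example.com" \
    --role="roles/compute.admin"
```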