Free Sample — 15 Practice Questions
Preview 15 of 342 questions from the Professional Cloud Security Engineer exam.
Try before you buy — purchase the full study guide for all 342 questions with answers and explanations.
Question 93
Your organization has applications that run in multiple clouds. The applications require access to a Google Cloud resource running in your project. You must use short-lived access credentials to maintain security across the clouds. What should you do?
A. Create a managed workload identity. Bind an attested identity to the Compute Engine workload.
B. Create a service account key. Download the key to each application that requires access to the Google Cloud resource.
C. Create a workload identity pool with a workload identity provider for each external cloud. Set up a service account and add an IAM binding for impersonation.
D. Create a VPC firewall rule for ingress traffic with an allowlist of the IP ranges of the external cloud applications.
Correct Answer: C
Explanation:
The requirement is to grant applications running in multiple external clouds secure access to Google Cloud resources using short-lived credentials. Workload Identity Federation is designed for this use case. By creating a workload identity pool and a provider for each external cloud, external workloads can exchange their native identities for short-lived Google Cloud tokens. Binding these identities to a service account via IAM impersonation avoids long-lived service account keys and provides centralized, secure access management. Other options either use long-lived credentials, apply only within Google Cloud, or do not address identity-based access.
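The setup in option C can be sketched with gcloud. The pool, provider, project, and service account names below are illustrative placeholders, and AWS is used as the example external cloud:

```shell
# 1. Create a workload identity pool (names are placeholders).
gcloud iam workload-identity-pools create my-pool \
    --location="global" \
    --display-name="Multi-cloud pool"

# 2. Add a provider for each external cloud (AWS shown here).
gcloud iam workload-identity-pools providers create-aws my-aws-provider \
    --location="global" \
    --workload-identity-pool="my-pool" \
    --account-id="123456789012"

# 3. Let federated identities from the pool impersonate a service account,
#    so they receive short-lived tokens instead of long-lived keys.
gcloud iam service-accounts add-iam-policy-binding \
    app-sa@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/my-pool/*"
```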
Question 36
Your organization is worried about recent news headlines regarding application vulnerabilities in production applications that have led to security breaches. You want to automatically scan your deployment pipeline for vulnerabilities and ensure only scanned and verified containers can run in the environment. What should you do?
A. Use Kubernetes role-based access control (RBAC) as the source of truth for cluster access by granting “container.clusters.get” to limited users. Restrict deployment access by allowing these users to generate a kubeconfig file containing the configuration access to the GKE cluster.
B. Use gcloud artifacts docker images describe LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE_ID@sha256:HASH --show-package-vulnerability in your CI/CD pipeline, and trigger a pipeline failure for critical vulnerabilities.
C. Enforce the use of Cloud Code for development so users receive real-time security feedback on vulnerable libraries and dependencies before they check in their code.
D. Enable Binary Authorization and create attestations of scans.
Correct Answer: D
Explanation:
The requirement is to automatically scan containers and enforce that only scanned and verified images can be deployed. Binary Authorization integrates with vulnerability scanning and enforces deployment-time policies based on attestations, ensuring unverified or vulnerable containers cannot run. Other options do not provide enforceable, deployment-time control over container execution.
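A minimal Binary Authorization setup along the lines of option D might look like the following sketch; the project, note, attestor, and cluster names are placeholders:

```shell
# Enable the required APIs (project and resource names are placeholders).
gcloud services enable binaryauthorization.googleapis.com \
    containeranalysis.googleapis.com

# Create an attestor backed by a Container Analysis note; attestations
# (e.g., "vulnerability scan passed") are recorded against this note.
gcloud container binauthz attestors create vulnz-attestor \
    --attestation-authority-note=vulnz-note \
    --attestation-authority-note-project=my-project

# Enforce the project's Binary Authorization policy on a GKE cluster so
# unattested images are blocked at deploy time.
gcloud container clusters update my-cluster \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
```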
Question 273
A customer's internal security team must manage its own encryption keys for encrypting data on Cloud Storage and decides to use customer-supplied encryption keys (CSEK).
How should the team complete this task?
A. Upload the encryption key to a Cloud Storage bucket, and then upload the object to the same bucket.
B. Use the gsutil command line tool to upload the object to Cloud Storage, and specify the location of the encryption key.
C. Generate an encryption key in the Google Cloud Platform Console, and upload an object to Cloud Storage using the specified key.
D. Encrypt the object, then use the gsutil command line tool or the Google Cloud Platform Console to upload the object to Cloud Storage.
Correct Answer: B
Explanation:
With Customer-Supplied Encryption Keys (CSEK), Google Cloud Storage performs the encryption and decryption using a key that the customer provides at request time. The correct way is to upload objects using a supported tool (such as gsutil or gcloud storage) while specifying the encryption key in the request. You do not upload the key itself, and you do not pre-encrypt the object. The Cloud Console cannot be used for CSEK operations. Therefore, using the gsutil command-line tool and specifying the encryption key during upload is correct.
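Option B in practice means supplying the key at request time. The bucket and file names below are placeholders; either gsutil or the newer gcloud storage CLI accepts the key:

```shell
# Generate a 256-bit AES key, base64-encoded (the CSEK format).
KEY=$(openssl rand -base64 32)

# Upload with gsutil, passing the key as a per-command option; Cloud
# Storage encrypts the object server-side with this key.
gsutil -o "GSUtil:encryption_key=$KEY" cp report.csv gs://my-bucket/

# Equivalent upload with the newer gcloud storage CLI.
gcloud storage cp report.csv gs://my-bucket/ --encryption-key="$KEY"
```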
Question 116
Your organization is rolling out a new continuous integration and delivery (CI/CD) process to deploy infrastructure and applications in Google Cloud. Many teams will use their own instances of the CI/CD workflow. It will run on Google Kubernetes Engine (GKE). The CI/CD pipelines must be designed to securely access Google Cloud APIs.
What should you do?
A. 1. Create two service accounts, one for the infrastructure and one for the application deployment. 2. Use workload identities to let the pods run the two pipelines and authenticate with the service accounts. 3. Run the infrastructure and application pipelines in separate namespaces.
B. 1. Create a dedicated service account for the CI/CD pipelines. 2. Run the deployment pipelines in a dedicated node pool in the GKE cluster. 3. Use the service account that you created as the identity for the nodes in the pool to authenticate to the Google Cloud APIs.
C. 1. Create individual service accounts for each deployment pipeline. 2. Add an identifier for the pipeline in the service account naming convention. 3. Ensure each pipeline runs on dedicated pods. 4. Use workload identity to map a deployment pipeline pod with a service account.
D. 1. Create service accounts for each deployment pipeline. 2. Generate private keys for the service accounts. 3. Securely store the private keys as Kubernetes secrets accessible only by the pods that run the specific deploy pipeline.
Correct Answer: C
Explanation:
The CI/CD pipelines must securely access Google Cloud APIs from GKE, and Google’s recommended best practice is to use Workload Identity rather than node service accounts or service account keys. Option C provides the strongest security and least-privilege model by creating a dedicated service account per pipeline, allowing granular IAM permissions per team or application. Mapping each pipeline’s pod to its own service account via Workload Identity avoids long-lived keys and prevents permission sharing across pipelines. Options A and B are less granular, and D is discouraged because it relies on managing and storing service account keys.
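The per-pipeline Workload Identity mapping from option C can be sketched as follows; the project, namespace, and account names are placeholders:

```shell
# One Google service account per pipeline (names are placeholders).
gcloud iam service-accounts create pipeline-team-a

# Allow the Kubernetes service account in namespace team-a to
# impersonate the Google service account via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
    pipeline-team-a@my-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:my-project.svc.id.goog[team-a/pipeline-ksa]"

# Annotate the Kubernetes service account with the mapping so pods
# running as pipeline-ksa authenticate as pipeline-team-a, with no keys.
kubectl annotate serviceaccount pipeline-ksa --namespace team-a \
    iam.gke.io/gcp-service-account=pipeline-team-a@my-project.iam.gserviceaccount.com
```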
Question 137
Your organization must comply with the regulation to keep instance logging data within Europe. Your workloads will be hosted in the Netherlands in region europe-west4 in a new project. You must configure Cloud Logging to keep your data in the country.
What should you do?
A. Configure the organization policy constraint gcp.resourceLocations to europe-west4.
B. Configure log sink to export all logs into a Cloud Storage bucket in europe-west4.
C. Create a new log bucket in europe-west4, and redirect the _Default bucket to the new bucket.
D. Set the logging storage region to europe-west4 by using the gcloud CLI logging settings update.
Correct Answer: D
Explanation:
For a new project, the correct and intended way to enforce log data residency is to set the Cloud Logging storage location at creation time. Using `gcloud logging settings update --storage-location=europe-west4` configures the default region for the automatically created _Default and _Required log buckets so that all logs are stored in Europe from the start. Option C is a workaround mainly for existing projects where buckets already exist; it requires manual bucket creation and sink redirection, which is unnecessary for a new project.
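Assuming the project lives under an organization, the default storage location can be set once at the organization level so new projects inherit it; ORGANIZATION_ID is a placeholder:

```shell
# Set the default log bucket region so the _Default and _Required
# buckets of new projects are created in europe-west4.
gcloud logging settings update \
    --organization=ORGANIZATION_ID \
    --storage-location=europe-west4

# Verify the effective setting.
gcloud logging settings describe --organization=ORGANIZATION_ID
```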
Question 15
A batch job running on Compute Engine needs temporary write access to a Cloud Storage bucket. You want the batch job to use the minimum permissions necessary to complete the task. What should you do?
A. Create a service account with full Cloud Storage administrator permissions. Assign the service account to the Compute Engine instance.
B. Create a service account and embed a long-lived service account key file that has write permissions specified directly in the batch job script.
C. Create a service account with the storage.objectCreator role. Use service account impersonation in the batch job's code.
D. Grant the predefined storage.objectCreator role to the Compute Engine instance's default service account.
Correct Answer: C
Explanation:
The job only needs to create objects in Cloud Storage, so the predefined storage.objectCreator role provides the minimum required permission. Using a service account without long‑lived keys and accessing it at runtime (via impersonation) avoids embedding credentials and follows least‑privilege and security best practices. The other options either grant excessive permissions or rely on insecure key handling.
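A sketch of option C with gcloud; the bucket, project, and account names are placeholders:

```shell
# Create a narrowly scoped service account for the batch job.
gcloud iam service-accounts create batch-writer

# Grant only object-creation rights, and only on the target bucket.
gcloud storage buckets add-iam-policy-binding gs://my-batch-bucket \
    --member="serviceAccount:batch-writer@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectCreator"

# The job then acts as that account via impersonation: no key file,
# and the caller only needs roles/iam.serviceAccountTokenCreator on it.
gcloud storage cp results.csv gs://my-batch-bucket/ \
    --impersonate-service-account=batch-writer@my-project.iam.gserviceaccount.com
```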
Question 279
An engineering team is launching a web application that will be public on the internet. The web application is hosted in multiple GCP regions and will be directed to the respective backend based on the URL request.
Your team wants to avoid exposing the application directly on the internet and wants to deny traffic from a specific list of malicious IP addresses.
Which solution should your team implement to meet these requirements?
A. Cloud Armor
B. Network Load Balancing
C. SSL Proxy Load Balancing
D. NAT Gateway
Correct Answer: A
Explanation:
Cloud Armor is a Google Cloud edge security service that integrates with external HTTP(S) load balancers. It allows you to block or allow traffic based on source IP addresses and other rules before traffic reaches the backend, preventing direct exposure of the application. Network Load Balancing, SSL Proxy Load Balancing, and NAT Gateway do not provide IP-based web application firewall capabilities.
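A minimal Cloud Armor policy for the IP-denylist requirement might look like this; the policy name, backend service, and IP range are placeholders:

```shell
# Create a Cloud Armor security policy.
gcloud compute security-policies create block-bad-ips \
    --description="Deny known-malicious sources"

# Deny traffic from a malicious range; lower priority numbers match first.
gcloud compute security-policies rules create 1000 \
    --security-policy=block-bad-ips \
    --src-ip-ranges="203.0.113.0/24" \
    --action=deny-403

# Attach the policy to the backend service of the external HTTP(S)
# load balancer, so blocking happens at Google's edge.
gcloud compute backend-services update web-backend \
    --security-policy=block-bad-ips --global
```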
Question 118
You are a Cloud Identity administrator for your organization. In your Google Cloud environment, groups are used to manage user permissions. Each application team has a dedicated group. Your team is responsible for creating these groups and the application teams can manage the team members on their own through the Google Cloud console. You must ensure that the application teams can only add users from within your organization to their groups.
What should you do?
A. Change the configuration of the relevant groups in the Google Workspace Admin console to prevent external users from being added to the group.
B. Set an Identity and Access Management (IAM) policy that includes a condition that restricts group membership to user principals that belong to your organization.
C. Define an Identity and Access Management (IAM) deny policy that denies the assignment of principals that are outside your organization to the groups in scope.
D. Export the Cloud Identity logs to BigQuery. Configure an alert for external members added to groups. Have the alert trigger a Cloud Function instance that removes the external members from the group.
Correct Answer: A
Explanation:
Group membership controls are managed in Google Workspace / Cloud Identity, not via IAM policies. You can configure each Google Group to disallow adding external members, ensuring application teams can only add users from your organization. IAM policies (including conditions or deny policies) govern access to Google Cloud resources, not who can be a member of a Google Group. Logging and remediation is reactive and does not prevent the issue.
Question 190
An organization's security and risk management teams are concerned about where their responsibility lies, versus Google's, for certain production workloads they run in Google Cloud. They mostly use Google Cloud's platform-as-a-service (PaaS) offerings, primarily App Engine.
Which area in the technology stack should they focus on as their primary responsibility when using App Engine?
A. Configuring and monitoring VPC Flow Logs
B. Defending against XSS and SQLi attacks
C. Managing the latest updates and security patches for the Guest OS
D. Encrypting all stored data
Correct Answer: B
Explanation:
With Google App Engine (a PaaS offering), Google is responsible for the underlying infrastructure, networking, OS, and platform patching, as well as default encryption of data at rest. The customer remains primarily responsible for application-layer security. This includes writing secure code and protecting the application from common web vulnerabilities such as XSS and SQL injection. Therefore, defending against XSS and SQLi attacks is the correct focus area.
Question 235
You are the project owner for a regulated workload that runs in a project you own and manage as an Identity and Access Management (IAM) admin. For an upcoming audit, you need to provide access reviews evidence. Which tool should you use?
A. Policy Troubleshooter
B. Policy Analyzer
C. IAM Recommender
D. Policy Simulator
Correct Answer: B
Explanation:
For access reviews evidence in a regulated audit, you need a tool that shows who has access to what resources and why. Policy Analyzer provides visibility into effective IAM policies and principals across resources, which is specifically designed to support access reviews and audits. The other tools focus on troubleshooting, simulation, or recommendations rather than audit evidence.
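Policy Analyzer is also exposed through Cloud Asset Inventory's gcloud surface; ORG_ID, the identity, and the project name below are placeholders:

```shell
# "Who has access to what": evidence for an access review.
gcloud asset analyze-iam-policy \
    --organization=ORG_ID \
    --identity="user:auditor-target@example.com"

# Or list every principal that can access a given project.
gcloud asset analyze-iam-policy \
    --organization=ORG_ID \
    --full-resource-name="//cloudresourcemanager.googleapis.com/projects/my-project"
```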
Question 105
Your Google Cloud environment has one organization node, one folder named “Apps”, and several projects within that folder. The organization node enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the terramearth.com organization. The “Apps” folder enforces the constraints/iam.allowedPolicyMemberDomains organization policy, which allows members from the flowlogistic.com organization. It also has the inheritFromParent: false property.
You attempt to grant access to a project in the “Apps” folder to a user from the terramearth.com organization.
What is the result of your action and why?
A. The action succeeds because members from both organizations, terramearth.com or flowlogistic.com, are allowed on projects in the “Apps” folder.
B. The action succeeds and the new member is successfully added to the project's Identity and Access Management (IAM) policy because all policies are inherited by underlying folders and projects.
C. The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy must be defined on the current project to deactivate the constraint temporarily.
D. The action fails because a constraints/iam.allowedPolicyMemberDomains organization policy is in place and only members from the flowlogistic.com organization are allowed.
Correct Answer: D
Explanation:
The folder-level constraints/iam.allowedPolicyMemberDomains policy on the “Apps” folder has inheritFromParent: false, so it does not inherit the organization-level policy. Only the folder’s policy applies to projects under it, allowing members only from flowlogistic.com. Therefore, attempting to grant access to a user from terramearth.com fails.
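For reference, a folder-level policy with inheritFromParent: false might be set like this sketch; note that the constraint takes directory customer IDs (shown here as an obvious placeholder), not raw domain names:

```shell
# Folder-level policy that replaces, rather than merges with, the
# parent policy because of inheritFromParent: false.
cat > policy.yaml <<'EOF'
constraint: constraints/iam.allowedPolicyMemberDomains
listPolicy:
  allowedValues:
    - C0flowlogistic   # placeholder: directory customer ID for flowlogistic.com
  inheritFromParent: false
EOF

gcloud resource-manager org-policies set-policy policy.yaml \
    --folder=FOLDER_ID
```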
Question 40
Your organization is developing a sophisticated machine learning (ML) model to predict customer behavior for targeted marketing campaigns. The BigQuery dataset used for training includes sensitive personal information. You must design the security controls around the AI/ML pipeline. Data privacy must be maintained throughout the model’s lifecycle and you must ensure that personal data is not used in the training process. Additionally, you must restrict access to the dataset to an authorized subset of people only. What should you do?
A. De-identify sensitive data before model training by using Cloud Data Loss Prevention (DLP) APIs, and implement strict Identity and Access Management (IAM) policies to control access to BigQuery.
B. Implement Identity-Aware Proxy to enforce context-aware access to BigQuery and models based on user identity and device.
C. Implement at-rest encryption by using customer-managed encryption keys (CMEK) for the pipeline. Implement strict Identity and Access Management (IAM) policies to control access to BigQuery.
D. Deploy the model on Confidential VMs for enhanced protection of data and code while in use. Implement strict Identity and Access Management (IAM) policies to control access to BigQuery.
Correct Answer: A
Explanation:
The requirements are to prevent use of personal data in training and restrict dataset access. De-identifying sensitive data with Cloud DLP ensures PII is removed before model training, satisfying privacy across the ML lifecycle. Strict IAM policies on BigQuery limit access to an authorized subset of users. Other options address only access context, encryption at rest, or protection in use, but do not ensure personal data is excluded from training.
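The de-identification step could be exercised directly against the DLP REST API, as in this sketch; the project name, sample text, and infoTypes are illustrative:

```shell
# De-identify text before it enters the training pipeline by replacing
# detected PII with its infoType name (e.g., [EMAIL_ADDRESS]).
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
  -d '{
    "item": {"value": "Contact jane.doe@example.com or 555-0100"},
    "inspectConfig": {
      "infoTypes": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
    },
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    }
  }'
```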
Question 162
You have a highly sensitive BigQuery workload that contains personally identifiable information (PII) that you want to ensure is not accessible from the internet. To prevent data exfiltration, only requests from authorized IP addresses are allowed to query your BigQuery tables.
What should you do?
A. Use service perimeter and create an access level based on the authorized source IP address as the condition.
B. Use Google Cloud Armor security policies defining an allowlist of authorized IP addresses at the global HTTPS load balancer.
C. Use the Restrict Resource Service Usage organization policy constraint along with Cloud Data Loss Prevention (DLP).
D. Use the Restrict allowed Google Cloud APIs and services organization policy constraint along with Cloud Data Loss Prevention (DLP).
Correct Answer: A
Explanation:
To prevent data exfiltration from BigQuery and ensure that only requests from authorized IP addresses can access sensitive PII, you should use VPC Service Controls. By creating a service perimeter and defining an access level based on authorized source IP addresses, you can restrict BigQuery access so that queries are only allowed from approved networks. Other options do not control direct BigQuery access at the API level or are focused on different security objectives (such as DLP inspection rather than network-based access control).
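A sketch of option A using Access Context Manager; the access policy ID, level name, project number, and IP range are placeholders:

```shell
# Access level allowing only the approved corporate range.
cat > CONDITIONS.yaml <<'EOF'
- ipSubnetworks:
    - 198.51.100.0/24
EOF

gcloud access-context-manager levels create corp_ips \
    --policy=ACCESS_POLICY_ID \
    --title="Corp IPs" \
    --basic-level-spec=CONDITIONS.yaml

# Service perimeter restricting BigQuery to requests that satisfy
# the access level, blocking all other API access to the project.
gcloud access-context-manager perimeters create bq_perimeter \
    --policy=ACCESS_POLICY_ID \
    --title="BigQuery perimeter" \
    --resources=projects/PROJECT_NUMBER \
    --restricted-services=bigquery.googleapis.com \
    --access-levels=corp_ips
```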
Question 108
You control network traffic for a folder in your Google Cloud environment. Your folder includes multiple projects and Virtual Private Cloud (VPC) networks. You want to enforce on the folder level that egress connections are limited only to IP range 10.58.5.0/24 and only from the VPC network “dev-vpc”. You want to minimize implementation and maintenance effort.
What should you do?
A. 1. Leave the network configuration of the VMs in scope unchanged. 2. Create a new project including a new VPC network “new-vpc”. 3. Deploy a network appliance in “new-vpc” to filter access requests and only allow egress connections from “dev-vpc” to 10.58.5.0/24.
B. 1. Leave the network configuration of the VMs in scope unchanged. 2. Enable Cloud NAT for “dev-vpc” and restrict the target range in Cloud NAT to 10.58.5.0/24.
C. 1. Attach external IP addresses to the VMs in scope. 2. Define and apply a hierarchical firewall policy on folder level to deny all egress connections and to allow egress to IP range 10.58.5.0/24 from network dev-vpc.
D. 1. Attach external IP addresses to the VMs in scope. 2. Configure a VPC Firewall rule in “dev-vpc” that allows egress connectivity to IP range 10.58.5.0/24 for all source addresses in this network.
Correct Answer: C
Explanation:
You need to enforce an egress restriction centrally at the folder level across multiple projects and VPCs with minimal ongoing maintenance. Hierarchical firewall policies are specifically designed for this purpose: they apply at the organization or folder level and are inherited by all projects.
Option C uses a hierarchical firewall policy to deny all egress traffic by default and explicitly allow egress only to 10.58.5.0/24 and only when the source network is dev-vpc. This meets both requirements: folder-level enforcement and precise control over source network and destination range. Other options either work at the VPC/project level only (B, D), misuse Cloud NAT (B), or add unnecessary complexity (A).
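The hierarchical policy in option C could be sketched as follows; the folder, policy, and project identifiers are placeholders:

```shell
# Create a hierarchical firewall policy owned by the folder.
gcloud compute firewall-policies create \
    --folder=FOLDER_ID --short-name=egress-lockdown

# Allow egress to 10.58.5.0/24, but only from the dev-vpc network.
gcloud compute firewall-policies rules create 1000 \
    --firewall-policy=POLICY_ID \
    --direction=EGRESS --action=allow \
    --dest-ip-ranges=10.58.5.0/24 \
    --target-resources=https://www.googleapis.com/compute/v1/projects/dev-project/global/networks/dev-vpc \
    --layer4-configs=all

# Deny all other egress at a lower-priority (higher-numbered) rule.
gcloud compute firewall-policies rules create 2000 \
    --firewall-policy=POLICY_ID \
    --direction=EGRESS --action=deny \
    --dest-ip-ranges=0.0.0.0/0 \
    --layer4-configs=all

# Associate the policy with the folder so all its projects inherit it.
gcloud compute firewall-policies associations create \
    --firewall-policy=POLICY_ID --folder=FOLDER_ID
```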
Question 227
Your Security team believes that a former employee of your company gained unauthorized access to Google Cloud resources some time in the past 2 months by using a service account key. You need to confirm the unauthorized access and determine the user activity. What should you do?
A. Use Security Health Analytics to determine user activity.
B. Use the Cloud Monitoring console to filter audit logs by user.
C. Use the Cloud Data Loss Prevention API to query logs in Cloud Storage.
D. Use the Logs Explorer to search for user activity.
Correct Answer: D
Explanation:
To confirm unauthorized access and determine activity from a service account key, you must examine audit logs. Google Cloud audit logs are queried using Cloud Logging’s Logs Explorer, where you can filter by service account, method calls, IP addresses, and time range (past 2 months). Cloud Monitoring focuses on metrics, not detailed user actions; Security Health Analytics detects misconfigurations, not historical access; and DLP is unrelated. Therefore, Logs Explorer is the correct tool.
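The same audit-log query can be run from the CLI as well as in Logs Explorer; the service account email below is a placeholder:

```shell
# Pull audit-log entries made by the suspect service account over the
# past 60 days, showing when, what API method, and from which IP.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com" AND
   protoPayload.authenticationInfo.principalEmail="suspect-sa@my-project.iam.gserviceaccount.com"' \
  --freshness=60d \
  --format="table(timestamp, protoPayload.methodName, protoPayload.requestMetadata.callerIp)"
```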