Google

Professional Security Operations Engineer — Google Cloud Certified - Professional Security Operations Engineer Study Guide

29 practice questions · Updated 2026-02-20 · $19 (70% off) · HTML + PDF formats

Professional Security Operations Engineer Exam Overview

Prepare for the Google Professional Security Operations Engineer certification exam with our comprehensive study guide. This study material contains 29 practice questions sourced from real exams and expert-verified for accuracy. Each question includes the correct answer and a detailed explanation to help you understand the material thoroughly.

The Professional Security Operations Engineer exam — Google Cloud Certified - Professional Security Operations Engineer — is offered by Google. Our study materials were last updated on 2026-02-20 to reflect the most recent exam objectives and content.

What You Get

29 Practice Questions

Complete question bank covering all exam domains and objectives.

HTML + PDF Formats

Interactive HTML file (recommended) for screen study and a print-ready PDF.

Instant Download

Access your study materials immediately after purchase.

Email with Permanent Download Links

You will receive a confirmation email with permanent download links in case you want to download the files again in the future.

Why Choose CheapestExamDumps?

Lowest Price Available

Only $19 per exam — competitors charge $50-$300 for similar content.

Updated Monthly

Study materials refreshed within 30 days of any exam content changes.

Free Preview

Try 15 real practice questions before you buy — no signup required.

Instant Access

Download HTML + PDF immediately after payment. No waiting, no account needed.

$19 (was $63)

One-time payment · HTML + PDF · Instant download · 29 questions

Free Sample — 15 Practice Questions

Preview 15 of 29 questions from the Professional Security Operations Engineer exam. Try before you buy — purchase the full study guide for all 29 questions with answers and explanations.

Question 26

Your organization's Google Security Operations (SecOps) tenant is ingesting a vendor's firewall logs in its default JSON format using the Google-provided parser for that log. The vendor recently released a patch that introduces a new field and renames an existing field in the logs. The parser does not recognize these two fields and they remain available only in the raw logs, while the rest of the log is parsed normally. You need to resolve this logging issue as soon as possible while minimizing the overall change management impact. What should you do?

A. Write a code snippet, and deploy it in a parser extension to map both fields to UDM.
B. Use the web interface-based custom parser feature in Google SecOps to copy the parser, and modify it to map both fields to UDM.
C. Deploy a third-party data pipeline management tool to ingest the logs, and transform the updated fields into fields supported by the default parser.
D. Use the Extract Additional Fields tool in Google SecOps to convert the raw log entries to additional fields.
Correct Answer: D
Explanation:
The requirement is to resolve the issue quickly with minimal change management impact. The existing Google-provided parser already works for most fields; only two new or renamed fields are unparsed and still present in the raw log. Using Extract Additional Fields allows you to parse values directly from the raw log and surface them as additional fields without copying or extending the parser, writing code, or changing the ingestion pipeline. Parser extensions or custom parsers introduce more maintenance overhead and risk, and third-party pipelines add unnecessary complexity. Therefore, Extract Additional Fields is the fastest and least disruptive solution.

Question 28

You are responsible for identifying suspicious activity and security events at your organization. You have been asked to search in Google Security Operations (SecOps) for network traffic associated with an active HTTP backdoor that runs on TCP port 5555. You want to use the most effective approach to identify traffic originating from the server that is running the backdoor. What should you do?

A. Detect on events where network.ApplicationProtocol is HTTP.
B. Detect on events where target.port is 5555.
C. Detect on events where principal.port is 5555.
D. Detect on events where network.ip_protocol is TCP.
Correct Answer: C
Explanation:
You want to identify traffic originating from the server running the backdoor. In Google SecOps UDM, the principal represents the source of the connection, while the target represents the destination. An active HTTP backdoor listening on TCP port 5555 will generate outbound traffic with source (principal) port 5555 when it responds. Detecting on principal.port = 5555 is therefore the most effective way to identify traffic originating from the compromised server.
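As an illustrative sketch only (the rule name and metadata are invented; the field paths follow the UDM schema), a YARA-L 2.0 detection for this scenario might look like:

```
rule backdoor_traffic_tcp_5555 {
  meta:
    description = "Traffic originating from a host responding on TCP/5555 (illustrative sketch)"
    severity = "HIGH"

  events:
    // Network connection events where the source (principal) port is 5555
    $conn.metadata.event_type = "NETWORK_CONNECTION"
    $conn.network.ip_protocol = "TCP"
    $conn.principal.port = 5555

  condition:
    $conn
}
```

The key design point is the one the explanation makes: filtering on principal.port rather than target.port is what isolates traffic originating from the backdoored server.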

Question 2

You are a SOC analyst at an organization that uses Google Security Operations (SecOps). You are investigating suspicious activity in your organization's environment. Alerts in Google SecOps indicate repeated PowerShell activity on a set of endpoints. Outbound connections are made to a domain that does not appear in your threat intelligence feeds. The activity occurs across multiple systems and user accounts. You need to search across impacted systems and user identities to identify the malicious user and understand the scope of the compromise. What should you do?

A. Perform a YARA-L 2.0 search to correlate activity across impacted systems and users.
B. Perform a raw log search for the suspicious domain string, and manually pivot to related user activity.
C. Use the User Sign-In Overview dashboard to monitor authentication trends and anomalies across all users.
D. Use the Behavioral Analytics dashboard in Risk Analytics to identify abnormal IP-based activity and high-risk user behavior.
Correct Answer: D
Explanation:
The requirement is to identify a malicious user and understand the scope of compromise across multiple systems and identities. Behavioral Analytics within Risk Analytics is purpose-built for this: it correlates activity across users, endpoints, and network behavior, baselines normal behavior, and highlights anomalous actions such as widespread PowerShell usage and unusual outbound connections. The other options are more manual, narrow, or focused on specific data types rather than holistic user and asset risk.

Question 23

You work for a large international company that has several Compute Engine instances running in production. You need to configure monitoring and alerting for Compute Engine instances tagged with compliance=pci that have an external IP address assigned. What should you do?

A. Create a custom Event Threat Detection module that alerts when a Compute Engine instance with the compliance=pci tag is assigned an external IP address.
B. Deploy the compute.vmExternalIpAccess organization policy constraint to prevent specific projects or folders with the compliance=pci tag from creating Compute Engine instances with external IP addresses.
C. Create a custom Security Health Analytics (SHA) module. Configure the detection logic to scan Cloud Asset Inventory data for compute.googleapis.com/Instance assets, and search for the compliance=pci tag.
D. Use the PUBLIC_IP_ADDRESS Security Health Analytics (SHA) detector to identify Compute Engine instances with external IP addresses. Determine whether the compliance=pci tag exists on the instances.
Correct Answer: C
Explanation:
You need targeted monitoring and alerting for instances that both have an external IP and are in PCI scope (compliance=pci). A custom Security Health Analytics module can evaluate Cloud Asset Inventory data for Compute Engine instances, check for the presence of an external IP address, and match the compliance=pci tag, generating findings only when both conditions are met. The other options either prevent configuration, are not designed for this type of configuration monitoring, or lack tag-based targeting.

Question 11

Your organization is a Google Security Operations (SecOps) customer. The compliance team requires a weekly export of case resolutions and SLA metrics of high and critical severity cases over the past week. The compliance team's post-processing scripts require this data to be formatted as tabular data in CSV files, zipped, and delivered to their email each Monday morning. What should you do?

A. Generate a report in SOAR Reports, and schedule delivery of the report.
B. Use statistics in search, and configure a Google SecOps SOAR job to format and send the report.
C. Build an Advanced Report in SOAR Reports, and schedule delivery of the report.
D. Build a detection rule with outcomes, and configure a Google SecOps SOAR job to format and send the report.
Correct Answer: C
Explanation:
The requirement is an automated, weekly export of SOAR case data (resolutions and SLA metrics) for high and critical cases, delivered as zipped CSV via email. Advanced Reports in Google SecOps SOAR are designed to report on case management data, can be scheduled, exported in CSV format, automatically zipped, and emailed. Basic reports or search statistics do not fully support case/SLA reporting, and detection rules are unrelated to reporting use cases.

Question 1

You are a SOC analyst working a case in Google Security Operations (SecOps). The case contains a file hash that your playbooks have automatically enriched with VirusTotal context and categorized as likely malicious. You need to quickly identify devices and users in your organization who have interacted with this file. What should you do?

A. Build a playbook to perform a UDM search matching on the file hash in Google SecOps SIEM.
B. Build a playbook to query your threat intelligence platform (TIP) for the presence of the file hash.
C. Use a manual action in Google SecOps SOAR to perform a UDM search matching on the file hash in Google SecOps SIEM.
D. Use a manual action in Google SecOps SOAR to query your threat intelligence platform (TIP) for the presence of the file hash.
Correct Answer: C
Explanation:
The goal is to quickly find which internal devices and users interacted with a known malicious file. That requires searching your organization’s own security telemetry, which resides in Google SecOps SIEM using the Unified Data Model (UDM). A manual action allows the analyst to immediately run a UDM search on the file hash without waiting to design or deploy a playbook. Querying a TIP would only provide external intelligence, not internal exposure data.
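For illustration (the hash value is a placeholder, and the field paths are taken from the UDM schema), the manual UDM search might match the hash across the common file fields:

```
target.file.sha256 = "<file_hash>" OR principal.process.file.sha256 = "<file_hash>"
```

Matching both the file entity and the launching process surfaces hosts that stored the file as well as hosts that executed it, which is what scoping internal exposure requires.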

Question 4

You are responsible for selecting and prioritizing potential sources of data to integrate with Google Security Operations (SecOps). Your company has recently started using several Google Cloud services to increase security in its Google Cloud organization. You need to determine which logs should be ingested into Google SecOps to reduce the effort required to write detections. What should you do?

A. Ingest Google Cloud Armor logs by using Cloud Logging.
B. Deploy a Bindplane agent to ingest event logs from Compute Engine VMs that provide endpoint visibility.
C. Integrate Security Command Center (SCC) into Google SecOps to ingest logs originating from the Google Cloud services.
D. Use Google Threat Intelligence to gain insight about threat group behavior and support threat hunting activities.
Correct Answer: C
Explanation:
Integrating Security Command Center (SCC) with Google SecOps provides normalized, high-value security findings and logs from multiple Google Cloud services out of the box. This reduces the effort to write detections because SCC aggregates and enriches security-relevant signals (misconfigurations, threats, vulnerabilities) rather than requiring custom parsing and correlation of raw service logs.

Question 15

Your organization has recently acquired Company A, which has its own SOC and security tooling. You have already configured ingestion of Company A's security telemetry and migrated their detection rules to Google Security Operations (SecOps). You now need to enable Company A's analysts to work their cases in Google SecOps. Company A's analysts must not have access to any case data originating from outside of Company A, and must be able to re-purpose playbooks previously developed by your organization's employees. You need to minimize the effort required to implement your solution. What is the first step you should take?

A. Acquire a second Google SecOps SOAR tenant for Company A.
B. Provision a new service account for Company A.
C. Define a new SOC role for Company A.
D. Create a Google SecOps SOAR environment for Company A.
Correct Answer: D
Explanation:
Creating a separate Google SecOps SOAR environment provides logical data segregation so Company A’s analysts can only access their own cases, while still residing in the same overall SecOps instance. This allows reuse and repurposing of existing playbooks with minimal additional setup, unlike a separate tenant or custom roles, which add complexity or do not ensure full data isolation.

Question 5

Your company's risk management and compliance team requires regular reporting on compliance with industry standard control frameworks for a regulated business unit that continuously adds projects. You need to create a report that includes evidence of non-compliant resources found in this environment. How should you generate this report?

A. Run an audit using the compliance framework in Audit Manager. Export the evaluation for consumption by the second-line team.
B. Run queries for the required controls using the Cloud Asset Inventory data stored in BigQuery. Schedule this report to run regularly.
C. Implement the control framework using Rego, and deploy this framework in Workload Manager. Schedule a regular report in Workload Manager.
D. Implement the built-in posture for the compliance framework within the Security Command Center (SCC) posture.
Correct Answer: D
Explanation:
Security Command Center (SCC) postures provide built-in industry-standard compliance frameworks and continuously evaluate the environment as new projects and resources are added. SCC automatically identifies and reports non-compliant resources with evidence, which matches the requirement for ongoing compliance reporting in a regulated Google Cloud environment. The other options either require significant custom implementation and ongoing maintenance, or are not designed for continuous, framework-based compliance evaluation.

Question 20

Your team is responsible for cybersecurity for a large multinational corporation. You have been tasked with identifying unknown command and control nodes (C2s) that are potentially active in your organization's environment. You need to generate a list of potential matches within the next 24 hours. What should you do?

A. Write a rule in Google Security Operations (SecOps) that scans historic network outbound connections against ingested threat intelligence. Run the rule in a retrohunt against the full tenant.
B. Load network records into BigQuery to identify endpoints that are communicating with domains outside three standard deviations of normal.
C. Review Security Health Analytics (SHA) findings in Security Command Center (SCC).
D. Write a YARA-L rule in Google Security Operations (SecOps) that compares network traffic of endpoints to low prevalence domains against recent WHOIS registrations.
Correct Answer: D
Explanation:
The task is to identify unknown or novel C2 infrastructure quickly. Known threat intelligence (A) is ineffective for truly unknown C2s. SHA findings (C) focus on misconfigurations, not C2 discovery. Pure statistical outlier analysis in BigQuery (B) may surface anomalies but lacks C2-specific context and is slower to operationalize. A YARA-L rule in Google SecOps that correlates low-prevalence outbound domains with recent WHOIS registrations directly targets common traits of newly stood-up C2 infrastructure and can be run immediately across telemetry, making it the best choice.
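As a heavily hedged sketch of the winning approach (the rule name, threshold, and time window are invented; the prevalence field paths follow the documented entity-graph schema, and the WHOIS-age correlation is noted in a comment rather than implemented):

```
rule low_prevalence_new_domains {
  meta:
    description = "Outbound DNS lookups for low-prevalence domains (illustrative sketch)"

  events:
    // Outbound DNS lookups, keyed by the queried domain
    $e.metadata.event_type = "NETWORK_DNS"
    $e.network.dns.questions.name = $domain

    // Join against the derived prevalence entity for that domain
    $prev.graph.metadata.entity_type = "DOMAIN_NAME"
    $prev.graph.entity.domain.name = $domain
    $prev.graph.entity.domain.prevalence.rolling_max <= 3

    // A recent-WHOIS-registration condition would be joined here in the same
    // way, using whatever WHOIS enrichment entity your tenant ingests.

  match:
    $domain over 24h

  condition:
    $e and $prev
}
```

The combination of low prevalence and recent registration is what targets newly stood-up C2 infrastructure rather than established, widely-used domains.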

Question 17

You are a security analyst at an organization that uses Google Security Operations (SecOps). You notice suspicious login attempts on several user accounts. You need to determine whether these attempts are part of a coordinated attack as quickly as possible. What action should you take first?

A. Enable default curated detections to automatically block suspicious IP addresses.
B. Use UDM Search to query historical logs for recent IOCs associated with the suspicious login attempts.
C. Remove user accounts that have repeated invalid login attempts.
D. Look for correlations across impacted users in the Risk Analytics dashboard.
Correct Answer: D
Explanation:
To quickly determine whether multiple suspicious login attempts are part of a coordinated attack, the fastest first step is to look for correlations across affected users. The Risk Analytics dashboard is designed to surface cross-user patterns, shared indicators, and aggregated risk signals, allowing rapid confirmation of coordinated activity. UDM Search is better suited for deeper, follow-up investigation once coordination is suspected, while the other options are premature or reactive.

Question 6

You are planning log onboarding for a Google Security Operations (SecOps) SIEM deployment in a cloud-heavy enterprise environment. The detection engineering team is requesting log sources that provide visibility into user identity behavior, lateral movement, and privilege escalation attempts. You need to determine which telemetry sources should be ingested first. Which log source should you prioritize?

A. Cloud access security broker (CASB) logs
B. EDR logs
C. IAM logs
D. Network firewall logs
Correct Answer: C
Explanation:
In a cloud-heavy enterprise, IAM logs provide the most direct and comprehensive visibility into user identity behavior, authentication and authorization events, role changes, privilege escalations, and access patterns that can indicate lateral movement. These signals are foundational for detection engineering focused on identity-centric threats, making IAM logs the highest priority to ingest first.

Question 19

Your organization recently implemented Google Security Operations (SecOps) with Applied Threat Intelligence enabled. You were notified by the networking team about potentially anomalous communications to external domains in the last 30 days. You plan to start your threat hunting by looking at communications to external domains. You are ingesting firewall, proxy, DNS, and DHCP logs into Google SecOps. What should you do? (Choose two.)

A. Perform a UDM search across the logs for domains with geolocations that were first seen in the last 30 days.
B. Perform a UDM search across the logs for domains with low prevalence that were first seen in the last 30 days.
C. Perform a raw log search across the logs for domains with low prevalence that were first seen in the last 30 days.
D. Identify the domains with the higher normalized risk in Risk Analytics. Drill down into those entities to determine their prevalence and if they were first seen in the last 30 days.
E. Navigate to the IOC Matches page and filter based on domain type over the last 30 days. Look for the first seen and last seen timestamps for the reported domains. Investigate these domains using the IOC drilldown link.
Correct Answer: B, D
Explanation:
The goal is proactive threat hunting for potentially anomalous external domain communications, not investigation of already known IOCs. B is correct because a UDM search allows correlation across firewall, proxy, DNS, and DHCP logs, and filtering on low-prevalence domains first seen in the last 30 days is a standard hunting technique for identifying suspicious or newly observed external infrastructure. D is correct because Risk Analytics leverages Applied Threat Intelligence to normalize risk across entities. Starting with domains that have higher normalized risk and then drilling down to review prevalence and first-seen timing is an effective, intelligence-driven way to prioritize suspicious external communications. E is not ideal for this use case because the IOC Matches page is focused on known indicators already present in threat intelligence feeds, whereas this scenario is about discovering potentially anomalous or previously unknown domains.

Question 21

You are developing a new detection rule in Google Security Operations (SecOps). You are defining the YARA-L logic that includes complex event, match, and condition sections. You need to develop and test the rule to ensure that the detections are accurate before the rule is migrated to production. You want to minimize impact to production processes. What should you do?

A. Develop the rule logic in the UDM search, review the search output to inform changes to filters and logic, and copy the rule into the Rules Editor.
B. Use Gemini in Google SecOps to develop the rule by providing a description of the parameters and conditions, and transfer the rule into the Rules Editor.
C. Develop the rule in the Rules Editor, define the sections of the rule logic, and test the rule using the test rule feature.
D. Develop the rule in the Rules Editor, define the sections of the rule logic, and test the rule by setting it to live but not alerting. Run a YARA-L retrohunt from the rules dashboard.
Correct Answer: C
Explanation:
To minimize impact to production while validating complex YARA-L logic, the correct approach is to develop the rule directly in the Google SecOps Rules Editor and use its built-in test rule feature. This allows you to run the rule logic against historical data and verify accuracy without enabling the rule live or consuming production detection resources. Other options either bypass proper rule testing workflows, rely on auxiliary tools, or risk impacting production by setting rules live.
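As a structural sketch only (the rule name, event type, threshold, and window are invented for illustration), a multi-event rule with the events, match, and condition sections the question describes might look like:

```
rule repeated_failed_logins {
  meta:
    description = "Skeleton illustrating the events, match, and condition sections"

  events:
    // Blocked login events, grouped per user
    $login.metadata.event_type = "USER_LOGIN"
    $login.security_result.action = "BLOCK"
    $login.target.user.userid = $user

  match:
    // Correlate events per user over a one-hour window
    $user over 1h

  condition:
    // Fire when more than five such events occur in the window
    #login > 5
}
```

Logic of this shape is exactly what the Rules Editor's test rule feature lets you run against historical data before the rule ever goes live.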

Question 9

You are receiving security alerts from multiple connectors in your Google Security Operations (SecOps) instance. You need to identify which IP address entities are internal to your network and label each entity with its specific network name. This network name will be used as the trigger for the playbook. What should you do?

A. Configure each network in the Google SecOps SOAR settings.
B. Enrich the IP address entities as the initial step of the playbook.
C. Modify the entity attribute in the alert overview.
D. Create an outcome variable in the rule to assign the network name.
Correct Answer: A
Explanation:
To identify which IP address entities are internal and automatically label them with a specific network name for use as a playbook trigger, you must define your internal networks centrally in Google SecOps. By configuring networks (CIDR ranges) in the SecOps/SOAR settings, the platform can automatically classify IP entities as internal and associate them with the correct network name at ingestion time. Playbook enrichment, manual entity edits, or rule outcome variables occur too late or are not designed to define internal network membership globally.

$19 (was $63)

Get all 29 questions with detailed answers and explanations

Professional Security Operations Engineer — Frequently Asked Questions

What is the Google Professional Security Operations Engineer exam?

The Google Professional Security Operations Engineer exam — Google Cloud Certified - Professional Security Operations Engineer — is a professional IT certification exam offered by Google.

How many practice questions are included?

This study guide contains 29 practice questions, each with an expert-verified correct answer and a detailed explanation. Questions cover all exam domains and objectives.

Is there a free sample available?

Yes! We provide a free sample of 15 practice questions from the Professional Security Operations Engineer exam right on this page. Scroll up to preview them and evaluate the quality of our materials before purchasing.

When was this Professional Security Operations Engineer study guide last updated?

This study guide was last updated on 2026-02-20. We regularly refresh our materials to reflect the latest exam content and objectives so you're always studying current material.

What file formats do I receive?

After purchase you receive two files: an interactive HTML file with show/hide answer toggles (ideal for studying on screen) and a PDF file (ideal for printing or offline study). Both work on any device — desktop, tablet, or phone.

How much does the Professional Security Operations Engineer study guide cost?

The Google Professional Security Operations Engineer study guide costs $19 (discounted from $63). This is a one-time payment with no subscriptions or hidden fees.

How do I get my files after payment?

After successful payment via Stripe, you are immediately redirected to a download page with links to your HTML and PDF files. We also send the download links to your email address as a backup, so you'll always have access.

Why choose CheapestExamDumps over other providers?

CheapestExamDumps offers the lowest price at $19 per exam — competitors charge $50-$300 for similar content. All study materials are expert-verified, updated monthly, and include a free 15-question preview with no signup required. You get instant access to both HTML and PDF formats after payment.