Free Sample — 15 Practice Questions
Preview 15 of 313 questions from the Associate Cloud Engineer exam.
Try before you buy — purchase the full study guide for all 313 questions with answers and explanations.
Question 257
You built an application on Google Cloud that uses Cloud Spanner. Your support team needs to monitor the environment but should not have access to table data.
You need a streamlined solution to grant the correct permissions to your support team, and you want to follow Google-recommended practices. What should you do?
A. Add the support team group to the roles/monitoring.viewer role.
B. Add the support team group to the roles/spanner.databaseUser role.
C. Add the support team group to the roles/spanner.databaseReader role.
D. Add the support team group to the roles/stackdriver.accounts.viewer role.
Show Answer
Correct Answer: A
Explanation:
The support team needs visibility into the health and performance of Cloud Spanner without accessing any table data. The roles/monitoring.viewer role provides read-only access to Cloud Monitoring metrics and dashboards for resources like Spanner, which is the Google-recommended, least-privilege approach for monitoring. Spanner database roles (databaseUser, databaseReader) grant access to data, and stackdriver.accounts.viewer only shows account structure, not resource monitoring.
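As a sketch, the binding could be granted at the project level with gcloud (the project ID and group address below are placeholders):

```bash
# Grant read-only access to Cloud Monitoring metrics and dashboards
# to the support team group. Replace my-project and the group email.
gcloud projects add-iam-policy-binding my-project \
  --member="group:support-team@example.com" \
  --role="roles/monitoring.viewer"
```

Granting the role to a group rather than to individual users also follows Google's recommendation of managing access through groups.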
Question 239
You need to set a budget alert for use of Compute Engine services on one of the three Google Cloud Platform projects that you manage. All three projects are linked to a single billing account. What should you do?
A. Verify that you are the project billing administrator. Select the associated billing account and create a budget and alert for the appropriate project.
B. Verify that you are the project billing administrator. Select the associated billing account and create a budget and a custom alert.
C. Verify that you are the project administrator. Select the associated billing account and create a budget for the appropriate project.
D. Verify that you are the project administrator. Select the associated billing account and create a budget and a custom alert.
Show Answer
Correct Answer: A
Explanation:
To create a budget alert in Google Cloud, you must have billing permissions on the Cloud Billing account, not just project-level admin rights. Budgets and alerts are created at the billing account level and can be scoped to a specific project and service such as Compute Engine. A Billing Account Administrator can select the shared billing account, create a budget, scope it to the appropriate project, and rely on the built-in budget alerting without needing a custom alert. Project administrator roles do not have sufficient billing permissions, and a custom alert is not required for this use case.
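A sketch of the budget creation with gcloud, scoped to one project and the Compute Engine service (the billing account ID and project name are placeholders; `6F81-5844-456A` is the Compute Engine service ID, but verify it for your account):

```bash
# Create a budget on the shared billing account, scoped to one project
# and the Compute Engine service, alerting at 90% of the budget.
gcloud billing budgets create \
  --billing-account=XXXXXX-XXXXXX-XXXXXX \
  --display-name="compute-budget" \
  --budget-amount=1000USD \
  --filter-projects=projects/my-project \
  --filter-services=services/6F81-5844-456A \
  --threshold-rule=percent=0.9
```

The built-in threshold rules send notifications to Billing Account Administrators by default, so no custom alerting is needed.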
Question 211
Your company is moving from an on-premises environment to Google Cloud. You have multiple development teams that use Cassandra environments as backend databases. They all need a development environment that is isolated from other Cassandra instances. You want to move to Google Cloud quickly and with minimal support effort. What should you do?
A. 1. Build an instruction guide to install Cassandra on Google Cloud. 2. Make the instruction guide accessible to your developers.
B. 1. Advise your developers to go to Cloud Marketplace. 2. Ask the developers to launch a Cassandra image for their development work.
C. 1. Build a Cassandra Compute Engine instance and take a snapshot of it. 2. Use the snapshot to create instances for your developers.
D. 1. Build a Cassandra Compute Engine instance and take a snapshot of it. 2. Upload the snapshot to Cloud Storage and make it accessible to your developers. 3. Build instructions to create a Compute Engine instance from the snapshot so that developers can do it themselves.
Show Answer
Correct Answer: B
Explanation:
The requirement is to move quickly to Google Cloud with minimal support effort while giving each development team an isolated Cassandra environment. Google Cloud Marketplace provides preconfigured Cassandra images that can be launched with a few clicks, requiring no custom build, snapshots, or documentation maintenance. This minimizes operational overhead and speeds up adoption compared to building, snapshotting, or documenting custom Cassandra installations.
Question 7
You are managing an application deployed on Cloud Run. The development team has released a new version of the application. You want to deploy and redirect traffic to this new version of the application. To ensure traffic to the new version of the application is served with no startup time, you want to ensure that there are two idle instances available for incoming traffic before adjusting the traffic flow. You also want to minimize administrative overhead. What should you do?
A. Ensure the checkbox “Serve this revision immediately” is unchecked when deploying the new revision. Before changing the traffic rules, use a traffic simulation tool to send load to the new revision.
B. Configure service autoscaling and set the minimum number of instances to 2.
C. Configure revision autoscaling for the new revision and set the minimum number of instances to 2.
D. Configure revision autoscaling for the existing revision and set the minimum number of instances to 2.
Show Answer
Correct Answer: C
Explanation:
Cloud Run scaling settings such as minimum instances are configured per revision, not per service. To ensure the new version has two warm (idle) instances ready before shifting traffic and to minimize operational effort, you should set the minimum number of instances to 2 on the new revision. This guarantees no cold-start latency when traffic is redirected.
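One way to sketch this flow with gcloud (service and image names are placeholders): deploy the new revision with warm instances but no traffic, then migrate traffic once it is ready.

```bash
# Deploy the new revision with two warm instances, without shifting traffic yet.
gcloud run deploy my-service \
  --image=gcr.io/my-project/app:v2 \
  --min-instances=2 \
  --no-traffic

# Once the revision reports ready, send all traffic to it.
gcloud run services update-traffic my-service --to-latest
```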
Question 291
You are creating a Google Kubernetes Engine (GKE) cluster with a cluster autoscaler feature enabled. You need to make sure that each node of the cluster will run a monitoring pod that sends container metrics to a third-party monitoring solution. What should you do?
A. Deploy the monitoring pod in a StatefulSet object.
B. Deploy the monitoring pod in a DaemonSet object.
C. Reference the monitoring pod in a Deployment object.
D. Reference the monitoring pod in a cluster initializer at the GKE cluster creation time.
Show Answer
Correct Answer: B
Explanation:
A DaemonSet ensures that exactly one copy of a Pod runs on every node (or selected nodes) in the cluster. This is ideal for node-level monitoring agents, because as the GKE cluster autoscaler adds or removes nodes, the monitoring pod is automatically scheduled on each new node and removed when nodes are deleted. Other controllers like Deployment or StatefulSet do not guarantee one pod per node.
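A minimal DaemonSet manifest might look like the following (the agent image, names, and namespace are placeholders for the third-party monitoring solution):

```yaml
# One monitoring pod per node; the scheduler places a copy on every
# node the autoscaler adds and removes it when the node is deleted.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:1.0  # placeholder agent image
```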
Question 155
You have developed an application that consists of multiple microservices, with each microservice packaged in its own Docker container image. You want to deploy the entire application on Google Kubernetes Engine so that each microservice can be scaled individually. What should you do?
A. Create and deploy a Custom Resource Definition per microservice.
B. Create and deploy a Docker Compose File.
C. Create and deploy a Job per microservice.
D. Create and deploy a Deployment per microservice.
Show Answer
Correct Answer: D
Explanation:
In Kubernetes, a Deployment manages a set of identical Pods and supports rolling updates and independent scaling. Creating one Deployment per microservice allows each service to be deployed, managed, and scaled independently on GKE. Other options do not fit this goal: CRDs define new resource types, Docker Compose is not a Kubernetes deployment mechanism, and Jobs are intended for finite batch tasks, not long-running microservices.
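A sketch of one such Deployment (names and image are placeholders; you would repeat this pattern per microservice, e.g. payments, inventory):

```yaml
# Deployment for a single microservice; its replica count scales
# independently of the other microservices' Deployments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
      - name: orders
        image: gcr.io/my-project/orders:1.0  # placeholder image
```

Each service can then be scaled on its own, for example with `kubectl scale deployment orders --replicas=5` or a HorizontalPodAutoscaler.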
Question 313
You need to set up permissions for a set of Compute Engine instances to enable them to write data into a particular Cloud Storage bucket. You want to follow Google-recommended practices. What should you do?
A. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/devstorage.write_only'.
B. Create a service account with an access scope. Use the access scope 'https://www.googleapis.com/auth/cloud-platform'.
C. Create a service account and add it to the IAM role 'storage.objectCreator' for that bucket.
D. Create a service account and add it to the IAM role 'storage.objectAdmin' for that bucket.
Show Answer
Correct Answer: C
Explanation:
Google-recommended best practice is to grant permissions via IAM roles on a service account, following the principle of least privilege. To allow Compute Engine instances to write objects to a specific Cloud Storage bucket, you attach a service account to the instances and grant that service account the IAM role storage.objectCreator on the target bucket. This role allows object creation only, without read or delete permissions. Access scopes alone are not sufficient or recommended for fine-grained control, and broader roles or scopes (such as cloud-platform or storage.objectAdmin) grant unnecessary privileges.
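A sketch of the setup with gcloud (all names are placeholders): create a dedicated service account, grant it the role on the bucket only, and attach it to the instances.

```bash
# Create a dedicated service account for the VMs.
gcloud iam service-accounts create bucket-writer

# Grant object-creation rights on the target bucket only.
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member="serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectCreator"

# Attach the service account to an instance; with IAM controlling access,
# the broad cloud-platform scope is the recommended scope setting.
gcloud compute instances create my-vm \
  --service-account=bucket-writer@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
```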
Question 174
You have developed a containerized web application that will serve internal colleagues during business hours. You want to ensure that no costs are incurred outside of the hours the application is used. You have just created a new Google Cloud project and want to deploy the application. What should you do?
A. Deploy the container on Cloud Run for Anthos, and set the minimum number of instances to zero.
B. Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero.
C. Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in the app.yaml.
D. Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml.
Show Answer
Correct Answer: B
Explanation:
Cloud Run (fully managed) can scale to zero instances when there is no traffic, so you incur no compute costs outside business hours. It is designed for containerized applications and uses pay-per-request billing. Cloud Run for Anthos requires a running GKE cluster (incurring costs), and App Engine flexible environment cannot scale to zero, so it would still incur costs.
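As a brief sketch (names are placeholders), a fully managed Cloud Run deployment that scales to zero and is limited to authenticated internal callers could look like:

```bash
# min-instances=0 is the default, shown here for clarity; with no
# traffic outside business hours, no instances run and no compute
# costs accrue.
gcloud run deploy internal-app \
  --image=gcr.io/my-project/internal-app:latest \
  --min-instances=0 \
  --no-allow-unauthenticated
```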
Question 203
You built an application on your development laptop that uses Google Cloud services. Your application uses Application Default Credentials for authentication and works fine on your development laptop. You want to migrate this application to a Compute Engine virtual machine (VM) and set up authentication using Google-recommended practices and minimal changes. What should you do?
A. Assign appropriate access for Google services to the service account used by the Compute Engine VM.
B. Create a service account with appropriate access for Google services, and configure the application to use this account.
C. Store credentials for service accounts with appropriate access for Google services in a config file, and deploy this config file with your application.
D. Store credentials for your user account with appropriate access for Google services in a config file, and deploy this config file with your application.
Show Answer
Correct Answer: A
Explanation:
The application already uses Application Default Credentials (ADC). On Compute Engine, ADC automatically uses the VM’s attached service account via the metadata server. The Google‑recommended practice with minimal changes is therefore to grant the required IAM roles to the service account used by the VM (preferably a dedicated VM service account). No code changes or credential files are needed. Creating and wiring a service account key into the application (option B) adds unnecessary changes and key management, which Google discourages.
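A sketch of the only step required (project, service account, and role are placeholders; the role should match whatever services the application actually calls):

```bash
# Grant the needed role to the service account attached to the VM.
# The application's ADC then obtains credentials automatically from
# the metadata server, with no code or config-file changes.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:app-vm@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"  # example role, adjust as needed
```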
Question 38
You are the Organization Administrator for your company's Google Cloud resources. Your company has strict compliance rules that require you to be notified about any modifications to files and documents hosted on Cloud Storage. In a recent incident, one of your team members was able to modify files and you did not receive any notifications, causing other production jobs to fail. You must ensure that you receive notifications for all changes to files and documents in Cloud Storage while minimizing management overhead. What should you do?
A. View Cloud Audit logs for all Cloud Storage files in Logs Explorer. Filter by Admin Activity logs.
B. Enable Cloud Storage object versioning on your bucket. Configure Pub/Sub notifications for your Cloud Storage buckets.
C. Enable versioning on the Cloud Storage bucket. Set up a custom script that scans versions of Cloud Storage objects being modified and alert the admin by using the script.
D. Configure Object change notifications on the Cloud Storage buckets. Send the events to Pub/Sub.
Show Answer
Correct Answer: D
Explanation:
The requirement is to receive notifications for all object changes in Cloud Storage with minimal management overhead. Configuring Cloud Storage object change notifications and sending events to Pub/Sub provides real-time, automated notifications for create, update, and delete operations and integrates easily with alerting systems. Enabling object versioning is not required for notifications and adds additional cost and operational overhead, while log review or custom scripts are more manual and less suitable for real-time alerting.
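A sketch of wiring a bucket to Pub/Sub with gcloud (bucket and topic names are placeholders; the topic must exist and the Cloud Storage service agent needs publish rights on it):

```bash
# Publish an event to the topic for every object create, update,
# or delete in the bucket.
gcloud storage buckets notifications create gs://my-bucket \
  --topic=storage-changes
```

Subscribers on the topic (for example a small alerting service or an email integration) can then react to changes in near real time.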
Question 61
You are planning to migrate your on-premises data to Google Cloud. The data includes:
• 200 TB of video files in SAN storage
• Data warehouse data stored on Amazon Redshift
• 20 GB of PNG files stored on an S3 bucket
You need to load the video files into a Cloud Storage bucket, transfer the data warehouse data into BigQuery, and load the PNG files into a second Cloud Storage bucket. You want to follow Google-recommended practices and avoid writing any code for the migration. What should you do?
A. Use gcloud storage for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
B. Use Transfer Appliance for the videos, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
C. Use Storage Transfer Service for the video files, BigQuery Data Transfer Service for the data warehouse data, and Storage Transfer Service for the PNG files.
D. Use Cloud Data Fusion for the video files, Dataflow for the data warehouse data, and Storage Transfer Service for the PNG files.
Show Answer
Correct Answer: B
Explanation:
The 200 TB of video files reside on on‑premises SAN storage, which is not a supported source for Storage Transfer Service and would be inefficient to move over the network at that scale. Google‑recommended practice for very large on‑premises datasets is to use Transfer Appliance. Amazon Redshift to BigQuery migration is natively supported and automated by BigQuery Data Transfer Service without custom code. The 20 GB of PNG files stored in Amazon S3 are best migrated using Storage Transfer Service, which is designed for S3‑to‑Cloud Storage transfers. This combination follows best practices and avoids writing any code.
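For the S3 portion, a Storage Transfer Service job can be sketched with gcloud (bucket names are placeholders; AWS credentials must be supplied separately, e.g. via a credentials file):

```bash
# One-time transfer of the PNG files from S3 to Cloud Storage.
gcloud transfer jobs create s3://my-png-bucket gs://my-destination-bucket
```

The Transfer Appliance shipment and the BigQuery Data Transfer Service Redshift migration are both configured through the console and likewise require no code.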
Question 94
You have deployed an application on a single Compute Engine instance. The application writes logs to disk. Users start reporting errors with the application. You want to diagnose the problem. What should you do?
A. Navigate to Cloud Logging and view the application logs.
B. Configure a health check on the instance and set a “consecutive successes” Healthy threshold value of 1.
C. Connect to the instance’s serial console and read the application logs.
D. Install and configure the Ops agent and view the logs from Cloud Logging.
Show Answer
Correct Answer: D
Explanation:
The application writes logs to the VM’s local disk. By default, Compute Engine instances do not automatically send application logs to Cloud Logging. Installing and configuring the Ops Agent is required to collect those on-disk logs and forward them to Cloud Logging, where you can then inspect them to diagnose the errors. Simply navigating to Cloud Logging without the agent would not show the application logs, and health checks or the serial console are not appropriate for routine log analysis.
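A sketch of the two steps on the VM (the log path is a placeholder for wherever the application writes):

```bash
# Install the Ops Agent using Google's documented installer script.
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
```

Then point the agent at the application's log files and restart it:

```yaml
# /etc/google-cloud-ops-agent/config.yaml — collect the on-disk logs.
logging:
  receivers:
    app_log:
      type: files
      include_paths:
        - /var/log/myapp/*.log  # placeholder path to the app's logs
  service:
    pipelines:
      app_pipeline:
        receivers: [app_log]
```

After `sudo systemctl restart google-cloud-ops-agent`, the logs appear in Cloud Logging under the configured receiver name.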
Question 215
You need to create a custom IAM role for use with a GCP service. All permissions in the role must be suitable for production use. You also want to clearly share with your organization the status of the custom role. This will be the first version of the custom role. What should you do?
A. Use permissions in your role that use the 'supported' support level for role permissions. Set the role stage to ALPHA while testing the role permissions.
B. Use permissions in your role that use the 'supported' support level for role permissions. Set the role stage to BETA while testing the role permissions.
C. Use permissions in your role that use the 'testing' support level for role permissions. Set the role stage to ALPHA while testing the role permissions.
D. Use permissions in your role that use the 'testing' support level for role permissions. Set the role stage to BETA while testing the role permissions.
Show Answer
Correct Answer: A
Explanation:
All permissions must be suitable for production use, which means only permissions with the SUPPORTED support level should be included. Permissions with TESTING support are explicitly not recommended for production. To clearly communicate the status of the custom role and because this is the first version, the appropriate launch stage is ALPHA, which is informational and commonly used for an initial iteration that is still being validated. Therefore, using SUPPORTED permissions with an ALPHA stage is the correct choice.
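A sketch of defining and creating such a role (the title, permissions, and role ID are placeholders; the listed permissions are at the SUPPORTED level):

```yaml
# role.yaml — first version of the custom role, marked ALPHA to
# communicate its status while it is being validated.
title: "Prod Service Operator"
description: "First iteration of the custom role"
stage: "ALPHA"
includedPermissions:
- compute.instances.get
- compute.instances.list
```

```bash
gcloud iam roles create prodServiceOperator \
  --project=my-project \
  --file=role.yaml
```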
Question 84
You are using Looker Studio to visualize a table from your data warehouse that is built on top of BigQuery. Data is appended to the data warehouse during the day. At night, the daily summary is recalculated by overwriting the table. You just noticed that the charts in Looker Studio are broken, and you want to analyze the problem. What should you do?
A. In Cloud Logging, create a filter for your Looker Studio report.
B. Use the open source CLI tool, Snapshot Debugger, to find out why the data was not refreshed correctly.
C. Review the Error Reporting page in the Google Cloud console to find any errors.
D. Use the BigQuery interface to review the nightly job and look for any errors.
Show Answer
Correct Answer: D
Explanation:
Looker Studio charts breaking after a nightly overwrite strongly suggests an issue with the BigQuery job that rebuilds the table (for example, schema changes, failed jobs, or partial writes). The most direct and relevant step is to inspect the BigQuery interface for the nightly job’s execution details and errors. Logging, Error Reporting, or debugging tools are not as appropriate for diagnosing data pipeline or query job failures in BigQuery.
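As a sketch, the nightly job can be located and inspected from the bq CLI as well as the console (the project and job ID are placeholders):

```bash
# List recent BigQuery jobs in the project.
bq ls --jobs --max_results=20 my-project

# Show the details and any error messages for the nightly job.
bq show --job my-project:US.bquxjob_placeholder_id
```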
Question 4
Your company is closely monitoring their cloud spend. You need to allow different teams to monitor their Google Cloud costs. You must ensure that team members receive notifications when their cloud spend reaches certain thresholds and give team members the ability to create dashboards for additional insights with detailed billing data. You want to follow Google-recommended practices and minimize engineering costs. What should you do?
A. Deploy Grafana to Compute Engine. Create a dashboard for each team that uses the data from the Cloud Billing API. Ask each team to create their own alerts in Cloud Monitoring.
B. Set up alerts for each team based on required thresholds. Create a shell script to read data from the Cloud Billing API, and push the results to BigQuery. Grant team members access to BigQuery.
C. Deploy Grafana to Compute Engine. Create a dashboard for each team that uses the data from the Cloud Billing Budget API. Ask each team to create their own alerts in Grafana.
D. Set up alerts for each team based on required thresholds. Set up billing exports to BigQuery. Grant team members access to BigQuery.
Show Answer
Correct Answer: D
Explanation:
Google-recommended best practice for cost monitoring is to use Cloud Billing Budgets and alerts for threshold notifications, and export detailed billing data to BigQuery for analysis. BigQuery billing export provides granular, queryable cost data with minimal engineering effort, and teams can build their own dashboards (for example with Looker Studio) without maintaining custom scripts or Grafana infrastructure. This minimizes operational overhead while meeting all requirements.
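Once the export is enabled, teams can query the detailed data directly. An illustrative query (the dataset name is a placeholder; the export table follows the `gcp_billing_export_v1_<BILLING_ACCOUNT_ID>` naming pattern):

```sql
-- Cost per project and service over the last 30 days,
-- from the standard Cloud Billing export table.
SELECT
  project.id AS project_id,
  service.description AS service,
  SUM(cost) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
WHERE usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY project_id, service
ORDER BY total_cost DESC;
```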