Question 1:
Your company runs one batch process in an on-premises server that
takes around 30 hours to complete. The task runs monthly, can be
performed offline, and must be restarted if interrupted. You want to
migrate this workload to the cloud while minimizing cost. What should
you do?

Migrate the workload to a Compute Engine Preemptible VM.

Migrate the workload to a Google Kubernetes Engine cluster with Preemptible nodes.

Migrate the workload to a Compute Engine VM. Start and stop the instance as needed. (Correct)

Create an Instance Template with Preemptible VMs On. Create a Managed Instance Group from the template and adjust Target CPU Utilization. Migrate the workload.

Explanation
A is incorrect because a preemptible VM is not fit for long-running tasks as it
can be terminated anytime.

B is incorrect because a preemptible VM is not fit for long-running tasks as it can be terminated anytime.

C is correct because migrating the job to Compute Engine is the best approach, and starting and stopping the VM as needed will save costs.

D is incorrect because a preemptible VM is not fit for long-running tasks as it can be terminated anytime.
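
As an illustrative sketch of option C (the VM name and zone are assumptions), the instance only accrues compute charges while it runs:

    # Start the VM before the monthly batch run
    gcloud compute instances start batch-vm --zone=us-central1-a
    # Stop it once the 30-hour job completes, to stop paying for compute
    gcloud compute instances stop batch-vm --zone=us-central1-a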

Links:
https://cloud.google.com/compute/all-pricing
https://cloud.google.com/preemptible-vms

Question 2:
You are developing a new application and are looking for a Jenkins
installation to build and deploy your source code. You want to automate
the installation as quickly and easily as possible. What should you do?

Deploy Jenkins through the Google Cloud Marketplace. (Correct)

Create a new Compute Engine instance. Run the Jenkins executable.

Create a new Kubernetes Engine cluster. Create a deployment for the Jenkins image.

Create an instance template with the Jenkins executable. Create a managed instance group with this template.

Explanation
A is correct because deploying Jenkins through the GCP marketplace is the
best way to deploy Jenkins quickly.

B is incorrect because it is not the fastest or easiest way.

C is incorrect because it is not the fastest or easiest way.

D is incorrect because it is not the fastest or easiest way.

Links:
https://cloud.google.com/marketplace

Question 3:
You have downloaded and installed the gcloud command-line interface (CLI)
and have authenticated with your Google Account. Most of your Compute
Engine instances in your project run in the europe-west1-d zone. You want to
avoid having to specify this zone with each CLI command when managing
these instances. What should you do?

Set the europe-west1-d zone as the default zone using the gcloud config subcommand. (Correct)

In the Settings page for Compute Engine under Default location, set the
zone to europe-west1-d.

In the CLI installation directory, create a file called default.conf containing zone=europe-west1-d.

Create a Metadata entry on the Compute Engine page with key compute/zone and value europe-west1-d.

Explanation
A is correct because setting the default zone will enable gcloud to use the
same zone for all gcloud services without having to specify it every time a
command is run.

B is incorrect because changing the settings in the GCP console does not
affect the gcloud command-line tool.

C is incorrect because the default zone is set through the command line.

D is incorrect because changing the Compute Engine metadata does not change the gcloud command-line config.
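
Option A maps to a single gcloud command; a minimal sketch:

    # Set the default zone for all subsequent gcloud commands
    gcloud config set compute/zone europe-west1-d
    # Verify the active configuration
    gcloud config list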

Links:
https://cloud.google.com/compute/docs/gcloud-compute#set-default-region-zone-environment-variables

Question 4:
The core business of your company is to rent out construction equipment at
large scale. All the equipment that is being rented out has been equipped with
multiple sensors that send event information every few seconds. These
signals can vary from engine status, distance traveled, fuel level, and more.
Customers are billed based on the consumption monitored by these sensors.
You expect high throughput (up to thousands of events per hour per device)
and need to retrieve consistent data based on the time of the event. Storing
and retrieving individual signals should be atomic. What should you do?

Create a file in Cloud Storage per device and append new data to that file.

Create a file in Cloud Filestore per device and append new data to that
file.

Ingest the data into Datastore. Store data in an entity group based on the
device.

Ingest the data into Cloud Bigtable. Create a row key based on the event timestamp. (Correct)

Explanation
A is incorrect because Cloud Storage is not the right choice for such a high
frequency of data.

B is incorrect because Cloud Filestore will not handle such a large amount of time-series data.

C is incorrect because Datastore is not the right choice for such a high
frequency of data.

D is correct because Cloud Bigtable is a petabyte-scale NoSQL database that is very good at storing and analyzing time-series data.

Links:
https://cloud.google.com/bigtable

Question 5:
You are asked to set up application performance monitoring on Google
Cloud projects A, B, and C as a single pane of glass. You want to monitor
CPU, memory, and disk. What should you do?

Enable API and then share charts from project A, B, and C.

Enable API and then give the metrics.reader role to projects A, B, and C.

Enable API and then use default dashboards to view all projects in
sequence.

Enable API, create a workspace under project A, and then add projects B and C. (Correct)

Explanation
A is incorrect because sharing charts from different projects is neither efficient nor safe.

B is incorrect because the metrics will reside in different projects and it will be
difficult to show them in a single dashboard.

C is incorrect because monitoring each project separately is not scalable.

D is correct because workspaces are made for monitoring multiple projects.

Links:
https://cloud.google.com/blog/products/management-tools/using-stackdriver-workspaces-help-manage-your-hybrid-and-multicloud-environment
https://cloud.google.com/monitoring/settings

Question 6:
You created several resources in multiple Google Cloud projects. All
projects are linked to different billing accounts. To better estimate future
charges, you want to have a single visual representation of all costs
incurred. You want to include new cost data as soon as possible. What
should you do?

Configure Billing Data Export to BigQuery and visualize the data in Data Studio. (Correct)

Visit the Cost Table page to get a CSV export and visualize it using Data
Studio.

Fill all resources in the Pricing Calculator to get an estimate of the monthly
cost.

Use the Reports view in the Cloud Billing Console to view the desired cost
information.

Explanation
A is correct because you can run analyses in BigQuery after exporting the billing reports from all projects to the same dataset.

B is incorrect because CSV export is a manual process and is neither efficient nor scalable.

C is incorrect because we need actual prices and not just estimates.

D is incorrect because the reports view will not show billing information of all
projects in the same window as the billing accounts are different.
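
Once the export is configured, the data can be queried directly; a minimal sketch, assuming a hypothetical dataset and export table name (the table itself is created by Billing Data Export):

    bq query --use_legacy_sql=false \
      'SELECT service.description AS service, SUM(cost) AS total_cost
       FROM `billing_dataset.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
       GROUP BY service ORDER BY total_cost DESC'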

Links:
https://cloud.google.com/billing/docs/how-to/export-data-bigquery

Question 7:
Your company has workloads running on Compute Engine and on-
premises. The Google Cloud Virtual Private Cloud (VPC) is connected to
your WAN over a Virtual Private Network (VPN). You need to deploy a
new Compute Engine instance and ensure that no public Internet traffic
can be routed to it. What should you do?

Create the instance without a public IP address. (Correct)

Create the instance with Private Google Access enabled.

Create a deny-all egress firewall rule on the VPC network.

Create a route on the VPC to route all traffic to the instance over the VPN
tunnel.

Explanation
A is correct because an instance without a public IP address is not accessible
through the internet.

B is incorrect because enabling Private Google Access does not prevent internet traffic from entering the VM.

C is incorrect because we need an ingress rule in this case and not an egress
rule.

D is incorrect because this is way too invasive and doesn't explicitly address
the issue of preventing public internet traffic from reaching your instance.
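
A minimal sketch of option A, with an illustrative instance name:

    # --no-address creates the instance without an external IP,
    # so no public internet traffic can be routed to it
    gcloud compute instances create private-vm \
        --zone=us-central1-a \
        --no-address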

Links:
https://medium.com/google-cloud/how-to-ssh-into-your-gce-machine-without-a-public-ip-4d78bd23309e

Question 8:
Your team maintains the infrastructure for your organization. The current
infrastructure requires changes. You need to share your proposed
changes with the rest of the team. You want to follow Google's
recommended best practices. What should you do?

Use Deployment Manager templates to describe the proposed changes and store them in a Cloud Storage bucket.

Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories. (Correct)

Apply the changes in a development environment, run gcloud compute instances list, and then save the output in a shared Storage bucket.

Apply the changes in a development environment, run gcloud compute instances list, and then save the output in Cloud Source Repositories.

Explanation
A is incorrect because Deployment Manager is used to make changes to the infrastructure, but the templates should be versioned using a version control system and not Cloud Storage.

B is correct because Deployment Manager is used to make changes to the infrastructure, and the templates should be versioned using a version control system like Cloud Source Repositories rather than Cloud Storage.

C is incorrect because applying changes before the review is not a best practice.

D is incorrect because applying changes before the review is not a best practice.
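
A sketch of the flow in option B (repository, deployment, and file names are assumptions):

    # Version the templates in Cloud Source Repositories
    gcloud source repos create infra-templates
    # Preview the proposed changes from a template without applying them
    gcloud deployment-manager deployments create proposed-changes \
        --config=config.yaml --preview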

Links:
https://cloud.google.com/deployment-manager/docs
https://github.com/GoogleCloudPlatform/deploymentmanager-samples

Question 9:
You have a Compute Engine instance hosting an application used
between 9 AM and 6 PM on weekdays. You want to back up this instance
daily for disaster recovery purposes. You want to keep the backups for
30 days. You want the Google-recommended solution with the least
management overhead and the least number of services. What should
you do?

1. Update your instances' metadata to add the following value: snapshot-schedule: '0 1 * * *'
2. Update your instances' metadata to add the following value: snapshot-retention: '30'

1. In the Cloud Console, go to the Compute Engine Disks page and select your instance's disk.
2. In the Snapshot Schedule section, select Create Schedule and configure the following parameters: Schedule frequency: Daily; Start Time: 1:00 AM - 2:00 AM; Autodelete snapshots after: 30 days. (Correct)

1. Create a Cloud Function that creates a snapshot of your instance's disk.
2. Create a Cloud Function that deletes snapshots that are older than 30 days.
3. Use Cloud Scheduler to trigger both Cloud Functions daily at 1:00 AM.

1. Create a bash script in the instance that copies the content of the disk
to Cloud Storage.
2. Create a bash script in the instance that deletes data older than 30
days in the backup Cloud Storage bucket.
3. Configure the instance's crontab to execute these scripts daily at 1:00
AM.

Explanation
A is incorrect because there is no need to update the instance metadata to
schedule backups.

B is correct because the snapshot schedule feature allows periodic backups of VMs, and it also has an auto-delete feature.

C is incorrect because there is no need to create a Cloud Function for backing up VMs.

D is incorrect because there is no need to create a bash script for backing up VMs.
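
The same schedule can also be expressed with gcloud; a sketch with illustrative resource names:

    # Daily snapshot schedule starting at 1:00 AM, auto-deleted after 30 days
    gcloud compute resource-policies create snapshot-schedule daily-backup \
        --region=us-central1 \
        --daily-schedule \
        --start-time=01:00 \
        --max-retention-days=30
    # Attach the schedule to the instance's disk
    gcloud compute disks add-resource-policies my-disk \
        --zone=us-central1-a \
        --resource-policies=daily-backup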

Links:
https://cloud.google.com/compute/docs/disks/scheduled-snapshots

Question 10:
Your existing application running in Google Kubernetes Engine (GKE) consists
of multiple pods running on four GKE n1-standard-2 nodes. You need to
deploy additional pods requiring n2-highmem-16 nodes without any
downtime. What should you do?

Use gcloud container clusters upgrade. Deploy the new services.

Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods. (Correct)

Create a new cluster with n2-highmem-16 nodes. Redeploy the pods and
delete the old cluster.

Create a new cluster with both n1-standard-2 and n2-highmem-16 nodes. Redeploy the pods and delete the old cluster.

Explanation
A is incorrect because you need to create a new node pool for the new pods
as they require different types of instances.

B is correct because you can add new types of instances to the GKE cluster
by adding node pools. It will not cause any downtime to the existing cluster.

C is incorrect because there is no need to create a new cluster for it.

D is incorrect because there is no need to create a new cluster for it.
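
A minimal sketch of option B (cluster, pool, and zone names are assumptions):

    # Add a node pool with the larger machine type; existing nodes keep serving
    gcloud container node-pools create highmem-pool \
        --cluster=my-cluster \
        --zone=us-central1-a \
        --machine-type=n2-highmem-16 \
        --num-nodes=1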

Links:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools

Question 11:
You have an application that uses Cloud Spanner as a database backend
to keep current state information about users. Cloud Bigtable logs all
events triggered by users. You export Cloud Spanner data to Cloud
Storage during daily backups. One of your analysts asks you to join data
from Cloud Spanner and Cloud Bigtable for specific users. You want to
complete this ad hoc request as efficiently as possible. What should you
do?

Create a dataflow job that copies data from Cloud Bigtable and Cloud
Storage for specific users.

Create a dataflow job that copies data from Cloud Bigtable and Cloud
Spanner for specific users.

Create a Cloud Dataproc cluster that runs a Spark job to extract data from
Cloud Bigtable and Cloud Storage for specific users.

Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters. (Correct)

Explanation
A is incorrect because creating a Dataflow job can require significant effort.

B is incorrect because creating a Dataflow job can require significant effort.

C is incorrect because using Dataproc and Spark can require significant effort
and time.

D is correct because BigQuery supports analytics on data through external tables from Cloud Storage and Bigtable. It is perfect for this use case.

Links:
https://cloud.google.com/bigquery/external-data-sources

Question 12:
You are hosting an application from Compute Engine virtual machines (VMs)
in us-central1-a. You want to adjust your design to support the failure of a
single Compute Engine zone, eliminate downtime, and minimize cost. What
should you do?

1. Create Compute Engine resources in us-central1-b.
2. Balance the load across both us-central1-a and us-central1-b. (Correct)

1. Create a Managed Instance Group and specify us-central1-a as the zone.
2. Configure the Health Check with a short Health Interval.

1. Create an HTTP(S) Load Balancer.
2. Create one or more global forwarding rules to direct traffic to your VMs.

1. Perform regular backups of your application.
2. Create a Cloud Monitoring Alert and be notified if your application becomes unavailable.
3. Restore from backups when notified.

Explanation
A is correct because, to remediate the single point of failure, we have to replicate VMs across multiple zones.

B is incorrect because a health check will not be helpful if the zone goes
down.

C is incorrect because creating a load balancer does not automatically provide high availability.

D is incorrect because backing up is a good practice, but it does not help in case of zone failure.

Question 13:
A colleague handed over a Google Cloud Platform project for you to
maintain. As part of a security checkup, you want to review who has
been granted the Project Owner role. What should you do?

In the console, validate which SSH keys have been stored as project-wide
keys.

Navigate to Identity-Aware Proxy and check the permissions for these resources.

Enable Audit Logs on the IAM & admin page for all resources, and validate
the results.

Use the command "gcloud projects get-iam-policy" to view the


(Correct)
current role assignments.

Explanation
A is incorrect because SSH keys and IAM roles have no connection between
them.

B is incorrect because IAP roles and project owner roles are two different
types of roles.

C is incorrect because the requirement is to see who currently has the owner
role and Audit logs will show historical data.

D is correct because viewing the role assignments in the command line is the
fastest and easiest way to check who has what role.

Links:
https://groups.google.com/g/google-cloud-dev/c/Z6sZs7TvygQ?pli=1

Question 14:
You are running multiple VPC-native Google Kubernetes Engine clusters
in the same subnet. The IPs available for the nodes are exhausted, and
you want to ensure that the clusters can grow in nodes when needed.
What should you do?

Create a new subnet in the same region as the subnet being used.

Add an alias IP range to the subnet used by the GKE clusters.

Create a new VPC, and set up VPC peering with the existing VPC.

Expand the CIDR range of the relevant subnet for the cluster. (Correct)

Explanation
A is incorrect because there is no need to create a new subnet and migrate all
nodes to it as subnet IP ranges can be expanded.

B is incorrect because adding an alias IP range does not expand the subnet.

C is incorrect because there is no need to create a new VPC.

D is correct because every subnet must have a primary IP address range. You
can expand the primary IP address range at any time, even when Google
Cloud resources use the subnet; however, you cannot shrink or change a
subnet's primary IP address scheme after the subnet has been created. The
first two and last two IP addresses of a primary IP address range are reserved
by Google Cloud.
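
A minimal sketch of option D (subnet name, region, and prefix length are assumptions):

    # Expand the subnet's primary range, for example to /20
    gcloud compute networks subnets expand-ip-range my-subnet \
        --region=europe-west1 \
        --prefix-length=20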

Links:
https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips
Question 15:
You have a batch workload that runs every night and uses a large number
of virtual machines (VMs). It is fault-tolerant and can tolerate some of
the VMs being terminated. The current cost of VMs is too high. What
should you do?

Run a test using simulated maintenance events. If the test is successful, use preemptible N1 Standard VMs when running future jobs. (Correct)

Run a test using simulated maintenance events. If the test is successful, use N1 Standard VMs when running future jobs.

Run a test using a managed instance group. If the test is successful, use
N1 Standard VMs in the managed instance group when running future
jobs.

Run a test using N1 standard VMs instead of N2. If the test is successful,
use N1 Standard VMs when running future jobs.

Explanation
A is correct because preemptible VMs can provide up to 80% discount over
normal VMs if the workloads are fault-tolerant.

B is incorrect because N1 Standard VMs do not save as much cost as preemptible VMs.

C is incorrect because a managed instance group does not provide as much cost savings as preemptible VMs.

D is incorrect because N1 Standard VMs do not save as much cost as preemptible VMs.
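
A sketch of option A with illustrative names; the simulated event lets you verify the job survives an interruption before switching:

    # Test how the workload reacts to the instance being disrupted
    gcloud compute instances simulate-maintenance-event batch-vm-1 \
        --zone=us-central1-a
    # If the test passes, run future jobs on preemptible VMs
    gcloud compute instances create batch-vm-2 \
        --zone=us-central1-a \
        --machine-type=n1-standard-8 \
        --preemptible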

Links:
https://cloud.google.com/preemptible-vms

Question 16:
You are working with a user to set up an application in a new VPC behind
a firewall. The user is concerned about data egress. You want to
configure the fewest open egress ports. What should you do?

Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports. (Correct)

Set up a high-priority (1000) rule that pairs both ingress and egress ports.

Set up a high-priority (1000) rule that blocks all egress and a low-priority
(65534) rule that allows only the appropriate ports.

Set up a high-priority (1000) rule to allow the appropriate ports.

Explanation
A is correct because the implied egress rule (action allow, destination 0.0.0.0/0, lowest possible priority 65535) lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. Any firewall rule overrides this default egress rule, so we need a low-priority rule that blocks all egress plus higher-priority rules that open only the needed ports.

B is incorrect because a rule can be either for ingress or egress, not both.

C is incorrect because we need a low-priority rule that blocks all traffic.

D is incorrect because all egress ports are allowed by default so this rule does
nothing.
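
A sketch of option A (network name, ports, and ranges are assumptions):

    # Low-priority rule blocking all egress
    gcloud compute firewall-rules create deny-all-egress \
        --network=my-vpc --direction=EGRESS --action=DENY \
        --rules=all --destination-ranges=0.0.0.0/0 --priority=65534
    # Higher-priority rule opening only the ports that are needed
    gcloud compute firewall-rules create allow-needed-egress \
        --network=my-vpc --direction=EGRESS --action=ALLOW \
        --rules=tcp:443 --destination-ranges=0.0.0.0/0 --priority=1000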

Links:
https://cloud.google.com/vpc/docs/firewalls

Question 17:
Your company runs its Linux workloads on Compute Engine instances.
Your company will be working with a new operations partner that does
not use Google Accounts. You need to grant access to the instances to
your operations partner so they can maintain the installed tooling. What
should you do?

Enable Cloud IAP for the Compute Engine instances, and add the
operations partner as a Cloud IAP Tunnel User.

Tag all the instances with the same network tag. Create a firewall rule in
the VPC to grant TCP access on port 22 for traffic from the operations
partner to instances with the network tag.

Set up Cloud VPN between your Google Cloud VPC and the internal
network of the operations partner.

Ask the operations partner to generate SSH key pairs, and add the public keys to the VM instances. (Correct)

Explanation
A is incorrect because the operations partner does not have a Google account.

B is incorrect because creating a firewall rule does not grant access to the
VMs, as authentication is also required.

C is incorrect because setting up VPN does not grant access to VMs.

D is correct because the operations partner can use SSH keys to SSH into
the VMs.
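
A sketch of option D; the username and the truncated key are placeholders:

    # Add the partner's public key to the VM's metadata
    gcloud compute instances add-metadata tooling-vm \
        --zone=us-central1-a \
        --metadata=ssh-keys="partner:ssh-rsa AAAA... partner"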

Links:
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
https://cloud.google.com/compute/docs/instances/access-overview#managing_user_access

Question 18:
You have created a code snippet that should be triggered whenever a
new file is uploaded to a Cloud Storage bucket. You want to deploy this
code snippet. What should you do?

Use App Engine and configure Cloud Scheduler to trigger the application
using Pub/Sub.

Use Cloud Functions and configure the bucket as a trigger resource. (Correct)

Use Google Kubernetes Engine and configure a CronJob to trigger the application using Pub/Sub.

Use Dataflow as a batch job, and configure the bucket as a data source.

Explanation
A is incorrect because even though it could work, it is not the simplest way; Cloud Scheduler triggers on a schedule rather than on each upload.

B is correct because Cloud Functions can respond to change notifications emerging from Google Cloud Storage. These notifications can be configured to trigger in response to various events inside a bucket: object creation, deletion, archiving, and metadata updates.

C is incorrect because using Kubernetes for such a small workload is not efficient.

D is incorrect because using Dataflow for such a small workload is not efficient.
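
A minimal sketch of option B for a 1st-gen function (function, bucket, and runtime names are assumptions):

    # Deploy a function that fires whenever a new object is finalized in the bucket
    gcloud functions deploy process_new_file \
        --runtime=python39 \
        --trigger-resource=my-upload-bucket \
        --trigger-event=google.storage.object.finalize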

Links:
https://cloud.google.com/functions/docs/calling/storage#event_types

Question 19:
You have been asked to set up Object Lifecycle Management for objects
stored in storage buckets. The objects are written once and accessed
frequently for 30 days. After 30 days, the objects are not read again
unless there is a special need. The objects should be kept for three
years, and you need to minimize cost. What should you do?

Set up a policy that uses Nearline storage for 30 days and then moves to
Archive storage for three years.

Set up a policy that uses Standard storage for 30 days and then moves to Archive storage for three years. (Correct)

Set up a policy that uses Nearline storage for 30 days, then moves the
Coldline for one year, and then moves to Archive storage for two years.

Set up a policy that uses Standard storage for 30 days, then moves to
Coldline for one year, and then moves to Archive storage for two years.

Explanation
A is incorrect because a Nearline storage bucket should not be accessed
frequently.

B is correct because if the object is not going to be used after 30 days, it makes sense to archive it.

C is incorrect because a Nearline storage bucket should not be accessed frequently.

D is incorrect because Archive storage is more cost-efficient for this use case than an intermediate Coldline step.
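
A sketch of the lifecycle policy in option B (the bucket name is an assumption; three years is expressed as 1095 days):

    cat > lifecycle.json <<'EOF'
    {
      "rule": [
        {"action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
         "condition": {"age": 30}},
        {"action": {"type": "Delete"}, "condition": {"age": 1095}}
      ]
    }
    EOF
    gsutil lifecycle set lifecycle.json gs://my-bucket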

Links:
https://cloud.google.com/storage/docs/storage-classes#standard

Question 20:
You are storing sensitive information in a Cloud Storage bucket. For legal
reasons, you need to be able to record all requests that read any of the
stored data. You want to make sure you comply with these requirements.
What should you do?

Enable the Identity Aware Proxy API on the project.

Scan the bucket using the Data Loss Prevention API.

Allow only a single Service Account access to read the data.

Enable Data Access audit logs for the Cloud Storage API. (Correct)

Explanation
A is incorrect because IAP does not offer data access logs.

B is incorrect because scanning the bucket using DLP API will not log access
requests by users.

C is incorrect because allowing a single service account to read the data might not be possible.

D is correct because Data Access logs contain entries for operations that modify objects or read a project, bucket, or object. There are several sub-types of Data Access logs:
ADMIN_READ: Entries for operations that read the configuration or metadata of a project, bucket, or object.
DATA_READ: Entries for operations that read an object.
DATA_WRITE: Entries for operations that create or modify an object.
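
A sketch of enabling these logs by editing the project IAM policy (the project ID is an assumption; the auditConfigs block follows the documented policy format):

    gcloud projects get-iam-policy my-project --format=yaml > policy.yaml
    # Add to policy.yaml:
    #   auditConfigs:
    #   - service: storage.googleapis.com
    #     auditLogConfigs:
    #     - logType: DATA_READ
    #     - logType: DATA_WRITE
    gcloud projects set-iam-policy my-project policy.yaml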

Links:
https://cloud.google.com/storage/docs/audit-logs#types

Question 21:
You are the team lead of a group of 10 developers. You provided each
developer with an individual Google Cloud Project that they can use as
their personal sandbox to experiment with different Google Cloud
solutions. You want to be notified if any of the developers are spending
above $500 per month on their sandbox environment. What should you
do?

Create a single budget for all projects and configure budget alerts on this
budget.

Create a separate billing account per sandbox project and enable BigQuery billing exports. Create a Data Studio dashboard to plot the spending per billing account.

Create a budget per project and configure budget alerts on all of these budgets. (Correct)

Create a single billing account for all sandbox projects and enable
BigQuery billing exports. Create a Data Studio dashboard to plot the
spending per project.

Explanation
A is incorrect because we need individual budget alerts per project.

B is incorrect because creating a separate billing account for each project may not be feasible or scalable.

C is correct because you can create a budget per project to get notified for every project that goes over budget.

D is incorrect because Data Studio does not provide alerts.
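
A sketch of option C for one sandbox (the billing account ID and project number are placeholders); repeat per project:

    gcloud billing budgets create \
        --billing-account=0X0X0X-0X0X0X-0X0X0X \
        --display-name="dev1-sandbox-budget" \
        --budget-amount=500USD \
        --filter-projects="projects/123456789" \
        --threshold-rule=percent=0.9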

Links:
https://cloud.google.com/billing/docs/how-to/budgets

Question 22:
You are deploying a production application on Compute Engine. You want
to prevent anyone from accidentally destroying the instance by clicking
the wrong button. What should you do?

Disable the flag 'Delete boot disk when instance is deleted.'

Enable delete protection on the instance. (Correct)

Disable Automatic restart on the instance.

Enable Preemptibility on the instance.

Explanation
A is incorrect because it will not prevent the VM from being deleted.

B is correct because delete protection protects the VMs from being accidentally deleted.

C is incorrect because automatic restart does not prevent the VM from being
deleted.

D is incorrect because it does not prevent the VM from being deleted.
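
A minimal sketch of option B for an existing instance (name and zone are assumptions):

    gcloud compute instances update prod-vm \
        --zone=us-central1-a \
        --deletion-protection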

Links:
https://cloud.google.com/compute/docs/instances/preventing-accidental-vm-deletion

Question 23:
You are working with a Cloud SQL MySQL database at your company. You
need to retain a month-end copy of the database for three years for audit
purposes. What should you do?

Set up an export job for the first of the month. Write the export file to an
Archive class Cloud Storage bucket.

Save the automatic first-of-the-month backup for three years. Store the backup file in an Archive class Cloud Storage bucket. (Correct)

Set up an on-demand backup for the first of the month. Write the backup
to an Archive class Cloud Storage bucket.

Convert the automatic first-of-the-month backup to an export file. Write the export file to a Coldline class Cloud Storage bucket.

Explanation
A is incorrect because there is no need to create an export job as the export
functionality is built-in with Cloud SQL.

B is correct because automatic backups are managed by Cloud SQL according to retention policies and are stored separately from the Cloud SQL instance; the first-of-the-month backup can then be saved in an Archive class bucket.

C is incorrect because you cannot set up an "on-demand" backup. Users


would have to make backups manually every month. Also, you cannot choose
your Archival storage as a destination

D is incorrect because you cannot convert a backup to an export file. Also, the Coldline class is less cost-effective than the Archive class.

Links:
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups

Question 24:
Your company uses a large number of Google Cloud services centralized
in a single project. All teams have specific projects for testing and
development. The DevOps team needs access to all of the production
services in order to perform their job. You want to prevent Google Cloud
product changes from broadening their permissions in the future. You
want to follow Google-recommended practices. What should you do?

Grant all members of the DevOps team the role of Project Editor on the
organization level.

Grant all members of the DevOps team the role of Project Editor on the
production project.

Create a custom role that combines the required permissions. Grant the DevOps team the custom role on the production project. (Correct)

Create a custom role that combines the required permissions. Grant the
DevOps team the custom role on the organization level.

Explanation
A is incorrect because granting the role at the organization level will grant the
DevOps team access to all projects in the organization. It could be a security
risk.

B is incorrect because the editor role is too broad and should not be assigned.

C is correct because a custom role should be created with all required permissions and granted to the DevOps team in the production project.

D is incorrect because granting the role at the organization level will grant the
DevOps team access to all projects in the organization. It could be a security
risk.
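
A sketch of option C (role ID, permissions, project, and group are assumptions):

    # Create the custom role with only the permissions the team needs
    gcloud iam roles create devOpsProd --project=prod-project \
        --title="DevOps Production" \
        --permissions=compute.instances.get,compute.instances.list
    # Grant it to the DevOps team on the production project only
    gcloud projects add-iam-policy-binding prod-project \
        --member="group:devops@example.com" \
        --role="projects/prod-project/roles/devOpsProd"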

Links:
https://cloud.google.com/iam/docs/understanding-custom-roles#basic_concepts

Question 25:
You are building an application that processes data files uploaded from
thousands of suppliers. Your primary goals for the application are data
security and the expiration of aged data. You need to design the application
to:
1. Restrict access so that suppliers can access only their own data.
2. Give suppliers write access to data only for 30 minutes.
3. Delete data that is over 45 days old.
You have a very short development cycle, and you need to make sure that the
application requires minimal maintenance. Which two strategies should you
use? (Choose two.)

Build a lifecycle policy to delete Cloud Storage objects after 45 days. (Correct)

Use signed URLs to allow suppliers limited time access to store their objects. (Correct)

Set up an SFTP server for your application, and create a separate user for
each supplier.

Build a Cloud Function that triggers a timer of 45 days to delete objects that have expired.

Develop a script that loops through all Cloud Storage buckets and deletes
any buckets that are older than 45 days.

Explanation
A is correct because a lifecycle policy can be used to delete data that is more
than 45 days old.

B is correct because signed URLs allow limited-time access to Cloud Storage buckets.

C is incorrect because an SFTP server does nothing in this case, as the files are on Cloud Storage.

D is incorrect because there is no need to build a Cloud Function to delete Cloud Storage objects.

E is incorrect because there is no need to develop a script.
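
A sketch of the two correct strategies (bucket, key file, and object names are assumptions):

    # Lifecycle rule deleting objects older than 45 days
    cat > lifecycle.json <<'EOF'
    {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 45}}]}
    EOF
    gsutil lifecycle set lifecycle.json gs://supplier-data
    # Signed URL letting a supplier upload one object for 30 minutes
    gsutil signurl -m PUT -d 30m sa-key.json gs://supplier-data/supplier-1/data.csv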

Links:
https://cloud.google.com/storage/docs/lifecycle#delete
https://cloud.google.com/storage/docs/access-control/signed-urls

Question 26:
Your company wants to standardize the creation and management of
multiple Google Cloud resources using Infrastructure as Code. You want
to minimize the amount of repetitive code needed to manage the
environment. What should you do?

Develop templates for the environment using Cloud Deployment Manager. (Correct)

Use curl in a terminal to send a REST request to the relevant Google API
for each individual resource.

Use the Cloud Console interface to provision and manage all related
resources.

Create a bash script that contains all required steps as gcloud commands.

Explanation
A is correct because Cloud Deployment Manager can be used to develop
templates that can be applied to multiple environments.

B is incorrect because using curl to call GCP APIs is neither efficient nor recommended.

C is incorrect because we want to use an Infrastructure as Code tool.

D is incorrect because creating a bash script to call gcloud commands is not efficient, and it is not a proper Infrastructure as Code tool.

Links:
https://cloud.google.com/deployment-manager/docs/fundamentals

Question 27:
You are performing a monthly security check of your Google Cloud
environment and want to know who has access to view data stored in
your Google Cloud Project. What should you do?

Enable Audit Logs for all APIs that are related to data storage.

Review the IAM permissions for any role that allows for data access. (Correct)

Review the Identity-Aware Proxy settings for each resource.

Create a Data Loss Prevention job.

Explanation
A is incorrect because checking the API logs does not give a list of users who
have access to view data.

B is correct because the IAM permissions will show which users have read
access.

C is incorrect because the IAP settings will not give a definitive list of users as
some services like Cloud Storage do not use IAP.

D is incorrect because creating a Data Loss prevention job does not tell you
who has access to data.

Links:
https://cloud.google.com/compute/docs/access

Question 28:
Your company has embraced a hybrid cloud strategy where some of the
applications are deployed on Google Cloud. A Virtual Private Network
(VPN) tunnel connects your Virtual Private Cloud (VPC) in Google Cloud
with your company's on-premises network. Multiple applications in
Google Cloud need to connect to an on-premises database server, and
you want to avoid having to change the IP configuration in all of your
applications when the IP of the database changes. What should you do?

Configure Cloud NAT for all subnets of your VPC to be used when
egressing from the VM instances.

Create a private zone on Cloud DNS, and configure the applications with the DNS name. (Correct)

Configure the IP of the database as custom metadata for each instance, and query the metadata server.

Query the Compute Engine internal DNS from the applications to retrieve
the IP of the database.

Explanation
A is incorrect because Cloud NAT is used to provide internet access to
resources and that’s not the requirement here.

B is correct because Cloud DNS forwarding zones let you configure target
name servers for specific private zones. Using a forwarding zone is one way to
implement outbound DNS forwarding from your VPC network.

C is incorrect because the custom metadata will need to be updated whenever the IP address of the database changes.

D is incorrect because querying the Compute engine DNS does not help
because the database server is on-premises.
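
A sketch of option B with illustrative network, domain, and IP values:

    # Private zone visible only to the VPC
    gcloud dns managed-zones create corp-zone \
        --description="Private zone for on-prem services" \
        --dns-name=corp.example.com. \
        --visibility=private \
        --networks=my-vpc
    # Stable name for the database; only this record changes if the IP changes
    gcloud dns record-sets create db.corp.example.com. \
        --zone=corp-zone --type=A --ttl=300 --rrdatas=10.128.0.5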

Links:
https://gcloud.devoteam.com/blog/google-cloud-platform-dns-forwarding-big-thing-enterprises

Question 29:
You have developed a containerized web application that will serve
internal colleagues during business hours. You want to ensure that no
costs are incurred outside of the hours the application is used. You have
just created a new Google Cloud project and want to deploy the
application. What should you do?

Deploy the container on Cloud Run for Anthos, and set the minimum
number of instances to zero.

Deploy the container on Cloud Run (fully managed), and set the minimum number of instances to zero. (Correct)

Deploy the container on App Engine flexible environment with autoscaling, and set the value min_instances to zero in the app.yaml.

Deploy the container on App Engine flexible environment with manual scaling, and set the value instances to zero in the app.yaml.

Explanation
A is incorrect because we don’t need Anthos for this.

B is correct because Cloud Run charges you only for the resources you use,
rounded up to the nearest 100 milliseconds. Note that each of these
resources has a free tier.

C is incorrect because the App Engine flexible environment does not scale
down to zero when not in use.

D is incorrect because the App Engine flexible environment does not scale
down to zero when not in use.
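
A minimal sketch of option B (service and image names are assumptions; zero is also the default minimum):

    gcloud run deploy internal-app \
        --image=gcr.io/my-project/internal-app \
        --region=us-central1 \
        --platform=managed \
        --min-instances=0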

Links:
https://cloud.google.com/run/pricing

Question 30:
You have experimented with Google Cloud using your own credit card
and expensed the costs to your company. Your company wants to
streamline the billing process and charge the costs of your projects to
their monthly invoice. What should you do?

Grant the financial team the IAM role of 'Billing Account User' on the
billing account linked to your credit card.

Set up BigQuery billing export and grant your financial department IAM
access to query the data.

Create a ticket with Google Billing Support to ask them to send the invoice
to your company.

Change the billing account of your projects to the billing account of your company. (Correct)

Explanation
A is incorrect because we need to migrate the billing of the project to the
company's billing account and not the other way round.

B is incorrect because setting up BigQuery billing export does not migrate billing.

C is incorrect because there is no need to create a ticket with Google Support for this.

D is correct because changing the billing account to the company’s billing account will enable the company to get a single invoice.
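
A minimal sketch of option D (the project ID and billing account ID are placeholders):

    gcloud billing projects link my-project \
        --billing-account=0X0X0X-0X0X0X-0X0X0X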

Links:
https://cloud.google.com/billing/docs/how-to/modify-project#change_the_billing_account_for_a_project

Question 31:
You are running a data warehouse on BigQuery. A partner company is
offering a recommendation engine based on the data in your data
warehouse. The partner company is also running their application on
Google Cloud. They manage the resources in their own project, but they
need access to the BigQuery dataset in your project. You want to provide
the partner company with access to the dataset. What should you do?

Create a Service Account in your own project, and grant this Service
Account access to BigQuery in your project.

Create a Service Account in your own project, and ask the partner to grant
this Service Account access to BigQuery in their project.

Ask the partner to create a Service Account in their project, and have
them give the Service Account access to BigQuery in their project.

Ask the partner to create a Service Account in their project, and grant their Service Account access to the BigQuery dataset in your project. (Correct)

Explanation
A is incorrect because the partner company needs to create a service account
as they own the application.

B is incorrect because the partner company needs to create a service account as they own the application.

C is incorrect because access to BigQuery has to be granted in your project, as the data resides in your project and not the partner’s.

D is correct because access to BigQuery has to be granted in your project, as the data resides in your project and not the partner’s.
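
A project-level sketch of option D (the service account email and project ID are assumptions; the same role can instead be granted on the individual dataset):

    gcloud projects add-iam-policy-binding my-project \
        --member="serviceAccount:recommender@partner-project.iam.gserviceaccount.com" \
        --role="roles/bigquery.dataViewer"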

Links:
https://cloud.google.com/dataprep/docs/concepts/cross-bq-datasets
https://cloud.google.com/bigquery/docs/dataset-access-controls

Question 32:
Your web application has been running successfully on Cloud Run for
Anthos. You want to evaluate an updated version of the application with a
specific percentage of your production users (canary deployment). What
should you do?

Create a new service with the new version of the application. Split traffic
between this version and the version that is currently running.

Create a new revision with the new version of the application. Split traffic between this version and the version that is currently running. (Correct)

Create a new service with the new version of the application. Add HTTP
Load Balancer in front of both services.

Create a new revision with the new version of the application. Add HTTP
Load Balancer in front of both revisions.

Explanation
A is incorrect because you need to create a new revision of the same service
instead of creating a separate service to split traffic.

B is correct because you need to create a new revision of the same service
instead of creating a separate service to split traffic.

C is incorrect because Cloud Run provides support for traffic splitting by default and you don’t need to use a load balancer for it.

D is incorrect because Cloud Run provides support for traffic splitting by default and you don’t need to use a load balancer for it.
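
A sketch of the canary split in option B (service name and percentage are assumptions; Cloud Run for Anthos additionally needs the cluster flags):

    # After deploying the new revision, send 10% of traffic to it
    gcloud run services update-traffic my-service \
        --to-revisions=LATEST=10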

Links:
https://cloud.google.com/run/docs/rollouts-rollbacks-traffic-migration
https://servian.dev/3-best-features-of-google-cloud-run-546e367242ea?gi=7a48b2bda8a7

Question 33:
Your company developed a mobile game that is deployed on Google
Cloud. Gamers are connecting to the game with their personal phones
over the Internet. The game sends UDP packets to update the servers
about the gamers' actions while they are playing in multiplayer mode.
Your game backend can scale over multiple virtual machines (VMs), and
you want to expose the VMs over a single IP address. What should you
do?

Configure an SSL Proxy load balancer in front of the application servers.

Configure an Internal UDP load balancer in front of the application servers.

Configure an External HTTP(s) load balancer in front of the application servers.

Configure an External Network load balancer in front of the application servers. (Correct)

Explanation
A is incorrect because the SSL proxy load balancer does not support UDP.

B is incorrect because the application needs public internet exposure, which is not provided by the internal load balancer.

C is incorrect because HTTP(s) load balancer is for TCP and not UDP.

D is correct because the External Network Load Balancer exposes the traffic to the internet and it supports UDP.
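
A sketch of option D (names, zone, and port are assumptions):

    # Backends grouped in a target pool
    gcloud compute target-pools create game-pool --region=us-central1
    gcloud compute target-pools add-instances game-pool \
        --instances=game-vm-1,game-vm-2 --instances-zone=us-central1-a
    # Single external IP forwarding UDP traffic to the pool
    gcloud compute forwarding-rules create game-udp \
        --region=us-central1 --ip-protocol=UDP --ports=9000 \
        --target-pool=game-pool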

Links:
https://cloud.google.com/load-balancing/docs/network

Question 34:
You are working for a hospital that stores its medical images in an on-
premises data room. The hospital wants to use Cloud Storage for archival
storage of these images. The hospital wants an automated process to
upload any new medical images to Cloud Storage. You need to design
and implement a solution. What should you do?

Create a Pub/Sub topic, and enable a Cloud Storage trigger for the
Pub/Sub topic. Create an application that sends all medical images to the
Pub/Sub topic.

Deploy a Dataflow job from the batch template, 'Datastore to Cloud Storage.' Schedule the batch job on the desired interval.

Create a script that uses the gsutil command line interface to synchronize the on-premises storage with Cloud Storage. Schedule the script as a cron job. (Correct)

In the Cloud Console, go to Cloud Storage. Upload the relevant images to the appropriate bucket.

Explanation
A is incorrect because there is no need for Pub/Sub, as you only need to upload the files to Cloud Storage.

B is incorrect because there is no need for Dataflow for this.

C is correct because gsutil is the right tool to synchronize Cloud Storage with
an on-premises file system and automating the upload with a shell script is
fairly easy.

D is incorrect because it is not automated.
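
A sketch of option C (the local path and bucket are assumptions):

    # Sync new images to the bucket
    gsutil -m rsync -r /data/medical-images gs://hospital-archive
    # Example crontab entry running the sync nightly at 2:00 AM:
    # 0 2 * * * gsutil -m rsync -r /data/medical-images gs://hospital-archive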

Links:
https://cloud.google.com/storage/docs/gsutil/commands/rsync

Question 35:
Your auditor wants to view your organization's use of data in Google
Cloud. The auditor is most interested in auditing who accessed data in
Cloud Storage buckets. You need to help the auditor access the data they
need. What should you do?

Turn on Data Access Logs for the buckets they want to audit, and then build a query in the log viewer that filters on Cloud Storage. (Correct)

Assign the appropriate permissions, and then create a Data Studio report
on Admin Activity Audit Logs.

Assign the appropriate permissions, and then use Cloud Monitoring to review metrics.

Use the export logs API to provide the Admin Activity Audit Logs in the
format they want.

Explanation
A is correct because information about users accessing data is available
through Data Access Logs.

B is incorrect because Admin Activity logs don’t contain data access information.

C is incorrect because Cloud Monitoring does not provide information on data access.

D is incorrect because Admin Activity logs don’t contain data access information.
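
A sketch of the query step in option A (the filter is illustrative):

    gcloud logging read \
        'logName:"cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"' \
        --limit=10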

Links:
https://cloud.google.com/storage/docs/audit-logging

Question 36:
You received a JSON file that contained a private key of a Service
Account in order to get access to several resources in a Google Cloud
project. You downloaded and installed the Cloud SDK and want to use
this private key for authentication and authorization when performing
gcloud commands. What should you do?

Use the command gcloud auth login and point it to the private key.

Use the command gcloud auth activate-service-account and point it to the private key. (Correct)

Place the private key file in the installation directory of the Cloud SDK and
rename it to 'credentials.json'.

Place the private key file in your home directory and rename it to
'GOOGLE_APPLICATION_CREDENTIALS'.

Explanation
A is incorrect because gcloud auth login is for authenticating a user and not a
service account.

B is correct because the gcloud auth activate-service-account command is used to activate the service account in the Cloud SDK.

C is incorrect because placing the file in the installation directory does not
activate the service account.

D is incorrect because GOOGLE_APPLICATION_CREDENTIALS is for applications other than the Cloud SDK. For the Cloud SDK, use the gcloud auth activate-service-account command.
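
A minimal sketch of option B (the key file name is an assumption):

    gcloud auth activate-service-account --key-file=key.json
    # Confirm which account is now active
    gcloud auth list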

Links:
https://cloud.google.com/sdk/docs/authorizing

Question 37:
You are monitoring an application and receive user feedback that a
specific error is spiking. You notice that the error is caused by a Service
Account having insufficient permissions. You are able to solve the
problem but want to be notified if the problem recurs. What should you
do?

In the Log Viewer, filter the logs on severity 'Error' and the name of the
Service Account.

Create a sink to BigQuery to export all the logs. Create a Data Studio
dashboard on the exported logs.

Create a custom log-based metric for the specific error to be used in an Alerting Policy. (Correct)

Grant Project Owner access to the Service Account.

Explanation
A is incorrect because just creating a filter in the log console does not provide
the alerting capability.

B is incorrect because Data Studio does not provide alerting.

C is correct because you need to create a log-based metric for the error to
get notified if it occurs again.

D is incorrect because it is not a good practice to grant more permissions than required to users or service accounts.
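
A sketch of the metric in option C (metric name and filter are assumptions); the metric can then be used as the condition of an Alerting Policy:

    gcloud logging metrics create sa-permission-errors \
        --description="Service account permission errors" \
        --log-filter='severity>=ERROR AND protoPayload.status.message:"PERMISSION_DENIED"'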

Links:
https://cloud.google.com/logging/docs/logs-based-metrics
https://cloud.google.com/error-reporting/docs/notifications
https://cloud.google.com/logging/docs/logs-based-metrics/charts-and-alerts

Question 38:
You are developing a financial trading application that will be used
globally. Data is stored and queried using a relational structure, and
clients from all over the world should get the exact identical state of the
data. The application will be deployed in multiple regions to provide the
lowest latency to end users. You need to select a storage option for the
application data while minimizing latency. What should you do?

Use Cloud Bigtable for data storage.

Use Cloud SQL for data storage.

Use Cloud Spanner for data storage. (Correct)

Use Firestore for data storage.

Explanation
A is incorrect because Cloud Bigtable only provides eventual consistency.

B is incorrect because Cloud SQL does not provide global availability.

C is correct because Cloud Spanner is a fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability.

D is incorrect because Firestore is a NoSQL document database and does not provide the relational structure the application requires.

Links:
https://cloud.google.com/spanner
https://cloud.google.com/spanner/docs

Question 39:
You are about to deploy a new Enterprise Resource Planning (ERP)
system on Google Cloud. The application holds the full database in-
memory for fast data access, and you need to configure the most
appropriate resources on Google Cloud for this application. What should
you do?

Provision preemptible Compute Engine instances.

Provision Compute Engine instances with GPUs attached.

Provision Compute Engine instances with local SSDs attached.

Provision Compute Engine instances with M1 machine type. (Correct)

Explanation
A is incorrect because preemptible machines can shut down at any time and it
can cause data loss.

B is incorrect because adding a GPU does not improve the performance of RAM.

C is incorrect because adding a local SSD does not improve the performance
of RAM.

D is correct because "The application holds the full database in-memory for
fast data access", so it'll be more appropriate to use memory-optimized
machine types

Links:
https://cloud.google.com/compute/docs/machine-types#m1_machine_types

Question 40:
You have developed an application that consists of multiple
microservices, with each microservice packaged in its own Docker
container image. You want to deploy the entire application on Google
Kubernetes Engine so that each microservice can be scaled individually.
What should you do?

Create and deploy a Custom Resource Definition per microservice.

Create and deploy a Docker Compose File.

Create and deploy a Job per microservice.

Create and deploy a Deployment per microservice. (Correct)

Explanation
A is incorrect because deploying microservices on Kubernetes does not
require custom resource definition.

B is incorrect because you cannot deploy a Docker Compose file on Kubernetes.

C is incorrect because microservices run as Deployments or StatefulSets on Kubernetes, not Jobs.

D is correct because microservices run as Deployments or StatefulSets on Kubernetes.
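
A sketch of option D with two illustrative microservices, each scalable on its own:

    kubectl create deployment users-svc --image=gcr.io/my-project/users:v1
    kubectl create deployment orders-svc --image=gcr.io/my-project/orders:v1
    # Scale one microservice without touching the other
    kubectl scale deployment orders-svc --replicas=5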

Links:
https://medium.com/stakater/k8s-deployments-vs-statefulsets-vs-daemonsets-60582f0c62d4
https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

Question 41:
You will have several applications running on different Compute Engine
instances in the same project. You want to specify at a more granular
level the service account each instance uses when calling Google Cloud
APIs. What should you do?

When creating the instances, specify a Service Account for each instance. (Correct)

When creating the instances, assign the name of each Service Account as
instance metadata.

After starting the instances, use gcloud compute instances update to specify a Service Account for each instance.

After starting the instances, use gcloud compute instances update to assign the name of the relevant Service Account as instance metadata.

Explanation
A is correct because assigning different service accounts to different
compute engine instances is a best practice if the instances require granular
access control.

B is incorrect because assigning the name of the service account in metadata does not enable granular access control.

C is incorrect because you don’t need to update the instance to add a service
account. It can be done at the time of creation.

D is incorrect because assigning the name of the service account in metadata does not enable granular access control.
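
A minimal sketch of option A (instance, service account, and scopes are assumptions):

    gcloud compute instances create app-vm \
        --zone=us-central1-a \
        --service-account=app-sa@my-project.iam.gserviceaccount.com \
        --scopes=cloud-platform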

Links:
https://cloud.google.com/compute/docs/access/service-accounts#associating_a_service_account_to_an_instance

Question 42:
You are creating an application that will run on Google Kubernetes
Engine. You have identified MongoDB as the most suitable database
system for your application and want to deploy a managed MongoDB
environment that provides a support SLA. What should you do?

Create a Cloud Bigtable cluster, and use the HBase API.

Deploy MongoDB Atlas from the Google Cloud Marketplace. (Correct)

Download a MongoDB installation package, and run it on Compute Engine instances.

Download a MongoDB installation package, and run it on a Managed Instance Group.

Explanation
A is incorrect because Cloud Bigtable is not MongoDB.

B is correct because MongoDB Atlas is managed and supported by a third-party service provider.

C is incorrect because running MongoDB on your own is not covered by SLA.

D is incorrect because running MongoDB on your own is not covered by SLA.

Links:
https://console.cloud.google.com/marketplace/details/gc-launcher-for-
mongodb-atlas/mongodb-atlas

Question 43:
You are managing a project for the Business Intelligence (BI) department
in your company. A data pipeline ingests data into BigQuery via
streaming. You want the users in the BI department to be able to run the
custom SQL queries against the latest data in BigQuery. What should you
do?

Create a Data Studio dashboard that uses the related BigQuery tables as a source and give the BI team view access to the Data Studio dashboard.

Create a Service Account for the BI team and distribute a new private key to each member of the BI team.

Use Cloud Scheduler to schedule a batch Dataflow job to copy the data
from BigQuery to the BI team's internal data warehouse.

Assign the IAM role of BigQuery User to a Google Group that contains the members of the BI team. (Correct)

Explanation
A is incorrect because the BI team wants to run queries and not build
dashboards.

B is incorrect because distributing service account private keys is not safe and should be avoided if possible.

C is incorrect because queries can be run directly on BigQuery. There is no need to migrate data to another data warehouse.

D is correct because roles/bigquery.user, when applied to a dataset, provides the ability to read the dataset's metadata and list tables in the dataset.
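
A minimal sketch of option D (project ID and group address are assumptions):

    gcloud projects add-iam-policy-binding my-project \
        --member="group:bi-team@example.com" \
        --role="roles/bigquery.user"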

Links:
https://cloud.google.com/bigquery/docs/access-control

Question 44:
Your company is moving its entire workload to Compute Engine. Some
servers should be accessible through the Internet, and other servers
should only be accessible over the internal network. All servers need to
be able to talk to each other over specific ports and protocols. The
current on-premises network relies on a demilitarized zone (DMZ) for the
public servers and a Local Area Network (LAN) for the private servers.
You need to design the networking infrastructure on Google Cloud to
match these requirements. What should you do?

1. Create a single VPC with a subnet for the DMZ and a subnet for the LAN.
2. Set up firewall rules to open up relevant traffic between the DMZ and the LAN subnets, and another firewall rule to allow public ingress traffic for the DMZ. (Correct)

1. Create a single VPC with a subnet for the DMZ and a subnet for the
LAN.
2. Set up firewall rules to open up relevant traffic between the DMZ and
the LAN subnets, and another firewall rule to allow public egress traffic for
the DMZ.

1. Create a VPC with a subnet for the DMZ and another VPC with a subnet
for the LAN.
2. Set up firewall rules to open up relevant traffic between the DMZ and
the LAN subnets, and another firewall rule to allow public ingress traffic
for the DMZ.

1. Create a VPC with a subnet for the DMZ and another VPC with a subnet
for the LAN.
2. Set up firewall rules to open up relevant traffic between the DMZ and
the LAN subnets, and another firewall rule to allow public egress traffic for
the DMZ.

Explanation
A is correct because the DMZ and LAN can be logically separated as 2
subnets and firewall rules can be created to open up ports between them.

B is incorrect because the DMZ needs public ingress and not egress.

C is incorrect because there is no need to create 2 separate VPCs.

D is incorrect because there is no need to create 2 separate VPCs.

Links:
https://medium.com/google-cloud/a-dmz-what-is-that-acc3b21b9653
https://cloud.google.com/vpc/docs/vpc

Question 45:
You need to define an address plan for a future new GKE cluster in your VPC.
This will be a VPC native cluster, and the default Pod IP range allocation will
be used. You must pre-provision all the needed VPC subnets and their
respective IP address ranges before cluster creation. The cluster will initially
have a single node, but it will be scaled to a maximum of three nodes if
necessary. You want to allocate the minimum number of Pod IP addresses.
Which subnet mask should you use for the Pod IP address range?

/21

/22 (Correct)

/23

/25

Explanation
A is incorrect. The cluster can be scaled to 3 nodes, and each node can run
a maximum of 110 pods as per GKE best practice, so up to 330 pods need IP
addresses. GKE allots a /24 per node for pod IP allocation, so 3 nodes need
three /24 blocks. A /21 provisions 2^(32-21) = 2048 addresses (eight /24
blocks), which is over-provisioning.

B is correct. With a /24 allotted per node, a /22 contains 2^(24-22) = 4
blocks of /24, which means the cluster can scale up to 4 nodes. The
requirement is 3 nodes, so /22 is the smallest listed range that is enough
for pod IP allocation.

C is incorrect. A /23 contains only 2^(24-23) = 2 blocks of /24, which
supports at most 2 nodes. The requirement is 3 nodes, so /23 is not enough
for pod IP allocation.

D is incorrect. A /25 is smaller than even a single node's /24 allocation,
so the cluster would fall short of pod IPs and could not scale.

Links:
https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips#cluster_sizing_secondary_range_pods
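
As a sketch of pre-provisioning the ranges, assuming a custom-mode VPC
already exists; the subnet name, primary range, region, and zone are
hypothetical, and the /22 secondary range is the answer above:

  # Subnet with a /22 secondary range reserved for pod IPs.
  gcloud compute networks subnets create gke-subnet \
      --network=corp-vpc --range=10.10.0.0/24 \
      --secondary-range=pods=10.20.0.0/22 --region=europe-west1

  # VPC-native cluster that uses the pre-provisioned pod range.
  gcloud container clusters create my-cluster \
      --zone=europe-west1-d --num-nodes=1 \
      --enable-ip-alias --subnetwork=gke-subnet \
      --cluster-secondary-range-name=pods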

Question 46:
You are a project owner and need your co-worker to deploy a new version of
your application to App Engine. You want to follow Google’s recommended
practices. Which IAM roles should you grant your co-worker?

A. Project Editor

B. App Engine Service Admin

C. App Engine Deployer (Correct)

D. App Engine Code Viewer

Explanation
A is not correct because this access is too wide, and Google recommends the
principle of least privilege. Google also recommends predefined roles
instead of primitive roles like Project Editor.
B is not correct because although it gives write access to module-level and
version-level settings, users cannot deploy a new version.
C is correct because this gives write access only to create a new version.
D is not correct because this is read-only access.
Links:
https://cloud.google.com/iam/docs/understanding-roles
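
As a sketch, the predefined role can be granted like this; the project ID
and email address are placeholders:

  # Grant only the App Engine Deployer role to the co-worker.
  gcloud projects add-iam-policy-binding my-project \
      --member="user:coworker@example.com" \
      --role="roles/appengine.deployer"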

Question 47:
Your company has reserved a monthly budget for your project. You want to be
informed automatically of your project spend so that you can take action
when you approach the limit. What should you do?

A. Link a credit card with a monthly limit equal to your budget.

B. Create a budget alert for 50%, 90%, and 100% of your total
monthly budget. (Correct)

C. In App Engine Settings set a daily budget at the rate of 1/30 of your
monthly budget.

D. In the GCP Console, configure billing export to BigQuery. Create a
saved view that queries your total spend.

Explanation
A is not correct because a credit card limit does not track your project
spend and will not alert you when you approach your budget.

B is correct because budget alerts will warn you when you reach the
thresholds you set.

C is not correct because those budgets apply only to App Engine, not to
other GCP resources. Furthermore, exceeding the daily budget makes
subsequent requests fail rather than alerting you in time so you can
mitigate appropriately.

D is not correct because this will just give you the spend; it will not
alert you automatically when you approach the limit.
Links:
https://cloud.google.com/appengine/pricing#spending_limit
https://cloud.google.com/billing/docs/how-to/budgets
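
As a sketch, assuming a gcloud SDK version that includes the billing
budgets surface; the billing account ID and amount are placeholders:

  # Budget with alert thresholds at 50%, 90%, and 100%.
  gcloud billing budgets create \
      --billing-account=0X0X0X-0X0X0X-0X0X0X \
      --display-name="monthly-budget" \
      --budget-amount=1000USD \
      --threshold-rule=percent=0.5 \
      --threshold-rule=percent=0.9 \
      --threshold-rule=percent=1.0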

Question 48:
You have a project using BigQuery. You want to list all BigQuery jobs for that
project. You want to set this project as the default for the bq command-line
tool. What should you do?

A. Use "gcloud config set project" to set the default project. (Correct)

B. Use "bq config set project" to set the default project.

C. Use "gcloud generate config-url" to generate a URL to the Google


Cloud Platform Console to set the default project.

D. Use "bq generate config-url" to generate a URL to the Google Cloud


Platform Console to set the default project.

Explanation
A is correct because you need to use gcloud to manage the config/defaults.
B is not correct because the bq command-line tool picks up the gcloud
configuration settings; there is no "bq config set" command.
C is not correct because entering this command will not achieve the desired
result and will generate an error.
D is not correct because entering this command will not achieve the desired
result and will generate an error.
Links:
https://cloud.google.com/bigquery/docs/reference/bq-cli-reference
https://cloud.google.com/sdk/gcloud/reference/config/set
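
For example, with a placeholder project ID:

  # Set the default project for gcloud; bq picks this up as well.
  gcloud config set project my-project

  # List the BigQuery jobs in the default project.
  bq ls --jobs=true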

Question 49:
You have a Kubernetes cluster with 1 node-pool. The cluster receives a lot of
traffic and needs to grow. You decide to add a node. What should you do?

A. Use "gcloud container clusters resize" with the desired


(Correct)
number of nodes.

B. Use "kubectl container clusters resize" with the desired number of


nodes.

C. Edit the managed instance group of the cluster and increase the
number of VMs by 1.

D. Edit the managed instance group of the cluster and enable autoscaling.

Explanation
A is correct because this resizes the cluster to the desired number of nodes.

B is not correct because you need to use gcloud, not kubectl.

C is not correct because you should not manually manage the MIG behind a
cluster.

D is not correct because you should not manually manage the MIG behind a
cluster.
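
For example, growing the pool from 1 to 2 nodes; the cluster name, node
pool name, and zone are placeholders:

  # Resize the cluster's node pool to the desired number of nodes.
  gcloud container clusters resize my-cluster \
      --node-pool=default-pool --num-nodes=2 --zone=europe-west1-d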

Question 50:
You want to select and configure a solution for storing and archiving data on
the Google Cloud Platform. You need to support compliance objectives for
data from one geographic location. This data is archived after 30 days and
needs to be accessed annually. What should you do?

Select Multi-Regional Storage. Add a bucket lifecycle rule that archives
data after 30 days to Coldline Storage.

Select Multi-Regional Storage. Add a bucket lifecycle rule that archives
data after 30 days to Nearline Storage.

Select Regional Storage. Add a bucket lifecycle rule that archives data
after 30 days to Nearline Storage.

Select Regional Storage. Add a bucket lifecycle rule that archives
data after 30 days to Coldline Storage. (Correct)

Explanation
A is incorrect because the compliance objectives apply to data from a single
geographic location, so Multi-Regional Storage is not appropriate.

B is incorrect because the compliance objectives apply to data from a single
geographic location, so Multi-Regional Storage is not appropriate.

C is incorrect because the data is going to be accessed once a year, and the
use case for Nearline Storage is data accessed around once a month.

D is correct because the data is going to be accessed once a year, so
Coldline Storage is the best option.

Links:
https://cloud.google.com/storage/docs/storage-classes
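
As a sketch of option D's lifecycle rule, with a placeholder bucket name:

  # lifecycle.json - moves objects to Coldline 30 days after creation:
  #   {
  #     "rule": [
  #       {
  #         "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
  #         "condition": {"age": 30}
  #       }
  #     ]
  #   }

  # Apply the rule to the regional bucket.
  gsutil lifecycle set lifecycle.json gs://my-archive-bucket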