
In which cloud operating model is a customer responsible for maintaining the Operating System?

• SaaS
• CaaS
• PaaS
• IaaS

The risk that a cloud provider might go out of business and the customers might not be able to
recover data is known as:

• Vendor closure
• Vendor lock-in
• Vendor lock-out
• Vending machine

Where can you connect your physical network to GCP when using Dedicated Interconnect?

• Chosen GCP Region
• Chosen GCP Zone within a Region
• Colocation Facilities.

Feedback
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/choosing-colocation-facilities

All GCP resources are associated with

• a service account
• a user
• a billing account
• a project

What's the notable difference in monthly billing costs of GCP usage between deploying solutions
in a single zone vs. spreading them across zones or regions?

• There is no big difference, since we're just paying for the resources we use.
• There is a difference in so-called egress costs, which might be much higher when data is
exchanged between different locations.
• Multi-zone and multi-region solutions should be cheaper, since the traffic is equally
distributed between the locations.

What are examples of global GCP resources? (choose two)

• Global Persistent Disks.
• Subnets
• some Load Balancers.
• Cloud SQL
• Global GCS Buckets
• VPCs
• Anthos GKE Clusters

What are examples of regional GCP resources? (choose three)

• GCE Instances
• IP Addresses
• All Persistent Disks
• Regional Persistent Disks
• Regional MIGs
• Zonal GKE clusters

Correct answer
IP Addresses
Regional Persistent Disks
Regional MIGs

A Virtual Private Cloud subnet is

• multi-regional
• regional
• global
• zonal

In Google Cloud, what is the minimum number of IPs that a VM can have?

• Two: One internal and one external IP address
• Three: One internal, one external and one alias IP address
• One: Only an internal IP address
• No IP address is required

How many network interfaces does a GCE instance have?

• Only one.
• One if it has an Internal IP only, two if it has Internal and External IP
• It depends on NIC_NUMBER metadata value.
• One, unless it's deployed in multiple VPCs.

Service accounts are used to provide
• administrative access to VMs
• automation for deployments
• traffic routing between services
• authentication between services

Correct answer
authentication between services

What are the options to connect to Linux-based GCE instances that have no external IP? (choose
three)
• Connect from your Organization node since it has privileges to all the other resources.
• Use IAP (Identity-Aware Proxy).
• Login via a bastion host.
• Directly connect with "ssh" command from Cloud Shell, since it has the visibility into all
project instances.
• Use Cloud VPN with local visibility of GCP-based resources.

Correct answer
Use IAP (Identity-Aware Proxy).
Login via a bastion host.
Use Cloud VPN with local visibility of GCP-based resources.
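
The IAP option above can be sketched with gcloud; the instance name and zone below are placeholders:

```shell
# SSH through Identity-Aware Proxy to a VM with no external IP.
# Requires the roles/iap.tunnelResourceAccessor role and a firewall
# rule allowing ingress from IAP's range (35.235.240.0/20) on port 22.
gcloud compute ssh my-private-vm \
    --zone=us-central1-a \
    --tunnel-through-iap
```

This avoids exposing any VM to the internet; the tunnel terminates at Google's IAP frontend.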

Which feature can save up to ~30% of a VM's cost over a month?

• Sustained use discount
• Pre-emptible discount
• Committed use discount
• Custom machine types

Which feature can save up to ~80% of a VM's cost?

• Committed use discount
• Custom machine types
• Pre-emptible discount
• Sustained use discount

You would like to optimize infrastructure costs by buying a 3-year commitment for a specific
requirement. However, you're not sure whether your requirements will change during this time,
and you might be forced to change the number and size of your GCE instances. What should you do?
• Buy a 1-year commitment to save some costs and accept the fact that when you stop /
resize specific VM, you no longer benefit from the commitment.
• Buy a 3-year commitment, checking option "early opt-out possible" that allows you to
cancel the commitment earlier with a small penalty.
• Buy a 3-year commitment and use it for different number and sizes of VM instances as
your requirements change during this period.

Correct answer
• Buy a 3-year commitment and use it for different number and sizes of VM instances as
your requirements change during this period.

You would like to deploy a GCE instance with a GPU attached in your local region, but Cloud
Console says this option is not available. How do you deploy such a VM?

• You need to do it from the command line, using "gcloud compute instances create"
• You need to contact GCP support to enable GPUs in your projects, since they're normally
hidden.
• You need to choose another zone or region, since GPUs are not available in every
location.
• You need to purchase a commitment first to deploy GPUs.
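
GPU availability varies per zone, and you can check it before choosing a location; the zone value below is only an example:

```shell
# List the accelerator types offered in a candidate zone; an empty
# result means you must pick another zone or region for GPU VMs.
gcloud compute accelerator-types list \
    --filter="zone:us-central1-a"
```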

A customer has some non-productive GCE instances that will not be needed for the foreseeable
future. What actions would you take in order to stop billing for the instances and attached disks,
without losing data kept on PDs?
• Stop the VMs for the time being, until they are needed again.
• Drop VMs with PDs and rebuild them when needed.
• Stop VMs, Make snapshots from PDs attached and delete both VMs and PDs.
• Use "gcloud compute freeze <gce_name> including disks" command.
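
Since stopped VMs still accrue charges for their attached PDs, the snapshot-then-delete flow can be sketched as follows; all names and the zone are placeholders:

```shell
# Preserve the data cheaply as a snapshot, then delete the VM and disk
# so neither is billed any longer.
gcloud compute disks snapshot my-data-disk \
    --zone=us-central1-a --snapshot-names=my-data-disk-backup
gcloud compute instances delete my-vm --zone=us-central1-a --quiet
gcloud compute disks delete my-data-disk --zone=us-central1-a --quiet
```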

What's the easiest way to automate GCE instance startup and shutdown schedule?
• Schedule those activities in crontab of each instance.
• Use Cloud Workflows to trigger startup and shutdown scripts on selected VMs.
• Create Instance Schedule and add selected instances from the project.
• Implement a 3rd party task scheduling system to trigger REST API to GCE instance.
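
An Instance Schedule is implemented as a resource policy attached to VMs; a minimal sketch, with placeholder names, region, and cron expressions:

```shell
# Create a weekday start/stop schedule and attach it to an instance.
gcloud compute resource-policies create instance-schedule weekday-hours \
    --region=us-central1 \
    --vm-start-schedule="0 8 * * MON-FRI" \
    --vm-stop-schedule="0 18 * * MON-FRI" \
    --timezone="UTC"
gcloud compute instances add-resource-policies my-vm \
    --zone=us-central1-a --resource-policies=weekday-hours
```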

As part of your backup plan, you want to be able to restore Compute Engine instances using
snapshots. How would you do it using fewest steps possible?

• Export the snapshots to Cloud Storage. Create disks from the exported snapshot files.
Create images from the new disks.
• Export the snapshots to Cloud Storage. Create images from the exported snapshot files.
• Use the snapshots to create replacement disks. Use the disks to create instances as
needed.
• Use the snapshots to create replacement instances as needed.

Feedback
This is correct because the scenario asks how to recreate instances. You can create an
instance directly from a snapshot without restoring to disk first.
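
Creating an instance directly from a snapshot is a single command; the names and zone below are placeholders:

```shell
# The boot disk is built from the snapshot in one step; no
# intermediate disk or image is required.
gcloud compute instances create restored-vm \
    --zone=us-central1-a \
    --source-snapshot=my-boot-snapshot
```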

How would you design a GCE-based system that is latency-sensitive and should ensure business
continuity even if a whole GCP zone was not accessible?
• Deploy in the same zone, but using multiple GCE instances in case one of them fails.
• Deploy multiple VMs in separate zones in the same region.
• Deploy multiple VMs in separate regions.
• Deploy a single VM and schedule snapshots that can be restored to another zone in case
of failure.

You notice that there are problems with I/O throughput on one of your machines. It's an
n2-standard-2 and you've just increased its SSD size from 500GB to 1TB, but that still did not
help. What can you do to quickly address this issue?
• Add an Extreme SSD disk that performs even faster.
• Re-size the GCE instance to n2-standard-8.
• Change the processing pattern from smaller to larger file sizes.
• Check if boot disk is not too small.

Feedback
Each core is subject to a 2 Gbits/second (Gbps) cap for peak performance. Each
additional core increases the network cap, up to a theoretical maximum of 16 Gbps for
each virtual machine.

One of the Linux GCE instances is not booting properly. What actions can you take to validate
the errors? (choose two).
• Enable Serial Console connectivity and connect from there.
• Resize the VM to a larger size.
• Connect via Identity-Aware Proxy (IAP) instead of ssh.
• Unmount the boot disk from the failing machine, attach it to a different one as non-boot
disk and validate the contents of the disk.

Correct answer
Enable Serial Console connectivity and connect from there.
Unmount the boot disk from the failing machine, attach it to a different one as non-boot
disk and validate the contents of the disk.
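
The serial-console path can be sketched as follows; the VM name and zone are placeholders:

```shell
# Enable the interactive serial console on the VM, then connect to it
# to watch boot output and diagnose the failure.
gcloud compute instances add-metadata my-vm \
    --zone=us-central1-a --metadata=serial-port-enable=TRUE
gcloud compute connect-to-serial-port my-vm --zone=us-central1-a
```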

What are your options to automatically deploy a GCE instance that already contains specific
software and configuration? (choose three).
• Create a custom OS image with all the software installed and deploy a GCE instance
using this image.
• Deploy a VM with a public OS image, ssh into it and trigger the configuration from a
script.
• Deploy a VM with a public OS image and use startup scripts to install/configure the
software.
• Use a specific GCP Marketplace solution.
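
The custom-image and startup-script options can each be sketched in a couple of commands; image, disk, and file names below are placeholders:

```shell
# Option 1: bake a custom image from an already-configured disk,
# then boot new instances from that image.
gcloud compute images create my-baked-image \
    --source-disk=configured-disk --source-disk-zone=us-central1-a
gcloud compute instances create vm-from-image \
    --zone=us-central1-a --image=my-baked-image

# Option 2: boot a public image and configure it with a startup script.
gcloud compute instances create vm-from-script \
    --zone=us-central1-a --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata-from-file=startup-script=install.sh
```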

How can you optimize the infrastructure costs for your VMs? (choose five)

• Use Committed Use Discounts.
• Automate shutdowns and startups to reduce the time VMs are up.
• Follow GCP Recommendations for the VM sizing.
• Limit the number of Licensed Users that can log in to VMs.
• Use Preemptible Instances.
• Deploy VMs in cheaper regions.

You'd like to ensure that the contents of a chosen Persistent Disk are replicated to another zone
so that in the event of a zonal failure, you're able to quickly mount it to a different VM in
another zone, without losing any data. What is your best option?
• Schedule frequent snapshots that can be used to create a PD in the second zone in case of
outage.
• Create a regional PD and forcefully mount it to a GCE instance in another zone in case of
outage.
• Create two separate zonal PDs and create an rsync-based replication between them.
• You don't need to do anything. PDs are automatically replicated between zones for
resiliency.
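
A regional PD sketch, including the force-attach used after a zonal outage; names, zones, size, and type below are placeholders:

```shell
# Create a disk replicated synchronously across two zones.
gcloud compute disks create my-regional-disk \
    --region=us-central1 --replica-zones=us-central1-a,us-central1-b \
    --size=200GB --type=pd-ssd
# After a zonal outage, force-attach it to a VM in the surviving zone.
gcloud compute instances attach-disk standby-vm \
    --zone=us-central1-b --disk=my-regional-disk \
    --disk-scope=regional --force-attach
```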

A customer contacted you saying that they provisioned a Cloud SQL instance in one of their
projects, but they can't connect to this instance from another project, even though VPC peering
is configured between those two projects. What's the most likely reason?
• The connection is probably done via Cloud SQL instance name; connecting via IP
address should be used instead.
• Appropriate Cloud SQL roles are missing; connectivity should be established after
adding "Cloud SQL viewer" role.
• Cloud SQL automatically creates a VPC peering, so any attempts to login from another
peered VPC will not work.
• Cloud SQL needs to be registered in Cloud Console in order for the connectivity to be
enabled.

YourHealth company would like to migrate their workloads to GCP, but they have strict
compliance requirements that don't allow them to share physical compute infrastructure with
other customers. How would they still be able to use GCE instances?
• They can't; when creating a GCE VM, you have to accept shared model and not having
your own, dedicated physical machine underneath.
• They can use sole-tenant nodes.
• They can deploy GCE VMs of exactly the size of a physical machine underneath.
• They can query GCP metadata for the ID of physical machine and use this ID when
deploying all their GCE VMs.

Feedback
https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes
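
Provisioning a sole-tenant node and placing a VM on it looks roughly like this; all names, the node type, and locations are placeholders:

```shell
# Define a node template, create a node group from it, then schedule
# a VM onto the dedicated physical host.
gcloud compute sole-tenancy node-templates create my-template \
    --region=us-central1 --node-type=n1-node-96-624
gcloud compute sole-tenancy node-groups create my-group \
    --zone=us-central1-a --node-template=my-template --target-size=1
gcloud compute instances create tenant-vm \
    --zone=us-central1-a --node-group=my-group
```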

BizBank has a requirement to only use data services that ensure data encryption. Which services
in GCP can they consider?
• GCS
• Cloud Spanner
• Bigtable
• Firestore
• Persistent Disks

Correct answer
GCS
Cloud Spanner
Bigtable
Firestore
Persistent Disks

MyCompany would like to know whenever someone accesses data stored in a GCS bucket in one
of their critical projects. How can they configure this?

• They don't have to - audit logs are on by default in GCP.
• They need to reach out to Google support to provide this data.
• They can activate data access audit logging for GCS.
• They have to activate "Audit Data" API.
Feedback
https://cloud.google.com/logging/docs/audit#types

When creating a PD, you chose a CSEK model to manage KEK keys. Unfortunately, you stored
the encryption key on a company laptop that was stolen from you. How can you recover the lost
key needed to decrypt the data stored on PD?
• You need to reach out to Google support and initiate Key Recovery procedure.
• You don't have to; Google still stores DEK keys in GCP, which are enough to decrypt the
data.
• You can generate another encryption key and use this one instead.
• Google can't help you recover the key, since it's not stored permanently by Google.

GamersFirst has a MySQL database that grew significantly in recent years and has to support
8000 requests per second at the busiest time of the day. They have already implemented various
techniques to optimize the current solution and are considering migrating to Cloud Spanner.
How can they execute such a migration?
• Perform MySQL database export and import it into Spanner.
• Use Database Migration Service that GCP provides.
• Database schemas need to be converted, the application using the database needs to be
modified, and a bulk export/import of data has to be done.
• Instead of migrating to Spanner, they can just add another MySQL Read Replica, since
MySQL scales linearly with the number of nodes.

Diagnostic Questions

Cymbal Direct drones continuously send data during deliveries. You need to process and analyze the
incoming telemetry data. After processing, the data should be retained, but it will only be accessed once
every month or two. Your CIO has issued a directive to incorporate managed services wherever possible.
You want a cost-effective solution to process the incoming streams of data.

• Ingest data with IoT Core, process it with Dataprep, and store it in a Coldline Cloud Storage
bucket.
• Ingest data with IoT Core, and then publish to Pub/Sub. Use Dataflow to process the data, and
store it in a Nearline Cloud Storage bucket.
• Ingest data with IoT Core, and then publish to Pub/Sub. Use BigQuery to process the data, and
store it in a Standard Cloud Storage bucket.

• Ingest data with IoT Core, and then store it in BigQuery.

Customers need to have a good experience when accessing your web application so they will continue
to use your service. You want to define key performance indicators (KPIs) to establish a service level
objective (SLO).

• Eighty-five percent of customers are satisfied users
• Eighty-five percent of requests succeed when aggregated over 1 minute
• Low latency for > 85% of requests when aggregated over 1 minute
• Eighty-five percent of requests are successful

Cymbal Direct developers have written a new application. Based on initial usage estimates, you decide
to run the application on Compute Engine instances with 15 GB of RAM and 4 CPUs. These instances
store persistent data locally. After the application runs for several months, historical data indicates that
the application requires 30 GB of RAM. Cymbal Direct management wants you to make adjustments that
will minimize costs. What should you do?
• Stop the instance, and then use the command gcloud compute instances set-machine-type
VM_NAME --machine-type e2-standard-8. Start the instance again.
• Stop the instance, and then use the command gcloud compute instances set-machine-type
VM_NAME --machine-type e2-standard-8. Set the instance’s metadata to: preemptible: true.
Start the instance again.
• Stop the instance, and then use the command gcloud compute instances set-machine-type
VM_NAME --machine-type 2-custom-4-30720. Start the instance again.
• Stop the instance, and then use the command gcloud compute instances set-machine-type
VM_NAME --machine-type 2-custom-4-30720. Set the instance’s metadata to: preemptible:
true. Start the instance again.
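
The stop/resize/start flow from the options above can be sketched as follows; the VM name, zone, and custom machine type string are placeholder examples:

```shell
# A VM must be stopped before its machine type can be changed.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-machine-type my-vm \
    --zone=us-central1-a --machine-type=n2-custom-4-30720
gcloud compute instances start my-vm --zone=us-central1-a
```

A custom machine type of the form `FAMILY-custom-VCPUS-MEMORY_MB` matches CPU and RAM to measured demand instead of jumping to the next predefined size.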

You are creating a new project. You plan to set up a Dedicated interconnect between two of your data
centers in the near future and want to ensure that your resources are only deployed to the same regions
where your data centers are located. You need to make sure that you don’t have any overlapping IP
addresses that could cause conflicts when you set up the interconnect. You want to use RFC 1918 class B
address space. What should you do?

• Create a new project, leave the default network in place, and then use the default 10.x.x.x
network range to create subnets in your desired regions.
• Create a new project, delete the default VPC network, set up an auto mode VPC network, and
then use the default 10.x.x.x network range to create subnets in your desired regions.
• Create a new project, delete the default VPC network, set up a custom mode VPC network, and
then use IP addresses in the 172.16.x.x address range to create subnets in your desired regions.
• Create a new project, delete the default VPC network, set up the network in custom mode, and
then use IP addresses in the 192.168.x.x address range to create subnets in your desired zones.
Use VPC Network Peering to connect the zones in the same region to create regional networks.
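
A custom-mode VPC with RFC 1918 class B (172.16.0.0/12) subnets in chosen regions can be sketched as follows; names, regions, and ranges are examples:

```shell
# Custom mode: subnets exist only where you explicitly create them,
# so IP planning for the interconnect stays under your control.
gcloud compute networks create dc-connected-net --subnet-mode=custom
gcloud compute networks subnets create subnet-east \
    --network=dc-connected-net --region=us-east1 --range=172.16.0.0/20
gcloud compute networks subnets create subnet-west \
    --network=dc-connected-net --region=us-west1 --range=172.16.16.0/20
```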

Cymbal Direct is working with Cymbal Retail, a separate, autonomous division of Cymbal with different
staff, networking teams, and data center. Cymbal Direct and Cymbal Retail are not in the same Google
Cloud organization. Cymbal Retail needs access to Cymbal Direct’s web application for making bulk
orders, but the application will not be available on the public internet. You want to ensure that Cymbal
Retail has access to your application with low latency. You also want to avoid egress network charges if
possible. What should you do?

• Verify that the subnet range Cymbal Retail is using doesn’t overlap with Cymbal Direct’s subnet
range, and then enable VPC Network Peering for the project.
• If Cymbal Retail does not have access to a Google Cloud data center, use Carrier Peering to
connect the two networks.
• Specify Cymbal Direct’s project as the Shared VPC host project, and then configure Cymbal
Retail’s project as a service project.
• Verify that the subnet Cymbal Retail is using has the same IP address range with Cymbal Direct’s
subnet range, and then enable VPC Network Peering for the project.
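
VPC Network Peering is configured from both sides; one side's command looks roughly like this, with placeholder project and network names:

```shell
# Run in Cymbal Direct's project; Cymbal Retail runs the mirror
# command pointing back. The peered subnet ranges must not overlap.
gcloud compute networks peerings create direct-to-retail \
    --network=direct-vpc \
    --peer-project=cymbal-retail-project \
    --peer-network=retail-vpc
```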

Cymbal Direct's employees will use Google Workspace. Your current on-premises network cannot meet
the requirements to connect to Google's public infrastructure. What should you do?

• Order a Dedicated Interconnect from a Google Cloud partner, and ensure that proper routes are
configured.
• Connect the network to a Google point of presence, and enable Direct Peering.
• Order a Partner Interconnect from a Google Cloud partner, and ensure that proper routes are
configured.
• Connect the on-premises network to Google’s public infrastructure via a partner that supports
Carrier Peering.

You are working with a client who is using Google Kubernetes Engine (GKE) to migrate applications from
a virtual machine–based environment to a microservices-based architecture. Your client has a complex
legacy application that stores a significant amount of data on the file system of its VM. You do not want
to re-write the application to use an external service to store the file system data. What should you do?

• In Cloud Shell, create a YAML file defining your Deployment called deployment.yaml. Create a
Deployment in GKE by running the command kubectl apply -f deployment.yaml
• In Cloud Shell, create a YAML file defining your Container called build.yaml. Create a Container
in GKE by running the command gcloud builds submit –config build.yaml .
• In Cloud Shell, create a YAML file defining your StatefulSet called statefulset.yaml. Create a
StatefulSet in GKE by running the command kubectl apply -f statefulset.yaml
• In Cloud Shell, create a YAML file defining your Pod called pod.yaml. Create a Pod in GKE by
running the command kubectl apply -f pod.yaml
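
A minimal StatefulSet with a per-pod persistent volume, applied as in the correct option above; names, image, and sizes are placeholders:

```shell
# Apply a minimal StatefulSet whose pod gets its own persistent
# volume for the legacy app's file-system data.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: legacy-app
spec:
  serviceName: legacy-app
  replicas: 1
  selector:
    matchLabels: {app: legacy-app}
  template:
    metadata:
      labels: {app: legacy-app}
    spec:
      containers:
      - name: app
        image: legacy-app:v1
        volumeMounts:
        - {name: data, mountPath: /var/data}
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: {requests: {storage: 10Gi}}
EOF
```

Unlike a plain Deployment, a StatefulSet's `volumeClaimTemplates` give each replica stable, durable storage that survives pod rescheduling.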

You are working in a mixed environment of VMs and Kubernetes. Some of your resources are on-
premises, and some are in Google Cloud. Using containers as a part of your CI/CD pipeline has sped up
releases significantly. You want to start migrating some of those VMs to containers so you can get
similar benefits. You want to automate the migration process where possible. What should you do?

• Manually create a GKE cluster, and then use Migrate to Containers (Migrate for Anthos) to set
up the cluster, import VMs, and convert them to containers.
• Use Migrate to Containers (Migrate for Anthos) to automate the creation of Compute Engine
instances to import VMs and convert them to containers.
• Manually create a GKE cluster. Use Cloud Build to import VMs and convert them to containers.
• Use Migrate for Compute Engine to import VMs and convert them to containers.
