
Practice Test 1 - Results

Attempt 1
All questions
Question 1: Skipped
You need to very quickly set up Nginx on GCP. Which of the following is the
fastest option to get up and running?
?
None of the other options would work.
?
Cloud Dataprep
?
GCP Marketplace
(Correct)
?
Compute Engine
?
Cloud Dataflow
Explanation
Nginx cannot run on Cloud Dataprep, nor on Cloud Dataflow. Setting it up on Compute
Engine would take a lot more time/effort than using the marketplace. The Cloud
Launcher was renamed to be the GCP Marketplace--so these refer to the same thing--
and this is a quick way to deploy all sorts of different systems, including Nginx.
https://console.cloud.google.com/marketplace/details/click-to-deploy-images/nginx
https://www.nginx.com/partners/google-cloud-platform/
https://techcrunch.com/2018/07/18/googles-cloud-launcher-is-now-the-gcp-
marketplace-adds-container-based-applications/
https://cloud.google.com/marketplace/ https://cloud.google.com/marketplace/docs/
Question 2:�Skipped
You run the command `kubectl deploy-pod mypodname` in Cloud Shell. �What should you
expect to see?
?
An authentication failure
?
An authorization failure
?
Status about the newly-deployed pod
?
A configuration error
?
An "unknown command" error
(Correct)
Explanation
This is not a valid command and kubectl will complain that it is unknown.
https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/
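For contrast, a minimal sketch of valid kubectl commands that would actually create and inspect a pod (the pod name and image here are hypothetical):
# Run an nginx pod (valid, unlike `deploy-pod`; recent kubectl versions create a bare pod)
kubectl run mypodname --image=nginx
# Check its status and details
kubectl get pods
kubectl describe pod mypodname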
Question 3: Skipped
You need to set up Lifecycle Management on a new Cloud Storage bucket. Which of
the following options should you consider?
?
Identify an existing bucket with the configuration you want, download that bucket's
configuration XML using gsutil, apply the configuration file for the new bucket via
the XML API.
?
Identify an existing bucket with the configuration you want, download that bucket's
configuration UML using gsutil, upload the configuration file in the GCS console.
?
Identify an existing bucket with the configuration you want, download that bucket's
configuration JSON using the JSON API, apply the configuration file for the new
bucket via gsutil.
(Correct)
?
Identify an existing bucket with the configuration you want, download that bucket's
configuration XML from the GCS console, apply the configuration file for the new
bucket in the GCS console.
Explanation
There are some details here that you just need to remember: 1) The GCS console does
not deal with Lifecycle configuration files, 2) gsutil deals with JSON files, and
3) the JSON and XML APIs each deal with JSON and XML, respectively.
https://cloud.google.com/storage/docs/managing-lifecycles
https://cloud.google.com/storage/docs/lifecycle
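As a rough sketch of that correct flow (bucket names are hypothetical), gsutil reads and writes the lifecycle configuration as JSON:
# Download the existing bucket's lifecycle configuration as JSON
gsutil lifecycle get gs://existing-bucket > lifecycle.json
# Apply that configuration file to the new bucket
gsutil lifecycle set lifecycle.json gs://new-bucket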
Question 4: Skipped
You need to start a set of virtual machines to run year-end processing in a new GCP
project. How can you enable the Compute API in the fewest number of steps?
?
Navigate to the Compute section of the console.
(Correct)
?
Open Cloud Shell, run `gcloud services enable compute`.
?
Do nothing. It is enabled by default.
?
Open Cloud Shell, configure authentication, select the "defaults" project, run
`gcloud enable compute service`
?
Open Cloud Shell, configure authentication, run `gcloud services enable
compute.googleapis.com`
Explanation
There is no such thing as a "defaults" project. Each API must be enabled before it
can be used. Some APIs are enabled by default, but GCE is not. Navigating to the
Compute Engine section of the console automatically enables the GCE API. You do not have to
configure authentication to be able to use Cloud Shell, but regardless, using Cloud
Shell would take more steps than simply navigating to the GCE console.
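If you did want the Cloud Shell route anyway, a minimal sketch (project ID is hypothetical) would be:
# Cloud Shell is already authenticated; just point gcloud at the project
gcloud config set project my-new-project
# Enable the Compute Engine API explicitly
gcloud services enable compute.googleapis.com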
Question 5: Skipped
You are planning to run a multi-node database on GKE. Which of the following
things do you need to consider?
?
At least one DB pod must always be running for data to stay persisted
?
You should use PodReplicationState objects
?
You should use cross-region container replication
?
You should use a StatefulSet object
(Correct)
?
GKE handles disk replication across pods
Explanation
There is no such thing as a PodReplicationState object in Kubernetes. Data will be
persisted in Persistent Volumes even if all DB pods have failed or been shut down.
Kubernetes StatefulSet objects exist to manage applications that _do_ want to
preserve state--unlike the normal applications that should be stateless.
https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps
https://stackoverflow.com/questions/41732819/why-statefulsets-cant-a-stateless-pod-
use-persistent-volumes
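For illustration only, a minimal StatefulSet sketch applied from Cloud Shell; the names, image, replica count, and disk size below are assumptions, not part of the question:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: postgres:11
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF
Each replica gets its own PersistentVolumeClaim from volumeClaimTemplates, which is how the data outlives any individual pod.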
Question 6: Skipped
You need to very quickly set up Wordpress on GCP. Which of the following are the
fastest options to get up and running?
?
Cloud Launcher
(Correct)
?
Cloud Functions
?
Only one of the other options would work
?
Cloud Press
?
Compute Engine
?
GCP Marketplace
(Correct)
Explanation
There is no such GCP service as "Cloud Press". Wordpress is not designed to run on
Google Cloud Functions. The Cloud Launcher was renamed to be the GCP Marketplace--
so these refer to the same thing--and this is a quick way to deploy all sorts of
different things in GCP. https://techcrunch.com/2018/07/18/googles-cloud-launcher-
is-now-the-gcp-marketplace-adds-container-based-applications/
https://cloud.google.com/marketplace/ https://cloud.google.com/wordpress/
https://console.cloud.google.com/marketplace/details/click-to-deploy-
images/wordpress https://cloud.google.com/marketplace/docs/
Question 7: Skipped
You have two web applications that you want to deploy in GCP--one written in Ruby
and the other written in Rust. Which of the following GCP services would be
capable of handling these apps?
?
Web Engine
?
App Engine Flexible
(Correct)
?
Web Engine Ex
?
Cloud Dataflow
?
App Engine Standard
Explanation
There is no GCP service called Web Engine or Web Engine Ex. App Engine Standard
only supports a fixed set of language runtimes (such as Java, Python, and Go),
which does not include Ruby or Rust. Cloud Dataflow and Cloud Dataproc
are services for processing large volumes of data, not for hosting web apps. Ruby
and Rust applications could both be run in containers on App Engine Flexible.
https://cloud.google.com/appengine/kb/ https://cloud.google.com/dataflow/
https://cloud.google.com/dataproc/
Question 8: Skipped
Which of the following is NOT a part of having a Java program running on a GCE
instance access the Cloud Tasks API in a Google-recommended way?
?
The GCE instance should be using a service account.
?
The program should use the Google SDK.
?
The Cloud Tasks API should be enabled.
?
The service account should have access to the Cloud Tasks API.
?
The program should pass "Metadata-Flavor: Google" to the SDK.
(Correct)
?
The access scopes should include access to the Cloud Tasks API.
Explanation
Java programs can use the SDK to access GCP services, and the SDK will take care of
the details of retrieving the access token from the metadata service and
communicating with the service. As such, your program need not concern itself with
the "Metadata-Flavor: Google" header; the SDK will handle that.
https://cloud.google.com/compute/docs/access/service-accounts
https://developers.google.com/identity/protocols/OAuth2ServiceAccount
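For context, the header in question belongs to raw calls against the metadata server; the SDK issues the equivalent of the following on your behalf, so your code never does this itself (shown only as a hedged illustration):
# Fetch an access token for the attached service account from the metadata server
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"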
Question 9: Skipped
You currently have 300TB of Closed-Circuit Television (CCTV) capture data and are
adding new data at a rate of 80TB/month. The rate of data captured and needing to
be stored is expected to grow to 200TB/month within one year because new locations
are being added, each with 4-10 cameras. Archival data must be stored for six
months, and as inexpensively as possible. The users of your system currently need
to access 250TB of current-month footage and 50TB of archival footage, and access
rates are expected to grow linearly with data volume. Which of the following
storage options best suits this purpose?
?
Immediately store all data as Coldline, because the access volume is low.
?
Store new data as Regional and then use Lifecycle Management to transition it to
Coldline after 30 days.
?
Store new data as Multi-Regional and then use Lifecycle Management to transition it
to Nearline after 30 days.
(Correct)
?
Always keep all data stored as Multi-Regional, because access volume is high.
?
Store new data as Multi-Regional and then use Lifecycle Management to transition it
to Regional after 30 days.
Explanation
Data cannot be transitioned from Multi-Regional to Regional through Lifecycle
Management; that would change the location. The access rate for new data is
250/80--so quite high--but archival data access is lower--50/300. Because of this,
we need to start with Regional or Multi-Regional and should transition to Nearline
to meet the "as inexpensively as possible" requirement for archival data. And even
though the option doesn't list this, you would probably also want to set the
Lifecycle Management to automatically delete the objects after their archival
period expires. https://cloud.google.com/storage/pricing
https://cloud.google.com/storage/docs/storage-classes
https://cloud.google.com/storage/docs/lifecycle
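A sketch of what the transition (and optional deletion) rules could look like in gsutil's JSON format; the bucket name and the 210-day delete age (30 days current plus roughly six months archival) are assumptions for illustration:
cat > lifecycle.json <<EOF
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 30}},
    {"action": {"type": "Delete"},
     "condition": {"age": 210}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://cctv-footage-bucket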
Question 10: Skipped
You are planning out your usage of GCP. Which of the following things do you need
to consider about Service Accounts?
?
The default access scopes allow full access to all services.
?
To use service accounts, you must enable the Service Account API.
?
Access scopes are related to service APIs and not service accounts.
?
The default service account is restricted in what it can do by the default access
scopes.
(Correct)
Explanation
By default, GCE instances use the default service account together with a limited
set of default access scopes, so even though that service account's IAM role is
broad, what the instance can actually do is restricted by those default scopes.
Question 11: Skipped
You are estimating the cost of hosting a system on GKE and exposing two Services,
externally. Which of the following things will you do?
?
Put your estimated network traffic into the Cloud Load Balancer in the Networking
tab.
?
Put your estimated number of instances needed to host the system in the GCE tab.
?
Put your estimated number and size of SSDs needed on the Cloud Storage tab.
?
None of the other options is correct.
(Correct)
Explanation
You need to be very familiar with the pricing calculator, and the described system
would be entered entirely on the GKE tab. The Cloud Storage tab is for the object-
based GCS, not the block-based Persistent Disks. The Networking tab covers egress
and VPN tunnels, but not Load Balancing. GKE does use GCE, under the hood, but you
price it through GKE, not GCE directly.
https://cloud.google.com/products/calculator/
Question 12: Skipped
You need to process data streamed from Cloud Pub/Sub. Which of the following is a
managed service that would handle this situation?
?
App Engine
?
Cloud Dataproc
?
Cloud Storage
?
Cloud Dataflow
(Correct)
?
Cloud Storage Processing
Explanation
Google does not have a service called "Cloud Storage Processing". App Engine is not
made to handle processing of this type. Cloud Dataproc is made for running
Hadoop/Spark clusters but does not support streaming jobs. Cloud Dataflow is for
newly-built processing that can take advantage of Apache Beam and supports both
batch and streaming. https://cloud.google.com/dataflow/
https://cloud.google.com/dataproc/
Question 13: Skipped
You are designing the logging structure for a containerized Java application that
will run on GKE. Which of the following options is recommended and will use the
least number of steps to enable your developers to later access and search logs?
?
Have the developers write logs using the App Engine Java SDK
?
Have the developers write log lines to a file named stackdriver.log
?
Have the developers write log lines to a file named stackdriver.log, install and
run the Stackdriver agent beside the application
?
Have the developers write log lines to a file named application.log, install the
Stackdriver agent on the VMs, configure the Stackdriver agent to monitor and push
application.log
?
Have the developers write log lines to stdout and stderr, install and run the
Stackdriver agent beside the application
?
Have the developers write log lines to stdout and stderr
(Correct)
Explanation
The App Engine SDKs only work for apps running on App Engine. Stackdriver does not
automatically send files named stackdriver.log. "Stackdriver Logging is enabled by
default when you create a new cluster using the gcloud command-line tool or Google
Cloud Platform Console." Logging to stdout and stderr on GKE _is_ the recommended
way to log: "Containers offer an easy and standardized way to handle logs because
you can write them to stdout and stderr. Docker captures these log lines and allows
you to access them by using the docker logs command. As an application developer,
you don't need to implement advanced logging mechanisms. Use the native logging
mechanisms instead." https://cloud.google.com/kubernetes-engine/docs/how-to/logging
https://cloud.google.com/solutions/best-practices-for-operating-containers
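As a small hedged illustration (pod name hypothetical), whatever the containers write to stdout/stderr is what you can immediately read back and what Stackdriver collects on GKE:
kubectl logs mypodname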
Question 14: Skipped
What is the easiest way to delete a project?
?
There is no general way to delete a project. Projects are immutable.
?
Open a support request to delete the project and wait 2-5 days for them to complete
the task.
?
In the monthly project budget email, click the link to "Delete Project and
Unsubscribe".
?
Simply ignore the monthly project renewal email and the project will automatically
be deleted in 15 days.
?
Run `gcloud projects delete oldprojid`
(Correct)
Explanation
You do not need to involve Support to delete a project; you just do that, yourself.
Project budgets do not include any way to delete the project. If you rack up
charges, you're liable for them even if you delete the project or it gets
suspended.
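A quick sketch of doing it yourself (project ID is hypothetical; gcloud asks for confirmation):
gcloud projects list
gcloud projects delete oldprojid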
Question 15: Skipped
Who can change the billing account linked to a project?
?
Any project editor
?
Any project billing administrator
(Correct)
?
Any project auditor
?
Any user of the project
?
Only Google Support
?
The project owner
(Correct)
Explanation
Google Support does not generally get involved in changing project billing
accounts. Auditors cannot (should not be able to) make changes. Project editors and
users do not have authority to make billing changes.
Question 16: Skipped
You need to determine who just started a particular GCE instance that does not meet
your organization's resource labelling policies. How can you determine who to
follow up with, in the least number of steps?
?
From the notifications menu, navigate to the Activity Log. Look for the log line,
"USER_EMAIL created INSTANCE_NAME".
(Correct)
?
Navigate to the project dashboard. Navigate to the "Activity" tab. Look for the
log line, "USER_EMAIL created INSTANCE_NAME".
?
Navigate to the Compute Engine section of the console. Navigate into the details
of the instance in question. Navigate to the "Monitoring" tab. Identify the user
by the displayed "Owner" property.
?
Navigate to the Compute Engine section of the console. Navigate into the details
of the instance in question. Identify the user by the displayed "Owner" property.
?
From the notifications menu, navigate to the Activity Log. For "Date/time", choose
"Select Range" and include today's date. Look for the log line, "USER_EMAIL
created INSTANCE_NAME".
Explanation
The Compute Engine section of the console does not identify any "Owner".
Information about "Who did What, and When?" should be identified from the Activity
Log or fuller Audit Log. You must become very familiar with the Activity Log. It
takes fewer steps to get to the Activity Log by dropping the notifications menu,
though navigating from the project dashboard is also a valid way to get there.
Because the instance was just started, it should show up on the first page without
needing to change the displayed date range.
https://console.cloud.google.com/home/activity
https://cloud.google.com/logging/docs/audit/
https://cloud.google.com/compute/docs/audit-logging
Question 17: Skipped
What will happen if a running GKE pod encounters a fatal error?
?
If it is a part of a host, GKE will automatically restart the pod in an available
deployment.
?
GKE pods are tiered and cannot encounter fatal errors.
?
You can tell GKE to restart the pod in an available deployment.
?
If it is a part of a deployment, GKE will automatically restart that pod on an
available node.
(Correct)
Explanation
GKE tries to ensure that the number of pods you've specified in your deployment is
always running, so it will restart one if it fails. All the other options are using
terms in ways that don't make sense (such as "an available deployment"). From the
documentation: `Pods do not 'heal' or repair themselves. For example, if a Pod is
scheduled on a node which later fails, the Pod is deleted. Similarly, if a Pod is
evicted from a node for any reason, the Pod does not replace itself.`
https://cloud.google.com/kubernetes-engine/docs/concepts/pod
Question 18: Skipped
Which of the following roles has the highest level of access?
?
Controller
?
Project Editor
?
Organization Superuser
?
Project Owner
(Correct)
?
Compute Administrator
?
Organization Auditor
Explanation
There are no such roles as Organization Superuser, Organization Auditor, nor
Controller. The Project Owner has all of the capabilities of the other two (Project
Editor and Compute Administrator), and more. (There is, however, a "Super Admin"
role for an organization that can control everything.)
https://cloud.google.com/iam/docs/overview
https://cloud.google.com/iam/docs/understanding-roles
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-
organizations#define_domain_administration_roles
Question 19: Skipped
You are working together with a contractor from the Acme company and you need to
allow App Engine running in one of Acme's GCP projects to write to a Cloud Pub/Sub
topic you own. Which of the following pieces of information are enough to let you
enable that access?
?
The Acme GCP project's project ID
(Correct)
?
The Acme GCP project's project number
?
The email address of the Acme contractor
?
The Acme GCP project's name
?
The email address of the Acme project service account
(Correct)
Explanation
You need to grant access to the service account being used by Acme's App Engine
app, not the contractor, so you don't care about the contractor's email address. If
you are given the service account email address, you're done; that's enough. If you
need to use the pattern to construct the email address, you'll need to know the
Project ID (not its number, unlike for GCE!) to construct the email address used by
the default App Engine service account: `PROJECT_ID@appspot.gserviceaccount.com`
https://cloud.google.com/iam/docs/service-accounts
https://cloud.google.com/iam/docs/understanding-service-accounts
https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
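A hedged sketch of the actual grant, using a hypothetical topic name and building the member from Acme's project ID:
gcloud pubsub topics add-iam-policy-binding mytopic \
  --member="serviceAccount:ACME_PROJECT_ID@appspot.gserviceaccount.com" \
  --role="roles/pubsub.publisher"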
Question 20: Skipped
You are designing the logging structure for a non-containerized Java application
that will run on GAE. Which of the following options is recommended and will use
the least number of steps to enable your developers to later access and search
logs?
?
Have the developers write log lines to a file named stackdriver.log.
?
Have the developers write log lines to stdout and stderr, install and run the
Stackdriver agent beside the application.
?
Have the developers write logs using the App Engine Java SDK.
(Correct)
?
Have the developers write log lines to a file named application.log, install the
Stackdriver agent on the VMs, configure the Stackdriver agent to monitor and push
application.log.
?
Have the developers write log lines to stdout and stderr.
?
Have the developers write log lines to a file named stackdriver.log, install and
run the Stackdriver agent beside the application.
Explanation
In App Engine Standard, you should log using the App Engine SDK and the connection
to Stackdriver (i.e. agent installation and configuration) is handled automatically
for you. https://cloud.google.com/appengine/articles/logging
Question 21: Skipped
You have a GKE cluster that currently has six nodes but will soon run out of
capacity. What should you do?
?
In the GKE console, edit the cluster and specify the new desired size.
(Correct)
?
Nothing. GKE is always fully managed and will scale up by default.
?
Clusters are immutable so simply create a new cluster for the larger workload.
?
Run `gcloud compute instances create anyname --gke`
?
Run `gcloud compute instances create gke-7`
Explanation
Clusters are editable, not immutable, and should not be recreated because of
changes in demand. Cluster autoscaling is an optional setting. You do not manage
nodes via GCE, directly--you always manage them through GKE, even though you can
see them via GCE. https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-
architecture https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-
cluster
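The command-line equivalent is roughly the following (cluster name, zone, and target size are hypothetical):
gcloud container clusters resize my-cluster --zone us-central1-a --num-nodes 9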
Question 22: Skipped
When comparing `n1-standard-8`, `n1-highcpu-8`, and `n1-highmem-16`, which of the
following statements are true?
?
The `n1-highcpu-8` is the least expensive
(Correct)
?
The `n1-highmem-16` has twice as many CPUs as the `n1-highcpu-8`
(Correct)
?
They all cost the same amount
?
The `n1-standard-8` is the least expensive
?
The `n1-highmem-16` has twice as much RAM as the `n1-highcpu-8`
Explanation
The number at the end of the machine type indicates how many CPUs it has, and the
type tells you where in the range of allowable RAM that machine falls--from minimum
(highcpu) to balanced (standard) to maximum (highmem). The cost of each machine
type is determined by how much CPU and RAM it uses. Understanding that is enough to
correctly answer this question. https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/pricing#pricing
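You can check those CPU and RAM numbers yourself with gcloud (the zone below is an arbitrary assumption):
# Output includes guestCpus and memoryMb for each machine type
gcloud compute machine-types describe n1-highcpu-8 --zone us-central1-a
gcloud compute machine-types describe n1-highmem-16 --zone us-central1-a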
Question 23: Skipped
You currently have 850TB of Closed-Circuit Television (CCTV) capture data and are
adding new data at a rate of 80TB/month. The rate of data captured and needing to
be stored is expected to grow to 200TB/month within one year because new locations
are being added, each with 4-10 cameras. Which of the following storage options
best suits this purpose without encountering storage or throughput limits?
?
One Cloud Storage bucket per year, per location
?
One Cloud Storage bucket per month, for all locations
?
One Cloud Storage bucket for all objects
(Correct)
?
One Cloud Storage bucket per week
?
One Cloud Storage bucket per CCTV camera
Explanation
This question might make you think you need to do some math to calculate rates and
compare to limits, but you don't. You don't need to split your data up to avoid
bucket-level limits. It is generally easiest (and best) to manage all your data in
a single bucket, using things like folders to organize it. In fact, if you
separate data into many buckets, you are more likely to encounter limits around
bucket creation and deletion. https://cloud.google.com/storage/quotas
Question 24: Skipped
You are responsible for securely managing employee access to Google Cloud. Which
of the following are Google-recommended practices for this?
?
Have each employee set up a GMail account using two-factor authentication.
?
Set up all employee accounts to use the corporate security office phone number for
account rescue.
?
Use Cloud Identity or GSuite to manage Google accounts for employees.
(Correct)
?
Enforce MFA on employee accounts.
(Correct)
?
Use Google Cloud Directory Sync to push Google account changes to corporate head
office via LDAP.
Explanation
MFA stands for Multi-Factor Authentication, and it is a best practice to use this
to secure accounts. Cloud Identity and GSuite are the two ways to centrally manage
Google accounts. Google Cloud Directory Sync (GCDS) does use LDAP to connect to
your organization's directory server, but it only pulls data to synchronize and
never pushes changes. https://cloud.google.com/docs/enterprise/best-practices-for-
enterprise-organizations
Question 25:�Skipped
You run the command�kubectl describe pod mypodname�in Cloud Shell. �What should you
expect to see?
?
An "unknown command" error
?
Information about the named pod
(Correct)
?
An authorization failure
?
A configuration error
?
An authentication failure
Explanation
This is a valid command and Cloud Shell will automatically configure kubectl with
the required authentication information to allow you to interact with the GKE
cluster through it. https://kubernetes.io/docs/tutorials/kubernetes-
basics/explore/explore-intro/
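For completeness, a hedged sketch of how Cloud Shell gets wired to the cluster before such a command (cluster name and zone are hypothetical); Cloud Shell largely handles this for you:
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl describe pod mypodname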
Question 26: Skipped
When will a newly-created project become available?
?
At the end of the billing cycle of the linked billing account
?
On the first day of the month.
?
At midnight.
?
Once the project owner has logged out and back in again.
?
After a few minutes of initialization.
(Correct)
Explanation
Projects only take a few minutes to finish creating and can be used immediately
after that.
Question 27: Skipped
You already have a GCP project but want another one for a new developer who has
started working for your company. How can you create a new project?
?
Configure GCS for your local machine using QUIK bindings and press its "New
Project" button.
?
Turn on Gold level support on an existing project, phone support to create a new
project.
?
Enable Silver support on your billing account, email support to create a new
project.
?
In the GCP mobile app, navigate to the support section and press "Create new
project".
?
In the console, press on the current project name, then press on "Create New".
(Correct)
?
You cannot create a new project.
Explanation
You can create new projects, up to your quota. Support does not create projects for
you; that's something you do, yourself. "QUIK bindings" are just something made up.
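The command-line equivalent is a one-liner, sketched here with a hypothetical project ID:
gcloud projects create my-new-dev-project --name="Dev Project"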
Question 28: Skipped
You are designing the object security structure for sensitive customer information.
Which of the following should you be sure to include in your planning?
?
Hash and salt all data, to limit the blast radius of any potential breach.
?
Randomize object names, to support security through obscurity.
?
Use both ACLs and roles, to achieve defense in depth.
?
Assign only limited access, to achieve least privilege.
(Correct)
?
Ensure there is a honeypot, to support penetration testing.
?
None of the other options is appropriate.
Explanation
Least privilege is a paramount concern for data security, and you definitely do
want to restrict access as much as possible to support this. Hashing and salting
_passwords_ is important, but if you hash information you need to view (not just
compare), then hashing will make it unusable. ACLs and roles can both be used, but
they will not create multiple layers of security that an attacker would need to go
through: any allow in either of them will suffice to view the data. Security through obscurity is
not an effective strategy for securing data (or anything, really); you must assume
that every attacker knows what you know and still ensure data safety. Penetration
testing can be used as a part of your overall security strategy, but it doesn't
require a honeypot and is not your primary consideration.
https://www.owasp.org/index.php/Security_by_Design_Principles
Question 29: Skipped
You have a volume of data that is accessed very rarely (on average once every 3-4
years) but should be retrieved very quickly (less than one second) when it is.
Which of the following do you need to consider when deciding how to store this
data?
?
Thaw time from GCS Coldline may not be quick enough.
?
Request latency of GCS Multi-Regional may not be quick enough.
?
Retrieval time from GCS Nearline may not be quick enough.
?
All of the GCS storage classes would work fine.
(Correct)
?
Only one of the other options is correct.
(Correct)
Explanation
"All storage classes offer low latency (time to first byte typically tens of
milliseconds) and high durability." https://cloud.google.com/storage/docs/storage-
classes
Question 30: Skipped
You are currently using an `n1-highcpu-8` machine type and it is good but you would
just like a bit more RAM. Which of the following is the most cost-effective option
to achieve this?
?
Switch to `n1-highcpu-10`
?
Switch to a custom machine type with 8 CPUs and more RAM
(Correct)
?
Switch to `n1-highmem-8`
?
Switch to `n1-highmem-4`
?
Switch to `n1-highmem-16`
?
Switch to `n1-highcpu-16`
Explanation
The custom machine type with 8 CPUs is by far the best choice, here. You don't want
(or need) to choose a machine type with a different number of CPUs, so the only
other option to consider would be `n1-highmem-8`--but that has the _maximum_ amount
of RAM for 8 CPUs and you only want (and only want to pay for) "just a bit more
RAM". https://cloud.google.com/compute/docs/instances/creating-instance-with-
custom-machine-type https://cloud.google.com/compute/docs/machine-types
https://cloud.google.com/compute/pricing#pricing
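A hedged sketch of creating such an instance (the instance name, zone, and exact amount of RAM are assumptions):
gcloud compute instances create myvm --zone us-central1-a \
  --custom-cpu 8 --custom-memory 10GB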
Question 31: Skipped
You need to store some recently recorded customer focus sessions into a new GCP
project. How can you enable the GCS API in the fewest number of steps?
?
Open Cloud Shell, run `gcloud services enable storage`
?
Do nothing. It is enabled by default.
(Correct)
?
Open Cloud Shell, configure authentication, run `gcloud services enable
storage.googleapis.com`
?
Navigate to the Storage section of the console.
?
Open Cloud Shell, configure authentication, select the "defaults" project, run
`gcloud enable storage service`
Explanation
There is no such thing as a "defaults" project. Each API must be enabled before it
can be used. Some APIs are enabled by default, and that includes GCS. You do not
have to configure authentication to be able to use Cloud Shell, but regardless,
using Cloud Shell would take more steps than doing nothing. :-)
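If you want to confirm which APIs are already on in a fresh project, a quick sketch:
# Lists the services already enabled for the current project
gcloud services list --enabled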
Question 32: Skipped
You are designing the logging structure for a non-containerized Java application
that will run on GCE. Which of the following options is recommended and will use
the least number of steps to enable your developers to later access and search
logs?
?
Have the developers write log lines to stdout and stderr, install and run the
Stackdriver agent beside the application.
?
Have the developers write log lines to a file named stackdriver.log.
?
Have the developers write log lines to stdout and stderr.
?
Have the developers write log lines to a file named stackdriver.log, install and
run the Stackdriver agent beside the application.
?
Have the developers write log lines to a file named application.log, install the
Stackdriver agent on the VMs, configure the Stackdriver agent to monitor and push
application.log.
(Correct)
?
Have the developers write logs using the App Engine Java SDK.
Explanation
The App Engine SDKs only work for apps running on App Engine. Stackdriver does not
automatically send files named stackdriver.log. Stackdriver is not installed by
default on GCE. Logging to stdout and stderr on GCE is not the recommended way to
get logs to Stackdriver; configuring a custom log file location is.
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-
engine-6600d81e70e3 https://cloud.google.com/logging/docs/agent/configuration
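A rough sketch of that last configuration step, assuming the standard Logging agent (google-fluentd) and a hypothetical log path and tag:
# Tell the agent to tail the application log, then restart it
sudo tee /etc/google-fluentd/config.d/myapp.conf <<EOF
<source>
  @type tail
  format none
  path /var/log/myapp/application.log
  pos_file /var/lib/google-fluentd/pos/myapp.pos
  tag myapp
</source>
EOF
sudo service google-fluentd restart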
Question 33: Skipped
You need to store thousands of 2TB objects for one month and it is very unlikely
that you will need to retrieve any of them. Which of the following options would
be the most cost-effective?
?
Multi-Regional Cloud Storage bucket
?
Coldline Cloud Storage bucket
?
Regional Cloud Storage bucket
?
Nearline Cloud Storage bucket
(Correct)
?
Bigtable
Explanation
Bigtable is not made for storing large objects. Coldline's minimum storage duration
of 90 days makes it more expensive than Nearline. Multi-Regional and Regional are
both more expensive than Nearline. https://cloud.google.com/storage/docs/storage-
classes https://cloud.google.com/storage/pricing https://cloud.google.com/bigtable/
Question 34: Skipped
You are designing the logging structure for a containerized Java application that
will run on GAE Flex. Which of the following options is recommended and will use
the least number of steps to enable your developers to later access and search
logs?
?
Have the developers write logs using the App Engine Java SDK
?
Have the developers write log lines to a file named stackdriver.log, install and
run the Stackdriver agent beside the application
?
Have the developers write log lines to stdout and stderr, install and run the
Stackdriver agent beside the application
?
Have the developers write log lines to stdout and stderr
(Correct)
?
Have the developers write log lines to a file named stackdriver.log
Explanation
In App Engine Flex the connection to Stackdriver (i.e. agent installation and
configuration) is handled automatically for you. In GAE Flex, you _could_ write
logs using the App Engine SDK--and that would work--but it's best practice for
containers to log to stdout and stderr, instead: "Containers offer an easy and
standardized way to handle logs because you can write them to stdout and stderr.
Docker captures these log lines and allows you to access them by using the docker
logs command. As an application developer, you don't need to implement advanced
logging mechanisms. Use the native logging mechanisms instead."
https://cloud.google.com/appengine/articles/logging
https://cloud.google.com/solutions/best-practices-for-operating-containers
Question 35: Skipped
You are planning to host your system in Google App Engine. Which of the following
statements is NOT true about using the pricing calculator?
?
You enter the amount of Cloud Storage you'll use on the App Engine tab.
?
You enter the amount of Outgoing Network Traffic on the App Engine tab.
?
You select your required operating system on the App Engine tab.
(Correct)
?
None of the other options is untrue.
?
You enter the number of instances on the App Engine tab.
Explanation
You cannot choose the operating system for App Engine; that's handled internally.
Also, you need to be very familiar with the pricing calculator and those are all
valid things you can enter on the App Engine tab about a GAE-hosted system.
https://cloud.google.com/products/calculator/
Question 36: Skipped
You want to create a new GCS bucket in Iowa. How could you go about doing this?
?
Begin creating the bucket and set the location to Iowa when prompted.
(Correct)
?
Make sure the project is homed in the Iowa region then just create the bucket.
?
First create the bucket in Cloud Shell and then set its location to Iowa using the
console.
?
At the top of the GCP console, drop the region selector and choose us-central1,
then create the bucket.
?
At the top of the GCP console, drop the zone selector and choose us-central1-a, us-
central1-b, us-central1-c, or us-central1-f, then create the bucket.
?
Only one of the other options will work.
(Correct)
Explanation
GCP does not have top-level location selectors like AWS does. GCP is global by
default and it's only individual resources that live in certain locations. When
you're creating a bucket in the console, the wizard asks you where you want to put
it. Also, you cannot move a bucket after its region has been chosen during
creation. https://cloud.google.com/docs/geography-and-regions
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-
organizations#projects_are_not_based_on_geography_or_equivalent_to_zones
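The gsutil equivalent of choosing the location at creation time (bucket name is hypothetical; us-central1 is the Iowa region):
gsutil mb -l us-central1 gs://my-iowa-bucket/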
Question 37: Skipped
You are responsible for onboarding a new employee. Which of the following is a
Google-recommended practice?
?
Grant them admin access to each GCE instance that they will need to manage.
?
Add them to a Google Group for their job function.
(Correct)
?
Generate and provide them with an SSH key.
?
None of the other options is a Google-recommended practice.
?
Log onto Cloud Shell with their account, to configure it.
Explanation
SSH keys should be generated and managed by the person who will use them. There
should be no need to log onto a new employee's account for them. Access to GCE
instances should not be managed directly between instances and employees; a Google
Group should be used to manage this. https://cloud.google.com/docs/enterprise/best-
practices-for-enterprise-organizations
Question 38: Skipped
You are planning a log analysis system to be deployed on GCP. Which of the
following would be the best way to store the logs, long-term?
?
BigTable
?
Stackdriver Logging
?
Activity Log
?
Cloud Pub/Sub
?
Cloud Storage
(Correct)
Explanation
Stackdriver Logging only retains logs for 30 days, but it can then send the logs on
to GCS for long-term storage (in whichever storage class is desired). Cloud Pub/Sub
is not long-term storage. https://cloud.google.com/logging/
http://gcp.solutions/diagram/Log%20Processing
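A hedged sketch of that export (sink name, bucket, and filter are hypothetical); you would also need to give the sink's writer identity write access on the bucket:
gcloud logging sinks create my-archive-sink \
  storage.googleapis.com/my-log-archive-bucket \
  --log-filter='resource.type="gce_instance"'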
Question 39: Skipped
How should you enable a GCE instance to read files from a bucket in the same
project?
?
When launching the instance, remove the default service account so it falls back to
project-level access
?
Do not change the default service account setup and attachment
(Correct)
?
Only one of the other options is correct
(Correct)
?
Log onto the instance and run `gcloud services enable storage.googleapis.com`
?
Log into Cloud Shell and run `gcloud services enable storage.googleapis.com`
Explanation
By default, both the default service account and the default access scopes allow reading from
GCS buckets in the same project, so you should just leave those alone and it will
work. https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
Question 40: Skipped
You have previously installed the Google Cloud SDK on your work laptop and
configured it. You now run the command `gcloud compute instances create newvm` but
it does not prompt you to specify a zone. Which of the following could explain
this?
?
The project configured for gcloud is located in a particular zone.
?
Only one of the other options is correct.
(Correct)
?
Your gcloud configuration includes a value for compute/zone
(Correct)
?
Your gcloud configuration includes a value for compute/region
?
In Cloud Shell, you previously set a zone as the default one GCE should use.
Explanation
Projects are global and are not "located" in any region or zone. The gcloud
family of tools save their default zone information locally where they're
installed, and these are separate from console settings. The gcloud tool _can_ pull
the values set in the console if you rerun `gcloud init`, but gcloud does not push
its configuration to the place the console uses.
https://cloud.google.com/compute/docs/regions-zones/changing-default-zone-region
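To inspect or set that local default, a quick sketch:
# Show current defaults, then set a default zone for gcloud on this machine
gcloud config list
gcloud config set compute/zone us-central1-a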
Question 41: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil
to retrieve a large amount of data from Cloud Storage. Of the following steps,
which is the last one to happen?
?
The gcloud command to start the instance completes
?
Space is reserved on a host machine
?
The instance goes into the Running state
?
The instance startup script completes
(Correct)
Explanation
After a request to create a new instance has been accepted and while space is being
found on some host machine, that instance starts in the Provisioning state. After
space has been found and reserved on a host machine, the instance state goes to
Staging while the host prepares to run it and sorts out things like the network
adapter that will be used. Immediately when the VM is powered on and the OS starts
booting up, the instance is considered to be Running. That's when gcloud completes,
if it was run without `--async`.
https://cloud.google.com/compute/docs/instances/checking-instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
Question 42: Skipped
How can you link a new project with your billing account?
?
If you created the project in Cloud Shell, do nothing.
?
If you created the project in the console, do nothing.
(Correct)
?
If you created the project via gsutil, do nothing.
?
If Google Titanium support created the project, do nothing.
?
Whenever a project is created, it is always linked with the billing account of
whoever created it.
?
If you created the project via gcloud, link it with a command under 'gcloud beta
billing'.
(Correct)
Explanation
If you created the project via gcloud, link it with a command under `gcloud beta
billing`.
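That linking command looks roughly like this (project ID and billing account ID are hypothetical):
gcloud beta billing projects link my-new-project \
  --billing-account=0X0X0X-0X0X0X-0X0X0X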
Question 43: Skipped
You navigate to the Activity Log for a project containing a GKE cluster you
created. If you filter the Resource Type to "GCE VM Instance", which of the
following will you see?
?
You will not see any lines because the instances are owned by GKE.
?
You will see lines of the form "DEFAULT_GCE_SERVICE_ACCOUNT created
GKE_NODE_INSTANCE_NAME"
?
You will see lines of the form "YOUR_EMAIL created GKE_NODE_INSTANCE_NAME"
?
None of the other options is correct.
(Correct)
Explanation
Log lines for GKE node creation will show up in the activity log. But the creation
is not attached to your account--you only created the GKE cluster. Neither is it
the GCE default service account that creates such instances--that account is meant
to be used by applications running _on_ GCE instances, not GKE management like
this. Instead, log lines will use the passive voice "GKE_NODE_INSTANCE_NAME was
created" to indicate that this was an automatic action taken by GCP because you had
previously configured/requested it do that.
https://console.cloud.google.com/home/activity
https://cloud.google.com/logging/docs/audit/
https://cloud.google.com/compute/docs/audit-logging
Question 44: Skipped
You are planning to run a single-node database on GKE. Which of the following
things do you need to consider?
?
You should use a DaemonSet object
?
You should use DataSet and DataSetReplication objects
?
You should use PersistentVolume and PersistentVolumeClaim objects
(Correct)
?
The data will likely be corrupted when a deployment changes or a pod fails
?
GKE handles disk replication across pods
Explanation
Databases are all about preserving information--about _keeping and not losing_
data--so we need to make sure that GKE knows that we care about the data we store
and need to keep it around. To do this, we need Persistent Volumes and Persistent
Volume Claims. GKE does not replicate disks across pods; it ensures that the data
for a pod persists and is still available to it when it recovers from a failure.
https://cloud.google.com/kubernetes-engine/docs/how-to/stateful-apps
https://stackoverflow.com/questions/41732819/why-statefulsets-cant-a-stateless-pod-
use-persistent-volumes
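A minimal sketch of such a claim (the name and size are hypothetical); the database pod would then mount the claimed volume:
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF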
Question 45: Skipped
You need to store trillions of 2KB objects for one month and you will need to
run analytical processing against all of them from hundreds of nodes. Which of the
following options would be the most cost-effective?
?
Nearline Cloud Storage bucket
?
Regional Cloud Storage bucket
?
Bigtable
(Correct)
?
Coldline Cloud Storage bucket
?
Multi-Regional Cloud Storage bucket
Explanation
Bigtable is made for large analytical workloads. With Cloud Storage, you pay for
read operations, so that can get quite expensive when it's not the right fit for
the data and access patterns. https://cloud.google.com/bigtable/
https://cloud.google.com/storage/pricing
Question 46: Skipped
You are designing the object security structure for sensitive customer information.
Which of the following should you be sure to include in your planning?
?
Assign all employees to a single full-access group, to keep security simple.
?
Do not grant any bucket-level permissions, so that new objects are secure by
default.
(Correct)
?
None of the other options is appropriate.
?
Give write access and read access to different people, to ensure separation of
duties.
?
Put each customer's objects in a separate bucket, to limit attack surface area.
Explanation
Separation of duties is not simply a matter of who can read and who can write data.
Security should be simple, yes, but it needs to actually be _secure_, first. You
should generally not design your system to need so many buckets, and you _can_
properly secure the data with object-level ACLs. It can be a good strategy to not
allow any bucket-level access and force access to be granted explicitly at the
object level.
https://www.owasp.org/index.php/Security_by_Design_Principles
Question 47: Skipped
You are planning to use BigTable for your system on GCP. Which of the following
statements is true about using the pricing calculator for this situation?
?
None of the other options is correct.
?
You need to estimate query volume for the BigTable autoscaling estimation.
?
You need to enter the number of BigTable nodes you'll provision.
(Correct)
?
You need to estimate how much GCS data will be backing the BigTable.
Explanation
BigTable is priced by provisioned nodes. BigTable does not autoscale. BigTable does
not store its data in GCS. https://cloud.google.com/products/calculator/
https://cloud.google.com/bigtable/
https://cloud.google.com/bigtable/docs/instances-clusters-nodes
Question 48: Skipped
How should you enable a GCE instance in Project A (having project ID `project-a-
id`) to read files from a bucket in a Project B (having project ID `project-b-id`)?
?
Only one of the other options is correct
?
Log onto the instance and run `gcloud services enable storage.googleapis.com
--project-id project-b-id`
?
Do not change the default service account setup and attachment
(Correct)
?
In Project B, grant bucket read access to Project A's default compute service
account.
(Correct)
?
Log into Cloud Shell in Project A and run `gcloud services enable
storage.googleapis.com --project-id project-b-id`
?
When launching the instance, remove the default service account so it falls back to
project-level access
?
Log into Cloud Shell in Project B and run `gcloud services enable
storage.googleapis.com --project-id project-a-id`
Explanation
Since the default scopes allow reading from GCS, all that remains to get this
situation working is for Project B (which owns and controls access to the bucket)
to grant access to the service account being used. APIs are enabled for a project
and do not differ between internal and external access.
https://cloud.google.com/iam/docs/granting-roles-to-service-accounts
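A hedged sketch of that grant, run against Project B's bucket (the bucket name is hypothetical, and PROJECT_A_NUMBER stands in for Project A's project number, since the default Compute Engine service account is named by number):
gsutil iam ch \
  serviceAccount:PROJECT_A_NUMBER-compute@developer.gserviceaccount.com:objectViewer \
  gs://project-b-bucket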
Question 49: Skipped
You are thinking through all the things that happen when a Compute Engine instance
starts up with a startup script that installs the Stackdriver agent and runs gsutil
to retrieve a large amount of data from Cloud Storage. Of the following steps,
which is the first one to happen?
?
The metadata service returns information about this instance to the first
requestor.
(Correct)
?
Data retrieval from GCS completes
?
Stackdriver Logging shows the first log lines from the startup script.
?
The instance startup script begins.
Explanation
Immediately when the VM is powered on and the OS starts booting up, the instance is
considered to be Running. That's when gcloud completes, if it was run without `--
async`. Then the metadata service will provide the startup script to the OS boot
process. The gsutil command will also need to get metadata--like the service
account token--but since it is synchronous by default and will take some time to
transfer the volume of data to the instance, the Stackdriver agent should have a
chance to push logs and show the startup script progress. When the transfer is
done, the startup script will complete and more logs will eventually be pushed to
Stackdriver Logging. https://cloud.google.com/compute/docs/instances/checking-
instance-status
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
https://cloud.google.com/compute/docs/storing-retrieving-metadata
https://cloud.google.com/compute/docs/startupscript
Question 50: Skipped
You have two web applications that you want to deploy in GCP--one written in Ruby
and the other written in Rust. Which of the following GCP services would be
capable of handling these apps?
?
Web Engine
?
Cloud Dataproc
?
Kubernetes Engine
(Correct)
?
Stackdriver
?
Cloud Functions
?
Compute Engine
(Correct)
Explanation
There is no GCP service called Web Engine. Stackdriver is a family of services for
monitoring and debugging apps, not for hosting them. Cloud Dataflow and Cloud
Dataproc are services for processing large volumes of data, not for hosting web
apps. Cloud Functions does not support Rust or Ruby. Ruby and Rust applications
could both be run in containers or directly on VMs.
https://cloud.google.com/stackdriver/ https://cloud.google.com/dataflow/
https://cloud.google.com/dataproc/ https://cloud.google.com/functions/
https://cloud.google.com/kubernetes-engine/ https://cloud.google.com/compute/