Cloud Architect Practice Exam
The Cloud Architect practice exam will familiarize you with the types of questions you may encounter on the
certification exam and help you gauge your readiness, or whether you need more preparation and/or
experience. Successful completion of the practice exam does not guarantee that you will pass the
certification exam, as the actual exam is longer and covers a wider range of topics.
D. In Google Cloud Storage and stored in a Nearline bucket. Set an Object Lifecycle
Management policy to change the storage class to Coldline for data older than 5 years.
Feedback
C (Correct Answer) - The access pattern fits the Nearline storage class requirements, and Nearline is a
more cost-effective storage approach than Multi-Regional. An Object Lifecycle Management policy
that deletes the data is correct, as opposed to one that changes the storage class to Coldline.
A and B - Multi-Regional storage class is incorrect.
D - Changing the storage class to Coldline is incorrect.
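As a sketch of the lifecycle policy discussed above (the bucket name and the 5-year threshold expressed in days are illustrative), a Delete rule can be written as a JSON lifecycle configuration:

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"age": 1825}
      }
    ]
  }
}
```

Saved as lifecycle.json, this could then be applied with `gsutil lifecycle set lifecycle.json gs://my-bucket`.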
Correct answer
C. Google Cloud Storage
Feedback
C (Correct Answer) - Google Cloud Storage supports Multi-Regional buckets that synchronize data
across regions automatically.
A - Google Cloud SQL instances are deployed within a single region.
B - Google Cloud Bigtable data is stored within a single region.
D - Google Cloud Datastore is stored within a single region.
A. Deploy each service into a single project within the same VPC.
B. Configure Shared VPC, and add each project as a service of the Shared VPC
project.
Correct
C. Configure each service to communicate with the others over HTTPS protocol.
D. Configure a global load balancer for each project, and communicate between each
service using the global load balancer IP addresses.
Feedback
B (Correct Answer) - Using a Shared VPC allows each team to manage their own application
resources individually, while enabling the applications to communicate with one another
securely over RFC1918 address space.
A - Deploying services into a single project results in every team accessing and managing the same
project resources. This is difficult to manage and control as the number of teams involved increases.
C - HTTPS is a valid option; however, this answer does not address the need to ensure management
of individual projects.
D - The global load balancer uses a public IP address, and therefore it does not conform to the
requirement of communication over RFC1918 address space.
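A minimal sketch of the Shared VPC setup described above (all project IDs are illustrative placeholders):

```shell
# Designate the host project for Shared VPC.
gcloud compute shared-vpc enable host-project-id

# Attach each team's service project to the host project so their
# resources can use the shared RFC1918 address space.
gcloud compute shared-vpc associated-projects add team-a-project \
    --host-project host-project-id
gcloud compute shared-vpc associated-projects add team-b-project \
    --host-project host-project-id
```

Each team then deploys into its own service project while subnets are administered centrally in the host project.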
A. Use Google Cloud Regional Storage for the first 30 days, and then move to
Coldline Storage.
B. Use Google Cloud Nearline Storage for the first 30 days, and then move to Coldline
Storage.
C. Use Google Cloud Regional Storage for the first 30 days, and then move to Nearline
Storage.
Incorrect
D. Use Google Cloud Regional Storage for the first 30 days, and then move to Google
Persistent Disk.
Correct answer
A. Use Google Cloud Regional Storage for the first 30 days, and then move to Coldline
Storage.
Feedback
A (Correct Answer) - Since the data is accessed frequently within the first 30 days, using Google
Cloud Regional Storage will enable the most cost-effective solution for storing and accessing the
data. For videos older than 30 days, Google Cloud Coldline Storage offers the most cost-effective
solution since it won’t be accessed.
B - While Google Cloud Coldline storage is cost-effective for long-term video storage, Google Cloud
Nearline Storage would not be an effective solution for the first 30 days as the data is expected to
be accessed frequently.
C - While Google Cloud Regional Storage is the most cost-effective solution for the first 30 days,
Google Cloud Nearline Storage is not cost effective for long-term storage.
D - While Google Cloud Regional Storage is the most cost-effective solution for the first 30 days,
storing the data on Google Cloud Persistent Disk would not be cost-effective for long-term storage.
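The 30-day transition in the correct answer can be sketched as a lifecycle rule (bucket name is illustrative):

```json
{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
        "condition": {"age": 30}
      }
    ]
  }
}
```

Applied with `gsutil lifecycle set lifecycle.json gs://video-archive-bucket`, objects automatically move from Regional to Coldline once they are 30 days old.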
A. Deploy the new application version temporarily, and then roll it back.
B. Create a second project with the new app in isolation, and onboard users.
C. Set up a second Google App Engine service, and then update a subset of clients to
hit the new service.
D. Deploy a new version of the application, and use traffic splitting to send a small
percentage of traffic to it.
Correct
Feedback
D (Correct Answer) - Deploying a new version without assigning it as the default version will not
create downtime for the application. Using traffic splitting allows for easily redirecting a small
amount of traffic to the new version and can also be quickly reverted without application downtime.
A - Deploying the application version as default requires moving all traffic to the new version. This
could impact all users and disable the service.
B - Deploying a second project requires data synchronization and having an external traffic splitting
solution to direct traffic to the new application. While this is possible, with Google App Engine,
these manual steps are not required.
C - App Engine services are intended for hosting different service logic. Using different services
would require manual configuration of the consumers of services to be aware of the deployment
process and manage from the consumer side who is accessing which service.
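A sketch of the deploy-then-split flow from the correct answer (service and version names are illustrative):

```shell
# Deploy the new version without routing any traffic to it.
gcloud app deploy --version v2 --no-promote

# Send 5% of traffic to v2, keeping 95% on the current version.
gcloud app services set-traffic default --splits v1=0.95,v2=0.05

# Roll back instantly by returning all traffic to v1.
gcloud app services set-traffic default --splits v1=1
```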
A. Log into one of the machines running the microservice and wait for the log storm.
B. In the Stackdriver Error Reporting dashboard, look for a pattern in the times the
problem occurs.
C. Configure your microservice to send traces to Stackdriver Trace so you can find
what is taking so long.
D. Set up a log metric in Stackdriver Logging, and then set up an alert to notify
you when the number of log lines increases past a threshold.
Correct
Feedback
D (Correct Answer) - Since you know that there is a burst of log lines, you can set up a metric that
identifies those lines. Stackdriver will also allow you to set up a text, email, or messaging alert that
can notify you promptly when the error is detected, so you can get onto the system to debug.
A - Logging into an individual machine may not reveal the specific performance problem: with
multiple machines in the configuration, the chances of being on the affected host during an
intermittent performance problem are reduced.
B - Error Reporting won’t necessarily catch the log lines unless they are stack traces in the proper
format. Additionally, just because there is a pattern doesn’t mean you will know exactly when and
where to log in to debug.
C - Trace may tell you where time is being spent, but it won’t let you home in on the exact host where
the problem is occurring, because you generally only send samples of traces. There is also no alerting
on traces to notify you exactly when the problem is happening.
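A sketch of the log-based metric from the correct answer (the metric name and log filter are illustrative; the actual filter would match the microservice's burst of log lines):

```shell
# Create a log-based metric that counts the error log lines.
gcloud logging metrics create error_burst \
    --description="Count of error log lines from the microservice" \
    --log-filter='resource.type="gce_instance" AND textPayload:"ERROR"'
```

An alerting policy can then be created in Stackdriver Monitoring on this metric, firing when the count crosses the chosen threshold.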
For this question, refer to the Dress4Win case study.
https://goo.gl/6hwzeD
For future phases, Dress4Win is looking at options to deploy data
analytics to the Google Cloud. Which option meets their business and
technical requirements?
A. Run current jobs from the current technical environment on Google Cloud
Dataproc.
B. Review all current data jobs. Identify the most critical jobs and create Google
BigQuery tables to store and query data.
C. Review all current data jobs. Identify the most critical jobs and develop Google
Cloud Dataflow pipelines to process data.
Incorrect
Correct answer
A. Run current jobs from the current technical environment on Google Cloud Dataproc.
Feedback
A (Correct Answer) - There is no requirement to migrate the current jobs to a different technology.
Using managed services where possible is a requirement. Using Google Cloud Dataproc allows the
current jobs to be executed within Google Cloud Platform on a managed services offering.
B - Migrating the existing data jobs to a different technology such as Google BigQuery, is not a
requirement.
C - Migrating existing data jobs to a different technology such as Google Cloud Dataflow, is not a
requirement.
D - Using managed services where possible is a requirement. The current jobs can run on a
Hadoop/Spark cluster in Google Compute Engine but it is not a managed services solution.
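A sketch of running the existing jobs unchanged on the managed offering (cluster name, region, sizing, job class, and jar path are illustrative):

```shell
# Create a managed Hadoop/Spark cluster.
gcloud dataproc clusters create analytics-cluster \
    --region=us-central1 --num-workers=3

# Submit an existing Spark job as-is; no rewrite to another technology needed.
gcloud dataproc jobs submit spark --cluster=analytics-cluster \
    --region=us-central1 --class=com.example.ExistingJob \
    --jars=gs://my-bucket/existing-job.jar
```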
For this question, refer to the Dress4Win case study.
https://goo.gl/6hwzeD
Correct answer
B. Use managed services whenever possible.
E. Evaluate and choose an automation framework for provisioning resources in the
cloud.
Feedback
B, E (Correct Answers) - Using managed services whenever possible is a requirement met by using
Google App Engine Flexible Environment. Using the Google Cloud SDK allows for provisioning and
management of Google Cloud Platform resources including Google App Engine Flexible
Environment.
A - The solution may support this requirement but will require additional solution components to
support and thus does not meet the requirement as stated.
C - The solution may support this requirement however there is no information on the specific
production services and how capacity would be saved.
D - The solution may support this requirement but will require additional solution components to
support and thus does not meet the requirement as stated.
F - The solution may support this requirement but will require additional solution components to
support and thus does not meet the requirement as stated.
For this question, refer to the Dress4Win case study.
https://goo.gl/6hwzeD
The architecture diagram below shows an event-based processing
pipeline that Dress4win is building to label and compress
user-uploaded images. Which GCP products should they use in
boxes 1, 2 and 3?
Correct answer
C. Google Cloud Storage, Google Cloud Pub/Sub, Google Cloud Dataflow
Feedback
C (Correct Answer) - The Cloud Storage API easily allows a write-only bucket for the image uploads from
the client. The upload event is then pushed into Pub/Sub, triggering the Cloud Function to grab the
file, push it through the Vision API, and send the metadata into Pub/Sub, where Dataflow will see
the message, process the file from GCS, and store the metadata in Cloud SQL.
A - An App Engine app could be written to accept image uploads, but Datastore is not for storing
image files.
B - An App Engine app could be written to accept image uploads, but natively Dataflow needs
either a GCS bucket or a PubSub topic to listen to for event processing. Connecting Dataflow to
AppEngine is a highly unusual architecture.
D - Connecting users directly to Dataflow for image uploads is not going to be able to handle the
bursty nature of user upload traffic efficiently and thus won’t give a reliable experience to users.
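The bucket-to-Pub/Sub link in the correct answer can be sketched with a storage notification (bucket and topic names are illustrative):

```shell
# Publish an OBJECT_FINALIZE event to a Pub/Sub topic whenever an
# upload to the bucket completes.
gsutil notification create -t image-uploads -f json \
    -e OBJECT_FINALIZE gs://uploads-bucket
```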
For this question, refer to the Dress4Win case study.
https://goo.gl/6hwzeD
Feedback
D (Correct Answer) - Having the scanners be located outside the cloud environment will best
emulate end user penetration testing. Using the public internet to access the environments best
emulates end user traffic.
A - Google does not require notification for customers conducting security scanning on their own
applications.
B - Deploying the security scanners within the cloud environment may not test the border security
configuration that end users would normally pass through. This does not emulate end-user traffic
as closely as possible.
C - Deploying the security scanners over the VPN between the on-premises and cloud
environments may not test the border security configuration that end users would normally pass
through. VPN traffic may be trusted more highly than public internet traffic, and so does not
emulate end-user traffic as closely as possible.
Your company’s architecture is shown in the diagram. You want to
automatically and simultaneously deploy new code to each Google
Container Engine cluster. Which method should you use?
Correct answer
A. Use an automation tool, such as Jenkins.
Feedback
A (Correct Answer) - This meets the criteria of doing this automatically and simultaneously.
B - Federated mode allows for deployment in a federated way, but does nothing beyond that; you
still need a tool like Jenkins to provide the "automated" part of the question, and with Jenkins you
can accomplish the goal without necessarily needing federation to be enabled.
C - This may work in very simple examples, but as complexity grows this will become
unmanageable.
D - Google Container Builder does not offer a way to push images to different clusters, they are
published to Google Container Registry.
Correct answer
A. Load data into Google BigQuery.
Feedback
BigQuery is the only one of these Google products that supports a SQL interface and a high enough
SLA (99.9%) to make it readily available. Cloud Storage does not have a SQL interface.
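A sketch of loading and querying the data in BigQuery (dataset, table, bucket, and schema are illustrative):

```shell
# Load CSV data from Cloud Storage into a BigQuery table.
bq load --source_format=CSV mydataset.mytable \
    gs://my-bucket/data.csv name:STRING,value:INTEGER

# Query it through the standard SQL interface.
bq query --use_legacy_sql=false \
    'SELECT name, SUM(value) FROM mydataset.mytable GROUP BY name'
```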
A. Use the Linux dd and netcat commands to copy and stream the root disk contents
to a new virtual machine instance in the US-East region.
B. Create a snapshot of the root disk and select the snapshot as the root disk when
you create a new virtual machine instance in the US-East region.
Incorrect
C. Create an image file from the root disk with Linux dd command, create a new disk
from the image file, and use it to create a new virtual machine instance in the US-East
region.
D. Create a snapshot of the root disk, create an image file in Google Cloud
Storage from the snapshot, and create a new virtual machine instance in the
US-East region using the image file for the root disk.
Correct answer
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage
from the snapshot, and create a new virtual machine instance in the US-East region
using the image file for the root disk.
Feedback
D (Correct Answer) - This approach meets all of the requirements, it is easy to do and works cross
project and cross region.
A - This approach affects performance of the existing machine and incurs significant network costs.
B - This approach does not allow you to create the VM in the new project since snapshots are
limited to the project in which they are taken.
C - dd will not work correctly on a mounted disk.
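A sketch of the snapshot-to-image-to-VM flow from the correct answer (disk, zone, image, and project names are illustrative):

```shell
# Snapshot the root disk of the running VM.
gcloud compute disks snapshot prod-disk --zone=us-central1-a \
    --snapshot-names=prod-root-snap

# Create an image from the snapshot; images are global resources,
# so they can be used in any region and shared across projects.
gcloud compute images create prod-root-image \
    --source-snapshot=prod-root-snap

# Boot a new VM in the US-East region from the image.
gcloud compute instances create prod-vm-east --zone=us-east1-b \
    --image=prod-root-image --image-project=source-project-id
```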
D. Create randomized bucket and object names. Enable public access, but only
provide specific file URLs to people who do not have Google accounts and need
access.
Feedback
C (Correct Answer) - This grants the least privilege required to access the data and minimizes the
risk of accidentally granting access to the wrong people.
A - Signed URLs could potentially be leaked.
B - This is needlessly permissive, users only require one permission in order to get access.
D - This is security through obscurity, also known as no security at all.
Your customer is moving their corporate applications to Google Cloud
Platform. The security team wants detailed visibility of all projects in
the organization. You provision the Google Cloud Resource Manager
and set up yourself as the org admin. Which Google Cloud Identity
and Access Management (Cloud IAM) roles should you give to the
security team?
Feedback
Answer B gives the security team read only access to everything your company produces, anything
else gives them the ability to, accidentally or otherwise, change things.
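A sketch of granting that read-only access at the organization level (the organization ID and group address are illustrative placeholders):

```shell
# Give the security team read-only visibility into every project
# in the organization via the Viewer role.
gcloud organizations add-iam-policy-binding 123456789 \
    --member='group:security-team@example.com' --role='roles/viewer'
```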
B. Enable object versioning on the website's static data files stored in Google
Cloud Storage.
Correct
D. Enable Google Cloud Deployment Manager (CDM) on the project, and define each
change with a new CDM template.
Incorrect
Correct answer
B. Enable object versioning on the website's static data files stored in Google Cloud
Storage.
C. Use managed instance groups with the “update-instances” command when starting
a rolling update.
Feedback
B (Correct Answer) - This is a seamless way to ensure the last known good version of the static
content is always available.
C (Correct Answer) - This allows for easy management of the VMs and lets GCE take care of
updating each instance.
A - This copy process is unreliable and makes it tricky to keep things in sync; it also doesn’t provide
a way to roll back once a bad version of the data has been written to the copy.
D - This would add a great deal of overhead to the process and would cause conflicts in association
between different Deployment Manager deployments which could lead to unexpected behavior if an
old version is changed.
E - This approach doesn’t scale well, there is a lot of management work involved.
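The two correct answers can be sketched as follows (bucket, instance group, template, and zone names are illustrative):

```shell
# B: keep prior versions of the static files recoverable.
gsutil versioning set on gs://static-content-bucket

# C: roll the managed instance group to a new instance template,
# replacing instances gradually so the site stays up.
gcloud compute instance-groups managed rolling-action start-update web-mig \
    --version=template=web-template-v2 --zone=us-central1-a \
    --max-unavailable=1
```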
Correct
B. Fragment the monolithic platform into microservices.
C. Remove the QA environment. Start executing canary releases.
Feedback
A (Correct Answer) - Allows for extensive testing of the application in the green environment before
sending traffic to it. Typically the two environments are identical otherwise which gives the highest
level of testing assurance.
B (Correct Answer) - Allows for smaller, more incremental rollouts of updates (each microservice
can be updated individually) which will reduce the likelihood of an error in each rollout.
C - Would remove a well-proven step from the general release strategy; a canary release platform is
not a replacement for QA, it should be additive.
D - Doesn’t really help the rollout strategy, there is no inherent property of a relational database that
makes it more subject to failed releases than any other type of data storage.
E - Doesn’t really help either since NoSQL databases do not offer anything over relational databases
that would help with release quality.
A lead software engineer tells you that his new application design
uses websockets and HTTP sessions that are not distributed across
the web servers. You want to help him ensure his application will run
properly on Google Cloud Platform. What should you do?
A. Help the engineer to convert his websocket code to use HTTP streaming.
B. Review the encryption requirements for websocket connections with the security
team.
C. Meet with the cloud operations team and the engineer to discuss load balancer
options.
Correct
D. Help the engineer redesign the application to use a distributed user session service
that does not rely on websockets and HTTP sessions.
Feedback
C (Correct Answer) - The HTTP(S) load balancer in GCP handles websocket traffic natively.
Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a
front end, for scale and availability.
A - There is no compelling reason to move away from websockets as part of a move to GCP.
B - This may be a good exercise anyway, but it doesn’t really have any bearing on the GCP
migration.
D - There is no compelling reason to move away from websockets as part of a move to GCP.
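One way the load balancer discussion above could play out in practice is raising the backend service timeout so long-lived WebSocket connections aren't dropped, and using session affinity for the non-distributed HTTP sessions (the backend service name and timeout value are illustrative):

```shell
# Keep WebSocket connections alive longer than the default timeout and
# pin each client to the same backend for its session.
gcloud compute backend-services update web-backend --global \
    --timeout=3600 --session-affinity=GENERATED_COOKIE
```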