
AWS Certified Solutions Architect Professional – Practice Test II (Completed on 03-February-2021)

Congratulations, you passed. Keep it up!

Attempt  Marks Obtained  Your Score  Time Taken  Result
1        79 / 80         98.75%      N/A         Congratulations! Passed

Domain-wise Quiz Performance Report

No  Domain                                          Total Questions  Correct  Incorrect  Unattempted  Marked as Review
1   Continuous Improvement for Existing Solutions   33               33       0          0            1
2   Design for Organizational Complexity            12               12       0          0            2
3   Design for New Solutions                        20               19       1          0            2
4   Migration Planning                              9                9        0          0            2
5   Cost Control                                    6                6        0          0            0
    Total (All Domains)                             80               79       1          0            7

Review the Answers (Sorting by: All)

Question 7 Incorrect

Domain: Design for New Solutions

Your company runs a popular map service as a SaaS platform. Your dynamic multi-page application's users are spread across the world, but not all of them use the system heavily, so the load is high in some regions but not all. The application uses a NoSQL database running on a cluster of EC2 machines and uses custom tools to replicate the data across different regions. The current database size is around 10 PB, and as the popularity of the application grows, the database is also growing rapidly. The application now serves millions of requests on your SaaS platform. Management has decided to come up with a plan to re-design the architecture, both from the application-availability and infrastructure-cost perspectives. Please suggest the necessary changes. Select 3 options.

A. Use Route 53 with a latency-based routing policy to redirect requests to the lowest-latency region, and deploy the application into the regions generating the heavy load.

B. Migrate the application to S3 and use CloudFront edge locations to serve the requests.

C. Use DynamoDB global tables to replicate the data into multiple regions.

D. Deploy ElastiCache in the regions in which DynamoDB is not running.

E. Use RDS with Read Replicas in multiple regions; the application servers will use the read replicas to serve the traffic.

Explanation:

Correct Answer: A, C, D

Option A is CORRECT because Route 53 latency-based routing redirects each request to the lowest-latency region, helping serve the application faster.

Option B is INCORRECT because Amazon S3 can only be used for static website hosting or single-page applications; in the current case the application is a SaaS platform and dynamic in nature.

Option C is CORRECT because DynamoDB global tables provide replication capabilities. With global tables you get a fully managed, multi-region, multi-master database that provides fast, local read and write performance for massively scaled, global applications.
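As a rough illustration of Option C only (not taken from the original explanation), the boto3 sketch below enables global-table replication, assuming identical regional tables with DynamoDB Streams already exist; the table name and region list are hypothetical.

```python
# Hypothetical sketch: group existing regional tables into a DynamoDB global
# table (2017.11.29 version) so writes replicate automatically across regions.
# Assumes a table named "map-tiles" with identical key schema and streams
# (NEW_AND_OLD_IMAGES) enabled already exists in every listed region.
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

response = dynamodb.create_global_table(
    GlobalTableName="map-tiles",                 # hypothetical table name
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
        {"RegionName": "ap-southeast-1"},
    ],
)
print(response["GlobalTableDescription"]["GlobalTableStatus"])
```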

Option D is CORRECT because the database size is in the petabytes; regional caching servers can be used to serve data from the nearest region.

Option E is INCORRECT because AWS RDS is a relational database service and the application in question uses a NoSQL database; it would require significant engineering effort to re-design the application to fit a relational database and scale it for such a huge amount of data.


Question 8 Correct

Domain: Design for New Solutions

You have developed a web application to collect monthly expense reports. Given the nature of the application and its usage statistics, it is mostly used around the last week and the first week of each month. To increase application performance you added a caching layer in front of the application servers, so the reports are cached and served immediately. You started off with ElastiCache Redis on a "cache.t2.small" node type. The application has been running fine, but looking at the performance activity in CloudWatch, hardly 50% of the requests are served by the cache and the cache is not able to cope with the additional content requirements. You want to improve the application with minimal changes and resources. Please select a valid option.

A. Modify the ElastiCache instance from t2.small to t2.medium, as t2.medium is more suitable for the given requirement.

B. Create a new ElastiCache instance with t2.micro, and terminate the t2.small instance.

C. Migrate the application to Elastic Beanstalk to use auto-scaling and set the desired and minimum capacity to 1; use the RDS and cache layer of Beanstalk to save cost.

D. Run the web application from S3 and serve it with CloudFront.

Explanation:

Correct Answer: A

Option A is CORRECT because we can modify the cache node type from "cache.t2.small" to "cache.t2.medium" in the console. We must increase the size of the Redis instance for the server to serve more requests from the cache.

Option B is INCORRECT because creating a new ElastiCache instance with a "cache.t2.micro" node type is not needed here.

Option C is INCORRECT because migrating to Beanstalk will simply not save cost; also, Beanstalk has an RDS layer but no caching layer.

Option D is INCORRECT because S3 and CloudFront will incur additional cost for such a minimal use case.

There are two things here.

ElastiCache instance -> this supports the underlying ElastiCache engine. It is similar to an EC2 instance type such as "t2.micro", "t2.small", etc. The instance type can be modified either through the AWS console or through the CLI.

Redis -> this is the engine that runs the ElastiCache cluster. The engine can only be "upgraded"; it cannot be "downgraded".
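For reference, a minimal boto3 sketch of the node-type change described in Option A is shown below; the cluster id is a hypothetical placeholder, not something from the question.

```python
# Hypothetical sketch: scale the existing Redis cluster up from
# cache.t2.small to cache.t2.medium in place.
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

elasticache.modify_cache_cluster(
    CacheClusterId="expense-reports-cache",  # hypothetical cluster id
    CacheNodeType="cache.t2.medium",
    ApplyImmediately=True,  # apply now rather than in the next maintenance window
)
```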

Please refer to page 99 of the below link

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/redis-ug.pdf


Question 9 Correct

Domain: Design for New Solutions

You are an IT administrator and you are responsible for managing several on-premises databases in VMware vSphere environments. The R&D team has just created several RDS instances on VMware to utilize the latest AWS RDS on VMware features. These new databases can then be managed using the RDS console, API, and CLI. Which activities does Amazon RDS on VMware manage on your behalf? (Select FOUR)

A. The patching of the RDS on-premises operating systems and database engines.

B. Instance health monitoring and failover capabilities of the on-premises instances.

C. Online backups based on retention policies of databases in RDS on VMware.

D. Point-in-time restore from on-premises instances and cloud backups when needed.

E. IP management, such as a dedicated public IP allocated by the AWS VPC.

Explanation:

Correct Answer – A, B, C, D

Amazon Relational Database Service (RDS) on VMware lets you deploy managed databases in on-premises VMware environments using the Amazon RDS
technology. For what Amazon RDS on VMware manages on your behalf, refer to https://aws.amazon.com/rds/vmware/faqs/ and
https://docs.aws.amazon.com/AmazonRDS/latest/RDSonVMwareUserGuide/rds-feature-support.html

Option A is CORRECT: Because RDS on VMware takes care of the patching for databases.

Option B is CORRECT: Because RDS on VMware has instance health monitoring and failover capabilities.

Option C is CORRECT: Because after being configured, RDS on VMware takes care of the backup and retention just like what it does in AWS RDS.

Option D is CORRECT: For a similar reason as Option C.

Option E is incorrect: Because RDS on VMware communicates with AWS RDS using a dedicated VPN channel. There is no public IP allocated by AWS VPC.


Question 13 Correct

Domain: Design for New Solutions

Your team is building a smart-home iOS app. End users use your company's camera-equipped home devices such as baby monitors, webcams, and home surveillance systems, and the videos are uploaded to AWS. Afterward, through the mobile app, users can play the on-demand or live videos using the HTTP Live Streaming (HLS) format. Which combination of steps should you use to accomplish this task? (Select TWO)

A. Create a Kinesis Data Firehose to ingest, durably store, and encrypt the live videos from the users' home devices.

B. Use AWS Elemental MediaLive and AWS Elemental MediaPackage with Amazon CloudFront.

C. Transform the stream data to HLS-compatible data by using Kinesis Data Analytics or custom code in EC2/Lambda. Then, in the mobile application, use the HLS protocol to display the video stream by using the converted HLS streaming data.

D. In the mobile application, use HLS to display the video stream by using the HLS streaming session URL.

Explanation:

Correct Answers – B, D

AWS provides a live streaming solution that combines AWS Elemental MediaLive and AWS Elemental MediaPackage with Amazon CloudFront to build a highly resilient and scalable architecture that delivers your live content worldwide. You can deploy this live streaming video architecture automatically using the solution's implementation guide and accompanying AWS CloudFormation template.

Option A is INCORRECT because Kinesis Data Firehose is not used for live video streaming; it is used for streaming data delivery.

Option B is CORRECT because AWS provides a live streaming solution that combines AWS Elemental MediaLive and AWS Elemental MediaPackage with
Amazon CloudFront to build a highly resilient and scalable architecture that delivers your live content worldwide.

AWS Elemental MediaStore is a video origination and storage service that offers the high performance and immediate consistency required for live and on-
demand media. You can use AWS Elemental MediaStore to store assets that MediaLive retrieves and uses when transcoding, and as a destination for
output from MediaLive.

Option C is incorrect because transforming the stream data to HLS-compatible data by using Kinesis Data Analytics or custom code in EC2/Lambda is not needed here.

Option D is CORRECT because the GetHLSStreamingSessionURL API is called to retrieve the HLS streaming session URL. When you have the HLS streaming session URL, provide it to the video player, which will be able to play the video.
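As a rough, hedged sketch of that flow (the stream name is hypothetical), the HLS streaming session URL could be retrieved with boto3 along these lines and then handed to any HLS-capable player:

```python
# Hypothetical sketch: obtain an HLS streaming session URL for a
# Kinesis video stream and pass it to a third-party player.
import boto3

kvs = boto3.client("kinesisvideo", region_name="us-east-1")

# 1. Ask Kinesis Video Streams which endpoint serves HLS sessions.
endpoint = kvs.get_data_endpoint(
    StreamName="home-camera-stream",             # hypothetical stream name
    APIName="GET_HLS_STREAMING_SESSION_URL",
)["DataEndpoint"]

# 2. Call GetHLSStreamingSessionURL against that endpoint.
kvam = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = kvam.get_hls_streaming_session_url(
    StreamName="home-camera-stream",
    PlaybackMode="LIVE",                          # or "ON_DEMAND" for archived video
)["HLSStreamingSessionURL"]

print(url)  # give this URL to the HLS-capable video player
```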

AWS Docs for reference:

https://aws.amazon.com/solutions/live-streaming-on-aws/

https://docs.aws.amazon.com/medialive/latest/ug/medialive-ug.pdf#what-is


Question 14 Marked as review Correct

Domain: Design for New Solutions

An IoT company has a new product, a camera device. The device has several sensors installed and can record video as required. The device includes the AWS Kinesis Video Streams SDK in its software and is able to transmit recorded video in real time to AWS Kinesis. End users can then use a desktop or web client to view, download, or share the video stream. The client app should be simple and should use a third-party player such as Google Shaka Player to display the video stream from Kinesis. How should the client app be designed?

A. The client can use HTTP Live Streaming (HLS) for live playback. Use the GetMedia API to process and play Kinesis video streams.

B. The client can use HLS for live playback. Use the GetHLSStreamingSessionURL API to retrieve the HLS streaming session URL, then provide the URL to the video player.

C. The client can use Adobe HTTP Dynamic Streaming (HDS) for live playback. Use the GetHDSStreamingSessionURL API to retrieve the HDS streaming session URL, then provide the URL to the video player.

D. The client can use Microsoft Smooth Streaming (MSS) for live playback. Use the GetMSSStreaming API to retrieve the MSS streaming to the video player.

Explanation:

Correct Answer – B

The most straightforward way to view or live playback the video in Kinesis Video Streams is by using HLS. HTTP Live Streaming (HLS) is an industry-standard
HTTP-based media streaming communications protocol.

Option A is incorrect: Because although the GetMedia API may work, it is not as simple as HLS. You may have to create a player that uses GetMedia and build it yourself. However, in this case, a third-party player is needed. Reference: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-hls.html#how-hls-ex1-session.

Option B is CORRECT: Because GetHLSStreamingSessionURL API is required for third party players to play the HLS streams.

Option C is incorrect: Because HTTP Live Streaming (HLS) should be used to playback the Kinesis Video Streams.

Option D is incorrect: Same reason as Option C.


Question 15 Correct

Domain: Design for New Solutions

Which of the following are associated with using the "HLS" method of viewing the Kinesis video stream? (Select TWO)

A. A web application that is able to display the video stream using the third-party player Video.js.

B. In order to process Kinesis video streams, a SaaS provider needs to build a new video player that is integrated into their major online product.

C. Able to view only live video, not archived video.

D. Playback video by typing the HLS streaming session URL into the location bar of the "Apple Safari Technology" browser for debugging purposes.

Explanation:

Correct Answer – A, D

For differences between the GetMedia API and HLS, please refer to https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-hls.html#how-hls-ex1-display.

Option A is CORRECT: Because a third-party player that supports HLS can be used to integrate with Kinesis Video Streams.

Option B is incorrect: Because if a new player is needed which means that you have to build your own player, GetMedia API will be suitable.

Option C is incorrect: You can use HLS to view an Amazon Kinesis video stream, either for live playback or to view archived video.

Please refer to the following link on HLS

https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-hls.html#how-hls-ex1-display

https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-playback.html

Option D is CORRECT: Because the Apple Safari Technology browser can playback video if the HLS streaming session URL is typed in the location bar.


Question 20 Correct

Domain: Design for New Solutions

An international company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons, they need disaster recovery capability in a separate region with a Recovery Time Objective of 5 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application and control the throughput of DynamoDB used for the synchronization of data.
Which design would you choose to meet these requirements?

A. Use AWS Data Pipeline to schedule a DynamoDB cross-region copy once a day; create a "Last updated" attribute in your DynamoDB table that represents the timestamp of the last update and use it as a filter.

B. Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region.

C. Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 (using an EMR cluster) in the current region once a day, then schedule another task immediately after it that will import the data from S3 to DynamoDB (using an EMR cluster) in the other region.

D. Send each item to an SQS queue in the second region; use an Auto Scaling group behind the SQS queue to replay the writes in the second region.

Explanation:

Answer - C

Exporting and Importing DynamoDB Data Using AWS Data Pipeline:

You can use AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. You can also use the console to import data from
Amazon S3 into a DynamoDB table, in the same AWS region or in a different region.

To export a DynamoDB table, you use the AWS Data Pipeline console to create a new pipeline. The pipeline launches an Amazon EMR cluster to perform the
actual export. Amazon EMR reads the data from DynamoDB and writes the data to an export file in an Amazon S3 bucket.

The process is similar for an import, except that the data is read from the Amazon S3 bucket and written to the DynamoDB table.
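As an illustrative sketch only (it assumes a pipeline definition prepared separately, for example from the AWS-provided DynamoDB export template, and saved to a local JSON file), the daily export pipeline could be created and activated with boto3 roughly as follows:

```python
# Hypothetical sketch: create and activate a Data Pipeline whose definition
# (EMR activity, DynamoDB source table, S3 output node, daily schedule,
# read-throughput ratio) is assumed to live in dynamodb_export_definition.json.
import json
import boto3

datapipeline = boto3.client("datapipeline", region_name="us-east-1")

pipeline_id = datapipeline.create_pipeline(
    name="dynamodb-daily-export",          # hypothetical name
    uniqueId="dynamodb-daily-export-v1",   # idempotency token
)["pipelineId"]

with open("dynamodb_export_definition.json") as f:
    definition = json.load(f)

datapipeline.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=definition["objects"],
    parameterObjects=definition.get("parameters", []),
    parameterValues=definition.get("values", []),
)

datapipeline.activate_pipeline(pipelineId=pipeline_id)
```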

Note: The question says "An international company has deployed a multi-tier web application that relies on DynamoDB in a single region."

Option A is INCORRECT because a Data Pipeline is not needed in this case; DynamoDB provides cross-region replication via global tables.

Options B and D are INCORRECT because a Data Pipeline fits the given requirements better than a custom script or an SQS queue.

Please check the below link to know more about syncing the data to S3.

https://aws.amazon.com/articles/using-dynamodb-with-amazon-elastic-mapreduce/


Question 30 Correct

Domain: Design for New Solutions

Your company's on-premises content management system has the following architecture:
Application Tier – Java code on a JBoss application server
Database Tier – Oracle database regularly backed up to Amazon Simple Storage Service (S3) using the Oracle RMAN backup utility
Static Content – stored on a 512 GB gateway-stored Storage Gateway volume attached to the application server via the iSCSI interface
Which AWS-based disaster recovery strategy will give you the best RTO?

A. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server.

B. Deploy the Oracle database on RDS. Deploy the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon Glacier. Generate an EBS volume of static content from the Storage Gateway and attach it to the JBoss EC2 server.

C. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content by attaching an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the JBoss EC2 server.

D. Deploy the Oracle database and the JBoss app server on EC2. Restore the RMAN Oracle backups from Amazon S3. Restore the static content from an AWS Storage Gateway-VTL running on Amazon EC2.

Explanation:

Answer - A

Option A is CORRECT because (i) it deploys the Oracle database on an EC2 instance by restoring the backups from S3, which is quick, and (ii) it generates the EBS volume of static content from the Storage Gateway. Due to these points, option A meets the best RTO compared to all the remaining options.

Option B is incorrect because restoring the backups from the Amazon Glacier will be slow and will not meet the RTO.

Option C is incorrect because there is no need to attach the Storage Gateway as an iSCSI volume; you can just easily and quickly create an EBS volume from
the Storage Gateway. Then you can generate snapshots from the EBS volumes for better recovery time.
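As a rough illustration of the recovery path in Option A (every ARN and id below is a hypothetical placeholder), the snapshot-to-EBS steps could be scripted with boto3 roughly like this:

```python
# Hypothetical sketch: snapshot the gateway-stored volume, create an EBS
# volume from the snapshot, and attach it to the JBoss EC2 instance.
import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Take an EBS snapshot of the 512 GB gateway-stored volume.
snapshot_id = storagegateway.create_snapshot(
    VolumeARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE/volume/vol-EXAMPLE",
    SnapshotDescription="DR copy of static content",
)["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# 2. Create an EBS volume from the snapshot in the JBoss instance's AZ.
volume_id = ec2.create_volume(
    SnapshotId=snapshot_id,
    AvailabilityZone="us-east-1a",
)["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

# 3. Attach the volume to the JBoss application server.
ec2.attach_volume(
    VolumeId=volume_id,
    InstanceId="i-0123456789abcdef0",   # hypothetical JBoss EC2 instance
    Device="/dev/sdf",
)
```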

Option D is incorrect as restoring the content from Virtual Tape Library will not fit into the RTO.


Question 39 Marked as review Correct

Domain: Design for New Solutions

Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS, rear-view cams, and any other technical innovation they wish to embed in the helmet. The current manufacturing process is data-rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the electronics using GPUs with CUDA across a cluster of servers with low-latency networking. What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?

A. Use AWS Data Pipeline to manage the movement of data & metadata and the assessments. Use an Auto Scaling group of G2 instances in a placement group.

B. Use AWS Step Functions to manage the assessments and the movement of data & metadata. Use an Auto Scaling group of G2 instances in a placement group.

C. Use Amazon Simple Workflow (SWF) to manage the assessments and the movement of data & metadata. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

D. Use AWS Data Pipeline to manage the movement of data & metadata and the assessments. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

Explanation:

Answer - B

The main point to consider in this question is that the assessments include human interaction as well. In most such cases, always look for AWS Step
Functions in the options.

Option A is incorrect because Data Pipeline is useful for the batch jobs that deal with the automated assessments; it is not a useful option for the human assessments.

Option B is CORRECT because (a) Step Functions enables assessments involving human interaction, and (b) it uses Auto Scaled G2 instances that are efficient for the automated assessments due to their GPUs and low-latency networking.

Please refer to the below link for AWS Step Functions:

https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
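As a rough, hypothetical sketch of how such a hybrid workflow could be wired up (the queue, Lambda function, role ARN, and state names are assumptions, not part of the question), a state machine that pauses for a human assessment and then triggers the automated GPU assessment might look like this:

```python
# Hypothetical sketch: a Step Functions state machine with a human callback
# step (task token sent to SQS) followed by an automated assessment step
# (a Lambda that submits the CUDA job to the G2 Auto Scaling group).
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

definition = {
    "StartAt": "HumanAssessment",
    "States": {
        "HumanAssessment": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
            "Parameters": {
                "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/assessment-queue",
                "MessageBody": {
                    "TaskToken.$": "$$.Task.Token",
                    "HelmetId.$": "$.helmetId",
                },
            },
            "Next": "AutomatedFailureModeAnalysis",
        },
        "AutomatedFailureModeAnalysis": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:submit-gpu-job",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="helmet-assessment-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsAssessmentRole",
)
```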

Option C is incorrect because although SWF can be used for human tasks, C3 instances and SR-IOV will not provide the required GPU.

Option D is incorrect because (a) Data Pipeline is useful for the batch jobs that deal with the automated assessments but not for the human assessments, and (b) C3 instances and SR-IOV will not provide the required GPU.


Question 41 Correct

Domain: Design for New Solutions

You are an AWS Cloud Architect in a big company, and your company is in the planning phase for a fresh project. The project needs to be developed and deployed completely in AWS, and various deployment services are being considered. Your team members are debating whether OpsWorks or CloudFormation should be used. Which of the below options should you consider to help choose the most appropriate service? Select 3.

A. The team's experience with Chef and Recipes. If the team lacks Chef knowledge, OpsWorks may not be considered, as the learning curve would be steep.

B. The budget of the whole project. Apart from resources such as EC2, OpsWorks (except OpsWorks for Chef Automate) and CloudFormation templates and stacks are charged differently.

C. The schedule of the project. If the timeline is not under pressure, the team can still choose to learn new skills such as Chef for OpsWorks or JSON scripting for CloudFormation.

D. Understand whether the team prefers deeper control over infrastructure setup. If yes, CloudFormation may be a better choice. Otherwise, OpsWorks is able to take care of some basic configurations automatically.

E. Understand the limitations of both CloudFormation and OpsWorks. For example, OpsWorks does not support Spot EC2 instances or Windows EC2 instances.

Explanation:

Correct Answer – A, C, D

The question asks for the items that help with choosing between OpsWorks and CloudFormation. One major feature of OpsWorks is that it uses Chef. For OpsWorks, it is very common that a custom recipe is needed. That might be a simple task if the team has a Chef expert, but if it does not, there is a pretty steep learning curve. In either case, the project's schedule should always be considered.

Option A is CORRECT because the experience of Chef and Recipes is a key factor to choose OpsWorks or not.

Option B is incorrect because the OpsWorks (except OpsWorks for Chef Automate) and CloudFormation templates and stacks themselves do not incur cost. You only pay for the resources that are set up in the stacks.

Option C is CORRECT because the project's schedule is also a key factor to consider.

Option D is CORRECT because CloudFormation is better for lower-level scripting if the team prefers deeper infrastructure control with code.

Option E is incorrect: It is indeed essential to check the limitations of the two. However, OpsWorks supports spot EC2 instances and Windows EC2 instances.
Refer to https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-os-windows.html.
Question 42 Correct

Domain: Design for New Solutions

Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO strongly agrees to move the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to deploy the application, post its AMI creation, and help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200 GB in size and you have a 20 Mbps Internet connection. What is the solution?

A. Create an EBS-backed private AMI which includes a fresh install of your application. Develop a CloudFormation template that includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

B. Deploy your application on EC2 instances within an Auto Scaling group across multiple Availability Zones. Asynchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

C. Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload.

D. Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate the transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.

Explanation:

Answer - A

Option A is CORRECT because (a) with AMIs, the newly created EC2 instances will be ready with the pre-installed application, thus reducing the RTO, (b) with CloudFormation, the entire stack can be automatically provisioned, and (c) since no additional services are used, the cost will stay low.
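As a rough illustration of Option A's building blocks (the instance id, stack name, template file, and parameter name are all hypothetical), the AMI bake and the disaster-time stack launch could be driven with boto3 like this:

```python
# Hypothetical sketch: bake an EBS-backed AMI from the configured instance,
# then launch the Multi-AZ stack (EC2, Auto Scaling, ELB) from a versioned
# CloudFormation template that takes the AMI id as a parameter.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

ami_id = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # instance with the app pre-installed
    Name="webapp-dr-v1",
)["ImageId"]

with open("dr-stack.yaml") as f:        # template kept under version control
    template_body = f.read()

cloudformation.create_stack(
    StackName="webapp-dr",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "AmiId", "ParameterValue": ami_id}],
    Capabilities=["CAPABILITY_IAM"],
)
```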

Option B is incorrect because although this could work, (a) deploying EC2 instances for this scenario will be expensive, and (b) in case of disaster, the recovery will potentially be slower since the new EC2 instances need to be manually updated with the application software and patches, especially since this option does not use AMIs.

Option C is incorrect because it has a performance issue: (a) backing up a local database of 200 GB over a 20 Mbps connection every hour will be very slow, and (b) even with incremental backups, recovering from them takes time and might not satisfy the given RTO.

Option D is incorrect because (a) the EC2 instance is a single point of failure, which needs to be made highly available via Auto Scaling, (b) it can only handle the average load of the application, so it may fail under peak load, and (c) AWS Direct Connect will be an expensive solution compared to the setup of option A.


Question 49 Correct

Domain: Design for New Solutions

Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table? Assume that no security
keys are allowed to be stored on the EC2 instance.
Choose 2 options from the below:

A. Create an IAM Role that allows write access to the DynamoDB table.

B. Encode the IAM User credentials into the application.

C. Create an IAM User that allows write access to the DynamoDB table.

D. Add an IAM User to a running EC2 instance.

E. Launch an EC2 instance with the IAM Role included in the launch configuration.

Explanation:

Answer – A and E.

To enable one AWS service to access another, the most important requirement is to create an appropriate IAM Role and attach that role to the service that needs access.

Option A is CORRECT because it creates the appropriate IAM Role for accessing the DynamoDB table.

Option B is INCORRECT because this is not a best practice and we need to use IAM Role.

Options C and D are incorrect because IAM Role is preferred and more secure way than IAM User.

Option E is CORRECT because it launches the EC2 instance after attaching the required role.

See the steps below:

1. Create the IAM Role with appropriate permissions

2. Launch an EC2 instance with this role

3. Alternatively, attach the role to an already running EC2 instance
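As a hedged boto3 sketch of these steps (the role name, policy, table ARN, AMI id, and profile name are hypothetical), options A and E could be implemented roughly like this:

```python
# Hypothetical sketch: create a role EC2 can assume, allow DynamoDB writes on
# one table, wrap it in an instance profile, and launch the instance with it.
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2", region_name="us-east-1")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="AppDynamoWriteRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

write_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:PutItem", "dynamodb:UpdateItem", "dynamodb:BatchWriteItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/app-table",
    }],
}
iam.put_role_policy(RoleName="AppDynamoWriteRole",
                    PolicyName="dynamodb-write",
                    PolicyDocument=json.dumps(write_policy))

iam.create_instance_profile(InstanceProfileName="AppDynamoWriteProfile")
iam.add_role_to_instance_profile(InstanceProfileName="AppDynamoWriteProfile",
                                 RoleName="AppDynamoWriteRole")

# Launch the instance with the role attached - no keys stored on the box.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "AppDynamoWriteProfile"},
)
```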

Reference Link:

http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html

https://aws.amazon.com/about-aws/whats-new/2017/02/new-attach-an-iam-role-to-your-existing-amazon-ec2-instance/


Question 51 Correct

Domain: Design for New Solutions

A software engineer has chosen API Gateway and Lambda non-proxy integrations to implement an application. The application is a data analysis tool that returns some statistical results when the HTTP endpoint is called. The Lambda function needs to communicate with some back-end data services such as Keen.io. However, errors can happen, such as wrong data being requested or communication failures. The Lambda function is written in Java and may return two exceptions, BadRequestException and InternalErrorException. What should the software engineer do to map these two exceptions in API Gateway to proper HTTP return codes? For example, BadRequestException and InternalErrorException are mapped to HTTP return codes 400 and 500 respectively. Select 2.

A. Add the corresponding error codes (400 and 500) on the Integration Response in API Gateway.

B. Add the corresponding error codes (400 and 500) on the Method Response in API Gateway.

C. Put the mapping logic into the Lambda function itself so that when an exception happens, error codes are returned at the same time in a JSON body.

D. Add Integration Responses where regular expression patterns such as BadRequest or InternalError are set, and associate them with HTTP status codes.

E. Add Method Responses where regular expression patterns such as BadRequest or InternalError are set, and associate them with HTTP status codes 400 and 500.

Explanation:

Correct Answer – B, D

When an API Gateway API is set up, there are four parts: the Method Request/Method Response are the API's interface with the frontend (a client), whereas the Integration Request and Integration Response are the API's interface with the backend. In this case, the backend is a Lambda function.

For mapping exceptions that come from Lambda, the Integration Response is the correct place to configure. However, the corresponding error code (400) on the Method Response should be created first; otherwise, API Gateway throws an invalid configuration error response at runtime.
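As a rough boto3 sketch of that configuration (the API id, resource id, and HTTP method are hypothetical placeholders):

```python
# Hypothetical sketch: declare the 400/500 method responses first (Option B),
# then add integration responses whose selection patterns match the Lambda
# error messages (Option D).
import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")

REST_API_ID = "abc123"   # hypothetical API id
RESOURCE_ID = "def456"   # hypothetical resource id
METHOD = "GET"

for status in ("400", "500"):
    apigateway.put_method_response(
        restApiId=REST_API_ID,
        resourceId=RESOURCE_ID,
        httpMethod=METHOD,
        statusCode=status,
    )

apigateway.put_integration_response(
    restApiId=REST_API_ID, resourceId=RESOURCE_ID, httpMethod=METHOD,
    statusCode="400", selectionPattern="BadRequest.*",
)
apigateway.put_integration_response(
    restApiId=REST_API_ID, resourceId=RESOURCE_ID, httpMethod=METHOD,
    statusCode="500", selectionPattern="InternalError.*",
)
```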

Option A is incorrect: Because the HTTP error codes must first be defined in the Method Response, not the Integration Response.

Option B is CORRECT: Because the HTTP error codes are defined first in the Method Response (the same reason Option A is incorrect).

Option C is incorrect: Because the Integration Response in API Gateway should be used. Refer to https://docs.aws.amazon.com/apigateway/latest/developerguide/handle-errors-in-lambda-integration.html on how to handle Lambda errors in API Gateway.

Option D is CORRECT: Because BadRequest or InternalError should be mapped to 400 and 500 in the Integration Response settings.

Option E is incorrect: Because the Method Response is the interface with the frontend. It does not deal with how to map the response from the Lambda/backend.


Question 53 Correct

Domain: Design for New Solutions

John is a software contractor and is working on a web application. Since the budget is limited and the schedule is tight, he decides to implement it using API Gateway and Lambda so that he does not need to worry about server management, scalability, etc. The customer has raised concerns that the APIs should be kept secure and there should be mechanisms to control access to the API endpoints. Which of the below methods can be used to help secure the API?

A. Attach a resource policy to the API Gateway API, which controls access to the API Gateway resources. Access can be controlled by IAM condition elements, including conditions on AWS account, source VPC, etc.

B. Use IAM permissions to control access to the API Gateway component. For example, in order to call a deployed API, the API caller must be granted permission to perform the required IAM actions supported by the API execution component of API Gateway.

C. Use a Lambda function as the authorizer. When a client calls the API, API Gateway either supplies the authorization token that is extracted from a specified request header for the token-based authorizer, or it passes in the incoming request parameters as the input to the request-parameters-based authorizer Lambda function.

D. Use an Amazon Cognito user pool to control who can access the API in Amazon API Gateway. You need to use the Amazon Cognito console, CLI/SDK, or API to create a user pool. Then, in API Gateway, create an API Gateway authorizer with the chosen user pool.

E. All the above options are correct.

Explanation:

Correct Answer – E

There are multiple mechanisms that can be used to control access to an API in API Gateway, and several methods can be used together to implement a very granular and secure application. https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html is an introduction to these methods.

The below mechanisms can be chosen:

Resource policies let you create resource-based policies to allow or deny access to your APIs and methods from specified source IP addresses or VPC endpoints. They can be configured in the API Gateway console.

Standard AWS IAM roles and policies offer flexible and robust access controls that can be applied to an entire API or to individual methods, for example through an IAM policy that allows calling the deployed, Lambda-backed API.

Lambda authorizers are Lambda functions that control access to REST API methods using bearer-token authentication, as well as information described by headers, paths, query strings, stage variables, or context-variable request parameters. They can be created in the API Gateway console.

Amazon Cognito user pools let you create customizable authentication and authorization solutions for your REST APIs.
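As a rough illustration of Option D only (the pool name and API id are hypothetical), a Cognito user pool authorizer could be attached to an existing REST API with boto3 like this:

```python
# Hypothetical sketch: create a Cognito user pool and register it as a
# COGNITO_USER_POOLS authorizer on an existing API Gateway REST API.
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")
apigateway = boto3.client("apigateway", region_name="us-east-1")

user_pool_arn = cognito_idp.create_user_pool(
    PoolName="web-app-users"
)["UserPool"]["Arn"]

apigateway.create_authorizer(
    restApiId="abc123",                    # hypothetical API id
    name="cognito-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[user_pool_arn],
    identitySource="method.request.header.Authorization",
)
```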

As a result, option E is correct.


Question 59 Correct

Domain: Design for New Solutions

Server-side encryption is about data encryption at rest. That is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. A few different options are available, depending on how you choose to manage the encryption keys. One of the options is called 'Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3)'. Which of the following best describes how this encryption method works?
Choose the correct option from the below:

A. There are separate permissions for the use of an envelope key (a key that protects your data's encryption key) that provides added protection against unauthorized access of your objects in S3 and also provides you with an audit trail of when your key was used and by whom.

B. Each object is encrypted with a unique key employing strong encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates.

C. You manage the encryption keys and Amazon S3 manages the encryption, as it writes to disk, and decryption when you access your objects.

D. A randomly generated data encryption key is returned from Amazon S3, which is used by the client to encrypt the object data.

Explanation:

Answer – B

Server-side encryption with Amazon S3-managed encryption keys (SSE-S3) employs strong multi-factor encryption. Amazon S3 encrypts each object with a
unique key. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. Amazon S3 server-side encryption uses one of the
strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.
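As a simple, hedged illustration (the bucket and key are hypothetical), requesting SSE-S3 on upload looks roughly like this with boto3; S3 then handles the per-object keys and the rotating master key transparently:

```python
# Hypothetical sketch: upload an object with SSE-S3 and verify the setting.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/2021/expense.csv",
    Body=b"...object data...",
    ServerSideEncryption="AES256",   # SSE-S3
)

head = s3.head_object(Bucket="my-example-bucket", Key="reports/2021/expense.csv")
print(head["ServerSideEncryption"])  # prints "AES256"
```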

Option A is incorrect because there are no separate permissions to the key that protects the data key.

Option B is CORRECT because as mentioned above, each object is encrypted with a strong unique key and that key itself is encrypted by a master key.

Option C is incorrect because the keys are managed by AWS.

Option D is incorrect because there is no randomly generated key and the client does not do the encryption.

For more information on S3 encryption, please visit the link

https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html


Question 71 Correct

Domain: Design for New Solutions

You have two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. One
instance is running a database and the other instance an application that will interface with the database.
You want to confirm that they can talk to each other for your application to work properly. Which two things do we need to confirm in the VPC settings so that
these EC2 instances can communicate inside the VPC?
Choose 2 correct options from the below:

A. Security groups are set to allow the application host to talk to the database on the right port/protocol.

B. Both instances are the same instance class and using the same key-pair.

C. The default route is set to a NAT instance or Internet Gateway (IGW) for them to communicate.

D. A network ACL that allows communication between the two subnets.

Explanation:

Answer - A and D

In order to have the instances communicate with each other, you need to properly configure both the Security Groups and the Network Access Control Lists (NACLs). For the exam, remember that a Security Group operates at the instance level, whereas a NACL operates at the subnet level.

Option A is CORRECT because the security groups must be defined to allow the web server to communicate with the database server on the right port and protocol.

Option B is incorrect because it is not necessary to have the two instances of the same type or the same key-pair.

Option C is incorrect because configuring NAT instance or NAT gateway will not enable the two servers to communicate with each other. NAT instance/NAT
gateway is used to enable the communication between instances in the private subnets and the Internet.

Option D is CORRECT because the two servers are in two separate subnets. In order for them to communicate with each other, you need to configure the network ACLs of both subnets to allow the traffic between them.
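As a rough boto3 sketch of options A and D (the group ids, ACL id, subnet CIDR, and the MySQL port 3306 are assumptions for illustration):

```python
# Hypothetical sketch: allow the app security group into the DB security
# group, and open the DB subnet's network ACL to the app subnet.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

APP_SG = "sg-0aaa1111bbbb2222c"   # application instance's security group
DB_SG = "sg-0ddd3333eeee4444f"    # database instance's security group

# Option A: security group rule, referenced by group id (stateful).
ec2.authorize_security_group_ingress(
    GroupId=DB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": APP_SG}],
    }],
)

# Option D: inbound NACL entry on the database subnet (NACLs are stateless,
# so a matching outbound rule for ephemeral ports is also needed).
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=110,
    Protocol="6",                  # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock="10.0.1.0/24",       # application subnet CIDR
    PortRange={"From": 3306, "To": 3306},
)
```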

For more information on VPC and Subnets, please visit the below URL:

http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
Question 72 Correct

Domain: Design for New Solutions

You are managing a legacy application inside a VPC with hard-coded IP addresses in its configuration. Which mechanisms will allow the application to fail over to new instances without much reconfiguration?
Choose 2 options from the below:

A. Use the traffic manager to route the traffic to the failover instances.

B. Create a secondary ENI that can be moved to the failover instance.

C. Use Route 53 health checks to reroute the traffic to the failover instance.

D. Assign a secondary private IP address to the primary ENI of the failover instance.

Explanation:

Answer - B and D

Option A is incorrect because rerouting to a failover instance cannot be done through a Traffic Manager.

Option B is CORRECT because the attributes of a network interface follow it as it's attached or detached from an instance and reattached to another instance.
When you move a network interface from one instance to another, network traffic is redirected to the new instance.

Option C is incorrect because Route 53 cannot reroute the traffic to the failover instance with the same IP address.

Option D is CORRECT because you can have a secondary IP address that can be configured on the primary ENI of the failover instance.
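As a rough boto3 sketch of options B and D (all ids and addresses are hypothetical placeholders):

```python
# Hypothetical sketch: move a secondary ENI to the standby instance, or
# reassign the hard-coded secondary private IP to the standby's primary ENI.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Option B: detach the secondary ENI from the failed instance and attach it
# to the standby; traffic to its addresses follows the interface.
ec2.detach_network_interface(AttachmentId="eni-attach-0123456789abcdef0", Force=True)
ec2.attach_network_interface(
    NetworkInterfaceId="eni-0123456789abcdef0",
    InstanceId="i-0fedcba9876543210",     # standby instance
    DeviceIndex=1,
)

# Option D: alternatively, move the secondary private IP address to the
# primary ENI of the failover instance.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-0aaaabbbbccccdddd",  # standby's primary ENI
    PrivateIpAddresses=["10.0.0.50"],            # the hard-coded application IP
    AllowReassignment=True,
)
```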

Best Practices for Configuring Network Interfaces

You can attach a network interface to an instance when it's running (hot attach), when it's stopped (warm attach), or when the instance is being launched
(cold attach).

You can detach secondary (ethN) network interfaces when the instance is running or stopped. However, you can't detach the primary (eth0) interface.

You can attach a network interface in one subnet to an instance in another subnet in the same VPC; however, both the network interface and the instance
must reside in the same Availability Zone.

When launching an instance from the CLI or API, you can specify the network interfaces to attach to the instance for both the primary (eth0) and additional
network interfaces.

Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and
route tables on the operating system of the instance.

A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and
modify the route table accordingly. Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure
themselves.

Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the
network bandwidth to or from the dual-homed instance.

If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. If
possible, use a secondary private IPv4 address on the primary network interface instead. For more information, see Assigning a Secondary Private IPv4
Address.

For more information on Network Interfaces, please visit the below URL:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html


Question 74 Correct

Domain: Design for New Solutions

Your team is excited about the use of AWS because now they have access to "programmable infrastructure". You have been asked to manage your AWS infrastructure in a manner similar to the way you might manage application code. You want to be able to deploy exact copies of different versions of your infrastructure, stage changes into different environments, revert to previous versions, and identify what versions are running at any particular time (development, test, QA, and production). Which approach addresses this requirement?

A. Use cost allocation reports and AWS OpsWorks to deploy and manage your infrastructure.

B. Use AWS CloudWatch metrics and alerts along with resource tagging to deploy and manage your infrastructure.

C. Use AWS Elastic Beanstalk and a version control system like Git to deploy and manage your infrastructure.

D. Use AWS CloudFormation and a version control system like Git to deploy and manage your infrastructure.

Explanation:

Answer – D

You can use AWS CloudFormation's sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application. You don't need to figure out the order for provisioning AWS services or the subtleties of making those dependencies work; CloudFormation takes care of this for you. After the AWS resources are deployed, you can modify and update them in a controlled and predictable way, in effect applying version control to your AWS infrastructure the same way you do with your software. You can also visualize your templates as diagrams and edit them using a drag-and-drop interface with the AWS CloudFormation Designer.

Option A is incorrect because Cost Allocation Reports is not helpful for the purpose of the question.

Option B is incorrect because CloudWatch is used for monitoring the metrics pertaining to different AWS resources.

Option C is incorrect because it does not have the concept of programmable Infrastructure.

Option D is CORRECT because AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related
AWS resources, provisioning and updating them in an orderly and predictable fashion.

For more information on CloudFormation, please visit the link:

https://aws.amazon.com/cloudformation/


Question 75 Correct

Domain: Design for New Solutions

What would happen to an RDS (Relational Database Service) multi-Availability Zone deployment if the primary DB instance fails?

A. The IP address of the primary DB instance is switched to the standby DB instance.

B. The primary RDS (Relational Database Service) DB instance reboots and remains as primary.

C. A new DB instance is created in the standby Availability Zone.

D. The canonical name record (CNAME) is changed from primary to standby.

Explanation:

Answer – D

Option A is incorrect because the IP addresses of the primary and standby instances remain the same and are not changed.

Option B is incorrect because the CNAME record of the primary DB instance changes to the standby instance.

Option C is incorrect because no new instance is created in the standby AZ.

Option D is CORRECT because the CNAME of the primary DB instance changes to the standby instance, so there is no impact on the application settings or any reference to the primary instance.
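As a small, hedged illustration of why the failover is transparent (the instance identifier is hypothetical), applications should always connect to the endpoint DNS name rather than an IP address:

```python
# Hypothetical sketch: look up the RDS endpoint DNS name; after a Multi-AZ
# failover the same name resolves to the new primary.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

instance = rds.describe_db_instances(
    DBInstanceIdentifier="prod-db"
)["DBInstances"][0]

endpoint = instance["Endpoint"]["Address"]
port = instance["Endpoint"]["Port"]
print(f"Connect to {endpoint}:{port}")
```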

More information on Amazon RDS Multi-AZ deployment:

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production
database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the
data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be
highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an
automatic failover to the standby, so that you can resume database operations as soon as the failover is complete.

And as per the AWS documentation, the CNAME is changed to the standby DB when the primary one fails.

https://aws.amazon.com/rds/faqs/

For more information on Multi-AZ RDS, please visit the link:

https://aws.amazon.com/rds/details/multi-az/


Question 77 Correct

Domain: Design for New Solutions

An organization is planning to use AWS for their production rollout. The organization wants to implement automation for deployment such that it will automatically create a LAMP stack, download the latest PHP installable from S3, and set up the ELB. Which of the below-mentioned AWS services meets the requirement for making an easy and orderly deployment of the software?

A. AWS Elastic Beanstalk

B. AWS CloudFront

C. AWS CloudFormation

D. AWS DevOps

Explanation:

Answer – A

Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services.

We can simply upload code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and Auto Scaling to application health monitoring. Meanwhile, we retain full control over the AWS resources used in the application and can access the underlying resources at any time.

Hence, A is the CORRECT answer.
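As a rough, hedged sketch (the application name, environment name, and solution stack string below are assumptions; valid stack names can be listed with list_available_solution_stacks), a PHP environment could be created with boto3 like this:

```python
# Hypothetical sketch: create an Elastic Beanstalk application and a PHP
# environment; Beanstalk provisions capacity, the load balancer, Auto
# Scaling, and health monitoring automatically.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.create_application(ApplicationName="lamp-app")

eb.create_environment(
    ApplicationName="lamp-app",
    EnvironmentName="lamp-app-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running PHP 8.0",  # hypothetical version string
    OptionSettings=[{
        "Namespace": "aws:autoscaling:launchconfiguration",
        "OptionName": "InstanceType",
        "Value": "t3.micro",
    }],
)
```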

For more information on launching a LAMP stack with Elastic Beanstalk ( click on "Getting started with the Implementation Guide" at the bottom of the page ):

https://aws.amazon.com/getting-started/projects/launch-lamp-web-app/faq/

This could also be done with AWS CloudFormation, though in a harder, less native way:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/deploying.applications.html

