
Untitled Exam

Testlet 1

TESTLET OVERVIEW

Title: Case Study

The following testlet will present a Case Study followed by [count] multiple choice question(s), [count] create a
tree question(s), [count] build list and reorder question(s) and [count] drop and connect question(s).

You will have [count] minutes to complete the testlet.

For help on how to answer the questions, click the Instructions button on the question screen.

Overview

Litware, Inc. is a United States-based grocery retailer.


Litware has a main office and a primary datacenter in Seattle.
The company has 50 retail stores across the United States and an emerging online presence.
Each store connects directly to the internet.

Existing environment

Cloud and Data Service Environments

Litware has an Azure subscription that contains the resources shown in the following table.

Each container in productdb is configured for manual throughput.

The con-product container stores the company's product catalog data. Each document in con-product includes
a con-productvendor value.
Most queries targeting the data in con-product are in the following format.

SELECT * FROM con-product p WHERE p.con-productVendor = 'name'

Most queries targeting the data in the con-productVendor container are in the following format.

SELECT * FROM con-productVendor pv


ORDER BY pv.creditRating, pv.yearFounded

Current Problems

Litware identifies the following issues:

- Updates to product categories in the con-productVendor container do not propagate automatically to


documents in the con-product container.
- Application updates in con-product frequently cause HTTP status code 429 "Too many requests". You
discover that the 429 status code relates to excessive request unit (RU) consumption during the updates.

Requirements

Planned Changes

Litware plans to implement a new Azure Cosmos DB for NoSQL account named account2 that will contain a
database named iotdb. The iotdb database will contain two containers named con-iot1 and con-iot2.

Litware plans to make the following changes:


- Store the telemetry data in account2.
- Configure account1 to support multiple read-write regions.
- Implement referential integrity for the con-product container.
- Use Azure Functions to send notifications about product updates to different recipients.
- Develop an app named App1 that will run from all locations and query the data in account1.
- Develop an app named App2 that will run from the retail stores and query the data in account2. App2 must
be limited to a single HTTPS endpoint when accessing account2.

Business Requirements

Litware identifies the following business requirements:


- Whenever there are multiple solutions for a requirement, select the solution that provides the best
performance, as long as there are no additional costs associated.
- Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.
- Minimize the number of firewall changes in the retail stores.

IoT Telemetry Requirements

Litware identifies the following IoT telemetry requirements:


- Write the telemetry data from streamanalytics1 to the con-iot1 container as the data arrives. Use
streamanalytics1 to create hopping window aggregated telemetry readings once per hour and write the data to
con-iot2.
- Automatically delete items in con-iot1 after one hour unless a per-item time to live is set.
- Optimize the partitioning of con-iot1 for queries that support historical data of a specific device.
- Aggregate the telemetry data by device by using streamanalytics1 and write the aggregate data to con-iot2.
- Ensure that the items in con-iot2 persist regardless of the per-item time to live setting.

Product Catalog Requirements

Litware identifies the following requirements for the product catalog:


- Implement a custom conflict resolution policy for the product catalog data.
- Minimize the frequency of errors during updates of the con-product container.
- Once multi-region writes are configured, maximize the performance of App1 queries against the data in
account1.
- Trigger the execution of two Azure functions following every update to any document in the con-product
container.

QUESTION 1
You are troubleshooting the current issues caused by the application updates.

Which action can address the application updates issue without affecting the functionality of the application?

A. Enable time to live for the con-product container.


B. Set the default consistency level of account1 to strong.
C. Set the default consistency level of account1 to bounded staleness.
D. Add a custom indexing policy to the con-product container.

Correct Answer: D
Explanation

Explanation/Reference:

- Add a custom indexing policy to the con-product container.

Explanation:

Scenario:
- Application updates in con-product frequently cause HTTP status code 429 "Too many requests".
- You discover that the 429 status code relates to excessive request unit (RU) consumption during the
updates.

A "Request rate too large" exception, also known as error code 429, indicates that your requests against Azure
Cosmos DB are being rate limited.

When you use provisioned throughput, you set the throughput measured in request units per second (RU/s)
required for your workload. Database operations against the service such as reads, writes, and queries
consume some number of request units (RUs). Learn more about request units.

In a given second, if the operations consume more than the provisioned request units, Azure Cosmos DB will
return a 429 exception. Each second, the number of request units available to use is reset.

Before taking an action to change the RU/s, it's important to understand the root cause of rate limiting and
address the underlying issue.
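To illustrate, a minimal sketch of such a policy using the Python azure-cosmos SDK (the account URL, key, partition key path, and the choice of excluded paths are assumptions, not given in the scenario):

from azure.cosmos import CosmosClient, PartitionKey

# Index only the path used by the common filter; skip everything else on writes,
# which lowers the RU cost of each update.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/con-productVendor/?"}],
    "excludedPaths": [{"path": "/*"}],
}

client = CosmosClient("https://account1.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("productdb").create_container_if_not_exists(
    id="con-product",
    partition_key=PartitionKey(path="/con-productVendor"),  # assumed partition key
    indexing_policy=indexing_policy,
)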

Incorrect answers:
" Enable time to live for the con-product container."
- Would enable time to live for the con-product container, which would delete expired documents.
- This would not address the issue of updates not propagating automatically.

"Set the default consistency level of account1 to strong."


- Would set the default consistency level of account1 to strong.
- This would require all replicas of a document to be in sync before the document is considered committed.
This would increase the latency of updates, which could make the issue worse.

"Set the default consistency level of account1 to bounded staleness."


- Would set the default consistency level of account1 to bounded staleness.
- This would allow for a small amount of data inconsistency, which could help to reduce the number of RUs
consumed by the updates. However, it would not address the issue of updates not propagating automatically.

Note: Some commenters argue that the consistency level is the real issue here. Once multi-region writes are
configured, the default Session consistency effectively behaves like Eventual across regions, which can lead to
out-of-order reads and item-update anomalies.

Strong consistency would certainly solve that, but bounded staleness is less expensive and still preserves global
order (with a delay across regions), so the choice would be between those two.

Bounded staleness
- Is frequently chosen by globally distributed applications that expect low write latencies but require total
global order guarantee.
- Is great for applications featuring group collaboration and sharing, stock ticker, publish-subscribe/queueing
etc.

Reference:

Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/troubleshoot-request-rate-too-large

https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

[GPT]

QUESTION 2
You configure multi-region writes for account1.

You need to ensure that App1 supports the new configuration for account1. The solution must meet the
business requirements and the product catalog requirements.

What should you do?

A. Set the default consistency level of account1 to bounded staleness.


B. Create a private endpoint connection.
C. Modify the connection policy of App1.
D. Increase the number of request units per second (RU/s) allocated to the con-product and con-
productVendor containers.

Correct Answer: D
Explanation
Explanation/Reference:

- Modify the connection policy of App1.

Community answer: Increase the number of request units per second (RU/s) allocated to the con-product and
con-productVendor containers.

Explanation:

The connection policy of App1 needs to be modified to allow for multi-region writes. This can be done by setting
the UseMultipleWriteLocations property to true and specifying the region where the application is deployed to
the ApplicationRegion property.
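As a sketch of the same idea in the Python azure-cosmos SDK (the keyword arguments below are the Python counterparts of those .NET properties; the region name is an assumption):

from azure.cosmos import CosmosClient

# Enable writes against any write region and prefer the region closest to the app.
client = CosmosClient(
    "https://account1.documents.azure.com:443/",
    credential="<key>",
    multiple_write_locations=True,    # counterpart of UseMultipleWriteLocations
    preferred_locations=["West US"],  # counterpart of ApplicationRegion (assumed)
)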

Incorrect answers:

"Set the default consistency level of account1 to bounded staleness."


- Would set the default consistency level of account1 to bounded staleness.
- This would allow for a small amount of data inconsistency, which could help to reduce the number of errors
during updates of the con-product container. However, it would not meet the business requirement of
maximizing the performance of App1 queries against the data in account1.

"Create a private endpoint connection."


- Would create a private endpoint connection.
- This would allow App1 to connect to account1 without going through the public internet. This would improve
security, but it would not meet the business requirement of minimizing the number of firewall changes in the
retail stores.

"Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor
containers."
- Would increase the number of request units per second (RU/s) allocated to the con-product and con-
productVendor containers.
- This would improve the performance of updates to the con-product container, but it would not meet the
requirement of minimizing the frequency of errors during updates.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

Request Units in Azure Cosmos DB


https://learn.microsoft.com/en-us/azure/cosmos-db/request-units

[GPT]

QUESTION 3
You need to implement a solution for the planned changes and meet the product catalog requirements.

What should you do to implement the conflict resolution policy?

A. Set the default consistency level for account1 to eventual


B. Remove frequently changed fields from the index policy of the con-product container
C. Create a new container and migrate the product catalog data to the new container
D. Disable indexing on all fields in the index policy of the con-product container

Correct Answer: C
Explanation

Explanation/Reference:

- Create a new container and migrate the product catalog data to the new container

Scenario:
- Implement a custom conflict resolution policy for the product catalog data.

Update conflicts can be of the following three types:


- Insert conflicts: These conflicts can occur when an application simultaneously inserts two or more items
with the same unique index in two or more regions. For example, this conflict might occur with an ID property.
- Replace conflicts: These conflicts can occur when an application updates the same item simultaneously in
two or more regions.
- Delete conflicts: These conflicts can occur when an application simultaneously deletes an item in one
region and updates it in another region.

Azure Cosmos DB offers a flexible policy-driven mechanism to resolve write conflicts.


You can select from two conflict resolution policies on an Azure Cosmos DB container:

- Last Write Wins (LWW)


- This resolution policy, by default, uses a system-defined timestamp property.
- It's based on the time-synchronization clock protocol.
- If you use the API for NoSQL, you can specify any other custom numerical property (e.g., your own
notion of a timestamp) to be used for conflict resolution.
- A custom numerical property is also referred to as the conflict resolution path.
- Last Write Wins is the default conflict resolution policy and uses timestamp _ts for the following APIs:
SQL, MongoDB, Cassandra, Gremlin and Table. Custom numerical property is available only for API for
NoSQL

- Custom:
- This resolution policy is designed for application-defined semantics for reconciliation of conflicts.
- When you set this policy on your Azure Cosmos DB container, you also need to register a merge
stored procedure.
- This procedure is automatically invoked when conflicts are detected under a database transaction on
the server.
- The system provides exactly once guarantee for the execution of a merge procedure as part of the
commitment protocol.
- Custom conflict resolution policy is available only for API for NoSQL accounts and can be set only at
creation time. It is not possible to set a custom resolution policy on an existing container.
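Since the policy is fixed at creation time, migrating to a new container is required; a sketch of creating it with a custom policy (Python azure-cosmos SDK; the container name, partition key, and stored procedure link are assumptions):

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://account1.documents.azure.com:443/", credential="<key>")
database = client.get_database_client("productdb")

# Custom mode hands detected conflicts to a registered merge stored procedure.
database.create_container(
    id="con-product-v2",  # hypothetical replacement container
    partition_key=PartitionKey(path="/con-productVendor"),
    conflict_resolution_policy={
        "mode": "Custom",
        "conflictResolutionProcedure": "dbs/productdb/colls/con-product-v2/sprocs/resolver",
    },
)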

Resources:

Conflict types and resolution policies when using multiple write regions
- https://learn.microsoft.com/en-us/azure/cosmos-db/conflict-resolution-policies
[CONFIRMED]

QUESTION 4
You plan to implement con-iot1 and con-iot2.

You need to configure the default Time to Live setting for each container. The solution must meet the IoT
telemetry requirements.

What should you configure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- con-iot1: On (3,600 seconds)


- con-iot2: Off

con-iot1: On (3,600 seconds)


- Scenario: "Automatically delete items in con-iot1 after one hour unless a per-item time to live is set."

con-iot2: Off
- Scenario: "Ensure that the items in con-iot2 persist regardless of the per-item time to live setting."

[CONFIRMED]

QUESTION 5
You need to select the capacity mode and scale configuration for account2 to support the planned changes and
meet the business requirements.

What should you select? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- Capacity mode: Provisioned throughput


- Scale configuration: Manual throughput on con-iot1 and con-iot2

Explanation:

Capacity mode: Provisioned throughput


- Scenario: Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.
- Provisioned throughput is best suited for workloads with sustained traffic requiring predictable performance.
- Billing model: Billing is done on a per-hour basis for the RU/s provisioned, regardless of how many RUs
were consumed.
- Note: Azure Cosmos DB is available in two different capacity modes: provisioned throughput and
serverless. You can perform the exact same database operations in both modes, but the way you get billed for
these operations is radically different

Scale configuration: Manual throughput on con-iot1 and con-iot2


- Manual throughput suits workloads with steady or predictable traffic (5,000 telemetry events per second is predictable).
- https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-choose-offer
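A sketch of provisioning manual throughput when creating the container (Python azure-cosmos SDK; the RU/s figure and partition key are assumptions):

from azure.cosmos import CosmosClient, PartitionKey

iotdb = CosmosClient("https://account2.documents.azure.com:443/",
                     credential="<key>").get_database_client("iotdb")

# offer_throughput provisions a fixed, manual RU/s budget, so the hourly bill
# is predictable regardless of how many RUs are actually consumed.
iotdb.create_container(id="con-iot1",
                       partition_key=PartitionKey(path="/deviceId"),
                       offer_throughput=5000)  # assumed RU/s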

[CONFIRMED]
QUESTION 6
You need to identify which connectivity mode to use when implementing App2. The solution must support the
planned changes and meet the business requirements.

Which connectivity mode should you identify?

A. Direct mode over HTTPS


B. Gateway mode (using HTTPS)
C. Direct mode over TCP

Correct Answer: B
Explanation

Explanation/Reference:

- Gateway mode (using HTTPS)

Explanation:

Scenario:
- App2 must be limited to a single HTTPS endpoint when accessing account2.

Gateway mode meets the requirements.

Gateway mode
- Gateway mode is supported on all SDK platforms.
- If your application runs within a corporate network with strict firewall restrictions, gateway mode is the best
choice because it uses the standard HTTPS port and a single DNS endpoint.
- The performance tradeoff, however, is that gateway mode involves an additional network hop every time
data is read from or written to Azure Cosmos DB.
- We also recommend gateway connection mode when you run applications in environments that have a
limited number of socket connections.

Direct mode
- Direct mode supports connectivity through TCP protocol, using TLS for initial authentication and encrypting
traffic, and offers better performance because there are fewer network hops.
- The application connects directly to the backend replicas.
- Direct mode is currently only supported on .NET and Java SDK platforms.

Reference:
https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/sdk-connection-modes

[CONFIRMED]

QUESTION 7
You need to provide a solution for the Azure Functions notifications following updates to con-product.

The solution must meet the business requirements and the product catalog requirements.

Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

A. Configure the trigger for each function to use a different leaseCollectionPrefix.


B. Configure the trigger for each function to use the same leaseCollectionName.
C. Configure the trigger for each function to use a different leaseCollectionName.
D. Configure the trigger for each function to use the same leaseCollectionPrefix.

Correct Answer: AB
Explanation

Explanation/Reference:

- Configure the trigger for each function to use a different leaseCollectionPrefix.


- Configure the trigger for each function to use the same leaseCollectionName.

Explanation:

Scenario:
- Use Azure Functions to send notifications about product updates to different recipients.
- Trigger the execution of two Azure functions following every update to any document in the con-product
container.

LeaseCollectionPrefix
- When set, the value is added as a prefix to the leases created in the Lease collection for this function.
- Using a prefix allows two separate Azure Functions to share the same Lease collection by using different
prefixes.

LeaseCollectionName
- The name of the collection used to store leases.
- When not set, the value leases is used.
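A sketch of the two trigger configurations, written as Python dicts that mirror the function.json binding settings (the database, container, and prefix values are assumptions):

# Both functions share one lease container ("leases") but use distinct prefixes,
# so each keeps its own checkpoints over the same monitored container.
fn1_binding = {
    "type": "cosmosDBTrigger",
    "databaseName": "productdb",
    "collectionName": "con-product",
    "leaseCollectionName": "leases",
    "leaseCollectionPrefix": "fn1",
}
fn2_binding = {**fn1_binding, "leaseCollectionPrefix": "fn2"}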

Reference:
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger

[NOT-CONFIRMED]

QUESTION 8
You need to recommend indexes for con-product and con-productVendor. The solution must meet the product
catalog requirements and the business requirements.

Which type of index should you recommend for each container? To answer, select the appropriate options in
the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- con-product: Range on con-productVendor.


- con-productVendor: Composite in creditRating and yearFounded

Azure Cosmos DB currently supports three types of indexes.


- Range index is based on an ordered tree-like structure.
- Spatial indices enable efficient queries on geospatial objects such as - points, lines, polygons, and
multipolygon. These queries use ST_DISTANCE, ST_WITHIN, ST_INTERSECTS keywords.
- Composite indexes increase the efficiency when you're performing operations on multiple fields.

con-product: Range on con-productVendor.


- Range index is based on an ordered tree-like structure.
- The range index type is used for:
Equality queries:
SELECT * FROM container c WHERE c.property = 'value'
SELECT * FROM c WHERE c.property IN ("value1", "value2", "value3")

- Note: The con-product container stores the company's product catalog data. Each document in con-product
includes a con-productvendor value. Most queries targeting the data in con-product are in the following format.
SELECT * FROM con-product p WHERE p.con-productVendor = 'name'

con-productVendor: Composite in creditRating and yearFounded

- Composite indexes increase the efficiency when you are performing operations on multiple fields.
- The composite index type is used for:
ORDER BY queries on multiple properties:
SELECT * FROM container c ORDER BY c.property1, c.property2

- Note: Most queries targeting the data in the con-productVendor container are in the following format
SELECT * FROM con-productVendor pv
ORDER BY pv.creditRating, pv.yearFounded
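A sketch of an indexing policy that adds the composite index (property order and sort direction follow the ORDER BY above):

# Composite index matching: ORDER BY pv.creditRating, pv.yearFounded
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [],
    "compositeIndexes": [
        [
            {"path": "/creditRating", "order": "ascending"},
            {"path": "/yearFounded", "order": "ascending"},
        ]
    ],
}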

Reference:

Overview of indexing in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/index-overview

[CONFIRMED]

QUESTION 9
You need to select the partition key for con-iot1. The solution must meet the IoT telemetry requirements.

What should you select?

A. the timestamp
B. the humidity
C. the temperature
D. the device ID

Correct Answer: D
Explanation

Explanation/Reference:

- the device ID

Explanation:

Scenario:
- The iotdb database will contain two containers named con-iot1 and con-iot2.
- Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.

The partition key is what will determine how data is routed in the various partitions by Cosmos DB and needs to
make sense in the context of your specific scenario.

The IoT Device ID is generally the "natural" partition key for IoT applications.

Reference:
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db

[NOT-CONFIRMED]
Question Set 1

QUESTION 1
You have an Azure Cosmos DB Core (SQL) API account named account1 that has the
disableKeyBasedMetadataWriteAccess property enabled.

You are developing an app named App1 that will be used by a user named DevUser1 to create containers in
account1. DevUser1 has a non-privileged user account in the Azure Active Directory (Azure AD) tenant.

You need to ensure that DevUser1 can use App1 to create containers in account1.

What should you do? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- Grant permissions to create containers by using: Role-based access control (RBAC)


- Create containers by using the: Azure Resource Manager API
Azure Cosmos DB containers
- An Azure Cosmos DB container is where data is stored.
- Unlike most relational databases which scale up with larger VM sizes, Azure Cosmos DB scales out. Data is
stored on one or more servers, called partitions.
- A container is a billable entity, where the cost is determined by the throughput and used storage.
- Containers can span one or more partitions or servers and can scale to handle practically unlimited volumes
of storage or throughput. For API for NoSQL, the resource is called a container.

Disable key based metadata write access


- When disableKeyBasedMetadataWriteAccess is set to true, the metadata operations issued by the SDK are
blocked.
- Alternatively, you can use Azure portal, Azure CLI, Azure PowerShell, or Azure Resource Manager
template deployments to perform these operations.

So, Grant permissions to create containers by using: Role-based access control (RBAC).

Resource tokens
- Resource tokens offer access to specific containers, documents, partition keys, attachments, triggers,
stored procedures, and UDFs.

Azure Resource Manager (ARM)


- Can be used to assist in deploying and managing Azure Cosmos DB accounts, containers, and databases.

Microsoft Graph API


- Is a RESTful web API that enables users to access MS Cloud service resources.

Azure Cosmos DB account access key


- A combination of 88 characters consisting of letters, digits, and special characters.
- Provide access to administrative resources for Azure Cosmos DB accounts.

Resources:

Databases, containers, and items in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/resource-model#azure-cosmos-db-containers

How to audit Azure Cosmos DB control plane operations


- https://learn.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs

Database.CreateContainerIfNotExistsAsync Method
- https://docs.microsoft.com/en-us/dotnet/api/
microsoft.azure.cosmos.database.createcontainerifnotexistsasync

Create and update documents with the Azure Cosmos DB for NoSQL SDK
- https://github.com/MicrosoftLearning/dp-420-cosmos-db-dev/blob/main/instructions/06-sdk-crud.md

Creating a Partitioned Container with .NET SDK


- https://azurecosmosdb.github.io/labs/dotnet/labs/01-creating_partitioned_collection.html
[GPT]

QUESTION 2
You are developing an application that will use an Azure Cosmos DB Core (SQL) API account as a data
source.

You need to create a report that displays the top five most ordered fruits as shown in the following table.

A collection that contains aggregated data already exists.

The following is a sample document:

{
"name" : "apple",
"type" : ["fruit","exotic"],
"orders" : 1000
}

Which two queries can you use to retrieve data for the report? Each correct answer presents a complete
solution.

NOTE: Each correct selection is worth one point.

A. SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE EXISTS (SELECT VALUE t FROM t IN i.types WHERE t.name = 'fruit')
ORDER BY i.orders, i.types
B. SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE EXISTS (SELECT VALUE t FROM t IN i.types WHERE t.name = 'fruit')
ORDER BY i.orders DESC
C. SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE EXISTS (SELECT VALUE t FROM t IN i.types WHERE t.name = 'fruit')
ORDER BY i.types DESC
D. SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE ARRAY_CONTAINS(i.types, {name: 'fruit'} )
ORDER BY i.orders DESC

Correct Answer: BD
Explanation

Explanation/Reference:

- SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE EXISTS (SELECT VALUE t FROM t IN i.types WHERE t.name = 'fruit')
ORDER BY i.orders DESC

- SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE ARRAY_CONTAINS(i.types, {name: 'fruit'} )
ORDER BY i.orders DESC

Trap answer: ORDER BY i.types DESC (it sorts by type instead of by order count)

Explanation

ARRAY_CONTAINS
- Returns a boolean indicating whether the array contains the specified value.
- You can check for a partial or full match of an object by using a boolean expression within the function.

Incorrect Answers:

SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE EXISTS (SELECT VALUE t FROM t IN i.types WHERE t.name = 'fruit')
ORDER BY i.orders, i.types

- Default sorting ordering is Ascending. Must use Descending order.

SELECT TOP 5 i.name, i.types, i.orders
FROM items i
WHERE EXISTS (SELECT VALUE t FROM t IN i.types WHERE t.name = 'fruit')
ORDER BY i.types DESC

- The order should be applied on Orders, not on Type.

ARRAY_CONTAINS (NoSQL query)


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/query/array-contains

[CONFIRMED]

QUESTION 3
You have a database in an Azure Cosmos DB Core (SQL) API account.

You plan to create a container that will store employee data for 5,000 small businesses. Each business will
have up to 25 employees. Each employee item will have an emailAddress value.

You need to ensure that the emailAddress value for each employee within the same company is unique.

To what should you set the partition key and the unique key? To answer, select the appropriate options in the
answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- Partition key: companyId


- Unique key: emailAddress

Explanation:

Logical partitions are formed based on the value of a partition key that is associated with each item in a
container.
All the items in a logical partition have the same partition key value.

Partition key: companyId

- After you create a container with a unique key policy, the creation of a new or an update of an existing item
resulting in a duplicate within a logical partition is prevented, as specified by the unique key constraint.
- The partition key combined with the unique key guarantees the uniqueness of an item within the scope of the
container.

- For example, consider an Azure Cosmos container with Email address as the unique key constraint and
CompanyID as the partition key. When you configure the user's email address with a unique key, each item has
a unique email address within a given CompanyID. Two items can't be created with duplicate email addresses
and with the same partition key value.

Unique key: emailAddress

- UniqueKey will need to be set to emailAddress, since the job of a unique key is to ensure uniqueness of a
value within a logical partition.
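A sketch of creating such a container (Python azure-cosmos SDK; the account, database, and container names are hypothetical):

from azure.cosmos import CosmosClient, PartitionKey

database = CosmosClient("https://<account>.documents.azure.com:443/",
                        credential="<key>").get_database_client("hr")

# emailAddress only has to be unique within a logical partition, i.e. per companyId.
database.create_container(
    id="employees",
    partition_key=PartitionKey(path="/companyId"),
    unique_key_policy={"uniqueKeys": [{"paths": ["/emailAddress"]}]},
)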

Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/unique-keys

QUESTION 4
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. The container1
container has 120 GB of data.

The following is a sample of a document in container1.

{
"customerId" : "5425",
"orderId" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
"orderDate" : "2019-05-03",
"orderDeatils" : [ ]
}

The orderId property is used as the partition key.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- SELECT * FROM c WHERE c.orderDate = "2019-05-03" is a cross-partition query: Yes

- SELECT * FROM c WHERE c.customerId = "5425" is a cross-partition query: Yes
- SELECT * FROM c WHERE c.orderDate = "2019-05-03" AND c.orderId = "9d7816e6-f401-42ba-ad05-
0e03de35c0b8" is a cross-partition query: No

In-partition query
- When you query data from containers, if the query has a partition key filter specified, Azure Cosmos DB
automatically optimizes the query.
- It routes the query to the physical partitions corresponding to the partition key values specified in the filter.
- Example: SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
- Example: SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'

Cross-partition query
- Each physical partition has its own index.
- Therefore, when you run a cross-partition query on a container, you're effectively running one query per
physical partition.
- Azure Cosmos DB automatically aggregates results across different physical partitions.
- Example: SELECT * FROM c WHERE c.Location = 'Seattle'

- In short: a cross-partition query scans more than one partition.

Parallel cross-partition query


- Parallel cross-partition queries allow you to perform low latency, cross-partition queries.
- MaxConcurrency: Sets the maximum number of simultaneous network connections to the container's
partitions. If you set this property to -1, the SDK manages the degree of parallelism. If the MaxConcurrency is
set to 0, there's a single network connection to the container's partitions.
- MaxBufferedItemCount: Trades query latency versus client-side memory utilization. If this option is omitted
or set to -1, the SDK manages the number of items buffered during parallel query execution.
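As a sketch with the Python azure-cosmos SDK (account, key, and database names are placeholders), the presence of a partition key value is what keeps a query in a single partition:

from azure.cosmos import CosmosClient

container = (CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
             .get_database_client("<database>")
             .get_container_client("container1"))

# In-partition: the partition key (orderId) pins the query to one logical partition.
in_partition = container.query_items(
    query="SELECT * FROM c WHERE c.orderId = @id",
    parameters=[{"name": "@id", "value": "9d7816e6-f401-42ba-ad05-0e03de35c0b8"}],
    partition_key="9d7816e6-f401-42ba-ad05-0e03de35c0b8",
)

# Cross-partition: no partition key filter, so every physical partition is queried.
cross_partition = container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "5425"}],
    enable_cross_partition_query=True,
)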
Query an Azure Cosmos DB container
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-query-container

[CONFIRMED]

QUESTION 5
You have an Azure Cosmos DB Core (SQL) API account that is configured for multi-region writes. The account
contains a database that has two containers named container1 and container2.

The following is a sample of a document in container1:

{
"customerId": 1234,
"firstName": "John",
"lastName": "Smith",
"policyYear": 2021
}

The following is a sample of a document in container2:

{
"gpsId": 1234,
"latitude": 38.8951,
"longitude": -77.0364
}

You need to configure conflict resolution to meet the following requirements:


- For container1 you must resolve conflicts by using the highest value for policyYear.
- For container2 you must resolve conflicts by accepting the distance closest to latitude: 40.730610
and longitude: -73.935242.
- Administrative effort must be minimized to implement the solution.

What should you configure for each container?

To answer, drag the appropriate configurations to the correct containers. Each configuration may be used once,
more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

Correct Answer:
Explanation

Explanation/Reference:

- Container1: Last Write Wins (default) mode


- Container2: Merge Procedures (custom) mode

Explanation:

For "highest value wins", the relevant policy is Last Write Wins. For API for NoSQL, the conflict resolution
path may also be set to a user-defined property with a numeric type; in a conflict, the highest value wins.

Note, however, that the selected answer is Last Write Wins in its DEFAULT (timestamp) mode, so the
policyYear property only drives the resolution if it is configured as the conflict resolution path.

Last Write Wins (LWW):


- This resolution policy, by default, uses a system-defined timestamp property.
- It's based on the timesynchronization clock protocol.
- If you use the SQL API, you can specify any other custom numerical property (e.g., your own notion of a
timestamp) to be used for conflict resolution.
- A custom numerical property is also referred to as the conflict resolution path."

Container1: Last Write Wins (default) mode


- Setting the ConflictResolutionMode to LastWriterWins and using the default settings for the policy in the
Azure Cosmos DB SDK will meet the goal of ensuring that any conflicts are sent to the conflicts feed.
- The LastWriterWins policy will resolve conflicts by choosing the document with the most recent
timestamp as the winner, and the conflicts feed will store the conflicting versions of the documents.

Custom:
- This resolution policy is designed for application-defined semantics for reconciliation of conflicts.
- When you set this policy on your Azure Cosmos container, you also need to register a merge stored
procedure.
- This procedure is automatically invoked when conflicts are detected under a database transaction on the
server.
- The system provides exactly once guarantee for the execution of a merge procedure as part of the
commitment protocol.
- "Custom conflict resolution policy is available only for SQL API accounts and can be set only at creation
time. It is not possible to set a custom resolution policy on an existing container."
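A sketch of both configurations at creation time (Python azure-cosmos SDK; names, partition keys, and the stored procedure link are assumptions; note that container1 here uses the user-defined numeric path variant discussed above rather than the pure timestamp default):

from azure.cosmos import CosmosClient, PartitionKey

database = CosmosClient("https://<account>.documents.azure.com:443/",
                        credential="<key>").get_database_client("<database>")

# container1: LWW on a numeric path, so the highest policyYear wins on conflict.
database.create_container(
    id="container1",
    partition_key=PartitionKey(path="/customerId"),
    conflict_resolution_policy={"mode": "LastWriterWins",
                                "conflictResolutionPath": "/policyYear"},
)

# container2: custom mode with a merge stored procedure that computes the distance.
database.create_container(
    id="container2",
    partition_key=PartitionKey(path="/gpsId"),
    conflict_resolution_policy={
        "mode": "Custom",
        "conflictResolutionProcedure": "dbs/<database>/colls/container2/sprocs/resolveByDistance",
    },
)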

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/conflict-resolution-policies
https://docs.microsoft.com/enus/azure/cosmos-db/sql/how-to-manage-conflicts

QUESTION 6
You have an Azure Cosmos DB Core (SQL) API account named storage1 that uses provisioned throughput
capacity mode.

The storage1 account contains the databases shown in the following table.

The databases contain the containers shown in the following table.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- At a minimum, you will be billed for 4,000 RU/s per hour for db1: NO
- The maximum throughput that can be consumed by cn11 is 400 RU/s: NO
- To db2, you can add a new container that uses database throughput: YES

At a minimum, you will be billed for 4,000 RU/s per hour for db1: NO
- Four containers with 1000 RU/s each.

The maximum throughput that can be consumed by cn11 is 400 RU/s: NO

- db2 provides a maximum of 8,000 RU/s shared across 8 containers (8,000 RU/s / 8 containers = 1,000 RU/s
per container on average), so cn11 is not capped at 400 RU/s.

To db2, you can add a new container that uses database throughput: YES
- For db2, the maximum request units per second are 8000 RU/s and 8 containers (cn11 to cn18), so 1000
RU/s for each container. Can very well add an additional container.

- The throughput provisioned on an Azure Cosmos DB container is exclusively reserved for that container.
- We recommend that you configure throughput at the container granularity when you want predictable
performance for the container.
- Can very well add an additional container.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/plan-manage-costs
https://azure.microsoft.com/en-us/pricing/details/cosmos-db/

QUESTION 7
You have a database named telemetry in an Azure Cosmos DB Core (SQL) API account that stores IoT data.
The database contains two containers named readings and devices.

Documents in readings have the following structure.

- id
- deviceid
- timestamp
- ownerid
- measures (array)
- type
- value
- metricid

Documents in devices have the following structure.

- id
- deviceid
- owner
- ownerid
- emailaddress
- name
- brand
- model

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- To return all devices owned by a specific emailaddress, multiple queries must be performed: YES
- To return deviceid, ownerid, timestamp, and value for a specific metricid, a join must be performed: YES
- To return deviceid, ownerid, emailaddress, and model, a join must be performed: NO

Explanation:

To return all devices owned by a specific emailaddress, multiple queries must be performed: YES
- Requires 2 requests to get from 2 containers

To return deviceid, ownerid, timestamp, and value for a specific metricid, a join must be performed: YES
- You need value and metricid properties from the measures array.
- You need join to get the data from Array

SELECT m.deviceid
FROM m
WHERE m.metricid = ?

SELECT d.deviceid, o.ownerid, d.timestamp
FROM d
JOIN o IN d.owner
WHERE d.deviceid = @deviceid

To return deviceid, ownerid, emailaddress, and model, a join must be performed: NO


- No joins required

[DOUBT]

QUESTION 8
The settings for a container in an Azure Cosmos DB Core (SQL) API account are configured as shown in the
following exhibit.
Which statement describes the configuration of the container?

A. All items will be deleted after one year.


B. Items stored in the collection will be retained always, regardless of the items time to live value.
C. Items stored in the collection will expire only if the item has a time to live value.
D. All items will be deleted after one hour.

Correct Answer: C
Explanation

Explanation/Reference:

- Items stored in the collection will expire only if the item has a time to live value.

Explanation:

Time to Live or TTL


- The ability to delete items automatically from a container after a certain time period.
- By default, you can set time to live at the container level and override the value on a per-item basis.
- After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these
items after the time period, since the time they were last modified.
- The maximum value for TTL is 2147483647 seconds, the approximate equivalent of 24,855 days or 68
years.

Time to Live:
- Off: items never expire, and any per-item ttl value is ignored.
- On (no default): no container-level expiration is set, but items that carry their own ttl value are deleted
when it elapses. This lets items expire at individual intervals.
- On: expired items are deleted based on the container's default TTL value; individual items can still
override it with a per-item ttl.

Examples

| Container.DefaultTimeToLive | Item.ttl | Expiration in seconds                                               |
| 1000                        | null     | 1000                                                                |
| 1000                        | -1       | This item will never expire                                         |
| 1000                        | 2000     | 2000                                                                |
| null                        | null     | This item will never expire                                         |
| null                        | -1       | This item will never expire                                         |
| null                        | 2000     | TTL is disabled at the container level. This item will never expire |

Time to Live on a container (set using DefaultTimeToLive):


- If missing (or set to null), items are not expired automatically.
- If present and the value is set to "-1", it is equal to infinity, and items don’t expire by default.
- If present and the value is set to some non-zero number "n" – items will expire "n" seconds after their last
modified time.
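A sketch of the per-item override (Python azure-cosmos SDK; the container handle, ids, and partition key property are assumptions):

# Container default TTL is 1000 seconds; this item overrides it to 2000 seconds.
container.upsert_item({"id": "1", "pk": "a", "ttl": 2000})

# ttl = -1 exempts this item from expiration even though the container default is set.
container.upsert_item({"id": "2", "pk": "a", "ttl": -1})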

Geospatial Configuration
- Specifies how the spatial data will be indexed.
- Specify one Geospatial Configuration per container: geography or geometry.

Geography data indexing


- Spatial indexing enabled for the geography data type.

Geometry data indexing


- Similar to the geography data type, you must specify relevant paths and types to index.
- Each geospatial path requires its own boundingBox.
- BoundingBox properties:
- xmin: the minimum indexed x coordinate
- ymin: the minimum indexed y coordinate
- xmax: the maximum indexed x coordinate
- ymax: the maximum indexed y coordinate

Reference:

Control Retention of items using TTL in Azure Cosmos DB


- https://www.sqlshack.com/control-retention-of-items-using-ttl-in-azure-cosmos-db/

Time to Live (TTL) in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live

Index geospatial data with Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/query/geospatial-index

[CONFIRMED]

QUESTION 9
You have an Azure Cosmos DB Core (SQL) API account named account1 that is configured for automatic
failover. The account1 account has a single read-write region in West US and a read region in East US.

You run the following PowerShell command.


Update-AzCosmosDBAccountFailoverPriority -ResourceGroupName "rg1" -Name "account1" -FailoverPolicy @("East US", "West US")

What is the effect of running the command?

A. The account will be unavailable to writes during the change


B. The provisioned throughput for account1 will increase
C. The account will be configured for multi-region writes
D. A manual failover will occur

Correct Answer: C
Explanation

Explanation/Reference:

- The account will be configured for multi-region writes

Update-AzCosmosDBAccountFailoverPriority
- Update Failover Region Priority of a Cosmos DB Account.

The Update-AzCosmosDBAccountFailoverPriority command with the -FailoverPolicy parameter updates the


failover policy for the Cosmos DB account by specifying the order in which the regions are selected for failover.

Trigger a manual failover


- Any change to a region with failoverPriority=0 triggers a manual failover and can only be done to an account
configured for manual failover.
- Changes to all other regions simply changes the failover priority for an Azure Cosmos DB account.

Change failover priority or trigger failover for an Azure Cosmos DB account with single write region by using
PowerShell
- https://learn.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/failover-priority-update

Update-AzCosmosDBAccountFailoverPriority
- https://learn.microsoft.com/en-us/powershell/module/az.cosmosdb/update-
azcosmosdbaccountfailoverpriority

[DOUBT]

QUESTION 10
You have an Azure Cosmos DB Core (SQL) API account that uses a custom conflict resolution policy. The
account has a registered merge procedure that throws a runtime exception.

The runtime exception prevents conflicts from being resolved.

You need to use an Azure function to resolve the conflicts.


What should you use?

A. a function that pulls items from the conflicts feed and is triggered by a timer trigger
B. a function that receives items pushed from the change feed and is triggered by an Azure Cosmos DB trigger
C. a function that pulls items from the change feed and is triggered by a timer trigger
D. a function that receives items pushed from the conflicts feed and is triggered by an Azure Cosmos DB
trigger

Correct Answer: A
Explanation

Explanation/Reference:

- a function that pulls items from the conflicts feed and is triggered by a timer trigger

Explanation:

All conflicts related data is stored in the conflicts feed, you have to orchestrate the function using a timer.

When a conflict occurs, you can resolve the conflict by using different conflict resolution policies.

Conflict resolution policy can only be specified at container creation time and cannot be modified after container
creation.

If you configure your container with the custom resolution option, and you fail to register a merge procedure on
the container or the merge procedure throws an exception at runtime, the conflicts are written to the conflicts
feed.

Your application then needs to manually resolve the conflicts in the conflicts feed.

"a function that pulls items from the conflicts feed and is triggered by a timer trigger"
- Is the correct answer since there is no trigger mechanism for the conflict feed in Cosmos DB
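A sketch of what the timer-driven function body could do (Python azure-cosmos SDK; list_conflicts and delete_conflict are the SDK calls I would expect here, and resolve is a hypothetical application helper):

from azure.cosmos import CosmosClient

container = (CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
             .get_database_client("<database>")
             .get_container_client("<container>"))

# Pull outstanding conflicts; there is no push trigger for the conflicts feed.
for conflict in container.list_conflicts():
    resolve(conflict)  # hypothetical application-defined resolution logic
    container.delete_conflict(conflict["id"], partition_key="<partition-key-value>")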

Incorrect answers

"a function that receives items pushed from the change feed and is triggered by an Azure Cosmos DB trigger"
"a function that pulls items from the change feed and is triggered by a timer trigger"
- Both read from the change feed, not from the conflicts feed.

"a function that pulls items from the conflicts feed and is triggered by a timer trigger"
- Mentions an Azure Cosmos DB trigger, but this is only for the change feed, not for the conflicts feed, hence

Conflict types and resolution policies when using multiple write regions
- https://learn.microsoft.com/en-us/azure/cosmos-db/conflict-resolution-policies

Manage conflict resolution policies in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-conflicts

QUESTION 11
The following is a sample of a document in orders.

The orders container uses customerId as the partition key.

You need to provide a report of the total items ordered per month by item type.

The solution must meet the following requirements:


- Ensure that the report can run as quickly as possible.
- Minimize the consumption of request units (RUs).

What should you do?

A. Configure the report to query orders by using a SQL query.


B. Configure the report to query a new aggregate container. Populate the aggregates by using the change
feed.
C. Configure the report to query orders by using a SQL query through a dedicated gateway.
D. Configure the report to query a new aggregate container. Populate the aggregates by using SQL queries
that run daily.

Correct Answer: B
Explanation

Explanation/Reference:

- Configure the report to query a new aggregate container. Populate the aggregates by using the
change feed.

Explanation:

After processing items in the change feed, you can build a materialized view and persist aggregated values
back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, you can, for example, use
change feed to implement real-time leaderboards based on scores from completed games.

"Configure the report to query a new aggregate container. Populate the aggregates by using SQL queries that
run daily."
- Suggests populating the aggregates by using SQL queries that run daily. While this may reduce the RU
consumption during querying, it may not necessarily minimize RU consumption overall. Additionally, this
approach may result in stale data since the aggregates are only updated once a day.
- The best approach to minimize RU consumption and ensure the report runs as quickly as possible is to use
the change feed to populate the aggregates in real-time, as suggested in this option.
- This way, the aggregates are always up-to-date, and the report can be generated quickly and with minimal
RU consumption.

Change feed
- Is a persistent record of changes to a container in the order they occur.
- Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos DB container for any
changes.
- It then outputs the sorted list of documents that were changed in the order in which they were modified.
- The persisted changes can be processed asynchronously and incrementally, and the output can be
distributed across one or more consumers for parallel processing.

- Change feed is enabled by default for all Azure Cosmos DB accounts.


- You can use your provisioned throughput to read from the change feed, just like any other Azure Cosmos DB
operation, in any of the regions associated with your Azure Cosmos DB account.

- The change feed includes insert and update operations made to items within the container.
- If you're using all versions and deletes mode (preview), you also get changes from delete operations and
TTL expirations.

- In the default (latest version) mode, the change feed does not include deletes.
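A sketch of the pull model for maintaining the aggregate container (Python azure-cosmos SDK; the container handle and the aggregation helper are assumptions):

# Read changes from the beginning, then keep polling; each changed order document
# is folded into the materialized monthly aggregates.
for doc in container.query_items_change_feed(is_start_from_beginning=True):
    update_monthly_aggregate(doc)  # hypothetical helper writing to the aggregate container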

Reference:

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-design-patterns#high-availability

Change feed in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed

[DOUBT]

QUESTION 12
You have three containers in an Azure Cosmos DB Core (SQL) API account as shown in the following table.

You have the following Azure functions:

- A function named Fn1 that reads the change feed of cn1


- A function named Fn2 that reads the change feed of cn2
- A function named Fn3 that reads the change feed of cn3

You perform the following actions:

- Delete an item named item1 from cn1.


- Update an item named item2 in cn2.
- For an item named item3 in cn3, update the item time to live to 3,600 seconds.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- Fn1 will receive item1 from the change feed: NO


- Fn2 can check the _etag of item2 to see whether the item is an update or an insert: NO
- Fn3 will receive item3 from the change feed: YES

The change feed includes insert and update operations made to items within the container.
If you're using all versions and deletes mode (preview), you also get changes from delete operations and TTL
expirations.

In the default (latest version) mode, the change feed does not include deletes.

Fn1 will receive item1 from the change feed: NO


- The item was deleted from cn1, so it will not be received by Fn1.
- Azure Cosmos DB's change feed is a great choice as a central data store in event sourcing architectures
where all data ingestion is modeled as writes (no updates or deletes).
- https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed#features-of-change-feed
- Note:
- The change feed does not capture deletes.
- If you delete an item from your container, it is also removed from the change feed.
- The most common method of handling this is adding a soft marker on the items that are being deleted.
- You can add a property called "deleted" and set it to "true" at the time of deletion.
- This document update will show up in the change feed.
- You can set a TTL on this item so that it can be automatically deleted later.

Fn2 can check the _etag of item2 to see whether the item is an update or an insert: NO
- Every item stored in an Azure Cosmos DB container has a system defined _etag property.
- The value of the _etag is automatically generated and updated by the server every time the item is updated.
- You should not take a dependency on the _etag format, because it can change at any time.
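Although _etag cannot distinguish an insert from an update, it does support optimistic concurrency, as in this sketch (Python azure-cosmos SDK; the container handle and values are assumptions):

from azure.core import MatchConditions

item = container.read_item(item="item2", partition_key="<partition-key-value>")
item["status"] = "processed"

# The replace succeeds only if the server-side _etag still matches, i.e. no one
# else has updated the item in the meantime.
container.replace_item(
    item=item["id"],
    body=item,
    etag=item["_etag"],
    match_condition=MatchConditions.IfNotModified,
)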

Fn3 will receive item3 from the change feed: YES


- Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos container for any
changes.
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-design-patterns https://
docs.microsoft.com/en-us/azure/cosmos-db/change-feed

Reference:

Change feed design patterns in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-design-patterns

Change feed in Azure Cosmos DB


- https://docs.microsoft.com/enus/azure/cosmos-db/change-feed

Transactions and optimistic concurrency control


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/database-transactions-optimistic-concurrency

[CONFIRMED]

QUESTION 13
You configure Azure Cognitive Search to index a container in an Azure Cosmos DB Core (SQL) API account as
shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information
presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- The [answer choice] field is limited to exact match comparisons: country


- The [answer choice] field is hidden from the search results: name

The [answer choice] field is limited to exact match comparisons: country


- The country field is filterable.
- Note: filterable: Indicates whether to enable the field to be referenced in $filter queries. Filterable differs from
searchable in how strings are handled. Fields of type Edm.String or Collection(Edm.String) that are filterable do
not undergo lexical analysis, so comparisons are for exact matches only.

The [answer choice] field is hidden from the search results: name
- The name field is not Retrievable.
- Retrievable: Indicates whether the field can be returned in a search result. Set this attribute to false if you
want to use a field (for example, margin) as a filter, sorting, or scoring mechanism but do not want the field to
be visible to the end user.
- Note: searchable: Indicates whether the field is full-text searchable and can be referenced in search queries.

Reference:

Create Index (Azure Cognitive Search REST API)


- https://docs.microsoft.com/en-us/rest/api/searchservice/create-index

[CONFIRMED]

QUESTION 14
You have a database named db1 in an Azure Cosmos DB for NoSQL account named account1.
You need to write JSON data to db1 by using Azure Stream Analytics. The solution must minimize costs.

Which should you do before you can use db1 as an output of Stream Analytics?

A. In account, add a private endpoint.


B. In db1, create containers that have a custom indexing policy and analytical store disabled.
C. In account, enable a dedicated gateway.
D. In db1, create containers that have an automatic indexing policy and analytical store enabled.

Correct Answer: B
Explanation

Explanation/Reference:

- In db1, create containers that have a custom indexing policy and analytical store disabled

Explanation:

When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created
based on the operational data in your container.

This column store is persisted separately from the row-oriented transactional store for that container, in a
storage account that is fully managed by Azure Cosmos DB, in an internal subscription.

Customers don't need to spend time with storage administration.

The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You
don't need the Change Feed or ETL to sync the data.

Stream Analytics doesn't create containers in your database. Instead, it requires you to create them
beforehand.
You can then control the billing costs of Azure Cosmos DB containers.
You can also tune the performance, consistency, and capacity of your containers directly by using the Azure
Cosmos DB APIs.

Analytical store is not needed here: enabling it would add a separate column store (and cost) that the Stream
Analytics output does not use.
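A sketch of pre-creating such a container (Python azure-cosmos SDK; the names and indexing paths are assumptions; leaving analytical_storage_ttl unset keeps the analytical store disabled):

from azure.cosmos import CosmosClient, PartitionKey

db1 = CosmosClient("https://account1.documents.azure.com:443/",
                   credential="<key>").get_database_client("db1")

# A lean custom indexing policy keeps write RU costs low; analytical store stays off.
db1.create_container(
    id="streamout",  # hypothetical output container
    partition_key=PartitionKey(path="/deviceId"),
    indexing_policy={
        "indexingMode": "consistent",
        "includedPaths": [{"path": "/deviceId/?"}],
        "excludedPaths": [{"path": "/*"}],
    },
)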

Azure Stream Analytics


- Azure Stream Analytics is a fully managed, real-time analytics engine designed to analyze and process fast
moving streams of data.
- Azure Stream Analytics support processing events in CSV, JSON, and Avro data formats.
- Both JSON and Avro data can be structured and contain some complex types such as nested objects
(records) and arrays.

Azure Cosmos DB
- In Azure Cosmos DB, data is indexed following indexing policies that are defined for each container. The
default indexing policy for newly created containers enforces range indexes for any string or number. You can
override this policy with your own custom indexing policy.

Analytical store
- Key/value databases (serialized object for each key value)
- Document databases (XML, YAML, JSON, or BSON)
- Column store databases (stores column families)
- Graph databases (store information as a collection of objects and relationships)
- Telemetry and time series databases are an append-only collection of objects

Azure Cosmos DB analytical store


- Azure Cosmos DB analytical store is a fully isolated column store for enabling large-scale analytics against
operational data in your Azure Cosmos DB, without any impact to your transactional workloads.
- Azure Cosmos DB transactional store is schema-agnostic, and it allows you to iterate on your transactional
applications without having to deal with schema or index management.
- In contrast to this, Azure Cosmos DB analytical store is schematized to optimize for analytical query
performance. This article describes in detailed about analytical storage.

Basics of Azure Cosmos DB as an output target


- https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-documentdb-output#basics-of-
azure-cosmos-db-as-an-output-target

Parse JSON and Avro data in Azure Stream Analytics


- https://learn.microsoft.com/en-us/azure/stream-analytics/stream-analytics-parsing-json

[CONFIRMED]

QUESTION 15
You have a database named db1 in an Azure Cosmos DB for NoSQL account named account1. The db1
database has a manual throughput of 4,000 request units per second (RU/s).

You need to move db1 from manual throughput to autoscale throughput by using the Azure CLI. The solution
must provide a minimum of 4,000 RU/s and a maximum of 40,000 RU/s.

How should you complete the CLI statements? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:
Explanation

Explanation/Reference:

az cosmosdb sql database throughput migrate \
    -a "account1" \
    -g "cosmosdbrg" \
    -n "db1" \
    -t "autoscale"

az cosmosdb sql database throughput update \
    -a "account1" \
    -g "cosmosdbrg" \
    -n "db1" \
    --max-throughput 40000

Explanation

Commands
- az cosmosdb sql database throughput migrate: Migrate the throughput of the SQL database between
autoscale and manually provisioned.
- az cosmosdb sql database throughput show: Get the throughput of the SQL database under an Azure
Cosmos DB account.
- az cosmosdb sql database throughput update: Update the throughput of the SQL database under an Azure
Cosmos DB account.
--max-throughput: The maximum throughput resource can scale to (RU/s). Provided when the resource
is autoscale enabled. The minimum value can be 4000 (RU/s).
--throughput: The throughput of SQL database (RU/s).

az cosmosdb sql database throughput migrate -a "account1" -g "cosmosdbrg" -n "db1" --throughput-type "autoscale"

az cosmosdb sql database throughput update -a "account1" -g "cosmosdbrg" -n "db1" --max-throughput 40000

With autoscale, the database scales between 10 percent and 100 percent of the maximum, so --max-throughput 40000 yields the required 4,000 to 40,000 RU/s range.

az cosmosdb sql database throughput


- https://learn.microsoft.com/en-us/cli/azure/cosmosdb/sql/database/throughput

[CONFIRMED]

QUESTION 16
You have an Azure Cosmos DB for NoSQL account.

You need to create an Azure Monitor query that lists recent modifications to the regional failover policy.

Which Azure Monitor table should you query?

A. CDBPartitionKeyStatistics.
B. CDBQueryRunTimeStatistics.
C. CDBDataPlaneRequests.
D. CDBControlPlaneRequests.

Correct Answer: D
Explanation

Explanation/Reference:

- CDBControlPlaneRequests.

CDBControlPlaneRequests
- This table details all control plane operations executed on the account, which include modifications to the
regional failover policy, indexing policy, IAM role assignments, backup/restore policies, VNet and firewall rules,
private links as well as updates and deletes of the account.

CDBPartitionKeyRUConsumption
- This table details the RU (Request Unit) consumption for logical partition keys in each region, within each of
their physical partitions.
- This data can be used to identify hot partitions from a request volume perspective.

CDBQueryRunTimeStatistics
- This table details query operations executed against a SQL API account.
- By default, the query text and its parameters are obfuscated to avoid logging PII data with full text query
logging available by request.

CDBDataPlaneRequests
- The DataPlaneRequests table captures every data plane operation for the Cosmos DB account.
- Data Plane requests are operations executed to create, update, delete or retrieve data within the account.
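A minimal sketch (not from the source) of running such a query from code with the Azure.Monitor.Query .NET library. The workspace ID placeholder and the failover-related operation-name filter are assumptions for illustration:

using System;
using Azure.Identity;
using Azure.Monitor.Query;

var client = new LogsQueryClient(new DefaultAzureCredential());

// List recent control plane operations from the CDBControlPlaneRequests table.
var result = await client.QueryWorkspaceAsync(
    "<log-analytics-workspace-id>",
    "CDBControlPlaneRequests | where OperationName has 'Failover' " +
    "| project TimeGenerated, OperationName",
    new QueryTimeRange(TimeSpan.FromDays(7)));

foreach (var row in result.Value.Table.Rows)
{
    Console.WriteLine($"{row.GetDateTimeOffset("TimeGenerated")}: {row.GetString("OperationName")}");
}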

References:

CDBPartitionKeyRUConsumption
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbpartitionkeyruconsumption

CDBQueryRuntimeStatistics
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbqueryruntimestatistics

CDBDataPlaneRequests.
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbdataplanerequests

CDBControlPlaneRequests.
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbcontrolplanerequests

[CONFIRMED]

QUESTION 17
You have a container in an Azure Cosmos DB for NoSQL account that stores data about orders.

The following is a sample of an order document.

{
"orderId" : "d4a9179b-5ead-43a3-b851-add9071ac4b6",
"customerId" : "f6e39103-bdc7-4346-9cfb-45daa4b2becf",
"orderDate" : "2022-09-29".
"orderItems" : [...],
"total" : 12345
}

Documents are up to 2 KB.

You plan to receive one million orders daily.

Customers will frequently view their past order history.

You are evaluating whether to use orderDate as the partition key.

What are two effects of using orderDate as the partition key? Each correct answer presents a complete
solution.

NOTE: Each correct selection is worth one point.

A. You will exceed the maximum number of partition key values.


B. Queries will run cross-partition.
C. You will exceed the maximum storage per partition
D. There will always be a hot partition.

Correct Answer: BD
Explanation
Explanation/Reference:

- There will always be a hot partition.


- Queries will run cross-partition.

Hot partitions
- Throughput provisioned for a container is divided evenly among physical partitions.
- A partition key design that doesn't distribute requests evenly might result in too many requests directed to a
small subset of partitions that become "hot."
- Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting
and higher costs.

Azure Cosmos DB service quotas


- Maximum RUs per partition (logical & physical): 10,000
- Maximum number of distinct (logical) partition keys: Unlimited
- Maximum storage across all items per (logical) partition: 20 GB

Incorrect answers

You will exceed the maximum number of partition key values.


- Maximum number of distinct (logical) partition keys: Unlimited
- Maximum storage across all items per (logical) partition: 20 GB
- There is no limit to the total number of physical partitions in your container. As your provisioned
throughput or data size grows, Azure Cosmos DB will automatically create new physical partitions by splitting
existing ones.

You will exceed the maximum storage per partition


- Maximum storage across all items per (logical) partition: 20 GB

Reference:

Azure Cosmos DB service quotas


- https://learn.microsoft.com/en-us/azure/cosmos-db/concepts-limits

https://www.tallan.com/blog/2019/04/30/horizontal-partitioning-in-azure-cosmos-db
https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits

[CONFIRMED]

QUESTION 18
You have an app that stores data in an Azure Cosmos DB Core (SQL) API account. The app performs queries
that return large result sets.

You need to return a complete result set to the app by using pagination. Each page of results must return 80
items.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of
actions to the answer area and arrange them in the correct order.
Select and Place:

Correct Answer:

Explanation

Explanation/Reference:

1. Configure the MaxItemCount in QueryRequestOptions


2. Run the query and provide a continuation token
3. Append the results to a variable

Query executions
- Sometimes query results will be split over multiple pages. Each page's results are generated by a separate
query execution.
- You can specify the maximum number of items returned by a query by setting the MaxItemCount.
- The MaxItemCount is specified per request and tells the query engine to return that number of items or
fewer. You can set MaxItemCount to -1 if you don't want to place a limit on the number of results per query
execution.

You should configure the MaxItemCount property to limit the number of items returned per page to 80.
Additionally, you should use the continuation token returned in the response to retrieve the next page of results.

Step 1: Configure the MaxItemCount in QueryRequestOptions

- You can specify the maximum number of items returned by a query by setting the MaxItemCount. The
MaxItemCount is specified per request and tells the query engine to return that number of items or fewer.
Step 2: Run the query and provide a continuation token

- In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's
progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time
using the continuation token.
- If the query returns a continuation token, then there are additional query results.

Step 3: Append the results to a variable


- Append each page of results to a variable until the continuation token returned is null; at that point, the
complete result set has been retrieved.
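Putting the three steps together, here is a minimal sketch (not from the source) using the .NET SDK v3; the container instance and the Order type are assumptions:

// Step 1: limit each page to 80 items.
QueryRequestOptions options = new QueryRequestOptions { MaxItemCount = 80 };

List<Order> allResults = new List<Order>();
string continuation = null;

do
{
    // Step 2: run the query, providing the continuation token from the previous page.
    using FeedIterator<Order> iterator = container.GetItemQueryIterator<Order>(
        "SELECT * FROM c",
        continuationToken: continuation,
        requestOptions: options);

    FeedResponse<Order> page = await iterator.ReadNextAsync();

    // Step 3: append the results to a variable.
    allResults.AddRange(page);

    continuation = page.ContinuationToken; // null when no more results remain
} while (continuation != null);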

Reference:

Pagination in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-pagination

[CONFIRMED]

QUESTION 19
You have an application named App1 that reads the data in an Azure Cosmos DB Core (SQL) API account.
App1 runs the same read queries every minute. The default consistency level for the account is set to eventual.
You discover that every query consumes request units (RUs) instead of using the cache.
You verify the IntegratedCacheItemHitRate metric and the IntegratedCacheQueryHitRate metric. Both metrics
have values of 0.
You verify that the dedicated gateway cluster is provisioned and used in the connection string.
You need to ensure that App1 uses the Azure Cosmos DB integrated cache.

What should you configure?

A. the indexing policy of the Azure Cosmos DB container


B. the consistency level of the requests from App1
C. the connectivity mode of the App1 CosmosClient
D. the default consistency level of the Azure Cosmos DB account

Correct Answer: C
Explanation

Explanation/Reference:

- the connectivity mode of the App1 CosmosClient

------
[AI]: the consistency level of the requests from App1.

To ensure that App1 uses the Azure Cosmos DB integrated cache, you need to configure the consistency level
of the requests from App1 to session or eventual.
-----

Explanation:

As the integrated cache is specific to the Azure Cosmos DB account and needs significant memory and CPU, it
needs a dedicated gateway node. Therefore, you need to configure the connectivity mode of the application
CosmosClient. Connect to Azure Cosmos DB with the help of gateway mode.

IntegratedCacheItemHitRate (key metrics for the integrated cache)


- The proportion of point reads that used the integrated cache (out of all point reads routed through the
dedicated gateway with session or eventual consistency).
- This value is an average of integrated cache instances across all dedicated gateway nodes.

IntegratedCacheQueryHitRate (key metrics for the integrated cache)


- The proportion of queries that used the integrated cache (out of all queries routed through the dedicated
gateway with session or eventual consistency).
- This value is an average of integrated cache instances across all dedicated gateway nodes.

If the values from IntegratedCacheItemHitRate and IntegratedCacheQueryHitRate are zero, then requests
aren't hitting the integrated cache. Check that you're using the dedicated gateway connection string, connecting
with gateway mode, and have set session or eventual consistency.

If you're using the .NET or Java SDK, set the connection mode to gateway mode.
This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of
connecting besides gateway mode.

Because the integrated cache is specific to your Azure Cosmos DB account and requires significant CPU and
memory, it requires a dedicated gateway node. Connect to Azure Cosmos DB using gateway mode.
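A minimal sketch (not from the source) of a CosmosClient configured so reads can hit the integrated cache; the dedicated gateway connection string is a placeholder:

// Use the dedicated gateway connection string, gateway connectivity mode,
// and session or eventual consistency so reads can use the integrated cache.
CosmosClient client = new CosmosClient(
    "<dedicated-gateway-connection-string>",
    new CosmosClientOptions
    {
        ConnectionMode = ConnectionMode.Gateway,
        ConsistencyLevel = ConsistencyLevel.Eventual
    });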

Reference:

Azure Cosmos DB integrated cache


- https://learn.microsoft.com/en-us/azure/cosmos-db/integrated-cache

[CONFIRMED]

QUESTION 20
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account named account1 that
is set to the session default consistency level. The average size of an item in container1 is 20 KB.

You have an application named App1 that uses the Azure Cosmos DB SDK and performs a point read on the
same set of items in container1 every minute.

You need to minimize the consumption of the request units (RUs) associated to the reads by App1.

What should you do?

A. In App1, modify the connection policy settings


B. In App1, change the consistency level of read requests to consistent prefix
C. In account1, provision a dedicated gateway and integrated cache
D. In account1, change the default consistency level to bounded staleness

Correct Answer: B
Explanation

Explanation/Reference:

- In App1, change the consistency level of read requests to consistent prefix


Explanation:

Since the application performs a point read on the same set of items in the container every minute, using
"consistent prefix" as the consistency level will allow the application to consume fewer RUs.

Consistent Prefix
- This guarantees that read operations never see out-of-order writes, although the data returned may not
reflect all writes made to the database at the time of the read.
- This consistency level may have lower latency than session consistency.

Bounded Staleness
- A guarantee that the data read is at most a certain number of versions or time interval behind the latest
write.
- This consistency level may have lower latency than strong consistency.
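A minimal sketch (not from the source) of relaxing the consistency level on individual read requests with the .NET SDK; the container instance, item id, and partition key value are assumptions:

// Override the account default (session) for this point read only.
ItemRequestOptions options = new ItemRequestOptions
{
    ConsistencyLevel = ConsistencyLevel.ConsistentPrefix
};

ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
    "item-id",
    new PartitionKey("partition-key-value"),
    options);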

[NOT-CONFIRMED]

QUESTION 21
You provision Azure resources by using the following Azure Resource Manager (ARM) template.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Note: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- The alert will be triggered when an Azure Cosmos DB key is used: NO


- The alert action will be performed when the alert is triggered: NO
- The alert will be triggered when an item that has a new partition key value is created: NO

Explanation:

The alert will be triggered when an Azure Cosmos DB key is used: NO


- An alert is triggered when the DB key is regenerated, not when it is used.
- Note: The az cosmosdb keys regenerate command regenerates an access key for an Azure Cosmos DB
database account.

The alert action will be performed when the alert is triggered: NO


- Only an SMS action will be taken.
- The emailReceivers list is empty, so no email action is taken.

The alert will be triggered when an item that has a new partition key value is created: NO
- The alert fires when a new access key is regenerated for the database account, not when an item with a
new partition key value is created.
Reference: https://docs.microsoft.com/en-us/cli/azure/cosmosdb/keys

[CONFIRMED]

QUESTION 22
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from
the devices will occur every second.

The following is a sample of the data

{
"id" : "03c1ca5a-db18-4231-908f-09a9bc7a7c3e",
"deviceManufacturer" : "Contoso, Ltd",
"deviceId" : "f460df85-799f-4d58-b051-67561b4993c6",
"timestamp" : "2021-09-19T13:47:45",
"sensor1Value" : true,
"sensor2Value" : "75",
"sensor3Value" : "4554",
"sensor4Value" : "454",
"sensor5Value" : "42128",
}

You need to select a partition key that meets the following requirements for writes:
- Minimizes the partition skew
- Avoids capacity limits
- Avoids hot partitions

What should you do?

A. Use timestamp as the partition key.


B. Create a new synthetic key that contains deviceId and sensor1Value.
C. Create a new synthetic key that contains deviceId and deviceManufacturer.
D. Create a new synthetic key that contains deviceId and a random number.

Correct Answer: D
Explanation

Explanation/Reference:

- Create a new synthetic key that contains deviceId and a random number.

Explanation:

In order to avoid a hot partition we have no option but to use a strategy like that suggested in the section "Use a
partition key with a random suffix"
- A partition key design that doesn't distribute requests evenly might result in too many requests directed to a
small subset of partitions that become "hot."
- Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting and higher
costs.

Use a partition key with a random suffix.


- One way to distribute the workload more evenly is to append a random number at the end of the partition key value.
- When you distribute items in this way, you can perform parallel write operations across partitions.

Because of the write frequency (one write per second per device), we must avoid putting too much data into the
same logical partition. The maximum quota for a logical partition is 20 GB, which can be exhausted quickly.
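A minimal sketch (not from the source) of writing an item with such a synthetic key using the .NET SDK; the property names and the choice of 10 suffix buckets are assumptions:

// Spread writes for one device across 10 logical partitions by appending
// a random suffix (0-9) to deviceId.
Random random = new Random();
int suffix = random.Next(0, 10);

string deviceId = "f460df85-799f-4d58-b051-67561b4993c6";

var reading = new
{
    id = Guid.NewGuid().ToString(),
    deviceId,
    partitionKey = $"{deviceId}-{suffix}",
    timestamp = DateTime.UtcNow
};

await container.CreateItemAsync(reading, new PartitionKey(reading.partitionKey));

The trade-off of this pattern is on the read side: retrieving all data for a device requires fanning out across the suffix values.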

Incorrect Answers:

Use timestamp as the partition key.


- You will also not like to partition the data on "DateTime", because this will create a hot partition. Imagine you
have partitioned the data on time, then for a given minute, all the calls will hit one partition. If you need to
retrieve the data for a customer, then it will be a fan-out query because data may be distributed on all the
partitions.

Create a new synthetic key that contains deviceId and sensor1Value.


- Sensor1Value has only two values.
- Create a hot partition because of the low cardinality.

Create a new synthetic key that contains deviceId and deviceManufacturer.


- All the devices could have the same manufacturer.
- Create a hot partition because of the low cardinality.

Reference:

Use a partition key with a random suffix


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/synthetic-partition-keys#use-a-partition-key-with-a-random-suffix

Partitioning and horizontal scaling in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/partitioning-overview

[CONFIRMED]

QUESTION 23
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from
the devices will occur every second.

The following is a sample of the data

{
"id" : "03c1ca5a-db18-4231-908f-09a9bc7a7c3e",
"deviceManufacturer" : "Contoso, Ltd",
"deviceId" : "f460df85-799f-4d58-b051-67561b4993c6",
"timestamp" : "2021-09-19T13:47:45",
"sensor1Value" : true,
"sensor2Value" : "75",
"sensor3Value" : "4554",
"sensor4Value" : "454",
"sensor5Value" : "42128",
}

You need to select a partition key that meets the following requirements for writes:
- Minimizes the partition skew
- Avoids capacity limits
- Avoids hot partitions
What should you do?

A. Create a new synthetic key that contains deviceId and timestamp.


B. Use timestamp as the partition key.
C. Use deviceManufacturer as the partition key.
D. Use sensor1Value as the partition key.

Correct Answer: A
Explanation

Explanation/Reference:

- Create a new synthetic key that contains deviceId and timestamp.

Concatenate multiple properties of an item.

You can form a partition key by concatenating multiple property values into a single artificial partitionKey
property. These keys are referred to as synthetic keys.

For example, consider the following example document:


{
"deviceId": "abc-123",
"date": 2018
}

For the previous document, one option is to set /deviceId or /date as the partition key.
Use this option, if you want to partition your container based on either device ID or date.
Another option is to concatenate these two values into a synthetic partitionKey property that's used as the
partition key.

{
"deviceId": "abc-123",
"date": 2018,
"partitionKey": "abc-123-2018"
}
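A minimal sketch (not from the source) of computing that synthetic key client-side before the insert, assuming an existing Container instance named container:

string deviceId = "abc-123";
int date = 2018;

var item = new
{
    id = Guid.NewGuid().ToString(),
    deviceId,
    date,
    partitionKey = $"{deviceId}-{date}" // "abc-123-2018"
};

await container.CreateItemAsync(item, new PartitionKey(item.partitionKey));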

Incorrect:

Use timestamp as the partition key.


- You will also not like to partition the data on "DateTime", because this will create a hot partition. Imagine you
have partitioned the data on time, then for a given minute, all the calls will hit one partition. If you need to
retrieve the data for a customer, then it will be a fan-out query because data may be distributed on all the
partitions.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/synthetic-partition-keys
https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits

[CONFIRMED]

QUESTION 24
You maintain a relational database for a book publisher.
The database contains the following tables.

The most common query lists the books for a given authorId.

You need to develop a non-relational data model for Azure Cosmos DB Core (SQL) API that will replace the
relational database. The solution must minimize latency and read operation costs.

What should you include in the solution?

A. Create a container for Author and a container for Book. In each Author document, embed bookId for each
book by the author. In each Book document embed authorId of each author.
B. Create Author, Book, and Bookauthorlnk documents in the same container.
C. Create a container that contains a document for each Author and a document for each Book. In each Book
document, embed authorId.
D. Create a container for Author and a container for Book. In each Author document and Book document
embed the data from Bookauthorlnk.

Correct Answer: C
Explanation

Explanation/Reference:

- Create a container that contains a document for each Author and a document for each Book. In each
Book document, embed authorId.

Explanation:

The most read cost effective.

We are explicitly required to minimize cost and latency. Option A, although it would get the job done, would be
highly inefficient. The relationship between an author and books is one-to-few (one-to-many but bounded),
hence the data should be embedded. Refer to the links below.
Data modeling in Azure Cosmos DB
- Retrieving a complete person record from the database is now a single read operation against a single
container and for a single item.
- Updating the contact details and addresses of a person record is also a single write operation against a
single item.

Here is a diagram of the data model for option "Create a container that contains a document for each Author
and a document for each Book. In each Book document, embed authorId."

Container: Books
Document: Book1
authorId: 1
title: "The Da Vinci Code"
genre: "Thriller"

Container: Authors
Document: Author1
authorId: 1
fullname: "Dan Brown"
address: "123 Main Street"
contactinfo: "123-456-7890"

Incorrect answers.

Create a container for Author and a container for Book. In each Author document, embed bookId for each book
by the author. In each Book document embed authorId of each author.
- In some scenarios, you may want to mix different document types in the same collection; this is usually the
case when you want multiple, related documents to sit in the same partition.
- For example, you could put both books and book reviews documents in the same collection and partition it
by bookId.
- This solution creates two containers, which means two read operations, while the solution must minimize
latency and read operation costs.

References:

Data modeling in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/modeling-data

How to model and partition data on Azure Cosmos DB using a real-world example
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-model-partition-example

[DOUBT]

QUESTION 25
You have an Azure Cosmos DB Core (SQL) API account.

You run the following query against a container in the account.

SELECT
IS_NUMBER("1234") AS A,
IS_NUMBER(1234) AS B,
IS_NUMBER({prop: 1234}) AS C

What is the output of the query?

A. [{"A": false, "B": true, "C": false}]


B. [{"A": true, "B": false, "C": true}]
C. [{"A": true, "B": true, "C": false}]
D. [{"A": true, "B": true, "C": true}]

Correct Answer: A
Explanation

Explanation/Reference:

- [{"A": false, "B": true, "C": false}]

Explanation:

IS_NUMBER returns a Boolean value indicating if the type of the specified expression is a number.

"1234" is a string, not a number.

Reference:

IS_NUMBER (Azure Cosmos DB)


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-is-number

[CONFIRMED]

QUESTION 26
You need to implement a trigger in Azure Cosmos DB Core (SQL) API that will run before an item is inserted
into a container.

Which two actions should you perform to ensure that the trigger runs? Each correct answer presents part of the
solution.

Note: Each correct selection is worth one point.

A. Append pre to the name of the JavaScript function trigger.


B. For each create request, set the access condition in RequestOptions.
C. Register the trigger as a pre-trigger.
D. For each create request, set the consistency level to session in RequestOptions.
E. For each create request, set the trigger name in RequestOptions.

Correct Answer: CE
Explanation

Explanation/Reference:

- Register the trigger as a pre-trigger.


- For each create request, set the trigger name in RequestOptions.

Explanation:
When you run an operation, pre-triggers are passed in the RequestOptions object by specifying
PreTriggerInclude and then passing the name of the trigger in a List object.

Even though the name of the trigger is passed as a List, you can still run only one trigger per operation.

Register the trigger as a pre-trigger.


- When triggers are registered, you can specify the operations that it can run with.

For each create request, set the trigger name in RequestOptions.


- When executing, pre-triggers are passed in the RequestOptions object by specifying PreTriggerInclude
and then passing the name of the trigger in a List object.
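A minimal sketch (not from the source) of both actions with the .NET SDK v3, where the equivalent options type is ItemRequestOptions; the trigger id, JavaScript file name, item, and partition key value are assumptions:

// Register the trigger as a pre-trigger scoped to create operations.
await container.Scripts.CreateTriggerAsync(new TriggerProperties
{
    Id = "validateItem",
    Body = File.ReadAllText("validateItem.js"),
    TriggerType = TriggerType.Pre,
    TriggerOperation = TriggerOperation.Create
});

// Name the trigger in the request options of each create request.
await container.CreateItemAsync(
    newItem,
    new PartitionKey(partitionKeyValue),
    new ItemRequestOptions { PreTriggers = new List<string> { "validateItem" } });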

Reference:

How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs

[CONFIRMED]

QUESTION 27
You have a container in an Azure Cosmos DB Core (SQL) API account.

You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.

What should you include in the code?

To answer, select the appropriate options in the answer area.


NOTE: Each correct selection is worth one point

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- RequestOptions property to set: AccessCondition


- Document property that will be compared: _etag

Explanation:

RequestOptions property to set: AccessCondition


- It's supposed to be either AccessCondition or IfMatchETag directly.
- Consistency levels have nothing to do with optimistic concurrency.

Document property that will be compared: _etag


- The ItemRequestOptions class helped us implement optimistic concurrency by specifying that we wanted
the SDK to use the If-Match header to allow the server to decide whether a resource should be updated.
- The If-Match value is the ETag value to be checked against. If the ETag value matches the server ETag
value, the resource is updated.

Example

// If ETag is current, then this will succeed. Otherwise the request will fail with HTTP 412 Precondition Failure

await client.ReplaceDocumentAsync(
readCopyOfBook.SelfLink,
new Book { Title = "Moby Dick", Price = 14.99 },
new RequestOptions
{
AccessCondition = new AccessCondition
{
Condition = readCopyOfBook.ETag,
Type = AccessConditionType.IfMatch
}
});
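For comparison, a minimal sketch (not from the source) of the same check with the .NET SDK v3, which exposes the ETag condition directly through ItemRequestOptions.IfMatchEtag; the Book type and key values are assumptions:

// Read the current document and capture its ETag.
ItemResponse<Book> readResponse = await container.ReadItemAsync<Book>(
    "book-id", new PartitionKey("book-id"));

Book book = readResponse.Resource;
book.Price = 14.99;

// The replace succeeds only if the server-side ETag still matches;
// otherwise the request fails with HTTP 412 Precondition Failed.
await container.ReplaceItemAsync(
    book,
    "book-id",
    new PartitionKey("book-id"),
    new ItemRequestOptions { IfMatchEtag = readResponse.ETag });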

Reference:
RequestOptions.AccessCondition Property
- https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.documents.client.requestoptions.accesscondition?view=azure-dotnet#examples

[CONFIRMED]

QUESTION 28
You are creating a database in an Azure Cosmos DB Core (SQL) API account. The database will be used by
an application that will provide users with the ability to share online posts. Users will also be able to submit
comments on other users' posts.

You need to store the data shown in the following table.

The application has the following characteristics:


- Users can submit an unlimited number of posts.
- The average number of posts submitted by a user will be more than 1,000.
- Posts can have an unlimited number of comments from different users.
- The average number of comments per post will be 100, but many posts will exceed 1,000 comments.
- Users will be limited to having a maximum of 20 interests.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- If you embed the post data into the users data instead of creating a separate document for each post, you
will increase the write operation cost for new post: YES
- If you embed the post data into the users data instead of creating a separate document for each post, you
will increase the write operation costs for new comments: YES
- If you embed the post data into the users data instead of creating a separate document for each post, you
will increase the read operations cost for displaying the users and their associated interests: NO

If you embed the post data into the users data instead of creating a separate document for each post, you will
increase the write operation cost for new post: YES
- Non-relational data increases write costs, but can decrease read costs.

If you embed the post data into the users data instead of creating a separate document for each post, you will
increase the write operation costs for new comments: YES
- Non-relational data increases write costs, but can decrease read costs.

If you embed the post data into the users data instead of creating a separate document for each post, you will
increase the read operations cost for displaying the users and their associated interests: NO
- Non-relational data increases write costs, but can decrease read costs.

QUESTION 29
You have a container in an Azure Cosmos DB Core (SQL) API account. The container stores telemetry data
from IoT devices. The container uses telemetryId as the partition key and has a throughput of 1,000 request
units per second (RU/s).

Approximately 5,000 IoT devices submit data every five minutes by using the same telemetryId value.
You have an application that performs analytics on the data and frequently reads telemetry data for a single IoT
device to perform trend analysis.

The following is a sample of a document in the container.

{
"id" : "9ccf1906-2a30-4dc0-9644-2185f5dcbbd7",
"deviceId" : "bba6fe24-6d97-4935-8d58-36baa4b8a0e1",
"telemetryId" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
"date" : "2019-05-03"
"time" : "13:05",
"tempe" : "21"
}

You need to reduce the amount of request units (RUs) consumed by the analytics application.

What should you do?

A. Decrease the offerThroughput value for the container.


B. Increase the offerThroughput value for the container.
C. Move the data to a new container that has a partition key of deviceId.
D. Move the data to a new container that uses a partition key of date.

Correct Answer: C
Explanation

Explanation/Reference:

- Move the data to a new container that has a partition key of deviceId.

Explanation:

The partition key is what will determine how data is routed in the various partitions by Cosmos DB and needs to
make sense in the context of your specific scenario.

The IoT Device ID is generally the "natural" partition key for IoT applications.

The analytics application would frequently read from the same partition if Device ID is used as the partition key.

Note: The partition key determines how Azure Cosmos DB routes data in partitions. The IoT Device ID is the
usual partition key for IoT applications.

Reference:
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db

[CONFIRMED]

QUESTION 30
You have the indexing policy shown in the following exhibit.
Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- When creating a query, which ORDER BY statement will execute successfully: ORDER BY c.name DESC,
c.age DESC
- During the creation of an item, when will the index update: At the same time as the item creation

Explanation:

Composite indexes
- Queries that have an ORDER BY clause with two or more properties require a composite index.
- You can also define a composite index to improve the performance of many equality and range queries.
- By default, no composite indexes are defined so you should add composite indexes as needed.

ORDER BY clause in Azure Cosmos DB


- ASC sorts from the lowest value to highest value.
- DESC sorts from highest value to lowest value.
- ASC is the default sort order.
- Null values are treated as the lowest possible values.

But there is no ORDER BY c.name ASC, c.age ASC choice, so the correct answer is ORDER BY c.name DESC,
c.age DESC.

Composite indexes
- Two or more property paths. The sequence in which property paths are defined matters.
- The order of composite index paths (ascending or descending) should also match the order in the
ORDER BY clause.

When creating a query, which ORDER BY statement will execute successfully: ORDER BY c.name DESC,
c.age DESC
- Queries that have an ORDER BY clause with two or more properties require a composite index.
- The following considerations are used when using composite indexes for queries with an ORDER BY clause
with two or more properties:
- If the composite index paths do not match the sequence of the properties in the ORDER BY clause, then
the composite index can't support the query.
- The order of composite index paths (ascending or descending) should also match the order in the ORDER
BY clause. The composite index also supports an ORDER BY clause with the opposite order on all paths.

During the creation of an item, when will the index update: At the same time as the item creation
- Azure Cosmos DB supports two indexing modes:
- Consistent: The index is updated synchronously as you create, update or delete items. This means
that the consistency of your read queries will be the consistency configured for the account.
- None: Indexing is disabled on the container.
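A minimal sketch (not from the source) of defining such a composite index with the .NET SDK, assuming the exhibit declares a composite index on /name and /age in descending order; the container name and partition key path are placeholders:

using System.Collections.ObjectModel;

ContainerProperties properties = new ContainerProperties("container1", "/pk");

// Path sequence and sort order must match the ORDER BY clause
// (or its exact opposite on all paths).
properties.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/name", Order = CompositePathSortOrder.Descending },
    new CompositePath { Path = "/age",  Order = CompositePathSortOrder.Descending }
});

await database.CreateContainerIfNotExistsAsync(properties);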

Reference:

Composite indexes
- https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy#composite-indexes

https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

Similar question: https://www.examtopics.com/discussions/microsoft/view/36279-exam-az-204-topic-3-question-13-discussion/

[CONFIRMED]

QUESTION 31
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. Upserts of items in
container1 occur every three seconds.

You have an Azure Functions app named function1 that is supposed to run whenever items are inserted or
replaced in container1.

You discover that function1 runs, but not on every upsert.


You need to ensure that function1 processes each upsert within one second of the upsert.

Which property should you change in the Function.json file of function1?

A. checkpointInterval
B. leaseCollectionsThroughput
C. maxItemsPerInvocation
D. feedPollDelay

Correct Answer: D
Explanation

Explanation/Reference:

- feedPollDelay

Explanation:

An upsert is an operation that inserts an item if it does not already exist, or updates it if it does.
FeedPollDelay represents the delay time (in milliseconds) between polls of a partition for new changes on the
feed, after all current changes are drained. The default value is 5,000 ms, that is, 5 seconds.

Therefore, if you want each upsert to be processed by function1 within 1 second of the upsert, you need to
lower feedPollDelay (for example, to 1,000 ms).

FeedPollDelay
- The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all
current changes are drained.
- Default is 5,000 milliseconds, or 5 seconds.
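A minimal sketch (not from the source) of the equivalent setting applied through the C# trigger attribute (in-process model); the database, container, connection setting, and lease container names are assumptions. In function.json, the same value goes in the feedPollDelay property.

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class Function1
{
    // Poll the change feed every 1,000 ms instead of the 5,000 ms default.
    [FunctionName("function1")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "db1",
            collectionName: "container1",
            ConnectionStringSetting = "CosmosDbConnection",
            LeaseCollectionName = "leases",
            FeedPollDelay = 1000)] IReadOnlyList<Document> changes,
        ILogger log)
    {
        log.LogInformation($"Processing {changes.Count} changed documents");
    }
}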

Incorrect Answers:

checkpointInterval
- When set, it defines, in milliseconds, the interval between lease checkpoints.
- Default is always after each Function call.

MaxItemsPerInvocation
- When set, this property sets the maximum number of items received per Function call.
- If operations in the monitored container are performed through stored procedures, transaction scope is
preserved when reading items from the change feed.
- As a result, the number of items received could be higher than the specified value so that the items
changed by the same transaction are returned as part of one atomic batch.

Reference:

Azure Cosmos DB trigger for Azure Functions 2.x and higher


- https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger

[CONFIRMED]

QUESTION 32
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

The following is a sample of a document in container1.

{
"studentId": "631282",
"firstName": "James",
"lastName": "Smith",
"enrollmentYear": 1990,
"isActivelyEnrolled": true,
"address": {
"street": "",
"city": "",
"stateProvince": "",
"postal": "",
}
}

The container1 container has the following indexing policy.

{
"indexingMode": "consistent",
"includePaths": [
{
"path": "/*"
},
{
"path": "/address/city/?"
}
],
"excludePaths": [
{
"path": "/address/*"
},
{
"path": "/firstName/?"
}
]
}
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Note: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- The /isActivelyEnrolled property is included in the index: Yes


- The /firstName property is included in the index: No
- The /address/city property is included in the index: Yes
Explanation:

The /isActivelyEnrolled property is included in the index: Yes


- "path": "/*" is in includePaths.
- Include the root path to selectively exclude paths that don't need to be indexed. This is the recommended
approach as it lets Azure Cosmos DB proactively index any new property that may be added to your model.

The /firstName property is included in the index: No


- "path": "/firstName/?" is in excludePaths.

The /address/city property is included in the index: Yes


- "path": "/address/city/?" is in includePaths

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

QUESTION 33
You have a database in an Azure Cosmos DB SQL API Core (SQL) account that is used for development.

The database is modified once per day in a batch process.

You need to ensure that you can restore the database if the last batch process fails. The solution must
minimize costs.

How should you configure the backup settings? To answer, select the appropriate options in the answer area.

Note: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- Backup interval: 24 hours


- Backup retention: 2 days

Scenario:
- restore the database if the last batch process fails

Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time,
only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you
can change the backup interval and the retention period from the Azure portal.

Backup Interval
- It’s the interval at which Azure Cosmos DB attempts to take a backup of your data.
- Backup Interval can't be less than 1 hour and greater than 24 hours.
- When you change this interval, the new interval takes into effect starting from the time when the last backup
was taken.

Backup Retention
- It represents the period where each backup is retained.
- You can configure it in hours or days.
- The minimum retention period can’t be less than two times the backup interval (in hours) and it
can’t be greater than 720 hours.

Backup retention: 2 days = 2 * 24 hours

References

Periodic backup and restore in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/periodic-backup-restore-introduction
Modify periodic backup interval and retention period in Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/periodic-backup-modify-interval-retention?tabs=azure-portal

[CONFIRMED]

QUESTION 34
You have the following query.

SELECT * FROM ?
WHERE c.sensor = "TEMP1"
AND c.value < 22
AND c.timestamp >= 1619146031231

You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by
the query.

What should you recommend?

A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC,
value DESC, timestamp DESC)
C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
D. a composite index for (sensor ASC, value ASC, timestamp ASC)

Correct Answer: A
Explanation

Explanation/Reference:

- a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp
ASC)

Explanation:

You can use composite indexes in Azure Cosmos DB to optimize additional cases of the most common
inefficient queries!

Anytime you have a slow or high request unit (RU) query, you should consider optimizing it with a composite
index.

This allows you to optimize queries that have multiple equality and range filters in the same filter expression.

Queries with filters on multiple properties


- If a query has filters on two or more properties, it may be helpful to create a composite index for these
properties.
- For example, consider the following query that has both an equality and range filter:

SELECT * FROM c WHERE c.name = "John" AND c.age > 18

This query will be more efficient, taking less time and consuming fewer RUs, if it's able to leverage a
composite index on (name ASC, age ASC).
- Queries with multiple range filters can also be optimized with a composite index.
- However, each individual composite index can only optimize a single range filter.
- Range filters include >, <, <=, >=, and !=. The range filter should be defined last in the composite index.

- Consider the following query with an equality filter and two range filters:

SELECT * FROM c WHERE c.name = "John" AND c.age > 18 AND c._ts > 1612212188

This query will be more efficient with a composite index on (name ASC, age ASC) and (name ASC, _ts
ASC).
However, the query wouldn't utilize a composite index on (age ASC, name ASC) because the
properties with equality filters must be defined first in the composite index.
Two separate composite indexes are required instead of a single composite index on (name ASC, age
ASC, _ts ASC) since each composite index can only optimize a single range filter.
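A minimal sketch (not from the source) of declaring the two composite indexes from the answer with the .NET SDK; the container name and partition key path are assumptions:

using System.Collections.ObjectModel;

ContainerProperties properties = new ContainerProperties("telemetry", "/sensor");

// Equality property (sensor) first, then one range property per index.
properties.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/sensor", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/value", Order = CompositePathSortOrder.Ascending }
});
properties.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/sensor", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/timestamp", Order = CompositePathSortOrder.Ascending }
});

await database.CreateContainerIfNotExistsAsync(properties);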

Reference:

Three ways to leverage composite indexes in Azure Cosmos DB


- https://azure.microsoft.com/en-us/blog/three-ways-to-leverage-composite-indexes-in-azure-cosmos-db/

Queries with filters on multiple properties


- https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy#queries-with-filters-on-multiple-properties

[NOT CONFIRMED]

QUESTION 35
You have an Azure Cosmos DB Core (SQL) API account named account1.

You have the Azure virtual networks and subnets shown in the following table.

The vnet1 and vnet2 networks are connected by using a virtual network peer.

The Firewall and virtual network settings for account1 are configured as shown in the exhibit.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Note: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- VM1 can access account1: YES


- VM2 can access account1: NO
- VM3 can access account1: NO

Explanation:

The table is not in sync with the firewall exhibit regarding the subnets.


- /24 is inside /16
- Only allowed subnet 1.

Based on the table: Y, N, N.

Based on the Firewall and virtual network settings for account1 exhibit: N, Y, N.

VM1 can access account1: YES


- VM1 is on vnet1.subnet1 which has the Endpoint Status enabled.

VM2 can access account1: NO


- VM2 is on vnet1.subnet2.
- Only vnet1 and subnet1 added to Azure Cosmos account have access.
- Their peered VNets cannot access the account until the subnets within peered virtual networks are added to
the account.

VM3 can access account1: NO


- VM3 is on vnet2.subnet3.
- Only vnet1 and subnet1 added to Azure Cosmos account have access.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint

[CONFIRMED]

QUESTION 36
You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored
in Azure Key Vault.
You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys.

Which three permissions should you enable in the access policy? Each correct answer presents part of the
solution.

Note: Each correct selection is worth one point.

A. Wrap Key
B. Get
C. List
D. Update
E. Sign
F. Verify
G. Unwrap Key
Correct Answer: ABG
Explanation

Explanation/Reference:

- Wrap Key
- Get
- Unwrap Key

Explanation:

To configure customer-managed keys for your Azure Cosmos account with Azure Key Vault, add an access
policy to your Azure Key Vault instance:

1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys.
Select Access Policies from the left menu:

2. Select + Add Access Policy.

3. Under the Key permissions drop-down menu, select Get, Unwrap Key, and Wrap Key permissions:

Reference:

https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview

https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk

[CONFIRMED]

QUESTION 37
You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API
account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution
must store the data in a compact binary format.

Which three configuration items should you include in the solution? Each correct answer presents part of the
solution.

Note: Each correct selection is worth one point.

A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
C. "key.converter": "io.confluent.connect.avro.AvroConverter"
D. "connect.cosmos.containers.topicmap": "iot#telemetry"
E. "connect.cosmos.containers.topicmap": "iot"
F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"

Correct Answer: ACD


Explanation

Explanation/Reference:

- "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
- "key.converter": "io.confluent.connect.avro.AvroConverter"
- "connect.cosmos.containers.topicmap": "iot#telemetry"

Explanation:


CosmosDBSourceConnector
- Read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic.

CosmosDBSinkConnector
- Export data from Apache Kafka topics to an Azure Cosmos DB database.

connect.cosmos.containers.topicmap
- Mapping between Kafka Topics and Cosmos Containers, formatted using CSV as shown:
topic#container,topic2#container

"key.converter": "io.confluent.connect.avro.AvroConverter"
- Avro is binary format, while JSON is text.

"connect.cosmos.containers.topicmap": "iot#telemetry"
- Create the Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for
the sink connector. Extract:
"connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector", "key.converter":
"org.apache.kafka.connect.json.AvroConverter" "connect.cosmos.containers.topicmap": "hotels#kafka"
Read from Azure Cosmos DB

{
"name": "cosmosdb-source-connector",
"config": {
"connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"connect.cosmos.task.poll.interval": "100",
"connect.cosmos.connection.endpoint": "<cosmos-endpoint>",
"connect.cosmos.master.key": "<cosmos-key>",
"connect.cosmos.databasename": "<cosmos-database>",
"connect.cosmos.containers.topicmap": "<kafka-topic>#<cosmos-container>",
"connect.cosmos.offset.useLatest": false,
"value.converter.schemas.enable": "false",
"key.converter.schemas.enable": "false"
}
}

Incorrect Answers:

"connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"
- CosmosDBSourceConnector is to get Cosmos db data to kafka topic.
- CosmosDBSinkConnector is to export Kafka topics data to Cosmos db.
- We want to have data from cosmos to kafka so source, not sink.

"key.converter": "org.apache.kafka.connect.json.JsonConverter"

- JSON is plain text.


- Note, full example (endpoint and key values are placeholders):

{
"name": "cosmosdb-sink-connector",
"config": {
"connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector",
"tasks.max": "1",
"topics": [ "hotels" ],
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schemas.enable": "false",
"key.converter": "io.confluent.connect.avro.AvroConverter",
"key.converter.schemas.enable": "false",
"connect.cosmos.connection.endpoint": "https://<account-name>.documents.azure.com:443/",
"connect.cosmos.master.key": "<account-key>",
"connect.cosmos.databasename": "kafkaconnect",
"connect.cosmos.containers.topicmap": "hotels#kafka"
}
}

Reference:
Kafka Connect for Azure Cosmos DB - source connector
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/kafka-connector-source

Kafka Connect for Azure Cosmos DB - sink connector


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/kafka-connector-sink

Kafka Connect for Azure Cosmos DB (SQL API)


- https://microsoft.github.io/kafka-connect-cosmosdb/

https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink

https://www.confluent.io/blog/kafkaconnect-deep-dive-converters-serialization-explained/

[CONFIRMED]

QUESTION 38
You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to
write a dataset.

The data flow will use 2,000 Apache Spark partitions. You need to ensure that the ingestion from each Spark
partition is balanced to optimize throughput.

Which sink setting should you configure?

A. Throughput
B. Write throughput budget
C. Batch size
D. Collection action

Correct Answer: C
Explanation

Explanation/Reference:

- Batch size

Explanation:

Batch size
- An integer that represents how many objects are being written to Azure Cosmos DB collection in each
batch.
- Usually, starting with the default batch size is sufficient.
- Azure Cosmos DB limits single request's size to 2MB. The formula is "Request Size = Single Document Size
* Batch Size". If you hit error saying "Request size is too large", reduce the batch size value.
- The larger the batch size, the better the throughput the service can achieve; make sure you allocate
enough RUs to power your workload.

Throughput
- Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each
execution of this data flow during the read operation. Minimum is 400.

Write throughput budget


- An integer that represents the RUs you want to allocate for this Data Flow write operation, out of the total
throughput allocated to the collection.

Collection action
- Determines whether to recreate the destination collection prior to writing.
- None: No action will be done to the collection.
- Recreate: The collection will get dropped and recreated.

Reference:

Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
- https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db

[CONFIRMED]

QUESTION 39
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to provide a user named User1 with the ability to insert items into container1 by using role-based
access control (RBAC). The solution must use the principle of least privilege.

Which roles should you assign to User1?

A. CosmosDB Operator only


B. DocumentDB Account Contributor and Cosmos DB Built-in Data Contributor
C. DocumentDB Account Contributor only
D. Cosmos DB Built-in Data Contributor only

Correct Answer: D
Explanation

Explanation/Reference:

- Cosmos DB Built-in Data Contributor only

Explanation:

Cosmos DB Built-in Data Contributor


- Grants your application full read-write access to the data;
- You can optionally create custom, fine-tuned roles.
- Microsoft.DocumentDB/databaseAccounts/readMetadata
- Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/*

Cosmos DB Operator
- Can provision Azure Cosmos DB accounts, databases, and containers.
- Cannot access any data or use Data Explorer.

DocumentDB Account Contributor


- Can manage Azure Cosmos DB accounts.
- Azure Cosmos DB is formerly known as DocumentDB.

Reference:
Azure role-based access control in Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/role-based-access-control

[CONFIRMED]

QUESTION 40
You have an Azure Cosmos DB Core (SQL) API account.
You configure the diagnostic settings to send all log information to a Log Analytics workspace.
You need to identify when the provisioned request units per second (RU/s) for resources within the account
were modified.

You write the following query.

AzureDiagnostics
| where Category == "ControlPlaneRequests"

What should you include in the query?

A. | where OperationName startswith "AccountUpdateStart"


B. | where OperationName startswith "SqlContainersDelete"
C. | where OperationName startswith "MongoCollectionsThroughputUpdate"
D. | where OperationName startswith "SqlContainersThroughputUpdate"

Correct Answer: D
Explanation

Explanation/Reference:

- | where OperationName startswith "SqlContainersThroughputUpdate"

Explanation:

The OperationName field in the AzureDiagnostics log contains the name of the operation that was performed.

In this case, you are interested in the operation that updates the provisioned request units per second (RU/s)
for resources within the account. This operation is called SqlContainersThroughputUpdate.

Get diagnostic logs for control plane operations:

AzureDiagnostics
| where Category == "ControlPlaneRequests"
| where OperationName startswith "SqlContainersThroughputUpdate"

This will filter the log entries to only include those where the OperationName starts with
SqlContainersThroughputUpdate, which indicates that the provisioned throughput for a SQL container was
updated.

| where OperationName startswith "AccountUpdateStart"


- This would include operations that do not update the provisioned RU/s.

| where OperationName startswith "SqlContainersDelete"


- This would include operations that delete SQL containers, not update their provisioned RU/s.
| where OperationName startswith "MongoCollectionsThroughputUpdate"
- This would include operations that update the provisioned RU/s for MongoDB collections, not SQL
containers.

Reference:

How to audit Azure Cosmos DB control plane operations


- https://learn.microsoft.com/en-us/azure/cosmos-db/audit-control-plane-logs

[GPT]

QUESTION 41
You have a database in an Azure Cosmos DB Core (SQL) API account. The database is backed up every two
hours.

You need to implement a solution that supports point-in-time restore.

What should you do first?

A. Enable Continuous Backup for the account.


B. Configure the Backup & Restore settings for the account.
C. Create a new account that has a periodic backup policy.
D. Configure the Point In Time Restore settings for the account.

Correct Answer: A
Explanation

Explanation/Reference:

- Enable Continuous Backup for the account.

Explanation:

By enabling Continuous Backup for the Azure Cosmos DB Core (SQL) API account, you ensure that the
database is continuously backed up, which is a prerequisite for supporting point-in-time restore.

When creating a new Azure Cosmos DB account, in the Backup policy tab, choose continuous mode to enable
the point-in-time restore functionality for the new account. With point-in-time restore, data is restored to a new
account; currently you can't restore to an existing account.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/provision-account-continuous-backup

[CONFIRMED]

QUESTION 42
You have an Azure Cosmos DB Core (SQL) API account used by an application named App1.

You open the Insights pane for the account and see the following chart.

Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- The HTTP 404 status code is caused by [answer choice]: requesting resources that do not exist
- There are [answer choice] successful resource creations in the accounts during the time period of the chart:
6 thousand

Explanation:
- 200 OK: Success on List, Get, Replace, Patch, Query. Returned for a successful response.
- 201 Created: Success on PUT or POST. Object created or updated successfully.
- 400 Bad Request: The override set in the x-ms-consistency-level header is stronger than the one set during
account creation. For example, if the consistency level is Session, the override can't be Strong or Bounded.
- 404 Not Found: The document is no longer a resource, that is, the document was deleted.
The HTTP 404 status code is caused by [answer choice]: requesting resources that do not exist
- 404 Not Found: Returned when a resource does not exist on the server.
- If you are managing or querying an index, check the syntax and verify the index name is specified correctly.

There are [answer choice] successful resource creations in the accounts during the time period of the chart: 6
thousand
- 201 Created: Success on PUT or POST. Object created or updated successfully.

Reference:

Review common response codes


- https://learn.microsoft.com/en-us/training/modules/monitor-responses-events-azure-cosmos-db-sql-api/2-
review-common-response-codes

[CONFIRMED]

QUESTION 43
You have a database in an Azure Cosmos DB Core (SQL) API account.

You need to create an Azure function that will access the database to retrieve records based on a variable
named accountnumber. The solution must protect against SQL injection attacks.

How should you define the command statement in the function?

A. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = 'accountnumber'"


B. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = LIKE @accountnumber"
C. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber"
D. cmd = "SELECT * FROM Persons p WHERE p.accountnumber = '" + accountnumber + "'"

Correct Answer: C
Explanation

Explanation/Reference:

- cmd = "SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber"

Explanation:

Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation.

Parameterized SQL provides robust handling and escaping of user input, and prevents accidental exposure of
data through SQL injection.

Example:

SELECT *
FROM Families f
WHERE f.lastName = @lastName AND f.address.state = @addressState
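As a minimal sketch, running such a parameterized query from the Python SDK might look like the following
(assuming the azure-cosmos package; the endpoint, key, and database/container names are illustrative
placeholders):

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("db1").get_container_client("Persons")

accountnumber = "12345"  # untrusted user input

# The value is passed as a named parameter and never concatenated into the
# query text, so it cannot change the structure of the query.
items = container.query_items(
    query="SELECT * FROM Persons p WHERE p.accountnumber = @accountnumber",
    parameters=[{"name": "@accountnumber", "value": accountnumber}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"])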
Reference:

Parameterized queries in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/sql-query-parameterized-queries

[CONFIRMED]

QUESTION 44
You plan to deploy two Azure Cosmos DB Core (SQL) API accounts that will each contain a single database.

The accounts will be configured as shown in the following table.

How should you provision the containers within each account to minimize costs?

To answer, select the appropriate options in the answer area.

Note: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- development: Serverless capacity mode


- shipments: Provisioned throughput capacity mode and autoscale throughput

Explanation:

Serverless
- In the serverless account type, you don't have to configure provisioned throughput when you create
containers in your Azure Cosmos DB account.
- For each billing period, you're billed for the number of RUs that your database operations consumed.
- You're developing, testing, prototyping, or offering your users a new application, and you don't yet
know the traffic pattern.

Provisioned throughput
- In the provisioned throughput account type, you commit to a certain amount of throughput (expressed in
RUs per second or RU/s) that is provisioned on your databases and containers.
- The cost of your database operations is then deducted from the number of RUs that are available every
second.
- For each billing period, you're billed for the amount of throughput that you provisioned.

development: Serverless capacity mode


- Azure Cosmos DB serverless best fits scenarios where you expect intermittent and unpredictable traffic
with long idle times.
- Because provisioning capacity in such situations isn't required and may be cost-prohibitive, Azure Cosmos
DB serverless should be considered in the following use cases:
- Getting started with Azure Cosmos DB
- Running applications with bursty, intermittent traffic that is hard to forecast, or a low (<10%) average-to-peak
traffic ratio
- Developing, testing, prototyping, and running in production new applications where the traffic pattern is
unknown.
- Integrating with serverless compute services like Azure Functions

shipments: Provisioned throughput capacity mode and autoscale throughput

- The use cases of autoscale include:


- Variable or unpredictable workloads: When your workloads have variable or unpredictable spikes in
usage, autoscale helps by automatically scaling up and down based on usage. Examples include retail
websites that have different traffic patterns depending on seasonality; IOT workloads that have spikes at
various times during the day; line of business applications that see peak usage a few times a month or year,
and more.
- With autoscale, you no longer need to manually provision for peak or average capacity.
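As a minimal sketch (assuming the azure-cosmos Python SDK; the endpoint, key, partition key path, and
throughput figure are illustrative placeholders), autoscale throughput for the shipments container could be
provisioned like this; a container in a serverless account is simply created without any throughput setting:

from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.get_database_client("db1")

# Autoscale: the container scales automatically between 10% of the maximum
# (here 1,000 RU/s) and the maximum itself (here 10,000 RU/s).
db.create_container_if_not_exists(
    id="shipments",
    partition_key=PartitionKey(path="/shipmentId"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=10000),
)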

Reference:

Azure Cosmos DB serverless account type


- https://docs.microsoft.com/en-us/azure/cosmos-db/serverless

Provision autoscale throughput on database or container in Azure Cosmos DB - API for NoSQL
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-provision-autoscale-throughput

[CONFIRMED]

QUESTION 45
You have an Azure Cosmos DB Core (SQL) API account named account1.

In account1, you run the following query in a container that contains 100GB of data.

SELECT *
FROM c
WHERE LOWER(c.categoryid) = "hockey"

You view the following metrics while performing the query.


For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Note: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- The query performs a cross-partition query: NO


- The query uses an index: NO
- Recreating the container with the partition key set to /categoryId will improve the performance of the query:
YES

Explanation:

The query performs a cross-partition query: NO


- Each physical partition should have its own index, but since no index is used, the query is not cross-
partition.

The query uses an index: NO


- Index utilization is 0%
- Index LookupTime is also zero.

Recreating the container with the partition key set to /categoryId will improve the performance of the query: YES
- With /categoryId as the partition key, queries that filter on the category value can be routed by the partition
key instead of scanning the entire container, improving the performance of the query.

Reference:
Query an Azure Cosmos DB container
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-query-container

[CONFIRMED]

QUESTION 46
You have an Azure Cosmos DB Core (SQL) API account that is used by 10 web apps.
You need to analyze the data stored in the account by using Apache Spark to create machine learning models.

The solution must NOT affect the performance of the web apps.

Which two actions should you perform? Each correct answer presents part of the solution.

Note: Each correct selection is worth one point.

A. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
B. Create a private endpoint connection to the account.
C. In an Azure Synapse Analytics serverless SQL pool, create a view that uses OPENROWSET and the
CosmosDB provider.
D. Enable Azure Synapse Link for the account and Analytical store on the container.
E. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.oltp as the data source.

Correct Answer: AD
Explanation

Explanation/Reference:

- In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
- Enable Azure Synapse Link for the account and Analytical store on the container.

Explanation:

In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
- By creating a table in an Apache Spark pool in Azure Synapse that uses the cosmos.olap data source, you
can efficiently access and analyze the data stored in your Azure Cosmos DB account.
- This approach leverages the optimized capabilities of Apache Spark and allows you to perform machine
learning tasks on the data.

Enable Azure Synapse Link for the account and Analytical store on the container.
- Enabling Azure Synapse Link for your Azure Cosmos DB account and enabling the Analytical store on the
container provides seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
- It allows you to load and analyze data from Azure Cosmos DB using Apache Spark in Azure Synapse
Analytics without impacting the performance of your web apps.

Read from Azure Cosmos DB

# cosmos.olap reads from the analytical store, so queries never consume
# request units from the transactional store used by the web apps.
productsDataFrame = spark.read.format("cosmos.olap")\
    .option("spark.synapse.linkedService", "cosmicworks_serv")\
    .option("spark.cosmos.container", "products")\
    .load()

Write to Azure Cosmos DB

# cosmos.oltp targets the transactional store, which is required for writes
# (the analytical store is read-only from Spark).
productsDataFrame.write.format("cosmos.oltp")\
    .option("spark.synapse.linkedService", "cosmicworks_serv")\
    .option("spark.cosmos.container", "products")\
    .mode('append')\
    .save()

------------

1. Navigate to the Data hub.

2. Select the Linked tab (1), expand the Azure Cosmos DB group (if you don't see this, select the Refresh
button above), then expand the WoodgroveCosmosDb account (2). Right-click on the transactions container
(3), select New notebook (4), then select Load to DataFrame (5).

3. In the generated code within Cell 1 (3), notice that the spark.read format is set to cosmos.olap. This instructs
Synapse Link to use the container's analytical store. If we wanted to connect to the transactional store, like to
read from the change feed or write to the container, we'd use cosmos.oltp instead.
Reference:

https://github.com/microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics/blob/main/Handson%20lab/
HOL%20step-by%20step%20-%20Cosmos%20DB%20real-time%20advanced%20analytics.md

[NOT-CONFIRMED]

QUESTION 47
You have an Azure Cosmos DB Core (SQL) account that has a single write region in West Europe.

You run the following Azure CLI script.

az cosmosdb update -n $accountName -g $resourceGroupName \
    --locations regionName='West Europe' failoverPriority=0 isZoneRedundant=False \
    --locations regionName='North Europe' failoverPriority=1 isZoneRedundant=False

az cosmosdb failover-priority-change -n $accountName -g $resourceGroupName \
    --failover-policies 'North Europe'=0 'West Europe'=1

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- After running the script, there will be an instance of Azure Cosmos DB in North Europe that is writable: YES
- After running the script, the Azure Cosmos DB instance in West Europe will be writable: NO
- The cost of the Azure Cosmos DB account is unaffected by running the script: NO

Explanation:

Any change to a region with failoverPriority=0 triggers a manual failover and can only be done to an account
configured for manual failover.

failoverPriority
- All failoverPriority values must be unique.
- There must be one failoverPriority value zero (0) specified.
- All remaining failoverPriority values can be any positive integer; they don't have to be contiguous or
written in any specific order, e.g., eastus=0 westus=1.

After running the script, there will be an instance of Azure Cosmos DB in North Europe that is writable: YES
- There is a change of the priority zero, effectively changing the write region.
- The second command is running a manual failover.
- Triggering a manual failover by changing priority zero causes the write and read regions to be swapped.

After running the script, the Azure Cosmos DB instance in West Europe will be writable: NO
- Triggering a manual failover by changing priority zero causes write and read regions to be swapped.
The cost of the Azure Cosmos DB account is unaffected by running the script: NO
- Each region is billed additionally.
- When evaluating the cost of a solution that replicates across multiple regions, you should calculate the cost
as a multiple of the single-region cost relative to the number of replicas.
- The core formula is RU/s x # of regions.

Evaluate the cost of distributing data globally


- https://learn.microsoft.com/en-us/training/modules/configure-replication-manage-failovers-azure-cosmos-
db/4-evaluate-cost-distributing-data-globally

[CONFIRMED]

QUESTION 48
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account.
You are designing an application that will use db1.
In db1, you are creating a new container named coll1 that will store online orders.

The following is a sample of a document that will be stored in coll1.

{
"customerId" : "bba6fe24-6d97-4935-8d58-36baa4b8a0e1",
"orderId" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
"orderDate" : "2021-09-29",
"orderDetails" : [ ]
}

The application will have the following characteristics:


- New orders will be created frequently by different customers.
- Customers will often view their past order history.

You need to select the partition key value for coll1 to support the application. The solution must minimize costs.

To what should you set the partition key?

A. orderId
B. customerId
C. orderDate
D. id

Correct Answer: B
Explanation

Explanation/Reference:

- customerId

All orders from one customer will be stored on the same partition which will be beneficial for reads.

If most of your workload's requests are queries and most of your queries have an equality filter on the same
property, this property can be a good partition key choice.

For example, if you frequently run a query that filters on UserID, then selecting UserID as the partition key
would reduce the number of cross-partition queries.
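As a minimal sketch (assuming the azure-cosmos Python SDK; the endpoint, key, and database name are
illustrative placeholders), creating coll1 with /customerId as the partition key and running the order-history
query as a single-partition read might look like this:

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
db = client.get_database_client("db1")

coll1 = db.create_container_if_not_exists(
    id="coll1",
    partition_key=PartitionKey(path="/customerId"),
)

# Supplying the partition key value keeps the query on a single logical
# partition, so viewing a customer's order history stays cheap.
orders = coll1.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid ORDER BY c.orderDate DESC",
    parameters=[{"name": "@cid", "value": "bba6fe24-6d97-4935-8d58-36baa4b8a0e1"}],
    partition_key="bba6fe24-6d97-4935-8d58-36baa4b8a0e1",
)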

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview

[CONFIRMED]

QUESTION 49
You have a container in an Azure Cosmos DB Core (SQL) API account. The container stores data about
families. Data about parents, children, and pets are stored as separate documents.
Each document contains the address of each family. Members of the same family share the same partition key
named familyId.

You need to update the address for each member of the same family that share the same address.

The solution must meet the following requirements:


- Be atomic, consistent, isolated, and durable (ACID).
- Provide the lowest latency.

What should you do?

A. Update the document of each family member separately by using a patch operation
B. Update the document of each family member separately and set the consistency level to strong
C. Update the document of each family member by using a transactional batch operation

Correct Answer: C
Explanation

Explanation/Reference:

- Update the document of each family member by using a transactional batch operation

Keyword is transactional.

ACID = Atomicity, Consistency, Isolation, Durability

Atomicity
- Guarantees that all the operations done inside a transaction are treated as a single unit, and either all of
them are committed or none of them are.

Consistency
- Makes sure that the data is always in a valid state across transactions.

Isolation
- Guarantees that no two transactions interfere with each other – many commercial systems provide multiple
isolation levels that can be used based on the application needs.

Durability
- Ensures that any change that is committed in a database will always be present.

Azure Cosmos DB supports full ACID compliant transactions with snapshot isolation for operations within the
same logical partition key.

When the ExecuteAsync method is called, all operations in the TransactionalBatch object are grouped,
serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
- The Azure Cosmos DB request size limit constrains the size of the TransactionalBatch payload to not
exceed 2 MB, and the maximum execution time is 5 seconds.
- There's a current limit of 100 operations per TransactionalBatch to ensure the performance is as expected
and within SLAs.
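As a minimal sketch (assuming azure-cosmos Python SDK 4.5 or later, which added transactional batch
support; the endpoint, key, container name, document IDs, and address fields are illustrative placeholders,
and the operation tuple shapes should be verified against the SDK documentation), updating every family
member in one ACID batch might look like this:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("db1").get_container_client("families")

family_id = "family-42"  # the shared partition key value (familyId)
new_address = {"street": "1 Main St", "city": "Seattle"}

# All operations target the same logical partition, which is the precondition
# for an ACID transactional batch: either every patch commits, or none do.
batch_operations = [
    ("patch", ("parent-1", [{"op": "set", "path": "/address", "value": new_address}])),
    ("patch", ("child-1", [{"op": "set", "path": "/address", "value": new_address}])),
    ("patch", ("pet-1", [{"op": "set", "path": "/address", "value": new_address}])),
]
container.execute_item_batch(batch_operations=batch_operations, partition_key=family_id)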

Reference:

Transactional batch operations


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/transactional-batch

[CONFIRMED]

QUESTION 50
You need to create a data store for a directory of small and medium-sized businesses (SMBs).

The data store must meet the following requirements:


- Store companies and the users employed by them. Each company will have less than 1,000 users.
- Some users have data that is greater than 2 KB.
- Associate each user to only one company.
- Provide the ability to browse by company.
- Provide the ability to browse the users by company.
- Whenever a company or user profile is selected, show a details page for the company and all the related
users.
- Be optimized for reading data.

Which design should you implement to optimize the data store for reading data?

A. In a directory container, create a document for each company and a document for each user. Use the
company ID as the partition key.
B. Create a user container that uses the user ID as the partition key and a company container that uses the
company ID as the partition key. Add the company ID to each user document.
C. In a user container, create a document for each user. Embed the company into each user document. Use
the user ID as the partition key.
D. In a company container, create a document for each company. Embed the users into company documents.
Use the company ID as the partition key.

Correct Answer: B
Explanation

Explanation/Reference:

- Create a user container that uses the user ID as the partition key and a company container that uses
the company ID as the partition key. Add the company ID to each user document.

Scenario: Each company will have less than 1,000 users. Some users have data that is greater than 2 KB.
- With up to 1,000 users per company and more than 2 KB of data per user, a single company document with
embedded users could approach 2 MB (1,000 x 2 KB).
- The maximum size of an item is 2 MB (UTF-8 length of JSON representation).
- Large document sizes up to 16 MB are supported with Azure Cosmos DB for MongoDB only.

So, a unique document for each company and embed the users into company is not good.

"Create a user container that uses the user ID as the partition key and a company container that uses the
company ID as the partition key. Add the company ID to each user document."
This design separates the users and companies into two different containers, with each container optimized for
querying their respective entities. The user container uses the user ID as the partition key, which allows for
efficient retrieval of individual users. The company container uses the company ID as the partition key, which
enables efficient retrieval of all users associated with a particular company.

Adding the company ID to each user document in the user container allows for easy and fast querying of users
by company. Whenever a company or user profile is selected, the details page for the company and all related
users can be retrieved by joining the data from the two containers using the company ID.

This design meets all the requirements specified in the prompt, including efficient reading of data, browsing by
company, and browsing users by company.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput

[CONFIRMED]

QUESTION 51
You are building an application that will store data in an Azure Cosmos DB Core (SQL) API account. The
account uses the session default consistency level. The account is used by five other applications. The account
has a single read-write region and 10 additional read regions.
Approximately 20 percent of the items stored in the account are updated hourly.
Several users will access the new application from multiple devices.
You need to ensure that the users see the same item values consistently when they browse from the different
devices.
The solution must NOT affect the other applications.

Which two actions should you perform?

Each correct answer presents part of the solution.


NOTE: Each correct selection is worth one point.

A. Set the default consistency level to eventual


B. Associate a session token to the device
C. Use implicit session management when performing read requests
D. Provide a stored session token when performing read requests
E. Associate a session token to the user account

Correct Answer: CE
Explanation

Explanation/Reference:

- Associate a session token to the user account


- Use implicit session management when performing read requests

Session consistency guarantees that all read-and-write operations made by a client in a session are consistent
with each other.

A session token is used with session consistency in the Azure Cosmos DB service.

If you do not flow the Azure Cosmos DB session token, you could end up with inconsistent read results for a
period of time.
Associating a session token to the user account ensures that the same session is used across multiple
devices.

Using implicit session management when performing read requests ensures that the session token is
automatically passed along with the request.

To ensure that users see the same item values consistently when they browse from different devices, you
should associate a session token to the user account and use implicit session management when
performing read requests.
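As a minimal sketch (assuming the azure-cosmos Python SDK; the endpoint, key, and names are illustrative
placeholders, and the header-based way of capturing the token shown here is an SDK implementation detail
that may vary by version), flowing a session token across devices might look like this:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("db1").get_container_client("items")

# Device A writes an item and captures the session token from the response.
container.upsert_item({"id": "profile-1", "pk": "user-1", "theme": "dark"})
token = container.client_connection.last_response_headers.get("x-ms-session-token")

# The token is stored against the user account rather than the device, so a
# read issued from Device B can present it and observe the earlier write.
item = container.read_item("profile-1", partition_key="user-1", session_token=token)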

Incorrect answers:

Set the default consistency level to eventual


- Eventual consistency is the weakest form of consistency because a client may read the values that are older
than the ones it read in the past.

Understanding Azure Cosmos DB Consistency Levels


- https://www.serverlessnotes.com/docs/understanding-azure-cosmos-db-consistency-levels

[CONFIRMED]

QUESTION 52
You have a container that stores data about families.

The following is a sample document.


You run the following query against the container.

SELECT
ch.name ?? ch.firstName AS childName,
f.parents,
ARRAY_LENGTH(ch.pets) ?? 0 AS numberOfPets,
ch.pets
FROM c AS f
JOIN ch IN f.children

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- Children who do not have parents defined will appear on the list: YES
- Children who have more than one pet will appear on the list multiple times: NO
- Children who do not have pets defined will appear on the list: YES

Explanation:

Children who do not have parents defined will appear on the list: YES
- The query still returns a row for these children because "c" in "FROM c AS f" represents the entire document,
not only the parents array; f.parents simply evaluates to undefined.

Children who have more than one pet will appear on the list multiple times: NO
- Only f.children is joined; ch.pets is projected as an array, so each child produces exactly one row no matter
how many pets it has.

Children who do not have pets defined will appear on the list: YES
- The ?? (coalesce) operator returns 0 for numberOfPets when ch.pets is undefined, so the row is still
produced.

[CONFIRMED]

QUESTION 53
You plan to store order data in an Azure Cosmos DB Core (SQL) API account. The data contains information
about orders and their associated items.

You need to develop a model that supports order read operations. The solution must minimize the number of
requests.

What should you do?


A. Create a database for orders and a database for order items.
B. Create a single database that contains a container for orders and a container for order items.
C. Create a single database that contains one container. Store orders and order items in separate documents
in the container.
D. Create a single database that contains one container. Create a separate document for each order and
embed the order items into the order documents.

Correct Answer: D
Explanation

Explanation/Reference:

- Create a single database that contains one container. Create a separate document for each order
and embed the order items into the order documents

Creating a single database that contains one container, with the order items embedded in each order
document, is the best approach to support order read operations while minimizing the number of requests.

Because an order and its items live in the same document, a single point read or query returns everything
needed to display an order; no second request or cross-document join is required.

By denormalizing data, your application may need to issue fewer queries and updates to complete common
operations.
Typically denormalized data models provide better read performance.

Note: In general, use embedded data models when:


- There are contained relationships between entities.
- There are one-to-few relationships between entities.
- There's embedded data that changes infrequently.
- There's embedded data that will not grow without bound.
- There's embedded data that is queried frequently together.
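As an illustrative sample (field names and values are placeholders), an order document with its items
embedded can be retrieved in a single request:

{
  "id" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
  "customerId" : "bba6fe24-6d97-4935-8d58-36baa4b8a0e1",
  "orderDate" : "2021-09-29",
  "orderItems" : [
    { "productId" : "p-100", "quantity" : 2, "unitPrice" : 9.99 },
    { "productId" : "p-200", "quantity" : 1, "unitPrice" : 24.50 }
  ]
}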

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/modeling-data

QUESTION 54
You plan to use a multi-region Azure Cosmos DB Core (SQL) API account to store data for a new application
suite.

The suite contains the applications shown in the following table.


Each application should use the weakest consistency level possible.

Which consistency level should you configure for each application? To answer, select the appropriate options in
the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- Reporting: Eventual
- Purchasing: Strong
- Fulfillment: Consistent prefix

Explanation:

Reporting: within 5 mins


- The weakest consistency level in Azure Cosmos DB is Eventual Consistency.
- With eventual consistency, there is no guarantee of immediate consistency across all replicas of the data.
However, the replicas will eventually converge to a consistent state. This means that the reporting application
may not see the total order counts updated immediately, but it will eventually see the updates within a
reasonable time frame.

Purchasing: latest committed


- If the purchasing application must guarantee that the latest committed stock quantities are always used, then
the Strong Consistency level should be used.
- Strong consistency guarantees that all reads and writes are always consistent across all replicas of the data.
This means that the purchasing application will always see the latest committed stock quantities.
- Strong is not the weakest consistency level possible; it is actually the strongest consistency level offered by
Azure Cosmos DB, but nothing weaker satisfies this requirement.

Fulfillment: never out-of-order
- If the fulfillment application must ensure that orders are read in the order in which they are placed, then the
Consistent Prefix consistency level should be used.
- Consistent Prefix guarantees that reads never see out-of-order writes.
- This means that the fulfillment application will always see the orders in the order in which they were placed.
- Consistent Prefix is not the weakest consistency level possible, but it is weaker than the Strong and Bounded
Staleness consistency levels.

Strong
- All reads and writes are guaranteed to be always consistent across all replicas of the data.
- Strong consistency is the most restrictive level and can result in higher latencies.

Bounded Staleness
- A guarantee that the data read is at most a certain number of versions or time interval behind the latest
write.
- This consistency level may have lower latency than strong consistency.

Session
- All read-and-write operations made by a client in a session are guaranteed to be consistent with each other.
- This consistency level has lower latency than bounded staleness.

Consistent Prefix
- This guarantees that reads never see out-of-order writes: a read may lag behind the latest writes, but it
always observes a consistent prefix of the write history.
- This consistency level may have lower latency than session consistency.

Eventual
- This does not guarantee that the read operation will reflect the most recent write, but it guarantees that all
writes will eventually be reflected in the read operation.
- This consistency level may have the lowest latency of all.

[GPT]

QUESTION 55
You are designing a data model for an Azure Cosmos DB Core (SQL) API account.

What are the partition limits for request units per second (RU/s) and storage?

To answer, select the appropriate options in the answer area.


NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- Maximum RU/s per physical partition: 10,000 RU/s
- Maximum storage per logical partition: 20 GB

From the service quotas documentation:
- Maximum RUs per partition (logical & physical): 10,000
- Maximum storage across all items per (logical) partition: 20 GB

Reference:

Azure Cosmos DB service quotas
- https://learn.microsoft.com/en-us/azure/cosmos-db/concepts-limits

[CONFIRMED]

QUESTION 56
You have an Azure Cosmos DB container named container1.

You need to insert an item into container1. The solution must ensure that the item is deleted automatically after
two hours.

How should you complete the item definition? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:
Explanation

Explanation/Reference:

- " ttl " : 7200 ,

Explanation:

Time to Live or TTL


- The ability to delete items automatically from a container after a certain time period.
- By default, you can set time to live at the container level and override the value on a per-item basis.
- After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these
items after the time period, since the time they were last modified.
- The maximum value for TTL is 2147483647 seconds, the approximate equivalent of 24,855 days or 68
years.
Time to Live setting on the container:
- Off: items never expire, even if a per-item ttl value is set.
- On (no default): items expire only when a per-item ttl value is set.
- On (with a default value): expired items are deleted based on the container default, unless a per-item ttl
overrides it.
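As a minimal sketch (assuming the azure-cosmos Python SDK and a container with TTL enabled; the
endpoint, key, and item fields are illustrative placeholders), inserting an item that is deleted automatically
after two hours might look like this:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container1 = client.get_database_client("db1").get_container_client("container1")

container1.create_item({
    "id": "order-1",
    "pk": "customer-1",
    "status": "pending",
    "ttl": 7200,  # seconds; honored only when TTL is enabled on the container
})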

Reference:

Time to Live (TTL) in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/time-to-live

Control Retention of items using TTL in Azure Cosmos DB


- https://www.sqlshack.com/control-retention-of-items-using-ttl-in-azure-cosmos-db/

[CONFIRMED]

QUESTION 57
You have a container in an Azure Cosmos DB Core (SQL) API account. The database has a manual
throughput of 30,000 request units per second (RU/s).

The current consumption details are shown in the following chart.

Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- Each partition supports throughput of up to [answer choice] RU/s: 5,000


- The container can scale [answer choice] RU/s without a partition split: 30,000

Explanation:

Each partition supports throughput of up to [answer choice] RU/s: 5,000


- 30,000 / 6 (partitions) = 5,000

The container can scale [answer choice] RU/s without a partition split: 30,000
- In an Azure Cosmos DB Core (SQL) API account, each physical partition supports throughput of up to 10,000
request units per second (RU/s).
- With 6 physical partitions, the container supports up to 6 x 10,000 = 60,000 RU/s in total, so it can scale by an
additional 30,000 RU/s beyond its current 30,000 RU/s without requiring a partition split.

Example
- 1 container with 5 physical partitions and 30,000 RU/s of manual provisioned throughput.
- We can increase the RU/s to 5 * 10,000 RU/s = 50,000 RU/s instantly.
- Similarly if we had a container with autoscale max RU/s of 30,000 RU/s (scales between 3000 - 30,000 RU/
s), we could increase our max RU/s to 50,000 RU/s instantly (scales between 5000 - 50,000 RU/s).

[CONFIRMED]

QUESTION 58
You have an Azure Cosmos DB database.
You plan to create a new container named container1 that will store product data and product category data
and will primarily support read requests.

You need to configure a partition key for container1.

The solution must meet the following requirements:


- Minimize the size of the partition.
- Minimize maintenance effort.

Which two characteristics should you prioritize? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. unique
B. high cardinality
C. low cardinality
D. static

Correct Answer: BD
Explanation

Explanation/Reference:

- high cardinality
- static

For all containers, your partition key should:


- Contain only String values.
- Have a high cardinality. In other words, the property should have a wide range of possible values.
- Spread request unit (RU) consumption and data storage evenly across all logical partitions.
- Have values that are no larger than 2048 bytes typically, or 101 bytes if large partition keys aren't enabled.

Static:
- Selecting a static partition key means choosing a key that does not frequently change.
- This minimizes the need for updates to the partition key value and reduces maintenance effort.
- It also helps in maintaining a consistent distribution of data across partitions over time.

Partitioning and horizontal scaling in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/partitioning-overview

[CONFIRMED]

QUESTION 59
You have an Azure Cosmos DB account named account1 that has a default consistency level of session.

You have an app named App1.

You need to ensure that the read operations of App1 can request either bounded staleness or consistent prefix
consistency.

What should you modify for each consistency level? To answer, select the appropriate options in the answer
area.
Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- For Bounded staleness: The account level options


- For Consistent prefix: The request level options

Explanation:

| Strong | Bounded Staleness | Session | Consistent Prefix | Eventual |
<-- stronger consistency ------------------------ weaker consistency -->
--- higher availability, lower latency, higher throughput ----------->>

Scenario:
- Account1 has a default consistency level of session.

Changing from session to bounded staleness would move to a stronger consistency level.

Consistency can only be relaxed (weakened) at the SDK instance or request level; consistent prefix is weaker
than session, so it can be requested per request.

To move from weaker to stronger consistency, such as bounded staleness, update the default consistency for
the Azure Cosmos DB account.

Bounded Staleness
- A guarantee that the data read is at most a certain number of versions or time interval behind the latest
write.
- This consistency level may have lower latency than strong consistency.

Consistent Prefix
- This guarantees that reads never see out-of-order writes: a read may lag behind the latest writes, but it
always observes a consistent prefix of the write history.
- This consistency level may have lower latency than session consistency.
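As a minimal sketch (assuming the azure-cosmos Python SDK, whose client constructor accepts a
consistency_level override; the key value is an illustrative placeholder), App1 could relax its reads to
consistent prefix at the SDK instance level:

from azure.cosmos import CosmosClient

# Relaxing to ConsistentPrefix is allowed here because it is weaker than the
# account default (session). Requesting bounded staleness would instead
# require raising the account-level default.
client = CosmosClient(
    "https://account1.documents.azure.com:443/",
    credential="<key>",
    consistency_level="ConsistentPrefix",
)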

[CONFIRMED]

QUESTION 60
You are developing an application that will connect to an Azure Cosmos DB Core (SQL) API account. The
account has a single read-write region and one additional read region. The regions are configured for automatic
failover.

The account has the following connection strings. (Line numbers are included for reference only.)

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- If the primary write region fails, applications that write to the database must use a different connection string
to continue to use the service: NO
- The Primary Read-only SQL Connection String and the Secondary Read-Only SQL Connection String will
connect to different regions from an application running in the EAST US Azure: NO
- Applications can choose from which region the read by setting the PreferredLocations property within their
connection properties: YES

Explanation:

If the primary write region fails, applications that write to the database must use a different connection string to
continue to use the service: NO
- The same connection string can still be used; on failover, the account endpoint stays the same and requests
are routed to the new write region.

The Primary Read-only SQL Connection String and the Secondary Read-Only SQL Connection String will
connect to different regions from an application running in the EAST US Azure: NO
- The AccountEndpoint for both are the same.

Applications can choose from which region to read by setting the PreferredLocations property within their
connection properties: YES
- The ConnectionPolicy.PreferredLocations property gets and sets the preferred locations (regions) for geo-
replicated database accounts in the Azure Cosmos DB service. For example, "East US" as the preferred
location.
- When EnableEndpointDiscovery is true and the value of this property is non-empty, the SDK uses the
locations in the collection in the order they are specified to perform operations, otherwise if the value of this
property is not specified, the SDK uses the write region as the preferred location for all operations.
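As a minimal sketch (assuming the azure-cosmos Python SDK, whose client accepts a preferred_locations
list; the endpoint and key are illustrative placeholders), an application running in East US could prefer
reading from that region:

from azure.cosmos import CosmosClient

# The SDK directs reads to the listed regions in order and falls back
# automatically on failover, so the endpoint never has to change.
client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<key>",
    preferred_locations=["East US", "West US"],
)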

Reference:

ConnectionPolicy.PreferredLocations Property
- https://docs.microsoft.com/en-us/dotnet/api/
microsoft.azure.documents.client.connectionpolicy.preferredlocations

QUESTION 61
You have a global ecommerce application that stores data in an Azure Cosmos DB Core (SQL) API account.
The account is configured for multi-region writes.

You need to create a stored procedure for a custom conflict resolution policy for a new container. In the event
of a conflict caused by a deletion, the deletion must always take priority.

Which parameter should you check in the stored procedure function?

A. isTombstone
B. conflictingItems
C. existingItem
D. incomingItem

Correct Answer: A
Explanation

Explanation/Reference:

- isTombstone

isTombstone
- Boolean indicating if the incomingItem is conflicting with a previously deleted item. When true,
existingItem is also null.

conflictingItems
- Array of the committed version of all items in the container that are conflicting with incomingItem on ID or
any other unique index properties.

existingItem
- The currently committed item. This value is non-null in an update and null for an insert or deletes.

incomingItem
- The item being inserted or updated in the commit that is generating the conflicts. Is null for delete
operations.

Reference:

Manage conflict resolution policies in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-conflicts

[CONFIRMED]
QUESTION 62
You have an Azure Cosmos DB Core (SQL) API account named account1 that supports an application named
App1. App1 uses the consistent prefix consistency level.
You configure account1 to use a dedicated gateway and integrated cache.
You need to ensure that App1 can use the integrated cache.

Which two actions should you perform for App1? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Change the consistency level of requests to session.


B. Change the account endpoint to http://account1.documents.azure.com.
C. Change the account endpoint to http://account1.sqlx.cosmos.azure.com.
D. Change the connection mode to direct.
E. Change the consistency level of requests to strong.

Correct Answer: AD
Explanation

Explanation/Reference:

- Change the consistency level of requests to session.


- Change the connection mode to direct.

Explanation:

When you connect to Azure Cosmos DB using direct mode, your application connects directly to the Azure
Cosmos DB backend.

Session consistency is the most widely used consistency level for both single region and globally distributed
Azure Cosmos DB accounts.
- With session consistency, single client sessions can read their own writes.
- Any reads with session consistency that don't have a matching session token will incur RU charges.
- This includes the first request for a given item or query when the client application is started or restarted,
unless you explicitly pass a valid session token.
- Clients outside of the session performing writes will see eventual consistency when they are using the
integrated cache.

Azure Cosmos DB dedicated gateway


- https://learn.microsoft.com/en-us/azure/cosmos-db/dedicated-gateway

Azure Cosmos DB integrated cache


- https://learn.microsoft.com/en-us/azure/cosmos-db/integrated-cache

[CONFIRMED]

QUESTION 63
You have an Azure Cosmos DB Core (SQL) API account named account1 that has a single read-write region
and one additional read region. Account1 uses the strong default consistency level.
You have an application that uses the eventual consistency level when submitting requests to account1.

How will writes from the application be handled?

A. Writes will use the eventual consistency level.


B. Azure Cosmos DB will reject writes from the application.
C. Writes will use the strong consistency level.
D. The write order is not guaranteed during replication.

Correct Answer: C
Explanation

Explanation/Reference:

- Writes will use the strong consistency level.

Overriding the default consistency level only applies to reads within the SDK client.

An account configured for strong consistency by default will still write and replicate data synchronously to every
region in the account.

When the SDK client instance or request overrides this with Session or weaker consistency, reads will be
performed using a single replica.

Reference:

Manage consistency levels in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-manage-consistency

[CONFIRMED]

QUESTION 64
You have a multi-region Azure Cosmos DB account named account1 that has a default consistency level of
strong.

You have an app named App1 that is configured to request a consistency level of session.

How will the read and write operations of App1 be handled?

To answer, select the appropriate options in the answer area.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- Write operations will: Be applied only to the replica to which App1 is connected.
- Read operations will be performed by the replica: To which App1 is connected.

The default consistency level of account1 is strong, which means that all writes must be replicated to all
regions before they are considered committed. However, App1 is configured to request a consistency level of
session, which means that the write operations will only be applied to the replica to which App1 is connected.

This means that there is a potential for data inconsistency between the different regions. However, the session
consistency level is designed to provide a good balance between consistency and performance: the write
operations of App1 will be applied only to the replica to which App1 is connected, and the read operations will
be served from that same replica.

Write Incorrect answers:

"Write and replicate data to every region asynchronously."


- This would not meet the requirements of the session consistency level.

"Write and replicate data to every region synchronously."


- This would meet the requirements of the strong consistency level, but not of the session consistency level.

Read Incorrect answers:

"The replica with the lowest estimated latency."
- This would read from the replica with the lowest estimated latency, which does not meet the requirements of
the session consistency level.

"The replica that responds first."


- This would read from the replica that responds first, which does not meet the requirements of the session
consistency level.

[GPT]
QUESTION 65
You have an Azure Cosmos DB Core (SQL) API account.
The change feed is enabled on a container named invoice.
You create an Azure function that has a trigger on the change feed.

What is received by the Azure function?

A. only the changed properties and the system-defined properties of the updated items
B. only the partition key and the changed properties of the updated items
C. all the properties of the original items and the updated items
D. all the properties of the updated items

Correct Answer: D
Explanation

Explanation/Reference:

- all the properties of the updated items

All properties are sent to the change feed.

This is testable by creating an Azure Function with a Cosmos DB trigger.
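As a minimal sketch (assuming the azure-functions Python library with the v2 programming model; the
database name, connection setting, and lease container name are illustrative placeholders, and the decorator
parameter names may differ across Cosmos DB extension versions), such a trigger might look like this:

import azure.functions as func

app = func.FunctionApp()

# The trigger reads the change feed of the invoice container; the lease
# container tracks processing progress.
@app.cosmos_db_trigger(arg_name="documents",
                       database_name="db1",
                       container_name="invoice",
                       connection="CosmosDbConnection",
                       lease_container_name="leases",
                       create_lease_container_if_not_exists=True)
def on_invoice_change(documents: func.DocumentList):
    for doc in documents:
        # Each change feed entry carries all properties of the updated item.
        print(doc.to_json())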

[CONFIRMED]

QUESTION 66
You have an Azure Synapse Analytics workspace named workspace1 that contains a serverless SQL pool.
You have an Azure Table Storage account that stores operational data.
You need to replace the Table storage account with Azure Cosmos DB Core (SQL) API.

The solution must meet the following requirements:


- Support queries from the serverless SQL pool.
- Only pay for analytical compute when running queries.
- Ensure that analytical processes do NOT affect operational processes.

Which three actions should you perform in sequence?

To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the
correct order.

Select and Place:


Correct Answer:

Explanation

Explanation/Reference:

1 - Create an Azure Cosmos DB Core (SQL) API account.


2 - Enable Azure Synapse Link.
3 - Create a database and a container that has Analytical store enabled.

Explanation:

Steps:

1. Create a new Azure account, or select an existing Azure Cosmos DB account.

2. Navigate to your Azure Cosmos DB account and open Azure Synapse Link under Integrations in the
left pane.

3. Select Enable. This process can take 1 to 5 minutes to complete.

4. Create analytical store enabled containers to automatically start replicating your operational data from the
transactional store to the analytical store

References:
Configure and use Azure Synapse Link for Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/configure-synapse-link

[CONFIRMED]

QUESTION 67
You have an Azure Cosmos DB account named account1.

You have several apps that connect to account1 by using the account's secondary key.

You then configure the apps to authenticate by using service principals.

You need to ensure that account1 will only allow apps to connect by using an Azure AD identity.

Which account property should you modify?

A. disableKeyBasedMetadataWriteAccess
B. disableLocalAuth
C. userAssignedIdentities
D. allowedOrigins

Correct Answer: B
Explanation

Explanation/Reference:

- disableLocalAuth

userAssignedIdentities
- Managed identities for Azure resources provide Azure services with an automatically managed identity in
Azure Active Directory.
- A managed identity from Azure Active Directory (Azure AD) allows your app to easily access other Azure AD-
protected resources such as Azure Key Vault.
- Creating an app with a user-assigned identity requires that you create the identity and then add its resource
identifier to your app config.

A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can only have
one system-assigned identity.

A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have
multiple user-assigned identities.

disableKeyBasedMetadataWriteAccess
- If you disable key-based metadata write access, changes to any resource can happen only from a user with
the proper Azure role and credentials, not through the account keys.

disableLocalAuth
- Enforcing role-based access control as the only authentication method.
- This disable the account's primary/secondary keys.

allowedOrigins
- The API for NoSQL in Azure Cosmos DB supports Cross-Origin Resource Sharing (CORS) by using the
“allowedOrigins” header.
- After you enable the CORS support for your Azure Cosmos DB account, only authenticated requests are
evaluated to determine whether they're allowed according to the rules you've specified.
- With CORS support, resources from a browser can directly access Azure Cosmos DB through the
JavaScript library or directly from the REST API for simple operations.

Setting the disableLocalAuth property to true ensures that local authentication, such as using the secondary
key, is disabled for the Azure Cosmos DB account.
This means that only Azure AD identities (service principals) will be able to authenticate and access the
account.

On the other hand, the userAssignedIdentities property is used to specify the Azure AD identities that are
allowed to access the account.
While it can also be used to enforce authentication through Azure AD, it requires you to assign and manage
user-assigned identities explicitly.

In the context of the given scenario, if you want to ensure that only Azure AD identities can connect to
the account, the most direct and effective option would be to disable local authentication by setting
disableLocalAuth to true.

How to use managed identities for App Service and Azure Functions
- https://learn.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=ps%2Chttp#add-a-
system-assigned-identity

Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
- https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac#disable-local-auth

Configure Cross-Origin Resource Sharing (CORS)


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-configure-cross-origin-resource-sharing

How to use managed identities to connect to Azure Cosmos DB from an Azure virtual machine
- https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-vm-
managed-identities-cosmos?tabs=azure-resource-manager

[CONFIRMED]

QUESTION 68
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account.
You have a third-party application that is exposed through a REST API.
You need to migrate data from the application to a container in db1 on a weekly basis.

What should you use?

A. Azure Migrate
B. Azure Data Factory
C. Database Migration Assistant

Correct Answer: B
Explanation

Explanation/Reference:
- Azure Data Factory

Azure Data Factory


- With Azure Data Factory, you can create a pipeline that connects to the third-party application's REST API as
a data source and configures the destination to be the container in db1.
- By scheduling the pipeline to run weekly, you can migrate the data from the application to the specified
container in db1 on a recurring basis.

Azure Migrate
- Is a service designed for migrating on-premises virtual machines and databases to Azure; it is not designed
for migrating data from a REST API.

Database Migration Assistant
- Is a tool that helps you assess and migrate databases to Azure, but it is not designed for migrating data from
a third-party application exposed through a REST API.

[CONFIRMED]

QUESTION 69
You have a container in an Azure Cosmos DB Core (SQL) API account.
Data update volumes are unpredictable.

You need to process the change feed of the container by using a web app that has multiple instances. The
change feed will be processed by using the change feed processor from the Azure Cosmos DB SDK. The
multiple instances must share the workload.

Which three actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Configure the same processor name for all the instances


B. Configure a different processor name for each instance
C. Configure a different instance name for each instance
D. Configure a different lease container configuration for each instance
E. Configure the same instance name for all the instances
F. Configure the same lease container configuration for all the instances

Correct Answer: ACF


Explanation

Explanation/Reference:

- Configure the same processor name for all the instances


- Configure a different instance name for each instance
- Configure the same lease container configuration for all the instances

Configure the same lease container configuration for all the instances:
- The monitored container: The monitored container has the data from which the change feed is generated.
Any inserts and updates to the monitored container are reflected in the change feed of the container.
- The lease container: The lease container acts as state storage and coordinates processing the change feed
across multiple workers. The lease container can be stored in the same account as the monitored container or
in a separate account.
- The compute instance: A compute instance hosts the change feed processor to listen for changes.
Depending on the platform, it might be represented by a virtual machine (VM), a kubernetes pod, an Azure App
Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the
instance name throughout this article.
- The delegate: The delegate is the code that defines what you, the developer, want to do with each batch of
changes that the change feed processor reads.

To take advantage of the compute distribution within the deployment unit, the only key requirements are that:
- All instances should have the same lease container configuration.
- All instances should have the same value for processorName.
- Each instance needs to have a different instance name (WithInstanceName).

This will ensure that all instances share the same processor and lease container configuration, while each
instance has a unique identifier to differentiate it from other instances.

Change feed processor in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/change-feed-processor?tabs=dotnet#dynamic-
scaling

[DOUBT]

QUESTION 70
You have an Azure Cosmos DB for NoSQL container.

The container contains items that have the following properties.

You need to protect the data stored in the container by using Always Encrypted. For each property, you must
use the strongest type of encryption and ensure that queries execute properly.

What is the strongest type of encryption that you can apply to each property? To answer, select the appropriate
options in the answer area.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- dateofBirth: Deterministic. (Filtering in queries)


- healthStatus: No encryption. (True/False)
- hasProvidedTaxNumber: Deterministic. (Plain text value)

The Azure Cosmos DB service never sees the plain text of properties encrypted with Always Encrypted.

However, it still supports some querying capabilities over the encrypted data, depending on the encryption type
used for a property.

Deterministic encryption
- It always generates the same encrypted value for any given plain text value and encryption configuration.
- Using deterministic encryption allows queries to perform equality filters on encrypted properties.
- However, it may allow attackers to guess information about encrypted values by examining patterns in the
encrypted property. This is especially true if there's a small set of possible encrypted values, such as True/
False, or North/South/East/West region.

Randomized encryption
- It uses a method that encrypts data in a less predictable manner.
- Randomized encryption is more secure, but prevents queries from filtering on encrypted properties.

Use client-side encryption with Always Encrypted for Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-always-encrypted

[CONFIRMED]

QUESTION 71
You need to create a database in an Azure Cosmos DB Core (SQL) API account. The database will contain
three containers named coll1, coll2, and coll3. The coll1 container will have unpredictable read and write
volumes.
The coll2 and coll3 containers will have predictable read and write volumes.

The expected maximum throughput for coll1 and coll2 is 50,000 request units per second (RU/s) each.

How should you provision the collection while minimizing costs?

A. Create a serverless account.


B. Create a provisioned throughput account. Set the throughput for coll1 to Autoscale. Set the throughput for
coll2 and coll3 to Manual.
C. Create a provisioned throughput account. Set the throughput for coll1 to Manual. Set the throughput for
coll2 and coll3 to Autoscale.

Correct Answer: B
Explanation

Explanation/Reference:

- Create a provisioned throughput account. Set the throughput for coll1 to Autoscale. Set the
throughput for coll2 and coll3 to Manual.

To minimize costs while provisioning the collection in Azure Cosmos DB Core (SQL) API account.
- Create a provisioned throughput account.
- Set the throughput for coll1 to Autoscale.
- Set the throughput for coll2 and coll3 to Manual.

[CONFIRMED]

QUESTION 72
You have an Apache Spark pool in Azure Synapse Analytics that runs the following Python code in a notebook.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- New and updated orders will be added to contoso-erp.orders: NO
- The code performs bulk data ingestion from contoso-app: NO
- Both contoso-app and contoso-erp have Analytical store enabled: NO

Explanation:

Apache Spark supports various data formats such as Parquet, Avro, CSV, and JSON.

Spark Streaming Output modes


- Append: This is the default mode. Only the new rows in the streaming DataFrame/Dataset are written to the sink.
- Complete: All the rows in the streaming DataFrame/Dataset are written to the sink every time there are updates.
- Update: Only the rows that were updated in the streaming DataFrame/Dataset are written to the sink every time there are updates.

Example of how to batch up items from a DataFrame partition (Scala):

df.foreachPartition { partition =>
  partition.grouped(1000).foreach { chunk =>
    postToServer(chunk)
  }
}

New and updated orders will be added to contoso-erp.orders: NO

- Since the append mode is enabled, updates will not be recorded; only new records are added.

The code performs bulk data ingestion from contoso-app: NO

- The code loads a streaming DataFrame from the Azure Cosmos DB change feed, which is stream ingestion rather than a bulk/batch read.

Both contoso-app and contoso-erp have Analytical store enabled: NO

- The analytical store doesn't need to be enabled for this code; a streaming DataFrame is served from the change feed.

Spark Streaming – Different Output modes explained


- https://sparkbyexamples.com/spark/spark-streaming-outputmode/#complete

Load streaming DataFrame from Azure Cosmos DB container


- https://learn.microsoft.com/en-us/azure/synapse-analytics/synapse-link/how-to-query-analytical-store-spark-
3#load-streaming-dataframe-from-azure-cosmos-db-container

[DOUBT]

QUESTION 73
You have a database in an Azure Cosmos DB Core (SQL API) account. The database contains a container
named container1. The indexing mode of container1 is set to none.

You configure Azure Cognitive Search to extract data from container1 and make the data searchable.
You discover that the Cognitive Search index is missing all the data from the Azure Cosmos DB container.

What should you do to resolve the issue?

A. Modify the index attributes in Cognitive Search to Searchable.


B. Modify the index attributes in Cognitive Search to Retrievable.
C. Modify the indexing policy of container1 to exclude the /* path.
D. Change the indexing mode of container1 to consistent.

Correct Answer: D
Explanation

Explanation/Reference:

- Change the indexing mode of container1 to consistent.

To make data searchable in Azure Cognitive Search, the indexing mode of the container in Azure Cosmos DB
Core (SQL API) account must be set to consistent or lazy.

Azure Cognitive Search needs an index.

Azure Cosmos DB supports two indexing modes:


- Consistent: This is the default configuration. The index is updated synchronously as you create, update or
delete items.
- Lazy: Lazy indexing isn't recommended and may result in missing data.
- None: Indexing is disabled on the container. This is commonly used when a container is used as a pure
key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk
operations.

Note: In Azure Cognitive Search, a search index is your searchable content, available to the search engine for
indexing, full text search, and filtered queries.
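As a sketch, switching an existing container's indexing mode back to consistent can be done by replacing the container's properties with the .NET SDK v3 (the container reference is assumed to already exist):

using Microsoft.Azure.Cosmos;

ContainerResponse response = await container.ReadContainerAsync();
ContainerProperties properties = response.Resource;

// Re-enable synchronous indexing and index every path.
properties.IndexingPolicy.IndexingMode = IndexingMode.Consistent;
properties.IndexingPolicy.Automatic = true;
properties.IndexingPolicy.IncludedPaths.Clear();
properties.IndexingPolicy.IncludedPaths.Add(new IncludedPath { Path = "/*" });

await container.ReplaceContainerAsync(properties);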

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy
https://docs.microsoft.com/en-us/azure/search/search-what-is-an-index

[CONFIRMED]

QUESTION 74
You have an Azure Cosmos DB Core (SQL) API account that frequently receives the same three queries.
You need to configure indexing to minimize RUs consumed by the queries.

Which type of index should you use for each query?

To answer, select the appropriate options in the answer area.


NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- SELECT * FROM c WHERE c.city IN ('Moncton', 'Toronto', 'Montreal'): Range


- SELECT * FROM c WHERE c.city = 'Moncton' AND c.age > 45 AND c.age < 70: Composite
- SELECT * FROM c WHERE c.city = 'Moncton' AND c.age > 45: Composite

Explanation

Azure Cosmos DB currently supports three types of indexes.

Range index is based on an ordered tree-like structure.


- SELECT * FROM container c WHERE c.property = 'value'
- SELECT * FROM c WHERE c.property IN ("value1", "value2", "value3")

Spatial indices enable efficient queries on geospatial objects such as points, lines, polygons, and
multi-polygons. These queries use the ST_DISTANCE, ST_WITHIN, and ST_INTERSECTS keywords.
- SELECT * FROM container c WHERE ST_DISTANCE(c.property, { "type": "Point", "coordinates": [0.0,
10.0] }) < 40

Composite indexes increase the efficiency when you're performing operations on multiple fields.
- SELECT * FROM container c ORDER BY c.property1, c.property2
- SELECT * FROM container c WHERE c.property1 = 'value' AND c.property2 > 'value'
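A sketch of declaring a composite index on city and age when creating the container with the .NET SDK (the container name and partition key are hypothetical; the same policy can be expressed in JSON):

using System.Collections.ObjectModel;
using Microsoft.Azure.Cosmos;

ContainerProperties props = new ContainerProperties("container1", "/city");

// Serves filters such as: WHERE c.city = 'Moncton' AND c.age > 45
props.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/city", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/age", Order = CompositePathSortOrder.Ascending }
});

await database.CreateContainerIfNotExistsAsync(props);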

Overview of indexing in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/index-overview
[CONFIRMED]

QUESTION 75
You have the following Azure Resource Manager (ARM) template.
You plan to deploy the template in incremental mode.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- When the template is deployed, an Azure Cosmos DB account named acct01 will be created if the account
does NOT exist: NO
- When the template is deployed, a container named mycontainer in mydb will be created if the container
does NOT exist: YES
- When the template is deployed, if mycontainer exists and has a partition key on name, the partition key will
change to companyid: NO

Explanation:

When the template is deployed, an Azure Cosmos DB account named acct01 will be created if the account
does NOT exist: NO
- The ARM template doesn't declare the account resource, so deploying it won't create the account.

When the template is deployed, a container named mycontainer in mydb will be created if the container does
NOT exist: YES
- The ARM template declares the container resource, so the deployment creates the container if it doesn't exist.
When the template is deployed, if mycontainer exists and has a partition key on name, the partition key will
change to companyid: NO
- You choose a partition key when you create a container in Azure Cosmos DB.
- Note that you can't change the partition key of a container.
- If you do need to change it, you need to migrate the container data to a new container with the correct key.

Create an Azure Cosmos DB and a container by using an ARM template


- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-template-json?tabs=CLI

How to choose a partition key in Azure Cosmos DB


- https://microsoft.github.io/AzureTipsAndTricks/blog/tip335.html

[CONFIRMED]

QUESTION 76
You have an Azure Cosmos DB Core (SQL) API account that has multiple write regions.
You need to receive an alert when requests that target the database exceed the available request units per
second (RU/s).

Which Azure Monitor signal should you use?

A. Data Usage.
B. Provisioned Throughput.
C. Total Request Units.
D. Document Count.

Correct Answer: C
Explanation

Explanation/Reference:

- Total Request Units

Explanation:

Total Request Units


- Measures the total number of RUs consumed by requests that target the database.

Data Usage
- Measures the amount of data stored in the database.

Provisioned Throughput
- Measures the provisioned throughput of the database.

Document Count
- Measures the number of documents in the database.

Azure Monitor signal condition:


- Metric: Get an alert when rate limiting occurs on the total request units metric.
- Activity Log: This alert triggers when a certain event occurs.
- Log (Log Analytics): This alert triggers when the value of a specified property in the results of a Log Analytics
query crosses a threshold you assign.

Alert type
- Rate limiting on request units (metric alert): Alerts if the container or a database has exceeded the
provisioned throughput limit.
- Region failed over: When a single region is failed over. This alert is helpful if you didn't enable service-
managed failover.
- Rotate keys (activity log alert): Alerts when the account keys are rotated. You can update your application
with the new keys.

Total Request Units is a signal condition.

Provisioned Throughput is an alert type.

Create alerts for Azure Cosmos DB using Azure Monitor


- https://learn.microsoft.com/en-us/azure/cosmos-db/create-alerts#create-an-alert-rule

Monitor Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/monitor?tabs=azure-diagnostics#alerts

[CONFIRMED]

QUESTION 77
You have an Azure Cosmos DB Core (SQL) API account that has multiple write regions.
You need to receive an alert when requests that target the database exceed the available request units per
second (RU/s).

Which Azure Monitor signal should you use?

A. Data Usage.
B. Metadata Requests.
C. Region Removed.
D. Region Added.

Correct Answer: B
Explanation

Explanation/Reference:

- Metadata Requests.

Explanation:

You can use Metric alerts in Azure Monitor to receive an alert when requests that target the database exceed
the available request units per second (RU/s).
DataUsage (Data Usage)
- Used to monitor total data usage at container and region, minimum granularity should be 5 minutes.
- Data Usage shows the amount of data stored in the database.
- This includes both read and write operations.
- This monitors the amount of data that is being used by the Azure Cosmos DB account, and not the number
of requests that are being made to the account.

MetadataRequests (Metadata Requests)


- Used to monitor metadata requests in scenarios where requests are being throttled.
- Metadata Requests tracks the number of requests that are made to the database for metadata, such as
getting the list of collections, database names, container names, and partition keys.
- This will allow you to monitor the number of requests that are being made to the Azure Cosmos DB account
for metadata operations and to receive an alert when the number of requests exceeds the RU/s limit.

Incorrect answers:

Region Added
- Track the addition of regions from the Azure Cosmos DB account
- When a new region is added, the total provisioned throughput for the account is increased by the amount of
RU/s that was provisioned for the existing regions.

Region Removed
- Track the removal of regions from the Azure Cosmos DB account.

-----

TotalRequestUnits (Total Request Units)


- Used to monitor Total RU usage at a minute granularity. To get average RU consumed per second, use Sum
aggregation at minute interval/level and divide by 60.
- This metric shows the total number of request units consumed by all operations in a given time interval.

DocumentCount (Document Count)


- Used to monitor document count at container and region, minimum granularity should be 5 minutes.

ProvisionedThroughput (Provisioned Throughput)


- Used to monitor provisioned throughput per container.
- Shows the amount of throughput that is provisioned for the account or container.

Reference:

Monitoring Azure Cosmos DB data reference


- https://docs.microsoft.com/en-us/azure/cosmos-db/monitor-cosmos-db-reference#metrics

[PRE-CONFIRMED]
QUESTION 78
You configure a backup for an Azure Cosmos DB Core (SQL) API account as shown in the following exhibit.

Use the drop-down menus to select the answer choice that completes each statement based on the information
presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- The current backup interval and retention policy provide protection for 12 hours.
- In case of emergency, you must create a support ticket to restore the backup.

Explanation:

By default, a full backup is taken every 4 hours (240 minutes), and only the last two backups are stored.
Both the backup interval and the retention period can be configured in the Azure portal.

If you accidentally delete your database or a container, you can file a support ticket or call the Azure support to
restore the data from automatic online backups.
Azure support is available for selected plans only such as Standard, Developer, and plans higher than
those tiers. Azure support isn't available with Basic plan.

In this scenario:
- Backup Interval is 120 minutes = RPO is 2 hours
- Backup Retention is 12 hours = retention policy

References:

Evaluate periodic backup


- https://learn.microsoft.com/en-us/training/modules/implement-backup-restore-for-azure-cosmos-db-sql-
api/2-evaluate-periodic-backup

Request data restoration from an Azure Cosmos DB backup


- https://learn.microsoft.com/en-us/azure/cosmos-db/periodic-backup-request-data-restore

[CONFIRMED]

QUESTION 79
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account named account1.

You configure container1 to use Always Encrypted by using an encryption policy as shown in the C# and the
Java exhibits.

(Click the C# tab to view the encryption policy in C#).

(Click the Java tab to see the encryption policy in Java.)


For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- You can perform a query that filters on the creditcard property: NO


- You can perform a query that filters on the SSN property: YES
- An application can be allowed to read the creditcard property while being restricted from reading the SSN
property: YES

Explanation:

The creditcard property uses Randomized encryption


- It uses a method that encrypts data in a less predictable manner.
- Randomized encryption is more secure, but prevents queries from filtering on encrypted properties.

The SSN property uses Deterministic encryption


- It always generates the same encrypted value for any given plain text value and encryption configuration.
- Using deterministic encryption allows queries to perform equality filters on encrypted properties.
- However, it may allow attackers to guess information about encrypted values by examining patterns in the
encrypted property. This is especially true if there's a small set of possible encrypted values, such as True/
False, or North/South/East/West region.

An application can be allowed to read the creditcard property while being restricted from reading the SSN
property: YES
- Reading documents when only a subset of properties can be decrypted.
- In situations where the client does not have access to all the CMK used to encrypt properties, only a subset
of properties can be decrypted when data is read back. For example, if property1 was encrypted with key1 and
property2 was encrypted with key2, a client application that only has access to key1 can still read data, but not
property2. In such a case, you must read your data through SQL queries and project away the properties that
the client can't decrypt: SELECT c.property1, c.property3 FROM c.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-always-encrypted

[NOT-CONFIRMED]

QUESTION 80
You work for Intel, a company that has clients across 4 different countries, with hundreds of thousands of
clients in countries 1 and 2 and a few thousand clients in countries 3 and 4. Requests for every country total
approximately 50,000 RU/s every hour of the day.

The application team of the company suggests using countryId as the partition key for this container.

A. This partition key will cause fan out when filtering by countryId
B. This partition key could cause storage hot partitions
C. This partition key will prevent throughput hot partitions

Correct Answer: B
Explanation

Explanation/Reference:

- This partition key could cause storage hot partitions

In the given scenario, countries 1 and 2 have hundreds of thousands of clients. Storage hot partitions are likely
for countries 1 or 2, since those partitions would hold the bulk of the stored data.

Incorrect answers:

"This partition key will cause fan out when filtering by countryId"
- The suggested partition will prevent fan out as we will be always going through one partition per countryId.

"This partition key will prevent throughput hot partitions"


- There is still a possibility of throughput hot partitions.

Partitioning and horizontal scaling in Azure Cosmos DB

- https://learn.microsoft.com/en-us/azure/cosmos-db/partitioning-overview

[WHIZLABS]

QUESTION 81
Consider the following two statements.

Which of the above statement(s) is/are true?

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- A container with provisioned throughput can't be converted to a shared database container: YES
- A shared database container can’t be converted to have a dedicated throughput: YES

A container with provisioned throughput cannot be converted to shared database container.

Conversely, a shared database container cannot be converted to have dedicated throughput.

You will need to move the data to a container with the desired throughput setting. (Container copy jobs for
NoSQL and Cassandra APIs help with this process.)

Introduction to provisioned throughput in Azure Cosmos DB


- https://learn.microsoft.com/en-us/azure/cosmos-db/set-throughput
[WHIZLABS]

QUESTION 82
Gateway mode using the dedicated gateway is one of the ways to connect to an Azure Cosmos DB account.

Which of the following statement(s) is/are true about a dedicated gateway?

A. Dedicated gateways are supported only on SQL API accounts.


B. A dedicated gateway can be provisioned in Azure Cosmos DB accounts with IP firewalls or Private
Link configured.
C. A dedicated gateway can be provisioned in an Azure Cosmos DB account in a Virtual Network (Vnet).
D. You can use RBAC (role-based access control) to authenticate data plane requests routed through the
dedicated gateway.

Correct Answer: A
Explanation

Explanation/Reference:

- Dedicated gateways are supported only on SQL API accounts.

A dedicated gateway cluster can be provisioned in API for NoSQL accounts.

A dedicated gateway cluster can have up to five nodes, and you can add or remove nodes at any time.

All dedicated gateway nodes within your account share the same connection string.

Dedicated gateway nodes are independent from one another.

Once created, you can add or remove dedicated gateway nodes, but you can't modify the size of the nodes. To
change the size of your dedicated gateway nodes you can deprovision the cluster and provision it again in a
different size. This will result in a short period of downtime unless you change the connection string in your
application to use the standard gateway during reprovisioning.
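As a sketch, connecting through a dedicated gateway simply means using the gateway's own connection string together with gateway connection mode (the account name below is hypothetical):

using Microsoft.Azure.Cosmos;

// The dedicated gateway exposes its own endpoint of the form
// https://<account-name>.sqlx.cosmos.azure.com/.
CosmosClient client = new CosmosClient(
    "AccountEndpoint=https://myaccount.sqlx.cosmos.azure.com/;AccountKey=<key>;",
    new CosmosClientOptions { ConnectionMode = ConnectionMode.Gateway });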

Azure Cosmos DB dedicated gateway - Overview


- https://learn.microsoft.com/en-us/azure/cosmos-db/dedicated-gateway#connect-to-azure-cosmos-db-using-
direct-mode

[WHIZLABS]

QUESTION 83
You need to create a .NET console app to manage data in the Azure Cosmos DB SQL API account. As a part
of the procedure, you need to create a database.

Which of the following method can you use to create a database?

A. BuildDatabaseAsync
B. CreateCosmosDatabase
C. CreateContainerAsync
D. CreateDatabaseAsync
Correct Answer: D
Explanation

Explanation/Reference:

- CreateDatabaseAsync

A database is the logical container of items that are partitioned across containers. Either
the CreateDatabaseAsync or CreateDatabaseIfNotExistsAsync method of the CosmosClient class can be
used to create a database.

CreateContainerAsync
- Is used to create a container, not a database.
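A minimal sketch with the .NET SDK (the connection string and database name are hypothetical):

using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<connection-string-from-portal>");

// CreateDatabaseAsync throws if the database already exists;
// CreateDatabaseIfNotExistsAsync is the idempotent variant.
DatabaseResponse response = await client.CreateDatabaseAsync("appdb");
Database database = response.Database;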

[WHIZLABS]

QUESTION 84
You are part of a development team and your team creates a set of aggregate metadata items that are needed
to be changed anytime, once you successfully update or create an item within your container.

Which of the following server-side programming constructs would you use for this task?

A. User-defined function
B. System-defined function
C. Pre-trigger
D. Post-trigger

Correct Answer: D
Explanation

Explanation/Reference:

- Post-trigger

A post-trigger runs its logic after the item has been successfully updated or created. At this point, you can
update the aggregate metadata items.

User-defined function
- Is used only within the context of a SQL query.

System-defined function
- System-defined functions are also called library functions, standard functions, or pre-defined functions. Their implementation is already provided by the system.

Pre-trigger
- Runs its logic before the item is created or updated, which is too early to update the aggregate metadata.

Post-trigger
- Runs its logic after the item has been successfully updated or created.
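The trigger body itself is written in JavaScript, but registering it and opting into it from the .NET SDK looks roughly like the sketch below (the container reference, trigger id, file name, and item are hypothetical; note that a trigger only runs when it is explicitly requested on an operation):

using System.Collections.Generic;
using System.IO;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Scripts;

// Register a post-trigger whose JavaScript body updates the aggregate metadata items.
await container.Scripts.CreateTriggerAsync(new TriggerProperties
{
    Id = "updateAggregateMetadata",
    TriggerType = TriggerType.Post,
    TriggerOperation = TriggerOperation.All,
    Body = File.ReadAllText("updateAggregateMetadata.js") // hypothetical JS file
});

// The post-trigger must be requested explicitly on each create/update.
var item = new { id = "1", pk = "p1" };
await container.CreateItemAsync(item, new PartitionKey(item.pk),
    new ItemRequestOptions { PostTriggers = new List<string> { "updateAggregateMetadata" } });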
How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-write-stored-procedures-triggers-udfs

[WHIZLABS]

QUESTION 85
Which of the following Azure CLI command can you use to initiate a manual failover to another region in your
Azure Cosmos DB SQL API account?

A. az cosmosdb update command to "bounce" the service


B. az cosmosdb failover-priority-change --failover-policies command to change the priority of any region
C. az cosmosdb failover-priority-change --failover-policies command ensuring that you specifically change the
priority of the region with a priority = 0

Correct Answer: C
Explanation

Explanation/Reference:

- az cosmosdb failover-priority-change --failover-policies command, ensuring that you specifically change the priority of the region with priority = 0

You need to use the az cosmosdb failover-priority-change --failover-policies command and specifically change the priority of the region with priority = 0: priority 0 identifies the write region, so reassigning it is what initiates the manual failover.

Incorrect answers:

"az cosmosdb update command to "bounce" the service"


- The given command will only change the characteristics of the service.

[WHIZLABS]

QUESTION 86
Which of the following is the most optimal method of managing referential integrity between different containers
in Azure Cosmos DB?

A. Creating an Azure Cosmos DB Function that will periodically search for changes done to your containers
and replicate those changes to the referenced containers
B. Using an Azure Function trigger for Azure Cosmos DB to leverage on the change feed processor to update
the referenced containers
C. When any changes are created by your app to one container, also ensure it duplicates the changes on the
referenced container

Correct Answer: B
Explanation

Explanation/Reference:
- Using an Azure Function trigger for Azure Cosmos DB to leverage on the change feed processor to
update the referenced containers

The change feed keeps track of all the inserts and updates on a container. Creating a trigger based on the
change feed allows you to send those changes to other containers that also require them.

Incorrect answers:

"Creating an Azure Cosmos DB Function that will periodically search for changes done to your containers and
replicate those changes to the referenced containers"
- Azure Cosmos DB functions don't modify data, and their scope is limited to the container they are defined in.

"When any changes are created by your app to one container, also ensure it duplicates the changes on the
referenced container"
- While it is possible to add this logic to your app, it adds complexity to your environment, and it won't guarantee that changes made outside of the app are replicated to the other containers.
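A minimal sketch of this pattern with the Azure Functions trigger for Azure Cosmos DB (extension v4 attribute names; the database, container, and connection-setting names are hypothetical):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SyncReferencedContainer
{
    [FunctionName("SyncReferencedContainer")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "appdb",
            containerName: "source",
            Connection = "CosmosConnection",
            LeaseContainerName = "leases",
            CreateLeaseContainerIfNotExists = true)] IReadOnlyList<dynamic> changes,
        [CosmosDB(
            databaseName: "appdb",
            containerName: "referenced",
            Connection = "CosmosConnection")] IAsyncCollector<dynamic> referencedItems,
        ILogger log)
    {
        // Every insert/update on "source" arrives here via the change feed;
        // propagate each change to the referenced container.
        foreach (var change in changes)
        {
            await referencedItems.AddAsync(change);
        }
        log.LogInformation($"Propagated {changes.Count} change(s).");
    }
}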

Manage referential integrity by using change feed


- https://learn.microsoft.com/en-us/training/modules/design-data-partitioning-strategy/3-manage-referential-
integrity-by-using-change-feed

[WHIZLABS]

QUESTION 87
Range, spatial, and composite are three different types of indexes supported by Azure Cosmos DB.

Which of these indexes are used for geospatial objects like points, polygons, LineStrings, etc?

A. Range Indexes
B. Composite Indexes
C. Spatial Indexes

Correct Answer: C
Explanation

Explanation/Reference:

- Spatial Indexes

Spatial Indexes are used for geospatial objects. Currently, it supports Points, Polygons, MultiPolygons, and
LineStrings. Spatial indexes can also be used on GeoJSON objects.

Use composite indexes when you need to enhance the efficiency of queries that execute operations on several
fields.
[WHIZLABS]

QUESTION 88
Range, spatial, and composite are three different types of indexes supported by Azure Cosmos DB.

Which of these indexes are used for performing operations on multiple fields?

A. Range Indexes
B. Composite Indexes
C. Spatial Indexes

Correct Answer: B
Explanation

Explanation/Reference:

- Composite Indexes

Azure Cosmos DB currently supports three types of indexes.

Range index is based on an ordered tree-like structure.

Spatial indices enable efficient queries on geospatial objects such as points, lines, polygons, and
multi-polygons. These queries use the ST_DISTANCE, ST_WITHIN, and ST_INTERSECTS keywords.

Composite indexes increase the efficiency when you're performing operations on multiple fields.

[WHIZLABS]

QUESTION 89
The server-side latency direct and server-side latency gateway are two metrics that can be used to view the
server-side latency of any operation in two different connection modes.

Which of the following API(s) in Azure Cosmos DB supports both of these metrics?

A. SQL
B. MongoDB
C. Cassandra
D. Gremlin
E. Table

Correct Answer: AE
Explanation

Explanation/Reference:

- SQL
- Table

SQL API supports both Server-Side Latency Direct as well as Server-Side Latency Gateway.
Table supports both Server-Side Latency Direct as well as Server-Side Latency Gateway.

MongoDB supports only Server-Side Latency Gateway.

Cassandra supports only Server-Side Latency Gateway.

Gremlin does not support Server-Side Latency Direct.

[WHIZLABS]

QUESTION 90
You are working on an app that will save the device metrics produced every minute by various IoT devices.

You decide to collect the data for two entities:


- The devices
- The device metrics produced by each device.

You were about to create two containers for these identified entities but suddenly your data engineer suggests
placing both entities in a single container, not in two different containers.

What would you do?

A. Create a document with the deviceid property and other device data, and add a property called ‘type’ having
the value ‘device’. Also, create another document for each metrics data collected using devicemetricsid
property.
B. Create a document with the deviceid property and other device data. After that embed each metrics
collection into the document with the devicemetricsid property and all the metrics data.
C. Create a document with the deviceid property and other device data, and add a property called ”type” with
the value ’device‘. Create another document for each metrics data using the devicemetricsid and deviceid
properties and add a property called ”type” with the value devicemetrics.

Correct Answer: C
Explanation

Explanation/Reference:

- Create a document with the deviceid property and other device data, and add a property called
”type” with the value ’device‘. Create another document for each metrics data using the
devicemetricsid and deviceid properties and add a property called ”type” with the value devicemetrics.

If you create two different types of documents and include the deviceid property on both entities, it is easy to reference either entity inside the same container.

Embedding each metric reading collected every minute into the device document would generate a high volume of update requests.
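A sketch of the two document shapes as C# classes (the property names follow the question; the class layout itself is illustrative):

// One container holds both entity types, discriminated by the "type" property.
public class Device
{
    public string id { get; set; }
    public string deviceid { get; set; }
    public string type { get; set; } = "device";
    // ...other device data
}

public class DeviceMetrics
{
    public string id { get; set; }
    public string devicemetricsid { get; set; }
    public string deviceid { get; set; } // links the metrics back to the device
    public string type { get; set; } = "devicemetrics";
    // ...metrics data
}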

Incorrect Answers:

"Create a document with the deviceid property and other device data, and add a property called ‘type’ having
the value ‘device’. Also, create another document for each metrics data collected using devicemetricsid
property."
- Creating two types of documents but not including the deviceid on the device metrics document would make it impossible to tell which device the collected metrics belong to.
"Create a document with the deviceid property and other device data. After that embed each metrics collection
into the document with the devicemetricsid property and all the metrics data."
- Embedding each metric reading collected every minute would generate a high volume of update requests.

[WHIZLABS]

QUESTION 91
In a shared throughput database, the containers share the throughput (Request Units per Second) allocated to
that database.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- 25
- 400
With manually provisioned throughput, you cannot provision less than 400 Request Units per second on the
database, and throughput changes in increments of 100 RU/s.

With manually provisioned throughput, there can be up to 25 (twenty-five) containers sharing a minimum of 400
Request Units per second on the database.

With autoscale provisioned throughput, there can be up to 25 (twenty-five) containers sharing an autoscale
maximum of 4,000 Request Units per second (scaling between 400 and 4,000 Request Units per second).
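A sketch of provisioning shared database throughput with the .NET SDK (the names and values are hypothetical); containers created without their own throughput share the database's RU/s:

using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<connection-string>");

// 400 RU/s shared by all containers created in this database
// (up to 25 containers can share the minimum throughput).
Database database = await client.CreateDatabaseIfNotExistsAsync("shareddb", throughput: 400);

// No per-container throughput: this container shares the database's 400 RU/s.
await database.CreateContainerIfNotExistsAsync("container1", "/pk");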

[WHIZLABS]

QUESTION 92
While working in .NET SDK v3, you need to enable multi-region writes in your app that uses Azure Cosmos DB.
WestUS2 is the region where your application would be deployed and Cosmos DB is replicated.

Which of the following attributes would you set to WestUS2 to achieve the goal?

A. ApplicationRegion
B. setCurrentLocation
C. setPreferredLocations
D. connection_policy.PreferredLocations

Correct Answer: A
Explanation

Explanation/Reference:

- ApplicationRegion

To enable multi-region writes in your application that uses Azure Cosmos DB while working in .NET SDK v3,
you would set the ApplicationRegion attribute to WestUS2.

Here’s an example of how you can do this:

CosmosClient cosmosClient = new CosmosClient(
    "<connection-string-from-portal>",
    new CosmosClientOptions
    {
        ApplicationRegion = Regions.WestUS2,
    });

setCurrentLocation
- Is used in .NET SDK v2.
- Is used to specify the region where your application is currently connected. This is not relevant to enabling
multi-region writes.

setPreferredLocations
- Is used in Async Java V2 SDK.

connection_policy.PreferredLocations
- Is used in Python SDK.
[WHIZLABS]

QUESTION 93
While chairing a team session, you are telling the team members about the Azure Cosmos DB Emulator.

Which of the following statement is not true about Azure Cosmos DB Emulator?

A. Azure Cosmos DB Emulator offers an emulated environment that would run on the local developer
workstation.
B. The emulator supports only a single fixed account and a well-known primary key. You can even regenerate
the key while using the Azure Cosmos DB Emulator.
C. The emulator doesn’t offer multi-region replication.
D. The emulator doesn’t offer various Azure Cosmos DB consistency levels as offered by the cloud service.

Correct Answer: B
Explanation

Explanation/Reference:

- The emulator supports only a single fixed account and a well-known primary key. You can even
regenerate the key while using the Azure Cosmos DB Emulator.

Scenario: statement is not true about Azure Cosmos DB Emulator

The emulator supports only a single fixed account and a well-known primary key, but you can't regenerate the
key while using the Azure Cosmos DB Emulator.

Azure Cosmos DB Emulator offers an emulated environment that would run on the local developer workstation.

The emulator doesn’t offer multi-region replication.

The emulator doesn’t offer various Azure Cosmos DB consistency levels as offered by the cloud service

https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator

[WHIZLABS]

QUESTION 94
You need to configure the consistency levels on a per-request basis.

Which of the following C# class would you use in the .NET SDK for Azure Cosmos DB SQL API?

A. CosmosClientOptions
B. CosmosConfigOptions
C. ItemRequestOptions
D. Container

Correct Answer: C
Explanation
Explanation/Reference:

- ItemRequestOptions

ItemRequestOptions
- The ItemRequestOptions class includes several properties to configure per-request options, such as the session token and consistency level.

CosmosClientOptions
- The CosmosClientOptions class configures the overall client used by the SDK across operations, not individual requests.

Container
- The Container class has methods for performing operations on a container but does not provide any way to
configure per-request options.
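A sketch of overriding the consistency level on a single read (the item id, partition key, and use of dynamic are hypothetical):

using Microsoft.Azure.Cosmos;

ItemRequestOptions options = new ItemRequestOptions
{
    // Applies to this request only; per-request consistency can only be
    // weaker than the account's default consistency level.
    ConsistencyLevel = ConsistencyLevel.Eventual
};

ItemResponse<dynamic> response =
    await container.ReadItemAsync<dynamic>("item-id", new PartitionKey("pk-value"), options);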

[WHIZLABS]

QUESTION 95
Which of the following function in Spark SQL separates the array’s elements into multiple rows with positions
and utilizes the column names ‘pos’ for position and ‘col’ for elements of the array?

A. explode()
B. posexplode()
C. preexplode()
D. Separate()

Correct Answer: B
Explanation

Explanation/Reference:

- posexplode()

posexplode()
- Is a function in Spark SQL that separates the array’s elements into multiple rows with positions, using the column name pos for the position and col for the elements of the array. For example, SELECT posexplode(array(10, 20)) returns the rows (0, 10) and (1, 20).

explode()
- Separates the array’s elements into multiple rows and uses the default column name col for the elements, without a position column.

[WHIZLABS]
QUESTION 96
Which of the following method of the ChangeFeedProcessor class is invoked to start consuming changes from
the change feed?

A. StartAsync
B. GetChangeFeedProcessorBuilder<>
C. Build

Correct Answer: A
Explanation

Explanation/Reference:

- StartAsync

StartAsync
- Is a method in the ChangeFeedProcessor class and is invoked to start consuming changes from the change
feed.

GetChangeFeedProcessorBuilder<>
- The given is a method of Container class that creates the builder to eventually build a change feed
processor.

Build
- The build method is invoked at the end of creating a change feed processor (or estimator) and isn’t a method
of the ChangeFeedProcessor class.
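A sketch of wiring these pieces together (the processor name, instance name, containers, and handler are hypothetical):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

ChangeFeedProcessor processor = monitoredContainer
    .GetChangeFeedProcessorBuilder<dynamic>("myProcessor",
        (IReadOnlyCollection<dynamic> changes, CancellationToken token) =>
        {
            Console.WriteLine($"Received {changes.Count} change(s).");
            return Task.CompletedTask;
        })
    .WithInstanceName("instance-1")
    .WithLeaseContainer(leaseContainer)
    .Build();                  // Build creates the processor...

await processor.StartAsync(); // ...and StartAsync begins consuming the change feed.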

[WHIZLABS]

QUESTION 97
While defining a custom index policy, which of the following path expression can be used to define an included
path that will include all possible properties from the root of any JSON document?

A. /[]
B. /?
C. /*
D. /()

Correct Answer: C
Explanation

Explanation/Reference:

- /*

The property path can be defined with the following additions:

- A path that leads to a scalar value (number or string) ends with /?
- The elements of an array are addressed together through the /[] notation (instead of /0, /1, and so on).
- The wildcard /* can be used to match any elements below the referenced node.
Indexing policies in Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy

[WHIZLABS]

QUESTION 98
In the Azure portal, which of the following tab inside the Insights pane demonstrates the percentage (%) of
successful requests over the total requests per hour?

A. Storage
B. Requests
C. System
D. Availability

Correct Answer: D
Explanation

Explanation/Reference:

- Availability

The availability tab demonstrates the % of successful requests out of the total requests per hour. Azure
Cosmos DB SLAs define the success rate.

Incorrect answers:

Storage
- The storage tab demonstrates the size of data and index usage over the specific time period.

Requests
- The requests tab demonstrates the total number of requests processed by operation type, by status code,
and the count of failed requests.

System
- Demonstrates the number of metadata requests served by the primary partition.

[WHIZLABS]

QUESTION 99
The Azure Cosmos DB (SQL API) connector supports various authentication types like Key authentication,
Service principal authentication, System-assigned managed identity authentication and User-assigned
managed identity authentication.

Which of the above statement(s) is/are true?

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- The service principal authentication is currently supported in the data flow: NO


- The system-assigned managed identity authentication is currently supported in the data flow: NO
- The user-assigned managed identity authentication isn’t currently supported in the data flow: YES

Azure Cosmos DB for NoSQL using Service Connector supports system-assigned managed identity, user-
assigned managed identity, secret/connection string and service principal authentication types for Azure App
Service, Azure Container Apps and Azure Spring Apps.

But all the three authentication types are not supported in the data flow.

Service principal authentication in the data flow


- Service principal authentication is not currently supported in the data flow.
- Data flow in Azure Data Factory does not support service principal authentication for Azure Cosmos DB.

System-assigned managed identity authentication in the data flow

- System-assigned managed identity authentication is not currently supported in the data flow.
- Managed identity is a feature of Azure Active Directory that allows Azure services to authenticate with other Azure services without the need for explicit credentials or secrets.

User-assigned managed identity authentication in the data flow

- Azure Cosmos DB supports user-assigned managed identity authentication, but it is not available in the data flow feature.

Currently, none of the three authentication types (service principal, system-assigned managed identity, and user-assigned managed identity) is supported in the data flow.

Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
- https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db

[WHIZLABS]

QUESTION 100
You want to disable all indexing for a container in Azure Cosmos Database.

Which of the following property of the indexing policy would help you in meeting the goal?

A. excludedPaths
B. includedPath
C. automatic
D. indexingMode

Correct Answer: D
Explanation

Explanation/Reference:

- indexingMode

None and consistent are two indexing modes supported by Azure Cosmos DB.
If you set the indexingMode property to none, it disables all indexing.

Incorrect answers:

excludedPaths
- The excludedPaths property specifies the paths that are excluded from the index. It does not disable indexing itself.

includedPath
- The includedPaths property specifies the paths that are included in the index. It does not disable indexing itself.

automatic
- Setting the automatic property to false disables automatic indexing of items, but it won't disable all indexing for the container.
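A sketch of disabling all indexing when creating a container with the .NET SDK (the names are hypothetical; indexing mode none also requires automatic indexing to be turned off):

using Microsoft.Azure.Cosmos;

ContainerProperties props = new ContainerProperties("container1", "/pk");

// Disable all indexing: the container behaves as a pure key-value store.
props.IndexingPolicy.IndexingMode = IndexingMode.None;
props.IndexingPolicy.Automatic = false; // required when the indexing mode is none

await database.CreateContainerIfNotExistsAsync(props);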

[WHIZLABS]

QUESTION 101
Your development team already has an existing Azure Cosmos DB account, database, and container. One of
the senior executives asks you to configure Azure Cosmos DB for another team with unique and different
needs for regional replication and default consistency levels.

Which of the following new sources would you create?

A. Database
B. Container
C. Account

Correct Answer: C
Explanation

Explanation/Reference:

- Account

In order to have different regional replication behavior and a different default consistency level, you must create a new account.

The replication settings and default consistency level configured on a cosmos account apply to all Cosmos
databases and containers under that account. As per the scenario, another team has unique needs for regional
replication and default consistency levels.

To support the different behavior for replication and default consistency levels, there is a need to create a new
account.

Incorrect answers:

Database
- If you create a new database, the global replication settings and default consistency level will remain the
same between both teams.

Container
- If you create a new container, the global replication settings and default consistency level will remain the
same between both teams.

[WHIZLABS]
QUESTION 102
The web development team of the company estimates the throughput needs of your app within a
5% margin of error, with no significant variance over time. While running the app in production, the team
expects the workload to be extraordinarily stable.

With both these inputs, which of the following throughput options would you consider?

A. Serverless
B. Standard
C. Autoscale
D. Hybrid

Correct Answer: B
Explanation

Explanation/Reference:

- Standard

Standard throughput is well-suited for workloads with steady traffic.

It requires a static number of RU/s to be assigned ahead of time, whereas autoscale throughput is better suited to unpredictable traffic.

Incorrect answers:

Serverless
- Serverless is a better option for workloads with widely varying traffic and low average-to-peak traffic ratios.

Autoscale
- Autoscale throughput suits best for unpredictable traffic.

Hybrid
- Hybrid is not a valid throughput option.

[WHIZLABS]

QUESTION 103
You have an application that queries an Azure Cosmos DB Core (SQL) API account.

You discover that the following two queries run frequently.

SELECT * FROM c WHERE c.name = @name ORDER BY c.name DESC, c.timestamp DESC

SELECT * FROM c WHERE c.name = @name AND c.timestamp ORDER BY c.name ASC, c.timestamp
ASC

You need to minimize the request units (RUs) consumed by reads and writes.

What should you create?


A. a composite index for (name DESC, timestamp ASC).
B. a composite index for (name ASC, timestamp ASC) and a composite index for (name DESC, timestamp
DESC).
C. a composite index for (name ASC, timestamp ASC).
D. a composite index for (name ASC, timestamp DESC).

Correct Answer: B
Explanation

Explanation/Reference:

- a composite index for (name ASC, timestamp ASC) and a composite index for (name DESC,
timestamp DESC).

Explanation:

To minimize the request units (RUs) consumed by reads and writes, you should create a composite index for
(name ASC, timestamp ASC) and a composite index for (name DESC, timestamp DESC) .

This is because the two queries you provided have different ORDER BY clauses on the same two properties:
name and timestamp, one in descending order and the other in ascending order.

Incorrect answers:

a composite index for (name DESC, timestamp ASC).


- Would only create one composite index, which would not be as efficient for either query.

a composite index for (name ASC, timestamp ASC).


- Would only create a composite index for name ASC, timestamp ASC.
- This would not be efficient for the first query, which orders the results by name DESC, timestamp DESC.

a composite index for (name ASC, timestamp DESC).


- Would only create a composite index for name ASC, timestamp DESC.
- This would not be efficient for the first query, which orders the results by name DESC, timestamp DESC.
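A sketch of declaring both composite indexes, following the answer above, with the .NET SDK (the container name and partition key are hypothetical):

using System.Collections.ObjectModel;
using Microsoft.Azure.Cosmos;

ContainerProperties props = new ContainerProperties("container1", "/name");

// Serves: ORDER BY c.name ASC, c.timestamp ASC
props.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/name", Order = CompositePathSortOrder.Ascending },
    new CompositePath { Path = "/timestamp", Order = CompositePathSortOrder.Ascending }
});

// Serves: ORDER BY c.name DESC, c.timestamp DESC
props.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
{
    new CompositePath { Path = "/name", Order = CompositePathSortOrder.Descending },
    new CompositePath { Path = "/timestamp", Order = CompositePathSortOrder.Descending }
});

await database.CreateContainerIfNotExistsAsync(props);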

[GPT]

QUESTION 104
You have an Azure Cosmos DB database named database1 that contains a container named container1. The
container1 container stores product data and has the following indexing policy.

{
"indexingMode": "consistent",
"includePaths":
[
{
"path": "/product/category/?"
},
{
"path": "/product/brand/?"
}
],
"excludedPaths":
[
{
"path": "/*"
},
{
"path": "/product/brand"
}
]
}

Which path will be indexed?

A. /product/category
B. /product/brand/tailspin
C. /product/brand
D. /product/[]/category

Correct Answer: A
Explanation

Explanation/Reference:

- /product/category

In the indexing policy you provided, the includePaths specifies that the /product/category/? and /product/brand/?
paths should be indexed.

However, the excludedPaths specifies that all paths (/*) and the /product/brand path should be excluded from
indexing.

Since excludedPaths takes precedence over includePaths, the only path that will be indexed is the /product/
category/? path.

The ? at the end of the path indicates that all immediate children of the /product/category property will be
indexed.

[CONFIRMED]
Question Set 1

QUESTION 1
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure Synapse pipeline that uses Azure Cosmos DB Core (SQL) API as the input and
Azure Blob Storage as the output.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation

An Azure Synapse pipeline is not designed for copying data from Azure Cosmos DB Core (SQL) API to Azure Blob Storage, so this solution does not make the contents of container1 available as reference data for an Azure Stream Analytics job. Stream Analytics reads reference data from Azure Blob Storage (or Azure SQL Database).

Instead: "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."

Note: the Synapse-native path for analyzing Azure Cosmos DB data is Azure Synapse Link (enable Synapse Link on the account and containers, connect the database to a Synapse workspace, and query the analytical store), but that path feeds Synapse Analytics, not Stream Analytics reference data.

Reference:
https://learn.microsoft.com/en-us/azure/cosmos-db/configure-synapse-link

[GPT]
QUESTION 2
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input
and Azure Blob Storage as the output.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
Explanation

Explanation/Reference:

- YES

Explanation

This solution allows you to copy the data from container1 to a blob storage account, which can then be used as
a reference data input for your Stream Analytics job.

Creating an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure
Blob Storage as the output is a valid option: Azure Data Factory can copy data between cloud-based and
on-premises data stores, and Stream Analytics can then use the blob as a reference data input.

You can create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and
Azure Blob Storage as the output. You can follow these steps:
1. Create a data factory.
2. Create a pipeline with a copy activity.
3. Test run the pipeline.
4. Trigger the pipeline manually.
5. Trigger the pipeline on a schedule.

Reference:

https://learn.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-portal

[GPT]

QUESTION 3
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure function that uses Azure Cosmos DB Core (SQL) API change feed as a trigger
and Azure event hub as the output.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation

An Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and Azure Event Hubs as the output produces a stream input for Stream Analytics, not a reference data input.

This solution therefore does not make the contents of container1 available as reference data for an Azure Stream Analytics job.

Instead "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."

You can create an Azure function that uses Azure Cosmos DB Core (SQL) API change feed as a trigger and
Azure event hub as the output. You can use the following steps:
1. Use the Azure Cosmos DB Trigger to listen for inserts and updates across partitions.
2. Use the Azure Functions trigger for Azure Cosmos DB (AKA CosmosDBTrigger) to dynamically distribute
work across Function instances.
3. Configure the Azure event hub as the output.

[GPT]

QUESTION 4
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure function to copy data to another Azure Cosmos DB Core (SQL) API container.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation:

Copying data to another Azure Cosmos DB Core (SQL) API container does not make it available as reference
data for Stream Analytics, since Azure Cosmos DB Core (SQL) API is not supported as a reference data
source.

This solution therefore does not make the contents of container1 available as reference data for an Azure Stream Analytics job.

Instead "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."

You can create an Azure function to copy data to another Azure Cosmos DB Core (SQL) API container. You
can use the following steps:
1. Connect Azure Functions to Azure Cosmos DB using Visual Studio Code.
2. Create your Azure Cosmos DB account.
3. Create an Azure Cosmos DB database and container.
4. Update your function app settings.

Reference:

https://learn.microsoft.com/en-us/azure/azure-functions/functions-add-output-binding-cosmos-db-vs-code

[GPT]

QUESTION 5
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.

You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.

Solution: You create an Azure function to copy data to another Azure Cosmos DB for NoSQL container.

Does this meet the goal?


A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation:

Copying data to another Azure Cosmos DB for NoSQL container does not make it available as reference data
for Stream Analytics, since Azure Cosmos DB for NoSQL is not supported as a reference data source.

This solution therefore does not make the contents of container1 available as reference data for an Azure Stream Analytics job.

Instead "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."

You can copy data from an Azure Cosmos DB Core (SQL) API container to another Azure Cosmos DB for
NoSQL container using Azure Function. You can perform offline container copy within an Azure Cosmos DB
account by using container copy jobs. You might need to copy data within your Azure Cosmos DB account if
you want to achieve any of these scenarios:
- Copy all items from one container to another.
- Change the granularity at which throughput is provisioned, from database to container and vice versa.
- Change the partition key of a container.
- Update the unique keys for a container.
- Rename a container or database.
- Adopt new features that are supported only for new containers.

[GPT]

QUESTION 6
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database in an Azure Cosmos DB for NoSQL account that is configured for multi-region writes.

You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The
solution must ensure that any conflicts are sent to the conflicts feed.

Solution: You set ConflictResolutionMode to LastWriterWins and you use the default settings for the policy.

Does this meet the goal?

A. Yes
B. No
Correct Answer: B
Explanation

Explanation/Reference:

- NO

Setting ConflictResolutionMode to LastWriterWins with the default settings for the policy uses the
system-defined timestamp property (_ts) to resolve conflicts automatically, which may not reflect your application
logic or requirements. Conflicts resolved with this policy do not show up in the conflicts feed.

Instead use "You set ConflictResolutionMode to Custom and you use the default settings for the
policy."

References:

Conflict types and resolution policies when using multiple write regions
- https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/conflict-resolution-policies.md

[GPT]

QUESTION 7
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database in an Azure Cosmos DB for NoSQL account that is configured for multi-region writes.

You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The
solution must ensure that any conflicts are sent to the conflicts feed.

Solution: You set ConflictResolutionMode to Custom and you use the default settings for the policy.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
Explanation

Explanation/Reference:

- YES

This solution allows you to use a custom stored procedure to resolve conflicts according to your application
logic. The default settings for the custom policy do not specify any ResolutionProcedure, which means that all
conflicts are sent to the conflicts feed without any automatic resolution.
Conflicts and conflict resolution policies are applicable if your Azure Cosmos DB account is configured with
multiple write regions.

For Azure Cosmos DB accounts configured with multiple write regions, update conflicts can occur when writers
concurrently update the same item in multiple regions.

Azure Cosmos DB offers a flexible policy-driven mechanism to resolve write conflicts. You can select from two
conflict resolution policies on an Azure Cosmos DB container:
- Last Write Wins (LWW): This resolution policy, by default, uses a system-defined timestamp property for the
following APIs: SQL, MongoDB, Cassandra, Gremlin and Table. Custom numerical property is available only for
API for NoSQL. If the path isn't set or it's invalid, it defaults to _ts. Conflicts resolved with this policy do not show
up in the conflict feed.
- Custom: This resolution policy is designed for application-defined semantics for reconciliation of conflicts. It is
available only for API for NoSQL accounts and can be set only at creation time. It is not possible to set a
custom resolution policy on an existing container.
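For illustration, a container whose conflicts all flow to the conflicts feed can be created with the .NET SDK roughly as follows. This is a minimal sketch: the client variable, database, container, and partition key names are assumptions, not values from the scenario.

// Assumes an existing CosmosClient named client.
Database database = client.GetDatabase("database");

ContainerProperties properties = new ContainerProperties("container", "/myPartitionKey")
{
    ConflictResolutionPolicy = new ConflictResolutionPolicy
    {
        Mode = ConflictResolutionMode.Custom
        // No ResolutionProcedure is set, so every write conflict is
        // recorded in the conflicts feed instead of being resolved
        // automatically.
    }
};

Container container = await database.CreateContainerIfNotExistsAsync(properties);

// Unresolved conflicts can later be read from the conflicts feed.
using FeedIterator<dynamic> conflictFeed = container.Conflicts.GetConflictQueryIterator<dynamic>();
while (conflictFeed.HasMoreResults)
{
    foreach (var conflict in await conflictFeed.ReadNextAsync())
    {
        // Apply application-defined reconciliation here.
    }
}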

References:

Conflict types and resolution policies when using multiple write regions
- https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/conflict-resolution-policies.md

[GPT]

QUESTION 8
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database in an Azure Cosmos DB for NoSQL account that is configured for multi-region writes.

You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The
solution must ensure that any conflicts are sent to the conflicts feed.

Solution: You set ConflictResolutionMode to Custom. You set ResolutionProcedure to a custom stored
procedure. You configure the custom stored procedure to use the conflictingItems parameter to resolve
conflicts.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation:

Setting ConflictResolutionMode to Custom with ResolutionProcedure set to a custom stored procedure causes
that stored procedure to resolve conflicts automatically. The goal, however, is to send all conflicts to the
conflicts feed: conflicts successfully resolved by the stored procedure do not show up in the conflict feed, so this
solution does not meet it.

Instead use "You set ConflictResolutionMode to Custom and you use the default settings for the
policy."

References:

Conflict types and resolution policies when using multiple write regions
- https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/conflict-resolution-policies.md

[GPT]

QUESTION 9
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.

Solution: You configure an Azure Monitor alert to trigger the function.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
Explanation

Explanation/Reference:

- YES

Explanation:

An Azure Monitor alert can be configured to trigger a function when certain conditions are met, such as when the
normalized request units per second for a container exceeds a specific value.

You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.

Note: Alerts are used to set up recurring tests to monitor the availability and responsiveness of your Azure
Cosmos DB resources. Alerts can send you a notification in the form of an email, or execute an Azure Function
when one of your metrics reaches the threshold or if a specific event is logged in the activity log.
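As a side note, the metric that such an alert evaluates, NormalizedRUConsumption, can also be inspected programmatically with the Azure.Monitor.Query client library. This is a minimal sketch, not part of the alert setup itself; the resource ID placeholders are assumptions.

using System;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Query the NormalizedRUConsumption metric for a Cosmos DB account.
var metricsClient = new MetricsQueryClient(new DefaultAzureCredential());

Response<MetricsQueryResult> result = await metricsClient.QueryResourceAsync(
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
    "/providers/Microsoft.DocumentDB/databaseAccounts/account1",
    new[] { "NormalizedRUConsumption" });

foreach (MetricResult metric in result.Value.Metrics)
{
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
    {
        foreach (MetricValue point in series.Values)
        {
            // An Azure Monitor alert fires when this value crosses the
            // configured threshold.
            Console.WriteLine($"{point.TimeStamp}: {point.Average}");
        }
    }
}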

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
[CONFIRMED]

QUESTION 10
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.

Solution: You configure the function to have an Azure CosmosDB trigger.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation:

An Azure Cosmos DB trigger fires when items in a container are created or modified; it does not monitor
metrics such as normalized RU/s consumption. Instead, use "You configure an Azure Monitor alert to trigger the
function."

You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts

[CONFIRMED]

QUESTION 11
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.

Solution: You configure an application to use the change feed processor to read the change feed and you
configure the application to trigger the function.

Does this meet the goal?


A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation:

The change feed exposes item-level inserts and updates, not throughput metrics, so an application reading the
change feed cannot detect when the normalized RU/s for a container exceeds a value. Instead, configure an
Azure Monitor alert to trigger the function.

You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts

[CONFIRMED]

QUESTION 12
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.

Solution: You configure Azure Event Grid to send events to the function by using an Event Grid trigger in the
function.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Explanation

Explanation/Reference:

- NO

Explanation:

Azure Cosmos DB does not publish metric-threshold events to Azure Event Grid, so an Event Grid trigger
cannot react to normalized RU/s consumption. Instead, configure an Azure Monitor alert to trigger the function.

You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.

Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
[CONFIRMED]
Question Set 1

QUESTION 1
You are configuring the Backup & Restore section of an Azure Cosmos DB Core (SQL) API account using the
Azure portal as shown below:

You are currently subscribed to a Basic Azure support plan.

Hot Area:
Correct Answer:

Explanation

Explanation/Reference:

- YES
- YES
- NO

Explanation:

You must have the Azure Cosmos DB Operator role assigned at the Azure subscription level to successfully
configure the Backup & Restore section in Azure Cosmos DB.

By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the
account is being provisioned supports it. Three data redundancy options are available when you are configuring
the Azure Cosmos DB Backup & Restore section:
- Geo-redundant backup storage. This copies your data asynchronously across a paired region.
- Zone-redundant backup storage. This copies your data synchronously across three Azure availability zones
(AZs) in the primary Azure region.
- Locally-redundant backup storage. This copies your data synchronously three times within a single physical
location in the primary Azure region.

With a Basic support plan, you cannot contact Azure support to restore data from automatic online backups in
the case of the accidental deletion of a database or a container. Creating a support ticket with Microsoft Azure
support, or calling Azure support, to restore data from automatic online backups requires a Standard plan, a
Developer plan, or any higher plan. Since this use-case scenario mentions that you have a Basic Azure support
plan, you would need to upgrade to at least a Standard or Developer plan to leverage this feature.

References

Configure Azure Cosmos DB account with periodic backup


- https://docs.microsoft.com/en-us/azure/cosmos-db/configure-periodic-backup-restore

[MEASUREUP]

QUESTION 2
You are an Azure Cosmos DB developer for an Independent Software Vendor (ISV). You have deployed an e-
commerce application for one of your clients that uses Azure Cosmos DB Core (SQL) API as a data source.

The client wants to move between regions where data is replicated in Azure Cosmos DB.

You need to move data from one region to another using the Azure portal.

Which two actions should you perform to achieve the desired outcome? Each correct answer presents part of
the solution.

A. Perform an automatic failover to the new region and, finally, remove the original region.
B. Pause the throughput scale-up operation in the account.
C. Perform a manual failover to the new region and, finally, remove the original region.
D. Add a new region to the account.

Correct Answer: CD
Explanation

Explanation/Reference:

- Add a new region to the account.


- Perform a manual failover to the new region and, finally, remove the original region.

Explanation:

You should add a new region to the account and perform a manual failover to the new region and, finally,
remove the original region. Since you are performing a manual move between regions in an Azure Cosmos DB
account, the desired outcome is achieved by performing two simple steps: you add the desired (new) region to
the account, and then perform a manual failover to the new region. Finally, you remove the original region,
which completes the process.
You should not pause the throughput scale-up operation in the account, since this is not required for the
desired outcome. Having said that, it is important to know that, if you are performing a failover operation to a
new region in Azure Cosmos DB while an asynchronous throughput scaling UP operation is in progress, the
throughput scale-up operation will be paused. The scale-up operation will resume when the failover has
succeeded or completed. You, as a developer, do not need to pause it explicitly.

You should not perform an automatic failover to the new region and, finally, remove the original region. You are
not performing an automatic failover in this scenario but, rather, performing a manual migration as the use-case
scenario specifies.

References

Move an Azure Cosmos DB account to another region


- https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-move-regions

[MEASUREUP]

QUESTION 3
You are an Azure Cosmos DB developer for an online retail organization. You develop an online platform that
uses Azure Cosmos DB Core (SQL) API as a data source.

The following is the sample schema of an item in the Azure Cosmos DB container:
foodGroup is the partition key for the container.

You plan to execute the following query:

SELECT * FROM c WHERE c.foodGroup = 'Pork Products' and IS_DEFINED(c.description) and IS_DEFINED
(c.manufacturerName) ORDER BY c.tags.name ASC, c.version ASC

In order to execute an ORDER BY query, you create a Composite Index directly in the Azure portal. The
indexing policy is shown below:
You need to analyze both the utilized indexed paths and the recommended indexed paths using the Azure
Cosmos DB .NET SDK. You are using the .NET SDK ver 3.26.1.

Choose the correct options from the drop-down menus.


Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- PopulateIndexMetrics
- IndexMetrics

You should choose PopulateIndexMetrics and IndexMetrics.

In Azure Cosmos DB Core (SQL) API, you can use indexing metrics to understand both the utilized indexes and
the recommended indexed paths. You can use this information to fine-tune database query performance. This
is especially important in cases where you are not sure how to modify the indexing policies on your database
container. In .NET SDK, the indexing metrics are only supported in .NET version 3.21.0 or higher. You need to
set the PopulateIndexMetrics property to true, thereby activating the indexing metrics for the given SDK version
number. However, you should note that since activating Indexing Metrics incurs an overhead to the queries, the
recommendation is to not activate it for every production query. You should only activate Indexing Metrics for
queries which you feel are performing slowly and need query performance optimization.<br><br>You should not
choose the EnableScanInQuery option. If you wished to enable scans on specific queries that could not be
served because indexing was switched off, you would set this property to True.

You should not choose the SessionToken option. If you wish to manage the session token being used for the
session consistency level for Azure Cosmos DB service for your application, you can use the SessionToken
property.

You should not choose the Diagnostics option. You should use the Diagnostics property when you wish to
capture diagnostic information for the current request to Azure Cosmos DB service.
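For reference, the two properties can be used together with the .NET SDK roughly as follows. This is a minimal sketch: the container variable is assumed, and the query is the one from the scenario.

// Request index metrics for this query only; avoid enabling this for every
// production query, since it adds overhead.
QueryRequestOptions options = new QueryRequestOptions
{
    PopulateIndexMetrics = true
};

QueryDefinition query = new QueryDefinition(
    "SELECT * FROM c WHERE c.foodGroup = 'Pork Products' " +
    "AND IS_DEFINED(c.description) AND IS_DEFINED(c.manufacturerName) " +
    "ORDER BY c.tags.name ASC, c.version ASC");

using FeedIterator<dynamic> iterator =
    container.GetItemQueryIterator<dynamic>(query, requestOptions: options);

while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();

    // IndexMetrics reports both the utilized and the recommended indexed paths.
    Console.WriteLine(page.IndexMetrics);
}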
References

Indexing policies in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy

Manage indexing policies in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-manage-indexing-policy?tabs=dotnetv2%2Cpythonv3

Indexing metrics in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/index-metrics

Optimize query performance with Azure Cosmos DB indexing metrics


- https://devblogs.microsoft.com/cosmosdb/query-performance-indexing-metrics/

QueryRequestOptions.PopulateIndexMetrics Property
- https://docs.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.queryrequestoptions.populateindexmetrics

[MEASUREUP]

QUESTION 4
You are an Azure Cosmos DB developer for an online media firm. You have developed an online blogging
platform that enables bloggers to blog about their favorite topics.

You decide to:


- Use Azure Cosmos DB Core (SQL) API as a data source for this online blogging platform.
- Store blog posts and respective blogger information in two separate Azure Cosmos DB containers: Posts
containers to store blog posts and Users containers to store user information.
- Reference blog posts to their respective bloggers via an authorId field.
- Generate an aggregate of recently published blog posts and authors.

The schema of the planned aggregate is as follows:

{
  "id": 343812121,
  "topic": "ExoticTravel",
  "topicId": "3912a-23",
  "author": {
    "userdetails": "Steve Madden",
    "photoId": "https://www.onlinedp.com/SteveMadden.jpg"
  },
  "postpubtime": "2022-02-12T12:03:00Z"
}

You need to implement an aggregation between the two containers (Posts and Users), denormalize the feed by
adding an author to its associated blog post, and upsert it in the Feed container.

What should you do?

A. Use Azure Synapse pipelines to create a data transformation ETL from the Posts and Users containers.
Update the authorId field of all posts manually.
B. Use one Azure Function with the Azure Cosmos DB Change Feed Processor (CFP) library.
C. Use Azure Data Factory (ADF) Data Flow pipelines to create a data transformation ETL from the Posts and
Users containers. Update the authorId field of all posts manually.
D. Use two Azure Functions with the Azure Cosmos DB Change Feed Processor (CFP) library.

Correct Answer: D
Explanation

Explanation/Reference:

- Use two Azure Functions with the Azure Cosmos DB Change Feed Processor (CFP) library.

You should use two Azure Functions with the Azure Cosmos DB Change Feed Processor (CFP) library.
- The first Azure Function listens to the Posts change feed.
- The second Azure Function listens to the Users change feed.

The second Azure Function ensures that the authorId field of all posts is updated. Using the Azure Cosmos DB
Change Feed is the most appropriate, cost-effective, and highly scalable solution for solving this use-case
scenario.

In this case, Azure Functions should be used for the following four reasons:
- Even though the use-case is a basic aggregation across two containers, a much more complex pipeline and
transformation could be designed by using Azure Functions, if the need arises.
- Azure Functions are completely independent of application code.
- Azure Functions can be scaled out if the need arises, by choosing an appropriate Azure Functions
Consumption Plan.
- Life-Cycle processing for the Change Feed can be easily designed, maintained, and troubleshooted using
the Change Feed Library (CFP) Azure Functions, since the CFP is resilient to user code errors. This means
that if your delegate implementation faces an unhandled exception error during execution, the thread
processing that specific batch of changes is stopped, and a new thread is created automatically by the CFP.
Thus, using CFP via Azure Functions builds availability, resiliency, maintainability, and extensibility into the
solution.
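For illustration, the first of the two functions (the one listening to the Posts change feed) could look roughly like this, using the Azure Functions Cosmos DB trigger, which is built on the CFP library. This is a minimal sketch using the v4 extension attribute names; the database name and connection setting are assumptions, and the author lookup is elided.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json.Linq;

public static class PostsToFeed
{
    [FunctionName("PostsToFeed")]
    public static async Task Run(
        [CosmosDBTrigger(
            databaseName: "blogdb",              // assumed database name
            containerName: "Posts",
            Connection = "CosmosConnection",     // assumed app setting
            LeaseContainerName = "leases",
            CreateLeaseContainerIfNotExists = true)]
        IReadOnlyList<JObject> changedPosts,
        [CosmosDB(
            databaseName: "blogdb",
            containerName: "Feed",
            Connection = "CosmosConnection")]
        IAsyncCollector<JObject> feed)
    {
        foreach (JObject post in changedPosts)
        {
            // Look up the author document from the Users container via
            // authorId, embed it as the "author" property (elided here),
            // then upsert the denormalized document into the Feed container.
            await feed.AddAsync(post);
        }
    }
}

The second function would be symmetrical: it listens to the Users change feed and re-upserts the feed documents of every post whose author changed.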

You should not use one Azure Function with the Azure Cosmos DB Change Feed Processor (CFP) library. This
is not an appropriate solution since two Azure Functions should be used for the two different activities: a
function to listen to the Posts change feed, and another to listen to the Users change feed. This
recommendation ensures that you design an Azure Function that is automatically triggered when there is a new
event in your Azure Cosmos DB change feed. Since Posts and Users are two separate containers with two
separate change feeds, you need to use two different functions to track the change feed events.

You should not use Azure Data Factory (ADF) Data Flow pipelines to create a data transformation ETL from the
Posts and Users containers and update the authorId field of all posts manually. An Azure Data Factory (ADF)
service is a data orchestration as a service feature from Microsoft Azure that allows the orchestration, copying
and replication of batch data from a source to a destination. Since we plan to use an event-triggered pipeline
activity in real-time, an ADF service is not a recommended solution for this scenario.

You should not use Azure Synapse pipelines to create a data transformation ETL from the Posts and Users
containers and update the authorId field of all posts manually. An Azure Synapse data pipeline uses ADF
connectors in the Synapse platform to build data orchestration, copying and replication of batch data from a
source to a destination. Since we plan to use an event-triggered pipeline activity in real time, a Synapse pipeline
is likewise not a recommended solution for this use-case scenario.
References

Change feed processor in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-processor

Serverless event-based architectures with Azure Cosmos DB and Azure Functions


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-functions

Denormalizing your data with Azure Functions and Cosmos DB’s change feed
- https://medium.com/@thomasweiss_io/denormalizing-your-data-with-azure-functions-and-cosmos-dbs-
change-feed-c4a8687cda88

[MEASUREUP]

QUESTION 5
You are an Azure Cosmos DB Developer for a traditional retail organization. Your organization has decided to
migrate its existing on-premises applications to Azure.

One of the applications that you have been tasked to migrate to Azure currently runs on a traditional on-
premises environment. The environment has the following three characteristics:
- The existing on-premises environment has a single replica set with a replication factor of R = 3.
- The replica set resides on a four-core server SKU.
- Your physical CPUs do not support hyperthreading.

You need to estimate the number of provisioned RU/s for the Azure Cosmos DB Core (SQL) API.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- YES
- YES
- NO

In this scenario, the recommended RU/s for Azure Cosmos DB Core (SQL) API is calculated as:
Provisioned RU/s = (600 RU/s/vCore) * (12 vCores) / 3 = 2,400 RU/s.

The recommended formula for arriving at a rough starting-point RU/s estimate from vCores (or vCPUs) in a
traditional on-premises environment to Azure Cosmos DB is: Provisioned RU/s = C * T/R, wherein:
- T: Total vCores and/or vCPUs in your existing database data-bearing replica set(s). The correct way to
arrive at this number is to examine the number of vCores or vCPUs in each data-bearing replica set of your
existing database, and then add them up to get a total.
- R: Replication factor of your existing data-bearing replica set(s).
- C: Recommended provisioned RU/s per vCore or vCPU. This value derives from the architecture of Azure
Cosmos DB:
- C = 600 RU/s/vCore for Azure Cosmos DB SQL API;
- C = 1000 RU/s/vCore for Azure Cosmos DB API for MongoDB v4.0;
- C estimates for Cassandra API, Gremlin API, or other APIs are currently not available.

In this scenario, we have a single replica set with R = 3 on four-core servers; that is to say, T = 3 * 4 cores = 12 vCores.

Assuming that 1 vCore = 1 vCPU would provide the most accurate RU/s estimate in the given scenario. Given
that the scenario mentions that the physical CPUs do not support hyperthreading, 1 vCore = 1 vCPU would be
a correct assumption.

If the replication factor of your existing data-bearing replica set is unknown, assuming that R=1 is not
recommended. In Azure Cosmos DB, if the replication factor of your existing data-bearing replica set is
unknown, you should not assume that R=1; given that if the average replication factor (R) is not explicitly
known, R = 3 is the recommended value.
References

Request Units in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/request-units

What is a Request Unit? | Azure Cosmos DB Essentials Season 2


- https://www.youtube.com/watch?v=3naCwuXhLlk&list=PLmamF3YkHLoLLGUtSoxmUkORcWaTyHlXp&index=6

Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s
- https://docs.microsoft.com/en-us/azure/cosmos-db/convert-vcore-to-request-unit

[MEASUREUP]

QUESTION 6
You are developing a new web app that uses Azure Cosmos DB Core (SQL) API as a data source. You are
using the Azure Cosmos DB Java SDK v4 to initialize the SDK as exhibited below:

ArrayList<String> preferredRegions = new ArrayList<String>();

preferredRegions.add("West US");
preferredRegions.add("West US 2");
preferredRegions.add("West US 3");

client = new CosmosClientBuilder()
    .endpoint(AccountSettings.HOST)
    .key(AccountSettings.MASTER_KEY)
    .preferredRegions(preferredRegions)
    .contentResponseOnWriteEnabled(true)
    .consistencyLevel(ConsistencyLevel.SESSION)
    .buildAsyncClient();

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- YES
- YES
- YES

Explanation

The SDK automatically sends all writes to the current write region. In this use-case scenario, preferredRegions
is an ordered list of Azure regions, and the SDK automatically sends all writes to the current write region.

All reads are directed to the first available region in the preferredRegions list. If a request fails, the client fails
over down the list to the next region mentioned in the preferredRegions list.

ConsistencyLevel.SESSION is region-agnostic. This exhibits Session consistency, which is the default level
applied to Cosmos DB accounts. When working with Session consistency, each new write request to Azure
Cosmos DB is assigned a new SessionToken, which the CosmosClient uses internally with each read/query
request to ensure that the set consistency level is maintained.

Consistency levels in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels

How to configure multi-region writes in Azure Cosmos DB


- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-multi-master?tabs=api-async

[MEASUREUP]

QUESTION 7
You have recently joined an independent software vendor (ISV) as an Azure Cosmos DB developer. You are
working on a client project, and you are tasked with implementing a Cosmos DB change feed solution that uses
the change feed processor (CFP) library.

You use the change feed pull model for the implementation. You are using the FeedRange to parallelize the
processing of the change feed as shown below:

IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();


For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:

Explanation

Explanation/Reference:

- NO
- YES
- YES

Polling for future changes is not an automated process in this scenario. With the change feed pull model, you
can consume the Azure Cosmos DB change feed at your own speed. There is no prerequisite to consume the
feed at any predetermined pace. However, polling for all future changes is a manual process.

Work is automatically spread across multiple consumers when using a change feed processor (CFP). If you are
using the change feed pull model, you can use a FeedRange to parallelize the processing of the change feed. A
FeedRange represents a range of partition key values. When you obtain a list of FeedRanges for your
container, you get one FeedRange per physical partition.

In the Azure Cosmos DB change feed pull model, if you choose to use a FeedRange, you can then create a
FeedIterator to parallelize the processing of the change feed across multiple machines or threads. You can
create multiple FeedIterators and thereby process the change feed in parallel.
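A minimal sketch of this pattern with the .NET SDK follows. The document type and starting position are assumptions, and in practice each FeedRange would be handed to a separate machine or thread rather than processed in one loop.

IReadOnlyList<FeedRange> ranges = await container.GetFeedRangesAsync();

foreach (FeedRange range in ranges)
{
    // One iterator per FeedRange; ChangeFeedMode.Incremental is named
    // ChangeFeedMode.LatestVersion in newer SDK versions.
    FeedIterator<dynamic> iterator = container.GetChangeFeedIterator<dynamic>(
        ChangeFeedStartFrom.Beginning(range),
        ChangeFeedMode.Incremental);

    while (iterator.HasMoreResults)
    {
        FeedResponse<dynamic> response = await iterator.ReadNextAsync();

        if (response.StatusCode == System.Net.HttpStatusCode.NotModified)
        {
            // Caught up; polling for future changes is manual. A real
            // consumer would persist response.ContinuationToken and resume
            // later via ChangeFeedStartFrom.ContinuationToken(...).
            break;
        }

        foreach (var item in response)
        {
            // Process the changed item.
        }
    }
}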
[MEASUREUP]

QUESTION 8
You are an Azure Cosmos DB developer for a large enterprise. You have provisioned a dedicated gateway
cluster with an integrated cache for one of your applications that uses an Azure Cosmos DB Core (SQL) API as
a data source.

You predominantly use point reads as the query pattern for your application. Your Azure Cosmos DB account
uses the default Session level consistency.

You use the .NET SDK to configure the integrated cache as exhibited below.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:
Explanation

Explanation/Reference:

- YES
- NO
- YES

Explanation

Session consistency signifies that single client sessions will read their own writes. The Azure Cosmos DB Core
(SQL) API integrated cache only supports two consistency levels: session and eventual. If you configure any
other consistency level at a specific query level (or at a request level), the request will always bypass the
integrated cache, meaning that the cache is not used at all. In our use-case scenario, you use the default
Session consistency. When the integrated cache is used with the default Session consistency, a single client
session will always read its own writes. Any clients situated outside of the session performing writes will see
eventual consistency.

MaxIntegratedCacheStaleness does not use a global Time-to-Live (TTL) to refresh the integrated cache. The
MaxIntegratedCacheStaleness refers to the maximum acceptable staleness for cached point reads and
queries, regardless of the selected consistency. This property is configurable at the request level. It does not
use any global TTL to refresh the integrated cache. Data is only evicted from the integrated cache if the cache
is full, or if a new read has been executed with a lower value than the age of the current cached entry.

The IntegratedCacheItemHitRate and IntegratedCacheQueryHitRate metrics will report zero values for the
above code. IntegratedCacheItemHitRate and IntegratedCacheQueryHitRate are two of the Azure Cosmos DB
integrated cache metrics that you can monitor if you want to dive deeper into any troubleshooting issues. The
IntegratedCacheItemHitRate provides the proportion of point reads that used the integrated cache (out of all
point reads routed through the dedicated gateway with session or eventual consistency).

The IntegratedCacheQueryHitRate provides the proportion of queries that used the integrated cache (out of all
queries routed through the dedicated gateway with session or eventual consistency). In our use-case scenario,
both of these values will be zero because the MaxIntegratedCacheStaleness TimeSpan is set to 0.

This means that none of your point reads and query read requests will ever use the integrated cache,
regardless of the consistency level. If you want to ensure that the integrated cache is used for all of your point
reads and query read requests, you should set a value for the TimeSpan property, e.g.,
TimeSpan.FromMinutes(30). If this is not configured, the default MaxIntegratedCacheStaleness is five minutes.
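For reference, the request-level staleness window is set through DedicatedGatewayRequestOptions. This is a minimal sketch for a point read; the container variable, item id, partition key, and the 30-minute window are assumptions, and the client must connect through the dedicated gateway endpoint in gateway mode.

// Allow cached results up to 30 minutes old for this point read.
ItemRequestOptions readOptions = new ItemRequestOptions
{
    ConsistencyLevel = ConsistencyLevel.Eventual,
    DedicatedGatewayRequestOptions = new DedicatedGatewayRequestOptions
    {
        MaxIntegratedCacheStaleness = TimeSpan.FromMinutes(30)
    }
};

ItemResponse<dynamic> response = await container.ReadItemAsync<dynamic>(
    id: "item-id",                              // assumed values
    partitionKey: new PartitionKey("pk-value"),
    requestOptions: readOptions);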

[MEASUREUP]
QUESTION 9
You are an Azure Cosmos DB developer at an independent software vendor. You are architecting an online
e-commerce solution that uses the Azure Cosmos DB Core (SQL) API as a data source. You are using the
Azure Cosmos DB .NET SDK v3 for development.

You have decided to use a unique key constraint for an Azure Cosmos DB container. The code you are using is
shown below:

await client.GetDatabase("database").DefineContainer(name: "container",
        partitionKeyPath: "/myPartitionKey")
    .WithUniqueKey()
    .Path("/firstName")
    .Path("/lastName")
    .Path("/mobile")
    .Attach()
    .CreateIfNotExistsAsync();

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:

Explanation
Explanation/Reference:

- NO
- YES
- NO

Explanation

/mobile is not scoped to a container's underlying physical partition layout. The /mobile unique key is scoped to a
container's underlying logical partitions.

The unique keys /firstName, /lastName, and /mobile guarantee uniqueness within each value of our partition
key, /myPartitionKey.

Creating a unique key in an Azure Cosmos DB container allows you to add an additional layer of data integrity
to the container. The essence lies in the fact that a unique key guarantees uniqueness per partition key.

When a container is leveraging a unique key policy, Request Unit (RU) charges for Azure Cosmos DB write
operations (create, update, and delete) for a given item will be higher.
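To make the logical-partition scoping concrete, here is a minimal sketch with assumed sample values: the same firstName/lastName/mobile combination is accepted in two different logical partitions, but rejected within the same one.

// Different partition key values: both creates succeed, because the
// unique key is enforced per logical partition.
await container.CreateItemAsync(new { id = "1", myPartitionKey = "A", firstName = "Jo", lastName = "Doe", mobile = "555-0100" });
await container.CreateItemAsync(new { id = "2", myPartitionKey = "B", firstName = "Jo", lastName = "Doe", mobile = "555-0100" });

// Same partition key value and same unique-key combination: this create
// fails with a CosmosException carrying HTTP status 409 (Conflict).
await container.CreateItemAsync(new { id = "3", myPartitionKey = "A", firstName = "Jo", lastName = "Doe", mobile = "555-0100" });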

[MEASUREUP]
