DP 420
Testlet 1
TESTLET OVERVIEW
The following testlet will present a Case Study followed by [count] multiple choice question(s), [count] create a
tree question(s), [count] build list and reorder question(s) and [count] drop and connect question(s).
For help on how to answer the questions, click the Instructions button on the question screen.
Overview
Existing environment
Litware has an Azure subscription that contains the resources shown in the following table.
The con-product container stores the company's product catalog data. Each document in con-product includes
a con-productvendor value.
Most queries targeting the data in con-product are in the following format.
Most queries targeting the data in the con-productVendor container are in the following format.
Current Problems
Requirements
Planned Changes
Litware plans to implement a new Azure Cosmos DB for NoSQL account named account2 that will contain a
database named iotdb. The iotdb database will contain two containers named con-iot1 and con-iot2.
Business Requirements
QUESTION 1
You are troubleshooting the current issues caused by the application updates.
Which action can address the application updates issue without affecting the functionality of the application?
Correct Answer: D
Explanation
Explanation/Reference:
Explanation:
Scenario:
- Application updates in con-product frequently cause HTTP status code 429 "Too many requests".
- You discover that the 429 status code relates to excessive request unit (RU) consumption during the
updates.
A "Request rate too large" exception, also known as error code 429, indicates that your requests against Azure
Cosmos DB are being rate limited.
When you use provisioned throughput, you set the throughput measured in request units per second (RU/s)
required for your workload. Database operations against the service such as reads, writes, and queries
consume some number of request units (RUs). Learn more about request units.
In a given second, if the operations consume more than the provisioned request units, Azure Cosmos DB will
return a 429 exception. Each second, the number of request units available to use is reset.
Before taking an action to change the RU/s, it's important to understand the root cause of rate limiting and
address the underlying issue.
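As a rough sketch of this mechanic (toy class and function names, not the service implementation), the per-second RU budget and the SDK-style retry-after behavior can be modeled like this:

```python
class TooManyRequests(Exception):
    """Stands in for an HTTP 429 response carrying a retry-after hint."""
    def __init__(self, retry_after_ms):
        self.retry_after_ms = retry_after_ms

class FakeClock:
    """Deterministic clock so the sketch runs without real sleeps."""
    def __init__(self):
        self.now = 0.0
    def monotonic(self):
        return self.now
    def sleep(self, seconds):
        self.now += seconds

class RateLimitedStore:
    """Toy container: a provisioned RU/s budget that resets every second
    and raises 429 when a request would exceed it."""
    def __init__(self, provisioned_rus, clock):
        self.provisioned_rus = provisioned_rus
        self.clock = clock
        self.window_start = clock.monotonic()
        self.consumed = 0.0

    def execute(self, request_charge):
        now = self.clock.monotonic()
        if now - self.window_start >= 1.0:       # RU budget resets each second
            self.window_start, self.consumed = now, 0.0
        if self.consumed + request_charge > self.provisioned_rus:
            raise TooManyRequests(retry_after_ms=200)  # "Request rate too large"
        self.consumed += request_charge

def execute_with_retries(store, clock, request_charge, max_retries=9):
    """What the SDKs do by default: back off for the retry-after hint and
    try again, so transient 429s never surface to the application."""
    for attempt in range(max_retries + 1):
        try:
            store.execute(request_charge)
            return attempt                      # 429s absorbed before success
        except TooManyRequests as exc:
            clock.sleep(exc.retry_after_ms / 1000)
    raise RuntimeError("still rate limited after all retries")
```

If the 429s are persistent rather than bursty, retries only mask the problem, which is why understanding the root cause matters before raising RU/s.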
Incorrect answers:
" Enable time to live for the con-product container."
- Would enable time to live for the con-product container, which would delete expired documents.
- This would not address the issue of updates not propagating automatically.
The consistency level appears to be the issue: the account now uses multi-region writes, and the default level was Session, which effectively behaves like Eventual in that case.
This may lead to out-of-order appearances and, ultimately, the item updates issue.
Strong consistency would certainly solve the problem, but Bounded staleness is less expensive and still preserves order (with a delay across regions). So the answer is one of those two.
Bounded staleness
- Is frequently chosen by globally distributed applications that expect low write latencies but require a total global order guarantee.
- Is great for applications featuring group collaboration and sharing, stock ticker, publish-subscribe/queueing
etc.
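As a rough mental model (hypothetical names, not the service implementation), bounded staleness lets a read region lag the write region by at most k operations while always preserving write order:

```python
from collections import deque

class BoundedStalenessReplica:
    """Toy model: a read region may lag the write region by at most k
    operations, but always observes writes in the original order."""
    def __init__(self, k):
        self.k = k
        self.pending = deque()   # writes not yet applied locally, in order
        self.applied = []

    def on_write(self, op):
        self.pending.append(op)
        while len(self.pending) > self.k:   # staleness bound forces catch-up
            self.applied.append(self.pending.popleft())

    def read_all(self):
        return list(self.applied)
```

The real service bounds staleness by k versions or a time interval t, whichever is exceeded first; this sketch models only the version bound.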
Reference:
Diagnose and troubleshoot Azure Cosmos DB request rate too large (429) exceptions
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/troubleshoot-request-rate-too-large
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
[GPT]
QUESTION 2
You configure multi-region writes for account1.
You need to ensure that App1 supports the new configuration for account1. The solution must meet the
business requirements and the product catalog requirements.
Correct Answer: D
Explanation
Explanation/Reference:
Community: Increase the number of request units per second (RU/s) allocated to the con-product and con-
productVendor containers
Explanation:
The connection policy of App1 needs to be modified to allow for multi-region writes. This can be done by setting
the UseMultipleWriteLocations property to true and specifying the region where the application is deployed to
the ApplicationRegion property.
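The properties named above are from the .NET SDK. For comparison, the Python SDK exposes equivalent options as CosmosClient keyword arguments; a hedged config sketch, assuming the azure-cosmos package (the endpoint, key, and region values are placeholders):

```python
from azure.cosmos import CosmosClient  # assumes the azure-cosmos package

client = CosmosClient(
    "https://account1.documents.azure.com:443/",  # placeholder endpoint
    credential="<primary-key>",                   # placeholder credential
    multiple_write_locations=True,    # counterpart of UseMultipleWriteLocations
    preferred_locations=["West US"],  # counterpart of ApplicationRegion
)
```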
Incorrect answers:
"Increase the number of request units per second (RU/s) allocated to the con-product and con-productVendor
containers."
- Would increase the number of request units per second (RU/s) allocated to the con-product and con-
productVendor containers.
- This would improve the performance of updates to the con-product container, but it would not meet the business requirement of minimizing the frequency of errors during updates.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels
[GPT]
QUESTION 3
You need to implement a solution for the planned changes and meet the product catalog requirements.
Correct Answer: C
Explanation
Explanation/Reference:
- Create a new container and migrate the product catalog data to the new container
Scenario:
- Implement a custom conflict resolution policy for the product catalog data.
- Custom:
- This resolution policy is designed for application-defined semantics for reconciliation of conflicts.
- When you set this policy on your Azure Cosmos DB container, you also need to register a merge
stored procedure.
- This procedure is automatically invoked when conflicts are detected under a database transaction on
the server.
- The system provides exactly once guarantee for the execution of a merge procedure as part of the
commitment protocol.
- Custom conflict resolution policy is available only for API for NoSQL accounts and can be set only at
creation time. It is not possible to set a custom resolution policy on an existing container.
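In the service, this reconciliation logic lives in the JavaScript merge stored procedure registered on the container. The sketch below only illustrates what custom merge semantics might look like; the field names and the "highest version wins, union the tags" rule are invented for illustration:

```python
def resolve_conflict(conflicting_versions):
    """Hypothetical custom merge semantics (illustration only): keep the
    version with the highest 'version' field, but union the 'tags' of all
    conflicting versions. In the service, equivalent logic would live in
    the registered JavaScript merge stored procedure."""
    winner = max(conflicting_versions, key=lambda doc: doc["version"]).copy()
    winner["tags"] = sorted({t for doc in conflicting_versions
                             for t in doc.get("tags", [])})
    return winner
```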
Resources:
Conflict types and resolution policies when using multiple write regions
- https://learn.microsoft.com/en-us/azure/cosmos-db/conflict-resolution-policies
[CONFIRMED]
QUESTION 4
You plan to implement con-iot1 and con-iot2.
You need to configure the default Time to Live setting for each container. The solution must meet the IoT
telemetry requirements.
What should you configure? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
con-iot2: Off
- Scenario: "Ensure that the items in con-iot2 persist regardless of the per-item time to live setting."
[CONFIRMED]
QUESTION 5
You need to select the capacity mode and scale configuration for account2 to support the planned changes and
meet the business requirements.
What should you select? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
[CONFIRMED]
QUESTION 6
You need to identify which connectivity mode to use when implementing App2. The solution must support the
planned changes and meet the business requirements.
Correct Answer: B
Explanation
Explanation/Reference:
Explanation:
Scenario:
- App2 must be limited to a single HTTPS endpoint when accessing account2.
Gateway mode
- Gateway mode is supported on all SDK platforms.
- If your application runs within a corporate network with strict firewall restrictions, gateway mode is the best
choice because it uses the standard HTTPS port and a single DNS endpoint.
- The performance tradeoff, however, is that gateway mode involves an additional network hop every time
data is read from or written to Azure Cosmos DB.
- We also recommend gateway connection mode when you run applications in environments that have a
limited number of socket connections.
Direct mode
- Direct mode supports connectivity through TCP protocol, using TLS for initial authentication and encrypting
traffic, and offers better performance because there are fewer network hops.
- The application connects directly to the backend replicas.
- Direct mode is currently only supported on .NET and Java SDK platforms.
Reference:
https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/sdk-connection-modes
[CONFIRMED]
QUESTION 7
You need to provide a solution for the Azure Functions notifications following updates to con-product.
The solution must meet the business requirements and the product catalog requirements.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
Correct Answer: AB
Explanation
Explanation/Reference:
Explanation:
Scenario:
- Use Azure Functions to send notifications about product updates to different recipients.
- Trigger the execution of two Azure functions following every update to any document in the con-product
container.
LeaseCollectionPrefix
- When set, the value is added as a prefix to the leases created in the Lease collection for this function.
- Using a prefix allows two separate Azure Functions to share the same Lease collection by using different
prefixes.
LeaseCollectionName
- The name of the collection used to store leases.
- When not set, the value leases is used.
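The effect of the prefix can be illustrated with a toy lease store (hypothetical names): two functions monitoring the same container can share one lease collection because the prefix keeps their lease documents from colliding:

```python
class LeaseStore:
    """Toy lease collection shared by two change-feed consumers."""
    def __init__(self):
        self.leases = {}

    def acquire(self, prefix, partition_range):
        # LeaseCollectionPrefix behavior: the prefix namespaces each
        # function's lease documents so they never collide.
        lease_id = f"{prefix}{partition_range}"
        self.leases[lease_id] = prefix
        return lease_id
```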
Reference:
https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-cosmosdb-v2-trigger
[NOT-CONFIRMED]
QUESTION 8
You need to recommend indexes for con-product and con-productVendor. The solution must meet the product
catalog requirements and the business requirements.
Which type of index should you recommend for each container? To answer, select the appropriate options in
the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- Note: The con-product container stores the company's product catalog data. Each document in con-product
includes a con-productvendor value. Most queries targeting the data in con-product are in the following format.
SELECT * FROM con-product p WHERE p.con-productVendor = 'name'
- Composite indexes increase the efficiency when you are performing operations on multiple fields.
- The composite index type is used for:
ORDER BY queries on multiple properties:
SELECT * FROM container c ORDER BY c.property1, c.property2
- Note: Most queries targeting the data in the con-productVendor container are in the following format
SELECT * FROM con-productVendor pv
ORDER BY pv.creditRating, pv.yearFounded
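The composite index supports exactly this multi-property sort. As an illustration, the ORDER BY is equivalent to sorting on a tuple of the two properties, and a sketch of the matching indexing-policy fragment pairs the paths in the same order (vendor values are invented):

```python
# Sketch of the composite index needed for the multi-property ORDER BY
# (path order and sort order must match the query):
composite_index_policy = {
    "compositeIndexes": [[
        {"path": "/creditRating", "order": "ascending"},
        {"path": "/yearFounded", "order": "ascending"},
    ]]
}

vendors = [
    {"name": "Contoso", "creditRating": 2, "yearFounded": 1999},
    {"name": "Fabrikam", "creditRating": 1, "yearFounded": 2005},
    {"name": "Adatum", "creditRating": 1, "yearFounded": 1987},
]

# Equivalent of:
#   SELECT * FROM con-productVendor pv ORDER BY pv.creditRating, pv.yearFounded
ordered = sorted(vendors, key=lambda pv: (pv["creditRating"], pv["yearFounded"]))
```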
Reference:
[CONFIRMED]
QUESTION 9
You need to select the partition key for con-iot1. The solution must meet the IoT telemetry requirements.
A. the timestamp
B. the humidity
C. the temperature
D. the device ID
Correct Answer: D
Explanation
Explanation/Reference:
- the device ID
Explanation:
Scenario:
- The iotdb database will contain two containers named con-iot1 and con-iot2.
- Ensure that Azure Cosmos DB costs for IoT-related processing are predictable.
The partition key is what will determine how data is routed in the various partitions by Cosmos DB and needs to
make sense in the context of your specific scenario.
The IoT Device ID is generally the "natural" partition key for IoT applications.
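A toy hash-distribution check (hypothetical hashing, not the service's actual partitioning scheme) shows why: distinct device IDs spread writes across partitions, while a timestamp value shared by a burst of writes concentrates them on a single partition:

```python
import hashlib
from collections import Counter

def partition_for(key: str, physical_partitions: int = 4) -> int:
    """Toy stand-in for hash-partitioning on the partition key value."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % physical_partitions

# Many distinct device IDs spread the telemetry across every partition...
by_device = Counter(partition_for(f"device-{i}") for i in range(1000))

# ...while a burst of writes sharing one timestamp all land on one partition.
by_timestamp = Counter(partition_for("2024-05-01T10:00:00Z") for _ in range(1000))
```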
Reference:
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db
[NOT-CONFIRMED]
Question Set 1
QUESTION 1
You have an Azure Cosmos DB Core (SQL) API account named account1 that has the
disableKeyBasedMetadataWriteAccess property enabled.
You are developing an app named App1 that will be used by a user named DevUser1 to create containers in
account1. DevUser1 has a non-privileged user account in the Azure Active Directory (Azure AD) tenant.
You need to ensure that DevUser1 can use App1 to create containers in account1.
What should you do? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
So, grant permissions to create containers by using: Role-based access control (RBAC).
Resource tokens
- Resource tokens offer access to specific containers, documents, partition keys, attachments, triggers,
stored procedures, and UDFs.
Resources:
Database.CreateContainerIfNotExistsAsync Method
- https://docs.microsoft.com/en-us/dotnet/api/
microsoft.azure.cosmos.database.createcontainerifnotexistsasync
Create and update documents with the Azure Cosmos DB for NoSQL SDK
- https://github.com/MicrosoftLearning/dp-420-cosmos-db-dev/blob/main/instructions/06-sdk-crud.md
QUESTION 2
You are developing an application that will use an Azure Cosmos DB Core (SQL) API account as a data
source.
You need to create a report that displays the top five most ordered fruits as shown in the following table.
{
"name" : "apple",
"type" : ["fruit","exotic"],
"orders" : 1000
}
Which two queries can you use to retrieve data for the report? Each correct answer presents a complete
solution.
Correct Answer: BD
Explanation
Explanation/Reference:
Explanation
ARRAY_CONTAINS
- Returns a boolean indicating whether the array contains the specified value.
- You can check for a partial or full match of an object by using a boolean expression within the function.
Incorrect Answers:
[CONFIRMED]
QUESTION 3
You have a database in an Azure Cosmos DB Core (SQL) API account.
You plan to create a container that will store employee data for 5,000 small businesses. Each business will
have up to 25 employees. Each employee item will have an emailAddress value.
You need to ensure that the emailAddress value for each employee within the same company is unique.
To what should you set the partition key and the unique key? To answer, select the appropriate options in the
answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Logical partitions are formed based on the value of a partition key that is associated with each item in a
container.
All the items in a logical partition have the same partition key value.
- After you create a container with a unique key policy, the creation of a new or an update of an existing item
resulting in a duplicate within a logical partition is prevented, as specified by the unique key constraint.
- The partition key combined with the unique key guarantees the uniqueness of an item within the scope of the
container.
- For example, consider an Azure Cosmos container with Email address as the unique key constraint and
CompanyID as the partition key. When you configure the user's email address with a unique key, each item has
a unique email address within a given CompanyID. Two items can't be created with duplicate email addresses
and with the same partition key value.
- UniqueKey will need to be set to emailAddress, since the job of a unique key is to ensure uniqueness of a
value within a logical partition.
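A toy container (hypothetical names) makes the scoping concrete: with CompanyID as the partition key and /emailAddress as the unique key, uniqueness is enforced only within a logical partition:

```python
class EmployeeContainer:
    """Toy container with partition key companyId and unique key
    /emailAddress: uniqueness is enforced per logical partition."""
    def __init__(self):
        self._seen = set()

    def create(self, item):
        constraint = (item["companyId"], item["emailAddress"])
        if constraint in self._seen:
            raise ValueError("unique key violation within logical partition")
        self._seen.add(constraint)
```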
Reference: https://docs.microsoft.com/en-us/azure/cosmos-db/unique-keys
QUESTION 4
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. The container1
container has 120 GB of data.
{
"customerId" : "5425",
"orderId" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
"orderDate" : "2019-05-03",
"orderDeatils" : [ ]
}
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
In-partition query
- When you query data from containers, if the query has a partition key filter specified, Azure Cosmos DB
automatically optimizes the query.
- It routes the query to the physical partitions corresponding to the partition key values specified in the filter.
- Example: SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
- Example: SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'
Cross-partition query
- Each physical partition has its own index.
- Therefore, when you run a cross-partition query on a container, you're effectively running one query per
physical partition.
- Azure Cosmos DB automatically aggregates results across different physical partitions.
- Example: SELECT * FROM c WHERE c.Location = 'Seattle'
- In short: a cross-partition query simply scans more than one partition.
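The routing decision can be sketched as follows (toy partition map; the real mapping of key values to physical partitions is managed by the service):

```python
def partitions_to_query(partition_map, partition_key_filter=None):
    """In-partition query: a partition key filter routes the query to one
    physical partition. Cross-partition query: no filter, so the query
    fans out to every physical partition."""
    if partition_key_filter is not None:
        return [partition_map[partition_key_filter]]
    return sorted(set(partition_map.values()))

# Toy map of partition key values to physical partitions.
pmap = {"XMS-0001": "phys-0", "XMS-0002": "phys-1", "XMS-0003": "phys-1"}
```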
[CONFIRMED]
QUESTION 5
You have an Azure Cosmos DB Core (SQL) API account that is configured for multi-region writes. The account
contains a database that has two containers named container1 and container2.
{
"customerId": 1234,
"firstName": "John",
"lastName": "Smith",
"policyYear": 2021
}
{
"gpsId": 1234,
"latitude": 38.8951,
"longitude": -77.0364
}
To answer, drag the appropriate configurations to the correct containers. Each configuration may be used once,
more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
For "highest value wins", the relevant description relates to the Last Write Wins (default) mode.
For the API for NoSQL, this may also be set to a user-defined path with a numeric type. In a conflict, the highest value at that path wins.
It is explicitly stated that Last Write Wins uses the DEFAULT (timestamp) mode, so increasing policyYear will not be considered by Last Write Wins.
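A minimal sketch of the "highest value wins" variant (the path parameter stands in for the conflict resolution path; by default LWW uses the system _ts timestamp, and a numeric path such as /policyYear must be configured explicitly):

```python
def last_write_wins(conflicting_versions, path="_ts"):
    """LWW resolver sketch: the version with the highest numeric value at
    the conflict resolution path wins. The default path is the system _ts
    timestamp; a user-defined numeric path may be configured instead."""
    return max(conflicting_versions, key=lambda doc: doc[path])
```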
Custom:
- This resolution policy is designed for application-defined semantics for reconciliation of conflicts.
- When you set this policy on your Azure Cosmos container, you also need to register a merge stored
procedure.
- This procedure is automatically invoked when conflicts are detected under a database transaction on the
server.
- The system provides exactly once guarantee for the execution of a merge procedure as part of the
commitment protocol.
- "Custom conflict resolution policy is available only for SQL API accounts and can be set only at creation
time. It is not possible to set a custom resolution policy on an existing container."
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/conflict-resolution-policies
https://docs.microsoft.com/enus/azure/cosmos-db/sql/how-to-manage-conflicts
QUESTION 6
You have an Azure Cosmos DB Core (SQL) API account named storage1 that uses provisioned throughput
capacity mode.
The storage1 account contains the databases shown in the following table.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- At a minimum, you will be billed for 4,000 RU/s per hour for db1: NO
  - db1 has four containers with 1,000 RU/s each.
- The maximum throughput that can be consumed by cn11 is 400 RU/s: NO
- To db2, you can add a new container that uses database throughput: YES
  - For db2, the maximum request units per second are 8,000 RU/s shared across eight containers (cn11 to cn18), so about 1,000 RU/s for each container. An additional container can very well be added.
  - The throughput provisioned on an Azure Cosmos DB container is exclusively reserved for that container.
  - We recommend that you configure throughput at the container granularity when you want predictable performance for the container.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/plan-manage-costs
https://azure.microsoft.com/en-us/pricing/details/cosmos-db/
QUESTION 7
You have a database named telemetry in an Azure Cosmos DB Core (SQL) API account that stores IoT data.
The database contains two containers named readings and devices.
- id
- deviceid
- timestamp
- ownerid
- measures (array)
- type
- value
- metricid
- id
- deviceid
- owner
- ownerid
- emailaddress
- name
- brand
- model
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- To return data for all devices owned by a specific emailaddress, multiple queries must be performed: YES
- To return deviceid, ownerid, timestamp, and value for a specific metricid, a join must be performed: YES
- To return deviceid, ownerid, emailaddress, and model, a join must be performed: NO
Explanation:
To return data for all devices owned by a specific emailaddress, multiple queries must be performed: YES
- Requires two requests to get the data from the two containers.
To return deviceid, ownerid, timestamp, and value for a specific metricid, a join must be performed: YES
- You need value and metricid properties from the measures array.
- You need join to get the data from Array
SELECT r.deviceid, r.ownerid, r.timestamp, m.value
FROM readings r
JOIN m IN r.measures
WHERE m.metricid = @metricid
[DOUBT]
QUESTION 8
The settings for a container in an Azure Cosmos DB Core (SQL) API account are configured as shown in the
following exhibit.
Which statement describes the configuration of the container?
Correct Answer: C
Explanation
Explanation/Reference:
- Items stored in the collection will expire only if the item has a time to live value.
Explanation:
Time to Live:
- Off (null): items never expire, even if an item has its own time to live value.
- On (no default, -1): items expire only if they carry a per-item time to live value. This lets you delete items at separate, per-item intervals.
- On (set to a value): expired items are deleted based on the default TTL value. Once you set the TTL at the container level, items are automatically removed once the period expires, unless an item overrides the default with its own value.
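These modes can be summarized in a small decision function (a sketch of the documented semantics; a per-item value of -1 means "never expire"):

```python
def is_expired(container_ttl, item_ttl, seconds_since_write):
    """container_ttl: None (Off), -1 (On, no default), or n seconds (On).
    item_ttl: optional per-item override in seconds (-1 = never expire)."""
    if container_ttl is None:          # Off: nothing ever expires
        return False
    ttl = item_ttl if item_ttl is not None else container_ttl
    if ttl == -1:                      # no applicable expiry value
        return False
    return seconds_since_write >= ttl
```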
Examples
Geospatial Configuration
- Specifies how the spatial data will be indexed.
- Specify one Geospatial Configuration per container: geography or geometry.
Reference:
[CONFIRMED]
QUESTION 9
You have an Azure Cosmos DB Core (SQL) API account named account1 that is configured for automatic
failover. The account1 account has a single read-write region in West US and a read region in East US.
Correct Answer: C
Explanation
Explanation/Reference:
Update-AzCosmosDBAccountFailoverPriority
- Update Failover Region Priority of a Cosmos DB Account.
Change failover priority or trigger failover for an Azure Cosmos DB account with single write region by using
PowerShell
- https://learn.microsoft.com/en-us/azure/cosmos-db/scripts/powershell/common/failover-priority-update
Update-AzCosmosDBAccountFailoverPriority
- https://learn.microsoft.com/en-us/powershell/module/az.cosmosdb/update-
azcosmosdbaccountfailoverpriority
[DOUBT]
QUESTION 10
You have an Azure Cosmos DB Core (SQL) API account that uses a custom conflict resolution policy. The
account has a registered merge procedure that throws a runtime exception.
A. a function that pulls items from the conflicts feed and is triggered by a timer trigger
B. a function that receives items pushed from the change feed and is triggered by an Azure Cosmos DB trigger
C. a function that pulls items from the change feed and is triggered by a timer trigger
D. a function that receives items pushed from the conflicts feed and is triggered by an Azure Cosmos DB
trigger
Correct Answer: A
Explanation
Explanation/Reference:
- a function that pulls items from the conflicts feed and is triggered by a timer trigger
Explanation:
All conflict-related data is stored in the conflicts feed; you have to orchestrate the function by using a timer.
When a conflict occurs, you can resolve the conflict by using different conflict resolution policies.
Conflict resolution policy can only be specified at container creation time and cannot be modified after container
creation.
If you configure your container with the custom resolution option, and you fail to register a merge procedure on
the container or the merge procedure throws an exception at runtime, the conflicts are written to the conflicts
feed.
Your application then needs to manually resolve the conflicts in the conflicts feed.
"a function that pulls items from the conflicts feed and is triggered by a timer trigger"
- Is the correct answer since there is no trigger mechanism for the conflict feed in Cosmos DB
Incorrect answers
"a function that receives items pushed from the change feed and is triggered by an Azure Cosmos DB trigger"
"a function that pulls items from the change feed and is triggered by a timer trigger"
- They are pull from the changes feed.
"a function that pulls items from the conflicts feed and is triggered by a timer trigger"
- Mentions an Azure Cosmos DB trigger, but this is only for the change feed, not for the conflicts feed, hence
Conflict types and resolution policies when using multiple write regions
- https://learn.microsoft.com/en-us/azure/cosmos-db/conflict-resolution-policies
QUESTION 11
The following is a sample of a document in orders.
You need to provide a report of the total items ordered per month by item type.
Correct Answer: B
Explanation
Explanation/Reference:
- Configure the report to query a new aggregate container. Populate the aggregates by using the
change feed.
Explanation:
After processing items in the change feed, you can build a materialized view and persist aggregated values
back in Azure Cosmos DB. If you're using Azure Cosmos DB to build a game, you can, for example, use
change feed to implement real-time leaderboards based on scores from completed games.
"Configure the report to query a new aggregate container. Populate the aggregates by using SQL queries that
run daily."
- Suggests populating the aggregates by using SQL queries that run daily. While this may reduce the RU
consumption during querying, it may not necessarily minimize RU consumption overall. Additionally, this
approach may result in stale data since the aggregates are only updated once a day.
- The best approach to minimize RU consumption and ensure the report runs as quickly as possible is to use
the change feed to populate the aggregates in real-time, as suggested in this option.
- This way, the aggregates are always up-to-date, and the report can be generated quickly and with minimal
RU consumption.
Change feed
- Is a persistent record of changes to a container in the order they occur.
- Change feed support in Azure Cosmos DB works by listening to an Azure Cosmos DB container for any
changes.
- It then outputs the sorted list of documents that were changed in the order in which they were modified.
- The persisted changes can be processed asynchronously and incrementally, and the output can be
distributed across one or more consumers for parallel processing.
- The change feed includes insert and update operations made to items within the container.
- If you're using all versions and deletes mode (preview), you also get changes from delete operations and
TTL expirations.
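A sketch of the pattern for this question's report (the order document shape, with orderItems entries carrying a type and a quantity, is an assumption):

```python
from collections import defaultdict

class AggregateView:
    """Materialized view kept current by replaying the change feed:
    total quantity ordered per (month, item type)."""
    def __init__(self):
        self.totals = defaultdict(int)

    def on_change(self, order_doc):
        # Called for every inserted or updated order seen on the change feed.
        month = order_doc["orderDate"][:7]            # e.g. "2022-09"
        for item in order_doc["orderItems"]:
            self.totals[(month, item["type"])] += item["quantity"]
```

The report then queries this small aggregate container directly, so no RU-heavy scan of the orders container is needed at report time.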
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/change-feed-design-patterns#high-availability
[DOUBT]
QUESTION 12
You have three containers in an Azure Cosmos DB Core (SQL) API account as shown in the following table.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
The change feed includes insert and update operations made to items within the container.
If you're using all versions and deletes mode (preview), you also get changes from delete operations and TTL
expirations.
Fn2 can check the _etag of the item to see whether the item is an update or an insert: NO
- Every item stored in an Azure Cosmos DB container has a system-defined _etag property.
- The value of the _etag is automatically generated and updated by the server every time the item is updated.
- You should not take a dependency on the _etag format, because it can change at any time.
Reference:
[CONFIRMED]
QUESTION 13
You configure Azure Cognitive Search to index a container in an Azure Cosmos DB Core (SQL) API account as
shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information
presented in the graphic.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
The [answer choice] field is hidden from the search results: name
- The name field is not Retrievable.
- Retrievable: Indicates whether the field can be returned in a search result. Set this attribute to false if you
want to use a field (for example, margin) as a filter, sorting, or scoring mechanism but do not want the field to
be visible to the end user.
- Note: searchable: Indicates whether the field is full-text searchable and can be referenced in search queries.
Reference:
[CONFIRMED]
QUESTION 14
You have a database named db1 in an Azure Cosmos DB for NoSQL account named account1.
You need to write JSON data to db1 by using Azure Stream Analytics. The solution must minimize costs.
What should you do before you can use db1 as an output of Stream Analytics?
Correct Answer: B
Explanation
Explanation/Reference:
- In db1, create containers that have a custom indexing policy and analytical store disabled
Explanation:
When you enable analytical store on an Azure Cosmos DB container, a new column-store is internally created
based on the operational data in your container.
This column store is persisted separately from the row-oriented transactional store for that container, in a
storage account that is fully managed by Azure Cosmos DB, in an internal subscription.
The inserts, updates, and deletes to your operational data are automatically synced to analytical store. You
don't need the Change Feed or ETL to sync the data.
Stream Analytics doesn't create containers in your database. Instead, it requires you to create them
beforehand.
You can then control the billing costs of Azure Cosmos DB containers.
You can also tune the performance, consistency, and capacity of your containers directly by using the Azure
Cosmos DB APIs.
Azure Cosmos DB
- In Azure Cosmos DB, data is indexed following indexing policies that are defined for each container. The
default indexing policy for newly created containers enforces range indexes for any string or number. You can
override this policy with your own custom indexing policy.
Analytical store
- Key/value databases (serialized object for each key value)
- Document databases (XML, YAML, JSON, or BSON)
- Column store databases (stores column families)
- Graph databases (store information as a collection of objects and relationships)
- Telemetry and time series databases are an append-only collection of object
[CONFIRMED]
QUESTION 15
You have a database named db1 in an Azure Cosmos DB for NoSQL account named account1. The db1
database has a manual throughput of 4,000 request units per second (RU/s).
You need to move db1 from manual throughput to autoscale throughput by using the Azure CLI. The solution
must provide a minimum of 4,000 RU/s and a maximum of 40,000 RU/s.
How should you complete the CLI statements? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation
Commands
- az cosmosdb sql database throughput migrate: Migrate the throughput of the SQL database between
autoscale and manually provisioned.
- az cosmosdb sql database throughput show: Get the throughput of the SQL database under an Azure
Cosmos DB account.
- az cosmosdb sql database throughput update: Update the throughput of the SQL database under an Azure
Cosmos DB account.
--max-throughput: The maximum throughput resource can scale to (RU/s). Provided when the resource
is autoscale enabled. The minimum value can be 4000 (RU/s).
--throughput: The throughput of SQL database (RU/s).
Migrate the 4,000 RU/s manual throughput to autoscale, then raise the maximum to 40,000 RU/s:
az cosmosdb sql database throughput migrate -a "account1" -g "cosmosdbrg" -n "db1" -t "autoscale"
az cosmosdb sql database throughput update -a "account1" -g "cosmosdbrg" -n "db1" --max-throughput 40000
[CONFIRMED]
QUESTION 16
You have an Azure Cosmos DB for NoSQL account.
You need to create an Azure Monitor query that lists recent modifications to the regional failover policy.
A. CDBPartitionKeyStatistics.
B. CDBQueryRunTimeStatistics.
C. CDBDataPlaneRequests.
D. CDBControlPlaneRequests.
Correct Answer: D
Explanation
Explanation/Reference:
- CDBControlPlaneRequests.
CDBControlPlaneRequests
- This table details all control plane operations executed on the account, which include modifications to the
regional failover policy, indexing policy, IAM role assignments, backup/restore policies, VNet and firewall rules,
private links as well as updates and deletes of the account.
CDBPartitionKeyRUConsumption
- This table details the RU (Request Unit) consumption for logical partition keys in each region, within each of
their physical partitions.
- This data can be used to identify hot partitions from a request volume perspective.
CDBQueryRunTimeStatistics
- This table details query operations executed against a SQL API account.
- By default, the query text and its parameters are obfuscated to avoid logging PII data with full text query
logging available by request.
CDBDataPlaneRequests
- The DataPlaneRequests table captures every data plane operation for the Cosmos DB account.
- Data Plane requests are operations executed to create, update, delete or retrieve data within the account.
References:
CDBPartitionKeyRUConsumption
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbpartitionkeyruconsumption
CDBQueryRuntimeStatistics
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbqueryruntimestatistics
CDBDataPlaneRequests.
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbdataplanerequests
CDBControlPlaneRequests.
- https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/cdbcontrolplanerequests
[CONFIRMED]
QUESTION 17
You have a container in an Azure Cosmos DB for NoSQL account that stores data about orders.
{
"orderId" : "d4a9179b-5ead-43a3-b851-add9071ac4b6",
"customerId" : "f6e39103-bdc7-4346-9cfb-45daa4b2becf",
"orderDate" : "2022-09-29".
"orderItems" : [...],
"total" : 12345
}
You are evaluating whether to use orderDate as the partition key.
What are two effects of using orderDate as the partition key? Each correct answer presents a complete
solution.
Correct Answer: BD
Explanation
Explanation/Reference:
Hot partitions
- Throughput provisioned for a container is divided evenly among physical partitions.
- A partition key design that doesn't distribute requests evenly might result in too many requests directed to a
small subset of partitions that become "hot."
- Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting
and higher costs.
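The even split of provisioned throughput can be sketched with hypothetical numbers (the partition count and RU figures are illustrative, not from the scenario):

```python
def per_partition_budget(total_rus: int, physical_partitions: int) -> float:
    # Provisioned throughput is divided evenly among physical partitions.
    return total_rus / physical_partitions

# Hypothetical figures: 10,000 RU/s spread over 5 physical partitions.
budget = per_partition_budget(10_000, 5)  # 2,000 RU/s per partition

# If a partition key such as orderDate sends all of today's writes to one
# partition, the workload is effectively capped at that partition's budget
# even though 10,000 RU/s is provisioned for the container.
```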
Incorrect answers
Reference:
https://www.tallan.com/blog/2019/04/30/horizontal-partitioning-in-azure-cosmos-db
https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits
[CONFIRMED]
QUESTION 18
You have an app that stores data in an Azure Cosmos DB Core (SQL) API account. The app performs queries
that return large result sets.
You need to return a complete result set to the app by using pagination. Each page of results must return 80
items.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of
actions to the answer area and arrange them in the correct order.
Select and Place:
Correct Answer:
Explanation
Explanation/Reference:
Query executions
- Sometimes query results will be split over multiple pages. Each page's results are generated by a separate
query execution.
- You can specify the maximum number of items returned by a query by setting the MaxItemCount.
- The MaxItemCount is specified per request and tells the query engine to return that number of items or
fewer. You can set MaxItemCount to -1 if you don't want to place a limit on the number of results per query
execution.
Step 1: Set the MaxItemCount property
- Configure the MaxItemCount property to limit the number of items returned per page to 80. The
MaxItemCount is specified per request and tells the query engine to return that number of items or fewer.
Step 2: Run the query and provide a continuation token
- In the .NET SDK and Java SDK you can optionally use continuation tokens as a bookmark for your query's
progress. Azure Cosmos DB query executions are stateless at the server side and can be resumed at any time
using the continuation token.
- If the query returns a continuation token, then there are additional query results.
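The page-size and continuation-token flow described above can be simulated in plain Python (the in-memory list stands in for a container; no SDK calls are made):

```python
def query_page(items, max_item_count=80, continuation_token=None):
    """One simulated query execution: returns up to max_item_count items
    and a continuation token when further results remain."""
    start = continuation_token or 0
    page = items[start:start + max_item_count]
    token = start + len(page) if start + len(page) < len(items) else None
    return page, token

# Drain the full result set the way a client loops on the token.
data = list(range(200))
results, token = [], None
while True:
    page, token = query_page(data, 80, token)
    results.extend(page)
    if token is None:
        break
# Pages arrive with 80, 80 and 40 items; results now holds all 200.
```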
Reference:
[CONFIRMED]
QUESTION 19
You have an application named App1 that reads the data in an Azure Cosmos DB Core (SQL) API account.
App1 runs the same read queries every minute. The default consistency level for the account is set to eventual.
You discover that every query consumes request units (RUs) instead of using the cache.
You verify the IntegratedCacheItemHitRate metric and the IntegratedCacheQueryHitRate metric. Both metrics
have values of 0.
You verify that the dedicated gateway cluster is provisioned and used in the connection string.
You need to ensure that App1 uses the Azure Cosmos DB integrated cache.
Correct Answer: C
Explanation
Explanation/Reference:
- Configure the consistency level of the requests from App1.
To ensure that App1 uses the Azure Cosmos DB integrated cache, you need to configure the consistency level
of the requests from App1 to session or eventual.
Explanation:
As the integrated cache is specific to the Azure Cosmos DB account and needs significant memory and CPU, it
needs a dedicated gateway node. Therefore, you need to configure the connectivity mode of the application
CosmosClient. Connect to Azure Cosmos DB with the help of gateway mode.
If the values from IntegratedCacheItemHitRate and IntegratedCacheQueryHitRate are zero, then requests
aren't hitting the integrated cache. Check that you're using the dedicated gateway connection string, connecting
with gateway mode, and have set session or eventual consistency.
If you're using the .NET or Java SDK, set the connection mode to gateway mode.
This step isn't necessary for the Python and Node.js SDKs since they don't have additional options of
connecting besides gateway mode.
Reference:
[CONFIRMED]
QUESTION 20
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account named account1 that
is set to the session default consistency level. The average size of an item in container1 is 20 KB.
You have an application named App1 that uses the Azure Cosmos DB SDK and performs a point read on the
same set of items in container1 every minute.
You need to minimize the consumption of the request units (RUs) associated to the reads by App1.
Correct Answer: B
Explanation
Explanation/Reference:
Since the application performs a point read on the same set of items in the container every minute, using
"consistent prefix" as the consistency level will allow the application to consume fewer RUs.
Consistent Prefix
- This guarantees that read operations never see out-of-order writes: updates are observed in the order in
which they were written, although a read may not reflect the most recent writes.
- This consistency level may have lower latency than session consistency.
Bounded Staleness
- A guarantee that the data read is at most a certain number of versions or time interval behind the latest
write.
- This consistency level may have lower latency than strong consistency.
[NOT-CONFIRMED]
QUESTION 21
You provision Azure resources by using the following Azure Resource Manager (ARM) template.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
The alert will be triggered when an item that has a new partition key value is created: NO
- The trigger fires when an access key for the database account is regenerated, not when an item that has a
new partition key value is created.
Reference: https://docs.microsoft.com/en-us/cli/azure/cosmosdb/keys
[CONFIRMED]
QUESTION 22
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from
the devices will occur every second.
{
"id" : "03c1ca5a-db18-4231-908f-09a9bc7a7c3e",
"deviceManufacturer" : "Contoso, Ltd",
"deviceId" : "f460df85-799f-4d58-b051-67561b4993c6",
"timestamp" : "2021-09-19T13:47:45",
"sensor1Value" : true,
"sensor2Value" : "75",
"sensor3Value" : "4554",
"sensor4Value" : "454",
"sensor5Value" : "42128",
}
You need to select a partition key that meets the following requirements for writes:
- Minimizes the partition skew
- Avoids capacity limits
- Avoids hot partitions
Correct Answer: D
Explanation
Explanation/Reference:
- Create a new synthetic key that contains deviceId and a random number.
Explanation:
In order to avoid a hot partition we have no option but to use a strategy like that suggested in the section "Use a
partition key with a random suffix"
- A partition key design that doesn't distribute requests evenly might result in too many requests directed to a
small subset of partitions that become "hot."
- Hot partitions lead to inefficient use of provisioned throughput, which might result in rate-limiting and higher
costs.
Because of the write frequency (one write per second), we may not want to put too much data into the same
logical partition. The maximum quota for a logical partition is 20 GB, which can be exhausted quickly.
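The "random suffix" strategy can be sketched as follows (the bucket count is a hypothetical tuning choice):

```python
import random

def partition_key_with_suffix(device_id: str, buckets: int = 10) -> str:
    # Appending a random suffix spreads one device's writes across several
    # logical partitions, avoiding a hot partition and the 20 GB
    # logical-partition quota.
    return f"{device_id}-{random.randint(0, buckets - 1)}"

key = partition_key_with_suffix("f460df85-799f-4d58-b051-67561b4993c6")
```

The trade-off: reads for a single device must then fan out across all suffix values.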
Incorrect Answers:
Reference:
[CONFIRMED]
QUESTION 23
You are designing an Azure Cosmos DB Core (SQL) API solution to store data from IoT devices. Writes from
the devices will occur every second.
{
"id" : "03c1ca5a-db18-4231-908f-09a9bc7a7c3e",
"deviceManufacturer" : "Contoso, Ltd",
"deviceId" : "f460df85-799f-4d58-b051-67561b4993c6",
"timestamp" : "2021-09-19T13:47:45",
"sensor1Value" : true,
"sensor2Value" : "75",
"sensor3Value" : "4554",
"sensor4Value" : "454",
"sensor5Value" : "42128",
}
You need to select a partition key that meets the following requirements for writes:
- Minimizes the partition skew
- Avoids capacity limits
- Avoids hot partitions
What should you do?
Correct Answer: A
Explanation
Explanation/Reference:
You can form a partition key by concatenating multiple property values into a single artificial partitionKey
property. These keys are referred to as synthetic keys.
For the previous document, one option is to set /deviceId or /date as the partition key.
Use this option, if you want to partition your container based on either device ID or date.
Another option is to concatenate these two values into a synthetic partitionKey property that's used as the
partition key.
{
"deviceId": "abc-123",
"date": 2018,
"partitionKey": "abc-123-2018"
}
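Building the synthetic partitionKey shown above is a simple concatenation (a minimal sketch using the document's own values):

```python
def synthetic_partition_key(device_id: str, date) -> str:
    # Concatenate two property values into a single artificial key.
    return f"{device_id}-{date}"

doc = {"deviceId": "abc-123", "date": 2018}
doc["partitionKey"] = synthetic_partition_key(doc["deviceId"], doc["date"])
# doc["partitionKey"] is now "abc-123-2018"
```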
Incorrect:
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/synthetic-partition-keys
https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits
[CONFIRMED]
QUESTION 24
You maintain a relational database for a book publisher.
The database contains the following tables.
The most common query lists the books for a given authorId.
You need to develop a non-relational data model for Azure Cosmos DB Core (SQL) API that will replace the
relational database. The solution must minimize latency and read operation costs.
A. Create a container for Author and a container for Book. In each Author document, embed bookId for each
book by the author. In each Book document embed authorId of each author.
B. Create Author, Book, and Bookauthorlnk documents in the same container.
C. Create a container that contains a document for each Author and a document for each Book. In each Book
document, embed authorId.
D. Create a container for Author and a container for Book. In each Author document and Book document
embed the data from Bookauthorlnk.
Correct Answer: C
Explanation
Explanation/Reference:
- Create a container that contains a document for each Author and a document for each Book. In each
Book document, embed authorId.
Explanation:
We are explicitly required to minimize latency and read operation costs. Although option A would get the job
done, it would be highly inefficient. The relationship between an author and books is one-to-few (one-to-many
but bounded), so the data should be embedded; refer to the links below.
Data modeling in Azure Cosmos DB
- Retrieving a complete person record from the database is now a single read operation against a single
container and for a single item.
- Updating the contact details and addresses of a person record is also a single write operation against a
single item.
Here is a diagram of the data model for option "Create a container that contains a document for each Author
and a document for each Book. In each Book document, embed authorId."
Container: Books
Document: Book1
authorId: 1
title: "The Da Vinci Code"
genre: "Thriller"
Container: Authors
Document: Author1
authorId: 1
fullname: "Dan Brown"
address: "123 Main Street"
contactinfo: "123-456-7890"
Incorrect answers.
Create a container for Author and a container for Book. In each Author document, embed bookId for each book
by the author. In each Book document embed authorId of each author.
- In some scenarios, you may want to mix different document types in the same collection; this is usually the
case when you want multiple, related documents to sit in the same partition.
- For example, you could put both book and book review documents in the same collection and partition it
by bookId.
- This solution creates two containers, which means two read operations, while the solution must minimize
latency and read operation costs.
References:
How to model and partition data on Azure Cosmos DB using a real-world example
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-model-partition-example
[DOUBT]
QUESTION 25
You have an Azure Cosmos DB Core (SQL) API account.
SELECT
IS_NUMBER("1234") AS A,
IS_NUMBER(1234) AS B,
IS_NUMBER({prop: 1234}) AS C
Correct Answer: A
Explanation
Explanation/Reference:
Explanation:
IS_NUMBER returns a Boolean value indicating if the type of the specified expression is a number.
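The three expressions evaluate to false, true and false respectively; the type check can be mirrored in Python (a sketch of the semantics, not the Cosmos DB engine):

```python
def is_number(value) -> bool:
    # Mirror IS_NUMBER: only numeric types count. bool is excluded because
    # Cosmos DB treats true/false as Boolean, not Number (and Python's bool
    # is a subclass of int).
    return isinstance(value, (int, float)) and not isinstance(value, bool)

a = is_number("1234")           # False: a numeric-looking string is a string
b = is_number(1234)             # True: a number literal
c = is_number({"prop": 1234})   # False: an object is not a number
```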
Reference:
[CONFIRMED]
QUESTION 26
You need to implement a trigger in Azure Cosmos DB Core (SQL) API that will run before an item is inserted
into a container.
Which two actions should you perform to ensure that the trigger runs? Each correct answer presents part of the
solution.
Correct Answer: CE
Explanation
Explanation/Reference:
Explanation:
Pre-triggers are specified in the RequestOptions object by setting PreTriggerInclude and passing the name of
the trigger in a List object.
Even though the name of the trigger is passed as a List, you can still run only one trigger per operation.
Reference:
How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs
[CONFIRMED]
QUESTION 27
You have a container in an Azure Cosmos DB Core (SQL) API account.
You need to use the Azure Cosmos DB SDK to replace a document by using optimistic concurrency.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Example
// If ETag is current, then this will succeed. Otherwise the request will fail with HTTP 412 Precondition Failure
await client.ReplaceDocumentAsync(
readCopyOfBook.SelfLink,
new Book { Title = "Moby Dick", Price = 14.99 },
new RequestOptions
{
AccessCondition = new AccessCondition
{
Condition = readCopyOfBook.ETag,
Type = AccessConditionType.IfMatch
}
});
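The ETag check in the example above can be illustrated with a minimal in-memory store (a conceptual sketch, not the SDK; the HTTP 412 failure is simulated with an exception):

```python
import uuid

class EtagStore:
    """Optimistic concurrency: a replace succeeds only when the caller's
    ETag matches the stored one, mimicking Cosmos DB's HTTP 412 behavior."""

    def __init__(self):
        self._items = {}  # id -> (etag, body)

    def upsert(self, item_id, body):
        etag = str(uuid.uuid4())            # a new ETag on every write
        self._items[item_id] = (etag, body)
        return etag

    def replace(self, item_id, body, if_match):
        current_etag, _ = self._items[item_id]
        if if_match != current_etag:
            raise RuntimeError("412 Precondition Failed")
        return self.upsert(item_id, body)

store = EtagStore()
etag1 = store.upsert("book1", {"title": "Moby Dick", "price": 9.99})
etag2 = store.replace("book1", {"title": "Moby Dick", "price": 14.99}, etag1)
# Replaying etag1 now fails, exactly as a lost update would be rejected.
```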
Reference:
RequestOptions.AccessCondition Property
- https://learn.microsoft.com/en-us/dotnet/api/
microsoft.azure.documents.client.requestoptions.accesscondition?view=azure-dotnet#examples
[CONFIRMED]
QUESTION 28
You are creating a database in an Azure Cosmos DB Core (SQL) API account. The database will be used by
an application that will provide users with the ability to share online posts. Users will also be able to submit
comments on other users' posts.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- If you embed the post data into the users data instead of creating a separate document for each post, you
will increase the write operation cost for new posts: YES
- If you embed the post data into the users data instead of creating a separate document for each post, you
will increase the write operation cost for new comments: YES
- If you embed the post data into the users data instead of creating a separate document for each post, you
will increase the read operation cost for displaying the users and their associated interests: NO
Embedding increases write costs because every new post or comment rewrites the larger embedded
document, but it can decrease read costs because related data is returned in a single read.
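Why embedding raises write costs can be sketched with rough arithmetic (the RU-per-KB multiplier and document sizes below are hypothetical, purely to show the relationship):

```python
def write_ru(doc_size_kb: float, ru_per_kb: float = 5.0) -> float:
    # Illustrative only: the RU charge of a write grows with item size.
    return doc_size_kb * ru_per_kb

# Separate post documents: a new 2 KB post writes just that document.
separate = write_ru(2)
# Embedded posts: a new post rewrites the whole (say) 50 KB user document.
embedded = write_ru(50 + 2)
assert embedded > separate  # embedding makes each new post more expensive
```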
QUESTION 29
You have a container in an Azure Cosmos DB Core (SQL) API account. The container stores telemetry data
from IoT devices. The container uses telemetryId as the partition key and has a throughput of 1,000 request
units per second (RU/s).
Approximately 5,000 IoT devices submit data every five minutes by using the same telemetryId value.
You have an application that performs analytics on the data and frequently reads telemetry data for a single IoT
device to perform trend analysis.
{
"id" : "9ccf1906-2a30-4dc0-9644-2185f5dcbbd7",
"deviceId" : "bba6fe24-6d97-4935-8d58-36baa4b8a0e1",
"telemetryId" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
"date" : "2019-05-03"
"time" : "13:05",
"tempe" : "21"
}
You need to reduce the amount of request units (RUs) consumed by the analytics application.
Correct Answer: C
Explanation
Explanation/Reference:
- Move the data to a new container that has a partition key of deviceId.
Explanation:
The partition key is what will determine how data is routed in the various partitions by Cosmos DB and needs to
make sense in the context of your specific scenario.
The IoT Device ID is generally the "natural" partition key for IoT applications.
The analytics application frequently reads the telemetry for a single device, so those reads target the same
partition if deviceId is used as the partition key.
Note: The IoT Device ID is the usual partition key for IoT applications.
Reference:
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/iot-using-cosmos-db
[CONFIRMED]
QUESTION 30
You have the indexing policy shown in the following exhibit.
Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- When creating a query, which ORDER BY statement will execute successfully: ORDER BY c.name DESC,
c.age DESC
- During the creation of an item, when will the index update: At the same time as the item creation
Explanation:
Composite indexes
- Queries that have an ORDER BY clause with two or more properties require a composite index.
- You can also define a composite index to improve the performance of many equality and range queries.
- By default, no composite indexes are defined, so you should add composite indexes as needed.
- A composite index contains two or more property paths. The sequence in which property paths are defined
matters.
- The order of composite index paths (ascending or descending) should also match the order in the
ORDER BY clause.
There is no "ORDER BY c.name ASC, c.age ASC" option, so the correct answer is "ORDER BY c.name DESC,
c.age DESC".
When creating a query, which ORDER BY statement will execute successfully: ORDER BY c.name DESC,
c.age DESC
- Queries that have an ORDER BY clause with two or more properties require a composite index.
- The following considerations apply when using composite indexes for queries with an ORDER BY clause
with two or more properties:
- If the composite index paths do not match the sequence of the properties in the ORDER BY clause, then
the composite index can't support the query.
- The order of composite index paths (ascending or descending) should also match the order in the ORDER
BY clause. The composite index also supports an ORDER BY clause with the opposite order on all paths.
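The sequence and direction rules can be sketched as a small check (index and clause are lists of (path, direction) pairs; this is an illustrative helper, not an SDK API):

```python
def supports_order_by(index, order_by):
    """A composite index serves an ORDER BY when the property sequence
    matches and the directions match exactly or are exactly reversed."""
    if [p for p, _ in index] != [p for p, _ in order_by]:
        return False
    have = [d for _, d in index]
    want = [d for _, d in order_by]
    flipped = ["DESC" if d == "ASC" else "ASC" for d in want]
    return have == want or have == flipped

index = [("name", "ASC"), ("age", "DESC")]  # hypothetical policy
assert supports_order_by(index, [("name", "ASC"), ("age", "DESC")])
assert supports_order_by(index, [("name", "DESC"), ("age", "ASC")])   # reversed
assert not supports_order_by(index, [("name", "ASC"), ("age", "ASC")])
```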
During the creation of an item, when will the index update: At the same time as the item creation
- Azure Cosmos DB supports two indexing modes:
- Consistent: The index is updated synchronously as you create, update or delete items. This means
that the consistency of your read queries will be the consistency configured for the account.
- None: Indexing is disabled on the container.
Reference:
Composite indexes
- https://learn.microsoft.com/en-us/azure/cosmos-db/index-policy#composite-indexes
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy
[CONFIRMED]
QUESTION 31
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account. Upserts of items in
container1 occur every three seconds.
You have an Azure Functions app named function1 that is supposed to run whenever items are inserted or
replaced in container1.
A. checkpointInterval
B. leaseCollectionsThroughput
C. maxItemsPerInvocation
D. feedPollDelay
Correct Answer: D
Explanation
Explanation/Reference:
- feedPollDelay
Explanation:
An upsert operation inserts a row into a database table if it does not already exist, or updates it if it does.
FeedPollDelay
- The time (in milliseconds) for the delay between polling a partition for new changes on the feed, after all
current changes are drained.
- Default is 5,000 milliseconds, or 5 seconds.
Therefore, if you want each upsert to be processed by function1 within 1 second of the upsert, you need to
change feedPollDelay.
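On an Azure Functions Cosmos DB trigger, the delay is set on the binding. A hypothetical function.json fragment (the connection setting, database and container names are placeholders, and the property names follow the v3-style binding):

```json
{
  "bindings": [
    {
      "type": "cosmosDBTrigger",
      "direction": "in",
      "name": "input",
      "connectionStringSetting": "CosmosDBConnection",
      "databaseName": "db1",
      "collectionName": "container1",
      "feedPollDelay": 1000
    }
  ]
}
```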
Incorrect Answers:
checkpointInterval
- When set, it defines, in milliseconds, the interval between lease checkpoints.
- Default is always after each Function call.
MaxItemsPerInvocation
- When set, this property sets the maximum number of items received per Function call.
- If operations in the monitored container are performed through stored procedures, transaction scope is
preserved when reading items from the change feed.
- As a result, the number of items received could be higher than the specified value so that the items
changed by the same transaction are returned as part of one atomic batch.
Reference:
[CONFIRMED]
QUESTION 32
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
{
"studentId": "631282",
"firstName": "James",
"lastName": "Smith",
"enrollmentYear": 1990,
"isActivelyEnrolled": true,
"address": {
"street": "",
"city": "",
"stateProvince": "",
"postal": "",
}
}
{
"indexingMode": "consistent",
"includePaths": [
{
"path": "/*"
},
{
"path": "/address/city/?"
}
],
"excludePaths": [
{
"path": "/address/*"
},
{
"path": "/firstName/?"
}
]
}
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
- When included and excluded paths overlap, the more precise path takes precedence. Here, /address/city/? is
indexed even though /address/* is excluded, and /firstName/? is not indexed even though /* is included.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy
QUESTION 33
You have a database in an Azure Cosmos DB SQL API Core (SQL) account that is used for development.
You need to ensure that you can restore the database if the last batch process fails. The solution must
minimize costs.
How should you configure the backup settings? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Scenario:
- restore the database if the last batch process fails
Azure Cosmos DB automatically takes a full backup of your database every 4 hours and at any point of time,
only the latest two backups are stored by default. If the default intervals aren't sufficient for your workloads, you
can change the backup interval and the retention period from the Azure portal.
Backup Interval
- It's the interval at which Azure Cosmos DB attempts to take a backup of your data.
- The backup interval can't be less than 1 hour or greater than 24 hours.
- When you change this interval, the new interval takes effect starting from the time when the last backup
was taken.
Backup Retention
- It represents the period where each backup is retained.
- You can configure it in hours or days.
- The minimum retention period can’t be less than two times the backup interval (in hours) and it
can’t be greater than 720 hours.
References
[CONFIRMED]
QUESTION 34
You have the following query.
SELECT * FROM c
WHERE c.sensor = "TEMP1"
AND c.value < 22
AND c.timestamp >= 1619146031231
You need to recommend a composite index strategy that will minimize the request units (RUs) consumed by
the query.
A. a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp ASC)
B. a composite index for (sensor ASC, value ASC, timestamp ASC) and a composite index for (sensor DESC,
value DESC, timestamp DESC)
C. a composite index for (value ASC, sensor ASC) and a composite index for (timestamp ASC, sensor ASC)
D. a composite index for (sensor ASC, value ASC, timestamp ASC)
Correct Answer: A
Explanation
Explanation/Reference:
- a composite index for (sensor ASC, value ASC) and a composite index for (sensor ASC, timestamp
ASC)
Explanation:
You can use composite indexes in Azure Cosmos DB to optimize the most common inefficient queries.
In this query, c.sensor has an equality filter while c.value and c.timestamp have range filters. Because each
composite index can optimize only a single range filter, two composite indexes are required: (sensor ASC,
value ASC) and (sensor ASC, timestamp ASC).
Anytime you have a slow or high request unit (RU) query, you should consider optimizing it with a composite
index.
This allows you to optimize queries that have multiple equality and range filters in the same filter expression.
This query will be more efficient, taking less time and consuming fewer RUs, if it's able to leverage a
composite index on (name ASC, age ASC).
- Queries with multiple range filters can also be optimized with a composite index.
- However, each individual composite index can only optimize a single range filter.
- Range filters include >, <, <=, >=, and !=. The range filter should be defined last in the composite index.
- Consider the following query with an equality filter and two range filters:
SELECT * FROM c WHERE c.name = "John" AND c.age > 18 AND c._ts > 1612212188
This query will be more efficient with a composite index on (name ASC, age ASC) and (name ASC, _ts
ASC).
However, the query wouldn't utilize a composite index on (age ASC, name ASC) because the
properties with equality filters must be defined first in the composite index.
Two separate composite indexes are required instead of a single composite index on (name ASC, age
ASC, _ts ASC) since each composite index can only optimize a single range filter.
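The "equality properties first, one range filter last" rule can be expressed as a small check (an illustrative helper, not part of any SDK; property names are taken from the query):

```python
def index_supports_filters(index_paths, equality_props, range_props):
    """A composite index can serve a filter set when the leading paths are
    the equality-filtered properties and exactly one range-filtered
    property comes last (one range filter per composite index)."""
    n = len(equality_props)
    if set(index_paths[:n]) != set(equality_props):
        return False
    tail = index_paths[n:]
    return len(tail) == 1 and tail[0] in range_props

eq, rng = ["sensor"], ["value", "timestamp"]
assert index_supports_filters(["sensor", "value"], eq, rng)
assert index_supports_filters(["sensor", "timestamp"], eq, rng)
# A single three-path index can't cover both range filters:
assert not index_supports_filters(["sensor", "value", "timestamp"], eq, rng)
# The equality-filtered property must come first:
assert not index_supports_filters(["value", "sensor"], eq, rng)
```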
Reference:
[NOT CONFIRMED]
QUESTION 35
You have an Azure Cosmos DB Core (SQL) API account named account1.
You have the Azure virtual networks and subnets shown in the following table.
The vnet1 and vnet2 networks are connected by using a virtual network peer.
The Firewall and virtual network settings for account1 are configured as shown in the exhibit.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Based on the Firewall and virtual network settings for account1 exhibit: N Y N.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint
[CONFIRMED]
QUESTION 36
You plan to create an Azure Cosmos DB Core (SQL) API account that will use customer-managed keys stored
in Azure Key Vault.
You need to configure an access policy in Key Vault to allow Azure Cosmos DB access to the keys.
Which three permissions should you enable in the access policy? Each correct answer presents part of the
solution.
A. Wrap Key
B. Get
C. List
D. Update
E. Sign
F. Verify
G. Unwrap Key
Correct Answer: ABG
Explanation
Explanation/Reference:
- Wrap Key
- Get
- Unwrap Key
Explanation:
To Configure customer-managed keys for your Azure Cosmos account with Azure Key Vault: Add an access
policy to your Azure Key Vault instance:
1. From the Azure portal, go to the Azure Key Vault instance that you plan to use to host your encryption keys.
Select Access Policies from the left menu.
2. Select + Add Access Policy.
3. Under the Key permissions drop-down menu, select the Get, Unwrap Key, and Wrap Key permissions.
Reference:
https://learn.microsoft.com/en-us/azure/storage/common/customer-managed-keys-overview
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-setup-cmk
[CONFIRMED]
QUESTION 37
You need to configure an Apache Kafka instance to ingest data from an Azure Cosmos DB Core (SQL) API
account. The data from a container named telemetry must be added to a Kafka topic named iot. The solution
must store the data in a compact binary format.
Which three configuration items should you include in the solution? Each correct answer presents part of the
solution.
A. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
B. "key.converter": "org.apache.kafka.connect.json.JsonConverter"
C. "key.converter": "io.confluent.connect.avro.AvroConverter"
D. "connect.cosmos.containers.topicmap": "iot#telemetry"
E. "connect.cosmos.containers.topicmap": "iot"
F. "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"
Explanation/Reference:
- "connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector"
- "key.converter": "io.confluent.connect.avro.AvroConverter"
- "connect.cosmos.containers.topicmap": "iot#telemetry"
Explanation:
A sink is the destination that data is written to, while a source is where data is read from. Here data is read
from the Azure Cosmos DB change feed and published to Kafka, so a source connector is required.
CosmosDBSourceConnector
- Read data from the Azure Cosmos DB change feed and publish this data to a Kafka topic.
CosmosDBSinkConnector
- Export data from Apache Kafka topics to an Azure Cosmos DB database.
connect.cosmos.containers.topicmap
- Mapping between Kafka Topics and Cosmos Containers, formatted using CSV as shown:
topic#container,topic2#container
"key.converter": "io.confluent.connect.avro.AvroConverter"
- Avro is binary format, while JSON is text.
"connect.cosmos.containers.topicmap": "iot#telemetry"
- Create the Azure Cosmos DB sink connector in Kafka Connect. The following JSON body defines config for
the sink connector. Extract:
"connector.class": "com.azure.cosmos.kafka.connect.sink.CosmosDBSinkConnector", "key.converter":
"org.apache.kafka.connect.json.AvroConverter" "connect.cosmos.containers.topicmap": "hotels#kafka"
Read from Azure Cosmos DB
{
"name": "cosmosdb-source-connector",
"config": {
"connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSourceConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"connect.cosmos.task.poll.interval": "100",
"connect.cosmos.connection.endpoint": "<cosmos-endpoint>",
"connect.cosmos.master.key": "<cosmos-key>",
"connect.cosmos.databasename": "<cosmos-database>",
"connect.cosmos.containers.topicmap": "<kafka-topic>#<cosmos-container>",
"connect.cosmos.offset.useLatest": false,
"value.converter.schemas.enable": "false",
"key.converter.schemas.enable": "false"
}
}
Incorrect Answers:
"connector.class": "com.azure.cosmos.kafka.connect.source.CosmosDBSinkConnector"
- CosmosDBSourceConnector is to get Cosmos db data to kafka topic.
- CosmosDBSinkConnector is to export Kafka topics data to Cosmos db.
- We want to have data from cosmos to kafka so source, not sink.
"key.converter": "org.apache.kafka.connect.json.JsonConverter"
"value.converter.schemas.enable":
"false", "key.converter": "org.apache.kafka.connect.json.AvroConverter",
"key.converter.schemas.enable": "false", "connect.cosmos.connection.endpoint":
"https://.documents.azure.com:443/", "connect.cosmos.master.key": "",
"connect.cosmos.databasename": "kafkaconnect",
"connect.cosmos.containers.topicmap": "hotels#kafka"
}
}
Reference:
Kafka Connect for Azure Cosmos DB - source connector
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/kafka-connector-source
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/kafka-connector-sink
https://www.confluent.io/blog/kafkaconnect-deep-dive-converters-serialization-explained/
[CONFIRMED]
QUESTION 38
You are implementing an Azure Data Factory data flow that will use an Azure Cosmos DB (SQL API) sink to
write a dataset.
The data flow will use 2,000 Apache Spark partitions. You need to ensure that the ingestion from each Spark
partition is balanced to optimize throughput.
Which sink setting should you configure?
A. Throughput
B. Write throughput budget
C. Batch size
D. Collection action
Correct Answer: C
Explanation
Explanation/Reference:
- Batch size
Explanation:
Batch size
- An integer that represents how many objects are being written to Azure Cosmos DB collection in each
batch.
- Usually, starting with the default batch size is sufficient.
- Azure Cosmos DB limits a single request's size to 2 MB. The formula is "Request Size = Single Document
Size * Batch Size". If you hit an error saying "Request size is too large", reduce the batch size value.
- The larger the batch size, the better the throughput the service can achieve, but make sure you allocate
enough RUs to power your workload.
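The request-size formula can be inverted to estimate the largest batch size that stays under the cap. A rough sketch; real document sizes vary, so treat the result as an upper bound:

```python
REQUEST_SIZE_LIMIT_BYTES = 2 * 1024 * 1024  # Azure Cosmos DB single-request cap (2 MB)

def max_batch_size(avg_document_bytes: int) -> int:
    """Largest batch size keeping Request Size = Single Document Size * Batch Size
    under the 2 MB per-request limit."""
    if avg_document_bytes <= 0:
        raise ValueError("document size must be positive")
    return REQUEST_SIZE_LIMIT_BYTES // avg_document_bytes
```

For 2 KB documents this gives a ceiling of 1,024 objects per batch; in practice you would start from the default batch size and only tune if you hit the "Request size is too large" error.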
Throughput
- Set an optional value for the number of RUs you'd like to apply to your Azure Cosmos DB collection for each
execution of this data flow during the read operation. Minimum is 400.
Collection action
- Determines whether to recreate the destination collection prior to writing.
- None: No action will be done to the collection.
- Recreate: The collection will get dropped and recreated.
Reference:
Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
- https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db
[CONFIRMED]
QUESTION 39
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to provide a user named User1 with the ability to insert items into container1 by using role-based
access control (RBAC). The solution must use the principle of least privilege.
Correct Answer: D
Explanation
Explanation/Reference:
Explanation:
Cosmos DB Operator
- Can provision Azure Cosmos DB accounts, databases, and containers.
- Cannot access any data or use Data Explorer.
- Because control-plane roles such as Cosmos DB Operator cannot access data, User1 needs a data-plane
RBAC role assignment, and least privilege means granting only the create action for items in container1.
Reference:
Azure role-based access control in Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/role-based-access-control
[CONFIRMED]
QUESTION 40
You have an Azure Cosmos DB Core (SQL) API account.
You configure the diagnostic settings to send all log information to a Log Analytics workspace.
You need to identify when the provisioned request units per second (RU/s) for resources within the account
were modified.
AzureDiagnostics
| where Category == "ControlPlaneRequests"
Correct Answer: D
Explanation
Explanation/Reference:
Explanation:
The OperationName field in the AzureDiagnostics log contains the name of the operation that was performed.
In this case, you are interested in the operation that updates the provisioned request units per second (RU/s)
for resources within the account. This operation is called SqlContainersThroughputUpdate.
AzureDiagnostics
| where Category == "ControlPlaneRequests"
| where OperationName startswith "SqlContainersThroughputUpdate"
This will filter the log entries to only include those where the OperationName starts with
SqlContainersThroughputUpdate, which indicates that the provisioned throughput for a SQL container was
updated.
Reference:
[GPT]
QUESTION 41
You have a database in an Azure Cosmos DB Core (SQL) API account. The database is backed up every two
hours.
Correct Answer: A
Explanation
Explanation/Reference:
Explanation:
By enabling Continuous Backup for the Azure Cosmos DB Core (SQL) API account, you ensure that the
database is continuously backed up, which is a prerequisite for supporting point-in-time restore.
When creating a new Azure Cosmos DB account, in the Backup policy tab, choose continuous mode to enable
the point in time restore functionality for the new account. With the point-in-time restore, data is restored to a
new
account, currently you can't restore to an existing account.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/provision-account-continuous-backup
[CONFIRMED]
QUESTION 42
You have an Azure Cosmos DB Core (SQL) API account used by an application named App1.
You open the Insights pane for the account and see the following chart.
Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- The HTTP 404 status code is caused by [answer choice]: requesting resources that do not exist
- There are [answer choice] successful resource creations in the accounts during the time period of the chart:
6 thousand
Explanation:
- 200 OK: Success on List, Get, Replace, Patch, Query. Returned for a successful response.
- 201 Created: Success on PUT or POST. Object created or updated successfully.
- 400 Bad Request: The override set in the x-ms-consistency-level header is stronger than the one set during
account creation. For example, if the consistency level is Session, the override can't be Strong or Bounded.
- 404 Not Found: The document is no longer a resource, that is, the document was deleted.
The HTTP 404 status code is caused by [answer choice]: requesting resources that do not exist
- 404 Not Found: Returned when a resource does not exist on the server.
- If you are managing or querying an index, check the syntax and verify the index name is specified correctly.
There are [answer choice] successful resource creations in the accounts during the time period of the chart: 6
thousand
- 201 Created: Success on PUT or POST. Object created or updated successfully.
Reference:
[CONFIRMED]
QUESTION 43
You have a database in an Azure Cosmos DB Core (SQL) API account.
You need to create an Azure function that will access the database to retrieve records based on a variable
named accountnumber. The solution must protect against SQL injection attacks.
Correct Answer: C
Explanation
Explanation/Reference:
Explanation:
Azure Cosmos DB supports queries with parameters expressed by the familiar @ notation.
Parameterized SQL provides robust handling and escaping of user input, and prevents accidental exposure of
data through SQL injection.
Example:
SELECT *
FROM Families f
WHERE f.lastName = @lastName AND f.address.state = @addressState
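A sketch of how this parameterization might look from Python: the query text and the @ parameters travel separately, so user input never becomes part of the SQL string. The field and variable names here are hypothetical, and the azure-cosmos call in the comment is illustrative only:

```python
def build_account_query(accountnumber: str) -> dict:
    """Build a parameterized Cosmos DB SQL query spec.

    Keeping user input in the parameters list, not the query text,
    protects against SQL injection.
    """
    return {
        "query": "SELECT * FROM c WHERE c.accountnumber = @accountnumber",
        "parameters": [{"name": "@accountnumber", "value": accountnumber}],
    }

# With the azure-cosmos SDK the spec would be passed roughly like this
# (sketch only; container is an azure.cosmos ContainerProxy):
#   spec = build_account_query(user_input)
#   items = container.query_items(query=spec["query"],
#                                 parameters=spec["parameters"],
#                                 enable_cross_partition_query=True)
```

Even a malicious input such as `'; DROP ...` stays an inert parameter value rather than executable query text.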
Reference:
[CONFIRMED]
QUESTION 44
You plan to deploy two Azure Cosmos DB Core (SQL) API accounts that will each contain a single database.
How should you provision the containers within each account to minimize costs?
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Serverless
- In the serverless account type, you don't have to configure provisioned throughput when you create
containers in your Azure Cosmos DB account.
- For each billing period, you're billed for the number of RUs that your database operations consumed.
- You're developing, testing, prototyping, or offering your users a new application, and you don't yet
know the traffic pattern.
Provisioned throughput
- In the provisioned throughput account type, you commit to a certain amount of throughput (expressed in
RUs per second or RU/s) that is provisioned on your databases and containers.
- The cost of your database operations is then deducted from the number of RUs that are available every
second.
- For each billing period, you're billed for the amount of throughput that you provisioned.
Reference:
Provision autoscale throughput on database or container in Azure Cosmos DB - API for NoSQL
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-provision-autoscale-throughput
[CONFIRMED]
QUESTION 45
You have an Azure Cosmos DB Core (SQL) API account named account1.
In account1, you run the following query in a container that contains 100 GB of data.
SELECT *
FROM c
WHERE LOWER(c.categoryid) = "hockey"
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Recreating the container with the partition key set to /categoryId will improve the performance of the query: YES
- A partition key index will be created, and a filter on the partition key property lets the query be served by the
relevant partition instead of scanning across all partitions.
Reference:
Query an Azure Cosmos DB container
- https://docs.microsoft.com/en-us/azure/cosmos-db/sql/how-to-query-container
[CONFIRMED]
QUESTION 46
You have an Azure Cosmos DB Core (SQL) API account that is used by 10 web apps.
You need to analyze the data stored in the account by using Apache Spark to create machine learning models.
The solution must NOT affect the performance of the web apps.
Which two actions should you perform? Each correct answer presents part of the solution.
A. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
B. Create a private endpoint connection to the account.
C. In an Azure Synapse Analytics serverless SQL pool, create a view that uses OPENROWSET and the
CosmosDB provider.
D. Enable Azure Synapse Link for the account and Analytical store on the container.
E. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.oltp as the data source.
Correct Answer: AD
Explanation
Explanation/Reference:
- In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
- Enable Azure Synapse Link for the account and Analytical store on the container.
Explanation:
In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
- By creating a table in an Apache Spark pool in Azure Synapse that uses the cosmos.olap data source, you
can efficiently access and analyze the data stored in your Azure Cosmos DB account.
- This approach leverages the optimized capabilities of Apache Spark and allows you to perform machine
learning tasks on the data.
Enable Azure Synapse Link for the account and Analytical store on the container.
- Enabling Azure Synapse Link for your Azure Cosmos DB account and enabling the Analytical store on the
container provides seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
- It allows you to load and analyze data from Azure Cosmos DB using Apache Spark in Azure Synapse
Analytics without impacting the performance of your web apps.
productsDataFrame = spark.read.format("cosmos.olap")\
.option("spark.synapse.linkedService", "cosmicworks_serv")\
.option("spark.cosmos.container", "products")\
.load()
Write to Azure Cosmos DB
productsDataFrame.write.format("cosmos.oltp")\
.option("spark.synapse.linkedService", "cosmicworks_serv")\
.option("spark.cosmos.container", "products")\
.mode('append')\
.save()
------------
2. Select the Linked tab (1), expand the Azure Cosmos DB group (if you don't see this, select the Refresh
button above), then expand the WoodgroveCosmosDb account (2). Right-click on the transactions container
(3), select New notebook (4), then select Load to DataFrame (5).
3. In the generated code within Cell 1 (3), notice that the spark.read format is set to cosmos.olap. This instructs
Synapse Link to use the container's analytical store. If we wanted to connect to the transactional store, like to
read from the change feed or write to the container, we'd use cosmos.oltp instead.
Reference:
https://github.com/microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics/blob/main/Handson%20lab/
HOL%20step-by%20step%20-%20Cosmos%20DB%20real-time%20advanced%20analytics.md
[NOT-CONFIRMED]
QUESTION 47
You have an Azure Cosmos DB Core (SQL) account that has a single write region in West Europe.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- After running the script, there will be an instance of Azure Cosmos DB in North Europe that is writable: YES
- After running the script, the Azure Cosmos DB instance in West Europe will be writable: NO
- The cost of the Azure Cosmos DB account is unaffected by running the script: NO
Explanation:
Any change to a region with failoverPriority=0 triggers a manual failover and can only be done to an account
configured for manual failover.
failoverPriority
- All failoverPriority values must be unique.
- There must be exactly one failoverPriority value of zero (0).
- All remaining failoverPriority values can be any positive integer; they don't have to be contiguous, nor written
in any specific order, e.g. eastus=0, westus=1.
After running the script, there will be an instance of Azure Cosmos DB in North Europe that is writable: YES
- There is a change of the priority zero, effectively changing the write region.
- The second command is running a manual failover.
- Triggering a manual failover by changing priority zero causes write and read regions to be swapped.
After running the script, the Azure Cosmos DB instance in West Europe will be writable: NO
- Triggering a manual failover by changing priority zero causes write and read regions to be swapped.
The cost of the Azure Cosmos DB account is unaffected by running the script: NO
- Each region is billed additionally.
- When evaluating the cost of a solution that replicates across multiple regions, you should calculate the cost
as a multiple of the single-region cost relative to the number of replicas.
- The core formula is RU/s x # of regions.
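The core formula can be sketched directly. This models only the simple single-write-region case described above; multi-region write accounts are billed differently:

```python
def total_ru_throughput(ru_per_second: int, num_regions: int) -> int:
    """Total billable throughput for a geo-replicated account with a single
    write region: provisioned RU/s multiplied by the number of regions."""
    if num_regions < 1:
        raise ValueError("at least one region is required")
    return ru_per_second * num_regions
```

Adding a second region to a 10,000 RU/s account doubles the billable throughput to 20,000 RU/s, which is why the cost is affected by running the script.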
[CONFIRMED]
QUESTION 48
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account.
You are designing an application that will use db1.
In db1, you are creating a new container named coll1 that will store online orders.
{
"customerId" : "bba6fe24-6d97-4935-8d58-36baa4b8a0e1",
"orderId" : "9d7816e6-f401-42ba-ad05-0e03de35c0b8",
"orderDate" : "2021-09-29",
"orderDetails" : [ ]
}
You need to select the partition key value for coll1 to support the application. The solution must minimize costs.
A. orderId
B. customerId
C. orderDate
D. id
Correct Answer: B
Explanation
Explanation/Reference:
- customerId
All orders from one customer will be stored on the same partition which will be beneficial for reads.
If most of your workload's requests are queries and most of your queries have an equality filter on the same
property, this property can be a good partition key choice.
For example, if you frequently run a query that filters on UserID, then selecting UserID as the partition key
would reduce the number of cross-partition queries.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview
[CONFIRMED]
QUESTION 49
You have a container in an Azure Cosmos DB Core (SQL) API account. The container stores data about
families. Data about parents, children, and pets are stored as separate documents.
Each document contains the address of each family. Members of the same family share the same partition key
named familyId.
You need to update the address for each member of the same family that share the same address.
A. Update the document of each family member separately by using a patch operation
B. Update the document of each family member separately and set the consistency level to strong
C. Update the document of each family member by using a transactional batch operation
Correct Answer: C
Explanation
Explanation/Reference:
- Update the document of each family member by using a transactional batch operation
Keyword is transactional.
Atomicity
- Guarantees that all the operations done inside a transaction are treated as a single unit, and either all of
them are committed or none of them are.
Consistency
- Makes sure that the data is always in a valid state across transactions.
Isolation
- Guarantees that no two transactions interfere with each other – many commercial systems provide multiple
isolation levels that can be used based on the application needs.
Durability
- Ensures that any change that is committed in a database will always be present.
Azure Cosmos DB supports full ACID compliant transactions with snapshot isolation for operations within the
same logical partition key.
When the ExecuteAsync method is called, all operations in the TransactionalBatch object are grouped,
serialized into a single payload, and sent as a single request to the Azure Cosmos DB service.
- The Azure Cosmos DB request size limit constrains the size of the TransactionalBatch payload to not
exceed 2 MB, and the maximum execution time is 5 seconds.
- There's a current limit of 100 operations per TransactionalBatch to ensure the performance is as expected
and within SLAs.
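The two limits called out above (2 MB payload, 100 operations) can be checked before submitting a batch. A sketch of such a pre-flight check; the real enforcement happens server-side:

```python
import json

MAX_BATCH_OPERATIONS = 100
MAX_BATCH_BYTES = 2 * 1024 * 1024  # 2 MB request size limit

def validate_batch(operations: list) -> None:
    """Raise if a transactional batch would exceed Cosmos DB's documented limits.

    `operations` is a list of JSON-serializable operation bodies; all of them
    must target the same logical partition key for the batch to be atomic.
    """
    if len(operations) > MAX_BATCH_OPERATIONS:
        raise ValueError(
            f"batch has {len(operations)} operations; limit is {MAX_BATCH_OPERATIONS}")
    payload = json.dumps(operations).encode("utf-8")
    if len(payload) > MAX_BATCH_BYTES:
        raise ValueError(
            f"batch payload is {len(payload)} bytes; limit is {MAX_BATCH_BYTES}")
```

For the family-address scenario, every member document shares the familyId partition key, so all the address updates fit naturally into one such batch.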
Reference:
[CONFIRMED]
QUESTION 50
You need to create a data store for a directory of small and medium-sized businesses (SMBs).
Which design should you implement to optimize the data store for reading data?
A. In a directory container, create a document for each company and a document for each user. Use the
company ID as the partition key.
B. Create a user container that uses the user ID as the partition key and a company container that uses the
company ID as the partition key. Add the company ID to each user document.
C. In a user container, create a document for each user. Embed the company into each user document. Use
the user ID as the partition key.
D. In a company container, create a document for each company. Embed the users into company documents.
Use the company ID as the partition key.
Correct Answer: B
Explanation
Explanation/Reference:
- Create a user container that uses the user ID as the partition key and a company container that uses
the company ID as the partition key. Add the company ID to each user document.
Scenario: Each company will have less than 1,000 users. Some users have data that is greater than 2 KB.
- At more than 2 KB of data per user, 1,000 users per company is already about 1.95 MB.
- The maximum size of an item is 2 MB (UTF-8 length of JSON representation).
- Large document sizes up to 16 MB are supported with Azure Cosmos DB for MongoDB only.
So, a single document per company with the users embedded is not a good fit.
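The size arithmetic above can be made explicit. A sketch using the scenario's numbers and the documented 2 MB item cap:

```python
ITEM_SIZE_LIMIT_BYTES = 2 * 1024 * 1024  # 2 MB max item size (NoSQL API)

def embedded_company_bytes(users_per_company: int, user_bytes: int) -> int:
    """Approximate size of one company document with all users embedded."""
    return users_per_company * user_bytes

# 1,000 users at 2 KB each is about 1.95 MB, uncomfortably close to the
# 2 MB cap, which is why embedding users into company documents is ruled out.
```

Any growth in user count or user data would push such a document past the limit, so splitting users and companies into separate containers is the safer model.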
"Create a user container that uses the user ID as the partition key and a company container that uses the
company ID as the partition key. Add the company ID to each user document."
This design separates the users and companies into two different containers, with each container optimized for
querying their respective entities. The user container uses the user ID as the partition key, which allows for
efficient retrieval of individual users. The company container uses the company ID as the partition key, which
enables efficient retrieval of all users associated with a particular company.
Adding the company ID to each user document in the user container allows for easy and fast querying of users
by company. Whenever a company or user profile is selected, the details page for the company and all related
users can be retrieved by joining the data from the two containers using the company ID.
This design meets all the requirements specified in the prompt, including efficient reading of data, browsing by
company, and browsing users by company.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/set-throughput
[CONFIRMED]
QUESTION 51
You are building an application that will store data in an Azure Cosmos DB Core (SQL) API account. The
account uses the session default consistency level. The account is used by five other applications. The account
has a single read-write region and 10 additional read regions.
Approximately 20 percent of the items stored in the account are updated hourly.
Several users will access the new application from multiple devices.
You need to ensure that the users see the same item values consistently when they browse from the different
devices.
The solution must NOT affect the other applications.
Correct Answer: CE
Explanation
Explanation/Reference:
Session consistency guarantees that all read-and-write operations made by a client in a session are consistent
with each other.
A session token is used with session consistency in the Azure Cosmos DB service.
If you do not flow the Azure Cosmos DB SessionToken you could end up with inconsistent read results for a
period of time.
Associating a session token to the user account ensures that the same session is used across multiple
devices.
Using implicit session management when performing read requests ensures that the session token is
automatically passed along with the request.
To ensure that users see the same item values consistently when they browse from different devices, you
should associate a session token to the user account and use implicit session management when
performing read requests.
Incorrect answers:
[CONFIRMED]
QUESTION 52
You have a container that stores data about families.
SELECT
ch.name ?? ch.firstName AS childName,
f.parents,
ARRAY_LENGTH(ch.pets) ?? 0 AS numberOfPets,
ch.pets
FROM c AS f
JOIN ch IN f.children
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- Children who do not have parents defined will appear on the list: YES
- Children who have more than one pet will appear on the list multiple times: NO
- Children who do not have pets defined will appear on the list: YES
Explanation:
Children who do not have parents defined will appear on the list: YES
- The query still returns a value because the "c" in "FROM c AS f" represents the entire document, not only
the parents array.
[CONFIRMED]
QUESTION 53
You plan to store order data in an Azure Cosmos DB Core (SQL) API account. The data contains information
about orders and their associated items.
You need to develop a model that supports order read operations. The solution must minimize the number of
requests.
Correct Answer: D
Explanation
Explanation/Reference:
- Create a single database that contains one container. Create a separate document for each order
and embed the order items into the order documents
Creating a single database with one container, and embedding the order items into each order document, is the
best approach to support order read operations while minimizing the number of requests.
With this model, a single point read or query against an order document returns the order together with all of
its items, so no additional requests are needed to assemble an order.
By denormalizing data, your application may need to issue fewer queries and updates to complete common
operations.
Typically denormalized data models provide better read performance.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/sql/modeling-data
QUESTION 54
You plan to use a multi-region Azure Cosmos DB Core (SQL) API account to store data for a new application
suite.
Which consistency level should you configure for each application? To answer, select the appropriate options in
the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- Reporting: Eventual
- Purchasing: Strong
- Fulfillment: Consistent prefix
Explanation:
Strong
- All reads and writes are guaranteed to be always consistent across all replicas of the data.
- Strong consistency is the most restrictive level and can result in higher latencies.
Bounded Staleness
- A guarantee that the data read is at most a certain number of versions or time interval behind the latest
write.
- This consistency level may have lower latency than strong consistency.
Session
- All read-and-write operations made by a client in a session are guaranteed to be consistent with each other.
- This consistency level has lower latency than bounded staleness.
Consistent Prefix
- This guarantees that reads never see out-of-order writes: the data returned may lag behind the latest writes,
but it always reflects some prefix of them.
- This consistency level may have lower latency than session consistency.
Eventual
- This does not guarantee that the read operation will reflect the most recent write, but it guarantees that all
writes will eventually be reflected in the read operation.
- This consistency level may have the lowest latency of all.
[GPT]
QUESTION 55
You are designing a data model for an Azure Cosmos DB Core (SQL) API account.
What are the partition limits for request units per second (RU/s) and storage?
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- A physical partition supports a maximum of 10,000 RU/s and 50 GB of storage.
- A single logical partition is limited to 20 GB of storage.
[CONFIRMED]
QUESTION 56
You have an Azure Cosmos DB container named container1.
You need to insert an item into container1. The solution must ensure that the item is deleted automatically after
two hours.
How should you complete the item definition? To answer, select the appropriate options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
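The hot-area choices aren't reproduced here, but the key field is the item-level ttl property, set in seconds. A sketch with hypothetical id and category values; note that TTL must also be enabled on the container for the item-level value to take effect:

```python
TWO_HOURS_SECONDS = 2 * 60 * 60  # 7200

# Hypothetical item body; the "ttl" property is the point here.
item = {
    "id": "item1",
    "categoryId": "category1",
    "ttl": TWO_HOURS_SECONDS,  # item expires 7,200 seconds after its last write
}
```

The service deletes the item automatically once the TTL elapses, with no delete request from the application.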
Reference:
[CONFIRMED]
QUESTION 57
You have a container in an Azure Cosmos DB Core (SQL) API account. The container has a manual
throughput of 30,000 request units per second (RU/s).
Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
The container can scale [answer choice] RU/s without a partition split: 30,000
- In an Azure Cosmos DB Core (SQL) API account, each partition supports throughput of up to 10,000 request
units per second (RU/s).
- The container can scale to a maximum throughput of 30,000 RU/s without requiring a partition split: the
maximum RU/s per physical partition is 10,000, and a manual throughput of 30,000 RU/s is served by three
physical partitions handling 10,000 RU/s each.
Example
- 1 container with 5 physical partitions and 30,000 RU/s of manual provisioned throughput.
- We can increase the RU/s to 5 * 10,000 RU/s = 50,000 RU/s instantly.
- Similarly if we had a container with autoscale max RU/s of 30,000 RU/s (scales between 3000 - 30,000 RU/
s), we could increase our max RU/s to 50,000 RU/s instantly (scales between 5000 - 50,000 RU/s).
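The worked example follows from two facts: each physical partition serves at most 10,000 RU/s, and the partition count stays fixed until a split. A sketch:

```python
import math

MAX_RU_PER_PHYSICAL_PARTITION = 10_000

def physical_partitions_for(provisioned_rus: int) -> int:
    """Minimum physical partitions needed to serve the provisioned RU/s."""
    return math.ceil(provisioned_rus / MAX_RU_PER_PHYSICAL_PARTITION)

def max_rus_without_split(current_partitions: int) -> int:
    """Highest RU/s the container can scale to with its current partitions."""
    return current_partitions * MAX_RU_PER_PHYSICAL_PARTITION
```

A 30,000 RU/s container sits on 3 partitions and can scale to 30,000 RU/s instantly; with 5 partitions the instant ceiling becomes 50,000 RU/s, matching the example above.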
[CONFIRMED]
QUESTION 58
You have an Azure Cosmos DB database.
You plan to create a new container named container1 that will store product data and product category data
and will primarily support read requests.
Which two characteristics should you prioritize when selecting the partition key? Each correct answer presents
part of the solution.
A. unique
B. high cardinality
C. low cardinality
D. static
Correct Answer: BD
Explanation
Explanation/Reference:
- high cardinality
- static
High cardinality:
- A partition key with many distinct values spreads data and request volume evenly across physical partitions,
which avoids hot partitions and supports read scale-out.
Static:
- Selecting a static partition key means choosing a key that does not frequently change.
- This minimizes the need for updates to the partition key value and reduces maintenance effort.
- It also helps in maintaining a consistent distribution of data across partitions over time.
[CONFIRMED]
QUESTION 59
You have an Azure Cosmos DB account named account1 that has a default consistency level of session.
You need to ensure that the read operations of App1 can request either bounded staleness or consistent prefix
consistency.
What should you modify for each consistency level? To answer, select the appropriate options in the answer
area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Scenario:
- Account1 that has a default consistency level of session
To move from weaker to stronger consistency, update the default consistency for the Azure Cosmos
DB account.
Bounded Staleness
- A guarantee that the data read is at most a certain number of versions or time interval behind the latest
write.
- This consistency level may have lower latency than strong consistency.
Consistent Prefix
- This guarantees that reads never see out-of-order writes: the data returned may lag behind the latest writes,
but it always reflects some prefix of them.
- This consistency level may have lower latency than session consistency.
[CONFIRMED]
QUESTION 60
You are developing an application that will connect to an Azure Cosmos DB Core (SQL) API account. The
account has a single read-write region and one additional read region. The regions are configured for automatic
failover.
The account has the following connection strings. (Line numbers are included for reference only.)
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- If the primary write region fails, applications that write to the database must use a different connection string
to continue to use the service: NO
- The Primary Read-only SQL Connection String and the Secondary Read-Only SQL Connection String will
connect to different regions from an application running in the East US Azure region: NO
- Applications can choose from which region to read by setting the PreferredLocations property within their
connection properties: YES
Explanation:
If the primary write region fails, applications that write to the database must use a different connection string to
continue to use the service: NO
- The same connection string can still be used.
The Primary Read-only SQL Connection String and the Secondary Read-Only SQL Connection String will
connect to different regions from an application running in the East US Azure region: NO
- The AccountEndpoint for both is the same.
Applications can choose from which region to read by setting the PreferredLocations property within their
connection properties: YES
- The ConnectionPolicy.PreferredLocations property gets and sets the preferred locations (regions) for geo-
replicated database accounts in the Azure Cosmos DB service. For example, "East US" as the preferred
location.
- When EnableEndpointDiscovery is true and the value of this property is non-empty, the SDK uses the
locations in the collection in the order they are specified to perform operations, otherwise if the value of this
property is not specified, the SDK uses the write region as the preferred location for all operations.
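The documented fallback behaviour can be sketched as pure logic. Region names and endpoint strings below are hypothetical, and this is not the SDK's actual implementation:

```python
def pick_read_region(preferred: list, available: dict, write_region: str) -> str:
    """Mimic the documented PreferredLocations rule: use the first preferred
    region that is available; if the preferred list is empty or nothing
    matches, fall back to the write region."""
    for region in preferred:
        if region in available:
            return available[region]
    return available[write_region]
```

With preferred locations `["East US"]` reads go to the East US endpoint; with an empty list, all operations fall back to the write region, exactly as the reference text describes.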
Reference:
ConnectionPolicy.PreferredLocations Property
- https://docs.microsoft.com/en-us/dotnet/api/
microsoft.azure.documents.client.connectionpolicy.preferredlocations
QUESTION 61
You have a global ecommerce application that stores data in an Azure Cosmos DB Core (SQL) API account.
The account is configured for multi-region writes.
You need to create a stored procedure for a custom conflict resolution policy for a new container. In the event
of a conflict caused by a deletion, the deletion must always take priority.
A. isTombstone
B. conflictingItems
C. existingItem
D. incomingItem
Correct Answer: A
Explanation
Explanation/Reference:
- isTombstone
isTombstone
- Boolean indicating if the incomingItem is conflicting with a previously deleted item. When true,
existingItem is also null.
conflictingItems
- Array of the committed version of all items in the container that are conflicting with incomingItem on ID or
any other unique index properties.
existingItem
- The currently committed item. This value is non-null for an update and null for an insert or a delete.
incomingItem
- The item being inserted or updated in the commit that is generating the conflicts. Is null for delete
operations.
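Custom conflict resolution policies are implemented as server-side JavaScript stored procedures; the following is only a hedged Python sketch of the delete-wins decision logic using the four properties described above (the function name and return convention are illustrative, not the actual stored procedure contract):

```python
def resolve_conflict(incoming_item, existing_item, is_tombstone, conflicting_items):
    """Sketch of a delete-wins conflict resolution policy.

    is_tombstone is True when the incoming item conflicts with a
    previously deleted item; in that case existing_item is also None
    and the prior deletion must take priority, so the incoming write
    is discarded (returning None here models "keep the delete").
    """
    if is_tombstone:
        return None  # the earlier delete wins over the incoming write
    if incoming_item is None:
        return None  # the incoming operation is itself a delete; it wins
    # Otherwise keep the incoming write (simple last-writer fallback).
    return incoming_item

# A conflict against a tombstone resolves in favor of the deletion:
assert resolve_conflict({"id": "1"}, None, True, []) is None
# A plain update conflict keeps the incoming item:
assert resolve_conflict({"id": "1"}, {"id": "1"}, False, []) == {"id": "1"}
```

The key point the question tests: only isTombstone tells you the conflict involves a previously deleted item, which is what a delete-always-wins policy must check first.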
Reference:
[CONFIRMED]
QUESTION 62
You have an Azure Cosmos DB Core (SQL) API account named account1 that supports an application named
App1. App1 uses the consistent prefix consistency level.
You configure account1 to use a dedicated gateway and integrated cache.
You need to ensure that App1 can use the integrated cache.
Which two actions should you perform for App1? Each correct answer presents part of the solution.
Correct Answer: AD
Explanation
Explanation/Reference:
Explanation:
When you connect to Azure Cosmos DB using direct mode, your application connects directly to the Azure
Cosmos DB backend and bypasses the dedicated gateway, so the integrated cache is never used. To use the
integrated cache, App1 must connect through the dedicated gateway endpoint in gateway mode, and its
requests must use the session or eventual consistency level.
Session consistency is the most widely used consistency level for both single region and globally distributed
Azure Cosmos DB accounts.
- With session consistency, single client sessions can read their own writes.
- Any reads with session consistency that don't have a matching session token will incur RU charges.
- This includes the first request for a given item or query when the client application is started or restarted,
unless you explicitly pass a valid session token.
- Clients outside of the session performing writes will see eventual consistency when they are using the
integrated cache.
[CONFIRMED]
QUESTION 63
You have an Azure Cosmos DB Core (SQL) API account named account1 that has a single read-write region
and one additional read region. Account1 uses the strong default consistency level.
You have an application that uses the eventual consistency level when submitting requests to account1.
Correct Answer: C
Explanation
Explanation/Reference:
Overriding the default consistency level only applies to reads within the SDK client.
An account configured for strong consistency by default will still write and replicate data synchronously to every
region in the account.
When the SDK client instance or request overrides this with Session or weaker consistency, reads will be
performed using a single replica.
Reference:
[CONFIRMED]
QUESTION 64
You have a multi-region Azure Cosmos DB account named account1 that has a default consistency level of
strong.
You have an app named App1 that is configured to request a consistency level of session.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- Write operations will: Be applied only to the replica to which App1 is connected.
- Read operations will be performed by the replica: To which App1 is connected.
The default consistency level of account1 is strong, which means that all writes must be replicated to all regions
before they are considered committed. App1, however, requests the weaker session consistency level, which is
scoped to the client session: its operations are handled by the replica to which App1 is connected, and reads
that carry the session token are guaranteed to see that session's own writes.
This means that there is a potential for data inconsistency between regions as seen by other clients, but
session consistency is designed to provide a good balance between consistency and performance.
Therefore, the write operations of App1 will be applied to the replica to which App1 is connected, and the read
operations of App1 will be served from that same replica.
Reading from the replica with the lowest estimated latency instead would not meet the requirements of the
session consistency level, because that replica might not yet contain the session's own writes.
[GPT]
QUESTION 65
You have an Azure Cosmos DB Core (SQL) API account.
The change feed is enabled on a container named invoice.
You create an Azure function that has a trigger on the change feed.
A. only the changed properties and the system-defined properties of the updated items
B. only the partition key and the changed properties of the updated items
C. all the properties of the original items and the updated items
D. all the properties of the updated items
Correct Answer: D
Explanation
Explanation/Reference:
[CONFIRMED]
QUESTION 66
You have an Azure Synapse Analytics workspace named workspace1 that contains a serverless SQL pool.
You have an Azure Table Storage account that stores operational data.
You need to replace the Table storage account with Azure Cosmos DB Core (SQL) API.
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the
correct order.
Explanation
Explanation/Reference:
Explanation:
Steps:
2. Navigate to your Azure Cosmos DB account and open Azure Synapse Link under Integrations in the
left pane.
4. Create analytical store enabled containers to automatically start replicating your operational data from the
transactional store to the analytical store
References:
Configure and use Azure Synapse Link for Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/configure-synapse-link
[CONFIRMED]
QUESTION 67
You have an Azure Cosmos DB account named account1.
You have several apps that connect to account1 by using the account's secondary key.
You need to ensure that account1 will only allow apps to connect by using an Azure AD identity.
A. disableKeyBasedMetadataWriteAccess
B. disableLocalAuth
C. userAssignedIdentities
D. allowedOrigins
Correct Answer: B
Explanation
Explanation/Reference:
- disableLocalAuth
userAssignedIdentities
- Managed identities for Azure resources provide Azure services with an automatically managed identity in
Azure Active Directory.
- A managed identity from Azure Active Directory (Azure AD) allows your app to easily access other Azure AD-
protected resources such as Azure Key Vault.
- Creating an app with a user-assigned identity requires that you create the identity and then add its resource
identifier to your app config.
A system-assigned identity is tied to your application and is deleted if your app is deleted. An app can only have
one system-assigned identity.
A user-assigned identity is a standalone Azure resource that can be assigned to your app. An app can have
multiple user-assigned identities.
disableKeyBasedMetadataWriteAccess
- If you disable key-based metadata write access, changes to resources can be made only by a user who has
the proper Azure role and credentials; clients authenticating with the account keys can no longer perform
management operations.
disableLocalAuth
- Enforcing role-based access control as the only authentication method.
- This disables the account's primary/secondary keys.
allowedOrigins
- The API for NoSQL in Azure Cosmos DB supports Cross-Origin Resource Sharing (CORS) by using the
“allowedOrigins” header.
- After you enable the CORS support for your Azure Cosmos DB account, only authenticated requests are
evaluated to determine whether they're allowed according to the rules you've specified.
- With CORS support, resources from a browser can directly access Azure Cosmos DB through the
JavaScript library or directly from the REST API for simple operations.
Setting the disableLocalAuth property to true ensures that local authentication, such as using the secondary
key, is disabled for the Azure Cosmos DB account.
This means that only Azure AD identities (service principals) will be able to authenticate and access the
account.
On the other hand, the userAssignedIdentities property is used to specify the Azure AD identities that are
allowed to access the account.
While it can also be used to enforce authentication through Azure AD, it requires you to assign and manage
user-assigned identities explicitly.
In the context of the given scenario, if you want to ensure that only Azure AD identities can connect to
the account, the most direct and effective option would be to disable local authentication by setting
disableLocalAuth to true.
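As a hedged sketch, the property sits on the databaseAccounts resource in an ARM template; the fragment below (expressed as a Python dict standing in for the JSON, with locations and other required settings omitted) shows roughly where disableLocalAuth would go:

```python
# Minimal, non-deployable fragment of a Cosmos DB account resource,
# illustrating the disableLocalAuth property. Other required template
# properties (locations, apiVersion, etc.) are intentionally omitted.
cosmos_account = {
    "type": "Microsoft.DocumentDB/databaseAccounts",
    "name": "account1",
    "properties": {
        "databaseAccountOfferType": "Standard",
        # Rejects primary/secondary key auth; only Azure AD
        # (role-based) authentication is accepted.
        "disableLocalAuth": True,
    },
}

assert cosmos_account["properties"]["disableLocalAuth"] is True
```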
How to use managed identities for App Service and Azure Functions
- https://learn.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=ps%2Chttp#add-a-
system-assigned-identity
Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
- https://learn.microsoft.com/en-us/azure/cosmos-db/how-to-setup-rbac#disable-local-auth
How to use managed identities to connect to Azure Cosmos DB from an Azure virtual machine
- https://learn.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-vm-
managed-identities-cosmos?tabs=azure-resource-manager
[CONFIRMED]
QUESTION 68
You have a database named db1 in an Azure Cosmos DB Core (SQL) API account.
You have a third-party application that is exposed through a REST API.
You need to migrate data from the application to a container in db1 on a weekly basis.
A. Azure Migrate
B. Azure Data Factory
C. Database Migration Assistant
Correct Answer: B
Explanation
Explanation/Reference:
- Azure Data Factory
Azure Migrate
- Is a service designed for migrating on-premises servers and virtual machines to Azure.
Database Migration Assistant
- Is a tool that helps you assess and migrate databases to Azure, but it is not designed for migrating data
from a third-party application exposed through a REST API.
[CONFIRMED]
QUESTION 69
You have a container in an Azure Cosmos DB Core (SQL) API account.
Data update volumes are unpredictable.
You need to process the change feed of the container by using a web app that has multiple instances. The
change feed will be processed by using the change feed processor from the Azure Cosmos DB SDK. The
multiple instances must share the workload.
Which three actions should you perform? Each correct answer presents part of the solution.
Explanation/Reference:
Configure the same lease container configuration for all the instances:
- The monitored container: The monitored container has the data from which the change feed is generated.
Any inserts and updates to the monitored container are reflected in the change feed of the container.
- The lease container: The lease container acts as state storage and coordinates processing the change feed
across multiple workers. The lease container can be stored in the same account as the monitored container or
in a separate account.
- The compute instance: A compute instance hosts the change feed processor to listen for changes.
Depending on the platform, it might be represented by a virtual machine (VM), a Kubernetes pod, an Azure App
Service instance, or an actual physical machine. The compute instance has a unique identifier that's called the
instance name throughout this article.
- The delegate: The delegate is the code that defines what you, the developer, want to do with each batch of
changes that the change feed processor reads.
To take advantage of the compute distribution within the deployment unit, the only key requirements are that:
- All instances should have the same lease container configuration.
- All instances should have the same value for processorName.
- Each instance needs to have a different instance name (WithInstanceName).
This will ensure that all instances share the same processor and lease container configuration, while each
instance has a unique identifier to differentiate it from other instances.
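The three requirements above can be sketched as a small validation check (the dict keys below are illustrative names, not SDK properties; in the .NET SDK these map to the lease container reference, the processorName argument, and WithInstanceName):

```python
def shares_workload(instances):
    """Check the three requirements for change feed processor instances
    to share a workload: one shared lease container configuration, one
    shared processor name, and a unique instance name per instance."""
    lease_configs = {i["lease_container"] for i in instances}
    processors = {i["processor_name"] for i in instances}
    names = [i["instance_name"] for i in instances]
    return (len(lease_configs) == 1          # same lease container for all
            and len(processors) == 1         # same processorName for all
            and len(names) == len(set(names)))  # unique instance names

instances = [
    {"lease_container": "leases", "processor_name": "invoiceProcessor",
     "instance_name": "host-1"},
    {"lease_container": "leases", "processor_name": "invoiceProcessor",
     "instance_name": "host-2"},
]
assert shares_workload(instances)
```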
[DOUBT]
QUESTION 70
You have an Azure Cosmos DB for NoSQL container.
You need to protect the data stored in the container by using Always Encrypted. For each property, you must
use the strongest type of encryption and ensure that queries execute properly.
What is the strongest type of encryption that you can apply to each property? To answer, select the appropriate
options in the answer area.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
The Azure Cosmos DB service never sees the plain text of properties encrypted with Always Encrypted.
However, it still supports some querying capabilities over the encrypted data, depending on the encryption type
used for a property.
Deterministic encryption
- It always generates the same encrypted value for any given plain text value and encryption configuration.
- Using deterministic encryption allows queries to perform equality filters on encrypted properties.
- However, it may allow attackers to guess information about encrypted values by examining patterns in the
encrypted property. This is especially true if there's a small set of possible encrypted values, such as True/
False, or North/South/East/West region.
Randomized encryption
- It uses a method that encrypts data in a less predictable manner.
- Randomized encryption is more secure, but prevents queries from filtering on encrypted properties.
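The difference between the two types can be illustrated with a toy sketch; this is not the real Always Encrypted cipher (which uses authenticated encryption with customer-managed keys), only an HMAC-based stand-in showing why deterministic output permits equality filters and randomized output does not:

```python
import hmac
import hashlib
import os

KEY = b"demo-key"  # stand-in for a data encryption key

def deterministic_token(plaintext: bytes) -> bytes:
    # Same plaintext + same key -> same output, so the service can
    # evaluate equality filters by comparing tokens without decrypting.
    return hmac.new(KEY, plaintext, hashlib.sha256).digest()

def randomized_token(plaintext: bytes) -> bytes:
    # A fresh random nonce makes every encryption of the same value
    # differ, so equality comparison over the stored values is impossible.
    nonce = os.urandom(16)
    return nonce + hmac.new(KEY, nonce + plaintext, hashlib.sha256).digest()

# Deterministic: repeatable, hence filterable (and hence guessable).
assert deterministic_token(b"North") == deterministic_token(b"North")
# Randomized: never repeats, hence stronger but not filterable.
assert randomized_token(b"North") != randomized_token(b"North")
```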
[CONFIRMED]
QUESTION 71
You need to create a database in an Azure Cosmos DB Core (SQL) API account. The database will contain
three containers named coll1, coll2, and coll3. The coll1 container will have unpredictable read and write
volumes.
The coll2 and coll3 containers will have predictable read and write volumes.
The expected maximum throughput for coll1 and coll2 is 50,000 request units per second (RU/s) each.
Correct Answer: B
Explanation
Explanation/Reference:
- Create a provisioned throughput account. Set the throughput for coll1 to Autoscale. Set the
throughput for coll2 and coll3 to Manual.
To minimize costs while provisioning the collection in Azure Cosmos DB Core (SQL) API account.
- Create a provisioned throughput account.
- Set the throughput for coll1 to Autoscale.
- Set the throughput for coll2 and coll3 to Manual.
[CONFIRMED]
QUESTION 72
You have an Apache Spark pool in Azure Synapse Analytics that runs the following Python code in a notebook
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
Apache Spark supports various data formats such as Parquet, Avro, CSV, and JSON.
[DOUBT]
QUESTION 73
You have a database in an Azure Cosmos DB Core (SQL API) account. The database contains a container
named container1. The indexing mode of container1 is set to none.
You configure Azure Cognitive Search to extract data from container1 and make the data searchable.
You discover that the Cognitive Search index is missing all the data from the Azure Cosmos DB index.
Correct Answer: D
Explanation
Explanation/Reference:
To make data searchable in Azure Cognitive Search, the indexing mode of the container in Azure Cosmos DB
Core (SQL API) account must be set to consistent or lazy.
Note: In Azure Cognitive Search, a search index is your searchable content, available to the search engine for
indexing, full text search, and filtered queries.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/index-policy
https://docs.microsoft.com/en-us/azure/search/search-what-is-an-index
[CONFIRMED]
QUESTION 74
You have an Azure Cosmos DB Core (SQL) API account that frequently receives the same three queries.
You need to configure indexing to minimize RUs consumed by the queries.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation
Spatial indices enable efficient queries on geospatial objects such as - points, lines, polygons, and
multipolygon. These queries use ST_DISTANCE, ST_WITHIN, ST_INTERSECTS keywords.
- SELECT * FROM container c WHERE ST_DISTANCE(c.property, { "type": "Point", "coordinates": [0.0,
10.0] }) < 40
Composite indexes increase the efficiency when you're performing operations on multiple fields.
- SELECT * FROM container c ORDER BY c.property1, c.property2
- SELECT * FROM container c WHERE c.property1 = 'value' AND c.property2 > 'value'
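A hypothetical indexing policy serving the queries above might look like the following (expressed as a Python dict mirroring the JSON policy; the /property paths are taken from the sample queries and are illustrative):

```python
# Sketch of an indexing policy combining a spatial index for the
# ST_DISTANCE query with a composite index for the ORDER BY /
# multi-property filter queries shown above.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "compositeIndexes": [
        [
            {"path": "/property1", "order": "ascending"},
            {"path": "/property2", "order": "ascending"},
        ]
    ],
    "spatialIndexes": [
        {"path": "/property/*", "types": ["Point"]}
    ],
}

# A composite index must list every property the query sorts/filters on.
assert len(indexing_policy["compositeIndexes"][0]) == 2
```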
QUESTION 75
You have the following Azure Resource Manager (ARM) template.
You plan to deploy the template in incremental mode.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- When the template is deployed, an Azure Cosmos DB account named acct01 will be created if the account
does NOT exist: NO
- When the template is deployed, an container named mycontainer in mydb will be created if the container
does NOT exist: YES
- When the template is deployed, if mycontainer exists and has a partition key on name, the partition key will
change to companyid: NO
Explanation:
When the template is deployed, an Azure Cosmos DB account named acct01 will be created if the account
does NOT exist: NO
- The ARM template doesn't define the account resource, so deploying in incremental mode won't create the account.
When the template is deployed, an container named mycontainer in mydb will be created if the container does
NOT exist: YES
- The ARM template defines the container resource, so the container is created if it does not exist.
When the template is deployed, if mycontainer exists and has a partition key on name, the partition key will
change to companyid: NO
- You choose a partition key when you create a container in Azure Cosmos DB.
- Note that you can't change the partition key of a container.
- If you do need to change it, you need to migrate the container data to a new container with the correct key.
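As a hedged sketch, the container resource in such a template (shown here as a Python dict mirroring the JSON, with the names taken from this question's scenario) fixes the partition key at creation; redeploying with a different path does not change an existing container's key:

```python
# Illustrative fragment of the container resource from the template.
# The partition key is set when the container is created and is
# immutable afterwards.
container_resource = {
    "type": "Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers",
    "name": "acct01/mydb/mycontainer",
    "properties": {
        "resource": {
            "id": "mycontainer",
            "partitionKey": {"paths": ["/companyid"], "kind": "Hash"},
        }
    },
}

key = container_resource["properties"]["resource"]["partitionKey"]
assert key["paths"] == ["/companyid"]
```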
[CONFIRMED]
QUESTION 76
You have an Azure Cosmos DB Core (SQL) API account that has multiple write regions.
You need to receive an alert when requests that target the database exceed the available request units per
second (RU/s).
A. Data Usage.
B. Provisioned Throughput.
C. Total Request Units.
D. Document Count.
Correct Answer: C
Explanation
Explanation/Reference:
Explanation:
Data Usage
- Measures the amount of data stored in the database.
Provisioned Throughput
- Measures the provisioned throughput of the database.
Document Count
- Measures the number of documents in the database.
Alert type
- Rate limiting on request units (metric alert): Alerts if the container or a database has exceeded the
provisioned throughput limit.
- Region failed over: When a single region is failed over. This alert is helpful if you didn't enable service-
managed failover.
- Rotate keys (activity log alert): Alerts when the account keys are rotated. You can update your application
with the new keys.
[CONFIRMED]
QUESTION 77
You have an Azure Cosmos DB Core (SQL) API account that has multiple write regions.
You need to receive an alert when requests that target the database exceed the available request units per
second (RU/s).
A. Data Usage.
B. Metadata Requests.
C. Region Removed.
D. Region Added.
Correct Answer: B
Explanation
Explanation/Reference:
- Metadata Requests.
Explanation:
You can use Metric alerts in Azure Monitor to receive an alert when requests that target the database exceed
the available request units per second (RU/s).
DataUsage (Data Usage)
- Used to monitor total data usage at container and region, minimum granularity should be 5 minutes.
- Data Usage shows the amount of data stored in the database.
- This includes both read and write operations.
- This monitors the amount of data that is being used by the Azure Cosmos DB account, and not the number
of requests that are being made to the account.
Incorrect answers:
Region Added
- Track the addition of regions from the Azure Cosmos DB account
- When a new region is added, the total provisioned throughput for the account is increased by the amount of
RU/s that was provisioned for the existing regions.
Region Removed
- Track the removal of regions from the Azure Cosmos DB account.
-----
Reference:
[PRE-CONFIRMED]
QUESTION 78
You configure a backup for an Azure Cosmos DB Core (SQL) API account as shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information
presented in the graphic.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- The current backup interval and retention policy provide protection for 12 hours
- In case of emergency, you must create a support ticket to restore the backup.
Explanation:
By default, a full backup is taken every 4 hours (240 minutes), and only the last two backups are stored.
Both the backup interval and the retention period can be configured in the Azure portal.
If you accidentally delete your database or a container, you can file a support ticket or call the Azure support to
restore the data from automatic online backups.
Azure support is available for selected plans only, such as Standard, Developer, and plans higher than
those tiers. Azure support isn't available with the Basic plan.
In this scenario:
- Backup Interval is 120 minutes = RPO is 2 hours
- Backup Retention is 12 hours = retention policy
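The arithmetic for this scenario can be checked directly (the mapping of interval to worst-case data loss is the standard RPO reading of periodic backups):

```python
# Values from the exhibit in this scenario.
backup_interval_min = 120   # a full backup every 2 hours
retention_hours = 12        # backups kept for 12 hours

# Worst-case data loss (RPO) is one backup interval.
rpo_hours = backup_interval_min / 60
# The retention setting bounds how far back a restore can reach.
protection_window_hours = retention_hours
# Number of periodic backups that fit inside the retention window.
backups_retained = retention_hours * 60 // backup_interval_min

assert rpo_hours == 2.0
assert protection_window_hours == 12
assert backups_retained == 6
```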
References:
[CONFIRMED]
QUESTION 79
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account named account1.
You configure container1 to use Always Encrypted by using an encryption policy as shown in the C# and the
Java exhibits.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Explanation:
An application can be allowed to read the creditcard property while being restricted from reading the SSN
property: YES
- Reading documents when only a subset of properties can be decrypted.
- In situations where the client does not have access to all the CMK used to encrypt properties, only a subset
of properties can be decrypted when data is read back. For example, if property1 was encrypted with key1 and
property2 was encrypted with key2, a client application that only has access to key1 can still read data, but not
property2. In such a case, you must read your data through SQL queries and project away the properties that
the client can't decrypt: SELECT c.property1, c.property3 FROM c.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/how-to-always-encrypted
[NOT-CONFIRMED]
QUESTION 80
You work at the Intel company, which has clients across four different countries: hundreds of thousands of
clients in countries 1 and 2, and a few thousand clients in countries 3 and 4. Requests for every country total
approximately 50,000 RU/s every hour of the day.
The application team of the company suggests using countryId as the partition key for this container.
A. This partition key will cause fan out when filtering by countryId
B. This partition key could cause storage hot partitions
C. This partition key will prevent throughput hot partitions
Correct Answer: B
Explanation
Explanation/Reference:
In the given scenario, countries 1 and 2 have hundreds of thousands of clients. Storage hot partitions are likely
for countries 1 and 2 because they would hold the bulk of the stored data.
Incorrect answers:
"This partition key will cause fan out when filtering by countryId"
- The suggested partition key prevents fan-out, because a query filtering on countryId always targets a single partition.
[WHIZLABS]
QUESTION 81
Consider the following two statements.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- A container with provisioned throughput can't be converted to a shared database container: YES
- A shared database container can’t be converted to have a dedicated throughput: YES
You will need to move the data to a container with the desired throughput setting. (Container copy jobs for
NoSQL and Cassandra APIs help with this process.)
QUESTION 82
Gateway mode using the dedicated gateway is one of the ways to connect to an Azure Cosmos DB account.
Correct Answer: A
Explanation
Explanation/Reference:
A dedicated gateway cluster can have up to five nodes by default and you can add or remove nodes at any
time.
All dedicated gateway nodes within your account share the same connection string.
Once created, you can add or remove dedicated gateway nodes, but you can't modify the size of the nodes. To
change the size of your dedicated gateway nodes you can deprovision the cluster and provision it again in a
different size. This will result in a short period of downtime unless you change the connection string in your
application to use the standard gateway during reprovisioning.
[WHIZLABS]
QUESTION 83
You need to create a .NET console app to manage data in the Azure Cosmos DB SQL API account. As a part
of the procedure, you need to create a database.
A. BuildDatabaseAsync
B. CreateCosmosDatabase
C. CreateContainerAsync
D. CreateDatabaseAsync
Correct Answer: D
Explanation
Explanation/Reference:
- CreateDatabaseAsync
A database is the logical container of items that are partitioned across containers. Either
the CreateDatabaseAsync or CreateDatabaseIfNotExistsAsync method of the CosmosClient class can be
used to create a database.
CreateContainerAsync
- Is used to create a container, not a database.
[WHIZLABS]
QUESTION 84
You are part of a development team. Your team maintains a set of aggregate metadata items that must be
updated whenever an item is successfully updated or created within your container.
Which of the following server-side programming constructs would you use for this task?
A. User-defined function
B. System-defined function
C. Pre-trigger
D. Post-trigger
Correct Answer: D
Explanation
Explanation/Reference:
- Post-trigger
A post-trigger runs its logic after the item has been successfully updated or created. At this point, you can
update the aggregate metadata items.
User-defined function
- Is used only within the context of a SQL query.
System-defined function
- System-defined functions are also called library functions, standard functions, or pre-defined functions.
Their implementation is already provided by the system.
Pre-trigger
- Will run its logic too early before the item gets successfully created or updated.
Post-trigger
- Runs its logic after the item has been successfully updated or created.
How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB
- https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/how-to-write-stored-procedures-triggers-udfs
[WHIZLABS]
QUESTION 85
Which of the following Azure CLI command can you use to initiate a manual failover to another region in your
Azure Cosmos DB SQL API account?
Correct Answer: C
Explanation
Explanation/Reference:
You need to use the az cosmosdb failover-priority-change --failover-policies command, ensuring that you
specifically change the priority of the region that has priority = 0.
Incorrect answers:
[WHIZLABS]
QUESTION 86
Which of the following is the most optimal method of managing referential integrity between different containers
in Azure Cosmos DB?
A. Creating an Azure Cosmos DB Function that will periodically search for changes done to your containers
and replicate those changes to the referenced containers
B. Using an Azure Function trigger for Azure Cosmos DB to leverage on the change feed processor to update
the referenced containers
C. When any changes are created by your app to one container, also ensure it duplicates the changes on the
referenced container
Correct Answer: B
Explanation
Explanation/Reference:
- Using an Azure Function trigger for Azure Cosmos DB to leverage on the change feed processor to
update the referenced containers
The change feed keeps track of all inserts and updates on a container. Creating an Azure Function trigger on
the change feed allows you to propagate those changes to any other containers that also require them.
Incorrect answers:
"Creating an Azure Cosmos DB Function that will periodically search for changes done to your containers and
replicate those changes to the referenced containers"
- Azure Cosmos DB functions don't modify data, and their scope is limited to the container in which they are
defined.
"When any changes are created by your app to one container, also ensure it duplicates the changes on the
referenced container"
- While it is possible to add this logic to your app, doing so adds complexity to your environment, and it won't
guarantee that any changes carried out outside of the app would be replicated to the other containers.
[WHIZLABS]
QUESTION 87
Range, spatial, and composite are three different types of indexes supported by Azure Cosmos DB.
Which of these indexes are used for geospatial objects like points, polygons, LineStrings, etc?
A. Range Indexes
B. Composite Indexes
C. Spatial Indexes
Correct Answer: C
Explanation
Explanation/Reference:
- Spatial Indexes
Spatial Indexes are used for geospatial objects. Currently, it supports Points, Polygons, MultiPolygons, and
LineStrings. Spatial indexes can also be used on GeoJSON objects.
Use composite indexes when you need to enhance the efficiency of queries that execute operations on several
fields.
[WHIZLABS]
QUESTION 88
Range, spatial, and composite are three different types of indexes supported by Azure Cosmos DB.
Which of these indexes are used for performing operations on multiple fields?
A. Range Indexes
B. Composite Indexes
C. Spatial Indexes
Correct Answer: B
Explanation
Explanation/Reference:
- Composite Indexes
Spatial indices enable efficient queries on geospatial objects such as - points, lines, polygons, and
multipolygon. These queries use ST_DISTANCE, ST_WITHIN, ST_INTERSECTS keywords.
Composite indexes increase the efficiency when you're performing operations on multiple fields.
[WHIZLABS]
QUESTION 89
The server-side latency direct and server-side latency gateway are two metrics that can be used to view the
server-side latency of any operation in two different connection modes.
Which of the following API(s) in Azure Cosmos DB supports both of these metrics?
A. SQL
B. MongoDB
C. Cassandra
D. Gremlin
E. Table
Correct Answer: AE
Explanation
Explanation/Reference:
- SQL
- Table
Only the SQL (Core) and Table APIs support both Server-Side Latency Direct and Server-Side Latency Gateway.
[WHIZLABS]
QUESTION 90
You are working on an app that will save the device metrics produced every minute by various IoT devices.
You were about to create two containers for these identified entities but suddenly your data engineer suggests
placing both entities in a single container, not in two different containers.
A. Create a document with the deviceid property and other device data, and add a property called 'type' having
the value 'device'. Also, create another document for each metrics data collected using the devicemetricsid
property.
B. Create a document with the deviceid property and other device data. After that, embed each metrics
collection into the document with the devicemetricsid property and all the metrics data.
C. Create a document with the deviceid property and other device data, and add a property called 'type' with
the value 'device'. Create another document for each metrics data using the devicemetricsid and deviceid
properties, and add a property called 'type' with the value 'devicemetrics'.
Correct Answer: C
Explanation
Explanation/Reference:
- Create a document with the deviceid property and other device data, and add a property called 'type' with
the value 'device'. Create another document for each metrics data using the devicemetricsid and deviceid
properties, and add a property called 'type' with the value 'devicemetrics'.
If you create two different types of documents including the property ‘deviceid’ for both entities, it will ease
referencing both entities inside the container.
Embedding the metrics data collected every minute would create a high volume of requests and make the device document grow continually.
Incorrect Answers:
"Create a document with the deviceid property and other device data, and add a property called ‘type’ having
the value ‘device’. Also, create another document for each metrics data collected using devicemetricsid
property."
- Creating two types of documents but not including the deviceid for the device metrics document would make
it impossible to reference the device actually referenced by the collected metrics.
"Create a document with the deviceid property and other device data. After that embed each metrics collection
into the document with the devicemetricsid property and all the metrics data."
- Embedding the metrics data collected every minute would create a high volume of requests and make the device document grow continually.
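The recommended shape can be sketched as two illustrative documents (field values like the model and temperature are made up for the example; the discriminating 'type' property and the shared deviceid are the points being tested):

```python
# Two document shapes stored in one container, discriminated by a
# "type" property and linked by deviceid.
device_doc = {
    "id": "device-42",
    "deviceid": "device-42",
    "type": "device",
    "model": "thermostat-x",          # illustrative extra device data
}

metrics_doc = {
    "id": "metrics-42-202401010900",
    "devicemetricsid": "metrics-42-202401010900",
    "deviceid": "device-42",          # reference back to the device
    "type": "devicemetrics",
    "temperature_c": 21.5,            # illustrative metric value
}

# Both shapes carry deviceid, so metrics can be joined to their device,
# and "type" distinguishes the two entities in one container.
assert metrics_doc["deviceid"] == device_doc["deviceid"]
assert {device_doc["type"], metrics_doc["type"]} == {"device", "devicemetrics"}
```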
[WHIZLABS]
QUESTION 91
In a shared throughput database, the containers share the throughput (Request Units per Second) allocated to
that database.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- 25
- 400
With manually provisioned throughput, you cannot provision less than 400 Request Units per second on the
database, and throughput must be changed in increments of 100.
With manually provisioned throughput, there can be up to 25 (twenty-five) containers sharing a minimum of 400
Request Units per second on the database.
With autoscale provisioned throughput, there can be up to 25 (twenty-five) containers with an autoscale
maximum of 4,000 Request Units per second (scaling between 400 and 4,000 Request Units per second).
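The 400 to 4,000 range above follows the general autoscale rule: throughput scales between 10% of the configured maximum RU/s and the maximum itself. A minimal sketch in plain Python, for illustration only:

```python
def autoscale_range(max_ru: int) -> tuple[int, int]:
    """Return the (min, max) RU/s range for a given autoscale maximum.

    Autoscale throughput scales between 10% of the configured
    maximum RU/s and the maximum itself.
    """
    return (max_ru // 10, max_ru)

# The scenario above: a 4,000 RU/s autoscale maximum scales between 400 and 4,000.
print(autoscale_range(4000))  # (400, 4000)
```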
[WHIZLABS]
QUESTION 92
While working with .NET SDK v3, you need to enable multi-region writes in your app that uses Azure Cosmos DB.
WestUS2 is the region where your application will be deployed and where Azure Cosmos DB is replicated.
Which of the following attributes would you set to WestUS2 to achieve the goal?
A. ApplicationRegion
B. setCurrentLocation
C. setPreferredLocations
D. connection_policy.PreferredLocations
Correct Answer: A
Explanation
Explanation/Reference:
- ApplicationRegion
To enable multi-region writes in your application that uses Azure Cosmos DB while working in .NET SDK v3,
you would set the ApplicationRegion attribute to WestUS2.
setCurrentLocation
- Is used in .NET SDK v2.
- Is used to specify the region where your application is currently connected. This is not relevant to enabling
multi-region writes.
setPreferredLocations
- Is used in Async Java V2 SDK.
connection_policy.PreferredLocations
- Is used in Python SDK.
[WHIZLABS]
QUESTION 93
While chairing a team session, you are telling the team members about the Azure Cosmos DB Emulator.
Which of the following statements is not true about the Azure Cosmos DB Emulator?
A. Azure Cosmos DB Emulator offers an emulated environment that would run on the local developer
workstation.
B. The emulator supports only a single fixed account and a well-known primary key. You can even regenerate
the key while utilizing the Azure Cosmos DB Emulator.
C. The emulator doesn’t offer multi-region replication.
D. The emulator doesn’t offer various Azure Cosmos DB consistency levels as offered by the cloud service.
Correct Answer: B
Explanation
Explanation/Reference:
- The emulator supports only a single fixed account and a well-known primary key. You can even
regenerate the key while utilizing the Azure Cosmos DB Emulator.
The emulator supports only a single fixed account and a well-known primary key, but you cannot regenerate
the key while using the Azure Cosmos DB Emulator.
The Azure Cosmos DB Emulator offers an emulated environment that runs on the local developer workstation.
The emulator doesn't offer the various Azure Cosmos DB consistency levels offered by the cloud service.
https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator
[WHIZLABS]
QUESTION 94
You need to configure the consistency level on a per-request basis.
Which of the following C# classes would you use in the .NET SDK for the Azure Cosmos DB SQL API?
A. CosmosClientOptions
B. CosmosConfigOptions
C. ItemRequestOptions
D. Container
Correct Answer: C
Explanation
Explanation/Reference:
- ItemRequestOptions
ItemRequestOptions
- The ItemRequestOptions class contains various session token and consistency level configuration properties
on a per-request basis.
CosmosClientOptions
- CosmosClientOptions class configures the overall client used in the SDK in several operations.
CosmosConfigOptions
- CosmosConfigOptions is not a valid class in the .NET SDK.
Container
- The Container class has methods for performing operations on a container but does not provide any way to
configure per-request options.
[WHIZLABS]
QUESTION 95
Which of the following functions in Spark SQL separates an array's elements into multiple rows with positions
and utilizes the column names 'pos' for position and 'col' for the elements of the array?
A. explode()
B. posexplode()
C. preexplode()
D. Separate()
Correct Answer: B
Explanation
Explanation/Reference:
- posexplode()
posexplode()
- Is a function in Spark SQL that separates the array’s elements into multiple rows with positions and utilizes
the column names pos for position and col for elements of the array.
explode()
- Separates the array's elements into multiple rows and uses the default column name col for the array's
elements, without a position column.
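The difference can be illustrated in plain Python (a simulation of the Spark SQL semantics, not Spark itself):

```python
def explode(values):
    """Mimic Spark SQL explode(): one row per element, default column name 'col'."""
    return [{"col": v} for v in values]

def posexplode(values):
    """Mimic Spark SQL posexplode(): adds a 0-based 'pos' column alongside 'col'."""
    return [{"pos": i, "col": v} for i, v in enumerate(values)]

print(explode(["a", "b"]))     # [{'col': 'a'}, {'col': 'b'}]
print(posexplode(["a", "b"]))  # [{'pos': 0, 'col': 'a'}, {'pos': 1, 'col': 'b'}]
```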
[WHIZLABS]
QUESTION 96
Which of the following methods of the ChangeFeedProcessor class is invoked to start consuming changes from
the change feed?
A. StartAsync
B. GetChangeFeedProcessorBuilder<>
C. Build
Correct Answer: A
Explanation
Explanation/Reference:
- StartAsync
StartAsync
- Is a method in the ChangeFeedProcessor class and is invoked to start consuming changes from the change
feed.
GetChangeFeedProcessorBuilder<>
- This is a method of the Container class that creates the builder used to eventually build a change feed
processor.
Build
- The build method is invoked at the end of creating a change feed processor (or estimator) and isn’t a method
of the ChangeFeedProcessor class.
[WHIZLABS]
QUESTION 97
While defining a custom indexing policy, which of the following path expressions can be used to define an included
path that includes all possible properties from the root of any JSON document?
A. /[]
B. /?
C. /*
D. /()
Correct Answer: C
Explanation
Explanation/Reference:
- /*
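As an illustrative fragment, an indexing policy whose included path covers every property from the root looks like this:

```json
{
    "indexingMode": "consistent",
    "includedPaths": [
        { "path": "/*" }
    ],
    "excludedPaths": []
}
```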
[WHIZLABS]
QUESTION 98
In the Azure portal, which of the following tabs inside the Insights pane demonstrates the percentage (%) of
successful requests out of the total requests per hour?
A. Storage
B. Requests
C. System
D. Availability
Correct Answer: D
Explanation
Explanation/Reference:
- Availability
The availability tab demonstrates the % of successful requests out of the total requests per hour. Azure
Cosmos DB SLAs define the success rate.
Incorrect answers:
Storage
- The storage tab demonstrates the size of data and index usage over the specific time period.
Requests
- The requests tab demonstrates the total number of requests processed by operation type, by status code,
and the count of failed requests.
System
- The system tab demonstrates the number of metadata requests served by the primary partition.
[WHIZLABS]
QUESTION 99
The Azure Cosmos DB (SQL API) connector supports various authentication types like Key authentication,
Service principal authentication, System-assigned managed identity authentication and User-assigned
managed identity authentication.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
Azure Cosmos DB for NoSQL using Service Connector supports system-assigned managed identity, user-
assigned managed identity, secret/connection string and service principal authentication types for Azure App
Service, Azure Container Apps and Azure Spring Apps.
However, the three authentication types, i.e. service principal authentication, system-assigned managed
identity authentication, and user-assigned managed identity authentication, are currently not supported in the
data flow.
Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
- https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-cosmos-db
[WHIZLABS]
QUESTION 100
You want to disable all indexing for a container in Azure Cosmos DB.
Which of the following property of the indexing policy would help you in meeting the goal?
A. excludedPaths
B. includedPath
C. automatic
D. indexingMode
Correct Answer: D
Explanation
Explanation/Reference:
- indexingMode
None and consistent are two indexing modes supported by Azure Cosmos DB.
If you set the indexingMode property to none, it disables all indexing.
Incorrect answers:
excludedPaths
- The excludedPaths property specifies the paths that are excluded from the index; it does not disable
indexing entirely.
includedPath
- The includedPaths property specifies the paths that are included in the index; it does not disable
indexing entirely.
automatic
- If you disable the automatic indexing by assigning it a value as false, it won’t disable all indexing for the
container.
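A minimal indexing policy that disables all indexing for the container (illustrative fragment):

```json
{
    "indexingMode": "none"
}
```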
[WHIZLABS]
QUESTION 101
Your development team already has an existing Azure Cosmos DB account, database, and container. One of
the senior executives asks you to configure Azure Cosmos DB for another team with unique and different
needs for regional replication and default consistency levels.
Which of the following should you create for the other team?
A. Database
B. Container
C. Account
Correct Answer: C
Explanation
Explanation/Reference:
- Account
The replication settings and default consistency level configured on an Azure Cosmos DB account apply to all
databases and containers under that account. Since the other team has unique needs for regional replication
and default consistency levels, you must create a new account to support the different behavior.
Incorrect answers:
Database
- If you create a new database, the global replication settings and default consistency level will remain the
same between both teams.
Container
- If you create a new container, the global replication settings and default consistency level will remain the
same between both teams.
[WHIZLABS]
QUESTION 102
The company's web development team successfully estimates the throughput needs of your app within a
5% margin of error, with no significant variance over time. While running the app in production, the team
expects the workload to be extremely stable.
With both these inputs, which of the following throughput options would you choose?
A. Serverless
B. Standard
C. Autoscale
D. Hybrid
Correct Answer: B
Explanation
Explanation/Reference:
- Standard
Incorrect answers:
Serverless
- Serverless is a better option for workloads with widely varying traffic and low average-to-peak traffic ratios.
Autoscale
- Autoscale throughput suits best for unpredictable traffic.
Hybrid
- Hybrid is not a valid throughput option.
[WHIZLABS]
QUESTION 103
You have an application that queries an Azure Cosmos DB Core (SQL) API account.
SELECT * FROM c WHERE c.name = @name ORDER BY c.name DESC, c.timestamp DESC
SELECT * FROM c WHERE c.name = @name AND c.timestamp ORDER BY c.name ASC, c.timestamp
ASC
You need to minimize the request units (RUs) consumed by reads and writes.
Correct Answer: B
Explanation
Explanation/Reference:
- a composite index for (name ASC, timestamp ASC) and a composite index for (name DESC,
timestamp DESC).
Explanation:
To minimize the request units (RUs) consumed by reads and writes, you should create a composite index for
(name ASC, timestamp ASC) and a composite index for (name DESC, timestamp DESC) .
This is because the two queries you provided have different ORDER BY clauses on the same two properties:
name and timestamp, one in descending order and the other in ascending order.
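The corresponding compositeIndexes section of the indexing policy would look like this (illustrative fragment):

```json
{
    "compositeIndexes": [
        [
            { "path": "/name", "order": "ascending" },
            { "path": "/timestamp", "order": "ascending" }
        ],
        [
            { "path": "/name", "order": "descending" },
            { "path": "/timestamp", "order": "descending" }
        ]
    ]
}
```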
[GPT]
QUESTION 104
You have an Azure Cosmos DB database named database1 that contains a container named container1. The
container1 container stores product data and has the following indexing policy.
{
    "indexingMode": "consistent",
    "includedPaths": [
        { "path": "/product/category/?" },
        { "path": "/product/brand/?" }
    ],
    "excludedPaths": [
        { "path": "/*" },
        { "path": "/product/brand" }
    ]
}
Which of the following paths will be indexed?
A. /product/category
B. /product/brand/tailspin
C. /product/brand
D. /product/[]/category
Correct Answer: A
Explanation
Explanation/Reference:
- /product/category
In the indexing policy you provided, includedPaths specifies that the /product/category/? and /product/brand/?
paths should be indexed.
However, excludedPaths specifies that all paths (/*) and the /product/brand path should be excluded from
indexing.
Since excludedPaths takes precedence over includedPaths, the only path that will be indexed is the
/product/category/? path.
The ? at the end of the path indicates that all immediate children of the /product/category property will be
indexed.
[CONFIRMED]
Question Set 1
QUESTION 1
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Synapse pipeline that uses Azure Cosmos DB Core (SQL) API as the input and
Azure Blob Storage as the output.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
An Azure Synapse pipeline is not designed for copying data from Azure Cosmos DB Core (SQL) API to Azure
Blob Storage, and this solution does not make the contents of container1 available as reference data for an
Azure Stream Analytics job.
Instead, "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."
If you want to analyze Azure Cosmos DB data with Azure Synapse Analytics, you can use Azure Synapse Link:
1. Enable Azure Synapse Link for your Azure Cosmos DB account.
2. Enable Azure Synapse Link for your containers.
3. Connect your Azure Cosmos DB database to an Azure Synapse workspace.
4. Query the analytical store using Azure Synapse Analytics.
Reference:
https://learn.microsoft.com/en-us/azure/cosmos-db/configure-synapse-link
[GPT]
QUESTION 2
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input
and Azure Blob Storage as the output.
A. Yes
B. No
Correct Answer: A
Explanation
Explanation/Reference:
- YES
Explanation
This solution allows you to copy the data from container1 to a blob storage account, which can then be used as
a reference data input for your Stream Analytics job.
Creating an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and Azure
Blob Storage as the output is a valid option because Azure Data Factory can copy or transform reference data
into Blob Storage from cloud-based and on-premises data stores.
You can create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the input and
Azure Blob Storage as the output. You can follow these steps:
1. Create a data factory.
2. Create a pipeline with a copy activity.
3. Test run the pipeline.
4. Trigger the pipeline manually.
5. Trigger the pipeline on a schedule.
Reference:
https://learn.microsoft.com/en-us/azure/data-factory/tutorial-copy-data-portal
[GPT]
QUESTION 3
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure function that uses Azure Cosmos DB Core (SQL) API change feed as a trigger
and Azure event hub as the output.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation
An Azure function that uses the Azure Cosmos DB Core (SQL) API change feed as a trigger and an Azure
event hub as the output produces a stream input for Stream Analytics, not a reference data input, so this
solution does not make the contents of container1 available as reference data for an Azure Stream Analytics job.
Instead, "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."
You can create an Azure function that uses Azure Cosmos DB Core (SQL) API change feed as a trigger and
Azure event hub as the output. You can use the following steps:
1. Use the Azure Cosmos DB Trigger to listen for inserts and updates across partitions.
2. Use the Azure Functions trigger for Azure Cosmos DB (AKA CosmosDBTrigger) to dynamically distribute
work across Function instances.
3. Configure the Azure event hub as the output.
[GPT]
QUESTION 4
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure function to copy data to another Azure Cosmos DB Core (SQL) API container.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
Copying data to another Azure Cosmos DB Core (SQL) API container does not make it available as reference
data for Stream Analytics, since Azure Cosmos DB Core (SQL) API is not supported as a reference data
source, so this solution does not make the contents of container1 available as reference data for an Azure
Stream Analytics job.
Instead, "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."
You can create an Azure function to copy data to another Azure Cosmos DB Core (SQL) API container. You
can use the following steps:
1. Connect Azure Functions to Azure Cosmos DB using Visual Studio Code.
2. Create your Azure Cosmos DB account.
3. Create an Azure Cosmos DB database and container.
4. Update your function app settings.
Reference:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-add-output-binding-cosmos-db-vs-code
[GPT]
QUESTION 5
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a container named container1 in an Azure Cosmos DB Core (SQL) API account.
You need to make the contents of container1 available as reference data for an Azure Stream Analytics job.
Solution: You create an Azure function to copy data to another Azure Cosmos DB for NoSQL container.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
Copying data to another Azure Cosmos DB for NoSQL container does not make it available as reference data
for Stream Analytics, since Azure Cosmos DB for NoSQL is not supported as a reference data source, so this
solution does not make the contents of container1 available as reference data for an Azure Stream Analytics job.
Instead, "You create an Azure Data Factory pipeline that uses Azure Cosmos DB Core (SQL) API as the
input and Azure Blob Storage as the output."
If you do need to copy data within an Azure Cosmos DB account, you can perform an offline container copy by
using container copy jobs. You might need to copy data within your Azure Cosmos DB account if you want to
achieve any of these scenarios:
- Copy all items from one container to another.
- Change the granularity at which throughput is provisioned, from database to container and vice versa.
- Change the partition key of a container.
- Update the unique keys for a container.
- Rename a container or database.
- Adopt new features that are supported only for new containers.
[GPT]
QUESTION 6
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a database in an Azure Cosmos DB for NoSQL account that is configured for multi-region writes.
You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The
solution must ensure that any conflicts are sent to the conflicts feed.
Solution: You set ConflictResolutionMode to LastWriterWins and you use the default settings for the policy.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Setting ConflictResolutionMode to LastWriterWins with the default settings for the policy uses the
system-defined timestamp (_ts) property to resolve conflicts automatically, which may not reflect your
application logic or requirements. Conflicts resolved with this policy do not show up in the conflicts feed.
Instead use "You set ConflictResolutionMode to Custom and you use the default settings for the
policy."
References:
Conflict types and resolution policies when using multiple write regions
- https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/conflict-resolution-policies.md
[GPT]
QUESTION 7
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a database in an Azure Cosmos DB for NoSQL account that is configured for multi-region writes.
You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The
solution must ensure that any conflicts are sent to the conflicts feed.
Solution: You set ConflictResolutionMode to Custom and you use the default settings for the policy.
A. Yes
B. No
Correct Answer: A
Explanation
Explanation/Reference:
- YES
This solution allows you to use a custom stored procedure to resolve conflicts according to your application
logic. The default settings for the custom policy do not specify any ResolutionProcedure, which means that all
conflicts are sent to the conflicts feed without any automatic resolution.
Conflicts and conflict resolution policies are applicable if your Azure Cosmos DB account is configured with
multiple write regions.
For Azure Cosmos DB accounts configured with multiple write regions, update conflicts can occur when writers
concurrently update the same item in multiple regions.
Azure Cosmos DB offers a flexible policy-driven mechanism to resolve write conflicts. You can select from two
conflict resolution policies on an Azure Cosmos DB container:
- Last Write Wins (LWW): This resolution policy, by default, uses a system-defined timestamp property for the
following APIs: SQL, MongoDB, Cassandra, Gremlin and Table. Custom numerical property is available only for
API for NoSQL. If the path isn't set or it's invalid, it defaults to _ts. Conflicts resolved with this policy do not show
up in the conflict feed.
- Custom: This resolution policy is designed for application-defined semantics for reconciliation of conflicts. It is
available only for API for NoSQL accounts and can be set only at creation time. It is not possible to set a
custom resolution policy on an existing container.
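As an illustrative fragment (assuming the property names used by the REST/ARM surface), a container definition that sends all conflicts to the conflicts feed uses Custom mode with no resolution procedure:

```json
{
    "conflictResolutionPolicy": {
        "mode": "Custom"
    }
}
```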
References:
Conflict types and resolution policies when using multiple write regions
- https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/conflict-resolution-policies.md
[GPT]
QUESTION 8
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have a database in an Azure Cosmos DB for NoSQL account that is configured for multi-region writes.
You need to use the Azure Cosmos DB SDK to implement the conflict resolution policy for a container. The
solution must ensure that any conflicts are sent to the conflicts feed.
Solution: You set ConflictResolutionMode to Custom. You set ResolutionProcedure to a custom stored
procedure. You configure the custom stored procedure to use the conflictingItems parameter to resolve
conflicts.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
Setting ConflictResolutionMode to Custom and ResolutionProcedure to a custom stored procedure causes that
stored procedure to resolve conflicts automatically, which may not reflect your application logic or
requirements. Conflicts resolved by the stored procedure do not show up in the conflicts feed.
Instead use "You set ConflictResolutionMode to Custom and you use the default settings for the
policy."
References:
Conflict types and resolution policies when using multiple write regions
- https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cosmos-db/conflict-resolution-policies.md
[GPT]
QUESTION 9
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.
A. Yes
B. No
Correct Answer: A
Explanation
Explanation/Reference:
- YES
Explanation:
An Azure Monitor alert can be configured to trigger a function when certain conditions are met, such as the
normalized request units per second exceeding a specific value.
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Note: Alerts are used to set up recurring tests to monitor the availability and responsiveness of your Azure
Cosmos DB resources. Alerts can send you a notification in the form of an email, or execute an Azure Function
when one of your metrics reaches the threshold or if a specific event is logged in the activity log.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
[CONFIRMED]
QUESTION 10
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
[CONFIRMED]
QUESTION 11
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.
Solution: You configure an application to use the change feed processor to read the change feed and you
configure the application to trigger the function.
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
[CONFIRMED]
QUESTION 12
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might have
more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.
You have an Azure Cosmos DB Core (SQL) API account named account1 that uses autoscale throughput.
You need to run an Azure function when the normalized request units per second for a container in account1
exceeds a specific value.
Solution: You configure Azure Event Grid to send events to the function by using an Event Grid trigger in the
function.
A. Yes
B. No
Correct Answer: B
Explanation
Explanation/Reference:
- NO
Explanation:
You can set up alerts from the Azure Cosmos DB pane or the Azure Monitor service in the Azure portal.
Reference:
https://docs.microsoft.com/en-us/azure/cosmos-db/create-alerts
[CONFIRMED]
Question Set 1
QUESTION 1
You are configuring the Backup & Restore section of an Azure Cosmos DB Core (SQL) API account using the
Azure portal as shown below:
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- YES
- YES
- NO
Explanation:
You must have the Azure Cosmos DB Operator role assigned at the Azure subscription level to successfully
configure the Backup & Restore section. A designated individual with the Azure Cosmos DB Operator role can
configure the Backup & Restore section in Azure Cosmos DB.
By default, your existing periodic backup mode accounts have geo-redundant storage if the region where the
account is being provisioned supports it. Three data redundancy options are available when you are configuring
the Azure Cosmos DB Backup & Restore section:
- Geo-redundant backup storage. This copies your data asynchronously across a paired region.
- Zone-redundant backup storage. This copies your data synchronously across three Azure availability zones
(AZs) in the primary Azure region.
- Locally-redundant backup storage. This copies your data synchronously three times within a single physical
location in the primary Azure region.
With a Basic Azure support plan, you cannot call Azure support to restore data from automatic online backups in
the case of the accidental deletion of a database or a container. You can create a support ticket with Microsoft
Azure support, or call Azure support, to restore data from automatic online backups only if you have a Standard
or Developer plan or any plan higher than those. Since this use-case scenario mentions that you have a Basic
Azure support plan, you cannot restore data from automatic online backups by contacting Azure support. You
need to upgrade from the Basic plan to at least a Standard or Developer plan to be able to leverage this feature.
References
[MEASUREUP]
QUESTION 2
You are an Azure Cosmos DB developer for an Independent Software Vendor (ISV). You have deployed an e-
commerce application for one of your clients that uses Azure Cosmos DB Core (SQL) API as a data source.
The client wants to move between regions where data is replicated in Azure Cosmos DB.
You need to move data from one region to another using the Azure portal.
Which two actions should you perform to achieve the desired outcome? Each correct answer presents part of
the solution.
A. Perform an automatic failover to the new region and, finally, remove the original region.
B. Pause the throughput scale-up operation in the account.
C. Perform a manual failover to the new region and, finally, remove the original region.
D. Add a new region to the account.
Correct Answer: CD
Explanation
Explanation/Reference:
Explanation:
You should add a new region to the account and perform a manual failover to the new region and, finally,
remove the original region. Since you are performing a manual move between regions in an Azure Cosmos DB
account, the desired outcome is achieved by performing two simple steps: you add the desired (new) region to
the account, and then perform a manual failover to the new region. Finally, you remove the original region,
which completes the process.
You should not pause the throughput scale-up operation in the account, since this is not required for the
desired outcome. Having said that, it is important to know that, if you perform a failover operation to a
new region in Azure Cosmos DB while an asynchronous throughput scale-up operation is in progress, the
scale-up operation will be paused and will resume when the failover has completed. You, as a developer,
do not need to pause it explicitly.
You should not perform an automatic failover to the new region and, finally, remove the original region. You are
not performing an automatic failover in this scenario but, rather, performing a manual migration as the use-case
scenario specifies.
References
[MEASUREUP]
QUESTION 3
You are an Azure Cosmos DB developer for an online retail organization. You develop an online platform that
uses Azure Cosmos DB Core (SQL) API as a data source.
The following is the sample schema of an item in the Azure Cosmos DB container:
foodGroup is the partition key for the container.
SELECT * FROM c WHERE c.foodGroup = 'Pork Products' AND IS_DEFINED(c.description) AND
IS_DEFINED(c.manufacturerName) ORDER BY c.tags.name ASC, c.version ASC
In order to execute an ORDER BY query, you create a Composite Index directly in the Azure portal. The
indexing policy is shown below:
You need to analyze both the utilized indexed paths and the recommended indexed paths using the Azure
Cosmos DB .NET SDK. You are using the .NET SDK version 3.26.1.
Correct Answer:
Explanation
Explanation/Reference:
- PopulateIndexMetrics
- IndexMetrics
In Azure Cosmos DB Core (SQL) API, you can use indexing metrics to understand both the utilized indexes and
the recommended indexed paths. You can use this information to fine-tune database query performance. This
is especially important in cases where you are not sure how to modify the indexing policies on your database
container. In .NET SDK, the indexing metrics are only supported in .NET version 3.21.0 or higher. You need to
set the PopulateIndexMetrics property to true, thereby activating the indexing metrics for the given SDK version
number. However, you should note that since activating indexing metrics incurs an overhead on queries, the
recommendation is not to activate them for every production query. You should only activate indexing metrics for
queries that are performing slowly and need query performance optimization.
You should not choose the EnableScanInQuery option. If you wished to enable scans on specific queries that could not be
served because indexing was switched off, you would set this property to true.
You should not choose the SessionToken option. If you wish to manage the session token being used for the
session consistency level for Azure Cosmos DB service for your application, you can use the SessionToken
property.
You should not choose the Diagnostics option. You should use the Diagnostics property when you wish to
capture diagnostic information for the current request to Azure Cosmos DB service.
References
QueryRequestOptions.PopulateIndexMetrics Property
- https://docs.microsoft.com/en-us/dotnet/api/
microsoft.azure.cosmos.queryrequestoptions.populateindexmetrics
[MEASUREUP]
QUESTION 4
You are an Azure Cosmos DB developer for an online media firm. You have developed an online blogging
platform that enables bloggers to blog about their favorite topics.
{
  "id": 343812121,
  "topic": "ExoticTravel",
  "topicId": "3912a-23",
  "author": {
    "userdetails": "Steve Madden",
    "photoId": "https://www.onlinedp.com/SteveMadden.jpg"
  },
  "postpubtime": "2022-02-12T12:03:00Z"
}
You need to implement an aggregation between the two containers (Posts and Users), denormalize the feed by
adding the author to its associated blog post, and upsert it into the Feed container.
What should you use?
A. Use Azure Synapse pipelines to create a data transformation ETL from the Posts and Users containers.
Update the authorId field of all posts manually.
B. Use one Azure Function with the Azure Cosmos DB Change Feed Processor (CFP) library.
C. Use Azure Data Factory (ADF) Data Flow pipelines to create a data transformation ETL from the Posts and
Users containers. Update the authorId field of all posts manually.
D. Use two Azure Functions with the Azure Cosmos DB Change Feed Processor (CFP) library.
Correct Answer: D
Explanation
Explanation/Reference:
- Use two Azure Functions with the Azure Cosmos DB Change Feed Processor (CFP) library.
You should use two Azure Functions with the Azure Cosmos DB Change Feed Processor (CFP) library.
- The first Azure Function listens to the Posts change feed.
- The second Azure Function listens to the Users change feed.
The second Azure Function ensures that the authorId field of all posts is updated. Using the Azure Cosmos DB
Change Feed is the most appropriate, cost-effective, and highly scalable solution for solving this use-case
scenario.
In this case, Azure Functions should be used for the following four reasons:
- Even though the use-case is a basic aggregation across two containers, a much more complex pipeline and
transformation could be designed by using Azure Functions, if the need arises.
- Azure Functions are completely independent of application code.
- Azure Functions can be scaled out if the need arises, by choosing an appropriate Azure Functions
Consumption Plan.
- Life-cycle processing for the change feed can be easily designed, maintained, and troubleshot using
the Change Feed Processor (CFP) library with Azure Functions, since the CFP is resilient to user code errors. This means
that if your delegate implementation hits an unhandled exception during execution, the thread
processing that specific batch of changes is stopped, and a new thread is created automatically by the CFP.
Thus, using the CFP via Azure Functions builds availability, resiliency, maintainability, and extensibility into the
solution.
You should not use one Azure Function with the Azure Cosmos DB Change Feed Processor (CFP)
library. This is not an appropriate solution since two Azure Functions should be used for the two different
activities: a function to listen to the Posts change feed, and another to listen to the Users change feed. This
recommendation ensures that you design an Azure Function that is automatically triggered when there is a new
event in your Azure Cosmos DB change feed. Since Posts and Users are two separate containers with two
separate change feeds, you need to use two different functions to track the change feed events.
You should not use Azure Data Factory (ADF) Data Flow pipelines to create a data transformation ETL from the
Posts and Users containers and update the authorId field of all posts manually. An Azure Data Factory (ADF)
service is a data orchestration as a service feature from Microsoft Azure that allows the orchestration, copying
and replication of batch data from a source to a destination. Since we plan to use an event-triggered pipeline
activity in real-time, an ADF service is not a recommended solution for this scenario.
You should not use Azure Synapse pipelines to create a data transformation ETL from the Posts and Users
containers and update the authorId field of all posts manually. An Azure Synapse data pipeline uses ADF
connectors in the Synapse platform to build data orchestration, copying and replication of batch data from a
source to a destination. Since we plan to use an event-triggered pipeline activity in real-time, an ADF service is
not a recommended solution for this use-case scenario.
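The two-function pattern described above can be sketched as a toy in-memory model (the container dictionaries, the authorId field, and the handler names are illustrative stand-ins, not the actual Azure Functions bindings):

```python
# Toy stand-ins for the Users and Feed containers (illustrative only).
users = {"u1": {"id": "u1", "userdetails": "Steve Madden"}}
feed = {}  # denormalized Feed container, keyed by post id


def on_posts_change(post):
    # First "function": a new or updated post arrives on the Posts change feed;
    # denormalize by embedding the author, then upsert into Feed.
    enriched = {**post, "author": users.get(post["authorId"], {})}
    feed[post["id"]] = enriched


def on_users_change(user):
    # Second "function": a user changed; refresh the embedded author on all
    # posts in Feed that reference that authorId.
    users[user["id"]] = user
    for post in feed.values():
        if post.get("authorId") == user["id"]:
            post["author"] = user


on_posts_change({"id": "p1", "authorId": "u1", "topic": "ExoticTravel"})
on_users_change({"id": "u1", "userdetails": "S. Madden"})
print(feed["p1"]["author"]["userdetails"])  # S. Madden
```

In the real solution each handler would be an Azure Function bound to the respective container's change feed, with the upsert performed against the Feed container.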
References
Denormalizing your data with Azure Functions and Cosmos DB’s change feed
- https://medium.com/@thomasweiss_io/denormalizing-your-data-with-azure-functions-and-cosmos-dbs-
change-feed-c4a8687cda88
[MEASUREUP]
QUESTION 5
You are an Azure Cosmos DB Developer for a traditional retail organization. Your organization has decided to
migrate its existing on-premises applications to Azure.
One of the applications that you have been tasked to migrate to Azure currently runs on a traditional on-
premises environment. The environment has the following three characteristics:
- The existing on-premises environment has a single replica set with a replication factor of R = 3.
- The replica set resides on a four-core server SKU.
- Your physical CPUs do not support hyperthreading.
You need to estimate the number of provisioned RU/s for the Azure Cosmos DB Core (SQL) API.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- YES
- YES
- NO
In this scenario, the recommended RU/s for Azure Cosmos DB Core (SQL) API is calculated as:
Provisioned RU/s = (600 RU/s/vCore) * (12 vCores) / 3 = 2,400 RU/s.
The recommended formula for arriving at a rough starting-point RU/s estimate from vCores (or vCPUs) in a
traditional on-premises environment to Azure Cosmos DB is: Provisioned RU/s = C * T/R, wherein:
- T: Total vCores and/or vCPUs in your existing database data-bearing replica set(s). The correct way to
arrive at this number is to examine the number of vCores or vCPUs in each data-bearing replica set of your
existing database, and then add them up to get a total.
- R: Replication factor of your existing data-bearing replica set(s).
- C: Recommended provisioned RU/s per vCore or vCPU. This value derives from the architecture of Azure
Cosmos DB:
- C = 600 RU/s/vCore for Azure Cosmos DB SQL API;
- C = 1000 RU/s/vCore for Azure Cosmos DB API for MongoDB v4.0;
- C estimates for Cassandra API, Gremlin API, or other APIs are currently not available.
In this scenario, we have a single replica set with R = 3; that is to say, T = 3 * 4 cores = 12 vCores.
Assuming that 1 vCore = 1 vCPU would provide the most accurate RU/s estimate in the given scenario. Given
that the scenario mentions that the physical CPUs do not support hyperthreading, 1 vCore = 1 vCPU would be
a correct assumption.
If the replication factor of your existing data-bearing replica set is unknown, assuming that R=1 is not
recommended. In Azure Cosmos DB, if the replication factor of your existing data-bearing replica set is
unknown, you should not assume that R=1; given that if the average replication factor (R) is not explicitly
known, R = 3 is the recommended value.
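The estimation above can be written as a small helper (a minimal sketch; the function name and defaults are illustrative):

```python
def estimate_provisioned_rus(vcores_per_server, replication_factor, rus_per_vcore=600):
    """Rough starting-point RU/s estimate: Provisioned RU/s = C * T / R.

    T is the total vCores across the data-bearing replica set(s),
    R is the replication factor, and C defaults to 600 RU/s/vCore (SQL API).
    """
    total_vcores = vcores_per_server * replication_factor  # T = 4 * 3 = 12
    return rus_per_vcore * total_vcores / replication_factor


# Scenario: single replica set, four-core servers, R = 3, SQL API (C = 600)
print(estimate_provisioned_rus(4, 3))  # 2400.0
```

For the API for MongoDB v4.0 you would pass rus_per_vcore=1000 instead.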
References
Convert the number of vCores or vCPUs in your nonrelational database to Azure Cosmos DB RU/s
- https://docs.microsoft.com/en-us/azure/cosmos-db/convert-vcore-to-request-unit
[MEASUREUP]
QUESTION 6
You are developing a new web app that uses Azure Cosmos DB Core (SQL) API as a data source. You are
using the Azure Cosmos DB Java SDK V4 to initialize the SDK as exhibited below:
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- YES
- YES
- YES
Explanation
The SDK automatically sends all writes to the current write region. In this use-case scenario, preferredRegions
is an ordered list of Azure regions, and the SDK automatically sends all writes to the current
write region.
All reads are directed to the first available region in the preferredRegions list. If the request fails, the client will
fall back down the list to the next region, as specified in the preferredRegions list.
ConsistencyLevel.SESSION is region-agnostic. This exhibits Session consistency, which is the default
level applied to Cosmos DB accounts. When working with Session consistency, each new write request to Azure
Cosmos DB is assigned a new SessionToken, which the CosmosClient uses internally with each read/query
request to ensure that the set consistency level is maintained.
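The read fallback behavior described above can be illustrated with a minimal sketch (the availability map and the function are hypothetical, not the SDK API):

```python
def read_with_fallback(preferred_regions, region_available):
    """Walk the preferredRegions list in order; fall back on failure.

    region_available is a hypothetical map of region name -> reachable?,
    standing in for request failures against a real regional endpoint.
    """
    for region in preferred_regions:
        if region_available.get(region, False):
            return f"read served from {region}"
    raise RuntimeError("no preferred region available")


# First preferred region is down, so the read falls back to the next one.
status = {"East US": False, "West US": True}
print(read_with_fallback(["East US", "West US"], status))  # read served from West US
```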
[MEASUREUP]
QUESTION 7
You have recently joined an independent software vendor (ISV) as an Azure Cosmos DB developer. You are
working on a client project, and you are tasked with implementing a Cosmos DB change feed solution that uses
the change feed processor (CFP) library.
You use the change feed pull model for the implementation. You are using the FeedRange to parallelize the
processing of the change feed as shown below:
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- NO
- YES
- YES
Polling for future changes is not an automated process in this scenario. With the change feed pull model, you
can consume the Azure Cosmos DB change feed at your own pace. There is no prerequisite to consume the
feed at any predetermined pace. However, polling for all future changes is a manual process.
Work is automatically spread across multiple consumers using a change feed processor (CFP). If you are using
the change feed pull model, you can use the FeedRange to parallelize the processing of the change feed. A
FeedRange represents a range of partition key values. When you obtain a list of FeedRanges for your
container, you get one FeedRange per physical partition.
In the Azure Cosmos DB change feed pull model, if you choose to use a FeedRange, you can then create a
FeedIterator to parallelize the processing of the change feed across multiple machines or threads. You can
create multiple FeedIterators and thereby process the change feed in parallel.
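The FeedRange-based parallelism can be pictured with a toy model (this is not the SDK API; the shared change log and the worker function are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical change log: each entry is tagged with the physical partition
# (feed range) it belongs to.
changes = [
    {"range": 0, "id": "a"},
    {"range": 1, "id": "b"},
    {"range": 0, "id": "c"},
    {"range": 1, "id": "d"},
]


def drain_feed_range(feed_range):
    # In the real pull model this would be a FeedIterator created from a
    # FeedRange; here we simply filter the shared change log for that range.
    return [c["id"] for c in changes if c["range"] == feed_range]


# One worker per feed range, mirroring one FeedIterator per FeedRange.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(drain_feed_range, [0, 1]))

print(results)  # [['a', 'c'], ['b', 'd']]
```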
[MEASUREUP]
QUESTION 8
You are an Azure Cosmos DB developer for a large enterprise. You have provisioned a dedicated gateway
cluster with an integrated cache for one of your applications that uses an Azure Cosmos DB Core (SQL) API as
a data source.
You predominantly use point reads as the query pattern for your application. Your Azure Cosmos DB account
uses the default Session level consistency.
You use the .NET SDK to define the query cache as exhibited below.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- YES
- NO
- YES
Explanation
Session consistency signifies that single client sessions will read their own writes. The Azure Cosmos DB Core
(SQL) API integrated cache only supports two types of consistency levels: session and eventual. If you
configure any other type of consistency level at a specific query level (or at a request level), it will always
bypass the integrated cache, meaning that the cache is not used at all. In our use-case scenario, you use the
default session consistency. When the integrated cache is used with the default session consistency for a single
client session, the client session will always read its own writes. Any clients situated outside of the session
performing writes will see eventual consistency.
MaxIntegratedCacheStaleness does not use a global Time-to-Live (TTL) to refresh the integrated cache.
MaxIntegratedCacheStaleness refers to the maximum acceptable staleness for cached point reads and
queries, regardless of the selected consistency. This property is configurable at the request level. This property
does not use any global TTL to refresh the integrated cache. Data is only evicted from the integrated cache if
the cache is full, or if a new read has been executed with a lower value than the age of the current cached
entry.
The IntegratedCacheItemHitRate and IntegratedCacheQueryHitRate metrics will report zero values for the above
code. IntegratedCacheItemHitRate and IntegratedCacheQueryHitRate are two of the Azure Cosmos DB
integrated cache metrics that you can monitor if you want to dive deeper into any troubleshooting issues. The
IntegratedCacheItemHitRate provides the proportion of point reads that used the integrated cache (out of all
point reads routed through the dedicated gateway with session or eventual consistency).
The IntegratedCacheQueryHitRate provides the proportion of queries that used the integrated cache (out of all
queries routed through the dedicated gateway with session or eventual consistency). In our use-case scenario,
both of these values will be zero because the MaxIntegratedCacheStaleness TimeSpan is set to 0.
This means that none of your point reads and query read requests will ever use the integrated cache,
regardless of the consistency level. If you want to ensure that the integrated cache is used for all of your point
reads and query read requests, you should set a value for the TimeSpan property, e.g., TimeSpan.FromMinutes(30). If
this is not configured, the default MaxIntegratedCacheStaleness is five minutes.
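Why a zero MaxIntegratedCacheStaleness yields zero hit rates can be seen in a toy staleness-bounded cache (a conceptual model, not the dedicated gateway's implementation):

```python
class StalenessBoundedCache:
    """Toy model: serve a cached entry only if its age is within the bound."""

    def __init__(self, max_staleness_s):
        self.max_staleness_s = max_staleness_s
        self._store = {}  # key -> (value, inserted_at)
        self.hits = 0
        self.misses = 0

    def read(self, key, load, now):
        entry = self._store.get(key)
        if entry is not None and now - entry[1] <= self.max_staleness_s:
            self.hits += 1
            return entry[0]
        # Miss: go to the backend and refresh the cached entry.
        self.misses += 1
        value = load()
        self._store[key] = (value, now)
        return value


# With a staleness bound of 0, any entry older than 0 seconds is stale,
# so repeated reads never hit the cache (hit-rate metrics stay at zero).
cache = StalenessBoundedCache(max_staleness_s=0)
cache.read("item1", lambda: "v1", now=0.0)
cache.read("item1", lambda: "v1", now=0.001)
print(cache.hits, cache.misses)  # 0 2
```

With a non-zero bound (e.g., 30 seconds), the second read within the window would be served from the cache instead.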
[MEASUREUP]
QUESTION 9
You are an Azure Cosmos DB developer at an independent software vendor. You are architecting an online e-
commerce solution that uses the Azure Cosmos DB Core (SQL) API as a data source. You are using the Azure
Cosmos DB .NET SDK for development.
You have decided to use a unique key constraint for an Azure Cosmos DB container. The code you are using is
shown below:
For each of the following statements, select Yes if the statement is true. Otherwise, select No.
Hot Area:
Correct Answer:
Explanation
Explanation/Reference:
- NO
- YES
- NO
Explanation
/mobile is not scoped to a container's underlying physical partition layout. The /mobile unique key is scoped to a
container's underlying logical partitions.
The unique keys /firstName, /lastName, and /mobile guarantee uniqueness for our partition key,
/myPartitionKey.
Creating a unique key in an Azure Cosmos DB container allows you to add an additional layer of data integrity to
the container. The essence lies in the fact that a unique key guarantees uniqueness per partition key.
When a container is leveraging a unique key policy, Request Unit (RU) charges for Azure Cosmos DB CRUD
operations (create, read, update, and delete) for a given item will be higher.
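Uniqueness per logical partition can be illustrated with a toy container that treats the three paths as one composite unique key for illustration (the class and field names are hypothetical; the real enforcement happens server-side and surfaces as HTTP 409):

```python
class Container:
    """Toy model: a unique key is enforced per logical partition, not globally."""

    def __init__(self):
        self._seen = set()  # (partition_key, unique key values)

    def create_item(self, item):
        fingerprint = (
            item["myPartitionKey"],
            (item["firstName"], item["lastName"], item["mobile"]),
        )
        if fingerprint in self._seen:
            raise ValueError("409 Conflict: unique key violation")
        self._seen.add(fingerprint)


c = Container()
c.create_item({"myPartitionKey": "NY", "firstName": "A", "lastName": "B", "mobile": "1"})
# The same unique values under a *different* partition key succeed:
c.create_item({"myPartitionKey": "CA", "firstName": "A", "lastName": "B", "mobile": "1"})
# The same unique values under the *same* partition key conflict:
try:
    c.create_item({"myPartitionKey": "NY", "firstName": "A", "lastName": "B", "mobile": "1"})
except ValueError as err:
    print(err)  # 409 Conflict: unique key violation
```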
[MEASUREUP]