AI 100T01A ENU TrainerHandbook



Microsoft
Official
Course

AI-100T01
Designing and
Implementing an Azure AI
Solution
Contents

■■ Module 0 Introducing Azure Cognitive Services
Overview of Azure Cognitive Services
Creating Cognitive Services using Azure Portal
Testing Cognitive Services using API Testing Console
Lab
■■ Module 1 Creating Bots
Introducing the Bot Service
Create a Basic Chat Bot
Testing with Bot Emulator
Lab
■■ Module 2 Enhancing Bots with QnA Maker
Introducing QnA Maker
Implementing a Knowledge Base with QnA Maker
Integrating QnA with a Bot
Lab
■■ Module 3 Learn How to Create Language Understanding with LUIS
Introducing Language Understanding
Creating a LUIS Service
Build Intents and Utterances
Lab
■■ Module 4 Enhance Your Bot with LUIS
Overview of Language Understanding in AI Applications
Integrating LUIS and Bots
Lab
■■ Module 5 Integrate Cognitive Services with Bots and Agents
Understanding Cognitive Services for Bot Interactions
Sentiment for Bots with Text Analytics
Detect Language in a Bot
Lab
Module 0 Introducing Azure Cognitive Services

Overview of Azure Cognitive Services


Introduction
Building artificial intelligence (AI) into your applications can be a daunting task. AI involves the use of
machine learning models and algorithms that are typically complex and could require lots of processing
horsepower. Rather than building these models and algorithms yourself, why not take advantage of
prebuilt AI functionality that someone else has created? This is where the Microsoft Cognitive Services
can offer a benefit.
The Microsoft Azure Cognitive Services consist of APIs, SDKs, and services available to help you build
intelligent applications without having direct AI or data science skills or knowledge. They are built on
Microsoft’s evolving portfolio of machine learning APIs and enable you to easily add cognitive features
like emotion detection in images, speech recognition, and language understanding.
The goal of Azure Cognitive Services is to help developers build applications that can see, hear, speak, understand, and even begin to reason. The catalog of services within Azure Cognitive Services can be categorized into five main pillars - Vision, Speech, Language, Search, and Knowledge.
Note: The intent of this lesson is not to teach you each and every cognitive service and API that is available, but to offer a sampling of some of the most common among them. By understanding the concepts presented in the overviews and material, you gain an understanding of what the services do. Also, the samples and walkthroughs are designed to help you understand how to access the services. Many of the services follow similar access patterns in terms of endpoints and access keys, all return JSON responses for use in your application development, and the requests sent to the services follow similar protocols. You can then apply the knowledge gained from the services presented here to your use of the remaining services.

Introducing Computer Vision


As you might expect, the set of APIs that provide vision services revolve around imaging or visual content. The services available, like many Azure products and services, will change over time, but currently there is a set of services available in release versions and some that are currently in preview. In this course, we will describe each of the services in general but will pick a couple of key services to drill into further. Covering every cognitive service that is available would take a considerable course, but once you understand the mechanics of accessing the services in one area, it is easy to transition to using any of the others.
If you want to create an application that can process images and return information about what is in the
images, you should consider the advanced algorithms that are part of the Computer Vision API. The
services available in Computer Vision allow you to upload images for analysis, or provide a URL that links
to an image that will be analyzed by the service.
The Computer Vision API provides algorithms to process images and return insights. For example, you
can find out if an image has mature content, or can use it to find all the faces in an image. It also has
other features like estimating dominant and accent colors, categorizing the content of images, and
describing an image with complete English sentences. Additionally, it can also intelligently generate
image thumbnails for displaying large images effectively.
Some of the key aspects that the computer vision API can analyze in images are listed here:
●● Tag visual features - Identify and tag visual features in an image, from a set of thousands of recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common knowledge, the API response provides ‘hints’ to clarify the meaning of the tag in the context of a known setting. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on.
●● Detect objects - Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag applied. For example, if an image contains a dog, cat, and person, the Detect operation will list those objects together with their coordinates in the image. You can use this functionality to process further relationships between the objects in an image. It also lets you know when there are multiple instances of the same tag in an image.
●● Detect brands - Identify commercial brands in images or videos from a database of thousands of
global logos. You can use this feature, for example, to discover which brands are most popular on
social media or most prevalent in media product placement.
●● Categorize an image - Identify and categorize an entire image, using a category taxonomy with
parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.
Currently, English is the only supported language for tagging and categorizing images.
●● Describe an image - Generate a description of an entire image in human-readable language, using
complete sentences. Computer Vision's algorithms generate various descriptions based on the objects
identified in the image. The descriptions are each evaluated and a confidence score generated. A list is
then returned ordered from highest confidence score to lowest.
●● Detect faces - Detect faces in an image and provide information about each detected face. Computer
Vision returns the coordinates, rectangle, gender, and age for each detected face.
Computer Vision provides a subset of the functionality that can be found in Face, and you can use the
Face service for more detailed analysis, such as facial identification and pose detection.
●● Detect image types - Detect characteristics about an image, such as whether an image is a line
drawing or the likelihood of whether an image is clip art.
●● Detect domain-specific content - Use domain models to detect and identify domain-specific
content in an image, such as celebrities and landmarks. For example, if an image contains people,
Computer Vision can use a domain model for celebrities included with the service to determine if the
people detected in the image match known celebrities.

●● Detect the color scheme - Analyze color usage within an image. Computer Vision can determine
whether an image is black & white or color and, for color images, identify the dominant and accent
colors.
●● Generate a thumbnail - Analyze the contents of an image to generate an appropriate thumbnail for
that image. Computer Vision first generates a high-quality thumbnail and then analyzes the objects
within the image to determine the area of interest. Computer Vision then crops the image to fit the
requirements of the area of interest. The generated thumbnail can be presented using an aspect ratio
that is different from the aspect ratio of the original image, depending on your needs.
●● Get the area of interest - Analyze the contents of an image to return the coordinates of the area of interest. This is the same function that is used to generate a thumbnail, but instead of cropping the image, Computer Vision returns the bounding box coordinates of the region, so the calling application can modify the original image as desired.
The Computer Vision API is available in many regions across the globe. To find the region nearest you,
see the Products available by region1.
You can use the Computer Vision API to:
●● Analyze images for insight
●● Extract printed text from images using optical character recognition (OCR).
●● Recognize printed and handwritten text from images
●● Recognize celebrities and landmarks
●● Analyze video
●● Generate a thumbnail of an image

How to call the Computer Vision API


You call Computer Vision in your application using client libraries or the REST API directly. We'll call the
REST API in this topic. To make a call:
1. Get an API access key
You are assigned access keys when you sign up for a Computer Vision service account. A key must be
passed in the header of every request.
2. Make a POST call to the API
Format the URL as follows: https://[region].api.cognitive.microsoft.com/vision/v2.0/[resource]/[parameters]
●● region - the region where you created the account, for example, westus.
●● resource - the Computer Vision resource you are calling, such as analyze, describe, generateThumbnail, ocr, models, recognizeText, tag.
You can supply the image to be processed either as a raw image binary or an image URL.
The request header must contain the subscription key, which provides access to this API.
3. Parse the response
The response holds the insight the Computer Vision API has about your image, as a JSON payload. Examples of the JSON returned by several of these operations appear in the next topic.
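To make these three steps concrete, here is a minimal sketch of calling the analyze resource using Python and the requests library. The region, subscription key, and image URL are placeholders that you would replace with your own values; the visualFeatures parameter shown is one of several that the analyze operation accepts.

import requests

# Placeholders - use the key and region from your own Computer Vision resource.
subscription_key = "YourSubscriptionKey"
analyze_url = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"visualFeatures": "Categories,Description,Color"}
body = {"url": "https://example.com/sample-image.jpg"}  # hypothetical image URL

# Step 2 - make the POST call to the API.
response = requests.post(analyze_url, headers=headers, params=params, json=body)
response.raise_for_status()

# Step 3 - parse the JSON response.
analysis = response.json()
print(analysis["description"]["captions"][0]["text"])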

1 https://azure.microsoft.com/en-us/global-infrastructure/services/?products=cognitive-services

The Computer Vision APIs


Tagging Images
This aspect focuses on identifying and tagging visual features in an image, from a set of thousands of
recognizable objects, living things, scenery, and actions. When the tags are ambiguous or not common
knowledge, the API response provides ‘hints’ to clarify the meaning of the tag in the context of a known
setting. Tagging isn't limited to the main subject, such as a person in the foreground, but also includes
the setting (indoor or outdoor), furniture, tools, plants, animals, accessories, gadgets, and so on. An
example of the returned JSON for image tagging might look like this:
{
  "tags": [
    {
      "name": "grass",
      "confidence": 0.9999995231628418
    },
    {
      "name": "outdoor",
      "confidence": 0.99992108345031738
    },
    {
      "name": "house",
      "confidence": 0.99685388803482056
    }
  ],
  "requestId": "06f39352-e445-42dc-96fb-0a1288ad9cf1",
  "metadata": {
    "height": 200,
    "width": 300,
    "format": "Jpeg"
  }
}
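As a small, hedged illustration of consuming this payload, the Python sketch below keeps only the tags whose confidence exceeds a chosen threshold. The analysis variable simply holds the JSON shown above, already parsed into a dictionary.

# Assume analysis is the parsed JSON dictionary returned by the tag operation.
analysis = {
    "tags": [
        {"name": "grass", "confidence": 0.9999995231628418},
        {"name": "outdoor", "confidence": 0.99992108345031738},
        {"name": "house", "confidence": 0.99685388803482056},
    ]
}

confidence_threshold = 0.99
high_confidence_tags = [tag["name"] for tag in analysis["tags"]
                        if tag["confidence"] >= confidence_threshold]
print(high_confidence_tags)  # ['grass', 'outdoor', 'house']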

Detecting Objects
Object detection is similar to tagging, but the API returns the bounding box coordinates for each tag
applied. For example, if an image contains a dog, cat and person, the Detect operation will list those
objects together with their coordinates in the image. You can use this functionality to process further
relationships between the objects in an image. It also lets you know when there are multiple instances of
the same tag in an image. This API will apply tags according to the object or living thing found in the
image supplied. An example of a returned JSON might look like this:

{
  "objects": [
    {
      "rectangle": { "x": 730, "y": 66, "w": 135, "h": 85 },
      "object": "kitchen appliance",
      "confidence": 0.501
    },
    {
      "rectangle": { "x": 523, "y": 377, "w": 185, "h": 46 },
      "object": "computer keyboard",
      "confidence": 0.51
    },
    {
      "rectangle": { "x": 471, "y": 218, "w": 289, "h": 226 },
      "object": "Laptop",
      "confidence": 0.85,
      "parent": {
        "object": "computer",
        "confidence": 0.851
      }
    },
    {
      "rectangle": { "x": 654, "y": 0, "w": 584, "h": 473 },
      "object": "person",
      "confidence": 0.855
    }
  ],
  "requestId": "a7fde8fd-cc18-4f5f-99d3-897dcd07b308",
  "metadata": {
    "width": 1260,
    "height": 473,
    "format": "Jpeg"
  }
}
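The bounding boxes make it possible to reason about spatial relationships between detected objects. The sketch below is one simple, assumed approach: it checks whether one rectangle is fully contained within another, which you might use, for example, to confirm that the keyboard sits inside the laptop region of the sample response above.

def contains(outer, inner):
    """Return True if the inner rectangle lies completely inside the outer one."""
    return (inner["x"] >= outer["x"]
            and inner["y"] >= outer["y"]
            and inner["x"] + inner["w"] <= outer["x"] + outer["w"]
            and inner["y"] + inner["h"] <= outer["y"] + outer["h"])

laptop = {"x": 471, "y": 218, "w": 289, "h": 226}
keyboard = {"x": 523, "y": 377, "w": 185, "h": 46}
print(contains(laptop, keyboard))  # True for the sample response above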

Detect Brands
Identify commercial brands in images or videos from a database of thousands of global logos. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement. Microsoft considers this feature to be a specialized mode of object detection. It is based on a database containing thousands of globally recognizable logos that may be found in your images or videos. If recognized brands are detected, the results will list the brand name, a confidence score between 0 and 1, and the coordinates of a bounding box where the logo was detected. If you find that a brand is present in an image but not detected by this API, you might consider creating a Custom Vision service to address the issue.
An example of a result in JSON, where an image contains a Microsoft logo and brand name, would look like this:
{
  "brands": [
    {
      "name": "Microsoft",
      "confidence": 0.657,
      "rectangle": { "x": 436, "y": 473, "w": 568, "h": 267 }
    },
    {
      "name": "Microsoft",
      "confidence": 0.85,
      "rectangle": { "x": 101, "y": 561, "w": 273, "h": 263 }
    }
  ],
  "requestId": "10dcd2d6-0cf6-4a5e-9733-dc2e4b08ac8d",
  "metadata": {
    "width": 1286,
    "height": 1715,
    "format": "Jpeg"
  }
}

Categorize an Image
Identify and categorize an entire image, using a category taxonomy with parent/child hereditary hierarchies. Categories can be used alone, or with our new tagging models.
Currently, English is the only supported language for tagging and categorizing images.

Describe an Image
Generate a description of an entire image in human-readable language, using complete sentences.
Computer Vision's algorithms generate various descriptions based on the objects identified in the image.
The descriptions are each evaluated and a confidence score generated. A list is then returned ordered
from highest confidence score to lowest. An example JSON result is shown here:

{
  "description": {
    "tags": ["outdoor", "building", "photo", "city", "white", "black",
             "large", "sitting", "old", "water", "skyscraper", "many", "boat",
             "river", "group", "street", "people", "field", "tall", "bird",
             "standing"],
    "captions": [
      {
        "text": "a black and white photo of a city",
        "confidence": 0.95301952483304808
      },
      {
        "text": "a black and white photo of a large city",
        "confidence": 0.94085190563213816
      },
      {
        "text": "a large white building in a city",
        "confidence": 0.93108362931954824
      }
    ]
  },
  "requestId": "b20bfc83-fb25-4b8d-a3f8-b2a1f084b159",
  "metadata": {
    "height": 300,
    "width": 239,
    "format": "Jpeg"
  }
}

Detect Faces
Detect faces in an image and provide information about each detected face. Computer Vision returns the
coordinates, rectangle, gender, and age for each detected face.
Computer Vision provides a subset of the functionality that can be found in Face, and you can use the
Face service for more detailed analysis, such as facial identification and pose detection.

Detect Image Types


Detect characteristics about an image, such as whether an image is a line drawing or the likelihood of
whether an image is clip art.

Detect Domain-specific Content


Use domain models to detect and identify domain-specific content in an image, such as celebrities and
landmarks. For example, if an image contains people, Computer Vision can use a domain model for
celebrities included with the service to determine if the people detected in the image match known
celebrities.
You can also use domain-specific models to supplement general image analysis. You do this as part of
high-level categorization by specifying domain-specific models in the details parameter of the Analyze
API call.

Detect the Color Scheme


Analyze color usage within an image. Computer Vision can determine whether an image is black & white
or color and, for color images, identify the dominant and accent colors. Returned colors belong to the
set: black, blue, brown, gray, green, orange, pink, purple, red, teal, white, and yellow.

Generate a Thumbnail
Analyze the contents of an image to generate an appropriate thumbnail for that image. Computer Vision
first generates a high-quality thumbnail and then analyzes the objects within the image to determine the
area of interest. Computer Vision then crops the image to fit the requirements of the area of interest. The
generated thumbnail can be presented using an aspect ratio that is different from the aspect ratio of the
original image, depending on your needs.
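As a hedged sketch of calling the thumbnail operation, the Python snippet below posts an image URL to the generateThumbnail resource and saves the returned binary thumbnail. The region, key, image URL, and output file name are all placeholder assumptions.

import requests

subscription_key = "YourSubscriptionKey"  # placeholder
thumbnail_url = "https://westus.api.cognitive.microsoft.com/vision/v2.0/generateThumbnail"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
# Request a 100 x 100 thumbnail and let the service pick the area of interest.
params = {"width": 100, "height": 100, "smartCropping": "true"}
body = {"url": "https://example.com/sample-image.jpg"}  # hypothetical image URL

response = requests.post(thumbnail_url, headers=headers, params=params, json=body)
response.raise_for_status()

# The thumbnail is returned as raw image bytes rather than JSON.
with open("thumbnail.jpg", "wb") as thumbnail_file:
    thumbnail_file.write(response.content)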

Recognize Printed and Handwritten Text


Computer Vision provides a number of services that detect and extract printed or handwritten text that
appears in images. This is useful in a variety of scenarios such as note taking, medical records, security,
and banking. The components of this API are:
●● Read API - detects textual content in the supplied images and converts identified text into a machine-readable character stream. There are some requirements for the images, however:

●● image must be JPEG, PNG, BMP, PDF, or TIFF
●● image dimensions must be between 50 x 50 and 10000 x 10000 pixels
●● PDF pages must be no larger than 17 x 17 inches
●● file size must be less than 20 megabytes (MB)
●● Optical Character Recognition (OCR) - this feature is similar to the Read API, but it operates synchronously and is not optimized for performing OCR on large documents. There is support for various languages2 as well. The image requirements are close to those of the Read API but differ a bit:

●● image must be JPEG, PNG, GIF, or BMP
●● image size must be between 50 x 50 and 4200 x 4200 pixels
●● text in the image can be rotated by 90, 180, or 270 degrees (or not rotated at all); smaller skew angles of up to about 40 degrees are also supported
●● limitations exist where false positives may result from partially recognized words in photos where text is the dominant feature
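As with the other Computer Vision operations, the OCR feature is reached through a REST call. The following Python sketch posts an image URL to the ocr resource and prints the words it finds; the key, region, and image URL are placeholders, and the regions/lines/words structure walked below follows the v2.0 OCR result format.

import requests

subscription_key = "YourSubscriptionKey"  # placeholder
ocr_url = "https://westus.api.cognitive.microsoft.com/vision/v2.0/ocr"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"language": "unk", "detectOrientation": "true"}  # "unk" lets the service detect the language
body = {"url": "https://example.com/sign.jpg"}  # hypothetical image URL

response = requests.post(ocr_url, headers=headers, params=params, json=body)
response.raise_for_status()
result = response.json()

# Walk the regions -> lines -> words hierarchy and print the recognized text.
for region in result.get("regions", []):
    for line in region.get("lines", []):
        print(" ".join(word["text"] for word in line.get("words", [])))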

Detect Adult and Racy Content


Computer Vision can also detect adult material in images. Content flags are applied with a score between
zero and one with values closer to 1 representing a greater confidence that the content is adult oriented.
Note
This feature is also offered by the Azure Content Moderator service3. You should use this alternative for
solutions to more rigorous content moderation scenarios, such as text moderation and human review
workflows.
The API uses content flag definitions to identify the “type” of content. Adult images are defined as those
which are pornographic in nature and often depict nudity and sexual acts while Racy images are defined
as images that are sexually suggestive in nature and often contain less sexually explicit content than
images tagged as Adult.
To identify adult and racy content, the Analyze API makes use of the Analyze Image method. This
method returns two boolean properties, isAdultContent and isRacyContent, in the JSON response to
indicate adult and racy content respectively. The method also returns two properties, adultScore and
racyScore, which represent the confidence scores for identifying adult and racy content respectively.
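To tie this back to the Analyze call shown earlier, the short sketch below (using an assumed, already-parsed response dictionary with illustrative values) shows how an application might read the adult and racy properties.

# Assume analysis is the parsed JSON returned by the Analyze Image method
# when the Adult visual feature was requested; the values here are illustrative.
analysis = {
    "adult": {
        "isAdultContent": False,
        "isRacyContent": True,
        "adultScore": 0.012,
        "racyScore": 0.843,
    }
}

adult_info = analysis["adult"]
if adult_info["isAdultContent"] or adult_info["isRacyContent"]:
    print("Flagged for review "
          f"(adultScore={adult_info['adultScore']}, racyScore={adult_info['racyScore']})")
else:
    print("No adult or racy content detected.")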

2 https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/language-support#text-recognition
3 https://docs.microsoft.com/azure/cognitive-services/content-moderator/overview

Introducing the Speech APIs


The speech APIs that are part of Azure Cognitive Services provide specific functionality that can help your applications work with speech, which is essentially the spoken word. Your applications may need to use voice-triggering to perform some action when spoken words are detected by the application. You might want to transcribe recordings from a call center. You can also create voice-enabled bots to interact with users. The speech services your application can take advantage of are:
●● Speech-to-text services that can:

●● Transcribe continuous real-time speech into text.


●● Batch-transcribe speech from audio recordings.
●● Support intermediate results, end-of-speech detection, automatic text formatting, and profanity
masking.
●● Call on Language Understanding (LUIS) to derive user intent from transcribed speech.*
●● Text-to-speech services that can:

●● Provide neural text-to-speech voices nearly indistinguishable from human speech (English).
●● Convert text to natural-sounding speech.
●● Offer multiple genders and/or dialects for many supported languages.
●● Support plain text input or Speech Synthesis Markup Language (SSML).
●● Speech translation services that can:

●● Translate streaming audio in near-real-time.


●● Process recorded speech.
●● Provide results as text or synthesized speech.
Accessing the speech API can be done through the Speech SDK or from REST APIs. The Speech SDK
contains native APIs for use with C#, C++, and Java while the REST APIs can be called through the
standard HTTP-based API calls. It's also important to note that not every service is available on each of
these methods. The Speech SDK supports Speech-to-Text and speech translation but not text-to-speech.
The REST APIs support Speech-to-Text and Text-to-Speech, but not speech translation.
Before using the speech services, you should familiarize yourself with the region availability and pricing options on the Cognitive Services Pricing - Speech Services4 web page.
✱ LUIS intents and entities can be derived using a separate LUIS subscription. With this subscription, the SDK
can call LUIS for you and provide entity and intent results. With the REST API, you can call LUIS yourself to
derive intents and entities with your LUIS subscription.

Speech-to-Text Translation
Speech-to-text from Azure Speech Services enables real-time transcription of audio streams into text that your applications, tools, or devices can consume, display, and take action on as command input.

4 https://azure.microsoft.com/en-us/pricing/details/cognitive-services/speech-services/

Speech translation services are exposed through platform-independent REST-based APIs, or an SDK, that
allow them to be integrated into any solution requiring multi-language speech translation.

Microsoft has a set of trained data that resides in the cloud and is the source used in what Microsoft calls
the Universal language model. Speech-to-text uses this language model to perform the translation.
You can use different input sources for the audio such as:
●● microphones
●● streaming audio
●● locally or cloud-stored audio file (using the Speech SDK and REST APIs)
Note: The Speech SDK supports audio in 16-bit WAV and 16-bit PCM formats, as well as 16 kHz/8 kHz single-channel audio, as input for speech recognition. Additional formats are supported with the REST endpoint for speech-to-text or the batch transcription service.

Core features
Features available via the Speech SDK and REST APIs:

●● Transcribe short utterances (<15 seconds); only the final transcription result is supported - SDK: Yes, REST: Yes
●● Continuous transcription of long utterances and streaming audio (>15 seconds); interim and final transcription results are supported - SDK: Yes, REST: No
●● Derive intents from recognition results with LUIS - SDK: Yes, REST: No*
●● Batch transcription of audio files, asynchronously - SDK: No, REST: Yes**
●● Create and manage speech models - SDK: No, REST: Yes**
●● Create and manage custom model deployments - SDK: No, REST: Yes**
●● Create accuracy tests to measure the accuracy of the baseline model versus custom models - SDK: No, REST: Yes**
●● Manage subscriptions - SDK: No, REST: Yes**

* LUIS intents and entities can be derived using a separate LUIS subscription. With this subscription, the
SDK can call LUIS for you and provide entity and intent results. With the REST API, you can call LUIS
yourself to derive intents and entities with your LUIS subscription.
** These services are available using the cris.ai endpoint. See the Swagger reference at https://westus.cris.ai/swagger/ui/index.

Calling the Speech Translation API


Although fully REST-enabled, you can also use the APIs included in the Speech SDK as the following code
samples illustrate.
For example, here's a simplified version of what a call to the Speech Translation API might look like in
Python:

import azure.cognitiveservices.speech as speechsdk

# Creates an instance of a speech config with specified subscription key and service region.
# Replace with your own subscription key and service region (e.g., "westus").
speech_key, service_region = "YourSubscriptionKey", "YourServiceRegion"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

# Creates a recognizer with the given settings
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

print("Say something...")

# Starts speech recognition, and returns after a single utterance is recognized. The end of a
# single utterance is determined by listening for silence at the end or until a maximum of 15
# seconds of audio is processed. The task returns the recognition text as result.
# Note: Since recognize_once() returns only a single utterance, it is suitable only for single
# shot recognition like command or query.
# For long-running multi-utterance recognition, use start_continuous_recognition() instead.
result = speech_recognizer.recognize_once()

# Checks result.
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized: {}".format(result.text))
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized: {}".format(result.no_match_details))
elif result.reason == speechsdk.ResultReason.Canceled:
    cancellation_details = result.cancellation_details
    print("Speech Recognition canceled: {}".format(cancellation_details.reason))
    if cancellation_details.reason == speechsdk.CancellationReason.Error:
        print("Error details: {}".format(cancellation_details.error_details))

Making the same API call in C# is simply a change in language-specific syntax:


using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

namespace helloworld
{
    class Program
    {
        public static async Task RecognizeSpeechAsync()
        {
            // Creates an instance of a speech config with specified subscription key and service region.
            // Replace with your own subscription key and service region (e.g., "westus").
            var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

            // Creates a speech recognizer.
            using (var recognizer = new SpeechRecognizer(config))
            {
                Console.WriteLine("Say something...");

                // Starts speech recognition, and returns after a single utterance is recognized. The end of a
                // single utterance is determined by listening for silence at the end or until a maximum of 15
                // seconds of audio is processed. The task returns the recognition text as result.
                // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
                // shot recognition like command or query.
                // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
                var result = await recognizer.RecognizeOnceAsync();

                // Checks result.
                if (result.Reason == ResultReason.RecognizedSpeech)
                {
                    Console.WriteLine($"We recognized: {result.Text}");
                }
                else if (result.Reason == ResultReason.NoMatch)
                {
                    Console.WriteLine($"NOMATCH: Speech could not be recognized.");
                }
                else if (result.Reason == ResultReason.Canceled)
                {
                    var cancellation = CancellationDetails.FromResult(result);
                    Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");

                    if (cancellation.Reason == CancellationReason.Error)
                    {
                        Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                        Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                        Console.WriteLine($"CANCELED: Did you update the subscription info?");
                    }
                }
            }
        }

        static void Main()
        {
            RecognizeSpeechAsync().Wait();
            Console.WriteLine("Please press a key to continue.");
            Console.ReadLine();
        }
    }
}

IMPORTANT
The Python and C# code snippets above have been reduced for brevity and would require more code to
function properly in real scenarios.

Speech to Text
This API gives you the functionality necessary to perform real-time transcription of audio streams into a textual representation. A prime example might be a scenario where you want to provide closed captions for a speaker at an event or during a presentation.
Speech-to-text uses the Universal language model that was trained using Microsoft-owned data and is deployed in the cloud. It's optimal for conversational and dictation scenarios. The audio input can come from a connected microphone, streaming audio, or from audio that is stored as a file. If the Universal language model does not meet your needs, you can create custom models. Microsoft supports the following options for customization:

●● Acoustic Model - Creating a custom acoustic model is helpful if your application, tools, or devices are used in a particular environment, like in a car or factory with specific recording conditions. Examples involve accented speech, specific background noises, or using a specific microphone for recording.
●● Language Model - Create a custom language model to improve transcription of industry-specific vocabulary and grammar, such as medical terminology or IT jargon.
●● Pronunciation Model - With a custom pronunciation model, you can define the phonetic form and display of a word or term. It's useful for handling customized terms, such as product names or acronyms. All you need to get started is a pronunciation file - a simple .txt file.

Walkthrough - Creating a Speech to Text Service


Let's create a speech translation subscription using the Azure portal.
1. Sign into the Azure portal6.
2. Click or Select + Create a resource, type in “Speech” (without quotation marks) in the "Search the
Marketplace" entry and press Enter.
3. Once search results are returned, select Speech from the “Results” panel, then, in the subsequent
panel, Click or Select Create.
4. Enter a unique name for your service, such as “SpeechDemo1” or other relevant name.
5. Select your subscription.
6. Choose a location that is closest to you.
7. Select a Pricing tier (you can use F0 for this option or the lowest cost available in your region).
8. Create a new Resource Group named mslearn-speechapi to hold your resources.
9. Click or Select Create to create the service.
After a short delay, your new service will be provisioned and available, and new API keys will be generated for programmatic use.
TIP: If you miss the notification that your resource is published, you can simply Click or Select the notification icon in the top bar of the portal and select Go To Resource.
With a Speech Translation subscription created, you're now able to access your API endpoint and subscription keys.
To access your Speech Translation subscription, you'll need to get two pieces of information from the
Azure portal:
1. A Subscription Key that is passed with every request to authenticate the call.

6 https://portal.azure.com?azure-portal=true

2. The Endpoint that exposes your service on the network.

View the Subscription Keys


1. Click or Select Resource groups in the left sidebar of the portal, and then Click or Select the resource
group created for this service.
2. Select your service that you just created.
3. Select Keys under the “Resource Management” group to view your new API keys.
4. Copy the value of KEY 1 or KEY 2 to the clipboard for use in an application.

View the endpoint


1. Select Overview from the menu group, locate the “Endpoint” label, and make note of the Endpoint
value. This value will be the URL used when generating temporary tokens.
Note Key1 and the Endpoint are also available on the Quick Start page under the Resource Management
section.

Introducing the Language Service


The language services are designed to ensure apps and services can understand the meaning of unstructured text or recognize the intent behind a speaker's utterances. The different features available, like many of the Azure offerings, change periodically, so the main source of up-to-date information should always be the Azure site for each technology. You can monitor the current offerings for the Language services on the Cognitive Services Home Page7. The current offerings, as of the creation of this content, are:
●● Text Analytics - The Text Analytics API is a Cognitive Service designed to help you extract information from text. Through the service you can identify language, discover sentiment, extract key phrases, and detect well-known entities from text. Under the covers, the service uses a machine learning classification algorithm to generate a sentiment score between 0 and 1. Scores closer to 1 indicate positive sentiment, while scores closer to 0 indicate negative sentiment. A score close to 0.5 indicates no sentiment or a neutral statement. You don't have to worry about the implementation details of the algorithm. You focus on using the service by making calls to it from your app. As we'll see shortly, you structure a POST request, send it to the endpoint, and receive a JSON response that contains pertinent information identified in the text.
●● Translator Text - There are two aspects to this service, translator text and custom translator. Translator Text is a cloud-based machine translation service you can use to translate text in near real-time through a simple REST API call. Automatic language detection is also a part of this service. The API uses modern neural machine translation technology and offers statistical machine translation technology. Custom Translator is an extension of the Translator Text API which allows you to build neural translation systems. The customized translation system can be used to translate text with the Translator Text API or Microsoft Speech Services.

7 https://azure.microsoft.com/en-us/services/cognitive-services/

●● Language Understanding (LUIS) - Language Understanding (LUIS) is a cloud-based API service that
applies custom machine-learning intelligence to a user's conversational, natural language text to
predict overall meaning, and pull out relevant, detailed information.
A client application for LUIS is any conversational application that communicates with a user in natural
language to complete a task. Examples of client applications include social media apps, chat bots, and
speech-enabled desktop applications.
Language understanding is a concept that even humans get wrong from time to time. A good example is the use of slang terms or localized phrases. Suppose you are in a public place in Indonesia, perhaps a mall or a restaurant, and you're searching for the restroom. Indonesian language lessons might teach you to ask where the restroom is with the phrase, "Di mana kamar kecil?". While this is technically correct, it applies mainly to seeking the restroom in someone's house, because kamar kecil literally means small (kecil) room (kamar). In public, it's more correct to ask, "Di mana WC?", or "Di mana toilette?". However, almost all Indonesians will know what you are asking. What happens if you attempt to have a computer perform that translation to understand what you asked? Will it get the correct answer, or will it try to direct you to a "small room" somewhere that isn't actually a restroom? Likewise, in the English language, there are many scenarios where a human "understands" the meaning of a phrase or statement, where the subtle similarities aren't apparent to a non-native English speaker. How many would understand the phrase "swing the door to"? This is the same as "shut the door" or "close the door", but not everyone would understand these equivalents. For AI to understand language, specific aspects are critical to aid the algorithm in making comparisons and distinctions. This is where the Language Understanding Intelligent Service, LUIS, comes into play.
LUIS makes use of three key aspects for understanding language:
Intent - An intent represents a task or action the user wants to perform. It is a purpose or goal expressed in a user's utterance.
Utterance - Utterances are input from the user that your app needs to interpret.
Entities - The entity represents a word or phrase inside the utterance that you want extracted.
(A short illustrative example of these three aspects appears at the end of this list of language offerings.)

You will learn more about LUIS in later modules.


●● QnA Maker - This service can offer question and answer extraction from unstructured text, knowledge base creation from collections of Q&As, and semantic matching for knowledge bases. One of the key aspects of QnA Maker is the ability to parse existing frequently asked questions (FAQs) and generate question and answer pairs. You can then integrate this with a chat bot to provide end-user support through bot interactions based on your FAQs.

In Preview
●● Immersive Reader - The language service also has an immersive reader feature in preview as of this
writing. Immersive Reader can help users read and understand text and supports a feature set that
aids readers of all abilities.
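To make the LUIS terminology above concrete, here is a small, purely illustrative example (not an exact LUIS API response; the field names are simplified assumptions) of how an utterance, its intent, and its entities relate to one another:

# Purely illustrative example of LUIS concepts; the field names here are
# simplified and are not the exact schema returned by the LUIS endpoint.
utterance = "Book me a flight to Paris next Friday"

prediction = {
    "intent": "BookFlight",  # the task or action the user wants to perform
    "entities": [
        {"type": "Destination", "value": "Paris"},        # words to extract
        {"type": "TravelDate", "value": "next Friday"},
    ],
}

print(f"User said: {utterance}")
print(f"Predicted intent: {prediction['intent']}")
for entity in prediction["entities"]:
    print(f"Entity ({entity['type']}): {entity['value']}")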

Walkthrough - Call the Text Analytics API from the Online Testing Console
Create an access key
Every call to the Text Analytics API requires a subscription key. Often called an access key, it is used to
validate that you have access to make the call. We'll use the Azure portal to grab a key.
1. Sign into the Azure portal.
2. Click or Select Create a resource.
3. In the Search the Marketplace search box, type in text analytics and hit return.
4. Select Text Analytics in the search results and then select the Create button.
5. In the Create page that opens, enter the following values into each field.
●● Name - MyTextAnalyticsAPIAccount - The name of the Cognitive Services account. We recommend using a descriptive name. Valid characters are a-z, 0-9, and -.
●● Subscription - Choose your subscription - The subscription under which this new Cognitive Services API account with Text Analytics API is created.
●● Location - choose a region from the dropdown
●● Pricing tier - F0 - The cost of your Cognitive Services account depends on the actual usage and the options you choose. We recommend selecting the F0 tier for our purposes here.
●● Resource group - Select Use existing and choose an existing resource group, or create a new one if necessary
6. Select Create at the bottom of the page to start the account creation process.
7. Watch for a notification that the deployment is in progress. You'll then get a notification that the
account has been deployed successfully to your resource group.

Get the access key


Now that we have our Cognitive Services account, let's find the access key so we can start calling the API.
1. Click or Select on the Go to resource button on the Deployment succeeded notification. This action
opens the account Quickstart.
2. Select the Keys menu item from the menu on the left, or in the Grab your keys section of the quickstart. This action opens the Manage keys page.
3. Copy one of the keys using the copy button.
Important Always keep your access keys safe and never share them.
4. Store this key for the rest of this walkthrough. We'll use it shortly to make API calls from the testing
console and throughout the rest of the module.

Call the API from the testing console


Now that we have our key, we can head over to the testing console and take the API for a spin.
1. Navigate to the following URL in your favorite browser. Replace [location] with the location you
selected when creating the Text Analytics cognitive services account earlier in this walkthrough. For
example, if you created the account in eastus, you'd replace [location] with eastus in the URL.

https://[location].dev.cognitive.microsoft.com/docs/services/TextAnalytics.V2.0
The landing page displays a menu on the left and content to the right. The menu lists the POST methods
you can call on the Text Analytics API. These endpoints are Detect Language, Entities, Key Phrases, and
Sentiment. To call one of these operations, we need to do a few things.
●● Select the method we want to call.
●● Add the access key that we saved earlier in the lesson to each call.
2. From the left menu, select Sentiment. This selection opens the Sentiment documentation to the right.
As the documentation shows, we'll be making a REST call in the following format.
https://[location].api.cognitive.microsoft.com/text/analytics/v2.0/sentiment
[location] is replaced with the location that you selected when you created the Text Analytics account.
We'll pass in our subscription key, or access key, in the ocp-Apim-Subscription-Key header.

Make some API calls


1. Select the appropriate location button (it should be the same location where you created the service) to open the live, interactive API console.
2. Paste the access key you saved earlier into the field labeled Ocp-Apim-Subscription-Key. Notice, in the
HTTP request panel, that the key will be written automatically into the HTTP request window as a
header value, represented by dots rather than displaying the key's value.
3. Scroll to the bottom of the page and Click or Select Send.
Let's examine the sections of this result panel in more detail.
In the Headers section of the user interface, we set the access, or subscription, key in the header of
our request.

Next, we have the request body section, which holds a documents array in JSON format. Each document in the array has three properties. The properties are "language", "id", and "text". The "id" is a number in this example, but it can be anything you want as long as it's unique in the documents array. In this example, we're also passing in documents written in three different languages. Over 15 languages are supported in the Sentiment feature of the Text Analytics API. For more information, check out Supported languages in the Text Analytics API. The maximum size of a single document is 5,000 characters, and one request can have up to 1,000 documents.

The complete request, including the headers and the request URL are displayed in the next section.

The last portion of the page shows the information about the response. The response holds the
insight the Text Analytics API had about our documents. An array of documents is returned to us,
without the original text. We get back an “id” and "score" for each document. The API returns a
numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, while scores close to 0
indicate negative sentiment. A score of 0.5 indicates the lack of sentiment, a neutral statement. In this
example, we have two pretty positive documents and one negative document.
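The same call can be made programmatically instead of from the testing console. Below is a hedged Python sketch using the requests library; the location, access key, and sample documents are placeholders, and the request body (a documents array with language, id, and text) matches the shape described above.

import requests

subscription_key = "YourAccessKey"  # placeholder - the key copied earlier
location = "eastus"                 # placeholder - the region you chose
sentiment_url = f"https://{location}.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
body = {
    "documents": [
        {"language": "en", "id": "1", "text": "I had a wonderful experience."},
        {"language": "es", "id": "2", "text": "Este ha sido un dia terrible."},
    ]
}

response = requests.post(sentiment_url, headers=headers, json=body)
response.raise_for_status()

# Each returned document carries an id and a sentiment score between 0 and 1.
for document in response.json()["documents"]:
    print(document["id"], document["score"])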

Introducing Search
The Search aspect of Cognitive Services, at a high level, utilizes Bing as the search engine. That said, you can focus your use of Search on specific areas. Note that this topic is rather large; this course will not cover these search APIs in depth but merely provides an introduction, leaving it to the reader to explore any of the APIs that are of interest.

Workflow
A common workflow for using these services would be:
1. Create a Cognitive Services API account with access to the Bing Search APIs. If you don't have an
Azure subscription, you can create an account for free.
2. Send a request to the API, with a valid search query.
3. Process the API response by parsing the returned JSON message.
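As a hedged sketch of steps 2 and 3 for the Bing Web Search API (the other Bing Search APIs follow the same pattern with different endpoint paths), the Python snippet below sends a query and parses the JSON response; the key and query values are placeholders.

import requests

subscription_key = "YourBingSearchKey"  # placeholder
search_url = "https://api.cognitive.microsoft.com/bing/v7.0/search"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}
params = {"q": "Azure Cognitive Services", "count": 5, "mkt": "en-US"}

# Step 2 - send a request to the API with a valid search query.
response = requests.get(search_url, headers=headers, params=params)
response.raise_for_status()

# Step 3 - process the API response by parsing the returned JSON message.
results = response.json()
for page in results.get("webPages", {}).get("value", []):
    print(page["name"], "-", page["url"])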
The current Search APIs support the following separate areas:

Bing News Search


The Bing News Search API enables you to integrate Bing's news search capabilities in your applications. By sending search queries with the API, you can get relevant news articles from multiple categories and sources, similar to the news portion of the Bing search engine. It consists of the following features:
●● Suggesting and using search terms - Improve your search experience by using the Bing Autosuggest
API to display suggested search terms as they are typed.
●● Get general news - Find news by sending a search query to the Bing News Search API, and getting
back a list of relevant news articles.
●● Today's top news - Get the top news stories for the day, across all categories.
●● News by category - Search for news in specific categories.
●● Headline news - Search for top headlines across all categories.

Bing Video Search


The Bing Video Search API makes it easy to add video searching capabilities to your services and applications. By sending user search queries with the API, you can get and display relevant and high-quality videos.
Video Search supports the following features:
●● Suggest search terms in real-time - Improve your app experience by using the Bing Autosuggest API
to display suggested search terms as they are typed.
●● Filter and restrict video results - Filter the videos returned by editing query parameters.

●● Crop, resize, and display thumbnails - Edit and display thumbnail previews for the videos returned by
Bing Video Search API.
●● Get trending videos - Search for trending videos from around the world.
●● Get video insights - Customize a search for trending videos from around the world.

Bing Web Search


The Bing Web Search API enables you to integrate Bing's search capabilities in your applications. By sending search queries with the API, you can get relevant webpages, images, videos, news and more.
In addition to instant answers, Bing Web Search provides additional features and functionality that allow
you to customize search results for your users.
●● Suggest search terms in real time - Improve your application experience by using the Bing Autosuggest API to display suggested search terms as they are typed.
●● Filter and restrict results by content type - Customize and refine search results with filters and query
parameters for web pages, images, videos, safe search, and more.
●● Hit highlighting for unicode characters - Identify and remove unwanted unicode characters from
search results before displaying them to users with hit highlighting.
●● Localize search results by country, region, and/or market - Bing Web Search supports more than three
dozen countries or regions. Use this feature to refine search results for a specific country/region or
market.
●● Analyze search metrics with Bing Statistics - Bing Statistics is a paid subscription that provides analytics on call volume, top query strings, geographic distribution, and more.

Bing Autosuggest
Bing Autosuggest API lets you send a partial search query term to Bing and get back a list of suggested
queries that other users have searched on. For example, as the user enters each character of their search
term, you'd call this API and populate the search box's drop-down list with the suggested query strings.
The workflow for this service is just a bit different due to the way it works. You still create the Cognitive Services API account, as in the workflow at the opening of this topic, but during step two, you should send a request to the API each time a user types a new character in the search box. Then you process the returned JSON.
The rationale for calling this API each time the user types a new character in your application's search box is that, as more characters are entered, the API will return more relevant suggested search queries. For example, the suggestions the API might return for a single s are likely to be less relevant than ones for sail.
The user can then select a suggestion from the drop-down list, you can use it to begin searching with one
of the Bing Search APIs, or directly go to the Bing search results page.
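A hedged Python sketch of that per-keystroke call is shown below; the key is a placeholder, a loop stands in for the search box's input event, and the response shape used here (suggestionGroups containing searchSuggestions with displayText) follows the v7 Autosuggest format.

import requests

subscription_key = "YourBingAutosuggestKey"  # placeholder
suggest_url = "https://api.cognitive.microsoft.com/bing/v7.0/Suggestions"

headers = {"Ocp-Apim-Subscription-Key": subscription_key}

# Simulate the user typing "sail" one character at a time.
partial_query = ""
for character in "sail":
    partial_query += character
    response = requests.get(suggest_url, headers=headers,
                            params={"q": partial_query, "mkt": "en-US"})
    response.raise_for_status()
    suggestions = response.json()
    # Each suggestion group contains searchSuggestions with displayText values.
    for group in suggestions.get("suggestionGroups", []):
        texts = [s["displayText"] for s in group.get("searchSuggestions", [])]
        print(partial_query, "->", texts[:3])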

Bing Custom Search


The Bing Custom Search API enables you to create tailored, ad-free search experiences for topics that you care about. You can specify the domains and webpages for Bing to search, as well as pin, boost, or demote specific content to create a custom view of the web and help your users quickly find relevant search results. It contains the following features:
●● Custom real-time search suggestions - Provide search suggestions that can be displayed as a drop-
down list as your users type.
●● Custom image search experiences - Enable your users to search for images from the domains and
websites specified in your custom search instance.
●● Custom video search experiences - Enable your users to search for videos from the domains and sites
specified in your custom search instance.
●● Share your custom search instance - Collaboratively edit and test your search instance by sharing it
with members of your team.
●● Configure a UI for your applications and websites - Collaboratively edit and test your search instance
by sharing it with members of your team.

Bing Entity Search


The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. Place results include restaurants, hotels, or other local businesses. Bing returns places if the query specifies the name of the local business or asks for a type of business (for example, restaurants near me). Bing returns entities if the query specifies well-known people, places (tourist attractions, states, countries, etc.), or things.
Feature List:
●● Real-time search suggestions - Provide search suggestions that can be displayed as a dropdown list as
your users type.
●● Entity disambiguation - Get multiple entities for queries with multiple possible meanings.
●● Find places - Search for and return information on local businesses and entities

Bing Image Search


The Bing Image Search API enables you to use Bing's image search capabilities in your application. By
sending search queries to the API, you can get high-quality images similar to bing.com/images. While the
Bing Image Search API provides image-only search results, you can combine or use the other available
Bing Search APIs to find many types of content on the web.
Bing Image Search features are listed here:
●● Suggest search terms in real-time - Improve your app experience by using the Bing Autosuggest API
to display suggested search terms as they are typed.
●● Filter and restrict image results - Filter the images that Bing returns by editing query parameters.
●● Crop, resize, and display thumbnails - Edit and display thumbnail previews for the images returned by
Bing Image Search.
●● Pivot & expand user search queries - Expand your search capabilities by including and displaying
Bing-suggested search terms to queries.
●● Get trending images - Customize a search for trending images from around the world.

Bing Visual Search


The Bing Visual Search API provides similar image details to those shown on Bing.com/images. By uploading an image or providing a URL to one, this API can identify a variety of details about it, including visually similar images, shopping sources, webpages that include the image, and more. If you use the Bing Image Search API, you can use insight tokens attached to the API's search results instead of uploading an image.
Rather than a feature list, like the other APIs, this service uses insights that Visual Search lets you discover:
●● Visually similar images - A list of images that are visually similar to the input image.
●● Visually similar products - Products that are visually similar to the product shown.
●● Shopping sources - Places where you can buy the item shown in the input image.
●● Related searches - Related searches made by others or that are based on the contents of the image.
●● Web pages that include the image - Webpages that include the input image.
●● Recipes - Webpages that include recipes for making the dish shown in the input image
In addition to these insights, Visual Search also returns a diverse set of terms (tags) derived from the
input image. These tags allow users to explore concepts found in the image. For example, if the input
image is of a famous athlete, one of the tags could be the name of the athlete, another tag could be
Sports. Or, if the input image is of an apple pie, the tags could be Apple Pie, Pies, Desserts, so users can
explore related concepts.
The Visual Search results also include bounding boxes for regions of interest in the image. For example, if
the image contains several celebrities, the results may include bounding boxes for each of the recognized
celebrities in the image. Or, if Bing recognizes a product or clothing in the image, the result may include a
bounding box for the recognized product or clothing item.

Bing Local Business Search (Preview)


This service, at the time this course was published, is in preview, so note that features and functionality
may change. At this time it supports only the en-US (English, United States) market and does not support autosuggest.
The Bing Local Business Search API is a RESTful service that enables your applications to find information
about local businesses based on search queries. For example, q=<business-name> in Redmond, Wash-
ington, or q=Italian restaurants near me.
●● Find local businesses and locations - The Bing Local Business Search API gets localized results from a
query. Results include a URL for the business's website and display text, phone number, and geo-
graphical location, including: GPS coordinates, city, street address
●● Filter local results with geographic boundaries - Add coordinates as search parameters to limit results
to a specific geographic area, specified by either a circular area or square bounding box.
●● Filter local business results by category - Search for local business results by category. This option
uses reverse IP location or GPS coordinates of the caller to return localized results in various catego-
ries of business.
Also important to note is that the workflow for this service is slightly different from the others (a request sketch follows the list):
1. Call the Bing Local Business Search API from any programming language that can make HTTP requests
and parse JSON responses. This service is accessible using the REST API.
2. Create a Cognitive Services API account with access to the Bing Search APIs. If you don't have an
Azure subscription, you can create a free account.

3. URL encode your search terms for the q="" query parameter. For example, q=nearby+restaurant or
q=nearby%20restaurant. Set pagination as well, if needed.
4. Send a request to the Bing Local Business Search API
5. Parse the JSON response
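The following C# sketch maps to steps 3 through 5 of the workflow above. The preview endpoint URL and the placeholder key are assumptions for illustration only; verify them against the current Bing Local Business Search documentation and your own Cognitive Services account.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class LocalBusinessSearchSample
{
    // Assumed values for illustration only.
    private const string SubscriptionKey = "<your-subscription-key>";
    private const string Endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/localbusinesses/search";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            // Step 3: URL encode the search terms for the q query parameter.
            string query = Uri.EscapeDataString("Italian restaurants near me");

            // Step 4: send the request (mkt=en-US because only the US English market is supported).
            string url = $"{Endpoint}?q={query}&mkt=en-US";
            string json = await client.GetStringAsync(url);

            // Step 5: parse the JSON response (here it is simply written to the console).
            Console.WriteLine(json);
        }
    }
}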

Demo - Search Services


Bing Autosuggest
1. Open a browser and navigate to the Bing Autosuggest page8
2. Select the Market drop down to get a list of available languages supported
3. Ensure that en-us (English-United States) is chosen, or if you are delivering this in another location,
select the local language that matches the language used at your location
4. Begin typing text into the search box and show the results being displayed in the Preview pane to the
right of the search box
5. Discuss with the students the results that are displayed. For example, if you enter ‘tes’ into the search
box, without the single quotes, results will be returned such as tesla, test, tesco, etc.
6. Select the JSON tab in the result pane on the right
7. Discuss with the students the JSON output and the fields included. This JSON is what will be returned
to an application that uses this service

Bing Image Search


1. Open a browser and navigate to the Bing Image Search API page9
2. Enter a search term or make use of some of the examples on the page provided by Microsoft. For
example, select cute animals
3. Images of cute animals will be displayed in the Preview panel
4. Scroll down the page and point out the options drop-downs. Make some selections to filter on color
and size
5. Point out to the students how the color filter impacts the results. For example, with Blue selected as
the color, you may see a couple of images of dogs with blue collars or other adornments. This shows
that, while the dog isn't blue, there is still a blue object in the image
6. Once again, display the JSON panel to show the results that would be returned to a calling application
of this API

Introducing Decision
The Decision component of the Azure Cognitive Services suite is a relatively new addition to the family. It
is intended to help you enable informed and efficient decision making. Decision has brought over the
Content Moderator service that previously sat in other areas

8 https://azure.microsoft.com/en-us/services/cognitive-services/autosuggest/
9 https://azure.microsoft.com/en-us/services/cognitive-services/bing-image-search-api/

of Cognitive Services and also adds two new features in preview. Decision consists of the following
services:
●● Content Moderator - Use this service to detect potentially offensive content such as unwanted
images, filter text for profanity or other unwanted text, and moderate content that might be consid-
ered adult or racy content.
●● Anomaly Detector (Preview) - This feature set can help you to monitor business health in real-time,
leverage interactive data analytics for your business, and even help to conduct Internet of Things (IoT)
remote monitoring.
●● Personalizer (Preview) - Personalizer helps to deliver richer personalized experiences in the applica-
tions that you develop. It can understand and manage the reinforcement learning loop and can be
deployed anywhere from the cloud to the edge.
Because the Personalizer and Anomaly Detector features are still in preview, they are not covered in this
course content.

Content Moderator
If your organization publishes content that will be viewed or consumed by the public, or even internally,
and you want to ensure that offensive or adult-related content is not included, you can use the various
features available in the content moderator services. Content moderator has the ability to evaluate text,
images, and video content for offensive material. To help ensure that the service doesn't inadvertently
include or exclude content, you can also rely on the human review tool aspect of Content
Moderator.

Text Moderation
The service can evaluate written text to determine if it contains the following (a request sketch follows the list):
●● Profanity - capable of detecting profanity in more than 100 languages
●● Inappropriate text - this is context dependent and also supports the use of custom lists for evaluation
●● Personally Identifiable Information (PII) - scan text and identify the potential exposure of PII
●● Information in custom term lists - block or allow content according to your own content policies
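As a rough illustration of how an application might call the text moderation feature, the C# sketch below screens a short string for profanity, classification, and PII. The region, endpoint path, query parameters, and key are assumptions based on a typical Content Moderator REST request; check the Quick Start blade for your own resource for the exact values.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TextModerationSample
{
    // Assumed values for illustration; use the key and regional endpoint from your own resource.
    private const string SubscriptionKey = "<your-content-moderator-key>";
    private const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen"
        + "?classify=True&PII=True&language=eng";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            // The text to evaluate is sent as the plain-text body of the POST request.
            var body = new StringContent(
                "You can reach me at someone@example.com. This is a test sentence.",
                Encoding.UTF8, "text/plain");

            HttpResponseMessage response = await client.PostAsync(Endpoint, body);

            // The JSON response reports classification scores, any detected PII, and term matches.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}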

Image Moderation
●● Use machine-assisted image moderation and human-in-the-loop Review tool to moderate images for
adult and racy content.
●● Scan images for text content and extract the text.
●● Detect faces to help identify personal data that may be contained in images. You can match images
against custom lists and take further action.
●● The Optical Character Recognition (OCR) operation predicts the presence of text content in an image
and extracts it for text moderation.
●● The Match operation allows fuzzy matching of incoming images against any of your custom lists,
created and managed using the List operations. If a match is found, the operation returns the identifi-
er and the moderation tags of the matched image.

Video Moderation
Use Content Moderator's machine-assisted video moderation and human review tool to moderate videos
and transcripts for adult (explicit) and racy (suggestive) content.
●● Video-trained classifier (preview) - Microsoft's adult and racy video classifier is trained with videos
●● Shot detection - rather than just outputting frames, the service also provides shot-level data so you
can review the video using shot-level or frame-level data
●● Key frame detection - the service identifies and outputs only potentially complete (good) frames. The
feature makes frame-level adult and racy analysis easier and more efficient.
●● Visualization for human review - using the review tool, users can evaluate the insights of the video,
change tags related to the video and perform many features associated with reviewing and evaluating
video.
●● Transcript moderation - video files that have transcripts or closed captioning require moderation to
search for offensive speech. If the transcript files are not available, this option requires the use of the
Azure Media Indexer to convert the speech to text, and then the Content Moderator review API to
scan the text in the review tool

Review Tool
The Review tool, when used in conjunction with the machine-assisted moderation APIs, allows you to
accomplish the following tasks in the content moderation process:
●● Use a common set of tools to moderate text, images, and video.
●● Automate the creation of human reviews when moderation API results come in.
●● Assign or escalate content reviews to multiple review teams, organized by content category or
experience level.
●● Use default or custom logic filters (workflows) to sort and track content, without writing any code.
●● Use connectors to process content with Microsoft PhotoDNA, Text Analytics, and Face APIs in addition
to the Content Moderator APIs.
●● Build your own connector to create workflows for any API or business process.
●● Get key performance metrics on your content moderation processes.

Demo - Content Moderator and Review Tool


1. Open a browser and navigate to the Content Moderator landing page10
2. Select the Get started button
3. On the Content Moderator dashboard page that loads, select the Sign up button. If you already have
an account, select the Sign In option in the upper right hand corner of the page.
4. If you are using the Sign Up option, enter a Microsoft account (LiveID) and password
5. Select Yes to let the app access your info
6. Complete the Surname and Given name fields then select Continue
7. Enter the code that is sent to your email

10 https://azure.microsoft.com/en-us/services/cognitive-services/content-moderator/

8. You should be taken to your Azure portal with the resource being deployed
9. Once the resource is deployed and you are at the Quick Start page, select the Content Moderator
Review Tool link in section 2
10. This will open a new browser tab where you can create your review team
11. Complete the fields for Region, Team Name, and Team ID
12. Agree to the terms and select Create Team
13. In a few seconds, the Content Moderator dashboard opens
14. Select the Try drop down and choose Text
15. Select the Click here link to use default sample text
16. Point out to the students the contents of the text that will be evaluated and then select Submit
17. After a few seconds, the process completes and there is a Click here to review link on the page.
Select that to see the result
18. Discuss with the students the highlighted components in the left pane
19. Discuss the resulting output in the right pane showing that HasProfanity is True as is hasPII. Point out
the places in the text where those values are present.
20. You can also choose to test this with images or video files if you have any on the local computer.

Creating Cognitive Services using Azure Portal


Understanding Cognitive Services Accounts
Many of the previous topics discussed a step in the workflow for creating a Cognitive Services API account.
There are a couple of very important aspects that need to be considered up front. First of all, there is a
prerequisite for all Cognitive Services accounts: the user must have a valid Microsoft Azure subscription. All
Cognitive Services are created through this Azure subscription.
The second key aspect is the fact that there are two different types of Azure Cognitive Services subscrip-
tions that you need to understand when creating an account to use the APIs.
●● Subscription to a single service - This is simply creating a subscription to a single service, such as a
Computer Vision subscription. The subscription will be restricted to the resource that is
created.
●● Multi-Service subscription - this is a single subscription that spans across most of the Azure Cognitive
Services and is not restricted to a single resource. This will also consolidate billing.

Creating a Single-Service Subscription


In order to create a single-service subscription, you will need to select the individual type of Cognitive
Service that you want to use for the subscription. As an example, you could create a subscription for the
Face API in the Computer Vision service by following these steps:
1. Sign into your Azure portal
2. Click or Select + Create a Resource
3. Type Face in the Search Marketplace text box and press Enter

4. Select Face from the list and then Click or Select Create
5. You would next complete the details in the Create blade
6. Click or Select Create to create the single-service subscription for the Face API

Create a Multi-Service Subscription


1. Sign into your Azure portal
2. Click or Select + Create a Resource
3. Type Cognitive Services in the Search Marketplace text box and press Enter

4. Select Cognitive Services and Click or Select Create


5. You would next complete the details in the Create blade
6. Click or Select Create to create the multi-service subscription for the Cognitive Services
Anytime you create a new resource in the Azure portal, it can take a few seconds to a few minutes to
create that resource. An alert will pop up to let you know that the resource was created, along with a
button to Go To Resource. If you miss the alert, you can always Click or Select the notification icon at the
top right and then select Go To Resource. You can also choose to pin the resource to a dashboard.

Managing your Cognitive Services Account


After creating a Cognitive Services account, and selecting to Go To the Resource, the Azure portal
defaults to placing you on the Quick Start blade for the service. The main reason is that Microsoft wants
you to be able to understand the core aspects of the service and provides a link to get the access keys,
but also shows links where you can investigate the API reference information, learn how to make calls to
the service endpoint, and view some documentation on the service.

Using the Overview blade, you can ascertain basic information about your service such as the resource
group it is part of, its current status, the region it is located in, the assigned subscription, pricing tier, and of
course, the endpoint that would be used to access the service. The other core areas from the top left
options of the service permit you to view the activity log for the service, handle access control, manage
the tags for the service and to run some diagnostics.
The Activity Log blade displays a history of access to the service. Options permit you to filter on management
groups, Azure subscriptions, timespan, event severity, or resource group, or to add your own filter.
Access Control is the blade you will use to manage role-based access control (RBAC). We don't cover
RBAC here as it is covered in the Azure Administrator tracks, both online and classroom-based.
Tags is another area that was covered in the Azure Administrator training and pertains to tagging
resources for identification in billing invoice investigations.
Diagnose and solve problems will open a blade that will display resource health along with guidance on
determining issues with the service and some troubleshooting steps.

Resource Management
Under the Resource Management section, you will find the previously mentioned Quick Start but at the
top of the section is the option to view the Keys. Managing the keys for your service is critical to security.

Applications that want to connect to and use this service must use one of the keys listed on this page.
You can use the Copy button at the far right of each key to copy it to the clipboard for use in your
applications. If you think that your keys have been compromised, you should use the regenerate options
at the top of this blade to create new keys for the service.
Selecting the Pricing Tier option opens a blade where you can choose a new pricing tier that is different
from that originally chosen when you created the service. Pricing tiers can be added or removed in Azure
so you should review this blade periodically to not only see what changes may have been made, but to
also ensure that the tier you have chosen is correct for your usage model.
The Billing by Subscription option opens a blade that provides you with options to select an Azure subscription,
resource, resource group, and time span to view the current costs associated with the services. Note
that there is a delay in the reporting that may affect the timeliness of the information displayed on this
blade.
If you want to view basic properties for the service, you will find that information on the Properties blade.
Essentially these are read only fields that show associated subscription name and ID, the resource group
(RG) this service is assigned to and the ID for the RG.
You can add Locks to the service by using the Locks blade. We will not cover locks in this course.
Many Azure administrators will look to manage services and resources through the use of Azure Resource
Manager (ARM) templates. To aid in using ARM with your service, selecting the Automation Script blade
results in Azure generating a template for you automatically. The Azure Administrator training, AZ-1xx
set of courses, covers ARM templates and their usage.

Monitoring your Cognitive Service


The Azure portal allows you to interact with monitoring for your service through the use of alerts, metrics,
diagnostics and logging. All of these options are found in the Monitoring section.

Alerts
Alerts proactively notify you when important conditions are found in your monitoring data. They allow
you to identify and address issues before the users of your system notice them. Alerts have a specific
flow that you should understand to effectively use them.

When you create a rule, the Azure portal opens the Create rule blade where you specify the following
information:
Resource - the resource to which the alert rule will apply. If you do this from the service blade, the
service is automatically selected, but you can select another resource. This reflects the fact that alerts and
alert rules extend across many Azure resources.
Condition - the signal logic that the alert rule will use. Signals are emitted by the target resource and
can be of several types: Metric, Activity log, Application Insights, and Log. The list is fairly extensive, with
some signals being specific to a type of resource.
Action Group - who, or what, should be notified of the alert for action. Notify your team via email and
text messages or automate actions using webhooks, runbooks, functions, logic apps
Alert Details - provide a name for the alert rule and a description to explain the purpose for the alert
rule.
The default option is to enable the new rule as soon as it is created, but you can turn that off if you don't
want the rule to be enabled immediately.

What you can alert on


You can alert on metrics and logs as described in monitoring data sources. These include but are not
limited to:
●● Metric values
●● Log search queries
●● Activity Log events

●● Health of the underlying Azure platform


●● Tests for web site availability

Metrics
Selecting the Metrics monitoring option opens the Metrics blade, which consists of a chart. Initially this
chart will be blank until you add a metric that you want to display. The resource entry should be auto-populated,
along with a default metric namespace. Selecting the arrow in the Metric drop
down will allow you to select a metric to add to the chart. Only once you have selected a metric will the
aggregation options be available; the available aggregations change depending on the
metric chosen.

You can also change the chart type from the default line chart to the area, bar, scatter, or grid options.
You can also choose to create a new alert rule directly from this blade.
Once you have created your metric chart, you can pin it to the dashboard for easy monitoring.

Diagnostic Settings
This blade allows you to select an Azure subscription, resource group, resource types, and resources to
use in a diagnostic scenario. You can turn on diagnostics to collect:
●● Audit data
●● RequestResponse information for the service
●● All Metrics that are configured for the service

Walkthrough - Creating and Managing a Cognitive Services Account


In this walkthrough, you will create a cognitive services account and explore the options available for
managing and monitoring the account.

Task 1 - Create a Cognitive Service account


1. Open a web browser and log in to your Azure portal.
2. Select the + Create a resource option in the portal
3. Because the Azure portal has a tendency to change from time-to-time, we recommend using the
Search the Marketplace option to locate specific resources. As a result, type in cognitive services
and the search feature should bring up an option in the list for Cognitive Services. Select that option
to open the Cognitive Services blade.
4. Read the descriptive text on the page and note that this option is for creating a “product bundle” for
accessing multiple cognitive services with a single API key.
5. Click or Select the Create button
6. On the Create blade, enter the following information:

1. Name - choose a unique name for your Cognitive Services account


2. Subscription - choose the appropriate subscription for your account, likely your Azure Pass
subscription
3. Location - select a supported location that is closest to you
4. Pricing tier - select S0
5. Resource Group - either create a new RG for this task or use an existing one from a previous
walkthrough or lab
6. Select the check box that reads I confirm I have read and understood the notice below.
7. Click or Select Create

Task 2 - Connect to and Manage the Resource


1. Once the CS account has been created, you can Click or Select the Go To Resource message
that pops up to go to the resource
2. The resource page opens on the Quick Start page that provides you with your API key and Endpoint
that would be used to access this service account

1. You can also access the API key(s) from the Keys page and the Endpoint from the Overview page.
3. Click or Select the Overview option and review the information on this page
4. Select the Access control (IAM) option. Azure uses Role Based Access Control (RBAC) and this is
where you control the RBAC for your Cognitive Services account. You can also check currently
assigned access levels on this page
5. Select the Tags option in the left nav pane. This is where you create tags for categorizing resources.
Azure fundamentals training discusses the importance of tags for billing and other identification
needs for your resources
6. If you wish to change the pricing tier for this service or account, you can select Pricing tier under the
Resource Management group in the left nav pane
7. Under Resource Management, select the Billing by Subscription option to view the costs associated
with each resource you have created, filtered by subscription. Note that this will show all resources
and not just the current Cognitive Services account

8. The Properties option, under Resource Management, displays basic information about the service and
allows you to change things such as the subscription and resource group assignments.
9. You can choose the type of lock to apply to this resource on the Locks page. You can choose to lock
the entire subscription, the resource group, or the resource itself. The types of locks supported are
Read-only and Delete, and they have been covered in Azure Fundamentals training.
10. Finally, if you decide that you want to generate a template to make creation of this resource easier in
the future, or to script it for later uses, choose the Export template option, under Resource Manage-
ment. Templates are covered in other Azure training.

Task 3 - Evaluate Monitoring Functions


1. Choose the Activity Log option in the left nav.
2. Review the filters and options on this page for the ability to monitor activity related to the resource
3. Under the Monitoring category in the left nav pane, select Alerts. Because you have no alerts created
yet, there will be no entries on this page.
4. Select the + New alert rule option, either the button on the page or in the top nav bar for this blade.
5. Because you were on the resource page, the resource should be automatically selected under the
Resource section. If not, use the Select button to locate and select the Cognitive Services account we
just created
6. Click or Select the Add button under the Condition section
7. Review the available conditions that you can add and then select the Total Calls option
8. On the Total Calls configuration page, review the options available.
9. Leave the Threshold as static in the Alert logic
10. Choose Greater than for the operator, Count as the aggregation type, and set a threshold value of
50,000
11. Note the Condition preview updates to reflect the options you chose for the logic
12. Under the Evaluation based on section, set the Aggregation granularity to 15 minutes and the
Frequency of evaluation to 1 minute
13. This indicates that the threshold logic you applied is evaluated over a 15-minute window, refreshed
every minute.
14. Because we have no activity on this service, this alert has no meaningful representation except to
show you how to configure an alert.
15. Click or Select Done
16. Notice that the Condition text has been updated and indicates a monthly cost for this condition. Be
sure to factor these costs into your solution
17. Now let's create an action to take when this condition occurs by selecting the Add option for the
Actions section
18. If you have no action groups created, you will need to create a new one by selecting the Create action
group button
19. Give the action group a name, a short name, select the subscription and choose the resource group
we have been using so far
20. Give the action a name such as, text admin

21. Now you can select an action to take. Click or Select the Action Type drop down
and choose Email/SMS/Push/Voice as the option
22. Select the SMS check box and enter a phone number to receive SMS messages.
23. Leave the remaining options at their default and Click or Select OK
24. Select OK again on the Add Action Group page
25. Back on the Configured Actions panel, select Done to apply the actions and close the pane
26. Provide an Alert rule name in the Alert Details section
27. Select Sev 1 for the Severity
28. Leave Yes selected under enable rule upon creation
29. Click or Select the Create alert rule button
30. Explore the remaining monitoring options such as Metrics, Diagnostic Settings, and Logs to gain a
familiarity with the monitoring options available. Because these are common across resources in
Azure, refer to the Azure training options for more details on the usage.

Testing Cognitive Services using API Testing Console


Testing with the API Testing Console
Many of the cognitive services provide a mechanism for quickly testing your service directly in a browser
window. Each service will have specific needs to use it effectively, such as the type of information you will
submit to the service and how you get that data or information to the service for testing. Each service
that offers the web API testing console, will outline the requirements and provide guidance for how to
use it. A key aspect to keep in mind is that your testing needs to ensure the use of the testing console in
the same geographic location where your service resides. For example, you will not be able to open a
testing console in West US and test a service that was created in West Europe.
The testing console uses POST commands for the testing, and each console lists the parameters that can be
included in the POST request. For example, the Face detection API lists the request URL template as
https://[location].api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes]. The parameters are:
●● returnFaceId (optional) - boolean - Return faceIds of the detected faces or not. The default value is
true.
●● returnFaceLandmarks (optional) - boolean - Return face landmarks of the detected faces or not. The
default value is false.
●● returnFaceAttributes (optional) - string - Analyze and return the one or more specified face attributes
in the comma-separated string like “returnFaceAttributes=age,gender”. Supported face attributes
include age, gender, headPose, smile, facialHair, glasses, emotion, hair, makeup, occlusion, accesso-
ries, blur, exposure and noise. Face attribute analysis has additional computational and time cost.
The landing page for the testing console will, in most cases, also display a sample of the data it
expects and a sample of the data that will be returned to your calling application. As an example, visiting the
West US11 cognitive services testing landing page will give you an idea of the information provided.
Immediately under the description for the service, there is a series of buttons that you can Click or
Select to open the actual testing console web page. Selecting the West US button opens such
a console.

11 https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236

In this particular testing console, the service will detect faces in an image. However, rather than uploading
an image directly from this page or from an application that you are using, you supply a URL in the JSON
body for the request. This URL could represent the location of images in a web page or even in a
source control repository like GitHub.
Note also that you will need the key from your service before this API test will succeed. That assumes
that you have a face detection service available and the keys are available to you. You would paste the
key into the Ocp-Apim-Subscription-Key field. This is apparent because that field is highlighted in red, indicating
a value is required.
After the key is provided and an image URL is entered in the Request body JSON, you can Click or Select the
Send button, which activates the service and performs face detection on the image that was available at
the URL. The response body will be displayed at the bottom of the page when the action takes place.
This could be a successful completion or there may be errors generated. Errors will be displayed with the
standard HTTP error codes such as 401 or 404, but if the call succeeds, you should see a 200 HTTP success
code and a resultant JSON response as shown in the sample. Note the format of the response and the
data that is returned; you see coordinates for any detected faces.
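The same call that the testing console issues can also be made from application code. The C# sketch below follows the URL template shown earlier; the region, the sample image URL, and the key placeholder are illustrative assumptions, and the key must come from a Face resource created in that same region.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FaceDetectSample
{
    // Assumed values: use your own key and the region in which your Face resource was created.
    private const string SubscriptionKey = "<your-face-api-key>";
    private const string Endpoint =
        "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
        + "?returnFaceId=true&returnFaceLandmarks=false";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SubscriptionKey);

            // As in the testing console, the image is referenced by URL in the JSON request body
            // rather than uploaded directly. The URL below is only a placeholder.
            var body = new StringContent(
                "{\"url\": \"https://example.com/sample-photo.jpg\"}",
                Encoding.UTF8, "application/json");

            HttpResponseMessage response = await client.PostAsync(Endpoint, body);

            // A 200 status code indicates success, and the response body is a JSON array with a
            // faceRectangle (coordinates) for each detected face; 401 or 404 indicate a bad key or URL.
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}

If the call succeeds but the returned array is empty, no faces were detected in the referenced image.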

Lab
Lab Environment
The hands-on experiences in this course require a computer with the following software installed:
●● Visual Studio (latest version)
●● .NET Framework 4.0 (or higher) SDK
●● Lab Setup Instructions and content are in GitHub repo for this course12.
The Azure Portal requires a modern browser that's compatible with your operating system:
●● Microsoft Edge (latest version)
●● Internet Explorer 11
●● Safari (latest version, Mac only)
●● Chrome (latest version)
●● Firefox (latest version)
Note that the labs can be completed with a local computer configured with the above software or, if your
Azure subscription permits, you can also make use of the Microsoft Azure Data Science Virtual Machine
(DSVM).

Lab 1: Meeting the Technical Requirements


Lab Objectives
●● Configure the lab environment
●● Download necessary support files and assets
This lab is meant for an Artificial Intelligence (AI) Engineer or an AI Developer on Azure. To ensure you
have time to work through the exercises, there are certain requirements to meet before starting the labs
for this course.
You should ideally have some previous exposure to Visual Studio. We will be using it for everything we
are building in the labs, so you should be familiar with how to use it to create applications. Additionally,
this is not a class where we teach code or development. We assume you have some familiarity with C#
(intermediate level - you can learn here and here), but you do not know how to implement solutions with
Cognitive Services.
All lab instructions can be found on the GitHub repo for this course13

Lab 2: Implement Computer Vision


Lab Objectives
●● Work with the Computer Vision API

12 https://github.com/MicrosoftLearning/AI-100-Design-Implement-Azure-AISol
13 https://github.com/MicrosoftLearning/AI-100-Design-Implement-Azure-AISol

This hands-on lab guides you through creating an intelligent console application from end-to-end using
Cognitive Services (specifically the Computer Vision API). We use the ImageProcessing portable class
library (PCL), discussing its contents and how to use it in your own applications.
Module 1 Creating Bots

Introducing the Bot Service


Introduction to Bots
Perhaps one of the first AI applications that a developer might start out with is a bot. Typically bots are
in the form of chat bots, but they can be other types of agents. A bot is an app that users interact with in
a conversational way using text, graphics (cards), or speech. It may be a simple question and answer
dialog, or a sophisticated bot that allows people to interact with services in an intelligent manner using
pattern matching, state tracking and artificial intelligence techniques well-integrated with existing
business services.
Microsoft provides the Azure Bot Service which is designed to help you manage and deploy your bots,
and also helps you connect to various devices and channels, enabling a broader connectivity of your bot
to end-users. The current Bot Framework SDK, in version 4.0, now supports C#, JS, Python, and Java. As
of this writing the support will be found as such:
●● C# and JS have stable releases in 4.5.0
●● Python is in preview with 4.5.0b2
●● Java is 4.0.0a6 and is preview also
●● Documentation support is currently available for C# and JS
●● Samples are available for C# using .NET Core and Web API
●● Samples are available for JS in Node.js, TypeScript, and es6
●● Samples are available for Python
Note: the above feature set and support is undergoing updates on a regular basis and you should refer to
the GitHub repo for the Bot Framework SDK (https://github.com/microsoft/botframework-sdk) for the
latest release and preview information.

Building a bot should follow a consistent approach from planning to finished product as outlined in this
diagram.

Microsoft helps you to develop intelligent bots using an open-source SDK and set of tools that allow you
to:
●● Create bots that can interact with users in a natural way by integrating Azure Cognitive Services into
your bot (speech, search, language understanding, vision, QnA Maker)
●● Use the open-source SDK and tools for building, testing, and publishing your bot to the Azure
platform
●● Integrate popular channels to reach your customers in more ways (some examples include Skype,
Cortana, Microsoft Office, Microsoft Teams, Facebook Messenger, Kik, and Slack)
●● Create your own branded virtual assistant through the use of the Microsoft solution accelerators
Pricing for bot services will vary depending on usage and channels selected. Be sure to visit the pricing
page1 for up-to-date details on how much a bot may cost.

The Azure Bot Service


The Microsoft Bot Service provides a platform for building and publishing bots. You can use the Bot
Builder SDK or the Azure Bot Service to create your bot and
publish it as a web service. Then you can make your bot available through one or more channels. The bot
connector service handles the message exchange between your bot and the channels through which
users engage with it.
Currently, the bot services and SDK support the creation of bots using C# or JavaScript as the develop-
ment languages. Creating your bot in JavaScript will require working in Node.js. When we look closer at
these two main components, we note that the Bot Builder SDK is what you will use for developing your
bots and the Bot Service is what you will use to connect your bot to the different channels you wish to
support.

Bot Development Lifecycle


The typical bot development cycle follows a pattern of:
●● Plan - think about what you want your bot to accomplish. Why are you creating the bot in the first
place? It's recommended that you follow the guidance found on the design guidelines page2.
●● Build - you can create your bot in the Azure portal or by using Visual Studio and C# to develop with
.NET. You can also use Node.js and JavaScript to build your bot.

1 https://azure.microsoft.com/en-us/pricing/details/bot-service/
2 https://aka.ms/AA3m39t

●● Test - You should always test your bot before you release it. There are currently two recommended
testing mechanisms:

●● Test your bot locally with the emulator. The Bot Framework Emulator is a stand-alone app that not
only provides a chat interface, but also debugging and interrogation tools to help understand how
and why your bot does what it does. The emulator can be run locally alongside your in-development
bot application.
●● Test your bot on the web. Once configured through the Azure portal your bot can also be reached
through a web chat interface. The web chat interface is a great way to grant access to your bot to
testers and other people who do not have direct access to the bot's running code.
●● Publish - Once testing is complete, you can publish your bot to Microsoft Azure or host in on your
own web service or data center
●● Connect - You can connect your bot to many different channels for users to gain access to the bot
such as:

●● Facebook
●● Messenger
●● Skype
●● Slack
●● Microsoft Teams
●● SMS/Text
●● Cortana
●● Evaluate - Once your bot is released, you should monitor the bot for sustained engineering. You may
find users' interactions may be different than your initial plan so you might find it necessary to
improve the bot over time.

Compliance Considerations
The Azure Bot Service can address your compliance concerns as well because the service is now compliant
with the following:
●● ISO 27001:2013 and ISO 27018:2014
●● PCI DSS
●● Microsoft's HIPAA BAA
●● SOC 1 and SOC 2

Understanding How a Bot Works


As mentioned in the introduction to this lesson, bots are apps that users typically interact with in a
conversational manner. If we look under the covers of the bot to better understand the functionality, we
would see that every interaction between a user and the bot generates what is known as an activity.
These activities are exchanged between the bot's channels (the connected apps) and the bot itself.
Recall that the Bot Framework Service is responsible for this interaction.
To visualize this, let's use an example of a simple echo bot and display an interaction diagram for the
conversation between a user and the bot.

In this diagram, you see two activity types, a ConversationUpdate and a Message. The Bot Framework
Service may send a conversation update when a party joins the conversation. For example, on starting a
conversation with the Bot Framework Emulator, you will see two conversation update activities (one for
the user joining the conversation and one for the bot joining). To distinguish these conversation update
activities, check whether the members added property includes a member other than the bot.
The message activity carries conversation information between the parties. In an echo bot example, the
message activities are carrying simple text and the channel will render this text. Alternatively, the message
activity might carry text to be spoken, suggested actions or cards to be displayed.
It's also important to note that channels may include some additional information to be included with an
activity. This may be supporting information or metadata as necessary for the specific channel.
These activities will be sent using HTTP POST requests. The diagram also depicts that the POST requests
are also paired with HTTP status responses. In our case, the responses are all HTTP 200 OK status codes.
You can also experience 404 not found or 401 authorization codes as well depending on the specifics of
the request. For example, if your endpoint URL is not valid, you may experience an HTTP 404 error code.
You will gain more exposure to the service and framework as you begin working with the labs in the
course but one final diagram will help you understand the activity processing stack.

In the example above, the bot replied to the message activity with another message activity containing
the same text message. Processing starts with the HTTP POST request, with the activity information
carried as a JSON payload, arriving at the web server. In C# this will typically be an ASP.NET project; in a
JavaScript Node.js project this is likely to be one of the popular frameworks such as Express or Restify.
The adapter, an integrated component of the SDK, is the core of the SDK runtime. The activity is carried
as JSON in the HTTP POST body. This JSON is deserialized to create the Activity object that is then
handed to the adapter with a call to the process activity method. On receiving the activity, the adapter
creates a turn context and calls the middleware.
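To make this concrete, the following is a minimal sketch of an echo bot written against the Bot Framework SDK v4 for .NET; it assumes the Microsoft.Bot.Builder NuGet package and the usual ASP.NET Core hosting shown in the SDK samples, both of which are omitted here. The adapter hands each incoming activity's turn context to a handler such as this one.

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

// Mirrors the echo example above: a message activity is answered with another
// message activity containing the same text.
public class EchoBot : ActivityHandler
{
    protected override async Task OnMessageActivityAsync(
        ITurnContext<IMessageActivity> turnContext, CancellationToken cancellationToken)
    {
        string userText = turnContext.Activity.Text;
        await turnContext.SendActivityAsync(
            MessageFactory.Text($"You said: {userText}"), cancellationToken);
    }

    protected override async Task OnMembersAddedAsync(
        IList<ChannelAccount> membersAdded,
        ITurnContext<IConversationUpdateActivity> turnContext,
        CancellationToken cancellationToken)
    {
        // A conversation update activity fires when a party joins; greet only members
        // other than the bot itself (the recipient of the incoming activity).
        foreach (ChannelAccount member in membersAdded)
        {
            if (member.Id != turnContext.Activity.Recipient.Id)
            {
                await turnContext.SendActivityAsync(
                    MessageFactory.Text("Hello and welcome!"), cancellationToken);
            }
        }
    }
}

A bot like this can be exercised locally with the Bot Framework Emulator, which is introduced later in this module.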

Create a Basic Chat Bot


Bot Design Principles
One of the key considerations for bot design is something that you might think is a given scenario. Why
do you want to create a bot? What is the bot's purpose? Creating a bot just for the sake of having one
will likely lead to the creation of a bot that doesn't get used effectively, or not in the way you intended. Remember
that your bot will be competing for the user's attention with other apps. You might also find users who
are not familiar with bots, or who prefer not to interact with a bot. Microsoft lists some key factors around bot
design, both factors that do not guarantee success and some that can “influence” the success of
your bot. They are listed here.
Factors that do not guarantee success:
●● How “smart” the bot is: In most cases, it is unlikely that making your bot smarter will guarantee happy
users or adoption of your platform. In reality, many bots have little advanced machine learning or
natural language capabilities. A bot may include those capabilities if they're necessary to solve the
problems that it's designed to address, however you should not assume any correlation between a
bot's intelligence and user adoption of the bot.
●● How much natural language the bot supports: Your bot can be great at conversations. It can have a
vast vocabulary and can even make great jokes. But unless it addresses the problems that your users
need to solve, these capabilities may contribute very little to making your bot successful. In fact, some
bots have no conversational capability at all. And in many cases, that's perfectly fine.
●● Voice: It isn’t always the case that enabling bots for speech will lead to great user experiences. Often,
forcing users to use voice can result in a frustrating user experience. As you design your bot, always
consider whether voice is the appropriate channel for the given problem. Is there going to be a noisy
environment? Will voice convey the information that needs to be shared with the user?
Factors that influence success:
●● Does the bot easily solve the user’s problem with the minimum number of steps?
●● Does the bot solve the user’s problem better/easier/faster than any of the alternative experiences?
●● Does the bot run on the devices and platforms the user cares about?
●● Is the bot discoverable? Do the users naturally know what to do when using it?

Your Bot Development Environment


Unless you are creating only a simple web app bot, you will need to consider how you will deal with the
development phase of bot creation. For example, if your development team is familiar with the .NET Framework
and C#, then you can make use of the Bot Framework SDK for .NET.
The Bot Framework SDK for .NET is a powerful framework for constructing bots that can handle both
free-form interactions and more guided conversations where the user selects from possible values. It is
easy to use and leverages C# to provide a familiar way for .NET developers to write bots.
Using the SDK, you can build bots that take advantage of the following SDK features (a short dialog sketch follows the list):
●● Powerful dialog system with dialogs that are isolated and composable
●● Built-in prompts for simple things such as Yes/No, strings, numbers, and enumerations
●● Built-in dialogs that utilize powerful AI frameworks such as LUIS

●● FormFlow for automatically generating a bot (from a C# class) that guides the user through the
conversation, providing help, navigation, clarification, and confirmation as needed
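As an illustration of the dialog system and built-in prompts, the sketch below is a small component dialog written against the v4 Microsoft.Bot.Builder.Dialogs library. The dialog and prompt names are arbitrary, and wiring the dialog into a bot's state and turn handler is omitted, so treat this as a sketch rather than a complete solution.

using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.Dialogs;

// A composable dialog that chains two built-in prompts in a waterfall.
public class GreetingDialog : ComponentDialog
{
    public GreetingDialog() : base(nameof(GreetingDialog))
    {
        AddDialog(new TextPrompt(nameof(TextPrompt)));
        AddDialog(new ConfirmPrompt(nameof(ConfirmPrompt)));
        AddDialog(new WaterfallDialog(nameof(WaterfallDialog), new WaterfallStep[]
        {
            AskNameStepAsync,
            ConfirmStepAsync
        }));
        InitialDialogId = nameof(WaterfallDialog);
    }

    private static async Task<DialogTurnResult> AskNameStepAsync(
        WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        // The built-in TextPrompt collects a free-form string from the user.
        return await stepContext.PromptAsync(nameof(TextPrompt),
            new PromptOptions { Prompt = MessageFactory.Text("What is your name?") },
            cancellationToken);
    }

    private static async Task<DialogTurnResult> ConfirmStepAsync(
        WaterfallStepContext stepContext, CancellationToken cancellationToken)
    {
        string name = (string)stepContext.Result;

        // The built-in ConfirmPrompt handles a simple Yes/No interaction.
        return await stepContext.PromptAsync(nameof(ConfirmPrompt),
            new PromptOptions { Prompt = MessageFactory.Text($"Did I get that right, {name}?") },
            cancellationToken);
    }
}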
If your development team is familiar with JavaScript and Node.js, then you can opt to use the Bot Frame-
work SDK for Node.js. This is a powerful, easy-to-use framework that provides a familiar way for Node.js
developers to write bots. You can use it to build a wide variety of conversational user interfaces, from
simple prompts to free-form conversations.
The conversational logic for your bot is hosted as a web service. The Bot Framework SDK uses restify, a
popular framework for building web services, to create the bot's web server. The SDK is also compatible
with Express, and the use of other web app frameworks is possible with some adaptation.
Using the SDK, you can take advantage of the following SDK features:
●● Powerful system for building dialogs to encapsulate conversational logic.
●● Built-in prompts for simple things such as Yes/No, strings, numbers, and enumerations, as well as
support for messages containing images and attachments, and rich cards containing buttons.
●● Built-in support for powerful AI frameworks such as LUIS.
●● Built-in recognizers and event handlers that guide the user through the conversation, providing help,
navigation, clarification, and confirmation as needed.

Additional Environments
The Bot Framework SDK is open source and you can find up-to-date information on the GitHub repo3
for the framework. On the repo, you will find additional environment and language support as it be-
comes available.

Currently, there are preview versions available for Python4 and for Java5. Keep in mind that these are
previews, and updates, additions, or deletions are possible.

Walkthrough - Create a Basic Bot in the Azure Portal


In this short exercise, you will create a simple web chat bot from within the Azure portal and test the
interaction with that bot.
1. Log in to the Azure portal.
2. Click or Select Create new resource link found on the upper left-hand corner of the Azure portal.
●● NOTE: The Azure portal may undergo changes from time to time and the location of resources may
not always map to previous instructions. You can also use the search option when creating a new
resource, in order to locate the type of item you wish to create
3. Select AI + Machine Learning
4. Locate the Web App Bot entry and Click or Select on the Web App Bot text, not the Quickstart tutorial
text

3 https://github.com/microsoft/botframework-sdk
4 https://github.com/Microsoft/botbuilder-python#packages
5 https://github.com/Microsoft/botbuilder-java#packages

5. A new blade will open where you will enter information about the Web App Bot.
6. In the Bot Service blade, provide the requested information about your bot using the following as
guidance.
●● Bot name - As with other resources in Azure, this must be unique. Provide a short, descriptive
name for your bot.
●● Subscription - Select the Azure subscription you want to use.
●● Resource Group - Creating a new resource group permits you the ability to delete all resources
you create in that group, by deleting the resource group itself. This is a great way to clean up
resources after completing tutorials. You can also choose to use an existing resource group that
you may have already created
●● Location - Select the geographic location for your resource group. Your location choice can be any
location listed, though it's often best to choose a location closest to your customer. The location
cannot be changed once the bot is created.
●● Pricing tier - Leave the default option here as it allows for a maximum of 1,000 premium messages,
which is adequate for this tutorial and for quick testing.
●● App name - The unique URL name of the bot. For example, if you name your bot myawesomebot,
then your bot's URL will be http://myawesomebot.azurewebsites.net. The name must use alphanu-
meric and underscore characters only. There is a 35 character limit to this field. The App name
cannot be changed once the bot is created.

●● Bot template - Leave the default set as Basic Bot (C#). You won't be writing or modifying any code
so don't worry about the language choice at this point.
●● LUIS App location - because we are choosing the Basic Bot, the template is looking for a location
for the Language Understanding Service, or LUIS. Select an appropriate location here. While it
doesn't have to reconcile with the bot location, it's typically a good idea to match the two, if
possible.
●● App service plan/Location - Create a new app service plan here, or choose an existing one if you
have already created an app service plan. Creating a new one will require you to choose a location
and a plan name, and a default pricing tier will be selected.
●● Application Insights - turn this off as we will not make use of insights for this sample bot
●● Microsoft App ID and password - leave the default for auto creating these, unless you want to
secure this for later usage.
7. Click or Select the Create button. Azure will go through a validation process and then begin a
deployment process
8. Once the bot has been deployed, you will receive a notification which you can use to go to the new
resource. If you miss the notification, you can simply Click or Select on the notification icon in the top
nav bar to open the notification and then Click or Select Go to Resource
9. Click or Select the Test in Web Chat link in the left nav pane, under Bot management.
10. A new chat window opens in the main area of the Azure dashboard.
11. Your bot displays a welcome message and then prompts you to select a choice of options or to enter
some text about how it can help you.
12. Interact with the bot by typing in Book a flight.
13. The bot responds by asking where you would like to travel.
14. You may notice that the bot doesn't have a lot of interactive options which is due to the fact that you
haven't written any real code to make the bot “intelligent”
15. Before we close out this task list, explore some more of the bot aspects by selecting the
Channels option under Bot management.
16. Note that the Web Chat channel is the one that you used initially to interact with the bot. Take note of
the other channels available to be used with the bot.
17. Click or Select the Settings option under Bot management and note the configuration settings that
are applied to your bot. If you are creating external channels, the endpoint is an important piece of
information to know.
18. Feel free to explore the additional options and settings for your bot if you wish, or go back to the
Overview page and Click or Select Delete to remove the bot and prevent future charges against your
Azure account.

Testing with Bot Emulator


Introducing the Bot Emulator
The Bot Framework Emulator is a desktop application that allows bot developers to test and debug their
bots, either locally or remotely. Using the emulator, you can chat with your bot and inspect the messages
that your bot sends and receives. The emulator displays messages as they would appear in a web chat UI
and logs JSON requests and responses as you exchange messages with your bot. Before you deploy your
bot to the cloud, run it locally and test it using the emulator. You can test your bot using the emulator
even if you have not yet created it with Azure Bot Service or configured it to run on any channels.
To ensure that you are using the latest Bot Emulator to keep updated on the emulator and support for it,
you should bookmark the BotFramework-Emulator GitHub Page6.
The Bot Emulator is available for download for:
●● Windows
●● Linux
●● Mac

Connecting the Emulator and a Bot


The Bot emulator can connect to local bots or remote bots. The procedure you follow will vary depend-
ing on where your bot is located. If your bot is local, and it is running, you can make use of a .bot file to
connect to your running bot. The .bot file is a configuration file that contains the connection information
for connecting to the bot. Selecting Open Bot in the emulator will result in a file open dialog
being displayed, where you can locate the .bot file if it exists. If you do not have a configuration file yet,
you can simply choose the create a new bot configuration link on the welcome page in the bot emula-
tor.

Local Bots
To connect to a local Bot for testing with the console, you would perform the following steps:
1. Start the bot on the local computer
2. Open the Bot Framework Emulator
3. Open the .bot configuration file

6 https://github.com/Microsoft/BotFramework-Emulator/blob/master/README.md#download

4. Once your bot configuration file is open, you can interact with the bot through the emulator and
evaluate how the bot behaves
5. When opening a .bot file, you may be required to enter your bot file secret key. For a local bot, you
can find this in the appsettings.json file included with the Visual Studio project for the bot solution.
You can also connect to a local bot using a URL as opposed to the .bot file. In this case, you will need to
grab the port number from the running bot. Simply use the URL http://localhost:<port number>/api/
messages and replace '<port number>' with your bot's port. Typically this is 3978.

Remote Bots
If you are using Windows and you are running the Bot Framework Emulator behind a firewall or other
network boundary and want to connect to a bot that is hosted remotely, you must install and configure
ngrok tunneling software. The Bot Framework Emulator integrates tightly with the ngrok tunneling software
(developed by inconshreveable7), and can launch it automatically when it is needed.
1. Open the Emulator Settings
2. Enter the path to ngrok
3. Select whether or not to bypass ngrok for local addresses
4. Click or Select Save

7 https://inconshreveable.com/

You will have a chance to run through this during the lab exercises.

Lab
Lab 3: Basic Filter Bot

In this lab, we will be setting up an intelligent bot from end-to-end that can respond to a user's chat
window text prompt. We will be building on what we have already learned about building bots within
Azure, but adding in a layer of custom logic to give our bot more bespoke functionality.
This bot will be built in the Microsoft Bot Framework. We will set up the architecture that enables the bot
interface to receive and respond with textual messages, and we will build logic that enables our bot to
respond to inquiries containing specific text.
We will also be testing our bot in the Bot Emulator, and addressing the middleware that enables us to
perform specialized tasks on the message that the bot receives from the user.
We will touch on some concepts pertaining to Azure Search and Microsoft's Language Understanding
Intelligent Service (LUIS), but will not implement them in this lab.

Lab Objectives
●● Create a Basic Bot in the Azure Portal
●● Download the Bot Source Code
●● Modify the Bot Code
●● Test with the Bot Emulator

Lab 4: Log Chat

Now, we wish to log bot chats to enable our customer service team to follow up on inquiries, determine
whether the bot is performing in the expected manner, and analyze customer data. This hands-on lab guides
you through enabling various logging scenarios for your bot solutions.
In the advanced analytics space, there are plenty of uses for storing log conversations. Having a corpus of
chat conversations can allow developers to:
●● Build question and answer engines specific to a domain.
●● Determine if a bot is responding in the expected manner.
●● Perform analysis on specific topics or products to identify trends.

Lab Objectives
●● Understand how to intercept and log message activities between bots and users
●● Log utterances to file storage
Module 2 Enhancing Bots with QnA Maker

Introducing QnA Maker


Introduction
In most cases, the logic behind a Bot is designed by the Bot creator. Microsoft makes the creation of
this logic somewhat easier with various services available in the Bot Framework, but also in related services
available from Microsoft Cognitive Services. QnA Maker is one of those services; you can use it to
implement a service that answers your users' natural language questions by matching them with the best
possible answer from the QnAs in your knowledge base.
In this module you will:
●● Gain an understanding of what QnA Maker is
●● Learn about key features of QnA and how to create a knowledge base
●● Publish a QnA Maker knowledge base
●● Integrate with a Bot

Overview of QnA Maker


Many organizations are seeking ways to allow customers and end-users to get the answers they need
through automated mechanisms. For the most part, these automated mechanisms are leaning towards bots:
communication channels that lend themselves to a “conversational interaction”. Information that
customers may be seeking could exist in Frequently Asked Questions (FAQs) that the organization has
compiled, support documentation, or even product manuals. The QnA Maker service answers your users'
natural language questions by matching them with the best possible answer from the QnAs in your
knowledge base.
You can create a Knowledge Base for QnA Maker through an easy-to-use web portal. By using the portal,
you can create, manage, train and publish your service without any developer experience. Once the
service is published to an endpoint, a client application such as a chat bot can manage the conversation
with a user to get questions and respond with the answers.

Architecture
To understand how QnA Maker provides the services it does, we can look at the architecture behind
it. As mentioned in the introduction to this topic, you use a web-based portal to create your knowledge
base. This portal is part of the QnA Maker management services. The management services include the
portal but also provide facilities for updating, training, and publishing your knowledge base for consumption.
Although the portal is a big part of this, you can also make use of management APIs that can be
accessed through a set of request URLs, comprised of request headers, the subscription key, and a
request body. We will only cover the use of the portal here.
Once you have created the knowledge base and published it, the QnA Maker data and runtime are
responsible for making your service available. It will be accessible from your Microsoft Azure subscription
in the region you chose during the creation process. The content that you have added to your knowledge
base will be stored in Azure Search, while the endpoint for access by client applications, including
your Bot, is provided as a deployed App Service. Microsoft Azure also provides the option of integrating
Application Insights into your service for analytics.

Creating a QnA Maker Service


Before you can integrate QnA Maker with a Bot that is supported by a knowledge base, you have to
create a QnA Service. The service creation, along with the supporting knowledge base, is managed
through the QnA Maker portal1.
When you first visit the portal, you are simply presented with the static web page that describes some
information about going from FAQ to Bot. In order to do anything useful with the service, you need to
have a knowledge base created. You will learn about the knowledge base creation in the next lesson.
Here we focus on the actual service itself.

1 https://www.qnamaker.ai

To get started with the service, you need to select Create a knowledge base. This will require a sign-in
using a Microsoft Account login. Once you sign in, chances are that you do not have any existing knowledge
bases, and the display will indicate this.

If you Click or Select the Create a knowledge base menu again, a new page loads allowing you to create
your QnA Service.

Once you Click or Select Create QnA Service, you are directed to the Azure portal if you are already
signed in. If you are not currently signed in, you will be required to sign into Microsoft
Azure with your existing Azure account. This is required because the service will be hosted on Azure.
The service requires you to provide the following information:
●● Name - like all Azure services, this name must be unique
●● Subscription - the Azure subscription that will be used to host the service
●● Pricing Tier - this is based on number of transactions per second. You will investigate this further in
the walkthrough and labs but note that pricing tiers can change
●● Location - you should select a geographic location that is closest to you or where you want to host
the service
●● Resource Group - like all Azure objects, you should assign this service to an existing resource group,
or create a new one
●● Search Pricing Tier - select the lowest cost search tier that will serve your needs. Searching is required
in order for the service to function correctly. There is currently one free option that includes 3
indexes, 50MB of storage, and no scaling options
●● App Name - an app name is automatically chosen based on the service name you created at the top.
You can change this app name to something different if you wish. It will be the host portion of the
URL that will be generated for the service
●● Website location - determines where the web site will be hosted. The default is to host it in the same
location as the service
●● App Insights - you can opt to include insights for the service to help you diagnose and evaluate the
app and service for various performance considerations. If you choose to enable App Insights, you
will also need to select a location for the app insights data. Once again, the default is in the same
region.
Once you create the service, it will be hosted on Azure and accessible to Bots through the use of the
name you created, and it will require authorization through the use of a key. You can find the keys (two are
provided) on the Keys blade under the Resource Management option.

Walkthrough - Create a QnA Service


This walkthrough requires a Microsoft Azure account. If you don't have an Azure subscription, create a
free account2 before you begin
1. Navigate to the QnA Portal3
2. Sign in with your Azure credentials
3. Assuming you have no existing knowledge bases, the portal will indicate you have no existing knowl-
edge bases yet
4. In the top nav bar, Click or Select Create a knowledge base
5. Click or Select Create a QnA Service
6. You will be logged into the Azure portal using the same credentials you used to sign into the QnA
Maker portal
7. Enter a name for your QnA Service
8. Choose your subscription
9. Select the F0 management pricing tier for this service
10. Create a new Resource Group called LearnRG
11. Select F (3 Indexes) for the Search pricing tier
12. Choose the search location in which your service will reside
13. Verify the App name is what you want to use and that it is unique (green check mark)
14. Select the location for the Website
15. You won't be making use of application insights for this test so disable App insights
16. Click or Select Create
17. After a brief deployment process, your resource should be created for this service
18. Go to the resource and Click or Select the Keys entry, under Resource Management, to display the
Keys blade

2 https://azure.microsoft.com/free/?WT.mc_id=A261C142F
3 https://www.qnamaker.ai/

Implementing a Knowledge Base with QnA Maker
QnA Knowledge Base
A QnA Maker knowledge base consists of a set of question/answer (QnA) pairs and optional metadata
associated with each QnA pair.
There are some key concepts to be aware of to better understand a knowledge base for QnA Maker.
Three base-level concepts are listed here:
●● Questions - these are what you expect a user to ask. Questions will be paired with answers
●● Answers - the response that will be returned when a user asks a question. The answer is paired to a
question in the knowledge base
●● Metadata - these are tags associated with the question and answer pair. Internally they are represented
as key-value pairs and filter the QnA pairs for matching a user query.
A single QnA, represented by a numeric QnA ID, has multiple variants of a question (alternate questions)
that all map to a single answer. Additionally, each such pair can have multiple metadata fields associated
with it.
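To make this structure concrete, here is an illustrative sketch of how a single QnA pair might be represented; the field names follow the general shape of QnA Maker's question-and-answer items, and the values are hypothetical:
{
  "id": 1,
  "answer": "You can check your order status on the My Orders page of the portal.",
  "source": "faq.docx",
  "questions": [
    "How do I check my order status?",
    "Where can I see the status of my order?"
  ],
  "metadata": [
    { "name": "category", "value": "orders" }
  ]
}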

When creating your knowledge base for QnA Maker, you should follow the life cycle recommended by
Microsoft. This life cycle involves an iterative model that encompasses the following steps:
●● Create/Update - manage your knowledge base by creating and updating the content for the QnA
session
●● Publish - deploy the knowledge base to the endpoint
●● Analyze - use analytics to monitor and gather feedback
●● Test - a constant process that verifies matching and ranking while the KB is in use

QnA Maker can extract the question/answer pairs from a variety of data sources. There is fairly broad
support for the different data sources that can be used:
●● URLs - can contain the following types of information:
   ●● FAQs (flat, with sections, or with a topics homepage)
   ●● Support pages (single-page how-to articles, troubleshooting articles, etc.)
●● PDF - FAQs, Product Manual, Brochures, Paper, Flyer, Policy, Support guide, Structured QnA
●● DOC - FAQs, Product Manual, Brochures, Paper, Flyer, Policy, Support guide, Structured QnA
●● Excel - Structured QnA file (including RTF, HTML support)
●● TXT/TSV Files - Structured QnA file
If you do not have pre-existing content to populate the knowledge base, you can add QnAs editorially in
the QnA Maker knowledge base. QnA Maker allows you to manage the content of your knowledge base by
providing an easy-to-use editing experience. Select your knowledge base in the list of existing knowledge
bases in the QnA Maker interface, then edit the items.

Once a knowledge base is created, it is recommended that you make edits to the knowledge base text in
the QnA Maker portal, rather than exporting and reimporting through local files. However, there may be
times that you need to edit a knowledge base locally. When creating your knowledge base for QnA
Maker, be sure to follow the best practices4 described by Microsoft.

Add a Knowledge Base to the Service


Once you have a QnA Service created, you can then begin to build a knowledge base to support the
questions and answers that will be used in the service. As mentioned in topic one, you can use multiple
sources for this KB or you can build one from scratch. It's best to make use of existing FAQs or support
documents if you have them already.
To begin adding the knowledge base to your QnA service, you will revisit the QnA Maker portal again and
refresh the page. Assuming that you are logged into the Azure portal and the QnA Maker portal with the
same Microsoft account, refreshing the QnA Maker portal page will cause an update to take place behind

4 https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/concepts/best-practices

the scenes. When you begin looking at step 2, you will be able to select the Azure Directory ID, Subscription,
and QnA Service name, as shown here.
 
Once you have connected the service to this knowledge base, you can move on to the next step, which
allows you to name the KB and then begin to populate the KB with the necessary questions and answers.
You should ensure that you name your KB with a descriptive name that makes it easy to understand its
purpose when you work with it.

Walkthrough - Add a Knowledge Base to the Service
Connect the QnA Service to the KB
1. Go back to the QnA web portal tab and refresh the page
2. The page will refresh, but the entries in step 2 will not be filled in for you; only a link to the
account information has been populated
3. Select your Azure Directory ID, subscription name, and Azure QnA Service (your newly created service
name)
4. In step 3, give your knowledge base a name. In this case we will use the Microsoft Bot FAQ so you can
name it ‘BotFAQ’
5. Download the Microsoft Bot FAQ zip file5 and extract it to your local computer
6. In Step 4 of the QnA web portal process, Click or Select + Add file, locate your extracted Word
document from the previous step, and add it as a source to populate your KB
7. Click or Select Professional under Chit-chat to add a pre-defined personality to your KB
8. Click or Select Create your KB
9. After a short time, your KB will be created and the Edit page will load

5 https://github.com/MicrosoftDocs/mslearn-work-with-vision-cognitive-services/blob/master/Microsoft%20Bot%20FAQ.zip?raw=true

Publishing a Knowledge Base


Once you have your knowledge base created, you are directed to the knowledge base page where you
can evaluate the questions and associated answers that have been created as a part of your efforts. If
you imported content from an FAQ document or a URL, you should see that the knowledge base has
extracted that content and provides a table-like listing of the questions and answers. You will also notice,
at the top of the question column, the original source that was used for the content.

You should review and make any edits to this page prior to publishing your knowledge base. You can
perform the following editing options:
●● Add questions related to an answer - immediately to the right of each question is an X that you can
use to delete the question, but also a plus (+) symbol that you can use to add additional questions
related to the answer found in the answer column. Recall that you can have the same answer related
to multiple questions in an effort to ensure coverage for anticipated questions that are similar
●● Delete an answer - you can Click or Select the trash can icon next to an answer to delete a question/
answer pair
●● Add QnA Pair - Click or Selecting this button, found above the page navigation arrows, allows you to
add a new question and answer pair to the knowledge base. You would do this if your source document
didn't contain all the QnA pairs you need, or to augment the existing list with some new data.
●● Show Metadata Tags - initially your QnA pairs may not have any metadata associated with them. Click
or Selecting the Show metadata tags icon will display any metadata tags associated with each QnA
pair
Once you are satisfied with your knowledge base, you can then publish it so it becomes available to your
Bot. It is recommended that you test your KB with the web Bot that is available on the same page.
Once you are satisfied with your testing, you can also save and train the knowledge base to help better
refine the answers returned when users ask questions. If you are satisfied and ready to make the KB
available as an accessible endpoint, Click or Selecting the Publish button will start the publishing process.
You will be presented with a notification that the KB will move from a test index to a production index.
It is important to understand that there may be cost changes associated with the service once it is
published that are not present during testing.
Once you Click or Select Publish on this page, your KB will have an endpoint that is accessible to the
service. You will also be presented with another web page that outlines some important information for
accessing your KB and service using HTTP. You will find information for both Postman and Curl usage.
You will visit the information on this page later when integrating the bot with the KB and service so you
should copy the information down or leave the page open until you have completed the integration
component.
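As an illustration of what such a request looks like, the following minimal C# sketch posts a question to the published endpoint. The host name, knowledge base ID, and endpoint key are placeholders; substitute the values shown on your own deployment details page:
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class QnAMakerQuery
{
    static async Task Main()
    {
        // Placeholder values - copy the real ones from your deployment details page.
        var endpoint = "https://<your-app-name>.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer";
        var endpointKey = "<your-endpoint-key>";

        using (var client = new HttpClient())
        {
            // The runtime endpoint expects an EndpointKey authorization header.
            client.DefaultRequestHeaders.Add("Authorization", "EndpointKey " + endpointKey);

            // The request body carries the user's question as JSON.
            var body = new StringContent("{\"question\": \"when will v3 retire?\"}",
                                         Encoding.UTF8, "application/json");

            var response = await client.PostAsync(endpoint, body);

            // The response lists the best-matching answers and their confidence scores.
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}
The response body contains the best-matching answers along with confidence scores, which your client application or bot can then present to the user.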

Walkthrough - Test your KB


1. To get an idea as to how a Bot may respond to questions, Click or Select Test in the upper right corner
2. A test panel opens waiting for you to enter a question
3. Type Hello and press Enter. QnA should respond with Hello
4. Type ‘when will v3 retire?’ and press Enter
5. QnA responds with a message regarding V3's updates and release information
6. Type in ‘what is included in v4?’ and press Enter
7. You may continue to test the interaction by asking questions and evaluating the responses to get an
idea as to how the QnA KB is being polled for answers.

Walkthrough - Publishing the KB


Now that you have a QnA knowledge base created, it's time to publish it so you can access the KB from a
client application.
1. On the QnA Maker Knowledge base page, where you were testing in the previous exercise, locate the
PUBLISH button in top nav bar
2. Click or Select PUBLISH
3. Read the informative message on the next page that indicates your KB will move from test to production.
It also indicates that your KB will be available as an endpoint for use in an app or Bot
4. Click or Select Publish
5. After a short time, a message indicating success will display, assuming no errors were encountered
6. Note the URL information that is displayed. You can choose to test it with Postman or Curl, using the
information provided
7. Note that you can also Click or Select Edit Service to go back to the KB and make edits if you require
8. You can also choose to select the Create bot button to create a standard chat bot that will be automatically
connected to this knowledge base.

Integrating QnA with a Bot


Integrate QnA Maker and a Bot
One of the main reasons you create a QnA Maker service, along with an associated knowledge base (KB),
is to serve as the foundation for a chat bot. The chat bot can respond to users' queries by evaluating
the questions asked against the questions listed in the KB, and then returning an associated answer. You
will need to make the connection between your bot and the service for this to work correctly. One
of the easiest mechanisms to do this is with a Web App Bot in the Azure portal. This allows you to test
the integration and to understand the key aspects that will be required to make the necessary connections
between the bot and the service.
The Web App Bot offers templates that can be used to create bots. The templates undergo changes on a
regular basis. For example, a QnA template existed at one time that allowed for an easy connection to
your QnA Maker knowledge base using the following information:
●● QnAAuthKey - the authorization key that is required to gain access to the endpoint
●● QnAEndpointHostName - the endpoint where your bot will look for the service
●● QnAKnowledgebaseId - the ID that identifies the specific knowledge base that you want to connect to
This information is found on the deployment details page for your published knowledge base.
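As an illustration only (the template and the exact setting names have changed over time), these values might appear in a bot's configuration along the following lines, with placeholder values:
{
  "QnAAuthKey": "<your-endpoint-key>",
  "QnAEndpointHostName": "https://<your-app-name>.azurewebsites.net/qnamaker",
  "QnAKnowledgebaseId": "<your-knowledge-base-id>"
}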

Integrating QnA with a bot in this manner is time-consuming and error-prone. As a result, the recommended
method is to use the QnA Maker portal to make the connection. Your instructor will
walk you through this procedure in the next topic.

Walkthrough - Integrate Your KB with a Web App Bot
Now that you have a QnA Knowledge base created and published, it's time to learn how to integrate it
with a Bot. In this exercise, you will create a simple, web-based chat bot and integrate it with the QnA
Maker Knowledge base you created earlier.

Connect the Bot with Your QnA Service


You will now connect your Bot to the QnA Maker service that you created in the previous exercises
1. Open your browser and the tab containing your published service from the previous walkthrough
2. Locate the Create bot button on the page and select it
3. You will be redirected to the Azure portal and automatically signed in with the account you have been
using

4. The Web App Bot configuration is mostly completed for you already, with information generated
from the QnA Maker configuration
5. For this walkthrough, leave the entries as they appear, with the exception of the Application Insights
option. Turn that off
6. Click or select the Create button
7. Azure will perform some validation and then begin the process of creating and deploying the bot
8. Once the bot is deployed, go to the resource
9. Under the Bot management section, select Test in Web Chat
10. Once the test window opens, interact with the bot to see the responses returned from the knowledge
base

11. Under App Service Settings, select All App Service Settings
12. A new window opens detailing some statistics for the bot
13. Select the Configuration option, under Settings
14. Recall from the previous topic that we discussed the connection information around the QnA Auth
Key, Endpoint, and knowledge base ID
15. Those settings are located on this panel
16. Select the Show values option to see the information used to connect the bot to the QnA Maker
knowledge base
17. If you look at the Postman or Curl sample requests on the service deployment page in the QnA Maker
site, you will note these values in the request headers
You have now successfully created a QnA Maker service, published it on Azure, created a Web chat Bot,
and integrated the Bot with the QnA Maker service to provide a chat-based experience for users to
interact with the Microsoft Bot Framework FAQ.

Lab
Lab 5: Integrate QnA Maker with a Bot

AdventureWorks wants to use its customer support FAQ to drive conversations on a
Customer Support Bot. A document already exists that contains some questions and answers taken from
the FAQ engagements with customers. This will serve as a starting point but may need to be augmented
with additional questions and answers.

Lab Objectives
●● Create a QnA Service
●● Generate a Knowledge Base using a PDF Document (FAQ)
●● Connect and Publish the Knowledge Base
●● Connect the QnA Service to a Bot
Module 3 Learn How to Create Language Understanding with LUIS

Introducing Language Understanding


Introduction
Language understanding is a concept that even humans get wrong from time to time. A good example is
the use of slang terms or localized phrases. If you are in Indonesia at a public place, perhaps a mall or in a
restaurant, and you're searching for the restroom, you can run into this understanding issue if you
learned the language from formal lessons. Indonesia language lessons might teach you to ask where the
restroom is with the phrase, "Di mana kamar kecil?". While this is technically correct, it applies mainly to
seeking the restroom in someone's house because “kamar kecil” literally means small (kecil) room
(kamar). In public, it's more correct to ask, "Di mana WC?", or "Di mana toilette?". However, almost all
Indonesians will know what you are asking. What happens if you attempt to have a computer perform
that translation to understand what you asked? Will it get the correct answer or will it try to direct you to
a “small room” somewhere that isn't actually a restroom?
Likewise, in the English language, there are many scenarios where a human “understands” the meaning of
a phrase or statement, where the subtle similarities aren't apparent to a non-native English speaker. How
many would understand the phrase "swing the door to"? This is the same as “shut the door” or "close the
door", but not everyone would understand these equivalents. For AI to understand language, specific
aspects are critical to aid the algorithm in making comparisons and distinctions. This is where the
Language Understanding Intelligent Service, LUIS, comes into play.
LUIS makes use of three key aspects for understanding language:
1. Intent - An intent represents a task or action the user wants to perform. It is a purpose or goal
expressed in a user's utterance.
2. Utterance - Utterances are input from the user that your app needs to interpret.
3. Entities - The entity represents a word or phrase inside the utterance that you want extracted.
Let's see an example of this. One of the more common scenarios might focus around using LUIS in a Bot
to help a user make a flight reservation. A user may use the following utterance, “Book 2 tickets on a

flight to New York for New Year's Eve celebrations.” If we evaluate this utterance for key aspects, we can
determine the user's intent. They want to book a flight. We can state the intent is "BookFlight".
Entities, as indicated above, are words or phrases, but also simply data, that help to provide specific
context for the utterance and aid in more accurately identifying the intent. Not every utterance will
contain entities. In our utterance for booking the New York flight, we can identify entities such as:
●● New York - we can classify this entity as Location.Destination.
●● New Year's Eve - we might classify this entity as Event.
●● The number 2 - this is mapped to a built-in entity in LUIS known as prebuilt entity, specifically a
prebuilt number.
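Putting these pieces together, a simplified sketch of the kind of JSON prediction a LUIS endpoint might return for this utterance is shown below; the exact fields vary by API version, and the scores and entity types here are illustrative:
{
  "query": "Book 2 tickets on a flight to New York for New Year's Eve celebrations",
  "topScoringIntent": { "intent": "BookFlight", "score": 0.97 },
  "entities": [
    { "entity": "new york", "type": "Location.Destination", "score": 0.92 },
    { "entity": "new year's eve", "type": "Event", "score": 0.88 },
    { "entity": "2", "type": "builtin.number", "resolution": { "value": "2" } }
  ]
}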

Benefits of Using LUIS in an AI application


In the introduction topic, you read about the ability to understand what a person's spoken words mean,
or what their intention is, through the words they use to convey something. For the most part, it is
relatively easy for two people, in a conversation in the same language, to understand each other. Humans
can “understand” because there are typically a set of well understood intentions that exist such as:
●● Booking a flight to a destination
●● Asking for help with product features
●● Looking for directions to a destination
●● etc.
While it is somewhat intuitive and easy for humans, the same cannot be said about a conversation with a
bot hosted on a computer. In the lessons on creating bots, you learned that the “intelligence” for a bot is
provided by the developer and not the Bot Framework. While we can say that still holds true, and
knowing that the Bot Framework is not necessarily providing logic for the intelligence, we can find some
intelligence for our bot in the language understanding services that are part of LUIS.

Natural Language
Designed to identify valuable information in conversations, LUIS interprets user goals (intents) and distills
valuable information from sentences (entities), for a high quality, nuanced language model. LUIS inte-
grates seamlessly with the Azure Bot Service, making it easy to create a sophisticated bot.
What this means for you is that LUIS takes a lot of the “hard work” out of your hands and provides the
services necessary to help you integrate natural language understanding and processing into your bots.
Using the LUIS interface, you can easily create intents, utterances, and entities that represent example
questions or comments a user might make while interacting with your bot, and have LUIS automatically
recognize those, and similar statements, to allow your bot the ability to "understand" what a user is
asking.

Integrated Learning
Powerful developer tools are combined with customizable pre-built apps and entity dictionaries, such as
Calendar, Music, and Devices, so you can build and deploy a solution more quickly. Dictionaries are
mined from the collective knowledge of the web and supply billions of entries, helping your model to
correctly identify valuable information from user conversations.

Ongoing Learning
Active learning is used to continuously improve the quality of the natural language models. Once the
model starts processing input, LUIS begins active learning, allowing you to constantly update and
improve the model. You can also review the series of statements or questions that users have provided to
the bot and that LUIS has interpreted. If you find matches that don't meet your needs, or perhaps require
some tweaking, you can do that easily in the LUIS tools and then train the models again to improve
results.

Best Practices for LUIS


Microsoft recommends following an app authoring process as a best practice when implementing LUIS
for an AI app or bot. There are five discrete steps involved in the process:
1. Build your language model
2. Add some training example utterances (10-15 per intent)
3. Publish the model
4. Test the newly published model
5. Add features as necessary based on testing
Here are recommendations for how to formulate the specific components of your language model:
Intents - Ensure these are distinct. Don't create overlap with utterances such as ‘Book a flight’ and 'Book
a hotel'. You can differentiate what aspect to ‘book’ by extracting flight and hotel as entities.
Build Iteratively - Keep a separate set of utterances that isn't used as example utterances or endpoint
utterances. Keep improving the app for your test set. Adapt the test set to reflect real user utterances. Use
this test set to evaluate each iteration or version of the app.
Use the None Intent - This intent is the fallback intent, indicating everything outside your application.
Add one example utterance to the None intent for every 10 example utterances in the rest of your LUIS
app.
These are just a small set of best practices to get started with LUIS. You can view the ever-changing
design and development patterns that Microsoft releases in the LUIS documentation as the service expands.

Creating a LUIS Service


Introducing the LUIS Service
LUIS is supported by a service that is available on Microsoft Azure. It is a cloud-based API service that
applies custom machine-learning intelligence to a user's conversational, natural language text to predict
overall meaning, and pull out relevant, detailed information. You do not need to understand natural
language processing in any great depth to be able to use the service. You can simply create a new LUIS
service, add some intents, utterances, and define entities, perform some model training, and then publish
the LUIS model as a service. Your applications will use the necessary connection details to access and use
the service. The Microsoft machine learning models take care of the hard work for you.
The service also supports the ability to connect to various application types such as social media, chat
bots, or speech-enabled applications.

Considerations for LUIS Service


While LUIS seems like a perfect choice for applications that require natural language processing, it's
important to ensure that LUIS meets your expectations and that it is the right fit for your needs. Planning,
prior to starting any development work, is a critical step for a successful implementation of LUIS.

Understand your Domain


Before you implement a LUIS service, you need to clearly understand and identify the domain in which
you want to utilize the service. The domain refers to a specific topic that you want LUIS to address. For
example, you may have a travel app that performs booking of tickets, flights, hotels, and rental cars. This
domain is all about travel, and identifying the domain as travel helps you find words or phrases that are
important to that domain. You would not use words or phrases that are related to customer support
topics for a bicycle repair shop or a fitness application.

Plan your Intents


Intents are the “intentions” that a user of your app might have. BookFlight and GetWeather are examples
of good intents while BookInternationalFlight or BookDomesticFlight are considered to be too similar and
may cause problems during training and usage. It may be difficult to think about intents when first
planning the LUIS app but you can use existing activities as a way to help with this task. For example, if
you already have a transcript of common customer questions on support forums, or perhaps you have a
set of frequently asked questions (FAQs) that have been published to help users with your systems or
sites, you can review these as a starting point. What are the common things that are being asked?
These can serve as a good starting point to create Intents. Once you start adding utterances and identifying
entities for these Intents, you will start to formulate a more complete picture.
Of course recall that our lifecycle for a LUIS app is to plan, build, train, test, and publish. Your testing can
help identify if the Intents you created are working as you expected. You can also utilize external users
for testing of the app to see if there are scenarios that you are missing. If this is done during the testing
phase, you can incorporate any newly identified Intents into the model before you retrain and publish.

Create Example Utterances for Each Intent


Once you have determined the intents, create 10 or 15 example utterances for each intent. To begin with,
do not use fewer than this number, and avoid creating too many utterances for each intent. Each utterance should be

different from the previous utterance. A good variety in the utterances includes overall word count, word
choice, verb tense, and punctuation.

Walkthrough - Create a LUIS Service and App


Let’s look at how we can use LUIS to add some natural language capabilities. This walkthrough will show
how to have a LUIS service tied to your Azure account for testing. You can follow these steps to create the
LUIS service.
1. Navigate to the Azure Portal1
2. In the Portal, Click or Select Create a resource and then enter LUIS in the search box and press Enter
3. Select Language Understanding
4. Click or Select Create
5. Enter a unique name for your LUIS service, choose a subscription, and select a location close to you.
6. Select the F0 pricing tier for this service
7. Create a new Resource Group called LearnRG, or use the existing one if it is already created
8. Click or Select Create
9. Once the deployment has succeeded, go to the resource page for the service. You will need one of the
keys in subsequent exercises

Create a LUIS App


1. This step requires you to create the LUIS app in the geographic location that maps to where you
created the service. Select one of the following URLs, based on the location in which you created your
LUIS service in the previous steps, and open it in a new browser window. Even if you didn't create the
service in the optional exercise, it is best to use the location closest to you.
North America2,
Europe3,
Australia4
2. Click or Select Login/Sign Up
3. Log in with your Microsoft Account
4. On the Welcome page, Click or Select Create a LUIS app now
5. Give your LUIS app a name, for example PictureBotLUIS
6. Select the Culture
7. Give your app a description so it is clear what the purpose of the LUIS app is
8. Click or Select Done and the App will open to the Dashboard. You will start to enter Intents, Entities,
and Utterances in the exercises that follow in the next lesson.
9. If the overview page opens over the app, review the items on the page and then close it using the
X in the upper right corner

1 https://portal.azure.com
2 https://www.luis.ai/
3 https://eu.luis.ai/
4 https://au.luis.ai/

Build Intents and Utterances


Introducing Intents
One of the first components that you will add to your LUIS app is an intent. Depending on the app you
are building, you will create many intents that support the needs of your app as it relates to user interactions.
In LUIS, an intent is the term used to describe the task or action that a user wants to perform. It is
closely tied to the concept of an utterance, which you will learn about in a later topic.
For most LUIS apps, you will create a set of intents that will correspond to tasks a user might want to
perform. The most common scenario presented is for a chat bot that helps a customer with some task or
query. Considering this focus, you might create intents related to a travel related chat bot where the user
might be interested in having the chat bot help with specific travel related considerations such as:
●● Greeting - a simple intent that can respond to greetings a user might enter when first connecting to
the bot
●● BookFlight - help with booking a flight
●● CheckWeather - have the chat bot check weather resources at a potential destination
●● None - a good intent to always have present in your LUIS app. This intent is used as a default or
“fall-through” intent when none of the user's utterances are found to match an existing intent. You
can provide some default messaging in a None intent response, such as a list of valid queries.
Microsoft also provides a set of prebuilt domains that you can use as intents in your LUIS app. This is a
quick and easy way to add abilities to your conversational client app without having to design the models
for those abilities. An example of some prebuilt domains is shown in the following image. Note that the
list of available prebuilt domains can change.

When a user interacts with the LUIS app, through your chat bot for example, and enters an utterance
(a phrase) into the chat bot, the LUIS service will return the single top intent that most closely
matches that utterance. However, for testing purposes, you can also choose to return scores for all
intents that LUIS matches to the utterance. You would need to provide the verbose=true flag in the
query string for the API call, but it can help you understand how LUIS is performing in the matching of
utterances to intents and helps you tweak your sample utterances and retrain your model.
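For example, a test query against a published LUIS v2 endpoint might look roughly like the following, where the region, app ID, and key are placeholders:
https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>?subscription-key=<endpoint-key>&verbose=true&q=Book a flight to New York
With verbose=true, the response includes an intents array containing a score for every intent in the app, in addition to the topScoringIntent.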

Other Considerations for Intents


Negative Intents
If you want to determine negative and positive intentions, such as “I want a car” and "I don't want a car",
you can create two intents (one positive, and one negative) and add appropriate utterances for each. Or
you can create a single intent and mark the two different positive and negative terms as an entity.
Balance your Intentions
Do not have one intent with 10 utterances and another intent with 500 utterances. This is not balanced. If
you have this situation, review the intent with 500 utterances to see if many of the utterances can be
reorganized into a pattern.
Request help for apps with a significant number of intents
If reducing the number of intents or dividing your intents into multiple apps doesn't work for you,
contact support. If your Azure subscription includes support services, contact Azure technical support.

It is important to note the limits of the service. For more information on the limits, you can view
the Model Boundaries5 page on the docs site.

Introducing Utterances
In a LUIS app, the concept of an utterance is best explained as being a phrase or question that a user
might utilize to interact with your app. You map utterances to intents to help determine the users'
intentions. Let's consider a scenario where you are creating a LUIS app that will allow users to search for
images. You would start by creating an intent called SearchPics. Following that, you would want to
create a set of utterances that are sample phrases or questions a user might ask in relation to searching
for images. The following list is just an example of what you might enter.
●● Find outdoor pics
●● Are there pictures of a train?
●● Find pictures of food.
●● Search for photos of boys playing
●● Give me pics of business women
●● Show me beach pics
●● I want to find dog photos
●● Search for pictures of men indoors
●● Show me pictures of men wearing glasses
●● I want to see pics of sad boys
●● Show me happy baby pics
Careful review of these utterances can also reveal a common theme. Words like pics, pictures, and
photos are examples of the common theme in the utterances. They all map to the concept of images,
pictures, or photos. You will also see, in the next topic, how these relate to the concept of an entity, which
is another aspect you will work with in a LUIS app.
Each intent that you create in your LUIS app should have a number of utterances that relate to it. You
may not be able to think of every possible combination of phrase or question that a user might enter, but
that's where the training aspect of the model comes into play. Through the entry of utterances, training,
testing, and evaluating the accuracy, you can create better models.

Introducing Entities
The entity represents a word or phrase inside the utterance that you want extracted. An utterance can
include many entities or none at all. An entity represents a class including a collection of similar objects
(places, things, people, events or concepts). Entities describe information relevant to the intent, and
sometimes they are essential for your app to perform its task. For example, a News Search app may
include entities such as “topic”, “source”, “keyword” and “publishing date”, which are key data to search
for news. In a travel booking app, the “location”, “date”, “airline”, "travel class" and “tickets” are key
information for flight booking (relevant to the "Book flight" intent). By comparison, the intent represents
the prediction of the entire utterance.
In the topic on utterances, we mentioned that there were words in the utterances that shared a common
theme. These words, such as pics, images, photos, etc. make great candidates for entities. While intents

5 https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-boundaries#model-boundaries

are required, entities are optional. You do not need to create entities for every concept in your app, but
only for those required for the client application to take action. You label or mark entities for the purpose
of entity extraction only; labeling does not help with intent prediction.
LUIS offers many types of entities. Choose the entity based on how the data should be extracted and how
it should be represented after it is extracted.

Composite Entity
A composite entity is made up of other entities, such as prebuilt entities, simple, regular expression, list,
and hierarchical entities. The separate entities form a whole entity.

Hierarchical entity
A hierarchical entity is a category of contextually learned simple entities called children.

List Entity
Fixed and closed set of related words, including synonyms.

Pattern.any entity
Pattern.any is a variable-length placeholder used only in a pattern's template utterance to mark where
the entity begins and ends.
Example
Given a client application that searches for books based on title, the pattern.any extracts the complete
title. A template utterance using pattern.any for this book search is Was {BookTitle} written by an Ameri-
can this year[?].

Prebuilt entity
Prebuilt entities are built-in types that represent common concepts such as email, URL, and phone
number. Prebuilt entity names are reserved. All prebuilt entities that are added to the application are
returned in the endpoint prediction query if they are found in the utterance.

Regular expression entity


A regular expression is best for raw utterance text. It ignores case and ignores cultural variants. Regular
expression matching is applied after spell-check alterations at the character level, not the token level. If
the regular expression is too complex, such as using many brackets, you're not able to add the expression
to the model. This entity type uses part, but not all, of the .NET Regex library.

Simple entity
A simple entity is a generic entity that describes a single concept and is learned from the machine-learned
context. Because simple entities are generally names such as company names, product
names, or other categories of names, add a phrase list when using a simple entity to boost the signal of
the names used.

Walkthrough - Create Intents, Utterances, and Entities
We want our bot to understand how to:
●● Search/find pictures
●● Share pictures on social media
●● Order prints of pictures
●● Greet the user (although this can also be done other ways, as we'll see later)
Let’s create intents for the user requesting each of these.
NOTE
There is one intent already present called “None”. Random utterances that don’t map to any of your
intents may be mapped to this intent.
1. Click or Select Create new intent.
2. Name the first intent “Greeting” and Click or Select Done.
Because our scenario for this application is to integrate with a Bot, provide examples of things that
users might say when greeting the Bot initially
3. Type a greeting, such as “hello” and press Enter
4. Repeat the previous step to create values for each of the following: “hi”, "hola", “yo”, "hey", “greetings”
NOTE
You should always provide at least five examples.
5. Your utterances should look similar to this

Create entities
Next, let's create the entities we need. In this case, we'll create an entity that can capture a specific ask by
a user. For example, when the user requests to search the pictures, they may specify what they are looking
for.
1. Click or Select on Entities in the left-hand column and then Click or Select Create new entity.
2. Give it an entity name “facet” and entity type Simple. Then Click or Select Done.

3. Create a new Intent called “SearchPics”. Use the same steps as above.

4. Add the following values as utterances for the SearchPics intent:


●● Find outdoor pics
●● Are there pictures of a train?
●● Find pictures of food.
●● Search for photos of boys playing
●● Give me pics of business women
●● Show me beach pics
●● I want to find dog photos
●● Search for pictures of men indoors
●● Show me pictures of men wearing glasses
●● I want to see pics of sad boys
●● Show me happy baby pics

Selecting the facet entity


Once we have some utterances, we have to teach LUIS how to pick out the search topic as the “facet”
entity. Whatever the "facet" entity picks up is what will be searched.
1. Hover and Click or Select the key word, or Click or Select consecutive words to select a group of
words, and then select the “facet” entity.
TIP
Using multiple words, such as business women, is a bit tricky.
Click or Select the first word, move the cursor to the second word and Click or Select again, then move
the cursor into the facet selection pop-up, without going out of the borders, or you will lose the
selection.
2. Your progress should look similar to this.

3. Add two more Intents with related utterances according to this list:
●● SharePic - “Share this pic”, "Can you tweet that?", “post to Twitter”
●● OrderPic - “Print this picture”, "I would like to order prints", "Can I get an 8x10 of that one?",
“Order wallets”

4. To finish out the Intents and entities exercise, add some utterances to the existing None intent that
don't match the context of this LUIS app. Examples are:
●● “I'm hungry for pizza”
●● “Search videos”
●● “Show me how to drive a car”

Lab
Lab 6: Implement the LUIS Model

In this lab you will build the logic into a LUIS Service that helps your Bot, which will be integrated later, to
search for images in the system.

Lab Objectives
●● Create a LUIS Service
●● Add Intelligence to your LUIS Service
●● Train and Test the LUIS Service
Module 4 Enhance Your Bot with LUIS

Overview of Language Understanding in AI Applications
Introduction
You have heard it mentioned in this course that the developer of a bot is responsible for creating the
logic that makes the bot intelligent. While there are no plug-and-play modules for intelligence in a bot,
you can make use of LUIS apps and the LUIS service to help your bot understand the users' intentions.
You have already seen an example of creating a LUIS app, adding intents, utterances, and entities.
You have seen how the information you provide and use to train the LUIS app model helps the service to
derive meaning from what users enter.
Connecting your bot to a LUIS application can provide the bot with the ability to understand language.
In this module you will learn how a bot can make use of a LUIS service to gain an understanding of the
intention of your users when they interact with the bot.

Accessing your LUIS Apps


You can have many LUIS apps created and hosted in the luis.ai service space. You connect your
applications to specific LUIS apps to gain the language understanding that you have built into the app.
For example, you might create a LUIS app to support a travel application, another LUIS
app to provide language understanding for a customer support application, and so on. However,
once you have created the LUIS app, assigned the intents, utterances, and entities, and trained the model,
how do you gain access to the functionality?
First off, you will need to publish the app so that it is accessible to applications as an endpoint. Once
you are happy with the training and testing, you simply Click or Select the Publish button in the LUIS
portal and decide where to publish the app. You can currently choose between Production and Staging.

Once you select your environment and Click or Select Publish, you should see a green header appear
indicating that publishing is complete. There is also a hyperlink that, when Click or Selected, will take you
to the list of endpoints for the LUIS app. This is where you get the necessary endpoint keys and URL for
accessing that particular LUIS app.

The endpoint is the URL in the lower right that you will need to provide to your application in order to
locate the application service on the Internet. You will also need to have the authoring key to be able to
authorize access to the service endpoint and use the application. If you want to make this LUIS app
public so that anyone with a valid key can query the service, you simply open the Application Information
page and slide the Not Public button so that it indicates Public.

Integrating LUIS with a Bot


Integrating a bot with LUIS can actually be accomplished in either direction. That is to say, you can
connect your bot to a LUIS app, or you can connect LUIS to a bot. The procedure will depend on how
you want to make the connection.
If you have an existing bot that is hosted on Azure, you can connect it to a LUIS app by using three fields
in the bot application settings. You need to configure the LuisAPIKey, LuisAppId, and LuisAPIHostName
settings. In order to do this, you must first open your LUIS app to gain access to these key pieces of
information. The first screenshot shows the location of the LuisAppId, which is the Application ID found
on the Application Information page.

Selecting the Keys and Endpoints option displays the authoring key and the endpoint URL that will serve
as the host name. Note that the key you see on this screen is indicated as an Authoring Key. The
Authoring Key permits you to do testing in the free tier but you will need to change this to your endpoint
key when your traffic exceeds the free tier quota.

You can paste these values into the Application Settings for the bot service by opening the Application
Settings blade for the bot service and locating the three entries in the App settings section.
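To give a sense of how a bot might consume these settings, here is a minimal C# sketch using the Bot Framework SDK v4 LUIS recognizer. It assumes the Microsoft.Bot.Builder.AI.Luis package and treats the three setting names above as keys in the bot's configuration; the exact wiring in a generated project may differ:
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Bot.Builder;
using Microsoft.Bot.Builder.AI.Luis;
using Microsoft.Extensions.Configuration;

public class LuisHelper
{
    // Reads the three app settings and asks LUIS for the top-scoring intent of the current turn.
    public static async Task<string> GetTopIntentAsync(
        IConfiguration configuration, ITurnContext turnContext, CancellationToken cancellationToken)
    {
        var luisApplication = new LuisApplication(
            configuration["LuisAppId"],
            configuration["LuisAPIKey"],
            "https://" + configuration["LuisAPIHostName"]);

        var recognizer = new LuisRecognizer(luisApplication);

        // Send the user's utterance to the LUIS endpoint and read the prediction.
        var result = await recognizer.RecognizeAsync(turnContext, cancellationToken);
        return result.GetTopScoringIntent().intent;
    }
}
A bot's dialog or message handler could call this helper on each turn and branch its conversation logic based on the intent name that comes back.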

If you have created a bot using the Azure Web App Bot template and downloaded the code for use
in Visual Studio, you can connect LUIS and the bot automatically, because the generated source code
already contains the necessary information to work with the LUIS service that was created as a
part of the bot creation steps. The only aspect you need to configure in the downloaded source code is
in the appsettings.json file. You put the botFileSecret and botFilePath values in this file.
{
  "botFileSecret": "",
  "botFilePath": ""
}

You get these values from the Application Setting blade for your bot service.

Integrating LUIS and Bots


Walkthrough - Integrating a Bot and LUIS
In this walkthrough, the instructor will guide you as you follow the steps to create a bot in the Azure
portal, download the source code, modify the source, and provide the integration piece with a LUIS
service. When you are finished, you will test the bot and LUIS integration using the Bot Framework
Emulator.
Note: This walkthrough requires the Bot Emulator to be installed

Create web app bot


1. In the Azure portal, select Create new resource.
2. In the search box, search for and select Web App Bot. Select Create.
3. In Bot Service, provide the required information:

●● Bot name (resource name): luis-csharp-bot- + <your-name>, for example, luis-csharp-bot-johnsmith
●● Subscription (subscription where the bot is created): your primary subscription
●● Resource group (logical group of Azure resources): create a new group to store all resources used with this bot; name the group luis-csharp-bot-resource-group
●● Location (Azure region - this doesn't have to be the same as the LUIS authoring or publishing region): westus
●● Pricing tier (used for service request limits and billing): F0 is the free tier
●● App name (used as the subdomain when your bot is deployed to the cloud, for example humanresourcesbot.azurewebsites.net): luis-csharp-bot- + <your-name>, for example, luis-csharp-bot-johnsmith
●● Bot template (Bot framework settings): see the settings in the next step
●● LUIS App location (must be the same as the LUIS resource region): westus
4. In the Bot template settings, select the following, then choose the Select button under these settings:

●● SDK version (Bot framework version): SDK v4
●● SDK language (programming language of the bot): C#
●● Echo/Basic bot (type of bot): Basic bot
5. Select Create. This creates and deploys the bot service to Azure. Part of this process creates a LUIS app named luis-csharp-bot-XXXX. This name is based on the bot and app names in the previous section.

6. Leave this browser tab open. For any steps with the LUIS portal, open a new browser tab. Continue to
the next section when the new bot service is deployed.

Add prebuilt domain to model


Part of the bot service deployment creates a new LUIS app with intents and example utterances. The bot
provides intent mapping to the new LUIS app for the following intents:

●● Cancel (example utterance: stop)
●● Greeting (example utterance: hello)
●● Help (example utterance: help)
●● None (anything outside the domain of the app)

Add the prebuilt HomeAutomation domain to the model to handle utterances like: Turn off the living room lights
1. Go to LUIS portal and sign in.
2. On the My Apps page, select the Created date column to sort by the date the app was created. The
Azure Bot service created a new app in the previous section. Its name is luis-csharp-bot- + <your-
name> + 4 random characters.
3. Open the app and select the Build section in the top navigation.
4. From the left navigation, select Prebuilt Domains.
5. Select the HomeAutomation domain by selecting Add domain on its card.
6. Select Train in the top right menu.
7. Select Publish in the top right menu.
The app created by the Azure Bot service now has new intents:

●● HomeAutomation.TurnOn (example utterance: turn the fan to high)
●● HomeAutomation.TurnOff (example utterance: turn off ac please)

Download the web app bot


In order to develop the web app bot code, download the code and use it on your local computer.
1. In the Azure portal, still on the web app bot resource, select the Application Settings and copy the
values of botFilePath and botFileSecret. You need to add these to an environment file later.
2. In the Azure portal, select Build from the Bot management section.
3. Select Download Bot source code.
4. When the source code is zipped, a message will provide a link to download the code. Select the link.
5. Save the zip file to your local computer and extract the files. Open the project.
6. Open the bot.cs file and look for _services.LuisServices. This is where the user utterance entered into
the bot is sent to LUIS.

/// <summary>
/// Run every turn of the conversation. Handles orchestration of messages.
/// </summary>
/// <param name="turnContext">Bot Turn Context.</param>
/// <param name="cancellationToken">Task CancellationToken.</param>
/// <returns>A <see cref="Task"/> representing the asynchronous operation.</returns>
public async Task OnTurnAsync(ITurnContext turnContext, CancellationToken cancellationToken)
{
    var activity = turnContext.Activity;

    if (activity.Type == ActivityTypes.Message)
    {
        // Perform a call to LUIS to retrieve results for the current activity message.
        var luisResults = await _services.LuisServices[LuisConfiguration].RecognizeAsync(turnContext, cancellationToken).ConfigureAwait(false);

        // If any entities were updated, treat as interruption.
        // For example, "no my name is tony" will manifest as an update of the name to be "tony".
        var topScoringIntent = luisResults?.GetTopScoringIntent();

        var topIntent = topScoringIntent.Value.intent;

        switch (topIntent)
        {
            case GreetingIntent:
                await turnContext.SendActivityAsync("Hello.");
                break;
            case HelpIntent:
                await turnContext.SendActivityAsync("Let me try to provide some help.");
                await turnContext.SendActivityAsync("I understand greetings, being asked for help, or being asked to cancel what I am doing.");
                break;
            case CancelIntent:
                await turnContext.SendActivityAsync("I have nothing to cancel.");
                break;
            case NoneIntent:
            default:
                // Help or no intent identified, either way, let's provide some help to the user.
                await turnContext.SendActivityAsync("I didn't understand what you just said to me.");
                break;
        }
    }
    else if (activity.Type == ActivityTypes.ConversationUpdate)
    {
        if (activity.MembersAdded.Any())
        {
            // Iterate over all new members added to the conversation.
            foreach (var member in activity.MembersAdded)
            {
                // Greet anyone that was not the target (recipient) of this message.
                // To learn more about Adaptive Cards, see https://aka.ms/msbot-adaptivecards for more details.
                if (member.Id != activity.Recipient.Id)
                {
                    var welcomeCard = CreateAdaptiveCardAttachment();
                    var response = CreateResponse(activity, welcomeCard);
                    await turnContext.SendActivityAsync(response).ConfigureAwait(false);
                }
            }
        }
    }
}

The bot sends the user's utterance to LUIS and gets the results. The top intent determines the conversation flow.

Start the bot


Before changing any code or settings, verify the bot works.
In Visual Studio, open the solution file.
1. Locate and open the appsettings.json file to modify the bot variables the bot code looks for:

{
"botFileSecret": "",
"botFilePath": ""
}

2. Set the values of the variables to the values you copied from the Azure bot service's Application
Settings in Step 1 of the Download the web app bot section.
Note: Your JSON file may have placeholder text between the double quotes; simply replace that placeholder text with the values.
3. In Visual Studio, start the bot. A browser window opens with the web app bot's web site at http://localhost:3978/.

Start the emulator


1. Open the Bot Emulator.
2. In the bot emulator, select the *.bot file in the root of the project. This .bot file includes the bot's URL
endpoint for messages:
3. Enter the bot secret you copied from the Azure bot service's Application Settings in Step 1 of the
Download the web app bot section. This allows the emulator to access any encrypted fields in the .bot
file.
4. In the bot emulator, enter Hello and get the proper response for the basic bot.

Modify bot code


In the BasicBot.cs file, add code to handle the new intents.
1. At the top of the file, find the Supported LUIS Intents section, and add constants for the HomeAutomation intents:

// Supported LUIS Intents


public const string GreetingIntent = "Greeting";
public const string CancelIntent = "Cancel";
public const string HelpIntent = "Help";
public const string NoneIntent = "None";
public const string TurnOnIntent = "HomeAutomation_TurnOn"; // new intent
public const string TurnOffIntent = "HomeAutomation_TurnOff"; // new intent

Notice that the period, ., between the domain and the intent from the LUIS portal's app is replaced with
an underscore, _.
2. Find the OnTurnAsync method that receives the LUIS prediction of the utterance. Add code in the
switch statement to return the LUIS response for the two HomeAutomation intents.

case TurnOnIntent:
    await turnContext.SendActivityAsync("TurnOn intent found, JSON response: " + luisResults?.Entities.ToString());
    break;
case TurnOffIntent:
    await turnContext.SendActivityAsync("TurnOff intent found, JSON response: " + luisResults?.Entities.ToString());
    break;

The bot doesn't have the exact same response as a LUIS REST API request so it is important to learn the
differences by looking at the response JSON. The text and intents properties are the same but the entities
property values have been modified.

{
"$instance": {
"HomeAutomation_Device": [
{
"startIndex": 23,
"endIndex": 29,
"score": 0.9776345,
"text": "lights",
"type": "HomeAutomation.Device"
}
],
"HomeAutomation_Room": [
{
"startIndex": 12,
"endIndex": 22,
"score": 0.9079433,
"text": "livingroom",
"type": "HomeAutomation.Room"
}
]
},
"HomeAutomation_Device": [
"lights"
],
"HomeAutomation_Room": [
"livingroom"
]
}
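If you want the bot to act on those values rather than echo the raw JSON, a small fragment like the following could be placed inside OnTurnAsync for the TurnOn case. It assumes a using System.Linq; directive and reads the simplified entity arrays shown above; the reply wording is just an example, not part of the template.

// Illustrative fragment: read the simplified entity arrays out of RecognizerResult.Entities.
var entities = luisResults?.Entities;

var device = entities?["HomeAutomation_Device"]?.FirstOrDefault()?.ToString(); // e.g. "lights"
var room = entities?["HomeAutomation_Room"]?.FirstOrDefault()?.ToString();     // e.g. "livingroom"

if (device != null)
{
    var location = room != null ? $" in the {room}" : string.Empty;
    await turnContext.SendActivityAsync($"You asked to control the {device}{location}.");
}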

View results in bot


1. In the bot emulator, enter the utterance: Turn on the livingroom lights to 50%
2. The bot responds with a JSON output similar to this:

TurnOn intent found, JSON response: {"$instance":{"HomeAutomation_Device":[{"startIndex":23,"endIndex":29,"score":0.9776345,"text":"lights","type":"HomeAutomation.Device"}],"HomeAutomation_Room":[{"startIndex":12,"endIndex":22,"score":0.9079433,"text":"livingroom","type":"HomeAutomation.Room"}]},"HomeAutomation_Device":["lights"],"HomeAutomation_Room":["livingroom"]}

Lab
Lab 7: Integrate LUIS into Bot Dialogs

Our bot is now capable of taking in a user's input and responding based on it. Unfortunately, our bot's communication skills are brittle. One typo, or a rephrasing of words, and the bot will not understand. This can cause frustration for the user. We can greatly increase the bot's conversational abilities by enabling it to understand natural language with the LUIS model we built.

Lab Objectives
●● Add LUIS to your C# Code
●● Add LUIS to the PictureBot Dialog
●● Test the LUIS Service
Module 5 Integrate Cognitive Services with
Bots and Agents

Understanding Cognitive Services for Bot Interactions
Introduction
Creating a great user experience with bots or agents goes a long way to increasing user satisfaction and, in a customer support scenario, can help with developing a good rapport with your customers. As the course has taught, in the context of bots, the underlying intelligence, or AI, for your bot is a primary responsibility of the bot creator.
The first module covered some of the Microsoft Cognitive Services so you should have an idea of the
features supported by those services. You should also recognize the benefit that those features can have
when creating bots and agents that users will interact with. In this module, you will take a look at
implementing some common scenarios that you can bake into your bots such as sentiment analysis,
language detection, and image processing. All of these aspects help to make a bot more intuitive to the
users when they interact with it.

Bots and Text


Having a user type text into a bot is one way that a bot interacts with users. Because the interaction is text-based, you can make use of the many Azure Cognitive Services that can act on textual information. What are some of the types of uses that you might consider implementing?
●● Sentiment Analysis - Think about how many times you have received a text message from a contact, or an email, and you couldn't determine the "tone" of the conversation to identify the sentiment behind the text. Without emoticons, it's not always easy to determine if the person is happy, angry, sad, etc. A bot will not be able to ascertain a conversation's "tone" either, or at least not without some help. If you send the text that a user enters into a chat bot to the sentiment analysis service in the Text Analytics APIs, you can help your bot determine sentiment in a conversation with the user.

●● Language Detection - If your business is a global one, and you are doing business in regions where languages other than English are spoken, your bot may not work at all if you only code it to work with English or any single language. If a user types an initial greeting into your bot, it would be a good idea to check which language the user is utilizing. You can take a different approach depending on how you intend to support your users. For example, you could simply make use of the language detection services in Azure Cognitive Services to detect any language not supported by your bot and to respond to the user in that language, simply indicating that the bot only supports a single language. Alternatively, you could elect to provide language translation in a more complex bot scenario that uses language detection and then translation services to communicate with the user in their native language.
●● Language Understanding - integrate your bot with LUIS, as you have learned in previous lessons, to
help your bot understand a user's intent through the natural language processing features available in
LUIS.

Bots and Speech


Bots don't always have to be a text entry bot. You can also support speech in a bot and have the user
interact through spoken words. In this case, you may want to consider the speech services available to
perform speech to text conversion. Doing this conversion will allow your bot to take the converted text
and pass it off to any of the language-based Cognitive Services for processing. You may perform the
sentiment analysis, as indicated in the previous topic, or perhaps you need to convert the spoken words
to text so you can perform language conversion for other purposes.
Utilizing speech services with a bot is not covered in this course, but it's good to know that the option is there. One scenario you might consider is to use the Azure speech service and your application to integrate with a bot in the following manner (a rough code sketch follows this list):
●● accept the spoken audio
●● convert that audio to text
●● send the text to the bot for processing
●● get the returned text from the bot
●● convert it to speech using the speech services
●● have the app speak it back to the user
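A rough sketch of that loop is shown below. It assumes the Microsoft.CognitiveServices.Speech NuGet package, and the sendToBotAsync delegate stands in for whatever mechanism you use to relay text to your bot (for example, the Direct Line channel); neither is part of this course's labs.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class SpeechBotRelay
{
    // Runs one turn of the speech-in / speech-out loop described above.
    static async Task RelayOnceAsync(Func<string, Task<string>> sendToBotAsync)
    {
        var config = SpeechConfig.FromSubscription("<speech-key>", "<region>");

        // Accept spoken audio from the default microphone and convert it to text.
        using (var recognizer = new SpeechRecognizer(config))
        {
            var speechResult = await recognizer.RecognizeOnceAsync();

            // Send the recognized text to the bot and get its reply text back.
            var botReply = await sendToBotAsync(speechResult.Text);

            // Convert the reply to speech and have the app speak it back to the user.
            using (var synthesizer = new SpeechSynthesizer(config))
            {
                await synthesizer.SpeakTextAsync(botReply);
            }
        }
    }
}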

Bots and Images


You may have experienced this scenario already, or perhaps viewed it on a television advertisement. Your
car is involved in an accident and has sustained some damage. Normal scenarios follow the procedure of
informing an insurance provider, taking the vehicle to a repair shop for estimates, getting approval from
the insurance provider, getting your car returned. Not only is the process time consuming, but it delays
repairs and your ability to have your car back for your use.
To help with this process, an insurance provider might decide to implement a “claims bot” that integrates
with the Azure Computer Vision services. The user would interact with the bot by snapping a picture of
the scene and the damage to the vehicle. The bot would send the image to the computer vision service
which could utilize image comparisons or even custom vision models to determine a cost estimate and
perhaps a recommended repair center to have the vehicle taken to. It might even be able to determine
if the damage is significant enough that they should dispatch a tow truck to recover the vehicle and have
it delivered to the appropriate repair shop.

Another scenario might present itself where a user is interacting with a bot in a conversational manner and is using the camera on their computer. The bot can interact with the camera and take snapshots of the user, sending the images to the emotion detection APIs to determine if the user is getting upset, or experiencing some other emotion, allowing the bot to respond in a more appropriate manner. Perhaps the user is getting agitated with the bot and you should pass them off to a human customer-service representative.
You can likely think of many more scenarios where the use of images can greatly improve the experience
with bots. The integration of the computer vision APIs can help you achieve this.

Sentiment for Bots with Text Analytics


Sentiment Analysis
Find out what customers think of your brand or topic by analyzing raw text for clues about positive or negative sentiment. This API returns a sentiment score between 0 and 1 for each document, where 1 is the most positive. The analysis models are pretrained using an extensive body of text and natural language technologies from Microsoft. For selected languages, the API can analyze and score any raw text that you provide, directly returning results to the calling application.
What does this mean for your bot? As we consider user interactions with your bot, it might be a good idea to get a feel for the "mood" of your users to help determine if they are happy or not, based on their interactions. If a sentiment analysis identifies that a user is not happy, you might want your bot to redirect the user to a real person for the interaction. This can increase customer satisfaction.
Sentiment analysis produces a higher quality result when you give it smaller chunks of text to work on.
This means that it can actually work quite well with a bot scenario where the user is entering small bits of
text rather than complete documents. To get the best results consider structuring the inputs accordingly.
What this means is that you submit the “documents” to the sentiment analysis engine in the proper
format, which is a JSON document that contains an id, the text, and a language element. An example of a
well-formed JSON submission would look like this:

{
"documents": [
{
"language": "en",
"id": "1",
"text": "We love this trail and make the trip every year.
The views are breathtaking and well worth the hike!"
},
{
"language": "en",
"id": "2",
"text": "Poorly marked trails! I thought we were goners.
Worst hike ever."
},
{
"language": "en",
"id": "3",
"text": "Everyone in my family liked the trail but thought
it was too challenging for the less athletic among us. Not necessarily
recommended for small children."
},
{
"language": "en",
"id": "4",
"text": "It was foggy so we missed the spectacular views,
but the trail was ok. Worth checking out if you are in the area."
},
{
"language": "en",
"id": "5",
"text": "This is my favorite trail. It has beautiful views
and many places to stop and rest"
}
]
}

Note that the language element is indicated as en for US English. If you expect to support multiple languages in your bot, and you want to use sentiment analysis in the supported languages, you should perform language detection first, and use the result to ensure you provide the proper language code. Language detection is covered in the next lesson.

Walkthrough - Test the Sentiment Analysis Service
Before you can test sentiment analysis, you will need to ensure that you have a Text Analytics service
created on Azure and the keys available for use. In this walkthrough, we will test using the API testing
console.
1. Login in to the Azure Portal
2. Click or Select + Create a Resource
3. Either type Text Analytics into the Search Marketplace entry or select AI + Machine Learning and
choose Text Analytics
4. Complete the details for the service and Click or Select Create

5. Once the service is created, you will require one of the keys that are available on the Keys blade for resource management. This is used to authenticate a request to the service, just like many of the other Azure Cognitive Services. Copy one of the keys to the clipboard.
6. Head over to the API testing console for Text Analytics1.
7. On that page, select the region where you created the service. In our example, we created the service
in the West US 2 location so we would need to also select that region for the testing console.

1 https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics.V2.0/operations/56f30ceeeda5650db055a3c9

8. Once the API Testing Console opens, ensure that you have one of your keys copied and then paste it
into the Ocp-Apim-Subscription-Key location on this page. This will be used to authenticate the
request to the service. Also, remember that the key is appropriate only for the region specified in
your service. If you create the service in a region that is different from that used for the API Testing
Console, authentication will fail.

9. There is already some sample text in the request body, so you can use that for your initial testing to see what the responses look like. Simply Click or Select Send on the page to execute the test, and you should receive a 200 OK response along with a JSON response at the bottom of the page. As an example, the request body consisted of the following JSON that was sent.

{
"documents": [
{
"language": "en",
"id": "1",
"text": "Hello world. This is some input text that I love."
},
{
"language": "fr",
"id": "2",
"text": "Bonjour tout le monde"
},
{
"language": "es",
"id": "3",
"text": "La carretera estaba atascada. Había mucho tráfico el día de
ayer."
}
]
}

10. Note that there are three samples in this body: one in English, one in French, and one in Spanish. This demonstrates the ability of the sentiment analysis to work across these languages. Remember that the service cannot detect the language itself; you must provide the language code in the JSON body. Once you Click or Select Send, the result is returned and looks like this.
Transfer-Encoding: chunked
x-ms-transaction-count: 3
CSP-Billing-Usage: CognitiveServices.TextAnalytics.BatchScoring|3
x-aml-ta-request-id: 576a9ece-5721-4ed2-8baf-29dd5477320b
X-Content-Type-Options: nosniff
apim-request-id: c46ca189-3047-42fc-bc37-242026a5d170
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Date: Wed, 06 Mar 2019 17:11:20 GMT
Content-Type: application/json; charset=utf-8

{
"documents": [{
"id": "1",
"score": 0.98690706491470337
}, {
"id": "2",
"score": 0.84012651443481445
}, {
"id": "3",
"score": 0.334433376789093
}],
"errors": []
}

The important aspects are found in the JSON-formatted area of the response. You can see that the first sentence returned a value of 0.98, which indicates a positive sentiment. The second, 0.84, is still on the positive side as it is closer to the value 1, but the last sentence is only 0.33, which indicates a less positive sentiment.
Also note that the response uses the same IDs that you provided in the request body, to help you align the responses. You will make use of the ID values in your code to determine the sentiment-to-sentence mapping, in the event you submitted multiple sentences as we did in this test.
If you want to test the service with an application, you can find some code samples on the API landing
page if you scroll to the bottom. An example of C# code that you could use in a .NET console-based
application would be:

using System;
using System.Net.Http.Headers;
using System.Text;
using System.Net.Http;
using System.Web;

namespace CSHttpClientSample
{
    static class Program
    {
        static void Main()
        {
            MakeRequest();
            Console.WriteLine("Hit ENTER to exit...");
            Console.ReadLine();
        }

        static async void MakeRequest()
        {
            var client = new HttpClient();
            var queryString = HttpUtility.ParseQueryString(string.Empty);

            // Request headers
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "{subscription key}");

            var uri = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/sentiment?" + queryString;

            HttpResponseMessage response;

            // Request body
            byte[] byteData = Encoding.UTF8.GetBytes("{body}");

            using (var content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("<your content type, i.e. application/json>");
                response = await client.PostAsync(uri, content);
            }
        }
    }
}

All you need to do is replace the {subscription key} value with your copied key from the service and
replace the {body} value with your text that you want to send to the service. Note that the service is
called in an asynchronous fashion.
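Once the response comes back, your application still has to read the scores out of the JSON and decide what to do. The following sketch (assuming the Newtonsoft.Json package) shows one way to map a document id to its score and use a threshold to decide whether to hand the conversation to a person; the 0.3 cutoff is an arbitrary example value, not a service recommendation.

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

static class SentimentHelper
{
    // Returns the sentiment score for the document with the given id, or null if it is missing.
    public static async Task<double?> GetScoreAsync(HttpResponseMessage response, string documentId)
    {
        var json = JObject.Parse(await response.Content.ReadAsStringAsync());

        foreach (var doc in json["documents"])
        {
            if ((string)doc["id"] == documentId)
            {
                return (double)doc["score"];
            }
        }

        return null; // not found - check the "errors" array in the response
    }
}

// Example usage inside a bot turn:
// var score = await SentimentHelper.GetScoreAsync(response, "1");
// if (score < 0.3) { /* consider escalating the conversation to a human agent */ }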

Detect Language in a Bot


Introducing Language Detection
The Language Detection API evaluates text input and, for each document, returns language identifiers with a score indicating the strength of the analysis. Text Analytics recognizes up to 120 languages.
This capability is useful for content stores that collect arbitrary text, where the language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score which reflects the confidence of the model (a value between 0 and 1).
Language detection can work with documents or single phrases but it's important to note that the
document size must be under 5,120 characters. This size limit is per document and each collection that
you send to the service is restricted to 1,000 items (IDs). A sample of a properly formatted JSON payload
that you might submit to the service in the request body is shown here.

{
"documents": [
{
"id": "1",
"text": "This document is in English."
},
{
"id": "2",
"text": "Este documento está en inglés."
},
{
"id": "3",
"text": "Ce document est en anglais."
},
{
"id": "4",
"text": "本文件为英文"
},
{
"id": "5",
"text": "Этот документ на английском языке."
}
]
}

The service will return a JSON response that contains the IDs provided in the request body along with a value indicating the confidence level of the detected language. The confidence level is a value ranging from 0 to 1, with values closer to 1 being a higher confidence level. The JSON response is also formatted a little differently than what you saw in the sentiment analysis. The reason is that your document may contain a mix of languages. Let's evaluate that with a standard JSON response that maps to the above request JSON.
{
"documents": [
{
"id": "1",
"detectedLanguages": [
{
"name": "English",
"iso6391Name": "en",
"score": 1
}
]
},
{
"id": "2",
"detectedLanguages": [
{
"name": "Spanish",
"iso6391Name": "es",
"score": 1
}
]
},
{
"id": "3",
"detectedLanguages": [
{
"name": "French",
"iso6391Name": "fr",
"score": 1
}
]
},
{
"id": "4",
"detectedLanguages": [
{
"name": "Chinese_Simplified",
"iso6391Name": "zh_chs",
"score": 1
}
]
},
{
"id": "5",
"detectedLanguages": [
{
"name": "Russian",
"iso6391Name": "ru",
"score": 1
}
]
}

],
"errors": []
}

Note how the response is formulated in this JSON file. There is a documents array that contains the IDs of the documents that were sent in the request, but also note that there are nested arrays of "detected languages" for each ID. In this case, there is only one language in each detectedLanguages array. Each detected language is identified using a name, an ISO 639-1 code, and a confidence score. In our sample, all of the languages show a confidence of 1, mostly because the text is relatively simple and the language is easy to identify.
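To act on this response in code, your application can parse the documents array and read the first (highest-scoring) entry in each detectedLanguages array. A small sketch, again assuming the Newtonsoft.Json package, is shown here; the helper name is illustrative.

using System.Linq;
using Newtonsoft.Json.Linq;

static class LanguageDetectionHelper
{
    // Returns the ISO 639-1 code of the detected language for the given document id, or null.
    public static string GetIsoCode(string responseJson, string documentId)
    {
        var json = JObject.Parse(responseJson);

        foreach (var doc in json["documents"])
        {
            if ((string)doc["id"] == documentId)
            {
                // Take the first entry in the detectedLanguages array.
                return (string)doc["detectedLanguages"]?.FirstOrDefault()?["iso6391Name"];
            }
        }

        return null;
    }
}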
If we were to pass in a document that had mixed content, from a language perspective, the service will
behave a bit differently. Mixed language content within the same document returns the language with
the largest representation in the content, but with a lower positive rating, reflecting the marginal strength
of that assessment. In the following example, input is a blend of English, Spanish, and French. The
analyzer counts characters in each segment to determine the predominant language.
{
"documents": [
{
"id": "1",
"text": "Hello, I would like to take a class at your University. ¿Se
ofrecen clases en español? Es mi primera lengua y más fácil para escribir.
Que diriez-vous des cours en français?"
}
]
}

You can likely determine for yourself what the outcome will be for this evaluation. If the service counts
the number of characters in each segment, we can deduce that Spanish will have the most in terms of
characters and as a result, it would be the “predominant” language in the text. As you might expect, the
following sample shows a returned result for this multi-language example.
{
"documents": [
{
"id": "1",
"detectedLanguages": [
{
"name": "Spanish",
"iso6391Name": "es",
"score": 0.9375
}
]
}
],
"errors": []
}

The last condition to consider is when there is ambiguity as to the language content. This might happen
if you submit textual content that the analyzer is not able to parse. As a result, the response for the
language name and ISO code will indicate (unknown) and the score value will be returned as NaN, or Not
a Number. The following example shows how the response would look.

{
"id": "5",
"detectedLanguages": [
{
"name": "(Unknown)",
"iso6391Name": "(Unknown)",
"score": "NaN"
}
]

Walkthrough - Testing the Language Detection API
If you already have a Text Analytics service created and available in Microsoft Azure, you can make use of
that same service. In the previous lesson we tested the sentiment analysis service with an existing Text
Analytics service and we can use that same service, however, the API Testing Console location will be
different.
1. You would need to start from the Language Detection API landing page2 to get access to the region buttons that open the actual testing console.
2. Select the region in which your Text Analytics service is located, and the API testing console opens in
the browser.
3. Once again, paste the service key into the Ocp-Apim-Subscription-Key entry
4. Paste the following JSON content into the request body section.
{
"documents": [
{
"id": "1",
"text": "Hello world"
},
{
"id": "2",
"text": "Halo, apa kabar"
},
{
"id": "3",
"text": "La carretera estaba atascada. Había mucho tráfico el día de
ayer."
},
{
"id": "4",
"text": ":) :( :D"
}
]
}

2 https://westus.dev.cognitive.microsoft.com/docs/services/TextAnalytics.V2.0/operations/56f30ceeeda5650db055a3c7

5. In this sample we are passing in some simple text again. The first is English text, the next is a simple greeting in Indonesian, number 3 is Spanish text, but the last item is actually a series of emoticons.
6. Click or Select Send to send this to the service and see what the results are.
{
"documents": [{
"id": "1",
"detectedLanguages": [{
"name": "English",
"iso6391Name": "en",
"score": 1.0
}]
}, {
"id": "2",
"detectedLanguages": [{
"name": "Indonesian",
"iso6391Name": "id",
"score": 1.0
}]
}, {
"id": "3",
"detectedLanguages": [{
"name": "Spanish",
"iso6391Name": "es",
"score": 1.0
}]
}, {
"id": "4",
"detectedLanguages": [{
"name": "English",
"iso6391Name": "en",
"score": 1.0
}]
}],
"errors": []
}

As we expected, the first entry was in English, the second is Indonesian, the third Spanish, but the fourth
might not be what you expected. It is returned as English with a confidence score of 1.0, which indicates
the analyzer is positive it is English. In the current iteration of the analyzer, emoticons will be detected as
English rather than unknown or some other language.
Similar to the Sentiment Analysis service, you can find sample code for different application types at the bottom of the API landing page. The samples are available for Curl, C#, Java, JavaScript, Objective-C, PHP, Python, and Ruby, which helps to cover a broad range of programming languages and environments.
Remember that the returned JSON is what your application will need to parse to take action on the
results from the service.

Lab
Lab 8: Language Detection in a Bot

Users may interact with your bot in their native language. Your bot should provide for this scenario and react by either changing the language used for responses or simply indicating that the entered language is not supported. Either way, you will need to determine which language the user is entering initially.

Lab Objectives
●● Implement Language Detection with a Bot
