
AI FOR EVERYONE

-: MASTER THE BASICS :-

JULY 3, 2023
BY
MIKE
AI for Everyone: Master the Basics

In this course you will learn what Artificial Intelligence (AI) is, explore use
cases and applications of AI, and understand AI concepts and terms like
machine learning, deep learning, and neural networks. You will be exposed
to various issues and concerns surrounding AI such as ethics, bias, and
jobs, and get advice from experts about learning and starting a career in
AI. You will also demonstrate AI in action with a mini project.

This course does not require any programming or computer science


expertise and is designed to introduce the basics of AI to anyone whether
you have a technical background or not.

This course is for everyone. No prior background in computer science or


programming is necessary.

Change log

 01 Sept 2020 (Rav Ahuja): Updated version of the course published


on edX.org
 01 Sept 2020 (Anamika Agarwal): Replaced links to labs with links
from SN Asset Library
 19 Dec 2022 (Jada Harrison): Updated version of the course
published on edX.org

1 COURSE LEARNING OBJECTIVES AND SYLLABUS


1.1.1 Course Learning Objectives

 Understand what AI is, its applications and use cases, and how it is
transforming our lives
 Explain terms like Machine Learning, Deep Learning and Neural
Networks
 Describe several issues and ethical concerns surrounding AI
 Articulate advice from experts about learning and starting a career in
AI
1.1.2 Syllabus
Module 1: What is AI? Applications and Examples of AI
 Video: Introducing AI
 Video: What is AI?
 Video: Optional: Tanmay’s journey and take on AI
 Video: Impact and Examples of AI
 Video: Optional: Application Domains for AI
 Video: Some Applications of AI
 Video: Optional: More Applications of AI
 Video: Famous applications of AI from IBM
 Reading: Summary & Highlights
 Graded Quiz: What is AI? Applications and Examples of AI

Module 2: AI Concepts, Terminology, and Application Areas

 Video: Cognitive Computing (Perception, Learning, Reasoning)


 Video: Terminology and Related Concepts
 Video: Machine Learning
 Video: Machine Learning Techniques and Training
 Video: Deep Learning
 Video: Neural Networks
 Hands-on Lab: Paint with AI (Optional)
 Video: Key Fields of Application in AI
 Video: Natural Language Processing, Speech, Computer Vision
 Video: Self Driving Cars
 Hands-On Lab: Computer Vision
 Reading: Summary & Highlights
 Graded Quiz: AI Concepts, Terminology, and Application Areas

Module 3: AI: Issues, Concerns and Ethical Considerations

 Video: Exploring Today's AI Concerns


 Discussion Prompt: Reflection on AI issues, concerns, and ethics
 Video: Exploring AI and Ethics
 Video: Defining AI Ethics
 Video: Understanding Bias and AI
 Hands-on Lab - Detect the Bias
 Video: AI Ethics and Regulations
 Video: AI Ethics, Governance, and ESG
 Reading: Foundations of Trustworthy AI: Operationalizing
Trustworthy AI
 Reading: Precision Regulation for Artificial Intelligence
 Reading: Summary & Highlights
 Graded Quiz: AI Issues, Ethics and Bias

Module 4: The Future with AI, and AI in Action

 Video: The Evolution and Future of AI


 Reading: What's Next for AI
 Video: Future with AI
 Reading: What will our Society look like when AI is everywhere?
 Video: The AI Ladder - The Journey for Adopting AI Successfully
 Video: Advice for a career in AI
 Video: Hotbeds of AI Innovation
 Video: Tanmay’s Advice to Learn AI
 Video: Polong’s Advice for a Job in AI
 Reading: Summary & Highlights
 Quiz: Final Quiz
 Project Overview
 Hands-on Lab: Part 1 Online Shopping Chatbot
 (Optional) Hands-on Lab: Part 2 Online Shopping Chatbot
 Final Assignment: Submission and Grading

Course Wrap Up:

 Reading: Conclusion and Next Steps


 Reading: Course Team
1.1.3 Grading Scheme
1. The minimum passing mark for the course is 70% with the following
weights:

 83% - All Graded Quiz Questions


 17% - Final Quiz

2. Though the Graded Quiz Questions and the Final Quiz carry weights of
83% and 17% respectively, the only grade that matters is the overall grade for
the course.
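
As an illustration of how these weights combine into the overall grade, here is a
minimal sketch in Python. The component scores are invented for the example and
are not real course data.

# Hypothetical worked example of the weighted grading scheme described above.
quiz_weight, final_weight = 0.83, 0.17
quiz_score, final_score = 80.0, 50.0   # assumed percent scores on each component

overall = quiz_weight * quiz_score + final_weight * final_score
print(f"Overall grade: {overall:.1f}%")     # 74.9%
print("Pass" if overall >= 70 else "Fail")  # the passing mark for the course is 70%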

3. Graded Quiz Questions have no time limit. You are encouraged to


review the course material to find the answers as this counts for 83% of
your mark.

4. Attempts are per question in both the Review Questions and the Final
Exam:

 One attempt - For True/False questions and questions with only two
answer options
 Two attempts - For any other question, that is, questions with more than
two answer options

5. There are no penalties for incorrect attempts.

6. Clicking the "Final Check" button when it appears, means your


submission is FINAL. You will NOT be able to resubmit your answer to that
question again.

7. Check your grades in the course at any time by clicking on the
"Progress" tab.

MODULE 01
2 MODULE INTRODUCTION AND LEARNING OBJECTIVES
Module 1 Introduction

In this Module, you will learn what AI is. You will understand its applications
and use cases and how it is transforming our lives.

Learning Objectives

 Define AI.
 Describe examples, applications, and impact of AI.
 Explore an interactive AI app.

3 VIDEO: INTRODUCING AI (3:28)

At IBM, we define AI as anything that makes machines act more


intelligently. We like to think of AI as augmented intelligence. We
believe that AI should not attempt to replace human experts, but
rather extend human capabilities and accomplish tasks that neither
humans nor machines could do on their own. The internet has given
us access to more information, faster. Distributed computing and
IoT have led to massive amounts of data, and social networking has
encouraged most of that data to be unstructured. With Augmented
Intelligence, we are putting information that subject matter experts
need at their fingertips, and backing it with evidence so they can
make informed decisions.
We want experts to scale their capabilities and let the machines do
the time-consuming work.
1. How do we define intelligence?
Human beings have innate intelligence, defined as the
intelligence that governs every activity in our body.

This intelligence is what causes an oak tree to grow out of a little


seed, and an elephant to form from a single-celled organism.
How does AI learn? The only innate intelligence machines have is
what we give them. We provide machines the ability to examine
examples and create machine learning models based on the inputs
and desired outputs. And we do this in different ways such as
Supervised Learning, Unsupervised Learning,
and Reinforcement Learning, about which you will learn in more
detail in subsequent lessons.
Based on strength, breadth, and application, AI can be described in
different ways.
Weak or Narrow AI is AI that is applied to a specific domain. For
example, language translators, virtual assistants,
self-driving cars, AI-powered web searches, recommendation
engines, and intelligent spam filters.
Applied AI can perform specific tasks, but not learn new ones,
making decisions based on programmed algorithms, and training
data.
Strong AI or Generalized AI is AI that can interact with and operate on a
wide variety of independent and unrelated tasks.
It can learn new tasks to solve new problems, and it does this by
teaching itself new strategies.
Generalized Intelligence is the combination of many AI strategies
that learn from experience and can perform at a human level of
intelligence.
Super AI or Conscious AI is AI with human-level consciousness,
which would require it to be self-aware.
Because we are not yet able to adequately define what
consciousness is, it is unlikely that we will be able to create a
conscious AI in the near future.
AI is the fusion of many fields of study. Computer science and
electrical engineering determine how AI is implemented in software
and hardware.
Mathematics and statistics determine viable models and measure
performance.
Because AI is modeled on how we believe the brain works,
psychology and linguistics play an essential role in understanding
how AI might work. And philosophy provides guidance on
intelligence and ethical considerations.
While the science fiction version of AI may be a distant
possibility, we already see more and more AI involved in the
decisions we make every day. Over the years, AI has proven to be
useful in different domains, impacting the lives of people and our
society in meaningful ways.
Module 1 - What is AI? Applications and Examples of AI

There's a lot of talk about artificial intelligence these days.


1. How do you define AI, or what does AI mean for you?

There is a lot of talk and there are a lot of definitions for what artificial
intelligence is. So one of them is about teaching the machines to
learn, and act, and think as humans would. Another dimension
is really about how do we get the machines to- how do we impart
more of a cognitive capability on the machines and sensory
capabilities. So it's about analysing images and videos, about
natural language processing and understanding speech. It's about
pattern recognition, and so on, and so forth. So the third axis is
more around creating a technology that's able to, in some cases,
replace what humans do. I'd like to think of this as augment what
humans do. To me personally, the most important part of definition
for artificial intelligence is about imparting the ability to think and
learn on the machines. To me that's what defines artificial
intelligence. AI is the application of computing to solve problems in
an intelligent way using algorithms. So what is an intelligent way?
Well, an intelligent way may be something that mimics human
intelligence. Or it may be a purely computational approach and
optimization approach but something that manipulates data in a
way to get not obvious results out, I think, is what I would classify as
being artificially intelligent. I would define AI as a tool that uses
computer to complete a task automatically with very little to no
human intervention.
To me AI is really a complex series of layers of algorithms that do
something with the information that's coming into it.
Artificial intelligence is a set of technologies that allows us to extract
knowledge from data. So it's any kind of system that learns or
understands patterns within that data, and can identify them, and
then reproduce them on new information. Artificial intelligence is not
the kind of simulating human intelligence that people think it is. It's
really not about intelligence at all. But I think another word that
describes AI more accurately today is machine learning. The
reason I say that is because machine learning
technology is all about using essentially mathematics on computers
in order to find patterns in data. Now this data can be structured or
unstructured. The only difference between machine learning and
the technologies that came before it is instead of us, as humans,
having to manually hard code these patterns, and these conditions
into computers. They're able to find these patterns on their own by
using math. That's really the only difference
here. So what I'd say artificial intelligence is, is it's a set of
mathematical algorithms that enable us to have computers find
very deep patterns that we may not have even known existed,
without us having to hard code them manually.

Tanmay’s journey and take on AI

Hi Tanmay, welcome. Tell us, how old are you?


I'm 15 years old. Great. How did you get started in technology and
AI? Sure. So I've been working with technology for over 10 years
now. It all started back when I was around five years old because
my dad used to work as a computer programmer, and watching him
program almost all day was so fascinating to me that I really
wanted to find out more. I wanted to find out how computers could
do really anything that they did, whether that be displaying my
name on the screen, or adding two numbers, or really anything of
that sort, it was like magic to me at that age. So my dad introduced
me to the world of programming, and I've been working with code. I
submitted my first iOS application and more. But when I was around
10 years old, I started to feel like technology wasn't as fun as it
used to be for me.
Technology wasn't as exciting as it used to be for one simple
reason, it's because technology was very rigid, you code something
in and it immediately starts to become obsolete, it never adapts, it
never changes new data, new users, new circumstances they
create. But when I was 11 years old, I actually stumbled upon a
documentary on IBM Watson playing and winning the Jeopardy
game show back in 2011. So of course, that immediately fascinated
me, as to how a computer can play Jeopardy, and so I went ahead and
did a little bit more research, found out that IBM provides Watson's
individual components as APIs on the cloud. I did a little bit more
research, started to create my very first sort of cognitive
applications, and I also created tutorials on my YouTube channel,
on how others can also leverage the IBM Watson APIs. So really
ever since I was 11 years old, I've been working with machine
learning technology through numerous different services like IBM
Watson. That's awesome. So what does AI mean for you? Really
what AI means to me, before I get to that, before I can explain what
AI is to me. I think it's first important to understand really what
others like to think of AI as. Now, a lot of people have this very sort
of split, this sort of very sort of bipolar response to machine learning
or AI, as people call it. Some people are like, yes, it's the greatest
technology of all time, and some people are like, this will be the
downfall of humanity. I'd say that neither of those responses would
be correct. The reason I say that is because machine learning
technology is technology. It's very advanced technology, it helps us
do things that we never could have done before, but it's just that, it's
technology. Artificial intelligence and machine learning is something
that people have been working on mathematically since even
before computers were a thing. Machine learning technology is not
new at all; in fact, its very fundamentals at least have existed for many
decades before I was even born. But the thing is machine learning
technology or at least, for example, the basic perceptron and these
sorts of mathematical techniques have existed since even before
computers or calculators became popular. So when we were
creating these sorts of machine-learning concepts and AI, and we
started to create literature and movies on the future of technology
and computers, we barely had any idea of not only where
technology would go in the future, but also what technology really
is. Because of that, people have this very common misconception
of artificial intelligence being the human mind within a computer, the
human intelligence simulated wholly within a computer. But that
couldn't be farther from the truth. Machine learning or AI is not
simulating a human mind, but what it does try and do, is it tries to
open up new doors for computers. It tries to enable computers to
understand certain kinds of data that they couldn't have understood
before. So for example, if you take a look at what we as humans
are so special at, the fact that we can understand natural language,
in fact, we are the only animal to have such a complex ability of
being able to communicate in natural language, even if it's
something that I have not directly witnessed or seen or heard
evidence for, I can still describe that imaginative concept to you,
that is actually really wonderful. We're also great at understanding
raw auditory data. I mean, imagine your brain is actually taking
vibrations of air molecules and converting that to thoughts, that's
really amazing. We're also great at processing visual data,
like when you look at someone's face, the fact that you can instantly
recognize them. When you look at someone's eyes, you can tell
exactly where they're looking, that's really an amazing ability. These
are things that computers cannot do because they are
fundamentally limited to mathematics. They can only understand
numbers and mathematical operations. But by using machine
learning technology, you can actually take this mathematics, and
use it to understand patterns in vast amounts of both structured
and unstructured human data. The only difference here is that
before, we as humans would manually construct these patterns and
these conditions, whereas now it's done automatically for us at least
mostly automatically by techniques like gradient descent and
calculus. So machine learning technology is a more accurate term for
what AI really is today and will be in the future. Of course, artificial
intelligence isn't meant to replace us because on a fundamental
level, it is a completely different thing than a human brain.

4 GENERATIVE AI OVERVIEW AND USE CASES


1. Define Generative AI and describe its significance, and explain
different use cases of Generative AI.

Artificial Intelligence (AI) is defined as Augmented Intelligence


that enables experts to scale their capabilities while machines
handle time-consuming tasks like recognizing speech, playing
games, and making decisions. On the other hand, Generative
Artificial Intelligence, or GenAI, is an AI technique capable of
creating new and unique data, ranging from images and music to
text and entire virtual worlds. Unlike conventional AI models that
rely on pre-defined rules and patterns, Generative AI models use
deep learning techniques and rely on vast datasets to generate
entirely new data with various applications. A Generative AI
model can also use an LLM, or Large Language Model, a type of
artificial intelligence based on deep learning techniques designed
to process and generate natural language. For instance,
Generative AI can develop new and more powerful LLM
algorithms or architectures, resulting in more accurate or efficient
natural language processing and generation capabilities.
Alternatively, a Generative AI can design and incorporate LLM
into a larger, more advanced AI system to perform various
advanced tasks, such as decision-making, problem-solving, and
creative work.
Generative AI encompasses various AI technologies and the
idea of developing AI systems. Although more about Generative
AI will soon unfold, the following benefits already make
Generative AI a strategic technology: Creativity and innovation,
Cost and time savings, Personalization, Scalability, Robustness,
and Exploration of new possibilities. Let us look at some diverse
use cases of Generative AI. In the field of healthcare and
precision medicine, Generative AI can support physicians in
identifying genetic mutations responsible for patients' illnesses
and providing tailored treatments. It can also produce medical
images, simulate surgeries, and predict new drug properties to
aid doctors in practicing procedures and developing treatments.
In agriculture, Generative AI can optimize crop yields and create
more robust plant varieties that can withstand environmental
stressors, pests, and diseases. In biotechnology, Generative AI
can aid in the development of new therapies and drugs by
identifying potential drug targets, simulating drug interactions,
and forecasting drug efficacy. In forensics, Generative AI can
help solve crimes by analysing DNA evidence and identifying
suspects. In environmental conservation, Generative AI can
support the protection of endangered species by analysing their
genetic data and suggesting breeding and conservation
strategies. In creative fields, Generative AI can produce unique
digital art, music, and video content for advertising and marketing
campaigns, and generate soundtracks for films or video games.
In gaming, Generative AI can create interactive game worlds by
generating new levels, characters, and objects that adapt to
player behaviour. In fashion, Generative AI can design and
produce virtual try-on experiences for customers and
recommend personalized fashion choices based on customer
behaviour and preferences. In robotics, Generative AI can design
new robot movements and adapt them to changing
environments, enabling them to perform complex tasks. In
education, Generative AI can create customized learning
materials and interactive learning environments that adjust to
students' learning styles and paces. In data augmentation,
Generative AI can produce new training data for machine
learning models, enhancing their accuracy and performance.

In this video, you learned that:

Generative AI is an AI technique capable of creating new and
unique data. It outperforms traditional AI models in terms of
creativity, cost and time savings, personalization,
scalability, robustness, and exploration of new possibilities.
Generative AI has the potential to transform various industries,
improve people's lives, and generate data and experiences that
were not previously possible. It can be used to perform a wide
range of tasks, with flexibility and adaptability similar to that of
human intelligence.
5 IMPACT AND APPLICATIONS OF AI

AI is here to stay, with the promise of transforming the


way the world works. According to a study by PWC, $16
trillion of GDP will be added between now and 2030 on
the basis of AI. This is a never before seen scale of
economic impact, and it is not just in the IT industry, it
impacts virtually every industry and aspect of our lives.
AI means different things to different people. For a
videogame designer, AI means writing the code that
affects how bots play, and how the environment reacts
to the player. For a screenwriter, AI means a character
that acts like a human, with some trope of computer
features mixed in. For a data scientist, AI is a way of
exploring and classifying data to meet specific goals. AI
algorithms that learn by example are the reason we can
talk to Watson, Alexa, Siri, Cortana, and Google
Assistant, and they can talk back to us. The natural
language processing and natural language generation
capabilities of AI are not only enabling machines
and humans to understand and interact with each other,
but are creating new opportunities and new ways of
doing business. Chatbots powered by natural language
processing capabilities, are being used in healthcare to
question patients and run basic diagnoses like real
doctors. In education, they are providing students
with easy to learn conversational interfaces and on-
demand online tutors. Customer service chatbots are
improving customer experience by resolving queries on
the spot and freeing up agents’ time for conversations
that add value. AI-powered advances in speech-to-text
technology have made real time transcription a
reality. Advances in speech synthesis are the reason
companies are using AI-powered voice to enhance
customer experience, and give their brand its unique
voice. In the field of medicine, it's helping patients with
Lou Gehrig’s disease, for example, to regain their real
voice in place of using a computerized voice. It is due to
advances in AI that the field of computer vision has been
able to surpass humans in tasks related to detecting and
labelling objects. Computer vision is one of the reasons
why cars can steer their way on streets and highways
and avoid hitting obstacles. Computer vision algorithms
detect facial features in images and compare them
with databases of face profiles. This is what allows
consumer devices to authenticate the identities of
their owners through facial recognition, social media
apps to detect and tag users, and law enforcement
agencies to identify criminals in video feeds. Computer
vision algorithms are helping automate tasks, such as
detecting cancerous moles in skin images or finding
symptoms in X-ray and MRI scans. AI is impacting the
quality of our lives on a daily basis. There's AI in our
Netflix queue, our navigation apps, keeping spam out of
our inboxes and reminding us of important events. AI is
working behind the scenes monitoring our investments,
detecting fraudulent transactions, identifying credit card
fraud, and preventing financial crimes. AI is impacting
healthcare in significant ways, by helping doctors arrive
at more accurate preliminary diagnoses, reading
medical imaging, finding appropriate clinical trials for
patients. It is not just influencing patient outcomes
But also making operational processes less expensive.
AI has the potential to access enormous amounts of
information, imitate humans, even specific humans,
make life-changing recommendations about health
and finances, correlate data that may invade privacy,
and much more.
6 APPLICATION DOMAINS FOR AI
Can you talk about some of the applications of AI?
>> There's all kinds of different applications, obviously,
there's healthcare, there's finance. The one that's
closest to my heart, of course, is robotics and
automation. Where the AI technologies really help us to
improve our abilities to perceive the environment around
the robot and to make plans in unpredictable
environments as they're changing.

>> There's a great book out by an author, Kevin Kelly,
and he is an editor for Wired magazine. He's
written a great book about technologies that are going to
be changing and shaping our world, specifically 12
technologies. And he's got a fantastic definition in the
book about specifically how AI is going to permeate our
everyday life and it's all summarized in one excellent
quote. So he says that the business cases for the next
10,000 startups are easy to predict, I have x and I will
add AI to my x. The way I understand that is it's
basically a notion that AI in one shape, way or form, in
any shape or form, is going to permeate every aspect of
human endeavour. Everything we do, everything we
touch is going to be enhanced by AI. We have great
benefits from taking any device, any machine, and making
it just a little bit smarter. The benefit of just adding
a bit of smarts to it, a bit of intelligence to it, is
exponential.

>> So we work a lot with some really fun applications of


AI. We do a couple of different things in the lab that I
run.
We work on self-driving vehicles as one aspect, so
autonomy for self-driving. Which requires a lot of AI for
the vision systems, for the navigational intelligence, for
the planning and control aspects of the car, we do that.
And we also have a large research program in what are
called collaborative robotics, or co-bots. So, robots that
are designed to work in and around and with people.
And that presents a lot of challenges, because we want
the robots to act intelligently and to interface with
humans in a way that is natural. And that requires
understanding how people behave, which requires
intelligence. In addition to those, there are a myriad of
other applications, drug discovery, medical treatments
for cancer and other diseases. So, a bunch of extremely
exciting applications.

>> I mean I think the general use of AI so


far has been taking large data sets, and making sense
of them. And doing some sort of processing with that
data in real time. That's what we've been doing, and
that's what we've seen most effective, in terms of
creating some sort of larger scale impact in healthcare
beyond just having a siloed device. And we've been
seeing that across the board, across the whole
healthcare spectrum.

>> We use AI all the time, and a lot of the times we're
not even aware of it. We use AI every time we type in a
search query on a search engine, every time we use our
GPS.
Or every time we use some kind of voice recognition
system.
>> I like to focus on a particular segment of AI,
if that's okay, around computer vision. Because it's just
particularly fascinating to me. Now, when we think of
computer vision, they're looking at AI in ways to help
augment, or to help automate or to help train computers
to do something that's already very difficult to train
humans to do. Like when it comes to the airport, trying
to find weapons within luggage through the X-ray
scanner, now that could be difficult to do. No matter how
much you train someone that can be very difficult to
identify.
But with computer vision that can help to automate, help
to augment, help to flag, certain X-ray images so that
maybe even humans can just take a look at a filtered set
of images, not all of the images right? So computer
vision is very disruptive. And so there's many ways in
which computer vision can really help to augment the
capabilities of humans in lots of different industries.

>> Now I mean applications of AI are really all around


us.
There's no limit to really what we're doing with artificial
intelligence. When you do practically anything on any
technology, you're most probably using
some form of what we call machine learning or artificial
intelligence. For example, when you check your email.
Doing something as simple as checking your email.
Spam filtering has been done for years with machine
learning technology. More recently, Google came out
with their features that enable you to do smart email
compositions. So you can actually have text written for
you on the fly as you're writing your email.
Your subjects are automatically written as well, it'll
recommend to you who you should be sending the email
to, see if you missed someone. All of these things are
powered by machine learning. But some of the main
areas where I believe machine learning technology can
make
an impact are the fields of health care and education.
7 SOME APPLICATIONS OF AI
Can you give some specific examples of applications of
AI?
Certainly. So we have a fairly large collaborative
robotics program. So the co-bots we work on are
primarily targeted at the moment at manufacturing
applications,
manufacturing, warehousing, logistics, these types of
applications where normally you may have a person
doing a job that can be dull, it can be dangerous, and
having robotic support or having a robot actually do
the job may make it much safer, more efficient, and
more effective overall.
So we work on a lot of those types of applications,
particularly where the robots are trying to interface
directly with people, as I said. So the robot may
help an individual to lift a heavy container, or help to
move items on a stocking, on a shelf stocking purposes,
so all these kinds of applications, where I think we'll see
collaborative robots move first, and then hopefully one
day and maybe into your home to help you with
the laundry and dishes in the kitchen. Hopefully.
For example, in oil and gas, there's a company,
a pretty large oil and gas company
called the Abu Dhabi National Oil Company, and one of
the problems that any kind of oil company has to deal
with is,
where's the best place for them to drill for oil?
So they have to find these rock samples of all these
different places, for this place and in this place, and that
place, and maybe hundreds of different places for them
to drill oil. From these rock samples, now you have all
these fine sheets of rock in maybe hundreds or
thousands of them, and it's up to these oil companies to
be able to classify these using their trained and expert
geologists.
But to train geologists to properly classify these sheets
of rock can be quite difficult, it could be time-consuming,
could cost a lot of money as well. So one way to help
augment the capabilities of humans is to be able to use
computer vision to classify these rock samples, to be
able to identify which of these locations are the best to
drill for oil. That's in oil and gas. Imagine before this, if
there was a very, very rare form of cancer experienced
by a doctor in Dubai, and if there were another case in
New Zealand, how do you think they would have
actually figured out that, "Hey,
we're both dealing with this very rare case, let's
work together." That wouldn't have been possible in the
past, but now with machine learning technology being
able to aggregate knowledge from so many different
sources into one centralized Cloud and understand it,
and provide that information in an accessible, intuitive,
implicit way.
Now, that New Zealand doctor can actually go ahead
and use this machine learning technique to say, "Hey,
just a few days ago there was a doctor with a very
similar case," even though it may not be the exact same
thing. Sure. So we work with a number of startups and
a number of enterprises, and I'll just bring a couple of
examples. So what I like to talk quite a bit about is a
company out in California called Echo Devices. What
they've done is they've taken a simple device which is
stethoscope, something we see around the neck of
every physician, nurse, and the health care professional,
and they take that device and basically have
transformed that, first, into a digital device by cutting
the tube on the stethoscope and inserting a digitizer into it that
takes the analog sound, transforms it into a digital signal,
and amplifies it in the process, making it a lot easier for
people to hear the amplified sound of your
heart or your lungs working. But what it also allows us
to do is take the digital signal and send
it via Bluetooth to a smart phone. Once it's on a smart
phone, they're able to graph it, which allows the
physician to better understand, not just through audio
data but through an actual graph of how your heart is
working. But because the information is now captured in
the digital world, it can now be sent to a central machine-learning
algorithm, and that's what they do. A machine-learning
algorithm can actually learn from that, apply your
previous learnings from the human doctors, cardiologist,
and now assist a physician who is using the device in
their current diagnosis. So it is basically not replacing a
physician in any way, shape, or form; it is assistive
technology which is taking the learnings of the previous
generations of human cardiologists, and helping in the
diagnosis in the current state. To me, that's a perfect
example of taking the X, which is in this particular case
as a stethoscope, and then adding AI to that X. They have a
really nifty name for that; they call it Shazam for
Heartbeats.
8 MORE APPLICATIONS OF AI
Can you talk about AI in action? To give an example of
say
machine learning in action today, how companies have
actually implemented it, there's one example that I
always love to go back to, and it is the example of
Woodside Energy, a company in the Australia New
Zealand region.
Now originally, they actually contacted IBM because
they wanted IBM to be able to create
essentially a system that can understand
the different documents and
the research that their engineers come up with,
and have Watson understand that,
and essentially replace some
of the engineers on their team.
IBM actually went ahead and built the application, and it
worked: Watson was able to understand that
unstructured content, but they never ended up replacing
any of their engineers.
Instead, they actually ended up hiring more engineers,
because now they realized two things.
First of all, the barrier of entry for each engineer is now
lower and knowledge can now be shared more
effectively among the teams. Because now, instead of
research being written and put into an archive drawer
where it's never seen again, Watson's ingesting that
data, understanding it and providing it to whoever needs
it, whoever Watson thinks would need that data.
So if you imagine in these TV shows and
in these movie scenes as well, you sometimes have
someone looking for a particular suspect who
passed through a particular traffic intersection or whatnot,
and there are of course
some cameras around. So we have the security guard
maybe, trying to look through hours and hours, dozens
and hundreds of hours of footage, maybe at 10x speed
and find that particular black SUV or that green car.
Then as soon as they find it at the end of the episode or
whatnot, then say aha, we found that person. But if you
had some sort of computer vision algorithm running on
this video for just the entire time,
then you wouldn't have a need for some person to
have to manually watch through hours and hours of
footage. Our specific use case is actually triggering new
neural pathways in the brain to form. As you can
imagine, there's a lot of information that happens there
between the connection of how your body functions and
how your brain functions, and what parts of the brain are
damaged, what parts of the brain aren't damaged, and
how you're actually moving the person or how you're
triggering certain things in the human body to happen in
order for new neural pathways to form.
So what we've done is actually, we've created massive
data sets of information of how people move, and how
that responds to different areas of the brain. Through
that information, we're able to trigger specific
movements with a robotic device, which in turn causes
these neural pathways to form in the brain, and
therefore helps the recovery of the person who suffered a
neurological trauma.
9 FAMOUS APPLICATIONS OF AI FROM IBM

I remember that morning going to the lab and I was


thinking this is it, this is the last Jeopardy game. It
became real to me when the music played and Johnny
Gilbert said from IBM Research in Yorktown Heights,
New York, this is Jeopardy and I just went. Here it is one
day. This is the culmination of all this work. To be honest
with you I was emotional. Watson. What is Shoe? You
are right. We actually took the lead. We were ahead of
them, but then we start getting some questions wrong.
Watson? What is Leg? No, I'm sorry. I can't accept that.
What is 1920's?
No. What is Chic? No, sorry, Brad. What is Class?
Class. You got it. Watson.
What is Sauron? Sauron is right and that puts
you into a tie for the lead with Brad. The double
Jeopardy round of the first game I thought was
phenomenal. Watson went on a tear.
Watson, who is Franz Liszt? You are right.
What is Violin? Who is the Church Lady?
Yes. Watson. What is Narcolepsy? You are right and
with that you move to $36,681. Now, we come to
Watson. Who is Bram Stoker and the wager, hello
$17,973 and a two day total of $77,147. We won
Jeopardy. They are very justifiably proud of what they've
done. I would've thought that technology like this was
years away but it's here now. I have the bruised ego to
prove it. I think we saw something important today.
Wow, wait a second. This is history. The 60th Annual
Grammy Awards, powered by IBM Watson. There's a
tremendous amount of unstructured data that we
process on Grammy Sunday. Our partnership with the
recording Academy really is focused on helping them
with some of their workflows for their digital production.
My content team is responsible not only for taking all this
raw stuff that's coming in, but curating it and publishing
it. You're talking about five hours of red carpet coverage
with 5,000 artists, making that trip down the carpet with
a 100,000 photos being shot. For the last five hours,
Watson has been using AI to analyse the colours,
patterns, and silhouettes of every single outfit that has
passed through.
So we've been able to see all the dominant styles and
compare them to Grammy shows in the past. Watson's
also analysing the emotions of Grammy nominated song
lyrics over the last 60 years. Get this, it can actually
identify the emotional themes in music and categorize
them as joy, sadness, and everything else in between.
It's very cool. Fantasy sports are an incredibly important
and fun way that we serve sports fans. Our fantasy
games drive tremendous consumption across ESPN
digital properties, and they drive tune-in to our events
and studio shows. But our users have a lot of different
ways they can spend their time. So we have to
continuously improve our game so they choose to spend
that time with us. This year, ESPN teamed up with IBM
to add a powerful new feature to their fantasy football
platform. Fantasy football generates a huge volume of
content - articles, blogs, videos, podcasts. We call it
unstructured data - data that doesn't fit neatly into
spreadsheets or databases. Watson was built to
analyse that kind of information and turn it into usable
insights. We train Watson on millions of fantasy football
stories, blog posts, and videos. We taught it to develop a
scoring range for thousands of players, their upsides
and their downsides, and we taught it to estimate the
chances a player will exceed their upside or fall below
the downside. Watson even assesses a player's media
buzz and their likelihood to play. This is a big win for our
fantasy football players. It's one more tool to help them
decide which running back or QB to start each week. It's
a great complement to the award-winning analysts our
fans rely on. As with any Machine Learning, the system
gets smarter all the time. That means the insights are
better, which means our users can make better
decisions and have a better chance to win their matchup
every week. The more successful our fantasy players
are, the more time they'll spend with us. The ESPN and
IBM partnership is a great vehicle to demonstrate the
power of enterprise-grade AI to millions of people, and
it's not hard to see how the same technology applies to
real life. There are thousands of business scenarios
where you're assessing value and making trade-offs.
This is what the future of decision-making is going to
look like. Man and machine working together, assessing
risk and reward, working through difficult decisions. This
is the same technology IBM uses to help doctors mine
millions of pages of medical research and investment
banks find market-moving insights.

10 GENERATIVE AI APPLICATIONS

Welcome to Generative AI Applications. After watching this video, you will
be able to list various applications of Generative AI and explore the uses
for each application.

Generative AI has emerged as a powerful technology that enables software
applications to create, generate, and simulate new content, enhancing their
capabilities and providing unique experiences. Unlike traditional software
that follows predefined rules and algorithms, generative AI leverages
machine learning and deep learning techniques to learn patterns and
generate original content based on the knowledge it has acquired during
training. Due to its potential to create new, personalized content that would
have been impossible to create otherwise, Generative AI has been used in
various fields, leading to the development of numerous engaging and
well-liked applications.

Some popular applications of Generative AI in action include:

1: Generative Pre-trained Transformers, or GPT, is a family of large
language models developed by OpenAI that are capable of producing
human-like text. GPT-3.5 and GPT-4 are iterations in this family of models,
with more futuristic models under development. It has a wide range of
applications, including chatbots powered by GPT like ChatGPT, automated
journalism, and even creative writing.

2: ChatGPT is a chatbot or conversational AI tool by OpenAI that enables
users to have text-based conversations with the underlying language
model, GPT. Trained on diverse internet text, it generates human-like
responses, providing information, answering questions, assisting with
tasks, engaging in creative writing, and offering suggestions across various
subjects.

3: Bard is an AI-powered writing assistant from Google that aims to assist
users in producing high-quality writing for communicational documents like
emails and social media posts. Bard generates text using a large language
model called LaMDA (Language Model for Dialogue Applications) and can
adjust to the user's preferences for style and tone.

4: Watsonx from IBM is an AI and data platform, comprising Watsonx.ai for
model development, Watsonx.data for scalable analytics, and
Watsonx.governance for responsible AI workflows. It helps build, deploy,
and manage AI applications at scale, enhancing the impact of AI across
your organization.

5: DeepDream is a generative model that can generate surreal and
psychedelic images from real-life images. It has been used in art and
entertainment, producing some one-of-a-kind and visually stunning images.

6: StyleGAN is a generative model capable of producing high-quality
images of faces that do not exist in reality. It has been used in a variety of
applications, including creating realistic video game avatars and simulating
human faces for medical research.

7: AlphaFold is a generative model that can predict protein structure. It has
the potential to transform drug discovery and make it possible to develop
more effective treatments for diseases.

8: Magenta is a Google project that creates music and art using generative
AI. It has yielded some intriguing and impressive results, such as a piano
duet performed by a human and an AI-generated piano.

9: Google AI's PaLM 2 is a powerful LLM trained on a dataset ten times
larger. It excels in understanding nuances, generating coherent text and
code, translating, and answering questions. Ongoing development
promises to revolutionize human-computer interactions, enhancing
accuracy, efficiency, creativity, and communication.

10: GitHub Copilot is an AI-powered coding assistant developed by OpenAI
and GitHub that is designed to help developers write code more efficiently.
It uses a deep learning algorithm to analyze code and generate
suggestions for the developer, such as auto-completing code snippets or
suggesting functions based on the context of the code.

Generative AI is a rapidly evolving space and is expected to grow
dramatically in the coming years, though there are certain ethical concerns
about Generative AI, including potential misuse of AI-generated content
and implications for intellectual property and copyright laws.

In this video, you learned that:

 Generative AI enables applications to create, generate, and simulate
new content.
 It leverages ML and deep learning techniques to learn patterns and
generate original content.
 Some applications of Gen AI include GPT-4, ChatGPT, Bard, GitHub
Copilot, and PaLM 2.
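
To make the idea of text generation concrete, here is a minimal sketch in Python using
the open-source Hugging Face transformers library and the small GPT-2 model. This is
an illustrative assumption, a freely available stand-in rather than the API of any of the
proprietary models named above.

from transformers import pipeline

# Load a small, openly available text-generation model (GPT-2) as a stand-in.
generator = pipeline("text-generation", model="gpt2")

# Ask the model to continue a prompt; it produces new text rather than retrieving it.
result = generator("Generative AI can help students learn by", max_new_tokens=30)
print(result[0]["generated_text"])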
11 MODULE SUMMARY

In this module, you have learned about AI:

 IBM Research defines Artificial Intelligence (AI) as Augmented


Intelligence, helping experts scale their capabilities as machines do the
time-consuming work.
 AI learns by creating machine learning models based on provided inputs
and desired outputs.
 AI can be described in different ways based on strength, breadth, and
application - Weak or Narrow AI, Strong or Generalized AI, Super or
Conscious AI.
 AI is the fusion of many fields of study, such as Computer Science,
Electrical Engineering, Mathematics, Statistics, Psychology, Linguistics,
and Philosophy.

And about some applications of AI, including:

AI-powered applications are creating an impact in diverse areas such as


Healthcare, Education, Transcription, Law Enforcement, Customer Service,
Mobile and Social Media Apps, Financial Fraud Prevention, Patient
Diagnoses, Clinical Trials, and more.

 Robotics and Automation, where AI is making it possible for robots to


perceive unpredictable environments around them in order to decide on
the next steps.
 Airport Security, where AI is making it possible for X-ray scanners to flag
images that may look suspicious.
 Oil and Gas, where AI is helping companies analyze and classify
thousands of rock samples to help identify the best locations to drill for
oil.

Some famous applications of AI from IBM include:

 Watson playing Jeopardy to win against two of its greatest champions,


Ken Jennings and Brad Rutter.
 Watson teaming up with the Academy to deliver an amplified Grammy
experience for millions of fans.
 Watson collaborating with ESPN to serve 10 million users of the ESPN
Fantasy App sharing insights that help them make better decisions to
win their weekly matchups.

To learn more about the topics in this module, read the following articles:

 AI is Not Magic: It's time to Demystify and Apply


 Women Leaders in AI
 Expert Insights: AI fast forwards video for sports highlights
 IBM Watson creates the first movie trailer for 20th Century Fox
 USTA uses IBM Watson to enhance player performance
MODULE INTRODUCTION AND LEARNING OBJECTIVES
1. Module Introduction

In this Module, you will learn about basic AI concepts and terminology. You
will understand how AI learns, and what some of its applications are.

2. Learning Objectives

 Define basic AI concepts.


 Explain Machine Learning, Deep Learning, and Neural Networks.
 Explain the application areas of AI.

12 COGNITIVE COMPUTING (PERCEPTION, LEARNING,


REASONING)

AI is at the forefront of a new era of computing,


Cognitive Computing. It's a radically new kind of
computing, very different from the programmable
systems that preceded it, as different as those
systems were from the tabulating machines of a
century ago. Conventional computing solutions,
based on the mathematical principles that emanate
from the 1940's, are programmed based on rules
and logic intended to derive mathematically precise
answers, often following a rigid decision tree
approach. But with today's wealth of big data and
the need for more complex evidence-based
decisions, such a rigid approach often breaks or
fails to keep up with available information. Cognitive
Computing enables people to create a profoundly
new kind of value, finding answers and insights
locked away in volumes of data. Whether we
consider a doctor diagnosing a patient, a wealth
manager advising a client on their retirement
portfolio, or even a chef creating a new recipe, they
need new approaches to put into context the
volume of information they deal with on a daily
basis in order to derive value from it. These
processes serve to enhance human expertise.
Cognitive Computing mirrors some of the key
cognitive elements of human expertise, systems
that reason about problems like a human does.
When we as humans seek to understand something
and to make a decision, we go through four key
steps. First, we observe visible phenomena and
bodies of evidence. Second, we draw on what we
know to interpret what we are seeing to generate
hypotheses about what it means. Third, we evaluate
which hypotheses are right or wrong. Finally, we
decide, choosing the option that seems best and
acting accordingly. Just as humans become experts
by going through the process of observation,
evaluation, and decision-making, cognitive systems
use similar processes to reason about the
information they read, and they can do this at
massive speed and scale. Unlike conventional
computing solutions, which can only handle neatly
organized structured data such as what is stored in
a database, cognitive computing solutions can
understand unstructured data, which is 80 percent
of data today. All of the information that is produced
primarily by humans for other humans to consume.
This includes everything from literature, articles,
research reports to blogs, posts, and tweets. While
structured data is governed by well-defined fields
that contain well-specified information, cognitive
systems rely on natural language, which is
governed by rules of grammar, context, and culture.
It is implicit, ambiguous, complex, and a challenge
to process. While all human language is difficult to
parse, certain idioms can be particularly
challenging. In English for instance, we can feel
blue because it's raining cats and dogs, while we're
filling in a form someone asked us to fill out.
Cognitive systems read and interpret text like a
person. They do this by breaking down a sentence
grammatically, relationally, and structurally,
discerning meaning from the semantics of the
written material. Cognitive systems understand
context. This is very different from simple speech
recognition, which is how a computer translates
human speech into a set of words. Cognitive
systems try to understand the real intent of the user
language, and use that understanding to draw
inferences through a broad array of linguistic
models and algorithms. Cognitive systems learn,
adapt, and keep getting smarter. They do this by
learning from their interactions with us, and from
their own successes and failures, just like humans
do.
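
As a small illustration of what breaking a sentence down grammatically, relationally,
and structurally can look like in practice, here is a minimal Python sketch using the
open-source spaCy library. This is an assumption for illustration only, not the
technology IBM's cognitive systems are built on.

import spacy

# Load spaCy's small English pipeline
# (pip install spacy, then: python -m spacy download en_core_web_sm).
nlp = spacy.load("en_core_web_sm")

doc = nlp("We can feel blue because it's raining cats and dogs.")

# Print each word with its part of speech and its grammatical relation to its head word.
for token in doc:
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} -> {token.head.text}")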
13 TERMINOLOGY AND RELATED CONCEPTS

Before we deep dive into how AI works,


and its various use cases and applications,
let's differentiate some of the closely related terms
and concepts of AI: artificial intelligence,
machine learning, deep learning, and neural
networks. These terms are sometimes used
interchangeably, but they do not refer to the same
thing. Artificial intelligence is a branch of
computer science dealing with a simulation of
intelligent behavior. AI systems will
typically demonstrate behaviours associated
with human intelligence such as planning,
learning, reasoning, problem-solving, knowledge
representation, perception, motion, and
manipulation, and to a lesser extent social
intelligence and creativity. Machine learning is a
subset of AI that uses computer algorithms to
analyze data and make intelligent decisions based
on what it has learned, without being explicitly
programmed. Machine learning algorithms are
trained with large sets of data and they learn from
examples. They do not follow rules-based
algorithms. Machine learning is what enables
machines to solve problems on their
own and make accurate predictions
using the provided data. Deep learning is a
specialized subset of Machine Learning that
uses layered neural networks to simulate human
decision-making. Deep learning algorithms can
label and categorize information and identify
patterns. It is what enables AI systems
to continuously learn on the job,
and improve the quality and accuracy of
results by determining whether decisions were
correct. Artificial neural networks often referred to
simply as neural networks take inspiration
from biological neural networks, although they
work quite a bit differently. A neural network in AI
is a collection of small computing units called
neurons that take incoming data and learn to make
decisions over time. Neural networks are often
layered deep and are the reason deep learning
algorithms become more efficient as the datasets
increase in volume, as opposed to other machine
learning algorithms that may plateau as data
increases. Now that you have a broad
understanding of the differences between some
key AI concepts, there is one more differentiation
that is important to understand, that between
artificial intelligence and data science. Data
science is the process and method for extracting
knowledge and insights from large volumes of
disparate data. It's an interdisciplinary field
involving mathematics, statistical analysis, data
visualization, machine learning, and more.
It's what makes it possible for us to appropriate
information, see patterns, find meaning from large
volumes of data, and use it to make decisions that
drive business. Data Science can use many of
the AI techniques to derive insight from data.
For example, it could use machine learning
algorithms and even deep learning models to
extract meaning and draw inferences from data.
There is some intersection between AI and data
science, but one is not a subset of the other.
Rather, data science is a broad term that
encompasses the entire data processing
methodology, while AI includes everything that
allows computers to learn how to solve problems
and make intelligent decisions. Both AI and Data
Science can involve the use of big data that is
significantly large volumes of data. In the next few
lessons, the terms machine learning, deep
learning, and neural networks will be discussed in
more detail.
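
To make the idea of a layered neural network a little more concrete, here is a minimal
Python sketch written with the Keras API in TensorFlow. The layer sizes, the four
inputs, and the single yes/no output are arbitrary illustrative choices, not values from
the course.

import tensorflow as tf

# A tiny feed-forward neural network: layers of small computing units ("neurons").
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),  # first hidden layer, 16 neurons
    tf.keras.layers.Dense(8, activation="relu"),                     # second hidden layer, 8 neurons
    tf.keras.layers.Dense(1, activation="sigmoid"),                  # output layer: one yes/no decision
])

# Configure training: how the network measures its error and adjusts its weights.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()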
14 MACHINE LEARNING

Machine Learning, a subset of AI,


uses computer algorithms to analyse data and
make intelligent decisions based on what it has learned.
Instead of following rules-based algorithms,
machine learning builds models to classify and make
predictions from data. Let's understand this by exploring
a problem we may be able to tackle with Machine
Learning. What if we want to determine whether a heart
can fail, is this something we can solve with Machine
Learning? The answer is, Yes. Let's say we are given
data such as beats per minute, body mass index, age,
sex, and the result whether the heart has failed or not.
With Machine Learning given this dataset, we are able
to learn and create a model that given inputs, will predict
results. So what is the difference between this and using
statistical analysis to create an algorithm? An algorithm
is a mathematical technique. With traditional
programming, we take data and rules, and use these to
develop an algorithm that will give us an answer.
In the previous example, if we were using a traditional
algorithm, we would take the data such as beats per
minute and BMI, and use this data to create an
algorithm that will determine whether the heart will fail or
not. Essentially, it would be an if-then-else statement.
When we submit inputs, we get answers based on
the algorithm we determined, and this algorithm
will not change. Machine Learning, on the other hand,
takes data and answers and creates the algorithm.
Instead of getting answers at the end, we already have
the answers; what we get is the set of rules that
makes up the machine learning model.
The model determines the rules, and the if-then-else
logic to apply when it receives inputs. Essentially, what the
model does is determine what the parameters are in a
traditional algorithm, and instead of deciding arbitrarily
that beats per minute plus BMI equals a certain result,
we use the model to determine what the logic will be.
This model, unlike a traditional algorithm, can be
continuously trained and be used in the future to predict
values. Machine Learning relies on defining behavioural
rules by examining and comparing large datasets
to find common patterns. For instance, we can provide
a machine learning program with a large volume of
pictures of birds and train the model to return
the label "bird" whenever it is provided a picture of a
bird. We can also create a label for "cat" and provide
pictures of cats to train on. When the machine learning model is
shown a picture of a cat or a bird, it will label the picture
with some level of confidence. This type of Machine
Learning is called Supervised Learning, where an
algorithm is trained on human-labeled data. The more
samples you provide a supervised learning algorithm,
the more precise it becomes in classifying new data.
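
For readers who want to see what this looks like in practice, here is a minimal sketch of supervised learning on the earlier heart-failure example (assuming Python with scikit-learn; the patient data below is made up for illustration):

from sklearn.linear_model import LogisticRegression

# Each row: [beats per minute, body mass index, age, sex (0 = female, 1 = male)]
X = [[72, 22.5, 45, 0], [110, 31.0, 63, 1], [95, 28.4, 58, 1],
     [60, 20.1, 34, 0], [130, 35.2, 70, 1], [82, 24.9, 50, 0]]
y = [0, 1, 1, 0, 1, 0]   # human-provided labels: 1 = heart failed, 0 = did not

model = LogisticRegression(max_iter=1000).fit(X, y)   # the algorithm learns the rules
print(model.predict([[105, 30.0, 61, 1]]))            # prediction for a new patient
print(model.predict_proba([[105, 30.0, 61, 1]]))      # with a level of confidence

The labels y are what make this supervised: the algorithm is told the right answer for every training example.
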
Unsupervised Learning, another type of machine
learning, relies on giving the algorithm
unlabelled data and letting it find patterns by itself.
You provide the input but not labels, and let
the machine infer qualities on its own: the
algorithm ingests unlabelled data, draws inferences, and
finds patterns. This type of learning can be useful for
clustering data, where data is grouped according to how
similar it is to its neighbours and dissimilar to everything
else. Once the data is clustered, different techniques
can be used to explore that data and look for patterns.
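
As a quick illustration of clustering (again assuming Python with scikit-learn; the numbers are invented), notice that no labels are provided and the algorithm groups the points purely by similarity:

from sklearn.cluster import KMeans

# Hypothetical customer records: [age, purchases per month]
X = [[22, 3], [25, 4], [27, 2],      # one natural grouping
     [61, 14], [58, 12], [65, 15]]   # another natural grouping
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)                # the cluster each customer was assigned to
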
For instance, you provide a machine learning algorithm
with a constant stream of network traffic and let it
independently learn the baseline, normal network
activity, as well as the outlier and possibly malicious
behaviour happening on the network. The third type of
machine learning algorithm, Reinforcement Learning,
relies on providing a machine learning algorithm
with a set of rules and constraints, and letting it learn
how to achieve its goals. You define the state,
the desired goal, allowed actions, and constraints.
The algorithm figures out how to achieve the goal by
trying different combinations of allowed actions, and is
rewarded or punished depending on whether the
decision was a good one. The algorithm tries its best to
maximize its rewards within the constraints provided.
You could use Reinforcement Learning to teach
a machine to play chess or navigate an obstacle course.
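
Here is a minimal sketch of the reinforcement-learning idea in plain Python; the "obstacle course" is just a hypothetical six-cell corridor the agent must learn to walk to the right:

import random

n_states, n_actions = 6, 2                       # cells 0..5; actions: 0 = left, 1 = right
q = [[0.0] * n_actions for _ in range(n_states)] # the agent's value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1            # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise exploit the best known action.
        if random.random() < epsilon:
            action = random.randrange(n_actions)
        else:
            action = max(range(n_actions), key=lambda a: q[state][a])
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 10.0 if next_state == n_states - 1 else -1.0   # reward the goal, penalize each step
        # Move the estimate toward the reward plus the discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(["left" if q[s][0] > q[s][1] else "right" for s in range(n_states - 1)])

After training, the learned policy chooses "right" in every cell, because that is what maximizes the reward within the rules of this tiny environment.
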
A machine learning model is an algorithm used
to find patterns in the data without the programmer
having to explicitly program these patterns.
15 MACHINE LEARNING TECHNIQUES AND TRAINING

Machine Learning is a broad field and we can split
it up into three different categories, Supervised
Learning, Unsupervised Learning, and
Reinforcement Learning. There are many different
tasks we can solve with these. Supervised
Learning refers to when we have class labels in
the dataset and we use these to build the
classification model. What this means is when we
receive data, it has labels that say what the data
represents. In a previous example, we had a table
with labels such as age or sex. With Unsupervised
Learning, we don't have class labels and we must
discover class labels from unstructured data. This
could involve things such as deep learning looking
at pictures to train models. Things like this are
typically done with something called clustering.
Reinforcement Learning is a different subset, and
what this does is it uses a reward function to
penalize bad actions or reward good actions.
Breaking down Supervised Learning, we can split
it up into three categories, Regression,
Classification and Neural Networks. Regression
models are built by looking at the relationships
between features x and the result y where y is a
continuous variable. Essentially, Regression
estimates continuous values. Neural Networks
refer to structures that imitate the structure of the
human brain. Classification, on the other hand,
focuses on discrete values: it
assigns discrete class labels y based on many
input features x. In a previous example, given a
set of features x, like beats per minute, body mass
index, age and sex, the algorithm classifies the
output y as two categories, True or False,
predicting whether the heart will fail or not. In other
Classification models, we can classify results into
more than two categories. For example, predicting
whether a recipe is for an Indian, Chinese,
Japanese, or Thai dish. Some forms of
classification include decision trees, support vector
machines, logistic regression, and random forests.
With Classification, we can extract features from
the data. The features in this example would be
beats per minute or age. Features are distinctive
properties of input patterns that help in
determining the output categories or classes of
output. Each column is a feature and each row is a
data point. Classification is the process of
predicting the class of given data points. Our
classifier uses some training data to understand
how given input variables relate to that class. What
exactly do we mean by training? Training refers to
using a learning algorithm to determine and
develop the parameters of your model. While there
are many algorithms to do this, in layman's terms,
if you're training a model to predict whether the
heart will fail or not, that is True or False values,
you will be showing the algorithm some real-life
data labeled True, then showing the algorithm
again, some data labeled False, and you will be
repeating this process with data having True or
False values, that is whether the heart actually
failed or not. The algorithm modifies its internal
values until it has learned to tell data that
indicates heart failure (True) from data that does not
(False). With Machine Learning, we typically take a
dataset and split it into three sets, Training,
Validation and Test sets. The Training subset is
the data used to train the algorithm. The Validation
subset is used to validate our results and fine-tune
the algorithm's parameters. The Testing data is the
data the model has never seen before and is used to
evaluate how good our model is. We can then
indicate how good the model is using terms like
accuracy, precision, and recall.
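
A minimal sketch of this Training/Validation/Test split (assuming Python with scikit-learn; its built-in breast-cancer dataset stands in for the heart-failure data):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)               # labeled medical records
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)              # train on the training set
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))   # tune the model here
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))       # final, unseen-data check
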
16 DEEP LEARNING

While Machine Learning is a subset of Artificial
Intelligence, Deep Learning is a specialized subset of
Machine Learning. Deep Learning layers algorithms to
create a Neural Network, an artificial replication of the
structure and functionality of the brain,
enabling AI systems to continuously learn on the job and
improve the quality and accuracy of results.
This is what enables these systems to learn from
unstructured data such as photos, videos, and audio
files. Deep Learning, for example,
enables natural language understanding capabilities of
AI systems, and allows them to work out the context and
intent of what is being conveyed. Deep learning
algorithms do not directly map input to output. Instead,
they rely on several layers of processing units.
Each layer passes its output to the next layer, which
processes it and passes it on to the next. These many layers
are why it’s called deep learning. When creating deep
learning algorithms, developers and engineers configure
the number of layers and the type of functions that
connect the outputs of each layer to the inputs of the
next. Then they train the model by providing it with lots
of annotated examples. For instance, you give a deep
learning algorithm thousands of images and labels that
correspond to the content of each image. The algorithm
will run those examples through its layered neural
network, and adjust the weights of the variables in each
layer of the neural network to be able to detect the
common patterns that define the images with similar
labels. Deep Learning fixes one of the major problems
present in older generations of learning algorithms.
While the efficiency and performance of machine
learning algorithms plateau as the datasets grow, deep
learning algorithms continue to improve as they are fed
more data. Deep Learning has proven to be very
efficient at various tasks, including image captioning,
voice recognition and transcription, facial recognition,
medical imaging, and language translation. Deep
Learning is also one of the main components of
driverless cars.
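
To make the idea of configuring layers and the functions that connect them concrete, here is a minimal sketch of a small layered network (assuming Python with PyTorch; the images and labels are random stand-ins for annotated examples):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),   # first layer and its connecting function
    nn.Linear(128, 64), nn.ReLU(),        # second layer
    nn.Linear(64, 10),                    # output layer: one score per label
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 28)       # a batch of (made-up) 28x28 images
labels = torch.randint(0, 10, (32,))      # their (made-up) annotations

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) # how wrong the current weights are
    loss.backward()                       # compute the weight adjustments
    optimizer.step()                      # apply them
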
17 NEURAL NETWORKS

An artificial neural network is a collection of smaller units
called neurons, which are computing units modeled on the
way the human brain processes information. Artificial neural
networks borrow some ideas from the biological
neural network of the brain, in order to approximate some of
its processing results. These units or neurons take incoming
data like the biological neural networks
and learn to make decisions over time. Neural networks learn
through a process called backpropagation. Backpropagation
uses a set of training data that match known inputs to desired
outputs. First, the inputs are plugged into the network and
outputs are determined. Then, an error function determines
how far the given output is from the desired output. Finally,
adjustments are made in order to reduce errors. A collection
of neurons is called a layer, and a layer takes in an input and
provides an output. Any neural network will have one input
layer and one output layer. It will also have one or more
hidden layers which simulate the types of activity that goes on
in the human brain. Hidden layers take in a set of weighted
inputs and produce an output through an activation function.
A neural network having more than one hidden layer is
referred to as a deep neural network. Perceptrons are the
simplest and oldest types of neural networks.
They are single-layered neural networks consisting of input
nodes connected directly to an output node. Input layers
forward the input values to the next layer, by means of
multiplying by a weight and summing the results. Hidden
layers receive input from other nodes and forward their output
to other nodes. Hidden and output nodes have a property
called bias, which is a special type of weight that applies to a
node after the other inputs are considered. Finally, an
activation function determines how a node responds to its
inputs. The function is run against
the sum of the inputs and bias, and then the result is
forwarded as an output. Activation functions can take
different forms, and choosing them is a critical component
to the success of a neural network. Convolutional neural
networks or CNNs are multilayer neural networks that take
inspiration from the animal visual cortex. CNNs are useful in
applications such as image processing, video recognition, and
natural language processing. A convolution is a mathematical
operation, where a function is applied to another function and
the result is a mixture of the two functions. Convolutions are
good at detecting simple structures in an image, and putting
those simple features together to construct more complex
features. In a convolutional network, this process occurs over
a series of layers, each of which conducts a convolution on the
output of the previous layer. CNNs are adept at building
complex features from less complex ones. Recurrent neural
networks or RNNs, are recurrent because they perform the
same task for every element of a sequence, with prior outputs
feeding subsequent stage inputs. In a general neural network,
an input is processed through a number of layers and an
output is produced with an assumption that the two successive
inputs are independent of each other, but that may not hold
true in certain scenarios. For example, when we need to
consider the context in which a word has been spoken, in such
scenarios, dependence on previous observations has to be
considered to produce the output. RNNs can make use of
information in long sequences, each layer of the network
representing the observation at a certain time.
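
Here is a minimal sketch of a single neuron's forward pass as described above (assuming Python with NumPy; the numbers are arbitrary):

import numpy as np

def sigmoid(z):                        # one common choice of activation function
    return 1.0 / (1.0 + np.exp(-z))

inputs = np.array([0.5, 0.8, 0.2])     # incoming data
weights = np.array([0.4, -0.6, 0.9])   # one weight per input
bias = 0.1                             # applied after the weighted inputs are summed

output = sigmoid(np.dot(weights, inputs) + bias)
print(output)                           # this value is forwarded to the next layer

Backpropagation, in turn, works backwards from the error at the output layer to decide how each of these weights and biases should be adjusted.
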
18 HANDS-ON LAB: PAINT WITH AI
IBM Research creates innovative tools and resources to help unleash the power of AI.

Objective for Exercise:

 Learn about a new kind of neural network, called a generative adversarial network (GAN), used to create
complex outputs, like photorealistic images.
 You will use a GAN to enhance existing images and create your own unique, custom image.

18.1.1 Follow these steps to work with a GAN:


1. Access the demo here: This landscape image was created by AI
2. In the Co-create with a neural network section, under Choose a generated image,
select one of the existing images. For example, choose the 11th image.
3. From the Pick object type list, select the type of object you want to add. For example,
click on Tree to select it.
Figure 1 - Original generated image

4. Move the cursor onto the image. Click and, keeping the mouse button pressed, drag your
cursor over an area of the existing image where you want to add the object. For example, drag
a line in the red area highlighted in the red rectangle below to add a tree there.
5. Choose another object type and add it to the image.
Figure 2 - Trees and grass added:

6. Experiment with locations: Can you place a door in the sky? Can you place grass so
that it enhances the appearance of existing grass?
7. Use the Undo and Erase functions to remove objects.
8. [Optional] Click Download to save your work.
For more information on the capabilities of GANs, follow these steps:
1. In the What’s happening in this demo? section, click What does a GAN
“understand” and read the text.
2. What does the text say about placement of objects? Does this explain the results you
saw earlier?
3. Click Painting with neurons, not pixels and read the text. How does the GAN help
you manipulate images?
4. Click New ways to work with AI and read the text. What are some of the use cases
for GANs?
Use the Discussion Forum to talk about these questions with your fellow students.

18.2 AUTHOR(S)
Rav Ahuja
18.3 CHANGELOG

Date        Version  Changed by  Change Description
2020-08-27  2.0      Anamika     Migrated Lab to Markdown and added to course repo in GitLab
2022-11-01  2.1      Srishti     Updated demo link
19 KEY FIELDS OF APPLICATION IN AI

So can you talk about the different areas or categories of
artificial intelligence? Now, there are lots of different fields
that AI works in. But if I were to, on a very, very high level,
group some of the major areas where artificial intelligence is
applied, I'd like to start off with natural language. Because
natural language is, I'd say, the most complex data for
machine learning to work with. If you see all sorts of data,
whether that be a sequence to genome, whether that be audio,
whether that be images. There's some sort of discernible
pattern. There's some sort of yes, this is what a car sounds like
or yes, this is what a human voice sounds like. But natural
language is, fundamentally, a very human task. It's a very human
data source. We as humans invented it for humans to
understand. If I were to, for example,
give you a book title, there's actually a very famous
book whose title is "There are two mistakes in the
the title of this book." Now, there's actually only one mistake,
the two "the"s, which the human brain often doesn't notice. So what's the
second mistake? That there was only one mistake.
So this is a sort of natural language complexity that's involved
here. Humans we don't view natural language literally. We
view it conceptually. If I were to write a three instead of an E,
you would still understand it, because we don't
mean the three in a literal sense. We mean it in a symbolic
sense to represent the concept of an E, and you can contextualize
that three to figure out that, "Yeah,
it means an E" and not an actual three. These are things that
computers aren't capable of. So natural language is the number
one field that I'm most interested in when it comes to machine
learning. Second, I'd say the most popular would be visual.
Visual data understanding, computer vision. Because it
enables us to do so many things. As humans, our primary
sense is vision.
In fact, a vast majority of your brain's processing power at any
given moment, goes to understanding what it is that you're
seeing. Whether it be a person's face, or whether it be a
computer or some texts, or anything of that sort.
Third, I would say audio-based data. So text-to-speech,
speech-to-text, these are very, very complex. The reason it's
complex is because it combines a lot of challenges into one.
First of all, you've got to support many languages.
You can't just support English and call it a day. You've got to
support other languages. You've got to support other
demographics. Another challenge is that even within
languages, there is an absolutely infinite number of ways that
any human could represent a language. Everyone's going to
have a different accent. Everyone's going to have
a different way of pronouncing certain words.
There's no standardized way such that every human will
pronounce "ice cube" in exactly the same way. That doesn't exist. If
you take a look at another challenge,
it's that audio data is fundamentally very very difficult to work
with. Because the thing is, audio data exists in the natural
world. What is audio? It's vibrations of air molecules, and
vibrations of air molecules are fast.
Audio is recorded at, say, 44 kilohertz.
That's a lot of data, 44,000 data points every single second.
There are usually only 44,000 data points in an individual
low-resolution image. So of course, there are lots of
challenges to work around when it comes to audio.
But companies like IBM, Google, Microsoft have actually
worked around these challenges and they're working
towards creating different services to make it easier for
developers. So again, on a very very high level, there's natural
language understanding, there's computer vision,
there's audio data and, of course, there's the traditional set
of tabular data understanding, which is essentially structured
data understanding.
20 NATURAL LANGUAGE PROCESSING, SPEECH,
COMPUTER VISION

Some of the most common application areas of AI include
natural language processing, speech, and computer vision.
Now, let's look at each of these in turn. Humans have the most
advanced method of communication which is known as
natural language. While humans can use computers to send
voice and text messages to each other, computers do not
innately know how to process natural language.
Natural language processing is a subset of artificial
intelligence that enables computers to understand the meaning
of human language. Natural language processing uses
machine learning and deep learning algorithms to discern a
word's semantic meaning. It does this by deconstructing
sentences grammatically, relationally, and structurally and
understanding the context of use. For instance, based on the
context of a conversation, NLP can determine if the word
"Cloud" is a reference to cloud
computing or the mass of condensed water vapor floating in
the sky. NLP systems might also be able to understand intent
and emotion, such as whether you're asking a question out of
frustration, confusion, or irritation. To understand the real
intent of the user's language, NLP systems draw inferences
through a broad array of linguistic models and algorithms.
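
For a quick taste of NLP in code, here is a minimal sketch (assuming Python with the Hugging Face transformers library, which downloads a pretrained model the first time it runs; this library is not part of the course):

from transformers import pipeline

classifier = pipeline("sentiment-analysis")    # a pretrained model for emotional tone
print(classifier("I have asked for this refund three times already."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99}] -- the frustration reads as negative
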
Natural language processing is broken down into many
subcategories related to audio and visual tasks. For computers
to communicate in natural language, they need to be able to
convert speech into text, so communication is more natural
and easy to process. They also need to be able to convert text-
to-speech, so users can interact with computers without the
requirement to stare at a screen. The older iterations of
speech-to-text technology required programmers to go through
the tedious process of discovering and codifying the rules of
classifying and converting voice samples into text. With
neural networks, instead of coding the rules, you provide
voice samples and their corresponding text. The neural
network finds the common patterns among the pronunciation
of words and then learns to map new voice recordings to their
corresponding
texts. These advances in speech-to-text technology are the
reason we have real-time transcription. Google uses AI-
powered speech-to-text in its Call Screen feature to handle
scam calls and show you the text of the person speaking in
real time. YouTube uses this to provide automatic closed
captioning. The flip side of speech-to-text is text-to-speech
also known as speech synthesis. In the past, the creation of a
voice model required hundreds of hours of coding. Now, with
the help of neural networks, synthesizing human voice has
become possible. First, a neural network ingests numerous
samples of a person's voice until it can tell whether a new
voice sample belongs to the same person. Then, a second
neural network generates audio data and runs it through the
first network to see if it validates it as belonging to the
subject. If it does not, the generator corrects its sample and
reruns it through the classifier. The two networks repeat the
process until they generate samples that sound natural.
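
The two-network loop described above is a generative adversarial setup. Here is a minimal sketch of that loop (assuming Python with PyTorch; made-up one-dimensional numbers stand in for voice samples):

import torch
import torch.nn as nn

torch.manual_seed(0)
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" samples, standing in for a person's voice
    fake = generator(torch.randn(64, 8))         # the generator's attempts

    # 1) Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to produce samples the discriminator accepts as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

The two networks keep correcting each other, exactly as in the voice-synthesis description: the generator improves until its samples pass the discriminator's test.
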
Companies use AI-powered voice synthesis to enhance
customer experience and give their brands their unique voice.
In the medical field, this technology is helping ALS patients
regain their true voice instead of using a computerized voice.
The field of computer vision focuses on replicating parts of
the complexity of the human visual system, and enabling
computers to identify and process objects in images and
videos, in the same way humans do.
Computer vision is one of the technologies that enables
the digital world to interact with the physical world.
The field of computer vision has taken great leaps in recent
years and surpasses humans in tasks related to detecting and
labelling objects, thanks to advances in deep learning and
neural networks. This technology enables self-driving cars to
make sense of their surroundings.
It plays a vital role in facial recognition applications allowing
computers to match images of people's faces to their
identities. It also plays a crucial role in augmented and mixed
reality, the technology that allows computing devices such as
smartphones, tablets, and smart glasses to overlay and embed
virtual objects on real-world imagery.
Online photo libraries like Google Photos use computer
vision to detect objects and classify images by the type of
content they contain.
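
As an illustration of computer vision with a pretrained model, here is a minimal sketch (assuming Python with PyTorch and torchvision; "photo.jpg" is a placeholder for any image on disk):

import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()                       # resize and normalize the image

batch = preprocess(Image.open("photo.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top = probs.topk(3)
for p, idx in zip(top.values, top.indices):
    print(weights.meta["categories"][int(idx)], float(p))   # label and confidence
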
21 SELF-DRIVING CARS

Can you tell us a little bit about the work you're doing with
self-driving cars.
>> I've been working on self-driving cars for the last few
years. It's a domain that's exploded, obviously, in interest
since the early competitions back in 2005. And what
we've been working on really is putting together our own self-
driving vehicle that was able to drive on public roads in the
Region of Waterloo last August. In the self-driving cars area,
one of our key research domains is in 3D object detection. So
this remains a challenging task for algorithms to perform
automatically. Trying to identify every vehicle, every
pedestrian, every sign that's in a driving environment. So that
the vehicle can make the correct decisions about how it should
move and interact with those vehicles. And so we work
extensively on how we take in laser data and vision data and
radar data. And then fuse that into a complete view of the
world around the vehicle.
>> When we think of computer vision,
we usually think immediately of self-driving cars, and why is
that? Well, it's because it's hard to pay attention when driving
on the road, right? You can't both be looking at your
smartphone and also be looking at the road at the same time.
Of course, it's sometimes hard to predict what people are
going to be doing on the street, as well. When they're crossing
the street with their bike or skateboard, or whatnot. So it's
great when we have some sort of camera or sensor that can
help us detect these things and prevent accidents before they
could potentially occur.
And that's one of the limitations of human vision, is attention,
is visual attention. So I could be looking at you, Rav, but
behind you could be this delicious slice of pizza.
But I can only pay attention to one or just some limited
number of things at a time. But I can't attend to everything in
my visual field all at once at the same time like a camera
could. Or like how computer vision could potentially do so.
And so that's one of the great things that cameras and
computer vision is good for. Helping us pay attention to the
whole world around us without having us to look around and
make sure that we're paying attention to everything. And that's
just in self-driving cars, so I think we all kind of have a good
sense of how AI and computer vision shapes the driving and
transportation industry.
>> Well, self-driving cars are certainly the future.
And there's tremendous interest right now in self-driving
vehicles. In part because of their potential to really change the
way our society works and operates. I'm very excited about
being able to get into a self-driving car and
read or sit on the phone on the way to work. Instead of having
to pilot through Toronto traffic. So I think they represent a
really exciting step forward, but there's still lots to do. We still
have lots of interesting challenges to solve in the self-driving
space. Before we have really robust and safe cars that are able
to drive themselves 100% of the time autonomously on our
roads.
>> We've just launched our own self-driving car
specialization on Coursera. And we'd be really happy to see
students in this specialization also come and
learn more about self-driving. It's a wonderful starting point, it
gives you a really nice perspective on the different
components of the self-driving software stack and
how it actually works. So everywhere from how it perceives
the environment, how it makes decisions and
plans its way through that environment. To how it controls the
vehicle and makes sure it executes those plans safely. So
you'll get a nice broad sweep of all of those things from that
specialization. And from there you then want to become really
good and really deep in one particular area, if you want to
work in this domain.
Because again, there's so many layers behind this.
There's so much foundational knowledge you need to start
contributing that you can't go wrong. If you find something
interesting, just go after it. And I am sure there'll be
companies that'll need you for this.

22 HANDS-ON LAB: COMPUTER VISION


Objective for Exercise:

 Learn about IBM’s Adversarial Robustness Toolbox, and use it to mitigate
simulated attacks by hackers.

23 COMPUTER VISION
IBM Research creates innovative tools and resources to help unleash the power of AI.

Follow these steps to explore the demo:


1. Access the demo here: Your AI model might be telling you this is not a cat.
2. In the Try it out section, click the image of the Siamese cat.
Figure 1 - Select an image

3. In the Simulate Attack section, ensure that no attack is selected, and that all the
sliders are to the far left, indicating that all attacks and mitigation strategies are
turned off.
What does Watson identify the image as, and at what confidence level? E.g. Siamese cat
92%

4. In the Simulate Attack section, under Adversarial noise type, select Fast
Gradient Method. The strength slider will move to low.
Figure 2 - Select an attack and level

What does Watson identify the image as now, and at what confidence level?

5. In the Defend attack section, move the Gaussian Noise slider to low.
Figure 3 - Mitigate the attack
6. What does Watson identify the image as now, and at what confidence level? Did
the image recognition improve?
Figure 4 - View the results

Note that you can use the slider on the image to see the original and modified images.

7. Move the Gaussian Noise slider to medium, and then to high. For each level,
note what Watson identifies the image as, and at what confidence level. Did the
image recognition improve?
8. Move the Gaussian Noise slider to None.
9. In the Defend attack section, move the Spatial Smoothing slider to low. What
does Watson identify the image as now, and at what confidence level? Did the
image recognition improve?
10. Move the Spatial Smoothing slider to medium, and then to high. For each level,
note what Watson identifies the image as, and at what confidence level. Did the
image recognition improve?
11. Move the Spatial Smoothing slider to None.
12. In the Defend attack section, move the Feature Squeezing slider to low. What
does Watson identify the image as now, and at what confidence level? Did the
image recognition improve?
13. Move the Feature Squeezing slider to medium, and then to high. For each level,
note what Watson identifies the image as, and at what confidence level. Did the
image recognition improve?
14. Which of the three defenses would you use to defend against a Fast Gradient
Attack?

Optional:
If you have time, use the same techniques to explore the other methods of attack
(Projected Gradient Descent and C&W Attack) and evaluate which method of defense
works best for each. If you want, try a different image.

Use the Discussion Forum to talk about the attacks and mitigation strategies with your
fellow students.
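
For context on what the Fast Gradient Method in step 4 is doing, here is a minimal from-scratch sketch (assuming Python with PyTorch; this is not the Adversarial Robustness Toolbox API, and the random tensor stands in for the preprocessed cat image): each pixel is nudged slightly in the direction that most increases the model's error.

import torch
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for the cat image
label = torch.tensor([284])                              # 284 = "Siamese cat" in ImageNet (illustrative)

loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()                                          # gradient of the error for every pixel

epsilon = 0.02                                           # plays the role of the attack strength slider
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("before:", model(image).argmax(dim=1).item())
print("after: ", model(adversarial).argmax(dim=1).item())

Defenses like Gaussian noise or spatial smoothing try to wash out this carefully crafted perturbation before the image reaches the classifier.
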

23.1 CHANGELOG
Date        Version  Changed by  Change Description
2020-08-27  2.0      Anamika     Migrated Lab to Markdown and added to course repo in GitLab

24 MODULE SUMMARY
24.1.1 In this lesson, you have learned about cognitive
computing:
Cognitive computing systems differ from conventional computing systems
in that they can:

 Read and interpret unstructured data, understanding not just the
meaning of words but also the intent and context in which they are
used.
 Reason about problems in a way that humans reason and make
decisions.
 Learn over time from their interactions with humans and keep getting
smarter.

About Machine Learning, Deep Learning and Neural Networks

 Machine Learning, a subset of AI, uses computer algorithms to
analyze data and make intelligent decisions based on what it has
learned. The three main categories of machine learning algorithms
include Supervised Learning, Unsupervised Learning, and
Reinforcement learning.
 Deep Learning, a specialized subset of Machine Learning, layers
algorithms to create a neural network enabling AI systems to learn
from unstructured data and continue learning on the job.
 Neural Networks, a collection of computing units modeled on
biological neurons, take incoming data and learn to make decisions
over time. The different types of neural networks include
Perceptrons, Convolutional Neural Networks or CNNs, and Recurrent
Neural Networks or RNNs.
 Supervised Learning is when we have class labels in the data set
and use these to build the classification model.
 Supervised Learning is split into three categories – Regression,
Classification, and Neural Networks.
 Machine learning algorithms are trained using data sets split into
training data, validation data, and test data.

And about AI application areas, including:

 Natural Language Processing (NLP) is a subset of artificial
intelligence that enables computers to understand the meaning of
human language, including the intent and context of use.
 Speech-to-text enables machines to convert speech to text by
identifying common patterns in the different pronunciations of a word,
mapping new voice samples to corresponding words.
 Speech Synthesis enables machines to create natural sounding
voice models, including the voice of particular individuals.
 Computer Vision enables machines to identify and differentiate
objects in images the same way humans do.
 Self-driving cars are an application of AI that can utilize NLP, speech,
and most importantly, computer vision.
24.1.2 To learn more about the topics in this module, read these articles:
 How to get started with cognitive technology
 Models for Machine Learning
 Applications of Deep Learning
 A Neural Networks deep dive
 A beginner's guide to Natural Language Processing
25 MODULE INTRODUCTION AND LEARNING OBJECTIVES
Module Introduction

AI is everywhere, transforming the way we live, work, and interact — that's
why it's so important to build and use it in line with ethical expectations.
This week, you will learn about issues and concerns surrounding AI, as well
as how AI ethics helps practitioners build and use AI responsibly. This
information will help you understand AI's potential impacts on society so
that you can have an informed discussion about its risks and benefits.

Learning Objectives

 Describe current issues and concerns related to AI


 Define the principles and pillars that form a framework for AI ethics
 Explain how bias can arise in AI and how it can be mitigated
 Explain how AI ethics is enforced through governance and
regulations.

26 EXPLORING TODAY'S AI CONCERNS


Welcome to exploring today's AI concerns. In this video, you will learn
about some of today's hot topics in AI. First, you will hear about why
trustworthy AI is the hot topic in AI. Then, you will hear about how AI is
used in facial recognition technologies, in hiring, in marketing on social
media, and in healthcare. People frequently ask me what the current hot
topics in AI are, and I will tell you that whatever I answer today is likely to be
different next week or even by tomorrow. The world of AI is extremely
dynamic, which is a good thing. It's an emerging technology with an
amazing amount of possibilities and the potential to solve so many
problems, so much faster than we previously thought possible. Now,
as we've seen in some cases it can have harmful consequences. And so, I
would say that the hot topic in AI is how do we do this responsibly? And
IBM has come up with five pillars to address this issue, kind of summarizing
the idea of responsible AI: explainability, transparency, robustness,
privacy and fairness. And we can go into those topics in more depth, but I
want to emphasize two things here and one is that this is not a one and
done sport. If you're going to use AI, if we're going to put it into use in
society, this is not something you just address at the beginning or at the
end. This is something you have to address throughout the entire lifecycle
of AI. These are questions you have to ask whether you're at the drawing
board, whether you're designing the AI, you're training the AI or you have
put it into use or you are the end user who's interacting with the AI. And so,
those five pillars are things you want to constantly think about throughout the
entire lifecycle of AI. And then second and I think even more importantly is
this is a team sport, we all need to be aware of both the potential good and
the potential harm that comes from AI. And encourage everybody to ask
questions. Make room for people to be curious about how AI works and
what it's doing. And with that I think we can really use it to address good
problems and have some great results and mitigate any of the potential
harm. So stay curious. In designing solutions around Artificial Intelligence,
call it AI, facial recognition has become a prominent use case. There are
really three typical examples of categories of models and algorithms that
are being designed. Facial detection, that is, simply detecting whether it is a
human face versus, say, a dog or a cat. This type of facial recognition
happens without uniquely identifying who that face might belong to. In facial
authentication, you might use this type of facial recognition to open up your
iPhone or your Android device. In this case, we provide a one-to-one
authentication by comparing the features of a face image with a
previously stored single image, meaning that you are really only
comparing the image with the distinct image of the owner of the iPhone or
Android device. Facial matching, in this case, we compare the image with a
database of other images or photos. This is different from the previous cases in
that the model is trying to determine a facial match of an individual against
a database of images or photos belonging to other humans.
There are many different examples of facial recognition. Many of them you
have no doubt experienced in your day to day activity. Some have proven
to be helpful, while others have shown to be not so helpful, and then there
are others that have proven to be directly criminal in nature, where certain
demographics of people have been harmed because of the use of these
facial recognition systems. We've seen facial recognition solutions in
AI systems provide significant value in scenarios like navigating through an
airport or going through security or a security line, or even in previous
examples like the one we talked about earlier, where facial
recognition is used to unlock your iPhone, or possibly to unlock your home or
the door of your automobile. These are all helpful uses of facial
recognition technologies but there are also some clear examples and use
cases that must be off-limits. These might include identifying a person in
a crowd without the explicit permission of that person, or doing mass
surveillance on a single person or a group of people. These types of uses of
technology raise important privacy, civil rights, and human rights
concerns. When used the wrong way by the wrong people, facial
recognition technologies can no doubt be used to suppress dissent,
infringe upon the rights of minorities, or simply erase your basic
expectations of privacy. AI is being increasingly introduced into
each stage of workforce progression: hiring, onboarding, career
progression including promotions and rewards, handling attrition, etc.
Let's talk about hiring. Consider an organization that receives thousands of job
applications. People applying for all kinds of jobs, front office, back office,
seasonal, permanent. Instead of having large teams of people sit and sift
through all these applications, AI helps you rank and prioritize applicants
against targeted job openings, presenting a list of the top candidates to the
hiring managers. AI solutions can process text in resumes and combine
that with other structured data to help in decision making. Now, we need to
be careful to have guardrails in place. We need to ensure the use of AI in
hiring is not biased across sensitive attributes like age, gender, ethnicity,
and the like, even when those attributes are not directly used by the AI but
may be creeping in from proxy attributes like zip code or type of
job previously held. One of the hot topics in AI today is its application in
marketing on social media. It has completely transformed how brands
interact with their audiences on social media platforms like TikTok,
LinkedIn, Twitter, Instagram, Facebook. AI today can create ads for you, it
can create social media posts for you. It can help you target those ads
appropriately. It can use sentiment analysis to identify new audiences for
you. All of this drives incredible results for a marketeer. It improves the
effectiveness of the marketing campaigns while dramatically reducing the
cost of running those campaigns. Now, the same techniques and
capabilities that AI produces for doing marketing on social media platforms
also raises some ethical questions. The marketing is successful because of
all of the data the social media platforms collect from their users.
Ostensibly, this data is collected to deliver more personalized experiences
for end users. It's not always explicit what data is being collected and if you
are providing your consent for them to use that data. Now, the same
techniques that are so effective for marketing campaigns for brands can
also be applied for generating misinformation and conspiracy theories,
whether political or scientific, and this has horrendous
implications for our communities at large. This is why it is absolutely critical
that all enterprises adhere to some clear principles around transparency,
explainability, trust, and privacy in terms of how they use AI or build AI into
their solutions and platforms. The use of AI is increasing across all
healthcare segments: healthcare providers, payers, life sciences, etc. Payer
organizations are using AI and machine learning solutions that tap into
claims data, often combining it with other data sets like social determinants
of health. A top use case is disease prediction for coordinating care. For
example, predicting who in the member population is likely to have an
adverse condition, maybe like an ER visit in the next three months and then
providing the right forms of intervention and prevention. Equitable care
becomes very important in this context. We need to make sure the AI is not
biased across sensitive attributes like age, gender, ethnicity, etc. Across all
of these, of course, is conversational AI, with virtual agents as well as
systems that help humans better serve the member population; that has
become table stakes. Across all of these use cases of AI in healthcare, we
see a few common things: being able to unlock insights from the rich sets
of data the organization owns, improving the member or patient
experience, and having guardrails in place to ensure AI is trustworthy.