AI For Everyone
JULY 3, 2023 BY MIKE
AI for Everyone: Master the Basics
In this course you will learn what Artificial Intelligence (AI) is, explore use
cases and applications of AI, and understand AI concepts and terms like
machine learning, deep learning, and neural networks. You will be exposed
to various issues and concerns surrounding AI, such as ethics, bias, and
jobs, and get advice from experts about learning and starting a career in
AI. You will also demonstrate AI in action with a mini project.
Learning Objectives
Understand what AI is, its applications and use cases, and how it is
transforming our lives
Explain terms like Machine Learning, Deep Learning and Neural
Networks
Describe several issues and ethical concerns surrounding AI
Articulate advice from experts about learning and starting a career in
AI
1.1.2 Syllabus
Module 1: What is AI? Applications and Examples of AI
Video: Introducing AI
Video: What is AI?
Video: Optional: Tanmay’s journey and take on AI
Video: Impact and Examples of AI
Video: Optional: Application Domains for AI
Video: Some Applications of AI
Video: Optional: More Applications of AI
Video: Famous applications of AI from IBM
Reading: Summary & Highlights
Graded Quiz: What is AI? Applications and Examples of AI
2. Though the Graded Quiz Questions and the Final Quiz have passing marks
of 83% and 17% respectively, the only grade that matters is the overall grade
for the course.
4. Attempts are per question in both the Review Questions and the Final
Exam:
2 MODULE INTRODUCTION AND LEARNING OBJECTIVES
Module 1 Introduction
In this module, you will learn what AI is. You will understand its applications
and use cases and how it is transforming our lives.
Learning Objectives
Define AI.
Describe examples, applications, and impact of AI.
Explore an interactive AI app.
There is a lot of talk, and there are a lot of definitions for what artificial
intelligence is. So one of them is about teaching the machines to
learn, and act, and think as humans would. Another dimension
is really about how we impart more cognitive and sensory
capabilities on the machines. So it's about analysing images and videos, about
natural language processing and understanding speech. It's about
pattern recognition, and so on, and so forth. The third axis is
more around creating a technology that's able to, in some cases,
replace what humans do. I'd like to think of this as augmenting what
humans do. To me personally, the most important part of the definition
of artificial intelligence is about imparting the ability to think and
learn on the machines. To me, that's what defines artificial
intelligence.
AI is the application of computing to solve problems in
an intelligent way using algorithms. So what is an intelligent way?
Well, an intelligent way may be something that mimics human
intelligence. Or it may be a purely computational or
optimization approach, but something that manipulates data in a
way that gets non-obvious results out is, I think, what I would classify as
being artificially intelligent.
I would define AI as a tool that uses
computers to complete a task automatically with very little to no
human intervention.
To me, AI is really a complex series of layers of algorithms that do
something with the information that's coming into them.
Artificial intelligence is a set of technologies that allows us to extract
knowledge from data. So it's any kind of system that learns or
understands patterns within that data, and can identify them, and
then reproduce them on new information.
Artificial intelligence is not
the kind of simulation of human intelligence that people think it is. It's
really not about intelligence at all. I think another term that
describes AI more accurately today is machine learning. The
reason I say that is because machine learning
technology is all about using, essentially, mathematics on computers
in order to find patterns in data. Now, this data can be structured or
unstructured. The only difference between machine learning and
the technologies that came before it is that instead of us, as humans,
having to manually hard-code these patterns and conditions
into computers, they're able to find these patterns on their own by
using math. That's really the only difference
here. So what I'd say artificial intelligence is, is a set of
mathematical algorithms that enable computers to find
very deep patterns that we may not have even known existed,
without us having to hard-code them manually.
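The contrast the speakers draw, between humans hand-coding patterns and machines finding patterns in data with math, can be sketched with a toy example. The tiny "spam" dataset and the word-count scoring rule below are invented for illustration; they are not from the course:

```python
from collections import Counter

# Hypothetical labeled examples: (text, 1 = spam, 0 = not spam).
emails = [("win money now", 1), ("meeting at noon", 0),
          ("free money offer", 1), ("lunch tomorrow", 0),
          ("win a free prize", 1), ("project update", 0)]

# Hand-coded approach: a human writes the pattern explicitly.
def rule_based(text):
    return 1 if "money" in text or "free" in text else 0

# "Learning" approach: count how often each word appears in each
# class, and let the data itself supply the pattern.
spam_words, ham_words = Counter(), Counter()
for text, label in emails:
    (spam_words if label else ham_words).update(text.split())

def learned(text):
    # Score a text by how spam-leaning its words were in the data.
    score = sum(spam_words[w] - ham_words[w] for w in text.split())
    return 1 if score > 0 else 0
```

Both classifiers behave the same on this toy data; the difference is that the second one derived its pattern from examples rather than from a human-written rule, which is the distinction the transcript makes.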
Tanmay’s views on AI
>> We use AI all the time, and a lot of the times we're
not even aware of it. We use AI every time we type in a
search query on a search engine, every time we use our
GPS.
Or every time we use some kind of voice recognition
system.
>> I like to focus on a particular segment of AI,
if that's okay: computer vision. Because it's just
particularly fascinating to me. Now, when we think of
computer vision, people are looking at AI in ways to help
augment, or to help automate, or to help train computers
to do something that's already very difficult to train
humans to do. Like when it comes to the airport, trying
to find weapons within luggage through the X-ray
scanner: that can be difficult to do. No matter how
much you train someone, those can be very difficult to
identify.
But with computer vision that can help to automate, help
to augment, help to flag, certain X-ray images so that
maybe even humans can just take a look at a filtered set
of images, not all of the images right? So computer
vision is very disruptive. And so there's many ways in
which computer vision can really help to augment the
capabilities of humans in lots of different industries.
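The triage workflow described here, where the model flags a subset of scans so humans review a filtered set rather than every image, can be sketched as a simple confidence filter. The scan IDs, scores, and threshold below are hypothetical; a real system would get these probabilities from a trained vision model:

```python
def flag_for_review(scans, threshold=0.5):
    """scans: list of (scan_id, weapon_probability) pairs from a
    vision model. Returns only the IDs that need human review."""
    return [scan_id for scan_id, p in scans if p >= threshold]

# Made-up model outputs for four luggage X-rays.
scores = [("bag-001", 0.03), ("bag-002", 0.91), ("bag-003", 0.47),
          ("bag-004", 0.62)]
print(flag_for_review(scores))  # humans see 2 of 4 scans
```

The augmentation happens in the threshold: lowering it trades more human workload for fewer missed detections, which is a policy decision, not a model decision.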
10 GENERATIVE AI APPLICATIONS
Welcome to Generative AI Applications.
After watching this video, you will be able to: list various applications of
generative AI, and explore the uses of each application.
Generative AI has emerged as a powerful technology that enables software
applications to create, generate, and simulate new content, enhancing their
capabilities and providing unique experiences. Unlike traditional software that
follows predefined rules and algorithms, generative AI leverages machine
learning and deep learning techniques to learn patterns and generate original
content based on the knowledge it has acquired during training.
Due to its potential to create new, personalized content that would have been
impossible to create otherwise, generative AI has been used in various fields,
leading to the development of numerous engaging and well-liked applications.
Some popular applications of generative AI in action include:
1: Generative Pre-trained Transformers, or GPT, is a family of large language
models developed by OpenAI that are capable of producing human-like text.
GPT-3.5 and GPT-4 are iterations in this family, with newer models under
development. GPT has a wide range of applications, including chatbots
powered by GPT like ChatGPT, automated journalism, and even creative writing.
2: ChatGPT is a chatbot, or conversational AI tool, from OpenAI that enables
users to have text-based conversations with the underlying language model,
GPT. Trained on diverse internet text, it generates human-like responses,
providing information, answering questions, assisting with tasks, engaging in
creative writing, and offering suggestions across various subjects.
3: Bard is an AI-powered writing assistant from Google that aims to help users
produce high-quality writing for communication documents like emails and
social media posts. Bard generates text using a large language model called
LaMDA (Language Model for Dialogue Applications) and can adjust to the
user's preferences for style and tone.
4: Watsonx from IBM is an AI and data platform comprising Watsonx.ai for
model development, Watsonx.data for scalable analytics, and
Watsonx.governance for responsible AI workflows. It helps build, deploy, and
manage AI applications at scale, enhancing the impact of AI across your
organization.
5: DeepDream is a generative model that can generate surreal and psychedelic
images from real-life images. It has been used in art and entertainment,
producing some one-of-a-kind and visually stunning images.
6: StyleGAN is a generative model capable of producing high-quality images of
faces that do not exist in reality. It has been used in a variety of applications,
including creating realistic video game avatars and simulating human faces for
medical research.
7: AlphaFold is a generative model that can predict protein structures. It has the
potential to transform drug discovery and make it possible to develop more
effective treatments for diseases.
8: Magenta is a Google project that creates music and art using generative AI.
It has yielded some intriguing and impressive results, such as a piano duet
performed by a human and an AI.
9: Google AI's PaLM 2 is a powerful LLM trained on a dataset ten times larger.
It excels at understanding nuance, generating coherent text and code,
translating, and answering questions. Ongoing development promises to
revolutionize human-computer interactions, enhancing accuracy, efficiency,
creativity, and communication.
10: GitHub Copilot is an AI-powered coding assistant developed by GitHub and
OpenAI that is designed to help developers write code more efficiently. It uses
a deep learning algorithm to analyze code and generate suggestions for the
developer, such as auto-completing code snippets or suggesting functions
based on the context of the code.
Generative AI is a rapidly evolving space and is expected to grow dramatically
in the coming years. However, there are ethical concerns about generative AI,
including the potential misuse of AI-generated content and implications for
intellectual property and copyright laws.
In this video, you learned that:
Generative AI enables applications to create, generate, and simulate new content.
It leverages machine learning and deep learning techniques to learn patterns
and generate original content.
Some applications of generative AI include GPT-4, ChatGPT, Bard, GitHub
Copilot, and PaLM 2.
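The core idea stated above, learning patterns from training data and then sampling new content from them, can be illustrated with a deliberately tiny generative model: a bigram Markov chain. This is a toy stand-in for intuition only, not how GPT-class transformer models work internally, and the corpus is invented:

```python
import random
from collections import defaultdict

# A tiny "training corpus" of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Learn the pattern: which word tends to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Sample new text by repeatedly picking a learned successor."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:  # no learned successor; stop early
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output sentences are new combinations never stored verbatim, yet every transition in them was observed in training, which is the "generate original content based on learned patterns" idea in miniature.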
11 MODULE SUMMARY
To learn more about the topics in this module, read the following articles:
In this module, you will learn about basic AI concepts and terminology. You
will understand how AI learns and what some of its applications are.
Learning Objectives
Learn about a new kind of neural network, called a generative adversarial network
(GAN), that can create complex outputs, like photorealistic images.
You will use a GAN to enhance existing images and create your own unique, custom image.
4. Move the cursor onto the image. Click and, keeping the mouse button pressed, drag
your cursor over an area of the existing image where you want to add the object. For
example, drag a line in the area highlighted in the red rectangle below to add a tree there.
5. Choose another object type and add it to the image.
Figure 2 - Trees and grass added:
6. Experiment with locations: Can you place a door in the sky? Can you place grass so
that it enhances the appearance of existing grass?
7. Use the Undo and Erase functions to remove objects.
8. [Optional] Click Download to save your work.
For more information on the capabilities of GANs, follow these steps:
1. In the What’s happening in this demo? section, click What does a GAN
“understand” and read the text.
2. What does the text say about placement of objects? Does this explain the results you
saw earlier?
3. Click Painting with neurons, not pixels and read the text. How does the GAN help
you manipulate images?
4. Click New ways to work with AI and read the text. What are some of the use cases
for GANs?
Use the Discussion Forum to talk about these questions with your fellow students
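For intuition about what GAN training involves, here is a minimal one-dimensional sketch of the adversarial game. This is my own toy, not the model behind the demo: a two-parameter generator learns to mimic samples from a normal distribution while a logistic discriminator tries to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to "fake" samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" x looks.
w, c = 0.1, 0.0
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 0.5)   # one sample of the target data
    z = rng.normal()
    fake = a * z + b              # one generated sample

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - dr) * real - df * fake)
    c += lr * ((1 - dr) - df)

    # Generator step: ascend log D(fake), i.e. try to fool D.
    df = sigmoid(w * fake + c)
    a += lr * (1 - df) * w * z
    b += lr * (1 - df) * w

samples = a * rng.normal(size=1000) + b
print(round(samples.mean(), 2))
```

Over training, the generator's output distribution tends to drift toward the real data (mean near 4 here), which is the same adversarial dynamic that lets image GANs like the one in this demo produce realistic pictures, just in many more dimensions.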
18.2 AUTHOR(S)
Rav Ahuja
18.3 CHANGELOG
Date        Version  Changed by  Change Description
2022-11-01  2.1      Srishti     Updated demo link
19 KEY FIELDS OF APPLICATION IN AI
Can you tell us a little bit about the work you're doing with
self-driving cars?
>> I've been working on self-driving cars for the last few
years. It's a domain that's exploded, obviously, in interest
since the early competitions back in 2005. And what
we've been working on really is putting together our own self-
driving vehicle, which was able to drive on public roads in the
Region of Waterloo last August. Within the self-driving cars area,
one of our key research domains is 3D object detection. This
remains a challenging task for algorithms to perform
automatically: trying to identify every vehicle, every
pedestrian, and every sign in a driving environment, so that
the vehicle can make the correct decisions about how it should
move and interact with those vehicles. And so we work
extensively on how we take in laser data, vision data, and
radar data, and then fuse that into a complete view of the
world around the vehicle.
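The fusion step mentioned here can be sketched, in a heavily simplified form, as inverse-variance weighting of independent position estimates: sensors the system trusts more (lower noise variance) count for more in the combined estimate. This is my own illustration, not the team's actual stack, and the readings and variances are made up:

```python
def fuse(estimates):
    """estimates: list of (position, variance) pairs, one per sensor.
    Returns the inverse-variance-weighted position and its variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    position = sum(w * pos for (pos, _), w in zip(estimates, weights)) / total
    variance = 1.0 / total  # fused estimate is tighter than any single sensor
    return position, variance

# Hypothetical range-to-obstacle readings, in meters.
readings = [
    (10.2, 0.04),  # lidar: very precise
    (10.8, 0.25),  # camera: noisier depth estimate
    (9.9,  0.16),  # radar
]
pos, var = fuse(readings)
print(round(pos, 2), round(var, 3))
```

The fused variance is always smaller than the best individual sensor's, which is why combining laser, vision, and radar gives a more reliable world view than any one modality alone; real stacks extend this idea with Kalman-style filtering over time.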
>> When we think of computer vision,
we usually think immediately of self-driving cars, and why is
that? Well, it's because it's hard to pay attention when driving
on the road, right? You can't both be looking at your
smartphone and also be looking at the road at the same time.
Of course, it's sometimes hard to predict what people are
going to be doing on the street, as well. When they're crossing
the street with their bike or skateboard, or whatnot. So it's
great when we have some sort of camera or sensor that can
help us detect these things and prevent accidents before they
could potentially occur.
And that's one of the limitations of human vision: attention,
visual attention. So I could be looking at you, Rav, but
behind you could be this delicious slice of pizza.
But I can only pay attention to one, or just some limited
number, of things at a time. I can't attend to everything in
my visual field all at once the way a camera could, or the way
computer vision could potentially do.
And so that's one of the great things that cameras and
computer vision are good for: helping us pay attention to the
whole world around us, without us having to look around and
make sure that we're paying attention to everything. And that's
just in self-driving cars, so I think we all have a good
sense of how AI and computer vision shape the driving and
transportation industry.
>> Well, self-driving cars are certainly the future.
And there's tremendous interest right now in self-driving
vehicles. In part because of their potential to really change the
way our society works and operates. I'm very excited about
being able to get into a self-driving car and
read or sit on the phone on the way to work, instead of having
to pilot through Toronto traffic. So I think they represent a
really exciting step forward, but there's still lots to do. We still
have lots of interesting challenges to solve in the self-driving
space before we have really robust and safe cars that are able
to drive themselves autonomously 100% of the time on our
roads.
>> We've just launched our own self-driving car
specialization on Coursera. And we'd be really happy to see
students in this specialization also come and
learn more about self-driving. It's a wonderful starting point, it
gives you a really nice perspective on the different
components of the self-driving software stack and
how it actually works: everywhere from how it perceives
the environment, to how it makes decisions and plans its way
through that environment, to how it controls the vehicle and
makes sure it executes those plans safely. So
you'll get a nice broad sweep of all of those things from that
specialization. And from there you then want to become really
good and really deep in one particular area, if you want to
work in this domain.
Because again, there's so many layers behind this.
There's so much foundational knowledge you need to start
contributing that you can't go wrong. If you find something
interesting, just go after it. And I am sure there'll be
companies that'll need you for this.
23 COMPUTER VISION
IBM Research creates innovative tools and resources to help unleash the power of AI.
3. In the Simulate Attack section, ensure that no attack is selected, and that all the
sliders are to the far left, indicating that all attacks and mitigation strategies are
turned off.
What does Watson identify the image as, and at what confidence level? E.g.,
Siamese cat, 92%.
4. In the Simulate Attack section, under Adversarial noise type, select Fast
Gradient Method. The strength slider will move to low.
Figure 2 - Select an attack and level
What does Watson identify the image as now, and at what confidence level?
5. In the Defend attack section, move the Gaussian Noise slider to low.
Figure 3 - Mitigate the attack
6. What does Watson identify the image as now, and at what confidence level? Did
the image recognition improve?
Figure 4 - View the results
Note that you can use the slider on the image to see the original and modified images.
7. Move the Gaussian Noise slider to medium, and then to high. For each level,
note what Watson identifies the image as, and at what confidence level. Did the
image recognition improve?
8. Move the Gaussian Noise slider to None.
9. In the Defend attack section, move the Spatial Smoothing slider to low. What
does Watson identify the image as now, and at what confidence level? Did the
image recognition improve?
10. Move the Spatial Smoothing slider to medium, and then to high. For each level,
note what Watson identifies the image as, and at what confidence level. Did the
image recognition improve?
11. Move the Spatial Smoothing slider to None.
12. In the Defend attack section, move the Feature Squeezing slider to low. What
does Watson identify the image as now, and at what confidence level? Did the
image recognition improve?
13. Move the Feature Squeezing slider to medium, and then to high. For each level,
note what Watson identifies the image as, and at what confidence level. Did the
image recognition improve?
14. Which of the three defenses would you use to defend against a Fast Gradient
Attack?
Optional:
If you have time, use the same techniques to explore the other methods of attack
(Projected Gradient Descent and C&W Attack) and evaluate which method of defense
works best for each. If you want, try a different image.
Use the Discussion Forum to talk about the attacks and mitigation strategies with your
fellow students.
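The Fast Gradient Method attack and Gaussian-noise defense explored in this lab can be sketched on a toy linear classifier. The weights, input, and epsilon below are invented for illustration; Watson's actual model is a deep network, but the mechanics of the attack are the same: nudge each input feature in the direction that most increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear "image classifier": three features, fixed weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)  # confidence in the true class

x = np.array([0.5, -0.5, 1.0])  # a clean input
y = 1.0                         # its true label

# Fast Gradient Method: for logistic loss, d(loss)/dx = (p - y) * w,
# so step each feature by eps in the sign of that gradient.
p = predict(x)
grad_x = (p - y) * w
eps = 0.5
x_adv = x + eps * np.sign(grad_x)

# Gaussian-noise "defense" from the lab: add random noise hoping to
# wash out the structured adversarial perturbation.
rng = np.random.default_rng(0)
x_def = x_adv + rng.normal(scale=0.1, size=x.shape)

print(predict(x), predict(x_adv))
```

Running this, the confidence on the adversarial input drops below the clean-input confidence even though each feature moved by at most eps, which is why the demo's small, visually subtle perturbations can change Watson's label so drastically.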
23.1 CHANGELOG
Date        Version  Changed by  Change Description
2020-08-27  2.0      Anamika     Migrated Lab to Markdown and added to course repo in GitLab
24 MODULE SUMMARY
24.1.1 In this lesson, you have learned about cognitive computing:
Cognitive computing systems differ from conventional computing systems
in that they can:
Learning Objectives