AI Unit 3 Lecture 3
AI learning models can be classified into two main types: inductive and deductive.
— Inductive Learning (specific to general): the system derives general rules from
specific examples or observations.
— Deductive Learning (general to specific): this technique starts with a series of
rules and infers new rules that are more efficient in the context of a specific AI
algorithm.
Explanation-Based Learning (EBL) and Relevance-Based Learning (RBL) are
examples of deductive techniques. EBL extracts general rules from examples by
“generalizing” the explanation. RBL focuses on identifying relevant attributes and
forming deductive generalizations from single examples.
Supervised Learning:
Supervised learning is a type of machine learning in which machines are trained
using well "labelled" training data, and on the basis of that data, machines predict
the output. Labelled data means that some input data is already tagged with the
correct output.
In supervised learning, the training data provided to the machine works as a
supervisor that teaches the machine to predict the output correctly. It applies
the same concept as a student learning under the supervision of a teacher.
Supervised learning is the process of providing input data as well as correct output
data to the machine learning model. The aim of a supervised learning algorithm is
to find a mapping function that maps the input variable (x) to the output
variable (y).
Regression trains on and predicts a continuous-valued response, for example
predicting real estate prices.
Classification attempts to assign the appropriate class label, such as distinguishing
positive and negative sentiment, male and female persons, benign and malignant
tumors, or secure and insecure loans.
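The regression case above can be sketched in a few lines. This is a minimal illustration, not a production method: it fits a straight line y = a·x + b to labelled (size, price) pairs by ordinary least squares, where all data values are invented for the example.

```python
# Minimal supervised-learning sketch: learn a mapping f(x) ≈ y from
# labelled training pairs using ordinary least squares.

def fit_line(xs, ys):
    """Fit y = a*x + b by least squares over labelled training data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Labelled training data (invented): house size (x) -> price (y).
sizes = [50, 80, 110, 140]
prices = [150, 240, 330, 420]     # here price is exactly 3 * size

a, b = fit_line(sizes, prices)
predicted = a * 100 + b           # predict the price of an unseen size-100 house
print(round(a, 2), round(b, 2), round(predicted, 2))   # -> 3.0 0.0 300.0
```

The "supervisor" here is simply the list of correct prices: the algorithm adjusts a and b until the mapping reproduces them.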
Unsupervised Learning
Unsupervised learning is used to detect anomalies and outliers, such as fraud or
defective equipment, or to group customers with similar behaviors for a sales
campaign. It is the opposite of supervised learning: there is no labeled data.
When the learning data contains only some indications, without any descriptions or
labels, it is up to the coder or the algorithm to find the structure of the
underlying data, to discover hidden patterns, or to determine how to describe
the data. This kind of learning data is called unlabeled data.
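The customer-grouping use case above can be sketched with a tiny clustering loop. This is an illustrative two-cluster k-means on one-dimensional data; the "customer spend" values are invented, and no labels are given — the two groups emerge from the data alone.

```python
# Minimal unsupervised-learning sketch: group unlabeled 1-D points into
# two clusters with a small k-means loop.

def kmeans_1d(points, iters=10):
    """Two-cluster k-means on 1-D data; returns the two centroids."""
    c1, c2 = min(points), max(points)          # crude initialization
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)                 # move each centroid to the
        c2 = sum(g2) / len(g2)                 # mean of its assigned points
    return c1, c2

# Unlabeled "customer spend" values (invented): two behavior groups.
spend = [10, 12, 11, 90, 95, 88]
c1, c2 = kmeans_1d(spend)
print(round(c1), round(c2))   # -> 11 91
```

No output column was ever provided; the algorithm discovered the hidden structure (two spending groups) by itself.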
Reinforcement Learning
Here the learning data gives feedback so that the system adjusts to dynamic
conditions in order to achieve a certain objective. The system evaluates its
performance based on the feedback responses and reacts accordingly. The best-
known instances include self-driving cars and the Go-playing program AlphaGo.
Types of Reinforcement: There are two types of Reinforcement:
1. Positive –
Positive reinforcement occurs when an event, triggered by a particular
behavior, increases the strength and frequency of that behavior. In other
words, it has a positive effect on the behavior.
Advantages of positive reinforcement:
Maximizes performance
Sustains change for a long period of time
Disadvantages of positive reinforcement:
Too much reinforcement can lead to an overload of states, which
can diminish the results.
2. Negative –
Negative reinforcement is defined as the strengthening of a behavior because a
negative condition is stopped or avoided.
Advantages of negative reinforcement:
Increases behavior
Helps maintain a minimum standard of performance
Disadvantages of negative reinforcement:
It provides only enough to meet the minimum behavior.
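The feedback loop described in this section can be sketched with tabular Q-learning on a toy environment. The corridor, the reward values, and all parameters below are invented for illustration: reaching the right end yields positive feedback (+1), stepping off the left end negative feedback (-1), and the agent's behavior is adjusted after every step.

```python
# Minimal reinforcement-learning sketch: tabular Q-learning on a tiny
# 1-D corridor with positive and negative feedback.
import random

n_states = 5                                  # positions 0..4; 4 is the goal
q = [[0.0, 0.0] for _ in range(n_states)]     # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2             # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                          # training episodes
    s = 2                                     # start in the middle
    while True:
        # Epsilon-greedy action choice: mostly exploit, sometimes explore.
        if random.random() < eps:
            a = random.randint(0, 1)
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s2 = s + (1 if a == 1 else -1)
        if s2 < 0:
            r, done = -1.0, True              # negative reinforcement signal
        elif s2 >= n_states - 1:
            r, done = 1.0, True               # positive reinforcement signal
        else:
            r, done = 0.0, False
        target = r if done else r + gamma * max(q[s2])
        q[s][a] += alpha * (target - q[s][a]) # feedback adjusts the behavior
        if done:
            break
        s = s2

# After training, the greedy policy from every interior state is "go right".
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(1, n_states - 1)]
print(policy)   # -> [1, 1, 1]
```

The rewards play exactly the roles described above: the +1 strengthens the behavior that produced it, while the -1 teaches the agent to avoid the condition that caused it.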
ROTE LEARNING:
Rote learning is the process of memorizing information based on repetition.
Rote learning enhances students’ ability to quickly recall basic facts and helps
develop foundational knowledge of a topic.
Examples of rote learning include memorizing multiplication tables or the
periodic table of elements. The drawbacks of rote learning are that it can be
repetitive, it’s easy to lose focus and it doesn’t allow for a deeper understanding of
a topic.
The idea behind rote learning is that one should be able to recall material more
quickly the more one repeats it.
Rote memorization isn’t considered higher-level thinking or critical thinking since
students don’t learn how to think, analyze, or solve problems with this type of
learning.
Memoization
In computing, Memoization or memoisation is an optimization technique used
primarily to speed up computer programs by storing the results of
expensive function calls and returning the cached result when the same inputs
occur again. Memoization has also been used in other contexts (and for purposes
other than speed gains), such as in simple mutually recursive descent parsing.
Although related to caching, memoization refers to a specific case of this
optimization, distinguishing it from forms of caching such as buffering or page
replacement. In the context of some logic programming languages, memoization is
also known as tabling.
A memoized function "remembers" the results corresponding to some set of
specific inputs. Subsequent calls with remembered inputs return the
remembered result rather than recalculating it, thus eliminating the primary
cost of a call with given parameters from all but the first call made to the
function with those parameters. The set of remembered associations may be a
fixed-size set controlled by a replacement algorithm or a fixed set, depending on
the nature of the function and its use. A function can only be memoized if it
is referentially transparent; that is, only if calling the function has the same effect
as replacing that function call with its return value. (Special case exceptions to this
restriction exist, however.) While related to lookup tables, since memoization often
uses such tables in its implementation, memoization populates its cache of results
transparently on the fly, as needed, rather than in advance.
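The behavior described above — a cache populated transparently on the fly, keyed by the function's inputs — can be sketched with the classic Fibonacci example. This is a minimal hand-rolled version alongside the standard-library equivalent; Fibonacci is a suitable subject because it is referentially transparent.

```python
# Sketch of memoization: a result cache for a pure function, populated
# on the fly, plus the standard-library equivalent.
import functools

_cache = {}

def fib(n):
    """Fibonacci with memoization: remembered inputs skip recomputation."""
    if n in _cache:
        return _cache[n]            # subsequent call: return cached result
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    _cache[n] = result              # first call: remember the association
    return result

print(fib(30))    # -> 832040, in O(n) calls instead of O(2^n)

# The same idea via the standard library:
@functools.lru_cache(maxsize=None)
def fib2(n):
    return n if n < 2 else fib2(n - 1) + fib2(n - 2)

print(fib2(30))   # -> 832040
```

Note that `lru_cache` with a bounded `maxsize` is exactly the "fixed-size set controlled by a replacement algorithm" mentioned above, while `maxsize=None` gives an unbounded fixed set.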
Memoization is a way to lower a function's time cost in exchange for space cost;
that is, memoized functions become optimized for speed in exchange for a higher
use of computer memory space. The time/space "cost" of algorithms has a specific
name in computing: computational complexity. All functions have a computational
complexity in time (i.e. they take time to execute) and in space.
Although a space–time tradeoff occurs (i.e., space used is speed gained), this
differs from some other optimizations that involve time–space trade-offs, such
as strength reduction, in that memoization is a run-time rather than a compile-time
optimization. Moreover, strength reduction potentially replaces a costly
operation such as multiplication with a less costly operation such as addition, and
the resulting savings can be highly machine-dependent (non-portable across
machines), whereas memoization is a more machine-independent, cross-
platform strategy.