
EBL AND PERFECT DOMAIN THEORIES

CO-4 SESSION-22

DR. AKHILESH KUMAR DUBEY

ASSOCIATE PROFESSOR
AIM

To familiarize students with explanation-based learning (EBL), the inductive and analytical formulations of the learning problem, and learning from
perfect domain theories

INSTRUCTIONAL OBJECTIVES

This session is designed to:


1. Introduce the two formulations for learning: inductive and analytical
2. Describe perfect domain theories and their role in explanation-based learning

LEARNING OUTCOMES

At the end of this session, you should be able to:


1. Distinguish between the inductive and analytical formulations of the learning problem
2. Describe the explanation-based learning (EBL) task and method
3. Generalize a training example from its explanation
4. Explain learning from perfect domain theories
Explanation-Based Learning

• Explanation-Based Learning is a particular type of analytical approach which uses prior knowledge to distinguish the relevant features of the training examples from the irrelevant ones, so that examples can be generalized based on logical rather than statistical reasoning.

• Explanation-Based Learning works by generalizing not from the training examples themselves, but from their explanations.
Inductive and Analytical Learning Problems

• The essential difference between analytical and inductive learning methods is that
they assume two different formulations of the learning problem:

 In inductive learning, the learner is given a hypothesis space H from which it must select an output hypothesis, and a set of training examples D = {(x1, f(x1)), ..., (xn, f(xn))}, where f(xi) is the target value for the instance xi.

 The desired output of the learner is a hypothesis h from H that is consistent with these training examples.

 In analytical learning, the learner is given the same hypothesis space and training examples, together with a domain theory B of background knowledge that can be used to explain the examples; the desired output is a hypothesis consistent with both the training examples and the domain theory (see the sketch below).
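As a minimal sketch (with hypothetical names), the two formulations can be pictured as the inputs handed to the learner; the analytical formulation simply adds a domain theory B to the hypothesis space H and examples D of the inductive setting:

from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical structures for illustration only.

@dataclass
class InductiveProblem:
    hypothesis_space: List[Callable]      # H: candidate hypotheses
    examples: List[Tuple[object, bool]]   # D = {(x1, f(x1)), ..., (xn, f(xn))}

@dataclass
class AnalyticalProblem(InductiveProblem):
    domain_theory: List[str]              # B: background rules (e.g., Horn clauses)

# In both cases the desired output is an h in H consistent with D;
# in the analytical case h must also be consistent with the domain theory B.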
EBL Task

• Given:
 Goal concept
 Training example
 Domain Theory
 Operationality Criteria

• Find: a generalization of the training example that is a sufficient condition for the target
concept and satisfies the operationality criteria
EBL Method

• For each positive example not correctly covered by an “operational” rule do:

 Explain: Use the domain theory to construct a logical proof that the example is a
member of the concept.
 Analyze: Generalize the explanation to determine a rule that logically follows from
the domain theory given the structure of the proof and is operational.

• Add the new rule to the concept definition (a schematic sketch of this loop follows below).
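A schematic Python sketch of this explain-and-analyze loop; the prover, the generalizer, and the rule representation are hypothetical placeholders, not a full implementation:

def ebl(positive_examples, domain_theory, operational_rules, prove, generalize):
    """Explanation-based learning loop (schematic sketch).

    prove(example, theory)    -> proof that the example is a member of the concept
    generalize(proof, theory) -> an operational rule that follows from the theory
    """
    for example in positive_examples:
        # Skip examples already covered by an existing operational rule.
        if any(rule.covers(example) for rule in operational_rules):
            continue
        # Explain: build a logical proof from the domain theory.
        proof = prove(example, domain_theory)
        # Analyze: generalize the explanation into an operational rule.
        new_rule = generalize(proof, domain_theory)
        # Add the new rule to the concept definition.
        operational_rules.append(new_rule)
    return operational_rules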


Perspectives on EBL

• EBL as theory-guided generalization of examples: Explanations are used to distinguish relevant from irrelevant features.

• EBL as example-guided reformulation of theories: Examples are used to focus on which operational reformulations of the theory are worth learning, namely those covering “typical” cases.

• EBL as knowledge compilation: Deductive consequences that are particularly useful (e.g., for reasoning about the training examples) are “compiled out” to subsequently allow for more efficient reasoning.
Standard Approach to EBL

• An explanation is a detailed proof of the goal, chaining from the given facts up to the goal.

• After learning, the learned rule goes directly from the facts to the solution, without re-deriving the intermediate steps of the proof (a toy illustration follows below).
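A toy illustration of this before/after picture, using made-up facts and rules that are not from the slides: before learning, the goal is proved by chaining through an intermediate conclusion; after learning, a single compiled rule goes straight from the facts to the goal.

# Toy rules (hypothetical): C <- A and B;  Goal <- C.
facts = {"A", "B"}

# Before learning: the proof of the goal passes through the intermediate conclusion C.
def prove_goal_via_explanation(facts):
    c = "A" in facts and "B" in facts   # derive C from the facts
    return c                            # derive Goal from C

# After learning: the chain is "compiled out" into one rule, Goal <- A and B,
# which goes directly from the facts to the goal.
def compiled_goal_rule(facts):
    return "A" in facts and "B" in facts

assert prove_goal_via_explanation(facts) == compiled_goal_rule(facts)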
Explanation of a Training Example
EBL: An Analytical Learning Method

• EBL: Explanation-Based Learning

• Prior knowledge is used to construct an explanation (proof) of each example, which is expressed as a Horn rule.

• This explanation is used to distinguish between the relevant features of the training example and the irrelevant ones.

• The explanation (rule) is generalized to the extent possible and added to the current hypothesis (set of rules).
SafeToStack(x, y) Learning Problem

• Given:
• Instances: pairs of physical objects
• Hypotheses: sets of Horn clause rules
• Target concept: SafeToStack(x, y)
• Training Examples:
• On(Obj1, Obj2)
• Owner(Obj1, Fred)
• Type(Obj1, Box)
• Owner(Obj2, Louise)
• Type(Obj2, Endtable)
• Density(Obj1, 0.3)
• Color(Obj1, Red)
• Material(Obj1, Cardbd)
• Domain Theory:
• SafeToStack(x, y) ← ¬Fragile(y)
• SafeToStack(x, y) ← Lighter(x, y)
• Lighter(x, y) ← Wt(x, wx) ∧ Wt(y, wy) ∧ Less(wx, wy)
• Determine:
• A hypothesis from H consistent with the training examples and domain theory (a code sketch of this problem follows below).
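A sketch of this domain theory as ordinary Python functions. Only the rules and facts listed above are used, except for the Wt (weight) facts, which are assumed here because the slide gives the Lighter rule but no weights; the point is that the proof of SafeToStack(Obj1, Obj2) touches only the weight-related facts, while Owner, Color and Material never enter the explanation.

# Hypothetical encoding of the SafeToStack domain theory and training example.
facts = {
    "On":       [("Obj1", "Obj2")],
    "Owner":    [("Obj1", "Fred"), ("Obj2", "Louise")],
    "Type":     [("Obj1", "Box"), ("Obj2", "Endtable")],
    "Density":  [("Obj1", 0.3)],
    "Color":    [("Obj1", "Red")],
    "Material": [("Obj1", "Cardbd")],
    "Wt":       [("Obj1", 0.6), ("Obj2", 5.0)],   # assumed weights, not on the slide
}

def wt(obj):
    return dict(facts["Wt"]).get(obj)

# Lighter(x, y) <- Wt(x, wx) ^ Wt(y, wy) ^ Less(wx, wy)
def lighter(x, y):
    return wt(x) is not None and wt(y) is not None and wt(x) < wt(y)

# SafeToStack(x, y) <- Lighter(x, y)
def safe_to_stack(x, y):
    return lighter(x, y)

print(safe_to_stack("Obj1", "Obj2"))   # True; Owner, Color, Material are irrelevant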
Learning from Perfect Domain Theories

• Assumes domain theory is correct (error-free)


 Prolog-EBG is an algorithm that works under this assumption (a rough sketch of its generalization step follows below)
 This assumption holds in chess and other search problems
 Allows us to assume explanation = proof
 Later we’ll discuss methods that assume approximate domain theories
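A heavily simplified sketch of the generalization idea behind Prolog-EBG: because the explanation is a correct proof under a perfect domain theory, the example-specific constants at the proof's operational leaves can be replaced by variables. A real implementation regresses the goal through the proof to compute the weakest preimage; the helper below is purely illustrative, and the weights are the assumed values from the earlier sketch.

def variabilize(literals, constants_to_vars):
    """Replace example-specific constants in proof-leaf literals with variables."""
    return [(pred, tuple(constants_to_vars.get(a, a) for a in args))
            for pred, args in literals]

# Operational leaves of the proof of SafeToStack(Obj1, Obj2).
proof_leaves = [("Wt", ("Obj1", 0.6)), ("Wt", ("Obj2", 5.0)), ("Less", (0.6, 5.0))]

body = variabilize(proof_leaves, {"Obj1": "x", "Obj2": "y", 0.6: "wx", 5.0: "wy"})
head = ("SafeToStack", ("x", "y"))
print(head, "<-", body)
# ('SafeToStack', ('x', 'y')) <- [('Wt', ('x', 'wx')), ('Wt', ('y', 'wy')), ('Less', ('wx', 'wy'))]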
THANK YOU

TEAM ML
