ML Co4 Session 32 Som
CO-4 SESSION-22
ASSOCIATE PROFESSOR
AIM
To familiarize students with the concepts of analytical learning, explanation-based learning (EBL), the EBL task and method, and learning from perfect domain theories
INSTRUCTIONAL OBJECTIVES
LEARNING OUTCOMES
• The essential difference between analytical and inductive learning methods is that
they assume two different formulations of the learning problem:
Inductive learning: the desired output of the learner is a hypothesis h from H that is
consistent with the training examples.
Analytical learning: the desired output is a hypothesis h from H that is consistent
with both the training examples and the domain theory.
EBL Task
• Given:
Goal concept
Training example
Domain Theory
Operationality Criteria
• Find: a generalization of the training example that is a sufficient condition for the target
concept and satisfies the operationality criteria
EBL Method
• For each positive example not correctly covered by an “operational” rule do:
Explain: Use the domain theory to construct a logical proof that the example is a
member of the concept.
Analyze: Generalize the explanation to determine a rule that logically follows from
the domain theory given the structure of the proof and is operational.
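The Explain/Analyze loop above can be sketched as a toy backward chainer. The Horn rules, predicate names, and the operational set below are invented for illustration, and the sketch is propositional, so the variable regression performed by full EBL is omitted.

```python
# Toy sketch of the EBL method: Explain a positive example with a Horn-rule
# domain theory, collecting the operational leaves the proof rests on, then
# Analyze by turning those leaves into the antecedent of an operational rule.
# The theory below is a made-up illustration, not the slide's exact theory.

rules = {
    # head: body (Horn clauses, propositional for brevity)
    "SafeToStack": ["Lighter"],
    "Lighter": ["KnownWtX", "KnownWtY", "XLessY"],
}
operational = {"KnownWtX", "KnownWtY", "XLessY"}  # directly testable predicates

def explain(goal, leaves):
    """Explain step: prove `goal`, recording the operational facts used."""
    if goal in operational:
        leaves.append(goal)
        return True
    body = rules.get(goal)
    return body is not None and all(explain(g, leaves) for g in body)

leaves = []
if explain("SafeToStack", leaves):
    # Analyze step: the proof's operational leaves become the learned rule.
    print("SafeToStack <-", " & ".join(leaves))
    # -> SafeToStack <- KnownWtX & KnownWtY & XLessY
```

Only the predicates that appear in the proof survive into the learned rule, which is exactly how EBL separates relevant features from irrelevant ones.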
[Figure: problem solving before and after learning. Before learning, the solver searches step by step from facts to the goal; after learning, learned rules go directly from facts to the solution.]
Explanation of a Training Example
EBL-An Analytical Learning Method
• Prior knowledge is used to construct an explanation/proof of each example (which is expressed as
a Horn rule)
• This explanation is used to distinguish the relevant features of the training example from
the irrelevant ones
• The explanation (rule) is generalized to the extent possible and added to the current hypothesis (set
of rules)
SafeToStack(x, y) Learning Problem
• Given
• Instances: pairs of physical objects
• Hypotheses: sets of Horn clause rules; Target Concept: SafeToStack(x, y)
• Training Examples:
• On(Obj1, Obj2)
• Owner(Obj1, Fred)
• Type(Obj1, Box)
• Owner(Obj2, Louise)
• Type(Obj2, Endtable)
• Density(Obj1, 0.3)
• Color(Obj1, Red)
• Material(Obj1, Cardbd)
• Domain Theory:
• SafeToStack(x, y) ← ¬Fragile(y)
• SafeToStack(x, y) ← Lighter(x, y)
• Lighter(x, y) ← Wt(x, wx) ∧ Wt(y, wy) ∧ Less(wx, wy)
• Determine:
• A hypothesis from H consistent with training examples and domain theory.
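The proof for this example can be sketched concretely. Note that the slide's training example does not list a volume for Obj1 or any weight rules; Volume(Obj1, 2), the Volume × Density weight rule, and the default Endtable weight of 5 are assumptions borrowed from Mitchell's well-known version of this problem.

```python
# Concrete sketch of proving SafeToStack(Obj1, Obj2) from the domain theory.
# Volume(Obj1, 2), the rule Wt(x, w) <- Volume(x, v) & Density(x, d) & w = v*d,
# and the default Wt(x, 5) <- Type(x, Endtable) are ASSUMED here; they are not
# on the slide, but come from the classic version of this example.

facts = {
    ("On", "Obj1", "Obj2"),
    ("Type", "Obj1", "Box"),
    ("Type", "Obj2", "Endtable"),
    ("Density", "Obj1", 0.3),
    ("Volume", "Obj1", 2),      # assumed fact, for illustration
}

def weight(obj):
    """Wt(x, w): Volume * Density if both known, else 5 for an Endtable."""
    vol = next((v for (p, o, v) in facts if p == "Volume" and o == obj), None)
    den = next((d for (p, o, d) in facts if p == "Density" and o == obj), None)
    if vol is not None and den is not None:
        return vol * den
    if ("Type", obj, "Endtable") in facts:
        return 5
    return None

def lighter(x, y):
    """Lighter(x, y) <- Wt(x, wx) & Wt(y, wy) & Less(wx, wy)."""
    wx, wy = weight(x), weight(y)
    return wx is not None and wy is not None and wx < wy

def safe_to_stack(x, y):
    """SafeToStack(x, y) <- Lighter(x, y)."""
    return lighter(x, y)

print(safe_to_stack("Obj1", "Obj2"))   # 0.6 < 5 -> True
```

Walking this proof shows why only Density, Volume, and Type are relevant: Owner, Color, and Material never appear in it, so the generalized rule EBL extracts mentions only the features the explanation actually uses.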
Learning from Perfect Domain Theories
TEAM ML