
MODULE 4

 Knowledge Representation
1. Ontological Engineering
2. Categories and Objects
• Physical Composition
• Measurements
• Objects: Things and Stuff
3. Events
• Time
• Fluent and Objects
4. Mental Objects and modal Logic
• Other Modal Logics
• Knowledge Base: A knowledge base in artificial intelligence (AI) is a
collection of knowledge that helps support decision-making and problem-
solving. Knowledge bases are a key component of knowledge-based agents,
which deal with real-world facts.

• Inference in Artificial Intelligence: Inference is the process of drawing conclusions based on facts and evidence. It is a crucial process in AI that involves reasoning and making decisions based on available information.
Ontological Engineering
• Ontological engineering involves creating representations for complex domains.

• Crucial for more general and flexible representations, especially in complex domains like
Internet shopping or traffic scenarios.

• In "Toy" Domains: the choice of representation is not critical; in Complex Domains: more general and flexible representations are required.

• Identified General Concepts: Events, Time, Physical Objects, Beliefs.

• Purpose: These concepts occur in various domains and serve as a foundation for representation.

• Ontological Engineering: The process of creating representations for abstract concepts that are
fundamental in diverse domains.
• Analogy to Object-Oriented Programming: Similar to how object-oriented programming frameworks define general concepts (e.g., Window) for users to extend (e.g., Spreadsheet Window).

• Upper Ontology: Upper ontology contains the most general concepts.

• Convention: Graphs depict general concepts at the top and more specific concepts below.

• Use of First-Order Logic (FOL): First-order logic is employed to discuss the content and organization of knowledge.

• Challenges in Representing Real-World Knowledge:

• Nature of Generalizations: Many generalizations have exceptions or hold only to a certain degree.

• Example: The rule "tomatoes are red" has exceptions, as some tomatoes can be green, yellow, or
orange.
Categories and Objects
• The organization of objects into categories is a vital part of knowledge representation.

• For example, a shopper would normally have the goal of buying a basketball, rather than a particular
basketball such as BB9.

• Categories also serve to make predictions about objects once they are classified.

• One infers the presence of certain objects from perceptual input, infers category membership from the
perceived properties of the objects, and then uses category information to make predictions about the
objects.

• For example, from its green and yellow mottled skin, one-foot diameter, ovoid shape, red flesh, black
seeds, and presence in the fruit aisle, one can infer that an object is a watermelon; from this, one infers
that it would be useful for fruit salad.
• There are two choices for representing categories in first-order logic: predicates and
objects. That is, we can use the predicate Basketball(b), or we can reify the category
as an object, Basketballs.

Member(b, Basketballs), abbreviated as b∈Basketballs, to say that b is a member of the category of basketballs.

• Subset(Basketballs, Balls), abbreviated as Basketballs ⊂ Balls, to say that Basketballs is a subcategory of Balls.

• Categories organize knowledge through inheritance.

• If all Food is edible, Fruit is a subclass of Food, and Apples is a subclass of Fruit, then we can infer that every apple is edible.
• An object is a member of a category.

BB9∈Basketballs

• A category is a subclass of another category.

Basketballs ⊂ Balls

• All members of a category have some properties.

(x∈Basketballs) ⇒ Spherical(x)

• Members of a category can be recognized by some properties.

Orange(x)∧Round(x)∧Diameter(x)=9.5 ∧x∈Balls ⇒ x∈Basketballs

• A category as a whole has some properties.

Dogs∈DomesticatedSpecies
• Of course there are exceptions to many of the above rules (punctured
basketballs are not spherical);

• we also want to be able to state relations between categories that are not
subclasses of each other.

• For example: Undergraduates and Graduate Students

1. Disjoint

2. Exhaustive Decomposition

3. Partition
(Note that the Exhaustive Decomposition of North Americans is not a Partition, because
some people have dual citizenship.) The three predicates are defined as follows:
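Disjoint(s) ⇔ (∀c1,c2  c1∈s ∧ c2∈s ∧ c1≠c2 ⇒ Intersection(c1,c2) = { })

ExhaustiveDecomposition(s,c) ⇔ (∀i  i∈c ⇔ ∃c2  c2∈s ∧ i∈c2)

Partition(s,c) ⇔ Disjoint(s) ∧ ExhaustiveDecomposition(s,c)

For example (illustrative instances): Disjoint({Animals, Vegetables}) and
ExhaustiveDecomposition({Americans, Canadians, Mexicans}, NorthAmericans).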

Categories can also be defined by providing necessary and sufficient conditions for
membership. For example, a bachelor is an unmarried adult male: x ∈Bachelors ⇔
Unmarried(x)∧ x∈Adults ∧ x∈Males.
Physical Composition
• The idea that one object can be part of another is a familiar one.
• Objects can be grouped into PartOf hierarchies, reminiscent of the Subset
hierarchy:
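PartOf(Bucharest, Romania)
PartOf(Romania, EasternEurope)
PartOf(EasternEurope, Europe)
PartOf(Europe, Earth)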

• The PartOf relation is transitive and reflexive; that is,
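PartOf(x, y) ∧ PartOf(y, z) ⇒ PartOf(x, z)
PartOf(x, x)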

Therefore, we can conclude PartOf(Bucharest, Earth).


• Categories of composite objects are often characterized by structural relations among parts.

• For example, a biped is an object with exactly two legs attached to a body:
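In first-order logic this can be sketched as follows (the final clause enforces that there are exactly two legs):

Biped(a) ⇒ ∃ l1, l2, b  Leg(l1) ∧ Leg(l2) ∧ Body(b) ∧
    PartOf(l1, a) ∧ PartOf(l2, a) ∧ PartOf(b, a) ∧
    Attached(l1, b) ∧ Attached(l2, b) ∧ l1 ≠ l2 ∧
    [∀ l3  Leg(l3) ∧ PartOf(l3, a) ⇒ (l3 = l1 ∨ l3 = l2)]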

• It is also useful to define composite objects with definite parts but no particular structure.

• “The apples in this bag weigh two pounds.” One might be tempted to ascribe this weight to the set of apples in the bag, but this would be a mistake because the set is an abstract mathematical concept that has elements but does not have weight.
• Instead, we need a new concept, which we will call a bunch.

• For example, if the apples in the bag are Apple1, Apple2, and Apple3, then BunchOf({Apple1, Apple2, Apple3}) denotes the composite object that has the three apples as parts.

• BunchOf(Apples) is the composite object consisting of all apples—not to be confused with Apples, the category or set of all apples.

• We can define BunchOf in terms of the PartOf relation.
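Roughly, each element of s is part of BunchOf(s), and BunchOf(s) is the smallest object satisfying this condition:

∀x  x∈s ⇒ PartOf(x, BunchOf(s))
∀y  [∀x  x∈s ⇒ PartOf(x, y)] ⇒ PartOf(BunchOf(s), y)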


Measurements
• Objects have height, mass, cost, and so on. The values that we assign for these properties are called measures.

• Line segment example: the length of a line segment L1 can be named in different units, Length(L1) = Inches(1.5) = Centimeters(3.81).

• Some categories have strict definitions: an object is a triangle if and only if it is a polygon with three sides.

• Categories with no clear-cut definitions are called natural kind categories.

• For example, tomatoes tend to be a dull scarlet; roughly spherical; with an indentation at the top where the
stem was; about two to four inches in diameter; with a thin but tough skin; and with flesh, seeds, and juice
inside.

• Typical Tomatoes

• Most knowledge about natural kinds will actually be about their typical instances:

x∈Typical(Tomatoes) ⇒ Red(x)∧Round(x).
• Although measures are not numbers, we can still compare them, using an ordering symbol such as >. For example, we might
well believe that Norvig’s exercises are tougher than Russell’s, and that one scores less on tougher exercises:
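A hedged sketch of such axioms (the predicate names Wrote, Difficulty, and ExpectedScore follow the textbook's style):

e1∈Exercises ∧ e2∈Exercises ∧ Wrote(Norvig, e1) ∧ Wrote(Russell, e2) ⇒ Difficulty(e1) > Difficulty(e2)
e1∈Exercises ∧ e2∈Exercises ∧ Difficulty(e1) > Difficulty(e2) ⇒ ExpectedScore(e1) < ExpectedScore(e2)

Conversion between units is done by equating multiples of one unit to another, for example Centimeters(2.54 × d) = Inches(d).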

Similar axioms can be written for pounds and kilograms, seconds and days, and dollars and cents.
Measures can be used to describe objects as follows:

Diameter(Basketball12)=Inches(9.5)

ListPrice(Basketball12)=$(19)

Weight(BunchOf({Apple1,Apple2,Apple3})) = Pounds(2)

d∈Days ⇒ Duration(d)=Hours(24).
Objects: Things and Stuff
• Suppose I have some butter and an aardvark in front of me. I can say there is one aardvark, but there is no obvious number of “butter-objects,” because any part of a butter-object is also a butter-object.

• Linguists distinguish between count nouns, such as aardvarks, holes, and theorems, and mass nouns, such as butter, water, and energy.

• any part of a butter-object is also a butter-object:

b∈Butter∧PartOf(p,b) ⇒ p∈Butter.

• We can now say that butter melts at around 30 degrees centigrade:

b∈Butter ⇒ MeltingPoint(b,Centigrade(30)).
Intrinsic and Extrinsic Properties:

• Intrinsic Properties: They belong to the very substance of the object, rather than to the object as a whole; when an object is divided, each piece retains its intrinsic properties—things like density, boiling point, flavour, color, ownership, and so on.

• Extrinsic Properties: weight, length, shape, and so on—are not retained under
subdivision.
EVENTS
• Consider the action "Shoot." In a discrete, instantaneous world, this action might be represented
as a proposition like Shoot(t) to denote that the action "Shoot" happens at time t.

• Simplified representations

• Fluents represent aspects of the world that change.

• For instance, a fluent like HaveArrow(t) could denote whether the agent has an arrow at time t.

• Successor-State Axioms: These axioms specify how the fluents change over time based on
actions.

• For example, a successor-state axiom might state that HaveArrow(t + 1) is true if and only if the agent already had an arrow at time t and did not shoot at time t.
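A minimal sketch of this axiom in the time-indexed propositional style above:

HaveArrow(t + 1) ⇔ (HaveArrow(t) ∧ ¬Shoot(t))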
• Consider a continuous action, such as filling a bathtub.

• A successor-state axiom can say that the tub is empty before the action and full when the
action is done, but it can’t talk about what happens during the action.

• It also can’t easily describe two actions happening at the same time—such as brushing one’s
teeth while waiting for the tub to fill.

• To handle such cases we introduce an approach known as event calculus.

• The objects of event calculus are events, fluents, and time points. At(Shankar, Berkeley) is a
fluent: an object that refers to the fact of Shankar being in Berkeley. The event E1 of
Shankar flying from San Francisco to Washington, D.C., is described as
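E1 ∈ Flyings ∧ Flyer(E1, Shankar) ∧ Origin(E1, SF) ∧ Destination(E1, DC)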
• To assert that a fluent is actually true starting at some point in time t1 and continuing to time t2, we use
the predicate T, as T(At(Shankar, Berkeley),t1,t2).

• Similarly, we use Happens(E1,t1,t2) to say that the event E1 actually happened, starting at time t1 and
ending at time t2.

The complete set of predicates for one version of the event calculus is:
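One such set is roughly the following:

T(f, t1, t2) — fluent f is true for all times between t1 and t2
Happens(e, t1, t2) — event e starts at time t1 and ends at time t2
Initiates(e, f, t) — event e causes fluent f to start to hold at time t
Terminates(e, f, t) — event e causes fluent f to cease to hold at time t
Initiated(f, t1, t2) — fluent f becomes true at some point between t1 and t2
Terminated(f, t1, t2) — fluent f ceases to be true at some point between t1 and t2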
• We can describe the effects of a flying event:
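A sketch of the effect axiom for the event E1 above, in this notation (the exact form varies between presentations):

Happens(E1, t1, t2) ⇒ Terminates(E1, At(Shankar, SF), t1) ∧ Initiates(E1, At(Shankar, DC), t2)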

• We assume a distinguished event, Start, that describes the initial state by saying which fluents are true (using Initiates) or false (using Terminates) at the start time.
• Assume an event happens between time t1 and t3, and at t2 somewhere in that time interval the
event changes the value of fluent f , either initiating it (making it true) or terminating it (making it
false).

• Then at time t4 in the future, if no other intervening event has changed the fluent (either
terminated or initiated it, respectively), then the fluent will have maintained its value. Formally,
the axioms are:
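One way to write these axioms, consistent with the description above (a hedged sketch; the ordering constraints encode "t2 lies inside the event and t4 lies in the future"):

Happens(e, t1, t3) ∧ Initiates(e, f, t2) ∧ ¬Terminated(f, t2, t4) ∧ t1 ≤ t2 ≤ t3 ≤ t4 ⇒ T(f, t2, t4)
Happens(e, t1, t3) ∧ Terminates(e, f, t2) ∧ ¬Initiated(f, t2, t4) ∧ t1 ≤ t2 ≤ t3 ≤ t4 ⇒ ¬T(f, t2, t4)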
TIME
• Event calculus opens us up to the possibility of talking about time points and time intervals.
• We will consider two kinds of time intervals: moments and extended intervals.
• The distinction is that only moments have zero duration:
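Partition({Moments, ExtendedIntervals}, Intervals)
i ∈ Moments ⇔ Duration(i) = Seconds(0)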

• The time scale is arbitrary; we will measure it in seconds and say that the moment at
midnight (GMT) on January 1, 1900, has time 0.

• The functions Begin and End pick out the earliest and latest moments in an interval, and the
function Time delivers the point on the time scale for a moment.

• The function Duration gives the difference between the end time and the start time.
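Using the Time function above, this can be written as:

Interval(i) ⇒ Duration(i) = (Time(End(i)) − Time(Begin(i)))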
• To make these numbers easier to read, we also introduce a function Date, which takes six
arguments (hours, minutes, seconds, day, month, and year) and returns a time point:
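For example (illustrative):

Time(Begin(AD2001)) = Date(0, 0, 0, 1, Jan, 2001)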

Two intervals Meet if the end time of the first equals the start time of the second. The
complete set of interval relations (Allen, 1983) is shown below
• Predicates on time intervals.
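In one standard formulation, these interval relations can be written as:

Meet(i, j) ⇔ End(i) = Begin(j)
Before(i, j) ⇔ End(i) < Begin(j)
After(j, i) ⇔ Before(i, j)
During(i, j) ⇔ Begin(j) < Begin(i) ∧ End(i) < End(j)
Overlap(i, j) ⇔ Begin(i) < Begin(j) < End(i) < End(j)
Begins(i, j) ⇔ Begin(i) = Begin(j)
Finishes(i, j) ⇔ End(i) = End(j)
Equals(i, j) ⇔ Begin(i) = Begin(j) ∧ End(i) = End(j)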

• To say that the reign of Elizabeth II immediately followed that of George VI, and the
reign of Elvis overlapped with the 1950s, we can write the following:
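Meet(ReignOf(GeorgeVI), ReignOf(ElizabethII))
Overlap(Fifties, ReignOf(Elvis))
Begin(Fifties) = Begin(AD1950)
End(Fifties) = End(AD1959)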
Fluents and Objects
• Physical objects, in the context of ontological engineering, are conceptualized as generalized
events.

• This perspective treats a physical object as a chunk of space–time, acknowledging its evolution
and changes over time.

• Consider the United States of America (USA) as an example. It can be viewed as an event that
began in 1776 with the union of 13 states and has continued to evolve, now comprising 50 states.

• To describe the changing properties of the USA, state fluents are employed. These are logical
expressions representing properties that may vary over time.

• Over time, the instantiation of President(USA) may transition from one individual to another as elections occur.
• In ontological engineering, a term such as President(USA) is constrained to denote exactly one object in a given model structure.

• Consider the example of the USA presidency, where President(USA) denotes an object that is George Washington from 1789 to 1797, John Adams from 1797 to 1801, and so forth.

• To express that George Washington was the president throughout the year 1790, a temporal relation
T is employed.

• This relation signifies that the object denoted by President(USA) is equal to George Washington
during the time interval from the beginning to the end of the year 1790.
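In the notation above, this can be sketched as:

T(Equals(President(USA), GeorgeWashington), Begin(AD1790), End(AD1790))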

• The ontology maintains a distinction between time indices and fluents. The term President(USA)
serves as a consistent denotation for the evolving entity holding the presidency.
Mental Objects and Modal Logic
• Agents with self-awareness possess knowledge about their own beliefs and reasoning capabilities.

• This knowledge is valuable for controlling inference and decision-making.

• Scenario with Alice and Bob:


• Consider a scenario involving two agents, Alice and Bob.

• Alice asks Bob, "What is the square root of 1764?"

• Bob responds, "I don't know."

• Importance of Self-Awareness:
• Self-awareness allows Bob to introspect on his own knowledge and reasoning abilities.

• If Alice insists, saying, "Think harder," Bob should recognize that further contemplation may lead
to an answer because mathematical knowledge can be deduced.
• Contrast with a Different Scenario:

• Contrast this with a different scenario where Alice asks, "Is the president sitting down
right now?"

• Bob, with self-awareness, realizes that more thought won't yield an answer because the
information about the president's current posture is not within his knowledge base.

• Modelling Mental Objects and Processes:

• The model of mental objects involves representing the beliefs, knowledge, and reasoning
processes within an agent's "mind" or knowledge base.

• While not requiring precise predictions of processing times, this model facilitates an
understanding of an agent's ability to deduce, learn, and respond to inquiries.
• Propositional attitudes are attitudes that an agent can have toward mental objects: attitudes such as Believes, Knows, Wants, and Informs.

• The difficulty is that these attitudes do not behave like “normal” predicates. For example,
suppose we try to assert that Lois knows that Superman can fly:
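Knows(Lois, CanFly(Superman))

One minor issue is that we normally think of CanFly(Superman) as a sentence, but here it appears as a term.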

• A more serious problem is that, if it is true that Superman is Clark Kent, then we must conclude
that Lois knows that Clark can fly, which is wrong because (in most versions of the story) Lois
does not know that Clark is Superman.

• This is a consequence of the fact that equality reasoning is built into logic.
• If our agent knows that 2+2 = 4 and 4 < 5, then we want our agent to know that 2 + 2 < 5. This
property is called referential transparency.

• Modal logic is designed to address this problem.

• Regular logic is concerned with a single modality, the modality of truth, allowing us to express “P is true” or “P is false.”

• Modal logic includes special modal operators that take sentences (rather than terms) as arguments.

• For example, “A knows P” is represented with the notation K_A P, where K is the modal operator for knowledge. It takes two arguments, an agent (written as the subscript) and a sentence.

• The syntax of modal logic is the same as first-order logic, except that sentences can also be formed
with modal operators
• For example, we can say that even though Lois doesn’t know whether Superman’s secret identity
is Clark Kent, she does know that Clark knows:
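In the modal notation, with K_A standing for “A knows”:

K_Lois [K_Clark Identity(Superman, Clark) ∨ K_Clark ¬Identity(Superman, Clark)]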

• Modal logic solves some tricky issues with the interplay of quantifiers and knowledge. The
English sentence “Bond knows that someone is a spy” is ambiguous. The first reading is that
there is a particular someone who Bond knows is a spy; we can write this as
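∃x K_Bond Spy(x)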

• The second reading is that Bond just knows that there is at least one spy:
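K_Bond ∃x Spy(x)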
• we can say that agents are able to draw conclusions; if an agent knows P and knows that
P implies Q, then the agent knows Q:
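(K_a P ∧ K_a(P ⇒ Q)) ⇒ K_a Q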

• K_a(P ∨ ¬P) is a tautology; every agent knows every proposition P is either true or false.

• (K_a P) ∨ (K_a ¬P) is not a tautology; in general, there will be lots of propositions that an agent does not know to be true and does not know to be false.

• Knowledge versus belief: an agent can believe something that is false, but if you know something, it must be true, and we have the axiom:
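K_a P ⇒ P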

• logical agents are able to introspect on their own knowledge. If they know something,
then they know that they know it:
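K_a P ⇒ K_a(K_a P)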
Other Modal Operators
• X P: "P will be true in the next time step"

• This operator, denoted as "Next," indicates that the proposition P will be true in the immediately
following time step.

• F P: "P will eventually (Finally) be true in some future time step"

• This operator, denoted as "Finally," signifies that the proposition P will become true at some point in
the future.

• G P: "P is always (Globally) true"

• This operator, denoted as "Globally," asserts that the proposition P is true at all time steps.

• P U Q: "P remains true until Q occurs"

• This operator, denoted as "Until," indicates that the proposition P remains true until the proposition Q
becomes true.
• Suppose we have a proposition P representing "It is raining," and Q representing "I have an
umbrella."

• Using temporal logic:

• X P: It will rain in the next time step.

• F Q: Eventually, I will have an umbrella in the future.

• G P: It is always true that it is raining.

• P U Q: It will rain until I have an umbrella.


END OF MODULE 4
