
LC1

Artificial intelligence: is an expansive branch of computer science that focuses on building


smart machines, which are able to perform tasks that normally require human intelligence.

Thanks to AI, these machines can learn from experience, adjust to new inputs, and
perform human-like tasks.

Four schools of thought of what AI is:

                                                 | Measure success against human performance | Measure against an ideal concept of intelligence (rationality)
Concerned with thought processes and reasoning   | Systems that think like humans            | Systems that think rationally
Concerned with action and behavior               | Systems that act like humans              | Systems that act rationally

The Turing Test: a computer passes the test of intelligence if it can fool a human interrogator.

Major components of AI: knowledge, reasoning, language, understanding, learning.

What would a computer need to pass the Turing test:

- Natural language processing (NLP): to communicate with the examiner.


- Knowledge representation: to store and retrieve information provided before or during
interrogation.
- Automated reasoning: to use the stored information to answer questions and to draw
new conclusions.
- Machine learning: to adapt to new conditions and to detect and diagnose patterns.
- Vision (for Total Turing test): to recognize the examiner’s actions and various objects
presented by the examiner.
- Motor control (total test): to act upon objects as requested.
- Other senses (total test): such as audition, smell, touch, etc.

Agent: is anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through actuators.

Agent = Architecture + Program


An agent program runs in cycles of:

1- Perceive.
2- Think.
3- Act.

The agent function maps from percept histories to actions: [f: P* → A].
The agent program runs on the physical architecture to produce f.
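
A minimal sketch of this perceive-think-act cycle in Python (the thermostat environment and the threshold here are made up for illustration):

def agent_function(percept_history):
    # f: P* -> A, maps the full percept history to an action
    latest_temp = percept_history[-1]
    return "heat_on" if latest_temp < 20 else "heat_off"

percepts = []
for temp in [18, 19, 21, 22]:           # 1- Perceive: simulated sensor readings
    percepts.append(temp)
    action = agent_function(percepts)   # 2- Think: map percept history to an action
    print(temp, "->", action)           # 3- Act (here, just printed)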

Rational Agent: For each possible percept sequence, a rational agent should select an action
that is expected to maximize its performance measure.

Rationality is relative to a performance measure.

An agent's rationality is judged based on:

- The performance measure that defines the criterion of success.


- The agent's prior knowledge of the environment.
- The possible actions that the agent can perform.
- The agent’s percept sequence to date.

When we define a rational agent, we group these properties under P.E.A.S which stands
for:

- Performance
- Environment
- Actuators
- Sensors
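
For example, the taxi-driving agent discussed below can be written down as a simple P.E.A.S structure (a sketch; the exact entries are illustrative):

taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profit"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn"],
    "Sensors":     ["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
}
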
LC2

Properties of task environments (IMPORTANT):

1) Fully observable – Partially observable (Environment):

- If an agent's sensors give it access to the complete state of the environment at each point in time, then the task environment is fully observable; otherwise, it is partially observable.

2) Deterministic – Stochastic – Strategic (Environment):

- If the next state of the environment is completely determined by the current state and the
action executed by the agent, then the environment is deterministic; otherwise, it is
stochastic.
- If the environment is deterministic except for the actions of other agents, then the
environment is strategic.

3) Episodic – Sequential (Decisions):

- If the current decision doesn't affect the next decision, it’s Episodic.
- If the current decision could affect all future decisions, it’s Sequential.

4) Static – Dynamic – (Semi Dynamic or Semi Static) (Environment):

- If the environment can change while an agent is deliberating (working), then the
environment is dynamic for that agent; otherwise, it is static.
- If the environment itself does not change with the passage of time but the agent's
performance score does, then the environment is Semi dynamic or Semi Static.

5) Discrete – Continuous (Actions):

- If the number of possible actions and percepts is limited and each is distinct, then the environment is Discrete.
- If actions or percepts range over continuous values (e.g., steering angles while driving), then the environment is Continuous.

6) Single Agent – Multi Agent (Number of Agents):

- If an agent operates by itself in the environment, then it's Single Agent.
- If an agent operates alongside other agents in the environment, then it's Multi-Agent.

Examples of task environments (IMPORTANT):

Crossword Puzzle

- Fully observable
- Deterministic
- Sequential
- Static
- Discrete
- Single Agent

Chess

- Fully observable
- Strategic
- Sequential
- Semi dynamic
- Discrete
- Multi-Agent

Backgammon

- Fully observable
- Stochastic
- Sequential
- Static
- Discrete
- Multi-Agent

Taxi Driving

- Partially observable
- Stochastic
- Sequential
- Dynamic
- Continuous
- Multi-Agent

Part-Picking Robot

- Partially observable
- Stochastic
- Episodic
- Dynamic
- Continuous
- Single Agent

Agent types:

1) Simple Reflex Agents:

- Simple reflex agents select an action based on the current percept only, ignoring the percept history (a toy sketch appears after this list of agent types).
- They can only work if the environment is fully observable, that is, if the correct action can be chosen based on the current percept alone.
- They use a mapping from states to actions.

2) Model-Based Reflex Agents:

- Same as (Simple Reflex Agents) but take into consideration a model of the world.
- The model of the world captures how the world evolves independently of the agent, and how the agent's actions affect the world.
- Handle partial observability by keeping track of the part of the world it can’t see
now.
- Internal state depending on the percept history (best guess).

3) Goal-Based Agents:

- Same as (Model-Based Reflex Agents) but has a goal to achieve.


- Knowing the current state of the environment is not enough. The agent needs
some goal information.
- Agent program combines the goal information with the environment model to
choose the actions that achieve that goal.
- Consider the future with “What will happen if I do A?”.
- Flexible as knowledge supporting the decisions is explicitly represented and can
be modified.

4) Utility-Based Agents:

- Same as (Goal-Based Agents) but tries to improve its performance in achieving the goal.
- Sometimes achieving the desired goal is not enough. We may look for quicker,
safer, cheaper trip to reach a destination.
- Agent happiness should be taken into consideration. We call it utility.
- A utility function is the agent’s performance measure.
- Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility.
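
A minimal sketch of the first type, a simple reflex agent, in the classic two-square vacuum world (the world and rule table are illustrative, not from these notes):

# Condition-action rules: the action depends only on the current percept.
RULES = {
    ("A", "dirty"): "suck",
    ("B", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    location, status = percept
    return RULES[(location, status)]   # mapping from states to actions

print(simple_reflex_agent(("A", "dirty")))   # -> suck
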
Learning Agents have four conceptual components (IMPORTANT):

- Learning element: responsible for making improvements.


- Performance element: responsible for selecting external actions. It is what we
considered as agent so far.
- Critic: evaluates how well the agent is doing with regard to a fixed performance standard.
- Problem generator: allows the agent to explore.

LC3

Problem solving as search:

1) Define the problem through:


- Goal formulation.
- Problem formulation.

2) Solving the problem as a two-stage process:


- Search: “mental” or “offline” exploration of several possibilities.
- Execute the solution found.

Problem Formulation:

- Initial state: The state in which the agent starts.


- States: All states reachable from the initial state by any sequence of actions (State
space).
- Actions: possible actions available to the agent. At a state s, Actions(s) returns the set
of actions that can be executed in state s. (Action space).
- Transition model: A description of what each action does.
- Goal test: determines if a given state is a goal state.
- Path cost: A function that assigns a numeric cost to a path.

State space: the set of physical configurations of the world.


Search space: an abstract configuration represented by a search tree or graph of possible
solutions.
Search tree: models the sequence of actions as a root (Initial State), branches (actions), and
nodes (results from actions).
Expand: a function that, given a node, creates all of its children nodes.
A node has a parent, children, depth, path cost, associated state in the state space.
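
A node can be sketched as a small data structure (illustrative Python; the helper names passed to expand are made up):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                       # associated state in the state space
    parent: Optional["Node"] = None  # parent node (None for the root)
    action: Any = None               # action that produced this node
    path_cost: float = 0.0           # cost of the path from the root
    depth: int = 0                   # number of actions from the root

def expand(node, actions, transition, step_cost):
    # Given a node, creates all children nodes.
    return [Node(transition(node.state, a), node, a,
                 node.path_cost + step_cost(node.state, a), node.depth + 1)
            for a in actions(node.state)]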

Search Space Regions:

1) Explored (Closed List, Visited Set)


2) Frontier (Open List)
3) Unexplored

Search Strategies:

- A strategy is defined by picking the order of node expansion.


- Strategies are evaluated along the following dimensions:
1) Completeness (Does it always find a solution if one exists?).
2) Time Complexity (Number of nodes generated/expanded).
3) Space Complexity (Maximum number of nodes in memory).
4) Optimality (Does it always find a least-cost solution?).

Time and space complexity are measured in terms of (IMPORTANT):

- b = maximum branching factor of the search tree.
- d = depth of the least-cost solution.
- m = maximum depth of the state space.

Evaluation functions used to guide search:

- g(n) = the path cost from the start node to node n.
- h(n) = the estimated cost of the cheapest path from n to the goal.
- f(n) = g(n) + h(n) = the estimated cost of the cheapest solution through n.

Types of Search (IMPORTANT):

- Uninformed Search (Uses no domain knowledge).


1. Breadth-first search (BFS): expands the shallowest unexpanded node (implemented using a FIFO queue).
2. Depth-first search (DFS): expands the deepest unexpanded node (implemented using a stack).
3. Depth-limited search (DLS): depth-first search with a depth limit.
4. Iterative-deepening search (IDS): DLS with an increasing limit.
5. Uniform-cost search (UCS): expands the node n with the lowest path cost g(n) (implemented using a priority queue).

- Informed Search (Uses domain knowledge).


1. Greedy best-first search: expands the node that appears nearest to the goal according to h(n).
2. A* search: expands the node with the lowest f(n) = g(n) + h(n). A minimal sketch follows.
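
A minimal, self-contained A* sketch on a toy graph, using f(n) = g(n) + h(n), a priority-queue frontier (open list), and an explored set (closed list); the graph and heuristic values are made up for illustration:

import heapq

graph = {"S": [("A", 1), ("B", 4)],   # state: [(neighbor, step cost), ...]
         "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)],
         "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}  # heuristic: estimated cost to the goal

def a_star(start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, state, path)
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)   # expand lowest f(n)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nbr, cost in graph[state]:
            heapq.heappush(frontier,
                           (g + cost + h[nbr], g + cost, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("S", "G"))   # -> (['S', 'A', 'B', 'G'], 4)
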
LC6

Learning: is the ability to adapt to new observations and solve new problems.

Types of learning:
- Direct Instruction: Involves receiving direct instructions on how to respond to certain
situations.
- Learning in Problem Solving: Learning ways of problem solving from one's own experience
without an instructor/advisor. Does not involve an increase in knowledge, just better
methods of using the knowledge.
- Neural Nets Learning: Learning by iterative improvement, starting with an initial
(possibly random) solution, then improving on the solution step-by-step.
- Explanation Based Learning (EBL): To extract the concept behind the information
within one example, and generalize to other instances.

Inputs to EBL programs:


- Training Example
- Goal Concept
- Operationality Criterion
- Domain-Theory (Knowledge Base)

Machine learning: any device whose actions are affected by past experience; equivalently, a
computer program is said to learn from experience with respect to some class of tasks and a
performance measure if its performance on those tasks improves with experience.

Training dataset: is a dataset used to train the model.

Test dataset: is a dataset used to evaluate the trained model's performance.

Neural network: a model of reasoning based on the human brain.

Artificial Neural Network: a mathematical model for learning inspired by biological neural
networks.
- Artificial neural networks model mathematical functions that map inputs to outputs based
on the structure and parameters of the network.

Neural network structure:


1- A set of input connections brings in activations from other neurons.
2- A processing unit sums the inputs, and then applies a non-linear activation function.
3- An output line transmits the result to output neurons.
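
A single neuron can be sketched directly from these three steps (pure Python; the weights and bias are made up):

import math

def neuron(inputs, weights, bias):
    # 1- sum the incoming activations, weighted by the connections
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # 2- apply a non-linear activation function (sigmoid here)
    return 1 / (1 + math.exp(-total))

# 3- the returned value is what the output line transmits onward
print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))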

Activation function: performs a mathematical operation on the signal output.
Most common activation functions:
- Threshold Function
- Piecewise Linear Function
- Sigmoidal (S shaped) function
- Tan Sigmoid function

Threshold function examples:


- Step function: gives 0 before a certain threshold is reached and 1 after the threshold is
reached.
- Logistic function: gives as output any real number from 0 to 1, thus expressing graded
confidence in its judgment.
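
Both examples can be written side by side (a threshold of 0 is assumed for illustration):

import math

def step(x, threshold=0.0):
    # gives 0 before the threshold is reached and 1 after it
    return 1 if x >= threshold else 0

def logistic(x):
    # gives a real number in (0, 1): graded confidence
    return 1 / (1 + math.exp(-x))

print(step(-0.3), step(0.3))                              # -> 0 1
print(round(logistic(-0.3), 3), round(logistic(0.3), 3))  # -> 0.426 0.574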

Multi-Output Neural Network: neural networks that produce more than one output, meaning more than one decision when given inputs.

Multi-Layer Neural Networks: artificial neural networks with an input layer, an output layer, and at least one hidden layer.

Gradient descent: is an algorithm for minimizing loss when training neural networks.

- Gradient Descent algorithm works by starting with a random choice of weights. This is
our naive starting place, where we don’t know how much we should weigh each input.
Then repeating this step: {Calculate the gradient based on all data points that will lead to
decreasing loss. Then update weights according to the gradient}.
- The problem with this kind of algorithm is that it requires calculating the gradient based
on all data points, which is computationally costly.

- Ways to solve this problem are:


1- Stochastic Gradient Descent: where the gradient is calculated based on one point
chosen at random.
2- Mini-Batch Gradient Descent: which computes the gradient based on a few points
selected at random, thus finding a compromise between computation cost and accuracy.
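
A minimal sketch of mini-batch gradient descent fitting a one-weight linear model (assumes numpy; the data, learning rate, and batch size are made up):

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 100)
y = 3.0 * X + rng.normal(0, 0.1, 100)   # true weight is 3.0

w, lr, batch_size = 0.0, 0.5, 10        # start with an arbitrary weight
for epoch in range(100):
    idx = rng.permutation(100)
    for start in range(0, 100, batch_size):
        b = idx[start:start + batch_size]               # a few points at random
        grad = np.mean(2 * (w * X[b] - y[b]) * X[b])    # gradient of squared loss
        w -= lr * grad                                  # step against the gradient
print(round(w, 2))   # close to 3.0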

Backpropagation: is the main algorithm used for training neural networks with hidden layers. It
does so by starting with the errors at the output units, computing the gradient of the loss with
respect to the weights of the previous layer, and repeating the process until the input layer is reached.

Training by forward propagation consists of 2 steps:

- Feed the values forward through the network.
- Calculate the error and propagate it back to the earlier layers.
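
A compact sketch of both steps for a tiny network with one hidden layer (assumes numpy; the sizes, data, and learning rate are illustrative):

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 2))            # input -> hidden weights
W2 = rng.normal(size=(1, 2))            # hidden -> output weights
x, target, lr = np.array([0.5, -0.2]), np.array([1.0]), 0.5

for _ in range(1000):
    # Step 1: feed the values forward
    hidden = sigmoid(W1 @ x)
    out = sigmoid(W2 @ hidden)
    # Step 2: calculate the error and propagate it back
    d_out = (out - target) * out * (1 - out)           # error at the output unit
    d_hidden = (W2.T @ d_out) * hidden * (1 - hidden)  # error at the hidden layer
    W2 -= lr * np.outer(d_out, hidden)
    W1 -= lr * np.outer(d_hidden, x)

print(out)   # approaches the target 1.0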

Tasks neural networks (NNs) perform:


- Classification: NNs organize patterns or datasets into predefined classes.
- Regression: NNs predict the expected output from given input.
- Clustering: NNs identify a unique feature of the data and classify it without any
knowledge of prior data.

LC7

Knowledge: {sentences} in a knowledge representation language (formal language).

Sentence: is a representation about the world.

Knowledge representation and reasoning (KR, KRR): is the part of AI which is concerned
with AI agents thinking and how thinking contributes to intelligent behavior of agents.
Knowledge representation (KR) is the study of:
- how knowledge and facts about the world can be represented.
- what kinds of reasoning can be done with that knowledge.

Logical AI: is an agent that uses (KR-KRR) to determine a course of action to reach its goals.

Components of AI system to display intelligent behavior:


- Perception
- Learning
- KRR
- Planning
- Execution

Facts: are the truths about the real world and what we represent.

Knowledge: is awareness or familiarity gained by experiences of facts, data, and situations.

Kinds of knowledge that need to be represented in AI systems:


- Object: All the facts about objects in our world domain.
- Events: Events are the actions which occur in our world.
- Performance: It describes behavior which involves knowledge about how to do things.
- Meta-knowledge: It is knowledge about what we know.

A knowledge-based (KB) agent is composed of:

1- Knowledge base: A set of sentences that describe the world and its behavior in
some formal (representational) language. (domain specific).
2- Inference engine: A set of procedures that use the representational language to infer
new facts from known ones or answer a variety of knowledge based (KB) queries.
(domain independent).

- Knowledge-based agents work by perceiving input from the environment; the input is passed to the agent's inference engine, which consults the knowledge base and decides the action to output by comparing the input with the stored knowledge.

The knowledge based agent (KBA) must be able to:


- Represent states, actions, etc.
- Incorporate new percepts.
- Update internal representations of the world.
- Deduce hidden properties of the world.
- Deduce appropriate actions.
Expert System (ES): is a computer program that is designed to solve complex problems and
to provide decision-making ability like a human expert using both facts and heuristics.

- Expert systems perform this by extracting knowledge from its knowledge base using the
reasoning and inference rules according to the user queries. (ES can’t learn).

Characteristics of an Expert system:


- High Performance: provides high performance for solving any type of complex problem
of a specific domain with high efficiency and accuracy.
- Understandable: responds in a way that can be easily understandable by the user. It
can take input in human language and provide the output in the same way.
- Reliable: It is highly reliable for generating efficient and accurate output.
- Highly responsive: ES provides the result for any complex query within a very short
period of time.

Components of an Expert system:


1) User Interface: takes input from the user and displays output back to the user.
2) Inference Engine: It applies inference rules to the knowledge base to derive a
conclusion or deduce new information. It helps in deriving an error-free solution to
queries asked by the user.

Types of inference engines:


- Deterministic Inference engine: The conclusions drawn from this type of
inference engine are assumed to be true. (based on facts and rules).
- Probabilistic Inference engine: This type of inference engine contains
uncertainty in conclusions. (based on the probability).

Methods used by inference engines to derive solutions:


- Forward Chaining: It starts from the known facts and rules, and applies the
inference rules to add their conclusions to the known facts (a minimal sketch
appears after this section).
- Backward Chaining: It is a backward reasoning method that starts from the goal
and works backward to prove the known facts.

3) Knowledge base: is a type of storage that stores knowledge acquired from the different
experts of the particular domain. (The bigger the KB the more precise will be the ES).

Components of a Knowledge base (KB):

- Knowledge Representation: It is used to formalize the knowledge stored in the knowledge base using If-else rules.
- Knowledge Acquisitions: It is the process of extracting, organizing, and
structuring the domain knowledge, specifying the rules to acquire the knowledge
from various experts, and storing that knowledge into the knowledge base.
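
A minimal sketch of the forward-chaining method described above, with made-up facts and if-then rules:

facts = {"fever", "cough"}
rules = [({"fever", "cough"}, "flu"),   # IF fever AND cough THEN flu
         ({"flu"}, "rest")]             # IF flu THEN rest

changed = True
while changed:                          # keep applying rules to the known facts...
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)       # ...adding each conclusion as a new fact
            changed = True

print(facts)   # -> {'fever', 'cough', 'flu', 'rest'}
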
Techniques of knowledge representation:
- Logical Representation: a language with concrete rules that deals with
propositions and leaves no ambiguity in representation; it means drawing
conclusions based on various conditions.

Logical representation categories:


- Propositional logic (PL): is the simplest form of logic, where all statements are
made of propositions, each of which is either true or false.
- Predicate logic: is an extension of propositional logic that assumes the world
contains not only facts (as propositional logic does) but also objects, relations,
and functions. (ex: Maged and Ahmed are brothers => Brothers(Maged, Ahmed)).

Proposition: is a declarative statement which is either true or false.

Types of propositions:
- Atomic Propositions: simple propositions, each consisting of a single proposition
symbol. (ex: The Sun rises from the East).
- Compound propositions: constructed by combining simpler or atomic propositions
using logical connectives. (ex: It is raining today, and the street is wet).
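
The compound example can be checked mechanically by treating each atomic proposition as a Boolean variable (illustrative Python):

raining = True                              # atomic: "It is raining today"
street_wet = True                           # atomic: "The street is wet"
compound = raining and street_wet           # compound built with the connective AND
implication = (not raining) or street_wet   # material implication: raining -> street_wet
print(compound, implication)                # -> True True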

Limitations of Propositional logic:


- We cannot represent relations like (some) with propositional logic. (ex: Some humans
are intelligent).

Uncertainty: a situation where we are not sure whether a predicate is true or not. (In A→B, A is the predicate.)

Causes of uncertainty: information from unreliable sources, experimental errors, equipment faults, etc.

Probabilistic reasoning: is a way of knowledge representation where we apply the concept of
probability to indicate the uncertainty in knowledge, combining probability theory with logic to
handle that uncertainty.
Bayes' rule is one of the ways to solve problems with uncertain knowledge. It gives the
conditional probability of an event (A) given that an event (B) has occurred.

Bayes' Formula: P(A|B) = P(B|A) * P(A) / P(B)
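
A worked numeric check of the formula (all probabilities are made up for illustration):

# A = has the disease, B = test is positive
p_a = 0.01                # P(A)
p_b_given_a = 0.9         # P(B|A)
p_b_given_not_a = 0.05    # P(B|not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)   # total probability of B
p_a_given_b = p_b_given_a * p_a / p_b                   # Bayes' rule
print(round(p_a_given_b, 3))   # -> 0.154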

LC8

Machine Learning: a computer program is said to learn from experience with respect to some
class of tasks and a performance measure if its performance on those tasks improves with experience.

- Machine learning is a subset of artificial intelligence.


- The basic principle of machine learning is learning from datasets.
- Deep learning is a subset of machine learning which uses neural networks that imitate
intelligent human behavior and decision-making to solve real-world problems.

A Machine Learning system learns from historical data, builds the learning model, and
whenever it receives new data, predicts the output using this model. The accuracy of the
predicted output depends upon the amount of data; as huge amounts of data help build a
better model which predicts the output more accurately.
Machine learning Branches:

1) Supervised Learning:

- Supervised learning: is a type of machine learning method in which we provide sample
labeled data to the machine learning system in order to train it, and on that basis, it
predicts the output.
- The system creates a model using the labeled data to understand the dataset and
learn from each example; once training and processing are done, we test the model
by providing sample data to check whether it predicts the correct output (a minimal
example appears after this list).

2) Unsupervised Learning:

- Unsupervised learning: is a learning method in which a machine learns without any supervision.
- The training is provided to the machine with a set of data that has not been labeled,
classified, or categorized, and the algorithm needs to act on that data without any
supervision.
- The goal of unsupervised learning is to restructure the input data into new
features or a group of objects with similar patterns.
3) Reinforcement Learning:

- Reinforcement learning (RL): is learning a behavior strategy (a policy) that maximizes
the long-term sum of rewards through direct interaction (trial-and-error) with an
unknown and uncertain environment.
- RL is one step away from AI.
- Reward: A feedback returned to the agent from the environment to evaluate the action
of the agent.
- Policy: is a strategy applied by the agent for the next action based on the current state.
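
A minimal supervised-learning example in the spirit of the description above: train on labeled samples, then predict labels for new data using the nearest training sample (pure Python; toy data):

# Labeled training data: (feature value, label)
train = [(1.0, "small"), (1.2, "small"), (3.8, "large"), (4.1, "large")]

def predict(x):
    # 1-nearest-neighbor: return the label of the closest training sample
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

print(predict(1.1), predict(3.9))   # -> small large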

History: is the sequence of observations, actions, rewards. (Ht = O1,R1,A1,...,At−1,Ot,Rt).

State: is the information used to determine what happens next, or is a function of the history:
Sta = f (Ht).

Agent State (Sta): is the agent’s internal representation. i.e. whatever information the agent
uses to pick the next action that is used by reinforcement learning algorithms.

Information State (Markov State): contains all useful information from the history.
- A state (St) is Markov if and only if P[St+1 | St] = P[St+1 | S1,...,St].

Markov decision process (MDP): is one of the important learning models for RL, it is a
discrete-time stochastic control process. It provides a mathematical framework for modeling
decision making in situations where outcomes are partly random and partly under the control of
a decision maker.
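
A tiny MDP sketch matching this definition, with stochastic transitions, rewards, and repeated Bellman-style value updates (the states, probabilities, and rewards are made up):

# T[state][action] = [(probability, next_state, reward), ...]
T = {"s0": {"go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)],
            "stay": [(1.0, "s0", 1.0)]},
     "s1": {"stay": [(1.0, "s1", 0.0)]}}

gamma = 0.9                  # discount factor
V = {s: 0.0 for s in T}      # value of each state
for _ in range(100):         # repeated value updates
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in T[s].values())
         for s in T}
print({s: round(v, 2) for s, v in V.items()})   # -> {'s0': 10.0, 's1': 0.0}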

Computational Intelligence (CI): is a subset of AI, it is the theory, design, application, and
development of biologically and linguistically motivated computational paradigms.

The three main pillars of CI:

1) Neural Networks.

2) Fuzzy Systems:
- Fuzzy Systems (FS): are systems that use the human language as a source of
inspiration.
- FS model linguistic imprecision and solve uncertain problems based on a
generalization of traditional logic, which enables us to perform approximate
reasoning.
- The term fuzzy refers to things which are not clear or are vague; the field includes fuzzy sets
and systems, fuzzy clustering and classification, and more. In a Boolean system, the truth value
1.0 represents absolute truth and 0.0 represents absolute falsehood, but in fuzzy logic there
are intermediate values too, which are partially true and partially false (a sketch appears
after this list of pillars).
3) Evolutionary Computation:
- Evolutionary Computation (EC): uses biological evolution as a source of inspiration.
- EC solves optimization problems by generating, evaluating and modifying a population
of possible solutions.
- Swarm intelligence (SI): is based on the collective behavior of decentralized, self-organized
systems. It may be natural or artificial.
- Natural examples of SI are (ant colonies, fish schooling, bird flocking, bee swarming,
particle swarm…).
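
A minimal sketch of the fuzzy idea from pillar 2: a membership function assigning intermediate truth values instead of only 0.0 and 1.0 (the temperature thresholds are made up):

def membership_hot(temp_c):
    # degree to which a temperature is "hot":
    # absolutely false below 20, absolutely true above 35, partial in between
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / 15

print(membership_hot(10), membership_hot(27.5), membership_hot(40))   # -> 0.0 0.5 1.0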

Natural Language Processing (NLP):

Natural Language Processing (NLP): is a field of computer science, artificial intelligence, and
computational linguistics concerned with the interactions between computers and human
languages (understanding language and generating language).

NLP Applications:

1) Speech Recognition:
- Leverage deep neural networks to handle speech recognition and natural language
understanding. (Ex: Siri, Google Assistant, Cortana).

2) Machine Translation:
- Ex: Google Translate.

3) Information Extraction:
- Information extraction: is automatically extracting structured information from
unstructured or semi-structured text.

4) Text Summarization.

5) Dialog Systems:
- Ex: Automated online assistants.

Computer Vision:

Computer vision: is making computers understand images and videos.


- Concerned with the theory for building artificial systems that obtain information from
images.
- The image data can take many forms, such as a video sequence, depth images, views
from multiple cameras, or multi-dimensional data from a medical scanner.
Computer Vision Applications:

1) Optical character recognition (OCR):


- A technology to convert scanned docs to text.

2) Face Detection:
- Used in digital cameras.

3) Smile Detection:
- Used in Digital cameras and smart phones.

4) Vision-based biometrics:
- (Fingerprint scanners - Identification by iris patterns – Face recognition systems).

5) Vision as a source of semantic information.

6) Object Categorization.

7) Scene and context categorization.

8) Qualitative spatial information.

9) Vision-based interaction (Gaming):


- Ex: Digimask: put your face on a 3D avatar.

10) Mobile Robots:


- Ex: NASA's Mars Spirit rover.

11) Medical Imaging:


- 3D MRI, CT scans.
- Image guided surgery.

12) Self-driving cars.

Computer Vision Challenges:

- Viewpoint Variation.
- Illumination.
- Scale.
- Deformation.
- Background Clutter.
- Object intra-class variation.
Internet of Things (IOT):

Internet of Things (IOT): is a network of various devices that are connected over the internet
and can collect and exchange data with each other.

- IOT is used to collect and handle the huge amount of data that is required by the
Artificial Intelligence algorithms. In turn, these algorithms convert the data into
useful actionable results that can be implemented by the IOT devices.

IOT Applications:

1) Industrial Automation:
- Monitoring industrial operations.
- Real-time vehicle diagnostics.
- Identification of materials/ goods.

2) Agriculture:
- Offering high-precision crop control.
- Useful data collection.
- Automated farming techniques.

3) Smart Health:
- Smart hospital services.
- Mobile assistance.
- Medical equipment saving.
- Monitoring of elderly people.
- Remote diagnosis.

4) Smart Home:
- Home control services (temp, air, light).
- Safety.
- Human satisfaction.

5) Energy Utilities:
- Creating a fundamental shift in advanced energy production and distribution technology,
management and services while leveraging existing investments in infrastructure and
operations.

6) Smart City:
- Traffic management/ Vehicles.
- Plant maintenance and lighting.
- Irrigation.
- Environment monitoring.
