
Logic Programming-Expert Systems-ML-NLP

Contents
1- What is Intelligence, Reasoning, and Problem Solving?
2- Logic Programming with Python Libraries, and Use Cases of LP
3- Expert Systems and Knowledge Base
4- Machine Learning
5- Natural Language Processing

1- What is Intelligence, Reasoning, and Problem Solving?


Intelligence is the ability of a system to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations.

What is Intelligence Composed of?


Intelligence is intangible. It is composed of:

- Reasoning
- Learning
- Problem Solving
- Perception
- Linguistic Intelligence
What is Artificial Intelligence (AI)?
Artificial Intelligence is a way of making a computer, a computer-controlled robot, or software think intelligently, in a manner similar to how intelligent humans think. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

2- Logic Programming with Python Libraries, and Use Cases of LP

Logic Programming combines two words: logic and programming. It is a programming paradigm in which problems are expressed as facts and rules by program statements, within a system of formal logic. Like other paradigms such as object-oriented, functional, declarative, and procedural programming, it is a particular way to approach programming.

Use Cases of Logic Programming


1. Logic Programming is extensively used in Natural Language Processing (NLP), since understanding language is about recognizing patterns that plain numbers cannot easily represent.
2. It is also used in prototyping models. Since expressions and patterns can be replicated using logic, prototyping is made easy.
3. Pattern-matching algorithms in image processing, speech recognition, and various other cognitive services also use logic programming for pattern recognition.
4. Scheduling and resource allocation are major operations tasks that logic programming can help solve efficiently and completely.
5. Mathematical proofs are also easier to decode using logic programming.

Logic Programming with Python


Logic Programming can be used to solve numerous mathematical problems that ultimately help in building an artificially intelligent machine. We will observe how Logic Programming can be used to evaluate mathematical expressions, make programs learn operations, and form predictions. We will also solve a real problem using two Python libraries that support logic programming: Kanren and SymPy.

- Kanren: a library available on PyPI that simplifies expressing business logic in code by means of logic programming.
- SymPy: an open-source library for symbolic computation in Python, used for manipulating mathematical constructs using symbols.
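As a small illustration of SymPy's symbolic computation (a minimal sketch; the particular expressions below are just examples):

```python
from sympy import symbols, simplify, solve

x = symbols("x")

# Simplify a symbolic expression: (x**2 - 1) / (x - 1) reduces to x + 1.
print(simplify((x**2 - 1) / (x - 1)))    # x + 1

# Solve a quadratic equation symbolically rather than numerically.
print(solve(x**2 - 5*x + 6, x))          # [2, 3]
```

Because SymPy works with symbols instead of floating-point numbers, the results are exact.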

How to Solve Problems with Logic Programming


Logic Programming uses facts and rules to solve problems; that is why they are called the building blocks of Logic Programming. A goal must also be specified for every program. To understand how a problem can be solved in logic programming, we need to know about these building blocks, facts and rules:

- Facts: statements that are unconditionally true in the problem domain, e.g. "Homer is a parent of Bart".
- Rules: logical conclusions that hold when given conditions are satisfied, e.g. "X is a grandparent of Z if X is a parent of some Y and Y is a parent of Z".

3- Expert Systems and Knowledge Base

Expert systems (ES) are one of the prominent research domains of AI. They were introduced by researchers at the Computer Science Department of Stanford University.

What are Expert Systems?


Expert systems are computer applications developed to solve complex problems in a particular domain, at the level of extraordinary human intelligence and expertise.
Characteristics of Expert Systems
- High performance
- Understandable
- Reliable
- Highly responsive
Capabilities of Expert Systems
Expert systems are capable of:

- Advising, instructing, and assisting humans in decision making
- Demonstrating
- Deriving a solution
- Diagnosing and explaining
- Interpreting input
- Predicting results
- Justifying the conclusion
- Suggesting alternative options to a problem

Components of Expert Systems: The components of ES include:

A. Knowledge Base
B. Inference Engine
C. User Interface

A- Knowledge Base
The knowledge base of an ES is a store of both factual and heuristic knowledge.

- Factual Knowledge: information widely accepted by the knowledge engineers and scholars in the task domain.
- Heuristic Knowledge: knowledge of practice, accurate judgement, evaluation, and educated guessing.

Knowledge representation
It is the method used to organize and formalize the knowledge in the knowledge base, typically in the form of IF-THEN rules.
B- Inference Engine
The Inference Engine's use of efficient procedures and rules is essential for deducing a correct, flawless solution.

In a knowledge-based ES, the Inference Engine acquires and manipulates knowledge from the knowledge base to arrive at a particular solution.

In a rule-based ES, it:

- Applies rules repeatedly to the facts, which are obtained from earlier rule applications.
- Adds new knowledge into the knowledge base if required.
- Resolves rule conflicts when multiple rules are applicable to a particular case.

To recommend a solution, the Inference Engine uses the following strategies:

- Forward Chaining
- Backward Chaining

Forward Chaining

It is a strategy of an expert system to answer the question, “What can happen next?”

Here, the Inference Engine follows the chain of conditions and derivations and finally deduces the outcome. It considers all the facts and rules and sorts them before arriving at a solution.
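A minimal forward-chaining sketch in plain Python (the rule base and facts below are hypothetical examples, not taken from any particular ES):

```python
# Hypothetical rule base: each rule is (set of premises, conclusion).
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are satisfied,
    adding its conclusion as a new fact, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fur", "gives_milk", "eats_meat", "has_stripes"}, rules)
print("tiger" in derived)   # True
```

Starting from the observed facts, the engine chains forward through "mammal" and "carnivore" to conclude "tiger", answering "what can happen next?".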

Backward Chaining

With this strategy, an expert system finds the answer to the question, "Why did this happen?"

On the basis of what has already happened, the Inference Engine tries to find out which conditions could have held in the past for this result. This strategy is used for finding a cause or reason, for example, diagnosing blood cancer in humans.
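Backward chaining can be sketched as a recursive proof search over the same kind of toy rule base (again a hypothetical example):

```python
# Hypothetical rule base: each rule is (set of premises, conclusion).
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
]

def backward_chain(goal, facts, rules):
    """Try to prove `goal`: either it is a known fact, or some rule
    concludes it and all of that rule's premises can be proven."""
    if goal in facts:
        return True
    return any(conclusion == goal and
               all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules)

print(backward_chain("carnivore", {"has_fur", "gives_milk", "eats_meat"}, rules))  # True
print(backward_chain("carnivore", {"has_fur"}, rules))                             # False
```

Where forward chaining works from evidence toward conclusions, this search starts from the hypothesis and works back to the conditions that could explain it.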

C- User Interface

The user interface provides interaction between the user of the ES and the ES itself. It generally uses Natural Language Processing, so that it can be used by someone who is well-versed in the task domain but not necessarily an expert in Artificial Intelligence.

It explains how the ES has arrived at a particular recommendation. The explanation may appear in the following forms:

- Natural language displayed on screen.
- Verbal narration in natural language.
- A listing of rule numbers displayed on the screen.

The user interface makes it easy to trace the credibility of the deductions.

Applications of Expert System

The following list shows where ES can be applied.

- Design Domain: camera lens design, automobile design.
- Medical Domain: diagnosis systems to deduce the cause of disease from observed data; conducting medical operations on humans.
- Monitoring Systems: continuously comparing data with an observed system or prescribed behavior, such as leakage monitoring in a long petroleum pipeline.
- Process Control Systems: controlling a physical process based on monitoring.
- Knowledge Domain: finding faults in vehicles or computers.
- Finance/Commerce: detection of possible fraud and suspicious transactions, stock market trading, airline scheduling, cargo scheduling.

- Expert System Development Environment: the ES development environment includes hardware and tools, such as:
  - Workstations, minicomputers, mainframes.
  - High-level symbolic programming languages such as LISt Processing (LISP) and PROgrammation en LOGique (PROLOG).
  - Large databases.
- Tools: they reduce the effort and cost involved in developing an expert system to a large extent.
  - Powerful editors and debugging tools with multi-window support.
  - They provide rapid prototyping.
  - They have inbuilt definitions of model, knowledge representation, and inference design.
- Shells: a shell is an expert system without a knowledge base. A shell provides the developers with knowledge acquisition, an inference engine, a user interface, and an explanation facility. A few example shells:
  - Java Expert System Shell (JESS), which provides a fully developed Java API for creating an expert system.
  - Vidwan, a shell developed at the National Centre for Software Technology, Mumbai, in 1993. It enables knowledge encoding in the form of IF-THEN rules.
Benefits of Expert Systems
- Availability: they are easily available due to mass production of software.
- Less Production Cost: production cost is reasonable, which makes them affordable.
- Speed: they offer great speed and reduce the amount of work an individual puts in.
- Less Error Rate: the error rate is low compared to human errors.
- Reducing Risk: they can work in environments dangerous to humans.
- Steady Response: they work steadily without getting emotional, tense, or fatigued.

4- Machine Learning
Machine learning is a subfield of AI. It focuses on creating algorithms that can learn from data and make decisions based on patterns observed in that data. These systems still require human intervention when a decision is incorrect or undesirable.

Machine learning evolved through the following stages:

- Initially, researchers started out with Supervised Learning, e.g. the classic housing price prediction problem.
- This was followed by Unsupervised Learning, where the machine is made to learn on its own without any supervision.
- Scientists then discovered that it may be a good idea to reward the machine when it does the job the expected way, which led to Reinforcement Learning.
- Eventually, the available data became so humongous that the conventional techniques developed so far failed to analyze it and provide predictions.
- Thus came Deep Learning, where the human brain is simulated in Artificial Neural Networks (ANN) created in our binary computers.
Supervised Learning
Supervised learning is analogous to training a child to walk. You hold the child's hand, show them how to take a step forward, walk yourself for a demonstration, and so on, until the child learns to walk on their own.

There are several algorithms available for supervised learning. Some of the widely used ones are:

- k-Nearest Neighbours
- Decision Trees
- Naive Bayes
- Logistic Regression
- Support Vector Machines

Classification
You may also use machine learning techniques for classification problems, in which you group objects of a similar nature into a single class. For example, in a set of 100 students, you may want to group them into three classes based on their heights: short, medium, and tall. Measuring the height of each student, you place each one in the proper group.
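A minimal k-Nearest Neighbours classifier for this height example (the heights and labels below are made up for illustration):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k closest training points."""
    neighbours = sorted(train, key=lambda pair: abs(pair[0] - query))[:k]
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

# (height in cm, class) training pairs.
train = [(150, "short"), (155, "short"), (160, "medium"),
         (170, "medium"), (180, "tall"), (185, "tall")]

print(knn_predict(train, 158))   # 'short'
print(knn_predict(train, 182))   # 'tall'
```

This is supervised learning in miniature: the labelled examples play the role of the teacher holding the child's hand.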
Unsupervised Learning

In unsupervised learning, we do not specify a target variable; instead we ask the machine, "What can you tell me about X?". More specifically, given a huge data set X, we may ask questions such as "What are the five best groups we can make out of X?" or "What features occur together most frequently in X?".

To answer such questions, the number of data points the machine needs in order to deduce a strategy is very large. In supervised learning, a machine can be trained with even a few thousand data points; in unsupervised learning, the number of data points reasonably required for learning starts at a few million. These days data is abundantly available, and it ideally requires curating; however, for the data continuously flowing in a social network, curation is in most cases an impossible task.
Algorithms for Unsupervised Learning
Let us now discuss one of the widely used clustering algorithms in unsupervised machine learning:

- k-means clustering

Clustering is a type of unsupervised learning that automatically forms clusters of similar things; it is like automatic classification. You can cluster almost anything, and the more similar the items within a cluster are, the better the clusters are. Here we study one clustering algorithm, k-means. It is called k-means because it finds 'k' unique clusters, and the center of each cluster is the mean of the values in that cluster.

Clustering is sometimes called unsupervised classification because it produces the same kind of result as classification does, but without predefined classes.
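A minimal one-dimensional k-means sketch (the data and the simple deterministic initialization are illustrative assumptions, not the standard random initialization):

```python
def kmeans_1d(points, k, iters=20):
    """Lloyd's algorithm in one dimension: assign each point to its
    nearest center, then move each center to the mean of its cluster."""
    pts = sorted(points)
    centers = pts[::max(1, len(pts) // k)][:k]   # spread out initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

heights = [150, 152, 155, 168, 170, 172, 185, 188, 190]
print([round(c, 1) for c in kmeans_1d(heights, 3)])   # [152.3, 170.0, 187.7]
```

No labels are supplied; the three groups of heights emerge purely from the structure of the data, which is exactly the "unsupervised classification" described above.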
Reinforcement Learning

Consider training a pet dog to bring a ball to us. We throw the ball a certain distance and ask the dog to fetch it back. Every time the dog does this right, we reward it. Slowly, the dog learns that doing the job right earns a reward, and it then starts doing the job right every time. Exactly this concept is applied in reinforcement learning.

The technique was initially developed for machines to play games. The machine is given an algorithm to analyze all possible moves at each stage of the game. It may select one of the moves at random. If the move is right, the machine is rewarded; otherwise it may be penalized. Slowly, the machine starts differentiating between right and wrong moves, and after several iterations it learns to solve the game puzzle with better accuracy. The accuracy of winning improves as the machine plays more and more games.

This technique differs from supervised learning in that you need not supply labelled input/output pairs. The focus is on finding a balance between exploring new solutions and exploiting the learned solutions.
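A tiny Q-learning sketch on a made-up corridor environment (the states, reward, and hyperparameters alpha, gamma, and epsilon are all illustrative assumptions):

```python
import random

n_states, actions = 5, [0, 1]              # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action] value table
alpha, gamma, epsilon = 0.5, 0.9, 0.2

def step(s, a):
    """Deterministic corridor: reward 1.0 only on reaching the last state."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

random.seed(0)
for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[s][a])
        s2, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right in every state.
print(all(Q[s][1] > Q[s][0] for s in range(n_states - 1)))   # True
```

The epsilon parameter is precisely the exploration/exploitation balance mentioned above: with probability epsilon the agent explores a random move, otherwise it exploits what it has learned.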

Deep Learning
- Deep learning is a further subset of machine learning: essentially a neural network with three or more layers that processes data through successive layers and reaches an accurate decision without human intervention. These neural networks attempt to simulate the behavior of the human brain, albeit far from matching its ability, allowing them to "learn" from large amounts of data.
- Deep learning models are based on Artificial Neural Networks (ANNs). Several architectures are used, such as deep neural networks, deep belief networks, recurrent neural networks, and convolutional neural networks.
- These networks have been successfully applied to problems in computer vision, speech recognition, natural language processing, bioinformatics, drug design, medical image analysis, and games, among several other fields. Deep learning requires huge processing power and humongous amounts of data, both of which are generally easily available these days.

Deep Reinforcement Learning

- Deep Reinforcement Learning (DRL) combines the techniques of both deep learning and reinforcement learning.
- Reinforcement learning algorithms like Q-learning are combined with deep learning to create a powerful DRL model.
- The technique has seen great success in the fields of robotics, video games, finance, and healthcare.

The idea of artificial neural networks was derived from the neural networks in the human brain. The human brain is really complex. By carefully studying the brain, scientists and engineers came up with an architecture that could fit our digital world of binary computers.

A typical network has an input layer with many sensors to collect data from the outside world, and an output layer that gives us the result predicted by the network. In between these two, several hidden layers are stacked. Each additional layer adds complexity to training the network, but provides better results in most situations. There are several types of architectures in use.
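A tiny forward pass through such a layered network can be written in a few lines (the weights, biases, and layer sizes are arbitrary illustrative values):

```python
import math

def relu(z):
    return max(0.0, z)

def dense(inputs, weights, biases, activation):
    """One fully connected layer: weighted sum plus bias, then activation."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# 2 inputs -> 3 hidden units (ReLU) -> 1 output (sigmoid).
hidden = dense([0.5, -1.2],
               [[0.1, 0.4], [-0.3, 0.8], [0.5, -0.2]],
               [0.0, 0.1, -0.1], relu)
output = dense(hidden, [[0.7, -0.5, 0.2]], [0.05],
               lambda z: 1 / (1 + math.exp(-z)))
print(len(hidden), len(output))   # 3 1
```

Data flows from the input layer through the hidden layer to a single sigmoid output between 0 and 1; training would consist of adjusting the weights and biases, which is omitted here.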

ANN Architectures
Several ANN architectures have been developed over time and are in practice today.

Deep Learning has shown a lot of success in several areas of machine learning applications.

Self-driving Cars: autonomous self-driving cars use deep learning techniques. They adapt to ever-changing traffic situations and get better at driving over time.

Speech Recognition: another interesting application of deep learning is speech recognition. We all use mobile apps capable of recognizing our speech; Apple's Siri, Amazon's Alexa, Microsoft's Cortana, and Google's Assistant all use deep learning techniques.

Mobile Apps: we use several web-based and mobile apps for organizing our photos. Face detection, Face ID, face tagging, and identifying objects in an image all use deep learning.
Speech and Voice Recognition
Both terms are common in robotics, expert systems, and natural language processing. Though they are often used interchangeably, their objectives are different.

Speech Recognition:
- Aims at understanding and comprehending WHAT was spoken.
- Used in hands-free computing and map or menu navigation.
- The machine does not need per-speaker training, as the task is not speaker dependent.
- Speaker-independent speech recognition systems are difficult to develop.

Voice Recognition:
- Aims to recognize WHO is speaking.
- Used to identify a person by analysing their tone, voice pitch, accent, etc.
- The recognition system needs training, as it is person oriented.
- Speaker-dependent recognition systems are comparatively easy to develop.

5- Natural Language Processing
What is NLP?
- NLP stands for Natural Language Processing. It is the branch of Artificial Intelligence that gives machines the ability to understand and process human languages. Human language can be in text or audio format.
- Natural Language Processing refers to AI methods of communicating with intelligent systems using a natural language such as English.
- Processing of natural language is required when you want an intelligent system, like a robot, to perform as per your instructions, or when you want to hear a decision from a dialogue-based clinical expert system, etc.

The field of NLP involves making computers perform useful tasks with the natural languages humans use. The input and output of an NLP system can be speech or written text.

History of NLP
Natural Language Processing started in 1950, when Alan Turing published the article "Computing Machinery and Intelligence", which touches on the automatic interpretation and generation of natural language. As technology evolved, different approaches emerged to deal with NLP tasks.

Types and Approaches of NLP

- Heuristics-Based NLP: the initial approach to NLP, based on manually defined rules that come from domain knowledge and expertise. Example: regular expressions (regex).
- Statistical Machine-Learning-Based NLP: based on statistical rules and machine learning algorithms. In this approach, algorithms learn from data and are then applied to various tasks. Examples: Naive Bayes, support vector machines (SVM), hidden Markov models (HMM), etc.
- Neural-Network-Based NLP: the latest approach, which came with the evolution of neural-network-based learning, known as deep learning. It provides good accuracy, but it is a very data-hungry and time-consuming approach that requires high computational power to train the models. Examples: recurrent neural networks (RNNs), long short-term memory networks (LSTMs), convolutional neural networks (CNNs), Transformers, etc.
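A tiny heuristics-based example using regex (the pattern and the sentence are made-up illustrations of a hand-written rule):

```python
import re

# Heuristic rule: a date looks like dd/mm/yyyy.
text = "The meeting moved from 12/05/2023 to 01/06/2023."
dates = re.findall(r"\b\d{2}/\d{2}/\d{4}\b", text)
print(dates)   # ['12/05/2023', '01/06/2023']
```

No data or training is involved; the "knowledge" lives entirely in the hand-crafted pattern, which is both the strength and the limitation of the heuristic approach.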

Advantages of NLP
- NLP helps us analyse data from both structured and unstructured sources.
- NLP is very fast and time efficient.
- NLP offers exact, end-to-end answers to a question, saving the time otherwise spent sifting through unnecessary and unwanted information.
- NLP lets users ask questions about any subject and get a direct response within milliseconds.

Disadvantages of NLP
- Training an NLP model requires a lot of data and computation.
- Many issues arise when dealing with informal expressions, idioms, and cultural jargon.
- NLP results are sometimes inaccurate; accuracy is directly proportional to the quality of the data.
- Most NLP systems are designed for a single, narrow job; they cannot easily adapt to new domains and have limited function.
Components of NLP
There are two components of Natural Language Processing:

- Natural Language Understanding
- Natural Language Generation

Natural Language Understanding (NLU)
Understanding involves the following tasks:
- Mapping the given natural-language input into useful representations.
- Analyzing different aspects of the language.

Natural Language Generation (NLG)
The process of producing meaningful phrases and sentences in natural language from some internal representation. It involves:
- Text planning: retrieving the relevant content from the knowledge base.
- Sentence planning: choosing the required words, forming meaningful phrases, and setting the tone of the sentence.
- Text realization: mapping the sentence plan into sentence structure.
Applications of NLP
The applications of Natural Language Processing include:

- Text and speech processing, e.g. voice assistants such as Alexa and Siri
- Text classification, e.g. Grammarly, Microsoft Word, and Google Docs
- Information extraction, e.g. search engines like DuckDuckGo and Google
- Chatbots and question answering, e.g. website bots
- Language translation, e.g. Google Translate
- Text summarization

NLP Python Libraries
- NLTK
- spaCy
- Gensim
- fastText
- Stanford NLP toolkit (GloVe)
- Apache OpenNLP

There are five execution steps when building a Natural Language Processor:

1. Lexical Analysis: processing starts with identification and analysis of the structure of the input words. A lexicon is an anthology of the various words and phrases used in a language; lexical analysis divides a large chunk of text into structural paragraphs, sentences, and words.
2. Syntactic Analysis / Parsing: once the sentence structure is formed, syntactic analysis checks the grammar of the formed sentences and phrases. It also forms relationships among words and eliminates logically incorrect sentences. For instance, an English-language analyzer rejects the sentence 'An umbrella opens a man'.
3. Semantic Analysis: the input text is now checked for meaning; the analyzer draws on the dictionary meanings of all the words present in the sentence and checks every word and phrase for meaningfulness. For example, a phrase like 'hot ice' is rejected.
4. Discourse Integration: this step forms the story of the text. Every sentence should have a relationship with its preceding and succeeding sentences; these relationships are checked by discourse integration.
5. Pragmatic Analysis: once all grammatical and syntactic checks are complete, the sentences are checked for their relevance in the real world. During pragmatic analysis, every sentence is revisited and evaluated once again, this time checking its applicability in the real world using general knowledge.
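The first two steps above can be sketched in plain Python (the tokenizer and the toy grammar check are drastically simplified illustrations, not a real parser):

```python
import re

text = "The quick brown fox jumps over the lazy dog. Jumps fox the."

# 1. Lexical analysis: split the text into sentences, then into word tokens.
sentences = [s for s in re.split(r"[.!?]\s*", text) if s]
tokens = [re.findall(r"[A-Za-z]+", s.lower()) for s in sentences]
print(tokens[0][:4])   # ['the', 'quick', 'brown', 'fox']

# 2. A toy "syntactic" check: require a known verb somewhere after the first word.
VERBS = {"jumps", "runs", "opens"}
def looks_grammatical(words):
    return any(w in VERBS for w in words[1:])

print([looks_grammatical(w) for w in tokens])   # [True, False]
```

Real NLP toolkits such as NLTK or spaCy implement these stages with trained models rather than a hard-coded verb list, but the pipeline shape is the same.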
