UNIT4


Evolutionary Intelligence: Evolutionary Intelligence refers to self-learning systems that are
biologically inspired. It can be used to solve complex real-world problems that are either
difficult or impossible to solve optimally with exact algorithms. The algorithms and models that
we examine are based on natural and biological systems, which are known to achieve
extraordinary feats. Key examples are natural evolution and biological brains, whose artificial
equivalents are Evolutionary Algorithms and Neural Networks.

Neural Networks: A neural network is a method in artificial intelligence that teaches
computers to process data in a way inspired by the human brain. It is a type of machine
learning process, called deep learning, that uses interconnected nodes, or neurons, in a layered
structure resembling the human brain. This creates an adaptive system that lets computers
learn from their mistakes and improve continuously. Artificial neural networks can thus tackle
complicated problems, such as summarizing documents or recognizing faces, with greater
accuracy.

Natural Language Understanding: Natural language understanding is a branch of artificial
intelligence that uses computer software to understand input in the form of sentences, whether
text or speech.

Natural language understanding enables human-computer interaction: the comprehension of
human languages such as English, Spanish, and French allows computers to understand
commands without the formalized syntax of programming languages. It also enables computers
to communicate back to humans in their own languages.

Artificial Neural Network:


The term "Artificial Neural Network" is derived from the biological neural networks that form
the structure of the human brain. Just as the human brain has neurons interconnected with one
another, artificial neural networks have neurons interconnected with one another in the various
layers of the network. These neurons are known as nodes.

[Figures omitted: a typical biological neural network and a typical artificial neural network.]

Dendrites in a biological neural network correspond to inputs in an artificial neural network,
the cell nucleus to nodes, synapses to weights, and the axon to output.

Relationship between Biological neural network and artificial neural network:

Biological Neural Network    Artificial Neural Network
Dendrites                    Inputs
Cell nucleus                 Nodes
Synapse                      Weights
Axon                         Output

An Artificial Neural Network is a system in the field of Artificial Intelligence that attempts to
mimic the network of neurons making up a human brain, so that computers can understand
things and make decisions in a human-like manner. The artificial neural network is built by
programming computers to behave simply like interconnected brain cells.

There are around 100 billion neurons in the human brain. Each neuron is connected to
somewhere in the range of 1,000 to 100,000 others. In the human brain, data is stored in a
distributed manner, and we can retrieve more than one piece of this data from memory in
parallel when necessary. We can say that the human brain is an incredibly powerful parallel
processor.

We can understand the artificial neural network with an example. Consider a digital logic gate
that takes inputs and gives an output, such as an "OR" gate with two inputs: if one or both
inputs are "On", the output is "On"; if both inputs are "Off", the output is "Off". Here the output
is a fixed function of the input. Our brain does not work this way: the relationship between
outputs and inputs keeps changing, because the neurons in our brain are "learning".
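The OR-gate behaviour above can be reproduced by a single artificial neuron. The following is a minimal sketch in pure Python; the weights and threshold are picked by hand for illustration, not learned:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def or_gate(a, b):
    # Weight of 1 per input and a threshold of 1: fires if at least one input is on.
    return neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", or_gate(a, b))
```

Unlike this fixed gate, a learning neuron would adjust its weights over time, which is exactly what the training procedures described later do.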

The architecture of an artificial neural network:

To understand the concept of the architecture of an artificial neural network, we have to


understand what a neural network consists of.

Artificial Neural Network primarily consists of three layers:

Input Layer: As the name suggests, it accepts inputs in several different formats provided by
the programmer.

Hidden Layer: The hidden layer sits between the input and output layers. It performs all the
calculations needed to find hidden features and patterns.

Output Layer: The input goes through a series of transformations using the hidden layer, which
finally results in output that is conveyed using this layer.
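The flow through these three layers can be sketched as a forward pass. Below is a toy example in pure Python; the layer sizes and weights are made up, and the sigmoid activation is one common choice rather than anything mandated by the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One layer: each row of weights produces one output neuron."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row))) for row in weights]

# Hypothetical 2-input, 2-hidden, 1-output network with arbitrary weights.
hidden_w = [[0.5, -0.6], [0.1, 0.8]]
output_w = [[1.2, -0.4]]

x = [1.0, 0.0]            # input layer: raw features
h = layer(x, hidden_w)    # hidden layer: transforms the input
y = layer(h, output_w)    # output layer: conveys the final result
print(y)
```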

Appropriate Problems for Neural Network Learning:

 Instances are represented by many attribute-value pairs.
 The target function may have discrete values, continuous values, or a combination of both.
 The training examples may contain errors or missing values.
 Fast evaluation of the learned target function may be required.
 Long training times are acceptable. Neural networks generally take longer to train than,
for instance, decision trees. Various factors, including the number of training examples, the
value selected for the learning rate, and the architecture of the network, affect the time
required to train a network. Training times may vary from a few minutes to many hours.
 It is not essential that humans be able to understand exactly how the learned network
carries out its categorizations. As noted above, ANNs are black boxes, and it is hard for us to
get a handle on what their calculations are doing.
 When the network is used for the purpose it was trained for, evaluation of the target
function needs to be fast. While it can take a long time to train a network to, for example,
decide whether a vehicle is a bus, car, or tank, once the ANN has been trained, using it for the
categorization task is usually very fast. This can be very important: if the network were used
in a combat situation, a quick decision about whether the object moving rapidly towards it is a
bus, tank, car, or old lady could be vital. In addition, neural network learning is quite robust to
errors in the training data, because it is not trying to learn exact rules for the job, but rather
to minimize an error function.

Problem Characteristics:
1. Is the problem decomposable into smaller sub-problems that are easy to solve?
2. Can solution steps be ignored or undone? (e.g., chess)
3. Is the universe of the problem predictable?
4. Is a good solution to the problem absolute or relative? (e.g., the Travelling Salesman
Problem)
5. Is the solution to the problem a state or a path? (e.g., in the Water Jug Problem, the path
that leads to the goal must be reported.)
6. Does the task of solving the problem require human interaction?
7. Scalability: scaling ANNs to handle large datasets or distributed computing
environments can be complex.
8. Data availability and quality: ANNs require large amounts of high-quality labeled data
for effective training. Obtaining such data can be challenging, particularly in domains
where data collection is expensive or time-consuming.
9. Generalization: while ANNs can perform well on training data, they may struggle to
generalize to unseen data or perform well in different contexts.

Types of Feed Forward Network:


Linear neural network: The simplest kind of feedforward neural network is a linear
network, which consists of a single layer of output nodes; the inputs are fed directly to the
outputs via a series of weights. The sum of the products of the weights and the inputs is
calculated in each node.
Single-layer perceptron: The single-layer perceptron combines a linear neural network with a
threshold function. If the output value is above some threshold (typically 0) the neuron fires and takes
the activated value (typically 1); otherwise it takes the deactivated value (typically −1).
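The single-layer perceptron above can be trained with the classic perceptron learning rule, which nudges the weights whenever a prediction is wrong. A minimal sketch in pure Python follows; the AND task, learning rate, and epoch count are illustrative choices, not from the text:

```python
def predict(x, w, b):
    # Threshold at 0: fire (+1) if above, otherwise the deactivated value (-1).
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else -1

def train(data, epochs=20, lr=0.1):
    """Perceptron learning rule: move weights toward misclassified targets."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(x, w, b)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Logical AND with +1/-1 labels (linearly separable, so the rule converges).
and_data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train(and_data)
print([predict(x, w, b) for x, _ in and_data])
```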

Multi-layer perceptron: This class of networks consists of multiple layers of computational


units, usually interconnected in a feed-forward way. Each neuron in one layer has directed
connections to the neurons of the subsequent layer.

Multilayer Perceptron:
A multi-layered perceptron (MLP) is one of the most common neural network models used in
the field of deep learning. Often referred to as a “vanilla” neural network, an MLP is simpler
than the complex models of today’s era. However, the techniques it introduced have paved the
way for further advanced neural networks.

The multilayer perceptron (MLP) is used for a variety of tasks, such as stock analysis, image
identification, spam detection, and election voting predictions.

The Basic Structure


A multi-layered perceptron consists of interconnected neurons transferring information to each other, much
like the human brain. Each neuron is assigned a value. The network can be divided into three main layers.

Input Layer

This is the initial layer of the network which takes in an input which will be used to produce an
output.
Hidden Layer(s)

The network needs to have at least one hidden layer. The hidden layer(s) perform
computations and operations on the input data to produce something meaningful.

Output Layer

The neurons in this layer display a meaningful output.

Connections
The MLP is a feedforward neural network, which means that the data is transmitted from the input layer to
the output layer in the forward direction.

The connections between the layers are assigned weights. The weight of a connection specifies its
importance. This concept is the backbone of an MLP’s learning process.

Natural Language Processing:


Natural language processing (NLP) is a subfield of Artificial Intelligence (AI). This is a widely used
technology for personal assistants that are used in various business fields/areas.

This technology takes the speech or text provided by the user, breaks it down for proper
understanding, and processes it accordingly. It is a very effective recent approach, which is
why it is in high demand in today's market.

Natural Language Processing (NLP) is a field that combines computer science, linguistics, and
machine learning to study how computers and humans communicate in natural language. The
goal of NLP is for computers to be able to interpret and generate human language. This not only
improves the efficiency of work done by humans but also helps in interacting with the machine.
NLP bridges the gap of interaction between humans and electronic devices.

Common Natural Language Processing (NLP) Tasks:

 Text and speech processing: this includes speech recognition, text-to-speech processing,
and encoding (i.e., converting speech or text into a machine-readable form).
 Text classification: This includes Sentiment Analysis in which the machine can analyze
the qualities, emotions, and sarcasm from text and also classify it accordingly.
 Language generation: This includes tasks such as machine translation, summary writing,
essay writing, etc. which aim to produce coherent and fluent text.
 Language interaction: This includes tasks such as dialogue systems, voice assistants, and
chatbots, which aim to enable natural communication between humans and computers.

Working of Natural Language Processing (NLP): Working in natural language processing (NLP)
typically involves using computational techniques to analyze and understand human language.
This can include tasks such as language understanding, language generation, and language
interaction.

The field is divided into three different parts:

Speech Recognition — The translation of spoken language into text.

Natural Language Understanding (NLU) — The computer’s ability to understand what we say.

Natural Language Generation (NLG) — The generation of natural language by a computer.

NLU and NLG are the key aspects depicting the working of NLP devices. These 2 aspects are very
different from each other and are achieved using different methods.
Features for NLP problems:

1. Sentiment Analysis: Sentiment analysis is the dissection of data (text, voice, etc.) in
order to determine whether it is positive, neutral, or negative.

2. Named Entity Recognition: Named Entity Recognition (NER) is a Natural Language
Processing technique that tags 'named entities' within text and extracts them for further
analysis. NER is similar to sentiment analysis, but it simply tags the entities, whether they
are organization names, people, proper nouns, locations, etc., and keeps a running tally of
how many times they occur within a dataset.

3. Text Summarization: Text summarization is the breakdown of jargon, whether scientific,
medical, technical, or other, into its most basic terms using natural language processing in
order to make it more understandable.
4. Topic Modeling: Topic modeling is an unsupervised Natural Language Processing
technique that uses artificial intelligence programs to tag and group text clusters that
share common topics. You can think of it as similar to keyword tagging (the extraction
and tabulation of important words from text), except applied to topic keywords and the
clusters of information associated with them.
5. Text Classification: Text classification is the organizing of large amounts of unstructured
text (the raw text data you receive from customers, for example). Topic modeling,
sentiment analysis, and keyword extraction are all subsets of text classification. Text
classification takes a text dataset and structures it for further analysis. It is often used to
mine helpful data from customer reviews as well as customer service logs.
6. Lemmatization and Stemming: More technical than the other topics, lemmatization and
stemming refer to the breakdown, tagging, and restructuring of text data based on either
the root stem or the dictionary definition. That might seem like saying the same thing
twice, but the two sorting processes can yield different, equally valuable data.
7. Takeaways: Natural language processing bridges a crucial gap between software and
humans for all businesses. Ensuring and investing in a sound NLP approach is a constant
process, but the results will show across all of your teams, and in your bottom line.
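The difference between stemming and lemmatization can be shown with a toy sketch. The suffix list and lemma dictionary below are invented for illustration; real systems, such as the Porter stemmer or WordNet-based lemmatizers, are far more sophisticated:

```python
def stem(word):
    """Naive stemmer: strip common suffixes without consulting a dictionary."""
    for suffix in ("ies", "ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# Hypothetical lemma dictionary: maps inflected forms to their dictionary form.
LEMMAS = {"ran": "run", "better": "good", "studies": "study"}

def lemmatize(word):
    return LEMMAS.get(word, word)

print(stem("studies"), lemmatize("studies"))   # stemming gives "stud", lemmatization "study"
print(stem("ran"), lemmatize("ran"))           # stemming leaves "ran", lemmatization gives "run"
```

The stemmer is fast but can produce non-words, while the lemmatizer returns real dictionary forms but only for words it knows: this is why the two processes "lend different valuable data".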

Classical Natural Language Processing:


Classical NLP refers to the traditional approach to NLP that was prevalent before the
emergence of deep learning and neural network-based models. It typically relied on rule-based
systems and statistical methods to analyze and process text data. Classical NLP algorithms often
involved tasks such as part-of-speech tagging, named entity recognition, syntactic parsing,
information retrieval, and machine translation.

The key characteristics of classical NLP include:


Rule-based Systems: Classical NLP heavily relied on manually crafted rules and patterns to
perform various language processing tasks. These rules were often designed by linguists or NLP
experts and required extensive domain knowledge.

Hand-engineered Features: Classical NLP algorithms often required the extraction of specific
linguistic features from the text data. These features were carefully designed to capture
relevant information for the given task.

Statistical Models: Classical NLP made use of statistical models to analyze and process language
data. These models typically involved techniques like probabilistic modeling, hidden Markov
models, and n-gram language models.
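The n-gram language models mentioned above can be illustrated in miniature: count how often each word follows another in a corpus, then estimate probabilities by relative frequency. The corpus below is a made-up toy example:

```python
from collections import defaultdict

# Build bigram counts from a tiny corpus.
corpus = "the cat sat on the mat the cat ran".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def prob(prev, nxt):
    """Estimate P(nxt | prev) by relative frequency of the bigram."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

print(prob("the", "cat"))  # "the" is followed by "cat" in 2 of its 3 occurrences
```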

Feed-Forward Network:
A feedforward neural network is a key component of this technology, since it aids software
developers with pattern recognition and classification, non-linear regression, and function
approximation.

A feedforward neural network is a type of artificial neural network in which the connections
between nodes do not form a loop. Often referred to as a multi-layered network of neurons,
feedforward neural networks are so named because all information flows in a forward direction
only.

The data enters the input nodes, travels through the hidden layers, and eventually exits the
output nodes. The network is devoid of links that would allow the information exiting the
output nodes to be sent back into the network.

The purpose of feedforward neural networks is to approximate functions.

A Feedforward Neural Network’s Layers

The following are the components of a feedforward neural network:

Input layer
It contains the neurons that receive the input. The data is subsequently passed on to the next
layer. The input layer's total number of neurons is equal to the number of variables in the
dataset.

Hidden layer

This is the intermediate layer, which is concealed between the input and output layers. This
layer has a large number of neurons that perform transformations on the inputs. They then
communicate with the output layer.

Output layer

It is the last layer, and its size depends on the model's construction. The output layer yields
the predicted feature, since you are aware of the desired outcome.

Recurrent Neural Network:


 Recurrent Neural Network (RNN) is a type of Neural Network where the output from the
previous step is fed as input to the current step. In traditional neural networks, all the
inputs and outputs are independent of each other; but in cases where it is required to
predict the next word of a sentence, the previous words are required, and hence there is
a need to remember them. Thus the RNN came into existence, which solved this issue
with the help of a Hidden Layer.
 The main and most important feature of RNN is its Hidden state, which remembers
some information about a sequence. The state is also referred to as Memory State since
it remembers the previous input to the network. It uses the same parameters for each
input as it performs the same task on all the inputs or hidden layers to produce the
output. This reduces the complexity of parameters, unlike other neural networks.
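The two ideas above, a hidden state that carries memory and the same parameters reused at every step, can be shown in a single recurrent update. The scalar weights below are hypothetical; a real RNN uses weight matrices and vectors:

```python
import math

def rnn_step(x, h_prev, w_x, w_h, b):
    """One recurrent step: the new hidden state mixes the current input
    with the previous hidden state, using the SAME weights every time."""
    return math.tanh(w_x * x + w_h * h_prev + b)

# Hypothetical shared parameters, reused at every time step.
w_x, w_h, b = 0.5, 0.8, 0.0

h = 0.0  # initial memory state
for x in [1.0, 0.0, 1.0]:  # a toy input sequence
    h = rnn_step(x, h, w_x, w_h, b)
    print(h)
```

Note that even when the input is 0, the hidden state stays non-zero because it remembers the earlier input, which is the "Memory State" behaviour described above.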

How recurrent neural networks learn

Artificial neural networks are created with interconnected data processing components that are
loosely designed to function like the human brain. They are composed of layers of artificial
neurons -- network nodes -- that have the ability to process input and forward output to other
nodes in the network. The nodes are connected by edges or weights that influence a signal's
strength and the network's ultimate output.
In some cases, artificial neural networks process information in a single direction from input to
output. These "feed-forward" neural networks include convolutional neural networks that
underpin image recognition systems. RNNs, on the other hand, can be layered to process
information in two directions.

What Is a Recursive Neural Network?

Deep Learning is a subfield of machine learning and artificial intelligence (AI) that attempts to
imitate how the human brain processes data and gains certain knowledge. Neural Networks
form the backbone of Deep Learning. These are loosely modeled after the human brain and
designed to accurately recognize underlying patterns in a data set. If you want to predict the
unpredictable, Deep Learning is the solution.

Recursive Neural Networks (RvNNs) are a class of deep neural networks that can learn detailed
and structured information. With an RvNN, you can get a structured prediction by recursively
applying the same set of weights to structured inputs. The word recursive indicates that the
neural network is applied to its own output.

Due to their deep tree-like structure, Recursive Neural Networks can handle hierarchical data.
The tree structure means combining child nodes and producing parent nodes. Each child-parent
bond has a weight matrix, and similar children have the same weights. The number of children
for every node in the tree is fixed to enable it to perform recursive operations and use the same
weights. RvNNs are used when there's a need to parse an entire sentence.
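The tree traversal described above can be sketched in a few lines: one shared weight matrix combines two child vectors into a parent vector, applied recursively up the tree. The dimensions, weights, and tree encoding (nested tuples of leaf vectors) are all invented for illustration:

```python
import math

# One shared weight matrix maps two concatenated 2-d children to a 2-d parent.
W = [[0.5, -0.3, 0.2, 0.1],
     [0.1, 0.4, -0.2, 0.3]]

def combine(left, right):
    """Produce a parent vector from two child vectors with the shared weights."""
    concat = left + right  # concatenate the two child vectors
    return [math.tanh(sum(w * v for w, v in zip(row, concat))) for row in W]

def encode(tree):
    if isinstance(tree, tuple):                      # internal node: two children
        return combine(encode(tree[0]), encode(tree[1]))
    return tree                                      # leaf: already a vector

# Parse-tree-like structure over three toy word vectors: ((w1, w2), w3)
sentence = (([1.0, 0.0], [0.0, 1.0]), [0.5, 0.5])
print(encode(sentence))
```

Because every child-parent combination uses the same W, the number of children per node must be fixed, exactly as the text notes.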
