LEARNING

Learning is the process of getting better at something over time through experience. It is also a conceptual view of how one approaches a topic and acquires knowledge about it.
There are five different methods to learn.
Memorisation
Direct instruction
Analogy
Induction
Deduction
The part of a learning artificial intelligence system that determines how to alter the performance element and puts those changes into practice is called the learning element. Everyone learns new information in a different way, depending on the kind of information to be learned, how much of it we already know, and the setting in which it will be learned.
Memorization is the most basic method of learning. It is achieved by simply copying the knowledge into the knowledge base in the exact form in which it will be used, requiring the least degree of inference. Example: students memorize formulas and theorems for a mathematics exam.
Learning through direct instruction is a more complicated process. When a teacher gives us information directly in an orderly manner, the knowledge must first be converted into an operational form. This kind of learning requires more inference than rote learning. Example: one teacher teaching two subjects, giving different instructions in each.
The process of learning a new concept or solution by applying analogies to previously understood concepts or solutions is known as analogical learning. We employ this kind of learning when completing exam questions where previously taught examples are used as a reference. It requires more inference than the previous two methods, since difficult transformations must be made between the known and unknown situations. Example: when you learn statistics, part of it is applied in the R programming language.
Another method that humans use a great deal is inductive learning. Like analogical learning, it requires more inference than the first two approaches and is a powerful form of learning. It relies on inductive reasoning, a form of reasoning that is fallible but valuable, and proceeds from real-world examples of the idea being learned. Example: students first learn the alphabet, from which words, sentences and paragraphs are later formed.
Deductive learning is achieved by applying pre-existing knowledge through a series of deductive inference steps: new facts or relationships are logically deduced from the known facts. Deductive learning usually requires more inference than the other methods.
Learning types: A taxonomy or classification of learning types can be used as a reference for
examining or contrasting the variations between them. Learning taxonomies can be created
according to the knowledge representation type (predicate calculus, rules, frames), the
knowledge kind (concepts, problem solving, playing games), or the application field (medical
diagnosis, scheduling, prediction, and so on).
• ROTE LEARNING. Rote learning is the most fundamental learning activity. It is based on memorization through repetition: the information is simply copied into the knowledge base without being altered, which is why it is also known as memorization. The idea is that the more often one encounters a piece of information, the easier it becomes to recall it. This method can save a great deal of time because computed values are stored for reuse. Rote learning can also be used in complicated learning systems, provided that sophisticated techniques are used to retrieve the stored values quickly and that some generalization keeps the amount of stored information down to a tolerable level. A checkers-playing program, for example, uses this technique extensively to store the evaluations of board positions it computes in its look-ahead search. Associative learning, active learning, and meaningful learning are some alternatives to memorization.
• LEARNING BY TAKING ADVICE. This is a basic method of instruction. Assume that a programmer is the teacher and the computer is the student: the programmer writes a set of instructions to tell the computer what to do, and after being trained, or taught new information, the system is able to perform new tasks.
The advice may come from a variety of sources, including the internet and human experts. Compared to rote learning, this kind of learning needs greater inference: before being entered into the knowledge base, the information must be converted into an operational format. It is also important to take the credibility of the knowledge source into account, and the system needs to make sure that the new information does not contradict its existing knowledge. FOO (First Operational Operationaliser) is a learning system of this kind that learns the card game of Hearts. It transforms advice given in the form of problems, principles, and methods into executable LISP procedures (operational knowledge), which are then ready for use.
GENERAL LEARNING MODEL. As noted earlier, learning can be accomplished by a number of different methods, such as memorizing facts, being told, or studying examples such as problem solutions. Learning requires that new knowledge structures be created from some form of input stimulus. This new knowledge must then be assimilated into a knowledge base and tested in some way for its utility. Testing means that the knowledge should be used in the performance of some task from which meaningful feedback can be obtained, where the feedback provides some measure of the accuracy and usefulness of the newly acquired knowledge.
• LEARNING NEURAL NETWORK. Artificial neural networks, or simulated neural networks, are a subset of machine learning and are at the core of deep learning algorithms. The way their nodes communicate with one another is modelled after the way neurons in the human brain communicate.
An artificial neural network is composed of node layers: an input layer, one or more hidden layers, and an output layer. Each connection between nodes has an associated weight, and each node has a threshold. When a node's output exceeds the specified threshold value, the node is activated and transmits data to the next layer of the network. Otherwise, no information reaches the subsequent layer of the network.
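As a rough illustration of the node behaviour described above (weights, a threshold, and activation only when the threshold is exceeded), here is a minimal sketch in Python; the weights, bias, threshold, and inputs are made-up values, not taken from the text.

```python
# Minimal sketch of a single artificial neuron with a threshold.
# Weights, bias and threshold values are illustrative only.

def neuron_fires(inputs, weights, bias, threshold):
    """Return the node output and whether it passes data to the next layer."""
    combination = bias + sum(w * x for w, x in zip(weights, inputs))
    fires = combination > threshold          # node activates only above the threshold
    output = combination if fires else 0.0   # otherwise nothing reaches the next layer
    return output, fires

out, fired = neuron_fires(inputs=[0.5, 0.9], weights=[0.8, -0.2], bias=0.1, threshold=0.3)
print(out, fired)
```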
• GENETIC LEARNING
1. Supervised Learning. Supervised learning is the machine learning task of inferring a function from labelled training data. The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal).
2. Training set. A training set is a set of data used in various areas of information science to discover potentially predictive relationships. Training sets are used in artificial intelligence, machine learning, genetic programming, intelligent systems, and statistics. In all these fields, a training set has much the same role and is often used in conjunction with a test set.
Testing set. A test set is a set of data used in various areas of information science to assess the strength and utility of a predictive relationship. Test sets are used in artificial intelligence, machine learning, genetic programming, and statistics, and in all these fields a test set has much the same role.
Accuracy of classifier: In science, engineering, industry, and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value; in supervised learning, accuracy measures how closely a classifier's predictions match the true labels.
SUPERVISED LEARNING. Face and speech recognition, product or movie recommendations, and sales forecasting all involve supervised learning. Supervised learning can be further divided into two types: regression and classification.
Regression trains on and predicts a continuous-valued response, such as predicting property values. Classification seeks to identify the appropriate class label, for example analysing positive/negative sentiment, gender, benign and malignant tumours, secured and unsecured loans, and so on. The learning data comes with descriptions, labels, targets or desired outcomes, and the goal is to find a general rule that connects inputs to outcomes; this labelled data is then used to tag new data whose outcomes are unknown.
Supervised learning involves creating a predictive model based on pre-defined data. We
need to create a database and label it before we can build a system to estimate the price of
a piece of land or a dwelling based on various aspects, like size, location, and so on. The
algorithm needs to learn which attributes correspond to which prices. This information will
assist the algorithm in determining the market value of real estate based on the values of
the input attributes.
Supervised learning focuses on acquiring a skill based on the available training information.
The process involves analysing the initial data and generating a derived function that can be
used to generate fresh examples. Many supervised learning algorithms are available,
including logistic regression, neural networks, support vector machines (SVMs) and naive
Bayes classification.
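To make the house-price example above concrete, here is a minimal supervised learning sketch using scikit-learn's LinearRegression. The sizes, distances and prices are invented illustrative values, not real data.

```python
# Hypothetical labelled data: [size in square metres, distance to city centre in km] -> price.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[50, 10], [80, 8], [120, 5], [150, 3], [200, 1]])   # input attributes
y = np.array([100_000, 160_000, 260_000, 340_000, 480_000])       # labelled prices

model = LinearRegression().fit(X, y)          # learn which attributes correspond to which prices
print(model.predict([[100, 6]]))              # estimate the price of an unseen property
```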

MACHINE LEARNING. Machine learning is an artificial intelligence subfield in which algorithms are trained on data sets to build models that allow machines to perform tasks such as categorizing images, analysing data, or predicting price changes.
Machine learning is now one of the most popular forms of artificial intelligence, and it powers many digital products and services that we use daily. The sections below look at how machine learning works, the different types of machine learning models, how machine learning is used in real life, and its advantages and risks.
How machine learning works
Machine learning is a branch of Artificial Intelligence (AI) that uses different algorithms and models to understand the vast data given to us, recognize patterns in it, and then make informed decisions. It is widely used in many industries and businesses, and in educational and medical research fields. The field has evolved significantly over the past few years, from basic statistics and computational theory to the advanced territory of neural networks and deep learning, and nowadays it is widely used in various domains to predict future data.
Types of machine learning
There are several types of machine learning, each with special characteristics and
applications. Some of the main types of machine learning algorithms are as follows:
1. Supervised Machine Learning
2. Unsupervised Machine Learning
3. Semi-Supervised Machine Learning
4. Reinforcement Learning

Supervised Machine Learning


Supervised learning is when a model is trained on a "labelled dataset", that is, a dataset containing both input and output parameters. Supervised learning algorithms learn to map inputs to the correct outputs, and both the training and validation datasets are labelled.
Unsupervised Machine Learning
Unsupervised learning is a type of machine learning technique in which an algorithm discovers patterns and relationships using unlabelled data. Unlike supervised learning, unsupervised learning doesn't involve providing the algorithm with labelled target outputs. The primary goal of unsupervised learning is often to discover hidden patterns, similarities, or clusters within the data, which can then be used for purposes such as data exploration, visualization, dimensionality reduction, and more.
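As a small illustration of discovering clusters in unlabelled data, here is a hedged sketch using scikit-learn's KMeans; the points and the choice of two clusters are invented for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled 2-D points; no target outputs are provided.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)        # hidden grouping discovered from the data alone
print(labels)                         # e.g. [0 0 0 1 1 1]
print(kmeans.cluster_centers_)        # centre of each discovered cluster
```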
Semi-Supervised Learning
Semi-supervised learning is a machine learning approach that sits between supervised and unsupervised learning, so it uses both labelled and unlabelled data. It is particularly useful when obtaining labelled data is costly, time-consuming, or resource-intensive, for example when labelling requires specialist skills and resources.
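One way to sketch this mix of labelled and unlabelled data is scikit-learn's LabelPropagation, where unlabelled points are marked with -1; the tiny dataset below is invented for illustration.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Six points, but only two carry labels; -1 marks the unlabelled examples.
X = np.array([[1.0], [1.2], [0.8], [8.0], [8.2], [7.9]])
y = np.array([0, -1, -1, 1, -1, -1])

model = LabelPropagation().fit(X, y)   # labels spread to nearby unlabelled points
print(model.predict(X))                # expected: [0 0 0 1 1 1]
```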
Reinforcement Machine Learning
Reinforcement learning is a learning method in which an agent interacts with its environment by producing actions and discovering errors. Trial, error, and delayed reward are the most relevant characteristics of reinforcement learning. In this technique the model keeps improving its performance using reward feedback to learn a behaviour or pattern. These algorithms are tailored to specific problems, e.g. Google's self-driving car, or AlphaGo, where a bot competes with humans and even with itself to become a better and better Go player. Each time data is fed in, the agent learns and adds it to its knowledge as training data; the more it learns, the better trained and more experienced it becomes.
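The reward-feedback loop described above can be sketched with a tiny tabular Q-learning agent on a made-up one-dimensional corridor environment; all states, rewards and hyperparameters here are invented for illustration, not from the text.

```python
import numpy as np

# Corridor of 5 states; moving right from state 3 into state 4 earns a reward of 1.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:                      # episode ends at the goal state
        if rng.random() < epsilon:                    # explore by trial and error
            action = int(rng.integers(n_actions))
        else:                                         # exploit what has been learned
            action = int(np.argmax(Q[state]))
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Reward feedback updates the estimate of the action's long-term value.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))   # learned policy: expected to prefer "right" (1) in every state
```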
Advantages of Machine learning

1. Easily identifies trends and patterns

2. No human intervention needed (automation)

3. Continuous Improvement

4. Handling multi-dimensional and multi-variety data

5. Wide Applications

Disadvantages of Machine Learning

1. Data Acquisition

2. Time and Resources

3. Interpretation of Results

4. High error-susceptibility

Machine learning algorithms


1. Supervised Learning Algorithm
Supervised learning is a type of machine learning algorithm in which a labelled dataset is used to train the model. The goal of the algorithm is to learn a mapping from the input data to the output labels, allowing it to make predictions or classifications on new, unseen data.
2. Unsupervised Learning Algorithm
Unsupervised learning is a type of machine learning algorithm used to find patterns, structure or relationships within an unlabelled dataset. It explores the data's inherent structure without predefined categories or labels.
3. Reinforcement Learning
Reinforcement learning is a type of machine learning algorithm in which an agent learns to make successive decisions by interacting with its surroundings. The agent receives feedback in the form of rewards or punishments based on its actions, and its purpose is to discover optimal policies that maximize cumulative reward over time through trial and error. Reinforcement learning is frequently employed in scenarios in which the agent must learn how to navigate an environment, play games, control robots, or make judgements in uncertain situations.

Based on the similarity of function, the algorithms can be grouped into the following:

1. Regression Algorithms: To make predictions about new data, regression focuses on determining the relationship between the target output variables and the input features. Popular regression algorithms include Simple Linear Regression, Lasso Regression, Logistic Regression, Multivariate Regression, and Multiple Regression.

2. Instance-based Algorithms: These belong to the family of learning methods that compare new instances of a problem with the instances seen in the training data in order to find the best match and make a prediction. The top instance-based algorithms are k-Nearest Neighbour, Learning Vector Quantization, Self-Organizing Map, Locally Weighted Learning, and Support Vector Machines.
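A minimal sketch of the instance-based idea, comparing a new case with stored training instances, using scikit-learn's k-Nearest Neighbours; the points are made up for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Stored training instances and their labels.
X_train = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [9, 8], [8, 9]])
y_train = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)                 # "training" just stores the instances
print(knn.predict([[2, 2], [9, 9]]))      # new cases are matched to their nearest neighbours
```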

3. Regularization: Regularization refers to the technique of regularizing the learning process from a particular set of features. The weights attached to the features are normalized and moderated, which prevents certain features from dominating the prediction process, and this helps to prevent the problem of overfitting in machine learning. The main regularization algorithms are Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO) and Least-Angle Regression (LARS).
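A brief hedged sketch of how regularization moderates feature weights, comparing ordinary least squares with Ridge and LASSO in scikit-learn; the data and alpha values are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
# Only the first two features truly matter; the rest are noise.
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=50)

print(LinearRegression().fit(X, y).coef_)   # unregularized weights
print(Ridge(alpha=1.0).fit(X, y).coef_)     # weights shrunk towards zero
print(Lasso(alpha=0.1).fit(X, y).coef_)     # some weights driven exactly to zero
```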

4. Decision Tree Algorithms: These methods construct a tree-based model built from the decisions made by examining the values of the attributes. Decision trees are used for both classification and regression problems. Some of the well-known decision tree algorithms are Classification and Regression Tree, C4.5 and C5.0, Conditional Decision Trees, Chi-squared Automatic Interaction Detection and Decision Stump.

5. Bayesian Algorithms: These algorithms apply the Bayes theorem for classification and
regression problems. They include Naive Bayes, Gaussian Naive Bayes, Multinomial Naive
Bayes, Bayesian Belief Network, Bayesian Network and Averaged One-Dependence
Estimators.

6. Clustering Algorithms: Clustering algorithms involve the grouping of data points into clusters. All the data points in the same group share similar properties, and data points in different groups have highly dissimilar properties. Clustering is an unsupervised learning approach and is mostly used for statistical data analysis in many fields. Algorithms like k-Means, k-Medians, Expectation Maximisation, Hierarchical Clustering, and Density-Based Spatial Clustering of Applications with Noise fall under this category.
7. Association Rule Learning Algorithms: Association rule learning is a rule-based learning
method for identifying the relationships between variables in a very large dataset.
Association Rule learning is employed predominantly in market basket analysis. The most
popular algorithms are: Apriori algorithm and Eclat algorithm.

8. Artificial Neural Network Algorithms: Artificial neural network algorithms are inspired by the biological neurons in the human brain. They belong to the class of complex pattern-matching and prediction processes used in classification and regression problems. Some of the popular artificial neural network algorithms are Perceptron, Multilayer Perceptrons, Stochastic Gradient Descent, Back-Propagation, Hopfield Network, and Radial Basis Function Network.

9. Deep Learning Algorithms: These are modernized versions of artificial neural networks that can handle very large and complex databases of labelled data. Deep learning algorithms are tailored to handle text, image, audio and video data. Deep learning uses self-taught learning constructs with many hidden layers to handle big data, supported by more powerful computational resources. Some of the popular deep learning algorithms include Convolutional Neural Networks, Recurrent Neural Networks, Deep Boltzmann Machines, Auto-Encoders, Deep Belief Networks and Long Short-Term Memory Networks.

10. Dimensionality Reduction Algorithms: Dimensionality reduction algorithms exploit the intrinsic structure of data in an unsupervised manner to express the data using a reduced information set. They convert high-dimensional data into a lower dimension that can then be used in supervised learning methods such as classification and regression. Some of the well-known dimensionality reduction algorithms include Principal Component Analysis, Principal Component Regression, Linear Discriminant Analysis, Quadratic Discriminant Analysis, Mixture Discriminant Analysis, Flexible Discriminant Analysis and Sammon Mapping.
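A short sketch of dimensionality reduction with Principal Component Analysis in scikit-learn, projecting made-up high-dimensional data down to two components.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))            # 100 samples with 10 features

pca = PCA(n_components=2)                 # keep only the 2 strongest directions
X_reduced = pca.fit_transform(X)          # unsupervised: no labels are used

print(X_reduced.shape)                    # (100, 2) - ready for a downstream classifier
print(pca.explained_variance_ratio_)      # share of variance captured by each component
```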

11. Ensemble Algorithms: Ensemble methods are models made up of various weaker
models that are trained separately, and the individual predictions of the models are
combined using some method to get the final overall prediction. The quality of the output
depends on the method chosen to combine the individual results. Some of the popular
methods are: Random Forest, Boosting, Bootstrapped Aggregation, AdaBoost, Stacked
Generalization, Gradient Boosting Machines, Gradient Boosted Regression Trees and
Weighted Average.
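As a hedged illustration of combining many weaker models, here is a Random Forest (bootstrapped aggregation of decision trees) in scikit-learn on a synthetic dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees are trained on bootstrap samples; their votes are combined for the final prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))       # accuracy of the combined (ensemble) prediction
```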

APPLICATIONS OF MACHINE LEARNING

These algorithms aid in the development of intelligent systems that can learn from previous experiences and data to produce accurate results. Numerous enterprises are accordingly applying ML solutions to their business problems, or using it to create new and better products and services. Healthcare, insurance, financial services, marketing, and security services, among others, make use of ML.

1. Facial recognition / Image recognition. The most well-known application is facial recognition, and the simplest illustration of this application is the iPhone. There are many use cases of facial recognition, mostly for security purposes such as identifying criminals, searching for missing persons, and aiding forensic investigations. Other applications include disease diagnosis, intelligent marketing, and school attendance tracking.

2. Automatic Speech Recognition. Abbreviated as ASR, automatic speech recognition is used to convert speech into digital text. Its applications include performing tasks based on human voice input and authenticating users based on their voice. Speech samples and vocabulary are fed into the system to train the model. ASR systems currently find a wide variety of uses in the following areas: • Medical Assistance • Industrial Robotics • Forensics and Law Enforcement • Defence & Aviation

3. Financial Services. Machine learning has many use cases in financial services. ML algorithms turn out to be excellent at detecting fraud by monitoring the activities of each customer and assessing whether an attempted transaction is typical of that customer or not. A crucial security use case is financial monitoring to catch money laundering. ML also helps in making better trading decisions with the help of algorithms that can analyse thousands of data sources simultaneously. Credit scoring and underwriting are some of the other applications. The most widely recognized application in our everyday activities is the virtual personal assistant, such as Siri and Alexa.

4. Marketing and Sales. ML is improving lead-scoring algorithms by including parameters such as website visits, emails opened, downloads, and clicks to score each lead. It also helps organizations develop dynamic pricing models by using regression techniques to make predictions. Sentiment analysis is another key application, used to gauge consumer response to a specific product or a marketing initiative. Brands can use machine learning for computer vision to identify their products in online images and videos, including mentions that lack any accompanying text. Chatbots are also becoming more responsive and smarter.

5. Healthcare. An important application is in the diagnosis of diseases and ailments that are otherwise difficult to diagnose. Radiotherapy is also improving. Early-stage drug discovery is another pivotal application, involving technologies such as precision medicine and next-generation sequencing. Clinical trials cost a great deal of time and money to complete and deliver results; applying ML-based predictive analytics could improve on these factors and give better results. Additionally, these technologies are important for outbreak prediction, and researchers around the world are using ML techniques to predict epidemic outbreaks.

6. Recommendation Systems. Today, recommendation systems are used by many businesses to communicate effectively with site visitors. They can suggest relevant products, movies, web series, songs, and much more. The most prominent use cases of recommendation systems are e-commerce sites such as Flipkart and Amazon.

Decision trees
A decision tree is a flowchart-like structure used to make decisions or predictions. It
consists of nodes representing decisions or tests on attributes, branches representing the
outcome of these decisions, and leaf nodes representing final outcomes or predictions. Each
internal node corresponds to a test on an attribute, each branch corresponds to the result
of the test, and each leaf node corresponds to a class label or a continuous value.
DECISION TREE IN AI. AI (Artificial Intelligence) decision tree software is regularly used to predict specific future events. AI and the ML described above are essentially the same concepts: both use decision trees as a model for decision-making and for performing various tasks.
AI is a broad field of work combining numerous techniques for simulating intelligent behaviour in machines.
The nodes keep being narrowed down until only a single node is left, leaving the best answer. Some examples of AI decision trees include credit scoring, clinical diagnosis and fraud detection.
At the most fundamental level, the main differences between decision trees in AI and in ML are how they are created and used. AI decision trees are often created by hand (in an app or on paper) based on expert input, whereas ML trees are pieced together automatically from data. ML decision trees are valuable because they can handle complicated datasets, while AI decision trees draw on human expert insights.
Advantages of Decision trees
 Step-by-step approach. It is a step-by-step graphical approach that works on the probable
outcomes
 Versatile
 Easy navigation
 Reduces Average Handle Time
 Deliver actionable solutions
 Time-saving
 Training tool

While implementing a decision tree, the main issue is how to select the best attribute for the root node and for the sub-nodes. To solve such problems there is a technique called the Attribute Selection Measure, or ASM. With this measurement, we can easily select the best attribute for the nodes of the tree. There are two popular techniques for ASM (a computational sketch follows the two lists below):

o Information Gain
o Gini Index

1. Information Gain:
o Information gain is the measurement of changes in entropy after the segmentation of
a dataset based on an attribute.
o It calculates how much information a feature provides us about a class.
o According to the value of information gain, we split the node and build the decision
tree.
o A decision tree algorithm always tries to maximize the value of information gain, and
a node/attribute having the highest information gain is split first.

2. Gini Index:

o Gini index is a measure of impurity or purity used while creating a decision tree in the
CART(Classification and Regression Tree) algorithm.
o An attribute with the low Gini index should be preferred as compared to the high Gini
index.
o It only creates binary splits, and the CART algorithm uses the Gini index to create
binary splits.
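A small sketch of how the two attribute selection measures could be computed for a candidate binary split; the class-label lists below are made up for illustration.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    """Gini impurity of a list of class labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def information_gain(parent, left, right):
    """Reduction in entropy after splitting 'parent' into 'left' and 'right'."""
    n = len(parent)
    weighted_child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted_child

parent = [0, 0, 0, 0, 1, 1, 1, 1]
left, right = [0, 0, 0, 1], [0, 1, 1, 1]          # one candidate binary split
print(information_gain(parent, left, right))       # higher gain -> better attribute to split on
print(gini(left), gini(right))                     # lower Gini -> purer child node
```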

BUILDING A TREE. As we know, a tree has a root node and terminal nodes. After creating the root node, we can build the tree in two parts.
Part 1: Creation of terminal nodes. When creating terminal nodes in a decision tree, the crucial decision is whether to stop growing the tree or to create more terminal nodes. This can be decided by applying two criteria, maximum tree depth and minimum node records, as follows. The maximum tree depth is, as its name suggests, the maximum number of levels of nodes allowed below the root; once a tree reaches this depth, we must stop adding terminal nodes. The minimum node records is the minimum number of training records that a given node is responsible for; once a node reaches this minimum (or fewer), we must stop splitting and make it a terminal node. A terminal node is used to make a final prediction.

Part 2: Recursive splitting. Knowing when to create terminal nodes lets us build the tree. The tree is built by recursive splitting: child nodes (nodes added to an existing node) are created recursively on each group of data generated by dividing the dataset, by calling the same splitting function repeatedly.
Predictions
After creating a decision tree, we need to make predictions with it. Prediction entails navigating the decision tree with the specified set of data: starting at the root, the same test-and-descend procedure is repeated recursively with the left or right child node until a terminal node is reached.
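The stopping criteria and prediction steps above can be sketched with scikit-learn's DecisionTreeClassifier, where max_depth and min_samples_leaf play the roles of maximum tree depth and minimum node records; the dataset is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# max_depth limits how deep the recursive splitting goes;
# min_samples_leaf stops splitting once a node holds too few training records.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, min_samples_leaf=5, random_state=0)
tree.fit(X, y)

# Prediction walks each sample from the root down to a terminal (leaf) node.
print(tree.predict(X[:5]))
print(tree.get_depth())      # never exceeds the max_depth we set
```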
Assumptions
When creating a decision tree, we make some assumptions: • The whole training set serves as the root node when preparing the decision tree. • The decision tree algorithm favours categorical attribute values; continuous values must be discretized before model building if you want to use them.

REGRESSION

Regressions are statistical techniques that relate a dependent variable to one or more
independent variables. A regression model can show whether changes in the dependent
variable are associated with changes in one or more of the explanatory variables.

Why do we need Regression?

A regression analysis is a statistical tool used to model the relationship between a dependent variable and one or more independent variables. Regression analysis aims to establish a connection between variables, estimate the influence of independent variables on dependent variables, and make predictions or forecasts.

The mean of the dependent variable could be used as the best prediction, but this approach fails to take into account the effects of the variables that influence the dependent variable. If we were trying to predict the price of a house, using the mean alone would ignore the fact that larger houses generally have higher prices. Regression analysis allows us to account for the impact of such variables and to estimate the effect of each independent variable on the dependent variable, giving more accurate predictions and a better understanding of the relationships between the variables.

Furthermore, regression analysis can be used to test hypotheses about the relationships between variables, which can help us gain a deeper understanding of the factors that affect the dependent variable, and to make predictions or forecasts based on these relationships. Simply using the average value of the dependent variable as the prediction would ignore the potential influence of the other variables, leading to a less precise and less revealing evaluation.

Terminologies Related to the Regression Analysis in Machine Learning

• Response Variable: The primary factor to predict or understand in regression, also known as the dependent variable or target variable.

• Predictor Variable: Factors influencing the response variable, used to predict its values; also called independent variables.

• Outliers: Observations with significantly low or high values compared to the others, potentially impacting results and best avoided.

• Multicollinearity: High correlation among independent variables, which can complicate the ranking of influential variables.

• Underfitting and Overfitting: Overfitting occurs when an algorithm performs well on training data but poorly on test data, while underfitting indicates poor performance on both datasets.
Regression Types

The main types of regression are:

1. Simple Regression: Used to predict a continuous dependent variable based on a single independent variable. Simple linear regression should be used when there is only a single independent variable.

2. Multiple Regression: Used to predict a continuous dependent variable based on multiple independent variables. Multiple linear regression should be used when there are multiple independent variables.

3. Non-Linear Regression: Used when the relationship between the dependent variable and the independent variable(s) follows a nonlinear pattern. Provides flexibility in modelling a wide range of functional forms.

The Characteristics Of Regression.

• Continuous target variables are the subject of regression, which focuses on predicting
continuous variables that represent numerical values. Predicting home values, estimating
sales numbers, or estimating healing times are some examples.

• Regression models are evaluated based on their ability to minimize the error between predicted and actual values of the target variable. Common error measures include the mean absolute error, the mean squared error, and the root mean squared error (see the sketch after this list).

• Regression models range from simple linear models to more complex nonlinear models. The complexity of the model depends on the complexity of the relationship between the input features and the target variable.

• Overfitting and underfitting are common problems in regression models.

• The interpretability of regression models depends on the algorithm used. Simple linear
models are highly interpretable, while more complex models may be more difficult to
understand.
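As referenced in the list above, here is a small sketch computing the three common error measures for a handful of invented predictions.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = np.array([3.0, 5.0, 2.5, 7.0])      # actual target values (illustrative)
y_pred = np.array([2.5, 5.0, 4.0, 8.0])      # model predictions (illustrative)

mae = mean_absolute_error(y_true, y_pred)    # mean absolute error
mse = mean_squared_error(y_true, y_pred)     # mean squared error
rmse = np.sqrt(mse)                          # root mean squared error

print(mae, mse, rmse)
```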

Which task is a regression task?

• Predicting a person's age.

• Predicting the nationality of a person.

• Predicting whether a company's share price will rise tomorrow.

• It is possible to predict whether a document is related to a UFO sighting.


Predicting a person's age is a regression task because age is a real value; predicting nationality is categorical, predicting whether a share price will rise is a discrete yes/no answer, and predicting whether a document relates to a topic is likewise a classification task. Only the first is a regression task.

TYPES OF REGRESSION ALGORITHM:

• Simple Linear Regression

• Multiple Linear Regression

• Polynomial Regression

• Support Vector Regression

• Decision Tree Regression

• Random Forest Regression
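To illustrate one of the listed algorithms, here is a hedged polynomial regression sketch (a simple way to capture a nonlinear relationship with a linear model); the quadratic data is generated for the example.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50).reshape(-1, 1)
y = 2 * x.ravel() ** 2 + rng.normal(scale=1.0, size=50)   # nonlinear (quadratic) relationship

X_poly = PolynomialFeatures(degree=2).fit_transform(x)    # add x^2 as an extra feature
model = LinearRegression().fit(X_poly, y)

print(model.predict(PolynomialFeatures(degree=2).fit_transform([[2.0]])))  # roughly 8
```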

ARTIFICIAL NEURAL NETWORK

Neural networks, also known as artificial neural networks (ANN) or simulated neural networks (SNN), are part of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, and they mimic the interaction of biological neurons.

Artificial Neural Networks (ANN) consist of layers of nodes that contain an input layer, one
or more hidden layers and an output layer. Each node or artificial neuron connects to
another and is associated with a weight and a threshold. When the output of an individual
node exceeds a specified threshold value, that node is activated and sends data to the next
layer of the network. Otherwise, the data will not be transmitted to the next level of the
network.

Neural networks rely on training data to learn and improve their accuracy over time. Once trained, these learning algorithms are powerful computer science and artificial intelligence tools that can classify and cluster data at high speed: speech recognition or image recognition tasks can take minutes rather than the hours required for manual identification by human experts. One of the most famous neural networks is Google's search algorithm.

HISTORY OF NEURAL NETWORKS

Neural networks have a longer history than most people think. Although the idea of a "thinking machine" dates back to the ancient Greeks, we will focus on the most important events that led to the development of thinking around neural networks, whose popularity has ebbed and flowed over the years:

1943: Warren S. McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity". The purpose of this study was to understand how the human brain can create complex patterns through interconnected brain cells, or neurons. One of the main ideas of this work was the comparison of binary threshold neurons with Boolean logic (i.e. 0/1 or true/false statements).

1958: Frank Rosenblatt is credited with developing the perceptron, documented in his study "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain". He took the work of McCulloch and Pitts a step further by adding weights to the equation. Using an IBM 704, Rosenblatt made a computer learn to distinguish cards marked on the left from cards marked on the right.

1974: Although many researchers contributed to the idea of backpropagation, Paul Werbos was the first person in the United States to point out its application to neural networks in his dissertation.

HOW DOES A NEURAL NETWORK WORK?

Neural networks have numerous layers. Each layer serves a distinct purpose, and the more complex the network, the more layers there are; this is why a neural network is also known as a multi-layer perceptron. Before you can fully understand how neural networks work, you should be familiar with their parts. The simplest form of a neural network consists of three layers of nodes:

• The input layer.

• The hidden layer.

• The output layer.

The names of these layers hint at the distinct purpose of each of them. These layers are made up of nodes, and a neural network can have multiple hidden layers depending on the requirements. The input layer receives input signals and transfers them to the next layer; it is how the system collects information from the outside world. If you know how the linear regression model works, you will be able to understand how a neural network works, since each individual node can be compared to a single linear regression model.

The hidden layers handle the back-end calculations. A network can have more than one hidden layer, but a neural network has at least one. Finally, the output layer transmits the final result of the calculation.

HOW DOES A PERCEPTRON LAYER WORK?

A neural network is called a 'multi-layer perceptron' because it is made up of many layers. These layers, also known as 'hidden layers' or dense layers, contain numerous perceptron neurons, which are the main components making up a perceptron layer. These neurons receive inputs and combine the numerical inputs with a bias and a set of weights, producing a single output.

Each neuron takes its weights and bias into account when calculating. The combination function uses the weights and bias to produce a modified input, by solving the following equation:

combination = bias + sum(weight_i * input_i)

Afterwards, the activation function produces the final output:

output = activation(combination)

The activation function defines the nature of the neural network's output and connects the different parts of the network. There are several activation functions.

Linear function: the neuron simply outputs its combination.

activation = combination

Hyperbolic tangent function: one of the most popular activation functions for neural networks. It is a sigmoid-shaped function whose value lies between -1 and +1.

activation = tanh(combination)

Logistic function: a kind of sigmoid function similar to the hyperbolic tangent, except that its value lies between 0 and 1.

activation = 1 / (1 + e^(-combination))

Rectified linear unit (ReLU) function: another prevalent activation function. When the combination is greater than or equal to zero, ReLU outputs the combination; when the combination is below zero, it outputs zero.

activation = max(0, combination)
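The combination and activation equations above can be sketched directly in Python; the weights, bias and inputs are arbitrary illustrative numbers.

```python
import numpy as np

def linear(z):   return z                         # activation = combination
def tanh(z):     return np.tanh(z)                # values between -1 and +1
def logistic(z): return 1.0 / (1.0 + np.exp(-z))  # values between 0 and 1
def relu(z):     return np.maximum(0.0, z)        # 0 below zero, combination otherwise

inputs  = np.array([0.5, -1.2, 0.3])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.05

combination = bias + np.dot(weights, inputs)      # combination = bias + sum(weight * input)
for activation in (linear, tanh, logistic, relu):
    print(activation.__name__, activation(combination))
```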

TYPES OF NEURAL NETWORKS

Neural networks can be categorized into various types, each serving a distinct purpose. The following are the most common types of neural networks you will come across, together with their common use cases.

The perceptron, created by Frank Rosenblatt in 1958, is the oldest neural network.

• Feedforward neural networks, or multi-layer perceptrons, are what we have been focusing on in this article. They are made up of an input layer, one or more hidden layers, and an output layer. Most real-world problems are nonlinear, so it is important to note that these networks are usually built from sigmoid neurons rather than perceptrons. Data is fed into these models to train them, and they are the foundation for computer vision, natural language processing, and other neural networks.

• Convolutional neural networks are used for image recognition, pattern recognition, and computer vision. These networks use the principles of linear algebra, particularly matrix multiplication, to identify patterns within an image.
• Recurrent neural networks are identified by their feedback loops. They are applied to time-series data to make predictions about future outcomes, such as stock market predictions or sales forecasting.

APPLICATIONS OF NEURAL NETWORKS

Recognizing handwriting.

Neural networks have helped computers recognize handwriting. This falls under the umbrella of pattern recognition, where the computer can discern human writing, be it words or numbers.

This technology employs sub-divisions such as backpropagation neural networks, multilayer neural networks, and the Neocognitron. Handwriting can be converted to computer text with the help of a wide range of software, and this technology is what makes it possible.

Checking signatures.

Another popular application of neural networks is signature verification. It is a technique that is not based on vision.

The software is initially trained using geometrical data for this technology. The application then identifies or extracts the geometrical characteristics of the signature, confirming whether or not it is genuine.

This tech is a great way to avoid scams, as it's way more precise than a person. The
verification is therefore highly reliable, swift and error-free.

Recognizing human faces.

One of the most popular applications of neural networks is human face detection, where
the software identifies the given face. The software can classify which images have faces
and which are faceless after processing them thoroughly.

This kind of tech requires the development of sophisticated neural networks equipped with
sophisticated algorithms. Full-connected multilayer feed-forward neural networks and PCA
are some of the neural networks used for training. This illustrates how a neural network
works in sync with evolving tech.

Selling and marketing.

Online shopping platforms are putting great effort into enhancing the customer experience, doing everything from refining the website interface to suggesting items based on previous buying habits.

Neural networks allow the platforms to identify the customer's choices and needs by
analysing their search history, previous purchases and other browsing patterns. This is the
reason why, when you type something into Google or any other major search engine, your
wall or feed is flooded with advertisements for similar merchandise from these online
retailing platforms.

This has been revolutionary for marketers, who can now simply take the help of ANN-based software to gather all the information required to build a targeted marketing campaign and attract the right audience.

Identifying market trends.

The stock market is also being impacted by neural networks. The stock market employs
them for analysing technical aspects. It's not like one will be able to hit the jackpot without
knowing anything with the help of neural network-based software.

Neural networks, in fact, don't make predictions, but they do a thorough price analysis and
offer the user the best possible choices. As a result, they assist the user in making data-
driven decisions and not just following their instincts by analyzing previous patterns. They
aid the trader in identifying things like non-linear interdependence and other kinds of
patterns that other types of technical analysis might miss.

Building artificial assistants.

Some of the greatest AI assistants of all time have been built using neural networks' speech detection capability. The technology may make mistakes in understanding certain languages and unfamiliar dialects during its learning phase, but it is evident that substantial progress has been made thus far.

Artificial neural networks in particular are proving crucial in this field; some of the most popular ones include multilayer networks, multilayer networks with continuous connections, and Kohonen self-organizing feature maps.

The health care system.

In the realm of medical care, artificial intelligence, and specifically artificial neural network
(ANN), is gaining popularity for medical diagnosis. The emergence of advanced neural
network technologies, which are deemed ideal for detecting ailments via scans, is expected
to propel its popularity further.

SUPPORT VECTOR MACHINES

A support vector machine (SVM) is a supervised machine learning algorithm that classifies data by finding an optimal line or hyperplane that maximizes the distance between each class in an N-dimensional space. Support vector machines were developed in the 1990s by Vladimir N. Vapnik and his colleagues, who published the work in a paper titled "Support Vector Method for Function Approximation, Regression Estimation". Classification problems often use support vector machines. They find the optimal hyperplane that maximizes the margin between the closest data points of opposite classes. The number of attributes in the supplied data determines whether the separating boundary is a line in 2-D space or a hyperplane in an n-dimensional space. Since multiple hyperplanes can be found to distinguish the classes, the algorithm selects the decision boundary that maximizes the margin between the closest points; this allows it to generalize well to new data and make accurate classification predictions. The data points that lie on the margin boundaries parallel to the optimal hyperplane, and therefore determine the maximum margin, are referred to as support vectors.

The SVM algorithm can handle both linear and nonlinear classification tasks. To enable linear separation, kernel functions are used to transform the data into a higher-dimensional space. This use of kernel functions is referred to as the 'kernel trick', and the choice of kernel function, such as linear, polynomial, radial basis function (RBF), or sigmoid, is determined by the characteristics of the data and the particular application.
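A small hedged sketch of an SVM classifier with the kernel trick using scikit-learn's SVC; the two-moon dataset and the RBF kernel choice are illustrative.

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# A dataset that is not linearly separable in its original space.
X, y = make_moons(n_samples=200, noise=0.15, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps the data into a higher-dimensional space
# where a separating hyperplane (a nonlinear boundary in the original 2-D space) can be found.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

print(clf.score(X_test, y_test))          # accuracy on unseen data
print(clf.support_vectors_.shape)         # the points that determine the margin
```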
