
Artificial Intelligence

Artificial intelligence is an important branch of computer science that has the broad aim of creating
machines that behave intelligently. The field has several subfields, including robotics and machine
learning. There are three major categories of artificial intelligence:

Artificial Narrow Intelligence

Artificial Narrow Intelligence or Weak AI is so-called because it is limited to the performance of
specialised and highly-specific tasks. Amazon’s Alexa is an example of Artificial Narrow Intelligence.
Most commercial applications of AI are examples of Artificial Narrow Intelligence.

Artificial General Intelligence

Artificial General Intelligence, also known as Strong AI or Human-Level AI, is the term used for artificial
intelligence that permits a machine to have the same capabilities as a human.

Artificial Super Intelligence

Artificial Super Intelligence goes beyond general intelligence and results in machines whose
capabilities are superior to those of humans.

Most current artificial intelligence is narrow, although general intelligence looks increasingly likely
to become commonplace in the near future. Super-intelligence is not yet even remotely likely.

Artificial Intelligence has many uses in business and finance, many of which draw heavily on
machine learning, including:

 Using sophisticated pattern-recognition techniques to identify potentially fraudulent insurance claims and credit/debit card transactions.
 Employing network analysis to detect bank accounts likely to be used for the transfer of the proceeds of crime.
 Customer segmentation and targeted advertising.
 Identifying IT outages before they happen using data from real-time monitoring and pattern-recognition techniques.
 Using data from GPS sensors on delivery trucks and machine learning to optimise routes and ensure maximum fleet usage.
 Product recommendation systems, such as that used by Amazon.
 Analysing customer sentiment from social media posts, using Natural Language Processing.
 Predicting the future direction and volatility of the stock market by building predictive models based on past data and macroeconomic variables.
 Dynamic pricing of goods and services using sales, purchasing and market data together with machine learning.
 Active monitoring of computer networks for intrusion attempts.

Robotics
Robotics is an interdisciplinary branch of artificial intelligence which draws on the disciplines of
computer science, electronic engineering and mechanical engineering and is concerned with the
development of machines which can perform human tasks and reproduce human actions. The
human tasks robotics seeks to replicate include logic, reasoning and planning.

Not all robots are designed to resemble humans in appearance, but many are given human-like
features to allow them to perform physical tasks otherwise performed by humans. The design of
such robots makes considerable use of sensor technology, including but not limited to computer
vision systems, which allow the robot to 'see' and identify objects.

Robots are frequently used on production lines in large manufacturing enterprises, but can also be
found in the autopilot systems in aircraft as well as in the more recent and growing development of
self-driving or autonomous cars. All these examples represent the 'narrow' category of artificial
intelligence. Robotics is increasingly used in business and finance:

 Robo-advisors use speech recognition and knowledge bases to assist customers of financial institutions in selecting the most suitable products.
 Artificial Intelligence in mobile applications is being employed to assist bank customers in managing their personal finances.
 Businesses are increasingly employing robotic assistants in customer-facing roles such as technical support on telephones and websites.

Machine Learning
Machine learning is the use of statistical models and other algorithms to enable computers to learn
from data. It is divided into two distinct types: unsupervised and supervised learning. The main
feature of machine learning is that the machine learns from its own experience of interacting with
the data it is processing and can make decisions independently of any input from human beings.
Such systems can adapt or create their own algorithms to help them make better and more relevant
decisions on the basis of this experience.

Unsupervised Learning draws inferences and learns structure from data without being provided with
any labels, classifications or categories. In other words, unsupervised learning can occur without
being provided with any prior knowledge of the data or patterns it may contain.

The most frequently used form of unsupervised learning is clustering, which is the task of grouping a
set of observations so that those in the same group (cluster) are more similar to each other in some
way than they are to those in other clusters. There are multiple methods of determining similarity or
dissimilarity, the most commonly used being some form of distance measure, with observations that
are close to each other being considered part of the same cluster.

The quality of clusters can be determined by a number of evaluation measures. These generally
base their quality score on how compact each cluster is and how distant it is from other clusters.
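
As a concrete illustration of clustering and of the compactness-and-separation style of evaluation
described above, the short Python sketch below groups some toy two-dimensional observations with
k-means and scores the result with a silhouette coefficient. It assumes the scikit-learn library, which
the text does not prescribe; the data and parameter choices are purely illustrative.

```python
# A minimal clustering sketch (assumes scikit-learn; data is invented).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy data: two well-separated groups of 2-D observations.
rng = np.random.default_rng(42)
observations = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
])

# Group the observations into two clusters by Euclidean distance.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42)
labels = kmeans.fit_predict(observations)

# The silhouette score rewards clusters that are compact and far apart.
print("Cluster sizes:", np.bincount(labels))
print("Silhouette score:", silhouette_score(observations, labels))
```
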
Another frequently encountered form of unsupervised learning is market basket analysis or affinity
analysis. This type of analysis is designed to uncover co-occurrence relationships between
attributes of particular individuals or observations. For instance, a supermarket which has used
market basket analysis may discover that a particular brand of washing powder and fabric
conditioner frequently occur in the same transaction, so offering a promotion for one of the two will
likely increase the sales of both, but offering a promotion for the purchase of both is likely to have
little impact on revenue.

The use of market basket analysis can also be found in online outlets such as Amazon, which uses
the results of the analysis to inform its product recommendation system. The two most often used
market basket analysis approaches are the Apriori algorithm and frequent pattern (FP) growth.
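
To make the idea concrete, the sketch below shows the support-counting step that sits at the heart of
approaches such as Apriori: counting how often pairs of items occur together and keeping those above
a minimum support threshold. The transactions and the threshold are invented for illustration; a real
implementation would prune candidate itemsets iteratively rather than enumerating every pair.

```python
# A simplified market basket sketch: pairwise support counting (illustrative data).
from itertools import combinations
from collections import Counter

transactions = [
    {"washing powder", "fabric conditioner", "milk"},
    {"washing powder", "fabric conditioner"},
    {"bread", "milk"},
    {"washing powder", "fabric conditioner", "bread"},
]

min_support = 0.5  # a pair must appear in at least half of all transactions
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Report frequently co-occurring pairs, e.g. washing powder + fabric conditioner.
for pair, count in sorted(pair_counts.items()):
    support = count / len(transactions)
    if support >= min_support:
        print(pair, f"support={support:.2f}")
```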

Supervised Learning is similar to the human task of concept learning. At its most basic level, it
allows a computer to learn a function that maps a set of input variables to an output variable using a
set of example input-output pairs. It does this by analysing the supplied examples and inferring what
the relationship between the two may be.

The goal is to produce a mapping that allows the algorithm to correctly determine the output value
for as yet unseen data instances. This is closely related to the predictive analytics covered earlier in
the unit: the machine learns from past relationships between variables and builds up a measure of
how some variables, factors or behaviours predict the responses it should give. For example,
knowing the temperature and other weather conditions for the coming week allows the machine to
calculate orders for specific products based on those forecast factors.
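
The sketch below illustrates this in Python using the example above: learning a mapping from forecast
temperature to product orders from a handful of input-output pairs. The figures are invented, and
scikit-learn's linear regression is just one possible choice of model.

```python
# A minimal supervised-learning sketch (assumes scikit-learn; figures are invented).
import numpy as np
from sklearn.linear_model import LinearRegression

# Example input-output pairs: weekly average temperature (°C) -> units ordered.
temperature = np.array([[5], [10], [15], [20], [25], [30]])
orders = np.array([120, 150, 210, 260, 330, 400])

# Infer the mapping from the supplied examples.
model = LinearRegression().fit(temperature, orders)

# Use the learned mapping on an as-yet-unseen forecast of 22 °C.
print("Predicted orders:", round(model.predict(np.array([[22]]))[0]))
```
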
The purpose of data visualisation
Data visualisation allows us to:

 Summarise large quantities of data effectively.
 Answer questions that would be difficult, if not impossible, to answer using non-visual analyses.
 Discover questions that were not previously apparent and reveal previously unidentified patterns.
 View the data in its context.

The benefits of data visualisation


By utilising data visualisation techniques, we can:

 Quickly identify emerging trends and hidden patterns in the data.
 Gain rapid insights into data which are relevant and timely.
 Rapidly process vast amounts of data.
 Identify data quality issues.

The history of data visualisation


Data visualisation is not a new concept. It could be argued that it reaches all the way back to
prehistory.

One of the most ancient calculating devices, the abacus, was invented by the Chinese over 2,500
years ago. It is not only an ancient calculator but also an early example of data visualisation: the
number of beads in each column shows the relative quantities counted on the rods. The abacus has
two sections, top and bottom, with a bar or ‘beam’ dividing them. When beads are pushed upwards
or downwards towards the bar, they are considered counted.

The magnitude of the numbers increases by a multiple of 10 for each rod, going from right to left,
with the far right-hand rod containing the beads with the lowest value or denomination. This means
that below the bar of the far-right rod, each bead is worth 1 unit and there are a total of five. Each of
the two beads above the bar is worth the same as all five beads below it, so each bead above the
bar on the far-right rod is worth 5. In the next column to the left, each of the bottom beads is worth
10 and each of the top beads is worth 50, and so on.
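
The place-value arithmetic described above can be expressed in a few lines of Python. This is purely
an illustration of the scheme in the text: each rod contributes (counted upper beads × 5 + counted
lower beads × 1) multiplied by a power of ten, with the rightmost rod as the units column; the bead
counts used are made up.

```python
# Illustrative sketch of the abacus place-value scheme described above.
def abacus_value(rods):
    """rods: list of (upper_beads_counted, lower_beads_counted), left to right."""
    total = 0
    for place, (upper, lower) in enumerate(reversed(rods)):
        total += (upper * 5 + lower) * 10 ** place
    return total

# Rightmost rod: one upper bead (5) and two lower beads (2) -> 7 units.
# Next rod to the left: three lower beads -> 3 tens.
print(abacus_value([(0, 3), (1, 2)]))  # prints 37
```
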
The pie chart is an example of a static composition visualisation: it shows the relative composition
using the size of its slices, each of which represents a simple share of the total.

Static comparison allows comparison between categories at a single point in time.

Dynamic comparison permits comparison between categories over time.

The waterfall chart shows how each component adds to or subtracts from the total.

Dynamic composition shows the change in the composition of the data over time. Where the
analysis involves only a few periods, a stacked bar chart is used when the absolute value of each
category matters in addition to the relative differences between categories.
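
As an illustration of the dynamic composition case, the sketch below draws a stacked bar chart over a
few periods using matplotlib (one common plotting library, not prescribed by the text); the categories
and figures are invented.

```python
# A minimal stacked bar chart sketch for dynamic composition (invented figures).
import matplotlib.pyplot as plt

periods = ["Q1", "Q2", "Q3"]
product_a = [30, 45, 50]
product_b = [20, 25, 40]

fig, ax = plt.subplots()
ax.bar(periods, product_a, label="Product A")
ax.bar(periods, product_b, bottom=product_a, label="Product B")  # stack on top of A
ax.set_ylabel("Sales")
ax.set_title("Dynamic composition: sales mix by quarter")
ax.legend()
plt.show()
```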

What makes a good visualisation?


According to Andy Kirk, a data visualisation specialist, a good data visualisation should have the
following qualities:

 It must be trustworthy;
 It must be accessible; and
 It must be elegant.

In his work on graphical excellence, statistician Edward R. Tufte describes an effective visualisation
as:

 the well-designed presentation of interesting data: a matter of substance, of statistics, and of design;
 consisting of complex ideas communicated with clarity, precision, and efficiency;
 that which gives to the viewer the greatest number of ideas in the shortest time with the least ink in the smallest space;
 nearly always multivariate;
 requiring that we tell the truth about the data.

What does this mean in practice?


Kirk's principle of trustworthiness and Tufte's call to tell the truth about the data mean that we should
avoid constructing, whether deliberately or accidentally, visualisations that do not accurately depict
the underlying data. This includes, but is not limited to, choosing the most appropriate type of
visualisation; ensuring all axes start with the lowest values (preferably zero) at the bottom left of the
chart; and labelling axes and data series and, where possible, giving them the same scale. It is
common to see politicians and advertisers ignoring these principles in the interests of influencing
their audiences to believe what they wish to tell them.

The principle of accessibility suggested by Kirk echoes Tufte's statement that a good visualisation
should not only give the viewer the greatest number of ideas in the shortest time, but should also
have clarity, precision and efficiency. In effect, this means concentrating on those design elements
that actively contribute to visualising the data and avoiding unnecessary decoration, which Tufte
refers to as 'chart junk'. It also means we should avoid trying to represent too many individual data
series in a single visualisation, breaking them into separate visualisations if necessary.

Accessibility is, to paraphrase Kirk, all about offering the most insight for the least amount of viewer
effort. This implies that a significant part of designing a visualisation is understanding the needs of
the audience for the work and making conscious design decisions based on that knowledge.

Careful use of colour is a fundamental part of ensuring an accessible design. The choice of colours
should be a deliberate decision. They should be limited in number, should complement each other
and should be used in moderation to draw attention to key parts of the design. When colour is used
this way, it provides a cue to the viewer as to where their attention should be focussed. For
visualisations with a potentially international audience, the choice of colours should also be informed
by cultural norms. For example, in the Western world the colour red signals danger, but in East Asia
it signifies luck or spirituality.

An accessible design should enable the viewer to gain rapid insights from the work, and familiar
visual metaphors can help here, much as a handle enables us to use a cup more efficiently. Such
metaphors include the 'traffic light' design used to provide a high-level summary of the state of some
key business risk indicator, or the speedometer metaphor employed to indicate performance. Both
are frequently found on the Business Intelligence dashboards used by businesses.

Kirk's final principle, elegance, is a difficult one to describe, but it is vital to any successful
visualisation. The key here is that what is aesthetically pleasing is usually simpler and easier to
interpret, and is likely not only to catch our attention but to hold it for longer. A good design should
avoid placing obstacles in the way of the viewer; it should flow seamlessly and guide the viewer to
the key insights it is designed to impart.
