What Is Artificial Intelligence
OR
“Artificial intelligence (AI) refers to the simulation of human intelligence in
machines that are programmed to think like humans and mimic their actions. The term may
also be applied to any machine that exhibits traits associated with a human mind such as
learning and problem-solving.”
Explanation:
Alan Turing's early work was followed a few years later by John McCarthy, who coined the term
“artificial intelligence” to denote machines that could think autonomously. He
described the threshold as “getting a computer to do things which, when done by
people, are said to involve intelligence.”
AI is important because it can give enterprises insights into their operations that they
may not have been aware of previously and because, in some cases, AI can perform
tasks better than humans.
This has helped fuel an explosion in efficiency and opened the door to entirely new
business opportunities for some larger enterprises.
Prior to the current wave of AI, it would have been hard to imagine using computer
software to connect riders to taxis, but today Uber has become one of the largest
companies in the world by doing just that.
As another example, Google has become one of the largest players in a range of online
services by using machine learning to understand how people use its services and then
improving them. In 2017, the company's CEO, Sundar Pichai, declared that Google would
operate as an “AI first” company.
Today's largest and most successful enterprises have used AI to improve their
operations and gain an advantage over their competitors.
Machine learning is one way to use AI. It was defined in the 1950s by AI pioneer Arthur
Samuel as “the field of study that gives computers the ability to learn without
explicitly being programmed.”
Machine learning methods enable computers to operate autonomously without explicit
programming. ML applications are fed new data and can independently learn, grow, and
adapt.
Machine learning derives insightful information from large volumes of data by
leveraging algorithms to identify patterns and learn in an iterative process. ML
algorithms use computation methods to learn directly from data instead of relying on
any predetermined equation that may serve as a model.
The performance of ML algorithms improves adaptively as the number of available
training samples increases. For example, deep learning is a sub-domain of machine
learning that trains computers to imitate natural human traits such as learning from
examples, and it often outperforms conventional ML algorithms.
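As a toy illustration of Samuel's definition, the sketch below (made-up data, no particular library) estimates the rule behind example pairs by iteratively shrinking the prediction error, rather than having the rule hard-coded:

```python
# Toy example: instead of hard-coding the rule y = 2x, we estimate
# the slope w from example pairs by iteratively reducing the error.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # samples of an unknown rule

w = 0.0    # model parameter: learned, not explicitly programmed
lr = 0.01  # learning rate
for _ in range(500):         # iterative 'learning' process
    for x, y in data:
        error = w * x - y    # how wrong the current model is
        w -= lr * error * x  # nudge w to reduce that error

print(round(w, 2))  # → 2.0, recovered from the data alone
```

With more samples or more iterations, the estimate of w only gets better, which is the adaptive improvement described above.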
Machine learning is widely used in many industries, including healthcare, finance, and
e-commerce. Learning it can open up a wide range of career opportunities in these
fields.
Machine learning can be used to build intelligent systems that can make decisions and
predictions based on data. This can help organizations make better decisions, improve
their operations, and create new products and services.
Machine learning is an important tool for data analysis and visualization. It allows you to
extract insights and patterns from large datasets, which can be used to understand
complex systems and make informed decisions.
Machine learning is a rapidly growing field with many exciting developments and
research opportunities. By learning machine learning, you can stay up-to-date with the
latest research and developments in the field.
Deep Learning:
“Deep learning is a subfield of ML that uses algorithms called artificial neural
networks (ANNs), which are inspired by the structure and function of the brain and are capable
of self-learning. ANNs are trained to “learn” models and patterns rather than being explicitly
told how to solve a problem.”
OR
“Deep Learning is a subfield of machine learning concerned with algorithms
inspired by the structure and function of the brain called artificial neural networks.”
Explanation:
Deep learning is a machine learning technique that teaches computers to do what
comes naturally to humans: learn by example.
Deep learning is a key technology behind driverless cars, enabling them to recognize a
stop sign, or to distinguish a pedestrian from a lamppost.
It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-
free speakers. Deep learning is getting lots of attention lately and for good reason. It’s
achieving results that were not possible before.
In deep learning, a computer model learns to perform classification tasks directly from
images, text, or sound.
Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding
human-level performance. Models are trained by using a large set of labeled data and
neural network architectures that contain many layers.
The history of deep learning can be traced back to 1943, when Walter Pitts and Warren
McCulloch created a computer model based on the neural networks of the human
brain.
They used a combination of algorithms and mathematics they called “threshold logic”
to mimic the thought process. Since then, deep learning has evolved steadily, with only
two significant breaks in its development, both tied to the infamous “AI winters.”
1958: Frank Rosenblatt creates the perceptron, an algorithm for pattern recognition
based on a two-layer computer neural network using simple addition and subtraction.
He also proposed additional layers with mathematical notations, but these wouldn’t be
realized until 1975.
1980: Kunihiko Fukushima proposes the Neocognitron, a hierarchical, multilayered
artificial neural network that has been used for handwriting recognition and other
pattern recognition problems.
1989: Scientists were able to create algorithms that used deep neural networks, but
training times for the systems were measured in days, making them impractical for real-
world use.
1992: Juyang Weng publishes Cresceptron, a method for performing 3-D object
recognition automatically from cluttered scenes.
Mid-2000s: The term “deep learning” begins to gain popularity after a paper by
Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered neural network
could be pre-trained one layer at a time.
2009: Researchers at the NIPS Workshop on Deep Learning for Speech Recognition find
that with a large enough data set, neural networks don't need pre-training, and error
rates drop significantly.
2012: Artificial pattern-recognition algorithms achieve human-level performance on
certain tasks, and Google's deep learning system learns to recognize cats in YouTube
videos.
2014: Google buys UK artificial intelligence startup DeepMind for £400m.
2015: Facebook puts deep learning technology called DeepFace into operation to
automatically tag and identify Facebook users in photographs. The algorithm performs
superior face-recognition tasks using deep networks that take into account 120 million
parameters.
2016: Google DeepMind’s algorithm AlphaGo masters the complex board game Go and
beats professional Go player Lee Sedol at a highly publicized tournament in Seoul.
The promise of deep learning is not that computers will start to think like
humans. That’s a bit like asking an apple to become an orange. Rather, it demonstrates that
given a large enough data set, fast enough processors, and a sophisticated enough algorithm,
computers can begin to accomplish tasks that used to be completely left in the realm of human
perception — like recognizing cat videos on the web (and other, perhaps more useful
purposes).
In forward propagation, input data passes through the network's layers to produce a
prediction. Another process, called backpropagation, uses algorithms such as gradient
descent to calculate the error in those predictions and then adjusts the weights and
biases of the function by moving backwards through the layers in an effort to train the
model. Together, forward propagation and backpropagation allow a neural network to
make predictions and correct for any errors accordingly. Over time, the algorithm
becomes gradually more accurate.
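The forward/backward cycle can be sketched in a few lines of NumPy. Everything here is illustrative: random toy data, a tiny two-layer network, and plain gradient descent, not a production training loop.

```python
import numpy as np

# Forward propagation turns inputs into predictions; backpropagation
# carries the error gradients backwards through the layers (chain
# rule) so gradient descent can adjust the weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 toy samples, 3 features
y = rng.normal(size=(8, 1))          # toy regression targets
W1 = 0.5 * rng.normal(size=(3, 4))   # first-layer weights
W2 = 0.5 * rng.normal(size=(4, 1))   # second-layer weights
lr = 0.05                            # gradient-descent step size

def forward(X):
    h = np.tanh(X @ W1)              # hidden-layer activations
    return h, h @ W2                 # network prediction

h, pred = forward(X)
loss0 = np.mean((pred - y) ** 2)     # error before training

for _ in range(200):
    h, pred = forward(X)             # forward propagation
    err = (pred - y) / len(X)        # prediction error
    gW2 = h.T @ err                  # backprop through layer 2
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))  # ...and layer 1 (tanh')
    W2 -= lr * gW2                   # gradient descent updates
    W1 -= lr * gW1

h, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(loss0, "->", loss)             # the loss shrinks as weights adapt
```

The same forward pass that produces training predictions is all that is needed later at prediction time, only with the weights frozen.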
The two key phases of neural networks are called training (or learning) and
inference (or prediction), and they refer to the development phase versus production or
application. When creating the architecture of deep network systems, the developer chooses
the number of layers and the type of neural network, and training determines the weights.
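The split between the two phases can be shown with a deliberately simple, hypothetical classifier: training searches for a parameter (here a single threshold), while inference merely applies the frozen parameter to unseen inputs.

```python
# Sketch: the training (learning) phase determines a parameter from
# labeled data; the inference (prediction) phase only applies it.
train = [(1.0, 0), (2.0, 0), (4.0, 1), (5.0, 1)]  # (feature, label) pairs

# --- training: place the threshold between the two classes ---
upper0 = max(x for x, label in train if label == 0)
lower1 = min(x for x, label in train if label == 1)
threshold = (upper0 + lower1) / 2     # learned parameter

# --- inference: apply the frozen threshold, no further learning ---
def predict(x):
    return int(x > threshold)

print([predict(x) for x in (1.5, 4.5)])  # → [0, 1]
```

In a deep network the "parameter" is millions of weights and the search is gradient descent, but the development-versus-application split is the same.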
Unlike the fully connected layers of a multilayer perceptron (MLP), the convolution
layers in a convolutional neural network (CNN) extract simple features from the input by
executing convolution operations. Each layer is a set of nonlinear functions of weighted
sums computed at different coordinates over spatially nearby subsets of the previous
layer's outputs, which allows the weights to be reused.
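A minimal NumPy sketch of that idea: one 3×3 kernel of weights is reused at every spatial position, and each output value is a weighted sum over a nearby subset of the input (toy values; no padding, stride, or nonlinearity).

```python
import numpy as np

# A convolution layer slides one small weight kernel over the input;
# the same weights are reused at every position, unlike a fully
# connected layer which has separate weights for every input element.
image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 input
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                 # vertical-edge weights

out = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        patch = image[i:i + 3, j:j + 3]            # spatially nearby subset
        out[i, j] = np.sum(patch * kernel)         # weighted sum, shared weights

print(out)  # every output reuses the same 9 weights
```

Weight reuse is why a CNN needs far fewer parameters than a fully connected layer over the same image.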
Everyday applications of deep learning include:
Digital assistants
Voice-activated television remotes
Fraud detection
Automatic facial recognition
Esri has developed tools and workflows to utilize the latest innovations in deep
learning to answer some of the challenging questions in GIS and remote sensing applications.
Computer vision, or the ability of computers to gain understanding from digital images or
videos, is an area that has been shifting from the traditional machine learning algorithms to
deep learning methods.
There are many computer vision tasks that can be accomplished with deep learning
neural networks. Esri has developed tools that allow you to perform image classification,
object detection, semantic segmentation, and instance segmentation.
Image classification
“The simplest task is image classification in which the computer assigns the label
“cat” to an image of a cat.”
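In the simplest terms, the classifier's final step is picking the class with the highest score; the class names and scores below are invented for illustration.

```python
# Image classification ends by assigning the single best-scoring
# label to the whole image (toy scores, hypothetical classes).
scores = {"cat": 0.91, "dog": 0.07, "car": 0.02}  # model's class scores
label = max(scores, key=scores.get)               # highest-scoring class
print(label)  # → cat
```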
Object detection
“With object detection, the computer needs to find the objects within an image as
well as their location.”
This is a very important task in GIS because it finds what is in a satellite, aerial, or
drone image, locates it, and plots it on a map. This task can be used for infrastructure mapping,
anomaly detection, and feature extraction.
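A detector's output can be thought of as labels plus locations. The structure below is a hypothetical sketch of such output (field names and values invented for illustration), not the format of any specific tool:

```python
# Object detection answers both "what" (label) and "where" (box),
# typically as a class plus a bounding box (x, y, width, height).
detections = [
    {"label": "building", "box": (12, 40, 30, 22), "score": 0.95},
    {"label": "car",      "box": (70,  8, 10,  6), "score": 0.88},
]
for d in detections:
    x, y, w, h = d["box"]
    print(f'{d["label"]} at ({x}, {y}), size {w}x{h}')
```

In a GIS workflow those image-space boxes would then be georeferenced so the detected features can be plotted on a map.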
Semantic segmentation
“Semantic segmentation, in which each pixel of an image is classified as belonging
to a particular class.”
In GIS, semantic segmentation can be used for land-cover classification or to extract
road networks from satellite imagery. In GIS, this is often referred to as pixel classification,
image segmentation, or image classification.
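Per-pixel classification can be sketched as an argmax over one score map per class; the class names and scores below are invented toy values.

```python
import numpy as np

# Semantic segmentation: the model outputs one score map per class,
# and each pixel is assigned the class with the highest score there.
# Toy classes: 0 = "land", 1 = "road", on a 4x4 image.
scores = np.zeros((2, 4, 4))
scores[0] = 0.6                   # "land" score everywhere
scores[1, :, 1] = 0.9             # "road" scores high in column 1

label_map = scores.argmax(axis=0) # per-pixel classification
print(label_map)                  # column 1 is class 1, the rest class 0
```

The resulting label map has the same dimensions as the image, which is why GIS workflows can treat it directly as a classified raster.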
A nice early example of this work and its impact is the success the Chesapeake
Conservancy has had in combining Esri GIS technology with the Microsoft Cognitive Toolkit
(CNTK) AI tools and cloud solutions to produce the first high-resolution land-cover map of the
Chesapeake watershed.
Instance segmentation:
“Instance segmentation is a more precise object detection method in which the
boundary of each object instance is drawn.”
For example, in the image on the left, the roofs of houses are detected, including the
precise outline of the roof shape. On the right, cars are detected, and the distinct shape
of each car is captured. This type of deep learning application is also known as object
segmentation.
Good maps need more than just roads—they need buildings. Instance
segmentation models like Mask R-CNN are particularly useful for building
footprint segmentation and can help create building footprints without any
need for manual digitizing.
Building footprints extracted from satellite imagery and regularized using the Regularize Building Footprint tool.
Image translation
Image translation is the task of translating an image from one possible
representation or style of the scene to another, such as noise reduction or super-resolution.
For example, the image on the left below shows the original low-resolution image,
and the image on the right shows the result of using a super-resolution model. This type of
deep learning application is also known as image-to-image translation.
Change detection
“Change detection deep learning tasks can detect changes in features of interest
between two dates and generate a logical map of change.”
For example, the image on the left below shows a housing development from five
years ago, the middle image shows the same development today, and the image on the right
shows the logical change map where new homes are in white.
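Under the simplifying assumption that each date has been reduced to a binary built-up mask, the logical change map is just element-wise logic (toy 2×2 masks for illustration):

```python
import numpy as np

# Change detection as a logical map: pixels that are built-up today
# but were not five years ago show up as "new" (white in the map).
before = np.array([[0, 0], [1, 0]])        # built-up mask, five years ago
today  = np.array([[0, 1], [1, 1]])        # built-up mask, today
new_homes = (today == 1) & (before == 0)   # logical change map
print(new_homes.astype(int))               # 1 marks newly built pixels
```

Real change-detection models learn this comparison from image pairs rather than from pre-made masks, but the output has the same form: a per-pixel map of change.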
Application of Deep Learning in Remote Sensing:
Deep-learning (DL) algorithms, which learn the representative and discriminative
features in a hierarchical manner from the data, have recently become a hotspot in the
machine-learning area and have been introduced into the geoscience and remote
sensing (RS) community for RS big data analysis.
DL has emerged as one of the most successful machine learning techniques and has
achieved impressive performance in the fields of computer vision and image processing,
with applications such as
Image classification,
Object detection, and
Super-resolution restoration.
DL is actually everywhere in RS data analysis: from the traditional topics of image
preprocessing, pixel-based classification, and target recognition, to the recent
challenging tasks of high-level semantic feature extraction and RS scene understanding.
Conclusion:
A new discipline called "deep learning" arose, applying complex neural network
architectures to model patterns in data more accurately than ever before. The results
are impressive: computers can now recognize objects in images and video and
transcribe speech to text, in some cases better than humans can.