
Table of Contents

Topic: Application of Deep Learning in GIS and RS

 What is artificial intelligence?


 What is machine learning?
 Deep learning
 Difference between machine learning and deep learning
 History of deep learning
 How does deep learning work
 Deep neural networks and their types
 Why is deep learning important?
 Application of deep learning in GIS and RS
 Integrating deep learning with GIS
 Application of deep learning in GIS
 Application of deep learning in Remote sensing data
 Satellite images classification with deep learning
 Scene classification of aerial images
 Deep learning for drone images
 Challenges of deep learning in GIS and RS
 Future of deep learning in GIS and RS
 Conclusion
 References
What Is Artificial Intelligence (AI)?
“Artificial Intelligence is defined as the ability of a digital computer or computer-
controlled robot to perform tasks commonly associated with intelligent beings.”

OR
“Artificial intelligence (AI) refers to the simulation of human intelligence in
machines that are programmed to think like humans and mimic their actions. The term may
also be applied to any machine that exhibits traits associated with a human mind such as
learning and problem-solving.”

Explanation:

 Alan Turing was followed a few years later by John McCarthy, who first used the term
“artificial intelligence” to denote machines that could think autonomously. He
described the threshold as “getting a computer to do things which, when done by
people, are said to involve intelligence.”

 AI is an interdisciplinary science with multiple approaches, but advancements in
machine learning and deep learning are creating a paradigm shift in virtually every
sector of the tech industry. Deep learning and machine learning are sub-fields of
artificial intelligence, and deep learning is actually a sub-field of machine learning.
 Artificial intelligence algorithms are designed to make decisions, often using real-time
data. They are unlike passive machines that are capable only of mechanical or
predetermined responses.
 Using sensors, digital data, or remote inputs, they combine information from a variety
of different sources, analyze the material instantly, and act on the insights derived from
those data. As such, they are designed by humans with intentionality and reach
conclusions based on their instant analysis.
 From the development of self-driving cars to the proliferation of smart assistants like Siri
and Alexa, AI is a growing part of everyday life. As a result, many tech companies across
various industries are investing in artificially intelligent technologies.

Why is artificial intelligence important?

 AI is important because it can give enterprises insights into their operations that they
may not have been aware of previously and because, in some cases, AI can perform
tasks better than humans.
 This has helped fuel an explosion in efficiency and opened the door to entirely new
business opportunities for some larger enterprises.
 Prior to the current wave of AI, it would have been hard to imagine using computer
software to connect riders to taxis, but today Uber has become one of the largest
companies in the world by doing just that.
 As another example, Google has become one of the largest players for a range of
online services by using machine learning to understand how people use their services
and then improving them. In 2017, the company's CEO, Sundar Pichai, declared that
Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their
operations and gain an advantage over their competitors.

What Is Machine Learning?


“Machine learning (ML) is a discipline of artificial intelligence (AI) that provides
machines with the ability to automatically learn from data and past experiences while
identifying patterns to make predictions with minimal human intervention.”

 Machine learning is one way to use AI. It was defined in the 1950s by AI pioneer Arthur
Samuel as “the field of study that gives computers the ability to learn without
explicitly being programmed.”
 Machine learning methods enable computers to operate autonomously without explicit
programming. ML applications are fed with new data, and they can independently learn,
grow, develop, and adapt.
 Machine learning derives insightful information from large volumes of data by
leveraging algorithms to identify patterns and learn in an iterative process. ML
algorithms use computation methods to learn directly from data instead of relying on
any predetermined equation that may serve as a model.
 The performance of ML algorithms adaptively improves with an increase in the number
of available samples during the ‘learning’ processes. For example, deep learning is a
sub-domain of machine learning that trains computers to imitate natural human traits
like learning from examples. It offers better performance parameters than conventional
ML algorithms.

Features of Machine Learning:


 Machine learning uses data to detect various patterns in a given dataset.
 It can learn from past data and improve automatically.
 It is a data-driven technology.
 Machine learning is similar to data mining in that it also deals with huge amounts of
data (a minimal code sketch follows this list).
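
The learn-from-data idea described above can be illustrated with a few lines of Python. The sketch below is a minimal example assuming the scikit-learn library and a synthetic dataset; the model choice and parameters are illustrative only and not tied to any specific GIS workflow.

# A model "learns" a mapping from data rather than following a predetermined equation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic tabular data standing in for "past experience"
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                           # learn patterns from the training data
print(accuracy_score(y_test, model.predict(X_test)))  # evaluate on unseen data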

Why is Machine Learning important?


There are many reasons why learning machine learning is important:

 Machine learning is widely used in many industries, including healthcare, finance, and e-
commerce. By learning machine learning, you can open up a wide range of career
opportunities in these fields.
 Machine learning can be used to build intelligent systems that can make decisions and
predictions based on data. This can help organizations make better decisions, improve
their operations, and create new products and services.
 Machine learning is an important tool for data analysis and visualization. It allows you to
extract insights and patterns from large datasets, which can be used to understand
complex systems and make informed decisions.
 Machine learning is a rapidly growing field with many exciting developments and
research opportunities. By learning machine learning, you can stay up-to-date with the
latest research and developments in the field.
Deep Learning:
“Deep learning is a subfield of ML that uses algorithms called artificial neural
networks (ANNs), which are inspired by the structure and function of the brain and are capable
of self-learning. ANNs are trained to “learn” models and patterns rather than being explicitly
told how to solve a problem.”

OR
“Deep Learning is a subfield of machine learning concerned with algorithms
inspired by the structure and function of the brain called artificial neural networks.”

Deep Learning

Explanation:
 Deep learning is a machine learning technique that teaches computers to do what
comes naturally to humans: learn by example.
 Deep learning is a key technology behind driverless cars, enabling them to recognize a
stop sign, or to distinguish a pedestrian from a lamppost.
 It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-
free speakers. Deep learning is getting lots of attention lately and for good reason. It’s
achieving results that were not possible before.
 In deep learning, a computer model learns to perform classification tasks directly from
images, text, or sound.
 Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding
human-level performance. Models are trained by using a large set of labeled data and
neural network architectures that contain many layers.

Difference Between Machine Learning and Deep Learning:


 Deep learning is a specialized form of machine learning. A machine learning workflow
starts with relevant features being manually extracted from images. The features are
then used to create a model that categorizes the objects in the image.
 With a deep learning workflow, relevant features are automatically extracted from
images. In addition, deep learning performs “end-to-end learning” – where a network
is given raw data and a task to perform, such as classification, and it learns how to do
this automatically.
 Another key difference is that deep learning algorithms scale with data, whereas shallow
learning converges. Shallow learning refers to machine learning methods that plateau
at a certain level of performance when you add more examples and training data to the
network.
 A key advantage of deep learning networks is that they often continue to improve as
the size of your data increases.
 In machine learning, you manually choose features and a classifier to sort images. With
deep learning, feature extraction and modeling steps are automatic.
History Of Deep Learning:
Deep learning is a topic that is making big waves at the moment. It is basically a
branch of machine learning (another hot topic) that uses algorithms to, for example, recognize
objects and understand human speech.

It’s one kind of supervised machine learning, in which a computer is provided a
training set of examples to learn a function, where each example is a pair of an input and an
output from the function.

Deep learning is based on the concept of artificial neural networks, or
computational systems that mimic the way the human brain functions. And so, our brief history
of deep learning must start with those neural networks.

 The history of deep learning can be traced back to 1943, when Walter Pitts and Warren
McCulloch created a computer model based on the neural networks of the human
brain.
They used a combination of algorithms and mathematics they called
“threshold logic” to mimic the thought process. Since that time, Deep Learning has
evolved steadily, with only two significant breaks in its development. Both were tied to
the infamous Artificial Intelligence winters.

 1958: Frank Rosenblatt creates the perceptron, an algorithm for pattern recognition
based on a two-layer computer neural network using simple addition and subtraction.
He also proposed additional layers with mathematical notations, but these wouldn’t be
realized until 1975.
 1980: Kunihiko Fukushima proposes the Neocognitron, a hierarchical, multilayered
artificial neural network that has been used for handwriting recognition and other
pattern recognition problems.
 1989: Scientists were able to create algorithms that used deep neural networks, but
training times for the systems were measured in days, making them impractical for real-
world use.
 1992: Juyang Weng publishes Cresceptron, a method for performing 3-D object
recognition automatically from cluttered scenes.
 Mid-2000s: The term “deep learning” begins to gain popularity after a paper by
Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered neural network
could be pre-trained one layer at a time.
 2009: NIPS Workshop on Deep Learning for Speech Recognition discovers that with a
large enough data set, the neural networks don’t need pre-training, and the error rates
drop significantly.
 2012: Artificial pattern-recognition algorithms achieve human-level performance on
certain tasks. And Google’s deep learning algorithm discovers cats.
 2014: Google buys UK artificial intelligence startup DeepMind for £400m.
 2015: Facebook puts deep learning technology, called DeepFace, into operation to
automatically tag and identify Facebook users in photographs. Algorithms perform
superior face recognition tasks using deep networks that take into account 120 million
parameters.
 2016: Google DeepMind’s algorithm AlphaGo masters the art of the complex board
game Go and beats the professional Go player Lee Sedol at a highly publicized
tournament in Seoul.

The promise of deep learning is not that computers will start to think like
humans. That’s a bit like asking an apple to become an orange. Rather, it demonstrates that
given a large enough data set, fast enough processors, and a sophisticated enough algorithm,
computers can begin to accomplish tasks that used to be completely left in the realm of human
perception — like recognizing cat videos on the web (and other, perhaps more useful
purposes).

Evolution of deep learning from 1943 to 2006


How does deep learning work?
Deep learning neural networks, or artificial neural networks, attempt to mimic the
human brain through a combination of data inputs, weights, and bias. These elements work
together to accurately recognize, classify, and describe objects within the data.

Deep neural networks consist of multiple layers of interconnected nodes, each
building upon the previous layer to refine and optimize the prediction or categorization. This
progression of computations through the network is called forward propagation. The input and
output layers of a deep neural network are called visible layers. The input layer is where the
deep learning model ingests the data for processing, and the output layer is where the final
prediction or classification is made.

Another process called back propagation uses algorithms, like gradient descent, to
calculate errors in predictions and then adjusts the weights and biases of the function by
moving backwards through the layers in an effort to train the model. Together, forward
propagation and back propagation allow a neural network to make predictions and correct for
any errors accordingly. Over time, the algorithm becomes gradually more accurate.
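
The forward propagation / back propagation loop described above can be sketched in a few lines of Python. The example below is a minimal illustration using PyTorch autograd; the layer sizes, data, and learning rate are arbitrary assumptions.

# Forward propagation, loss computation, back propagation, and a gradient descent update.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # gradient descent

x = torch.randn(32, 4)        # a batch of inputs
y = torch.randn(32, 1)        # target values

for step in range(100):
    prediction = model(x)            # forward propagation through the layers
    loss = loss_fn(prediction, y)    # error in the prediction
    optimizer.zero_grad()
    loss.backward()                  # back propagation of the error
    optimizer.step()                 # adjust weights and biases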

Deep neural networks:


Deep neural networks employ deep architectures in neural networks. “Deep” refers to
functions with higher complexity in the number of layers and units in a single layer. The ability
to manage large datasets in the cloud made it possible to build more accurate models by using
additional and larger layers to capture higher levels of patterns.

The two key phases of neural networks are called training (or learning) and
inference (or prediction), and they refer to the development phase versus production or
application. When creating the architecture of deep network systems, the developer chooses
the number of layers and the type of neural network, and training determines the weights.

Types of Deep Neural Networks:


The following three types of deep neural networks are popularly used today:

1. Multi-Layer Perceptrons (MLP)
2. Convolutional Neural Networks (CNN)
3. Recurrent Neural Networks (RNN)

Multilayer Perceptrons (MLPs)


A multilayer perceptron (MLP) is a class of feedforward artificial neural network
(ANN). MLP models are the most basic deep neural networks, composed of a series of
fully connected layers. Today, MLP methods can be used where the high computing power
required by modern deep learning architectures is unavailable. Each new layer is a set of
nonlinear functions of a weighted sum of all outputs (fully connected) from the prior one.

Concept of Multilayer Perceptrons (MLP)
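
As a minimal illustration of the fully connected structure described above, the Python sketch below stacks a few linear layers with nonlinear activations using PyTorch; the layer sizes and class count are illustrative assumptions.

# Each fully connected layer is a nonlinear function of a weighted sum
# of all outputs of the previous layer.
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # hidden layer 1
    nn.Linear(128, 64), nn.ReLU(),   # hidden layer 2
    nn.Linear(64, 10),               # output layer, e.g. 10 classes
)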


Convolutional Neural Network (CNN)
A convolutional neural network (CNN, or ConvNet) is another class of deep neural
networks. CNNs are most commonly employed in computer vision. Given a series of images or
videos from the real world, with the utilization of CNN, the AI system learns to automatically
extract the features of these inputs to complete a specific task, e.g., image classification, face
authentication, and image semantic segmentation.

Unlike the fully connected layers in MLPs, CNN models use one or more convolution
layers that extract simple features from the input by executing convolution operations.
Each layer is a set of nonlinear functions of weighted sums at different coordinates of spatially
nearby subsets of outputs from the prior layer, which allows the weights to be reused.

Concept of a Convolution Neural Network (CNN)
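
A minimal Python sketch of the idea, assuming PyTorch; the input size, channel counts, and class count are illustrative assumptions.

# Convolution layers extract local spatial features (weights are reused across positions),
# then a fully connected head classifies the image.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                              # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 10),                  # e.g. 10 image classes
)
logits = cnn(torch.randn(1, 3, 64, 64))           # one RGB 64x64 image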

Recurrent Neural Network (RNN)


A recurrent neural network (RNN) is another class of artificial neural networks that
use sequential data feeding. RNNs have been developed to address the time-series problem of
sequential input data. The input of RNN consists of the current input and the previous samples.
Therefore, the connections between nodes form a directed graph along a temporal sequence.
Furthermore, each neuron in an RNN owns an internal memory that keeps the information of
the computation from the previous samples.

Concept of a Recurrent Neural Network (RNN)
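
The sketch below is a minimal Python illustration of the internal-memory idea, assuming PyTorch; the sequence length, feature size, and output head are illustrative assumptions.

# The hidden state carries information from previous samples to the current time step.
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)

x = torch.randn(4, 20, 8)          # 4 sequences, 20 time steps, 8 features each
outputs, h_n = rnn(x)              # h_n is the final hidden state ("memory")
prediction = head(h_n.squeeze(0))  # e.g. one value per sequence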

Why is deep learning important?


Artificial intelligence (AI) attempts to train computers to think and learn as humans
do. Deep learning technology drives many AI applications used in everyday products, such as
the following:

 Digital assistants
 Voice-activated television remotes
 Fraud detection
 Automatic facial recognition

It is also a critical component of emerging technologies such as self-driving cars,
virtual reality, and more.
Deep learning models are computer files that data scientists have trained to perform
tasks using an algorithm or a predefined set of steps. Businesses use deep learning models to
analyze data and make predictions in various applications.

Application of Deep learning in GIS and Remote sensing:

Integrating Deep Learning with GIS


 The field of Artificial Intelligence has made rapid progress in recent years, matching or
in some cases, even surpassing human accuracy at tasks such as computer vision,
natural language processing and machine translation.
 The intersection of artificial intelligence (AI) and GIS is creating massive opportunities
that weren’t possible before.
 AI, machine learning, and deep learning are helping us make a better world, from
increasing crop yield through precision agriculture, to fighting crime by deploying
predictive policing models, to predicting when the next big storm will hit and being
better equipped to handle it.
 One area of AI where deep learning has done exceedingly well is computer vision, or
the ability for computers to see. This is particularly useful for GIS because satellite,
aerial, and drone imagery is being produced at a rate that makes it impossible to
analyze and derive insight through traditional means.

Applications of deep learning in GIS:

Esri has developed tools and workflows to utilize the latest innovations in deep
learning to answer some of the challenging questions in GIS and remote sensing applications.
Computer vision, or the ability of computers to gain understanding from digital images or
videos, is an area that has been shifting from the traditional machine learning algorithms to
deep learning methods.
There are many computer vision tasks that can be accomplished with deep learning
neural networks. Esri has developed tools that allow you to perform image classification,
object detection, semantic segmentation, and instance segmentation.

Image classification
“The simplest task is image classification in which the computer assigns the label
“cat” to an image of a cat.”

In GIS, this classification is used to categorize geotagged photos. Another image,
classified as “dense crowd,” can be used by GIS for pedestrian and traffic management
planning during public events.
In image classification, the computer assigns the label “cat” to an image of a cat (left). The
computer classified the image (right) as a dense crowd.

Object detection
“With object detection, the computer needs to find the objects within an image as
well as their location.”

This is a very important task in GIS because it finds what is in a satellite, aerial, or
drone image, locates it, and plots it on a map. This task can be used for infrastructure mapping,
anomaly detection, and feature extraction.

Swimming pools are detected within residential parcels
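
One possible way to reproduce this swimming-pool example programmatically is with the arcgis.learn module of the ArcGIS API for Python. The sketch below follows that module's general prepare_data / SingleShotDetector workflow, but the chip folder, batch size, and hyperparameters are placeholder assumptions rather than a tested recipe.

# Hedged sketch of an object detection workflow with arcgis.learn.
from arcgis.learn import prepare_data, SingleShotDetector

# Image chips previously exported from ArcGIS Pro ("Export Training Data For Deep Learning")
data = prepare_data(r"C:\data\pool_chips", batch_size=8)

model = SingleShotDetector(data)   # object detection model
model.fit(epochs=10, lr=0.001)     # train on the labeled chips
model.save("pool_detector")        # save the trained model for inference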

Semantic segmentation
“In semantic segmentation, each pixel of an image is classified as belonging
to a particular class.”
In GIS, semantic segmentation can be used for land-cover classification or to extract
road networks from satellite imagery. In GIS, this is often referred to as pixel classification,
image segmentation, or image classification.

A nice early example of this work and its impact is the success the Chesapeake
Conservancy has had in combining Esri GIS technology with the Microsoft Cognitive Toolkit
(CNTK) AI tools and cloud solutions to produce the first high-resolution land-cover map of the
Chesapeake watershed.

Land-cover classification uses deep learning
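
A minimal Python sketch of per-pixel classification, assuming PyTorch; the band count, class count, and tile size are illustrative assumptions, and real land-cover workflows typically use deeper encoder-decoder networks such as U-Net.

# A small fully convolutional network that outputs one score per class for every pixel.
import torch
import torch.nn as nn

num_bands, num_classes = 4, 6                    # e.g. 4-band imagery, 6 land-cover classes
fcn = nn.Sequential(
    nn.Conv2d(num_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, num_classes, kernel_size=1),   # per-pixel class scores
)

tile = torch.randn(1, num_bands, 256, 256)       # one image tile
class_map = fcn(tile).argmax(dim=1)              # pixel-wise class labels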

Instance segmentation:
“Instance segmentation is a more precise object detection method in which the
boundary of each object instance is drawn.”
For example, in the image on the left, the roofs of houses are detected, including the
precise outline of each roof shape. On the right, cars are detected, and their distinct
shapes are delineated. This type of deep learning application is also known as object
segmentation.

Deep Learning for Mapping


 In working with satellite imagery, one important application of deep learning is
creating digital maps by automatically extracting road networks and building
footprints.
 Imagine applying a trained deep learning model on a large geographic area and arriving
at a map containing all the roads in the region, then having the ability to create driving
directions using this detected road network. This can be particularly useful for
developing countries that do not have high-quality digital maps or in areas where
newer developments have been built.

 Good maps need more than just roads—they need buildings. Instance
segmentation models like Mask R-CNN are particularly useful for building
footprint segmentation and can help create building footprints without any
need for manual digitizing.

 However, these models typically result in irregular building footprints that
look more like Antoni Gaudí masterpieces than regular buildings with
straight edges and right angles. Using the Regularize Building Footprint tool in
ArcGIS Pro can help restore the straight edges and right angles necessary for
an accurate representation of building footprints.

Building footprints extracted from satellite imagery and regularized using the Regularize
Building Footprint tool in ArcGIS Pro are shown.

Image translation
Image translation is the task of translating an image from one possible
representation or style of the scene to another, such as noise reduction or super-resolution.

For example, the image on the left below shows the original low-resolution image,
and the image on the right shows the result of using a super-resolution model. This type of
deep learning application is also known as image-to-image translation.
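
The sketch below is a minimal, SRCNN-style illustration of super-resolution in Python, assuming PyTorch; the scale factor and layer sizes are illustrative assumptions.

# Upsample the low-resolution image, then let a few convolution layers reconstruct detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperResolution(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, low_res):
        upsampled = F.interpolate(low_res, scale_factor=2, mode="bicubic", align_corners=False)
        return self.body(upsampled)                 # refined high-resolution estimate

high_res = SuperResolution()(torch.randn(1, 3, 64, 64))   # -> 1 x 3 x 128 x 128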
Change detection

“Change detection deep learning tasks can detect changes in features of interest
between two dates and generate a logical map of change.”

For example, the image on the left below shows a housing development from five
years ago, the middle image shows the same development today, and the image on the right
shows the logical change map where new homes are in white.

Change detection
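
The logical change map can be produced with a simple comparison once both dates have been classified. The Python sketch below is a toy illustration using NumPy; the small arrays stand in for real classification rasters and the class codes are assumptions.

# Flag pixels whose class label changed between the two dates.
import numpy as np

classes_before = np.array([[0, 0, 1],
                           [0, 2, 2],
                           [1, 1, 2]])           # e.g. land cover five years ago
classes_after  = np.array([[0, 3, 1],
                           [3, 2, 2],
                           [1, 3, 2]])           # the same area today (3 = new housing)

change_map = classes_after != classes_before     # True (shown white) where change occurred
print(change_map.astype(np.uint8))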
Application of Deep Learning in Remote Sensing:
 Deep-learning (DL) algorithms, which learn the representative and discriminative
features in a hierarchical manner from the data, have recently become a hotspot in the
machine-learning area and have been introduced into the geoscience and remote
sensing (RS) community for RS big data analysis.
 DL has emerged as one of the most successful machine learning techniques and has
achieved impressive performance in the field of computer vision and image processing,
with applications such as
 Image classification,
 Object detection, and
 Super-resolution restoration
 DL is actually everywhere in RS data analysis: from the traditional topics of image
preprocessing, pixel-based classification, and target recognition, to the recent
challenging tasks of high-level semantic feature extraction and RS scene understanding.

Satellite Image Classification with Deep Learning:


 Satellite imagery is important for many applications including disaster response, law
enforcement, and environmental monitoring.
 These applications require the manual identification of objects and facilities in the
imagery.
 Because the geographic expanses to be covered are great and the analysts available to
conduct the searches are few, automation is required.
 Yet traditional object detection and classification algorithms are too inaccurate and
unreliable to solve the problem.
 Deep learning is a family of machine learning algorithms that have shown promise
for the automation of such tasks. It has achieved success in image understanding by
means of convolutional neural networks.
 One such system consists of an ensemble of convolutional neural networks plus additional
neural networks that integrate satellite metadata with image features (a minimal fusion
sketch follows this list).
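
A minimal Python sketch of this metadata-fusion idea, assuming PyTorch; the branch architectures, feature sizes, and class count are illustrative assumptions, not the published system.

# A CNN branch encodes the image, a small fully connected branch encodes the metadata,
# and the concatenated features feed a shared classifier.
import torch
import torch.nn as nn

class ImageMetadataClassifier(nn.Module):
    def __init__(self, num_classes=10, num_metadata=5):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> 32 image features
        )
        self.metadata_branch = nn.Sequential(nn.Linear(num_metadata, 16), nn.ReLU())
        self.classifier = nn.Linear(32 + 16, num_classes)

    def forward(self, image, metadata):
        features = torch.cat([self.image_branch(image),
                              self.metadata_branch(metadata)], dim=1)
        return self.classifier(features)

model = ImageMetadataClassifier()
logits = model(torch.randn(2, 3, 64, 64), torch.randn(2, 5))   # images + metadata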
Scene classification of aerial images
o Unlike the aforementioned pixel-level image classification, which interprets
HSIs in a bottom-up manner, scene classification aims to automatically assign a
semantic label to each scene image.
o Here, a scene image usually refers to a local image patch manually extracted from large-
scale high-resolution aerial images that contain explicit semantic classes (e.g.,
residential area, commercial area, etc.).
o Due to high resolutions in such data, different scene images may contain the same types
of objects or share similar spatial arrangements between objects.
o For example, both residential areas and commercial areas may contain buildings, roads,
and trees, yet they belong to two different categories of built-up areas. Indeed, the great
variation in spatial arrangements and structural patterns that can exist among scene
images makes scene classification a considerably challenging task.

Deep learning for drone images:


o As of today, drones are being used in domains such as agriculture, construction, and
public safety and security, to name a few, and they are rapidly being adopted by
others.
o With deep-learning based computer vision now powering these drones, industry
experts are predicting unprecedented use in previously unimaginable applications.
o Today, even the general public has access to drones that can fly as high as 2 km.
o These drones have high resolution cameras attached to them that are capable of
acquiring quality images which can be used for various kinds of analysis.
o In one reported study, the mapping, carried out through image segmentation or semantic
segmentation, was performed using both machine learning (ML) and deep learning (DL)
algorithms.
o Amongst the DL networks, a convolutional neural network (CNN) architecture in a
transfer-learning framework was utilised. A combination of ResNet50 and SegNet
architectures gave the best semantic segmentation results (≈90%); a minimal
transfer-learning sketch follows this list.
o The high accuracy of the DL networks came with significantly larger labelled-training-data,
computation-time, and hardware requirements compared to ML classifiers of slightly
lower accuracy.
o For specific applications such as wetland mapping where networks are required to
be trained for each different site, topography, season, and other atmospheric
conditions, ML classifiers proved to be a more pragmatic choice.
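
The transfer-learning step mentioned above can be sketched in a few lines of Python. The example below assumes a recent torchvision (where pretrained weights can be requested by name) and shows only the generic pattern of freezing a pretrained ResNet50 backbone and training a new head for a drone-imagery classification task; the class count is an illustrative assumption, and a full SegNet-style segmentation decoder is not shown.

# Reuse ImageNet features; train only the new, task-specific layer.
import torch.nn as nn
from torchvision import models

backbone = models.resnet50(weights="IMAGENET1K_V1")             # pretrained backbone
for param in backbone.parameters():
    param.requires_grad = False                                 # freeze pretrained weights

num_classes = 4                                                 # e.g. drone land-use classes
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)   # new trainable head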

Drone chasing drone using deep learning


Challenges of deep learning in GIS and RS:
Despite its great potential, the use of DL in RS image classification brings with it
significant new challenges.

There are several reasons for this:

 First, many RS data, especially hyperspectral images (HSIs), contain hundreds of
bands that can cause a small patch to involve a really large amount of data,
which would demand a large number of neurons in a DL network (Berlin & Kay,
1969; Chen, Xiang, Liu, & Pan, 2013; Zhang et al., 2018). Apart from the visual
geometrical patterns within each band, the spectral curve vectors across bands
may also provide important information. However, how to utilize this
information still requires further research.
 Second, the usually impressive performance of DL techniques relies on large
numbers of labeled samples. Unfortunately, very few labeled samples are
available in RS data.
 Third, compared with conventional natural scene images, RS images are more
complex.
 The high spatial resolution RS images may involve various types of
objects, which are also different in size, color, location and rotation.
 HSIs may be acquired using different sensors in the first place.
 The complexity of RS data makes it very difficult, if not impossible, to
directly construct a DL network model for the classification of such
images; additional assistance is required for DL to perform well.

Future of Deep Learning in GIS and RS:


 DL approaches can generate classifications or predictions about a particular
target based on a collection of input attributes, and GIS and remote sensing can
produce the necessary geographical input variables for such a DL model.
 However, there are constraints when they are performed independently. These
may be avoided by combining GIS with DL. Big data technology allows for the
integration of many predictors and variables, which is not feasible with GIS alone.
 DL and GIS can explore difficult tasks with complex and dynamic data. Most
geographical characteristics are dynamic, necessitating complicated modelling,
which may be accomplished using neural networks and other spatial data from
the GIS database. This will enhance the accuracy of the modelling.
 Future applications of DL with GIS include susceptibility mapping of various
natural hazards and investigating more complex feature selection or dimension
reduction methods to improve the prediction performance of DL methods.
 Integrating DL with GIS aids in the development of a better decision-making
tool. The approach may be utilized as a support tool for the rapid and efficient
generation of various maps by organizations involved in disaster management,
water resources, the environment, and land use planning.

Conclusion:
A new discipline called "deep learning" arose and applied complex neural network
architectures to model patterns in data more accurately than ever before. The results are
undeniably incredible. Computers can now recognize objects in images and video and
transcribe speech to text better than humans can.

DL is a fast-developing discipline that enables data scientists to use cutting-edge
research while utilizing an industrial-strength GIS. Because of this appealing characteristic,
neural networks are increasingly being used to model complicated physical processes where
precise field data are lacking.
References:
 https://www.ibm.com/topics/artificial-intelligence
 https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence
 https://builtin.com/artificial-intelligence
 https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
 https://www.mygreatlearning.com/blog/what-is-artificial-intelligence/
 https://www.techopedia.com/definition/190/artificial-intelligence-ai
 https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-ml/
 https://www.javatpoint.com/machine-learning
 https://www.aiche.org/resources/publications/cep/2018/june/introduction-deep-learning-part-1
 https://www.mathworks.com/discovery/deep-learning.html
 https://machinelearningmastery.com/what-is-deep-learning/
 https://aws.amazon.com/what-is/deep-learning/
 https://viso.ai/deep-learning/deep-neural-network-three-popular-types/
 https://www.ibm.com/topics/deep-learning
 https://www.dataversity.net/brief-history-deep-learning/
 https://www.forbes.com/sites/bernardmarr/2016/03/22/a-short-history-of-deep-learning-everyone-should-read/
 https://www.esri.com/about/newsroom/arcwatch/where-deep-learning-meets-gis/
 https://pro.arcgis.com/en/pro-app/latest/help/analysis/deep-learning/what-is-deep-learning-.htm
 https://medium.com/geoai/integrating-deep-learning-with-gis-70e7c5aa9dfe
 https://wires.onlinelibrary.wiley.com/doi/full/10.1002/widm.1264
 https://www.frontiersin.org/articles/10.3389/frai.2020.534696/full
 https://www.mdpi.com/2072-4292/12/16/2602
