
Volume 9, Issue 3, March – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24MAR1895

Synthetic Aperture Radar Image Classification Using Deep Learning

Tanishq Yadav, ASET, Amity University, Noida, India
Cherish Jaiswal, ASET, Amity University, Noida, India

Abstract:- Powerful remote sensing tools like Synthetic Aperture Radar (SAR) can detect targets, classify land cover, and provide useful data for disaster monitoring and other uses. In the past few years, deep learning methods have brought remarkable gains in the precision and effectiveness of SAR image classification. To improve the precision and dependability of SAR image interpretation, this research paper presents a thorough investigation of SAR image classification utilising deep learning techniques, such as convolutional neural networks (CNNs) and recurrent models. We review the current state of the art, suggest a fresh approach, and indicate potential directions for future research. Our findings show that deep learning excels at classifying SAR images, laying the groundwork for more advanced remote sensing applications.

Keywords:- SAR, Deep Learning (DL), CNN.

I. INTRODUCTION

Synthetic Aperture Radar (SAR) has been used for Earth remote sensing for over 40-45 years; the first successful demonstration came in 1964 from the Jet Propulsion Laboratory. SAR provides high-resolution, day-and-night, weather-independent images for many different applications, ranging from geographical science and climate-change research to Earth-system and environmental monitoring, 2-D and 3-D mapping, change detection, security-related applications, and many others. "Aperture" is another word for the antenna opening that lets in the signal, and the radar aperture uses electromagnetic waves as its medium. The "radar" in SAR is an acronym for Radio Detection and Ranging. [1]

A. Working of SAR

SAR is a remote sensing technology that uses radar to create high-resolution images of the Earth's surface. It works by transmitting microwave signals towards the target area and then receiving the signals that are reflected. A simplified explanation of how SAR works is as follows:

 Transmission of Radar Signals: SAR systems are typically mounted on satellites, aircraft, or other platforms. These generate microwave signals that are emitted towards the ground.
 Signal Reflection: These radar signals interact with objects and the terrain on the Earth's surface. When the signals encounter an object or a change in the surface, such as a building, a tree, or the ground itself, some of the energy is bounced back towards the SAR system.
 Receiving Signals: The SAR system has a sensitive antenna that collects the reflected signals, which are often referred to as "backscatter".
 Measurement of Time Delay: SAR systems measure the time it takes for the transmitted signal to travel to the target and back. Knowing the speed of light, they calculate the distance to objects on the Earth's surface.
 Synthetic Aperture: Unlike traditional radar, SAR has a synthetic aperture, which is created by moving the radar antenna along a known path. As the platform (e.g., a satellite) moves, it collects radar data from different positions. This motion effectively creates a larger antenna, allowing the formation of high-resolution images. [2]
 Data Processing: The raw radar data collected from multiple positions is then processed to create a coherent image. Complex algorithms account for the different positions of the antenna, the time delays, and the phase differences in the received signals.
 Image Formation: The processed data is used to construct a detailed, high-resolution image of the Earth's surface. This image can reveal features such as buildings, vegetation, water bodies, and terrain with a very high level of detail. [3]

B. Applications of SAR

Because it produces high-resolution imagery, SAR is widely used in different fields. Its applications include, but are not limited to:

 Environmental Monitoring: Mapping of land use and land cover. Detection of deforestation and forest monitoring. Crop monitoring, crop classification, and yield estimation. Soil moisture and subsidence monitoring. Detection of land and coastal changes. Glacier monitoring and ice-sheet studies.
 Disaster Management and Response: Detection and monitoring of natural calamities such as earthquakes, floods, and landslides. Estimation and assessment of damage in affected areas. Monitoring of volcanic eruptions and ash-cloud tracking. Detection of oil spills and monitoring of their spread.

IJISRT24MAR1895 www.ijisrt.com 2177



 Urban Planning and Infrastructure Monitoring: Analysis of urban growth and planning. Monitoring of infrastructure such as roads, bridges, and buildings for deformation and structural integrity. Identification of illegal construction and urban sprawl.
 Coastal Applications: Detection of ships and vessels for maritime surveillance. Monitoring of sea ice, currents, and oil spills in oceans. Coastal-zone management, including erosion and land-subsidence monitoring.
 Agriculture and Forestry: Monitoring of crop health, growth, and disease outbreaks. Forest inventory and canopy-height estimation. Assessment of deforestation and illegal logging activities.
 Archaeological and Cultural Heritage Preservation: Identification of archaeological sites and buried structures. Monitoring and preservation of historical monuments and cultural heritage sites.
 Military and Defence: Reconnaissance and surveillance of enemy territory. Detection and tracking of moving targets, including vehicles and personnel. Mapping of terrain and assessment of the battlefield.
 Geological Exploration: Detection of underground geological structures. Identification of mineral deposits and resources. Assessment of geological hazards like fault lines and sinkholes.
 Scientific Research: Studies of Earth's crustal movements and tectonic-plate dynamics. Monitoring of environmental changes in remote and inaccessible areas. Research on climate change and its impacts.
 Search and Rescue Operations: Detection of distressed ships or aircraft at sea. Locating missing persons in wilderness or disaster-stricken areas.
 Weather Forecasting: Monitoring and tracking of severe weather phenomena such as hurricanes and typhoons. [4]
 Infrastructure Planning and Development: Planning of transportation networks, including road and railway construction. Monitoring of construction activities and infrastructure projects.
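The "Measurement of Time Delay" and "Synthetic Aperture" steps from Section I.A can be illustrated with two short calculations. The sketch below is not from the paper; the 5.2 ms round-trip delay and 10 m antenna length are assumed, illustrative values:

```python
# Illustrative SAR geometry calculations (parameter values are assumed).

C = 299_792_458.0  # speed of light in m/s

def slant_range(round_trip_time_s: float) -> float:
    """Distance from the radar to a target: the pulse travels out and back,
    so range = c * t / 2 (the 'Measurement of Time Delay' step)."""
    return C * round_trip_time_s / 2.0

def stripmap_azimuth_resolution(antenna_length_m: float) -> float:
    """Classical stripmap SAR azimuth-resolution limit of L/2, independent
    of range -- the benefit of the synthesized (larger) aperture."""
    return antenna_length_m / 2.0

if __name__ == "__main__":
    # A pulse returning after ~5.2 ms corresponds to a ~780 km slant range,
    # typical of a low-Earth-orbit platform.
    print(f"slant range: {slant_range(5.2e-3) / 1000:.1f} km")
    # A 10 m physical antenna gives ~5 m azimuth resolution after synthesis.
    print(f"azimuth resolution: {stripmap_azimuth_resolution(10.0):.1f} m")
```

The counter-intuitive L/2 result (a smaller physical antenna yields finer resolution) is exactly what the "larger synthesized antenna" in Section I.A buys.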

II. LITERATURE REVIEW

1. Hasan Anas, Hani Majdoul, Anibo Chaimae, and Said Mohamed Nabil (2020)
 Research Objective: A classification method applied to synthetic aperture radar (SAR).
 Methodology: Applied a CNN model (VGG-16) for feature extraction from SAR images; used MSTAR as the dataset.
 Key Findings: Final accuracy of 97.91%.

2. A. Pasah, S. N. S., B. Paul and D. Kanda (2022)
 Research Objective: Investigate and analyse the use of deep learning techniques in the classification of Synthetic Aperture Radar (SAR) images.
 Methodology: Comprehensive review and analysis of existing techniques for SAR classification using deep learning.
 Key Findings: The key findings highlight state-of-the-art methods, architectures and configurations, issues in SAR image processing, misclassification, and potential future models.

3. Pia Aabo, MariLuca Berardi, Filippo Biondi, Marta Citile, Carmine C., Nicomino F., Gaetano G., Danil Olando, Linjie Yan (2023)
 Research Objective: Address the challenge of spatial-resolution improvement in Synthetic Aperture Radar (SAR) imagery.
 Methodology: Exploration and assessment of different approaches to SAR super-resolution, including sparsity, compressive sensing, and deep learning (DL).
 Key Findings: Development of the DC2SCN architecture and its ability to process complex SAR data while preserving both amplitude and phase information.

4. Alicia Passah, Debdatta Kandar (2023)
 Research Objective: Develop a lightweight deep learning model that aims to increase accuracy while scaling down the parameters by up to 25 times.
 Methodology: Assessment of deep learning models and development of a lightweight model.
 Key Findings: Evaluation of deep learning models, observations and comparisons, and a proposed lightweight deep learning model.
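Entry 4's goal of "scaling down the parameters by up to 25 times" can be made concrete with a quick parameter count. The layer shapes below are hypothetical, chosen only for illustration, and are not taken from that paper:

```python
def conv_params(in_ch: int, out_ch: int, k: int) -> int:
    """Parameters in a conv layer: one k x k filter per (input, output)
    channel pair, plus one bias per output channel."""
    return out_ch * (in_ch * k * k + 1)

def dense_params(in_features: int, out_features: int) -> int:
    """Parameters in a fully connected layer: one weight per connection
    plus one bias per output."""
    return out_features * (in_features + 1)

if __name__ == "__main__":
    # A heavier stack versus a slimmer one (both shapes are assumptions):
    heavy = (conv_params(1, 64, 5) + conv_params(64, 128, 5)
             + dense_params(128 * 8 * 8, 10))
    light = (conv_params(1, 8, 3) + conv_params(8, 16, 3)
             + dense_params(16 * 8 * 8, 10))
    print(heavy, light, round(heavy / light))  # prints "288522 11498 25"
```

Shrinking channel widths and kernel sizes compounds multiplicatively, which is how a lightweight design reaches a roughly 25x reduction without changing the overall layer pattern.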

III. METHODOLOGY

A. Working of CNN Model

 A deep learning method called the convolutional neural network (CNN) is among the most effective for processing and recognising images.
 Its structure is composed of fully connected layers, pooling layers, and convolutional layers.
 The most significant part of a CNN is its convolutional layers, where filters are used to extract properties such as textures and angles from the input image.
 The convolutional layer's output is then sent through a series of pooling layers, which downscale the feature maps and extract the most important information while reducing the spatial dimensions.
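The filtering and pooling just described can be sketched concretely. The following NumPy fragment is an illustrative sketch, not the paper's implementation; the 6x6 test image, the 3x3 vertical-edge kernel, and the 2x2 pool size are all assumed for demonstration:

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide a small filter over the image ('valid' padding), taking the
    element-wise product-and-sum at each position -- the core operation
    of a convolutional layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap: np.ndarray, size: int = 2) -> np.ndarray:
    """Down-sample a feature map by keeping the maximum of each
    size x size block -- the pooling layer's job."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

if __name__ == "__main__":
    image = np.zeros((6, 6))
    image[:, 3:] = 1.0                       # a vertical edge
    vertical_edge = np.array([[-1., 0., 1.],
                              [-1., 0., 1.],
                              [-1., 0., 1.]])
    fmap = np.maximum(conv2d_valid(image, vertical_edge), 0.0)  # ReLU
    pooled = max_pool(fmap)
    print(fmap.shape, pooled.shape)  # prints "(4, 4) (2, 2)"
```

The filter responds strongly only where the edge sits inside its window, and pooling then halves each spatial dimension while keeping those strong responses, which is the "most important information" the bullets refer to.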




 The output of the pooling layers is then fed to one or more fully connected layers to forecast or categorise the image. [5]
 Unsupervised learning techniques can be used when working with unlabeled data; using autoencoders is one of the most popular methods for accomplishing this.
 As a result, the initial section of the CNN can perform its computations while fitting the data into a space with few dimensions.
 Afterwards, the data must be reconstructed using additional layers that up-sample it again.

Convolutional Neural Networks (CNNs) apply a sequence of operations and layers to input data, typically images, to automatically learn and extract features and patterns at different levels of abstraction. Here is how CNNs work in more detail:

 Input Layer: An image (or a stack of images) is the input to a CNN. A grid of pixel values serves as the image's representation.
 Convolutional Layers: The convolutional layer is the foundation of a CNN. Within this layer, small filters (also called kernels) slide or "convolve" over the input image. Each filter detects specific features or patterns (edges, textures, shapes) by performing element-wise multiplications and summations.
 Activation Function: After each convolution, an activation function (typically ReLU, the Rectified Linear Unit) is applied element-wise to introduce non-linearity. This helps the network learn complex patterns and relationships.
 Pooling Layers: The pooling layer comes next. It shrinks, or down-samples, each feature map. Additionally, by lowering the number of parameters the network must process, it makes processing significantly faster. The result is a pooled feature map.
 Fully Connected Layers: After several convolutional and pooling layers, one or more dense, fully connected layers are typically added. These layers flatten the high-level features and perform classification or regression tasks.
 Learnable Parameters: All layers of the CNN have learnable parameters. The convolutional layers have filter weights, and the fully connected layers have weights and biases. During the training phase, these parameters are learned via backpropagation and optimisation methods like gradient descent.
 Loss Function: A loss function calculates the discrepancy between the expected output and the target (ground truth).
 Backpropagation: The parameters of the network are updated in the direction opposite to the gradients of the loss with respect to those parameters.
 Training: CNNs are trained on a labelled dataset. The network learns to recognise features that are relevant to the task it is designed for. For example, in image classification, it learns to recognise features that distinguish different objects or categories.
 Inference: After training, the CNN can make predictions on new, unseen data. It passes the input through the layers, and the final layer's output represents the network's prediction.

IV. ISSUES AND CHALLENGES

After studying and reviewing the literature on the application of SAR images, one major problem faced by many users is the limited availability of labeled, high-resolution SAR training data. The biggest freely available dataset is the MSTAR dataset, but it contains only a limited set of military targets. Many researchers have devised their own ways of combining the available data, but that turns out to be expensive. Many techniques can be applied to increase efficiency, but a reasonably sized dataset is still required.

V. FUTURE SCOPE AND CONCLUSION

This paper surveys the most recent deep learning-based approaches to SAR image classification. In many practical scenarios, the classification of SAR images is the crucial final step in their interpretation. Given the remarkable progress obtained using deep learning in a variety of computer vision tasks, we conducted an investigation into the latest approaches utilizing deep learning techniques for SAR image perception. This investigation shed light on the specific architectural choices made in each method, as well as the configurations and parameters employed. Nevertheless, a pressing challenge associated with SAR image classification is the occurrence of misclassifications. This issue remains unresolved, and addressing it is of paramount importance, as the misinterpretation of targets can result in misinformation in real-world applications. Our study also delved into the strengths and weaknesses of each of these approaches, providing a comprehensive overview. In the future, the datasets need to be expanded, with more of the data labelled. [6]

REFERENCES

[1]. C. W. Sherwin, J. P. Ruina and R. D. Rawcliffe, "Some early developments in synthetic aperture radar systems", IRE Trans. Military Electronics, vol. MIL-6, pp. 111-115, Apr. 1962.
[2]. J. V. Evans and T. Hagfors, Radar Astronomy, New York: McGraw-Hill, 1968.
[3]. A. Gulli and S. Pal, Deep Learning with Keras, Packt Publishing Ltd, 2017.
[4]. C. Elachi, Spaceborne Radar Remote Sensing: Applications and Techniques, New York: IEEE, 1988.
[5]. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, et al., "ImageNet Large Scale Visual Recognition Challenge", International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[6]. N. Aafaq, A. Mian, W. Liu, S. Z. Gilani and M. Shah, "Video description: A survey of methods, datasets, and evaluation metrics", ACM Computing Surveys, vol. 52, no. 6, pp. 1-37, Jan 2020.

