
GIS SCIENCE JOURNAL ISSN NO : 1869-9391

CLASSIFICATION OF LAND USE AND LAND COVER FEATURES


FOR SOMWARPET REGION USING LANDSAT-8

Arpitha G A (1), Bhagyamma S (2), Sinchana G S (3) and Dr. Choodarathnakara A L (4)

(1,2,3,4) Department of Electronics and Communication Engineering, Kushalnagar

Abstract: Remote sensing provides an environmental monitoring capability covering vast geographical areas by collecting data from the Earth's atmosphere, land and oceans. Image classification is a technique for automatically distinguishing image classes based on known information, and is widely used for LULC mapping. Land Use and Land Cover (LULC) information represents land-surface features such as vegetation, water, buildings and roads, and can be used for urban planning among other applications. The objective of this article is to produce the LULC map of Somwarpet Taluk, Kodagu District, using Landsat-8 imagery (2017-2020). The satellite image is processed with ERDAS Imagine 9.2 software, and the Random Forest (RF) technique is applied for the classification of multispectral and fused images. The performance of RF on the multispectral (MS) and fused images is compared, and the experimental results show that the Overall Classification Accuracy (OCA) of the RF technique on the multispectral image is about 63%, while only 53% accuracy is achieved on the fused image.
Keywords: Land Use Land Cover, Random Forest, Landsat-8, Kodagu.

1. INTRODUCTION
The technique of sensing and monitoring the physical features of a place from a distance, by measuring its reflected and emitted radiation, is known as remote sensing. Remote sensing technologies such as satellites provide a consistent perspective of the world that is essential for monitoring the Earth's resources and for obtaining information on the impact of human activities. Due to global environmental change, LULC change has become a critical subject that must be addressed immediately. The surface cover on the ground, such as vegetation, urban infrastructure, water and bare soil, is referred to as land cover, whereas land use describes how people utilize the land for socio-economic activities.

In many remote sensing applications, environmental as well as socio-economic, pre-processing and image classification are the basic building blocks. Image enhancement is a pre-processing technique applied prior to image analysis to improve resolution and retain relevant information by removing noise from the data. Many researchers have worked on increasing the resolution of satellite images using different data fusion methods. Data fusion is the process of combining the content of two or more images to create a new composite image carrying more information than any image obtained from a single sensor. Fortuna et al. proposed a pipeline method for fusing RGB data (low spectral and high spatial resolution) with hyperspectral data (high spectral and low spatial resolution). The results compared the performance of the multivariate image fusion (MVIF) method with Bayesian sparse fusion: RMSE values of 200.158 and 168.641 were obtained for MVIF and Bayesian sparse, respectively. It was also suggested that non-linear regression techniques may work better for fusing high-resolution images [1]. Another study demonstrated a compressive sensing-based fusion method to sharpen a low-resolution MS image with the help of a high-resolution PAN image. The proposed method performed better in dealing with complex structures of satellite imagery, and the authors recommended that the method be applied to hyperspectral images in the future [2]. A further study used multispectral Landsat 5 imagery together with black-and-white aerial photographs to increase resolution by the Principal Component Analysis (PCA) method. Results showed that the composite images performed satisfactorily, at 70 to 85%. Even where the results are not fully satisfactory, they can still be used for classification and in applications such as detecting changes in archaeological sites, cities and forests [4].
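The RMSE figures quoted above measure how far a fused product deviates from a reference image. A minimal sketch of how such a value is computed, using synthetic arrays rather than any of the cited datasets:

```python
import numpy as np

# Hypothetical reference and fused images (synthetic, not the papers' data).
rng = np.random.default_rng(0)
reference = rng.uniform(0, 255, size=(64, 64))
fused = reference + rng.normal(0, 5, size=(64, 64))  # fused differs by noise

# Root-mean-square error between the two images: a lower RMSE means the
# fused product is spectrally closer to the reference.
rmse = np.sqrt(np.mean((fused - reference) ** 2))
print(round(rmse, 2))
```

With noise of standard deviation 5, the RMSE comes out close to 5, as expected.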

The authors of [8] proposed a supervised cross-fusion method to enhance fusion capability, using the optical bands of Landsat 8 and Sentinel 2 images. From the result analysis, the wavelet fusion (pixel-based) approach obtained a correlation coefficient of 0.97 and an SNR of 0.96. A feature-based fusion, the non-negative matrix factorization method, was then applied to Sentinel 1 data and the output of the pixel-based fusion (LST). Classification was performed by Random Forest (RF) and SVM, and the results of RF and SVM improved by 25% and 31%, respectively. It was therefore concluded that LST fused with optical and radar data improved classification accuracy [8].

VOLUME 9, ISSUE 6, 2022 PAGE NO: 1522



Shah et al. used MSS data of Landsat-8 OLI and Sentinel-2, with spatial resolutions of 30 m and 10 m, to develop a data fusion model named the extended super-resolution convolutional neural network (ESRCNN). The assessment results highlighted that the deep learning-based fusion framework performed well in terms of universal image quality index (0.8895), RMSE (0.0492), correlation coefficient (0.9131), relative global-dimensional synthesis error (6.6506) and spectral angle mapper (4.1589). The analysis showed that the reflectance distribution of the ESRCNN method gives better results than the ATPRK method. The framework provides continuous reflectance observations at a better temporal frequency than a single Landsat sensor, and the authors suggested it can be used for LULC classification [9].

Random Forest (RF) can produce a precise outcome by providing the importance of each feature without the overfitting problem, and it has the ability to measure the significance of predictive parameters. It can be used to identify the most relevant features for a given problem as well as to build a feature selector [24]. Random Forest is used in LULC applications because of its high classification accuracy and its capability to determine variable importance. It offers the flexibility to perform several types of data analysis, including regression, classification and unsupervised learning [22]. By opting for a suitable band combination and implementing the best fusion technique and classification approach, it is possible to obtain a reliable LULC classification. High-precision and accurate LULC mapping is essential for the planning and development of urban regions [25].
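The present study performs the classification in ERDAS Imagine, but the Random Forest idea described above can be sketched with scikit-learn on per-pixel band values. Everything below is illustrative: the two classes, their reflectance values and the band layout are hypothetical, not the Somwarpet training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic training pixels: three band values (e.g. Red, Green, NIR) per
# pixel, for two hypothetical classes ("water" vs "vegetation") whose
# reflectances are separable mainly in the NIR band.
water = rng.normal([0.05, 0.08, 0.03], 0.01, size=(100, 3))
veg = rng.normal([0.04, 0.10, 0.40], 0.02, size=(100, 3))
X = np.vstack([water, veg])
y = np.array([0] * 100 + [1] * 100)

# More trees generally stabilises the majority vote of the ensemble.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify two unseen pixels: a dark low-NIR pixel and a high-NIR pixel.
pred = clf.predict([[0.05, 0.08, 0.03], [0.04, 0.10, 0.40]])
print(pred)                       # -> [0 1]

# RF also reports per-feature importance; here the NIR band dominates.
print(clf.feature_importances_)
```

The `feature_importances_` attribute is what gives RF its feature-selection ability mentioned in [24].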

The present study focuses on producing a LULC classification map using the Random Forest technique on two different images: a multispectral image and a fused image. The study is carried out on Somwarpet taluk, Kodagu district. The results are compared to evaluate the performance of the RF classifier on both sets of images and to reduce prediction error. Section 1 comprises the introduction to data fusion techniques used for LULC applications, Section 2 describes the study area and satellite data considered, Section 3 presents the proposed methodology, Section 4 covers the result analysis and Section 5 gives the conclusions of the study.

2. Study Area

Figure 1. Map of Somwarpet Taluk, Kodagu District

The study area considered for the present work is Somwarpet, a semi-urban area situated in the north-east of the Kodagu District. Its geographical coordinates are 12.5943° North, 75.8505° East, with an average elevation of 1,130 metres. According to the 2001 Indian Census, Somwarpet has a population of 7,128, and the main occupations are coffee, pepper and cardamom plantation. The location of the study area is depicted in Figure 1.

2.1. Data Products


Table 1 gives the specifications of the image data products used in this study. Band 8 (PAN: 0.503-0.676 µm) with a spatial resolution of 15 m from Landsat 8 was acquired on 3rd February 2020, and MS images (30 m) of the same satellite were acquired on 27th October 2018. The optical bands band 4 (Red: 0.636-0.673 µm), band 3 (Green: 0.533-0.590 µm) and band 5 (NIR: 0.851-0.879 µm) of Landsat 8 were considered for the fusion process.

TABLE 1: Specification of image data products used for LULC map

Sl. No Satellite Date of Acquisition Spatial Resolution

1 Landsat-8 03 February 2020 15m PAN

2 Landsat-8 27 October 2018 30m MS (R, G, NIR)

3. Proposed Methodology
Land Use and Land Cover (LULC) classification provides information about the Earth's surface which can be used for evaluating changes at varied spatial scales, and it is widely used in applications such as environmental conservation, land-use planning and resource management [2]. Using remote sensing data, it is feasible to derive LULC information over vast areas [1]. The RF technique is applied to obtain the classification for both the MS data and the fused data, and the results are compared to check the performance. Figure 2 shows the flow chart of the present methodology.

Figure 2. Methodology for LULC Analysis using Random Forest Classifier

3.1. Pre-Processing

1. Geo-referencing: the process of mapping the coordinate system of the digital image obtained from Landsat 8 to the geographical coordinate system of the Earth.

2. Sub-setting of the image: the image downloaded from the Landsat 8 satellite covers more area than the required study area, so the larger image has to be cut down to a smaller area called a subset. Figure 3 shows the Landsat 8 satellite image of Somwarpet Taluk.
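The sub-setting step amounts to mapping the study area's projected coordinates to pixel indices via the scene's geotransform and slicing out the window. A minimal sketch with a synthetic array; the origin, projection values and window bounds are hypothetical, not the actual Landsat 8 scene parameters:

```python
import numpy as np

# Stand-in for one 30 m band of a full Landsat scene.
scene = np.arange(200 * 200).reshape(200, 200)

x_origin, y_origin = 500000.0, 1400000.0   # top-left corner (projected metres)
pixel = 30.0                               # 30 m spatial resolution

def to_rowcol(x, y):
    """Map a projected coordinate to a (row, col) pixel index."""
    col = int((x - x_origin) / pixel)
    row = int((y_origin - y) / pixel)       # row index grows downward
    return row, col

# Hypothetical bounding box of the study area in the same projection.
r0, c0 = to_rowcol(500600.0, 1399700.0)     # upper-left of the subset
r1, c1 = to_rowcol(502400.0, 1397000.0)     # lower-right of the subset

subset = scene[r0:r1, c0:c1]
print(subset.shape)                         # -> (90, 60)
```

In practice the same windowed read is done by the remote sensing software on the georeferenced raster rather than on a bare array.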
3.2. LANDSAT-8 Multispectral Image Analysis using Random Forest Classifier

Figure 4. Original Multispectral Image of Somwarpet Region


Figure 4 shows the multispectral image of the Somwarpet region. Multispectral sensors capture different features of the Earth within specific bands across the electromagnetic spectrum. Devices such as filters or detectors are used to segregate the bands, and spectral features can be extracted from them. MS imagery is used in remote sensing applications such as urban development and forest mapping. Random Forest is a supervised learning algorithm that can be used for both regression and classification. In this study, RF is used for classification because of its ability to build multiple decision trees and merge them to obtain accurate, stable predictions, and because it is easy to understand. Simply by increasing the number of trees, it is possible to increase the precision of the outcome. In the present study, the multispectral image of Landsat-8 is considered: band 4 (Red: 0.636-0.673 µm), band 3 (Green: 0.533-0.590 µm) and band 5 (NIR: 0.851-0.879 µm) at 30 m spatial resolution are used for the experiment.
3.3. LANDSAT-8 Fused Image Analysis Using Random Forest Classifier

Figure 5. Original Fused Image of Somwarpet Region

Image fusion is defined as acquiring all relevant data from several images and combining it into a single image; this single image is more informative and accurate than any single source image. The purpose of image fusion is to create a fused image that incorporates the most important information from all input images taken of the same scene by various sensors. The fusion process can boost the contrast of the image while maintaining the integrity of important characteristics from the input images. In this work, the RF technique is applied to the fused image obtained by combining the PAN band (0.503-0.676 µm) at 15 m spatial resolution with the multispectral bands of Landsat-8, band 4 (Red: 0.636-0.673 µm), band 3 (Green: 0.533-0.590 µm) and band 5 (NIR: 0.851-0.879 µm) at 30 m spatial resolution. Figure 5 shows the fused image of the Somwarpet region.
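The paper does not state which pan-sharpening algorithm ERDAS Imagine applied; as one common example, the Brovey transform injects the 15 m PAN detail into 30 m MS bands while preserving the band ratios. The sketch below uses synthetic arrays and assumes the MS grid upsamples by exactly a factor of 2 onto the PAN grid:

```python
import numpy as np

# Synthetic inputs (not the actual Landsat 8 data): 3 MS bands (R, G, NIR)
# at 30 m and a PAN band at 15 m, i.e. twice the MS grid resolution.
rng = np.random.default_rng(1)
ms = rng.uniform(0.1, 0.9, size=(3, 4, 4))
pan = rng.uniform(0.1, 0.9, size=(8, 8))

# 1. Upsample each MS band onto the PAN grid by pixel replication.
ms_up = ms.repeat(2, axis=1).repeat(2, axis=2)      # now 3 x 8 x 8

# 2. Brovey transform: scale each band by the ratio of PAN to the sum of
#    the MS bands, injecting PAN spatial detail while keeping band ratios.
total = ms_up.sum(axis=0)
fused = ms_up * (pan / total)

print(fused.shape)                                  # -> (3, 8, 8)
```

A property of this transform is that the fused bands sum exactly to the PAN band at every pixel, which is how the spatial detail is carried over.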
4. Result Analysis
In this work, the accuracy assessment was carried out for both Multispectral and Fused images to
determine the overall classification accuracy.
4.1. Confusion Matrix
1. Overall Accuracy: the proportion of all reference pixels that are correctly classified. From the error matrix, the overall accuracy is calculated using equation (1):

    Overall Accuracy = (Number of correctly classified pixels / Total number of reference pixels) × 100 --------- (1)
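Equation (1) reads the correctly classified pixels off the diagonal of the confusion matrix. A short sketch with a small hypothetical matrix (not one of the paper's tables):

```python
import numpy as np

# Hypothetical 3-class confusion matrix:
# rows = classified pixels, columns = reference pixels.
cm = np.array([[5, 1, 0],
               [2, 7, 1],
               [0, 1, 3]])

# Diagonal = correctly classified pixels; total = all reference pixels.
overall_accuracy = np.trace(cm) / cm.sum() * 100
print(overall_accuracy)   # -> 75.0  (15 correct out of 20)
```

The same trace-over-total computation applied to Tables 3 and 5 yields the OCA values reported below.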
4.2. Random Forest Classification of Somwarpet Multispectral Image
Table 3 shows the confusion matrix of the Random Forest classifier for the multispectral image with different sets of training samples. The Row Total (RT) and Column Total (CT) give the totals of classified and reference pixels; the diagonal elements are the correctly classified pixels, and all other elements are misclassified. The overall classification accuracy obtained for 100 training samples is 49%. The same experiment is carried out for 200, 300, 400 and 500 training samples, for which the overall classification accuracies obtained are 50%, 53%, 57% and 63% respectively.

Table 3: Confusion Matrix of RF for Multispectral Image with different training set

(Rows: classification pixels; columns: reference pixels)

Confusion Matrix with training set 100

Class    1    2    3    4    5    6    7   RT
1        1    0    2    1    0    0    0    4
2        0    2    0    0    0    0    0    2
3        2    0    3    0    0    0    0    5
4        0    0    0    2    0    0    0    2
5        0    0    0    0    0    2    0    2
6        0    0    0    0    0    1    0    1
7        0    1    1    0    0    1    1    4
CT       3    3    6    3    0    4    1   20

Confusion Matrix with training set 200

Class    1    2    3    4    5    6    7   RT
1        3    0    4    1    0    0    0    8
2        0    4    0    0    0    0    1    5
3        0    0    4    0    0    0    0    4
4        0    2    0    3    0    0    0    5
5        1    2    2    0    0    1    0    6
6        0    0    0    0    0    6    0    6
7        2    1    0    0    0    2    0    5
CT       6    9   10    4    0    9    1   39

Confusion Matrix with training set 300

Class    1    2    3    4    5    6    7   RT
1        5    0    0    2    2    0    0    9
2        0   10    0    0    0    0    1   11
3        1    0    4    1    1    0    0    7
4        0    2    0    3    1    1    0    7
5        4    1    0    3    0    1    0    9
6        0    0    2    0    0    7    0    9
7        0    3    1    0    1    1    1    7
CT      10   16    7    9    5   10    2   59

Confusion Matrix with training set 400

Class    1    2    3    4    5    6    7   RT
1        8    0    0    2    1    0    0   11
2        2   11    0    1    0    0    0   13
3        1    0   10    0    1    0    0   12
4        3    1    1    2    3    0    0   10
5        1    3    2    3    1    1    0   11
6        2    0    3    0    0    7    0   12
7        3    1    0    1    0    2    2    9
CT      20   16   16    9    6   10    2   79

Confusion Matrix with training set 500

Class    1    2    3    4    5    6    7   RT
1       10    0    0    3    0    0    1   14
2        2   12    0    0    0    0    0   14
3        0    0    7    1    0    0    0    8
4        4    2    0    8    0    1    0   15
5        5    0    4    6    0    1    0   16
6        0    0    1    0    0    7    1    9
7        6    0    0    1    0    2    2   11
CT      37   14   12   19    0   11    4   87


4.3. Comparison of Overall Classification Accuracy v/s Training Samples for Multispectral Image
Figure 6 shows the comparison of overall classification accuracy versus training samples obtained for the Landsat 8 MS image. For the 100-sample training set, the OCA obtained is 49%; for 200 samples, 50%; for 300 samples, 53%; and for 400 and 500 samples, 57% and 63% respectively. From the graph it is observed that as the training samples increase, the overall classification accuracy also increases.

Figure 6. Comparison of overall classification accuracy v/s training samples for Multispectral Image

4.4. Random Forest Classification of Fused Image for Somwarpet Region


Table 5 shows the confusion matrices of the Random Forest classifier for the fused image obtained with different training sets, i.e. 100, 200, 300, 400 and 500. The table shows the misclassified and correctly classified pixels; the diagonal elements are the correctly classified pixels for the different training-set sizes. At the beginning of the experiment only 100 training samples were considered, and the OCA obtained was 50%. In the next iterations, 200 and 300 training samples were taken, and the OCA achieved is 51% in both cases. Similarly, for 400 and 500 samples the OCA attained is 52% and 53% respectively.
Table 5: Confusion Matrix of RF for Fused Image with 100,200,300,400 and 500 Training set

(Rows: classification pixels; columns: reference pixels)

Confusion Matrix with Training Set 100

Class    1    2    3    4    5    6    7   RT
1        1    0    0    1    1    0    0    3
2        1    5    0    0    2    0    0    8
3        0    0    8    0    0    0    0    8
4        3    0    0    0    1    0    0    4
5        4    0    0    0    2    0    0    6
6        2    0    0    0    0    3    0    5
7        4    0    0    0    1    0    0    5
CT      15    5    8    1    7    3    0   39

Confusion Matrix with Training Set 200

Class    1    2    3    4    5    6    7   RT
1        0    1    1    1    1    1    1    6
2        0    2    0    0    0    0    0    2
3        0    0    4    0    0    0    1    5
4        0    0    0    2    0    0    0    2
5        0    1    0    1    1    0    0    3
6        0    0    0    0    0    1    0    1
7        0    0    0    0    0    1    0    1
CT       0    4    5    4    2    3    2   20


Confusion Matrix with Training Set 300

Class    1    2    3    4    5    6    7   RT
1        1    1    1    0    0    0    0    3
2        0    2    0    0    0    0    0    2
3        0    0    5    0    0    0    0    5
4        2    0    1    1    0    1    0    5
5        0    0    0    2    2    0    1    5
6        0    1    0    0    0    5    0    6
7        1    0    2    0    1    0    0    4
CT       4    4    9    3    3    6    1   30

Confusion Matrix with Training Set 400

Class    1    2    3    4    5    6    7   RT
1        9    0    1    0    0    0    0   10
2        0    6    0    0    0    0    0    6
3        1    0    4    0    0    0    0    5
4        2    0    0    0    2    0    0    4
5        2    0    0    0    5    0    0    7
6        0    0    1    0    1    0    0    2
7        4    0    0    0    2    0    0    6
CT      18    6    6    0   10    0    0   40

Confusion Matrix with Training Set 500

Class    1    2    3    4    5    6    7   RT
1        7    0    0    4    0    0    0   11
2        0    7    0    0    1    0    0    8
3        0    0    6    1    0    0    0    7
4        1    2    0    3    0    1    0    7
5        0    1    0    2    3    0    0    6
6        0    0    0    0    0    5    0    5
7        1    1    0    2    1    0    0    5
CT       9   11    6   12    5    6    0   49

4.5. Comparison of Overall Classification Accuracy v/s Training Samples for Fused Image


Figure 7. Comparison of overall classification accuracy v/s training samples for Fused Image

Figure 7 shows the comparison of overall classification accuracy versus training samples obtained for the Landsat 8 fused image. For the 100-sample training set, the OCA obtained is 50%; for 200 and 300 samples, 51%; and for 400 and 500 samples, 52% and 53% respectively. As the training sets were increased, the overall classification accuracy also increased.
4.6. Comparison of OCA v/s Training Sample of Multispectral Image and Fused Image for Random
Forest Classifier
Figure 8 illustrates the comparison of OCA versus training samples for the multispectral and fused images under the Random Forest classifier. From the graph, it is inferred that the overall classification accuracy of the multispectral image is higher than that of the fused image for the larger training sets.



Figure 8. Comparison of OCA v/s Training Sample of Multispectral Image and Fused image for RF classifier

For the 100-sample training set, the multispectral data has an OCA of 49% while the fused data has 50%, a difference of 1%. For 200 samples, the multispectral OCA is 50% and the fused OCA is 51%, again a difference of 1%. For 300 samples, the multispectral OCA is 53% against 51% for the fused data, a difference of 2%. For 400 samples, the multispectral OCA is 57% against 52%, a difference of 5%. For 500 samples, the multispectral OCA is 63% against 53%, a difference of 10%.
5. Conclusion and Future Work
The RF algorithm produces 63% accuracy on the MS image data and 53% accuracy on the fused image data. The results achieved for the fused images are almost consistent, staying within about 50-53%, while for the MS image the accuracy ranges from 49% to 63%. The lower fused-image accuracy may be due to the selection of bands (the correlation existing between bands). It is also noticed that the RF classification accuracy increased with increasing training data, which in turn increases the overall classification accuracy. The selection of uncorrelated bands for fusion, to obtain better classification accuracy, is left for future work.
Acknowledgments
The authors would like to thank our beloved students Banuteja V USN: 4GL17EC003, Dhanush M S
USN: 4GL17EC008, Harshitha R USN: 4GL17EC014 and Ramya S R USN: 4GL17EC030.
REFERENCES
[1] Fortuna, J., Martens, H., & Johansen, T. A. (2020). Multivariate image fusion: A pipeline for
hyperspectral data enhancement. Chemometrics and Intelligent Laboratory Systems, 205, 104097.
[2] Ghahremani, M., Liu, Y., Yuen, P., & Behera, A. (2019). Remote sensing image fusion via
compressive sensing. ISPRS journal of photogrammetry and remote sensing, 152, 34-48.
[3] Jinju, J., Santhi, N., Ramar, K., & Bama, B. S. (2019). Spatial frequency discrete wavelet transform
image fusion technique for remote sensing applications. Engineering Science and Technology, an
International Journal, 22(3), 715-726.
[4] Kaimaris, D., Patias, P., Mallinis, G., & Georgiadis, C. (2020). Data Fusion of Scanned Black and
White Aerial Photographs with Multispectral Satellite Images. Sci, 2(2), 29.
[5] Li, W., Dong, R., Fu, H., Wang, J., Yu, L., & Gong, P. (2020). Integrating Google Earth imagery with
Landsat data to improve 30-m resolution land cover mapping. Remote Sensing of Environment, 237,
111563.
[6] Peng, Y., Li, W., Luo, X., Du, J., Gan, Y., & Gao, X. Integrated fusion framework based on
semicoupled sparse tensor factorization for spatio-temporal–spectral fusion of remote sensing images.
Information Fusion, 65, 21-36.
[7] Quan, Y., Tong, Y., Feng, W., Dauphin, G., Huang, W., & Xing, M. (2020). A Novel Image Fusion
Method of Multi-Spectral and SAR Images for Land Cover Classification. Remote Sensing, 12(22),
3801.


[8] Rangzan, K., Kabolizadeh, M., Karimi, D., & Zareie, S. (2019). Supervised cross-fusion method: a new
triplet approach to fuse thermal, radar, and optical satellite data for land use classification.
Environmental monitoring and assessment, 191(8), 481.
[9] Shah, E., Jayaprasad, P., & James, M. E. (2019). Image Fusion of SAR and Optical Images for
Identifying Antarctic Ice Features. Journal of the Indian Society of Remote Sensing, 47(12), 2113-
2127.
[10] Shao, Z., Cai, J., Fu, P., Hu, L., & Liu, T. (2019). Deep learning-based fusion of Landsat-8 and
Sentinel-2 images for a harmonized surface reflectance product. Remote Sensing of Environment, 235,
111425.
[11] Useya, J., & Chen, S. (2018). Comparative Performance Evaluation of Pixel-Level and Decision-Level
Data Fusion of Landsat 8 OLI, Landsat 7 ETM+ and Sentinel-2 MSI for Crop Ensemble
Classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,
11(11), 4441-4451.
[12] Wang, H., Skau, E., Krim, H., & Cervone, G. (2018). Fusing heterogeneous data: A case for remote
sensing and social media. IEEE Transactions on Geoscience and Remote Sensing, 56(12), 6956-6968.
[13] Wei, X., Chang, N. B., & Bai, K. (2020). A Comparative Assessment of Multisensory Data
Merging and Fusion Algorithms for High-Resolution Surface Reflectance Data. IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing, 13, 4044-4059.
[14] Wu, H., Zhao, S., Zhang, J., & Lu, C. (2019). Remote sensing image sharpening by integrating
multispectral image super-resolution and convolutional sparse representation fusion. IEEE Access, 7,
46562-46574.
[15] Wu, S., & Chen, H. (2020). Smart city oriented remote sensing image fusion methods based on
convolution sampling and spatial transformation. Computer Communications.
[16] Yang, Y., Lu, H., Huang, S., & Tu, W. (2019). Remote Sensing Image Fusion Based on Fuzzy Logic
and Salience Measure. IEEE Geoscience and Remote Sensing Letters.
[17] Ye, F., Li, X., & Zhang, X. (2019). FusionCNN: a remote sensing image fusion algorithm based on
deep convolutional neural networks. Multimedia Tools and Applications, 78(11), 14683-14703.
[18] Yokoya, N., Ghamisi, P., Xia, J., Sukhanov, S., Heremans, R., Tankoyeu, I., ... & Tuia, D. (2018). Open
data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS Data Fusion
Contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 11(5),
1363-1377.
[19] Zhang, K., Wang, M., Yang, S., & Jiao, L. (2018). Convolution structure sparse coding for fusion of
panchromatic and multispectral images. IEEE Transactions on Geoscience and Remote Sensing,
57(2), 1117-1130.
[20] Zhao, C., Gao, X., Emery, W. J., Wang, Y., & Li, J. (2018). An Integrated Spatio-Spectral–Temporal
Sparse Representation Method for Fusing Remote-Sensing Images With Different Resolutions. IEEE
Transactions on Geoscience and Remote Sensing, 56(6), 3358-3370.
[21] Liu, Z., Blasch, E., Bhatnagar, G., John, V., Wu, W., & Blum, R. S. (2018). Fusing synergistic
information from multi-sensor images: an overview from implementation to performance assessment.
Information Fusion, 42, 127-145.
[22] Mishra, D., & Palkar, B. (2015). Image fusion techniques: a review. International Journal of
Computer Applications, 130(9), 7-13.
[23] Solanky, V., Sreenivas, K., & Katiyar, S. K. (2019). Performance evaluation of image fusion
techniques for Indian remote sensing satellite data using Z-test. Spatial Information Research, 27(1),
1-9.
[24] Zhang, Y. (2004). Understanding image fusion. Photogramm. Eng. Remote Sens, 70(6), 657-661.
[25] Babu, S. T., Chintesh, I., Satyanarayana, V., & Nandan, D. (2020). Image Fusion: Challenges,
Performance Metrics and Future Directions. In Electronic Systems and Intelligent Computing (pp.
575-584). Springer, Singapore.
