Satellite Image Classification
CHAPTER 1
INTRODUCTION
1.1. Introduction
Remote sensing (RS) is the science of using electromagnetic radiation (EMR) to identify
Earth-surface features and estimate their geophysical nature by analyzing and interpreting
their spectral, spatial, and temporal signatures, without physical contact with the object.
RS uses satellite imagery to sense the land cover of the Earth. At the beginning of the 21st
century, satellite imagery became widely available. Currently, various types of orbiting
satellites collect crucial data that improve our knowledge of the Earth's atmosphere, oceans,
ice, and land. Orbital platforms use different regions of the electromagnetic spectrum (EMS)
to provide data of interest, which, combined with larger-scale aerial or ground-based sensing
and analysis, yield satellite images carrying enough information to monitor trends in natural
phenomena. Satellite images are rich material that plays a vital role in providing
geographical information, and specialized RS applications are employed to interpret and
analyze them in order to understand the behavior of the phenomena under consideration.
Satellite Image Classification (SIC) is the most significant technique used in RS for the
computerized study and pattern recognition of satellite data. It relies on the diverse
structures present in the image and involves rigorous validation of the training samples,
depending on the classification algorithm used [1]. The power of such algorithms depends on
how information is extracted from the huge amount of data found in the images. According to
this information, the pixels are then grouped into meaningful classes, which makes it
possible to interpret and study the various types of regions included in the image [2].
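As a toy illustration of grouping pixels into meaningful classes, the following sketch assigns each pixel to the class whose training-sample centroid is nearest in spectral space. The class names, band values, and the nearest-centroid rule are illustrative assumptions, not the method of any work cited here.

```python
# Minimal sketch of pixel-wise classification: each pixel (a tuple of
# band values) is assigned to the class whose training-sample mean
# (centroid) is nearest in spectral space.

def centroid(samples):
    """Mean spectral vector of a list of training pixels."""
    n_bands = len(samples[0])
    return [sum(p[b] for p in samples) / len(samples) for b in range(n_bands)]

def classify(pixel, centroids):
    """Return the class label whose centroid is closest to the pixel."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(pixel, centroids[label]))

# Hypothetical training samples: (red, near-infrared) reflectance pairs.
training = {
    "water":      [(10, 5), (12, 6), (9, 4)],
    "vegetation": [(30, 80), (28, 75), (33, 85)],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

print(classify((11, 5), centroids))    # dark, low-NIR pixel -> water
print(classify((29, 78), centroids))   # bright-NIR pixel -> vegetation
```

This minimum-distance rule is one of the simplest supervised schemes; the algorithms discussed in this thesis replace it with far more discriminative classifiers.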
1.2. Remote Sensing
Remote sensing is the process that permits extracting information about the land surface
without actual contact with the area being observed [3]. Remotely sensed images are aerial
and satellite images that allow exact mapping of land cover and make landscape traits
comprehensible on regional, continental, and even global scales [4]. Figure 1.1 shows the
principle of remote sensing: solar radiation is incident on objects on the land cover, which
reflect this radiation toward the satellite; the satellite receives the reflected radiation
and records an image of the considered area [5].
Remote sensing operates in several regions of the electromagnetic spectrum. The ultraviolet
(UV) portion of the spectrum has the shortest wavelengths of practical use for remote
sensing; it can be used to obtain a general picture of objects embedded underground. Beyond
the visible spectrum, infrared (IR) radiation is used to record thermal images that are
useful in climate studies. Satellite images taken in the visible bands are usually employed
as digital images containing valuable information that helps developers understand the land
cover. The aim of satellite image categorization is to split an image into discrete classes;
the resulting classified image is a thematic map of the landscape [5]. Satellite imagery is
categorized into active and passive remote sensing according to whether radiation is emitted
toward the target or merely recorded [6]; more details are given in the following
subsections:
1.2.1. Active Remote Sensing
Active remote sensing emits radiation to scan objects and areas, whereupon a sensor detects
and measures the radiation that is reflected or backscattered from the target, as shown in
Figure 1.2 [7]. RADAR (Radio Detection and Ranging) and LiDAR (Light Detection and Ranging)
are examples of active remote sensing, in which the time delay between emission and return of
the signal is measured; this helps establish the location, speed, and direction of the target
object on the ground [7].
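The ranging principle described above can be sketched numerically: the sensor measures the round-trip delay of the emitted pulse, and the one-way distance follows from the speed of light. The delay value below is an illustrative assumption.

```python
# Sketch of how an active sensor (RADAR/LiDAR) converts echo delay into
# range: the pulse travels to the target and back, so the one-way
# distance is half the round-trip time multiplied by the speed of light.

C = 299_792_458.0  # speed of light in m/s

def range_from_delay(delay_s):
    """One-way distance (metres) for a measured round-trip delay."""
    return C * delay_s / 2.0

# A hypothetical return arriving 4 microseconds after emission:
print(range_from_delay(4e-6))  # about 599.6 m
```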
Figure 1.6: Examples of aerial images around the Earth, where large intra-class variability
can be appreciated for the 'building' class.
In the following, I present three important families/strategies of spectral-spatial
classification, which are successfully used to this day and can also be combined with each
other, and I position my contributions with respect to these strategies:
1. Graph-based Methods: In these approaches, an image is represented as a graph, where
the nodes typically represent image pixels or regions, while edges either connect adjacent
regions [26, 27] or keep track of a hierarchy in a multi-scale image representation [28,
29]. By applying different graph-construction and pruning strategies, these techniques
typically output a disconnected graph, where each connected subgraph corresponds to a
classified object in the resulting classification map. A popular graph-cut technique
applies a min-cut algorithm on the image graph to regularize the outputs of a pixel-wise
classifier, by minimizing the Markov Random Field-based energy function associated
with the image and the initial classification result [30]. Another approach consists in
first building a hierarchy of image regions at different levels of detail, based on standard
homogeneity/dissimilarity criteria, and then selecting from this hierarchy the regions at
different scales that correspond to the objects in the final classification map [28, 31].
Such a hierarchical data structure can be very flexible; for instance, it can easily be
augmented to include textural or shape information for more accurate image analysis.
2. Feature Engineering: Spatial information can be included in a classifier by
analyzing the similarity/correlations between the data in the neighborhood of an image
pixel. The initial approaches would typically use a well-defined algorithm for this
purpose, which can often be described by a (set of) pre-defined convolutional mask(s). For
instance, Tsai et al. [32] and Huang and Zhang [33] have investigated the use of texture
measures derived from the gray-level co-occurrence matrix, such as angular second
moment, entropy, and homogeneity, for spectral-spatial classification. A popular approach
in remote sensing explores the principles of mathematical morphology, which by
definition aims at analyzing spatial relations between sets of pixels, i.e., extracting
information about the size, shape, and orientation of structures. For this purpose,
morphological profiles [34], extended morphological profiles [35], attribute profiles [36],
and extinction profiles [37] have been successfully applied. In all cases, the core idea is
to progressively simplify the image based on a pre-defined rule/attribute, which allows
analyzing a particular characteristic (for instance, the area or standard deviation of
regions) within the image. The extracted morphological features typically lie in a very-high-
dimensional space, and additional feature extraction must be applied to reduce the
dimensionality of the feature space. Another group of techniques introduces strong shape
priors into classification: for example, in remote sensing images, buildings can be
modeled as rectangles [38, 39] and roads can be modeled by line segments [40].
3. Deep-learned Features: Even though feature engineering gives satisfactory results for
many applications, when the complexity of the data and the number of classes increase,
it is not straightforward to design hand-crafted features that would exhibit high
discriminative power for all classes. The recent trend in the state of the art is to
design classification systems that are able to automatically learn hundreds of
expressive high-level contextual features. In this context, the use of deep learning, in
particular convolutional neural networks, is intensively investigated by both the image
analysis and remote sensing communities. While some works use deep learning-based
features as the input to a conventional classifier, such as an SVM or a logistic regression
classifier [41], most recent studies design an end-to-end system that jointly learns to
extract relevant contextual features and conduct the classification [42, 43].
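As a minimal illustration of the graph view underlying the first strategy, the sketch below treats pixels as nodes, connects 4-adjacent pixels that share a class label, and extracts the connected components of that graph as objects. This is a simplification for intuition, not an implementation of the cited methods.

```python
# Pixels are nodes; edges connect 4-adjacent pixels with the same class
# label; each connected component of this graph is one object of the
# classification map.
from collections import deque

def connected_objects(label_map):
    """Return a grid of the same shape where each cell holds an object id."""
    rows, cols = len(label_map), len(label_map[0])
    obj = [[-1] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if obj[r][c] != -1:
                continue
            # breadth-first flood over same-label 4-neighbours
            queue = deque([(r, c)])
            obj[r][c] = next_id
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and obj[ny][nx] == -1
                            and label_map[ny][nx] == label_map[r][c]):
                        obj[ny][nx] = next_id
                        queue.append((ny, nx))
            next_id += 1
    return obj

# A toy 3x4 pixel-wise classification with two 'building' (B) regions
# separated by 'ground' (G); they become distinct objects:
labels = [["B", "B", "G", "B"],
          ["G", "G", "G", "B"],
          ["G", "G", "G", "B"]]
print(connected_objects(labels))
```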
CHAPTER 2
LITERATURE SURVEY
In this chapter, we discuss the various types of satellite image classification
approaches and review a few notable works.
2.1. Satellite Image Classification
There are several methods and techniques for satellite image classification. They can be
broadly classified into three categories: automated, manual, and hybrid, as shown in
Figure 2.1 [44]. The following subsections illustrate each category:
K(x_i, x_j) = exp(−γ ||x_i − x_j||^2)                                          (3.1)
where x_i and x_j denote the objects i and j, and γ denotes the kernel parameter, which is
used to evaluate the smoothness of the boundary between the classes in the original object
space.
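Assuming the kernel takes the common Gaussian (RBF) form used with SVMs, it can be sketched as follows; the gamma value and sample vectors are illustrative assumptions.

```python
import math

# Sketch of the Gaussian (RBF) kernel (assumed form): gamma controls
# how smooth the class boundary is in the original object space.
def rbf_kernel(xi, xj, gamma=0.5):
    """K(xi, xj) = exp(-gamma * ||xi - xj||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((1.0, 2.0), (1.0, 2.0)))  # identical objects -> 1.0
print(rbf_kernel((1.0, 2.0), (3.0, 2.0)))  # exp(-0.5 * 4) ~ 0.135
```

Larger gamma makes the kernel decay faster with distance, producing a wigglier decision boundary; smaller gamma gives a smoother one.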
3.4 Performance Measures
To evaluate the performance of the learning approaches, we consider the following performance
measures:
Accuracy = (TP + TN) / (TP + FN + FP + TN)                                     (3.2)
Precision = TP / (TP + FP)                                                     (3.3)
Recall (Sensitivity) = TP / (TP + FN)                                          (3.4)
Specificity = TN / (TN + FP)                                                   (3.5)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)                     (3.6)
Error rate = (FP + FN) / (TP + FN + FP + TN)                                   (3.7)
where TP = True Positive, FN = False Negative, FP = False Positive, and TN = True Negative.
REFERENCES
1. Baboo, S. Santhosh, and S. Thirunavukkarasu. "Image segmentation using high
resolution multispectral satellite imagery implemented by fcm clustering
techniques." International Journal of Computer Science Issues (IJCSI) 11.3 (2014): 154.
2. R. L. Kettig and D. A. Landgrebe. Classification of multispectral image data by
extraction and classification of homogeneous objects. IEEE Trans. Geoscience
Electronics, 14(1):19–26, Jan. 1976.
3. Goel, Samiksha, Arpita Sharma, and V. K. Panchal. "A hybrid algorithm for satellite
image classification." International Conference on Advances in Computing,
Communication and Control. Springer, Berlin, Heidelberg, 2011.
4. LENI, EZIL SAM. "A TECHNIQUE FOR CLASSIFICATION OF HIGH
RESOLUTION SATELLITE IMAGES USING OBJECT-BASED
SEGMENTATION." Journal of Theoretical & Applied Information Technology 68.2
(2014).
5. Poongodi, S., and Dr B. Kalaavathi. "Satellite image classification based on fuzzy with
cellular automata." International Journal of Electronics and Communication
Engineering 2.3 (2015): 137-141.
6. Toshniwal, Mayank. "Satellite image classification using neural networks." 3rd
International Conference: Sciences of Electronic, Technologies of Information and
Telecommunications. 2005.
7. Rees, William Gareth. Physical principles of remote sensing. Cambridge university press,
2013.
8. Joseph, George. Fundamentals of remote sensing. Universities Press, 2005.
9. Rencz, A. N., and R. A. Ryerson. "Manual of Remote Sensing for the Earth Sciences."
10. Chander, Gyanesh, Brian L. Markham, and Dennis L. Helder. "Summary of current
radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI
sensors." Remote sensing of environment 113.5 (2009): 893-903.
11. Andary, Jim, et al. "Landsat 7 System design overview." IGARSS'96. 1996 International
Geoscience and Remote Sensing Symposium. Vol. 4. IEEE, 1996.
12. ERDAS (Firm). ERDAS field guide. Erdas, 1997.
13. Daptardar, A. A., and Vishal J. Kesti. "Introduction to remote sensors and image
processing and its applications." IJLTET 2 (2013): 107-114.
14. Nirmal Keshava. A survey of spectral unmixing algorithms. Lincoln Lab. J., 14:55– 78,
2003.
15. Gellert Mattyus, Shenlong Wang, Sanja Fidler, and Raquel Urtasun. Enhancing road
maps by parsing aerial images around the world. In IEEE ICCV, 2015.
16. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. Wiley, 2001.
17. J. A. Benediktsson and P. H. Swain. Statistical Methods and Neural Network Approaches
for Classification of Data from Multiple Sources. PhD thesis, Purdue
Univ., School of Elect. Eng., West Lafayette, IN, 1990.
18. J. A. Benediktsson, P. H. Swain, and O. K. Ersoy. Conjugate gradient neural networks in
classification of very high dimensional remote sensing data. International
Journal of Remote Sensing, 14(15):2883 – 2903, 1993.
19. E. Merényi. Intelligent understanding of hyperspectral images through self-organizing neural
maps. In Proc. 2nd Int. Conf. Cybernetics and Information Technologies, Systems and
Applications (CITSA 2005), pages 30–35, Orlando, FL, USA, 2005.
20. V. Vapnik. Statistical Learning Theory. New York: Wiley, 1998.
21. G. Camps-Valls, L. Gomez-Chova, J. Muñoz-Marí, J. Vila-Francés, and J.
CalpeMaravilla. Composite kernels for hyperspectral image classification. IEEE Geosci.
Remote Sens. Lett., 3(1):93–97, 2006.
22. J. Ham, Yangchi Chen, M. M. Crawford, and J. Ghosh. Investigation of the random forest
framework for classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens.,
43(3):492–501, March 2005.
23. R. Hansch and O. Hellwich. When to fuse what? random forest based fusion of low-,
mid-, and high-level information for land cover classification from optical and sar
images. In IEEE IGARSS, pages 3587–3590, July 2016.
24. J. A. Richards and X. Jia. Remote Sensing Digital Image Analysis. Springer-Verlag,
1999.
25. Y. Tarabalka, J. Chanussot, and J. A. Benediktsson. Segmentation and classification of
hyperspectral images using minimum spanning forest grown from automatically selected
markers. IEEE Trans. Systems, Man, and Cybernetics: Part B, 40(5):1267–1279, Oct.
2010.
26. K. Bernard, Y. Tarabalka, J. Angulo, J. Chanussot, and J. A. Benediktsson. Spectral-
spatial classification of hyperspectral data based on a stochastic minimum spanning forest
approach. IEEE Transactions on Image Processing, 21(4):2008– 2021, April 2012.
27. S. Valero, P. Salembier, and J. Chanussot. Hyperspectral image representation and
processing with binary partition trees. IEEE Trans. Image Process. 22(4):1430– 1443,
2013.
28. E. Maggiori, Y. Tarabalka, and G. Charpiat. Improved partition trees for multiclass
segmentation of remote sensing images. In IEEE IGARSS, pages 1016–1019. IEEE,
2015.
29. Y. Tarabalka and A. Rana. Graph-cut-based model for spectral-spatial classification of
hyperspectral images. In IEEE IGARSS, pages 3418–3421, July 2014.
30. Emmanuel Maggiori, Yuliya Tarabalka, and Guillaume Charpiat. Optimizing partition
trees for multi-object segmentation with shape prior. In 26th British Machine Vision
Conference (BMVC), 2015.
31. F. Tsai, C.-K. Chang, and G.-R. Liu. Texture analysis for three dimension remote sensing
data by 3D GLCM. In Proc. of 27th Asian Conference on Remote Sensing, pages 1–6,
August 2006.
32. X. Huang and L. Zhang. A comparative study of spatial approaches for urban mapping
using hyperspectral rosis images over pavia city, northern italy. International Journal of
Remote Sensing, 30(12):3205–3221, 2009.
33. J.A. Benediktsson, M. Pesaresi, and K. Arnason. Classification and feature extraction for
remote sensing images from urban areas based on morphological transformations. IEEE
Trans. Geosci. Remote Sens., 41(9):1940–1949, Sept. 2003.
34. M. Fauvel, J. A. Benediktsson, J. Chanussot, and J. R. Sveinsson. Spectral and spatial
classification of hyperspectral data using svms and morphological profiles. IEEE Trans.
Geosci. Remote Sens., 46(11):3804–3814, Nov 2008.
35. M. Dalla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone. Morphological attribute
profiles for the analysis of very high resolution images. IEEE Trans. Geosci. Remote
Sens., 48(10):3747–3762, Oct 2010.
36. P. Ghamisi, R. Souza, J. A. Benediktsson, X. X. Zhu, L. Rittner, and R. A. Lotufo.
Extinction profiles for the classification of remote sensing data. IEEE Trans. Geosci.
Remote Sens., 54(10):5631–5645, Oct 2016.
37. Mathias Ortner, Xavier Descombes, and Josiane Zerubia. Building outline extraction
from digital elevation models using marked point processes. IJCV, 72(2):107–132, 2007.
38. Yinghua Wang, Florence Tupin, and Chongzhao Han. Building detection from high-
resolution polsar data at the rectangle level by combining region and edge information.
Pattern Recognition Letters, 20, July 2010.
39. C. Lacoste, X. Descombes, and J. Zerubia. Point processes for unsupervised line network
extraction in remote sensing. IEEE TPAMI, 27(10):1568–1579, October 2005.
40. Wenzhi Zhao, Zhou Guo, Jun Yue, Xiuyuan Zhang, and Liqun Luo. On combining
multiscale deep learning features for the classification of hyperspectral remote sensing
imagery. International Journal of Remote Sensing, 36(13):3368–3379, 2015.
41. Volodymyr Mnih. Machine learning for aerial image labeling. PhD thesis, University of
Toronto, 2013.
42. Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat, and Pierre Alliez.
Convolutional neural networks for large-scale remote-sensing image classification. IEEE
Trans. Geosci. Remote Sens., 55(2):645–657, 2017.
43. Dimitrios Marmanis, Konrad Schindler, Jan Dirk Wegner, Mihai Datcu, and Uwe Stilla.
Semantic segmentation of aerial images with explicit class-boundary modeling. In IEEE
IGARSS, pages 5071–5074, 2017.
44. Abburu, Sunitha, and Suresh Babu Golla. "Satellite image classification methods and
techniques: A review." International journal of computer applications 119.8 (2015).
45. Sathya, P., and L. Malathi. "Classification and segmentation in satellite imagery using
back propagation algorithm of ann and k-means algorithm." International Journal of
Machine Learning and Computing 1.4 (2011): 422.
46. Shivali, A. K., and V. K. Vishakha. "Supervised and Unsupervised Neural Network for
Classification of Satellite Images." Int. J. Comp. Appl 25 (2013).
47. Chriskos, Panteleimon, et al. "De-identifying facial images using singular value
decomposition." 2014 37th International Convention on Information and Communication
Technology, Electronics and Microelectronics (MIPRO). IEEE, 2014.
48. Phillips, Rhonda D., et al. "Feature reduction using a singular value decomposition for
the iterative guided spectral class rejection hybrid classifier." ISPRS Journal of
Photogrammetry and Remote Sensing 64.1 (2009): 107-116.
49. Kumar, Anil, Ashish Kumar Bhandari, and P. Padhy. "Improved normalised difference
vegetation index method based on discrete cosine transform and singular value
decomposition for satellite image processing." IET Signal Processing 6.7 (2012): 617-
625.
50. Mahi, Habib, Hadria Isabaten, and Chahira Serief. "Zernike moments and svm for shape
classification in very high resolution satellite images." Int. Arab J. Inf. Technol. 11.1
(2014): 43-51.
52. Bekaddour, Akkacha, Abdelhafid Bessaid, and Fethi Tarek Bendimerad. "Multi spectral
satellite image ensembles classification combining k-means, LVQ and SVM
classification techniques." Journal of the Indian Society of Remote Sensing 43.4 (2015):
671-686.
53. Brindha, S. "Satellite image enhancement using dwt–svd and segmentation using mrr–
mrf model." Journal of Network Communications and Emerging Technologies
(JNCET) 1.1 (2015): 6-10.
54. Chengfan Li, Jingyuan Yin, and Junjuan Zhao. Extraction of Urban Vegetation from High-
Resolution Remote Sensing Image. In International Conference on Computer Design and
Applications, IEEE, 2010.
55. Shamama Anwar, Sugandh Raj, A Neural Network Approach to Edge Detection using
Adaptive Neuro-Fuzzy Inference System, IEEE, 2014.
56. Tiago M. A. Santos André Mora, Rita A. Ribeiro, and João M. N. Silva, Fuzzy-Fusion
approach for Land Cover Classification, IEEE International Conference on Intelligent
Engineering Systems, June 30-July 2, 2016
57. K. C. Tan, H. S. Lim and M. Z. Mat Jafri, Comparison of Neural Network and Maximum
Likelihood Classifiers for Land Cover Classification Using Landsat Multispectral Data,
IEEE Conference on Open Systems, 2011
58. Wei Zhang, Zhong Li, Weidong Xu, Haiquan Zhou, A Classifier of Satellite Signals
Based on the Back-Propagation Neural Network, International Congress on Image and
Signal Processing, 2015
59. Nur Anis Mahmon and Norsuzila Ya'acob. A Review on Classification of Satellite
Image Using Artificial Neural Network (ANN). In Control and System Graduate Research
Colloquium, IEEE, 2015.
60. Alberto Prieto, Beatriz Prieto, Eva Martinez Ortigosa, Eduardo Ros, Francisco Pelayo,
Julio Ortega, and Ignacio Rojas. Neural networks: An overview of early research, current
frameworks and new challenges. Elsevier B.V., 2016.
61. D.G. Stavrakoudis, J.B. Theocharis , G.C. Zalidis, A Boosted Genetic Fuzzy Classifier
for land cover classification of remote sensing imagery, International Society for
Photogrammetry and Remote Sensing, Inc , Elsevier B.V , 2011
62. Anand Upadhyay, S. K. Singh, Pooja Singh, and Priya Singh. Comparative study of
Artificial Neural Network-based classification of IRS LISS-III satellite images.
IEEE, 2015.
63. S. D. Jawak, P. Devliyal, and A. J. Luis, “A Comprehensive Review on Pixel Oriented
and Object Oriented Methods for Information Extraction From Remotely Sensed Satellite
Images with a Special Emphasis on Cryospheric Applications,” Adv. Remote Sens.,
vol.04, no. 03, pp. 177–195, 2015.
64. S. Jabari and Y. Zhang, “Very High Resolution Satellite Image Classification Using
Fuzzy Rule-Based Systems,” Algorithms, vol.6, no. 4, pp. 762–781, 2013.
65. M. K. Firozjaei, I. Daryaei, A. Sedighi, Q. Weng, and S. K. Alavipanah, “Homogeneity
Distance Classification Algorithm (Hdca): A novel Algorithm for Satellite Image
Classification,” Remote Sens., vol.11, no. 5, 2019.
66. C. Pelletier, G. I. Webb, and F. Petitjean, “Temporal Convolutional Neural Network for
the Classification of Satellite Image Time Series,” Remote Sens., vol.11, no. 5, pp. 1–25,
2019.
67. L. Zhang, Z. Chen, J. Wang and Z. Huang, "Rocket Image Classification Based on Deep
Convolutional Neural Network," 2018 10th International Conference on
Communications, Circuits and Systems (ICCCAS), Chengdu, China, 2018, pp. 383-386.
68. C. Shen, C. Zhao, M. Yu and Y. Peng, "Cloud Cover Assessment in Satellite Images Via
Deep Ordinal Classification," IGARSS 2018 - 2018 IEEE International Geoscience and
Remote Sensing Symposium, Valencia, 2018, pp. 3509-3512.
69. T. Postadjian, A. L. Bris, C. Mallet and H. Sahbi, "Superpixel Partitioning of Very High
Resolution Satellite Images for Large-Scale Classification Perspectives with Deep
Convolutional Neural Networks," IGARSS 2018 - 2018 IEEE International Geoscience
and Remote Sensing Symposium, Valencia, 2018, pp. 1328-1331.
70. Q. Liu, R. Hang, H. Song and Z. Li, "Learning Multiscale Deep Features for High-
Resolution Satellite Image Scene Classification," in IEEE Transactions on Geoscience
and Remote Sensing, vol. 56, no. 1, pp. 117- 126, Jan. 2018.
71. T. Postadjian, A. L. Bris, H. Sahbi and C. Malle, "Domain Adaptation for Large Scale
Classification of Very High Resolution Satellite Images with Deep Convolutional Neural
Networks," IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing
Symposium, Valencia, 2018, pp. 3623-3626.
72. P. Helber, B. Bischke, A. Dengel and D. Borth, "Introducing Eurosat: A Novel Dataset
and Deep Learning Benchmark for Land Use and Land Cover Classification," IGARSS
2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia,
2018, pp. 204-207.
73. K. Cai and H. Wang, "Cloud classification of satellite image based on convolutional
neural networks," 2017 8th IEEE International Conference on Software Engineering and
Service Science (ICSESS), Beijing, 2017, pp. 874-877.
74. A. O. B. Özdemir, B. E. Gedik and C. Y. Y. Çetin, "Hyperspectral classification using
stacked autoencoders with deep learning," 2014 6th Workshop on Hyperspectral Image
and Signal Processing: Evolution in Remote Sensing (WHISPERS), Lausanne, 2014, pp.
1-4.
75. M. Lavreniuk, N. Kussul and A. Novikov, "Deep Learning Crop Classification Approach
Based on Sparse Coding of Time Series of Satellite Data," IGARSS 2018 - 2018 IEEE
International Geoscience and Remote Sensing Symposium, Valencia, 2018, pp. 4812-
4815.
76. L. Bragilevsky and I. V. Bajić, "Deep learning for Amazon satellite image analysis,"
2017 IEEE Pacific Rim Conference on Communications, Computers and Signal
Processing (PACRIM), Victoria, BC, 2017, pp. 1-5.
77. R. F. Berriel, A. T. Lopes, A. F. de Souza and T. OliveiraSantos, "Deep Learning-Based
Large-Scale Automatic Satellite Crosswalk Classification," in IEEE Geoscience and
Remote Sensing Letters, vol. 14, no. 9, pp. 1513- 1517, Sept. 2017.
78. Unnikrishnan, A.; Sowmya, V.; Soman, K.P. Deep learning architectures for land cover
classification using red and near-infrared satellite images. Multimed. Tools
Appl. 2019, 78, 18379–18394. [Google Scholar] [CrossRef]
79. Jiang, J.; Liu, F.; Xu, Y.; Huang, H. Multi-spectral RGB-NIR image classification using
double-channel CNN. IEEE Access 2019, 7, 20607–20613. [Google Scholar] [CrossRef]
80. Yang, N.; Tang, H.; Yue, J.; Yang, X.; Xu, Z. Accelerating the Training Process of
Convolutional Neural Networks for Image Classification by Dropping Training Samples
Out. IEEE Access 2020, 8, 142393–142403. [Google Scholar] [CrossRef]
81. Weng, L.; Qian, M.; Xia, M.; Xu, Y.; Li, C. Land Use/Land Cover Recognition in Arid
Zone Using A Multi-dimensional Multi-grained Residual Forest. Comput.
Geosci. 2020, 144, 104557. [Google Scholar] [CrossRef]
82. Zhong, Y.; Fei, F.; Liu, Y.; Zhao, B.; Jiao, H.; Zhang, L. SatCNN: Satellite image dataset
classification using agile convolutional neural networks. Remote Sens. Lett. 2017, 8,
136–145. [Google Scholar] [CrossRef]
83. Yamashkin, S.A.; Yamashkin, A.A.; Zanozin, V.V.; Radovanovic, M.M.; Barmin, A.N.
Improving the Efficiency of Deep Learning Methods in Remote Sensing Data Analysis:
Geosystem Approach. IEEE Access 2020, 8, 179516–179529. [Google Scholar]
[CrossRef]
84. Syrris, V.; Pesek, O.; Soille, P. SatImNet: Structured and Harmonised Training Data for
Enhanced Satellite Imagery Classification. Remote Sens. 2020, 12, 3358. [Google
Scholar] [CrossRef]
85. Hu, J.; Tuo, H.; Wang, C.; Zhong, H.; Pan, H.; Jing, Z. Unsupervised satellite image
classification based on partial transfer learning. Aerosp. Syst. 2019, 3, 21–28.
86. Vapnik, Vladimir, The nature of statistical learning theory (Springer science & business
media, 2013).
87. Cortes, Corinna and Vapnik, Vladimir, "Support-vector networks", Machine learning 20,
3 (1995), pp. 273--297.
88. Cristianini, Nello, Shawe-Taylor, John et al., An introduction to support vector machines
and other kernel-based learning methods (Cambridge university press, 2000).
Appendix A
MATLAB
A.1 Introduction
The basic building block of MATLAB is the matrix. The fundamental data type is the array;
vectors, scalars, real matrices, and complex matrices are handled as specific classes of this
basic data type. The built-in functions are optimized for vector operations, and no dimension
statements are required for vectors or arrays.
A.2.1 MATLAB Windows
MATLAB works with several windows: the Command window, Workspace window, Current
Directory window, Command History window, Editor window, Graphics window, and Online
Help window.
The Command window is where the user types MATLAB commands and expressions at the
prompt (>>) and where the output of those commands is displayed. It opens when the
application is launched. All commands, including user-written programs, are typed in this
window at the MATLAB prompt for execution.
MATLAB defines the workspace as the set of variables that the user creates in a work
session. The workspace browser shows these variables and some information about them.
Double-clicking a variable in the workspace browser launches the Array Editor, which can be
used to view and edit its contents.
The Current Directory tab shows the contents of the current directory, whose path is
shown in the Current Directory window. For example, under the Windows operating system the
path might be C:\MATLAB\Work, indicating that the directory "Work" is a subdirectory of
the main directory "MATLAB", which is installed on drive C. Clicking the arrow in the
Current Directory window shows a list of recently used paths. MATLAB uses a search path to
find M-files and other MATLAB-related files; any file run in MATLAB must reside in the
current directory or in a directory that is on the search path.
The Command History window contains a record of the commands a user has entered in
the Command window, including both the current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the Command History
window by right-clicking a command or sequence of commands. This is a useful feature when
experimenting with various commands in a work session.
A.2.1.5 Editor Window
The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in
the desktop. In this window one can write, edit, create and save programs in files called M-files.
MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.
The output of all graphics commands typed in the Command window is displayed in the
Graphics window.
MATLAB provides online help for all its built-in functions and programming-language
constructs. The principal way to get help online is to use the MATLAB Help Browser, opened
as a separate window either by clicking the question-mark symbol (?) on the desktop toolbar
or by typing helpbrowser at the prompt in the Command window. The Help Browser is a web
browser integrated into the MATLAB desktop that displays Hypertext Markup Language
(HTML) documents. It consists of two panes: the Help Navigator pane, used to find
information, and the display pane, used to view it. Self-explanatory tabs in the Navigator
pane are used to perform a search.
MATLAB has two types of files for storing information: M-files and MAT-files.
A.3.1 M-Files
These are standard ASCII text files with the extension .m. Users can create their own
scripts and functions as M-files, which are text files containing MATLAB code. The MATLAB
editor or another text editor is used to create a file containing the same statements that
would be typed at the MATLAB command line, saved under a name that ends in .m. There are two
types of M-files:
1. Script Files
2. Function Files
A function file is also an M-file, except that the variables in a function file are all
local. This type of file begins with a function definition line.
A.3.2 MAT-Files
These are binary data files with the .mat extension that are created by MATLAB when data
is saved. The data is written in a special format that only MATLAB can read, and it is
loaded into MATLAB with the 'load' command.
The development environment is the set of tools and facilities that help you use MATLAB
functions and files. Many of these tools are graphical user interfaces. It includes the
MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers
for viewing help, the workspace, files, and the search path.
The MATLAB language is a high-level matrix/array language with control-flow statements,
functions, data structures, input/output, and object-oriented programming features. It
allows both "programming in the small", to rapidly create quick throw-away programs, and
"programming in the large", to create complete, large, and complex application programs.
A.4.4 Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as
annotating and printing these graphs. It includes high-level functions for two-dimensional and
three-dimensional data visualization, image processing, animation, and presentation graphics. It
also includes low-level functions that allow you to fully customize the appearance of graphics as
well as to build complete graphical user interfaces on your MATLAB applications.
A.4.5 The MATLAB Application Program Interface (API):
This is a library that allows you to write C and FORTRAN programs that interact with
MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling
MATLAB as a computational engine, and for reading and writing MAT-files.
hold on holds the current plot and all axis properties so that subsequent graphing
commands add to the existing graph
hold off sets the NextPlot property of the current axes to "replace"
d = find(x>100) returns the indices of the vector x that are greater than 100
for repeats statements a specific number of times; the general form of a FOR
statement is:
for n = 1:cc/c
magn(n,1) = nanmean(a((n-1)*c+1:n*c,1));
end
Inf is the arithmetic representation for positive infinity. An infinity is also produced
by operations like dividing by zero, e.g. 1.0/0.0, or from overflow, e.g. exp(1000).
save saves all the matrices defined in the current session into the file matlab.mat
save filename x y z saves the matrices x, y and z into the file titled filename.mat
save filename x y z -ascii saves the matrices x, y and z into the file titled filename.dat
load filename loads the contents of filename into the current workspace; the file can
be a binary (.mat) file or an ASCII file
load filename.dat loads the contents of filename.dat into the variable filename
Plotting:
Kinds of plots:
plot(y) creates a plot of the values of y versus the indices of the elements in the y-vector
polar(theta,r) creates a polar plot of the vectors r & theta where theta is in radians
bar(x) creates a bar graph of the vector x. (Note also the command stairs(x))
bar(x, y) creates a bar graph of the elements of the vector y, locating the bars at the
positions specified in the vector x
Plot description:
text(x,y,'text','sc') writes 'text' at point x,y assuming the lower left corner is (0,0)
and the upper right corner is (1,1)
axis([xmin xmax ymin ymax]) sets scaling for the x- and y-axes on the current plot
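Putting a few of these commands together, a simple annotated plot might be produced as follows (a sketch; the data are arbitrary):

```matlab
x = 0:0.1:2*pi;
y = sin(x);
plot(x, y)                          % line plot of y versus x
hold on                             % keep the plot for further commands
bar(x, 0.2*cos(x))                  % add a bar graph on the same axes
axis([0 2*pi -1.2 1.2])             % set x- and y-axis scaling
text(0.1, 0.9, 'sine curve', 'sc')  % annotate; lower left corner is (0,0)
hold off
```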
Scalar Calculations:
+ Addition
- Subtraction
* Multiplication
/ Division
^ Exponentiation
Array or element-by-element operations are executed when the operator is preceded by a '.'
(Period):
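For instance, `*` performs matrix multiplication while `.*` multiplies element by element:

```matlab
A = [1 2; 3 4];
B = [5 6; 7 8];
C = A * B    % matrix product: C(1,1) = 1*5 + 2*7 = 19
D = A .* B   % element-by-element product: D(1,1) = 1*5 = 5
E = A .^ 2   % squares each element of A
```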
The MATLAB desktop is the main MATLAB application window. The desktop contains five sub-
windows: the command window, the workspace browser, the current directory window, the
command history window, and one or more figure windows, which are shown only when the
user displays a graphic.
The command window is where the user types MATLAB commands and expressions at
the prompt (>>) and where the output of those commands is displayed. MATLAB defines the
workspace as the set of variables that the user creates in a work session.
The workspace browser shows these variables and some information about them. Double-
clicking on a variable in the workspace browser launches the Array Editor, which can be used to
obtain information and in some instances edit certain properties of the variable.
The Current Directory tab above the workspace tab shows the contents of the current
directory, whose path is shown in the current directory window. For example, in the Windows
operating system the path might be as follows: C:\MATLAB\Work, indicating that directory
“work” is a subdirectory of the main directory “MATLAB”, which is installed in
drive C. Clicking on the arrow in the current directory window shows a list of recently used
paths. Clicking on the button to the right of the window allows the user to change the current
directory.
MATLAB uses a search path to find M-files and other MATLAB-related files, which are
organized in directories in the computer file system. Any file run in MATLAB must reside in the
current directory or in a directory that is on the search path. By default, the files supplied with
MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see
which directories are on the search path, or to add or modify the search path, is to select Set Path
from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add
any commonly used directories to the search path to avoid repeatedly having to change the
current directory.
The Command History Window contains a record of the commands a user has entered in
the command window, including both current and previous MATLAB sessions. Previously
entered MATLAB commands can be selected and re-executed from the command history
window by right clicking on a command or sequence of commands.
This action launches a menu from which to select various options in addition to executing
the commands. This is a useful feature when experimenting with various commands in a work
session.
The MATLAB editor is both a text editor specialized for creating M-files and a graphical
MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in
the desktop. M-files are denoted by the extension .m, as in pixelup.m.
The MATLAB editor window has numerous pull-down menus for tasks such as saving,
viewing, and debugging files. Because it performs some simple checks and also uses color to
differentiate between various elements of code, this text editor is recommended as the tool of
choice for writing and editing M-functions.
To open the editor, type edit at the prompt. Typing edit filename at the prompt opens the
M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the
current directory, or in a directory in the search path.
The principal way to get help online is to use the MATLAB help browser, opened as a
separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by
typing helpbrowser at the prompt in the command window. The Help Browser is a web browser
integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML)
documents. The Help Browser consists of two panes: the help navigator pane, used to find
information, and the display pane, used to view the information. Self-explanatory tabs other than