
UAV-Multispectral Sensed Data Band Co-Registration Framework
Jocival D. Dias Junior1 , André R. Backes1 , Maurı́cio C. Escarpinati1 , Leandro H. F. Pinto Silva1 ,
Breno C. S. Costa1 , Marcelo H. F. Avelar1
1 Faculty of Computing, Federal University of Uberlândia, Uberlândia/MG, Brazil
{jocival.dias, backes, mauricio, leandro.furtado, breno.costa, avelar}@ufu.br

Abstract—Precision agriculture (PA) has greatly benefited from new technologies over the years. The use of multispectral and hyperspectral sensors coupled to Unmanned Aerial Vehicles (UAVs) has enabled farmers to monitor crops, improve the use of resources, and reduce costs. Despite being widely used, multispectral images present a natural misalignment among the various spectra due to the use of different sensors. The variation of the analyzed spectrum also leads to a loss of shared characteristics among the bands, which hinders feature detection between them and makes the alignment process complex. In this paper, we propose a new framework for the band co-registration process based on the premise that the natural misalignment is an attribute of the camera, so it does not change during the acquisition process. The results were compared with the ground-truth generated by a specialist and with other methods present in the literature. The proposed framework had an average back-projection (BP) error of 0.425 pixels, a result 335% better than the evaluated frameworks.

Keywords—Unmanned Aerial Vehicle, Precision Agriculture, Band Co-registration, Multispectral Registration

I. INTRODUCTION

The main discussion about the future of agriculture shows that food production must increase dramatically - probably doubling by 2050 - while using only 5% more land, to sustain the growth of the world's population [1]. However, with environmental changes, this increase in food production must be achieved more smartly, that is, without depleting natural resources or accelerating climate change. In this context, Precision Agriculture (PA) has proved to be a solution for the sustainable development of agribusiness [2].

Remote sensing techniques, mainly with the use of Unmanned Aerial Vehicles (UAVs) for the acquisition process, have become increasingly present in Precision Agriculture activities [3], [4]. To make the most of PA technologies (e.g., semantic segmentation [5], weed identification [6] and vegetation indices), it is necessary to acquire multispectral or hyperspectral images of the area, which on most farms are obtained by UAVs.

For a multispectral or hyperspectral image to be useful for agricultural applications, its bands must first be aligned. This process is known as band co-registration, and it presents a series of difficulties.

The main difficulty in performing band co-registration is that each element present in the image is represented differently in each spectrum due to its chemical and physical characteristics. This hampers the alignment process because much information is lost between the spectra [7].

Another factor that hinders the band co-registration process is that the vast majority of multispectral and hyperspectral cameras use different sensors to obtain the spectra. This physical displacement among the sensors causes misalignment among the bands. The misalignment changes from flight to flight because, during the flight or landing process, the camera can suffer shocks that, over time, cause different distortions between the sensors [7].

For PA's success, the band co-registration process is required. Therefore, the objective of this work is the development of a framework for the band co-registration process in multispectral images obtained by UAVs.

Most of the works published on band co-registration in images obtained by UAVs start from the principle that it is necessary to obtain a different transformation for each image to correct the misalignment between the sensors, i.e., between the bands [8], [7]. Going against this current, the approach presented in this work searches the dataset for the best group of images to align each band and then builds an alignment scheme, i.e., the order in which the bands will be aligned.

To evaluate the performance of the proposed approach, it was compared with the frameworks presented in the works of [7] and [8], as these have demonstrated good results for automatically aligning bands in multispectral images obtained by UAV. As inputs to the frameworks, the following keypoint detection and extraction methods were used: Binary Robust Invariant Scalable Keypoints (BRISK) [9], Features from Accelerated Segment Test (FAST) [10], Maximally Stable Extremal Regions (MSER) [11], Harris-Stephens Features (HSF) [12], Speeded-Up Robust Features (SURF) [13], and KAZE Features (KAZE) [14]. Each method was tested on two different crop datasets (Soybean and Cotton).

The remainder of this paper is organized as follows. Section II presents the related work on band co-registration of multispectral images obtained by UAVs. Section III describes the proposed framework. Section IV presents the datasets used in this work and the evaluation metric used to conduct the tests. The results obtained are presented in Section V. Finally, Section VI presents the conclusions.

The authors gratefully acknowledge the financial support of CNPq (National Council for Scientific and Technological Development, Brazil) (Grant #301715/2018-1). This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001.

978-1-7281-7539-3/20/$31.00 ©2020 IEEE
II. RELATED WORK

According to [15], image registration is the process of geometrically aligning one image onto another image of the same scene taken from different points of view or by different sensors. It is one of the main image processing techniques and is important for diverse applications.

Several works have been proposed to perform band co-registration in multispectral or hyperspectral images [7], [8], [16], [17], [18], [19]. This paper presents some recent works that have shown good results in the registration process.

In [7], several local feature descriptors were evaluated to perform hyperspectral registration of images in spectrally complex environments, more specifically in swamps and wetlands. HSF, Min Eigen Features (MEF), Scale Invariant Feature Transform (SIFT), SURF, BRISK and FAST were used to register the images. The authors evaluated the results by performing the registration in spectral order (wavelength) and in the order in which the spectra were obtained. They concluded that the best order, among those evaluated, for band co-registration was the spectral order. Among the techniques evaluated, SURF presented the best Pearson's Correlation Coefficient (PCC) and Root Mean Squared Error (RMSE) [7].

In [8], the authors proposed an automatic framework for the band co-registration of multispectral images obtained by UAVs. Unlike other related works, this framework allowed intermediate targets for registration. It examines the bands, taken two by two, and defines a scheme that guarantees the highest average number of keypoints between the bands. To obtain the control points, the authors used the SIFT, BRISK, SURF and Oriented FAST and Rotated BRIEF (ORB) techniques. The framework was evaluated by the number of control points obtained, the computational time, and the Back Projection Error (BPE). Among the evaluated algorithms, SIFT obtained the best results, both in the number of corresponding control points and in the lowest BPE [8].
III. PROPOSED APPROACH

The proposed UAV-multispectral sensed data band co-registration framework was inspired by the framework proposed by [8] and aims to align the bands of multispectral images obtained by UAVs using feature-based methods. Unlike the various band co-registration approaches that perform a different alignment process for each scene (i.e., set of bands), the proposed method seeks the best transformation set among all the dataset images and later aligns all the scenes using this same transformation set.

This approach receives a multispectral image dataset as input and returns, as output, a scheme, i.e., the order in which bands should be aligned and the transformation to be used in each alignment. Unlike the frameworks discussed in Section II, which attempt a separate alignment for each scene, the chance of failure is drastically reduced because the framework always searches the dataset for the best set of transformations to align the bands. Each step of the proposed framework is explained in the following sections.

A. Keypoint extraction and graph construction

For each multispectral scene, the bands are combined two at a time. For each combination, the number of keypoints remaining after outlier elimination is extracted by each method given as input to the framework. The number of keypoints, the image identifier, the feature extraction method that generated these keypoints, and the band combination are stored in a tuple.

Given the set of tuples extracted previously, for each band combination the tuple with the largest number of keypoints is selected. At the end of this step, a complete undirected weighted graph G is constructed, with the bands as nodes and the maximum tuples, i.e., the tuples that obtained the most keypoints, as edges. Each edge is labeled with the image, the technique that generated that edge, and the number of keypoints. The edge weights are the number of keypoints obtained.
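As a concrete illustration, the tuple selection and graph construction above can be sketched as follows. This is a minimal sketch under assumptions: the image identifiers, methods, and keypoint counts are invented, standing in for the counts a real detector (e.g., KAZE or SURF) would produce after outlier elimination.

```python
from itertools import combinations

# Hypothetical tuples: (image_id, method, band_a, band_b) -> number of
# keypoints surviving outlier elimination. These values are invented.
counts = {
    ("img_01", "KAZE", "blue", "green"): 410,
    ("img_07", "SURF", "blue", "green"): 280,
    ("img_03", "KAZE", "blue", "red"): 150,
    ("img_09", "SURF", "blue", "red"): 190,
    ("img_11", "KAZE", "green", "red"): 520,
    ("img_02", "FAST", "green", "red"): 330,
}

def build_graph(counts, bands):
    """For every band pair, keep the (image, method) tuple with the most
    keypoints; the result is a complete undirected weighted graph with the
    bands as nodes and the winning tuples as labeled edges."""
    graph = {}
    for a, b in combinations(bands, 2):
        candidates = [(n, img, method)
                      for (img, method, x, y), n in counts.items()
                      if {x, y} == {a, b}]
        n, img, method = max(candidates)  # largest keypoint count wins
        graph[(a, b)] = {"weight": n, "image": img, "method": method}
    return graph

graph = build_graph(counts, ["blue", "green", "red"])
```

Each edge carries exactly the label described above (image, method, number of keypoints), so the graph is ready for the spanning-tree step of the next section.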
B. Scheme construction

With the graph G obtained in the previous step, graph algorithms are applied to transform the complete graph into a scheme representing the best order in which to perform the multispectral alignment. Starting from the graph G, Kruskal's algorithm [20] is applied to construct a Maximum Spanning Tree (MST). To find the target channel for the alignments, the weights between the nodes are replaced by 1 and the Floyd-Warshall all-pairs shortest-path algorithm [21] is used. The node with the smallest sum of distances from itself to all the other nodes is selected as the target channel for the registration scheme. Finally, each undirected edge (a, b) is converted to a directed edge (a, b) if b is closer to the target channel, and to (b, a) otherwise.

C. Construction of transformations and dataset alignment

At the end of the previous process, we have a directed graph G_scheme that represents the best order in which to perform the alignment process. Note that this method allows intermediate alignments, which guarantee a better alignment among the channels. Each edge of this graph is associated with a method and an image present in the dataset whose combination yields the largest number of keypoints for the two bands the edge connects. For each edge present in G_scheme, the transformation function that aligns one node with the other is extracted.

Note that once all the transformations have been extracted, the alignment process among the bands is performed in an optimal way, that is, with the best transformation, considering the number of keypoints obtained, that could be estimated from the dataset. With the transformations obtained, all the scenes present in the dataset are aligned using this same set of transformations.
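The scheme-construction steps above (maximum spanning tree, unit-weight shortest paths to pick the target channel, edge orientation) can be sketched as follows. The edge weights are hypothetical keypoint counts, not values from the paper.

```python
from collections import deque

def maximum_spanning_tree(nodes, edges):
    """Kruskal's algorithm on descending weights: greedily keep the
    heaviest edge that does not close a cycle (tracked with union-find)."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    tree = []
    for (a, b), _ in sorted(edges.items(), key=lambda e: -e[1]):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            tree.append((a, b))
    return tree

def pick_target(nodes, tree):
    """Replace weights by 1 and run Floyd-Warshall; the node with the
    smallest sum of distances to all others is the registration target."""
    INF = float("inf")
    d = {(a, b): 0 if a == b else INF for a in nodes for b in nodes}
    for a, b in tree:
        d[a, b] = d[b, a] = 1
    for k in nodes:
        for i in nodes:
            for j in nodes:
                d[i, j] = min(d[i, j], d[i, k] + d[k, j])
    return min(nodes, key=lambda n: sum(d[n, m] for m in nodes))

def orient(tree, target, nodes):
    """Direct each edge (a, b) toward whichever endpoint is closer to the
    target, so every band has a registration path to the target channel."""
    adj = {n: [] for n in nodes}
    for a, b in tree:
        adj[a].append(b)
        adj[b].append(a)
    hops, queue = {target: 0}, deque([target])
    while queue:                       # BFS hop counts over the tree
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return [(a, b) if hops[b] < hops[a] else (b, a) for a, b in tree]

# Invented keypoint counts for the five MicaSense bands.
bands = ["blue", "green", "red", "rededge", "nir"]
weights = {("blue", "green"): 410, ("blue", "red"): 150,
           ("blue", "rededge"): 120, ("blue", "nir"): 90,
           ("green", "red"): 520, ("green", "rededge"): 300,
           ("green", "nir"): 110, ("red", "rededge"): 140,
           ("red", "nir"): 160, ("rededge", "nir"): 290}
tree = maximum_spanning_tree(bands, weights)
scheme = orient(tree, pick_target(bands, tree), bands)
```

With these sample weights the target channel comes out as green, and every other band is given a directed path (possibly through intermediate bands) toward it, which is exactly the G_scheme structure described above.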
IV. EXPERIMENTS

This section presents a brief description of the datasets and the evaluation metric used in the experiments.

A. Datasets

In these experiments, two datasets were used to evaluate the performance of the proposal, both containing images of 1280 × 960 pixels with an average of 80% overlap between images. The spectra present in the images are blue (475 nm center, 20 nm bandwidth), green (560 nm center, 20 nm bandwidth), red (668 nm center, 10 nm bandwidth), red edge (rededge) (717 nm center, 10 nm bandwidth) and near-IR (nir) (840 nm center, 40 nm bandwidth). Images were obtained on a single flight, without any kind of pre-processing, using a MicaSense Red-Edge camera coupled to a Micro UAV SX2 (Sensix Innovations in Drone Ltda, Uberlândia, MG, Brazil) at an average height of 100 meters and an average speed of 20 m/s. At this altitude, the ground sample distance (GSD) is 6.8 cm/pixel.

The first dataset was obtained from a soybean plantation located at the decimal coordinates (−17.877308292165985, −51.08216452139867). This dataset contains 1080 images (216 scenes and 5 channels). The second dataset was obtained from a cotton plantation located at the decimal coordinates (−17.820275501545474, −50.32411830846922) and contains 830 images (166 scenes and 5 channels). Figure 1 presents a scene from each dataset, containing all the channels present.

To measure the performance of the alignment process between the bands present in the datasets, each dataset was sent to a specialist, who manually marked 12 points on the green band and subsequently marked the equivalent points on the other bands (red, blue, near-IR and red-edge).

B. Evaluation Metric

In this work, the Back Projection Error (BPE) [22] was used to evaluate the compared methods. This metric represents how well the locations of the target image's control points align with the registered image's control points. Given Xi and Xj as the same control points defined by the specialist on, respectively, the target image (i) and the moving image (j), T the transformation matrix, and d the Euclidean distance function, the BPE is defined as in Equation 1. A smaller BPE indicates better image registration performance.

    BPE(I, J) = Σ_{Xi, Xj} d²(Xi, T·Xj)                    (1)
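Equation 1 translates directly into code. Below is a sketch with NumPy, assuming the control points are given as N×2 arrays and T is a 3×3 projective transformation applied in homogeneous coordinates; the point values are invented for illustration.

```python
import numpy as np

def back_projection_error(target_pts, moving_pts, T):
    """Equation 1: sum of squared Euclidean distances between the target
    control points X_i and the moving control points X_j mapped through
    the transformation matrix T."""
    ones = np.ones((len(moving_pts), 1))
    mapped = np.hstack([moving_pts, ones]) @ T.T   # homogeneous transform
    mapped = mapped[:, :2] / mapped[:, 2:3]        # back to pixel coords
    return float(np.sum((target_pts - mapped) ** 2))

# A pure 1-pixel horizontal shift: the identity leaves a residual of
# 3 points x (1 px)^2, while the correcting translation drives BPE to 0.
target = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
moving = target + np.array([1.0, 0.0])
T_shift = np.array([[1.0, 0.0, -1.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
```

Since the specialist marks 12 points per image, dividing the sum by the number of points yields a per-point squared error; the per-pixel averages reported in Section V are presumably aggregated over all images of a dataset.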
Fig. 1. Example of an image scene containing all channels (Blue, Green, Red, near-IR and red-edge, respectively) of the soybean plantation dataset on the first line and all channels (Blue, Green, Red, near-IR and red-edge, respectively) of the cotton plantation dataset on the second.

V. RESULTS

This section presents the results obtained in the experiments conducted in this work.

A. Natural Misalignment

Multispectral cameras usually use different sensors for the acquisition of each channel. This physical displacement between the sensors causes a misalignment between the resulting images. Tables I and II show the average misalignment between the ground-truth and the images present in each dataset. It is possible to observe that, for the cotton dataset, the misalignment is slightly higher than for soybean. This difference in misalignment between the datasets can be explained by variations between the sensors of the cameras used during the acquisition and also by variations in the acquisition altitude.

TABLE I
AVERAGE MISALIGNMENT, IN PIXELS, BETWEEN THE SENSORS PRESENT IN THE SOYBEAN DATASET.

           Blue    Green   Red     Nir     Rededge
  Blue     -       5.18    15.75   15.07   12.14
  Green    5.18    -       15.09   12.33   4.02
  Red      15.75   15.09   -       29.25   14.79
  Nir      15.07   12.33   29.25   -       16.17
  Rededge  12.14   4.02    14.79   16.17   -

TABLE II
AVERAGE MISALIGNMENT, IN PIXELS, BETWEEN THE SENSORS PRESENT IN THE COTTON DATASET.

           Blue    Green   Red     Nir     Rededge
  Blue     -       28.30   12.11   33.28   8.48
  Green    28.30   -       21.44   24.01   35.66
  Red      12.11   21.44   -       14.94   21.30
  Nir      33.28   24.01   14.94   -       39.37
  Rededge  8.48    35.66   21.30   39.37   -

B. Soybean dataset

Each feature extraction method was used as an input for the frameworks. Figures 2 and 3 show the resulting schemes of, respectively, our approach and the approach proposed by [8].

Note that, despite generating the same schemes, the proposed approach obtained a greater number of keypoints per band than the framework proposed by [8]. It is also important to note that each alignment is made using a different image, obeying the premise of maximizing the number of keypoints for the best band-to-band alignment. The scheme generated by [7] follows the spectral order, i.e., blue, green, red, rededge and nir.

Table III shows the accuracy of each method, i.e., the number of alignments the method was able to perform. Note that, since the soybean dataset has 1080 images, each method should have performed a total of 864 alignments (for 5 bands, 4 alignments are required). It is also important to note that the framework proposed by [7] deals individually with each feature extraction method.

TABLE III
THE ACCURACY OBTAINED BY THE FRAMEWORKS IN THE ALIGNMENT OF THE SOYBEAN DATASET.

  Method            Failures  Accuracy
  Our approach      0         100.00%
  paper [8]         183       78.82%
  paper [7] KAZE    352       59.26%
  paper [7] HSF     673       22.11%
  paper [7] SURF    677       21.64%
  paper [7] MSER    802       7.18%
  paper [7] BRISK   830       3.94%
  paper [7] FAST    842       2.55%
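Each alignment counted above reuses one projective transformation per scheme edge. The paper does not name the estimator it uses, so as an illustrative stand-in, a homography can be fit to matched control points with the direct linear transform (DLT); outlier rejection, which the framework performs beforehand, is omitted here.

```python
import numpy as np

def fit_homography(src, dst):
    """Least-squares DLT: stack two rows per correspondence into a
    2n x 9 system A h = 0 and take the right singular vector with the
    smallest singular value as the flattened 3x3 homography."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # fix the scale ambiguity

def apply_homography(H, pts):
    """Map N x 2 points through H in homogeneous coordinates."""
    mapped = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With at least four non-collinear correspondences the homography is recovered up to numerical precision; once estimated for an edge of the scheme, the same matrix can be reused to warp that band in every scene of the dataset, which is the premise of the global-transformation approach.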

Fig. 2. Scheme generated by the proposed approach for the soybean dataset. The caption of each edge consists of: Image ID - Method - Number of Keypoints.

Fig. 3. Scheme generated by the approach proposed in [8] for the soybean dataset. The caption of each edge consists of: Method - Number of Keypoints.

As can be seen in Table III, by using a global transformation for all images, the proposed approach was able to perform all alignments. The framework presented by [8] and the approach proposed by [7] with the KAZE method were the only ones that managed to achieve more than 50% accuracy in the alignment. All methods that failed to achieve a minimum of 50% alignment were considered unsuitable and discarded from the comparison of alignment quality.

Table IV shows the BPE obtained by each framework. Note that our approach obtained an average BPE close to 0 (0.146 pixels). The approach proposed by [8] also obtained a good result when compared to the framework proposed by [7] in conjunction with the KAZE method.

TABLE IV
REGISTRATION PERFORMANCE EVALUATION ON THE SOYBEAN DATASET.

  Method            BPE
  Our approach      0.14
  paper [8]         1.33
  paper [7] KAZE    40.50

Regardless of the framework used, for the soybean dataset the best method for extracting control points for band-to-band alignment in aerial images was the KAZE features proposed by [14].

C. Cotton dataset

The evaluation methodology was the same used for the soybean dataset, i.e., each feature extraction method was used as an input for the frameworks. Figures 4 and 5 show the resulting schemes of, respectively, our approach and the approach proposed by [8].

Fig. 4. Scheme generated by our approach for the cotton dataset. The caption of each edge consists of: Image ID - Method - Number of Keypoints.

As in the soybean dataset, it is possible to see that the number of keypoints obtained by our approach to align the bands is higher when compared to the approach proposed by [8]. Again, note that KAZE was the feature extraction method that obtained the most control points.

Subsequently, the accuracy of each method was assessed for this dataset and the results are shown in Table V. Note that, unlike the soybean dataset, for the cotton dataset the methods obtained greater accuracy.
Our approach, using a global transformation, managed to align all the images. The approach proposed by [8] also achieved good results, failing in only 1 of the 664 possible alignments. The framework proposed by [7] was also able to achieve accuracy greater than 50% with three feature extraction methods: MSER, SURF and KAZE.

TABLE V
THE ACCURACY OBTAINED BY THE FRAMEWORKS IN THE ALIGNMENT OF THE COTTON DATASET.

  Method            Failures  Accuracy
  Our approach      0         100.00%
  paper [8]         1         99.85%
  paper [7] KAZE    165       75.15%
  paper [7] SURF    200       69.88%
  paper [7] MSER    240       63.86%
  paper [7] HSF     382       42.47%
  paper [7] FAST    390       41.27%
  paper [7] BRISK   431       35.09%

Table VI shows the BPE obtained by each framework. Our approach obtained an average BPE of 0.71 pixels. Despite being a result inferior to that obtained in the soybean dataset, our approach still obtained the best result when compared to those proposed by [8] and [7]. The framework proposed by [7] obtained a low performance, regardless of the feature extraction method.

TABLE VI
REGISTRATION PERFORMANCE EVALUATION ON THE COTTON DATASET.

  Method            BPE
  Our approach      0.71
  paper [8]         1.54
  paper [7] KAZE    31.19
  paper [7] SURF    84.97
  paper [7] MSER    132.60

As with the soybean dataset, the best method for extracting control points for band-to-band alignment in aerial images was the KAZE features proposed by [14].

D. Execution Time of the Framework

Subsequently, the execution time of the proposed approach was compared with the other evaluated frameworks. To accomplish that, each framework was executed twice in a single thread and the average execution time was computed. As previously stated, our framework computes the best multispectral band co-registration scheme for each dataset. Table VII presents the average running time of the frameworks, in seconds, for both the Soybean and Cotton datasets.

TABLE VII
AVERAGE EXECUTION TIME OBTAINED BY THE FRAMEWORKS ON THE DATASETS.

  Dataset   Framework        Execution Time (s)
  Soybean   paper [7] KAZE   1235
            Our Approach     8808
            paper [8]        10162
  Cotton    paper [7] KAZE   2145
            Our Approach     16339
            paper [8]        18550

Fig. 5. Scheme generated by the approach proposed in [8] for the cotton dataset. The caption of each edge consists of: Method - Number of Keypoints.

It is possible to notice that the execution time is highly dependent on the processed data: the soybean dataset, despite having more images, has a lower execution time. This is mainly due to the number of features that each method extracted from the images; the greater the number of features, the longer the time for feature matching and transformation estimation.

The approach proposed by [7] obtained a lower execution time because, in addition to evaluating only one technique (KAZE), it does not perform any analysis of the best way to perform the alignment. The approach proposed by [8] had a worse result than the one proposed in this work because, after evaluating the dataset for the scheme construction, it still estimates the transformation function for each image, which results in the worst execution time evaluated.

Despite the considerable execution time, our method can be parallelized across as many threads as needed, which can dramatically decrease the execution time.
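That parallelization is straightforward because, once the transformation set is fixed, each scene is warped independently. Below is a sketch with the standard library; `warp_scene` is a hypothetical placeholder for the real per-band resampling, and the scene identifiers and transformation labels are invented.

```python
from concurrent.futures import ThreadPoolExecutor

def warp_scene(scene_id, transforms):
    """Placeholder for the real work: apply the fixed per-band
    transformation to every band of a single scene."""
    return scene_id, {band: f"{scene_id}/{band} warped with {t}"
                      for band, t in transforms.items()}

def align_dataset(scene_ids, transforms, workers=8):
    """Scenes share the same transformation set and touch no common
    state, so they can be distributed across worker threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda s: warp_scene(s, transforms), scene_ids))

aligned = align_dataset(["scene_001", "scene_002", "scene_003"],
                        {"blue": "H_blue->green", "red": "H_red->green"},
                        workers=2)
```

Threads are adequate when the heavy lifting happens inside NumPy/OpenCV routines that release the GIL; for pure-Python warping, a ProcessPoolExecutor would be the safer choice.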
VI. CONCLUSION

A. KAZE superiority

As seen in the results for the soybean and cotton datasets, the best feature extraction method, independent of the evaluated framework, was KAZE. Unlike methods such as SIFT and SURF, KAZE detects and describes features in a non-linear scale space built with non-linear diffusion filtering. For this reason, the blurring process is locally adaptive in the image, which reduces noise while maintaining the natural boundaries of objects, thus achieving superior accuracy and localization.

It is important to note that aerial images of plantations obtained by UAVs usually present few details useful for the alignment process. Most of these details are borders, i.e., transitions from soil to plant and vice-versa, due to the difference in reflectance between soil and plants.

Given these facts, KAZE is an extremely useful detector and descriptor for dealing with edges. Its non-linear scale space makes the edges of the reflectance transitions more detectable and distinguishable, thus yielding a result superior to the other techniques.
B. Framework and experiment limitations

The main limitation of this framework is the fact that it is camera oriented. This implies that, to be executed, the informed dataset must have been obtained by the same camera and, preferably, on the same flight. This limitation comes from the basic premise of the framework, which is to align the sensors of the same camera; it cannot be used to align bands obtained from different cameras. Note that the alignment between scenes is not addressed in this work, i.e., this framework was developed to perform band co-registration only.

Another major limitation of our work is that, due to the high acquisition cost of multispectral cameras, we did not test other cameras to check whether the framework is robust to the main multispectral cameras on the market. Another important point is that all the datasets evaluated were obtained by a fixed-wing UAV. Therefore, we cannot say whether the framework is also robust for multi-rotors.

C. Final considerations

This work presented a new method to perform the band co-registration of multispectral images obtained by UAV. Unlike the other methods, which seek to perform an alignment for each scene, our approach searches the whole dataset for the best group of images to align each band and then builds an alignment scheme, i.e., the order in which the bands will be aligned. Our approach was compared to two other frameworks for band co-registration in the literature.

As presented in Section V, our approach achieved, on average, a BPE 335% better than the second-best framework, obtaining for the soybean dataset an alignment with a BPE close to zero (0.14 pixels) and for the cotton dataset a BPE of 0.71 pixels. The BPE of the cotton dataset was caused mainly by images with a lot of noise, as shown in Section V-C. This fact shows that a study on noise smoothing methods is necessary to obtain better results.

Tables III and V demonstrate the main problem of methods that try to carry out a different alignment for each band: most of the time, for some images, it is not possible to find enough keypoints to estimate a transformation function, which makes it impossible to align the image. Our approach uses a global transformation and, for this reason, manages to carry out all the alignments.

The results obtained are relevant and show that the framework presented in this work can be used to align the bands of multispectral images obtained by UAV.

ACKNOWLEDGMENT

The authors gratefully acknowledge the company Sensix (http://sensix.com.br) for providing the images used in the tests.

REFERENCES

[1] M. C. Hunter, R. G. Smith, M. E. Schipanski, L. W. Atwood, and D. A. Mortensen, "Agriculture in 2050: Recalibrating targets for sustainable intensification," BioScience, vol. 67, no. 4, pp. 386–391, 2017. [Online]. Available: https://doi.org/10.1093/biosci/bix010
[2] R. Schrijver, K. Poppe, and C. Daheim, "Precision agriculture and the future of farming in Europe," Scientific Foresight Unit (STOA), European Parliament, 2016. [Online]. Available: http://www.europarl.europa.eu/RegData/etudes/STUD/2016/581892/EPRS_STU(2016)581892_EN.pdf
[3] I. R. Souza, M. C. Escarpinati, and D. D. Abdala, "A curve completion algorithm for agricultural planning," in Proceedings of the 33rd Annual ACM Symposium on Applied Computing, 2018, pp. 284–291.
[4] G. R. Silva, M. C. Escarpinati, D. D. Abdala, and I. R. Souza, "Definition of management zones through image processing for precision agriculture," in 2017 Workshop of Computer Vision (WVC). IEEE, 2017, pp. 150–154.
[5] J. Dyson, A. Mancini, E. Frontoni, and P. Zingaretti, "Deep learning for soil and crop segmentation from remotely sensed data," Remote Sensing, vol. 11, no. 16, p. 1859, 2019.
[6] J. Tang, D. Wang, Z. Zhang, L. He, J. Xin, and Y. Xu, "Weed identification based on k-means feature learning combined with convolutional neural network," Computers and Electronics in Agriculture, vol. 135, pp. 63–70, 2017.
[7] B. P. Banerjee, S. A. Raval, and P. J. Cullen, "Alignment of UAV-hyperspectral bands using keypoint descriptors in a spectrally complex environment," Remote Sensing Letters, vol. 9, no. 6, pp. 524–533, 2018. [Online]. Available: https://doi.org/10.1080/2150704X.2018.1446564
[8] R. Yasir, "Data driven multispectral image registration framework," Master's thesis, University of Saskatchewan, 2018.
[9] S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: Binary robust invariant scalable keypoints," in 2011 International Conference on Computer Vision, Nov 2011, pp. 2548–2555.
[10] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds. Berlin, Heidelberg: Springer, 2006, pp. 430–443.
[11] J. Matas, O. Chum, M. Urban, and T. Pajdla, "Robust wide-baseline stereo from maximally stable extremal regions," Image and Vision Computing, vol. 22, no. 10, pp. 761–767, 2004.
[12] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. of Fourth Alvey Vision Conference, 1988, pp. 147–151.
[13] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Computer Vision – ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds. Berlin, Heidelberg: Springer, 2006, pp. 404–417.
[14] P. F. Alcantarilla, A. Bartoli, and A. J. Davison, "KAZE features," in Computer Vision – ECCV 2012, A. Fitzgibbon, S. Lazebnik, P. Perona, Y. Sato, and C. Schmid, Eds. Berlin, Heidelberg: Springer, 2012, pp. 214–227.
[15] G. Hong and Y. Zhang, "Wavelet-based image registration technique for high-resolution remote sensing images," Computers & Geosciences, vol. 34, no. 12, pp. 1708–1720, 2008. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0098300408001544
[16] J. D. Dias Junior, A. Backes, and M. Escarpinati, "Detection of control points for UAV-multispectral sensed data registration through the combining of feature descriptors," in Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2019, pp. 444–451.
[17] J. Jhan and J. Rau, "A normalized SURF for multispectral image matching and band co-registration," International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, 2019.
[18] Y. Han, J. Choi, J. Jung, A. Chang, S. Oh, and J. Yeom, "Automated coregistration of multisensor orthophotos generated from unmanned aerial vehicle platforms," Journal of Sensors, vol. 2019, 2019.
[19] P. Ochieng' Mc'Okeyo, "Automated co-registration of multitemporal series of multispectral UAV images for crop monitoring," Ph.D. dissertation, University of Twente, 2018.
[20] J. Kruskal, "On the shortest spanning subtree of a graph and the traveling salesman problem," Proceedings of the American Mathematical Society, vol. 7, pp. 48–50, 1956.
[21] R. W. Floyd, "Algorithm 97: Shortest path," Communications of the ACM, vol. 5, no. 6, p. 345, Jun. 1962. [Online]. Available: http://doi.acm.org/10.1145/367766.368168
[22] L. Ran, Z. Liu, L. Zhang, R. Xie, and T. Li, "Multiple local autofocus back-projection algorithm for space-variant phase-error correction in synthetic aperture radar," IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 9, pp. 1241–1245, 2016.
