
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 19, 2022, Art. no. 1004405

A Deep Learning Framework for the Detection of Tropical Cyclones From Satellite Images

Aravind Nair, K. S. S. Sai Srujan, Sayali R. Kulkarni, Kshitij Alwadhi, Navya Jain, Hariprasad Kodamana, S. Sandeep, and Viju O. John

Abstract— Tropical cyclones (TCs) are the most destructive weather systems that form over the tropical oceans, with about 90 storms forming globally every year. The timely detection and tracking of TCs are important for advance warning to the affected regions. As these storms form over the open oceans far from the continents, remote sensing plays a crucial role in detecting them. Here, we present automated TC detection from satellite images based on a novel deep learning technique. We propose a multistaged deep learning framework for the detection of TCs comprising 1) a detector, a Mask region-convolutional neural network (R-CNN); 2) a wind speed filter; and 3) a classifier, a convolutional neural network (CNN). The hyperparameters of the entire pipeline are optimized for the best performance using Bayesian optimization. Results indicate that the proposed approach yields high precision (97.10%), specificity (97.59%), and accuracy (86.55%) on test images.

Index Terms— Convolutional neural network (CNN), deep learning, DenseNet, detectron, mask region-CNN (R-CNN), remote sensing, tropical cyclone (TC).

Manuscript received August 16, 2021; revised October 18, 2021; accepted November 13, 2021. Date of publication December 3, 2021; date of current version January 7, 2022. This work was supported by the Ministry of Earth Sciences, India, under Grant NCPOR/2019/PACER-POP/OS-04. (Corresponding authors: Hariprasad Kodamana; S. Sandeep.)
Aravind Nair, K. S. S. Sai Srujan, Sayali R. Kulkarni, and S. Sandeep are with the Centre for Atmospheric Sciences, IIT Delhi, New Delhi 110016, India (e-mail: sandeep.sukumaran@cas.iitd.ac.in).
Kshitij Alwadhi is with the Department of Electrical Engineering, IIT Delhi, New Delhi 110016, India.
Navya Jain is with the Department of Chemical Engineering, IIT Delhi, New Delhi 110016, India.
Hariprasad Kodamana is with the Department of Chemical Engineering, IIT Delhi, New Delhi 110016, India, and also with the School of Artificial Intelligence, IIT Delhi, New Delhi 110016, India (e-mail: kodamana@iitd.ac.in).
Viju O. John is with EUMETSAT, 64295 Darmstadt, Hessen, Germany.
Digital Object Identifier 10.1109/LGRS.2021.3131638

I. INTRODUCTION

TROPICAL cyclones (TCs) are some of the most devastating weather events that form over the warm tropical oceans and have a high socio-economic impact. On a global scale, an average of 90 TCs form annually over the tropical warm waters [1]. The trajectory of a TC is important for understanding the areas it can affect and its destructive impact [2]. Furthermore, there has been great interest in the scientific community in the shape of TC tracks [3] and their interactions with other weather systems [4], [5]. The current TC data archives suffer from uncertainty introduced by manual methods of data collection [6]. The satellite data archive spans a period of more than four decades, and it can be used to extract a long-term homogeneous TC dataset.

An early automated tracking technique uses the pattern correlation coefficient from two consecutive IR images [7], [8]. Clouds are defined as a connected set of pixel values; by applying area and temperature thresholds, clouds can be tracked through the overlap between these pixels in successive images [9]–[11]. An automatic algorithm to detect and track tropical mesoscale convective systems in time from infrared image series through a 3-D segmentation was proposed by Fiolleau and Roca [12]. Piñeros et al. [13] proposed an approach to detecting TC genesis from remotely sensed IR image data. Related works based on fluxes of the gradient vectors of brightness temperature and on fitting spiral features within the IR images have also been used for fixing the center position of TCs [14], [15]. Hu et al. [16] presented an approach for locating the TC center using satellite and microwave scatterometer data. Furthermore, various machine learning (ML)-based frameworks for land use-land cover classification from remote sensing are reviewed in [17].

Multiple meteorological agencies perform a postseason analysis of TC tracks, known as "best tracks," which is useful for forecast verification and trend analysis. As this process is subject to manual errors, the data are prone to uncertainty. For example, the best-track data from the Joint Typhoon Warning Center (JTWC) show an increase in stronger TCs over the Western North Pacific (WNP) [18], whereas the TC data from the Japanese and Hong Kong meteorological agencies show no such trend [19]. The development of a TC dataset using an automated algorithm applied to satellite images can reduce these uncertainties.

The identification of TCs from satellite images is based on their size, position, status, and intensity [20]. This form of identification, based on pattern recognition, was pioneered by Dvorak and relies on human judgment, hence requiring an expert eye. A semiautomated approach was proposed by Dutta and Banerjee [21], which used Elliptic Fourier Descriptors (EFDs) and principal component analysis (PCA) on visible and infrared images for classification. Detection of the eye of a TC has also been the focus of much research, since the formation and categorization of a TC depend on it. Synthetic aperture radar (SAR) has been used extensively to help detect the eye of TCs owing to its ability to penetrate clouds. SAR is also used in [22] for a semiautomatic center-location method in cases where a TC is imaged without its eye.

Support vector machines (SVMs), random forests (RFs), and decision trees have also been shown to be effective at the detection of TC formation [23].

1558-0571 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.

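The pattern-correlation tracking idea cited in the Introduction ([7], [8]) can be illustrated with a minimal sketch: a cloud patch from one IR image is matched against a search window in the next image, and the offset with the highest correlation gives the cloud motion. This is only an illustrative reconstruction of the general technique, not the operational algorithm of those papers; all function names and parameters here are ours.

```python
import numpy as np

def ncc(a, b):
    """Pearson (pattern) correlation coefficient of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def track_patch(img1, img2, top, left, size, search=5):
    """Find the (dy, dx) displacement of the size x size patch of img1 at
    (top, left) that maximizes correlation within img2, scanning offsets
    in [-search, search] in both directions."""
    tpl = img1[top:top + size, left:left + size]
    best, best_off = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > img2.shape[0] or x + size > img2.shape[1]:
                continue  # candidate patch falls outside the second image
            c = ncc(tpl, img2[y:y + size, x:x + size])
            if c > best:
                best, best_off = c, (dy, dx)
    return best_off
```

For a synthetic pair in which the second image is the first shifted by two rows and three columns, `track_patch` recovers the offset `(2, 3)` exactly, since the correlation peaks at 1.0 there.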
Deep learning algorithms have also been used for identification and classification purposes. This has been demonstrated with an ensemble of convolutional neural network (CNN) classifiers applied to simulated outgoing longwave radiation (OLR) for classifying TCs and their precursors in [24]. The CNNs were trained with 50 000 images containing TCs and their precursors and 500 000 non-TC images for binary classification, showing success in the WNP region. Four different state-of-the-art U-Net models were developed in [25] for the detection of regions of interest (ROIs) for tropical and extratropical cyclones. Giffard-Roisin et al. [26] implemented a deep fusion model built to use TC track data and 3-D reanalysis as input. There have also been interesting recent attempts to estimate TC intensity from satellite data using CNNs [27], [28].

In this study, we present a novel implementation of the ML technique to detect TCs from high-resolution satellite images by considering only the shape of the clouds in the images and the maximum sustained surface wind speeds from JTWC. Each detection is also provided with a segmentation, allowing initial analysis based on the shape and size of the detected TC. The pipeline consists of a state-of-the-art Mask region-CNN (R-CNN) detector, a wind speed filter, and a CNN classifier. The pipeline can be applied to subsequent timestamps to generate a time series of the predicted segmentation.

II. DATA

The level 1.5 Meteosat visible infrared imager (MVIRI) IR satellite images from Meteosat 5 and Meteosat 7 Indian Ocean Data Coverage (IODC) at a six-hourly frequency were considered for the analysis from 2001 to 2007 and 2007 to 2016, respectively. During their IODC coverage, Meteosat 5 and Meteosat 7 were located at subsatellite longitudes of 63° and 57°, respectively, providing full-disk coverage, but we have considered only the Asian region (44.5°E–105.5°E and 10°S–45°N) for this study. We have considered a total of 40 TCs formed over the North Indian Ocean during the analysis period, totaling 816 images. The 8-bit measurement counts of the IR channel are calibrated and converted into brightness temperatures as described in [29].

The Microsoft common objects in context (MS COCO) format is employed in the deep learning framework, with the annotations stored in JavaScript Object Notation (JSON) files. The annotations specified for object detection consist of the following information for each image: 1) image_id, to uniquely identify a specific image in the dataset; 2) category_id, to uniquely identify each category in the dataset; 3) segmentation, a list of vertices in polygonal format; 4) area, the area of the segmentation; 5) bounding_box, drawn around the segmentation; and 6) is_crowd, to specify whether the segmentation is for a collection of objects or a single object. In the case of a single object, the polygon method is used to specify the segmentation. To prepare the JSON file, the usual procedure involves drawing the segmentation masks manually.

The compiled dataset of images was further processed to represent it in COCO dataset format. Each segmentation mask was also color-coded according to wind speed. Following the World Meteorological Organization (WMO) criteria for the classification of TCs in the Northern Indian Ocean region, any timestamp with wind speed ≥ 34 knots is marked as a TC [30]. The 816 images were randomly separated into three sets: training (60%), validation (20%), and testing (20%). Fig. 1(a) shows a sample satellite image for TC Nanauk, and Fig. 1(b) shows the corresponding segmentation mask, which is used to obtain annotations for the satellite image.

Fig. 1. (a) IR image of TC Nanauk (at timestamp 2014-06-12 0000-0030) and (b) corresponding TC segmentation mask.

III. METHODOLOGY

A. Proposed ML Pipeline

The proposed ML pipeline functions in the following manner (see Fig. 2): 1) an input satellite image is passed to the detector (a Mask R-CNN R50 FPN model), and the detections for the input image are recorded in detectron2's output format; 2) if one or more bounding boxes are detected, the wind speed of the corresponding timestamp is checked, and if the wind speed is less than 34 knots, the predictions for the image are discarded; 3) if more than one prediction is made for an image, the classifier (DenseNet169) is provided images cropped from the input satellite image using the bounding box coordinates; and 4) if more than one of the cropped images is classified as a cyclone, the one with the highest confidence score from the classifier is chosen as the correct prediction.

1) Detector: The detector is a Mask R-CNN R50 FPN model with a 1× learning rate (LR) scheduler, available in detectron2's model zoo. The model was prepared by loading the pretrained weights available in the model zoo and then training it on the training set. The detector provides predictions of the location of a TC in the input image along with its segmentation. Models in detectron2 also use the annotations stored in JSON files in the COCO dataset format during the training phase. Mask R-CNN is an extension of Faster R-CNN used for object instance segmentation [31]. It consists of two stages. The first stage is a region proposal network, which proposes candidate bounding boxes. In the second stage, it performs two tasks in parallel: 1) extract features from the proposed bounding boxes for classification and bounding box regression and 2) generate a binary mask for each region of interest (RoI). The binary masks represent an object's spatial layout in the image. For each RoI, an m × m mask is predicted using the pixel-to-pixel correspondence provided by convolutions. The RoI features, developed as small feature maps, are aligned using RoIAlign to produce pixel-accurate masks. Considering the feature maps as a grid, RoIAlign uses bilinear interpolation of each sampling point with the nearby grid points of the feature map, and the results are aggregated.
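The COCO-style annotation record described in Section II can be sketched as follows. The field values are illustrative, not taken from the actual dataset, and we use the standard COCO key names (`bbox`, `iscrowd`); the polygon area is computed with the shoelace formula.

```python
import json

def polygon_area(xy):
    """Shoelace formula for a flat [x1, y1, x2, y2, ...] polygon."""
    pts = list(zip(xy[::2], xy[1::2]))
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def make_annotation(ann_id, image_id, category_id, polygon):
    """Build one COCO-style object annotation from a single polygon."""
    xs, ys = polygon[::2], polygon[1::2]
    x0, y0 = min(xs), min(ys)
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": category_id,
        "segmentation": [polygon],          # list of polygons
        "area": polygon_area(polygon),
        "bbox": [x0, y0, max(xs) - x0, max(ys) - y0],  # [x, y, w, h]
        "iscrowd": 0,                       # single object -> polygon format
    }

# Hypothetical square segmentation mask, 100 px on a side
ann = make_annotation(1, 42, 1, [10, 10, 110, 10, 110, 110, 10, 110])
print(json.dumps(ann))
```

For the square above the record carries `area` 10000.0 and `bbox` `[10, 10, 100, 100]`; a full annotation file would collect many such records under an `"annotations"` key alongside `"images"` and `"categories"` lists.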
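The bilinear sampling at the heart of RoIAlign can be sketched in a simplified, single-channel form. This is a sketch of the idea only: the actual detectron2 operator samples several points per output bin and averages them, and handles batching and channels.

```python
import numpy as np

def bilinear_sample(fmap, y, x):
    """Sample feature map `fmap` at fractional location (y, x) by
    bilinear interpolation of the four surrounding grid points."""
    h, w = fmap.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    top = (1 - wx) * fmap[y0, x0] + wx * fmap[y0, x1]
    bot = (1 - wx) * fmap[y1, x0] + wx * fmap[y1, x1]
    return (1 - wy) * top + (wy) * bot

def roi_align(fmap, box, out_size):
    """Pool the region `box` = (y0, x0, y1, x1) of `fmap` into an
    out_size x out_size grid, one bilinear sample per bin center,
    without quantizing the box coordinates."""
    y0, x0, y1, x1 = box
    bh, bw = (y1 - y0) / out_size, (x1 - x0) / out_size
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            out[i, j] = bilinear_sample(fmap, y0 + (i + 0.5) * bh,
                                        x0 + (j + 0.5) * bw)
    return out
```

Because the sampling points are never snapped to integer coordinates, the pooled features stay aligned with the box, which is what allows the pixel-accurate masks mentioned above.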
TABLE I
OPTIMIZED PARAMETERS OF THE DETECTOR (MAXIMIZING ACCURACY)

Fig. 2. Proposed ML pipeline.

2) Additional Filter: One of the methods used to filter out some of the false positives is the wind speed. For each timestamp, the wind speed is compared against the WMO criteria for the Northern Indian Ocean region: if the wind speed for the timestamp meets the ≥34 knot threshold, the detection is accepted as a TC; otherwise, it is discarded. We additionally added a check on the IoU of two bounding boxes in order to combine bounding boxes that are close enough. This ensures that multiple bounding boxes of the same TC are merged, avoiding false positives, and helps remove predictions made on timestamps just before or after the period when the cyclone is classified as a TC.

3) Classifier: The classifier is a CNN model trained to classify a given image as a TC or not. The current classifier model is a DenseNet169 obtained by optimizing its parameters using Bayesian optimization for maximum accuracy. This classifier did not require any layer freezing, used the Adagrad optimizer, and was initialized with pretrained weights available in torchvision before being trained for TC classification. If the detector has detected more than one bounding box, the bounding box coordinates provided by the detector are used to crop the images, and these cropped images are sent to the CNN model for classification. In this pipeline, we allow more than one image to be classified as a TC, to account for the pathological case of the simultaneous genesis of more than one TC in the ocean.

For training the classifier, bounding boxes obtained from segmentation masks drawn for images with TCs and with disturbances were used to crop the corresponding images. Initially, a list of 18 models was compiled from the PyTorch documentation based on the recorded accuracy values.¹ These belong to the following architectures: 1) AlexNet; 2) Visual Geometry Group (VGG); 3) ResNet; and 4) DenseNet; the best performing one was DenseNet169. Labeling a prediction as a true positive or false positive depends on the intersection over union (IoU), which is calculated by dividing the area of overlap by the area of the union of the ground-truth and predicted bounding boxes. A prediction is considered a true positive if its IoU is greater than or equal to a specified threshold.

B. Hyperparameter Optimization

Hyperparameter optimization of the entire ML pipeline is achieved via the Bayesian optimization technique [32]. For hyperparameter tuning, the inputs of the surrogate model are the hyperparameter values and its output is the metric to be maximized. Here, the surrogate model is produced by mapping hyperparameter values to the performance-metric values using a Gaussian process.

For the detector, the F1 score was initially chosen as the metric to be optimized, with the purpose of obtaining a model with the best precision-recall behavior. This resulted in a model with a higher rate of false positives. To reduce the false positives, the F1 score was later replaced by accuracy as the optimization objective for the detector, while the classifier was still optimized for the F1 score. The resultant pipeline was better than using only the detector but still suffered from a high number of false predictions. Hence, the method was slightly changed: the detector and the classifier were optimized separately for accuracy and used together in the pipeline. The parameters, along with their ranges and the values obtained after optimization for accuracy, are tabulated in Tables I and II for the detector and the classifier, respectively.

¹ https://pytorch.org/vision/stable/models.html
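The IoU computation used both for merging nearby boxes and for labeling predictions as true positives can be sketched as follows, with boxes in the COCO-style (x, y, w, h) convention. The 0.5 default threshold is illustrative; the letter does not state the value used.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis, clamped at zero for disjoint boxes
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, truth, threshold=0.5):
    """Label a predicted box a true positive if IoU >= threshold."""
    return iou(pred, truth) >= threshold
```

Identical boxes give an IoU of 1.0 and disjoint boxes 0.0; two 10 × 10 boxes offset by half their width overlap on 50 of 150 combined units, giving IoU 1/3.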
Fig. 3. (a)–(c) Input satellite image, prepared segmentation mask, and predicted segmentation from the proposed ML pipeline, respectively, for the timestamp 2001-05-22 0600-0630. (d)–(f) Input satellite image, prepared segmentation mask, and predicted segmentation from the proposed ML pipeline, respectively, for the timestamp 2003-05-13 1200-1230.

TABLE II
OPTIMIZED PARAMETERS OF THE CLASSIFIER (MAXIMIZING ACCURACY)

TABLE III
ML PIPELINES COMPARISON

C. Implementation Framework

The Mask R-CNN models used are provided in detectron2, a PyTorch-based modular object detection library. The detectron2 model zoo consists of high-quality implementations of object detection algorithms, such as DensePose, Faster R-CNN, RetinaNet, and Mask R-CNN, developed by Facebook Artificial Intelligence Research. These models train on GPU by default and are highly customizable through a configuration system. The config system is a key-value system used to obtain standard and common behaviors; using it, the model can be configured with specific hyperparameter values to adapt to any custom dataset. The Mask R-CNN models selected expect the data to be provided either in detectron2's format or in COCO dataset format. Hyperparameter optimization was performed using the Adaptive Experimentation Platform (Ax), which here applies the Bayesian optimization method: it iteratively explores the given parameter space to identify the set of best parameter values. Bayesian optimization in Ax is implemented through BoTorch, a PyTorch-based library for Bayesian optimization.

IV. RESULTS AND DISCUSSION

We have employed the proposed ML pipeline on test high-resolution satellite images, of which 88 contain TCs. Fig. 3(a) shows the visualization of the predictions after the satellite image has been passed through the detector and the wind speed filter for TC Nanauk at the time-step June 12, 2014, 0030 UTC. In this case, the detector makes two TC detections on the satellite image, providing the class, score, predicted segmentation mask, and bounding box for each detection. After checking the number of predictions on the image, the wind speed filter is used to check the wind speed of the current time-step. Since the wind speed for this time-step is 55 knots, it satisfies the wind speed criterion. The image is then passed to the classifier, since there are multiple outputs. In this scenario, cropped images are generated from the coordinates of each bounding box and passed one at a time to the DenseNet169 model, which classifies each output as either "TC" or "not_TC." The corresponding detector output is accepted as the valid detection for that time-step, as shown by the visualization of the final output in Fig. 3(b). Fig. 3(c) shows the results of performing detections on satellite images for TC Hudhud from 2014, presented as a time series for October 08 to 12, 2014.

During the study, various pipelines were tested and compared on the test dataset. The results are provided in Table III, with the metrics given as percentages. The proposed ML pipeline has a high true-positive rate (recall) of 76.14% and a high true-negative rate (specificity) of 97.59%. It also has a high accuracy of 86.55% for the detection of TCs from the satellite images. Out of the 88 images with TCs, correct detections were made in 67 images. The proposed pipeline also avoids false predictions, successfully making no false predictions on 81 images without TCs.

During the experiments, we observed that Mask R-CNN models optimized for the F1 score suffered from a high number of false positives, even when the number of outputs reported for each image after filtering by the classifier was limited to one. Higher success was obtained by optimizing the Mask R-CNN and CNN models for accuracy, which yielded a model with more
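The reported scores are mutually consistent with the confusion counts given above: 67 true positives out of 88 TC images, 81 true negatives, and, as implied by the 97.10% precision (not stated directly in the text), 2 false positives. A quick check:

```python
def pipeline_metrics(tp, fp, fn, tn):
    """Standard detection metrics from confusion-matrix counts,
    returned as percentages rounded to two decimals."""
    pct = lambda x: round(100.0 * x, 2)
    return {
        "recall": pct(tp / (tp + fn)),        # true-positive rate
        "specificity": pct(tn / (tn + fp)),   # true-negative rate
        "precision": pct(tp / (tp + fp)),
        "accuracy": pct((tp + tn) / (tp + fp + fn + tn)),
    }

# Counts implied by the reported figures: 67 of 88 TC images detected,
# 81 of 83 non-TC images correctly rejected (2 false positives inferred).
print(pipeline_metrics(tp=67, fp=2, fn=21, tn=81))
```

This reproduces recall 76.14, specificity 97.59, precision 97.10, and accuracy 86.55.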
true positives and true negatives. It is found that the use of the CNN classifier in the pipeline reduces the number of positive detections and contributes to negative detections, because a number of positives are reclassified as "not_TC," which reduces the false positives as well as some true positives. The wind speed filter helped reduce the number of false positives and increase the true negatives. Hence, combining both of these approaches in the proposed pipeline gave the best results, by removing predictions that do not satisfy the wind speed criterion and by removing, via the CNN, false predictions when multiple predictions are passed.

V. CONCLUSION

In this study, a novel deep learning framework has been developed for TC detection in high-resolution satellite images. The framework uses a Mask R-CNN model as a detector to provide the segmentation, and a wind speed filter and CNN classifier to further refine the predictions by filtering out possible false predictions. The proposed ML pipeline uses satellite images taken at an interval of 6 h and is able to detect a TC for most of its life cycle. The study has also generated an annotated dataset with segmentation masks for every satellite image, which is made publicly available,² along with the codes for data processing and the deep learning models. This study shows the potential of the pipeline for automating the task of TC detection from satellite images. With appropriate modifications to the wind speed filter, such as using the wind speed from operational weather forecasts, the proposed technique may be used for real-time applications.

² https://github.com/AravindNair430/TCTracker.git

REFERENCES

[1] K. A. Emanuel and D. Nolan, "Tropical cyclone activity and global climate," in Proc. 26th Conf. Hurricanes Tropical Meteorol., Miami, FL, USA, 2004, pp. 240–241.
[2] J. P. Kossin, K. R. Knapp, T. L. Olander, and C. S. Velden, "Global increase in major tropical cyclone exceedance probability over the past four decades," Proc. Nat. Acad. Sci. USA, vol. 117, no. 22, pp. 11975–11980, Jun. 2020.
[3] R. S. Pandey and Y.-A. Liou, "Decadal behaviors of tropical storm tracks in the North West Pacific Ocean," Atmos. Res., vol. 246, Dec. 2020, Art. no. 105143.
[4] Y.-S. Lee, Y.-A. Liou, J.-C. Liu, C.-T. Chiang, and K.-D. Yeh, "Formation of winter supertyphoons Haiyan (2013) and Hagupit (2014) through interactions with cold fronts as observed by multifunctional transport satellite," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 7, pp. 3800–3809, Jul. 2017.
[5] Y.-A. Liou, J.-C. Liu, C. P. Liu, and C.-C. Liu, "Season-dependent distributions and profiles of seven super-typhoons (2014) in the Northwestern Pacific Ocean from satellite cloud images," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 5, pp. 2949–2957, May 2018.
[6] C. W. Landsea, G. A. Vecchi, L. Bengtsson, and T. R. Knutson, "Impact of duration thresholds on Atlantic tropical cyclone counts," J. Climate, vol. 23, no. 10, pp. 2508–2519, May 2010.
[7] J. A. Leese, C. S. Clark, and C. S. Novak, "An automated technique for obtaining cloud motion from geosynchronous satellite data using cross correlation," J. Appl. Meteorol., vol. 10, no. 1, pp. 118–132, Feb. 1971.
[8] J. Schmetz, "Cloud motion winds from METEOSAT: Performance of an operational system," Global Planet. Change, vol. 4, nos. 1–3, pp. 151–156, Jul. 1991.
[9] W. L. Woodley, C. G. Griffith, J. S. Griffin, and S. C. Stromatt, "The inference of GATE convective rainfall from SMS-1 imagery," J. Appl. Meteorol., vol. 19, no. 4, pp. 388–408, Apr. 1980.
[10] M. Williams and R. A. Houze, "Satellite-observed characteristics of winter Monsoon cloud clusters," Monthly Weather Rev., vol. 115, no. 2, pp. 505–519, Feb. 1987.
[11] Y. Arnaud, M. Desbois, and J. Maizi, "Automatic tracking and characterization of African convective systems on Meteosat pictures," J. Appl. Meteorol., vol. 31, no. 5, pp. 443–453, May 1992.
[12] T. Fiolleau and R. Roca, "An algorithm for the detection and tracking of tropical mesoscale convective systems using infrared images from geostationary satellite," IEEE Trans. Geosci. Remote Sens., vol. 51, no. 7, pp. 4302–4315, Jul. 2013.
[13] M. F. Piñeros, E. A. Ritchie, and J. S. Tyo, "Detecting tropical cyclone genesis from remotely sensed infrared image data," IEEE Geosci. Remote Sens. Lett., vol. 7, no. 4, pp. 826–830, Oct. 2010.
[14] N. Jaiswal and C. M. Kishtawal, "Automatic determination of center of tropical cyclone in satellite-generated IR images," IEEE Geosci. Remote Sens. Lett., vol. 8, no. 3, pp. 460–463, May 2011.
[15] N. Jaiswal and C. M. Kishtawal, "Objective detection of center of tropical cyclone in remotely sensed infrared images," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 6, no. 2, pp. 1031–1035, Apr. 2013.
[16] T. Hu et al., "Study on typhoon center monitoring based on HY-2 and FY-2 data," IEEE Geosci. Remote Sens. Lett., vol. 14, no. 12, pp. 2350–2354, Dec. 2017.
[17] S. Talukdar et al., "Land-use land-cover classification by machine learning classifiers for satellite observations—A review," Remote Sens., vol. 12, no. 7, p. 1135, Apr. 2020.
[18] P. J. Webster, G. J. Holland, J. A. Curry, and H.-R. Chang, "Changes in tropical cyclone number, duration, and intensity in a warming environment," Science, vol. 309, no. 5742, pp. 1844–1846, 2005.
[19] M. Wu, K. Yeung, and W. Chang, "Trends in western North Pacific tropical cyclone intensity," Eos, Trans. Amer. Geophys. Union, vol. 87, no. 48, pp. 537–538, Nov. 2006.
[20] C. Kar, A. Kumar, D. Konar, and S. Banerjee, "Automatic region of interest detection of tropical cyclone image by center of gravity and distance metrics," in Proc. 5th Int. Conf. Image Inf. Process. (ICIIP), Nov. 2019, pp. 141–145.
[21] I. Dutta and S. Banerjee, "Elliptic Fourier descriptors in the study of cyclone cloud intensity patterns," Int. J. Image Process., vol. 7, no. 4, pp. 402–417, 2013.
[22] S. Jin, S. Wang, X. Li, L. Jiao, J. A. Zhang, and D. Shen, "A salient region detection and pattern matching-based algorithm for center detection of a partially covered tropical cyclone in a SAR image," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 1, pp. 280–291, Jan. 2017.
[23] M. Kim, M.-S. Park, J. Im, S. Park, and M.-I. Lee, "Machine learning approaches for detecting tropical cyclone formation using satellite data," Remote Sens., vol. 11, no. 10, p. 1195, May 2019.
[24] D. Matsuoka, M. Nakano, D. Sugiyama, and S. Uchida, "Deep learning approach for detecting tropical cyclones and their precursors in the simulation by a cloud-resolving global nonhydrostatic atmospheric model," Prog. Earth Planet. Sci., vol. 5, no. 1, pp. 1–16, Dec. 2018.
[25] C. Kumler-Bonfanti, J. Stewart, D. Hall, and M. Govett, "Tropical and extratropical cyclone detection using deep learning," J. Appl. Meteorol. Climatol., vol. 59, no. 12, pp. 1971–1985, Dec. 2020.
[26] S. Giffard-Roisin, M. Yang, G. Charpiat, C. K. Bonfanti, B. Kégl, and C. Monteleoni, "Tropical cyclone track forecasting using fused deep learning from aligned reanalysis data," Frontiers Big Data, vol. 3, p. 1, Jan. 2020.
[27] W. Tian, W. Huang, L. Yi, L. Wu, and C. Wang, "A CNN-based hybrid model for tropical cyclone intensity estimation in meteorological industry," IEEE Access, vol. 8, pp. 59158–59168, 2020.
[28] C. Wang, G. Zheng, X. Li, Q. Xu, B. Liu, and J. Zhang, "Tropical cyclone intensity estimation from geostationary satellite imagery using deep convolutional neural networks," IEEE Trans. Geosci. Remote Sens., early access, Mar. 26, 2021, doi: 10.1109/TGRS.2021.3066299.
[29] V. O. John et al., "On the methods for recalibrating geostationary longwave channels using polar orbiting infrared sounders," Remote Sens., vol. 11, no. 10, p. 1171, May 2019.
[30] M. Mohapatra, G. S. Mandal, B. K. Bandyopadhyay, A. Tyagi, and U. C. Mohanty, "Classification of cyclone hazard prone districts of India," Natural Hazards, vol. 63, no. 3, pp. 1601–1620, Sep. 2012.
[31] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 2961–2969.
[32] V. Nguyen, "Bayesian optimization for accelerating hyper-parameter tuning," in Proc. IEEE 2nd Int. Conf. Artif. Intell. Knowl. Eng. (AIKE), Jun. 2019, pp. 302–305.
