
JRPS –International Journal for Research Publication & Seminar (ESTIJ),

Vol. XXX, No. XXX, 2011

Detection of Runway Debris


Sourish Chakraborty*1, Tanmayee Maske*2, Prayushi Khandelwal*3, Vidhisha Fating*4,
Dr. Sunil M. Wanjari*5

*1 2 3 4 Student, Department of Computer Engineering, St. Vincent Pallotti College of
Engineering & Technology, Nagpur (India)

*5 Head of the Department, Department of Computer Engineering, St. Vincent Pallotti College
of Engineering & Technology, Nagpur (India)

Abstract— The detection of foreign objects on runways is a key safety concern for airports globally. Debris on the runway, such as rocks, luggage, or other objects, can cause considerable damage to aircraft, resulting in accidents and threatening the lives of passengers and crew. Airfield inspectors use both traditional and automated methods to inspect runways for debris of varying nature. The fundamental limitation of existing systems is their inability to detect all forms of foreign objects accurately and within the time frame needed for removal from airport runways. To avoid such mishaps, automatic debris identification systems have been created that scan the runway and detect foreign objects using modern sensing technologies, including cameras. These systems can instantly identify possible threats and warn airport officials, allowing them to take immediate action to clear debris and keep airport operations safe. This paper presents an overview of the technologies we have utilized for debris detection on runways, as well as their advantages in improving airport safety.

Keywords— runway debris detection; foreign object debris (FOD); computer vision; deep learning; airport safety
I. INTRODUCTION
Debris is defined by the Federal Aviation Administration (FAA) as any object in the airport environment that may damage aircraft or hurt airport workers. Metal fragments, screws, tyre debris, small stones, plastic tubes and rubbish are the most common types of debris. Debris can be sucked into the aircraft engine during takeoff and landing, potentially resulting in engine failure. Furthermore, debris might puncture the tyres of the aircraft's landing gear. For example, a metal strip that dropped on the airport runway caused a jet disaster at Charles de Gaulle Airport in France in 2000; it was the most serious air tragedy in history caused by debris [1]. There is therefore an urgent need to assist airfield inspectors in recognizing harmful debris items so that they can be removed from the airport environment as soon as possible.

Currently, debris detection is mostly done manually (through walks). Automated debris detection systems can help mitigate the detrimental effects of manual detection on airport operations and manage human error more effectively. The majority of present automated detection systems rely on radar-based technologies, but due to their high cost, these techniques have not been commonly employed. For example, Boston Logan International Airport adopted one such radar-based detection system (FOD Detect) in 2013 [2] at a total estimated cost of $1.71 million, which covered installation on only a single runway. More affordable automated debris detection techniques could benefit the large-scale prevention of costly aircraft safety occurrences, since more airports could afford them. Furthermore, new detection systems should be adaptable to various airport conditions and locations.

A variety of computer vision-based debris detection strategies have been proposed in previous studies. One idea is to use supervised object detection algorithms such as YOLO and SSD [3], [4]. This paper presents a machine vision and deep learning-based debris detection system developed in response to the constraints of existing technologies as well as the practical needs of airport management. The proposed solution provides a debris detection method that is inexpensive to deploy and easily adaptable.

II. OBJECTIVES
Debris on runways is a major safety concern for airports, as it poses a significant risk to aircraft and can lead to costly delays and damage. To mitigate this risk, debris detection systems have been developed to detect and remove debris from runways. In this work, we examine recent research on runway debris detection systems.

A variety of computer vision-based runway debris detection strategies have been proposed in previous models. One idea is supervised debris detection using YOLO. However, supervised detection methods are impractical for debris detection on runways because they can only detect predefined classes, owing to their dependence on a dataset with predefined classes.

Some published runway debris detection methods have attempted to use general object detection architectures; these are not fully accurate, and our system addresses the aspects that previous models did not cover.

Our model can help prevent both minor and major accidents that happen on runways for any vehicle.
Our system uses high-quality optical cameras and lighting that does not cause reflections, can easily detect debris, and is also low in cost. The proposed approach achieved a high detection rate and reduced false alarms compared to existing methods.

The dataset is made up of 14 different object categories. Six categories consist of real debris samples, including nuts, screws, steel balls, gaskets, rubber blocks, and stones. The other eight categories contain standard FOD samples, including metal spheres, marble spheres, glass spheres, plastic spheres, metal cylinders, marble cylinders, glass cylinders, and plastic cylinders.

The YOLO (You Only Look Once) framework is a popular object detection algorithm that has been applied to debris detection on runways. YOLO works by dividing the image into a grid and predicting bounding boxes and class probabilities for each grid cell. This approach allows for real-time object detection with high accuracy.
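To make this concrete, the following is a minimal inference sketch using the open-source Ultralytics YOLO package. The weights file runway_fod.pt, the image name, and the 0.4 confidence threshold are illustrative placeholders and are not artifacts of this work.

```python
# Hypothetical sketch: run a pretrained YOLO detector on a runway image.
# "runway_fod.pt" and the 0.4 confidence threshold are illustrative placeholders.
from ultralytics import YOLO

model = YOLO("runway_fod.pt")               # weights assumed to be fine-tuned on FOD imagery
results = model.predict("runway.jpg", conf=0.4)

for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding box corners in pixels
        label = model.names[int(box.cls)]       # predicted debris category
        score = float(box.conf)                 # confidence for the prediction
        print(f"{label} ({score:.2f}) at [{x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}]")
```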
The proposed system achieved a high detection rate and reduced false positives, demonstrating the potential of the YOLO framework for debris detection.
III. LITERATURE REVIEW
We concentrate on the analysis of relevant work in two areas, mirroring our contributions: the dataset development framework and the FOD detection approach. Relevant datasets are examined briefly in Section III-A, and the state-of-the-art FOD detection approaches are revisited in Section III-B.

A. Additional FOD Datasets
A few FOD datasets, including the FOD-A dataset [10], were created and published by earlier investigations. This dataset, however, is intended for classification or object detection applications; all of its photos carry bounding box annotations on the FOD samples. As a result, FOD-A cannot be used directly with the localization approach described in this study, whose training/validation set must be kept distinct from the testing set, and whose testing set must contain runway photos with FOD randomly strewn throughout. The classification extension offered in this study, however, does use FOD-A.

B. Current FOD Detection Techniques
YOLO [11] and SSD [12] are two examples of published FOD detection approaches that have attempted to leverage general object detection frameworks, although supervised object detection seems to be impractical for the FOD detection task [4], [10]. FOD can refer to any object that is erroneously placed in crucial airport locations. Since there could be a wide variety of FOD, it is not practical to create an image dataset that accurately captures every potential type of FOD. This can prevent typical object detection techniques from generalising, and airport operations cannot rely on detection techniques that do not generalise. As a result, we conclude that supervised localization approaches are inadequate: the primary requirement is the detection of FOD, while classification is a useful extension.

Another method gathers all clear runway images from an airport and stores them in an image database. At detection time, it samples a new runway image, uses GPS coordinates to search the database for the corresponding image, aligns the two images, and then subtracts them to look for discrepancies [13]. Potential FOD can be found in areas with considerable variation. This kind of approach may not be robust to minute alterations in the airport environment. Additionally, it requires collecting photographs of all relevant airfield surfaces, and maintaining such a sizable image archive for different airport implementations may not be feasible.
Finally, this method depends on the precision of GPS, which may be prone to inaccuracy, to find the appropriate photos. If the wrong FOD-free image is used for comparison, inaccurate GPS estimates can result in a failure of detection. Overall, this approach may be unstable and difficult to scale to other airports. A new method is therefore suggested to overcome the major drawbacks of the existing ones, as covered in more detail in Section IV-B. In particular, the suggested solution does not require storing airport photos for detection; the pictures are needed only for training. The suggested localization strategy is also independent of the airport and generalizable to previously unobserved objects.

IV. PROPOSED MODEL
The specifics of the suggested method are described in this section. The framework for data collection is covered in Section IV-A, the FOD localization technique in Section IV-B, and the classification extension in Section IV-C.
A. Framework for Data Collection
To reflect our objective of automatically recognizing FOD from an aerial perspective, the data is gathered as videos from a nearby airfield using an unmanned aircraft system (UAS). We record the videos at three distances from the runway surface (30 feet, 60 feet, and 140 feet), corresponding to ground sample distances of 0.1 inch/pixel, 0.2 inch/pixel, and 0.46 inch/pixel, respectively. The 60-foot and 140-foot videos lose too much detail, so only the 30-foot videos are kept in the dataset after data collection. The videos' frame rates are reduced to limit the number of duplicate frames, and a dataset of images is produced by separating the frames. The 3840×2160 frames are scaled to the nearest multiple of 448×448 and then divided into an 8-by-4 grid of 448×448 patches (see the sketch at the end of this subsection). This decreases the input image size while preserving the accuracy of the data obtained. The training dataset contains only clean photos of runways and taxiways; these "clean" photographs contain no FOD objects and thus do not need to be annotated. The testing dataset is built from videos of taxiways and runways with FOD strewn across the pavement, and bounding box annotations for the FOD objects have been added to it to facilitate performance assessment. With this data production architecture, we effectively gather 81,185 photos for training, and 447 testing patches are produced after processing the testing data. Bounding boxes are annotated on each of these 447 patches for evaluation. Each 448×448 patch containing FOD is manually annotated with bounding boxes using the Computer Vision Annotation Tool (CVAT). The annotations are then exported from CVAT, transformed into a CSV file, and included in the dataset.
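The sketch below illustrates the frame-sampling and patching step described above. The 3840×2160 input size and the 8×4 grid of 448×448 patches follow the text; the sampling stride, file names, and output layout are assumptions for illustration.

```python
# Sketch: sample frames from a 3840x2160 runway video and split each frame
# into an 8x4 grid of 448x448 patches, as described in Section IV-A.
# The sampling stride and output naming are illustrative assumptions.
import os
import cv2

PATCH = 448
GRID_W, GRID_H = 8, 4            # 8 patches across, 4 patches down
STRIDE = 30                      # keep every 30th frame to limit duplicates (assumed)

os.makedirs("patches", exist_ok=True)
cap = cv2.VideoCapture("runway_30ft.mp4")
index = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if index % STRIDE == 0:
        # Scale so the frame divides evenly into the 8x4 patch grid (3584x1792).
        frame = cv2.resize(frame, (GRID_W * PATCH, GRID_H * PATCH))
        for row in range(GRID_H):
            for col in range(GRID_W):
                patch = frame[row * PATCH:(row + 1) * PATCH,
                              col * PATCH:(col + 1) * PATCH]
                cv2.imwrite(f"patches/frame{index:06d}_r{row}_c{col}.png", patch)
    index += 1
cap.release()
```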
B. Localization of FOD
The technique works as follows: to preserve image detail and lighten the computing load, the 3840×2160 images (generally considered high resolution) are divided into patches. The suggested solution uses a reconstruction methodology to provide FOD localization within the patches. Patch-specific segmentation maps that mark the background and the anomaly are produced from the reconstructed patches. To offer a complete image segmentation, or to display the FOD localizations on the entire image, the patch-specific segmentation maps can be concatenated as needed. Before classification, abnormal areas are extracted from the patch-specific segmentation map and normalised; the actual cropping is done on the original patch, while the segmentation map only gives the position.

More specifically, the reconstruction part of our approach uses an autoencoder [15] with the architecture depicted in figure 2. To make experimenting with ViT layers easier, we divide the autoencoder structure into what we refer to as learning blocks; each of the four levels in figure 2 represents a learning block. A convolutional layer or a ViT layer can be used as a block's initial layer [6]; in the ViT adaptation, the classification head of the ViT classifier is removed. With the exception of the last layer, the majority of the autoencoder's layers are learning blocks, as seen in figure 2. Even though the latent layer comprises just the convolutional or ViT layer, we nonetheless classify it as a learning block to simplify the terminology.
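The sketch below illustrates the reconstruction-based idea with a small convolutional autoencoder in PyTorch. The layer sizes and the anomaly threshold are illustrative assumptions; they do not reproduce the exact learning-block architecture of figure 2, which may also use ViT layers in place of convolutions.

```python
# Sketch of reconstruction-based FOD localization on a 448x448 patch.
# Layer sizes and the anomaly threshold are illustrative; the actual
# architecture (figure 2) may use ViT layers in place of convolutions.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 448 -> 224
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 224 -> 112
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 112 -> 56 (latent)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model, patch, threshold=0.05):
    """Mark pixels the model reconstructs poorly as anomalous (potential FOD)."""
    with torch.no_grad():
        error = (model(patch) - patch).abs().mean(dim=1, keepdim=True)  # per-pixel error
    return (error > threshold).float()   # binary patch-specific segmentation map

model = PatchAutoencoder()               # assumed trained on clean runway patches only
patch = torch.rand(1, 3, 448, 448)       # stand-in for a normalized camera patch
segmentation = anomaly_map(model, patch)
```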
C. Classification of FOD
We calculate the extreme points of the segmentation map to convert the segmentation localization S into the bounding box localization R, which is used for classification and assessment. The extreme points are the leftmost segmented point, the rightmost segmented point, the segmented point closest to the top of the segmentation map, and the segmented point closest to its bottom. The four coordinates of a bounding box are computed directly from these extreme points to form the bounding box localization R. We then crop the patch P with R to obtain the cropped localization C. From there, the approach employs an empirically chosen mainstream supervised classification architecture. To create classification conditions comparable to the localization result, we crop all of the photos from the FOD-A dataset [10] at their bounding boxes. The classification facilitates subsequent tasks: for example, if C's classification yields a prediction score below a chosen threshold, C can be labelled as unknown and saved for manual labelling, because it is unlikely to represent a class in the classification dataset. Otherwise, if the prediction score is above the selected threshold, C is assigned the predicted class.
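A small sketch of this extreme-point conversion and thresholded classification is given below; the classifier interface, the class scores, and the 0.5 threshold are illustrative assumptions.

```python
# Sketch: convert a binary segmentation map S into a bounding box R from its
# extreme points, crop the original patch P, and label low-confidence crops
# as "unknown". The classifier call and the 0.5 threshold are illustrative.
import numpy as np

def extreme_points_to_box(seg_map):
    """Return (x_min, y_min, x_max, y_max) from the segmented pixels of S."""
    ys, xs = np.nonzero(seg_map)
    if len(xs) == 0:
        return None                       # nothing segmented in this patch
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def classify_crop(patch, seg_map, classifier, threshold=0.5):
    box = extreme_points_to_box(seg_map)
    if box is None:
        return None
    x_min, y_min, x_max, y_max = box
    crop = patch[y_min:y_max + 1, x_min:x_max + 1]       # cropped localization C
    scores = classifier(crop)                            # assumed: dict of class -> score
    label, score = max(scores.items(), key=lambda kv: kv[1])
    if score < threshold:
        return "unknown"                                 # queue for manual labelling
    return label
```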
V. EVALUATION
Deployment Model
The modelling starts from the selection of cameras and their fitting sites, along with other equipment such as lights, all of which affect the design of the corresponding processing algorithm, as well as the various difficulties that arise when fitting this equipment. Runway debris detection is divided into two major modules, detection and identification, of which the detection module is the basis of the entire system; it uses sensors as its main tools. The hardware, such as the sensors, determines the installation positions, and the travelling speed of the aircraft places higher requirements on the real-time performance of the system. The workflow of runway debris detection is determined by the actual detector system. Taking into consideration debris conditions in India, current domestic hardware conditions, and our actual situation, this system adopts simultaneous data collection and detection for debris larger than 5 cm × 5 cm, and separates data collection from testing for foreign objects smaller than 5 cm × 5 cm.

The overall design of the airport runway debris detection is as follows. During routine runway inspection, images of the road surface in front of the vehicle are collected by a camera mounted on the vehicle, and debris information is extracted using image processing technology. The photos and the information are then shown on the screens at the data centre. The camera-plus-lighting technology is mature and inexpensive. High-definition cameras are used to detect debris, and the feature information extracted by the optical camera is also used to classify the debris, laying the foundation for recognition.

The collection of pavement images requires that the images cover as much of the runway width as possible.

Multiple cameras: Generally, multiple cameras are installed side by side at the front of the vehicle and along the runway; each camera covers a certain range, and a wider field of view is obtained by combining multiple cameras. A camera can be mounted on a high roof or on the bumper at the front of the vehicle.

Single camera: To get a wider field of view, this solution needs a higher mounting position and a high-resolution wide-angle camera. The system chooses the scheme of installing multiple cameras on the roof of the tower and also on planes.

The system aims to intelligently distinguish and classify the detected foreign objects, so the data centre can process the debris information returned by the previous stage through real-time and effective algorithms to identify the debris. The runway debris detection system then alerts airport staff according to the type of foreign object.

The brief working principle of the airport runway debris detection and identification system is described as follows, and sketched in code at the end of this section:
(1) The system starts with a self-check to confirm that each sensor is working properly.
(2) The sensors monitor the road surface in front of the vehicle while it travels.
(3) If debris is found, the information is shown on the screens at the data centre and passed to the data processing computer for comprehensive processing to determine the foreign object's range.
(4) The pattern features of the debris are extracted, compared and identified by the system and the data processing computer, and the danger level of the foreign body is judged and fed back to the staff.
(5) The staff clean up the foreign objects in time according to the danger level and record the foreign body information, which is saved to the system server.

This describes the actual operation of the vehicle-mounted runway foreign object detection and identification system that was developed. The system can turn on the camera in real time, process the runway pavement scene captured by the camera in real time, detect runway foreign objects, identify the type of foreign object and its hazard level, and raise a real-time alarm.
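A compact sketch of this working principle is shown below; every interface in it (sensor, detector, data centre, logbook) is a hypothetical stand-in for the hardware and software components described above, not an actual API of the deployed system.

```python
# Sketch of the five-step working principle; every interface below
# (sensor, detector, data_centre, logbook) is a hypothetical stand-in.
def inspection_cycle(sensor, detector, data_centre, logbook):
    if not sensor.self_check():                      # (1) verify each sensor works
        raise RuntimeError("Sensor self-check failed")
    for frame in sensor.stream():                    # (2) monitor the surface while driving
        for debris in detector.detect(frame):        # (3) look for debris in the frame
            data_centre.display(frame, debris)       # (3) show it on the data-centre screens
            danger = detector.assess_danger(debris)  # (4) judge the danger level
            data_centre.notify_staff(debris, danger)
            logbook.record(debris, danger)           # (5) staff clean up; the record is saved
```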
VI. CONCLUSION
Although there are already techniques for detecting runway debris using radar technology [2], these methods can be very expensive. As an alternative, we provide a dataset development framework and a computer vision-based solution for runway debris detection. Because computer vision requires only a camera and some development time, it can be substantially less expensive than radar-based solutions. There are other image-based debris detection techniques, but they have drawbacks that can lessen their effectiveness. These fundamental problems are addressed by the strategy put forward in this work, which includes a method that can be applied to new objects and a reduction in the amount of data needed.

This study also suggests a runway debris detection framework based on random forest to increase the detection accuracy of small-scale debris against complex backgrounds. It uses the PVF representation to effectively segregate debris zones and reduce background interference in photos of airfield pavement. The random forest is chosen to obtain greater accuracy for small-scale debris detection, and its deep integration gives the suggested method improved robustness and generalizability for runway debris detection.

To further improve the effectiveness of runway debris detection, future work will apply image pyramids in the feature representation. Additionally, to evaluate the suggested detection technique in subsequent studies, a larger debris dataset covering various illumination conditions, such as full sunlight and gloomy weather, will be constructed.

VII. ACKNOWLEDGMENT
The Airport Authority of India project gave us a fantastic opportunity for learning and career development. We are grateful for the opportunity to work with the professionals who advised and helped us over the course of this project. We deeply appreciate Mr. Soni and Mr. Vimal of the ATS (Air Traffic System) Radar Centre, Airports Authority of India (AAI), Dr. Babasaheb Ambedkar International Airport, Nagpur, for their assistance and resources, which were essential to the project's success, and we take this opportunity to thank them for their help. Prof. Sunil Wanjari, head of the department and project mentor, has our warmest gratitude for his essential counsel during the internship and project periods.

REFERENCES
[1] G. Eason, B. Noble, and I. N. Sneddon, "On certain integrals of Lipschitz-Hankel type involving products of Bessel functions," Phil. Trans. Roy. Soc. London, vol. A247, pp. 529–551, April 1955.
[2] J. Clerk Maxwell, A Treatise on Electricity and Magnetism, 3rd ed., vol. 2. Oxford: Clarendon, 1892, pp. 68–73.
[3] I. S. Jacobs and C. P. Bean, "Fine particles, thin films and exchange anisotropy," in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271–350.
[4] K. Elissa, "Title of paper if known," unpublished.
[5] R. Nicole, "Title of paper with only first word capitalized," J. Name Stand. Abbrev., in press.
[6] Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, "Electron spectroscopy studies on magneto-optical media and plastic substrate interface," IEEE Transl. J. Magn. Japan, vol. 2, pp. 740–741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982].
[7] M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989.
