
International Conference on Computational Intelligence and Multimedia Applications 2007

Evaluation of Edge Detection Techniques towards Implementation
of Automatic Target Recognition

J. Manikandan, B. Venkataramani and M. Jayachandran*


Department of ECE, National Institute of Technology, Trichy (NITT), INDIA
Email: jmk77_ada@yahoo.com

Abstract
The vision of Automatic Target Recognition (ATR) is realized through an integrated
command identification architecture that combines non-cooperative and cooperative
identification sensors and systems. The implemented ATR shall support the development of
situational awareness, i.e., overall general knowledge of the tactical battlefield
environment, including the locations of friendly, neutral, and enemy forces and the plan of
action for battle. The required operational capability will then be achieved by combining
on-board data from multiple sensors and systems with indirectly supplied off-board
information. Edge detection is one of the major image-processing requirements for
achieving efficient and accurate target recognition in difficult domains. The on-board
sensors used on combat aircraft are Electro-Optic Targeting Sensors (EOTS), infrared (IR)
sensors, radar, Synthetic Aperture Radar (SAR) and Inverse SAR (ISAR), providing a vast
amount of images with different characteristics that are helpful for detecting targets.
This paper concentrates on the assessment of advanced edge detection techniques on all
types of sensor input images obtained for the implementation of automatic target
recognition for air-to-air, air-to-sea and air-to-ground applications. It also describes the
approach towards implementing automatic target recognition for the entire range of sensor
inputs. The proposed algorithm for automatic target recognition is intended for
implementation on airborne systems, with potential use on ground stations.

1. Introduction
Numerous friendly-fire incidents justify the need for identification of aircraft in both
command-and-control and weapon systems. Rapid and reliable identification of targets at
maximum surveillance-system and weapon-system ranges is a challenging problem. Edge
detection is one of the major image-processing requirements for achieving automatic target
recognition (ATR) [1]. From the hardware point of view, the efficiency and capability of
ATR depend on the sensors used, the sources of information about the targets and the
signal-processing capability. One source for an ATR system is fused sensor data:
information about the target is received from on-board sensors and fed to the ATR system,
with the sensor range limiting the data available for fusion. The significant benefit of
sensor data fusion is increased confidence in the identity of a target: information from
multiple sensors can confirm the same entity, thereby reducing ambiguity [2]. Edge-detecting
an image significantly reduces the amount of data and filters out useless information while
preserving the important structural properties of the image. Figure 1 compares the different
types of on-board sensor images available, which are taken as sample images for processing
and discussion in this paper.

*He is with Department of Communication, Eritrean Institute of Technology, Asmara, AFRICA

0-7695-3050-8/07 $25.00 © 2007 IEEE
DOI 10.1109/ICCIMA.2007.84
Figure 1. Different formats of on-board images: IR image, EO image, ISAR image (ship),
ISAR image (air), SAR image

EOTS performs the functions of laser ranging, laser tracking, infrared search and track
(IRST) and a CCD camera. Multiband IR sensors are considered capable of capturing the
target in both day and night modes. With infrared imagers, targets are seen via the
radiation they emit. Radar performs target tracking and long-distance ranging.
Foliage-penetration radars can prove more helpful in air-to-ground modes. SAR and ISAR
systems are capable of building images of the battlefield (including ground, surface and
sea) [3], thus providing better visual situational awareness, and the images built can be
used for ATR. SAR is used in air-to-ground mode, whereas ISAR is used in air-to-air and
air-to-sea modes, where the radar is stationary but the target is moving and the size of the
target is small relative to the background environment. SAR and ISAR are active sensors
and risk being detected by the enemy target if their radar range is less than the target's
sensing range. Still, radars including SAR and ISAR remain attractive because they provide
high-resolution images of extensive areas of the earth's surface from a platform operating
at long range, irrespective of weather conditions or darkness.

2. Edge Detection
In any scenario, an airborne platform observes targets on the ground and in the air via any
or all of the above-mentioned imaging sensors. The number of objects in a scene is not
known a priori, but each target is assumed to belong to a certain class, such as jeep, tank,
truck, aircraft, unmanned air vehicle (UAV), missile or aircraft carrier. Ideally, we would
like sensors that produce a configuration of targets in a top-down view; with imaging
sensors, however, these targets are viewed through perspective projection only.
Three-dimensional target objects are modeled by a set of two-dimensional (2-D) views of
the object. Translation, rotation and scaling of the views are allowed to approximate full
three-dimensional (3-D) motion of the object [4]. An edge operator is a neighborhood
operation that determines the extent to which each pixel's neighborhood can be partitioned
by a simple arc passing through the pixel, where pixels on one side of the arc have one
predominant value and pixels on the other side have a different predominant value. Edge
detection uses gradient operators, zero-crossing operators and the Canny edge operator [5].

The assumption behind gradient operators is that edges are pixels with a high gradient. A
fast rate of change of intensity, in the direction given by the angle of the gradient vector,
is observed at edge pixels; the magnitude of the gradient indicates the strength of the
edge. In the Roberts edge detector, the differences are computed at the interpolated point
[i+1/2, j+1/2] and approximate the continuous gradient at that interpolated point rather
than at the point [i, j]. The Sobel edge detector avoids having the gradient calculated
about an interpolated point between pixels, and places emphasis on pixels closer to the
center of the mask. The Prewitt operator uses the same equations as the Sobel operator,
except that the constant c = 1, so it places no emphasis on pixels closer to the center of
the mask. Gradient-based algorithms such as the Prewitt filter have the major drawback of
being very sensitive to noise. The size of the filter kernel and its coefficients are fixed
and cannot be adapted to a given image.
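
For illustration, the gradient operators above can be sketched in a few lines. The paper's experiments used Matlab 6.5; the following Python/NumPy fragment is only a minimal sketch, not the authors' code. The kernel constants are the standard Sobel and Prewitt masks (note the centre weight c = 2 versus c = 1), and the function names are ours:

```python
import numpy as np

# 3x3 kernels for the horizontal-gradient component; the Prewitt
# kernel is the Sobel kernel with the centre weight c = 1 instead of 2.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

def gradient_edges(image, kx):
    """Gradient magnitude from a pair of orthogonal kernels."""
    gx = convolve2d(image, kx)            # horizontal gradient
    gy = convolve2d(image, kx.T)          # vertical kernel is the transpose
    return np.hypot(gx, gy)               # edge strength per pixel
```

Thresholding the returned magnitude then yields a binary edge map; the fixed kernel coefficients are exactly the limitation noted above.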

Zero-crossing operators determine whether or not the estimated second directional
derivative has a zero-crossing within the pixel. The Laplacian of an image highlights
regions of rapid intensity change and is therefore often used for edge detection. Laplacian
operators are very sensitive to noise; to counter this, the image is often Gaussian-smoothed
before the Laplacian filter is applied. Convolving this hybrid filter with the image gives
rise to the Laplacian of Gaussian (LoG) operator.
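
A minimal sketch of the LoG approach, again in Python/NumPy with hypothetical function names (the kernel is proportional to the standard sampled LoG shape, unnormalized; the zero-crossing test checks sign changes against right and lower neighbours only, which is one simple variant):

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sampled Laplacian-of-Gaussian kernel (size x size), unnormalized."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                   # zero-sum: flat regions respond with 0

def zero_crossings(response, thresh=0.0):
    """Mark pixels where the filtered response changes sign with the
    right or lower neighbour (a minimal zero-crossing test)."""
    edges = np.zeros_like(response, dtype=bool)
    sign = response > 0
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]   # horizontal sign change
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]   # vertical sign change
    edges &= np.abs(response) > thresh             # suppress weak crossings
    return edges
```

Convolving the image with `log_kernel(...)` and passing the response to `zero_crossings(...)` yields the edge map; raising `thresh` suppresses the noise-induced crossings described above.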

The Canny operator works in a multi-stage process: smoothing the image with a Gaussian
filter, computing the gradient magnitude and orientation, applying non-maxima suppression
to the gradient magnitude, and using a double-thresholding algorithm to detect and link
edges. The effect of the Canny operator is determined by three parameters: the width of the
Gaussian mask used in the smoothing phase, and the upper and lower thresholds used by the
tracker. Increasing the width of the Gaussian mask reduces the detector's sensitivity to
noise, at the expense of losing some of the finer detail in the image; the localization
error of the detected edges also increases slightly as the Gaussian width is increased.
Usually, the upper tracking threshold can be set quite high and the lower threshold quite
low for good results. It was noticed that setting the lower threshold too high causes noisy
edges to break up, while setting the upper threshold too low increases the number of
spurious and undesirable edge fragments in the output.
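
The double-thresholding stage can be sketched as follows. This Python fragment is illustrative, not the authors' implementation; the queue-based growth from strong pixels into connected weak pixels is one common way to realize the "detect and link" step:

```python
import numpy as np
from collections import deque

def hysteresis_threshold(magnitude, low, high):
    """Double thresholding with edge linking: keep weak pixels
    (>= low) only if they connect to a strong pixel (>= high)."""
    strong = magnitude >= high
    weak = magnitude >= low
    edges = strong.copy()
    queue = deque(zip(*np.nonzero(strong)))
    h, w = magnitude.shape
    while queue:                          # grow strong edges into weak ones
        i, j = queue.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and weak[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True
                    queue.append((ni, nj))
    return edges
```

The behaviour noted above falls out directly: a high `low` threshold breaks the weak mask and hence the linked edges, while a low `high` threshold seeds many spurious fragments.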

3. Target Recognition
This section describes the methodology of target recognition using edge-detection
techniques. One main requirement for ATR implementation is the need to store a model
database before recognition can proceed. The model database can be constructed either from
a CAD model or from a set of real images. The models (ground, airborne and marine targets)
are clustered hierarchically as the canonical positions of the models relative to each other
are determined, and this hierarchical clustering is exploited for an efficient target
recognition algorithm. At each step, the two closest models are determined and clustered.
This yields a canonical position for these models with respect to each other and a new set
of model points replacing the two previous models. The new model is then compared with the
remaining models as above, and the process is repeated until all of the models belong to a
single hierarchically constructed model tree. To perform this operation, target position
detection and target classification shall be done using neural networks [6].
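
The merge procedure above can be sketched as simple agglomerative clustering. This Python fragment is purely illustrative: representing each model by a feature vector, measuring closeness by Euclidean distance, and replacing a merged pair by its mean are our assumptions, not the paper's:

```python
import numpy as np

def build_model_tree(models):
    """Agglomerative clustering of target-model feature vectors:
    repeatedly merge the two closest clusters until one tree remains.
    Returns a nested-tuple tree over the original model indices."""
    clusters = [(i, np.asarray(m, float)) for i, m in enumerate(models)]
    while len(clusters) > 1:
        # find the closest pair of clusters (Euclidean distance)
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(clusters[a][1] - clusters[b][1])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        (ta, va), (tb, vb) = clusters[a], clusters[b]
        merged = ((ta, tb), (va + vb) / 2)   # new model replaces the pair
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)]
        clusters.append(merged)
    return clusters[0][0]
```

The nested tuples record the merge order, i.e. the hierarchically constructed model tree that recognition then traverses.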

Figure 2. Automatic target recognition example: (a) IR image; (b) IR image after
histogram equalization; (c) smoothed edges of tank models; (d) detected position of the tank.

A vast database is generated for the above-mentioned targets at different perspectives and
scales, which decides the capability of the target recognition. The on-board sensor images
are first pre-processed. Edge detection algorithms are then applied to the processed images
for feature extraction. Figure 2 shows an example of target detection in an IR image. The
same procedure is followed for all the other available images, and their performance and
results are reported in the next section. Target classification/identification is done on
the edge-detected images by comparing them against the available target databases over all
perspectives and scales.
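
As an illustrative sketch of this comparison step, assuming binary edge images and a database holding one template per stored perspective and scale (all names and the overlap score are hypothetical, not the paper's method):

```python
import numpy as np

def classify_target(edge_img, database):
    """Pick the database entry (over all stored perspectives/scales)
    whose binary edge template overlaps the detected edges best."""
    best_label, best_score = None, -1.0
    for label, templates in database.items():
        for tmpl in templates:                 # one template per view/scale
            overlap = np.logical_and(edge_img, tmpl).sum()
            score = overlap / max(tmpl.sum(), 1)  # fraction of template matched
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score
```

A practical system would use a more robust matching score (e.g. distance-transform or oriented-edge matching as in [4]), but the search over labels, perspectives and scales has this shape.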

4. Simulation Results
Matlab 6.5 is used for the analysis and evaluation of the images. During our analysis with
several IR images, it was noticed that target recognition using edge detection techniques
on IR images was 95-97% efficient. It was also concluded that edge detection of IR images
performed better without histogram equalization. Figure 3 shows the performance of the
Prewitt, Sobel, Roberts, Canny (without and with threshold) and LoG (without and with
threshold) operators on an IR image without histogram equalization. ATR using ISAR images
of sea and air targets is a challenging task because of the low SNR, poor resolution and
blur associated with ISAR images. It was observed that target recognition using edge
detection techniques on ISAR images was 99% efficient. This is because the target is moving
and its size is small relative to the background environment. Figure 4 shows the
comparative results of the edge detection techniques on an ISAR image. EO images are
obtained on a different principle than SAR and ISAR images: EO sensors are passive and can
be used only during daytime. Figure 5 shows the performance results for an EO image in
cloudy conditions.

Figure 3. Edge Detection - IR image
Figure 4. Edge Detection - ISAR image

During the analysis it was noticed that the efficiency of edge detection for EO images of
ground targets is very poor, whereas for EO images of air targets it was 90-95% efficient.
The characteristics of SAR images are totally different, as these images are built from
radar scattering returns, and in many cases the targets cannot be identified through edge
detection techniques [7]. Figure 6 displays the performance of the edge detection
techniques on a simple SAR image. During analysis with further samples of SAR images, it
was noticed that the performance was not good, as edge detection is unable to distinguish
a target from a stationary object, building or other structure. The efficiency of ATR for
SAR images depends completely on the resolution of the SAR images. Research is in progress
on the development of state-of-the-art target identification using multiple high-resolution
SAR images [8].

5. Observations and Future Activities


From the simulation results, it was observed that the efficiency of target recognition
using Canny edge detection can be improved by extracting the detected edges of targets and
displaying them to the pilot on the cockpit display. The aim is that once the target
orientation changes, due to movement of the target or of the sensor, the edge-detected
target image is also refreshed with reference to the angle and distance of the target. With
this approach, at some orientations the target can be classified easily, at the expense of
intensive computational processing. SAR images with lower resolution shall be transmitted
to the ground for analysis, target recognition and feedback to the pilot. In order to avoid
ambiguity, multi-sensor data fusion shall be employed to confirm the target and its
classification, thus improving the efficiency of automatic target recognition. The proposed
evaluation can also be used as a baseline towards the implementation of a ground-based
automatic target recognition system.

Figure 5. Edge Detection - EO image
Figure 6. Edge Detection - SAR image

6. Conclusion
In this paper we have evaluated edge detection techniques on all the available types of
on-board sensor images, with a view to employing Canny edge detection in the implementation
of ATR. It is interesting to note that a human being can correctly recognize targets in
many cases that baffle the approaches discussed above. This suggests that some human
capability not yet understood is at work; the aim of ATR is to identify targets without
human intervention to the maximum extent possible, thus reducing the workload of the pilot.
The paper covers all the operational modes (air-to-air, air-to-sea and air-to-ground) of
fighter aircraft.

7. References
[1] J. Manikandan, "Automatic Target Recognition (ATR) for Fighter Aircrafts", National
Symposium on Emerging Technologies and Flights of the Future, NIAT, Cochin, July 2006,
pp. 29-33.

[2] James L. Crowley and Yves Demazeau, "Principles and Techniques for Sensor Data Fusion",
LIFIA (IMAG), 46 avenue Félix Viallet, F-38031 Grenoble Cédex, France.

[3] Giorgio Franceschetti and Riccardo Lanari, Synthetic Aperture Radar Processing, CRC
Press, 1999.

[4] Clark F. Olson and Daniel P. Huttenlocher, "Automatic Target Recognition by Matching
Oriented Edge Pixels", IEEE Transactions on Image Processing, vol. 6, no. 1, January 1997.

[5] http://www.ii.metu.edu.tr/~ion528/demo/demochp.html

[6] Trevor Clarkson, "Automatic Target Recognition using Neural Networks", Department of
Electronic and Electrical Engineering, King's College London, Strand, London, UK.

[7] Mehrdad Soumekh, Synthetic Aperture Radar Signal Processing with MATLAB Algorithms,
John Wiley & Sons Inc., USA, 1999.

[8] Zhang Cui, "Research on Automatic Target Recognition in High-Resolution SAR Images",
doctoral thesis, National University of Defense Technology, 2003.

