
Computers and Electrical Engineering 40 (2014) 785–795


Fast single haze image enhancement


Z. Wang a,b, Y. Feng a,*
a School of Electronics and Information, Northwestern Polytechnical University, Xian 710129, People's Republic of China
b Department of Electric Engineering, Tongling University, Tongling 241000, People's Republic of China

Article history: Available online 31 July 2013

Abstract: This paper presents a new method for fast single haze image enhancement without using any extra information. The proposed approach simultaneously dehazes the image and enhances sharpness by treating the model component and the residual separately. In the haze removal stage, two coarse transmission maps obtained with the dark channel prior are fused: one is computed per pixel and the other per patch. To dehaze and enhance sharpness simultaneously, a modified unsharp masking framework controls the degree of sharpening through an adaptively constructed sigmoid function. The main advantage of the proposed approach over others is its higher speed, which allows enhanced haze images to be used in real-time processing applications. A comparative experiment with several state-of-the-art algorithms shows similar or better visual results.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Outdoor images and videos are usually degraded by aerosols such as dust, mist, and fumes in the atmosphere, here regarded as haze. The light reflected from an object, together with the ambient light in the medium (referred to as the airlight [1]), should propagate in straight lines but is scattered and replaced by previously scattered light. As a result, this light is absorbed and scattered by particles in the medium before it reaches the camera. The presence of haze makes images lose contrast and color fidelity, so the degraded images exhibit poor visibility and low vividness of the scene.
Image dehazing improves the aesthetic quality of images and also improves data quality in scientific data collection and computer vision applications, such as object detection and tracking. In addition, removing haze can significantly increase the visibility of the scene and correct the color shift caused by the airlight. However, haze removal is a challenging problem because the haze depends on the unknown scene depth.
In recent years, several approaches have been proposed to address the haze removal problem. The scene was estimated, for the purpose of removing haze or increasing contrast, from multiple images [2] with different amounts of haze or from additional information. Polarization methods [3] remove haze using two or more images taken with different degrees of polarization; however, they place strong constraints on the acquisition. An alternative [4] restores image contrast from a single input image by maximizing the local contrast of the direct transmission while assuming a smooth layer of airlight. The results are visually compelling with enhanced scene contrast, yet may exhibit halos near depth discontinuities in the scene. Fattal [5] exploits the fact that the transmission and the scene albedo are locally uncorrelated to dehaze the image. This approach is physically sound and can produce impressive results. However, it relies heavily on color and thus cannot deal with gray-level images. The algorithm may fail in

Reviews processed and recommended for publication to Editor-in-Chief by Associate Editor Dr. Eduardo Cabal-Yepez.
* Corresponding author at: School of Electronics and Information, Northwestern Polytechnical University, Xian 710129, People's Republic of China. Tel.: +86 13319292138.
E-mail addresses: asdwzl@hotmail.com (Z. Wang), sycfy@nwpu.edu.cn (Y. Feng).

0045-7906/$ - see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.compeleceng.2013.06.009

cases where the local uncorrelation of transmission and scene albedo breaks down, and it is computationally intensive. Kaiming et al. [6] introduce a novel dark channel prior, a statistical prior on the minimum-intensity color channel in an image patch. They exploit the fact that objects in a clear image patch have at least one color channel with very low intensity, whereas in a hazy patch all color channels have higher intensity due to the added airlight. Halos near depth discontinuities in the scene are removed using soft matting, which requires considerable processing time. To reduce the computational complexity, Tarel and Hautiere [7] inferred a veil refined by a median filter for single image haze removal; however, halos cannot be completely eliminated. Jing et al. [8] proposed a fast fog removal algorithm, which employs a fast bilateral filter to smooth the rough estimate of the atmospheric veil. However, the median filter is a particular fast bilateral filter [7], so the method proposed in [8] may not be faster than [7].
Most recent dehazing algorithms [4–7] only remove haze without regard to image sharpness, even though sharpness may be useful for post-processing such as object detection. The visual appearance of an image may be significantly improved by emphasizing its high-frequency content, enhancing the edge and detail information in it. The classic linear unsharp masking (UM) technique is often employed for this purpose. Polesel et al. [9] present an adaptive algorithm for image enhancement that employs two directional filters whose coefficients are updated using a Gauss–Newton adaptation strategy. Deng [10] uses an exploratory data model as a unified framework for developing generalized unsharp masking algorithms, presenting a tangent system based on a specific Bregman divergence.
This paper proposes a novel approach for enhancing sharpness and dehazing an image simultaneously. We use the dark channel prior for single image dehazing, suppress halos through transmission map fusion, and preserve more image detail with modified unsharp masking. The method is much faster than [5] because its complexity is only a linear function of the number of input image pixels; it suppresses halos completely, improves visual quality, and sometimes produces even better results than state-of-the-art algorithms.
The rest of this paper is organized as follows. Section 2 presents the image degradation model due to the presence of haze in the scene, and Section 3 details our approach and the steps of the fast single image dehazing algorithm. A modified unsharp masking framework is employed to enhance sharpness and dehaze the image simultaneously in Section 4. We then compare our algorithm with previously published algorithms in Section 5. Finally, Section 6 concludes.

2. Image degradation model

In computer vision and image processing, the transmission of light can be described by two components. One is the direct transmission of light from the object surface; the other is the transmission due to scattering by the particles of the medium, called the airlight. The formation of a hazy image is widely written as follows [11–14]:

I(x) = J(x)t(x) + A(1 − t(x))    (1)

Here, the equation is defined on the three RGB color channels. I is the observed hazy image intensity, J is the surface radiance vector at the intersection of the scene and the real-world ray corresponding to the pixel x = (x, y), A is the airlight color vector, and t is the medium transmission describing the portion of the light that is not scattered and reaches the camera; the term A(1 − t(x)) is also called the veil [7]. The goal of haze removal is to recover J, A, and t from I. For an N-pixel color image I, there are 3N constraints and 4N + 3 unknowns: J has 3N unknowns, t has N unknowns, and A has 3 unknowns. This makes haze removal an ill-posed problem. In (1), the first term J(x)t(x) on the right-hand side is the direct attenuation, and the second term A(1 − t(x)) is the airlight, which is also the intensity of the atmospheric veil [7]. The direct attenuation describes the scene radiance and its decay in the medium, while the airlight, which comes from previously scattered light, leads to a shift of the scene colors. The direct attenuation is thus a multiplicative distortion of the scene radiance, whereas the airlight term is an additive one. The transmission t(x) follows the Lambert–Beer law for transparent objects, which states that light traveling through a transparent material is attenuated exponentially [14]:

t(x) = e^(−βd(x))    (2)

where d(x) is the scene depth and β is the attenuation coefficient due to scattering in the medium. This equation indicates that the scene radiance is attenuated exponentially with depth. The observed image I(x), scene radiance J(x), and airlight A are all vectors with one intensity value per color channel, but the attenuation coefficient β is not a function of the color channel, so for a given pixel the transmission is constant over all three RGB color channels. If we have obtained a good estimate of the airlight A, we still need to solve for two unknowns in (1): the transmission t(x), which is related to the depth of the scene, and J(x), the dehazed image. Deducing the scene depth from the direct transmission is thus an ill-posed problem that needs to be resolved. Without additional information or priors about the scene, one cannot determine whether the color of a patch is caused by haze on a distant object or by the natural color of a haze-free object.
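As an aside for implementers, Eqs. (1) and (2) are straightforward to simulate. The following minimal Python/NumPy sketch (function and variable names are our own, not from the paper) synthesizes a hazy image from a scene radiance J, a depth map d, and an airlight A:

```python
import numpy as np

def synthesize_haze(J, d, A=(1.0, 1.0, 1.0), beta=1.0):
    """Hazy image formation, Eq. (1): I = J*t + A*(1 - t),
    with the transmission t = exp(-beta * d) of Eq. (2)."""
    t = np.exp(-beta * d)[..., None]              # H x W x 1, broadcast over RGB
    A = np.asarray(A, dtype=float)[None, None, :]
    return J * t + A * (1.0 - t)

# Toy 2 x 2 scene: the deeper the pixel, the closer I gets to the airlight.
J = np.zeros((2, 2, 3))                           # black scene radiance
d = np.array([[0.0, 1.0],
              [2.0, 10.0]])                       # depth map
I = synthesize_haze(J, d, beta=1.0)               # I[1, 1] is almost pure airlight
```

Note how the direct attenuation J(x)t(x) acts multiplicatively while the airlight term is additive, exactly as discussed above.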

3. Transmission fusion based dehazing

In this section, our approach and the steps of the fusion-based single image dehazing algorithm are detailed. When no depth information is available, as noticed in [4], it is not possible in Koschmieder's law (1) to separate the transmission t(x) from the dehazed image J(x). Haze removal is under-constrained if the input is only a single haze image with no other assumptions. Therefore, correct assumptions need to be made in order to obtain good results.

3.1. White balance

Image data acquired by sensors, either film or electronic image sensors, must be transformed from the acquired values to new values that are appropriate for color reproduction or display. Several aspects of the acquisition and display process make such color correction essential, including the fact that the acquisition sensors do not match the sensors in the human eye. Before dehazing, we assume that the input image I(x) is normalized between 0 and 1; for a grayscale image, for example, normalization is performed by dividing pixel values in [0, 255] by 255. We also assume that white balance is performed prior to the dehazing algorithm. In this paper, we perform white balance simply by biasing the image average color towards pure white. In practice, a local white balance that biases towards local image averages is better than a global white balance on clear pixels, but more time-consuming. When the white balance is correctly performed, the fog is pure white, meaning that A is very close to (1, 1, 1), which is confirmed by experiments. Therefore we can set A in (1) to (1, 1, 1) [7].
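The paper does not give an explicit formula for this step; a common way to bias the average color towards white is the gray-world assumption, sketched below in Python/NumPy (names are ours, not the paper's):

```python
import numpy as np

def gray_world_white_balance(I, eps=1e-6):
    """Bias the image average color towards pure white (gray-world
    assumption): scale each channel so that its mean equals the global
    mean intensity.  I is an H x W x 3 float image in [0, 1]."""
    channel_means = I.reshape(-1, 3).mean(axis=0)        # per-channel mean
    gain = channel_means.mean() / (channel_means + eps)  # per-channel gain
    return np.clip(I * gain[None, None, :], 0.0, 1.0)
```

After this correction the fog region is approximately achromatic, justifying the choice A = (1, 1, 1).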

3.2. Dark channel prior and block effects

Kaiming et al. [6] propose a very simple and elegant dark channel prior to solve the single image dehazing problem. Although the method is simple, the result is quite impressive. The dark channel prior states that, in most non-sky patches, at least one color channel has very low intensity at some pixels. In other words, the minimum intensity in such a patch has a very low value, close to zero. Since the scene radiance J is a haze-free image, the dark channel of J is close to zero. It is defined as follows:

J^dark(x) = min_{c ∈ {r,g,b}} min_{y ∈ Ω(x)} J^c(y) = 0    (3)

where J^c is a color channel of J and Ω(x) is a local patch centered at x. If we apply a minimum filter to both sides of (1), we can eliminate the multiplicative term and estimate the veil [7]:

V(x) = A(1 − t(x)) = min_{c ∈ {r,g,b}} min_{y ∈ Ω(x)} I^c(y)    (4)

The color of the sky in a hazy image I is usually very similar to the airlight A, so we can estimate A simply from V(x) as t(x) → 0. In other words, the brightest pixels in the veiling luminance are considered to be the airlight. In particular, if white balance has been correctly performed first, A can be set to (1, 1, 1). Then t(x) is estimated through (4), and J(x) can be recovered from (1) once t(x) and A are effectively estimated.
The dark channel prior is, without a doubt, a statistical prior. When dehazing, a patch (e.g. of size 15 × 15, as suggested by [6]) must be chosen to compute the dark channel, which incurs many block artifacts in the transmission map (Fig. 1(b)), leading to halo artifacts in the recovered image J (see Fig. 1(c)). This is because the transmission is not always constant in a rectangular patch. To remove these artifacts, Kaiming et al. [6] propose a soft matting method to refine the transmission maps and obtain excellent visual results, but it introduces a serious problem of computational complexity. Tarel and Hautiere [7] infer a veil (which can be obtained with a single-pixel dark channel prior) for single image haze removal based on a median filter. The complexity is roughly a linear function of the input image size; however, halos cannot be completely eliminated.
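The two building blocks used later, the point and patch dark channels of Eq. (4), can be sketched as follows (Python/NumPy, names ours; the naive sliding-window minimum shown here can be replaced by van Herk's linear-time algorithm [16]):

```python
import numpy as np

def point_dark_channel(I):
    """Eq. (4) with a 1 x 1 patch: per-pixel minimum over the color channels."""
    return I.min(axis=2)

def patch_dark_channel(I, patch=15):
    """Eq. (4): minimum over color channels, then a patch x patch
    minimum filter (plain sliding window, edge-padded)."""
    dark = I.min(axis=2)
    r = patch // 2
    padded = np.pad(dark, r, mode='edge')
    out = np.empty_like(dark)
    h, w = dark.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

With A = (1, 1, 1) after white balance, V(x) is just the dark channel of I and the coarse transmission is t(x) = 1 − V(x).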

Fig. 1. Block artifacts and halo phenomena. (a) Input hazy images. (b) Estimated transmission maps using the patch dark channel prior, patch size 15 × 15. (d) Estimated transmission maps using the point dark channel prior. (c) and (e) Recovered images using (b) and (d), respectively. (f) Recovered images using our fusion-based dehazing approach.

3.3. Transmission maps fusion method

Fig. 1(c) shows the halo artifacts of the dehazed image obtained with the dark channel prior on 15 × 15 patches, which we call the patch dark channel prior. In Fig. 1(e), by contrast, there are no halos, but the color distortion is very serious. Therefore, patch size is a key parameter in dark-channel-prior-based dehazing algorithms. On one hand, the dark channel prior becomes better for a larger patch size, since it is a statistical prior and the probability that a patch contains a dark pixel increases. On the other hand, the assumption that the transmission is constant within a larger patch becomes less appropriate, which leads to color distortion.

When the patch dark channel prior is employed, many rectangular blocks emerge in the transmission maps of Fig. 1(b), containing much false high-frequency information. In other words, the smooth part of the transmission maps obtained with the patch dark channel prior is close to the real transmission maps, but the patch dark channel prior also introduces high-frequency details at block edges, which lead to the halo artifacts in the recovered image shown in Fig. 1(c). Therefore, this paper addresses the halo effects by removing the high-frequency component of the transmission maps. However, the real high-frequency component of the transmission maps, such as the edges of the scene, is also removed, forcing us to add some high-frequency information back to the processed transmission maps.

The block effects come from the choice of a patch when applying the dark channel prior. If a single pixel instead of a patch is used, which we call the point dark channel prior, the details of the scene are preserved and no block effects emerge. However, the estimation accuracy of the smooth part cannot be guaranteed, and the color distortion of the recovered image is very serious.
The single image dehazing method is shown in Fig. 2. The coarse transmission maps are estimated using the patch dark channel and the point dark channel. The FWT (Fast Wavelet Transform) transforms the transmission maps from the spatial domain to the frequency domain; the FFT (Fast Fourier Transform) or other spectral transforms could also be applied. The FWT is chosen in this paper, and the wavelet decomposition level is set as high as possible in order to clear up block edges. In Fig. 2, for simplicity, only one level of wavelet decomposition is used to show the results. To extract the high-frequency component of the transmission maps, a high-pass filter is applied to the transmission map of the point dark channel after the FWT. Similarly, to extract the low-frequency information, a low-pass filter is applied to the transmission map of the patch dark channel after the FWT. The fusion block in Fig. 2 thus replaces the high-frequency component of the patch dark channel transmission map with that of the point dark channel map. In other words, the fused transmission maps take their high-frequency component from the point dark channel map and their low-frequency component from the patch dark channel map.

However, the transmission map after fusion includes minor scene details inherited from Fig. 1(d). These minor details are undesired, because the transmission map of a scene object should be approximately constant if the size of the object is far less than the viewing distance in the image. Therefore, to reduce the influence of this undesired minor detail information, a Gaussian kernel is employed to blur the transmission map after fusion. The dehazed image recovered by the proposed approach is shown in Fig. 1(f): no halo artifacts can be found, and there is hardly any color distortion.
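The fusion rule itself is simple: low frequencies from the patch map, high frequencies from the point map. The sketch below illustrates it with a mean-filter low/high split standing in for the FWT decomposition actually used in the paper (Python/NumPy, names ours):

```python
import numpy as np

def low_pass(x, k=7):
    """k x k mean filter (edge-padded), a stand-in for the wavelet
    low-pass used in the paper; any low/high split works for illustration."""
    pad = k // 2
    p = np.pad(x, pad, mode='edge')
    out = np.empty_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def fuse_transmission(t_patch, t_point, k=7):
    """Keep the low-frequency part of the patch-based map and the
    high-frequency part of the point-based map.  The paper additionally
    blurs the result with a Gaussian kernel to suppress minor details."""
    return low_pass(t_patch, k) + (t_point - low_pass(t_point, k))
```

By construction, if both input maps are identical the fusion leaves them unchanged; the interesting case is when the patch map carries false block edges that the split discards.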

3.4. Recovering the scene radiance

When the airlight and the transmission map are estimated appropriately, the scene radiance can be recovered by solving (1). Note that the direct attenuation term J(x)t(x) can be very close to zero when the transmission t(x) is close to zero; in other words, the transmission of the sky at infinity tends to zero. In this case, the directly recovered scene radiance J suffers a large shift of the sky colors. Therefore, we introduce a lower bound t0 on the transmission t(x), i.e.,

Fig. 2. Block diagram of the proposed fusion algorithm.



we preserve a small amount of haze in very dense haze regions, in order to avoid dividing by zero (or very small numbers). The final scene radiance J(x) is recovered by:

J(x) = (I(x) − A) / max(t(x), t0) + A    (5)

A typical value of t0 is 0.1 [6]. We evaluate the influence of the parameter t0 by experiment and find that the visual effect does not change when t0 is less than 0.1. Since the scene radiance is usually not as bright as the airlight, the dehazed image looks dim; we therefore increase the exposure of J(x) for display.
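Eq. (5) translates directly into code. A minimal sketch (Python/NumPy, names ours), with A = (1, 1, 1) assumed by default as after white balance:

```python
import numpy as np

def recover_radiance(I, t, A=(1.0, 1.0, 1.0), t0=0.1):
    """Eq. (5): J = (I - A) / max(t, t0) + A.
    I is H x W x 3, t is H x W, t0 bounds the transmission from below."""
    t = np.maximum(t, t0)[..., None]              # H x W x 1
    A = np.asarray(A, dtype=float)[None, None, :]
    return (I - A) / t + A
```

For transmissions above t0 this exactly inverts the formation model (1), which is easy to verify on a synthetic example.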

4. Haze image enhancement by a modified unsharp masking algorithm

Generally, most dehazing algorithms cannot enhance the sharpness of the image while removing the haze, even though sharpness is usually useful in post-processing such as segmentation and detection. In this section, a modified unsharp masking algorithm is incorporated into our dehazing algorithm, which can not only remove the haze but also enhance the sharpness. At the same time, it compensates for the Gaussian blur discussed in Section 3.

4.1. Modified unsharp masking algorithm

In image enhancement, the classical unsharp masking algorithm can be described as v = y + γ(x − y), where x is the input image, y is the result of a low-pass filter such as a median filter, and the gain γ (γ > 0) is a real constant. The image detail signal d = x − y is amplified to increase the image sharpness. However, the signal d may be composed of three components: (1) details of the image, (2) noise, and (3) over-shoots and under-shoots in areas of sharp edges due to the smoothing of edges. It is clearly undesirable to enhance the noise, and the enhancement of the under-shoots or over-shoots, which creates the visually unpleasant halo effect, is undesirable as well. Ideally, the algorithm should only enhance the image details, which requires a filter that is not sensitive to noise and does not smooth sharp edges. Many works have addressed these issues, e.g. with edge-preserving filters and adaptive gain control [9].
In exploratory data analysis, it is well known that a signal can be decomposed into two parts: one part fits a particular model, while the other is the residual. From this point of view, the output of the filter, y, can be regarded as the part of the image that fits the model, and d is the residual part. A general form of the unsharp masking algorithm can be written as:

v = h(y) + γ(d)    (6)

Here v is the output of the algorithm, and both h(y) and γ(d) can be linear or nonlinear functions. This model explicitly states that γ(d) enhances the sharpness part of the image. This forces the algorithm developer to carefully select an appropriate model and avoid linear models. In addition, the model permits the incorporation of contrast enhancement by means of a suitable processing function such as a dehazing algorithm.
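A minimal sketch of the generalized framework of Eq. (6) (Python/NumPy, names ours): y is obtained with a 3 × 3 median filter, as in Section 4.2, and h and γ are passed in as arbitrary functions:

```python
import numpy as np

def median3(x):
    """3 x 3 median filter (edge-padded): the model part y of the image."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(p[i:i + 3, j:j + 3])
    return out

def generalized_um(x, h_fn, gamma_fn):
    """Eq. (6): v = h(y) + gamma(d), with y the median-filtered model
    part and d = x - y the residual."""
    y = median3(x)
    d = x - y
    return h_fn(y) + gamma_fn(d)
```

With h and γ both the identity, v = y + (x − y) = x and the input is reproduced; the paper instead takes h to be the dehazing algorithm of Section 3 and γ the adaptive gain of Eq. (7).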

4.2. Enhancement using modified unsharp masking algorithm

The haze image enhancement algorithm using modified unsharp masking is shown in Fig. 3. Median filtering is employed to obtain the detail signal; since a median filter achieves a good filtering effect with a 3 × 3 or 5 × 5 window, and considering the processing speed, a 3 × 3 window is adopted in this paper. The dehazing algorithm used in Fig. 3 is the one described in Section 3.

Fig. 3 shows that, in order to enhance the detail signal, γ(d) must be greater than d. Using a single gain for the whole image does not lead to good results, because a relatively large gain is required to enhance the small details, but a large gain also leads to over-saturation of the detail signal wherever its values exceed a certain threshold. Over-saturation is undesirable because different amplitudes of the detail signal are mapped to the same amplitude of either 1 or 0, losing useful information. Therefore, the gain must be adaptively controlled.

Fig. 3. Block diagram of the modified unsharp masking algorithm: median filtering yields y, the dehazing algorithm produces h(y), and the adaptive control yields γ(d), giving v = h(y) + γ(d).

Fig. 4. Illustration of adaptive control.

Fig. 5. The effect of the adaptive control. (a) Original image. (b) γ(d) = d. (c) h = 2. (d) h = 4. (e) h = 6.

Fig. 6. From left to right: the original image and the results obtained by Tan [4], Kaiming et al. [6], Tarel and Hautiere [7], and our algorithm; patch size 15 × 15, h = 4.

In the following, the detail processing algorithm is described. Note that the dynamic range of the detail image d is (−1, 1). A simple idea is thus to make γ(d) a function of the signal d that increases rapidly from 0 when |d| < T and increases slowly towards 1 as |d| → 1. More specifically, we propose the sigmoid adaptive control function:

γ(d) = b/(1 + exp(−hd)) − a    (7)

where h is an integer parameter that controls the rate of increase. The two parameters a and b are determined by the extrema of γ(d): if a = 1, b = 2 and h = 2, γ(d) is a tan-sigmoid function. In fact, γ(d) is determined only by the parameter h in Eq. (7), since a and b are fixed by the conditions γ(0) = 0 and γ(1) = 1 once h is set. The effect of h is shown in Fig. 4: the nonlinear mapping function is close to saturation when h is large, while the effect of the adaptive control is not significant when h is small. This is also observed in our experiments with the images in Fig. 5. Setting h = 4 appears most appropriate.
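The gain of Eq. (7) can be sketched as follows (Python, names ours), with a and b solved in closed form from γ(0) = 0 and γ(1) = 1 for a given h:

```python
import math

def make_gain(h=4):
    """Sigmoid gain of Eq. (7): gamma(d) = b / (1 + exp(-h*d)) - a.
    gamma(0) = 0 forces a = b/2; gamma(1) = 1 then fixes b."""
    b = 1.0 / (1.0 / (1.0 + math.exp(-h)) - 0.5)
    a = b / 2.0
    def gamma(d):
        return b / (1.0 + math.exp(-h * d)) - a
    return gamma

gamma = make_gain(h=4)
# gamma is odd, amplifies small details (|gamma(d)| > |d| for small |d|)
# and saturates smoothly as |d| -> 1.
```

For h = 2 this construction gives a slightly different (a, b) pair than the tan-sigmoid example above, since tanh(1) < 1; the paper's conditions γ(0) = 0 and γ(1) = 1 are what the code enforces.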

5. Comparison experiments

To demonstrate the effectiveness of our method, some real images of outdoor scenes are used in our experiments. Fig. 6 shows a comparison between the results obtained by [4,6,7] and our algorithm. From left to right are the original image and the results obtained by Tan [4], Kaiming et al. [6], Tarel and Hautiere [7], and our algorithm. Halo artifacts are not removed by Tan's method or by Tarel and Hautiere's method, for instance in the area between the tree trunks shown in the red ellipse. Our method removes halo artifacts effectively and retains more details in the tree trunks. Actually, in a remote

Fig. 7. From left to right: the original image and the results obtained by Fattal [5], Kaiming et al. [6], Tarel and Hautiere [7], and our algorithm; patch size 15 × 15, h = 4.

Fig. 8. From left to right: the original image and the results obtained by Tan [4], Fattal [5], Kopf et al. [15], Kaiming et al. [6], Tarel and Hautiere [7], and our algorithm; patch size 15 × 15, h = 2.

Table 1
Processing time corresponding to Fig. 9.

        7 × 7 (s)   15 × 15 (s)   31 × 31 (s)   61 × 61 (s)
Tarel   0.68        1.70          5.70          20.59
Ours    1.16        1.58          3.35          10.70

Table 2
Processing time corresponding to Fig. 10.

        7 × 7 (s)   15 × 15 (s)   31 × 31 (s)   61 × 61 (s)
Tarel   0.79        2.00          6.63          23.69
Ours    1.31        1.82          3.88          12.12

viewing location, the result of Tan [4] looks more legible but the color distortion is serious, while the result of our method still seems misty. At a near viewing location, however, our algorithm provides more detail than Kaiming et al. [6] and looks more pleasing, e.g. the texture of the trunk. Fig. 7 shows a comparison between the results obtained by [5–7] and our algorithm; from left to right are the original image and the results obtained by Fattal [5], Kaiming et al. [6], Tarel and Hautiere [7], and our algorithm. There are again some halo artifacts in Tarel and Hautiere's method, while our method obtains results of similar or better quality than the others, e.g. the leaves in the top-left of the image in Fig. 7. Fig. 8 shows a comparison between the results obtained by [4–7,15] and our algorithm; from left to right are the original image and the results obtained by Tan [4], Fattal [5], Kopf et al. [15], Kaiming et al. [6], Tarel and Hautiere [7], and our algorithm. The visual dehazing effect seems better than Tan [4] and slightly poorer than the others; however, our algorithm runs faster.
The main computational burden of our method is the minimum filter used for the patch dark channel prior. Although several intermediate calculations, such as the fast wavelet transform, Gaussian filter, and median filter, are employed in our method, their computational burden is smaller than that of the minimum filter because their filter windows are smaller. By using Marcel van Herk's fast algorithm [16], the complexity is linear in the image size. The complexity of Tarel and Hautiere's method [7] is dominated by the median filter, which is also a linear function of the number of input image pixels. However, the median filter may not be faster than the minimum filter for the same patch size. Furthermore, Tarel and Hautiere's method [7] needs to apply median filtering on the median, so their method must carry out median filtering twice. As a result, the runtime of [7] is about twice our computational time, as can be seen from Tables 1 and 2, respectively.

Fig. 9. Recovered images using different patch sizes. From top to bottom: the original image and the images processed with 7 × 7, 15 × 15, 31 × 31, and 61 × 61 patch sizes by Tarel and Hautiere [7] (right) and our algorithm (left), h = 4.

Fig. 10. Recovered images using different patch sizes. From top to bottom: the original image and the images processed with 7 × 7, 15 × 15, 31 × 31, and 61 × 61 patch sizes by Tarel and Hautiere [7] (right) and our algorithm (left), h = 4.

Table 3
Processing time comparison with Tarel and Hautiere [7] for different image sizes.

        450 × 299 (s)   465 × 384 (s)   600 × 400 (s)   1000 × 327 (s)   1024 × 738 (s)
Tarel   1.14            1.57            2.08            2.86             5.35
Ours    1.12            1.46            1.87            2.49             4.94

Fig. 11. The original image is shown on the left. Results of Tarel and Hautiere [7] (middle) and ours (right); patch size 15 × 15, h = 4. From top to bottom, the image sizes are 450 × 299, 465 × 384, 600 × 400, 1000 × 327, and 1024 × 738.

Figs. 9 and 10 show the haze removal results obtained by Tarel and Hautiere's method [7] and by our algorithm using different patch sizes. The image sizes are 512 × 384 and 512 × 460, respectively. From top to bottom are the original image and the images processed with different patch sizes: 7 × 7, 15 × 15, 31 × 31, and 61 × 61. The images recovered with Tarel and Hautiere's method [7] are shown in the right column of Figs. 9 and 10, and ours in the left. The colors look over-saturated for patch sizes of 7 × 7 and 15 × 15, and the results appear more natural in the last two rows than in the first two. This shows that the haze removal method works well for sufficiently large patch sizes. However, we consider a patch size of 15 × 15 acceptable when real-time processing is required.
We now compare the runtimes corresponding to Figs. 9 and 10 with Tarel and Hautiere's method [7], the fastest among existing state-of-the-art algorithms. The experiments are run in MATLAB, and the results are shown in Tables 1 and 2. Tarel and Hautiere's method [7] is executed using the MATLAB code from the authors' homepage. The processing time of [7] is twice ours when the patch size is set to 61 × 61, which confirms that our method runs faster than Tarel and Hautiere's method [7]. However, when the patch size is set to 7 × 7, our runtime is longer than that of [7], because we employ some intermediate calculations in our method: when the size of the minimum filter is not far larger than the sizes of the intermediate calculations, such as the Gaussian blur filter in Fig. 2, the intermediate calculations cannot be neglected.
Our last experiment compares runtime and visual quality for different image sizes with Tarel and Hautiere [7] in Table 3 and Fig. 11. All experiments are executed in MATLAB on a PC with a 2.7 GHz Intel Pentium Dual-Core processor, and the window size for both [7] and our method is 15 × 15. Table 3 shows that both Tarel and Hautiere's method and ours run in linear time, and that our method runs slightly faster than [7]. Furthermore, the proposed method shows better dehazing performance than Tarel and Hautiere [7] in the visual comparison of Fig. 11, especially in halo rejection and detail preservation.

6. Discussion and conclusions

This paper proposes a novel dehazing algorithm, based on transmission-map fusion, that simultaneously dehazes a single haze image and enhances its sharpness without using any extra information. Two coarse transmission maps obtained with the dark channel prior are fused: one computed from single-point pixels and the other from patches. To improve the fusion, a Gaussian kernel is applied to blur the fused transmission map. The method's main advantage is its speed, since its complexity is only linear in the input image size, while it achieves visual results comparable to or better than state-of-the-art algorithms, as illustrated in the experiments. It also removes halo artifacts effectively. Thanks to its speed, the proposed algorithm may be used advantageously as a pre-processing step in many systems, ranging from surveillance and intelligent vehicles to remote sensing.
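The pipeline summarized above can be sketched as follows. This is a hedged illustration of the two-map fusion idea, not the paper's exact implementation: the airlight value, fusion weight, Gaussian sigma, and transmission floor below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter, gaussian_filter

def dehaze(img, A=0.95, patch=15, w=0.95, alpha=0.5, sigma=3.0, t0=0.1):
    """Sketch of dehazing by transmission-map fusion:
    1) single-point and patch-based dark channels,
    2) a transmission estimate from each,
    3) blend the two maps and smooth with a Gaussian kernel,
    4) invert the scattering model I = J*t + A*(1 - t).
    alpha, sigma, t0 and A are illustrative choices."""
    point_dc = img.min(axis=2)
    patch_dc = minimum_filter(point_dc, size=patch)
    t_point = 1.0 - w * point_dc / A
    t_patch = 1.0 - w * patch_dc / A
    t = gaussian_filter(alpha * t_point + (1.0 - alpha) * t_patch, sigma)
    t = np.clip(t, t0, 1.0)[..., None]       # floor avoids division blow-up
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Synthetic hazy input in [0.4, 1.0] (haze lifts dark values toward A).
hazy = np.clip(0.6 * np.random.default_rng(1).random((80, 100, 3)) + 0.4, 0, 1)
out = dehaze(hazy)
print(out.shape == hazy.shape)  # True
```

Blending a per-pixel map with a patch map is what suppresses halos at depth edges: the per-pixel estimate follows edges exactly, while the patch estimate is spatially stable.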
Since our haze image enhancement method relies on the dark channel prior, which is a statistical prior, it may not work well for some particular images. The dark channel prior becomes invalid when the scene objects are inherently similar to the atmospheric light: near such objects, the dark channel of the scene radiance, whether computed from single-point pixels or patches, has bright values.
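This failure mode can be made concrete with a small numeric example (the pixel values and the weight w = 0.95 are illustrative):

```python
import numpy as np

A, w = 1.0, 0.95
# A bright, haze-free object similar to the airlight (e.g. a white wall):
bright_object = np.array([0.98, 0.97, 0.99])

dc = bright_object.min()      # dark channel is bright, not near zero
t_est = 1.0 - w * dc / A      # approx. 0.08: wrongly implies dense haze

# Inverting the model with this underestimated transmission would
# over-amplify (I - A)/t and darken or distort the object.
print(t_est < 0.1)  # True
```

A ground-truth transmission for a haze-free object would be close to 1; the prior instead reports heavy haze, which is exactly the breakdown described above.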
In essence, this method removes halos by fusing transmission maps at just two scales. Using several scales could be a promising direction for improvement; however, fusing several scales is more complex than fusing two, and the computational cost is correspondingly higher. In future work, we will investigate such multi-scale fusion methods.

References

[1] Koschmieder H. Theory of the horizontal visual range. Beitr Phys Freien Atm 1924;12:171–81.
[2] Narasimhan SG, Nayar SK. Contrast restoration of weather degraded images. IEEE Trans Pattern Anal Mach Intell 2003;25:713–24.
[3] Shwartz S, Namer E, Schechner YY. Blind haze separation. In: Proceedings of the IEEE computer society conference on computer vision and pattern
recognition; 2006. p. 1984–91.
[4] Tan RT. Visibility in bad weather from a single image. In: Proceedings of the IEEE computer society conference on computer vision and pattern
recognition; 2008. p. 1–8.
[5] Fattal R. Single image dehazing. In: ACM SIGGRAPH; 2008. p. 1–9.
[6] Kaiming H, Jian S, Xiaoou T. Single image haze removal using dark channel prior. IEEE Trans Pattern Anal Mach Intell 2011;33:2341–53.
[7] Tarel J-P, Hautiere N. Fast visibility restoration from a single color or gray level image. In: Proceedings of the IEEE international conference on computer
vision; 2009. p. 2201–8.
[8] Jing Y, Chuangbai X, Dapeng L. Physics-based fast single image fog removal. In: Proceedings of the IEEE international conference on signal processing;
2010. p. 1048–52.
[9] Polesel A, Ramponi G, Mathews VJ. Image enhancement via adaptive unsharp masking. IEEE Trans Image Process 2000;9:505–10.
[10] Deng G. A generalized unsharp masking algorithm. IEEE Trans Image Process 2011;20:1249–61.
[11] Narasimhan SG, Nayar SK. Chromatic framework for vision in bad weather. In: Proceedings of the IEEE conference on computer vision and pattern
recognition; 2000. p. 598–605.
[12] Nayar SK, Narasimhan SG. Vision in bad weather. In: Proceedings of the seventh IEEE international conference on computer vision; 1999. p. 820–7.
[13] Schechner YY, Narasimhan SG, Nayar SK. Instant dehazing of images using polarization. In: Proceedings of the IEEE computer society conference on
computer vision and pattern recognition; 2001. p. 325–32.
[14] Narasimhan SG, Nayar SK. Vision and the atmosphere. Int J Comput Vision 2002;48:233–54.
[15] Kopf J, Neubert B, Chen B, Cohen M, Cohen-Or D, Deussen O, et al. Deep photo: model-based photograph enhancement and viewing. ACM Trans
Graphics 2008;27:1–10.
[16] van Herk M. A fast algorithm for local minimum and maximum filters on rectangular and octagonal kernels. Pattern Recogn Lett 1992;13:517–21.

Zhongliang Wang received the M.S. degree in microelectronics and solid state electronics from Guizhou University, Guiyang, China, in 2005. He is currently working toward the Ph.D. degree in signal and information processing in the Shaanxi Key Laboratory of Information Acquisition and Processing, School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China. His research interests include image enhancement and hyperspectral remote sensing.

Yan Feng received the Ph.D. degree in signal and information processing from Northwestern Polytechnical University, Xi'an, China, in 2007. She is currently a Professor with the School of Electronics and Information, Northwestern Polytechnical University. Her research interests focus on hyperspectral remote sensing image compression, compressed sensing, and object tracking.
