
Multimedia Tools and Applications

https://doi.org/10.1007/s11042-023-17603-z

Review of dehazing techniques: challenges and future trends

Abeer Ayoub1 · Walid El‑Shafai1,2 · Fathi E. Abd El‑Samie1,3 · Ehab K. I. Hamad4 · El‑Sayed M. EL‑Rabaie1

Received: 24 February 2023 / Revised: 29 August 2023 / Accepted: 24 October 2023


© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024

Abstract
The phenomenon of atmospheric haze arises due to the scattering of light by minute par-
ticles suspended in the atmosphere. This optical effect gives rise to visual degradation in
images and videos. The degradation is primarily influenced by two key factors: atmospheric attenuation and scattered light. Scattered light shrouds an image in a whitish veil, while attenuation diminishes the image's inherent contrast. Efforts to enhance
image and video quality necessitate the development of dehazing techniques capable of
mitigating the adverse impact of haze. This scholarly endeavor presents a comprehen-
sive survey of recent advancements in the domain of dehazing techniques, encompassing
both conventional methodologies and those founded on machine learning principles. Tra-
ditional dehazing techniques leverage a haze model to deduce a dehazed rendition of an
image or frame. In contrast, learning-based techniques employ sophisticated mechanisms
such as Convolutional Neural Networks (CNNs) and different deep Generative Adversarial
Networks (GANs) to create models that can discern dehazed representations by learning
intricate parameters like transmission maps, atmospheric light conditions, or their com-
bined effects. Furthermore, some learning-based approaches facilitate the direct generation
of dehazed outputs from hazy inputs by assimilating the non-linear mapping between the
two. This review study delves into a comprehensive examination of datasets utilized within
learning-based dehazing methodologies, elucidating their characteristics and relevance.
Furthermore, a systematic exposition of the merits and demerits inherent in distinct dehaz-
ing techniques is presented. The discourse culminates in the synthesis of the primary quan-
daries and challenges confronted by prevailing dehazing techniques. The assessment of
dehazed image and frame quality is facilitated through the application of rigorous evalua-
tion metrics, a discussion of which is incorporated. To provide empirical insights, the study
meticulously elucidates simulation results and presents an in-depth analysis of prominent
dehazing techniques. By orchestrating a seamless integration of theoretical exposition and
practical evaluation, this research contributes to the advancement of the field, paving the
way for improved image and video fidelity in the presence of atmospheric haze.

Keywords Dehazing · Atmospheric light · Traditional dehazing techniques · Learning-based dehazing techniques · Transmission map · Spectral entropy · GAN · CNN

* Abeer Ayoub
abeerayoub777@gmail.com

Extended author information available on the last page of the article


1 Introduction

The perceptual acuity of visual content is compromised due to the intricate interplay of
scattering and absorption phenomena triggered by minuscule atmospheric particulates [1].
As a consequence, the resultant imagery suffers from both diminished contrast and an over-
lay of whitish obscurity [2]. Such images, characterized by haziness, undergo substantial
degradation, leading to the loss of critical informational content. The strategic deployment
of dehazing techniques emerges as a viable avenue for the rectification of this pervasive
haze-induced degradation in both images and frames.
In addressing this challenging predicament, a diversity of dehazing methodologies have
been conceptualized. The exigency for their development is underscored by the presence
of hazy images and the consequential impediments posed to applications reliant on visual
perception, encompassing domains such as satellite imaging. Regrettably, the inaccuracy-
laden outcomes engendered by the majority of autonomous systems relying upon input
image interpretations further underscore the significance of this pursuit.
Being diverse in nature, dehazing techniques can be broadly classified into traditional
and learning-based paradigms. In the realm of traditional techniques, fundamental reliance
on a haze model is discernible. This model functions as a scaffold upon which the pro-
cess of generating dehazed images or frames is facilitated. The underpinning principles of
traditional techniques involve the determination of atmospheric light attributes from hazy
source images, alongside the computation of transmission maps predicated on depth dis-
parities between the observed scene and the imaging sensor or camera. This computed data
is subsequently integrated into the haze model to yield dehazed outputs [3–16]. However,
the precision of transmission map computations is contingent upon the accuracy of the
depth maps, introducing an inherent challenge in obtaining exact values. Moreover, the
dynamic nature of atmospheric light conditions, fluctuating from diurnal to nocturnal, con-
trasts with the often-employed practice of utilizing constant values for these attributes. A
noteworthy facet of traditional techniques is their susceptibility to cumulative computa-
tional errors, particularly given their reliance on manual interventions. Consequently, the
ambit of video dehazing becomes further compounded by these propagated inaccuracies.
Illustrated in Figs. 1 and 2 are elucidations of the core concepts underpinning traditional
dehazing techniques, while Fig. 3 shows histograms and spectral entropies that substantiate
the differentiation between hazy and dehazed images.
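The spectral-entropy comparison of Fig. 3 can be reproduced in outline. The sketch below assumes one common definition, the Shannon entropy of the normalized 2-D power spectrum; the test patches and function name are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def spectral_entropy(gray: np.ndarray) -> float:
    """Shannon entropy (bits) of the normalized 2-D power spectrum."""
    spectrum = np.abs(np.fft.fft2(gray)) ** 2
    p = spectrum / spectrum.sum()        # treat the spectrum as a distribution
    p = p[p > 0]                         # drop zero bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A flat, haze-like patch concentrates energy at DC; a textured patch
# spreads energy across frequencies and therefore scores higher.
flat = np.full((64, 64), 0.8)
textured = np.random.default_rng(0).random((64, 64))
assert spectral_entropy(flat) < spectral_entropy(textured)
```

Since haze suppresses fine detail, a dehazed image is expected to exhibit both a wider histogram and a higher spectral entropy than its hazy counterpart, which is the differentiation Fig. 3 visualizes.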
By conferring due diligence to both theoretical explication and empirical illustration,
this scholarly discourse underscores the dynamism inherent in the realm of dehazing tech-
niques. Nuanced analysis not only lays bare the challenges entailed, but also offers a pano-
ramic vista of solutions requisite for the advancement of the field.
Learning-based dehazing techniques harness the capabilities of CNNs in conjunction
with a suite of deep GANs to effectuate the restoration of images afflicted by haze-induced
degradation. The core objective of these techniques revolves around the resolution of the
haze predicament and the subsequent acquisition of pristine, dehazed renditions. This is
achieved through a process whereby the model or network assimilates the intricate com-
putation of the transmission map, denoted as t(x), and the atmospheric light component,
denoted as A. This can occur either independently or through a consolidated integration
of the learning processes governing t(x) and A. A requisite precondition for the training of supervised dehazing models is the availability of different categories of hazy and dehazed images, thereby nurturing the model's ability to discern and rectify hazy artifacts [17–19].


Fig. 1  Dehazing techniques with Atmospheric Scattering Model (ASM)

Fig. 2  Haze model

A pivotal dichotomy manifests in supervised dehazing models, distinguishing between ASM-dependent and ASM-independent paradigms. Within the ASM-dependent sphere, training entails the integration of t(x) solely or the conjoint learning of
both t(x) and A. In this context, a corollary necessity is the provision of paired hazy
and dehazed image sets [20, 21]. Conversely, ASM-independent learning-based techniques eschew direct reliance on the atmospheric scattering model, focusing instead on the acquisition of non-linear relationships characterizing the transition from hazy to
dehazed states [22]. To optimize outcomes, pre-processing techniques aimed at noise
reduction and contrast enhancement in hazy images should precede the application of
dehazing techniques [23–25].


Fig. 3  Comparison between the histograms and spectral entropies of hazy and dehazed images

While supervised dehazing models have demonstrated remarkable efficacy, their operational viability hinges upon the availability of corresponding hazy and dehazed
image pairs for training. In recognition of the logistic challenges that can arise in securing such paired datasets, a class of unsupervised dehazing techniques has been developed. These techniques, exemplified by approaches like those in [26–29], alleviate the
requirement for paired datasets by enabling the identification and dehazing of images,
relying on the inherent image statistics to facilitate the dehazing process.
This paper furnishes a comprehensive overview of advanced dehazing techniques, render-
ing lucid explications of their underlying principles. Moreover, an array of dataset types is
meticulously characterized, accompanied by a thorough delineation of evaluation metrics. To
guide the selection of appropriate techniques, a comprehensive enumeration of the merits and
demerits intrinsic to different dehazing techniques is expounded upon. Through this synthesis
of theoretical elucidation and empirical analysis, the objective is to illuminate the frontiers of
dehazing research, paving the way for enhanced image quality in the presence of atmospheric
degradation.
Table 1 provides a comparison contrasting hazy images against their unaltered, clear coun-
terparts. Evident in hazy images are diminished contrast, compromised visibility, and loss of
intricate details. The overarching objective of dehazing techniques resides in the mitigation of
the air-light color distortion, concomitant with the augmentation of contrast and visibility, in
addition to the restoration of intricate details.
The subsequent sections of this paper are organized into a coherent framework comprising
seven distinct segments. In Sect. 2, the delineation of the overarching objectives that reveal the
scope of this survey is expounded. Section 3 delves into the exposition of the physical model
underpinning haze, in tandem with a comprehensive review of pertinent antecedent research.
Continuing this trajectory, Sect. 4 introduces an exhaustive overview of datasets germane to
learning-based dehazing techniques. The subsequent Sect. 5 expounds upon the outcomes
derived from the simulations undertaken. Concurrently, Sect. 6 encapsulates an incisive analy-
sis of the pivotal quandaries and complexities that beset contemporary dehazing techniques.
Ultimately, the discourse culminates in Sect. 7, wherein concluding remarks are proffered,
encapsulating the overarching findings and contributions introduced by this study.

2 Targets of this survey

This comprehensive survey is dedicated to the exposition and elucidation of pivotal image
and video dehazing techniques. The underlying objectives driving this study can be succinctly
encapsulated as follows:

a. A meticulous explication of the haze model, coupled with a delineation of the nuanced
transition from hazy to dehazed states.
b. An in-depth exploration of the gamut of traditional dehazing techniques, wherein the
haze model assumes a pivotal role in determining the dehazed output for images or
frames.
c. An erudite dissection of noteworthy deep-learning-based dehazing techniques, unrave-
ling their intrinsic characteristics and salient attributes.
d. A comprehensive characterization of the diverse datasets instrumental in the training of
deep learning networks for dehazing applications.
e. An incisive analysis of the empirical outcomes emerging from the deployment of vari-
ous dehazing techniques, culminating in the identification and classification of the most
pivotal methodologies.
f. Integration of evaluation metrics to objectively gauge the quality of dehazed images or
frames, enhancing our comprehensive understanding of the efficacy of the presented
techniques.

Table 1  Comparison between hazy and original images

  Original (clear) image     Hazy image
  High contrast              Low contrast
  High visibility            Low visibility
  Complete details           Lost details


3 Related work

The foundational formulation of the physical haze model was initially introduced by
Koschmieder [30], and delineated as:
I(x) = t(x)J(x) + A(1 − t(x)) (1)
Herein, t(x) signifies the transmission map, encapsulating the proportion of light reach-
ing the camera uncontaminated by scattering. I(x) denotes the image under the influence
of haze, where x designates the pixel coordinates of the hazy image. J(x) represents the
dehazed image, while A embodies the atmospheric light. t(x)J(x) embodies the direct atten-
uation component, while A(1 − t(x)) embodies the fraction of light subjected to scattering.
The function t(x) is defined as [1]:

t(x) = e^(−βd(x)) (2)

Here, β embodies the scattering coefficient inherent to the medium, and d(x) signifies the distance between the camera and the observed scene. It is noteworthy that t(x) = 0, when d(x) = ∞, signifies images wholly enshrouded in haze, whereas t(x) = 1, when d(x) = 0, indicates impeccably clear images. Within the bounds of most scenarios, the constraint 0 ≤ t(x) ≤ 1 is consistently upheld.
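The two equations can be exercised numerically. The sketch below synthesizes a hazy signal from Eqs. (1) and (2) and then inverts Eq. (1) to recover the clear radiance; the values of β, A, and the depth samples are hypothetical, chosen only for illustration.

```python
import numpy as np

beta = 1.2                                     # scattering coefficient (Eq. 2)
A = 0.9                                        # atmospheric light, assumed known
d = np.linspace(0.0, 5.0, 6)                   # hypothetical per-pixel depth
J = np.array([0.2, 0.4, 0.6, 0.8, 0.5, 0.3])   # clear scene radiance

t = np.exp(-beta * d)                          # transmission map, Eq. (2)
I = t * J + A * (1.0 - t)                      # hazy observation, Eq. (1)

# Inverting Eq. (1) recovers the clear image when t and A are exact;
# t is clamped away from zero, as done in practice, since small
# transmissions would otherwise amplify noise without bound.
J_rec = (I - A) / np.maximum(t, 1e-3) + A
assert np.allclose(J_rec, J, atol=1e-2)
```

Note how the boundary cases above play out: at d = 0 the transmission is 1 and I equals J, while at large depth t approaches 0 and I approaches the atmospheric light A.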

3.1 Image dehazing

The methodology introduced in reference [4] diligently delineated the pivotal haze model
variables t(x), A, and J(x), leveraging the spectral alterations inherent to pixels across dif-
ferent atmospheric conditions. This enabled the determination of the atmospheric light and
transmission map, effectively, operationalizing the physical haze model to extract dehazed
images. The contributions of the technique in [4] were pivotal, notably capitalizing on the
profound light penetration of near-infrared wavelengths. This strategic choice facilitated
the preservation of intricate details often obliterated in the visible light spectrum. Through
the judicious determination of the atmospheric light and transmission map, this technique
concretized the process of obtaining dehazed images in accordance with the parameters
stipulated by the physical haze model.
In a parallel vein, the innovation proffered by the technique in [9] resonated in the
realm of single-image dehazing, predicated on the tenets of the transmission map and
the Dark Channel Prior (DCP) model. By harnessing the heuristic of the dark channel,
a distinctive phenomenon apparent in outdoor dehazed images, this approach offered the
opportunity for dehazing a solitary hazy image. The statistical tenet underlying outdoor
dehazed images, termed the DCP, manifests as a preponderance of pixels within given
patches exhibiting profound darkness in at least one color channel, often approaching or
attaining blackness [14]. These dark pixels are characterized by intensity values conspicu-
ously low or nearly indistinguishable from zero. Mathematically expressed, this model
takes the form:
I^dark(x) = min_{x′ ∈ Ω(x)} ( min_{c ∈ {R,G,B}} I^c(x′) ) (3)

Herein, Ω(x) pertains to the patch centered around the pixel x, min_{x′ ∈ Ω(x)} represents a minimum filter operation, and I^dark(x) signifies the dark channel of the hazy image under consideration. However, the utilization of the dehazing technique employing the DCP engenders certain
complications such as color fidelity discrepancies and the emergence of halo artifacts.
In a constructive response to these challenges, reference [31] introduced an innovative
patch-based approach predicated on the principles of DCP, effectively offering a strategic
recourse to contend with the inherent haze predicament. Notably, a pertinent issue surfaces,
when images exhibit resemblance to atmospheric light and lack the presence of shadows
cast in their trajectory.
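Eq. (3) translates directly into a channel-wise minimum followed by a patch-wise minimum filter. The sketch below is a naive NumPy rendering; the 15-pixel default patch and the synthetic test scene are assumptions made only for illustration.

```python
import numpy as np

def dark_channel(image: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel of an H x W x 3 image, following Eq. (3):
    per-pixel minimum over the color channels, then a minimum
    filter over the patch Omega(x) centered on each pixel."""
    min_rgb = image.min(axis=2)
    h, w = min_rgb.shape
    r = patch // 2
    padded = np.pad(min_rgb, r, mode="edge")
    out = np.empty_like(min_rgb)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out

# Haze-free outdoor patches contain very dark pixels in at least one
# channel, so the dark channel stays near zero; a white veil lifts it.
rng = np.random.default_rng(1)
clear = rng.random((32, 32, 3)) * np.array([1.0, 1.0, 0.1])  # dark blue channel
hazy = 0.4 * clear + 0.6 * 1.0                               # Eq. (1), t = 0.4, A = 1
assert dark_channel(clear).max() <= 0.1
assert dark_channel(hazy).min() >= 0.6
```

The contrast between the two assertions is precisely the DCP heuristic: the magnitude of the dark channel acts as a rough per-patch haze indicator.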
Reference [32] introduced the idea of supplanting the most intricate component within
the DCP framework. In this vein, a bilateral filter is harnessed for the purpose of smoothing
the transmission map, thereby endowing it with a refined quality. Building upon these foun-
dations, the technique of [33] leverages Independent Component Analysis (ICA) to under-
take the estimation of the transmission map. This estimation is grounded in the underlying
assumption of statistical independence between transmission and surface shadow compo-
nents within an image. Importantly, the technique of [33] depends on the proposal set forth
in scenarios characterized by minimal haze, opting for an alternative course, when haze
concentrations are extensive.
Another noteworthy innovation, as introduced by the technique of [7], revolves around
the utilization of a non-local haze line. This technique computes per-pixel transmission
values based on pixel positions along the haze lines to which pixels are attributed. A dis-
tinctive attribute of these haze lines is their ability to encompass scenes marked by analo-
gous radiance colors, which are distributed throughout the entirety of the image plane. This
dispersion ultimately yields diverse distances from the camera for the constituent scenes,
effectively enriching the precision of the transmission map.
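The haze-line idea can be sketched as follows: pixels sharing a scene radiance lie on a line through the atmospheric light A in RGB space, and haze pulls them toward A along that line. A deliberately crude direction-quantization step stands in here for the spherical clustering of the actual technique in [7]; the function and its binning resolution are hypothetical.

```python
import numpy as np

def hazeline_transmission(pixels: np.ndarray, A: np.ndarray) -> np.ndarray:
    """Per-pixel transmission from haze lines (simplified sketch).

    Along a haze line, the radius r(x) = ||I(x) - A|| shrinks with haze,
    so t(x) is taken as r(x) / r_max, where r_max is the largest radius
    observed on the same line (assumed to contain a haze-free pixel)."""
    v = pixels - A                              # vectors from A, shape (N, 3)
    r = np.linalg.norm(v, axis=1)
    unit = v / np.maximum(r[:, None], 1e-9)     # direction on the unit sphere
    key = np.round(unit * 3).astype(int)        # crude direction binning
    _, idx = np.unique(key, axis=0, return_inverse=True)
    r_max = np.zeros(idx.max() + 1)
    np.maximum.at(r_max, idx, r)                # per-line maximum radius
    return np.clip(r / np.maximum(r_max[idx], 1e-9), 0.0, 1.0)

# Two pixels of the same scene color under different haze levels.
A = np.array([1.0, 1.0, 1.0])
J = np.array([0.2, 0.6, 0.4])
ts = np.array([0.9, 0.45])
I = ts[:, None] * J + (1 - ts[:, None]) * A     # Eq. (1)
t_est = hazeline_transmission(I, A)
```

Because both pixels fall on the same haze line, the estimate recovers their transmissions up to the normalization by the least-hazy member of the line.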
In the realm of image dehazing, a significant advancement is encapsulated within the
technique of [3], which is predicated upon an approach involving quadtree segmentation
and block-based analysis. At the heart of this innovation lies the fact that hazy images
inherently suffer from diminished contrast. By orchestrating a deliberate enhancement
of the contrast, the restoration of dehazed images is successfully achieved. Within this
methodology, the evaluation of smooth regions is conducted by deriving the dehazed
image, subsequently culminating in the subtraction of the computed mean and standard
deviation.
As a complementation, the technique of [34] introduced an atmospheric scattering
model as a foundational framework, which bears the capacity to discern the atmospheric
light and transmission map, thereby substantiating the estimation of dehazed images. This
model effectively leverages auxiliary information, such as polarized image pairs captured
across different atmospheric conditions or maps outlining the spatial separation between
the camera and the scenes. In this context, Middleton’s model is instrumental in estimat-
ing the transmission map, particularly in scenarios where supplementary information is
available.
Amidst the plethora of techniques, several endeavors are conspicuous in their bid to
approximate the transmission map grounded in physically plausible assumptions. However, this pursuit often comes at the cost of inescapable artifacts, such as halo effects.
As the field continues to evolve, the confluence of theoretical exploration and empir-
ical validation remains paramount in the quest to refine and augment image dehazing
methodologies.
In the continuum of dehazing methodologies, the technique of [35] emerged as a par-
adigm anchored in exploiting the characteristic low contrast intrinsic to hazy images.
The fundamental premise of this technique revolves around the deliberate augmentation
of image contrast, effectively culminating in the production of dehazed renditions. In
a parallel endeavor, the research work in [3] engenders the production of a transmis-
sion map through the implementation of a boundary constraint assumption coupled with
a contextual optimization strategy. While this technique stands poised to alleviate the
haze-induced visual degradation, it is important to underscore that scenarios featuring
color hues akin to the ambient light hue may precipitate color distortion in the resultant
dehazed images.
An additional advancement, materialized within the technique of [36], is underscored
by its pronounced focus on artifact reduction inherent in the transmission map. By judi-
ciously estimating both the atmospheric light and the transmission map, this technique sub-
sequently harnesses the haze model to effectuate the generation of dehazed images. Nota-
bly, the authors adopted the Gradient Residual Minimization (GRM) technique, adeptly
leveraging it to temper the manifestation of artifacts. As the pursuit of image dehazing con-
tinues to evolve, these concerted endeavors underscore the multifaceted strategies adopted
to augment image quality, while meticulously circumventing the pitfalls that can manifest
in the form of artifacts and color distortions.
In the realm of dehazing methodologies, the technique of [36] emerged with a focus on
the dehazing of videos on a frame-by-frame basis. It entails the initial display of dehazed
video frames, followed by the application of optical flow predicated upon a Markov Ran-
dom Field (MRF) framework. This strategic maneuver aims at mitigating the presence of
artifacts. The refinement process is particularly salient, notably addressing the transmission
map. However, it is pertinent to acknowledge that this technique exhibits limitations when
image objects assume color hues akin to the ambient light.
On a parallel trajectory, the technique of [37] presented a distinctive solution to the
challenge of halo artifacts through the deployment of optical flow. This intervention indeed
succeeded in eliminating halo artifacts, albeit the spatial artifacts that commonly manifest
across different images remain impervious to the technique's remediation efforts.
Further enriching the domain, the authors of [38] introduced a proposition wherein radi-
ance and reflectance components are synergistically harnessed to derive the transmission
map. Notably, the technique depends on a dynamic manipulation of the reflectance map,
thereby effectuating changes by modulating the attenuation parameter value. This change
transpires subsequent to the estimation of an initial reflectance map characterized by certain deficiencies. Central to the technique's operationalization is the utilization of a cost function encompassing reflectance and regularization terms, ultimately steering the optimization of both the transmission and reflectance maps. Accordingly, this technique merits the appellation of optimized dehazing, underlining the pursuit of holistic image enhancement by harmonizing multiple image components.
Within the realm of advanced dehazing methodologies, the authors of [16] introduced
a strategy that encompasses the calculation of two distinct transmission maps. These maps
are generated with varying window sizes, specifically a small window size of (3 × 3) and a
broader window size of (15 × 15). This duality is rooted in the premise of harnessing the
DCP technique to effectively circumvent the deleterious emergence of halo artifacts and
oversaturation issues within the resultant dehazed frames.
A pivotal observation arising from the technique of [16] is the proposition that the adoption of a small window size of (3 × 3) for the transmission map yields a discernible minimization of halo artifacts. However, this advantageous reduction comes at the expense
of an escalated computational burden, engendering increased processing time. Addition-
ally, the trade-off manifests as an elevation in oversaturation tendencies and an augmented
susceptibility to noise in the dehazed images. Conversely, the adoption of a larger window
size of (15 × 15) presents a different landscape. Herein, halo artifacts exhibit a more con-
spicuous prevalence, particularly in proximity to image edges. However, the attendant ben-
efits include a reduction in processing time, ameliorated oversaturation, and a more precise
estimation of the atmospheric light component.
A salient facet in the aftermath of the dual transmission map calculation is the deter-
mination of atmospheric light values for each individual channel. This determination is
enacted through the application of the DCP concept, wherein the DCP heuristic is invoked.
Within this context, the DCP posits that within the non-sky region, at least one color chan-
nel incorporates pixels characterized by intensities strikingly low and proximate to zero.
To bolster the integrity of the two transmission maps, the technique of [16] strategically
depends on smoothing filters. These procedural steps serve the vital purpose of ameliorating the approximation of the transmission maps, thereby rectifying the potential presence of halo artifacts. This multi-faceted approach, spanning window size optimization, atmospheric light estimation, and application of smoothing filters, underscores the nuanced pursuit in [16] of enhanced image dehazing performance, while actively mitigating
artifacts.
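The dual-window trade-off can be made concrete. A standard DCP-style transmission estimate, t = 1 − ω · dark(I / A), is assumed here (ω is a hypothetical retention parameter, commonly set near 0.95); the code contrasts the (3 × 3) and (15 × 15) windows discussed above.

```python
import numpy as np

def min_filter(img2d: np.ndarray, patch: int) -> np.ndarray:
    """Naive minimum filter with edge padding."""
    r = patch // 2
    p = np.pad(img2d, r, mode="edge")
    return np.array([[p[y:y + patch, x:x + patch].min()
                      for x in range(img2d.shape[1])]
                     for y in range(img2d.shape[0])])

def transmission(hazy: np.ndarray, A: float, patch: int,
                 omega: float = 0.95) -> np.ndarray:
    """DCP-style transmission estimate: t = 1 - omega * dark(I / A)."""
    dark = min_filter((hazy / A).min(axis=2), patch)
    return 1.0 - omega * dark

rng = np.random.default_rng(2)
hazy = rng.random((24, 24, 3)) * 0.5 + 0.4
t_small = transmission(hazy, A=0.95, patch=3)    # fewer halos, per the survey
t_large = transmission(hazy, A=0.95, patch=15)   # faster, halos near edges

# A wider minimum filter can only lower the dark channel, so the
# large-window transmission is pointwise >= the small-window one,
# which is one source of its oversaturation-reducing behavior.
assert np.all(t_large >= t_small)
```

The pointwise ordering verified at the end is a direct consequence of the nested minimum windows, independent of the image content.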
In the realm of image dehazing, the authors of [39] put forth a proposition rooted in
the concept of extracting multi-scale images from a single source and amalgamating them
to engender a cohesive multi-scale image for the purpose of dehazing. By capitalizing
on the distinctive attributes inherent to various scales, the fusion of multi-scale images
assumes a pivotal role in surmounting the challenges precipitated by haze-induced degra-
dation. This procedure unfurls with the generation of two derivative images from an initial
degraded image: the white-balanced section and the luminance-based image. The ensuing
synthesis of these derived images, coupled with their corresponding weight maps, yields
a substantive enhancement in the overall dehazing outcomes. Notably, the mechanism of
image amalgamation, augmented by judicious weight map assignment, yields pronounced
improvements, thereby underscoring its intrinsic significance.
To further optimize the efficacy of this methodology, pyramid decomposition is judi-
ciously introduced to the weight maps. This strategic intervention, characterized by its
capacity to enhance contrast and accentuate the sharpness of hazy images, operates in tan-
dem with the weight maps to negate any adverse effects potentially stemming from contrast
alterations and sharpness enhancements. In summary, the technique of [39] encapsulates
a multi-pronged approach, spanning multi-scale image extraction, strategic image fusion,
weight map refinement, and pyramid decomposition. This holistic endeavor underscores
the commitment to achieving robust dehazing outcomes, while judiciously mitigating any
potential side effects intrinsic to contrast and sharpness manipulation.
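The fusion pipeline above can be sketched end to end. In this illustration the two derived inputs, the weight maps, and the box-blur smoothing (a crude stand-in for pyramid decomposition of the weight maps) are all hypothetical simplifications of the technique in [39].

```python
import numpy as np

def box_blur(w: np.ndarray, k: int = 5) -> np.ndarray:
    """Stand-in for pyramid-level smoothing of a weight map."""
    p = np.pad(w, k // 2, mode="edge")
    out = np.zeros_like(w)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + w.shape[0], dx:dx + w.shape[1]]
    return out / (k * k)

def fuse(inputs, weights):
    """Per-pixel weighted average of derived images; weights are
    smoothed, then normalized to sum to one at every pixel."""
    weights = [box_blur(w) for w in weights]
    total = np.sum(weights, axis=0) + 1e-9
    return sum(w / total * im for w, im in zip(weights, inputs))

rng = np.random.default_rng(3)
gray = rng.random((16, 16))
white_balanced = np.clip(gray * 1.1, 0, 1)                   # derived input 1
contrast_boosted = np.clip((gray - 0.5) * 1.5 + 0.5, 0, 1)   # derived input 2
# Hypothetical weight: local deviation from the mean luminance.
w1 = np.abs(white_balanced - white_balanced.mean())
w2 = np.abs(contrast_boosted - contrast_boosted.mean())
fused = fuse([white_balanced, contrast_boosted], [w1, w2])
```

Normalizing the weights keeps the fused result a convex combination of the inputs, which is what prevents the contrast and sharpness manipulations from pushing pixel values out of range.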
In the realm of image dehazing, the authors of [40] introduced a paradigm that centers
upon the determination of atmospheric light, predicated upon the HSV (Hue, Saturation,
Value) color space. This technique capitalizes on the intrinsic understanding that regions
within an image afflicted by haze exhibit distinct characteristics within the HSV color
space. Specifically, such regions are typified by a reduction in saturation, concomitant with
color degradation due to the deleterious influence of haze. In tandem with these attributes,
a concurrent augmentation in brightness is observable, thereby resulting in the emergence
of substantial differences between the saturation and brightness values.
To operationalize this approach, the initial hazy image is judiciously transformed
from the RGB color space to the HSV color space. This transformative maneuver engen-
ders the creation of an image subdivided into blocks, each of size (32 × 32) pixels. Within
this framework, for each individual block, the difference between the brightness (V) and
saturation (S) parameters is meticulously computed. The block exhibiting the maximal dif-
ference value, denoted as maximum D, is discerned as the haziest and most opaque region.
In the determination of the haziest opaque region, the following formulation is pursued:
D = |V − S| (4)
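The block search of Eq. (4) can be sketched directly. Averaging the per-pixel D over each (32 × 32) block and taking the block with the maximum average is an assumption made here for illustration; the synthetic image splits a saturated clear region from a bright, desaturated hazy one.

```python
import colorsys
import numpy as np

def haziest_block(rgb: np.ndarray, block: int = 32):
    """Locate the haziest region via Eq. (4): haze raises brightness V
    and lowers saturation S, so the block with the largest mean
    D = |V - S| is taken as the most opaque one."""
    h, w, _ = rgb.shape
    hsv = np.array([[colorsys.rgb_to_hsv(*rgb[y, x]) for x in range(w)]
                    for y in range(h)])
    S, V = hsv[..., 1], hsv[..., 2]
    D = np.abs(V - S)
    best, best_pos = -1.0, (0, 0)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            score = D[y:y + block, x:x + block].mean()
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Left half: saturated, darker (clear). Right half: bright, washed out.
img = np.zeros((32, 64, 3))
img[:, :32] = [0.1, 0.5, 0.1]    # saturated green: V = 0.5, S = 0.8
img[:, 32:] = [0.9, 0.9, 0.85]   # whitish veil: V = 0.9, S ~ 0.06
pos, score = haziest_block(img)
```

The winning block is then used to sample the atmospheric light, since its pixels are the ones most dominated by air light.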

3.2 Video dehazing

Enhancing real-time visibility within captured videos holds paramount importance in numerous practical applications, including scene surveillance and the visual systems inte-
grated into vehicles or dynamic imagery scenarios. The domain of video dehazing encom-
passes three distinct procedural categories.
The foremost category encompasses frame-based video dehazing techniques. These
techniques adopt a singular-image dehazing approach for each frame constituting the video
stream [41]. The subsequent category, designated as fusion-based video dehazing, hinges
upon the amalgamation of enhanced foreground and background images extracted from
each frame. This cohesive merging process serves as the foundational principle behind the
enhancement of video dehazing outcomes.
The third category, known as universal component-based dehazing, is grounded in the
pursuit of a universal component capable of conferring dehazing improvements across all
frames of the video stream. In this context, the authors of [42] introduced a rapid video
dehazing technique, predicated upon the haze mask theory and guided image filtering. This
technique is predicated on the premise of subtracting the enhanced image from the original
hazy counterpart to engender the haze component, effectively yielding the mask layer of
the hazy image.
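The haze-mask subtraction can be illustrated on synthetic data. Here the "enhanced" frame is idealized as the true radiance J, so the residual reduces exactly to the air-light term of Eq. (1); in the actual technique of [42] the enhanced frame comes from guided image filtering.

```python
import numpy as np

def haze_mask(hazy: np.ndarray, enhanced: np.ndarray) -> np.ndarray:
    """Mask layer of a hazy frame: the residual left after removing the
    enhanced content, clipped non-negative since haze only adds a
    whitish air-light component."""
    return np.clip(hazy - enhanced, 0.0, 1.0)

# Synthetic frame from Eq. (1) with uniform transmission and air light.
t, A = 0.6, 1.0
rng = np.random.default_rng(4)
J = rng.random((8, 8, 3)) * 0.8          # clear radiance, kept below A
I = t * J + (1 - t) * A                  # hazy frame

mask = haze_mask(I, J)                   # residual = (1 - t) * (A - J)
assert np.allclose(mask, (1 - t) * (A - J))
```

The mask is brightest where the scene is darkest relative to the air light, which is exactly where the whitish veil dominates the observation.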
Additionally, the authors of [43] deployed the transmission of the scene background
image as a pivotal element in the video dehazing strategy. This strategic integration of the
background image transmission emerges as a cornerstone in the endeavor to enhance the
visibility and quality of the video under the influence of haze. Collectively, these distinct
techniques signify the multi-faceted exploration within the realm of video dehazing, delin-
eating an array of strategies to tackle the pervasive challenges of haze-induced degradation
in dynamic visual contexts.
Within the realm of video dehazing methodologies, the authors of [44] offered an
insight grounded in the observation of transmission uniformity across contiguous frames.
This fact forms the bedrock of a temporally coherent video dehazing technique, wherein
the estimation of each frame's transmission map draws upon information gleaned from its
surrounding frames.
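The temporal-coherence principle can be sketched with a simple recursive blend; exponential smoothing and its coefficient are stand-ins, chosen for illustration, for the cross-frame aggregation used in [44].

```python
import numpy as np

def temporally_smooth(t_maps, alpha: float = 0.8):
    """Blend each raw per-frame transmission estimate with a running
    estimate. Neighbouring frames see nearly the same scene depth, so
    this suppresses the frame-to-frame flicker of independent
    single-image estimates."""
    smoothed, running = [], t_maps[0]
    for t in t_maps:
        running = alpha * running + (1 - alpha) * t
        smoothed.append(running)
    return smoothed

# Constant true transmission corrupted by per-frame estimation noise.
rng = np.random.default_rng(5)
true_t = np.full((16, 16), 0.5)
raw = [np.clip(true_t + 0.1 * rng.standard_normal(true_t.shape), 0, 1)
       for _ in range(30)]
out = temporally_smooth(raw)

# Temporal variance of the smoothed sequence drops well below raw.
raw_var = np.var([f.mean() for f in raw])
out_var = np.var([f.mean() for f in out[5:]])
assert out_var < raw_var
```

Reducing the frame-to-frame variance of the transmission map is what removes the flickering artifacts otherwise visible in per-frame dehazed video.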
Another noteworthy pursuit, as proposed by Li et al. [45], exploits the coherence exist-
ing between the depth map and the transmission map. This strategic correlation facili-
tates the estimation of the image depth map, subsequently enabling the determination of
the transmission map for each frame constituting the video stream. This tandem endeavor
effectively contributes to the holistic dehazing of the entire video content.
Moreover, the authors of [46] introduced a distinctive methodology underpinned by
image pyramid construction. Commencing with the production of a hazy image pyra-
mid, the subsequent phase involves the generation of the coarsest approximation of the
transmission map and atmospheric light. This foundation is then iteratively refined across
ascending levels of details, employing up-sampling techniques and guided image filtering
to avert information loss.
An especially salient facet of this technique is the incorporation of temporally similar
transmission values, culminating in the proposition of a real-time video dehazing solution.
This strategic integration serves as a countermeasure to the potential emergence of flick-
ering artifacts within the dehazed video outputs. Collectively, these multi-faceted meth-
odologies reflect the landscape of ongoing research endeavors, each calibrated to address
the dynamic challenges of video dehazing, while advancing the pursuit of artifact-free and
visually-compelling video content.

3.3 Learning‑based dehazing techniques for image and video

a. Supervised Dehazing

The class of techniques termed as supervised dehazing techniques emerged with CNN
and GAN development. An illustrative instance is found in the work of [47], wherein a
robust dehazing technique was introduced harnessing an expansive dataset of hazy images.
Within this paradigm, a Multi-Scale Convolutional Neural Network (MSCNN) is invoked
to estimate the transmission map, thereby effectuating the generation of dehazed images
with the help of the haze model. Notably, the incorporation of hazy frames in the train-
ing dataset infuses supplemental information into the transmission map estimation process,
amplifying the technique's pertinence to the domain of video dehazing. DehazeNet [48]
constitutes another significant advancement in this trajectory, characterized by its proposition
of a deeply-trained CNN model attuned to learning transmission maps from hazy images,
and subsequently deriving dehazed images from the haze model.
In a parallel endeavor, the authors of [49] introduced the All-in-One Dehazing Network
(AOD-Net), a CNN model that learns dehazed images from hazy counterparts. The k-esti-
mation module within the AOD-Net orchestrates the reconstruction of the haze-imaging
model, facilitated through an intricate sequence of five convolutional layers and ensuing
networks. This holistic approach to dehazing is particularly salient in the context of inde-
pendent estimation of transmission maps and atmospheric light, which is often plagued by
high error rates precipitated by imprecise depth information and atmospheric light consid-
erations. This rationale underscores the strategic preference for the chosen technique [49],
engendering an innovative amalgamation of components in the pursuit of accurate and arti-
fact-minimized video dehazing outcomes.
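AOD-Net's k-estimation module rests on rewriting the haze model so that t(x) and A collapse into a single variable K(x), with J(x) = K(x)I(x) − K(x) + b. The identity can be checked numerically; the intensity, transmission, and atmospheric-light values below are illustrative, and AOD-Net learns K(x) with its five convolutional layers rather than computing it in closed form:

```python
import numpy as np

def k_module(I, t, A, b=1.0):
    """Closed-form K(x) folding transmission t(x) and atmospheric light A
    into one variable, so that J(x) = K(x) I(x) - K(x) + b."""
    return (((I - A) / t) + (A - b)) / (I - 1.0)

def dehaze_via_k(I, t, A, b=1.0):
    K = k_module(I, t, A, b)
    return K * I - K + b

# Illustrative hazy intensities, transmissions, and atmospheric light.
I = np.array([0.7, 0.5, 0.9])
t = np.array([0.4, 0.6, 0.8])
A = 0.95
J_classical = (I - A) / t + A        # direct inversion of the haze model
J_unified = dehaze_via_k(I, t, A)
```

The two formulations agree term by term, which is exactly why estimating K(x) jointly avoids the error accumulation of estimating t(x) and A separately.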
In [48], the Densely Connected Pyramid-Dehazing Network (DCPDN) was proposed.
The authors simultaneously estimate dehazed images, while also acquiring insights into the
transmission map and atmospheric light, all derived from hazy images. To harness multi-
scale characteristics effectively, the authors synergistically combine encoder-decoder net-
works with a multi-level pyramid-convolution network.
In a related vein, the authors of [21] adopted the recursive Deep Residual Learning
(DRL) technique to tackle dehazing challenges. This network architecture is engaged in the
direct learning of a nonlinear mapping, effectively transforming the space of hazy frames
into their corresponding dehazed counterparts. A distinctive facet within this paradigm
involves the feedback mechanism, wherein the recursively-dehazed frame is reintroduced
into the DRL network input. This recursive expansion strategy nonlinearly optimizes the
DRL network, engendering progressive improvements in dehazing outcomes.


Furthermore, the authors of [22] augmented the application of the DRL network by incor-
porating preprocessing enhancement aimed at mitigating dynamic range limitations and noise
present in images and frames. Given the constrained dynamic range resulting from sensor
measurement inaccuracies, ignoring such disparities could exacerbate errors during the haze
removal process. To address these challenges, preprocessing techniques, including median fil-
tering, homomorphic processing, and Frost filtering, are adroitly implemented to ameliorate
dynamic range and noise issues prior to the dehazing procedure. Notably, this comprehensive
approach yields superior results when juxtaposed against the application of the DRL network
in isolation. This concerted integration of preprocessing interventions underscores a strategic
methodology to enhance dehazing outcomes in scenarios marked by limited dynamic range
and image noise.
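Of the preprocessing steps mentioned, homomorphic processing admits a compact frequency-domain formulation: the log-image is filtered so that low-frequency illumination is damped and high-frequency detail is emphasized. A sketch under assumed parameter values (the cutoff and the two gains are illustrative, not the settings of [22]):

```python
import numpy as np

def homomorphic_enhance(img, cutoff=0.1, gain_low=0.5, gain_high=1.5):
    """Homomorphic filtering sketch: log -> frequency-domain high-emphasis -> exp.

    A Gaussian high-emphasis filter applies gain_low near DC (illumination)
    and gain_high far from DC (reflectance/detail), which helps before
    dehazing when the dynamic range is compressed by sensor errors.
    """
    img = np.asarray(img, dtype=float)
    log_img = np.log1p(img)
    F = np.fft.fft2(log_img)
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d2 = u ** 2 + v ** 2
    H = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2)))
    filtered = np.real(np.fft.ifft2(F * H))
    return np.clip(np.expm1(filtered), 0.0, None)

rng = np.random.default_rng(0)
enhanced = homomorphic_enhance(rng.random((8, 8)))
```

Median and Frost filtering would then be applied in the spatial domain before the frames enter the DRL network.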
In a notable development, the technique of [49] intersects with the contemporary frontiers
of high-level vision tasks, specifically exemplified by the Vision Transformer (ViT) paradigm
[50]. Capitalizing on this foundation, the approach leverages the Swin transformer architecture
to effectuate image dehazing. Deviating from conventional CNNs, this method strategically
integrates DehazeFormer, enriched with a medley of enhancements encompassing a modified
normalization layer, an advanced activation function, and a judicious incorporation of spatial
information. The quintessential objective revolves around the training of DehazeFormer with
several iterations using an expansive dataset of hazy images, culminating in the production of
dehazed images.
The proposition of [51] pertains to a dehazing network structured without the reliance on
paired training data. Central to this framework are three distinct generators operating within
a GAN architecture: the first generator yields dehazed images, the second is entrusted with
estimating atmospheric light, and the third orchestrates the generation of transmission maps.
This triadic ensemble is undergirded by a meticulous integration of loss functions, including
the minimization techniques delineated in [52], thereby engendering aesthetically enhanced
dehazed images. The transformative potency of the Cycle-consistency GAN (CycleGAN) is
harnessed to transform hazy images into their dehazed counterparts, solidifying the
underpinning strategy.
The authors of [53] introduced a distinctive technique that materializes in the form of Deep
Multi-Model Fusion Dehazing (DMMFD), epitomizing a layer of separation and fusion. The
potency of this methodology lies in its holistic reformulation of the haze model, meticulously
interfacing multiplication, addition, exponentiation, and logarithmic decomposition. This
strategic amalgamation is poised to amplify the model's learning capabilities, substantiated by the
rationale that amalgamating diverse layers augments the overarching efficacy of image
restoration endeavors. In a concerted manner, these multi-faceted strategies traverse the evolving
landscape of image dehazing, each endeavoring to transcend the limitations of existing meth-
odologies and bolster the quality of dehazed imagery.
g0(x) = (F(x) − A0(1 − R0(x))) / R0(x)    (5)

g1 (x) = F(x) × L1 (x) (6)

g2 (x) = F(x) + L2 (x) (7)

g3(x) = F(x)^(L3(x))    (8)


g4(x) = log(1 + F(x) × L4(x))    (9)

The notation employed herein delineates the network's distinctive layers, where gn(x) denotes
the output of the n-th layer-separation model. Specifically, g1(x), g2(x), g3(x), and g4(x) emerge as discrete
entities, each capable of functioning as an autonomous haze layer separation model. This
formulation is rooted in the foundational premise that the initial hazy image or frame, denoted as
F(x), can be disentangled into two constituent strata: the haze-free layer, designated as f(x),
and an additional layer, denoted as k(x).
F(x) = 𝛼(f (x), k(x)) (10)
The culmination of the dehazing process is achieved through the judicious fusion of inter-
mediate outputs, encompassing g0 (x), g1 (x), g2 (x), g3 (x), and g4 (x), under the modulation of
five distinct learned attention maps, designated as U0 , U1 , U2 , U3 , and U4. This fusion process
is mathematically formalized as:
gf (x) = U0 × g0 (x) + U1 × g1 (x) + U2 × g2 (x) + U3 × g3 (x) + U4 × g4 (x) (11)
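Equation (11) is a pixel-wise weighted sum, which can be sketched directly; the attention maps below are fixed toy arrays, whereas DMMFD learns them:

```python
import numpy as np

def fuse_layers(outputs, attention):
    """Fuse intermediate dehazing outputs (g0..g4 in Eq. 11) under
    per-pixel attention maps (U0..U4)."""
    assert len(outputs) == len(attention)
    return sum(U * g for U, g in zip(attention, outputs))

# Two-pixel toy example with two branches whose attention maps sum to one.
g = [np.array([0.2, 0.8]), np.array([0.6, 0.4])]
U = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
fused = fuse_layers(g, U)
```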
In [54], the innovative Gated Fusion Network (GFN) was introduced, premised on two cru-
cial observations pertaining to the impact of haze. The first observation underscores that a
hazy image may exhibit color distortion attributable to atmospheric light, while the second
insight emphasizes that scattering and attenuation events significantly compromise image vis-
ibility. Commencing from the grayscale image, the white-balanced input, denoted as Fwb (x), is
derived. Additionally, the input Fce(x) is composed by subtracting the average luminance
value F̄(x) from F(x) and scaling the result by a control parameter β, yielding:

Fce(x) = β(F(x) − F̄(x))    (12)

Here, β is calculated as β = 2(0.5 + F̄(x)). Employing nonlinear gamma correction, another
input, Fgc(x), is produced to enhance the visibility of F(x):

Fgc(x) = αF(x)^γ    (13)

where α = 1 and γ = 2.5. The final dehazed image, denoted as g(x), materializes as the
weighted amalgamation of three inputs, each modulated by the respective confidence maps
Swb, Sce , and Sgc, devised for the fusion process:
g(x) = Swb × Fwb (x) + Sce × Fce (x) + Sgc × Fgc (x) (14)
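The two analytically derived inputs of Eqs. (12)–(13) can be computed as follows; the white-balanced input Fwb(x) requires a separate white-balance algorithm and is omitted here, while α and γ follow the values quoted above:

```python
import numpy as np

def gfn_inputs(F, alpha=1.0, gamma=2.5):
    """Derive the contrast-enhanced and gamma-corrected GFN inputs
    (Eqs. 12-13) from an image F scaled to [0, 1]."""
    F = np.asarray(F, dtype=float)
    mean_lum = F.mean()                  # average luminance, F-bar
    beta = 2.0 * (0.5 + mean_lum)
    F_ce = beta * (F - mean_lum)         # contrast-enhanced input, Eq. (12)
    F_gc = alpha * F ** gamma            # gamma-corrected input, Eq. (13)
    return F_ce, F_gc

F = np.array([[0.25, 0.75]])
F_ce, F_gc = gfn_inputs(F)
```

The learned confidence maps of Eq. (14) then gate these inputs pixel by pixel.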
In the context of DehazeNet [55], the technique delves into the decomposition of the
hazy image into its low-frequency base element, Fbase (x), and high-frequency detail element,
Fdetails (x). The base element encapsulates the primary content of the image, while the high-
frequency counterpart encapsulates edges and textures. By invoking the dehazing function
G(.) on the base element and the enhancement function H(.) on the detail element, the dehazed
image is synthesized according to:
F(x) = Fbase (x) + Fdetails (x) (15)

g(x) = G(Fbase (x)) + H(Fdetails (x)) (16)
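The decomposition of Eq. (15) can be sketched with a simple separable box filter standing in for whatever low-pass or edge-preserving filter [55] actually employs:

```python
import numpy as np

def base_detail_split(img, k=3):
    """Split an image into a low-frequency base layer and a high-frequency
    detail layer (Eq. 15), using a separable k-tap moving average as the
    low-pass filter. The split is exact: base + detail == img."""
    img = np.asarray(img, dtype=float)
    kernel = np.ones(k) / k
    # Separable box blur: filter each row, then each column.
    base = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    base = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, base)
    detail = img - base
    return base, detail

rng = np.random.default_rng(1)
img = rng.random((6, 6))
base, detail = base_detail_split(img)
```

The dehazing function G(.) would then act on `base` and the enhancement function H(.) on `detail`, as in Eq. (16).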


These intricate techniques contribute to the advancement of dehazing methodologies,
each delineating a unique approach to enhancing image quality through the mitigation
of haze-induced artifacts.

b. Unsupervised Dehazing

In the realm of public datasets, supervised dehazing techniques have demonstrated
commendable efficacy, yielding consistent outcomes. Nonetheless, their reliance on
paired data, typically comprising hazy and dehazed images or corresponding transmis-
sion maps, imposes a practical constraint as sourcing such data in real-world scenarios
can be inherently intricate. Particularly in outdoor scenes encompassing elements like
grass or water, achieving an equitable comparison between two images captured under
distinct atmospheric conditions—clear and hazy—presents considerable challenges. As
a corollary, the fidelity of dehazed images can become compromised. In light of these
considerations, the advent of unsupervised dehazing techniques has assumed promi-
nence as an alternative avenue.

c. Examples of Unsupervised Dehazing

In the framework of [32], an unsupervised network was introduced, predicated on
the insight that the internal statistics of a mixed layer are more complex than those of
the single layers that compose it. Consider a scenario where C is a linear
summation of independent stochastic variables, A and B. From a
statistical vantage point, the entropy of C surpasses that of its constituent elements, thus
adhering to the relationship K(C) ≥ max{K(A), K(B)}. Building upon this premise, the
loss function pertaining to image layer decomposition is formulated as follows:
S = Srec + Ω ⋅ Sexc + 𝛽 ⋅ Sreg (17)
The decomposition of the hazy image is governed by the terms Srec denoting the
reconstruction loss, Sexc signifying the exclusion loss between the two distinct Domain-
Invariant Projections (DIPs), and Sreg representing the regularization loss aimed at
ensuring a continuous and smooth transmission map. Noteworthy strides have been
made in the realm of unsupervised data-driven dehazing methodologies, as evidenced
by the achievements of the techniques of [56]. However, a distinct paradigm, known
as Zero-shot Image Dehazing (ZID), stands out for its ability to enact neural-network-
based dehazing with a solitary image, in stark contrast to conventional models that
necessitate a substantial corpus of data for network training. ZID achieves this through
fusion of unsupervised and zero-shot learning, thereby diminishing the dependency of
parameter learning on extensive data resources. The values R(x) and A are ascertained
by means of three subnetworks, namely NA (A-Net), NB (B-Net), and NC (C-Net). By
mitigating the Srec loss, the reconstruction process effectively disentangles the compo-
nents of the hazy image, F(x), yielding a decomposition:
Srec = ||Frec (x) − F(x)||p (18)
In this context, the symbol p designates the p-norm, while Frec (x) corresponds to the
reconstructed hazy image. The extraction of the atmospheric light A unfolds as follows,
leveraging the decomposed atmospheric light DA (x) and the Kullback–Leibler divergence:


SA = SH + SKL = ||DA(x) − A(x)||p + KL(N(wz, σz²) ‖ G(0, F))    (19)

In the given context, the initial value A(x) is acquired through automated learning from
data, while SH pertains to the loss associated with DA (x). It is imperative to highlight
that the utilization of A(x) in ZID diverges from the usage of A in the hazy model. In this
regard, G(0, F) signifies the Gaussian distribution, with the variable z determined by the
input x. The deconstruction process of the dehazed image g(x) is facilitated through the
unsupervised channel loss SA, effectively executed within the ambit of A-Net. The role of
Sreg is to enhance the model's stability and promote the smoothness of both A and R(x). The
comprehensive loss function governing ZID is articulated as follows:
S = Srec + SA + Sg + Sreg (20)
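The statistical premise invoked at the start of this subsection — that the entropy of a mixture C = A + B of independent layers satisfies K(C) ≥ max{K(A), K(B)} — can be checked numerically. In the sketch below, the distributions, sample counts, and bin edges are arbitrary illustrative choices:

```python
import numpy as np

def binned_entropy(x, edges):
    """Shannon entropy (in bits) of a signal's histogram over fixed bin edges."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, 10_000)    # first independent layer
B = rng.normal(0.0, 1.0, 10_000)    # second independent layer
C = A + B                           # mixed layer

edges = np.linspace(-8.0, 8.0, 65)  # common bins so the entropies are comparable
h_A, h_B, h_C = (binned_entropy(s, edges) for s in (A, B, C))
```

The mixture spreads over more bins than either component, so its binned entropy is larger, which is the property the layer-decomposition loss exploits.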
In accordance with the chronological year of inception of each dehazing technique, the
categorization of dehazing techniques is presented in Fig. 4; within each category, the
entries are organized in descending order.
Table 2 encapsulates the merits and demerits associated with various dehazing techniques.
The information in Table 2 elucidates that contemporary dehazing techniques
depend on enhancement strategies prior to the dehazing process, effectively mitigating
noise stemming from camera inaccuracies during image acquisition. In another vein,
a bifurcated transmission map approach has been employed to rectify artifacts within
images. It is noteworthy that an array of recent dehazing techniques have harnessed learn-
ing-based methodologies to directly derive dehazed images from their hazy counterparts.
These techniques have demonstrated heightened accuracy compared to traditional dehazing
techniques.

4 Survey of datasets and their descriptions

Collecting genuine datasets suitable for training CNNs or GANs proves to be a challeng-
ing endeavor. To address this limitation, both synthesized datasets and generated datasets
are employed. A synthesized dataset is constituted by manipulating distinct parameters of
the haze model, including dehazed image (J(x)), atmospheric light (A), distance (d(x)), and
scattering coefficient (∅), to produce hazy images. A haze generator acts as a mechanism
to distribute light, culminating in the creation of hazy images. Table 3 provides a
comprehensive overview of various dataset types along with their corresponding descriptions,
distinguishing between real, Haze Generated (HG), and Synthesized (Syn) datasets.
Furthermore, Table 3 furnishes information regarding the data collection environment, whether
indoor or outdoor, alongside the dataset name and number of images, substantiated by
relevant references.
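The synthesis procedure described above reduces to evaluating the haze model I(x) = J(x)t(x) + A(1 − t(x)) with t(x) = e^(−βd(x)); a sketch with illustrative parameter values, where β plays the role of the scattering coefficient denoted ∅ above:

```python
import numpy as np

def synthesize_haze(J, depth, A=0.9, beta=1.2):
    """Render a hazy image from a clear one with the standard haze model:
    I(x) = J(x) t(x) + A (1 - t(x)),  t(x) = exp(-beta d(x))."""
    t = np.exp(-beta * np.asarray(depth, dtype=float))
    return np.asarray(J, dtype=float) * t + A * (1.0 - t)

clear = np.array([0.2, 0.2])
depth = np.array([0.0, 50.0])   # one near pixel, one very distant pixel
hazy = synthesize_haze(clear, depth)
```

Near pixels keep their clear intensity, while distant pixels converge to the atmospheric light A, which is exactly the appearance synthesized datasets aim to reproduce.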
Table 3 provides valuable insights into the composition of various datasets, aiding in a
comprehensive understanding of their characteristics. Specifically, the MRFID dataset [32]
comprises 200 authentic outdoor images, capturing the essence of real-world conditions.
Similar to the MRFID dataset, the BeDDE dataset [56] shares these attributes, offering a
collection of genuine outdoor images. In contrast, the I-HAZE dataset [57] encompasses
35 indoor images, though they are not authentic, having been generated using a haze gen-
erator. Similarly, the O-HAZE [58], Dense-Haze [59], and NH-HAZE datasets [60] consist
of 45, 33, and 55 outdoor images, respectively, all of which are synthetically generated by
a haze generator.


[Figure 4 is a taxonomy diagram. It divides dehazing techniques into traditional techniques for image and video (ranging from [36] in 2000 to [27, 28] in 2023) and learning-based techniques for image and video, the latter subdivided into supervised dehazing (from [57] in 2016 to [27] in 2022) and unsupervised dehazing ([68] in 2019 and [69] in 2020).]

Fig. 4  Types of dehazing techniques

Conversely, the RESIDE dataset [61], comprising 10,000 hazy and clear images, pre-
sents a unique composition, integrating both authentic and synthesized images. This dataset
encapsulates a blend of indoor and outdoor environments, further enhancing its diversity.
On the other hand, the D-HAZY dataset [62] is constituted by 1400 indoor images, albeit
synthetically produced. Likewise, the HazeRD dataset [63] comprises 15 outdoor images,
synthesized to simulate hazy conditions. Lastly, the 4KID dataset [64] encompasses 10,000
outdoor images, generated synthetically to resemble hazy scenes. Worth noting is the parity

Table 2  Advantages and disadvantages of the different dehazing techniques
Year of publication Steps of technique Advantage Disadvantage

1999 [65] Estimation of atmospheric light and transmission Improvement of the transmission map Inaccurate when image elements are similar to
map by haze equation ambient light
A hazy image is represented as a Factorial
Markov Random Field (FMRF)
2001 [66] The technique is based on the fact that atmos- Polarization filtering is applied Complex system
pheric light is typically partially polarized, To obtain a dehazed image, inversion of the
when it is scattered by atmospheric particles polarization effect of air scattering is applied

2008 [33] Markov Random Field (MRF) is applied Improved dehazed image quality The fundamental parts are the textures
2009 [67] The technique for enhancing the transmission The speed of the technique makes it useful for There are isolated areas in the scene depth
map using filters is used to improve the appear- real-time applications
ance of a single image
2010 [68] A rapid dehazing technique is used to remove It simplifies the haze equation High complexity
haze from the image
2010 [69] Haze model is used to estimate the dehazed Removal of noise and haze Processing time is long
image, and a denoising technique is used to
remove noise
2011 [70] A fast dehazing technique is used Reducing the haze well Complexity of the technique
Estimation of dehazed image in a short time
2011[9] DCP technique is used for removing haze from a The information about haze transmission and Inability to apply the technique fully if the surface
single image haze depth are available object is discrete illumination
2012 [71] High-speed haze removal algorithm is used. The Noise is treated using filters, such as low-pass Shade distortions for light areas are presented
DCP is used for a single image and Gaussian filters
2013 [5] A transmission map is obtained using a boundary It removes haze well It provides color distortion for a scene having a
constraint assumption and contextual optimization similar color to the atmospheric light

2013 [72] The original hazy image yields two distinct Filtering of important features to solve the haze Complexity in analysis

inputs (a white-balanced image and a contrast- problem
enhanced image) Estimating transmission map, atmospheric light,
and then the dehazed image
2014 [31] Independent Component Analysis (ICA) for This technique is a statistical technique to The worst outcomes for dense haze
removing haze separate two additive components from a hazy
image
2014 [73] This algorithm deals with the enhancement of a The algorithm depends on the selection of fitting High complexity
single image by removing haze using fusion weight maps like desired pixels
2015 [74] Direct and adaptive dehazing technique for a It increases the contrast of the dehazed image High complexity
single image
2016 [34] Image smoothing technique to dehaze images A Gradient Residual Minimization (GRM) tech- High complexity
The artifacts on the transmission map are nique is used to remove artifacts
reduced
2016 [45] A multi-scale architecture was provided to fine- Accurate results for transmission map Despite the success of CNN-based approaches, a
tune regression, and a Multi-Scale CNN (MS- separate step is needed to estimate the atmos-
CNN) was developed to learn the transmission pheric light
map in a convolutional manner
2017 [47] A CNN technique, called the All-in-One dehaz- The estimation of the transmission map and High complexity
ing network (AOD-Net), was proposed to learn atmospheric light independently results in high
the dehazed image from a hazy image errors because of inaccurate depth informa-
tion and atmospheric light, and hence, this
technique is chosen
2018 [48] A Densely Connected Pyramid-Dehazing Net- To learn multi-scale characteristics, an encoder- A hazy image cannot be used to directly estimate a
work (DCPDN) was proposed to jointly learn decoder network with a multilayer pyramid dehazed image
the transmission map and atmospheric light network was provided
from hazy images to estimate dehazed images
2018 [33] For image dehazing, a CycleGAN network was The Cycle-consistency GAN (CycleGAN) High complexity
proposed technique was proposed for transforming hazy
Minimal loss to provide dehazed images with images to dehazed images
increased visual realism
2018 [75] An atmospheric scattering model that consists When additional information is represented as It is not easy to acquire such additional images
of atmospheric light and the corresponding a pair of polarized images taken in various using a general digital camera
transmission map was proposed weather conditions or with a distance map

between the camera and the scene, the transmission is approximated
2018 [21] DRL network learns a nonlinear mapping from DRL network returns the recursively-dehazed The network takes more processing time by per-
the space of hazy frames to dehazed frames, frame to the input of the DRL network forming iterations
directly This recursive extension provides a nonlinear
optimization of the DRL network
2018 [54] Two observations with the influence of haze Accurate results are obtained because of using a Datasets are needed for model training. The cost
are used. First, under the influence of atmos- supervised network of analysis is high
pheric light, the color of hazy images may
be distorted to some extent. Second, due to
the existence of scattering and attenuation
phenomena, the visibility of images will be
degraded
2019 [32] Proposal of a technique for layer separation and Because of the supervised network and decom- Computational time is high
fusion to enhance learning position theory, the results are accurate
2019 [51] It was suggested to use a dehazing GAN without The GAN has three generators: one that produces The network complexity is high
paired training data dehazed images, one that gauges atmospheric
light, and a third generator that computes
transmission maps
2020 [32] The unsupervised network is applied based on This technique requires only hazy images, and so High complexity of analysis
the observation that the internal statistics of a they are saved in the dataset. Dehazed images
mixed layer are more complex than those of the are not needed for training. The technique
single layers that compose it provides a continuous and smooth transmis-
sion map

2020 [55] It divides the hazy image into low-frequency and Because of the supervised network and decom- A lot of time is needed for analysis
high-frequency components as the base ele- position theory, the results are accurate
ment Fbase (x) and the detail element Fdetails (x),
respectively. The base element is the main
content of the image, while the high-frequency
element is the edge and texture
2020 [38] An optimized dehazing technique gives the trans- A structure-guided filter is used to remove arti- The dehazed image is not estimated directly from
mission map facts in the transmission map the hazy image
A smoothed transmission map is estimated after
that to obtain the dehazed frame
2020 [49] DehazeFormer is utilized in this technique, The technique uses Swin transformer for image Not accurate enough due to the user’s calculations
as it includes numerous changes, such as an dehazing instead of CNN
improved normalization layer, an activation No more datasets are needed for training
function, and spatial information. This work
presented Vision Transformers (ViTs), which
have been recently used for high-level vision
tasks
2020 [56] Three subnetworks, By combining the benefits of unsupervised learn- High complexity of analysis
namely NA (A − Net), NB (B − Net), and NC (C − Net), ing and zero-shot learning, the dependence of
are used to determine J(x), t(x), and A the parameter learning process on data was
decreased
2022 [22] Images have some noise and a restricted dynamic Before the dehazing process, a variety of Large time of analysis
range due to sensor measurement errors, which improvement techniques, including median
may be increased throughout the haze removal filtering, homomorphic processing, and Frost
process if neglected. To eliminate the noise filtering, have been employed to eliminate
and decrease illumination, a DRL network noise and decreased illumination. In compari-
and further preprocessing via an enhancement son to the DRL network alone, this technique
approach were implemented produced better results

2022 [76] It has been reported that multi-scale image fusion Pyramid decomposition is applied on weight The results were significantly enhanced by the
enhances the resolution of hazing problems by maps and the input images, which helps to combination of the produced images and
providing significant features at several scales. enhance the contrast and also sharpen the the accompanying weight maps with high
The two generated images from an original images complexity
degraded image are the white-balanced part
and the luminance-based image
2023 [23] Enhancement techniques such as median filter- Enhancement techniques remove noise Computational time increases

ing, homomorphic processing, and Frost Dual-transmission maps are used to reduce
filtering are used before dehazing techniques to artifacts in images
reduce noise due to measurement tools


Table 3  Types of datasets and their descriptions

Type of dataset   Indoor/Outdoor   Number of images   Dataset name
Real Outdoor 200 MRFID, 2020 [32]


Real Outdoor 200 BeDDE, 2020 [56]
HG Indoor 35 I-HAZE, 2018 [57]
HG Outdoor 45 O-HAZE, 2018 [58]
HG Outdoor 33 Dense-Haze, 2019 [59]
HG Outdoor 55 NH-HAZE, 2020 [60]
Syn & Real Indoor / Outdoor 10,000 RESIDE, 2019 [61]
Syn Indoor 1400 D-HAZY, 2016 [62]
Syn Outdoor 15 HazeRD, 2017 [63]
Syn Outdoor 10,000 4KID, 2021 [64]

in the number of images between the RESIDE and 4KID datasets, which facilitates their
comparative evaluation.

5 Simulation results

5.1 Evaluation metrics

The evaluation of hazy image enhancement methods encompasses the utilization of per-
tinent metrics such as Peak Signal-to-Noise Ratio (PSNR), correlation between hazy and
dehazed images, histogram shape, and spectral entropy. These metrics collectively offer
insights into the efficacy of various dehazing techniques.
The PSNR, measured in decibels (dB), is defined as follows:
PSNR = 10 Log10(MAXI² / MSE) = 20 Log10(MAXI) − 10 Log10(MSE)    (21)

Here, the hazy image denoted as I has dimensions of m × n, and its corresponding
dehazed image is J. The Mean Squared Error (MSE) between the two images is calculated
as:
MSE = (1/mn) ∑(i=1..m) ∑(j=1..n) [I(i, j) − J(i, j)]²    (22)
where MSE quantifies the average squared difference between corresponding pixels in the
hazy and dehazed images. MAXI represents the maximum possible pixel value of the hazy
image. Since the PSNR here is computed between the hazy image I and its dehazed
counterpart J, a lower PSNR value indicates a greater departure from the hazy input, and
hence improved quality of dehazed images, together with a weaker correlation between
hazy and dehazed counterparts.
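Equations (21)–(22) translate directly into code; the two arrays below are toy values chosen so the results are easy to check by hand:

```python
import numpy as np

def mse(I, J):
    """Mean squared error between two equally sized images, Eq. (22)."""
    I = np.asarray(I, dtype=float)
    J = np.asarray(J, dtype=float)
    return float(np.mean((I - J) ** 2))

def psnr(I, J, max_i=255.0):
    """Peak signal-to-noise ratio in dB, Eq. (21)."""
    err = mse(I, J)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_i ** 2 / err)

a = np.zeros((2, 2))            # all-black image
b = np.full((2, 2), 255.0)      # all-white image
```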
Spectral Entropy (SE) unveils the spectral power distribution of a hazy image (I), expressed as:

SE = − ∑ P(I) log2[P(I)]    (23)

where P signifies the normalized power spectral density. Elevated SE values are indicative
of superior dehazed image quality, as they reflect a wider spectral distribution.
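A sketch of Eq. (23), taking the normalized two-dimensional power spectrum of the image as P — one reasonable reading of the definition above:

```python
import numpy as np

def spectral_entropy(img):
    """Spectral entropy (Eq. 23): Shannon entropy of the normalized power
    spectrum of the image, in bits."""
    F = np.fft.fft2(np.asarray(img, dtype=float))
    psd = np.abs(F) ** 2
    p = psd / psd.sum()     # normalized power spectral density, P
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = spectral_entropy(np.ones((4, 4)))                        # all power at DC
textured = spectral_entropy(np.random.default_rng(0).random((8, 8)))
```

A constant image concentrates all spectral power at DC and scores near zero, whereas a textured (well-dehazed) image spreads power across frequencies and scores higher.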

13
Multimedia Tools and Applications

Image histograms offer a graphical depiction of pixel density against intensity. A
broadened histogram denotes improved contrast in dehazed images, which is aligned with
enhanced image fidelity. Consequently, the combination of PSNR, correlation analysis, spectral
entropy, and histogram analysis serves as a comprehensive toolkit for assessing the efficacy
of diverse dehazing techniques.

5.2 Illustrations of some dehazing results

The outcomes of different image dehazing techniques are displayed in Fig. 5. The observa-
tions drawn from Fig. 5 indicate that the technique of [22], which incorporates enhance-
ment, yields the most favorable outcomes.
Table 4 presents a comparison of average PSNR and correlation values for images across
various dehazing techniques, as illustrated in Fig. 5. The findings discerned from Table 4
underscore that the technique of [22], characterized by the incorporation of enhancement,
namely homomorphic processing, median filtering, and Frost filtering, attains the lowest
average PSNR and average correlation values, signifying the greatest departure from the
hazy inputs and hence the most effective haze removal.

6 Major limitations and difficulties with the existing dehazing techniques

The primary limitations and challenges inherent in existing dehazing techniques are suc-
cinctly summarized as follows:

1) Haze Classification Network Costs: Certain dehazing techniques possess the capability
to discern the presence of haze in an image, while others lack this functionality. The
construction of haze classification networks for hazy images can be computationally
intensive.
2) Scope of Application: Many currently employed dehazing techniques are oriented
towards land-based hazy scenarios, often neglecting oceanic or atmospheric haze.
Addressing a broader range of settings by diversifying models is imperative.
3) Video Dehazing Challenges: Presently-available video dehazing techniques mainly
target surveillance scenes, leaving a gap in efficiently handling videos with moving
cameras. Real-time processing and efficiency are crucial for video dehazing applica-
tions. Background estimation in video dehazing techniques requires refinement, and the
resolution shift issue remains to be addressed.
4) Complexity of Haze Models: In-depth analysis of haze models can be resource-intensive,
but it holds promise for enhancing dehazing techniques.
5) Synthetic versus Real-world Data: Bridging the gap between synthetic and real-world
data is essential. The reliance on synthetic data for model training and its successful
application to real-world scenarios warrant exploration.
6) Computational Efficiency and Metrics: Achieving a balance between parameters, com-
putation time, and performance is vital, especially as dehazing techniques often serve
as preprocessing steps in complex computer vision tasks.
7) Future Directions: Future advancements may prioritize the development of swift, com-
pact, and high-performance dehazing techniques.


Fig. 5  Results of various dehazing techniques


Table 4  A comparison of average PSNR and correlation values for the images and dehazing techniques in Fig. 5

Dehazing technique                      Average PSNR    Average correlation
DCP [9]                                 15.3792         0.8983
BCCR [5]                                13.3211         0.8574
MSCNN [45]                              15.0852         0.8375
DRL [21]                                 7.6264         0.0430
Homomorphic Processing with DRL [22]     7.1625         0.0248
Median Filter with DRL [22]              7.5151         0.0450
Frost Filter with DRL [22]               7.5589         0.0475

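The average PSNR and correlation figures of the kind reported in Table 4 can in principle be reproduced per image pair and then averaged over the test set. The following is a minimal NumPy sketch (the function names and toy data are illustrative assumptions, not the authors' evaluation code), assuming 8-bit images and Pearson correlation over flattened pixel arrays:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def correlation(reference, restored):
    """Pearson correlation coefficient between the flattened pixel arrays."""
    a = reference.astype(np.float64).ravel()
    b = restored.astype(np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# Toy example: a random "ground truth" and a lightly perturbed copy.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=clean.shape)
noisy = np.clip(clean.astype(np.int16) + noise, 0, 255).astype(np.uint8)

print(psnr(clean, noisy))         # high for such mild distortion
print(correlation(clean, noisy))  # close to 1 for such mild distortion
```

For a table like Table 4, these two functions would be applied to each ground-truth/dehazed pair and the results averaged over the image set.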
8) Pre-trained Models and Outdoor Data: While visual loss has been utilized to enhance
dehazing quality in both supervised and unsupervised techniques, the impact and effi-
cacy of pre-trained models and outdoor data warrant further investigation.

These challenges highlight the evolving nature of the dehazing field and underscore the
avenues for future research and development.
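Several of these challenges, notably items 4 and 5, revolve around the atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)) with transmission t(x) = exp(−β d(x)), which is also how synthetic hazy training data is commonly rendered from clean images and depth maps. The sketch below illustrates that rendering step (the function name and parameter values are illustrative assumptions):

```python
import numpy as np

def add_synthetic_haze(clean, depth, beta=1.0, atmosphere=0.9):
    """Apply the atmospheric scattering model I = J*t + A*(1 - t),
    with transmission t = exp(-beta * depth).

    clean:      haze-free image J, float array in [0, 1]
    depth:      per-pixel scene depth d (same spatial shape as clean)
    beta:       scattering coefficient (larger beta -> denser haze)
    atmosphere: global atmospheric light A in [0, 1]
    """
    t = np.exp(-beta * depth)
    if clean.ndim == 3:          # broadcast transmission over color channels
        t = t[..., None]
    return clean * t + atmosphere * (1.0 - t)

# Toy example: a flat gray "scene" whose depth grows from left to right.
h, w = 4, 8
clean = np.full((h, w), 0.2)
depth = np.tile(np.linspace(0.0, 3.0, w), (h, 1))
hazy = add_synthetic_haze(clean, depth, beta=1.0, atmosphere=0.9)
# Distant pixels (large depth) are pulled towards the atmospheric light.
```

Sweeping β and A over a depth dataset is the usual recipe behind synthetic benchmarks built from RGB-D data [17, 61, 62]; the gap discussed in item 5 arises partly because real haze rarely follows this homogeneous model exactly.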

7 Conclusion

This paper provided a comprehensive compilation and elucidation of diverse categories of dehazing techniques. It gave an overview of distinct datasets along with their descriptive attributes, concisely outlined the evaluation metrics pertinent to image dehazing, and delineated preeminent dehazing techniques together with their corresponding outcomes. The haze model was expounded upon, with a clear distinction drawn between hazy and dehazed images. Demonstrative results of various dehazing techniques were presented, and the issues and obstacles inherent in current dehazing methodologies were discussed. It is evident that concerted endeavors are required from researchers to address the intricacies of achieving high-quality dehazed images. In its entirety, this paper serves as a valuable reference for newcomers embarking on the exploration of dehazing subjects.
Acknowledgements The authors are very grateful to all institutions in the affiliation list for successfully
performing this research work. The authors would like to thank Prince Sultan University for their support.

Authors’ contributions All authors equally contributed.

Funding The authors did not receive support from any organization for the submitted work.

Data availability All data are available upon request from the corresponding author.

Declarations
Ethics approval All authors contributed and accepted to submit the current work.

Consent to participate All authors contributed and accepted to submit the current work.

Consent to publish All authors accepted to submit and publish the work.

Competing interests The authors have neither relevant financial nor non-financial interests to disclose.

References
1. Narasimhan SG, Nayar SK (2000) Chromatic framework for vision in bad weather. In: Proceedings
IEEE Conference on Computer Vision and Pattern Recognition, CVPR (Cat. No. PR00662), vol. 1,
pp. 598–605. IEEE
2. Pal NS, Lal S, Shinghal K (2018) A robust visibility restoration framework for rainy weather
degraded images. TEM J 7(4):859–868
3. Berman D, Treibitz T, Avidan S (2016) Non-local image dehazing. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1674–1682
4. Feng C, Zhuo S, Zhang X, Shen L, Süsstrunk S (2013) Near-infrared guided color image dehazing. In: 2013 IEEE International Conference on Image Processing, pp. 2363–2367. IEEE
5. Meng G, Wang Y, Duan J, Xiang S, Pan C (2013) Efficient image dehazing with boundary con-
straint and contextual regularization. IEEE International Conference on Computer Vision, pp.
617–624
6. Zhang H, Liu X, Huang Z, Ji Y (2014) Single image dehazing based on fast wavelet transform with weighted image fusion. In: Proc. IEEE Int. Conf. Image Process. (ICIP), pp. 4542–4546. https://doi.org/10.1109/ICIP.2014.7025921
7. Kim I, Min HK (2017) Dehazing using non-local regularization with iso-depth neighbor-fields. In: Proc. Conf. Comput. Vis. Theory Appl., pp. 77–88
8. He J, Zhang C, Yang R, Zhu K (2016) Convex optimization for fast image dehazing. In: Proc. IEEE Int. Conf. Image Process. (ICIP), pp. 2246–2250. https://doi.org/10.1109/ICIP.2016.7532758
9. He K, Sun J, Tang X (2011) Single image haze removal using dark channel prior. IEEE Trans Pattern
Anal Mach Intell 33(12):2341–2353
10. Kim JH, Jang WD, Sim JY, Kim CS (2013) Optimized contrast enhancement for real-time image and video dehazing. J Vis Commun Image Represent 24(3):410–425. http://mcl.korea.ac.kr/projects/dehazing/videos/video_seq.zip
11. Thanh LT, Thanh DNH, Hue NM, Prasath VBS (2019) Single image dehazing based on adaptive histogram equalization and linearization of gamma correction. In: Proc. 25th Asia-Pacific Conf. Commun. (APCC), pp. 36–40. https://doi.org/10.1109/APCC47188.2019.9026457
12. Ding M, Wei L (2015) Single-image haze removal using the mean vector L2-norm of RGB image sam-
ple window. Optik-Int J Light Electron Optics 126(23):3522–3528
13. Saini M, Wang X, Atrey PK, Kankanhalli M (2012) Adaptive workload Equalization in multi-camera
surveillance systems. IEEE Trans Multimedia 14(3):555–562
14. Fattal R (2008) Single Image Dehazing. ACM Trans. Graph., SIGGRAPH 27(3):72
15. Feris RS (2011) Large-scale vehicle detection, indexing, and search in urban surveillance videos. IEEE
Trans Multimedia 14(1):28–42
16. Muhammad S, Imran M, Ullah A, Elbasi E (2021) A single image dehazing technique using the dual transmission maps strategy and gradient-domain guided image filtering. IEEE Access. https://doi.org/10.1109/ACCESS.2021.3090078
17. Silberman N, Hoiem D, Kohli P, Fergus R (2012) Indoor segmentation and support inference from RGBD images. In: ECCV, pp. 746–760
18. Eunsung Jo, Jae-Young Sim (2021) Multi-Scale Selective Residual Learning for Non-Homogeneous
Dehazing. In Conference on Computer Vision and Pattern Recognition. 507–515
19. Ren W, Pan J, Zhang H, Cao X, Yang M-H (2020) Single image dehazing via multi-scale convolu-
tional neural networks with holistic edges. Int J Comput Vision 128(1):240–259
20. Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A Efros (2017) Image-to-image translation with con-
ditional adversarial networks. In Conference on Computer Vision and Pattern Recognition. 1125–1134
21. Du Y, Li X (2018) Recursive deep residual learning for single image dehazing. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). https://github.com/yixindu1573/Recursive-Deep-Residual-Learning-for-Single-Image-Dehazing-DRL/tree/master/testData
22. Ayoub A, Naeem EA, El-Shafai W, Sultan EA, Zahran O, Abd El-Samie FE, EL-Rabaie ESM (2022) Video quality enhancement using recursive deep residual learning network. https://doi.org/10.1007/s11760-022-02228-w
23. Ayoub A, Naeem EA, El-Shafai W, Abd El-Samie FE, Hamad EKI, EL-Rabaie ESM (2023) Video quality enhancement using dual-transmission-map dehazing. Multimedia Tools and Applications
24. Ayoub A, Naeem EA, El-Shafai W, Abd El-Samie FE, Hamad EKI, EL-Rabaie ESM (2023) Video quality enhancement using different enhancement and dehazing techniques. Journal of Ambient Intelligence and Humanized Computing

25. Yizhou Jin, Guangshuai Gao, Qingjie Liu, Yunhong Wang (2020) Unsupervised conditional disen-
tangle network for image dehazing. In International Conference on Image Processing. 963–967
26. Golts A, Freedman D, Elad M (2020) Unsupervised single image dehazing using dark channel prior loss. IEEE Trans Image Process 29:2692–2701. https://doi.org/10.1109/TIP.2019.2952032
27. Koschmieder H (1924) Theorie der horizontalen Sichtweite. Beiträge zur Physik der freien Atmosphäre, pp. 33–53
28. Gao L, Zhang J, Zhang L, Tao D (2021) DSP: Dual soft-paste for unsupervised domain adaptive semantic segmentation. In: ACM International Conference on Multimedia, pp. 2825–2833
29. Huang L-Y, Yin J-L, Chen B-H, Ye S-Z (2019) Towards unsupervised single image dehazing with deep learning. In: International Conference on Image Processing, pp. 2741–2745
30. Levin A, Lischinski D, Weiss Y (2008) A closed-form solution to natural image matting. IEEE
Trans Pattern Anal Mach Intell 30(2):228–242
31. Fattal R (2014) Dehazing using color lines. ACM Trans Graph 34(1):13:1–13:14
32. Liu W, Hou X, Duan J, Qiu G (2020) End-to-end single image fog removal using enhanced cycle consistent adversarial networks. IEEE Trans Image Process 29:7819–7833. https://doi.org/10.1109/TIP.2020.3007844
33. Tan RT (2008) Visibility in bad weather from a single image. in Proc. IEEE Conf. Comput. Vis.
Pattern Recognit. pp. 1–8
34. Chen C, Do MN, Wang J (2016) Robust image and video dehazing with visual artifact suppression via gradient residual minimization. In: Lecture Notes in Computer Science, vol 9906 LNCS, pp. 576–591. https://doi.org/10.1007/978-3-319-46475-6_36
35. Zhang J, Li L, Zhang Y, Yang G, Cao X, Sun J (2011) Video dehazing with spatial and temporal coherence. Vis Comput 27(6–8):749–757. https://doi.org/10.1007/s00371-011-0569-8
36. Bonneel N, Tompkin J, Sunkavalli K, Sun D, Paris S, Pfister H (2015) Blind video temporal con-
sistency. ACM Trans Graph 34(6):196
37. Liu Y, Li H, Wang M (2017) Single image dehazing via large sky region segmentation and multiscale opening dark channel model. IEEE Access 5:8890–8903. https://doi.org/10.1109/ACCESS.2017.2710305
38. Shin J, Kim M, Paik J, Lee S (2020) Radiance–reflectance combined optimization and structure-guided ℓ0-norm for single image dehazing. IEEE Trans Multimedia 22(1)
39. Gibson K, Vo D, Nguyen T (2010) An investigation in dehazing compressed images and video in
Proc. OCEANS
40. Mohammad Khalid Othman, Alan Anwer Abdulla (2022) Enhanced Single Image Dehazing Tech-
nique based on HSV Color Space UHD Journal of Science and Technology | Vol 6 | Issue 2
41. Ma Z-L, Wen J, Hao L-L (2014) Video image defogging algorithm for surface ship scenes (in Chinese). Syst Eng Electron 36(9):1860–1867
42. Xie B, Guo F, Cai Z (2012) Universal strategy for surveillance video defogging. Opt Eng 51(10):101703-1–101703-7
43. Li Z, Tan P, Tan RT, Zou D, Zhou SZ, Cheong LF (2015) Simultaneous video defogging and stereo
reconstruction. In: Proc IEEE Conf Comput Vis Pattern Recognit, pp 4988–4997
44. Nguyen TV, Vien AG, Lee C (2022) Real-time image and video dehazing based on multiscale guided filtering. Multimedia Tools and Applications 81:36567–36584. https://doi.org/10.1007/s11042-022-13533-4
45. Wenqi Ren, Si Liu, Hua Zhang, Jinshan Pan, Xiaochun Cao, Ming-Hsuan Yang (2016) Single
image dehazing via multi-scale convolutional neural networks. In European Conference on Com-
puter Vision. 154–169
46. Cai B, Xu X, Jia K, Qing C, Tao D (2016) Dehazenet: An end-to-end system for single image haze
removal. IEEE Trans Image Process 25(11):5187–5198
47. Li B, Peng X, Wang Z, Xu J, Feng D (2017) AOD-Net: All-in-one dehazing network. In: IEEE International Conference on Computer Vision, vol 1, no 4, p. 7
48. Zhang H, Patel VM (2018) Densely connected pyramid dehazing network. In: Conference on Computer Vision and Pattern Recognition, pp. 3194–3203
49. Yuda Song, Zhuqing He, Hui Qian, Xin Du (2020)“Vision Transformers for Single Image Dehaz-
ing”.Journal of latex class files, vol. 18, No. 9, September
50. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai,
Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly et al
(2021) An image is worth 16x16 words: Transformers for image recognition at scale”. In ICLR

51. Zijun Deng, Lei Zhu, Xiaowei Hu, Chi-Wing Fu, Xuemiao Xu, Qing Zhang, Jing Qin, Pheng-Ann
Heng (2019) Deep multi-model fusion for single-image dehazing. In International Conference on
Computer Vision. 2453–2462
52. Ren W, Ma L, Zhang J, Pan J, Cao X, Liu W, Yang M (2018) Gated fusion network for single image dehazing. In: Conference on Computer Vision and Pattern Recognition, pp. 3253–3261. https://doi.org/10.1109/CVPR.2018.00343
53. Yeh C-H, Huang C-H, Kang L-W (2019) Multi-scale deep residual learning-based single image haze
removal via image decomposition. IEEE Trans Image Process 29:3153–3167
54. Yosef Gandelsman, Assaf Shocher, Michal Irani (2019)“Double-DIP”: Unsupervised Image Decompo-
sition via Coupled Deep-Image-Priors”. In Conference on Computer Vision and Pattern Recognition.
11026–11035
55. Li B, Gou Y, Liu JZ, Zhu H, Zhou JT, Peng X (2020) Zero-shot image dehazing. IEEE Trans Image Process 29:8457–8466. https://doi.org/10.1109/TIP.2020.3016134
56. Zhao S, Zhang L, Huang S, Shen Y, Zhao S (2020) Dehazing evaluation: Real-world benchmark datasets, criteria, and baselines. IEEE Trans Image Process 29:6947–6962. https://doi.org/10.1109/TIP.2020.2995264
57. Cosmin Ancuti, Codruta O Ancuti, Radu Timofte, Christophe De Vleeschouwer (2018) I-HAZE: a
dehazing benchmark with real hazy and haze-free indoor images”. In International Conference on
Advanced Concepts for Intelligent Vision Systems. 620–631
58. Ancuti CO, Ancuti C, Timofte R, De Vleeschouwer C (2018) O-HAZE: A dehazing benchmark with real hazy and haze-free outdoor images. In: Conference on Computer Vision and Pattern Recognition Workshops, pp. 754–762
59. Ancuti CO, Ancuti C, Sbert M, Timofte R (2019) Dense-Haze: A benchmark for image dehazing with dense-haze and haze-free images. In: International Conference on Image Processing, pp. 1014–1018. https://doi.org/10.1109/ICIP.2019.8803046
60. Ancuti CO, Ancuti C, Timofte R (2020) NH-HAZE: An image dehazing benchmark with non-homogeneous hazy and haze-free images. In: Conference on Computer Vision and Pattern Recognition Workshops, pp. 1798–1805
61. Li B, Ren W, Fu D, Tao D, Feng D, Zeng W, Wang Z (2019) Benchmarking single-image dehazing and beyond. IEEE Trans Image Process 28(1):492–505. https://doi.org/10.1109/TIP.2018.2867951
62. Cosmin Ancuti, Codruta O Ancuti, Christophe De Vleeschouwer (2016) D-hazy: A dataset to evaluate
quantitatively dehazing algorithms In International Conference on Image Processing. 2226–2230
63. Zhang Y, Ding L, Sharma G (2017) HazeRD: An outdoor scene dataset and benchmark for single image dehazing. In: International Conference on Image Processing, pp. 3205–3209. https://doi.org/10.1109/ICIP.2017.8296874
64. Zheng Z, Ren W, Cao X, Hu X, Wang T, Song F, Jia X (2021) Ultra-high-definition image dehazing via multi-guided bilateral learning. In: Conference on Computer Vision and Pattern Recognition, pp. 16180–16189
65. Nayar SK, Narasimhan SG (1999) Vision in Bad Weather. IEEE International Conference on Com-
puter Vision (ICCV), pp. 820–827
66. Schechner YY, Narasimhan SG, Nayar SK (2001) Instant Dehazing of Images using Polarization. In
Proceedings IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1: 325–332, 14,
15, 39
67. Tarel J, Hautière (2009) Fast Visibility Restoration from a Single Color or Gray Level Image. in Proc.
IEEE ICCV, pp. 2201–2208
68. Yu J, Chuangbai X, Dapeng L (2010) Physics-Based Fast Single Image Fog Removal. 10th IEEE Con-
ference on signal processing
69. Faming F, Fang L, Xiaomei Y, Chaomin S, Guixu Z (2010) Single Image Dehazing and Denoising
with Variational Method, IEEE Conference on image analysis and signal processing
70. Song Y, Hui B, Chang Z (2011) An Improved Dehazing and Enhancing Method using Dark Channel
Prior, IEEE Conference
71. Long J, Zhenwei S, Wei T (2012) Fast Haze Removal for Single Remote Sensing Images using Dark
Channel Prior, International conference on computer vision and remote sensing
72. Ancuti CO, Ancuti C (2013) Single image dehazing by multi-scale fusion. IEEE Trans Image Process 22(8):3271–3282. https://doi.org/10.1109/TIP.2013.2262284
73. Nitish G, Baru V (2014) Improved Single Image Dehazing by Fusion IJRET
74. Zhu Q, Mai J, Shao L (2015) A fast Single Image Haze Removal Algorithm Using Color Attenuation
Prior. IEEE Trans Image Process 24(11):3522–3533
75. Middleton WEK (2018) Vision through the Atmosphere. University of Toronto Press, Toronto, ON, Canada


76. Dhana Lakshmi Bhavani M, Murugan R, Goel T (2022) An efficient dehazing method of single image using multi-scale fusion technique. Journal of Ambient Intelligence and Humanized Computing
77. A. S, Ali A (2014) A novel method for video dehazing by multi-scale fusion. Int J Sci Eng Technol Res 3(24):4808–4813
78. Engin D, Genc A, Kemal Ekenel H (2018) Cycle-Dehaze: Enhanced Cyclegan for Single Image
Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Work-
shops, pp. 825–833
79. Wang G, Ren G, Jiang L, Quan T (2013) Single image dehazing algorithm based on sky region segmentation. Inf Technol J 12(6):1168–1175
80. Narasimhan S, Nayar S (2015) Interactive de-weathering of an image using physical models. In: Proc. IEEE Workshop, pp. 598–605
81. Lv X, Chen W, Shen I-F (2010) Real-time dehazing for image and video. In: Computer Graphics and Applications (PG), 18th Pacific Conference on, IEEE, pp. 62–69
82. Yang X, Xu Z, Luo J (2018) Towards Perceptual Image Dehazing by Physics-Based Disentanglement
and Adversarial Training. In Thirty-second AAAI conference on artificial intelligence

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and applicable
law.

Authors and Affiliations

Abeer Ayoub1 · Walid El‑Shafai1,2 · Fathi E. Abd El‑Samie1,3 · Ehab K. I. Hamad4 ·


El‑Sayed M. EL‑Rabaie1

Walid El‑Shafai
eng.waled.elshafai@gmail.com; walid.elshafai@el-eng.menofia.edu.eg
Fathi E. Abd El‑Samie
feabdelhamid@pnu.edu.sa; fathi_sayed@yahoo.com
Ehab K. I. Hamad
e.hamad@aswu.edu.eg
El‑Sayed M. EL‑Rabaie
srabie1@yahoo.com
1 Department of Electronics and Electrical Communication Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
2 Security Engineering Lab, Computer Science Department, Prince Sultan University, 11586 Riyadh, Saudi Arabia
3 Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671 Riyadh, Saudi Arabia
4 Electrical Engineering Department, Faculty of Engineering, Aswan University, Aswan 81542, Egypt
