Download as pdf or txt
Download as pdf or txt
You are on page 1of 16

Signal Processing: Image Communication 103 (2022) 116657

Contents lists available at ScienceDirect

Signal Processing: Image Communication


journal homepage: www.elsevier.com/locate/image

A contrast enhancement framework under uncontrolled environments based


on just noticeable difference✩
Yan Chai Hum a ,∗, Yee Kai Tee a , Wun-She Yap b , Hamam Mokayed c , Tian Swee Tan d ,
Maheza Irna Mohamad Salim e , Khin Wee Lai f
a
Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
b
Department of Electrical and Electronic Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
c Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
d BioInspired Device and Tissue Engineering Research Group, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi

Malaysia, 81300 Skudai, Johor, Malaysia


e
Diagnostic Research Group, School of Biomedical Engineering and Health Sciences, School of Biomedical Engineering and Health Sciences, Faculty of Engineering,
Universiti Teknologi Malaysia, 81300 Skudai, Johor, Malaysia
f
Department of Biomedical Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia

ARTICLE INFO ABSTRACT


Keywords: Image contrast enhancement refers to an operation of remapping the pixels’ values of an image to emphasize
Contrast enhancement desired information in the image. In this work, we propose a novel pixel-based (local) contrast enhancement
Image enhancement algorithm, based on the human visual perception. First, we make an observation that pixels with lower regional
Histogram equalization
contrast should be amplified for the purpose of enhancing the contrast and pixels with higher regional contrast
should be suppressed to avoid undesired over-enhancement. To determine the quality of the regional contrast
in the image (either lower or higher), a reference image will be created using a proposed global based contrast
enhancement method (termed as Mean Brightness Bidirectional Histogram Equalization in the paper) for fast
computation reason. To quantify the abovementioned regional contrast, we propose a method based on human
visual perception taking Just Noticeable Difference (JND) into account. In short, our proposed algorithm is able
to limit the enhancement of well-contrasted regions and enhance the poor contrast regions in an image. Both
objective quality and subjective quality experimental results suggested that the proposed algorithm enhances
images consistently across images with different dynamic range. We conclude that the proposed algorithm
exhibits excellent consistency in producing satisfactory result for different type of images. It is important to
note that the algorithm can be directly implemented in color space and not limited only to grayscale. The
proposed algorithm can be obtained from the following GitHub link: https://github.com/UTARSL1/CHE.

1. Introduction applications [6]. For example, image quality is crucial for X-ray or
CCTV, while for entertainment purpose, naturality is a more impor-
Image, as a medium to convey information, is applied to serve tant quality required from an image; hence, blurring of the photo
various purposes in daily life [1]. For example, X-ray images are occasionally produces better quality [7]. Despite being adopted for
utilized to supply doctors the information about body conditions of a aesthetic reasons as well, automatic image processing algorithms are
patients [2]. CCTV captured images are analyzed for security purposes
mostly applied for automatic system such as intelligent vision system
such as finding suspects of crimes [3]. In another hand, artistic camera
of factories, object recognition, classification, automated vehicles and
captured photos are utilized to provide visual satisfaction for entertain-
face recognition system.
ment purposes, such as wallpapers of computers, travel photos, and
wedding photos [4]. Contrast enhancement (CE) is often used to emphasize objects from
Image processing algorithms are adopted to augment automatic background to make different objects in an image more distinguishable.
image editing processes in enhancing the image quality [5]. Image When an image is too dark, having glaze, uneven illumination and
quality is not universal for all applications, depending on the purpose reflections, objects may saturate into background and some important
of the image is assessed for, the quality desired varies for different features can hardly be observed by human visual system. Therefore,

✩ This work was supported by the UTAR Research Fund (IPSR/RMC/UTARRF/2020-C1/H02).


∗ Corresponding author.
E-mail address: humyc@utar.edu.my (Y.C. Hum).

https://doi.org/10.1016/j.image.2022.116657
Received 26 October 2021; Received in revised form 29 December 2021; Accepted 27 January 2022
Available online 2 February 2022
0923-5965/© 2022 Elsevier B.V. All rights reserved.
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

(3) We propose a scheme to regulate the contrast enhancement re-


sulted from MBBDHE by gauging the regional contrast that
depends on the just-noticeable-difference (JND). The contrast of
each region within the image will be adjusted according to the
reference enhancement derived by MBBDHE weighted using the
regional contrast. We give details about this in Section 6.2.

(4) We propose a novel scheme of categorization of images in order to


evaluate the performance of different HE algorithms on images
from different categories.
Fig. 1. Examples of different types of images (a) Cameraman image with normal
brightness, (b) Kids’ image with large scale of darkness, (c) Forest image with extreme
light exposure. 4. Previous works

In grayscale image, combinations of different intensity values of


CE algorithm appears as a conducive tool to detect the unobservable the pixels reveal information about both the shape and brightness of
features and reveal the hidden information in an image. objects in the image. Under the premise, most image CE algorithms
A substantial number of CE algorithms with different strengths intend to remap gray levels of an image with minimal distortion to
and limitations have been developed to fulfill the needs of different the shape. Most existing image CE methods utilize histogram of gray
applications. However, most of the algorithms perform well for only levels or neighboring context of pixels as input to manipulate their
specific types of images. In this article, a CE algorithm is proposed, gray level remapping. In this paper, the reviewed image CE methods
not specified to maximize performance on certain types of images are categorized into global and pixel-wise gray level remapping based
but to produce consistent result across different types of images with on their remapping operations instead of the inputs they considered.
reasonably satisfactory contrast in human’s perception.
4.1. Global gray level remapping
2. Importance of the study
Global gray level remapping methods remap all pixels in the entire
Human or manual CE using software only works if the user notices image that contain the same intensity value to a new value. Image CE
and realizes that there exist certain important features hidden in the algorithms in this category processes pixels with the same gray level as
low contrast area of the image. However, human eyes can detect only a whole, regardless of the differences in spatial context between every
contrast that exceeds certain gray level difference. For important ap- pixel. These methods adopt input histogram or sometimes neighboring
plications such as X-ray and MRI (medical purposes), even the smallest context of the entire gray level to assess the appropriate amount of
contrast unnoticeable by human eye might affect the judgment of intensity remapping. The global gray level remapping methods, theo-
doctors [8]. retically, maintain the shapes of objects in the input image since same
Image is just a representation of a huge set of combination of gray level are remapped together to another level without changing
pixel values in computer software. Some existing CE algorithms are their positions in the image.
suitable only for certain type of images and applications. In contrast Generally, this paradigm of CE expands the dynamic range of high
to images that are taken in controlled environment which offer similar frequency gray levels by narrowing the dynamic range of low frequency
appearances, photos taken for application such as case investigation gray levels. Therefore, different low frequency gray levels might be
inevitably confront with enormous possibilities and uncertainties. Fig. 1 remapped to the same gray levels to provide larger dynamic range
demonstrates that images might exist under different light exposure. to increase the contrast of high frequency gray levels leading to sat-
Therefore, it is important to develop a CE algorithm that produces con- uration effect of small objects in low contrast region given that the
sistent and promising result across images with various illuminations frequencies of their gray levels are low. Besides, if two separate objects
conditions. with the same gray level but opposite tone of background (bright
or dark background), enhancing contrast of one object will lead to
3. Contributions saturation of another object into the background. This method retains
the relationships between gray levels.
Most examples of global gray level remapping algorithm were devel-
The main contributions of this research work are
oped based on the idea of histogram equalization (HE), such as Global
summarized as follows:
Histogram Equalization (GHE) [9], Brightness-preserving Bilinear His-
togram Equalization (BBHE) [10] (variations of BBHE: Dualistic Sub-
(1) We propose a perpetual contrast enhancement framework con-
Image Histogram Equalization (DSIHE) [11], Minimum Mean Bright-
sisted of 4 modules: (i) MBBDHE, (ii) Reference Enhancement,
ness Error Bi-Histogram Equalization (MMBEBHE) [12], Recursive
(iii) Regional Contrast Information Acquisition, and (iv) Bright-
Mean Separate Histogram Equalization (RMSHE) [13]), Dynamic His-
ness Readjustment. The overview of the relationship among
togram Equalization (DHE) [14], Two-dimensional Histogram Equal-
these modules can be found in the flowchart of Fig. 13.
ization (2DHE) [15], Histogram Modification Framework (HMF) [16],
(2) We propose a novel bi-histogram equalization method named as and Weighted Threshold Histogram Equalization (WTHE) [17], Con-
Mean Brightness Bidirectional histogram equalization (MBB- textual and Variational Contrast Enhancement(CVC) [18], Gray Level
DHE). This method performs bi-histogram equalization by ad- Grouping (GLG) [19], Multipurpose Beta Optimized Bi-HE MBOBHE.
justing brightness based on just-noticeable-difference (JND) of (MBOBHE) [20].
Human Visual System (HVS). The highlight of this proposed
bi-histogram equalization method is the introduction of the bidi- 4.2. Pixel-wise gray level remapping
rectional enhancement scheme. MBBDHE acts as the underlying
contrast enhancement method of the entire framework of our In contrast with global gray level remapping CE, pixel-wise gray
proposed algorithm. We give details about this in Section 6.1. level remapping CE remaps intensity values pixel by pixel. In this case,

2
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

objects with the same gray level in an image might be remapped into
different gray levels using pixel-wise gray level remapping. However,
input histogram alone provides insufficient information to differen-
tiate pixels with the same intensity. Therefore, in pixel-wise gray
level remapping CE, additional information regarding the specific pixel
is required. In grayscale image, both the position of pixels and the
corresponding neighboring content supplies additional dimension of
information (other than intensity value).
For image contrast enhancement, only neighboring context provides
pertinent information to manipulate enhancement for a particular pixel.
Since the contrast level is represented by the gray level distance of an
object to its neighboring background, hence position of an object in an
image carries no information about the contrast level. Pixel-wise gray
level remapping methods hold the potential to enhance contrast of an Fig. 2. Human visibility threshold based on background luminance.
Source: Image from [28]
image with minimal saturation effect. Since contrast is determined by
gray level distance between neighboring objects, enhancing intensity
differences between pixels based on local context actually serves the
original purpose of CE better. In fact, pixel-wise gray level remapping constancy. They utilized JND transform to acquire a JND map that
should reveal maximum number of hidden features in the image with models the perceptual response of HVS. They implemented perceptual
minimal saturation effect. However, pixel-wise gray level remapping Generalized Equalization Model (GEM) to control the image tone. By in-
requires windowing operations resulting in significant reduction in tegrating perceptual GEM with JND transform, contrast enhancement,
computational speed. Furthermore, adaptive image processing often color reproduction and detail enhancement in images without over-
neglects the consistency of the overall visual effect on the image enhancement was claimed to achieve. They proposed the use of color
resulting in undesired distortion or artifacts to the image such as constancy to predict the light source color in which the color constancy
halo effect or checker box effect. Stated below are classic examples of is capable of estimating the scene illuminant and hence can adopt it
pixel-wise gray level remapping methods: Adaptive Histogram Equal- to generate canonical image from the color-biased image. First they
ization (AHE) [21], Contrast Limited Adaptive Histogram Equalization optimize both color and contrast using perceptual GEM. Next, a process
(CLAHE) [22] and Exact Histogram Specification [23]. known as JND transform will generate a JND map using HVS response
Most of the global gray level remapping method or even pixel- model derived from the foreground and background luminance. Then,
based gray level remapping method suffers from being inconsistent in Weber’s law updates JND map with the purpose to intensify perceptual
enhancing contrast of different type of images; the algorithm usually response. Lastly, inverse JND transform will be implemented from the
performs well only in certain ranges of images. We therefore would like base image in which the JND map will generate the resultant image.
to propose a consistent contrast enhancement method that performs Liang Zhou et al. [26] proposed a contrast enhancement method
reasonably well even under uncontrolled environment. that is predicated on virtual viewing distance for data visualization
with color images. This enhancement is claimed to exhibit perceptual
4.3. Existing HVS-based contrast enhancement methods characteristic such that the bands adjustments depend on the extracted
luminance channel of the input image in order to become visible at a
Most of the HVS-based contrast enhancement techniques [4,5,24] virtual viewing distance. The method can be considered as a threshold
in the literatures apply unique design of the algorithm attributable to model of spatial vision. Despite under different viewing conditions,
the fact that the term ‘‘HVS’’ does not indicate any specific technique. this model is capable to perform prediction of visibility of an object.
The terms ‘‘HVS-based contrast enhancement algorithm’’ generally de- The core of this model is the computation of contrast and Contrast
scribe a certain category of techniques that exhibit characteristic that Sensitivity Functions (CSF) believing that HVS entails visual pathways
mimics human visual system: either the designed technique models in a band-pass fashion such that spatial vision can be modeled by
the working principle of human visual system or the resultant image multiscale models.
aligns with human visual perception. In our case, we apply JND that
reflects the HVS in perceiving contrast to automatically adjust the 5. Human visual system (HVS) on contrast
weight of enhanced contrast in different regions of the image depending
on the regional contrast such that regions of low regional contrast CE algorithms are frequently adopted as image preprocessing for
require more enhancement and vice versa. In this section, we discuss intelligent vision systems. Images with good contrast improve per-
the core techniques designed by several recent HVS–based contrast formances of object detection and object classification. Besides that,
enhancement techniques. contrast enhancement algorithms can also be used directly to produce
S. Keerativittayanun et al. [7] addressed over-enhancement prob- clearer photos or videos. Therefore, the performances of intelligent
lem in images with less-visible and nicely-visible regions. They de- vision systems, enhanced photos or videos, are directly affected by feed-
signed a method that applies singular value decomposition (SVD) to back from humans, either users or developers that develop the systems.
decompose an image in order to detect image noises, the denoised Thus, understanding the ‘‘behavior’’ of HVS can produce system that
image then will undergo an adaptive non-linear scaling function to satisfy human visual system and hence serve human better.
strengthen less-visible regions. Eventually, a pyramid-based blending HVS suffers from recognizing contrast between gray levels, de-
method is proposed to recover nicely visible regions in order to over- pending on situations; the minimum contrast difference noticeable
come the over-enhancement problem. The introduction of an adaptive by human eyes differs; this minimum difference is known as JND.
non-scaling function highlighted this work. The authors claimed that Experiments was conducted [27] to show that visibility is reflected in
the resultant images align with the human visual perception by estab- non-linearly to the background luminance; even at the same contrast
lishing a balance between tone preservation and detail preservation. level, the perception of human eye in recognizing the visual difference
This function increases the pixel values of the less-visible regions by at depends on many factors such as viewing distance and background
least the corresponding JND values to facilitate human perception. luminance.
Long Yu et al. [25] proposed a perceptually optimized enhancement Fig. 2 above shows that HVS perceives the smallest gray level
of contrast and color in images adopting JND transform and color distance when the background luminance is around the middle of the

3
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Fig. 3. Grayscale bar. An illustration that human perceives differently at dark and
bright zones.

dynamic range (from darkest to brightest = [0,255]). When background


luminance becomes too dark or bright, HVS faces difficulty in rec-
ognizing contrast difference (increment of visibility threshold). JND
is frequently introduced to improve performance of CE algorithm by
avoiding over-enhancement such as in the works of [7,29]. A formula
was adopted in [29] to characterize JND:
Fig. 4. Effect of merely enhancing contour (a) Original image of custom image (b)
⎧ ( )0.5
Contour enhanced custom image (c) Original football image (d) Contour enhanced
⎪𝑇0 (1 − 𝑘 ) + 3 𝑓 𝑜𝑟 𝑥 ≤ 127
𝐽 𝑁𝐷 (𝑥) = ⎨ 127 (1) football image.
⎪𝛾 (𝑘 − 127) + 3 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒

The JND defined in (1) attempts to model the result from Fig. 2 where
𝑘 represents background luminance (or the reference gray level used to
estimate the regional contrast), and 𝑇0 and 𝛾 is the variable defined by
the viewing distance of the observer and the monitor. For experiment
of [28], the parameters were found to be 𝑇0 = 17; 𝛾 = 13∕128. In
another word, human eyes are excellent in deciphering intensities of
different gray tone but difficult to perceive contrast difference under Fig. 5. Gray level remapping without preserving brightness relationship between circle
extremely dark or extremely bright condition. and triangle (a) Original custom image (b) Image where circle and triangle have the
The grayscale bar in Fig. 3 above is a concatenation of a series of same brightness.
rectangles which are 5 gray levels to each other. It is easier for human
eyes to perceive the vertical straight lines at the interconnection of the
rectangles at the ‘‘gray zone’’ instead of the dark and bright zones. The challenge of CE is to emphasize objects from background while
Most photos and videos with poor contrast are due to bad lighting preserving information carried by the image. In this sense, global gray
(insufficient light source or over-exposure to light). Based on the JND level remapping becomes handy. In global gray level remapping, pixels
behavior of HVS, before applying any contrast enhancement, readjust- with the same gray level are remapped simultaneously to a new gray
ing the mean brightness of the photos or videos to medium brightness level. For example, if pixel A is brighter than pixel B in input image,
(not too dark and not too bright) should improve the noticeable gray after global gray level remapping, pixel A is still brighter than pixel B,
level difference by HVS. but the gray level difference might change.
Considering CE of color image can be achieved by using grayscale In global gray level remapping, increasing contrast by stretching
image CE on the V channel, CE only processes a 2D matrix of grayscale dynamic range of certain gray levels also suppresses dynamic range of
image. Every pixel in a grayscale image only contains one value (which other gray levels. Despite mathematically, global gray level remapping
is referred as gray level that represents brightness of the pixel). The method such as GHE maintains the brightness relationship between
difference between two neighboring pixels’ values forms an edge point. gray levels; however, since gray levels are discrete integers, saturation
Combination of edge points forms contours. However, human eyes can effect might occur. Besides, reducing dynamic range of other gray
only perceive gray level differences above certain threshold. The edges levels also means reducing contrast between those gray levels. Global
and contours unnoticeable by human eyes are hidden features that gray level remapping algorithm entails inevitable trade-off of contrast
require enhancements. The direction of gray level difference between between certain gray levels for enhancement of other gray levels,
neighboring two pixels represents brightness relationship of these pix- therefore, further improvement of global gray level remapping becomes
els. The brightness relationship carries information such as luminance an optimization task. In fact, only brightness relationship between
neighboring objects carries pertinent information about the shapes of
and lighting condition in the image, and 3D shapes of the actual object
the objects. Objects that are separated far away from each other does
in the image.
not affect the shape of the one another.
Enhancing contrast of an image involves modifying pixel values
As shown in Fig. 5(b), the brightness relationship between triangle
of the grayscale image to magnify the gray level difference between
and circle is no longer preserved, but the information about the shape
neighboring pixels that represent contour of an object in the image.
is maintained. Fig. 5(b) shows that when two objects are not adjacent
Consequently, the edges and contours are revealed. However, in order
to each other, the brightness relationship between the objects does not
to preserve the information carried by the image, the brightness rela- distort the shapes of the objects; hence, pixel-wise gray level remapping
tionship must be maintained; hence, pixels in an object, that do not comes into play. Pixel-wise gray level remapping as a local method, has
represent the contour, also need to be enhanced in accordance to the the potential to remap different gray levels to the same intensity, or
contour pixels to preserve the shape and luminance of the object. remap pixels with similar gray level to different intensities, based on
As shown in Fig. 4(b), the originally flat 2D shape in Fig. 4(a) local context of the image. Pixel-wise gray level remapping algorithm
seems like a 3D object due to distorted brightness relationship between breaks the dynamic range limit of global gray level remapping algo-
pixels in the shapes. Fig. 4(d) shows that the football seems a little bit rithm. However, changing brightness relationship between objects also
flatter than the original image Fig. 4(c) although the appearances of the distorts the luminance information of the image. For example, using
objects are emphasized. As shown in Fig. 4(b), even the circle and tri- local CE ‘‘scrambles’’ the luminance, the observer fails to determine the
angle become more obvious than Fig. 4(a), the brightness relationship direction of light source in the image.
between center and edge of the shapes are not preserved, misleading In conclusion, for applications where luminance information is not
the observer to believe that the circle and triangle objects are originally crucial, pixel-wise gray level remapping holds greater potential in detail
3D (not flat). preservation.

4
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Fig. 7. MBBDHE Operation (a) Input histogram ⌈ (b)⌉ Output histogram. Regardless of
the value of 𝑥𝜇 , 𝑦𝜇 will always be shifted to 𝑘−1
2
= 128.
Fig. 6. Visualization of effect of HE operation on histogram: illustrating that HE
operation enhances gray levels proportional to their frequency respectively where
𝑣𝑖 ∝ ℎ𝑖 . (a) Image histogram before HE operation (b) Image histogram after HE
operation.
levels. Consequently, when enhancing gray level with low intensity
(dark) but high frequency, the gray level tends to become too bright,
resulting in reduced dynamic range for other brighter gray levels.
6. Methodology Let the probability distribution function of the input image be
𝑃 = {𝑃 (𝑖) |𝑖 = 1, 2, … , 𝑘} where 𝑃 (𝑖) represents probability of frequency
A contrast enhancement algorithm was proposed to remap the gray 𝐻 (𝑖) of 𝑥𝑖 gray level. The cumulative distribution function, 𝐶 =
levels of an input image based on regional contrast. In contrast with {𝐶 (𝑖) |𝑖 = 1, 2, … , 𝑘} of MBBDHE can be computed as shown in (2).
2DHE, the proposed algorithm is a pixel-wise gray level remapping
⎧0 𝑓 𝑜𝑟 𝑖 = 1
method. Throughout this text, we shall frequently use the expres- ⎪ 𝑖−1
sion ‘‘proposed algorithm’’ and it refers to method we described in ⎪∑ ∑
𝜇−1

Section 6.2. ⎪( 𝑃𝑥 )∕(2 𝑃𝑥 ) 𝑓 𝑜𝑟 𝑖 = 2, … , 𝜇


𝐶 (𝑖) = ⎨ 𝑥=1 𝑥=1 (2)
The proposed algorithm enhances the contrast inversely propor- ⎪ ∑ 𝑖 ∑𝑘
tional to each pixel’s regional contrast level. In another word, pixel ⎪( 𝑃𝑥 )∕(2 𝑃𝑥 ) + 0.5 𝑓 𝑜𝑟 𝑖 = 𝜇 + 1, 𝜇 + 2, … , 𝑘

with inferior regional contrast will be enhanced to a reference’s max- ⎩ 𝑥=𝜇+1 𝑥=𝜇+1
imum limit; at the same time, the algorithm can also inhibit en- The 𝜇 in (2) represents the index of the mean gray level. Gray level 𝑥𝜇
hancement of pixel with good regional contrast to avoid unpleasant is the input gray levels that represents the mean brightness of the input
over-enhancement. In order to achieve that, other than the weight of image as defined in (3).
enhancement, a reference that determines the ‘‘direction’’ on enhance- ⌊ 𝑘 ⌋
ment is needed. The reference decides whether the pixel should be ∑
𝑥𝜇 = (𝑥 − 1)𝑃𝑥 (3)
remapped to a brighter or darker gray level in order to enhance the 𝑥=1
contrast. In this case, HE based algorithm serves the best reference
The remaining
{ operation
} is the same as GHE, where input gray levels
due to two factors: low computation requirement, and tendency to
over-enhancement. To avoid washed-out effect by GHE, a modified { = 𝑥𝑖 |𝑖 = 1, 2, … , 𝑘 will be
𝑋 } remapped to output gray levels 𝑌 =
𝑦𝑖 = (𝑘 − 1) 𝐶(𝑖)|𝑖 = 1, 2, … , 𝑘 . In MBBDHE, 𝐶(1) will always be 0 as
HE named mean brightness bidirectional histogram equalization (MBB- ∑0
𝑥=1 𝑃𝑥 = 0. The darkest existing gray level in the input image will
DHE) is introduced in Section 6.1. The result of MBBDHE will act as
always be remapped to 0; while the brightest existing gray level will
reference for the proposed regional contrast based weighted contrast
always be remapped to the highest gray level (𝑘 − 1), in this paper,
enhancement algorithm in Section 6.2. The proposed algorithm adjusts (𝑘 − 1) = 255. The mean brightness, 𝑥𝜇 will be set to the middle of
the enhancement of the reference to restrain over-enhancement and the available dynamic range, 0.5(𝑘 − 1) as 𝐶 (𝜇) = 0.5. Consequently,
result in a more naturally looking, sufficiently enhanced image. the overall brightness will be shifted to medium brightness as HVS can
recognize contrast difference best at medium background luminance as
6.1. Mean Brightness Bidirectional Histogram Equalization (MBBDHE) discussed in Section 5. The operation of MBBDHE can be visualized as
shown in Fig. 7.
MBBDHE is a bi-histogram equalization method that utilize bright- The MBBDHE is similar to BBHE in a sense that MBBDHE divides
ness adjustment based on JND of HVS. We further augment the perfor- the histogram into two sections (dark section and bright section) using
mance by introducing a bidirectional enhancement scheme to reduce mean brightness and applies HE operation for both sections separately.
potential washed-out effect. This algorithm is designed as a simplified However, instead of maintaining the mean brightness, the mean gray
idea of bi-directional Switching Equalization [30]. level is shifted to the middle of the dynamic range. As shown
⌈ ⌉in Fig. 7,
In order to understand the meaning of ‘‘direction’’ referred in this 𝑥𝜇 , whatever the value is, will be shifted to 𝑦𝜇 = 𝑘−1 2
= 128.
paper, the nature of HE operation needs to be discussed by referring Besides that, instead of enhancing the gray levels proportional to their
to Fig. 6. The operation of HE implies a preliminary assumption that: frequency in on direction only (as shown in Fig. 6), MBBDHE enhances
the higher the frequency of a gray level, the more important the gray the gray levels away from the mean brightness in two directions. The
level is. HE operation assumes gray levels with high frequencies forms bi-directional nature of MBBDHE addresses not only washed-out effect,
important information of the image, low frequency gray levels are but also realizes full utilization of the dynamic range conforming to
insignificant details. For example, let the input histogram has gray HVS in perceiving gray level contrast.
{ }
levels 𝑋 = 𝑥1 , 𝑥2 , … , 𝑥𝑘 and will be remapped to gray levels 𝑌 = In this paper, the proposed algorithm uses regional contrast infor-
{ }
𝑦1 , 𝑦2 , … , 𝑦𝑘 after HE. If gray level 𝑥𝑖 has high frequency, ℎ𝑖 , hence mation as weight to adjust the enhancement produced by MBBDHE.
the resultant 𝑦𝑖 will have its contrast enhanced away from its adjacent An enhancement reference and regional contrast weights complements
gray level for gray level distance of 𝑣𝑖 , where 𝑣𝑖 is proportional to ℎ𝑖 . each other for the proposed algorithm to work. Theoretically, result
However, both 𝑦𝑖−1 and 𝑦𝑖+1 can be regarded as adjacent gray levels of from any contrast enhancement algorithm can be used to replace
𝑦𝑖 . Therefore, theoretically, 𝑦𝑖 can be enhanced away from 𝑦𝑖−1 (to be MBBDHE to form reference, MBBDHE; however, MBBDHE, being a
brighter), or 𝑦𝑖+1 (to be darker) by amount of 𝑣𝑖 . In normal HE opera- global gray level remapping method, is preferred for its efficient com-
tion, gray levels are always enhanced away from their previous gray putational property, preservation of brightness relationships between

5
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

regions in image and avoidance of GHE’s limitations. Applying regional


contrast weight described in Section 6.2 transforms the algorithm into
pixel-wise gray level remapping.

6.2. Formulation of problem and the proposed algorithm

Let 𝐼 ∈ Z𝐻×𝑊 (in which 𝐼 (𝑖, 𝑗) = {𝑖, 𝑗 ∈ Z+ |𝑖 ≤ 𝐻, 𝑗 ≤ 𝑊 })


depicts the input image and 𝐽𝑟𝑒𝑓 ∈ Z𝐻×𝑊 (in which 𝐽𝑟𝑒𝑓 (𝑖, 𝑗) = {𝑖, 𝑗 ∈
Z+ |𝑖 ≤ 𝐻, 𝑗 ≤ 𝑊 }) depicts the resultant image of MBBDHE, where
𝐻 ∈ Z+ and 𝑊 ∈ Z+ denotes the height and the width of the image
respectively. Reference enhancement, 𝐸 ∈ Z𝐻×𝑊 (in which 𝐸 (𝑖, 𝑗) =
{𝑖, 𝑗 ∈ Z+ |𝑖 ≤ 𝐻, 𝑗 ≤ 𝑊 }) denotes the values to be added to input
image, 𝐼 in order to increase or decrease pixel’s values (to brighten
or darken the pixels) to achieve contrast enhancement of the reference
result, 𝐼 + 𝐸 = 𝐽𝑟𝑒𝑓 . Therefore, the reference enhancement between 𝐼
and 𝐽𝑟𝑒𝑓 from MBBDHE can be computed as shown in (4):

𝐸 (𝑖, 𝑗) = 𝐽𝑟𝑒𝑓 (𝑖, 𝑗) − 𝐼(𝑖, 𝑗) (4)

Then the information of regional contrast is acquired by windowing Fig. 8. Graph plot of 𝑓 (𝑎, 𝑏, 𝑖, 𝑗) versus the absolute gray level difference
the input image using a 𝑤 × 𝑤 kernel where 𝑤 = {2𝑛 + 1 ∶ 𝑛 ∈ Z+ |𝑛 ≤ |𝐼𝑝𝑎𝑑 (𝑎, 𝑏) − 𝐼 (𝑖, 𝑗) | between a neighboring pixel and the center pixel (𝑖, 𝑗) if 𝛼 = 1.2
and 𝛽 = −20 for different 𝐼 (𝑖, 𝑗). Different 𝐼 (𝑖, 𝑗) will generate different 𝐽 𝑁𝐷 and thus
min(𝑟, 𝑐)}. Similar to 2DHE, the selection of 𝑤 will affect the final result.
different characteristic of 𝑓 (𝑎, 𝑏, 𝑖, 𝑗).
The suitable 𝑤 is selected based on the nature of the image and the
purpose of the user. For this paper, 𝑤 = 33 is adopted to demonstrate
the performance of the proposed algorithm. The padding used for this
convolution process is symmetric padding, where 𝑤−1 2
pixels width of
mirror padding
( is) adding
( to )all four sides of the input image to form
𝐻+2( 𝑤−1 ) × 𝑊 +2( 𝑤−1 )
𝐼𝑝𝑎𝑑 ∈ Z 2 2 .
Regional gray level similarity, 𝑅 ∈ Z𝐻×𝑊 computed from padded
image, 𝐼𝑝𝑎𝑑 , is computed to reflect regional homogeneity of region
around 𝐼(𝑖, 𝑗). The lower the similarity which is 𝑅(𝑖, 𝑗), the higher the
regional contrast of pixel 𝐼(𝑖, 𝑗); thus, less enhancement is required
for pixel 𝐼(𝑖, 𝑗) and vice versa. In another word, 𝑅 (𝑖, 𝑗) represents the
weight of the enhancement and determines the amount of enhancement
needed for 𝐼(𝑖, 𝑗), where ∀𝑅 (𝑖, 𝑗) ∈ [0, 1]. The 𝑅 can be computed as in
(5).
𝑖+ 𝑤−1 𝑗+ 𝑤−1
1 ∑2 ∑ 2
𝑅 (𝑖, 𝑗) = 𝑓 (𝑎, 𝑏, 𝑖, 𝑗) (5)
𝑤2
𝑎=𝑖− 𝑤−1
2
𝑏=𝑗− 𝑤−1
2

Where,
( )
1
𝑓 (𝑎, 𝑏, 𝑖, 𝑗) = | |
(6)
|𝐼𝑝𝑎𝑑 (𝑎,𝑏)−𝐼(𝑖,𝑗)|−𝛽−𝐽 𝑁𝐷(𝐼(𝑖,𝑗))
1 + 𝛼| |
⎧ ( )0.5
⎪𝑇 (1 − 𝑥 ) + 3 𝑓 𝑜𝑟 𝑥 ≤ 127
𝐽 𝑁𝐷 (𝑥) = ⎨ 0 127 (7)
⎪𝛾 (𝑥 − 127) + 3 𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒

13
In this article, 𝑇0 = 17, 𝛾 = 128 . The |𝐼𝑝𝑎𝑑 (𝑎, 𝑏) − 𝐼 (𝑖, 𝑗) | repre-
sents the absolute gray level difference between the neighboring pixel,
𝐼𝑝𝑎𝑑 (𝑎, 𝑏) with the kernel center, 𝐼(𝑖, 𝑗); the parameter 𝛼 and 𝛽 define the
characteristic of 𝑓 (𝑎, 𝑏, 𝑖, 𝑗) (defined in (6) in which 𝐽 𝑁𝐷(.) is defined
in (7)) recommended to be set at 1.2 and −20 respectively to model
the HVS. These values of 𝛼 and 𝛽 are set empirically determined based
Fig. 9. Regional gray level similarity, R. (a) Forest Image (b) 𝑅 of Forest Image (c)
on human visual system by taking the mean values of 𝛼 and 𝛽 across
Cameraman Image (d) 𝑅 of Cameraman Image (e) Kids Image (f) 𝑅 of Kids Image.
7 observers on 120 randomly chosen color images and 120 randomly
grayscale images from image database. The relationship between the
gray level difference and the 𝑓 (𝑎, 𝑏, 𝑖, 𝑗) is as shown in below Fig. 8.
The smaller the gray level differences, the higher the 𝑓 (𝑎, 𝑏, 𝑖, 𝑗) region. When enhancing contrast of and image, the dynamic range of
resulting in higher regional gray level similarity, 𝑅(𝑖, 𝑗), thus more the enhanced region expands to provide more space for larger gray
enhancement is applied. The 𝑅 can be visualized in Fig. 9 to see which level difference, consequently, the mean brightness of the enhanced
particular regions in the image need more enhancement. region changes. For good contrast region that has less enhancement, the
Fig. 9 shows the R of different I. The lower the value of R(i,j) (darker brightness shifting is lesser than poor contrast region. This is also the
regions in R as shown in Fig. 9), the better the contrast of that particular main disadvantage of pixel-wise gray level remapping method because
w×w region around I(i,j). enhancement is performed separately for each local region.
After computing 𝑅, brightness readjustment is needed to preserve In order to readjust the brightness, the reference enhancement of
brightness relationship between poor contrast region and good contrast the from the global gray level remapping method, 𝐸 becomes useful.

6
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Fig. 10. Visualization of Changes Along the Proposed Operation (a) Original Image 𝐼
(b) 𝐼 + 𝑅 ⊙ 𝐸 (c) 𝐽 = 𝐼 + 𝑅 ⊙ 𝐸 + 𝐵 in which the ⊙ represents element-wise dot product
(multiplication).

First, the enhancement of good contrast region is emphasized to form


𝐺 = {𝐺 (𝑖, 𝑗) |1 ≤ 𝑖 ≤ 𝐻, 1 ≤ 𝑗 ≤ 𝑊 } defined in (8).

𝐺 (𝑖, 𝑗) = (1 − 𝑅 (𝑖, 𝑗)) 𝐸 (𝑖, 𝑗) (8)

Then, the emphasized enhancement of good contrast region, 𝐺, Fig. 11. Effect of Different w on Cameraman Image (a) Input Image. (b) 𝑗𝑟𝑒𝑓 (c) 𝑤 = 3
is smoothed to produce averaged brightness shifting within the good (d) 𝑤 = 23 (e) 𝑤 = 33 (f) 𝑤 = 53.
contrast region. Median filter is used to smoothen 𝐺 for better edge
preservation and minimize ‘‘halo effect’’ of most pixel-wise gray level
remapping (such as CLAHE). The median filtering uses the same ker- on the face only of the cameraman, despite the whole image is being
nel size (𝑤 × 𝑤 square kernel) and symmetric padding. Let 𝐺𝑝𝑎𝑑 = observed at the same time. The area of focus should not be too small
{𝐺𝑝𝑎𝑑 (𝑎, 𝑏)|1 − 𝑤−1
2
≤ 𝑎 ≤ 𝐻 + 𝑤−12
, 1 − 𝑤−1
2
≤ 𝑏 ≤ 𝑊 + 𝑤−1 2
} be the because it is impossible for human to assess an image pixel by pixel. In
symmetrically padded 𝐺, brightness adjustment, 𝐵 = {𝐵(𝑖, 𝑗)|1 ≤ 𝑖 ≤ an extreme example, detail as small as one pixel carries no important
𝐻, 1 ≤ 𝑗 ≤ 𝑊 } (defined in (9)) can be computed by applying median information for the observer, only combinations of multiple pixels can
filter to 𝐺𝑝𝑎𝑑 . form meaningful feature to the observer, especially in camera taken
photos. Therefore, the 𝑤×𝑤 window should be selected to represent the
𝐵 (𝑖, 𝑗) = 𝑚𝑒𝑑𝑖𝑎𝑛 {𝐺𝑝𝑎𝑑 (𝑠, 𝑡)} (9) area of focus of observers when assessing the result of the enhancement.
(𝑠,𝑡)∈𝑊 (𝑖,𝑗)
In this report, 𝑤 = 33. However, different 𝑤 can be used depending on
Where, the applications.
⎧ ⎫ As shown in Fig. 11, when 𝑤 is too small, over-enhancement as in
⎪ | 𝑖 − 𝑤 − 1 ≤ 𝑥 ≤ 𝑖 + 𝑤 − 1, ⎪
| 2 2 𝐽𝑟𝑒𝑓 occurs. The smaller the 𝑤, the closer the resultant 𝐽 to 𝐽𝑟𝑒𝑓 . For
𝑊 (𝑖, 𝑗) = ⎨(𝑥, 𝑦) | ⎬ (10)
| 𝑗− 𝑤−1 ≤𝑦≤𝑗+ 𝑤−1 example, unpleasant over-enhancement occurs at the face of the cam-
⎪ | ⎪
⎩ 2 2 ⎭ eraman in Fig. 11(c) because the window is too small when considering
𝑊 (𝑖, 𝑗) in (10) represents set of coordinates in 𝑤×𝑤 neighborhood with regional contrast at the face area. The over-enhancement is suppressed
(𝑖, 𝑗) as center. Then, the final output image of the proposed algorithm, when using larger 𝑤 in Fig. 11(d) because the window is large enough
𝐽 = {𝐽 (𝑖, 𝑗)|1 ≤ 𝑖 ≤ 𝐻, 1 ≤ 𝑗 ≤ 𝑊 }, can be computed by adding up to consider the whole face of cameraman when computing regional
input image with weighted enhancement and additional brightness as contrast of the face.
shown in (11). However, large 𝑤 will also cause loss of small details as shown in
Fig. 12. The result as shown in Fig. 12(f) uses 𝑤 = 53, hence when some
𝐽 (𝑖, 𝑗) = 𝐼 (𝑖, 𝑗) + 𝑅 (𝑖, 𝑗) 𝐸 (𝑖, 𝑗) + 𝐵(𝑖, 𝑗) (11) details are smaller relative to the selected 𝑤, the proposed algorithm
diminishes those details.
Fig. 10 below shows the changes from input image, to weighted
Generally, global gray level remapping methods struggle to search
enhancement, and finally to brightness readjustment. As discussed
for the appropriate amount of enhancement across different types of
above, asynchronous local enhancement causes distorted brightness re-
input images. Different global gray level remapping methods tend to
lationship between poor contrast region and good contrast region. The
either over-enhance or under-enhance the images. In another hand,
region in the red box in Fig. 10 represents good contrast region where
pixel-wise gray level remapping methods ‘‘scramble’’ the brightness
the features of the leaves can be observed by HVS, while green box
relationship, distort luminance information, and causing unpleasant
represents poor contrast region. In the original image, the brightness
checker box effect and halo effect.
of red box region is lower than of green box region. After regional
The proposed algorithm is designed to harness the advantages of
contrast-based enhancement as shown in Fig. 10(b), the brightness of
both global and pixel-wise gray level remapping algorithms, maintain-
the red box region shifted less than of the green box region, resulting ing the brightness relationship as in global method, while inhibiting
into same brightness. over-enhancement from the 𝐽𝑟𝑒𝑓 using local information of the image.
In order to maintain the brightness relationship between red box At the same time, the proposed algorithm produces no checker box
region and green box region, where red box region is darker than green effect and minimal halo effect. Below we present the pseudocode of
box region, additional smoothened brightness shifting is applied to the the proposed algorithm for clarity in Table 1.
region as show in Fig. 10(c). The proposed algorithm entails 4 modules. To further facilitate
The only parameter to be defined for this proposed algorithm is understanding, we elucidate these processes of the proposed algorithm
the 𝑤 selection for which it should be based on the size of area when by using a flowchart in Fig. 13.
user wishes to look for details in the image. When using small 𝑤, more
details are preserved resulting in more unnatural over-enhancement to 7. Results and discussions
the image. In contrast, large 𝑤 tends to inhibit enhancement of some
details in the image when the details are too small relative to the 𝑤 × 𝑤 To assess the performance, 13 opted contrast enhancement algo-
window. When assessing the image, although the whole image is being rithms — GHE [9], BBHE [31], DHE [14], GLG [32], WTHE [17],
seen at the same time by HVS, human conscious is focusing on only a HMF [16], FHSABP [33], CLAHE [22], CVC [18], and 2DHE [15],
smaller region in the image. For example, when trying to assess feature BPDHE [34], EPMP [35], SICE [36] were compared with the pro-
of the face of the cameraman in Fig. 9(c), the observer is focusing posed algorithm. Some standard test images, MATLAB test images,

7
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Table 1
Pseudocode of the Proposed Algorithm.

Fig. 12. Effect of different 𝑤 on Forest Image (a) Input Image (b) 𝐽𝑟𝑒𝑓 (c) 𝑤 = 3 (d)
𝑤 = 23 (e) 𝑤 = 33 (f) 𝑤 = 53.

internet resource, and a custom image were used. The performances


of the contrast enhancement algorithms were compared in terms of
complexity (computational time with respect to image size) and image
quality (information loss and contrast). However, for contrast enhance-
ment algorithms consisting user-defined parameters, the parameters
are specified in Table 2. The results shown in this article imply no
superiority of one CE algorithm over the other one; changes in the
user-defined parameters can improve or impair the image quality as
well as the computational speed. Most CE algorithms are trading-off
among complexity, image quality preservation, amount of contrast
enhancement, consistency, and level of automation. It would be inap-
propriate to define that certain CE algorithm is generally better than
another; any CE algorithm might be superior than others for different
applications. The discussions in this section generally focus on inves-
tigation of the nature of different CE algorithms on different types of
images.

Fig. 13. Flowchart (a) MBBDHE (b) Reference Enhancement based on MBBDHE (c) 7.1. Metrics to quantify the performance of CE algorithms
Regional Contrast Information Acquisition (d) Brightness Readjustment.

We adopted the following metrics to quantitative measures to com-


pares the performance of different CE algorithms on different images
with the proposed algorithm, Discrete Entropy (DE) [37], Measure of
Enhancement (EME) [38,39], SSIM index [40].

8
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Table 2 A good enhancement should improve the local contrast of an image


Parameters of CE Algorithms.
with minimal compromise on the details (minimal saturation), or even
CE Algorithms Parameters enrich the details of the image. It is not sufficient to measure the
WTHE 𝑃𝑙 = 0, 𝑟 = 0.5, 𝑣 = 0.5, 𝑊𝑜𝑢𝑡 = 255 𝐷𝐸 and 𝐸𝑀𝐸 of output image, the values must be compared with
HMF 𝜆 = 1, 𝛾 = 100, 𝛼 = 5, 𝑣 = 0.01𝐻𝑊
the 𝐷𝐸 and 𝐸𝑀𝐸 of input image to measure the improvement or
FHSABP 𝑁 =6
CLAHE 𝑃𝑙𝑖𝑚𝑖𝑡 = 0.01, 𝑡𝑖𝑙𝑒𝑠 = 8 × 8 deterioration. The values difference, 𝛥𝐷𝐸 = 𝐷𝐸𝑜𝑢𝑡𝑝𝑢𝑡 − 𝐷𝐸𝑖𝑛𝑝𝑢𝑡 and
CVC 𝑤 = 7, 𝛼 = 𝛽 = 𝛾 = 31 𝛥𝐸𝑀𝐸 = 𝐸𝑀𝐸𝑜𝑢𝑡𝑝𝑢𝑡 −𝐸𝑀𝐸𝑖𝑛𝑝𝑢𝑡 are computed to indicate improvement
2DHE 𝑤=5 or impairment of local contrast and details. Besides, luminance similar-
Proposed Algorithm 𝑤 = 33 ity, 𝑙 and structural similarity, 𝑠 are used to measure the performance
of an CE algorithm in preserving the information of the original image.

The DE of an image is computed as described in (12): 7.2. Categorization of test images


256
In this paper, 776 test images were used, including 554 one-channel
𝐷𝐸 (𝐽 ) = − 𝑃𝐽 (𝑖) log 𝑃𝐽 (𝑖) (12)
𝑖=1 grayscale images, and 222 three-channel color images. The CE algo-
rithms were applied to the lightness field in the L*a*b* format of the
where 𝑃𝐽 = {𝑃𝐽 (𝑖)|𝑖 = 1, 2, … , 256} represents the probability distribu-
color images. The sources from which test images were adopted are
tion function (PDF) of the 𝑖th gray levels of the 8-bit image 𝐽 which is
listed in Table 3.
normalized from histogram of 𝐽 .
To assure the evaluation is not biased to any particular type of
𝑘1 𝑘2 ( )
1 ∑∑ max 𝐽𝑎,𝑏 + 1 images, we categorized the test images into 7 categories: high contrast
𝐸𝑀𝐸 (𝐽 ) = 20 ln ( ) (13) discrete tone images, low contrast discrete tone images, dynamic tone
𝑘1 𝑘2 𝑎=1 𝑏=1 min 𝐽𝑎,𝑏 + 1
images, continuous tone images, intensive hidden details monotonous
To calculate the EME of an image 𝐽 , as defined in (13), the image is images, flat monotonous images. The number of images in each cate-
divided into 𝑘1 × 𝑘2 sub-blocks, 𝐽𝑎,𝑏 . The measurement of enhancement gory is imbalanced; therefore, categorical weights, 𝑤𝑘 (defined in (19))
EME is the summation of 20 ln (.) of all sub-blocks, where (.) of each are applied to each metric when overall performances are measured.
block represents the ratio of maximum gray level to minimum gray
𝑁
level. The purpose is to assess the contrast of the image window by 𝑤𝑘 = ( ) (19)
𝐾 𝑛𝑘
window based on the spread of the gray levels in the windows.
For this paper, luminance and structural similarity of SSIM (de- Where 𝑘 depicts categorical index: 𝑘 = 1, 2, … , 7; 𝑁 depicts total
fined in (14)) are used to compare the performances of different CE number of test images, 𝑁 = 776; 𝐾 depicts total number of categories:
algorithms. The algorithm of SSIM can be simplified as following: 𝐾 = 7.

𝑆𝑆𝐼𝑀 (𝐼, 𝐽 ) = [𝑙 (𝐼,̇ 𝐽 )]𝛼 [𝑐 (𝐼, 𝐽 )]𝛽 [𝑠 (𝐼, 𝐽 )]𝛾 (14)


7.3. Quantitative result
Where, 𝑙 (𝐼, 𝐽 ) (defined in (15)) depicts luminance similarity between
input image 𝐼 and output image 𝐽 ; 𝑐 (𝐼, 𝐽 ) (defined in (16)) depicts The weighted mean of the resulting 𝛥DE, 𝛥EME, l, and 𝑠, are shown
contrast similarity between input image 𝐼 and output image 𝐽 ; 𝑠(𝐼, 𝐽 ) in Table 4 to indicate the overall performance of each CE algorithms
(defined in (17)) depicts structural similarity between input image across the test images; whereas, the weighted standard deviation of
𝐼 and output image 𝐽 ; 𝛼 depicts emphasis of luminance similarity ; the resulting DE, EME, l, and 𝑠 across test images by each of the CE
𝛽 depicts emphasis of contrast similarity and 𝛾 depicts emphasis of algorithms as shown in Table 5 will be used to indicate the overall
structural similarity. The parameters 𝛼, 𝛽, 𝛾 ∈ [0, ∞) represents the ratio consistency of that respective algorithm.
of luminance, contrast, and structural similarity to the final SSIM index. As shown in Table 4, the proposed algorithm generally produces
The similarities are computed as shown below: greater 𝛥DE. However, for already-high-contrast images (high EME),
the proposed algorithm tends to reduce the local contrast of the high
𝑙 (𝐼, 𝐽 ) = (2𝜇𝐼 𝜇𝐽 + 𝐶1 )∕(𝜇𝐼2 + 𝜇𝐽2 + 𝐶1 ) (15) contrast regions. As expected, the proposed algorithm offers poor lumi-
𝑐 (𝐼, 𝐽 ) = (2𝜎𝐼 𝜎𝐽 + 𝐶2 )∕(𝜎𝐼2 + 𝜎𝐽2 + 𝐶2 ) (16) nance preservation ability, hence the proposed algorithm is not suitable
for applications for which brightness preservation is crucial. Besides,
𝑠 (𝐼, 𝐽 ) = (𝜎𝐼𝐽 + 𝐶3 )∕(𝜎𝐼 𝜎𝐽 + 𝐶3 ) (17)
the structural similarity of the proposed algorithm, is more inferior than
Where 𝜇𝐼 and 𝜇𝐽 represents local mean of 𝐼 and 𝐽 respectively other CE algorithms, hence it is also not suitable for applications that
to compare luminance; 𝜎𝐼 and 𝜎𝐽 represents local standard deviation require high detail preservation ability.
of 𝐼 and 𝐽 respectively to compare contrast. The local information As shown in Table 5, although the performance of the proposed
is extracted using radius (standard deviation of isotropic Gaussian algorithm is not the best among all, the proposed algorithm shows a
function) of 1.5 (default setting of MATLAB). Let the height of both better overall consistency than other CE algorithms. The proposed al-
images as 𝐻 and width as 𝑊 , 𝜎𝐼𝐽 defined in (18): gorithm shows smaller statistical fluctuation, hence resulting in smaller
∑ ‘‘spread’’ of the data.
1
𝜎𝐼𝐽 = (𝐼 (𝑖, 𝑗) − 𝜇𝐼 )(𝐽 (𝑖, 𝑗) − 𝜇𝐽 ) (18) Histogram equalization is regarded as a non-linear process. Splitting
𝐻𝑊 − 1 ∀𝑖,𝑗
followed by equalizing the color channel individually sounds logical
Following the default setting of MATLAB’s ssim(.) function, 𝐶1 = but is not preferred in practice. Equalization often involves only the
[0.01 (𝑘 − 1)]2 , 𝐶2 = [0.03 (𝑘 − 1)]2 , 𝐶3 = 𝐶2 ∕2, where 𝑘 = 256 for 8- intensity values of the image not the color components. So for a RGB
bit image. By setting 𝛼 = 1 and 𝛽, 𝛾 = 0, luminance similarity can color image, HE should not be applied individually on each channel
be obtained: 𝑆𝑆𝐼𝑀 (𝐼, 𝐽 ) = 𝑙(𝐼, 𝐽 ). In another hand, when 𝛾 = 1 and on the ground that the correlation between the color channels will
𝛼, 𝛽 = 0, structural similarity is obtained: 𝑆𝑆𝐼𝑀 (𝐼, 𝐽 ) = 𝑠(𝐼, 𝐽 ). Both impose undesired modification chromaticity of colors. Instead, it should
𝑙(𝐼, 𝐽 ) and 𝑠(𝐼, 𝐽 ) are used in this paper to measure performances of the be applied such that intensity values are equalized without disturbing
CE algorithms. the color balance of the image. To compound the problem, applying
𝐷𝐸 represents the richness of the details of the image, while 𝐸𝑀𝐸 HE on each individual channel will lead to higher computation time as
represents local contrast of the image. Both 𝐷𝐸 and 𝐸𝑀𝐸 are used to the algorithm will be performed for three times. Therefore, the better
indicate the performance of an CE algorithm in enhancing the image. option would be converting the color space of the image from RGB

9
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Table 3
Sources of test images.
Image database Format
http://sipi.usc.edu/database/database.php TIFF
http://www.imageprocessingplace.com/root_files_V3/image_databases.htm TIF
https://archive.ics.uci.edu/ml/datasets/Discrete+Tone+Image+Dataset BMP, PNG, JPG
http://decsai.ugr.es/cvg/dbimagenes/ PGM, PPM, PBM
MATLAB TIF, JPG

Table 4
Mean of metrics for different CE Algorithms across 776 test images.
Mean
CE Algorithms
𝛥𝐷𝐸 𝛥𝐸𝑀𝐸 𝑙 𝑠
GHE 2.844 24.290 0.876 0.946
BBHE 2.251 23.371 0.919 0.958
DHE 2.799 10.927 0.936 0.964
GLG 2.916 18.071 0.908 0.979
WTHE −11.174 8.880 0.858 0.934
HMF 3.139 13.632 0.912 0.963
FHSABP 23.048 21.969 0.932 0.962
CLAHE 15.839 10.778 0.957 0.979
CVC 2.912 13.657 0.953 0.971
2DHE 3.434 20.415 0.904 0.965
BPDHE 2.568 16.523 0.896 0.956
EPMP 2.698 12.265 0.902 0.935
SICE 3.6589 10.369 0.912 0.925
Proposed Algorithm 16.261 12.749 0.911 0.938

Table 5
Standard deviation of metrics for different CE Algorithms across 776 images.
Standard Deviation
CE Algorithms
𝐷𝐸 𝐸𝑀𝐸 𝑙 𝑠
GHE 19.687 22.460 0.143 0.072
BBHE 19.153 22.640 0.111 0.051
DHE 19.683 18.197 0.114 0.068
GLG 19.346 21.396 0.107 0.050
WTHE 26.530 17.870 0.159 0.070
HMF 19.353 16.522 0.127 0.053
FHSABP 7.387 26.448 0.107 0.047
CLAHE 14.171 20.423 0.074 0.030
CVC 19.160 21.247 0.117 0.041
2DHE 19.447 19.894 0.127 0.056
BPDHE 19.652 22.256 0.155 0.075
EPMP 18365 18.693 0.145 0.072
SICE 19.693 19.638 0.168 0.069
Proposed Algorithm 14.955 16.214 0.077 0.063

Fig. 14. Implementation of proposed algorithm in L*a*b (CIELAB) color spaces (a)
into another color spaces which separate intensity values from color Original image of ‘‘Airplane’’ (b) Proposed algorithm on ‘‘Airplane’’ (c) Original image
components before performing the equalization. Here we demonstrate of ‘‘Sailboat’’ (d) Proposed algorithm on ‘‘Sailboat’’ (e) Original image of ‘‘Mandrill’’
the application of proposed algorithm by converting the image to (f) Proposed algorithm on ‘‘Mandrill’’.
CIELAB color spaces (CIEL∗a∗b∗ color space, where L∗ denotes the
luminance, a∗ denotes the color on a green–red scale and b∗ denotes
the color on a blue–yellow scale). The CE algorithms were applied to
the lightness field in the L*a*b* format of the color images (see Fig. 14).

7.4. Qualitative result

Fig. 15 demonstrates the consistency property exhibited by the


proposed algorithm on different contrast condition of original image
as compared to the two advanced pixel-based histogram equalization
methods: GLG and 2DHE.
The standard deviation of DE for pixel-wise remapping algorithms
is generally lower than global gray level remapping algorithms; at the
same time, the mean of the 𝛥DE of pixel-wise algorithms is higher than
global gray level remapping algorithms. It shows that pixel-wise gray
Fig. 15. Example to illustrate enhancement consistency of the proposed algorithm (a)
level remapping mechanisms produces consistently rich details output
Original Images (b) Outputs of 2DHE (c) Outputs of GLG (d) Outputs of Proposed
images. However, FHSABP and CLAHE produce high standard deviation Algorithm.
for EME, while the proposed algorithm retains the lowest standard

10
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Fig. 16. Forest Image from low contrast discrete tone image category (a) Input Image
(b) GHE (c) BBHE (d) DHE (e) GLG (f) WTHE (g) HMF (h) FHSABP (i) CLAHE (j) CVC
(k) 2DHE (l) EPMP (m) SICE (n) BPDHE (o) Proposed Algorithm.

deviation for the output EME. Besides that, proposed algorithm also
produces lowest mean 𝛥EME, it indicates that the local contrast en-
hancement of the proposed algorithm is the lowest. In another hand, it
also means that the proposed algorithm is able to consistently produce
rich details images with minimal contrast enhancement.
We demonstrate the qualitative result for three chosen categories by
the 13 CE algorithms and the proposed algorithm: low contrast discrete
tone image (Forest) in Fig. 16, Monotone image (Kids) in Fig. 17,
Dynamic tone image (Camera Man) in Fig. 18. Notice that the qualities Fig. 17. Kids Image from Monotone image tone image category (a) Input Image (b)
of the resultant images generated from the proposed algorithm are con- GHE (c) BBHE (d) DHE (e) GLG (f) WTHE (g) HMF (h) FHSABP (i) CLAHE (j) CVC (k)
2DHE (l) EPMP (m) SICE (n) BPDHE (o) Proposed Algorithm.
sistent under different images which are affected by extreme lighting
and illuminations. The resultant images demonstrate the strength of
the proposed algorithm — visually appealing tone mapping without
introducing artifacts in various images with different dynamic ranges.
We would like to emphasize that the result shown in this section
does not indicate any superiority of an algorithm over another. The
performance of an CE algorithm might change based on the hyperpa-
rameters setting: with different configuration, an CE algorithm might
perform differently. However, algorithms with many hyperparameters
are difficult to be automated and applied into system with uncer-
tain environment. Being able to consistently perform in uncontrolled
environment should not be the sole reason for an algorithm to be
adopted for a particular application; each contrast enhancement algo-
rithm that we studied excels in different properties of the algorithm.
These properties trading off each other depending on the nature of the
algorithm.
The average processing time for each algorithm on 776 images
of different sizes (resized) was shown in Table 6. From the table,
the main finding is that the additional advantage provided by the
proposed algorithm in handling images under different exposure comes
with a price, that is, higher computational cost. From the table, we
can notice that the average processing time of proposed algorithm
increases significantly as the size of the images increases. While the
proposed algorithm addresses one of the toughest problems in his-
togram equalization substantially, it suffers from the main limitation of
high processing time; thus, it is one of our future works in optimizing
the framework in order to establish a framework that can address the
inconsistent problem of histogram equalization without compromising
the processing time.
Fig. 18. Camera Man Image from dynamic tone image category (a) Input Image (b)
We have designed an experiment to further gauge the performance GHE (c) BBHE (d) DHE (e) GLG (f) WTHE (g) HMF (h) FHSABP (i) CLAHE (j) CVC (k)
of the proposed algorithm under different scenario, firstly, each of the 2DHE (l) EPMP (m) SICE (n) BPDHE (o) Proposed Algorithm.
776 images will be classified into six categories: (a) High Contrast

11
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Table 6
Average processing time for different CE Algorithms.
Average Processing Time (seconds)
CE Algorithms
100 × 100 200 × 200 400 × 400
GHE 0.0348 0.0369 0.0372
BBHE 0.0376 0.0534 0.1025
DHE 0.1881 0.2070 0.2491
GLG 0.3502 0.3851 0.4081
WTHE 0.0372 0.0513 0.0982
HMF 0.0426 0.0563 0.1152
FHSABP 0.0526 0.0586 0.1125
CLAHE 0.0745 0.0826 0.0563
CVC 0.1752 0.2810 0.5290
2DHE 0.3697 0.4135 0.5426
BPDHE 0.3789 0.4965 0.6539
EPMP 0.0415 0.0465 0.0489
SICE 1.2264 2.1075 4.2302
Proposed Algorithm 1.3659 3.2569 16.5963

Fig. 19. Example of 𝑏 and 𝑤 values of an image computed from average of JND.
Discrete Tone Images (HC) (b) Low Contrast Discrete Tone Images (LC)
(c) Dynamic Tone Images (DT) (d) Continuous Tone Images (CT) (e)
Intensive Hidden Details Monotonous Images (DM) (f) Flat Monotonous Then a discrete index, 𝜃𝑑𝑖𝑠𝑐𝑟𝑒𝑡𝑒 ∈ [0, ∞) is computed to indicate
Images (FM). Each classes of images carries different properties and degree of distinction of the tones in the input image. Larger 𝜃𝑑𝑖𝑠𝑐𝑟𝑒𝑡𝑒
might affect the performances of CE algorithms. Then, we perform the (defined in (22)) implies greater distinction of the tone in the image,
quantitative Measures (DE%, EME, I(I,J), s(I,J)) for the Performances of meaning more black and white regions in the image. Fully spanned
each CE Algorithms on the classified image. The classification scheme uniform histogram produces 𝜃𝑏𝑟𝑖𝑔ℎ𝑡 = 𝜃𝑑𝑎𝑟𝑘 = 𝜃𝑑𝑖𝑠𝑐𝑟𝑒𝑡𝑒 = 1.
was designed as following: The images were classified based on 𝜃𝑑𝑖𝑠𝑐𝑟𝑒𝑡𝑒 ,
𝜃𝑏𝑟𝑖𝑔ℎ𝑡 + 𝜃𝑑𝑎𝑟𝑘
𝐶𝑒𝑥𝑡𝑟𝑒𝑚𝑒 , 𝜎 into discrete tone, dynamic tone, continuous tone, and 𝜃𝑑𝑖𝑠𝑐𝑟𝑒𝑡𝑒 = 𝜃𝑏𝑟𝑖𝑔ℎ𝑡 𝜃𝑑𝑎𝑟𝑘 (22)
monotonous images as shown in Fig. 20. Discrete tone and monotone 2
images were further classified based on EME and span. Then, information such as ratio of black and white regions to the
∑ ∑256
Properties of histogram includes 𝐷𝐸, a metric used to indicate whole image, 𝐶𝑒𝑥𝑡𝑟𝑒𝑚𝑒 = 𝑏+1 𝑖=1 𝑃 (𝑖)+ 𝑖=𝑤+1 𝑃 (𝑖) can be obtained through

overall contrast of the image. For better representation, 𝐷𝐸 is nor- 𝜃𝑑𝑎𝑟𝑘 and 𝜃𝑏𝑟𝑖𝑔ℎ𝑡 . Knowing 𝑏 = 54 and 𝑤 = 188, 𝐶𝑒𝑥𝑡𝑟𝑒𝑚𝑒 = 𝑏+1 𝑖=1 𝑃 (𝑖) +
∑256 55 68
malized to percentage of maximum possible discrete entropy 𝐷𝐸% = 𝑖=𝑤+1 𝑃 (𝑖) = 256
𝜃 𝑑𝑎𝑟𝑘 + 256
𝜃𝑏𝑟𝑖𝑔ℎ𝑡 . In another word, black and white
𝐷𝐸∕𝐷𝐸max × 100. In this study, one more property of histogram is used regions of the image occupy 𝐶𝑒𝑥𝑡𝑟𝑒𝑚𝑒 × 100% of the whole image.
to rate and classify input images. An 8-bit input image has 256 possible We summarize the characteristics of different classes of images in
{ }
gray levels. Let the 𝑖-th gray levels be 𝑋 = 𝑥𝑖 |𝑖 = 1, 2, … , 256 where Table 7.
{ }
𝑥𝑖 = 𝑖 − 1; the input histogram 𝐻 = ℎ𝑖 |𝑖 = 1, 2, … , 256 . The frequency Any CE algorithm can hardly work well under all conditions; We
ℎ𝑖 is the number of occurrences of 𝑥𝑖 in the input image. However, not believe that the failure is not random, categorizing types of input
all images span the full dynamic range. The unoccupied neighboring images aids in identifying circumstances when a CE algorithm would
gray levels are referred as ‘‘gap’’, while the occupied neighboring gray fail. In this study, the performances of CE algorithms with images
levels are referred as ‘‘stack’’ in this study. The percentage of gray levels of different categories and properties are measured to identify the
occupied by stacks to the whole dynamic range (256 levels) is referred behavior of certain CE algorithm for certain images. The results and
as span of the histogram, span ∈[0,100]. In general, global gray level data analysis would provide a clear guide on future improvement of the
remapping algorithms that only take histogram as input, for example algorithms, to produce consistent algorithm that can enhance contrast
GHE, have lower consistency — the algorithm sometimes works ap- and reveal the hidden details for varied types of images without failure.
propriately good and sometimes unsatisfying. In order to improve the Furthermore, perhaps the results of this study could provide a general
performance and consistency, some HE-based variations also consider guide on which algorithm to use for different types of images and
2D local information of the image in computing the remapping function applications for better performances.
(typical example such as 2DHE and CVC). In this study, 32 × 32 𝐸𝑀𝐸 Table 8 shows that the proposed HE framework is able to produce
is used to measure local properties of the images. output images with at least the second best (perform best in HC, LC
Since HVS perceives contrast differently at different background and FM) in terms of overall contrast measured by mean of 𝐷𝐸% on
luminance, the brightness and darkness information are required in different categories of images. Table 9 shows that the proposed HE
classifying the image instead of taking the mean brightness because framework is able to produce output images with best regional contrast
brightness cancels off darkness in mean value. The bright and dark in HC, LC and DT, second best in FM category, measured by mean of
regions are defined as regions where JND above its average of JND, EME. The strength of the proposed HE framework was demonstrated
hence the dark region gray levels will fall within [0,b] and bright region by Table 10 showing that, in overall, the proposed HE framework
gray levels will fall within [w,255] as shown in Fig. 19 below: In achieves the lowest standard deviation of 𝐷𝑒% and EME. The results
this report, two indexes are used created to represent the amount of of Table 10 indicate that the proposed HE algorithm can be applied
bright and dark regions in the image. Let the PDF of the input image be on images from different categories and performs equally well without
𝑃 = {𝑃 (𝑖)|𝑖 = 1, 2, … , 256} calculated, the histogram bright index, 𝜃𝑏𝑟𝑖𝑔ℎ𝑡 much fluctuation in the overall contrast and regional contrast of output
(defined in (21)), and dark index, 𝜃𝑑𝑎𝑟𝑘 (defined in (20)) are computed images.
as follow: We have also conducted an ablation study on the proposed algo-
256 ∑
𝑏+1 rithm to investigate the contribution comes from the proposed JND
𝜃𝑑𝑎𝑟𝑘 = 𝑃 (𝑖) (20) (used to generate 𝑅 (𝑖, 𝑗)) as shown in Table 11 (note that we could
𝑏 + 1 𝑖=1
not remove 𝐽𝑟𝑒𝑓 for ablation study since it is an integrated part of
256 ∑256
the proposed algorithm). We implemented the proposed HE framework
𝜃𝑏𝑟𝑖𝑔ℎ𝑡 = 𝑃 (𝑖) (21)
256 − 𝑤 𝑖=𝑤+1 without regional gray level similarity, 𝑅 (𝑖, 𝑗) (output image thus =

12
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Fig. 20. Flowchart for categorizing input image.

Table 7
Characteristics of different classes of images.
Classes Characteristics
High contrast Consists of distinctive tones and high frequencies black and white
Discrete Tone
regions. High number of distinctive tone regions form sharp edges
resulting in high local contrast.
Low contrast Large and low contrast regions of extreme brightness (black and
white regions) exist in the same image, normally caused by
different brightness of close and far object, large shadows, or
distinctive background for large and low contrast foreground.
Dynamic Tone The tones are averagely spanned across the dynamic range. The
image consists of details and information in regions of all
brightness. In another word, there are dark, bright and gray objects
forming edges with each other.
Continuous Tone Has an obvious overall mean brightness. The tones are less
distinctive but there are still objects from various tones. Consists of
dark, bright, and also gray objects.
Intensive hidden The image is monotonous, and all details are around the same
Monotone
details shades. However, there are intensive hidden details which may be
noisy when over enhanced. The hidden details mostly fall in
continuous tone, which means enhancing the image with global
methods may result in large area saturation of small details.
Flat The image is flat and monotonous. Saturation is less likely to occur
even when being enhanced by global gray level remapping
algorithms.

J=I+E+B) separately to investigate the effect of the proposed HE In addition, we have conducted another experiment on 𝐽𝑟𝑒𝑓 (we
framework on different categories of images in the absence of 𝑅 (𝑖, 𝑗). designed MBBDHE as 𝐽𝑟𝑒𝑓 ) by replacing MBBDHE with other HE meth-
The results of Table 11 suggested that 𝑅 (𝑖, 𝑗) contributes in both overall
ods. As mentioned before, MBBDHE is designed to be a reference for
contrast and regional contrast respectively in which the effect on
regional contrast, measured by EME decreased with higher percentage the proposed regional contrast based weighted contrast enhancement
as compared to overall contrast, measured by 𝐷𝑒% . algorithm in this study; hence, MBBDHE should be a fast global HE,

13
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Table 8
Mean of 𝐷𝐸% for different categories.
Mean
Category HC LC DT CT DM FM
Metrics
Methods DE% DE% DE% DE% DE% DE%
GHE 82.327 91.355 83.255 87.941 59.959 67.582
BBHE 84.465 91.631 83.807 88.405 59.969 65.512
DHE 85.384 92.690 84.014 89.533 58.354 67.582
GLG 85.943 93.625 85.977 88.478 59.816 66.139
WTHE 36.693 79.826 67.352 84.182 52.232 40.868
HMF 80.666 95.467 85.242 84.652 60.930 67.396
FHSABP 91.869 97.953 97.863 96.796 97.566 74.009
CLAHE 93.425 96.918 92.409 96.248 76.967 76.106
CVC 84.978 95.594 85.622 89.345 61.246 65.787
2DHE 87.771 91.398 81.986 91.761 59.052 66.211
EPMP 90.518 96.679 82.999 92.371 64.378 67.577
SICE 85.840 91.784 83.264 87.854 59.315 66.244
BPDHE 83.785 89.179 85.218 88.545 60.627 67.582
Proposed 95.066 95.121 93.142 94.312 89.578 84.290

Table 9
Mean of 𝐸𝑀𝐸 for different categories.
Mean
Category HC LC DT CT DM FM
Metrics
Methods EME EME EME EME EME EME
GHE 53.681 20.666 25.432 49.230 41.293 9.934
BBHE 46.131 19.065 24.178 48.512 37.169 22.112
DHE 36.124 19.990 24.510 23.382 33.596 15.130
GLG 38.939 21.340 24.989 47.388 31.357 28.330
WTHE 35.268 15.968 21.732 36.882 16.179 13.434
HMF 48.704 17.069 22.949 42.282 22.985 20.757
FHSABP 18.183 20.165 27.282 49.088 35.998 16.042
CLAHE 48.399 34.286 25.714 54.205 11.022 20.743
CVC 46.418 16.244 21.373 36.029 14.733 20.344
2DHE 38.772 16.029 24.142 34.088 42.371 19.241
EPMP 41.746 18.192 25.995 33.576 30.631 22.192
SICE 35.798 19.172 23.643 48.695 36.935 14.929
BPDHE 37.724 27.960 24.733 45.976 31.089 15.726
Proposed 55.473 31.862 27.705 35.953 30.361 23.146

Table 10 higher standard deviation of 𝐷𝐸% and EME. This result indicates that
Standard deviation of 𝐷𝐸% and EME across different categories.
MBBDHE plays an important role in the framework contributing in
Standard Deviation both overall and regional contrast of resultant images. As mentioned,
Metrics MBBDHE stretches the gray levels away from the mean brightness in
Methods DE% EME both directions. The bi-directional nature of MBBDHE can better handle
GHE 12.53285 17.67702 washed-out effect, since it realizes full utilization of the dynamic range.
BBHE 13.26575 13.07458 This paper describes a HE framework (J = I + R ⊙ E + B) that combines
DHE 13.83239 9.956547 the proposed MBBDHE and the proposed regional gray level similarity
GLG 13.90217 9.804255
WTHE 20.41458 10.52523
scheme. Experimental results have shown that this scheme is effective
HMF 12.95578 13.28623 in enhancing contrast of images from different categories.
FHSABP 9.623839 12.98529
CLAHE 9.753953 16.90673 8. Conclusion, limitation and future works
CVC 13.98729 12.84957
2DHE 14.1464 11.05794
EPMP 13.79683 9.58937 A contrast enhancement algorithm has been proposed to enhance
SICE 13.35891 13.03497 the details of an image and produce promising result. The proposed
BPDHE 12.27959 10.70213 enhancement is consistent and able to produce satisfactory result for
Proposed 4.789455 8.96416
different type of images under various uncontrolled environments. Most
notably, the proposed algorithm uses JND of HVS to measure low
contrast and high contrast regions in the images to produce good
therefore, we compare with GHE, BBHE, DSIHE only since other vari- contrast image in human’s perspective. The resultant image of the pro-
ants will consume much longer processing time and hence not suitable posed algorithm gives merit in enhanced contrast, detail preservation,
to be used as reference or comparisons. The results of the experiment naturality, and consistency under uncontrolled environment.
suggest that MBBDHE outperforms other HE methods. The proposed algorithm is designed to emphasize originally unseen
Table 12 shows that MBBDHE is itself a better global HE en- (hidden) details in the image and maintain contrast relationship of high
hancement scheme with lower mean of 𝐷𝐸% and mean of EME and contrast regions. Therefore, despite the consistently and sufficiently

14
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

Table 11
Mean of 𝐷𝐸% and EME for different categories.
Removal of 𝑅 (𝑖, 𝑗) from proposed HE framework Proposed HE framework
Categories of input images
𝐷𝐸% EME 𝐷𝐸% EME
HC 91.145 50.592 95.066 55.473
LC 92.542 29.186 95.121 31.862
DT 90.587 23.490 93.142 27.705
CT 95.908 33.682 96.312 35.953
DM 85.190 29.478 88.578 30.361
FM 81.856 21.567 84.290 23.146

Table 12 References
Performance of different candidates for 𝐽𝑟𝑒𝑓 .
Mean Standard Deviation [1] P. Liu, S. Horng, J. Lin, T. Li, Contrast in haze removal: Configurable contrast
Metrics enhancement model based on dark channel prior, IEEE Trans. Image Process. 28
(5) (2019) 2212–2227, http://dx.doi.org/10.1109/TIP.2018.2823424.
𝐽𝑟𝑒𝑓 𝐷𝐸% 𝐸𝑀𝐸 𝐷𝐸% 𝐸𝑀𝐸
[2] S. Wang, G. Luo, Naturalness preserved image enhancement using a priori multi-
GHE 83.563 33.285 6.235 14.792 layer lightness statistics, IEEE Trans. Image Process. 27 (2) (2018) 938–948,
BBHE 86.498 31.869 5.126 13.653 http://dx.doi.org/10.1109/TIP.2017.2771449.
DSIHE 86.969 32.165 5.426 13.835 [3] X. Liu, G. Cheung, X. Ji, D. Zhao, W. Gao, Graph-based joint dequantization and
Proposed 91.918 34.088 4.789455 8.96416 contrast enhancement of poorly lit JPEG images, IEEE Trans. Image Process. 28
MBBDHE (3) (2019) 1205–1219, http://dx.doi.org/10.1109/TIP.2018.2872871.
[4] Q. Fu, C. Jung, K. Xu, Retinex-based perceptual contrast enhancement in images
using luminance adaptation, IEEE Access 6 (2018) 61277–61286, http://dx.doi.
org/10.1109/ACCESS.2018.2870638.
enhanced local contrast, the image might not seem natural in all cases. [5] M. Li, J. Liu, W. Yang, X. Sun, Z. Guo, Structure-revealing low-light image
enhancement via Robust Retinex model, IEEE Trans. Image Process. 27 (6) (2018)
Furthermore, the algorithm is computationally heavy, therefore the
2828–2841, http://dx.doi.org/10.1109/TIP.2018.2810539.
proposed algorithm at current stage is not yet suitable for applications [6] Q. Song, P.C. Cosman, Luminance enhancement and detail preservation of images
that demand real-time responses. and videos adapted to ambient illumination, IEEE Trans. Image Process. 27 (10)
The modified HE method of MBBDHE is sufficient to produce good (2018) 4901–4915, http://dx.doi.org/10.1109/TIP.2018.2846686.
[7] S. Keerativittayanun, K. Kotani, T. Kondo, T. Phatrapornnant, J. Karnjana, Less-
contrast image in overall consistently. In the case where low frequency visible contrast enhancement based on the human visual perception, Optik
details in the image need to be preserved, then the regional contrast (Stuttg) 157 (2018) 467–483, http://dx.doi.org/10.1016/j.ijleo.2017.11.096.
assessment is required, but the algorithm needs to be simplified. Since [8] G. Maguolo, L. Nanni, A critic evaluation of methods for COVID-19 automatic
detection from X-ray images, 2020, pp. 1–10, arXiv.
the algorithm requires more than one convolution, such as FHSABP, it
[9] R.C. Gonzalez, R.E. Woods, Digital Image Processing, third ed., Prentice-Hall,
is generally computationally slower than global gray level remapping Inc., 2006.
algorithm. Since the kernel changes based on the center value, it is dif- [10] K. Yeong-Taeg, Quantized bi-histogram equalization, in: 1997 IEEE International
ficult to utilizes Graphical Processing Unit (GPU) for faster performance Conference on Acoustics, Speech, and Signal Processing, Vol. 4, ICASSP-97, 1997,
pp. 2797–2800 4.
because most libraries only support convolution operation using static
[11] W. Yu, C. Qian, Z. Baeomin, Image enhancement based on equal area dualistic
kernel. Therefore, it requires users that wish to apply the proposed sub-image histogram equalization method, Consum. Electron. IEEE Trans. 45 (1)
algorithm in a faster way to produce own library, that can perform 2D (1999) 68–75, [Online]. Available: http://dsihe.
convolution using dynamic kernel, to run the program using parallel [12] C. Soong-Der, A.R. Ramli, Minimum mean brightness error bi-histogram equal-
ization in contrast enhancement, Consum. Electron. IEEE Trans. 49 (4) (2003)
computing. 1310–1319, [Online]. Available: http://mmbebhe.
[13] K.S. Sim, C.P. Tso, Y.Y. Tan, Recursive sub-image histogram equalization applied
to gray scale images, Pattern Recognit. Lett. 28 (10) (2007) 1209–1221, [Online].
CRediT authorship contribution statement Available: http://rsihe.
[14] M. Abdullah-Al-Wadud, M.H. Kabir, M.A.A. Dewan, C. Oksam, A dynamic
histogram equalization for image contrast enhancement, Consum. Electron. IEEE
Yan Chai Hum: Writing – original draft, Conceptualization, Trans. 53 (2) (2007) 593–600, [Online]. Available: http://dhe.
Methodology, Project administration, Funding acquisition. Yee Kai [15] T. Celik, Two-dimensional histogram equalization and contrast enhancement, Pat-
tern Recognit. 45 (10) (2012) 3810–3824, http://dx.doi.org/10.1016/j.patcog.
Tee: Data curation. Wun-She Yap: Visualization, Investigation.
2012.03.019.
Hamam Mokayed: Software. Tian Swee Tan: Validation. Maheza [16] T. Arici, S. Dikbas, Y. Altunbasak, A histogram modification framework and its
Irna Mohamad Salim: Writing – review & editing. Khin Wee Lai: application for image contrast enhancement, Image Process. IEEE Trans. 18 (9)
Formal analysis. (2009) 1921–1935.
[17] W. Qing, R.K. Ward, Fast image/video contrast enhancement based on weighted
thresholded histogram equalization, Consum. Electron. IEEE Trans. 53 (2) (2007)
757–764, [Online]. Available: http://wthe.
Declaration of competing interest [18] T. Celik, T. Tjahjadi, Contextual and variational contrast enhancement, IEEE
Trans. Image Process. 20 (12) (2011) 3431–3441, http://dx.doi.org/10.1109/
TIP.2011.2157513.
The authors declare that they have no known competing financial [19] C. ZhiYu, B.R. Abidi, D.L. Page, M.A. Abidi, Gray-level grouping (GLG): an
interests or personal relationships that could have appeared to automatic method for optimized image contrast enhancement-part I: the basic
influence the work reported in this paper. method, Image Process. IEEE Trans. 15 (8) (2006) 2290–2302.
[20] Y.C. Hum, K.W. Lai, M.I. Mohamad Salim, Multiobjectives bihistogram equaliza-
tion for image contrast enhancement, Complexity 20 (2) (2014) http://dx.doi.
org/10.1002/cplx.21499.
Acknowledgment [21] S.M. Pizer others, Adaptive histogram equalization and its variations, Comput.
Vis. Graph. Image Process. 39 (3) (1987) 355–368, http://dx.doi.org/10.1016/
s0734-189x(87)80186-x.
This research is supported by UTAR Research Fund (IPSR/RMC/ [22] K. Zuiderveld, in: I.V. Graphics gems, S.H. Paul (Eds.), Contrast Limited Adaptive
UTARRF/2020-C1/H02). Histogram Equalization, Academic Press Professional, Inc., 1994, pp. 474–485.

15
Y.C. Hum, Y.K. Tee, W.-S. Yap et al. Signal Processing: Image Communication 103 (2022) 116657

[23] D. Coltuc, P. Bolon, J.M. Chassery, Exact histogram specification, Image Process. [32] ZhiYu. Chen, B.R. Abidi, D.L. Page, M.A. Abidi, Gray-level grouping (GLG): an
IEEE Trans. 15 (5) (2006) 1143–1152. automatic method for optimized image contrast enhancement-part I: the basic
[24] A. Majumder, S. Irani, Perception-based contrast enhancement of images, ACM method, IEEE Trans. Image Process. 15 (8) (2006) 2290–2302, http://dx.doi.org/
Trans. Appl. Percept. 4 (3) (2007) http://dx.doi.org/10.1145/1278387.1278391, 10.1109/TIP.2006.875204, Aug.
17–es. [33] C. Wang, J. Peng, Z. Ye, Flattest histogram specification with accurate brightness
[25] L. Yu, H. Su, C. Jung, Perceptually optimized enhancement of contrast and preservation, Image Process. IET 2 (2008) 249–262, http://dx.doi.org/10.1049/
color in images, IEEE Access 6 (2018) 36132–36142, http://dx.doi.org/10.1109/ iet-ipr:20070198.
ACCESS.2018.2848671. [34] H. Ibrahim, N.S. Pik Kong, Brightness preserving dynamic histogram equalization
[26] L. Zhou, D. Weiskopf, C.R. Johnson, Perceptually guided contrast enhancement for image contrast enhancement, IEEE Trans. Consum. Electron. 53 (4) (2007)
based on viewing distance, J. Comput. Lang. 55 (2019) 100911, http://dx.doi. 1752–1758, http://dx.doi.org/10.1109/TCE.2007.4429280.
org/10.1016/j.cola.2019.100911. [35] B. Chen, Y. Wu, L. Shi, A fast image contrast enhancement algorithm using
[27] M.A. Bouman, Peripheral contrast thresholds of the human eye, J. Opt. Soc. entropy-preserving mapping prior, IEEE Trans. Circuits Syst. Video Technol. 29
Amer. 40 (12) (1950) 825–832, http://dx.doi.org/10.1364/JOSA.40.000825. (1) (2019) 38–49, http://dx.doi.org/10.1109/TCSVT.2017.2773461, Jan.
[28] Chun-Hsien Chou, Yun-Chin Li, A perceptually tuned subband image coder based [36] J. Cai, S. Gu, L. Zhang, Learning a deep single image contrast enhancer from
on the measure of just-noticeable-distortion profile, IEEE Trans. Circuits Syst. multi-exposure images, IEEE Trans. Image Process. 27 (4) (2018) 2049–2062,
Video Technol. 5 (6) (1995) 467–476, http://dx.doi.org/10.1109/76.475889, http://dx.doi.org/10.1109/TIP.2018.2794218.
Dec. [37] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 27
[29] C.-H. Lee, L.-H. Chen, W. Wang, Image contrast enhancement using gray-level (3) (1948) 379–423, http://dx.doi.org/10.1002/j.1538-7305.1948.tb01338.x.
grouping based on the just-noticeable-difference model of the human visual [38] S.S. Agaian, B. Silver, K.A. Panetta, Transform coefficient histogram-based image
system, J. Inf. Technol. Appl. 6 (2012) 18–22. enhancement algorithms using contrast entropy, IEEE Trans. Image Process. 16
[30] H. Ibrahim, S.C. Hoo, Local contrast enhancement utilizing bidirectional switch- (3) (2007) 741–758, http://dx.doi.org/10.1109/TIP.2006.888338, Mar.
ing equalization of separated and clipped subhistograms, Math. Probl. Eng. 2014 [39] K.A. Panetta, E.J. Wharton, S.S. Agaian, Human visual system-based
(2014) 848615, http://dx.doi.org/10.1155/2014/848615. image enhancement and logarithmic contrast measure, IEEE Trans. Syst. Man,
[31] K. Yeong-Taeg, Contrast enhancement using brightness preserving bi-histogram Cybern. B 38 (1) (2008) 174–188, http://dx.doi.org/10.1109/TSMCB.2007.
equalization, Consum. Electron. IEEE Trans. 43 (1) (1997) 1–8. 909440.
[40] W. Zhou, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality assessment:
from error visibility to structural similarity, IEEE Trans. Image Process. 13 (4)
(2004) 600–612, http://dx.doi.org/10.1109/TIP.2003.819861.

16

You might also like