

ISSN 2319-8885
Vol.04, Issue.38,
September-2015,
Pages: 8348-8353
www.ijsetr.com

Guided Filter for Image Enhancement by using the Contourlet Transform


H. RAMA 1, HARSHAVARDHAN VELAGAPUDI 2
1 PG Scholar, Dept of ECE, Vidya Vikas Institute of Technology, India, E-mail: rama.hirekar@gmail.com.
2 Assistant Professor, Dept of ECE, Vidya Vikas Institute of Technology, India.

Abstract: An image fusion method is proposed for creating a highly informative fused image by merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large-scale variations in intensity, and a detail layer capturing small-scale details. A novel guided-filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can achieve state-of-the-art performance for fusion of multispectral, multi-focus, multimodal, and multi-exposure images.

Keywords: Fusion, Base Layer, Filtering, Scale.
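The two-scale decomposition named in the abstract can be sketched in a few lines: the base layer is the image smoothed with an average (box) filter, and the detail layer is the difference between the image and its base layer. Below is a minimal 1-D illustration in pure Python; the window radius and the toy scan line are illustrative assumptions, not values from this paper.

```python
def average_filter(signal, radius):
    """Box (average) filter with clamped borders: each sample becomes
    the mean of its neighborhood of size up to 2*radius + 1."""
    n = len(signal)
    out = []
    for i in range(n):
        lo = max(0, i - radius)
        hi = min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append(sum(window) / len(window))
    return out

def two_scale_decompose(signal, radius=2):
    """Split a signal into a base layer (large-scale intensity
    variations) and a detail layer (small-scale details)."""
    base = average_filter(signal, radius)
    detail = [s - b for s, b in zip(signal, base)]
    return base, detail

if __name__ == "__main__":
    img_row = [10, 10, 12, 50, 52, 51, 12, 10]  # toy scan line
    base, detail = two_scale_decompose(img_row)
    # base varies slowly; detail carries the sharp step edges
    print([round(b, 1) for b in base])
    print([round(d, 1) for d in detail])
```

By construction, adding the two layers back together recovers the input, which is what makes per-layer fusion followed by recombination possible.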

I. INTRODUCTION

Image fusion is an important technique for various image processing and computer vision applications such as feature extraction and target recognition. Through image fusion, different images of the same scene can be combined into a single fused image. The fused image can provide more comprehensive information about the scene, which is more useful for human and machine perception. For example, the performance of feature extraction algorithms can be improved by fusing multi-spectral remote sensing images. The fusion of multi-exposure images can be used for digital photography. In these applications, a good image fusion method has the following properties. First, it can preserve most of the useful information of the different images. Second, it does not produce artifacts. Third, it is robust to imperfect conditions such as mis-registration and noise. A large number of image fusion methods have been proposed in the literature. Among these, multi-scale image fusion and data-driven image fusion are very successful approaches. Segmentation of brain tissues into gray matter, white matter and tumour on medical images is not only of high interest for serial monitoring of "disease burden" in oncologic imaging, but is also gaining importance with the advance of image-guided surgical approaches. Outlining the brain tumour contour is a major step in planning spatially localized radiotherapy (e.g., CyberKnife, IMRT), which is usually done manually on contrast-enhanced T1-weighted magnetic resonance images (MRI) in current clinical practice.

On T1 MR images acquired after administration of a contrast agent (gadolinium), blood vessels and the parts of the tumour where the contrast agent can pass the blood-brain barrier are observed as hyper-intense areas. There are various attempts at brain tumour segmentation in the literature which use a single modality, combine multiple modalities, and use priors obtained from population atlases. Another method is the active contour method, which is suitable for finding the edges of a region whose gray-scale intensities differ significantly from the surrounding region of the image. To segment homogeneous regions, semi-automatic region-growing methods first require the user to identify a seed point. In this paper we propose a fully automatic region-growing segmentation technique. First, the seed is found automatically using textural features from the co-occurrence matrix (COM) and run-length features. Then, using gray scale, spatial information and the Otsu thresholding method, region growing is applied to segment the region. Image processing is one of the fastest-growing research areas these days, and it is now closely integrated with the medical and biotechnology fields. Image processing can be used to analyze medical and MRI images to detect abnormalities in the image.

Fig.1.
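The Otsu thresholding step mentioned above picks the gray level that maximizes the between-class variance of the image histogram, which is what makes the seeded region growing automatic. A minimal pure-Python sketch of Otsu's method follows; the 10-level toy "image" is an illustrative assumption, not data from this paper.

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level t that maximizes the between-class
    variance when pixels are split into {<= t} and {> t} (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0    # pixel count of the background class
    sum0 = 0  # intensity mass of the background class
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

if __name__ == "__main__":
    toy = [1, 1, 2, 2, 2, 8, 8, 9, 9, 9]  # bimodal toy histogram data
    print(otsu_threshold(toy, levels=10))  # threshold separates the two modes
```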

Copyright © 2015 IJSETR. All rights reserved.
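The guided-filtering-based weighting that the abstract refers to rests on a simple local linear model: within each window, the output is a*I + b, where a and b are fit by ridge regression against a guidance signal I and then averaged over overlapping windows. The following is a minimal 1-D sketch in pure Python; the window radius r and the regularization eps are illustrative defaults, not the settings used in this paper.

```python
def _box_mean(x, r):
    """Mean filter with clamped borders (window up to 2r+1 wide)."""
    n = len(x)
    return [sum(x[max(0, i - r):min(n, i + r + 1)]) /
            (min(n, i + r + 1) - max(0, i - r)) for i in range(n)]

def guided_filter_1d(guide, src, r=2, eps=1e-4):
    """Minimal 1-D guided filter: per-window linear coefficients
    a = cov(I, p) / (var(I) + eps), b = mean(p) - a * mean(I),
    averaged over the windows covering each sample."""
    mean_i = _box_mean(guide, r)
    mean_p = _box_mean(src, r)
    corr_ii = _box_mean([g * g for g in guide], r)
    corr_ip = _box_mean([g * p for g, p in zip(guide, src)], r)
    var_i = [cii - mi * mi for cii, mi in zip(corr_ii, mean_i)]
    cov_ip = [cip - mi * mp
              for cip, mi, mp in zip(corr_ip, mean_i, mean_p)]
    a = [c / (v + eps) for c, v in zip(cov_ip, var_i)]
    b = [mp - ak * mi for mp, ak, mi in zip(mean_p, a, mean_i)]
    mean_a = _box_mean(a, r)
    mean_b = _box_mean(b, r)
    # Output is locally a linear transform of the guide, so edges in
    # the guide survive while low-variance regions are smoothed.
    return [ma * g + mb for ma, g, mb in zip(mean_a, guide, mean_b)]

if __name__ == "__main__":
    step = [0, 0, 0, 0, 1, 1, 1, 1]
    print([round(v, 3) for v in guided_filter_1d(step, step)])
```

Self-guiding on a step signal shows the edge-preserving behavior: the step survives almost unchanged, whereas a plain box filter would blur it.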


• Traditional multi-scale image fusion methods need more than two scales to obtain satisfactory fusion results. The key contribution of this paper is to present a fast two-scale fusion technique that does not rely heavily on a specific image decomposition method. A simple average filter is sufficient for the proposed fusion framework.
• A novel weight construction method is proposed to combine pixel saliency and spatial context for image fusion. Instead of using optimization-based methods, guided filtering is adopted as a local filtering technique for image fusion.
• An important observation of this paper is that the roles of the two measures, i.e., pixel saliency and spatial consistency, are quite different when fusing different layers. In this paper, the roles of pixel saliency and spatial consistency are controlled by adjusting the parameters of the guided filter.

Fig.2.

II. LITERATURE SURVEY

A. A Universal Image Quality Index
We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modelling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error.

B. Fusing Images with Different Focuses Using Support Vector Machines
In this paper, we improve this fusion procedure by applying the discrete wavelet frame transform (DWFT) and support vector machines (SVM). Unlike the DWT, the DWFT yields a translation-invariant signal representation. Using features extracted from the DWFT coefficients, an SVM is trained to select the source image that has the best focus at each pixel location, and the corresponding DWFT coefficients are then incorporated into the composite wavelet representation.

C. Extending the Depth of Field in Microscopy through Curvelet-Based Frequency-Adaptive Image Fusion
We propose a curvelet-based image fusion method that is frequency-adaptive. Because of the high directional sensitivity of the curvelet transform (and, consequently, its extreme sparseness), the average performance gain of the new method over state-of-the-art methods is high.

D. Image Quality Assessment: From Error Visibility to Structural Similarity
In this paper, we have summarized the traditional approach to image quality assessment based on error sensitivity, and have enumerated its limitations. We have proposed the use of structural similarity as an alternative motivating principle for the design of image quality measures. To demonstrate our structural similarity concept, we developed an SSIM index and showed that it compares favorably with other methods in accounting for our experimental measurements of the subjective quality of 344 JPEG and JPEG2000 compressed images.

III. EXISTING SYSTEM
A. Wavelet Transform
Wavelets are an extension of Fourier analysis. The mathematics of Fourier analysis dates back to the nineteenth century, but it wasn't until the mid twentieth century, with the advent of fast algorithms and computers, that Fourier analysis began to make an impact on the world. Widely used in signal analysis, hardly a scientific field has not been touched by this technique. Wavelet analysis uses a similar approach, but instead of sinusoids, waves of limited duration, termed basis functions or mother wavelets, are used [Fig.3].

Fig.3. Wavelet analysis represents the signal as combinations of scaled and shifted mother wavelets.

Since the wavelet is of limited duration, it can be shifted along the signal at known intervals. At each step a coefficient is calculated representing how closely the wavelet resembles that section of the signal. By scaling the wavelet, stretching or compressing it, information from the overall signal trend down to small details can be obtained.

B. DWT Based Image Fusion
The Discrete Wavelet Transform (DWT) is a mathematical tool for hierarchically decomposing an image. The DWT decomposes an input image into four components labeled LL, HL, LH and HH [9]. The first letter corresponds to applying either a low-pass frequency operation or a high-pass
International Journal of Scientific Engineering and Technology Research
Volume.04, IssueNo.38, September-2015, Pages: 8348--8353
frequency operation to the rows, and the second letter refers to the filter applied to the columns. The lowest resolution level LL consists of the approximation part of the original image. The remaining three resolution levels consist of the detail parts and give the vertical high (LH), horizontal high (HL) and diagonal high (HH) frequencies. Fig.4 shows a three-level wavelet decomposition of an image.

Fig.4. Three-level Discrete Wavelet Transform.

• First, the image is decomposed into high-frequency images and low-frequency images with the wavelet transform.
• Then the spatial frequency and the contrast of the low-frequency image are measured to determine the fused low-frequency image.

The discrete wavelet transform (DWT) was developed to apply the wavelet transform to the digital world. Filter banks are used to approximate the behavior of the continuous wavelet transform. The signal is decomposed with a high-pass filter and a low-pass filter.

Fig.5.

Limitations:
• The DCT does not gather the image information well.
• The DWT is not a suitable transform for edge information.
• For all medical images the contrast will be the same, so we cannot use the contrast feature for fusion.

IV. PROPOSED SYSTEM
A. Contourlet Transform
Contourlets form a multiresolution directional tight frame designed to efficiently approximate images made of smooth regions separated by smooth boundaries. The Contourlet transform has a fast implementation based on a Laplacian pyramid decomposition followed by directional filter banks applied on each bandpass subband. The Contourlet transform is inspired by the human visual system and by the Curvelet transform, which can capture the smoothness of the contours of images with different elongated shapes and in a variety of directions. However, sampling on a rectangular grid is difficult for the Curvelet transform, since it was developed in the continuous domain, and directions other than horizontal and vertical look very different on a rectangular grid. Therefore, the Contourlet transform was proposed instead as a directional multiresolution transform defined directly in the discrete domain.

Fig.6.

The Contourlet transform uses a double filter bank structure to capture the smooth contours of images. In this double filter bank, the Laplacian pyramid (LP) is first used to capture the point discontinuities, and then a directional filter bank (DFB) is used to link those point discontinuities into linear structures. The Laplacian pyramid decomposition produces only one bandpass image per level in multidimensional signal processing, which avoids frequency scrambling. The DFB, on the other hand, is only fit for high frequencies, since it leaks the low frequencies of signals into its directional subbands. This is the reason for combining the DFB with the LP, which provides the multiscale decomposition and removes the low frequencies. Therefore, image signals pass through the LP subbands to obtain bandpass signals, and those signals then pass through the DFB to capture the directional information of the image.

B. Block Diagram
Image fusion is the process of combining relevant information from two or more images into a single image. The fused image should have more complete information, which is more useful for human or machine perception. The resulting image will be more informative than any of the input images. Medical image fusion combines a functional image and an anatomical image (e.g., CT and MRI) into one image. This image can provide abundant information to the doctor for diagnosing clinical disease.
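The one-level decomposition into LL, LH, HL and HH subbands described above can be illustrated with the simplest wavelet, the Haar pair: pairwise averaging (low-pass) and pairwise differencing (high-pass) applied first to rows, then to columns. The sketch below is a minimal pure-Python illustration; the 1/2 normalization and the toy image are illustrative choices, not this paper's exact filters.

```python
def haar_step(vec):
    """One Haar analysis step: pairwise averages (low-pass) and
    pairwise differences (high-pass)."""
    low = [(vec[i] + vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
    high = [(vec[i] - vec[i + 1]) / 2 for i in range(0, len(vec), 2)]
    return low, high

def dwt2_haar(img):
    """Single-level 2-D Haar DWT: filter the rows, then the columns,
    yielding the LL, LH, HL and HH subbands (each half-size)."""
    lows, highs = [], []
    for row in img:                    # row transform
        lo, hi = haar_step(row)
        lows.append(lo)
        highs.append(hi)

    def cols(mat):                     # column transform on one half
        w = len(mat[0])
        lo = [[(mat[i][j] + mat[i + 1][j]) / 2 for j in range(w)]
              for i in range(0, len(mat), 2)]
        hi = [[(mat[i][j] - mat[i + 1][j]) / 2 for j in range(w)]
              for i in range(0, len(mat), 2)]
        return lo, hi

    ll, lh = cols(lows)
    hl, hh = cols(highs)
    return ll, lh, hl, hh

if __name__ == "__main__":
    flat = [[8] * 4 for _ in range(4)]
    ll, lh, hl, hh = dwt2_haar(flat)
    # LL keeps the approximation; the detail subbands vanish
    # for a constant image.
    print(ll, lh, hl, hh)
```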

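The two fusion rules this paper uses on the decomposed subbands, averaging for the low-frequency coefficients and region-energy-based selection for the high-frequency coefficients, can be sketched as follows. The 3x3 window and the toy coefficient blocks are illustrative assumptions.

```python
def fuse_low(b1, b2):
    """Average rule for low-frequency (approximation) coefficients."""
    return [[(x + y) / 2 for x, y in zip(r1, r2)]
            for r1, r2 in zip(b1, b2)]

def region_energy(coeffs, x, y, radius=1):
    """Sum of squared coefficients in a (2r+1)x(2r+1) window,
    clamped at the borders (a 3x3 window when radius=1)."""
    h, w = len(coeffs), len(coeffs[0])
    total = 0.0
    for i in range(max(0, x - radius), min(h, x + radius + 1)):
        for j in range(max(0, y - radius), min(w, y + radius + 1)):
            total += coeffs[i][j] ** 2
    return total

def fuse_high(c1, c2, radius=1):
    """Pick, per position, the coefficient whose neighborhood has the
    larger region energy (a saliency-style selection rule)."""
    h, w = len(c1), len(c1[0])
    return [[c1[i][j]
             if region_energy(c1, i, j, radius) >= region_energy(c2, i, j, radius)
             else c2[i][j]
             for j in range(w)] for i in range(h)]

if __name__ == "__main__":
    print(fuse_low([[2, 4], [6, 8]], [[4, 4], [2, 0]]))
    print(fuse_high([[0, 0], [0, 0]], [[1, 0], [0, 0]]))
```

Using a neighborhood energy rather than a single pixel value makes the high-frequency selection less sensitive to isolated noisy coefficients, which is the rationale given later in Section F.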
C. Image Preparation
Digital images of melanoma and benign nevi were collected in JPEG format from different sources, totaling 72, half melanoma and half benign. MATLAB's Wavelet Toolbox only supports indexed images with linear monotonic color maps, so the RGB images were converted to grayscale images. The next step in the process was to segment the lesion from the surrounding skin. Since a clear color distinction existed between lesion and skin, thresholding was very suitable for this task. A black-and-white image was produced and its size increased by six pixels all around in order to include the entire border region in the segmented image.

D. NSCT-based Fusion Algorithm
In the standard contourlet transform [6], downsamplers and upsamplers are present in both the Laplacian pyramid and the DFB. Thus, it is not shift-invariant, which causes pseudo-Gibbs phenomena around singularities. The NSCT is an improved form of the contourlet transform. It is intended for applications in which redundancy is not a major issue, e.g., image fusion.

Fig.7.

In contrast with the contourlet transform, a nonsubsampled pyramid structure and nonsubsampled directional filter banks are employed in the NSCT. The nonsubsampled pyramid structure is achieved by using two-channel nonsubsampled 2-D filter banks. The DFB is achieved by switching off the downsamplers/upsamplers in each two-channel filter bank in the DFB tree structure and upsampling the filters accordingly. As a result, the NSCT is shift-invariant and leads to better frequency selectivity and regularity than the contourlet transform. Fig.4 shows the decomposition frameworks of the contourlet transform and the NSCT. In this paper, image decomposition is performed by the NSCT. We expect that the advantages of the NSCT (shift invariance, multiresolution, localization, directionality, and anisotropy) make it well suited for image fusion and other image processing tasks, e.g., target recognition and object detection. In the fusion process, both neighborhood coefficients and cousin coefficients are utilized in the salience measure.

Fig.8.

E. Fusion of Low-Frequency Coefficients
Considering that the images' approximate information is carried by the low-frequency coefficients, the averaging rule is adopted for the low-frequency coefficients. Suppose BF(x, y) is the fused low-frequency coefficient; then

BF(x, y) = [B1(x, y) + B2(x, y)] / 2        (1)

where B1(x, y) and B2(x, y) denote the low-frequency coefficients of the source images.

F. Fusion of High-Frequency Coefficients
High-frequency coefficients always contain edge and texture features. In order to make full use of the information in the neighborhood and cousin coefficients in the NSCT domain, a salience measure, defined as a combination of the region energy of the NSCT coefficients and the correlation of the cousin coefficients, is proposed for the first time. We define the region energy by computing the sum of the squared coefficients in a local window. Suppose C_l,k(x, y) is the high-frequency NSCT coefficient at location (x, y) in the subband of the k-th direction at the l-th decomposition scale. The region energy is defined as follows:

E_l,k(x, y) = Σ_{(m, n) ∈ W} [C_l,k(x + m, y + n)]^2        (2)

where W denotes the regional window and its size is M × N (typically 3 × 3). Region energy, rather than a single pixel value, is a more reasonable way to extract features of the source images, since it utilizes neighborhood information.

G. Contrast Enhancement
In spite of the increasing demand for enhancing remote sensing images, existing histogram-based contrast enhancement methods cannot preserve edge details and exhibit saturation artefacts in low- and high-intensity regions. In this section, we present a novel contrast enhancement algorithm for remote sensing images using a dominant-brightness-level-based adaptive intensity transformation. If we do not account for spatially varying intensity distributions, the contrast-enhanced images may exhibit intensity distortion and lose image details in some regions. To overcome these problems, we decompose the input image into multiple layers of single dominant brightness levels. To use the low-frequency luminance components, we perform the DWT on the input remote sensing image and then estimate the dominant brightness level using the log-average luminance in the LL subband, since high-intensity values are dominant in the bright region, and vice versa.

H. Guided Filter Process
A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter for images. The intensity value at each pixel in the image is replaced by a weighted average of intensity values from nearby pixels. While this filter is effective in many situations, it may exhibit unwanted gradient reversal artifacts [12, 13, 8] near edges. Its fast implementation is also a challenging problem. In this paper we adopt a different type of explicit image filter, called the guided filter. The filtering output is locally a linear transform of the guidance image. This filter has the edge-preserving smoothing property of the bilateral filter, but does not suffer from the gradient reversal artifacts. It is also related to the matting Laplacian matrix [2], so it is a more generic concept and is applicable in applications beyond the scope of "smoothing". Moreover, the guided filter has an O(N) time (in the number of pixels N) exact algorithm for both gray-scale and color images. Experiments show that the guided filter performs very well in terms of both quality and efficiency in a great variety of applications, such as noise reduction, detail smoothing/enhancement, HDR compression, image matting/feathering, haze removal, and joint upsampling.

Advantages:
The 'Contrast' method and the proposed fusion method present a slightly better visual effect than the others. In particular, the proposed method produces fewer disturbing details and smoother edges, such as the outlines of the skull and brain.

Fig.9.

V. RESULT ANALYSIS
Here we report some experimental results that illustrate the performance of the proposed approach. The experiments were performed under Windows with MATLAB running on a desktop machine.

A. Quality Measurement
The quality of the reconstructed image is measured in terms of the mean square error (MSE) and the peak signal-to-noise ratio (PSNR). The MSE is often called the reconstruction error variance. The MSE between the original image f and the reconstructed image g at the decoder is defined as:

MSE = (1 / N) Σ_{j, k} [f(j, k) - g(j, k)]^2        (3)

where the sum over j, k denotes the sum over all pixels in the image and N is the number of pixels in each image.

TABLE I:

Fig.10.

From this, the peak signal-to-noise ratio is defined as the ratio between the signal variance and the reconstruction error variance. The PSNR between two images having 8 bits per pixel, expressed in decibels (dB), is given by:

PSNR = 10 log10(255^2 / MSE)        (4)

Generally, when the PSNR is 20 dB or greater, the original and reconstructed images are virtually indistinguishable to human eyes.

TABLE II:
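Equations (3) and (4) above translate directly into code. A minimal sketch for 8-bit images follows; the toy image pair is an illustrative assumption.

```python
import math

def mse(f, g):
    """Mean square error between two equally sized images, Eq. (3):
    the average of the squared pixel differences."""
    n = sum(len(row) for row in f)
    err = sum((fv - gv) ** 2
              for frow, grow in zip(f, g)
              for fv, gv in zip(frow, grow))
    return err / n

def psnr(f, g, peak=255):
    """Peak signal-to-noise ratio in dB, Eq. (4):
    10*log10(peak^2 / MSE). Infinite for identical images."""
    m = mse(f, g)
    if m == 0:
        return float("inf")
    return 10 * math.log10(peak ** 2 / m)

if __name__ == "__main__":
    original = [[0, 0], [0, 0]]
    decoded = [[10, 0], [0, 0]]
    print(mse(original, decoded))         # mean of the squared errors
    print(round(psnr(original, decoded), 2))
```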
Fig.11.

VI. CONCLUSION
We have presented a novel image fusion technique based on guided filtering. The proposed technique uses the average filter to obtain the two-scale representations, which is simple and effective. More importantly, the guided filter is used in a novel way to make full use of the strong correlations between neighborhood pixels for weight optimization. Experiments show that the proposed technique can well preserve the original and complementary information of the multiple input images. Encouragingly, the proposed technique is also very robust to image mis-registration.

VII. REFERENCES
[1] A. A. Goshtasby and S. Nikolov, "Image fusion: Advances in the state of the art," Inf. Fusion, vol. 8, no. 2, pp. 114–118, Apr. 2007.
[2] D. Socolinsky and L. Wolff, "Multispectral image visualization through first-order fusion," IEEE Trans. Image Process., vol. 11, no. 8, pp. 923–931, Aug. 2002.
[3] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Trans. Image Process., vol. 20, no. 12, pp. 3634–3646, Dec. 2011.
[4] S. Li, J. Kwok, I. Tsang, and Y. Wang, "Fusing images with different focuses using support vector machines," IEEE Trans. Neural Netw., vol. 15, no. 6, pp. 1555–1561, Nov. 2004.
[5] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognit., vol. 37, no. 9, pp. 1855–1872, Sep. 2004.
[6] D. Looney and D. Mandic, "Multiscale image fusion using complex extensions of EMD," IEEE Trans. Signal Process., vol. 57, no. 4, pp. 1626–1630, Apr. 2009.
[7] M. Kumar and S. Dass, "A total variation-based algorithm for pixel-level image fusion," IEEE Trans. Image Process., vol. 18, no. 9, pp. 2137–2143, Sep. 2009.
[8] P. Burt and E. Adelson, "The Laplacian pyramid as a compact image code," IEEE Trans. Commun., vol. 31, no. 4, pp. 532–540, Apr. 1983.
[9] O. Rockinger, "Image sequence fusion using a shift-invariant wavelet transform," in Proc. Int. Conf. Image Process., vol. 3, Washington, DC, USA, Oct. 1997, pp. 288–291.
[10] J. Liang, Y. He, D. Liu, and X. Zeng, "Image fusion using higher order singular value decomposition," IEEE Trans. Image Process., vol. 21, no. 5, pp. 2898–2909, May 2012.

Author's Profile:
H. Rama graduated in B.Tech ECE in 2013 from JNTU Hyderabad. She is currently pursuing a Masters Degree in M.Tech [ECE] from JNTUH University, Hyderabad, R.R. Dist, Telangana State, India.

Harshavardhan Velagapudi is working as Assistant Professor in the ECE Dept. at Farah Institute of Technology, Chevella, R.R. Dist, Telangana State, India. He post-graduated in ECE (Embedded Systems) M.Tech from Vignan's University, Vadlamudi, Guntur, A.P., and graduated in ECE B.Tech from JNTU Kakinada. His research interests include Wireless Communications and Embedded Systems. He has published 1 research paper in international conferences and journals.

Anil Sooram graduated in B.Tech ECE in 2007 from JNTU Hyderabad. He received a Masters Degree in M.Tech [ECE] from JNTUH University, Hyderabad. Presently he is working as Associate Professor in the ECE Dept. at Farah Institute of Technology, Chevella, R.R. Dist, Telangana State, India. His research interests include Wireless Communications and Embedded Systems. He has published 3 research papers in international conferences and journals. He has received the Best Teacher award from the Farah Group.

Dr. J. Sasi Kiran graduated in B.Tech [EIE] from JNTU Hyderabad. He received a Masters Degree in M.Tech [Computers & Communications] from Bharath University, Chennai, and M.Tech [CSE] from JNT University, Hyderabad. He received a Ph.D degree in Computer Science from the University of Mysore, Mysore. He served Vidya Vikas Institute of Technology for 10 years as Assistant Professor, Associate Professor, HOD-CSE&IT and Vice Principal, and taught courses for B.Tech and M.Tech students. At present he is working as Professor in CSE and Dean of Academics at Vidya Vikas Institute of Technology, Chevella, Greater Hyderabad, R.R. Dist, Telangana State, India. His research interests include Image Processing, Cloud Computing and Network Security. He has published several research papers in various national and international conferences, proceedings and journals. He is a life member of CSI, ACM, ISTE, IE, IAE, NSC, ISCA, IACSIT, CSTA, AIRCC, CRSI, GMIS-USA and the Red Cross, and a Managing Committee Member of the Computer Society of India. He is an editorial board member of IJERT and a Board of Studies member of CVSR Engineering College, Hyderabad. He has received the Best Teacher award twice from the Vidya Group, a Significant Contribution award from the Computer Society of India, and a Passionate Researcher Trophy from the Sri Ramanujan Research Forum, GIET, Rajahmundry, A.P., India.
