Vol.04, Issue.38, September-2015, Pages: 8348-8353
www.ijsetr.com
Abstract: An image fusion method is proposed for creating a highly informative fused image by merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large-scale variations in intensity and a detail layer capturing small-scale details. A novel guided-filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can achieve state-of-the-art performance for fusion of multispectral, multi-focus, multimodal, and multi-exposure images.
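A minimal sketch of the two-scale decomposition described above, assuming a simple mean filter for the base layer (the filter choice, window radius, and function names are illustrative, not the paper's):

```python
def box_mean(img, r):
    # Windowed mean with clipped borders (assumed smoothing filter).
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            r0, r1 = max(0, i - r), min(h, i + r + 1)
            c0, c1 = max(0, j - r), min(w, j + r + 1)
            vals = [img[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def two_scale_decompose(img, r=2):
    # Base layer: large-scale intensity variations; detail layer:
    # the small-scale residual. Base + detail reconstructs the input.
    base = box_mean(img, r)
    detail = [[img[i][j] - base[i][j] for j in range(len(img[0]))]
              for i in range(len(img))]
    return base, detail
```

By construction the two layers sum back to the input image, which is what lets the base and detail layers be fused separately and then recombined.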
B. Block Diagram
Image Fusion is the process of combining relevant information from two or more images into a single image. The fused image should have more complete information, which is more useful for human or machine perception, and the resulting image will be more informative than any of the input images. Medical image fusion combines a functional image (CT) and an anatomical image (MRI) into one image, which can provide abundant information to help a doctor diagnose clinical disease.

Fig.5.

Limitations:
DCT does not gather the image information well.
DWT is not a suitable transform for edge information.
For all medical images the contrast will be the same, so the contrast feature cannot be used for fusion.
(2)

where the regional window has size M × N (typically 3 × 3). Region energy, rather than a single pixel value, is more reasonable for extracting features of the source images because it utilizes neighbors' information.
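The region-energy expression in (2) is not legible in this copy; a common form sums the squared intensities (or transform coefficients) over the M × N window. The sketch below assumes that form, with illustrative names and clipped borders:

```python
def region_energy(img, i, j, m=3, n=3):
    # Region energy at pixel (i, j): sum of squared values over an
    # m x n window centred on the pixel (window clipped at borders).
    h, w = len(img), len(img[0])
    half_m, half_n = m // 2, n // 2
    total = 0.0
    for r in range(max(0, i - half_m), min(h, i + half_m + 1)):
        for c in range(max(0, j - half_n), min(w, j + half_n + 1)):
            total += img[r][c] ** 2
    return total
```

Comparing region energies of the source images, instead of single pixel values, makes the feature measure robust to isolated noisy pixels.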
G. Contrast Enhancement
In spite of increasing demand for enhancing remote
sensing images, existing histogram-based contrast
enhancement methods cannot preserve edge details and
exhibit saturation artefacts in low- and high-intensity regions.
In this section, we present a novel contrast enhancement
algorithm for remote sensing images using dominant
brightness level-based adaptive intensity transformation. If
we do not consider spatially varying intensity distributions,
the correspondingly contrast-enhanced images may have
intensity distortion and lose image details in some regions.
Fig.8.

For overcoming these problems, we decompose the input image into multiple layers of single dominant brightness levels. To use the low-frequency luminance components, we perform the DWT on the input remote sensing image and then estimate the dominant brightness level using the log-average luminance in the LL sub-band, since high-intensity values are dominant in the bright region, and vice versa.

In contrast with the contourlet transform, a non-subsampled pyramid structure and non-subsampled directional filter banks are employed in NSCT. The non-subsampled pyramid structure is achieved by using two-channel non-subsampled filter banks.

International Journal of Scientific Engineering and Technology Research
Volume.04, IssueNo.38, September-2015, Pages: 8348-8353
Guided Filter for Image Enhancement by using the Contour Let Transform

A. Quality Measurement
The quality of the reconstructed image is measured in terms of mean square error (MSE) and peak signal-to-noise ratio (PSNR). The MSE is often called the reconstruction error variance q². The MSE between the original image f and the reconstructed image g at the decoder is defined as:

MSE = (1/N) Σ_{j,k} [f(j,k) - g(j,k)]²   (3)

where the sum over j, k denotes the sum over all pixels in the image and N is the number of pixels in each image.

TABLE I:

H. Guided Filter Process
A bilateral filter is a non-linear, edge-preserving and noise-reducing smoothing filter for images. The intensity value at each pixel in an image is replaced by a weighted average of intensity values from nearby pixels. While this filter is effective in many situations, it may have unwanted gradient reversal artifacts [12, 13, 8] near edges (further explained in Section 3.4). Its fast implementation is also a challenging problem. In this paper we propose a new type of
explicit image filter, called guided filter. The filtering output
is locally a linear transform of the guidance image. This filter
has the edge-preserving smoothing property like the bilateral
filter, but does not suffer from the gradient reversal artifacts.
It is also related to the matting Laplacian matrix [2], so is a
more generic concept and is applicable in other applications
beyond the scope of ”smoothing”. Moreover, the guided filter
has an O(N) time (in the number of pixels N) exact algorithm
for both gray-scale and color images. Experiments show that
the guided filter performs very well in terms of both quality
and efficiency in a great variety of applications, such as noise
reduction, detail smoothing/enhancement, HDR compression,
image matting/feathering, haze removal, and joint
upsampling.
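A minimal gray-scale sketch of a guided filter along these lines, using naive windowed means for clarity instead of the O(N) integral-image box filter; `eps` is the regularization that controls edge preservation, and all names are illustrative:

```python
def box_mean(img, r):
    # Windowed mean with clipped borders; the O(N) algorithm would
    # use integral images instead of this naive loop.
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            r0, r1 = max(0, i - r), min(h, i + r + 1)
            c0, c1 = max(0, j - r), min(w, j + r + 1)
            vals = [img[y][x] for y in range(r0, r1) for x in range(c0, c1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def guided_filter(I, p, r=2, eps=1e-2):
    # Output q is locally a linear transform of the guidance image I:
    # q = mean_a * I + mean_b, with coefficients (a, b) fit per window.
    h, w = len(I), len(I[0])
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_I = box_mean([[I[i][j] * I[i][j] for j in range(w)] for i in range(h)], r)
    corr_Ip = box_mean([[I[i][j] * p[i][j] for j in range(w)] for i in range(h)], r)
    # Per-pixel linear coefficients; eps regularizes flat regions.
    a = [[(corr_Ip[i][j] - mean_I[i][j] * mean_p[i][j])
          / (corr_I[i][j] - mean_I[i][j] ** 2 + eps)
          for j in range(w)] for i in range(h)]
    b = [[mean_p[i][j] - a[i][j] * mean_I[i][j] for j in range(w)] for i in range(h)]
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [[mean_a[i][j] * I[i][j] + mean_b[i][j] for j in range(w)] for i in range(h)]
```

Because the output is a local linear transform of the guidance image, edges present in the guidance survive in the output, which is what avoids the bilateral filter's gradient reversal artifacts.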
Advantages:
The 'Contrast' method and the proposed fusion method present a slightly better visual effect than the others. In particular, the proposed method has fewer disturbing details and smoother edges, such as the outlines of the skull and brain.

Fig.10.
PSNR = 10 log10 (255² / MSE)   (4)

Generally, when the PSNR is 20 dB or greater, the original and the reconstructed images are virtually indistinguishable by human eyes.

TABLE II:

Fig.9.
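Assuming 8-bit images with a peak value of 255, the MSE and PSNR measures described here can be computed as:

```python
import math

def mse(f, g):
    # Mean square error between original f and reconstruction g,
    # averaged over all N pixels.
    n = sum(len(row) for row in f)
    return sum((fj - gj) ** 2
               for frow, grow in zip(f, g)
               for fj, gj in zip(frow, grow)) / n

def psnr(f, g, peak=255.0):
    # Peak signal-to-noise ratio in dB; infinite for identical images.
    e = mse(f, g)
    return float("inf") if e == 0 else 10.0 * math.log10(peak ** 2 / e)
```

For example, a reconstruction that is off by one gray level at every pixel has MSE = 1 and PSNR of about 48 dB, well above the 20 dB threshold mentioned above.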
V. RESULT ANALYSIS
Here we report some experimental results that illustrate
the performance of the proposed approach. The experiments
were performed under Windows and MATLAB running on a
desktop machine.
H. RAMA, HARSHAVARDHAN VELAGAPUDI
Author’s Profile:
H. Rama graduated with a B.Tech in ECE in 2013 from JNTU Hyderabad. She is currently pursuing her M.Tech (ECE) at JNTUH University, Hyderabad, R.R. Dist., Telangana State, India.