COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY

COCHIN UNIVERSITY COLLEGE OF ENGINEERING KUTTANAD, PULINCUNNU

PROJECT REPORT

Submitted by:

FIDA SAHOODA P 20320514
NAMAN ANAND 20320522
ASWIN K 20320540
MAYURI P 20920518

In partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in

ELECTRONICS AND COMMUNICATION ENGINEERING

DIVISION OF ELECTRONICS AND COMMUNICATION ENGINEERING
2023-2024
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
COCHIN UNIVERSITY COLLEGE OF
ENGINEERING KUTTANAD, PULINCUNNU

PROJECT REPORT

TITLE: MIE-IF/AB: MEDICAL IMAGE ENHANCEMENT USING IMAGE FUSION AND ALPHA BLENDING
Submitted by:

FIDA SAHOODA P 20320514
NAMAN ANAND 20320522
ASWIN K 20320540
MAYURI P 20920518

In partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in

ELECTRONICS AND COMMUNICATION ENGINEERING

DIVISION OF ELECTRONICS AND COMMUNICATION ENGINEERING
2023-2024
COCHIN UNIVERSITY OF SCIENCE AND TECHNOLOGY
COCHIN UNIVERSITY COLLEGE OF ENGINEERING
KUTTANAD, PULINCUNNU

DIVISION OF ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE
Certified that this is a bonafide report of the project entitled

MIE-IF/AB: MEDICAL IMAGE ENHANCEMENT USING IMAGE FUSION AND ALPHA BLENDING

Submitted by:

FIDA SAHOODA P 20320514
NAMAN ANAND 20320522
ASWIN K 20320540
MAYURI P 20920518

During the academic year 2023-2024, in partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in Electronics and Communication Engineering from CUSAT.

Project coordinator Project Guide Head of Department

Dr. ANILKUMAR K K Mr. NISHANTH KUMAR Dr. ANILKUMAR K K

Associate Professor Associate Professor Associate Professor

Div of ECE, CUCEK Div of ECE, CUCEK Div of ECE, CUCEK


ACKNOWLEDGEMENT

We thank the Supreme Lord for imparting us with the spiritual energy
in the right direction which has led to the successful completion of the
Project Report. We would also like to thank the Principal, Dr. Joseph
Kutty Jacob, for the facilities provided by him during the preparation
of the report. We are extremely thankful to the Head of the
Department of Electronics and Communication Engineering, Dr.
Anilkumar K K, for giving all the support and valuable directions in
overcoming the difficulties faced during the preparation of the report.
We express our sincere thanks to the Project Coordinator, Dr.
Anilkumar K K, for giving innovative suggestions, timely advice,
correction advice and suggestions during this endeavour. We acknowledge our indebtedness and deep sense of gratitude to our Guide, Dr. Anilkumar K K, whose guidance and kind supervision throughout the course shaped the present work into its current form. We also express our gratitude towards all the faculty of CUCEK
for their encouragement. We also express our deep sense of thanks to
all of our classmates and friends for their support, constructive
criticism and suggestions.
ABSTRACT

This project proposes a method for restoring composite images using


a novel image restoration process with alpha blending for image
fusion enhancement. The proposed method consists of several stages:
preprocessing, image fusion, and post-processing. In the preprocessing
stage, the input images are first normalized. Then, noise removal
techniques are applied to remove any unwanted noise from the
images. Next, the images are augmented, which may involve techniques
such as contrast adjustment and resizing. In the image fusion stage, the
pre-processed images are fused together to create a composite image.
Image fusion is a technique that combines information from multiple
images into a single image. The goal of image fusion is to create an
image that is more informative than any of the input images. Here,
alpha blending is employed to refine the fused image. Alpha blending
allows for pixel-level control over the contribution of each source image
to the final result. This can be particularly beneficial for highlighting
specific features or reducing artifacts at image seams. In the post-processing stage, the quality of the composite image is evaluated using quantitative metrics such as PSNR, RMS, and SSIM. The
proposed method can be used to restore composite images that have
been degraded by noise, blur, or other imperfections. The method can
also be used to improve the quality of composite images that were
created from low-quality input images, with alpha blending offering
additional control over the final image characteristics.
TABLE OF CONTENTS

S.NO TITLE


1. INTRODUCTION

2. AIM AND OBJECTIVE

3. LITERATURE SURVEY

4. EXISTING TECHNOLOGIES

5. METHODOLOGY

6. ALGORITHMS AND BLOCK DIAGRAMS

7. RESULTS

8. CONCLUSION

9. FUTURE SCOPE

10. REFERENCES

11. APPENDIX
TABLE OF FIGURES

S.No. Title

1. COLLECTING THE DATASET

2. POST MASKING IMAGES

3. NORMALIZED IMAGES

4. NOISE REDUCTION IMAGES

5. CONTRAST ENHANCEMENT

6. AUGMENTED IMAGES

7. IMAGES POST BLENDING

8. RESULTS
1. INTRODUCTION

Medical imaging, crucial for healthcare, encounters challenges like


motion artifacts and missing areas, compromising diagnostic accuracy.
Medical image inpainting addresses these issues by reconstructing
images with missing regions. Our project aims to enhance diagnostic
image quality by addressing machine-induced errors through image
processing techniques. Segmentation isolates structures, aiding
analysis, while registration aligns images for comparison.
Classification categorizes tissue types, aiding diagnosis, and feature
extraction quantifies characteristics. 3D reconstruction provides
anatomical representations, and image fusion integrates modalities for
a unified view. Texture analysis and deep learning enhance analysis
accuracy for improved diagnostic outcomes.

Challenges in medical imaging arise from various sources, including


motion artifacts, processing errors, equipment malfunctions, and
complex patient anatomy. Addressing these requires meticulous
patient positioning, rigorous quality control, and optimized imaging
parameters. In MATLAB, segmentation, registration, and
deconvolution techniques mitigate defects, while artifact removal
algorithms eliminate common image artifacts. Machine learning,
particularly CNNs, enhances defect detection and correction. By
integrating these diverse techniques, diagnostic accuracy is bolstered,
advancing patient care in medical imaging.
Image blending and fusion are fundamental in image processing, each
serving distinct purposes. Blending merges images smoothly, ideal for
panoramic stitching. Fusion integrates information from multiple
images to create a single, enriched composition, essential in low-light
conditions, multi-modal imaging, and object detection. These
techniques find application in medical imaging, remote sensing, and
surveillance, where integrated information facilitates informed
decision-making.
AIM AND OBJECTIVE

Aim:
The aim of this project is to develop and implement a method for
restoring composite images using a novel image restoration process
with alpha blending for image fusion enhancement.

Objectives:
• To design and implement a preprocessing stage that includes
normalization, noise removal, and image augmentation
techniques to prepare input images for fusion.
• To develop an image fusion stage that combines preprocessed
images using alpha blending to create a composite image that is
more informative than any of the input images.
• To evaluate the quality of the composite image through post-processing quality metrics such as PSNR, RMS, and SSIM.
• To demonstrate the effectiveness of the proposed method in
restoring composite images degraded by noise, blur, or other
imperfections.
• To assess the capability of the method in improving the quality of
composite images created from low-quality input images,
utilizing alpha blending for additional control over final image
characteristics.
LITERATURE SURVEY

1. Enhancement Of Medical Images Using Image Processing In Matlab


Authors: UdayKumbhar, Vishal Patil, Shekhar Rudrakshi

Image enhancement encompasses a range of techniques aimed at


refining specific features within digital images for subsequent
analysis or display. Examples include contrast adjustment, edge
enhancement, pseudocoloring, noise filtering, sharpening, and
magnification. While these techniques amplify certain image
characteristics, they do not inherently increase the underlying data's
information content; rather, they emphasize predefined attributes.
Enhancement algorithms are typically interactive and application
dependent, with methods such as contrast stretching remapping grey
levels using predetermined transformations, as seen in histogram
equalization. These methodologies, alongside others utilizing local
neighborhood operations or transformative techniques like discrete
Fourier transforms, are indispensable tools across various fields
including medical imaging, art studies, forensics, and atmospheric
sciences. The significance of image enhancement lies in its ability to
refine visual data, facilitate deeper insights, and aid decision-making
processes without compromising image integrity. As such, ongoing
research focuses on developing robust enhancement methodologies
that achieve optimal results while preserving the authenticity and
interpretability of the underlying data.
2. Medical Image Enhancement Application Using Histogram
Equalization in Computational Libraries
Authors: Mohamed Y. Adam, Mozamel M. Saeed, Al Samani A. Ahmed

Digital Image Processing refers to the manipulation of two-dimensional images by digital computers to alter existing images
according to desired specifications. This may involve tasks such as
noise removal, contrast enhancement, correction of blurring resulting
from camera movement during image acquisition, and rectification
of geometrical distortions caused by lenses. Prior to undertaking
image processing, image enhancement is often necessary to improve
the overall quality of the image. While not exhaustive, this section
focuses on image enhancement through techniques such as histogram
equalization. Histogram equalization is particularly effective for
enhancing low-contrast and dark images by improving contrast and
brightness uniformly across the image, especially in cases where the
original image exhibits irregular illumination. Such enhancement
techniques are crucial for enhancing the visibility of features within
the scene, thereby facilitating easier visualization, classification, and
interpretation of images. Contrast stretching, a common method for
enhancing images, involves spreading out the range of scene
illumination, with linear contrast stretch being one approach.
However, linear contrast stretch may assign an equal number of gray
levels to both frequently and infrequently occurring gray levels,
leading to ambiguous feature distinction. To address this limitation,
histogram equalization allocates more gray levels to frequently
occurring ones, thereby enhancing feature contrast. While global
histogram equalization may result in intensity saturation in dark and
bright areas, color image enhancement can be achieved by encoding
red, green, and blue components into separate spectral images.
Overall, these enhancement techniques play a critical role in
improving image quality and aiding in subsequent image analysis
and interpretation.
3. A Novel Approach for Contrast Enhancement and Noise
Removal of Medical Images
Authors: Vijeesh Govind, Arun A. Balakrishnan, Dominic Mathew

Medical image enhancement is crucial for identifying specific regions


within an image by enhancing the desired areas. Various approaches
to image enhancement, as detailed in references, operate in either
spatial or frequency domains. Histogram equalization is a widely used
technique that enhances image contrast by expanding the dynamic
range, albeit with the drawback of potential over-enhancement in
certain image regions. Adaptive histogram equalization attempts to
mitigate this issue by forming histograms from localized data but
requires significant computational resources. Transform domain
techniques, such as 2-D Discrete Cosine Transform (DCT) or Fourier
transform, convert input images into the desired domain, though they
may introduce objectionable artifacts necessitating further processing.
In our proposed method, we employ the Perona-Malik filter for noise
removal from the enhanced image. The subsequent sections of this
paper are organized as follows: Section II provides the theoretical
background, including Weighted Histogram Equalization (WHE),
transform domain approaches, and the Perona-Malik filter. Section III
outlines the implementation of our proposed method. Section IV
presents experimental results and compares our method with
traditional Histogram Equalization. Finally, Section V concludes the
paper.
4. A Novel Approach for Contrast Enhancement Based on
Histogram Equalization
Authors: Hojat Yegane, Ali Ziaei, Amirhossein Rezaie.

Contrast enhancement techniques are widely utilized in image and


video processing to achieve a broader dynamic range. Among these
techniques, histogram modification-based algorithms are particularly
popular for achieving enhanced contrast. Histogram Equalization (HE)
stands out as one of the most commonly employed algorithms due to
its simplicity and effectiveness. HE works by uniformly distributing
pixel values, resulting in an image with a linear cumulative histogram.
HE finds applications in various fields such as medical image
processing, speech recognition, and texture synthesis, often in
conjunction with histogram modification techniques. However, HE
has two main disadvantages that affect its efficiency. Firstly, it assigns
one gray level to two neighboring gray levels with different intensities.
Secondly, it may lead to a "washed-out" effect when a majority of the
image comprises a particular gray level with higher intensity. Recent
research in image and video contrast enhancement has yielded
advancements aimed at overcoming these limitations. For instance,
Mean Preserving Bi-Histogram Equalization (BHE) addresses
brightness preservation issues by separating the input histogram into
two parts based on the input mean before equalizing them
independently. Another notable improvement is Dualistic Sub-Image
Histogram Equalization (DSIHE), which divides the histogram into
segments based on entropy and applies histogram equalization to each
segment separately. These advancements demonstrate ongoing efforts
to refine contrast enhancement methods for improved performance
and applicability.
EXISTING TECHNOLOGY

1. Registration and Fusion:


Image registration aligns multiple images of the same patient or anatomical
region to facilitate comparison and analysis. Fusion techniques integrate
information from multiple imaging modalities (e.g., MRI, CT, PET) to create
composite images with enhanced diagnostic value. Registration and fusion
methods often involve complex algorithms like rigid or deformable registration
and multi-modal image fusion.

2. Inpainting and Interpolation:


In cases where portions of the image are missing or corrupted, inpainting and
interpolation techniques are employed to fill in the missing regions. These
methods use surrounding information to estimate and reconstruct the missing
portions. Various interpolation algorithms like bilinear interpolation, bicubic
interpolation, and spline interpolation are commonly utilized.
• Inpainting involves reconstructing missing or damaged portions of an
image based on the available surrounding information. It typically employs
algorithms that analyze the structure and texture of the image to estimate the
content of the missing regions. Inpainting algorithms can be classified into
various categories, including patch-based methods, texture synthesis
approaches, and deep learning-based techniques. These methods are widely
used in image restoration tasks, such as removing unwanted objects from
photographs or repairing damaged areas in historical images.
• Interpolation, on the other hand, is a more general technique used to estimate values of pixels at unknown locations within an image. It is commonly employed in resizing or scaling operations, where the goal is to generate a higher-resolution image from a lower-resolution input. A brief MATLAB sketch of both ideas is given below.
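The following is a minimal, illustrative sketch (not taken from the project code) of the two ideas above, using the Image Processing Toolbox functions regionfill for inpainting and imresize for interpolation-based upscaling; 'sample.png' is a placeholder file name and the masked region is hypothetical.

% Illustrative sketch: inpaint a masked region, then upscale by interpolation
I = imread('sample.png');              % placeholder input image
if size(I, 3) == 3
    I = rgb2gray(I);                   % regionfill expects a 2-D grayscale image
end

% Inpainting: fill a rectangular missing region by smooth inward interpolation
holeMask = false(size(I));
holeMask(60:120, 80:160) = true;       % hypothetical damaged region
filled = regionfill(I, holeMask);

% Interpolation: estimate new pixel values while doubling the resolution
upscaled = imresize(I, 2, 'bicubic');  % 'bilinear' or 'nearest' also work

figure;
subplot(1, 3, 1); imshow(I);        title('Original');
subplot(1, 3, 2); imshow(filled);   title('Region filled (inpainting)');
subplot(1, 3, 3); imshow(upscaled); title('2x bicubic interpolation');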

3. Artifact Correction Techniques:


Various artifacts can degrade medical images, including metal artifacts in MRI,
beam hardening artifacts in CT, and streak artifacts in X-ray images. Advanced
correction techniques, such as metal artifact reduction algorithms (MAR) in
MRI or CT, are used to mitigate these artifacts and improve image quality.

4. Patch-Based Approaches:
Patch-based techniques divide the image into smaller patches and use
information from similar patches in the image to reconstruct missing regions.
These methods are effective when there are similar structures or textures present
in the image.

5. Deep Learning-Based Approaches:


With recent advancements in deep learning, convolutional neural networks
(CNNs) have been increasingly used for image enhancement tasks. These
approaches learn complex mappings from input images to desired outputs,
enabling them to adaptively enhance image quality based on training data. Deep
learning-based methods have shown promising results in various medical
imaging applications, including denoising, super-resolution, and contrast
enhancement.
Merits:
• Can learn complex mappings from input images to desired outputs,
adaptively enhancing image quality.
• Have shown promising results in various medical imaging tasks, including
denoising and super-resolution.
Demerits:
• Require large amounts of training data and computational resources for
training deep neural networks.
• Lack of interpretability compared to traditional image processing techniques,
making it challenging to understand the underlying reasons for their
decisions.
Causes of reduced Quality of Medical Images

• Motion artifacts
• Image processing errors
• Equipment malfunction
• Patient anatomy
• Inadequate imaging parameters

Corresponding correction techniques: Segmentation, Blending, Registration, Deconvolution, Artifact Removal, Machine Learning.

*Registration: Align multiple images of the same patient or different imaging modalities
*Deconvolution: Enhance image resolution
*Machine learning: U-Net, CNN

METHODOLOGY

2.1 Collecting Images from DATASET

In the process of collecting medical images for this project, several crucial steps ensure the integrity, relevance, and ethical compliance of the data. Firstly, select a dataset that aligns with the research question, considering factors like size, relevance, and the availability of required image types (e.g., X-rays, MRI scans). Ensure legal permission for dataset use, verify image quality, and standardize the format.
• Define Criteria: Determine the specific criteria for selecting images
based on your project requirements.
• Visual Inspection: Review the sampled images visually to assess their
quality and relevance.
• Validation: Validate the selected images against your predefined criteria
to ensure they meet the desired standards
• Annotation: If necessary, annotate the selected images with relevant
labels or annotations for further analysis or machine learning tasks
• Storage and Organization
• Backup and Version Control
• Ethical Considerations

Figure 1: Two collected images


2.2 Masking of the Required Portion

In medical image analysis, masking refers to the process of isolating


or highlighting specific regions or structures within an image for
further analysis or visualization. This technique is particularly useful
for focusing on areas of interest, such as abnormalities, organs, or
anatomical landmarks.

Figure 2: Images Post Masking

2.3 PREPROCESSING

Preprocessing is a crucial step in medical image analysis aimed at


enhancing the quality, consistency, and usability of images before
further analysis or interpretation.

Preprocessing involves various methods which can be used, a brief


explanation has been discussed below:
2.3.1 Intensity Normalization:
Intensity normalization scales pixel values within medical images
to a standardized range, improving consistency for quantitative
analysis. It ensures that pixel intensities are comparable across
images acquired with different imaging parameters or scanners.

Figure 3: Normalized Image
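As an illustration, a minimal MATLAB sketch of min-max intensity normalization (mirroring the approach used in Appendix 1; 'scan.png' is a placeholder file name) is given below.

% Min-max normalization sketch: scale pixel values to the range [0, 1]
I = double(imread('scan.png'));                               % work in double precision
normalizedImage = (I - min(I(:))) ./ (max(I(:)) - min(I(:))); % scale to [0, 1]
imshow(normalizedImage); title('Normalized Image');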

2.3.2 Noise Reduction:


Noise reduction techniques aim to remove or reduce unwanted
artifacts or distortions present in medical images. Common
methods include Gaussian blur, median filtering, and bilateral
filtering.
Figure 4: (a) noise-reduced image (Gaussian filter), (b) original image histogram, (c) noise-reduced image histogram, (d) difference image (histograms plotted against pixel intensity)

In the original image histogram (left), the wider spread of pixel intensities suggests a higher presence of noise. By contrast, the noise-reduced image histogram (right) appears narrower, indicating a more
concentrated distribution of pixel intensities. This suggests that the
noise reduction process has successfully removed or suppressed some
of the random variations in the original image.
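For illustration, a short sketch comparing the three filters mentioned above is shown below (assuming a grayscale input image; 'scan.png' is a placeholder path and the parameter values are illustrative, not tuned).

% Noise-reduction sketch: Gaussian, median, and bilateral filtering
I = im2double(imread('scan.png'));      % assumed grayscale input
gaussSmoothed  = imgaussfilt(I, 1.0);   % Gaussian blur with sigma = 1
medianSmoothed = medfilt2(I, [3 3]);    % 3x3 median filter (2-D images only)
bilatSmoothed  = imbilatfilt(I);        % edge-preserving bilateral filter

montage({I, gaussSmoothed, medianSmoothed, bilatSmoothed}, 'Size', [1 4]);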
2.3.3 Contrast Enhancement:
Contrast enhancement techniques improve the visibility of
structures within medical images by adjusting the intensity
distribution. Histogram equalization, contrast stretching, and
adaptive histogram equalization are commonly used for this
purpose.

Figure 5: (a) original gray image, (b) equalized gray image, (c) histogram of original image, (d) histogram of equalized image

Histogram equalization enhances image contrast by adjusting the


distribution of pixel intensities, resulting in improved visibility of
details. The process involves transforming the histogram to spread out
pixel values, making brighter areas more prominent. This technique is
particularly useful in medical imaging, such as MRIs, where
improved contrast aids in better visualization of anatomical structures.
Before Equalization:

This histogram might show a peak at a specific intensity value,


indicating a large portion of the image has a similar brightness. There could be limited spread across the entire intensity
range, suggesting low contrast between different tissues.

After Equalization:

This histogram will ideally be more spread out across the entire
intensity range. Each intensity value should have a more balanced
representation, indicating improved contrast between different
brain regions.
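A brief sketch contrasting the three enhancement methods named above is given below (assuming a grayscale input; 'scan.png' is a placeholder path).

% Contrast-enhancement sketch: histogram equalization, CLAHE, contrast stretching
I = imread('scan.png');                  % assumed grayscale (e.g., an MRI slice)
globalEq   = histeq(I);                  % global histogram equalization
adaptiveEq = adapthisteq(I);             % adaptive equalization (CLAHE), limits over-enhancement
stretched  = imadjust(I, stretchlim(I)); % simple linear contrast stretching

figure;
subplot(2, 2, 1); imshow(I);          title('Original');
subplot(2, 2, 2); imshow(globalEq);   title('Histogram equalization');
subplot(2, 2, 3); imshow(adaptiveEq); title('Adaptive equalization (CLAHE)');
subplot(2, 2, 4); imshow(stretched);  title('Contrast stretching');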

2.3.4 Resizing:
Resizing adjusts image dimensions to a desired size, crucial in standardizing resolution and aspect ratio across datasets. It ensures consistency in preprocessing and model training, reducing variability in image sizes. Various interpolation methods preserve image quality while resizing, including nearest-neighbour, bilinear,
or cubic. Resizing may also involve cropping or padding images
for specific input requirements. Overall, resizing facilitates
compatibility across different stages of the image processing
pipeline, promoting efficient analysis and model deployment.
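A short sketch of resizing with different interpolation methods, plus an optional centered crop, is shown below ('scan.png' and the 256x256 target size are placeholders, not values from the report).

% Resizing sketch: fixed target size with different interpolation methods
I = imread('scan.png');
targetSize  = [256 256];                          % hypothetical standard size
nearestOut  = imresize(I, targetSize, 'nearest');
bilinearOut = imresize(I, targetSize, 'bilinear');
bicubicOut  = imresize(I, targetSize, 'bicubic');

% Optional: crop the largest centered square before resizing (avoids distortion)
side    = min(size(I, 1), size(I, 2));
rect    = [floor((size(I,2)-side)/2) + 1, floor((size(I,1)-side)/2) + 1, side - 1, side - 1];
cropped = imresize(imcrop(I, rect), targetSize);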
2.3.5 Augmentation:
Augmentation is a data enhancement technique used in machine
learning, including medical image analysis. It involves applying
transformations like rotation, flipping, and scaling to increase
dataset diversity. By exposing the model to varied data,
augmentation helps prevent overfitting and improves
generalization. In medical imaging, it's valuable for tasks like
classification and segmentation, where diverse image appearances
are common.

Figure 6: Augmented Images
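As an illustrative sketch, rotations, flips, and rescaling of the kind described above can be generated as follows ('scan.png' is a placeholder path; the transformations shown are examples, not the exact set used in the report).

% Augmentation sketch: rotations, flips, and a rescale to diversify the dataset
I = imread('scan.png');
augmented = cell(1, 6);
angles = [90 180 270];
for k = 1:numel(angles)
    augmented{k} = imrotate(I, angles(k), 'bilinear', 'crop');     % size-preserving rotation
end
augmented{4} = flip(I, 2);                                         % horizontal flip
augmented{5} = flip(I, 1);                                         % vertical flip
augmented{6} = imresize(imresize(I, 0.5), [size(I,1) size(I,2)]);  % down/up scaling

montage(augmented);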


2.4 IMAGE FUSION AND ALPHA BLENDING

Image Fusion:
Image fusion involves combining multiple images acquired from
different sources or modalities, each potentially containing its own set
of errors or artifacts. By fusing these images, the errors present in one
image may be compensated for by the information from other images,
leading to a final fused image with reduced overall errors. For
example, if one image has noise artifacts while another has blur
artifacts, fusion techniques can combine the sharp details from one
image with the noise reduction from another, resulting in a fused
image with improved clarity and reduced noise.

Image Blending:
Image blending techniques can further refine the fused images by
seamlessly integrating them to create a visually coherent composite
image. Blending methods such as alpha blending or gradient domain
blending can be used to blend images with different errors, ensuring
smooth transitions and maintaining consistency across the composite
image. By blending images with different errors, you can effectively
combine their strengths while minimizing the impact of individual
errors, resulting in a final image that is visually appealing and
accurately represents the underlying data.
Equation that is used to implement these is given below with a brief
explanation:
blended_image = uint8((alphas(i) * double(image1) + alphas(i) * double(image2) + alphas(i) * double(image3) + alphas(i) * double(image4) + alphas(i) * double(image5)) / 5);

Blended Image (blended_image):


• This variable holds the resulting blended image after combining
multiple input images.

Blending Parameter (alphas(i)) and Input Images (image1, image2, image3, image4, image5):

• alphas(i) is a scalar value representing the blending parameter


for each input image.

• image1, image2, image3, image4, and image5 are the input


images to be blended. These images can be grayscale or color
images, but they must have the same dimensions.

Type Conversion:

• double(image1), double(image2), double(image3), double(image4), and double(image5) convert the pixel values of the input images to double precision.

• This conversion is necessary to prevent overflow during


calculations, especially when the blending parameters are close to
1.
Weighted Sum:

• The equation calculates a weighted sum of pixel values from all


input images.

• For each image, alphas(i) represents the weight applied to its


pixel values.

• By multiplying each image's pixel values with its corresponding


blending parameter (alphas(i)), the equation gives more weight
to images with higher blending parameters and less weight to
images with lower blending parameters.

Normalization:

• After obtaining the weighted sum of pixel values from all input
images, the sum is divided by the total number of images (5 in
this case).

• This step normalizes the blended image to ensure that the pixel
values are within the valid range (0 to 255 for uint8), suitable
for display and further processing.

Data Type Conversion:

• uint8() is used to convert the resulting blended image back to 8-


bit representation.
• This ensures that the pixel values of the blended image are
within the valid range for uint8 data type, making it suitable for
display and further processing.
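The same blending rule can be written as a small reusable helper. The function below is our own hedged sketch (the name blendWithAlpha and its interface are not part of the report's code); it reproduces the expression above for any number of equally sized input images.

% blendWithAlpha.m -- sketch of the blending equation as a helper function
function blended = blendWithAlpha(images, alpha)
    % images: cell array of equally sized images, alpha: scalar blending weight
    acc = zeros(size(images{1}));                 % accumulate in double precision
    for k = 1:numel(images)
        acc = acc + alpha * double(images{k});    % weighted sum of pixel values
    end
    blended = uint8(acc / numel(images));         % average and convert back to uint8
end

For example, blendWithAlpha({image1, image2, image3, image4, image5}, 0.9) gives the same result as the equation above with alphas(i) = 0.9.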

Figure 7: Images after blending (inputs Image 1 to Image 5; blended outputs shown for alpha = 0, 0.3, 0.6, 0.9, and 1.2)


SOFTWARE REQUIREMENTS
1. MATLAB:

MATLAB is a high-level programming language and interactive environment


widely used in scientific and engineering applications. It is utilized in this
project for implementing image processing algorithms and analyzing medical
images.

Version: The code provided was developed and tested using MATLAB version
X.XX.

Obtaining MATLAB: MATLAB can be obtained from MathWorks


(https://www.mathworks.com/) through purchasing a license or accessing it via
academic institutions or organizations that provide MATLAB licenses.

MATLAB is a programming environment designed for numerical computing


and visualization.

It provides an interactive platform for developing algorithms, analyzing data,


creating models, and visualizing results.

MATLAB supports matrix manipulations, plotting of functions and data,


implementation of algorithms, creation of user interfaces, and interfacing with
programs written in other languages.
You can obtain MATLAB from MathWorks, the company that develops
MATLAB, either through purchasing a license or accessing it via academic
institutions or organizations that provide MATLAB licenses.

2. Image Processing Toolbox:

The Image Processing Toolbox is an essential component for performing


various image processing tasks such as noise reduction, image enhancement,
and analysis. It provides a rich set of functions specifically designed for image
processing applications.

Version: The code relies on functions from the Image Processing Toolbox,
version X.XX.

Functionality: This toolbox facilitates operations such as applying filters,


histogram equalization, image blending, and computing image quality metrics
like PSNR and SSIM.

The Image Processing Toolbox is an add-on for MATLAB that provides a


comprehensive set of functions for processing, analyzing, and visualizing
images.

It includes functions for tasks such as image filtering, segmentation,


morphological operations, feature extraction, image registration, and image
enhancement.

The toolbox is essential for tasks involving image manipulation, as demonstrated in the provided code, such as applying filters, performing
histogram equalization, rotating images, blending images, and computing
metrics like PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural
Similarity Index).

Image Processing Toolbox is usually installed separately from MATLAB and


requires a license to use.

3. Additional Toolboxes (if applicable):

Depending on the specific functions used in the project code, additional


toolboxes beyond the Image Processing Toolbox may be required. These could
encompass toolboxes for signal processing, computer vision, statistics, or other
domains. Examples of functions that may belong to different toolboxes or
necessitate additional installations include imgaussfilt, imwarp, imref2d, psnr,
rms, and ssim.

3.1 Image Processing Tool Box

• imgaussfilt:

Functionality: The imgaussfilt function is utilized for Gaussian filtering,


a fundamental technique for noise reduction and image smoothing.

• imwarp:

Functionality: With the imwarp function, we perform geometric


transformations on images, including rotation, scaling, translation, and
affine transformations.

• imref2d:
Functionality: The imref2d function is crucial for creating 2-D spatial
referencing objects, enabling the association of spatial information with
images, such as defining pixel coordinates.
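A small usage sketch of these three functions is given below (the 30-degree angle and the 'scan.png' path are illustrative; the input is assumed grayscale).

% Usage sketch for imgaussfilt, imwarp, and imref2d
I = imread('scan.png');                                % placeholder path

smoothed = imgaussfilt(I, 2);                          % Gaussian smoothing, sigma = 2

theta   = 30;                                          % illustrative rotation angle (degrees)
tform   = affine2d([cosd(theta) sind(theta) 0; -sind(theta) cosd(theta) 0; 0 0 1]);
outView = imref2d([size(I, 1) size(I, 2)]);            % keep the original spatial extent
rotated = imwarp(I, tform, 'OutputView', outView);     % geometric transformation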

3.2 Inbuilt Matlab Functions

• psnr (Peak Signal-to-Noise Ratio):

Description: The psnr function calculates the Peak Signal-to-Noise Ratio between two images, providing a quantitative measure of image quality.

• rms (Root Mean Square):

Description: The rms function computes the Root Mean Square value of
an image or a part of an image, representing the average magnitude of
pixel intensities.

• ssim (Structural Similarity Index):

Description: The ssim function calculates the Structural Similarity Index


between two images, indicating the similarity in structural information.
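A short usage sketch of these metrics is shown below (the variable names and the blurred "distorted" image are ours, used only to have two images to compare; any two equally sized images can be substituted).

% Usage sketch for the quality metrics
ref       = imread('reference.png');                     % placeholder reference image
distorted = imgaussfilt(ref, 2);                         % any degraded or processed version

psnrValue = psnr(distorted, ref);                        % Peak Signal-to-Noise Ratio (dB)
ssimValue = ssim(distorted, ref);                        % Structural Similarity Index
rmsValue  = rms(double(ref(:)) - double(distorted(:)));  % RMS of the pixel-wise difference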

4. Dataset Acquisition and Management

4.1 Kaggle

Kaggle is a platform for data science and machine learning competitions,


datasets, and notebooks. The dataset utilized in this project was sourced from
Kaggle. Accessing and managing the dataset requires the following:
Kaggle Account:

A Kaggle account is necessary for accessing datasets and participating in


competitions. Users can sign up for a free account on the Kaggle website.

Kaggle API:

The Kaggle API enables programmatic access to datasets, competitions, and


other Kaggle resources. It simplifies the process of downloading datasets
directly into the local environment.

Software Requirements:

To utilize the Kaggle API, ensure that Python is installed on your system along
with the kaggle Python package. The kaggle package can be installed via pip,
the Python package manager.

4.2 Kenhub

Kenhub is a platform that provides educational resources for medical students


and professionals, including anatomical models, quizzes, and articles. The
dataset obtained from Kenhub may require specific tools or software for
processing and analysis.

Access to Kenhub Dataset:

Ensure that you have obtained permission or subscribed to Kenhub's services to


access the dataset legally.
Data Format:

Verify the format of the dataset provided by Kenhub. It may be in standard file
formats such as CSV, DICOM, or proprietary formats specific to medical imaging.

Software Requirements:

Depending on the dataset's format and content, you may need specialized
software for medical image viewing, analysis, or processing. Common software
includes DICOM viewers, medical image analysis software, or programming
environments like MATLAB or Python with libraries such as NumPy, SciPy,
and scikit-image for image processing tasks.

5. DICOM Viewer Software

DICOM (Digital Imaging and Communications in Medicine) is the standard


format for the communication and management of medical imaging information.
Viewing DICOM images requires specialized DICOM viewer software that is
capable of interpreting and displaying medical images in the DICOM format.

Software Requirement:

DICOM Viewer:

A DICOM viewer software is necessary for viewing medical images stored in


the DICOM format. This software provides functionalities such as image
viewing, manipulation, annotation, measurement, and analysis.
3. ALGORITHMS & FLOW-CHART

3.1 Masking of the medical image

3.1.1 Pseudocode

% Load your image
imagepath = 'original image path';
originalImage = imread(imagepath);

% Define a mask
mask = false(size(originalImage, 1), size(originalImage, 2));
mask(50:150, 100:200) = true;

% Apply the mask
maskedImage = originalImage;
maskedImage(repmat(mask, [1, 1, size(originalImage, 3)])) = 0;

imshow(maskedImage);
3.1.2 Algorithm

1. Load the Image:

• Read the medical image from the specified path using an


appropriate function (e.g., imread in MATLAB).

• Store the loaded image in a variable named originalImage.

2. Define the Mask:

• Create a new image (mask) with the same dimensions (height


and width) as the original image.

• Initialize the mask with all pixels set to a value representing the
background (e.g., logical false for binary masks).

• Define the region of interest (ROI) by setting the corresponding


pixels within the mask to a value representing the foreground
(e.g., logical true). This can be achieved using various
techniques depending on the desired mask shape:

• Rectangular mask: Specify a bounding box using row and


column indices (similar to the code example).

• Freehand mask: Use interactive tools to draw the desired shape


on the image.
• Automatic segmentation: Implement algorithms like thresholding, edge detection, or region-based segmentation to identify the ROI (a brief thresholding sketch is given after this algorithm).

3. Apply the Mask:

• Create a new image variable (maskedImage) and initialize it


with a copy of the original image.

• Perform element-wise multiplication between the original image


and a replicated version of the mask. The replicated mask should
have the same dimensions as the original image (including
channels for color images). This ensures the masking operation
applies to all channels.

• Pixels corresponding to true values in the replicated mask will


remain unchanged, while pixels corresponding to false values
will be set to a background value (e.g., zero for thresholding).
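As a hedged illustration of the "automatic segmentation" option, the mask can also be derived by global thresholding instead of a fixed rectangle. The sketch below uses Otsu's method ('scan.png' is a placeholder path) and then applies the mask in the same way as the code above, zeroing the background.

% Thresholding-based mask sketch (alternative to the rectangular mask)
originalImage = imread('scan.png');              % placeholder path
gray = originalImage;
if size(gray, 3) == 3
    gray = rgb2gray(gray);                       % thresholding needs a grayscale image
end
level = graythresh(gray);                        % Otsu's global threshold
mask  = imbinarize(gray, level);                 % foreground pixels = true

maskedImage = originalImage;
maskedImage(repmat(~mask, [1, 1, size(originalImage, 3)])) = 0;  % zero the background
imshow(maskedImage);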
3.1.3 Flowchart: Masking of a Medical Image

Start → Initialize Environment → Load Images → Create Masks → Define Regions → Apply Masks → Display Results → End
Overall pre-processing and fusion pipeline:

Image 1, Image 2 → Normalization → Noise Removal → Augmentation → Contrast Enhancement → Resize (pre-processing) → Image Fusion → Image Blending → Composite/Resultant Image → Quality Evaluation (post-processing)
Block Diagram
Img1-Img5 → convert each to double → multiply each by alpha → Adder → 1/5 Divider Block → Blended Image
3.2 Pre-Processing

3.2.1 Algorithms

Normalization:

• Read the input image.


• Convert the image to double precision.
• Normalize the pixel values in the image between 0 and 1.
• Display the normalized image.

Noise Removal:

• Read the input image.


• Apply a Gaussian filter to the image for noise reduction.
• Adjust the sigma value based on the noise level.
• Display the noise-reduced image.
Enhancement:
• Read the input image.
• Convert the image to grayscale.
• Perform histogram equalization on the grayscale image.
• Display the equalized image and its histogram.

Augmentation:
• Read the input image.
• Define a mask on the image.
• Apply the mask to create a masked image.
• Normalize the masked image.
• Augment the image by rotating it multiple times.
• Display the original and augmented images.

3.2.2 Pseudocodes

1. Normalization:
• Input: Image path
• Output: Normalized image
• Load the image from the specified path
• Convert the image to double precision
• Calculate the minimum and maximum pixel values in the image
• Normalize the pixel values in the image between 0 and 1
• Return the normalized image

2. Noise Removal:
• Input: Image path, Sigma value for Gaussian filter
• Output: Noise-reduced image
• Load the image from the specified path
• Apply a Gaussian filter to the image for noise reduction with the
specified sigma value
• Return the noise-reduced image

3. Enhancement:
• Input: Image path
• Output: Equalized image and its histogram
• Load the image from the specified path
• Convert the image to grayscale
• Perform histogram equalization on the grayscale image
• Return the equalized image and its histogram

4. Augmentation:
• Input: Image path, Number of augmented images
• Output: Augmented images
• Load the image from the specified path
• Normalize the masked image
• Augment the image by rotating it multiple times
• Return the augmented images
3.2.3 FLOW CHART

START

Normalize Image: Load image → Convert to double precision → Calculate the min/max → Normalise pixel values → Output: Normalized image

Noise Removal: Load image → Apply Gaussian filter → Output: Noise-reduced image

Contrast Enhancement: Load image → Perform histogram equalisation → Output: Equalized image and histogram

Augmentation: Load image → Normalize the image → Augment the image by rotating at different angles → Output: Augmented images

END
3.3 Image Fusion and Blending

3.3.1 Algorithm

1. Read Input Images:


• Read the input images from the specified file paths.
• Store each image in separate variables: image1, image2, image3, image4, image5.
2. Resize Images:
• Determine the minimum height and width among all input
images.
• Resize each image to the dimensions of the smallest image.
• Store the resized images back in their respective variables.
3. Define Alpha Values Range:
• Define a range of alpha values from 0 to 1.5 with a step size of
0.1.
• These alpha values will control the blending strength in the
alpha blending process.
4. Create a Figure:
• Create a new figure to display the blended images for different
alpha values.
5. Alpha Blending Loop:
• For each alpha value in the range:
• Combine all input images using alpha blending with the current
alpha value.
• Use a weighted sum of the pixel values in each image, where
the weight is determined by the alpha value.
• Display the blended image in a subplot of the figure.
• Repeat this process for each alpha value.
6. Display Input Images:
• Display each input image in separate subplots of the figure for
comparison.
• Set titles for each subplot to indicate the corresponding image.
7. End

3.3.2 FLOW CHART

START → Read Input Images → Resize Images → Define Alpha Values → Create Figure → Alpha Blending Loop → Display Output Images → End
3.3.3 PSEUDOCODE

1. Read Input Images:


- Read the input images from specified file paths.
- Store each image in separate variables.
2. Resize Images:
- Determine the minimum height and width among all input images.
- Resize each image to the dimensions of the smallest image.
3. Define Alpha Values:
- Define a range of alpha values from 0 to 1.5 with a step size of 0.1.
4. Create Figure:
- Create a new figure for displaying images.
5. Display Input Images:
- Display each input image in separate subplots of the figure for
comparison.
6. Alpha Blending Loop:
- For each alpha value in the range:
- Combine all input images using alpha blending with the current
alpha value.
- Display the blended image in a subplot of the figure.
7. End.
4. QUALITY ANALYSIS

Root Mean Square (RMS) values, Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM) are excellent methods for quantitatively assessing the quality of images.

4.1 Root Mean Square (RMS):

• RMS here measures the average pixel-wise deviation between two images (for example, a reference image and the blended image).
• To calculate RMS:
■ Compute the squared difference between corresponding pixel values in the two images.
■ Take the average of these squared differences.
■ Take the square root of the average to get the RMS value.
• Higher RMS values indicate greater deviation between the two images, which may suggest poorer image quality.

\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(I_i - I_i'\right)^2}

I_i = intensity value of pixel i in the 1st image
I_i' = intensity value of pixel i in the 2nd image
N = total number of pixels
4.2 Peak Signal-to-Noise Ratio (PSNR):

• PSNR measures the quality of an image by comparing it to a


reference image.
• It quantifies the ratio between the maximum possible power of a
signal and the power of corrupting noise.
• PSNR is often calculated in decibels (dB).
• Higher PSNR values indicate higher image quality.
• To calculate PSNR:
■ Compute the mean squared error (MSE) between the original
and distorted images.
■ Take the logarithm of the maximum possible pixel value
squared.
■ Subtract the logarithm of the MSE from the logarithm of the
maximum possible pixel value squared.
■ Multiply the result by 10 to obtain PSNR in decibels.

\mathrm{PSNR} = 10 \log_{10}\left(\frac{\mathrm{max}^2}{\mathrm{MSE}}\right)

max = maximum possible pixel value
MSE = mean squared error between the two images
4.3 Structural Similarity Index (SSIM):

• SSIM compares the structural similarity between two images.


• It considers luminance, contrast, and structure similarity.
• SSIM values range from -1 to 1, where 1 indicates perfect
similarity.
• Higher SSIM values indicate higher image quality.
• To calculate SSIM, you can use MATLAB's ssim function or
implement the algorithm manually.

\mathrm{SSIM}(I, I') = \frac{\left(2\mu_I \mu_{I'} + c_1\right)\left(2\sigma_{II'} + c_2\right)}{\left(\mu_I^2 + \mu_{I'}^2 + c_1\right)\left(\sigma_I^2 + \sigma_{I'}^2 + c_2\right)}

where \mu_I and \mu_{I'} are the mean intensities, \sigma_I^2 and \sigma_{I'}^2 the variances, \sigma_{II'} the covariance of the two images, and c_1, c_2 are small stabilizing constants.

Here is how we have incorporated these metrics into our quality-checking process (a short MATLAB sketch follows this list):
■ Compute RMS values for each image and analyze them to
assess the overall deviation from the mean pixel value.
■ Calculate PSNR values between the original and distorted
images to quantify the level of noise and distortion.
■ Use SSIM to compare the structural similarity between images,
indicating the presence of any structural distortions or artifacts.
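The sketch below implements these checks directly from the formulas above (I1 and I2 are assumed to be two equally sized uint8 images, e.g. a reference image and the blended image; the variable names are ours).

% Quality-check sketch: RMS and PSNR from the formulas above, built-in SSIM
A = double(I1(:));
B = double(I2(:));
N = numel(A);

rmsValue  = sqrt(sum((A - B).^2) / N);                   % RMS of the pixel-wise difference
mseValue  = sum((A - B).^2) / N;                         % mean squared error
maxVal    = double(intmax('uint8'));                     % maximum possible pixel value (255)
psnrValue = 10 * log10(maxVal^2 / mseValue);             % PSNR in decibels
ssimValue = ssim(I2, I1);                                % structural similarity (toolbox function)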
TABLE: PSNR, RMS, and SSIM values of the blended images for each alpha

Alpha PSNR RMS SSIM


0 7.4131 108.6135 0.51813
0.1 8.1511 99.7665 0.51981
0.2 8.9491 91.009 0.56595
0.3 9.8126 82.3971 0.62036
0.4 10.7513 73.9562 0.67612
0.5 11.7733 65.7465 0.7288
0.6 12.8683 57.4598 0.77494
0.7 14.0463 50.6088 0.81381
0.8 15.2434 44.093 0.84431
0.9 16.375 38.7072 0.86706
1.0 17.236 35.0542 0.88247
1.1 17.630 33.4997 0.89126
1.2 17.4897 34.0451 0.89479
[Plots of PSNR, RMS, and SSIM values against the blending parameter alpha.]

Figure 8: Graphical representation of PSNR, RMS & SSIM

PSNR (Peak Signal-to-Noise Ratio):


• The PSNR values generally increase as the alpha value increases,
indicating that the quality of the blended image improves with
higher alpha values.
• This suggests that blending images with higher alpha values
results in less noise and better preservation of signal quality.
RMS (Root Mean Square):
• The RMS values decrease as the alpha value increases, implying
that the deviation of pixel values from their mean decreases with
higher alpha values.
• Lower RMS values indicate better consistency and uniformity in
pixel values across the blended image.
SSIM (Structural Similarity Index):
• The SSIM values generally increase with higher alpha values,
indicating better structural similarity between the original and
blended images.
• This suggests that blending images with higher alpha values
preserves the structural features and details present in the original
images.

Overall Conclusion:
• Increasing the alpha value in the blending process leads to
improvements in image quality, as evidenced by higher PSNR
values, lower RMS values, and higher SSIM values.
• Therefore, selecting higher alpha values for blending results in
better-quality blended images with reduced noise, more consistent
pixel values, and improved structural similarity to the original
images.
RESULTS

[Figure (MATLAB output 1): the original image together with the masked, normalized, noise-removed (Gaussian filter), contrast-enhanced, and augmented images produced by the pre-processing script.]

[Figure (MATLAB output 2): the five input images, the blended images for alpha values from 0 to 1.2, and the resulting PSNR, RMS, and SSIM curves plotted against alpha.]


CONCLUSION

In this project, we introduce an innovative approach to image


restoration, merging image fusion and alpha blending techniques to
overcome the limitations of single-image processing. By harnessing
the complementary information from multiple sources, our method
produces composite images rich in detail and clarity. Preprocessing
steps are applied to prepare the input images, followed by image
fusion to integrate their information effectively. Subsequently, alpha
blending refines the fused image, offering precise control over the
contribution of each source image at the pixel level. This fine-tuned
control enhances the final image by accentuating specific features,
mitigating artifacts, and potentially yielding more visually compelling
and application-specific results.

Looking ahead, future endeavors will focus on delving into advanced


image fusion algorithms and exploring novel techniques for alpha
channel generation. These refinements aim to further elevate the
efficacy and versatility of the image restoration process. By
continually innovating and refining our methodologies, we aspire to
advance the state-of-the-art in image restoration, ultimately enhancing
the quality and utility of medical imaging and other domains reliant
on high-fidelity visual data.
FUTURE SCOPE

1. Real-time Medical Image Fusion for Surgical Navigation:


The project presents an opportunity for future advancements in real-time
medical image fusion for surgical navigation systems. By extending the current
alpha blending techniques, the project can contribute to the development of
innovative solutions that seamlessly integrate pre-operative imaging data, such
as MRI or CT scans, with intra-operative images, such as endoscopic or
laparoscopic views. This integration enables surgeons to benefit from enhanced
visualization and guidance during minimally invasive procedures, facilitating
better decision-making and improved patient outcomes.

Future research in this area may involve optimizing alpha blending algorithms
to achieve low-latency processing, ensuring real-time image fusion capabilities
during surgical procedures. Additionally, incorporating advanced registration
techniques into the image fusion process can help achieve accurate alignment of
pre-operative and intra-operative images, enhancing the reliability and
effectiveness of surgical navigation systems.

Furthermore, integrating the enhanced alpha blending techniques with existing


surgical navigation platforms or developing standalone navigation systems with
built-in image fusion capabilities can provide surgeons with intuitive interaction
and visualization tools. Surgeons can interactively manipulate fused images,
adjust transparency levels, and explore different views of the patient's anatomy
during surgery, thereby enhancing surgical precision and efficiency.

Overall, the future scope of real-time medical image fusion for surgical
navigation represents a natural extension of the current project, leveraging the
foundational concepts and techniques of alpha blending to address critical
challenges in surgical navigation and improve patient care in minimally invasive surgery.
2. AI-driven Assistive Technologies for Image-guided
Interventions:

Incorporating AI-driven assistive technologies for image-guided


interventions stands as a promising avenue for future expansion of the
project. By integrating advanced image processing techniques with
artificial intelligence algorithms, the project can potentially develop
innovative solutions aimed at enhancing surgical precision, improving
patient outcomes, and streamlining clinical workflows. Such
technologies could include automated image analysis algorithms for
real-time feedback, intelligent guidance systems for surgical
navigation, and personalized treatment planning tools based on
machine learning models. This future scope aligns closely with the
project's objectives of leveraging alpha blending techniques to
enhance medical imaging applications, paving the way for
transformative advancements in healthcare technology and clinical
practice.

3. Interactive Augmented Reality (AR) for Education and


Training:

Integrating Interactive Augmented Reality (AR) for education and


training presents a compelling future scope for the project. By
leveraging alpha blending techniques within AR environments, the
project can facilitate immersive learning experiences and interactive
training simulations for medical professionals, students, and trainees.
These AR applications could include virtual anatomical structures
overlaid onto real-world scenes, interactive medical simulations, and
hands-on procedural training modules. Through intuitive interaction
and visualization tools, users can manipulate virtual objects, explore
anatomical structures from different perspectives, and practice
surgical procedures in a risk-free environment. This future direction
aligns with the project's focus on enhancing medical imaging
applications and has the potential to revolutionize medical education
and training practices, ultimately improving clinical proficiency and
patient care outcomes.
REFERENCES

■ Academia, "Enhancement Of Medical Images Using Image Processing In


Matlab," Academia.edu. [Online]. Available:
https://www.academia.edu/keypass/RXlHa3FGdDVZNOlpdzRyQOFvNGhia
kV6VjdvQ2E2MzFOcDhzVXBsckUxWT0tLXUvcFRQMENWQnRwZGtj
dk5kZlh1RlE9PQ==--
79fb690f8b534360cc32becb33463ba1336503e0/t/AqScy-RJq308i
whwaE/resource/work/50382431/Enhancement_Of_Medical_Images_Using
_Image_Processing_In_Matlab?email_work_card=reading-history.
[Accessed: 5 April 2024].

■ Academia, "Improving Diagnostic Viewing of Medical Images using


Enhancement Algorithms," Academia.edu. [Online]. Available:
https://www.academia.edu/keypass/RXlHa3FGdDVZNOlpdzRyQOFvNGhia
kV6VjdvQ2E2MzFOcDhzVXBsckUxWT0tLXUvcFRQMENWQnRwZGtj
dk5kZlh1RlE9PQ==--
79fb690f8b534360cc32becb33463ba1336503e0/t/AqScy-RJq308i
whwaE/resource/work/34212577/Improving_Diagnostic_Viewing_of_medi
cal_Images_using_Enhancement_Algorithms?email_work_card=reading
history. [Accessed: 5 April 2024].

■ Academia, "Paper1: Medical Image Enhancement Application Using


Histogram Equalization in Computational Libraries," Academia.edu.
[Online]. Available:
https://www.academia.edu/keypass/RXlHa3FGdDVZNOlpdzRyQOFvNGhia
kV6VjdvQ2E2MzFOcDhzVXBsckUxWT0tLXUvcFRQMENWQnRwZGtj
dk5kZlh1RlE9PQ==--
79fb690f8b534360cc32becb33463ba1336503e0/t/AqScy-RJq308i
whwaE/resource/work/95250335/Paper1_Medical_Image_Enhancement_ A
pplication_Using_Histogram_Equalization_in_Computational_Libraries?em
ail_work_card=title. [Accessed: 5 April 2024].
■ Academia, "A novel approach for contrast enhancement and noise removal
of medical images," Academia.edu. [Online]. Available:
https://www.academia.edu/keypass/anU3cXpMd3lHUmcwYVZwZUpsK2F
kS0lIZTB1a21GRzBuT3B4V0lLQkpTWT0tLTRueVBWQTVVZHdlei83a
ThBNGU4aFE9PQ==--
560c84222cle37f76676b951c34617b87c9af420/t/AqScy-RHQpyTN
CXhB3/resource/work/78960132/A_novel_approach_for_contrast_enhance
ment_and_noise_removal_of_medical_images?email_work_card=title.
[Accessed: 5 April 2024].

■ Academia, "A Novel Approach for Contrast Enhancement In Biomedical


Images Based on Histogram Equalization," Academia.edu. [Online].
Available:
https://www.academia.edu/keypass/aFVLbTI5Q080d3FjUk 1mYUgzTmxm
TlZ2SjVIby8xa05kSV pYTWNqcFNJaz0tLTNFTkI0TGZFSEFHcVlrR0pU
cDI5Ymc9PQ==--d5e83a87d3ef53c0368aee4c3c4ec2a7ead23d5d/t/ AqScy
RH4EWUm-
08mZd/resource/work/356379/A_Novel_Approach_for_Contrast_Enhance
ment_In_Biomedical_Images_Based_on_Histogram_Equalization?email_w
ork_card=title. [Accessed: 5 April 2024].

■ Image Dataset 1:
Kenhub, "Medical Imaging and Radiological Anatomy," Kenhub.com.
[Online]. Available:
https://www.kenhub.com/en/library/anatomy/medical-imaging-and
radiological-anatomy. [Accessed: 5 April 2024].

■ Image Dataset 2:
K. Mader, "SIIM Medical Images," Kaggle, Month Year. [Online].
Available: https://www.kaggle.com/datasets/kmader/siim-medical-images.
[Accessed: 5 April 2024].
Appendix 1

% Load your image
imagepath = 'D:\project out put image\multipleimage project\ogbraintumor - Copy.jpg';
originalImage = imread(imagepath);

% Define a mask
mask = false(size(originalImage, 1), size(originalImage, 2));
mask(50:150, 100:200) = true;

% Apply the mask
maskedImage = originalImage;
maskedImage(repmat(mask, [1, 1, size(originalImage, 3)])) = 0;

% Normalize the image
normalizedImage = double(maskedImage);
normalizedImage = (normalizedImage - min(normalizedImage(:))) / ...
    (max(normalizedImage(:)) - min(normalizedImage(:)));

% Apply Gaussian filter for noise reduction
sigma = 1.0; % Adjust sigma value based on noise level
smoothed_image = imgaussfilt(normalizedImage, sigma);

% Perform histogram equalization
equalizedImage = histeq(normalizedImage);

% Augmentation: Rotate the image manually
numAugmentedImages = 15; % Adjust the number of augmented images as needed
augmentedImages = cell(1, numAugmentedImages);
for i = 1:numAugmentedImages
    angle = randi([1, 360]); % Random rotation angle between 1 and 360 degrees

    % Rotate the image manually using interpolation
    augmentedImages{i} = rotateImage(normalizedImage, angle);
end

% Display images
figure;

% Original Image
subplot(5, 4, 1);
imshow(originalImage);
title('Original Image');

% Normalized Image
subplot(5, 4, 2);
imshow(normalizedImage);
title('Normalized Image');

% Noise-Removed Image (Gaussian Filter)
subplot(5, 4, 3);
imshow(smoothed_image);
title('Noise-Removed Image (Gaussian Filter)');

% Contrast-Enhanced Image
subplot(5, 4, 4);
imshow(equalizedImage);
title('Contrast-Enhanced Image');

% Augmented Images (a 5x4 grid provides enough subplot slots for all 15)
for i = 1:numAugmentedImages
    subplot(5, 4, i + 4);
    imshow(augmentedImages{i});
    title(['Augmented Image ' num2str(i)]);
end

function rotatedImage = rotateImage(image, angle)
% Rotate the image manually using interpolation

% Convert angle to radians
angleRad = deg2rad(angle);

% Compute rotation matrix
R = [cos(angleRad) -sin(angleRad); sin(angleRad) cos(angleRad)];

% Compute rotated image size
imageSize = size(image);
rotatedSize = round(max(abs(imageSize(1:2)*R), [], 2)');

% Compute padding to prevent cropping
padAmount = max(rotatedSize - imageSize(1:2), 0);
padAmount = padAmount(1:2);

% Apply rotation using interpolation
rotatedImage = imwarp(image, affine2d([R [0; 0]; 0 0 1]), ...
    'OutputView', imref2d(rotatedSize + padAmount));
end
Appendix 2

% Read the input images
image1 = imread('D:\project out put image\multipleimage project\normalizedBT 01.png');
image2 = imread('D:\project out put image\multipleimage project\normalizedBT 2.png');
image3 = imread('D:\project out put image\multipleimage project\normalizedBT 3.png'); % Provide the path to image3
image4 = imread('D:\project out put image\multipleimage project\normalizedBT 4.png'); % Provide the path to image4
image5 = imread('D:\project out put image\multipleimage project\normalizedBT 5.png'); % Provide the path to image5

% Resize images to the same dimensions
min_height = min([size(image1, 1), size(image2, 1), size(image3, 1), ...
    size(image4, 1), size(image5, 1)]);
min_width = min([size(image1, 2), size(image2, 2), size(image3, 2), ...
    size(image4, 2), size(image5, 2)]);
image1 = imresize(image1, [min_height, min_width]);
image2 = imresize(image2, [min_height, min_width]);
image3 = imresize(image3, [min_height, min_width]);
image4 = imresize(image4, [min_height, min_width]);
image5 = imresize(image5, [min_height, min_width]);

% Define alpha values range
alphas = 0:0.1:1.25;

% Initialize arrays to store PSNR, RMS, and SSIM values


psnr_values = zeros(size(alphas));
rms_values = zeros(size(alphas));
ssim_values = zeros(size(alphas));

% Create a figure for displaying images and plots


figure;

% Display the original images


subplot(5, 5, 1);
imshow(image1);
title('Image 1');
subplot(5, 5, 2);
imshow(image2);
title('Image 2');
subplot(5, 5, 3);
imshow(image3);
title('Image 3');
subplot(5, 5, 4);
imshow(image4);
title('Image 4');
subplot(5, 5, 5);
imshow(image5);
title('Image 5');
% Loop over each alpha value
for i = 1:numel(alphas)
    % Combine all images using alpha blending
    blended_image = uint8((alphas(i) * double(image1) + alphas(i) * double(image2) + ...
        alphas(i) * double(image3) + alphas(i) * double(image4) + ...
        alphas(i) * double(image5)) / 5);

    % Display the blended image
    subplot(5, 5, i + 5);
    imshow(blended_image);
    title(['Alpha = ' num2str(alphas(i))]);

    % Compute PSNR, RMS, and SSIM
    psnr_values(i) = psnr(image1, blended_image);
    rms_values(i) = rms(double(image1(:)) - double(blended_image(:)));
    ssim_values(i) = ssim(image1, blended_image);
end

% Plot PSNR, RMS, and SSIM values


subplot(5, 5, 22);
plot(alphas, psnr_values, 'o-');
title('PSNR Curve');
xlabel('Alpha');
ylabel('PSNR Value');
grid on;

subplot(5, 5, 23);
plot(alphas, rms_values, 'o-');
title('RMS Curve');
xlabel('Alpha');
ylabel('RMS Value');
grid on;

subplot(5, 5, 24);
plot(alphas, ssim_values, 'o-');
title('SSIM Curve');
xlabel('Alpha');
ylabel('SSIM Value');
grid on;
