
Practical Aspects of Computer Vision


(3171113)
LABORATORY MANUAL

SEMESTER VII

DEPARTMENT OF
ELECTRONICS AND COMMUNICATION
ENGINEERING
GOVERNMENT ENGINEERING COLLEGE- MODASA

CERTIFICATE

This is to certify that Mr. Dohi Parth Dipeshkumar, Enrolment No. 200160111007, of the seventh semester of B.E. has satisfactorily completed his one full semester of term work in "Practical Aspects of Computer Vision (3171113)" in partial fulfilment of the Bachelor of Electronics and Communication Engineering degree to be awarded by Gujarat Technological University.

Prof. Chandresh Parekh


Date: - …. /… /…….

Preface

The main motto of any laboratory/practical/field work is to enhance required skills as well as create
ability amongst students to solve real-time problems by developing relevant competencies in the
psychomotor domain. By keeping this in view, GTU has designed a competency outcome-based
curriculum for engineering degree programs where sufficient weightage is given to practical work.
It shows the importance of enhancement of skills amongst the students and it pays attention to
utilising every second allotted for practical amongst students, instructors and faculty members to
achieve relevant outcomes by performing the experiments rather than having merely study-type
experiments. It is a must for the effective implementation of a competency-focused outcome-based
curriculum that every practical is keenly designed to serve as a tool to develop and enhance relevant
competency required by the various industries among every student. These psychomotor skills are
very difficult to develop through traditional chalk-and-board content delivery methods in the
classroom. Accordingly, this lab manual is designed to focus on the industry-defined relevant
outcomes, rather than the old practice of conducting practicals to prove concepts and theories.

By using this lab manual, students can go through the relevant theory and procedure in advance before the actual performance, which creates interest and gives students a basic idea before the performance. This in turn enhances pre-determined outcomes amongst students. Each experiment in this manual begins with competency, industry-relevant skills, course outcomes as well as practical outcomes (objectives). The students will also learn the safety measures and necessary precautions to be taken while performing practicals.

This manual also provides guidelines to faculty members to facilitate student-centric lab activities
through each experiment by arranging and managing necessary resources so that the students
follow the procedures with required safety and necessary precautions to achieve the outcomes. It
also gives an idea of how students will be assessed by providing rubrics.

Practical Aspects of Computer Vision is an elective course which deals with developing the necessary practical skills among students to build computer vision applications. It provides a platform for students to demonstrate their skills in making the computer do what humans can do with their eyes and brain. Students also learn programming (coding) skills and the application of machine learning to an extent.

Utmost care has been taken while preparing this lab manual however there is always room for
improvement. Therefore, we welcome constructive suggestions for improvement of the content and
removal of errors, if any.

Prof. C. R. Parekh- G.E.C. Modasa



Practical – Course Outcome matrix

Course Outcomes (COs):


1. To comprehend both theoretical and practical aspects of the analysis of images with computers.
2. Implement algorithms for image feature detectors and descriptors.
3. Analyze various mathematical and geometrical transformations.
4. Implement algorithms for image matching and panoramas.
5. Implement algorithms for object detection, classification and tracking.
Sr. No.  Objective(s) of Experiment

0.  Vision, Mission, PO, PSO, PEO, CO
1.  To get familiarity with basic image handling and processing functions. (4 Hrs. = 2 Sessions)
2.  (a) To apply mathematical operations to process images.
    (b) To apply logical operations to process images.
    (c) To apply morphological operations to process images.
3.  To implement various filters for image processing. (4 Hrs. = 2 Sessions)
4.  (a) To apply various edge detectors on images and compare their results.
    (b) To apply the Harris corner detector and evaluate its performance.
5.  To apply the SIFT/SURF descriptor algorithm to image matching.
6.  To apply various geometric transformations on images and compare their effects.
7.  (a) To write codes for special image warping (SWIRL and WAVE) on an image.
    (b) To write a code for image registration.
8.  To create a panorama from two images.
9.  To implement un-calibrated stereo image rectification.
10. To retrieve an image from an image data set using content-based image retrieval.
11. To detect and count moving cars using image segmentation and the Gaussian Mixture Model.
12. To recognize text using Optical Character Recognition.
13. Mini Project (4 Hrs. = 2 Sessions)



Industry Relevant Skills

The following industry-relevant competencies are expected to be developed in the student by


undertaking the practical work of this laboratory.
1. Image processing & analysis.
2. Computer vision applications to real life.

Guidelines for Faculty Members


1. The teacher should provide the guidelines along with a demonstration of the practicals to the students, covering all features.
2. The teacher shall explain the basic concepts/theory related to the experiment to the students before starting each practical.
3. Involve all the students in the performance of each experiment.
4. The teacher is expected to share the skills and competencies to be developed in the students and ensure that the respective skills and competencies are developed in the students after the completion of the experimentation.
5. Teachers should give students the opportunity for hands-on experience after the demonstration.
6. The teacher may provide additional knowledge and skills to the students, even if not covered in the manual, when they are expected from the students by the concerned industry.
7. Give practical assignments and assess the performance of students based on the task assigned, to check whether it is as per the instructions or not.
8. The teacher is expected to refer to the complete curriculum of the course and follow the guidelines for implementation.

Instructions for Students


1. Students are expected to carefully listen to all the theory classes delivered by the faculty
members and understand the COs, content of the course, teaching and examination scheme,
skill set to be developed etc.
2. Students shall organize the work in the group and make a record of all observations.
3. Students shall develop programming skills as expected by industries.
4. Students shall attempt to develop related hands-on skills and build confidence.
5. Students shall develop the habit of evolving more ideas, innovations, skills etc. apart from those included in the scope of the manual.
6. Students shall refer to material freely available on the internet.
7. Students should develop a habit of submitting the experimentation work as per the schedule
and s/he should be well prepared for the same.

Common Safety Instructions for all sessions


1. Students are expected to use computer keyboards softly and make sure that wires are
connected securely before turning the computer on.
2. Do not use pirated software.

Disclaimer

This lab manual is prepared for academic purposes only and intended to be used by
undergraduate students of the EC branch.
This manual was prepared with licensed versions of MATLAB software (which is a registered
trademark of Mathworks Inc. USA) during my tenure at Government Engineering College,
Gandhinagar and Government Engineering College, Modasa. Students may use the student
version, which may have limited validity and reduced functionalities. OR students may also
use open-source programming languages such as Python.
Before taking this course, the students are expected to have some programming knowledge in
any programming language.
Most sample/demo codes given in this manual are taken from various online sources such as the website, videos and blogs of Mathworks Inc., free YouTube videos from computer vision/image processing enthusiasts and the codes submitted by my past students as their lab work/project work. All sample codes were run and found to work error-free on my laptop. However, MATLAB continuously updates its set of functions, and some functions used in the demo codes may not be supported in your version or may have been replaced by updated versions.
The sample codes are there for you to quickly test/visualize your understanding of the concept.
They may not be the best/optimum codes. With practice, you may write better codes than the
codes in this manual. Do write codes on your own rather than copying.
Although every care has been taken to make these codes error-free, there may be syntax errors
in some cases due to typing errors while preparing the manual. The instructors are supposed to
help/guide the students in such cases.
As this course is purely programming/software-based, we do not mention any
equipment/instrument list in any of the experiments.
In the theory part, we have mentioned only the major functions and their brief descriptions.
There are many other functions that one finds while reading the sample codes. The students
are encouraged to learn about them on their own, as they are an integral part of the
programming language.
Also, the procedure in software-based experiments is to think of logical solutions and write
codes. So, we do not mention any procedure/steps in this manual.
The instructors must see that every student uses his/her own logic and variable names, so that the codes of any two students are not the same.

Happy learning!

Index
(Progressive Assessment Sheet)

Sr. No. | Objective(s) of Experiment | Page No. | Date of performance | Date of submission | Marks | Sign. of Teacher with date | Remark

0.  Vision, Mission, PO, PSO, PEO, CO
1.  To get familiarity with basic image handling and processing functions. (4 Hrs. = 2 Sessions)
2.  (a) To apply mathematical operations to process images.
    (b) To apply logical operations to process images.
    (c) To apply morphological operations to process images.
3.  To implement various filters for image processing. (4 Hrs. = 2 Sessions)
4.  (a) To apply various edge detectors on images and compare their results.
    (b) To apply the Harris corner detector and evaluate its performance.
5.  To apply the SIFT/SURF descriptor algorithm to image matching.
6.  To apply various geometric transformations on images and compare their effects.
7.  (a) To write codes for special image warping (SWIRL and WAVE) on an image.
    (b) To write a code for image registration.
8.  To create a panorama from two images.
9.  To implement un-calibrated stereo image rectification.
10. To retrieve an image from an image data set using similarity.
11. To detect and count moving cars using image segmentation and the Gaussian Mixture Model.
12. To recognize text using Optical Character Recognition.
13. Mini Project (4 Hrs. = 2 Sessions)
Total

Experiment No: 1
Basic image handling and processing (4 Hrs. = 2 Sessions).

Date(s):

Competency and Practical Skills: Image processing

Relevant CO: CO1

Objectives: To get familiarity with basic image handling and processing functions.

Equipment/Instruments: Computer and MATLAB® (or Python) software. (Common for all practical sessions).

Theory:

The students are advised to go through the detailed description of all the following functions and use each of them at least once during their first lab practice.

Functions for image reading/writing and getting image information:

• imread: To read the image as a matrix of numbers from a specified location/path in your
computer or an in-built MATLAB image database.
• imshow: To display the matrix read with 'imread' as an image on the display device.
• figure: It creates a window that forces the current image to become visible above all other windows.
• title: It is used to give a title to the image in the figure.
• insertObjectAnnotation: This function is used to add annotations to the image.
• imfinfo: It gives information about the image.
• imcontour: Creates a contour plot of the image.

Functions for processing/manipulating image intensities:

• rgb2gray: It converts a color image to a gray scale or intensity image.


• im2bw: It converts a grey scale image to a black & white or binary image. This is also
a kind of thresholding operation.
• imbinarize: This is another way of converting a grayscale image into a binary image.


• imcomplement: To get a negative or complement of the image, use this function.


• imadjust: It is used to modify the contrast of an image.
• histogram: It shows the histogram of an intensity image. A histogram gives an idea
about the overall appearance of an image. For example, if the image is dark, then, most
pixels will have intensity values in the lower range of the histogram.
• histeq: This function is used to equalize intensity values in the image to give it an
appearance that is easy for human eyes to perceive.
• Power Law or Gamma Transform: Another type of image intensity transform. If r is the input image and s is the transformed image, then the following relation, applied to each pixel of the image, gives the power law transform of intensity values throughout the image: s = c*(r^γ), where c is a constant and γ (gamma) is the exponent. The students are encouraged to write a small piece of code to observe the power law transform.
• Log Transform: If r is the input image and s is the transformed image, then the following relation, applied to each pixel of the image, gives the log transform of the image: s = c*log10(1+r). The students are encouraged to write a small piece of code to observe the log transform.
• Gray Level Slicing: Also called Intensity Level Slicing. This technique is used to highlight a specific range of intensity values in an image. MATLAB does not have an in-built function readily available. Students are encouraged to write a code to perform this operation.
• Bit-plane Slicing: Each pixel in the image represents an intensity value which could be
considered as a binary number of n-bit. Instead of highlighting a range of intensity, we
could highlight the contribution made by each bit. This is Bit-plane slicing. MATLAB
does not have an in-built function readily available. Students are encouraged to write a
code to perform this operation.

Other useful operations:

• Plotting points and lines over images: In most computer vision tasks, we are interested in showing feature points, correspondences, detected objects, etc. in the image using points and lines. It is possible to plot/draw points, lines and geometrical shapes using simple commands in MATLAB. Students are encouraged to write small code for this. A useful command for plotting lines is 'line'. One can use 'impoly' to plot a polygon.


• Crop, Copy and paste regions: It is often required to crop, copy and paste part of an image from one location to another or from one image to another. 'imcrop' is a useful in-built MATLAB command to crop the image (a minimal sketch follows this list). Students can write code involving these tasks on their own.
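As a quick illustration of 'imcrop', here is a minimal sketch (it assumes the built-in 'coins.png' demo image; drag a rectangle on the displayed image and double-click inside it to confirm the crop):

I = imread('coins.png');                 % built-in MATLAB demo image
Icrop = imcrop(I);                       % interactive rectangle selection on the displayed image
figure; imshowpair(I, Icrop, 'montage'); title('Original image and cropped region')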

Procedure:

1. Enter program codes as demonstrated in the sample code below, prepare a file/script and
run the program(s). (One may prepare a flow chart and pseudo code, if required before entering
the code.)

Sample codes & Results: Demo codes and results are given below. Save output images and any other calculation/graph/chart/table generated after the program is executed.

Sample Code that covers all functions/operations mentioned in theory.

Task 1: Basic image handling
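The opening part of this sample (reading, resizing, rotating and grayscale conversion, which corresponds to Figures 1-4 and the command-window output shown later) is summarized below as a minimal sketch so that the snippet that follows runs on its own; the file path and resize factor are illustrative, so substitute any image available on your machine.

clc; clear; close all;
img_path = 'C:\Users\parth\Downloads\i_card.jpg';     % illustrative path; use any image
Image_rgb = imread(img_path);                         % read the image as a matrix of numbers
figure; imshow(Image_rgb); title('Original image')    % Figure 1
fprintf('Original (width,Height): (%d,%d)\n', size(Image_rgb,2), size(Image_rgb,1));
Image_resized = imresize(Image_rgb, 0.25);            % resize to 25 percent of the original
fprintf('Resized (width,Height): (%d,%d)\n', size(Image_resized,2), size(Image_resized,1));
figure; imshow(Image_resized); title('Resized image') % Figure 2
Image_rotated = imrotate(Image_resized, 90);          % rotate by 90 degrees
figure; imshow(Image_rotated); title('Rotated image') % Figure 3
Image_gray = rgb2gray(Image_rotated);                 % convert the color image to grayscale
figure; imshow(Image_gray); title('Gray scale image') % Figure 4
info = imfinfo(img_path)                              % image information printed on the command window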


% Grayscale to binary image


Image_binary1 = im2bw(Image_gray);
figure;imshow(Image_binary1); title('Binary image') %Figure 5

% Another way of getting binary image


Image_binary2 = imbinarize(Image_gray);
figure;imshow(Image_binary2); title('Another way of getting binary image');
%Figure 6

% Image negative or Image complement


Image_negative = imcomplement(Image_gray);
figure; imshow(Image_negative); title('Complement / Negative Image'); %Figure 7

% Contrast stretching operation can adjust intensity values


Image_Contrast_Strech = imadjust(Image_gray);
figure; imshow(Image_Contrast_Strech); title('Contrast stretching'); %Figure 8

% Image histogram and its equalization


figure; title('Histogram and its equalization') %Figure 9
subplot(2,2,1); imshow(Image_gray); title('image-gray scale');
subplot(2,2,2); histogram(Image_gray); title('Histogram of the gray scale image');
histogram_eq = histeq(Image_gray);
subplot(2,2,3); imshow(histogram_eq); title('Histogram equalized image')
subplot(2,2,4); histogram(histogram_eq); title('Histogram after equalization');

Output from a Command window:

For Image resize

Original (width,Height): (1087,5100)

Resized (width,Height): (272,1275)

For Image information

Filename: 'C:\Users\parth\Downloads\i_card.jpg'

FileModDate: '12-Sep-2023 07:25:31'

FileSize: 193703

Format: 'jpg'

FormatVersion: ''

Width: 1700

Height: 1087

BitDepth: 24

ColorType: 'truecolor'

FormatSignature: ''

NumberOfSamples: 3

CodingMethod: 'Huffman'

CodingProcess: 'Progressive'

Comment: {}

Output Images:
Figure 1: Figure 2:

Figure 3:


Figure 4:

Figure 5:

Figure 6:


Figure 7: Figure 8:

Figure 9:


Task 2: Log and Gamma (power law) transform


clc;
clear;
close all;
% Read and display image from matlab library
Image1 = imread('autumn.tif');
figure;imshow(Image1);title('Autumn Image read from MATLAB in- built library');
%Figure 1
% Here are some other ways of contrast streching
% 1. Log transform
R = im2double(Image1);
S1 = 2*log(1+R);
S2 = 3*log(1+R);
figure; title('Log transform and Gamma (power law) transform are also useful for contrast stretching'); %Figure 2
subplot(3,2,1);imshow(R);title('Original Image', 'FontSize', 15);
subplot(3,2,3);imshow(S1);title('Log transform with constant multiplier = 2','FontSize', 15);
subplot(3,2,5);imshow(S2);title('Log transform with constant multiplier = 3','FontSize', 15);
% 2. Gamma or power law transform
G1 = R.^2;
G2 = R.^0.5;
subplot(3,2,2);imshow(R);title('Original Image', 'FontSize', 15);
subplot(3,2,4);imshow(G1);title('Power law transform with constant exponent = 2', 'FontSize', 15);
subplot(3,2,6);imshow(G2);title('Power law transform with constant exponent = 1/2', 'FontSize', 15);

Output Images:
Figure 1:


Figure 2:


Task 3: Gray level slicing


clc;
clear;
close all;

itemp = imread('C:\Users\parth\Downloads\len_std.jpg');
image = itemp(:,:,1);
%decide the min. level of intensity level slicing range
rmin = 100;
%decide the max. level of intensity level slicing range
rmax = 150;
[r,c] = size(image); %dimensions of image
s = zeros(r,c); % pre allocate a variable to store the result image
%result Image
for i = 1:r
for j = 1:c
% if the current pixel of the original image is in the specified range, make it 0 in the result image
if (rmin < image(i,j) && image(i,j) < rmax)
s(i,j) = 0;
else
% otherwise store the same value of the pixel in the result image
s(i,j) = image(i, j);
end
end
end

figure; imshowpair(uint8(image), uint8(s), 'montage'); title('Gray level slicing');

Output Image:


Task 4: Bit plane slicing


clc;
clear;
close all;

img = imread('peppers.png');
img_gray = rgb2gray(img);
figure;title('Bit plane slicing'); % Figure 1
subplot(1,2,1); imshow(img);title('original peppers image in color');
subplot(1,2,2); imshow(img_gray);title('gray scale image of peppers');

f1 = bitget(img_gray,1);
f2 = bitget(img_gray,2);
f3 = bitget(img_gray,3);
f4 = bitget(img_gray,4);
f5 = bitget(img_gray,5);
f6 = bitget(img_gray,6);
f7 = bitget(img_gray,7);
f8 = bitget(img_gray,8);

figure; title('Bit plane slicing') %figure 2


subplot(2,4,1); imshow(logical(f1));title('bit plane 1');
subplot(2,4,2); imshow(logical(f2));title('bit plane 2');
subplot(2,4,3); imshow(logical(f3));title('bit plane 3');
subplot(2,4,4); imshow(logical(f4));title('bit plane 4');
subplot(2,4,5); imshow(logical(f5));title('bit plane 5');
subplot(2,4,6); imshow(logical(f6));title('bit plane 6');
subplot(2,4,7); imshow(logical(f7));title('bit plane 7');
subplot(2,4,8); imshow(logical(f8));title('bit plane 8');

Output Images:
Figure 1:


Figure 2:


Task 5: Image annotations


clc;
clear;
close all;
% image annotation
Aimg = imread('coins.png');
position = [96 146 31; 236 173 26];
label = [5,10];
RGB = insertObjectAnnotation(Aimg, "circle", position, label, 'LineWidth', 3, 'Color', {'cyan','yellow'}, 'TextColor', 'black');
figure;title('Image annotations');
imshow(RGB); title('Annotated coins');

Output Image


Task 6: Contours
clc;
clear;
close all;
% Image Contour
Cimg = imread('rice.png');
figure; imshow(Cimg); title('Real rice image') %Figure 1
figure; imcontour(Cimg); title('Contours') %Figure 2
% Inserting points and lines on an image
xp = [50, 100, 150, 200];
yp = [100, 100, 200, 200];
hold on;
plot(xp,yp,'r*'); title('Inserting points and lines on an image')
hold on;
line([50,100], [100,100], 'color', 'r', 'LineWidth', 2)
h = impoly;
position = wait(h);

Output Images:
Figure 1:

Figure 2:


Task 7: crop, copy and paste


clc;
clear;
close all;
% crop, copy and paste
grayimage = imread('coins.png');
% Get the dimensions of the image.
% numberOfColorBands should be = 1.
[rows, columns, numberOfColorBands] = size(grayimage);
% Display the original gray scale image.
figure; title('cropping and pasting regions from/to image');
subplot(2,2,1);
imshow(grayimage);
axis on;
title('Orginal Grayscale Image', 'FontSize', 20);
% Ask user to draw a box
subplot(2,2,1);
promptmessage = sprintf('Drag out a box that you want to copy,\nor click Cancel to quit.');
titleBarCaption = 'Continue?';
button = questdlg(promptmessage, titleBarCaption, 'Continue', 'Cancel', 'Continue');
if strcmpi(button, 'Cancel')
return;
end
k = waitforbuttonpress;
point1 = get(gca, 'CurrentPoint'); % button down detected
finalRect = rbbox; % return figure units
point2 = get(gca, 'CurrentPoint'); % button up detected
point1 = point1(1,1:2);
point2 = point2(1,1:2);
p1 = min(point1, point2); % calculate locations
offset = abs(point1-point2); % calculate dimensions
% Find the coordinates of the box
xCoords = [p1(1) p1(1)+offset(1) p1(1) + offset(1) p1(1) p1(1)];
yCoords = [p1(2) p1(2) p1(2)+offset(2) p1(2) + offset(2) p1(2)];
x1 = round(xCoords(1));
x2 = round(xCoords(2));
y1 = round(yCoords(5));
y2 = round(yCoords(3));
hold on
axis manual
plot (xCoords, yCoords, 'b-'); % redraw in dataspace units
% Display the cropped image
croppedimage = grayimage(y1:y2, x1:x2);
subplot(2,2,3);
imshow(croppedimage);
axis on;
title('Region that you defined', 'FontSize', 20);
% Paste it into the original image
[rows2, columns2] = size(croppedimage);
promptmessage = sprintf('In the UPPER LEFT IMAGE,\nClick on the upper left point where you want to paste it,\nor click Cancel to quit.');
titleBarCaption = 'Continue?';
button = questdlg(promptmessage, titleBarCaption, 'Continue', 'Cancel', 'Continue');
if strcmpi(button, 'Cancel')
return


end
[x, y] = ginput(1);
% Determine the pasting boundaries
r1 = int32(y);
c1 = int32(x);
r2 = r1 + rows2 - 1;
r2 = min([r2 rows]);
c2 = c1 + columns2 -1;
c2 = min([c2, columns]);
plot([c1 c2 c2 c1 c1], [r1 r1 r2 r2 r1], 'r-');
% Paste as much of croppedImage as will fit into the original image.
grayimage(r1:r2, c1:c2) = croppedimage(1:(r2-r1+1), 1:(c2-c1+1));
subplot(2,2,4);
imshow(grayimage);
axis on;
title('Region that you defined pasted onto original', 'FontSize', 20);

Output Images:
Figure 1: Figure 2:

Figure 3:


Figure 4:


Conclusion: (Students are supposed to note their own learning/observations etc.)


In the above experiment, I performed a comprehensive range of image processing
operations, including resizing, rotation, grayscale and binary conversion, complement
transformation, contrast stretching, and histogram equalization. Beyond these foundational
techniques, I also applied advanced methods such as log and gamma transformations, grey
level and bit-plane slicing, image annotations, and contour detection. Additionally, I
acquired practical skills in image manipulation, such as cropping, copying, pasting, and even
creating dialogue boxes. This experience has enriched my proficiency in image processing.

Quiz:(Write your answers in the space below)

1. Identify the image having contours in the output from the images above and write its title as your answer.
Ans. In Task 6 Figure 2 has contours in the output (Page no. 16).
Title: Inserting points and lines on an image

2. State the function that will convert a colour image into a grayscale image.
Ans. The function that will convert a colour image into a grayscale image is: rgb2gray()

3. Find the size of the autumn image required to store it in computer memory.
Ans. Required size: 213210 bytes

clc;
clear;
close all;

img = imread('autumn.tif');
% Use the whos function to get information about the image variable
info = whos('img');
% Extract the memory usage in bytes from the structure
memory_usage_bytes = info.bytes;
fprintf('Memory required to store the autumn image in computer memory: %d bytes\n', memory_usage_bytes);

Output from a Command window:


Suggested Reference(s):
https://in.mathworks.com/help/images/getting-started-with-image-processing-toolbox.html
https://www.oreilly.com/library/view/programming-computer-vision/9781449341916/ch01.html
References used by the students: (List the references in the space provided below)
https://in.mathworks.com/matlabcentral/answers

Rubric-wise marks obtained:

Rubrics: 1. Quiz + Logical Understanding (10) | 2. Neatness of the code (10) | 3. Accuracy/correctness of the output (10) | 4. Presentation of the work (10) | 5. Timely submission (10) | Total: 50

Points:


Experiment No: 2
Mathematical, Logical & Morphological operations on images

Date(s):

Competency and Practical Skills: Image processing

Relevant CO: CO1

Objectives: (a) To apply mathematical operations to process images.


(b) To apply logical operations to process images.
(c) To apply morphological operations to process images.

Theory:
The students are advised to go through the detailed description of all the following functions and use each of them at least once during this lab practice.
• imresize : Images must be of the same size for arithmetical operations. So, resize one of the two images to make it equal to the other image, if needed.

Functions for Mathematical operations on images:


• imadd : Addition of two images of equal size results in a third image where each pixel value is the sum of the corresponding pixel values.
• imsubtract : Subtraction of one image from another to get a third image.
• imabsdiff : It results in the absolute difference of two images.
• imdivide : It gives the division of one image by another to get a third image.
• immultiply : It provides multiplication of one image by another.
• mean2 : This function calculates the mean or statistical average value of a 2D image. It acts like a constant for a particular image.
• Changing brightness of an image with a constant: The constant value obtained with mean2, or any other constant, can be used to modify the image by adding/subtracting it to/from the image. Students are encouraged to write a small code for this.
• imnoise : This function can add Gaussian or salt & pepper types of noise to a given image. This is a statistical function; the statistical properties of the noise can be controlled.
• Image de-noising using averaging of multiple copies of noisy images : It is possible to suppress noise effects by averaging multiple noisy copies of the same image. Multiple noisy copies can be created by using 'imnoise'. Adding all the noisy copies and averaging results in an image where the effect of noise is reduced. Students are encouraged to write a small code for this.
• imgradient : This function calculates the gradient magnitude and direction of an image.


• imgradientxy : This function calculates the directional gradients (derivatives) of an image along the x and y directions.

Functions for Logical operations on images


• bitand : Provides logical AND on two images of equal size.
• bitor : Provides logical OR on two images of equal size.
• bitxor : Provides logical Exclusive-OR on two images of equal size.
• bitcmp : Provides complement of an image.

Functions for Morphological operations on images


Binary images may contain numerous imperfections. In particular, the binary regions produced by simple thresholding are distorted by noise and texture. Morphological image processing removes these imperfections by accounting for the form and structure of the image. It needs a structuring element to probe the image. The following functions perform morphological operations on images:

• strel: This function is used to define the structuring element.


• imerode : This function results in image erosion.
• The erosion of a binary image f by a structuring element s (denoted f ⊖ s) produces a new binary image g = f ⊖ s with ones in all locations (x,y) of the structuring element's origin at which the structuring element s fits the input image f, i.e. g(x,y) = 1 if s fits f and 0 otherwise, repeating for all pixel coordinates (x,y).
• imdilate : This function is used to dilate an image.
• The dilation of an image f by a structuring element s (denoted f ⊕ s) produces a new binary image g = f ⊕ s with ones in all locations (x,y) of the structuring element's origin at which the structuring element s hits the input image f, i.e. g(x,y) = 1 if s hits f and 0 otherwise, repeating for all pixel coordinates (x,y). Dilation has the opposite effect to erosion -- it adds a layer of pixels to both the inner and outer boundaries of regions.
• imopen : It is used to morphologically open an image.
• The opening of an image f by a structuring element s (denoted f ∘ s) is an erosion followed by a dilation:
• f ∘ s = (f ⊖ s) ⊕ s
• Opening is so called because it can open up a gap between objects connected by a thin bridge of pixels. Any regions that have survived the erosion are restored to their original size by the dilation.
• Opening is an idempotent operation: once an image has been opened, subsequent openings with the same structuring element have no further effect on that image: (f ∘ s) ∘ s = f ∘ s.
• imclose: It is used to morphologically close an image.
• The closing of an image f by a structuring element s (denoted f • s) is a dilation followed by an erosion:
• f • s = (f ⊕ s) ⊖ s


• Closing is so called because it can fill holes in the regions while keeping the initial region sizes. Like opening, closing is idempotent: (f • s) • s = f • s, and it is the dual operation of opening (just as opening is the dual operation of closing):
• f • s = (f^c ∘ s)^c ;  f ∘ s = (f^c • s)^c
• In other words, closing (opening) of a binary image can be performed by taking the complement of that image, opening (closing) it with the structuring element, and taking the complement of the result.

Sample codes & Results: Demo codes and results are given below. Save output images and any other calculation/graph/chart/table generated after the program is executed.

Sample Code that covers all functions/operations mentioned in theory.


Task 1: Arithmetic operations
clc;
close all;
% Arithmetic operations on two images
% two images must be of same size
Im = imread('coins.png');
Ic = imread('cameraman.tif');
Im=imresize(Im,size(Ic));
Iadd=imadd(Im,Ic);
figure;title('Arithmetic operations')
subplot(2,3,1);imshow(Iadd);title('Addition of Images')
Isub=imsubtract(Im,Ic);
subplot(2,3,2);imshow(Isub);title('Subtraction of Images')
Iabdiff=imabsdiff(Im,Ic);
subplot(2,3,3);imshow(Iabdiff);title('Absolute difference of Images')
Idiv= imdivide(Im,Ic);
subplot(2,3,4);imagesc(Idiv);title('Division of an Image by another')
Imul=immultiply(uint16(Im),uint16(Ic));
subplot(2,3,5);imagesc(Imul);title('Multiplication of an Image by another')

Output Images:


Task 2: intensity of an image


clc;
close all;
I1= imread('C:\Users\parth\Downloads\i_card.jpg');
figure;imshow(I1);title('Student ID Card read from C:\Users\parth\Downloads\i_card.jpg')
Iresize=imresize(I1,0.25);
Irotate=imrotate(Iresize,90);
Igray= rgb2gray(Irotate);
Iavg = mean2(Igray);
Inew = Igray- Iavg;
figure; imshow(Inew);title('brightness modified image')
Ibright = Inew + 171;
figure;imshow(Ibright);title('Average brightness subtraction and then addition cannot restore the same image')

Output Images:
Figure 1: Figure 2:

Figure 3:


Task 3: Image de-noising


clc;
close all;
Ix=(imread('C:\Users\parth\Downloads\pr112.jpg'));
figure;title('Averaging multiple images for noise reduction')
subplot(231);imshow(Ix);title('original image')
N1=imnoise(Ix,'gaussian');
subplot(232);imshow(N1);title('Noisy Image 1')
N2=imnoise(Ix,'gaussian');
subplot(233);imshow(N2);title('Noisy Image 2')
N3=imnoise(Ix,'gaussian');
subplot(234);imshow(N3);title('Noisy Image 3')
N4=imnoise(Ix,'gaussian');
subplot(235);imshow(N4);title('Noisy Image 4')
Add1=imadd(N1,N2, 'uint16');
Add2=imadd(N3,N4, 'uint16');
Avg= imadd(Add1,Add2, 'uint16');
Average_image=uint8(Avg/4);
subplot(236);imshow(Average_image);title('Averaged Image')

Output Image:


Task 4: image gradient/derivative


clc;
close all;
Id= imread('C:\Users\parth\Downloads\pe15.png');
[Gx,Gy] = imgradientxy(Id);
[Gmag,Gdir] = imgradient(Gx,Gy);
figure;imshowpair(Gmag,Gdir,'montage')
title('Gradient Magnitude (Left) and Gradient Direction (Right)')

Output Images:
Figure 1:


Task 5: Logical operations


% Logical operations on images
Iand=bitand(Im,Ic);
Ior= bitor(Im,Ic);
Ixor= bitxor(Im,Ic);
Icompl= imcomplement(Ic);
figure;title('logical operations')
subplot(2,2,1);imshow(Iand);title('logical AND')
subplot(2,2,2);imshow(Ior);title('logical OR')
subplot(2,2,3);imshow(Ixor);title('logical EX-OR')
subplot(2,2,4);imshow(Icompl);title('Image complement')

Output Images:


Task 6: Morphological operations


clc;
close all;
Im = imread('coins.png');
figure;title('Morphological operations')
subplot(2, 3, 1);imshow(Im); title('Original image')

% Dilated Image
se = strel('disk', 7);
dilate = imdilate(Im, se);
subplot(2, 3, 2); imshow(dilate);title('Dilated image')

% Eroded image
erode = imerode(Im, se);
subplot(2, 3, 3);imshow(erode); title('Eroded image')

% Opened image
open = imopen(Im, se);
subplot(2, 3, 4);imshow(open); title('Opened image')

% Closed image
closed = imclose(Im, se);
subplot(2, 3, 5);imshow(closed); title('Closed image')

Output Images:


Conclusion: In summary, mathematical, logical, and morphological operations on images serve as


essential tools in image processing. They allow us to enhance image quality, extract meaningful
information, and manipulate images for various applications. These operations play a crucial role in
fields such as computer vision, medical imaging, and remote sensing, enabling us to solve complex
challenges and advance technology. As we continue to innovate, these operations will remain a
cornerstone in our ability to work with digital images effectively.

Quiz:(Write your answers in the space below)


1. What is the range of intensities used for the gray scale image of employee ID card in the sample
code above?
image=rgb2gray(imread('c:\Users\parth\OneDrive\Desktop\Parth_id_card.jpg'));
%find the minimum maximum intensity values
min_intensity=min(image(:));
max_intensity=max(image(:));
%display the range of intensities
fprintf('Range of intensities: %d to %d\n',min_intensity, max_intensity);
Range of intensities: 0 to 255

2. Briefly explain the difference of pixel values for A-B and abs|A-B| when A<B
When A is less than B, A - B is negative in ordinary signed arithmetic, but for unsigned integer images (e.g. uint8) MATLAB's imsubtract clips the negative result to 0, so the information about the difference is lost. abs(A - B), computed with imabsdiff, instead gives the magnitude of the difference, which measures the extent of dissimilarity between the two images regardless of which one is brighter.
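A tiny numeric illustration of this behaviour (the pixel values below are arbitrary examples):

A = uint8(50);  B = uint8(80);   % A < B
d1 = imsubtract(A, B)            % returns 0  : the negative result is clipped for uint8
d2 = imabsdiff(A, B)             % returns 30 : magnitude of the difference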

3. Write one application where morphological operation is needed.

One important application of morphological operations is in text document preprocessing for tasks
like optical character recognition (OCR). Morphological operations can help clean and enhance
scanned or digital text images, removing noise, improving text quality, and separating characters from
the background, making it easier for OCR algorithms to accurately recognize and extract text from
images.

Suggested Reference(s):

https://in.mathworks.com/help/images/getting-started-with-image-processing-toolbox.html

https://www.oreilly.com/library/view/programming-computer-vision/9781449341916/ch01.html

References used by the students: (List the references in the space provided below)

https://in.mathworks.com/help/images/getting-started-with-image-processing-toolbox.html

https://www.oreilly.com/library/view/programming-computer-vision/9781449341916/ch01.html


Rubric-wise marks obtained:

Rubrics: 1. Quiz + Logical Understanding (10) | 2. Neatness of the code (10) | 3. Accuracy/correctness of the output (10) | 4. Presentation of the work (10) | 5. Timely submission (10) | Total: 50

Points:


Experiment No: 3
Image filtering (4 Hrs. = 2 Sessions)

Date(s):

Competency and Practical Skills: Image processing

Relevant CO: CO1

Objectives: To implement various filters for image processing.

Theory:
The students are advised to go through the detailed description of all the following functions and use
each of them at least once during this lab practice.
Image blurring and de-blurring: An image gets blurred when it is down-sampled or captured at low resolution. In the context of signals, blurring is a low-pass filtering operation in two dimensions (2D). Blurring can also occur when either the scene or the camera is moving; this second case is called motion blur.

Functions for image blurring and de-blurring


• fspecial : creates a two-dimensional filter H of the specified type. Use MATLAB help for an explanation of the types, additional parameters and examples.
• imfilter : Provides N-dimensional filtering for multidimensional images. See MATLAB help for a description of the function and examples.
• deconvwnr : Used to de-blur an image using a Wiener filter. Use MATLAB help for details and examples. (A minimal blur/de-blur sketch follows this list.)
• Image de-noising : Noise typically appears as high-frequency content in images, so one can also look at image de-noising as a low-pass filtering operation in 2D.
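A minimal sketch of blurring and de-blurring with these three functions (it assumes the built-in 'cameraman.tif' demo image; the motion-blur parameters are illustrative and no noise is added):

I = im2double(imread('cameraman.tif'));        % built-in grayscale demo image
PSF = fspecial('motion', 21, 11);              % motion-blur PSF: length 21 pixels, angle 11 degrees
Iblur = imfilter(I, PSF, 'conv', 'circular');  % blur the image with the PSF
Irestored = deconvwnr(Iblur, PSF);             % Wiener de-blurring (noise-free assumption)
figure;
subplot(1,3,1); imshow(I); title('Original')
subplot(1,3,2); imshow(Iblur); title('Motion blurred')
subplot(1,3,3); imshow(Irestored); title('Restored with deconvwnr')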

Functions for image de-noising


• medfilt2 : This function performs 2D median filtering on a matrix: each element is replaced by the median value of a specified neighbourhood. See MATLAB help for the full description and examples.
• wiener2 : This function provides 2D adaptive filtering of noise. wiener2 lowpass filters an
intensity image that has been degraded by constant power additive noise. wiener2 uses a pixel-
wise adaptive Wiener method based on statistics estimated from a local neighborhood of each
pixel. Use MATLAB help for details and examples.


Un-sharp masking technique to sharpen images : Sharpening of images is a high-pass filtering operation in 2D. 'imsharpen' is the function used for un-sharp masking.

High-boost filtering : High-boost filtering is used to highlight the high-frequency content in an image. It simply adds the sharpened image to the original image.

Color split & merge : A color image has three planes or channels of color, one each for Red, Green and Blue. For color-based image processing techniques it is sometimes necessary to split the given image into its primary color components (Red, Green and Blue); this is also sometimes known as color filtering. On the other hand, it is sometimes required to synthesize a color image by adding the primary colors at each pixel. Hence, color split and merge operations are useful. These operations can be carried out using matrix manipulation in MATLAB. Students are encouraged to write a small code for these operations.

Frequency domain filtering of images: All the filtering techniques discussed and practiced above are in
spatial domain. This is similar to time domain analysis of signals. Now we will implement frequency
domain filters of different types and make a comparative analysis in the context of images.

The following types of filters to be implemented in frequency domain:

• Low-pass filter
• High-pass filter
• Band-pass filter
• Band-stop filter (Also called band reject filter or notch filter)

For each of the types mentioned above, there are three ways of implementation

• Ideal
• Butterworth
• Gaussian

Functions for frequency domain filtering


• fft2 : It calculates fast Fourier transform of a given image. Use MATLAB help for detailed
description and examples.
• freqspace : It provides frequency spacing for equally spaced frequency responses. Use
MATLAB help for detailed description and examples.
• fftshift : fftshift is useful for visualizing the Fourier transform with the zero-frequency component in the middle of the spectrum.
• ifft2 : Calculates 2D discrete inverse Fourier transform to bring the image back to spatial
domain from frequency domain.

Homomorphic Filter: A special type of frequency domain filtering to improve contrast and dynamic
range of an image. Students are encouraged to study the concept and implement a basic code for
homomorphic filtering.

Sample codes & Results: Demo. Codes and results are given below. Save output images and any other
calculation/graph/ chart/table generated after the program is executed.
Sample Code that covers all functions/operations mentioned in theory.
Task 1: Image denoising
clc;
close all;
clear all;
%image denoising using filter
A=rgb2gray(imread('C:\Users\parth\Downloads\pr12.jpg'));
figure;title('Image de-noising')
subplot(2,2,1); imshow(A);title('Original image');
% Add salt & pepper noise
Ispn = imnoise(A,'salt & pepper', 0.03);
subplot(2,2,2); imshow(Ispn);title('Image with salt & pepper noise');
% Remove Salt & pepper noise by median filters
K = medfilt2(Ispn); subplot(2,2,3); imshow(uint8(K)); title('Remove salt & pepper noise by median filter');
% Remove salt & pepper noise by Wiener filter
L = wiener2(Ispn,[5 5]);
subplot(2,2,4); imshow(uint8(L)); title('Remove salt & pepper noise by Wiener filter');
figure;title('Gaussian Noise removal')
subplot(2,2,1); imshow(A);title('Original image');
% Add gaussian noise
M = imnoise(A,'gaussian',0,0.005);
subplot(2,2,2); imshow(M); title('Image with gaussian noise');
% Remove Gaussian noise by Wiener filter
L = wiener2(M,[5 5]);
subplot(2,2,3); imshow(uint8(L));title('Remove Gaussian noise by Wiener filter');
% Remove Gaussian noise by median filter
K = medfilt2(M);
subplot(2,2,4); imshow(uint8(K)); title('Remove Gaussian noise by median filter');
% unsharp masking technique to sharpen images


Iunsharp=imread('rice.png');
Isharp =imsharpen(Iunsharp);
figure;imshowpair(Iunsharp,Isharp,'montage');title('unsharp masking')
% high boost filtering technique
Ihbf=imadd(Isharp,Iunsharp);
figure;imshowpair(Isharp,Ihbf,'montage');
title('High boost filtering')

Figure 1:

Figure 2:


Figure 3:

Figure 4:


Task 2: RGB split and merge


I1= imread('C:\Users\parth\Downloads\i_card.jpg');
Iresize=imresize(I1,0.25);
Irotate=imrotate(Iresize,90);
red= Irotate(:,:,1);
green = Irotate(:,:,2);
blue = Irotate(:,:,3);
black = zeros(size(Irotate,1),size(Irotate,2),'uint8');
only_red=cat(3,red,black,black);
only_green=cat(3,black,green,black);
only_blue = cat(3,black,black,blue);
merge=cat(3,red,green,blue);
figure;title('Splitting color images into RGB channels and merging RGB channels to create a color image')
subplot(3,3,2);imshow(Irotate);title('Original RGB image');
subplot(3,3,4);imshow(only_red); title('Red channel image');
subplot(3,3,5);imshow(only_green);title('Green channel image');
subplot(3,3,6);imshow(only_blue);title('Blue channel image');
subplot(3,3,8);imshow(merge);title('merged image');

output images:

Figure 1:


Task 3: Image filtering in frequency domain


% Low pass filter
% Ideal low pass filter
clc;
close all;
clear all;
img=imread('C:\Users\parth\Downloads\pr12.jpg');
im=rgb2gray(img);
[m, n]=size(im);
ft_im=fft2(im);% Fourier transform of an image
shift_ft_im = fftshift(ft_im);% shift transformed image to centre
fc=input('enter value of cut-off frequency in terms of pixels: ');
%fc=50;
h=zeros(m,n);% pre-allocation of filter mask size
[a, b]=freqspace(256,'meshgrid');
d= m.*sqrt(a.^2+b.^2)<=fc;%creates non-separable filter mask
h(d)=1;
shift_h=fftshift(h);%shifting mask to the centre
g=shift_h.*ft_im; % convolution in the spatial domain becomes multiplication in the frequency domain
output=abs(ifft2(g)); % getting the result back in the spatial domain with the inverse transform
% separable mask
hn=zeros(m,n);
dn=abs(256*a)<fc+10 & abs(256*b)<fc-10;
hn(dn)=1;
shift_hn=fftshift(hn);
gn=shift_hn.*ft_im;
outputn=abs(ifft2(gn));
figure;
subplot(241);imshow(im);title('Gray scale image')
subplot(242);imshow(shift_ft_im);title('DFT of gray scale image')
subplot(243);mesh(log(1+abs(fftshift(ft_im))));title('DFT of the image after shift')
subplot(244);imshow(h);title('Ideal Low Pass Filter mask 2D')
subplot(245);mesh(h);title('Ideal Low Pass Filter mask 3D')
subplot(246);imshow(uint8(output));title('Filtered image')
subplot(247);imshow(hn);title('Ideal LPF mask-separable')
subplot(248);imshow(uint8(outputn));title('Filtered image separable mask')
% Gaussian Low Pass Filter
hg=zeros(m,n);
for u=1:m
for v=1:n
dg=((u-(m/2)).^2+(v-(n/2)).^2);
hg(u,v)=exp(-dg/2/fc/fc);
end
end
shift_hg= fftshift(hg);
gg = shift_hg.*ft_im;
outputg=abs(ifft2(gg));
figure;
subplot(231);imshow(im);title('Gray scale image')
subplot(232);imshow(shift_ft_im);title('DFT of gray scale image')
subplot(233);mesh(log(1+abs(fftshift(ft_im))));title('DFT of the image after shift')
subplot(234);imshow(hg);title('Gaussian Low Pass Filter mask 2D')
subplot(235);mesh(hg);title('Gaussian Low Pass Filter mask 3D')
subplot(236);imshow(uint8(outputg));title('Filtered image')
%Butterworth Low Pass Filter


hb=zeros(m,n);
nx=2;% order of the filter
for u=1:m
for v=1:n
db= ((u-(m/2)).^2+(v-(n/2)).^2);
hb(u,v)=1./(1+(db/fc/fc).^(2.*nx));
end
end
gb = hb.*shift_ft_im;
outputb=abs(ifft2(gb));
figure;
subplot(231);imshow(im);title('Gray scale image')
subplot(232);imshow(shift_ft_im);title('DFT of gray scale image')
subplot(233);mesh(log(1+abs(fftshift(ft_im))));title('DFT of the image after shift')
subplot(234);imshow(hb);title('Butterworth Low Pass Filter mask 2D')
subplot(235);mesh(hb);title('Butterworth Low Pass Filter mask 3D')
subplot(236);imshow(uint8(outputb));title('Filtered image')
%High Pass Filters
%Ideal
hpi=1-h;
shift_hpi=fftshift(hpi);
ghpi=hpi.*shift_ft_im;
out_hpi=abs(ifft2(ghpi));
figure;
subplot(221);imshow(im);title('Gray scale image')
subplot(222);imshow(hpi);title('Ideal High Pass Filter Mask 2D');
subplot(223);mesh(hpi);title('Ideal HPF 3D')
subplot(224);imshow(uint8(out_hpi));title('Filtered Image')
%Gaussian
hpg=1-hg;
shift_hpg=fftshift(hpg);
ghpg=hpg.*shift_ft_im;
out_hpg=abs(ifft2(ghpg));
figure;
subplot(221);imshow(im);title('Gray scale image')
subplot(222);imshow(hpg);title('Gaussian High Pass Filter Mask 2D');
subplot(223);mesh(hpg);title('Gaussian HPF 3D')
subplot(224);imshow(uint8(out_hpg));title('Filtered Image')
%Butterworth
hpb=1-hb;
shift_hpb=fftshift(hpb);
ghpb=hpb.*shift_ft_im;
out_hpb=abs(ifft2(ghpb));
figure;
subplot(221);imshow(im);title('Gray scale image')
subplot(222);imshow(hpb);title('Butterworth High Pass Filter Mask 2D');
subplot(223);mesh(hpb);title('Butterworth HPF 3D')
subplot(224);imshow(uint8(out_hpb));title('Filtered Image')
% Band pass filter and band stop filters require two cut off frequencies
fcl=input('enter value of lower cut-off frequency in terms of pixels: ');
fch=input('enter value of higher cut-off frequency in terms of pixels: ');
%Band pass filter
% First create an LPF mask
hl=zeros(m,n);
[al ,bl]=freqspace(256,'meshgrid');
dl= m.*sqrt(al.^2+bl.^2)<=fch;%creates non separable(circular)filter mask
hl(dl)=1;
% Then create an HPF mask


hh=ones(m,n);
[ah,bh]=freqspace(256,'meshgrid');
dh= m.*sqrt(ah.^2+bh.^2)<=fcl;
hh(dh)=0;
% We are creating a BPF mask using multiplication of the above two masks
hbpf=hl.*hh;
%Frequency domain multiplication is equivalent to spatial domain convolution
ghbpf=hbpf.*shift_ft_im;
%Inverse FFT to get back the result in spatial domain
out_bpf=abs(ifft2(ghbpf));
%plotting
figure;
subplot(221);imshow(im);title('Gray scale image')
subplot(222);imshow(hbpf);title('Ideal Band Pass Filter Mask 2D');
subplot(223);mesh(hbpf);title('Ideal BPF 3D')
subplot(224);imshow(uint8(out_bpf));title('Filtered Image')
% Band stop filter(BSF)
% We create a BSF mask as complement from BPF mask here for simplicity
hbsf=1-hbpf;%cut-off frequencies are same as band pass filter here
ghbsf=hbsf.*shift_ft_im;
out_bsf=abs(ifft2(ghbsf));
figure;
subplot(221);imshow(im);title('Gray scale image')
subplot(222);imshow(hbsf);title('Ideal Band Stop Filter Mask 2D');
subplot(223);mesh(hbsf);title('Ideal BSF 3D')
subplot(224);imshow(uint8(out_bsf));title('Filtered Image')

Output Images:

Figure 1: Figure 2:


Figure 3: Figure 4:

Figure 5: Figure 6:


Task 4: Homomorphic Filter


% Read the image
I = imread('C:\Users\parth\Downloads\img333.jpg');
I=im2double(I);
% Take the log of the image: ln(1 + I)
I = log(1 + I);
% Get M, N for FFT padding (roughly double the image size)
M=2 *size(I,1)+1;
N = 2*size(I,2) + 1;
% Create a centered Gaussian Low Pass Filter
sigma = 10;
[X, Y] = meshgrid(1:N,1:M);
centerX = ceil(N/2);
centerY = ceil(M/2);
gaussianNumerator= (X-centerX).^2+ (Y - centerY).^2;
H= exp(-gaussianNumerator./(2*sigma.^2));
% Create High Pass Filter from Low Pass Filter by 1-H
H=1-H;
% Uncentered HPF
H = fftshift(H);
% Frequency transform - FFT
If = fft2(I, M, N); % FFT of the log-image, zero-padded to M-by-N
% High Pass Filtering followed by Inverse FFT
lout = real(ifft2(H.*If));
lout=lout(1:size(I,1), 1:size(I,2));
% Convert back from the log domain with exp() and subtract 1
Ihmf = exp(lout) - 1;
% display the images
imshowpair(I, Ihmf, 'montage')
title('Homomorphic Filtered Image', 'FontSize', 12)

Output Image :


Quiz:(Write your answers in the space below)

1. Which filter is better to remove salt & pepper noise from the image?
Median Filter Method

2. What is indicated by the peak in the centre of DFT (3D) of an image?


The peak in the center of the 3D Discrete Fourier Transform (DFT) of an image
typically represents the image's low-frequency or DC (Direct Current) component.
This component contains information about the overall brightness or average intensity
of the image. The peak's magnitude indicates the strength of this low-frequency
component, and its position at the center of the DFT grid is due to the DC component
having a frequency of zero.

3. Which of the filter masks produces a visually better image after filtering?


(Ideal/Butterworth/Gaussian)
Gaussian HPF

Suggested Reference(s):
https://in.mathworks.com/help/images/getting-started-with-image-processing-toolbox.html

YouTube videos by Ekeeda

https://blogs.mathworks.com/matlab/

References used by the students: (List the references in the space provided below)
https://in.mathworks.com/help/images/getting-started-with-image-processing-toolbox.html

YouTube videos by Ekeeda

https://blogs.mathworks.com/matlab/

Rubric-wise marks obtained:

Rubrics: 1. Quiz + Logical Understanding (10) | 2. Neatness of the code (10) | 3. Accuracy/correctness of the output (10) | 4. Presentation of the work (10) | 5. Timely submission (10) | Total: 50

Points:
