DIP Full File Prasenjeet 2012 13


SAOE

BE-II/DIP

E&TC

Assignment No.:1
Title: Reading a BMP File and displaying its Image Information
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:


Remarks:

Roll Number : 52


Assignment No.1

Title:

Reading a BMP File and displaying its image information.

Objective:

To read a Bitmap file.


To display its image information.

Theory: Bitmap File Format (BMP):- The Microsoft Windows Bitmap (BMP) file format
is a basic file format for digital images in the Microsoft Windows world. BMP files have:

A file header

A bitmap header

A color table

Image data

The file header occupies the first 14 bytes of all BMP files.
The overall structure is shown in Table 1.

    BMP File Header
    Bitmap Information
    Colour Table
    Pixel Data

Table 1 - BMP file format


The bitmap header begins with the size of the header, which is then followed by the width and
the height of the image data. If the height is a negative number, the image is stored top-down
instead of the usual bottom-up order. Usually the number of colour planes is one. The bits-per-pixel
field gives the number of bits required to represent each pixel. The next field deals with image
compression: the compression field is 0 for no compression and 1 for run-length encoding compression.
The next two fields give the resolution of the image data, and the final two fields deal with the
colours or grey shades in the image. The horizontal and vertical resolutions are expressed in pixels
per meter. The colour field gives the number of colours or grey shades in the image. A colour table is
a lookup table that assigns a grey shade or colour to a number given in the image data. The BMP colour
table has 4 bytes in each colour table entry; three of the bytes hold the blue, green and red colour
values. The final part of the BMP file is the image data. The data is stored row by row with padding
at the end of each row. The format has the following advantages:
1. It is widely used on most platforms and applications.
2. BMP supports 1-bit through 24-bit colour depth.
3. It is good for storing small images: because the storage is lossless, all the detail of the raster
image is preserved.

The disadvantage of the BMP format is that it offers little or no compression, which results in
very large files.

Comparison of various file formats:

Attribute      JPEG              GIF            PNG                    BMP
Compression    Lossy             Lossless       Lossless               Lossless
Bits           24 bits           8 bits         8 or 24 bits           1, 4, 8, 16, 24, 32, 48, or 64 bits
Transparency   Not transparent   Transparent    Better transparency    Not transparent


Extraction of BMP header


As we now know, a BMP file contains the following elements; a detailed explanation of each is given
below.
1. File Header
2. Image Information Header
3. Colour Table
4. Pixel Data

The file header:


The file header has exactly 14 bytes. The first two bytes must be the ASCII codes for the
characters B and M; software should check these values to confirm that the file it is reading is
most probably a Windows BMP file. The second field is a four-byte integer that contains the size of the
file in bytes. The third and fourth fields are each two bytes long and are reserved for future extensions
to the format definition; the present definition requires both of these fields to be zero. The final field
is a four-byte integer that gives the offset of the start of the pixel data section relative to the start
of the file.

Field Name    Size in Bytes   Description
bfType        2               The characters "BM"
bfSize        4               The size of the file in bytes
bfReserved1   2               Unused - must be zero
bfReserved2   2               Unused - must be zero
bfOffBits     4               Offset to start of pixel data

Table 2 - BMP file header
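As a sketch of how these five fields line up in the 14 bytes, the following Python fragment packs and unpacks a hypothetical header (the file size 70 and offset 54 are made-up illustration values):

```python
import struct

# BMP file header: "<" = little-endian, no padding.
# 2s = bfType, I = bfSize (4 bytes), H+H = bfReserved1/2, I = bfOffBits.
FILE_HEADER = struct.Struct("<2sIHHI")
assert FILE_HEADER.size == 14  # the header is exactly 14 bytes

# Hypothetical header: a 70-byte file whose pixel data starts at offset 54.
raw = FILE_HEADER.pack(b"BM", 70, 0, 0, 54)

bf_type, bf_size, res1, res2, bf_offbits = FILE_HEADER.unpack(raw)
assert bf_type == b"BM"     # confirm this is most probably a BMP file
print(bf_size, bf_offbits)  # 70 54
```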


The Image Information Header:


A BMP information header specifies the dimensions, compression type and colour format of the
bitmap. There are actually two distinct options for the image information header: one developed for the
OS/2 BMP format, which is 12 bytes long, and one for the Windows BMP format, which is 40 bytes long. The
first four bytes of each format give the length of the header in bytes, so a simple examination of this
value tells you which format is used.

Field Name        Size in Bytes   Description
biSize            4               Header size - must be at least 40
biWidth           4               Image width in pixels
biHeight          4               Image height in pixels
biPlanes          2               Must be 1
biBitCount        2               Bits per pixel - 1, 2, 4, 8, 16, 24, or 32
biCompression     4               Compression type (0 = uncompressed)
biSizeImage       4               Image size - may be zero for uncompressed images
biXPelsPerMeter   4               Preferred horizontal resolution in pixels per meter
biYPelsPerMeter   4               Preferred vertical resolution in pixels per meter
biClrUsed         4               Number of colour map entries that are actually used
biClrImportant    4               Number of significant colours; if 0, all colours are important

Table 3 - BMP image information header
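A similar Python sketch for the 40-byte Windows information header; all the field values below are hypothetical:

```python
import struct

# 40-byte Windows bitmap information header, little-endian:
# I biSize, i biWidth, i biHeight (signed: negative height = top-down),
# H biPlanes, H biBitCount, I biCompression, I biSizeImage,
# i biXPelsPerMeter, i biYPelsPerMeter, I biClrUsed, I biClrImportant.
INFO_HEADER = struct.Struct("<IiiHHIIiiII")
assert INFO_HEADER.size == 40

# Hypothetical values for a 100x50 uncompressed 8-bit image.
raw = INFO_HEADER.pack(40, 100, 50, 1, 8, 0, 0, 2835, 2835, 256, 0)

fields = INFO_HEADER.unpack(raw)
bi_size, bi_width, bi_height = fields[0], fields[1], fields[2]
print(bi_size, bi_width, bi_height)  # 40 100 50
```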


The Colour Table:


A colour table is defined as an array of RGBQUAD structures and contains as many elements as there
are colours in the bitmap. There is no colour table for 24-bit BMP images, as each pixel is represented by
24-bit red-green-blue (RGB) values in the actual bitmap data area.
For an 8-bit bitmap image the colour table consists of 256 entries, with each entry consisting of
four bytes of data. The first three bytes are the blue, green and red colour values respectively; the
fourth byte is unused and must be equal to zero.

Field Name    Size (bytes)   Description
rgbBlue       1              Specifies the blue part of the colour
rgbGreen      1              Specifies the green part of the colour
rgbRed        1              Specifies the red part of the colour
rgbReserved   1              Must always be set to zero

Table 4 - BMP Colour Table

The Pixel Data


In the 8-bit format each pixel is represented by a single byte of data; that byte is an index into the
colour table. In a 24-bit image, each pixel is represented by three consecutive bytes of data that specify
the blue, green, and red component values respectively; in a 24-bit BMP file the colour table is absent.
The pixel data is the actual image data, stored as consecutive rows, or scan lines. The number
of bytes in one row must always be a multiple of four: zero bytes are simply appended until the number
of bytes in the row reaches a multiple of four.
Pixels are stored "upside-down" with respect to normal raster order, starting in the lower-left corner,
going from left to right, and then row by row from the bottom to the top of the image.
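The padding rule can be written as a one-line formula; a small Python sketch (widths chosen for illustration):

```python
def bmp_row_size(width_px, bits_per_pixel):
    """Stored bytes per scan line: the raw bit count rounded up to a
    whole multiple of 32 bits (4 bytes), as BMP requires."""
    return ((width_px * bits_per_pixel + 31) // 32) * 4

# A 101-pixel-wide 8-bit row needs 101 bytes, padded up to 104.
print(bmp_row_size(101, 8))   # 104
# A 101-pixel-wide 24-bit row needs 303 bytes, padded up to 304.
print(bmp_row_size(101, 24))  # 304
```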


The following figure illustrates the pixel data stored in a BMP image: the image is stored starting
from the bottom-left corner, as indicated by the file pointer, and then moves upwards row by row.

Figure 1


Program:
#include<stdio.h>
#include<conio.h>
/* Combined BMP file header (14 bytes) and information header (40 bytes).
   Field widths and order match the on-disk format; under Turbo C the
   structure is packed, so a single fread() fills it directly. */
typedef struct header
{
unsigned short type;        /* must be "BM" (19778)              */
unsigned long size;         /* file size in bytes                */
unsigned short reserved1;
unsigned short reserved2;
unsigned long offbits;      /* offset to start of pixel data     */
unsigned long structsize;   /* info header size (40)             */
unsigned long width;        /* image width in pixels             */
unsigned long height;       /* image height in pixels            */
unsigned short planes;      /* must be 1                         */
unsigned short bitcount;    /* bits per pixel                    */
unsigned long compression;  /* 0 = uncompressed                  */
unsigned long imagesize;    /* may be 0 for uncompressed images  */
long xpelparameter;         /* horizontal resolution, pixels/m   */
long ypelparameter;         /* vertical resolution, pixels/m     */
unsigned long cirused;      /* colour table entries used         */
unsigned long cirimportant; /* number of significant colours     */
}header;
void main()
{
header bmpheader;
FILE *fp;
char image[30];
//clrscr();
printf("\n Enter the image file you want to read ");
gets(image);
fp=fopen(image,"rb");       /* open in binary mode */
if (fp==NULL)
{
printf("\n Error in opening file ");
}
else
{
printf("\n BMP header attributes:----> ");
fread(&bmpheader,sizeof(bmpheader),1,fp);
printf("\n TYPE : %u ",bmpheader.type);
printf("\n SIZE : %lu ",bmpheader.size);
printf("\n reserved1 : %u ",bmpheader.reserved1);
printf("\n reserved2 : %u ",bmpheader.reserved2);
printf("\n Offbit : %lu ",bmpheader.offbits);
printf("\n Structsize : %lu ",bmpheader.structsize);
printf("\n width : %lu ",bmpheader.width);
printf("\n height : %lu ",bmpheader.height);
printf("\n planes : %u ",bmpheader.planes);
printf("\n bitcount : %u ",bmpheader.bitcount);
printf("\n compression : %lu ",bmpheader.compression);
printf("\n imagesize : %lu ",bmpheader.imagesize);
printf("\n xpelparameter : %ld ",bmpheader.xpelparameter);
printf("\n ypelparameter : %ld ",bmpheader.ypelparameter);
printf("\n cirused : %lu ",bmpheader.cirused);
printf("\n cirimportant : %lu ",bmpheader.cirimportant);
fclose(fp);
}
getch();
}


fig.:Untitled.bmp


Result:
Enter the image file you want to enter C:\TC\BIN\Untitled.bmp
BMP header attributes:---->
TYPE : 19778
SIZE : 5
reserved1 : 0
reserved2 : 1078
Offbit : 2621440
Structsize : 53673984
width : 30146560
height: 65536
planes : 8
bitcount : 0
compression : 49520
xpelparameter : 5
ypelparameter : 0
imagesize : 0
cirused: 0
cirimportant : 0


Conclusion:


Assignment No.:2
Title: Estimation of Statistical Parameters of an Image and

Histogram Plotting
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:


Remarks:

Roll Number : 52


Assignment No.2
Title: Estimation of statistical parameters of an image and histogram
plotting.
Objective:
To find the statistical parameters of an image.
To plot the histogram of an image.
Theory:

Moments are applicable to many different aspects of image processing, ranging from
invariant pattern recognition and image encoding to pose estimation. When applied to
images, they describe the image content (or distribution) with respect to its axes. They are
designed to capture both global and detailed geometric information about the image. Here
we are using them to characterize a grey level image so as to extract properties that have
analogies in statistics or mechanics. Once the images are read and the headers are removed, the core
data can be used to calculate the statistical properties of an image.

Mean:
The mean and variance are the first two statistical moments which provide
information on the shape of the distribution.
The mean of a data set is simply the arithmetic average of the values in the set, obtained
by summing the values and dividing by the number of values:

    mean = (1/N) * (x_1 + x_2 + ... + x_N)


Variance and Standard Deviation:

The variance of a data set is the arithmetic average of the squared differences
between the values and the mean:

    var = (1/N) * sum of (x_i - mean)^2, for i = 1..N

(the unbiased estimator divides by N-1 instead of N, as the program in this assignment does).

The standard deviation is the square root of the variance:

    std = sqrt(var)
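As a worked check of these definitions, here is a small Python sketch over nine made-up values (the same 3x3 test matrix commented out in the program below), using the unbiased N-1 divisor:

```python
import math

# Nine made-up grey levels (the 3x3 test matrix 1..9).
pixels = [1, 2, 3, 4, 5, 6, 7, 8, 9]
n = len(pixels)

mean = sum(pixels) / n                                # arithmetic average
var = sum((p - mean) ** 2 for p in pixels) / (n - 1)  # unbiased variance
std = math.sqrt(var)                                  # standard deviation

print(mean, var)      # 5.0 7.5
print(round(std, 4))  # 2.7386
```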

Histogram:
An image histogram is a type of histogram that acts as a graphical
representation of the tonal distribution in a digital image. It plots the number of pixels for
each tonal value. The horizontal axis of the graph represents the tonal variations, while
the vertical axis represents the number of pixels in that particular tone. The left side of the
horizontal axis represents the black and dark areas, the middle represents medium grey
and the right hand side represents light and pure white areas.
A histogram uses a bar graph to profile the occurrences of each grey level present
in an image. The horizontal axis is the grey-level values. It begins at zero and goes to the
number of grey levels (256 in this example). Each vertical bar represents the number of
times the corresponding grey level occurred in the image.
In statistics, a histogram is a graphical display of tabulated frequencies. It shows
what proportion of cases fall into each of several categories. A histogram differs from a
bar chart in that it is the area of the bar that denotes the value, not the height, a crucial
distinction when the categories are not of uniform width.
In a more general mathematical sense, a histogram is a mapping mi that counts the
number of observations that fall into various disjoint categories (known as bins), whereas
the graph of a histogram is merely one way to represent a histogram. Thus, if we let n be
the total number of observations and k be the total number of bins, the histogram m_i meets
the following condition:

    n = m_1 + m_2 + ... + m_k


A cumulative histogram is a mapping that counts the cumulative number of
observations in all of the bins up to the specified bin. That is, the cumulative histogram
M_i of a histogram m_i is defined as:

    M_i = m_1 + m_2 + ... + m_i

Image histograms can be useful tools for thresholding. Because the information
contained in the graph is a representation of pixel distribution as a function of tonal
variation, image histograms can be analysed for peaks and/or valleys which can then be
used to determine a threshold value.
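The histogram and cumulative histogram definitions above can be sketched numerically (a small Python example with made-up pixel values):

```python
# Toy "image" of 8 pixels with grey levels 0..3.
img = [0, 1, 1, 2, 2, 2, 3, 0]
levels = 4

# Histogram: m[i] counts how often grey level i occurs.
m = [0] * levels
for p in img:
    m[p] += 1

# The bin counts must sum to the total number of observations n.
n = sum(m)
assert n == len(img)

# Cumulative histogram: M[i] = m[0] + ... + m[i].
M = []
total = 0
for count in m:
    total += count
    M.append(total)

print(m)  # [2, 2, 3, 1]
print(M)  # [2, 4, 7, 8]
```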

Fig.: Sample histogram of the cameraman.tif image (plot title "HISTOGRAM OF IMAGE";
x-axis: grey level values, 0-300; y-axis: quantity of grey level, 0-1800)


Algorithm:


Programs:
%% user defined mean of a matrix
function y=u_mean(x)
x=double(x);
[r c d]=size(x);
z=0;
for k=1:d
for i=1:r
for j=1:c
z=z+x(i,j,k);
end
end
end
y=z/(r*c*d);
%% user defined function for variance of a matrix
function y=u_var(x)
m=u_mean(x);
[r c d]=size(x);
x=double(x);
z=0;
for k=1:d
for i=1:r
for j=1:c
z=z+(x(i,j,k)-m).^2;
end
end
end
y=z/(r*c*d-1);   %subtract 1 from the denominator to calculate the unbiased variance estimator

%% user defined function for standard deviation of the matrix


function y=u_std(x)
z=u_var(x);
y=sqrt(z);

%% user defined prog. for histogram calculation and plotting


function y=u_hist(x)
%x=imread('cameraman.tif');
[r c]=size(x);
m=max(max(x));
h=zeros(1,m);
%
z=0;
for i=1:r
for j=1:c


if x(i,j)==0
z=z+1;
else
h(1,x(i,j))=h(1,x(i,j))+1;
end
end
end
out=[z,h];
figure,stem(out);
figure,bar(out);
title('-Histogram-');
set(gca,'XTick',0:25:m);
y=out;

Program1:
%Practical #2a
%Name : Prassanjeet Singh
%(Statistical)To find the Mean, Variance and Standard Deviation of an image.
clc
clear all
close all
%a=[1 2 3 ; 4 5 6 ; 7 8 9];
a=imread('cameraman.tif');   %reading the image into a new variable a
%imshow(a);
b=double(a);
[r c]=size(a);               %getting # of rows and columns
l=r*c;
%converting 2D matrix to 1D
z=zeros(1,r*c);              %predefining z matrix
k=0;                         %initializing a counter
for x=1:1:r
for y=1:1:c
k=k+1;
z(1,k)=a(x,y);
end
end
%using inbuilt functions
mean1=mean(z)
var1=var(z)
std_deviation=std(z)
%user defined
mean2=u_mean(a)
var2=u_var(a)
std_deviation_u=u_std(a)


Result:
mean1 = 118.7245
var1 = 3.8865e+003
std_deviation = 62.3417
mean2 = 118.7245
var2 = 3.8865e+003
std_deviation_u = 62.3417
Program2:
%Practical: 2b
%Histogram using inbuilt as well as user defined fn
clc
close all
clear all
a=imread('cameraman.tif');
imhist(a);
hist2=u_hist(a);

Result:

Fig. Histogram using in-built function


Histogram using user defined function:

Fig. Histogram shown using stem function

Fig. Histogram shown using bar function


Conclusion:


Assignment No.: 03
Title: Image enhancement using histogram modelling.
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:

Remarks:

Roll Number : 52


Assignment No. 3

Title: Image enhancement using histogram modelling.


Objective: Image Enhancement using:
a) Contrast stretching
b) Histogram Equalization

Theory:
Image Enhancement:
Image enhancement is, in essence, cosmetics: processing that makes the image look good. It involves
accentuation, or sharpening, of image features such as edges and boundaries, grey level and
contrast manipulation, noise reduction, filtering, interpolation and magnification,
pseudo-colouring and so on.
The enhancement process does not increase the inherent information content
of the image, but it increases the dynamic range of the chosen features so that they can be
detected easily.
1. Contrast Stretching
Contrast stretching (often called normalization) is a simple image enhancement
technique that attempts to improve the contrast in an image by 'stretching' the range of
intensity values it contains to span a desired range of values, e.g. the full range of
pixel values that the image type concerned allows. It differs from the more sophisticated
histogram equalization in that it can only apply a linear scaling function to the image
pixel values; as a result the 'enhancement' is less harsh. (Most implementations accept a
greylevel image as input and produce another greylevel image as output.)

    S = ((r - r_min) / (r_max - r_min)) * (L - 1)

where r is the input grey level, r_min and r_max are the minimum and maximum grey levels
present in the input image, and L is the number of grey levels.
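As a numeric illustration of the linear stretch (using the min-max form with L = 256; the pixel values below are made up):

```python
# Toy row of grey levels occupying only the narrow range [50, 100].
row = [50, 60, 75, 100]
r_min, r_max = min(row), max(row)
L = 256  # number of grey levels in an 8-bit image

# S = (r - r_min) / (r_max - r_min) * (L - 1), rounded to integers.
stretched = [round((r - r_min) / (r_max - r_min) * (L - 1)) for r in row]
print(stretched)  # [0, 51, 128, 255] - the values now span the full range
```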


Algorithm:


2. Histogram Equalization:
Histogram equalization automatically determines a transformation function
seeking to produce an output image with a uniform histogram.
Linear stretching is good, but it does not change the shape of the histogram. In
applications where we need a flat histogram, linear stretching fails. To get a flat
histogram, we go for histogram equalization.
The transfer function should satisfy the following conditions:
1) T(r) should be single-valued and monotonically increasing.
2) 0 <= T(r) <= 1 for 0 <= r <= 1
If
    p_r(r) = probability density function of the input grey level
    p_s(s) = probability density function of the output grey level
then, as per probability theory,

    p_s(s) = p_r(r) * |dr/ds|                ---------------- eq. (1)

The CDF of the image defines the transformation:

    S = T(r) = integral from 0 to r of p_r(w) dw

Differentiating both sides with respect to r,

    dS/dr = p_r(r)

Substituting in eq. (1),

    p_s(s) = p_r(r) * (1 / p_r(r)) = 1


Steps for solving a numerical on histogram equalization:

1) For the given grey levels r_k and their pixel counts n_k (from the image), calculate n,
the total number of pixels.
2) Calculate the probability density function of the input grey levels: p_r(r_k) = n_k / n.
3) Calculate the cumulative distribution function of the input grey levels:
S_k = T(r_k) = sum of p_r(r_j) for j = 1..k.
4) Calculate (L-1) * S_k and round off the value.
5) Map each old grey level value to the new equalized grey level.
6) Plot the equalized histogram.
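The six steps can be traced on a tiny made-up image with L = 8 grey levels (a Python sketch, not the MATLAB program used below):

```python
# Step 1: a toy 8-pixel image with L = 8 grey levels; n = total pixels.
img = [0, 1, 1, 2, 2, 2, 3, 7]
L = 8
n = len(img)

# Step 2: probability density function of the input grey levels.
counts = [img.count(g) for g in range(L)]
pdf = [c / n for c in counts]

# Step 3: cumulative distribution function S_k = sum of pdf up to k.
cdf, running = [], 0.0
for p in pdf:
    running += p
    cdf.append(running)

# Step 4: scale by (L - 1) and round off.
eq_map = [round((L - 1) * s) for s in cdf]

# Step 5: map each old grey level to its equalized level.
equalized = [eq_map[p] for p in img]
print(eq_map)     # [1, 3, 5, 6, 6, 6, 6, 7]
print(equalized)  # [1, 3, 3, 5, 5, 5, 6, 7]
```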

Algorithm for histogram equalization:


Program:
%% user defined prog. for histogram calculation and plotting
function y=u_hist(x)
%x=imread('cameraman.tif');
[r c]=size(x);
m=max(max(x));
h=zeros(1,m);
%
z=0;
for i=1:r
for j=1:c
if x(i,j)==0
z=z+1;
else
h(1,x(i,j))=h(1,x(i,j))+1;
end
end
end
out=[z,h];
% figure,stem(out);
% figure,bar(out);
% title('-Histogram-');
% set(gca,'XTick',0:25:m);
y=out;
%% Practical 3: Contrast Stretching and Histogram Equalization
clc
close all
clear all
I=imread('pout.tif');
img=double(I);
imhist(I);
[r c]=size(I);
%% contrast Stretching
p=input('Enter the lower limit for contrast stretching...');
q=input('Enter the upper limit for contrast stretching...');
close all
out=zeros(r,c);
for i=1:r
for j=1:c
if (I(i,j)<=p)
out(i,j)=0.5*img(i,j);
elseif (I(i,j)>p && I(i,j)<=q)
out(i,j)=2*(img(i,j)-p)+0.5*p;
else
out(i,j)=0.5*(img(i,j)-q)+0.5*p+2*(q-p);
end
end
end
out=uint8(out);
figure
subplot(421)
imshow(I);


title('Original Image->');
subplot(422)
imhist(I);
title('Histogram of Original Image->');
subplot(423)
imshow(out);
title('Contrast Stretched Image->');
subplot(424)
imhist(out);
title('Histogram of Contrast Stretched Image->');
%% Histogram equalization using inbuilt function
subplot(425)
out2=histeq(I);
imshow(out2);
title('Histogram Equalized Image->');
subplot(426)
imhist(out2);
title('Histogram of Histogram Equalized Image->');
%% Histogram equalization using user defined function
hin1=u_hist(I);
[lr lc]=size(hin1);
maxquanta=2.^(ceil(log2(lc)))-1;
tp=r*c;
pdf=hin1/tp;
cdf=zeros(lr,lc);
cdf(1)=pdf(1);
for i=2:lc
cdf(i)=cdf(i-1)+pdf(i);
end
eqtable=round(cdf*maxquanta);
out2=zeros(r,c);
temp=0;
for i=1:r
for j=1:c
out2(i,j)=eqtable(1,I(i,j)+1);
end
end
out2=uint8(out2);
subplot(427)
imshow(out2);
title('Histogram Equalized Image(user defined)->');
subplot(428)
imhist(out2);
title('Histogram of Histogram Equalized Image(user defined)->');


Result:
Enter the lower limit for contrast stretching...100
Enter the upper limit for contrast stretching...160


Conclusion:


Assignment No.:4
Title :Grey level transformations on an image

Date of performance:
Date of submission :
Name of Student : Prassanjeet Singh Dhanjal
Class :BE [E & TC] Division : A
Signature of the Teacher:
Remarks:

Roll Number : 52


Assignment No.4

Title: Grey level transformations on an image

Objective: To perform following point processing algorithms on an image

Negative transformation
Log transformation
Power law transformation (Antilog transformation)
Bit plane slicing
Grey level slicing

Theory:
Image enhancement is a very basic image processing task that enables us to make a better
subjective judgement over images. Image enhancement in the spatial domain (that
is, performing operations directly on pixel values) is the simplest approach.
Enhanced images provide better contrast of the details that images contain. Image
enhancement is applied in every field where images ought to be understood and
analysed, for example medical image analysis, analysis of images from satellites, etc.
Here we discuss some preliminary image enhancement techniques that are applicable to
grey scale images.
Image enhancement simply means transforming an image f into an image g using a
transformation T. The values of pixels in images f and g are denoted by r and s,
respectively, and are related by the expression
s = T(r)
where T is a transformation that maps a pixel value r into a pixel value s. The results of
this transformation are mapped into the grey scale range, as we are dealing here only with
grey scale digital images. So the results are mapped back into the range [0, L-1], where
L = 2^k, k being the number of bits in the image being considered. For instance, for an
8-bit image the range of pixel values will be [0, 255]. There are three basic types of
functions (transformations) that are used frequently in image enhancement.


They are,

Linear,
Logarithmic,
Power-Law.

The transformation map plot shown below depicts various curves that fall into the above
three types of enhancement techniques.

The Identity and Negative curves fall under the category of linear functions. Identity curve
simply indicates that input image is equal to the output image. The Log and Inverse-Log
curves fall under the category of Logarithmic functions and nth root and nth power
transformations fall under the category of Power-Law functions.

Image Negation:
The negative of an image with grey levels in the range [0, L-1] is obtained by the negative
transformation shown in figure above, which is given by the expression,
s=L-1-r
This expression reverses the grey level intensities of the image, thereby
producing a negative-like image. The output of this function can be directly mapped into
the grey scale look-up table consisting of values from 0 to L-1.
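A quick numeric check of s = L - 1 - r for an 8-bit image (sample values chosen for illustration):

```python
# Negative transformation s = L - 1 - r for an 8-bit image (L = 256).
L = 256
row = [0, 64, 128, 255]
negative = [L - 1 - r for r in row]
print(negative)  # [255, 191, 127, 0] - black and white are swapped
```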
Log Transformations:
The log transformation curve shown in Fig. A is given by the expression
s = c log (1 + r)
where c is a constant and it is assumed that r >= 0. The shape of the log curve in Fig. A shows
that this transformation maps a narrow range of low-level grey scale intensities into a
wider range of output values, and similarly maps the wide range of high-level grey scale
intensities into a narrow range of high-level output values. The opposite applies for the
inverse-log transform. This transform is used to expand the values of dark pixels and
compress the values of bright pixels.
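A numeric sketch of the log transform, choosing c = 255/log(256) so the output spans [0, 255] (this particular choice of c is our own illustration, not prescribed above):

```python
import math

# s = c * log(1 + r); c chosen so that r = 255 maps to s = 255.
c = 255 / math.log(256)

for r in (0, 1, 10, 100, 255):
    s = c * math.log(1 + r)
    print(r, "->", round(s))
# Note how the dark values are spread out far more than the bright ones:
# 1 -> 32 and 10 -> 110, while 100 -> 212 and 255 -> 255.
```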


Figure A: Plot of various transformation functions

Power Law Transformation:

The basic form is

    s = C * r^gamma

where C and gamma are positive constants. As in the log transformation, power-law curves with
gamma < 1 map a narrow range of dark input values into a wider range of output values, with the
opposite being true for higher input values; curves with gamma > 1 have the opposite effect. By
varying gamma we obtain a family of possible transformations. Finally, the transformation reduces
to the identity transformation for C = gamma = 1.

A variety of devices for image capture, printing and display respond according to a power
law. The exponent in the power law equation is referred to as gamma, and the process used to
correct this power-law response phenomenon is called gamma correction. For example, CRT devices
have an intensity-versus-voltage response that is a power function with gamma varying from 1.8
to 2.5; with gamma = 2.5 a CRT would produce images darker than intended, e.g. when the input is
a simple grey scale linear wedge. Reproducing colours accurately also requires knowledge of gamma
correction, since changing the value of gamma changes not only the brightness but also the colour
ratios of R:G:B. Gamma correction has become extremely important as the use of digital images
for commercial purposes over the internet has increased.
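The gamma relationships above can be checked on normalized intensities in [0, 1] (the sample values are our own illustration):

```python
# Power-law transformation s = C * r**gamma; C = gamma = 1 is the identity.
def power_law(r, gamma, C=1.0):
    return C * r ** gamma

print(power_law(0.25, 0.5))  # 0.5  -> gamma < 1 brightens a dark value
print(power_law(0.25, 2.5))  # gamma > 1 darkens it further
print(power_law(0.25, 1.0))  # 0.25 -> identity transformation
```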


This transformation function is also commonly called gamma correction. For various values of
gamma, different levels of enhancement can be obtained. If you notice, different display monitors
display images at different intensities and clarity; that means every monitor has built-in gamma
correction with a certain gamma range, and so a good monitor automatically corrects all the
images displayed on it for the best contrast, giving the user the best experience.


The difference between the log transformation function and the power-law functions is
that with the power-law function a family of possible transformation curves can be
obtained just by varying gamma.
These are the three basic image enhancement functions for grey scale images; they can be
applied easily to any type of image for better contrast and highlighting. Using the image
negation formula given above, it is not necessary for the results to be mapped into the
grey scale range [0, L-1]: the output of L-1-r automatically falls in the range [0, L-1]. But
for the log and power-law transformations the resulting values are often quite distinctive,
depending upon control parameters like gamma and c and the logarithmic scale, so the results
should be mapped back to the grey scale range to get a meaningful output image.
For example, the log function s = c log (1 + r) results in values between 0 and 2.41 for r
varying between 0 and 255, keeping c = 1. So the range [0, 2.41] should be mapped to [0, L-1]
to get a meaningful image.

Bit Plane Slicing:


Instead of highlighting grey level ranges, highlighting the contribution made to total
image appearance by specific bits might be desired. Suppose that each pixel in an image
is represented by 8 bits. Imagine the image is composed of eight 1-bit planes, ranging from
bit plane 0 (LSB) to bit plane 7 (MSB). In terms of 8-bit bytes, plane 0 contains all the
lowest-order bits of the bytes comprising the pixels in the image and plane 7 contains all
the high-order bits.
Separating a digital image into its bit planes is useful for analysing the relative importance
played by each bit of the image; it determines the adequacy of the number of bits
used to quantize each pixel, which is useful for image compression. In terms of bit-plane extraction
for an 8-bit image, the binary image for bit plane 7 is obtained by processing the
input image with a thresholding grey-level transformation function that maps all levels
between 0 and 127 to one level (e.g. 0) and maps all levels from 128 to 255 to another
(e.g. 255).
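Bit plane extraction is just bit shifting and masking; a small Python sketch on one made-up pixel value:

```python
# Bit plane slicing of a single 8-bit pixel value:
# plane k holds bit k of every pixel (k = 0 is the LSB, k = 7 the MSB).
def bit_plane(pixel, k):
    return (pixel >> k) & 1

pixel = 0b10110010  # 178
planes = [bit_plane(pixel, k) for k in range(8)]
print(planes)  # [0, 1, 0, 0, 1, 1, 0, 1] (LSB first)

# Recombining the planes restores the original pixel.
restored = sum(bit << k for k, bit in enumerate(planes))
assert restored == pixel
```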


Grey level slicing:


Highlighting a specific range of grey levels in an image is often desired.
Applications include enhancing features such as masses of water, crop regions, or certain
elevation areas in satellite imagery.
Another application is enhancing flaws in X-ray images. There are two main approaches:
Highlight a range of intensities while diminishing all others to a constant low level.
Highlight a range of intensities but preserve all others.
The figure illustrates the intensity level slicing process. The left figure shows a
transformation function that highlights a range [A, B] while diminishing all the others;
the right figure highlights the range [A, B] but preserves all the others.
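Both slicing approaches can be sketched on a made-up row of grey levels (the range [A, B] is chosen arbitrarily):

```python
# Grey level slicing of a toy row, highlighting the range [A, B] = [100, 150].
A, B = 100, 150
row = [20, 100, 120, 150, 200]

# Without background: levels in [A, B] go to 255, all others to 0.
without_bg = [255 if A <= r <= B else 0 for r in row]
# With background: levels in [A, B] go to 255, all others are preserved.
with_bg = [255 if A <= r <= B else r for r in row]

print(without_bg)  # [0, 255, 255, 255, 0]
print(with_bg)     # [20, 255, 255, 255, 200]
```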


Algorithm:


Programs:
%% User Defined function to convert greyscale image to binary sliced image
% 1st plane refers to the LSB plane
function y=u_grey2binslice(x)
%x=[255 230 40;29 140 239; 89 90 100];
%x=imread('cameraman.tif');
[r c]=size(x);
x=double(x);
m=max(max(x));
l=ceil(log2(double(m)));
img=zeros(r,c,l);
for i=1:r
for j=1:c
temp=x(i,j);
for k=1:l
temp_r=rem(temp,2);
temp=floor(temp/2);
img(i,j,k)=temp_r;
end
end
end
y=img;
%% User defined function to convert binary sliced image to greyscale
function y=u_binslice2grey(x)
[r c l]=size(x);
img=zeros(r,c);
for i=1:r
for j=1:c
temp=0;
for k=1:l
temp=temp+x(i,j,k)*2.^(k-1);
end
img(i,j)=temp;
end
end
y=img;

%% Practical4: Grey Level Transformations on an Image


%(a)Negative Transformation (b)Thresholding
%(c)Greyscale Slicing (d)Log Transf.
%(e)Power Law Transformation (f)Bit Plane Slicing
clc
close all
clear all
a=imread('cameraman.tif');
%a=imread('moon.tif');
%a=imread('coins.png');
[r c]=size(a);


L=max(max(a));
b=double(a);
%% Negative Transformation
negative=zeros(r,c);
for i=1:r
for j=1:c
negative(i,j)=L-a(i,j);
end
end
negative=uint8(negative);

%% Thresholding
T=input('Please enter the threshold level between 0 to 255= ');
thresimg=zeros(r,c);
for i=1:r
for j=1:c
if a(i,j)>= T
thresimg(i,j)=255;
end
end
end
%% GreyScale Slicing
%without Background
T1=input('Enter the lower value for slicing= ');
T2=input('Enter the upper value for slicing= ');
sliceimwobg=zeros(r,c);
for i=1:r
for j=1:c
if a(i,j)>= T1 && a(i,j)<=T2
sliceimwobg(i,j)=255;
end
end
end
%with background
sliceimwbg=a;
for i=1:r
for j=1:c
if a(i,j)>= T1 && a(i,j)<=T2
sliceimwbg(i,j)=255;
end
end
end
%% Log Transformation
C1=input('Enter the coefficient for logarithmic transform= ');
logtransimg=zeros(r,c);
for i=1:r
for j=1:c
logtransimg(i,j)=C1*log(1+b(i,j));
end
end
logtransimg2=uint8(logtransimg);


%% Power Law Transformation


C2=input('Enter coefficient for Power Law transform= ');
gamma=input('Enter Gamma factor= ');
powlawtransimg=zeros(r,c);
for i=1:r
for j=1:c
powlawtransimg(i,j)=C2*(b(i,j).^gamma);
end
end
powlawtransimg=uint8(powlawtransimg);
%% Power Law Second Method
powlaw2=zeros(r,c);
for i=1:r
for j=1:c
powlaw2(i,j)=exp(logtransimg(i,j)./C1)-1;
end
end
powlaw2img=uint8(powlaw2);

%% Bit Plane Slicing


bitsliced=u_grey2binslice(a);
a1=bitsliced(:,:,1);
a2=bitsliced(:,:,2);
a3=bitsliced(:,:,3);
a4=bitsliced(:,:,4);
a5=bitsliced(:,:,5);
a6=bitsliced(:,:,6);
a7=bitsliced(:,:,7);
a8=bitsliced(:,:,8);
combined=uint8(u_binslice2grey(bitsliced));
%% Image Show
imshow(a);
figure
subplot(3,3,1);
imshow(a);
title('Original');
subplot(332)
imshow(negative);
title('Negative of Original');
subplot(333)
imshow(thresimg);
title('Thresholded Image');
subplot(334)
imshow(sliceimwobg);
title('Grey Level Sliced Image without Background');
subplot(335)
imshow(sliceimwbg);
title('Grey Level Sliced Image with Background');
subplot(336)
imshow(logtransimg2);
title('Log Transformed Image');
subplot(337)
imshow(powlawtransimg);
title('Power Law Transformed Image');


figure,subplot(331);
imshow(a);
subplot(332)
imshow(a8);
subplot(333)
imshow(a7);
subplot(334)
imshow(a6);
subplot(335)
imshow(a5);
subplot(336)
imshow(a4);
subplot(337)
imshow(a3);
subplot(338)
imshow(a2);
subplot(339)
imshow(a1);
figure,imshow(powlaw2img);
title('original from power law antilog');
figure,imshow(combined);

Results:
Please enter the threshold level between 0 to 255= 125
Enter the lower value for slicing= 34
Enter the upper value for slicing= 45
Enter the coefficient for logarithmic transform= 25
Enter coefficient for Power Law transform= 1.2
Enter Gamma factor= 1.1
Fig.: Original Image


Conclusion and Inferences:


Assignment No.:5
Title: Spatial domain filtering of images.
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:

Roll Number : 52


Assignment No.5
Title: Spatial domain filtering of images.

Objective:
o Read an image file and perform
Spatial domain high pass filtering (HPF)
Spatial domain low pass filtering (LPF)
Spatial domain median filtering
Spatial domain high-boost filtering

Theory:

Spatial frequencies:
All images and pictures contain spatial frequencies. Most of us are familiar with
some type of frequency, such as the 60-cycle, 110-volt electricity in our homes. The
voltage varies in time as a sinusoid, and the sinusoid completes a full cycle 60 times a
second, i.e. a frequency of 60 Hz. Images have spatial frequencies: the grey level in the image
varies in space (not time), i.e. it goes up and down.

Filtering:
Filtering is also a common concept. Adjusting the bass and treble on stereos filters
out certain audio frequencies and amplifies others. High pass filters pass high frequencies
and stop low frequencies. Low pass filters pass low frequencies and stop high frequencies.
In the same manner it is possible to filter spatial frequencies in images. A high-pass filter
will amplify or pass frequent changes in gray levels and a low-pass filter will reduce
frequent changes in gray levels.


Application of spatial image filtering:


Spatial image filtering has several basic applications in image processing, among
them noise removal, smoothing, and edge enhancement. Noise in an image usually
appears as snow (white or black) randomly sprinkled over the image; it is caused by spikes,
i.e. very sharp, narrow edges in the image. A low pass filter smooths and often removes
these sharp edges.
Convolution:
At the heart of any filtering process is the operation of convolution. Often a mask
(2D) is convolved with an image to obtain the desired effect.

The operation of convolution of a mask w with an image f can be written as

g(x, y) = Σs Σt w(s, t) f(x+s, y+t)

where the sums run over the mask coordinates.

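The mask operation described above can be illustrated with a short sketch. This is plain Python rather than the MATLAB used in the manual's programs; the helper name `apply_mask` and the 5x5 test image are made up for the demonstration.

```python
# A minimal sketch of spatial-domain mask filtering (correlation form),
# using plain Python lists. Illustrative only, not part of the manual's
# MATLAB programs.

def apply_mask(image, mask):
    """Slide a 3x3 mask over the image; border pixels are left at 0."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            acc = 0.0
            for wi in range(3):
                for wj in range(3):
                    acc += mask[wi][wj] * image[i - 1 + wi][j - 1 + wj]
            out[i][j] = acc
    return out

# 3x3 box (averaging) mask: every coefficient 1/9
box = [[1 / 9] * 3 for _ in range(3)]
flat = [[90] * 5 for _ in range(5)]      # constant grey-level area
smoothed = apply_mask(flat, box)
# a constant grey-level area is unchanged by an averaging mask
print(round(smoothed[2][2]))
```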
Low pass filtering:


Low pass filtering smoothes out sharp transitions in grey levels and removes noise.
The output (response) of a low pass filter is simply the average of the pixels contained in
the neighbourhood of the filter mask. The averaging is achieved through spatial
integration.
A low pass filter in which all coefficients are equal is called a box filter.
The following figure shows four low pass filter convolution masks. Convolving these filters
with a constant gray level area of an image will not change the image. Notice how the
second convolution mask replaces the center pixel of the input image with the average
gray level of the area. The other three masks have the same general form: a peak in the
centre with small values at the corners.

The four masks have scaling factors 1/9, 1/6, 1/10 and 1/16; the 1/9 mask is the box
filter, with all nine coefficients equal to 1.

High pass filtering:

Three common 3X3 high pass filter convolution masks are:

 0 -1  0        -1 -1 -1         1 -2  1
-1  5 -1        -1  9 -1        -2  5 -2
 0 -1  0        -1 -1 -1         1 -2  1

High pass filters amplify or enhance a sharp transition (an edge) in an image. Sharpening
of image or high pass filtering is achieved through spatial differentiation.


Image differentiation enhances edges and other discontinuities (such as noise) and
deemphasizes areas with slowly varying grey level values.
The figure above shows three 3X3 high pass filter convolution masks. Each will leave a
homogeneous area of an image unchanged. They all have the same form: a peak in the
centre; negative values above, below, and to the sides of the centre; and corner values
near zero. The three masks, however, produce different amplifications of different spatial
frequencies.

Median Filters:
Order-statistic filters are non-linear spatial filters whose response is based on
ordering the pixels contained in the image area encompassed by the filter and then
replacing the value of the centre pixel with the value determined by the ranking result.
The median filter is the best known example.
The median filter uses an empty (coefficient-free) mask. It takes an area of an image
(3X3, 5X5, 7X7, etc.), looks at all the pixel values in that area, and replaces the centre pixel
with the median value. The median filter does not require convolution; it does, however,
require sorting the values in the image area to find the median value.
Median filters are particularly effective in the presence of impulse noise, also called
salt-and-pepper noise.
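The ranking step can be sketched as follows. This is an illustrative pure-Python example, not the manual's MATLAB implementation; the 4x4 image and the 255 "salt" impulse are made up.

```python
# A small illustration of 3x3 median filtering on a list-of-lists image.
# Borders are simply copied through unfiltered.

from statistics import median

def median_filter(image):
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = [image[i - 1 + wi][j - 1 + wj]
                      for wi in range(3) for wj in range(3)]
            out[i][j] = median(window)  # rank the 9 values, take the middle one
    return out

img = [[10, 10, 10, 10],
       [10, 255, 10, 10],   # a single "salt" impulse
       [10, 10, 10, 10],
       [10, 10, 10, 10]]
filtered = median_filter(img)
print(filtered[1][1])   # the impulse is replaced by the neighbourhood median
```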

High boost filtering:

High pass filters give good results at the cost of the background. At times we need to
enhance the edges and would also like to retain some part of the background. A solution to
this is a modified version of the high pass filter called high-boost filtering.
We can think of high pass filtering in terms of subtracting a low pass image from
the original image, that is,
High pass = Original - Low pass.
However, in many cases where a high pass image is required, we also want to
retain some of the low frequency components to aid in the interpretation of the image.
Thus, if we multiply the original image by an amplification factor A before subtracting
the low pass image, we get a high-boost or high frequency emphasis filter. Thus,

High boost = A*Original - Low pass
           = (A-1)*Original + Original - Low pass
           = (A-1)*Original + High pass

Now, if A = 1 we have a simple high pass filter. When A > 1 part of the original image is
retained in the output.
A simple mask for high boost filtering is given by

(1/9) *   -1 -1 -1
          -1  X -1
          -1 -1 -1

where X = 9A-1.
For example, if A = 1.1, then
X = 9(1.1) - 1 = 8.9.
This implies that 10% of the background of the image is preserved.
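The identities above can be checked numerically on a single pixel. The values `original` and `lowpass` below are arbitrary illustrative numbers, not taken from any image in this manual.

```python
# A quick numeric check of the high-boost identities:
# A*Original - Lowpass == (A-1)*Original + (Original - Lowpass),
# and the centre coefficient X = 9A - 1 of the (1/9)-scaled mask.

A = 1.1            # amplification factor
original = 120.0   # original pixel value (illustrative)
lowpass = 100.0    # low-pass (averaged) value at the same pixel (illustrative)

high_boost = A * original - lowpass
highpass = original - lowpass
X = 9 * A - 1      # for A = 1.1 this is 8.9, i.e. 10% background retained
print(high_boost, X)
```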
GENERAL PROCEDURE TO BE FOLLOWED FOR FILTERING:

1. Take a mask of (3x3, 5x5, 7x7, etc.) size.
2. Place this mask on the image.
3. Multiply each component of the mask with the corresponding value of the image.
4. Add up all the values and place the result in the centre.
5. Shift the mask by one pixel along the row till the row is completed.
6. Repeat the same procedure for each row.
This operation is called linear spatial filtering and it is to be followed for all the
filters.

Algorithm:

Programs:
%% User defined Mean Filter(odd order window)
function y=u_meanfltr(x,w,C)
b=double(x);
[r c]=size(x);
w=w/C;
%w=input('Enter the window i.e. square matrix for LPF: ');
%C=input('Enter Division factor: ');
[wr wc]=size(w);
wrm=ceil(wr/2);
wcm=ceil(wc/2);
mnfimg=zeros(r,c);
for i=wrm:r-wrm+1
for j=wcm:c-wcm+1
temp=0;
for wi=1:wr
for wj=1:wc
temp=(w(wi,wj)*b((i-wrm+wi),(j-wcm+wj)))+temp;
end
end
mnfimg(i,j)=temp;
end
end
y=uint8(mnfimg);
%% User defined Median Filter(odd order window)
function y=u_medfltr(x,S)
b=double(x);
[r c]=size(x);
%S=input('Enter the window size i.e. square matrix for median filter: ');
%w=ones(S,S);
wrm=ceil(S/2);
wcm=ceil(S/2);
mdnfimg=zeros(r,c);
temp=zeros(1,S*S);
for i=wrm:r-wrm+1
for j=wcm:c-wcm+1
k=1;
for wi=1:S
for wj=1:S
temp(1,k)=b((i-wrm+wi),(j-wcm+wj));
k=k+1;
end
end
mdnfimg(i,j)=median(sort(temp));
end
end
y=uint8(mdnfimg);


%% Practical 5: Filter Design


%% I. LPF
clc
close all
clear all
a=imread('cameraman.tif');
[r c]=size(a);
subplot(221)
imshow(a);
title('Original Image');
%% user defined method
%% LPF
w1=[1 1 1;1 1 1;1 1 1];
C1=9;
b=u_meanfltr(a,w1,C1);
subplot(222)
imshow(b);
title('Mean filtered Image ');
wfive=[1 1 1 1 1;1 1 1 1 1;1 1 1 1 1;1 1 1 1 1;1 1 1 1 1];
Cfive=25;
bfive=u_meanfltr(a,wfive,Cfive);
subplot(223)
imshow(bfive);
title('Mean filtered Image 5x5 mask');
%%
order=3;
d=u_medfltr(a,order);
figure
subplot(221)
imshow(d);
title('Median filtered Image ');
n=imnoise(a,'salt & pepper',0.1);
subplot(222)
imshow(n);
title('Image with 10% added noise of type salt n pepper');
d1=u_medfltr(n,3);
d2=u_medfltr(n,5);
subplot(223)
imshow(d1);
title('Image after median filtering with 3x3 window ');
subplot(224)
imshow(d2);
title('Image after median filtering with 5x5 window ');
%% HPF
w2=[1 1 1;1 -8 1;1 1 1];
C2=1;
f=u_meanfltr(a,w2,C2);
figure,imshow(f);
title('High Pass Filtered Image');
%% HBF
X1=1.05;
%5 percent
X2=1.1;
%10 percent
X3=1.2;
%20 percent
w3=[-1 -1 -1;-1 9*X1-1 -1;-1 -1 -1];
w4=[-1 -1 -1;-1 9*X2-1 -1;-1 -1 -1];


w5=[-1 -1 -1;-1 9*X3-1 -1;-1 -1 -1];


g=u_meanfltr(a,w3,9); % divide by 9, as per the (1/9)*mask in the theory
h=u_meanfltr(a,w4,9);
l=u_meanfltr(a,w5,9);
figure;
subplot(221)
imshow(g);
title('High Boost Filtered Image with 5% background ');
subplot(222)
imshow(h);
title('High Boost Filtered Image with 10% background ');
subplot(223)
imshow(l);
title('High Boost Filtered Image with 20% background ');

Results:


Conclusion:


Assignment No.:6
Title: Discrete Cosine Transform and Inverse Discrete Cosine

Transform of an image.
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:


Remarks:

Roll Number : 52


Assignment No.6
Title: To Perform Discrete Cosine Transform and Inverse Discrete Cosine Transform of
an image.

Objective:
Read an image file and perform
1) Discrete Cosine Transform (DCT)
2) Inverse Discrete Cosine Transform (IDCT) on the DCT image

Theory:
Need of Transforms:
Transforms are used in image filtering, image data compression, image description,
etc. The rapid growth of digital imaging applications, including desktop publishing,
multimedia, teleconferencing, and high-definition television (HDTV), has increased the
need for effective and standardized image compression techniques.
Among the emerging standards are JPEG, for compression of still
images, and MPEG, for compression of motion video and for compression of video telephony
and teleconferencing.
DCT Importance:
Discrete Cosine Transforms (DCTs) express a function or a signal in terms of a
sum of sinusoids with different frequencies and amplitudes. A DCT operates on a function
at a finite number of discrete data points. The discrete cosine transform (DCT) is a
technique for converting a signal into elementary frequency components. It is widely used
in image compression.
An N*M image is subdivided into a set of smaller n*m sub-images and the DCT is computed
for each sub-image. Within each sub-image only the non-negligible components are retained,
i.e. compression is achieved; the other DCT components are assumed to be zero.
A fast version of the DCT is available, like the FFT, and the calculation can be based
on the FFT. Both implementations offer about the same speed. The Fourier transform is
not actually optimal for image coding, since the DCT can give a higher compression rate for
the same image quality.

One-Dimensional Discrete Cosine Transform:

The 1-D DCT of a sequence f(x) of length N is given as

F(u) = a(u) * Σ[x=0 to N-1] f(x) cos[(2x+1)uπ / 2N], for 0 <= u <= N-1 --------(1)

where a(u) = sqrt(1/N) for u = 0
           = sqrt(2/N) for 1 <= u <= N-1 --------(2)

a(u) is required to make the transform unitary (orthogonal).

One-Dimensional Inverse Discrete Cosine Transform:

f(x) = Σ[u=0 to N-1] a(u) F(u) cos[(2x+1)uπ / 2N], for 0 <= x <= N-1 --------(3)

where a(u) is as defined in equation (2). --------(4)

This equation expresses f as a linear combination of the basis vectors. The coefficients
are the elements of the inverse transform, which may be regarded as reflecting the amount
of each frequency present in the input f. The one-dimensional DCT is useful in
processing one-dimensional signals such as speech waveforms.
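The 1-D DCT and IDCT formulas above can be sketched directly. This is an illustrative pure-Python version (math module only), separate from the manual's MATLAB `u_dct` function; the 4-sample sequence is made up.

```python
# A sketch of the orthonormal 1-D DCT and its inverse from equations
# (1)-(4), in plain Python.

import math

def dct1d(f):
    N = len(f)
    F = []
    for u in range(N):
        a = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
        s = sum(f[x] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                for x in range(N))
        F.append(a * s)
    return F

def idct1d(F):
    N = len(F)
    f = []
    for x in range(N):
        s = 0.0
        for u in range(N):
            a = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            s += a * F[u] * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        f.append(s)
    return f

f = [52.0, 55.0, 61.0, 66.0]       # illustrative sample values
rec = idct1d(dct1d(f))
print(all(abs(a - b) < 1e-9 for a, b in zip(f, rec)))  # IDCT recovers f
```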

Two-Dimensional DCT:

F(u, v) = a(u) a(v) * Σ[x=0 to M-1] Σ[y=0 to N-1] f(x, y) cos[(2x+1)uπ / 2M] cos[(2y+1)vπ / 2N],
for 0 <= u <= M-1, 0 <= v <= N-1 --------(5)

where a(u) = sqrt(1/M) for u = 0
           = sqrt(2/M) for 1 <= u <= M-1 --------(6)

      a(v) = sqrt(1/N) for v = 0
           = sqrt(2/N) for 1 <= v <= N-1

Since the 2D DCT can be computed by applying 1D transforms separately to the rows
and columns, we say that the 2D DCT is separable in the two dimensions.
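The separability claim can be verified numerically: with the 1-D DCT matrix C, the 2-D DCT is C x C', so rows-then-columns and columns-then-rows agree. Plain-Python matrix helpers and a 2x2 test block below are illustrative.

```python
# Separability check for the 2-D DCT: y = C * x * C' gives the same
# result whichever direction is transformed first.

import math

def dct_matrix(N):
    """Orthonormal 1-D DCT matrix, as in the u_dct program."""
    C = [[0.0] * N for _ in range(N)]
    for i in range(N):
        a = math.sqrt(1 / N) if i == 0 else math.sqrt(2 / N)
        for j in range(N):
            C[i][j] = a * math.cos((2 * j + 1) * i * math.pi / (2 * N))
    return C

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(A):
    return [list(r) for r in zip(*A)]

x = [[1.0, 2.0], [3.0, 4.0]]   # illustrative 2x2 block
C = dct_matrix(2)
rows_first = matmul(matmul(C, x), transpose(C))                 # C * x * C'
cols_first = transpose(matmul(matmul(C, transpose(x)), transpose(C)))
same = all(abs(rows_first[i][j] - cols_first[i][j]) < 1e-9
           for i in range(2) for j in range(2))
print(same)
```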


Two-Dimensional IDCT:

f(x, y) = Σ[u=0 to M-1] Σ[v=0 to N-1] a(u) a(v) F(u, v) cos[(2x+1)uπ / 2M] cos[(2y+1)vπ / 2N],
for 0 <= x <= M-1, 0 <= y <= N-1 --------(7)

where a(u) and a(v) are as defined in equation (6). --------(8)

Properties of DCT:
1. Due to its mirror symmetry property it produces less degradation at sub-image
boundaries than the DFT.
2. The DCT has periodicity 2N. Since only real coefficients are used, the memory
requirement of the DCT is less than that of the DFT.
3. Decorrelation - The principal advantage of image transformation is the removal of
redundancy between neighboring pixels. This leads to uncorrelated transform
coefficients which can be encoded independently.
4. Energy Compaction - The DCT exhibits excellent energy compaction for highly
correlated images. An uncorrelated image has its energy spread out, whereas the
energy of a correlated image is packed into the low frequency region.
5. Orthogonality - The DCT basis functions are orthogonal. Thus, the inverse of the
transformation matrix A is equal to its transpose, i.e. inv(A) = A'.
6. Separability - The DCT can be applied along either direction first and then along
the second direction; the coefficients will not change.

Algorithm:

Program:
%% user defined function to find DCT and IDCT
function [y1,y2]=u_dct(x)
[N M]=size(x);
x=double(x);
C=zeros(N,N);
const1=sqrt(1/N);
const2=sqrt(2/N);
for i=1:N
for j=1:M
if i==1
C(i,j)=const1;
else
C(i,j)=const2*cos((2*(j-1)+1)*(i-1)*pi/(2*N));
end
end
end
y1=C*x*C';
y2=C'*y1*C;
x=uint8(x);y2=uint8(y2);
figure,imshow(x);title('Original Image');
figure,imshow(uint8(y1));title('DCT Image');
figure,imshow(uint8(y2));title('Retrieved Image');
%% prac 6
%% dct and idct
clc
close all
clear all
a=imread('lena.tif');
[b,d]=u_dct(a);
c=dct2(a);
figure,imshow(uint8(c));
title('DCT using inbuilt Function');

Result:


Conclusion:


Assignment No.:7
Title: Edge detection using different operators.
Date of performance :13/9/12
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC] Division :A
Signature of the Teacher:
Remarks:

Roll Number : 52


Assignment No.7
Title: Edge detection
Objective: To understand and implement edge detection techniques based on:
1. Ordinary operator.
2. Roberts operator.
3. Sobel operator.
4. Prewitt operator.
5. Canny operator.

Theory:
An edge in an image is a boundary or contour at which a significant change occurs in
some physical aspect of an image, such as illumination or the distances of the visible surfaces
from the viewer. Changes in physical aspects manifest themselves in a variety of ways, including
changes in intensity, color, and texture.
Detecting edges is very useful in a number of contexts. For example, in a typical image
understanding task such as object identification, an essential step is to segment an image into
different regions corresponding to different objects in the scene. Edge detection is the first step in
image segmentation.
A high amount of semantic information about image content is conveyed by the shapes of
objects. In a simplistic view, an 'edge' is a discontinuity of amplitude. In natural images it will rarely
happen that an edge sharply separates two distinct plateaus of amplitude. This type of "step
edge" can be found in synthetically generated graphics images. In natural images, due to
shadows and reflections, the "ramp edge", characterized by slope width 'a' and edge slope 'b',
is a better model. Natural edges are even smoother, as shown below; the slope is not
constant over the edge, so a point of maximum slope can be identified.

Fig. 1: Different types of edges - a step edge between amplitudes A and B; a ramp edge
with slope width a and slope b = (B-A)/a; and a smooth edge with a point of maximum slope.

Detecting edges is a basic operation in image processing.


For detection of edges, we have to use mask processing. In mask processing the mask is shifted
over the entire image to calculate a weighted sum of pixels at each location, e.g. with a 3X3
mask of weights:

w(-1,-1) w(-1,0) w(-1,1)
w(0,-1)  w(0,0)  w(0,1)
w(1,-1)  w(1,0)  w(1,1)

Depending on the coefficients of these masks, we can perform different types of image processing
operations.
EDGE DETECTION:
Consider the following example: an image, its intensity profile along a horizontal line,
the first derivative of that profile, the second derivative, and the modulus.

In the above example, consisting of two images a and b, it can be seen that the images
consist of two main intensity levels. The separation of these intensity levels is the edge of the
image.
It is observed that the intensity profile changes gradually: because of quantization and
sampling, abrupt changes in the image are converted to gradual ones.
The first derivative responds whenever there is a transition in the intensity levels. The
first derivative is positive at the leading edge (brighter side of edge) and negative at the falling
edge (darker side of the edge).
Now, we will see how to apply these derivatives. The derivative can be found using the
gradient operation.


Image gradient:
The tool of choice for finding edge strength and direction at location (x, y) of an image f
is the gradient, denoted grad f and defined as the vector

grad f = [gx, gy]' = [df/dx, df/dy]' --------(1)

This vector has the important property that it points in the direction of the greatest rate of
change of f at location (x, y). The magnitude (length) of the vector, denoted as M(x, y), is

M(x, y) = mag(grad f) = (gx^2 + gy^2)^(1/2) ≈ |gx| + |gy| --------(2)

The direction of the gradient vector is

alpha(x, y) = tan^-1(gy/gx) --------(3)

It tells us the direction of the gradient vector, but the direction of the edge is perpendicular to it.
Using the gradient operators we can find the direction as well as the strength of the edge at a
particular location (x, y).

Computing the Gradient:
Case 1: 1D data, y = f(x).
The slope at x is

dy/dx = lim[Δx→0] (f(x+Δx) - f(x)) / Δx --------(4)


Case 2: 2D data
There are two variables x and y in 2D data. To find the gradient we work with each
variable separately, i.e. we take partial derivatives. Differentiating w.r.t. x with y fixed:

df/dx = lim[h→0] (f(x+h, y) - f(x, y)) / h --------(5)

Differentiating w.r.t. y with x fixed:

df/dy = lim[k→0] (f(x, y+k) - f(x, y)) / k --------(6)

Hence, the final gradient is

grad f = (df/dx) i + (df/dy) j --------(7)

Its magnitude is given as

|grad f| = ((df/dx)^2 + (df/dy)^2)^(1/2) --------(8)

Finding gradient using masks

Consider a 3X3 neighbourhood with Z5 as the origin:

Z1 Z2 Z3
Z4 Z5 Z6
Z7 Z8 Z9

df/dx = (f(x+h, y) - f(x, y)) / h --------(9)
df/dy = (f(x, y+k) - f(x, y)) / k --------(10)

In the discrete domain, h = k = 1:

df/dx = f(x+1, y) - f(x, y) --------(11)
df/dy = f(x, y+1) - f(x, y) --------(12)

From the 3X3 neighbourhood, we have df/dx = Z8-Z5 and df/dy = Z6-Z5.
Therefore, |grad F| = |Z8-Z5| + |Z6-Z5| --------(13)

This is the first order difference gradient. It can be implemented using two masks:

|Z8-Z5|:    1  0      Mask 1 (along x gradient)
           -1  0

|Z6-Z5|:    1 -1      Mask 2 (along y gradient)
            0  0

This pair of masks is known as the ordinary operator.


Roberts mask:
|grad F| = |Z9-Z5| + |Z6-Z8| --------(14)

|Z9-Z5|:    1  0      Mask 1 (along x gradient)
            0 -1

|Z6-Z8|:    0  1      Mask 2 (along y gradient)
           -1  0
Prewitt operator
|grad F| = |(Z7+Z8+Z9)-(Z1+Z2+Z3)| + |(Z3+Z6+Z9)-(Z7+Z4+Z1)| --------(15)

Prewitt edge operator:

Horizontal:  -1 -1 -1      Vertical:  -1 0 1
              0  0  0                 -1 0 1
              1  1  1                 -1 0 1


Sobel operator
|grad F| = |(Z7+2Z8+Z9)-(Z1+2Z2+Z3)| + |(Z3+2Z6+Z9)-(Z7+2Z4+Z1)| --------(16)

Sobel edge operator:

Horizontal:  -1 -2 -1      Vertical:  -1 0 1
              0  0  0                 -2 0 2
              1  2  1                 -1 0 1
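The Sobel masks can be applied by hand to a single neighbourhood. The pixel values below (a dark left half and a bright right half, i.e. a vertical step edge) are illustrative.

```python
# Sobel gradient magnitude |gx| + |gy| at the centre of a 3x3
# neighbourhood containing a vertical step edge.

# Neighbourhood Z1..Z9, row by row
Z = [[10, 10, 200],
     [10, 10, 200],
     [10, 10, 200]]

# Sobel masks from the text
wh = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # "Horizontal" mask
wv = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # "Vertical" mask

gx = sum(wh[i][j] * Z[i][j] for i in range(3) for j in range(3))
gy = sum(wv[i][j] * Z[i][j] for i in range(3) for j in range(3))
print(abs(gx) + abs(gy))   # strong response across the vertical edge
```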

Second Derivative:
The second derivative is positive on the darker side of an edge and negative on
the brighter side. However, it is observed that the second derivative is very sensitive to
noise present in the image, and it gives double edges; hence it is not usually used directly for
edge detection. But, as its nature suggests, we can use the sign of the second derivative to
determine whether a point lies on the darker or the brighter side of an edge. It is also observed
that there are zero crossings in the second derivative, which can be used to identify the location
of an edge whenever there is a gradual transition of intensity.

Masks for second derivative:

Laplacian operator:

grad² f = d²f/dx² + d²f/dy² --------(17)

With the first differences
df/dx = f(x+1, y) - f(x, y) --------(18)
df/dy = f(x, y+1) - f(x, y) --------(19)

the second derivative is given by

grad² f = [d²f/dx²] + [d²f/dy²] --------(20)

where
d²f/dx² = f(x+1, y) + f(x-1, y) - 2f(x, y) --------(21)
d²f/dy² = f(x, y+1) + f(x, y-1) - 2f(x, y) --------(22)

Therefore,
grad² f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y) --------(23)

Considering the 3X3 neighbourhood,
grad² f = Z2 + Z4 + Z6 + Z8 - 4Z5 --------(24)

 0 -1  0
-1  4 -1
 0 -1  0

This is known as the Laplacian operator.
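Equation (23) can be evaluated by hand at the centre of a small neighbourhood. The pixel values below are illustrative: a bright spot on a flat background gives a strong response, while a flat region gives zero.

```python
# Evaluating the discrete Laplacian of equation (23) at the centre
# of a 3x3 neighbourhood.

f = [[10, 10, 10],
     [10, 50, 10],
     [10, 10, 10]]   # a bright spot on a flat background

x, y = 1, 1
lap = f[x + 1][y] + f[x - 1][y] + f[x][y + 1] + f[x][y - 1] - 4 * f[x][y]

flat = [[10] * 3 for _ in range(3)]   # homogeneous area
lap_flat = (flat[x + 1][y] + flat[x - 1][y] + flat[x][y + 1]
            + flat[x][y - 1] - 4 * flat[x][y])
print(lap, lap_flat)   # non-zero at the discontinuity, zero on the flat area
```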


If we also consider the diagonal elements for the second derivative, the mask will be as
follows:

-1 -1 -1
-1  8 -1
-1 -1 -1

This operator is very sensitive to noise and so cannot be used for edge detection directly. To
make it useful, the image is first smoothed using a Gaussian operator, and the smoothed image
is then operated on by the Laplacian operator. The combined operator is called LoG (Laplacian
of Gaussian).


h(x, y) = exp(-(x² + y²) / (2σ²)) --------(25)

where σ is the standard deviation. Let x² + y² = r²; then the Laplacian of the Gaussian can be
written as

grad² h = ((r² - 2σ²) / σ⁴) exp(-r² / (2σ²)) --------(26)

NOTE: LAPLACIAN OPERATORS ARE ISOTROPIC FILTERS; THEIR
RESPONSE IS INDEPENDENT OF THE DIRECTION OF DISCONTINUITIES IN THE
IMAGE. THEY ARE ROTATION INVARIANT.

Image of the LoG operator; Gaussian mask in two dimensions.


Laplacian of Gaussian.

Mask for the LoG operator:

 0  0 -1  0  0
 0 -1 -2 -1  0
-1 -2 16 -2 -1
 0 -1 -2 -1  0
 0  0 -1  0  0

If we compare the mask with the figure above, we observe that its value is maximum at the
centre, then goes to a negative minimum, and finally comes to zero.
Example for LoG operator as compared to Sobel operator.

Original image

LoG operated image

Sobel operated image


It is observed that the LoG-operated image is more sensitive than the Sobel-operated image:
it responds to any transition from one region to another and can be used to extract secondary
information, whereas the Sobel-operated image is better suited for edge detection.
However, edges are not always connected by the Sobel operator, so edge linking is required.

Program:
%% User defined function for convolution using even order matrix mask
function y=u_conv0mask(x,w,C)
b=double(x);
[r c]=size(x);
w=w/C;
[wr wc]=size(w);
wrm=ceil(wr/2);
wcm=ceil(wc/2);
output=zeros(r,c);
for i=wrm:r-wrm
for j=wcm:c-wcm
temp=0;
for wi=1:wr
for wj=1:wc
temp=(w(wi,wj)*b((i-wrm+wi),(j-wcm+wj)))+temp;
end
end
output(i,j)=temp;
end
end
y=uint8(output);
%% User defined function for convolution using odd order matrix mask
function y=u_conv1mask(x,w,C)
b=double(x);
[r c]=size(x);
w=w/C;
[wr wc]=size(w);
wrm=ceil(wr/2);
wcm=ceil(wc/2);
output=zeros(r,c);
for i=wrm:r-wrm+1
for j=wcm:c-wcm+1
temp=0;
for wi=1:wr
for wj=1:wc
temp=(w(wi,wj)*b((i-wrm+wi),(j-wcm+wj)))+temp;
end
end
output(i,j)=temp;
end
end
y=uint8(output);
%% User defined function to detect edge using ordinary method
function y=u_ordinary_edge(x,C)


w1 = [1 0;-1 0];
w2 = [1 -1;0 0];
y1 = u_conv0mask(x,w1,C);
y2 = u_conv0mask(x,w2,C);
y=y1+y2;
% figure('Name','Ordinary Method'),subplot(2,2,1);imshow(x)
% subplot(222);imshow(y1)
% subplot(223);imshow(y2)
% subplot(224);imshow(y)
%% User defined function to detect edge using roberts method
function y=u_roberts_edge(x,C)
w1 = [1 0;0 -1];
w2 = [0 1;-1 0];
y1 = u_conv0mask(x,w1,C);
y2 = u_conv0mask(x,w2,C);
y=y1+y2;
% figure('Name','Roberts Method'),subplot(2,2,1);imshow(x)
% subplot(222);imshow(y1)
% subplot(223);imshow(y2)
% subplot(224);imshow(y)
%% User defined function to detect edge using Prewitt method
function y=u_prewitt_edge(x,C)
w1 = [-1 -1 -1;0 0 0;1 1 1];
w2 = [-1 0 1;-1 0 1;-1 0 1];
y1 = u_conv1mask(x,w1,C);
y2 = u_conv1mask(x,w2,C);
y=y1+y2;
% figure('Name','Prewitt Method'),subplot(2,2,1);imshow(x)
% subplot(222);imshow(y1)
% subplot(223);imshow(y2)
% subplot(224);imshow(y)
%% User defined function to detect edge using Sobel method
function y=u_sobel_edge(x,C)
w1 = [-1 -2 -1;0 0 0;1 2 1];
w2 = [-1 0 1;-2 0 2;-1 0 1];
y1 = u_conv1mask(x,w1,C);
y2 = u_conv1mask(x,w2,C);
y=y1+y2;
% figure('Name','Sobel Method'),subplot(2,2,1);imshow(x)
% subplot(222);imshow(y1)
% subplot(223);imshow(y2)
% subplot(224);imshow(y)
%% User defined function to detect edge using Laplacian method
function y=u_laplacian_edge(x,C)
w=[0 -1 0;-1 4 -1; 0 -1 0];
y = u_conv1mask(x,w,C);
% figure('Name','Laplacian Method'),subplot(1,2,1);imshow(x)
% imshow(y)
%% User defined function to detect edge using LoG method
function y=u_LoG_edge(x,C)
w=[0 0 1 0 0;0 1 2 1 0;1 2 -16 2 1;0 1 2 1 0;0 0 1 0 0];
y = u_conv1mask(x,w,C);
% figure('Name','Laplacian of Gaussian Method'),subplot(1,2,1);imshow(x)
% imshow(y)


%% Prac 7: Edge Detection


clc
close all
clear all
%% inbuilt
I = imread('testpat1.png');
% I = imread('lena.tif');
imshow(I);title('Original Image-->');
In=imnoise(I,'salt & pepper',0.05);figure,imshow(In);title('Noisy Image->');
BW1 = edge(I,'prewitt');
BW2 = edge(I,'canny');
BW3 = edge(I,'log');
BW4 = edge(I,'zerocross');
BW5 = edge(I,'Sobel');
BW6 = edge(I,'roberts');
BW7 = edge(In,'log');
figure, subplot(241),imshow(BW1)
title('Prewitt')
subplot(242), imshow(BW2)
title('Canny')
subplot(243), imshow(BW3)
title('Laplacian of Gaussian')
subplot(244), imshow(BW4)
title('ZeroCross')
subplot(245), imshow(BW5)
title('Sobel')
subplot(246), imshow(BW6)
title('Roberts')
subplot(247), imshow(BW7)
title('Laplacian of Gaussian of Noisy Image')
%% Using user defined functions
UBW1 = u_ordinary_edge(I,1);
UBW2 = u_roberts_edge(I,1);
UBW3 = u_laplacian_edge(I,1);
UBW4 = u_sobel_edge(I,1);
UBW5 = u_prewitt_edge(I,1);
UBW6 = u_LoG_edge(I,1);
UBW7 = u_laplacian_edge(In,1);
UBW8 = u_LoG_edge(In,1);
figure, subplot(241),imshow(logical(UBW1))
title('Ordinary Method')
subplot(242), imshow(logical(UBW2))
title('Roberts')
subplot(243), imshow(UBW3)
title('Laplacian')
subplot(244), imshow(UBW4)
title('Sobel')
subplot(245), imshow(UBW5)
title('Prewitt')
subplot(246), imshow(UBW6)
title('Laplacian of Gaussian')
subplot(247), imshow(UBW7)
title('Laplacian of Noisy Image')
subplot(248), imshow(UBW8)
title('Laplacian of Gaussian of Noisy Image')


Using User Defined Function

Using In Built Function


Conclusion:


Assignment No.:8
Title: Morphological Operations on Binary Image
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:


Remarks:

Roll Number : 52


Assignment No.8

Title: Morphological Operations on Binary Image

Objective:
To perform dilation on input binary image.
To perform erosion on input binary image.
To perform opening on input binary image.
To perform closing on input binary image.

Theory:
Morphological Image Processing:
Mathematical morphology is a tool for extracting image components that are
useful in the representation and description of region shape, such as boundaries, skeletons,
and the convex hull. Interest also lies in morphological techniques for pre- or post-processing,
such as morphological filtering, thinning, and pruning.
In the general case, morphological image processing operates by passing a
structuring element over the image in an activity similar to convolution. The structuring
element can be of any size and can contain any combination of 1s and 0s. At each
pixel position, a specified logical operation is performed between the structuring element
and the underlying binary image. The binary result of that logical operation is stored in
the output image at that pixel position. The effect created depends upon the size and
content of the structuring element and upon the nature of the logical operation.


Dilation:

Dilation is an operation that grows or thickens objects in a binary image. The
specific manner and extent of this thickening is controlled by a shape referred to as the
structuring element.
Mathematically, dilation is defined in terms of set operations. The dilation of A
by B, denoted A ⊕ B, is defined as

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ } -(1)

For a given structuring element:


2D dilation takes place as follows:

Input image:
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 0 0 1 1 0 0
0 0 1 1 0 0 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0

Dilated image:
0 0 0 0 0 0 0 0 0 0
0 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 0
0 1 1 1 0 0 1 1 1 0
0 1 1 1 0 0 1 1 1 0
0 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 0
0 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 0 0 0 0

Erosion:

The process is also known as shrinking. The manner and extent of the shrinking is
controlled by a structuring element. Simple erosion is a process of eliminating all the
boundary points from an object, leaving the object smaller in area by one pixel all around
its perimeter.
The erosion of A by B, denoted A ⊖ B, is defined as

A ⊖ B = { z | (B)z ∩ Aᶜ = ∅ } -(2)

In other words, the erosion of A by B is the set of all structuring element origin
locations where the translated B has no overlap with the background of A.


For a given structuring element, 2D erosion takes place as follows:


Input image:
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 0 0 1 1 0 0
0 0 1 1 0 0 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 1 1 1 1 1 1 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0

Eroded image:
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 0 0 0
0 0 1 0 0 0 1 0 0 0
0 0 1 0 0 0 1 0 0 0
0 0 1 0 0 0 1 0 0 0
0 0 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0

Opening:

Opening generally smoothens the contour of an object and eliminates thin protrusions.
The opening of set A by structuring element B, denoted A∘B, is defined as

A∘B = (A ⊖ B) ⊕ B -(3)

In other words, the opening of A by B is simply the erosion of A by B, followed
by dilation of the result by B.


The opening operation satisfies the following properties:

1) A∘B is a subset of A.
2) If C is a subset of D, then C∘B is a subset of D∘B.
3) (A∘B)∘B = A∘B (idempotence).

Effect of opening using a 3×3 square structuring element.

Closing:
Closing smoothens sections of contours but, as opposed to opening, it generally
fuses narrow breaks and long thin gulfs, eliminates small holes, and fills gaps in the
contour. The closing of set A by structuring element B, denoted A•B, is defined as

A•B = (A ⊕ B) ⊖ B -(4)

In other words, the closing of A by B is simply the dilation of A by B, followed by
erosion of the result by B.
The closing operation satisfies the following properties:
1. A is a subset of A•B.
2. If C is a subset of D, then C•B is a subset of D•B.
3. (A•B)•B = A•B (idempotence).
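The four operations can be sketched with max/min over a 3x3 square structuring element of 1s, as the manual's `u_dilate`/`u_erode` MATLAB functions do. This is an illustrative pure-Python version; the 5x5 test image and the border handling (windows clipped at the image edge) are made up for the demonstration.

```python
# Dilation and erosion with a 3x3 square structuring element of 1s,
# plus opening (erode then dilate) and closing (dilate then erode).

def dilate(img):
    r, c = len(img), len(img[0])
    return [[max(img[x][y]
                 for x in range(max(i - 1, 0), min(i + 2, r))
                 for y in range(max(j - 1, 0), min(j + 2, c)))
             for j in range(c)] for i in range(r)]

def erode(img):
    r, c = len(img), len(img[0])
    return [[min(img[x][y]
                 for x in range(max(i - 1, 0), min(i + 2, r))
                 for y in range(max(j - 1, 0), min(j + 2, c)))
             for j in range(c)] for i in range(r)]

def opening(img):
    return dilate(erode(img))   # erosion followed by dilation

def closing(img):
    return erode(dilate(img))   # dilation followed by erosion

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
# dilation grows the 3x3 square; erosion shrinks it to its centre pixel
print(sum(map(sum, dilate(img))), sum(map(sum, erode(img))))
```

Note that opening this solid square returns the original set, consistent with property 1 of opening.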

SAOE

BE-II/DIP
Effect of closing using a 33 square structuring element


Algorithms:


Program:
%% User defined dilation (odd-order window)
function y=u_dilate(x,wr,wc)
% Grey-scale dilation of image x with a flat wr-by-wc structuring element:
% each output pixel is the maximum over the window centred on it.
[r c]=size(x);
wrm=ceil(wr/2);                % window centre offsets
wcm=ceil(wc/2);
dilated_img=zeros(r,c);
temp=zeros(1,wr*wc);
for i=wrm:r-wrm+1
    for j=wcm:c-wcm+1
        k=1;
        for wi=1:wr            % gather the window into a vector
            for wj=1:wc
                temp(1,k)=x((i-wrm+wi),(j-wcm+wj));
                k=k+1;
            end
        end
        dilated_img(i,j)=max(temp);   % maximum of the window => dilation
    end
end
y=dilated_img;

%% User defined erosion (odd-order window)

function y=u_erode(x,wr,wc)
% Grey-scale erosion of image x with a flat wr-by-wc structuring element:
% each output pixel is the minimum over the window centred on it.
[r c]=size(x);
wrm=ceil(wr/2);
wcm=ceil(wc/2);
eroded_img=zeros(r,c);
temp=zeros(1,wr*wc);
for i=wrm:r-wrm+1
    for j=wcm:c-wcm+1
        k=1;
        for wi=1:wr
            for wj=1:wc
                temp(1,k)=x((i-wrm+wi),(j-wcm+wj));
                k=k+1;
            end
        end
        eroded_img(i,j)=min(temp);    % minimum of the window => erosion
    end
end
y=eroded_img;


%% Prac 8
% Morphological image processing:
% (a) Dilation (b) Erosion (c) Opening (d) Closing
clc
close all
clear all
a=imread('circbw.tif');
subplot(231);imshow(a);title('Original Image');
a_dil=u_dilate(a,3,3);
subplot(232);imshow(a_dil);title('Dilated Image');
a_erode=u_erode(a,3,3);
subplot(233);imshow(a_erode);title('Eroded Image');
closing=u_erode(u_dilate(a,3,3),3,3);       % dilation followed by erosion
subplot(234);imshow(closing);title('Image with applied closing operation');
opening=u_dilate(u_erode(a,3,3),3,3);       % erosion followed by dilation
subplot(235);imshow(opening);title('Image with applied opening operation');
boundary=logical(uint8(a)-uint8(a_erode));  % boundary: A minus its erosion
subplot(236);imshow(boundary);title('Boundary of the Image');
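The last step of the script, boundary extraction, computes β(A) = A − (A ⊖ B). A small pure-Python sketch of the same idea (illustrative; the eroded image is supplied by hand here, since eroding a 3×3 block with a centred 3×3 square structuring element leaves only the centre pixel):

```python
def boundary(img, eroded):
    # Morphological boundary: beta(A) = A minus the erosion of A,
    # i.e. exactly the pixels the erosion stripped away.
    rows, cols = len(img), len(img[0])
    return [[img[r][c] - eroded[r][c] for c in range(cols)] for r in range(rows)]

# A 3x3 block of 1s inside a 5x5 grid
img = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
eroded = [[0] * 5 for _ in range(5)]
eroded[2][2] = 1                 # only the centre pixel survives erosion
b = boundary(img, eroded)        # the 8-pixel ring around the centre
```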

Conclusion:


Assignment No.:9
Title: Pseudo color Image Processing
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:


Remarks:

Roll Number : 52


Assignment No: 9
Title:

To study and implement pseudo colouring on a grayscale bitmap image.

Theory:
Pseudo colour Image Processing:
Pseudo colour (also called false colour) image processing consists of assigning colours to grey
values based on a specified criterion. The term pseudo or false colour is used to differentiate the process
of assigning colours to monochromatic images from the processes associated with true-colour images. The
principal use of pseudo colour is for human visualization and interpretation of grey-scale events in an
image or sequence of images.
Grey level to colour transformation:
Basically, the idea underlying this approach is to perform three independent transformations on
the grey level of any input pixel. The three results are then fed separately into the red, green and blue
channels of a colour television monitor. This method produces a composite image whose colour content
is modulated by the nature of the transformation functions. Note that these are transformations on the
grey-level values of an image and are not functions of position. The method is based on smooth, nonlinear
functions, which, as might be expected, gives the technique considerable flexibility.
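A simple special case of such transformations is intensity slicing, where each band of grey levels is assigned one fixed colour. A pure-Python sketch for illustration (the function name is my own; the eight band colours mirror the palette written by the C program later in this assignment):

```python
def pseudo_colour(grey):
    # Map a grey level (0-255) to an (R, G, B) triple by slicing the
    # range into eight bands of 32 levels, one fixed colour per band.
    bands = [
        (0, 0, 0),        # 0-31    black
        (255, 0, 0),      # 32-63   red
        (0, 255, 0),      # 64-95   green
        (0, 0, 255),      # 96-127  blue
        (0, 255, 255),    # 128-159 cyan
        (255, 0, 255),    # 160-191 magenta
        (255, 255, 0),    # 192-223 yellow
        (255, 255, 255),  # 224-255 white
    ]
    return bands[min(grey // 32, 7)]
```

Applying this mapping to every pixel of a grey-scale image yields the banded false-colour rendering that the assignment produces by rewriting the BMP palette.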
[Figure: block diagram of grey level to colour transformations. Input images f1(x,y), f2(x,y), f3(x,y) pass through transformations T1, T2, ..., Tk to give g1(x,y), g2(x,y), g3(x,y); after additional processing, these feed the red, green and blue channels hR(x,y), hG(x,y), hB(x,y) of the colour monitor.]


The approach shown in the figure is based on a single monochrome image. Often it is of
interest to combine several monochrome images into a single composite image; the block diagram
above illustrates such a technique.
Here individual sensors produce individual monochrome images, each in a different spectral band,
and the additional processing can make use of knowledge about the response characteristics of the
sensors used to generate the images.

RGB colour model:


The RGB colour model is an additive colour model in which red, green, and blue light are
added together in various ways to reproduce a broad array of colours. The name of the model
comes from the initials of the three additive primary colours, red, green, and blue.
The main purpose of the RGB colour model is for the sensing, representation, and display of
images in electronic systems, such as televisions and computers, though it has also been used in
conventional photography. Before the electronic age, the RGB colour model already had a solid
theory behind it, based on human perception of colours.
RGB is a device-dependent colour model: different devices detect or reproduce a given
RGB value differently, since the colour elements (such as phosphors or dyes) and their response
to the individual R, G, and B levels vary from manufacturer to manufacturer, or even in the same
device over time. Thus an RGB value does not define the same colour across devices without
some kind of colour management.

HSI colour model:


Hue is a colour attribute that describes pure colour (pure yellow, orange or red), whereas
saturation gives a measure of the degree to which a pure colour is diluted by white light. The HSI colour
model owes its usefulness to two principal facts. First, the intensity component, I, is decoupled from the
colour information in the image. Second, the hue and saturation components are intimately related to the
way in which human beings perceive colours. These features make the HSI model an ideal tool for developing
image processing algorithms based on the colour-sensing properties of the human visual system.
Examples of the usefulness of the HSI model range from the design of imaging systems for automatically
determining the ripeness of fruits and vegetables to systems for matching colour samples or
inspecting the quality of finished colour goods. In these and similar applications, the key is to base system
operation on colour properties in the way a person might use those properties to perform the task in
question.

Algorithm:

Program:
#include<dos.h>
#include<conio.h>
#include<graphics.h>
#include<stdlib.h>
#include<stdio.h>

void pseudocolour(FILE *fp);

void main()
{
    FILE *fp;
    char filename[15];
    printf("enter source image file ");
    gets(filename);
    fp=fopen(filename,"rb");
    if(fp==NULL)
    {
        printf("can not open source ");
        getch();
        exit(0);
    }
    pseudocolour(fp);
    getch();
}

void pseudocolour(FILE *fp)
{
    int i;
    FILE *fp2;
    unsigned char k;
    char filename[15];
    printf("enter the destination file name ");
    gets(filename);
    fp2=fopen(filename,"wb");
    if(fp2==NULL)
    {
        printf("can not open destination ");
        getch();
        exit(0);
    }
    /* copy the 54-byte BMP file header and info header unchanged */
    for(i=0;i<54;i++)
        fputc(fgetc(fp),fp2);
    /* rewrite the 256-entry grey-scale palette: each 4-byte entry is
       (Blue, Green, Red, reserved); the grey level k selects one of
       eight 32-level bands, each mapped to a fixed colour */
    for(i=0;i<256;i++)
    {
        k=fgetc(fp);
        if(k<32)                          /* black */
        {
            fputc(0,fp2);   fputc(0,fp2);   fputc(0,fp2);
        }
        else if(k<64)                     /* red */
        {
            fputc(0,fp2);   fputc(0,fp2);   fputc(255,fp2);
        }
        else if(k<96)                     /* green */
        {
            fputc(0,fp2);   fputc(255,fp2); fputc(0,fp2);
        }
        else if(k<128)                    /* blue */
        {
            fputc(255,fp2); fputc(0,fp2);   fputc(0,fp2);
        }
        else if(k<160)                    /* cyan */
        {
            fputc(255,fp2); fputc(255,fp2); fputc(0,fp2);
        }
        else if(k<192)                    /* magenta */
        {
            fputc(255,fp2); fputc(0,fp2);   fputc(255,fp2);
        }
        else if(k<224)                    /* yellow */
        {
            fputc(0,fp2);   fputc(255,fp2); fputc(255,fp2);
        }
        else                              /* white */
        {
            fputc(255,fp2); fputc(255,fp2); fputc(255,fp2);
        }
        fputc(0,fp2);                     /* reserved byte */
        fgetc(fp); fgetc(fp); fgetc(fp);  /* skip rest of the old entry */
    }
    /* copy the pixel data (palette indices) unchanged */
    while(!feof(fp))
        fputc(fgetc(fp),fp2);
    fcloseall();
    printf("pseudocolouring successful");
}


Result:
enter source image file C:\TC\BIN\lena.bmp
enter the destination file name coloredlena.bmp
pseudocolouring successful

lena.bmp

coloredlena.bmp

Conclusion:


Assignment No.:10
Title: Creating Noisy Images And Filtering Using MATLAB
Date of performance :
Date of submission :
Name of Student: Prassanjeet Singh Dhanjal
Class : BE [E & TC]

Division : A

Signature of the Teacher:


Remarks:

Roll Number : 52


Assignment No. 10

Title: Creating noisy image and filtering using MATLAB

Objective:
To create images corrupted by various noises and to perform spatial filtering on them
using MATLAB.
To observe the effect of various types of filters on the noisy images.

Theory:

As with image enhancement, the ultimate goal of restoration techniques is to improve
an image in some predefined sense. Although there are areas of overlap, image
enhancement is largely a subjective process, while image restoration is for the most part
an objective process. Restoration attempts to reconstruct or recover an image that has
been degraded by various noises; restoration techniques are therefore oriented towards
modelling the degradation and applying the inverse process in order to recover the original
image.
This approach usually involves formulating a criterion of goodness that will yield an
optimal estimate of the desired result. By contrast, enhancement techniques are basically
heuristic procedures designed to manipulate an image in order to take advantage of
psychophysical aspects of the human visual system. For example, contrast stretching is
considered an enhancement technique because it is based primarily on the pleasing
aspects it might present to the viewer, whereas removal of image blur by applying a
deblurring function is considered a restoration technique.


Spatial and frequency properties of noise


Relevant to our discussion are parameters that define the spatial characteristics of
noise, and whether the noise is correlated with the image. Frequency properties refer to
the frequency content of noise in the Fourier sense (i.e. as opposed to the electromagnetic
spectrum). For example, when the Fourier spectrum of noise is constant, the noise is
usually called white noise. This terminology is a carryover from the physical properties of
white light, which contains nearly all frequencies in the visible spectrum in equal
proportions; the Fourier spectrum of a function containing all frequencies in equal
proportions is a constant.
With the exception of spatially periodic noise, we assume that noise is independent
of spatial coordinates and that it is uncorrelated with respect to the image itself (i.e. there
is no correlation between pixel values and the values of noise components). Although
these assumptions are at least partially invalid in some applications (quantum-limited
imaging, such as in X-ray and nuclear-medicine imaging, is a good example), the
complexities of dealing with spatially dependent and correlated noise are beyond the
scope of our discussion.

Noise Models
The principal sources of noise in digital images arise during image acquisition
(digitization) and/or transmission. The performance of imaging sensors is affected by a
variety of factors, such as environmental conditions during image acquisition, and by the
quality of the sensing elements themselves. For instance, in acquiring images with a CCD
camera, light level and sensor temperature are major factors affecting the amount of noise
in the resulting image.
Images are corrupted during transmission principally due to interference in the
channel used for transmission.


Gaussian noise

Because of its mathematical tractability in both the spatial and frequency domains,
Gaussian (also called normal) noise models are used frequently in practice. In fact,
this tractability is so convenient that it often results in Gaussian models being used
in situations in which they are marginally applicable at best.
The PDF of a Gaussian random variable, z, is given by

p(z) = (1 / (√(2π) σ)) e^(−(z − μ)² / (2σ²))

where z represents grey level, μ is the mean (average) value of z, and σ is the
standard deviation (σ² is the variance of z).
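As a quick sanity check of this density (a small Python sketch; the mean grey level 128 and spread 20 are arbitrary illustrative choices): the peak height at z = μ is 1/(√(2π)σ), and the density sums to 1 over a range wide enough to cover the distribution.

```python
import math

def gaussian_pdf(z, mu, sigma):
    # p(z) = (1 / (sqrt(2*pi) * sigma)) * exp(-(z - mu)**2 / (2 * sigma**2))
    return math.exp(-(z - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

mu, sigma = 128.0, 20.0          # illustrative mean grey level and spread
peak = gaussian_pdf(mu, mu, sigma)
# Riemann sum over the grey-level range 0..255 (step of one grey level);
# with sigma = 20 this range covers more than six standard deviations
total = sum(gaussian_pdf(z, mu, sigma) for z in range(256))
```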


Rayleigh noise

The PDF of Rayleigh noise is given by

p(z) = (2/b)(z − a) e^(−(z − a)²/b)   for z ≥ a
p(z) = 0                              for z < a

The mean and variance of this density are given by

Mean = a + √(πb/4)
Variance = b(4 − π)/4

The Rayleigh density can be quite useful for approximating skewed histograms.
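These moment formulas can be verified numerically (Python sketch; the parameters a = 10, b = 50 are arbitrary illustrative choices) by integrating the density with a simple Riemann sum:

```python
import math

def rayleigh_pdf(z, a, b):
    # p(z) = (2/b) * (z - a) * exp(-(z - a)**2 / b) for z >= a, else 0
    if z < a:
        return 0.0
    return (2.0 / b) * (z - a) * math.exp(-((z - a) ** 2) / b)

a, b = 10.0, 50.0
step = 0.01
zs = [a + i * step for i in range(20000)]   # covers the support well past the tail
total = sum(rayleigh_pdf(z, a, b) * step for z in zs)           # should be ~1
mean = sum(z * rayleigh_pdf(z, a, b) * step for z in zs)        # ~ a + sqrt(pi*b/4)
var = sum((z - mean) ** 2 * rayleigh_pdf(z, a, b) * step for z in zs)  # ~ b(4-pi)/4
```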


Erlang (Gamma) noise

The PDF of Erlang noise is given by

p(z) = (a^b z^(b−1) / (b − 1)!) e^(−az)   for z ≥ 0
p(z) = 0                                  for z < 0

where a > 0, b is a positive integer, and ! indicates factorial. The mean and
variance of this density are given by

Mean = b/a
Variance = b/a²


Exponential noise

The PDF of exponential noise is given by

p(z) = a e^(−az)   for z ≥ 0
p(z) = 0           for z < 0

where a > 0. The mean and variance of this density function are

Mean = 1/a
Variance = 1/a²

Note that this PDF is a special case of the Erlang PDF, with b = 1.
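The special-case relationship can be checked directly (Python sketch; the parameter value a = 2 is an arbitrary illustrative choice):

```python
import math

def erlang_pdf(z, a, b):
    # p(z) = a**b * z**(b-1) * exp(-a*z) / (b-1)!  for z >= 0, else 0
    if z < 0:
        return 0.0
    return a ** b * z ** (b - 1) * math.exp(-a * z) / math.factorial(b - 1)

def exponential_pdf(z, a):
    # p(z) = a * exp(-a*z) for z >= 0, else 0
    return a * math.exp(-a * z) if z >= 0 else 0.0

a = 2.0
# with b = 1 the Erlang density collapses to the exponential density
same = all(abs(erlang_pdf(z, a, 1) - exponential_pdf(z, a)) < 1e-12
           for z in (0.0, 0.5, 1.0, 3.0))
```

Consistently, the Erlang moments b/a and b/a² reduce to the exponential moments 1/a and 1/a² when b = 1.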


Uniform noise

The PDF of uniform noise is given by

p(z) = 1/(b − a)   for a ≤ z ≤ b
p(z) = 0           otherwise

The mean and variance of this density function are given by

Mean = (a + b)/2
Variance = (b − a)²/12


Impulse (salt-and-pepper) noise

The PDF of (bipolar) impulse noise is given by

p(z) = Pa   for z = a
p(z) = Pb   for z = b
p(z) = 0    otherwise

If b > a, grey level b will appear as a light dot in the image and, conversely,
level a will appear as a dark dot. If neither probability is zero, and especially
if they are approximately equal, impulse noise values will resemble salt-and-pepper
granules randomly distributed over the image. For this reason, bipolar impulse
noise is also called salt-and-pepper noise.
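Because each impulse replaces a pixel entirely rather than perturbing it, a median filter (used by the program in this assignment) removes it cleanly. A pure-Python sketch for illustration (the write-up's own filters are MATLAB functions; the impulse positions here are fixed by hand so the result is deterministic):

```python
from statistics import median

def median3(img):
    # 3x3 median filter; border pixels are left unchanged
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = int(median(window))
    return out

flat = [[100] * 9 for _ in range(9)]      # constant grey test image
noisy = [row[:] for row in flat]
# deterministic impulse noise: a few salt (255) and pepper (0) pixels
for r, c, v in [(2, 3, 255), (4, 4, 0), (5, 7, 255), (6, 2, 0), (7, 5, 255)]:
    noisy[r][c] = v
restored = median3(noisy)                 # every isolated impulse is removed
```

Each 3×3 window here contains at most two corrupted pixels out of nine, so the window median is always the true grey value and the filter restores the image exactly.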

Algorithm:

Program:
%% Prac 10
%% Creating noisy images and filtering using MATLAB
clc
clear all
close all
a=imread('cameraman.tif');subplot(231),imshow(a);title('Original Image');
[r c]=size(a);
a1=imnoise(a,'gaussian',0.05);subplot(232),imshow(a1);
title('Image with Gaussian Noise');
a2=imnoise(a,'salt & pepper',0.05);subplot(233),imshow(a2);
title('Image with Salt & Pepper Noise');
a3=imnoise(a,'localvar',rand(r,c));subplot(234),imshow(a3);
title('Image with Localvar Noise');
a4=imnoise(a,'poisson');subplot(235),imshow(a4);
title('Image with Poisson Noise');
a5=imnoise(a,'speckle',0.05);subplot(236),imshow(a5);
title('Image with Speckle Noise');
% 3x3 median filtering of each noisy image
figure,subplot(231);imshow(a);title('Original Image');
f1=u_medfltr(a1,3);subplot(232),imshow(f1);
title('Median Filter applied on Gaussian Noise');
f2=u_medfltr(a2,3);subplot(233),imshow(f2);
title('Median Filter applied on Salt & Pepper Noise');
f3=u_medfltr(a3,3);subplot(234),imshow(f3);
title('Median Filter applied on Localvar Noise');
f4=u_medfltr(a4,3);subplot(235),imshow(f4);
title('Median Filter applied on Poisson Noise');
f5=u_medfltr(a5,3);subplot(236),imshow(f5);
title('Median Filter applied on Speckle Noise');
% 3x3 mean (averaging) filtering of each noisy image
w=[1 1 1;1 1 1;1 1 1];
figure,subplot(231);imshow(a);title('Original Image');
F1=u_meanfltr(a1,w,9);subplot(232),imshow(F1);
title('Mean Filter applied on Gaussian Noise');
F2=u_meanfltr(a2,w,9);subplot(233),imshow(F2);
title('Mean Filter applied on Salt & Pepper Noise');
F3=u_meanfltr(a3,w,9);subplot(234),imshow(F3);
title('Mean Filter applied on Localvar Noise');
F4=u_meanfltr(a4,w,9);subplot(235),imshow(F4);
title('Mean Filter applied on Poisson Noise');
F5=u_meanfltr(a5,w,9);subplot(236),imshow(F5);
title('Mean Filter applied on Speckle Noise');


Conclusion:
