
Computer Vision and Image processing

Chapter 3 cont..
Image Filtering cont..
Woldia University
IOT
March 2021
Concept of Mask

 What is a mask?
 A mask is a filter.
 Masking is also known as filtering
 A mask is a small matrix whose values are called weights.
 Each mask has an origin, which is usually one of its positions.
 The origin of a symmetric mask is usually its center pixel position.
 For a nonsymmetric mask, any pixel location may be chosen as the origin (depending on the intended use).
Concept of Mask

 WHAT IS FILTERING?
 The process of filtering is also known as convolving a mask with an image. Because this process is the same as convolution, filter masks are also known as convolution masks.
 HOW IS IT DONE?
 The general process of filtering and applying masks consists of moving the filter mask from point to point in an image. At each point (x,y) of the original image, the response of the filter is calculated by a predefined relationship.
 All the filter values are predefined and standard.
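 As an illustration only (not part of the original slides), here is a minimal Python/OpenCV sketch of this mask-sliding process; "input.jpg" is a placeholder file name, and cv2.filter2D computes the weighted-sum response at every point.

import cv2
import numpy as np

# Read a grayscale image ("input.jpg" is a placeholder file name).
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# A 3x3 averaging mask: all weights are 1/9, origin at the centre.
mask = np.ones((3, 3), np.float32) / 9.0

# filter2D moves the mask from point to point and computes the
# predefined weighted-sum response at each (x, y).
response = cv2.filter2D(img, -1, mask)

cv2.imwrite("response.jpg", response)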
Concept of Mask

 WHY ARE FILTERS USED?


 Filters are applied to images for multiple purposes. The two most common uses are the following:
 Filters are used for blurring and noise reduction
 Filters are used for edge detection and sharpness
Concept of Mask

 BLURRING AND NOISE REDUCTION:


 Filters are most commonly used for blurring and for noise reduction.
 Blurring is used in preprocessing steps, such as the removal of small details from an image prior to large object extraction.
 The common masks for blurring are:
 1. Box filter
 2. Weighted average filter
 In the process of blurring we reduce the edge content in an image and try to make the transitions
between different pixel intensities as smooth as possible.
 Noise reduction is also possible with the help of blurring.
Concept of Mask

 EDGE DETECTION AND SHARPNESS:


 Masks or filters can also be used for edge detection in an image and to increase sharpness
of an image.
 WHAT ARE EDGES?
 Sudden changes or discontinuities in an image are called edges. Significant transitions in an image are called edges. A picture with edges is shown below.
Concept of Mask

<Original image> <Same image with edges>


Concept of Blurring

 In blurring, we simply blur an image.


 An image looks sharper or more detailed if we are able to perceive all the objects and their shapes in it correctly.
 For example, an image with a face looks clear when we are able to identify the eyes, ears, nose, lips, forehead, etc. very clearly.
 The shape of an object is due to its edges.
 So in blurring, we simply reduce the edge content and make the transition from one color to the other very smooth.
Concept of Blurring

 BLURRING VS. ZOOMING:
 You might have seen a blurred image when you zoom into an image. When you zoom into an image using pixel replication and the zooming factor is increased, you see a blurred image. This image also has fewer details, but it is not true blurring.
 This is because in zooming you add new pixels to an image, which increases the overall number of pixels in the image, whereas in blurring the number of pixels in the normal image and the blurred image remains the same.
Concept of Blurring

 E.g. A blurred image.


Concept of Blurring

Types of filters:
 Blurring can be achieved in many ways. The common types of filters that are used to perform blurring are:
 Mean filter
 Weighted average filter
 Gaussian filter
Concept of Blurring

 MEAN FILTER
 The mean filter is also known as the box filter or average filter. A mean filter has the following properties:
 The size of the filter must be odd: 3x3, 5x5, 7x7, etc.
 The elements of the mask must be positive.
 The size of the mask determines the degree of smoothing.
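 A possible sketch (assuming OpenCV is available; "input.jpg" is a placeholder file name): cv2.blur applies exactly this normalized box/mean filter, and a larger odd size gives more smoothing.

import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Normalized box (mean) filter of different odd sizes.
blur_3 = cv2.blur(img, (3, 3))
blur_7 = cv2.blur(img, (7, 7))   # larger mask -> stronger smoothing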
Concept of Blurring

 The result of a 3x3 mask on an image is:

<Original image> <Blurred image>

 Maybe the results are not very clear.
 Let’s increase the blurring.
 The blurring can be increased by increasing the size of the mask.
 The larger the size of the mask, the more the blurring.
 This is because with a greater mask, a greater number of pixels are covered and one smooth transition is defined.
Concept of Blurring

 The result of a 5x5 mask on an image is:

<Original image> <Blurred image>
Concept of Blurring

 The result of a 7x7 mask on an image is:

<Original image> <Blurred image>

 In the same way, if we increase the mask size, the blurring becomes greater; the results are shown here.
Concept of Blurring

 The result of a 9x9 mask on an image is:

<Original image> <Blurred image>

 In the same way, if we increase the mask size, the blurring becomes greater; the results are shown here.
Concept of Blurring

 The result of an 11x11 mask on an image is:

<Original image> <Blurred image>

 In the same way, if we increase the mask size, the blurring becomes greater; the results are shown here.
Concept of Blurring

 Example of the average filter:

100 110 100 120
110 100 100 120
100 100 120 110

 Assume the marked pixel contains a corrupted value.
 This pixel will be replaced by the average of the neighboring pixels plus the pixel itself,
 i.e. (100+110+100+110+100+100+100+100+120)/9 ≈ 104
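 A quick NumPy check of this average (assuming the marked pixel is the centre of the left 3x3 block of the table above):

import numpy as np

# 3x3 neighbourhood around the assumed corrupted pixel (from the table above).
patch = np.array([[100, 110, 100],
                  [110, 100, 100],
                  [100, 100, 120]], dtype=np.float64)

print(patch.sum())      # 940
print(patch.sum() / 9)  # 104.44..., the value that replaces the corrupted pixel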
Concept of Blurring

 Application: the average filter is generally used for noise reduction, but it sometimes produces undesirable side effects.
 Suitable filter sizes for the averaging process are 3x3, 5x5, 7x7, etc. (the size is odd).
 i.e. at the image border, the center pixel has missing neighbours (see the sketch below):
 We have to extend the image to get the neighbouring pixels,
 by using zero padding or border replication.
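 A small sketch of both border strategies with OpenCV (cv2.copyMakeBorder); the 3x3 array is just the example patch from above.

import cv2
import numpy as np

img = np.array([[100, 110, 100],
                [110, 100, 100],
                [100, 100, 120]], dtype=np.uint8)

# Zero padding: a 1-pixel border filled with 0s.
zero_padded = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)

# Border replication: the border repeats the nearest edge pixel.
replicated = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_REPLICATE)

print(zero_padded)
print(replicated)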
Concept of Blurring

Example: Box filter

Image filtering with a 3x3 box filter (all weights equal to 1, result normalized by 1/9):

Kernel:
1 1 1
1 1 1
1 1 1

Input image (10x10):
 0  0  0  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0  0 90 90 90 90 90  0  0
 0  0  0 90 90 90 90 90  0  0
 0  0  0 90 90 90 90 90  0  0
 0  0  0 90  0 90 90 90  0  0
 0  0  0 90 90 90 90 90  0  0
 0  0  0  0  0  0  0  0  0  0
 0  0 90  0  0  0  0  0  0  0
 0  0  0  0  0  0  0  0  0  0

The mask is moved from point to point; at each position the output value is the average of the 3x3 neighbourhood (the first output values along the top row are 0, 10, 20, 30, 30, ...).

Filtered result (interior 8x8):
 0 10 20 30 30 30 20 10
 0 20 40 60 60 60 40 20
 0 30 60 90 90 90 60 30
 0 30 50 80 80 90 60 30
 0 30 50 80 80 90 60 30
 0 20 30 50 50 60 40 20
10 20 30 30 30 30 20 10
10 10 10  0  0  0  0  0

Credit: S. Seitz
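A short NumPy sketch (not from the original slides) that reproduces the interior 8x8 result above by averaging each 3x3 neighbourhood:

import numpy as np

# The 10x10 input image from the worked example above.
img = np.array([
    [0, 0, 0,  0,  0,  0,  0,  0, 0, 0],
    [0, 0, 0,  0,  0,  0,  0,  0, 0, 0],
    [0, 0, 0, 90, 90, 90, 90, 90, 0, 0],
    [0, 0, 0, 90, 90, 90, 90, 90, 0, 0],
    [0, 0, 0, 90, 90, 90, 90, 90, 0, 0],
    [0, 0, 0, 90,  0, 90, 90, 90, 0, 0],
    [0, 0, 0, 90, 90, 90, 90, 90, 0, 0],
    [0, 0, 0,  0,  0,  0,  0,  0, 0, 0],
    [0, 0, 90, 0,  0,  0,  0,  0, 0, 0],
    [0, 0, 0,  0,  0,  0,  0,  0, 0, 0]], dtype=np.float64)

out = np.zeros((8, 8), dtype=int)
for i in range(1, 9):            # skip the 1-pixel border
    for j in range(1, 9):
        out[i - 1, j - 1] = img[i - 1:i + 2, j - 1:j + 2].sum() / 9

print(out)                       # matches the 8x8 result table above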
Gaussian blur vs Mean blur

 Both Gaussian blur and mean blur use the concept of kernel convolution.
 The kernel size should be much smaller than the actual image matrix.
E.g. 1 (mean blur)

<Original image>
17 14 13 09 17
21 64 62 41 19
42 54 61 52 40
41 30 31 34 38
20 24 40 38 35

<Kernel> (mean blur)
1 1 1
1 1 1
1 1 1

The top-left 3x3 patch under the kernel is
17 14 13
21 64 62
42 54 61

<Result> 348/9 = 38.67
Gaussian blur vs Mean blur

 Both Gaussian blur and mean blur use the concept of kernel convolution.
 The kernel size should be much smaller than the actual image matrix.
E.g. 2 (mean blur)

<Original image>
50 50 100 100
50 50 100 100
50 50 100 100
50 50 100 100
50 50 100 100
50 50 100 100

<Kernel> (mean blur)
1 1 1
1 1 1
1 1 1

The top-left 3x3 patch under the kernel is
50 50 100
50 50 100
50 50 100

<Result> 600/9 = 66.67
Gaussian blur vs Mean blur

 Both Gaussian blur and mean blur use the concept of kernel convolution.
 The kernel size should be much smaller than the actual image matrix.
E.g. 3 (Gaussian blur, same image as E.g. 2)

<Original image>
50 50 100 100
50 50 100 100
50 50 100 100
50 50 100 100
50 50 100 100
50 50 100 100

<Kernel> (Gaussian blur)
1 2 1
2 4 2
1 2 1

Element-wise products of the kernel with the top-left 3x3 patch:
 50 100 100
100 200 200
 50 100 100

<Result> 1000/16 = 62.5
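 The two kernels above can be applied directly with OpenCV's cv2.filter2D (a sketch; "input.jpg" is a placeholder file name).

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Mean (box) kernel: equal weights, normalized by 9.
mean_kernel = np.ones((3, 3), np.float32) / 9.0

# Gaussian-style kernel from E.g. 3, normalized by 16.
gauss_kernel = np.array([[1, 2, 1],
                         [2, 4, 2],
                         [1, 2, 1]], np.float32) / 16.0

mean_blur = cv2.filter2D(img, -1, mean_kernel)
gauss_blur = cv2.filter2D(img, -1, gauss_kernel)

# OpenCV also provides these directly:
# cv2.blur(img, (3, 3)) and cv2.GaussianBlur(img, (3, 3), 0)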
Concept of Edge Detection

<Figure: intensity profile showing a step from high intensity to low intensity>
 We are going to see another type of kernel convolution called edge detection.
Concept of Edge Detection

 What are edges?

 Sudden changes or discontinuities in an image are called edges.
 Significant transitions in an image are called edges.
 There are three types of edges:
 Horizontal edges
 Vertical edges
 Diagonal edges
Concept of Edge Detection

Basic idea:
 Look for a neighbourhood with strong signs of change.
 Is there any big discontinuity?

Issues to consider:
 The size of the neighbourhood, e.g. k = 1.
 What metric represents a “change”? Ans: a threshold.
Concept of Edge Detection

 WHY DETECT/EXTRACT EDGES?


 Most of the shape information of an image is enclosed in its edges.
 So first we detect these edges in an image using these filters, and then, by enhancing those areas of the image which contain edges, the sharpness of the image increases and the image becomes cleaner.
 Edges and lines are used in:
 Object recognition
 Image matching (e.g. stereo, mosaics)
 Document analysis
 Horizon detection
 Line-following robots
 And many more applications
Concept of Edge Detection

 Here are some of the masks for edge detection (all of these are linear, derivative filters):
 Prewitt Operator
 Sobel Operator
 Robinson Compass Masks
 Kirsch Compass Masks
 Laplacian Operator
Concept of Edge Detection

 PREWITT OPERATOR: The Prewitt operator is used for detecting edges horizontally and vertically.
 SOBEL OPERATOR: The Sobel operator is very similar to the Prewitt operator. It is also a derivative mask and is used for edge detection. It also calculates edges in both the horizontal and vertical directions.
 ROBINSON COMPASS MASKS: This operator is also known as a direction mask. In this operator we take one mask and rotate it in all 8 major compass directions to calculate the edges in each direction.
Concept of Edge Detection

 KIRSCH COMPASS MASKS: The Kirsch compass mask is also a derivative mask which is used for finding edges. The Kirsch mask is also used for calculating edges in all directions.
 LAPLACIAN OPERATOR: The Laplacian operator is also a derivative operator which is used to find edges in an image. The Laplacian is a second-order derivative mask. It can be further divided into the positive Laplacian and the negative Laplacian.
 All these masks are used to find edges.
 Some find edges horizontally and vertically, some find them in one direction only, and some find them in all directions.
 The next concept that comes after this is sharpening, which can be done once the edges are extracted from the image.
Concept of Edge Detection

 Sharpening:
 Sharpening is the opposite of blurring.
 In blurring we reduce the edge content, and in sharpening we increase the edge content.
 So in order to increase the edge content in an image, we have to find the edges first.
 Edges can be found by any of the methods described above, using any operator.
 After finding the edges, we add those edges back onto the image; the image then has more edge content and looks sharper (see the sketch below).
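 One way to realize this idea in code is unsharp masking, a sketch under the assumption that the edge content is taken as the image minus its blurred copy; the strength factor 1.5 and "input.jpg" are arbitrary placeholders.

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Edge/detail content: the difference between the image and a blurred copy.
blurred = cv2.GaussianBlur(img, (5, 5), 0)
edges = img - blurred

# Add the edge content back to the image to increase sharpness.
sharpened = np.clip(img + 1.5 * edges, 0, 255).astype(np.uint8)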
Concept of Edge Detection

<Original image> <Sharpened image>
Concept of Edge Detection

1. Prewitt Operator:

 It was developed by Judith M. S. Prewitt.
 It calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction.
 An image gradient is a directional change in the intensity or color of an image. The gradient of the image is one of the fundamental building blocks of image processing.
Concept of Edge Detection

 Prewitt operator is used for edge detection in an image. It detects two types of edges:
 Horizontal edges
 Vertical Edges
Concept of Edge Detection

 Edges are calculated by using the difference between corresponding pixel intensities of an image.
 All the masks that are used for edge detection are also known as derivative masks.
 Because an image is also a signal, changes in the signal can only be calculated using differentiation.
 That is why these operators are also called derivative operators or derivative masks.
Concept of Edge Detection

 All derivative masks should have the following properties:

 Opposite signs should be present in the mask.
 The sum of the mask should be equal to zero.
 More weight means more edge detection.
 The Prewitt operator provides us two masks, one for detecting edges in the horizontal direction and another for detecting edges in the vertical direction.
Concept of Edge Detection

 Vertical direction:

-1 0 1
-1 0 1
-1 0 1

 This mask finds the edges in the vertical direction because the column of zeros lies in the vertical direction. When you convolve this mask with an image, it gives you the vertical edges in the image.
Concept of Edge Detection

 How does it work?
 It simply works like a first-order derivative and calculates the difference of pixel intensities in an edge region.
 As the center column is zero, it does not include the original values of the image but rather calculates the difference of the right and left pixel values around that edge.
 This increases the edge intensity, so the edge becomes enhanced compared to the original image.
Concept of Edge Detection

 Horizontal direction:

-1 -1 -1
 0  0  0
 1  1  1

 This mask finds edges in the horizontal direction because the row of zeros lies in the horizontal direction. When you convolve this mask with an image, it makes the horizontal edges in the image prominent.
 It calculates the difference among the pixel intensities of a particular edge,
 i.e. the difference between the pixel intensities above and below the particular edge,
 thus increasing the sudden change of intensities and making the edge more visible.
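 Both Prewitt masks can be applied with cv2.filter2D, as in this sketch ("input.jpg" is a placeholder file name).

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# The two Prewitt masks from the slides above.
prewitt_vertical = np.array([[-1, 0, 1],
                             [-1, 0, 1],
                             [-1, 0, 1]], np.float32)
prewitt_horizontal = np.array([[-1, -1, -1],
                               [ 0,  0,  0],
                               [ 1,  1,  1]], np.float32)

# Note: with ddepth=-1 on a uint8 image, negative responses are clipped to 0;
# use ddepth=cv2.CV_32F to keep signed gradient values.
vertical_edges = cv2.filter2D(img, -1, prewitt_vertical)
horizontal_edges = cv2.filter2D(img, -1, prewitt_horizontal)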
Concept of Edge Detection

Input image After applying vertical mask After applying horizontal mask
Concept of Edge Detection

<Input image> <After edge detection>


Concept of Edge Detection

2. Sobel operator:

 The Sobel operator is very similar to the Prewitt operator, and it is also a kernel convolution process.
 The major difference in the Sobel operator is that the coefficients of the masks are not fixed; they can be adjusted according to our requirements as long as they do not violate any property of derivative masks.
 It is also a derivative mask and is used for edge detection.
 Like the Prewitt operator, the Sobel operator is also used to detect two kinds of edges in an image:
 Vertical direction
 Horizontal direction
Concept of Edge Detection

 Vertical mask (Gx):       Horizontal mask (Gy):

   -1 0 1                      -1 -2 -1
   -2 0 2                       0  0  0
   -1 0 1                       1  2  1

 Gradient magnitude: G = sqrt(Gx^2 + Gy^2)
 Edge angle: theta = atan(Gy/Gx) (to find the angle of the edge!)
Concept of Edge Detection

 Vertical mask (Gx):       Horizontal mask (Gy):

   -1 0 1                      -1 -2 -1
   -2 0 2                       0  0  0
   -1 0 1                       1  2  1

 G = sqrt(Gx^2 + Gy^2), theta = atan(Gy/Gx)
<To find the angle of the edge!>
<Colour added to image to indicate angle of orientation>
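 In OpenCV the Sobel gradients, magnitude, and angle can be computed as in this sketch ("input.jpg" is a placeholder file name).

import cv2
import numpy as np

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Gx and Gy with the 3x3 Sobel masks shown above.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

magnitude = np.sqrt(gx ** 2 + gy ** 2)   # G = sqrt(Gx^2 + Gy^2)
angle = np.arctan2(gy, gx)               # angle of the edge, atan(Gy/Gx)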
Concept of Edge Detection

Original Image After applying vertical mask After applying Horizontal mask
Concept of Edge Detection

 COMPARISON:
 As you can see, in the first picture, on which we applied the vertical mask, all the vertical edges are more visible than in the original image. Similarly, in the second picture we applied the horizontal mask, and as a result all the horizontal edges are visible.
 So in this way we can detect both horizontal and vertical edges in an image. Also, if you compare the result of the Sobel operator with the Prewitt operator, you will find that the Sobel operator finds more edges, or makes edges more visible, compared to the Prewitt operator.
 This is because in the Sobel operator we have allotted more weight to the pixel intensities around the edges.
Concept of Edge Detection

 Let us apply more weight to the mask:

 We can also see that the more weight we apply to the mask, the more edges it will find for us. Since there are no fixed coefficients in the Sobel operator, here is another weighted operator:

-1 0 1
-5 0 5
-1 0 1
Concept of Edge Detection
 Example of how a Sobel operator works. <Input image>
Concept of Edge Detection

 Mark the changes where the image goes from high contrast to low contrast.
 We expect that those changes mean edges in the image.
 Equation: δContrast = δState
 The change in contrast = the change in state.
 We are just going to detect horizontally and vertically.
Concept of Edge Detection
 Vertical scan of the image
Concept of Edge Detection
 Horizontal scan of the image
Concept of Edge Detection

 The final edge map look like:


Concept of Edge Detection

3. Canny Edge Detector

 It takes the Sobel operator and makes its output look better, or rather makes it more useful for image analysis.
 It was developed by John F. Canny in 1986.
 Simply put, the input of the Canny operator is the output of Sobel.

Input image -> Convert to gray scale -> Gaussian blur -> Apply Sobel operator in the x, y directions -> Canny operator
Concept of Edge Detection

We want to find where the root is.
Concept of Edge Detection

 If we just use Sobel edge detection, it will just find the gradient on the left and right sides of the root:
 It is also not resolution independent.
 If we have a high-resolution image, the gradient will be spread out over many pixels.
 At a low resolution we will have a sharp edge gradient.
Concept of Edge Detection

 Canny works by first finding the edges and then using a concept called hysteresis thresholding.
 For every pixel, try to find the local maximum: a value that is bigger than its neighbours.

? X ?   Find out: is x greater than its neighbours across the edge?
Concept of Edge Detection

 Hysteresis Thresholding:
 This stage decides which edges are really edges and which are not.
 For this, we need two threshold values, minVal and maxVal.
 Any edges with an intensity gradient greater than maxVal are sure to be edges, and those below minVal are sure to be non-edges, so they are discarded.
 Those that lie between these two thresholds are classified as edges or non-edges based on their connectivity.
 If they are connected to “sure-edge” pixels, they are considered to be part of the edges. Otherwise, they are also discarded. See the image below:
Concept of Edge Detection

 Hysteresis Thresholding:
Concept of Edge Detection

 Edge A is above maxVal, so it is considered a “sure edge”. Although edge C is below maxVal, it is connected to edge A, so it is also considered a valid edge and we get the full curve. But edge B, although it is above minVal and in the same region as edge C, is not connected to any “sure edge”, so it is discarded. It is therefore very important to select minVal and maxVal accordingly to get the correct result.
 This stage also removes small pixel noise on the assumption that edges are long lines.
 So what we finally get is the strong edges in the image.
Concept of Edge Detection

Canny Edge Detection in OpenCV:

 OpenCV puts all of the above into a single function, cv2.Canny().
 The first argument is our input image.
 The second and third arguments are our minVal and maxVal respectively.
 The fourth argument is aperture_size. It is the size of the Sobel kernel used to find image gradients. By default it is 3.
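 A minimal usage sketch of the function described above ("input.jpg" and the thresholds 100/200 are placeholders).

import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# cv2.Canny(image, minVal, maxVal); the default Sobel aperture size is 3.
edges = cv2.Canny(img, 100, 200)

cv2.imshow("Canny edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()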
Background Subtraction

Background subtraction (BS) is a common and widely used technique for generating a foreground mask (namely, a binary image containing the pixels belonging to moving objects in the scene) using static cameras.
As the name suggests, BS calculates the foreground mask by performing a subtraction between the current frame and a background model containing the static part of the scene or, more generally, everything that can be considered background given the characteristics of the observed scene.
Background Subtraction cont..

Background modeling consists of two main steps:


• Background Initialization;
• Background Update.
In the first step, an initial model of the background is computed, while in
the second step that model is updated in order to adapt to possible changes
in the scene.
Background Subtraction cont..

How to perform BS using OpenCV?

1. Read data from videos or image sequences by using cv2.VideoCapture()
2. Create and update the background model by using a cv2.BackgroundSubtractor subclass (e.g. created with cv2.createBackgroundSubtractorMOG2());
3. Get and show the foreground mask by using cv2.imshow()
Background Subtraction cont..

Python Code:
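The original slide's code listing is not included in this transcript; the following is a minimal sketch of the three steps above, assuming the MOG2 subtractor and a placeholder video path "video.mp4".

import cv2

# "video.mp4" is a placeholder path; 0 would open the default camera.
cap = cv2.VideoCapture("video.mp4")

# Create the background model (MOG2 is one of OpenCV's subtractors).
back_sub = cv2.createBackgroundSubtractorMOG2()

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Update the background model and get the foreground mask for this frame.
    fg_mask = back_sub.apply(frame)

    cv2.imshow("Frame", frame)
    cv2.imshow("Foreground mask", fg_mask)
    if cv2.waitKey(30) & 0xFF == 27:   # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()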
#..End..#
