
An edge is a set of connected pixels that forms a boundary between two different regions.

Edge Detection Applications

● Reduces unnecessary information in an image while preserving its structure.
● Extracts important features of the image, such as curves, corners and lines.
● Supports object recognition, boundary detection and segmentation.
● Plays a major role in computer vision and recognition.
● Used in X-ray imaging, satellite imagery, robotic vision, fingerprint scanning and tumor detection.

To decide which pixels receive noise, a random number is generated between 1 and a chosen final value for each pixel. If the number equals the final value, the pixel is replaced with noise. The larger the final value, the fewer pixels are changed; as the final value decreases, more pixels are changed, producing a noisier picture.

To determine how a selected pixel is changed, a second random number is generated between 0 and 255 (the grayscale range). The original value of the pixel is replaced by this random number.

By randomizing the noise values, the changed pixels can become white, black, or any gray value, adding the salt and pepper colors. By randomizing which pixels are changed, the noise is scattered throughout the image. The combination of these randomizations creates the "salt and pepper" effect throughout the image.
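As a rough sketch of the scheme described above (the function name and the `final_value` parameter name are illustrative, not from the original):

```python
import numpy as np

def add_salt_and_pepper(image, final_value=20, rng=None):
    """Sketch: for each pixel, draw a random integer in [1, final_value];
    if it equals final_value, replace the pixel with a random
    grayscale value in [0, 255]."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    # One draw per pixel; a larger final_value changes fewer pixels.
    draws = rng.integers(1, final_value + 1, size=image.shape)
    hits = draws == final_value
    # Replacement values span the full grayscale range, so changed pixels
    # may become white (salt), black (pepper), or any gray in between.
    noisy[hits] = rng.integers(0, 256, size=int(hits.sum()))
    return noisy
```

With `final_value=20`, roughly one pixel in twenty is replaced; lowering it makes the picture noisier, as described above.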


Salt and Pepper Noise:


Salt and pepper noise is an impulse type of noise in images, in which a certain amount of the pixels are either black or white (black or white dots). Black dots in the image are called pepper noise; white dots are called salt noise. This noise is generally caused by errors in data transmission, failures in memory cells, or analog-to-digital converter errors. It consists of random pixels being set to black or white (the extremes of the gray-level range) and can also be generated by image digitization or during image transmission.
Salt noise - white pixel in black background

Pepper noise - black pixel in white background

Gaussian Filter:
From the image perspective, during Gaussian filtering each individual pixel is replaced with a Gaussian-shaped blob with the same total weight as the original intensity value. This Gaussian is also called the convolution kernel. It renders small structures invisible and smooths sharp edges.
The Gaussian filter works by using the 2D Gaussian distribution as a point-spread function. This is achieved by convolving the 2D Gaussian distribution function with the image, which requires producing a discrete approximation to the Gaussian function. Theoretically this requires an infinitely large convolution kernel, since the Gaussian distribution is non-zero everywhere. Fortunately, the distribution is very close to zero beyond about three standard deviations from the mean: roughly 99.7% of the distribution falls within three standard deviations. This means we can normally limit the kernel size to contain only values within three standard deviations of the mean.
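A minimal sketch of such a truncated kernel in NumPy (the function name is illustrative):

```python
import numpy as np

def gaussian_kernel(sigma=1.0):
    # Truncate at three standard deviations: beyond that, the
    # Gaussian is effectively zero (~99.7% of the mass is inside).
    radius = int(np.ceil(3 * sigma))
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    # Normalize to unit sum, so blurring preserves the total intensity
    # (each pixel's weight is spread into the blob, not amplified).
    return kernel / kernel.sum()
```

For `sigma=1.0` this gives a 7 × 7 kernel whose center weight is largest and which is symmetric in both axes.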
Median Filter:
The median filter is used to eliminate salt and pepper noise. Median filters are among the most popular because of their ability to reduce impulse noise, a.k.a. salt-and-pepper noise. To perform median filtering at a point of an image, we take a mask, pad the image borders by replicating edge pixels, and move the mask over the image. At each position we sort the values of the pixels in the neighborhood, determine the median, and assign that value to the corresponding pixel in the filtered image.
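A minimal sketch of this procedure (pure NumPy, 3 × 3 mask; the function name is illustrative):

```python
import numpy as np

def median_filter(image, size=3):
    """Median filter sketch: pad by replicating the border pixels,
    then replace each pixel with the median of its neighborhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")  # replicate edges, as described
    out = np.empty_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + size, j:j + size]
            out[i, j] = np.median(window)  # median replaces the center pixel
    return out
```

A single impulse (one salt pixel in a dark region) is outvoted by its eight neighbors and removed, which is exactly why the median filter handles salt-and-pepper noise so well.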

Sobel:
It computes an approximation of the gradient of the image (a first-order derivative).
Demerits: poor signal-to-noise ratio; results can be inaccurate and discontinuous.
Sobel X - responds to vertical edges
Sobel Y - responds to horizontal edges

When using Sobel edge detection, the image is processed in the X and Y directions separately first, and then combined to form a new image which represents the sum of the X and Y edges of the image. However, these images can also be processed separately.

From there, we use what is called kernel convolution. A kernel is a 3 × 3 matrix consisting of differently (or symmetrically) weighted indexes; it represents the filter that we implement for edge detection. When we want to scan across the X direction of an image, we use the X direction kernel to scan for large changes in the gradient. Similarly, when we want to scan across the Y direction (from top to bottom), we use the Y direction kernel to scan for large gradients.
The kernels, applied to the input image, produce separate gradient component measurements in each orientation (Gx and Gy). The magnitude and direction of the gradient at every pixel are computed by combining Gx and Gy.
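The pipeline above can be sketched in NumPy (the helper names are illustrative; a real implementation would use an optimized convolution):

```python
import numpy as np

# X-direction kernel (responds to vertical edges) and
# Y-direction kernel (responds to horizontal edges).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate2d(image, kernel):
    """Minimal 'valid' cross-correlation, enough to illustrate the idea."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sobel_magnitude(image):
    gx = correlate2d(image, SOBEL_X)  # gradient component in X
    gy = correlate2d(image, SOBEL_Y)  # gradient component in Y
    return np.hypot(gx, gy)           # combined edge strength per pixel
```

On a vertical step edge, Gx is large where the window straddles the step, Gy is zero everywhere, and the magnitude marks the edge columns only.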

Laplacian Filter:
A Laplacian filter is one of the edge detectors used to compute the second spatial derivatives of an image. It measures the rate at which the first derivatives change: in other words, the Laplacian filter highlights the regions where the pixel intensities change dramatically. Due to this characteristic, the Laplacian filter is often used to detect edges in an image. We will see how the filter finds edges with a visual illustration later.

The Laplacian does not extract edges in any particular direction; instead it extracts edges in the following classification:

● Inward edges
● Outward edges

This detector finds edges by looking for zero crossings after filtering f(x, y) with a Laplacian of Gaussian (LoG) filter. In this method, Gaussian filtering is combined with the Laplacian to break down the image where the intensity varies, so that edges are detected effectively. It finds the correct location of edges while testing a wider area around each pixel.

Smoothing: Gaussian
Enhance edges: Laplacian
Zero crossings denote edge locations
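A sketch of this three-step pipeline using SciPy, which provides the combined Gaussian-plus-Laplacian operator (the function name and the `tol` cutoff are illustrative):

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(image, sigma=1.0, tol=1e-6):
    """Smooth with a Gaussian and apply the Laplacian in one operator,
    then mark sign changes (zero crossings) as edge locations."""
    response = ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    response[np.abs(response) < tol] = 0.0  # suppress numerical noise
    edges = np.zeros(response.shape, dtype=bool)
    # A zero crossing is a sign flip between horizontal or vertical neighbors.
    edges[:, :-1] |= np.sign(response[:, :-1]) != np.sign(response[:, 1:])
    edges[:-1, :] |= np.sign(response[:-1, :]) != np.sign(response[1:, :])
    return edges
```

On a vertical step edge, the LoG response is positive on one side of the step and negative on the other, so the zero crossings line up along the edge.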

Harris Corner Detector:

It is a corner detection operator that is commonly used in computer vision algorithms to extract corners and infer features of an image. Harris' corner detector takes the differential of the corner score into account with reference to direction directly, instead of using shifting patches for every 45-degree angle, and has been proved to be more accurate in distinguishing between edges and corners.

# apply the cv2.cornerHarris method to detect the corners
# with appropriate values as input parameters

Dilation: The dilation operator takes two pieces of data as inputs. The first is the image to be dilated. The second is a (usually small) set of coordinate points known as a structuring element (also known as a kernel). Dilation adds pixels to the boundaries of objects in an image.


# Reverting back to the original image,
# with optimal threshold value
image[dest > 0.01 * dest.max()] = [0, 0, 255]

The idea is to consider a small window around each pixel p in an image. We want to identify all such pixel windows that are unique. Uniqueness can be measured by shifting each window by a small amount in a given direction and measuring the amount of change that occurs in the pixel values. More formally, we take the sum of squared differences (SSD) of the pixel values before and after the shift and identify pixel windows where the SSD is large for shifts in all 8 directions.

DoG:

One method is called "difference of Gaussians". You can imagine this corresponds to taking a copy of a picture, blurring the copy slightly, and subtracting the original from the copy to produce the result. Imagine a simple picture of a black rectangle on a white background: the slight blurring has no noticeable effect in the middle of the rectangle, because it mixes black pixels with black pixels; likewise, in the background the blurring has no effect, as it mixes white pixels with adjacent white pixels. But it causes the edges of the black region to "leak" outward. Now, when you subtract the original image from the blurred copy, the centre of the blurred rectangle still matches the centre of the sharp rectangle exactly, but the edges of the blurred rectangle are outside the sharp rectangle, and they alone survive the subtraction, producing an image that highlights the edges. "Gaussian" is the mathematical function describing the shape made by blurring a single white pixel on a black background.
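The thought experiment above can be reproduced directly (the function name and `sigma` value are illustrative):

```python
import numpy as np
from scipy import ndimage

def difference_of_gaussians(image, sigma=2.0):
    """Blur a copy and subtract the original from it: flat regions
    cancel, and only the 'leaked' blurred edges survive."""
    image = image.astype(float)
    blurred = ndimage.gaussian_filter(image, sigma=sigma)
    return blurred - image
```

For a black rectangle on a white background, the result is zero in the rectangle's centre and in the far background, and nonzero only along the rectangle's edges, exactly as described.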

Sharp edge: where the intensity changes.
The Laplacian is a second-order derivative. We need to take derivatives in both dimensions, hence the Laplacian operator comes in handy.

Prewitt Edge Detector

Any edge detector pipeline starts with filtering. The Prewitt edge detector uses a mean mask for filtering. After filtering, we find the derivatives of the image in the X and Y directions. After differentiating, we find the resultant of both derivatives. Then we apply an intensity threshold and convert the image into a binary image.
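A sketch of this pipeline (the threshold value is illustrative). Note that the mean-mask filtering is built into the Prewitt kernel itself: each column of ±1 weights averages over three rows while differentiating across columns:

```python
import numpy as np
from scipy import ndimage

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def prewitt_edges(image, threshold=50):
    """X and Y derivatives, resultant magnitude, then a threshold
    turns the result into a binary edge image."""
    img = image.astype(float)
    gx = ndimage.convolve(img, PREWITT_X)  # derivative in X
    gy = ndimage.convolve(img, PREWITT_Y)  # derivative in Y
    magnitude = np.hypot(gx, gy)           # resultant of both derivatives
    return magnitude > threshold           # binary image
```

On a vertical step edge, only the two columns of pixels straddling the step exceed the threshold.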

The Canny edge detector also starts with Gaussian filtering. After filtering, the image is differentiated in the X and Y directions. After differentiating, the magnitude and gradient matrices are calculated.

High-level pseudocode
1. Take the grayscale of the original image.
2. Apply a Gaussian filter to smooth out any noise.
3. Apply the Sobel operator to find the x and y gradient values for every pixel in the grayscale image.
4. For each pixel p in the grayscale image, consider a 3×3 window around it and compute the corner strength function. Call this its Harris value.
5. Find all pixels that exceed a certain threshold and are the local maxima within a certain window (to prevent redundant duplicates of features).
6. For each pixel that meets the criteria in 5, compute a feature descriptor.
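Steps 2-4 above can be sketched as follows; the value k = 0.04 and the 3×3 box window are assumed, and R = det(M) - k·trace(M)² is the standard Harris corner strength:

```python
import numpy as np
from scipy import ndimage

def harris_strength(gray, k=0.04):
    """Steps 2-4 of the pipeline above: smooth, take Sobel gradients,
    then compute the corner strength over a local window."""
    img = ndimage.gaussian_filter(gray.astype(float), sigma=1.0)  # step 2
    gx = ndimage.sobel(img, axis=1)  # step 3: x gradient
    gy = ndimage.sobel(img, axis=0)  # step 3: y gradient
    # Step 4: sum the gradient products over a 3x3 window to build
    # the structure tensor M at each pixel.
    sxx = ndimage.uniform_filter(gx * gx, size=3)
    syy = ndimage.uniform_filter(gy * gy, size=3)
    sxy = ndimage.uniform_filter(gx * gy, size=3)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace  # the Harris value R
```

R is large and positive where both gradient directions vary (corners), negative along straight edges, and near zero in flat regions, which is what steps 5-6 then threshold and describe.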

Exiting the script closes the windows. If you have multiple windows open and no longer need them, you can use cv2.destroyAllWindows() to close them all; then you can open new cv2 windows if needed. In that sense, calling cv2.destroyAllWindows() is simply good coding practice.

High Pass and Low Pass:


A high pass filter tends to retain the high-frequency information within an image while reducing the low-frequency information; it is used for sharpening the image. The kernel of a high pass filter is designed to increase the brightness of the center pixel relative to neighboring pixels.

A low pass filter is the type of frequency-domain filter used for smoothing the image: it attenuates the high-frequency components and preserves the low-frequency components.
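Two textbook example kernels illustrate the contrast (the specific kernels are common choices, an assumption rather than ones given in the original):

```python
import numpy as np
from scipy import ndimage

# Low-pass: a 3x3 averaging kernel (sums to 1) smooths the image;
# a flat region passes through unchanged.
LOW_PASS = np.full((3, 3), 1 / 9)

# High-pass: the center pixel is weighted up relative to its neighbors
# (weights sum to 0), so flat regions go to zero and only rapid
# intensity changes are retained.
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def apply_filter(image, kernel):
    return ndimage.convolve(image.astype(float), kernel)
```

Applying LOW_PASS to a uniform image returns the same image, while HIGH_PASS maps it to zero, which is the frequency-domain behavior described above seen in the spatial domain.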
