
The project demonstration link

This problem is to implement the Lucas-Kanade optical flow algorithm. In this method, we assume that the
image brightness constancy equation yields a good approximation of the normal component of the motion field,
and that the latter is well approximated by a constant vector field within any small patch of the image plane.
The implementation, function [u, v, hitMap] = opticalFlow(I1, I2, windowSize, tau), is listed in the appendix.
Here, to avoid a non-invertible AᵀA, we check whether it has rank 2. When this 2×2 matrix is
singular, the pixel lies on an edge where all gradient vectors point in the same direction; such pixels cannot
provide accurate motion information, since the displacement component along the edge cannot be determined. In
addition, in low-texture regions the gradients have very small magnitude, so the optical flow there is unreliable.
Therefore, if the smallest eigenvalue of the matrix AᵀA is smaller than a threshold value τ, we do not
compute the optical flow at that pixel. Since the test images differ in size, we tune the parameter τ and the
window size to find a good setting: synthetic [windowSize = 5, 10, 15; τ = 0.1], sphere [windowSize = 10, 20, 30;
τ = 0.05], corridor [windowSize = 15, 30, 100; τ = 0.02]. Figures 1, 2, and 3 show the results for the sphere,
synthetic, and corridor images respectively. The constant gray image produces a hit map with constant value 1,
which means no pixel is discarded.
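The per-window computation described above can be sketched in Python with NumPy; this is a minimal, illustrative translation of the idea (the report's actual implementation is the MATLAB routine in the appendix), using simple central-difference gradients:

```python
import numpy as np

def optical_flow(I1, I2, window_size=5, tau=0.01):
    """Lucas-Kanade flow per pixel; mirrors the opticalFlow signature
    described above, but is an illustrative NumPy sketch."""
    I1 = I1.astype(np.float64)
    I2 = I2.astype(np.float64)
    Ix = np.gradient(I1, axis=1)      # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                      # temporal derivative
    h, w = I1.shape
    r = window_size // 2
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    hitMap = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ix = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            iy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            it = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([ix, iy], axis=1)   # N x 2 gradient matrix
            AtA = A.T @ A                    # the 2x2 normal matrix
            # Discard edge / low-texture pixels: smallest eigenvalue < tau.
            if np.linalg.eigvalsh(AtA)[0] < tau:
                continue
            flow = np.linalg.solve(AtA, -A.T @ it)
            u[y, x], v[y, x] = flow
            hitMap[y, x] = 1
    return u, v, hitMap
```

A pixel contributes to the output only when the eigenvalue test passes, which is exactly what the hit map records.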

Feature Matching:
Next, I apply the Scale-Invariant Feature Transform (SIFT) for image matching. SIFT is a feature detection
algorithm that locates local features in an image, commonly known as the 'key points' of the image.
These key points are scale and rotation invariant and can be used for various computer vision applications, such as
image matching, object detection, and scene recognition. In the following function, we import the images
and convert them to grayscale. We then use the function cv2.SIFT_create(), which yields the key points (the
detected locations) and the descriptors (feature vectors describing each key point's neighborhood) of the
respective images. Once we have the descriptors and key points of the two images, we find correspondences between them.
Matching feature descriptors reduces to a nearest-neighbor search on the feature vectors. In the next task
I also implement an outlier rejection technique that uses the ratio of the nearest to the second-nearest neighbor
distance. The problem of finding more than one nearest neighbor is called kNN search.
The get_Match() function computes the matches between image points using Euclidean distance, via the OpenCV
function cv2.DescriptorMatcher_create(). A brute-force matcher is used to match the features of the first image
against the second image: it takes one descriptor of the first image and compares it with all descriptors of the
second image, then moves to the next descriptor of the first image, and so on. We use knnMatch() to get the k best
matches; with k = 2 it finds the two nearest neighbors in image 2 for each descriptor of image 1. The output is a
vector of dimensions n × 2, where n is the number of descriptors in image 1. The distance between two descriptors
f1 and f2 is the Euclidean norm d(f1, f2) = ||f1 − f2||. For this work, we take k = 2.
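What knnMatch() computes can be sketched directly in NumPy (the function name knn_match is illustrative, not from the report's code):

```python
import numpy as np

def knn_match(des1, des2, k=2):
    """Brute-force kNN search: for each descriptor in des1, find the
    indices and Euclidean distances of its k nearest descriptors in des2."""
    # Pairwise Euclidean distances, shape (n1, n2).
    d = np.linalg.norm(des1[:, None, :] - des2[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]           # k nearest indices per row
    dist = np.take_along_axis(d, idx, axis=1)    # matching distances
    return idx, dist                             # both of shape n1 x k
```

With k = 2, each row of the result holds the nearest and second-nearest neighbor of one image-1 descriptor, which is exactly the n × 2 output described above.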

Next, we apply the ratio test, which implements the valid-match check. For outlier removal, given the two
nearest neighbors of a descriptor in image 1, a good way of identifying outliers is to compute the ratio between
the two distances. If this ratio is above a threshold, the match is considered ambiguous and classified as an
outlier.
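The ratio test can be sketched as follows; the 0.75 threshold is a common default, an assumption rather than a value taken from the report:

```python
import numpy as np

def ratio_test(dists, threshold=0.75):
    """dists is an n x 2 array holding, per query descriptor, the
    distances to its nearest and second-nearest neighbors. Returns a
    boolean mask of the matches kept as unambiguous."""
    # Keep a match only when the best distance is clearly smaller
    # than the second best; otherwise it is classified as an outlier.
    return dists[:, 0] < threshold * dists[:, 1]
```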

Homography estimation:

Once I have obtained the best matches between the images, the next step is to calculate the homography
matrix. As described before, the homography matrix is used together with the best-matching points to estimate the
relative orientation and transformation between the two images. A homography is a perspective transformation that
maps planes to planes in three-dimensional space. I compute the homography that transforms the image
plane of one camera into the image plane of the other. These transformations allow us to project all images
onto the same image plane and to create the final panorama image.
To estimate the homography, we define the Compute_Homography_image function, in which RANSAC (Random
Sample Consensus) from the OpenCV library is used for robust estimation of the model parameters in the
presence of outliers.

Even after applying the ratio test in the previous function, some of the feature matches are outliers. If one of the
four matches used to compute the homography is an outlier, the resulting transformation matrix is incorrect. We
use the RANSAC algorithm to achieve a robust result. The RANSAC loop consists of the following steps:
 Randomly sample the minimal number of points required to fit the model (four matches for a homography)
 Solve for the model parameters using the sample
 Score the model by the fraction of inliers that fall within a preset threshold
Note that when applying a perspective transformation, a 2D image point must be temporarily lifted into
homogeneous space.
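The homogeneous lifting mentioned above can be sketched as follows; H here is an arbitrary example homography, not one estimated from the report's images (in the pipeline, H would come from the RANSAC-based estimation):

```python
import numpy as np

def apply_homography(H, pt):
    """Apply a 3x3 homography to a 2D point: lift it to homogeneous
    coordinates, multiply, and divide by the last component."""
    x, y = pt
    p = H @ np.array([x, y, 1.0])   # temporary lift into homogeneous space
    return p[:2] / p[2]             # back to the 2D image plane

# Example: a pure translation by (5, -3) written as a homography.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
```

The division by the last component is why a homography is only defined up to scale: multiplying H by any nonzero constant maps every point to the same place.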

Experimental results:
For the experiment, I built an input function that asks how many images we want to concatenate, or
stitch. For my experiment I used 2 images for the stitching. The function then returns the feature matching
result and the panorama of the images.

First of all, I read the images using the OpenCV function cv2.imread().

Then I draw the key points of the input images using the OpenCV function cv2.drawKeypoints(). The results for
the following images are shown below:



After that, I match the features of the images as explained before.

Lastly, I applied the homography, with RANSAC, to stitch the images. The results are
shown below:

Now we can look at the results of the developed algorithm on another image pair and check the panorama.
First of all, the input images for the algorithm are shown below, as with the previous example:



After that, feature matching is applied to the above images. The results are shown below:

Finally, we calculate the homography and stitch the images, shown below:

