
Optical Flow

Rizwan Manzoor
Presentation Outline
 What is Optical Flow
 Applications
 Implementation
 Shi-Tomasi
 Lucas-Kanade
 Demo
What is Optical Flow?
The pattern of apparent motion of
objects, surfaces, and edges in a visual
scene caused by the relative motion
between an observer (an eye or a
camera) and the scene.
Optical Flow
Where did each pixel in image 1 go to in image 2 ?
Optical Flow
 Human vision performs optical flow
analysis all the time, keeping us aware
of the movement around us.
Real Life Example
When we are driving a car, a sign on the
side of the road moves from the center
of our vision to the side, growing as we
approach. If we had 360-degree vision,
the sign would continue to move quickly
past our side to our back, where it would
shrink.

This motion of the sign is its optic flow.

Scene Interpretation
Given a video sequence with camera/objects
moving we can better understand the scene if we
find the motions of the camera/objects:
• How is the camera moving?
• How many moving objects are there?
• Which directions are they moving in?
• How fast are they moving?
• Can we recognize their type of motion (e.g.
walking, running, etc.)?
Motion Analysis
Optical flow lets you judge how close you are to certain objects, and how
quickly you are approaching them.

It is also useful for avoiding obstacles:

 If an object in front of you is expanding but not moving, you are
probably headed straight for it.

 If it is expanding but moving slowly to the side, you will probably
pass by it.

Since optic flow relies only on relative motion, it is the same whether you are
moving through a still world or standing still while everything you can see
moves past you.
Image Tracking
Motion Tracking
Estimating Atmospheric Motion
Autonomous Vehicles Motion
Facial Expression Tracking
Implementation
 Find objects from one frame in other frames
 Determine the speed and direction of movement of objects
 Determine the structure of the environment
Implementation
 How to determine one frame's features in another frame?
 Lucas-Kanade
 How to choose which features are “good” to track?
 Shi-Tomasi
Shi-Tomasi
 A “good” feature will intuitively have two
distinctive qualities: texturedness and
cornerness
 Lack of texture = ambiguity in tracking
 No corner = aperture problem
 Many algorithms – Harris, SUSAN, FAST,
Shi-Tomasi...
Shi-Tomasi
 The algorithm is strongly based on the Harris
corner detector
 It is a slight modification of Harris: the score is
the smaller eigenvalue of the structure tensor
rather than the Harris combination of its
determinant and trace
 For image patches undergoing affine
transformations, Shi-Tomasi gives better
results
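As a concrete illustration, the Shi-Tomasi criterion can be sketched in a few lines of NumPy: score a pixel by the smaller eigenvalue of the 2×2 structure tensor summed over a window around it. The window size and the synthetic step-corner image below are illustrative choices, not from the slides:

```python
import numpy as np

def shi_tomasi_score(img, x, y, win=2):
    """Shi-Tomasi cornerness: the smaller eigenvalue of the 2x2
    structure tensor summed over a window centred on (x, y)."""
    # Central-difference image gradients (axis 0 is y, axis 1 is x).
    Iy, Ix = np.gradient(img.astype(float))
    sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
    Ixx = float(np.sum(Ix[sl] ** 2))
    Iyy = float(np.sum(Iy[sl] ** 2))
    Ixy = float(np.sum(Ix[sl] * Iy[sl]))
    # Smaller eigenvalue of [[Ixx, Ixy], [Ixy, Iyy]] in closed form.
    tr = Ixx + Iyy
    det = Ixx * Iyy - Ixy ** 2
    disc = max(tr * tr / 4.0 - det, 0.0)
    return tr / 2.0 - disc ** 0.5

# Synthetic test image: a bright square whose top-left corner is at (8, 8).
img = np.zeros((16, 16))
img[8:, 8:] = 1.0

corner = shi_tomasi_score(img, 8, 8)   # corner: both eigenvalues large
edge = shi_tomasi_score(img, 12, 8)    # edge: smaller eigenvalue ~ 0
flat = shi_tomasi_score(img, 3, 3)     # flat: both eigenvalues ~ 0
```

On this test image the corner of the bright square scores high, while an edge pixel and a flat region score near zero, matching the "no corner = aperture problem" and "lack of texture = ambiguity" points above.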
Lucas-Kanade Method
 The Lucas-Kanade method is an optical flow
estimator that estimates motion from the local
gradient and the local difference of two
consecutive frames.
Optical Flow Assumptions:
Brightness Constancy
Optical Flow: 1D Case
Brightness Constancy Assumption:
f(t) ≡ I(x(t), t) = I(x(t + dt), t + dt)

∂f(x)/∂t = 0   (because there is no change in brightness with time)

(∂I/∂x)|_t · (∂x/∂t) + (∂I/∂t)|_x(t) = 0

  I_x · v + I_t = 0   ⇒   v = − I_t / I_x
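A quick numeric sanity check of v = −I_t / I_x, assuming an arbitrary smooth 1D signal (a sine) shifted by a small known amount:

```python
import numpy as np

# A smooth 1D signal and the same signal shifted right by a small
# true velocity (signal, shift, and sample point are illustrative).
x = np.linspace(0, 2 * np.pi, 400)
v_true = 0.05
I0 = np.sin(x)           # I(x, t)
I1 = np.sin(x - v_true)  # I(x, t + 1): the pattern moved right by v_true

p = 50  # pixel of interest (away from zero-gradient points)
Ix = (I0[p + 1] - I0[p - 1]) / (2 * (x[1] - x[0]))  # spatial derivative
It = I1[p] - I0[p]                                  # temporal derivative
v_est = -It / Ix  # close to v_true for small motion
```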
Tracking in the 1D case:

[Figure: intensity profiles I(x, t) and I(x, t + 1); the displacement v of the point p along x is the unknown.]
Tracking in the 1D case:

[Figure: I(x, t) and I(x, t + 1), showing the temporal derivative I_t at p and the spatial derivative I_x.]

I_x = (∂I/∂x)|_t    I_t = (∂I/∂t)|_{x = p}    v ≈ − I_t / I_x

Assumptions:
 Brightness constancy
 Small motion
Tracking in the 1D case:
Iterating helps refine the velocity vector:

v ← v_previous − I_t / I_x

[Figure: I(x, t) and I(x, t + 1), with the temporal derivative I_t recomputed at the 2nd iteration.]

The same estimate can be kept for the spatial derivative; the update converges in about 5 iterations.
Algorithm for 1D tracking:
For each pixel of interest p:
 Compute the local image derivative at p: I_x
 Initialize the velocity vector: v ← 0
 Repeat until convergence:
 Compensate for the current velocity: I'(x, t + 1) = I(x + v, t + 1)
 Compute the temporal derivative: I_t = I'(p, t + 1) − I(p, t)
 Update the velocity vector: v ← v − I_t / I_x
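The algorithm above can be sketched in NumPy; the Gaussian test signal, its 0.7-pixel shift, and the use of linear interpolation for the sub-pixel compensation step I(x + v, t + 1) are illustrative assumptions:

```python
import numpy as np

def track_1d(I0, I1, p, n_iter=5):
    """Iterative 1D Lucas-Kanade step at integer pixel p.

    I0, I1: 1D intensity arrays at times t and t + 1.
    Returns the estimated displacement v in pixels.
    """
    grid = np.arange(len(I0), dtype=float)
    Ix = (I0[p + 1] - I0[p - 1]) / 2.0  # spatial derivative, kept fixed
    v = 0.0
    for _ in range(n_iter):
        # Compensate for the current estimate: I'(x, t+1) = I(x + v, t+1),
        # sampled at the sub-pixel position via linear interpolation.
        I1_shift = np.interp(p + v, grid, I1)
        It = I1_shift - I0[p]           # temporal derivative
        v = v - It / Ix                 # update
    return v

# Synthetic pair: a Gaussian bump that moves right by 0.7 pixels.
xs = np.arange(200, dtype=float)
I0 = np.exp(-((xs - 100.0) ** 2) / 50.0)
I1 = np.exp(-((xs - 100.7) ** 2) / 50.0)
v = track_1d(I0, I1, 103)  # recovers a displacement close to 0.7
```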
From 1D to 2D tracking
1D:  (∂I/∂x)|_t · (∂x/∂t) + (∂I/∂t)|_x(t) = 0

2D:  (∂I/∂x)|_t · (∂x/∂t) + (∂I/∂y)|_t · (∂y/∂t) + (∂I/∂t)|_x(t) = 0

     (∂I/∂x)|_t · u + (∂I/∂y)|_t · v + (∂I/∂t)|_x(t) = 0

One equation, two velocity unknowns (u, v)…


From 1D to 2D tracking

With a single point we get at most “normal flow”: we can only detect movement
perpendicular to the brightness gradient. The solution is to take a patch of
pixels around the pixel of interest.
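Over a patch, stacking one constraint I_x·u + I_y·v + I_t = 0 per pixel gives an overdetermined linear system that least squares can solve. A minimal NumPy sketch, tested on a synthetic translated Gaussian blob (all sizes and positions here are illustrative):

```python
import numpy as np

def lucas_kanade_patch(I0, I1, y, x, win=3):
    """One 2D Lucas-Kanade step: least-squares flow (u, v) for the
    window centred on (y, x), one constraint Ix*u + Iy*v + It = 0
    per pixel of the patch."""
    Iy, Ix = np.gradient(I0.astype(float))   # axis 0 is y, axis 1 is x
    It = I1.astype(float) - I0.astype(float)
    sl = np.s_[y - win:y + win + 1, x - win:x + win + 1]
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    # Solve A [u, v]^T = b in the least-squares sense.
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic pair: a 2D Gaussian blob translated by (u, v) = (0.6, -0.4).
ys, xs = np.mgrid[0:64, 0:64]
I0 = np.exp(-((xs - 32.0) ** 2 + (ys - 32.0) ** 2) / 40.0)
I1 = np.exp(-((xs - 32.6) ** 2 + (ys - 31.6) ** 2) / 40.0)
u, v = lucas_kanade_patch(I0, I1, 35, 30, win=4)
```

A single least-squares step already recovers the translation approximately; in practice this step is iterated (as in the 1D algorithm) and combined with image pyramids to handle larger motions.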
From 1D to 2D tracking

[Figure: a patch in I(x, y, t) and I(x, y, t + 1); the per-pixel normal-flow constraints v1…v4 combine into a single flow estimate v.]
THANK YOU
