3. Computer Vision 2
Yonsei University, 2014. 12. 29.
E-mail: hogijung@hanyang.ac.kr
http://web.yonsei.ac.kr/hgjung


3.1. Corner


Corner Detection [1]

• Basic idea: Find points where two edges meet—i.e., high gradient in two directions

• “Cornerness” is undefined at a single pixel, because there’s only one gradient per point

– Look at the gradient behavior over a small window

• Categorize image windows based on gradient statistics

– Constant: Little or no brightness change

– Edge: Strong brightness change in single direction

– Flow: Parallel stripes

– Corner/spot: Strong brightness changes in orthogonal directions


Corner Detection: Analyzing Gradient Covariance

• Intuitively, in corner windows both Ix and Iy should be high
 – Can't just set a threshold on them directly, because we want rotational invariance

• Analyze the distribution of gradient components over a window to differentiate between the types from the previous slide:

C = [ Σ Ix^2   Σ IxIy ]
    [ Σ IxIy   Σ Iy^2 ]    (summed over the window)

• The two eigenvectors and eigenvalues λ1, λ2 of C (Matlab: eig(C)) encode the predominant directions and magnitudes of the gradient, respectively, within the window

• Corners are thus where min(λ1, λ2) is over a threshold
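As a concrete sketch of this criterion (not from the slides; a hypothetical Python/NumPy illustration with central differences standing in for the derivative masks), the matrix C can be built for synthetic flat, edge, and corner windows and classified by min(λ1, λ2):

```python
import numpy as np

def gradient_covariance(patch):
    """Gradient covariance C = [[sum Ix^2, sum IxIy], [sum IxIy, sum Iy^2]]
    over a window; central differences stand in for the derivative masks."""
    Ix = np.zeros_like(patch)
    Iy = np.zeros_like(patch)
    Ix[:, 1:-1] = (patch[:, 2:] - patch[:, :-2]) / 2.0
    Iy[1:-1, :] = (patch[2:, :] - patch[:-2, :]) / 2.0
    return np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                     [np.sum(Ix * Iy), np.sum(Iy * Iy)]])

# Synthetic 9x9 windows: flat, a vertical edge, and a corner.
flat = np.zeros((9, 9))
edge = np.zeros((9, 9)); edge[:, 5:] = 1.0
corner = np.zeros((9, 9)); corner[5:, 5:] = 1.0

lam_flat = np.linalg.eigvalsh(gradient_covariance(flat))      # both ~0
lam_edge = np.linalg.eigvalsh(gradient_covariance(edge))      # one ~0, one large
lam_corner = np.linalg.eigvalsh(gradient_covariance(corner))  # both positive
```

Only the corner window has both eigenvalues clearly above zero, which is exactly what thresholding min(λ1, λ2) detects.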


The Derivation of the Harris Corner Detector [2]

• The Harris corner detector is based on the local auto-correlation function of a signal, where the local auto-correlation function measures the local changes of the signal with patches shifted by a small amount in different directions.

• Given a shift (Δx, Δy) and a point (x, y), the auto-correlation function is defined as

c(x, y) = Σ_W [I(xi, yi) - I(xi + Δx, yi + Δy)]^2    (1)

where I(·, ·) denotes the image function and (xi, yi) are the points in the window W (Gaussian1) centered on (x, y).

1For clarity in exposition the Gaussian weighting factor has been omitted from the derivation.


The Derivation of the Harris Corner Detector [2]

• The shifted image is approximated by a Taylor expansion truncated to the first order terms,

I(xi + Δx, yi + Δy) ≈ I(xi, yi) + [Ix(xi, yi)  Iy(xi, yi)] [Δx  Δy]T    (2)

where Ix(·, ·) and Iy(·, ·) denote the partial derivatives in x and y, respectively.


The Derivation of the Harris Corner Detector [2]

• Substituting approximation Eq. (2) into Eq. (1) yields

c(x, y) = [Δx  Δy] C(x, y) [Δx  Δy]T

where

C(x, y) = [ Σ Ix^2   Σ IxIy ]
          [ Σ IxIy   Σ Iy^2 ]    (summed over the window)

and matrix C(x, y) captures the intensity structure of the local neighborhood.


Harris Detector: Mathematics

[Figure: classification of image points in the (λ1, λ2) plane]

Classification of image points using eigenvalues of M (M = the matrix C from the previous page):

– "Flat" region: λ1 and λ2 are small; E is almost constant in all directions
– "Edge": λ1 >> λ2 or λ2 >> λ1
– "Corner": λ1 and λ2 are large, λ1 ~ λ2; E increases in all directions


Harris Detector: Mathematics

Measure of corner response:

R = det M - k (trace M)^2

det M = λ1 λ2
trace M = λ1 + λ2

(k – empirical constant, k = 0.04-0.06)

• The trace of a matrix is the sum of its eigenvalues, making it an invariant with respect to a change of basis.
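A quick numeric sketch of the sign behavior of R (hypothetical 2×2 matrices, diagonal so that the eigenvalues are simply the diagonal entries):

```python
import numpy as np

def harris_response(M, k=0.04):
    """Corner response R = det(M) - k * trace(M)^2; since det M = l1*l2 and
    trace M = l1 + l2, R depends only on the eigenvalues of M."""
    return np.linalg.det(M) - k * np.trace(M) ** 2

corner_M = np.diag([10.0, 8.0])  # two large eigenvalues: R > 0
edge_M = np.diag([10.0, 0.1])    # one dominant eigenvalue: R < 0
flat_M = np.diag([0.1, 0.1])     # both small: |R| small
```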


Harris Detector: Mathematics

[Figure: regions of the (λ1, λ2) plane — "Corner": R > 0; "Edge": R < 0; "Flat": |R| small]

• R depends only on eigenvalues of M
• R is large for a corner
• R is negative with large magnitude for an edge
• |R| is small for a flat region


Harris Detector

• The Algorithm:
 – Find points with large corner response function R (R > threshold)
 – Take the points of local maxima of R
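Both steps can be sketched in a few lines (hypothetical Python with a toy response map; border pixels are skipped for brevity):

```python
import numpy as np

def harris_points(R, threshold):
    """Keep points where R exceeds the threshold AND is a 3x3 local maximum."""
    rows, cols = [], []
    for i in range(1, R.shape[0] - 1):
        for j in range(1, R.shape[1] - 1):
            if R[i, j] > threshold and R[i, j] == R[i-1:i+2, j-1:j+2].max():
                rows.append(i)
                cols.append(j)
    return rows, cols

R = np.zeros((5, 5))
R[2, 2] = 9.0   # an isolated response peak
R[2, 3] = 4.0   # above threshold but not a local maximum
```

With threshold 3.0, only the peak at (2, 2) survives; (2, 3) is suppressed by its stronger neighbor.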


Harris Detector: Workflow


Harris Detector: Workflow

Compute corner response R


Harris Detector: Workflow

Find points with large corner response: R>threshold


Harris Detector: Workflow

Take only the points of local maxima of R


Harris Detector: Workflow


Harris Detector Code [3]

dx = [-1 0 1; -1 0 1; -1 0 1]; % Derivative masks

dy = dx';

Ix = conv2(im, dx, 'same'); % Image derivatives

Iy = conv2(im, dy, 'same');

% Generate Gaussian filter of size 6*sigma (+/- 3sigma) and of

% minimum size 1x1.

g = fspecial('gaussian',max(1,fix(6*sigma)), sigma);

Ix2 = conv2(Ix.^2, g, 'same'); % Smoothed squared image derivatives

Iy2 = conv2(Iy.^2, g, 'same');

Ixy = conv2(Ix.*Iy, g, 'same');

cim = (Ix2.*Iy2 - Ixy.^2)./(Ix2 + Iy2 + eps); % My preferred measure.

% k = 0.04;

% cim = (Ix2.*Iy2 - Ixy.^2) - k*(Ix2 + Iy2).^2; % Original Harris measure.


[Figure: "shapes" test image with intermediate results Ix, Iy, Ix2, Iy2, Ixy, and the corner measure cim]


Harris Detector Code [3]


References

1. Chandra Kambhamettu, "Corners+Ransac," Lecture Material of Computer Vision (CISC 4/689), University of Delaware, 2007.

2. Konstantinos G. Derpanis, “The Harris Corner Detector,” available at http://www.cse.yorku.ca/~kosta/CompVis_Notes/harris_detector.pdf

3. Peter Kovesi, “MATLAB and Octave Functions for Computer Vision and Image Processing,” available at http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/index.html


3.2. Edge


Why extract edges? [1]


Where do edges come from? [1]


Images as functions [1]


Images as functions [1]


The Discrete Gradient [1]


The Sobel Operator [1]


Edge detection using the Sobel operator [1]


Effects of noise [1]


Solution: smooth first [1]


Derivative theorem of convolution [1]


Laplacian of Gaussian [1]


2D edge detection filters [1]


LoG filter [1]


Canny edge detector [1]


Canny edge detector: step 4 [1]


Canny edge detector [1]

LoG result


Canny edge detector [1]


3.3. Binocular Stereo


Stereo Vision [1]

[Figure: stereo pair — scene point, optical centers, image planes]


Stereo Vision [1]

• Basic Principle: Triangulation
 – Gives reconstruction as intersection of two rays
 – Requires
  • calibration
  • point correspondence


Point Correspondence [1]

p p’?

Given p in left image, where can the corresponding point p’in right image be?


The Simplest Case: Recti-linear Configuration [1]

• Image planes of cameras are parallel.
• Focal points are at same height.
• Focal lengths same.
• Then, epipolar lines are horizontal scan lines.


The Simplest Case: Recti-linear Configuration [3]

[Figure: pinhole camera with optical center O, axes x, y, z, focal length f, image plane, and scene point P(X, Y, Z)]

Perspective projection maps P(X, Y, Z) to the image point

(x, y) = (f X/Z, f Y/Z),  i.e.  (X, Y, Z) → (f X/Z, f Y/Z, 1)


The Simplest Case: Recti-linear Configuration [3]

[Figure: two recti-linear cameras with optical centers O1 and O2, baseline B, focal length f; scene point P(X, Y, Z) projects to p1 and p2]

Derive an expression for Z as a function of x1, x2, f, and B.


The Simplest Case: Recti-linear Configuration [3]

[Figure: the same two-camera configuration]

x1 = f X/Z,  x2 = f (X - B)/Z

x1 - x2 = f B/Z,  therefore  Z = f B / (x1 - x2)


The Simplest Case: Recti-linear Configuration [1]

[Figure: cameras Ol and Or separated by baseline T, focal length f; scene point P at depth Z projects to pl and pr at image coordinates xl and xr]

T is the stereo baseline.
d measures the difference in retinal position between corresponding points.

Then, given Z, we can compute X and Y.

(xl, yl) = (f X/Z, f Y/Z)
(xr, yr) = (f (X - T)/Z, f Y/Z)

Disparity: d = xl - xr = f X/Z - f (X - T)/Z = f T/Z
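With made-up numbers for f, T, and the matched coordinates, depth recovery from the disparity relation looks like:

```python
# Depth from disparity: d = xl - xr = f*T/Z, so Z = f*T/d.
# All numbers below are hypothetical (f in pixels, T in meters).
f = 700.0
T = 0.12
xl, xr = 410.0, 382.0

d = xl - xr        # disparity in pixels (28 here)
Z = f * T / d      # depth in meters (about 3.0)
X = xl * Z / f     # X follows from xl = f*X/Z; Y likewise from yl
```

Note the inverse relationship: nearby points have large disparity, distant points have small disparity.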


Stereo Constraints [1]

[Figure: two cameras O1 (axes X1, Y1, Z1) and O2 (axes X2, Y2, Z2) with image planes and focal planes, viewing scene point M; image points p and p', the epipolar line, and the epipole]


Epipolar Geometry and Fundamental Matrix [1]

• The geometry of two different images of the same scene is called the epipolar geometry.

• The geometric information that relates two different viewpoints of the same scene is entirely contained in a mathematical construct known as the fundamental matrix.


Baseline and Epipolar Plane [1]

• Baseline: Line joining camera centers C, C'
• Epipolar plane π: Defined by baseline and scene point X

from Hartley & Zisserman


Epipoles and Epipolar Lines [1]

• Epipolar lines l, l': Intersection of epipolar plane π with image planes
• Epipoles e, e': Where baseline intersects image planes
 – Equivalently, the image in one view of the other camera center.

from Hartley & Zisserman


Epipolar Pencil [1]

• As position of X varies, epipolar planes "rotate" about the baseline (like a book with pages)
 – This set of planes is called the epipolar pencil
• Epipolar lines "radiate" from the epipole; this is the pencil of epipolar lines

from Hartley & Zisserman


Epipolar Constraints [1]

• Camera center C and image point x define a ray in 3-D space that projects to epipolar line l' in the other view (since it is on the epipolar plane)
• The 3-D point X is on this ray, so the image of X in the other view, x', must be on l'
• In other words, the epipolar geometry defines a mapping x → l', of points in one image to lines in the other

from Hartley & Zisserman


The Fundamental Matrix [1]

• The mapping of a point in one image to an epipolar line in the other image, x → l', is expressed algebraically by the fundamental matrix F
• Write this as l' = F x
• Since x' is on l', by the point-on-line definition we know that x'T l' = 0
• Substituting l' = F x, we can thus relate corresponding points in the camera pair (P, P') to each other with the following:

x'T F x = 0
(F x is the line; x' is the point)


The Fundamental Matrix [2]

In computer vision, the fundamental matrix F is a 3×3 matrix which relates corresponding points in stereo images. In epipolar geometry, with homogeneous image coordinates x and x' of corresponding points in a stereo image pair, F x describes a line (an epipolar line) on which the corresponding point x' in the other image must lie. That means that for all pairs of corresponding points,

x'T F x = 0

Being of rank two and determined only up to scale, the fundamental matrix can be estimated given at least seven point correspondences. Its seven parameters represent the only geometric information about cameras that can be obtained through point correspondences alone.

The above relation which defines the fundamental matrix was published in 1992 by both Faugeras and Hartley.

The Fundamental Matrix [2]

Longuet-Higgins' essential matrix satisfies a similar relationship. The essential matrix is a metric object pertaining to calibrated cameras, while the fundamental matrix describes the correspondence in more general and fundamental terms of projective geometry. This is captured mathematically by the relationship between a fundamental matrix F and its corresponding essential matrix E, which is

E = K'T F K

K and K' being the intrinsic calibration matrices of the two images involved.
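The relation can be verified numerically. A minimal sketch, assuming hypothetical intrinsics K, K' and an essential matrix built as E = [t]x R:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x, so skew(t) @ v equals np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

R = np.eye(3)                      # hypothetical relative rotation
t = np.array([0.3, 0.0, 1.0])      # hypothetical translation
E = skew(t) @ R                    # essential matrix of the pair

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
Kp = np.array([[750.0, 0.0, 300.0], [0.0, 750.0, 220.0], [0.0, 0.0, 1.0]])

# The corresponding fundamental matrix is F = K'^-T E K^-1, and
# multiplying back recovers E = K'^T F K.
F = np.linalg.inv(Kp).T @ E @ np.linalg.inv(K)
E_back = Kp.T @ F @ K
```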


Computing Fundamental Matrix [1]

The fundamental matrix is singular, with rank 2.

In principle F has 7 parameters up to scale and can be estimated from 7 point correspondences.

A simpler direct method requires 8 correspondences.


Pseudo Inverse-Based [1]

Each point correspondence can be expressed as a linear equation:

x'T F x = 0

[u' v' 1] [ F11 F12 F13 ] [ u ]
          [ F21 F22 F23 ] [ v ] = 0
          [ F31 F32 F33 ] [ 1 ]

which expands to

[u'u  u'v  u'  v'u  v'v  v'  u  v  1] [F11 F12 F13 F21 F22 F23 F31 F32 F33]T = 0


Pseudo Inverse-Based [1]


The Eight-Point Algorithm [1]

prT F pl = 0

• Input: n point correspondences (n >= 8)
 – Construct homogeneous system Ax = 0 from prT F pl = 0
  • x = (f11, f12, f13, f21, f22, f23, f31, f32, f33): entries in F
  • Each correspondence gives one equation
  • A is an n×9 matrix (in homogeneous format)
 – Obtain estimate F^ by SVD of A: A = UDVT
  • x (up to a scale) is the column of V corresponding to the least singular value; reshaping x gives F^
 – Enforce singularity constraint, since Rank(F) = 2:
  • Compute SVD of F^: F^ = UDVT
  • Set the smallest singular value to 0: D → D'
  • Corrected estimate of F: F' = UD'VT
• Output: the estimate of the fundamental matrix, F'
• Similarly we can compute E given intrinsic parameters
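The steps above can be sketched in NumPy. The camera pair, rotation, and points below are all hypothetical, chosen only so that exact correspondences exist (identity intrinsics, so pl and pr are already homogeneous image points):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical camera pair: P = [I | 0] and P' = [R | t].
c, s = np.cos(0.3), np.sin(0.3)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
t = np.array([1.0, 0.2, 0.0])

# Random 3-D points in front of both cameras, projected to (u, v, 1).
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
pl = X / X[:, 2:]
pr = X @ R.T + t
pr = pr / pr[:, 2:]

# One equation per correspondence: rows of A are kron(pr, pl), so that
# A @ vec(F) = 0 encodes pr^T F pl = 0.
A = np.stack([np.kron(b, a) for a, b in zip(pl, pr)])
_, _, Vt = np.linalg.svd(A)
F = Vt[-1].reshape(3, 3)          # least-singular-value column of V

# Enforce the rank-2 singularity constraint.
U, D, Vt2 = np.linalg.svd(F)
D[2] = 0.0
F = U @ np.diag(D) @ Vt2
```

With noise-free correspondences the residuals pr^T F pl come out at machine precision; with real detections a normalization of the coordinates (Hartley's preconditioning) is usually added before solving.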


Locating the Epipoles from F [1]

el lies on all the epipolar lines of the left image: prT F el = 0 is true for every pr, and F is not identically zero, so F el = 0.

• Input: Fundamental Matrix F
 – Find the SVD of F: F = UDVT
 – The epipole el is the column of V corresponding to the null singular value (as shown above)
 – The epipole er is the column of U corresponding to the null singular value
• Output: Epipoles el and er

[Figure: epipolar plane through P, Ol, Or; image points pl, pr; epipolar lines; epipoles el, er]
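A small check of the recipe (hypothetical F built as [e']x H, which is rank 2 by construction):

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical rank-2 fundamental matrix F = [e']x H.
H = np.array([[1.0, 0.1, 0.2], [0.0, 1.1, 0.3], [0.0, 0.0, 1.0]])
e_prime = np.array([0.5, 0.2, 1.0])
F = skew(e_prime) @ H

U, D, Vt = np.linalg.svd(F)
e_l = Vt[-1]      # column of V for the null singular value: F e_l = 0
e_r = U[:, -1]    # column of U for the null singular value: F^T e_r = 0
```

The recovered e_r is parallel to the e' used to build F, as expected.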


Corner Detection [5]

[Figure: input images Im1.jpg and Im2.jpg]

thresh = 500; % Harris corner threshold

% Find Harris corners in image1 and image2
[cim1, r1, c1] = harris(im1, 1, thresh, 3);
[cim2, r2, c2] = harris(im2, 1, thresh, 3);


Correlation-Based Matching [5]

dmax = 50; % Maximum search distance for matching
w = 11;    % Window size for correlation matching

% Use normalised correlation matching
[m1,m2] = matchbycorrelation(im1, [r1';c1'], im2, [r2';c2'], w, dmax);

% Display putative matches
show(im1,3), set(3,'name','Putative matches')
for n = 1:length(m1)
    line([m1(2,n) m2(2,n)], [m1(1,n) m2(1,n)])
end

Putative matches


RANSAC-Based Fundamental Matrix Estimation [5]

% Assemble homogeneous feature coordinates for fitting of the
% fundamental matrix; note that [x,y] corresponds to [col, row]
x1 = [m1(2,:); m1(1,:); ones(1,length(m1))];
x2 = [m2(2,:); m2(1,:); ones(1,length(m1))];

t = .002; % Distance threshold for deciding outliers

[F, inliers] = ransacfitfundmatrix(x1, x2, t, 1);

Inlying matches


Epipolar Lines [5]

% Step through each matched pair of points and display the
% corresponding epipolar lines on the two images.

l2 = F*x1;  % Epipolar lines in image2
l1 = F'*x2; % Epipolar lines in image1

% Solve for epipoles
[U,D,V] = svd(F);
e1 = hnormalise(V(:,3));
e2 = hnormalise(U(:,3));

for n = inliers
    figure(1), clf, imshow(im1), hold on, plot(x1(1,n),x1(2,n),'r+');
    hline(l1(:,n)); plot(e1(1), e1(2), 'g*');
    figure(2), clf, imshow(im2), hold on, plot(x2(1,n),x2(2,n),'r+');
    hline(l2(:,n)); plot(e2(1), e2(2), 'g*');
end


Epipolar Lines [5]


Estimation of Camera Matrix [6]

• Once the essential matrix is known, the camera matrices may be retrieved from E. In contrast with the fundamental matrix case, where there is a projective ambiguity, the camera matrices may be retrieved from the essential matrix up to scale and a four-fold ambiguity.

• A 3x3 matrix is an essential matrix if and only if two of its singular values are equal, and the third is zero.

• We may assume that the first camera matrix is P=[I|0]. In order to compute the second camera matrix, P’, it is necessary to factor E into the product SR of a skew-symmetric matrix and a rotation matrix.

• Suppose that the SVD of E is U diag(1,1,0) VT. Using the notation of W and Z, there are (ignoring signs) two possible factorizations E = SR as follows:

S = UZUT,  R = UWVT or UWTVT

where

W = [ 0 -1 0 ]    Z = [  0 1 0 ]
    [ 1  0 0 ]        [ -1 0 0 ]
    [ 0  0 1 ]        [  0 0 0 ]

ZW = [ 1 0 0 ]
     [ 0 1 0 ] = diag(1, 1, 0)
     [ 0 0 0 ]

E = SR = (UZUT)(UWVT) = UZ(UTU)WVT = UZWVT = U diag(1,1,0) VT


Estimation of Camera Matrix [7]

S=UZUT, R=UWVT or UWTVT

• For a given essential matrix E=Udiag(1,1,0)VT, and first camera matrix P=[I|0], there are four possible choices for the second camera matrix P’, namely

P' = [UWV^T | +u3],  [UWV^T | -u3],  [UW^TV^T | +u3],  or  [UW^TV^T | -u3]

where u3 is the last column of U.

[Figure: the four possible solutions for calibrated reconstruction from E, with the epipoles marked; only one solution places the reconstructed point in front of both cameras.]
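As a sketch (NumPy; function name hypothetical), the four (R, t) candidates can be enumerated directly from the SVD of E. In practice the correct pair is selected by the cheirality test: triangulated points must lie in front of both cameras.

```python
import numpy as np

def decompose_essential(E):
    """Four candidate (R, t) pairs from an essential matrix E = U diag(1,1,0) V^T."""
    U, _, Vt = np.linalg.svd(E)
    # Make U and V proper rotations (E is only defined up to sign anyway)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt          # R = U W V^T
    R2 = U @ W.T @ Vt        # R = U W^T V^T
    t = U[:, 2]              # u3, the last column of U (translation up to scale)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```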

Page 69: Linear Triangulation Methods [8]

• In each image we have a measurement x=PX, x’=P’X, and these equations can be combined into a form AX=0, which is an equation linear in X.

• The homogeneous scale factor is eliminated by a cross product to give three equations for each image point, of which two are linearly independent.

• For the first image, x × (PX) = 0, and writing this out gives

  x (p^{3T} X) - (p^{1T} X) = 0
  y (p^{3T} X) - (p^{2T} X) = 0
  x (p^{2T} X) - y (p^{1T} X) = 0

where p^{iT} are the rows of P. Only two of these three equations are linearly independent; the analogous pair is obtained from the second image with x' and the rows p'^{iT} of P'.

Page 70: Linear Triangulation Methods [8]

• An equation of the form AX = 0 can then be composed, with

  A = [ x p^{3T} - p^{1T} ;
        y p^{3T} - p^{2T} ;
        x' p'^{3T} - p'^{1T} ;
        y' p'^{3T} - p'^{2T} ]

where two equations have been included from each image, giving a total of four equations in four homogeneous unknowns. This is a redundant set of equations, since the solution is determined only up to scale.

X can be calculated by the SVD of A (the singular vector corresponding to the smallest singular value).
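A minimal sketch of this linear (DLT) triangulation in NumPy (names are illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation of one point from x = P1 X, x' = P2 X (AX = 0)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],   # x (p3^T X) - (p1^T X) = 0
        x1[1] * P1[2] - P1[1],   # y (p3^T X) - (p2^T X) = 0
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```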

Page 71: Fundamental Matrix Estimation Methods

[Figure: two camera frames (X, Y, Z) and (X', Y', Z') observing a 3D point X, with image points x_cam = PX and x'_cam = P'X. The constraints x_cam × (PX) = 0 and x'_cam × (P'X) = 0 each yield three equations (only two independent), which are stacked into AX = 0 with A built from the rows of P and P' as on the previous slides.]

Page 72: Stereo Rectification [1]

• Image Reprojection
  – reproject the image planes onto a common plane parallel to the line between the optical centers
• Notice that only the focal point (optical center) of each camera really matters

How can we make images as in the rectilinear configuration? → Stereo rectification.

Page 73: References

1. Chandra Kambhamettu, "Multipleview1," Delaware University Lecture Material of Computer Vision (CISC 4/689), 2007.
2. Wikipedia, "Fundamental Matrix (Computer Vision)," http://en.wikipedia.org/wiki/Fundamental_matrix_(computer_vision)
3. Sebastian Thrun, Rick Szeliski, Hendrik Dahlkamp and Dan Morris, "Stereo1," Stanford University Lecture Material of Computer Vision (CS223B), Winter 2005.
4. Daniel Wedge, "The Fundamental Matrix Song," http://danielwedge.com/fmatrix/
5. Peter Kovesi, "Example of finding the fundamental matrix using RANSAC," available at http://www.csse.uwa.edu.au/~pk/Research/MatlabFns/Robust/example/index.html
6. Richard Hartley and Andrew Zisserman, "8.6 The essential matrix," Multiple View Geometry in Computer Vision, Cambridge, pp. 238-241.
7. Richard Hartley and Andrew Zisserman, "11.2 Linear triangulation methods," Multiple View Geometry in Computer Vision, Cambridge, pp. 238-241.

Page 74: 3.4. Stereo Matching

Page 75: Stereo Vision [1]

Page 76: Reduction of Searching by Epipolar Constraint [1]

Page 77: Photometric Constraint [1]

Same world point has the same intensity in both images.
– True for Lambertian surfaces
  • A Lambertian surface has a brightness that is independent of viewing angle
– Violations:
  • Noise
  • Specularity
  • Non-Lambertian materials
  • Pixels that contain multiple surfaces

Page 78: Photometric Constraint [1]

For each epipolar line:
  For each pixel in the left image:
    • compare with every pixel on the same epipolar line in the right image
    • pick the pixel with minimum match cost

This leaves too much ambiguity, so:
Improvement: match windows
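The matching loop above, with the window improvement, can be sketched as brute-force SSD block matching on rectified images (NumPy; a deliberately slow, illustrative version, not production code):

```python
import numpy as np

def disparity_ssd(left, right, max_disp, half=3):
    """Brute-force SSD block matching along rectified scanlines (illustrative)."""
    H, W = left.shape
    disp = np.zeros((H, W), dtype=np.int32)
    for y in range(half, H - half):
        for x in range(half, W - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float64)
                cost = np.sum((ref - cand) ** 2)   # SSD match cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```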

Page 79: Correspondence Using Correlation [1]

Page 80: Sum of Squared Difference (SSD) [1]

Page 81: Image Normalization [1]

Page 82: Images as Vectors [1]

Page 83: Image Metrics [1]
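The figures for these slides are not reproduced here. As a sketch of the window metric they illustrate, zero-mean normalized cross-correlation (NCC) scores two windows independently of affine brightness changes (gain and offset), which is the point of the normalization step:

```python
import numpy as np

def ncc(w1, w2):
    """Zero-mean normalized cross-correlation between two equal-size windows."""
    a = w1.astype(np.float64).ravel()
    b = w2.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

NCC is +1 for windows related by any positive gain and offset, which makes it robust to the brightness differences that defeat plain SSD.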

Page 84: Stereo Result [1]

Left Disparity Map

Images courtesy of Point Grey Research

Page 85: Window Size [1]

Better results with adaptive window:
• T. Kanade and M. Okutomi, "A Stereo Matching Algorithm with an Adaptive Window: Theory and Experiment," Proc. International Conference on Robotics and Automation, 1991.
• D. Scharstein and R. Szeliski, "Stereo matching with nonlinear diffusion," International Journal of Computer Vision, 28(2):155-174, July 1998.

Page 86: Ordering Constraint [3]

Page 87: Smooth Surface Problem [3]

Page 88: Occlusion [1]

Left Occlusion Right Occlusion

Page 89: Search over Correspondence [1]

Page 90: Stereo Matching with Dynamic Programming [1]

Page 91: Dynamic Programming [3]

Local errors may be propagated along a scan-line, and no inter-scan-line consistency is enforced.
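A sketch of scanline stereo by dynamic programming (simplified: squared-difference match cost plus a constant occlusion penalty; the cost values are hypothetical):

```python
import numpy as np

def dp_scanline(left_row, right_row, occ=10.0):
    """Match one rectified scanline pair by DP (match or occlusion moves)."""
    n, m = len(left_row), len(right_row)
    C = np.zeros((n + 1, m + 1))
    C[:, 0] = np.arange(n + 1) * occ            # all-occluded borders
    C[0, :] = np.arange(m + 1) * occ
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = C[i - 1, j - 1] + (float(left_row[i - 1]) - float(right_row[j - 1])) ** 2
            C[i, j] = min(match, C[i - 1, j] + occ, C[i, j - 1] + occ)
    # Backtrack to recover the matched pixel pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        match = C[i - 1, j - 1] + (float(left_row[i - 1]) - float(right_row[j - 1])) ** 2
        if C[i, j] == match:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif C[i, j] == C[i - 1, j] + occ:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

Each scanline is optimized independently, which is exactly why errors can streak along rows as the slide notes.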

Page 92: Correspondence by Feature [3]

Page 93: Segment-Based Stereo Matching [3]

Assumptions
• Depth discontinuities tend to correlate well with color edges
• Disparity variation within a segment is small
• The scene can be approximated with piecewise planar surfaces

Page 94: Segment-Based Stereo Matching [3]

• A plane equation is fitted in each segment, based on an initial disparity estimate obtained by SSD or correlation
• Global matching criterion: if a depth map is good, warping the reference image to the other view according to this depth will render an image that matches the real view
• Optimization by iterative neighborhood depth hypothesizing
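The per-segment plane fit can be sketched as a least-squares fit of d ≈ a·x + b·y + c over the pixels of one segment (NumPy; the helper name is mine):

```python
import numpy as np

def fit_disparity_plane(xs, ys, ds):
    """Least-squares plane d = a*x + b*y + c over one segment's pixels."""
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    coef, *_ = np.linalg.lstsq(A, ds, rcond=None)
    return coef   # (a, b, c)
```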

Page 95: Segment-Based Stereo Matching [3]

Hai Tao and Harpreet W. Sawhney

Page 96: Segment-Based Stereo Matching [3]

Page 97: Smoothing by MRF [2]

Page 98: Smoothing by MRF [4]

Page 99: Smoothing by MRF [4]

Page 100: Stereo Testing and Comparison [1]

Page 101: Stereo Testing and Comparison [1]

Page 102: Stereo Testing and Comparison [1]

Window-based matching (best window size)

Ground truth

Page 103: Stereo Testing and Comparison [1]

State-of-the-art method          Ground truth

Boykov et al., Fast Approximate Energy Minimization via Graph Cuts, International Conference on Computer Vision, September 1999.

Page 104: Intermediate View Reconstruction [1]

Right Image · Left Image · Disparity

Page 105: Intermediate View Reconstruction [1]

Page 106: References

1. David Lowe, "Stereo," UBC (Univ. of British Columbia) Lecture Material of Computer Vision (CPSC 425), Spring 2007.
2. Sebastian Thrun, Rick Szeliski, Hendrik Dahlkamp and Dan Morris, "Stereo 2," Stanford Lecture Material of Computer Vision (CS 223B), Winter 2005.
3. Chandra Kambhamettu, "Multiple Views1" and "Multiple View2," Univ. of Delaware Lecture Material of Computer Vision (CISC 4/689), Spring 2007.
4. J. Diebel and S. Thrun, "An Application of Markov Random Fields to Range Sensing," Proc. Neural Information Processing Systems (NIPS), Cambridge, MA, 2005. MIT Press.

Page 107: 3.5. Optical Flow

Page 108: Motion in Computer Vision

Motion
• Structure from motion
• Detection/segmentation with direction
[1]

Page 109: Motion Field vs. Optical Flow [2], [3]

Motion Field: an ideal representation of 3D motion as it is projected onto a camera image.

Optical Flow: the approximation (or estimate) of the motion field which can be computed from time-varying image sequences, under the simplifying assumptions of 1) Lambertian surface, 2) pointwise light source at infinity, and 3) no photometric distortion.

Page 110: Motion Field [2]

• An ideal representation of 3D motion as it is projected onto a camera image.
• The time derivative of the image position of all image points, given that they correspond to fixed 3D points. ("field": position vector)
• The motion field v is defined as the image velocity induced by the relative motion V, where P is a point in the scene, Z is the distance to that scene point, V is the relative motion between the camera and the scene, T is the translational component of the motion, and ω is the angular velocity of the motion.

Page 111: Motion Field [5]

  p = f P / Z    (1)

for a 3D point P = (X, Y, Z)^T and its 2D image point p = (x, y)^T, with focal length f, i.e. x = f X/Z and y = f Y/Z.

The motion field v can be obtained by taking the time derivative of (1):

  v_x = (f V_x - x V_z) / Z,   v_y = (f V_y - y V_z) / Z    (2)

Page 112: Motion Field [5]

The motion V of the 3D point P is defined as

  V = -T - ω × P    (3)

or, componentwise,

  V_x = -T_x - ω_y Z + ω_z Y
  V_y = -T_y - ω_z X + ω_x Z
  V_z = -T_z - ω_x Y + ω_y X

By substituting (3) into (2), the basic equations of the motion field are acquired:

  v_x = (T_z x - T_x f)/Z - ω_y f + ω_z y + ω_x x y / f - ω_y x^2 / f    (4)
  v_y = (T_z y - T_y f)/Z + ω_x f - ω_z x - ω_y x y / f + ω_x y^2 / f

Page 113: Motion Field [5]

The motion field is the sum of two components, one of which depends on translation only, the other on rotation only. Splitting (4):

Translational components:

  v_x^T = (T_z x - T_x f)/Z,   v_y^T = (T_z y - T_y f)/Z    (5)

Rotational components:

  v_x^R = -ω_y f + ω_z y + ω_x x y / f - ω_y x^2 / f    (6)
  v_y^R =  ω_x f - ω_z x - ω_y x y / f + ω_x y^2 / f
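Equations (4)-(6) can be evaluated numerically; the sketch below (hypothetical helper) makes the two properties explicit: the translational part depends on the depth Z, while the rotational part does not.

```python
def motion_field(x, y, Z, T, w, f):
    """Motion field of eq. (4): translational part (5) plus rotational part (6)."""
    Tx, Ty, Tz = T
    wx, wy, wz = w
    vx_t = (Tz * x - Tx * f) / Z                                  # depends on Z
    vy_t = (Tz * y - Ty * f) / Z
    vx_r = -wy * f + wz * y + wx * x * y / f - wy * x * x / f     # independent of Z
    vy_r = wx * f - wz * x - wy * x * y / f + wx * y * y / f
    return vx_t + vx_r, vy_t + vy_r
```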

Page 114: Motion Field: Pure Translation [2]

If there is no rotational motion, the resulting motion field has a peculiar spatial structure. Regarding (5) as a function of the 2D point position:

  v_x = (T_z x - T_x f)/Z,   v_y = (T_z y - T_y f)/Z    (5)

If (x0, y0) is defined as in (6),

  x0 = f T_x / T_z,   y0 = f T_y / T_z    (6)

the field becomes

  v_x = (x - x0) T_z / Z,   v_y = (y - y0) T_z / Z    (7)

Page 115: Motion Field: Pure Translation [2]

Equation (7) says that the motion field of a pure translation is radial. In particular, if TZ < 0, the vectors point away from p0 (x0, y0), which is called the focus of expansion (FOE). If TZ > 0, the motion field vectors point towards p0, which is called the focus of contraction (FOC). If TZ = 0, from (5),

  v_x = (T_z x - T_x f)/Z,   v_y = (T_z y - T_y f)/Z    (5)

all the motion field vectors are parallel:

  v_x = -f T_x / Z,   v_y = -f T_y / Z    (8)
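A small sketch of (5)-(7) (names are mine): the FOE p0 from (6), and a check that the pure-translation field of (5) is radial about it.

```python
def focus_of_expansion(T, f):
    """p0 = (f*Tx/Tz, f*Ty/Tz) from eq. (6); undefined (parallel field) if Tz == 0."""
    Tx, Ty, Tz = T
    if Tz == 0:
        return None
    return (f * Tx / Tz, f * Ty / Tz)

def translational_flow(x, y, Z, T, f):
    """Eq. (5): v = ((Tz*x - Tx*f)/Z, (Tz*y - Ty*f)/Z)."""
    Tx, Ty, Tz = T
    return ((Tz * x - Tx * f) / Z, (Tz * y - Ty * f) / Z)
```

At every pixel, the flow vector of (5) is parallel to (x - x0, y - y0), which is what "radial" means here.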

Page 116: Motion Field: Motion Parallax [6]

Equation (8) says that the lengths of the motion field vectors are inversely proportional to the depth of the corresponding 3D points:

  v_x = -f T_x / Z,   v_y = -f T_y / Z    (8)

This animation is an example of parallax. As the viewpoint moves side to side, the objects in the distance appear to move more slowly than the objects close to the camera [6].

http://upload.wikimedia.org/wikipedia/commons/a/ab/Parallax.gif

Page 117: Motion Field: Motion Parallax [2]

If two 3D points project to the same (instantaneously coincident) image point, the rotational components of their motion fields are identical, since the rotational part of (4) depends only on image position. Notice that the motion vector V is about camera motion.

  v_x = (T_z x - T_x f)/Z - ω_y f + ω_z y + ω_x x y / f - ω_y x^2 / f    (4)
  v_y = (T_z y - T_y f)/Z + ω_x f - ω_z x - ω_y x y / f + ω_x y^2 / f

Page 118: Motion Field: Motion Parallax [2]

The difference of the two points' motion fields depends only on the translational components, and it is radial with respect to the FOE (or FOC):

  Δv_x = (T_z x - T_x f)(1/Z1 - 1/Z2) = (x - x0) T_z (1/Z1 - 1/Z2)
  Δv_y = (T_z y - T_y f)(1/Z1 - 1/Z2) = (y - y0) T_z (1/Z1 - 1/Z2)

Page 119: Motion Field: Motion Parallax [2]

Motion Parallax

The relative motion field of two instantaneously coincident points:
1. Does not depend on the rotational component of motion
2. Points towards (or away from) the point p0, the vanishing point of the translation direction

Page 120: Motion Field: Pure Rotation w.r.t. Y-axis [7]

If there is no translational motion and no rotation w.r.t. the x- and z-axes, from (4):

  v_x = -ω_Y (f^2 + x^2) / f
  v_y = -ω_Y x y / f

Page 121: Motion Field: Pure Rotation w.r.t. Y-axis [7]

[Figure: under rotational motion about the camera center, the distance Z to the point is constant; under translational motion, Z changes (Z1 → Z2), and the image position y changes with Z.]

Page 122: Motion Field: Optical Flow [4]

The Image Brightness Constancy Equation [2]

Page 123: Motion Field: Optical Flow [2]

  (∇I)^T v + I_t = 0

Only the component of v along the image gradient is determined:

  v_n = -I_t / ‖∇I‖

Assumption: the image brightness is continuous and differentiable as many times as needed in both the spatial and temporal domains. The image brightness can be regarded as a plane in a small area.

Page 124: Motion Field: Lucas-Kanade Method [8]

Assuming that the optical flow (v_x, v_y) is constant in a small window of size m×m with m > 1, centered at (x, y), and numbering the pixels within it as 1…n, n = m^2, a set of equations can be found by applying the constraint (∇I)^T v = -I_t at every pixel:

  I_x(q_i) v_x + I_y(q_i) v_y = -I_t(q_i),   i = 1…n

which is solved for (v_x, v_y) in the least-squares sense.
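A minimal single-window Lucas-Kanade sketch (NumPy; central differences for I_x and I_y, a frame difference for I_t; the names are mine):

```python
import numpy as np

def lucas_kanade(I1, I2, x, y, half=2):
    """Least-squares flow (vx, vy) for one window centered at pixel (x, y)."""
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0   # dI/dx
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0   # dI/dy
    It = I2 - I1                                                    # dI/dt
    win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)   # solves A v ≈ b
    return float(v[0]), float(v[1])
```

The least-squares solution exists only where A^T A is well conditioned, i.e. where the window contains gradients in two directions; this is exactly the aperture problem of the next slide.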

Page 125: Motion Field: Aperture Problem [2], [9]

The component of the motion field in the direction orthogonal to the spatial image gradient is not constrained by the image brightness constancy equation: local information can determine the component of the optical flow vector only in the direction of the brightness gradient.

The aperture problem. The grating appears to be moving down and to the right, perpendicular to the orientation of the bars. But it could be moving in many other directions, such as only down, or only to the right. It is impossible to determine unless the ends of the bars become visible in the aperture.

http://upload.wikimedia.org/wikipedia/commons/f/f0/Aperture_problem_animated.gif

Page 126: References

1. Richard Szeliski, "Dense motion estimation," Computer Vision: Algorithms and Applications, 19 June 2009 (draft), pp. 383-426.
2. Emanuele Trucco, Alessandro Verri, "8. Motion," Introductory Techniques for 3-D Computer Vision, Prentice Hall, New Jersey, 1998, pp. 177-218.
3. Wikipedia, "Motion field," available on www.wikipedia.org.
4. Wikipedia, "Optical flow," available on www.wikipedia.org.
5. Alessandro Verri, Emanuele Trucco, "Finding the Epipole from Uncalibrated Optical Flow," BMVC 1997, available on http://www.bmva.ac.uk/bmvc/1997/papers/052/bmvc.html.
6. Wikipedia, "Parallax," available on www.wikipedia.org.
7. Jae Kyu Suhr, Ho Gi Jung, Kwanghyuk Bae, Jaihie Kim, "Outlier rejection for cameras on intelligent vehicles," Pattern Recognition Letters 29 (2008) 828-840.
8. Wikipedia, "Lucas-Kanade Optical Flow Method," available on www.wikipedia.org.
9. Wikipedia, "Aperture Problem," available on www.wikipedia.org.