
Lecture 03

15/11/2011, Shai Avidan

Clarification: the binding material is what is taught in class, not what does or does not appear in the slides.

• Perspective Projection
• Direct Linear Transformation
• Triangulation

PERSPECTIVE PROJECTION

Perspective projection

[Figure: a scene point is projected onto the image plane, showing the camera frame, image coordinates, optical axis, and focal length.]

Thus far, we have worked in the camera’s reference frame only.

Camera parameters

• Extrinsic: location and orientation of camera frame with respect to reference frame
• Intrinsic: how to map pixel coordinates to image plane coordinates

[Figure: camera 1 frame relative to the reference frame.]

Extrinsic camera parameters

Pc = R (Pw - T)

where Pw is the point in the world reference frame and Pc = (X, Y, Z)^T is the same point in the camera reference frame.
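A minimal numeric sketch of this extrinsic transform (NumPy; the function name and values are illustrative, not from the slides):

import numpy as np

def world_to_camera(P_w, R, T):
    # Extrinsic transform: Pc = R (Pw - T)
    return R @ (np.asarray(P_w, dtype=float) - np.asarray(T, dtype=float))

# Example: identity rotation, camera located at T = (0, 0, -5) in world coordinates.
R = np.eye(3)
T = np.array([0.0, 0.0, -5.0])
P_c = world_to_camera([1.0, 2.0, 3.0], R, T)   # -> array([1., 2., 8.])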


Intrinsic camera parameters

• Ignoring any geometric distortion from the optics, the intrinsic parameters can be described by:

x = (x_im - o_x) · s_x
y = (y_im - o_y) · s_y

where (x, y) are the coordinates of the projected point in the camera reference frame, (x_im, y_im) are the coordinates of the image point in pixel units, (o_x, o_y) are the coordinates of the image center in pixel units, and s_x, s_y are the effective size of a pixel (in mm) along each axis.

Camera parameters

• We know that, in terms of the camera reference frame, Pc = R (Pw - T) with Pc = (X, Y, Z)^T, and the perspective projection gives x = f·X/Z, y = f·Y/Z
• Substituting the previous equations describing the intrinsic and extrinsic parameters, we can relate pixel coordinates to world points:

x_im = (f/s_x) · (R1 · (Pw - T)) / (R3 · (Pw - T)) + o_x
y_im = (f/s_y) · (R2 · (Pw - T)) / (R3 · (Pw - T)) + o_y

where Ri is row i of the rotation matrix R.
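A minimal sketch of this world-to-pixel mapping (NumPy; function and parameter names are illustrative, not from the slides):

import numpy as np

def project_to_pixels(P_w, R, T, f, s_x, s_y, o_x, o_y):
    # Extrinsic step: express the world point in the camera reference frame.
    X, Y, Z = R @ (np.asarray(P_w, dtype=float) - T)
    # Perspective projection onto the image plane (camera reference frame, mm).
    x, y = f * X / Z, f * Y / Z
    # Intrinsic step: image-plane coordinates to pixel coordinates.
    return x / s_x + o_x, y / s_y + o_y

R, T = np.eye(3), np.array([0.0, 0.0, -5.0])
print(project_to_pixels([1.0, 2.0, 3.0], R, T, f=1.0, s_x=0.01, s_y=0.01, o_x=320, o_y=240))
# -> (332.5, 265.0)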

Projection matrix

• This can be rewritten as a matrix product using homogeneous coordinates:

• The motion of the camera is equivalent to the inverse motion of the scene

[ w·x_im ]              [ Xw ]
[ w·y_im ]  =  K [R t]  [ Yw ]
[   w    ]              [ Zw ]
                        [ 1  ]

Internal parameters:

     [ f/s_x    0     o_x ]
K =  [   0    f/s_y   o_y ]
     [   0      0      1  ]

External parameters, written as a homogeneous transform that takes the world point to the point in camera coordinates (so t = -RT):

[ R  -RT ]
[ 0    1 ]

The pixel coordinates are recovered by dividing out w:

x_im = (w·x_im) / w
y_im = (w·y_im) / w
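The same mapping as a sketch in matrix form (NumPy; continues the illustrative values of the previous sketch):

import numpy as np

def projection_matrix(R, T, f, s_x, s_y, o_x, o_y):
    # Internal parameters K and external parameters [R | t] with t = -R T.
    K = np.array([[f / s_x, 0.0,     o_x],
                  [0.0,     f / s_y, o_y],
                  [0.0,     0.0,     1.0]])
    Rt = np.hstack([R, (-R @ T).reshape(3, 1)])
    return K @ Rt                                    # 3x4 projection matrix M = K [R t]

M = projection_matrix(np.eye(3), np.array([0.0, 0.0, -5.0]), 1.0, 0.01, 0.01, 320, 240)
p = M @ np.array([1.0, 2.0, 3.0, 1.0])               # homogeneous image point (w*x_im, w*y_im, w)
x_im, y_im = p[0] / p[2], p[1] / p[2]                # divide out w: (332.5, 265.0), as before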

Calibrating a camera

• Compute intrinsic and extrinsic parameters using observed camera data
• Main idea:
  – Place a “calibration object” with known geometry in the scene
  – Get correspondences
  – Solve for the mapping from scene to image: estimate M = K[R t]

Projection matrix

• This can be rewritten as a matrix product using homogeneous coordinates:

[ w·x_im ]              [ Xw ]
[ w·y_im ]  =  K [R t]  [ Yw ]  =  M Pw        (Pw in homogeneous coordinates)
[   w    ]              [ Zw ]
                        [ 1  ]

• The product M = K [R t] is a single projection matrix encoding both the extrinsic and intrinsic parameters.
• Let Mi be row i of matrix M. Dividing out w gives the pixel coordinates:

x_im = (M1 · Pw) / (M3 · Pw)
y_im = (M2 · Pw) / (M3 · Pw)

Estimating the projection matrix

For a given feature point,

x_im = (M1 · Pw) / (M3 · Pw)
y_im = (M2 · Pw) / (M3 · Pw)

and therefore

0 = (M1 - x_im M3) · Pw
0 = (M2 - y_im M3) · Pw

Estimating the projection matrix

Writing the first of these equations out in terms of the entries mij of M:

( [m11  m12  m13  m14] - x_im [m31  m32  m33  m34] ) · [Xw  Yw  Zw  1]^T = 0

Expanding this first equation, we have:

Xw·m11 + Yw·m12 + Zw·m13 + m14 - x_im·Xw·m31 - x_im·Yw·m32 - x_im·Zw·m33 - x_im·m34 = 0

Estimating the projection matrix

In matrix form, the two equations for a single feature point become

[ Xw  Yw  Zw  1   0   0   0   0   -x_im·Xw  -x_im·Yw  -x_im·Zw  -x_im ]         [ 0 ]
[ 0   0   0   0   Xw  Yw  Zw  1   -y_im·Xw  -y_im·Yw  -y_im·Zw  -y_im ] · m  =  [ 0 ]

where m = (m11, m12, …, m34)^T stacks the twelve entries of M.

Estimating the projection matrix

This is true for every feature point, so we can stack up the n observed image features and their associated 3D points into a single equation: writing the two rows above for each point i = 1, …, n (with coordinates Xw(i), Yw(i), Zw(i), x_im(i), y_im(i)) one below the other gives a 2n×12 measurement matrix P, and collecting the entries of M into the vector m = (m11, …, m34)^T yields

P m = 0

Solve for the mij’s (the calibration information) [F&P Section 3.1].
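A sketch of solving Pm = 0 (NumPy): since m is only defined up to scale, take the right singular vector of P with the smallest singular value, which minimizes ||Pm|| subject to ||m|| = 1. The helper name is illustrative.

import numpy as np

def calibrate_dlt(points_3d, points_2d):
    # Stack the two constraint rows per correspondence into the 2n x 12 matrix P.
    rows = []
    for (Xw, Yw, Zw), (x_im, y_im) in zip(points_3d, points_2d):
        rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -x_im*Xw, -x_im*Yw, -x_im*Zw, -x_im])
        rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -y_im*Xw, -y_im*Yw, -y_im*Zw, -y_im])
    P = np.asarray(rows, dtype=float)
    # Least-squares solution of P m = 0: smallest right singular vector.
    _, _, Vt = np.linalg.svd(P)
    return Vt[-1].reshape(3, 4)          # M = K [R t], recovered up to scale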

Summary: camera calibration

• Associate image points with scene points on object with known geometry

• Use together with perspective projection relationship to estimate projection matrix

• (Can also solve for the explicit parameters themselves; a sketch follows below)
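As for recovering the explicit parameters: one common recipe (not spelled out in the slides, shown here only as an assumed-standard sketch) factors the left 3×3 block of M with an RQ decomposition, M[:, :3] = K R:

import numpy as np
from scipy.linalg import rq

def decompose_projection(M):
    # Upper-triangular K times orthonormal R.
    K, R = rq(M[:, :3])
    # Fix signs so that the diagonal of K (focal terms) is positive.
    signs = np.sign(np.diag(K))
    K, R = K * signs, (R.T * signs).T
    t = np.linalg.solve(K, M[:, 3])      # last column of M is K t
    return K / K[2, 2], R, t             # normalize so K[2, 2] = 1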

DIRECT LINEAR TRANSFORMATION

Direct Linear Transformation from 3D to 2D

• Given two cameras (a stereo pair) and a 3D object with known 3D coordinates, we can:

1) For each camera, compute the direct linear transformation from 3D to 2D

• Now we can move the stereo pair to a different place, take a pair of pictures, and recover 3D information:

1) Given the two camera matrices and a pair of matching points, we can triangulate to recover the 3D point.

2) We’ll talk about triangulation later in class; a combined sketch of this workflow follows below.
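A sketch of that two-stage workflow, reusing the illustrative calibrate_dlt helper above and the triangulate_linear helper sketched in the triangulation section below (the names and staging are assumptions, not from the slides):

def stereo_reconstruction(object_points, img_pts_cam1, img_pts_cam2, uv1, uv2):
    # Stage 1: calibrate each camera of the rigid pair against the known 3D object.
    M1 = calibrate_dlt(object_points, img_pts_cam1)
    M2 = calibrate_dlt(object_points, img_pts_cam2)
    # Stage 2: after moving the rig, triangulate each pair of matching pixels;
    # the recovered point is expressed in the frame attached to the calibrated rig.
    return triangulate_linear(M1, M2, uv1, uv2)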

When would we calibrate this way?

• Makes sense when geometry of system is not going to change over time

• …When would it change?

TRIANGULATION


Triangulation

• Problem: Given some points in correspondence across two or more images (taken from calibrated cameras), {(uj,vj)}, compute the 3D location X


Triangulation

• Method I: intersect viewing rays in 3D; minimize over X and the sj:

  sum over j of || X - (Cj + sj·Vj) ||²

  – X is the unknown 3D point
  – Cj is the optical center of camera j
  – Vj is the viewing ray for pixel (uj, vj)
  – sj is the unknown distance along Vj

• Advantage: geometrically intuitive

[Figure: viewing rays Vj from the camera centers Cj intersecting at the 3D point X.]
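A sketch of Method I (NumPy): minimizing over each sj first leaves a small linear system for X; with unit ray directions, the point closest to all rays satisfies sum_j (I - Vj Vj^T)(X - Cj) = 0. The helper name is illustrative.

import numpy as np

def triangulate_rays(centers, rays):
    # Least-squares intersection of the rays X = Cj + sj Vj.
    A, b = np.zeros((3, 3)), np.zeros(3)
    for C, V in zip(centers, rays):
        V = np.asarray(V, dtype=float)
        V = V / np.linalg.norm(V)
        P_perp = np.eye(3) - np.outer(V, V)   # projector orthogonal to the ray
        A += P_perp
        b += P_perp @ np.asarray(C, dtype=float)
    return np.linalg.solve(A, b)

# Two cameras at (-1, 0, 0) and (1, 0, 0), both looking at the point (0, 0, 5):
X = triangulate_rays([[-1, 0, 0], [1, 0, 0]], [[1, 0, 5], [-1, 0, 5]])   # -> approx [0, 0, 5]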

Triangulation

• Method II: solve linear equations in X
  – advantage: very simple (see the sketch after this list)

• Method III: non-linear minimization
  – advantage: most accurate (minimizes the image-plane error)
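A sketch of Method II (NumPy): each observation (u, v) from a camera with projection matrix M gives the two homogeneous equations u·(M3·X) - M1·X = 0 and v·(M3·X) - M2·X = 0 in the homogeneous point X, and the stacked system is solved with an SVD exactly as in the calibration step. The name triangulate_linear is illustrative.

import numpy as np

def triangulate_linear(M1, M2, uv1, uv2):
    # Build the 4 x 4 homogeneous system A X = 0 from the two pixel observations.
    rows = []
    for M, (u, v) in ((M1, uv1), (M2, uv2)):
        rows.append(u * M[2] - M[0])
        rows.append(v * M[2] - M[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                       # homogeneous 3D point, up to scale
    return X[:3] / X[3]              # Cartesian coordinates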
