
An Introduction To Error Propagation: Derivation, Meaning and Examples of Equation C_Y = F_X C_X F_X^T

Kai Oliver Arras

Technical Report Nº EPFL-ASL-TR-98-01 R3
of the Autonomous Systems Lab, Institute of Robotic Systems,
Swiss Federal Institute of Technology Lausanne (EPFL)

September 1998

Autonomous Systems Lab
Prof. Dr. Roland Siegwart

SWISS FEDERAL INSTITUTE OF TECHNOLOGY LAUSANNE
EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE LAUSANNE
POLITECNICO FEDERALE DI LOSANNA

DÉPARTEMENT DE MICROTECHNIQUE
INSTITUT DE SYSTÈMES ROBOTIQUE


Contents

1. Introduction
2. Error Propagation: From the Beginning
   2.1 A First Expectation
   2.2 When is the Approximation a Good One?
   2.3 The Almost General Case: Approximating the Distribution of Y = f(X_1, X_2, …, X_n)
       2.3.1 Addendum to Chapter 2.2
   2.4 Getting Really General: Adding Z = g(X_1, X_2, …, X_n)
3. Deriving the Final Matrix Form
4. Examples
   4.1 Probabilistic Line Extraction From Noisy 1D Range Data
   4.2 Kalman Filter Based Mobile Robot Localization: The Measurement Model
5. Exercises
Literature
Appendix A: Finding the Line Parameters α and r in the Weighted Least Square Sense
Appendix B: Deriving the Covariance Matrix for α and r
   B.1 Practical Considerations


1. Introduction

This report attempts to shed light onto the equation

C_Y = F_X C_X F_X^T ,    (1)

where C_X is an n×n covariance matrix, C_Y a p×p covariance matrix and F_X some matrix of dimension p×n. In estimation applications like Kalman filtering or probabilistic feature extraction we frequently encounter the pattern F_X C_X F_X^T. Many texts in the literature introduce this equation without further explanation. But relationship (1), called the error propagation law, can be explicitly derived and understood, which is important for a comprehension of its underlying approximative character. Here, we want to bridge the gap between these texts and the novice to the world of uncertainty modeling and propagation.

Applications of (1) are, e.g., error propagation in model-based vision, Kalman filtering, and reliability or probabilistic systems analysis in general.

2. Error Propagation: From the Beginning

We will first forget the matrix form of (1) and change to a different perspective. Error propagation is the problem of finding the distribution of a function of random variables. Often we have mathematical models of the system of interest (the output as a function of the input and the system components) and we know something about the distribution of the input and the components.

Then we desire to know the distribution of the output, that is, the distribution function of Y when Y = f(X), where f(.) is some known function and the distribution function of the random variable X is known*.

If f(.) is a nonlinear function, the probability distribution function of Y, p_Y(y), quickly becomes very complex, particularly when there is more than one input variable. Although a general method of solution exists (see [BREI70], Chapter 6-6), its complexity can be demonstrated already for simple problems. An approximation of p_Y(y) is therefore desirable. The approximation consists in the propagation of only the first two statistical moments, that is, the mean µ_Y and the second (central) moment σ_Y², the variance. These moments do not in general describe the distribution of Y. However, if Y is assumed to be normally distributed, they do**.

* We do not consider pathological functions where Y is not a random variable – however, they exist.
** That is simply one of the favorable properties of the normal distribution.

Figure 1: The simplest case: one input random variable (N = 1) and one output random variable (P = 1); the 'system' maps X to Y.


2.1 A First Expectation

Look at figure 2 where the simple case with one input and one output is illustrated. Suppose that X is normally distributed with mean µ_X and standard deviation σ_X*. Now we would like to know how the 68% probability interval [µ_X − σ_X, µ_X + σ_X] is propagated through the 'system' f(.).

First of all, from figure 2 it can be seen that if the shaded interval were mapped onto the y-axis by the original function, its shape would be somewhat distorted and the resulting distribution would be asymmetric, certainly not Gaussian anymore. When approximating f(X) by a first-order Taylor series expansion about the point X = µ_X,

Y ≈ f(µ_X) + ∂f/∂X |_{X=µ_X} (X − µ_X) ,    (2)

we obtain the linear relationship shown in figure 2 and with that a normal distribution for p_Y(y)**. Now we can determine its parameters µ_Y and σ_Y:

µ_Y = f(µ_X) ,    (3)

σ_Y = ∂f/∂X |_{X=µ_X} σ_X .    (4)

Finally, the expectation is raised that in the remainder of this text we will again bump into some generalized form of equations (3) and (4).

At this point, we should not forget that the output distribution, represented by µ_Y and σ_Y, is an approximation of some unknown truth. This truth is impertinently nonlinear, non-normal and asymmetric, thus inhibiting any exact closed-form analysis in most cases. We are then supposed to ask the question:

* Remember that the standard deviation is by definition the distance between the most probable value, µ, and the curve's turning points.
** Another useful property of the normal distribution, worth remembering: Gaussian stays Gaussian under linear transformations.

Figure 2: One-dimensional case of a nonlinear error propagation problem.
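To see equations (2)–(4) at work, here is a minimal numerical sketch (an illustration with an arbitrary function f and made-up numbers, not taken from the report); the first-order moments are compared against a Monte Carlo estimate of the true output distribution:

```python
import numpy as np

# Nonlinear 'system' f(.) -- an arbitrary illustration choice.
f = lambda x: np.sin(x) + 0.1 * x**2

mu_x, sigma_x = 0.8, 0.05            # input mean and standard deviation

# First-order propagation, equations (3) and (4).
h = 1e-6
dfdx = (f(mu_x + h) - f(mu_x - h)) / (2 * h)   # numerical derivative at mu_X
mu_y = f(mu_x)
sigma_y = abs(dfdx) * sigma_x

# Monte Carlo reference: sample X, map through f, estimate the moments.
rng = np.random.default_rng(0)
samples = f(rng.normal(mu_x, sigma_x, 200_000))
print(f"Taylor:      mu_Y = {mu_y:.4f}, sigma_Y = {sigma_y:.4f}")
print(f"Monte Carlo: mu_Y = {samples.mean():.4f}, sigma_Y = {samples.std():.4f}")
```

With such a small σ_X the two estimates agree to several digits, which is exactly the regime discussed next.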


2.2 When is the Approximation a Good One?

Some textbooks write equations (3) and (4) as inequalities. But if the left-hand sides denote the parameters of the output distribution which is, by assumption, normal, we can write them as equalities. The first two moments of the true (but unknown) output distribution – let's call them µ_0 and σ_0² – are definitely different from these values†. Hence

µ_0 ≈ µ_Y    (5)

σ_0² ≈ σ_Y²    (6)

It is evident that if f(.) is linear we can write (5) and (6) as equalities. However, the following factors affect the approximation quality of µ_Y and σ_Y by the actual values y* and s_y*:

µ_Y ≈ y*    (7)

σ_Y ≈ s_y*    (8)

Thus they apply for both cases: when f(.) is linear and when it is nonlinear‡.

• The guess x*: In general we do not know the expected value µ_X. In this case, its best (and only available) guess is the actual value of X, for example the current measurement x*. We then differentiate f(X) at the point X = x*, hoping that x* is close to E[X] such that y* is not too far from E[Y].

• Extent of nonlinearity of f(.): Equations (3) and (4) are good approximations as long as the first-order Taylor series is a good approximation, which is the case if f(.) is not too far from linear within the region that is within one standard deviation of the mean [BREI70]. Some people even ask f(.) to be close to linear within a ±2σ interval.

The nonlinearity of f(.) and the deviation of x* from E[X] have a combined effect on σ_Y. If both factors are of high magnitude, the slope at X = x* can differ strongly and the resulting approximation of σ_Y can be poor. On the other hand, if already one of them is small, or even both, we can expect s_y* to be close to σ_Y.

• The guess s_x*: There are several possibilities to model the input uncertainty. The model could incorporate some ad hoc assumptions on σ_X or it might rely on an empirically gained relationship to µ_X. Sometimes it is possible to have a physically based model providing 'true' uncertainty information for each realization of X. By systematically following all sources of perturbation during the emergence of x*, such a model has the desirable property that it accounts for all important factors which influence the outcome x* and s_x*. In any case, the actual value s_x* remains a guess of σ_X which is hopefully close to σ_X.

But there is also reason for optimism. The conditions suggested by Figure 2 are exaggerated. Mostly, σ_X is very small with respect to the range of X. This makes (2) an approximation of sufficient quality in many practical problems.

† Remember that the expected value and the variance (and all other moments) have a general definition, i.e. they do not depend on the distribution function exhibiting some nice properties like symmetry. They are also valid for arbitrarily shaped distributions and always quantify the most probable value and its 'spread'.

‡ It might be possible that in some nonlinear but nicely conditioned cases the approximation effects of non-normality and the other factors compensate each other, yielding a very good approximation. Nevertheless, the opposite might also be true.
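The combined effect of nonlinearity and input spread can be made tangible with a small experiment of the same kind (again an arbitrary f and made-up numbers, not from the report): as σ_X grows, the first-order guess s_y* drifts away from the sampled standard deviation, because f(.) is no longer close to linear within one standard deviation of the mean.

```python
import numpy as np

f = lambda x: np.exp(x)              # a noticeably nonlinear 'system'
mu_x = 0.0
rng = np.random.default_rng(1)

for sigma_x in (0.01, 0.1, 0.5, 1.0):
    # First-order guess of the output spread, equation (4); df/dx = exp(x).
    s_y = abs(np.exp(mu_x)) * sigma_x
    # Reference spread from sampling the true output distribution.
    sigma_mc = f(rng.normal(mu_x, sigma_x, 200_000)).std()
    print(f"sigma_X = {sigma_x:4.2f}:  s_y* = {s_y:.3f}   Monte Carlo = {sigma_mc:.3f}")
```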


2.3 The Almost General Case: Approximating the Distribution of Y = f(X_1, X_2, …, X_n)

The next step towards equation (1) is again a practical approximation based on a first-order Taylor series expansion, this time for a multiple-input system. Consider Y = f(X_1, X_2, …, X_n), where the X_i's are the n input random variables and Y is represented by its first-order Taylor series expansion about the point µ_1, µ_2, …, µ_n:

Y ≈ f(µ_1, µ_2, …, µ_n) + Σ_{i=1}^{n} ∂f/∂X_i |_{(µ_1, µ_2, …, µ_n)} [X_i − µ_i] .    (9)

Figure 3: Error propagation in a multi-input single-output system: N = n, P = 1.

Equation (9) is of the form Y ≈ a_0 + Σ_i a_i (X_i − µ_i) with

a_0 = f(µ_1, µ_2, …, µ_n) ,    (10)

a_i = ∂f/∂X_i |_{(µ_1, µ_2, …, µ_n)} .    (11)

As in chapter 2.1, the approximation is linear. The distribution of Y is therefore Gaussian and we have to determine µ_Y and σ_Y².

µ_Y = E[Y] = E[a_0 + Σ_i a_i (X_i − µ_i)]    (12)
    = E[a_0] + Σ_i (E[a_i X_i] − E[a_i µ_i])    (13)
    = a_0 + Σ_i (a_i E[X_i] − a_i E[µ_i])    (14)
    = a_0 + Σ_i (a_i µ_i − a_i µ_i)    (15)
    = a_0    (16)
µ_Y = f(µ_1, µ_2, …, µ_n)    (17)

σ_Y² = E[(Y − µ_Y)²] = E[(Σ_i a_i (X_i − µ_i))²]    (18)
     = E[Σ_i a_i (X_i − µ_i) · Σ_j a_j (X_j − µ_j)]    (19)
     = E[Σ_i a_i² (X_i − µ_i)² + Σ_{i≠j} a_i a_j (X_i − µ_i)(X_j − µ_j)]    (20)
     = Σ_i a_i² E[(X_i − µ_i)²] + Σ_{i≠j} a_i a_j E[(X_i − µ_i)(X_j − µ_j)]    (21)
     = Σ_i a_i² σ_i² + Σ_{i≠j} a_i a_j σ_ij    (22)
σ_Y² = Σ_i (∂f/∂X_i)² σ_i² + Σ_{i≠j} (∂f/∂X_i)(∂f/∂X_j) σ_ij    (23)

The evaluation point µ_1, µ_2, …, µ_n of the partial derivatives has been omitted. If the X_i's are independent the covariance σ_ij disappears, and the resulting approximated variance is

σ_Y² ≈ Σ_i (∂f/∂X_i)² σ_i² .    (24)

This is the moment to validate our expectation from chapter 2.1 for the one-dimensional case. Equation (17) corresponds directly with (3), whereas (23) somewhat contains equation (4).

2.3.1 Addendum to Chapter 2.2

In order to close the discussion on factors affecting approximation quality, we have to consider briefly two aspects which play a role if there is more than one input.

• Independence of the inputs: Equation (24) is a good approximation if the stated assumption of independence of all X_i's is valid.

It is finally to be mentioned that, even if the input distributions are not strictly Gaussian, the assumption of the output being normal is often reasonable. This follows from the central limit theorem when the X_i's somehow additively constitute the output Y.

2.4 Getting Really General: Adding Z = g(X_1, X_2, …, X_n)

Often there is not just a single Y which depends on the X_i's but there are more system outputs, that is, more random variables like Y. Suppose our system has an additional output Z with Z = g(X_1, X_2, …, X_n).

Figure 4: Error propagation in a multi-input multi-output system: N = n, P = 2.

Obviously µ_Z and σ_Z² can exactly be derived as shown before. The additional aspect which is introduced by Z is the question of the statistical dependence of Y and Z, which is expressed by their covariance σ_YZ = E[(Y − µ_Y)(Z − µ_Z)]. Let's see where we arrive when substituting Y and Z by their first-order Taylor series expansion (9).

σ_YZ = E[(Y − µ_Y)(Z − µ_Z)]    (25)
     = E[Y·Z] − E[Y]E[Z]    (26)
     = E[(µ_Y + Σ_i ∂f/∂X_i [X_i − µ_i]) · (µ_Z + Σ_i ∂g/∂X_i [X_i − µ_i])] − µ_Y µ_Z    (27)
     = E[µ_Y µ_Z + µ_Z Σ_i ∂f/∂X_i [X_i − µ_i] + µ_Y Σ_i ∂g/∂X_i [X_i − µ_i] + Σ_i ∂f/∂X_i [X_i − µ_i] · Σ_j ∂g/∂X_j [X_j − µ_j]] − µ_Y µ_Z    (28)
     = E[µ_Y µ_Z] + µ_Z (E[Σ_i ∂f/∂X_i X_i] − Σ_i ∂f/∂X_i µ_i) + µ_Y (E[Σ_i ∂g/∂X_i X_i] − Σ_i ∂g/∂X_i µ_i) + E[Σ_i Σ_j ∂f/∂X_i ∂g/∂X_j [X_i − µ_i][X_j − µ_j]] − µ_Y µ_Z    (29)
     = µ_Y µ_Z + µ_Z Σ_i ∂f/∂X_i E[X_i] − µ_Z Σ_i ∂f/∂X_i E[µ_i] + µ_Y Σ_i ∂g/∂X_i E[X_i] − µ_Y Σ_i ∂g/∂X_i E[µ_i] + E[Σ_i ∂f/∂X_i ∂g/∂X_i [X_i − µ_i]² + Σ_{i≠j} ∂f/∂X_i ∂g/∂X_j [X_i − µ_i][X_j − µ_j]] − µ_Y µ_Z    (30)
     = Σ_i ∂f/∂X_i ∂g/∂X_i E[(X_i − µ_i)²] + Σ_{i≠j} ∂f/∂X_i ∂g/∂X_j E[(X_i − µ_i)(X_j − µ_j)]    (31)
σ_YZ = Σ_i ∂f/∂X_i ∂g/∂X_i σ_i² + Σ_{i≠j} ∂f/∂X_i ∂g/∂X_j σ_ij    (32)

If X_i and X_j are independent, the second term, holding their covariance, disappears. Adding more output random variables brings in no new aspects. In the remainder of this text we shall consider P = 2 without loss of generality.

3. Deriving the Final Matrix Form

Now we are ready to return to equation (1). We will now see that we only have to reformulate equations (23) and (32) in order to obtain the initial matrix form. We recall the gradient operator with respect to the n-dimensional vector X,

∇_X = [∂/∂X_1  ∂/∂X_2  …  ∂/∂X_n]^T .    (33)

f(X) is a p-dimensional vector-valued function f(X) = [f_1(X)  f_2(X)  …  f_p(X)]^T. The Jacobian F_X is defined as the transpose of the gradient of f(X), whereas the gradient is the outer product of ∇_X and f(X):

F_X = (∇_X · f(X)^T)^T = [f_1(X) ; f_2(X)] · [∂/∂X_1  …  ∂/∂X_n] = [∂f_1/∂X_1  …  ∂f_1/∂X_n ; ∂f_2/∂X_1  …  ∂f_2/∂X_n]    (34)

F_X has dimension 2×n in this case, p×n in general. We introduce the symmetric n×n input covariance matrix C_X which contains all variances and covariances of the input random variables X_1, X_2, …, X_n. If the X_i's are independent, all σ_ij with i ≠ j disappear and C_X is diagonal.


C_X = [σ²_X1  σ_X1X2  …  σ_X1Xn ; σ_X2X1  σ²_X2  …  σ_X2Xn ; ⋮ ; σ_XnX1  σ_XnX2  …  σ²_Xn]    (35)

We further introduce the symmetric 2×2 output covariance matrix (p×p in general with outputs Y_1, Y_2, …, Y_p)

C_Y = [σ²_Y1  σ_Y1Y2 ; σ_Y2Y1  σ²_Y2] .    (36)

Now we can instantly form equation (1):

C_Y = F_X C_X F_X^T    (37)

[σ²_Y1  σ_Y1Y2 ; σ_Y2Y1  σ²_Y2] = [∂f_1/∂X_1  …  ∂f_1/∂X_n ; ∂f_2/∂X_1  …  ∂f_2/∂X_n] · [σ²_X1  σ_X1X2  …  σ_X1Xn ; σ_X2X1  σ²_X2  …  σ_X2Xn ; ⋮ ; σ_XnX1  σ_XnX2  …  σ²_Xn] · [∂f_1/∂X_1  ∂f_2/∂X_1 ; ⋮ ; ∂f_1/∂X_n  ∂f_2/∂X_n]    (38)

= [∂f_1/∂X_1 σ²_X1 + … + ∂f_1/∂X_n σ_XnX1   …   ∂f_1/∂X_1 σ_X1Xn + … + ∂f_1/∂X_n σ²_Xn ; ∂f_2/∂X_1 σ²_X1 + … + ∂f_2/∂X_n σ_XnX1   …   ∂f_2/∂X_1 σ_X1Xn + … + ∂f_2/∂X_n σ²_Xn] · [∂f_1/∂X_1  ∂f_2/∂X_1 ; ⋮ ; ∂f_1/∂X_n  ∂f_2/∂X_n]    (39)

Looks nice. But what has it to do with chapter 2? To answer this question we evaluate the first element σ²_Y1, the variance of Y_1:

σ²_Y1 = (∂f_1/∂X_1)² σ²_X1 + ∂f_1/∂X_2 ∂f_1/∂X_1 σ_X2X1 + … + ∂f_1/∂X_n ∂f_1/∂X_1 σ_XnX1
      + ∂f_1/∂X_1 ∂f_1/∂X_2 σ_X1X2 + (∂f_1/∂X_2)² σ²_X2 + … + ∂f_1/∂X_n ∂f_1/∂X_2 σ_XnX2 + …
      + ∂f_1/∂X_1 ∂f_1/∂X_n σ_X1Xn + ∂f_1/∂X_2 ∂f_1/∂X_n σ_X2Xn + … + (∂f_1/∂X_n)² σ²_Xn    (40)

= Σ_i (∂f_1/∂X_i)² σ²_Xi + Σ_{i≠j} ∂f_1/∂X_i ∂f_1/∂X_j σ_XiXj    (41)

If we now reintroduce the notation of chapter 2.3, that is, f_1(X) = f(X), Y_1 = Y and σ_Xi = σ_i, we see that (41) equals exactly equation (23). Assuming the reader to be a notorious skeptic, we will also look at the off-diagonal element σ_Y1Y2, the covariance of Y and Z:

σ_Y1Y2 = ∂f_1/∂X_1 ∂f_2/∂X_1 σ²_X1 + ∂f_1/∂X_2 ∂f_2/∂X_1 σ_X2X1 + … + ∂f_1/∂X_n ∂f_2/∂X_1 σ_XnX1
       + ∂f_1/∂X_1 ∂f_2/∂X_2 σ_X1X2 + ∂f_1/∂X_2 ∂f_2/∂X_2 σ²_X2 + … + ∂f_1/∂X_n ∂f_2/∂X_2 σ_XnX2 + …
       + ∂f_1/∂X_1 ∂f_2/∂X_n σ_X1Xn + ∂f_1/∂X_2 ∂f_2/∂X_n σ_X2Xn + … + ∂f_1/∂X_n ∂f_2/∂X_n σ²_Xn    (42)


= Σ_i ∂f_1/∂X_i ∂f_2/∂X_i σ²_Xi + Σ_{i≠j} ∂f_1/∂X_i ∂f_2/∂X_j σ_XiXj    (43)

Again, by substituting f_1(X) by f(X), f_2(X) by g(X) and σ_Xi by σ_i, equation (43) corresponds exactly to the previously derived equation (32) for σ_YZ.

We were obviously able, having started from a simple one-dimensional error propagation problem, to derive the error propagation law C_Y = F_X C_X F_X^T. Putting the results together yielded its widely used matrix form (1).

Now we can also understand the informal interpretation of figure 5.

Figure 5: Interpretation of the error propagation law in its matrix form C_Y = F_X C_X F_X^T: "The input uncertainty C_X … is propagated through the system f(.) … and approximatively mapped to the output C_Y."
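To make the matrix form concrete, the following sketch builds a small two-output example of my own (arbitrary f_1, f_2 and numbers, not from the report): the Jacobian F_X is filled with the partial derivatives of both outputs, and C_Y = F_X C_X F_X^T delivers the scalar results (23)/(24) on the diagonal and (32) off the diagonal in a single product.

```python
import numpy as np

def f(x):
    # Two outputs of the same three inputs: an arbitrary example system.
    return np.array([x[0] * np.cos(x[1]),           # f1(X)
                     x[0] * np.sin(x[1]) + x[2]])   # f2(X)

mu  = np.array([1.5, 0.4, 0.2])                     # input means
C_X = np.diag([0.03, 0.01, 0.02]) ** 2              # independent inputs here

# Jacobian F_X (2 x n): rows are the gradients of f1 and f2, equation (34).
h = 1e-6
F_X = np.column_stack([(f(mu + h * e) - f(mu - h * e)) / (2 * h) for e in np.eye(3)])

C_Y = F_X @ C_X @ F_X.T                             # equation (1) / (37)
print("C_Y =\n", C_Y)   # diagonal: variances as in (23)/(24); off-diagonal: (32)
```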

4. Examples

4.1 Probabilistic Line Extraction From Noisy 1D Range Data

Model-based vision where geometric primitives are the sought image features is a good example for uncertainty propagation. Suppose the segmentation problem has already been solved, that is, the set of inlier points with respect to the model is known. Suppose further that the regression equations for the model fit to the points have a closed-form solution – which is the case when fitting straight lines. And suppose finally that the measurement uncertainties of the data points are known as well. Then we can directly apply the error propagation law in order to get the output uncertainties of the line parameters.

Figure 6: Estimating a line in the least squares sense. The model parameters r (length of the perpendicular) and α (its angle to the abscissa) uniquely describe a line; d_i denotes the orthogonal distance of a measurement point x_i = (ρ_i, θ_i) to the line.


Suppose n measurement points in polar coordinates x_i = (ρ_i, θ_i) are given and modeled as random variables X_i = (P_i, Q_i), i = 1, …, n. Each point is independently affected by Gaussian noise in both coordinates:

P_i ~ N(ρ_i, σ²_ρi)    (44)

Q_i ~ N(θ_i, σ²_θi)    (45)

E[P_i · P_j] = E[P_i] E[P_j]    ∀ i, j = 1, …, n    (46)

E[Q_i · Q_j] = E[Q_i] E[Q_j]    ∀ i, j = 1, …, n    (47)

E[P_i · Q_j] = E[P_i] E[Q_j]    ∀ i, j = 1, …, n    (48)

Now we want to find the line x cos α + y sin α − r = 0 where x = ρ cos θ and y = ρ sin θ, yielding ρ cos θ cos α + ρ sin θ sin α − r = 0 and, with that, the line model

ρ cos(θ − α) − r = 0 .    (49)

This model minimizes the orthogonal distances from the points to the line. It is important to note that fitting models to data in some least squares sense does not, in general, yield a satisfying geometric solution. It is crucial to know which error is minimized by the fit equations. A good illustration is the paper of [GAND94] where several algorithms for fitting circles and ellipses are presented which minimize algebraic and geometric distances. The geometric variety of solutions for the same set of points demonstrates the importance of this knowledge if geometrically meaningful results are required.

The orthogonal distance d_i of a point (ρ_i, θ_i) to the line is just

ρ_i cos(θ_i − α) − r = d_i .    (50)

Let S be the (unweighted) sum of squared errors,

S = Σ_i d_i² = Σ_i (ρ_i cos(θ_i − α) − r)² .    (51)

The model parameters (α, r) are now found by solving the nonlinear equation system

∂S/∂α = 0 ,  ∂S/∂r = 0 .    (52)

Suppose further that for each point a variance σ_i² modelling the uncertainty in radial and angular direction is given a priori or can be measured. This variance will be used to determine a weight w_i for each single point, e.g.

w_i = 1 / σ_i² .    (53)

Then, equation (51) becomes

S = Σ w_i d_i² = Σ w_i (ρ_i cos(θ_i − α) − r)² .    (54)

The issue of determining an adequate weight w_i when σ_i (and perhaps some additional information) is given is complex in general and beyond the scope of this text. See [CARR88] for a careful treatment.
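Equations (49)–(54) translate directly into code; the sketch below (made-up points and weights, not from the report) merely evaluates the weighted cost S(α, r) of equation (54) for a candidate line:

```python
import numpy as np

def line_cost(alpha, r, rho, theta, w):
    """Weighted sum of squared orthogonal distances, equation (54)."""
    d = rho * np.cos(theta - alpha) - r          # residuals d_i, equation (50)
    return np.sum(w * d ** 2)

# A few made-up polar measurements roughly on the line y = 1, plus radial variances.
rho   = np.array([1.016, 1.003, 1.001, 1.005, 1.014])
theta = np.deg2rad([80.0, 85.0, 90.0, 95.0, 100.0])
sigma_rho = np.full_like(rho, 0.01)
w = 1.0 / sigma_rho ** 2                          # weights, equation (53)

print(line_cost(alpha=np.deg2rad(90.0), r=1.0, rho=rho, theta=theta, w=w))
```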


It can be shown (see Appendix A) that the solution of (52) in the weighted least squares sense† is

α = ½ atan2( (2/Σw_i) ΣΣ w_i w_j ρ_i ρ_j cos θ_i sin θ_j − Σ w_i ρ_i² sin 2θ_i ,  (1/Σw_i) ΣΣ w_i w_j ρ_i ρ_j cos(θ_i + θ_j) − Σ w_i ρ_i² cos 2θ_i )    (55)

r = Σ w_i ρ_i cos(θ_i − α) / Σ w_i .    (56)

† We follow here the notation of [DRAP88] and distinguish a weighted least squares problem if C_X is diagonal (input errors are mutually independent) and a generalized least squares problem if C_X is non-diagonal.

Now we would like to know how the uncertainties of the measurements propagate through 'the system' (55), (56). See figure 7.

Figure 7: When extracting lines from noisy measurement points X_i, the model fit module produces line parameter estimates, modeled as random variables A, R. It is then interesting to know the variability of these parameters as a function of the noise at the input side.

This is where we simply apply equation (1). We are looking for the 2×2 output covariance matrix

C_AR = [σ_A²  σ_AR ; σ_AR  σ_R²] ,    (57)

given the 2n×2n input covariance matrix

C_X = [C_P  0 ; 0  C_Q] = [diag(σ²_ρi)  0 ; 0  diag(σ²_θi)]    (58)

and the system relationships (55) and (56). Then by calculating the Jacobian

F_PQ = [∂α/∂P_1  ∂α/∂P_2  …  ∂α/∂P_n  ∂α/∂Q_1  ∂α/∂Q_2  …  ∂α/∂Q_n ; ∂r/∂P_1  ∂r/∂P_2  …  ∂r/∂P_n  ∂r/∂Q_1  ∂r/∂Q_2  …  ∂r/∂Q_n]    (59)


we can instantly form the error propagation equation (60), yielding the sought C_AR:

C_AR = F_PQ C_X F_PQ^T    (60)

Appendix A is concerned with the step-by-step derivation of the fit equations (55) and (56), whereas in Appendix B equation (60) is derived once more. Under the assumption of negligible angular uncertainties, implementable expressions for the elements of C_AR are determined, also in a step-by-step manner.

4.2 Kalman Filter Based Mobile Robot Localization: The Measurement Model

The measurement model in a Kalman filter estimation problem is another place where we encounter the error propagation law. The reader is assumed to be more or less familiar with the context of mobile robot localization, Kalman filtering and the notation used in [BAR93] or [LEON92].

In order to reduce the unbounded growth of odometry errors the robot is supposed to update its pose by some sort of external referencing. This is achieved by matching predicted environment features with observed ones and estimating the vehicle pose in some sense with the set of matched features. The prediction is provided by an a priori map which contains the position of all environment features in global map coordinates. In order to establish correct correspondence of observed and stored features, the robot predicts the position of all currently visible features in the sensor frame. This is done by the measurement model

z(k+1|k) = h(x(k+1|k)) + w(k+1) .    (61)

The measurement model gets the predicted robot pose x(k+1|k) as input and 'asks' the map which features are visible at the current location and where they are supposed to appear when starting the observation. The map evaluates all visible features and returns their transformed positions in the vector z(k+1|k). The measurement model h(.) is therefore a world-to-robot-to-sensor frame transformation†.

However, due to nonsystematic odometry errors, the robot position is uncertain, and due to imperfect sensors, the observation is uncertain. The former is represented by the (predicted) state covariance matrix P(k+1|k) and the latter by the sensor noise model w(k+1) ~ N(0, R(k+1)). They are assumed to be independent. Sensing uncertainty R(k+1) affects the observation directly, whereas vehicle position uncertainty P(k+1|k) – which is given in world map coordinates – will propagate through the frame transformations world-to-robot-to-sensor h(.), linearized about the prediction x(k+1|k). Then the observation is made and the matching of predicted and observed features can be performed in the sensor frame, yielding the set of matched features. The remaining position uncertainty of the matched features given all observations up to and including time k,

S(k+1) = cov[z(k+1) | Z^k] ,    (62)


is the superimposed uncertainty of the observation and the propagated one from the robot pose,

S(k+1) = ∇h P(k+1|k) ∇h^T + R(k+1) .    (63)

S(k+1) is also called measurement prediction covariance or innovation covariance.

† Note that h(.) is assumed to be 'intelligent', that is, it contains both the mapping x(k+1|k) → m_j (with m_j as the position vector of feature number j) and the world-to-sensor frame transformation of all visible m_j for the current prediction x(k+1|k). This is in contrast to e.g. [LEON92], where solely the frame transformation h(x(k+1|k), m_j) is done by h(.) and the mapping x(k+1|k) → m_j is somewhere else.
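A small sketch of equation (63) under simplifying assumptions of my own: h(.) is reduced to a plain world-to-robot transformation of a single point landmark, ∇h is its Jacobian with respect to the robot pose, and all numbers are made up.

```python
import numpy as np

def h(pose, landmark):
    """Predicted landmark position in the robot frame (simplified measurement model)."""
    x, y, phi = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    c, s = np.cos(phi), np.sin(phi)
    return np.array([ c * dx + s * dy,
                     -s * dx + c * dy])

pose_pred = np.array([2.0, 1.0, np.deg2rad(30.0)])   # predicted robot pose x(k+1|k)
landmark  = np.array([5.0, 4.0])                     # feature position from the map
P = np.diag([0.02, 0.02, np.deg2rad(2.0)**2])        # pose covariance P(k+1|k)
R = np.diag([0.01, 0.01])                            # sensor noise covariance R(k+1)

# Jacobian of h with respect to the robot pose (evaluated numerically for brevity).
eps = 1e-6
H = np.column_stack([(h(pose_pred + eps * e, landmark) -
                      h(pose_pred - eps * e, landmark)) / (2 * eps) for e in np.eye(3)])

S = H @ P @ H.T + R                                  # innovation covariance, eq. (63)
print("S(k+1) =\n", S)
```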

5. Exercises

As stated in the introduction, the report has educational purposes and accompanies a lecture on autonomous systems in the microengineering department at EPFL. Thus, the audience of this report are people not yet too familiar with the field of uncertainty treatment. Some propositions for exercises are given:

• Let them do the derivation of µ_Y (equations (12) to (17)) and σ_Y² (equations (18) to (23)) given a few rules for the expected value.

• Let them do the derivation of σ_YZ (equations (25) to (32)) or, in the context of example 1, equations (93) to (101) of Appendix B for σ_AR. Some rules for the expected value and double sums might be helpful.

• Let them make an illustration of each factor affecting approximation quality discussed in chapter 2.2 with drawings like figure 2.

• If least squares estimation in a more general sense is the issue, deriving the fit equations for a regression problem is quite instructive. The standard case, linear in the model parameters and with uncertainties in only one variable, is much simpler than the derivation of example 1 in Appendix A. Additionally, the output covariance matrix can be determined with (1).


Literature

[BREI70] A.M. Breipohl, Probabilistic Systems Analysis: An Introduction to Probabilistic Models, Decisions, and Applications of Random Processes, John Wiley & Sons, 1970.

[GAND94] W. Gander, G. H. Golub, R. Strebel, “Least-Squares Fitting of Circles and Ellipses”, BIT, vol. 34, no. 4, p. 558-78, Dec. 1994.

[DRAP88] N.R. Draper, H. Smith, Applied Regression Analysis, 3rd edition, John Wiley & Sons, 1988.

[CARR88] R. J. Carroll, D. Ruppert, Transformation and Weighting in Regression, Chapman and Hall, 1988.

[BAR93] Y. Bar-Shalom, X.-R. Li, Estimation and Tracking: Principles, Techniques, and Software, Artech House, 1993.

[LEON92] J.J. Leonard, H.F. Durrant-Whyte, Directed Sonar Sensing for Mobile Robot Navigation, Kluwer Academic Publishers, 1992.

[ARRAS97] K.O. Arras, R.Y. Siegwart, “Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building”, Proceedings of SPIE, Mobile Robotics XII, Vol. 3210, p. 42-53, 1997.


Appendix A: Finding the Line Parameters α and r in the Weighted Least Square Sense

Consider the nonlinear equation system

∂S/∂r = 0    (64)

∂S/∂α = 0    (65)

where S is the weighted sum of squared errors

S = Σ_{i=1}^{n} w_i (ρ_i cos θ_i cos α + ρ_i sin θ_i sin α − r)² .    (66)

We start solving the system (64), (65) by working out parameter r:

∂S/∂r = 0 = Σ 2 w_i (ρ_i cos θ_i cos α + ρ_i sin θ_i sin α − r)(−1)    (67)
= −2 Σ w_i ρ_i (cos θ_i cos α + sin θ_i sin α) + 2 Σ w_i r    (68)
= −2 Σ w_i ρ_i cos(θ_i − α) + 2r Σ w_i    (69)
2r Σ w_i = 2 Σ w_i ρ_i cos(θ_i − α)    (70)
r = Σ w_i ρ_i cos(θ_i − α) / Σ w_i    (71)

Parameter α is slightly more complicated. We introduce the following notation:

cos θ_i = c_i ,  sin θ_i = s_i .    (72)

∂S/∂α = Σ 2 w_i (ρ_i c_i cos α + ρ_i s_i sin α − r) · ∂/∂α (ρ_i c_i cos α + ρ_i s_i sin α − r)    (73)
= 2 Σ w_i (ρ_i c_i cos α + ρ_i s_i sin α − (1/Σw_j) Σ w_j ρ_j cos(θ_j − α)) (−ρ_i c_i sin α + ρ_i s_i cos α − ∂r/∂α)    (74)
= 2 Σ w_i (ρ_i c_i cos α + ρ_i s_i sin α − (1/Σw_j) Σ w_j ρ_j cos(θ_j − α)) (ρ_i s_i cos α − ρ_i c_i sin α − (1/Σw_j) Σ w_j ρ_j sin(θ_j − α))    (75)

The two brackets are now multiplied out and the terms collected (equations (76) to (83)); using the identities 2 sin α cos α = sin 2α and cos²α − sin²α = cos 2α, one arrives at


∂S/∂α = cos 2α ( Σ w_i ρ_i² sin 2θ_i − (2/Σw_i) ΣΣ w_i w_j ρ_i ρ_j cos θ_i sin θ_j ) − sin 2α ( Σ w_i ρ_i² cos 2θ_i − (1/Σw_i) ΣΣ w_i w_j ρ_i ρ_j cos(θ_i + θ_j) ) .    (84)

From (84) we can obtain the result for 2α or α respectively:

tan 2α = sin 2α / cos 2α = ( (2/Σw_i) ΣΣ w_i w_j ρ_i ρ_j c_i s_j − Σ w_i ρ_i² s_{2i} ) / ( (1/Σw_i) ΣΣ w_i w_j ρ_i ρ_j c_{i+j} − Σ w_i ρ_i² c_{2i} )    (85)

tan 2α = ( (2/Σw_i) ΣΣ w_i w_j ρ_i ρ_j cos θ_i sin θ_j − Σ w_i ρ_i² sin 2θ_i ) / ( (1/Σw_i) ΣΣ w_i w_j ρ_i ρ_j cos(θ_i + θ_j) − Σ w_i ρ_i² cos 2θ_i )    (86)

Equation (86) contains double sums which need not be fully evaluated. Due to the symmetry of the trigonometric functions the corresponding off-diagonal elements can be added, which simplifies the calculation. For the final result (87), the four-quadrant arc tangent has been taken. This solution, (71) and (87), sometimes generates (α, r)-pairs with negative r values. They must be detected in order to change the sign of r and to add π to the corresponding α. All α-values then lie in the interval −π/2 < α ≤ 3π/2.

α = ½ atan2( (2/Σw_i) ΣΣ_{i<j} w_i w_j ρ_i ρ_j sin(θ_i + θ_j) + (1/Σw_i) Σ_i (w_i − Σ_j w_j) w_i ρ_i² sin 2θ_i ,  (2/Σw_i) ΣΣ_{i<j} w_i w_j ρ_i ρ_j cos(θ_i + θ_j) + (1/Σw_i) Σ_i (w_i − Σ_j w_j) w_i ρ_i² cos 2θ_i )    (87)
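The closed form can be sanity-checked numerically; the sketch below (my own, with made-up points and unit weights) compares α from (87) with a brute-force scan of the cost (54), eliminating r at each candidate angle via (71):

```python
import numpy as np

rho   = np.array([1.73, 1.55, 1.50, 1.55, 1.73])     # roughly on a line, alpha ~ 30 deg
theta = np.deg2rad(np.array([0.0, 15.0, 30.0, 45.0, 60.0]))
w     = np.ones_like(rho)

sum_w = np.sum(w)
sx, sy = np.sum(w * rho * np.cos(theta)), np.sum(w * rho * np.sin(theta))
num = (2.0 / sum_w) * sx * sy - np.sum(w * rho**2 * np.sin(2 * theta))
den = (1.0 / sum_w) * (sx**2 - sy**2) - np.sum(w * rho**2 * np.cos(2 * theta))
alpha_closed = 0.5 * np.arctan2(num, den)            # closed form, double-sum variant

# Brute force: scan alpha, eliminate r via (71), evaluate the cost (54).
alphas = np.linspace(-np.pi / 2, np.pi / 2, 100_000)
costs = [np.sum(w * (rho * np.cos(theta - a)
                     - np.sum(w * rho * np.cos(theta - a)) / sum_w) ** 2)
         for a in alphas]
alpha_scan = alphas[int(np.argmin(costs))]
print(np.rad2deg(alpha_closed), np.rad2deg(alpha_scan))   # should agree closely
```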


Appendix B: Deriving the Covariance Matrix for α and r

The model parameters α and r are just half the battle. Besides these estimates of the mean position of the line we would also like to have a measure for its uncertainty. According to the error propagation law (1), an approximation of the output uncertainty, represented by C_AR, is subsequently determined.

At the input side, mutually independent uncertainties in radial direction only are assumed. The 2n×1 input random vector X^T = [P^T Q^T] consists of the random vector P = [P_1 P_2 … P_n]^T denoting the variables of the measured radii, and the random vector Q = [Q_1 Q_2 … Q_n]^T holding the corresponding angular variates. The 2n×2n input covariance matrix is therefore of the form

C_X = [C_P  0 ; 0  C_Q] = [diag(σ²_ρi)  0 ; 0  0] .    (88)

We represent both output random variables A, R by their first-order Taylor series expansion about the mean µ^T = [µ_ρ^T µ_θ^T]. The vector µ has dimension 2n×1 and is composed of the two n×1 mean vectors µ_ρ = [µ_ρ1 µ_ρ2 … µ_ρn]^T and µ_θ = [µ_θ1 µ_θ2 … µ_θn]^T.

A ≈ α(µ*) + Σ_i ∂α/∂P_i |_{X=µ*} [P_i − µ*_ρi] + Σ_i ∂α/∂Q_i |_{X=µ*} [Q_i − µ*_θi]    (89)

R ≈ r(µ*) + Σ_i ∂r/∂P_i |_{X=µ*} [P_i − µ*_ρi] + Σ_i ∂r/∂Q_i |_{X=µ*} [Q_i − µ*_θi]    (90)

The relationships r(.) and α(.) correspond to the results of Appendix A, equations (71) and (87). Referring to the discussion of chapter 2.2, we do not know µ in advance and its best available guess µ* is the actual value of the measurement vector.

It has been shown that under the assumption of independence of P and Q, the following holds:

σ_A² ≈ Σ_i (∂α/∂P_i)² σ²_ρi + Σ_i (∂α/∂Q_i)² σ²_θi    (91)

σ_R² ≈ Σ_i (∂r/∂P_i)² σ²_ρi + Σ_i (∂r/∂Q_i)² σ²_θi    (92)

We further want to know the covariance σ_AR under the abovementioned assumption of negligible angular uncertainties:

σ_AR = COV[A, R] = E[A·R] − E[A]E[R]    (93)
= E[(α + Σ_i ∂α/∂P_i [P_i − µ_ρi]) · (r + Σ_i ∂r/∂P_i [P_i − µ_ρi])] − αr    (94)
= E[αr + r Σ_i ∂α/∂P_i [P_i − µ_ρi] + α Σ_i ∂r/∂P_i [P_i − µ_ρi] + Σ_i ∂α/∂P_i [P_i − µ_ρi] · Σ_j ∂r/∂P_j [P_j − µ_ρj]] − αr    (95)
= E[αr] + r (E[Σ_i ∂α/∂P_i P_i] − Σ_i ∂α/∂P_i µ_ρi) + α (E[Σ_i ∂r/∂P_i P_i] − Σ_i ∂r/∂P_i µ_ρi) + E[Σ_i Σ_j ∂α/∂P_i ∂r/∂P_j [P_i − µ_ρi][P_j − µ_ρj]] − αr    (96)
= αr + r Σ_i ∂α/∂P_i E[P_i] − r Σ_i ∂α/∂P_i E[µ_ρi] + α Σ_i ∂r/∂P_i E[P_i] − α Σ_i ∂r/∂P_i E[µ_ρi] + E[Σ_{i≠j} ∂α/∂P_i ∂r/∂P_j [P_i − µ_ρi][P_j − µ_ρj] + Σ_i ∂α/∂P_i ∂r/∂P_i [P_i − µ_ρi]²] − αr    (97)
= E[Σ_{i≠j} ∂α/∂P_i ∂r/∂P_j (P_iP_j − P_iµ_ρj − P_jµ_ρi + µ_ρiµ_ρj)] + E[Σ_i ∂α/∂P_i ∂r/∂P_i (P_i − µ_ρi)²]    (98)
= Σ_{i≠j} ∂α/∂P_i ∂r/∂P_j E[P_iP_j − P_iµ_ρj − P_jµ_ρi + µ_ρiµ_ρj] + Σ_i ∂α/∂P_i ∂r/∂P_i E[(P_i − µ_ρi)²]    (99)
= Σ_{i≠j} ∂α/∂P_i ∂r/∂P_j E[P_iP_j − P_iµ_ρj − P_jµ_ρi + µ_ρiµ_ρj] + Σ_i ∂α/∂P_i ∂r/∂P_i σ²_ρi    (100)

Since P_i and P_j are independent, the expected value of the bracketed expression disappears. Hence

σ_AR = Σ_i ∂α/∂P_i ∂r/∂P_i σ²_ρi .    (101)

If, however, the input uncertainty model provides non-negligible angular variances, it is easy to show that under the independence assumption of P and Q the expression keeping track of Q can simply be added to yield

σ_AR = Σ_i ∂α/∂P_i ∂r/∂P_i σ²_ρi + Σ_i ∂α/∂Q_i ∂r/∂Q_i σ²_θi .    (102)

As demonstrated in chapter 3, the results (91), (92) and (102) can also be obtained in the more compact but less intuitive form of equation (1). Let

F_PQ = [F_P  F_Q] = [∂α/∂P  ∂α/∂Q ; ∂r/∂P  ∂r/∂Q] = [∂α/∂P_1  ∂α/∂P_2  …  ∂α/∂P_n  ∂α/∂Q_1  ∂α/∂Q_2  …  ∂α/∂Q_n ; ∂r/∂P_1  ∂r/∂P_2  …  ∂r/∂P_n  ∂r/∂Q_1  ∂r/∂Q_2  …  ∂r/∂Q_n]    (103)

be the composed p×2n Jacobian matrix containing all partial derivatives of the model parameters with respect to the input random variables about the guess µ*. Then, the sought covariance matrix C_AR can be rewritten as

C_AR = F_PQ C_X F_PQ^T .    (104)

Under the conditions which lead to equation (102), the right-hand side can be decomposed, yielding

C_AR = F_P C_P F_P^T + F_Q C_Q F_Q^T .    (105)
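A short numerical check (with random stand-in Jacobian blocks, not the analytic ones) that the block-diagonal structure of C_X indeed lets the product (104) split into the two terms of (105):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
F_P = rng.normal(size=(2, n))                 # stand-ins for the analytic Jacobian blocks
F_Q = rng.normal(size=(2, n))
C_P = np.diag(rng.uniform(0.01, 0.05, n) ** 2)
C_Q = np.diag(rng.uniform(0.001, 0.01, n) ** 2)

F_PQ = np.hstack([F_P, F_Q])                              # equation (103)
C_X = np.block([[C_P, np.zeros((n, n))],
                [np.zeros((n, n)), C_Q]])                 # block-diagonal input covariance
full  = F_PQ @ C_X @ F_PQ.T                               # equation (104)
split = F_P @ C_P @ F_P.T + F_Q @ C_Q @ F_Q.T             # equation (105)
print(np.allclose(full, split))                           # True
```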

Page 20: An Introduction To Error Propagation: Derivation, Meaning and …srl.informatik.uni-freiburg.de/papers/arrasTR98.pdf · 2 1. Introduction This report attempts to shed light onto the

19

B.1 Practical Considerations

We determine implementable expressions for the elements of the covariance matrix C_AR. Under the assumption of negligible angular uncertainties, concrete expressions for σ_A², σ_R² and σ_AR will be derived. We must furthermore keep in mind that our problem might be a real-time problem, requiring an efficient implementation. Expressions are therefore sought which minimize the number of floating point operations, e.g. by reuse of already computed subexpressions. Although calculating with weights does not add much difficulty, we will omit them in this chapter†.

For the sake of brevity we introduce the following notation:

α = ½ atan2(N, D)    (106)

with

N = (2/n) ΣΣ P_i P_j cos Q_i sin Q_j − Σ P_i² sin 2Q_i ,    (107)

D = (1/n) ΣΣ P_i P_j cos(Q_i + Q_j) − Σ P_i² cos 2Q_i .    (108)

We will use the result that the parameters α (equation (87)) and r (equation (71)) can also be written in Cartesian form, where x_i = ρ_i cos θ_i and y_i = ρ_i sin θ_i:

α = ½ atan( −2 Σ (ȳ − y_i)(x̄ − x_i) / Σ [(ȳ − y_i)² − (x̄ − x_i)²] ) ,    (109)

r = x̄ cos α + ȳ sin α .    (110)

They use the means

x̄ = (1/n) Σ x_i ,  ȳ = (1/n) Σ y_i .    (111)

From equations (91), (92) and (101) we see that the covariance matrix is defined when both partial derivatives of the parameters with respect to P_i are given. Let us start with the derivative of α:

∂α/∂P_i = ½ · 1/(1 + N²/D²) · (∂N/∂P_i D − N ∂D/∂P_i)/D² = ½ (∂N/∂P_i D − N ∂D/∂P_i)/(D² + N²) .    (112)

The partial derivatives of the numerator N and the denominator D with respect to P_i are obtained as follows:

∂N/∂P_i = ∂/∂P_i {(2/n) ΣΣ P_j P_k cos Q_j sin Q_k} − ∂/∂P_i {Σ P_j² sin 2Q_j}    (113)
= −2 P_i sin 2Q_i + (2/n) ∂/∂P_i {P_1²c_1s_1 + P_1c_1P_2s_2 + P_1c_1P_3s_3 + … + P_2c_2P_1s_1 + P_2²c_2s_2 + P_2c_2P_3s_3 + … + P_3c_3P_1s_1 + P_3c_3P_2s_2 + P_3²c_3s_3 + …}    (114)
= (2/n) (∂/∂P_i {Σ_j P_j cos Q_j · P_i sin Q_i} + ∂/∂P_i {P_i cos Q_i · Σ_j P_j sin Q_j}) − 2 P_i sin 2Q_i    (115)
= (2/n) (sin Q_i Σ_j P_j cos Q_j + cos Q_i Σ_j P_j sin Q_j) − 2 P_i sin 2Q_i    (116)
= (2/n) (sin Q_i · n x̄ + cos Q_i · n ȳ) − 2 P_i sin 2Q_i    (117)
= 2 (x̄ sin Q_i + ȳ cos Q_i − P_i sin 2Q_i)    (118)

The mean values x̄ and ȳ are those of equation (111).

∂D/∂P_i = ∂/∂P_i {(1/n) ΣΣ P_j P_k cos(Q_j + Q_k)} − ∂/∂P_i {Σ P_j² cos 2Q_j}    (119)
= −2 P_i cos 2Q_i + (1/n) ∂/∂P_i {P_1²c_{1+1} + P_1P_2c_{1+2} + P_1P_3c_{1+3} + … + P_2P_1c_{2+1} + P_2²c_{2+2} + P_2P_3c_{2+3} + … + P_3P_1c_{3+1} + P_3P_2c_{3+2} + P_3²c_{3+3} + …}    (120)
= (1/n) (∂/∂P_i {Σ_j P_j P_i cos(Q_i + Q_j)} + ∂/∂P_i {Σ_j P_i P_j cos(Q_j + Q_i)}) − 2 P_i cos 2Q_i    (121)
= (2/n) ∂/∂P_i {Σ_j P_j P_i cos(Q_i + Q_j)} − 2 P_i cos 2Q_i    (122)
= (2/n) Σ_j P_j cos(Q_i + Q_j) − 2 P_i cos 2Q_i    (123)
= (2/n) Σ_j P_j cos Q_i cos Q_j − (2/n) Σ_j P_j sin Q_i sin Q_j − 2 P_i cos 2Q_i    (124)
= (2/n) cos Q_i Σ_j P_j cos Q_j − (2/n) sin Q_i Σ_j P_j sin Q_j − 2 P_i cos 2Q_i    (125)
= (2/n) cos Q_i · n x̄ − (2/n) sin Q_i · n ȳ − 2 P_i cos 2Q_i    (126)
= 2 (x̄ cos Q_i − ȳ sin Q_i − P_i cos 2Q_i)    (127)

Substituting into equation (112) gives

∂α/∂P_i = ½ (∂N/∂P_i D − N ∂D/∂P_i)/(D² + N²)    (128)
= ½ [2 (x̄ sin Q_i + ȳ cos Q_i − P_i sin 2Q_i) D − 2 (x̄ cos Q_i − ȳ sin Q_i − P_i cos 2Q_i) N] / (D² + N²)    (129)
= [D (x̄ sin Q_i + ȳ cos Q_i − P_i sin 2Q_i) − N (x̄ cos Q_i − ȳ sin Q_i − P_i cos 2Q_i)] / (D² + N²)    (130)

It remains the derivative of r:

∂r/∂P_i = ∂/∂P_i {(1/n) Σ_j P_j cos(Q_j − α)}    (131)
= (1/n) Σ_j ∂/∂P_i {P_j cos(Q_j − α)}    (132)
= (1/n) Σ_j (cos(Q_j − α) ∂P_j/∂P_i + P_j ∂/∂P_i {cos(Q_j − α)})    (133)
= (1/n) cos(Q_i − α) + (1/n) Σ_j P_j sin(Q_j − α) ∂α/∂P_i    (134)
= (1/n) cos(Q_i − α) + ∂α/∂P_i (1/n) Σ_j P_j sin(Q_j − α)    (135)
= (1/n) cos(Q_i − α) + ∂α/∂P_i · ∂r/∂α    (136)
= (1/n) cos(Q_i − α) + ∂α/∂P_i (ȳ cos α − x̄ sin α)    (137)

Note that we made use of the already known expression ∂α/∂P_i. Let us summarize the results. The sought covariance matrix is

C_AR = [σ_A²  σ_AR ; σ_AR  σ_R²]    (138)

with elements

σ_A² = 1/(D² + N²)² Σ_i [D (x̄ sin θ_i + ȳ cos θ_i − ρ_i sin 2θ_i) − N (x̄ cos θ_i − ȳ sin θ_i − ρ_i cos 2θ_i)]² σ²_ρi    (139)

σ_R² = Σ_i [(1/n) cos(θ_i − α) + ∂α/∂P_i (ȳ cos α − x̄ sin α)]² σ²_ρi    (140)

σ_AR = Σ_i ∂α/∂P_i ∂r/∂P_i σ²_ρi    (141)

All elements of C_AR are of complexity O(n) and allow extensive reuse of already computed expressions. This makes them suitable for fast calculation under real-time conditions and limited computing power.

† See [ARRAS97] for the results with weights. Performance comparison results of three different ways to determine α and r are also briefly given.
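Assembled into code, the implementable expressions read roughly as below (a sketch assuming the unweighted case and purely radial noise, as in this chapter); N, D, x̄, ȳ and the two derivative vectors are computed once and reused, so the whole covariance costs O(n):

```python
import numpy as np

def line_fit_covariance(rho, theta, sigma_rho):
    """alpha, r and C_AR from the closed forms (106)-(111), (130), (137), (139)-(141).
    Unweighted, radial noise only (a sketch of the implementable expressions)."""
    n = len(rho)
    x, y = rho * np.cos(theta), rho * np.sin(theta)
    xm, ym = x.mean(), y.mean()                                            # (111)
    N = (2.0 / n) * x.sum() * y.sum() - np.sum(rho**2 * np.sin(2 * theta))      # (107)
    D = (1.0 / n) * (x.sum()**2 - y.sum()**2) - np.sum(rho**2 * np.cos(2 * theta))  # (108)
    alpha = 0.5 * np.arctan2(N, D)                                         # (106)
    r = xm * np.cos(alpha) + ym * np.sin(alpha)                            # (110)

    # Reusable derivative vectors d(alpha)/dP_i and dr/dP_i, cf. (130) and (137).
    dN = 2.0 * (xm * np.sin(theta) + ym * np.cos(theta) - rho * np.sin(2 * theta))  # (118)
    dD = 2.0 * (xm * np.cos(theta) - ym * np.sin(theta) - rho * np.cos(2 * theta))  # (127)
    dalpha = 0.5 * (dN * D - N * dD) / (D**2 + N**2)
    dr = np.cos(theta - alpha) / n + dalpha * (ym * np.cos(alpha) - xm * np.sin(alpha))

    var = sigma_rho**2
    C_AR = np.array([[np.sum(dalpha**2 * var), np.sum(dalpha * dr * var)],
                     [np.sum(dalpha * dr * var), np.sum(dr**2 * var)]])    # (139)-(141)
    return alpha, r, C_AR

rho   = np.array([1.73, 1.55, 1.50, 1.55, 1.73])
theta = np.deg2rad(np.array([0.0, 15.0, 30.0, 45.0, 60.0]))
print(line_fit_covariance(rho, theta, sigma_rho=np.full(5, 0.01)))
```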