Chapter 9: Function Approximation
고려대학교 컴퓨터학과 음성정보처리 연구실 (Speech Information Processing Lab, Dept. of Computer Science, Korea University)
2003010594 조영규, 2004020594 방규섭, 2005020594 정재연
Contents
9.1 Least Squares Approximation
9.2 Continuous Least Squares
9.3 Function Approximation at a Point
9.4 Using Matlab’s Functions
9.1 Least Squares Approximation
Some of the most common methods of approximating data are based on the desire to minimize some measure of the difference between the approximating function and the given data points.
The method of least squares seeks to minimize the sum of the squares of the differences between the function value and the data value.
Advantages of using the square of the differences at each point:
Positive differences do not cancel negative differences
Differentiation is not difficult
Small differences become smaller and large differences are magnified
9.1.1 Linear Least Squares Approximation - Example 9.1 Linear Approximation to Four Points
Fit f(x) = 0.9x + 1.4 to the data points (1, 2.1), (2, 2.9), (5, 6.1), (7, 8.3).
At x = 1: f(1) = 2.3, y1 = 2.1, e1 = (2.3 - 2.1)^2 = 0.04
At x = 2: f(2) = 3.2, y2 = 2.9, e2 = (3.2 - 2.9)^2 = 0.09
At x = 5: f(5) = 5.9, y3 = 6.1, e3 = (5.9 - 6.1)^2 = 0.04
At x = 7: f(7) = 7.7, y4 = 8.3, e4 = (7.7 - 8.3)^2 = 0.36
The total squared error is 0.04 + 0.09 + 0.04 + 0.36 = 0.53
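The per-point arithmetic of Example 9.1 can be reproduced with a short script. (A Python sketch; the slides use Matlab, but the computation is the same.)

```python
# Squared errors of f(x) = 0.9x + 1.4 at the data of Example 9.1
points = [(1, 2.1), (2, 2.9), (5, 6.1), (7, 8.3)]
f = lambda x: 0.9 * x + 1.4

errors = [(f(x) - y) ** 2 for x, y in points]
total = sum(errors)  # total squared error, 0.53
```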
9.1.1 Linear Least Squares Approximation - Discussion (1/2)
Normal equations for linear least squares approximation (1/2)
For f(x) = ax + b and data points (x1, y1), (x2, y2), (x3, y3), (x4, y4), the total squared error is
E = [f(x1) - y1]^2 + [f(x2) - y2]^2 + [f(x3) - y3]^2 + [f(x4) - y4]^2
  = [ax1 + b - y1]^2 + [ax2 + b - y2]^2 + [ax3 + b - y3]^2 + [ax4 + b - y4]^2
Setting the partial derivatives to zero:
∂E/∂a = 2[ax1 + b - y1]x1 + 2[ax2 + b - y2]x2 + 2[ax3 + b - y3]x3 + 2[ax4 + b - y4]x4 = 0
∂E/∂b = 2[ax1 + b - y1] + 2[ax2 + b - y2] + 2[ax3 + b - y3] + 2[ax4 + b - y4] = 0
Simplifying gives
a[x1^2 + x2^2 + x3^2 + x4^2] + b[x1 + x2 + x3 + x4] = x1 y1 + x2 y2 + x3 y3 + x4 y4
a[x1 + x2 + x3 + x4] + b[1 + 1 + 1 + 1] = y1 + y2 + y3 + y4
In general form, for n data points (sums over i = 1, ..., n):
a Σ xi^2 + b Σ xi = Σ xi yi
a Σ xi + b n = Σ yi
Writing Sxx = Σ xi^2, Sx = Σ xi, Sxy = Σ xi yi, Sy = Σ yi, the normal equations are
a Sxx + b Sx = Sxy
a Sx + b n = Sy
9.1.1 Linear Least Squares Approximation - Discussion (2/2)
Normal equations for linear least squares approximation (2/2)
The solution of the system of equations is
a = (n Sxy - Sx Sy) / (n Sxx - Sx Sx),  b = (Sxx Sy - Sxy Sx) / (n Sxx - Sx Sx)
Matlab function for linear least squares approximation
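The closed-form solution can be checked numerically on the data of Example 9.1. (A Python sketch; the formulas are the ones on this slide.)

```python
import numpy as np

x = np.array([1.0, 2.0, 5.0, 7.0])
y = np.array([2.1, 2.9, 6.1, 8.3])
n = len(x)

# Sums appearing in the normal equations
Sx, Sy = x.sum(), y.sum()
Sxx, Sxy = (x * x).sum(), (x * y).sum()

denom = n * Sxx - Sx * Sx
a = (n * Sxy - Sx * Sy) / denom
b = (Sxx * Sy - Sxy * Sx) / denom
# a ≈ 1.0440, b ≈ 0.9352, the values quoted in Example 9.2
```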
Example 9.2 Least Squares Straight Line to Fit Four Data Points
For the data (1, 2.1), (2, 2.9), (5, 6.1), (7, 8.3), the normal equations give a = 1.0440, b = 0.9352, so
f(x) = 1.0440x + 0.9352
The total squared error is d1^2 + d2^2 + d3^2 + d4^2 = 0.0360
[Figure: linear least squares straight line fit to the data]
Example 9.3 Noisy Straight-Line Data
x = [0.00 0.20 0.80 1.00 1.20 1.90 2.00 2.10 2.95 3.00]
y = [0.01 0.22 0.76 1.03 1.18 1.94 2.01 2.08 2.90 2.95]
The least squares line has a = 0.9839, b = 0.0174:
f(x) = 0.9839x + 0.0174
[Figure: data and linear fit for the noisy straight line]
9.1.2 Quadratic Least Squares Approximation - Discussion (1/2)
Normal equations for quadratic least squares approximation
For n data points (x1, y1), ..., (xn, yn) and f(x) = ax^2 + bx + c, we wish to minimize
E = [a(x1)^2 + b(x1) + c - y1]^2 + ... + [a(xn)^2 + b(xn) + c - yn]^2
∂E/∂a = 2[a(x1)^2 + b(x1) + c - y1](x1)^2 + ... + 2[a(xn)^2 + b(xn) + c - yn](xn)^2 = 0
∂E/∂b = 2[a(x1)^2 + b(x1) + c - y1](x1) + ... + 2[a(xn)^2 + b(xn) + c - yn](xn) = 0
∂E/∂c = 2[a(x1)^2 + b(x1) + c - y1] + ... + 2[a(xn)^2 + b(xn) + c - yn] = 0
Simplifying gives
a Σ xi^4 + b Σ xi^3 + c Σ xi^2 = Σ xi^2 yi
a Σ xi^3 + b Σ xi^2 + c Σ xi = Σ xi yi
a Σ xi^2 + b Σ xi + c n = Σ yi
(all sums taken over i = 1, ..., n)
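The three normal equations above can be assembled and solved directly. (A Python sketch with made-up data, not the textbook's; the result is checked against a library polynomial fit.)

```python
import numpy as np

# Hypothetical data, for illustration only
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.0, 4.9, 10.1, 16.8])

# Build the 3x3 normal equations for f(x) = a x^2 + b x + c
S = [np.sum(x ** k) for k in range(5)]      # S[k] = sum of x_i^k
A = np.array([[S[4], S[3], S[2]],
              [S[3], S[2], S[1]],
              [S[2], S[1], S[0]]])
r = np.array([np.sum(x**2 * y), np.sum(x * y), np.sum(y)])
a, b, c = np.linalg.solve(A, r)

# The same coefficients come out of a direct least squares fit
assert np.allclose([a, b, c], np.polyfit(x, y, 2))
```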
9.1.2 Quadratic Least Squares Approximation - Discussion (2/2)
Matlab Function for Quadratic Least Squares Approximation
Example 9.6 Oil Reservoir
Using the Matlab function for quadratic least squares, we find that the normal equations are Az = b, with
A = [1668.9 360.3 82.8; 360.3 82.8 21.3; 82.8 21.3 7.0],  b = [130.3413; 42.4743; 21.0000]
The solution is z = [0.2990, -2.9926, 8.5687], so
p(x) = 0.2990x^2 - 2.9926x + 8.5687
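The quoted solution can be checked by solving the system numerically. (A Python sketch; note that the minus sign on the x coefficient is inferred by substituting the quoted magnitudes back into the system, since signs did not survive transcription.)

```python
import numpy as np

A = np.array([[1668.9, 360.3, 82.8],
              [360.3,  82.8,  21.3],
              [82.8,   21.3,   7.0]])
b = np.array([130.3413, 42.4743, 21.0000])

z = np.linalg.solve(A, b)  # coefficients [a, b, c] of p(x) = a x^2 + b x + c

# The quoted (sign-corrected) solution satisfies the system to rounding accuracy
z_ref = np.array([0.2990, -2.9926, 8.5687])
residual = A @ z_ref - b
```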
9.1.3 Cubic Least Squares Approximation
For f(x) = ax^3 + bx^2 + cx + d, the normal equations are
a Σ xi^6 + b Σ xi^5 + c Σ xi^4 + d Σ xi^3 = Σ xi^3 yi
a Σ xi^5 + b Σ xi^4 + c Σ xi^3 + d Σ xi^2 = Σ xi^2 yi
a Σ xi^4 + b Σ xi^3 + c Σ xi^2 + d Σ xi = Σ xi yi
a Σ xi^3 + b Σ xi^2 + c Σ xi + d n = Σ yi
(all sums taken over i = 1, ..., n)
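The same pattern extends the normal equations to the 4x4 cubic case. (A Python sketch with synthetic y data; the x grid -1, -0.8, ..., 1 is chosen because its even moment sums reproduce the matrix A shown in Example 9.7, which suggests that example used this grid.)

```python
import numpy as np

# Synthetic data: samples of a known cubic, so the fit should recover it
x = np.linspace(-1.0, 1.0, 11)
y = x**3 - 0.5 * x + 0.25

# 4x4 normal equations for f(x) = a x^3 + b x^2 + c x + d
S = [np.sum(x ** k) for k in range(7)]      # S[k] = sum of x_i^k
A = np.array([[S[6], S[5], S[4], S[3]],
              [S[5], S[4], S[3], S[2]],
              [S[4], S[3], S[2], S[1]],
              [S[3], S[2], S[1], S[0]]])
r = np.array([np.sum(x**3 * y), np.sum(x**2 * y), np.sum(x * y), np.sum(y)])
coeffs = np.linalg.solve(A, r)  # recovers [1, 0, -0.5, 0.25]
```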
Example 9.7 Cubic Least Squares
For p(x) = ax^3 + bx^2 + cx + d, the normal equations are Az = r with
A = [2.6259 0.0000 3.1328 0.0000; 0.0000 3.1328 0.0000 4.4000; 3.1328 0.0000 4.4000 0.0000; 0.0000 4.4000 0.0000 11.0000]
r = [1.5226; 2.2000; 2.2800; 5.5000]
The solution is a = -0.2550, b = 0.0000, c = 0.6997, d = 0.5000
(signs recovered by substituting back into the system)
Example 9.8 Cubic Least Squares, Continued
For p(x) = ax^3 + bx^2 + cx + d, the normal equations are Az = r with
A = [1.3130 1.4160 1.5664 1.8000; 1.4160 1.5664 1.8000 2.2000; 1.5664 1.8000 2.2000 3.0000; 1.8000 2.2000 3.0000 6.0000]
r = [1.6613; 1.9976; 2.6400; 4.6500]
The solution is a = 0.000, b = -0.375, c = 0.825, d = 0.500
(signs recovered by substituting back into the system)
9.1.4 Least Squares Approximation for Other Functional Forms
If the data are best fit by an exponential function, it is convenient instead to fit the logarithm of the data by a straight line.
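The log-transform idea can be sketched as follows. (A Python illustration with made-up data generated from y = 2 e^(0.5x); the fitted slope and intercept of log(y) recover k and log(C).)

```python
import numpy as np

# Hypothetical data from y = C e^(k x) with C = 2, k = 0.5 (illustrative only)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * np.exp(0.5 * x)

# Fit log(y) = log(C) + k x by linear least squares
k, logC = np.polyfit(x, np.log(y), 1)
C = np.exp(logC)
```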
Example 9.10 Least-Squares Approximation of a Reciprocal Relation
The plot of the following data suggests that they could be fit by a function of the form y = 1/(ax + b):
x = [0 0.5 1 1.5 2]
y = [1.00 0.50 0.30 0.20 0.20]
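Since y = 1/(ax + b) is equivalent to 1/y = ax + b, the reciprocal of the data can be fit by a straight line, in the same spirit as the logarithm trick above. (A Python sketch using the data of Example 9.10.)

```python
import numpy as np

# Data from Example 9.10
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = np.array([1.00, 0.50, 0.30, 0.20, 0.20])

# y = 1/(a x + b)  <=>  1/y = a x + b, so fit 1/y by a straight line
a, b = np.polyfit(x, 1.0 / y, 1)
```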
9.2 Continuous Least Squares
To approximate the exact values of a function at all points of an interval, the summations are replaced by the corresponding integrals.
To approximate a given function s(x) by a quadratic p(x) = ax^2 + bx + c on the interval [0, 1] or [-1, 1], we minimize the following.
On [0, 1]:
E = ∫_0^1 [ax^2 + bx + c - s(x)]^2 dx
Setting the partial derivatives with respect to a, b, and c to zero gives
a/5 + b/4 + c/3 = ∫_0^1 x^2 s(x) dx
a/4 + b/3 + c/2 = ∫_0^1 x s(x) dx
a/3 + b/2 + c = ∫_0^1 s(x) dx
On [-1, 1]:
E = ∫_{-1}^{1} [ax^2 + bx + c - s(x)]^2 dx
(2/5)a + 0 + (2/3)c = ∫_{-1}^{1} x^2 s(x) dx
0 + (2/3)b + 0 = ∫_{-1}^{1} x s(x) dx
(2/3)a + 0 + 2c = ∫_{-1}^{1} s(x) dx
Example 9.12 Continuous Least Squares
To find the continuous least squares quadratic approximation to the exponential function on [-1, 1]:
The coefficient matrix is
A = [2/5 0 2/3; 0 2/3 0; 2/3 0 2]
The required integrals follow from integration by parts:
∫ x^2 e^x dx = x^2 e^x - 2 ∫ x e^x dx = (x^2 - 2x + 2) e^x
∫ x e^x dx = (x - 1) e^x
∫ e^x dx = e^x
so on [-1, 1]:
∫_{-1}^{1} x^2 e^x dx = e - 5/e = 0.8789
∫_{-1}^{1} x e^x dx = 2/e = 0.7358
∫_{-1}^{1} e^x dx = e - 1/e = 2.3504
Example 9.12 Continuous Least Squares (cont.)
Solving
[2/5 0 2/3; 0 2/3 0; 2/3 0 2] z = [0.8789; 0.7358; 2.3504]
gives z = [0.5368; 1.1037; 0.9963], so the least squares quadratic approximation to the exponential function is
f(x) = 0.5368x^2 + 1.1037x + 0.9963
For comparison, the Taylor polynomial of the exponential function is t(x) = 0.5x^2 + x + 1.
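The system in Example 9.12 can be solved numerically from the exact values of the integrals. (A Python sketch; the slide's coefficients are reproduced to the precision of its rounding.)

```python
import numpy as np

e = np.e
# Right-hand side: the integrals over [-1, 1] computed analytically above
rhs = np.array([e - 5 / e,   # ∫ x^2 e^x dx ≈ 0.8789
                2 / e,       # ∫ x e^x dx   ≈ 0.7358
                e - 1 / e])  # ∫ e^x dx     ≈ 2.3504

A = np.array([[2/5, 0,   2/3],
              [0,   2/3, 0  ],
              [2/3, 0,   2  ]])
a, b, c = np.linalg.solve(A, rhs)
# a ≈ 0.5368, b ≈ 1.1037, c ≈ 0.9963
```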
9.2.1 Continuous Least Squares with Orthogonal Polynomials
The set of functions {f0, f1, f2, ..., fn} is linearly independent on the interval [a, b] if a linear combination of the functions is the zero function only when all of the coefficients are zero. In other words, if c0 f0(x) + c1 f1(x) + c2 f2(x) + ... + cn fn(x) = 0 for all x in [a, b], then c0 = c1 = c2 = ... = cn = 0.
Examples of linearly independent sets of functions:
f0 = 1, f1 = x, f2 = x^2, ..., fn = x^n
{p0, p1, p2, ..., pn} where pj is a polynomial of degree j
{1, sin(x), cos(x), sin(2x), cos(2x), ..., sin(nx), cos(nx)}
The set of functions {f0, f1, f2, ..., fn} is orthogonal on [a, b] if
∫_a^b fi(x) fj(x) dx = 0 if i ≠ j, and = dj > 0 if i = j
Specifically, if the functions are orthogonal and dj = 1 for all j, the functions are called orthonormal.
9.2.2 Gram-Schmidt Process
How to construct a sequence of polynomials that are orthogonal on the interval [a, b]:
Conditions (for n = 0, 1, 2, ...):
1. pn is a polynomial of degree n
2. The coefficient of x^n in pn is positive
3. ∫_a^b pn(x) pm(x) dx = 0 if n ≠ m, and ∫_a^b pn(x) pn(x) dx = 1
Start by taking p0(x) = c > 0. To satisfy condition 3,
∫_a^b c^2 dx = c^2 (b - a) = 1, so c = 1/√(b - a)
To construct p1(x), we begin by letting q1(x) = x + c_{1,0} p0. Requiring q1(x) to be orthogonal to p0:
∫_a^b p0(x) [x + c_{1,0} p0] dx = 0
∫_a^b x p0 dx + c_{1,0} ∫_a^b p0 p0 dx = 0
9.2.2 Gram-Schmidt Process (cont.)
Since ∫_a^b p0 p0 dx = 1, we get c_{1,0} = -∫_a^b x p0 dx.
To satisfy condition 3, we normalize q1(x):
p1(x) = q1(x) / √(∫_a^b q1(x) q1(x) dx)
To construct p2(x), we begin by letting q2(x) = x^2 + c_{2,1} p1(x) + c_{2,0} p0(x). Requiring q2(x) to be orthogonal to p1(x):
∫_a^b p1(x) [x^2 + c_{2,1} p1(x) + c_{2,0} p0(x)] dx = 0
∫_a^b x^2 p1 dx + c_{2,1} ∫_a^b p1(x) p1(x) dx + c_{2,0} ∫_a^b p1(x) p0(x) dx = 0
Since ∫_a^b p1(x) p1(x) dx = 1 and ∫_a^b p1(x) p0(x) dx = 0, we get c_{2,1} = -∫_a^b x^2 p1 dx (and, by orthogonality to p0, c_{2,0} = -∫_a^b x^2 p0 dx).
Normalizing q2(x):
p2(x) = q2(x) / √(∫_a^b q2(x) q2(x) dx)
9.2.2 Gram-Schmidt Process (cont.)
The process continues in the same manner, as we construct each higher degree polynomial in turn:
pn(x) = qn(x) / √(∫_a^b qn(x) qn(x) dx)
where qn(x) = x^n + c_{n,0} p0(x) + c_{n,1} p1(x) + ... + c_{n,n-1} p_{n-1}(x)
(The coefficients c_{n,0}, ..., c_{n,n-1} are found so that qn(x) is orthogonal to each of the previously generated polynomials.)
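The process above can be sketched directly. (A Python/NumPy illustration, not the textbook's Matlab code: each x^k is orthogonalized against the previously built polynomials and then normalized, producing an orthonormal set on [a, b].)

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def inner(p, q, a=-1.0, b=1.0):
    """Integral of p(x) q(x) over [a, b], via the antiderivative."""
    anti = (p * q).integ()
    return anti(b) - anti(a)

def gram_schmidt(n, a=-1.0, b=1.0):
    """Orthonormal polynomials p_0, ..., p_n on [a, b]."""
    ps = []
    for k in range(n + 1):
        q = P([0.0] * k + [1.0])            # start from x^k
        for p in ps:                         # subtract projections onto earlier p's
            q = q - inner(q, p, a, b) * p
        ps.append(q / np.sqrt(inner(q, q, a, b)))
    return ps
```

On [-1, 1] this yields p0(x) = 1/√2 and, up to the normalizing constants, the Legendre polynomials of the next section.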
9.2.3 Legendre Polynomials
A function of x defined on any finite interval a ≤ x ≤ b can be transformed to a function of t defined on -1 ≤ t ≤ 1 by the substitution
x = ((b - a)/2) t + (b + a)/2
The first few Legendre polynomials (in monic form) are
p0(x) = 1, p1(x) = x,
p2(x) = x^2 - 1/3, p3(x) = x^3 - (3/5)x,
p4(x) = x^4 - (6/7)x^2 + 3/35, p5(x) = x^5 - (10/9)x^3 + (5/21)x
Legendre polynomials form an orthogonal set on [-1, 1]. For normalization:
∫_{-1}^{1} p0(x) p0(x) dx = 2
∫_{-1}^{1} p1(x) p1(x) dx = ∫_{-1}^{1} x^2 dx = 2/3
∫_{-1}^{1} p2(x) p2(x) dx = ∫_{-1}^{1} (x^4 - (2/3)x^2 + 1/9) dx = 2/5 - 4/9 + 2/9 = 8/45
An alternative definition of the Legendre polynomials is given by the Rodrigues formula:
p0(x) = 1 and pn(x) = ((-1)^n / (2^n n!)) d^n/dx^n [(1 - x^2)^n] for n ≥ 1,
for which
∫_{-1}^{1} pn(x) pn(x) dx = 2/(2n + 1)
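The normalization 2/(2n+1) holds for the standard (Rodrigues) Legendre polynomials; the monic versions listed above differ by a scale factor, e.g. p2(x) = x^2 - 1/3 = (2/3)P2(x), whose squared norm (2/3)^2 (2/5) = 8/45 matches the computation above. A Python check using the library Legendre basis:

```python
import numpy as np
from numpy.polynomial import Legendre

# Verify the norm ∫_{-1}^{1} P_n^2 dx = 2/(2n+1) for the standard P_n
for n in range(5):
    Pn = Legendre.basis(n)
    sq = Pn * Pn                   # P_n(x)^2 as a polynomial
    anti = sq.integ()
    norm = anti(1.0) - anti(-1.0)  # integral over [-1, 1]
    assert np.isclose(norm, 2.0 / (2 * n + 1))
```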
9.2.4 Least Squares Approximation with Legendre Polynomials
To find the quadratic least squares approximation to f(x) on the interval [-1, 1], we need to determine the coefficients c0, c1, c2 that minimize
E = ∫_{-1}^{1} [c0 P0(x) + c1 P1(x) + c2 P2(x) - f(x)]^2 dx
Setting ∂E/∂c0 = 0:
2 ∫_{-1}^{1} [c0 P0(x) + c1 P1(x) + c2 P2(x) - f(x)] P0(x) dx = 0
c0 ∫_{-1}^{1} P0(x) P0(x) dx + c1 ∫_{-1}^{1} P1(x) P0(x) dx + c2 ∫_{-1}^{1} P2(x) P0(x) dx = ∫_{-1}^{1} f(x) P0(x) dx
By orthogonality the cross terms vanish, leaving
c0 ∫_{-1}^{1} P0(x) P0(x) dx = ∫_{-1}^{1} f(x) P0(x) dx
9.2.4 Least Squares Approximation with Legendre Polynomials (cont.)
In a similar manner, the equations formed by setting ∂E/∂c1 = 0 and ∂E/∂c2 = 0 are
c1 ∫_{-1}^{1} P1(x) P1(x) dx = ∫_{-1}^{1} f(x) P1(x) dx
c2 ∫_{-1}^{1} P2(x) P2(x) dx = ∫_{-1}^{1} f(x) P2(x) dx
The integrations on the left were performed earlier; the result is
c0 = (1/2) ∫_{-1}^{1} f(x) P0(x) dx
c1 = (3/2) ∫_{-1}^{1} f(x) P1(x) dx
c2 = (45/8) ∫_{-1}^{1} f(x) P2(x) dx
Example 9.13 Least Squares Approximation Using Legendre Polynomials
To find the quadratic least squares approximation to f(x) = e^x on the interval [-1, 1] in terms of the Legendre polynomials:
g(x) = c0 p0(x) + c1 p1(x) + c2 p2(x), where p0(x) = 1, p1(x) = x, p2(x) = x^2 - 1/3
c0 = (1/2) ∫_{-1}^{1} e^x dx = (1/2)(2.3504) = 1.1752
c1 = (3/2) ∫_{-1}^{1} x e^x dx = (3/2)(0.7358) = 1.1037
c2 = (45/8) ∫_{-1}^{1} e^x (x^2 - 1/3) dx = (45/8)(0.8789 - (1/3)(2.3504)) = 0.5368
so g(x) = c0 p0(x) + c1 p1(x) + c2 p2(x) = c0 + c1 x + c2 (x^2 - 1/3)
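The coefficients follow directly from the exact values of the three integrals computed in Example 9.12. (A Python sketch; agreement with the slide's rounded values is to about three decimals.)

```python
import numpy as np

e = np.e
# Exact integrals over [-1, 1] for f(x) = e^x
I0 = e - 1 / e   # ∫ e^x dx
I1 = 2 / e       # ∫ x e^x dx
I2 = e - 5 / e   # ∫ x^2 e^x dx

c0 = 0.5 * I0
c1 = 1.5 * I1
c2 = (45 / 8) * (I2 - I0 / 3)   # ∫ e^x (x^2 - 1/3) dx = I2 - I0/3
# c0 ≈ 1.1752, c1 ≈ 1.1037, c2 ≈ 0.5368
```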
Example 9.13 Least Squares Approximation Using Legendre Polynomials (cont.)
[Figure: the exponential function and its quadratic approximation g(x) = c0 + c1 x + c2 (x^2 - 1/3)]
Example 9.14 Padé Approximation of the Runge Function
The Runge function is
f(x) = 1/(1 + 25x^2)
Its value and first three derivatives at x = 0 are f(0) = 1, f'(0) = 0, f''(0) = -50, f'''(0) = 0.
The Taylor polynomial of the function (at x = 0) is
t(x) = 1 + 0x - 25x^2 + 0x^3
We seek a rational-function representation, using k = 3, m = 1, and n = 2:
r(x) = (a0 + a1 x) / (1 + b1 x + b2 x^2)
For k = 3, the linear system of equations is obtained by matching the coefficients of 1, x, x^2, x^3 in (1 + b1 x + b2 x^2) t(x) = a0 + a1 x.
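The coefficient-matching system can be set up and solved explicitly. (A Python sketch assuming the standard Padé construction; the x^2 and x^3 conditions give a 2x2 system for b1, b2, after which a0, a1 are read off.)

```python
import numpy as np

# Taylor coefficients of the Runge function at 0: t(x) = 1 + 0x - 25x^2 + 0x^3
t = [1.0, 0.0, -25.0, 0.0]

# Matching coefficients of x^2 and x^3 in (1 + b1 x + b2 x^2) t(x) = a0 + a1 x:
#   t2 + b1 t1 + b2 t0 = 0
#   t3 + b1 t2 + b2 t1 = 0
M = np.array([[t[1], t[0]],
              [t[2], t[1]]])
b1, b2 = np.linalg.solve(M, [-t[2], -t[3]])
a0 = t[0]                  # coefficient of 1
a1 = t[1] + b1 * t[0]      # coefficient of x
# b1 = 0, b2 = 25, a0 = 1, a1 = 0, so r(x) = 1/(1 + 25x^2):
# the Padé approximant reproduces the (rational) Runge function exactly
```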
9.4 Using Matlab's Functions
% script for polynomial regression
% generate data
x = -3 : 0.1 : 3;
y1 = sin(x);
y2 = cos(2*x);
yy = y1 + y2;
y = 0.01*round(100*yy);   % round the data to two decimal places
% find least squares 6th-degree polynomial to fit data
z = polyfit(x, y, 6);
% evaluate the polynomial
p = polyval(z, x);
% plot the polynomial
plot(x, p)
hold on
% plot the data
plot(x, y, '+')
hold off