Chapter 2: A Survey of Simple Methods and Tools
2.1 Horner's Rule and Nested Multiplication

Nested multiplication. For example:

  p(x) = a3 x^3 + a2 x^2 + a1 x + a0 = ((a3 x + a2) x + a1) x + a0
Horner's rule for polynomial evaluation

  p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0

(a_0, ..., a_n are the coefficients of the polynomial; a_n is the coefficient of the highest-degree term.)
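As a sketch (not the book's own code), nested multiplication translates directly into a single loop; the coefficient ordering, leading coefficient first, is an assumption of this example:

```python
# A minimal sketch of Horner's rule (nested multiplication).
# Assumption: coeffs[0] is the leading (highest-degree) coefficient,
# coeffs[-1] is the constant term.
def horner(coeffs, x):
    """Evaluate p(x) with only n multiplications and n additions."""
    result = 0.0
    for a in coeffs:
        result = result * x + a   # one nested-multiplication step
    return result

# p(x) = 2x^3 - 3x^2 + 4x - 5 at x = 2: 16 - 12 + 8 - 5 = 7
print(horner([2, -3, 4, -5], 2.0))  # → 7.0
```

Note that naive term-by-term evaluation costs O(n^2) multiplications (or O(n) with repeated squaring tricks), while the loop above costs exactly n.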
Horner's rule for polynomial derivative evaluation

Polynomial first derivative:

  p'(x) = n a_n x^{n-1} + (n-1) a_{n-1} x^{n-2} + ... + a_1

For example:
A more efficient implementation of Horner's rule

If the intermediate values in the computation of p(x) are saved, then the subsequent computation of the derivative can be done more cheaply.

Define

  b_n = a_n,   b_k = a_k + x b_{k+1},   k = n-1, ..., 0,

so that p(x) = b_0. Then, since each b_k is also a function of x, differentiating the recurrence gives

  b_k' = b_{k+1} + x b_{k+1}',   b_n' = 0,

and, in particular, p'(x) = b_0'. Define c_k = b_k'. Therefore

  c_k = b_{k+1} + x c_{k+1},   c_n = 0,

and p'(x) = c_0.
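A sketch of this single-pass scheme, computing p(x) and p'(x) together from the saved b values (the function name and coefficient ordering are my own, not the book's):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one pass over the coefficients.

    Assumption: coeffs[0] is the leading coefficient. Using the
    recurrences b_k = a_k + x*b_{k+1} and c_k = b_{k+1} + x*c_{k+1},
    we get p(x) = b_0 and p'(x) = c_0.
    """
    b = coeffs[0]   # b_n = a_n
    c = 0.0         # c_n = b_n' = 0
    for a in coeffs[1:]:
        c = b + x * c   # c_k = b_{k+1} + x*c_{k+1}  (uses the saved b)
        b = a + x * b   # b_k = a_k   + x*b_{k+1}
    return b, c

# p(x) = x^2 - 1: p(3) = 8, p'(3) = 6
print(horner_with_derivative([1, 0, -1], 3.0))  # → (8.0, 6.0)
```

The order of the two updates matters: c must be updated with the previous b (that is, b_{k+1}) before b itself is overwritten.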
2.2 Difference Approximations to the Derivative: One-sided Difference

The definition of the derivative:

  f'(x) = lim_{h -> 0} (f(x+h) - f(x)) / h

Taylor's Theorem gives f(x+h) = f(x) + h f'(x) + (h^2/2) f''(ξ), so that we have

  f'(x) = (f(x+h) - f(x)) / h - (h/2) f''(ξ).

Thus the error is roughly proportional to h. Can we do better?
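A quick sketch confirming the O(h) behavior of the one-sided difference (the test function f(x) = e^x is an illustrative choice, not prescribed here):

```python
import math

# Forward (one-sided) difference: the error shrinks roughly like h.
def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h

# f(x) = exp(x), so the exact derivative at x = 1 is e
errors = [abs(forward_diff(math.exp, 1.0, h) - math.e)
          for h in (0.1, 0.05, 0.025)]
print(errors)   # halving h roughly halves the error
```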
2.2 Difference Approximations to the Derivative: Centered Difference

Consider the two Taylor expansions:

  f(x+h) = f(x) + h f'(x) + (h^2/2) f''(x) + (h^3/6) f'''(ξ1),
  f(x-h) = f(x) - h f'(x) + (h^2/2) f''(x) - (h^3/6) f'''(ξ2).

Subtracting and solving for f'(x) gives

  f'(x) = (f(x+h) - f(x-h)) / (2h) - (h^2/6) f'''(ξ),

so the error is now roughly proportional to h^2.
Example 2.1
Further illustrating these differences in accuracy

Given the function f(x) = e^x, find f'(x) at x = 1. Let's continue computing with the same example, but take more and smaller values of h, and look at the corresponding errors.
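The experiment can be sketched directly; this reproduces the comparison for f(x) = e^x at x = 1 (the particular h values are an illustrative choice):

```python
import math

# Compare forward and centered differences for f(x) = exp(x) at x = 1.
f, exact = math.exp, math.e
hs = (0.1, 0.05, 0.025, 0.0125)
fwd_err = [abs((f(1 + h) - f(1)) / h - exact) for h in hs]
ctr_err = [abs((f(1 + h) - f(1 - h)) / (2 * h) - exact) for h in hs]
for h, ef, ec in zip(hs, fwd_err, ctr_err):
    print(f"h={h:<7} forward={ef:.3e}  centered={ec:.3e}")
# halving h halves the forward error but divides the centered error by ~4
```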
[Table: errors for the two difference approximations as h decreases. The ratio of successive centered-difference errors is nearly 4, consistent with the O(h^2) estimate; for the smallest values of h, however, the error increases again. Why?]
Rounding Error

Let f̂(x) denote the function computation as actually done on the computer. Define e(x) = f(x) - f̂(x) as the error between the function as computed in infinite precision and as actually computed on the machine, and let ε be a bound on this rounding error: |e(x)| ≤ ε.

The approximate derivative that we compute is constructed with f̂, not f.
Substituting the machine values into the centered-difference formula, we have

  f'(x) - (f̂(x+h) - f̂(x-h)) / (2h) = -(h^2/6) f'''(ξ) + (e(x+h) - e(x-h)) / (2h),

which we write as the bound

  |error| ≤ (h^2/6) max|f'''| + ε/h.
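The two competing terms can be seen numerically; here is a small sketch using f(x) = e^x at x = 1 as before (the particular h values are chosen to land on each side of the tradeoff):

```python
import math

# For large-ish h the h^2 truncation term dominates; for tiny h the
# eps/h rounding term dominates, so the total error grows again.
f, exact = math.exp, math.e
errs = {}
for k in (2, 5, 11):        # h = 1e-2, 1e-5, 1e-11
    h = 10.0 ** -k
    errs[k] = abs((f(1 + h) - f(1 - h)) / (2 * h) - exact)
    print(f"h=1e-{k}: error={errs[k]:.3e}")
# the error at h = 1e-11 exceeds the error at h = 1e-5
```

This is why, in the table above, the errors eventually increase as h shrinks: the best achievable h balances the two terms rather than being as small as possible.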
2.3 Application: Euler's Method for Initial Value Problems

General form:

  y' = f(t, y),   y(t_0) = y_0.

Applying the one-sided difference (Eq. 2.1) to y' gives Euler's method:

  y_{n+1} = y_n + h f(t_n, y_n).
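A minimal sketch of the method (the test problem y' = y, y(0) = 1, whose exact solution is e^t, is an illustrative choice):

```python
# Euler's method for y' = f(t, y), y(t0) = y0.
def euler(f, t0, y0, h, nsteps):
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(nsteps):
        y = y + h * f(t, y)   # y_{n+1} = y_n + h f(t_n, y_n)
        t = t + h
        history.append((t, y))
    return history

# y' = y, y(0) = 1; ten steps of size 0.1 approximate y(1) = e
approx = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)[-1][1]
print(approx)   # ≈ 2.5937, underestimating e ≈ 2.71828
```

For this problem each step multiplies y by (1 + h), so the result is exactly (1.1)^10; the O(h) error of the underlying difference shows up as the gap from e.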
Example 2.2
2.4 Linear Interpolation

Given a set of nodes x_k, if p(x_k) = f(x_k) for all k, then we say the function p interpolates the function f at these nodes.

Linear interpolation: using a straight line to approximate a given function. For example, the equation of the straight line that passes through the two points (x_0, f(x_0)) and (x_1, f(x_1)) is

  p_1(x) = f(x_0) (x - x_1)/(x_0 - x_1) + f(x_1) (x - x_0)/(x_1 - x_0).
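The two-point formula above is a one-liner; as a sketch (the test function f(x) = x^2 is an illustrative choice):

```python
# The straight line through (x0, y0) and (x1, y1), in Lagrange form.
def p1(x, x0, y0, x1, y1):
    return y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)

# Interpolating f(x) = x^2 between x = 1 and x = 3:
print(p1(2.0, 1.0, 1.0, 3.0, 9.0))  # → 5.0, while the true value f(2) = 4
```

Note that the line reproduces f exactly at the nodes, and the midpoint gap (5 vs. 4) is the interpolation error discussed next.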
[Figure: the linear interpolant p_1(x) passing through (x_0, f(x_0)) and (x_1, f(x_1)).]
The accuracy of linear interpolation

For x in [x_0, x_1],

  f(x) - p_1(x) = ((x - x_0)(x - x_1) / 2) f''(ξ),

so that

  |f(x) - p_1(x)| ≤ ((x_1 - x_0)^2 / 8) max |f''|.
Example 2.3: linear interpolation using the values f(0.1) and f(0.2).
Piecewise linear interpolation

Example 4.2
Since

  d(log_a u)/dx = (log_a e / u) (du/dx),

and f(x) = log_2(x), we have

  f'(x) = (1/x) log_2 e,   f''(x) = -(1/x^2) log_2 e.
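A sketch of piecewise linear interpolation for f(x) = log_2(x), checking the observed error against the (h^2/8) max|f''| bound derived above (the interval [1, 2] and the number of subintervals are illustrative choices):

```python
import math

# Piecewise linear interpolation of f(x) = log2(x) on [1, 2].
def piecewise_linear(nodes, values, x):
    # find the subinterval [x_k, x_{k+1}] containing x, then interpolate
    for k in range(len(nodes) - 1):
        if nodes[k] <= x <= nodes[k + 1]:
            t = (x - nodes[k]) / (nodes[k + 1] - nodes[k])
            return (1 - t) * values[k] + t * values[k + 1]
    raise ValueError("x outside the interpolation interval")

n, a, b = 8, 1.0, 2.0
h = (b - a) / n
nodes = [a + k * h for k in range(n + 1)]
values = [math.log2(v) for v in nodes]

xs = [a + i * (b - a) / 1000 for i in range(1001)]
worst = max(abs(piecewise_linear(nodes, values, x) - math.log2(x)) for x in xs)
bound = h ** 2 / 8 * (1 / (a ** 2 * math.log(2)))   # max|f''| = 1/(x^2 ln 2), at x = 1
print(worst, bound)   # the observed error stays below the bound
```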
2.5 Application: The Trapezoid Rule

Define the integral of interest as I(f):

  I(f) = ∫_a^b f(x) dx.

The trapezoid rule approximates it by the area of a single trapezoid:

  T_1(f) = ((b - a)/2) (f(a) + f(b)).
Error analysis

Apply the Integral Mean Value Theorem; thus

  I(f) - T_1(f) = -((b - a)^3 / 12) f''(ξ).
The n-subinterval trapezoid rule

  T_n(f) = (h/2) (f(a) + 2 Σ_{k=1}^{n-1} f(x_k) + f(b)),   h = (b - a)/n,   x_k = a + kh,

with error

  I(f) - T_n(f) = -((b - a) h^2 / 12) f''(ξ).
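A sketch of T_n(f), with the integral ∫_0^1 e^x dx = e − 1 as an illustrative test case:

```python
import math

# The n-subinterval trapezoid rule T_n(f).
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

exact = math.e - 1    # I(f) for f(x) = exp(x) on [0, 1]
trap_errs = [abs(trapezoid(math.exp, 0.0, 1.0, n) - exact) for n in (4, 8, 16)]
print(trap_errs)      # doubling n (halving h) divides the error by ~4
```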
This theorem tells us:
- The numerical approximation will converge to the exact value.
- How fast this convergence occurs: the error is proportional to h^2.
Example 2.5

Example 2.6
The stability of the trapezoid rule

We conclude that the trapezoid rule is a stable numerical method. In fact, almost all methods for numerically approximating integrals are stable.

(The double prime on the summation symbol means that the first and last terms are multiplied by 1/2.)
2.6 Solution of Tridiagonal Linear Systems

If A is tridiagonal, then a_{ij} = 0 whenever |i - j| > 1; only the subdiagonal, the main diagonal, and the superdiagonal can be nonzero. For example:
Make a notational simplification: store only the three nonzero diagonals, say l (sub), d (main), and u (super), with l_1 = u_n = 0. Then the augmented matrix corresponding to the system is formed from these three diagonals together with the right-hand side.
Gaussian elimination

The elimination step: for i = 2, ..., n,

  d_i ← d_i - (l_i / d_{i-1}) u_{i-1},   b_i ← b_i - (l_i / d_{i-1}) b_{i-1},

where b is the right-hand side. The backward solution step:

  x_n = b_n / d_n,   x_i = (b_i - u_i x_{i+1}) / d_i,   i = n-1, ..., 1.
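A sketch of this specialized elimination (often called the Thomas algorithm); the list-based storage and 0-based indexing are conventions of this example, not the book's:

```python
# Gaussian elimination specialized to a tridiagonal system.
# l, d, u are the sub-, main, and superdiagonals; l[0] and u[-1] are unused.
# No pivoting: this assumes no zero pivot arises (e.g. diagonal dominance).
def tridiag_solve(l, d, u, b):
    n = len(d)
    d, b = d[:], b[:]                 # work on copies
    for i in range(1, n):             # elimination step
        m = l[i] / d[i - 1]
        d[i] -= m * u[i - 1]
        b[i] -= m * b[i - 1]
    x = [0.0] * n                     # backward solution step
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - u[i] * x[i + 1]) / d[i]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  →  x = [1, 2, 3]
sol = tridiag_solve([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8])
print(sol)
```

The whole solve is O(n) in both time and storage, versus O(n^3) for dense Gaussian elimination.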
Example 2.7

After a single pass through the first loop, we cannot continue the process, for we would have to divide by zero in the next step. However, a solution of the system does indeed exist.
Diagonal dominance for tridiagonal matrices

A tridiagonal matrix is diagonally dominant if |d_i| > |l_i| + |u_i| for each i; diagonal dominance guarantees that the elimination above never encounters a zero pivot. For example:
2.7 Application: Simple Two-point Boundary Value Problems

Two-point boundary value problem (BVP):
Use Taylor expansions similar to (2.2) and (2.3) (just take more terms) to derive an approximation to the second derivative, by adding them. Then we get

  u''(x) ≈ (u(x - h) - 2 u(x) + u(x + h)) / h^2,

with error proportional to h^2. Applying this at each interior grid point yields a tridiagonal system of linear equations.
In matrix-vector form, the system is A u = f with A tridiagonal. It is diagonally dominant, so we can apply the algorithm developed in the previous section to produce solutions.
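The whole pipeline can be sketched end to end. This example uses the model problem -u'' = f on (0, 1) with u(0) = u(1) = 0 (an assumption for illustration; the section's general BVP may include more terms), discretized with the centered second difference and solved with the tridiagonal elimination from Section 2.6:

```python
import math

# Finite differences for -u'' = f on (0,1), u(0) = u(1) = 0.
def bvp_solve(f, n):
    h = 1.0 / n
    m = n - 1  # interior unknowns u_1 .. u_{n-1}
    # (1/h^2) * (-u_{i-1} + 2 u_i - u_{i+1}) = f(x_i): a tridiagonal system
    l = [-1.0 / h**2] * m
    d = [2.0 / h**2] * m
    u = [-1.0 / h**2] * m
    b = [f(i * h) for i in range(1, n)]
    for i in range(1, m):                  # elimination (no pivoting needed:
        fac = l[i] / d[i - 1]              # the matrix is diagonally dominant)
        d[i] -= fac * u[i - 1]
        b[i] -= fac * b[i - 1]
    x = [0.0] * m                          # back substitution
    x[-1] = b[-1] / d[-1]
    for i in range(m - 2, -1, -1):
        x[i] = (b[i] - u[i] * x[i + 1]) / d[i]
    return x

# Test case with known solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x)
n = 16
uh = bvp_solve(lambda x: math.pi ** 2 * math.sin(math.pi * x), n)
err = max(abs(uh[i] - math.sin(math.pi * (i + 1) / n)) for i in range(n - 1))
print(err)   # the max nodal error is O(h^2)
```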
Example 2.8