Levenberg-Marquardt (LM) Algorithm - Aghazade
TRANSCRIPT
Introduction
Linear Inverse Problems
• The relation between unknown model parameters and observed data is linear.
Nonlinear Inverse Problems
• The relationship between the model parameters and the data can be nonlinear.
• Nonlinearity itself is a source of ill-posedness.
Nonlinear least squares problems arise when the model function is not linear in the parameters. Nonlinear least squares methods iteratively improve the parameter values in order to reduce the sum of squared errors between the function and the measured data points.
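As a minimal sketch of this sum-of-squares misfit, the following uses a hypothetical two-parameter exponential-decay model (the model and data are invented for illustration, not taken from the slides):

```python
import numpy as np

def model(m, t):
    # Hypothetical nonlinear model: d = m0 * exp(-m1 * t)
    return m[0] * np.exp(-m[1] * t)

def misfit(m, t, d_obs):
    # Sum of squared residuals between predicted and observed data
    r = model(m, t) - d_obs
    return 0.5 * np.dot(r, r)

t = np.linspace(0.0, 4.0, 20)
d_obs = model(np.array([2.0, 0.5]), t)          # noise-free synthetic data
print(misfit(np.array([2.0, 0.5]), t, d_obs))   # 0.0 at the true parameters
```

The iterative methods below all try to drive this misfit toward its minimum by updating the parameter vector m.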
One of the main difficulties with nonlinear problems is the local-minima trap.
The initial model is important for guaranteed convergence.
Local minima trap
Jacobian matrix: represents the local sensitivity of the calculated data to variations in the parameters.
Solving nonlinear problems leads to an iterative procedure.
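A simple way to see this sensitivity in code is a finite-difference Jacobian: each column records how the calculated data change when one parameter is perturbed. The model below is a hypothetical example, assumed only for illustration:

```python
import numpy as np

def model(m, t):
    # Hypothetical two-parameter model used only for illustration
    return m[0] * np.exp(-m[1] * t)

def jacobian_fd(model, m, t, eps=1e-7):
    """Finite-difference Jacobian: J[i, j] = d(predicted datum i) / d(parameter j).
    Each column is the local sensitivity of the data to one parameter."""
    d0 = model(m, t)
    J = np.zeros((d0.size, m.size))
    for j in range(m.size):
        m_pert = m.copy()
        m_pert[j] += eps
        J[:, j] = (model(m_pert, t) - d0) / eps
    return J

t = np.linspace(0.0, 4.0, 5)
J = jacobian_fd(model, np.array([2.0, 0.5]), t)
print(J.shape)  # (5, 2): one row per datum, one column per parameter
```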
Taylor series approximation
In local optimization, we look for a solution that is optimal (either maximal or minimal) within a neighboring set of candidate solutions.
Newton root-finding algorithm
An iterative process is needed to resolve the unknowns.
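The Newton root-finding iteration can be sketched as follows; the example function is an assumption chosen for illustration:

```python
# Newton's root-finding iteration: x_{k+1} = x_k - f(x_k) / f'(x_k)
def newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # stop once the update is negligible
            break
    return x

# Example: root of f(x) = x^2 - 2, i.e. sqrt(2)
root = newton_root(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)
```

Applying the same idea to the gradient of the misfit (finding where it is zero) is what leads to the Gauss-Newton algorithm.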
Gauss-Newton Algorithm
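A hedged sketch of the Gauss-Newton iteration, which solves the undamped normal equations (J^T J) dm = -J^T r at each step; the test model here is a hypothetical exponential fit, not from the slides:

```python
import numpy as np

def gauss_newton(model, jac, m0, t, d_obs, n_iter=20):
    """Gauss-Newton iteration: solve (J^T J) dm = -J^T r at each step.
    No damping: a good convergence rate near the solution, but J^T J may
    be singular and convergence is not guaranteed far from it."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = model(m, t) - d_obs
        J = jac(m, t)
        dm = np.linalg.solve(J.T @ J, -J.T @ r)
        m += dm
    return m

# Hypothetical model and its analytic Jacobian (invented for illustration)
def model(m, t):
    return m[0] * np.exp(-m[1] * t)

def jac(m, t):
    return np.column_stack([np.exp(-m[1] * t),
                            -m[0] * t * np.exp(-m[1] * t)])

t = np.linspace(0.0, 4.0, 20)
d_obs = model(np.array([2.0, 0.5]), t)
m_hat = gauss_newton(model, jac, np.array([1.5, 0.3]), t, d_obs)
print(m_hat)  # close to the true parameters [2.0, 0.5]
```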
Challenges with nonlinear LS problems
Gradient Descent
• Guaranteed convergence
• Slow convergence rate
Gauss-Newton
• No guarantee of convergence
• Good convergence rate
Guaranteed convergence & convergence rate
Another problem: matrix singularity may occur.
Levenberg-Marquardt Algorithm
By introducing a new damping parameter into the Gauss-Newton algorithm, we can address the convergence guarantee, the convergence rate, and matrix singularity. LM:
The hard part of the Levenberg–Marquardt method is determining the right value of λ. The general idea is to use small values of λ in situations where the Gauss–Newton method is working well, but to switch to larger values of λ when the Gauss–Newton method is not making progress. A very simple approach is to start with a small value of λ, and then adjust it in every iteration. If the Levenberg–Marquardt step leads to a reduction in f(m), then decrease λ by a constant factor (say 2). If the Levenberg–Marquardt step does not lead to a reduction in f(m), then do not take the step. Instead, increase λ by a constant factor (say 2), and try again. Repeat this process until a step is found which actually does decrease the value of f(m). Aster (2005)
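The λ-adjustment strategy described in this passage can be sketched as follows. The LM step solves the damped normal equations (J^T J + λI) dm = -J^T r; the test problem is a hypothetical exponential fit, invented for illustration (not the earthquake example):

```python
import numpy as np

def levenberg_marquardt(model, jac, m0, t, d_obs, lam=1e-3, n_iter=50):
    """Levenberg-Marquardt: solve (J^T J + lam*I) dm = -J^T r.
    If a step reduces the misfit, accept it and halve lam; otherwise
    reject it, double lam, and try again (the strategy quoted above)."""
    m = m0.astype(float).copy()
    r = model(m, t) - d_obs
    f = 0.5 * (r @ r)
    for _ in range(n_iter):
        J = jac(m, t)
        while True:
            A = J.T @ J + lam * np.eye(m.size)   # damping fixes singularity
            dm = np.linalg.solve(A, -J.T @ r)
            r_try = model(m + dm, t) - d_obs
            f_try = 0.5 * (r_try @ r_try)
            if f_try < f:                 # step accepted: less damping
                m, r, f = m + dm, r_try, f_try
                lam /= 2.0
                break
            lam *= 2.0                    # step rejected: more damping
            if lam > 1e12:                # safeguard: no descent step found
                return m
    return m

# Hypothetical model and Jacobian (assumptions for illustration only)
def model(m, t):
    return m[0] * np.exp(-m[1] * t)

def jac(m, t):
    return np.column_stack([np.exp(-m[1] * t),
                            -m[0] * t * np.exp(-m[1] * t)])

t = np.linspace(0.0, 4.0, 20)
d_obs = model(np.array([2.0, 0.5]), t)
m_hat = levenberg_marquardt(model, jac, np.array([1.0, 1.0]), t, d_obs)
print(m_hat)  # converges toward the true parameters [2.0, 0.5]
```

Note how small λ makes the step behave like Gauss-Newton (fast near the solution), while large λ makes it behave like a short gradient-descent step (safe far from it).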
Gradient Descent Algorithm
Gauss-Newton Algorithm
Levenberg-Marquardt Algorithm
Example: Earthquake location (Modern Global Seismology, page 231).
Stations Location
Initial model
X0 = 21
Y0 = 21
Z0 = 12
t0 = 30
Gradient Descent Algorithm
Results
Iteration  Method               X        Y        Z        t0
1          Newton               29.2683  30.7036  21.0631  34.4796
1          Levenberg-Marquardt  29.2612  30.7085  21.1182  34.4767
2          Newton               30.0082  30.1598  11.1419  35.0240
2          Levenberg-Marquardt  30.0007  30.1781  11.2455  35.0169
3          Newton               29.9522  30.1925   9.1039  34.9608
3          Levenberg-Marquardt  29.9405  30.2195   9.3350  34.9501
4          Newton               29.9532  30.1922   8.9268  34.9596
4          Levenberg-Marquardt  29.9395  30.2237   9.2259  34.9473
5          Newton               29.9533  30.1921   8.9249  34.9596
5          Levenberg-Marquardt  29.9394  30.2240   9.2280  34.9472
Initial Model: X0 = 21, Y0 = 21, Z0 = 12, t0 = 30
True Model:    X0 = 30, Y0 = 30, Z0 = 8,  t0 = 35
Good Luck