Control-oriented modeling and system identification



Modeling and estimation for control
E. Witrant

Outline

Emmanuel [email protected]

April 22, 2014.


Course goal

To teach systematic methods for building mathematical models of dynamical systems based on physical principles and measured data.

Main objectives:
• build mathematical models of technical systems from first principles;
• use the most powerful tools for modeling and simulation;
• construct mathematical models from measured data.


Class overview

1 Introduction to modeling
Physical Modeling
2 Principles of physical modeling
3 Some basic relationships in physics
4 Bond graphs
Simulation
5 Computer-aided modeling
6 Modeling and simulation in Scilab
System Identification
7 Experiment design for system identification
8 Non-parametric identification
9 Parameter estimation in linear models
10 System identification principles and model validation
11 Nonlinear black-box identification
Towards process supervision
12 Recursive estimation methods


Grading policy

Based on homeworks, each due on the next day at the beginning of the first class.


Material

• Lecture notes from 2E1282 Modeling of Dynamical Systems, Automatic Control, School of Electrical Engineering, KTH, Sweden.
• L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.
• S. Campbell, J.-P. Chancelier and R. Nikoukhah, Modeling and Simulation in Scilab/Scicos, Springer, 2005.
• S. Stramigioli, Modeling and IPC Control of Interactive Mechanical Systems: A Coordinate-free Approach, Springer, LNCIS 266, 2001.
• L. Ljung, System Identification: Theory for the User, 2nd Edition, Information and System Sciences, Upper Saddle River, NJ: PTR Prentice Hall, 1999.


Class website

• Go to: http://www.gipsa-lab.grenoble-inp.fr/~e.witrant/classes/14_Modeling_Lessons_KUL.pdf
• or Google "witrant", then go to "Teaching", "International" and "Control-oriented modeling and system identification".


Modeling and estimation for control
Lecture 1: Introduction to modeling
Emmanuel Witrant, [email protected]

Outline:
• Systems and Models: what is a model? how to build models? how to verify models? mathematical models
• Examples: inverted pendulum, Tore Supra
• Conclusions
• Models for systems and signals: differential equations, state-space models; stationarity, stability and linearization
• Homework


Systems and Models

Systems and experiments

• System: object or collection of objects we want to study.
• Experiment: investigate the system properties / verify theoretical results, BUT the experiment may be
  • too expensive, e.g. one day of operation on Tore Supra;
  • too dangerous, e.g. a nuclear plant;
  • impossible because the system does not exist yet, e.g. wings in airplane design.

⇒ Need for models


What is a model?

• Tool to answer questions about the process without experiment / action-reaction.
• Different classes:
  1 Mental models: intuition and experience (e.g. car driving, an industrial process in the operator's mind);
  2 Verbal models: behavior in different conditions described by words (e.g. if . . . then . . . );
  3 Physical models: try to imitate the system (e.g. house esthetics or boat hydrodynamics);
  4 Mathematical models: relationships between observed quantities described as mathematical relationships (e.g. most laws in nature). Generally described by differential algebraic equations:

ẋ(t) = f(x(t), u(t), d(t))
0 = g(x(t), u(t), d(t))


Models and simulation

• Models → used to calculate or decide how the system would have reacted (analytically);
• Simulation: numerical experiment, an inexpensive and safe way to experiment with the system;
• the value of a simulation depends completely on the model quality.

How to build models?

• Two sources of knowledge:
  • collected experience: laws of nature, generations of scientists, literature;
  • the system itself: observation.
• Two areas of knowledge:
  • domain of expertise: understanding the application and mastering the relevant facts → mathematical model;
  • knowledge engineering: turning this expertise into a usable and explicit model → knowledge-based model.


• Two different principles for model construction:
  • physical modeling: break the system into subsystems described by laws of nature or generally recognized relationships;
  • identification: use observations to fit the model properties to those of the system (often used as a complement).

How to verify models?

• Need for confidence in the results and predictions, obtained by verifying or validating the model: model vs. system.
• Domain of validity: qualitative statements (most verbal models) or quantitative predictions. Limited for all models.

⇒ Hazardous to use a model outside its validated area.
⇒ Models and simulations can never replace observations and experiments, but they constitute an important and useful complement.


Different types of mathematical models

• Deterministic vs. stochastic: exact relationships vs. stochastic variables/processes;
• Static vs. dynamic: direct, instantaneous link (algebraic relationships) vs. dependence also on earlier applied signals (differential/difference equations);
• Continuous vs. discrete time: differential equations vs. sampled signals;
• Distributed vs. lumped: events dispersed over space (distributed-parameter model → partial differential equation, PDE) vs. a finite number of changing variables (ordinary differential equation, ODE);
• Change-oriented vs. discrete-event driven: continuous changes (Newtonian paradigm) vs. (random) event-based influences (e.g. manufacturing, buffers . . . )


Examples of Models

Networked control of the inverted pendulum [Springer’07]

• Objective: test control laws for control over networks.


• Physical model and abstraction: [figure: cart-pendulum system with masses m1 and m2, lengths l0 and lc, input u(t), angle θ(t) and position z(t)]

• Mathematical model, from physics:

[ m1      m1 l0     ] [ z̈ ]   [ 0          −m1 z θ̇ ] [ ż ]   [ −m1 sin θ                           ]       [ 1 ]
[ m1 l0   J + m1 z² ] [ θ̈ ] + [ 2 m1 z θ̇   0       ] [ θ̇ ] + [ −(m1 l0 + m2 lc) sin θ − m1 z cos θ ] g  =  [ 0 ] u,


• Input/output representation (x ≜ [z, ż, θ, θ̇]⊤):

ẋ1 = x2,
ẋ2 = u/m1 − l0 ẋ4 + x1 x4² + g sin(x3),
ẋ3 = x4,
ẋ4 = 1/(J0(x1) − m1 l0²) [ g (m2 lc sin(x3) + m1 x1 cos(x3)) − m1 (l0 x4 + 2 x2) x1 x4 − l0 u ],
J0(x1) = J + m1 x1²,
y = [x1, x2]⊤


• Fluid-flow model for the network [Misra et al. 2000, Hollot and Chait 2001]: TCP with proportional active queue management (AQM) sets the window size W and queue length q variations as

dWi(t)/dt = 1/Ri(t) − (Wi(t)/2) · Wi(t − Ri(t))/Ri(t − Ri(t)) · pi(t),
dq(t)/dt = −Cr + ∑_{i=1}^{N} Wi(t)/Ri(t),   q(t0) = q0,

where Ri(t) = q(t)/Cr + Tpi is the round-trip time, Cr the link capacity, pi(t) = Kp q(t − Ri(t)) the packet discard function and Tpi the constant propagation delay. The average time-delay is τi = Ri(t)/2.


E.g. network with 2 TCP flows:

dW1,2(t)/dt = 1/R1,2(t) − (W1,2(t)/2) · W1,2(t − R1,2(t))/R1,2(t − R1,2(t)) · p1,2(t)
dq(t)/dt = −300 + ∑_{i=1}^{2} Wi(t)/Ri(t),   q(0) = 5
τ(t) = R1(t)/2

[Figure: behavior of the network internal states, average queue length q(t) and window sizes W1,2(t) (packets) vs. time (s).]
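The two-flow example above can be integrated with a crude fixed-step scheme. A minimal sketch: only Cr = 300 packets/s and q(0) = 5 come from the example; Kp, the propagation delays Tp and the initial windows are assumed illustrative values, and the delayed terms are read from stored histories rather than from a proper delay-differential solver.

```python
import math

# Cr = 300 and q(0) = 5 are from the example above; Kp, Tp and the initial
# window sizes are assumed, purely illustrative values.
Cr, Kp = 300.0, 5e-4
Tp = [0.2, 0.4]          # assumed propagation delays for the two flows
dt, T = 1e-3, 10.0
n = int(T / dt)

q = [5.0]                # queue-length history
W = [[1.0], [1.0]]       # window-size history for each flow

def delayed(hist, k, delay):
    """Value of a stored signal 'delay' seconds in the past (crude lookup)."""
    return hist[max(0, k - int(delay / dt))]

for k in range(n):
    R = [q[k] / Cr + Tp[i] for i in range(2)]          # round-trip times
    dq = -Cr + sum(W[i][k] / R[i] for i in range(2))
    for i in range(2):
        qd = delayed(q, k, R[i])                       # q(t - Ri)
        Rd = qd / Cr + Tp[i]                           # Ri(t - Ri)
        p = Kp * qd                                    # discard probability
        dW = 1.0 / R[i] - W[i][k] / 2.0 * delayed(W[i], k, R[i]) / Rd * p
        W[i].append(W[i][k] + dt * dW)
    q.append(max(0.0, q[k] + dt * dq))                 # queue stays nonnegative
```

A dedicated delay-differential solver with interpolation would be more accurate; this loop only illustrates how the coupled window/queue dynamics are stepped forward.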


• Compare different control laws in simulation:

[Figure: θ(t) (rad) vs. time (s), simulations with measurement noise comparing a predictor with variable horizon, a predictor with fixed horizon, state feedback and a buffer strategy.]


On the inverted pendulum experiment:

[Figures: (a) predictive control with fixed horizon; (b) predictive control with time-varying horizon.]


Thermonuclear fusion with the Tore Supra tokamak

Physical model:

∂ψ/∂t = η∥(x, t) [ 1/(µ0 a²) ∂²ψ/∂x² + 1/(µ0 a² x) ∂ψ/∂x + R0 jbs(x, t) + R0 jni(x, t) ]

jφ(x, t) = −1/(µ0 R0 a² x) ∂/∂x [ x ∂ψ/∂x ]

Mathematical model [PPCF 2007], abstraction and experimental results.
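A distributed-parameter model like the ψ diffusion above is discretized in space before simulation. A minimal explicit finite-difference sketch of the diffusion part only: the lumped coefficient standing in for η∥/(µ0 a²) is set to 1, the source terms jbs and jni are dropped, and the grid, boundary conditions and initial profile are all assumed for illustration.

```python
import math

# Explicit scheme for  dpsi/dt = c * (d2psi/dx2 + (1/x) dpsi/dx)  on (0, 1);
# c lumps eta_par/(mu0 a^2) and is set to 1 here (assumed, not a real value).
nx = 21
dx, dt, c = 1.0 / (nx - 1), 1e-5, 1.0
psi = [math.sin(math.pi * j * dx) for j in range(nx)]  # arbitrary initial profile

for _ in range(2000):
    new = psi[:]
    for j in range(1, nx - 1):
        x = j * dx
        d2 = (psi[j + 1] - 2.0 * psi[j] + psi[j - 1]) / dx**2
        d1 = (psi[j + 1] - psi[j - 1]) / (2.0 * dx)
        new[j] = psi[j] + dt * c * (d2 + d1 / x)
    new[0], new[-1] = new[1], 0.0    # symmetry at x = 0, fixed edge value
    psi = new
```

The explicit step is only stable for dt of order dx²/(2c) or smaller; implicit schemes are usually preferred for this kind of resistive-diffusion equation.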


Identification of temperature profiles [CDC 2011]

• Parameter-dependent first-order dynamics:

τth(t) = e^(ϑt0) · Ip^(ϑt1) · Bφ0^(ϑt2) · ne^(ϑt3) · Ptot^(ϑt4)
dW/dt = Ptot − W/τth,   W(0) = Ptot(0) τth(0)
Te0(t) = A W

→ the "free" parameters ϑi are determined from a sufficiently rich set of experimental data.


• Model validation:

[Figure: central temperature (keV) and power inputs (MW): Te0(t), the model estimate of Te0(t), the ITERL-96P(th) scaling, Plh and Picrf vs. time (s); estimation error |η(x, t) − η̂(x, t)| over x and time (s). Caption: Comparison of the model with a shot not included in the database.]


Conclusion: all models are approximate!

• A model captures only some aspects of a system:
  • important to know which aspects are modelled and which are not;
  • make sure that the model is valid for the intended purpose;
  • "If the map does not agree with reality, trust reality".
• All-encompassing models are often a bad idea:
  • large and complex: hard to gain insight;
  • cumbersome and slow to manipulate.
• Good models are simple, yet capture the essentials!


Models for Systems and Signals

Types of models

• System models (differential / difference equations) and signal models (external signals / disturbances).
• Block diagram models: logical decomposition of the functions and mutual influences (interactions, information flows); not unique. Related to verbal models.
• Simulation models: related to programming languages.


Input, output and disturbance signals

• Constants (system or design parameters) vs. variables or signals;
• Outputs: signals whose behavior is our primary interest, typically denoted y1(t), y2(t), . . . , yp(t).
• External signals: signals and variables that influence other variables in the system but are not influenced by the system:
  • input or control signals: we can use them to influence the system, u1(t), u2(t), . . . , um(t);
  • disturbances: we cannot influence or choose them, w1(t), w2(t), . . . , wr(t).
• Internal variables: all other model variables.


Differential equations

• Either directly relate inputs u to outputs y:

g(y⁽ⁿ⁾(t), y⁽ⁿ⁻¹⁾(t), . . . , y(t), u⁽ᵐ⁾(t), u⁽ᵐ⁻¹⁾(t), . . . , u(t)) = 0

where y⁽ᵏ⁾(t) = dᵏy(t)/dtᵏ and g(·) is an arbitrary, vector-valued, nonlinear function.
• or introduce a number of internal variables related by first-order differential equations

ẋ(t) = f(x(t), u(t))

where x and u are vectors and f is a vector-valued, possibly nonlinear, function, i.e.

ẋ1(t) = f1(x1(t), . . . , xn(t), u1(t), . . . , um(t))
ẋ2(t) = f2(x1(t), . . . , xn(t), u1(t), . . . , um(t))
⋮
ẋn(t) = fn(x1(t), . . . , xn(t), u1(t), . . . , um(t))


The outputs are then calculated from xi(t) and ui(t) via

y(t) = h(x(t), u(t))

• Corresponding discrete-time equations:

x(t + 1) = f(x(t), u(t))
y(t) = h(x(t), u(t))
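The discrete-time recursion above maps directly onto a simulation loop; a generic sketch (the first-order low-pass example is an assumed illustration, not from the slides):

```python
def simulate(f, h, x0, inputs):
    """Iterate x(t+1) = f(x(t), u(t)) and record y(t) = h(x(t), u(t))."""
    x, ys = x0, []
    for u in inputs:
        ys.append(h(x, u))   # output at time t, before the state update
        x = f(x, u)
    return ys

# Assumed example: first-order low-pass x(t+1) = 0.5 x(t) + 0.5 u(t), y = x.
f = lambda x, u: 0.5 * x + 0.5 * u
h = lambda x, u: x
ys = simulate(f, h, 0.0, [1.0] * 10)   # step response converging toward 1
```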


The concept of state and state-space models

Definitions:
• State at t0: with this information and u(t), t ≥ t0, we can compute y(t), t ≥ t0.
• State: information that has to be stored and updated during the simulation in order to calculate the output.
• State-space model (continuous time):

ẋ(t) = f(x(t), u(t))
y(t) = h(x(t), u(t))

u(t): input, an m-dimensional column vector
y(t): output, a p-dimensional column vector
x(t): state, an n-dimensional column vector

→ nth-order model; unique solution if f(x, u) is continuously differentiable, u(t) is piecewise continuous and the initial value x(t0) = x0 exists.


• State-space model (discrete time):

x(tk+1) = f(x(tk), u(tk)), k = 0, 1, 2, . . .
y(tk) = h(x(tk), u(tk))

where u(tk) ∈ ℝᵐ, y(tk) ∈ ℝᵖ, x(tk) ∈ ℝⁿ.
→ nth-order model; unique solution if the initial value x(t0) = x0 exists.

Linear models:
• if f(x, u) and h(x, u) are linear functions of x and u:

f(x, u) = Ax + Bu
h(x, u) = Cx + Du

with A: n × n, B: n × m, C: p × n and D: p × m.
• if the matrices are independent of time, the system is linear and time-invariant.


Stationary solutions, static relationships and linearization

Stationary points: given a system

ẋ(t) = f(x(t), u(t))
y(t) = h(x(t), u(t))

a solution (x0, u0) such that 0 = f(x0, u0) is called a stationary point (singular point or equilibrium).
At a stationary point, the system is at rest: x(0) = x0, u(t) = u0 for t ≥ 0 ⇒ x(t) = x0 for all t ≥ 0.

Stability: suppose that x(t0) = x0 gives a stationary solution; what happens for x(t0) = x1? The system is
• asymptotically stable if any solution x(t) starting close enough to x0 converges to x0 as t → ∞;
• globally asymptotically stable if all solutions x(t) with u(t) = u0 converge to x0 as t → ∞.


Static relationships:
• for an asymptotically stable stationary point (x0, u0), the output converges to y0 = h(x0, u0). Since x0 depends implicitly on u0,

y0 = h(x(u0), u0) = g(u0)

Here, g(u0) describes the stationary relation between u0 and y0.
• Consider a small change in the input level from u0 to u1 = u0 + δu0; the stationary output will be

y1 = g(u1) = g(u0 + δu0) ≈ g(u0) + g′(u0)δu0 = y0 + g′(u0)δu0.

Here g′(u0): p × m describes how the stationary output varies locally with the input → static gain.
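The stationary relation g(u0) and the static gain g′(u0) can be computed numerically; a scalar sketch with an assumed system ẋ = −x³ + u, y = x (not from the slides), for which g(u0) = u0^(1/3) and g′(1) = 1/3:

```python
def equilibrium(f, u0, lo=-10.0, hi=10.0, tol=1e-12):
    """Bisection for the stationary x0 solving f(x0, u0) = 0 (f decreasing in x)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid, u0) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed scalar example: x' = -x**3 + u, y = x, so g(u0) = u0**(1/3).
f = lambda x, u: -x**3 + u
u0, du = 1.0, 1e-4
y0 = equilibrium(f, u0)                              # stationary output g(u0)
static_gain = (equilibrium(f, u0 + du) - y0) / du    # finite-difference g'(u0)
```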


Linearization:
• system behavior in the neighborhood of a stationary solution (x0, u0);
• consider small deviations ∆x(t) = x(t) − x0, ∆u(t) = u(t) − u0 and ∆y(t) = y(t) − y0; then

∆ẋ = A∆x + B∆u
∆y = C∆x + D∆u

where A, B, C and D are the partial-derivative matrices of f(x(t), u(t)) and h(x(t), u(t)), i.e.

A = [ ∂f1/∂x1 (x0, u0)  . . .  ∂f1/∂xn (x0, u0) ]
    [        ⋮                         ⋮        ]
    [ ∂fn/∂x1 (x0, u0)  . . .  ∂fn/∂xn (x0, u0) ]
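When symbolic differentiation is inconvenient, A (and similarly B, C, D) can be approximated by finite differences; a minimal sketch (the pendulum-like example f is an assumed illustration, not from the slides):

```python
import math

def jacobian_x(f, x0, u0, eps=1e-6):
    """Finite-difference approximation of A = df/dx evaluated at (x0, u0)."""
    n = len(x0)
    f0 = f(x0, u0)
    cols = []
    for j in range(n):
        xp = list(x0)
        xp[j] += eps                                   # perturb one state
        cols.append([(fi - f0i) / eps for fi, f0i in zip(f(xp, u0), f0)])
    # Rearrange so that A[i][j] = dfi/dxj.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Assumed example: f(x, u) = [x2, -sin(x1) + u], stationary at the origin.
f = lambda x, u: [x[1], -math.sin(x[0]) + u]
A = jacobian_x(f, [0.0, 0.0], 0.0)   # expect approximately [[0, 1], [-1, 0]]
```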


• important and useful tool, but
  • only for local properties;
  • quantitative accuracy difficult to estimate → complement with simulations of the original nonlinear system.


Example (from lecture notes by K. J. Åström, LTH)

Model of bicycle dynamics:

d²θ/dt² = (mgl/Jp) sin θ + (m l V0²)/(b Jp) cos θ ( tan β + a/(V0 cos² β) dβ/dt )

where θ is the vertical tilt and β is the front wheel angle (control).
⇒ Hard to gain insight from the nonlinear model . . .

Linearized dynamics (around θ = β = β̇ = 0):

d²θ/dt² = (mgl/Jp) θ + (m l V0²)/(b Jp) ( β + (a/V0) dβ/dt )

has transfer function

G(s) = (m l V0²)/(b Jp) × (1 + (a/V0) s)/(s² − mgl/Jp).


Gain proportional to V0²:
• more control authority at high speeds.

Unstable pole at √(mgl/Jp) ≈ √(g/l):
• slower when l is large;
• easier to ride a full-size bike than a children's bike.
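These observations can be checked numerically; a sketch with assumed rider/bike parameters (none are given in the slides) and the point-mass approximation Jp ≈ m l², under which the unstable pole reduces exactly to √(g/l):

```python
import math

# Assumed illustrative parameters: mass m (kg), c.g. height l (m),
# trail a (m), wheelbase b (m); Jp uses the point-mass value m * l**2.
m, g, l, a, b = 80.0, 9.81, 1.0, 0.3, 1.0
Jp = m * l**2

pole = math.sqrt(m * g * l / Jp)        # equals sqrt(g/l) for Jp = m l**2
for V0 in (1.0, 2.0, 4.0):
    gain = m * l * V0**2 / (b * Jp)     # transfer-function gain, grows as V0**2
    print(V0, pole, gain)
```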


Homework 1

Consider the inverted pendulum dynamics:

ẋ1 = x2,
ẋ2 = u/m1 − l0 ẋ4 + x1 x4² + g sin(x3),
ẋ3 = x4,
ẋ4 = 1/(J0(x1) − m1 l0²) [ g (m2 lc sin(x3) + m1 x1 cos(x3)) − m1 (l0 x4 + 2 x2) x1 x4 − l0 u ],
J0(x1) = J + m1 x1²,
y = [x1, x2]⊤

where

Parameter   Value        Meaning
m1          0.213 kg     Mass of the horizontal rod
m2          1.785 kg     Mass of the vertical rod
l0          0.33 m       Length of the vertical rod
lc          −0.029 m     Vertical rod c.g. position
g           9.807 m/s²   Gravity acceleration
J           0.055 N·m²   Nominal moment of inertia
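A quick way to explore these dynamics before linearizing them is to integrate them numerically; a minimal forward-Euler sketch using the parameter values from the table (the integrator, step size and initial condition are assumed, for illustration only):

```python
import math

# Parameter values from the table above.
m1, m2, l0, lc, g, J = 0.213, 1.785, 0.33, -0.029, 9.807, 0.055

def f(x, u):
    """Right-hand side of the state equations (dx4 computed first, then dx2)."""
    x1, x2, x3, x4 = x
    J0 = J + m1 * x1**2
    dx4 = (g * (m2 * lc * math.sin(x3) + m1 * x1 * math.cos(x3))
           - m1 * (l0 * x4 + 2.0 * x2) * x1 * x4 - l0 * u) / (J0 - m1 * l0**2)
    dx2 = u / m1 - l0 * dx4 + x1 * x4**2 + g * math.sin(x3)
    return [x2, dx2, x4, dx4]

def simulate(x0, u, dt=1e-4, T=0.1):
    """Fixed-step forward Euler (for illustration; use a proper ODE solver)."""
    x = list(x0)
    for _ in range(int(T / dt)):
        x = [xi + dt * dxi for xi, dxi in zip(x, f(x, u))]
    return x

# x = 0 with u = 0 is a stationary point: every component of f vanishes there.
xT = simulate([0.0, 0.0, 0.05, 0.0], 0.0)   # response to a small initial tilt
```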


Analyze the system dynamics by

• linearizing the proposed model;

• writing the transfer function;

• interpreting the resulting equations.


References

1 L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.
2 V. Misra, W.-B. Gong and D. Towsley, Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED, SIGCOMM 2000.
3 C. Hollot and Y. Chait, Nonlinear stability analysis for a class of TCP/AQM networks, CDC 2001.
4 E. Witrant, D. Georges, C. Canudas de Wit and M. Alamir, On the use of State Predictors in Networked Control Systems, LNCS, Springer, 2007.
5 E. Witrant, E. Joffrin, S. Bremond, G. Giruzzi, D. Mazon, O. Barana and P. Moreau, A control-oriented model of the current profile in Tokamak plasma, IOP PPCF 2007.
6 E. Witrant and S. Bremond, Shape Identification for Distributed Parameter Systems and Temperature Profiles in Tokamaks, CDC 2011.


Modeling and Estimation for Control: Physical Modeling
Lecture 2: Principles of physical modeling
Emmanuel Witrant, [email protected]
April 22, 2014


The Three Phases of Modeling

"Successful modeling is based as much on a good feeling for the problem and common sense as on the formal aspects that can be taught."

1. Structuring the problem
• divide the system into subsystems, determine causes and effects, important variables and interactions;
• what is the intended use of the model?
• results in a block diagram or similar description;
• needs understanding and intuition;
• this is where complexity and degree of approximation are determined.


2. Setting up the Basic Equations
• "fill in" the blocks using the laws of nature and basic physical equations;
• introduce approximations and idealizations to avoid overly complicated expressions;
• lack of basic equations → new hypotheses and innovative thinking.

3. Forming the State-Space Models
• formal step aiming at a suitable organization of the equations/relationships;
• provides a suitable model for analysis and simulation;
• computer algebra can be helpful;
• for simulation: state-space models for the subsystems along with their interconnections.


Example: the Mining Ventilation Problem

Objectives:
• propose a new automation strategy to minimize the fans' energy consumption, based on distributed sensing capabilities: a wireless sensor network;
• investigate design issues and the influence of sensor locations;
• find the optimal control strategy that satisfies safety constraints.


Outline

1 The Phases of Modeling
2 Structuring the problem
3 Setting up the Basic Equations
4 Forming the State-Space Models
5 Simplified models
6 Conclusions
7 Homework


Phase 1: Structuring the problem

Ask the right questions:
• What signals are of interest (outputs)?
• Which quantities are important to describe what happens in the system?
• Of these quantities, which are exogenous and which should be regarded as internal variables?
• Which quantities are approximately time-invariant and should be regarded as constants?
• Which variables affect certain other variables?
• Which relationships are static and which are dynamic?


General tips:
• experimental results are often needed to assist these steps (e.g. time constants and influences);
• the intended use determines the complexity;
• use the model to get insights, and insights to correct the model;
• work with several models in parallel, which can have different complexity and be used to answer different questions;
• for complex systems, first divide the system into subsystems, and the subsystems into blocks.


Example: for the mining ventilation problem

• Inputs to the system:
  • ρ: air density in the vertical shaft;
  • P: air pressure in the vertical shaft;
  • ∆H: variation of pressure produced by the fan;
  • mj,in: incoming pollutant mass rate due to the engines;
  • mj,chem: mass variation due to chemical reactions between components;
  • h: time-varying number of hops in the WSN.
• Outputs from the system:
  • cj(z, t): pollutant (COx or NOx) volume concentration profiles, where z ∈ [0, hroom] is the height in the extraction room;
  • uavg: average velocity of the fluid in the tarpaulin tube;
  • mj: pollutant mass in the room;
  • τwsn: delay due to the distributed measurements and wireless transmission between the extraction room and the fan.

Page 13: Control-oriented modeling and system identification


• Division into subsystems: fan / tarpaulin tube / extraction room / wireless sensor network.
• Corresponding block diagram:


Phase 2: Setting up the Basic Equations

Main principles:
• formulate quantitative I/O relationships;
• use knowledge of mechanics, physics, economics, . . .
• well-established laws, experimental curves (data sheets) or crude approximations;
• very problem dependent!


Two groups of relationships:

1 Conservation laws: relate quantities of the same kind, e.g.
  • Pin − Pout = stored energy / unit time;
  • inflow rate − outflow rate = stored volume / unit time;
  • input mass flow rate − output mass flow rate = stored mass / unit time;
  • Kirchhoff's laws.
2 Constitutive relationships: relate quantities of different kinds (e.g. voltage - current, level - outflow, pressure drop - flow)
  • specific to a material, component or block in the system;
  • static relationships;
  • relate physical to engineering relationships;
  • always approximate.


How to proceed?
• write down the conservation laws for the block/subsystem;
• use suitable constitutive relationships to express the conservation laws in the model variables. Calculate the dimensions as a check.

Page 14: Control-oriented modeling and system identification


Mining ventilation example (i.e. extraction room):

• Conservation law - conservation of mass for chemical species j:
  ṁj(t) = mj,in(t) − mj,out(t) − mj,chem(t)
• Constitutive relationship - relate the mass to the concentration profile:
  mj(t) = Sroom ∫_0^hroom cj(z, t) dz = Sroom [ ∫_0^hdoor cj(z, t) dz + αj(t) ∆h ],
  with a hypothesis on the shape (e.g. sigmoid):
  cj(z, t) = αj(t) / (1 + e^(−βj(t)(z − γj(t)))).


Phase 3: Forming the State-Space Models

Straightforward recipe:
1 choose a set of state variables (the memory of what has happened, i.e. storage variables);
2 express the time derivative of each state as a function of the states and inputs;
3 express the outputs as functions of the states and inputs.


Examples of stored quantities:
• position of a mass / tank level (stored potential energy);
• velocity of a mass (stored kinetic energy);
• charge of a capacitor (stored electric field energy);
• current through an inductor (stored magnetic energy);
• temperature (stored thermal energy);
• internal variables from step 2.

Make separate models for the subsystems and diagram their interconnections → modularity and easier diagnosis of modeling errors.


Extraction room model:

• shape parameters α, β and γ chosen as the state: x(t) = [α, β, γ]^T;
• time derivative from mass conservation:
  Ej [α̇j(t), β̇j(t), γ̇j(t)]^T = mj,in(t) − Bj ufan(t − τtarp) − Djk,
  where Ej = (Sroom/Vint) [· · · ∂Cj,i/∂αj, ∂Cj,i/∂βj, ∂Cj,i/∂γj · · ·] + [∆h, 0, 0]^T,
  Bj = (hdoor/Vint) [· · · Cj,i · · ·] × Starp ν, and
  Djk = (Sroom/Vint) [· · · ηjk,i Cj,i Ck,i · · ·] + ηjk αj αk ∆h.

Page 15: Control-oriented modeling and system identification


Number of state variables:
• sufficient if the derivatives are described by the states and inputs;
• harder to determine whether some states are unnecessary;
• linear models → rank of matrices;
• when used in simulation, the only disadvantage is unnecessary computations.


Simplified models

Even if a relatively good level of precision can be achieved, the model has to be manageable for our purpose.

Model simplification:
• reduced number of variables;
• easily computable;
• linear rather than nonlinear;
• tradeoff between complexity and accuracy;
• balance between the approximations;
• three kinds:
  1 small effects are neglected - approximate relationships are used;
  2 separation of time constants;
  3 aggregation of state variables.


Small effects are neglected - approximate relationships are used:
• e.g. compressibility, friction, air drag → amplitude of the resonance effects / energy losses?
• based on physical intuition and insight together with practice;
• depends on the desired accuracy;
• linear vs. nonlinear: make experiments and tabulate the results.


Separation of time constants:
• Time constants may have different orders of magnitude, e.g. for Tokamaks:
  • Alfvén time (MHD instabilities): 10⁻⁶ s;
  • density diffusion time: 0.1–1 s;
  • heat diffusion time: 0.1–1 s (3.4 s for ITER);
  • resistive diffusion time: a few seconds (100–3000 s for ITER).
• Advice:
  • concentrate on phenomena whose time constants match the intended use;
  • approximate subsystems that have considerably faster dynamics with static relationships;
  • approximate the variables of subsystems whose dynamics are appreciably slower as constants.
• Two important advantages:
  1 reduce the model order by ignoring very fast and very slow dynamics;
  2 by giving the model time constants of the same order of magnitude (i.e. τmax/τmin ≤ 10–100), we get simpler simulations (avoid stiffness!).
• When different time-scales are desired, consider using different models.
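The stiffness point can be made concrete with a toy computation (a sketch with illustrative time constants, not values from the course): with explicit Euler, the fastest time constant dictates the step size even when only the slow dynamics matter.

```python
# Toy illustration of stiffness: dx/dt = -x/tau integrated with
# explicit Euler is stable only for dt < 2*tau, so a model mixing
# tau = 1e-3 s and tau = 1 s forces a tiny step for the fast mode.

def euler_decay(tau, dt, steps):
    """Explicit Euler for dx/dt = -x/tau with x(0) = 1."""
    x = 1.0
    for _ in range(steps):
        x += dt * (-x / tau)
    return x

tau_fast, tau_slow = 1e-3, 1.0       # tau_max/tau_min = 1000 >> 100
ratio = tau_slow / tau_fast
dt = 0.01                            # adequate for the slow mode...
stable_fast = dt < 2 * tau_fast      # ...but violates the fast bound
x_slow = euler_decay(tau_slow, dt, 100)   # ~exp(-1) after t = 1 s
```

Replacing the fast subsystem by its static relationship, as advised above, removes tau_fast from the model and lets the step size be chosen for the slow dynamics alone.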

Page 16: Control-oriented modeling and system identification


Aggregation of state variables:

Merge several similar variables into one state variable: often an average or total value.
• e.g. the infinite number of points in the extraction room → 3 shape parameters; trace gas transport in firns;
• hierarchy of models with different amounts of aggregation, e.g. economics: investments / private and government / each sector of the economy / a thousand state variables;
• partial differential equations (PDE) reduced to ordinary differential equations (ODE) by difference approximation of the spatial variables.


Example: Heat conduction in a rod
• input: power in the heat source P;
• output: temperature at the other endpoint T;
• heat equation:
  ∂x(z, t)/∂t = a ∂²x(z, t)/∂z²,
  where x(z, t) is the temperature at time t at distance z from the left end point and a is the heat conductivity coefficient of the metal;
• hypothesis: no losses to the environment;
• at the end points: a ∂x(0, t)/∂z = P(t), x(L, t) = T(t);
• determining T(t) for t ≥ t1 requires knowing the whole function x(z, t1), 0 ≤ z ≤ L → infinite dimensional system.


Example: Heat conduction in a rod (2)

Aggregation of state variables: approximate for simulation
• divide the rod into three parts (x(z, t), 0 ≤ z ≤ L/3, aggregated into x1(t), etc.) and assume a homogeneous temperature in each part;
• conservation of energy for part 1:
  d/dt (heat stored in part 1) = (power in) − (power out to part 2)
  d/dt (C · x1(t)) = P − K(x1(t) − x2(t)),
  where C is the heat capacity of each part and K the heat transfer coefficient;
• similarly:
  d/dt (C · x2(t)) = K(x1(t) − x2(t)) − K(x2(t) − x3(t))
  d/dt (C · x3(t)) = K(x2(t) − x3(t))
  T(t) = x3(t)


• Rearrange the equations to obtain the linear state-space model:

  ẋ(t) = (K/C) [ −1 1 0 ; 1 −2 1 ; 0 1 −1 ] x(t) + (1/C) [1 ; 0 ; 0] P
  y(t) = (0 0 1) x(t)

• Conclusions: essentially the same as using a finite difference approximation of the spatial derivative (homework); a finer division would give a more accurate model.
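The 3-state rod model above simulates in a few lines; a minimal sketch with illustrative values K = C = P = 1 (not values from the course):

```python
# Explicit Euler simulation of the aggregated rod model:
# C*dx1/dt = P - K*(x1 - x2), C*dx2/dt = K*(x1 - 2*x2 + x3),
# C*dx3/dt = K*(x2 - x3), y = x3. With no losses all temperatures
# keep rising at rate P/(3C), while the differences settle at
# x1 - x2 -> 2P/(3K) and x2 - x3 -> P/(3K).

def simulate_rod(K=1.0, C=1.0, P=1.0, dt=0.01, steps=2000):
    x1 = x2 = x3 = 0.0
    for _ in range(steps):
        dx1 = (P - K * (x1 - x2)) / C        # heated end
        dx2 = K * (x1 - 2 * x2 + x3) / C
        dx3 = K * (x2 - x3) / C              # measured end (output)
        x1, x2, x3 = x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3
    return x1, x2, x3

x1, x2, x3 = simulate_rod()
```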

Page 17: Control-oriented modeling and system identification


Example: solving the air continuity in polar firns and ice cores

From poromechanics, firn = system composed of the ice lattice, gas connected to the surface (open pores) and gas trapped in bubbles (closed pores). Air transport is driven by:

  ∂[ρice(1 − ε)]/∂t + ∇·[ρice(1 − ε) v] = 0
  ∂[ρgas^o f]/∂t + ∇·[ρgas^o f (v + wgas)] = −r_{o→c}
  ∂[ρgas^c (ε − f)]/∂t + ∇·[ρgas^c (ε − f) v] = r_{o→c}

with appropriate boundary and initial conditions. Scheme adapted from [Sowers et al. '92, Lourantou '08].


E.g. CH4 transport at NEEM (Greenland)

⇒ A unique archive of the recent (50-100 years) anthropogenic impact. Ice cores can go much further back (i.e. > 800 000 years).


From distributed to lumped dynamics

• Consider a quantity q transported in 1D by a flux u = qv with a source term s (t ∈ [0, T], z ∈ [0, zf]):
  ∂q/∂t + ∂[q v(z, t)]/∂z = s(z, t), with q(0, t) = 0, q(z, 0) = q0(z),
  where s(z, t) ≠ 0 for z < z1 < zf and s = 0 for z1 < z < zf.
• Approximate ∂[qv]/∂z, e.g. on a uniform mesh:
  • backward difference: (uz)i = (ui − ui−1)/∆z + (∆z/2)(uzz)i
  • central difference: (uz)i = (ui+1 − ui−1)/(2∆z) − (∆z²/6)(uzzz)i
  • other second order: (uz)i = (ui+1 + 3ui − 5ui−1 + ui−2)/(4∆z) + (∆z²/12)(uzzz)i − (∆z³/8)(uzzzz)i
  • third order: (uz)i = (2ui+1 + 3ui − 6ui−1 + ui−2)/(6∆z) − (∆z³/12)(uzzzz)i
• This provides the computable lumped model: dq/dt = Aq + s.
• The choice of the discretization scheme directly affects the definition of A and the distribution of its eigenvalues: check stability and precision!
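A sketch of this stability check (illustrative values, not the NEEM case): with the backward (first-order upwind) difference and constant v, the matrix A is lower bidiagonal, so its eigenvalues can be read off its diagonal and are all negative.

```python
# First-order upwind discretization of dq/dt + v*dq/dz = s on a
# uniform mesh: dq_i/dt = -(v/dz)*q_i + (v/dz)*q_{i-1} + s_i.
# A is lower triangular, so its spectrum is its diagonal, -v/dz:
# strictly in the left half-plane -> stable lumped model.

def upwind_matrix(n, v, dz):
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -v / dz           # outflow of cell i
        if i > 0:
            A[i][i - 1] = v / dz    # inflow from cell i-1
    return A

A = upwind_matrix(5, v=1.0, dz=0.5)
eigs = [A[i][i] for i in range(5)]  # triangular: diagonal = spectrum
```

A pure central difference, by contrast, places the eigenvalues of the advection operator on the imaginary axis, which is the kind of difference between schemes that an eigenvalue plot makes visible.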


E.g. stability: eigenvalues of A for CH4 at NEEM with dt ≈ 1 week

[Figure: eigenvalues of A in the complex plane, and real part of the eigenvalues vs. depth (m), comparing the FOU, central, FOU + central, FOU + 2nd order and FOU + 3rd order schemes.]

Page 18: Control-oriented modeling and system identification


E.g. eig(A) for CH4 at NEEM with dt ≈ 1 week, zoom

[Figure: zoom near the origin on the same eigenvalue comparison (FOU, central, FOU + central, FOU + 2nd order, FOU + 3rd order).]


Conclusions

→ Guidelines to structure the general approach for modeling:
• the clarity of the model and its usage directly depend on its initial philosophy;
• resist the temptation to skip the documentation of "obvious steps";
• planning the use of experimental knowledge and sub-model validation strategies during the modeling phases is essential.


Homework 2

Use finite differences to solve the heat conduction equation
  a ∂²x(z, t)/∂z² = ∂x(z, t)/∂t, T(t) = x(L, t), P(t) = a ∂x(z, t)/∂z |z=0.

1 define the discretized state X(t) = [x1(t) . . . xi(t) . . . xN(t)]^T as a spatial discretization of x(z, t);
2 use the central difference approximation ∂²u/∂z² ≈ (ui+1 − 2ui + ui−1)/∆z² to express dxi(t)/dt as a function of xi+1, xi and xi−1, for i = 1 . . . N;
3 introduce the boundary conditions:
  • with ∂u/∂z(0, t) ≈ (u1 − u0)/∆z, express x0 as a function of x1 and P, then substitute in dx1/dt;
  • with ∂u/∂z(L, t) ≈ (uN+1 − uN)/∆z, express xN+1 as a function of xN, then substitute in dxN/dt (suppose that there is no heat loss: ∂x(L, t)/∂z = 0);
4 write the discretized dynamics in state-space form;
5 for N = 3, compare with the results obtained in class.
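The interior stencil of step 2 can be checked numerically before tackling the boundary conditions (the substitutions themselves are left as the exercise); a small illustrative sketch:

```python
# The three-point stencil (u[i+1] - 2*u[i] + u[i-1]) / dz**2 is exact
# up to rounding for quadratics: for u(z) = z**2, u'' = 2 everywhere.

def second_diff(u, i, dz):
    """Central difference approximation of d2u/dz2 at node i."""
    return (u[i + 1] - 2 * u[i] + u[i - 1]) / dz**2

dz = 0.1
u = [(k * dz) ** 2 for k in range(11)]   # samples of z^2 on [0, 1]
approx = second_diff(u, 5, dz)           # should be ~2.0
```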

Page 19: Control-oriented modeling and system identification

Modeling and Estimation for Control / Physical Modeling
Lecture 3: Some Basic Relationships in Physics

Emmanuel WITRANT, [email protected]
April 21, 2014.


Introduction

• Most common relationships within a number of areas in

physics.

• More general relationships become visible.

⇒ General modeling strategy.


Outline

1 Physic fundamentals

2 Electrical Circuits

3 Mechanical Translation

4 Mechanical Rotation

5 Flow Systems

6 Thermal System

7 Thermal Systems

8 Conclusions


Physic fundamentals

Mass conservation: for a closed system, ∆M = 0.

Energy conservation: for an isolated system, ∆E = 0.

1st law of thermodynamics: heat (Q) and work (W) are equivalent and can be exchanged:
  ∆E = ∆U + ∆Ecin + ∆Epot = Q + W.

Page 20: Control-oriented modeling and system identification


Electrical Circuits

Fundamental quantities: voltage u (volt) and current i (ampere).

Components:
• Inductor (L henry): i(t) = (1/L) ∫_0^t u(s) ds, u(t) = L di(t)/dt; energy T(t) = (1/2) L i²(t) (magnetic field energy storage, J).
• Capacitor (C farad): u(t) = (1/C) ∫_0^t i(s) ds, i(t) = C du(t)/dt; energy T(t) = (1/2) C u²(t) (electric field energy storage).
• Resistor (R ohm): u(t) = R i(t); power P(t) = u(t) · i(t) (loss, in watts, 1 W = 1 J/s).
• Nonlinear resistance: u(t) = h1(i(t)) or i(t) = h2(u(t)).
• Ideal rectifier: h2(x) = x for x > 0, 0 for x ≤ 0.
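Combining a constitutive law with Kirchhoff's voltage law gives the classic RC discharge; a sketch with illustrative component values (not from the slides):

```python
import math

# RC loop: u_C = R*i together with i = -C*du_C/dt gives
# u_C(t) = u0 * exp(-t / (R*C)), with stored energy T = C*u^2/2.

def rc_discharge(u0, R, C, t):
    """Capacitor voltage at time t during an RC discharge."""
    return u0 * math.exp(-t / (R * C))

R, C, u0 = 1e3, 1e-6, 5.0                      # 1 kOhm, 1 uF -> tau = 1 ms
u_after_tau = rc_discharge(u0, R, C, t=1e-3)   # one time constant
energy0 = 0.5 * C * u0**2                      # initial stored energy
```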


Interconnections (Kirchhoff's laws):
  ∑k ik(t) ≡ 0 (nodes), ∑k uk(t) ≡ 0 (loops).

Ideal transformer: transforms voltage and current such that their product is constant:
  u1 · i1 = u2 · i2, u1 = α u2, i1 = (1/α) i2.


Mechanical Translation

Fundamental quantities: force F (newton) and velocity v (m/s), 3-D vectors (suppose a constant mass: ṁ = 0).

Components:
• Newton's force law: v(t) = (1/m) ∫_0^t F(s) ds, F(t) = m dv(t)/dt; energy T(t) = (1/2) m v²(t) (kinetic energy storage).
• Elastic bodies (k N/m): F(t) = k ∫_0^t v(s) ds, v(t) = (1/k) dF(t)/dt; energy T(t) = (1/2k) F²(t) (elastic energy storage).
• Friction: F(t) = h(v(t)); power P(t) = F(t) · v(t) (lost as heat):
  • air drag: h(x) = c x² sgn(x);
  • dampers: h(x) = γ x;
  • dry friction: h(x) = +µ if x > 0, F0 if x = 0, −µ if x < 0.


Interconnections:
  ∑k Fk(t) ≡ 0 (body at rest),
  v1(t) = v2(t) = . . . = vn(t) (interconnection point).

Ideal transformer: force amplification thanks to levers:
  F1 · v1 = F2 · v2, F1 = α F2, v1 = (1/α) v2.

Page 21: Control-oriented modeling and system identification


Example: active seismic isolation control [2]

Mass - spring - damper approximation:
  m4 ẍ4(t) = γ4(ẋ3 − ẋ4) + k4(x3 − x4)
  mi ẍi(t) = [γi(ẋi−1 − ẋi) + ki(xi−1 − xi)] + [γi+1(ẋi+1 − ẋi) + ki+1(xi+1 − xi)], i = 2, 3
  m1 ẍ1(t) = [γ1(ẋ0 − ẋ1) + k1(x0 − x1)] + [γ2(ẋ2 − ẋ1) + k2(x2 − x1)] + u(t)
  m1 ẍ0(t) = Fearth(t)
  y(t) = [x0 + x1, x2 − x1]^T
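A single floor of such a mass-spring-damper chain can be simulated directly from Newton's law; a sketch with illustrative parameters (not those of [2]):

```python
# One mass tied to a fixed base x0 through a spring k and a damper
# gamma: m*x'' = k*(x0 - x) + gamma*(0 - x'), explicit Euler in time.
# The mass settles at the base position x0 (zero spring force).

def settle(m=1.0, k=4.0, gamma=2.0, x0=1.0, dt=0.001, steps=20000):
    x, v = 0.0, 0.0
    for _ in range(steps):
        a = (k * (x0 - x) + gamma * (0.0 - v)) / m
        x, v = x + dt * v, v + dt * a
    return x, v

x_final, v_final = settle()   # settles near x0 = 1 with v ~ 0
```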


Mechanical Rotation

Fundamental quantities: torque M [N·m] and angular velocity ω [rad/s].

Components:
• Inertia (J [kg·m²]): ω(t) = (1/J) ∫_0^t M(s) ds, M(t) = J dω(t)/dt; energy T(t) = (1/2) J ω²(t) (rotational energy storage).
• Torsional stiffness k: M(t) = k ∫_0^t ω(s) ds, ω(t) = (1/k) dM(t)/dt; energy T(t) = (1/2k) M²(t) (torsional energy storage).
• Rotational friction: M(t) = h(ω(t)); power P(t) = M(t) · ω(t).


Interconnections:
  ∑k Mk(t) ≡ 0 (body at rest).

Ideal transformer: a pair of gears transforms torque and angular velocity as:
  M1 · ω1 = M2 · ω2, M1 = α M2, ω1 = (1/α) ω2.


Example: printer belt pulley [3]

  Spring tension: T1 = k(rθ − rθp) = k(rθ − y)
  Spring tension: T2 = k(y − rθ)
  Newton: T1 − T2 = m d²y/dt²
  Motor torque (resistance R, inductance L = 0): Mm = Km i = (Km/R) v2
  The motor torque drives the belts plus a disturbance: Mm = M + Md
  M drives the shaft to the pulleys: M = J d²θ/dt² + h dθ/dt + r(T1 − T2)

Page 22: Control-oriented modeling and system identification


Flow Systems

Fundamental quantities: for incompressible fluids, pressure p [N/m²] and flow Q [m³/s].

Fluid in a tube:
  pressure gradient ∇p, force p · A,
  mass ρ · l · A, flow Q = v · A,
  inertance Lf = ρ · l / A [kg/m⁴].

Constitutive relationships (Newton: sum of forces = mass × acceleration):
  Q(t) = (1/Lf) ∫_0^t ∇p(s) ds, ∇p(t) = Lf dQ(t)/dt;
  energy T(t) = (1/2) Lf Q²(t) (kinetic energy storage).


Fluid in a tube (2):

Pressure drop from the Darcy-Weisbach equation for a circular pipe:
  ∂P/∂x = f (L/D) (V²/2g)

Friction factor for laminar flow (Re < 2300): f = 64/Re. For turbulent flow, use an empirical formula or the Moody diagram.
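For the laminar branch the computation chains directly; a sketch with illustrative water-like values (not from the slides):

```python
# Laminar pipe flow: Re = rho*v*D/mu < 2300 -> f = 64/Re, which then
# enters the Darcy-Weisbach pressure-drop expression above.

def reynolds(rho, v, D, mu):
    return rho * v * D / mu

def friction_laminar(Re):
    return 64.0 / Re

Re = reynolds(rho=1000.0, v=0.01, D=0.02, mu=1e-3)  # slow water flow
f = friction_laminar(Re)                            # 64/200 = 0.32
```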


Flow in a tank:
• Volume V = ∫ Q dt, level h = V/A, and fluid capacitance Cf = A/(ρg) [m⁴·s²/kg].
• Constitutive relationships: bottom pressure p = ρ · h · g,
  p(t) = (1/Cf) ∫_0^t Q(s) ds; energy T(t) = (1/2) Cf p²(t) (potential energy storage).


Flow through a section reduction:
• Pressure p, flow resistance Rf, constant H.
• Constitutive relationships:
  • pressure drop: ∇p(t) = h(Q(t));
  • Darcy's law: ∇p(t) = Rf Q(t);
  • area change: ∇p(t) = H · Q²(t) · sign(Q(t)).

Page 23: Control-oriented modeling and system identification


Interconnections:
  ∑k Qk(t) ≡ 0 (flows at a junction), ∑k pk ≡ 0 (in a loop).

Ideal transformer:
  p1 · Q1 = p2 · Q2, p1 = α p2, Q1 = (1/α) Q2.


Thermal System

Fundamental quantities: temperature T [K], entropy S [J/(kg·K)] and heat flow rate q [W].

3 ways to transfer heat:
• Conduction: contact between 2 solids at different temperatures.
• Convection: propagation of heat through a fluid (gas or liquid).
• Radiation: Stefan-Boltzmann law, P = ε S σ T⁴ (T > 0 ⇒ qrad > 0).

Thermal energy of a body or fluid: Etherm = M · Cp · T.
Heat transported in a flow: q = ṁ · h (h = enthalpy).


Heat Conduction

Body heating, from Fourier's law of conduction in 1D:
  k · ∂²T/∂x² = ρ · Cp · ∂T/∂t
  q(t) = M · Cp · dT/dt,
where k [W/(m·K)] is the thermal conductivity of the body, ρ [kg/m³] and M [kg] are the density and the mass of the body, and Cp [J/(kg·K)] is the specific heat of the body.

Interconnections: ∑k qk(t) ≡ 0 (at one point).


Heat Convection

Forced convection between a flowing fluid and a pipe:
  h · Sw · (Tw(t) − T(t)) = −Mw · Cp,w · dTw(t)/dt = q(t),
where T [K] is the fluid temperature, h [W/(m²·K)] is the heat transfer coefficient, and Tw [K], Mw [kg], Sw [m²] and Cp,w [J/(kg·K)] are the temperature, mass, surface and specific heat of the pipe.

Interconnections: ∑k qk(t) ≡ 0 (at one point).

Page 24: Control-oriented modeling and system identification


Convective Heat Transfer coefficient

Correlation for forced internal turbulent flow: Dittus-Boelter correlation (1930), for 10000 < Re < 120000:
  h = (k/D) Nu,
where k is the thermal conductivity of the bulk fluid, D is the hydraulic diameter and Nu is the Nusselt number:
  Nu = 0.023 · Re^0.8 · Pr^n,
with Re = ρ·v·D/µ the Reynolds number and Pr the Prandtl number; n = 0.4 for heating (wall hotter than the bulk fluid) and n = 0.33 for cooling (wall cooler than the bulk fluid). Precision is ±15%.
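The correlation transcribes directly to code; the fluid properties below are illustrative (roughly water-like), not values from the course:

```python
# Dittus-Boelter: Nu = 0.023 * Re**0.8 * Pr**n, h = (k/D) * Nu,
# with n = 0.4 for heating and 0.33 for cooling; valid for
# 10000 < Re < 120000, accuracy about +/-15%.

def dittus_boelter_h(k, D, Re, Pr, heating=True):
    if not 10000 < Re < 120000:
        raise ValueError("outside the correlation's validity range")
    n = 0.4 if heating else 0.33
    Nu = 0.023 * Re**0.8 * Pr**n
    return (k / D) * Nu

h = dittus_boelter_h(k=0.6, D=0.02, Re=20000, Pr=7.0, heating=True)
```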


Thermal Systems: summary

• Conduction in 0D, with thermal capacity C [J/K]:
  T(t) = (1/C) ∫_0^t q(s) ds, q(t) = C dT(t)/dt.
• Interconnections:
  q(t) = W ∆T(t) (heat transfer between 2 bodies),
  ∑k qk(t) ≡ 0 (at one point),
  where W = h Sw [J/(K·s)].
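The two interconnection rules above suffice to simulate heat exchange between two bodies; a sketch with illustrative capacities and transfer coefficient (not from the slides):

```python
# Two bodies exchanging heat through q = W*(T1 - T2); the flow leaving
# body 1 enters body 2 (sum of flows at the interface is zero), so
# C1*T1 + C2*T2 is conserved and both converge to the weighted mean.

def equilibrate(T1=80.0, T2=20.0, C1=2.0, C2=2.0, W=0.5,
                dt=0.01, steps=5000):
    for _ in range(steps):
        q = W * (T1 - T2)        # heat flow rate from body 1 to body 2
        T1 -= dt * q / C1
        T2 += dt * q / C2
    return T1, T2

T1_end, T2_end = equilibrate()   # with equal capacities: both -> 50
```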


Conclusions

Obvious similarities among the basic equations for different systems!

Some physical analogies:

System            | Effort      | Flow            | Effort storage | Flow storage     | Static relation
Electrical        | Voltage     | Current         | Inductor       | Capacitor        | Resistor
Mech. translation | Force       | Velocity        | Body (mass)    | Spring           | Friction
Mech. rotation    | Torque      | Angular velocity| Axis (inertia) | Torsional spring | Friction
Hydraulic         | Pressure    | Flow            | Tube           | Tank             | Section
Thermal           | Temperature | Heat flow rate  | -              | Heater           | Heat transfer


Characteristics:
1 Effort variable e;
2 Flow variable f;
3 Effort storage: f = α⁻¹ · ∫ e;
4 Flow storage: e = β⁻¹ · ∫ f;
5 Power dissipation: P = e · f;
6 Energy storage via I: T = (1/2) α f²;
7 Energy storage via C: T = (1/2) β e²;
8 Sum of flows equal to zero: ∑ fi = 0;
9 Sum of efforts (with signs) equal to zero: ∑ ei = 0;
10 Transformation of variables: e1 f1 = e2 f2.

• Note: analogies may be complete or not (e.g. thermal).
⇒ Create systematic, application-independent modeling from these analogies (next lesson).

Page 25: Control-oriented modeling and system identification


References

1 L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.
2 N. Itagaki and H. Nishimura, "Gain-Scheduled Vibration Control in Consideration of Actuator Saturation", Proc. of the IEEE Conference on Control Applications, vol. 1, pp. 474-479, Taipei, Taiwan, Sept. 2-4, 2004.
3 R.C. Dorf and R.H. Bishop, Modern Control Systems, 9th Ed., Prentice Hall, 2001.

Page 26: Control-oriented modeling and system identification

Bond Graphs
E. Witrant

Modeling and Estimation for Control / Physical Modeling
Lecture 4: Bond Graphs

Emmanuel WITRANT
[email protected]
April 22, 2014


Basic Concepts behind Bond Graphs [S. Stramigioli'01]

• Mathematical modeling: mathematical relations, generally without constraints or physical interpretation.
• Physical modeling: physical concepts, restricted so as to preserve some physical laws.

Bond graphs:
• satisfy the 1st principle of thermodynamics: energy conservation;
• are self-dual graphs where:
  • vertices = ideal physical concepts (storage or transformation of energy);
  • edges (power bonds) = lossless transfers of energy (like water pipes, carrying energy from one part of the system to another);
⇒ an excellent tool for describing power-consistent networks of physical systems.


Causality

• Block diagrams: the exchange of information takes place through arrows; a variable x going from A to B = causal exchange of information, but this is often physically artificial and not justified (e.g. a resistor has no preferred causality).

• Bond graphs: causality is not considered in the modeling phase, only necessary for simulation.

Energy

• one of the most important concepts in physics;

• dynamics is the direct consequence of energy exchange;

• lumped physical models: system = network interconnection of basic elements which can store, dissipate or transform energy.


Outline

1 Physical Domains and Power Conjugate Variables

2 The Physical Model Structure and Bond Graphs

3 Energy Storage and Physical State

4 Free Energy Dissipation

5 Ideal Transformations and Gyrations

6 Ideal Sources

7 Kirchhoff’s Laws, Junctions and the Network Structure

8 Bond Graph Modeling of Electrical Networks


Physical Domains and Power Conjugate Variables

Physical domains:

• Discriminate depending on the kind of energy that a certain part of the system can store, e.g. kinetic energy of a stone thrown in the air → translational mechanical domain; potential energy of a capacitor → electrical domain.

• Most important primal domains:

  • mechanical = mechanical potential & mechanical kinetic;
  • electromagnetic = electrical & magnetic;
  • hydraulic = hydraulic potential & hydraulic kinetic;
  • thermic: the only one without dual sub-domains, related to the irreversible transformation of energy to the thermal domain.


Power conjugate variables:

• Similarity among domains (cf. Lesson 3), e.g. the oscillator.

• In each primal domain: two special variables, the power conjugate variables, whose product is dimensionally equal to power.

• Efforts and flows:

  Domain                  Effort           Flow
  Mechanical Translation  force F          velocity v
  Mechanical Rotation     torque τ         angular velocity ω
  Electro-magnetic        voltage v        current i
  Hydraulic               pressure p       flow rate Q
  Thermic                 temperature T    entropy flow E


The Physical Model Structure and Bond Graphs

Energetic ports:

• physical modeling → atomic elements like the storage, dissipation, or transformation of energy;

• external variables = set of flow and dual effort vectors;

• effort-flow pairs = energetic ports, since their dual product represents the energy flow through this imaginary port.

Bond graphs as a graphical language:

1 extremely easy to draw;

2 mechanical to translate into block diagrams or differential equations;

3 with a few rules, it is impossible to make the common "sign mistakes" of block diagrams.


Energetic bonds:

• edges in the graph, representing the flow of energy (e.g. water pipes);

• notation: effort value above or left, flow under or right;

• rules:

  [figure: a bond drawn between elements A and B, with and without the half arrow]

  1 each bond represents both an effort e and a dual flow f;

  2 the half arrow gives the direction of positive power P = e^T f (energy flows);

  3 the effort direction can be, if necessary, specified by the causal stroke; the dual flow ALWAYS goes in the opposite direction (otherwise an element could set P independently of its destination → extract infinite energy).


Network structure:

• if 2 subsystems A and B are interconnected, both the effort and the flow MUST be the same on either side: an interconnection constraint that specifies how A and B interact;

• more generally, interconnections and interactions are described by a set of bonds and junctions that generalize Kirchhoff's laws.


Energy Storage and Physical State

Identical structure for physical lumped models

• Integral form characterized by:

  1 an input u(t), always and only either effort or flow;
  2 an output y(t), either flow or effort;
  3 a physical state x(t);
  4 an energy function E(x).

• State-space equations:

  ẋ(t) = u(t),   y(t) = ∂E(x(t))/∂x

• Change in stored energy:

  Ė = dE/dt = (∂E(x)/∂x)^T (dx/dt) = y^T u = P_supplied

→ half-arrow power bonds are always directed towards storage elements (Ė > 0)!
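The integral-form storage element above can be sketched numerically. A minimal example (not from the slides), assuming a linear capacitor energy E(x) = x²/(2C) with made-up values C = 0.5 and constant flow input u = 1:

```python
# Sketch of the integral-form storage element: xdot = u, y = dE/dx.
# Here x = q (charge), u = i (current), y = v (voltage); C is hypothetical.

C = 0.5                          # assumed capacitance
E = lambda x: x**2 / (2 * C)     # energy function E(x)
dEdx = lambda x: x / C           # effort output y = dE/dx

x, dt = 0.0, 1e-4
for _ in range(10000):           # integrate xdot = u over 1 s
    u = 1.0                      # constant flow input (current)
    y = dEdx(x)                  # effort output (voltage)
    x += dt * u                  # xdot = u

# after 1 s at u = 1: x = q ≈ 1, so y = q/C ≈ 2 and E ≈ 1;
# the supplied power y*u equals dE/dt along the trajectory
print(x, dEdx(x), E(x))
```

The same loop works for any properly defined E(x); only `dEdx` changes.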


e.g. Capacitor:

[figure: block diagram with flow input u = i → ∫ → state x = q → ∂E/∂x → effort output y = v, and the corresponding bond graph symbol C with effort e and flow f]


Bond graphs representations

• Depending on whether u is an effort or a flow in the integral form, two dual elements:

  • C element: has flow input u and dual effort output y;
  • I element: has effort input u and dual flow output y.

• Causal representations:

  generalized displacement: q(t) = q(t0) + ∫_{t0}^{t} f(s) ds

  differential form → γ⁻¹(e)

  generalized potential energy E(q), co-energy E*(e) ⇒ γ⁻¹(e) = ∂E*(e)/∂e


  generalized momenta: p(t) = p(t0) + ∫_{t0}^{t} e(s) ds

  differential form → γ⁻¹(f)

  generalized kinetic energy E(p), co-energy E*(f) ⇒ γ⁻¹(f) = ∂E*(f)/∂f

• Multidimensional I indicated by I (bold) and multidimensional C by C (bold).


Mechanical domain

C - Spring:

• input u = velocity v, generalized displacement ∫ v = x, stored potential energy E(x) = (1/2)kx², effort y = ∂E/∂x = kx = F (elastic force);

• holds for ANY properly defined energy function, which is the ONLY information characterizing an ideal storage of energy;

• e.g. nonlinear spring: E(x) = (1/2)kx² + (1/4)kx⁴ ⇒ y = F = ∂E/∂x = kx + kx³;

• linear spring, co-energy E*(F) = F²/(2k), x = γ⁻¹(F) = ∂E*(F)/∂F = F/k.
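The point that E(x) is the only information needed can be checked numerically: differentiate the energy functions above and compare with the analytic forces. A small sketch with a hypothetical stiffness k = 3:

```python
# Effort y = dE/dx obtained from the energy function alone, approximated
# here by a central finite difference; k is a made-up stiffness.

k = 3.0
E_lin = lambda x: 0.5 * k * x**2                    # linear spring
E_nl  = lambda x: 0.5 * k * x**2 + 0.25 * k * x**4  # nonlinear spring

def force(E, x, h=1e-6):
    """Effort y = dE/dx via central difference."""
    return (E(x + h) - E(x - h)) / (2 * h)

x = 0.7
print(force(E_lin, x), k * x)             # both ≈ F = kx
print(force(E_nl, x), k * x + k * x**3)   # both ≈ F = kx + kx^3
```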


I - Effort as input, kinetic mechanical domain:

• input u = force F, ∫ F = p = mv (momentum) by Newton's law (which holds even for time-varying m(t)) ⇒ proper physical state for kinetic energy storage: the momentum p;

• E(p) = p²/(2m), y = v = γ(p) = ∂E/∂p = p/m;

• kinetic co-energy E*(v) = (1/2)mv², p = γ⁻¹(v) = ∂E*(v)/∂v = mv.


Electrical domain:

• proper physical states: charge q and flux φ, NOT i and v.

C - Storage in the electrostatic domain:

• u = i, physical state ∫ i = q (generalized displacement), stored potential energy E(q) = q²/(2C) (co-energy E*(v) = (1/2)Cv²), effort y = ∂E/∂q = q/C = v;

• e.g. nonlinear capacitor: E(q) = q²/(2C) + q⁴/(4C) ⇒ y = v = q/C + q³/C;

• using the co-energy, q = γ⁻¹(v) = ∂E*(v)/∂v = Cv.

I - Ideal inductor:

• u = v, ∫ v = φ, E(φ) = φ²/(2L), where L is the induction constant, y = i = φ/L.


Energy storage:

• Generalized states:

  Domain               Gen. momentum (∫ e)    Gen. displacement (∫ f)
  Mech. Translational  momentum p             displacement x
  Mech. Rotational     ang. momentum m        ang. displacement θ
  Electromagnetic      flux linkage φ         charge q
  Hydraulic            pressure mom. Pp       volume V
  Thermic              NON EXISTENT           entropy E

• Storage elements:

  1 what are the real physical states?
  2 the energy function provides the equation;
  3 its argument → what physical ideal element it represents;
  4 the only ideal physical elements to which a state is associated are energy storages;
  5 in bond graphs, the power bond connected to a storage element must always be directed toward the element.


Duality

• 2 storage elements per physical domain except the thermal one (generalized potential and kinetic energy storage) = a dual pair;

• one major concept in physics: oscillations when dual elements are interconnected, e.g. spring-mass or capacitor-inductor;

• the thermal domain does NOT have both = irreversibility of energy transformation, due to a lack of "symmetry".

Extra supporting states

• states without physical energy;

• e.g. position of a mass translating by itself: physical state p, position x = ∫ v = ∫ p/m, but if the measurement is x and not v:

  d/dt (p, x)^T = (0, p/m)^T + (u, 0)^T,   y = x

⇒ total state (p, x)^T, physical state p, supporting state x needed for the analysis but without associated physical energy.
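The supporting-state idea can be sketched directly: integrate ṗ = u and ẋ = p/m together and read the position output. Mass m = 2 and force u = 1 are made-up values:

```python
# Mass in translation: physical state p (momentum), supporting state x
# (position, no stored energy) needed only because the measurement is x.

m = 2.0                    # hypothetical mass
p, x = 0.0, 0.0
dt, T = 1e-4, 1.0
for _ in range(int(T / dt)):
    pdot = 1.0             # pdot = u (constant force)
    xdot = p / m           # xdot = p/m (supporting state equation)
    p += dt * pdot
    x += dt * xdot

# closed form: p = u*t and x = u*t^2/(2m), so p ≈ 1 and x ≈ 0.25 at t = 1
print(p, x)
```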


Free Energy Dissipation

Principle:

• irreversible transformation, e.g. mechanical or electrical → thermal;

• dissipation of energy is a transformation (1st principle of thermodynamics);

• dissipation of free energy (mathematically: the Legendre transformation of energy with respect to entropy), e.g. ideal electrical resistors or mechanical dampers;

• an ideal dissipator is characterized by a purely static (no states) effort/flow relation: e = Z(f) (impedance form) or f = Y(e) (admittance form), for which Z(f)f ≥ 0 and eY(e) ≥ 0 (energy flowing toward the element).


Electrical domain

• Ohm’s law: u = Ri and i = u/R;

• causally invertible;

• r : constant R of a linear element (r = R).

Mechanical domain

• viscous damping coefficient b: F = bv and v = F/b,

r = b.


Ideal Transformations and Gyrations

Two-port elements:

• elements with two power ports = two power bonds;

• ideal, power-continuous, two-port elements: the power flowing in from one port (input bond) ≡ the power flowing out from the other port (output bond) ⇒ cannot store energy inside.

• e.g. ideal transformer:

  • input and output bonds with positive power flow in and out;
  • external variables: (e_in, f_in) = power flowing in from the input port and (e_out, f_out) = power flowing out from the other port;
  • power continuity: P_in = e_in^T f_in = e_out^T f_out = P_out;
  • linear relation between one of the external variables on one port and one of the external variables on the other port;
  • flow-flow → ideal transformers, flow-effort → ideal gyrators.


Ideal Transformers

• relation: linear between the flows, with a dependent linear relation between the efforts;

• characterizing equation: f_out = n f_in, where n is the linear constant characterizing the transformer;

• power constraint: e_in = n e_out ⇔ e_out = (1/n) e_in
  ⇒ if the 2 ports belong to the same domain and n < 1, e_in < e_out but f_in > f_out;

• e.g. gear-boxes: e_in = torque applied on the pedal axis and f_in = angular velocity around the pedals, (e_out, f_out) on the back wheel;

• n relates the efforts in one way and the flows in the other way;

• if n is variable: modulated TF (extra arrow).


Ideal Gyrators

• linear constant between the effort of the output port and the flow of the input port: e_out = n f_in;

• power constraint: e_in = n f_out ⇔ f_out = (1/n) e_in;

• e.g. gyrative effect of a DC motor (electrical power flows in and mechanical power flows out): output torque τ = Ki, power continuity → u = Kω (e.m.f.):

  electrical domain (u, i) → GY: K → rotational domain (τ, ω)

• if n is variable: modulated gyrator.


Multi-bonds

• characteristic constant → matrix; if variable → modulated transformer or gyrator;

• Transformers:

  • TF, MTF;
  • f2 = N f1 ⇒ e1 = N^T e2 (using e1^T f1 = e2^T f2);

• Gyrators:

  • GY, MGY, SGY;
  • e2 = N f1 ⇒ e1 = N^T f2;
  • e = S f with S = −S^T = [ 0  −N^T ; N  0 ]
  • if N = identity matrix: symplectic gyrator SGY (algebraic relationship, can be used to dualize C into I).


Ideal Sources

• Supply energy: ideal flow source and ideal effort source.

• Only elements from which the power bond direction goes

out: Psource = eT f .

• Supply a certain effort or flow independently of the value

of their dual flow and effort.

• e.g. ideal voltage and current source in the electrical

domain


Kirchhoff’s Laws, Junctions and

the Network Structure

• How we place the bricks with respect to each other

determines the energy flows and dynamics

• Generalization of Kirchhoff’s laws, network structure→

constraints between efforts and flows

• Two basic BG structures: 1 junctions = flow junctions and

0 junctions = effort junctions

• Any number of attached bonds

• Power continuous (in = out)


1-junctions:

• flow junction: all connected bonds are constrained to have the same flow values;

• causality: only one bond sets the flow and all other bonds use it (constraint on the causal strokes);

• equations:

  f_i1 = ... = f_im = f_o1 = ... = f_on (flow equation),
  Σ_{k=1}^{m} e_ik = Σ_{k=1}^{n} e_ok (effort equation);

• mesh Kirchhoff's law in electrical networks: same current and the algebraic sum of potentials = 0.


Electrical example:

• same current → flow junction;

• all bonds point to R, C and I and the source bond points out → all signs are automatically correct;

• I (integral causality) "sets" the junction current (mesh) and the other elements have this current as input and their voltages as outputs;

• complete dynamics described by:

  • effort equation: Vs = Vr + Vc + Vl
  • I element: φ̇ = Vl and i = φ/L
  • C element: q̇ = i and Vc = q/C
  • R element: Vr = Ri
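The equation list above is directly simulable: evaluate the element laws, use the effort equation to get Vl, then integrate the two states φ and q. A sketch with hypothetical values R = 1, L = 0.1, C = 0.01 and a constant source Vs = 1:

```python
# Series RLC loop from the 1-junction equations; parameter values are
# made up. States: flux phi (I element) and charge q (C element).

R, L, C = 1.0, 0.1, 0.01
Vs = 1.0
phi, q = 0.0, 0.0
dt = 1e-5
for _ in range(200000):          # simulate 2 s (past the transient)
    i  = phi / L                 # I element: i = phi/L
    Vc = q / C                   # C element: Vc = q/C
    Vr = R * i                   # R element: Vr = R*i
    Vl = Vs - Vr - Vc            # effort equation: Vs = Vr + Vc + Vl
    phi += dt * Vl               # phidot = Vl
    q   += dt * i                # qdot = i

# at steady state the capacitor charges to Vs (Vc → 1) and i → 0
print(q / C, phi / L)
```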


0-junctions:

• effort junction: all connected bonds are constrained to have the same efforts;

• causality: only one bond sets e_in and all other bonds use it;

• equations:

  e_i1 = ... = e_im = e_o1 = ... = e_on (effort equation),
  Σ_{k=1}^{m} f_ik = Σ_{k=1}^{n} f_ok (flow equation);

• current Kirchhoff's law: in a node, the algebraic sum of currents = 0.


Effort difference:

• need the difference of two efforts to specify a power-consistent interconnection with other elements;

• all flows are the same and

  Σ_{k=1}^{m} e_ik = Σ_{k=1}^{n} e_ok ⇒ e1 = e2 + e3 ⇔ e3 = e1 − e2.


Flow difference:

• need the difference of two flows to specify a power-consistent interconnection with other elements;

• all efforts are the same and

  Σ_{k=1}^{m} f_ik = Σ_{k=1}^{n} f_ok ⇒ f1 = f2 + f3 ⇔ f3 = f1 − f2.


Bond Graph Modeling of

Electrical Networks

Algorithm:

1 for each node draw a 0-junction which corresponds to the

node potential;

2 for each bipole connected between two nodes, use effort

difference where a bipole is attached and connect the

ideal element to the 0-junction representing the difference.

3 choose a reference (v = 0) and attach an effort source

equal to zero to the corresponding 0-junction.

4 simplify:

• eliminate any junction with only 2 attached bonds that have the same continuing direction (one in and one out);

• fuse 1 and 0-junctions that are connected through a

single-bond;

• eliminate all junctions after the 0 reference source that do

not add any additional constraint.


Bond Graph Modeling of

Mechanical Systems

Algorithm:

1 for each moving mass draw a 1-junction = mass velocity;

2 add an additional 1-junction for inertial reference with an

attached Sf = 0;

3 for each inertia, attach a corresponding I element to the 1-junction corresponding to its velocity;

4 for each damper or spring: use a flow difference for ∆v and attach the element to the corresponding 1-junction;

5 simplify the graph by:

• eliminating all junctions with only two bonds in the same

continuing direction;

• fuse 1 and 0-junctions connected through a single-bond;

• eliminate all the junctions after the reference source which

do not add any additional constraints.


Examples

DC motor example

• 6 interconnected lumps:

• 2 storage elements with corresponding physical states

(φ, p): ideal inductor L and rotational inertia I → 2 states

and order 2 model;

• 2 dissipative elements: the resistor R and the friction b;

• 1 gyration effect K ;

• an ideal voltage source u.


• Elements equations:

  • storage elements and physical states:

    Inertia:  ṗ = τ_I,   ω = ∂E_I/∂p = ∂/∂p ( p²/(2I) ) = p/I

    Inductor: φ̇ = u_l,   i = ∂E_L/∂φ = ∂/∂φ ( φ²/(2L) ) = φ/L

  • dissipation (linear): u_r = Ri and τ_b = bω (dissipating torque);

  • gyration equations: τ = Ki and u_m = Kω.


• Network interconnection:

  • use the previous algorithms to describe the electrical and mechanical parts;
  • introduce the gyrator to connect the two domains → inter-domain element;

  (a) preliminary diagram drawing:
    • 0-junctions of the electrical part to indicate the connection points of the bipoles;
    • mechanical part: 1-junctions = angular rotation of the wheel and reference inertial frame (source);
    • gyrator = relation from flow i to effort τ ⇒ 1- to 0-junction;
    • torque applied between the wheel and the ground.

  • simplifications:
  (b) eliminate the two zero sources and the attached junctions;
  (c) eliminate any junction with only two bonds attached to it;
  (d) merge all the directly communicating junctions of the same type.


Intuitively:

• electrical part = series connection of the source, resistor, inductor and the electrical side of the gyrator → 1-junction;

• mechanical part: only the velocity ω is present; the motor applies a torque to the wheel, but part of it is "stolen" by the dissipating element.

• final equations ⇒ LTI state-space form:

  ṗ = τ_I = τ − τ_b = Ki − bω = (K/L)φ − (b/I)p,
  φ̇ = u_l = −u_m − u_r + u = −(K/I)p − (R/L)φ + u

  d/dt (p, φ)^T = A (p, φ)^T + B u,   A = [ −b/I   K/L ; −K/I   −R/L ],   B = (0, 1)^T

  y = ω = C (p, φ)^T,   C = (1/I   0)
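The LTI model above can be integrated with a few lines of code; the parameter values below (I, L, R, K, b) are hypothetical, chosen only to illustrate the steady-state torque/voltage balances K·i = b·ω and u = R·i + K·ω:

```python
# DC-motor state-space model from the bond graph: states p (angular
# momentum) and phi (flux), constant input voltage u = 1. All parameter
# values are made up for the sketch.

I, L, R, K, b = 0.01, 0.5, 1.0, 0.05, 0.1
p, phi = 0.0, 0.0
dt = 1e-4
for _ in range(100000):                        # 10 s, reaches steady state
    pdot   = (K / L) * phi - (b / I) * p       # pdot = K*i - b*omega
    phidot = -(K / I) * p - (R / L) * phi + 1.0  # phidot = -K*omega - R*i + u
    p   += dt * pdot
    phi += dt * phidot

omega = p / I          # output y = omega = p/I
i_ss  = phi / L        # current i = phi/L
# at steady state: K*i = b*omega and u = R*i + K*omega
print(omega, i_ss)
```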


Multidimensional example

• two point masses connected by an elastic translational

spring and a damper;


• bond graph:

  • note: all bonds attached to a 1-junction have the same flow and all attached to a 0-junction the same effort;
  • ":: E(q)" = energy function, with energy variable q = (p1, p2) for the I elements and the position difference ∆x for the elastic element;
  • ideal sources → constant force = gravitation for each mass;
  • ": b" for the dissipative element indicates Fr = b(v2 − v1).


Conclusions

Bond graphs:

• Provide a systematic approach to multiphysics modeling

• Based on the fundamental laws of energy conservation

• Fundamental theory = port-Hamiltonian systems

• Used in industry with dedicated numerical solvers (e.g.

20-Sim)

• Needs practice!


Homework 3

Draw the bond graph model of the printer belt pulley problem

introduced in Lesson 3 and check that you obtain the same

equations.


Reference

1 S. Stramigioli, Modeling and IPC Control of Interactive Mechanical Systems: A Coordinate-free Approach, Springer, LNCIS 266, 2001.

2 L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.


Modeling and Estimation for Control: Simulation

Lecture 5: Computer-aided modeling

Emmanuel WITRANT

[email protected]

April 22, 2014


Introduction

ẋ = f(x, u)

y = h(x, u)

• Can contain complex calculations.

• Computer assistance?

• Computer algebra.

• 2 systematic ways to state-space: algebraic and bond

graphs.

• Numerical limitations.


Outline

1 Computer Algebra

2 Analytical Solutions

3 Algebraic Modeling

4 An Automatic Translation of Bond Graphs to Equations

5 Numerical Methods - a short glance


Computer Algebra

• Methods for manipulating mathematical formulas (as opposed to numerical calculations).

• Numerous software packages: Macsyma, Maple, Reduce, Axiom, Mathematica...

• Examples of capabilities:

  • Algebraic expansions: (x + y)² = x² + 2xy + y²
  • Factorizations: x³ − y³ = (x − y)(x² + xy + y²)
  • Symbolic differentiation:
    ∂/∂z ( x²z + sin yz + a arctan z ) = x² + y cos yz + a/(1 + z²)
  • Symbolic integration:
    ∫ √(1 + x²) dx = (1/2)( arcsinh x + x √(x² + 1) )
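A true computer algebra system manipulates these expressions exactly; short of that, the differentiation identity above can at least be sanity-checked numerically at an arbitrary point (the values of x, y, z, a below are made up):

```python
# Finite-difference check of the symbolic differentiation result:
# d/dz [x^2*z + sin(y*z) + a*atan(z)] = x^2 + y*cos(y*z) + a/(1 + z^2)

import math

x, y, z, a = 1.2, 0.7, 0.3, 2.0

def f(z_):
    return x**2 * z_ + math.sin(y * z_) + a * math.atan(z_)

h = 1e-6
numeric  = (f(z + h) - f(z - h)) / (2 * h)     # central difference
symbolic = x**2 + y * math.cos(y * z) + a / (1 + z**2)
print(numeric, symbolic)                        # the two values agree
```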


Analytical Solutions

• May give interesting partial results, e.g. for

  ẋ1 = f1(x1, x2)
  ẋ2 = f2(x1, x2)

  the solution algorithm generates F(x1, x2) = C if possible, then continues from this to

  x1 = φ1(t), x2 = φ2(t).

• F is called the integral of the system; geometrically it is a path in the x1−x2 plane, but it does not contain velocity information.


Example: the pendulum

  θ̇ = ω
  ω̇ = −(g/l) sin θ

has the integral (1/2)ω² − (g/l) cos θ = C, which represents the energy (kinetic + potential) of the system.

Figure: Pendulum trajectories in the θ−ω plane
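That F is an integral of the system can be checked numerically: integrate the pendulum equations and verify that F stays constant along the trajectory. A sketch assuming g/l = 9.81 and release from 1 rad at rest:

```python
# Verify the pendulum integral F = w^2/2 - (g/l)*cos(theta) along an
# RK4-integrated trajectory; g/l and the initial condition are assumed.

import math

gl = 9.81                       # assumed ratio g/l

def rhs(s):
    th, w = s
    return (w, -gl * math.sin(th))

def rk4(s, dt):
    """One classical Runge-Kutta 4 step."""
    k1 = rhs(s)
    k2 = rhs((s[0] + dt/2*k1[0], s[1] + dt/2*k1[1]))
    k3 = rhs((s[0] + dt/2*k2[0], s[1] + dt/2*k2[1]))
    k4 = rhs((s[0] + dt*k3[0], s[1] + dt*k3[1]))
    return (s[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

F = lambda th, w: 0.5 * w**2 - gl * math.cos(th)

s = (1.0, 0.0)                  # released from 1 rad at rest
C0 = F(*s)
for _ in range(10000):          # 10 s with dt = 1e-3
    s = rk4(s, 1e-3)
print(C0, F(*s))                # F is (numerically) conserved
```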


Algebraic Modeling

→ Transform the equations into a convenient form.

Introduction of state variables for higher-order differential equations:

• Consider

  F(y, ẏ, ..., y^(n-1), y^(n); u) = 0,

• introduce the variables

  x1 = y, x2 = ẏ, ..., xn = y^(n-1),

• we get

  ẋ1 = x2, ẋ2 = x3, ..., ẋ_{n-1} = xn,
  F(x1, x2, ..., xn, ẋn; u) = 0

→ a state-space description, provided ẋn can be solved from the last equation.


Example

• Let

  (y^(3))² − ẏ²y⁴ − 1 = 0.

• With x1 = y, x2 = ẏ, x3 = ÿ, we get

  ẋ1 = x2
  ẋ2 = x3
  ẋ3² − x2²x1⁴ − 1 = 0

• The last equation can be solved for ẋ3 and gives

  ẋ1 = x2
  ẋ2 = x3
  ẋ3 = ±√( x2²x1⁴ + 1 )

Note: 2 cases if we don't know the sign of y^(3) = ẋ3 from the physical context.
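Once the highest derivative is solved for, the state-space form can be integrated directly. A sketch choosing the + branch (assumed known from physical context) and a made-up initial condition:

```python
# Forward-Euler integration of the state-space form above, + branch:
# x1dot = x2, x2dot = x3, x3dot = +sqrt(x2^2 * x1^4 + 1).

import math

x1, x2, x3 = 0.5, 0.0, 0.0     # assumed initial condition
dt = 1e-4
for _ in range(10000):          # simulate 1 s
    x1dot = x2
    x2dot = x3
    x3dot = math.sqrt(x2**2 * x1**4 + 1)   # + branch of the solved equation
    x1 += dt * x1dot
    x2 += dt * x2dot
    x3 += dt * x3dot
print(x1, x2, x3)               # all three states grow monotonically
```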


Systems of higher-order differential equations:

• two higher-order differential equations in 2 variables:

  F(y, ẏ, ..., y^(n-1), y^(n); v, v̇, ..., v^(m-1); u) = 0
  G(y, ẏ, ..., y^(n-1); v, v̇, ..., v^(m-1), v^(m); u) = 0

• introduce the variables

  x1 = y, x2 = ẏ, ..., xn = y^(n-1),
  x_{n+1} = v, x_{n+2} = v̇, ..., x_{n+m} = v^(m-1),

• we get

  ẋ1 = x2, ẋ2 = x3, ..., ẋ_{n-1} = xn,
  F(x1, x2, ..., xn, ẋn; x_{n+1}, ..., x_{n+m}; u) = 0,
  ẋ_{n+1} = x_{n+2}, ..., ẋ_{n+m-1} = x_{n+m},
  G(x1, x2, ..., xn; x_{n+1}, ..., x_{n+m}, ẋ_{n+m}; u) = 0

⇒ a state-space description if ẋn and ẋ_{n+m} can be solved from F and G.


• Example:
y'' + v'' + y v = 0   (1)
(y')²/2 + (v')²/2 − 1 = 0   (2)
Problem: the highest-order derivatives appear in the same equation.


• Solution:
• differentiating (2) twice gives (3);
• (1)×v' − (3) = (4);
• (4)×(v')², with v'v'' eliminated using (3), gives (5);
• eliminate v thanks to (2) → an equation in y only.
• Can be generalized to an arbitrary number of equations, provided all equations are polynomial in the variables and their derivatives.


An Automatic Translation of Bond Graphs to Equations

From a simple example:
• Introduce the state x = α f2 for the I element: x' = e2;
• imagine a list of equations with the ei and fi computed from v and x, with e1 = v first and f1 = f2 last (or f1 = f3):
e1 = v
...
f1 = f2


1) From the I element: f2 = x/α; its dual x' = e2 = e1 − e3 (junction output) goes second to last, so that e1 and e3 are calculated before:
e1 = v
f2 = (1/α) x
...
x' = e2 = e1 − e3
f1 = f2
2) What variables are defined by the first 2 equations? Junction → flows, and R:
e1 = v
f2 = (1/α) x
f3 = f2
e3 = β f3
x' = e2 = e1 − e3
f1 = f2
⇒ starting from v and x, all variables are evaluated in proper order.


• successive substitutions give a compact state-space description:
x' = e1 − e3 = e1 − β f3 = e1 − β f2 = e1 − (β/α) x = v − (β/α) x
→ choose 2 lists, forward and backward, instead of one.


Algorithms for Equation Sorting

1 Choose a source and write its input in the forward list and the equation of its dual in the backward list.
2 From adjacent bonds, if some variable is defined in terms of already calculated variables, write its equation in the forward list and the equation of the other bond variable in the backward list, as far as possible.
3 Repeat 1 and 2 until all sources have been treated.
4 Choose an I element and write the equation fi = (1/αi) xi in the forward list and xi' = ei = … in the backward list.
5 Do the analog of step 2.
6 Repeat 4 and 5 until all I elements have been processed.
7 Do the analog of steps 4, 5, and 6 for all C elements (ei = (1/βi) xi to the forward list and xi' = fi to the backward list).
8 Reverse the order of the backward list and put it after the forward list.


Example: DC motor

• State variables:
x1 = ∫^t v2 dτ = L1 i2,  x2 = ∫^t M2 dτ = J ω2


• Create the list:

Step | Forward list      | Backward list
1    | v1 = v            | i1 = i2
4    | i2 = (1/L1) x1    | x1' = v2 = v1 − v3 − v4
2    | i3 = i2           | v3 = R1 i3
2    | i4 = i2           | v4 = k ω1
2    | M1 = k i4         | ω1 = ω2
4    | ω2 = (1/J) x2     | x2' = M2 = M1 − M3
5    | ω3 = ω2           | M3 = φ(ω3)


• Reverse the backward list after the forward list:
v1 = v
i2 = (1/L1) x1
i3 = i2
i4 = i2
M1 = k i4
ω2 = (1/J) x2
ω3 = ω2
M3 = φ(ω3)
x2' = M2 = M1 − M3
ω1 = ω2
v4 = k ω1
v3 = R1 i3
x1' = v2 = v1 − v3 − v4
i1 = i2


• Eliminating all variables that are not states gives:
x1' = v − (R1/L1) x1 − (k/J) x2
x2' = (k/L1) x1 − φ(x2/J)


Numerical Methods

Physical model → state-space equations → scaling (same order of magnitude to avoid numerical problems) → impact of discretization in simulation.

Basis of Numerical Methods:
• Consider the state-space model
x' = f(x(t), u(t))
where x ∈ Rⁿ. For a fixed input u(t) = ū(t), u acts as a known time variation and
x' = f(t, x(t))
x(0) = x0
We want an approximation of x at 0 < t1 < t2 < · · · < tf → x1, x2, x3, … approximate x(t1), x(t2), x(t3), …


• Simplest algorithm: difference ratio = Euler's method:
(xn+1 − xn)/h ≈ x'(tn) = f(tn, xn), where h = tn+1 − tn
⇒ xn+1 = xn + h · f(tn, xn)
More generally,
xn+1 = G(t, xn−k+1, xn−k+2, …, xn, xn+1)
where k is the number of utilized previous steps → k-step method. If xn+1 does not appear in G: explicit method (e.g. Euler), otherwise implicit.
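A minimal sketch of the explicit Euler scheme in Python (illustrative code; the test problem x' = −x with exact solution exp(−t) is an arbitrary choice):

```python
import math

# Explicit Euler: x_{n+1} = x_n + h * f(t_n, x_n)
def euler(f, t0, x0, h, n_steps):
    t, x = t0, x0
    for _ in range(n_steps):
        x = x + h * f(t, x)
        t = t + h
    return x

# x' = -x, x(0) = 1, integrated to t = 1; exact value is exp(-1)
x_end = euler(lambda t, x: -x, 0.0, 1.0, 0.01, 100)
print(x_end, math.exp(-1.0))  # global error proportional to h
```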


Firn example: gas in open pores (1)

Impact of the convection term discretization on the trace gases mixing ratio at NEEM (EU hole).

Figure: CO2 in firn at NEEM EU, CO2 (ppm) vs. depth (m). For 100 ('· · ·'), 200 ('- - -') and 395 ('—') depth levels (∆z ≈ 0.8, 0.4 and 0.2 m, respectively): Lax-Wendroff (blue, reference), central (red) and first-order upwind (green).


Firn example: gas in open pores (2)

Impact of time discretization on the trace gases mixing ratio at NEEM (EU hole, ∆z = 0.2 m and a zoom on a specific region).

Figure: CO2 in firn at NEEM EU, CO2 (ppm) vs. depth (m). Explicit with a sampling time ts = 15 minutes (red), implicit (blue) with ts = 1 day ('—'), 1 week ('– – –') and 1 month ('- - -'), and implicit-explicit (green) with ts = 1 week ('—') and 1 month ('– – –').


Firn example: gas in open pores (3)

Averaged simulation time per gas associated with the proposed time-discretization schemes for NEEM EU (1800 to 2008, full close-off depth at 78.8 m, 12 gases, left) and South Pole 1995 (1500 to 1995, full close-off depth at 123 m), obtained on a PC laptop with an i5 540M processor (2.53 GHz, 3 MB):

Method           | ts      | ∆z (a)       | Simulation time (a)
Implicit         | 1 day   | 0.2 m        | 4.02 s / 22.25 s
Implicit         | 1 week  | 0.2 m        | 0.63 s / 3.91 s
Implicit         | 1 month | 0.2 m        | 0.26 s / 1.48 s
Explicit         | 15 min  | 0.2 m        | 5.09 min / 29.45 min
Explicit         | 30 min  | 0.4 / 0.61 m | 24.39 s / 1.34 min
Explicit         | 1 h     | 0.8 / 1.23 m | 7.19 s / 12.13 s
Imp-explicit (b) | 1 week  | 0.2 m        | 0.63 s / 3.77 s
Imp-explicit (b) | 1 month | 0.2 m        | 0.27 s / 1.48 s

(a): NEEM EU / South Pole; (b): Crank-Nicolson.


• Accuracy is determined by the global error
En = x(tn) − xn
but it is hard to compute → one-step (provided exact previous steps) local error
en = x(tn) − zn, zn = G(t, x(tn−k), x(tn−k+1), …, zn)
e.g. for Euler (xn+1 ≈ xn + h·f(tn, xn)):
en+1 = x(tn+1) − zn+1 = x(tn+1) − x(tn) − h·f(tn, x(tn)) = (h²/2) x''(ζ), for tn < ζ < tn+1
Note (Taylor):
x(tn+1) = x(tn) + h·f(tn, x(tn)) + (h²/2)·f'(tn, x(tn)) + O(h³)
→ local error proportional to h² and global error proportional to h (the number of steps is proportional to h⁻¹).
If the local error is O(h^(k+1)), k is the order of accuracy.


• Stability is also crucial, e.g.
x' = λx, λ ∈ C, x(0) = 1
with Euler: xn+1 = xn + hλxn = (1 + hλ)xn, which has the solution xn = (1 + hλ)^n.
It implies that
xn → 0 if |1 + hλ| < 1
|xn| → ∞ if |1 + hλ| > 1
→ stable if Re[λ] < 0 AND |1 + hλ| < 1 (h small enough)
→ the stability of the DE does not necessarily coincide with that of the numerical scheme!
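This can be checked numerically; the Python sketch below (illustrative, λ = −100 is an arbitrary choice) shows Euler decaying when |1 + hλ| < 1 and diverging when |1 + hλ| > 1:

```python
# Euler on x' = lam*x gives x_n = (1 + h*lam)^n: stable iff |1 + h*lam| < 1.
lam = -100.0

def euler_final(h, n):
    x = 1.0
    for _ in range(n):
        x = (1.0 + h * lam) * x
    return x

x_stable = euler_final(0.015, 100)   # |1 + h*lam| = 0.5 < 1 -> decays
x_unstable = euler_final(0.03, 100)  # |1 + h*lam| = 2 > 1 -> diverges
print(abs(x_stable), abs(x_unstable))
```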


The Runge-Kutta Methods:

Consider the integral form
x(tn+1) = x(tn) + ∫ from tn to tn+1 of f(τ, x(τ)) dτ
with the central (midpoint) approximation
xn+1 = xn + h·f(tn + h/2, x(tn + h/2))
and (Euler) x(tn + h/2) ≈ xn + (h/2) f(tn, xn). Consequently, we have the simplest Runge-Kutta algorithm
k1 = f(tn, xn),
k2 = f(tn + h/2, xn + (h/2) k1),
xn+1 = xn + h k2.
Local error x(tn+1) − xn+1 = O(h³) → one order of accuracy higher than Euler.


• General form:
k1 = f(tn, xn),
k2 = f(tn + c2 h, xn + h a21 k1),
k3 = f(tn + c3 h, xn + h(a31 k1 + a32 k2)),
...
ks = f(tn + cs h, xn + h(as1 k1 + · · · + a(s,s−1) k(s−1))),
xn+1 = xn + h(b1 k1 + · · · + bs ks),
where s, ci, bi and aij are chosen to obtain the desired order of accuracy p, calculation complexity or another criterion → family of Runge-Kutta methods.
• A classic method sets s = p = 4 with
c2 = c3 = 1/2, c4 = 1, a21 = a32 = 1/2, a43 = 1,
b1 = b4 = 1/6, b2 = b3 = 2/6, (others = 0)
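Written out, these s = p = 4 coefficients reduce to the familiar RK4 scheme; the Python sketch below (illustrative, not course code) applies it to x' = −x:

```python
import math

# Classic 4th-order Runge-Kutta step (s = p = 4 coefficients above)
def rk4_step(f, t, x, h):
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h * (k1 + 2*k2 + 2*k3 + k4) / 6

# x' = -x, x(0) = 1 over [0, 1] with h = 0.1: error ~ 3e-7,
# versus ~2e-2 for Euler with the same step.
t, x, h = 0.0, 1.0, 0.1
for _ in range(10):
    x = rk4_step(lambda t, x: -x, t, x, h)
    t += h
print(abs(x - math.exp(-1.0)))
```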


Adams' Methods:
• Family of multistep methods
xn = xn−1 + h Σ (j = 0 to k) βj f(n−j),  fi = f(ti, xi)
where the βj are chosen such that the order of accuracy is as high as possible. If β0 = 0: explicit form, Adams-Bashforth; if β0 ≠ 0: implicit form, Adams-Moulton, which attains one order higher accuracy for the same k.
• Simplest explicit forms:
k = 1: xn = xn−1 + h fn−1
k = 2: xn = xn−1 + (3fn−1 − fn−2) h/2
k = 3: xn = xn−1 + (23fn−1 − 16fn−2 + 5fn−3) h/12
k = 4: xn = xn−1 + (55fn−1 − 59fn−2 + 37fn−3 − 9fn−4) h/24
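A sketch of the k = 2 Adams-Bashforth formula in Python (illustrative; bootstrapping the second starting value with one Euler step is an assumption, since multistep methods need starting values from somewhere):

```python
import math

# Two-step Adams-Bashforth: x_n = x_{n-1} + (3 f_{n-1} - f_{n-2}) h/2
f = lambda t, x: -x          # test problem x' = -x, x(0) = 1
h = 0.01
x_prev, t_prev = 1.0, 0.0
x = x_prev + h * f(t_prev, x_prev)   # Euler start-up step
f_prev = f(t_prev, x_prev)
for n in range(2, 101):              # march to t = 1
    f_curr = f((n - 1) * h, x)
    x, f_prev = x + h * (3 * f_curr - f_prev) / 2, f_curr
print(abs(x - math.exp(-1.0)))  # second-order accurate
```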


• Simplest implicit forms:
k = 1: xn = xn−1 + h fn
k = 2: xn = xn−1 + (fn + fn−1) h/2
k = 3: xn = xn−1 + (5fn + 8fn−1 − fn−2) h/12
k = 4: xn = xn−1 + (9fn + 19fn−1 − 5fn−2 + fn−3) h/24
• Why use the more complicated implicit methods?
Figure: stability regions of (a) Adams-Bashforth (explicit), k = 1 (–), k = 2 (−−) and k = 3 (· · ·); (b) Adams-Moulton (implicit), k = 2 (–) and k = 3 (−−).
⇒ Larger stability regions. Note: increasing k reduces the stability region.


Variable Step Length:
• Fixed steps are often inefficient → large steps when changes are slow & small steps when they are rapid.
• Automatic adjustment based on a local error approximation: assume a local error
x(tn+1) − xn+1 = C h^(p+1) + O(h^(p+2))
where C depends on the (unknown) solution. For 2 steps of length h we have, approximately (errors are added),
x(tn+2) − xn+2 = 2C h^(p+1) + O(h^(p+2))   (1)
With x̄ the value computed for one step of length 2h from tn to tn+2:
x(tn+2) − x̄ = C(2h)^(p+1) + O(h^(p+2))   (2)
(2) − (1): xn+2 − x̄ = 2C h^(p+1) (2^p − 1) + O(h^(p+2))   (3)
C from (3) in (1): x(tn+2) − xn+2 = (xn+2 − x̄)/(2^p − 1) + O(h^(p+2))


Previous result:
x(tn+2) − xn+2 = (xn+2 − x̄)/(2^p − 1) + O(h^(p+2))
Assume O(h^(p+2)) negligible → a computable estimate of the error.
• The estimate can be used in several ways, in general:
decrease h if error > tolerance,
increase h if error ≪ tolerance.
Ideally, a given accuracy is obtained with minimum computational load.
• Crucial issue for embedded control and large-scale plants. Most of the time, use existing software/libraries.
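The step-doubling estimate above can be sketched in Python (illustrative; Euler is used so p = 1, and the problem and step size are arbitrary choices):

```python
import math

# Step-doubling error estimate: compare two steps of length h with one
# step of length 2h; the estimate is (x_{n+2} - x_bar) / (2^p - 1).
f = lambda t, x: -x
p = 1                                  # Euler's order of accuracy

def euler_step(t, x, h):
    return x + h * f(t, x)

t, x, h = 0.0, 1.0, 0.1
x_two = euler_step(t + h, euler_step(t, x, h), h)  # two steps of h
x_bar = euler_step(t, x, 2 * h)                    # one step of 2h
est = (x_two - x_bar) / (2**p - 1)                 # error estimate
true_err = math.exp(-(t + 2 * h)) - x_two          # actual error
print(est, true_err)  # same sign and order of magnitude
```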


Stiff differential equations:
• Both fast and slow components and a large difference between the time constants, e.g.
x' = [ −10001  −10000 ; 1  0 ] x,  x(0) = [ 2 ; −1.0001 ]
has the solution
x1 = e^(−t) + e^(−10000t)
x2 = −e^(−t) − 0.0001 e^(−10000t).


• Problem: in simulation, we must start with a very small step to follow the fast term (e.g. e^(−10000t)), which soon goes to zero: the solution is then characterized only by the slow term. BUT increasing h implies stability problems (e.g. −10 000·h must stay within the stability region).
⇒ use methods that are always stable: a compromise with accuracy (implicit methods in general).
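The point can be illustrated on the fast mode alone (Python sketch; backward Euler stands in here for the implicit methods mentioned):

```python
# Explicit Euler must keep h < 2/10000 for the fast mode above even
# after e^{-10000 t} has vanished, while backward Euler,
# x_{n+1} = x_n / (1 - h*lam), is stable for any h > 0.
lam_fast = -10000.0
h = 0.001                  # h * |lam| = 10: far outside |1 + h*lam| < 1

x_exp = x_imp = 1.0
for _ in range(50):
    x_exp = (1.0 + h * lam_fast) * x_exp   # explicit Euler: diverges
    x_imp = x_imp / (1.0 - h * lam_fast)   # backward Euler: decays
print(abs(x_exp), x_imp)
```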


Comments about Choice of Methods:
• Runge-Kutta is most effective for low complexity (computational work), while Adams is better for high complexity;
• methods for stiff problems may be ineffective for nonstiff problems;
• the choice is problem dependent.


Conclusions

• First step:

• from the physical model (high order or bond graphs), write

the system in a state-space form

• investigate the behavior of the continuous dynamics, e.g.

nonlinearities, time-delays, time constants of the linearized

dynamics . . .

• Second step:

• discretize the dynamics to get computable difference

equations

• check on the impact of the discretization step

• advanced methods with predictor/corrector schemes

• From experience:

• hand-made discretizations are often more tractable

• when doing “equivalent” model transformations, they are

more equivalent in the discretized framework


Homework 4

a. Write the state-space description for:

• Example 1:

y + v2 + y = 0

y2 + v + vy = 0

• Example 2:

y + v3 + v2 + y = 0

y2 + v + vy = 0


b. Numerical methods
Consider the differential equation
y''(t) − 10π² y(t) = 0
y(0) = 1, y'(0) = −√10 π
1 Write this equation in state-space form.
2 Compute the eigenvalues.
3 Explain the difference between the exact and numerical solutions expressed in Table 8.6.3.


References

1 L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.
2 W.E. Boyce and R.C. DiPrima, Elementary Differential Equations and Boundary Value Problems, 6th edition, John Wiley & Sons, Inc., 1997.

Computer-aided modeling (E. Witrant)
Contents: Ordinary differential equations (simulating ODEs, advanced call, more options, implicit diff. eq., linear systems) · Boundary value problems (simulating BVPs, COLNEW, optimal control) · Difference equations (simulation) · Differential algebraic equations (simulation, example) · Hybrid systems (simulation) · Conclusions

Modeling and Estimation for Control: Simulation
Lecture 6: Simulation with Scilab

Emmanuel WITRANT
[email protected]

April 22, 2014


Models for simulation

• Choose the appropriate simulation tool/function depending on the class of model.
• E.g. Scilab provides a wide array of tools for different models.
• Can use abbreviated commands and default parameters.
• Important to know the appropriate tools, how the algorithms are set up and how to face difficulties.

Simulation tools. Three forms:
1 primary tools used by knowledgeable users on challenging problems;
2 simplified versions, easier to use and for simpler problems;
3 special cases occurring in specific areas of science and engineering.


Outline

1 Ordinary differential equations

2 Boundary value problems

3 Difference equations

4 Differential algebraic equations

5 Hybrid systems


Ordinary differential equations (ODEs)

y' = f(t, y), y(t0) = y0
where y, f are vector valued and t ∈ R.
• Higher-order models can always be transformed into 1st order and directly simulated in Scilab, except boundary value problems.
• Unique solution if f and ∂f/∂y are continuous.
• The more continuous derivatives f(t, y) has, the more derivatives y has. In simulation, accuracy is obtained from error estimates that are based on derivatives.
• Controlled differential equation (DE):
y' = f(t, y, u(t))
y has only one more derivative than u → may create problems for piecewise continuous inputs.


Simulating ODEs: simplest call

y=ode(y0,t0,t,f)

• t0, y0, f(t, y) → default method and error tolerance, adjusted step size;
• many more solutions are computed than needed: also specify the final time vector t;
• returns y = [y(t0), y(t1), …, y(tn)];
• online function definition, e.g. for f(t, y) = −y + sin(t):

function ydot = f(t,y)
ydot=-y+sin(t)
endfunction

• interface to ODE solvers like ODEPACK.


Simulating ODEs: advanced call

[y,w,iw]=ode([type],y0,t0,t [,rtol [,atol]],f [,jac] [,w,iw])

• "type": lsoda (default, automatically selects between the nonstiff predictor-corrector Adams and the stiff backward difference formula BDF), adams, stiff (BDF), rk (adaptive Runge-Kutta of order 4), rkf (RK 45, highly accurate), fix (simpler rkf), root (lsodar with root finding), discrete.
• "rtol, atol": real constants or vectors, set the absolute and relative tolerance on y: εy(i) = rtol(i)·|y(i)| + atol(i); trades computational time vs. accuracy.
• "jac": external, analytic Jacobian (for BDF and implicit): J=jac(t,y).
• "w,iw": real vectors for storing information returned by the integration routine.


Simulating ODEs: more options

odeoptions[itask,tcrit,h0,hmax,hmin,jactyp,mxstep,

maxordn,maxords,ixpr,ml,mu]

• sets computation strategy, critical time, step size and

bounds, how nonlinear equations are solved, number of

steps, max. nonstiff and stiff order, half-bandwidths of

banded Jacobian.

• computational time and accuracy can vary greatly with the

method.


Simulating ODEs: Implicit differential equations

• A(t, y) y' = g(t, y), y(t0) = y0. If A is not invertible for all (t, y) of interest → implicit DAE; if invertible → linearly implicit DE or index-zero DAE.
• Better to handle it directly than to invert A (more efficient and reliable integration):

y=impl([type],y0,ydot0,t0,t [,atol, [rtol]],res,adda [,jac])

→ also requires y'(t0) and computing the residuals g(t, y) − A(t, y) y' as: r=res(t,y,ydot)


Simulating ODEs: Linear systems

• a number of specialized functions for
x' = Ax + Bu, x(0) = x0,
y = Cx + Du.
• [sl]=syslin(dom,A,B,C [,D [,x0]]) defines a continuous or discrete (dom) state-space system; the system values are recovered using [A,B,C,D]=abcd(sl);
• [y [,x]]=csim(u,t,sl,[x0]) → simulation (time response) of a linear system.

Simulating ODEs: Root finding

• to simulate a DE up to the time something happens;
• [y,rd [,w,iw]]=ode("root",y0,t0,t [,rtol [,atol]],f [,jac],ng,g [,w,iw]) integrates the ODE f until g(t, y) = 0;
• iteratively reduces the last step to find the surface crossing.


Boundary value problems (BVPs)

• DE with information given at 2 or more times:
y' = f(t, y), t0 ≤ t ≤ tf,
0 = B(y(t0), y(tf)).
If y is n-dimensional → n boundary conditions.
• More complicated than initial value problems (cf. Optimization class), where a local algorithm moves from one point to the next.
• BVP: need a more global algorithm over the full t interval → much larger system of equations.


Simulating BVPs: Numerous methods

1 shooting methods: take the given ICs, then guess the rest and adjust by integrating over the full interval → easy to program but not reliable on long intervals and stiff problems;
2 multiple shooting: breaks the time interval into subintervals and shoots over these;
3 discretize the DE and solve the large discrete system, e.g. Euler with step h on y' = f(t, y), t0 ≤ t ≤ tf, 0 = B(y(t0), y(tf)) gives:
y(i+1) − y(i) − h f(t0 + ih, y(i)) = 0, i = 0, …, N − 1,
B(y0, yN) = 0.
usually with more complicated methods than Euler, but a large system of (nonlinear) equations → the BVP solver has to deal with numerical problems and needs Jacobian-like information.


Simulating BVPs: COLNEW

Scilab uses the Fortran COLNEW code in bvode, which assumes that the BVP is of the form

d^(mi) ui / dx^(mi) = fi(x, u(x), du/dx, …, d^(m*−1)u/dx^(m*−1)), 1 ≤ i ≤ nc,
gj(ζj, u(ζj), …, d^(m*−1)u/dx^(m*−1)) = 0, j = 1, …, m*,

where the ζj are the points x where the BCs hold and aL ≤ x ≤ aR.
Let m* = m1 + m2 + · · · + mnc and z(u) = [u, du/dx, …, d^(m*−1)u/dx^(m*−1)]; then

d^(mi) ui / dx^(mi) = fi(x, z(u(x))), 1 ≤ i ≤ nc, aL ≤ x ≤ aR,
gj(ζj, z(u(ζj))) = 0, j = 1, …, m*.

bvode starts with an initial mesh, solves the nonlinear system and iteratively refines the mesh.


Simulating BVPs: COLNEW implementation

[z]=bvode(points,ncomp,m,aleft,aright,zeta,ipar,ltol,
...tol,fixpnt,...fsub1,dfsub1,gsub1,dgsub1,guess1)

• solution z evaluated at the given points, for ncomp ≤ 20 DEs;
• we have to provide the interval bounds (aleft, aright), the BCs and the numerical properties of the model.


Simulating BVPs: Example - optimal control

Necessary conditions: consider the NL controlled system
y' = y² + v,  J(y, v) = ∫ from 0 to 10 of (10v² + y²) dt
Find v: y(0) = 2 → y(10) = −1, while minimizing J. The NCs are found from the Hamiltonian and give the BV DAE
y' = y² + v,
λ' = −2y − 2λy,
0 = 20v + λ,
y(0) = 2, y(10) = −1.
⇒ BVP:
y' = y² − λ/20,
λ' = −2y − 2λy,
y(0) = 2, y(10) = −1.
Ready to be solved by bvode.


Difference equations

• Discrete-time values or values changing only at discrete

times, for discrete processes or because of isolated

observations.

• Integer variable k and sequence y(k) that solves

y(k + 1) = f(k , y(k)), y(k0) = y0,

or with time sequence tk , k ≥ k0:

z(tk+1) = g(tk , z(tk )), z(tk0) = z0.

If evenly spaced events tk+1 − tk = h = cst :

v(k + 1) = g(w(k), v(k)), v(k0) = v0,

w(k + 1) = w(k) + h, w(k0) = tk0


Difference equations (2)

• Solution existence is simpler than for DEs: y(k) is computed recursively from y(k0) as long as (k, y(k)) ∈ Df.
• Note: the uniqueness theorem for DEs (if 2 solutions start at the same time but with different y0, and continuity of f, fy holds, then they never intersect) is not true for difference equations.
• Can always be written as 1st-order difference equations.


Simulating difference equations

1 Easier because there is no choice about the time step and no error from derivative approximations → only function evaluations and roundoff errors.
2 First order y(k + 1) = f(k, y(k)), y(k0) = y0, evaluated by y=ode("discrete",y0,k0,kvect,f) where kvect = evaluation times.
3 Linear systems
x(k + 1) = Ax(k) + Bu(k), x(0) = x0,
y(k) = Cx(k) + Du(k),
• [X]=ltitr(A,B,U,[x0]) or [xf,X]=ltitr(A,B,U,[x0]);
• if given by a transfer function: [y]=rtitr(Num,Den,u [,up,yp]) where [,up,yp] are past values, if any;
• time response obtained using [y [,x]]=flts(u,sl [,x0]).
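Conceptually, simulating a first-order difference equation is plain recursion, as in this Python sketch (illustrative, mirroring what a discrete-time solver call computes):

```python
# y(k+1) = f(k, y(k)), y(k0) = y0: evaluate by direct recursion.
def simulate(f, y0, k0, k_end):
    ys = [y0]
    y = y0
    for k in range(k0, k_end):
        y = f(k, y)
        ys.append(y)
    return ys

ys = simulate(lambda k, y: 0.5 * y, 8.0, 0, 3)  # y(k+1) = y(k)/2
print(ys)  # [8.0, 4.0, 2.0, 1.0]
```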


Differential algebraic equations (DAEs)

• Most physical models are differential + algebraic (DAEs):
F(t, y, y') = 0
→ rewrite as an ODE or a simpler DAE, or simulate the DAE directly.
• The theory is much more complex than for ODEs: solutions exist only for certain ICs, called consistent ICs, e.g.
y1' = y1 − cos(y2) + t,
0 = y1³ + y2 + e^t,
requires y1(t0)³ + y2(t0) + e^(t0) = 0.


Differential algebraic equations (2)

• Structure → index definition (≥ 0; index 0 for an ODE). Index-one DAEs in Scilab: F(t, y, ẏ) = 0, where (F_ẏ, F_y) is an index-one matrix pencil along solutions and F_ẏ has constant rank:

1 implicit semi-explicit:

F1(t, y1, y2, ẏ1) = 0
F2(t, y1, y2) = 0

where ∂F1/∂ẏ1 and ∂F2/∂y2 are nonsingular; y1 is the differential variable and y2 the algebraic one;

2 semi-explicit:

ẏ1 = F1(t, y1, y2)
0 = F2(t, y1, y2)

with ∂F2/∂y2 nonsingular.


Simulating DAEs

• Information on both y(t0) and ẏ(t0) is needed to uniquely determine the solution and start the integration: e.g. tan(ẏ) = −y + g(t) → family of differential equations ẏ = tan⁻¹(−y + g) + nπ. Sometimes only an approximate value of ẏ(t0) is available, or none at all.

• Scilab uses backward differentiation formulas (BDF): backward Euler on F(t, y, ẏ) = 0 gives

F(t_{n+1}, y_{n+1}, (y_{n+1} − y_n)/h) = 0

→ given y_n, iterative resolution using the Jacobian with respect to y_{n+1}: F_y + (1/h) F_ẏ.

• Based on the DASSL code (for nonlinear fully implicit index-one DAEs):

[r [,hd]]=dassl(x0,t0,t [,atol,[rtol]],res [,jac] [,info] [,hd])

where x0 is y0 [ydot0], res returns the residual r=g(t,y,ydot) and info sets computation properties.
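The BDF-1 step described above can be sketched in a few lines. This toy Python stand-in (not DASSL's actual variable-order, variable-step code) applies backward Euler plus Newton iteration with the Jacobian F_y + (1/h)F_ẏ to the scalar test problem F(t, y, ẏ) = ẏ + y = 0, whose exact solution is e^{−t}:

```python
import math

# One BDF-1 (backward Euler) step on a fully implicit scalar problem
# F(t, y, ydot) = 0, solved by Newton iteration with the Jacobian
# F_y + (1/h) F_ydot, as in the slide.

def F(t, y, ydot):               # test problem: ydot + y = 0, y(0) = 1
    return ydot + y

def bdf1_step(tn, yn, h, iters=5):
    y = yn                       # initial guess for y_{n+1}
    for _ in range(iters):
        ydot = (y - yn) / h      # backward-difference approximation
        J = 1.0 + 1.0 / h        # F_y + (1/h) F_ydot for this F
        y -= F(tn + h, y, ydot) / J
    return y

t, y, h = 0.0, 1.0, 0.01
for _ in range(100):             # integrate up to t = 1
    y = bdf1_step(t, y, h)
    t += h
print(y, math.exp(-1.0))         # backward Euler result vs. exact e^{-1}
```

Since F is linear here, Newton converges in one iteration; for a genuinely nonlinear residual the same loop would simply need a few iterations per step.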


Example: tan(ẏ) = −y + 10t cos(3t), y(0) = 0

ẏ(0) = nπ is a consistent IC → compare ẏ(0) = 0, π, 2π.

Implicit ODEs and fully implicit DAEs can have multiple nearby roots → the integrator must ensure that a step does not jump onto another solution (conservatism in the step-size choice).

DAEs and root-finding:

[r,nn [,hd]]=dasrt(x0,t0,t [,atol,[rtol]],res [,jac],ng,surf [,info] [,hd]): just add the intersection surface.


Hybrid systems

• Mixture of continuous- and discrete-time events.

• When an event (discrete variable change) occurs: change of differential equation, state dimension, ICs (initialization problem). . .

• Events interfere with the error control of the integrators.

• Handled in Scilab and more particularly in Scicos.


Simulating hybrid systems

• Continuous variable yc and discrete variable yd (piecewise constant on [tk, tk+1[):

ẏc(t) = f0(t, yc(t), yd(t)), t ∈ [tk, tk+1[
yd(tk+1) = f1(t, yc(tk+1), yd(tk)) at t = tk+1

e.g. a sampled-data system (u is a control function):

ẏc(t) = f0(t, yc(t), u(t)), t ∈ [tk, tk+1[,
u(tk+1) = f1(t, yc(tk+1), u(tk)) at t = tk+1.

• yt=odedc(y0,nd,stdel,t0,t,f), where y0=[y0c;y0d], stdel=[h, delta] with delta=delay/h, and yp=f(t,yc,yd,flag).
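The sampled-data loop behind odedc can be mimicked directly; a minimal Python sketch, where forward Euler stands in for the continuous solver and f0/f1 are illustrative choices (not taken from the slides):

```python
# Sampled-data hybrid loop: the continuous state yc evolves on
# [tk, tk+1[ while the discrete input u stays constant, and u is
# updated at each sampling instant tk+1.

def f0(t, yc, u):                # continuous dynamics: yc' = -yc + u
    return -yc + u

def f1(t, yc, u):                # discrete update: sampled feedback
    return -0.5 * yc

def simulate_hybrid(yc0, u0, h_sample, n_samples, n_sub=100):
    yc, u = yc0, u0
    dt = h_sample / n_sub
    for k in range(n_samples):
        t = k * h_sample
        for i in range(n_sub):            # continuous phase (Euler)
            yc += dt * f0(t + i * dt, yc, u)
        u = f1(t + h_sample, yc, u)       # event at t = tk+1
    return yc, u

yc, u = simulate_hybrid(yc0=1.0, u0=0.0, h_sample=0.5, n_samples=20)
print(yc)   # the sampled feedback drives yc towards 0
```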


Conclusions

• Implementing the equations needs some dedicated thinking.

• The expected results must be understood prior to computation.

• Trade-offs:
  • computation time vs. precision;
  • mathematical simplicity vs. code efficiency.

• Particularly challenging for real-time modeling.

• Code is meant to be passed on to other people: the structure and comments have to be clear!


References

1 S. Campbell, J-P. Chancelier and R. Nikoukhah, Modeling

and Simulation in Scilab/Scicos, Springer, 2005.

2 Scilab website: http://www.scilab.org.


Signals for

System

Identification

E. Witrant

Basics of

System

Identification

Sampled

Signals

Discrete-time

Sampled systems

Sampling

Disturbance

Modeling

Discrete-time

stochastic processes

Some background

Signal Spectra

Quasi-Stationary

Signals

Definition of Spectra

Transformation by

linear systems

Spectral factorization

Filtering and spectrum

Sampling

Interval and

Filters

Aliasing

Antialiasing filters

Noise-reduction effect

Conclusions

Homework

Modeling and Estimation for Control: System Identification

Lecture 7: Signals for System Identification

Emmanuel WITRANT
[email protected]

April 23, 2014


Basics of System Identification

System identification = the use of data in modeling

• Include experimental data in the modeling work.

• Used to find constants or to complete the model.

• Based on the system variables: inputs, outputs and possibly disturbances.
→ understand how the system works, describe partial systems and compute the values of the constants.

• Three different ways to use identification for modeling:

1 make simple experiments to facilitate structuring the problem (phase 1);

2 describe I/O relationships independently of physical insight (often linear);

3 use data to determine unknown parameters in physical models: tailor-made models.


Estimate the system from measurements of u(t) and y(t).

Many issues:

• choice of the sampling frequency and of the input signal (experimental conditions);

• which class of models, and how to model the disturbances?

• estimating model parameters from sampled, finite and noisy data.


Outline

1 From Continuous Dynamics to Sampled Signals

2 Disturbance Modeling

3 Signal Spectra

4 Choice of Sampling Interval and Presampling Filters

⇒ Introduction to signal analysis and processing


From Continuous Dynamics to Sampled Signals

Continuous-time signals and systems

Continuous-time signal: y(t)
Fourier transform: Y(ω) = ∫_{−∞}^{∞} y(t)e^{−iωt} dt
Laplace transform: Y(s) = ∫_{−∞}^{∞} y(t)e^{−st} dt
Linear system: y(t) = g ∗ u(t), Y(ω) = G(ω)U(ω), Y(s) = G(s)U(s)

The derivation operator p, with p u(t) = u̇(t), works like the s-variable, but in the time domain.

Example (zero ICs): y(t) = 0.5u̇(t) + u(t) ⇔ y(t) = (0.5p + 1)u(t) ⇔ Y(s) = (0.5s + 1)U(s)


Discrete-time signals and systems

Discrete-time signal: y(kh)
Fourier transform: Y^{(h)}(ω) = h ∑_{k=−∞}^{∞} y(kh)e^{−iωkh}
z-transform: Y(z) = ∑_{k=−∞}^{∞} y(kh)z^{−k}
Linear system: y(kh) = g ∗ u(kh), Y^{(h)}(ω) = G_d(e^{iωh})U^{(h)}(ω), Y(z) = G_d(z)U(z)

The shift operator q, with q u(kh) = u(kh + h), works like the z-variable, but in the time domain.

Example (zero ICs): y(kh) = 0.5u(kh) + u(kh − h) ⇔ y(kh) = (0.5 + q^{−1})u(kh) ⇔ Y(z) = (0.5 + z^{−1})U(z)
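The shift-operator example is easy to check numerically; a short Python sketch (h taken as 1, helper name hypothetical):

```python
# The filter y(kh) = (0.5 + q^{-1}) u(kh) combines the current and the
# previous input sample; direct evaluation with zero initial conditions.

def apply_filter(u):
    """y[k] = 0.5*u[k] + u[k-1], with u[-1] = 0."""
    y, u_prev = [], 0.0
    for uk in u:
        y.append(0.5 * uk + u_prev)
        u_prev = uk
    return y

y_imp = apply_filter([1.0, 0.0, 0.0])   # impulse response
print(y_imp)                            # [0.5, 1.0, 0.0]
```

The impulse response reads off the coefficients of 0.5 + z^{−1}, as the z-transform relation Y(z) = (0.5 + z^{−1})U(z) predicts.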


Sampled systems

Continuous-time linear system:

ẋ(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

⇒ G(s) = C(sI − A)^{−1}B + D.

Assume that we sample the inputs and outputs of the system: what is the relation between the sampled inputs u[k] and outputs y[k]?


Sampled systems (2)

Systems with a piecewise constant input:

• An exact relation is possible if u(t) is constant over each sampling interval.

• Solving the system equations over one sampling interval gives

x[k + 1] = A_d x[k] + B_d u[k]
y[k] = Cx[k] + Du[k]
G_d(z) = C(zI − A_d)^{−1}B_d + D

where A_d = e^{Ah} and B_d = ∫_0^h e^{As}B ds.


Sampled systems (3)

Example: sampling of a scalar system

• Continuous-time dynamics: ẋ(t) = ax(t) + bu(t)

• Assuming that the input u(t) is constant over a sampling interval,

x[k + 1] = a_d x[k] + b_d u[k]

where a_d = e^{ah} and b_d = ∫_0^h e^{as}b ds = (b/a)(e^{ah} − 1).

• Note: continuous-time pole at s = a, discrete-time pole at z = e^{ah}.
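The scalar zero-order-hold formulas can be verified against a brute-force integration; a Python sketch (the numerical values of a, b, h, u are arbitrary illustrations):

```python
import math

# Verify a_d = e^{ah} and b_d = (b/a)(e^{ah} - 1): one exact discrete
# step must match a fine Euler integration of xdot = a x + b u with the
# input u held constant over [0, h].

a, b, h, x0, u = -2.0, 3.0, 0.1, 1.0, 0.5

ad = math.exp(a * h)
bd = (b / a) * (math.exp(a * h) - 1.0)
x_exact = ad * x0 + bd * u

x, n = x0, 10000                 # brute-force reference over one interval
for _ in range(n):
    x += (h / n) * (a * x + b * u)

print(x_exact, x)                # the two values agree closely
```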


Sampled systems (4)

Frequency-domain analysis of sampling

• The transfer function of the sampled system, G_d(z) = C(zI − A_d)^{−1}B_d + D, produces the same output as G(s) at the sampling instants.

• However, the frequency responses are not the same! One has

|G(iω) − G_d(e^{iωh})| ≤ ωh ∫_0^∞ |g(τ)| dτ

where g(τ) is the impulse response of G(s).

⇒ Good match at low frequencies (ω < 0.1ω_s) ⇒ choose a sampling frequency > 10× the system bandwidth.


Sampling of general systems

• For more general systems,
  • nonlinear dynamics, or
  • linear systems where the input is not piecewise constant,
the conversion from continuous to discrete time is not trivial.

• Simple approach: approximate the time derivative by a finite difference:

p ≈ (1 − q^{−1})/h (Euler backward)
p ≈ (q − 1)/h (Euler forward)
p ≈ (2/h)(q − 1)/(q + 1) (Tustin's approximation, typical for linear systems)
. . .

• I.e. write x(t_k) = x(t_{k−1}) + ∫_{t_{k−1}}^{t_k} f(τ) dτ and recover the previous transformations from different approximations of the integral.
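Applied to the pole of a first-order system, the three substitutions give three different discrete poles; a Python sketch comparing them with the exact sampling map z = e^{ah} (a and h chosen for illustration):

```python
import math

# The derivative substitutions map the continuous pole s = a of a
# first-order system to different discrete poles in z; compare with the
# exact mapping z = e^{ah}.

a, h = -1.0, 0.1

z_exact    = math.exp(a * h)                      # exact pole mapping
z_backward = 1.0 / (1.0 - a * h)                  # p ≈ (1 - q^{-1})/h
z_forward  = 1.0 + a * h                          # p ≈ (q - 1)/h
z_tustin   = (1 + a * h / 2) / (1 - a * h / 2)    # p ≈ (2/h)(q-1)/(q+1)

print(z_exact, z_backward, z_forward, z_tustin)
# Tustin's approximation is the closest for small |a|h
```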


[Figure: numerical approximations of the integral, after F. Haugen '05]


Disturbance Modeling

• Discrete-time set-up ⇒ estimate G_d from the measurements y[k] and u[k]. The effect of disturbances is crucial: a disturbance model is needed!

• Basic observations:
  • disturbances differ from one time to another;
  • some characteristics (e.g., frequency content) persist.

• This can be captured by describing disturbances as filtered white noise, v(k) = H(q)e(k).


Discrete-time stochastic processes

• A discrete-time stochastic process is an infinite sequence v(k, θ) whose values depend on a random variable θ.

• For each fixed value θ∗ of θ, the sequence v(k, θ∗) depends only on k and is called a realization of the stochastic process.

• For a discrete-time stochastic process v[k], we define its

expected (mean) value: m_v(k) = E_θ v[k]
auto-correlation function: R_v(k, l) = E_θ v[k + l]v[k]

and say that v[k] is

stationary if m_v and R_v are independent of k;
ergodic if m_v and R_v can be computed from a single realization.


Some background

• Define the real random variable e as the possible outcome of an unpredictable experiment.

• Define f_e(x), the probability density function:

P(a ≤ e < b) = ∫_a^b f_e(x) dx

• The expectation is

E e = ∫_R x f_e(x) dx, or (discrete) E e = ∑_i x_i P[X = x_i]

• The covariance matrix is

Cov(e, y) = E[(e − E(e))(y − E(y))^T] = E(ey^T) − E(e)E(y)^T
= ∑_{i,j} (e_i − E(e))(y_j − E(y)) P[e = e_i, y = y_j] (discrete)


Some background (2)

White noise:

• A stochastic process e[k] is called white noise if m_e = 0 and

R_e(k, l) = σ² if l = 0, 0 otherwise.

An unpredictable sequence!

Signals and the auto-correlation function (ACF):

• Different realizations may look very different.

• Still, qualitative properties are captured:
  - slowly varying ACF ↔ slowly varying process;
  - quickly varying ACF ↔ quickly varying process.

• Close to white noise if R(l) → 0 rapidly as |l| grows.


Some background (3)

Properties of the auto-correlation function [Wikipedia]

• Symmetry: the ACF is even (R_f(−l) = R_f(l) if f ∈ R) or Hermitian (conjugate transpose, R_f(−l) = R_f*(l) if f ∈ C).

• Peak at the origin (|R_f(l)| ≤ R_f(0)); the ACF of a periodic function is periodic with the same period (Dirac at 0 for white noise).

• The ACF of a sum of uncorrelated functions (zero cross-correlation ∀l) is the sum of the ACFs of each function.

• Estimate: for a discrete process X_1, X_2, . . . , X_n with known mean µ and variance σ²:

R(l) ≈ 1/((n − l)σ²) ∑_{t=1}^{n−l} (X_t − µ)(X_{t+l} − µ), ∀ l < n ∈ N+

  • unbiased if the true µ and σ are used;
  • biased if the sample mean and variance are used instead;
  • the data set can be split to separate the µ and σ estimates from the ACF estimate.
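The estimator above is a one-liner in practice; a Python sketch (white Gaussian noise generated with a fixed seed, so µ = 0 and σ² = 1 are known; helper name hypothetical):

```python
import random

# ACF estimator with known mean mu and variance sigma2:
#   R(l) ≈ 1/((n-l) sigma2) * sum_t (X_t - mu)(X_{t+l} - mu)

def acf_estimate(x, l, mu, sigma2):
    n = len(x)
    s = sum((x[t] - mu) * (x[t + l] - mu) for t in range(n - l))
    return s / ((n - l) * sigma2)

random.seed(0)
e = [random.gauss(0.0, 1.0) for _ in range(20000)]   # white noise

r0 = acf_estimate(e, 0, 0.0, 1.0)
r5 = acf_estimate(e, 5, 0.0, 1.0)
print(r0, r5)   # peak near 1 at the origin, near 0 at nonzero lags
```

The output illustrates the two slide properties at once: the peak at the origin and the rapid decay of R(l) for a (near) white sequence.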


Signal Spectra

A common framework for deterministic and stochastic signals

• Signals are typically described as stochastic processes with deterministic components (deterministic inputs vs. stochastic disturbances).

• For a linear system with an additive disturbance e(t) (a sequence of independent random variables with m_e(t) = 0 and variance σ²),

y(t) = G(q)u(t) + H(q)e(t),

we have that

E y(t) = G(q)u(t),

so y(t) is not a stationary process.


Quasi-Stationary Signals (QSS)

Definition: s(t) is QSS if

1 E s(t) = m_s(t), |m_s(t)| ≤ C, ∀t (bounded mean);

2 E s(t)s(r) = R_s(t, r), |R_s(t, r)| ≤ C, and the limit

lim_{N→∞} (1/N) ∑_{t=1}^{N} R_s(t, t − τ) = R_s(τ), ∀τ (bounded autocorrelation)

exists, where E is taken with respect to the stochastic components of s(t).


Quasi-Stationary Signals (2)

• If s(t) is deterministic, then QSS means that s(t) is a bounded sequence such that

R_s(τ) = lim_{N→∞} (1/N) ∑_{t=1}^{N} s(t)s(t − τ)

exists (E has no effect).

• If s(t) is stationary, the bounds are trivially satisfied and R_s(τ) does not depend on t.

• Two signals s(t) and w(t) are jointly quasi-stationary if both are QSS and if the cross-covariance

R_sw(τ) = Ē s(t)w(t − τ), with Ē f(t) = lim_{N→∞} (1/N) ∑_{t=1}^{N} E f(t),

exists.

• The signals are uncorrelated if R_sw(τ) ≡ 0.


Definition of Spectra

• Power spectrum of s(t) (frequency content of a stochastic process, always real):

φ_s(ω) = ∑_{τ=−∞}^{∞} R_s(τ)e^{−iτω}

e.g. for white noise φ_s(ω) = σ²: the same power at all frequencies.

• Cross-spectrum between s(t) and w(t) (measures how two processes co-vary, in general complex):

φ_sw(ω) = ∑_{τ=−∞}^{∞} R_sw(τ)e^{−iτω}

ℜ(φ_sw) → cospectrum, ℑ(φ_sw) → quadrature spectrum, arg(φ_sw) → phase spectrum, |φ_sw| → amplitude spectrum.


Definition of Spectra (2)

• From the definition of the inverse Fourier transform:

E s²(t) = R_s(0) = (1/2π) ∫_{−π}^{π} φ_s(ω) dω

• Example (stationary stochastic process): for the process v(t) = H(q)e(t), the spectrum is φ_v(ω) = σ²|H(e^{iω})|².

• Example (spectrum of a mixed deterministic and stochastic signal): for the signal s(t) = u(t) + v(t), where u(t) is deterministic and v(t) a stationary stochastic process, the spectrum is φ_s(ω) = φ_u(ω) + φ_v(ω).
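The first example, φ_v(ω) = σ²|H(e^{iω})|², can be checked numerically for a simple filter; a Python sketch with the illustrative choice H(q) = 1 + c q^{−1}, whose squared magnitude expands to 1 + c² + 2c cos ω:

```python
import cmath, math

# Check phi_v(omega) = sigma2 * |H(e^{i omega})|^2 for H(q) = 1 + c q^{-1}.

c, sigma2 = 0.5, 2.0

def spectrum(omega):
    H = 1.0 + c * cmath.exp(-1j * omega)   # H evaluated on the unit circle
    return sigma2 * abs(H) ** 2

for omega in (0.0, 1.0, math.pi):
    closed_form = sigma2 * (1 + c * c + 2 * c * math.cos(omega))
    print(omega, spectrum(omega), closed_form)   # identical values
```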


Transformation of spectra by linear systems

• Theorem: let w(t) be QSS with spectrum φ_w(ω), let G(q) be stable and s(t) = G(q)w(t). Then s(t) is also QSS and

φ_s(ω) = |G(e^{iω})|²φ_w(ω)
φ_sw(ω) = G(e^{iω})φ_w(ω)

• Corollary: let y(t) be given by

y(t) = G(q)u(t) + H(q)e(t)

where u(t) is QSS and deterministic with spectrum φ_u(ω), and e(t) is white noise with variance σ². Let G and H be stable filters; then y(t) is QSS and

φ_y(ω) = |G(e^{iω})|²φ_u(ω) + σ²|H(e^{iω})|²
φ_yu(ω) = G(e^{iω})φ_u(ω)

⇒ We can use filtered white noise to model the character of disturbances!


Spectral factorization

• The previous theorem describes spectra as real-valued rational functions of e^{iω} obtained from the transfer functions G(q) and H(q). In practice: given a spectrum φ_v(ω), can we find H(q) such that v(t) = H(q)e(t) has this spectrum and e(t) is white noise? Exact conditions in [Wiener 1949] & [Rozanov 1967].

• Spectral factorization: suppose that φ_v(ω) > 0 is a rational function of cos(ω) (or e^{iω}); then there exists a monic (leading coefficient = 1) rational function of z, H(z), without poles or zeros on or outside the unit circle, such that

φ_v(ω) = σ²|H(e^{iω})|²


Spectral factorization (SF): ARMA process example

If a stationary process v(t) has a rational spectrum φ_v(ω), we can represent it as v(t) = H(q)e(t), where

H(q) = C(q)/A(q) = (1 + c_1 q^{−1} + · · · + c_{nc} q^{−nc}) / (1 + a_1 q^{−1} + · · · + a_{na} q^{−na}).

We may write the ARMA model:

v(t) + a_1 v(t − 1) + · · · + a_{na} v(t − na) = e(t) + c_1 e(t − 1) + · · · + c_{nc} e(t − nc)

• if nc = 0, autoregressive (AR) model: v(t) + a_1 v(t − 1) + · · · + a_{na} v(t − na) = e(t);

• if na = 0, moving average (MA) model: v(t) = e(t) + c_1 e(t − 1) + · · · + c_{nc} e(t − nc).

⇒ SF provides a representation of disturbances in the standard form v = H(q)e from information about the spectrum only.
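A quick numerical illustration of the MA case: for v(t) = e(t) + c e(t − 1) with unit-variance white noise, the theory gives R_v(0) = 1 + c², R_v(1) = c and R_v(l) = 0 for |l| > 1. A Python sketch (seeded noise, c chosen arbitrarily) reproduces this from a long realization:

```python
import random

# MA(1) disturbance v(t) = e(t) + c e(t-1): compare the empirical ACF of
# a long realization with the theoretical values 1 + c^2, c, 0.

random.seed(1)
c, n = 0.8, 100000
e = [random.gauss(0.0, 1.0) for _ in range(n + 1)]
v = [e[t] + c * e[t - 1] for t in range(1, n + 1)]

def R(l):
    return sum(v[t] * v[t + l] for t in range(n - l)) / (n - l)

print(R(0), 1 + c * c)   # both ≈ 1.64
print(R(1), c)           # both ≈ 0.8
print(R(2))              # ≈ 0
```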


Filtering and spectrum

• Consider the general set-up with u(k) and e(k) uncorrelated:

φ_y(ω) = |G(e^{iω})|²φ_u(ω) + φ_e(ω)
φ_yu(ω) = G(e^{iω})φ_u(ω)

• Note:
  • power spectra are additive if the signals are uncorrelated;
  • the cross-correlation can be used to get rid of the disturbances.


Choice of Sampling Interval and Presampling Filters

Sampling is inherent to computer-based data-acquisition systems → select the (equidistant) sampling instants so as to minimize the information losses.

Aliasing

Consider s(t) sampled with sampling interval T: s_k = s(kT), k = 1, 2, . . ., sampling frequency ω_s = 2π/T and Nyquist (folding) frequency ω_N = ω_s/2.

For a sinusoid with |ω| > ω_N, ∃ ω̄, −ω_N ≤ ω̄ ≤ ω_N, such that

cos(ωkT) = cos(ω̄kT), sin(ωkT) = sin(ω̄kT), k = 0, 1, . . .
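The folding identity is exact, since (ω − ω_s)kT differs from ωkT by a multiple of 2π; a Python sketch:

```python
import math

# A sinusoid above the Nyquist frequency produces exactly the same
# samples as one at the folded frequency omega - omega_s.

T = 1.0                        # sampling interval
omega_s = 2 * math.pi / T      # sampling frequency
omega_N = omega_s / 2          # Nyquist (folding) frequency

omega = 0.8 * omega_s          # above the Nyquist frequency
omega_alias = omega - omega_s  # folded into [-omega_N, omega_N]

for k in range(10):
    assert abs(math.cos(omega * k * T) - math.cos(omega_alias * k * T)) < 1e-9
    assert abs(math.sin(omega * k * T) - math.sin(omega_alias * k * T)) < 1e-9
print("samples at", omega, "and", omega_alias, "are indistinguishable")
```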


Aliasing (2)

⇒ Alias phenomenon: the part of the signal with ω > ω_N is interpreted as a lower frequency ↔ the spectrum of the sampled signal is a superposition (folding) of the original spectrum:

φ_s^{(T)}(ω) = ∑_{r=−∞}^{∞} φ_s^c(ω + rω_s)

where φ_s^c and φ_s^{(T)} correspond to the continuous-time and sampled signals.

To avoid aliasing: choose ω_s so that φ_s^c(ω) is zero outside (−ω_s/2, ω_s/2). This implies φ_s^{(T)}(ω) = φ_s^c(ω).


Antialiasing presampling filters

• We lose the signal content above ω_N; do not let the folding effect distort the interesting part of the spectrum below ω_N → presampling filters κ(p): s_F(t) = κ(p)s(t) ⇒ φ_{sF}^c(ω) = |κ(iω)|²φ_s^c(ω)

• Ideally, κ(iω) is such that

|κ(iω)| = 1, |ω| ≤ ω_N
|κ(iω)| = 0, |ω| > ω_N

which means that s_k^F = s_F(kT) would have spectrum

φ_{sF}^{(T)}(ω) = φ_s^c(ω), −ω_N ≤ ω < ω_N


Antialiasing presampling filters (2)

⇒ Sampled spectrum without aliasing thanks to the antialiasing filter, which should always be applied before sampling if there is significant energy above ωN.

• Example - Continuous-time signal: square wave plus high-frequency sinusoid


Noise-reduction effect of antialiasing (AA) filters

• Typically, signal = useful part m(t) + disturbances v(t) (more broadband, e.g. noise): choose ωs such that most of the useful spectrum is below ωN. The AA filter cuts away the high frequencies.

• Suppose s(t) = m(t) + v(t), and the sampled, prefiltered signal is s_k^F = m_k^F + v_k^F, s_k^F = sF(kT). Noise variance:

E(v_k^F)² = ∫_{−ωN}^{ωN} φ_vF^(T)(ω) dω = Σ_{r=−∞}^{∞} ∫_{−ωN}^{ωN} φ_vF^c(ω + r ωs) dω

→ noise effects from high frequencies are folded into the region [−ωN, ωN] and introduce noise power. Eliminating the high-frequency noise with the AA filter, the variance of v_k^F is thus reduced by

Σ_{r≠0} ∫_{−ωN}^{ωN} φ_v^c(ω + r ωs) dω = ∫_{|ω|>ωN} φ_v^c(ω) dω

compared to no presampling filter.

⇒ reduced noise if the spectrum has energy above ωN.
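The variance reduction can be illustrated with a quick numerical sketch (assumptions of mine, not from the lecture: white disturbance noise, a crude moving-average lowpass as the AA filter, decimation by M = 4):

```python
import numpy as np

# White noise has power at all frequencies: sampling without a prefilter
# folds everything above wN back into [-wN, wN], so the variance of the
# sampled noise stays ~1. A lowpass AA filter applied before sampling
# removes the out-of-band power first.
rng = np.random.default_rng(0)
v = rng.standard_normal(200_000)     # broadband disturbance, Ev^2 ~ 1
M = 4                                # keep every M-th sample

v_direct = v[::M]                    # no presampling filter

h = np.ones(M) / M                   # crude AA filter (moving average)
v_filtered = np.convolve(v, h, mode="same")[::M]

var_direct = v_direct.var()          # ~1.0: folded noise power kept
var_filtered = v_filtered.var()      # ~0.25: out-of-band power removed
```

The filtered-then-sampled noise keeps only the in-band part of the variance, as the formula above predicts.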


Conclusions

• First step to modeling and identification = data acquisition
• Implies computer-based processing and sampled signals
• Models include both deterministic and stochastic components
• Characterize the spectrum for analysis and processing
• Prepare the experimental signal prior to the identification phase


Homework

Spectrum of a sinusoid function:

u(t) = A cos(ω0 t)

1 Show that u(t) is a quasi-stationary signal by computing the bound Ru(τ).

2 Show that the power spectrum φu(ω) is composed of two Dirac δ functions.

Hint - you may wish to use the identities:

cos θ + cos φ = 2 cos((θ + φ)/2) cos((θ − φ)/2)

cos(ω0 τ) = (1/2)(e^{iω0τ} + e^{−iω0τ})

δ(x) = (1/2π) Σ_{n=−∞}^{∞} e^{inx}


References

1 L. Ljung, System Identification: Theory for the User, 2nd Edition, Information and System Sciences, (Upper Saddle River, NJ: PTR Prentice Hall), 1999.

2 L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.

3 Finn Haugen, Discrete-time signals and systems, TechTeach, 2005. http://techteach.no/publications/discretetime_signals_systems/discrete.pdf


Nonparametric identification

E. Witrant

Time-domain methods: Impulse-response, Step-response, Correlation

Frequency-response: Sine-wave testing, Correlation method, Relationship to Fourier

Fourier: ETFE definition, ETFE properties

Spectral: Smoothing the ETFE, Blackman-Tukey procedure, Frequency window, Asymptotic properties

Disturbance spectrum: Residual spectrum, Coherency spectrum

Conclusions

Homework

Modeling and Estimation for Control
System Identification

Lecture 8: Non-parametric Identification

Emmanuel WITRANT

[email protected]

April 23, 2014


Class goal

Linear time-invariant model

• described by transfer functions or impulse responses
• determine such functions directly, without restricting the set of models
• non-parametric: do not explicitly employ a finite-dimensional parameter vector in the search
• focus on determining G(q) from input to output


Outline

1 Transient-response and correlation analysis

2 Frequency-response analysis

3 Fourier analysis

4 Spectral analysis

5 Estimating the disturbance spectrum

6 Conclusions


Time-domain methods

Impulse-response analysis

Consider the system (input u(t), output y(t) and additive disturbance v(t)):

y(t) = Σ_{k=1}^{∞} g(k) u(t − k) + v(t) = G0(q) u(t) + v(t)

• pulse input u(t) = α for t = 0, u(t) = 0 for t ≠ 0 → y(t) = α g0(t) + v(t), by definition of G0 and of the impulse response g0(t)

• if v(t) is small, then the estimate is g(t) = y(t)/α, with error ǫ(t) = v(t)/α, from an experiment with a pulse input.

• Drawbacks:
  • most physical processes do not allow pulse inputs of such amplitude that ǫ(t) is negligible
  • nonlinear effects may be emphasized


Step-response analysis

Similarly, u(t) = α for t ≥ 0, u(t) = 0 for t < 0

• y(t) = α Σ_{k=1}^{t} g0(k) + v(t)

• g(t) = (y(t) − y(t − 1))/α and ǫ(t) = (v(t) − v(t − 1))/α

• results in
  ⊲ large errors in most practical applications
  ⊲ sufficient accuracy for control variables, i.e. time delay, static gain, dominant time-constants
  ⊲ simple regulator tuning (Ziegler-Nichols rule, 1942)
  ⊲ graphical parameter determination (Rake, 1980)
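Both transient-response experiments are easy to reproduce numerically. A small sketch (the second-order system is the one used in the lecture's later Matlab example, simulated here with a hand-written recursion and no disturbance v(t), so both estimates are exact):

```python
import numpy as np

def simulate(u):
    # y(t) - 1.5 y(t-1) + 0.7 y(t-2) = u(t-1) + 0.5 u(t-2), with v(t) = 0
    y = np.zeros(len(u))
    for t in range(len(u)):
        ym1 = y[t - 1] if t >= 1 else 0.0
        ym2 = y[t - 2] if t >= 2 else 0.0
        um1 = u[t - 1] if t >= 1 else 0.0
        um2 = u[t - 2] if t >= 2 else 0.0
        y[t] = 1.5 * ym1 - 0.7 * ym2 + um1 + 0.5 * um2
    return y

N, alpha = 50, 5.0
g_true = simulate(np.r_[1.0, np.zeros(N - 1)])     # true impulse response

# Impulse-response analysis: pulse of amplitude alpha, g(t) = y(t)/alpha
y_pulse = simulate(np.r_[alpha, np.zeros(N - 1)])
g_from_pulse = y_pulse / alpha

# Step-response analysis: step of amplitude alpha,
# g(t) = (y(t) - y(t-1))/alpha
y_step = simulate(alpha * np.ones(N))
g_from_step = np.r_[y_step[0], np.diff(y_step)] / alpha
```

With a disturbance added, the same estimates would carry the errors ǫ(t) = v(t)/α and ǫ(t) = (v(t) − v(t − 1))/α quoted on the slides.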


Correlation analysis

Consider again:

y(t) = Σ_{k=1}^{∞} g0(k) u(t − k) + v(t)

• If u is QSS with Ēu(t)u(t − τ) = Ru(τ) and Ēu(t)v(t − τ) = 0 (open loop), then

Ēy(t)u(t − τ) = Ryu(τ) = Σ_{k=1}^{∞} g0(k) Ru(k − τ)

• If u is a white noise s.t. Ru(τ) = α δτ0, then

g0(τ) = Ryu(τ)/α

⊲ An estimate of the impulse response is obtained from an estimate of Ryu(τ)
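A numerical sketch of the white-noise case (the short FIR system g0 is a hypothetical example of mine): with Ru(τ) = α δτ0, the sample cross-covariance divided by α recovers the impulse response directly.

```python
import numpy as np

# Correlation analysis with a white-noise input: Ru(tau) = alpha*delta(tau),
# hence g0(tau) = Ryu(tau)/alpha. True system: a short FIR filter.
rng = np.random.default_rng(1)
N = 200_000
u = rng.standard_normal(N)              # white noise, alpha = Eu^2 = 1
g0 = np.array([0.0, 1.0, 0.5, 0.25])    # g0(0..3); g0(0) = 0 (one delay)
y = np.convolve(u, g0)[:N]              # y(t) = sum_k g0(k) u(t-k)

alpha = u.var()
# Sample cross-covariance Ryu(tau) = (1/N) sum_t y(t) u(t-tau)
g_est = np.array([np.mean(y[tau:] * u[:N - tau])
                  for tau in range(len(g0))]) / alpha
```

The estimate converges to g0 at the usual 1/sqrt(N) rate.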


Example: N measurements

R̂Nyu(τ) = (1/N) Σ_{t=τ}^{N} y(t) u(t − τ)

If u is not white noise:

• estimate R̂Nu(τ) = (1/N) Σ_{t=τ}^{N} u(t) u(t − τ)

• solve R̂Nyu(τ) = Σ_{k=1}^{N} g(k) R̂Nu(k − τ) for g(k)

• if possible, choose u such that the equations in R̂Nu and R̂Nyu are easy to solve (typically done by commercial solvers).
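For a finite number of impulse-response coefficients, the correlation equations above truncate to a small linear system. A sketch (the colored input model and the FIR system are hypothetical examples of mine, with the sum truncated at M = 3 coefficients):

```python
import numpy as np

# Solve Ryu(tau) = sum_{k=1}^{M} g(k) Ru(k - tau), tau = 1..M, for g(k),
# using sample covariances from N data points and a colored (non-white) input.
rng = np.random.default_rng(2)
N = 400_000
e = rng.standard_normal(N + 2)
u = e[2:] + 0.8 * e[1:-1] + 0.3 * e[:-2]       # colored input (MA process)
g0 = np.array([1.0, 0.5, 0.25])                # true g0(k), k = 1, 2, 3
y = np.convolve(u, np.r_[0.0, g0])[:N]          # y(t) = sum_k g0(k) u(t-k)

def cov(a, b, tau):
    # sample covariance (1/N) sum_t a(t) b(t - tau), tau >= 0
    return np.mean(a[tau:] * b[:len(b) - tau])

M = 3
Ryu = np.array([cov(y, u, tau) for tau in range(1, M + 1)])
Ru = np.array([[cov(u, u, abs(k - tau)) for k in range(1, M + 1)]
               for tau in range(1, M + 1)])     # Ru is even in its argument
g_est = np.linalg.solve(Ru, Ryu)
```

The Ru matrix is Toeplitz and symmetric, which dedicated solvers exploit.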


Frequency-response analysis

Sine-wave testing

• physically, G(z) is such that G(e^{iω}) describes what happens to a sinusoid

• if u(t) = α cos ωt, t = 0, 1, 2, . . . then

y(t) = α|G0(e^{iω})| cos(ωt + φ) + v(t) + transient, where φ = arg G0(e^{iω})

⊲ G0(e^{iω}) is determined as follows:
  • from u(t), get the amplitude and phase shift of y(t)
  • deduce the estimate ĜN(e^{iω})
  • repeat for frequencies within the interesting band

• known as frequency analysis

• drawback: |G0(e^{iω})| and φ are difficult to determine accurately when v(t) is important


Frequency analysis by the correlation method

• since y(t) has a known frequency, correlate it out from the noise

• define the sums

IC(N) = (1/N) Σ_{t=1}^{N} y(t) cos ωt  and  IS(N) = (1/N) Σ_{t=1}^{N} y(t) sin ωt

• based on the previous expression for y(t) (ignoring the transient and expanding cos(a + b)):

IC(N) = (α/2)|G0(e^{iω})| cos φ
  + α|G0(e^{iω})| (1/2)(1/N) Σ_{t=1}^{N} cos(2ωt + φ)   [→ 0 as N → ∞]
  + (1/N) Σ_{t=1}^{N} v(t) cos(ωt)   [→ 0 as N → ∞ if v(t) does not contain ω]

• if v(t) is a stationary stochastic process s.t. Σ_{τ=0}^{∞} τ|Rv(τ)| < ∞, then the variance of the third term decays like 1/N


• similarly,

IS(N) = −(α/2)|G0(e^{iω})| sin φ + α|G0(e^{iω})| (1/2)(1/N) Σ_{t=1}^{N} sin(2ωt + φ) + (1/N) Σ_{t=1}^{N} v(t) sin(ωt) ≈ −(α/2)|G0(e^{iω})| sin φ

• and we get the estimates

|ĜN(e^{iω})| = √(IC²(N) + IS²(N)) / (α/2),  φ̂N = arg ĜN(e^{iω}) = − arctan(IS(N)/IC(N))

• repeat over the frequencies of interest (commercial software available)

(+) Bode plot easily obtained, with focus on specific frequencies
(-) many industrial processes do not admit sinusoidal inputs, and experimentation is long
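A sketch of the correlation method on a hypothetical FIR system G0(q) = q^-1 + 0.5 q^-2, whose true frequency response e^{-iω} + 0.5 e^{-2iω} is known exactly (no disturbance v(t) is added, so only the vanishing oscillating terms perturb the estimate):

```python
import numpy as np

# Excite with u(t) = alpha*cos(w t), form IC(N) and IS(N), and recover
# |G0| and arg G0 from IC - i*IS = (alpha/2) G0(e^{iw}): the 2w-oscillating
# terms average out as N grows.
alpha, w, N = 2.0, 0.7, 100_000
t = np.arange(N)
u = alpha * np.cos(w * t)

y = np.zeros(N)
y[1:] += u[:-1]                  # q^-1 u(t)
y[2:] += 0.5 * u[:-2]            # 0.5 q^-2 u(t)

IC = np.mean(y * np.cos(w * t))
IS = np.mean(y * np.sin(w * t))
gain = np.sqrt(IC**2 + IS**2) / (alpha / 2)
phase = -np.arctan2(IS, IC)      # slide's -arctan(IS/IC), quadrant-safe

G_true = np.exp(-1j * w) + 0.5 * np.exp(-2j * w)
```

Repeating this over a grid of ω values produces the Bode plot point by point.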


Relationship to Fourier analysis

Consider the discrete Fourier transform YN(ω) = (1/√N) Σ_{t=1}^{N} y(t) e^{−iωt} and IC & IS, which gives

IC(N) − i IS(N) = (1/√N) YN(ω)

• from the periodogram (signal power at frequency ω) of u(t) = α cos ωt: UN(ω) = √N α/2 if ω = 2πr/N for some r ∈ N

• then ĜN(e^{iω}) = √N YN(ω)/(N α/2) = YN(ω)/UN(ω)

• ω is precisely the input frequency

• provides a most reasonable estimate.


Commercial software example

In practice, you may use the Matlab System Identification Toolbox® to

• import the data in a GUI


• pre-process it (remove mean, pre-filter, separate estimation from validation, etc.)


• analyse the signals


• get multiple models of desired order and compare the outputs


Fourier analysis

Empirical transfer-function estimate

Extend the previous estimate to multifrequency inputs:

ĜN(e^{iω}) = YN(ω)/UN(ω), with YN(ω) = (1/√N) Σ_{t=1}^{N} y(t) e^{−iωt} and UN(ω) = (1/√N) Σ_{t=1}^{N} u(t) e^{−iωt}

ĜN = ETFE, since no assumption other than linearity is made

• the original data of 2N numbers y(t), u(t), t = 1 . . . N are condensed into N numbers (the essential points)

Re ĜN(e^{2πik/N}), Im ĜN(e^{2πik/N}), k = 0, 1, . . . , N/2 − 1

→ modest model reduction

• approximately solves the convolution (using Fourier techniques)

y(t) = Σ_{k=1}^{N} g0(k) u(t − k), t = 1, 2, . . . , N
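The ETFE is one line of numpy once the DFTs are available. A sketch on a hypothetical FIR system (the response is computed circularly with np.roll, i.e. as the steady-state response to a periodic input, so the ETFE matches G0 exactly at every grid frequency):

```python
import numpy as np

# ETFE: ratio of output and input DFTs at the grid frequencies w_k = 2*pi*k/N.
# System: G0(q) = q^-1 + 0.5 q^-2. Using np.roll makes the input effectively
# periodic, so Y(w_k) = G0(e^{i w_k}) U(w_k) holds exactly (no noise here).
rng = np.random.default_rng(3)
N = 4096
u = rng.standard_normal(N)
y = np.roll(u, 1) + 0.5 * np.roll(u, 2)    # steady-state periodic response

G_etfe = np.fft.fft(y) / np.fft.fft(u)
w = 2 * np.pi * np.arange(N) / N
G_true = np.exp(-1j * w) + 0.5 * np.exp(-2j * w)
```

With output noise added, the pointwise error of this estimate no longer shrinks with N, which motivates the smoothing techniques of the next section.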


Properties of the ETFE

If the input is periodic:

• the ETFE is defined only for a fixed number of frequencies
• at these frequencies the ETFE is unbiased and its variance decays like 1/N

If the input is a realization of a stochastic process:

• the periodogram |UN(ω)|² is an erratic function of ω, which fluctuates around φu(ω)
• the ETFE is an asymptotically unbiased estimate of the transfer function at increasingly (with N) many frequencies
• the ETFE variance does not decrease as N increases, and is given by the noise-to-signal ratio at the considered frequency
• the estimates at different frequencies are uncorrelated


Conclusions on the ETFE

• increasingly good quality for periodic signals, but no improvement otherwise as N increases
• very crude estimate in most practical cases
• due to uncorrelated information per estimated parameter
⇒ relate the system behavior at one frequency to another


Spectral analysis

Smoothing the ETFE

Assumption: the true transfer function G0(e^{iω}) is a smooth function of ω. Consequences:

• G0(e^{iω}) is supposed constant over 2πk1/N = ω0 − ∆ω < ω < ω0 + ∆ω = 2πk2/N

• the best way (minimum variance) to estimate this constant is a weighted average of the ETFE over these frequencies, each measurement weighted by its inverse variance

• for large N, we can use Riemann sums and introduce the weights Wγ(ζ) = 1 for |ζ| < ∆ω, 0 for |ζ| > ∆ω

• after some cooking and simplifications,

ĜN(e^{iω0}) = [∫_{−π}^{π} Wγ(ζ − ω0) |UN(ζ)|² ĜN(e^{iζ}) dζ] / [∫_{−π}^{π} Wγ(ζ − ω0) |UN(ζ)|² dζ]


Connection with the Blackman-Tukey procedure

Noticing that as N → ∞,

∫_{−π}^{π} Wγ(ζ − ω0)|UN(ζ)|² dζ → ∫_{−π}^{π} Wγ(ζ − ω0) φu(ζ) dζ

and supposing ∫_{−π}^{π} Wγ(ζ) dζ = 1, then

φ̂Nu(ω0) = ∫_{−π}^{π} Wγ(ζ − ω0)|UN(ζ)|² dζ

φ̂Nyu(ω0) = ∫_{−π}^{π} Wγ(ζ − ω0) YN(ζ) ŪN(ζ) dζ

ĜN(e^{iω0}) = φ̂Nyu(ω0)/φ̂Nu(ω0)

→ ratio of the cross spectrum to the input spectrum (smoothed periodograms, as proposed by Blackman & Tukey)
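A numerical sketch of the smoothed-periodogram estimate (my assumptions: a simple boxcar frequency window rather than the Bartlett/Parzen/Hamming windows of the next slide, and a hypothetical FIR system with white output noise):

```python
import numpy as np

# Blackman-Tukey-style estimate: smooth the cross- and input periodograms
# over neighbouring grid frequencies, then take their ratio. Compare with
# the raw ETFE, whose error does not decrease with N.
rng = np.random.default_rng(4)
N = 8192
u = rng.standard_normal(N)
y = np.roll(u, 1) + 0.5 * np.roll(u, 2) + 0.3 * rng.standard_normal(N)

U, Y = np.fft.fft(u), np.fft.fft(y)

def smooth(P, m=32):
    # circular moving average over 2m+1 frequency bins (boxcar window W)
    W = np.zeros(len(P))
    W[:m + 1] = 1.0
    W[-m:] = 1.0
    return np.fft.ifft(np.fft.fft(P) * np.fft.fft(W / W.sum()))

G_bt = smooth(Y * np.conj(U)) / smooth(np.abs(U) ** 2)
w = 2 * np.pi * np.arange(N) / N
G_true = np.exp(-1j * w) + 0.5 * np.exp(-2j * w)

err_smoothed = np.mean(np.abs(G_bt - G_true))
err_raw = np.mean(np.abs(Y / U - G_true))   # raw ETFE error, much larger
```

Averaging 2m+1 neighbouring bins cuts the variance by roughly that factor, at the cost of a bias when G0 varies over the window width.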


Weighting function Wγ(ζ): the frequency window

• A "wide" frequency window → weights many different frequencies: small variance of ĜN(e^{iω0}), but contributions far from ω0 = bias

• γ (∼ width⁻¹) = trade-off between bias and variance

• Width and amplitude measures:

M(γ) = ∫_{−π}^{π} ζ² Wγ(ζ) dζ  and  W(γ) = 2π ∫_{−π}^{π} Wγ²(ζ) dζ

• Typical windows for spectral analysis:

  Window   | 2πWγ(ω)                                          | M(γ)     | W(γ)
  Bartlett | (1/γ)(sin(γω/2)/sin(ω/2))²                       | 2.78/γ   | 0.67γ
  Parzen   | (4(2 + cos ω)/γ³)(sin(γω/4)/sin(ω/2))⁴           | 12/γ²    | 0.54γ
  Hamming  | (1/2)Dγ(ω) + (1/4)Dγ(ω − π/γ) + (1/4)Dγ(ω + π/γ) | π²/(2γ²) | 0.75γ

  where Dγ(ω) = sin((γ + 1/2)ω)/sin(ω/2)

• good approximations for γ ≥ 5; as γ increases, M(γ) decreases and W(γ) increases


• Example: Wγ(ω) over ω ∈ [−3, 3] rad/s for the Bartlett, Parzen and Hamming windows, γ = 5 vs. γ = 10 (figure)


Asymptotic properties of the smoothed estimate

• The estimates Re ĜN(e^{iω}) and Im ĜN(e^{iω}) are asymptotically uncorrelated and of known variance

• ĜN(e^{iω}) at different frequencies are asymptotically uncorrelated

• the γ that minimizes the mean square error (MSE) is

γopt = (4M²|R(ω)|²φu(ω) / (W φv(ω)))^{1/5} · N^{1/5}

→ the frequency window becomes narrower when more data are available, and leads to MSE ∼ C · N^{−4/5}

• typically, start with γ = N/20 and compute ĜN(e^{iω}) for various values of γ: increasing γ decreases the bias and increases the variance


Example

• Consider the system

y(t) − 1.5y(t − 1) + 0.7y(t − 2) = u(t − 1) + 0.5u(t − 2) + e(t)

where e(t) is a white noise with variance 1 and u(t) a

pseudo-random binary signal (PRBS), over 1000 samples.

% Construct the polynomial

m0=poly2th([1 -1.5 0.7],[0 1 0.5]);

% Generate pseudorandom, binary signal

u=idinput(1000,’prbs’);

% Normally distributed random numbers

e=randn(1000,1);

% Simulate and plot the output

y=idsim([u e],m0);

z=[y u]; idplot(z,[101:200])


• we get the inputs and outputs: time plots of the output y1 and of the binary input u1 (switching between ±1) over 100 samples (figure)


• The ETFE and smoothing thanks to Hamming window

(γ = 10, 50, 200) are obtained as

% Compute the ETFE

ghh=etfe(z);[om,ghha]=getff(ghh);

% Performs spectral analysis

g10=spa(z,10);[om,g10a]=getff(g10);

g50=spa(z,50);[om,g50a]=getff(g50);

g200=spa(z,200);[om,g200a]=getff(g200);

g0=th2ff(m0);[om,g0a]=getff(g0);

bodeplot(g0,ghh,g10,g50,g200,’a’);


• we get the ETFE and the smoothed estimates: amplitude Bode plots of the true system, the ETFE, and the spectral estimates for γ = 10, 50, 200 (figure)

⇒ γ = 50 seems a good choice


Estimating the disturbance spectrum

y(t) = G0(q)u(t) + v(t)

Estimating spectra

• Ideally, φv(ω) would be given by (if v(t) were measurable):

φ̂Nv(ω) = ∫_{−π}^{π} Wγ(ζ − ω)|VN(ζ)|² dζ

• Bias: E φ̂Nv(ω) − φv(ω) = (1/2)M(γ)φ″v(ω) + O(C1(γ))   [→ 0 as γ → ∞]   + O(√(1/N))   [→ 0 as N → ∞]

• Variance: Var φ̂Nv(ω) = (W(γ)/N) φv²(ω) + O(1/N)   [as N → ∞]

• Estimates at different frequencies are uncorrelated


The residual spectrum

• v(t) is not measurable → given the estimate ĜN, form

v̂(t) = y(t) − ĜN(q)u(t)

which gives

φ̂Nv(ω) = ∫_{−π}^{π} Wγ(ζ − ω)|YN(ζ) − ĜN(e^{iζ})UN(ζ)|² dζ

• After simplifications: φ̂Nv(ω) = φ̂Ny(ω) − |φ̂Nyu(ω)|²/φ̂Nu(ω)

• Asymptotically uncorrelated with ĜN


Coherency spectrum

• Defined as

κ̂Nyu(ω) = √( |φ̂Nyu(ω)|² / (φ̂Ny(ω) φ̂Nu(ω)) )

→ φ̂Nv(ω) = φ̂Ny(ω) [1 − (κ̂Nyu(ω))²]

• κyu(ω) is the coherency spectrum, i.e. the frequency-dependent correlation between input and output

• if it equals 1 at a given ω: perfect correlation ↔ no noise at that frequency.
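A sketch of the coherency and residual spectra with smoothed periodograms (my assumptions: boxcar frequency window, hypothetical FIR system, white output noise of variance 0.25, so the residual spectrum should come out flat near 0.25):

```python
import numpy as np

# kappa_yu is the frequency-wise correlation between input and output;
# phi_v = phi_y (1 - kappa^2) then recovers the disturbance spectrum.
rng = np.random.default_rng(5)
N = 8192
u = rng.standard_normal(N)
y = np.roll(u, 1) + 0.5 * np.roll(u, 2) + 0.5 * rng.standard_normal(N)

def smooth(P, m=32):
    # circular moving average over 2m+1 frequency bins (boxcar window)
    W = np.zeros(len(P))
    W[:m + 1] = 1.0
    W[-m:] = 1.0
    return np.fft.ifft(np.fft.fft(P) * np.fft.fft(W / W.sum()))

U, Y = np.fft.fft(u), np.fft.fft(y)
phi_u = smooth(np.abs(U) ** 2 / N).real
phi_y = smooth(np.abs(Y) ** 2 / N).real
phi_yu = smooth(Y * np.conj(U) / N)

kappa2 = np.abs(phi_yu) ** 2 / (phi_y * phi_u)   # squared coherency
phi_v = phi_y * (1.0 - kappa2)                   # estimated noise spectrum
```

By the Cauchy-Schwarz inequality the squared coherency never exceeds 1, and it dips where the noise dominates the input's contribution to the output.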


Conclusions

Nonparametric identification

• direct estimate of the transient or frequency response
• valuable initially to provide the model structure (relations between variables, static relations, dominant time-constants . . . )
• spectral analysis for frequency functions; Fourier analysis = special case (wide lag window)
• essential user influence = γ: trade-off between frequency resolution and variability
• a reasonable γ gives the dominant frequency properties


Homework

Download the User's guide (System Identification Toolbox™ 7) at
http://www.mathworks.com/access/helpdesk/help/pdf_doc/ident/ident.pdf

Suppose that you have some data set with inputs u ∈ R^{1×Nt} and outputs y ∈ R^{Ny×Nt} for which you wish to identify a deterministic model:

1. Which kind of models can you test with the SysID toolbox?
2. Which functionalities are available for identification and model analysis?


References

• L. Ljung, System Identification: Theory for the User, 2nd Edition, Information and System Sciences, (Upper Saddle River, NJ: PTR Prentice Hall), 1999.

• L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.

• O. Hinton, Digital Signal Processing, EEE305 class material, Chapter 6 - Describing Random Sequences, http://www.staff.ncl.ac.uk/oliver.hinton/eee305/Chapter6.pdf


Parameter estimation in linear models

E. Witrant

Linear models: Model structures, TF parameterizations, From physical insights

Parameter estimation: Prediction, Estimation methods, Evaluating models

Minimizing pred. error: Choice of L, Choice of l, Multivariable systems

Linear reg. & LS: LS criterion, Properties of the LSE, Multivariable LS, LS for state-space, General models

PEM properties: Convergence, Variance, Identifiability

Conclusions

System Identification

Lecture 9: Parameter Estimation in Linear Models

Emmanuel WITRANT

[email protected]

April 24, 2014


Class goal

Today, you should be able to

• distinguish between common model structures used in identification
• estimate model parameters using the prediction-error method
• calculate the optimal parameters for ARX models using least squares
• estimate bias and variance of estimates from model and input signal properties


System identification

Many issues:

Les. 7: choice of sampling frequency, input signal (experiment conditions), pre-filtering;

Les. 8: non-parametric models from finite and noisy data; how to model disturbances?

Today: what class of models? Estimating model parameters from processed data.


System identification via parameter estimation

We need to fix the model structure before trying to estimate the parameters:

• system vs. disturbance model

• model order (degrees of transfer function polynomials)


Outline

1 Linear models

2 Basic principle of parameter estimation

3 Minimizing prediction errors

4 Linear regressions and least squares

5 Properties of prediction error minimization estimates


Linear models

Model structures

Many model structures are commonly used; the Box-Jenkins (BJ) structure includes all the others as special cases.


Transfer function parameterizations

The transfer functions G(q) and H(q) in the linear model

y[k] = G(q; θ)u[k] + H(q; θ)e[k]

are parameterized as (e.g. for BJ)

G(q; θ) = q^-nk · (b0 + b1 q^-1 + · · · + b_nb q^-nb) / (1 + f1 q^-1 + · · · + f_nf q^-nf)

H(q; θ) = (1 + c1 q^-1 + · · · + c_nc q^-nc) / (1 + d1 q^-1 + · · · + d_nd q^-nd)

where the parameter vector θ contains the coefficients bk, fk, ck, dk.

Note: nk determines the dead-time; nb, nf, nc, nd the orders of the transfer function polynomials.


Model order selection from physical insight

Physical insights often help to determine the right model order:

y[k] = q^-nk · (b0 + b1 q^-1 + · · · + b_nb q^-nb) / (1 + f1 q^-1 + · · · + f_nf q^-nf) · u[k] + H(q; θ)e[k]

If the system is sampled with a zero-order hold (input piecewise constant), then:

• nf equals the number of poles of the continuous-time system
• if the system has no delay and no direct term, then nb = nf, nk = 1
• if the system has no delay but a direct term, then nb = nf + 1, nk = 0
• if the continuous-time system has a time delay τ, then nk = ⌊τ/h⌋ + 1 (h the sampling period)

Note: nb does not depend on the number of continuous-time zeros! (e.g. compare Euler vs. Tustin discretization)


Basic principle of parameter

estimation

• For given parameters θ, the model predicts that the system output should be ŷ[t; θ]

• Determine θ so that ŷ[t; θ] matches the observed output y[t] "as closely as possible"

• To solve the parameter estimation problem, note that:

1 ŷ[t; θ] depends on the disturbance model
2 "as closely as possible" needs a mathematical formulation


One step-ahead prediction

Consider the LTI system y(t) = G(q)u(t) + H(q)e(t), with undisturbed output y∗ = G∗u∗. Suppose that H(q) is monic (h(0) = 1, e.g. 1 + c q^-1 for a moving average); the disturbance is

v(t) = H(q)e(t) = Σ_{k=0}^∞ h(k)e(t − k) = e(t) + Σ_{k=1}^∞ h(k)e(t − k)

where the last sum, m(t − 1), is known at time t − 1. Since e(t) is white noise (zero mean), the conditional expectation (mean value of the distribution) gives

v̂(t|t − 1) = m(t − 1) = (H(q) − 1)e(t) = (1 − H^-1(q))v(t)

⇒ ŷ(t|t − 1) = G(q)u(t) + v̂(t|t − 1)
= G(q)u(t) + (1 − H^-1(q))(y(t) − G(q)u(t))
= [1 − H^-1(q)] y(t) + H^-1(q)G(q)u(t)
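As a sanity check of this formula, consider a hypothetical first-order case (an assumption for illustration, not from the course material) with G(q) = b q^-1 and H(q) = 1 + c q^-1: multiplying ŷ(t|t − 1) = [1 − H^-1]y + H^-1 G u through by H(q) gives the recursion ŷ(t) = b u(t − 1) + c (y(t − 1) − ŷ(t − 1)). A minimal sketch in Python:

```python
import numpy as np

def one_step_predictor(y, u, b=0.5, c=0.3):
    # predictor for y(t) = b*u(t-1) + (1 + c*q^-1)e(t):
    # y_hat(t) = b*u(t-1) + c*(y(t-1) - y_hat(t-1))
    y_hat = np.zeros_like(y)
    for t in range(1, len(y)):
        y_hat[t] = b * u[t - 1] + c * (y[t - 1] - y_hat[t - 1])
    return y_hat

# simulate the same (assumed) system and check the prediction error
rng = np.random.default_rng(0)
N = 5000
u = rng.standard_normal(N)
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * u[t - 1] + e[t] + 0.3 * e[t - 1]

y_hat = one_step_predictor(y, u)
eps = y[1:] - y_hat[1:]
print(np.var(eps))  # close to var(e) = 1
```

On data from this system, the prediction-error variance approaches var(e): the one-step-ahead predictor removes everything except the unpredictable white-noise term.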


Parameter estimation methods

Consider a particular model structure M parameterized by θ ∈ D_M ⊂ R^d: M∗ = {M(θ) | θ ∈ D_M}

• each model can predict future outputs:

M(θ) : ŷ(t|θ) = W_y(q, θ)y(t) + W_u(q, θ)u(t)

e.g. the one-step-ahead prediction of y(t) = G(q, θ)u(t) + H(q, θ)e(t):

W_y(q, θ) = [1 − H^-1(q, θ)], W_u(q, θ) = H^-1(q, θ)G(q, θ)

(multiply by H^-1 to make e white noise);

• or a nonlinear filter M(θ) : ŷ(t|θ) = g(t, Z^(t−1); θ), where Z^N = [y(1), u(1), . . . , y(N), u(N)] contains the past information.

⇒ A parameter estimation method is a map Z^N → θ̂_N ∈ D_M.


Evaluating the candidate models

Given a specific model M(θ∗), we want to evaluate the prediction error

ε(t, θ∗) = y(t) − ŷ(t|θ∗)

computed for t = 1, 2, . . . , N when Z^N is known.

• "Good model" = small ε when applied to the observed data;
• "good" prediction performance can be defined in many ways; the guiding principle:

Based on Z^t we can compute the prediction error ε(t, θ). At time t = N, select θ̂_N s.t. ε(t, θ̂_N), t = 1, 2, . . . , N, become as small as possible.

• How to quantify "small":

1 a scalar-valued norm or criterion function measuring the size of ε;
2 ε(t, θ̂_N) uncorrelated with the given data (the "projections" are 0).


Minimizing prediction errors

1. Get ŷ(t|θ∗) from the model and compute

ε(t, θ∗) = y(t) − ŷ(t|θ∗)

2. Filter ε with a stable linear filter L(q):

ε_F(t, θ) = L(q)ε(t, θ), 1 ≤ t ≤ N

3. Use the norm (l(·) > 0 a scalar-valued function)

V_N(θ, Z^N) = (1/N) Σ_{t=1}^N l(ε_F(t, θ))

4. Estimate θ̂_N by minimization:

θ̂_N = θ̂_N(Z^N) = arg min_{θ ∈ D_M} V_N(θ, Z^N)

⇒ Prediction-error estimation methods (PEM), defined by the choice of l(·) and the prefilter L(q).
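Steps 1–4 can be sketched numerically. The example below is a hypothetical illustration (system, values and names are assumptions, not course material): a first-order ARMAX system y(t) = −a y(t−1) + b u(t−1) + e(t) + c e(t−1), whose one-step-ahead predictor satisfies the recursion ŷ(t) = −c ŷ(t−1) + (c − a) y(t−1) + b u(t−1); with L(q) = 1 and the quadratic norm l(ε) = ε²/2, V_N is minimized numerically with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

def predict_armax(theta, y, u):
    # one-step predictor of y(t) + a*y(t-1) = b*u(t-1) + e(t) + c*e(t-1):
    # y_hat(t) = -c*y_hat(t-1) + (c - a)*y(t-1) + b*u(t-1)
    a, b, c = theta
    y_hat = np.zeros_like(y)
    for t in range(1, len(y)):
        y_hat[t] = -c * y_hat[t - 1] + (c - a) * y[t - 1] + b * u[t - 1]
    return y_hat

def V_N(theta, y, u):
    # PEM criterion with L(q) = 1 and l(eps) = eps^2 / 2
    eps = y - predict_armax(theta, y, u)
    v = 0.5 * np.mean(eps**2)
    return v if np.isfinite(v) else np.inf  # guard unstable trial predictors

# simulate the "true" system with a = -0.7, b = 1.0, c = 0.3 (assumed values)
rng = np.random.default_rng(1)
N = 2000
u = rng.standard_normal(N)
e = 0.3 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.7 * y[t - 1] + u[t - 1] + e[t] + 0.3 * e[t - 1]

theta_hat = minimize(V_N, x0=[-0.5, 0.5, 0.0], args=(y, u),
                     method="Nelder-Mead").x
print(theta_hat)  # close to [-0.7, 1.0, 0.3]
```

A derivative-free method is used here only for simplicity; dedicated PEM software exploits the analytic prediction gradient instead.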


Choice of L

Extra freedom for non-momentary properties of ε:

• same as filtering the I/O data prior to identification
• L acts on HF disturbances or slow drift terms, as a frequency weighting
• note that the filtered error is

ε_F(t, θ) = L(q)ε(t, θ) = [L^-1(q)H(q, θ)]^-1 [y(t) − G(q, θ)u(t)]

⇒ filtering is the same as changing the noise model to H_L(q, θ) = L^-1(q)H(q, θ)


Choice of l

• the quadratic norm l(ε) = ε²/2 is the first candidate
• other choices can be made for robustness constraints
• l may be parameterized as l(ε, θ), independently of the model parametrization:

θ = [θ′; α] : l(ε(t, θ), θ) = l(ε(t, θ′), α)


Multivariable systems

Quadratic criterion:

l(ε) = (1/2) ε^T Λ^-1 ε

with weight Λ ≥ 0 ∈ R^{p×p}.

• Define, instead of l, the p × p matrix

Q_N(θ, Z^N) = (1/N) Σ_{t=1}^N ε(t, θ)ε^T(t, θ)

• and the scalar-valued function

V_N(θ, Z^N) = h(Q_N(θ, Z^N))

with h(Q) = (1/2) tr(QΛ^-1).


Linear regressions and least squares

Linear regressions

Employ a predictor that is linear in θ:

ŷ(t|θ) = φ^T(t)θ + µ(t)

where φ is the regression vector; e.g. for ARX

y(t) + a1 y(t − 1) + . . . + a_na y(t − na) = b1 u(t − 1) + . . . + b_nb u(t − nb) + e(t),

⇒ φ(t) = [−y(t − 1) − y(t − 2) . . . − y(t − na)  u(t − 1) . . . u(t − nb)]^T

and µ(t) is a known data-dependent term (take µ(t) = 0 in the following).


Least-squares criterion

The prediction error becomes ε(t, θ) = y(t) − φ^T(t)θ and the criterion function (with L(q) = 1 and l(ε) = ε²/2) is

V_N(θ, Z^N) = (1/N) Σ_{t=1}^N (1/2) [y(t) − φ^T(t)θ]²

∴ the least-squares criterion for linear regression. It can be minimized analytically (first-order condition):

θ̂_N^LS = arg min V_N(θ, Z^N) = R(N)^-1 f(N)

where R(N) = (1/N) Σ_{t=1}^N φ(t)φ^T(t) ∈ R^{d×d} and f(N) = (1/N) Σ_{t=1}^N φ(t)y(t) ∈ R^d.

This is the least-squares estimate (LSE). [Exercise: prove this result]


Example: parameter estimation in ARX models

Estimate the model parameters a and b in the ARX model

y(k) = a y(k − 1) + b u(k − 1) + e(k)

from y(k), u(k) for k = 0, . . . , N.

⇒ find θ̂_N^LS!


Solution

• θ = [a b]^T and φ(t) = [y(t − 1) u(t − 1)]^T

• The optimization problem is solved with

R(N) = (1/N) Σ_{t=1}^N [ y²(t − 1)          y(t − 1)u(t − 1) ;
                          y(t − 1)u(t − 1)   u²(t − 1) ]

and

f(N) = (1/N) Σ_{t=1}^N [ y(t − 1)y(t) ;
                          u(t − 1)y(t) ]

• Note: the estimate is computed using covariances of u(t) and y(t) (cf. correlation analysis).
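A hypothetical numerical check of this solution (simulated data, assumed parameter values): build R(N) and f(N) from the signals and solve the normal equations:

```python
import numpy as np

rng = np.random.default_rng(2)
N, a_true, b_true = 5000, 0.7, 2.0
u = rng.standard_normal(N)
e = 0.5 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + e[t]

# phi(t) = [y(t-1), u(t-1)]^T stacked row-wise for t = 1..N-1
Phi = np.column_stack([y[:-1], u[:-1]])
R = Phi.T @ Phi / len(Phi)          # R(N): sample covariance matrix
f = Phi.T @ y[1:] / len(Phi)        # f(N)
theta_ls = np.linalg.solve(R, f)    # LSE = R(N)^{-1} f(N)
print(theta_ls)  # close to [0.7, 2.0]
```

Since e(t) is white here, the LSE is consistent, as discussed on the next slide.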


Properties of the LSE

Consider the observed data y(t) = φ^T(t)θ0 + v0(t), θ0 being the true value:

lim_{N→∞} θ̂_N^LS − θ0 = lim_{N→∞} R(N)^-1 (1/N) Σ_{t=1}^N φ(t)v0(t) = (R∗)^-1 f∗

with R∗ = E φ(t)φ^T(t), f∗ = E φ(t)v0(t), v0 and φ quasi-stationary. Then θ̂_N^LS → θ0 if

• R∗ is non-singular (the covariance exists, decaying as 1/N)
• f∗ = 0, satisfied if

1 v0(t) is a sequence of independent random variables with zero mean (i.e. white noise), so v0(t) is independent of φ(t);
2 u(t) is independent of v0(t) and na = 0 (FIR case) → φ(t) depends on u(t) only.

[Exercise: prove this result]


Multivariable case

When y(t) ∈ R^p,

V_N(θ, Z^N) = (1/N) Σ_{t=1}^N (1/2) [y(t) − φ^T(t)θ]^T Λ^-1 [y(t) − φ^T(t)θ]

gives the estimate

θ̂_N^LS = arg min V_N(θ, Z^N)
= [ (1/N) Σ_{t=1}^N φ(t)Λ^-1φ^T(t) ]^-1 · (1/N) Σ_{t=1}^N φ(t)Λ^-1 y(t)

Key issue: the proper choice of the relative weight Λ!


LS for state-space

Consider the LTI system

x(t + 1) = Ax(t) + Bu(t) + w(t)
y(t) = Cx(t) + Du(t) + v(t)

Set

Y(t) = [x(t + 1); y(t)],  Θ = [A B; C D],  Φ(t) = [x(t); u(t)],  E(t) = [w(t); v(t)]

Then Y(t) = ΘΦ(t) + E(t), where E(t) is obtained from the sampled sum of squared residuals (and provides the covariance matrices for a Kalman filter).

Problem: get x(t). It is essentially obtained as x(t) = L Y_r, where Y_r is an r-step-ahead predictor (cf. the basic subspace algorithm).


Parameter estimation in general model structures

More complicated when the predictor is not linear in the parameters. In general, we need to minimize V_N(θ) using an iterative numerical method, e.g.

θ̂^(i+1) = θ̂^i − µ_i M_i V′_N(θ̂^i)

[Exercise: analyze the convergence of V]

Example: Newton's method uses the (pseudo-)Hessian

M_i = (V″_N(θ̂^i))^-1 or (V″_N(θ̂^i) + αI)^-1

while Gauss-Newton approximates M_i using first-order derivatives only.

⇒ locally optimal, but not necessarily globally optimal.
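A minimal sketch of such an iteration, under assumptions not in the slides: an output-error predictor ŷ(t) = −f ŷ(t−1) + b u(t−1), the prediction gradient ψ = dŷ/dθ obtained by finite differences, the Gauss-Newton direction from ψ, and the step length µ_i chosen by simple halving so that V_N decreases:

```python
import numpy as np

def predict(theta, u):
    # assumed output-error predictor y_hat(t) = -f*y_hat(t-1) + b*u(t-1)
    b, f = theta
    y_hat = np.zeros_like(u)
    for t in range(1, len(u)):
        y_hat[t] = -f * y_hat[t - 1] + b * u[t - 1]
    return y_hat

def V(theta, y, u):
    v = 0.5 * np.mean((y - predict(theta, u)) ** 2)
    return v if np.isfinite(v) else np.inf  # unstable trial models -> inf

def gauss_newton(y, u, theta0, iters=30, delta=1e-6):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        y_hat = predict(theta, u)
        eps = y - y_hat
        # psi(t) = d y_hat / d theta by finite differences, one column per parameter
        psi = np.zeros((len(y), len(theta)))
        for j in range(len(theta)):
            d = np.zeros_like(theta)
            d[j] = delta
            psi[:, j] = (predict(theta + d, u) - y_hat) / delta
        # Gauss-Newton step: (psi^T psi)^{-1} psi^T eps = -M_i V_N'(theta)
        step = np.linalg.solve(psi.T @ psi, psi.T @ eps)
        mu = 1.0  # damping mu_i: halve until the criterion decreases
        while V(theta + mu * step, y, u) > V(theta, y, u) and mu > 1e-8:
            mu /= 2
        theta = theta + mu * step
    return theta

rng = np.random.default_rng(3)
u = rng.standard_normal(3000)
y = predict(np.array([1.0, -0.8]), u) + 0.1 * rng.standard_normal(3000)
theta_hat = gauss_newton(y, u, [0.1, 0.0])
print(theta_hat)  # close to [1.0, -0.8]
```

The step halving makes each iteration a descent step, but, as stated above, only local optimality is guaranteed.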


Properties of prediction error minimization estimates

What can we say about models estimated using prediction-error minimization?

Model errors have two components:

1 Bias errors: arise if the model is unable to capture the true system
2 Variance errors: influence of stochastic disturbances

Two properties of general prediction-error methods:

1 Convergence: what happens with θ̂_N as N grows?
2 Accuracy: what can we say about the size of θ̂_N − θ0 as N grows?


Convergence

• If the disturbances acting on the system are stochastic, then so is the prediction error ε(t)
• Under quite general conditions (even if the ε(t) are not independent)

lim_{N→∞} (1/N) Σ_{t=1}^N ε²(t|θ) = E ε²(t|θ)

and

θ̂_N → θ∗ = arg min_θ E ε²(t|θ) as N → ∞

⇒ Even if the model cannot reflect reality, the estimate will minimize the prediction error variance! ↔ Robustness property.


Example

Assume that you try to estimate the parameter b in the model

y[k ] = bu[k − 1] + e[k ]

while the true system is given by

y[k ] = u[k − 1] + u[k − 2] + w[k ]

where u, e, w are white noise sequences, independent of

each other.

[Exercise: What will the estimate (computed using the

prediction error method) converge to?]


Solution

The PEM will find the parameters that minimize the variance

E ε²(k) = E(y[k] − ŷ[k])²
= E(u[k − 1] + u[k − 2] + w[k] − b u[k − 1] − e[k])²
= E((1 − b)u[k − 1] + u[k − 2])² + σ_w² + σ_e²
= (1 − b)²σ_u² + σ_u² + σ_w² + σ_e²

minimized by b = 1 → the asymptotic estimate.
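This asymptotic result can be checked with a short simulation (a hypothetical sketch; unit-variance white signals assumed). For this one-parameter model the PEM reduces to least squares in b:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 20000
u = rng.standard_normal(N)
w = rng.standard_normal(N)

# true system: y[k] = u[k-1] + u[k-2] + w[k]
y = np.zeros(N)
y[2:] = u[1:-1] + u[:-2] + w[2:]

# PEM with the model y[k] = b*u[k-1] + e[k] is least squares in b
b_hat = np.sum(u[:-1] * y[1:]) / np.sum(u[:-1] ** 2)
print(b_hat)  # approaches 1 as N grows
```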


Convergence (2): frequency analysis

Consider the one-step-ahead predictor and the true system

ŷ(t) = [1 − H∗^-1(q, θ)]y(t) + H∗^-1(q, θ)G(q, θ)u(t)
y(t) = G0(q)u(t) + w(t)

⇒ ε(t, θ) = H∗^-1(q)[y(t) − G(q, θ)u(t)]
= H∗^-1(q)[G0(q) − G(q, θ)]u(t) + H∗^-1(q)w(t)

Looking at the spectrum and using Parseval's identity,

θ∗ = lim_{N→∞} θ̂_N = arg min_θ ∫_{−π}^{π} |G0(e^{iω}) − G(e^{iω}, θ)|² · φ_u(ω)/|H∗(e^{iω})|² dω

where |G0 − G|² is made as small as possible and φ_u(ω)/|H∗(e^{iω})|² acts as a weighting function.

• good fit where φ_u(ω) contains much energy, or H∗(e^{iω}) contains little energy
• can focus model accuracy on the "important" frequency range by proper choice of u
• θ∗ can be computed using the ETFE as G0


Example

Output error method using low- and high-frequency inputs


Estimation error variance

Supposing that ∃ θ0 s.t.

y(t) − ŷ(t|θ0) = ε(t|θ0) = e(t) = white noise with variance λ,

the estimation error variance is

E(θ̂_N − θ0)(θ̂_N − θ0)^T ≈ (1/N) λ R^-1, where R = E ψ(t|θ0)ψ^T(t|θ0)

and ψ(t|θ) = (d/dθ) ŷ(t|θ) (the prediction gradient w.r.t. θ). Then:

• the error variance increases with the noise intensity and decreases with N
• the prediction quality is proportional to the sensitivity of ŷ with respect to θ (componentwise)
• since ψ is computed by the minimization algorithm anyway, use

R ≈ (1/N) Σ_{t=1}^N ψ(t|θ̂_N)ψ^T(t|θ̂_N), λ ≈ (1/N) Σ_{t=1}^N ε²(t|θ̂_N)

• θ̂_N converges to a normal distribution with mean θ0 and variance (1/N) λ R^-1
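A hypothetical numerical check of this covariance formula for the first-order ARX example (where the predictor is linear in θ, so ψ(t) = φ(t)): compare the Monte-Carlo covariance of repeated LS estimates with the theoretical (1/N)λR^-1 (all system values below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
lam, a, b = 0.25, 0.7, 1.0   # noise variance and (assumed) true ARX parameters

def ls_estimate(N=500):
    # simulate y(t) = a*y(t-1) + b*u(t-1) + e(t) and estimate [a, b] by LS
    u = rng.standard_normal(N)
    e = np.sqrt(lam) * rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = a * y[t - 1] + b * u[t - 1] + e[t]
    Phi = np.column_stack([y[:-1], u[:-1]])       # rows phi(t)^T = psi(t)^T
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    R = Phi.T @ Phi / len(Phi)                    # R approx E psi psi^T
    P = lam / len(Phi) * np.linalg.inv(R)         # theoretical (1/N)*lam*R^{-1}
    return theta, P

thetas = np.array([ls_estimate()[0] for _ in range(400)])
C = np.cov(thetas.T)          # Monte-Carlo covariance of the estimates
_, P = ls_estimate()
print(np.diag(C), np.diag(P))  # similar magnitudes
```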


Error variance (2): frequency-domain characterization

The variance of the frequency response of the estimate is

Var Ĝ(e^{iω}; θ̂) ≈ (n/N) · Φ_w(ω)/Φ_u(ω)

• increases with the number of model parameters n
• decreases with N and the signal-to-noise ratio
• the input frequency content influences the model accuracy


Identifiability

• Determines whether the chosen parameters can be determined uniquely from the data.
• A specific parametrization is identifiable at θ∗ if ŷ(t|θ∗) ≡ ŷ(t|θ) implies θ = θ∗.
• This may not hold if

• two different θ give identical I/O model properties, or
• we get different models for different θ but the predictions are the same due to input deficiencies.


Conclusions

• Model structure from physical insights
• Seek the (next-step) model prediction using the measurement history
• Minimize the prediction error with proper weights (filters)
• e.g. least squares: regressor & disturbance architecture ⇒ optimization using signal covariances
• Evaluate convergence & variance as performance criteria; check identifiability


References

• L. Ljung, System Identification: Theory for the User, 2nd

Edition, Information and System Sciences, (Upper Saddle

River, NJ: PTR Prentice Hall), 1999.

• L. Ljung and T. Glad, Modeling of Dynamic Systems,

Prentice Hall Information and System Sciences Series,

1994.

• Lecture notes from 2E1282 Modeling of Dynamical

Systems, Automatic Control, School of Electrical

Engineering, KTH, Sweden.


Experiment

design and

model

validation

E. Witrant

Experiments

and data

collection

Preliminary

experiments

Designing for

estimation

Informative

experiments

Open-loop

experiments

Persistence of

excitation

Input design

Crest factor

Common input

Periodic inputs

CL Identif

Guidelines

Model

structure

Model

validation

Residual

analysis

Statistical validation

System Identification

Lecture 10: Experiment Design and Model

Validation

Emmanuel WITRANT

[email protected]

April 24, 2014


Class goal

Today, you should be able to

• use system identification as a systematic model-building

tool

• do a careful experiment design/data collection to enable

good model estimation

• select the appropriate model structure and model order

• validate that the estimated model is able to reproduce the

observed data


System identification: an iterative

procedure


Outline

1 Experiments and data collection

2 Informative experiments

3 Input design for open-loop experiments

4 Identification in closed-loop

5 Choice of the model structure

6 Model validation

7 Residual analysis


Experiments and data collection

A two-stage approach.

1 Preliminary experiments:

• step/impulse response tests to get basic understanding of

system dynamics

• linearity, stationary gains, time delays, time constants,

sampling interval

2 Data collection for model estimation:

• carefully designed experiment to enable good model fit

• operating point, input signal type, number of data points to

collect, etc.


Preliminary experiments: step response

Useful for obtaining qualitative information about system:

• indicates dead-times, static gain, time constants and

resonances

• aids sampling time selection (rule-of-thumb: 4-10 samples

per rise time)


Tests for verifying linearity

For linear systems, the response is independent of the operating point:
• test linearity by a sequence of step response tests for different operating points


Tests for detecting friction

Friction can be detected by applying small step increases in the input: the output then moves only every two or three input steps.



Designing the experiment for model estimation

The input signal should excite all relevant frequencies:
• the estimated model is accurate in frequency ranges where the input has much energy
• a good choice is often a binary sequence with random hold times (e.g., PRBS)

Trade-off in the selection of the signal amplitude:
• a large amplitude gives a high signal-to-noise ratio, low parameter variance
• most systems are nonlinear for large input amplitudes

Many pitfalls if estimating a model of a system under closed-loop control!


Informative experiments

• The data set Z∞ is "informative enough" with respect to the model set M∗ if it allows discrimination between any two models in the set.
• Transferred to an "informative enough" experiment if it generates an appropriate data set.
• Applicable to all models likely to be used.


Open-loop experiments

Consider the set of SISO linear models

M∗ = {G(q, θ), H(q, θ) | θ ∈ DM}

with the true system

y(t) = G0(q)u(t) + H0(q)e0(t)

If the data are not informative with respect to M∗ and θ1 ≠ θ2, then

|∆G(e^{iω})|² Φu(ω) ≡ 0,

where ∆G(q) ≜ G(q, θ1) − G(q, θ2):

⇒ crucial condition on the open-loop input spectrum Φu(ω)
• if it implies that ∆G(e^{iω}) ≡ 0, i.e. that the two models are equal, then the data is sufficiently informative with respect to M∗


Persistence of excitation

Def. A quasi-stationary signal (QSS) u(t) with spectrum Φu(ω) is said to be persistently exciting of order n if, for all filters Mn(q) = m1 q^{−1} + . . . + mn q^{−n},

|Mn(e^{iω})|² Φu(ω) ≡ 0 → Mn(e^{iω}) ≡ 0

Lem. In terms of the covariance function Ru(τ): let R̄n be the n × n Toeplitz matrix with entries [R̄n]ij = Ru(i − j), i.e. first row Ru(0), . . . , Ru(n − 1); then

u(t) persistently exciting of order n ⇔ R̄n nonsingular.

Recall (parameter estimation lecture): if the underlying system is y[t] = θ^T φ[t] + v[t], then the θ̂ that makes the model ŷ[t] = θ^T φ[t] best fit the measured u[t] and y[t] is given by

θ̂ = (Φ_N^T Φ_N)^{−1} Φ_N^T y_N

where Φ_N^T Φ_N plays the role of R̄n.
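The nonsingularity condition of the lemma can be checked numerically: build the Toeplitz covariance matrix from sample covariances and look at its smallest singular value. A minimal sketch (the signal, orders and tolerance are illustrative choices, not from the lecture): a single sinusoid is persistently exciting of order 2 but not of order 3.

```python
import numpy as np

def covariances(u, n):
    # Sample covariances R_u(0), ..., R_u(n-1) of a (zero-mean) signal.
    N = len(u)
    return np.array([np.dot(u[tau:], u[:N - tau]) / N for tau in range(n)])

def is_pe(u, n, tol=1e-3):
    # u is persistently exciting of order n iff the n x n Toeplitz
    # covariance matrix [R_u(i - j)] is nonsingular.
    r = covariances(u, n)
    Rn = np.array([[r[abs(i - j)] for j in range(n)] for i in range(n)])
    return np.linalg.svd(Rn, compute_uv=False)[-1] > tol

t = np.arange(200000)
u = np.cos(0.5 * t)              # a single sinusoid
print(is_pe(u, 2), is_pe(u, 3))  # True False
```

The tolerance absorbs the O(1/N) error of the sample covariances; a white-noise input would pass the test at any order.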



Informative open-loop experiments

Consider a set M∗ s.t.

G(q, θ) = q^{−nk} (b1 + b2 q^{−1} + . . . + b_{nb} q^{−nb+1}) / (1 + f1 q^{−1} + . . . + f_{nf} q^{−nf})

then an OL experiment with an input that is persistently exciting of order n = nb + nf is sufficiently informative with respect to M∗.

Cor. An OL experiment is informative if the input is persistently exciting.
• the order of excitation = the number of identified parameters
• e.g. Φu(ω) ≠ 0 at n points (n sinusoids)

Rq: immediate multivariable counterpart.

⇒ The input should include many distinct frequencies: still a large degree of freedom!


Input design for open-loop experiments

Three basic facts:
• the asymptotic properties of the estimate (bias & variance) depend only on the input spectrum, not on the waveform
• limited input amplitude: u̲ ≤ u(t) ≤ ū
• periodic inputs may have some advantages


The crest factor

• the covariance matrix is typically inversely proportional to the input power ⇒ use as much power as possible
• the physical bounds u̲, ū → a desired waveform property, defined as the crest factor; for a zero-mean signal:

Cr² = max_t u²(t) / ( lim_{N→∞} (1/N) Σ_{t=1}^{N} u²(t) )

• good waveform = small crest factor
• the theoretical lower bound is 1, achieved by binary symmetric signals u(t) = ±ū
• specific caution: binary signals do not allow validation against nonlinearities
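The definition can be checked directly on sample signals; a small sketch (the test signals are illustrative): a sinusoid has Cr = √2 ≈ 1.41, while a binary ±1 signal attains the lower bound Cr = 1.

```python
import numpy as np

def crest_factor(u):
    # Cr^2 = max_t u(t)^2 / mean_t u(t)^2, for a zero-mean signal.
    return np.sqrt(np.max(u**2) / np.mean(u**2))

t = np.arange(10000)
sine = np.sin(2 * np.pi * t / 100)                   # 100 full periods
binary = np.sign(np.sin(2 * np.pi * t / 100 + 0.1))  # symmetric +/-1 signal

print(round(crest_factor(sine), 3))  # 1.414 (sqrt(2))
print(crest_factor(binary))          # 1.0, the theoretical lower bound
```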


Common input signals

Achieve the desired input spectrum with the smallest crest factor: typically antagonistic properties.

• Filtered Gaussian white noise (WN): any spectrum with an appropriate filter; use off-line non-causal filters (e.g. Kaiser & Reed, 1977) to eliminate the transients (theoretically unbounded amplitude).
• Random binary signals (RBS): generate a filtered zero-mean Gaussian noise and take the sign. Cr = 1; problem: taking the sign changes the spectrum.
• Pseudo-Random Binary Signal (PRBS): periodic, deterministic signal with white-noise-like properties. Advantages with respect to RBS:
  • the covariance matrix can be analytically inverted
  • second-order properties are guaranteed when whole periods are used
  • not straightforward to generate uncorrelated PRBS
  • work with an integer number of periods to retain the full PRBS advantages → limited by the experiment length



Common input signals (2)

• Low-pass filtering by increasing the clock period: to get more low-frequency content, filter the PRBS (no longer binary) by averaging P samples over one clock period:

u(t) = (1/P) (e(t) + . . . + e(t − P + 1))

• Multi-sines: sum of sinusoids

u(t) = Σ_{k=1}^{d} a_k cos(ω_k t + φ_k)

• Chirp signals or swept sinusoids: a sinusoid whose frequency changes continuously over a certain band Ω : ω1 ≤ ω ≤ ω2 and time period 0 ≤ t ≤ M:

u(t) = A cos( ω1 t + (ω2 − ω1) t² / (2M) )

instantaneous frequency (derivative of the phase): ωi = ω1 + (t/M)(ω2 − ω1).

Good control over the excited frequencies and the same crest factor as a sinusoid, but induces frequencies outside Ω.
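The phase/frequency relation of the chirp can be verified numerically; a short sketch (parameter values are illustrative): differentiating the phase recovers a linear sweep from ω1 to ω2.

```python
import numpy as np

# Chirp: u(t) = A cos(w1*t + (w2 - w1)*t^2 / (2*M)), 0 <= t <= M.
M, w1, w2, A = 1000, 0.2, 1.2, 1.0
t = np.arange(M + 1)
phase = w1 * t + (w2 - w1) * t**2 / (2 * M)
u = A * np.cos(phase)

# Instantaneous frequency = d(phase)/dt, linear from w1 (t = 0) to w2 (t = M).
wi = np.gradient(phase)
print(abs(wi[0] - w1) < 0.01, abs(wi[-1] - w2) < 0.01)  # True True
```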


Periodic inputs

Some guidelines:
• generate the PRBS over one full period, M = 2^n − 1, and repeat it
• for a multi-sine of period M, choose ωk from the DFT (discrete Fourier transform) grid ωl = 2πl/M, l = 0, 1, . . . , M − 1
• for a chirp of period M, choose ω1,2 = 2πk1,2/M

Advantages and drawbacks:
• period M → M distinct frequencies in the spectrum, persistent excitation of (at most) order M
• with K periods of length M (N = KM), average the outputs over the periods and select one to work with (less data to handle, signal-to-noise ratio improved by a factor K)
• allows noise estimation: after removing transients, differences in the output responses over distinct periods are attributed to noise
• when the model is estimated from Fourier-transformed data, no leakage when forming the FT
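A maximal-length PRBS is usually generated with an n-bit linear feedback shift register; a minimal sketch (the tap positions are an assumption, corresponding to a primitive polynomial for n = 4; toolboxes such as MATLAB's idinput do this internally):

```python
import numpy as np

def prbs(n, taps, length):
    # Maximal-length PRBS from an n-bit linear feedback shift register.
    # taps: feedback bit positions (1-indexed); for a primitive feedback
    # polynomial the period is M = 2**n - 1.
    state = [1] * n
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for tp in taps:
            fb ^= state[tp - 1]
        state = [fb] + state[:-1]
    return 2 * np.array(out) - 1   # map {0, 1} -> {-1, +1}

# taps [4, 3] (assumed primitive) -> period 2**4 - 1 = 15
u = prbs(4, [4, 3], 60)
period = next(p for p in range(1, 31) if np.array_equal(u[:-p], u[p:]))
print(period)   # 15 = 2**4 - 1
```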


Example: input consisting of five sinusoids

u = idinput([100 1 20],'sine',[],[],[5 10 1]);
% u = idinput(N,type,band,levels)
% [u,freqs] = idinput(N,'sine',band,levels,sinedata)
% N = [P nu M] gives a periodic input with nu channels,
%     each of length M*P and periodic with period P.
% sinedata = [No_of_Sinusoids, No_of_Trials, Grid_Skip]
u = iddata([],u,1,'per',100);
u2 = u.u.^2;
u2 = iddata([],u2,1,'per',100);

[Figure: 20 periods and 1 period of u and u², and their power spectra.]

Spectrum of u vs. u²: frequency splitting (the square having spectral support at other frequencies) reveals the nonlinearity involved.


Identification in closed-loop

Identification under output feedback is necessary if the plant is unstable, is controlled for safety/production reasons, or has inherent feedback mechanisms.

Basic good news: the prediction error method provides a good estimate regardless of the closed loop if
• the data is informative
• the model set contains the true system

Some fallacies:
• a CL experiment may be non-informative even with a persistently exciting input, typically with too simple regulators
• direct spectral analysis gives erroneous results
• correlation analysis gives a biased estimate, since Ē u(t)v(t − τ) ≠ 0
• output-error methods (OEM) do not give a consistent G when the additive noise is not white



Example: proportional feedback

Consider the first-order model and feedback

y(t) + a y(t − 1) = b u(t − 1) + e(t), u(t) = −f y(t)

then

y(t) + (a + bf) y(t − 1) = e(t)

⇒ all models ā = a + γf, b̄ = b − γ, where γ is an arbitrary scalar, give the same I/O description: even if u(t) is persistently exciting, the experimental condition is not informative enough.
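The ambiguity can be demonstrated by simulation; a short sketch (numerical values are illustrative): under u(t) = −f y(t), the models (a, b) and (a + γf, b − γ) produce identical outputs from the same noise sequence.

```python
import numpy as np

def simulate(a, b, f, e):
    # y(t) + a*y(t-1) = b*u(t-1) + e(t), with feedback u(t) = -f*y(t)
    y = np.zeros(len(e))
    for t in range(1, len(e)):
        y[t] = -a * y[t - 1] + b * (-f * y[t - 1]) + e[t]
    return y

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
a, b, f, gamma = -0.5, 1.0, 0.8, 0.3
y1 = simulate(a, b, f, e)
y2 = simulate(a + gamma * f, b - gamma, f, e)
print(np.allclose(y1, y2))   # True: closed-loop data cannot separate them
```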


Some guidelines

• The CL experiment is informative ⇔ the reference r(t) is persistently exciting in

y(t) = G0(q)u(t) + H0(q)e(t)
u(t) = r(t) − Fy(q)y(t)

• Nonlinear, time-varying or complex (high-order) regulators yield informative enough experiments in general
• A switch between regulators, e.g.

u(t) = −F1(q)y(t) and u(t) = −F2(q)y(t), s.t. F1(e^{iω}) ≠ F2(e^{iω}) ∀ω,

achieves informative experiments
• Feedback allows injecting more input power in certain frequency ranges without increasing the output power.


Choice of the model structure

1 Start with non-parametric estimates (correlation analysis, spectral estimation)
• they give information about the model order and the important frequency regions
2 Prefilter the I/O data to emphasize important frequency ranges
3 Begin with ARX models
4 Select model orders via
• cross-validation (simulate & compare with new data)
• Akaike's Information Criterion, i.e., pick the model that minimizes

(1 + 2d/N) Σ_{t=1}^{N} ǫ[t; θ̂]²

where d = number of estimated parameters in the model
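A sketch of step 4 with the criterion above, on simulated data (the system, orders tried and noise level are illustrative, not from the lecture): ARX(n, n) models are fit by least squares and the order minimizing the penalized fit is retained; the criterion typically flags an order close to the true one (n = 2 here).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000
u = np.sign(rng.standard_normal(N))            # random binary input
y = np.zeros(N)
for t in range(2, N):                          # true system: ARX(2, 2)
    y[t] = (1.5 * y[t - 1] - 0.7 * y[t - 2]
            + u[t - 1] + 0.5 * u[t - 2]
            + 0.1 * rng.standard_normal())

def aic_like(y, u, n):
    # Fit ARX(n, n) by least squares, return (1 + 2d/N) * sum(eps^2).
    N = len(y)
    Phi = np.array([np.concatenate((-y[t - n:t][::-1], u[t - n:t][::-1]))
                    for t in range(n, N)])
    theta, *_ = np.linalg.lstsq(Phi, y[n:], rcond=None)
    eps = y[n:] - Phi @ theta
    d = 2 * n                                  # number of parameters
    return (1 + 2 * d / N) * np.sum(eps**2)

crit = {n: aic_like(y, u, n) for n in range(1, 6)}
best = min(crit, key=crit.get)
```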


Model validation

Parameter estimation → the "best model" in the chosen structure, but is it "good enough"?
• sufficient agreement with observed data
• appropriate for the intended purpose
• closeness to the "true system"

Example: G(s) = 1 / ((s + 1)(s + a)) has OL & CL responses for a = −0.01, 0, 0.01.

Insufficient for OL prediction, good enough for CL control!



Validation

• with respect to the purpose: regulator design, prediction or simulation → test on the specific problem, which may be impractical to do exhaustively (cost, safety)
• feasibility of physical parameters: estimated values and variances compared with prior knowledge; can also check sensitivity for identifiability
• consistency of the I/O behavior:
  • Bode diagrams for different models & spectral analysis
  • by simulation for NL models
• with respect to the data: verify that the observations behave according to the modeling assumptions
  1 Compare model simulation/prediction with real data
  2 Compare the estimated model's frequency response with a spectral analysis estimate
  3 Perform statistical tests on the prediction errors


Example: Bode plot for CL control

Different low-frequency behavior, similar responses around the cross-over frequency.


Model reduction
• The original model is unnecessarily complex if the I/O properties are not much affected by model reduction
• Conserve the spectrum/eigenvalues
• Numerical issues associated with matrix conditioning (e.g. plasma in the optimization class)

Parameter confidence intervals
• Compare each estimate with its estimated standard deviation
• If 0 ∈ the confidence interval, the corresponding parameter may be removed
• Usually interesting if related to a physical property (model order or time-delay)
• If the standard deviations are all large, the information matrix is close to singular and the model order is typically too large


Simulation and prediction

• Split the data into two parts: one for estimation and one for validation.
• Apply the input signal of the validation data set to the estimated model.
• Compare the simulated output with the output stored in the validation data set.



Residual analysis

• Analyze the data not reproduced by the model = the residuals

ǫ(t) = ǫ(t, θ̂N) = y(t) − ŷ(t|θ̂N)

• e.g. if we fit the parameters of the model

y(t) = G(q, θ)u(t) + H(q, θ)e(t)

to the data, the residuals

ǫ(t) = H(q, θ̂)^{−1} [y(t) − G(q, θ̂)u(t)]

represent the disturbance that explains the mismatch between the model and the observed data.

• If the model is correct, the residuals should be:
⋄ white, and
⋄ uncorrelated with u


Statistical model validation

• Pragmatic viewpoint: basic statistics from

S1 = max_t |ǫ(t)|, S2² = (1/N) Σ_{t=1}^{N} ǫ²(t)

are likely to hold for future data = invariance assumption (ǫ does not depend on something likely to change, or on the particular input in Z^N)

⇒ Study the covariances

R^N_ǫu(τ) = (1/N) Σ_{t=1}^{N} ǫ(t)u(t − τ), R^N_ǫ(τ) = (1/N) Σ_{t=1}^{N} ǫ(t)ǫ(t − τ)

⋄ R^N_ǫu(τ): if small, S1,2 are likely to be relevant for other inputs; otherwise, remaining traces of y(t) are not captured in M
⋄ R^N_ǫ(τ): if not small for τ ≠ 0, part of ǫ(t) could have been predicted ⇒ y(t) could be better predicted


Whiteness test

• Suppose that ǫ is a white noise with zero mean and variance λ; then

ζ_{N,M} = (N/λ²) Σ_{τ=1}^{M} (R^N_ǫ(τ))² = N Σ_{τ=1}^{M} (R^N_ǫ(τ))² / (R^N_ǫ(0))²

should be asymptotically χ²(M)-distributed (independence test), e.g. check that ζ_{N,M} < χ²_α(M), the α level of χ²(M)
• Simplified rule: the autocorrelation function √N R^N_ǫ(τ) should lie within a 95% confidence region around zero → large components indicate unmodelled dynamics
• Similarly, independence if √N R^N_ǫu(τ) lies within a 95% confidence region around zero:
⋄ large components indicate unmodelled dynamics
⋄ R^N_ǫu(τ) nonzero for τ < 0 (non-causality) indicates the presence of feedback
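The ζ statistic can be sketched in a few lines (the threshold and test signals are illustrative; χ²₀.₉₅(5) ≈ 11.07 is taken from standard tables): white residuals typically stay below the threshold, while correlated (e.g. MA-filtered) residuals exceed it by a wide margin.

```python
import numpy as np

def whiteness_stat(eps, M):
    # zeta_{N,M} = N * sum_{tau=1..M} R_eps(tau)^2 / R_eps(0)^2,
    # asymptotically chi-squared(M) when eps is white.
    N = len(eps)
    R = np.array([np.dot(eps[tau:], eps[:N - tau]) / N
                  for tau in range(M + 1)])
    return N * np.sum(R[1:]**2) / R[0]**2

rng = np.random.default_rng(42)
white = rng.standard_normal(5000)
colored = np.convolve(white, [1.0, 0.9], mode='valid')  # correlated residuals

M, chi2_95 = 5, 11.07   # 95% level of chi2(5), from tables
print(whiteness_stat(colored, M) > chi2_95)   # True: not white
```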


Conclusions

System identification: an iterative procedure in several steps
• Experiment design
⋄ preliminary experiments detect the basic system behavior
⋄ a carefully designed experiment enables good model estimation (choice of sampling interval, anti-alias filters, input signal)
• Examination and prefiltering of the data
⋄ remove outliers and trends
• Model structure selection
• Model validation
⋄ cross-validation and residual tests



References

• L. Ljung, System Identification: Theory for the User, 2nd Edition, Information and System Sciences, Upper Saddle River, NJ: PTR Prentice Hall, 1999.
• L. Ljung and T. Glad, Modeling of Dynamic Systems, Prentice Hall Information and System Sciences Series, 1994.
• Lecture notes from 2E1282 Modeling of Dynamical Systems, Automatic Control, School of Electrical Engineering, KTH, Sweden.


Nonlinear Black-box Identification (E. Witrant)

Sidebar topics: nonlinear state-space models; nonlinear black-box models; regressors; function expansions and basis functions; multi-variable basis functions; approximation issues; networks; parameter estimation with Gauss-Newton; stochastic descent; assumptions for black-box models; identification in tokamaks; identification method; results; conclusions.

System Identification

Lecture 11: Nonlinear Black-box Identification

Emmanuel WITRANT

[email protected]

April 24, 2014


Motivation

Linear systems are limited when considering:
• physical models
• large parameter variability
• complex systems

Today's concerns:
• generic classes of models
• black box: neural networks and AI
• parameter estimation for NL models: back to nonlinear programming


Outline

1 Nonlinear State-space Models

2 Nonlinear Black-box Models

3 Parameters estimation with Gauss-Newton stochastic gradient

4 Temperature profile identification in tokamak plasmas


Nonlinear State-space Models

• General model set:

x(t + 1) = f(t, x(t), u(t), w(t); θ)
y(t) = h(t, x(t), u(t), v(t); θ)

Nonlinear prediction → no finite-dimensional solution except in specific cases: approximations

• Predictor obtained from the simulation model (noise-free)

x̂(t + 1, θ) = f(t, x̂(t, θ), u(t), 0; θ) ⇔ (d/dt) x̂(t, θ) = f(·)
ŷ(t|θ) = h(t, x̂(t, θ), u(t), 0; θ)

• Include the known physical parts of the model, but unmodeled dynamics can still have a strong impact on the system → black-box components.



Nonlinear Black-box Models: Basic Principles

Model = mapping from the past data Z^{t−1} to the space of outputs

ŷ(t|θ) = g(Z^{t−1}, θ)

→ seek parameterizations (parameters θ) of g that are flexible and cover "all kinds of reasonable behavior" ≡ a nonlinear black-box model structure.


A structure for the general mapping: Regressors

Express g as a concatenation of two mappings:
• φ(t) = φ(Z^{t−1}): takes the past observations into the regression vector φ (components = regressors), or φ(t) = φ(Z^{t−1}, θ);
• g(φ(t), θ): maps φ into the space of outputs.

Two partial problems:
1 How to choose φ(t) from past I/O? Typically, using only measured quantities, i.e. NFIR (Nonlinear Finite Impulse Response) and NARX.
2 How to choose the nonlinear mapping g(φ, θ) from the regressor space to the output space?


Basic features of function expansions and basis functions

• Focus on g(φ(t), θ) : R^d → R^p, φ ∈ R^d, y ∈ R^p.
• Parametrized function as a family of function expansions

g(φ, θ) = Σ_{k=1}^{n} αk gk(φ), θ = [α1 . . . αn]^T

the gk are referred to as basis functions; this provides a unified framework for most NL black-box model structures.
• How to choose gk? Typically:
  • all gk are formed from one "mother basis function" κ(x);
  • κ(x) depends on a scalar variable x;
  • the gk are dilated (scaled) and translated versions of κ, i.e. if d = 1 (scalar case)

gk(φ) = gk(φ, βk, γk) = κ(βk (φ − γk))

where βk = dilation and γk = translation.
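A minimal sketch of such an expansion (the target function, number of basis functions and scale are illustrative choices): with the Gaussian bell as mother function and fixed βk, γk, the coordinates αk follow from linear least squares, as noted later for fixed scale and location.

```python
import numpy as np

kappa = lambda x: np.exp(-x**2 / 2)        # Gaussian bell mother function

phi = np.linspace(0.0, 1.0, 200)
g0 = np.sin(2 * np.pi * phi)               # "true" mapping to approximate

gammas = np.linspace(0.0, 1.0, 12)         # translations (centers)
beta = 12.0                                # dilation (scale), fixed a priori

# g(phi, theta) = sum_k alpha_k * kappa(beta * (phi - gamma_k)):
# linear in alpha once beta and the gamma_k are fixed.
Phi = kappa(beta * (phi[:, None] - gammas[None, :]))
alpha, *_ = np.linalg.lstsq(Phi, g0, rcond=None)

err = np.max(np.abs(Phi @ alpha - g0))     # small with only n = 12 terms
```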


Scalar examples

• Fourier series: κ(x) = cos(x); g is then a Fourier series expansion, with the βk as frequencies and the γk as phases.
• Piecewise continuous functions: κ as the unit interval indicator function

κ(x) = 1 for 0 ≤ x < 1, 0 else

and γk = k∆, βk = 1/∆, αk = f(k∆): gives a piecewise constant approximation of any f over intervals of length ∆. Similar version with the Gaussian bell κ(x) = (1/√(2π)) e^{−x²/2}.
• Piecewise continuous functions (variant): κ as the unit step function

κ(x) = 0 for x < 0, 1 for x > 0

Similar result with the sigmoid function κ(x) = 1/(1 + e^{−x}).



Classification of single-variable basis functions

• local basis functions, with significant variations in a local environment (e.g. the piecewise continuous functions presented above);
• global basis functions, with significant variations over the whole real axis (e.g. Fourier, Volterra).


Construction of multi-variable basis functions (φ ∈ R^d, d > 1)

1 Tensor product. Product of the single-variable function, applied to each component of φ:

gk(φ) = gk(φ, βk, γk) = Π_{j=1}^{d} κ(βk^j (φ^j − γk^j))

2 Radial construction. The value depends only on φ's distance from a given center point:

gk(φ) = gk(φ, βk, γk) = κ( ‖φ − γk‖_{βk} )

where ‖·‖_{βk} is any chosen norm, e.g. quadratic: ‖φ‖²_{βk} = φ^T βk φ with βk > 0 a matrix.

3 Ridge construction. The value depends only on φ's distance from a given hyperplane (constant for all φ in the hyperplane):

gk(φ) = gk(φ, βk, γk) = κ(βk^T (φ − γk))


Approximation issues

• For any of the described choices, the resulting model becomes

g(φ, θ) = Σ_{k=1}^n α_k κ(β_k (φ − γ_k))

• Fully determined by κ(x) and the basis function expansion on a vector φ.

• Parametrization in terms of θ characterized by three kinds of parameters: coordinates α, scale or dilation β, location γ. Note: the model is a linear regression for fixed scale and location.

• Accuracy [Juditsky et al., 1995]: for almost any choice of κ(x) (except polynomials), we can approximate any "reasonable" function g0(φ) (the true system) arbitrarily well with n large enough.


Approximation issues (2)

Efficiency [Barron, 1993]:

1 if β and γ are allowed to depend on the function g0, then the required n is much smaller than if β_k, γ_k are fixed a priori;

2 for the local, radial approach, the n necessary to achieve a degree of approximation δ of an s-times differentiable function of d variables is

n ∼ (1/δ)^(d/s), δ ≪ 1

→ increases exponentially with the number of regressors = the curse of dimensionality.



Networks for nonlinear black-box structures

Basis function expansions are often referred to as networks.

• Multi-layer networks: instead of taking a linear combination of the regressors, treat them as new regressors and introduce another "layer" of basis functions forming a second expansion, e.g. a two-hidden-layer network.


Networks for nonlinear black-box structures (2)

• Recurrent networks: when some regressors at time t are outputs from previous time instants, φ_k(t) = g(φ(t − k), θ).

Estimation aspects

Asymptotic properties and basic algorithms are the same as for the other model structures!


Parameter estimation with the Gauss-Newton stochastic gradient algorithm

⇒ A possible solution to determine the optimal parameters of each layer.

Problem description

Consider n_o system outputs y ∈ R^{n_m × n_o}, with n_m measurements for each output, and a model output ŷ ∈ R^{n_m × n_o}.

Objective: determine the optimal set of model parameters θ which minimizes the quadratic cost function

J(θ) ≐ (1/n_m) Σ_{i=1}^{n_m} ‖y(i) − ŷ(θ, i)‖₂²

The output error variance is minimized for θ* = arg min_θ J(θ).


Stochastic descent algorithm

Based on the sensitivity of ŷ(θ, i) with respect to θ,

S(θ, i) ≐ ∂ŷ/∂θ = [∂ŷ/∂θ_1, …, ∂ŷ/∂θ_{n_v}]^T,

the gradient of the cost function writes as

∇_θ J(θ) = −(2/n_m) Σ_{i=1}^{n_m} S(θ, i)(y(i) − ŷ(θ, i))



Stochastic descent algorithm (2)

θ* is obtained by moving along the steepest slope −∇_θ J(θ) with a step η, which has to ensure that

θ_{l+1} = θ_l − η_l ∇_θ J(θ_l)

converges to θ*, where l is the algorithm iteration index. η_l is chosen according to Gauss-Newton's method as

η_l ≐ (Ψ_θ J(θ_l) + υI)^{−1},

where υ > 0 is a constant introduced to ensure strict positiveness and Ψ_θ J(θ_l) is the pseudo-Hessian, obtained using the Gauss-Newton approximation

Ψ_θ J(θ_l) = (2/n_m) Σ_{i=1}^{n_m} S(θ_l, i) S(θ_l, i)^T
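The update θ_{l+1} = θ_l − (Ψ_θ J + υI)^{−1} ∇_θ J can be sketched directly from these formulas; a minimal numpy version, exercised on a hypothetical linear toy model ŷ = θ_1 + θ_2 x (an illustration only: for this model the sensitivities are constant and one step essentially reaches the least-squares optimum):

```python
import numpy as np

def gauss_newton_step(theta, y, y_hat, S, upsilon=1e-6):
    """One step theta <- theta - (Psi + upsilon*I)^(-1) * grad J.
    y, y_hat: (nm,) measured/model outputs; S: (nv, nm) sensitivities dy_hat/dtheta."""
    nm = y.size
    grad = -2.0 / nm * (S @ (y - y_hat))   # gradient of J(theta)
    psi = 2.0 / nm * (S @ S.T)             # Gauss-Newton pseudo-Hessian
    return theta - np.linalg.solve(psi + upsilon * np.eye(theta.size), grad)

x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x                          # data generated with theta = [2, 3]
S = np.vstack([np.ones_like(x), x])        # sensitivities of y_hat = theta1 + theta2*x
theta = np.zeros(2)
theta = gauss_newton_step(theta, y, S.T @ theta, S)
print(theta)
```

For genuinely nonlinear models the step is iterated, with S(θ_l, i) recomputed at each l.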


Stochastic descent algorithm (3)

Consider dynamical systems modeled as (t ∈ [0, T])

dx_m/dt = f_m(x_m(t), u(t), θ), x_m(t_0) = x_{m0}
ŷ(t) = g_m(x_m(t), u(t), θ)

where x_m is the model state and f_m(·) ∈ C¹; then

S(θ, t) = (∂g_m/∂x_m)(∂x_m/∂θ) + ∂g_m/∂θ

where the state sensitivity ∂x_m/∂θ is obtained by solving the ODE

(d/dt)[∂x_m/∂θ] = (∂f_m/∂x_m)(∂x_m/∂θ) + ∂f_m/∂θ
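A minimal sketch of this sensitivity ODE, assuming the hypothetical scalar model dx/dt = −θx with output y = x: the sensitivity s = ∂x/∂θ obeys ds/dt = −θs − x and is Euler-integrated alongside the state, then compared with the analytic value ∂x/∂θ = −t x0 e^{−θt}:

```python
import math

def simulate_with_sensitivity(theta, x0, dt, n_steps):
    """Euler-integrate dx/dt = -theta*x together with
    ds/dt = (df/dx)*s + df/dtheta = -theta*s - x, s(0) = 0."""
    x, s = x0, 0.0
    for _ in range(n_steps):
        # simultaneous update: both right-hand sides use the states at time t
        x, s = x + dt * (-theta * x), s + dt * (-theta * s - x)
    return x, s

theta, x0, T, n = 0.7, 1.0, 2.0, 200000
x, s = simulate_with_sensitivity(theta, x0, T / n, n)
print(x, math.exp(-theta * T))             # state vs analytic solution
print(s, -T * x0 * math.exp(-theta * T))   # sensitivity vs analytic solution
```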


Assumptions

• n_i independent system inputs u = [u_1, …, u_{n_i}] ∈ R^{n_m × n_i}, available during the optimal parameter search process.

• The set {y, u} corresponds to historic data and J is the data variance.

• The set of n_m measurements is large enough and well chosen (sufficiently rich input) to be considered as a generator of persistent excitation, which ensures that the resulting model represents the physical phenomenon accurately within the bounds of u.


For black-box models

Consider the nonlinear black-box structure

g(φ, θ) = Σ_{k=1}^n α_k κ(β_k (φ − γ_k))

To find the gradient ∇_θ J(θ) we just need to compute

(∂/∂α)[α κ(β(φ − γ))] = κ(β(φ − γ))
(∂/∂β)[α κ(β(φ − γ))] = α κ′(β(φ − γ)) (φ − γ)
(∂/∂γ)[α κ(β(φ − γ))] = −α κ′(β(φ − γ)) β



Example: sigmoid function family

κ_j = 1/(1 + e^{−β_j(x−γ_j)})

The sensitivity function is set with

∂v/∂α_j = 1/(1 + e^{−β_j(x−γ_j)}),
∂v/∂β_j = α_j e^{−β_j(x−γ_j)} (x − γ_j) / (1 + e^{−β_j(x−γ_j)})²,
∂v/∂γ_j = −α_j e^{−β_j(x−γ_j)} β_j / (1 + e^{−β_j(x−γ_j)})².

Notes:

• any continuous function can be arbitrarily well approximated using a superposition of sigmoid functions [Cybenko, 1989];

• nonlinear function ⇒ nonlinear optimization problem.
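These sensitivity expressions are easy to check numerically; a quick sketch (the test point and parameter values are arbitrary) compares the analytic derivatives of one term v = α/(1 + e^{−β(x−γ)}) against central finite differences:

```python
import numpy as np

def sigmoid_term(x, alpha, beta, gamma):
    """One sigmoid term and its sensitivities dv/dalpha, dv/dbeta, dv/dgamma."""
    e = np.exp(-beta * (x - gamma))
    v = alpha / (1.0 + e)
    return (v,
            1.0 / (1.0 + e),                           # dv/dalpha
            alpha * e * (x - gamma) / (1.0 + e) ** 2,  # dv/dbeta
            -alpha * e * beta / (1.0 + e) ** 2)        # dv/dgamma

x, a, b, g, h = 0.3, 2.0, 4.0, 0.5, 1e-6
_, da, db, dg = sigmoid_term(x, a, b, g)
fd = lambda p, m: (p - m) / (2.0 * h)  # central finite difference
assert abs(da - fd(sigmoid_term(x, a + h, b, g)[0], sigmoid_term(x, a - h, b, g)[0])) < 1e-6
assert abs(db - fd(sigmoid_term(x, a, b + h, g)[0], sigmoid_term(x, a, b - h, g)[0])) < 1e-6
assert abs(dg - fd(sigmoid_term(x, a, b, g + h)[0], sigmoid_term(x, a, b, g - h)[0])) < 1e-6
```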


Temperature profile identification in tokamak plasmas

⇒ Parameter-dependent identification of nonlinear distributed systems:

• grey-box modeling,

• 3-hidden-layer approach: spatial distribution, steady-state and transient behaviour,

• stochastic descent method with direct differentiation.


Identification method: TS temperature profile (L-mode)

Physical model:

(3/2) ∂(nT)/∂t = ∇(nχ∇T) + S_T

• Input: normalized profile v(x, t) = T_e(x, t)/T_e0(t)

1. v(x, t) = α/(1 + e^{−β(x−γ)}) ⇒ θ_f = {α, β, γ}

2. α_lh = e^{ϑ_sα0} I_p^{ϑ_sα1} B_φ0^{ϑ_sα2} N_∥^{ϑ_sα3} (1 + P_icrf/P_tot)^{ϑ_sα4}
β_lh = −e^{ϑ_sβ0} I_p^{ϑ_sβ1} B_φ0^{ϑ_sβ2} n_e^{ϑ_sβ3} N_∥^{ϑ_sβ4}
γ_lh = e^{ϑ_sγ0} I_p^{ϑ_sγ1} B_φ0^{ϑ_sγ2} N_∥^{ϑ_sγ3} (1 + P_icrf/P_tot)^{ϑ_sγ4} (1 + P_icrf/P_tot)^{ϑ_sγ5}
⇒ θ_s = {ϑ_sα,i, ϑ_sβ,i, ϑ_sγ,i}

3. τ_th(t) = e^{ϑ_t0} I_p^{ϑ_t1} B_φ0^{ϑ_t2} n_e^{ϑ_t3} P_tot^{ϑ_t4}
dW/dt = P_tot − (1/τ_th) W, W(0) = P_tot(0) τ_th(0)
T_e0(t) = A W ⇒ θ_t = {ϑ_t,i}


Identification method (2)

[Block diagram with signals: v(x, t_k); v(θ_f(t_k)); θ_f(u_ss, θ_s); u_ss(t_k,ss); f(w, u_t, θ_t, θ_s); u_t(t_k,t).]



Results (19 shots - 9500 measurements)

[Plots vs. time (s), measured vs. identified: first layer estimation Σ_{i=1}^{n_x} ‖v(x_i, t_k) − v̂(x_i, t_k)‖²; second layer estimation Σ_{i=1}^{n_x} ‖v(x_i, t_k) − v̂(u_ss, t_k)‖²; third layer estimation T_e0 and T̂_e0 (keV).]


Results (2)

Test case:

[Plots: central temperature (keV) and power inputs (MW): T_e0(t) measured, T̂_e0(t) identified, ITERL-96P(th) scaling, P_lh and P_icrf; and the error |η(x, t) − η̂(x, t)| over x and time (s).]


Conclusions

• Development similar to linear models

• Predictor → nonlinear function of past observations

• Unstructured black-box models are much more demanding

• Clearly identify the nonlinearities prior to identification: semi-physical models give the regressors

• Define sub-models that can be analyzed independently


Homework 5

1 Comment and write down the equations corresponding to the algorithm "fitTe_sig.m" below.

2 How should the script be modified to use Gaussian fitting curves?

function [P,Jf] = fitTe_sig(TE,xval,P)
% TE = input temperature profile
% xval = location of the measurement
% P = Initial Conditions on design parameters
[np,nm] = size(TE); % number of profiles/measurements
xval = xval'; y = TE';
ni = 1000; % number of iterations
nv = 3; % number of design parameters
nu = .1*eye(nv); % conditioning parameter
J = zeros(ni,1); % cost function
for j = 1:ni % for each iteration
GJ = zeros(nv,1); % Gradient J
DP2(:,j) = P; % design parameters evolution recording



y_est = zeros(nm,1); % model output (estimated system output)
sigmoid = P(1)./(1+exp(-P(2).*(xval-P(3)))); % source terms
y_est = y_est + sigmoid;
dsigmoid_e = sigmoid.^2./P(1).*exp(-P(2).*(xval-P(3)));
S = [sigmoid./P(1) (xval-P(3)).*dsigmoid_e -P(2).*dsigmoid_e];
% Sensitivity function dy/dK
PJ = S'*S; % pseudo-hessian \psi
for i=1:np % for each profile
error = y(:,i)-y_est;
% difference between the reference (0 in our case) and model
for k = 1:nv % for each design parameter
GJt(k) = error'*S(:,k);
end
GJ = GJ + GJt';
J(j) = J(j) + (error'*error)^2/np;
end
GJ = -2/np.*GJ;
PJ = 2.*PJ;
alpha = inv(PJ + nu);
P = P - alpha*GJ; % this is the variation law for K


if J(j) < 1*1e-4
% if the cost function is sufficiently small we are happy and
% get out!
TE_est = y_est';
break
end
end
Jf = J(j);


References

1 L. Ljung, "System Identification: Theory for the User", 2nd Edition, Information and System Sciences, Upper Saddle River, NJ: PTR Prentice Hall, 1999.

2 E. Witrant and S. Brémond, "Shape Identification for Distributed Parameter Systems and Temperature Profiles in Tokamaks", Proc. of the 50th IEEE Conference on Decision and Control, Orlando, USA, December 12-15, 2011.



System Identification

Lecture 12: Recursive Estimation Methods

Emmanuel WITRANT

[email protected]

April 24, 2014


Motivation

On-line model when the system is in operation, to take decisions about the system, e.g.:

• Which input should be applied next?

• How should the filter parameters be tuned?

• What are the best predictions of the next outputs?

• Has a failure occurred? Of what type?

⇒ Adaptive methods (control, filtering, signal processing and prediction).

Recursive estimation methods are:

• completed in one sampling interval to keep up with the information flow;

• able to carry their own estimate of the parameter variance;

• also competitive for off-line situations.


Overview

• The general mapping of the data set to the parameter space, θ_t = F(t, Z^t), may involve an unknown, large amount of calculation for F.

• Recursive algorithm format, with information state X(t):

X(t) = H(t, X(t − 1), y(t), u(t))
θ_t = h(X(t))

• H & h: explicit expressions involving limited calculations ⇒ θ_t evaluated during a sampling interval.

• Small information content in the latest measurement pair ⇒ small updates (gains γ_t and µ_t):

θ_t = θ_{t−1} + γ_t Q_θ(X(t), y(t), u(t))
X(t) = X(t − 1) + µ_t Q_X(X(t), y(t), u(t))


Outline

1 The Recursive Least-Squares Algorithm

2 The Recursive IV Method

3 Recursive Prediction-Error Methods

4 Recursive Pseudolinear Regressions

5 Choice of Updating Step



The Recursive Least-Squares Algorithm

The weighted LS criterion

θ_t = arg min_θ Σ_{k=1}^t β(t, k)[y(k) − φ^T(k)θ]²

where φ is the regressor, has the solution

θ_t = R^{−1}(t) f(t)
R(t) = Σ_{k=1}^t β(t, k) φ(k) φ^T(k)
f(t) = Σ_{k=1}^t β(t, k) φ(k) y(k)

This uses all of Z^t, and θ_{t−1} cannot be directly used, even if it is closely related to θ_t.


Recursive algorithm

• Suppose the weighting sequence has the properties

β(t, k) = λ(t) β(t − 1, k), 0 ≤ k ≤ t − 1
β(t, t) = 1
⇒ β(t, k) = ∏_{j=k+1}^t λ(j)

where λ(t) is the forgetting factor. It implies that

R(t) = λ(t) R(t − 1) + φ(t) φ^T(t)
f(t) = λ(t) f(t − 1) + φ(t) y(t)
⇒ θ_t = R^{−1}(t) f(t) = θ_{t−1} + R^{−1}(t) φ(t)[y(t) − φ^T(t) θ_{t−1}]

• At time (t − 1) we only need to store the information vector X(t − 1) = [θ_{t−1}, R(t − 1)].


Efficient matrix inversion

To avoid inverting R(t) at each step, introduce P(t) = R^{−1}(t) and the matrix inversion lemma

[A + BCD]^{−1} = A^{−1} − A^{−1}B[DA^{−1}B + C^{−1}]^{−1}DA^{−1}

with A = λ(t)R(t − 1), B = D^T = φ(t) and C = 1, to obtain

θ(t) = θ(t − 1) + L(t)[y(t) − φ^T(t)θ(t − 1)]
L(t) = P(t − 1)φ(t) / [λ(t) + φ^T(t)P(t − 1)φ(t)]
P(t) = (1/λ(t))[P(t − 1) − L(t)φ^T(t)P(t − 1)]

Note: we use θ(t) instead of θ_t to account for slight differences due to the initial conditions.
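The three update equations translate directly into code; a minimal numpy sketch, run on hypothetical noise-free data y(t) = φ^T(t)θ_0 so that the estimate can be checked against the true θ_0:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares step with forgetting factor lam."""
    L = P @ phi / (lam + phi @ P @ phi)   # gain L(t)
    theta = theta + L * (y - phi @ theta) # parameter update theta(t)
    P = (P - np.outer(L, phi @ P)) / lam  # covariance update P(t)
    return theta, P

rng = np.random.default_rng(0)
theta0 = np.array([1.0, -2.0])            # true parameters (illustrative)
theta, P = np.zeros(2), 1e3 * np.eye(2)   # theta(0) = theta_I, P(0) = P0 large
for _ in range(200):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, phi @ theta0, lam=0.98)
print(theta)
```

With λ < 1 the same loop tracks slowly varying parameters; with λ = 1 it reproduces the batch least-squares estimate.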


Normalized gain version

To bring out the influence of R̄ and λ(t), normalize as

R̄(t) = γ(t) R(t), γ(t) = [Σ_{k=1}^t β(t, k)]^{−1} ⇒ 1/γ(t) = λ(t)/γ(t − 1) + 1

It implies that

θ_t = θ_{t−1} + R^{−1}(t)φ(t)[y(t) − φ^T(t)θ_{t−1}]
R(t) = λ(t)R(t − 1) + φ(t)φ^T(t)

becomes

ε(t) = y(t) − φ^T(t)θ(t − 1)
θ(t) = θ(t − 1) + γ(t)R̄^{−1}(t)φ(t)ε(t)
R̄(t) = R̄(t − 1) + γ(t)[φ(t)φ^T(t) − R̄(t − 1)]

• R̄(t): weighted arithmetic mean of φ(t)φ^T(t);

• ε(t): prediction error according to the current model;

• γ(t): updating step size or gain of the algorithm.



Initial conditions

• Ideally R(0) = 0 and θ_0 = θ_I, but this cannot be used (R^{−1} is required) → initialize when R(t_0) is invertible: spare t_0 samples s.t.

P^{−1}(t_0) = R(t_0) = Σ_{k=1}^{t_0} β(t_0, k)φ(k)φ^T(k)
θ_0 = P(t_0) Σ_{k=1}^{t_0} β(t_0, k)φ(k)y(k)

• Simpler alternative: use P(0) = P_0 and θ(0) = θ_I, which gives

θ(t) = [β(t, 0)P_0^{−1} + Σ_{k=1}^t β(t, k)φ(k)φ^T(k)]^{−1} [β(t, 0)P_0^{−1}θ_I + Σ_{k=1}^t β(t, k)φ(k)y(k)]

If P_0 and t are large, the difference is insignificant.


Weighted multivariable case

θ_t = arg min_θ (1/2) Σ_{k=1}^t β(t, k)[y(k) − φ^T(k)θ]^T Λ_k^{−1} [y(k) − φ^T(k)θ]

gives, similarly to the scalar case,

θ(t) = θ(t − 1) + L(t)[y(t) − φ^T(t)θ(t − 1)]
L(t) = P(t − 1)φ(t)[λ(t)Λ_t + φ^T(t)P(t − 1)φ(t)]^{−1}
P(t) = (1/λ(t))[P(t − 1) − L(t)φ^T(t)P(t − 1)]

and (normalized gain)

ε(t) = y(t) − φ^T(t)θ(t − 1)
θ(t) = θ(t − 1) + γ(t)R̄^{−1}(t)φ(t)Λ_t^{−1}ε(t)
R̄(t) = R̄(t − 1) + γ(t)[φ(t)Λ_t^{−1}φ^T(t) − R̄(t − 1)]

Note: this can also be used for the scalar case with the weighted norm β(t, k) = α_k ∏_{j=k+1}^t λ(j), where the scalar α_k corresponds to Λ_k^{−1}.


Kalman filter interpretation

• The Kalman filter for

x(t + 1) = F(t)x(t) + w(t)
y(t) = H(t)x(t) + v(t)

is

x̂(t + 1) = F(t)x̂(t) + K(t)[y(t) − H(t)x̂(t)], x̂(0) = x_0,
K(t) = [F(t)P(t)H^T(t) + R_12(t)][H(t)P(t)H^T(t) + R_2(t)]^{−1},
P(t + 1) = F(t)P(t)F^T(t) + R_1(t) − K(t)[H(t)P(t)H^T(t) + R_2(t)]K^T(t), P(0) = Π_0,

with R_1(t) = Ew(t)w^T(t), R_12(t) = Ew(t)v^T(t), R_2(t) = Ev(t)v^T(t).

• The linear regression model ŷ(t|θ) = φ^T(t)θ can be expressed as

θ(t + 1) = Iθ(t) + 0, (≡ θ)
y(t) = φ^T(t)θ(t) + v(t)

Corresponding KF (with Λ_t ≐ R_2(t)):

θ(t + 1) = θ(t) + K(t)[y(t) − φ^T(t)θ(t)],
K(t) = P(t)φ(t)[φ^T(t)P(t)φ(t) + Λ_t]^{−1},
P(t + 1) = P(t) − K(t)[φ^T(t)P(t)φ(t) + Λ_t]K^T(t).

= exactly the multivariable case formulation if λ(t) ≡ 1!


Resulting practical hints

• if v(t) is white and Gaussian, then the posterior distribution of θ(t), given Z^{t−1}, is Gaussian with mean value θ(t) and covariance P(t);

• IC: θ(0) is the mean and P(0) the covariance of the prior distribution → θ(0) is our guess before seeing the data and P(0) reflects our confidence in that guess;

• the natural choice for Λ_t is the noise covariance matrix of the errors. If the (scalar) α_t^{−1} = Ev²(t) is time-varying, use β(k, k) = α_k in the weighted criterion.



Time-varying systems

• Adaptive methods and recursive algorithms: time-varying system properties ⇒ track these variations.

⇒ Assign less weight to older measurements: choose λ(j) < 1; e.g. if λ(j) ≡ λ, then β(t, k) = λ^{t−k} and old measurements are exponentially discounted: λ is the forgetting factor. Consequently γ(t) ≡ γ = 1 − λ.

• OR let the parameter vector vary like a random walk

θ(t + 1) = θ(t) + w(t), Ew(t)w^T(t) = R_1(t)

with w white Gaussian and Ev²(t) = R_2(t). The Kalman filter gives the conditional expectation and covariance of θ as

θ(t) = θ(t − 1) + L(t)[y(t) − φ^T(t)θ(t − 1)]
L(t) = P(t − 1)φ(t) / [R_2(t) + φ^T(t)P(t − 1)φ(t)]
P(t) = P(t − 1) − L(t)φ^T(t)P(t − 1) + R_1(t)

⇒ R_1(t) prevents L(t) from tending to zero.
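A sketch of this random-walk tracking variant on a hypothetical drifting parameter (all numerical values are illustrative): adding R_1 to P at every step keeps the gain alive, so the estimate follows the drift.

```python
import numpy as np

def tracking_update(theta, P, phi, y, R1, R2=1.0):
    """Kalman-filter form of RLS for a random-walk parameter vector."""
    L = P @ phi / (R2 + phi @ P @ phi)    # gain L(t)
    theta = theta + L * (y - phi @ theta) # parameter update
    P = P - np.outer(L, phi @ P) + R1     # R1 keeps L(t) from tending to zero
    return theta, P

rng = np.random.default_rng(2)
theta_true = np.array([0.0, 1.0])
theta, P = np.zeros(2), np.eye(2)
R1 = 1e-2 * np.eye(2)
for _ in range(500):
    theta_true = theta_true + np.array([0.01, 0.0])  # slow drift of one parameter
    phi = rng.standard_normal(2)
    theta, P = tracking_update(theta, P, phi, phi @ theta_true, R1)
print(theta, theta_true)
```

Setting R1 = 0 recovers the plain RLS recursion, whose gain decays and which therefore stops tracking.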


Example: parametrization of the plasma resistivity profile

Consider the time- and space-dependent η(x, t) (shot 35109), approximated with the scaling law

η(x, t, θ(t)) ≐ e^{θ_1} e^{θ_2 x} e^{θ_3 x²} ··· e^{θ_Nθ x^{Nθ−1}}

where x ∈ R^{Nx} and θ = θ(t) ∈ R^{Nθ}; then

• the data is processed as y(x, t) = ln η(x, t)

• the model is parameterized as

ŷ(t, θ) ≐ ln η(x, t, θ(t)) = [1 x x² … x^{Nθ−1}] [θ_1(t), θ_2(t), …, θ_Nθ(t)]^T = Φ^T θ(t), with Φ^T ∈ R^{Nx × Nθ}


Example (2): code for multivariable case

lambda = .5; % forgetting factor
weight = exp(-x.^2./.07); % gaussian distribution
Lambda = inv(diag(weight)); % gauss. weight
P = inv(lambda.*Phi*Phi'); % initial P
for j = 1:Nt % time loop
y_t = y(:,j); % acquire measurement
epsilon = y_t - Phi'*Theta; % prediction err.
Theta_r(:,j) = Theta; % store param. at t
L = P*Phi*inv(lambda.*Lambda + Phi'*P*Phi);
P = (P - L*Phi'*P)./lambda; % update P
Theta = Theta + L*epsilon; % update Theta
y_est(:,j) = Phi'*Theta; % record estimation
cost(j) = epsilon'*inv(Lambda)*epsilon;
end


Simulation results

[Plots vs. time (s): estimated parameters θ_1 to θ_4, and the weighted cost [y(k) − φ^T(k)θ]^T Λ_k^{−1} [y(k) − φ^T(k)θ].]



Simulation results (2): effect of λ and Λt

[Figure: estimated parameters θ1–θ4 over 25 s, comparing λ = 0.8 with Λt = |Eεε^T|·I, λ = 0.8, and λ = 0.99.]


The Recursive IV Method

Instrumental Variables (IV)

• Linear regression model: ŷ(t|θ) = φ^T(t)θ

  ⇒ θ̂_N^LS = sol{ (1/N) Σ_{t=1}^N φ(t)[y(t) − φ^T(t)θ] = 0 }

• Actual data: y(t) = φ^T(t)θ0 + v0(t). Typically the LSE θ̂_N ↛ θ0, because of the correlation between v0(t) and φ(t): introduce a general correlation vector ζ(t), whose elements are called the instruments or instrumental variables.

• IV estimation:

  θ̂_N^IV = sol{ (1/N) Σ_{t=1}^N ζ(t)[y(t) − φ^T(t)θ] = 0 }
          = [ (1/N) Σ_{t=1}^N ζ(t)φ^T(t) ]^{-1} (1/N) Σ_{t=1}^N ζ(t)y(t)

  Requires: E ζ(t)φ^T(t) nonsingular (IV correlated with φ) and E ζ(t)v0(t) = 0 (IV not correlated with the noise).
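These two conditions can be illustrated numerically. The following Python sketch (the course material uses Matlab; the first-order ARX system, the MA(1) noise and the delayed-input instruments are illustrative assumptions) compares LS and IV on data where the regressor is correlated with the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
a, b = -0.8, 1.0                       # true parameters: y(t) + a*y(t-1) = b*u(t-1) + v(t)
u = rng.standard_normal(N)
e = rng.standard_normal(N)
v = e.copy()
v[1:] += 0.9 * e[:-1]                  # MA(1) noise: v(t) = e(t) + 0.9 e(t-1), correlated with y(t-1)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t-1] + b * u[t-1] + v[t]

Y = y[2:]
phi = np.column_stack([-y[1:-1], u[1:-1]])   # regressor [-y(t-1), u(t-1)]
zeta = np.column_stack([u[:-2], u[1:-1]])    # instruments: past inputs, uncorrelated with v

theta_ls = np.linalg.solve(phi.T @ phi, phi.T @ Y)   # biased: E[phi*v] != 0
theta_iv = np.linalg.solve(zeta.T @ phi, zeta.T @ Y) # consistent
print(theta_ls, theta_iv)
```

With white v(t) the LS estimate would already be consistent; the colored noise is what makes the instruments necessary here.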


Choice of Instruments: e.g., ARX

Supposing the true system is

  y(t) + a1 y(t−1) + ⋯ + a_na y(t−na) = b1 u(t−1) + ⋯ + b_nb u(t−nb) + v(t),

choose the IV similar to the previous model, while ensuring the correlation constraints:

  ζ(t) = K(q)[−x(t−1) … −x(t−na) u(t−1) … u(t−nb)]^T,

where K is a filter and N(q)x(t) = M(q)u(t) (e.g., N, M from an LS-estimated model and K(q) = 1 for open loop).

⇒ ζ is obtained from filtered past inputs: ζ(t) = ζ(t, u^{t−1}).


Recursive IV method

• Rewrite the IV method as

  θ̂^IV(t) = R^{-1}(t)f(t), with R(t) = Σ_{k=1}^t β(t,k)ζ(k)φ^T(k) and f(t) = Σ_{k=1}^t β(t,k)ζ(k)y(k),

which implies that

  θ̂(t) = θ̂(t−1) + L(t)[y(t) − φ^T(t)θ̂(t−1)]
  L(t) = P(t−1)ζ(t) / [λ(t) + φ^T(t)P(t−1)ζ(t)]
  P(t) = (1/λ(t)) [P(t−1) − L(t)φ^T(t)P(t−1)]

• Asymptotic behavior: same as the off-line counterpart, except for the initial-condition issue.

• Choice of the IV (model-dependent):

  ζ(t,θ) = K_u(q,θ)u(t), with K_u a linear filter, or
  ζ(t,θ) built from x(t,θ) and u(t), with A(q,θ)x(t,θ) = B(q,θ)u(t).


Recursive Prediction-Error Methods

Weighted prediction-error criterion:

  V̄_t(θ, Z^t) = γ(t) (1/2) Σ_{k=1}^t β(t,k) ε²(k,θ),

with γ, β as defined above (β(t,k) = λ(t)β(t−1,k), β(t,t) = 1). Note that Σ_{k=1}^t γ(t)β(t,k) = 1 and the gradient w.r.t. θ obeys (with ε(k,θ) = y(k) − ŷ(k,θ) and ψ = ∂ŷ/∂θ):

  ∇_θ V̄_t(θ, Z^t) = −γ(t) Σ_{k=1}^t β(t,k) ψ(k,θ) ε(k,θ)
                   = γ(t) [ λ(t) (1/γ(t−1)) ∇_θ V̄_{t−1}(θ, Z^{t−1}) − ψ(t,θ)ε(t,θ) ]
                   = ∇_θ V̄_{t−1}(θ, Z^{t−1}) + γ(t) [ −ψ(t,θ)ε(t,θ) − ∇_θ V̄_{t−1}(θ, Z^{t−1}) ],

since λ(t)γ(t)/γ(t−1) = 1 − γ(t).


Recursive method

General search algorithm (i-th iteration of the minimization, with data Z^t):

  θ̂_t^(i) = θ̂_t^(i−1) − μ_t^(i) [R_t^(i)]^{-1} ∇_θ V̄_t(θ̂_t^(i−1), Z^t).

Suppose one more data point is collected at each iteration:

  θ̂(t) = θ̂(t−1) − μ_t^(t) [R(t)]^{-1} ∇_θ V̄_t(θ̂(t−1), Z^t),

where θ̂(t) = θ̂_t^(t) and R(t) = R_t^(t). Make the induction assumption that θ̂(t−1) minimized V̄_{t−1}(θ, Z^{t−1}):

  ∇_θ V̄_{t−1}(θ̂(t−1), Z^{t−1}) = 0
  ⇒ ∇_θ V̄_t(θ̂(t−1), Z^t) = −γ(t) ψ(t, θ̂(t−1)) ε(t, θ̂(t−1)).

Along with μ(t) = 1, this gives

  θ̂(t) = θ̂(t−1) + γ(t) R^{-1}(t) ψ(t, θ̂(t−1)) ε(t, θ̂(t−1)).


Recursive method (2)

• Problem: ψ(t, θ̂(t−1)) and ε(t, θ̂(t−1)), derived from ŷ(t, θ̂(t−1)), require the knowledge of all the data Z^{t−1} and consequently cannot be computed recursively.

• Assumption: at time k, replace θ by the currently available estimate θ̂(k), and denote the resulting approximations of ψ(t, θ̂(t−1)) and ŷ(t, θ̂(t−1)) by ψ(t) and ŷ(t).

Ex.1 Finite LPV:

  ξ(t+1, θ) = A(θ)ξ(t, θ) + B(θ)[y(t) u(t)]^T
  [ŷ(t|θ) ψ(t, θ)]^T = C(θ)ξ(t, θ)

is approximated by

  ξ(t+1) = A(θ̂(t))ξ(t) + B(θ̂(t))[y(t) u(t)]^T
  [ŷ(t) ψ(t)]^T = C(θ̂(t−1))ξ(t).


Ex.2: Gauss-Newton

  ∂²V_N(θ, Z^N)/∂θ² ≈ (1/N) Σ_{t=1}^N ψ(t,θ)ψ^T(t,θ) ≜ H_N(θ), and R_N^(i) = H_N(θ̂_N^(i)),

which with the proposed approximation suggests that

  R(t) = γ(t) Σ_{k=1}^t β(t,k) ψ(k)ψ^T(k).

Final recursive scheme:

  ε(t) = y(t) − ŷ(t)
  θ̂(t) = θ̂(t−1) + γ(t) R^{-1}(t) ψ(t) ε(t)
  R(t) = R(t−1) + γ(t)[ψ(t)ψ^T(t) − R(t−1)]

Together with R(t) from the Gauss-Newton example, this yields the recursive Gauss-Newton prediction-error algorithm.
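The scheme can be exercised on the simplest case, a linear regression, where ψ(t) = φ(t). A Python sketch (the true parameters and the gain sequence γ(t) = 1/(t+1) are illustrative choices, not course code):

```python
import numpy as np

rng = np.random.default_rng(1)
theta0 = np.array([2.0, -1.0])          # true parameters (illustrative)
theta = np.zeros(2)
R = np.eye(2)                           # any positive-definite initialization
for t in range(1, 5001):
    phi = rng.standard_normal(2)        # regressor; psi = phi for a linear regression
    y = phi @ theta0 + 0.1 * rng.standard_normal()
    gamma = 1.0 / (t + 1)               # gain sequence
    eps = y - phi @ theta               # prediction error eps(t)
    R = R + gamma * (np.outer(phi, phi) - R)
    theta = theta + gamma * np.linalg.solve(R, phi) * eps
print(theta)
```

On this structure the algorithm behaves like RLS with equal weighting of all samples.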


Family of recursive prediction-error methods (RPEM)

• Wide family of methods depending on the underlying model structure and the choice of R(t).

• Example: linear regression ŷ(t|θ) = φ^T(t)θ gives ψ(t,θ) = ψ(t) = φ(t), i.e. the RLS method. The gradient variant (R(t) = I) on the same structure,

  θ̂(t) = θ̂(t−1) + γ(t)φ(t)ε(t),

where the gain γ(t) is a given sequence or normalized as γ(t) = γ̄(t)/|φ(t)|², is widely used in adaptive signal processing and called LMS (least mean squares).
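A minimal sketch of this normalized-gain (LMS) variant, in Python (the FIR regressor, the step size μ and the small regularizer added to |φ(t)|² are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
theta0 = np.array([0.5, -0.3, 0.2])      # unknown FIR parameters (illustrative)
theta = np.zeros(3)
mu = 0.5                                 # normalized step size gamma_bar
x = rng.standard_normal(3000)
for t in range(3, 3000):
    phi = x[t-3:t][::-1]                 # regressor of past inputs [x(t-1), x(t-2), x(t-3)]
    y = phi @ theta0 + 0.01 * rng.standard_normal()
    eps = y - phi @ theta                # prediction error
    theta = theta + (mu / (1e-6 + phi @ phi)) * phi * eps   # gamma(t) = mu/|phi|^2
print(theta)
```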


Example: recursive maximum likelihood

Consider the ARMAX model

  y(t) + a1 y(t−1) + ⋯ + a_na y(t−na) = b1 u(t−1) + ⋯ + b_nb u(t−nb) + e(t) + c1 e(t−1) + ⋯ + c_nc e(t−nc)

and define θ = [a1 … a_na b1 … b_nb c1 … c_nc]^T. Introduce the vector

  φ(t,θ) = [−y(t−1) … −y(t−na) u(t−1) … u(t−nb) ε(t−1,θ) … ε(t−nc,θ)]^T,

so that

  ŷ(t|θ) = φ^T(t,θ)θ, ε(t,θ) = y(t) − ŷ(t|θ),
  ψ(t,θ) + c1 ψ(t−1,θ) + ⋯ + c_nc ψ(t−nc,θ) = φ(t,θ).

The previous simplifying assumption implies that

  ε(t) = y(t) − φ^T(t)θ̂(t)
  φ(t) = [−y(t−1) … −y(t−na) u(t−1) … u(t−nb) ε(t−1) … ε(t−nc)]^T
  ŷ(t) = φ^T(t)θ̂(t−1); ε(t) = y(t) − ŷ(t)
  ψ(t) + ĉ1(t−1)ψ(t−1) + ⋯ + ĉ_nc(t−1)ψ(t−nc) = φ(t),

and the algorithm becomes θ̂(t) = θ̂(t−1) + γ(t)R^{-1}(t)ψ(t)ε(t)
⇒ the Recursive Maximum Likelihood (RML) scheme.


Recursive Pseudolinear Regressions

Very similar to the recursive prediction-error methods, except that the gradient is replaced by the regressor:

  ŷ(t) = φ^T(t)θ̂(t−1), ε(t) = y(t) − ŷ(t)
  θ̂(t) = θ̂(t−1) + γ(t)R^{-1}(t)φ(t)ε(t)
  R(t) = R(t−1) + γ(t)[φ(t)φ^T(t) − R(t−1)]
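On an ARMAX model, putting past residuals in the regressor in place of the unmeasured noise gives the classical extended-least-squares behavior of this scheme. A Python sketch (the first-order parameters are illustrative; the update is written in RLS/P-form with unit forgetting rather than with the explicit γ(t), R(t) pair):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 30000
a, b, c = -0.7, 1.0, 0.5                 # true ARMAX parameters (illustrative)
u = rng.standard_normal(N)
e = 0.3 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a * y[t-1] + b * u[t-1] + e[t] + c * e[t-1]

theta = np.zeros(3)                      # estimates of (a, b, c)
P = 1e3 * np.eye(3)
eps_prev = 0.0                           # residual standing in for e(t-1)
for t in range(1, N):
    phi = np.array([-y[t-1], u[t-1], eps_prev])   # pseudolinear regressor
    eps = y[t] - phi @ theta                      # prediction error
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * eps
    P = P - np.outer(K, phi @ P)
    eps_prev = y[t] - phi @ theta                 # posterior residual
print(theta)
```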


Choice of Updating Step

How to determine the update direction and the length of the step (γ(t)R^{-1}(t))?

Update direction:

1. Gauss-Newton: R(t) approximates the Hessian,

   R(t) = R(t−1) + γ(t)[ψ(t)ψ^T(t) − R(t−1)]

2. Gradient: R(t) is a scaled identity, R(t) = |ψ(t)|²·I, or

   R(t) = R(t−1) + γ(t)[|ψ(t)|²·I − R(t−1)]

→ trade-off between convergence rate (Gauss-Newton, d² operations) and algorithm complexity (gradient, d operations).


Update step: adaptation gain

Two ways to cope with time-varying aspects:

1. select an appropriate forgetting profile β(t,k) or a suitable gain γ(t); the two are equivalent, as

   β(t,k) = Π_{j=k+1}^t λ(j) = (γ(k)/γ(t)) Π_{j=k+1}^t (1 − γ(j)),
   λ(t) = (γ(t−1)/γ(t))(1 − γ(t)) ⇔ γ(t) = [1 + λ(t)/γ(t−1)]^{-1};

2. introduce a covariance matrix R1(t) for the parameter change per sample: this increases P(t) and consequently the gain vector L(t).

→ trade-off between tracking ability and noise sensitivity.
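The equivalence between the forgetting profile and the gain sequence can be verified numerically; a Python sketch with an illustrative constant λ:

```python
import numpy as np

T = 50
lam = np.full(T + 1, 0.95)      # constant forgetting factor (illustrative)
gamma = np.zeros(T + 1)
gamma[0] = 1.0
for t in range(1, T + 1):
    gamma[t] = 1.0 / (1.0 + lam[t] / gamma[t - 1])   # equivalent gain recursion

t, k = T, 30
beta_lam = np.prod(lam[k + 1:t + 1])                                # beta(t,k) from lambda
beta_gam = gamma[k] / gamma[t] * np.prod(1.0 - gamma[k + 1:t + 1])  # beta(t,k) from gamma
print(beta_lam, beta_gam)       # the two expressions coincide
```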


Choice of forgetting factors λ(t)

• Select the forgetting profile β(t,k) so that the criterion keeps the measurements that are relevant for the current properties.

• For "quasi-stationary" systems, take a constant factor λ(t) ≡ λ slightly below 1:

  β(t,k) = λ^{t−k} = e^{(t−k) ln λ} ≈ e^{−(t−k)(1−λ)}

⇒ measurements older than the memory time constant t − k = 1/(1−λ) are included with a weight < e^{−1} ≈ 36% (good if the system remains approximately constant over t − k samples). Typically, λ ∈ [0.98, 0.995].

• If the system undergoes abrupt and sudden changes, choose an adaptive λ(t): decrease it temporarily after an abrupt change ("cut off" past measurements).

→ trade-off between tracking alertness and noise sensitivity.
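The memory rule of thumb can be checked directly; a short Python sketch:

```python
import numpy as np

for lam in (0.98, 0.99, 0.995):
    T0 = 1.0 / (1.0 - lam)        # memory time constant, in samples
    w = lam ** T0                 # weight given to a sample T0 steps old
    print(f"lambda={lam}: memory = {T0:.0f} samples, weight there = {w:.3f}")
```

A sample one memory constant old indeed carries a weight close to e^{-1} ≈ 0.368 for all typical values of λ.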


Conclusions

• Instruments for most adaptation schemes

• Derived from off-line methods by starting a new iteration each time a new observation is made

• Same off-line/on-line results in specific cases (RLS, RIV), but the data are not maximally utilized

• Asymptotic properties of RPEM for constant systems are the same as off-line: the previous analysis holds

• Two new important quantities: update direction and gains

• Can be applied to both "deterministic" and "stochastic" systems


Homework 6

1. Apply RPEM to a first-order ARMA model

   y(t) + a y(t−1) = e(t) + c e(t−1).

   Derive an explicit expression for the difference y(t) − ŷ(t|θ̂(t−1)). Discuss when this difference will be small.

2. Consider γ(t) = [Σ_{k=1}^t β(t,k)]^{-1} and β(t,k) defined by

   β(t,k) = λ(t)β(t−1,k), 0 ≤ k ≤ t−1,
   β(t,t) = 1.

   Show that β(t,k) = (γ(k)/γ(t)) Π_{j=k+1}^t [1 − γ(j)].


Reference

1. L. Ljung, "System Identification: Theory for the User", 2nd Edition, Information and System Sciences, Upper Saddle River, NJ: PTR Prentice Hall, 1999.


Modeling and Estimation for Control
and
Optimization and Optimal Control

Lab topics

Emmanuel WITRANT
Felipe CASTILLO-BUENAVENTURA

University Joseph Fourier, UFR PhITEM,
International Master in Systems, Control & IT

November 26, 2012


Contents

1 Vibration Isolation for Heavy Trucks
  1.1 Perform the three phases of modeling (lab 1)
  1.2 Design a simulator (lab 2)
  1.3 Bond-graph (lab 2)

2 Modeling and Observation of the LEGO Mindstorms Robot
  2.1 System Description
  2.2 Robot Programming (lab 1)
  2.3 Robot Modeling (lab 1)
  2.4 Kalman Filter (lab 2)
  2.5 Recursive Identification of the LEGO (lab 2)
  2.6 Conclusions (lab 2)

3 Modeling of a Thermonuclear Plant
  3.1 What is thermonuclear fusion?
  3.2 Tokamaks and profile control
  3.3 System dynamics
    3.3.1 Resistivity and bootstrap current
    3.3.2 Temperature and density profiles
    3.3.3 Discretization/Computation issues
  3.4 Model Inputs
    3.4.1 Boundary condition
    3.4.2 Distributed current deposit from the antennas (LH, ECCD)
    3.4.3 Temperature profile modifications
  3.5 Lab Exercises
    3.5.1 Modeling
    3.5.2 Simulation

4 Simulation and Control of an Inverted Pendulum Using Scilab
  4.1 Tasks
  4.2 Algorithms
    4.2.1 pendule.dem
    4.2.2 simulation.sci

5 Identification of an Active Vibration Control Benchmark Using Matlab
  5.1 System description and preliminary work
  5.2 Experiment design and frequency-response analysis
  5.3 Spectral analysis
  5.4 Estimating the disturbance spectrum
  5.5 Conclusions

6 Analysing the Anthropogenic Impact on the Ozone Layer Depletion
  6.1 Data properties (lab 1)
  6.2 Transport model and time loop (lab 1)
  6.3 Atmospheric scenario (lab 1-2)
  6.4 I/O properties (lab 2)
  6.5 Conclusions (lab 2)

7 Optimization: Particle Source Identification in a Tokamak
  7.1 Introduction
  7.2 Simplified Transport Model
    7.2.1 Problem statement
  7.3 Laboratory work
    7.3.1 State-space description [1h30]
    7.3.2 Optimal on-line identification
    7.3.3 Use of the experimental data

8 Optimization: Flow Control
  8.1 System Description
  8.2 Identification
  8.3 Control
  8.4 Conclusions


Chapter 1

Vibration Isolation for Heavy Trucks.

A company has developed a vibration isolation system for use on the back of heavy-truck cabs. These cabs are pivoted with stiff mounts at the front, and the vibration isolators are located at the left- and right-hand sides of the cab at the rear. The isolators consist of fluid-filled displacers (piston/cylinder devices) attached by a fluid-filled tube (with an orifice) to a compressed-air accumulator tank. The displacers are about 10 cm in diameter and the tube is about 2.5 cm in diameter. The system is shown in Figure 1.1.

1.1 Perform the three phases of modeling (lab 1)

1. structuring the problem:

(a) divide the system into subsystems

(b) define the important variables

(c) for each subsystem, identify the interaction variables

(d) draw a block diagram of the complete system

2. setting up the Basic Equations:

(a) recall the main physical laws that are necessary to solve this problem

(b) which blocks have a dynamic behavior? which ones are static?

(c) associate an equation to each block

3. forming the State-Space Model:

(a) combine the equations obtained previously to get an algebro-differential set of equations

(b) clearly define the state variables

(c) rewrite the model as a set of first order differential equations

1.2 Design a simulator (lab 2)

Based on the following simplifications:

1. since we are only interested in how the system deviates from the equilibrium state and the system is linear, you can disregard the influence of gravity on the cab and the isolation system, i.e., vin is the only input to the system. This means that the accumulator tank functions like a standard tank and you can denote its fluid capacitance by Cf;


Figure 1.1: Vibration isolation system.

2. the wheel dynamics can be disregarded;

3. inertance in the cylinders can be disregarded, i.e. the only inertance in the system is in the tube;

4. friction in the tube can be neglected;

5. the mass in the pistons can be neglected;

the system dynamics write as

  ẋ1 = (A/m)(x3 + Rf x2 + Lf ẋ2),
  ẋ3 = (1/Cf) x2,  x2 = A(vin − x1),

where x1 is the cab speed, x2 is the flow in the tube and x3 is the tank pressure. Setting arbitrarily A = π(0.05)², Rf = 10, Lf = 0.01, Cf = 5, m = 500 and vin as a noisy sine function with a 50 Hz period:

1. illustrate the dynamics with a block representation in Matlab Simulink®;

2. obtain the state-space matrices and simulate the response with an M-file;

3. provide the system response to a step or noisy input; discuss and illustrate the influence of the design parameters.

1.3 Bond-graph (lab 2)

1. Draw a bond-graph, including causality, for the isolation system model. The velocity vin from the road surface is an input to the system.

2. Based on the bond-graph description, determine a state-space model for the system. The velocity v of the truck cab is the output.


Chapter 2

Modeling and Observation of the LEGO Mindstorms Robot

This lab proposes the use of Matlab Simulink and the LEGO Mindstorms NXT as a platform for doing theoretical and practical work in dynamic system modeling, Kalman filter design and parametric identification. The modeling of the LEGO Mindstorms is to be developed by the students, as well as the design of a Kalman filter to estimate the position of the robot and a recursive parametric identification.

2.1 System Description

In this lab, the LEGO Mindstorms NXT platform will be used. The NXT robot is based on a powerful 32-bit microcontroller: ARM7, with 256 Kbytes of FLASH and 64 Kbytes of RAM. For programming and communications, the NXT has a USB 2.0 port and a wireless Bluetooth class II, V2.0 device. The new control unit has 4 inputs (one of them provides an IEC 61158 Type 4/EN 50170 expansion for future use) and 3 analog outputs. The new LEGO version offers 4 electronic sensor types: touch, light, sound and distance. In addition to these basic sensors, a great variety of optional sensors can nowadays be bought, for example vision cameras, magnetic compasses, accelerometers, gyroscopes, infrared seekers, etc. (http://www.hitechnic.com/, http://www.mindsensors.com/).

The last most significant LEGO components are the actuators. In this case, they consist of DC motors with integrated encoders. These sensors have a 1-degree resolution, which improves on the previous version's encoder resolution. A more detailed description of the LEGO Mindstorms NXT motors can be found at http://www.philohome.com/motors/motorcomp.htm ([1]).

In this lab, we are interested in the modeling of the three-wheel robot presented in Figure 2.1.

Figure 2.1: Picture and scheme of the LEGO robot with three wheels.

Two of the wheels are independently powered by electric motors and one is left free with the purpose of letting the robot turn. The way of controlling the robot position is to change the speeds of the motorized wheels:

• the translation of the robot is produced by the sum of the rotational speeds of the two wheels;

• the rotation is produced by the difference between these speeds.


Using this information, the student has to find a dynamic model for the LEGO robot by applying the three phases of modeling seen in class. For this exercise, the only available measurements are those of the encoders integrated in the DC motors. From these measurements, a wheel angular-speed estimator based on a Kalman filter approach has to be conceived.

2.2 Robot Programming (lab 1)

The following functions may be useful for the programming of the three-wheel robot. Check the Not eXactly C (NXC) Programmer's Guide for more detail: http://bricxcc.sourceforge.net/nbc/nxcdoc/NXC_Guide.pdf.

For the servomotors, check:

• ResetTachoCount(port alias)

• MotorTachoCount(port alias)

• MotorActualSpeed(port alias)

• MotorPower(port alias)

• OnFwdReg(port alias,power,...)

For time management

• CurrentTick()

• FirstTick()

• Wait(time)

2.3 Robot Modeling (lab 1)

In this section, it is expected to obtain a model of the robot shown in Figure 2.1 capable of representing the robot position dynamics.

A-1 Perform the three phases of modeling for the robot dynamics: structuring the problem, setting up the basic equations and forming the state-space models. Explain the assumptions made for the model conception;

A-2 create the robot model in Matlab Simulink and illustrate the results obtained;

A-3 program a trajectory in the robot and acquire the necessary information from the robot in order to compare the results with the simulator previously made. Discuss and illustrate the results.

2.4 Kalman Filter (lab 2)

In this section, it is expected to design a Kalman filter in order to estimate the wheels' angular velocities ωl and ωr from the encoders of each wheel (θl and θr). (Read [1].)

B-1 write the linear model of the wheel angular velocities and then discretize it. Consider that only the encoder measurements are available.

B-2 design a Kalman filter in order to estimate the robot wheels' angular velocities. (Consider the measurement and process covariance matrices presented in [1].)

B-3 Using Matlab Simulink, illustrate the difference between taking the time derivative of the wheel angle to estimate the angular velocity and using the Kalman filter.

B-4 Reconstruct the robot position from the estimated wheel angular speeds using the robot model developed in the previous section.

B-5 program in the robot the angular-speed estimator using the Kalman filter. Implement the developed model in the robot in order to get an estimate of the robot position from the wheel angular speeds.
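As a starting point, the wheel-speed estimator of B-1/B-2 can be sketched on synthetic data (in Python rather than the lab's Matlab/NXC; the constant-speed state model, the sample time and the covariances below are illustrative assumptions, the actual lab should use the matrices from [1]):

```python
import numpy as np

dt = 0.01                                  # sample time (assumption)
A = np.array([[1.0, dt], [0.0, 1.0]])      # state [angle, angular speed], constant-speed model
C = np.array([[1.0, 0.0]])                 # only the encoder angle is measured
Q = np.diag([1e-6, 1e-4])                  # process noise (tuning assumption)
Rm = np.array([[(np.pi / 180) ** 2]])      # 1-degree encoder resolution as measurement noise

rng = np.random.default_rng(4)
x = np.array([0.0, 5.0])                   # true state: constant 5 rad/s
xh = np.zeros(2)                           # filter state
P = np.eye(2)
for _ in range(500):
    x = A @ x                                          # simulate the true motion
    ymeas = C @ x + np.sqrt(Rm[0, 0]) * rng.standard_normal(1)
    xh = A @ xh                                        # prediction step
    P = A @ P @ A.T + Q
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rm)      # update step
    xh = xh + K @ (ymeas - C @ xh)
    P = (np.eye(2) - K @ C) @ P
print(xh[1])   # estimated angular speed, close to the true 5 rad/s
```

The same predict/update pair, once tuned, is what B-5 asks to run on the robot itself.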


2.5 Recursive Identification of the LEGO (lab 2)

In this section, implement a recursive identification of the LEGO robot using the recursive least-squares (RLS) algorithm (read the second chapter of [2]). Consider:

  y(t) = φ^T ψ(t) (2.1)

where the parameter vector φ is to be estimated from the measurements y(t), and ψ(t) is the regressor (vector of lagged inputs-outputs).

B-1 build the linear difference equation of the dynamic system with output signal θ and input signal u. In this case, θ is the measurement given by the encoders and the input signal is the power specified for the electric motor. Take into account only a one-time-step lag;

B-2 what is the advantage of implementing RLS instead of the non-recursive least-squares method?;

B-3 state the recursive least-squares (RLS) algorithm in order to estimate the parameter vector φ;

B-4 implement the RLS algorithm on the LEGO robot and present the results obtained;
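A sketch of the requested RLS, run on synthetic data in place of the real LEGO (Python; the first-order model parameters, the noise level and the forgetting factor are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
a, b = 0.95, 0.4                      # synthetic first-order "robot" (no real LEGO here)
N = 2000
u = rng.uniform(-1.0, 1.0, N)         # motor power input
th = np.zeros(N)                      # encoder angle measurement
for t in range(1, N):
    th[t] = a * th[t-1] + b * u[t-1] + 0.01 * rng.standard_normal()

phi_hat = np.zeros(2)                 # parameter vector (phi in the lab's notation)
P = 1e4 * np.eye(2)
lam = 0.98                            # forgetting factor
for t in range(1, N):
    psi = np.array([th[t-1], u[t-1]])            # regressor of lagged input-output
    eps = th[t] - psi @ phi_hat                  # prediction error
    K = P @ psi / (lam + psi @ P @ psi)
    phi_hat = phi_hat + K * eps
    P = (P - np.outer(K, psi @ P)) / lam
print(phi_hat)
```

On the robot, the loop body would run once per encoder sample, which is exactly the advantage asked about in B-2: no batch of past data needs to be stored.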

2.6 Conclusions (lab 2)

Conclude and comment.


Chapter 3

Modeling of a Thermonuclear Plant

The aim of this modeling-for-control lab is to investigate the modeling issues associated with a complex industrial plant, including distributed dynamics and numerous actuators and sensors. A thermonuclear plant, such as a tokamak, provides a particularly appropriate choice to fulfill this goal. Indeed, a tokamak is both fascinating as the potential main source of energy for future generations and a key challenge for control engineers, as it necessitates advanced control methodologies with tight modeling and real-time issues. In this lab, the main principles of fusion are first presented, motivating the interest for profile control and dedicated tokamak experiments. We will then focus more precisely on the Tore Supra tokamak (CEA-Cadarache, DRFC) and the mathematical description of the magnetic flux dynamics, along with the main actuators. This description will be used to develop an object-oriented model, using the three phases of modeling presented in class. Based on an existing Matlab® code and the obtained model, a Simulink® simulator will be built. Most of the presented results are detailed in [3].

3.1 What is thermonuclear fusion?

- Excerpts from the ITER official website, www.iter.org -

Fusion is the energy source of the sun and the stars. On earth, fusion research is aimed at demonstrating that this energy source can be used to produce electricity in a safe and environmentally benign way, with abundant fuel resources, to meet the needs of a growing world population.

Nuclear fusion involves the bringing together of atomic nuclei. The atom's nucleus consists of protons with a single positive charge and neutrons of similar mass and no charge. The strong nuclear force holds these "nucleons" together against the repulsive effect of the proton's charge. As many negatively charged electrons as protons swarm around the nucleus to balance the proton charge, and the mass of the atom lies almost totally in the nucleus.

The sum of the individual masses of the nucleons is greater than the mass of the whole nucleus. This is because the strong nuclear force holds the nucleons together - the combined nucleus is a lower energy state than the nucleons separately. The difference, the binding energy (E = m·c²), varies from one element to another. Because of the possible ways that nucleons can pack together, when two light atomic nuclei are brought together to make a heavier one, the binding energy of the combined nucleus can be more than the sum of the binding energies of the component nuclei (i.e. it is in an even lower energy state). This energy difference is released in the "fusion" process.

A similar situation occurs when heavy nuclei split. Again the binding energies of the pieces can be more than that of the whole (i.e. they are in a lower energy state), and the excess energy is released in the "fission" process. These alternatives are shown in Fig. 3.1.

To conclude this short introduction on fusion, an interesting movie describing ITER, the International Thermonuclear Experimental Reactor, is available at: http://www.iter.org/Flash/iter5minviewing.htm

3.2 Tokamaks and profile control

In the coming years the main challenge in the fusion community will be the development of experimental scenarios for ITER. Amongst them, the so-called advanced tokamak steady-state ones will play a significant role, since they will allow reproducing and studying, on a smaller scale, the conditions that are expected to be obtained in a fusion plant of reduced size and costs [4]. In these scenarios a particular emphasis is given to the current density profile and to the way of producing the plasma current Ip. Due to the intrinsically limited availability of magnetic flux in fusion devices, needed to sustain a purely inductive current, Ip will have to be mainly generated by non-inductive sources. In particular, the importance of real-time safety factor profile (q-profile) control is emphasized in [5], where an interesting overview on recent advances and key issues in real-time tokamak plasma control is provided. A Proportional-Integral (PI) feedback control strategy,


Figure 3.1: Fusion and Fission principles

Figure 3.2: A schematic view of the Tore Supra tokamak

based on a simple linear model, is also proposed and its efficiency is illustrated by experimental results, which motivate further research developments in this direction.

The control of so-called "advanced" plasma regimes [4, 6, 7] for steady-state high performance tokamak operation is a challenge, in particular because of the non-linear coupling between the current density and the pressure profiles. In a burning plasma, the alpha-particle power will also be a strong function of these profiles, and, through its effect on the bootstrap current, will be at the origin of a large (though ultra-slow) redistribution of the current density. The possible destabilization of adverse Toroidal Alfven Eigenmodes (TAE) - such as the drift kinetic modes that are anticipated to appear at high values of the central safety factor [8] - as well as potential thermal instabilities due to the ITB dynamics will further complicate the issue. This motivates the need for further investigation of plasma profile shaping to guarantee and control steady-state operation of the plasmas.

Considering the tokamak magnetic configuration, such as the one presented in Fig. 3.3, the plasma can be described as nested surfaces. The physical variables of interest for profile control (temperature, density and magnetic flux) can be considered as constant on these surfaces. The problem of modeling a plasma in 3-D space can consequently be simplified to the modeling of physical variables along the normalized minor radius (1-D problem).

To summarize:

• advanced plasma confinement schemes require further research on current density profile control, which motivates the need for a representative real-time model of plasma physics for control synthesis;

• such a model will focus on the magnetic flux dynamics, from which the current density profiles are obtained;

• for a tokamak such as Tore Supra, the distributed inputs are due to the (auto-generated) bootstrap effect, the poloidal coils (magnetic source), and the lower hybrid (LH), electron cyclotron current drive (ECCD) and ion cyclotron radio heating (ICRH) antennas, as presented in Fig. 3.4.

3.3 System dynamics

In order to simplify the plasma model, we make the following hypotheses:

• the coordinates are supposed to be cylindrical (neglect the impact of the toroidal shape),

• the diamagnetic effect is neglected.

The magnetic flux dynamics is consequently obtained as [9, 10]:

∂ψ/∂t (x, t) = η∥(x, t) [ (1/(µ0 a²)) ∂²ψ/∂x² + (1/(µ0 a² x)) ∂ψ/∂x + R0 jni(x, t) ]   (3.1)

jφ(x, t) = − (1/(µ0 R0 a² x)) ∂/∂x [ x ∂ψ/∂x ]   (3.2)

where η∥(x, t) is the resistivity, µ0 = 4π × 10⁻⁷ H/m is the permeability of free space, a is the plasma minor radius, R0 is the magnetic center location, jni(x, t) = jbs + jLH + jECCD is the non-inductive current source, including both the bootstrap effect jbs and the microwave current drive (jLH and jECCD), and jφ(x, t) is the toroidal current density profile (controlled output). The boundary conditions are given by ψ′(0, t) = 0, ψ′(1, t) = f(Ip) or ψ(1, t) = f(Vloop), and the initial conditions are known.

The computations of the resistivity, the temperature and density profiles, as well as non-inductive current sources, are briefly described below.


Figure 3.3: Isoflux maps, particles trajectories and controlled profiles

Inputs: poloidal coils, LH, ECCD and ICRF antennas.

Outputs: ji, q, Ii, Vloop, li, βθ, ∆.

Figure 3.4: Input-output representation of Tore Supra

3.3.1 Resistivity and bootstrap current

The plasma resistivity mainly depends on the electron temperature and density profiles, Te(x, t) and ne(x, t), respectively, and on the effective value of the plasma charge Z. We can use the formulae detailed in [11], which can be summarized as

η∥(x, t) = f (Te, ne, Z) (3.3)

where a proportionality to Te^(−3/2) plays the major role.

The bootstrap current is generated by trapped particles and may be the main source of non-inductive current in specific scenarios (high-bootstrap experiments). It is a self-induced current (created by the isoflux surfaces) that induces significant nonlinearities in the magnetic flux equation. We consider the model derived in [12]:

jbs(x, t) = (e R0/(∂ψ/∂x)) [ ( A1 (Z + αTi(1 − αi))/Z − A2 ) ne ∂Te/∂x + A1 ((Z + αTi)/Z) Te ∂ne/∂x ]   (3.4)

10

Page 118: Control-oriented modeling and system identification

with ne = Z ni and Ti = αTi Te.

3.3.2 Temperature and density profiles

The temperature profiles can be obtained thanks to direct measurement or using the scaling laws proposed in [13], which are detailed here. The electron temperature profile is estimated with

Te(x, t) ≈ [ α(t)/(1 + e^(−β(t)(x − γ(t)))) ] ATe(t)   (3.5)

where the normalized shape of the profile Te(x, t)/Te(0, t) is estimated with a sigmoid function defined by its amplitude α(t), dilatation β(t) and translation (inflection point) γ(t). The amplitude of the profile is ATe(t), computed from the plasma thermal energy. The shape parameters are set with the switched model

(α, β, γ) = (αlh, βlh, γlh) if Plh ≠ 0, and (αω, βω, γω) otherwise.

This model then distinguishes the LHCD heating effect from the ohmic and ICRH ones. Selecting the most significant terms, the shape parameters are related to the global and engineering parameters with

αlh = e^(−0.87) Ip^(−0.43) Bφ0^(0.63) N∥^(0.25) (1 + Picrh/Ptot)^(0.15)

βlh = −e^(3.88) Ip^(0.31) Bφ0^(−0.86) n̄e^(−0.39) N∥^(−1.15)

γlh = e^(1.77) Ip^(1.40) Bφ0^(−1.76) N∥^(−0.45) (1 + Picrh/Ptot)^(−0.54)

where Ip is the total plasma current, Bφ0 the toroidal magnetic field (at the center of the plasma), N∥ the parallel refraction index, Picrh the ICRH power and Ptot the total input power. Note that the first three variables are global plasma parameters while the others correspond to some inputs (the LH power Plh is included in Ptot). For the ohmic and ICRH cases, we have that:

αω = e^(−0.37) Ip^(−0.46) Bφ0^(0.23) n̄e^(0.22)

βω = −e^(1.92) Ip^(0.38) n̄e^(−0.33)

γω = e^(−0.15) Ip^(1.03) Bφ0^(−0.51) (1 + Picrh/Ptot)^(−0.46)

The dynamics of the temperature profiles is obtained from the plasma thermal energy Wth(t)

Wth(t) = (3e/2) ∫V (1 + αTi αni) ne Te dV   (3.6)

where αTi and αni are scaling laws, and e is the electron charge. The temperature profile amplitude is related to Wth thanks to the relationship ATe(t) = A(t) Wth(t) and Wth is estimated with

τth(t) = 0.135 Ip^(0.94) Bφ0^(−0.15) n̄e^(0.78) (1 + Plh/Ptot)^(0.13) Ptot^(−0.78)

dWth/dt = Ptot − (1/τth) Wth,   Wth(0) = Ptot(0) τth(0)   (3.7)

where τth(t) is the thermal energy confinement time. The density profiles are obtained from line-averaged measurements (thanks to an interferometer). Considering the simplest case, the density profile is approximated with

ne0(t) = ((γn + 1)/γn) n̄e(t)   and   ne(x, t) = ne0(t) (1 − x^γn)   (3.8)

where n̄e(t) is the line-averaged density.
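As an illustration of (3.5) and (3.8), the profile shapes can be evaluated numerically. All parameter values below are placeholders for illustration, not identified Tore Supra values:

```python
import numpy as np

# Hedged sketch of the temperature shape (3.5) and density profile (3.8).
x = np.linspace(0.0, 1.0, 101)                   # normalized minor radius

# Sigmoid temperature shape; beta < 0 so the profile decays to the edge
alpha, beta, gamma, A_Te = 1.0, -8.0, 0.5, 2.0e3
Te = alpha / (1.0 + np.exp(-beta * (x - gamma))) * A_Te   # eq. (3.5)

# Density profile from the line-averaged density
gamma_n, ne_bar = 2.0, 3.0e19                    # shape exponent, line average
ne0 = (gamma_n + 1.0) / gamma_n * ne_bar         # central density, eq. (3.8)
ne = ne0 * (1.0 - x ** gamma_n)                  # profile, eq. (3.8)
```

With this sign convention both profiles peak at the plasma center (x = 0) and the density vanishes at the edge (x = 1), as the parabolic-like shape (3.8) requires.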


3.3.3 Discretization/Computation issues

Main dynamics

Some flexibility in the discretization scheme can be introduced by a variable step spatial discretization:

∂ψ/∂t (xi, t)|ex = (η∥ i,j/(µ0 a²)) (d2 ψi+1,j − d3 ψi,j + d4 ψi−1,j) + (d1 η∥ i,j/(µ0 a² xi)) (ψi+1,j − ψi−1,j) + η∥ i,j R0 jni i,j
               = η∥ i,j (e1 ψi+1,j − e2 ψi,j + e3 ψi−1,j + R0 jni i,j)   (3.9)

where the subscript i = 1 . . . N denotes the spatial discretization points and j refers to the time samples. The parameters e1, e2 and e3 can be adjusted to set the desired mesh topography (i.e. it may be interesting to have a more precise resolution at the plasma center, or to use these parameters to ensure the robustness of the discretization scheme).

Similarly, an extra degree of freedom can be introduced in the temporal discretization by using the implicit-explicit scheme:

(ψi,j+1 − ψi,j)/δt = h ∂ψ/∂t (xi, t)|ex + (1 − h) ∂ψ/∂t (xi, t)|im   (3.10)

where h determines the ratio of explicit to implicit computation. The final computation writes as:

Ai,i+1,j ψi+1,j + Ai,i,j ψi,j + Ai,i−1,j ψi−1,j − Bi,i+1,j ψi+1,j+1 − Bi,i,j ψi,j+1 − Bi,i−1,j ψi−1,j+1 + Si,j = 0   (3.11)-(3.12)

⇔ ψj+1 = Bj⁻¹ Aj ψj + Bj⁻¹ Sj   (3.13)

where the transition matrices both depend on the physical model and on the numerical issues.
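A minimal numerical sketch of the update (3.10)-(3.13), applied to a plain 1-D diffusion equation rather than the full flux model: the grid, source and explicit fraction h below are illustrative, and the tokamak model would add the 1/x term, the resistivity and the d/e mesh coefficients in the same matrix positions.

```python
import numpy as np

# Implicit/explicit (theta-scheme) time stepping for dpsi/dt = psi_xx + s
# on the interior points of (0, 1), with psi = 0 at both ends.
N, dt, h = 50, 1e-3, 0.5                 # points, time step, explicit fraction
dx = 1.0 / (N + 1)

# Second-difference operator with homogeneous Dirichlet boundaries
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
s = np.ones(N)                           # stand-in for the source term

# (I/dt - (1-h) L) psi_{j+1} = (I/dt + h L) psi_j + s, i.e. the form (3.13)
B = np.eye(N) / dt - (1.0 - h) * L
A = np.eye(N) / dt + h * L
psi = np.zeros(N)
for _ in range(400):
    psi = np.linalg.solve(B, A @ psi + s)
```

With h = 0.5 this is the Crank-Nicolson scheme; h = 1 recovers the fully explicit update and h = 0 the fully implicit one, which is the trade-off the lab asks you to explore.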

Boundary conditions

Specific care has to be given to the central point, as the magnetic flux gradient term becomes in that case ψ′(x1, t)/x1 = 0/0. One way to solve this problem is to discretize the equation with:

(1/x) ∂/∂x [ x ∂ψ/∂x ] ≈ (4/δx2) ψ′1/2 ≈ (4/δx2²) (ψ2 − ψ1)   (3.14)

for this specific point.

3.4 Model Inputs

There are three different types of inputs to the magnetic flux model above.

3.4.1 Boundary condition

The boundary input is set by the poloidal coil current Ic(t) (inductive input) as [14]:

Lc dIc/dt + M dIp/dt + Rc Ic − Vc = 0   (3.15)

M dIc/dt + Lp dIp/dt + Rp (Ip − INI) = 0   (3.16)

where Rc and Lc are the coil resistance and internal inductance, Rp and Lp are the plasma resistance and inductance, M is the matrix of mutual inductances, Vc is the input voltage applied to the coils and INI is the current generated by the non-inductive sources. Note that a local control loop sets Vc according to Ip,ref or Vloop,ref, which are then considered as the actual boundary inputs.

3.4.2 Distributed current deposit from the antennas (LH, ECCD)

This current deposit is approximated thanks to the Gaussian approximation (engineering approach):

jcd/lh(x, t) = ϑ(t) e^(−(µ(t) − x)²/(2σ(t)))
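The deposit and the weighted normalization integral 2π a² ∫₀¹ x j(x) dx used to set its magnitude (cf. (3.20) and (3.24)) can be sketched numerically. All numerical values (µ, σ, a, the efficiency, power, density and R0) are illustrative placeholders, not Tore Supra values:

```python
import numpy as np

# Hedged sketch: Gaussian current deposit shape and its normalization.
mu, sigma, a = 0.3, 0.005, 0.72
x = np.linspace(0.0, 1.0, 2001)
shape = np.exp(-((mu - x) ** 2) / (2.0 * sigma))   # deposit shape

# Trapezoidal weights for the normalization integral over [0, 1]
w = np.full(x.size, x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5
norm = 2.0 * np.pi * a ** 2 * np.sum(w * x * shape)

# Magnitude chosen so that the weighted integral of j matches the
# driven current term eta * P / (ne * R0), cf. (3.24)
Plh, eta_lh, ne_bar, R0 = 2.0e6, 0.2, 3.0e19, 2.38
theta_amp = eta_lh * Plh / (ne_bar * R0) / norm
j = theta_amp * shape                              # distributed deposit j(x)
```

The deposit peaks at x = µ and its integral reproduces the prescribed total drive by construction, which is exactly the role of the bracketed integral in (3.20) and (3.24).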


Figure 3.5: ECCD current deposit geometry (steering mirror at (Rant, Zant), absorption at (Rabs, Zabs), poloidal angle φpol, radii R*, Rc, R0 in the (R, Z) plane)

Figure 3.6: LH current deposit: hard X-ray emission measurement vs. Gaussian approximation (width whxr, center µhxr)

ECCD current deposit

The total EC current is the sum of several deposits due to several EC beams, whose parameters are denoted by the subscript m. Each current deposit is determined by the position of the steering mirror (Rant,m, Zant,m), its orientation in the poloidal and toroidal planes (φpol,m, φtor,m), and the emission power Pcd,m, as presented in Figure 3.5. The ECCD current deposit is localized between R* and Rc, and is computed as [15]:

Rc = 0.2044 nh Iind/f,   (3.17)

R*m = (Rant,m/(√2 |sin φtor,m|)) [ 1 − ( 1 − (4 Rc²/R²ant,m) sin² φtor,m )^(1/2) ],   (3.18)

γcd,m = (Γ1 Te(xabs,m)/(Te(xabs,m) + 10⁵)) × { 1 − Γ2 [ (ρm(Rabs,m, φpol,m) + Rabs,m − R0)/Rabs,m ]^Γ3 },   (3.19)

ϑcd,m = (γcd,m Pcd,m/(R0 ne)) [ 2π a² ∫₀¹ x e^(−(µcd,m − x)²/(2σcd,m)) dx ]⁻¹ × sign(φpol,m).   (3.20)

LH current deposit

This deposit shape can either be obtained directly from hard X-ray measurements or computed using the scaling laws:

whxr = α0 Bφ0^α1 Ip^α2 n̄e^α3 Plh^α4 N∥^α5   (3.21)

µhxr = β0 Bφ0^β1 Ip^β2 n̄e^β3 Plh^β4 N∥^β5   (3.22)

The current drive efficiency is given by [16]:

ηlh(t) = 3.39 Dn^(0.26) τth^(0.46) Z^(−0.13),   (3.23)

with Dn(t) ≈ 2.03 − 0.63 N∥, and its magnitude is:

ϑlh(t) = [ 2π a² ∫₀¹ x e^(−(µhxr − x)²/(2σlh)) dx ]⁻¹ (ηlh Plh)/(n̄e R0)   (3.24)

Summing the bootstrap and antenna current sources, we have that:

jni(x, t) = jbs(x, t) + jcd(x, t) + jlh(x, t)   (3.25)

3.4.3 Temperature profile modifications

The temperature profile modifications, due to the ICRF and LH antennas, have a direct influence on the magnetic flux through the resistivity and the bootstrap current.


3.5 Lab Exercises

3.5.1 Modeling

Perform the “three steps of modeling” on the Tore Supra tokamak (4 points):

1. Structuring the problem

2. Setting up the Basic Equations

3. Forming the State-Space Models

3.5.2 Simulation

1. Download and run the Matlab files (shot 35109) from the class website.

2. Discretization issues (4 points):

(a) indicate which part of the code should be modified in order to modify the temporal discretization;

(b) simulate changes in the time step and implicit to explicit ratio.

3. Control architecture (4 points):

(a) run the open-loop and the closed-loop configurations and compare your results;

(b) analyze the control architecture;

(c) modify the control parameters and comment.

4. Disturbance (3 points):

(a) introduce a disturbance with the ECCD antenna at 15 s and 21 s;

(b) discuss the effect on the dynamics;

(c) investigate the result when a scalar feedback is used.

5. Put a step on boundary control and discuss the effect (3 points).

6. Write a consistent report with an introduction, a conclusion, labelled figures etc. (2 points)

Note: comment and analyze ALL your simulations.


Chapter 4

Simulation and Control of an Inverted Pendulum Using Scilab

4.1 Tasks

1. Start Scilab and run

?/Demonstrations Scilab/CACSD/Inverted pendulum

2. Explain the different steps of this demonstration and the illustrated computing capabilities.

3. Find and download in your personal space

\scilab-5.0.3\modules\cacsd\demos\pendulum

4. Relate the demonstration steps to the algorithm (printed in the Appendix).

5. Motivate the equations used for feedback control and observation (Jacobian computation from the nonlinear system to obtain the linearization, observer formulation and feedback gain design).

6. Modify the control strategy to use an optimal control approach, i.e. with

I = ∫_{t0}^{tf} ( xᵀ diag(qi) x + r u² ) dt

Discuss the impact of the ratios qi/r on the simulation results. Note: you may use the riccati function.
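Task 6 amounts to an LQR design on the linearized model. A hedged Python sketch is given below (in Scilab the same design would go through riccati); the matrices f and g reproduce the hand-computed f1, g1 of the demo, while the weights diag(qi) and r are illustrative choices, not prescribed values:

```python
import numpy as np

# LQR for the linearized pendulum via the continuous algebraic Riccati
# equation, solved with the Hamiltonian eigenvector method (numpy only).
mb, mc, l = 0.1, 1.0, 0.3
m = 4 * mc + mb
f = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, -3 * mb * 9.81 / m, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 6 * (mb + mc) * 9.81 / (m * l), 0.0]])
g = np.array([[0.0], [4 / m], [0.0], [-6 / (m * l)]])

Q = np.diag([10.0, 1.0, 10.0, 1.0])     # diag(q_i), illustrative weights
r = 1.0

# Stable invariant subspace of the Hamiltonian gives P solving
# f'P + Pf - P g g'P / r + Q = 0, then u = -Kx is the optimal feedback.
H = np.block([[f, -g @ g.T / r], [-Q, -f.T]])
w, V = np.linalg.eig(H)
stable = V[:, w.real < 0]               # the 4 stable eigenvectors
P = np.real(stable[4:, :] @ np.linalg.inv(stable[:4, :]))
K = (g.T @ P) / r                       # optimal state-feedback gain
```

Increasing a ratio qi/r moves the corresponding closed-loop poles further left at the price of a larger control effort, which is what the simulations should exhibit.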

4.2 Algorithms

4.2.1 pendule.dem

mode(-1)

path=get_absolute_file_path(’pendule.dem’);

getf(path+’simulation.sci’)

getf(path+’graphics.sci’)

//disable displayed lines control

display_props=lines(); lines(0)

my_handle = scf(100001);

clf(my_handle,"reset");

xselect(); wdim=xget(’wdim’)

//mode(1)

dpnd()

//

// equations

//----------

//state =[x x’ theta theta’]

//

mb=0.1;mc=1;l=0.3;m=4*mc+mb; //constants

//

x_message([’Open loop simulation’

’ ’

’ y0 = [0;0;0.1;0];

//initial state [x x’’ theta theta’’]’

’ t = 0.03*(1:180); //observation dates’

’ y = ode(y0,0,0.03*(1:180),ivpd);

//differential equation integration’

’ //Display’


’ P = initialize_display(y0(1),y0(3))’

’ for k=1:size(y,2),

set_pendulum(P,y(1,k),y(3,k));end’])

//

y0=[0;0;0.1;0];

y=ode(y0,0,0.03*(1:180),ivpd);

P=initialize_display(y0(1),y0(3));

for k=1:size(y,2), set_pendulum(P,y(1,k),y(3,k));end

//

x_message([’Linearization’

’ ’

’ x0=[0;0;0;0];u0=0;’

’ [f,g,h,j]=lin(pendu,x0,u0);’

’ pe=syslin(’’c’’,f,g,h,j);’

’ // display of the linear system’

’ ssprint(pe)’])

//

mode(1)

x0=[0;0;0;0];u0=0;

[f,g,h,j]=lin(pendu,x0,u0);

pe=syslin(’c’,f,g,h,j);

ssprint(pe)

mode(-1)

//

x_message([’Checking the result’

’ //comparison with linear model computed by hand’;

’’

’ f1=[0 1 0 0’

’ 0 0 -3*mb*9.81/m 0’

’ 0 0 0 1’

’ 0 0 6*(mb+mc)*9.81/(m*l) 0];’

’ ’

’ g1=[0 ; 4/m ; 0 ; -6/(m*l)];’

’ ’

’ h1=[1 0 0 0’

’ 0 0 1 0];’

’ ’

’err=norm(f-f1,1)+norm(g-g1,1)+norm(h-h1,1)+norm(j,1)’])

//

mode(1)

f1=[0 1 0 0

0 0 -3*mb*9.81/m 0

0 0 0 1

0 0 6*(mb+mc)*9.81/(m*l) 0];

g1=[0 ; 4/m ; 0 ; -6/(m*l)];

h1=[1 0 0 0

0 0 1 0];

err=norm(f-f1,1)+norm(g-g1,1)+norm(h-h1,1)+norm(j,1)

mode(-1)

x_message([’Linear system properties analysis’

’ spec(f) //stability (unstable system !)’

’ n=contr(f,g) //controllability’

’ ’

’ //observability’

’ m1=contr(f’’,h(1,:)’’) ’

’ [m2,t]=contr(f’’,h(2,:)’’)’])

//---------

mode(1)

spec(f) //stability (unstable system !)

n=contr(f,g) //controllability

//observability

m1=contr(f’,h(1,:)’)

[m2,t]=contr(f’,h(2,:)’)

mode(-1)

//

x_message([’Synthesis of a stabilizing controller using pole placement’

’ ’

’// only x and theta are observed: construction of an observer’

’// to estimate the state : z’’=(f-k*h)*z+k*y+g*u’

’ to=0.1; ’

’ k=ppol(f’’,h’’,-ones(4,1)/to)’’ //observer gain’

’// compute the compensator gain’

’ kr=ppol(f,g,-ones(4,1)/to)’])

//-------------------------------------------------

//

//pole placement technique

//only x and theta are observed: construction of an observer

//to estimate the state : z’=(f-k*h)*z+k*y+g*u

//

to=0.1; //

k=ppol(f’,h’,-ones(4,1)/to)’ //observer gain

// verification //

// norm( poly(f-k*h,’z’)-poly(-ones(4,1)/to,’z’))

//

kr=ppol(f,g,-ones(4,1)/to) //compensator gain

//

x_message([’Build full linear system pendulum-observer-compensator’

’ ’

’ ft=[f-g*kr -g*kr’

’ 0*f f-k*h]’

’ ’

’ gt=[g;0*g];’

’ ht=[h,0*h];’

’’

’ pr=syslin(’’c’’,ft,gt,ht);’

’’

’//Check the closed loop dynamic’

’ spec(pr.A)’

’ ’

’//transfer matrix representation’

’ hr=clean(ss2tf(pr),1.d-10)’

’ ’

’//frequency analysis’

’ black(pr,0.01,100,[’’position’’,’’theta’’])’

’ g_margin(pr(1,1))’])

//---------------------------------------------

// state: [x x-z] //

mode(1)

ft=[f-g*kr -g*kr

0*f f-k*h];

gt=[g;0*g];

ht=[h,0*h];

pr=syslin(’c’,ft,gt,ht);

// closed loop dynamics:

spec(pr(2))

//transfer matrix representation

hr=clean(ss2tf(pr),1.d-10)


//frequency analysis

clf()

black(pr,0.01,100,[’position’,’theta’])

g_margin(pr(1,1))

show_pixmap()

mode(-1)

//

x_message([’Sampled system’

’ ’

’ t=to/5; //sampling period’

’ prd=dscr(pr,t); //discrete model’

’ spec(prd.A) //poles of the discrete model’])

//---------------//

mode(1)

t=to/5;

prd=dscr(pr,t);

spec(prd(2))

mode(-1)

//

x_message([’Impulse response’

’ ’

’ x0=[0;0;0;0;0;0;0;0]; //initial state’

’ u(1,180)=0;u(1,1)=1; //impulse’

’ y=flts(u,prd,x0); //discrete system simulation’

’ draw1()’])

//-----------------//

mode(1)

x0=[0;0;0;0;0;0;0;0];

u(1,180)=0;u(1,1)=1;

y=flts(u,prd,x0);

draw1()

mode(-1)

//

x_message([’Compensation of the non linear system with’

’linear regulator’

’’

’ t0=0;t1=t*(1:125);’

’ x0=[0 0 0.4 0 0 0 0 0]’’; // initial state’

’ yd=ode(x0,t0,t1,regu);

//integration of differential equation’

’ draw2()’])

;

//--------------------------------------

// simulation //

mode(1)

t0=0;t1=t*(1:125);

x0=[0 0 0.4 0 0 0 0 0]’; //

yd=ode(x0,t0,t1,regu);

draw2()

mode(-1); lines(display_props(2)); x_message(’The end’)

4.2.2 simulation.sci

function [xdot]=ivpd(t,x)

//ydot=ivpd(t,y) non linear equations of the pendulum

// y=[x;d(x)/dt,teta,d(teta)/dt].

// mb, mc, l must be predefined

//!

g=9.81; u=0; qm=mb/(mb+mc); cx3=cos(x(3)); sx3=sin(x(3));

d=4/3-qm*cx3*cx3

xd4=(-sx3*cx3*qm*x(4)**2+2/(mb*l)*(sx3*mb*g-qm*cx3*u))/d

//

xdot=[x(2);

(u+mb*(l/2)*(sx3*x(4)**2-cx3*xd4))/(mb+mc);

x(4);

xd4]

function [y,xdot]=pendu(x,u)

//[y,xdot]=pendu(x,u) input (u) - output (y) function of the pendulum

//!

g=9.81; qm=mb/(mb+mc); cx3=cos(x(3)); sx3=sin(x(3)); d=4/3-qm*cx3*cx3;

xd4=(-sx3*cx3*qm*x(4)**2+2/(mb*l)*(sx3*mb*g-qm*cx3*u))/d

//

xdot=[x(2);

(u+mb*(l/2)*(sx3*x(4)**2-cx3*xd4))/(mb+mc);

x(4);

xd4]

//

y=[x(1);

x(3)]

function [xdot]=regu(t,x)

//xdot=regu(t,x) simulation of the compound system

// pendulum - observer - compensator

//!

xp=x(1:4); xo=x(5:8); u=-kr*xo; //control
[y,xpd]=pendu(xp,u);
xod=(f-k*h)*xo+k*y+g*u; xdot=[xpd;xod]


Chapter 5

Identification of an Active Vibration ControlBenchmark Using Matlab

As stated in Wikipedia, “active vibration control is the active application of force in an equal and opposite fashion to the forces imposed by external vibration. With this application, a precision industrial process can be maintained on a platform essentially vibration-free. Many precision industrial processes cannot take place if the machinery is being affected by vibration. For example, the production of semiconductor wafers, or for reducing vibration in helicopters, offering better comfort with less weight than traditional passive technologies. In the past, only passive techniques were used. These include traditional vibration dampers, shock absorbers, and base isolation.”

In this lab, you will learn how to use System Identification methods seen in class in order to model a benchmark on vibration control recently proposed by GIPSA-lab [17].

5.1 System description and preliminary work

The active vibration compensation experiment is depicted in Figure 5.1. It can basically be considered as an LTI system with two inputs (the controlled force u and the disturbance force uP) and one output: the residual force y.

Figure 5.1:GIPSA-labvibration benchmark.

In order to prepare the lab properly:

1. Download all the files from the class website and read through the system description BenchmarkV1.1.pdf. Check for additional information in [17].

2. Synthesize the information (2-page summary) that is made available concerning the system properties (number of input and output channels, derivative effects), data (sampling frequencies, input properties) and simulation files.

In order to illustrate the key concepts seen in class, do the following.


5.2 Experiment design and frequency-response analysis

1. Load a data set (e.g. data_prim1.mat) and visualize the input data u:

(a) which kind of signal is it?

(b) superpose this signal every N = 2^10 − 1 samples, i.e. using i = 1:N; plot(i,u(i),i,u(i+N),i,u(i+2*N))

what can you conclude on the input sequence?

(c) estimate its power spectral density (e.g. with pwelch) and conclude on the frequency contents;

(d) discuss the properties and advantages of such an input for system identification.

2. Input/output relationships:

(a) load and comment the functionfreq_resp_cal provided on the website;

(b) plot on the same graph the magnitude of the frequency response obtained by spectral analysis for the primary and secondary channels;

(c) what happens when the PRBS clock frequency is divided by two?

(d) discuss the similarities and differences between the two channels (common poles/zeros, order, etc.).

3. Conclude with some guidelines concerning the model architecture.

Note: if the Matlab Signal Processing Toolbox is not installed on your computer, you may download the free octave-signal software at http://octave-signal.sourcearchive.com/
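The periodicity and autocorrelation properties probed in question 1 can be illustrated on a synthetic maximum-length PRBS. This is only a sketch: the LFSR recurrence below (s_t = s_{t-10} XOR s_{t-7}) is an assumed generator of order 10, not read from the benchmark files.

```python
import numpy as np

# Maximum-length PRBS of period N = 2**10 - 1 from a 10-bit LFSR.
n_bits = 10
N = 2 ** n_bits - 1
state = [1] * n_bits                    # any nonzero seed works
seq = []
for _ in range(3 * N):                  # generate three full periods
    seq.append(2 * state[9] - 1)        # output oldest bit, mapped to +/-1
    new = state[9] ^ state[6]           # feedback taps 10 and 7
    state = [new] + state[:-1]
u = np.array(seq, dtype=float)

# Periodicity: the signal repeats every N samples ...
period_ok = np.array_equal(u[:N], u[N:2 * N]) and np.array_equal(u[:N], u[2 * N:])

# ... and its periodic autocorrelation is nearly impulsive:
one = u[:N]
r0 = np.dot(one, one) / N               # lag 0: exactly 1
r1 = np.dot(one, np.roll(one, 1)) / N   # any other lag: -1/N
```

A flat, white-like spectrum with an exactly repeating pattern every N samples is what makes such inputs persistently exciting and convenient for identification, which is what questions 1(b)-(d) ask you to observe.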

5.3 Spectral analysis

1. Empirical transfer-function estimate (ETFE):

(a) compute the ETFE (e.g. with etfe) of both channels and discuss your results;

(b) smooth the ETFE thanks to a Hamming window and vary the window parameter γ (e.g. with spa, getff and bode);

(c) for each γ, compute M(γ) and W(γ). Relate these parameters to the smoothed ETFEs obtained above.

2. Autocorrelation analysis:

(a) compute the correlation between inputs and outputs for both channels;

(b) what is the periodicity of the peaks (in number of samples)? What is the implication of this observation?

(c) by zooming on the cross correlations (choosing one period), what is the horizon of the deterministic behavior?

(d) what can be concluded on the model order, or the presence of delays, for both channels?

Note: the materialDigital Signal Processingproposed in [18] may be useful for this analysis.

5.4 Estimating the disturbance spectrum

1. Identify (with the Matlab System Identification Toolbox) the decoupled data thanks to a 6th-order state-space model with the prediction-error method.

2. Using this model, compute the residual and its spectrum.

3. Discuss the model choice and compare it with the other models easily available.

4. Set the experimental inputs as common inputs in the model provided in simulator_bench.mdl and in the previous state-space model, and compare the two outputs with the experimental data.

5.5 Conclusions

Summarize your main results and conclude.


Chapter 6

Analysing the Anthropogenic Impact on the Ozone Layer Depletion

The aim of this lab is to investigate the anthropogenic impact on the ozone layer depletion thanks to the analysis of a model of CH3CCl3 transport in firn ice cores. As detailed in [19]:

“1,1,1-Trichloroethane is an excellent solvent for many organic materials and also one of the least toxic of the chlorinated hydrocarbons. Prior to the Montreal Protocol, it was widely used for cleaning metal parts and circuit boards, as a photoresist solvent in the electronics industry, as an aerosol propellant, as a cutting fluid additive, and as a solvent for inks, paints, adhesives and other coatings.

It was also the standard cleaner for photographic film (movie/slide/negatives, etc.). Other commonly available solvents damage emulsion, and thus are not suitable for this application. The standard replacement, Forane 141, is much less effective, and tends to leave a residue. 1,1,1-Trichloroethane was used as a thinner in correction fluid products such as Liquid Paper. Many applications for 1,1,1-trichloroethane (including film cleaning) were previously done with carbon tetrachloride, which was banned in YEAR1.

The Montreal Protocol targeted 1,1,1-trichloroethane as one of those compounds responsible for ozone depletion and banned its use beginning in YEAR2. Since then, its manufacture and use has been phased out throughout most of the world.

1,1,1-Trichloroethane is generally considered as a non-polar solvent, but since all three electronegative chlorine atoms lie on the same side of the molecule, it is slightly polar, making it a good solvent for organic matters that do not dissolve into totally non-polar substances such as hexane. 1,1,1-Trichloroethane in laboratory analytics has been replaced by other solvents.”

6.1 Data properties (lab 1)

Download the file CH3CCl3.mat on the class website and run:

%% Data loading

load CH3CCl3.mat % Qdirect x0 A B Qatm Qfin x

nt = length(Qatm.time);

nx = length(x);

ts = mean(diff(Qatm.time));

figure(1)

plot(Qatm.time, Qatm.values)

xlabel(’Time (years)’,’fontsize’,14)

ylabel(’Concentration’,’fontsize’,14)

title(’Initial scenario’,’fontsize’,16)

• What are the types of data provided to you?

• From the figure, can you guess YEAR1 and YEAR2 mentioned in the introduction?

6.2 Transport model and time loop (lab 1)

The system provided in the data file is the transport model:

ẋ = Ax + Bu,   x(0) = x0

where x is the state vector of the gas concentrations at the discretized depths and u is the atmospheric scenario. Using a purely implicit scheme, as seen in class, the time evolution of the concentration in the firn can be obtained as:

%% Time loop (remove top and bottom)

I_ts = eye(nx-2)./ts;

A_D = inv(I_ts - A);


Qdirect = zeros(nx-2,nt);

Qdirect(:,1) = x0;

B_D = A_D*[B;zeros(nx-3,1)];

A_DCS = A_D*I_ts;

for i = 1:nt-1

Qdirect(:,i+1) = A_DCS*Qdirect(:,i) + B_D*Qatm.values(i+1);

end

% Complete the model with atmospheric and bottom boundary conditions

Qdirect = [Qatm.values’;Qdirect;Qdirect(nx-2,:)];

1. Write the discrete equations corresponding to Euler backward, Euler forward and Tustin discretizations. What would be the true discretization?

2. Modify the previous code to run your results.

3. Superpose your results on the figure:

figure(2)

errorbar(Qfin.space,Qfin.values,Qfin.error,’x’)

hold on

plot(x,Qdirect(:,end),’b’)

axis tight

xlabel(’Depth (m)’,’fontsize’,14)

ylabel(’Concentration’,’fontsize’,14)

title(’Ice core measurement vs. model’,’fontsize’,16)

4. Investigate the impact of the time step and the robustness of the schemes.
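For item 1, the three candidate discretizations of ẋ = Ax + Bu can be compared on a scalar test system against the exact matrix exponential. This is a sketch with illustrative values, not the firn model matrices:

```python
import numpy as np

# Euler forward, Euler backward and Tustin transition factors for
# xdot = a x + b u, versus the exact factor exp(a*ts) (the "true",
# zero-order-hold discretization).
a, b, ts = -2.0, 1.0, 0.05

A_fwd = 1.0 + a * ts                              # forward:  x+ = (1 + ts a) x + ts b u
A_bwd = 1.0 / (1.0 - a * ts)                      # backward: x+ = (1 - ts a)^-1 (x + ts b u)
A_tus = (1.0 + a * ts / 2) / (1.0 - a * ts / 2)   # Tustin (bilinear transform)
A_zoh = np.exp(a * ts)                            # exact transition factor

errs = {"forward": abs(A_fwd - A_zoh),
        "backward": abs(A_bwd - A_zoh),
        "tustin": abs(A_tus - A_zoh)}
```

Forward Euler is only conditionally stable (it requires |1 + ts·a| < 1), backward Euler is unconditionally stable but first-order accurate, and Tustin is second-order accurate, which is the trade-off to discuss when varying the time step.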

6.3 Atmospheric scenario (lab 1-2)

1. Modify the atmospheric scenario by adding a white noise and a bias on the atmospheric input: verify that this noise is a discrete stochastic signal. How can you characterize the model sensitivity?

2. What are the time constants associated with the model (provide for different depths)?

3. What is the frequency content of the atmospheric scenario? Hint: look at the power spectrum, computed thanks to http://www.ac.tut.fi/aci/courses/ACI-42070/esiselostus.pdf.

4. Draw the Bode diagram, taking as outputs the concentrations at different depths. Conclude on the seasonality effect of atmospheric concentration changes in the deep firn.

6.4 I/O properties (lab 2)

Consider fictitious measurements carried out on ice below the last firn depth y (i.e. inferred from trace gas measurements in ice and made time-dependent through the firn sinking speed).

1. What is the I/O cross spectrum?

2. How can it be used to infer the disturbance model?

3. Discuss the choice of the sampling interval of the atmospheric scenario.

6.5 Conclusions (lab 2)

Conclude on the main aspects covered in this lab and the key steps to validate the scenario/model adequacy, as well as the inferred physical properties. Propose some perspectives for the identification and optimization of such models.


Chapter 7

Optimization: Particle Source Identificationin a Tokamak

In this work, the problem of particle source identification from distributed electron density measurements in fusion plasmas, such as the ones obtained in the Tore Supra tokamak, is considered. A transport model, suitable for identification purposes, is proposed based on a simplification of classical particle transport models. You will first derive a steady-state model from the PDE discretization. The optimization strategies considered in class will then be applied to determine a feedback scheme that provides an on-line identification of the source input. You will finally apply your algorithm to dedicated experimental data from Tore Supra.

7.1 Introduction

Recent developments in control theory and controlled thermonuclear fusion research are naturally leading to research topics of common interest that are particularly challenging for both scientific communities. For example, new modeling and identification tools are needed for the understanding and analysis of complex physical phenomena that occur in thermonuclear fusion. The representation (qualitative and quantitative) of particle transport at the plasma edge is an example of such topics.

Tokamak experiments, such as Tore Supra or JET, are equipped with Lower Hybrid (LH) antennas to heat the plasma and create the toroidal current. The LH waves are recognized as the most efficient non-inductive current drive sources and their use is forecast for the ITER experiment. The efficiency of such antennas is strongly related to our ability to ensure an appropriate coupling between the waves and the plasma, which directly depends on the electron density in a region called the scrape-off layer (SOL), located between the last closed magnetic surface (the separatrix) and the wall. This region, as well as the key elements discussed here, are depicted in Fig. 7.1. The importance of local density control in the SOL is discussed in [20], where dedicated experimental conditions are established for long distance shots (plasma experiments) on Tore Supra. It is further emphasized in [21], where the coupling efficiency is improved thanks to gas puffing in front of the launcher on JET (reduced reflected power). A first attempt to control the coupling between LH waves and the SOL is proposed in [22], based on a scalar model.

The electron density in the SOL is directly influenced by the LH input power P_LH [20]. Indeed, a small but significant part of this power is absorbed in the plasma edge during the wave propagation to the core. A possible effect of this absorption is gas ionization, which results in an increased electron density. This suggests that P_LH is a key parameter for the local control of the electron density, and consequently for the coupling efficiency, which motivates the development of new modeling tools based on experimental measurements. On Tore Supra, the electron density is measured by using a microwave reflectometer that has both a good spatial (r ≈ 1 cm) and temporal (t ≈ 2 ms for the shot considered) resolution.

Figure 7.1: SOL, limiter and LH launcher in a tokamak cross section (i.e. Tore Supra).

Figure 7.2: Distribution from the state-space description (particle density versus time in s and normalized space).

The aim of this work is to develop an identification method for determining the source term, i.e. the number of electrons created per unit time and volume, when the high frequency heating is switched on. To achieve this, we derive a particle transport model for the area of non-confined plasma (SOL) and develop an appropriate parametric identification method for distributed systems, based on the reflectometer measurements. The resulting algorithm then provides an efficient tool for analyzing the coupling associated with the electron density in the SOL, even if the detailed physical relationships are unknown.

7.2 Simplified Transport Model

The aim of this section is to present a model that is suitable to determine the particle transport. Thanks to specific hypotheses associated with the physical properties of the SOL, the following transport model was established in [23]:

∂n/∂t = D⊥(t) ∂²n/∂r² − [c_s(r,t) / (2 L_c(r,t))] n + S(r,t)    (7.1)

where D⊥ is the cross-field diffusion coefficient (typically ∼ 1 m² s⁻¹), c_s is the sound speed, L_c is the connecting length along the flux tube to the flow stagnation point and S = S_l + S_LH reflects the particle source induced by the limiter S_l and the LH antenna S_LH. The sound speed can be approximated as c_s ≈ √((T_e + T_i)/m_i), where T_e and T_i are the electron and ion temperatures, and m_i is the ion mass. The ratio T_i/T_e is obtained thanks to the experimental measurements described in [24]. L_c is deduced from the safety factor q (which typically varies by 10 % in the SOL) as L_c = 2πrq.

A Neumann boundary condition ∂n(0,t)/∂r = 0 is set close to the center while a Dirichlet one, n(L,t) = n_L(t), where n_L(t) is given by the measurements, governs the plasma edge. Note that r = 0, L denote relative coordinates with respect to the model domain of validity (i.e. r = 0 at the separatrix location). The data set considered here is characterized by repeated LH impulses, which highlight the impact of the LH antenna.

7.2.1 Problem statement

The problem considered here is to determine S(r,t) in (7.1) from the given signals of n(r,t). The transport parameters D⊥, c_s and L_c (assumed constant according to a supposed quasi-steady state behavior, i.e. these parameters vary slowly in comparison with the state dynamics) are obtained from existing models and measurements.

Using the subscripts t and x to denote the time and space derivatives, respectively, where x = r/L ∈ [0, 1] is the normalized radius, the class of systems considered can be described as:

n_t(x,t) = α n_xx(x,t) − γ n(x,t) + S(x,t),
n_x(0,t) = n_x0(t),  n(1,t) = n_L(t),    (7.2)
n(x,0) = n_r0(x),

where α = D⊥/L² is the diffusion coefficient and γ = c_s/(2L_c) > 0 is the sink term. For the numerical applications, we will take α = 18.54 and γ = 30.54.

7.3 Laboratory work

7.3.1 State-space description [1h30]

1. Based on model (7.2), a central discretization

∂²u/∂z² ≈ (u_{i+1} − 2u_i + u_{i−1}) / Δz²

and the use of the boundary conditions as

n_x(0,t) ≈ (n_1(t) − n_0(t)) / Δx = n_x0(t),  n_{N+1} = n_L(t),

derive an N-dimensional state-space model of the density distribution in the standard form

Ẋ = AX + BU,  X(0) = X_0

where X = [n_1 n_2 … n_N]^T, U = [S_1 S_2 … S_N, n_x0, n_L]^T, A ∈ R^{N×N} and B = [ I  [a 0 … 0]^T  [0 … 0 c]^T ] ∈ R^{N×(N+2)}. For the simulations, choose n_x0(t) = −0.04 and n_L(t) = 0.0005.

2. Choose an arbitrary source term with a Gaussian distribution

S(x,t) = S(x) = ϑ e^{−(µ−x)²/(2σ)},

a boundary value n_L and a sigmoid initial condition

n(x,0) = α / (1 + e^{−β(x−γ)}).

As starting values, you can choose ϑ = 2, µ = 0.3, σ = 0.05, α = 3×10⁻², β = −15, γ = 0.7 (note that α, β, γ here are shape parameters of the sigmoid, not the transport coefficients of (7.2)).



3. Write a Matlab .m file to visualize the eigenvalues of A as well as the Gaussian and sigmoid distributions.

4. Design a Simulink model to simulate the density dynamics according to your initial condition and source distribution.

5. With the proposed numerical values, you should obtain a distribution like the one presented in Figure 7.2.
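The steps above can be prototyped as follows; this is a NumPy transcription of the Matlab/Simulink work, with an arbitrary grid size N = 20 and an implicit Euler integrator standing in for the Simulink solver:

```python
import numpy as np

# Finite-difference state-space model of (7.2): n_t = alpha*n_xx - gamma*n + S,
# Neumann condition at x = 0, Dirichlet at x = 1
alpha, gamma = 18.54, 30.54
N = 20                                    # arbitrary grid size for this sketch
dx = 1.0 / (N + 1)
x = np.arange(1, N + 1) * dx

A = (alpha / dx**2) * (np.diag(-2.0 * np.ones(N)) +
                       np.diag(np.ones(N - 1), 1) +
                       np.diag(np.ones(N - 1), -1)) - gamma * np.eye(N)
A[0, 0] += alpha / dx**2                  # ghost node: n_0 = n_1 - dx*n_x0
a = -alpha / dx                           # coefficient on n_x0 in the first row
c = alpha / dx**2                         # coefficient on n_L in the last row
B = np.hstack([np.eye(N), a * np.eye(N)[:, :1], c * np.eye(N)[:, -1:]])

# Gaussian source and sigmoid initial condition with the suggested starting values
theta, mu, sig = 2.0, 0.3, 0.05
S = theta * np.exp(-(mu - x)**2 / (2 * sig))
a_s, beta, gam_s = 3e-2, -15.0, 0.7       # sigmoid shape parameters
n = a_s / (1 + np.exp(-beta * (x - gam_s)))

# The eigenvalues of A should all be real and strictly negative (stable diffusion + sink)
print("max eigenvalue of A:", np.linalg.eigvals(A).real.max())

# Simulate X' = AX + BU with implicit Euler (unconditionally stable)
U = np.concatenate([S, [-0.04, 0.0005]])  # [S_1 .. S_N, n_x0, n_L]
Minv = np.linalg.inv(np.eye(N) - 1e-3 * A)
for _ in range(3000):                     # 3 s of simulated time
    n = Minv @ (n + 1e-3 * (B @ U))
n_ss = -np.linalg.solve(A, B @ U)         # analytic steady state for comparison
print("peak density:", round(float(n.max()), 4))
```

The implicit scheme is used because the stiff diffusion eigenvalues (of order α/Δx²) would force an impractically small step with forward Euler.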

7.3.2 Optimal on-line identification

The estimation problem is formulated as a tracking control problem based on the following considerations [25]:

• the model is the system whose dynamics have to be controlled;

• S(t) acts as the controlled input;

• the measurement n_meas(t) is the tracked reference;

• in order to fit the model with the measurements, our aim is to minimize the modeling error n_meas(t) − n_model(t) over the simulation time interval;

• this can be achieved thanks to an LQR tracking controller with integral action.

Note that the state-space dynamics write as

Ẋ = AX + S + ω

where ω(t) is a known exogenous input (containing the boundary conditions in our case). Then:

1. write the dynamics of the extended state X_e = [X ε_I]^T, where ε_I is the integrated estimation error (i.e. ε̇_I = n_meas(t) − X(t)), as

Ẋ_e = A_e X_e + B_e1 S + B_e2 Ω

with Ω(t) = [n_x0 n_L n_meas^T]^T;

2. based on the cost function

I = (1/2) ∫₀^∞ ( [X; ε_I]^T Q [X; ε_I] + S^T R S ) dt,  with Q = [0 0; 0 Q_r],

show that the optimality necessary conditions lead to

S = −R⁻¹ B_e1^T (P X_e + γ)

with

−Q = P A_e + A_e^T P − P B_e1 R⁻¹ B_e1^T P,
γ = [ P B_e1 R⁻¹ B_e1^T − A_e^T ]⁻¹ P B_e2 Ω;

3. show that the optimal estimation architecture corresponds to the one depicted in Figure 7.3;

4. interpret the relative roles of G_lqr and G_lqr,W in the control architecture;

5. using the Matlab function lqr to solve the ARE, set up the previous architecture and verify that you recover your initial source distribution (using the previous model to generate the measurements).

Figure 7.3: Optimal tracking of the particle source.
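A scaled-down sketch of this estimation architecture: the 3-state system, the weights and the Riccati integration below are illustrative stand-ins for the N-dimensional model and Matlab's lqr, and the feedforward term γ is omitted since, for a constant source, the integral action alone removes the bias:

```python
import numpy as np

# LQR-with-integral-action source estimator on a small stand-in system;
# all matrices, weights and step sizes below are illustrative choices.
N = 3
A = np.array([[-4.0, 1.0, 0.0],
              [1.0, -4.0, 1.0],
              [0.0, 1.0, -4.0]])       # stable, diffusion-like dynamics
S_true = np.array([0.5, 1.0, 0.2])     # "unknown" source to be identified

# Extended state Xe = [X; eI], with eI' = n_meas - X
Ae = np.block([[A, np.zeros((N, N))],
               [-np.eye(N), np.zeros((N, N))]])
Be1 = np.vstack([np.eye(N), np.zeros((N, N))])
Q = np.diag([0.0] * N + [1e3] * N)     # weight only the integrated error
Rinv = np.linalg.inv(1e-2 * np.eye(N))

# Solve the ARE by integrating the Riccati ODE to steady state
# (Matlab's lqr or scipy.linalg.solve_continuous_are would do the same job)
P = np.zeros((2 * N, 2 * N))
for _ in range(100000):
    Pdot = Ae.T @ P + P @ Ae - P @ Be1 @ Rinv @ Be1.T @ P + Q
    P = P + 1e-4 * Pdot
K = Rinv @ Be1.T @ P                   # optimal feedback: S = -K Xe

# Plant (generates the "measurements") and estimator run side by side
x_plant = np.zeros(N)
Xe = np.zeros(2 * N)
dt = 1e-3
for _ in range(20000):
    n_meas = x_plant.copy()
    S = -K @ Xe
    x_plant = x_plant + dt * (A @ x_plant + S_true)
    Xe = Xe + dt * (Ae @ Xe + Be1 @ S + np.concatenate([np.zeros(N), n_meas]))
print("estimated source:", np.round(S, 3))   # converges to S_true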

7.3.3 Use of the experimental data

1. Download the experimental data file measurements.mat and implement your algorithm on it.

2. Illustrate the impact of the relative values of Q_r and R on the algorithm efficiency.

3. Discuss your results and propose some future research perspectives on this topic (i.e. as applications of the modeling class material).



Chapter 8

Optimization: Flow Control

The problem of temperature control systems with time transport of mass is studied in this lab, with application to a process-control system for Poiseuille flow. To further investigate the phenomenon of energy transmission in a Poiseuille flow, a specially conceived system has been designed to test and validate advanced control strategies on this subject. In this lab, a model identification procedure and an optimal PID have to be developed for the temperature output control of the device under constant mass flow conditions.

8.1 System Description

Figure 8.1 shows the diagram of the device designed to test and validate the control strategies considered in this lab.

Figure 8.1: Temperature and humidity flow control scheme

This device is composed primarily of a heating column, a tube, a heating resistor, two fans, a flow meter and temperature sensors. The purpose of this experiment is to control the outlet temperature of the tube by controlling the power dissipated in the resistor for different air mass flows (exogenous inputs generated by the fans) through the tube.

8.2 Identification

In this section, the goal is to obtain a model of the device shown in Figure 8.1 capable of representing the dynamic relation between the power dissipated in the heating resistor (control input) and the output temperature under a constant mass flow. An identification procedure has to be conceived in order to acquire the required information for the model construction.

A-1 Discuss which identification strategy could be suitable for this application. Design the experiments to carry out on the device in order to obtain the necessary information.



A-2 Using the data-log available in the HMI (human-machine interface), capture the results of the experiments. Use Matlab or Excel to process the acquired information and perform the identification procedure chosen in the previous step.

A-3 Build the identified model in Matlab and compare it with the results obtained from the device.
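One possible identification strategy for these steps is a step test fitted by a first-order-plus-dead-time model with the two-point method; in the sketch below, all plant values and the sampling period are invented, the real data coming from the HMI data-log:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical step test on the heating column: fit a first-order-plus-dead-time
# (FOPDT) model K*exp(-L*s)/(tau*s + 1) to logged step-response data.
dt = 0.5                                  # s, assumed logger sampling period
t = np.arange(0.0, 300.0, dt)
K_true, tau_true, L_true = 2.0, 40.0, 5.0
u_step = 10.0                             # step in heater power
y = K_true * u_step * (1.0 - np.exp(-np.clip(t - L_true, 0.0, None) / tau_true))
y = y + 0.05 * rng.standard_normal(t.size)   # measurement noise

# Two-point method: times to reach 28.3 % and 63.2 % of the final value
y_inf = y[-20:].mean()
t1 = t[np.argmax(y >= 0.283 * y_inf)]     # t1 = L + tau/3 for a true FOPDT
t2 = t[np.argmax(y >= 0.632 * y_inf)]     # t2 = L + tau
tau_hat = 1.5 * (t2 - t1)
L_hat = t2 - tau_hat
K_hat = y_inf / u_step
print(f"K = {K_hat:.2f}, tau = {tau_hat:.1f} s, L = {L_hat:.1f} s")
```

First-sample threshold crossings are used for simplicity; linear interpolation between samples would refine the estimates on coarsely sampled logs.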

8.3 Control

The flow control device already has a PID programmed in the PLC for controlling different quantities on the system. The PID parameters can be configured directly from the HMI, which is practical for control implementation on the device.

B-1 Design an optimal PID controller for the system using the model obtained in the previous section. Use an LQR approach.

B-2 Test the PID controller using the model in Matlab.

B-3 Implement the optimal PID in the flow control system and compare the results obtained with respect to the simulation performed previously.
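The closed loop can first be checked in simulation; the sketch below runs a PI controller on a hypothetical first-order model of the heating column (the gains stand in for the LQR-based design and are purely illustrative):

```python
# Closed-loop test of a PI controller on a hypothetical first-order model
# K/(tau*s + 1) of the heating column; all values below are illustrative.
K, tau = 2.0, 40.0            # identified model (illustrative values)
Kp, Ki = 2.0, 0.1             # PI gains
dt, T = 0.1, 200.0
r = 50.0                      # outlet-temperature set-point (above ambient)
y, integ = 0.0, 0.0
for _ in range(int(T / dt)):
    e = r - y                 # tracking error
    integ += Ki * e * dt      # integral action removes the steady-state error
    u = Kp * e + integ        # heater power command
    y += dt * (K * u - y) / tau   # first-order plant, forward Euler
print(f"final temperature: {y:.2f}")
```

With these values the closed-loop poles are slow but well damped, so the output settles on the set-point within the 200 s horizon.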

8.4 Conclusions

Comment and conclude.



Bibliography

[1] A. Valera, M. Valls, L. Marin, and P. Albertos. On the advanced air-path control and estimation for multiple and alternative combustion mode engines. 18th IFAC World Congress, 2011.

[2] L. Ljung and T. Söderström. Theory and Practice of Recursive Identification. MIT Press, Cambridge, Massachusetts, 1983.

[3] E. Witrant, E. Joffrin, S. Brémond, G. Giruzzi, D. Mazon, O. Barana, and P. Moreau. A control-oriented model of the current profile in tokamak plasma. Plasma Phys. Control. Fusion, 49:1075–1105, 2007.

[4] T. S. Taylor. Physics of advanced tokamaks. Plasma Phys. Control. Fusion, 39(B):47–73, December 1997.

[5] E. Joffrin et al. Integrated scenario in JET using real-time profile control. Plasma Phys. Control. Fusion, 45:A367–A383, 2003.

[6] C. Gormezano. High performance tokamak operation regimes. Plasma Phys. Control. Fusion, 41(12B):B367–B380, December 1999.

[7] R. C. Wolf. Internal transport barriers in tokamak plasmas. Plasma Phys. Control. Fusion, 45(1):R1–R91, January 2003.

[8] A. Jaun, A. Fasoli, J. Vaclavik, and L. Villard. Stability of Alfvén eigenmodes in optimized tokamaks. Nucl. Fusion, 40(7):1343–1348, July 2000.

[9] J. Blum. Numerical Simulation and Optimal Control in Plasma Physics with Applications to Tokamaks. Modern Applied Mathematics. Wiley/Gauthier-Villars, 1989.

[10] R. Bregeon. Evolution resistive du profil de courant dans les tokamaks, application a l'optimisation des decharges de Tore Supra. PhD thesis, Universite de Provence (Aix-Marseille I), CEA-Cadarache, France, November 1998.

[11] S. P. Hirshman, R. J. Hawryluk, and B. Birge. Neoclassical conductivity of a tokamak plasma. Nucl. Fusion, 17(3):611–614, 1977.

[12] S. P. Hirshman. Finite-aspect-ratio effects on the bootstrap current in tokamaks. Phys. of Fluids, 31(10):3150–3152, October 1988.

[13] E. Witrant. Parameter dependent identification of nonlinear distributed systems: application to tokamak plasmas. Technical report TRITA-EE 2007:029, Royal Institute of Technology (KTH), Stockholm, Sweden, June 2007.

[14] F. Kazarian-Vibert, X. Litaudon, D. Moreau, R. Arslanbekov, G. T. Hoang, and Y. Peysson. Full steady-state operation in Tore Supra. Plasma Phys. Control. Fusion, 38:2113–2131, 1996.

[15] G. Giruzzi. Impact of electron trapping on RF current drive in tokamaks. Nucl. Fusion, 27(11):1934–1939, 1987.

[16] M. Goniche et al. Lower hybrid current drive efficiency on Tore Supra and JET. In 16th Topical Conference on Radio Frequency Power in Plasmas, Park City, USA, June 2005.

[17] I. D. Landau, M. Alma, J. Martinez-Molina, G. Buche, and A. Karimi. Benchmark on adaptive regulation: rejection of unknown/time-varying multiple narrow band disturbances. http://www.gipsa-lab.grenoble-inp.fr/~ioandore.landau/benchmark_adaptive_regulation/, October 2009.

[18] O. Hinton. Digital signal processing, ch. 6: describing random sequences. http://www.staff.ncl.ac.uk/oliver.hinton/eee305/Chapter6.pdf, October 2009.

[19] Nordic Council of Ministers. Use of ozone depleting substances in laboratories, January 2003.

[20] J. Mailloux, M. Goniche, P. Bibet, P. Froissard, C. Grisolia, R. Guirlet, and J. Gunn. Long distance coupling of the lower hybrid waves on Tore Supra. In Proc. of the 25th European Conference on Controlled Fusion and Plasma Heating, volume 23, page 1394, Prague, 1998.

[21] A. Ekedahl, G. Granucci, J. Mailloux, Y. Baranov, S. K. Erents, E. Joffrin, X. Litaudon, A. Loarte, P. J. Lomas, D. C. McDonald, V. Petrzilka, K. Rantamaki, F. G. Rimini, C. Silva, A. A. Stamp, M. Tuccillo, and JET EFDA Contributors. Long distance coupling of lower hybrid waves in JET plasmas with edge and core transport barriers. Nucl. Fusion, 45(5):351–359, 2005.

[22] D. Carnevale, L. Zaccarian, A. Astolfi, and S. Podda. Extremum seeking without external dithering and its application to plasma RF heating on FTU. In IEEE Conference on Decision and Control, Cancun, Mexico, December 2008.

[23] E. Witrant, M. Goniche, and Equipe Tore Supra. A QSS approach for particle source identification in Tore Supra tokamak. In Proc. of the 48th IEEE Conference on Decision and Control, Shanghai, December 2009.

[24] M. Kocan, J. P. Gunn, J.-Y. Pascal, G. Bonhomme, C. Fenzi, E. Gauthier, and J.-L. Segui. Edge ion-to-electron temperature ratio in the Tore Supra tokamak. Plasma Phys. Control. Fusion, 50:125009–125019, 2008.

[25] J. Weydert, E. Witrant, and M. Goniche. Optimal estimation of the particle sources in Tore Supra tokamak. In 4th International Scientific Conference on Physics and Control, Catania, Italy, September 2009.
