

Nordic Matlab Conference, Oslo, 17-18 Oct, 2001

Optimal control teaching using Matlab: Have we reached the turning point yet?

    David I. Wilson

    Department of Electrical Engineering, Karlstad University,

SE-651 88, Sweden

Keywords: Matlab/Simulink, optimal control, BVPs, teaching, laboratories

    Abstract

The title of this article could be interpreted in two ways, and that is deliberate. The teaching of automatic control has undergone a renaissance in the last decade, primarily due to Matlab. It is possible to deliver useful courses in automatic control utilising powerful techniques, and we can finally rid ourselves of the tedium of manual, predominantly frequency design methods. Perhaps now, our teaching of this material is coming close to optimality.

The second interpretation of the title is the teaching of more advanced topics such as optimal control in undergraduate courses to engineers. Traditionally these topics have suffered from the requirement of higher mathematics (variational calculus, advanced linear algebra, stochastic control), non-trivial topics from numerical analysis (constrained optimisation, QPs, boundary value problems) and all manner of implementation problems. This results in a substantial learning period for the students before they are able to test these schemes on actual equipment. This lack of application, coupled with at times obscure theory, is worrying to many students. In this paper I aim to show how we at Karlstad University simultaneously address both these issues.

1 Background to control teaching

Many, if not most, universities now use Matlab in their teaching of automatic control. Over the last decade, we at Karlstad University have moved from a plethora of naive design tools, crude graphics, rudimentary symbolics and home-grown interfaces running under DOS with our laboratory equipment. Teaching introductory control courses, particularly to chemical engineering students unused to minimal help screens, was extremely inefficient. The best we could manage in a 5-week introductory control course was one step test, and a PID controller to be built in Pascal. Nowadays the situation has changed due to the standardisation on one software environment, and the avoidance of tedious interface programming that offers little to the understanding of control.

The first big improvements came in 1995 when we interfaced Simulink to our collection of laboratory plants using the Real-time toolbox¹ and National Instruments² analogue-to-digital converter cards. Now, for example, the chemical engineering students, who typically have had little prior exposure to computers outside word processing, can interface to a laboratory plant and implement and tune a PID controller from scratch. We feel that the improved understanding is due to the fact that the diagram of the PID controller and transfer function model closely follows the block diagram in any standard control textbook, and that the Simulink diagram connected to the true plant is identical to a simulation version, except that the transfer function is now replaced with input and output plugs, as shown in Fig. 1. One example we use in our first undergraduate control course is to build a PID controller with integral windup protection following [1, Fig. 8.10, p. 310].
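The back-calculation anti-windup idea used in that exercise can also be sketched outside Simulink. The following minimal simulation, written in Python rather than Matlab for ease of testing, closes a discrete PI controller with back-calculation anti-windup around a hypothetical first-order plant; all gains, time constants and limits are illustrative, not those of any of our laboratory plants:

```python
import numpy as np

def simulate_pi_antiwindup(Kc=2.0, Ti=5.0, Tt=2.0, dt=0.1, t_end=30.0,
                           u_min=0.0, u_max=1.0, setpoint=0.8):
    """Discrete PI controller with back-calculation anti-windup,
    closed around a first-order plant dy/dt = (-y + u)/tau."""
    tau = 3.0                             # plant time constant (s)
    n = int(t_end / dt)
    y, integ = 0.0, 0.0                   # plant output, integrator state
    ys = np.zeros(n)
    for k in range(n):
        e = setpoint - y
        v = Kc * e + integ                # unsaturated controller output
        u = min(max(v, u_min), u_max)     # actuator saturation
        # back-calculation: bleed the integrator while saturated
        integ += dt * (Kc * e / Ti + (u - v) / Tt)
        y += dt * (-y + u) / tau          # forward-Euler plant step
        ys[k] = y
    return ys

ys = simulate_pi_antiwindup()
```

The tracking-time constant Tt controls how quickly the integrator is bled back when the actuator saturates, which is exactly the knob students experiment with on the real plant.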

What is particularly convenient is that we can easily prototype a proposed controller design

¹ Available from Humusoft, Czech Republic: http://www.humusoft.cz/rt/irt.htm
² LabPC 1200, available from http://www.ni.com



Figure 1: Real-time PID control using Simulink and the real-time toolbox from Humusoft. Any number of laboratory plants can be connected to the input and output plugs.

on a model, and then rapidly swap the model for the actual plant, be it an RC filter network, a cascaded series of liquid-level tanks, an electromagnetic balance arm, or even a model helicopter for real tests. Even more interesting for the students is that we can rapidly swap from one plant to a duplicate to test the robustness of the controller, or even swap from one plant to a different type of plant, as shown in Fig. 2, and see exactly what components of the controller need re-designing. Since our advanced control courses are optional, only the motivated and interested students pursue them, many of whom have had some prior industrial experience. They quickly appreciate that industrial controller design must be flexible and, as far as possible, automated.

    2 Optimal control

Karlstad University also offers a number of advanced courses in control intended for graduate or final-year undergraduate students, covering topics such as digital control, adaptive control and optimal control. There is something inherently satisfying about the nature of optimal control; perhaps it is the combination of elegant theory with the knowledge that no controller will be better, at least in theory.

However the optimal control course has demanding pre-requisites. Most undergraduate textbooks (e.g. [2-4]) pay scant attention to

(a) Cascaded series of RC filters or blackbox

    (b) Fan and flapper

    (c) Toy helicopter

    Figure 2: Plants suitable for control laboratories



optimal control, and those that do, [1, 5, 6], primarily restrict themselves to linear optimal control: LQR and LQG. Graduate texts such as [7-9] naturally cover the underlying mathematics competently, but still leave gaps in the implementation, although there are rare examples, [10], which manage both the theory and to convey how one could realistically implement such an optimal scheme. The problem is not the authors' intention, but rather the inescapable fact that implementing a robust constrained nonlinear multivariable optimiser inside a nonlinear boundary-value problem is a non-trivial computational task. To implement this in real time, ensuring that when the sampling interval is up you have a sensible control input ready at your disposal, and all in limited, or perhaps even fixed, precision, requires serious attention to detail which has little to do with optimal control. On the other hand, without this care, your optimal controller is unlikely to be optimal, or even close to it.

2.1 The problem of the lean period

A more serious problem in demonstrating many advanced control topics in the laboratory is the requirement for an adequate plant model and state feedback. In our education, this means that after the students get to control our laboratory plant equipment using PID controllers in a first control course, they cannot manage any more convincing laboratories until they have mastered discrete-time control, system identification (for state-space models), linear optimal control and state estimation. This results in a substantial lean period in the education in terms of practical applications and is disturbing for the more practically oriented students. Given their healthy suspicion of simulated responses, without application, they are often unconvinced that this material is worth learning.

So the onus is on the lecturer to provide a demonstration that out-performs competing techniques. First there is the choice of plant. Ideally we would like a multivariable plant with differentiable nonlinearities, preferably in the dynamics (as opposed to Wiener/Hammerstein forms), low noise and no input or output saturation. For practical reasons, we desire time constants in the order of 1-10 seconds. We wish to avoid troublesome nonlinear elements such as stiction, hysteresis and excessive deadtime. In the case of our blackbox (refer Fig. 2(a)), we find it convenient to add the nonlinearities in software using a wrapper around the plant block.
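The software-wrapper idea can be illustrated outside Simulink too. The sketch below, in Python for brevity, wraps a hypothetical linear plant update with an input dead zone and saturation; the plant model and parameter values are invented for illustration, not taken from the blackbox:

```python
import numpy as np

def with_input_nonlinearity(plant_step, deadzone=0.05, u_max=1.0):
    """Wrap a plant update x_next = plant_step(x, u) so the input first
    passes through a software dead zone and saturation."""
    def wrapped(x, u):
        u = float(np.clip(u, -u_max, u_max))              # saturation
        u = 0.0 if abs(u) < deadzone else u - np.sign(u) * deadzone
        return plant_step(x, u)
    return wrapped

# Hypothetical linear plant update, x_{k+1} = 0.9 x_k + 0.5 u_k
linear = lambda x, u: 0.9 * x + 0.5 * u
plant = with_input_nonlinearity(linear)
```

Because the nonlinearity lives in the wrapper and not the plant, it can be switched on and off between laboratory runs without touching the controller.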

    2.2 Linear optimal control

Our first optimal control laboratory is to design a Kalman estimator and a linear-quadratic regulator (LQR) controller given a previously identified state-space plant model. The Simulink configuration shown in Fig. 3 closely follows any standard textbook diagram such as [5, Fig. 65]. An extension to this exercise involves including model adaption and setpoint-following capabilities, [5, p. 729]. An example of the controlled response is given in Fig. 4. This laboratory highlights the importance of selecting appropriate sampling times, model structure and forgetting factors, and of adjusting the control weighting in the optimal formulation to prevent input saturation. Only plant identification is performed in the first 30 samples.
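The offline part of this laboratory, computing the LQR gain and a steady-state Kalman gain from the identified model, amounts to solving two discrete Riccati equations (in Matlab, dlqr and kalman). A minimal sketch using Python/SciPy instead, with an invented second-order model rather than the actual identified blackbox model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical identified second-order model (illustrative values only)
Phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])
Delta = np.array([[0.0],
                  [0.5]])
C = np.array([[1.0, 0.0]])

# LQR gain minimising the sum of x'Qx + u'Ru
Q, R = np.eye(2), 10.0 * np.eye(1)
P = solve_discrete_are(Phi, Delta, Q, R)
L = np.linalg.solve(R + Delta.T @ P @ Delta, Delta.T @ P @ Phi)

# Steady-state Kalman gain from the dual Riccati equation
W, V = 0.01 * np.eye(2), 0.1 * np.eye(1)   # process / measurement noise
S = solve_discrete_are(Phi.T, C.T, W, V)
K = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)
```

The relatively large control weighting R plays the role described above: it keeps the computed input inside the actuator's saturation limits.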

Figure 3: Real-time LQR using observed states for feedback.

    2.3 Classical optimal control

Figure 4: A servo linear quadratic regulator of a laboratory plant with an adaptive RLS plant model (LQR of the blackbox, sampling time 1.50 s; RLS forgetting factor 0.995). Upper two trends: system output, setpoint and input. Lower two trends: model parameters and controller gains.

A good place to start teaching optimal control is with the classical open-loop general optimal control problem, if only to exemplify why closed-loop versions based on linear models and quadratic performance indices are so popular. Here we wish to establish inputs u(t) to minimise the scalar functional

$$J = \phi(x(t_f)) + \int_0^{t_f} L(x, u, t)\, dt \qquad (1)$$

given a nonlinear dynamic model

$$\dot{x} = f(x, u, t), \quad x(t = 0) = x_0 \qquad (2)$$

The solution to Eqn. 1 is obtained through variational calculus as a two-point boundary value problem,

$$\dot{x} = f(x, u, t), \quad x(t_0) = x_0 \qquad (3)$$

$$\dot{\lambda} = -\frac{\partial H}{\partial x}, \quad \lambda(t_f) = \left. \frac{\partial \phi}{\partial x} \right|_{t_f} \qquad (4)$$

where $\lambda$ are the co-state variables, $H \stackrel{\text{def}}{=} L + \lambda^T f$ is the Hamiltonian, and the control input is determined as the solution to

$$\frac{\partial L}{\partial u} + \lambda^T \frac{\partial f}{\partial u} = 0 \qquad (5)$$

Solving the possibly nonlinear two-point boundary value problem of Eqns. 3 and 4, with the embedded algebraic constraint, Eqn. 5, is non-trivial. If input saturation constraints are present, then the method due to Pontryagin is appropriate. The issue becomes even worse if we were to attempt this optimisation every sample time. We could parameterise the control moves and solve the ensuing optimisation problem, but this is an approximation, and still exhibits excessive computation.

Solving the full two-point boundary value problem using the shooting method does work, but this places excessive demands on the hardware, and works only if you have a good initial guess for the costates at t = 0. A more robust method is to use the boundary-value code based on collocation, bvp4c, present in Matlab release 12. With 500 MHz PCs, we can run this optimal controller (with a linear state estimator) down to sampling times of around 1 second. This could be substantially improved with better coding and mex s-functions, but that has the disadvantage that we lose the transparency of the algorithm.
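The same collocation idea behind bvp4c is available in SciPy as solve_bvp, which makes it easy to sketch Eqns. 3-5 on a toy problem (a made-up scalar example, not one of our plants). Minimising J = ½∫(x² + u²)dt subject to ẋ = -x + u, the stationarity condition (Eqn. 5) gives u = -λ, leaving a two-point boundary value problem in x and λ:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy problem: minimise J = 0.5*int_0^2 (x^2 + u^2) dt, dx/dt = -x + u,
# with x(0) = 1 and x(2) free.  Eqn. 5 gives u = -lambda, so:
#   dx/dt      = -x - lambda,    x(0) = 1        (Eqn. 3)
#   dlambda/dt = lambda - x,     lambda(2) = 0   (Eqn. 4)
def odes(t, y):
    x, lam = y
    return np.vstack([-x - lam, lam - x])

def bc(ya, yb):
    return np.array([ya[0] - 1.0,   # initial state condition
                     yb[1]])        # terminal co-state condition

t = np.linspace(0.0, 2.0, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
u_opt = -sol.y[1]                   # optimal input along the trajectory
```

The terminal condition λ(t_f) = 0 is the free-endpoint case of Eqn. 4 with φ = 0; a terminal state penalty would change only the second boundary residual.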

    2.4 Dynamic Matrix Control

Dynamic matrix control, or DMC, is one example of a collection of predictive controllers known as receding horizon controllers, [11]. For unconstrained linear plants, the controller requires a simple, albeit large, matrix inversion. While the basic algorithm is relatively straightforward, the subtleties, the open-loop nature, and the potential for model-plant mismatch mean that the student gains most only after they have programmed it. It is also a good example where it is much easier to develop the algorithm in raw Matlab as opposed to Simulink.
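The unconstrained calculation is indeed a short piece of code. A sketch in Python rather than Matlab, using step-response coefficients from a hypothetical first-order plant; the horizons and move-suppression weight are arbitrary illustrative choices:

```python
import numpy as np

def dmc_moves(step, Np, Nc, lam, e):
    """Unconstrained DMC: build the dynamic (step-response) matrix A and
    solve the regularised least-squares problem for the future moves."""
    A = np.zeros((Np, Nc))
    for j in range(Nc):
        A[j:, j] = step[:Np - j]       # shifted step-response columns
    # delta_u = (A'A + lam*I)^{-1} A' e
    return np.linalg.solve(A.T @ A + lam * np.eye(Nc), A.T @ e), A

# Step response of a hypothetical first-order plant (gain 1, tau = 5)
step = 1.0 - np.exp(-np.arange(1, 31) / 5.0)

e = np.ones(20)                        # predicted error trajectory
du, A = dmc_moves(step, Np=20, Nc=5, lam=0.1, e=e)
```

In the receding-horizon spirit, only du[0] would be applied before the whole calculation is repeated at the next sample.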

For nonlinear plants, and/or constraints, the controller calculations require an online optimisation. As in the linear case, the optimisation is repeated every sampling interval, but now requires an iterative search rather than a standard matrix inversion. Fig. 5 shows an example of a model predictive controller of the helicopter shown in Fig. 2(c), implemented using the xPC target. While the controlled response is impressive compared to what we can achieve using PID control, [12], the development effort and cost is substantial, limiting this to a final-year project.

2.5 Optimal controllers to minimise the absolute error

Optimal control where we try to minimise the absolute sum of the errors, rather than the classical squared sum, results in a linear program, [13]. This approach, which may be more robust, is certainly more natural, and it has the added advantage that we can directly take into account saturation in the input variables, or indeed any linear constraint condition.

The optimisation problem is then to choose the set of N future manipulated variables u_k such that the performance index

$$J = \sum_{k=0}^{N-1} \sum_{i=1}^{n} |r_i - x_i| \qquad (6)$$

is minimised subject to the discrete process model, $x_{k+1} = \Phi x_k + \Delta u_k$, of order n. This can be formulated as an admittedly largish linear program, even for modest sized problems. Fig. 6 shows a view of the constraint matrix for a 3 input/3 output system with a control horizon of 10 samples. We could use the LP solver lp.m from the optimisation toolbox (now updated to linprog), but we note that lp does not employ sparse techniques. The lower figure shows the resulting optimal trajectory, highlighting the acausal behaviour when using a receding horizon controller knowing future setpoint changes.
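The L1 formulation of Eqn. 6 can be demonstrated on a deliberately tiny scalar instance (the plant, horizon and limits below are invented for illustration), here with SciPy's linprog rather than the Matlab solver. The absolute values are handled in the standard way by splitting each error into non-negative parts e⁺ and e⁻:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny scalar instance of Eqn. 6: x_{k+1} = 0.9 x_k + 0.5 u_k, x_0 = 0,
# minimise sum_k |r - x_k| over N = 5 moves with |u_k| <= 1, r = 1.
# Each error is split as r - x_k = e_k^+ - e_k^-, with e^+, e^- >= 0.
a, b, N, r, x0 = 0.9, 0.5, 5, 1.0, 0.0
c = np.concatenate([np.zeros(N), np.ones(2 * N)])   # cost: sum of slacks
A_eq = np.zeros((N, 3 * N))
b_eq = np.zeros(N)
for k in range(1, N + 1):
    for j in range(k):                 # x_k = a^k x0 + sum a^(k-1-j) b u_j
        A_eq[k - 1, j] = a ** (k - 1 - j) * b
    A_eq[k - 1, N + k - 1] = 1.0       # e_k^+
    A_eq[k - 1, 2 * N + k - 1] = -1.0  # e_k^-
    b_eq[k - 1] = r - a ** k * x0
bounds = [(-1, 1)] * N + [(0, None)] * (2 * N)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
u_opt = res.x[:N]
```

At the optimum e⁺ + e⁻ equals |r - x_k|, because the LP never pays for both slacks at once, and the input bounds appear directly as variable bounds rather than extra rows.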

Figure 6: Upper: the constraint matrix (viewed with spy; nz = 555) from a linear program used as an optimal control strategy for a 3 input/3 output system with a horizon of 10. Lower: acausal behaviour with input constraints.

    3 Conclusions

The teaching of automatic control at Karlstad University has benefited from the incorporation of Matlab, Simulink and the real-time toolbox. The main problem we now face in our second course in the subject is that until the students are able to identify discrete state-space models and design observers or Kalman filters, they cannot apply modern control techniques such as LQG or GPC to physical plants. Even once the students manage to demonstrate such optimal control techniques, it is still problematic to convince oneself that the response is actually optimal in any sense.

Despite these difficulties, at the end of our second 5-week control course the students are able to develop advanced control algorithms and apply them to physical plants, something unthinkable even just a few years ago. We do this entirely in Matlab and some toolboxes, without needing to re-code routines in C or use a compiler.

Figure 5: Closed-loop response of the helicopter using a predictive control algorithm with different prediction and control horizons (Np = 60, Nc = 30; Np = 40, Nc = 15; Np = 30, Nc = 10). Data from [12].

Acknowledgements: Thanks are due to Jonas Balderud for the implementation of the helicopter MPC controller.

    References

[1] Karl-Johan Åström and Björn Wittenmark. Computer-Controlled Systems: Theory and Design. Prentice-Hall, 3rd edition, 1997.

[2] Richard C. Dorf and Robert H. Bishop. Modern Control Systems. Addison-Wesley, 8th edition, 1998.

[3] Benjamin C. Kuo. Automatic Control Systems. Prentice-Hall, 7th edition, 1995.

[4] Norman S. Nise. Control Systems Engineering. Benjamin/Cummings, 2nd edition, 1995.

[5] Katsuhiko Ogata. Discrete-Time Control Systems. Prentice-Hall, 1987.

[6] Thomas Kailath. Linear Systems. Prentice-Hall, 1980.

[7] Thomas L. Vincent and Walter J. Grantham. Nonlinear and Optimal Control Systems. John Wiley & Sons, 1997.

[8] C. K. Chui and G. Chen. Linear Systems and Optimal Control. Springer-Verlag, 1989.

[9] Enid R. Pinch. Optimal Control and the Calculus of Variations. Oxford University Press, 1993.

[10] Arthur E. Bryson, Jr. Dynamic Optimization. Addison-Wesley, 1999.

[11] C. R. Cutler and B. L. Ramaker. Dynamic matrix control: a computer control algorithm. In Joint Automatic Control Conference, volume 1, paper WP5-B. IEEE, August 1980.

[12] Jonas Balderud and David Wilson. Predictive control of a toy helicopter. In American Control Conference, Alaska, USA, 8-10 May 2002. Submitted.

[13] Tore K. Gustafsson and Pertti M. Mäkilä. L1 Identification Toolbox for Matlab. Åbo Akademi, Finland, August 1994. Ftp: ftp.abo.fi/pub/rt/l1idtools.
