Robust Control: Proceedings of a Workshop held in Tokyo, Japan, June 23–24, 1991
Lecture Notes in Control and Information Sciences 183
Editors: M. Thoma and A. Wyner
S. Hosoe (Ed.)
Robust Control. Proceedings of a Workshop held in Tokyo, Japan, June 23–24, 1991
Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Hong Kong Barcelona Budapest
Advisory Board
L.D. Davisson · A.G.J. MacFarlane · H. Kwakernaak · J.L. Massey · Ya.Z. Tsypkin · A.J. Viterbi
Editor
Prof. Shigeyuki Hosoe, Nagoya University, Dept. of Information Engineering, Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN
ISBN 3-540-55961-2 Springer-Verlag Berlin Heidelberg New York
ISBN 0-387-55961-2 Springer-Verlag New York Berlin Heidelberg
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in other ways, and storage in data banks. Duplication of this publication or parts thereof is only permitted under the provisions of the German Copyright Law of September 9, 1965, in its current version, and a copyright fee must always be paid. Violations fall under the prosecution act of the German Copyright Law.
© Springer-Verlag Berlin Heidelberg 1992. Printed in Germany
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Typesetting: Camera-ready by authors. Offsetprinting: Mercedes-Druck, Berlin. Bookbinding: B. Helm, Berlin. 61/3020 5 4 3 2 1 0. Printed on acid-free paper
PREFACE
The workshop on Robust Control was held in Tokyo, Japan on June 23–24, 1991. More than 70 researchers and engineers from 10 countries gathered together, and 33 general talks and 3 tutorial ones, all invited, were presented. The success of the workshop depended on their high level of scientific and engineering expertise.
This book collects almost all of the papers presented at the meeting. The topics covered include: H∞ control, parametric/structured approaches to robust control, robust adaptive control, sampled-data control systems, and their applications.
All the arrangements for the workshop were made by an organizing committee formed within the technical committee for control theory of the Society of Instrument and Control Engineers in Japan.
For financial support and continuing cooperation, we are grateful to the Casio Science Promotion Foundation and many Japanese companies.
Shigeyuki Hosoe
CONTENTS

Outline of the Workshop

Papers
• H. KIMURA: (J, J')-Lossless Factorization Using Conjugations of Zero and Pole Extractions ........ 1
• D.J.N. LIMEBEER, B.D.O. ANDERSON, B. HENDEL: Mixed H2/H∞ Filtering by the Theory of Nash Games ........ 9
• M. MANSOUR: The Principle of the Argument and its Application to the Stability and Robust Stability Problems ........ 16
• L.H. KEEL, J. SHAW, S.P. BHATTACHARYYA: Robust Control of Interval Systems ........ 24
• B. BAMIEH, M.A. DAHLEH, J.B. PEARSON: Rejection of Persistent, Bounded Disturbances for Sampled-Data Systems ........ 32
• Y. OHTA, H. MAEDA, S. KODAMA: Rational Approximation of L1-Optimal Controller ........ 40
• M. ARAKI, T. HAGIWARA: Robust Stability of Sampled Data Systems ........ 48
• S. HARA, P.T. KABAMBA: Robust Control System Design for Sampled-Data Feedback Systems ........ 56
• Y. YAMAMOTO: A Function State Space Approach to Robust Tracking for Sampled-Data Systems ........ 64
• F.B. YEH, L.F. WEI: Super-Optimal Hankel-Norm Approximations ........ 72
• J.R. PARTINGTON: Robust Control and Approximation in the Chordal Metric ........ 82
• R.F. CURTAIN: Finite-Dimensional Robust Controller Designs for Distributed Parameter Systems: A Survey ........ 90
• J.H. XU, R.E. SKELTON: Robust Covariance Control ........ 98
• T. FUJII, T. TSUJINO: An Inverse LQ Based Approach to the Design of Robust Tracking System with Quadratic Stability ........ 106
• T.T. GEORGIOU, M.C. SMITH: Linear Systems and Robustness: A Graph Point of View ........ 114
• M. FUJITA, F. MATSUMURA, K. UCHIDA: Experimental Evaluation of H∞ Control for a Flexible Beam Magnetic Suspension System ........ 122
• K. OSUKA: Robust Control of Nonlinear Mechanical Systems ........ 130
• A.S. MORSE: High-Order Parameter Tuners for the Adaptive Control of Nonlinear Systems ........ 138
• H. OHMORI, A. SANO: Design of Robust Adaptive Control System with a Fixed Compensator ........ 146
• S. HOSOE, J.F. ZHANG, M. KONO: Synthesis of Linear Multivariable Servomechanisms by H∞ Control ........ 154
• T. SUGIE, M. FUJITA, S. HARA: H∞-Suboptimal Controller Design of Robust Tracking Systems ........ 162
• M. KHAMMASH: Stability and Performance Robustness of ℓ1 Systems with Structured Norm-Bounded Uncertainty ........ 170
• S. SHIN, T. KITAMORI: A Study of a Variable Adaptive Law ........ 179
• M. FU: Model Reference Robust Control ........ 186
• T. MITA, K.Z. LIU: Parametrization of H∞ FI Controller and H∞/H2 State Feedback Control ........ 194
• A.A. STOORVOGEL, H.L. TRENTELMAN: The Mixed H2 and H∞ Control Problem ........ 202
• D.J.N. LIMEBEER, B.D.O. ANDERSON, B. HENDEL: Nash Games and Mixed H2/H∞ Control ........ 210
• J.B. PEARSON: Robust ℓ1-Optimal Control ........ 218
(J, J')-LOSSLESS FACTORIZATION USING CONJUGATIONS OF ZERO AND POLE EXTRACTIONS
Hidenori Kimura
Department of Mechanical Engineering for Computer-Controlled Machinery, Osaka University
1. Introduction
A matrix G(s) in L∞ is said to have a (J, J')-lossless factorization if it is represented as a product G(s) = Θ(s)Π(s), where Θ(s) is (J, J')-lossless [11] and Π(s) is unimodular in H∞. The notion of (J, J')-lossless factorization was first introduced by Ball and Helton [2] in a geometrical context. It is a generalization of the well-known inner-outer factorization of matrices in H∞ to those in L∞. It also includes the spectral factorization of positive matrices as a special case. Furthermore, it has turned out that the (J, J')-lossless factorization plays a central role in H∞ control theory [1][4][7]. Actually, it gives a simple and unified framework for H∞ control theory from the viewpoint of classical network theory. Thus, the (J, J')-lossless factorization is of great importance in system theory.

The (J, J')-lossless factorization is usually treated as a (J, J')-spectral factorization problem, which is to find a unimodular Π(s) such that G~(s)JG(s) = Π~(s)J'Π(s). Ball and Ran [4] first derived a state-space representation of the (J, J')-spectral factorization to solve the Nehari extension problem, based on the result of Bart et al. [5]. It was generalized in [1] to more general model-matching problems. Recently, Green et al. [8] gave a simpler state-space characterization of the (J, J')-spectral factorization for a stable G(s). Ball et al. [3] discussed some connections between the chain-scattering formulation of H∞ control and (J, J')-lossless factorization.

In this paper, we give a simple derivation of the (J, J')-lossless factorization for a general unstable G(s) based on the theory of conjugation developed in [9]. Conjugation is a simple operation that replaces some poles of a given rational matrix by their mirror images with respect to the origin through multiplication by a certain matrix. It is a state-space representation of the Nevanlinna-Pick interpolation theory [11] and the Darlington synthesis method of classical circuit theory. The (J, J')-lossless factorization for an unstable matrix, which is established for the first time in this paper, enables us to solve the general H∞ control problem directly without any recourse to the Youla parameterization [12].

In Section 2, the theory of conjugation is briefly reviewed. The (J, J')-lossless conjugation, a special class of conjugations by (J, J')-lossless matrices, is developed in Section 3. Section 4 gives a frequency-domain characterization of the (J, J')-lossless factorization. Section 5 is devoted to the state-space representation of the (J, J')-lossless factorization. The existence condition for the (J, J')-lossless factorization is represented in terms of two Riccati equations, one of which is degenerate in the sense that it contains no constant term.

Notations:
[A, B; C, D] := C(sI − A)^{-1}B + D,   R~(s) := R^T(−s)
σ(A): the set of eigenvalues of a matrix A.
RL∞^{m×r}: the set of proper rational matrices of size m×r without poles on the imaginary axis.
RH∞^{m×r}: the set of proper stable rational matrices of size m×r.
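To fix ideas, the packed realization notation [A, B; C, D] can be evaluated numerically. The following helper is our own illustration (not part of the paper); it simply implements C(sI − A)^{-1}B + D:

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate the packed realization [A, B; C, D] = C (sI - A)^{-1} B + D."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# G(s) = 1/(s+1) has the realization [A, B; C, D] = [-1, 1; 1, 0]
val = tf_eval([[-1.0]], [[1.0]], [[1.0]], [[0.0]], 2.0)  # G(2) = 1/3
```

Using `np.linalg.solve` instead of forming the inverse keeps the evaluation numerically well conditioned.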
2. Conjugation
The notion of conjugation was first introduced in [9]. It represents a simple operation which replaces the poles of a transfer function by their conjugates. Conjugation gives a unified framework for treating the problems associated with interpolation theory in the state space [10]. In this section, the notion of conjugation is generalized to represent the relevant operations more naturally.

To define the conjugation precisely, let

G(s) = [A, B; C, D]   (2.1)

be a minimal realization of a transfer function G(s).
Definition 2.1. Let σ(A) = Λ1 ∪ Λ2 be a partition of σ(A) into two disjoint sets of k complex numbers Λ1 and n−k complex numbers Λ2. A matrix Θ(s) is said to be a Λ1-conjugator of G(s) if the poles of the product G(s)Θ(s) are equal to {−Λ1} ∪ Λ2 and, for any constant vector ξ ≠ 0,

ξ^T(sI − A)^{-1}BΘ(s) ≠ 0   (2.2)

for some s, where −Λ1 is the set composed of the elements of Λ1 with reversed sign.

As a special case, we can choose Λ1 composed of the RHP elements of σ(A). If A has no eigenvalue on the jω-axis, a Λ1-conjugator of G(s) stabilizes the product G(s)Θ(s). In that case, the Λ1-conjugator is called a stabilizing conjugator. In the same way, we can define an antistabilizing conjugator.
Remark. The condition (2.2) is imposed to rule out trivial cases like Θ(s) ≡ 0. It is automatically satisfied in the case where Θ(s) is right invertible.

THEOREM 2.2. Θ(s) is a Λ1-conjugator of G(s) given by (2.1), if and only if Θ(s) is given by

Θ(s) = [−A^T, XBD_c; C_c, D_c]   (2.3)

where C_c, D_c and X are such that X is a solution of the Riccati equation

XA + A^T X + XBC_cX = 0   (2.4)

such that A_c := A + BC_cX satisfies σ(A_c) = {−Λ1} ∪ Λ2 and (A_c, BD_c) is controllable. In that case, the product G(s)Θ(s) is given by

G(s)Θ(s) = [A + BC_cX, BD_c; C + DC_cX, DD_c].   (2.5)
(Proof) From the definition, the A-matrix of the product G(s)Θ(s) has its eigenvalues in σ(A) ∪ σ(−A^T). Hence, it can be assumed, without loss of generality, that the A-matrix of Θ(s) is equal to −A^T. Hence, we put

Θ(s) = [−A^T, B_c; C_c, D_c]   (2.6)

for some B_c, C_c and D_c. Then, the product rule yields

G(s)Θ(s) = [A, BC_c, BD_c; 0, −A^T, B_c; C, DC_c, DD_c].   (2.7)

Since −Λ1 ⊂ σ(−A^T) and Λ2 ⊂ σ(A), we have

[A, BC_c; 0, −A^T][M1, M2; N1, N2] = [M1, M2; N1, N2][A1, 0; 0, A2']   (2.8)

for some M1, M2, N1 and N2 such that σ(A1) = {−Λ1} ∪ Λ2. Assume that Θ(s) is a Λ1-conjugator. Since the modes associated with A2' are uncontrollable in (2.7), we have

[BD_c; B_c] = [M1; N1]B1   (2.9)

for some B1. From (2.8), it follows that

AM1 + BC_cN1 = M1A1,   −A^T N1 = N1A1.   (2.10)

Hence, we have

(sI − A)M1 − BC_cN1 = M1(sI − A1).   (2.11)
Premultiplication of (2.11) by (sI − A)^{-1} and postmultiplication by (sI − A1)^{-1}B1 yield

M1(sI − A1)^{-1}B1 = (sI − A)^{-1}BΘ(s).   (2.12)

Here, we used the relation N1(sI − A1)^{-1}B1 = (sI + A^T)^{-1}B_c, which is due to (2.9) and the second relation of (2.10). The condition (2.2) implies that M1 is nonsingular and (A1, B1) is controllable.

Taking X = N1M1^{-1} in (2.10) yields (2.4). Since M1^{-1}A_cM1 = A1, we have σ(A_c) = {−Λ1} ∪ Λ2. It is clear that (A_c, BD_c) = (M1A1M1^{-1}, M1B1) is controllable because (A1, B1) is controllable. Due to (2.9), we have B_c = N1B1 = N1M1^{-1}BD_c = XBD_c, from which (2.3) follows. The derivation of (2.5) is straightforward.
Remark. Theorem 2.2 implies that the conjugator of G(s) in (2.1) depends only on (A, B). Therefore, we sometimes call it a conjugator of (A, B).

Remark. It is clear from (2.7) that the order of Θ(s) given in (2.6) is equal to the rank of X, which is equal to the number of elements in Λ1.
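As a concrete sketch of Theorem 2.2 (a scalar example of our own, not taken from the paper), let G(s) = 1/(s−1) and Λ1 = {1}. The conjugator (2.3) mirrors the pole to −1, and the product agrees with the realization (2.5):

```python
import numpy as np

# G(s) = 1/(s-1): (A, B, C, D) = (1, 1, 1, 0); conjugate Lambda_1 = {1}.
A, B, C, D = 1.0, 1.0, 1.0, 0.0
Cc, Dc = 1.0, 1.0                 # free parameters of Theorem 2.2
X = -2.0 * A / (B * Cc)           # solves XA + AX + X*B*Cc*X = 0, eq. (2.4)
Ac = A + B * Cc * X               # = -1: the pole mirrored across the origin

def tf1(a, b, c, d, s):
    """First-order transfer function c*(s - a)^{-1}*b + d."""
    return c * b / (s - a) + d

s = 0.3 + 2.0j
G      = tf1(A, B, C, D, s)                            # 1/(s-1)
Theta  = tf1(-A, X * B * Dc, Cc, Dc, s)                # eq. (2.3): (s-1)/(s+1)
GTheta = tf1(Ac, B * Dc, C + D * Cc * X, D * Dc, s)    # eq. (2.5): 1/(s+1)
```

Here G(s)Θ(s) = 1/(s+1): the unstable pole at +1 has been replaced by its mirror image at −1, exactly as the definition of a Λ1-conjugator requires.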
3. (J, J')-Lossless Conjugators
A matrix Θ(s) ∈ RL∞^{(m+r)×(p+q)} is said to be (J, J')-unitary if

Θ~(s)JΘ(s) = J'   (3.1)

for each s. A (J, J')-unitary matrix Θ(s) is said to be (J, J')-lossless if

Θ*(s)JΘ(s) ≤ J'   (3.2)

for each Re[s] > 0, where

J = [I_m, 0; 0, −I_r],   J' = [I_p, 0; 0, −I_q],   m ≥ p, r ≥ q.   (3.3)

A (J, J)-lossless matrix is simply called a J-lossless matrix.
A conjugator of G(s) which is (J, J')-lossless is called a (J, J')-lossless conjugator of G(s). The existence condition of a J-lossless conjugator, which is of primary importance in what follows, is now stated.

THEOREM 3.1. Assume that (A, B) is controllable and A has no eigenvalue on the jω-axis. There exists a (J, J')-lossless stabilizing (antistabilizing) conjugator of (A, B), if and only if there exists a nonnegative solution P of the Riccati equation

PA + A^T P − PBJB^T P = 0   (3.4)

which stabilizes (antistabilizes) A_c := A − BJB^T P. In that case, the desired (J, J')-lossless conjugator of G(s) and the conjugated system are given respectively by

Θ(s) = [−A^T, PB; −JB^T, I]D_c,   G(s)Θ(s) = [A − BJB^T P, BD_c; C − DJB^T P, DD_c],   (3.5)

where D_c is any matrix satisfying D_c^T J D_c = J'.

(Proof) Since (2.4) implies X(sI + A^T) = (sI − A_c)X, Θ(s) in (2.3) is represented as
Θ(s) = Θ0(s)D_c,   Θ0(s) = I + C_cX(sI − A_c)^{-1}B,   A_c = A + BC_cX.   (3.6)

Assume that P is a solution of the Lyapunov equation

PA_c + A_c^T P + X^T C_c^T J C_c X = 0.   (3.7)

Then, it follows that

Θ0~(s)JΘ0(s) = J + B^T(−sI − A_c^T)^{-1}(X^T C_c^T J + PB) + (JC_cX + B^T P)(sI − A_c)^{-1}B.

Since A_c is stable and (A_c, B) is controllable from the assumption, Θ0(s) is J-unitary if and only if

JC_cX + B^T P = 0.   (3.8)

In that case, we have

Θ0*(s)JΘ0(s) = J − (s + s̄)B^T(s̄I − A_c^T)^{-1}P(sI − A_c)^{-1}B.

Hence, in order that Θ0(s) is J-lossless, P should be nonnegative. From (3.8), (3.7) is written as (3.4), and Θ0(s) in (3.6) can be written as Θ0(s) = I − JB^T(sI + A^T)^{-1}PB, from which the representation (3.5) follows.

Now, we shall show a cascade decomposition of the (J, J')-lossless conjugation of (A, B) according
to the modal decomposition of the matrix A. Assume that the pair (A, B) is of the form

A = [A11, A12; 0, A22],   B = [B1; B2].   (3.9)

The solution P of (3.4) is represented conformably with the partition (3.9) as

P = [P11, P12; P12^T, P22].   (3.10)
LEMMA 3.2. If the pair (A, B) in (3.9) has a (J, J')-lossless conjugator Θ(s) of the form (3.5), then Θ(s) is represented as the product

Θ(s) = Θ1(s)Θ2(s)   (3.11)

of a J-lossless conjugator Θ1(s) of (A11, B1) and a (J, J')-lossless conjugator Θ2(s) of (A22, B̃2) with B̃2 = B2 + P22^+P12^T B1, where P22^+ is a pseudo-inverse of P22 in (3.10). Actually, Θ1(s) and Θ2(s) are given respectively as

Θ1(s) = [−A11^T, P̂B1; −JB1^T, I],   Θ2(s) = [−A22^T, P22B̃2; −JB̃2^T, I]D,   (3.12)

where P̂ = P11 − P12P22^+P12^T and D is any constant (J, J')-unitary matrix.

(Proof) Since P ≥ 0, we have P12(I − P22^+P22) = 0. Therefore,
UPU^T = [P̂, 0; 0, P22],   (U^{-1})^T B = [B1; B̃2],   U = [I, −P12P22^+; 0, I].

Substituting these into (3.4) shows that the transformed solution is block-diagonal, and in particular

P̂A11 + A11^T P̂ − P̂B1JB1^T P̂ = 0.   (3.13)

Since P̂ ≥ 0, Θ1(s) in (3.12) is a J-lossless conjugator of (A11, B1). Also, it is clear that Θ2(s) given in (3.12) is a (J, J')-lossless conjugator of (A22, B̃2). Multiplying Θ1(s) and Θ2(s) by the concatenation rule and using (3.13), we obtain Θ1(s)Θ2(s) = Θ(s). The proof is now complete.
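Numerically, the degenerate Riccati equation (3.4) can be solved from the stable invariant subspace of a Hamiltonian matrix: since the equation has no constant term, the Hamiltonian is block upper triangular. The sketch below is our own illustration (with a sign-definite J chosen for simplicity) of recovering a nonnegative stabilizing P:

```python
import numpy as np
from scipy.linalg import schur

def conjugator_gramian(A, B, J):
    """Stabilizing solution P of PA + A^T P - P B J B^T P = 0 (eq. (3.4)),
    obtained from the stable invariant subspace of H = [[A, -BJB^T], [0, -A^T]]."""
    n = A.shape[0]
    H = np.block([[A, -B @ J @ B.T], [np.zeros((n, n)), -A.T]])
    T, Z, sdim = schur(H, sort='lhp')      # stable eigenvalues ordered first
    X1, X2 = Z[:n, :n], Z[n:, :n]
    return X2 @ np.linalg.inv(X1)

A = np.diag([1.0, 2.0])    # antistable modes to be conjugated
B = np.eye(2)
J = np.eye(2)              # J > 0: the J-lossless (inner) case
P = conjugator_gramian(A, B, J)            # here P = diag(2, 4)
Ac = A - B @ J @ B.T @ P                   # conjugated A-matrix, stable
```

For this example the stabilizing solution is P = diag(2, 4), and A_c = A − BJB^T P = diag(−1, −2) has the mirrored (stable) spectrum, as Theorem 3.1 predicts.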
4. (J, J')-Lossless Factorizations
A matrix G(s) ∈ RL∞^{(m+r)×(p+q)} is said to have a (J, J')-lossless factorization if it can be represented as

G(s) = Θ(s)Π(s),   (4.1)

where Θ(s) ∈ RL∞^{(m+r)×(p+q)} is (J, J')-lossless and Π(s) is unimodular in H∞, i.e., both Π(s) and Π(s)^{-1} are in RH∞^{(p+q)×(p+q)}. It is a generalization of the inner-outer factorization of H∞ matrices to L∞ matrices.

Since Π(s) is unimodular, it has no unstable zeros or poles. Therefore, Θ(s) should include all the unstable zeros and poles of G(s). The (J, J')-lossless conjugation introduced in the previous section gives suitable machinery to carry out these procedures of "zero extraction" and "pole extraction."
If (4.1) holds, it follows that

Π(s)^{-1}J'Θ~(s)JG(s) = I_{p+q}.   (4.2)

This implies that G(s) should be left invertible. Hence, there exists a complement G'(s) of G(s) such that

[G(s), G'(s)]^{-1} =: [G+(s); G⊥(s)]   (4.3)

exists. Obviously, G+(s) is a left inverse and G⊥(s) is a left annihilator of G(s), i.e.,

G+(s)G(s) = I_{p+q},   G⊥(s)G(s) = 0.   (4.4)

The following result gives a method of carrying out a (J, J')-lossless factorization based on sequential conjugations for zero extraction and pole extraction.
THEOREM 4.1. A (J, J')-lossless factorization of G(s) exists, if and only if the left inverse G+(s) of G(s) has a J-lossless stabilizing conjugator Θ1(s) such that G~(s)JΘ1(s) has a (J, J')-lossless antistabilizing conjugator Θ2(s) satisfying

G⊥(s)Θ1(s)Θ2(s) = 0.   (4.5)
(Proof) To prove the necessity, assume that G(s) allows a (J, J')-lossless factorization (4.1). Clearly, we have

G+(s)Θ(s) = Π(s)^{-1},   G~(s)JΘ(s) = Π~(s)J'.

Therefore, Θ(s) is a (J, J')-lossless stabilizing conjugator of G+(s), as well as a (J, J')-lossless antistabilizing conjugator of G~(s)J. Due to Lemma 3.2, Θ(s) can be factored as in (3.11), where Θ1(s) is a stable J-lossless matrix and Θ2(s) is an antistable (J, J')-lossless matrix. From the well-known property of (J, J')-lossless matrices, Θ2(s) has no unstable zero. From

G(s) = Θ1(s)Θ2(s)Π(s),   (4.6)

Π1(s) := G+(s)Θ1(s) satisfies Π1(s)Θ2(s)Π(s) = I. Since Θ2(s)Π(s) has no unstable zeros, Π1(s) is stable. Therefore Θ1(s) is a stabilizing conjugator of G+(s). Since G~(s)JΘ1(s)Θ2(s) = Π~(s)J', and Π~(s) is antistable, Θ2(s) is a (J, J')-lossless antistabilizing conjugator of G~(s)JΘ1(s). The relation (4.5) is obvious from (4.6). Now, the necessity of the theorem has been proven.

Conversely, assume that there exist Θ1(s) and Θ2(s) satisfying the conditions of the theorem. Let Π1(s) = G+(s)Θ1(s) and Π2(s) = G~(s)JΘ1(s)Θ2(s). From the assumption, Π1(s) is stable and Π2(s) is antistable. Since Θ1(s) is J-lossless, Θ1~(s)JΘ1(s) = J (see [6]). Therefore, we have Π1(s)JΘ1~(s)JG(s) = I. Since Π1(s) is stable, G~(s)JΘ1(s) has no stable zero. Since a conjugator does not create zeros, Π2(s) has no stable zero. Therefore, Π2~(s) is unimodular.

Finally, we shall show that the relation (4.1) holds with Θ(s) = Θ1(s)Θ2(s) and Π(s) = J'Π2~(s). Due to (4.5), Θ1(s)Θ2(s) = G(s)U(s) for some U(s). Since Π(s) = J'Θ2~(s)Θ1~(s)JG(s), we have Π2~(s)U(s) = J'. The assertion follows immediately.
Based on the above theorem and its proof, we can state an algorithm for (J, J')-lossless factorization.

[Algorithm of (J, J')-Lossless Factorization]
Step 1. Find a J-lossless stabilizing conjugator Θ1(s) of G+(s).
Step 2. Find a (J, J')-lossless antistabilizing conjugator Θ2(s) of G~(s)JΘ1(s) satisfying (4.5).
Step 3. Put Θ(s) = Θ1(s)Θ2(s) and Π(s) = J'Θ~(s)JG(s), which give a (J, J')-lossless factorization.

In the above algorithm, Θ1(s) absorbs all the unstable zeros of G(s). Hence, Step 1 is regarded as a procedure of zero extraction. Similarly, Θ2(s) in Step 2 absorbs all the unstable poles of G(s), and hence Step 2 is regarded as pole extraction.
5. State-Space Theory of (J, J')-Lossless Factorizations
In this section, we derive a state-space form of the (J, J')-lossless factorization. Let

G(s) = [A, B; C, D] ∈ RL∞^{(m+r)×(p+q)}   (5.1)

be a state-space form of G(s). The following assumptions are made:

(A1) The matrix A has no eigenvalues on the jω-axis.
(A2) The matrix D is of full column rank.
(A3) G~(s)JG(s) admits no unstable pole-zero cancellation.
Due to the assumption (A2), there exists a column complement D' of D such that

[D, D']^{-1} =: [D+; D⊥]   (5.2)

exists. Obviously,

D+D = I_{p+q},   D⊥D = 0.   (5.3)
Now, we assume that G(s) has a (J, J')-lossless factorization (4.1). Since G~(s)JG(s) = Π~(s)J'Π(s), we have

D^T J D = D_π^T J' D_π,   (5.4)

where D_π = Π(∞) ∈ R^{(p+q)×(p+q)}. Since Π(s) is unimodular, D_π is nonsingular. Hence, D^T J D must be nonsingular.

The left inverse G+(s) and a left annihilator G⊥(s) of G(s) satisfying (4.4) are represented in the state space as

[G+(s); G⊥(s)] = [A − LC, L; −D+C, D+; −D⊥C, D⊥],   L := BD+ + UD⊥,   (5.5)
where U is an arbitrary matrix which is determined later; note that LD = B for any choice of U. Now, we shall carry out the (J, J')-lossless factorization of G(s) based on the algorithm derived in Section 4.

Step 1 (Zero Extraction). Due to Theorem 3.1, a J-lossless stabilizing conjugator Θ1(s) of G+(s) in (5.5) is calculated to be

Θ1(s) = [−(A − LC)^T, XL; −JL^T, I],   (5.6)

where X is a nonnegative solution of the Riccati equation

X(A − LC) + (A − LC)^T X − XLJL^T X = 0   (5.7)

which stabilizes

A1 := A − L(C + JL^T X).   (5.8)
Step 2 (Pole Extraction). Straightforward computation using LD = B yields

G~(s)JΘ1(s) = [−A^T, C^T J + XL; −B^T, D^T J].   (5.9)
Due to Theorem 3.1, a (J, J')-lossless antistabilizing conjugator of (5.9) is given by

Θ2(s) = [A, Y(C^T J + XL); −(C + JL^T X), I]DD_π^{-1},   (5.10)

where D_π is any matrix satisfying (5.4) and Y is a nonnegative solution of the Riccati equation

YA^T + AY + Y(C^T + XLJ)J(C + JL^T X)Y = 0   (5.11)

which stabilizes

A2 := A + Y(C^T + XLJ)J(C + JL^T X).   (5.12)
Now, the concatenation rule yields

Θ(s) = Θ1(s)Θ2(s) = [−(A − LC)^T, −XL(C + JL^T X), XB; 0, A, Y(C^T JD + XB); −JL^T, −(C + JL^T X), D]D_π^{-1}.   (5.13)
We use the free parameter U in (5.5) to satisfy (4.5). From (5.5), it follows that

G⊥(s)Θ(s) = [A1, (I + YX)B + YC^T JD; D⊥(C + JL^T X), 0].

Assumption (A3) implies that (A1, (I + YX)B + YC^T JD) is controllable. Hence, (4.5) holds, if and only if

D⊥(C + JL^T X) = 0.   (5.14)

Thus, the free parameter U in (5.5) must be determined such that (5.14) holds. The relation (5.14) holds, if and only if

L^T X = −J(DF + C)   (5.15)

for some matrix F. Since LD = B for any choice of U, we have

F = −(D^T JD)^{-1}(B^T X + D^T JC).   (5.16)
Straightforward computation using (5.15) and (5.16) verifies that the Riccati equation (5.7) is written as

XA + A^T X − (XB + C^T JD)(D^T JD)^{-1}(B^T X + D^T JC) + C^T JC = 0.   (5.17)

The matrix A1 given in (5.8) is calculated to be

A1 = A + BF.   (5.18)

Also, the Riccati equation (5.11) and A2 in (5.12) are represented respectively as

YA^T + AY + YF^T D^T JDFY = 0,   A2 = A + YF^T D^T JDF.   (5.19)
Step 3. Using (5.7) and (5.15), we can write the state-space form of Θ(s) = Θ1(s)Θ2(s) in (5.13) as

Θ(s) = [A1, BF, B; 0, A, −YF^T D^T JD; C + DF, DF, D]D_π^{-1},   (5.20)
where D_π is any matrix satisfying (5.4). It remains to compute the unimodular factor Π(s) = J'Θ~(s)JG(s). The concatenation rule, followed by a similarity transformation that removes the uncontrollable and unobservable parts, yields

Π(s) = D_π[A2, B − YF^T D^T JD; −F, I].   (5.21)
Now, we can state the main result of this paper.

THEOREM 5.1. Assume that G(s) given in (5.1) satisfies Assumptions (A1)-(A3). Then, G(s) has a (J, J')-lossless factorization, if and only if there exist a nonsingular matrix D_π ∈ R^{(p+q)×(p+q)} satisfying (5.4), a solution X ≥ 0 of the Riccati equation (5.17) which stabilizes A1 given in (5.18) with F given in (5.16), and a solution Y ≥ 0 of the Riccati equation (5.19) which stabilizes A2 given in (5.19). In that case, a (J, J')-lossless factor Θ(s) is given in (5.20) and a unimodular factor Π(s) is given in (5.21).
The case of a stable G(s) is much simpler. The result is exactly the same as was obtained in [8].

COROLLARY 5.2. Assume that A is stable in addition to (A1), (A2) and (A3). Then, G(s) has a (J, J')-lossless factorization (4.1), if and only if there exists a nonsingular matrix D_π ∈ R^{(p+q)×(p+q)} satisfying (5.4) and the Riccati equation (5.17) has a solution X ≥ 0 which stabilizes A1 given in (5.18).

If the above conditions hold, the factors Θ(s) and Π(s) of (4.1) are given respectively by

Θ(s) = [A1, B; C + DF, D]D_π^{-1},   Π(s) = D_π[A, B; −F, I].   (5.22)

(Proof) If A is stable, we can take Y = 0 in solving (5.19). The assertions of the corollary follow immediately from (5.20) and (5.21) for Y = 0.
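For the stable case of Corollary 5.2 with scalar J = J' = 1, the (J, J')-lossless factor is an inner function and the construction can be checked numerically. The following worked example is our own (G(s) = (s−1)/(s+2), not from the paper); it matches the Riccati equation (5.17) to SciPy's continuous-time ARE solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# G(s) = (s-1)/(s+2) = [A, B; C, D] with A stable; J = J' = 1, D_pi = 1.
A, B = np.array([[-2.0]]), np.array([[1.0]])
C, D = np.array([[-3.0]]), np.array([[1.0]])
J = np.array([[1.0]])

# Riccati (5.17): XA + A'X - (XB + C'JD)(D'JD)^{-1}(B'X + D'JC) + C'JC = 0,
# passed to scipy as q = C'JC, r = D'JD, s = C'JD.
X = solve_continuous_are(A, B, C.T @ J @ C, D.T @ J @ D, s=C.T @ J @ D)
F = -np.linalg.solve(D.T @ J @ D, B.T @ X + D.T @ J @ C)   # (5.16)
A1 = A + B @ F                                             # (5.18), stable

def tf(Ax, Bx, Cx, Dx, sv):
    return (Cx @ np.linalg.solve(sv * np.eye(Ax.shape[0]) - Ax, Bx) + Dx)[0, 0]

w = 0.7j
G     = tf(A, B, C, D, w)
Theta = tf(A1, B, C + D @ F, D, w)         # (5.22): inner factor (s-1)/(s+1)
Pi    = tf(A, B, -F, np.eye(1), w)         # (5.22): unimodular factor
```

Here X = 2, F = 1 and A1 = −1, giving Θ(s) = (s−1)/(s+1) (all-pass, carrying the unstable zero) and Π(s) = (s+1)/(s+2) (unimodular), so that Θ(s)Π(s) = G(s) as (4.1) requires.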
6. Conclusion
A necessary and sufficient condition for the existence of a (J, J')-lossless factorization in RL∞ has been established in the state space based on the method of conjugation. It includes the result of [8] for RH∞ as a special case. The derivation is simple, exhibiting the power of the conjugation method.

The result obtained in this paper can be applied directly to H∞ control problems without any recourse to the Youla parameterization, giving a simpler result for both standard and nonstandard plants. This will be published elsewhere.
REFERENCES
[1] J.A. Ball and N. Cohen, "Sensitivity minimization in an H∞ norm: Parameterization of all optimal solutions," Int. J. Control, 46, pp. 785-816 (1987).
[2] J.A. Ball and J.W. Helton, "A Beurling-Lax theorem for the Lie group U(m, n) which contains most classical interpolation," J. Operator Theory, 9, pp. 107-142 (1983).
[3] J.A. Ball, J.W. Helton and M. Verma, "A factorization principle for stabilization of linear control systems," Preprint.
[4] J.A. Ball and A.C.M. Ran, "Optimal Hankel norm model reduction and Wiener-Hopf factorization I: The canonical case," SIAM J. Contr. Optimiz., 25, pp. 362-382 (1987).
[5] H. Bart, I. Gohberg, M.A. Kaashoek and P. van Dooren, "Factorizations of transfer functions," SIAM J. Contr. Optimiz., 18, pp. 675-696 (1980).
[6] P. Dewilde, "Lossless chain-scattering matrices and optimum linear prediction: The vector case," Circuit Theory & Appl., 9, pp. 135-175 (1981).
[7] B.A. Francis, A Course in H∞ Control Theory, Springer, New York (1987).
[8] M. Green, K. Glover, D. Limebeer and J. Doyle, "A J-spectral factorization approach to H∞ control," SIAM J. Contr. Optimiz., 28, pp. 1350-1371 (1990).
[9] H. Kimura, "Conjugation, interpolation and model-matching in H∞," Int. J. Control, 49, pp. 269-307 (1989).
[10] H. Kimura, "State space approach to the classical interpolation problem and its applications," in Three Decades of Mathematical System Theory, H. Nijmeijer and J.M. Schumacher (eds.), Springer-Verlag, pp. 243-275 (1989).
[11] H. Kimura, Y. Lu and R. Kawatani, "On the structure of H∞ control systems and related extensions," IEEE Trans. Auto. Contr., AC-36, pp. 653-667 (1991).
[12] H. Kimura, "Chain-scattering formulation of H∞ control problems," Proc. Workshop on Robust Control, San Antonio (1991).
Mixed H2/H∞ Filtering by the Theory of Nash Games

D.J.N. Limebeer¹, B.D.O. Anderson², B. Hendel¹

¹Department of Electrical Engineering, Imperial College, Exhibition Rd., London.
²Department of Systems Engineering, Australian National University, Canberra, Australia.
Abstract

The aim of this paper is to study an H2/H∞ terminal state estimation problem using the classical theory of Nash equilibria. The H2/H∞ nature of the problem comes from the fact that we seek an estimator which satisfies two Nash inequalities. The first reflects an H∞ filtering requirement in the sense alluded to in [4], while the second inequality demands that the estimator be optimal in the sense of minimising the variance of the terminal state estimation error. The problem solution exploits a duality with the H2/H∞ control problem studied in [2, 3]. By exploiting duality in this way, one may quickly establish that an estimator exists which satisfies the two Nash inequalities if and only if a certain pair of cross-coupled Riccati equations has a solution on some optimisation interval. We conclude the paper by showing that the Kalman filtering, H∞ filtering and H2/H∞ filtering problems may all be captured within a unifying Nash game theoretic framework.
1 Introduction

In this paper we seek to solve a mixed H2/H∞ terminal state estimation problem by formulating it as a two-player nonzero-sum Nash differential game. As is well known [1, 5], two-player nonzero-sum games have two performance criteria, and the idea is to use one performance index to reflect an H∞ filtering criterion, while the second reflects the H2 optimality criterion usually associated with Kalman filtering. Following the precise problem statement given in Section 2.1, we reformulate the filtering problem as a deterministic mixed H2/H∞ control problem in Section 2.2. This approach offers two advantages in that the transformation is relatively simple, and we can then use an adaptation of the mixed H2/H∞ theory given in [2] to derive the necessary and sufficient conditions for the existence of the Nash equilibrium solution to the derived control problem, and therefore for the existence of optimal linear estimators which solve
the filtering problem. These conditions are presented in Section 2.3, where we also derive the dynamics of an on-line estimator. Some properties of the resulting mixed filter are given in Section 2.4. The aim of Section 2.5 is to provide a reconciliation with Kalman and H∞ filtering. In particular, we show that Kalman, H∞ and H2/H∞ filters may all be captured as special cases of another two-player Nash game. Brief conclusions appear in Section 3.

Our notation and conventions are standard. For A ∈ C^{n×m}, A* denotes the complex conjugate transpose. E{·} is the expectation operator. We denote by ||R||_{2,1} the operator norm induced by the usual 2-norm on functions of time. That is:

||R||_{2,1} = sup_{u≠0} ||Ru||_2 / ||u||_2.
2 The H2/H∞ filtering problem

2.1 Problem Statement
We consider a plant of the form

ẋ(t) = A(t)x(t) + B(t)w(t),   x(t0) = x0,   (2.1)

with noisy observations

z1(t) = C1(t)x(t),   (2.2)
z2(t) = C2(t)x(t) + n2(t),   (2.3)

in which the entries of A(t), B(t), C1(t) and C2(t) are continuous functions of time. The inputs w(t) and n2(t) are assumed to be independent zero-mean white noise processes with

E{w(t)w'(τ)} = Q(t)δ(t − τ),   (2.4)
E{n2(t)n2'(τ)} = R2(t)δ(t − τ),   ∀t, τ ∈ [t0, tf].   (2.5)

The matrices Q ≥ 0 and R2 > 0 are symmetric and have entries which are continuous functions of time.
Our next assumption concerns the initial state. We let x0 be a Gaussian random variable with

E{x0} = m0,   E{(x0 − m0)(x0 − m0)'} = P0.   (2.6)

Finally, we assume that x0 is independent of the noise processes, i.e.

E{x0w'(t)} = 0,   E{x0n2'(t)} = 0,   ∀t ∈ [t0, tf].
The estimate of the terminal state x(tf) is based on both observations z1(t) and z2(t), and the class of estimators under consideration is given by

x̂(tf) = ∫_{t0}^{tf} [M1(τ, tf)z1(τ) + M2(τ, tf)z2(τ)] dτ,   (2.7)

where M1(τ, tf) and M2(τ, tf) are linear time-varying impulse responses.
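For intuition, the plant (2.1)-(2.3) can be simulated with a simple Euler-Maruyama discretisation. All numerical values below are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
t0, tf, dt = 0.0, 1.0, 1e-3
N = int((tf - t0) / dt)
A  = np.array([[0.0, 1.0], [-2.0, -1.0]])    # A(t), taken constant here
Bw = np.array([[0.0], [1.0]])                # B(t)
C1 = np.array([[1.0, 0.0]])
C2 = np.array([[0.0, 1.0]])
Q, R2 = 1.0, 0.1                             # noise intensities, (2.4)-(2.5)

x = np.array([[1.0], [0.0]])                 # x0 (deterministic for the demo)
z1, z2 = np.zeros(N), np.zeros(N)
for k in range(N):
    w  = rng.normal(0.0, np.sqrt(Q / dt))    # sampled white noise w(t)
    n2 = rng.normal(0.0, np.sqrt(R2 / dt))   # measurement noise n2(t)
    z1[k] = (C1 @ x)[0, 0]                   # noise-free observation (2.2)
    z2[k] = (C2 @ x)[0, 0] + n2              # noisy observation (2.3)
    x = x + dt * (A @ x + Bw * w)            # Euler-Maruyama step of (2.1)
```

The 1/dt scaling of the sampled noise variances reflects the delta-correlated covariances in (2.4)-(2.5).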
The filtering problem is formulated as a two-player differential Nash game. The first player has access to the observation z1(t) and tries to maximise the variance of the estimation error by choosing an M1(·, tf) which minimises the H∞ filtering cost function [4]

J1(M1, M2) = γ²||M1||²_{2,1} − E{[x(tf) − x̂(tf)]'[x(tf) − x̂(tf)]}.   (2.8)

γ²||M1||²_{2,1} is a penalty term introduced to prevent player one from assigning an arbitrarily large value to M1, thus driving J1 to minus infinity. The second player has access to the observation z2(t) and attempts to minimise the error variance by selecting an M2 which minimises the Kalman filtering payoff function

J2(M1, M2) = E{[x(tf) − x̂(tf)]'[x(tf) − x̂(tf)]}.   (2.9)

We therefore seek two linear estimators M1* and M2* which satisfy the Nash equilibria

J1(M1*, M2*) ≤ J1(M1, M2*),   (2.10)
J2(M1*, M2*) ≤ J2(M1*, M2).   (2.11)
2.2 Problem Reformulation
In order to solve the stochastic filtering problem we transform it into a deterministic control problem. This approach has two advantages: the reformulation is relatively simple, and we can then use an adaptation of the mixed H2/H∞ theory [2] to derive the necessary and sufficient conditions for the existence of the Nash equilibria M1* and M2*.
We begin by introducing the time-varying matrix differential equation [4]

Ż(t, tf) = −A'(t)Z(t, tf) + C1'(t)M1'(t, tf) + C2'(t)M2'(t, tf)   (2.12)

with Z(tf, tf) = I. The aim is to rewrite the cost functions (2.8) and (2.9) as quadratic performance indices involving Z(·), M1(·) and M2(·). By direct calculation
d/dt (Z(t, tf)'x(t)) = [−A'Z + C1'M1' + C2'M2']'(t)x(t) + Z(t, tf)'[A(t)x(t) + B(t)w(t)]
= M1(t, tf)z1(t) + M2(t, tf)[z2(t) − n2(t)] + Z(t, tf)'B(t)w(t).
Integrating over [t0, tf] gives

x(tf) − Z'(t0, tf)x0 = ∫_{t0}^{tf} {M1z1 + M2[z2 − n2] + Z'Bw}(t) dt.
Rearranging and using (2.7) yields

x(tf) − x̂(tf) = Z'(t0, tf)x0 − ∫_{t0}^{tf} {M2n2 − Z'Bw}(t) dt.
In the next step we square both sides and take expectations, making use of the assumed statistical properties of the noise processes and the initial state. This gives

E{[x(tf) − x̂(tf)]'[x(tf) − x̂(tf)]} = tr { ∫_{t0}^{tf} (M2R2M2' + Z'BQB'Z)(t) dt + Z'(t0, tf)P0Z(t0, tf) }.   (2.13)
We can now use (2.13) to rewrite J1 and J2 as

J1 = tr { ∫_{t0}^{tf} (γ²M1M1' − M2R2M2' − Z'BQB'Z)(t) dt } − tr { Z'(t0, tf)P0Z(t0, tf) }   (2.14)

J2 = tr { ∫_{t0}^{tf} (M2R2M2' + Z'BQB'Z)(t) dt } + tr { Z'(t0, tf)P0Z(t0, tf) }.   (2.15)

Notice that both payoff functions are now deterministic.
The filtering problem has thus been transformed into the mixed H2/H∞ control problem of finding equilibrium solutions M1* and M2* which satisfy
J1(M1*, M2*) ≤ J1(M1, M2*)   (2.16)
J2(M1*, M2*) ≤ J2(M1*, M2)   (2.17)

where J1 and J2 are given by (2.14) and (2.15); the matrix Z(t) is constrained by (2.12).
Remarks  (i) Notice that the "state" Z(t) is matrix valued and the "dynamics" have a terminal condition Z(tf) = I. (ii) The optimal linear estimators M1* and M2* are matrix valued too. (iii) The initial condition data, in the form of Z'(t0)P0Z(t0), appears in the cost functionals.
2.3 Finding an on-line estimate
The necessary and sufficient conditions for the existence of Nash equilibrium solutions to the mixed H2/H∞ control problem, and therefore for the existence of the linear estimators M1* and M2*, follow directly from [2], and are presented in the following theorem. Scaling arguments allow us to assume, without loss of generality, that R2 = Q = I.
Theorem 2.1  Given the system and observations described by (2.1) to (2.3), there exist transformations M1* and M2* which satisfy the Nash equilibria (2.16) and (2.17), with J1 and J2 defined in (2.14) and (2.15), if and only if the coupled Riccati differential equations

Ṗ1(t) = A P1(t) + P1(t)A' − BB' − (γ⁻²P1C1'C1 + P2C2'C2)(t) P1(t)   (2.18)

Ṗ2(t) = A P2(t) + P2(t)A' + BB' − (γ⁻²P1C1'C1 + P2C2'C2)(t) P2(t)   (2.19)

with P1(t0) = P0 and P2(t0) = P0, have solutions P1(t) ≤ 0 and P2(t) > 0 on [t0, tf].
When (2.18) and (2.19) have solutions, the equilibrium strategies are given by

M1*(t, tf) = γ⁻²Z'(t, tf)P1(t)C1'   (2.20)
M2*(t, tf) = Z'(t, tf)P2(t)C2'.   (2.21)

Moreover,

J1(M1*, M2*) = tr P1(tf)   (2.22)
J2(M1*, M2*) = tr P2(tf).   (2.23)

With a little manipulation we can now find an on-line estimate for x(tf) based on the observations z1(tf) and z2(tf), in the same sense that the Kalman filter provides an on-line state estimate using current observations. Implementing the equilibrium strategies in (2.12) gives

Ż(t, tf) = (−A' + γ⁻²C1'C1P1 + C2'C2P2)Z(t, tf),   Z(tf, tf) = I.   (2.24)

Direct calculation yields

d/dtf Z⁻¹(tf, t) = −Z⁻¹(tf, t)[d/dtf Z(tf, t)]Z⁻¹(tf, t) = Z⁻¹(tf, t)(A' − γ⁻²C1'C1P1 − C2'C2P2)(tf).   (2.25)

Since Z(t, tf) = Z⁻¹(tf, t), we have d/dtf Z(t, tf) = d/dtf Z⁻¹(tf, t), and (2.25) then gives

d/dtf Z'(t, tf) = (A − γ⁻²P1C1'C1 − P2C2'C2)(tf) Z'(t, tf).   (2.26)

Now, from (2.7),

x̂(tf) = ∫_{t0}^{tf} [M1*(τ, tf)z1(τ) + M2*(τ, tf)z2(τ)] dτ.

Differentiating with respect to tf and using (2.26) we obtain

d/dtf x̂(tf) = (A − γ⁻²P1C1'C1 − P2C2'C2)(tf)x̂(tf) + (γ⁻²P1C1'z1 + P2C2'z2)(tf),   x̂(t0) = m0   (2.27)

or equivalently

d/dtf x̂(tf) = A(tf)x̂(tf) − K1(tf)[C1(tf)x̂(tf) − z1(tf)] − K2(tf)[C2(tf)x̂(tf) − z2(tf)]   (2.28)

with

K1(tf) = γ⁻²P1(tf)C1'(tf)   (2.29)
K2(tf) = P2(tf)C2'(tf).   (2.30)

The matrices P1(tf) and P2(tf) satisfy the coupled Riccati equations (2.18) and (2.19).
2.4 The mixed H2/H∞ filter

In this section we show how the H2/H∞ filter is defined in terms of the above analysis, and we present some of its properties. Suppose M1 is replaced by a dormant player as follows: calculate M1* and M2* as above, and then in (2.27) replace the term multiplying z1 by zero, i.e. we take M1 = 0 · M1*. This means that in (2.28) we replace K1(tf) by zero to obtain
d/dtf x̂(tf) = [A(tf) − K2(tf)C2(tf)]x̂(tf) + K2(tf)z2(tf),   x̂(t0) = m0,
which is the defining equation of the H2/H∞ filter. It is easy to see from this equation that the H2/H∞ filter has the usual observer structure associated with Kalman and H∞ filters. The following result provides an upper bound on the variance of the terminal state estimation error, and shows that, if the white noise assumption on the disturbance inputs w and n2 is dropped, the energy gain from the disturbances to the generalised estimation error C1(x̂(tf) − x(tf)) is bounded above by γ for all w, n2 ∈ L2[t0, tf].
Theorem 2.2  If M1 ≡ 0,
(i) the on-line state estimate is generated by

d/dtf x̂(tf) = (A − P2C2'C2)(tf)x̂(tf) + P2(tf)C2'(tf)z2(tf),   x̂(t0) = m0;   (2.31)
(ii) the covariance of the estimation error e(tf) = x̂(tf) − x(tf) satisfies

E{e(tf)e'(tf)} ≤ −P1(tf)   (2.32)
and (iii) when m0 = 0, ||T||∞ ≤ γ for all w, n2 ∈ L2[t0, tf], where the operator T is described by

ė(tf) = (A − P2C2'C2)(tf)e(tf) + [B  −P2C2'] [w; n2]   (2.33)
z = C1e(tf).   (2.34)
2.5 Reconciliation of Kalman, H∞ and H2/H∞ filtering
Here we establish a link between Kalman, H∞ and mixed H2/H∞ filtering. Each of the three theories may be generated as a special case of the following nonzero-sum, two-player Nash differential game:
Given the system described by (2.1) to (2.3) and a state estimate of the form given in (2.7), we seek linear estimators M1* and M2* which satisfy the Nash equilibria J1(M1*, M2*) ≤ J1(M1, M2*) and J2(M1*, M2*) ≤ J2(M1*, M2), where

J2 = E{[x(tf) − x̂(tf)]'[x(tf) − x̂(tf)]} − ρ²||M1||².
This game can be reformulated as a deterministic control problem and is solved by

M1*(t, tf) = γ⁻²Z'(t, tf)S1(t)C1'   and   M2*(t, tf) = Z'(t, tf)S2(t)C2',

where Z(t, tf) is generated by (2.12) and S1(t) and S2(t) satisfy the coupled Riccati differential equations
Ṡ1(t) = A S1(t) + S1(t)A' − BB' − (γ⁻²S1C1'C1 + S2C2'C2)(t) S1(t)

Ṡ2(t) = A S2(t) + S2(t)A' + BB' − (γ⁻²S1C1'C1 + S2C2'C2 + ρ²γ⁻⁴S1C1'C1)(t) S2(t)

with S1(tf) = S2(tf) = 0.
(i) The Kalman filtering problem is recovered by setting ρ = 0 and γ = ∞, in which case S1(t) = S2(t) = Pc(t). The matrix Pc(t) is the solution to the usual Kalman filter Riccati equation. (ii) The pure H∞ filtering problem is recovered by setting ρ = γ. Again, S1(t) = S2(t) = P∞(t). The matrix P∞(t) is the solution to the H∞ filtering Riccati equation [4]. (iii) The mixed H2/H∞ filtering problem comes from ρ = 0 with γ finite.
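As a small numerical illustration of case (i), the sketch below integrates the scalar Kalman filter Riccati equation by forward Euler. The system values (a = −1, b = c = 1, unit noise intensities) and the function name are illustrative choices for this sketch, not taken from the paper.

```python
def kalman_riccati_scalar(a, b, c, p0, t_end=10.0, dt=1e-3):
    """Forward Euler integration of the scalar Kalman filter Riccati
    equation  Pdot = 2*a*P + b^2 - c^2 * P^2  (Q = R = 1)."""
    p = p0
    for _ in range(int(t_end / dt)):
        p += dt * (2.0 * a * p + b * b - c * c * p * p)
    return p

# with a = -1, b = c = 1 the steady state solves P^2 + 2P - 1 = 0,
# whose positive root is P = sqrt(2) - 1 ~= 0.4142
p_ss = kalman_riccati_scalar(-1.0, 1.0, 1.0, p0=1.0)
```

Over a horizon of ten time constants the solution settles close to the positive root of the algebraic Riccati equation, which is the steady-state error variance.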
3 Conclusions

We have shown how a mixed H2/H∞ filtering problem may be formulated as a deterministic mixed H2/H∞ control problem. The control problem can then be solved using the Nash game techniques explained in [2]. The necessary and sufficient conditions for the existence of a solution to the filtering problem are given in terms of the existence of solutions to a pair of cross-coupled Riccati differential equations. We have derived an upper bound on the covariance of the state estimation error in terms of one of the solutions to the coupled equations, and have shown that, under certain conditions, the energy gain from the disturbance inputs to the generalised estimation error C1(x(tf) − x̂(tf)) is bounded above by γ.
References

[1] T. Basar and G. J. Olsder, "Dynamic noncooperative game theory," Academic Press, New York, 1982.

[2] D. J. N. Limebeer, B. D. O. Anderson and B. Hendel, "A Nash game approach to mixed H2/H∞ control," submitted for publication.

[3] D. J. N. Limebeer, B. D. O. Anderson and B. Hendel, "Nash games and mixed H2/H∞ control," preprint.

[4] D. J. N. Limebeer and U. Shaked, "Minimax terminal state estimation and H∞ filtering," submitted to IEEE Trans. Auto. Control.

[5] A. W. Starr and Y. C. Ho, "Nonzero-sum differential games," J. Optimization Theory and Applications, Vol. 3, No. 3, pp. 184-206, 1967.
The Principle of the Argument and its Application to the Stability and Robust Stability Problems
Mohamed Mansour Automatic Control Laboratory
Swiss Federal Institute of Technology Zurich, Switzerland
Abstract
It is shown that the principle of the argument is the basis for the different stability criteria for linear continuous and discrete systems. From the principle of the argument, stability criteria in the frequency domain are derived which lead to Hermite-Bieler theorems for continuous and discrete systems. The Routh-Hurwitz criterion and its equivalent for discrete systems, and the Schur-Cohn criterion and its equivalent for continuous systems, are directly obtained. Moreover, the monotony of the argument for different functions can be proved using the Hermite-Bieler theorem. Using the principle of the argument and the continuity of the roots of a polynomial w.r.t. its coefficients, all known robust stability results can be obtained in a straightforward and simple way.
1 Principle of the Argument
Consider a function f(z) which is regular inside a closed contour C and is not zero at any point on the contour, and let ΔC{arg f(z)} denote the variation of arg f(z) round the contour C. Then

Theorem 1 [1]
ΔC{arg f(z)} = 2πN   (1)

where N is the number of zeros of f(z) inside C.
2 Stability Criterion using the Principle of the Argument

2.1 Continuous Systems
Consider the characteristic polynomial of a continuous system
f(s) = a0 s^n + a1 s^{n−1} + ... + a_{n−1} s + a_n   (2)
Theorem 2  f(s) has all its roots in the left half of the s-plane if and only if f(jω) has a change of argument of nπ/2 when ω changes from 0 to ∞.
The proof of this criterion, which is known in the literature as the Cremer-Leonhard-Michailov criterion, is based on the fact that a negative real root contributes an angle of π/2 to the change of argument, and a pair of complex roots in the left half plane contributes an angle of π to the change of argument as ω changes from 0 to ∞. It is easy to show geometrically that the argument is monotonically increasing.
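The criterion is easy to verify numerically. The sketch below (an illustrative check with a hypothetical helper name, not from the paper) tracks the unwrapped argument of f(jω) for the Hurwitz polynomial f(s) = (s+1)(s+2)(s+3) and compares it with nπ/2:

```python
import cmath
import math

def argument_change(coeffs, w_max=1000.0, steps=100000):
    """Total change of arg f(jw) as w runs from 0 to w_max.
    coeffs = [a0, ..., an] with f(s) = a0*s^n + ... + an.  The phase
    is unwrapped step by step, so the grid must be fine enough that
    consecutive samples differ by less than pi."""
    def f(s):
        v = 0.0
        for a in coeffs:
            v = v * s + a          # Horner evaluation
        return v
    total = 0.0
    prev = cmath.phase(f(0j))
    for k in range(1, steps + 1):
        cur = cmath.phase(f(1j * (w_max * k / steps)))
        d = cur - prev
        if d > math.pi:            # unwrap a 2*pi jump
            d -= 2.0 * math.pi
        elif d < -math.pi:
            d += 2.0 * math.pi
        total += d
        prev = cur
    return total

# f(s) = (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6 is Hurwitz (n = 3);
# Theorem 2 predicts an argument change of n*pi/2 = 3*pi/2
change = argument_change([1.0, 6.0, 11.0, 6.0])
```

The finite sweep stops short of ω = ∞, so the computed change falls slightly below 3π/2 by an O(1/w_max) residual.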
f(s) can be written as

f(s) = h(s²) + s g(s²)   (3)

where h(s²) and s g(s²) are the even and odd parts of f(s) respectively, so that

f(jω) = h(−ω²) + jω g(−ω²).   (4)
2.2 Discrete Systems
Consider the characteristic polynomial of a discrete system
f(z) = a0 z^n + a1 z^{n−1} + ... + a_{n−1} z + a_n   (5)

Theorem 3  f(z) has all its roots inside the unit circle of the z-plane if and only if f(e^{jθ}) has a change of argument of nπ when θ changes from 0 to π.
The proof of this criterion is based on the fact that a real root inside the unit circle contributes an angle of π to the change of argument, and a pair of complex roots inside the unit circle contributes an angle of 2π to the change of argument as θ changes from 0 to π. It is easy to show geometrically that the argument is monotonically increasing.
f(z) can be written as f(z) = h(z) + g(z)   (6)

where

h(z) = (1/2)[f(z) + z^n f(1/z)] = α0 z^n + α1 z^{n−1} + ... + α1 z + α0   (7)

and

g(z) = (1/2)[f(z) − z^n f(1/z)] = β0 z^n + β1 z^{n−1} + ... − β1 z − β0   (8)

are the symmetric and antisymmetric parts of f(z) respectively. Also

f(e^{jθ}) = 2e^{jnθ/2}[h* + jg*]   (9)
where, for n even with n = 2ν,

h*(θ) = α0 cos νθ + α1 cos(ν−1)θ + ... + α_{ν−1} cos θ + αν/2   (10)
g*(θ) = β0 sin νθ + β1 sin(ν−1)θ + ... + β_{ν−1} sin θ.
From Theorem 3 and equation (9) we get

Theorem 4  f(z) has all its roots inside the unit circle of the z-plane if and only if f* = h* + jg* has a change of argument of nπ/2 when θ changes from 0 to π.
f* is a complex function w.r.t. a rotating axis. It can be shown geometrically that the argument of f* increases monotonically. Fig. 1 shows the change of coordinates.
Figure 1: Change of coordinates
3 Hermite-Bieler Theorem
3.1 Continuous Systems
Consider the polynomial in (3).
Theorem 5 (Hermite-Bieler Theorem)  f(s) has all its roots in the left half of the s-plane if and only if h(λ) and g(λ) have simple real negative alternating roots and a1/a0 > 0. The first root next to zero is a root of h(λ).

The proof of this theorem follows directly from Theorem 1; [2]. If Theorem 5 is satisfied then h(λ) and g(λ) are called a positive pair.
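The positive-pair condition is straightforward to check for a concrete example. The sketch below (an illustration chosen for this note, not from the paper) tests the interlacing of the roots of h(λ) and g(λ) for f(s) = (s+1)(s+2)(s+3)(s+4):

```python
import math

# f(s) = (s+1)(s+2)(s+3)(s+4) = s^4 + 10s^3 + 35s^2 + 50s + 24:
#   even part  h(s^2) = s^4 + 35 s^2 + 24  ->  h(l) = l^2 + 35 l + 24
#   odd part  s g(s^2) = 10 s^3 + 50 s     ->  g(l) = 10 l + 50
disc = math.sqrt(35.0 ** 2 - 4.0 * 24.0)
h_roots = [(-35.0 - disc) / 2.0, (-35.0 + disc) / 2.0]
g_roots = [-5.0]

# order all roots from 0 towards -infinity and record which
# polynomial each one belongs to; Theorem 5 demands the pattern
# h, g, h, g, ... starting with a root of h next to zero
merged = sorted(h_roots + g_roots, reverse=True)
owners = ['h' if r in h_roots else 'g' for r in merged]
```

Here the roots are approximately −0.70 (h), −5 (g), −34.3 (h): real, negative, alternating, with the root nearest zero belonging to h, exactly as the theorem requires for a Hurwitz polynomial.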
3.2 Discrete Systems
Consider polynomial (5).
Theorem 6 (Discrete Hermite-Bieler Theorem)  f(z) has all its roots inside the unit circle of the z-plane if and only if h(z) and g(z) have simple alternating roots on the unit circle and |a_n/a_0| < 1; [3], [4].

The proof can be sketched as follows: due to Theorem 4, h* and g*, and hence h and g, will go through 0 alternately n times as θ changes from 0 to π. The necessary condition |a_n/a_0| < 1 distinguishes between f(z) and its inverse, which has all its roots outside the unit circle.
4 Monotony of the argument
In [5] it is shown that the argument of a Hurwitz polynomial f(s) = h(s²) + sg(s²) in the frequency domain, f(jω) = h(−ω²) + jωg(−ω²), as well as the arguments of the modified functions f1*(ω) = h(−ω²) + jg(−ω²) and f2*(ω) = h(−ω²) + jω²g(−ω²), are monotonically increasing functions of ω, where ω varies between 0 and ∞. Similarly it is shown that, given a Schur polynomial f(z) = h(z) + g(z), where h(z) and g(z) are the
symmetric and the antisymmetric parts of f(z) respectively, the same monotony property can be obtained for an auxiliary function f*(θ) defined by f(e^{jθ}) = 2e^{jnθ/2}f*(θ). The argument of f*(θ) = h*(θ) + jg*(θ) is a monotonically increasing function of θ, as is the argument of the modified function

f1*(θ) = h*(θ)/cos(θ/2) + j g*(θ)/sin(θ/2),

where θ varies between 0 and π. These results were proved in [5] using mathematical induction. However, a simpler proof, based on the Hermite-Bieler theorems, can be obtained using network-theoretical results, [6]. The results can be directly applied to the robust stability problem [5].
5 Hermite-Bieler Theorem and Criteria for Hurwitz Stability

In this section it is shown how different criteria for Hurwitz stability, such as the Routh-Hurwitz criterion and the equivalent of the Schur-Cohn criterion for continuous systems, are derived from the Hermite-Bieler theorem. The Kharitonov theorem [7], [8] can also be derived in a similar way, [9].
5.1 Routh-Hurwitz Criterion

Without loss of generality consider n even. The first and second rows of the Routh table are given by the coefficients of h and g respectively. The third row is given by the coefficients of h − (a0/a1)s²g. The following theorem shows that the Routh table reduces the stability of a polynomial of degree n to that of one of degree n − 1. Repeating this operation we arrive at a polynomial of degree one whose stability is easily determined.
Theorem 7 [2], [10]  f(s) is Hurwitz stable if and only if h − (a0/a1)s²g + sg is Hurwitz stable.
Proof: Let f(s) = h + sg be Hurwitz stable, i.e. a0, a1, ..., an > 0 and, according to Theorem 5, h(λ) and g(λ) have simple real negative alternating roots. h − (a0/a1)s²g and h have the same sign at the roots of g, and the number of roots of h − (a0/a1)s²g + sg is n − 1. Moreover a1a2 − a0a3 > 0. Therefore, according to Theorem 5, h − (a0/a1)s²g + sg is Hurwitz stable. The proof of the if part is similar. □
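The reduction step of Theorem 7 is easy to mechanise. The sketch below (an illustration with a hypothetical helper name, assuming as in the theorem that all coefficients are positive) repeatedly replaces h + sg by h − (a0/a1)s²g + sg until degree one is reached:

```python
def routh_reduce(a):
    """One reduction step of Theorem 7: given the coefficients
    a = [a0, ..., an] of f = h + s*g (highest power first), return
    the coefficients of h - (a0/a1)*s^2*g + s*g, of degree n - 1."""
    r = a[0] / a[1]
    out = list(a[1:])
    # subtract r times the next odd-part coefficient from each
    # even-part coefficient of the reduced polynomial
    for i in range(1, len(out), 2):
        nxt = out[i + 1] if i + 1 < len(out) else 0.0
        out[i] -= r * nxt
    return out

# f(s) = (s+1)(s+2)(s+3)(s+4): Hurwitz, so every reduced polynomial
# should keep a strictly positive leading coefficient
coeffs = [1.0, 10.0, 35.0, 50.0, 24.0]
leading = []
while len(coeffs) > 1:
    leading.append(coeffs[0])
    coeffs = routh_reduce(coeffs)
```

The collected leading coefficients are precisely the first column of the Routh table; all of them staying positive is the classical stability test.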
5.2 Equivalent of Schur-Cohn-Jury Criterion for Continuous Systems

Theorem 8 [11]  f(s) is Hurwitz stable if and only if f(s) − [f(−1)/f(1)]f(−s) is Hurwitz stable.

−1 is a root of f(s) − [f(−1)/f(1)]f(−s), which can be eliminated, so that the stability of a polynomial of degree n is again reduced to the stability of a polynomial of degree n − 1. This theorem can be proved using Rouché's theorem. Here we give a proof based on the Hermite-Bieler theorem and on the following lemma.
Lemma 1  h + sg is Hurwitz stable if and only if γh + δsg is Hurwitz stable for γ, δ positive numbers.

Proof: The interlacing property does not change when h and g are multiplied by positive numbers. Also δa1/(γa0) > 0. □
Proof of Theorem 8: Let f(s) be Hurwitz stable. If f(s) = h + sg then f(−s) = h − sg. Therefore

f(s) − [f(−1)/f(1)]f(−s) = [1 − f(−1)/f(1)]h + [1 + f(−1)/f(1)]sg.

Using Lemma 1 we get that f(s) − [f(−1)/f(1)]f(−s) is Hurwitz stable. The proof of the if part is similar. □
6 Hermite-Bieler Theorem and Criteria for Schur Stability

In this section it is shown how different criteria for Schur stability, such as the equivalent of the Routh-Hurwitz criterion for discrete systems and the Schur-Cohn-Jury criterion, are derived from the Hermite-Bieler theorem. An analogue of the Kharitonov theorem can also be obtained; [9], [12]. Consider f(z) given by equation (5). Without loss of generality we assume a0 > 0 and n even; for n odd a similar treatment can be made. A necessary condition for Schur stability of (5) is |an/a0| < 1, which results in α0 > 0 and β0 > 0.
6.1 Equivalent of Routh-Hurwitz Criterion for Discrete Systems

If h(z) and g(z) given by (7) and (8) fulfill the discrete Hermite-Bieler theorem, then their projections on the real axis, h*(x) and g*(x), have distinct interlacing real roots between −1 and +1. In this case h*(x) and g*(x) form a discrete positive pair. From [9], h*(x) and g*(x) are given by

h*(x) = Σ_{i=0}^{ν−1} αi T_{ν−i}(x) + αν/2
g*(x) = Σ_{i=0}^{ν−1} βi U_{ν−i−1}(x)   (11)

where n = 2ν and Tk, Uk are Chebyshev polynomials of the first and second kind respectively.
21
Let

h*(x) = γ0 x^ν + γ1 x^{ν−1} + ...
g*(x) = δ0 x^{ν−1} + δ1 x^{ν−2} + ...
Theorem 9 [11]  f(z) is Schur stable if and only if g*(x) and h*(x) − (γ0/δ0)(x − 1)g*(x) form a discrete positive pair, and h*(−1) < 0 for ν odd and h*(−1) > 0 for ν even.
Proof: Assume f(z) is Schur stable. h*(x) − (γ0/δ0)(x − 1)g*(x) takes the same values at the roots of g*(x) as h*(x), so the interlacing property is preserved. The condition h*(−1) < 0 for ν odd and h*(−1) > 0 for ν even guarantees that the reduction of the order is not due to a root of h*(x) between −1 and −∞. Therefore g*(x) and h*(x) − (γ0/δ0)(x − 1)g*(x) form a discrete positive pair. The if part can be proved similarly. □
This theorem can be implemented in table form.
6.2 Schur-Cohn-Jury Criterion

Theorem 10 [4], [11]  The polynomial (5) with |an/a0| < 1 is Schur stable if and only if the polynomial (1/z)[f(z) − (an/a0) z^n f(1/z)] is Schur stable.
This was proved in [4] using the following lemma:

Lemma 2  h + g is Schur stable if and only if γh + δg is Schur stable for γ, δ positive numbers.

Proof of Lemma 2: The interlacing property does not change when h and g are multiplied by positive numbers. Also |(γα0 − δβ0)/(γα0 + δβ0)| < 1 if |an/a0| < 1, and the reverse is true. □
Proof of Theorem 10: f(z) − (an/a0) z^n f(1/z) = h(z) + g(z) − (an/a0)[h(z) − g(z)] = (1 − an/a0)h(z) + (1 + an/a0)g(z). The only if part of Theorem 10 follows directly from |an/a0| < 1 and Lemma 2. The if part can be proved similarly. □
7 Robust Stability in the Frequency Domain

Let f(s, a) be given by (2) with a ∈ A, where A is a closed set in the coefficient space, and let D be an open subset of the complex plane. Robust D-stability means that all roots of the polynomial family lie in D. Let Γ(s*) be the value set of the polynomial family at s = s* and ∂Γ(s*) its boundary. Using the principle of the argument and the continuity of the roots w.r.t. the coefficients of a polynomial, we get the following result for robust D-stability.

Theorem 11 [13], [14]  f(s, a) with a ∈ A is robust D-stable if and only if the following conditions are satisfied:
i) f(s, a) is D-stable for some a ∈ A

ii) 0 ∉ Γ(s*) for some s* ∈ ∂D

iii) 0 ∉ ∂Γ(s) for all s ∈ ∂D
Using Theorem 11 we can solve the following problems in a simple way:

i) an edge theorem for the robust stability of polytopes in the coefficient space, [13]

ii) stability of edges

iii) vertex properties of edges, [5], [15]

iv) Kharitonov theorem and its analog and counterpart for discrete systems, [16], [17], [12], [18]

v) robust stability of controlled interval plants, continuous and discrete, [14], [19], [20]
A survey of all the above problems is given in [21].
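To illustrate the zero exclusion idea behind Theorem 11 with a minimal sketch (the interval polynomial is an example chosen here, not from the paper): for s² + a1 s + a0 with independent interval coefficients, the value set at each s = jω is an axis-parallel rectangle, so exclusion of the origin can be tested directly:

```python
def value_set_contains_zero(w, a0_lo, a0_hi, a1_lo, a1_hi):
    """Value set of s^2 + a1*s + a0 at s = jw for independent
    intervals a0 in [a0_lo, a0_hi], a1 in [a1_lo, a1_hi]: the
    rectangle [a0_lo - w^2, a0_hi - w^2] + j*[a1_lo*w, a1_hi*w]."""
    re_lo, re_hi = a0_lo - w * w, a0_hi - w * w
    im_lo, im_hi = a1_lo * w, a1_hi * w
    return re_lo <= 0.0 <= re_hi and im_lo <= 0.0 <= im_hi

# a0 in [1, 2], a1 in [2, 3]: one member (s^2 + 2s + 1) is Hurwitz,
# and the origin stays outside the value set at sampled frequencies,
# suggesting robust Hurwitz stability of the whole family
excluded = all(not value_set_contains_zero(k / 100.0, 1.0, 2.0, 2.0, 3.0)
               for k in range(0, 1001))
```

A frequency sweep of this kind is only a numerical check of conditions ii) and iii) on a grid; the theorem itself requires exclusion on the whole boundary ∂D.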
8 Conclusions

In the above it was shown that the stability criteria for linear continuous and discrete systems, as well as the solution of the robust stability problem, can all be derived from the principle of the argument.
R e f e r e n c e s
[1] E.G. Phillips, Functions of a complex variable with applications, Oliver and Boyd, 1963.

[2] H. Chapellat, M. Mansour, and S. P. Bhattacharyya, Elementary proofs of some classical stability criteria, IEEE Transactions on Education, 33(3):232-239, 1990.

[3] H.W. Schüssler, A stability theorem for discrete systems, IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-24:87-89, 1976.

[4] M. Mansour, Simple proof of Schur-Cohn-Jury criterium using principle of argument, Internal Report 9008, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1990.

[5] M. Mansour and F.J. Kraus, Argument conditions for Hurwitz and Schur stable polynomials and the robust stability problem, Internal Report 9005, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1990.

[6] M. Mansour, Monotony of the argument of different functions for Routh and Hurwitz stability, Internal Report 9111, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1991.

[7] V. L. Kharitonov, Asymptotic stability of an equilibrium position of a family of systems of linear differential equations, Differentsial'nye Uravneniya, 14:2086-2088, 1978.
[8] B. D. O. Anderson, E. I. Jury, and M. Mansour, On robust Hurwitz polynomials, IEEE Transactions on Automatic Control, 32:909-913, 1987.
[9] M. Mansour and F.J. Kraus, On robust stability of Schur polynomials, Internal Report 8705, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1987.
[10] M. Mansour, A simple proof of the RouthHurwitz criterion, Internal Report 8804, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1988.
[11] M. Mansour, Six stability criteria & HermiteBieler theorem, Internal Report 9009, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1990.
[12] M. Mansour, F. J. Kraus, and B.D.O. Anderson, Strong Kharitonov theorem for discrete systems, in Robustness in Identification and Control, Ed. M. Milanese, R. Tempo, A. Vicino, Plenum Publishing Corporation, 1989. Also in Recent Advances in Robust Control, Ed. P. Dorato, R.K. Yedavalli, IEEE Press, 1990.

[13] S. Dasgupta, P. J. Parker, B. D. O. Anderson, F. J. Kraus, and M. Mansour, Frequency domain conditions for the robust stability of linear and nonlinear systems, IEEE Transactions on Circuits and Systems, 38(4):389-397, 1991.

[14] F. J. Kraus and W. Truöl, Robust stability of control systems with polytopical uncertainty: a Nyquist approach, International Journal of Control, 53(4):967-983, 1991.

[15] F. J. Kraus, M. Mansour, W. Truöl, and B.D.O. Anderson, Robust stability of control systems: extreme point results for the stability of edges, Internal Report 9106, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1991.

[16] R. J. Minnichelli, J. J. Anagnost, and C. A. Desoer, An elementary proof of Kharitonov's stability theorem with extensions, IEEE Transactions on Automatic Control, 34(9):995-998, 1989.

[17] F. J. Kraus and M. Mansour, Robuste Stabilität im Frequenzgang (in German), Internal Report 8706, Automatic Control Laboratory, Swiss Federal Institute of Technology, Zurich, 1987.

[18] F. J. Kraus and M. Mansour, On robust stability of discrete systems, Proceedings of the 29th IEEE Conference on Decision and Control, 2:421-422, 1990.
[19] F. J. Kraus and M. Mansour, Robust Schur stable control systems, Proceedings of the American Control Conference, Boston, 1991.
[20] F. J. Kraus and M. Mansour, Robust discrete control, Proceedings of the European Control Conference, Grenoble, 1991.
[21] M. Mansour, Robust stability of systems described by rational functions, to appear in Advances in Robust Systems Techniques and Applications, Leondes Ed., Academic Press, 1991.
Robust Control of Interval Systems*
L.H. Keel
Center of Excellence in Information Systems
Tennessee State University
Nashville, TN. U.S.A
J. Shaw and S.P. Bhattacharyya
Department of Electrical Engineering
Texas A&M University
College Station, TX. U.S.A.
ABSTRACT
In this paper we consider a feedback interconnection of two linear time invariant systems, one of which is fixed and the other uncertain, with the uncertainty being a box in the space of coefficients of the numerator and denominator polynomials. This kind of system arises in robust control analysis and design problems. We give several results for the Nyquist plots and stability margins of such families of systems based on the extremal segments of rational functions introduced by Chapellat and Bhattacharyya [1]. These CB segments play a fundamental characterizing role in many control design problems.
1. I N T R O D U C T I O N
The problem of designing control systems to account for parameter uncertainty has become very important over the last few years. The main reason for this is the lack of effective methods for dealing with systems containing uncertain or adjustable parameters. Significant progress on this problem occurred with the introduction of Kharitonov's Theorem [2] for interval polynomials, which provided a new way of dealing with parametric uncertainties. However, despite its mathematical elegance, its applicability to control systems was very limited because the theorem assumes that
*Supported in part by NASA grant NAGI863 and NSF Grant ECS8914357
all coefficients of the given polynomial family perturb independently. Under this assumption the theorem can only provide conservative results for control problems in which the uncertain parameters enter the characteristic polynomial in a correlated fashion.
A breakthrough was reported in the theorem of Chapellat and Bhattacharyya [1], which dealt with the case in which the characteristic polynomial is a linear combination of interval polynomials with polynomial coefficients. The CB Theorem provided Kharitonov-like results for such control problems by constructing certain line segments, the number of which is independent of the polynomial degrees, that completely and nonconservatively characterize the stability of interval control systems.
Using this novel theorem, it has been discovered that the worst case H∞ stability margin of an interval control system occurs on one of these CB segments. The details and a constructive algorithm to compute the H∞ stability margin are given in [3]. Similarly, the worst case parametric stability margin occurs on the CB segments [4].
The main result of this paper is that the Nyquist plot boundaries of all the transfer functions of interest in an interval control system occur on the CB segments. This key result has far-reaching implications in the development of design methods for interval control systems. Throughout this paper, proofs are omitted and may be found in [5]. A similar result has recently been independently discovered by Tesi and Vicino [6].
2. NOTATION AND PRELIMINARY RESULTS
Figure 1. Feedback System
We consider the feedback interconnection (see Figure 1) of the two systems with respective transfer functions

P(s) := N(s)/D(s),   Q(s) := Q1(s)/Q2(s)   (1)
where N(s), D(s) and Qi(s), i = 1, 2 are polynomials in the complex variable s. In general Q(s) is fixed, and P(s) will be assumed to lie in an uncertainty set described as follows.
Write

N(s) := np s^p + n_{p−1} s^{p−1} + ... + n1 s + n0
D(s) := dq s^q + d_{q−1} s^{q−1} + ... + d1 s + d0   (2)

and let the coefficients lie in prescribed intervals

ni ∈ [ni⁻, ni⁺], i ∈ {0, 1, ..., p} =: p̄
di ∈ [di⁻, di⁺], i ∈ {0, 1, ..., q} =: q̄.
Introduce the interval polynomial sets

𝒩(s) := {N(s) = np s^p + n_{p−1} s^{p−1} + ... + n1 s + n0 : ni ∈ [ni⁻, ni⁺], i ∈ p̄}
𝒟(s) := {D(s) = dq s^q + d_{q−1} s^{q−1} + ... + d1 s + d0 : di ∈ [di⁻, di⁺], i ∈ q̄}   (3)
and the corresponding set of interval transfer functions (or interval systems)

𝒫(s) := {N(s)/D(s) : (N(s), D(s)) ∈ 𝒩(s) × 𝒟(s)}.   (4)
The four Kharitonov vertex polynomials associated with 𝒩(s) are

K_N^1(s) := n0⁻ + n1⁻ s + n2⁺ s² + n3⁺ s³ + n4⁻ s⁴ + n5⁻ s⁵ + ...
K_N^2(s) := n0⁻ + n1⁺ s + n2⁺ s² + n3⁻ s³ + n4⁻ s⁴ + n5⁺ s⁵ + ...
K_N^3(s) := n0⁺ + n1⁻ s + n2⁻ s² + n3⁺ s³ + n4⁺ s⁴ + n5⁻ s⁵ + ...   (5)
K_N^4(s) := n0⁺ + n1⁺ s + n2⁻ s² + n3⁻ s³ + n4⁺ s⁴ + n5⁺ s⁵ + ...

and we write

𝒦_N(s) := {K_N^1(s), K_N^2(s), K_N^3(s), K_N^4(s)}.   (6)
Similarly, the four Kharitonov polynomials associated with 𝒟(s) are denoted K_D^i(s), i = 1, 2, 3, 4, and

𝒦_D(s) := {K_D^1(s), K_D^2(s), K_D^3(s), K_D^4(s)}.   (7)

The four Kharitonov segment polynomials associated with 𝒩(s) are defined as follows:
𝒮_N(s) := [λK_N^i(s) + (1 − λ)K_N^j(s) : λ ∈ [0, 1], (i, j) ∈ {(1, 2), (1, 3), (2, 4), (3, 4)}]   (8)
and the four Kharitonov segment polynomials associated with 𝒟(s) are

𝒮_D(s) := [μK_D^i(s) + (1 − μ)K_D^j(s) : μ ∈ [0, 1], (i, j) ∈ {(1, 2), (1, 3), (2, 4), (3, 4)}].   (9)

Following Chapellat and Bhattacharyya [1] we introduce the following subset of 𝒩(s) × 𝒟(s):

(𝒩(s) × 𝒟(s))_CB = {(N(s), D(s)) : N(s) ∈ 𝒦_N(s), D(s) ∈ 𝒮_D(s) or N(s) ∈ 𝒮_N(s), D(s) ∈ 𝒦_D(s)}.   (10)
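The vertex polynomials in (5) follow a repeating min/max pattern of period four. A sketch of their construction (coefficient lists indexed by ascending powers of s; the helper name is hypothetical):

```python
def kharitonov(lo, hi):
    """Four Kharitonov vertex polynomials for coefficient bounds
    lo[i] <= n_i <= hi[i], lists indexed by ascending powers of s."""
    patterns = [            # period-four min(0)/max(1) patterns from (5)
        (0, 0, 1, 1),       # K1: min, min, max, max, min, min, ...
        (0, 1, 1, 0),       # K2
        (1, 0, 0, 1),       # K3
        (1, 1, 0, 0),       # K4
    ]
    return [[hi[i] if pat[i % 4] else lo[i] for i in range(len(lo))]
            for pat in patterns]

# example bounds n0 in [1, 2], n1 in [2, 3], n2 in [3, 4]
k1, k2, k3, k4 = kharitonov([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```

Each returned list is one vertex polynomial; the thirty-two CB segments of (10) are then formed by pairing these vertices with the segment polynomials of the other interval family.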
Referring to the control system in Figure 1, we calculate the following transfer functions of interest in analysis and design problems:

y(s)/u(s) = P(s),   u(s)/e(s) = Q(s)   (11)

T^o(s) := y(s)/e(s) = P(s)Q(s)   (12)

T^y(s) := y(s)/r(s) = P(s)Q(s)/(1 + P(s)Q(s))   (13)

T^e(s) := e(s)/r(s) = 1/(1 + P(s)Q(s))   (14)

T^u(s) := u(s)/r(s) = Q(s)/(1 + P(s)Q(s)).   (15)
The characteristic polynomial of the system is denoted as

π(s) = Q2(s)D(s) + Q1(s)N(s).   (16)
As P(s) ranges over the uncertainty set 𝒫(s) (equivalently, (N(s), D(s)) ranges over 𝒩(s) × 𝒟(s)), the transfer functions T^o(s), T^y(s), T^u(s), T^e(s) range over corresponding uncertainty sets 𝒯^o(s), 𝒯^y(s), 𝒯^u(s), and 𝒯^e(s), respectively. In other words,
𝒯^o(s) := { P(s)Q(s) : P(s) ∈ 𝒫(s) }   (17)

𝒯^y(s) := { P(s)Q(s)/(1 + P(s)Q(s)) : P(s) ∈ 𝒫(s) }   (18)

𝒯^u(s) := { Q(s)/(1 + P(s)Q(s)) : P(s) ∈ 𝒫(s) }   (19)

𝒯^e(s) := { 1/(1 + P(s)Q(s)) : P(s) ∈ 𝒫(s) }.   (20)
Similarly the characteristic polynomial π(s) ranges over the corresponding uncertainty set denoted by

Π(s) = {Q2(s)D(s) + Q1(s)N(s) : (N(s), D(s)) ∈ 𝒩(s) × 𝒟(s)}.   (21)
We now introduce the CB subset of the family of interval systems 𝒫(s):

𝒫_CB(s) := { N(s)/D(s) : (N(s), D(s)) ∈ (𝒩(s) × 𝒟(s))_CB }.   (22)

These subsets will play a central role in all the results to be developed later. We note that each element of 𝒫_CB(s) is a one-parameter family of transfer functions and there are at most 32 such distinct elements.
The CB subsets of the transfer function sets (17)-(20) and the polynomial set (21) are also introduced:

𝒯^o_CB(s) := { P(s)Q(s) : P(s) ∈ 𝒫_CB(s) }   (23)

𝒯^y_CB(s) := { P(s)Q(s)/(1 + P(s)Q(s)) : P(s) ∈ 𝒫_CB(s) }   (24)

𝒯^u_CB(s) := { Q(s)/(1 + P(s)Q(s)) : P(s) ∈ 𝒫_CB(s) }   (25)

𝒯^e_CB(s) := { 1/(1 + P(s)Q(s)) : P(s) ∈ 𝒫_CB(s) }   (26)
and

Π_CB(s) := {Q2(s)D(s) + Q1(s)N(s) : (N(s), D(s)) ∈ (𝒩(s) × 𝒟(s))_CB}.   (27)
In the paper we shall deal with the complex plane image of each of the above sets evaluated at s = jω. We denote each of these two-dimensional sets in the complex plane by replacing s in the corresponding argument by ω. Thus, for example,

Π(ω) := {π(s) : s = jω}   (28)

and

𝒯^y_CB(ω) := {𝒯^y_CB(s) : s = jω}.   (29)
The Nyquist plot of a set of functions (or polynomials) 𝒯(s) is denoted by 𝒯:

𝒯 := ∪_{0≤ω<∞} 𝒯(ω).   (30)

The boundary of a set 𝒮 is denoted ∂𝒮.
Stability and Stability Margins
The control system of Figure 1 is stable for fixed Q(s) and P(s) if the characteristic polynomial

π(s) = Q2(s)D(s) + Q1(s)N(s)   (31)

is Hurwitz, i.e. has all its n = q + degree[Q2(s)] roots in the open left half of the complex plane. The system is robustly stable if and only if each polynomial in Π(s) is of degree n (the degree of D(s) remains invariant and equal to q as D(s) ranges over 𝒟(s)) and every polynomial in Π(s) is Hurwitz.
The following important result was provided in Chapellat and Bhattacharyya [1].

Theorem 1. (CB Theorem) The control system of Figure 1 is stable for all P(s) ∈ P(s) if and only if it is stable for all P(s) ∈ P_CB(s).
The above theorem gives a constructive solution to the problem of checking robust stability by reducing it to the problem of checking a set of (at most) 32 root locus problems.
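Each such check asks whether a one-parameter family of polynomials stays Hurwitz along a CB segment. A gridded numerical surrogate for one exact root-locus check, with made-up toy data rather than an example from the paper, might look like this:

```python
import numpy as np

def is_hurwitz(p):
    """p: polynomial coefficients, highest degree first."""
    return bool(np.all(np.roots(p).real < 0))

def segment_stable(Q1, Q2, N, D0, D1, grid=200):
    """Grid check of pi_lam(s) = Q2(s)*D_lam(s) + Q1(s)*N(s), where
    D_lam = (1-lam)*D0 + lam*D1 sweeps one CB segment, lam in [0, 1].
    The lambda grid approximates the exact root-locus test."""
    for lam in np.linspace(0.0, 1.0, grid):
        D = (1.0 - lam) * np.asarray(D0) + lam * np.asarray(D1)
        pi = np.polyadd(np.polymul(Q2, D), np.polymul(Q1, N))
        if not is_hurwitz(pi):
            return False
    return True

# Toy data: Q(s) = Q1/Q2 = 1/(s+1), N(s) = 1, D(s) between (s+2) and (s+3)
print(segment_stable([1.0], [1.0, 1.0], [1.0], [1.0, 2.0], [1.0, 3.0]))  # True
```

Repeating this over all (at most 32) CB segments yields the robust stability verdict of Theorem 1, up to the grid resolution.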
If a fixed system is stable we can determine its gain margin γ and phase margin φ as follows:
γ+(P(s), Q(s)) := max{ K̄ : Q2(s)D(s) + K Q1(s)N(s) is Hurwitz for K ∈ [1, K̄] } (32)

γ−(P(s), Q(s)) := max{ K̄ : Q2(s)D(s) + K Q1(s)N(s) is Hurwitz for K ∈ [1/K̄, 1] } (33)
Similarly the phase margins are defined as follows:

φ+(P(s), Q(s)) := max{ φ̄ : Q2(s)D(s) + e^{jφ} Q1(s)N(s) is Hurwitz for φ ∈ [0, φ̄] } (34)

φ−(P(s), Q(s)) := max{ φ̄ : Q2(s)D(s) + e^{−jφ} Q1(s)N(s) is Hurwitz for φ ∈ [0, φ̄] }. (35)
Note that γ+, γ−, φ+, and φ− are uniquely determined when Q(s) and P(s) are fixed. The unstructured additive stability margin ν_a (the H∞ stability margin) can also be uniquely determined for a fixed stable closed loop system, and is given by

ν_a(P(s), Q(s)) := 1 / ‖ Q(s)(1 + P(s)Q(s))^{-1} ‖_∞. (36)
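For a fixed pair (P, Q) these margins are directly computable; e.g. γ+ in (32) by sweeping the gain until a root of the characteristic polynomial crosses the imaginary axis, and ν_a in (36) by gridding the frequency axis in the same spirit. A coarse sketch with a classical textbook example (not an example from the paper):

```python
import numpy as np

def upper_gain_margin(num, den, k_max=20.0, steps=4000):
    """First destabilizing gain for den(s) + K*num(s), searched over a grid
    K in [1, k_max] (coefficients highest degree first). This approximates
    gamma+ in (32); a bisection step would refine the answer."""
    for K in np.linspace(1.0, k_max, steps):
        if not np.all(np.roots(np.polyadd(den, K * np.asarray(num))).real < 0):
            return float(K)
    return np.inf

# P(s)Q(s) = 1/(s+1)^3: the classical upper gain margin is 8
den = np.polymul([1.0, 1.0], np.polymul([1.0, 1.0], [1.0, 1.0]))
g = upper_gain_margin([1.0], den)
print(round(g, 2))  # ≈ 8.0
```

Theorem 3 below says that for interval plants such sweeps need only be run over the finitely many CB segments.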
In the following section we state some fundamental results on the above sets of transfer functions and the calculation of extremal stability margins over the uncertainty set N(s) x D(s).
3. MAIN RESULTS
We shall give the main results here without proof. The proofs are given in Keel, Shaw and Bhattacharyya [5]. Similar results have been recently reported independently by Tesi and Vicino [6].
Theorem 2. For every ω > 0,

∂P(ω) ⊂ P_CB(ω) (37)

∂T^o(ω) ⊂ T^o_CB(ω) (38)

∂T^y(ω) ⊂ T^y_CB(ω) (39)

∂T^u(ω) ⊂ T^u_CB(ω) (40)

∂T^e(ω) ⊂ T^e_CB(ω) (41)
This result shows that at every ω > 0 the image set of each transfer function set in (17) - (20) is bounded by the corresponding image set of the CB segments.
The next result deals with the Nyquist plots of each of the transfer function sets in (17) - (20).
Corollary 1. The Nyquist plots of each of the transfer function sets T^o(s), T^y(s), T^u(s), and T^e(s) are bounded by their corresponding CB subsets:

∂T^o ⊂ T^o_CB (42)

∂T^y ⊂ T^y_CB (43)

∂T^u ⊂ T^u_CB (44)

∂T^e ⊂ T^e_CB (45)
This result has many important implications in control system design and will be explored in forthcoming papers. One important consequence is given below:
Theorem 3. Suppose that the closed loop system is robustly stable, i.e. stable for all P(s) ∈ P(s). Then

max_{P(s) ∈ P(s)} γ± = max_{P(s) ∈ P_CB(s)} γ± (46)

max_{P(s) ∈ P(s)} φ± = max_{P(s) ∈ P_CB(s)} φ± (47)

max_{P(s) ∈ P(s)} ν_a = max_{P(s) ∈ P_CB(s)} ν_a (48)

and

min_{P(s) ∈ P(s)} γ± = min_{P(s) ∈ P_CB(s)} γ± (49)

min_{P(s) ∈ P(s)} φ± = min_{P(s) ∈ P_CB(s)} φ± (50)

min_{P(s) ∈ P(s)} ν_a = min_{P(s) ∈ P_CB(s)} ν_a. (51)
In control systems the maximum margins are useful for synthesis and design (with P(s) being a compensator) and the minimum margins are useful in calculating worst case stability margins (P(s) is the plant). In each case the crucial point that makes the result constructive and useful is the fact that P_CB(s) consists of a finite and "small" number of line segments.
4. CONCLUDING REMARKS
We have shown that the boundaries of the various transfer function sets, and the various extremal gain and phase margins, occur on the CB segments first introduced in [1]. Using this idea, constructions of Nyquist and Bode bands can be given. These important tools allow us to reexamine all of classical control theory with the added ingredient of robustness in both stability and performance using the framework of interval systems. More interestingly, these new results provide a way of developing design and synthesis techniques in the field of "parametric" robust control. These techniques also connect with H∞ control theory and promise many new results in the analysis and design of control systems containing uncertain or adjustable real parameters.
REFERENCES
[1] H. Chapellat and S. P. Bhattacharyya, "A generalization of Kharitonov's theorem: robust stability of interval plants," IEEE Transactions on Automatic Control, vol. AC-34, pp. 306-311, March 1989.

[2] V. L. Kharitonov, "Asymptotic stability of an equilibrium position of a family of systems of linear differential equations," Differentsial'nye Uravneniya, vol. 14, pp. 2086-2088, 1978.
[3] H. Chapellat, M. Dahleh, and S. P. Bhattacharyya, "Robust stability under structured and unstructured perturbations," IEEE Transactions on Automatic Control, vol. AC-35, pp. 1100-1108, October 1990.

[4] H. Chapellat, M. Dahleh, and S. Bhattacharyya, "Extremal manifolds in robust stability," TCSP Report, Texas A&M University, July 1990.

[5] L. Keel, J. Shaw and S. P. Bhattacharyya, "Frequency domain design of interval control systems," Tech. Rep., Tennessee State University, July 1991. Also in TCSP Tech. Rep., Texas A&M University, July 1991.

[6] A. Tesi and A. Vicino, "Kharitonov segments suffice for frequency response analysis of plant-controller families," to appear in Control of Uncertain Dynamic Systems, CRC Press, September 1991.
Rejection of Persistent, Bounded Disturbances for Sampled-Data Systems*
Bassam Bamieh†, Munther A. Dahleh‡, and J. Boyd Pearson†
June 4, 1991
Abstract
In this paper, a complete solution for the l1 sampled-data problem is furnished for arbitrary plants. The l1 sampled-data problem is described as follows: given a continuous-time plant, with continuous-time performance objectives, design a digital controller that delivers this performance. This problem differs from standard discrete-time methods in that it takes into consideration the intersampling behaviour of the closed loop system. The resulting closed loop system dynamics consist of both continuous-time and discrete-time dynamics, and thus such systems are known as "hybrid" systems. It is shown that given any degree of accuracy, there exists a standard discrete-time l1 problem, which can be determined a priori, such that for any controller that achieves a level of performance for the discrete-time problem, the same controller achieves the same performance within the prescribed level of accuracy if implemented as a sampled-data controller.
*The first and last authors' research is supported by NSF ECS-8914467 and AFOSR-91-0036. The second author is supported by Wright-Patterson A.F.B. F33615-90-C-3608, C.S. Draper Laboratory DLH4186U and by the ARO DAAL-03-86-K-0171.
†Dept. of Electrical and Computer Engineering, Rice University, Houston, TX 77030.
‡Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA.
1 Introduction
This paper is concerned with designing digital controllers for continuous-time systems to optimally achieve certain performance specifications in the presence of uncertainty. Contrary to discrete-time designs, such controllers are designed taking into consideration the intersample behavior of the system. Such hybrid systems are generally known as sampled-data systems, and have recently received renewed interest from the control community.
The difficulty in considering the continuous-time behavior of sampled-data systems is that it is time varying, even when the plant and the controller are continuous-time and discrete-time time-invariant, respectively. We consider in this paper the standard problem with sampled-data controllers (or the sampled-data problem, for short) shown in Figure 1. The controller is constrained to be a sampled-data controller, that is, of the form H_τ C S_τ. The generalized plant G is continuous-time time-invariant, C is discrete-time time-invariant, H_τ is a zero order hold (with period τ), and S_τ is an ideal sampler (with period τ). H_τ and S_τ are assumed synchronized. Let F(G, H_τ C S_τ) denote the mapping between the exogenous input and the regulated output. F(G, H_τ C S_τ) is in general time varying; in fact it is τ-periodic, where τ is the period of the sample and hold devices.
Sampled-data systems have been studied by many researchers in the past in the context of LQG controllers (e.g. [12]). Recently, [4] studied this problem in the context of H∞ control, and was able to provide a solution in the case where the regulated output is in discrete time and the exogenous input is in continuous time. The exact problem was solved in [2], [3], and independently by [9] and [13]. The L∞-induced norm problem (the one we are concerned with in this paper) was considered in [7] for the case of stable plants.
In this paper we will use the framework developed in [2], [3] to study the l1 sampled-data problem. Precisely, the controller is designed to minimize the induced norm of the periodic system over the space of bounded inputs (i.e. L∞). This minimization results from posing time domain specifications and design constraints, which is quite natural for control system design. The approach of this paper is to solve the sampled-data problem by solving an (almost) equivalent discrete-time l1 problem. While this was the approach followed in [7], the main contribution of this paper is that it provides bounds that can be computed a priori to determine the equivalent discrete-time problem, given any desired degree of accuracy, and thus provides a solution for the synthesis problem. The solution in this paper is presented in the context of the lifting framework of [2], [3], as an approximation procedure for certain infinite dimensional problems. This approach has the advantage of handling both the stable and the unstable case in the same framework, and results in techniques that are more transparent than those in [7].
Figure 1: Hybrid discrete/continuous time system
Figure 2: W_τ : L∞[0, ∞) → l∞_{L∞[0,τ)}
2 The Lifting Technique
In this section we briefly summarize the lifting technique for continuous-time periodic systems developed in [2], [3], and apply it to the sampled-data problem.
For continuous time signals, we consider the usual L∞[0, ∞) space of essentially bounded functions [6], and its extended version L∞_e[0, ∞). We will also need to consider discrete time signals that take values in a function space; for this, we define l_X to be the space of all X-valued sequences, where X is some Banach space. We define l∞_X as the subspace of l_X of bounded norm sequences, i.e. those {f_i} ∈ l_X for which the norm ‖{f_i}‖_{l∞_X} := sup_i ‖f_i‖_X < ∞. Given any f ∈ L∞_e[0, ∞), we define its lifting f̂ ∈ l_{L∞[0,τ)} as follows: f̂ is an L∞[0,τ)-valued sequence, which we denote {f̂_i}, and for each i,

f̂_i(t) := f(t + iτ), 0 ≤ t < τ.
The lifting can be visualized as taking a continuous time signal and breaking it up into a sequence of 'pieces', each corresponding to the function over an interval of length τ (see Figure 2). Let us denote this lifting by W_τ : L∞_e[0, ∞) → l_{L∞[0,τ)}. W_τ is a linear isomorphism; furthermore, if restricted to L∞[0, ∞), then W_τ : L∞[0, ∞) → l∞_{L∞[0,τ)} is an isometry, i.e. it preserves norms.
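A minimal numerical sketch of W_τ acting on a finely gridded signal (illustrative only; the finite grid stands in for L∞[0,τ), and the signal is arbitrary):

```python
import numpy as np

def lift(signal, samples_per_period):
    """The lifting W_tau on a finely gridded signal: row i of the result is
    the piece f_i(t) = f(t + i*tau), 0 <= t < tau."""
    n = len(signal) // samples_per_period
    return signal[: n * samples_per_period].reshape(n, samples_per_period)

t = np.linspace(0.0, 10.0, 1000, endpoint=False)   # tau = 1.0, 100 points per piece
f = np.exp(-0.3 * t) * np.cos(2.0 * t)
F = lift(f, 100)

# W_tau is an isometry on the bounded signals:
# sup_i sup_t |f_i(t)| equals sup_t |f(t)|
assert np.isclose(np.abs(F).max(), np.abs(f).max())
```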
Using the lifting of signals, one can define a lifting on systems. Let G be a linear continuous time system on L∞_e[0, ∞); then its lifting Ĝ is the discrete time system Ĝ :=
Figure 3: Equivalent Problem
W_τ G W_τ^{-1}; this is illustrated in the commutative diagram below:

    l_{L∞[0,τ)} --Ĝ--> l_{L∞[0,τ)}
        ^                   ^
       W_τ                 W_τ
        |                   |
    L∞_e[0, ∞) --G--> L∞_e[0, ∞)
Thus Ĝ is a system that operates on Banach space (L∞[0,τ))-valued signals; we will call such systems infinite dimensional. Note that since W_τ is an isometry, if G is stable, i.e. a bounded linear map on L∞, then Ĝ is also stable, and furthermore, their respective induced norms are equal: ‖Ĝ‖ = ‖G‖. The correspondence between a system and its lifting also preserves algebraic system properties such as addition, cascade decomposition and feedback (see [2] for the details).
To apply the lifting to the sampled-data problem, consider again the standard problem of Figure 1, and denote the closed loop operator by F(G, H_τ C S_τ). Since the lifting is an isometry, we have that ‖F(G, H_τ C S_τ)‖ = ‖W_τ F(G, H_τ C S_τ) W_τ^{-1}‖; this is shown in Figure 3(a).
We now present (from [2]) a state space realization for the new generalized plant Ĝ. The original plant G has the realization:

G = [ A    B1   B2
      C1   D11  D12
      C2   0    0  ].
It is assumed that the sampler is preceded by a presampling filter. It can be shown ([2]) that a realization for the generalized plant Ĝ (Figure 3) is given by
Figure 4: Decomposition of Ĝ
Ĝ = [ e^{Aτ}       e^{A(τ−s)}B1                     Φ(τ)B2
      C1 e^{At}    C1 e^{A(t−s)}B1 + D11 δ(t−s)     C1 Φ(t)B2 + D12
      C2           0                                0             ]

where Φ(t) := ∫_0^t e^{As} ds. The system Ĝ has the following input and output spaces:

Ĝ11 : l_{L∞[0,τ)} → l_{L∞[0,τ)},   Ĝ12 : l_R → l_{L∞[0,τ)},
Ĝ21 : l_{L∞[0,τ)} → l_R,           Ĝ22 : l_R → l_R.
Using this lifting, Ĝ can be decomposed into a fixed infinite-dimensional part and a finite-dimensional discrete-time generalized plant G_d; only G_d enters the feedback loop with the controller. This decomposition is illustrated in Figure 4. The closed loop mapping F(Ĝ, C) is correspondingly decomposed as

F(Ĝ, C) = D̂11 + [ Ĉ1  D̂12 ] F(G_d, C) B̂1. (1)
We will use the notation Ĉ := [ Ĉ1  D̂12 ], and call Ĉ the output operator and B̂1 the input operator. With this decomposition, G_d is finite dimensional, and Ĉ, B̂1 are finite rank operators

Ĉ : R^{n+1} → L∞[0,τ),   B̂1 : L∞[0,τ) → R^n.
It is evident from this decomposition that only the "discrete" part of the plant depends on the controller, while the infinite dimensional hybrid part depends only on the system's dynamics.
Figure 5: The system Ĝ_n

3 Solution Procedure
Using the lifting we are able to convert the problem of finding a controller to minimize the L∞-induced norm of the hybrid system (Figure 1) into the following standard problem with an infinite dimensional generalized plant Ĝ:
μ := inf_{C stabilizing} ‖F(G, H_τ C S_τ)‖ =: inf_{C stabilizing} ‖F(Ĝ, C)‖. (2)
The above infinite dimensional problem is solved by the following approximation procedure, through solving a standard MIMO l1 problem [5, 10]. Let H_n and S_n be the following operators defined between L∞[0,τ) and l∞(n) (l∞(n) is R^n with the maximum norm):

H_n : l∞(n) → L∞[0,τ),  (H_n u)(t) = u(⌊nt/τ⌋),  {u(i)} ∈ l∞(n),
S_n : L∞[0,τ) → l∞(n),  (S_n f)(i) = f(iτ/n),  i = 0, 1, ..., n−1.
One can in fact show that these operators are well defined. Now, to approximate the infinite dimensional problem, we use the approximate closed loop system S_n F(Ĝ, C) H_n (see Figure 5), and for each n we define

ν_n := inf_{C stabilizing} ‖S_n F(Ĝ, C) H_n‖. (3)
This new problem involves the induced norm over l∞(n), i.e. it is a standard MIMO l1 problem. Let us denote the generalized plant associated with S_n F(Ĝ, C) H_n by Ĝ_n, such that

S_n F(Ĝ, C) H_n = F(Ĝ_n, C),
where Ĝ_n and a realization for it are given by

Ĝ_n := [ S_n  0 ] Ĝ [ H_n  0 ]  =  [ S_n Ĝ11 H_n   S_n Ĝ12
       [ 0    I ]   [ 0    I ]       Ĝ21 H_n       Ĝ22     ].
The solution is now described below.
Figure 6: The operators S_n and H_n

4 Design Bounds
Let us begin with analysis. Note that since F(Ĝ, C) is a periodically time varying system, its L∞-induced norm is not immediately computable. An alternative method of computing ‖F(Ĝ, C)‖ comes from the limit

‖F(Ĝ, C)‖ = lim_{n→∞} ‖S_n F(Ĝ, C) H_n‖ =: lim_{n→∞} ‖F(Ĝ_n, C)‖, (4)

for a fixed C. Equation (4) by itself, however, is far from sufficient to show the convergence of the synthesis procedure, since given only (4), the rate of convergence may depend on the choice of C. Our objective is to obtain explicit bounds on ‖F(Ĝ, C)‖ of the following form.
Main Inequality: There are constants K0 and K1 which depend only on G, such that

‖F(Ĝ_n, C)‖ ≤ ‖F(Ĝ, C)‖ ≤ K1/n + (1 + K0/n) ‖F(Ĝ_n, C)‖. (5)
The significance of the bound (5) is that it is exactly what is needed for synthesis. When one performs an l1 design on Ĝ_n, the result is a controller that keeps ‖F(Ĝ_n, C)‖ small, but the objective is to keep the L∞-induced norm of the hybrid system (or equivalently ‖F(Ĝ, C)‖) small, and the inequality (5) guarantees this. The proof of this bound utilizes heavily the decomposition discussed earlier, and the fact that the infinite dimensional operators Ĉ and B̂1 are of finite rank and depend only on the original plant. The details of these proofs can be found in [1].
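The flavor of the 1/n terms in (4)-(5) can be seen in a toy computation. For a stable LTI system the L∞-induced norm equals the L1 norm of the impulse response, and a resolution-n discretization of that integral converges with an O(1/n) error. This is only an analogy to the construction above, using an arbitrary first-order impulse response rather than a plant from the paper:

```python
import numpy as np

def l1_norm_approx(T, n):
    """Left-endpoint Riemann sum for ||h||_1 with h(t) = e^{-t} on [0, T]."""
    t = np.linspace(0.0, T, n, endpoint=False)
    return float(np.sum(np.exp(-t)) * (T / n))

exact = 1.0  # integral of e^{-t} over [0, infinity)
errs = [abs(l1_norm_approx(20.0, n) - exact) for n in (100, 1000, 10000)]
print(errs)  # shrinks roughly like 1/n, echoing the 1/n terms in (5)
```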
References
[1] B. Bamieh, M.A. Dahleh and J.B. Pearson, 'Minimization of the L∞-induced norm for sampled-data systems,' Rice University Technical Report, Houston, TX, June 1991.

[2] B. Bamieh and J.B. Pearson, 'A General Framework for Linear Periodic Systems with Application to H∞ Sampled-Data Control', to appear in IEEE Trans. on Aut. Control.

[3] B. Bamieh, J.B. Pearson, B.A. Francis and A. Tannenbaum, 'A Lifting Technique for Linear Periodic Systems with Applications to Sampled-Data Control', to appear in Systems and Control Letters.

[4] T. Chen and B. Francis, 'On the L2-induced norm of a sampled-data system', Systems and Control Letters, 1990, v.15, 211-219.

[5] M.A. Dahleh and J.B. Pearson, 'l1-optimal feedback control for MIMO discrete-time systems', IEEE Trans. Aut. Control, AC-32, 1987.

[6] C.A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties, Academic Press, New York, 1975.

[7] G. Dullerud and B. Francis, 'L1 Performance in Sampled-Data Systems', preprint, submitted to IEEE Trans. on Aut. Control.

[8] B.A. Francis and T.T. Georgiou, 'Stability theory for linear time-invariant plants with periodic digital controllers', IEEE Trans. Aut. Control, AC-33, pp. 820-832, 1988.

[9] P.T. Kabamba and S. Hara, 'Worst case analysis and design of sampled data control systems', preprint.

[10] J.S. McDonald and J.B. Pearson, 'l1-Optimal Control of Multivariable Systems with Output Norm Constraints', Automatica, vol. 27, no. 3, 1991.

[11] N. Sivashankar and Pramod P. Khargonekar, 'L∞-Induced Norm of Sampled-Data Systems', Proc. of the CDC, 1991.

[12] P.M. Thompson, G. Stein and M. Athans, 'Conic sectors for sampled-data feedback systems,' Systems and Control Letters, Vol 3, pp. 77-82, 1983.

[13] H.T. Toivonen, 'Sampled-data control of continuous-time systems with an H∞ optimality criterion', Chemical Engineering, Åbo Akademi, Finland, Report 90-1, January 1990.
Rational Approximation of the L1-optimal Controller

Yoshito Ohta†, Hajime Maeda‡ and Shinzo Kodama‡

†Department of Mechanical Engineering for Computer-Controlled Machinery, Osaka University, 2-1 Yamada-Oka, Suita, Osaka 565, Japan
‡Department of Electronic Engineering, Osaka University, 2-1 Yamada-Oka, Suita, Osaka 565, Japan
Abstract
This paper studies the L1-optimal control problem for SISO systems with rational controllers. It is shown that the infimal achievable norm with rational controllers is as small as that with irrational controllers. Also, a way to construct a rational suboptimal controller is studied.
1. Introduction

The problem of L1-optimal control was first formulated by Vidyasagar [7] to solve the disturbance rejection problem with persistent input. A complete solution for general problems was presented by Dahleh and Pearson for discrete-time 1-block MIMO systems [2] and for continuous-time SISO systems [3]. In [3] they also pointed out the usefulness of L1-optimal control in the following areas: (i) disturbance rejection of bounded input, (ii) tracking of bounded input, and (iii) robustness.
For continuous-time systems, it turns out that an L1-optimal solution has a certain disadvantage: delay terms appear both in the numerator and denominator of an optimal controller [3]. This raises the question of what limitation we impose if controllers are confined to be rational. One obvious way to look at this is to approximate the irrational controller by a rational one. However, as will be shown in a later section, an irrational transfer function cannot in general be approximated by the set of stable rational functions in the topology in which the L1-control problems are formulated. This means the question is not trivial.
In this paper, it is shown that rational controllers are as effective as irrational controllers in continuous-time L1-control problems. Specifically, it is established that the infimal achievable norm with rational controllers is as small as that with irrational controllers. In the course of proving the above, a way to construct a sequence of rational controllers that achieves the optimal performance index is presented.
2. Mathematical preliminaries

In this section, some underlying linear spaces are reviewed to define the L1-control problems. The following notations are used throughout the paper. R+ is the set of nonnegative real numbers.

L1(R+)     the set of Lebesgue integrable functions on R+.
L∞(R+)     the set of Lebesgue measurable and bounded functions on R+.
C0(R+)     the set of continuous functions on R+ that converge to 0.
C^(n)(R+)  the set of n-times continuously differentiable functions on R+.
BV(R+)     the set of functions of bounded variation on R+.
A          the Banach algebra under convolution whose elements have the form
           F = F_a + Σ_{i=0}^∞ f_i δ(t − t_i),
           where F_a ∈ L1(R+) and (f_i) is absolutely summable, with norm
           ‖F‖_A = ‖F_a‖_1 + Σ_i |f_i|.
Â          the class of the Laplace transformations of A, with ‖F̂‖_Â = ‖F‖_A.

For the algebra A, see [4] and [1]. A is a subalgebra of BV(R+). Indeed, for F ∈ A, define
dF̃ = F dt. (1)
Then F̃ ∈ BV(R+), and the correspondence gives an isometric injection that preserves convolution. The importance of the algebra A is that it is the set of stable linear time-invariant operators from L∞(R+) to L∞(R+). The operation of F ∈ A is defined by the convolution
y(t) = F∗x(t) = ∫_0^t F(t − τ) x(τ) dτ. (2)
The induced norm as an operator is

‖F‖_ind = sup { ‖y‖_∞ : y = F∗x, x ∈ L∞(R+), ‖x‖_∞ ≤ 1 } (3)
        = ‖F‖_A.
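Equality in (3) is achieved by a bounded input aligned with the sign of the impulse response. A discretized sketch (the impulse response is arbitrary, chosen for illustration; it has no impulsive part, so its A-norm is just its L1 norm):

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 10.0, dt)
F = np.exp(-t) * np.sin(3.0 * t)        # an (arbitrary) L1 impulse response
a_norm = float(np.sum(np.abs(F)) * dt)  # ≈ ||F||_A (no impulsive part here)

# Worst-case bounded input aligned with F: x(tau) = sign(F(t_end - tau)),
# so that y(t_end) = ∫ F(t_end - tau) x(tau) dtau = ∫ |F|
x = np.sign(F[::-1])
y_end = float(np.sum(F * x[::-1]) * dt)
assert abs(y_end - a_norm) < 1e-12
```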
The set of stable rational functions R̂ is a subset of Â that is neither closed nor dense in Â. To see that it is not dense, an example shows that

inf { ‖exp(−s) − P̂‖_Â : P̂ ∈ R̂ } = 1 > 0. (4)

This implies that irrational transfer functions cannot in general be approximated by rational transfer functions. Systems are assumed to admit coprime factorizations in Â (or in R̂, if finite-dimensional).
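A short way to see (4), using the decomposition of the A-norm into atomic and L1 parts (a sketch of the standard argument, not necessarily the computation intended in the original):

```latex
% For rational stable \hat P, the impulse response is P(t) = p_0\,\delta(t) + P_a(t)
% with P_a \in L^1(\mathbb{R}_+) and no delayed impulses, so the atom
% \delta(t-1) of e^{-s} cannot be matched:
\| e^{-s} - \hat P \|_{\mathcal{A}} \;=\; 1 \;+\; |p_0| \;+\; \|P_a\|_{L^1} \;\ge\; 1,
% with equality attained by \hat P = 0; hence the infimum in (4) equals 1.
```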
3. L1-rational approximation problems

Fig. 1  The standard problem
Consider the standard control problem shown in Fig. 1, where Ĝ is the generalized plant and K̂ is the controller. The signal w is an exogenous input such as disturbances and command inputs; z is the output to be controlled; u is the control signal; and y is the measured output. The signals are scalar valued. We assume that the plant Ĝ is rational. The problem is to stabilize the feedback loop, and to make the norm of the transfer function Φ̂_zw from w to z as small as possible. The norm should reflect the goal that the maximum magnitude of the output z be small. An appropriate choice is the norm of Â, and this leads to the L1-control problem

inf { ‖Φ̂_zw‖_Â : K̂ stabilizes the feedback loop }. (5)
With some mild assumptions, the parametrization of stabilizing controllers for SISO systems brings the transfer function Φ̂_zw into the affine form

Φ̂_zw = T̂1 − T̂2 Q̂, (6)

where T̂1, T̂2 ∈ R̂ are fixed transfer functions, and Q̂ ∈ Â (if irrational controllers are admissible) or Q̂ ∈ R̂ (if controllers are confined to be rational) is a free parameter. Then the L1-control problem (5) is transformed into the following model-matching problems:

γ_MM := inf { ‖T̂1 − T̂2 Q̂‖_Â : Q̂ ∈ Â }, (7)

γ_RMM := inf { ‖T̂1 − T̂2 Q̂‖_Â : Q̂ ∈ R̂ ⊂ Â }. (8)
Obviously the optimal value of the rational model-matching problem (8) is not less than that of the irrational one (7), i.e., γ_MM ≤ γ_RMM. Because of (4), it is not clear whether equality holds. If it does, then it turns out that rational controllers are as effective as irrational controllers in the L1-control problem. Thus we are interested in the following problems: (i)
γ_MM = γ_RMM? (ii) How can we construct a minimizing sequence for (8)? The focus of this paper is to show that (i) is affirmative. In the course of proving that, problem (ii) will be addressed.
4. Representation of the dual problem

It was shown in [3] that the L1-model matching problem (7) achieves the infimum, and that an optimal solution can be constructed via the duality theorem [5]. The dual problem turned out to be a finite-dimensional linear programming problem with an infinite number of constraints. The purpose of this section is to present the dual problem in a form suited for the subsequent discussion. Some part of the following argument is in line with [8].
First, another model matching problem is introduced in order to apply the duality theorem. Let T̂1 and T̂2 ∈ R̂ ⊂ Â be as in (7). Since A is a subalgebra of BV(R+), let dT̃1 = T1 dt and dT̃2 = T2 dt. Then T̃1, T̃2 ∈ BV(R+). Define the subspace

W = { T̃2 ∗ R̃ : R̃ ∈ BV(R+) } ⊂ BV(R+). (9)

The embedded L1-model matching problem is defined as follows:

γ_EMM = inf { ‖T̃1 − R̃‖ : R̃ ∈ W ⊂ BV(R+) }. (10)
Since C0(R+)* = BV(R+), the duality theorem says that the problem (10) takes the same optimal value as the (pre)dual problem:

γ_DP = sup { ∫_{R+} F(t) dT̃1 : F ∈ ⊥W ⊂ C0(R+), ‖F‖_∞ ≤ 1 }, (11)
where

⊥W = { F ∈ C0(R+) : ∫_{R+} F(t) dW̃ = 0 for all W̃ ∈ W } (12)

is the (left) annihilator of W. The duality theorem asserts that there is a minimizer of (10), and that

γ_DP = γ_EMM. (13)
The following is a characterization of the annihilator.
Proposition 1. (Annihilator) Let T̂2(s) = [A2, B2, C2, D2] be an nth order minimal realization. Assume that the zeros of T̂2 lie in the open right half plane. Then the set ⊥W ∩ C^(n)(R+) is characterized as follows:

⊥W ∩ C^(n)(R+) = { F : F(t) = L^T exp(Ã^T t) z, z ∈ R^n }, (14)

where

L = B2 D2^{-1} and Ã = A2 − L C2. (15)
If one can show ⊥W ⊂ C^(n)(R+), then Proposition 1 is a complete characterization. In [5], differentiability was first assumed, and then a limit was taken to give the annihilator. However, we could not follow the argument there. The result of Proposition 1 lacks mathematical beauty, and the following discussion becomes a little awkward, but it will be shown that it is enough for our purpose. See the argument at the end of this section.
The dual problem (11) will be slightly modified to accommodate Proposition 1, and will be represented by a linear programming problem. The modified dual problem is:

γ_MDP = sup { ∫_{R+} F(t) dT̃1 : F ∈ ⊥W ∩ C^(n)(R+), ‖F‖_∞ ≤ 1 }. (16)

Substituting the characterization (14) into (16), we obtain the following linear programming problem.
Proposition 2. (Linear programming) Let T̂1(s) = [A1, B1, C1, D1] be a minimal realization. Let T̂2(s) = [A2, B2, C2, D2] and W be as in Proposition 1. Define

M = L D1 + R B1, (17)

where R is the unique solution of the Lyapunov equation

Ã R − R A1 = L C1. (18)

Then the modified dual problem (16) is represented as the linear programming problem

γ_LP = max M^T z (19)

subject to |L^T exp(Ã^T t) z| ≤ 1, t ≥ 0. (20)
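Gridding the time axis turns (19)-(20) into an ordinary finite LP. The sketch below uses a stand-in decaying vector a(t) in place of L^T exp(Ã^T t) (hypothetical data, not a realization from the paper), and enumerates candidate vertices of the gridded 2-D feasible region directly rather than calling an external LP solver:

```python
import itertools
import numpy as np

def semi_infinite_lp(c, a_of_t, t_grid):
    """max c^T z  s.t.  |a(t)^T z| <= 1 for all t in t_grid, z in R^2.
    The optimum of the gridded 2-D LP lies at an intersection of two
    active constraints, so candidate vertices are enumerated directly."""
    A = np.array([a_of_t(t) for t in t_grid])        # one constraint row per time
    best = 0.0                                       # z = 0 is always feasible
    for i, j in itertools.combinations(range(len(A)), 2):
        for si, sj in itertools.product((-1.0, 1.0), repeat=2):
            M = np.vstack([A[i], A[j]])
            if abs(np.linalg.det(M)) < 1e-12:
                continue
            z = np.linalg.solve(M, np.array([si, sj]))
            if np.max(np.abs(A @ z)) <= 1.0 + 1e-9:  # feasible vertex
                best = max(best, float(c @ z))
    return best

# Stand-in for L^T exp(A~^T t): a pair of decaying modes (hypothetical data)
a = lambda t: np.array([np.exp(-t), np.exp(-2.0 * t)])
val = semi_infinite_lp(np.array([1.0, 1.0]), a, np.linspace(0.0, 8.0, 60))
# Here the t = 0 row forces z1 + z2 <= 1, so the optimum is 1
```

The grid density and horizon are assumptions; in practice one refines the grid and checks the continuum constraint (20) a posteriori.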
The three problems (11), (16), and (19) apparently satisfy

γ_DP ≥ γ_MDP = γ_LP. (21)

Equality holds in (21) because the linear programming problem (19) is just another representation of Theorem 5 in [3]. Hence

γ_MM = γ_DP = γ_MDP = γ_LP. (22)
Here is a brief review of how an optimal solution of (7) is obtained. Suppose z_opt is optimal for (19). Then writing

M = Σ_{i=1}^q μ_i exp(Ã t_i) L, (23)

where

Σ_{i=1}^q |μ_i| = γ_LP, (24)
for some {t_1, t_2, ..., t_q}, z_opt satisfies

|L^T exp(Ã^T t_i) z_opt| = 1, i = 1, 2, ..., q. (25)

Let F_opt ∈ C0(R+), F_opt(t) = L^T exp(Ã^T t) z_opt. Then it is optimal for (16). The result in [3] indicates that this is also an optimal solution of (11). By the alignment condition [5], an optimal solution R̃_opt of (10) is such that Φ̃_opt = T̃1 − R̃_opt is aligned with F_opt. From (25), Φ̃_opt is supported by the finite set {t_1, t_2, ..., t_q}, and hence Φ_opt is a sum of delta functions:

Φ_opt(t) = Σ_{i=1}^q μ_i δ(t − t_i), (26)

where the μ_i are defined by (23). Since Φ_opt ∈ A, write Φ̂_opt = T̂1 − T̂2 Q̂_opt for some Q̂_opt ∈ Â. This Q̂_opt is optimal for (7).
5. Main results

In this section, we shall show that rational controllers are effective in continuous-time L1-optimal control. For this, it will be shown that γ_MM = γ_RMM. The following notions of convex analysis are used (see for example [6]). The convex hull of a set S is the smallest convex set that contains S, and is denoted conv S. A polytope is a set that is the convex hull of finitely many points. For a set S, the polar set S° is defined as

S° = { x : y^T x ≤ 1 for all y ∈ S }. (27)

If S is a closed convex set containing the origin, then S°° = S.
Lemma 1. Let T̂1(s) = [A1, B1, C1, D1] and T̂2(s) = [A2, B2, C2, D2] be minimally realized stable SISO transfer functions. Assume that the zeros of T̂2 lie in the open right half plane. Define L, Ã, and M as in (15) and (17). Suppose there are a symmetric polytope X and a positive number λ such that

(I + Ã/λ) X ⊂ conv(±L, X) (28)

and

sup { M^T z : z ∈ (conv(±L, X))° } = γ < 1. (29)

Then there is a Q̂ ∈ R̂ such that

‖T̂1 − T̂2 Q̂‖_Â ≤ γ. (30)
Proof: Define a matrix X in such a way that the columns of X and −X generate the polytope X. We may assume that X does not contain zero columns. From (28) there are matrices Λ and C̃3 satisfying

(I + Ã/λ) X = X Λ − L C̃3, and (31)
| [ Λ ; C̃3 ] |_1 ≤ 1, (32)

where |·|_1 is the matrix 1-norm, defined as the maximal absolute column sum. Define the matrices

A = (Λ − I) λ, and (33)

C3 = C̃3 λ. (34)
From (29) it follows that M ∈ γ (conv(±L, X))°° = γ conv(±L, X). Hence there are B4 and D34 such that

M = X B4 + L D34, and (35)

| [ B4 ; D34 ] |_1 ≤ γ. (36)
It will then be shown that Φ̂(s) = [A, B4, C3, D34] satisfies the interpolation condition, i.e., Φ̂ = T̂1 − T̂2 Q̂ for some Q̂ ∈ R̂, with ‖Φ̂‖_Â ≤ γ. It is shown in Lemma 2 below that A is a stable matrix and that ‖Φ̂‖_Â ≤ γ. For the interpolation condition, the (A, B) part of a realization of T̂2^{-1}(Φ̂ − T̂1) is

( [ A      0      0
    0      A1     0
    L C3  −L C1   Ã ] ,  [ B4
                           B1
                           L(D34 − D1) ] ). (37)

Let T be the nonsingular matrix

T = [ I  0  0
      0  I  0
      X  R  I ]. (38)

Using (17), (18), (31), (33), (34), and (35), (37) is transformed by T into a realization in which the Ã-block is uncontrollable, so that the controllable part has state matrix diag(A, A1). (39)

Since A and A1 are stable, it follows that Q̂ = T̂2^{-1}(Φ̂ − T̂1) is stable.
Lemma 2. The matrix A of (33) is stable. The transfer function Φ̂(s) = [A, B4, C3, D34] defined in the proof of Lemma 1 satisfies

‖Φ̂‖_Â ≤ γ. (40)
Theorem 1. Suppose that the zeros of T̂2 lie in the open right half plane. Then the two model matching problems (7) and (8) take the same optimal values, i.e.

γ_MM = γ_RMM. (41)
Since the set of stable rational functions R̂ is not dense in Â, a minimizing sequence of the rational model matching problem (8) may not converge to a minimizer of the model matching problem (7) in the norm topology. This is what usually happens, because the optimal objective transfer function (e.g. the weighted sensitivity) is a finite sum of pure delays (see (4)). On the other hand, Theorem 1 says that the two problems attain the same optimal value.
6. Conclusions

It was shown that the continuous-time L1-optimal control problems admit a sequence of rational controllers that attains the optimal performance index. Since the set of rational stable transfer functions R̂ is not dense in the algebra Â, the approximation is not in the norm topology. Also, a way to construct such an approximating sequence was presented.
References
[1] F.M. Callier and C.A. Desoer, "An algebra of transfer functions of distributed linear time-invariant systems," IEEE Trans. Circuits Syst., vol. CAS-25, pp. 651-662, 1978.

[2] M.A. Dahleh and J.B. Pearson, "l1-optimal feedback controllers for MIMO discrete-time systems," IEEE Trans. Automat. Contr., vol. AC-32, pp. 314-322, 1987.

[3] M.A. Dahleh and J.B. Pearson, "L1-optimal compensators for continuous-time systems," IEEE Trans. Automat. Contr., vol. AC-32, pp. 889-895, 1987.

[4] C.A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties, Orlando, Florida: Academic Press, 1975.

[5] D.G. Luenberger, Optimization by Vector Space Methods, New York: John Wiley & Sons, 1969.

[6] R.T. Rockafellar, Convex Analysis, Princeton, New Jersey: Princeton University Press, 1970.

[7] M. Vidyasagar, "Optimal rejection of persistent bounded disturbances," IEEE Trans. Automat. Contr., vol. AC-31, pp. 527-534, 1986.

[8] M. Vidyasagar, "Further results on the optimal rejection of persistent bounded disturbances, Part II: Continuous-time case," preprint.
Robust Stability of Sampled Data Systems
Mituhiko Araki and Tomomichi Hagiwara
Department of Electrical Engineering, Kyoto University,
Yoshida, Sakyo-ku, Kyoto 606-01, Japan
Abstract: In this paper, we study robust stability and stabilizability of sampled-data control systems composed of a continuous-time plant and a discrete-time compensator together with sample/hold devices, using the frequency domain method. Our study is not confined to the case of standard (i.e., single-rate) sampled-data controllers but is also extended to the case of multirate sampled-data controllers. Specifically, we compare, by numerical examples, the tolerable amounts of the uncertainty of a given continuous-time plant for which robust stabilization can be attained in three cases: a continuous-time controller, a single-rate sampled-data controller, and a multirate sampled-data controller. The result will be useful for estimating the deterioration caused by the use of sampled-data controllers in place of continuous ones, or for evaluating the advantages of the multirate sampled-data scheme over the standard one.
1 Introduction
Ensuring robust stability of closed-loop systems is one of the most important requirements in
control system design. The robust stability condition of continuous-time closed-loop systems
and the robust stabilizability condition of continuous-time plants with bounded uncertainty are
clarified in [1] and [2], respectively. However, to the authors' knowledge, little has been
obtained about robust stability and stabilizability of sampled-data control systems composed
of a continuous-time plant and a discrete-time compensator together with sample/hold devices.
This paper aims at presenting a frequency domain method to analyze robust stability and
stabilizability of sampled-data control systems on the same basis as continuous-time control
systems. Our study is not confined to the case of standard (i.e., single-rate) sampled-data
controllers but is also extended to the case of multirate sampled-data controllers [3],[4],[5].
Specifically, we compare, by numerical examples, the tolerable amounts of the uncertainty of a
given continuous-time plant for which robust stabilization can be attained in three cases:
a continuous-time controller, a single-rate sampled-data controller, and a multirate
sampled-data controller. The result will be useful for estimating the deterioration caused by
the use of sampled-data controllers in place of continuous ones, or for evaluating the advantages of the
multirate sampled-data scheme over the standard one.
2 Multirate Sampled-Data Controllers
Many types of multirate sampled-data controllers have been studied in the literature [3]-[8],
but in this paper we deal only with the multirate-input sampled-data controllers described
in the following. Note that this controller reduces to the standard one if all the input
multiplicities are set to 1.

We suppose that the given plant is an n-th order controllable, observable system with m
inputs and p outputs described by

dx/dt = Ax + Bu,  y = Cx. (1)
We sample all the plant outputs with sampling period T₀, and we place a zero-order hold circuit
h_i(s) = (1 − e^{−T_i s})/s preceded by a sampler with sampling period T_i at the i-th plant input,
where

T_i = T₀/N_i  (N_i: integer). (2)

Thus, the i-th plant input is changed N_i times during the interval T₀ with which the output is
sampled. We refer to T₀ as the frame period and N_i as the input multiplicity. Let u^D(k) be the vector
composed of all the inputs applied during the frame interval [kT₀, (k+1)T₀):
composed of the all inputs applied during the frame interval [kTo, k~ 1T0):
~D(~) := [~?(k:, . . . , ~ : (k : ] ~, (31
~ ( ~ ) := [~,(kro), . . . , ~,(kro + :~,  1~,)] ~
(i = 1,. . . , m), (4)
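The assembly of the lifted input vector in (3)-(4) can be sketched as follows; the function and variable names are illustrative, not from the paper.

```python
# Sketch of the lifted ("multirate") input vector of (3)-(4); names are
# illustrative, not from the paper.

def lifted_input(u_funcs, k, T0, N):
    """Collect all inputs applied during the frame [k*T0, (k+1)*T0).

    u_funcs : list of m scalar functions u_i(t)
    N       : list of input multiplicities N_i (so T_i = T0 / N_i)
    Returns the stacked vector u^D(k) of length N_1 + ... + N_m.
    """
    uD = []
    for ui, Ni in zip(u_funcs, N):
        Ti = T0 / Ni  # i-th input sampling period, eq. (2)
        # u_i^D(k) = [u_i(k T0), ..., u_i(k T0 + (N_i - 1) T_i)], eq. (4)
        uD.extend(ui(k * T0 + j * Ti) for j in range(Ni))
    return uD

# Two inputs with multiplicities N = [2, 3] give a 5-dimensional u^D(k).
uD = lifted_input([lambda t: t, lambda t: 2 * t], k=1, T0=1.0, N=[2, 3])
```

With all multiplicities equal to 1, the vector reduces to the ordinary single-rate input sample, matching the remark above.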
and let y^D(k) := y(kT₀). Denoting the z-transforms, with respect to the frame period, of u^D(k)
and y^D(k) by U^D(z) and Y^D(z), respectively, we have

Y^D(z) = G^D(z)U^D(z), (5)

where G^D(z) is the pulse transfer function matrix associated with the expanded discrete-time
realization [9] of the plant (1). We call G^D(z) the "pulse transfer function of the multirate
plant" or, simply, the "multirate plant."
With the above sampled-data scheme, we use the control law given by

w(k+1) = A_w w(k) + B_w y^D(k),
u^D(k) = −C_w w(k) − D_w y^D(k), (6)

where w is the state of the controller with an appropriate dimension. A controller with the
above structure is called a multirate-input sampled-data controller. We denote the pulse transfer
function of the controller by G_C^D(z) = C_w(zI − A_w)^{−1}B_w + D_w. Our control system is equivalent
to an ordinary digital closed-loop system given in Fig. 1. The above controller reduces to the
usual single-rate sampled-data controller if N_i = 1 for i = 1, ..., m.
Fig. 1: Multirate sampled-data control system
3 Multirate Plant Uncertainty
In this section we study the upper bound of the uncertainty of the multirate plant which
originates from the uncertainty of the continuous-time plant. Let G₀(s) be the nominal model
of the continuous-time plant, i.e., G₀(s) = C(sI − A)^{−1}B, and assume that the transfer function
matrix G(s) of the actual plant is given by

G(s) = G₀(s) + ΔG(s),  π_G = π_{G₀}, (7)

where ΔG(s) is bounded on the imaginary axis:

‖ΔG(jω)‖ ≤ |ℓ(jω)|  (∀ω), (8)

where ℓ(s) is a strictly proper rational function, and π_G and π_{G₀} denote the numbers of the unstable
poles of G(s) and of G₀(s), respectively. In correspondence to the above uncertainty
of G(s), the multirate plant G^D(z) is expected to have an uncertainty ΔG^D(z) of the form

G^D(z) = G₀^D(z) + ΔG^D(z). (9)
We want to evaluate the upper bound of ‖ΔG^D(e^{jωT₀})‖ for 0 ≤ ω < 2π/T₀, where the upper
bound should be taken over all possible G(s), or equivalently ΔG(s), given by (8). Now, using
the notion of symmetric-coordinate multirate impulse modulation [9] as demonstrated in [10],
and defining

ω_i = 2π/T_i  (i = 0, 1, ..., m), (10)

we can evaluate this quantity as

‖ΔG^D(e^{jωT₀})‖ ≤ l^D(ω), (11)

where l^D(ω) is given as follows.
(i) The case where N_i = N₀ (i = 1, ..., m):

l^D(ω) = [ Σ_{k=0}^{N₀−1} { l̃(ω − kω₀) }² ]^{1/2}, (12)

where

l̃(ω) = (1/T_i) Σ_{k=−∞}^{∞} |ℓ(j(ω + kN₀ω₀))| · |2 sin((ω + kN₀ω₀)T_i/2)| / |ω + kN₀ω₀|. (13)

(ii) The case where N_i ≠ N_j for some i and j:

l^D(ω) = [ Σ_{i=1}^{m} Σ_{k=0}^{N_i−1} { l̃_i(ω − kω₀) }² ]^{1/2}, (14)

where

l̃_i(ω) = (1/T_i) Σ_{k=−∞}^{∞} |ℓ(j(ω + kN_iω₀))| · |2 sin((ω + kN_iω₀)T_i/2)| / |ω + kN_iω₀|. (15)
Note that (12) and (14) coincide with each other if m = 1, but that the latter does not reduce
to the former (but to the former multiplied by m^{1/2}) when N_i = N₀ for i = 1, ..., m. This
is because our evaluation of the above l^D(ω) is, in general, conservative, and the derivations of (i)
and (ii) use different approaches to taking the multirate structure into account in order to make
the result as little conservative as possible. Obtaining the least conservative evaluation remains a
future problem, though it seems quite difficult. For the derivation of (12) and (14) in the general
case, the reader is referred to [10], but the derivation of (12) for the case of N₀ = 1 (this is
the case of the ordinary single-rate sampled-data plant) shall be given in the following to show
the basic idea. First, note that
Z[G(s)H(s)] = Z[G₀(s)H(s)] + Z[ΔG(s)H(s)] (16)

follows from the linearity of the z-transformation as in (9), where Z[·] denotes the z-transformation,
H(s) denotes the zero-order hold with sampling period T₀, and Z[ΔG(s)H(s)] =: ΔG^D(z) is the uncertainty for the single-rate plant Z[G₀(s)H(s)]. Applying the impulse modulation
formula, we obtain

ΔG^D(e^{jωT₀}) = (1/T₀) Σ_{k=−∞}^{∞} ΔG(j(ω + kω₀)) H(j(ω + kω₀)), (17)

where ω₀ := 2π/T₀ denotes the sampling frequency. Evaluating the norm of the series on the
right side, we obtain

‖ΔG^D(e^{jωT₀})‖ ≤ (1/T₀) Σ_{k=−∞}^{∞} |ℓ(j(ω + kω₀))| · |2 sin((ω + kω₀)T₀/2)| / |ω + kω₀|, (18)

where we used the inequality (8). The right hand side of (18) is nothing but (12) for N₀ = 1.
In [11], a similar method was used to derive an upper bound for the uncertainty of a discretized
plant, where multiplicative uncertainty was assumed.
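The single-rate bound (18) can be evaluated numerically by truncating the infinite sum, since the terms decay like 1/ν². The sketch below does this for an illustrative first-order bound function ℓ(s) = κ/(s + 1); that choice and all names are assumptions for the example, not the paper's data.

```python
# Numerical sketch of the single-rate uncertainty bound (18): truncate the
# infinite sum over k. The bound l(s) = kappa/(s + 1) is an illustrative
# choice, not the paper's.
import math

def ell_mag(w, kappa=0.1):
    """|l(j w)| for the illustrative bound l(s) = kappa / (s + 1)."""
    return kappa / math.hypot(1.0, w)

def zoh_mag(w, T0):
    """|H(j w)| for the zero-order hold: |2 sin(w T0 / 2) / w|."""
    if abs(w) < 1e-12:
        return T0  # limit as w -> 0
    return abs(2.0 * math.sin(w * T0 / 2.0) / w)

def lD_singlerate(w, T0, K=200):
    """Truncated sum (1/T0) * sum_k |l(j(w + k w0))| |H(j(w + k w0))|."""
    w0 = 2.0 * math.pi / T0
    total = 0.0
    for k in range(-K, K + 1):
        nu = w + k * w0
        total += ell_mag(nu) * zoh_mag(nu, T0)
    return total / T0
```

Because each term is of order 1/ν², a few hundred terms already give the bound to several digits; this is the quantity one would plot over 0 ≤ ω < ω₀ before fitting a rational overbound as in Section 4.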
4 Robust Stabilizability: Numerical Examples
The upper bound of the uncertainty of the multirate plants (including the single-rate plants as
a special case) given in the previous section enables us to compare the robustly stabilizing ability
of continuous-time controllers, single-rate sampled-data controllers, and multirate sampled-data
controllers on the same basis (i.e., on the basis of the uncertainty assumption about the
continuous-time plants). In this section, we make such a comparison by numerical examples. We
first consider the following three scalar plants G₀(s) with uncertainty bound ℓ(s):

P1. 1/(s − 1)
P2. 1/(s − 5),   ℓ(s) = k  (k > 0). (19)
P3. 1/(s − 10)
For each plant, the upper bound of k for which the plant is robustly stabilizable by a continuous-time
linear time-invariant controller is given as follows, where the upper bound is denoted
by k_c.

Plant | P1  | P2  | P3
k_c   | 1.0 | 0.6 | 0.55    (20)
Next, we examine the upper bound of k for which the plant is robustly stabilizable by a multirate-input
sampling controller with a given input multiplicity (including a single-rate sampled-data
controller as a special case). As described in the previous section, we can derive an upper
bound l^D(ω) for the multirate plant corresponding to the given input multiplicity (note that
(12) and (14) coincide for scalar plants). In general, however, the obtained upper bound l^D(ω)
cannot be expressed in the form |f(e^{jωT₀})| with any rational function f(z). Since this is
not convenient for studying robust stabilizability, we seek a rational function f(z) such that
|f(e^{jωT₀})| ≥ l^D(ω), yet |f(e^{jωT₀})| ≈ l^D(ω). To get such an f(z) graphically, we introduce the
bilinear transformation

χ = (z − 1)/(z + 1),  z = (1 + χ)/(1 − χ). (21)

Under this transformation, z = e^{jωT₀} is mapped to χ = jξ and vice versa, where ξ = tan(ωT₀/2),
or equivalently,

ω = (2/T₀) arctan ξ. (22)

Therefore, we obtain

l^D(ω) = l^D((2/T₀) arctan ξ) =: l_χ^D(ξ). (23)

By plotting this l_χ^D(ξ) (0 ≤ ξ < ∞) on a Bode diagram, we can see that it can be very accurately
approximated (and also bounded from above) by |l̂_χ^D(jξ)| with

l̂_χ^D(χ) = K_ℓ (1 + T_ℓ χ) / (1 + T χ), (24)
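The frequency correspondence behind (21)-(22) can be checked numerically: on the unit circle, z = e^{jωT₀} maps exactly to a purely imaginary χ = jξ with ξ = tan(ωT₀/2). The sketch below verifies this identity; the values of T₀ and ω are arbitrary illustrations.

```python
# Sketch of the frequency mapping induced by the bilinear transformation
# (21)-(22): z = e^{j w T0} corresponds to chi = j xi with xi = tan(w T0 / 2).
import cmath
import math

def chi_of_z(z):
    return (z - 1) / (z + 1)           # chi = (z - 1)/(z + 1), eq. (21)

def xi_of_w(w, T0):
    return math.tan(w * T0 / 2.0)      # eq. (22), forward direction

def w_of_xi(xi, T0):
    return (2.0 / T0) * math.atan(xi)  # eq. (22), inverse direction

T0, w = 0.2, 3.0                       # arbitrary example values
z = cmath.exp(1j * w * T0)
chi = chi_of_z(z)
# On the unit circle, chi is purely imaginary with Im(chi) = tan(w T0 / 2).
```

This is why one may plot l_χ^D(ξ) against ξ on a Bode diagram and fit the first-order overbound (24) there, then transform back to the z-domain.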
where T, T_ℓ and K_ℓ are constants depending on the frame period T₀ and the input
multiplicity N₀. Since l̂_χ^D(χ) is a rational function of χ, the desired f(z) is obtained by
the inverse bilinear transformation of l̂_χ^D(χ). Thus, we can calculate an upper bound of k
for which the robust stabilizability of the plant by a multirate (or single-rate if N₀ = 1)
sampled-data controller is assured. However, it should be noted that we are dealing with
only a sufficient condition for robust stabilizability, since the upper bounds (12) and (14) are, in
general, conservative as mentioned in the previous section (this is also the case in all examples
to follow, except the case of continuous-time controllers). The calculated results are as follows,
where the upper bound is denoted by k_m.
k_m (P1)
N₀ | T₀ = 0.2 | T₀ = 0.8
1  | 0.8177   | 0.4383
2  | 0.8185   | 0.4466
3  | 0.8244   | 0.4483
4  | 0.8186   | 0.4488

k_m (P2)
N₀ | T₀ = 0.2 | T₀ = 0.8
1  | 0.2072   | 6.290 × 10⁻³
2  | 0.2132   | 7.935 × 10⁻³
3  | 0.2144   | 8.506 × 10⁻³
4  | 0.2148   | 8.746 × 10⁻³

k_m (P3)
N₀ | T₀ = 0.2 | T₀ = 0.8
1  | 0.05846  | 5.911 × 10⁻⁵
2  | 0.06436  | 8.241 × 10⁻⁵
3  | 0.06573  | 9.593 × 10⁻⁵
4  | 0.06622  | 1.036 × 10⁻⁴
                              (25)
From the above results, we see that the robustly stabilizing ability of the multirate-input sampling
controller is higher than that of the single-rate sampled-data controller (which is a natural
consequence from the theoretical point of view), and that the relative difference in ability between
single-rate sampled-data controllers and multirate sampled-data controllers with higher input
multiplicity becomes larger as the unstable pole of the plant and/or the frame period becomes
larger.

The effect of the frame period on the robustly stabilizing ability of the multirate-input sampling
controller is also studied for the plant P3, and the results are given as follows.
T₀     | 0.1    | 0.01   | 0.001
N₀ = 1 | 0.1887 | 0.4974 | 0.5445
N₀ = 4 | 0.1955 | 0.4972 | 0.5442
                              (26)
The results indicate that, as the frame period decreases, the upper bound of k approaches the upper bound for continuous-time
linear time-invariant controllers, k_c, regardless of the value of N₀ (i.e., including the case
of single-rate sampled-data controllers). For single-rate sampled-data controllers, this result
has been shown by Hara et al. [12] by a different approach.
Next, we consider the case of the following multivariable plants:

P4. G₄₀(s) = [ 1/(s − 1)       0
               0        1/(s − 10) ]
                                          (27)
P5. G₅₀(s) = [ 1/(s − 10)   1/(s − 10)
               1/(s + 1)    1/(s + 1) ]
For each plant, the upper bound of k for which the plant is robustly stabilizable by a continuous-time
linear time-invariant controller is as follows.

Plant | P4   | P5
k_c   | 0.55 | 0.7778    (28)

As in the case of scalar plants, we first calculate l^D(ω) and then find graphically a rational
function f(z) such that |f(e^{jωT₀})| ≥ l^D(ω), yet |f(e^{jωT₀})| ≈ l^D(ω), using the bilinear transformation
and a Bode diagram. Finally, we can calculate the upper bound of k such that the existence of a
robustly stabilizing multirate sampled-data controller with given input multiplicities is ensured,
and the results are as follows.
k_m (P4)
N₁ | N₂ | T₀ = 0.2       | T₀ = 0.8
1  | 1  | 0.5846 × 10⁻¹  | 0.5911 × 10⁻⁴
2  | 2  | 0.6436 × 10⁻¹  | 0.8240 × 10⁻⁴
2  | 1  | 0.3717 × 10⁻¹  | 0.4745 × 10⁻⁴
1  | 2  | 0.4772 × 10⁻¹  | 0.4831 × 10⁻⁴

k_m (P5)
N₁ | N₂ | T₀ = 0.2       | T₀ = 0.8
1  | 1  | 0.8267 × 10⁻¹  | 0.8360 × 10⁻⁴
2  | 2  | 0.9102 × 10⁻¹  | 0.1165 × 10⁻³
2  | 1  | 0.6049 × 10⁻¹  | 0.6772 × 10⁻⁴
1  | 2  | 0.6049 × 10⁻¹  | 0.6772 × 10⁻⁴
                              (29)
As in the case of scalar plants, the above results show that the relative difference in the
robustly stabilizing ability between single-rate sampled-data controllers and multirate sampled-data
controllers is large when the frame period is large, provided that the input multiplicities N₁
and N₂ are the same. However, when N₁ and N₂ are different, the range of k for which the
existence of a robustly stabilizing controller can be ensured does not always increase as the
input multiplicities become large. Theoretically speaking, the range never decreases as long
as an input multiplicity is multiplied by an integer. Therefore, the strange
phenomena in the above results are due to the fact that the evaluation of l^D(ω) in (12) and
(14) is conservative. Less conservative results could be obtained if we knew the upper bound
for the continuous-time uncertainty ΔG(s) columnwise. However, details are omitted because
of space limitations.
References
[1] M. J. Chen and C. A. Desoer, "Necessary and Sufficient Condition for Robust Stability of
Linear Distributed Feedback Systems," Int. J. Control, Vol. 35, No. 2, pp. 255-267 (1982)
[2] M. Vidyasagar and H. Kimura, "Robust Controllers for Uncertain Linear Multivariable
Systems," Automatica, Vol. 22, No. 1, pp. 85-94 (1986)
[3] A. B. Chammas and C. T. Leondes, "Pole Assignment by Piecewise Constant Output
Feedback," Int. J. Control, Vol. 29, No. 1, pp. 31-38 (1979)
[4] P. T. Kabamba, "Control of Linear Systems Using Generalized Sampled-Data Hold Functions,"
IEEE Trans. Automatic Control, Vol. 32, No. 9, pp. 772-783 (1987)
[5] M. Araki and T. Hagiwara, "Pole Assignment by Multirate Sampled-Data Output Feedback,"
Int. J. Control, Vol. 44, No. 6, pp. 1661-1673 (1986)
[6] T. Mita, B. C. Pang and K. Z. Liu, "Design of Optimal Strongly Stable Digital Control
Systems and Application to Output Feedback Control of Mechanical Systems," Int. J.
Control, Vol. 45, pp. 2071-2082 (1987)
[7] T. Hagiwara and M. Araki, "Design of a Stable State Feedback Controller Based on the
Multirate Sampling of the Plant Output," IEEE Trans. Automatic Control, Vol. 33, No. 9,
pp. 812-819 (1988)
[8] M. Araki and T. Hagiwara, "Periodically Time-Varying Controllers," China-Japan Joint
Symposium on Systems Control Theory and Its Application, Hangzhou, China (1989)
[9] M. Araki and K. Yamamoto, "Multivariable Multirate Sampled-Data Systems: State-Space
Description, Transfer Characteristics, and Nyquist Criterion," IEEE Trans. Automatic Control,
Vol. AC-31, No. 2, pp. 145-154 (1986)
[10] M. Araki, T. Hagiwara, T. Fujimura and Y. Goto, "On Robust Stability of Multirate
Digital Control Systems," Proc. 29th Conference on Decision and Control, pp. 1923-1924,
Honolulu, Hawaii (1990)
[11] C. E. Rohrs, G. Stein and K. J. Astrom, "Uncertainty in Sampled Systems," Proc. American
Control Conference, Boston, pp. 95-97 (1985)
[12] S. Hara, M. Nakajima and P. T. Kabamba, "Robust Stabilization in Sampled-Data Control
Systems," Proc. 13th Dynamical System Theory Symposium (in Japanese), pp. 115-120
(1991)
Robust Control System Design for Sampled-Data Feedback Systems

Shinji HARA* and Pierre T. KABAMBA**
* Dept. of Control Engineering, Tokyo Institute of Technology,
Ohokayama, Meguro-ku, Tokyo 152, Japan
** Aerospace Engineering Dept., The University of Michigan,
Ann Arbor, MI 48109-2140, USA
In this paper, we consider an optimization problem for sampled-data control systems in the sense of the L₂-induced norm of the linear operator with continuous-time inputs and outputs. The problem is a worst-case design and a counterpart of the H∞-optimization problem for purely continuous-time or discrete-time systems. Hence, it can be applied to robust controller design taking account of the intersample behavior of sampled-data feedback systems. We show that the optimization problem for a 4-block generalized continuous-time plant with a digital controller can be solved with a γ-iteration on a certain discrete-time 4-block plant which depends on γ. The computation algorithm with three exponentiations is also derived.
1 Introduction
The first motivation of this paper is the intersampling behavior of sampled-data systems, by which we mean systems with continuous-time inputs and outputs, but in which some state variables evolve in continuous time and others evolve in discrete time. Such systems are important in practice because of the widespread use of digital computers for active control (see e.g. [1]). Typically, sampled-data systems are analyzed and designed through their discrete-time behavior, i.e., their behavior at the sampling times. The classical tool for assessing intersampling performance is the modified z-transform, but it implicitly requires discretization of the inputs. While a few performance criteria (such as stability or deadbeat response) can be judged based on discrete-time behavior, many others (such as disturbance and noise attenuation, overshoot, settling time, etc.) require a closer look at the intersampling behavior. Moreover, it has been shown that a sampled-data system may very well have innocuous sample-time dynamics, but unacceptable intersampling ripples (see e.g. [2], [3], [4], [5]). Recently, there have been several publications specifically accounting for the intersampling behavior of sampled-data systems [2]-[14].
The second motivation of the paper is the recently developed theory of H∞ analysis and design in the state space (see e.g. [15], [16] and the references therein). This approach considers that the signals of interest are square integrable, and aims at minimizing the worst possible amplification factor between selected signals. Mathematically, this corresponds to minimizing the L₂-induced norm of a linear operator. Although most of the results were first derived for linear time-invariant continuous-time systems, they have been extended to linear time-invariant discrete-time systems (see e.g. [17], [18]). Such an extension, however, does not cover the important class of sampled-data systems. The
technical difficulty is that sampled-data systems are inherently time-varying. Note that optimal H∞ design of time-varying systems has been considered; however, these results are not directly applicable to sampled-data systems. Also, induced norms of sampled-data systems are given in [9], [10], a treatment which is more operator-theoretic than the state-space treatment of this paper.
In this paper, we show how to optimize the L₂-induced norm of a sampled-data system obtained by using a digital controller on a standard 4-block analog plant. This problem is a worst-case design and is a counterpart of the H∞-norm optimization problem for purely continuous- or discrete-time systems [19], [15], [16]. It will be shown that the optimal attenuation problem for a 4-block continuous-time plant with a digital controller can be solved with a γ-iteration on a certain discrete-time 4-block plant which depends on γ. The computation algorithm with three exponentiations is also derived.
Throughout the paper, superscript T denotes matrix transpose, parentheses ( · ) around an independent variable indicate an analog function of continuous time or the Laplace transform of such a function, whereas square brackets [ · ] indicate a discrete sequence or the z-transform of a sequence. σ_max( · ) indicates the maximum singular value of the matrix argument.
2 Optimization of the L2 Induced Norm
Consider a sampled-data feedback control system which consists of a generalized continuous-time plant (including frequency-shaped weighting matrices and anti-aliasing filters)

G(s) = [ G₁₁(s)  G₁₂(s)
         G₂₁(s)  G₂₂(s) ], (2.1)

a discrete-time controller K[z] to be designed, a given hold function H(t), and a sampler with sampling period τ > 0. The state-space representations of G(s) and K[z] are given by

ẋ(t) = Ax(t) + B₁w(t) + B₂u(t)
z(t) = C₁x(t) + D₁₁w(t) + D₁₂u(t) (2.2)
y(t) = C₂x(t) + D₂₁w(t) + D₂₂u(t)

x_d[k+1] = A_d x_d[k] + B_d y(kτ)
v[k] = C_d x_d[k] + D_d y(kτ), (2.3)

respectively, where w(t), u(t), z(t) and y(t) denote the exogenous input, control input, controlled output and measured output (with appropriate dimensions), respectively, all of which are analog signals. The control input u(t) is determined by

u(t) = H(t)v[k],  t ∈ [kτ, (k+1)τ), (2.4)

where H(t) is a τ-periodic matrix. We make the following assumptions:
Assumptions A:

1C) (A, B₂) is stabilizable and (A, C₂) is detectable.

1S) The sampling period τ is chosen so that (Ā, B̄₂) is stabilizable and (Ā, C̄₂) is detectable, where

Ā = e^{Aτ},  B̄₂ = ∫₀^τ e^{A(τ−s)} B₂ H(s) ds,  C̄₂ = C₂. (2.5)

2) D₂₁ = 0 and D₂₂ = 0.

3) D_d = 0.

4) The pair (A_d, C_d) is observable, and the matrix C_d is specified.
Let us discuss these assumptions in some detail. The first assumption, 1C) and 1S), is required for stabilization. If 1C) is satisfied, then 1S) holds for almost all τ. The second assumption 2) is needed to help guarantee boundedness of the closed-loop operator. In practice, the second and third assumptions are both quite reasonable, because the sampler is always preceded by a low-pass filter and the digital controller is often strictly proper for practical reasons. The fourth assumption is consistent with using a minimal realization of the compensator, for which there is no loss of generality in assuming we use an observable canonical form. It should be noticed that when the digital controller has a computational delay, the third and fourth assumptions are automatically satisfied. Indeed, in that case the digital controller has an observable realization of the form

x_d[k+1] = A_d x_d[k] + B_d y(kτ),  v[k] = [0  I] x_d[k], (2.6)

where C_d = [0  I] is specified and D_d = 0. With Assumptions A, the feedback system is represented by
ẋ(t) = Ax(t) + B₂H(t)C_d x_d[k] + B₁w(t)
x_d[k+1] = A_d x_d[k] + B_d C₂ x(kτ) (2.7)
z(t) = C₁x(t) + D₁₂H(t)C_d x_d[k] + D₁₁w(t)

For every compensator K[z] which internally stabilizes (2.7), we define a performance index which is the L₂-induced norm of the closed-loop operator, that is,

J(K) = sup_{w ≠ 0} ‖z‖₂ / ‖w‖₂. (2.8)
Our objective is to find a digital stabilizing controller K[z] which achieves a specified attenuation level γ > 0, i.e., such that J(K) < γ. This is accomplished by applying the result in [13] to the sampled-data feedback system (2.7), and yields the following development.
For every γ > σ_max(D₁₁), define G_γ[z] as a fictitious 4-block discrete-time plant, which is independent of the compensator but depends on γ, with state-space realization

ξ[k+1] = Āξ[k] + B̄₁w̄[k] + B̄₂ū[k]
z̄[k] = C̄₁ξ[k] + D̄₁₁w̄[k] + D̄₁₂ū[k] (2.9)
ȳ[k] = C̄₂ξ[k],

where Ā, B̄₂, C̄₂ are defined by (2.5), and the other matrices in (2.9), B̄₁, C̄₁, D̄₁₁ and D̄₁₂, are obtained as follows.

Let

B̄₁ := ∫₀^τ e^{A(τ−s)} B₁ V̄_γ(s) ds, (2.10)

where V̄_γ(t) is given in Appendix E. Define

Φ₁(t) := C₁e^{At},
Φ₂(t) := ∫₀^t C₁e^{A(t−s)} B₁ V̄_γ(s) ds + D₁₁V̄_γ(t), (2.11)
Φ₃(t) := C₁ ∫₀^t e^{A(t−s)} B₂ H(s) ds + D₁₂H(t),

and

M_γ := ∫₀^τ [Φ₁(t)  Φ₂(t)  Φ₃(t)]ᵀ [Φ₁(t)  Φ₂(t)  Φ₃(t)] dt.

Since M_γ is positive semidefinite, it can be factored into conformably partitioned matrices

M_γ = [C̄₁  D̄₁₁  D̄₁₂]ᵀ [C̄₁  D̄₁₁  D̄₁₂], (2.12)

which completes the characterization of the fictitious 4-block plant G_γ[z]. We can now state the following result for the optimal 4-block sampled-data synthesis problem.
Theorem 2.1 Suppose Assumptions A hold, and γ > σ_max(D₁₁). Then the digital compensator K[z] stabilizes (2.7) and achieves J(K) < γ if and only if K[z] stabilizes the fictitious digital plant G_γ[z] defined in (2.9)-(2.12) with attenuation level γ. By this we mean that (2.9)-(2.12) together with the feedback law

x_d[k+1] = A_d x_d[k] + B_d ȳ[k],  ū[k] = C_d x_d[k] (2.13)

yields an internally stable closed loop satisfying

Σ_{k=0}^{∞} ‖z̄[k]‖² ≤ γ² Σ_{k=0}^{∞} ‖w̄[k]‖², (2.14)

or equivalently,

‖Ḡ_cl‖_∞ < γ, (2.15)

where Ḡ_cl denotes the closed-loop pulse transfer function from w̄ to z̄.
Remark 2.1 All the matrices in (2.9)-(2.12) are independent of the parameters of the controller K[z] to be designed, except for a dependence on C_d which has been assumed specified in Assumption A. However, they depend on H(t) and γ as well as on the parameters of G(s). The practical implication of Theorem 2.1 is therefore that an optimal digital controller K[z] can be designed using a discrete-time version of the γ-iteration proposed in [15], but with a plant which depends on γ. The computation is initialized by finding an upper bound for the achievable attenuation; this can be done very simply by finding a digital stabilizer for G(s) and computing its attenuation level γ₀. At each step, one decrements γ, computes the realization (2.9)-(2.12) of the fictitious plant G_γ[z], and tests whether there exists a digital stabilizer for G_γ[z] which achieves attenuation level γ. This test involves solving two discrete Riccati equations, and is constructive [17], [18]. When the test is successful, the digital controller is also guaranteed to stabilize and achieve attenuation level γ when applied to G(s); otherwise, attenuation level γ cannot be achieved by any LTI digital compensator in Fig. 4.1.
Remark 2.2 An optimal digital controller can also be designed using a bisection algorithm. Start with a lower bound and an upper bound of the optimal attenuation, e.g. σ_max(D₁₁) and the same upper bound as in Remark 2.1, respectively. At every step, define γ as the center of the current bounding interval; test whether there exists a digital compensator which achieves attenuation level γ using Theorem 2.1; and update the lower or upper bound accordingly.
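The bisection of Remark 2.2 can be sketched as follows. The Riccati-based feasibility test (existence of a stabilizing compensator at level γ for G_γ[z]) is abstracted here as a callback `achievable`; the function name and the toy stand-in test are assumptions for illustration, not the paper's implementation.

```python
# Sketch of the bisection of Remark 2.2. The Riccati-based feasibility test
# is abstracted as a callback achievable(gamma); in the actual algorithm it
# would build G_gamma[z] via (2.9)-(2.12) and test it as in [17], [18].

def bisect_attenuation(achievable, gamma_lo, gamma_hi, tol=1e-6):
    """Shrink [gamma_lo, gamma_hi] around the optimal attenuation level.

    achievable(g) -> True if some digital compensator attains J(K) < g.
    gamma_lo is infeasible (e.g. sigma_max(D11)); gamma_hi is feasible.
    """
    while gamma_hi - gamma_lo > tol:
        g = 0.5 * (gamma_lo + gamma_hi)  # center of the bounding interval
        if achievable(g):
            gamma_hi = g                 # feasible: tighten the upper bound
        else:
            gamma_lo = g                 # infeasible: raise the lower bound
    return gamma_hi                      # last feasible level found

# Toy stand-in: pretend the optimal attenuation is 0.55.
g_opt = bisect_attenuation(lambda g: g > 0.55, 0.0, 10.0)
```

Each feasibility call at level γ costs two discrete Riccati solves, so the interval halving trades a fixed number of such solves for a guaranteed bracketing of the optimum.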
3 Computational Algorithm
We now show the computational algorithm with three exponentiations for deriving the equivalent discrete-time H∞ problem, where we assume that H(t) = I, i.e., the zero-order hold case. The algorithm can be derived from the main theorem in Section 2 and the following relations:
exp{ [ A  B
       0  0 ] τ } = [ e^{Aτ}  ∫₀^τ e^{As}B ds
                      0       I ], (3.1)

∫₀^τ e^{Aᵀs} Q e^{As} ds = Γᵀ F,  where  exp{ [ −Aᵀ  Q
                                                0    A ] τ } = [ ∗  F
                                                                 0  Γ ]. (3.2)

< Computational Algorithm >
• Step 1: Computation of W_γ: W_γ = Γ_γᵀ F_γ, where

exp{ [ −A_γᵀ  C_γᵀC_γ
        0     A_γ ] τ } = [ ∗  F_γ
                            0  Γ_γ ],

with

A_γ = [ A + B₁R_γ^{−1}D₁₁ᵀC₁         B₁R_γ^{−1}B₁ᵀ              B₂ + B₁R_γ^{−1}D₁₁ᵀD₁₂
        −C₁ᵀ(I + D₁₁R_γ^{−1}D₁₁ᵀ)C₁  −(Aᵀ + C₁ᵀD₁₁R_γ^{−1}B₁ᵀ)  −C₁ᵀ(I + D₁₁R_γ^{−1}D₁₁ᵀ)D₁₂
        0                            0                          0 ],

C_γ = [ R_γ^{−1}D₁₁ᵀC₁  R_γ^{−1}B₁ᵀ  R_γ^{−1}D₁₁ᵀD₁₂ ],
R_γ = γ²I − D₁₁ᵀD₁₁.

• Step 2: Computation of B̄₁: B̄₁ = [I_n  0] exp{A_B τ} B_D.

• Step 3: Computation of [C̄₁  D̄₁₁  D̄₁₂] := M_γ^{1/2}: Define Ṽ_γ = V_γ W_γ^{−1/2}. Then we have

M_γ = B_Mᵀ ( ∫₀^τ e^{A_Mᵀt} C_Mᵀ C_M e^{A_Mt} dt ) B_M,

where

A_M = [ A  ∗  ∗          B_M = [ I  0          0          C_M = [ C₁  ∗  ∗ ],
        0  A_γ  0                0  W_γ^{−1/2}  0
        0  0  0 ],               0  0          I ],

and the integral is evaluated by the third exponentiation via (3.2).

Remark 1: The sizes of the exponentiations in Steps 1, 2 and 3 are (2n + m₂) × 2, (3n + m₂), and (3n + 2m₂) × 2, respectively.
Remark 2: Though G_γ[z] depends on γ, we can apply the γ-iteration for the optimization.
4 Conclusions
The work reported herein represents an attempt to extend the H∞ methodology (worst-case design) to sampled-data systems, so as to make it applicable to the common situation where a digital controller regulates an analog plant.

This work naturally leads to original research problems and possible extensions. Among these, we mention the development of an extensive theory of sampled-data systems, where each signal is assumed to have an analog and a discrete part. Other obvious extensions are the use of periodic or multirate digital controllers and the optimization of the induced norm with respect to the hold function, in the spirit of [20], [3]. These questions are under study and will be reported on in the future.
References
[1] G.F. Franklin and J.D. Powell: Digital Control of Dynamic Systems, Addison-Wesley (1980)

[2] S. Urikura and A. Nagata: Ripple-Free Deadbeat Control for Sampled-Data Systems, IEEE Trans. Auto. Contr., vol. 32, 474/482 (1987)

[3] Y.C. Juan and P.T. Kabamba: Optimal Hold Functions for Sampled Data Regulation, to appear in Automatica

[4] Y. Yamamoto: New Approach to Sampled-Data Control Systems: A Function Space Method; Proc. 29th CDC, 1882/1887 (1990)

[5] H.K. Sung and S. Hara: Ripple-free Condition in Sampled-Data Control Systems; Proc. 13th Dynamical System Theory Symp., 269/272 (1991)

[6] B.A. Francis and T.T. Georgiou: Stability Theory for Linear Time-Invariant Plants with Periodic Digital Controllers, IEEE Trans. Auto. Contr., vol. 33, 820/832 (1988)

[7] T. Chen and B.A. Francis: Stability of Sampled Data Feedback Systems, IEEE Trans. Auto. Contr., vol. 36, 50/58 (1991)

[8] T. Chen and B.A. Francis: H₂ Optimal Sampled Data Control, Preprint (1990)

[9] T. Chen and B.A. Francis: On the L₂ Induced Norm of Sampled Data Systems, Systems & Control Letters, vol. 15, 211/219 (1990)

[10] T. Chen, A. Feintuch and B.A. Francis: On the Existence of H∞-Optimal Sampled Data Controllers, Proc. 29th CDC, 1794/1795, Honolulu (1990)

[11] S. Hara and P.T. Kabamba: Worst Case Analysis and Design of Sampled Data Control Systems, Proc. 12th Dynamical System Theory Symp., 167/172 (1989)

[12] P.T. Kabamba and S. Hara: On Computing the Induced Norm of Sampled Data Systems, Proc. 1990 ACC, San Diego, CA, May 1990

[13] S. Hara and P.T. Kabamba: Worst Case Analysis and Design of Sampled Data Control Systems, 29th IEEE CDC, Honolulu, 202/203 (1990)

[14] B. Bamieh and J.B. Pearson: A General Framework for Linear Periodic Systems with Application to H∞ Sampled-Data Control, Technical Report No. 9021, Rice University, November 1990

[15] K. Glover and J.C. Doyle: State-Space Formulae for All Stabilizing Controllers that Satisfy an H∞-Norm Bound and Relations to Risk Sensitivity, Systems & Control Letters, vol. 11, 167/172 (1988)

[16] J.C. Doyle, K. Glover, P.P. Khargonekar and B.A. Francis: State Space Solutions to Standard H₂ and H∞ Control Problems, IEEE Trans. Auto. Contr., vol. 34, 831/847 (1989)

[17] R. Kondo, S. Hara and T. Itou: Characterization of Discrete Time H∞ Controllers via Bilinear Transformation, Proc. 29th Conf. Dec. Contr., Honolulu, Hawaii (1990)

[18] R. Kondo and S. Hara: A Unified Approach to Continuous/Discrete Time H∞ Problems via J-Spectral Factorizations, to be presented at MTNS '91

[19] S.P. Boyd, V. Balakrishnan and P.T. Kabamba: A Bisection Method for Computing the H∞ Norm of a Transfer Matrix and Related Problems, Math. Contr. Sig. Syst., vol. 2, 207/219 (1989)

[20] P.T. Kabamba: Control of Linear Systems Using Generalized Sampled Data Hold Functions, IEEE Trans. Auto. Contr., vol. 32, 772/783 (1987)

[21] W. Sun et al.: H∞ Control and Filtering with Sampled Measurements, Proc. ACC '91 (1991)
Appendix E

Define the following matrix functions of γ:

Σ := [ A + B₁R_γ^{−1}D₁₁ᵀC₁         B₁R_γ^{−1}B₁ᵀ
       −C₁ᵀ(I + D₁₁R_γ^{−1}D₁₁ᵀ)C₁  −(Aᵀ + C₁ᵀD₁₁R_γ^{−1}B₁ᵀ) ], (E.1)

[ Ψ₁(t)     [ (B₂ + B₁R_γ^{−1}D₁₁ᵀD₁₂)H(t)C_d
  Ψ₂(t) ] :=  −C₁ᵀ(I + D₁₁R_γ^{−1}D₁₁ᵀ)D₁₂H(t)C_d ],

[ Φ₁(t)              [ Ψ₁(s)
  Φ₂(t) ] := ∫₀^t e^{Σ(t−s)}  Ψ₂(s) ] ds, (E.2)

and, with Φ₁(t) = [Φ₁₁(t)  Φ₁₂(t)  Φ₁₃(t)] and Φ₂(t) = [Φ₂₁(t)  Φ₂₂(t)  Φ₂₃(t)] partitioned conformably,

V_γ1(t) := D₁₁ᵀC₁Φ₁₁(t) − B₁ᵀΦ₂₁(t),
V_γ2(t) := D₁₁ᵀC₁Φ₁₂(t) − B₁ᵀΦ₂₂(t), (E.3)
V_γ3(t) := D₁₁ᵀC₁Φ₁₃(t) + D₁₁ᵀD₁₂H(t)C_d − B₁ᵀΦ₂₃(t).

Without loss of generality, assume that the columns of V̄_γ(t) are linearly independent on [0, τ]. (If this is not the case, just select a maximal set of linearly independent columns, and redefine V̄_γ(t) accordingly.) Then define

W_γ := ∫₀^τ V_γᵀ(t) V_γ(t) dt. (E.4)
A Function State Space Approach to
Robust Tracking for Sampled-Data Systems

Yutaka Yamamoto
Abstract

It is well known that tracking of continuous-time signals by sampled-data systems presents various difficulties. For example, the usual discrete-time model is not suitable for describing intersample ripples. In order to handle this problem adequately, we need a framework that explicitly contains the intersample behavior in the model. This paper presents an infinite-dimensional yet time-invariant discrete-time model which contains the full intersample behavior as information in the state. This makes it possible to understand intersample ripples clearly as the result of a mismatch between the intersample tracking signal and the system zero-directional vector. This leads to an internal model principle for sampled-data systems, and to some nonclassical features arising from the interplay of digital and continuous-time behavior.
1 Introduction

It is well known that tracking of continuous-time signals by sampled-data systems presents various difficulties. For example, the usual discrete-time model is not suitable for describing intersample ripples. In order to handle this problem adequately, we need a framework that explicitly contains the intersample behavior in the model. There are now several approaches in this direction: [1], [2], [4], [12].

This paper employs the approach of [12], which gives an infinite-dimensional yet time-invariant discrete-time model containing the full intersample behavior as information in the state (a similar approach is also employed by [7] for computing H∞ norms). This makes it possible to understand intersample ripples clearly as the result of a mismatch between the intersample tracking signal and the system zero-directional vector. This leads to an internal model principle for sampled-data systems, and to some nonclassical features arising from the interplay of digital and continuous-time behavior.
2 Preliminaries

Let

\dot{x}(t) = Ax(t) + Bu(t),  y(t) = Cx(t)   (1)
be a given continuous-time system, and h a fixed sampling period. Given a function φ(t) on [0, ∞), we define S(φ) as a sequence of functions on [0, h) as follows:

S(φ) := {φ_k(θ)}_{k=0}^∞,  φ_k(θ) := φ(kh + θ).   (2)

For such a sequence {φ_k(θ)}_{k=0}^∞, its z-transform is defined as

Z(φ) = Σ_{k=0}^∞ φ_k(θ) z^{−k}.   (3)
We take S(u) = {u_k(θ)}, S(x) = {x_k(θ)}, S(y) = {y_k(θ)} as input, state, and output vectors (although infinite-dimensional), and give a discrete-time transition rule. Indeed, at time t = kh, let the state of the system (A, B, C) be x_k(h) and let the input u_{k+1}(θ) (0 ≤ θ < h) be applied to the system. Then the state and output trajectories x_{k+1}(θ) and y_{k+1}(θ) are given by the following equations:

x_{k+1}(θ) = e^{Aθ} x_k(h) + ∫_0^θ e^{A(θ−τ)} B u_{k+1}(τ) dτ,

y_k(θ) = C x_k(θ),  0 ≤ θ < h.   (4)
Let U be the space of piecewise continuous functions on (0, h] that are right-continuous. Introducing the operators

F : U^n → U^n : x(θ) ↦ e^{Aθ} x(h),

G : U^m → U^n : u(θ) ↦ ∫_0^θ e^{A(θ−τ)} B u(τ) dτ,   (5)

H : U^n → U^p : x(θ) ↦ C x(θ),
equation (4) can be written simply as

x_{k+1} = F x_k + G u_{k+1},  y_k = H x_k.   (6)
Consider the hybrid control system depicted in Fig. 1. Here C(z) and P(s) denote the discrete-time and continuous-time systems (A_d, B_d, C_d) and (A_c, B_c, C_c), respectively. It is easy to

Figure 1: Hybrid Closed-Loop System
see that the closed-loop system obeys the equations (7)–(8), in which B(θ) is given by

B(θ) := ∫_0^θ e^{A_c(θ−τ)} B_c H(τ) dτ,   (9)

where H(t) is the hold function given on [0, h] and δ_h denotes the sampling at θ = h:

δ_h : x(θ) ↦ x(h).   (10)

With this model, we obtain the (operator) transfer matrix (11) of the closed-loop system.
3 Tracking and Internal Model Principle
In what follows, we assume that the two systems (A_d, B_d, C_d) and (A_c, B_c, C_c) are stabilizable and detectable, and that the latter satisfies the spectrum nondegeneracy condition, i.e., there are no pairs of eigenvalues of A_c that differ by an integer multiple of 2πj/h. This assures the stabilizability and detectability of the sampled system. We further assume that there is no pole-zero cancellation between (A_d, B_d, C_d) and (A_c, B_c, C_c).
Let W_{er}(z) be the closed-loop transfer operator from r to e given by (11). We first consider tracking of e^{μt}v (Re μ ≥ 0), which is expressed as {λ^k v(θ)}_{k=0}^∞, λ = e^{μh}, v(θ) = e^{μθ}v. The z-transform of this function is, according to (3), zv(θ)/(z − λ).

We say that λ ∈ C is a transmission zero of the closed-loop transfer operator W_{er}(z) if there exists v(θ) ∈ U such that

W_{er}(λ)v(θ) = 0.   (12)

If this holds, v(θ) is called a zero-directional vector associated with λ.
The following theorem was proved by [12]:
Theorem 3.1 Suppose that the closed-loop transfer operator W_{er}(z) is stable, i.e., it has all its poles inside the open unit disc {|z| < 1}. Assume also that the holder is the zero-order hold: H(t) ≡ 1. Then

1. If λ ≠ 1, tracking without stationary ripples is possible only by incorporating μ into P(s) as a pole.

2. If λ = 1, tracking without ripples is possible by incorporating λ into C(z) as a pole.
3. When λ = 1, the stationary ripple is given by W_{er}(1)v(θ).
This theorem tells us the following, assuming the zero-order hold: i) tracking of e^{μt} (μ ≠ 0) is possible only by incorporating a continuous-time compensator which contains the internal model of this exogenous signal; ii) in the case λ = 1, which is the one mostly encountered, tracking is possible by incorporating a digital internal model z/(z − 1); iii) even if complete tracking is not possible, the ripple is explicitly computable.
It is also possible to characterize transmission zeros in terms of the state space representation and invariant zeros. Hence the case of tracking signals with simple poles is quite similar to the standard case. However, tracking signals with repeated poles is quite different, and the situation cannot be described with simple pole-zero arguments. To see this, consider the ramp signal t. The z-transform of this signal is

hz/(z − 1)² + θz/(z − 1),

and the double and simple poles here do not appear separately. Furthermore, in this case, tracking can be achieved by incorporating one pole, z = 1, into the digital compensator, and the other, s = 0, into the analog part. Therefore, a more elaborate study is necessary in the general case.
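The two-term form of this z-transform is easy to verify numerically at a test point. The following small sketch is only illustrative; the values h = 0.5, θ = 0.3, z = 2 are arbitrary choices (with |z| > 1 for convergence), not from the paper.

```python
import numpy as np

# Lifted ramp: phi_k(theta) = k*h + theta.  Compare a long partial sum of
# sum_{k>=0} (k*h + theta) z^{-k} with the closed form h z/(z-1)^2 + theta z/(z-1).
h, theta, z = 0.5, 0.3, 2.0
ks = np.arange(0, 200)
partial_sum = np.sum((ks * h + theta) * z**(-ks))
closed_form = h * z / (z - 1.0)**2 + theta * z / (z - 1.0)
```

The two expressions agree to machine precision, confirming that the double pole at z = 1 carries the factor h while the simple pole carries the intersample parameter θ.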
Using a mixed continuous-time/discrete-time model, Sung and Hara [8] derived a state space necessary and sufficient condition for tracking. A geometric approach was taken in [6] to obtain a necessary and sufficient condition. Somewhat earlier, Urikura and Nagata [10] gave a geometric condition for the case of deadbeat tracking. Here we look for a condition in the frequency domain. For simplicity, we assume that the underlying systems are single-input/single-output.
To see the difficulty involved, let us write the loop transfer function of Fig. 1 as p(z, θ)/q(z). Note that the denominator does not depend on the parameter θ. This can also be seen from formula (11), where the characteristic polynomial depends only on z. This means that if we consider intersample tracking, its continuous-time behavior is reflected in the numerator p(z, θ), and not in the denominator q(z). This is not suitable for stating the internal model principle in the usual way, where the system characteristics are reflected in the denominator, and the internal model principle is stated in terms of the relationship between the denominators of the loop transfer function and of the exogenous signal generator. This type of statement is clearly impossible for sampled-data systems if we consider intersample tracking.
It is therefore desirable to recover a continuous-time transfer function in order to prove the internal model principle. How can this be done? To this end, let us first introduce the notion of finite Laplace transforms.
Definition 3.2 Let φ be any function or distribution on the interval [0, h]. The finite Laplace transform, denoted L_h[φ], is defined by

L_h[φ](s) := ∫_0^h φ(t) e^{−st} dt.   (13)

The integral must be understood in the sense of distributions if φ is a distribution.
Note that, in view of the well-known Paley–Wiener theorem [9], L_h[φ](s) is always an entire function of exponential type.

The z-transform of a function φ on [0, ∞) and its Laplace transform can be related by the following lemma:

Lemma 3.3 Suppose that φ satisfies the estimate

‖φ‖_{L¹[kh,(k+1)h]} ≤ c e^{αk}   (14)

for some c, α > 0, where ‖φ‖_{L¹[kh,(k+1)h]} is the L¹-norm on [kh, (k+1)h]. Then the Laplace transform L[φ] exists, and

L_h[Z[φ][z]]|_{z=e^{hs}} = L[φ](s).   (15)
Proof That φ is Laplace transformable is obvious from (14). To show (15), observe

L[φ](s) = Σ_{k=0}^∞ ∫_{kh}^{(k+1)h} φ(t) e^{−st} dt = Σ_{k=0}^∞ e^{−khs} L_h[φ_k](s)

= L_h[Σ_{k=0}^∞ z^{−k} φ_k]|_{z=e^{hs}} = L_h[Z[φ][z]]|_{z=e^{hs}}.   (16)

The interchange of integral and summation in the second line is justified by the absolute convergence of the Laplace transform. □
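Identity (15) can also be checked numerically for a concrete signal. The sketch below uses φ(t) = e^{−t} as a hypothetical test function (so L[φ](s) = 1/(s+1)) and arbitrary values of h and s; SciPy's quadrature is assumed available.

```python
import numpy as np
from scipy.integrate import quad

# Check (15) for phi(t) = e^{-t}.  The left-hand side resums the finite Laplace
# transforms of the pieces phi_k(theta) = phi(k h + theta) at z = e^{h s}.
h, s = 0.5, 2.0
phi = lambda t: np.exp(-t)
rhs = 1.0 / (s + 1.0)                   # L[phi](s)
lhs = sum(np.exp(-k * h * s) *
          quad(lambda th: phi(k * h + th) * np.exp(-s * th), 0.0, h)[0]
          for k in range(80))
```

Truncating the sum at 80 terms is more than enough here, since the summands decay like e^{−kh(s+1)}.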
Note that replacing z by e^{hs} corresponds to the left shift operator φ(t) ↦ φ(t + h). Therefore, the digital compensator C(z) has the (continuous-time) transfer function C(e^{hs}). The hold element has bounded support, so it has a Laplace transform representation that is an entire function. For example, the zero-order hold element has the transfer function (1 − e^{−hs})/s. This is clearly entire, due to the pole-zero cancellation at s = 0. Therefore, the loop transfer function of Fig. 1 falls into the category of the pseudorational class as defined in [11]. Roughly speaking, a transfer function G(s) is pseudorational if it admits a factorization G(s) = p(s)/q(s) where p(s) and q(s) are Laplace transforms of distributions with compact support in (−∞, 0]. Such a function d(s) is said to divide q(s) if q(s)/d(s) is again the same type of function, i.e., the Laplace transform of a distribution with compact support in (−∞, 0]. (In particular, it is an entire function of exponential type; see [11] for details.) Using the results obtained there, we can now state and prove the necessity part of the internal model principle. A difficulty here is that although the loop transfer function can be well described in the s-domain, the closed-loop transfer function cannot, because sampling is not a time-invariant operator in the continuous-time sense. Therefore, we must bypass this problem without explicitly using the closed-loop transfer function W_{er}(z).
Theorem 3.4 Consider the unity feedback system given in Fig. 1. Suppose that the exogenous signal is generated by d^{−1}(s), with poles only in the closed right-half plane. Suppose also that the closed-loop system is internally stable, and let G(s) = q^{−1}(s)p(s) be a coprime factorization of the loop transfer function. If the closed-loop system asymptotically tracks any signal generated by d^{−1}(s), then d(s) divides q(s).
Proof Let r(t) be any signal generated by d^{−1}(s), and let y(t) be the corresponding output tracking r(t). We first consider the system in the discrete-time mode. If asymptotic tracking occurs, then the error e(t) must also converge to zero at the sampling instants, i.e., e(kh) → 0 as k → ∞. Therefore, we have

y(t) = y₁(t) + y₂(t),

where y₂(t) is the response (of the closed-loop system) corresponding to the input e(kh), and y₁(t) is the initial-state response. Since the closed-loop system is stable and e(kh) → 0 as k → ∞, we must have y₂(t) → 0 as t → ∞. This means that y₁(t) must be asymptotic to r(t). Also, since the sampled input e(kh) to the closed-loop system approaches zero, y₁(t) should be asymptotic to the initial-state response of the open-loop system G(s) (see Fig. 2; i.e., the system is almost in the stationary mode). This means that for a sufficiently large sampling step k, y₁(kh + θ) ≈ r(kh + θ). Furthermore, this output must be generated by an initial state of G(s). Therefore, any initial-state response generated by d^{−1}(s) must be contained in the set of initial-state responses generated by q^{−1}(s). In the terminology of [11],

X^d ⊂ X^q.
According to [11, Theorem 4.6], this implies d(s)|q(s). □
Figure 2: Stationary Mode (the open-loop system q^{−1}(s)p(s) driven by the generator d^{−1}(s), with signals e_k(θ) and y_k(θ))
The necessity theorem above is still unsatisfactory for the following reason: we have replaced the digital compensator C(z) by the delay element C(e^{hs}). But the poles of C(e^{hs}) are not necessarily poles of C(z), since the latter is just a digital compensator, and only in combination with the analog hold element does it generate a continuous-time response. For example, consider the zero-order hold (1 − e^{−hs})/s. If the preceding digital element has z − 1 in its denominator, it produces the signal 1/s via the cancellation (1 − e^{−hs})/(z − 1)|_{z=e^{hs}} = e^{−hs}. We thus make the following hypothesis:
Hypothesis: Let C(z) and φ(s) be the transfer functions of the digital compensator and the holder, respectively. Then the product C(e^{hs})φ(s) can be written as a product of C'(e^{hs}) and φ'(s), which are rational functions of e^{hs} and of s, respectively.
This hypothesis is satisfied, for example, for

C(z) = z/(z − 1),  φ(s) = (1 − e^{−hs})/(s² + ω²),  ω = 2π/h.
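The mechanism at work here, a hold numerator 1 − e^{−hs} cancelling imaginary-axis poles at integer multiples of 2πj/h so that the product stays entire, can be checked numerically. The sketch below uses an arbitrary h and verifies, near s = jω with ω = 2π/h, that the singularity of (1 − e^{−hs})/(s² + ω²) is removable.

```python
import numpy as np

h = 0.5
w = 2.0 * np.pi / h

# The numerator 1 - e^{-h s} vanishes at s = j w, matching the pole of 1/(s^2 + w^2).
assert abs(1.0 - np.exp(-h * 1j * w)) < 1e-12

# Near the removable singularity, psi(s) = (1 - e^{-h s})/(s^2 + w^2) stays close
# to its L'Hopital limit  h e^{-h s}/(2 s)  evaluated at s = j w.
limit = h / (2.0 * 1j * w)
s = 1j * w + 1e-6
psi = (1.0 - np.exp(-h * s)) / (s**2 + w**2)
```

Evaluating psi slightly off the singular point reproduces the limit to within the perturbation size, as expected for an entire function.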
Remark 3.5 A similar hypothesis is made in [6].
Let us now state and prove the sufficiency part of our internal model principle.
Theorem 3.6 Let C(z), C'(z), φ(s) and φ'(s) be as above, satisfying the hypothesis. Write the loop transfer function P(s)C(e^{hs})φ(s) as q^{−1}(s)p(s)C'(e^{hs}), where q(s) contains the denominators of P(s) and φ'(s). Suppose that the closed-loop system of Fig. 1 is stable, that the exogenous signal is generated by d^{−1}(s), and that d(s)|q(s). Then the closed-loop system asymptotically tracks any signal generated by d^{−1}(s).
Proof Since d(s)|q(s), if we look only at the sampled instants, the closed-loop system contains the discrete-time internal model. Therefore, by internal stability, tracking occurs at least at the sampled instants, and the error input to the loop transfer function tends to zero. Hence for sufficiently large t₀ = kh, the output y(t), t ≥ t₀, is close to the tracking signal d^{−1}r₀ at the sampled instants. Resetting t₀ to zero, the output y(t) becomes

y(t) = q^{−1}x₀ + ξ(t),

where ξ(t) is the output corresponding to the input e_k(h), so that ξ(t) → 0. (The output of C'(z) contributes to the output only through the continuous-time part q^{−1}(s), so it need not be counted in the expression above.) Therefore, q^{−1}x₀ must be asymptotic to d^{−1}r₀ at least at the sampled instants t = kh. Since d^{−1}r₀ is a linear combination of exponential polynomials, we may confine our consideration to tracking signals of the type p e^{λt}. However, since we have the spectrum nondegeneracy condition, this occurs only when the output q^{−1}x₀ is also asymptotic on the intersample intervals. □
Remark 3.7 If we do not have the spectrum nondegeneracy condition, then we cannot conclude tracking in the intersample periods. Indeed, if we have both s = 0 and s = 2πj/h as poles in the continuous-time part, then it may well happen that the closed-loop system tracks a sine wave even though the exogenous signal is a step, because the feedback-loop error signal does not contain any intersample information. However, in such a case, the detectability property obviously fails.
Acknowledgments. I wish to thank Professors M. Ikeda and S. Hara for helpful discussions.
References
[1] T. C. Chen and B. A. Francis, "Stability of sampled-data systems," Technical Report No. 8905, University of Toronto, 1989.
[2] B. A. Francis and T. T. Georgiou, "Stability theory for linear time-invariant plants with periodic digital controllers," IEEE Trans. Autom. Control, AC-33: 820–832, 1988.
[3] T. Hagiwara and M. Araki, "Design of a stable feedback controller based on the multirate sampling of the plant output," IEEE Trans. Autom. Control, AC-33: 812–819, 1988.
[4] S. Hara and P. T. Kabamba, "Worst case analysis and design of sampled data control systems," Proc. 29th CDC: 202–203, 1990.
[5] P. T. Kabamba, "Control of linear systems using generalized sampled-data hold functions," IEEE Trans. Autom. Control, AC-32: 772–783, 1987.
[6] A. Kawano, T. Hagiwara and M. Araki, "Robust servo condition for sampled-data systems," (in Japanese) SICE Control Theory Symp.: 35–40, 1990.
[7] B. Bamieh and J. B. Pearson, "A general framework for linear periodic systems with applications to H∞ sampled-data control," Tech. Rep. 9021, Rice Univ., 1990.
[8] H.-K. Sung and S. Hara, "Ripple-free condition in sampled-data control systems," SICE 13th DST Symp.: 269–272, 1991.
[9] F. Treves, Topological Vector Spaces, Distributions and Kernels, Academic Press, New York, 1967.
[10] S. Urikura and A. Nagata, "Ripple-free deadbeat control for sampled-data systems," IEEE Trans. Autom. Control, AC-32: 474–482, 1987.
[11] Y. Yamamoto, "Pseudo-rational input/output maps and their realizations: a fractional representation approach to infinite-dimensional systems," SIAM J. Control & Optimiz., 26: 1415–1430, 1988.
[12] Y. Yamamoto, "New approach to sampled-data systems: a function space method," Proc. 29th CDC: 1882–1887, 1990.
SUPER-OPTIMAL HANKEL-NORM APPROXIMATIONS

Fang-Bo YEH and Lin-Fang WEI
Department of Mathematics, Tunghai University
Taichung, Taiwan, Republic of China
Abstract: It is well known that optimal Hankel-norm approximations are seldom unique for multivariable systems. This stems from the Hankel norm being a somewhat crude criterion for the reduction of multivariable systems. In this paper, the strengthened condition originated by N. J. Young is employed to restore uniqueness. A state-space algorithm for the computation of the superoptimal solution is presented.
I. Introduction
Recent developments in optimal Hankel-norm approximation [1]–[3] have attracted much attention in the control community. As pointed out in [1], based on the Kronecker theorem and singular value analysis, the Hankel-norm criterion appears to be very natural and useful. Roughly speaking, the Hankel norm, or the Hankel singular values, can be thought of as a measure of the controllability and observability of an LTI system, which has strong relations to the McMillan degree of the system. In addition to this sound physical meaning, another merit of the Hankel-norm criterion is that a lower-degree approximation with minimum error, or a minimum-degree approximation within a given tolerance, can be easily computed. These features spare the designer endless iterations in the face of large-scale multivariable systems. The celebrated paper [2] gives detailed state-space solutions and their L∞ error bounds for these problems. For single-input or single-output systems, it is well known that the optimal Hankel-norm approximation is unique and can be easily determined from the Schmidt pair of the associated Hankel operator. However, the optimal solutions are seldom unique for multivariable systems. The problem of how to choose the best solution then naturally arises. A simple example reported in [2] is used to clarify the situation. Consider the system

G(s) = [ (s² + 0.4s)/(s² + 1.25s + 0.090)   0 ;  0   1/(s + 0.5) ].

A bit of calculation shows that all optimal Hankel-norm approximations with McMillan degree two are given by

Ĝ(s) = [ Ĝ₁(s)   0 ;  0   θ(s) ],

where θ is any stable function of McMillan degree one with ‖1/(s + 0.5) − θ‖_H ≤ 1/2. Here ‖·‖_H stands for the Hankel norm of a system. It is trivial to see that θ(s) = 1/(s + 0.5) would be the best choice in a natural sense. However, it is not at all clear how this could be generalized in [2]. The nonuniqueness comes from the Hankel norm being
somewhat of a crude criterion for the reduction of multivariable systems. To make the solution unique, some finer measure should be imposed. In this paper, the strengthened condition of [9] is employed to restore uniqueness.
To formulate the problem more precisely, we begin with the nomenclature. The symbol RL^{∞,p×q} denotes the space of p × q matrix-valued, proper and real-rational functions in s with no poles on the jω axis. Its subspace of functions with no poles in the right half plane is denoted RH^{∞,p×q}. The superscript p × q will be omitted if the size is irrelevant. The problem of optimal Hankel-norm approximation can be defined as follows: given G ∈ RH^{∞,p×q} and an integer k ≥ 0, find a Ĝ ∈ RH^{∞,p×q} with McMillan degree k such that the Hankel norm of the error is minimized, i.e.,

min_{Ĝ ∈ RH^{∞,p×q}, deg(Ĝ) = k} ‖G − Ĝ‖_H.   (1.1)

It is known that the Hankel norm is related only to the stable part of a system, and the addition of an antistable function does not affect the norm. Thus, the problem of optimal Hankel-norm approximation is equivalent to

min_{Ĝ ∈ RH^{∞,p×q}_{-,k}} s_1^∞(G − Ĝ),   (1.2)

where RH^{∞,p×q}_{-,k} denotes the subset of RL^{∞,p×q} of functions with no more than k poles in the left half-plane, RH^{∞,p×q}_{-,0} is abbreviated to RH^{∞,p×q}_-, and

s_j^∞(E) := max_ω s_j(E(jω)).

The symbol s_j(·) denotes the j-th largest singular value of a constant matrix. Deleting the antistable part of the Ĝ obtained from (1.2) then gives the optimal solution to (1.1). In (1.2), only the first frequency-dependent singular value of the error system G − Ĝ is minimized. A generalization is to seek to minimize all the singular values whenever possible. Therefore, the problem of superoptimal Hankel-norm approximation is defined as follows: given G ∈ RL^{∞,p×q} and an integer k ≥ 0, find a Ĝ ∈ RH^{∞,p×q}_{-,k} such that the sequence

( s_1^∞(G − Ĝ), s_2^∞(G − Ĝ), s_3^∞(G − Ĝ), … )
is minimized lexicographically. To our knowledge, this problem was first studied in [10], wherein the existence and uniqueness of the superoptimal solution were proved using conceptual operator-theoretic constructions. In this paper, a different approach requiring only simple state-space calculations will be studied. This approach is much more straightforward and comprehensible. Besides, the pole-zero cancellations in the algorithm will be analyzed in detail.
II. Mathematical Preliminaries
In the development of this work, the state-space approach is adopted. For a proper and real-rational matrix function G(s), G~(s) is synonymous with G^T(−s), and the data structure [A, B, C, D] denotes a realization of G(s), i.e.,

G(s) = C(sI − A)^{−1}B + D = [A, B, C, D],
where A, B, C and D are constant matrices with compatible dimensions. A collection of state-space operations using this data structure can be found in [6]. For a stable system G(s) with realization [A, B, C, D], the corresponding controllability and observability Gramians are defined as the unique nonnegative definite solutions to the Lyapunov equations

AP + PA^T + BB^T = 0,

A^T Q + QA + C^T C = 0,

respectively. If the realization is minimal, then P and Q must be positive definite. When P and Q are equal and diagonal, we say that the realization [A, B, C, D] is balanced, and the j-th largest diagonal element, denoted σ_j(G), is defined as the j-th Hankel singular value of G(s). It is always possible to obtain a balanced realization by means of a state similarity transformation.
The problem of approximating a given Hankel matrix by one of lower rank was studied in [4] and [5]. Their striking result states that restricting the solution to be a Hankel matrix does not affect the achievable error. These remarkable results will be briefly reviewed here. We denote by RL^{2,p} the space of p-vector-valued, strictly proper and real-rational functions with no poles on the jω axis; it is a Hilbert space under the inner product

⟨u, v⟩ := (1/2π) ∫_{−∞}^{∞} u^T(−jω) v(jω) dω.

The subspace of RL^{2,p} with no poles in the right-half plane is denoted RH^{2,p}, and its orthogonal complement is denoted RH^{2,p}_⊥. For a system G in RH^{∞,p×q}, the Hankel operator with symbol G, denoted Γ_G, maps RH^{2,q}_⊥ to RH^{2,p}. For x ∈ RH^{2,q}_⊥, Γ_G x is defined as

Γ_G x := Π(G x),

where Π is the orthogonal projection operator which maps RL^{2,p} onto RH^{2,p}. An important result is that the Hankel operator Γ_G is of finite rank and its singular values are just the Hankel singular values of G(s). A pair (v_j, w_j) satisfying

Γ_G v_j = σ_j(G) w_j,

Γ*_G w_j = σ_j(G) v_j

is called the j-th Schmidt pair of Γ_G corresponding to σ_j(G). The following lemma relates the Schmidt pair of Γ_G to any optimal solution, and is central to this study.
Lemma 2.1: Given G ∈ RH^{∞,p×q} and an integer k ≥ 0. Then

(1) min_{Ĝ ∈ RH^{∞,p×q}_{-,k}} s_1^∞(G − Ĝ) = σ_{k+1}(G) := σ;

(2) if σ_k(G) > σ, then for any optimal Ĝ we have (G − Ĝ)v = σw and (G − Ĝ)~w = σv. Here (v, w) denotes the (k+1)-st Schmidt pair of the Hankel operator Γ_G. In case k = 0, σ₀(G) is interpreted as +∞. □
A matrix G in RL∞ is all-pass if G~(s)G(s) = I, and is inner if in addition G belongs to RH∞. An inner matrix that has a left inverse in RH∞ is called a minimum-phase inner matrix [8]. The following lemmas give some important properties of minimum-phase inner matrices.
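The all-pass identity is easy to test numerically for a simple inner function. A small sketch using the scalar example G(s) = (s − 1)/(s + 1), which is not from the paper but is a standard stable all-pass factor; in the scalar real-rational case G~(s) is simply G(−s).

```python
import numpy as np

# G(s) = (s-1)/(s+1): stable, and G~(s) G(s) = G(-s) G(s) = 1 on the imaginary axis.
G = lambda s: (s - 1.0) / (s + 1.0)
ws = np.linspace(-10.0, 10.0, 101)
residual = max(abs(G(-1j * w) * G(1j * w) - 1.0) for w in ws)
```

The residual stays at machine precision across the sampled frequencies, confirming the all-pass property |G(jω)| = 1.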
Lemma 2.2 [8]: Let G(s) = [A, B, C, D] be inner, with controllability Gramian P and observability Gramian Q. Then G is minimum-phase inner if and only if ‖PQ‖ < 1, and

G^L(s) = [A, BD^T + PC^T, D^T C(PQ − I)^{−1}, D^T]

is one of the stable left inverses of G. □
Lemma 2.3: For a strictly tall inner matrix G(s) = [A, B, C, D] having nonsingular controllability Gramian P, a right-coprime factorization of G(−s) over RH∞, i.e. G(−s) = N_G(s)M_G^{−1}(s), can be written with

M_G(s) = [−A^T, −P^{−1}B, B^T, I],

N_G(s) = [−A^T, −P^{−1}B, CP + DB^T, D],

where M_G is inner and N_G is minimum-phase inner. □
III. Diagonalizing Matrices
In this section, we shall study how to construct all-pass matrices which diagonalize each possible error system. In order to determine the second singular value s_2^∞(G − Ĝ) while keeping s_1^∞(G − Ĝ) unchanged, a natural way is to diagonalize the error system G − Ĝ by pre- and post-multiplying by two suitable all-pass matrices. So as not to sacrifice any generality, it is important that this diagonalizing process hold for all optimal solutions. Recalling that any optimal solution Ĝ satisfies part (2) of Lemma 2.1, it is clear that the Schmidt pair (v, w) serves as a starting point for this diagonalization process.

We begin by assuming that a minimal balanced realization [A, B, C, D] of a stable system G with McMillan degree n is given. Then the associated controllability and observability Gramians are both equal and can be arranged as diag(σ, Σ), where σ = σ_{k+1}(G) and Σ is also diagonal. To have a clear presentation, we shall hereinafter assume that σ is distinct, i.e.,

(A1) σ_k(G) > σ > σ_{k+2}(G).
Relaxing this assumption is possible but only leads to a messier indexing notation. Partition the matrices A, B and C as

A = [ a₁₁  A₁₂ ; A₂₁  A₂₂ ],  B = [ B₁ ; B₂ ],  C = [ C₁  C₂ ],

where a₁₁ is a scalar, B₁ is a row vector and C₁ is a column vector. Obviously, the following Lyapunov equations hold:

[ a₁₁  A₁₂ ; A₂₁  A₂₂ ][ σ  0 ; 0  Σ ] + [ σ  0 ; 0  Σ ][ a₁₁  A₁₂ ; A₂₁  A₂₂ ]^T + [ B₁B₁^T  B₁B₂^T ; B₂B₁^T  B₂B₂^T ] = 0,

[ a₁₁  A₁₂ ; A₂₁  A₂₂ ]^T[ σ  0 ; 0  Σ ] + [ σ  0 ; 0  Σ ][ a₁₁  A₁₂ ; A₂₁  A₂₂ ] + [ C₁^TC₁  C₁^TC₂ ; C₂^TC₁  C₂^TC₂ ] = 0.
To simplify the notation, we will assume that G has been scaled such that B₁B₁^T = C₁^TC₁ = 1, which entails no loss of generality. The (k+1)-st Schmidt pair v and w for the above balanced realization can be written as

v(−s) = [A^T, e₁/√σ, B^T, 0],  w(s) = [A, e₁/√σ, C, 0],

where e₁ is the first column of the n × n identity matrix. By direct state-space calculation, it is easy to verify that v~v = w~w. Thus, it is possible to factorize v(−s) and w(s) as

v(−s) = v̂(s)a(s),  w(s) = ŵ(s)a(s),

where v̂ and ŵ are all-pass vectors and a is a scalar function. The state-space realizations of v̂ and ŵ can be written as

v̂(s) = [A_v, A₁₂^T, C_v, B₁^T],  ŵ(s) = [A_w, A₂₁, C_w, C₁],

in which

A_v = A₂₂^T + σA₁₂^T(A₂₁^T − A₁₂X_v),  C_v = B₂^T + σB₁^T(A₂₁^T − A₁₂X_v),

A_w = A₂₂ + σA₂₁(A₁₂ − A₂₁^TX_w),  C_w = C₂ + σC₁(A₁₂ − A₂₁^TX_w),

where X_v and X_w satisfy the following algebraic Riccati equations:

(A₂₂ + σA₂₁A₁₂)X_v + X_v(A₂₂ + σA₂₁A₁₂)^T − σX_vA₁₂^TA₁₂X_v − σA₂₁A₂₁^T = 0,   (3.1)

(A₂₂ + σA₂₁A₁₂)^TX_w + X_w(A₂₂ + σA₂₁A₁₂) − σX_wA₂₁A₂₁^TX_w − σA₁₂^TA₁₂ = 0.   (3.2)
A direct computation shows that any solutions to (3.1) and (3.2) make v̂ and ŵ all-pass. Two special kinds of solutions are of interest, namely the stabilizing and the antistabilizing solutions. For our purpose, we choose X_v and X_w to be the antistabilizing solutions, which is inspired by [13]. It will be shown that the use of antistabilizing solutions is of great help in analyzing the pole-zero cancellations. To ensure the existence of antistabilizing solutions, it is natural to assume that

(A2) (A₂₂^T, A₁₂^T) and (A₂₂, A₂₁) are controllable.
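The distinction between stabilizing and antistabilizing Riccati solutions can be illustrated on a generic ARE, not the paper's equations (3.1)–(3.2). A sketch assuming SciPy, using the standard form A^TX + XA − XBB^TX + Q = 0 with hypothetical data: SciPy returns the stabilizing solution, and the antistabilizing one is obtained by the sign substitution X = −Y, which maps the equation onto the same ARE for −A.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Generic ARE  A^T X + X A - X B B^T X + Q = 0  with hypothetical data (R = I).
A = np.array([[1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Stabilizing solution: A - B B^T X_plus is Hurwitz.
X_plus = solve_continuous_are(A, B, Q, R)

# Antistabilizing solution: substituting X = -Y turns the same ARE into the ARE
# for (-A, B, Q); the stabilizing solution of that one gives X_minus = -Y, and
# A - B B^T X_minus then has all eigenvalues in the open right half plane.
X_minus = -solve_continuous_are(-A, B, Q, R)

eig_plus = np.linalg.eigvals(A - B @ B.T @ X_plus)
eig_minus = np.linalg.eigvals(A - B @ B.T @ X_minus)
```

Both matrices solve the same quadratic equation; they differ only in which invariant subspace of the associated Hamiltonian they select, which is exactly the freedom exploited in the text.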
Then it can be verified that Q_v = σX_v + Σ and Q_w = σX_w + Σ satisfy the Lyapunov equations

A_v^T Q_v + Q_v A_v + C_v^T C_v = 0,

A_w^T Q_w + Q_w A_w + C_w^T C_w = 0,

respectively.
In case k = 0, it can be proved that Q_v and Q_w are nonsingular provided that σ is distinct. However, for general k the situation is not clear, and we will assume that

(A3) Q_v and Q_w are nonsingular.

The all-pass completions [7] V_⊥ and W_⊥ of v̂ and ŵ share the state matrices A_v and A_w, where [B₁^T  B_⊥] and [C₁  C_⊥] are orthogonal. Denote V̂ = [v̂  V_⊥] and Ŵ = [ŵ  W_⊥]. Then the error G − Ĝ can be diagonalized as shown in the following lemma.
Lemma 3.1: Suppose that G satisfies Assumptions (A1), (A2) and (A3). Then for any optimal solution Ĝ we have

Ŵ~(s)(G − Ĝ)V̂(−s) = [ σΩ  0 ; 0  W_⊥~(G − Ĝ)V_⊥(−s) ],

where Ω is all-pass and independent of Ĝ. □
IV. Super-Optimal Solutions
The superoptimal model matching problem, i.e. the case k = 0, was first studied in [9], wherein a high-level algorithm is given. Implementations of Young's algorithm have been reported in [11] using the polynomial approach, and in [12] and [13] using the state-space approach. The basic idea of Young's algorithm can be summarized as follows. First, the minimum achievable L∞ norm of the error is calculated. Then, two all-pass matrices are constructed to diagonalize the error system, which results in the same model-matching problem but with one less dimension. Hence, by induction, this dimension-peeling process can be continued layer by layer until a unique solution is found. Finally, the superoptimal solution of the original problem is constructed from the solutions of the layers. However, a difficulty arises when applying this idea to the general case k > 0. The reason is that the sum and the product of two RH∞_- functions are still RH∞_- functions, but the same is generally not true for RH∞_{-,k} functions. This makes the minimum achievable norm of the subsequent layer hard to determine. In order to ensure that the final superoptimal solution of the original problem is in RH^{∞,p×q}_{-,k}, the solution set of each subsequent layer should be precisely characterized; this will be studied in this section.

We now continue the work of the previous section. By [2], an optimal solution
is given by

Ĝ₁(s) = [ Γ^{−1}(σ²A₂₂^T + ΣA₂₂Σ + σC₂^TC₁B₁B₂^T)   Γ^{−1}(ΣB₂ − σC₂^TC₁B₁) ; C₂Σ − σC₁B₁B₂^T   D + σC₁B₁ ],

where Γ = Σ² − σ²I is nonsingular by (A1). Recalling Lemma 3.1, we see that any optimal solution Ĝ should satisfy

Ŵ~(G − Ĝ)V̂(−s) = [ σΩ  0 ; 0  W_⊥~(G − Ĝ)V_⊥(−s) ].
Thus, ĜV̂(−s) and Ĝ₁V̂(−s) have the same first column, i.e.,

ĜV̂(−s) =: [h  H],  Ĝ₁V̂(−s) = [Ĝ₁v̂(−s)  Ĝ₁V_⊥(−s)] =: [h  H₁].
Besides, it is required that ŵ~(H − H₁) = 0. Now observe that

Ŵ~(G − Ĝ)V̂(−s) = Ŵ~(G − Ĝ₁)V̂(−s) − Ŵ~(Ĝ − Ĝ₁)V̂(−s) = [ σΩ  0 ; 0  F₁ − W_⊥~(H − H₁) ],

where

F₁(s) := W_⊥~(G − Ĝ₁)V_⊥(−s).

Hence to compute the superoptimal solution, we must find H such that

(C1) [h  H] ∈ RH^{∞,p×q}_{-,k} V̂(−s),
(C2) ŵ~(H − H₁) = 0,
(C3) the sequence s_j^∞(F₁ − W_⊥~(H − H₁)), j = 1, 2, …, is minimized lexicographically.
To solve the above problem, the first step is to parametrize, in terms of some free function, all the functions H that satisfy both Conditions (C1) and (C2). Then the minimization (C3) can be carried out. These parametrizations are studied in Theorems 4.1 and 4.2.
First of all, according to Lemma 2.3, we introduce the following factorizations, which play an important role in the elimination of (C1) and (C2):

v̂(−s) = n_v(s) m_v^{−1}(s),  ŵ(−s) = n_w(s) m_w^{−1}(s),

with

n_v(s) = [−A_v^T, −P_v^{−1}A₁₂^T, C_vP_v + B₁^TA₁₂, B₁^T],

n_w(s) = [−A_w^T, −P_w^{−1}A₂₁, C_wP_w + C₁A₂₁^T, C₁],

where P_v and P_w are the unique negative definite solutions to the Lyapunov equations

A_vP_v + P_vA_v^T + A₁₂^TA₁₂ = 0,

A_wP_w + P_wA_w^T + A₂₁A₂₁^T = 0,

respectively. It is important to notice that n_v(s) and n_w(s) are minimum-phase inner and hence have stable left inverses. With these notations, we have the following theorem.
" D ~ ° ° , p X q L r [ o't T h e o r e m 4.1: [h H] 6 . . . . ,k  ~  w if and only if
][~H °°'px(ql) := )~ n ~ : . l . . % (  ~ ) + ,k
where In. (s) is any stable left inverse of nv(*) . [ ]
Now, by Theorem 4.1, it follows that H and H₁ can be parametrized as

H = h l_{n_v} V̄⊥(−s) + R,
H₁ = h l_{n_v} V̄⊥(−s) + R₁,

for some R and R₁ in RH_k^{∞,p×(q−1)}. In other words, R₁ can be computed as

R₁ = h (I − n_v l_{n_v}) V̄⊥(−s),

wherein the function V₂ := (I − n_v l_{n_v}) V̄⊥(−s) has at most n − 1 states and is antistable. This can be verified by letting
l_{n_v}(s) be the stable left inverse constructed in Lemma 2.2; a series of state-space calculations then yields an explicit antistable realization of V₂(s) with at most n − 1 states.
Once R₁ is found, Condition (C2) is equivalent to n_wᵀ(R − R₁) = 0. The following theorem characterizes all such functions R.
Theorem 4.2: R ∈ RH_k^{∞,p×(q−1)} and n_wᵀ(R − R₁) = 0 if and only if

R = l_{n_w}ᵀ n_wᵀ R₁ + W̄⊥ Q   for some Q ∈ RH_k^{∞,(p−1)×(q−1)},

where l_{n_w}(s) is any stable left inverse of n_w(s). □
Hence, by Theorem 4.2, there exist functions Q and Q₁ in RH_k^{∞,(p−1)×(q−1)} such that

R = l_{n_w}ᵀ n_wᵀ R₁ + W̄⊥ Q,
R₁ = l_{n_w}ᵀ n_wᵀ R₁ + W̄⊥ Q₁,

and, in other words,

Q₁ = W̄⊥ᵀ(I − l_{n_w}ᵀ n_wᵀ) G₁ V₂.
To compute the function W₂ := W̄⊥ᵀ(I − l_{n_w}ᵀ n_wᵀ), we choose l_{n_w}(s) as the stable left inverse supplied by Lemma 2.3; a series of state-space calculations then gives a realization of W₂ which is also antistable and has McMillan degree no more than n − 1. Finally, the function to be minimized in (C3) becomes

F₁ − W₂(H − H₁) = F₁ − W₂(R − R₁) = F₁ + Q₁ − Q.

Define

Q̄ := F₁ + Q₁ = W̄ᵀ(G − G₁)V̄⊥(s) + W₂ G₁ V₂.   (4.1)
Then Condition (C3) reduces to the problem of finding a Q in RH_k^{∞,(p−1)×(q−1)} such that the sequence

s₁^∞(Q̄ − Q), s₂^∞(Q̄ − Q), …

is minimized lexicographically. This is just the same superoptimal problem but with one less dimension. The dimension-peeling process can be invoked recursively until the row or column dimension is reduced to one, at which point the solution can be uniquely determined. By induction, the superoptimal solution is clearly unique. And once the superoptimal solution Q is found, G̃ can be recovered as

G̃ = Ĝ + W̄⊥ (Q − Q₁) V₂.   (4.2)
Although this algorithm is conceptually workable, the computation should not follow (4.1) and (4.2) directly. The reason is that the sizes of the A-matrices of Q̄ and G̃ blow up very rapidly if pure state-space additions and multiplications are used, and further computation is then required to obtain minimal realizations of Q̄ and G̃. Owing to this observation, the pole-zero cancellations that occur in (4.1) and (4.2) should be analyzed in detail.
It can be proved that Q̄ is stable and can be realized with n − 1 states. The superoptimal solution G̃ then requires no more than deg(Q̂) + n − 1 states, where Q̂ is the superoptimal Hankel-norm approximation of Q̄. As a result, the computation time required in each recursive step gradually decreases rather than increases. Since our intention is to apply the theory to large-scale multivariable systems, a feasible computer program can be set up only when a full analysis of the pole-zero cancellations has been carried out; the result of this section is therefore valuable from a practical point of view. The proof requires a series of tedious state-space calculations and is omitted.
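The Hankel singular values that drive these degree bounds are computable directly from a state-space realization: for a stable minimal (A, B, C), σ_j = √λ_j(PQ), with P and Q the controllability and observability Gramians. A minimal numerical sketch (the realization below is an arbitrary illustration of ours, not one of the systems of this paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable system (A, B, C):
    sigma_j = sqrt(lambda_j(P Q)), where P and Q solve
    A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    ev = np.linalg.eigvals(P @ Q).real
    return np.sort(np.sqrt(np.clip(ev, 0.0, None)))[::-1]

# Arbitrary illustration: G(s) = 1/(s+1) + 0.1/(s+10).
A = np.array([[-1.0, 0.0], [0.0, -10.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.1]])
sv = hankel_singular_values(A, B, C)
print(sv)  # dominated by the Hankel norm of 1/(s+1), i.e. about 0.5
```

For this decoupled example the largest value is essentially the Hankel norm of the dominant mode 1/(s+1), namely 1/2.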
Theorem 4.3: An explicit realization of Q̄ with n − 1 states can be written down; it is stable and minimal. Moreover, let Q̂ = [Â, B̂, Ĉ, D̂] ∈ RH_k^{∞,(p−1)×(q−1)} be any optimal Hankel-norm approximation to Q̄ with L∞ error σ_{k+1}(Q̄). Then a realization for G̃ is obtained by coupling the realizations of Ĝ and Q̂ through the solution P₁₂ of an associated Sylvester equation. □
5. Concluding Remarks

Throughout this paper, we have concentrated on the computation of superoptimal Hankel-norm approximations. The existence and uniqueness of the superoptimal solution are proved by means of simple state-space calculations; the approach is unlike the work in [10], which is based on conceptual operator-theoretic constructions. In addition, we have given a detailed analysis of the pole-zero cancellations in the algorithm and a bound on the McMillan degree of the superoptimal solution, which generalize the results in [13].
References
[1] S. Y. Kung and D. W. Lin, "Optimal Hankel-norm model reductions: Multivariable systems," IEEE Trans. Automat. Contr., vol. AC-26, pp. 832-852, 1981.
[2] K. Glover, "All optimal Hankel-norm approximations of linear multivariable systems and their L∞ error bounds," Int. J. Contr., vol. 39, pp. 1115-1193, 1984.
[3] J. A. Ball and A. C. M. Ran, "Optimal Hankel norm model reductions and Wiener-Hopf factorization II: the noncanonical case," Integral Eqn. Operator Theory, vol. 10, pp. 416-436, 1987.
[4] V. M. Adamjan, D. Z. Arov, and M. G. Krein, "Analytic properties of Schmidt pairs for a Hankel operator and the generalized Schur-Takagi problem," Math. of the USSR: Sbornik, vol. 15, pp. 31-73, 1971.
[5] V. M. Adamjan, D. Z. Arov, and M. G. Krein, "Infinite block Hankel matrices and related extension problems," AMS Transl., ser. 2, vol. 111, pp. 133-156, 1978.
[6] B. A. Francis, A Course in H∞ Control Theory. New York: Springer-Verlag, 1987.
[7] J. C. Doyle, "Lecture Notes in Advances in Multivariable Control," ONR/Honeywell Workshop, Minneapolis, MN, 1984.
[8] F. B. Yeh and L. F. Wei, "Inner-outer factorizations of right-invertible real-rational matrices," Syst. Contr. Lett., vol. 14, pp. 31-36, 1990.
[9] N. J. Young, "The Nevanlinna-Pick problem for matrix-valued functions," J. Operator Theory, vol. 15, pp. 239-265, 1986.
[10] N. J. Young, "Superoptimal Hankel-norm approximations," in Modelling, Robustness and Sensitivity Reduction in Control Systems, R. F. Curtain, Ed. New York: Springer-Verlag, 1987.
[11] F. B. Yeh and T. S. Hwang, "A computational algorithm for the superoptimal solution of the model matching problem," Syst. Contr. Lett., vol. 11, pp. 203-211, 1988.
[12] M. C. Tsai, D. W. Gu, and I. Postlethwaite, "A state-space approach to superoptimal H∞ control problems," IEEE Trans. Automat. Contr., vol. AC-33, pp. 833-843, 1988.
[13] D. J. N. Limebeer, G. D. Halikias, and K. Glover, "State-space algorithm for the computation of superoptimal matrix interpolating functions," Int. J. Contr., vol. 50, pp. 2431-2466, 1989.
ROBUST CONTROL AND APPROXIMATION IN THE CHORDAL METRIC
Jonathan R. Partington
School of Mathematics, University of Leeds, Leeds LS2 9JT, U.K.
Email: PMT6JRP@UK.AC.LEEDS.CMS1 Fax: +44 532 429925.
Abstract
The chordal metric on SISO transfer functions (possibly unstable and infinite-dimensional), which generates the graph topology, is considered as a measure of robustness for stabilizing controllers. Connections with approximation are also explored.
1. Introduction.
One aim of this paper is to express the margin of robust stabilization of control systems in terms of the chordal metric on the Riemann sphere. This metric has been studied by function theorists (see e.g. Hayman [6]), since it is a natural analogue of the H∞ distance between bounded analytic functions which can be used for functions with poles in the domain in question. Some results in this direction have been given by El-Sakkary [2,3], and it is clear that this is a fruitful area of enquiry.
In section 2 we shall study the continuity of the feedback process with respect to the topology given by the chordal metric (which, as we shall show, coincides with the topology given by the graph and gap metrics when we consider all stabilizable systems  this removes a technical restriction which was necessary in [10]). It will thus be possible to consider the stability margin in the chordal metric and we shall give an explicit expression for this. An application to approximation is also given.
Finally section 3 discusses some examples, including one in which the optimally robust constant feedback for an unstable delay system is calculated explicitly.
All our transfer functions will be single input, single output; it seems that a widely-used definition of the chordal metric for multivariable systems has still to be given.
2. The chordal metric and robustness.
In this section we relate the chordal metric between two (possibly unstable) plants to the gap and graph metrics. We shall see that it yields the same topology, and that it can therefore be used as a measure of robustness.
For convenience we shall throughout this paper use the symbols c = ‖C‖∞ and g = ‖G‖∞ to denote the H∞ norms of the functions in which we are particularly interested.
Three natural metrics can be defined to measure the closeness of one meromorphic function to another. The first two assume that two plants P₁ and P₂ have normalized coprime factorizations P_k = N_k/D_k (k = 1, 2) with N_k*N_k + D_k*D_k = 1 on T, and with X_k, Y_k in H∞ such that X_kN_k + Y_kD_k = 1. See Vidyasagar [13] for details. We write

G_k = [N_k, D_k]ᵀ   for k = 1, 2.
The graph metric d(P₁, P₂) is defined by

d(P₁, P₂) = max{ inf_{Q∈H∞, ‖Q‖≤1} ‖G₁ − G₂Q‖∞ , inf_{Q∈H∞, ‖Q‖≤1} ‖G₂ − G₁Q‖∞ }   (2.1)

(see Georgiou [4], Vidyasagar [13]).
Likewise, the gap metric δ(P₁, P₂) can be defined by

δ(P₁, P₂) = max{ inf_{Q∈H∞} ‖G₁ − G₂Q‖∞ , inf_{Q∈H∞} ‖G₂ − G₁Q‖∞ }   (2.2)

(see Georgiou [4]). Since δ(P₁, P₂) ≤ d(P₁, P₂) ≤ 2δ(P₁, P₂), these define the same topology, which is the natural topology for considering robust stabilization of systems. Meyer [9] has shown recently that the gap topology on systems of a fixed degree n is quotient Euclidean, though in general it is more complicated.
A third metric, the chordal metric κ(P₁, P₂), may be defined as follows. For two complex numbers w₁ and w₂ the chordal distance between them on the Riemann sphere is

κ(w₁, w₂) = |w₁ − w₂| / √((1 + |w₁|²)(1 + |w₂|²)),   (2.3)

with κ(w, ∞) = 1/√(1 + |w|²). Any function of the form

r(z) = e^{iθ} (az + b)/(−b̄z + ā),

with a, b ∈ C and θ ∈ R, is a conformal isometry of the Riemann sphere; that is, it satisfies κ(r(w₁), r(w₂)) = κ(w₁, w₂) for all w₁, w₂, and corresponds to a rotation of the sphere. It is not hard to see that any point w₁ can be mapped to any other point w₂ by using these maps, an observation that will be useful later.
For two meromorphic functions P₁, P₂ in the open right half plane we write

κ(P₁, P₂) = sup{ κ(P₁(s), P₂(s)) : Re s > 0 }.   (2.4)
More about this metric can be found in Hayman [6].
El-Sakkary [2] showed that the chordal metric coincides with the gap metric when restricted to systems with no unstable zeros or poles. The following inequality (see [6]) will be useful to us:

κ(w₁, w₂) ≤ min(|w₁ − w₂|, |1/w₁ − 1/w₂|).   (2.5)
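Formula (2.3) and inequality (2.5) are straightforward to check numerically; the following sketch (sampling ranges are our own choice) implements κ pointwise and verifies (2.5) on random complex pairs:

```python
import cmath, math, random

def chordal(w1, w2):
    """Chordal distance on the Riemann sphere, formula (2.3);
    either argument may be the point at infinity."""
    if w1 == cmath.inf and w2 == cmath.inf:
        return 0.0
    if w1 == cmath.inf:
        return 1.0 / math.sqrt(1.0 + abs(w2) ** 2)
    if w2 == cmath.inf:
        return 1.0 / math.sqrt(1.0 + abs(w1) ** 2)
    return abs(w1 - w2) / math.sqrt((1 + abs(w1) ** 2) * (1 + abs(w2) ** 2))

random.seed(0)
ok = True
for _ in range(1000):
    w1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    k = chordal(w1, w2)
    bound = min(abs(w1 - w2), abs(1 / w1 - 1 / w2))  # inequality (2.5)
    ok = ok and k <= bound + 1e-12
print(ok)  # True
```

The same helper is all that is needed to evaluate the supremum in (2.4) on a frequency grid.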
In some ways, this metric is a more natural one to consider than the graph and gap metrics: one can see at once that two functions P and P′ are close if at each point either |P(s) − P′(s)| is small or |1/P(s) − 1/P′(s)| is small; the latter case handles poles of P and P′. In order to obtain a robustness result, we shall need a lower bound for κ, which acts as a partial converse to inequality (2.5).
Lemma 2.1. For any two complex numbers w₁ and w₂, and for any a with 0 < a < 1,

κ(w₁, w₂) ≥ min{ a²|w₁ − w₂|, a²|1/w₁ − 1/w₂|, 1 − a² } / (1 + a²).
Proof. Consider the following three cases, which are collectively exhaustive:

(a) |w₁| ≤ 1/a and |w₂| ≤ 1/a;
(b) |w₁| ≥ a and |w₂| ≥ a;
(c) either |w₁| < a and |w₂| > 1/a, or vice versa.

In case (a), the formula for κ shows that κ(w₁, w₂) ≥ |w₁ − w₂|/(1 + 1/a²); in case (b) the same applies with w₁ and w₂ replaced by 1/w₁ and 1/w₂; and in case (c), κ(w₁, w₂) ≥ κ(a, 1/a) = (1 − a²)/(1 + a²). □
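A quick numerical spot-check of the lower bound of Lemma 2.1 (random samples of our own choosing; a = 1/√2, the value used in Theorem 2.2 below):

```python
import math, random

def chordal(w1, w2):
    # chordal distance (2.3) on the Riemann sphere (finite arguments)
    return abs(w1 - w2) / math.sqrt((1 + abs(w1) ** 2) * (1 + abs(w2) ** 2))

a = 1 / math.sqrt(2)
random.seed(1)
holds = True
for _ in range(2000):
    w1 = complex(random.uniform(-4, 4), random.uniform(-4, 4))
    w2 = complex(random.uniform(-4, 4), random.uniform(-4, 4))
    lower = min(a ** 2 * abs(w1 - w2),
                a ** 2 * abs(1 / w1 - 1 / w2),
                1 - a ** 2) / (1 + a ** 2)
    holds = holds and chordal(w1, w2) >= lower - 1e-12
print(holds)  # True
```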
El-Sakkary [3] has given some robustness results for the chordal metric, of which the following is typical: if 1/(1 + P) is stable, and

κ(P(s), P′(s)) < 1 / (3(1 + ‖1/(1 + P)‖∞²)^{1/2}),

then 1/(1 + P′) is also stable (see [3]).
Using Lemma 2.1, we can now prove some new robustness results in the chordal metric.
Theorem 2.2. Suppose that P(s) and P′(s) are SISO transfer functions and C(s) ∈ H∞ is a stable controller such that the closed-loop transfer function

G(s) = P(s)/(1 + C(s)P(s))

is BIBO stable, i.e. G(s) ∈ H∞. If

κ(P, P′) ≤ (1/3) min{ 1, g⁻¹, c⁻¹(1 + cg)⁻¹ },

then P′ is also stabilized by C, i.e. G′(s) = P′(s)/(1 + C(s)P′(s)) ∈ H∞.
Proof. Suppose that s is a point such that G′(s) = ∞ but |G(s)| ≤ g = ‖G‖∞ < ∞. At such a point P′(s) = −1/C(s). We may estimate the chordal distance of P(s) from P′(s) using Lemma 2.1. Let a be any number with 0 < a < 1.

(a) |P(s) − P′(s)| = |1/(C(s)(1 − C(s)G(s)))| ≥ 1/(c(1 + cg)); and
(b) |1/P(s) − 1/P′(s)| = |1/G(s)| ≥ 1/g.

Thus

κ(P, P′) ≥ min{ a²/(c(1 + cg)), a²/g, 1 − a² } / (1 + a²).

An easy limiting argument shows that the same is true if G′ is merely unbounded, rather than having a pole in the right half plane. The result now follows on taking a = 1/√2. □
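For a concrete feel for the sufficient condition (our own example, not from the paper): take P(s) = 1/s and C = 1, so that G = 1/(s + 1) and c = g = 1.

```python
# Robustness radius of Theorem 2.2 for P(s) = 1/s, C = 1:
# c = ||C||_inf = 1, G = 1/(s+1), g = ||G||_inf = 1.
c, g = 1.0, 1.0
radius = (1.0 / 3.0) * min(1.0, 1.0 / g, 1.0 / (c * (1.0 + c * g)))
print(radius)  # 1/6 ~ 0.1667
```

The exact margin for this plant, computed in section 3, is 1/√2 ≈ 0.707, so the sufficient condition of Theorem 2.2 is conservative by roughly a factor of four.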
Given c = ‖C‖ and g = ‖G‖ it is in general possible to choose a in order to obtain tighter estimates, but we do not do this here.
We can improve upon the above result and show that if P is close to P′ then G is close to G′ in H∞ norm, as follows.
Theorem 2.3. Under the hypotheses of Theorem 2.2, let ε > 0 be given. Then, if

κ(P, P′) ≤ (1/3) min{ 1, ε/((1 + cg)(1 + c(g + ε))), ε/(g(g + ε)) },

then ‖G − G′‖∞ ≤ ε.
Proof. Again we use Lemma 2.1, and the estimates

|P(s) − P′(s)| = |G(s) − G′(s)| / (|1 − C(s)G(s)| |1 − C(s)G′(s)|)

and

|1/P(s) − 1/P′(s)| = |G(s) − G′(s)| / (|G(s)| |G′(s)|),

to bound κ from below, under the hypothesis that |G(s) − G′(s)| = ε. □
We can now give an alternative proof of the result of [10] that the chordal metric gives the graph topology; indeed we can now treat the case when P is any stabilizable plant (and hence when all nearby plants in the chordal metric are stabilizable by the same controller). Note that all stabilizable plants have coprime factorizations over H∞, by [12].
Lemma 2.4. Suppose that P(s) and P′(s) have coprime factorizations over H∞, namely P(s) = E(s)/F(s) and P′(s) = E′(s)/F′(s). Then

κ(P, P′) ≤ A max(‖E − E′‖, ‖F − F′‖),

where A is a constant that depends only on P, not on P′.
Proof. Since E and F are coprime, they satisfy a Bezout identity X(s)E(s) + Y(s)F(s) = 1 with X, Y ∈ H∞. Let x = ‖X‖∞ and y = ‖Y‖∞. Then certainly for each s we have x|E(s)| + y|F(s)| ≥ 1, and hence if |E(s)| ≤ 1/2x then |F(s)| ≥ 1/2y. We can thus estimate κ(E/F, E′/F′): when |F(s)| ≥ 1/2y,

κ(E/F, E′/F′) ≤ |E/F − E′/F′| ≤ (|E||F′ − F| + |F||E − E′|) / (|F||F′|),

which will be at most a constant times max(‖E − E′‖∞, ‖F − F′‖∞) provided that ‖F − F′‖∞ ≤ 1/4y; alternatively, when |F(s)| < 1/2y and |E(s)| ≥ 1/2x,

κ(E/F, E′/F′) ≤ |F/E − F′/E′| ≤ (|F||E′ − E| + |E||F − F′|) / (|E||E′|),

using (2.5) again, and this is bounded by a constant times max(‖E − E′‖∞, ‖F − F′‖∞) provided that ‖E − E′‖∞ ≤ 1/4x. □
Corollary 2.5. With the notation above, and with G, C, P fixed and G′, P′ varying, the following conditions are equivalent:

(a) ‖G′ − G‖ → 0; (b) κ(P, P′) → 0; (c) δ(P, P′) → 0.

Hence the chordal metric determines the graph topology on the set of stabilizable systems.

Proof. By Theorem 2.3, (b) implies (a); by Lemma 2.4 and the definition of δ, (c) implies (b); finally, that (a) implies (c) is well known using the results of [13], since P = G/(1 − CG) and G and 1 − CG are coprime, and similarly P′ = G′/(1 − CG′); then ‖G − G′‖ → 0 and ‖(1 − CG) − (1 − CG′)‖ → 0, which implies (c). □
We now give an explicit formula for the optimal robust stability margin for a stable controller, measured in the chordal metric. This formula shows a similarity to many of the H∞ optimization problems encountered in other robustness questions.
Theorem 2.6. Let P(s) be stabilizable, and let C(s) be a stabilizing controller. Then the chordal distance from P to the set of plants not stabilizable by C is √(1 − η²), where

η = κ(C, P) = sup_s { κ(C(s), P(s)) }.   (2.6)

Hence any C minimizing (2.6) is optimal.
Proof. Note that P′/(1 + CP′) is unstable precisely when

inf_s { κ(P′(s), −1/C(s)) } = 0.

Hence if κ(P(s), −1/C(s)) ≥ δ for all s then κ(P, P′) ≥ δ for any P′ not stabilized by C; and if κ(P(s₀), −1/C(s₀)) = δ for some s₀ then there is a plant P′ with κ(P, P′) = δ and P′(s₀) = −1/C(s₀), namely P′(s) = r(P(s)) for some rotation r of the Riemann sphere such that r(P(s₀)) = −1/C(s₀). Hence the stability margin of C is given by

δ = inf_s { κ(P(s), −1/C(s)) } = inf_s { κ(C(s), −1/P(s)) }.

Since (κ(w₁, w₂))² + (κ(w₁, −1/w̄₂))² = 1 for any w₁ and w₂, it follows that δ = √(1 − η²), where

η = sup_s κ(C(s), P(s)) = κ(C, P). □
For robust stabilization using normalized coprime factors, as in [8] and [1], the plant P is given as N(s)/D(s), where N and D are normalized and coprime. A perturbed model is (N + ΔN)/(D + ΔD), where ΔN and ΔD are in H∞ with small norm. We write ΔP(s) = [ΔN(s), ΔD(s)]. The following simple result estimates such a stability margin using the chordal metric.
Theorem 2.7. For N, D, ΔN, ΔD as above,

κ(N/D, (N + ΔN)/(D + ΔD)) ≤ ‖ΔP‖∞ / (1/2 − ‖ΔP‖∞/√2).

Proof. Using (2.5), we have that (at a point s)

κ(N/D, (N + ΔN)/(D + ΔD)) ≤ min{ |DΔN − NΔD| / (|D||D + ΔD|), |DΔN − NΔD| / (|N||N + ΔN|) }.

Now either |N(s)| ≥ 1/√2 or |D(s)| ≥ 1/√2, so that |D + ΔD| ≥ 1/√2 − ‖ΔP‖∞ (or similarly for N), and also

|DΔN − NΔD| ≤ ‖[N, D]‖ ‖[ΔD, −ΔN]‖ = ‖ΔP‖,

which gives the result on taking the supremum over s. □
Finally we give an application to approximation of systems. Let P be a given system and C a stabilizing controller, such that C and G = P/(1 + CP) are both in H∞. Let G′ be an H∞ approximation to G. This yields an approximation P′ = G′/(1 − CG′) to P, and our final result estimates the distance of P′ from P in the chordal metric.
Theorem 2.8. Let P be a possibly unstable open-loop transfer function and C ∈ H∞ a stable controller such that the closed-loop transfer function G = P/(1 + CP) ∈ H∞ is stable; let G′ be an approximation to the closed-loop transfer function G, with ‖G − G′‖ = δ, and let P′ = G′/(1 − CG′) be the corresponding approximation to the open-loop transfer function P. Then the following error bound holds for the chordal metric κ(P, P′):

κ(P, P′) ≤ δ(1 + c)² / (1 − δ(1 + c) max(1, c)).
The same inequalities hold if the roles of G and P and their primed equivalents are reversed.
Proof. Note that |C(s)||G(s)| + |1 − C(s)G(s)| ≥ 1 at each point. We consider the cases when (a) |G|/|1 − CG| ≤ 1 and |1 − CG| ≥ 1/(1 + c), and (b) |G|/|1 − CG| ≥ 1 and |G| ≥ 1/(1 + c). In the first case,

P − P′ = (G − G′) / ((1 − CG)(1 − CG′)).

Hence

|P − P′| ≤ δ / ((1/(1 + c))(1/(1 + c) − cδ)).

In the second case,

1/P − 1/P′ = (G′ − G)/(GG′).

Hence

|1/P − 1/P′| ≤ δ / ((1/(1 + c))(1/(1 + c) − δ)).

Finally, taking the supremum over all points s, we obtain

κ(P, P′) ≤ δ(1 + c)² / (1 − δ(1 + c) max(1, c)),

using (2.5) again. □
An application to identification is given in [7]. Further results on approximation of unstable systems in the chordal metric may be found in [10].
3. Examples.
We begin with an elementary example. Consider the plant P(s) = 1/s. By Theorem 2.6, an optimally robust controller C can be found by minimizing

η = sup_s κ(C(s), 1/s).

Transforming to the disc, with z = (1 − s)/(1 + s), and noting that C(s) ≠ −1 at an optimum, this is the same as minimizing κ(D(z), −z) for D = (1 − C)/(1 + C) ∈ H∞. Winding number considerations show that, as in the H∞ case, the optimum D is identically zero, so that C(s) = 1. The stability margin is (1 − η²)^{1/2} = 1/√2. A function P′ with κ(1/s, P′) = 1/√2 that is not stabilized by C is r ∘ P, where r is a suitable rotation of the Riemann sphere. Indeed P′(s) = (1 + s)/(1 − s) will do.
Now consider the continuous-time delay system with transfer function P(s) = e^{−s}/s. It is well known that, with a constant controller C, the closed-loop transfer function G(s) = e^{−s}/(s + Ce^{−s}) is stable for 0 < C < π/2. Consider now the restricted problem of robust stabilization: to determine which value of C gives the maximal stability margin in the chordal metric.
We can re-express the result of Theorem 2.6 as

ε = max_{0<C<π/2} inf_s { κ(−1/P(s), C) } = max_{0<C<π/2} inf_s { κ(se^s, −C) }.

For each C this infimum, ε(C) say, will be attained at some s₀, and one can then use the isometries of the Riemann sphere to obtain a function P′ such that P′(s₀) = −1/C and κ(P(s), P′(s)) ≤ ε(C) for all s (geometrically this is just P′ = r ∘ P for some rotation r of the sphere).
The robustness margin ε(C) is plotted below for the relevant values of C; its maximum value is 0.397, achieved at C = 0.46. Thus any function P′ whose chordal distance from e^{−s}/s is at most 0.397 is stabilized by C = 0.46, and there is a function at this distance which is not stabilized by C.
Plot of chordal robustness margin ε(C) against C
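The curve and the quoted optimum can be reproduced by brute force; here the infimum is evaluated on the imaginary axis s = jω (an assumption justified by the boundary behaviour of κ for this example), with grids of our own choosing:

```python
import numpy as np

wgrid = np.linspace(0.0, 30.0, 30001)   # frequency grid, s = j*omega
Cgrid = np.linspace(0.05, 1.55, 151)    # constant controllers in (0, pi/2)

def eps(C):
    # eps(C) = inf_w k(s e^s, -C) on s = jw, where 1/P(s) = s e^s
    f = 1j * wgrid * np.exp(1j * wgrid)  # s e^s on the axis
    k = abs(f + C) / np.sqrt((1 + abs(f) ** 2) * (1 + C * C))
    return k.min()

margins = np.array([eps(C) for C in Cgrid])
best = margins.argmax()
print(round(Cgrid[best], 2), round(margins[best], 3))  # approx. 0.46 0.397
```

This reproduces, up to grid accuracy, the maximum margin of about 0.397 at C ≈ 0.46 quoted above.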
References.

[1] R.F. Curtain, "Robust stabilizability of normalized coprime factors: the infinite-dimensional case," Int. Journal of Control 51 (1990), 1173-1190.
[2] A.K. El-Sakkary, "The gap metric: robustness of stabilization of feedback systems," I.E.E.E. Trans. Autom. Control 30 (1985), 240-247.
[3] A.K. El-Sakkary, "Robustness on the Riemann sphere," Int. Journal of Control 49 (1989), 561-567.
[4] T.T. Georgiou, "On the computation of the gap metric," Systems and Control Letters 11 (1988), 253-257.
[5] T.T. Georgiou and M.C. Smith, "Optimal robustness in the gap metric," I.E.E.E. Trans. Autom. Control 35 (1990), 673-686.
[6] W.K. Hayman, Meromorphic Functions, Oxford University Press (1975).
[7] P.M. Mäkilä and J.R. Partington, "Robust identification of stabilizable systems," Report, University of Leeds (1991).
[8] D.C. McFarlane and K. Glover, Robust Controller Design Using Normalized Coprime Factor Plant Descriptions, Springer-Verlag (1990).
[9] D.G. Meyer, "The graph topology on nth-order systems is quotient Euclidean," I.E.E.E. Trans. Autom. Control 36 (1991), 338-340.
[10] J.R. Partington, "Approximation of unstable infinite-dimensional systems using coprime factors," Systems and Control Letters 16 (1991), 89-96.
[11] J.R. Partington and K. Glover, "Robust stabilization of delay systems by approximation of coprime factors," Systems and Control Letters 14 (1990), 325-331.
[12] M.C. Smith, "On stabilization and the existence of coprime factors," I.E.E.E. Trans. Autom. Control 34 (1989), 1005-1007.
[13] M. Vidyasagar, Control System Synthesis: A Factorization Approach, MIT Press, 1985.
FINITE-DIMENSIONAL ROBUST CONTROLLER DESIGNS
FOR DISTRIBUTED PARAMETER SYSTEMS: A SURVEY
R.F. Curtain
Mathematics Institute, University of Groningen
P.O. Box 800, 9700 AV GRONINGEN, The Netherlands
1. Introduction
Although there has been much research on the stabilization of distributed parameter systems, the theory usually produces infinite-dimensional controllers. For example, the theory of linear quadratic control produces a state-feedback law in which both the state and the gain operator are infinite-dimensional. To produce an implementable finite-dimensional controller entails approximating both this gain operator and the state, which is a complicated numerical procedure and sometimes (in the case of unbounded inputs) difficult to justify theoretically. The continuing popularity of this approach is somewhat surprising, since there exist several direct methods of designing finite-dimensional controllers for infinite-dimensional systems. The first finite-dimensional compensator design was a state-space approach in 1981, and since then there have been many others. In particular, there exist various frequency domain theories for designing finite-dimensional controllers which are robust to certain types of uncertainties or which satisfy other additional performance objectives. This talk will be a survey and comparison of ways of designing finite-dimensional controllers for infinite-dimensional systems, with the emphasis on robust controllers.
2. Approximation of infinite-dimensional deterministic LQG controllers: numerical approximation of infinite-dimensional Riccati equations
One of the early successes of infinite-dimensional systems theory was the solution of the linear quadratic control problem [Lions, 1971], and this topic dominated the literature for decades. In particular, the case of boundary control for p.d.e. systems and delayed control for delay systems presented difficult mathematical problems and was the subject of many treatises. The solution yields a state-feedback controller whose gain is the solution of an infinite-dimensional Riccati equation. The numerical approximation of infinite-dimensional Riccati equations has been the subject of many papers, most of which assume bounded input operators. A recent survey of infinite-dimensional Riccati equations, including those with unbounded input operators, can be found in [Lasiecka and Triggiani, 1991]. Although there are fewer results on numerical approximations with unbounded inputs, the underlying idea is the same. One approximates the underlying system operator (p.d.e. or delay type) by a suitable finite-dimensional scheme and gives sufficient conditions on the approximations to obtain strong convergence of the finite-rank approximating solutions to the infinite-dimensional operator. Recently, more compact sufficient conditions have been given to ensure convergence ([Kappel and Salamon], [Ito]), but the choice of the approximations still depends strongly on the type of system one is dealing with (parabolic, hyperbolic or delay). However, by now there is a lot of knowledge about the performance of different approximation schemes, at least for the case of bounded inputs.
Of course, even if one has a sequence of finite-rank gains which converge strongly to the optimal gain, implementation of the LQ control law requires knowledge of the whole state. While there is a stochastic LQG theory [Curtain and Pritchard, 1978], most attempts at designing finite-dimensional compensators have been deterministic, and there have been very few of these. In [Ito, 1990] general conditions are given for the convergence of finite-dimensional approximation schemes to infinite-dimensional compensators. These infinite-dimensional compensators will have been developed in a way such that the closed-loop system is exponentially stable. The classic application is to the combination of an LQ control law with a state observer: a deterministic LQG design approach. In principle, this reduces to two Riccati approximation schemes: one for the control Riccati equation and the other for the "filter" Riccati equation. However, a more careful analysis of a specific application to flexible systems in [Gibson and Adamian] reveals that if one wishes to do a realistic engineering filter design, it might be difficult to satisfy all of the assumptions one needs to guarantee convergence. In this example, a physically meaningful choice of the state noise weights did not satisfy their theoretical assumptions, and so they could not prove convergence. However, their numerical results indicated that things did converge nicely.
Summarizing, we can make the following general comments on finite-dimensional compensator design via approximation of infinite-dimensional, deterministic compensators. At least for the case of bounded control and measurement, there is a fairly complete theory of convergence of the approximations and sufficient knowledge about the appropriate choice of the approximations for parabolic, hyperbolic and delay systems. The theory for the case of unbounded inputs and outputs is somewhat less developed, and one may have to exercise some care in the choice of weights to satisfy the theoretical conditions. This notwithstanding, this design method is not recommended for non-experts (see [Burns et al, 1988]), and research is still continuing into important properties of the approximations, such as the stabilizability and detectability properties of the approximations [Burns].
3. Direct state-space design of finite-dimensional compensators
Surprisingly enough, it took a long time to realize that it was possible to directly design a finite-dimensional compensator for an infinite-dimensional system without approximating anything. The first design, in [Schumacher, 1981, 1983], was a geometric approach and was numerically simple. The assumptions were basically that the input and output operators B and C were bounded, the system operator A should generate a C₀-semigroup, (A, B, C) should be stabilizable and detectable, and the eigenvectors of A should span the state space. It was this last assumption which was restrictive. While most p.d.e. systems will satisfy this assumption, many retarded systems will not. The finite-dimensional compensator designs in [Sakawa, 1984] and [Curtain, 1984] are similar to each other and use a type of modal approach in designing the compensator; they also need the assumption that the eigenvectors span the state space. A completely different approach was taken in [Bernstein and Hyland, 1986], who presented a theory for designing a fixed-order, finite-dimensional compensator. They do not need the assumptions on the eigenvectors, but they do need to assume the existence of solutions of nonlinear "optimal projection equations", and they need to solve these numerically. This is a nontrivial task, in contrast to the other three design approaches, which can be carried out using MATLAB routines.
4. Finite-dimensional compensator designs based on reduced order models
Basic engineering methodology is to design a compensator based on an approximate model of the true system: the reduced order model. In our application the true system is infinite-dimensional, the approximation is finite-dimensional, and the compensator will be finite-dimensional. However, we apply the finite-dimensional controller to the infinite-dimensional system, hoping that it will stabilize it, although we are only guaranteed that it will stabilize the reduced order model. Usually it works fine, but it can happen that it fails to stabilize the original infinite-dimensional system. This phenomenon was demonstrated in [Balas, 1978] and christened the "spillover" problem. In [Balas, 1980] he suggested various choices of reduced order models based on Galerkin approximations of the infinite-dimensional state space, and a list of sufficient conditions which were rather complicated to check. In [Bontsema and Curtain, 1988], an explanation of and a remedy for the spillover problem were given in terms of robustness properties of the plant. In fact, it is a simple corollary of the theory of additively robust controllers, which we will discuss in section 5.
We shall use the following notation:

H∞: the class of functions of a complex variable which are bounded and holomorphic in Re(s) > 0.

H∞−: the class of functions of a complex variable which are bounded and holomorphic in Re(s) > −ε, for some ε > 0.

H∞− + R(s): the class of transfer functions which are the sum of one in H∞− and a proper rational one.

M(H∞): transfer matrices of any size whose components are elements of H∞ (and similarly M(H∞−), etc.).

L∞/H∞: the quotient space of L∞ modulo H∞.
The class of systems considered comprises plants with the decomposition G = G_s + G_u, where G_s ∈ M(H∞−) and G_u is a strictly proper rational transfer matrix. In particular, this means that the plant has only finitely many unstable poles in Re(s) > 0.
The appropriate concept of stability is defined next.
Definition 4.1 The feedback system (G, K) with G, K ∈ M(H∞− + R(s)) is said to be input-output stable if and only if (a) and (b) are satisfied:

(a) inf_{Re(s)>0} |det(I − GK)(s)| > 0;
(b) S = (I − GK)⁻¹, KS, SG, I + KSG ∈ M(H∞−).
The following lemma is an easy consequence of the theory of robust stabilization under additive perturbations, which we shall discuss in more detail in section 5.
Lemma 4.2 Suppose the plant G is strictly proper and has a decomposition G = Gu + Gs, where
Gs ∈ M(H∞-) and Gu is a strictly proper, rational transfer matrix with all its poles in
Re(s) > 0. Let Gf = Gu + Gr, where Gr is an L∞-approximation for Gs. If Kf stabilizes the
reduced order model Gf, then Kf also stabilizes G provided that
‖G - Gf‖∞ = ‖Gs - Gr‖∞ < ‖Kf(I - Gf Kf)^-1‖∞^-1.
Furthermore, a stabilizing finite-dimensional compensator Kf for Gf and G exists provided
that
‖G - Gf‖∞ < σmin(Gu*).
(σmin(Gu*) is the smallest Hankel singular value of the stable rational transfer function
Gu*(s) = Gu(-s). It gives an upper bound to the L∞-approximation error.) □
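The smallest Hankel singular value appearing in Lemma 4.2 can be computed from the gramians of a state-space realization of the reflected antistable part. The matrices below are my own toy data, not from the text; this is only a numerical sketch of the computation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable system (A, B, C)."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    return np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]

# Antistable part Gu: a toy (A, B, C) with both eigenvalues in Re(s) > 0.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

# Reflect s -> -s: Gu(-s) = -C (sI + A)^{-1} B is a stable realization.
sv = hankel_singular_values(-A, B, -C)
sigma_min = sv[-1]   # the robustness margin of Lemma 4.2
```

For this toy data the smallest Hankel singular value is much smaller than the largest, illustrating why a low-order reduced model may have to work with a tight margin.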
The class of infinite-dimensional systems covered by this Lemma is very large. It
includes systems with a state-space representation (A,B,C,D) in which B and C may be
unbounded. The system should be "well-posed" in the sense that it should have well-defined
controllability, observability and input-output maps, but these are very mild
requirements (see Curtain and Weiss, 1989). The essential restriction is that the operator
A should have only finitely many unstable eigenvalues, but this is also necessary for the
LQG theory in the case of bounded, finite-rank inputs and outputs. In addition, this class
includes systems which do not have a nice state-space representation and for which Riccati
equations are not well-posed. For example, it includes the important class of systems of
the type G(s) = e^(-sT) Gf(s), where Gf(s) is a proper, rational transfer function. Lemma 4.2
suggests the following
Reduced order model design approach 4.3
Step 1 Find a reduced order model Gf so that ‖G - Gf‖∞ is small.
Step 2 Design your favorite compensator Kf (e.g. LQG) for Gf and calculate its robustness
margin ‖Kf(I - Gf Kf)^-1‖∞^-1.
Step 3 Check whether ‖G - Gf‖∞ < ‖Kf(I - Gf Kf)^-1‖∞^-1. If not, go to step 1, obtain a more
accurate reduced-order model Gf, and repeat the loop.
Step 4 If the inequality checks out, Kf stabilizes G.
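The loop above can be sketched numerically for a scalar example. The plant, reduced model and compensator below are hypothetical illustrations of mine (a delay plant, its delay-free reduced model, and a static gain); the two H∞-norms of steps 2-3 are estimated on a frequency grid.

```python
import numpy as np

# Hypothetical scalar example: nominal plant with a small delay,
# reduced model ignores the delay, compensator is a static gain.
G   = lambda s: np.exp(-0.2 * s) / (s + 1)   # "true" irrational plant
G_f = lambda s: 1.0 / (s + 1)                # finite-dimensional reduced model
K_f = lambda s: -0.5 + 0 * s                 # compensator designed for G_f

w = np.linspace(1e-3, 200.0, 200001)
jw = 1j * w

model_error = np.max(np.abs(G(jw) - G_f(jw)))                     # ~ ||G - G_f||_inf
margin = 1.0 / np.max(np.abs(K_f(jw) / (1 - G_f(jw) * K_f(jw))))  # step-2 margin

# Step 3 of approach 4.3: K_f stabilizes G if the model error beats the margin.
ok = model_error < margin
```

Here the delay-induced model error stays well below the compensator's robustness margin, so step 4 applies; a larger delay or a more aggressive gain would send the loop back to step 1.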
The only numerical approximation occurs in step 1 and there are many excellent
techniques for these approximations: [Glover, Curtain and Partington, 1988], [Glover, Lam
and Partington, 1990, 1991], [Gu, Khargonekar and Lee, 1989], [Zwart, Curtain, Partington and
Glover, 1988] and [Partington, Glover, Zwart and Curtain, 1988]. This is a numerically
simpler design approach than that of approximating the solution of infinite-dimensional
Riccati equations which we discussed in section 1. In addition, the number σmin(Gu*) gives a
good indication of the order of the reduced-order model required. The L∞-approximation error
should be less than σmin(Gu*). A small value of σmin(Gu*) indicates, a priori, that a low
order reduced-order model could lead to spillover problems.
Finally, a word about exponential stability of the closed loop system and stability in
the sense of Lemma 3.1: in general, this type of input-output stability is weaker than
exponential stability. However, for large classes of state-space realizations it will
imply exponential stability of the closed loop system (see Curtain, 1988).
5. Robust controller designs with a priori robustness bounds
Since the infinite-dimensional model is, at best, an approximate one, it is highly
desirable that the controllers also stabilize perturbations of the nominal model; this is
the robustness property. Depending on the class of perturbations chosen and the topology
chosen to measure the size of the perturbations, one obtains different definitions and
measurements of robustness. A minimal requirement is that of robustness with respect to
the graph topology, [Vidyasagar, chapter 7].
Recently, in [Morris, 1991], it was shown that under the usual assumptions made in
approximating infinite-dimensional systems in the design of finite-dimensional
deterministic LQG controllers, the approximations converge in the graph topology. This
can be interpreted to show that the deterministic LQG controller design discussed in
section 2 is robust to perturbations which are "small" in the graph topology. Reassuring
as this result is, it gives no information as to the size or nature of the perturbations which
are allowed. There are no results on the robustness of the direct state-space design
compensators discussed in section 3, although numerical results carried out on an example
using the design in [Curtain, 1984] were disappointing. On the other hand, it is clear
that Lemma 4.2 is a type of robustness result. It says that there is robustness with
respect to stable additive perturbations in the plant and it also gives an indication of
the robustness tolerance in ‖Kf(I - Gf Kf)^-1‖∞^-1.
In this section, we discuss two simple finite-dimensional robust controller designs for
infinite-dimensional systems with finitely many unstable poles, which give information
about the size and nature of perturbations which retain stability.
The first is that of robust stabilization of a plant G under additive perturbations Δ.
G and Δ are assumed to satisfy the following assumptions:
(5.1) G is regarded as the nominal plant and is assumed to be a strictly proper transfer
matrix in M(H∞- + R(s)); moreover, it is assumed to have no poles on the imaginary axis.
(5.2) The permissible perturbations Δ are also in M(H∞- + R(s)) and they are such that G and
G + Δ have the same number of unstable poles in Re(s) > 0.
Definition 5.1 The feedback system (G,K) with G, K ∈ M(H∞- + R(s)) is said to be additively
robustly stable with robustness margin ε if the feedback system is input-output stable for
all G and Δ satisfying (5.1) and (5.2) and ‖Δ‖∞ < ε. □
The assumptions (5.1) and (5.2) are needed to apply a Nyquist argument, and the following
result from [Logemann, 1990] is a generalization of the first version for infinite-dimensional
systems in [Curtain and Glover, 1986].
Theorem 5.2 If G ∈ M(H∞- + R(s)) and (5.1) is satisfied, then, given an ε > 0, there exists a
compensator K which stabilizes G + Δ for all Δ satisfying (5.2) and ‖Δ‖∞ ≤ ε if and only if
σmin(Gu*) ≥ ε,
where σmin(Gu*) is as in Lemma 4.2. □
In addition to this existence result, there are also explicit formulas for a controller
K which achieves the maximum robustness margin σmin(Gu*). These depend on the stable part
of G, Gs, and so this K is infinite-dimensional, which is undesirable for applications.
However, if one replaces Gs by a finite-dimensional L∞-approximation, Gsk say, then one
can design a finite-dimensional controller (of the order of k plus the McMillan degree of
Gu) which has a robustness margin of at least σmin(Gu*) - ‖Gs - Gsk‖∞. So in combination with a
theory for good L∞-approximations, we have a practical design technique for additively
robust finite-dimensional controllers for a large class of infinite-dimensional systems.
Reduced-order model design approach 5.3
Step 1 Calculate σmin(Gu*).
Step 2 Approximate G by Gsk + Gu so that ‖Gsk - Gs‖∞ << σmin(Gu*).
Step 3 Find the finite-dimensional additively robustly stabilizing compensator Kf which
stabilizes Gsk + Gu.
Step 4 Kf additively robustly stabilizes G with a robustness margin of at least
σmin(Gu*) - ‖Gs - Gsk‖∞. □
The same remarks made in section 4 concerning the L∞-approximation techniques and the
class of systems covered apply here too. The advantage of this approach over the reduced
order model design approach 4.3 is that we can give an a priori robustness margin of
σmin(Gu*) - ‖Gs - Gsk‖∞ and design a finite-dimensional controller to achieve it. Applications
of this technique to the design of robustly stabilizing finite-dimensional controllers for a
class of flexible systems can be found in [Bontsema, 1989].
The second robust controller design is applicable to a wider class of systems,
including those with infinitely many unstable poles.
Robust stabilization under normalized coprime factor perturbations
The plant G and the controller are allowed to be in M(H∞/H∞); the elements of the
transfer matrix are quotients of elements in H∞. In addition, it is assumed that G has a
normalized coprime factorization.
Definition 5.4 If G ∈ M(H∞/H∞), then M̃^-1 Ñ is called a left coprime factorization of G if and
only if
(i) M̃ and Ñ ∈ M(H∞)
(ii) G = M̃^-1 Ñ
(iii) M̃ and Ñ are left coprime in the sense that there exist X and Y in M(H∞) such that
M̃ X + Ñ Y = I (5.3)
M̃^-1 Ñ is called a normalized left coprime factorization of G if it is a left coprime
factorization and
Ñ(jω) Ñ*(jω) + M̃(jω) M̃*(jω) = I for all real ω (5.4)
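For a simple rational plant the normalization condition (5.4) can be checked directly. The scalar factorization below is my own illustrative example (for G(s) = 1/(s - a) one can take M(s) = (s - a)/(s + b), N(s) = 1/(s + b) with b = sqrt(1 + a^2)); both the factorization and the normalization identity are verified on a grid of the imaginary axis.

```python
import numpy as np

# Illustrative scalar plant (my own example, not from the text):
# G(s) = 1/(s - a), normalized left coprime factors M, N with b = sqrt(1 + a^2).
a = 1.0
b = np.sqrt(1.0 + a**2)

w = np.linspace(-50.0, 50.0, 10001)
s = 1j * w
M = (s - a) / (s + b)
N = 1.0 / (s + b)

# Factorization property (ii): M^{-1} N reproduces G on the imaginary axis.
G = 1.0 / (s - a)
factor_err = np.max(np.abs(N / M - G))

# Normalization (5.4): |N(jw)|^2 + |M(jw)|^2 = 1 for all real w.
norm_err = np.max(np.abs(np.abs(N)**2 + np.abs(M)**2 - 1.0))
```

Both errors are at floating-point level, confirming that the choice b = sqrt(1 + a^2) is exactly what makes the factorization normalized.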
The definition of a right coprime factorization is analogous.
It is known that G ∈ M(H∞/H∞) has a normalized left-coprime factorization if and only if
G is stabilizable in the sense of Definition 4.1 (see [Smith, 1989]). (This definition can
be extended in the obvious way to G, K ∈ M(H∞/H∞).)
We consider the robustness of stability with respect to perturbations of the form
GΔ = (M̃ + ΔM)^-1 (Ñ + ΔN) (5.5)
where G = M̃^-1 Ñ is a left coprime factorization of G and ΔM, ΔN ∈ M(H∞).
The robust design objective is to find a feedback controller K ∈ M(H∞/H∞) which
stabilizes not only the nominal system G, but also the family of perturbed systems defined
by
Gε = {GΔ = (M̃ + ΔM)^-1 (Ñ + ΔN) such that ΔM, ΔN ∈ M(H∞) (5.6)
and ‖[ΔM, ΔN]‖∞ < ε} (see Figure 5.2)
This leads to the following definition of factor robust stability.
Definition 5.5 Suppose that the system G ∈ M(H∞/H∞) has the left normalized coprime
factorization of Definition 5.4. Then the feedback system (G,K) is factor robustly stable
if and only if (GΔ, K) is input-output stable for all GΔ ∈ Gε. If there exists a K such that
(G,K) is factor robustly stable, then (G,K) is said to be factor robustly stabilizable
with factor robustness margin ε. □
This problem has an elegant solution.
Theorem 5.6 [Georgiou and Smith, 1990] If G ∈ M(H∞/H∞) has a left normalized coprime
factorization, then (G,K) is factor robustly stable if and only if
ε ≤ (1 - ‖[Ñ, M̃]‖H²)^(1/2),
where ‖·‖H represents the Hankel norm of the system (M̃, Ñ). The maximum factor robustness
margin is
εmax = (1 - ‖[Ñ, M̃]‖H²)^(1/2). □
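For rational plants εmax can be computed from the two standard algebraic Riccati equations of the Glover-McFarlane theory via εmax = (1 + λmax(XZ))^(-1/2); this closed form comes from that rational-case theory and is an equivalent restatement of the Hankel-norm formula above, not a formula stated in this text. A sketch with scipy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def max_factor_margin(A, B, C):
    """Maximum factor robustness margin of a rational plant (A, B, C),
    via the control and filter AREs (Glover-McFarlane, rational case):
    eps_max = (1 + lambda_max(X Z))^{-1/2}."""
    X = solve_continuous_are(A, B, C.T @ C, np.eye(B.shape[1]))      # control ARE
    Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(C.shape[0]))  # filter ARE
    lam = np.max(np.linalg.eigvals(X @ Z).real)
    return 1.0 / np.sqrt(1.0 + lam)

# Scalar check: A = a, B = C = 1 gives X = Z = a + sqrt(a^2 + 1).
a = 1.0
eps = max_factor_margin(np.array([[a]]), np.array([[1.0]]),
                        np.array([[1.0]]))
```

For the scalar example X = Z = 1 + sqrt(2), so λmax(XZ) = 3 + 2·sqrt(2) and εmax ≈ 0.383, comfortably inside the (0, 1) range the Hankel-norm formula guarantees.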
In general, explicit formulas for normalized coprime factorizations are not available.
However, for proper rational transfer functions and a class of infinite-dimensional
systems they are given in terms of the solutions of algebraic Riccati equations (see
[Glover and McFarlane, 1988, 1990], [Curtain, 1990] and [Curtain and Van Keulen, 1990]).
Moreover, explicit formulas for robustly stabilizing controllers are also known for these
classes of systems. While it is nice to have an elegant mathematical solution for
infinite-dimensional systems in terms of algebraic Riccati equations, these are not a good
basis for designing finite-dimensional robustly stabilizing controllers. In [Bontsema and
Curtain, 1989] the following approximation result was proved for plants in M(H∞- + R(s)).
Lemma 5.7 Suppose that G ∈ M(H∞- + R(s)) has the decomposition G = Gf + Gs, where Gs ∈ M(H∞-)
and Gf is a rational transfer function. Gf is a reduced order model for G such that
‖G - Gf‖∞ < μ < εmax. If Kf stabilizes Gf with a factor robustness margin of ε > μ, it stabilizes
G with a factor robustness margin of at least ε - μ. □
This yields the following algorithm for designing a robust finite-dimensional compensator.
Reduced order model design approach 5.8
Step 1 Find a reduced-order model Gf for G with an L∞-error ‖G - Gf‖∞ < μ.
Step 2 Obtain a minimal realization for Gf and calculate the maximal factor robustness
margin for Gf (this involves solving Riccati equations), say εmax.
Step 3 Compare εmax and μ. If εmax - μ is acceptable as a robustness margin, go to step 4.
Otherwise, return to step 1 and find a reduced-order model with a smaller L∞-error.
Step 4 Find a factor robustly stabilizing compensator Kf for Gf with robustness margin
ε >> μ.
Step 5 Kf factor robustly stabilizes G with robustness margin ε - μ.
This is analogous to the reduced-order model design 5.3 for additive perturbations. It
is applicable to a wider class of systems, since (5.1) and (5.2) do not need to be
satisfied here. In particular, G can have finitely many poles on the imaginary axis, and G
and GΔ do not need to have the same number of unstable poles. The design approach 5.8 was
applied to a flexible beam model with parameter uncertainty in [Bontsema and Curtain,
1989]. This study was a follow-up of that on additively robustly stabilizing compensators
in [Bontsema, 1989]. Both the additive and the factor theories allow one to measure the
perturbations allowed (in the L∞-norm and in the gap metric) and this was done for
parameter variations in a flexible beam model. The reduced order model design approach 5.8
allowed the largest range of parameter variations in the model, both in the theoretically
guaranteed robustness margin and in the actual robustness margin. This is to be expected
as it admits a larger class of perturbations; in particular, G and GΔ do not need to have
the same number of unstable poles.
6. Conclusions
We have discussed five different approaches to designing finite-dimensional
compensators for infinite-dimensional systems. The infinite-dimensional LQG approach in
section 1 finds an infinite-dimensional compensator first and then approximates to obtain
a finite-dimensional compensator, whereas the state-space approaches in section 2 lead in
one step to a finite-dimensional compensator. The remaining approaches follow the
philosophy of approximating the irrational transfer function first by a reduced-order
model and designing a finite-dimensional compensator for it. Coupled with good
L∞-approximation techniques, these offer computational simplicity. The reduced order model
design approach 4.3 has long been standard engineering practice. The two robust controller
designs discussed in section 5 have the extra advantage of guaranteeing robustness to
uncertainties in the model. The robustness margin can be calculated a priori and it
corresponds to L∞- or gap-metric perturbations in the nominal infinite-dimensional model.
References: These have been omitted due to space limitations.
A complete version of this paper is available on request.
ROBUST COVARIANCE CONTROL
J.H. Xu and R.E. Skelton
School of Aeronautics & Astronautics
Purdue University
West Lafayette, IN 47907
Abstract
This paper first extends the covariance assignment theory for continuous time systems to the assignment
of a covariance upper bound in the presence of structured perturbations. A systematic way is given to
construct an assignable covariance upper bound for the purpose of robust design.
1. Introduction
The main idea of the covariance control theory developed in [1-4] is to choose a state covariance
Xo according to the different requirements on the system performance and then to design a controller so
that the specified state covariance is assigned to the nominal closed loop system. However, because of
modeling error, the real system may differ from the nominal system used for the design. As a result, the state
covariance X of the real system will not be equal to the desired Xo.
Reference [5] provides an upper bound for the perturbed state covariance of a linear system when
all system perturbations are in a given structured set. The purpose of this paper is to present a way of
designing a robust covariance controller which guarantees that the state covariance X of the perturbed
system is upper bounded by a specified matrix X̄ as long as the plant perturbations are in a given
structured set.
The paper is organized into 5 sections. Section 2 briefly reviews the needed
robustness analysis result of [5]. Section 3 extends the covariance assignment theory to the assignment
of a covariance upper bound when the plant matrix is perturbed in a structured way. In Section 4, the
problem of designing a robust covariance controller is solved by constructing an assignable covariance
upper bound. The paper is concluded in Section 5, where directions for further research are discussed.
2. Upper Covariance Bound of Perturbed Systems
Consider the following time-invariant feedback system
ẋ(t) = Ao x(t) + D w(t), (2.1)
where Ao is stable and w(t) is a white noise signal of unit intensity. The state covariance Xo, defined as
Xo ≜ lim (t→∞) E[x(t)x*(t)], (2.2)
is the unique solution to the following Lyapunov equation [6]
Ao Xo + Xo Ao* + D D* = 0. (2.3)
In practice, perturbations in the system matrix Ao are inevitable. As a result, the real system
matrix is actually Ao + ΔA instead of Ao, where the perturbation ΔA describes the uncertainty of the
system and can be characterized in certain cases by a specified set Ω as follows:
Ω ≜ {ΔA : ΔA ΔA* ≤ Ā, and (Ao + ΔA, D) is controllable}. (2.4)
The controllability condition is guaranteed if D is nonsingular. In this case, the state covariance of the
perturbed system, X = X* > 0, is the unique solution of the following equation
(Ao + ΔA) X + X (Ao + ΔA)* + D D* = 0, (2.5)
as long as Ao + ΔA remains stable.
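Equations (2.3) and (2.5) are standard continuous-time Lyapunov equations and can each be solved in one call; the example matrices below are hypothetical, chosen only so that Ao is stable.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# State covariance of x' = Ao x + D w (w unit-intensity white noise),
# i.e. the unique solution of Ao Xo + Xo Ao* + D D* = 0, eq. (2.3).
Ao = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable example matrix
D = np.array([[1.0], [1.0]])

Xo = solve_continuous_lyapunov(Ao, -D @ D.T)

# Residual of (2.3) should vanish, and Xo should be positive definite.
residual = Ao @ Xo + Xo @ Ao.T + D @ D.T
```

Solving (2.5) for a given perturbation ΔA is the same call with Ao + ΔA in place of Ao.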
Obviously, it is impossible to know X exactly because ΔA is uncertain. However, with the help of
the following theorem we can find an upper bound on the perturbed state covariance X.
Theorem 2.1
Suppose there exist a real β > 0 and an X̄ > 0 satisfying the following algebraic Riccati equation
Ao X̄ + X̄ Ao* + (1/β) X̄ X̄ + β Ā + D D* = 0. (2.6a)
Then for the nominal feedback system we have the H∞ bound
‖(sI - Ao)^-1 D‖∞ < √β, (2.6b)
and for all ΔA ∈ Ω, Ao + ΔA is asymptotically stable and X ≤ X̄, where X satisfies (2.5). □
Proof (Theorem 2.1)
By using the inequality
ΔA X̄ + X̄ ΔA* ≤ (1/ρ) X̄ X̄ + ρ ΔA ΔA* ≤ (1/ρ) X̄ X̄ + ρ Ā, ∀ρ > 0,
we have
(Ao + ΔA) X̄ + X̄ (Ao + ΔA)* + D D* ≤ Ao X̄ + X̄ Ao* + (1/β) X̄ X̄ + β Ā + D D*,
i.e., from (2.6a),
(Ao + ΔA) X̄ + X̄ (Ao + ΔA)* + D D* ≤ 0, (2.7)
which implies that Ao + ΔA is stable for all ΔA ∈ Ω, since X̄ > 0 and (Ao + ΔA, D) is controllable.
By subtracting (2.5) from (2.7), we have
(Ao + ΔA)(X̄ - X) + (X̄ - X)(Ao + ΔA)* ≤ 0.
Obviously, X ≤ X̄ for all ΔA ∈ Ω, because (Ao + ΔA) is stable. The proof of (2.6b) can be found in [5]
and hence is omitted. Q.E.D.
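The completion-of-squares step in the proof, (1/β) X̄ X̄ + β Ā - ΔA X̄ - X̄ ΔA* ≥ 0 whenever ΔA ΔA* ≤ Ā, can be spot-checked numerically; the matrices below are random data of my own, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 4, 2.0

# Any Xbar > 0 works for the key inequality used in the proof:
# (1/beta) Xbar Xbar + beta Abar - dA Xbar - Xbar dA* >= 0 when dA dA* <= Abar.
M = rng.standard_normal((n, n))
Xbar = M @ M.T + n * np.eye(n)          # positive definite

min_eigs = []
for _ in range(100):
    dA = rng.standard_normal((n, n))
    Abar = dA @ dA.T + 0.1 * np.eye(n)  # structured bound as in eq. (2.4)
    gap = Xbar @ Xbar / beta + beta * Abar - dA @ Xbar - Xbar @ dA.T
    min_eigs.append(np.min(np.linalg.eigvalsh((gap + gap.T) / 2)))

worst = min(min_eigs)                   # stays >= 0 over all samples
```

The samples never violate the inequality, since gap is the sum of a perfect square and β(Ā - ΔA ΔA*) ≥ 0.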
3. Upper Covariance Bound Assignment Theory
Consider the following linear time-invariant plant model
ẋp(t) = Ap xp(t) + Bp u(t) + Dp w(t), z(t) = Mp xp(t), y(t) = Cp xp(t), (3.1)
where w ∈ R^nw, xp ∈ R^np, u ∈ R^nu, z ∈ R^nz and y ∈ R^ny are the external input, state, control input,
measurement and output vectors, respectively. The dynamic controller to be designed is of specified
order nc:
ẋc(t) = Ac xc(t) + Bc z(t), u(t) = Cc xc(t) + Dc z(t). (3.2)
By using Theorem 2.1 and the following definitions
x = [xp; xc], A = [Ap 0; 0 0], B = [Bp 0; 0 I], M = [Mp 0; 0 I], D = [Dp; 0], G = [Dc Cc; Bc Ac], (3.3)
an upper covariance bound
X̄ = [X̄p X̄pc; X̄pc* X̄c] (3.4)
can be found by solving the algebraic Riccati equation (2.6a), namely
(A + BGM) X̄ + X̄ (A + BGM)* + (1/β) X̄ X̄ + β Ā + D D* = 0, (3.5)
for a given G and β. Note that Ao = A + BGM.
For a given β, a specified X̄ is said to be assignable as an upper bound if a controller G exists such
that X̄ solves (3.5). In this case, we say that this controller assigns X̄ as a covariance upper bound for the
system (3.1). Now, the upper covariance bound assignment problem can be mathematically formulated
as follows:
For a given X̄, what are the necessary and sufficient conditions for the solvability of G from
(3.5)? If these conditions are satisfied, what are all solutions G for this equation?
By considering only the perturbation ΔAp in Ap, we have
Ā = [Āp 0; 0 0],
where Āp is an upper bound for ΔAp ΔAp*. Then, by defining
X̃p = X̄p - X̄pc X̄c^-1 X̄pc*, (3.6a)
Qp = X̄p Ap* + Ap X̄p + Dp Dp* + β Āp + (1/β)(X̄p X̄p + X̄pc X̄pc*), (3.6b)
Q̃p = X̃p^-1 (X̃p Ap* + Ap X̃p + Dp Dp* + β Āp + (1/β) X̃p X̃p) X̃p^-1, (3.6c)
we can present the upper covariance bound assignability conditions in the following theorem.
Theorem 3.1
A specified upper bound X̄ > 0 is assignable if and only if
(i) (I - Bp Bp^+) Qp (I - Bp Bp^+) = 0, (3.7a)
(ii) (I - Mp^+ Mp) Q̃p (I - Mp^+ Mp) = 0, (3.7b)
where [·]^+ denotes the Moore-Penrose inverse. □
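The projectors in conditions (3.7) are one `pinv` call away. The sketch below uses random data of my own to illustrate the projector identities and that (3.7a) holds for any Qp of the form Bp Y + Y* Bp*, i.e. any Qp supported on range(Bp).

```python
import numpy as np

rng = np.random.default_rng(1)

# Condition (3.7a) uses I - Bp Bp^+, the orthogonal projector onto the
# complement of range(Bp).
Bp = rng.standard_normal((5, 2))            # generic, hence full column rank
P = np.eye(5) - Bp @ np.linalg.pinv(Bp)

# P is an orthogonal projector and annihilates range(Bp) ...
proj_err = np.max(np.abs(P @ P - P))
range_err = np.max(np.abs(P @ Bp))

# ... so any symmetric Qp built from columns of Bp satisfies (3.7a) exactly.
Y = rng.standard_normal((2, 5))
Qp = Bp @ Y + (Bp @ Y).T
cond_37a = np.max(np.abs(P @ Qp @ P))       # ~ 0 at machine precision
```

Condition (3.7b) is checked the same way with the projector I - Mp^+ Mp acting on Q̃p.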
Proof (Theorem 3.1)
It has been proved in [7] that there exists a solution G to (3.5) for a given X̄ if and only if
(I - BB^+)(A X̄ + X̄ A* + (1/β) X̄ X̄ + β Ā + D D*)(I - BB^+) = 0,
(I - M^+M) X̄^-1 (A X̄ + X̄ A* + (1/β) X̄ X̄ + β Ā + D D*) X̄^-1 (I - M^+M) = 0.
By considering that
I - BB^+ = [I - Bp Bp^+ 0; 0 0], I - M^+M = [I - Mp^+ Mp 0; 0 0],
and that Āp is an upper bound for the perturbation ΔAp (ΔAp ΔAp* ≤ Āp), we obtain (3.7).
Q.E.D.
For parameterizing the whole set of controllers that assign this X̄ to the system, we need the
following definitions:
Q ≜ X̄ A* + A X̄ + D D* + β Ā + (1/β) X̄ X̄, Q̃ ≜ X̄^-1 Q X̄^-1, (3.8a)
Γ ≜ M^+M Q̃ (I - BB^+), (3.8b)
Φ ≜ Q̃ X̄ (I - BB^+) Γ^+ + Γ^+ Q̃ [I - X̄ (I - BB^+) Γ^+]. (3.8c)
Theorem 3.2
Suppose a specified X̄ > 0 is assignable for a given β; then all controllers that assign this X̄ to the
system (3.1) are parameterized by Z ∈ R^((nu+nc)×(np+nc)) and S = -S* ∈ R^((nz+nc)×(nz+nc)) as follows:
G = -B^+ Φ M^+ + B^+ X̄ M^+M (I - Γ Γ^+) S (I - Γ Γ^+) M^+ + Z - B^+ B Z M M^+. (3.9)
□
The proof of this theorem follows along the same lines as that in the standard covariance control
theory [7] and hence is omitted.
Remark 3.1
For convenience we assume that there are no redundant inputs and measurements (Bp and Mp are
of full rank). In this case, the freedom Z disappears from G in (3.9).
4. Design of Robust Covariance Controller
In this section, we use the upper covariance bound assignment result to construct first an
assignable X̄ and then to design a controller which assigns this upper bound X̄ to the system (3.1). For
constructing an assignable X̄ we need the following theorem.
Theorem 4.1
A specified X̄ > 0 with the constraint X̄c = αI is assignable if there exist a
Zm ≜ [Zm11 Zm12; Zm12* 0], with Zm11 = Zm11* ∈ R^(nz×nz), Zm12 ∈ R^(nz×(np-nz)),
and a
Zb ≜ [Zb11 Zb12; Zb12* 0], with Zb11 = Zb11*,
such that
(i) X̄p Ap* + Ap X̄p + Dp Dp* + β Āp + (1/β) X̄p (I + Vm Zm Vm*) X̄p = 0, (4.1a)
(ii) X̃p Ap* + Ap X̃p + Dp Dp* + β Āp + (1/β)(X̃p X̃p + α X̃p - α X̄p) + Ub Zb Ub* = 0, (4.1b)
where Vm and Ub come from the singular value decompositions of Mp and Bp respectively. □
Proof (Theorem 4.1)
For the specified X̄, the first assignability condition in Theorem 3.1 is equivalent to
∃ Z = Z* such that Qp = Z - (I - Bp Bp^+) Z (I - Bp Bp^+), (4.2)
which, by using (3.6) and the assumption X̄c = αI, is equivalent to (4.1b). The second assignability
condition in Theorem 3.1 is equivalent to
∃ Z = Z* such that Q̃p = Z - (I - Mp^+ Mp) Z (I - Mp^+ Mp), (4.3)
which again, by using (3.6), is equivalent to (4.1a). Q.E.D.
Remark 4.1
Both (4.1a) and (4.1b) are algebraic Riccati equations, and (4.1b) can be written in the following
standard form:
X̃p (Ap + (α/2β) I)* + (Ap + (α/2β) I) X̃p + (1/β) X̃p X̃p + (Dp Dp* + β Āp + Ub Zb Ub* - (α/β) X̄p) = 0.
With the help of Theorem 4.1, a systematic way for constructing an assignable X̄ can be stated as
follows.
Step 1 Construct Āp such that ΔAp ΔAp* ≤ Āp.
Step 2 Choose β according to the requirement (2.6b) on the H∞ norm of the closed-loop transfer function.
Step 3 Solve the algebraic Riccati equations (4.1a) and (4.1b) by adjusting Zm and Zb such that
X̄p > 0, X̄p - X̃p > 0, rank(X̄p - X̃p) = nc.
Step 4 Construct X̄c and X̄pc by using (3.6a) as follows:
X̄c = αI, X̄pc = √α UΔ ΛΔ^(1/2),
where UΔ ΛΔ UΔ* is a rank-nc factorization of X̄p - X̃p, and obtain an assignable upper bound
X̄ = [X̄p X̄pc; X̄pc* αI].
Step 5 Obtain a controller which assigns this X̄ to the system (3.1) by setting any S = -S* in (3.9).
Remark 4.2
The conditions for obtaining positive definite solutions to the algebraic Riccati equations (4.1a)
and (4.1b) can be found in [8].
Remark 4.3
It is reasonable to construct X̄c as a scaled identity matrix because, as stated in [9-10], the
computational errors introduced via controller realization in computers are explicitly related to the
properties of the controller covariance matrix X̄c, and the best X̄c in this sense should have all its
diagonal elements equal to α, where α is a scaling factor.
Theorem 4.2
For all ΔA ∈ Ω defined in (2.4), the closed-loop system consisting of the perturbed plant (3.1) and the
controller obtained in step 5 has the following properties:
(i) Ao + ΔA is asymptotically stable, where Ao = A + BGM;
(ii) X ≤ X̄, where X satisfies (2.5);
(iii) ‖(sI - Ao)^-1 D‖∞ < √β. □
Proof (Theorem 4.2)
The X̄ constructed in step 5 above is positive definite and assignable, since both X̄c and
X̃p = X̄p - X̄pc X̄c^-1 X̄pc* are positive definite and X̄p and X̃p satisfy (4.1a) and (4.1b) respectively for
some chosen Zm and Zb. Hence, we can construct a controller of order nc by using Theorem 3.2 which
stabilizes the closed loop system (see Theorem 2.1) and assigns this X̄ to the system (3.1). As a result of
Theorem 2.1, the closed loop system has the above three properties. Q.E.D.
5. Conclusion
A systematic way to construct an assignable covariance upper bound has been presented. By
constructing such an upper bound, a robust covariance controller can be designed to tolerate structured
plant perturbations, in the sense that the perturbed closed-loop system covariance remains beneath this bound
for all plant perturbations in the given set.
Further research is needed on how to construct a "smallest" covariance upper bound which is
assignable to the given system.
References
[1] A. Hotz and R. Skelton, "Covariance Control Theory", Int. J. Control, Vol. 46, No. 1, pp. 13-32, 1987.
[2] E. Collins and R. Skelton, "A Theory of State Covariance Assignment for Discrete Systems", IEEE Trans. Auto. Control, Vol. AC-32, No. 1, pp. 35-41, Jan. 1987.
[3] R. Skelton and M. Ikeda, "Covariance Controllers for Linear Continuous Time Systems", Int. J. Control, Vol. 49, No. 5, pp. 1773-1785, 1989.
[4] C. Hsieh and R. Skelton, "All Discrete-Time Covariance Controllers", Proceedings ACC, Pittsburgh, 1989.
[5] J.H. Xu, R.E. Skelton and G. Zhu, "Upper and Lower Covariance Bounds for Perturbed Linear Systems", IEEE Trans. Auto. Control, Vol. AC-35, No. 8, pp. 944-948, Aug. 1990.
[6] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, Wiley, New York, 1972.
[7] K. Yasuda, R.E. Skelton and K. Grigoriadis, "Covariance Controllers: A New Parameterization of the Class of All Stabilizing Controllers," submitted for publication in Automatica, 1990.
[8] J.C. Doyle, K. Glover, P.P. Khargonekar and B.A. Francis, "State-Space Solutions to Standard H2 and H∞ Control Problems", IEEE Trans. Auto. Control, Vol. AC-34, No. 8, pp. 831-847, Aug. 1989.
[9] D. Williamson, "Roundoff Noise Minimization and Pole-Zero Sensitivity in Fixed Point Digital Filters Using Residue Feedback," IEEE Trans. Acoustics, Speech and Signal Processing, Vol. ASSP-34, No. 4, pp. 1013-1016, Aug. 1986.
[10] D. Williamson and R.E. Skelton, "Optimal q-Markov Cover for Finite Wordlength Implementation," Proc. ACC, Atlanta, 1988.
An Inverse LQ Based Approach to the Design of Robust Tracking System with Quadratic Stability
T. Fujii and T. Tsujino
Dept. of Control Engineering and Science, Kyushu Institute of Technology, Japan
ABSTRACT
A tracking problem for linear uncertain systems is considered. A robust feedback controller is designed such that the closed loop system is quadratically stable and its output tracks a command step input. One such controller is obtained by applying the quadratic stabilization method from the viewpoint of the Inverse LQ problem. Attractive features of the proposed design method lie in its simplicity in several senses from the practical viewpoint.
1 INTRODUCTION
This paper addresses the problem of designing robust tracking controllers for linear uncertain systems and command step inputs. Similar problems have already been considered in [1],[2] for linear uncertain systems. In [1], however, desired tracking is guaranteed essentially only for small parameter uncertainty, whereas in [2] it is guaranteed for large parameter uncertainty but under a strong assumption of a matching condition. In contrast, we consider here a more realistic design problem in that 1) the linear system considered here has time-invariant uncertain parameters with no such assumptions, and 2) both robust stability and robust tracking are guaranteed for all allowable uncertainty specified in advance. In the area of robust stability for such systems, much progress has been made recently on quadratic stability [3]. In particular, design methods for quadratically stabilizing controllers have been obtained in a similar form as in the LQ design [4]-[6]. Although these methods can be applied directly to the above-mentioned design problem, the resulting design method involves inherent practical difficulties in the choice of design parameters, similar to those well known in LQ design. In order to overcome these difficulties, a new design method for optimal servo systems, called the Inverse LQ design method, has been proposed recently by the first author from the viewpoint of the Inverse LQ problem [7],[8]. The most practical features of the method lie in its design objective of output response specification as well as analytical expressions of the gain matrices as a function of the design parameters and the system matrices.
In view of the similarity between LQ and quadratically stabilizing controls as stated above, we apply here the quadratic stabilization method to the servo problem above from the viewpoint of the Inverse LQ problem. Our objective here is thus to develop an Inverse LQ based design method of robust tracking controllers such that the closed loop system is quadratically stable and its output tracks a command step input for all allowable uncertainties. We have already developed such a design method in [10] for linear systems with uncertain parameters entering purely into the state matrix. In
this paper we consider the case where uncertain parameters enter purely into the input matrix, or both into the state matrix and the input matrix, and obtain a unified design method for these two cases. In particular, we design such controllers in the form of both state feedback and observer based output feedback.
2 PROBLEM FORMULATION
Consider a linear system with structured uncertainty described by
ẋ(t) = [A + D F Ea] x(t) + [B + D F Eb] u(t)
y(t) = C x(t) (1)
Here x(t) ∈ R^n is the state, u(t) ∈ R^m is the control input and y(t) ∈ R^p is the output; A, B, and C are the nominal system matrices with rank B = m; D, Ea, and Eb are known real matrices characterizing the structure of the uncertainty, with Eb ≠ 0. Moreover, F is a matrix of uncertain parameters with its maximum singular value bounded by unity, i.e.,
F ∈ F = {F : ‖F‖ ≤ 1} (2)
For this uncertain system we consider a design problem of robust servo systems tracking a step reference input r(t) such that
1) the closed loop system is quadratically stable under the parameter uncertainty described above (robust stability), and
2) its output y(t) approaches r(t) asymptotically as t → ∞ for all allowable F (robust tracking),
by use of state feedback or observer based output feedback controllers.
To solve this servo problem, we first consider the familiar augmented system used often in the design of servo systems for a step reference input r(t):
ẋe = [Ae + De F Eae] xe + [Be + De F Ebe] u
y = [C 0] xe, (3a)
where
Ae = [A 0; C 0], Be = [B; 0], De = [D; 0], Eae = [Ea 0], Ebe = Eb, (3b)
and we make the following assumption, as is usual in the type of servo problems considered here:
det [A B; C 0] ≠ 0. (4)
Then the above servo problem can be reduced to a design problem of quadratically stabilizing controllers for the augmented system (3).
3 PRELIMINARY RESULTS
The notion of robust stability of interest here is the so-called quadratic stability for linear uncertain systems, as defined below.
Definition 1 [3] The unforced system (1) with u = 0 is said to be quadratically stable if there exists an n×n real symmetric matrix P > 0 and a constant α > 0 such that, for any admissible uncertainty F, the Lyapunov function V(x) = x^T P x satisfies
L(x,t) := V̇ = 2 x^T P [A + D F Ea] x ≤ -α ‖x‖² (5)
for all pairs (x,t) ∈ R^n × R.
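Definition 1 can be spot-checked for a candidate P by sampling admissible F, using the equivalent matrix inequality (A + D F Ea)^T P + P (A + D F Ea) + αI ≤ 0. The matrices below are a small example of my own, with the uncertainty scaled small enough that a nominal Lyapunov matrix survives it.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)

# Toy data (for illustration): stable nominal A, small structured
# uncertainty A + D F Ea with ||F|| <= 1.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
D = 0.1 * np.eye(2)
Ea = np.eye(2)

# Candidate Lyapunov matrix from the nominal system: A^T P + P A = -2 I.
P = solve_continuous_lyapunov(A.T, -2.0 * np.eye(2))
alpha = 1.0

worst = -np.inf
for _ in range(200):
    F = rng.standard_normal((2, 2))
    F /= max(1.0, np.linalg.norm(F, 2))   # enforce ||F|| <= 1
    Acl = A + D @ F @ Ea
    Lder = Acl.T @ P + P @ Acl + alpha * np.eye(2)
    worst = max(worst, np.max(np.linalg.eigvalsh((Lder + Lder.T) / 2)))

quadratically_stable_on_samples = worst <= 0.0
```

A sampling test like this is of course only a falsification check, not a proof; the point of Fact 1 below is that quadratic stabilizability can be certified exactly through a Riccati equation.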
We first describe a quadratic stabilization problem (QSP) for the system (3), and then consider its inverse problem.
Definition 2 (QSP) Determine first whether the system (3) is quadratically stabilizable or not, and if so, construct a quadratically stabilizing control (QSC).
The solution to the QSP is stated below:
Fact 1 [6] The system (3) is quadratically stabilizable if and only if there exists ε > 0 such that the Riccati equation
[Ae - Be Σ Ebe^T Eae]^T P + P [Ae - Be Σ Ebe^T Eae] + P (De De^T - Be R^-1 Be^T) P + Eae^T (I - Ebe Σ Ebe^T) Eae + ε I = 0 (6)
has a real symmetric solution P > 0. Moreover, a QSC is given by
u = -(R^-1 Be^T P + Σ Ebe^T Eae) xe, (7)
where R and Σ are defined by
R = (V1 Σ̄² V1^T + (1/4) V2 V2^T)^-1, Σ = V1 Σ̄^-1 V1^T, (8)
based on the singular value decomposition of Ebe.
In view of Fact i, we transform the system (3) first by the feedback transformation
, , = , ,  ~ErE~& ( lO)
into
~. = [ Ar + D FU, U~ E. O ] C 0 G
+ o
y = [c o1~.,
where
(n,)
(*Ib)
where P. > 0 is a solution to the following Riccati equation for some • > 0
P.A. + A.rP.  P . (D ,D , r  B ,R 'B .r )J '.
+ET, ( I  Et~f[)E, + , rrr = 0 (t3b)
According to the design theory of servo sys tems, with the QSC (13) for the augmented system (12) we can construct a desired to bust servosystem as stated in Section 2 by the following state feedback pins integral control:
$u(t) = -K_Fx(t) - K_I\int_0^t \bigl(r(\tau) - y(\tau)\bigr)\,d\tau \qquad (14a)$

where

$[\,K_F \;\; K_I\,] := K_e\Gamma^{-1} + [\,\Sigma E_b^TE_a \;\; 0\,] \qquad (14b)$
or by the following type of observer-based output feedback control:

$\dot{\hat x}(t) = (A - LC)\hat x(t) + Ly(t) + Bu(t) \qquad (15a)$

$u(t) = -K_F\hat x(t) - K_I\int_0^t \bigl(r(\tau) - y(\tau)\bigr)\,d\tau \qquad (15b)$
where $A_\Gamma := A - B\Sigma E_b^TE_a$, and then by the state transformation $x_a = \Gamma x_e$ into
$\dot x_e = (A_e + D_eFE_e)x_e + (B_e + D_eFE_b)v, \qquad y = C_ex_e \qquad (12a)$

where

$\Gamma = \begin{bmatrix} A_\Gamma & B \\ C & 0 \end{bmatrix} \quad (\det\Gamma \neq 0) \qquad (12b)$

$C_e := [\,C \;\; 0\,]\,\Gamma, \qquad D_e := \Gamma^{-1}\begin{bmatrix} D \\ 0 \end{bmatrix} \qquad (12c)$

(the matrices A_e, B_e and E_e are defined from (11) in the same way). As a result of these transformations, the QSP
4 APPLICATION OF QUADRATIC STABILIZATION METHOD FROM THE VIEWPOINT OF INVERSE PROBLEM
We are now ready to define two types of inverse quadratic stabilization problems.
Definition 3 (Inverse QSP of K-type) Given a linear feedback control v(t) = −K_e x_e(t) for the system (12), find conditions such that it is a QSC of a particular type, say K-type, that is, the one given by (13).
Definition 4 (General Inverse QSP) Given a linear feedback control v(t) = −K_e x_e(t) for the system (12), find conditions such that it is a QSC of a general type.
Furthermore, by Fact 1, a QSC for the system (12) is given by

$v = -K_ex_e, \qquad K_e = R^{-1}B_e^TP_e \qquad (13a)$

We then state below some pertinent results associated with these inverse problems.

Fact 2 [9] A feedback control v = −K_e x_e is a quadratically stabilizing control of K-type for the system (12) if and only if there
exists R > 0 of the form (8) and P_e > 0 that satisfy the following relations:

$RK_e = B_e^TP_e \qquad (16a)$

$P_e(\tfrac12 B_eK_e - A_e) + (\tfrac12 B_eK_e - A_e)^TP_e - P_eD_eD_e^TP_e - (U_2^TE_e)^T(U_2^TE_e) > 0 \qquad (16b)$
Fact 3 [7] Let B_e be given by (12), and K_e by (16a) for some R > 0 and P_e > 0. Then:

1. K_e can be expressed as

$K_e = V^{-1}\Sigma V[\,K \;\; I\,] \qquad (17)$

for some nonsingular matrix $V \in \mathbb{R}^{m\times m}$, some positive definite diagonal matrix $\Sigma = \mathrm{diag}\{\sigma_i\} \in \mathbb{R}^{m\times m}$, and some real matrix $K \in \mathbb{R}^{m\times n}$.

2. The matrices R and P_e in (13a) are expressed by

$P_e = (V[\,K\;\;I\,])^T\Lambda\Sigma\,(V[\,K\;\;I\,]) + \mathrm{blockdiag}(Y,\,0) \qquad (18)$

$R = V^T\Lambda V \qquad (19)$

for the matrix V in (17), some positive definite diagonal matrix $\Lambda \in \mathbb{R}^{m\times m}$ and some positive definite matrix $Y \in \mathbb{R}^{n\times n}$.
Fact 4 The Riccati inequality (16b) is equivalent to the following linear matrix inequality:

$\begin{bmatrix} P_e\Psi_e + \Psi_e^TP_e - (U_2^TE_e)^T(U_2^TE_e) & P_eD_e \\ D_e^TP_e & I \end{bmatrix} > 0 \qquad (20)$

$\Psi_e := \tfrac12 B_eK_e - A_e \qquad (21)$
Fact 5 [6] A feedback control v(t) = −K_e x_e(t) is a QSC for the system (12) if and only if K_e satisfies the following conditions:

1. $\Phi_e := A_e - B_eK_e$ is stable (22a)

2. $\mu(K_e) := \|G_e(s)\|_\infty < 1$, where $G_e(s) := (E_e - E_bK_e)(sI - \Phi_e)^{-1}D_e$ (22b)
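Condition 2 of Fact 5 is an H∞-norm bound on a stable transfer matrix. A transparent, if crude, numerical check is to grid the imaginary axis and take the largest singular value of the frequency response; this yields a lower bound on the true norm, which is often enough to rule a candidate gain in or out. The state-space data below are an illustrative first-order example, not the paper's system.

```python
import numpy as np

def hinf_norm_lower_bound(A, B, C, D=None, wmin=1e-3, wmax=1e3, npts=2000):
    """Estimate ||C (sI - A)^{-1} B + D||_inf by gridding the imaginary axis.
    This is a lower bound; bisection on Hamiltonian eigenvalues would give
    the exact value, but the grid suffices for a quick feasibility check."""
    n = A.shape[0]
    if D is None:
        D = np.zeros((C.shape[0], B.shape[1]))
    best = 0.0
    for w in np.logspace(np.log10(wmin), np.log10(wmax), npts):
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(G, compute_uv=False)[0])
    return best

# Illustrative example G(s) = 1/(s+1): the norm is 1, attained as w -> 0.
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
val = hinf_norm_lower_bound(A, B, C)
print(val)
```

For a certified check of ‖G_e‖∞ < 1, the grid estimate should be backed up by the bounded real lemma (as used via Lemma 2.2 of [11] in the Appendix).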
5 MAIN RESULTS
To design the desired robust tracking controller stated in Section 2, we first derive a parameterization of QSCs from the inverse LQ viewpoint, and then obtain conditions on the associated parameters for a QSC, which finally leads to the desired design algorithm.
5.1 Parameterization of QSC
The key idea of our design method is to parameterize a QSC law K_e in the form of (17) based on Facts 1 and 2, and then determine the associated parameter matrices V, K, and Σ based on Facts 1 to 3, in such a way that the K_e so parameterized is a QSC law for the system (12). Note by Fact 1 that the R in (13a) is restricted to the form of (8), or equivalently,
$R = [\,V_1\;\;V_2\,]\begin{bmatrix} S^2 & 0 \\ 0 & I \end{bmatrix}[\,V_1\;\;V_2\,]^T$

and hence by part 2 of Fact 3 we can determine V as

$V = [\,V_1\;\;V_2\,]^T \qquad (23)$
5.2 Conditions for QSC and design algorithms
We then show necessary and sufficient conditions for QSC of the state feedback control (13) and the observer-based output feedback control (15) associated with the parameterized gain K_e, which lead to the determination of the remaining parameter matrices K and Σ.
5.2.1 State feedback case
Theorem 1 Set V as in (23). Then the state feedback (13) associated with K_e given by (17) is a QSC law of K-type for some Σ only if K satisfies the following conditions:

1. $A_K := A_\Gamma - BK$ is stable. (24a)

2. $\mu_a(K) := \|G_a(s)\|_\infty < 1$ (24b)

where

$G_a(s) := U_2U_2^TE_a(sI - A_K)^{-1}(I - \bar B)D, \qquad \bar B := B(CA_K^{-1}B)^{-1}CA_K^{-1}$
The proof of this theorem is given in the Appendix.
Theorem 2 [9] Set V as in (23). Then the matrix G_e(s) given by (22b) has the following asymptotic property as {σᵢ} → ∞:

$G_e(s) \to \hat G_e(s) := \begin{bmatrix} G_a(s) \\ G_b(s) \end{bmatrix}$

where
$G_b(s) := E_b\bigl[\,K(sI - A_K)^{-1}(I - \bar B) + (CA_K^{-1}B)^{-1}CA_K^{-1}\,\bigr]D, \qquad \bar B := B(CA_K^{-1}B)^{-1}CA_K^{-1}$
These results lead to the following basic design algorithm of QSC.
Design algorithm 1
Step 1 Set V = [V₁ V₂]ᵀ.
Step 2 First obtain a matrix K satisfying (24a) by the following pole assignment method.
of {σᵢ} is ensured by noting the following asymptotic property of the eigenvalues of Φ_e [7]: {λᵢ(Φ_e)} → {sᵢ} ∪ {−σᵢ} as {σᵢ} → ∞.
2) If we cannot find a desired K in Step 2 or desired {σᵢ} in Step 3, then we conclude that this system cannot be quadratically stabilized.
1. First specify stable poles s₁, …, s_n and n-dimensional vectors g₁, …, g_n, and then find a nonsingular T that satisfies the following equation:

$TS = A_\Gamma T - BG \qquad (25)$

where

$S = \mathrm{diag}\{s_1, s_2, \ldots, s_n\}, \qquad G = [\,g_1, g_2, \ldots, g_n\,]$
2. Determine K by the following equation:

$K = GT^{-1} \qquad (26)$
Then check whether K satisfies the condition ‖Ĝ_e(s)‖∞ < 1 (see Theorem 2). If so, proceed to Step 3; otherwise, check whether it satisfies the condition (24b). If so, proceed to Step 3; otherwise, obtain a different K similarly and repeat this step.
Step 3 First choose some positive values σ₁, …, σ_m, and then check whether the resulting control law K_e given by (17) satisfies the condition (22). If so, proceed to Step 4; otherwise, choose different {σᵢ} and repeat this step.
Step 4 With the parameter matrices V, K, and Σ obtained above, the QSC law K_e can be obtained by (17), from which the desired gain matrices K_F, K_I in (14) of the servo system can be obtained by
$[\,K_F\;\;K_I\,] = V^{-1}\Sigma V[\,K\;\;I\,]\,\Gamma^{-1} + [\,\Sigma E_b^TE_a\;\;0\,] \qquad (27)$
Remark 1) When K satisfies the condition ‖Ĝ_e(s)‖∞ < 1 in Step 2, Theorem 2 suggests choosing {σᵢ} large enough in Step 3 so that the resulting K_e satisfies the condition (22). In fact, the validity of such a choice
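Step 2 of Design algorithm 1 can be carried out with a standard Sylvester-equation solver: (25) reads A_Γ T + T(−S) = BG, and K = GT⁻¹ then places the eigenvalues of A_K = A_Γ − BK at the chosen poles, since (A_Γ − BK)T = TS. The chain-of-integrators data below are an illustrative assumption, not the paper's plant.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def pole_assignment_gain(A, B, poles, G):
    """Solve T S = A T - B G, i.e. A T + T(-S) = B G, then K = G T^{-1}.
    Since (A - B K) T = A T - B G = T S, the closed loop is similar to S."""
    S = np.diag(poles)
    T = solve_sylvester(A, -S, B @ G)
    return G @ np.linalg.inv(T)

# Illustrative data: a chain of integrators, single input.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])
G = np.array([[1.0, 1.0, 1.0]])        # free vectors g_i, here all ones
K = pole_assignment_gain(A, B, [-1.0, -2.0, -3.0], G)
print(np.sort(np.linalg.eigvals(A - B @ K).real))
```

The method requires the spectra of A and S to be disjoint and the resulting T to be nonsingular; otherwise different {gᵢ} must be chosen, exactly as the algorithm's retry loop anticipates.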
5.2.2 Observer-based output feedback case
Theorem 3 [10] The closed-loop system (1), (15) is quadratically stable if and only if the following conditions are satisfied:
1. $\Phi_{cl}$, the system matrix of the closed-loop system (1), (15), is stable. (28a)

2. $\|G_{cl}(s)\|_\infty < 1$ (28b)

where, in the coordinates $(x, \hat x, \int(r - y)\,d\tau)$,

$G_{cl}(s) := [\,E_a \;\; -E_bK_F \;\; -E_bK_I\,](sI - \Phi_{cl})^{-1}\begin{bmatrix} D \\ 0 \\ 0 \end{bmatrix}$

and furthermore

$G_{cl}(s) \to \hat G_e(s) \quad \text{as } \{\sigma_i\} \to \infty$
We thus obtain a desired design algorithm for this case from Design algorithm 1 by replacing the condition (22) in Step 3 with (28).
6 ILQ DESIGN ALGORITHM
In this section we show that the ILQ design algorithm [8] can be used to determine K and Σ in the way required by the design algorithms described in Section 5.
6.1 ILQ design method [7], [8]
This design method for servo systems tracking a step input has been derived from the viewpoint of the inverse LQ problem, on the basis of a parameterization of the form (17) of LQ gains, and has several attractive features from the practical viewpoint, as shown below.
1) The primary design computation is the determination of K in (17) by the pole assignment method stated in Design algorithm 1, which is obviously much simpler than solving Riccati equations in the usual LQ design.
2) The step output response can be specified by a suitable choice of the design parameters {sᵢ} and {gᵢ} for the pole assignment method above.
3) The design parameters {σᵢ} in (17) can be used as tradeoff parameters between the magnitude of the control inputs and the tracking property of the output responses.
4) The resulting feedback matrices can be expressed explicitly in terms of the system matrices and the selected design parameters.
Fig. 1 shows the configuration of the ILQ servo system associated with the nominal augmented system (12), which has the state feedback gain K_F = V⁻¹ΣVK_F° + ΣE_bᵀE_a and the integral gain K_I = V⁻¹ΣVK_I°, as in (7). Here K_F° and K_I° are determined as the ILQ principal gains for the system S_F indicated by the dotted line in Fig. 1, so that the closed-loop response approaches a specified response as Σ → ∞. Note that the above definition of K_F, K_I together with (7), (17) yields

$[\,K_F^\circ\;\;K_I^\circ\,] = [\,K\;\;I\,]\,\Gamma^{-1} \qquad (29)$

Denote the ith row of C by cᵢ (1 ≤ i ≤ m) and define the following indices dᵢ, d and matrix M:
$d_i := \min\{k \mid c_iA^kB \neq 0\} = \min\{k \mid c_iA_\Gamma^kB \neq 0\} \quad (1 \le i \le m) \qquad (30)$

$d := d_1 + d_2 + \cdots + d_m \qquad (31)$

$M := \begin{bmatrix} c_1A_\Gamma^{d_1}B \\ \vdots \\ c_mA_\Gamma^{d_m}B \end{bmatrix} \qquad (32)$
and make the following assumption.
Assumption The nominal system (1) is minimum phase with det M ≠ 0.
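The indices dᵢ of (30) and the matrix M of (32) can be computed directly from the state-space data by testing the Markov parameters cᵢAᵏB. The sketch below uses A in place of A_Γ, which by (30) yields the same indices; the double-integrator data are an illustrative assumption.

```python
import numpy as np

def structural_indices(A, B, C, tol=1e-12, kmax=None):
    """Return d_i = min{k : c_i A^k B != 0} for each row c_i of C, together
    with the matrix whose rows are c_i A^{d_i} B (cf. (30) and (32))."""
    n = A.shape[0]
    kmax = n if kmax is None else kmax
    d, rows = [], []
    for ci in C:
        Ak = np.eye(n)
        for k in range(kmax):
            row = ci @ Ak @ B
            if np.abs(row).max() > tol:
                d.append(k)
                rows.append(row)
                break
            Ak = Ak @ A
        else:
            raise ValueError("a row of C has no finite index up to kmax")
    return d, np.vstack(rows)

# Double integrator, position output: c A^0 B = 0, c A^1 B = 1, so d_1 = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
d, M = structural_indices(A, B, C)
print(d, M)
```

The Assumption then amounts to checking that the returned M is nonsingular (and that the plant zeros are stable).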
Let Kᵢ be a set of integers with dᵢ + 1 elements such that

$\{k\}_{k=1}^{d+m} = K_1 \cup K_2 \cup \cdots \cup K_m \qquad (33)$
and define two polynomials for each Kᵢ:

$\phi_i(s) := \prod_{k \in K_i}(s - s_k), \quad 1 \le i \le m \qquad (34)$

$\psi_i(s) := \{\phi_i(s) - \phi_i(0)\}/s, \quad 1 \le i \le m \qquad (35)$

where $\{s_k,\, k \in K_i\}_{i=1}^m$ are those stable poles specified in the pole assignment, some freely and the remainder by all the system zeros. With these assumptions and definitions, we can state the analytical expressions of the ILQ principal gains K_F° and K_I°, as well as the pole assignment gain K, as follows.
$K = M^{-1}N \qquad (36)$

$K_F^\circ = M^{-1}N_0, \qquad K_I^\circ = M^{-1}M_0 \qquad (37)$

where

$N := \begin{bmatrix} c_1\phi_1(A_\Gamma) \\ \vdots \\ c_m\phi_m(A_\Gamma) \end{bmatrix}, \qquad N_0 := \begin{bmatrix} c_1\psi_1(A_\Gamma) \\ \vdots \\ c_m\psi_m(A_\Gamma) \end{bmatrix} \qquad (38)$

$M_0 := \mathrm{diag}\{\phi_1(0), \ldots, \phi_m(0)\}$
This expression yields the following result, on which the second feature of the ILQ design method is based.
Theorem 4 Under the Assumption, the step response of the nominal closed-loop system shown in Fig. 1 approaches that of a system with the transfer function G_d(s) as {σᵢ} → ∞, where

$G_d(s) = \mathrm{diag}\left\{\frac{\phi_i(0)}{\phi_i(s)}\right\} \qquad (39)$
Furthermore, this property also holds even if we use the observer-based output feedback (15) instead of the state feedback (14).
This result suggests using {sₖ, k ∈ Kᵢ} in (34) as design parameters for specifying the ith output response, and {σₖ} in (17) as the tradeoff parameters, mentioned earlier, of an ILQ servo system shown in Fig. 1.
In connecting the choice of these ILQ design parameters {sₖ} and {σᵢ} with Design algorithm 1, the selection of {sₖ}
should be made so that the resulting pole assignment gain K satisfies the quadratic stability condition ‖Ĝ_e(s)‖∞ < 1 or (24b) in Step 2, while that of {σᵢ} should be made so that the other quadratic stability condition (22) in Step 3 is satisfied by the resulting K_e.
7 CONCLUSION
We have developed here a unified design method of robust tracking controllers for the case where uncertain parameters enter purely into the input matrix, or both into the state matrix and the input matrix. Although the proposed method is similar to the one proposed in [10] for the case with parameter uncertainty only in the state matrix, there exists an essential difference between them: in the latter case a high-gain robust controller can always be designed whenever a robust controller exists, but not in the former case treated here. This may indicate the difficulty of designing robust tracking controllers under parameter uncertainty particularly in the input matrix.
[6] P. P. Khargonekar, I. R. Petersen, and K. Zhou (1990), "Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory," IEEE Trans. Auto. Control, vol. 35, 356-361.
[7] T. Fujii (1987), "A new approach to LQ design from the viewpoint of the inverse regulator problem," IEEE Trans. Auto. Control, vol. AC-32, 995-1004.
[8] T. Fujii, et al. (1987), "A practical approach to LQ design and its application to engine control," Proc. 10th IFAC World Congress, Munich.
[9] T. Fujii, T. Tsujino, and H. Uematsu, "The design of robust servo systems based on the quadratic stability theory," Trans. of ISCIE, to be published (in Japanese).
[10] T. Fujii and T. Tsujino (1991), "An Inverse LQ Based Approach to the Design of Robust Tracking Controllers for Linear Uncertain Systems," Proc. of MTNS, Kobe.
[11] K. Zhou and P. P. Khargonekar (1988), "An algebraic Riccati equation approach to H∞ optimization," Systems & Control Lett., 11, 85-91.
Clarifying this difference from the system-theoretical point of view is left as a future study.

Appendix

We note by Facts 2 and 4 that the control (14) is a quadratically stabilizing control of K-type for the system (12) only if there exists a
REFERENCES
[1] E. J. Davison (1976), "The robust control of a servomechanism problem for linear time-invariant multivariable systems," IEEE Trans. Auto. Control, vol. AC-21.

[2] W. E. Schmitendorf and B. R. Barmish (1986), "Robust Asymptotic Tracking for Linear Systems with Unknown Parameters," Automatica, vol. 22, 355-360.

[3] B. R. Barmish (1985), "Necessary and sufficient conditions for quadratic stabilizability of an uncertain linear system," J. Optim. Theory Appl., vol. 46, 399-408.

[4] I. R. Petersen (1987), "A stabilization algorithm for a class of uncertain linear systems," Systems & Control Lett., 8, 351-357.

[5] I. R. Petersen (1988), "Stabilization of an uncertain linear system in which uncertain parameters enter into the input matrix," SIAM J. Control Optim., vol. 26, 1257-1264.
P_e > 0 of the form (18) that satisfies (20). Hence we show below that the existence of such a P_e implies the condition (24). First we substitute (18) into (20) and perform an equivalent (congruence) transformation with blockdiag(T_e, I). The resulting inequality can be rewritten as

$\begin{bmatrix} X_eH_e + H_e^TX_e - (U_2^TE_eT_e)^T(U_2^TE_eT_e) & X_eZD \\ (X_eZD)^T & I \end{bmatrix} > 0 \qquad (40)$

where

$T_e := \begin{bmatrix} T & 0 \\ -KT & V^{-1} \end{bmatrix} \qquad (41a)$

$X_e := T_e^TP_eT_e = \begin{bmatrix} T^TYT & 0 \\ 0 & \Sigma\Lambda \end{bmatrix} \qquad (41b)$
$H_e := T_e^{-1}\Psi_eT_e \qquad (41c)$

$Z = \begin{bmatrix} Z_{11} \\ Z_{21} \end{bmatrix} := (\Gamma T_e)^{-1}\begin{bmatrix} I \\ 0 \end{bmatrix} \qquad (41e)$

and T, S and G are those matrices satisfying the relations

$TST^{-1} = A_\Gamma - BK =: A_K, \quad \det T \neq 0, \quad G = KT \qquad (41f)$

By Lemma 2.2 of [11], the inequality (44) below, or equivalently

$Y_1A_K + A_K^TY_1 + (U_2^TE_a)^T(U_2^TE_a) + Y_1(A_KTZ_{11}D)(A_KTZ_{11}D)^TY_1 < 0 \qquad (45)$

is equivalent to

$A_K \text{ is stable} \qquad (47a)$

$\|U_2^TE_a(sI - A_K)^{-1}A_KTZ_{11}D\|_\infty < 1 \qquad (47b)$

Furthermore, (40) can be transformed equivalently as follows:
$X_eH_e + H_e^TX_e - (U_2^TE_eT_e)^T(U_2^TE_eT_e) - (X_eZD)(X_eZD)^T =: \begin{bmatrix} L_{11} & L_{12} \\ L_{12}^T & L_{22} \end{bmatrix} > 0$

$\Longleftrightarrow \quad L_{11} > 0 \quad \text{and} \quad L_{22} - L_{12}^TL_{11}^{-1}L_{12} > 0 \qquad (42)$

where

$L_{11} := -T^TYTS - (T^TYTS)^T - (U_2^TE_aTS)^T(U_2^TE_aTS) - (T^TYTZ_{11}D)(T^TYTZ_{11}D)^T \qquad (43)$
In the following we show that the inequality L₁₁ > 0 implies (24). We first note that L₁₁ > 0 is equivalent to

$Y(TST^{-1}) + (TST^{-1})^TY + \{U_2^TE_a(TST^{-1})\}^T\{U_2^TE_a(TST^{-1})\} + Y(TZ_{11}D)(TZ_{11}D)^TY < 0$

which can be rewritten by (41f) as

$YA_K + A_K^TY + (U_2^TE_aA_K)^T(U_2^TE_aA_K) + Y(TZ_{11}D)(TZ_{11}D)^TY < 0 \qquad (44)$
To derive (24b) from (47b), we premultiply (41e) by $[\,T\;\;0\,]$ and note that $[\,T\;\;0\,]\,T_e^{-1} = [\,I\;\;0\,]$. Then we have

$TZ_{11} = [\,I\;\;0\,]\,\Gamma^{-1}\begin{bmatrix} I \\ 0 \end{bmatrix} = A_K^{-1} - A_K^{-1}B(CA_K^{-1}B)^{-1}CA_K^{-1} \qquad (50)$
Substituting this equation into the transfer function of (47b) yields

$U_2^TE_a(sI - A_K)^{-1}A_KTZ_{11}D = U_2^TE_a(sI - A_K)^{-1}\{I - B(CA_K^{-1}B)^{-1}CA_K^{-1}\}D \qquad (51)$

Hence from (47) we obtain

$A_K \text{ is stable} \qquad (52a)$

$\|U_2^TE_a(sI - A_K)^{-1}(I - \bar B)D\|_\infty < 1 \qquad (52b)$

which is precisely the condition (24).
Q.E.D.
Fig. 1 Configuration of ILQ servo systems
Linear systems and robustness: a graph point of view¹
Tryphon T. Georgiou 2 and Malcolm C. Smith 3
Abstract
This paper presents a framework for modelling and robust control of linear systems based on a graph point of view. Basic concepts such as linearity, causality, stabilizability can be defined as properties of the graph of a system. Uncertainty is naturally thought of in terms of perturbations of the graph. Modelling and approximation are also fundamentally related to the graph of the system. Robustness will be quantified in the graph setting using the gap metric. Necessary and sufficient conditions for robustness will be explained geometrically in terms of the graphs of the plant and controller. A flexible structure design example will be discussed.
1 Introduction
The importance of a "graph" viewpoint in systems was highlighted in the early 1980's by the work of Vidyasagar et al. [18], [16], in a fundamental study of the graph topology, and Zames and El-Sakkary [19], who introduced the gap metric into the control literature. The later work of Vidyasagar and Kimura [17] and Glover and McFarlane [9], [12] was the basis for a substantial quantitative theory which is still rapidly developing. This paper presents an overview of several recent results of the authors and others concerning the gap metric, as well as a discussion of a control system design example and some related topics.
2 Basics of Linear Systems
In this paper we consider a system to be defined mathematically as an operator which maps an input space to an output space. In Fig. 1 the plant is a system $P : \mathcal{D}(P) \subset \mathcal{U} \to \mathcal{Y}$, where $\mathcal{U} := L_2^m[0,\infty)$ (resp. $\mathcal{Y} := L_2^p[0,\infty)$) is the input (resp. output) space and $\mathcal{D}(P)$ is the domain. The graph of P is defined by

$\mathcal{G}(P) := \left\{ \begin{pmatrix} u \\ Pu \end{pmatrix} : u \in \mathcal{D}(P) \right\}.$
¹Supported by the National Science Foundation, U.S.A. and the Fellowship of Engineering, U.K. ²Department of Electrical Engineering, University of Minnesota, Minneapolis, MN 55455, U.S.A. ³University of Cambridge, Department of Engineering, Cambridge CB2 1PZ, U.K.
Figure 1: Standard feedback configuration.
In general, a system P is said to be linear if $\mathcal{G}(P)$ is a linear subspace of $\mathcal{L} := \mathcal{U} \times \mathcal{Y}$. In an input-output setting the study of linear systems is the study of linear subspaces $\mathcal{K} \subset \mathcal{L}$ which are graphs, i.e. such that $\binom{0}{y} \in \mathcal{K} \Rightarrow y = 0$. An advantage of the graph point of view is that the basic definitions and theorems can be compactly stated without assuming that the systems are stable.
A linear system P is said to be shift-invariant if $\mathcal{G}(P)$ is a shift-invariant subspace of $\mathcal{L}$, i.e. $S_\tau\mathcal{G}(P) \subset \mathcal{G}(P)$ for all $\tau > 0$, where $S_\tau$ denotes the shift by $\tau$. The system P is said to be causal if $T_\tau\mathcal{G}(P)$ is a graph for all $\tau > 0$, where $T_\tau x(t) = x(t)$ for $t \le \tau$ and zero otherwise. In Fig. 1 the controller is a system $C : \mathcal{D}(C) \subset \mathcal{Y} \to \mathcal{U}$. The inverse graph of C is defined by

$\mathcal{G}'(C) := \left\{ \begin{pmatrix} Cy \\ y \end{pmatrix} : y \in \mathcal{D}(C) \right\}.$
The feedback configuration of Fig. 1, denoted by [P, C], is defined to be stable if the operators mapping $u_j \to e_i$ are bounded for $i, j = 1, 2$. It can be shown that [P, C] is stable if and only if $\mathcal{G}(P)$ and $\mathcal{G}'(C)$ are closed in $\mathcal{L}$,

$\mathcal{G}(P) \cap \mathcal{G}'(C) = \{0\}, \quad \text{and} \quad \mathcal{G}(P) + \mathcal{G}'(C) = \mathcal{U} \times \mathcal{Y}.$
The theorems below give necessary and sufficient conditions, in the frequency domain, for shift-invariance, causality and stabilizability. For $x \in L_2[0,\infty)$ we denote by $\hat x \in H_2$ the Fourier transform of x. The operator $\hat P : H_2^m \to H_2^p$ is defined by the relation $\hat P\hat x = (Px)^\wedge$ for all $x \in \mathcal{D}(P)$. For $U \in H_\infty^{n\times r}$ with $r \le n$ we introduce the notation $d_U := \{\text{gcd of all highest order minors of } U\}$ (see [15] for a proof of existence of gcd's in $H_\infty$).
Theorem 1. Consider any shift-invariant linear system P such that $\mathcal{G}(P)$ is a closed subspace of $\mathcal{L}$. Then there exist $r \le m$, $M \in H_\infty^{m\times r}$ of rank r, and $N \in H_\infty^{p\times r}$ such that

$\hat{\mathcal{G}}(P) = \begin{pmatrix} M \\ N \end{pmatrix}H_2^r =: GH_2^r \qquad (1)$

and $G^*G = I$. Moreover, P is causal if and only if $\gcd(d, e^{-s}) = 1$, where $d := d_M/d_G \in H_\infty$.
Proof. The first part is a well-known consequence of the Beurling-Lax theorem [11]. The condition for causality is established in [8]. □
Theorem 2. Consider any shift-invariant linear system P. Then there exists a $C : \mathcal{Y} \to \mathcal{U}$ (linear, shift-invariant) such that [P, C] is stable if and only if (1) holds with r = m and G is left invertible over $H_\infty$.
Proof. A more detailed version of this proof can be found in [8]. Let [P, C] be stable. Then (1) holds for $r \le m$, $M \in H_\infty^{m\times r}$ and $N \in H_\infty^{p\times r}$. Also, by Theorem 1, we can take $\hat{\mathcal{G}}'(C) = \binom{U}{V}H_2^q$ with $U \in H_\infty^{m\times q}$, $V \in H_\infty^{p\times q}$ and $q \le p$. Define

$T := \begin{pmatrix} M & U \\ N & V \end{pmatrix} \in H_\infty^{(m+p)\times(m+q)}. \qquad (2)$

Then [P, C] stable implies that $TH_2^{m+q} = H_2^{m+p}$. This requires r = m, q = p and T invertible over $H_\infty$. The first m rows of $T^{-1}$ now provide a left inverse of G. For the converse direction, assume that r = m and that G has a left inverse over $H_\infty$. By Tolokonnikov's Lemma [13, p. 293] there exist $U \in H_\infty^{m\times p}$ and $V \in H_\infty^{p\times p}$ such that T as defined in (2) is invertible over $H_\infty$. Moreover, this can be done with a nonsingular V, in which case we can define a stabilizing C in terms of the inverse graph $\hat{\mathcal{G}}'(C) = \binom{U}{V}H_2^p$. □
3 Uncertainty
An intrinsic requirement in robust control design is a suitable measure of modelling uncertainty. In an input-output setting it is natural to measure uncertainty by perturbations of the graph. Such a measure can be quantified using the gap metric. Let $P_i$ have graph $\mathcal{G}_i := G_iH_2^m$, where $G_i^*G_i = I$ and $G_i \in H_\infty^{(p+m)\times m}$ for i = 1, 2. Then the gap between $P_1$ and $P_2$ is defined to be $\delta(P_1, P_2) := \|\Pi_{\mathcal{G}_1} - \Pi_{\mathcal{G}_2}\|$ ([10], [19]), where $\Pi_{\mathcal{K}}$ denotes the orthogonal projection onto $\mathcal{K}$. It can be shown [4] that the following formula holds for the gap: $\delta(P_1, P_2) = \max\{\vec\delta(P_1, P_2), \vec\delta(P_2, P_1)\}$, where $\vec\delta(P_1, P_2) = \inf_{Q \in H_\infty}\|G_1 - G_2Q\|_\infty$.
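For scalar plants, the frequency-by-frequency size of the difference of the graph projections reduces to the chordal distance between the two frequency responses on the Riemann sphere, which gives useful intuition. The sketch below computes only this pointwise quantity on a grid; the full gap also involves the H∞ optimization over Q above, so this is intuition rather than a gap computation, and the two plants are illustrative assumptions.

```python
import numpy as np

def chordal_distance(p1, p2):
    """Pointwise chordal distance between two (possibly complex) scalar
    frequency-response values: |p1 - p2| / sqrt((1+|p1|^2)(1+|p2|^2))."""
    return abs(p1 - p2) / np.sqrt((1 + abs(p1) ** 2) * (1 + abs(p2) ** 2))

# Two illustrative SISO plants evaluated on a logarithmic frequency grid.
P1 = lambda s: 1.0 / (s + 1.0)
P2 = lambda s: 1.0 / (s + 1.2)
grid = 1j * np.logspace(-3, 3, 1000)
dist = max(chordal_distance(P1(s), P2(s)) for s in grid)
print(dist)
```

Note that the chordal distance of the constant responses 0 and 1 is 1/√2, illustrating that even "large" perturbations are bounded by 1 in this metric, as the gap itself is.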
A natural generalization of the gap metric that includes a frequency domain "scaling" of the spaces of inputs and outputs can be defined as follows [6]. Let $W_i$, $W_o$ of appropriate dimension be bounded and have bounded inverses. Define the weighted gap

$\delta(P_1, P_2; W_o, W_i) := \delta(W_oP_1W_i,\, W_oP_2W_i).$
It follows from the definition that $\delta(\cdot,\cdot\,; W_o, W_i)$ is a metric. Moreover, the weighted gap inherits the property $0 \le \delta(\cdot,\cdot\,; W_o, W_i) \le 1$ from the gap metric. In Section 6 we will discuss how the weighted gap metric can be incorporated into a design procedure.
4 Modelling and Identification
Broadly speaking, any identification experiment on the plant P involves measurements (or estimates) û, ŷ of the input and output signals. The vector (û, ŷ) can be thought of as an approximate element of the graph of P, with the uncertainty being in the graph topology. Identification of the graph of a system (as opposed to the transfer function) is particularly natural in the case of closed-loop experiments, as in adaptive control or in the identification of unstable systems. Error bounds are naturally expressed in terms of an appropriate metric such as the gap.
A closely related topic is that of approximation: given a plant P of order n, find the closest approximant of order k < n in the gap metric. This problem is currently unsolved, though there exist upper and lower bounds on the achievable error in terms of the singular values of two Hankel operators [7]. That is,

$\sigma_{k+1}(\Gamma_1) \le \inf_{\hat P}\,\delta(P, \hat P) \le \sum_{i=k+1}^{n}\sigma_i(\Gamma_2) \qquad (3)$

where the infimum is taken over all $\hat P$ with degree at most k < n, and $\Gamma_1$, $\Gamma_2$ are two Hankel operators constructed from a normalized graph symbol of P. In the matrix case, the upper bound in (3) was derived under a nonrestrictive assumption that it is less than a certain quantity depending on P; see [5], [7].
5 Robustness
In this section we determine the radius of the largest uncertainty ball in the gap metric which a feedback system can tolerate. Let $\mathcal{M} := \mathcal{G}(P)$ and $\mathcal{N} := \mathcal{G}'(C)$. Then, it is straightforward to show that [P, C] is stable if and only if $\Lambda_{\mathcal{M},\mathcal{N}} := \Pi_{\mathcal{N}^\perp}|_{\mathcal{M}}$ is invertible (see [3]). When $\Lambda_{\mathcal{M},\mathcal{N}}$ is invertible, then $Q_{\mathcal{M},\mathcal{N}} := \Lambda_{\mathcal{M},\mathcal{N}}^{-1}\Pi_{\mathcal{N}^\perp}$ is the parallel projection onto $\mathcal{M}$ along $\mathcal{N}$, i.e., $Q_{\mathcal{M},\mathcal{N}}^2 = Q_{\mathcal{M},\mathcal{N}}$, $\mathrm{range}(Q_{\mathcal{M},\mathcal{N}}) = \mathcal{M}$ and $\mathrm{kernel}(Q_{\mathcal{M},\mathcal{N}}) = \mathcal{N}$. Note that $Q_{\mathcal{M},\mathcal{N}}$ can be expressed in terms of P and C as follows:

$Q_{\mathcal{M},\mathcal{N}} = \begin{pmatrix} P \\ I \end{pmatrix}(I - CP)^{-1}(I,\; -C). \qquad (4)$

When [P, C] is stable, define $b_{P,C} := \|Q_{\mathcal{M},\mathcal{N}}\|^{-1} = \sqrt{1 - \delta(\mathcal{M}, \mathcal{N}^\perp)^2}$ and observe that $0 < b_{P,C} \le 1$.
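The margin b_{P,C} can be estimated directly from (4) by evaluating the parallel projection on a frequency grid. The sketch below does this for the illustrative scalar pair P(s) = 1/s and the constant controller C = −1, which is stabilizing in the sign convention of (4); for this pair ‖Q(jω)‖ turns out to be √2 at every frequency, so b_{P,C} = 1/√2.

```python
import numpy as np

def b_margin(P, C, grid):
    """Estimate b_{P,C} = 1/||Q||_inf with Q as in (4):
    Q = [P; 1](1 - C P)^{-1}[1, -C]  (scalar P and C for simplicity)."""
    worst = 0.0
    for s in grid:
        p, c = P(s), C(s)
        Q = np.array([[p], [1.0]]) * (1.0 / (1.0 - c * p)) * np.array([[1.0, -c]])
        worst = max(worst, np.linalg.svd(Q, compute_uv=False)[0])
    return 1.0 / worst

P = lambda s: 1.0 / s
C = lambda s: -1.0
grid = 1j * np.logspace(-3, 3, 500)
b = b_margin(P, C, grid)
print(b)
```

Since this value equals the optimal margin for the integrator (computed in Section 6 below by Riccati methods), the constant controller is in fact optimally robust for this plant.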
Theorem 3. Suppose [P, C] is stable. Then [P₁, C] is stable for all P₁ with δ(P, P₁) < b if and only if b ≤ b_{P,C}.
Proof. From a purely geometric point of view, the sufficiency part of the theorem follows from the equation

$\Lambda_{\mathcal{M}_1,\mathcal{N}} = \Lambda_{\mathcal{M},\mathcal{N}}(I + X)(\Pi_{\mathcal{M}}|_{\mathcal{M}_1})$

where $X = Q_{\mathcal{N},\mathcal{M}}(\Pi_{\mathcal{M}_1} - \Pi_{\mathcal{M}})$. To see this, note that if $\|\Pi_{\mathcal{M}_1} - \Pi_{\mathcal{M}}\| < b_{P,C}$ then $\|X\| < 1$. This implies that $\Lambda_{\mathcal{M}_1,\mathcal{N}}$ is invertible. A construction for the necessity part in the case of time-varying systems is given in [3]. See also [5] for a frequency domain approach to the shift-invariant case. □
A dual result to Theorem 3 involves interchanging the roles of P and C. It is interesting to note that $b_{P,C} = b_{C,P}$ (see [5]), so that the robustness margin for plant perturbations is the same as for controller perturbations. An equivalent fact is $\|Q_{\mathcal{M},\mathcal{N}}\| = \|Q_{\mathcal{N},\mathcal{M}}\|$ (see [3]), where $Q_{\mathcal{N},\mathcal{M}}$ is the parallel projection onto $\mathcal{N}$ along $\mathcal{M}$. For combined plant and controller uncertainty, the following sharp result has been obtained by Qiu and Davison [14].
Theorem 4. Suppose [P, C] is stable. Then [P₁, C₁] is stable for all P₁ with δ(P, P₁) < b₁ and all C₁ with δ(C, C₁) < b₂ if and only if

$b_1^2 + b_2^2 + 2b_1b_2\sqrt{1 - b_{P,C}^2} \le b_{P,C}^2. \qquad \square$
6 Control System Design
The problem of weighted gap optimization amounts to solving the following H∞ minimization problem:

$b_{opt}(P)^{-1} := \inf_{C\ \text{stabilizing}} \left\| \begin{pmatrix} W_oP \\ W_i^{-1} \end{pmatrix}(I - CP)^{-1}\begin{pmatrix} W_i & -CW_o^{-1} \end{pmatrix} \right\|_\infty \qquad (5)$
This is a well-posed problem [2] (unlike several popular choices of H∞ design problems such as S/CS or S/T optimization) which offers interesting possibilities for controller design. We remark that the robustness results of the previous section apply to the weighted gap when δ(P, P₁) is replaced by δ(P, P₁; W_o, Wᵢ), δ(C, C₁) by δ(C, C₁; Wᵢ⁻¹, W_o⁻¹) and b_{P,C} by the corresponding weighted expression for the parallel projection as in (5).
In case Wᵢ is the identity matrix and W_o is a scalar weighting function, (5) reduces to finding

$b_{opt}(P)^{-1} := \inf_{C\ \text{stabilizing}} \left\| \begin{pmatrix} W_oP \\ I \end{pmatrix}(I - CP)^{-1}\begin{pmatrix} I & -CW_o^{-1} \end{pmatrix} \right\|_\infty \qquad (6)$
The choice of the weighting function W_o in (6) is interpreted here in a closed-loop sense as a tradeoff between "control action", as represented by the transfer function (I − CP)⁻¹C, and "disturbance attenuation", represented by P(I − CP)⁻¹. An upper bound equal to b_opt(P)⁻¹ is also guaranteed on ‖(I − CP)⁻¹‖∞ and ‖P(I − CP)⁻¹C‖∞. This interpretation, as well as the natural one of robustness in the weighted gap, will be exploited in the design example below.
In some recent work [12], McFarlane and Glover proposed a certain "loop shaping" design procedure. This procedure begins with an augmented plant transfer function P_aug := W_oPWᵢ containing input and output "weighting" functions W_o, Wᵢ. The weighting functions are selected on the basis of a desired loop shape for the control system. The controller is designed to give optimal robustness with respect to normalized coprime factor perturbations of P_aug. We point out that this choice of design synthesis problem is precisely the same as (5).
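For a strictly proper state-space plant, the optimal margin for normalized coprime factor perturbations, which the remark above identifies with the unweighted version of (5), can be computed from the two standard Riccati equations of the McFarlane-Glover theory [9], [12] via b_opt = (1 + λ_max(XZ))^(−1/2). The sketch below checks this on the integrator P(s) = 1/s, where the value is 1/√2.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def bopt_ncf(A, B, C):
    """Optimal normalized-coprime-factor robustness margin for the strictly
    proper plant P = C (sI - A)^{-1} B:
        X: A^T X + X A - X B B^T X + C^T C = 0   (control Riccati equation)
        Z: A Z + Z A^T - Z C^T C Z + B B^T = 0   (filter Riccati equation)
        b_opt = (1 + lambda_max(X Z))^{-1/2}."""
    X = solve_continuous_are(A, B, C.T @ C, np.eye(B.shape[1]))
    Z = solve_continuous_are(A.T, C.T, B @ B.T, np.eye(C.shape[0]))
    lam = np.linalg.eigvals(X @ Z).real.max()
    return 1.0 / np.sqrt(1.0 + lam)

# Integrator P(s) = 1/s: X = Z = 1, so b_opt = 1/sqrt(2).
A = np.array([[0.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
b = bopt_ncf(A, B, C)
print(b)
```

With nontrivial weights one applies the same computation to a realization of the shaped plant W_oPWᵢ, exactly as in the loop-shaping procedure described above.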
7 Flexible Structure Example
The weighted gap design approach was applied to a 12-meter vertical cantilever truss structure at the Wright-Patterson Air Force Base (Dayton, Ohio). A 4-input/4-output/14-state nominal model was selected for the design and a higher order 42-state model was used for robustness analysis (see Fig. 2). An initial choice of W_o = 0.65 led to an appropriate level of control action for the lower frequency modes. The weight was modified
Figure 2: Open loop singular values for 42-state model.
Figure 3: Singular values of entries of parallel projection.
to W_o = c(0.1s + a)²/(s² + 0.5as + a²) to give, additionally, increased disturbance attenuation over the middle frequency range and robustness to neglected high frequency dynamics. This choice of weighting function gave the value b_opt(P) = 0.655 for c = 0.65 and a = 52.0. The weighted gap between the 14-state and 42-state models was 0.446,
which guarantees robustness by the weighted version of Theorems 3 and 4. A suboptimal (16-state) controller was designed for the associated one-block H∞ optimization problem. A plot of the singular values for each 4 × 4 block entry of (4), for the 42-state plant and 16-state controller, is shown in Fig. 3. This controller was implemented on the structure and gave disturbance attenuation close to the anticipated levels. A detailed report on the implementations is given in [1].
References
[1] S. Buddie, T.T. Georgiou, U. Özgüner and M.C. Smith, Flexible structure experiments at JPL and WPAFB, report in preparation.
[2] M.J. Englehart and M.C. Smith, A four-block problem for H∞ design: properties and applications, Automatica, to appear.

[3] C. Foias, T.T. Georgiou, and M.C. Smith, Geometric techniques for robust stabilization of linear time-varying systems, preprint, February 1990; and in the Proceedings of the 1990 IEEE Conference on Decision and Control, pp. 2868-2873, December 1990.

[4] T.T. Georgiou, On the computation of the gap metric, Systems and Control Letters, 11, 253-257, 1988.

[5] T.T. Georgiou and M.C. Smith, Optimal robustness in the gap metric, IEEE Trans. on Automat. Contr., 35, 673-686, 1990.

[6] T.T. Georgiou and M.C. Smith, Robust control of feedback systems with combined plant and controller uncertainty, Proceedings of the 1990 American Control Conference, pp. 2009-2013, May 1990.

[7] T.T. Georgiou and M.C. Smith, Upper and lower bounds for approximation in the gap metric, preprint, February 1991.

[8] T.T. Georgiou and M.C. Smith, Graphs, causality and stabilizability: linear, shift-invariant systems on L₂[0,∞), report in preparation.

[9] K. Glover and D.C. McFarlane, Robust stabilization of normalized coprime factor plant descriptions with H∞-bounded uncertainty, IEEE Trans. on Automat. Contr., 34, 821-830, 1989.

[10] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag: New York, 1966.

[11] P.D. Lax, Translation invariant subspaces, Acta Math., 101, 163-178, 1959.

[12] D.C. McFarlane and K. Glover, Robust controller design using normalized coprime factor plant descriptions, Lecture Notes in Control and Information Sciences, Springer-Verlag, 1990.

[13] N.K. Nikol'skii, Treatise on the Shift Operator, Springer-Verlag: Berlin, 1986.

[14] L. Qiu and E.J. Davison, Feedback stability under simultaneous gap metric uncertainties in plant and controller, preprint, 1991.

[15] M.C. Smith, On stabilization and the existence of coprime factorizations, IEEE Trans. on Automat. Contr., 34, 1005-1007, 1989.

[16] M. Vidyasagar, The graph metric for unstable plants and robustness estimates for feedback stability, IEEE Trans. on Automat. Contr., 29, 403-418, 1984.

[17] M. Vidyasagar and H. Kimura, Robust controllers for uncertain linear multivariable systems, Automatica, 22, 85-94, 1986.

[18] M. Vidyasagar, H. Schneider and B. Francis, Algebraic and topological aspects of feedback stabilization, IEEE Trans. on Automat. Contr., 27, 880-894, 1982.

[19] G. Zames and A.K. El-Sakkary, Unstable systems and feedback: The gap metric, Proceedings of the Allerton Conference, pp. 380-385, October 1980.
Experimental Evaluation of H∞ Control for a Flexible Beam Magnetic Suspension System
Masayuki Fujita†, Fumio Matsumura†, and Kenko Uchida‡

† Department of Electrical and Computer Engineering, Kanazawa University, Kodatsuno 2-40-20, Kanazawa 920, Japan

‡ Department of Electrical Engineering, Waseda University, Okubo 3-4-1, Shinjuku, Tokyo 169, Japan
Abstract An H∞ robust controller is experimentally demonstrated on a magnetic suspension system with a flexible beam. The experimental apparatus utilized in this study is a simplified model of magnetic bearings with an elastic rotor. We first derive a suitable mathematical model of the plant by taking account of various model uncertainties. Next we set up the standard problem, where the generalized plant is constructed with frequency weighting functions. The iterative computing environment MATLAB is then employed to calculate the central controller. The techniques devised for the selection of design parameters involve both the game theoretic characterizations of the central controller in the time domain and the all-pass property of the closed loop transfer function in the frequency domain. Finally, the digital controller is implemented on a DSP µPD77230, where the control algorithm is written in assembly language. Several experiments are carried out in order to evaluate the robustness and the performance of this H∞ design. These experimental results confirm that the flexible beam magnetic suspension system is robustly stable against various real parameter changes and uncertainties.
1. Introduction
The problem of state-space formulae for H∞ control has recently been settled (see [1], [2], and the references therein). A direct consequence of these results is that the associated computational complexity is considerably reduced. Further, there has been substantial development in computer-aided control systems design packages. Accordingly, one can now readily practice this powerful methodology. We believe that one of the most challenging issues in this situation is an application-oriented study of H∞ control [3] - [7]. In this paper, an H∞ controller is experimentally demonstrated on a magnetic suspension system with a flexible beam. Based on several experimental results, we will show that the magnetic suspension system designed by the H∞ methodology is robustly stable against various real parameter changes and uncertainties.
2. Modeling
We consider the magnetic suspension system with a flexible beam shown in Fig. 1. The apparatus utilized in this study is a simplified model of magnetic bearings with an elastic rotor. The experimental configuration consists of a flexible aluminum beam with an electromagnet and a gap sensor. The beam is supported by a hinge at the left side. Mass M is attached at the center of the beam and mass m is attached at the right side. A U-shaped electromagnet is located as an actuator at the right side. As a gap sensor, a standard induction probe of eddy-current type is placed at the same position on the right side.
Fig. 1. Flexible beam magnetic suspension system.
A mathematical model of this experimental apparatus has been derived in [6], [7]. The state-space representation is of the following form

    ẋg = Ag xg + Bg u + Dg v0,   (1)
    y = Cg xg + w0   (2)

where xg := [ x1  x2  ẋ1  ẋ2  i ]ᵀ, u := e, and

    Ag = [  0.0    0.0   1.0    0.0    0.0
            0.0    0.0   0.0    1.0    0.0
           7070    712   0.327  0.654  41.9
            399    797   0.654  1.31   0.0
            0.0    0.0   0.0    0.0   -18.0 ],

    Bg = [ 0.0  0.0  0.0  0.0  0.317 ]ᵀ,

    Dg = [ 0.0    0.0     0.0
           0.0    0.0     0.0
           0.172  0.0     0.0
           0.0    0.0965  0.0
           0.0    0.0     0.317 ].

Here ( Ag, Bg ) and ( Ag, Dg ) are controllable, and ( Ag, Cg ) is observable. It should be noted [6], [7] that the disturbance terms ( v0, w0 ) in (1), (2) come from various model uncertainties such as a) the errors associated with the model parameters, b) the neglected higher-order terms in the Taylor series expansions in the linearization procedure, c) the unmodeled higher-order vibrational dynamics of the beam, d) the effect of the eddy current in the electromagnet, e) the real physical disturbances acting on the beam, and f) the sensor noises. The transfer function of the nominal plant G(s) := Cg(sI - Ag)⁻¹Bg is then given by

    G(s) = -13.3 (s+0.654-j28.2)(s+0.654+j28.2) / [ (s+84.4)(s-84.1)(s+18.0)(s+0.797-j28.8)(s+0.797+j28.8) ].   (3)
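A model of this kind can be sanity-checked numerically by evaluating the frequency response G(jω) = Cg(jωI - Ag)⁻¹Bg directly from state-space data. The sketch below does this for a hypothetical first-order system, not the matrices above (whose printed numerical values are only partially legible in this copy).

```python
import numpy as np

def freq_response(A, B, C, w):
    """Evaluate G(jw) = C (jwI - A)^{-1} B for a SISO state-space model."""
    n = A.shape[0]
    return (C @ np.linalg.solve(1j * w * np.eye(n) - A, B)).item()

# Hypothetical first-order example: G(s) = 1/(s + 1)
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
print(abs(freq_response(A, B, C, 0.0)))   # DC gain, should be 1
print(abs(freq_response(A, B, C, 1.0)))   # |G(j1)| = 1/sqrt(2)
```

Solving the linear system instead of forming the inverse explicitly is the standard numerically robust way to evaluate a frequency response point by point.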
3. Controller Design
For the magnetic suspension system described in the previous section, our principal control objective is robust stabilization against various real parameter changes and uncertainties. To this end, we set up the problem within the framework of H∞ control theory.

First let us consider the disturbances v0 and w0. In practice, these disturbances mainly act on the plant over a low frequency range and a high frequency range, respectively. Hence it is helpful to introduce frequency weighting functions. Let v0 and w0 be of the form

    v0(s) = W1(s) v(s),   w0(s) = W2(s) w(s)   (4)

where W1(s) =: Cw1(sI - Aw1)⁻¹Bw1 + Dw1 and W2(s) =: Cw2(sI - Aw2)⁻¹Bw2 + Dw2. These functions, as yet unspecified, can be regarded as free design parameters. Next we consider the variables which we want to regulate. In this study, since our main concern is the stabilization of the beam, it is natural to choose

    zg := [ x1  x2  ẋ1  ẋ2 ]ᵀ = [ I4  0 ] xg.   (5)

Then, as the error vector, let us define

    z := [ Θ zg ; ρ u ],   Θ := diag( θ1, θ2, θ3, θ4 )   (6)
Fig. 2. Control problem setup.
where Θ is a weighting matrix on the regulated variables zg and ρ is a scalar weighting factor on the control input u. These values, as yet unspecified, are also free design parameters. Now, we set up the generalized plant as in Fig. 2.
The state-space representation of the generalized plant is

    ẋ = A x + B1 d + B2 u,   z = C1 x + D11 d + D12 u,   y = C2 x + D21 d + D22 u   (7)

where

    x := [ xg ; xw1 ; xw2 ],   d := [ v ; w ]   (8)

with the states xw1 and xw2 of the frequency weightings W1(s) and W2(s), respectively,

    A := [ Ag  Dg Cw1  0 ; 0  Aw1  0 ; 0  0  Aw2 ],   B1 := [ Dg Dw1  0 ; Bw1  0 ; 0  Bw2 ],   B2 := [ Bg ; 0 ; 0 ]   (9a)

and

    C1 := [ Θ [ I4  0 ]  0  0 ; 0  0  0 ],   D11 := 0,   D12 := [ 0 ; ρI ].   (9b)
Substituting u(s) = F(s) y(s) yields

    z = [ Tzv  Tzw ] [ v ; w ]   (10)

where Φ(s) := (sI - Ag)⁻¹ and

    Tzv := [ Θ [ I4  0 ] Φ ; ρ F Cg Φ ] [ I + Bg F Cg Φ ]⁻¹ Dg W1,   Tzw := - [ Θ [ I4  0 ] Φ Bg ; ρI ] F [ I + G F ]⁻¹ W2.   (11)

Now our control problem is: find a controller F(s) such that the closed-loop system is internally stable and the closed-loop transfer function satisfies the following H∞ condition:

    ‖ [ Tzv  Tzw ] ‖∞ < 1.   (12)
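A condition of the form (12) can be checked approximately by gridding the frequency axis and taking the largest singular value of the closed-loop transfer matrix at each grid point; such an estimate is a lower bound on the true H∞ norm. The sketch below shows the idea on a hypothetical scalar example, not the actual closed loop of this paper.

```python
import numpy as np

def hinf_norm_grid(G, wgrid):
    """Grid-based lower estimate of the H-infinity norm:
    max over the grid of the largest singular value of G(jw)."""
    return max(np.linalg.svd(G(1j * w), compute_uv=False)[0] for w in wgrid)

# Hypothetical example: G(s) = 1/(s + 1); the true norm is 1, attained at w = 0.
G = lambda s: np.array([[1.0 / (s + 1.0)]])
wgrid = np.concatenate(([0.0], np.logspace(-3, 3, 400)))
print(hinf_norm_grid(G, wgrid))  # close to 1.0
```

In practice the grid must be dense enough near lightly damped peaks, which is why dedicated Hamiltonian-based norm computations are preferred in design packages.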
The design has been carried out using the computer-aided design package MATLAB. The frequency weightings are chosen as (see Figs. 3 and 4)

    W1(s) = 9.76x10³ / [ (1 + s/(2π·0.016)) (1 + s/(2π·0.5)) ]   (13)

    W2(s) = 5.21x10⁻⁷ (1 + s/(2π·0.01)) (1 + s/(2π·0.05)) (1 + s/(2π·100)) / [ (1 + s/(2π·3.0)) (1 + s/(2π·5.0)) (1 + s/(2π·8.0)) ]   (14)
Meanwhile, the weightings Θ and ρ are chosen as

    θ1 = 66.0,  θ2 = 3.5x10⁻¹,  θ3 = 9.1x10⁻¹,  θ4 = 7.0x10⁻¹,  ρ = 8.0x10⁻⁵.   (15)
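The shapes of weightings such as (13) can be examined numerically before iterating on the design. The snippet below evaluates the magnitude of a W1 of the low-pass form above (gain and corner frequencies as read here, a best-effort transcription of the garbled print) and checks that its gain rolls off with frequency.

```python
import numpy as np

TWO_PI = 2.0 * np.pi

def W1(s):
    # Low-frequency disturbance weighting of the form of (13); the gain and
    # corner frequencies (0.016 Hz, 0.5 Hz) follow the reading given above.
    return 9.76e3 / ((1 + s / (TWO_PI * 0.016)) * (1 + s / (TWO_PI * 0.5)))

f = np.array([0.001, 0.016, 0.5, 10.0, 100.0])        # frequency grid in Hz
mag_db = 20.0 * np.log10(np.abs(W1(1j * TWO_PI * f)))
print(mag_db)  # large gain at low frequency, rolling off toward high frequency
```

A large low-frequency gain in W1 forces the design to attenuate the slowly varying disturbance v0, which is exactly the role assigned to it in (4).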
For the selection of the frequency weightings W1(s) and W2(s), the well-known "all-pass" property of the H∞ control at optimality is very helpful. Besides, the LQ differential game theoretic characterizations of the H∞ control enable us to choose the weightings Θ and ρ effectively. Direct calculations yield the central controller [1]

    F(s) = 3.64x10¹⁰ (s+84.4)(s+50.3)(s+31.4)(s+18.9)(s+18.1)(s+2.40-j2.17)(s+2.40+j2.17)(s+0.687-j28.9)(s+0.687+j28.9) / [ (s+3650)(s+615-j480)(s+615+j480)(s+14.8)(s+3.13)(s+0.0758)(s+53.1-j31.0)(s+53.1+j31.0)(s+0.412-j29.3)(s+0.412+j29.3) ].   (16)
Fig. 3. Frequency weighting W1 (magnitude versus frequency, 0.001-1000 Hz).
Fig. 4. Frequency weighting W2 (magnitude versus frequency, 0.001-1000 Hz).
4. Experimental Results

The obtained continuous-time controller (16) is discretized via the Tustin transform at a sampling period of 70 µs. A digital signal processor (DSP) based real-time controller is implemented using an NEC µPD77230 on a special board, which can execute one instruction in 150 ns with 32-bit floating point arithmetic (see Fig. 5). The control algorithm is written in assembly language for the DSP, and the software development is assisted by the host personal computer NEC PC-9801. The data acquisition boards consist of a 12-bit A/D converter module DATEL ADCB500 with a maximum conversion speed of 0.8 µs and a 12-bit D/A converter DATEL DACHK12 with a maximum conversion speed of 3 µs.
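The Tustin (bilinear) discretization used here maps a continuous-time state-space realization to discrete time. Below is a minimal sketch of one common state-space form of the transform, applied to a generic toy system rather than the actual tenth-order controller (16).

```python
import numpy as np

def tustin(A, B, C, D, T):
    """Bilinear (Tustin) discretization of (A, B, C, D) with period T.
    One common state-space form: with M = (I - (T/2) A)^{-1},
      Ad = M (I + (T/2) A),  Bd = M B T,
      Cd = C M,              Dd = D + (T/2) C M B.
    """
    n = A.shape[0]
    M = np.linalg.inv(np.eye(n) - (T / 2.0) * A)
    Ad = M @ (np.eye(n) + (T / 2.0) * A)
    Bd = M @ B * T
    Cd = C @ M
    Dd = D + (T / 2.0) * C @ M @ B
    return Ad, Bd, Cd, Dd

# Scalar check: for A = -1 and a short period, Ad should be close to exp(A*T).
Ad, Bd, Cd, Dd = tustin(np.array([[-1.0]]), np.array([[1.0]]),
                        np.array([[1.0]]), np.array([[0.0]]), T=0.01)
print(Ad[0, 0])  # close to exp(-0.01)
```

The Tustin map preserves stability and matches the continuous-time frequency response well below the Nyquist rate, which is why it is a standard choice for fast-sampled controller implementations like this one.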
We first evaluate the nominal stability of the designed control system with time responses for a step-type disturbance. The disturbance is added to the experimental system as an applied voltage in the electromagnet, which in fact amounts to about 20 % of the steady-state force. Fig. 6 shows the displacements x1 and x2. From this result, we can see that nominal stabilization is certainly achieved. Since our concern is also the robust stability of the system against various model uncertainties, we further repeat the same experiments with the values of mass M, mass m, and the resistance of the electromagnet R changed. The parameters have been changed in the following ways:

(i) M = 9.63 kg, which amounts to a 7 % decrease from its nominal value of 10.36 kg,
(ii) M = 8.06 kg, which amounts to a 22 % decrease from its nominal value of 10.36 kg,
(iii) M = 11.51 kg, which amounts to an 11 % increase from its nominal value of 10.36 kg,
(iv) m = 6.82 kg, which amounts to an 18 % increase from its nominal value of 5.8 kg,
(v) R = 62.0 Ω, which amounts to a 9 % increase from its nominal value of 57.0 Ω.

The results are shown in Figs. 7 - 11. In every case, it can be seen that the beam is still suspended stably even when the step-type disturbance is added. Therefore, these experimental results confirm that the magnetic suspension system designed by the H∞ control theory is robustly stable against various uncertainties.
Fig. 5. DSP-based controller (magnetic suspension system, analog multiplexer, 12-bit A/D ADCB500, 12-bit D/A DACHK12, DSP µPD77230).
5. Conclusions
In this paper, an H∞ controller has been experimentally demonstrated on a magnetic suspension system with a flexible beam. Several experimental results showed that the magnetic suspension system is robustly stable against various model uncertainties.
References
[1] K. Glover and J.C. Doyle, "State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity," Syst. Contr. Lett., vol. 11, no. 3, pp. 167-172, 1988.
[2] J.C. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Trans. Automat. Contr., vol. 34, no. 8, pp. 831-847, 1989.
[3] M. Fujita, F. Matsumura, and M. Shimizu, "H∞ robust control design for a magnetic suspension system," in Proc. 2nd Int. Symp. Magnetic Bearings, Tokyo, Japan, 1990.
[4] M. Fujita and F. Matsumura, "H∞ control of a magnetic suspension system (in Japanese)," J. IEE of Japan, vol. 110, no. 8, pp. 661-664, 1990.
[5] F. Matsumura, M. Fujita, and M. Shimizu, "Robust stabilization of a magnetic suspension system using H∞ control theory (in Japanese)," Trans. IEE of Japan, vol. 110-D, no. 10, pp. 1051-1057, 1990.
[6] M. Fujita, F. Matsumura, and K. Uchida, "Experiments on the H∞ disturbance attenuation control of a magnetic suspension system," in Proc. 29th IEEE Conf. Decision Contr., Honolulu, Hawaii, 1990.
[7] M. Fujita, F. Matsumura, and K. Uchida, "H∞ robust control of a flexible beam magnetic suspension system (in Japanese)," J. SICE, vol. 30, no. 8, 1991.
Fig. 6. Responses for step-type disturbance (nominal): (a) displacement x1, (b) displacement x2.

Fig. 7. Responses for step-type disturbance (M = 9.63 kg): (a) displacement x1, (b) displacement x2.
Fig. 8. Responses for step-type disturbance (M = 8.06 kg): (a) displacement x1, (b) displacement x2.

Fig. 9. Responses for step-type disturbance (M = 11.51 kg): (a) displacement x1, (b) displacement x2.
Fig. 10. Responses for step-type disturbance (m = 6.82 kg): (a) displacement x1, (b) displacement x2.

Fig. 11. Responses for step-type disturbance (R = 62.0 Ω): (a) displacement x1, (b) displacement x2.
Robust Control of Nonlinear Mechanical Systems - Case Studies

Koichi Osuka
Department of Mechanical Engineering, University of Osaka Prefecture, Sakai, Osaka 591, Japan.
1 Introduction

Since the 1960's, the trajectory control problem of robots, especially manipulators, has been studied. In the early stage, since the dynamical equations of robot manipulators are highly nonlinear, many researchers were interested in how to attack such nonlinear properties [1]. However, in the early 1980's, it was pointed out that the dynamical models of robot manipulators have some useful properties. As a result, the main subject of the research has shifted from 'How to attack the nonlinearity' to 'How to treat the modelling errors of the dynamical model of robots [2]'. Here, it can be said that the following uncertainties cause the modelling errors. (i) Structured Uncertainty: uncertainty caused by the uncertainties of dynamical parameters such as mass or moment of inertia of links. (ii) Unstructured Uncertainty: uncertainty caused by parasitic unmodelled dynamics of the actuator system.

From the above discussion, it is clear that designing a controller which makes the resulting control system robust to both uncertainties is important and necessary. However, it seems that no design method for such an ideal controller has been proposed. In this paper, we divide the uncertainties into the two cases (i) and (ii), and then introduce a control strategy suited to each case through some experiments.
2 Dynamical Model of Robot

In general, robots consist of n rigid links and actuators installed in each joint. Often, we divide the dynamical model of a robot into two parts. One is the dynamical model of the rigid links and the other is that of the actuator systems. Suppose that the links which constitute the link system are all rigid and that the inputs to this system are the torques produced by the actuator systems. Then, letting θ = [θ1, θ2, ..., θn]ᵀ be the joint angle vector and letting τ = [τ1, τ2, ..., τn]ᵀ be the input torque vector, the dynamical equation of the link system can be described as

    <S>  J(θ)θ̈ + c(θ, θ̇) = τ.   (1)
Here, J(θ)θ̈ denotes the n-dimensional inertia torque term, and c(θ, θ̇) is an n-dimensional vector which represents the torques due to centrifugal force, Coriolis force, gravity and viscous friction. Notice that, once a robot is given, the structure of the nonlinear functions in these vectors is decided. Furthermore, it is well known that the dynamical equation (1) has the following useful properties. <P1> The matrix J(θ) is positive definite for all θ. <P2> All elements of J(θ) are bounded for all θ, and all elements of the vector c(θ, θ̇) are bounded for all θ and all bounded θ̇. <P3> The equation satisfies the 'Matching Conditions'.

By the way, most actuator systems of robots are constructed from an actuator itself, a transmission box and an electrical amplifier. The input of this system is an electric signal u to the amplifier and its output is the torque τ from the actuator. Then the dynamical model of the actuator system can be written as follows:

    <A>  ż = f(z, u),  τ = h(z).   (2)

Here, f(z, u) denotes a vector whose elements are nonlinear functions of z and u, and h(z) is a vector whose elements are nonlinear functions of z. It should be noticed that it is difficult for us to decide the structures of the nonlinear functions in f(z, u) and h(z), and also difficult to specify the dimensions of z and u. Therefore, we should take account of the unstructured uncertainties in controlling the actuator system.
3 Robust Control Under Structured Uncertainty
Fig.1 Miniature Robot

In this section, we introduce a small manipulator named Miniature Robot. Then, using the robot, we show that the philosophy of high-gain feedback control is effective if the uncertainty is mainly a structured one. We designed the PD-Type Two Stage Robust Tracking Controller proposed by the author, and controlled the robot by using this controller [3]. Fig.1 shows the Miniature Robot. This robot has two degrees of freedom as shown in the figure. The dynamical model of this robot can be described as <S>. We can measure [θᵀ θ̇ᵀ]ᵀ and the initial condition is [θ(0)ᵀ 0ᵀ]ᵀ. Here, we define the following regions:
    Πp(εp) = { θ | θi_min + εp ≤ θi ≤ θi_max - εp (i = 1, 2) }
    Πv(εv) = { θ̇ | θ̇i_min + εv ≤ θ̇i ≤ θ̇i_max - εv (i = 1, 2) }   (3)
    Πa(εa) = { θ̈ | ‖θ̈‖ ≤ εa }

where θi_min and θi_max are decided from the admissible configuration of the robot, and θ̇i_min and θ̇i_max are decided from the admissible velocity of the robot. The scalars εp, εv and εa are margins for these admissible regions of the robot; εa is a constant.
It is important that, in the dynamical model of the Miniature Robot, the dynamics of the link system is dominant. Therefore, since <P3> is applicable, the concept of high-gain feedback control is effective. The PD-Type Two Stage Robust Tracking Controller is a controller by which we can specify the following: (a) the desired trajectory θd can be specified; (b) the control accuracy ε can be specified; (c) if an initial error between the desired trajectory and the actual trajectory exists, we can specify the characteristics of convergence of the initial error; (d) robustness against structured uncertainties can be assured; (e) the structure of the resultant control system is simple. Notice that we can specify (c) explicitly.
The idea of the design method is the following. First, construct an intermediate linear model <M> whose initial condition coincides with that of the robot <S>. Then, design a PD-type robust controller <CR> so that <S> may follow <M> with the specified control accuracy, that is,

    ‖ [ e(t)ᵀ  ė(t)ᵀ ] ‖ := ‖ [ θ(t)ᵀ - θM(t)ᵀ  θ̇(t)ᵀ - θ̇M(t)ᵀ ] ‖ < ε  (t ≥ 0),   (4)

where θM(t) denotes the output of <M>. Then, regard <M> as a controlled object and construct a two-degree-of-freedom controller <CM> for <M> so that <M> follows θd(t), with the convergence of the initial error shaped by Gc(s).
The following procedure shows the design method of the above controller.

Step0) Formulation of design specifications.
(a) Specify the control accuracy ε : ‖ [ θᵀ - θdᵀ  θ̇ᵀ - θ̇dᵀ ] ‖ < ε.
(b) Specify the admissible initial error bound E : ‖ e(0) ‖ < E.
(c) Specify the convergence characteristics of e(0) through Gc(s), with design parameter λ.
(d) Plan the desired trajectory θd : θd ∈ Πp(ε̄p), θ̇d ∈ Πv(ε̄v), θ̈d ∈ Πa(ε̄a), where ε̄p = E + ε and ε̄v = 0.4λE + ε.

Step1) Design of the PD-Type Two Stage Controller. Construct the intermediate linear model

    <M>  θ̈M = uM  ( θM(0) = θ(0), θ̇M(0) = 0 )   (5)

and design the control law:

    <CM>  uM = θ̈d - 2λ(θ̇M - θ̇d) - λ²(θM - θd),
          v = -kh1(θ - θM) - kh2(θ̇ - θ̇M),   (6)
    <CR>  τ = Ĵ(θ)(uM + v) + ĉ(θ, θ̇)

where Ĵ(θ) and ĉ(θ, θ̇) denote nominal matrices of J(θ) and c(θ, θ̇), hi (i = 1, 2) are arbitrary positive scalars, and k is the design parameter, which is designed by using the specified control accuracy and the modelling errors. See the Appendix and Fig.2.
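The effect of the computed-torque structure in (6) can be seen in a small simulation. The one-link plant and all numerical values below are hypothetical stand-ins, not the Miniature Robot's parameters; the nominal model is taken as exact and the desired angle is constant, so the closed loop reduces to the damped error dynamics ë = -2λė - λ²e.

```python
import math

# One-link toy model (hypothetical values): J*th'' + c*cos(th) = tau.
J, c = 0.01, 0.3
J_hat, c_hat = 0.01, 0.3          # nominal model, assumed exact here
lam = 10.0                         # convergence-rate design parameter
th_d = 0.5                         # constant desired angle (rad)

th, om = 0.0, 0.0                  # state: angle and angular velocity
dt = 1e-3
for _ in range(4000):              # 4 s of Euler integration
    # Computed-torque / PD law in the spirit of (6):
    u = -2.0 * lam * om - lam ** 2 * (th - th_d)
    tau = J_hat * u + c_hat * math.cos(th)
    al = (tau - c * math.cos(th)) / J   # plant acceleration
    th, om = th + dt * om, om + dt * al

print(abs(th - th_d))  # tracking error shrinks toward zero
```

With an exact model the gravity-like term cancels and the error decays at the rate set by λ; model mismatch is what the additional high-gain term v and the design parameter k are there to dominate.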
Fig.2 PD-Type Two Stage Robust Tracking Control System
To show the effectiveness of the control law (5)-(6), we applied the law to the Miniature Robot [4]. Especially, we show the experimental results by which the ability to specify the convergence property of e(0) can be recognized. At first, we decided a nominal model of the robot as follows:

    Ĵ(θ) = diag[ 1.63x10⁰  1.63x10⁻⁶ ] (kg m²),  ĉ(θ, θ̇) = [ 0.325 cos θ1  0.244 cos θ2 ]ᵀ (N m).   (7)

Then, we chose the initial condition [θ(0)ᵀ θ̇(0)ᵀ]ᵀ and the desired trajectory [θdᵀ θ̇dᵀ]ᵀ with

    θ(0) = [ 0.45  ... ]ᵀ (rad),  θdi(t) = 0.785 { 1.0 - cos(·) } (rad).   (8)
Fig.3 Experimental Results
Fig.3 shows the experimental results. The parameters are set as kh1 = 50000 and kh2 = 100. For experiment 1, we chose λ = 10, and for experiment 2, we chose λ = 20. From the figure, we can see that the convergence characteristics of the initial error can be specified easily.
4 Robust Control Under Unstructured Uncertainty
In this section, we introduce a certain actuator named Rubbertuator.

Fig.4 Rubbertuator

Using this actuator, we show that the philosophy of H∞ control is effective if the uncertainty is mainly an unstructured one. Rubbertuator is a new actuator developed by BRIDGESTONE in Japan (see Fig.4). As shown in Fig.4, we use a pair of Rubbertuators like the muscles of animals; that is, they are connected to each other so as to rotate the load in push-pull mode. The sequence for operating the system is as follows.
Step0) Supply an electric current I = 6.25 mA to channels 1 and 2 of the servovalve.
Step1) Repeat the following procedures every sampling period.
1) Detect the angle of the rotor, θ (pulse), by using a rotary encoder. 2) Normalize θ (pulse) as θ = θ/147.67 (rel) and compute the input signal u (rel). 3) Input i1 = I + 0.2u (mA) and i2 = I - 0.2u (mA) to channels 1 and 2 of the servovalve respectively, and return to 1).

In the following, we regard u (rel) and θ (rel) as the input and output of the system. Since the dynamical model of the Rubbertuator is a nonlinear system [5], the model should be described as a nonlinear state-space equation <A> as shown in Section 2. However, it should be noticed that the structure of the nonlinear function in <A> and the dimension of the equation are not clear, while the nonlinearity of the function is not so strong. This implies that we should not apply high-gain feedback control as described in the previous section. Therefore, we regard the system approximately as a linear system and, obtaining Bode plots experimentally, design a linear robust controller. The specification of the controller which we are going to design is as follows: (a) ensure robust stability against modelling errors; (b) have a good tracking property. To do so, we design a controller according to the following procedure. First, fix the structure of the control system as shown in Fig.5.
Fig.5 Control System
Here, G(s) denotes the transfer function of the Rubbertuator, which is regarded as a linear system, and K(s) denotes the controller which will be designed. Next, using the information about the frequency region in which large modelling errors may exist, decide a weighting function W2(s) for the complementary sensitivity function T(s). Using the information about the specification for control accuracy, decide a weighting function W1(s) for the sensitivity function S(s). Finally, solve the Mixed-Sensitivity Problem; that is, design a controller to satisfy

    ‖ [ δW1(s)S(s) ; W2(s)T(s) ] ‖∞ < 1   (9)

where ‖·‖∞ denotes the H∞ norm [6]. This problem can be solved by setting up a generalized plant in the H∞ theory as

    P(s) = [ δW1(s)  -δW1(s)G0(s) ; 0  W2(s)G0(s) ; I  -G0(s) ].   (10)

To construct an H∞ controller, we adopt the following design procedure.
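A mixed-sensitivity objective of the form (9) can be checked pointwise on a frequency grid from S = 1/(1 + GK) and T = GK/(1 + GK). The sketch below uses a hypothetical first-order plant, a unity controller and constant weights rather than the Rubbertuator data; it simply reports the peak of the stacked magnitude over the grid.

```python
import numpy as np

def mixed_sensitivity_peak(G, K, W1, W2, wgrid):
    """Max over the grid of sqrt(|W1*S|^2 + |W2*T|^2), where
    S = 1/(1 + GK) and T = GK/(1 + GK) for a SISO loop."""
    peak = 0.0
    for w in wgrid:
        L = G(1j * w) * K(1j * w)                 # loop transfer function
        S, T = 1.0 / (1.0 + L), L / (1.0 + L)
        peak = max(peak, np.hypot(abs(W1(1j * w) * S), abs(W2(1j * w) * T)))
    return peak

# Hypothetical loop: G = 1/(s + 1), K = 1, small constant weights.
G = lambda s: 1.0 / (s + 1.0)
K = lambda s: 1.0
W1 = lambda s: 0.5
W2 = lambda s: 0.5
print(mixed_sensitivity_peak(G, K, W1, W2, np.logspace(-2, 2, 200)))
```

A peak below 1 indicates that the candidate controller meets the (gridded) mixed-sensitivity bound; the synthesis step itself is what the H∞ machinery of (10) automates.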
Step0) Inputting various kinds of sine waves to the Rubbertuator, we obtained a Bode diagram; see Fig.6. It should be noticed that, since the Rubbertuator is a nonlinear system, we can see many lines in the figure.
Step1) From the Bode diagram, we decided a nominal plant model G0(s). The result is G0(s) = 889.27/(s² + 23.74s + 889.27), and we also show |G0(jω)| and ∠G0(jω) in Fig.6.
Step2) From Fig.6, calculate the multiplicative uncertainty Δ(jω) by Δ(jω) = G(jω)/G0(jω) - 1 for each G(jω). Then, we obtain |r(jω)| as a pointwise maximum of |Δ(jω)|.
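Step2) amounts to the following computation on measured frequency-response data; the arrays below are synthetic placeholders for the measured curves G(jω), not the actual Rubbertuator measurements.

```python
import numpy as np

def uncertainty_bound(G_measured, G0):
    """Pointwise multiplicative-uncertainty bound:
    r(w) = max over experiments of |G_k(jw)/G0(jw) - 1|."""
    deltas = np.abs(np.asarray(G_measured) / G0 - 1.0)
    return deltas.max(axis=0)

# Synthetic stand-in data: nominal response and two perturbed experiments.
w = np.logspace(0, 2, 5)
G0 = 1.0 / (1j * w + 1.0)
G_measured = [1.1 * G0, 0.8 * G0]     # +10 % and -20 % gain errors
r = uncertainty_bound(G_measured, G0)
print(r)  # the -20 % experiment dominates at every grid point
```

The resulting bound r(ω) is exactly what W2(s) must cover from above for robust stability in the next step.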
Fig.6 Bode Diagram
Step3) From the viewpoint of robust stability, we decided a weighting function W2(s) for the complementary sensitivity function T(s) so that |W2(jω)| > |r(jω)|. As the result, we chose W2(s) = (s² + 20s + 100)/... .
Step4) Decide W1(s): from the viewpoint of control accuracy, we decided a weighting function W1(s) for the sensitivity function S(s) as δW1(s) = ... .
Step5) Constructing a generalized plant P(s) as Eq.(10), we designed an H∞ controller K(s) by using PC-MATLAB [6].
We could solve the problem when δ = 50. The resulting controller can be written as

    K(s) = ( 43.669s³ + 125.87s² + 4439.7s + 211640 ) / ( s⁴ + 163.96s³ + 4375.20s² + 8262.4s + 4050.2 ).   (11)
Fig.7 shows the experimental result. The desired trajectory is the unit step r = us(t) (0 ≤ t < 4 sec). In the figure, the simulated result is also shown. From these results, the effectiveness of H∞ control can be recognized.
Fig.7 Experimental Result
5 Conclusion

In this paper, we showed some case studies of robust control of mechanical systems. Dividing the uncertainties in the dynamical model of robot systems into two cases, (i) Structured Uncertainty and (ii) Unstructured Uncertainty, we showed a control strategy suited to each case through some experiments. We can say that the next step in the field of robot motion control is to consider both uncertainties at once.
Appendix

In this appendix, we show how to design Ĵ(θ), ĉ(θ, θ̇) and k [3].
(i) Ĵ(θ) must be designed so as to be bounded for arbitrary θ and to satisfy D* := (Dᵀ + D) > 0, where D = Ĵ(θ)⁻¹J(θ).
(ii) ĉ(θ, θ̇) must be chosen so as to be bounded for arbitrary θ and for arbitrary bounded θ̇.
(iii) k should be chosen so as to satisfy k > k0, where k0 is determined from the gains h1 and h2, the quantity J1 := min over θ ∈ Πp of λmin(D*), and a bound x̄ on the model-error term x(θ) = [ Ĵ(θ)⁻¹J(θ) - I ] uM + Ĵ(θ)⁻¹[ ĉ(θ, θ̇) - c(θ, θ̇) ]; λmin(·) denotes the minimum eigenvalue of a matrix.
References

[1] J.J. Uicker: Dynamic Force Analysis of Spatial Linkages, ASME J. of Applied Mechanics, 34, 418/424 (1967).
[2] K. Osuka: Adaptive Control of Nonlinear Mechanical Systems, Trans. of SICE, 22-2, 126/133 (1986) (in Japanese).
[3] K. Osuka, T. Sugie and T. Ono: PD-Type Two-Stage Robust Tracking Control for Robot Manipulators, Proc. U.S.A.-Japan Symposium on Flexible Automation, 153/160 (1988).
[4] N. Yoshida, K. Osuka and T. Ono: PD-Type Two-Stage Robust Tracking Control for Robot Manipulators - An Application to DAIKIN Miniature Robot, Proc. of 20th ISIR, 1051/1058 (1989).
[5] K. Osuka, T. Kimura and T. Ono: H∞ Control of a Certain Nonlinear Actuator, Proc. of The 29th IEEE CDC, 370/371 (1990).
[6] B. Francis: A Course in H∞ Control Theory, Springer-Verlag (1987).
[7] R.Y. Chiang and M.G. Safonov: Robust-Control Toolbox, The MathWorks, Inc. (1988).
High-Order Parameter Tuners for the Adaptive Control of Nonlinear Systems*

A. S. Morse
Department of Electrical Engineering, Yale University, P.O. Box 1968, New Haven, CT 06520-1968, USA
Abstract

A new method of parameter tuning is introduced which generates as outputs not only tuned parameters, but also the first n̄ time derivatives of each parameter, n̄ being a prespecified positive integer. It is shown that the algorithm can be used together with a suitably defined identifier-based parameterized controller to adaptively stabilize any member of a specially structured family of nonlinear systems.
Introduction

In the past few years there have been a number of attempts to extend existing adaptive control techniques to processes modelled by restricted classes of nonlinear dynamical systems (e.g., [1]-[3]). Such efforts usually consist of showing that a particular algorithm is capable of stabilizing a specially structured, linearly parameterized, nonlinear design model Σ_D(p) for each fixed but unknown value of p. Techniques from nonlinear system theory are then used to characterize, in coordinate-independent terms, those dynamical process models whose input-output descriptions match or are close to that of Σ_D(p) for particular values of p in Σ_D's parameter space P. This then is the class of admissible process models C_P to which the particular algorithm can be successfully applied.

With the preceding in mind it is natural to wonder just how far existing adaptive techniques (e.g., [4]) can be extended into the world of nonlinear systems without having to introduce significant modifications. For example, for those cases in which parameters enter the model in "just the right way" and nonlinearities are such that with knowledge of the parameters nonadaptive stabilization is possible, one might expect that without knowledge of the parameters and notwithstanding the nonlinear dynamics, adaptive stabilization could be achieved using familiar methods. Interestingly, this has proved not to be the case, even for very simply structured examples, when the nonlinearities are not globally Lipschitz. Within the very recent past, several new adaptive algorithms have been discovered (e.g., see [5]-[9]) which overcome some of these difficulties. One of the key features of these algorithms which sharply distinguishes them from the more familiar algorithms described in [4] and elsewhere is the absence of tuning error normalization. The purpose of this paper is to introduce a new method of parameter tuning, also employing an unnormalized tuning error, which can be used to obtain adaptive controllers with the same capabilities as those reported in [6].

The new tuner is of "high order". By a tuner of order n̄ is meant a tuning algorithm capable of generating as outputs not only tuned parameters, but also the parameters' first n̄ time derivatives. Most tuning algorithms are of order one. The only previous studies of high-order tuners we know of can be found in [10] and [11]. What distinguishes the algorithm introduced here from those considered in [10] and [11] is the absence of tuning error normalization.

The new high-order tuner Σ_T is described in §1. In §2, we define one possible class C_P of process models Σ_P to which high-order tuning might be applied. In §3 we explain how to construct an "identifier-based parameterized controller" Σ_C (cf. [12]), which together with Σ_T is capable of adaptively stabilizing any process admitting a model in C_P. A brief stability analysis is carried out in §4.
"This research was supported by the National Science Foundation under grant number ECS9012551
1. High-Order Tuner Σ_T
Consider the "error equation"

    ė = -λe + q0 (k - q_P)ᵀ w + ε   (1)

where λ is a positive constant, q0 is a nonzero scalar constant of unknown magnitude but known sign, q_P is an unknown constant m-vector, w is an m-vector of known time functions, e is a known scalar tuning error, ε is an exponentially decaying time function, and k is a vector of m parameters to be tuned. Equations of this form arise in connection with certain types of adaptive control problems, as will be shown in the sequel. Our objective here is to explain how to tune k in such a way as to cause both k and e to be bounded wherever they exist and also to cause e to have a finite L2 norm. The algorithm we propose will generate as outputs not only k, but also the first n̄ derivatives of k, n̄ being a prespecified positive integer. These derivatives can thus be used elsewhere in the construction of an overall adaptive controller. In order for the algorithm to have this property we shall have to make the following assumption.

Assumption 1: The first n̄ - 1 derivatives of w are available signals.
High-Order Tuning Algorithm:

• If n̄ = 1, set
    k̇ = -sign(q0) w e   (2)

• If n̄ > 1, pick a monic, stable polynomial α(s) of degree n̄ - 1 and let (c̄, Ā, b̄) be a minimal realization of α(0)/α(s); set

    ε̄ = e - v   (3)
    v̇ = -λv + h0 (k - h)ᵀ w   (4)
    ḣ0 = -(k - h)ᵀ w ε̄   (5)
    ḣ = -sign(q0) w ε̄   (6)
    Ẋ = Ā X (I + D) + b̄ hᵀ (I + D)   (7)
    k = c̄ X   (8)

where D is the m x m diagonal matrix whose ith diagonal entry is wi², wi being the ith component of w.

Remark 1: Note that for n̄ > 1, the definition of (c̄, Ā, b̄) together with (6) to (8) ensures that the first n̄ derivatives of k can be expressed as functions of X, h, w, ε̄ and the first n̄ - 1 derivatives of w. Therefore, in view of Assumption 1, the first n̄ derivatives of k are realizable from available signals.

Remark 2: The significant properties of the preceding algorithm will remain unchanged if D is replaced with any positive definite matrix D̄ satisfying D̄ ≥ D. One such matrix is D̄ = wᵀw I.
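For n̄ = 1 the tuner (2) together with the error equation (1) is the classical unnormalized gradient scheme, and its behaviour is easy to see in simulation; the scalar values of λ, q0, q_P and the constant regressor w below are arbitrary illustrative choices, with the decaying term ε taken identically zero.

```python
import math

# Scalar (m = 1) simulation of error equation (1) with first-order tuner (2):
#   e' = -lam*e + q0*(k - qP)*w,   k' = -sign(q0)*w*e.
lam, q0, qP = 1.0, 1.0, 2.0
w = lambda t: 1.0                 # constant (persistently exciting) regressor
e, k = 1.0, 0.0                   # initial tuning error and parameter
dt = 0.01
for i in range(2000):             # 20 s of Euler integration
    t = i * dt
    de = -lam * e + q0 * (k - qP) * w(t)
    dk = -math.copysign(1.0, q0) * w(t) * e
    e, k = e + dt * de, k + dt * dk

print(abs(e), abs(k - qP))  # both driven toward zero
```

The Lyapunov function e² + |q0| (k - q_P)² mentioned in the proof idea below is what guarantees this boundedness in general; parameter convergence itself additionally needs a persistently exciting w, as in this scalar example.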
The main result of this paper is as follows.
High-Order Tuning Theorem: Let [0, T) be any subinterval of [0, ∞) on which w is defined and continuous.
• For β = 1, each solution {e, k} to (1) and (2) is bounded on [0, T) and e has a finite L₂ norm.
• For β > 1, each solution {e, v, h₀, h, X} to (1), (3)-(8) is bounded on [0, T) and e and v have finite L₂ norms.
Idea of proof: For β = 1 the result is well known and can easily be deduced by evaluating the derivative of the "partial Lyapunov function" e² + |q₀| ‖k - q_P‖² along a solution to (1) and (2). For β > 1 the theorem can be proved without much more effort using the partial Lyapunov function ē² + (h₀ - q₀)² + |q₀| ‖h - q_P‖² + δ Σᵢ zᵢ'Pzᵢ, where zᵢ = xᵢ + b̄hᵢ, xᵢ is the ith column of X, hᵢ is the ith component of h, P is the unique positive definite solution to the Lyapunov equation PĀ + Ā'P = -I, and δ is a suitably small positive constant.
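The Lyapunov equation PĀ + Ā'P = -I appearing in the proof sketch is a standard computation; the sketch below, using an arbitrary stable matrix in place of Ā (an assumption for illustration), solves it with SciPy and checks that P is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# An arbitrary stable matrix standing in for A-bar (illustrative assumption)
Abar = np.array([[0.0, 1.0], [-2.0, -3.0]])

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q;
# with a = Abar' this gives Abar' P + P Abar = -I, as in the proof sketch
P = solve_continuous_lyapunov(Abar.T, -np.eye(2))

residual = P @ Abar + Abar.T @ P + np.eye(2)
eigs = np.linalg.eigvalsh(P)
print(residual, eigs)
```

Positive definiteness of P is what makes each term zᵢ'Pzᵢ of the partial Lyapunov function nonnegative.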
For a complete proof, see [13].
Remark 3: If an upper bound q̄ for |q₀| is known, the preceding algorithm can be simplified by setting ē equal to e, eliminating (4) and (5), and replacing D with μD, where μ is a positive scale factor of sufficiently large value (an explicit bound, depending on q̄, c̄, b̄ and P, is given in [13]). Using this particular algorithm it is possible to construct an adaptive controller along the lines of [14], which is capable of stabilizing any SISO process admitting a linear, minimum phase model whose dimension does not exceed a prespecified integer n and whose relative degree is either β or β + 1 [13].
2. Process Model Σ_P
Let n and n* be fixed (i.e., known) integers satisfying n ≥ n* > 0 and let N : ℝ → ℝ be a memoryless, locally Lipschitz, nonlinear function satisfying N(0) = 0. The class C_P of process models Σ_P to be considered consists of SISO systems of the form

ẋ_P = A_P x_P + b_P u + b_N N(y)
y = c_P x_P   (9)

where (A_P, b_P, c_P) is an n_P-dimensional, stabilizable, detectable linear subsystem. Each Σ_P ∈ C_P admits a block diagram representation of the form shown in Figure 1, where t_P(s) and t_N(s) are the strictly proper transfer functions g_P β_P(s)/α_P(s) and β_N(s)/α_P(s) respectively; here α_P(s) is the characteristic polynomial of A_P, β_P(s) is monic, and g_P is a nonzero constant called the high-frequency gain.
Figure 1: Process Model Σ_P
We shall make the following Process Model Assumptions: for each Σ_P ∈ C_P,

1. n_P ≤ n
2. relative degree t_P(s) = n*
3. g_P > 0
4. β_P(s) is stable
5. N is known
6. relative degree t_P(s) ≤ relative degree t_N(s)
For the case when t_N(s) = 0, the preceding reduce to the process model assumptions made in the well-known classical model reference adaptive control problem originally solved in [15]. A more general version of the nonlinear problem considered here, with multiple nonlinear feedback loops, has recently been solved in [6], but without the benefit of high-order tuning.
Assumption 3 is not crucial and can easily be avoided using a Nussbaum gain [16]. Assumption 6 also appears not to be crucial and can probably be replaced, as in [7, 8, 9], with the much milder requirement that N have a continuous ρth derivative, ρ being either the difference between the relative degrees of t_N(s) and t_P(s) or zero, whichever is larger. The additional techniques needed for handling this case, which involve the introduction of "nonlinear damping," can be found in [9].
3. Parameterized Controller Σ_C
Our aim here is to construct an "identifier-based parameterized controller" Σ_C suitable for high-order tuning. In general, identifier-based parameterized controllers consist of two subsystems: an "identifier" Σ_I whose function is to generate an identification error e, and an "internal regulator" or "certainty equivalence controller" Σ_R whose output serves as the closed-loop control to the process (cf. [12]).
As the first step in the construction of Σ_C, we select an appropriate "parameterized design model" Σ_D upon which the structure of Σ_I and the synthesis of Σ_R are to be based. For this, choose a positive number λ, a monic stable polynomial α_*(s) of degree n* - 1, a controllable, observable realization (A_*, b_*, c_*, d_*) whose transfer function has denominator α_*(s), and an n-dimensional, single-output, observable pair (c, A) with A stable. These
selections determine the "direct control" design model (cf. [12]) Σ_D(p), parameterized by the vector p = [p₀ p₁' p₂' p₃' p₄]' ∈ 𝒫 and described by the equations

ż_D1 = A z_D1 + p₁ y_D + p₂ u_D + p₃ N(y_D)
ż_D2 = A_* z_D2 + b_*(u_D + c z_D1 + p₄ N(y_D))
ȳ_D = c_* z_D2 + d_*(u_D + c z_D1 + p₄ N(y_D))
ẏ_D = -λ y_D + p₀ ȳ_D   (10)
Σ_D admits the block diagram representation of Figure 2, where the block associated with pᵢ is the transfer function of (A, pᵢ, c).
Figure 2: Direct Control Design Model
Σ_D has three properties which make it especially useful for the problem at hand.
Properties of Σ_D:
1. Detectability: For each fixed p ∈ 𝒫, each solution [z_D1' z_D2' y_D]' to (10) along which y_D(t) = u_D(t) ≡ 0 goes to zero exponentially fast.
2. Output Stabilizability: For each fixed p ∈ 𝒫, the control u_D = -c z_D1 - p₄ N(y_D) "output stabilizes" Σ_D in that y_D(t) = e^{-λt} y_D(0).
3. Matchability: For each Σ_P ∈ C_P, there exists a point q ∈ 𝒫 at which Σ_D "matches" Σ_P in the sense that Σ_D(q) admits the same block diagram representation as Σ_P.
Because of matchability and the structure of Σ_D in (10), it is possible to write

ẏ = -λy + q₀(q̄'w + c_* z_* + d_* u) + ε_P   (11)
where w is a weighting vector generated by the identifier equations

ż = diag{A', A', A'} z + diag{c', c', c'} [y, u, N(y)]'
Ḣ = A_* H + b_* [z' N(y)]
w' = c_* H + d_* [z' N(y)]
ż_* = A_* z_* + b_* u   (12)
Let π(s) and p(s) denote the unique quotient and remainder, respectively, of (s + λ)α(s)α_*(s) divided by α_P(s). It is easy to verify that with q₀ = g_P and with q₁, q₂, q₃, and q₄ defined so that (A, b, q₁'), (A, b, q₂'), and (A, b, q₃', q₄) realize the transfer functions determined by β_P(s), p(s), and β_N(s) respectively, q̄ will have the required property.
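The quotient and remainder in this construction come from ordinary polynomial division; a minimal sketch with placeholder polynomials (the coefficients are illustrative assumptions, not data from the paper):

```python
import numpy as np

# Placeholder polynomials (illustrative assumptions, not the paper's data)
lam = 1.0
alpha      = np.array([1.0, 2.0])        # alpha(s)   = s + 2
alpha_star = np.array([1.0, 3.0])        # alpha_*(s) = s + 3
alpha_P    = np.array([1.0, 1.0, 1.0])   # alpha_P(s) = s^2 + s + 1

# Numerator (s + lambda) * alpha(s) * alpha_*(s)
num = np.polymul(np.polymul([1.0, lam], alpha), alpha_star)

# Unique quotient and remainder of num divided by alpha_P
quot, rem = np.polydiv(num, alpha_P)

# Division identity: num = quot * alpha_P + rem
recon = np.polyadd(np.polymul(quot, alpha_P), rem)
print(quot, rem, np.allclose(recon, num))
```

Uniqueness of the quotient and remainder is exactly the uniqueness of Euclidean division of polynomials.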
Here ε_P is a linear combination of decaying exponentials depending on the eigenvalues of A and A_*, and q̄ = [q₁' q₂' q₃' q₄]' (cf. [15, 12]). In view of (11), it makes sense to use as an identification error
e = ŷ_D - y   (13)

where

dŷ_D/dt = -λ ŷ_D + k₀ ȳ_D   (14)
ȳ_D = k'w + c_* z_* + d_* u   (15)

since this leads to the familiar error equation

ė = -λe + (k₀ - q₀)ȳ_D + q₀(k - q̄)'w - ε_P   (16)
Here k₀ ∈ ℝ and k = [k₁' k₂' k₃' k₄]' ∈ ℝ^{3n+1} are parameters to be tuned. Σ_I is thus completely described by (12) to (15).
To define the certainty equivalence control Σ_R, consider first the "surrogate" design model substate

z̄_D1 = [S(k₁) S(k₂) S(k₃)] z   (17)

where S(ξ) = [(M⁻¹M₁)'ξ (M⁻¹M₂)'ξ ⋯ (M⁻¹Mₙ)'ξ]' for all ξ ∈ ℝⁿ, M is the observability matrix of (c, A), Mᵢ is the observability matrix of (eᵢ', A), and eᵢ is the ith unit vector in ℝⁿ [12]. If N were linear, then in accordance with the Certainty Equivalence Output Stabilization Theorem [12], one would set³

u = -c z̄_D1 - k₄ N(y)   (18)
since u_D = -c z_D1 - p₄ N(y_D) output stabilizes Σ_D at p. This control would then define Σ_R.
Unfortunately, problems arise as soon as one tries to use (18) when N is nonlinear. The problem appears to stem from the use of certainty equivalence itself, since this is, in essence, a "static" or "frozen parameter" approach to the design of Σ_R [12]. The difficulty is that certainty equivalence promotes control laws such as (18) without directly taking into account the actual time dependence of tuned parameters. Because of this, for certainty equivalence control to work, parameters must be tuned "slowly," at least on the average. This is typically accomplished using some form of tuning error normalization. Simple examples suggest that it is the slow tuning necessitated by the certainty equivalence approach which enables strongly nonlinear functions (i.e., functions which are locally but not globally Lipschitz) to foil adaptive controllers configured in this way.
What recent research has shown [5]-[9] is that there are alternative methods, not using tuning error normalization and not of the "static" certainty equivalence type, which can be used with success even if N is not globally Lipschitz. What we now propose is a method along these general lines.
To motivate what we have in mind, let us note first that the output stabilizing control u_D = -c z_D1 - p₄ N(y_D), upon which the static certainty equivalence control (18) is based, also exponentially stabilizes ȳ_D as defined in (10), since with this control

α_*(δ)ȳ_D = 0

where δ is the differential operator. From this, (12) and (15), it is easy to see that if k were held fixed, then (18) would exponentially stabilize ȳ_D, since this control would cause ȳ_D to satisfy

α_*(δ)ȳ_D = 0   (19)
This suggests that to avoid the need for slow tuning, u should be chosen to make (19) hold exactly, even though k is not constant. Since (15) and (12) imply that

α_*(δ)ȳ_D = u + α_*(δ)(k'w)   (20)
³Using (17) and the easily derived identity c S(ξ) = ξ', (18) can be rewritten in the more familiar form u = -[k₁' k₂' k₃'] z - k₄ N(y).
it is obvious that (19) can, in fact, be achieved with the dynamic certainty equivalence output stabilization law

u = -α_*(δ)(k'w)   (21)
Recalling that α_*(s) is of degree n* - 1, we see that control (21) can be realized without time function differentiation, provided the first n* - 1 derivatives of both k and w are available signals. In view of (12), w clearly has this property. Therefore, if things can be arranged so that error equation (16) takes on the form of (1), then a tuner of order n* - 1 can be constructed to endow k with the required properties. To get (16) into this form, it is enough simply to set k₀ = 0, since if this is done (16) becomes
ė = -λe + q₀(k - q̄)'w + ε̄   (22)

where

ε̄ = -q₀ ȳ_D - ε_P   (23)
In view of (19), ε̄ must go to zero exponentially fast. Henceforth, therefore, we shall take (21) as the definition of Σ_R, and we shall assume that Σ_T is a tuner of order β = n* - 1 defined by (2) to (8)². In addition, since ŷ_D is not needed, we shall take it to be zero, in which case
e = -y   (24)
4. Stability Analysis
The overall adaptive control system to be examined consists of the high-order tuner Σ_T described by (2) to (8), the process model Σ_P defined by (9), the identifier Σ_I modeled by (12), (15) and (24), and the dynamic certainty equivalence control Σ_R given by (21). The steps involved in establishing global boundedness are as follows:
1. Using (12) it is easy to see that the component subvectors w₁, w₂, w₃, and w₄ of w satisfy the differential equations

α_*(δ)(δI - A')w₁ = c' y   (25)
g_P β_P(δ)α_*(δ)(δI - A')w₂ = c' α_P(δ)y - β_N(δ)α_*(δ)(δI - A')w₃   (26)
α_*(δ)(δI - A')w₃ = c' N(y)   (27)
α_*(δ)w₄ = N(y)   (28)
Since the process model assumptions imply that the transfer matrices

(1/α_*(s))(sI - A')⁻¹c',  (α_P(s)/(g_P β_P(s)α_*(s)))(sI - A')⁻¹c',  β_N(s)/(g_P β_P(s))

are each proper and stable, in the light of (25) and (26), w can be viewed as the output of a stable linear system Σ_W with state z_W and inputs y, w₃, and w₄. The interconnection of Σ_W with Σ_T determined by (22) and (24) is thus a dynamical system Σ of the form

ẋ_Σ = f(x_Σ, w₃, w₄, ε̄),  w = g(x_Σ, w₃, w₄, ε̄)   (29)

where x_Σ = {y, v, h₀, h, X, z_W} and f and g are analytic functions⁴.
2. Now invoke the High-Order Tuning Theorem and (24), thereby establishing the boundedness of all components of x_Σ except z_W.
3. From (19) and the definition of ε̄ in (23), it follows that ε̄ is in 𝒟^{n*-1}, the space of all vector-valued, bounded functions with bounded time derivatives up to and including those of degree n* - 1. From the boundedness of y, (27) and (28), it is clear that w₃ and w₄ are also in 𝒟^{n*-1}. The definition of Σ_W together with the boundedness of its inputs imply that z_W is bounded as well.
²If n* = 1, set β = 1.
⁴If n* = 1, x_Σ = {y, k, z_W}.
4. The preceding characterizes Σ as an analytic dynamical system with a bounded state and an input {ε̄, w₃, w₄} in 𝒟^{n*-1}. From this it follows that w must be in 𝒟^{n*-1}. The equations defining Σ_T show that k must be in 𝒟^{n*-1} as well. From these observations and (21) it follows that u is bounded.
5. The boundedness of u and y, together with the assumption that (c_P, A_P) is detectable, implies that x_P is bounded. The boundedness of u and y also implies that the identifier state components H, z_*, and z defined by (12) are bounded.
6. The preceding establishes the boundedness of the complete adaptive system state {v, h₀, h, X, x_P, H, z_*, z} (or {k, x_P, H, z_*, z} for the case β = 1), wherever it exists. From this and the smoothness of the overall adaptive system it follows that its state must exist and be bounded on [0, ∞). It also follows from the boundedness of ẏ and the finiteness of the L₂ norm of y, established by the High-Order Tuning Theorem and (24), that y goes to zero as t → ∞ [17].
Concluding Remarks
The aim of this paper has been to introduce a new high-order tuning algorithm and to discuss its possible use in the synthesis of adaptive controllers for special families of nonlinear systems. While the resulting controllers, like their predecessors in [5]-[9], are a good deal more complicated than the standard certainty equivalence controllers discussed in [4], the former can adaptively stabilize many types of nonlinear systems which the latter cannot.
It is interesting to note that because the overall adaptive system under consideration here has been developed without the presumption of slowly tuned parameters, the stability analysis differs somewhat from that used in the study of more familiar adaptive systems (e.g., see [12, 4]). Whereas the latter typically make use of small signal perturbation analysis, the Bellman-Gronwall Lemma, and the like, the reasoning used here does not, and consequently is somewhat less technical.
The algorithms developed in this paper can of course also be applied to linear process models. Whether or not the departure from static certainty equivalence control suggested in this paper can improve performance in such applications remains to be seen.
References
[1] S. Sastry and A. Isidori, "Adaptive Control of Linearizable Systems," IEEE Trans. Auto. Control, AC-34, Nov. 1989, pp. 1123-1131.
[2] R. Marino, I. Kanellakopoulos and P. V. Kokotovic, "Adaptive Tracking for Feedback Linearizable Systems," Proc. 28th CDC, Tampa, 1989, pp. 1002-1007.
[3] L. Praly, G. Bastin, J.-B. Pomet and Z. P. Jiang, "Adaptive Stabilization of Nonlinear Systems," Ecole Nationale Superieure des Mines de Paris, Technical Report No. A-236, October 1990.
[4] S. Sastry and M. Bodson, Adaptive Control: Stability, Convergence and Robustness, Prentice-Hall, Englewood Cliffs, NJ, 1989.
[5] I. Kanellakopoulos, P. V. Kokotovic and A. S. Morse, "Systematic Design of Adaptive Controllers for Feedback Linearizable Systems," IEEE Trans. Auto. Control, to appear.
[6] I. Kanellakopoulos, P. V. Kokotovic and A. S. Morse, "Adaptive Output-Feedback Control of Systems with Output Nonlinearities," Foundations of Adaptive Control, Springer-Verlag, Berlin, 1991.
[7] R. Marino and P. Tomei, "Global Adaptive Observers and Output-Feedback Stabilization for a Class of Nonlinear Systems," Foundations of Adaptive Control, Springer-Verlag, Berlin, 1991.
[8] Z. P. Jiang and L. Praly, "Iterative Designs of Adaptive Controllers for Systems with Nonlinear Integrators," Proc. 1991 IEEE CDC, submitted.
[9] I. Kanellakopoulos, P. V. Kokotovic and A. S. Morse, "Adaptive Output-Feedback Control of a Class of Nonlinear Systems," Coordinated Science Laboratory Technical Report DC-131, submitted for publication.
[10] D. R. Mudgett, Problems in Parameter Adaptive Control, Yale University Doctoral Dissertation, 1988.
[11] D. R. Mudgett and A. S. Morse, "High-Order Parameter Adjustment Laws for Adaptive Stabilization," Proc. 1987 Conference on Information Sciences and Systems, March 1987.
[12] A. S. Morse, "Towards a Unified Theory of Parameter Adaptive Control, Part 2: Certainty Equivalence and Implicit Tuning," IEEE Trans. Auto. Control, to appear.
[13] A. S. Morse, "High-Order Parameter Tuners for the Adaptive Control of Linear and Nonlinear Systems," Proc. U.S.-Italy Joint Seminar on Systems, Models, and Feedback: Theory and Applications, Capri, 1992.
[14] A. S. Morse, "A 4(n+1)-Dimensional Model Reference Adaptive Stabilizer for any Relative Degree One or Two Minimum Phase System of Dimension n or Less," Automatica, v. 23, 1987, pp. 123-125.
[15] A. Feuer and A. S. Morse, "Adaptive Control of Single-Input, Single-Output Linear Systems," IEEE Trans. Auto. Control, AC-23, August 1978, pp. 557-569.
[16] D. R. Mudgett and A. S. Morse, "Adaptive Stabilization of Linear Systems with Unknown High-Frequency Gains," IEEE Trans. Auto. Control, AC-30, June 1985, pp. 549-554.
[17] M. A. Aizerman and F. R. Gantmacher, Absolute Stability of Regulator Systems, Holden-Day, 1964, p. 58.
DESIGN OF ROBUST ADAPTIVE CONTROL SYSTEM WITH A FIXED COMPENSATOR

H. OHMORI and A. SANO

Department of Electrical Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223, Japan

Abstract. This paper presents a design method for model reference robust adaptive control systems (MRRACS) with a fixed compensator for a single-input single-output (SISO) linear time-invariant continuous-time plant in the presence of bounded unknown deterministic disturbances and unmodeled dynamics. The fixed compensator is designed based on H∞-optimal theory. The aim of the proposed control structure is to harmonize adaptive control with robust control.
1. INTRODUCTION
Model reference adaptive control systems (MRACS) are designed under the assumption that the plant dynamics are exactly represented by one member of a specified class of models. Unfortunately, it has been shown (Rohrs and coworkers, 1982) that bounded disturbances and/or unmodeled dynamics can make the system unstable.
Several modified adaptive control laws have been proposed (surveyed by Ortega and Tang, 1989) to achieve robust stability and performance in the presence of unmodeled dynamics and/or disturbances.
In this paper we consider the model reference adaptive control of a linear time-invariant plant in the presence of unknown deterministic disturbances and unmodeled dynamics. The key idea of the proposed adaptive control system is that the system consists of two feedback loops: one performs the model matching adaptively, and the other includes an error feedback controller we call "the fixed compensator," which can compensate for the disturbances and model uncertainty.
The main objective of this paper is to propose an H∞ design method for the fixed compensator that can achieve global stability of the proposed adaptive control system, reduce the effect of disturbances, and provide robust performance. The fixed compensator is obtained as the solution of an H∞-optimal design problem.
2. SYSTEM DESCRIPTION
Consider the following single-input, single-output (SISO), linear time-invariant (LTI) system:
A(s)x(t) = u(t) + d_u(t)
y(t) = B(s)(1 + L(s))x(t) + d_y(t)   (1)
where

A(s) = sⁿ + a_{n-1}s^{n-1} + ... + a₀,  B(s) = b_m s^m + ... + b₀   (2)
where y(t) and u(t) are the measured output and input respectively, d_u(t) and d_y(t) represent the disturbances at the input and output respectively, and x(t) is the partial state. L(s) denotes unmodeled dynamics representing an unstructured uncertainty, which is dominant in the high-frequency domain.
We will make the following assumptions:
(A1) The orders n and m (≤ n - 1) are known. Furthermore, A(s) and B(s) are relatively prime;
(A2) The unknown deterministic disturbances d_u(t) and d_y(t) are bounded;
(A3) The zeros of the polynomial B(s) lie strictly inside the left half-plane, i.e. the plant is minimum phase;
(A4) b_m > 0;
(A5) There exists a bounded scalar function ℓ(jω) such that

|L(jω)| ≤ ℓ(jω),  ∀ω   (3)
The desired output {y_M(t)} satisfies the following reference model:

A_M(s)y_M(t) = B_M(s)r(t)   (4)

where

A_M(s) = s^{n_M} + a_{M(n_M-1)}s^{n_M-1} + ... + a_{M0},  B_M(s) = b_{M m_M}s^{m_M} + ... + b_{M0}   (5)
We also impose the following additional constraints:
(A6) A_M(s) is an asymptotically stable polynomial;
(A7) {r(t)} is uniformly bounded;
(A8) n - m ≤ n_M - m_M.
The design problem to be considered here is to synthesize an MRRACS for the plant in the presence of disturbances and unmodeled dynamics which causes the tracking error e(t) (≜ y(t) - y_M(t)) to approach zero as closely as possible.
3. CONTROLLER STRUCTURE
In this section we propose a configuration including the fixed compensator which can not only reduce the effect of disturbances and unmodeled dynamics but also maintain exact model matching. We choose any asymptotically stable polynomial T(s) described by

T(s) = s^{n-m-1} + t_{n-m-2}s^{n-m-2} + ... + t₀   (6)
There exist unique polynomials R(s) and S(s) which satisfy the following Diophantine equation:

T(s)A_M(s) = A(s)R(s) + S(s)   (7)

where

R(s) = s^{n_M-m-1} + r_{n_M-m-2}s^{n_M-m-2} + ... + r₀,  S(s) = s_{n-1}s^{n-1} + ... + s₀   (8)

Furthermore, by defining
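Since deg S(s) < deg A(s), Eq. (7) is exactly the Euclidean division of T(s)A_M(s) by A(s): R(s) is the quotient and S(s) the remainder. A minimal sketch with assumed illustrative coefficients:

```python
import numpy as np

# Illustrative data (assumed): A(s) = s^2 + 3s + 2, T(s)A_M(s) = s^3 + 2s^2 + s + 1
A   = np.array([1.0, 3.0, 2.0])
TAM = np.array([1.0, 2.0, 1.0, 1.0])

# Euclidean division gives the unique monic R (quotient) and S (remainder), deg S < deg A
R, S = np.polydiv(TAM, A)

# Verify the Diophantine identity T*A_M = A*R + S
recon = np.polyadd(np.polymul(A, R), S)
print(R, S, np.allclose(recon, TAM))
```

Uniqueness of the pair (R, S) under the stated degree constraints is exactly the uniqueness of Euclidean polynomial division.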
B_R(s) ≜ R(s)B(s) - b_m H(s) = b_{R(n-2)}s^{n-2} + ... + b_{R0}   (9)

where H(s) is an asymptotically stable monic polynomial of order n - 1, the plant Eq. (1) can be rewritten as
T(s)A_M(s)y(t) = H(s)[θ*'ζ(t)] + R(s)d̄(t)   (10)

where

θ* ≜ [b_m, b_{R(n-2)}, ..., b_{R0}, s_{n-1}, s_{n-2}, ..., s₀]' ∈ ℝ^{2n}   (11)

ζ(t) ≜ [u(t), (s^{n-2}/H(s))u(t), ..., (1/H(s))u(t), (s^{n-1}/H(s))y(t), ..., (1/H(s))y(t)]' ∈ ℝ^{2n}   (12)

d̄(t) ≜ B(s)L(s)u(t) + d(t)   (13)
d(t) ≜ A(s)d_y(t) + B(s)(1 + L(s))d_u(t)   (14)
The adaptive control input u(t) is generated so as to satisfy

H(s)[θ̂'(t)ζ(t)] = T(s)B_M(s)(r(t) - v(t))   (15)

where

θ̂(t) ≜ [θ̂₁(t), θ̂₂(t), ..., θ̂_{2n}(t)]'   (16)

is the vector of adjustable parameters and v(t) is the output of the fixed compensator driven by the tracking error:

M(s)v(t) = N(s)e(t)   (17)
F(s) ≜ N(s)/M(s)   (18)

Now we propose an MRRACS (see Fig. 1) with the fixed compensator F(s) given by (18).
Fig.1. The proposed adaptive control system using the fixed compensator.
The proposed MRRACS with F(s) = 0 reduces to the ordinary MRACS structure. Using Eq.(4), the adaptive control law Eq.(15) can be rewritten as
T(s)A_M(s)y_M(t) = H(s)[θ̂'(t)ζ(t)] + T(s)B_M(s)v(t)   (19)
From Eqs. (10) and (19), we have

e(t) = (H(s)/(T(s)A_M(s))) [ψ'(t)ζ(t) - (T(s)B_M(s)N(s)/(H(s)M(s))) e(t)] + (R(s)/(T(s)A_M(s))) d̄(t)   (20)

where

ψ(t) ≜ θ* - θ̂(t)   (21)
We can rewrite Eq. (20) as

e(t) = (H(s)(1 + L₁(s))/(T(s)A_M(s))) [ψ'(t)ζ(t) - (T(s)B_M(s)N(s)/(H(s)M(s))) e(t)] + (R(s)/(T(s)A_M(s) + S(s)L(s))) d₁(t)   (22)

where

L₁(s) = A(s)R(s)L(s)/(T(s)A_M(s) + S(s)L(s))   (23)
d₁(t) = A(s)L(s)y_M(t) + d(t)   (24)

It is worth noting that d₁(t) is a bounded disturbance. We can obtain the following theorem.
Theorem 1: Let the non-minimal expression of the plant Eq. (10) be given together with the adaptive control law Eq. (15) with the fixed compensator F(s). Then the tracking error e(t) is described by

e(t) = W₁(s)[ψ'(t)ζ(t)] + W_d(s)d₁(t)   (25)

where

W₁(s) = (H(s)M(s)/(T(s)V(s)))(1 + L₂(s))   (26)
W_d(s) = R(s)M(s)/(T(s)V(s) + S_M(s)L(s))   (27)
V(s) ≜ A_M(s)M(s) + B_M(s)N(s)   (28)
S_M(s) ≜ M(s)S(s) + T(s)B_M(s)N(s)   (29)
L₂(s) = (M(s)A(s)R(s)/(T(s)V(s) + S_M(s)L(s))) L(s)   (30)
Remark 1: If the fixed compensator F(s) and the unmodeled dynamics L(s) are absent, the error transfer function W₁(s), the disturbance transfer function W_d(s) and d₁(t) reduce to

W₁(s) = H(s)/(T(s)A_M(s)),  W_d(s) = R(s)/(T(s)A_M(s))   (31)
d₁(t) = A(s)d_y(t) + B(s)d_u(t)   (32)
Comparing the above disturbance transfer function W_d(s) with the one in Eqs. (26) and (27), it is obvious that the presence of the fixed compensator can achieve disturbance rejection if it is designed suitably.
Remark 2: Note that the proposed system including the fixed compensator can be implemented if V(s) is a stable polynomial, and that this requirement has absolutely no relation to the controller parameters which accomplish exact model matching. Hence introducing the fixed compensator does not violate the exact model matching condition.
Remark 3: In the case of ordinary exact model matching, since the characteristic polynomial governing the disturbance response is T(s)A_M(s), if A_M(s) must be chosen as a polynomial having a slow convergence mode then the convergence of the disturbance response is also slow. In the proposed exact model matching with the fixed compensator, since the characteristic polynomial of the system becomes T(s)V(s), we can set the convergence property of the disturbance response independently of A_M(s).
4. ADAPTIVE SCHEME AND STABILITY ANALYSIS

From this section on we assume that the unmodeled dynamics L(s) is absent. If the plant model has unmodeled dynamics, we can still prove the stability of the proposed adaptive system by using normalization of the adaptive law (Ioannou and Tsakalis, 1986).
The adjustable controller parameter θ̂(t) is updated by the adaptation algorithm; we adopt the σ-modification approach:

θ̂(t) = θ̂_I(t) + θ̂_P(t)   (33)
dθ̂_I(t)/dt = -σ θ̂_I(t) + Γ_I z(t)ε(t)   (34)
θ̂_P(t) = Γ_P z(t)ε(t)   (35)
σ > 0,  Γ_I = Γ_I' > 0,  Γ_P = Γ_P' > 0
where ε(t) is the augmented error and z(t) is the filtered signal, defined as follows:

ε(t) = e(t) - e_a(t)   (36)
e_a(t) = W(s)[(θ̂(t)W₂⁻¹(s) - W₂⁻¹(s)θ̂(t))'ζ(t)]   (37)
z(t) = W₂⁻¹(s)ζ(t)   (38)
where W₂(s) is chosen such that W(s) = W₁(s)W₂(s) is strictly positive real.
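Strict positive realness of a candidate W(s) can be screened numerically by sampling Re W(jω) over a frequency grid together with a stability check of the denominator; the transfer function below is an illustrative assumption, not the paper's W(s), and a grid check is of course only a necessary screening, not a proof.

```python
import numpy as np

# Illustrative candidate (an assumption): W(s) = (s + 2) / (s^2 + 3s + 2) = 1/(s + 1)
num = np.array([1.0, 2.0])
den = np.array([1.0, 3.0, 2.0])

omega = np.logspace(-3, 3, 2000)
W = np.polyval(num, 1j * omega) / np.polyval(den, 1j * omega)

# Necessary conditions sampled on a grid: Re W(jw) > 0 and a Hurwitz denominator
spr_on_grid = bool(np.all(W.real > 0))
stable = bool(np.all(np.roots(den).real < 0))
print(spr_on_grid, stable)
```

In practice, choosing W₂(s) to cancel excess relative degree of W₁(s) is what makes such a check pass.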
It will be shown that the adaptive control system using the above adaptive scheme is globally stable, i.e. all signals in the system are bounded. We can easily get the following equation:

ε(t) = W(s)[ψ'(t)z(t)] + d'(t)   (39)
d'(t) ≜ W_d(s)d(t)   (40)
The main theorem on the stability of the proposed adaptive system including the fixed compensator is as follows:

Theorem 2: If W(s) is strictly positive real and the disturbance term d'(t) is bounded, then the proposed adaptive control law Eq. (15) and Eqs. (33)-(35) guarantee that, for all initial conditions, the tracking error e(t) converges to a certain closed region Q, and all signals in the adaptive control system are bounded.
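The boundedness claim of Theorem 2 can be illustrated on a scalar toy problem: take W(s) = 1 (trivially SPR), a bounded regressor and a bounded disturbance, and run the σ-modification update (33)-(35) with forward-Euler integration. All numerical values here are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Toy scalar setup (all values are illustrative assumptions); with W(s) = 1,
# the error model (39) reduces to eps = psi*z + d
theta_star, sigma, gI, gP = 2.0, 0.1, 5.0, 1.0
dt = 1e-3
t = np.arange(0.0, 40.0, dt)
z = np.sin(t)                          # bounded filtered regressor
d = 0.2 * np.sign(np.sin(0.3 * t))     # bounded disturbance

thI = 0.0
eps_hist = np.empty_like(t)
th_hist = np.empty_like(t)
for i in range(len(t)):
    # proportional part (35) enters eps algebraically:
    # eps*(1 + gP*z^2) = (theta* - thI)*z + d
    eps = ((theta_star - thI) * z[i] + d[i]) / (1.0 + gP * z[i] ** 2)
    thP = gP * z[i] * eps                          # (35)
    thI += dt * (-sigma * thI + gI * z[i] * eps)   # (34), sigma-modification
    eps_hist[i] = eps
    th_hist[i] = thI + thP                         # (33)

print(np.max(np.abs(th_hist)), np.max(np.abs(eps_hist[-1000:])))
```

As the theorem predicts, the parameter estimate and the error stay bounded, with the error confined to a residual region whose size depends on the disturbance.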
The performance of the control system can be evaluated from the region Q above, so if the region Q can be made smaller, we can realize higher performance. In the next section we propose a design method for the fixed compensator which can make this region smaller.
5. DESIGN SCHEME OF FIXED COMPENSATOR
In this section, it is shown that the design problem of the fixed compensator in the direct adaptive control system becomes the design problem of a stable controller which internally stabilizes the reference model and achieves a low sensitivity property.
We use the following notation. ℝ is the field of real numbers and C₊ₑ denotes the extended right half-plane, i.e. {s : Re(s) ≥ 0} ∪ {∞}. RH∞ is the set of all proper real-rational functions which have no poles in C₊ₑ. 𝒰 is the class of unit functions, whose elements belong to RH∞ and whose inverses are also in RH∞. Furthermore, for a set Z, G(Z) denotes the set {G(s) : s ∈ Z}.
The fixed compensator F(s) should be designed to satisfy the following two requests for the closed loop system: (R1) there exists W₂(s) such that W(s) in Eq. (39) is strictly positive real; (R2) Q is as small as possible. The first requirement is essential to achieve global stability of the proposed adaptive system via Theorem 2. The second requirement is necessary to reduce the region Q for enhanced control performance.
If both the denominator and numerator polynomials of W₁(s) in Eq. (26) are asymptotically stable, then a proper W₂(s) exists such that W(s) = W₁(s)W₂(s) is strictly positive real. Hence if both the denominator M(s) in Eq. (17) and the characteristic polynomial V(s) in Eq. (28) are asymptotically stable polynomials, then the first request (R1) can be satisfied. It is obvious that the design problem of F(s) such that M(s) and V(s) are asymptotically stable polynomials becomes the design problem of a stable stabilizing controller for G_M(s) ≜ B_M(s)/A_M(s). Since G_M(s) is stable, it automatically satisfies the parity interlacing property, i.e. it is strongly stabilizable. It is well known that the class of all stable stabilizing compensators F(s) for G_M(s) can be represented as follows:
S_s(G_M) = S(G_M) ∩ RH∞
= { F(s) = (U(s) - 1)/G_M(s) : U(s) ∈ 𝒰, U(Z_{C+e}) = 1 }   (41)

where Z_{C+e} ≜ {s : G_M(s) = 0, s ∈ C₊ₑ} and U(s) is a unit free parameter.
From Eqs. (27), (28) and (41), by using U(s), W_d(s) in Eq. (27) is represented as

W_d(s) = (R(s)/(T(s)A_M(s))) · 1/(1 + G_M(s)F(s))
= (R(s)/(T(s)A_M(s))) · 1/U(s)   (42)
On the other hand, since F(s) should also satisfy request (R2), we require that F(s) minimize the following cost function:

J(U) = ‖X‖∞,  X(s) = T_D(s) · (1/U(s))   (43)

where T_D(s) ∈ 𝒰 is a weighting function.
Combining the requests in Eqs. (41) and (43), we can formulate the design problem of the fixed compensator F(s) as the following H∞ suboptimal problem:

Find X(s) ∈ 𝒰 that satisfies

‖X‖∞ < γ,  γ ∈ ℝ   (44)

subject to X(Z_{C+e}) = T_D(Z_{C+e}).
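Solvability of such an interpolation problem for a given γ can be screened with the classical Pick matrix test from Nevanlinna-Pick theory: the data admit an interpolant of norm less than γ only if the associated Pick matrix is positive semidefinite. The interpolation points and values below are illustrative assumptions.

```python
import numpy as np

def pick_matrix(z, v, gamma):
    """Pick matrix for interpolation X(z_i) = v_i with ||X||_inf < gamma on Re s > 0."""
    z = np.asarray(z, dtype=complex)
    v = np.asarray(v, dtype=complex)
    num = gamma ** 2 - np.outer(v, v.conj())
    den = np.add.outer(z, z.conj())
    return num / den

# Illustrative interpolation data (assumed): points in the open RHP and desired values
z_pts = [1.0, 2.0 + 1.0j]
vals = [0.5, 0.3 - 0.1j]

# Feasibility requires the Pick matrix to be positive semidefinite
feasible = bool(np.all(np.linalg.eigvalsh(pick_matrix(z_pts, vals, 1.0)) >= -1e-12))
print(feasible)
```

Bisecting on γ with this test is one simple way to approach the suboptimal level in (44); the unit constraint X ∈ 𝒰 imposes additional structure handled by the approaches cited below.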
Several approaches for finding the solution of the problem mentioned above have been considered. Hara and Vidyasagar (1989) formulated sensitivity minimization and robust stabilization with stable controllers as two interpolation-minimization problems for unit functions. Ito, Ohmori and Sano (1991) have shown that an uncomplicated algorithm for attaining a low sensitivity property with a stable controller can be derived from a certain Nevanlinna-Pick interpolation problem. Using either approach, we can get the suboptimal solution X^sub(s). The resulting fixed compensator is then obtained as

F(s) = (T_D(s)/X^sub(s) - 1)/G_M(s)   (45)
Remark 4: The proposed design of the fixed compensator can practically realize high performance in disturbance reduction, because the structure of the disturbances is not assumed a priori.
Remark 5: Note that if the plant model has unmodeled dynamics L(s), the above-mentioned design problem of the fixed compensator becomes the design problem of a stable robust stabilizing compensator.
6. CONCLUSIONS
We have presented an MRRACS for a plant in the presence of deterministic disturbances and unmodeled dynamics. The fixed compensator in the MRRACS can be designed based on H∞-optimal control theory in order to realize high control performance. The same configuration should be applicable in the discrete-time case; furthermore, it can readily be extended to multivariable systems.
REFERENCES
Hara, S., and M. Vidyasagar (1989). Sensitivity minimization and robust stabilization by stable controller. Proc. Int. Symp. on Mathematical Theory of Networks and Systems.
Ioannou, P., and K. Tsakalis (1986). A robust direct adaptive controller. IEEE Trans. Automat. Contr., AC-31, No. 11, 1033-1043.
Ito, H., H. Ohmori and A. Sano (1991). A design of stable controller attaining low sensitivity property. Submitted to the IEEE Transactions on Automatic Control.
Ortega, R., and Yu Tang (1989). Robustness of adaptive controllers: a survey. Automatica, 25, No. 5, 651-677.
Rohrs, C. E., L. Valavani, M. Athans, and G. Stein (1982). Robustness of adaptive control algorithms in the presence of unmodeled dynamics. Proceedings of the 21st IEEE Conference on Decision and Control, Orlando, FL, 3-11.
Synthesis of Servomechanism Problem via H∞ Control

Shigeyuki Hosoe
Nagoya University, Japan

Feifei Zhang and Michio Kono
Tokyo University of Mercantile Marine, Japan
1. Introduction
The robust servo design is a classical but important problem in control engineering and has attracted considerable attention (see [1]-[3] and the references cited therein). The central notion employed in this work is the internal model principle. To assure the robust tracking property of a closed loop system, irrespective of plant uncertainty, with respect to a class of desired trajectories (which is assumed to be generated by a signal generator), it is known that the loop must contain dynamics (called the internal model) replicating those of the signal generator in the feedforward path. If this is the case, the servo design can be converted to a regulator design, and thus the various design methods (LQG, pole assignment, etc.) developed for the latter can be used.
In this paper, the servo design problem is considered in the framework of H∞ control. It is formulated as a kind of mixed sensitivity problem, aiming at constructing a feedback system with low-sensitivity characteristics and stability robustness with respect to plant variations. It will be shown that if we put modes corresponding to the dynamics of the signal generator into the weighting function, then the resulting controller must eventually possess an internal model.
We consider both the output feedback case and the case where some or all of the states of the plant are directly measurable without measurement noise. In the latter case, it is shown that under some mild conditions the order of the controller is reduced by the number of independent observations.
Notation:
RH∞ : the set of stable, rational, proper transfer function matrices.
BH∞ : the subset of contractive matrices in RH∞.
DHM(Λ, S) = (Λ11 S + Λ12)(Λ21 S + Λ22)^{-1}
F_l(Λ; S) = Λ11 + Λ12 S (I - Λ22 S)^{-1} Λ21
X = Ric[ A  -P ; -Q  -A^T ] : the stabilizing solution X of A^T X + X A - X P X + Q = 0 (A - P X : stable).
2. Problem Formulation
Consider the system in Fig. 1, where R(s) is the transfer function matrix of the reference signal generator, which is assumed, without loss of generality, to be antistable.
P(s) = [ A_p | B_p ; C_p | 0 ] : the transfer function matrix of the plant, whose realization is minimal. W1(s), W2(s) : the weighting functions. K(s) : the transfer function matrix of the controller.
Fig.1 Weighted feedback system
Define vectors x0, w, y1, y2, y3, u, v and z1, z2 as indicated in the figure. The vector y2 represents the measurement variable, which can be observed without measurement noise. Also, denote by Φ0 the open-loop transfer function matrix from y3 to v when the loop is opened at the point marked by ×. Then the sensitivity function S(s) and the cosensitivity function T(s) evaluated at the point × are given respectively by
S(s) = (I + Φ0)^{-1}   (2.1)

T(s) = Φ0 (I + Φ0)^{-1}   (2.2)
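As a quick numerical sanity check of (2.1)-(2.2) (a sketch with a made-up loop matrix, not data from the paper), S and T at a fixed frequency follow by direct matrix algebra, and S + T = I by construction:

```python
import numpy as np

# Hypothetical loop transfer matrix Phi0 evaluated at one frequency point
# s = j*omega; any square complex matrix with I + Phi0 invertible will do.
Phi0 = np.array([[2.0 + 1.0j, 0.5],
                 [0.1,        1.5 - 0.5j]])

I = np.eye(2)
S = np.linalg.inv(I + Phi0)   # sensitivity, (2.1)
T = Phi0 @ S                  # cosensitivity, (2.2)

# S + T = (I + Phi0)^{-1} + Phi0 (I + Phi0)^{-1} = I
assert np.allclose(S + T, I)
print(np.linalg.norm(S, 2))   # largest singular value of S at this frequency
```

Evaluating the largest singular values of S and T over a frequency grid is the usual way the two design objectives 1) and 2) below are inspected.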
Now, the servo design problem to be considered here is stated as follows: find a controller K(s) which ensures the following design objectives.

1) The closed-loop system is internally stable.
2) The system has low-sensitivity and robustly stable characteristics.
3) The system achieves output regulation for reference inputs generated by R(s), and this property is preserved under small perturbations of the plant.
It is well known that 1) and 2) can be realized by appropriately choosing W1(s) and W2(s) and solving the following kind of H∞ control problem:
Problem. Find all stabilizing K(s) such that

∥ [ W1 S ; W2 T ] ∥∞ < 1   (2.3)
Observe that this is a mixed sensitivity problem, but it differs from the usual formulation in the following points. First, the controller's transfer function is given by K̂(s), not by K(s) (see Fig. 1), and (sI - A_w)^{-1} is shared by W1(s) and K(s). This implies that in our formulation the states of the realization of W1(s) are assumed to be measurable and can be used for control purposes. Next, notice that Φ0 = P(s)K(s) if G_p = 0. Therefore in this case the problem coincides with the usual mixed sensitivity problem [4], and the present result provides an alternative way of deriving the solution.
Now, to treat the third objective in a unified framework, we choose

W1(s) = (1/a(s)) W_s(s)   (2.4)

where a(s) is the least common denominator of R(s) and W_s(s) is a minimum-phase transfer function matrix such that W1(∞) = 0. Looking at Fig. 1, it can be seen that this choice of W1(s) enforces that the controller contains 1/a(s) in the feedforward path, and that the modes of a(s) are not canceled since the closed loop is stable. Therefore, if the problem has a solution, the internal model principle is automatically satisfied.
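A minimal sketch of the weighting choice (2.4), with hypothetical coefficient data (not the paper's): representing transfer functions as numerator/denominator polynomial coefficient lists, dividing a weight W_s(s) by the least common denominator a(s) of R(s) plants the generator modes in W1(s). For step references, a(s) = s, so W1 acquires an integrator:

```python
import numpy as np

# Transfer functions as (numerator, denominator) coefficient lists,
# highest power of s first -- an illustrative convention.
Ws_num, Ws_den = [1.0], [1.0, 10.0]   # hypothetical weight Ws(s) = 1/(s + 10)

# Step references: R(s) = (1/s) I, so the least common denominator is a(s) = s.
a = [1.0, 0.0]

# W1(s) = Ws(s)/a(s): multiply the denominator of Ws(s) by a(s).
W1_num = np.array(Ws_num)
W1_den = np.polymul(Ws_den, a)

# The internal-model mode a(s) = s now appears in W1's denominator
# (a root at s = 0), which is what forces 1/a(s) into the loop.
assert np.isclose(np.polyval(W1_den, 0.0), 0.0)
print(W1_den)   # coefficients of s^2 + 10 s
```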
Finally, we assume that W2(s) is chosen so that
Before proceeding to the next section, let us rewrite Fig. 1 as in Fig. 2 (standard form). The generalized plant G(s) is given by
G(s) = [ A | B1  B2 ; C1 | 0  D12 ; C2 | D21  0 ]   (2.6)

whose blocks are assembled from the plant data (A_p, B_p, C_p) and the state-space realizations of W1(s) and W2(s).
Fig.2 Feedback system in standard form
3. Solution
In this section, the solution will be given to the servo design problem just formulated. First, we state it in a somewhat generalized setting. Write
G(s) = [ A | B1  B2 ; C1 | 0  D12 ; C2 | D21  0 ]   (3.1)
to which the following assumptions are made.
A1) D12 = [ 0 ; I_{m2} ]

A2) D21 = [ 0  I_{m1} ]

A3) C2 = [ C21 ; C22 ], with C21 of full row rank

A4) (A, B2, C2): stabilizable and detectable

A5) [ A - jωI  B1 ; C2  D21 ] : column full rank for 0 ≤ ω < ∞

A6) [ A - jωI  B2 ; C1  D12 ] : column full rank for 0 ≤ ω < ∞
From A3), C21 can be arranged as

C21 = [ I_{p2-m1}  0 ]   (3.2)

Corresponding to this partitioning of C21, we denote C1 = [ C1l  C1r ], A - B1 C22 = [ A1  A2 ; A3  A4 ], and C22 = [ C22l  C22r ], respectively. Further, for simplicity of notation, we assume that A2 and A4 are arranged as
[ A2 ; A4 ] = [ A21  0 ; A411  0 ; A421  A422 ],   A411 : stable,  (A21, A411) : observable   (3.3)

i.e., A2 and A4 have the observable canonical form. Notice that the assumption "A411 is stable" does not lose any generality, since there exists F such that F A21 + A411 is stable, and this stabilization of A411 can be realized by an appropriate coordinate transformation on A - B1 C22.
Now let
X = Ric[ A - B2 D12^T C1   B1 B1^T - B2 B2^T ; -C1^T C1   -(A - B2 D12^T C1)^T ]   (3.4)

Y4 = Ric[ A4^T   C1r^T C1r - C22r^T C22r ; -D~22   -A4 ]   (3.5)

where D~22 is the (2,2) block of C1^T C1 - C22^T C22. Write

Y = [ 0  0 ; 0  Y4 ]   (3.7)

Notice that Y4 stabilizes

Â4 = A4 + Y4 (C1r^T C1r - C22r^T C22r)   (3.8)

and Y satisfies

Y (A - B1 C22)^T + (A - B1 C22) Y + Y (C1^T C1 - C22^T C22) Y = 0   (3.9)
Theorem. For the generalized plant described by (3.1) and satisfying the assumptions A1)-A6): there exists an internally stabilizing controller K̂(s) such that

∥ G11 + G12 K̂ (I - G22 K̂)^{-1} G21 ∥∞ < 1   (3.10)

if and only if there exist X ≥ 0 and Y4 ≥ 0 (Y ≥ 0) satisfying ρ(YX) < 1.
In this case, all the controllers K̂(s) are given by
K̂(s) = DHM(K_D, [Q, V])   (3.11)
where K_D is the state-space realization given in (3.12), whose A-matrix is Â + B_1c C21 Z; or, in the form of a linear fractional transformation,

K(s) = F_l(K_F; [Q, V])   (3.13)

with K_F the corresponding realization, for arbitrary Q ∈ BH∞ and V ∈ RH∞, where B_1c ∈ R^{n×(p2-m1)} is chosen so that it stabilizes Â + B_1c C21 Z, and

Z = (I - YX)^{-1},   Â = A - B1 C22 + Y (C1^T C1 - C22^T C22),

B̂1 = B1 + Y C21^T,   Ĉ1 = D12^T C1 + B2^T X,   Ĉ22 = C22 + B2^T X.   (3.14)-(3.15)
Notice that the order of K_D or K_F is equal to n, the order of the generalized plant. Below, we shall show that if V is appropriately chosen, this order can be reduced.
From the definitions,
Â = [ A1   A2 ; A3 + Y4 (C1r^T C1l - C22r^T C22l)   Â4 ]

Z = [ I   0 ; (I - Y4 X4)^{-1} Y4 X3   (I - Y4 X4)^{-1} ] =: [ I  0 ; Z3  Z4 ]

where X4 is the (2,2) block of X. Therefore, if B_1c is selected so that its lower block equals -(A3 + Y4 (C1r^T C1l - C22r^T C22l)), the lower-left block of the A-matrix of K_D is canceled (notice that C21 Z = C21).
Next, using the definition of DHM, K̂(s) is represented as
K̂(s) = D(s)^{-1} N(s)

where [D(s)  N(s)] has a realization whose C-matrix is

[ Ĉ1l + Q C22l + (Ĉ1r + Q C22r) Z3 + V    (Ĉ1r + Q C22r) Z4 ]

Therefore, if we set V = -(Ĉ1l + Q C22l) - (Ĉ1r + Q C22r) Z3, one gets a realization of [D(s)  N(s)] whose C-matrix involves only (Ĉ1r + Q C22r) Z4.
Hence the reduced-order controller K̂(s) is obtained as

K̂(s) = DHM(K_Dr, Q)   (3.16)

where the C-matrix of K_Dr is built from the blocks Ĉ1r Z4, Ĉ1l - Ĉ1r Z3, C22r Z4 and C22l - C22r Z3 (3.17); or, in the form of a linear fractional transformation,

K(s) = F_l(K_Fr; Q)   (3.18)

with K_Fr the corresponding realization (3.19).
Observe that the order of the controller is reduced by the number of independent rows of C21. It is of interest to compare this result with minimal-order observers.
Now, let us apply the above result to the servo design problem. The correspondence between the expressions of (2.6) and (3.1) is as follows.
The matrices A, B1, B2, C1, C2, D12, D21 of (3.1) are read off from the realization (2.6); in particular, C22 = [ 0  C_p ], and C1 and A_p are partitioned conformably with C21.
Looking at these expressions, we see that assumptions A1), A2), A3) are trivially satisfied. Using the assumption that (A_p, B_p, C_p) is minimal and the definition of W2(s), it can be seen that A4) is equivalent to

A4') [ A_p - sI  B_p ; C_p  0 ] : row full rank for all s satisfying a(s) = 0.
Also, it is easy to verify that A5) is equivalent to

A5') rank [ A_p - jωI ; C_p ] = n for 0 ≤ ω < ∞.
There is no simple expression for A6), and it must be checked directly. The controller K(s) is obtained from (3.14) and the structure indicated in Fig. 1. The result is given by
K(s) = F_l(K_op(s), Q)   (3.20)

where K_op(s) is the augmentation of K_Fr(s) with identity blocks I_{m2} shown in (3.21).
4. References
1) W.M. Wonham: Linear Multivariable Control: A Geometric Approach, Berlin: Springer-Verlag (1979).
2) B.A. Francis and M. Vidyasagar: Algebraic and Topological Aspects of the Regulator Problem for Lumped Linear Systems, Automatica, Vol. 19, No. 1, pp. 87-90 (1983).
3) M. Ikeda and N. Suda: Synthesis of Optimal Servosystems, Transactions of the SICE, Vol. 24, No. 1, pp. 40-46 (1989).
4) F. Zhang, S. Hosoe and M. Kono: Synthesis of Robust Output Regulators via H∞ Control, Preprints of the First IFAC Symposium on Design Methods of Control Systems, pp. 366-371 (1991).
H∞-suboptimal Controller Design of Robust Tracking Systems

Toshiharu SUGIE*, Masayuki FUJITA** and Shinji HARA***

* Div. of Applied Systems Science, Kyoto University, Uji, Kyoto 611, Japan
** Dept. of Electrical and Computer Engineering, Kanazawa University, Kodatsuno, Kanazawa 920, Japan
*** Dept. of Control Engineering, Tokyo Institute of Technology, Meguro-ku, Tokyo 152, Japan
1 Introduction
In control system design, one of the fundamental issues is the robust tracking problem [1, 2], whose purpose is to find a controller satisfying (a) internal stability and (b) zero steady-state tracking error in the presence of plant perturbations. However, in addition to (a) and (b), control systems are generally required to have both (c) good command response and (d) satisfactory feedback properties such as sufficient stability margin and low sensitivity. Concerning (d), it is widely accepted that the H∞ control framework is quite powerful for robust stabilization and loop shaping.
The purpose of this paper is to characterize all controllers which satisfy (a) internal stability, (b) robust tracking, (c) exact model matching and (d) a specified H∞ control performance related to the above feedback properties. First, we derive a necessary and sufficient condition for such a controller to exist, based on the authors' former results on (i) the H∞ control problem with jω-axis constraints [6] and (ii) two-degree-of-freedom robust tracking systems [7]. Then, we parametrize all such controllers, where the free parameter of the H∞ suboptimal controller is fully exploited.
Notation: C+ denotes the set of all s ∈ C satisfying Re[s] ≥ 0. The sets of all real rational and stable proper real rational matrices of size m × n are denoted by R(s)^{m×n} and RH∞^{m×n}, respectively. For G(s) ∈ RH∞^{m×n}, we define ∥G∥∞ := sup_ω σ̄(G(jω)), where σ̄(·) denotes the largest singular value. The subset of RH∞^{m×n} consisting of all S(s) satisfying ∥S∥∞ < 1 is denoted by BH∞^{m×n}. If no confusion arises, we drop the size "m × n".
2 Problem formulation
We consider a linear time-invariant system Σ(C, P) described by

y = P u   (2.1)

u = C [ r ; y ],   C := [C1, C2]   (2.2)

where P ∈ R(s)^{m×q} and C ∈ R(s)^{q×2m} denote the plant to be controlled and the compensator to be designed, respectively. The vectors u, y, and r are the q-dimensional control input, the m-dimensional controlled output, and the m-dimensional command signal, respectively.
Suppose that r is described by
1 r = Rr0, R = [ILdsjo~,) I" (2.a)
where R is a command signal generator whose poles are distinct and ro is unknown
constant vector.
The system Σ(C, P) is said to be internally stable if all the transfer matrices from {d1, d2, d3} to {u, y} are in RH∞ when we replace {r, u, y} by {r + d1, u + d2, y + d3}, where {d1, d2, d3} are virtual external inputs inserted at the input channels of P and C. Concerning internal stability, we have the following lemma [7].
Lemma 1: For given P, all compensators C = [C1, C2] such that Σ(C, P) is internally stable are given by

C1 = (D + C2 N) K,   ∀K ∈ RH∞   (2.4)

C2 ∈ Ω(P)   (2.5)

where P = N D^{-1} is a right coprime factorization (rcf) of P over RH∞, and Ω(P) denotes the set of all closed-loop stabilizers of P.
Remark 2.1: Strictly speaking, C2 ∈ Ω(P) means that D̃c D + Ñc N is unimodular over RH∞, where C2 = D̃c^{-1} Ñc is a left coprime factorization of C2.
Since the design of a C that achieves internal stability is equivalent to the choice of {K, C2} subject to (2.4) and (2.5), C is referred to as C = C(K, C2) hereafter.
Note that the transfer matrix G_yr from r to y of Σ(C, P) is given by

G_yr = P (I + C2 P)^{-1} C1 = N K   (2.6)

from (2.4), which implies that the role of K is to specify the command response of Σ(C, P).
Also, it is easy to see that the role of C2 is to make the control system robust against modeling errors and disturbances. The recognition of these roles is quite important for solving the problem stated below.
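The identity G_yr = P(I + C2 P)^{-1} C1 = NK from (2.6) can be verified numerically on a scalar example (the plant, C2 and K below are hypothetical choices, not from the paper); for a stable P the trivial coprime factorization N = P, D = 1 suffices:

```python
# Scalar check of G_yr = P (1 + C2 P)^{-1} C1 = N K, cf. (2.4) and (2.6),
# with every transfer function evaluated at the frequency point s = j.
s = 1j

P = 1.0 / (s + 1.0)      # stable plant; rcf over RH-infinity: N = P, D = 1
N, D = P, 1.0
C2 = 2.0                 # a stabilizing feedback compensator (constant gain)
K = 1.0 / (s + 2.0)      # free parameter K in RH-infinity

C1 = (D + C2 * N) * K    # (2.4)

G_yr = P / (1.0 + C2 * P) * C1
assert abs(G_yr - N * K) < 1e-12   # the command response is shaped by K alone
print(abs(G_yr))
```

The cancellation of the feedback term is exact by construction, which is precisely why K governs the command response while C2 is free for feedback-property shaping.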
Now we consider the following four requirements:
(s1): (Stability) Σ(C, P) is internally stable.
(s2): (H∞ suboptimality) The feedback compensator C2 satisfies the following H∞-norm performance:

∥Φ∥∞ := ∥ [ W_s S ; W_t T ] ∥∞ < 1   (2.7)

S := (I + P C2)^{-1},   T := I - S   (2.8)

where W_s and W_t are given weighting matrices.
(s3): (Robust tracking) There exists an open neighborhood of P, say D, such that

Σ(C, P') is internally stable   (2.9)

(I_m - G'_yr) R ∈ RH∞   (2.10)

hold for every P' ∈ D, where G'_yr denotes the transfer matrix from r to y of Σ(C, P').
(s4): (Exact model matching) Let G_M ∈ RH∞^{m×m} be the given desired model; then the transfer matrix G_yr of Σ(C, P) satisfies

G_yr = G_M   (2.11)
Remark 2.2: The problem of finding a C2 which satisfies (s2) is the so-called mixed sensitivity problem, which often appears in practical H∞ applications (e.g., [5, 3]). The requirement (s3) means that y(t) tracks r(t) without steady-state error despite the plant perturbations. On the other hand, (s4) is one of the most direct ways to obtain the desired command response.
Now we are in a position to state the problem:

Problem: For given P, W_s, W_t, R and G_M, find all compensators C which satisfy the requirements (s1)-(s4), if they exist.
In order to utilize existing H∞ control results, we make the following assumptions:

(A1) P does not have any pole on the jω-axis, and rank P(jω) = m for any ω.

(A2) W_s is in RH∞. W_t has no poles in C+ and W_t P is biproper. Neither of them has zeros on the jω-axis.
3 Solvability condition

In this section, we derive the solvability condition for our problem.
Let γ* be the infimum of ∥Φ∥∞ achieved by stabilizing compensators, i.e.,

γ* := inf_{C2 ∈ Ω(P)} ∥Φ∥∞   (3.1)

Then we obtain the following result.
Theorem 1: Under the assumptions (A1) and (A2), for given P, W_s, W_t, R and G_M, there exists a compensator C = C(K, C2) satisfying all of the four requirements (s1)-(s4) if and only if the following four conditions hold:

(i) γ* < 1   (3.2)

(ii) ∥W_t(jω_i)∥ < 1 for i = 1, …, k   (3.3)

(iii) G_M(jω_i) = I_m for i = 1, …, k   (3.4)

(iv) There exists a K_M ∈ RH∞ such that

G_M = N K_M   (3.5)

where P = N D^{-1} is an rcf over RH∞.
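Conditions (ii) and (iii) are pointwise checks at the generator frequencies jω_i and are easy to test numerically. A sketch with hypothetical scalar data, for a step reference (a single frequency ω_1 = 0):

```python
import numpy as np

def largest_sv(G):
    """sigma_bar(G): largest singular value of a (possibly scalar) matrix."""
    return np.linalg.svd(np.atleast_2d(G), compute_uv=False)[0]

omegas = [0.0]   # step reference: the generator pole is at s = 0

# Hypothetical data chosen only for illustration.
Wt = lambda s: 0.5 * (s + 1.0) / (s + 2.0)   # complementary weight
GM = lambda s: 1.0 / (s + 1.0)               # desired model, GM(0) = 1

for w in omegas:
    s = 1j * w
    assert largest_sv(Wt(s)) < 1.0    # condition (ii), cf. (3.3)
    assert np.allclose(GM(s), 1.0)    # condition (iii), cf. (3.4): GM(jw) = I
print("conditions (ii) and (iii) hold at every generator frequency")
```

Condition (ii) reflects the fact that T must equal I at the jω_i, so the weight W_t itself must be small there.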
In order to prove the theorem, we need several lemmas. The following two lemmas are directly obtained from [7].
Lemma 2: For given P and G_M, there exists a compensator C satisfying (s1) and (s4) if and only if condition (iv) holds. In addition, a stabilizing compensator C = C(K, C2) satisfies (s4) if and only if G_M = N K holds.
Lemma 3: For given P and R, there exists a compensator C satisfying (s1) and (s3) if and only if

rank N(jω_i) = m for i = 1, …, k   (3.6)

holds. In addition, a stabilizing compensator C = C(K, C2) satisfies (s3) if and only if

[N K](jω_i) = I_m for i = 1, …, k   (3.7)

D_c(jω_i) = 0 for i = 1, …, k   (3.8)

where P = N D^{-1} and C2 = N_c D_c^{-1} are rcf's over RH∞.
The following result is easily obtained from [6].
Lemma 4: Under the assumptions (A1) and (A2), suppose γ* < 1 holds for given P, W_s and W_t, and there exists an internally stabilizing compensator satisfying Φ(jω_i) = Φ_i (i = 1, …, k) for given constant matrices Φ_i. Then there exists a C2 ∈ Ω(P) satisfying both ∥Φ∥∞ < 1 and

Φ(jω_i) = Φ_i for i = 1, …, k   (3.9)

if and only if the following inequality holds:

∥Φ_i∥ < 1 for i = 1, …, k   (3.10)
In the above lemma, the sufficiency part is the important one.
(Proof of Theorem 1) Necessity: Suppose (s1)-(s4) hold. It is obvious that (i) holds from (s1) and (s2). Also, Lemmas 2 and 3 yield (iii) and (iv). So we will show that (ii) holds. Concerning C2, the requirement (s3) is equivalent to (3.8), which is also equivalent to S(jω_i) = 0 and T(jω_i) = I. Therefore, under the assumptions (A1) and (A2), this implies that the stabilizing compensator C2 satisfies

Φ(jω_i) = Φ_i := [ 0 ; W_t(jω_i) ] for i = 1, …, k   (3.11)

From Lemma 4 with ∥Φ_i∥ = ∥W_t(jω_i)∥, we obtain (ii).

Sufficiency: Suppose (i)-(iv) hold. Then there exists a stabilizing C2 satisfying ∥Φ∥∞ < 1 and (3.11). Since (3.11) implies (3.8), it is enough for us to choose C = C(K_M, C2). This completes the proof. (QED)
Remark 3.1: We can calculate the infimum of ∥Φ∥∞ subject to (s1) and (s3), which is given by

max{ γ*, ∥W_t(jω_i)∥ (i = 1, …, k) }   (3.12)

Wu and Mansour [9] discuss a similar problem in the special case where P is stable and Φ := C2 (I + P C2)^{-1}. So our result is a generalization of [9].
4 Characterization of all solutions

In this section, we characterize all the solutions of our problem.
The next lemma is well known (e.g., [4]).
Lemma 5: Under the assumptions (A1) and (A2), suppose γ* < 1 holds for given P, W_s and W_t. Then all stabilizing C2 which satisfy ∥Φ∥∞ < 1 are described by

C2 = (Π11 S + Π12)(Π21 S + Π22)^{-1},   ∀S ∈ BH∞   (4.1)

where the matrices Π_ij ∈ RH∞ are determined from the data {P, W_s, W_t} via two Riccati equations, and Π := [ Π11  Π12 ; Π21  Π22 ] is unimodular over RH∞ (see [4] for the explicit form of Π).
Based on Theorem 1 and the above lemma, we obtain the following result.
Theorem 2: Under the assumptions (A1) and (A2), suppose the conditions (i)-(iv) of Theorem 1 hold for given P, W_s, W_t, R and G_M. Then all the compensators C = C(K, C2) satisfying (s1)-(s4) are given by

K = K_M + W Q_K,   ∀Q_K ∈ RH∞^{(q-m)×m}   (4.2)

C2 = (Π11 S + Π12)(Π21 S + Π22)^{-1},   ∀S ∈ BH∞   (4.3)

subject to

[Π21 S + Π22](jω_i) = 0 for i = 1, …, k   (4.4)

where W ∈ RH∞^{q×(q-m)} is a matrix satisfying

N W = 0,   rank W = q - m   (4.5)

for the rcf P = N D^{-1}, and K_M ∈ RH∞ is a matrix satisfying (3.5).
(Proof) As for K, noting that G_M = N K_M, it is easy to see that the general solution of G_M = N K is given by (4.2).

Concerning C2, note that the right-hand side of (4.1) is an rcf of C2 because Π is unimodular. So (4.4) is equivalent to (3.8), which means the requirement (s3) is satisfied. (s1) and (s2) are satisfied automatically as long as S ∈ BH∞. This completes the proof. (QED)
Remark 4.1: Concerning the parametrization of the H∞ controllers, (4.1) is often given in the linear fractional transformation form

C2 = H11 + H12 S (I - H22 S)^{-1} H21,   ∀S ∈ BH∞   (4.7)

Since a relation (4.8) holds between Π_ij and H_ij (see [4]), (4.4) is equivalent to

[H̃22 S](jω_i) = I for i = 1, …, k   (4.9)
Remark 4.2: In this paper, we have treated the plant P given by (2.1). However, by using the robust tracking result of [8], it is easy to extend our result to the case where the plant is described by

[ y ; z ] = [ P1 ; P2 ] u   (4.10)

where the measurement z may not be equal to the output y.
5 Conclusion

In this paper, we have discussed the problem of finding a controller which satisfies (a) internal stability, (b) robust tracking, (c) exact model matching and (d) a specified H∞ control performance related to the feedback properties. First, we have obtained a necessary and sufficient condition for such a controller to exist. Then, we have parametrized all such controllers, where (i) the two-degree-of-freedom compensator configuration and (ii) the free parameter of the H∞ suboptimal controller are fully exploited.
References

[1] E.J. Davison, The robust control of a servomechanism problem for linear time-invariant multivariable systems, IEEE Trans. Automat. Contr., 21, 25-34 (1976)

[2] B.A. Francis and M. Vidyasagar, Algebraic and topological aspects of the regulator problem for lumped linear systems, Automatica, 19, 87-90 (1983)

[3] M. Fujita, F. Matsumura and M. Shimizu: H∞ robust control design for a magnetic suspension system; Proc. of 2nd Int. Symp. on Magnetic Bearings, pp. 349-355 (1990)

[4] R. Kondo and S. Hara, On cancellation in H∞ optimal controllers, Systems & Control Letters, 13, 205-210 (1989)

[5] H. Kuraoka et al.: Application of H∞ optimal design to automotive fuel control; Proc. 1989 American Control Conf., pp. 1957-1962 (1989)

[6] T. Sugie and S. Hara, H∞-suboptimal control problem with boundary constraints, Systems & Control Letters, 13, 93-99 (1989)

[7] T. Sugie and T. Yoshikawa, General solution of robust tracking problem in two-degree-of-freedom control systems, IEEE Trans. Automat. Contr., 31, 552-554 (1986)

[8] T. Sugie and M. Vidyasagar, Further results on the robust tracking problem in two-d.o.f. control systems, Systems & Control Letters, 13, 101-108 (1989)

[9] Q.H. Wu and M. Mansour, Robust output regulation for a class of linear multivariable systems, Systems & Control Letters, 13, 227-232 (1989)
Stability and Performance Robustness of l¹ Systems with Structured Norm-Bounded Uncertainty

M. Khammash
Iowa State University
Ames, Iowa 50011
Abstract
An overview of recent advances in robustness techniques in the l¹ setting is given. In particular, results on the stability and performance robustness of multivariable systems in the presence of norm-bounded structured perturbations are presented. For robustness analysis, the computation of nonconservative conditions is highlighted as one of the most attractive features of this approach. In addition, a useful iterative technique for robustness synthesis is presented. Finally, the extension of the robustness techniques to the time-varying case is outlined and then used to provide necessary and sufficient conditions for the robustness of sampled-data systems in the presence of structured uncertainty.
1 Introduction

When designing or analyzing systems, a single system model rarely provides complete information about the true behavior of the physical system at hand. For this reason, robustness considerations must be taken into account in the analysis and design procedures. Robustness can be divided into two, not unrelated, types: robustness to signal uncertainty, and robustness to system uncertainty. When L² signals are present, the H∞ theory provides a systematic means for designing controllers which have the effect of minimizing the influence of uncertain signals on the desired outputs. In the H∞ setting, the robustness to system uncertainty can be treated using the μ function of Doyle [5], which is particularly suited for handling structured uncertainty. But when the uncertain signals at hand are assumed to be bounded in magnitude and thus belong to L∞ (or l∞ in the discrete-time case), the L¹ (l¹) theory provides the means for designing controllers which have optimal or near-optimal properties in terms of reducing the effect of the disturbances on the desired outputs. This paper is concerned with the stability and performance robustness of l¹ systems in the presence of structured uncertainty. The stability robustness of systems with unstructured uncertainty has been solved by Dahleh and Ohta [2] in the discrete-time case. These results are extended here to the case of structured uncertainty. In addition, the performance robustness problem is solved by showing it can be made equivalent to a stability robustness problem having one additional fictitious uncertainty block. These results apply equally well to continuous-time and discrete-time systems. However, modern control systems often implement a digital controller on a continuous-time plant via sample and hold devices. Since the
Figure 1: System with Structured Uncertainty.
resulting sampled-data system is time-varying, the robustness of such a system cannot be treated using the robustness techniques for time-invariant nominal systems. This paper also presents necessary and sufficient conditions for the robustness of sampled-data systems in the presence of structured uncertainty. The results are stated in terms of spectral radii of certain matrices obtained using the state-space description of the continuous-time plant and the discrete-time controller. Finally, the performance robustness of sampled-data systems is solved, whereby a certain equivalence between the stability and performance problems is shown to hold.
The remaining part of this paper is divided into two main parts: the robustness of time-invariant systems and the robustness of sampled-data systems. In the first part, motivation is given and the general robustness problem for time-invariant systems is set up. Then, some results regarding performance robustness are presented, followed by the solution of the robustness analysis problem. Finally, the robustness synthesis problem is addressed and an iterative scheme for robustness synthesis is presented. In the second part of the paper, the robustness of sampled-data systems in the presence of structured uncertainty is addressed. The problem is formally set up. Next, the performance robustness of sampled-data systems is discussed, and its relation to stability robustness is outlined. Finally, necessary and sufficient conditions for the robustness of sampled-data systems are presented.
2 Robustness of Time-Invariant Systems

2.1 Setup
Figure 2: Stability Robustness vs. Performance Robustness

We start by setting up the robustness problem for time-invariant systems. Consider the system in Figure 1. M represents the nominal part of the system, composed of the nominal linear time-invariant plant and the linear stabilizing controller. M can be either continuous-time or discrete-time. We will assume it is discrete-time, although the results carry over with obvious modifications to the continuous-time case. In the figure, w represents the unknown disturbances. The only assumption on w is that it is bounded in magnitude, i.e. it belongs to the space l∞, and we assume that ∥w∥∞ ≤ 1. The signal z, on the other hand, is the regulated output of interest. Δ1, …, Δn denote system perturbations modelling the uncertainty. Each perturbation block Δi is causal and has an induced l∞-norm less than or equal to 1. Therefore, each Δi belongs to the set Δ := {Δ : Δ is causal and ∥Δ∥ := sup_{w≠0} ∥Δw∥∞/∥w∥∞ ≤ 1} and can be time-varying and/or nonlinear. The Δi's are otherwise independent. Consequently, we define the class of admissible perturbations as follows:
D(n) := {Δ = diag(Δ1, …, Δn) : Δi ∈ Δ}.   (1)
The system in the figure achieves robust stability if it is stable for all admissible perturbations. It is said to achieve robust performance if it achieves robust stability and at the same time ∥T_zw∥ < 1 for all admissible perturbations, where T_zw is the map between the input w and the output z. Note that when the perturbation Δ is zero, ∥T_zw∥ is the induced l∞ norm of the nominal system and is equal to the l¹ norm of the impulse response of the map T_zw.
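For a stable discrete-time LTI map, this l¹ norm can be computed by truncating the impulse-response sum. A sketch on a first-order example (chosen for illustration, with a closed-form answer): for y[k] = a y[k-1] + u[k], the impulse response is h[k] = a^k and the l¹ norm is 1/(1 - |a|):

```python
# l1 norm (= induced l-infinity norm) of the stable first-order system
# y[k] = a*y[k-1] + u[k], whose impulse response is h[k] = a^k.
a = 0.5

N = 200   # truncation length; the tail is bounded by |a|**N / (1 - |a|)
l1_norm = sum(abs(a) ** k for k in range(N))

exact = 1.0 / (1.0 - abs(a))   # geometric series
assert abs(l1_norm - exact) < 1e-12
print(l1_norm)   # about 2.0: magnitude-bounded disturbances are at most doubled
```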
2.2 Stability Robustness vs. Performance Robustness
In this subsection, we discuss the relationship that holds between stability robustness and performance robustness. When M is time-invariant, a certain interesting equivalence between stability robustness and performance robustness holds. More specifically, a performance robustness problem can be treated as a stability robustness problem. Consider the two systems in Fig. 2. The first system in the figure, System I, is that corresponding to the robust performance problem. By adding a fictitious perturbation block Δ_p, where Δ_p ∈ Δ, we get System II. System II, therefore, corresponds to a stability robustness problem with n + 1 perturbation blocks. The following theorem establishes the relation between the two systems:
Theorem 1: System I achieves robust performance if and only if System II achieves robust stability.
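Combined with Theorem 2 below, Theorem 1 turns a robust performance test into a single spectral-radius computation on an (n+1) × (n+1) matrix of l¹ norms, the extra row and column corresponding to the fictitious block Δ_p closing the w → z channel. A sketch with made-up norm data:

```python
import numpy as np

# Hypothetical matrix of l1 norms ||M_ij||_A for a system with n = 2
# uncertainty channels, augmented by the performance channel (index 2).
M_hat_aug = np.array([
    [0.2, 0.1, 0.3],   # uncertainty channel 1
    [0.1, 0.3, 0.2],   # uncertainty channel 2
    [0.4, 0.2, 0.1],   # w -> z channel, closed by the fictitious Delta_p
])

rho = max(abs(np.linalg.eigvals(M_hat_aug)))
# Robust performance of System I  <=>  robust stability of System II
# <=>  rho(M_hat_aug) < 1 (Theorems 1 and 2 together).
assert rho < 1.0
print(rho)
```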
Figure 3: Stability Robustness Problem
The proof of this theorem is omitted and may be found in [9]. A similar relationship holds when the perturbations are time-invariant and w ∈ l², as has been shown in [6]. As a result of this theorem, we can focus our efforts on finding stability robustness conditions. Conditions for performance robustness then follow automatically from the stability conditions.
2.3 Robustness Conditions in the Presence of SISO Perturbations
In this subsection, we provide necessary and sufficient conditions for stability robustness in the presence of structured SISO perturbations. These conditions are easily computable even for systems with a large number of perturbation blocks. We do so for a nominal system M with n perturbation blocks, as in Fig. 3. As a result, M has n inputs and n outputs. Since M is linear time-invariant and stable, each component M_ij has an impulse response belonging to l¹. The l¹ norm of the impulse response of M_ij is equal to the induced l∞ norm of M_ij. We refer to this quantity as ∥M_ij∥_A. Given a state-space representation of M, ∥M_ij∥_A can be computed arbitrarily accurately. As a result, we can define the following nonnegative n × n matrix:

M̂ := [ ∥M_ij∥_A ],  i, j = 1, …, n.   (2)
With this definition, we can state the following necessary and sufficient conditions for robust stability of the system in Fig. 3:

Theorem 2: The following are equivalent:

1. The system in Fig. 3 is stable for all Δ ∈ D(n), i.e. robustly stable.

2. (I - MΔ)^{-1} is l∞-stable for all Δ ∈ D(n).

3. ρ(M̂) < 1, where ρ(·) denotes the spectral radius.

4. inf_{R ∈ R} ∥R^{-1} M R∥_A < 1, where R := {diag(r1, …, rn) : r_i > 0}.
The proof of this theorem can be found in [9] and in [8]. Item 3 of this theorem provides a very effective means for robustness analysis. A number of algorithms exist for the fast computation of the spectral radius of a nonnegative matrix. These algorithms take advantage of the Perron-Frobenius theory of nonnegative matrices (see [1]). Among the interesting properties of nonnegative matrices is that the spectral radius of a nonnegative matrix is itself an eigenvalue of the matrix, often referred to as the Perron root of that matrix. Corresponding to that root, there exists an eigenvector which is itself nonnegative. One way of computing ρ(M̂) appears in [13]. It is summarized as follows: suppose M̂ is primitive (M̂^m > 0 for some positive integer m), and let x > 0 be any n-vector. Then

min_i (M̂^{m+1} x)_i / (M̂^m x)_i ≤ ρ(M̂) ≤ max_i (M̂^{m+1} x)_i / (M̂^m x)_i

Furthermore, the upper and lower bounds both converge to ρ(M̂) as m → ∞. If M̂ were not primitive, it can be perturbed by adding ε > 0 to each element, making each entry of M̂ positive and thus making M̂ primitive. ε can be chosen so that its effect on the spectral radius is arbitrarily small. An explicit bound for the size of ε required has been derived in [10].
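The bounds above, together with the spectral-radius test of item 3, are straightforward to implement; a sketch with an arbitrary positive (hence primitive) test matrix:

```python
import numpy as np

# Arbitrary positive norm matrix M_hat (positivity implies primitivity).
M_hat = np.array([[0.3, 0.2, 0.1],
                  [0.1, 0.4, 0.2],
                  [0.2, 0.1, 0.3]])

x = np.ones(3)                  # any x > 0
for m in range(60):             # iterate the bound of [13]
    Mx = M_hat @ x
    MMx = M_hat @ Mx
    lo, hi = (MMx / Mx).min(), (MMx / Mx).max()
    x = Mx / Mx.sum()           # renormalize to avoid under/overflow

rho = max(abs(np.linalg.eigvals(M_hat)))   # reference value for comparison
assert lo - 1e-9 <= rho <= hi + 1e-9       # the bounds bracket the Perron root
assert hi - lo < 1e-10                     # and have converged
print(rho < 1.0)   # item 3: the robust stability test itself
```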
As a consequence of 3 of the previous theorem, the robustness analysis problem is completely solved. Item 4. of the same theorem provides a useful method for the synthesis of robust controllers. This is the next subject of discussion. The dependence of M on the controller can be reflected through its dependence on the Youla parameter Q. More specifically, M can be written as:
M ( Q ) = 7'1  T2QT3, (3)
where 7'1, 7'2, and T3 are stable and depend only on the nominal plant ~o, while Q is a stable rational function which determines the stabilizing controller. See [7] for more details. The robustness synthesis problem can therefore be stated as follows:
inf_{Q stable} ρ(M̂(Q)).    (4)
However, it is not difficult to see that this is a nonconvex optimization problem. Instead, we can use item 4 of Theorem 2 to rewrite the robustness synthesis problem as follows:
inf_{Q stable} inf_{R∈ℛ} ‖R⁻¹ M̂(Q) R‖_A.    (5)
The optimization involves two parameters, R and Q. With R fixed, the optimization problem with respect to Q is a standard ℓ¹-norm minimization problem which can be readily solved using linear programming, as in [3] and [11] for example. With Q fixed, the optimization problem with respect to R is nonconvex. However, the global minimizer (if one exists), or an R yielding values arbitrarily close to the global minimum (if no minimizer exists), can be easily found thanks to the theory of nonnegative matrices. It turns out that if M̂(Q) is primitive, the infimum is in fact a minimum, and it is achieved by R = diag(r1, …, rn), where (r1, …, rn)ᵀ is an eigenvector corresponding to the Perron root ρ(M̂(Q)). Moreover, this eigenvector can be computed easily using power methods. In particular, let x⁽⁰⁾ > 0 be any vector, and given x⁽ⁱ⁾ define x⁽ⁱ⁺¹⁾ := M̂x⁽ⁱ⁾ / Σ_j (M̂x⁽ⁱ⁾)_j. It can be shown [13]
Figure 4: Sampled-Data System with Structured Uncertainty.
that lim_{i→∞} x⁽ⁱ⁾ exists and is equal to the desired eigenvector of M̂. Thus the optimal R, when Q is fixed, can be easily computed. As a result, we can set up the following iteration scheme, which provides a sequence of Q's, and hence controllers, yielding successively smaller values of ρ(M̂(Q)). The iteration, which is somewhat similar to the D-K iteration in μ-synthesis, goes as follows:
1. Set i := 0 and R₀ := I.

2. Set Q_i := arg inf_{Q stable} ‖R_i⁻¹(T1 − T2QT3)R_i‖_A.

3. Set R_{i+1} := arg inf_{R∈ℛ} ‖R⁻¹(T1 − T2Q_iT3)R‖_A.

4. Set i := i + 1. Go to step 2.
Finally, it should be mentioned that even though the iteration scheme above does not guarantee that the limiting controller achieves the global minimum of the spectral radius, it does provide a useful way of obtaining controllers with good robustness properties.
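The R-step of the iteration can be sketched numerically. The code below is an illustration (the matrix is a stand-in for M̂(Q), not the paper's data): it computes the Perron eigenvector by normalized power iteration and checks that with R = diag(r) every row sum of R⁻¹M̂R equals ρ(M̂), so the induced max-row-sum norm attains the spectral radius, which is why this R achieves the infimum in item 4:

```python
import numpy as np

def perron_vector(M, iters=200):
    # Normalized power iteration; converges for a primitive M.
    x = np.ones(M.shape[0])
    for _ in range(iters):
        x = M @ x
        x = x / x.sum()
    return x

M = np.array([[0.1, 0.6],
              [0.3, 0.2]])                 # stand-in for M-hat(Q), primitive
r = perron_vector(M)
R = np.diag(r)
scaled = np.linalg.inv(R) @ M @ R
rho = max(abs(np.linalg.eigvals(M)))
# (R^{-1} M R) row i sums to (M r)_i / r_i = rho(M) at the eigenvector.
row_sums = scaled.sum(axis=1)
```

This is only the scaling half of the scheme; the Q-step is an ℓ¹-norm minimization solved by linear programming as in [3], [11], which is outside the scope of this sketch.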
3 Robustness of Sampled-Data Systems
Fig. 4 depicts the robustness problem for sampled-data systems. In the figure, G0 is a continuous-time, linear, time-invariant nominal plant, while C is a discrete-time controller. S is a sampling device with sampling period Ts, and H is a first-order hold with the same period. The interconnection of G0, C, and the sample and hold devices comprises M, which is now a linear periodically time-varying system with period Ts. The definitions of stability robustness and performance robustness are the same as before, except that now the exogenous input and the output z are L∞ signals, and the Δi's are continuous-time norm-bounded operators.
3.1 Stability Robustness vs. Performance Robustness
For sampled-data systems, M is time-varying. In [10] it was shown that the equivalence between stability robustness and performance robustness which holds for time-invariant M ceases to hold when M is time-varying: while performance robustness still implies stability robustness, stability robustness does not in general imply performance robustness. Examples demonstrating this fact can be easily constructed. However, in the case of sampled-data systems, M has a special structure: it is periodically time-varying. This fact is significant enough to make the equivalence between the two notions of robustness hold just as it does in the time-invariant case. We state this fact in the next theorem, whose proof can be found in [10].
Theorem 3 Let M and Δ in System I and System II of Fig. 2 be the same as the M and Δ in Fig. 4, i.e. those corresponding to a sampled-data system. Then System I achieves robust performance if and only if System II achieves robust stability.
So, here again, we need only obtain conditions for robust stability in the presence of structured perturbations; robust performance can be treated in the same way as robust stability. The required necessary and sufficient conditions for robust stability of sampled-data systems are discussed next.
3.2 Conditions for Robustness of Sampled-Data Systems

In this section it will be shown that the spectral radius still plays a significant role in the robustness of sampled-data systems. The conditions for robustness are no longer expressed in terms of the spectral radius of a single nonnegative matrix; instead, they are obtained in terms of the spectral radii of a certain parametrized family of nonnegative matrices.
Even though conditions can be derived for a general linear time-invariant plant G0 and controller C, we will restrict our treatment to a plant and a controller having finite-dimensional state-space representations. So let G0 and C have the following state-space representations:
G0 = [ A         B1   ⋯  B_{n+1}
       C1        D11  ⋯  D1,n+1
       ⋮          ⋮           ⋮
       C_{n+1}   0    ⋯  0      ],    C = (A_c, B_c, C_c, D_c),    (6)
where the last row of the D-matrix of G0 is zero so that the sample and hold operations make sense. Each M_ij has a kernel representation, say M_ij(t, τ) for 0 ≤ t, τ < ∞, so that for any u ∈ L∞,

(M_ij u)(t) = ∫₀^∞ M_ij(t, τ) u(τ) dτ    (7)
where M_ij(t, τ) = M̃_ij(t, τ) + Σ_{k=0}^∞ m̃ᵏ_ij(t) δ(t − t_k − τ) (see [4] for more details). Because M_ij is periodic, it follows that M_ij(t, τ) = M_ij(t + kTs, τ + kTs) for k = 0, 1, ….
To completely characterize the system, we only need to know
M_ij^k(t, τ) := M_ij(t + kTs, τ),

where (t, τ) ∈ [0, Ts] × [0, Ts]. We can now state the following theorem:
Theorem 4 Let M in Fig. 3 correspond to that of the sampled-data system in Fig. 4. Then the given sampled-data system is robustly stable if and only if

sup_{t∈[0,Ts]} ρ( [ ‖M̂_ij(t)‖₁ ] ) < 1,

where M̂_ij(t) = (‖M_ij^0(t)‖_A, ‖M_ij^1(t)‖_A, …) ∈ ℓ¹, ‖M_ij^k(t)‖_A := ∫₀^{Ts} |M_ij^k(t, τ)| dτ, and [‖M̂_ij(t)‖₁] denotes the n×n matrix whose (i, j) entry is ‖M̂_ij(t)‖₁. The proof of this theorem can be found in [10]. From the theorem statement it is clear that unless we can compute ‖M̂_ij(t)‖₁, the theorem will not be of much practical use. To compute this norm, it should be enough to find ‖M_ij^k(t)‖_A for k = 0, 1, …, N, with N sufficiently large to yield an arbitrarily accurate estimate of ‖M̂_ij(t)‖₁. The size of N can be determined a priori from information about G0 and C.
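Once the norms ‖M̂_ij(t)‖₁ are available, checking the condition of Theorem 4 amounts to maximizing a spectral radius over the period. The sketch below uses hypothetical, smooth stand-in values for the matrix of norms (the real entries would be computed from the kernels M_ij^k); only the gridding-and-maximizing step is the point:

```python
import numpy as np

Ts = 1.0

def norms_at(t):
    # Hypothetical stand-in for the matrix [ ||M-hat_ij(t)||_1 ],
    # continuous in t over one period [0, Ts].
    return np.array([[0.3 + 0.1 * np.sin(2 * np.pi * t / Ts), 0.2],
                     [0.1, 0.4 + 0.2 * t / Ts]])

def robustly_stable(norms_at, Ts, grid=200):
    # Grid the period and take the worst-case spectral radius.
    ts = np.linspace(0.0, Ts, grid)
    worst = max(max(abs(np.linalg.eigvals(norms_at(t)))) for t in ts)
    return worst < 1.0, worst

ok, worst = robustly_stable(norms_at, Ts)
```

In practice the grid density would be chosen from a continuity bound on t ↦ M̂_ij(t), so that the sampled maximum certifies the supremum.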
We can indeed compute M_ij^k(t, τ) for a sampled-data system using the state-space representations of G0 and C. It can be shown that M_ij^0(t, τ) = D̄_ij(t, τ), and

M_ij^k(t, τ) = Θ_i(t) Ā^{k−1} Ξ_j(τ),   k ≥ 1,    (8)

where

Ā = [ e^{ATs} + Φ(Ts)B_{n+1}D_cC_{n+1}   Φ(Ts)B_{n+1}C_c
      B_cC_{n+1}                          A_c           ],

Θ_i(t) = [ C_i e^{At} + Γ_i(t)D_cC_{n+1}   Γ_i(t)C_c ],    (9)

Ξ_j(τ) = [ e^{A(Ts−τ)}B_j
           0             ],

D̄_ij(t, τ) = C_i e^{A(t−τ)} 1(t − τ) B_j + D_ij δ(t − τ),    (10)

and

Φ(t) = ∫₀^t e^{Aτ} dτ,   Γ_i(t) = C_i Φ(t) B_{n+1} + D_{i,n+1}.
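The only nonstandard ingredient in these formulas is the integral Φ(t) = ∫₀^t e^{Aτ} dτ. One well-known way to compute it (a sketch, not the authors' procedure) is the augmented-matrix identity expm([[A, I], [0, 0]] t) = [[e^{At}, Φ(t)], [0, I]]; a plain Taylor-series matrix exponential suffices for the small matrices used here:

```python
import numpy as np

def expm(M, terms=60):
    # Plain Taylor series; adequate for small, well-scaled matrices.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def phi(A, t):
    """Phi(t) = integral_0^t e^{A tau} d tau via the augmented matrix
    expm([[A, I], [0, 0]] t) = [[e^{At}, Phi(t)], [0, I]]."""
    n = A.shape[0]
    aug = np.zeros((2 * n, 2 * n))
    aug[:n, :n] = A
    aug[:n, n:] = np.eye(n)
    return expm(aug * t)[:n, n:]

A = np.array([[-2.0, 0.0],
              [0.0, -1.0]])        # illustrative stable A
Ts = 0.5
P = phi(A, Ts)
# For diagonal A, Phi(t) is diagonal with entries (e^{a t} - 1)/a.
expected = np.diag([(np.exp(-2 * Ts) - 1) / -2,
                    (np.exp(-1 * Ts) - 1) / -1])
```

The same expm call also yields e^{ATs} for the Ā block, so one augmented exponential per sampling period serves both formulas.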
References

[1] A. Berman and R. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Academic Press, New York, 1979.

[2] M. A. Dahleh and Y. Ohta, "A Necessary and Sufficient Condition for Robust BIBO Stability," Systems & Control Letters, vol. 11, 1988, pp. 271-275.

[3] M. A. Dahleh and J. B. Pearson, "ℓ¹-Optimal Feedback Controllers for MIMO Discrete-Time Systems," IEEE Transactions on Automatic Control, vol. AC-32, no. 4, April 1987, pp. 314-322.

[4] C. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Properties, Academic Press, New York, 1975.

[5] J. C. Doyle, "Analysis of Feedback Systems with Structured Uncertainty," IEE Proceedings, vol. 129, Pt. D, no. 6, pp. 242-250, November 1982.

[6] J. C. Doyle, J. E. Wall, and G. Stein, "Performance and Robustness Analysis for Structured Uncertainty," Proceedings of the 20th IEEE Conference on Decision and Control, 1982, pp. 629-636.

[7] B. Francis, A Course in H∞ Control Theory, Springer-Verlag, 1987.

[8] M. Khammash and J. B. Pearson, "Robustness Synthesis for Discrete-Time Systems with Structured Uncertainty," to be presented at the 1991 ACC, Boston, Massachusetts.

[9] M. Khammash and J. B. Pearson, "Performance Robustness of Discrete-Time Systems with Structured Uncertainty," IEEE Transactions on Automatic Control, vol. AC-36, no. 4, pp. 398-412, 1991.

[10] M. Khammash, "Necessary and Sufficient Conditions for the Robustness of Time-Varying Systems with Applications to Sampled-Data Systems," IEEE Transactions on Automatic Control, submitted.

[11] J. S. McDonald and J. B. Pearson, "Constrained Optimal Control Using the ℓ¹ Norm," Rice University Technical Report No. 8923, July 1989 (submitted for publication).

[12] J. Shamma and M. Dahleh, "Time-Varying vs. Time-Invariant Compensation for Rejection of Persistent Bounded Disturbances and Robust Stabilization," IEEE Transactions on Automatic Control, to appear.

[13] R. Varga, Matrix Iterative Analysis, Prentice-Hall, 1963.
A STUDY ON A VARIABLE ADAPTIVE LAW
Seiichi SHIN
Institute of Information Sciences and Electronics, University of Tsukuba

1-1-1 Tennodai, Tsukuba, 305 Japan
Toshiyuki KITAMORI
Department of Mathematical Engineering and Information Physics
Faculty of Engineering, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, 113 Japan
ABSTRACT
This paper presents a variable adaptive law which is robust to bounded disturbances. Combined with the dead zone, the proposed adaptive law uses a form of the σ-modified adaptive law in the small estimation-error region. The viability of the adaptive law, in comparison with the dead-zone type adaptive law, is shown by theoretical analysis and a simple numerical simulation.
1. INTRODUCTION
Robust adaptive control systems are now widely studied.1) Adaptive control systems subject to bounded noises form one class of these robust adaptive control systems. The major adaptive laws used in such adaptive control systems are the dead-zone type adaptive law 2) and the σ-modified adaptive law.3) The former needs an upper bound on the norm of the bounded disturbance. This bound is sometimes set too conservatively, so that the estimation error remains large. Although the σ-modified adaptive law does not need such a bound, the leakage term in the law hinders convergence of the estimation error in the small-error region. Moreover, it causes a bursting phenomenon.

This study proposes a combination of these two adaptive laws in order to achieve good control performance. The basic deterministic adaptive law is used in the large-error region, as in the dead-zone type, and the σ-modified adaptive law is used in the small-error region, where adaptation is stopped in the dead-zone type. Therefore, the proposed adaptive law switches from the deterministic to the σ-modified adaptive law depending on the magnitude of the estimation error. The next section gives precise descriptions of the adaptive control system considered here and of the proposed adaptive law. The basic properties of the adaptive law are analyzed theoretically in Section 3, and a simple numerical simulation is presented in Section 4. In these sections, the viability of the proposed adaptive law is shown by comparison with the dead-zone type.
2. ADAPTIVE SYSTEM
According to 2), the controlled object considered here is a controllable nth-order linear system described by

ẋ = A_p x + b u + w    (1)

where x ∈ ℝⁿ, u ∈ ℝ, and w ∈ ℝⁿ are the state, the input, and the disturbance, respectively. An upper bound w₀ of ‖w‖ is known, where ‖x‖ denotes the Euclidean norm when x is a vector and the induced norm from the Euclidean norm when x is a matrix. A_p ∈ ℝ^{n×n} is unknown, and b ∈ ℝⁿ is known.
The objective is model following. Therefore, the problem considered here is how to generate the control input u so as to make the state x follow x_m, the state of the reference model

ẋ_m = A_m x_m + b r    (2)

where r is a bounded command input and A_m ∈ ℝ^{n×n} is a stable matrix. Then there exists a positive definite matrix P ∈ ℝ^{n×n} which satisfies

A_mᵀ P + P A_m = −Q    (3)

for an arbitrary positive definite matrix Q ∈ ℝ^{n×n}, where ᵀ denotes the transpose of a matrix or vector. We further assume that there exists a finite k* ∈ ℝⁿ which satisfies

A_p + b k*ᵀ = A_m    (4)

This is called the model matching condition.
Now, the control input is set to

u = r + k̂ᵀx    (5)

where k̂ ∈ ℝⁿ is an adjustable gain, to be specified later. Let the estimation error e ∈ ℝⁿ and the parameter error k ∈ ℝⁿ be

e = x − x_m    (6)

k = k̂ − k*    (7)

Then, from (1)-(7), we get the error system

ė = A_m e + b kᵀx + w    (8)
Peterson and Narendra 2) proposed an adaptive control system combining (8) and the dead-zone type adaptive law

k̂̇ = −(eᵀPb)x,  if eᵀPe > e₀    (9)

k̂̇ = 0,  if eᵀPe ≤ e₀    (10)

where

e₀ = 4α² λ_M³(P) λ_m⁻²(Q) w₀²    (11)

and λ_M(·) and λ_m(·) denote, respectively, the maximum and minimum eigenvalues of the indicated matrix. The positive real number α (> 1) is a safety margin. It should be set large if the upper bound w₀ is uncertain, in which case the threshold e₀ becomes large and the control quality is degraded. We say the error e is in the region S1 if eᵀPe > e₀, and in the region S2 if eᵀPe ≤ e₀.
With the adaptive law (9)-(10), Peterson and Narendra 2) showed convergence of the error into S2. However, when the safety margin is large, the error remains large, since adaptation is terminated in S2. Instead of (10), we use the σ-modified adaptive law

k̂̇ = −(eᵀPb)x − σ(k̂ − k_j),  if eᵀPe ≤ e₀    (12)

where σ is a positive real number, to be specified later. The vector k_j ∈ ℝⁿ is the value of k̂ at the instant when the error crosses the boundary from S1 to S2, and the suffix j counts the crossings of the boundary from S1 to S2. Thus j is set to zero initially at t = 0 and is increased at t = t_j, when the error crosses the boundary from S1 to S2. The initial value k₀ equals the initial value of k̂.

In terms of the parameter error k, (12) can be rewritten as

k̇ = −σk − (eᵀPb)x + σ(k_j − k*),  if eᵀPe ≤ e₀    (12')

In summary, we use the adaptive law (9) when the error of (8) is in S1 and the adaptive law (12') when the error is in S2. Equation (12') means the parameter error k becomes very small when the term (eᵀPb)x is small and k_j is close to k*. We analyze this property in the next section.
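The switching law above is easy to prototype. The following Euler-integration sketch uses hypothetical example values of my own (A_m, b, k*, P, Q chosen to satisfy (3) and (4); signal amplitudes kept small), not the paper's simulation data; it only illustrates the S1/S2 switching structure of (9) and (12):

```python
import numpy as np

# Hypothetical data: A_m stable, A_p = A_m - b k*^T (model matching (4)),
# P = I and Q = -(A_m^T + A_m) positive definite (Lyapunov equation (3)).
Am = np.diag([-2.0, -1.0])
b = np.array([1.0, 1.0])
kstar = np.array([-1.0, -2.0])
Ap = Am - np.outer(b, kstar)
P = np.eye(2)
Q = np.diag([4.0, 2.0])

alpha, sigma = 2.0, 2.0
w0 = 0.1 * np.sqrt(2.0)
e0 = 4 * alpha**2 * 1.0**3 * (1 / 2.0**2) * w0**2   # threshold (11)

dt, T = 1e-3, 20.0
x = np.zeros(2); xm = np.zeros(2); khat = np.zeros(2)
kj = khat.copy(); in_S1 = True
peak = 0.0
for step in range(int(T / dt)):
    t = step * dt
    r = np.sin(t)
    w = 0.1 * np.array([np.sin(2 * np.pi * t / 5)] * 2)
    e = x - xm
    if e @ P @ e > e0:                 # region S1: deterministic law (9)
        in_S1 = True
        kdot = -(e @ P @ b) * x
    else:                              # region S2: sigma-modified law (12)
        if in_S1:
            kj = khat.copy()           # record crossing value k_j
            in_S1 = False
        kdot = -(e @ P @ b) * x - sigma * (khat - kj)
    u = r + khat @ x                   # control input (5)
    x = x + dt * (Ap @ x + b * u + w)
    xm = xm + dt * (Am @ xm + b * r)
    khat = khat + dt * kdot
    peak = max(peak, np.max(np.abs(x)))
```

Consistent with the boundedness analysis of the next section, all states stay bounded even though A_p itself is unstable.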
3. ANALYSIS
Let the nonnegative definite function v be

v = eᵀPe + kᵀk    (13)
From (9) and (11),

‖e‖ > 2α λ_M(P) λ_m⁻¹(Q) w₀    (14)

in S1. Then, the same analysis done in 2) leads to

v̇ = −eᵀQe + 2eᵀPw ≤ −λ_m(Q)‖e‖² + 2λ_M(P)‖e‖w₀ < −4α(α−1) λ_M²(P) λ_m⁻¹(Q) w₀² < 0    (15)

in S1. Since v is monotonically decreasing in S1, the error e enters the region S2 in finite time.
On the other hand, the definition of S2 leads to

‖e‖ ≤ 2α λ_M^{3/2}(P) λ_m^{−1/2}(P) λ_m⁻¹(Q) w₀    (16)

in S2. Furthermore, from the definitions of v and k_j, we get

v_j = e₀ + ‖k* − k_j‖²    (17)

where v_j denotes the value of v at t = t_j, that is, when e enters S2. Therefore,

v̇ = −eᵀQe + 2eᵀPw − 2σkᵀk + 2σkᵀ(k_j − k*)
  ≤ −λ_m(Q)‖e‖² + 2λ_M(P)‖e‖w₀ − σ‖k‖² + σ‖k_j − k*‖²
  ≤ −δv + σv_j − 4α λ_M^{5/2}(P) λ_m^{−1/2}(P) λ_m⁻²(Q) w₀² (σα λ_M^{1/2}(P) λ_m^{1/2}(P) − λ_m(Q))    (18)

where

δ = min(λ_m(Q) λ_M⁻¹(P), σ)    (19)
From (18), if

δ = σ    (20)

and

σα λ_M^{1/2}(P) λ_m^{1/2}(P) − λ_m(Q) > 0    (21)

then v decreases and becomes less than v_j after t = t_j, while e remains in S2. Therefore, a sufficient condition for decreasing v in S2 is

λ_m(Q) (α λ_M^{1/2}(P) λ_m^{1/2}(P))⁻¹ < σ ≤ λ_m(Q) λ_M⁻¹(P)    (22)
In order to ensure the existence of a σ satisfying inequality (22), we need

α > λ_M^{1/2}(P) λ_m^{−1/2}(P)    (23)

Therefore, we can choose σ to satisfy (22) whenever the safety margin α is larger than λ_M^{1/2}(P)λ_m^{−1/2}(P). In this case, v decreases in S2 as well, whereas it is not guaranteed to decrease below v_j in S2 if the adaptive law (10) is used in that region. This improvement is achieved by using the error information in S2, unlike the adaptive law (10), where no adaptation is performed in S2. It is pointed out that the parameter σ depends only on P and Q, as shown in (22), and does not depend on any unknown parameter of the controlled object (1). Namely, if σ is set to its upper limit

σ = λ_m(Q) λ_M⁻¹(P)    (24)

then, from (18), v decreases according to

v̇ ≤ −δv + σv_j − 4α λ_M^{3/2}(P) λ_m^{−1/2}(P) λ_m⁻¹(Q) w₀² (α λ_M^{1/2}(P) λ_m^{1/2}(P) − λ_M(P))    (25)

The analysis performed here shows that the nonnegative definite function v decreases monotonically in S1 and becomes less than v_j in S2. Therefore, v is bounded at all times, so that all variables remain bounded; that is, the proposed adaptive system is stable.
4. NUMERICAL EXAMPLE
We show here a simple numerical simulation result in order to illustrate the property analyzed in the previous section. The controlled object and the reference model used here are

A_p = [ −1  2 ; 1  1 ],   A_m = [ −2  0 ; 0  −1 ],   b = [1  1]ᵀ    (26)

Therefore,

k* = [−1  −2]ᵀ    (27)

Matrices P and Q are

P = [ 1  0 ; 0  1 ],   Q = [ 4  0 ; 0  2 ]    (28)

Then,

λ_M(P) = λ_m(P) = 1,   λ_M(Q) = 4,   λ_m(Q) = 2    (29)
The disturbance is

w = [sin(2πt/5)  sin(2πt/5)]ᵀ    (30)

so w₀ becomes √2. The command signal is set to

r = 10 sin(2πt/5)    (31)

From these settings, the same as those of 2), we get

e₀ = 4α² λ_M³(P) λ_m⁻²(Q) w₀² = 2α²    (32)

and
2/α < σ ≤ 2,   α > 1    (33)

Equation (33) corresponds to (22) and (23). Here, we set σ = 2, which corresponds to (24). The initial values of the controlled object, the reference model, and the estimate k̂ are all set to zero.
First, we use the adaptive laws (9) and (10), proposed in 2). The results are shown in Figs. 1 and 2. The function v decreases in the region S1, and the adaptation stops in S2. Figures 3 and 4 show the results with (9) and (12), the proposed adaptive law. The function v decreases both in S1 and in S2 and converges to a small limit cycle. The control quality in Fig. 4 is improved compared with Fig. 2.
5. CONCLUSION
We have shown that the novel robust adaptive law, which is a combination of the dead zone and the σ-modification, improves the control quality.
REFERENCES

1) K. S. Narendra and A. M. Annaswamy: Stable Adaptive Systems, Prentice Hall, 1989

2) B. B. Peterson and K. S. Narendra: Bounded error adaptive control, IEEE Trans. Automatic Control, vol. AC-27, no. 6, pp. 1161-1168 (1982)

3) P. A. Ioannou and P. V. Kokotovic: Instability analysis and improvement of robustness of adaptive control, Automatica, vol. 20, no. 5, pp. 583-594 (1984)
Figure 1. Lyapunov function v versus the control error eᵀPe with the dead-zone type adaptive law.
Figure 2. Time responses of Lyapunov function v and the control error eᵀPe with the dead-zone type adaptive law.
Figure 3. Lyapunov function v versus the control error eᵀPe with the proposed adaptive law.
Figure 4. Time responses of Lyapunov function v and the control error eᵀPe with the proposed adaptive law.
MODEL REFERENCE ROBUST CONTROL¹
Minyue Fu
Department of Electrical and Computer Engineering
University of Newcastle, N.S.W. 2308 Australia
Abstract
Classical model reference adaptive control schemes require the following assumptions on the plant: A1) minimum phase; A2) known upper bound on the plant order; A3) known relative degree; and A4) known sign of the high-frequency gain. It is well known that the robustness of such adaptive systems is a potential problem, and many sophisticated techniques are required to fix it. In this paper, we consider the same model reference control problem via robust control. By further assuming boundedness of the parameter uncertainties of the plant (which is a very weak assumption), we show that a linear time-invariant dynamic output feedback controller can be constructed with the following property: the closed-loop system is internally stable and its transfer function is arbitrarily close to that of the reference model. This method provides simple controllers and good robustness. It also has the potential to cope with large, fast, time-varying uncertainties.
1 Introduction
Both adaptive control theory and robust control theory have been developed to accommodate a wide range of uncertainties. Although experts have not agreed on the distinction between these two theories, one of their important differences is that adaptive controllers are nonlinear and time-varying (NLTV) while robust controllers are usually linear and time-invariant (LTI). Despite the overwhelming progress made recently in robust control theory, we find ourselves constantly "bothered" by the following fundamental question:
Q1. Can an LTI controller really compete with an NLTV controller?
1This work is supported by the Australian Research Council.
A positive answer to the above question has been given for the important case of quadratic stabilization of linear systems subject to a type of norm-bounded uncertainty. It is shown in this case that NLTV controllers offer no advantage over their LTI counterparts (see, e.g., [1]). However, since adaptive control is often concerned with performance (such as model matching) and with structured uncertainty in the parameter (coefficient) space, the above result does not apply and the question Q1 still deserves serious attention.
In this paper, we consider the problem of model reference control of uncertain linear systems and address the question Q1 from a different perspective:

Q2. Under what conditions can we find an LTI controller such that the closed-loop system is stable and "matches" a given reference model in a certain sense for all admissible uncertainties?
Note that this is exactly the question asked by adaptive control theory, except that an NLTV controller is allowed there. It is well known that all the classical model reference adaptive control (MRAC) schemes (developed prior to the 1980's) require the following standard assumptions:
A1. The plant is of minimum phase;
A2. The upper bound of the plant order is known;
A3. The relative degree of the plant is known;
A4. The sign of the highfrequency gain is known;
B1. The reference model has the same relative degree as the plant; and
B2. The reference input is bounded and piecewise continuous.
With these assumptions, the MRAC schemes can guarantee that the closed-loop system becomes stable and converges to the reference model. However, the robustness of the classical adaptive schemes is known to be a serious problem, and sophisticated techniques are required to fix it (see, e.g., [2]). Moreover, besides the complexity of the "robustified" adaptive controllers, the degree of robustness is often small, and additional information on the system (such as the size of the unmodeled dynamics and a bound on the input disturbance) is usually required.
The focal point of this paper is to answer the question Q2 by showing that a result similar to that given by MRAC can be achieved by a model reference robust control (MRRC) technique, provided some additional very mild assumptions hold. More precisely, suppose assumptions A1)-A4) and the following additional ones are satisfied:
A5. The set of admissible uncertain parameters is compact and known;
A6. The transfer function of the reference model is stable and strictly proper.
Then we provide a technique for constructing a simple LTI controller which guarantees the robust stability of the closed-loop system and ensures that the H∞ norm of the difference between the transfer functions of the closed-loop system and the reference model can be made arbitrarily small. Several advantages of this MRRC method are obvious: an LTI controller allows simple analysis of the system performance and robust stability; it provides easy implementation and simulation; furthermore, robustness margins for input and output disturbances and additional unstructured perturbations can be computed using the sensitivity and complementary sensitivity of the closed-loop system.
The result explained above is achieved by using feedback which possibly involves high gains. But we find in simulations that feedback gains are usually moderate when the plant uncertainties are not large and the requirement on model matching is not severe. The feedback gains need to be high when the plant is subject to large uncertainties and/or the plant is very different from the reference model.
Having established the MRRC method for time-invariant uncertainty, we look into the problem of time-varying uncertainty, with which adaptive control theory has difficulty. We find that the MRRC approach may also be suitable for accommodating a certain class of time-varying uncertainties. This point will be made via some discussion and a conjecture.
The endeavor of this paper should not be interpreted as a de-emphasis of adaptive control; it should rather be viewed as an attempt to better understand both the adaptive and robust control theories, and as an exercise in our search for better adaptive control schemes. More investigation of this subject is needed.
2 Related Work on Robust Control
There are three robust control design methods most pertinent to our MRRC technique.
The quantitative feedback theory (QFT) of Horowitz and his colleagues [3] provides a first systematic procedure for designing robust controllers. This method uses a two-degree-of-freedom controller to achieve a desired frequency response and stability margin for the closed-loop system. More specifically, a loop compensator is used to reduce uncertainty and assure closed-loop stability, while a prefilter is used to shape the closed-loop input-output transfer function. This design method can be regarded as a model reference control method. However, the reference model is not given in terms of a transfer function, and the phase of the closed-loop input-output transfer function is not attended to. The QFT method is somewhat empirical, because trial and error is needed for a proper design. There is also a concern about the effectiveness of this design method for multi-input multi-output systems.
Barmish and Wei [4] employ a unity-feedback controller for robust stabilization of an uncertain linear time-invariant plant satisfying assumptions A1-A4 and some mild conditions. They show that an LTI, stable, and minimum-phase stabilizer can always be constructed for the uncertain plant. However, the model reference control problem is not treated in [4], because only one degree of freedom is used.
The problem of model reference robust control was recently studied by Sun, Olbrot and Polis [5]. They consider linear single-input single-output plants satisfying assumptions similar to A1-A6, and use the so-called "modeling error compensation" technique to show that a stabilizing controller can be constructed such that the closed-loop input-output transfer function is made arbitrarily close to the reference model over a finite bandwidth. However, the construction and analysis of the controller seem very complicated. Furthermore, model matching is done only over a finite bandwidth; this restriction, as we shall see, can be lifted.
3 A New Approach to MRRC

In this section, we use a two-degree-of-freedom controller to solve the MRRC problem for plants and reference models satisfying assumptions A1-A6. The
Figure 1: Model Reference Robust Control
schematic diagram of the closed-loop system is shown in Figure 1. The transfer function G(s) of the plant takes the following form:

G(s) = N(s)/D(s) = (Σ_{i=0}^{m} b_i s^i) / (s^n + Σ_{i=0}^{n−1} a_i s^i),   b_m ≠ 0.    (1)
The nominal model of the plant is denoted by G₀(s); it can be chosen arbitrarily, although its choice might affect the controller. The closed-loop input-output transfer function will be denoted by G_c(s), i.e.,

G_c(s) = F(s)C(s)G(s) / (1 + C(s)G(s)).    (2)
The main result is presented as follows:

Theorem 1. Consider an SISO uncertain linear plant G(s) and a linear reference model G_m(s) satisfying assumptions A1-A6. Then, given any (arbitrarily small) ε > 0, there exist a stable transfer function F(s) and a stable, minimum-phase transfer function C(s) such that the closed-loop system given in Figure 1 is robustly stable and the difference between the transfer functions of the closed-loop system and the reference model has H∞ norm less than ε.
To support this theorem, we provide a simple algorithm for constructing the controller. The design of C(s) is simplified from the robust stabilization technique in [4], and that of F(s) is motivated by the QFT [3].

Construction of C(s) and F(s):

• Choose N_c(s) to be any stable polynomial of degree n−m−1;
• Choose 1/D_c(s, a) to be any parameterized stable unity-gain low-pass filter (parameterized in a > 0) with relative degree no less than n−m−1 and cutoff frequency → ∞ as a → 0. For example, take

1/D_c(s, a) = 1/(as + 1)^{n−m−1};    (3)
• Take

C(s) = K N_c(s)/D_c(s, a),    (4)

F(s) = L(s, ρ) G_m(s) [ C(s)G₀(s) / (1 + C(s)G₀(s)) ]⁻¹,    (5)

where L(s, ρ) is either a unity gain or a parameterized unit-gain low-pass filter with cutoff frequency → ∞ as ρ → 0. K, a and ρ are design parameters to be tuned;
• Choose K sufficiently large and a and ρ sufficiently small (which are guaranteed to exist) such that

‖G_c(s) − G_m(s)‖∞ < ε    (6)

for all admissible G(s) satisfying assumptions A1)-A5).
Remark 1. The purpose of the stable polynomial N_c(s) is to ensure that N_c(s)G(s) is a minimum-phase transfer function with relative degree equal to one, so that with C(s) = K N_c(s) and sufficiently large K, the characteristic equation of the closed-loop system is robustly stable and its sensitivity function is sufficiently small. The reason for 1/D_c(s, a) to be a low-pass filter of the required form is to guarantee the properness of C(s) while preserving the above property of the closed-loop system when a is sufficiently small. The function F(s) is used to shape the closed-loop transfer function so that it approximates the reference model. The low-pass filter L(s, ρ) is optional, serving only to reduce unnecessary high-frequency gain of the closed-loop system. These points should be considered in tuning the controller. The proof of Theorem 1 is omitted, since the construction of the controller and this remark explain the validity of the result.
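The construction can be checked numerically on a toy example. The values below are my own illustration, not the paper's: a first-order plant G(s) = 1/(s + a) with uncertain a ∈ [0.5, 2], nominal G₀(s) = 1/(s + 1), and reference model G_m(s) = 2/(s + 2). Here n = 1 and m = 0, so N_c(s) and the low-pass filters are trivial (degree 0, L = 1) and C(s) = K:

```python
import numpy as np

def G(s, a):  return 1.0 / (s + a)        # uncertain plant
def Gm(s):    return 2.0 / (s + 2.0)      # reference model

def Gc(s, a, K):
    C, G0 = K, G(s, 1.0)                  # C(s) = K per (4), nominal G0
    F = Gm(s) * (1 + C * G0) / (C * G0)   # prefilter F(s) per (5), L = 1
    return F * C * G(s, a) / (1 + C * G(s, a))   # closed loop, per (2)

omegas = np.logspace(-2, 3, 400)

def model_error(a, K):
    # sampled estimate of ||Gc - Gm|| over the frequency axis
    return max(abs(Gc(1j * w, a, K) - Gm(1j * w)) for w in omegas)

# Increasing the loop gain K shrinks the mismatch for every admissible a.
errs = {(a, K): model_error(a, K) for a in (0.5, 2.0) for K in (10.0, 1000.0)}
```

For this example one can verify by hand that G_c − G_m = G_m · (1 − a)/(s + a + K), so the mismatch decays like 1/K uniformly over the uncertainty set, which is the mechanism behind (6).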
4 Beyond Time-invariant Parameter Uncertainty
The purpose of this section is to look into the possibility of using the MRRC technique to deal with time-varying parameter uncertainty. In a certain sense, the discussion in this section might be "the center of gravity" of the paper.
As we pointed out earlier, adaptive control theory is applicable only to uncertainties which are time-invariant (or which vary "sufficiently slowly"). The robust control methods mentioned in Section 2 and the MRRC technique in Section 3 are in general also restricted to time-invariant uncertainties, owing to their dependence on frequency-domain analysis. However, the fact that the MRRC technique requires only a simple LTI controller seems to suggest that it might be able to tolerate time-varying uncertainty better than the adaptive schemes. This intuition is supported by a number of simulations carried out by the author. Motivated by these simulations, the author suspects that a result similar to Theorem 1 in Section 3 also holds for plants with time-varying uncertainties.
To be more precise, we consider an uncertain plant described by the following input-output differential equation:

y^{(n)}(t) + Σ_{i=0}^{n−1} a_i(t) y^{(i)}(t) = Σ_{i=0}^{m} b_i(t) u^{(i)}(t)    (7)
where u(t) and y(t) are the input and output of the plant, respectively. It is assumed that assumptions A1-A6 hold. Here, minimum phase should be interpreted to mean that the zero dynamics of the plant,

Σ_{i=0}^{m} b_i(t) u^{(i)}(t) = 0,    (8)

are "stable" in a certain sense. Then, the following conjecture is put forward:
is "stable" in certain sense. Then, the following conjecture is put forward:
Conjecture: Consider the uncertain plant in (7) and reference model Gin(s)
satisfying assumptions A1A6. Then, under some additional mild assumptions
and an appropriate definition for stability, there ezists a linear timeinvariant
controller such that the closedloop system is stable and the induced norm of the
operator from r,~(t) to y(t)  y~(t) can be made arbitrarily small.
Possible additional assumptions might be that the coefficients a_i(t) and b_i(t)
are differentiable and their derivatives are bounded. Investigation is also needed
for different types of stability, such as Lyapunov, bounded-input bounded-output,
and exponential stability. Since time-varying systems are considered, frequency domain
analysis is no longer valid, so new techniques need to be developed for
proving or disproving the conjecture. A promising avenue is to apply singular
perturbation theory; this will be studied.
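The kind of simulation evidence mentioned above can be sketched as follows. This is our own toy illustration, not the MRRC design of the paper: a first-order plant with fast, bounded time-varying coefficients a(t), b(t) (hypothetical choices) is driven by a fixed LTI (here static high-gain) controller so as to track a first-order reference model; the tracking error shrinks as the gain grows, in the spirit of the conjecture.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy plant y' + a(t) y = b(t) u, reference model ym' = -ym + r,
# static high-gain controller u = k (ym - y). Coefficients are hypothetical.
a = lambda t: 1.0 + 0.5 * np.sin(2.0 * t)   # uncertain, fast time-varying
b = lambda t: 1.0 + 0.3 * np.cos(3.0 * t)   # bounded away from zero ("minimum phase")
r = 1.0                                     # step reference input

def tracking_error(k, t_end=10.0):
    def rhs(t, x):
        y, ym = x
        u = k * (ym - y)                    # fixed LTI controller
        return [-a(t) * y + b(t) * u, -ym + r]
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], max_step=0.01)
    return np.max(np.abs(sol.y[0] - sol.y[1]))   # sup |y(t) - ym(t)|

err_lo = tracking_error(5.0)
err_hi = tracking_error(50.0)
```

Raising the gain from 5 to 50 reduces the worst-case tracking error despite the fast parameter variation, which is the qualitative behavior the conjecture predicts.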
5 Conclusion
In this paper, we have demonstrated the potential of an MRRC technique.
This technique provides a non-H∞ method for designing a low-order robust LTI
controller to solve the model reference problem, and it is able to handle large
parametric uncertainties. An important feature of this technique is its potential
for handling large, fast time-varying parametric uncertainties; this is a matter
which deserves further research.
It is, however, noted that the MRRC technique potentially requires a high-gain
feedback controller when the size of the uncertainties is large and/or the plant
is vastly different from the reference model. The trade-offs between the MRAC
and MRRC techniques need to be further investigated.
REFERENCES
[1] M. A. Rotea and P. P. Khargonekar, "Stabilizability of linear time-varying and uncertain linear systems," IEEE Trans. Auto. Contr., vol. 33, no. 9, pp. 884-887, 1988.
[2] R. Ortega and Y. Tang, "Robustness of adaptive controllers - a survey," Automatica, vol. 25, no. 5, pp. 651-677, 1989.
[3] I. Horowitz, "Quantitative feedback theory," Proc. IEE, Part D, vol. 129, pp. 215-226, 1982.
[4] B. R. Barmish and K. H. Wei, "Simultaneous stabilizability of single-input single-output systems," in Modelling, Identification and Robust Control, North-Holland, Amsterdam, 1986.
[5] J. Sun, A. W. Olbrot, and M. P. Polis, "Robust stabilization and robust performance using model reference control and modelling error compensation," in Proc. International Workshop on Robust Control, San Antonio, Texas, March 1991.
PARAMETRIZATION OF H∞ FI CONTROLLER AND H∞/H2 STATE FEEDBACK CONTROL

Tsutomu Mita and Kang-Zhi Liu
Department of Electrical and Electronics Engineering
Chiba University, 1-33 Yayoi-cho, Chiba 260, Japan
ABSTRACT

In this paper, we derive the complete parametrization of the H∞ FI
controller and that of the H∞ state feedback (SF) controller. We then
determine the obtained dynamical free parameters so that they minimize an H2
control performance, and we show the condition under which the whole controller
becomes a constant feedback gain other than the central solution F∞.
1. Introduction

There are several approaches to solving the standard H∞ control problem (Kimura
et al. [1], Doyle et al. [2]), and many related results have been obtained;
e.g., the H∞/H2 control was studied by Bernstein et al. [3], Doyle et al. [4]
and Rotea et al. [5], and special constant gains of the state feedback H∞ control
were obtained by Petersen [6] and Zhou et al. [7]. However, little is known
about the capability of the free parameters of the H∞ controller. We can
expect that these free parameters can be used to improve the time response.
In this paper we first correct the parametrization of the H∞ controller of
the FI (Full Information) problem derived by Doyle et al. [2], since they
neglected the parametrization of a null space of G21(s) of the generalized
plant. Then, using the corrected result, we derive the parametrization of
the H∞ state feedback (SF) controller.
Using the obtained parametrization of the SF controller, we first
characterize all SF controllers which reduce to constant gains, and we
then determine these free parameters so that they minimize an H2
control performance.
We will omit the argument s whenever it is clear from the context and
will adopt the following notation:

    HM(Z,Q) = (Z11 Q + Z12)(Z21 Q + Z22)^{-1},
    LF(Z,Q) = Z11 + Z12 Q (I - Z22 Q)^{-1} Z21,

for Z = {Zij}; i, j = 1, 2.

- RH2: a strictly proper RH∞ function.
- BH2: an RH2 function whose H∞ norm is strictly less than 1.
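The two linear fractional maps above can be checked numerically. The following sketch (our own illustration, with hypothetical constant blocks so that both maps apply) implements HM and LF for a partitioned Z and verifies the limiting cases Q = 0.

```python
import numpy as np

# HM(Z,Q) = (Z11 Q + Z12)(Z21 Q + Z22)^{-1}
# LF(Z,Q) = Z11 + Z12 Q (I - Z22 Q)^{-1} Z21, for Z = {Zij}, i,j = 1,2.
def HM(Z, Q):
    (Z11, Z12), (Z21, Z22) = Z
    return (Z11 @ Q + Z12) @ np.linalg.inv(Z21 @ Q + Z22)

def LF(Z, Q):
    (Z11, Z12), (Z21, Z22) = Z
    I = np.eye(Z22.shape[0])
    return Z11 + Z12 @ Q @ np.linalg.inv(I - Z22 @ Q) @ Z21

# Hypothetical constant blocks (all 2x2, Z22 invertible).
Z = ((np.array([[1.0, 2.0], [0.0, 1.0]]), np.eye(2)),
     (np.eye(2), np.array([[2.0, 0.0], [1.0, 2.0]])))
Q0 = np.zeros((2, 2))
```

At Q = 0 we have LF(Z,0) = Z11 and HM(Z,0) = Z12 Z22^{-1}, which the test below confirms.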
2. Parametrization of FI controller and SF controller

The generalized plant is given by

    x' = Ax + B1 w + B2 u   (1a)
    z = C1 x + D12 u   (1b)
    y = x   (1c)

where x ∈ R^n, u ∈ R^p, w ∈ R^r and z ∈ R^q are the state variable, the control input, the
disturbance input and the controlled output, respectively. We assume that both
(A, B2) and (A, B1) are stabilizable, (C1, A) is detectable and B1 is of
full column rank, as well as the following orthogonality condition:

    C1^T D12 = 0,  D12^T D12 = I.   (2)

We define the closed-loop system as

    z = G(s)w   (3)

when a dynamical state feedback control

    u = K(s)x   (4)

is applied to (1), where K(s) is a proper rational controller. Then the main
problem is to parametrize the K(s) which achieve internal stability as well as

    ||G||∞ < 1.   (5)
We will call this problem the SF (State Feedback) problem.
To derive the parametrization of K(s) of the SF problem we introduce the FI
(Full Information) problem [2], in which the output equation (1c) and the control
law (4) are replaced by

    y = (x^T, w^T)^T   (6)

and

    u = K_I(s)y   (7)

respectively. The problem is to find a proper rational K_I(s) that assures
internal stability and satisfies ||G_I(s)||∞ < 1, where G_I(s) is the closed-loop
transfer function between w and z when (7) is applied to (1).
We can parametrize all internally stabilizing controllers using the LF (linear
fractional transformation) form.
Lemma 1. If we choose an F stabilizing A + B2F and define

    Φ(s) = sI - (A + B2F)   (8)

then we have the following results.
(I) All controllers K(s) which stabilize (1) are represented by

    K(s) = LF{Z, Q}   (9a)

where

    Z = [ F  I ; I  -Φ^{-1}B2 ]   (9b)

and Q is a free parameter matrix that satisfies Q(s) ∈ RH∞.
(II) All controllers K_I(s) which stabilize (1) are represented by

    K_I(s) = LF{Z_I, (Q_x, Q_w)}   (10a)

where

    Z_I = [ (F, 0)  I ; I  (-(Φ^{-1}B2)^T, 0)^T ]   (10b)

and Q_x and Q_w are free parameters that satisfy (Q_x(s), Q_w(s)) ∈ RH∞.
Proof: We set H = B2F in Youla's parametrization for the SF case, and
C2 = (I, 0)^T, D21 = (0, I)^T and H = (B2F, 0) for the FI case [11].
Then every SF controller assuring internal stability is represented by [11]

    K(s) = HM{Z', R},  R(s) ∈ RH∞   (11a)

where

    Z' = [ I + FΦ^{-1}B2   -FΦ^{-1}B2F ; Φ^{-1}B2   I - Φ^{-1}B2F ].   (11b)

Motivated by the fact that K(s) = F when R(s) = F, we define

    R(s) = F + Q(s),  Q(s) ∈ RH∞.   (11c)

Substituting this R(s) into (11a) and transforming the result to LF form, we
get (9). For the FI case, K_I(s) can be represented by

    K_I(s) = HM{Z'_I, (R_x, R_w)},  R_x, R_w ∈ RH∞   (12a)

where

    Z'_I = [ I + FΦ^{-1}B2   (-FΦ^{-1}B2F, 0) ; ((Φ^{-1}B2)^T, 0)^T   diag(I - Φ^{-1}B2F, I) ].   (12b)

Then, defining

    (R_x, R_w) = (F, 0) + (Q_x, Q_w),  Q_x, Q_w ∈ RH∞   (12c)

we get (10) by the same reasoning. Q.E.D.
Since we assume that the H∞ control problem is solvable in this paper, we
introduce

    F∞ = -B2^T X   (13a)

where X is the stabilizing positive semidefinite solution of

    XA + A^T X + X(B1 B1^T - B2 B2^T)X + C1^T C1 = 0   (13b)

and replace all the F in Lemma 1 by F∞, so that

    F = F∞,  Φ(s) = sI - (A + B2 F∞).   (14)
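The stabilizing solution X of a Riccati equation of the form (13b) can be computed from the stable invariant subspace of the associated Hamiltonian matrix. This is a standard numerical method, not part of the paper; the plant data below are hypothetical. The sketch also checks the Riccati residual and the stability of A + (B1 B1^T - B2 B2^T)X.

```python
import numpy as np
from scipy.linalg import schur

# Hypothetical plant data satisfying the standing assumptions.
A  = np.array([[0.0, 1.0], [-1.0, -1.0]])
B2 = np.array([[0.0], [1.0]])
B1 = np.array([[0.1], [0.0]])
C1 = np.array([[1.0, 0.0]])

# (13b): X A + A^T X + X (B1 B1^T - B2 B2^T) X + C1^T C1 = 0.
n = A.shape[0]
R = B1 @ B1.T - B2 @ B2.T
H = np.block([[A, R],
              [-C1.T @ C1, -A.T]])        # Hamiltonian of (13b)
T, U, sdim = schur(H, output='real', sort='lhp')
assert sdim == n                          # n eigenvalues in the open left half plane
X = U[n:, :n] @ np.linalg.inv(U[:n, :n])  # stabilizing solution
X = (X + X.T) / 2                         # symmetrize

residual = X @ A + A.T @ X + X @ R @ X + C1.T @ C1
F_inf = -B2.T @ X                         # central gain (13a)
```

For this data the residual is at machine precision and A + R X is stable, as required of the stabilizing solution.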
Then all solutions of the FI problem can be derived as follows.

Theorem 1 (FI problem): All K_I(s) satisfying ||G_I(s)||∞ < 1 as well as
internal stability are represented by

    K_I(s) = LF{Z_I, (Q_x, Q_w)}.   (15a)

In this expression, Q_x and Q_w are given by

    Q_x(s) = [I + N(s)B1^T X Φ^{-1}(s)B2]^{-1}[-N(s)B1^T X + W_x(s)]   (15b)
    Q_w(s) = [I + N(s)B1^T X Φ^{-1}(s)B2]^{-1}[N(s) + W_v(s)]   (15c)

where N is a free parameter matrix satisfying N(s) ∈ BH∞, and W_x, W_v are also free
parameters satisfying

    W_x(s)Φ^{-1}(s)B1 + W_v(s) = 0,  W_x(s), W_v(s) ∈ RH∞.   (15d)
Proof: Since A + B2F∞ is stable, it follows from (12) and Lemma 1 that all
controllers assuring internal stability can be written as

    K_I(s) = HM{Z'_I, (F∞ + Q_x, Q_w)}   (16a)
           = LF{Z_I, (Q_x, Q_w)} = (F∞, 0) + (I + Q_x Φ^{-1}B2)^{-1}(Q_x, Q_w).   (16b)

Using (16a) we can transform G_I into a model matching form [11]:

    G_I(s) = T1 - T2 Q T3   (17a)

where

    Q = [F∞ + Q_x, Q_w],  T3 = [ Φ^{-1}(s)B1 ; I ].   (17b)

DGKF [2] derived a solution of the FI problem in their item FI.5 as follows:

    K_I(s) = [F∞ - N(s)B1^T X, N(s)],  N(s) ∈ BH∞.   (18)

Equating (16b) and (18), we get

    Q_x = -(I + N B1^T X Φ^{-1}B2)^{-1} N B1^T X,  Q_w = (I + N B1^T X Φ^{-1}B2)^{-1} N.   (19)

However, (19) does not reflect the part of Q(s) orthogonal to T3, since N ≡ 0 is
obtained from

    (Q_x, Q_w)T3 = (I + N B1^T X Φ^{-1}B2)^{-1} N (I - B1^T X Φ^{-1}B1) ≡ 0.   (20)

Therefore we need additional free parameters satisfying

    (Q_x, Q_w)T3 ≡ 0,  (Q_x, Q_w) ∈ RH∞   (21)

for a complete parametrization of K_I(s). These free parameters are introduced
as W_x, W_v ∈ RH∞ satisfying (15d). Since

    (I + N B1^T X Φ^{-1}B2)^{-1}   (22)

is unimodular for all N ∈ BH∞ (see APPENDIX 1), we can add

    (I + N B1^T X Φ^{-1}B2)^{-1} W_x,  (I + N B1^T X Φ^{-1}B2)^{-1} W_v   (23)

to Q_x and Q_w in (19), respectively, to get the complete parametrization (15).
Q.E.D.
We now parametrize the SF controller using Theorem 1. First we define H
as an orthogonal complement of B1 satisfying

    [ H ; B1^+ ] (H^T, B1) = I_n,  B1^+ = (B1^T B1)^{-1} B1^T.   (24)

Since (1a) is written as sx = Ax + B1 w + B2 u, we can extract w, denoted ŵ, from x
such that

    ŵ = B1^+(sx - Ax - B2 u) = B1^+{(sI - A - B2 F∞)x + B2 F∞ x - B2 u} = B1^+{Φ(s)x - B2 v}   (25)

where

    v = u - F∞ x  (u = F∞ x + v)   (26)

and we get the following theorem.
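A pair (H, B1^+) satisfying a stacking identity of the form (24) can be constructed numerically. In this sketch (with a hypothetical full-column-rank B1), the rows of H form an orthonormal basis of the orthogonal complement of the range of B1, and the identity is verified.

```python
import numpy as np
from scipy.linalg import null_space

B1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0]])            # hypothetical, full column rank (n=3, r=2)
n, r = B1.shape

B1p = np.linalg.inv(B1.T @ B1) @ B1.T  # B1^+ = (B1^T B1)^{-1} B1^T
H = null_space(B1.T).T                 # rows: orthonormal basis of range(B1)^perp

# (24): [H ; B1^+] (H^T, B1) = I_n
left = np.vstack([H, B1p])
right = np.hstack([H.T, B1])
identity_check = left @ right
```

The block structure is [[H H^T, H B1], [B1^+ H^T, B1^+ B1]] = [[I, 0], [0, I]], so the product is the n x n identity.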
Theorem 2 (SF problem): All K(s) satisfying ||G||∞ < 1 as well as internal
stability are given by

    K(s) = F∞ + [I + W(s)H B2 + N(s)B1^+ B2]^{-1}[W(s)H Φ(s) + N(s)B1^+ Φ(s) - N(s)B1^T X]   (27a)

where

    N(s) ∈ BH2 (p x r),  W(s) ∈ RH2 (p x (n - r))   (27b)

are the free parameters.
Proof: From Theorem 1 and (25), the input of the SF system can be given by

    u = F∞ x + (I + Q_x Φ^{-1}B2)^{-1}[(Q_x + Q_w B1^+ Φ)x - Q_w B1^+ B2 v]   (28)

which is equivalent to the input of the FI system in the sense of transfer
function equivalence and satisfies ||G||∞ < 1. We will prove that this u assures
the internal stability of the SF system in the last paragraph.
If (28) is rearranged using u = F∞ x + v, it becomes

    v = (I + Q_x Φ^{-1}B2 + Q_w B1^+ B2)^{-1}(Q_x + Q_w B1^+ Φ)x   (29)

where

    I + Q_x Φ^{-1}B2 + Q_w B1^+ B2 = (I + N B1^T X Φ^{-1}B2)^{-1}[I + N B1^+ B2 + (W_x Φ^{-1} + W_v B1^+)B2]   (30a)
    Q_x + Q_w B1^+ Φ = (I + N B1^T X Φ^{-1}B2)^{-1}[-N B1^T X + N B1^+ Φ + (W_x Φ^{-1} + W_v B1^+)Φ].   (30b)

Then it follows from (15d) and (24) that we can parametrize the term
W_x Φ^{-1} + W_v B1^+ in (30) as

    W_x Φ^{-1} + W_v B1^+ = W_x Φ^{-1}(I - B1 B1^+) = W_x Φ^{-1} H^T H = W H   (31)

where W(s) is a new parameter defined by

    W(s) = W_x(s) Φ^{-1}(s) H^T.   (32)

It is readily seen that W ∈ RH2 iff W_x ∈ RH∞ (see APPENDIX 2).
Therefore (29) becomes

    v = (I + N B1^+ B2 + W H B2)^{-1}(-N B1^T X + N B1^+ Φ + W H Φ)x   (33)

and K(s) is written as

    K(s) = F∞ + (I + N B1^+ B2 + W H B2)^{-1}(-N B1^T X + N B1^+ Φ + W H Φ).   (34)

But to make K(s) proper we need N(s) to be strictly proper, since B1^+ Φ(s) is
improper (see APPENDIX 3).
Finally, if this K(s) is represented in the form K = LF{Z, Q}, we get

    Q(s) = (I + N B1^T X Φ^{-1}B2)^{-1}[N(-B1^T X + B1^+ Φ) + W H Φ].   (35)

Since W(s) and N(s) are strictly proper and (I + N B1^T X Φ^{-1}B2)^{-1} is stable, we can
see Q ∈ RH∞, and it follows from (I) of Lemma 1 that K(s) internally stabilizes
the SF system. Q.E.D.
3. Applications

3.1 Constant gain state feedback

In this section we parametrize the dynamical N(s) and W(s) which make K(s)
just a constant gain other than the central solution F∞. That is, we want to
characterize all K(s) satisfying

    K(s) = F = constant.   (36)

This problem is called the CSF (constant state feedback) problem.
It readily follows from (27) that this can be satisfied if and only if

    -N B1^T X + W H Φ + N B1^+ Φ = (I + W H B2 + N B1^+ B2) ΔF.   (37)

Solving this equation we get

    W(s) = ΔF[sI - (A_F + B1 B1^T X)]^{-1} H^T,  N(s) = ΔF[sI - (A_F + B1 B1^T X)]^{-1} B1   (38)

where

    A_F = A + B2 F,  ΔF = F - F∞.   (39)

If N ∈ BH2 then K(s) is a solution of the CSF problem, since stability of N
automatically implies that of W. The condition N ∈ BH2 is easily checked
using the bounded real lemma.

Theorem 3 [9]: 1) F is a solution of the CSF problem if and only if

    Y A_F + A_F^T Y + Y B1 B1^T Y + F^T F + C1^T C1 = 0   (40)

has a stabilizing solution Y ≥ 0. If this condition is satisfied, the free
parameters W(s) and N(s) are determined by (38).
2) The stabilizing solution X ≥ 0 of (13b) is the minimum positive semidefinite
stabilizing solution among all stabilizing solutions of (40).
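The condition N ∈ BH2 in Theorem 3 amounts to checking that the stable, strictly proper N(s) in (38) has H∞ norm less than 1. The sketch below is a standard bounded-real/Hamiltonian bisection (a generic numerical tool, not taken from the paper; the sanity-check system is hypothetical) for the H∞ norm of a strictly proper G(s) = C(sI - A)^{-1}B with A stable.

```python
import numpy as np

def hinf_norm(A, B, C, tol=1e-6):
    """H-infinity norm of G(s) = C (sI - A)^{-1} B, A stable, via bisection:
    ||G||_inf < g  iff the Hamiltonian [[A, B B^T / g^2], [-C^T C, -A^T]]
    has no eigenvalues on the imaginary axis (bounded real lemma)."""
    def feasible(g):
        H = np.block([[A, (B @ B.T) / g**2], [-C.T @ C, -A.T]])
        return np.all(np.abs(np.linalg.eigvals(H).real) > 1e-9)
    lo, hi = 0.0, 1.0
    while not feasible(hi):          # grow the upper bound until feasible
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

# Sanity check on G(s) = 1/(s+1), whose H-infinity norm is 1.
nrm = hinf_norm(np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]))
```

Applied to the realization of N(s) in (38) (state matrix A_F + B1 B1^T X, input B1, output ΔF), a returned value below 1 certifies N ∈ BH2.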
3.2 H∞/H2 control

We employ W and N to minimize an H2 control performance. The system
considered here is given by

    x' = Ax + B1 w + B_q w~ + B2 u   (41a)
    z = C1 x + D12 u   (41b)
    z_q = C_q x + D_q u   (41c)
    y = x   (41d)

where x, z, u and w are the same as those of (1), and w~ and z_q are the
disturbance and the controlled output of an H2 control loop. We take into
account all the assumptions imposed on (1). In addition we assume that (C_q, A) is
detectable and

    C_q^T D_q = 0,  D_q^T D_q = R > 0   (42)

where R is a constant weighting matrix.
We define G_q(s) as the closed-loop transfer function from w~ to z_q, to get

    z_q = G_q(s) w~   (43)

when the control (4) is applied, which assures internal stability of the
closed-loop system composed of (4) and (41).
We consider the following problem here.

Problem: Minimize ||G_q(s)||2 over unconstrained W(s) and N(s), and check
whether they satisfy the condition (27b) so that ||G(s)||∞ < 1 holds.

This problem is a sufficient version of the original H∞/H2 control problem [3].
Rotea et al. [5] have parametrized the unconstrained H2 optimal K(s) and have found
the K(s) that satisfies ||G(s)||∞ < 1 in their Problem B. Therefore the present
problem solves their Problem B in the reverse direction.
The system (41) together with K(s) in (27) can be represented by

    x' = A∞ x + B1 w + B_q w~ + B2 v   (44a)
    w1 = H(Φ(s)x - B2 v)   (44b)
    w2 = B1^+(Φ(s)x - B2 v) - B1^T X x   (44c)
    v = W(s) w1 + N(s) w2   (44d)

as a semi-closed-loop system, where

    A∞ = A + B2 F∞.   (44e)

Considering W and N as new unconstrained controllers, we minimize ||G_q(s)||2 by
W and N to get [10]

    W(s) = ΔF[sI - (A_F + L B1 B1^T X)]^{-1} L H^T,  N(s) = ΔF[sI - (A_F + L B1 B1^T X)]^{-1} L B1   (45)

where

    L = B_q B_q^+,  ΔF = F - F∞,  A_F = A + B2 F   (46)

and F is the H2 optimal state feedback gain given by

    F = -R^{-1} B2^T P,  P A + A^T P - P B2 R^{-1} B2^T P + C_q^T C_q = 0.   (47)

The global H2 minimum is attained by these W and N. Then the condition
N ∈ BH2 leads to the following theorem.
Theorem 4 [10]: The necessary and sufficient conditions for the existence of a
solution of the H∞/H2 control problem are as follows.
1. (13b) has a stabilizing solution X ≥ 0.
2. Y A_F + A_F^T Y + F^T F + C1^T C1 + [X(I - L) + Y L] B1 B1^T [L^T Y + (I - L^T)X] = 0   (48)
   has a stabilizing solution Y ≥ 0.
3. Y ≥ X.   (49)
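The H2-optimal gain F in (47) is a standard LQR gain and can be computed directly. The sketch below (our own hypothetical double-integrator data) uses `scipy.linalg.solve_continuous_are`, which solves exactly the Riccati equation in (47), and compares against the known closed-form solution for this example.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical H2-loop data: double integrator, C_q^T C_q = I, R = 1.
A  = np.array([[0.0, 1.0], [0.0, 0.0]])
B2 = np.array([[0.0], [1.0]])
Qq = np.eye(2)          # C_q^T C_q
R  = np.array([[1.0]])  # D_q^T D_q

# (47): P A + A^T P - P B2 R^{-1} B2^T P + C_q^T C_q = 0, F = -R^{-1} B2^T P.
P = solve_continuous_are(A, B2, Qq, R)
F = -np.linalg.solve(R, B2.T @ P)

# Known closed form for this example: P = [[sqrt(3), 1], [1, sqrt(3)]].
P_exact = np.array([[np.sqrt(3.0), 1.0], [1.0, np.sqrt(3.0)]])
```

The resulting A + B2 F is stable, as required of the H2-optimal state feedback.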
4. Conclusion

We have obtained the parametrization of H∞ controllers for the FI problem
and the state feedback control problem, and have expressed the constant state
feedback gains in terms of the free parameters derived in this parametrization.
Furthermore, H∞/H2 control has also been treated in this context.
References

[1] H. Kimura and R. Kawatani (1988). "Synthesis of H∞ controllers based on conjugation", Proc. 27th CDC, pp. 7-13.
[2] J. Doyle, K. Glover, P. P. Khargonekar and B. A. Francis (1989). "State-space solutions to standard H2 and H∞ control problems", IEEE Trans. Auto. Control, AC-34, pp. 831-847.
[3] D. S. Bernstein and W. M. Haddad (1989). "LQG control with an H∞ performance bound: A Riccati equation approach", IEEE Trans. Auto. Control, AC-34, pp. 293-305.
[4] J. Doyle, K. Zhou and B. Bodenheimer (1989). "Optimal control with mixed H2 and H∞ performance objectives", Proc. 1989 ACC, pp. 2065-2070.
[5] M. A. Rotea and P. P. Khargonekar (1990). "Simultaneous H2/H∞ optimal control with state feedback", Proc. 1990 ACC, pp. 2830-2834.
[6] I. R. Petersen (1987). "Disturbance attenuation and H∞ optimization: A design method based on the algebraic Riccati equation", IEEE Trans. Auto. Control, AC-32, pp. 427-429.
[7] K. Zhou and P. P. Khargonekar (1988). "An algebraic Riccati equation approach to H∞ optimization", Systems & Control Letters, 11, pp. 85-91.
[8] K. Z. Liu, T. Mita and R. Kawatani (1990). "Parametrization of state feedback H∞ controllers", Int. J. Control, 51, pp. 535-551.
[9] T. Mita (1991). "Complete parametrizations of H∞ controllers in full information problem and state feedback problem", Proc. SICE 13th DST, pp. 59-64.
[10] T. Mita (1990). "H∞/H2 mixed control", J. Fac. Eng. Chiba Univ., 42-2, pp. 13-22; also Proc. STNS'91, Kobe.
[11] B. A. Francis (1987). A Course in H∞ Control Theory, Springer-Verlag.
APPENDIX 1: It is clear that I + N B1^T X Φ^{-1}B2 ∈ RH∞. Since

    X(A + B2 F∞) + (A + B2 F∞)^T X + X B2 B2^T X + X B1 B1^T X + C1^T C1 = 0   (A1)

has a stabilizing solution X ≥ 0, we have B1^T X Φ^{-1}B2 ∈ BH∞, which proves
(I + N B1^T X Φ^{-1}B2)^{-1} ∈ RH∞ by virtue of the small gain theorem together with
N ∈ BH∞.
APPENDIX 2: It follows from (15d) and (32) that

    W_x(s) Φ^{-1}(s) (H^T, B1) = [W(s), -W_v(s)].   (A2)

This can also be written as

    W_x(s) = [W(s), -W_v(s)] [ H ; B1^+ ] Φ(s).   (A3)

Therefore W_x ∈ RH∞ iff W(∞) = 0 and W_v(∞) = 0, i.e., W(s) ∈ RH2 and W_v(s) ∈ RH2.
APPENDIX 3: Since W(∞) = 0 and N(s) ∈ BH∞, as s → ∞

    K(s) → F∞ + [I + N(∞) B1^+ B2]^{-1}[s N(∞) B1^+ + constant].   (A4)

This shows that N(s) must be in BH2 (strictly proper) to assure K(∞) = constant,
since B1^+ ≠ 0.
THE MIXED H2 AND H∞ CONTROL PROBLEM

A.A. Stoorvogel¹
Dept. of Electrical Engineering, University of Michigan, Ann Arbor, MI 48109-2122, U.S.A.

H.L. Trentelman
Mathematics Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands
Abstract
This paper is concerned with robust stability in combination with nominal performance. We pose an H∞ norm bound on one transfer matrix to guarantee robust stability. Under this constraint we minimize an upper bound for the H2 norm of a second transfer matrix. This transfer matrix is chosen so that its H2 norm is a good measure of performance. We extend earlier work on this problem. The intention is to reduce this problem to a convex optimization problem.
Keywords: The H2 control problem, the H∞ control problem, robust stability, auxiliary cost.
1 Introduction
In all classical design methods robustness, i.e. insensitivity to system perturbations, plays an important role. In the last decade, the quest for robust controllers has resulted in a widespread search for controllers which robustly stabilize a system (see e.g. [6, 8, 9]). The application of the small gain theorem yields constraints on the H∞ norm of the transfer matrix. The problem of minimizing the H∞ norm by state and output feedback was studied in many papers (see e.g. [3, 5, 12]).
Clearly, besides guaranteed stability, in practical design we are also very much concerned with performance. Since performance can very often be expressed in H2 norm constraints, a possible new goal is: minimize the nominal H2 norm (which guarantees nominal performance) under the constraint of an H∞ norm bound (which guarantees robust stability). There is only one very limited result available for this setup, which can be found in [10]. An alternative approach to robust stability and robust performance can be found in [2, 11]. Several recent papers investigate the minimization of an auxiliary cost function which guarantees an H∞ norm bound while minimizing an upper bound for the nominal H2 norm (see [1, 4, 7, 15]). In this paper we will extend the results of [7].
¹On leave from the Dept. of Mathematics, Eindhoven University of Technology, the Netherlands, supported by the Netherlands Organization for Scientific Research (NWO).
Figure 1: Closed loop system Σ × Σ_F with perturbation Δ
2 Problem formulation and main results

Let a linear time-invariant system Σ be given by the following equations:

         x' = Ax + Bu + Ew,
    Σ :  y = C0 x + D0 w,              (2.1)
         z1 = C1 x + D1 u,
         z2 = C2 x + D2 u.

x, u, w, y, z1 and z2 take values in finite-dimensional vector spaces: x(t) ∈ R^n, u(t) ∈ R^m, w(t) ∈ R^l, y(t) ∈ R^q, z1(t) ∈ R^s and z2(t) ∈ R^p. The system parameters A, B, E, C0, C1, C2, D0, D1 and D2 are matrices of appropriate dimensions. We assume that the system is time-invariant, i.e. the system parameters are independent of time. The input w is the disturbance acting on the system. We apply compensators Σ_F to Σ of the form:

    Σ_F :  p' = Kp + Ly,               (2.2)
           u = Mp + Ny.

The resulting closed-loop system is of the form:

              x_e' = A_e x_e + E_e w,
    Σ × Σ_F : z1 = C_{1,e} x_e + D_{1,e} w,    (2.3)
              z2 = C_{2,e} x_e + D_{2,e} w,
where x_e = (x^T, p^T)^T. We will call a controller internally stabilizing if the resulting matrix A_e in (2.3) is a stability matrix. The closed-loop transfer matrices from w to z1 and from w to z2 will be denoted by G_{1,Σ_F} and G_{2,Σ_F}, respectively. Our overall goal is to minimize the H2 norm of the transfer matrix G_{2,Σ_F} over all internally stabilizing controllers of the form (2.2) under the constraint that the H∞ norm of G_{1,Σ_F} is strictly less than 1. The motivation for this problem formulation is that the H∞ norm constraint guarantees that the interconnection depicted in Figure 1 is asymptotically stable for all, possibly time-varying or nonlinear, Δ with gain less than or equal to 1, i.e. ||Δz||2 ≤ ||z||2 for all z ∈ L2. Hence we guarantee robust stability. On the other hand, by minimizing the H2 norm from w to z2 we optimize nominal performance, i.e. performance for Δ = 0, where we assume that performance is expressed in terms of the H2 norm. Frequency-dependent weighting matrices can be taken into account by incorporating them in the plant. Clearly the more general picture, where the output of the disturbance system enters the system Σ via a different input, is even more attractive but very hard to handle. In this paper we investigate an alternative cost function which minimizes an upper bound for the H2 norm. This bound was introduced in [1] and further investigated in [4, 7, 15]. Define the following algebraic Riccati equation:
    A_e Y_e + Y_e A_e^T + (Y_e C_{1,e}^T + E_e D_{1,e}^T)(I - D_{1,e} D_{1,e}^T)^{-1}(C_{1,e} Y_e + D_{1,e} E_e^T) + E_e E_e^T = 0   (2.4)
with stability condition

    σ(A_e + (Y_e C_{1,e}^T + E_e D_{1,e}^T)(I - D_{1,e} D_{1,e}^T)^{-1} C_{1,e}) ⊂ C^-   (2.5)

where σ denotes the spectrum of a matrix and C^- the open left half plane. We have the following basic fact, see e.g. [1, 14, 16]:
Lemma 2.1: Let Σ be described by (2.1). If Σ_F, described by (2.2), is such that the closed-loop system (2.3) satisfies σ(A_e) ⊂ C^-, then the following statements are equivalent:

(i) ||G_{1,Σ_F}||∞ < 1.

(ii) There exists a real symmetric matrix Y_e that satisfies (2.4) and (2.5).

If such Y_e exists then it is unique. In addition, ||G_{2,Σ_F}||2 < ∞ if and only if D_{2,e} = 0, and in that case

    ||G_{2,Σ_F}||2^2 ≤ Trace (C_{2,e} Y_e C_{2,e}^T).
It can be shown that Trace (C_{2,e} Y_e C_{2,e}^T) depends only on the transfer matrix from w to (z1, z2). We could therefore denote it by J_Σ(Σ_F), a value that depends only on Σ_F and not on a particular realization of Σ_F. J_Σ(Σ_F) is called the auxiliary cost associated with Σ_F. Our problem becomes:

Minimize J_Σ(Σ_F) over all compensators such that Σ × Σ_F is internally stable, ||G_{1,Σ_F}||∞ < 1 and D_{2,e} = D2 N D0 = 0.
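Lemma 2.1 can be illustrated numerically: for a stable closed loop with ||G1||∞ < 1, the stabilizing solution Y_e of (2.4) gives, through Trace(C_{2,e} Y_e C_{2,e}^T), an upper bound on the true squared H2 norm. The scalar sketch below uses our own hypothetical closed-loop data with D_{1,e} = 0, computes Y_e from the Hamiltonian of (2.4), and compares the bound with the exact H2 norm obtained from a Lyapunov equation.

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_lyapunov

# Hypothetical stable closed loop: Ae = -1, Ee = 1, C1e = 0.5 (so ||G1||oo = 0.5 < 1),
# C2e = 1, D1e = D2e = 0.
Ae, Ee, C1e, C2e = (np.array([[v]]) for v in (-1.0, 1.0, 0.5, 1.0))
n = Ae.shape[0]

# (2.4) with D1e = 0: Ae Ye + Ye Ae^T + Ye C1e^T C1e Ye + Ee Ee^T = 0,
# stabilizing solution via the stable invariant subspace of the Hamiltonian.
H = np.block([[Ae.T, C1e.T @ C1e], [-Ee @ Ee.T, -Ae]])
T, U, sdim = schur(H, output='real', sort='lhp')
Ye = U[n:, :n] @ np.linalg.inv(U[:n, :n])

aux_cost = np.trace(C2e @ Ye @ C2e.T)   # auxiliary cost (upper bound)

# Exact H2 norm squared via the controllability Gramian: Ae L + L Ae^T + Ee Ee^T = 0.
L = solve_continuous_lyapunov(Ae, -Ee @ Ee.T)
h2_sq = np.trace(C2e @ L @ C2e.T)
```

Here Ye = 4 - 2*sqrt(3) ≈ 0.536 while the exact squared H2 norm is 0.5, so the auxiliary cost indeed over-bounds the H2 performance, as the lemma states.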
Obviously, the first question to be posed is: does there exist a compensator Σ_F such that σ(A_e) ⊂ C^-, ||G_{1,Σ_F}||∞ < 1 and D2 N D0 = 0? In order to answer this question we need the following definitions. For P ∈ R^{n x n} we define the following matrix:

    F(P) := [ A^T P + PA + C1^T C1 + P E E^T P    P B + C1^T D1
              B^T P + D1^T C1                     D1^T D1      ].

If F(P) ≥ 0, we say that P is a solution of the quadratic matrix inequality. We also define a dual version of this quadratic matrix inequality. For any matrix Q ∈ R^{n x n} we define the following matrix:

    G(Q) := [ AQ + QA^T + E E^T + Q C1^T C1 Q    Q C0^T + E D0^T
              C0 Q + D0 E^T                      D0 D0^T        ].

If G(Q) ≥ 0, we say that Q is a solution of the dual quadratic matrix inequality.
If G(Q) >_ O, we say that Q is a solution of the dual quadratic matr ix inequality. In addition to these two matrices, we define two matrices pencils, which play dual roles:
L(p,s) : ( s, A  E E V  B ) ,
M(Q, 8) :=
Finally, we define
G~(s) :=
Gdi(S) :=
s I  A  QC[Cl ) . Co
the following two transfer matrices:
C1 (sl  A)  l B + D1,
Co (sl  A) 1 E + Do,
205
Let ρ(M) denote the spectral radius of the matrix M. By rank_{R(s)} M we denote the rank of M as a matrix with elements in the field of real rational functions R(s). We define the invariant zeros of a system (A, B, C, D) as the points in the complex plane where the Rosenbrock system matrix loses rank. We are now in a position to recall the main result from [12]:
Theorem 2.2: Consider the system (2.1). Assume that both the system (A, B, C1, D1) and the system (A, E, C0, D0) have no invariant zeros on the imaginary axis. Then the following two statements are equivalent:

(i) For the system (2.1) there exists a controller Σ_F of the form (2.2) such that the resulting closed-loop system from w to z1, with transfer matrix G_{1,Σ_F}, is internally stable and has H∞ norm less than 1, i.e. ||G_{1,Σ_F}||∞ < 1.

(ii) There exist positive semidefinite solutions P, Q of the quadratic matrix inequalities F(P) ≥ 0 and G(Q) ≥ 0 satisfying ρ(PQ) < 1, such that the following rank conditions are satisfied:

(a) rank F(P) = rank_{R(s)} G_ci,

(b) rank G(Q) = rank_{R(s)} G_di,

(c) rank ( L(P,s) ; F(P) ) = n + rank_{R(s)} G_ci  for all s ∈ C^0 ∪ C^+,

(d) rank ( M(Q,s)  G(Q) ) = n + rank_{R(s)} G_di  for all s ∈ C^0 ∪ C^+.
We note that the existence and determination of P and Q can be performed by investigating reduced-order Riccati equations. Moreover, if condition (ii) holds then one can always find Σ_F with N = 0, so that D_{2,e} = D2 N D0 = 0. We will assume throughout this paper that the assumption on the imaginary axis zeros as stated in the previous theorem holds. We will also assume that there exist matrices P and Q such that condition (ii) above is satisfied. This is needed to guarantee the existence of a controller Σ_F which makes J_Σ(Σ_F) finite. In the next section we show how to reduce our problem by a suitable system transformation.
3 A system transformation

We will use the idea of system transformations, introduced for this setting in [12]. We construct the following system:

           x_Q' = A_Q x_Q + B_Q u_Q + E_Q w_Q,
    Σ_Q :  y_Q = C0 x_Q + D0 w_Q,                (3.1)
           z1,Q = C1 x_Q + D1 u_Q,
           z2,Q = C2 x_Q + D2 u_Q,

where

    G(Q) = [ E_Q ; D0 ] [ E_Q^T  D0^T ]   (3.2)

and

    A_Q := A + Q C1^T C1,  B_Q := B + Q C1^T D1.
It has been shown in [12] that the factorization in (3.2) is always possible, and that this new system is such that (A_Q, E_Q, C0, D0) is left invertible and minimum phase. It is interesting to see how this transformation affects our cost function J_Σ(Σ_F). The following theorem is an extension of a theorem in [12]:
Theorem 3.1: Let an arbitrary compensator Σ_F of the form (2.2) be given. The following two statements are equivalent:

(i) The compensator Σ_F applied to the original system Σ in (2.1) is internally stabilizing, yields D2 N D0 = 0, and the resulting closed-loop transfer function from w to z1 has H∞ norm less than 1.

(ii) The compensator Σ_F applied to the new system Σ_Q in (3.1) is internally stabilizing, yields D2 N D0 = 0, and the resulting closed-loop transfer function from w_Q to z1,Q has H∞ norm less than 1.

Moreover,

    J_Σ(Σ_F) = Trace [(C2 + D2 N C0) Q (C2 + D2 N C0)^T] + J_{Σ_Q}(Σ_F).   (3.3)
In earlier papers (see [1, 4, 7, 15]) the assumption was made that D0 is surjective. Then our constraint that D2 N D0 = 0 guarantees that D2 N C0 = 0, and therefore the first term on the right is independent of our specific choice of compensator. The question is whether this is also true in the case that D0 is not surjective. It is, as stated in the following lemma:

Lemma 3.2: Let Q satisfying G(Q) ≥ 0 be given. Then we have

    Im C0 Q ⊆ Im D0.

Moreover, if D2 N D0 = 0, then (C2 + D2 N C0) Q (C2 + D2 N C0)^T = C2 Q C2^T.
This enables us to follow the same lines as the analysis in [7]. We start by defining the following two classes of controllers:

- A(Σ) denotes the set of controllers Σ_F of the form (2.2) such that the closed-loop system is internally stable, D2 N D0 = 0, and ||G_{1,Σ_F}||∞ < 1.

- A_m(Σ) denotes the set of state feedbacks Σ_F of the form u = Fx such that the closed-loop system is internally stable and ||G_{1,Σ_F}||∞ < 1.
We define the following system:

              x_Q' = A_Q x_Q + B_Q u_Q + E_Q w_Q,
    Σ_Q,sf :  y_Q = x_Q,                           (3.4)
              z1,Q = C1 x_Q + D1 u_Q,
              z2,Q = C2 x_Q + D2 u_Q.

Σ_Q,sf is the "state feedback version" of Σ_Q. It turns out that Σ_Q has the nice property that the state feedback and the dynamic feedback cases are strongly correlated, as is worked out in the following theorem (see [7]):
Theorem 3.3: Let Σ_Q be described by (3.1) and let Σ_Q,sf be the state feedback version of Σ_Q. Then we have the following equality:

    inf_{Σ_F ∈ A(Σ_Q)} J_{Σ_Q}(Σ_F) = inf_{Σ_F ∈ A_m(Σ_Q)} J_{Σ_Q,sf}(Σ_F).

Moreover, suppose we have a minimizing sequence for J_{Σ_Q,sf}(Σ_F) over the set A_m(Σ_Q) (note that the infimum will in general not be attained). This will be a sequence of state feedbacks of the form u_Q = F_ε x_Q (ε ↓ 0). It has been shown in [12] that (A_Q, E_Q, C0, D0) is left invertible and minimum phase. Therefore (see [13]) there exists a sequence of observer gains K_ε such that the observer

    Σ_obs :  p' = A_Q p + B_Q u_Q + K_ε(C0 p - y_Q)   (3.5)

for the system Σ_Q has the following property:

- Both the H2 norm and the H∞ norm of the closed-loop transfer matrix from w to x - p go to 0 as ε ↓ 0.

Then it is easy to show that the following sequence of controllers yields a minimizing sequence for J_{Σ_Q}(Σ_F) over the set A(Σ_Q):

    Σ_{F,ε} :  p' = A_Q p + B_Q u_Q + K_ε(C0 p - y_Q),   (3.6)
               u_Q = F_ε p.
By Theorem 3.1, this sequence of controllers (with y_Q, u_Q replaced by y, u) is also a minimizing sequence for J_Σ(Σ_F) over the set A(Σ). Hence our problem has been solved, except for the construction of this minimizing sequence of static feedbacks. The problem has thus been reduced to a finite-dimensional optimization problem (an optimization over F). Moreover, using the techniques from [7], it can be shown that we can reformulate this problem as a convex optimization problem. Assume that the closed-loop system after applying the feedback u_Q = F x_Q to Σ_Q,sf is of the form (2.3) (note that D_{1,e} = 0 and D_{2,e} = 0). We first use a different characterization of J_{Σ_Q,sf}(Σ_F). Define the following algebraic Riccati inequality:

    A_e Y + Y A_e^T + (Y C_{1,e}^T + E_e D_{1,e}^T)(C_{1,e} Y + D_{1,e} E_e^T) + E_e E_e^T < 0.   (3.7)

Note that (because of the strict inequality) the stability requirement (2.5) is equivalent to the requirement Y > 0. We get

    J_{Σ_Q,sf}(Σ_F) = inf { Trace C_{2,e} Y C_{2,e}^T | Y > 0 satisfies (3.7) }.
The overparametrization introduced above will allow us to transform the problem into a convex optimization problem. Using the above, we can reformulate our problem:

    inf_{Σ_F ∈ A_m(Σ_Q)} J_{Σ_Q,sf}(Σ_F) = inf_{F,Y} Trace (C2 + D2 F) Y (C2 + D2 F)^T

under the following constraints:

(i) Y > 0,

(ii) A_Q + B_Q F is asymptotically stable,

(iii) Y satisfies (3.7).

Note that A_e = A_Q + B_Q F. Therefore, since Y > 0 and Y(A_Q + B_Q F)^T + (A_Q + B_Q F)Y < 0, we know that A_Q + B_Q F is asymptotically stable. This makes constraint (ii) superfluous. Because Y > 0 we can instead optimize over Y and W := FY. We get the following constraints:

(a) Y > 0,

(b) A_Q Y + Y A_Q^T + B_Q W + W^T B_Q^T + E_Q E_Q^T + (Y C1^T + W^T D1^T)(C1 Y + D1 W) < 0,

with cost function:

    inf_{W,Y} Trace [ C2 Y C2^T + 2 C2 W^T D2^T + D2 W Y^{-1} W^T D2^T ].
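To make the convexity explicit, constraint (b) and the cost can be rewritten via Schur complements as linear matrix inequalities in (Y, W) together with an epigraph variable Z. This standard reformulation is not spelled out in the text; we state it here as a sketch, under the state feedback convention D_{1,e} = D_{2,e} = 0 used above:

    min_{Y, W, Z}  Trace(C2 Y C2^T) + 2 Trace(C2 W^T D2^T) + Trace(Z)

    s.t.  Y > 0,

          [ A_Q Y + Y A_Q^T + B_Q W + W^T B_Q^T + E_Q E_Q^T    Y C1^T + W^T D1^T ]
          [ C1 Y + D1 W                                        -I                ]  < 0,

          [ Z            D2 W ]
          [ W^T D2^T     Y    ]  >= 0.

By the Schur complement, the second constraint is equivalent to (b), and the third bounds D2 W Y^{-1} W^T D2^T from above by Z; hence the objective upper-bounds the original cost while every constraint is affine in (Y, W, Z).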
As shown in [7] this is a convex optimization problem. Note that, since we only use the matrix Q from Theorem 2.2, it can be shown that the above derivation holds under one single assumption: the system (A, E, C0, D0) has no invariant zeros on the imaginary axis.
4 Conclusion

In this paper we have shown that one of the assumptions made in [7] can be removed. We obtain a sequence of observers which, combined with a sequence of static feedbacks, results in a minimizing sequence. The resulting static feedback problem can be solved via a finite-dimensional convex optimization problem. We have an extra difficulty, introduced because D0 is not necessarily surjective: we cannot find an optimal observer and have to resort to a sequence of observers which in the limit has the desired property. Since we are investigating the suboptimal case this is not essential. A key point in the above analysis is that the resulting convex optimization problem is completely decoupled from whether D0 is surjective or not.

Clearly, open problems remain: we would like to use the H2 norm itself instead of the auxiliary cost. Moreover, we would like the output of the disturbance system Δ to enter the system arbitrarily, and not only in the special configuration of Figure 1.
References
[1] D.S. Bernstein and W.M. Haddad, "LQG control with an H∞ performance bound: a Riccati equation approach", IEEE Trans. Aut. Contr., 34 (1989), pp. 293-305.

[2] ——, "Robust stability and performance analysis for state-space systems via quadratic Lyapunov bounds", SIAM J. Matrix Anal. & Appl., 11 (1990), pp. 239-271.

[3] J. Doyle, K. Glover, P.P. Khargonekar, and B.A. Francis, "State space solutions to standard H2 and H∞ control problems", IEEE Trans. Aut. Contr., 34 (1989), pp. 831-847.

[4] J.C. Doyle, K. Zhou, and B. Bodenheimer, "Optimal control with mixed H2 and H∞ performance objectives", in Proc. ACC, Pittsburgh, PA, 1989, pp. 2065-2070.

[5] B.A. Francis, A Course in H∞ Control Theory, vol. 88 of Lecture Notes in Control and Information Sciences, Springer-Verlag, 1987.

[6] P.P. Khargonekar, I.R. Petersen, and K. Zhou, "Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory", IEEE Trans. Aut. Contr., 35 (1990), pp. 356-361.

[7] P.P. Khargonekar and M.A. Rotea, "Mixed H2/H∞ control: a convex optimization approach", to appear in IEEE Trans. Aut. Contr., 1990.

[8] I.R. Petersen, "A stabilization algorithm for a class of uncertain linear systems", Syst. Contr. Letters, 8 (1987), pp. 351-357.

[9] I.R. Petersen and C.V. Hollot, "A Riccati equation approach to the stabilization of uncertain linear systems", Automatica, 22 (1986), pp. 397-411.

[10] M.A. Rotea and P.P. Khargonekar, "H2 optimal control with an H∞ constraint: the state feedback case", Automatica, 27 (1991), pp. 307-316.

[11] A.A. Stoorvogel, "The robust H2 control problem: a worst case design", submitted for publication.

[12] ——, "The singular H∞ control problem with dynamic measurement feedback", SIAM J. Contr. & Opt., 29 (1991), pp. 160-184.

[13] H.L. Trentelman, Almost Invariant Subspaces and High Gain Feedback, vol. 29 of CWI Tracts, Amsterdam, 1986.

[14] J.C. Willems, "Least squares stationary optimal control and the algebraic Riccati equation", IEEE Trans. Aut. Contr., 16 (1971), pp. 621-634.

[15] K. Zhou, J. Doyle, K. Glover, and B. Bodenheimer, "Mixed H2 and H∞ control", in Proc. ACC, San Diego, CA, 1990, pp. 2502-2507.

[16] K. Zhou and P.P. Khargonekar, "An algebraic Riccati equation approach to H∞ optimization", Syst. & Contr. Letters, 11 (1988), pp. 85-91.
Nash Games and mixed H2/H∞ Control

D.J.N. Limebeer¹, B.D.O. Anderson², B. Hendel¹

¹ Department of Electrical Engineering, Imperial College, Exhibition Rd., London.
² Department of Systems Engineering, Australian National University, Canberra, Australia.
Abstract
The established theory of nonzero sum games is used to solve a mixed H2/H∞ control problem. Our idea is to use the two payoff functions associated with a two player Nash game to represent the H2 and H∞ criteria separately. We treat the state feedback problem, and we find necessary and sufficient conditions for the existence of a solution. A full stability analysis is available in the infinite horizon case [13], and the resulting controller is a constant state feedback law which is characterised by the solution to a pair of cross-coupled Riccati differential equations.
1 Introduction

It is a well-known fact that the solution to multivariable H∞ control problems is hardly ever unique. If the solution is not optimal it is never unique, even in the scalar case. With these comments in mind, the question arises as to what one can sensibly do with the remaining degrees of freedom. Some authors have suggested recovering uniqueness by strengthening the optimality criterion and solving a "superoptimal" problem [10, 11, 14, 21, 22]. Another possibility is entropy minimisation, which was introduced into the literature by Arov and Krein [2], with other contributions coming from [7, 8, 9, 15, 18]. Entropy minimisation is of particular interest in the present context because the entropy provides an upper bound on the H2 norm of the input-output operator [17]. One may therefore think of entropy minimisation as minimising an upper bound on the H2 cost. This gives entropy minimisation an H2/H∞ interpretation. As an additional bonus we mention that the solution to the minimum entropy problem is particularly simple. In the case of most state-space representation formulae for all solutions, all one need do is set the free parameter to zero [9, 15, 18].
Another approach to mixed H2/H∞ problems is due to Bernstein and Haddad [5]. Their solution turns out to be equivalent to entropy minimisation [16]. The work of Bernstein and Haddad is extended in [6, 23], where a mixed H2/H∞ problem with output feedback is considered. The outcome of this work is a formula for a mixed H2/H∞ controller which is parameterised in terms of coupled algebraic Riccati equations. The
central drawback of this approach is concerned with solution procedures for the Riccati equations; currently an expensive method based on homotopy algorithms is all that is on offer. Khargonekar and Rotea examine multiple-objective control problems which include mixed H2/H∞ type problems. In contrast to the other work in this area, they give an algorithmic solution based on convex optimisation [19].
In the present paper we seek to solve a mixed H2/H∞ problem via the solution of an associated Nash game. As is well known [4, 20], two player nonzero sum games have two performance criteria, and the idea is to use one performance index to reflect an H∞ criterion, while the second reflects an H2 optimality criterion. Following the precise problem statement given in Section 2.1, we supply necessary and sufficient conditions for the existence of a Nash equilibrium in Section 2.2. The aim of Section 2.3 is to provide our previous work with a stochastic interpretation which resembles classical LQG control. Section 2.4 provides a reconciliation with pure H2 and H∞ control. In particular, we show that H2, H∞ and H2/H∞ control may all be captured as special cases of a two player Nash game. Brief conclusions appear in Section 3.
Our notation and conventions are standard. For A ∈ C^{n×m}, A' denotes the complex conjugate transpose. E{·} is the expectation operator. We denote by ||R||_{2i} the operator norm induced by the usual 2-norm on functions of time. That is:
||R||_{2i} = sup_{w ≠ 0} ||Rw||_2 / ||w||_2.
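For a time-invariant, infinite-horizon system the induced norm ||R||_{2i} coincides with the H∞ norm of the transfer function, which can be estimated by gridding the frequency axis. A small sketch of this (our own illustrative example R(s) = 1/(s+1), whose peak gain is 1 at ω = 0; the paper itself works with time-varying, finite-horizon data):

```python
import numpy as np

# Frequency-response magnitude of R(s) = 1/(s + 1) on a grid (rad/s).
w = np.linspace(0.0, 100.0, 100001)
mag = np.abs(1.0 / (1j * w + 1.0))

# The 2-norm-induced (H-infinity) norm is the peak gain over frequency.
norm_2i = mag.max()
print(norm_2i)                      # 1.0, attained at w = 0
```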
2 The H2/H∞ Control Problem
2.1 Problem Statement
We begin by considering an H2/H∞ problem in which there is a single exogenous input. Given a linear system described by
ẋ(t) = A(t)x(t) + B1(t)w(t) + B2(t)u(t),  x(t0) = x0   (2.1)

z(t) = [C(t)x(t); D(t)u(t)]   (2.2)
in which the entries of A(t), B1(t), B2(t), C(t) and D(t) are continuous functions of time. We suppose also that D'(t)D(t) = I. In the remainder of our analysis we will take it as read that the problem data is time varying.
We wish to
(1) find a control law u*(t) such that

||z||_2^2 ≤ γ^2 ||w||_2^2  ∀ w(t) ∈ L2[t0, tf].   (2.3)

This condition can be interpreted as an H∞ type norm constraint of the form

||T_zw||_{2i} ≤ γ   (2.4)

where the operator T_zw maps the disturbance signal w(t) to the output z(t) when the optimal control law u*(t) is invoked.
(2) In addition, we require the control u*(t) to regulate the state x(t) in such a way as to minimise the output energy when the worst case disturbance is applied to the system.
As we will now show, this problem may be formulated as an LQ, nonzero sum, differential game between two opposing players u(t,x) and w(t,x). We begin by defining the strategy sets for each player. To avoid difficulties with nonunique global Nash solutions with possibly differing Nash costs, we force both players to use linear, memoryless feedback controls. This restriction also results in a simple controller implementation which is easily compared with the corresponding linear quadratic and H∞ controllers. The difficulties associated with feedback strategies involving memory are beautifully illustrated by example in Basar [3]. We introduce the cost functions to be associated with the H2 and H∞ criteria as
J1(u,w) = ∫_{t0}^{tf} ( γ^2 w'(t,x)w(t,x) − z'(t)z(t) ) dt   (2.5)
and
J2(u,w) = ∫_{t0}^{tf} z'(t)z(t) dt,   (2.6)
and we seek equilibrium strategies u*(t,x) and w*(t,x) which satisfy the Nash equilibria defined by
J1(u*, w*) ≤ J1(u*, w)   (2.7)
J2(u*, w*) ≤ J2(u, w*).   (2.8)
It turns out that the equilibrium values of J1(·,·) and J2(·,·) are quadratic in x0. Consequently, if x0 = 0, we have that J1(u*,w*) = 0 and J2(u*,w*) = 0. This observation motivates our particular choice of J1(·,·) as follows: since J1(u*,w*) = 0, we must have ||z||_2^2 ≤ γ^2||w||_2^2 for u(t,x) = u*(t,x) and all w ∈ L2[t0,tf], which ensures ||T_zw||_{2i} ≤ γ. The second Nash inequality shows that u* regulates the state to zero with minimum control energy when the input disturbance is at its worst.
2.2 The necessary and sufficient conditions for the existence of linear controls
The aim of this section is to give necessary and sufficient conditions for the existence of linear, memoryless Nash equilibrium controls. When controllers exist, we show that they are unique and parameterised by a pair of cross-coupled Riccati differential equations. We use Ω to denote the set of all linear and memoryless state feedback controls on [t0, tf].
Theorem 2.1 Given the system described by:
ẋ(t) = Ax(t) + B1w(t,x) + B2u(t,x),  x(t0) = x0   (2.9)
z(t) = [Cx(t); Du(t,x)],  D'D = I   (2.10)
there exist Nash equilibrium strategies u*(t,x) ∈ Ω and w*(t,x) ∈ Ω such that

J1(u*,w*) ≤ J1(u*,w)  ∀ w(t,x) ∈ Ω
J2(u*,w*) ≤ J2(u,w*)  ∀ u(t,x) ∈ Ω
where
J1(u,w) = ∫_{t0}^{tf} [γ^2 w'(t,x)w(t,x) − z'(t)z(t)] dt   (2.11)

J2(u,w) = ∫_{t0}^{tf} z'(t)z(t) dt   (2.12)
if and only if the coupled Riccati differential equations

−Ṗ1(t) = A'P1(t) + P1(t)A − C'C − [P1(t) P2(t)] [γ^{-2}B1B1'  B2B2'; B2B2'  B2B2'] [P1(t); P2(t)]   (2.13)

−Ṗ2(t) = A'P2(t) + P2(t)A + C'C − [P1(t) P2(t)] [0  γ^{-2}B1B1'; γ^{-2}B1B1'  B2B2'] [P1(t); P2(t)]   (2.14)

with P1(tf) = 0 and P2(tf) = 0 have solutions P1(t) ≤ 0 and P2(t) ≥ 0 on [t0, tf]. If solutions exist, we have that
(i) the Nash equilibrium strategies are uniquely specified by
u*(t,x) = −B2'P2(t)x(t)   (2.15)
w*(t,x) = −γ^{-2}B1'P1(t)x(t)   (2.16)
(ii) J1(u*,w*) = x0'P1(t0)x0   (2.17)
J2(u*,w*) = x0'P2(t0)x0.   (2.18)
(iii) In the case that u(t,x) = u*(t,x) with x0 = 0,

||T_zw||_{2i} ≤ γ  ∀ w ∈ L2[t0, tf]   (2.19)
where the operator T_zw is defined by
ẋ(t) = (A − B2B2'P2(t))x(t) + B1w(t,x)   (2.20)

z(t) = [Cx(t); −DB2'P2(t)x(t)].   (2.21)
Proof. This is given in [13].
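To illustrate Theorem 2.1, the coupled equations (2.13) and (2.14) can be integrated backwards from the terminal conditions in the scalar case. The sketch below is our own illustration with arbitrary data a, b1, b2, c, γ (simple Euler integration, not the method of [13]); it checks the sign conditions P1(t) ≤ 0 and P2(t) ≥ 0.

```python
# Scalar data: xdot = a x + b1 w + b2 u, z = (c x, u), horizon [0, tf] (test values).
a, b1, b2, c, gam = -1.0, 1.0, 1.0, 1.0, 2.0
h, n = 1e-4, 10000                  # Euler step and step count (n*h = tf = 1)
g2 = gam ** -2

p1, p2 = 0.0, 0.0                   # terminal conditions P1(tf) = P2(tf) = 0
for _ in range(n):
    # scalar (2.13): -P1dot = 2a P1 - c^2 - (g2 b1^2 P1^2 + 2 b2^2 P1 P2 + b2^2 P2^2)
    d1 = 2*a*p1 - c**2 - (g2*b1**2*p1**2 + 2*b2**2*p1*p2 + b2**2*p2**2)
    # scalar (2.14): -P2dot = 2a P2 + c^2 - (2 g2 b1^2 P1 P2 + b2^2 P2^2)
    d2 = 2*a*p2 + c**2 - (2*g2*b1**2*p1*p2 + b2**2*p2**2)
    p1, p2 = p1 + h*d1, p2 + h*d2   # one Euler step backwards in time

print(p1, p2)                       # P1(0) <= 0 and P2(0) >= 0, as the theorem requires
```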
2.3 A Stochastic Interpretation

In the previous section we found a control law u*(t,x) which achieves ||T_zw||_{2i} ≤ γ, and at the same time solves the deterministic regulator problem
min_u { J2(u, w*) = ∫_{t0}^{tf} z'(t)z(t) dt }.
In this section we extend the analysis to the case of a second white noise disturbance input. In particular, we show that the control law u*(t,x) solves the stochastic linear regulator problem
when the equation for the state dynamics is replaced by
ẋ(t) = Ax(t) + B0w0(t) + B1w(t) + B2u(t)
in which wo(t) is a realisation of a white noise process.
Theorem 2.2 Suppose
ẋ(t) = Ax(t) + B0w0(t) + B1w(t,x) + B2u(t,x),  x(t0) = x0   (2.22)

z(t) = [Cx(t); Du(t,x)],  D'D = I   (2.23)

with E{x0x0'} = Q0 and E{w0(τ)w0'(t)} = Iδ(t − τ). Then the control law u*(t,x) = −B2'P2(t)x(t)
(i) results in ||T_zw||_{2i} ≤ γ, where the operator T_zw is described by:

ẋ(t) = (A − B2B2'P2(t))x(t) + B1w(t,x),  x(t0) = 0   (2.24)

z(t) = [Cx(t); −DB2'P2(t)x(t)]   (2.25)

and

(ii) solves the stochastic linear regulator problem with w*(t,x) = −γ^{-2}B1'P1(t)x(t).
Remark  Implementing u*(t,x) and w*(t,x) gives

ẋ = (A − γ^{-2}B1B1'P1 − B2B2'P2)x + B0w0,  x(t0) = 0

w* = −γ^{-2}B1'P1x. It is immediate that
−(Ṗ1 + Ṗ2) = (A − γ^{-2}B1B1'P1 − B2B2'P2)'(P1 + P2) + (P1 + P2)(A − γ^{-2}B1B1'P1 − B2B2'P2) + γ^{-2}P1B1B1'P1
in which (P1 + P2)(tf) = 0. It then follows from a standard result on stochastic processes, [12] Thm 1.54, that the energy in the worst-case disturbance is given by:

E{ ∫_{t0}^{tf} w*'(t,x)w*(t,x) dt } = γ^{-2} ∫_{t0}^{tf} Trace[ B0'(P1(t) + P2(t))B0 ] dt.
2.4 Reconciliation of H2, H∞ and H2/H∞ theories
In this section we establish a link between linear quadratic control, H∞ control and mixed H2/H∞ control problems. Each of the three problems may be generated as special cases of the following nonzero sum, two-player Nash differential game:
Given the system described by (2.9), find Nash equilibrium strategies u*(t,x) and w*(t,x) in Ω (the set of linear memoryless feedback laws) which satisfy J1(u*,w*) ≤ J1(u*,w) and J2(u*,w*) ≤ J2(u,w*), where
J1(u,w) = ∫_{t0}^{tf} [γ^2 w'(t,x)w(t,x) − z'(t)z(t)] dt

J2(u,w) = ∫_{t0}^{tf} [z'(t)z(t) − ρ^2 w'(t,x)w(t,x)] dt.
The solution to this game is given by

u*(t,x) = −B2'S2(t)x(t)  and  w*(t,x) = −γ^{-2}B1'S1(t)x(t),

where S1(t) and S2(t) satisfy the coupled Riccati differential equations

−Ṡ1(t) = A'S1(t) + S1(t)A − C'C − [S1(t) S2(t)] [γ^{-2}B1B1'  B2B2'; B2B2'  B2B2'] [S1(t); S2(t)],  S1(tf) = 0

−Ṡ2(t) = A'S2(t) + S2(t)A + C'C − [S1(t) S2(t)] [ρ^2γ^{-4}B1B1'  γ^{-2}B1B1'; γ^{-2}B1B1'  B2B2'] [S1(t); S2(t)],  S2(tf) = 0
(i) The standard LQ optimal control problem is recovered by setting ρ = 0 and γ = ∞, in which case −S1(t) = S2(t) = P(t), with u*(t,x) = −B2'P(t)x(t) and w*(t,x) = 0. (ii) The pure H∞ control problem is recovered by setting ρ = γ. Again −S1(t) = S2(t) = P∞(t), with u*(t,x) = −B2'P∞(t)x(t) and w*(t,x) = γ^{-2}B1'P∞(t)x(t). (iii) The mixed H2/H∞ control problem comes from ρ = 0.
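Case (ii) can be checked numerically in the scalar case: integrating the coupled S1, S2 equations with ρ = γ, one finds −S1 = S2 = P∞, where P∞ solves the standard H∞ Riccati differential equation −Ṗ∞ = 2aP∞ + c^2 + (γ^{-2}b1^2 − b2^2)P∞^2. A sketch with arbitrary test data (our own illustration):

```python
# With rho = gamma the coupled S1, S2 equations collapse onto the single
# H-infinity Riccati equation, with -S1 = S2 = Pinf (scalar test data).
a, b1, b2, c, gam = -1.0, 1.0, 1.0, 1.0, 2.0
rho = gam                           # case (ii): pure H-infinity
h, n = 1e-4, 10000                  # Euler steps backwards over [0, 1]
g2, g4 = gam**-2, gam**-4

s1, s2, pinf = 0.0, 0.0, 0.0        # terminal conditions
for _ in range(n):
    d1 = 2*a*s1 - c**2 - (g2*b1**2*s1**2 + 2*b2**2*s1*s2 + b2**2*s2**2)
    d2 = 2*a*s2 + c**2 - (rho**2*g4*b1**2*s1**2 + 2*g2*b1**2*s1*s2 + b2**2*s2**2)
    dp = 2*a*pinf + c**2 + (g2*b1**2 - b2**2)*pinf**2
    s1, s2, pinf = s1 + h*d1, s2 + h*d2, pinf + h*dp

print(-s1, s2, pinf)                # all three agree to integration accuracy
```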
3 Conclusion

We have shown how to solve a mixed H2/H∞ problem by formulating it as a two player Nash game. The necessary and sufficient conditions for the existence of a solution to the problem are given in terms of the existence of solutions to a pair of cross-coupled Riccati differential equations. If the controller strategy sets are expanded to include memoryless nonlinear controls which are analytic in the state, the necessary and sufficient conditions for the existence of a solution are unchanged, as are the control laws themselves. We have also established a link between H2, H∞ and mixed H2/H∞ theories by generating each as a special case of another two-player LQ Nash game. In conclusion we mention that it is possible to obtain results for the infinite horizon problem. Under certain existence conditions, we show that the solutions of the cross-coupled Riccati equations approach limits P1 and P2, (1) which satisfy algebraic versions of (2.13) and (2.14), (2) P1 ≤ 0 and P2 ≥ 0, and (3) P1 and P2 have stability properties which are analogous to those associated with LQG and pure H∞ problems.
References

[1] B. D. O. Anderson and J. B. Moore, "Optimal Control: Linear Quadratic Methods," Prentice-Hall International, 1989

[2] D. Arov and M. G. Krein, "On the evaluation of entropy functionals and their minima in generalized extension problems," Acta Scienta Math., Vol. 45, pp 33-50, 1983, (in Russian)

[3] T. Basar, "On the uniqueness of the Nash solution in linear-quadratic differential games," Int. Journal of Game Theory, Vol. 5, pp 65-90, 1977

[4] T. Basar and G. J. Olsder, "Dynamic Noncooperative Game Theory," Academic Press, New York, 1982

[5] D. S. Bernstein and W. M. Haddad, "LQG control with an H∞ performance bound: A Riccati equation approach," IEEE Trans. Automat. Control, Vol. AC-34, No. 3, pp 293-305, 1989

[6] J. C. Doyle, K. Zhou and B. Bodenheimer, "Optimal control with mixed H2 and H∞ performance objectives," Proc. 1989 American Control Conf., Pittsburgh, Pennsylvania, pp 2065-2070, 1989

[7] H. Dym, "J-contractive matrix functions, reproducing kernel Hilbert spaces and interpolation," Vol. 71 of Regional Conference Series in Mathematics, Am. Math. Soc., 1989

[8] H. Dym and I. Gohberg, "A maximum entropy principle for contractive interpolants," J. Funct. Analysis, Vol. 65, pp 83-125, 1986

[9] K. Glover and D. Mustafa, "Derivation of the maximum entropy H∞ controller and a state-space formula for its entropy," Int. J. Control, Vol. 50, pp 899-916, 1989

[10] I. M. Jaimoukha and D. J. N. Limebeer, "State-space algorithm for the solution of the 2-block superoptimal distance problem," IEEE Conference on Decision and Control, Brighton, United Kingdom, 1991

[11] H. Kwakernaak, "A polynomial approach to minimax frequency domain optimisation of multivariable feedback systems," Int. J. Control, Vol. 44, No. 1, pp 117-156, 1986

[12] H. Kwakernaak and R. Sivan, "Linear Optimal Control Systems," J. Wiley & Sons, 1972

[13] D. J. N. Limebeer, B. D. O. Anderson and B. Hendel, "A Nash game approach to mixed H2/H∞ control," submitted for publication

[14] D. J. N. Limebeer, G. D. Halikias and K. Glover, "State-space algorithm for the computation of superoptimal matrix interpolating functions," Int. J. Control, Vol. 50, No. 6, pp 2431-2466, 1989

[15] D. J. N. Limebeer and Y. S. Hung, "An analysis of the pole-zero cancellations in H∞ optimal control problems of the first kind," SIAM J. Control Optimiz., Vol. 25, pp 1457-1493, 1987

[16] D. Mustafa, "Relations between maximum entropy H∞ control and combined H∞/LQG control," Systems and Control Letters, Vol. 12, pp 193-203, 1989

[17] D. Mustafa and K. Glover, "Minimum Entropy H∞ Control," Springer-Verlag, Heidelberg, 1990

[18] D. Mustafa, K. Glover and D. J. N. Limebeer, "Solutions to the H∞ general distance problem which minimize an entropy integral," Automatica, Vol. 27, pp 193-199, 1991

[19] M. A. Rotea and P. P. Khargonekar, "H2-optimal control with an H∞ constraint: the state feedback case," Internal Report, Univ. of Minnesota

[20] A. W. Starr and Y. C. Ho, "Nonzero-sum differential games," J. Optimization Theory and Applications, Vol. 3, No. 3, pp 184-206, 1967

[21] M. C. Tsai, D. W. Gu and I. Postlethwaite, "A state space approach to super-optimal H∞ control problems," IEEE Trans. Auto. Control, AC-33, No. 9, 1988

[22] N. J. Young, "The Nevanlinna-Pick problem for matrix-valued functions," J. Operator Theory, Vol. 15, pp 239-265, 1986

[23] K. Zhou, J. C. Doyle, K. Glover and B. Bodenheimer, "Mixed H2 and H∞ control," Proc. 1990 American Control Conf., San Diego, pp 2502-2507, 1990
ROBUST l1-OPTIMAL CONTROL

J.B. Pearson

Department of Electrical and Computer Engineering
Rice University, Houston, Texas 77251-1892 U.S.A.
1. Introduction
This paper presents a brief account of l1-optimal control theory and its role in the robust performance problem, which is one of the most important problems in control system design.
The l1-optimization problem was first formulated in 1986 by Vidyasagar [1] and solved in a series of papers [2-5] by Dahleh and Pearson. In its original form, the only source of uncertainty was in the description of the exogenous signals (reference and disturbance inputs). Further work by Dahleh and Ohta [6] determined a necessary and sufficient condition for robust stability of a system with respect to unstructured plant perturbations with bounded l∞-induced norms. This result was generalized by Khammash and Pearson [7] to structured perturbations. It was also established in [7] that the robust performance problem and the robust stability problem are equivalent in a certain sense. This result makes it possible to solve the robust performance problem for the class of generalized plants having a linear, time-invariant part plus l∞-norm bounded, structured perturbations which can be nonlinear and time-varying.
This paper is organized as follows. In Section 2, the robust performance problem is stated and l1 theory is briefly reviewed. In Section 3, we consider plant perturbations and review the current status of the robust stability problem and how it applies to the robust performance problem. Finally, in Section 4 we discuss the solution of the robust performance problem and some recent results on solutions of the L1-optimization problem.
2. The Robust Performance Problem
Figure 1 shows a system in which the uncertain plant is described by the pair (G, Δ), where G is a fixed linear-time-invariant system and Δ is a diagonal operator with entries Δi which represent causal, norm-bounded perturbations of the nominal plant G. C is a linear-time-invariant controller, and w and z represent the exogenous inputs (reference and disturbance inputs) and the regulated outputs respectively.
Figure 1
The class of admissible perturbations is given by the induced norms

|||Δi||| = sup_{x≠0} ||Δix||_p / ||x||_p ≤ 1

where p = 2 or p = ∞. In the first case, if the Δi are linear-time-invariant, then the induced norm is the H∞ norm. In the second case, the induced norm is the L1 norm when the Δi are linear-time-invariant.
System performance is measured by the system "gain"

||T_zw|| = sup_{w≠0} ||z||_p / ||w||_p,  p = 2, ∞.
The problem considered in this paper is p = ∞ and the Δi represent causal norm-bounded perturbations which can be nonlinear and/or time-varying. In this case the system gain is the L∞-induced norm. We will first consider discrete-time systems and l∞-induced norms on the perturbations and system gain. G represents a linear-shift-invariant plant and the problem of interest is the
Robust Performance Problem (RPP).
Given the system shown in Figure 1, find a linear-shift-invariant controller C such that the system is internally stable and

||T_zw|| < 1  for all admissible Δ.
This problem is the most fundamental of all control problems and is the underlying motivation for most of the introductory control systems textbooks that have been written in the last fifty years. Unfortunately, the problem was never posed in this manner until recently and, as a consequence, our libraries are full of textbooks that never precisely state what the control problem is and represent merely a collection of various tools that may or may not apply. There are a few notable exceptions such as Bower and Schultheiss [8]
and Horowitz [9]. Horowitz was the first to come to grips with the problem of plant uncertainty, which is the primary motivation for feedback control. Almost all other textbooks treat plant uncertainty superficially, if at all, and then only by means of specifications such as gain and phase margins. We owe our current enlightenment primarily to George Zames and John Doyle via the introduction of H∞ theory and the structured singular-value (SSV) or μ-methodology.
We will begin our discussion of l1-optimal control theory with the Nominal Performance Problem (NPP), where Δ = 0 and the objective is to minimize the system gain ||T_zw|| over all stabilizing controllers C. In this case, there is no nonlinear time-varying part, so the system gain is the l1-norm of the system impulse response.
Assume that the map T_zw is represented by the impulse response sequence Φ = {Φij}, which can be represented as

Φ = H − U*Q*V
where H, U and V are matrix l1 sequences, i.e.

Σ_{k=0}^∞ |Hij(k)| < ∞,  Σ_{k=0}^∞ |Uij(k)| < ∞,  Σ_{k=0}^∞ |Vij(k)| < ∞,
and * represents convolution. Q is also a matrix l1 sequence and is the well-known YJBK [10-12] parameter. The first part of our problem is to obtain a characterization of all l1 sequences {Kij(k)} that can be represented as

K = U*Q*V

with Q ∈ l1. This can be done and for a general result see McDonald and Pearson [13]. We will define M = {K ∈ l1 | K = U*Q*V, Q ∈ l1}.
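As a concrete (scalar) example of an l1 sequence, the impulse response h(k) = a^k of a stable first-order discrete-time system has l1 norm 1/(1 − |a|); a quick numerical check (our own illustration):

```python
# Scalar example: h(k) = a^k with |a| < 1, so sum_k |h(k)| = 1 / (1 - |a|).
a = 0.5
h = [a ** k for k in range(200)]    # truncated impulse response
l1_norm = sum(abs(v) for v in h)
print(l1_norm)                      # ~2.0 = 1 / (1 - 0.5)
```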
Our optimization problem now becomes

ν0 = inf_{K∈M} ||H − K||_1

or

ν0 = inf_Φ ||Φ||_1

with Φ = H − K.
This problem reduces to a finite linear programming (LP) problem in the case where the z-transforms of U and V, i.e. Û and V̂, have full row and full column ranks respectively. This is the so-called one-block problem. Other cases result in the two- or four-block problems [13].
It has been shown that if Û and V̂ have no zeros on the unit circle, there always exist solutions to the optimization problem. In the one-block case, the solution is straightforward. Furthermore, the optimal impulse response sequence is of finite length and thus the optimal controller is always rational when the plant is rational.
In the multiblock case, the situation is quite different. Here we know that optimal solutions exist, and it is possible to calculate suboptimal solutions that approach the
optimal. The current method of solution is to set the problem up in such a way that finite impulse response solutions are feasible solutions of the LP problem. Then for a given length impulse response, an optimal solution is calculated which is suboptimal for the original problem. These solutions converge to the optimal as the number of terms in the impulse response goes to infinity.
The advantage of this method of solution is that at any stage, we may stop the calculation and know that we have a stabilizing controller that furnishes a value of the performance index equal to the value of the last LP problem.
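In the scalar one-block case with V = 1, so that Φ = H − U*Q, the FIR-truncated problem described above is a small linear program: bound each |Φ(k)| by t(k) and minimise Σ_k t(k) over the FIR coefficients of Q. The sketch below is our own illustration with made-up data H and U (using scipy.optimize.linprog; it is not the general algorithm of [13]):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative one-block data (made up): Phi = H - U*Q, scalar FIR sequences.
h = np.array([1.0, 0.8, 0.5])            # H
u = np.array([1.0, -0.5])                # U; its z-transform has no unit-circle zeros
Nq, L = 4, 5                             # FIR length of Q, and of Phi = H - U*Q

# Convolution matrix C with (C q)_k = (u * q)(k) for k = 0..L-1.
C = np.zeros((L, Nq))
for k in range(L):
    for j in range(Nq):
        if 0 <= k - j < len(u):
            C[k, j] = u[k - j]
hp = np.concatenate([h, np.zeros(L - len(h))])

# LP: minimise sum_k t(k) subject to -t <= hp - C q <= t, variables (q, t).
cost = np.concatenate([np.zeros(Nq), np.ones(L)])
A_ub = np.block([[-C, -np.eye(L)], [C, -np.eye(L)]])
b_ub = np.concatenate([-hp, hp])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * Nq + [(0, None)] * L)
print(res.fun)                           # l1 model-matching cost for this FIR length
```

Lengthening Q (increasing Nq and L) makes this value converge to the true infimum, exactly as described in the text.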
3. The Robust Stability Problem
Consider the system shown in Figure 1. Part of the RPP is the
Robust Stability Problem (RSP).
Find a linear-shift-invariant C such that the system in Figure 1 is internally stable for all admissible Δ.
The first step is to consider the analysis problem: given G and C, when is the system robustly stable (i.e. stable for all admissible Δ)?
Consider Figure 2, where G and C are lumped into the linear-shift-invariant operator M.
Figure 2
We say that this system is robustly stable if, and only if,

(I − M22*Δ)^{-1}

is l∞ stable, i.e. it maps l∞ sequences to l∞ sequences and is bounded. M22 is the open-loop impulse response between points a and b in Figure 2.
We can define robust performance for the system by stating that robust performance is achieved if, and only if, (I − M22*Δ)^{-1} is l∞ stable and

||T_zw|| < 1.
This can be converted into an RSP as shown in Figure 3.
Now define Δ̂ = diag(Δ, Δp) with ||Δp|| ≤ 1.

Figure 3

Then the system in Figure 3 is robustly stable if, and only if, (I − M*Δ̂)^{-1} is l∞ stable.
Recent results establish the following equivalence.
Theorem [7]
The system in Figure 2 achieves robust performance if, and only if, the system in Figure 3 achieves robust stability.
This is similar to the result established in [14] when the perturbations are linear time-invariant and the norms are H∞ norms. In our case, the result is true when Δ is nonlinear and/or time varying.
Thus we can solve the RPP if we can solve the RSP for the system in Figure 3.
The solution to RSP is given as follows.
Define

M = [ M11 ... M1n
      ...
      Mn1 ... Mnn ]

where the Mij represent impulse response sequences between the jth input and the ith output, and

||Mij||_1 = Σ_{k=0}^∞ |Mij(k)|.

Theorem [7]
The system in Figure 3 is robustly stable if, and only if, the system of inequalities

xi ≤ Σ_{j=1}^n ||Mij||_1 xj,  i = 1, ..., n

has no solutions in R+^n \ {0}.
This result is quite easy to check. For example, when n = 1, we have

||M11||_1 < 1

as the necessary and sufficient condition for robust stability. This is the result obtained by Dahleh and Ohta [6]. When n = 2, we have

||M11||_1 < 1

and

||M22||_1 + ||M12||_1 ||M21||_1 / (1 − ||M11||_1) < 1,

and so forth.
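For n = 2 with nonnegative data a = ||M11||_1, b = ||M12||_1, c = ||M21||_1, d = ||M22||_1, the two inequalities above are exactly the condition ρ < 1 on the spectral radius of [a b; c d]. A quick numerical cross-check (our own illustration over a grid of test values):

```python
import math

# a, b, c, d stand for ||M11||_1, ||M12||_1, ||M21||_1, ||M22||_1 (all >= 0).
def n2_test(a, b, c, d):
    # The explicit n = 2 condition quoted above.
    return a < 1 and d + b * c / (1 - a) < 1

def rho_test(a, b, c, d):
    # Spectral radius of the nonnegative matrix [a b; c d].
    rho = (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b * c)
    return rho < 1

grid = [0.0, 0.25, 0.5, 0.75, 1.25]
agree = all(n2_test(a, b, c, d) == rho_test(a, b, c, d)
            for a in grid for b in grid for c in grid for d in grid)
print(agree)                             # the two tests coincide
```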
These results are useful for stability analysis but not for synthesis. This problem is solved as follows.

Define

M = [ ||M11||_1 ... ||M1n||_1
      ...
      ||Mn1||_1 ... ||Mnn||_1 ]

X = diag(xi),  xi > 0,  i = 1, 2, ..., n

ρ(M) = spectral radius of M.
Theorem [15]
The following are equivalent:
1) The system in Figure 3 is robustly stable.
2) ρ(M) < 1.

3) min_X ||X^{-1}MX||_1 < 1.
4. So lut ion of RPP
It follows from the Perron-Frobenius theory of positive matrices [16] that

ρ(M) = min_X ||X^{-1}MX||_1

and the minimizing diagonal matrix X is made up of the entries of the eigenvector corresponding to ρ(M). This gives us an approach to the synthesis problem, i.e. the problem of finding a controller that satisfies ρ(M) < 1.
Consider the following. Write

M = H − U*Q*V

and define

μ0 = min_Q min_X ||X^{-1}(H − U*Q*V)X||_1.

An iteration can now be set up in order to determine if there exists a Q such that μ0 < 1.
This is the same type of iteration that is used in μ-synthesis and is subject to some of the same problems. First, the minimization problem is nonconvex, so only local minima can be found and, second, it is not clear how to choose the (in general) nonunique Q at each stage so as to decrease the spectral radius as much as possible.
An important question remaining is how do we handle continuous-time plants? It was shown in [3] that continuous-time L1-optimal solutions consist of strings of delayed impulses. These are practically unrealizable and must be approximated by rational solutions. A recent paper on this type of approximation is that of Ohta, Maeda, and Kodama [17]. Another approach to the continuous-time problem is, instead of trying to design a continuous-time controller, to design a digital controller and, therefore, view the system as a sampled-data system in which the objective is to minimize the L∞-induced system norm. This is a very active area of research, and new results are being obtained rapidly. The results directly applicable to the L∞-induced norm are those of Sivashankar and Khargonekar [18], Dullerud and Francis [19], Bamieh et al. [20], and Khammash [21]. It is still too early to evaluate these results in terms of practical design algorithms, but it appears that we are on the right track, and that useful results will be forthcoming.
5. Acknowledgement
This research was sponsored under various grants by the National Science Foundation and the Air Force Office of Scientific Research.
6. References
[1] Vidyasagar, M., "Optimal rejection of persistent, bounded disturbances," IEEE Trans. on Automatic Control, AC-31(6), pp. 527-534 (Jun. 1986).

[2] Dahleh, M.A. and Pearson, J.B., "l1-optimal feedback controllers for MIMO discrete-time systems," IEEE Trans. on Automatic Control, AC-32(4), pp. 314-322 (Apr. 1987).

[3] Dahleh, M.A. and Pearson, J.B., "L1-optimal compensators for continuous-time systems," IEEE Trans. on Automatic Control, AC-32(10), pp. 889-895 (Oct. 1987).

[4] Dahleh, M.A. and Pearson, J.B., "Optimal rejection of persistent disturbances, robust stability, and mixed sensitivity minimization," IEEE Trans. on Automatic Control, AC-33(8), pp. 722-731 (Aug. 1988).

[5] Dahleh, M.A. and Pearson, J.B., "Minimization of a regulated response to a fixed input," IEEE Trans. on Automatic Control, AC-33(10), pp. 924-930 (Oct. 1988).

[6] Dahleh, M.A. and Ohta, Y., "A necessary and sufficient condition for robust BIBO stability," Systems and Control Letters, 11, pp. 271-275 (1988).

[7] Khammash, M. and Pearson, J.B., "Performance robustness of discrete-time systems with structured uncertainty," IEEE Trans. on Automatic Control, AC-36(4), pp. 398-412 (Apr. 1991).

[8] Bower, J.L. and Schultheiss, P.M., Introduction to the Design of Servomechanisms, New York: John Wiley & Sons (1958).

[9] Horowitz, I.M., Synthesis of Feedback Systems, New York: Academic Press (1963).

[10] Youla, D.C., Bongiorno, J.J., and Jabr, H.A., "Modern Wiener-Hopf design of optimal controllers - Part I: The single input-output case," IEEE Trans. on Automatic Control, AC-21, pp. 3-13 (1976).

[11] Youla, D.C., Bongiorno, J.J., and Jabr, H.A., "Modern Wiener-Hopf design of optimal controllers - Part II: The multivariable case," IEEE Trans. on Automatic Control, AC-21, pp. 319-338 (Jun. 1976).

[12] Kucera, V., Discrete Linear Control: The Polynomial Equation Approach, New York: John Wiley and Sons (1979).

[13] McDonald, J.S. and Pearson, J.B., "l1-optimal control of multivariable systems with output norm constraints," Automatica, 27(2), pp. 317-329 (1991).

[14] Doyle, J.C., Wall, J.E., and Stein, G., "Performance and robustness analysis for structured uncertainty," Proceedings of the 21st IEEE Conference on Decision and Control, pp. 629-636 (Dec. 1982).

[15] Khammash, M. and Pearson, J.B., "Robustness synthesis for discrete-time systems with structured uncertainty," Proceedings of the 1991 American Control Conference, Boston, MA, to appear (Jun. 1991).

[16] Gantmacher, F.R., The Theory of Matrices, Vol. II, New York: Chelsea Publishing (1959).

[17] Ohta, Y., Maeda, H., and Kodama, S., "Rational approximation of L1-optimal controllers for SISO systems," Workshop on Robust Control, Tokyo, Japan (June 24-25, 1991).

[18] Sivashankar, N. and Khargonekar, P., "L∞-induced norm of sampled-data systems," Proceedings of the 1991 Conference on Decision and Control, Brighton, England, to appear (Dec. 1991).

[19] Dullerud, G. and Francis, B., "L1 performance in sampled-data systems," IEEE Trans. on Automatic Control (to appear).

[20] Bamieh, B., Dahleh, M.A., and Pearson, J.B., "Minimization of the L∞-induced norm for sampled-data systems," Technical Report 9109, Dept. of Electrical and Computer Engineering, Rice University (1991).

[21] Khammash, M., "Necessary and sufficient conditions for the robustness of time-varying systems with applications to sampled-data systems," IEEE Trans. on Automatic Control (submitted).