Output Violation Compensation for Systems with Output Constraints

The problem of output constraints in linear systems is considered and a new methodology which helps the closed loop respect these limits is described. The new methodology invokes ideas from the anti-windup literature in order to address the problem from a practical point of view. This leads to a design procedure very much like that found in anti-windup design: first a linear controller ignoring output constraints is designed; then an additional compensation network which ensures the output limits are, as far as possible, respected, is added. As the constraints occur at the output, global results can be obtained for both stable and unstable plants.

An upper bound, obtained by integrating the cost along the simulated trajectory starting in $x_i = (-5, 0, 0)^T$, $q_i = 1$, is 8.5. The lower bound given by the value function is 7.9.

VII. CONCLUSION
This note presented an extended version of the Hamilton-Jacobi-Bellman (HJB) inequality to be used for optimal control of hybrid systems. The extended version constitutes a successful marriage between computer science and control theory, containing pure discrete-dynamic programming as well as pure continuous-dynamic programming as special cases.
The extended HJB inequality, which gives a lower bound on the value function, was discretized to a finite, computer-solvable LP that preserves the lower bound property. Based on the value function, an approximation of the optimal control feedback law was derived.
A problem with DP is the "curse of dimensionality," an expression coined by Bellman, the inventor of this method. Since the cost for a family of trajectories is computed (rather than a single trajectory, as in the Pontryagin maximum principle), the problem grows exponentially in the number of states.
The advantage with this method, however, is its applicability and ease of use for low-dimension systems. The discretization method presented in this note allows problems with up to three continuous states to be solved on a 336-MHz Ultra Sparc II.
A set of MATLAB commands has been compiled by the authors to make it easy to test the aforementioned methods and implement the examples. The LP solver that is used is "PCx," developed by the Optimization Technology Center, Illinois. The MATLAB commands and a manual of usage are available free of charge upon request from the authors.

I. INTRODUCTION
The literature reveals a vast and varied treatment of linear systems subject to input, or saturation, constraints. This problem has been tackled from many different perspectives and its study has formed one of the most important topics in the control community over several decades. To avoid repeating prior work, we do not describe this work in detail; it suffices to mention that there are now several mature techniques available to cope with input constraints [1]. The amount of attention devoted to this problem is perhaps not surprising when one considers the virtual omnipresence of control constraints in real engineering systems.
Control constraints are not the only time-domain constraint present in control systems, however. In addition to constraints on the transient response of various closed-loop signals (e.g., rise time, settling time), there are sometimes "hard" or "soft" limits imposed on the magnitude of certain plant outputs, or states. These limits reflect issues such as safety requirements or are there to prevent excessive maintenance to system components. For example, in certain aircraft, during the approach to land, there is a limit on the angle of attack to prevent accidents caused by stall or pilot error, etc. Alternatively, if a certain value is exceeded too frequently, this can cause increased wear on components, requiring more frequent maintenance to preserve performance and safety. Thus the study of output limits is an important subject for engineers.
There are several ways of tackling what we call "output constraints"-constraints on a system's state, or linear combinations of states, which can be measured or estimated reasonably accurately-some already existing in the literature. Many of these have been combined with the input-constraint literature and have, arguably, not been given the attention they deserve in their own right. Possibly one of the more complete works on output constraints is that of [3], where the authors group both state and control constraints into the search for a "maximum output admissible set"-the set of all states such that these time-domain constraints are not violated. This leads to a quite significant linear programming problem, and a controller can be designed such that it ensures that the state always belongs to the maximum output admissible set.
Another way of incorporating such constraints into controller design is via model predictive control and other receding-horizon-based strategies. In such an approach, time-domain constraints such as control and state limits can be taken into account by adding them as constraints in the optimization procedure. However, along with the method of [3], such an approach is generally expensive in terms of computation and is often lacking in terms of intuition. Thus, for many applications, particularly those where real-time computational availability is limited, these two methods can be unattractive for the practical control engineer.
The other way of taking into account output constraints, and the one which we revisit in this note, is so-called "override control." This is a technique used predominantly in industry as, typically, it is the simplest and least restrictive in terms of nominal controller structure. There is little documentation describing this approach to handling output constraints, the most lucid and comprehensive accounts occurring in [4] and [5]. Essentially, the idea behind override control is to design a controller such that, for a given output, the system behaves as normal until an output limit-which may be in a different channel to that being controlled-is violated, in which case the control is altered to bring this output below its limit again. This is closely related to multimode control (e.g., [9]), where there are more control objectives than control inputs and, to obtain satisfactory performance, the control system must be switched at certain points. The works of [5] and [4] give an analysis of performance and stability of such systems and guidelines on designing compensators but, in our opinion, do not tackle the synthesis of such compensators in a methodical manner. In addition, much of the work in [4] is directed toward single-loop schemes, which allows one to observe a very similar structure to antiwindup systems. Another paper related to these ideas is that of [14], although the actual control strategy used is invariance based and results in a more complicated control law.
The work in this note was motivated by real engineering problems where the plant under consideration is quite large and where any solution to an output constraint problem must be simple due to further constraints on computation. The basic, but less general, framework of the problem we consider was introduced in [13] and used in [12] for successfully conditioning a vertical/short takeoff and landing (VSTOL) aircraft model (a similar but purely static approach was used in [8]).
We use the same basic idea as override control; first, a controller is designed for the nominal linear system; then, in the event of output violation, an additional compensator becomes active to regulate the output back below its limit.However, our work builds on the traditional override control in several useful ways: it gives a definition of the problem we are trying to solve with our output violation compensator; it gives sufficient conditions, in terms of linear matrix inequalities (LMIs), for an output violation compensator to exist; and it is directed at multivariable systems as well as single-loop configurations.
In [2], a similar, but different, problem to the one we define here is considered. Reference [2] treats the stability analysis of a single-input-single-output closed-loop system subject to both input and output constraints. Our work differs from [2] in that we consider the design of a violation compensator which can be "retro-fitted" to ensure stability, and we do not explicitly consider input constraints.
Notation is standard throughout, with $\|x\| := \sqrt{x'x}$ denoting the Euclidean norm and $\|x\|_p$ denoting the $\mathcal{L}_p$ norm of a vector $x(t)$. The induced $\mathcal{L}_p$ norm is $\|H(\cdot)\|_{i,p} := \sup_{0 \neq x \in \mathcal{L}_p} \left( \|H(x)\|_p / \|x\|_p \right)$. The distance is given by $\mathrm{dist}(x, \mathcal{X}) := \inf_{w \in \mathcal{X}} \|x - w\|$. The space of real rational, $i \times j$-dimensional transfer function matrices is denoted $\mathcal{R}^{i \times j}$; the subset of these which are analytic in the closed right-half complex plane, with bounded supremum on the imaginary axis, is denoted $\mathcal{RH}_\infty$.

A. Nominal System
We consider the plant
$$\dot{x}_p = A_p x_p + B_p u + B_{pd}\, d, \qquad y = C_p x_p + D_p u + D_{pd}\, d, \qquad y_l = C_{pl} x_p + D_{pl} u + D_{pdl}\, d$$
where $x_p \in \mathbb{R}^{n_p}$ is the plant state, $u \in \mathbb{R}^m$ is the control input, $y \in \mathbb{R}^{n_y}$ is the output, which is fed back to the controller, $d \in \mathbb{R}^{n_d} \cap \mathcal{L}_p$ is a disturbance acting on the plant, and $y_l \in \mathbb{R}^q$ is the output on which limits are imposed. We make no assumption on the location of the poles of $G(s)$. From this, we define transfer function matrices representing the disturbance feedforward and feedback parts of $G(s)$. We assume that a stabilizing linear controller has been designed, where $x_c \in \mathbb{R}^{n_c}$ is the controller state and $r \in \mathbb{R}^{n_r} \cap \mathcal{L}_p$ represents a disturbance on the controller, normally the reference input. From this, we designate the corresponding closed-loop transfer functions.
Assumption 1: The nominal closed-loop system formed from the interconnection of $G(s)$ and $K(s)$ is internally stable and well posed. This is necessary for our work to make any practical sense and, in addition, we assume that $K(s)$ has been designed such that, for most common reference demands, $y_l(t)$ behaves sensibly and exceeds its limits only occasionally. This assumption is reminiscent of the antiwindup literature, where it is implicitly assumed that the control input saturates infrequently.

B. Output Limiting
Consider Fig. 1, which shows how violation compensation is introduced into the system. We have modeled the output limits as a saturation function $y_m = \mathrm{sat}(y_l)$, where
$$\mathrm{sat}(y_l) = [\mathrm{sat}(y_{l,1}), \ldots, \mathrm{sat}(y_{l,q})]' \quad (5)$$
and $\mathrm{sat}(y_{l,i}) = \mathrm{sign}(y_{l,i}) \min\{|y_{l,i}|, \bar{y}_{l,i}\}$, $\bar{y}_{l,i} > 0$, $\forall i \in \{1, \ldots, q\}$; here $\bar{y}_{l,i}$ denotes the output limit in the $i$th channel of $y_l$. In order to activate the violation compensator $\Phi(s) \in \mathcal{RH}_\infty$, we compare the "limited" signal $y_m$ to the actual output signal $y_l$; if there is a difference, then $\Phi(s)$ becomes active. This is similar to the antiwindup strategy except that, there, the "real" signal is the limited signal, $u_m = \mathrm{sat}(u)$; here, the "real" signal is the unlimited signal, $y_l$. If $\Phi(s)$ is active, it produces a signal $\theta(s)$ which is then fed into the controller, where $\theta = [\theta_1' \; \theta_2']' \in \mathbb{R}^{n_c + m}$. Hence, if an output limit has been violated, the control is modified in order to regulate the output below the limit again. Equivalently, this can be drawn as in Fig. 2, where we have used the fact that $\tilde{y} = y_l - \mathrm{sat}(y_l) = \mathrm{Dz}(y_l)$ ($\mathrm{Dz}(\cdot)$ denotes the deadzone operator). The resulting closed loop can now be described in state-space form with exogenous input $w = [r' \; d']'$; a full description of the state-space matrices is given in the Appendix for convenience. By Assumption 1, $A$ is a Hurwitz matrix.
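To fix ideas, the limiting and deadzone operators above can be sketched in a few lines of numpy (the limits $\bar{y}_{l,i}$ and the sample $y_l$ below are hypothetical values chosen for illustration, not taken from the note):

```python
import numpy as np

def sat(y, ybar):
    """Elementwise saturation of (5): sign(y_i) * min(|y_i|, ybar_i), ybar_i > 0."""
    return np.sign(y) * np.minimum(np.abs(y), ybar)

def dz(y, ybar):
    """Elementwise deadzone: zero inside the limits, the excess outside."""
    return y - sat(y, ybar)

# hypothetical limits for a q = 3 channel limited output (illustrative only)
ybar = np.array([1.0, 2.0, 0.5])
y_l = np.array([0.3, -2.5, 0.5])

y_m = sat(y_l, ybar)   # "limited" signal compared against y_l
y_t = dz(y_l, ybar)    # ytilde = y_l - sat(y_l) = Dz(y_l): drives Phi(s)

assert np.allclose(y_l - y_m, y_t)
assert np.all(y_t * (y_l - y_t) >= 0)  # Dz lies in Sector[0, I]
```

Only the second channel exceeds its limit here, so only its component of $\tilde{y}$ is nonzero; the final assertion checks the sector property that the stability analysis later relies on.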
An important variation on this theme is to feed $\theta$ into the controller as shown in Fig. 3, where $\theta \in \mathbb{R}^{n_r}$ is subtracted from the reference and can be interpreted as a "back-off" of the reference demand. In this case, the dimensions of $\theta(t)$ and $\Phi(s)$ change and the controller takes a modified form. This can be important from a conceptual point of view and can also allow one to take advantage of an already decoupled closed loop in the case of multivariable systems (assuming $G_{cl}(s)$ has been decoupled to some extent using $K(s)$). This case can be handled with little extra difficulty; only the state-space matrices (given in the Appendix) change.

III. STABILITY AND PERFORMANCE
The task now is to design $\Phi(s) \in \mathcal{RH}_\infty$ such that stability is maintained and some performance improvement is obtained by adding it to the system. One appealing way to measure performance is by how much the actual output $y_l(t)$ deviates from the ideal limited output $\mathrm{sat}(y_l(t))$; that is, performance can be measured by the size of $\tilde{y}(t)$.
Ideally, we would like $\tilde{y}(t) = 0$, $\forall r, d, x_0$, but with our configuration this is not possible. Instead, we approximate this objective by trying to keep $\|\tilde{y}\|$ small in response to the exogenous input, $w(t)$. Also, as a subsidiary minimization problem, we would like to keep $\theta(t)$ small; if $\theta(t)$ is large, this probably indicates that our original objective, that is, the nominal linear response, has been "backed off" considerably. Hence, we could pose a weighted minimization of $\tilde{y}$ and $\theta$ for some matrices $W_1, W_2 > 0$. This objective is a hard optimization problem, so instead we will be content to ensure that
$$\left\| \begin{bmatrix} W_1^{1/2}\tilde{y} \\ W_2^{1/2}\theta \end{bmatrix} \right\|_p < \gamma \|w\|_p \quad (10)$$
for some integer $p \in [1, \infty)$ and some suitably small $\gamma > 0$. We now formally define the problem we seek to address in the remainder of this note (a definition which was inspired by that given in [10]).
Definition 1: $\Phi(s) \in \mathcal{RH}_\infty$ is said to solve the output violation compensation problem if the closed loop is internally stable and well posed and the following conditions hold.
1) If $\mathrm{dist}(y_l, \mathcal{Y}) = 0$ $\forall t \geq 0$, then $\theta = 0$ $\forall t \geq 0$ (assuming zero initial conditions for $\Phi(s)$).
2) If $\mathrm{dist}(y_l, \mathcal{Y}) = 0$ $\forall t \geq t_0$, for some finite $t_0 \geq 0$, then $\theta(t) \to 0$ as $t \to \infty$.
Here $\mathcal{Y} = [-\bar{y}_{l,1}, \bar{y}_{l,1}] \times \cdots \times [-\bar{y}_{l,q}, \bar{y}_{l,q}]$. $\Phi(s)$ is said to solve strongly the output violation compensation problem if, in addition, the following conditions hold.
3) If $\mathrm{dist}(y_l, \mathcal{Y}) \in \mathcal{L}_p$, for some integer $p \in [1, \infty)$, then $\theta \in \mathcal{L}_p$.
4) Inequality (10) is satisfied for some integer $p \in [1, \infty)$, some $\gamma > 0$, and some matrices $W_1, W_2 > 0$.
Remark 1: Condition 1) ensures linear behavior if $y_l(t)$ never violates its limits (note $\mathrm{Dz}(y_l) = 0$ $\forall y_l \in \mathcal{Y}$): it is trivially satisfied if $\Phi(s) \in \mathcal{RH}_\infty$, but if we allow $\Phi(\cdot)$ to be nonlinear, this condition is needed. Condition 2) ensures that if $y_l(t)$ exceeds its limits for some finite time, thus exciting $\Phi(s)$, then after $y_l(t)$ falls below its threshold, linear behavior will eventually resume. This is reminiscent of the anti-windup literature, where the local structure of the controller is preserved unless saturation occurs. This property makes our work a special case of the general local-global framework introduced in [11].
Condition 3) ensures a finite $\mathcal{L}_p$ gain, which roughly captures the performance of the system, as discussed earlier. Condition 3) implies condition 2).
This note only considers the stronger version of the problem; the weaker version, which does not involve finite $\mathcal{L}_p$ gains, is the subject of ongoing research. Our first result is an existence result for admissible compensators.
Lemma 1: For a given closed-loop system $G_{cl}(s)$, there always exists a $\Phi(s)$ which solves strongly the output violation compensation problem for any matrices $W_1, W_2 > 0$.
By the assumed stability of the nominal linear system $G_{cl}(s)$, the subsystems $G_{cl,1}(s)$ and $G_{cl,2}(s)$ are both stable. As $G_{cl,2}(s) \in \mathcal{RH}_\infty$, it follows that, for some $\gamma_2 > 0$, we have $\|G_{cl,2}\|_{i,2} = \|G_{cl,2}\|_\infty =: \gamma_2$. Note also that $\|\mathrm{Dz}(\cdot)\|_{i,2} = 1$. So, by the small gain theorem, if $\|\Phi(s)\|_\infty = \beta < \gamma_2^{-1}$, the closed loop is $\mathcal{L}_2$ bounded. So, let $\Phi(s)$ be a transfer function satisfying this bound and, moreover, let it be strictly proper to ensure well posedness; then it follows that
$$\|\tilde{y}\|_2 \leq (1 - \gamma_2\beta)^{-1} \|G_{cl,1}\|_\infty \|w\|_2 \quad (13)$$
$$\|\theta\|_2 \leq \beta(1 - \gamma_2\beta)^{-1} \|G_{cl,1}\|_\infty \|w\|_2.$$
Although Lemma 1 ensures the existence of a compensator which solves strongly the output violation compensation problem, using it as a synthesis guide would probably lead to poor results: for a large reaction, and thus swift output regulation, we would like the gain of $\Phi(s)$ to be quite large. Of course, the small gain analysis of Lemma 1 restricts this. The following theorem allows the optimization of an $\mathcal{L}_2$ performance index using a static compensator $\Phi$ (static compensators require no extra states to be added to the system).
Theorem 1: There exists a compensator $\Phi \in \mathbb{R}^{(n_c+m) \times q}$ which solves strongly the output violation compensation problem for $p = 2$ if there exist matrices $Q > 0$, $U = \mathrm{diag}(\mu_1, \ldots, \mu_q) > 0$ and $L \in \mathbb{R}^{(n_c+m) \times q}$ such that the LMI (16) is satisfied. Furthermore, if this inequality is satisfied, a compensator satisfying an $\mathcal{L}_2$ gain bound of $\gamma = +\sqrt{\eta}$ is given by $\Phi = LU^{-1}$.
Proof: By virtue of $\Phi(s)$ being linear, condition 1) of the output violation compensation problem is satisfied. To see the $\mathcal{L}_2$ gain part, fix $p = 2$ and note that we want to enforce (10) for some positive-definite matrices $W_1, W_2 > 0$. As $\theta = \Phi\tilde{y}$, we obtain the equivalent inequality (17). Assume there exists a function $v(x) = x'Px > 0$ such that
$$\frac{d}{dt}v(x) + \tilde{y}'(W_1 + \Phi'W_2\Phi)\tilde{y} - \gamma^2\|w\|^2 < 0. \quad (18)$$
Then, it follows that (17) is satisfied and, hence, the output violation compensation problem is solved. Next, note that as $\mathrm{Dz}(\cdot) \in \mathrm{Sector}[0, I]$, we have $\tilde{y}_i(y_{l,i} - \tilde{y}_i) \geq 0$ $\forall i \in \{1, \ldots, q\}$. This implies there exists a matrix $W = \mathrm{diag}(w_1, \ldots, w_q) > 0$ such that $\tilde{y}'W(y_l - \tilde{y}) \geq 0$. Hence, a sufficient condition for inequality (18) to hold is that the inequality
$$\frac{d}{dt}x'Px + \tilde{y}'(W_1 + \Phi'W_2\Phi)\tilde{y} - \gamma^2 w'w + 2\tilde{y}'W(y_l - \tilde{y}) < 0 \quad (20)$$
is satisfied. This can be rewritten as (21), shown at the bottom of the page. Using standard Schur complement arguments and congruence transformations, along with the definitions $\eta := \gamma^2$ and $L := \Phi U$, it follows that this holds iff the inequality (16) is satisfied.
To prove well posedness, we first need a lemma, similar to that proven in [6], except for a varying $\Delta(\cdot) \in \Pi$, where $\Pi$ denotes the set of diagonal matrices with diagonal entries in $[0, 1]$; its proof is found in the Appendix. In particular, Lemma 2 gives a condition, in terms of a diagonal matrix $V > 0$, under which $\Delta_i(z_i(t))$ is unique for all $z_i(t)$.
In order to prove well posedness, we need to prove that $y_l(t) = Cx(t) + D_0w(t) + D\Phi\tilde{y}(t)$ has a unique solution for all $\tilde{y}(t) = \mathrm{Dz}(y_l(t))$. As $\mathrm{Dz}(y_l)$ is a globally Lipschitz, sector-bounded nonlinearity, there exists a unique $\Delta(y_l(t)) \in \Pi$ such that $\mathrm{Dz}(y_l) = \Delta(y_l(t))y_l(t)$, $\forall y_l(t)$. So we can replace $\mathrm{Dz}(y_l(t))$ by the uniquely determined time-varying gain $\Delta(y_l(t)) \in \Pi$. Thus, the question of well posedness reduces to whether we can find a unique solution to $y_l(t) = Cx(t) + D_0w(t) + D\Phi\Delta(y_l(t))y_l(t)$ for all $y_l(t)$. Existence is equivalent to the invertibility of $(I - D\Phi\Delta)$, $\forall \Delta(\cdot) \in \Pi$. Using Lemma 2, we know this to be the case if the associated matrix inequality holds for some diagonal matrix $V > 0$. Inspecting the LMI in the theorem and noting that $L := \Phi U$, we see that this is indeed the case as $U > 0$ is diagonal. Uniqueness is somewhat harder to prove but follows by noting that $|\mathrm{Dz}_i(\cdot)|$ is monotonically increasing and such that $|\mathrm{Dz}_i(y_{l,i})| \leq |y_{l,i}|$, $\forall y_{l,i}$.
Corollary 1: There exists a compensator $\Phi \in \mathbb{R}^{(n_c+m) \times q}$ which solves strongly the output violation compensation problem for all integers $p \in [1, \infty)$ if there exist matrices $Q > 0$, $U = \mathrm{diag}(\mu_1, \ldots, \mu_q) > 0$ and $L \in \mathbb{R}^{(n_c+m) \times q}$ such that the LMI (26) is satisfied. Furthermore, if this inequality is satisfied, then a suitable compensator is given by $\Phi = LU^{-1}$.
Proof: The well-posedness part of the proof is identical to that of Theorem 1. The derivation of the LMI (26) is similar, except that we omit the $\mathcal{L}_2$ gain objective. To see that finite $\mathcal{L}_p$ gain still holds, note that the LMI (26) gives sufficient conditions for the existence of a Lyapunov function $v(x) = x'Px > 0$ such that $\dot{v}(x) < -\|x\|^2$ when $w = 0$. This implies that the origin of $G_{cl}(s)$ is exponentially stable with $\tilde{y} = \mathrm{Dz}(y_l)$. Now, note that the functions
$$f(x, w) = Ax + B_0w + B\Phi\,\mathrm{Dz}(y_l)$$
$$h(x, w) = Cx + D_0w + D\Phi\,\mathrm{Dz}(y_l)$$
are globally Lipschitz in both $x$ and $w$ (note well posedness). Then, [7, Th. 6.1] can be invoked to establish that $\|h(x, w)\|_p < \gamma\|w\|_p$ for some $\gamma > 0$ and all integers $p \in [1, \infty)$; i.e., the output violation compensation problem is solved strongly.
Remark 2: The advantage of Theorem 1 is that it gives a constructive way of minimizing the $\mathcal{L}_2$ gain by way of the LMI (16). The advantage of Corollary 1 is that it is less computationally demanding and not biased toward the $\mathcal{L}_2$ gain.
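Looking back at Lemma 1, its small-gain sizing can be illustrated numerically. In this sketch, $G_{cl,2}(s) = 1/(s+2)$ is a hypothetical scalar stand-in (not a system from this note): its $\mathcal{H}_\infty$ norm is estimated on a frequency grid, and any static gain $\beta$ strictly below the reciprocal of that norm then satisfies the small-gain condition:

```python
import numpy as np

# Hypothetical scalar stand-in for G_cl2(s): G(s) = 1/(s + 2) (not from the note)
def Gcl2(w):
    return 1.0 / (1j * w + 2.0)

# Estimate gamma_2 = ||G_cl2||_inf = sup_w |G_cl2(jw)| on a log-spaced grid
w_grid = np.logspace(-3, 3, 20001)
gamma2 = np.abs(Gcl2(w_grid)).max()   # peak at w -> 0 for this lag, so ~0.5

beta_max = 1.0 / gamma2               # Lemma 1: need ||Phi||_inf = beta < 1/gamma_2
beta = 0.9 * beta_max                 # a static gain respecting the bound

assert abs(gamma2 - 0.5) < 1e-6
assert beta * gamma2 < 1.0            # small-gain condition holds
```

As the note observes, this sizing is conservative: the admissible $\beta$ shrinks as $\gamma_2$ grows, which is precisely why the LMI conditions of Theorem 1 and Corollary 1 are preferable for synthesis.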

IV. DYNAMIC SUBOPTIMAL COMPENSATORS
Although static compensators are, from a computational perspective, desirable as they require little extra online computation, in certain circumstances they may not be appropriate as they feature no frequency shaping. For example, we may want the signal $\theta$ to contain only low frequencies to avoid any jerkiness in the control input (which could cause actuator degradation in the long term). By including a lowpass filter in $\Phi(s)$ this could be avoided easily; without such an option, it would be difficult to enforce.
This section is devoted to the synthesis of a class of suboptimal compensators which address this problem. Rather than explicitly synthesising optimal dynamic compensators, which tend to be high order and can also suffer from numerical problems in the synthesis and implementation stages, we choose to synthesise suboptimal dynamic compensators. By suboptimal compensators, we mean those in which we choose the dynamic part of the compensator but cascade this with a static matrix which is synthesised optimally. In other words, we let $\Phi(s)$ be given by $\Phi(s) = \bar{\Phi}(s)K \in \mathcal{R}^{(n_c+m) \times q}$, where $\bar{\Phi}(s) \in \mathcal{R}^{(n_c+m) \times (n_c+m)}$ is a given dynamic transfer function matrix, and $K \in \mathbb{R}^{(n_c+m) \times q}$ is a static matrix to be synthesised in some sort of optimal fashion. It is the construction of $K$ which the remainder of this section addresses.
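Before turning to the synthesis of $K$, the frequency-shaping benefit motivating this structure can be sketched with a forward-Euler simulation. The first-order lowpass $\bar{\Phi}(s) = \omega_c/(s + \omega_c)$ and the scalar gain $K$ below are hypothetical illustrative choices, not values from the note:

```python
import numpy as np

# Hypothetical choices (illustrative only): Phibar(s) = wc/(s + wc), scalar K
wc, K, dt = 2.0, 5.0, 1e-3
t = np.arange(0.0, 20.0, dt)

# ytilde: a sustained unit violation plus high-frequency chatter
ytilde = 1.0 + 0.2 * np.sin(200.0 * t)

# Forward-Euler simulation of theta = Phibar(s) * (K * ytilde)
theta = np.zeros_like(t)
x = 0.0
for k in range(t.size):
    x += dt * wc * (K * ytilde[k] - x)   # xdot = -wc*x + wc*(K*ytilde)
    theta[k] = x

tail = theta[t > 15.0]                   # after transients have died out
assert abs(tail.mean() - K) < 0.05       # low-frequency content passes with gain K
assert tail.std() < 0.05                 # 200 rad/s chatter is heavily attenuated
```

The low-frequency "back-off" demand reaches the controller almost unattenuated, while the chatter that a static $\Phi$ would pass straight through is suppressed by roughly $\omega_c/200$; this is the jerkiness avoidance discussed above.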
The first step is to assign $\bar{\Phi}(s) \in \mathcal{RH}_\infty$ a minimal state-space realization, driven by the input $K\tilde{y}$. Augmenting the nominal system $G_{cl}(s)$ with these dynamics, we have $\tilde{G}_{cl}(s)$:
$$\dot{\tilde{x}} = \tilde{A}\tilde{x} + \tilde{B}_0 w + \tilde{B}_1\tilde{y}, \qquad y_l = \tilde{C}_1\tilde{x} + \tilde{D}_{01}w + \tilde{D}_1\tilde{y}, \qquad \theta = \tilde{C}_2\tilde{x} + \tilde{D}_2\tilde{y}. \quad (30)$$
A full expression for the "tilded" matrices is given in the Appendix.
Theorem 2: Given $\bar{\Phi}(s) \in \mathcal{RH}_\infty$ such that $\deg(\bar{\Phi}(s)) = k$, there exists a $k$th-order compensator $\Phi(s) = \bar{\Phi}(s)K \in \mathcal{R}^{(n_c+m) \times q}$ which solves strongly the output violation compensation problem for $p = 2$ if there exist matrices $Q > 0$, $U = \mathrm{diag}(\mu_1, \ldots, \mu_q) > 0$, $L \in \mathbb{R}^{(n_c+m) \times q}$ and a positive real scalar $\eta$ such that the LMI is satisfied. Furthermore, if this inequality is satisfied, a suitable $K$ satisfying a finite $\mathcal{L}_2$ gain $\gamma = +\sqrt{\eta}$ is given by $K = LU^{-1}$.
Proof: Noting (30), the proof is similar, mutatis mutandis, to the proof of Theorem 1.

A New Parameterization of Stable Polynomials
T. E. Djaferis, D. L. Pepyne, and D. M. Cushing
Abstract-In this note, we develop a new characterization of stable polynomials. Specifically, given $n$ positive, ordered numbers (frequencies), we develop a procedure for constructing a stable, degree $n$, monic polynomial with real coefficients. This construction can be viewed as a mapping from the space of ordered frequencies to the space of stable, degree $n$, monic polynomials. The mapping is one-one and onto, thereby giving a complete parameterization of all stable, degree $n$, monic polynomials. We show how the result can be used to generate parameterizations of stabilizing fixed-order proper controllers for unity feedback systems. We apply these results in the development of stability margin lower bounds for systems with parameter uncertainty.
Index Terms-Robustness, stability, stability margin.

I. INTRODUCTION
Stable polynomials can be studied in many ways and from a number of different perspectives. In particular, one can think of polynomials in terms of their roots or coefficients. One can exploit their Markov parameters or take the Hermite-Biehler theorem viewpoint [5] and consider their "even" and "odd" parts. In this characterization, a polynomial will be stable if and only if the even and odd parts form a "positive pair." In the "frequency domain," one can express this fact in terms of a set of frequencies that interlace. Another frequency domain interpretation states that a stable degree $n$ polynomial has the property that, as $\omega$ ranges from $0$ to $\infty$, the graph of the polynomial plotted in the complex plane has increasing phase and the net increase is $n\pi/2$ rad [6].
The finite Nyquist theorem [1] shows that stability can be ascertained by requiring that, at a finite number of frequencies, the polynomial value lies in appropriately defined consecutive sectors.
The Mikhailov criterion is one possibility, where the sectors are $90^\circ$ wide and nonoverlapping. We take a frequency domain viewpoint and "construct" a polynomial by requiring that it "behave" in a particular manner. Specifically, given an ordered set of frequencies $0 < \omega_1 < \omega_2 < \cdots < \omega_n$, we require that the polynomial value lie on a specified straight line (not ray) at each frequency. This generates a set of linear equations in the polynomial coefficients which, when solved, identifies a polynomial. The corresponding polynomial is shown to be stable, and this construction can be viewed as a mapping between ordered sets of $n$ frequencies and monic, stable, degree $n$ polynomials. The mapping is one-one and onto, and the space of positive, ordered frequencies is convex. We demonstrate these properties in Section II and in Section III show how this result leads immediately to parameterizations of fixed-order, proper, stabilizing controllers. We then apply these results in developing analytic expressions for stability margin lower bounds for systems with parameter uncertainties.

II. PARAMETERIZATION OF STABLE POLYNOMIALS
Consider the question of parameterizing the set of all stable polynomials. One can approach this question in several ways. If one fixes the number of real and complex conjugate roots, then one can express each real root by a degree one factor and each pair of complex conjugate roots by a degree two factor. Multiplying out the terms, one can obtain the coefficients of the polynomial as functions of the roots. Clearly, if a different distribution of real and complex conjugate roots is chosen, then a different expression will be obtained. We would like to develop a different characterization, one that has the same functional representation regardless of the polynomial root distribution. Furthermore, since our ultimate goal is robust analysis and design, we would like to have this parameterization give us certain advantages in that context.
We first introduce the following notation and, for simplicity of exposition, assume that $n$ is odd: $\delta(s) = s^n + \alpha_1 s^{n-1} + \alpha_2 s^{n-2} + \cdots + \alpha_n$, $\delta_e(s) = \alpha_n + \alpha_{n-2}s^2 + \alpha_{n-4}s^4 + \cdots$, $\delta_o(s) = \alpha_{n-1} + \alpha_{n-3}s^2 + \alpha_{n-5}s^4 + \cdots$, so that $\delta(s) = \delta_e(s) + s\,\delta_o(s)$ and $\delta(j\omega) = \delta_e(j\omega) + j\omega\,\delta_o(j\omega)$. Suppose that we are given $n$ frequencies $0 < \omega_1 < \omega_2 < \cdots < \omega_n$ and we require that at these frequencies the value of some polynomial lies on straight lines through the origin at angles $\pi/4, 3\pi/4, 5\pi/4, \ldots, n\pi/4$, respectively. Note the distinction between straight lines and rays. These conditions do not a priori guarantee that the constructed polynomial is stable. It would have been stable if we required that it lie on the rays through the origin at angles $\pi/4, 3\pi/4, 5\pi/4, \ldots, n\pi/4$, as an immediate consequence of the finite Nyquist theorem. In particular, we require that at $s = j\omega_1$ the value of the polynomial lies on the line at angle $\pi/4$ rad. This can be expressed as
$$\delta_e(j\omega_1) = \omega_1\delta_o(j\omega_1). \quad (1)$$
If we require that at $s = j\omega_2$ the value of the polynomial lie on the line through the origin at $3\pi/4$ rad, this can be written as
$$\delta_e(j\omega_2) = -\omega_2\delta_o(j\omega_2). \quad (2)$$
Continuing in this manner, we can generate a system of $n$ linear equations in the polynomial coefficients. For odd-numbered frequencies (odd-numbered quadrants) the expression will be as in (1), and for even-numbered frequencies (even-numbered quadrants) the expression will be as in (2).
0018-9286/02$17.00 © 2002 IEEE
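To make the construction concrete, here is a small numpy sketch for $n = 3$ with arbitrarily chosen frequencies $\omega = (1, 2, 3)$: the line conditions $\delta_e(j\omega_k) = (-1)^{k+1}\omega_k\delta_o(j\omega_k)$ become three linear equations in the coefficients of $\delta(s) = s^3 + as^2 + bs + c$, and the resulting polynomial can be checked for stability via its roots:

```python
import numpy as np

# n = 3, monic delta(s) = s^3 + a s^2 + b s + c, so at s = jw:
#   Re delta(jw) = c - a w^2       (delta_e evaluated at s = jw)
#   Im delta(jw) = w (b - w^2)     (w times delta_o evaluated at s = jw)
# line condition at w_k: Re = (-1)^(k+1) Im  (angles pi/4, 3pi/4, 5pi/4)
w = np.array([1.0, 2.0, 3.0])      # arbitrary ordered frequencies
sgn = np.array([1.0, -1.0, 1.0])   # (-1)^(k+1)

# rows: -w^2 * a - sgn*w * b + 1 * c = -sgn * w^3
A = np.column_stack([-w**2, -sgn * w, np.ones_like(w)])
rhs = -sgn * w**3
a, b, c = np.linalg.solve(A, rhs)  # -> a = 2, b = 5, c = 6

roots = np.roots([1.0, a, b, c])
assert np.allclose([a, b, c], [2.0, 5.0, 6.0])
assert np.all(roots.real < 0)      # the constructed polynomial is stable
```

Here the construction yields $\delta(s) = s^3 + 2s^2 + 5s + 6$, whose roots all lie in the open left-half plane (for a monic cubic with positive coefficients, the Routh condition $\alpha_1\alpha_2 > \alpha_3$, i.e., $10 > 6$, confirms this).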

Fig. 2. Equivalent representation of the output violation compensation scheme.

Proof of Lemma 1: We give a simple proof, although there are several. First, let $G_{cl}(s)$ be partitioned as
$$y_l = G_{cl}(s)\begin{bmatrix} w \\ \theta \end{bmatrix} = [G_{cl,1}(s) \;\; G_{cl,2}(s)]\begin{bmatrix} w \\ \theta \end{bmatrix}. \quad (11)$$

Then, noting that $W_1$ and $W_2$ are constant matrices and that $\|[z_1' \; z_2']'\|_2 \leq \|z_1\|_2 + \|z_2\|_2$, conditions 3) and 4) of the strong output violation compensation problem have been satisfied (which implies condition 2)). Noting that $\Phi(s)$ is linear, and assuming zero initial conditions, proves that condition 1) of the problem is satisfied.
