Improved Prediction Dynamics for Robust MPC

This article proposes a new model predictive control (MPC) scheme for polytopic uncertain and/or time-varying systems with state and input constraints. The MPC policies we consider employ: 1) the intersection of ellipsoids to characterize the domain of attraction; 2) a time-varying Lyapunov function to bound the cost function from above; and 3) a tailored alternating direction method of multipliers algorithm to solve the online optimization problem efficiently. With respect to other well-known techniques, the main advantage of the new approach is its reduced conservativeness.


I. INTRODUCTION
The article is concerned with the regulation problem of uncertain and/or time-varying discrete-time linear systems in the presence of state and input constraints. This problem has been extensively studied over the last decades, and a number of different solutions are available, for example, those based on min-max model predictive control (MPC) in its implicit and explicit formulations [1], [2], [3], interpolating control [4], [5], or invariant set methods [6].
In min-max MPC, the control signal is computed at each sampling interval by minimizing the worst case of a performance cost, which is in turn computed by maximizing over the possible realizations of the uncertainty. Solving these problems may require a prohibitively long time, as they are NP-hard [7]. This is an important limitation for systems with fast dynamics.
Numerous works are aimed at mitigating the computational load. In [1], an optimal linear state feedback gain is computed at each sampling interval by minimizing an upper bound on the worst-case performance index. However, this requires the online solution of a semidefinite program (SDP). In [3], it was shown that min-max MPC with a linear cost is equivalent to a multiparametric linear program, where the state plays the role of a vector of parameters for the optimization problem. The solution is the so-called explicit MPC, where the control is a piecewise affine function of the state over a polyhedral partition of the state space. However, the explicit solution may be very complex due to a high number of polyhedral cells. To avoid the high computational load of the SDP-based min-max MPC, another explicit solution was proposed in [8]: a set of stabilizing linear state feedback laws and an associated set of nested invariant ellipsoids are provided. However, this is a switching control law, which is not a continuous function of the state.
Interpolating control techniques [4], [5] have also been suggested as min-max MPC alternatives. The basic idea is to blend a high-performance feedback with constraint-aware low-gain feedback strategies. Through interpolation, simpler control algorithms may be employed for the same class of problems addressed by MPC. However, for ellipsoid-based interpolating control schemes, an SDP problem must still be solved at each sampling interval [9].
The line of research in [2], [10], and [11] has succeeded in reducing the computational burden by avoiding online SDP solving altogether. The method consists of offline and online stages. In the offline stage, the parameters of a dynamic controller are optimized in order to maximize the domain of attraction of the closed-loop system. Once the parameters are determined, the next step is to estimate an optimal upper bound of the performance index. In the online stage, the state of the dynamic controller is optimized by minimizing the obtained upper bound at each sampling time. The online optimization problem can be solved very efficiently by using a univariate Newton-Raphson method. However, in [2], common quadratic Lyapunov functions are used both to design the parameters of the dynamic controller and to calculate the optimal upper bound. Hence, the result might be conservative in the sense that only a small amount of uncertainty is allowed. This is due to the fact that the same Lyapunov matrix must be valid for all vertices of the uncertainty polytope. In addition, a lifting process in conjunction with a nonlinear parameter transformation technique is employed; hence, the resulting optimization problem is high-dimensional.
Recently, in [12], it was noticed that despite an efficient online computation, the full control range of the prediction dynamics-based MPC method with linear feedback is rarely exploited. Hence, the time to regulate the plant to the origin is often much longer than necessary. To overcome this problem, a new prediction dynamics-based MPC method with saturated feedback was proposed [12]. It was shown that by taking the input saturation information into account in the design of the dynamic controller, the control range is fully exploited. Hence, the performance can be significantly improved. However, the method in [12] is still based on common quadratic functions to optimize the parameters of the saturated dynamic feedback law, to describe the domain of attraction, and to calculate the upper bound of the cost function.
The main aim of this article consists of improving the control techniques presented in [2] and [12], by proposing new procedures for robust constrained prediction dynamics-based MPC. The new methods do not require the quadratic stabilizability of the given uncertain system. The main contributions of this article are as follows.
1) To describe the domain of attraction, we do not use a single ellipsoid but the intersection of ellipsoids, each corresponding to a different vertex of the uncertainty polytope and/or of the input saturation. With respect to existing nonconvex conditions used for the invariance of the intersection of ellipsoids [13], our conditions are convex.
2) To calculate the upper bound of the cost function, a time-varying Lyapunov function is employed instead of a time-invariant one.
3) A tailored alternating direction method of multipliers (ADMM) algorithm is proposed to solve the new optimization problem online. Compared with the existing solution to the same problem [14], the advantage of our method is that the subproblems have an analytical solution; hence, they can be solved very efficiently.
Another main contribution of this article is a particular choice of the parameters of the linear/saturated dynamic control laws that does not require solving any optimization problem. This choice has the same desired property as the algorithms in [2] and [12], i.e., the domain of attraction of the controlled system under a linear/saturated dynamic feedback law is identical to the domain of attraction under any static linear/saturated state feedback law.
This article is organized as follows. Section II is concerned with the problem formulation and preliminaries. Section III is dedicated to the prediction dynamics-based MPC method with linear feedback, while the ADMM algorithm is studied in Section IV. In Section V, results on the design of the new prediction dynamics control law with saturated feedback are presented.
Two simulated examples with comparison to earlier solutions are evaluated in Section VI. Finally, Section VII concludes this article.

A. Notation
A positive definite (semidefinite) matrix P is denoted by P ≻ 0 (P ⪰ 0). For a given P ∈ R^{n×n}, P ≻ 0, E(P) represents the following ellipsoid: E(P) = {x ∈ R^n : x^T P^{-1} x ≤ 1}. Letting h_l be the lth column of the matrix H ∈ R^{n×n_h}, L(H) is used to denote the following symmetric polyhedron: L(H) = {x ∈ R^n : |h_l^T x| ≤ 1, l = 1, …, n_h}. The indicator function of a set C ⊆ R^n is denoted by I_C and is defined as I_C(x) = 0 if x ∈ C and I_C(x) = +∞ otherwise. For symmetric matrices, the symbol (*) denotes each of the symmetric blocks. 0, 1, and I are the zero matrix, the all-ones matrix, and the identity matrix of appropriate dimensions, respectively.

A. Problem Formulation
Consider the following uncertain and/or time-varying linear discrete-time system:

x(k + 1) = A(λ(k))x(k) + B(λ(k))u(k)  (1)

where x(k) ∈ R^n is the measured state, and u(k) ∈ R^m is the control input. The matrices A(λ(k)) ∈ R^{n×n} and B(λ(k)) ∈ R^{n×m} belong to the polytope

[A(λ(k)), B(λ(k))] = Σ_{i=1}^{s} λ_i(k)[A_i, B_i]  (2)

Σ_{i=1}^{s} λ_i(k) = 1, λ_i(k) ≥ 0  (3)

It is underlined that the λ_i are unknown and time-varying. It is assumed that system (1) is controllable. The state x(k) and input u(k) are subject to symmetric linear constraints (4), where f_l is the lth column of F ∈ R^{n×n_c}. For simplicity, symmetric constraints are considered. However, the proposed approach can be extended to asymmetric constraints using, e.g., asymmetric quadratic functions [15]. The objective is to design a control law u(k) = U(x(k)) such that the controlled system fulfills the input and state constraints (4) despite the uncertainties. Furthermore, U(x(k)) should solve the following min-max problem:

min_{U(·)} max_{λ(k+t), t ≥ 0} J(k)  (5)

where

J(k) = Σ_{t=0}^{∞} [x(k + t)^T Q x(k + t) + u(k + t)^T R u(k + t)]  (6)

x(k + t), u(k + t), t = 0, 1, …, are the predicted states and predicted inputs from time k, and Q ≻ 0 and R ≻ 0 are weighting matrices.
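As an illustration of how the quadratic cost (6) accumulates along a trajectory, the following sketch evaluates it for one fixed (certain) realization of the dynamics under a linear feedback. All matrices here are illustrative values, not data from the article.

```python
import numpy as np

# Hypothetical 2-state example: evaluate J = sum_t x^T Q x + u^T R u along
# the closed-loop trajectory x_{t+1} = (A + B K) x_t, u_t = K x_t.
# A, B, K, Q, R below are illustrative (K is an assumed stabilizing gain).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[-2.0, -1.5]])
Q = np.eye(2)
R = np.array([[0.1]])

def evaluate_cost(x0, horizon=200):
    # Truncated-horizon evaluation; the tail is negligible for a stable loop.
    x, J = np.asarray(x0, float), 0.0
    for _ in range(horizon):
        u = K @ x
        J += x @ Q @ x + u @ R @ u
        x = (A + B @ K) @ x
    return J

J = evaluate_cost([1.0, 0.0])
```

For these values the closed loop (A + BK) has spectral radius below one, so the truncated sum converges to the infinite-horizon cost.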

B. Preliminaries
Consider the following ellipsoid:

E_{x1x2} = {(x_1, x_2) : [x_1^T x_2^T] P^{-1} [x_1^T x_2^T]^T ≤ 1}, with P = [[P_{11}, P_{12}], [P_{21}, P_{22}]]  (7)

Definition 1 (Projection): Given the ellipsoid E_{x1x2}, the orthogonal projection onto the x_1-subspace is

{x_1 : ∃x_2 such that (x_1, x_2) ∈ E_{x1x2}} = E(P_{11})  (8)

The cut of the ellipsoid E_{x1x2} ⊆ R^{n_1+n_2} through x_2 = 0 is given as

{x_1 : (x_1, 0) ∈ E_{x1x2}}  (9)

It can be shown that (9) is the ellipsoid E(P_{11} − P_{12} P_{22}^{-1} P_{12}^T), i.e.,

{x_1 : (x_1, 0) ∈ E_{x1x2}} = E(P_{11} − P_{12} P_{22}^{-1} P_{12}^T)  (10)

Definition 2 (Invariance): A set Ω is said to be robustly invariant for system (1) if ∀x(k) ∈ Ω, ∃u(k) = U(x(k)) such that x(k + 1) ∈ Ω. In addition, if Ω ⊆ L(F) and the associated control law satisfies the input constraints (4) on Ω, then Ω is robustly invariant and constraint-admissible.
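The projection and cut of Definition 1 can be checked numerically. The block matrix P below is an illustrative example; the cut is always contained in the projection because the subtracted term P_{12} P_{22}^{-1} P_{12}^T is positive semidefinite.

```python
import numpy as np

# Illustrative P = [[P11, P12], [P12^T, P22]] for E(P) = {z : z^T P^{-1} z <= 1}.
# Projection onto the x1-subspace: E(P11).
# Cut through x2 = 0: E(P11 - P12 P22^{-1} P12^T) (Schur complement).
P = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
n1 = 2
P11, P12, P22 = P[:n1, :n1], P[:n1, n1:], P[n1:, n1:]
cut = P11 - P12 @ np.linalg.inv(P22) @ P12.T

# Containment of the cut in the projection: P11 - cut must be PSD.
gap_eigs = np.linalg.eigvalsh(P11 - cut)
```

Both shape matrices are positive definite here, and `gap_eigs` is nonnegative, confirming that the cut ellipsoid sits inside the projection ellipsoid.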

A. Parameters Optimization
In the absence of the constraints (4), it is well known that (5) is a linear quadratic (LQ) regulator problem. The solution is the linear state feedback controller u(k) = Kx(k), where the gain K ∈ R^{m×n} can be found by solving an SDP problem.
In the presence of (4), problem (5) is intractable since the number of constraints is infinite due to (1), (2), and (4). A way to overcome this problem is to employ the following dynamic control law [2], [11], [18]: where v(k) ∈ R n v is the controller state, and N and M i are unknown matrices that will be treated as decision variables. The uncertain parameters λ i are the same as in (3). As will be seen later, knowledge of the λ i is not required for the implementation of the control law (13).
Combining (1) and (13), one obtains the closed-loop system (14), with augmented state z(k) = [x(k)^T v(k)^T]^T. It is well known [2], [9] that invariance and constraint admissibility of E(P) = {z ∈ R^{n+n_v} : z^T P^{-1} z ≤ 1} for the system (14) and for the constraints (4) are equivalent to conditions (16)-(18), where K_j and F_l are, respectively, the jth row of K and the lth row of F.
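Invariance of an ellipsoid E(P) for a polytopic autonomous system can be checked numerically vertex by vertex: E(P) is invariant for z(k+1) = A_i z(k) at every vertex iff A_i P A_i^T ⪯ P in the positive-semidefinite sense. The vertex matrices and P below are assumed example values, not the article's data.

```python
import numpy as np

# Vertex-wise invariance check for E(P) = {z : z^T P^{-1} z <= 1}:
# E(P) is invariant iff P - A_i P A_i^T is PSD at every vertex.
# A1, A2, P are illustrative.
A1 = np.array([[0.8, 0.2], [0.0, 0.5]])
A2 = np.array([[0.7, -0.1], [0.1, 0.6]])
P = 2.0 * np.eye(2)

def is_invariant(P, vertices, tol=1e-9):
    return all(np.linalg.eigvalsh(P - A @ P @ A.T).min() >= -tol
               for A in vertices)

ok = is_invariant(P, [A1, A2])
```

For these values the check passes; scaling a vertex matrix up (e.g. by 1.5) makes it fail, which is a quick sanity test of the routine.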
In the control strategy [2], N, M i , i = 1, s are optimized in order to maximize the domain of attraction. This is done by maximizing the volume of the projection E x of E(P ) onto the x-subspace. The optimization problem is nonconvex because (16) and (17) are bilinear matrix inequalities (BMI) in P, N, and M i . As shown in [2], the problem can be convexified through the use of a nonlinear parameter transformation. In addition, two other main results are obtained in [2].
i) In terms of the size of the domain of attraction, there is no advantage to be gained by using n_v > n. ii) With n_v = n, the maximum volume of E_x under the control law (13) can be as large as the volume of the largest invariant and constraint-admissible ellipsoid obtained under any robustly stabilizing static state feedback law

u(k) = Lx(k), L ∈ R^{m×n}  (19)

The first aim of this article is to present a particular choice of N and M_i such that the two results i) and ii) of [2] hold for the given gain matrix L. This is done without solving any optimization problem. For this purpose, define Ω_L as a robustly invariant and constraint-admissible set for system (1) and constraints (4) under the controller (19). Note that Ω_L can be any kind of set and need not be an ellipsoid. Consider the choice for N and M_i, ∀i = 1, s, given in (20). The following theorem concerns the relation between the feasible sets for (19) and for (13), (20).
Theorem 1: Let Ω z be an invariant and constraint-admissible set for (4), (14), and (20). Let also Ω x be the projection of Ω z onto the x-subspace. It is always possible to optimize Ω z in such a way that Ω L ⊆ Ω x .
Proof: See Appendix A.
Note that with the choice (20), the information about the system dynamics (1) is heavily exploited. In the rest of this article, (20) will be used for the linear dynamic control law (13).
Remark 1: Clearly, the simplest way to obtain Ω_L as the domain of attraction is to employ the control law u(k) = Lx(k). In general, L should be a low gain in order for Ω_L to be large. However, such a low-gain controller u(k) = Lx(k) is generally detuned and thus compromises optimal performance.

B. Domain of Attraction
In this section, we show how to calculate a robustly invariant and constraint-admissible set of the closed-loop system (14) with the constraints (4).
Two classes of invariant sets are generally considered for system (14). The first one is ellipsoidal invariant sets, which correspond to quadratic Lyapunov functions and are the most commonly used. Their popularity is due to their computational efficiency via linear matrix inequality (LMI) formulations, and their complexity is fixed with respect to the dimension of the state space. However, it is well known [6] that there exist asymptotically stable uncertain systems that do not admit quadratic Lyapunov functions. The second class is polyhedral invariant sets. In general, polyhedral invariant sets are preferred to ellipsoidal ones, as they form a universal class of Lyapunov functions [6]. However, constructing a polyhedral invariant set is generally harder than computing an ellipsoidal one, especially for uncertain systems and/or for large state dimensions [6], [9].
In recent years, other types of nonquadratic Lyapunov functions have been considered for discrete-time systems with constraints, e.g., [13], [19]. The Lyapunov functions in these works are composed of several quadratic functions. However, they lead to optimization problems with BMI constraints, which are nonconvex. In this article, we follow the idea in [13], i.e., we use the intersection of ellipsoids to characterize the domain of attraction. Compared to [13], our conditions are expressed as LMI constraints, which are convex.
Theorem 2: If there exist matrices P_i ∈ R^{2n×2n}, P_i ≻ 0 ∀i = 1, s such that conditions (21)-(23) hold, then the intersection of ellipsoids ∩_{i=1}^{s} E(P_i) is robustly invariant and constraint-admissible for (4) and (14).
Proof: See Appendix B.

Remark 2:
In Theorem 2, if we set P_i = P ∀i = 1, s, then conditions (21)-(23) boil down to (16)-(18). This implies that Theorem 2 is less conservative than the formulation that employs a single ellipsoid.
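Online, membership in the intersection of ellipsoids reduces to s scalar inequalities, since z belongs to ∩ E(P_i) iff z^T P_i^{-1} z ≤ 1 for every i. A minimal sketch with illustrative shape matrices:

```python
import numpy as np

# Membership test for the intersection of ellipsoids (P1, P2 illustrative):
# z in the intersection of E(P_i)  <=>  z^T P_i^{-1} z <= 1 for all i.
P1 = np.array([[2.0, 0.0], [0.0, 1.0]])
P2 = np.array([[1.0, 0.0], [0.0, 2.0]])

def in_intersection(z, Ps):
    z = np.asarray(z, float)
    return all(z @ np.linalg.inv(P) @ z <= 1.0 for P in Ps)

inside = in_intersection([0.5, 0.5], [P1, P2])    # in both ellipsoids
outside = in_intersection([1.3, 0.0], [P1, P2])   # in E(P1) but not E(P2)
```

The second query illustrates the reduced conservativeness: a point can satisfy one ellipsoidal condition while violating another, so the intersection is a strictly tighter description than any single ellipsoid.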
Remark 3: Condition (21) should be read as follows: (21) holds for all i_1 = 1, s and for all i = 1, s. For example, if s = 2, then (21) consists of four conditions.
Remark 4: Conditions (22) and (23) should be read as follows: for each j ∈ 1, m and each l ∈ 1, n_c, there exist i_2 ∈ 1, s and i_3 ∈ 1, s such that (22) and (23) hold. For example, consider the case s = m = 2. Then, the input constraints are satisfied if and only if one of four conditions holds. In the case s = m = 2, the set E(P_1) ∩ E(P_2) is robustly invariant and input constraint-admissible if P_1 and P_2 satisfy the corresponding conditions.
Partition P_i as P_i = [[P_{i,11}, P_{i,12}], [P_{i,21}, P_{i,22}]], where P_{i,11} ∈ R^{n×n}, P_{i,12} ∈ R^{n×n}, P_{i,21} ∈ R^{n×n}, and P_{i,22} ∈ R^{n×n}, with P_{i,11} = P_{i,11}^T, P_{i,22} = P_{i,22}^T, and P_{i,12} = P_{i,21}^T. The projection of ∩_{i=1}^{s} E(P_i) onto the x-subspace is the domain of attraction for the control law (13), (20). Using (8), this set is proportional to ∩_{i=1}^{s} E(P_{i,11}) and should be maximized. On the other hand, using (10), the intersection of the cuts through v = 0 is ∩_{i=1}^{s} E(P_{i,11} − P_{i,12} P_{i,22}^{-1} P_{i,21}). In this set, the control law (13), (20) becomes u(k) = Kx(k); hence, ∩_{i=1}^{s} E(P_{i,11} − P_{i,12} P_{i,22}^{-1} P_{i,21}) should also be maximized. In the literature, e.g., [2], the size of a set is generally measured by its volume. Here, following the idea in [16], we also take the shape of the set into consideration.
Let X_r ⊂ R^n be a prescribed compact convex set. Define, for a compact convex set X_c ⊆ R^n, the scaling factor σ(X_r, X_c) = min{σ ≥ 0 : X_r ⊆ σX_c}. If σ ≤ 1, then X_r ⊆ X_c. The reference set X_r can be chosen according to the available information on the initial conditions. In general, ellipsoids and polytopes are considered for X_r.
Define X_{ri} and Y_{ri} as reference sets for E(P_{i,11}) and for E(P_{i,11} − P_{i,12} P_{i,22}^{-1} P_{i,21}), respectively. The problem of maximizing the size of ∩_{i=1}^{s} E(P_{i,11}) and ∩_{i=1}^{s} E(P_{i,11} − P_{i,12} P_{i,22}^{-1} P_{i,21}) with respect to shape reference sets can be formulated as (25), where γ ≥ 0 is a weighting factor, which represents a balance between the size of the domain of attraction and local optimality. One could also use different weighting factors for the different sets. For simplicity, we only consider the case where X_{ri} and Y_{ri} are ellipsoids. With a slight abuse of notation, X_{ri} and Y_{ri} are used to denote the sets as well as the matrices of E(X_{ri}) and E(Y_{ri}). Condition (b) of (25) can be rewritten in terms of the blocks P_{i,11}, P_{i,12}, P_{i,21}, P_{i,22}; using the Schur complement, one gets (26) and (27). Using (26) and (27), the optimization problem (25) can be rewritten as (28), subject to (22), (23), (26), (27).
Problem (28) is a convex SDP. It can be solved efficiently using freely available LMI parsers, such as CVX [20] or YALMIP [21].

C. Min-Max Cost and Overall MPC Algorithm
In this section, a way to compute an upper bound of the cost J(k) is presented. Then, the new control scheme is introduced. Consider the time-varying function (29), where Ξ_i ∈ R^{2n×2n} and Ξ_i ≻ 0, i = 1, s are chosen to satisfy, ∀t = 0, 1, . . ., condition (30). We will provide a way to compute Ξ_i, i = 1, s; for the moment, let us assume that the Ξ_i are known. Since K and L are robustly stabilizing gains and A(k) is an upper triangular matrix, it follows that system (14) is robustly asymptotically stable for states near the origin. Hence, summing (30) over t shows that the function (29) evaluated at t = 0 is an upper bound of J(k). The following theorem concerns the existence of Ξ_i in (30).
Theorem 3: There exist Ξ_i ∀i = 1, s satisfying (30) if and only if conditions (31) hold, where Q is defined in (32).
Proof: See Appendix C.
Since the A_i are upper triangular matrices, there exist Ξ_i ∈ R^{2n×2n} satisfying (31) if and only if there exist Ξ_i in the diagonal form Ξ_i = diag(Γ_i, Φ_i), where Γ_i ∈ R^{n×n}, Φ_i ∈ R^{n×n}, Γ_i ≻ 0, and Φ_i ≻ 0 ∀i = 1, s. The matrices Ξ_i are used to provide the upper bound of the cost function J(k). The optimal Ξ_i can be found by solving the SDP problem (34). Denote Γ_i^*, Φ_i^* as an optimal solution of (34). At time k, consider the optimization problem (35). Let v^*(k) be the solution of (35). The control signal at time k is given by (36).
Theorem 4: Assuming feasibility at the initial state, the optimization-based controller (36) guarantees recursive feasibility and robust asymptotic stability.
Proof: See Appendix D.
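The role of the matrices Ξ_i can be illustrated in the certain (single-vertex) case, where the decrease condition of type (30) reduces to a standard discrete Lyapunov equation: the infinite-horizon cost equals x^T S x with S = A_cl^T S A_cl + Q̄. The data below are illustrative, and S is recovered by simple fixed-point iteration rather than an SDP.

```python
import numpy as np

# Certain closed loop x+ = A_cl x with stage cost x^T Qbar x.
# The cost-to-go matrix S solves  S = A_cl^T S A_cl + Qbar,
# computed here by fixed-point iteration (A_cl, Qbar illustrative).
A_cl = np.array([[0.9, 0.1], [0.0, 0.8]])
Qbar = np.eye(2)                       # stands in for Q + K^T R K

S = np.zeros((2, 2))
for _ in range(2000):
    S = A_cl.T @ S @ A_cl + Qbar

# Lyapunov decrease, mirroring condition (30):
# V(x) - V(A_cl x) = x^T Qbar x for every x.
x = np.array([1.0, -1.0])
decrease = x @ S @ x - (A_cl @ x) @ S @ (A_cl @ x)
```

The computed decrease equals the stage cost x^T Q̄ x exactly, which is the scalar analogue of the time-varying bound used in the article.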

Remark 5:
The online computational complexity is reduced if the same Φ_i^* can be used for a set of vertices. The price to be paid is that the performance might be degraded. Note that no constraints are imposed on Γ_i^*, i.e., Γ_i^* can be different for different vertices. In the extreme case, the same matrix is used for all vertices, i.e., Φ_i^* = Φ ∀i = 1, s. The proposed control policy with linear feedback is summarized in Algorithm 1. It consists of two stages: an offline stage and an online stage.

IV. ONLINE OPTIMIZATION PROBLEM VIA ADMM
It can be shown that (35) is a quadratically constrained quadratic program (QCQP). Hence, the solution can be obtained by using, e.g., interior point methods (IPMs) [22]. IPMs have attractive polynomial-time complexity. However, at each iteration, they require sophisticated and computationally heavy tasks to be performed, e.g., solving Newton-type systems. For systems (1) with fast sampling times, a single iteration of such a polynomial-time algorithm is often too expensive to be of practical use.
In the following, an alternative method is provided. The solution is based on the ADMM [23]. By exploiting the inner structure of (35), the advantage of our method is that the suboptimization problems of the ADMM at each iteration can be solved very efficiently, because they have an analytical solution. Note that the idea of applying the ADMM to a QCQP problem was already considered in [14], where a general nonconvex QCQP problem is reformulated in consensus optimization form and then solved by the ADMM. The main drawback of the approach in [14] is that the solution of the suboptimization problems can only be obtained numerically via iterative procedures. The solution in this article is very simple to implement and does not require any extra libraries.
In the following, we will first describe the ADMM and then show how to apply it to solve (35) in an efficient manner.

A. Alternating Direction Method of Multipliers
Consider the optimization problem (38), where f(z) and g(s) are convex functions. The solution to (38) is found by investigating the augmented Lagrangian (39), where β is the Lagrange multiplier and ρ > 0 is a tuning parameter, which represents a tradeoff between the cost function and the constraints. The matrix E is positive definite. In practice, E is generally chosen as E = I [14], [23]. In this article, E is a design parameter: it will be selected in order to solve the suboptimization problems efficiently.
In the following, we use q as iteration counter of the ADMM. A superscript (q) is used to denote the values of variables calculated at iteration q. The ADMM iteratively minimizes the augmented Lagrangian function with respect to z and s, and maximizes it with respect to β. The resulting iterates are summarized in Algorithm 2.
The exit conditions of the algorithm are determined by the primal residual p and the dual residual d given in (40). The justification for using d as a residual for dual optimality can be found in [23].
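The structure of Algorithm 2 can be made concrete on a toy splitting. The sketch below solves the projection-type problem min 0.5‖z − a‖² subject to ‖z‖ ≤ r, with f the quadratic, g the indicator of the ball, the consensus constraint z = s, E = I, and the scaled dual u = β/ρ; the problem and all values are illustrative, not the article's (41).

```python
import numpy as np

# Minimal ADMM sketch:  min_z 0.5*||z - a||^2  s.t.  ||z|| <= r,
# split as f(z) = 0.5*||z - a||^2, g(s) = indicator of the ball, z = s.
def admm_ball(a, r, rho=1.0, iters=200, tol=1e-8):
    a = np.asarray(a, float)
    z = s = u = np.zeros_like(a)
    for _ in range(iters):
        z = (a + rho * (s - u)) / (1.0 + rho)      # f-minimization step
        s_prev = s
        v = z + u
        nv = np.linalg.norm(v)
        s = v if nv <= r else r * v / nv           # g-step: project onto ball
        u = u + z - s                              # dual update
        p_res = np.linalg.norm(z - s)              # primal residual
        d_res = rho * np.linalg.norm(s - s_prev)   # dual residual
        if p_res < tol and d_res < tol:
            break
    return s

z_star = admm_ball(np.array([3.0, 4.0]), r=1.0)    # projection of a onto unit ball
```

Here both subproblems have closed forms (a linear solve and a ball projection), which is exactly the property the tailored splitting of Section IV is designed to preserve.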

B. Efficient Suboptimization
Now, we show how to reformulate (35) as (38). For this purpose, rewrite (35) in the form (41). Note that, for simplicity, the terms x(k)^T Γ_i x(k) are removed from the cost in (41). Because the A_i are upper triangular matrices, the control law (36) with v^*(k) as the solution of (41) also guarantees recursive feasibility, and the closed-loop system is robustly asymptotically stable. Problem (41) is equivalent to (43). Note that, to bound the cost function from above, we use a quadratic function instead of a standard linear function [24]. As will be shown later, this allows us to solve the suboptimization problems efficiently.
The decision variable is structured as The following auxiliary variables η i , γ i , χ i , s are introduced: This leads to the following matrix C: The matrix E is chosen as Using (44) and (45), problem (43) is reformulated as (38) with where H 0 = diag(1, 0).
Step 1 of Algorithm 2: The optimization problem is Equation (48) is a convex unconstrained QP problem. The optimal solution is given by where The constant matrices K β and K s are computed offline.
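The flavour of Step 1 can be shown on a generic unconstrained convex QP: min_z 0.5 z^T H z + f^T z has the closed-form minimizer z = −H^{-1} f, and the inverse (or a factorization) can be cached offline, analogously to the constant matrices K_β and K_s in the text. H and f below are illustrative, not (48)'s actual data.

```python
import numpy as np

# Closed-form solution of an unconstrained convex QP
#   min_z 0.5 z^T Hq z + f^T z   =>   z = -Hq^{-1} f.
# Hq is illustrative; its inverse is computed once, offline.
Hq = np.array([[4.0, 1.0], [1.0, 3.0]])
Hq_inv = np.linalg.inv(Hq)

def qp_step(f):
    # Online step: a single cached matrix-vector product.
    return -Hq_inv @ np.asarray(f, float)

z = qp_step([1.0, -2.0])
```

The optimality check is that the gradient Hq z + f vanishes at the returned point, so the online cost per iteration is one matrix-vector multiplication.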
Step 2 of Algorithm 2: The optimization problem is The cost function and the constraints of (50) are separable in (η i , γ i ) and χ i . The updates of these variables can be carried out in parallel.
The update of (η_i, γ_i) is the solution of the optimization problem (51), ∀i = 1, s, where β^(q)_{η_i} and β^(q)_{γ_i} are, respectively, the corresponding Lagrange multipliers for η_i and γ_i at iteration q. Define (52). Dividing the cost function by ρ/2, problem (51) is rewritten as (53), which has an analytical solution given by the following theorem.
Theorem 5: The optimal solution of the convex optimization problem (53) is given by (54).
Proof: See Appendix E.
Finally, the update of χ_i is the solution of the optimization problem (55), where β^(q)_{χ_i} is the corresponding Lagrange multiplier for χ_i at iteration q. Define (56). Problem (55) is rewritten as (57).
Algorithm 3: ADMM-Based Solver for (41).
It is well known [25] that (57) has an explicit solution given by Remark 6: With E given in (46), we are able to solve (53) and (57) efficiently. Note that if E = I, there exists no analytical solution for (53) and (57).
The particularization of Algorithm 2 to problem (41) is summarized in Algorithm 3.
Remark 7: The algorithm of Hann and Lou [26] can also be applied to solve (41). This method splits the computation into a number of parallel subproblems. The main inconvenience of [26] is that the solution of each suboptimization problem can only be obtained numerically via iterative procedures.

A. Prediction Dynamics
As noted in Section I, despite an efficient online computation, the control signal of the prediction dynamics-based MPC method with linear feedback rarely hits the constraints. Hence, the time to regulate the plant to the origin is much longer than, e.g., under time-optimal control. To overcome this problem, a new prediction dynamics-based MPC method with saturated feedback was considered in [12]. With the new algorithm, the control range is fully exploited, and the performance can be significantly improved. In addition, the domain of attraction of the new control law can be as large as that of any static saturated state feedback law

u(k) = sat(Lx(k)), L ∈ R^{m×n}.  (59)

However, the method in [12] is still based on common quadratic functions to characterize the domain of attraction as well as to calculate the upper bound of the cost function. The main aim of this section is to present a way to reduce the conservativeness of [12]. In addition, a particular choice of the dynamic controller parameters is also proposed. To this aim, consider the saturated dynamic control law (60), where v(k) ∈ R^n is the controller state. The saturation function sat(u) : R^m → R^m is defined componentwise in (61). Using (60) and (61), the input constraints (4) are automatically satisfied. Note that if v(k) = 0, then (60) reduces to the robust optimal saturated LQ controller (62). In the following, we recall the linear differential inclusion (LDI) modeling framework proposed in [16], which will be used to model the saturation nonlinearity. Define D as the set of m × m diagonal matrices whose diagonal elements are either 0 or 1. For example, if m = 2, then D consists of diag(0, 0), diag(0, 1), diag(1, 0), and diag(1, 1). There are 2^m elements in D. Define E_r, r = 1, 2^m, as an element of D. Define also E_r^- = I_{m×m} − E_r. Clearly, E_r^- is also an element of D.
Lemma 3 ([16]): (see (63)).
Remark 8: For simplicity, the LDI modeling framework in [16] is used. However, the approach in this article can be straightforwardly extended to the LDI framework in [27].
Compared with [16], the main advantage of [27] is that the conservativeness is reduced. However, it comes with a cost of higher computational complexity.
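The two combinatorial objects used above, the componentwise saturation and the set D of 2^m diagonal 0/1 matrices, are easy to construct directly. The sketch below assumes unit saturation bounds and m = 2 for illustration.

```python
import itertools
import numpy as np

# Componentwise saturation (unit bounds assumed for illustration).
def sat(u, u_max=1.0):
    return np.clip(np.asarray(u, float), -u_max, u_max)

# The set D of m x m diagonal matrices with diagonal entries in {0, 1};
# it has 2^m elements, and E_r^- = I - E_r is again a member of D.
def diag_01_matrices(m):
    return [np.diag(d) for d in itertools.product([0.0, 1.0], repeat=m)]

D = diag_01_matrices(2)   # 4 elements for m = 2
```

Enumerating D this way is what produces the s·2^m vertex combinations appearing later in the indices p = 1, s2^m.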
Using Lemma 3, a way to model system (1) with the saturated control law (59) or (62) is proposed. The obtained models are then used to design the prediction dynamics with the control law (60). Since the way to model (1) with (59) and (1) with (62) is the same, only the former is shown here.
where H ∈ R^{m×n} is an unknown matrix that will be treated as a decision variable, Σ_{r=1}^{2^m} α_r(k) = 1, and α_r(k) ≥ 0.
Proof: See Appendix F.
The following theorem concerns conditions for a set to be robustly invariant and constraint-admissible for (1) and (4) under the control law (59).
Theorem 7: Suppose that matrices W_p ∈ R^{n×n}, W_p ≻ 0 ∀p = 1, s2^m, G ∈ R^{n×n}, and Y ∈ R^{m×n} satisfy the following LMIs, where Y_j is the jth row of Y, and A_p, B_p are defined ∀i = 1, s, ∀r = 1, 2^m. Then, the set ∩_{p=1}^{s2^m} E(W_p) is robustly invariant for (1), (59) and constraint-admissible with respect to (4).
In general, one would like to maximize the size of ∩_{p=1}^{s2^m} E(W_p). This can be done as in Section III-B, i.e., by maximizing ∩_{p=1}^{s2^m} E(W_p) with respect to some shape reference sets. Alternatively, it is well known [24] that the volume of an ellipsoid E(W) is proportional to the square root of the determinant of W. Hence, the SDP problem (68) can be used to optimize the volume. Denote W_p^*, p = 1, s2^m, G^*, Y^* as an optimal solution of (68), and let H = Y^*(G^*)^{-1}. The matrices W_p^* will be used to show a theoretical property of the new control law (60), while H is used for designing the prediction dynamics.
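The volume statement above can be checked directly: the volume of E(W) = {x : x^T W^{-1} x ≤ 1} in R^n is the unit-ball volume times √det W, which is why maximizing log det W in (68) maximizes the ellipsoid's volume. An illustrative 2-D check:

```python
import math
import numpy as np

# vol E(W) = vol(unit n-ball) * sqrt(det W)
def ellipsoid_volume(W):
    n = W.shape[0]
    unit_ball = math.pi ** (n / 2) / math.gamma(n / 2 + 1)
    return unit_ball * math.sqrt(np.linalg.det(W))

# Ellipse with semi-axes 2 and 3 (W = diag(4, 9)): area should be 6*pi.
v = ellipsoid_volume(np.diag([4.0, 9.0]))
```

Since log det is concave on positive definite matrices, the volume objective stays compatible with the LMI constraints, which is what makes (68) a tractable SDP.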
Analogously, by solving a corresponding SDP problem, one can obtain an auxiliary matrix gain Υ ∈ R^{m×n} for the saturated controller (62) such that the domain of attraction of the closed-loop system (1), (62) is maximized. Consider now the saturated dynamic control law (60). Using Lemma 3, one gets, ∀x and ∀v satisfying (70), the representation (69), where H̄ = [Υ  H − Υ]. By using the same procedure as the one used to prove Theorem 6, it can be shown that the closed-loop system (1), (60) can be rewritten, ∀x, ∀v satisfying (70), as (71), where ζ_p(k) = λ_i(k)α_r(k) ∀i = 1, s, r = 1, 2^m, p = 1, s2^m. Assume that v(k) is the output of the auxiliary system (72). Note that the dynamics (72) depend not only on the polytopic uncertainty (2) but also on the input saturation. Combining (71) and (72), one obtains (73). Define Ω_{c,z} as a robustly invariant and constraint-admissible set for (73), for the state constraints (4), and for (70). Define also Ω_{c,x} as the projection of Ω_{c,z} onto the x-subspace. The following corollary holds.
Corollary 1: Ω_{c,z} can be optimized in such a way that the domain of attraction of (1) under (59) is contained in Ω_{c,x}.
Proof: It is omitted here, since it follows the same lines as the proof of Theorem 1.
Corollary 1 states that the domain of attraction of the control law (60) can be as large as that of any static saturated state feedback law. Note that, for simplicity, the intersection of ellipsoids ∩_{p=1}^{s2^m} E(W_p^*) is used as the domain of attraction for the closed-loop system (1), (59). However, it is clear that this set can be any kind of set.

B. Domain of Attraction, Cost Function, and Control Law
Once the prediction dynamics are designed, the next steps are as follows: i) to calculate the domain of attraction; ii) to estimate the upper bound of the cost function J(k); iii) to minimize the obtained upper bound online.
Similarly to Theorem 2, the following corollary establishes the theoretical support of the algorithm proposed to obtain an estimation of the domain of attraction for (73).
Corollary 2: Suppose that matrices P_{c,p} ∈ R^{2n×2n}, P_{c,p} ≻ 0 ∀p = 1, s2^m satisfy the following LMIs, where H̄_j is the jth row of H̄. Then, the set ∩_{p=1}^{s2^m} E(P_{c,p}) is robustly invariant for (73), and constraint-admissible with respect to the state constraints (4) and to (70).
Proof: It is omitted here. Once the robust invariance and constraint-admissibility conditions are expressed as LMI constraints, the size of ∩_{p=1}^{s2^m} E(P_{c,p}) can be maximized as in Section III-B. Details are not considered further.
Our next step is to calculate the upper bound of the cost function J(k). Consider the time-varying quadratic function (77), where Ξ_{c,p} ∈ R^{2n×2n}, Ξ_{c,p} ≻ 0 ∀p = 1, s2^m are chosen to satisfy (78). By summing (78) from t = 0 to t = ∞, one can show that (79) holds, where R_i = R ∀i = 1, s. The following corollary concerns the existence of Ξ_{c,p}.
Corollary 3: There exist Ξ_{c,p} ∀p = 1, s2^m satisfying (78) if conditions analogous to (31) hold, where Q is defined in (32).
Proof: The proof is omitted since it is almost the same as that of Theorem 3.
We generally look for Ξ_{c,p} in the diagonal form Ξ_{c,p} = diag(Γ_{c,p}, Φ_{c,p}), p = 1, s2^m. The optimal Ξ_{c,p} can be found by solving the SDP problem (82). Denote Γ_{c,p}^*, Φ_{c,p}^*, p = 1, s2^m as an optimal solution of (82). At time k, for a given state x(k), consider the optimization problem (83). As in Section IV, (83) can be solved by Algorithm 3. Let v^*(k) be the optimal solution of (83). The control signal (84) is computed and applied to (1).
Corollary 4: The controller (84) guarantees recursive feasibility and robust asymptotic stability for all feasible initial states.
Proof: It is omitted here. Algorithm 4 summarizes the prediction dynamics-based MPC with saturated feedback. It consists of two stages: an offline stage and an online stage.

VI. EXAMPLES
Two example systems are presented in this section. The CVX toolbox [20] was used to solve the SDP optimization problems. For comparison purposes, the algorithm in [2], where a single ellipsoid is used to describe the domain of attraction and a time-invariant function to upper bound the cost, is denoted as Algorithm 5. We also denote Algorithm 1 with the same Φ_i^* and with different Φ_i^* as Algorithm 1.1 and Algorithm 1.2, respectively. This illustrates the benefits of using the intersection of ellipsoids to characterize the domain of attraction, as well as the time-varying function to bound the cost function from above.

A. Example 1
To illustrate the concept of the intersection of ellipsoids used to characterize the domain of attraction, a very simple example is presented. Consider the system (1); there are only input constraints. Note that in Fig. 1, only three ellipsoids E(P_{c,p}), p = 1, 2, 4, are shown. This is because the remaining ellipsoid contains E(P_{c,2}); hence, ∩_{p=1}^{4} E(P_{c,p}) = ∩_{p∈{1,2,4}} E(P_{c,p}).

B. Example 2
This example is a classical angular positioning system [1]. The system consists of a rotating antenna at the origin of the plane, driven by an electric motor (see Fig. 2). The control problem is to rotate the antenna by applying the input voltage to the motor so that it always points in the direction of a moving object in the plane. The motion of the antenna can be described by the discrete-time system (1), obtained from its continuous-time counterpart by discretization with a sampling time of 0.1 [s] and Euler's first-order approximation for the derivative. As a result, one obtains the system matrices, the input and state constraints, the weighting matrices, and the corresponding LQ gain matrix. The matrices {P_1, P_2}, {P_{c,p}}, p = 1, …, 4, and P are obtained using Algorithms 1, 4, and 5, respectively. They are not reported here but will be sent to the reader on demand. Fig. 3 shows the cut of the robustly invariant and constraint-admissible sets through v = 0 for Algorithm 1 (dashed red lines), Algorithm 4 (solid blue lines), and Algorithm 5 (dash-dotted yellow line). Note that, since v = 0: 1) the prediction dynamics controller becomes u(k) = Kx(k) for Algorithm 1 inside the intersection of the two dashed red sets; 2) the prediction dynamics controller becomes u(k) = sat(Kx(k)) for Algorithm 4 inside the intersection of the four solid blue sets; 3) the prediction dynamics controller becomes u(k) = Kx(k) for Algorithm 5 inside the dash-dotted yellow set. For the initial condition x(0) = [−1 −0.5]^T, Figs. 4 and 5 show the state and input trajectories of the closed-loop system as functions of time for Algorithm 1.1 (dashed green lines), Algorithm 1.2 (dashed red lines), Algorithm 4 (solid blue lines), and Algorithm 5 (dash-dotted yellow lines). Figs. 4 and 5 also show the state and input trajectories for the low-gain controller u(k) = Lx(k).
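The discretization step described above can be sketched generically. The helper below applies Euler's first-order approximation x' ≈ (x(k+1) − x(k))/T_s to a continuous-time pair (A_c, B_c); the example matrices in the test are hypothetical illustrative values, not the paper's.

```python
import numpy as np

def euler_discretize(Ac, Bc, Ts):
    """Forward-Euler discretization of x' = Ac x + Bc u with sampling time Ts:
    x(k+1) = (I + Ts*Ac) x(k) + Ts*Bc u(k)."""
    n = Ac.shape[0]
    Ad = np.eye(n) + Ts * Ac
    Bd = Ts * Bc
    return Ad, Bd
```

For a polytopic uncertain system, the same map is applied vertex by vertex, since it is affine in (A_c, B_c).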
Note that for this initial condition, the performances are the same for u(k) = Lx(k) and for u(k) = sat(Lx(k)), since the control action is not saturated. Fig. 5 also presents the realization of λ.
Using Figs. 4 and 5, the following remarks can be made. 1) Algorithm 4 has the best performance.
2) The performance of Algorithm 1.2 with the time-varying upper-bound cost function is better than that of Algorithm 1.1 with the time-invariant upper-bound cost function. 3) Algorithm 1.1, with the intersection of ellipsoids as the domain of attraction, has a better performance than Algorithm 5 with a single ellipsoid. 4) All optimization-based controllers outperform the static control law u(k) = Lx(k). Finally, using the TIC/TOC function of MATLAB 2022a, the online computation times for one sampling interval for Algorithms 1.1, 1.2, 4, and 5 are 0.3036, 0.4635, 0.5568, and 0.1437 [ms], respectively. The Newton-Raphson method was used to solve the online optimization problem for Algorithm 5.

VII. CONCLUSION
Two novel prediction dynamics-based MPC approaches are presented for polytopic uncertain and/or time-varying discrete-time systems with state and input constraints. A particular choice of the parameters of the prediction dynamics with linear/saturated feedback is proposed that does not require solving any optimization problem. This choice has the same desired property as earlier solutions in the literature, i.e., the domain of attraction of the controlled system under a linear/saturated dynamic feedback law is identical to the domain of attraction under any static linear/saturated state feedback law. To describe the domain of attraction, as well as to estimate the upper bound of the cost function, a set of quadratic functions is employed, each one corresponding to a different vertex of the uncertainty polytope and/or of the saturated inputs. The design procedures do not require the assumption of quadratic stability. A tailored, efficient ADMM algorithm is proposed to solve the online optimization problem. Two numerical examples with comparisons to earlier solutions in the literature demonstrate the effectiveness of the new methods. A toolbox for the proposed methods is currently under development.

APPENDIX A PROOF OF THEOREM 1
Since Ω_L is a robustly invariant and constraint-admissible set for (1) and (4) under (19), it follows that ∀x(k) ∈ Ω_L and where L_j is the jth row of L. Using (20), rewrite the closed-loop system (14) as Using (13) and (20), the constraints (4) become where K_j is the jth row of K. For all z(k) such that x(k) = v(k), one has
Hence, z(k+1) ∈ E(P_{i1}) ∀i1 = 1, …, s or, equivalently, z(k+1) ∈ ∩_{i1=1}^{s} E(P_{i1}). It remains to prove the constraint admissibility (4). Since Using Lemma 1, the condition E(P_{i2}) ⊆ L(K_j^T) is equivalent to 1 − K_j P_{i2} K_j^T ≥ 0. This condition is (22). Similarly, for the state constraints, for each l ∈ {1, …, n_c}, if there exists i3 such that E(P_{i3}) ⊆ L(F_l^T), then Using Lemma 1, one can rewrite E(P_{i3}) ⊆ L(F_l^T) as 1 − F_l P_{i3} F_l^T ≥ 0. One obtains (23).
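The Lemma 1 condition invoked above can be verified numerically. Under the convention E(P) = {x : x^T P^{-1} x ≤ 1}, the support value of c over E(P) is sqrt(c^T P c), so E(P) ⊆ {x : |c^T x| ≤ 1} iff 1 − c^T P c ≥ 0. The sketch below (illustrative names) also brute-force checks the bound on a 2-D ellipsoid boundary:

```python
import numpy as np

def ellipsoid_in_halfspace(P, c):
    """E(P) = {x: x^T P^{-1} x <= 1} lies in {x: |c^T x| <= 1} iff 1 - c^T P c >= 0."""
    return 1.0 - c @ P @ c >= 0.0

def max_on_boundary(P, c, n=10000):
    """Brute-force max of |c^T x| over the boundary of E(P), 2-D case.
    With P = L L^T, x = L y and ||y|| = 1 sweeps the boundary."""
    L = np.linalg.cholesky(P)
    th = np.linspace(0.0, 2.0 * np.pi, n)
    X = L @ np.vstack([np.cos(th), np.sin(th)])
    return np.max(np.abs(c @ X))
```

The sampled maximum should agree with sqrt(c^T P c), confirming that the scalar inequality is exactly the containment test used in (22) and (23).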

APPENDIX C PROOF OF THEOREM 3
Since t is arbitrary, it suffices to verify (30) for t = 0. Using (30) and the facts that x(k) = [I_n 0_n] z(k) and u(k) = K z(k), one obtains condition (92). Condition (92) holds if and only if, ∀i1 = 1, …, s, Using the Schur complement, one obtains, ∀i1 = 1, …, s, or equivalently, ∀i, ∀i1 = 1, …, s, The proof is complete.
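The Schur complement step used in the proof rests on the standard equivalence: for C invertible, the block matrix [[A, B], [B^T, C]] is positive semidefinite iff C ⪰ 0 and A − B C^{-1} B^T ⪰ 0. A small numeric sanity check (illustrative names, numeric tolerance in place of exact semidefiniteness):

```python
import numpy as np

def psd(M, tol=1e-9):
    # symmetric part guards against round-off asymmetry
    return np.min(np.linalg.eigvalsh((M + M.T) / 2.0)) >= -tol

def schur_equivalent(A, B, C):
    """Evaluate both sides of the Schur-complement equivalence:
    [[A, B],[B^T, C]] >= 0  <=>  C >= 0 and A - B C^{-1} B^T >= 0."""
    M = np.block([[A, B], [B.T, C]])
    left = psd(M)
    right = psd(C) and psd(A - B @ np.linalg.solve(C, B.T))
    return left, right
```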

APPENDIX D PROOF OF THEOREM 4
For a given feasible state x(k), the set ∩_{i=1}^{s} E(P_i) is nonempty for v(k). Hence, the optimization problem (35) is feasible at time k. The set ∩_{i=1}^{s} E(P_i) is robustly invariant and constraint-admissible for (14) and (4) under the control law (36). Hence, z(k+1) = [x(k+1)^T v(k+1)^T]^T ∈ ∩_{i=1}^{s} E(P_i). As a consequence, recursive feasibility is guaranteed.
For the robust asymptotic stability proof, consider the following Lyapunov function candidate. The proof is complete by noting that J_2 ≤ J_1 for all γ_{i,c}, η_{i,c} such that γ_{i,c}^T Φ*_i γ_{i,c} ≥ η_{i,c}^2.

APPENDIX F PROOF OF THEOREM 6
Substituting (64) into (1), one gets or, equivalently, Define, ∀i = 1, …, s, ∀r = 1, …, 2^m, Rewrite (100) as Using the fact that it follows that the system matrix in (101) can be expressed as a convex combination of the vertices of S.
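The convex-combination representation of the saturated input used here is standard in the saturation literature. A scalar sketch (illustrative names; the paper's multivariable vertex construction over S is not reproduced): for unit input bounds, any auxiliary signal h with |h| ≤ 1 allows writing sat(u) as a convex combination of u and h.

```python
import numpy as np

def sat(u):
    # unit saturation
    return np.clip(u, -1.0, 1.0)

def convex_weight(u, h):
    """Return eta in [0, 1] such that sat(u) = eta*u + (1 - eta)*h.
    Valid whenever |h| <= 1 (scalar case of the standard saturation lemma)."""
    if abs(u) <= 1.0:
        return 1.0                      # no saturation: sat(u) = u itself
    return (sat(u) - h) / (u - h)       # u - h != 0 since |u| > 1 >= |h|
```

Applying this vertex-wise with h replaced by an auxiliary feedback H x is what yields the polytopic (convex-hull) model of the saturated closed loop.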