# Error Bounds for Parametric Polynomial Systems with Applications to Higher-Order Stability Analysis and Convergence Rates

###### Abstract

The paper addresses parametric inequality systems described by polynomial functions in finite dimensions, where state-dependent infinite parameter sets are given by finitely many polynomial inequalities and equalities. Such systems can be viewed, in particular, as solution sets to problems of generalized semi-infinite programming with polynomial data. Exploiting the imposed polynomial structure together with powerful tools of variational analysis and semialgebraic geometry, we establish a far-reaching extension of the Łojasiewicz gradient inequality to the general nonsmooth class of supremum marginal functions as well as higher-order (Hölder-type) local error bound results with explicitly calculated exponents. The obtained results are applied to higher-order quantitative stability analysis for various classes of optimization problems including generalized semi-infinite programming with polynomial data, optimization of real polynomials under polynomial matrix inequality constraints, and polynomial second-order cone programming. Other applications provide explicit convergence rate estimates for the cyclic projection algorithm to find common points of convex sets described by matrix polynomial inequalities and for the asymptotic convergence of trajectories of subgradient dynamical systems in semialgebraic settings.

Keywords: Polynomial Optimization, Generalized Semi-Infinite Programming, Error Bounds, Variational Analysis, Generalized Differentiation, Semialgebraic Functions and Sets, Łojasiewicz Inequality, Second-Order Cone Programming, Higher-Order Stability Analysis, Convergence Rate of Algorithms

AMS Subject Classification: 90C26, 90C31, 90C34, 49J52, 49J53, 26C05

Dedicated to Terry Rockafellar in honor of his 80th birthday

## 1 Introduction

This paper is largely devoted to polynomial semi-infinite optimization and related topics revolving around the derivation of explicit error bounds for infinite parametric inequality systems with (real) polynomial data, as well as their various applications to stability analysis in optimization and convergence of algorithms. The imposed polynomial structure allows us to make wide use of powerful tools of semialgebraic geometry, while parametric inequalities naturally call for applying constructions and results of variational analysis and generalized differentiation. Needless to say, the seminal contributions by Terry Rockafellar to variational analysis and optimization are difficult to overstate, and it is our honor to dedicate this paper to him.

The primary attention of this paper is paid to the parametric inequality systems

(1.1)    $$S := \big\{x \in \mathbb{R}^n \;\big|\; f_i(x, y) \le 0 \ \text{for all } y \in T(x), \ i = 1, \ldots, l\big\},$$

where each function $f_i \colon \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ for the given natural numbers $l, n, m$ is a polynomial, and where $T \colon \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ is a set-valued mapping that is also described by finitely many polynomials via inequality and equality constraints. Systems of type (1.1) naturally arise as feasible solution sets in problems of generalized semi-infinite programming, second-order cone programming, robust optimization, and matrix inequalities with polynomial data; see below for more details and applications.

One of the most important issues associated with the inequality systems (1.1) is establishing so-called error bounds. Given $\bar{x} \in S$, recall that a (local) error bound with a Hölder exponent $\gamma \in (0, 1]$ holds for (1.1) at $\bar{x}$ if there exist constants $\tau > 0$ and $\varepsilon > 0$ such that

(1.2)    $$d(x, S) \le \tau \Big(\sup_{y \in T(x)} \big[\max_{1 \le i \le l} f_i(x, y)\big]_+\Big)^{\gamma} \ \text{ for all } x \in \mathbb{B}(\bar{x}, \varepsilon),$$

where $d(x, S)$ signifies the Euclidean distance between $x$ and $S$, and where $[\alpha]_+ := \max\{\alpha, 0\}$. The supremum in (1.2) is obviously achieved, and it can be replaced by 'max', if $T(x)$ is closed and bounded.
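The Hölder error bound (1.2) can be illustrated numerically on a toy one-dimensional system (hypothetical data, not from the paper): for $f(x) = x^2$ the solution set is $S = \{0\}$, the bound holds with exponent $\gamma = 1/2$ and $\tau = 1$, and the linear rate $\gamma = 1$ fails near the origin.

```python
# Toy system: S = {x in R : f(x) <= 0} with f(x) = x**2, so S = {0}.
# Here dist(x, S) = |x| and the residual [f(x)]_+ equals x**2, hence
# dist(x, S) <= tau * [f(x)]_+**gamma holds with gamma = 1/2, tau = 1,
# while the linear rate gamma = 1 fails as x -> 0.

def residual(x):
    """Constraint violation [f(x)]_+ for f(x) = x**2."""
    return max(x * x, 0.0)

def dist_to_S(x):
    """Euclidean distance from x to S = {0}."""
    return abs(x)

# Holder error bound with gamma = 1/2 holds (with equality here):
for x in [0.5, 0.1, 0.01, 1e-4]:
    assert dist_to_S(x) <= residual(x) ** 0.5 + 1e-12

# The linear rate gamma = 1 fails near 0, since |x| > x**2 for small x:
assert dist_to_S(1e-4) > residual(1e-4)
```

The same pattern explains why fractional exponents are unavoidable for polynomial systems: flatter constraint functions force smaller exponents $\gamma$ in (1.2).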

The study of error bounds has attracted the attention of many researchers over the years and has found numerous applications to, in particular, sensitivity analysis for various problems of mathematical programming, termination criteria for numerical algorithms, etc. We refer the reader to [32] for an excellent survey in these directions and to the more recent papers [11, 18, 19, 21, 23, 31, 36] with the bibliographies therein. It is worth noting that the major attention in the aforementioned and many other publications on error bounds has been drawn to the case of linear rate ($\gamma = 1$), where this issue is related to metric regularity and subregularity notions of basic variational analysis. Our main interest in this paper concerns fractional/root error bounds in (1.2). For the case of finite and fixed sets $T(x) \equiv T$ in (1.2), some results in this direction have been obtained in, e.g., [8, 10, 20, 21, 25, 26, 30] with various applications therein.

It is proved in [25], in this finite case, by using the celebrated Łojasiewicz gradient inequality [24], that (1.2) holds with some unknown exponent $\gamma$ for polynomial systems (1.1). Employing advanced techniques of variational analysis, we have recently derived in this case [22] several error bounds with exponents explicitly determined by the dimension of the underlying space and the number/degree of the involved polynomials. The techniques and results developed in [22] allowed us to resolve several open questions raised in the literature, which include establishing explicit Hölder error bounds for nonconvex quadratic systems and higher-order semismoothness of the maximum eigenvalue for symmetric tensors.

The primary goal of this paper is to obtain explicit error bounds of type (1.2) for polynomial inequality systems with infinite and variable sets $T(x)$. Besides the undoubted importance of these issues in their own right, we have been motivated by applications of infinite polynomial systems and of error bounds for them to higher-order stability and convergence rates of algorithms in optimization-related areas, as well as in the asymptotic analysis of dynamical systems, where estimates of type (1.2) with infinite sets are crucial.

As the reader can see below, deriving error bounds for the case of infinite and variable sets $T(x)$ in (1.1) is significantly more involved in comparison with our developments for finite systems in [22]. First we present the following three-dimensional example showing that the error bound (1.2) may fail for any exponent $\gamma \in (0, 1]$ for infinite polynomial inequality systems even in the case of constant sets $T(x) \equiv T$.

###### Example 1.1

(failure of Hölder error bounds for infinite polynomial systems). Consider the polynomial system of type (1.1) containing only one inequality in the form

where the infinite set is constructed as follows. Take the -smooth function of one variable

and define the set by the conditions

(1.3) |

where stands for the derivative of . Since is -smooth, the set in (1.3) is nonempty and compact as the image of a compact interval under a continuous mapping. We claim that

(1.4) |

Indeed, for any and there is with and . Thus

Applying the second-order Taylor expansion to the function tells us that

Note that . Hence we get the relationships , where the last inequality holds due to the choice of

which imply the inequality whenever . On the other hand, it follows from the above constructions of and that

which therefore justifies the claim in (1.4). Having this in mind, consider the set

(1.5) |

and observe that . Let us now check that the local Hölder error bound (1.2) fails for (1.5) at for any exponent . To see it, take as and get for all large that . This allows us to conclude that

whenever is chosen, and thus the error bound (1.2) fails for the system in (1.5).

In what follows we prove that such a situation does not emerge if the sets $T(x)$ in (1.1) are described by

(1.6)    $$T(x) := \big\{y \in \mathbb{R}^m \;\big|\; g_i(x, y) \le 0 \ \text{for } i = 1, \ldots, p, \ \ h_j(x, y) = 0 \ \text{for } j = 1, \ldots, q\big\},$$

where $g_i$ and $h_j$ are polynomials. It is shown in Section 4 that the exponent $\gamma$ in (1.2) is explicitly calculated in terms of the degrees of the polynomials and the dimensions of the spaces in question. The key to our analysis is a new nonsmooth extension of the Łojasiewicz inequality to the class of supremum marginal functions

$$\mu(x) := \sup\big\{f(x, y) \;\big|\; y \in T(x)\big\}$$

described by the polynomials $f$, $g_i$, and $h_j$ in (1.6). This is done in Section 3 by using powerful tools of variational analysis and semialgebraic geometry reviewed in Section 2.

Sections 5 and 6 are devoted to applications. In Section 5 we develop quantitative higher-order stability analysis for remarkable classes of polynomial optimization problems: generalized semi-infinite programming, optimization of matrix inequalities, and second-order cone programming. Finally, Section 6 contains explicit estimates of convergence rates for the cyclic projection algorithm to solve feasibility problems for convex sets described by matrix polynomial inequalities and also for asymptotic analysis of subgradient dynamical systems governed by maximum functions with polynomial data.
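The cyclic projection scheme of Section 6 can be sketched on hypothetical data: two convex sets in the plane, a disk and a half-plane, rather than sets defined by matrix polynomial inequalities. This is only a minimal illustration of the algorithmic pattern whose convergence rates the paper estimates.

```python
import math

# Minimal sketch of the cyclic (alternating) projection method for two
# convex sets in R^2 -- hypothetical data, not the matrix-polynomial-
# inequality setting of Section 6.  C1 is the closed unit disk and
# C2 = {(x, y) : x >= 0.5} is a half-plane; both projections are explicit.

def proj_disk(p):
    """Euclidean projection onto the closed unit disk."""
    x, y = p
    n = math.hypot(x, y)
    return (x, y) if n <= 1.0 else (x / n, y / n)

def proj_halfplane(p):
    """Euclidean projection onto {(x, y) : x >= 0.5}."""
    x, y = p
    return (max(x, 0.5), y)

p = (3.0, 2.0)
for _ in range(200):          # cycle through the projections
    p = proj_halfplane(proj_disk(p))

# The iterates approach a point of the intersection C1 and C2.
assert math.hypot(*p) <= 1.0 + 1e-6 and p[0] >= 0.5 - 1e-6
```

For sets with nonempty intersection the iterates converge to a common point; the paper's contribution is an explicit rate estimate in the semialgebraic setting, which this toy demo does not attempt to reproduce.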

## 2 Tools of Variational Analysis and Semialgebraic Geometry

This section briefly discusses some tools of generalized differentiation in variational analysis and of semialgebraic geometry widely used in the paper. Throughout this work we deal with finite-dimensional Euclidean spaces endowed with the inner product $\langle \cdot, \cdot \rangle$ and the Euclidean norm $\|\cdot\|$. The symbol $\mathbb{B}(x, r)$ (resp. $\bar{\mathbb{B}}(x, r)$) stands for the open (resp. closed) ball with center $x$ and radius $r > 0$, while $\mathbb{B}$ (resp. $\bar{\mathbb{B}}$) stands for the open (resp. closed) unit ball centered at the origin. Given a set $\Omega$, its interior (resp. boundary, convex hull, and conic convex hull) is denoted by $\mathrm{int}\,\Omega$ (resp. $\mathrm{bd}\,\Omega$, $\mathrm{co}\,\Omega$, and $\mathrm{cone}\,\Omega$).

Starting with variational analysis, recall first two subdifferential notions needed in what follows. The reader can find more information and references in the books [27, 34].

Given a function $\varphi$ continuous around $\bar{x}$, the proximal subdifferential of $\varphi$ at $\bar{x}$ is

(2.1)    $$\partial^P \varphi(\bar{x}) := \big\{v \;\big|\; \exists\, \sigma > 0,\ \delta > 0 \ \text{with } \varphi(x) \ge \varphi(\bar{x}) + \langle v, x - \bar{x}\rangle - \sigma \|x - \bar{x}\|^2 \ \text{for all } x \in \mathbb{B}(\bar{x}, \delta)\big\}.$$

The limiting subdifferential of $\varphi$ at $\bar{x}$ (known also as the general, basic, or Mordukhovich subdifferential) is

(2.2)    $$\partial \varphi(\bar{x}) := \big\{v \;\big|\; \exists\, x_k \to \bar{x},\ v_k \to v \ \text{with } \varphi(x_k) \to \varphi(\bar{x}) \ \text{and } v_k \in \partial^P \varphi(x_k) \ \text{for all } k\big\}.$$

We clearly have $\partial^P \varphi(\bar{x}) \subset \partial \varphi(\bar{x})$, where the first set may often be empty (although it is nonempty on a dense set of points), but the second one is nonempty for any locally Lipschitzian function. Furthermore, the set $\partial^P \varphi(\bar{x})$ is always convex but may not be closed, while $\partial \varphi(\bar{x})$ is closed but may often be nonconvex. Both subdifferentials (2.1) and (2.2) reduce to the gradient $\{\nabla \varphi(\bar{x})\}$ for smooth functions and to the subdifferential of convex analysis for convex ones. A significant advantage of the limiting subdifferential (2.2) is its full calculus in the general nonconvex setting, which is based on variational and extremal principles; see [27, 34] for more details.
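The contrast between the two constructions is already visible in one dimension. The following toy check (not part of the paper's development) verifies numerically that every $v \in [-1, 1]$ is a proximal subgradient of $|x|$ at $0$, while $v = 0$ fails the proximal inequality for $-|x|$ at $0$: the proximal subdifferential of $-|x|$ at the origin is empty, and its limiting subdifferential is the nonconvex set $\{-1, 1\}$.

```python
# Numeric illustration of definitions (2.1)-(2.2) on the real line.
#
# phi(x) = |x|:  every v in [-1, 1] satisfies the proximal inequality
#   phi(x) >= phi(0) + v*x - sigma*x**2  even with sigma = 0.
#
# phi(x) = -|x|:  v = 0 is not a proximal subgradient at 0 for any
#   sigma > 0, since -|x| >= -sigma*x**2 fails for small x; the limiting
#   subdifferential there is the nonconvex set {-1, 1} of gradient limits.

xs = [k / 1000.0 for k in range(-1000, 1001) if k != 0]

# |x| >= v*x on a neighborhood of 0 for all v in [-1, 1]:
for v in [-1.0, -0.3, 0.0, 0.7, 1.0]:
    assert all(abs(x) >= v * x for x in xs)

# v = 0 fails the proximal inequality for -|x| at 0 (here sigma = 10):
sigma = 10.0
assert any(-abs(x) < -sigma * x * x for x in xs)
```

This is exactly the pattern mentioned above: $\partial^P \varphi$ convex but possibly empty, $\partial \varphi$ nonempty and closed but possibly nonconvex.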

The major variational notion used below is the limiting subdifferential slope of $\varphi$ at $x$ defined via (2.2) by

(2.3)    $$m_\varphi(x) := \inf\big\{\|v\| \;\big|\; v \in \partial \varphi(x)\big\},$$

where $\inf \emptyset := \infty$. It reduces to the classical gradient slope $\|\nabla \varphi(x)\|$ for smooth functions.

Let us next formulate some continuity notions for set-valued mappings $T$; see, e.g., [34]. It is said that $T$ is outer semicontinuous (o.s.c.) at $\bar{x}$ if for any sequence $x_k \to \bar{x}$ and any $y_k \in T(x_k)$ converging to $y$ we have $y \in T(\bar{x})$. Further, $T$ is inner semicontinuous (i.s.c.) at $\bar{x}$ if for any sequence $x_k \to \bar{x}$ and any $y \in T(\bar{x})$ there are $y_k \in T(x_k)$ as $k \in \mathbb{N}$ satisfying $y_k \to y$. We say that $T$ is o.s.c. or i.s.c. around $\bar{x}$ if it has this property at every $x$ in a neighborhood of $\bar{x}$.

Finally, we present some notions and facts from (real) semialgebraic geometry following [6]. It is said that:

A set is semialgebraic if it is a finite union of subsets of the form

$$\big\{x \;\big|\; p_i(x) = 0 \ \text{for } i = 1, \ldots, k \ \text{ and } \ p_i(x) < 0 \ \text{for } i = k + 1, \ldots, s\big\},$$

where all the functions $p_i$, $i = 1, \ldots, s$, are polynomials of some degrees.

A mapping between semialgebraic sets is semialgebraic if its graph is a semialgebraic subset of the corresponding product space. We say that a mapping is locally semialgebraic around $\bar{x}$ if there exists a neighborhood $U$ of the point $\bar{x}$ such that the graph of its restriction to $U$ is semialgebraic.

The class of semialgebraic sets is closed under taking finite intersections, finite unions, and complements; furthermore, a Cartesian product of semialgebraic sets is semialgebraic. A major fact concerning the class of semialgebraic sets is given by the following seminal result of semialgebraic geometry.

Tarski-Seidenberg Theorem. Images of semialgebraic sets under semialgebraic maps are semialgebraic.

We also need another fundamental result taken from [1, Theorem 4.2], which provides an exponent estimate in the classical Łojasiewicz gradient inequality for polynomials. For brevity we label it as:

Łojasiewicz Gradient Inequality. Let $f$ be a polynomial on $\mathbb{R}^n$ of degree $d \ge 1$. Suppose that $f(\bar{x}) = 0$ and $\nabla f(\bar{x}) = 0$. Then there exist constants $c, \varepsilon > 0$ such that for all $x$ with $\|x - \bar{x}\| \le \varepsilon$ we have

(2.4)    $$\|\nabla f(x)\| \ge c\, \big|f(x)\big|^{1 - \frac{1}{\mathscr{R}(n, d)}} \quad \text{with } \ \mathscr{R}(n, d) := d\,(3d - 3)^{n - 1}.$$
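The exponent estimate of [1, Theorem 4.2] is $\mathscr{R}(n, d) = d(3d - 3)^{n - 1}$; a quick computation shows how fast it grows with the dimension, and a one-variable sanity check (assuming $f(x) = x^d$ at $\bar{x} = 0$, a hypothetical example) confirms the inequality in the simplest case.

```python
def loja_exponent(n, d):
    """Exponent estimate R(n, d) = d * (3d - 3)**(n - 1) used in (2.4),
    following the statement of [1, Theorem 4.2]."""
    return d * (3 * d - 3) ** (n - 1)

assert loja_exponent(1, 4) == 4    # one variable: R(1, d) = d
assert loja_exponent(2, 2) == 6
assert loja_exponent(3, 2) == 18   # grows geometrically in n

# Sanity check for f(x) = x**d at xbar = 0 with n = 1:
# |f'(x)| = d*|x|**(d-1)  and  |f(x)|**(1 - 1/R(1,d)) = |x|**(d-1),
# so (2.4) holds with c = d on the whole line.
d = 4
for x in [0.9, 0.5, 0.1, 1e-3]:
    grad = d * x ** (d - 1)
    rhs = (x ** d) ** (1 - 1 / loja_exponent(1, d))
    assert grad >= d * rhs - 1e-12
```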

## 3 Łojasiewicz Inequality for Supremum Marginal Functions

The main aim of this section is extending the Łojasiewicz gradient inequality (2.4) to the following class of (polynomial) supremum marginal functions given by

(3.1)    $$\mu(x) := \sup\big\{f(x, y) \;\big|\; y \in T(x)\big\},$$

where $f$ is a polynomial of degree at most $d$, and where the set-valued mapping $T$ is defined in (1.6) by polynomials $g_i$ for $i = 1, \ldots, p$ and $h_j$ for $j = 1, \ldots, q$ of degree at most $d$. Functions of type (3.1) are intrinsically nonsmooth, and thus deriving a nonsmooth counterpart of (2.4) for them requires the usage of an appropriate subdifferential of $\mu$. The reader will see below that a nonsmooth version of the Łojasiewicz inequality for $\mu$ in terms of the limiting subdifferential slope from (2.3), which replaces the gradient norm in (2.4), plays a key role in establishing our Hölder-type local error bounds and their subsequent applications in this paper.

To proceed in this direction, we have to calculate the limiting subdifferential of $\mu$ from (3.1) in terms of its initial data, which is not an easy task taking into account that the sets $T(x)$ are infinite and variable. When $T(x) \equiv T$ is constant, (3.1) reduces to the supremum function over $T$, for which the most recent subdifferential results can be found in [28]; see also the references therein. When the supremum in (3.1) is replaced by the infimum, we arrive at the class of (infimum) marginal functions well investigated in variational analysis [27]. Needless to say, the supremum and infimum operations are essentially different in unilateral analysis, and (lower) subdifferential properties under the supremum are significantly more involved and challenging.
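The intrinsic nonsmoothness of (3.1) is visible already for the simplest data. The following toy discretization (hypothetical example, with a constant set $T(x) \equiv [-1, 1]$ and the polynomial $f(x, y) = xy$) produces $\mu(x) = |x|$, which has a kink at the origin even though $f$ itself is smooth.

```python
# Toy discretization of the supremum marginal function (3.1) with the
# constant set T(x) = [-1, 1] and f(x, y) = x*y (hypothetical data):
# mu(x) = max_{y in [-1, 1]} x*y = |x|, which is nonsmooth at x = 0
# although f is a polynomial in (x, y).

ys = [k / 1000.0 for k in range(-1000, 1001)]  # grid over T = [-1, 1]

def mu(x):
    """Discretized supremum marginal function mu(x) = max_y f(x, y)."""
    return max(x * y for y in ys)

for x in [-0.7, -0.2, 0.0, 0.3, 1.0]:
    assert abs(mu(x) - abs(x)) < 1e-12   # mu coincides with |x|
```

At such kink points the gradient in (2.4) is unavailable, which is why the extension below is stated via the limiting subdifferential slope (2.3).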

We first show that the functions of type (3.1) enjoy the following important properties.

###### Proposition 3.1

(well-posedness and upper semicontinuity of supremum marginal functions). Given $\bar{x}$, suppose that the sets $T(x)$ are nonempty and uniformly bounded around $\bar{x}$. Then the function $\mu$ from (3.1) is well defined, locally semialgebraic, and upper semicontinuous (u.s.c.) around $\bar{x}$.

Proof. It follows from the Tarski-Seidenberg Theorem and the assumptions made that $\mu$ is well defined and semialgebraic around $\bar{x}$. Let us verify that $\mu$ is u.s.c. around $\bar{x}$. Select a number $\delta > 0$ and a compact set $K$ so that $T(x) \subset K$ for all $x \in \mathbb{B}(\bar{x}, \delta)$. We claim that $\mu$ is locally u.s.c. on $\mathbb{B}(\bar{x}, \delta)$. Indeed, assume on the contrary that $\mu$ is not u.s.c. at some $x_0 \in \mathbb{B}(\bar{x}, \delta)$, and then get a sequence $x_k \to x_0$ and a number $\varepsilon > 0$ such that $\mu(x_k) \ge \mu(x_0) + \varepsilon$ for all $k$. Using the continuity of $f$, the closedness of $T(x_k)$ for all $k$, and the inclusion $T(x_k) \subset K$ allows us to find $y_k \in T(x_k)$ with $f(x_k, y_k) = \mu(x_k)$. By passing to a subsequence we may suppose without loss of generality that $y_k \to y_0$. It is easy to check that $y_0 \in T(x_0)$ due to the continuity of the polynomials $g_i$ and $h_j$ in (1.6). Since $f$ is continuous, we have $f(x_k, y_k) \to f(x_0, y_0) \le \mu(x_0)$, which contradicts the assumption that $\mu(x_k) \ge \mu(x_0) + \varepsilon$. Thus $\mu$ is locally u.s.c. around $\bar{x}$.∎

Next we study the lower semicontinuity (l.s.c.) and Hölder continuity of the function (3.1) under rather mild assumptions. Define the argmaximum set important for our further analysis by

(3.2)    $$Y(x) := \big\{y \in T(x) \;\big|\; f(x, y) = \mu(x)\big\}$$

and recall that $Y(x) \ne \emptyset$ whenever $T(x)$ is nonempty and compact.

###### Proposition 3.2

(continuity of supremum marginal functions). Given $\bar{x}$, suppose that the sets $T(x)$ are nonempty and uniformly bounded around $\bar{x}$. The following assertions hold:

(i) If $d(\bar{y}, T(x)) \to 0$ as $x \to \bar{x}$ for some $\bar{y} \in Y(\bar{x})$, then $\mu$ is l.s.c. and hence continuous at $\bar{x}$. Thus the i.s.c. property of $T$ at $\bar{x}$ ensures that $\mu$ is continuous at this point.

(ii) If there are numbers $c > 0$ and $\alpha \in (0, 1]$ such that

(3.3)    $$T(x) \subset T(x') + c\,\|x - x'\|^{\alpha}\,\bar{\mathbb{B}} \ \text{ for all } x, x' \ \text{near } \bar{x},$$

then $\mu$ is Hölder continuous around $\bar{x}$ with order $\alpha$.

Proof. Since the sets are nonempty and uniformly bounded around , there exist a compact set and a number such that for all . Let us first verify (i) while arguing by contradiction. Suppose that as and that is not l.s.c. at . Then there exist a constant and a sequence such that . Since as , we find and with . Using the boundedness of and passing to a subsequence if necessary allow us to find with and thus . It is easy to see that and that , which yield in turn that . Hence we have as while contradicting the assumption on . This verifies that is l.s.c. at and hence its continuity there by Proposition 3.1. The second statement in (i) follows from the first one and the definitions.

To justify now assertion (ii), suppose that (3.3) holds and pick any . Since the sets and are compact for all , we deduce from (3.3) that there are and satisfying . Taking into account the Lipschitz continuity of the polynomial on with some constant , it follows that

Similarly we conclude that , which verifies the Hölder continuity of on with order and thus completes the proof of the proposition.∎

###### Remark 3.3

(effective conditions for continuity of marginal functions). Proposition 3.2(i) tells us that in (3.1) is continuous at provided that is i.s.c. at this point, which surely holds if is locally Lipschitzian around in the standard/Hausdorff sense, taking into account that the sets from (1.6) are closed, nonempty, and uniformly bounded. Necessary and sufficient conditions for the local Lipschitz property of around are given in [27, Corollary 4.39] in terms of the gradients of the constraint functions at for any . In this way we get effective conditions ensuring the continuity of at (in fact around) via the initial data of (1.6). On the other hand, [27, Theorem 3.38(iv)] justifies the Lipschitz continuity of around under the i.s.c. property of the argmaximum mapping (3.2) around with some fixed and the Lipschitz-like (Aubin, pseudo-Lipschitz) property of around this pair , which is effectively characterized in [27, Corollary 4.39] via the initial data.

To proceed further, we need the following qualification condition imposed at a reference point relative to some (in fact optimal) subset of the constraint set in (3.1).

###### Definition 3.4

(marginal constraint qualification). Given $\bar{x}$ and $T$ from (1.6), we say that the marginal Mangasarian-Fromovitz constraint qualification (MMFCQ) holds at $\bar{x}$ relative to some subset $Y \subset T(\bar{x})$ if there is a vector $w$ such that

(3.4)    $$\Big\langle \sum_{i=1}^{p} \lambda_i \nabla_x g_i(\bar{x}, y) + \sum_{j=1}^{q} \mu_j \nabla_x h_j(\bar{x}, y), \; w \Big\rangle < 0$$

whenever $y \in Y$ and Lagrange multipliers $(\lambda, \mu) = (\lambda_1, \ldots, \lambda_p, \mu_1, \ldots, \mu_q) \ne 0$ satisfy the conditions

(3.5)    $$\sum_{i=1}^{p} \lambda_i \nabla_y g_i(\bar{x}, y) + \sum_{j=1}^{q} \mu_j \nabla_y h_j(\bar{x}, y) = 0, \quad \lambda_i \ge 0, \quad \lambda_i\, g_i(\bar{x}, y) = 0 \ \text{ for } i = 1, \ldots, p.$$

###### Remark 3.5

(relationships of MMFCQ with EMFCQ and MFCQ). The defined MMFCQ for the marginal supremum functions (3.1) is motivated by the extended Mangasarian-Fromovitz constraint qualification (EMFCQ) introduced in [17] for generalized semi-infinite programs (GSIPs) discussed below in Section 5. Reformulating EMFCQ for (3.1), we see that it involves the cost function and also requires the validity of (3.4) for a larger set of Lagrange multipliers in comparison with MMFCQ; so MMFCQ is weaker than EMFCQ in general. Observe also that when the functions $g_i$ and $h_j$ do not depend on $x$, the formulated MMFCQ condition means that there is no pair $(\lambda, \mu) \ne 0$ satisfying (3.5), which is equivalent to the conventional Mangasarian-Fromovitz constraint qualification (MFCQ) in the sense that

(3.6)    $$\nabla h_j(y), \ j = 1, \ldots, q, \ \text{are linearly independent, and there is } v \ \text{with } \langle \nabla g_i(y), v \rangle < 0 \ \text{for all active } i \ \text{and } \langle \nabla h_j(y), v \rangle = 0 \ \text{for all } j$$

for any $y \in Y$. As we see in the subsequent sections, the imposed MMFCQ holds automatically for important classes of polynomial optimization and related problems arising in applications.

It is worth emphasizing that the majority of our applications below requires the validity of MMFCQ for the case when the set $Y$ in Definition 3.4 is chosen as the argmaximum set (3.2) at the reference point. Consider further the standard Lagrangian function for (3.1), with the negative sign of $f$ taken due to the maximization:

(3.7)    $$L(x, y, \lambda, \mu) := -f(x, y) + \sum_{i=1}^{p} \lambda_i g_i(x, y) + \sum_{j=1}^{q} \mu_j h_j(x, y).$$

Observe that any $y \in Y(x)$ is a minimizer of the following nonlinear program:

(3.8)    $$\min_{y'} \ -f(x, y') \quad \text{subject to} \quad g_i(x, y') \le 0, \ i = 1, \ldots, p, \quad h_j(x, y') = 0, \ j = 1, \ldots, q.$$

Applying the classical Lagrange multiplier rule in the Fritz John form to (3.8) tells us that the set of multipliers $(\lambda_0, \lambda, \mu) \ne 0$ satisfying the conditions

(3.9)    $$\lambda_0 \nabla_y\big({-f(x, y)}\big) + \sum_{i=1}^{p} \lambda_i \nabla_y g_i(x, y) + \sum_{j=1}^{q} \mu_j \nabla_y h_j(x, y) = 0, \quad \lambda_0, \lambda_i \ge 0, \quad \lambda_i\, g_i(x, y) = 0$$

is always nonempty. Given $(x, y)$, consider also the set $\Lambda(x, y)$ of multipliers $(\lambda, \mu)$ satisfying

(3.10)    $$\nabla_y L(x, y, \lambda, \mu) = 0, \quad \lambda_i \ge 0, \quad \lambda_i\, g_i(x, y) = 0 \ \text{ for } i = 1, \ldots, p.$$

Using the Lagrangian description, we now show that the MMFCQ property relative to the argmaximum sets $Y(x)$ from (3.2) is robust in the general setting of Proposition 3.2(i).

###### Proposition 3.6

(robustness of MMFCQ). Given $\bar{x}$, assume that the sets $T(x)$ are nonempty and uniformly bounded around $\bar{x}$. If MMFCQ holds at $\bar{x}$ relative to $Y(\bar{x})$ and if $\mu$ is l.s.c. at $\bar{x}$, then there exists $\delta > 0$ such that MMFCQ is satisfied at any point $x \in \mathbb{B}(\bar{x}, \delta)$ relative to $Y(x)$.

Proof. Proposition 3.1 tells us that is continuous at . Suppose on the contrary that MMFCQ is not robust, i.e., there are as such that MMFCQ fails at relative to . Take any and find and satisfying , for , and . Normalization gives us for all . Using now the uniform boundedness of around , we select subsequences and with . It follows from the continuity of at and from that , i.e., . Furthermore, passing to the limit as yields for , , and . Since was chosen arbitrarily, this contradicts the assumed MMFCQ at relative to and thus completes the proof.∎

The next result, important for its own sake, plays a crucial role in deriving the extended Łojasiewicz inequality for supremum marginal functions. It explicitly evaluates the limiting subdifferential of such functions via the initial data and the corresponding Lagrange multipliers, and it is significantly different from the preceding results and techniques of [28] even for the constant mapping in (3.1) considered therein.

###### Theorem 3.7

(limiting subgradients of supremum marginal functions). Given $\bar{x}$, suppose that the sets $T(x)$ in (1.6) are nonempty and uniformly bounded around $\bar{x}$, that MMFCQ holds at $\bar{x}$ relative to $Y(\bar{x})$, and that $\mu$ is l.s.c. around $\bar{x}$. The following assertions hold:

(i) There exists such that for any and we can find and , , satisfying the conditions

(3.11) |

where and are defined in (3.7) and (3.10), respectively.

(ii) Given , there are positive numbers such that for any and we can find and , , satisfying (3.11).

Proof. It follows from Proposition 3.6 that there is such that MMFCQ holds at any point relative to . Moreover, Proposition 3.1 and the l.s.c. assumption on ensure that this function is continuous on . Let us first evaluate the proximal subdifferential (2.1) and then the limiting subdifferential (2.2) of the supremum marginal function (3.1) at each fixed .

Claim 1: For any proximal subgradient we have

(3.12) |

To verify (3.12), deduce from (2.1) that there are constants and satisfying

(3.13) |

The assumptions imposed on the mapping ensure that for all , which tells us by (3.13) that the pair is a local minimizer of the following GSIP:

(3.14) |

Applying to (3.14) the necessary optimality condition from [17, Theorem 1.1], we find , , , , and for so that

(3.15) |

To justify (3.12), it remains to show that in (3.15). Indeed, assuming the contrary tells us that

(3.16) |

with for . Consider the set , which is nonempty by . It follows from (3.16) that for . Hence and for all . Combining this with (3.16) yields

which contradicts the assumed MMFCQ at relative to and thus verifies (3.12) for .

Claim 2: Any limiting subgradient satisfies (3.12) and thus (3.11).

Fix . Since is continuous on , we find by definition (2.2) sequences and with . Using (3.12) for and applying the Carathéodory theorem to the conic convex hull in (3.12) give us , , and for such that

(3.17) |

Let us show that the sequence of is bounded. Arguing by contradiction, suppose that and define for . Since the convergent sequence of is bounded as , we deduce from (3.17) that

(3.18) |

It follows from the boundedness of , , and for that some subsequences of them converge to , , and , respectively. Letting in (3.18) yields