## Publications


Bettiol P, Quincampoix M, Vinter RB, 2019, Existence and Characterization of the Values of Two Player Differential Games with State Constraints, *APPLIED MATHEMATICS AND OPTIMIZATION*, Vol: 80, Pages: 765-799, ISSN: 0095-4616

- Citations: 1

Vinter RB, 2019, Free end-time optimal control problems: Conditions for the absence of an infimum gap, *Vietnam Journal of Mathematics*, Vol: 47, Pages: 757-768, ISSN: 0866-7179

This paper concerns free end-time optimal control problems, in which the dynamic constraint takes the form of a controlled differential inclusion. Such problems may fail to have a minimizer. Relaxation is a procedure for enlarging the domain of an optimization problem to guarantee existence of a minimizer. In the context of problems studied here, the standard relaxation procedure involves replacing the velocity sets in the original problem by their convex hulls. It is desirable that the original and relaxed versions of the problem have the same infimum cost, for then we can obtain a sub-optimal state trajectory by solving the relaxed problem and approximating its solution. It is important, therefore, to investigate when the infimum costs of the two problems are the same; for otherwise the above strategy for generating sub-optimal state trajectories breaks down. We explore the relation between the existence of an infimum gap and abnormality of necessary conditions for the free-time problem. Such relations can translate into verifiable hypotheses excluding the existence of an infimum gap. Links between existence of an infimum gap and normality have previously been explored for fixed end-time problems. This paper establishes, for the first time, such links for free end-time problems.
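The relaxation procedure described in this abstract can be summarized in a line of notation (illustrative only; the symbols below are not taken from the paper):

```latex
% Dynamic constraint of the original free end-time problem:
\dot{x}(t) \in F(t, x(t)) \quad \text{a.e. } t \in [0, T], \qquad T \text{ free}.
% Relaxation: replace each velocity set by its convex hull,
\dot{x}(t) \in \operatorname{co} F(t, x(t)) \quad \text{a.e. } t \in [0, T].
% An `infimum gap' is the situation
\inf\,(\text{original problem}) \;>\; \inf\,(\text{relaxed problem}),
% which the verifiable normality-type hypotheses are designed to exclude.
```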

Vinter RB, 2019, OPTIMAL CONTROL PROBLEMS WITH TIME DELAYS: CONSTANCY OF THE HAMILTONIAN, *SIAM JOURNAL ON CONTROL AND OPTIMIZATION*, Vol: 57, Pages: 2574-2602, ISSN: 0363-0129

Vinter RB, 2018, State constrained optimal control problems with time delays, *JOURNAL OF MATHEMATICAL ANALYSIS AND APPLICATIONS*, Vol: 457, Pages: 1696-1712, ISSN: 0022-247X

Bettiol P, Vinter RB, 2017, THE HAMILTON JACOBI EQUATION FOR OPTIMAL CONTROL PROBLEMS WITH DISCONTINUOUS TIME DEPENDENCE, *SIAM JOURNAL ON CONTROL AND OPTIMIZATION*, Vol: 55, Pages: 1199-1225, ISSN: 0363-0129

Boccia A, Vinter RB, 2017, THE MAXIMUM PRINCIPLE FOR OPTIMAL CONTROL PROBLEMS WITH TIME DELAYS, *SIAM JOURNAL ON CONTROL AND OPTIMIZATION*, Vol: 55, Pages: 2905-2935, ISSN: 0363-0129

Vinter RB, Boccia A, Pinho M, 2016, Optimal Control Problems with Mixed and Pure State Constraints, *SIAM Journal on Control and Optimization*, Vol: 54, Pages: 3061-3083, ISSN: 0363-0129

This paper provides necessary conditions of optimality for optimal control problems, in which the pathwise constraints comprise both ‘pure’ constraints on the state variable and also ‘mixed’ constraints on control and state variables. The proofs are along the lines of earlier analysis for mixed constraint problems, according to which Clarke’s theory of ‘stratified’ necessary conditions is applied to a modified optimal control problem resulting from absorbing the mixed constraint into the dynamics; the difference here is that necessary conditions which now take account of the presence of pure state constraints are applied to the modified problem. Necessary conditions are given for a rather general formulation of the problem containing both forms of the constraints, and then these are specialized to apply to problems having special structure. While combined pure state and mixed control/state problems have been previously treated in the literature, the necessary conditions in this paper are proved under less restrictive hypotheses and for novel formulations of the constraints.

Bettiol P, Vinter RB, 2016, L∞ estimates on trajectories confined to a closed subset, for control systems with bounded time variation, *Mathematical Programming*, Vol: 168, Pages: 201-228, ISSN: 1436-4646

The term ‘distance estimate’ for state constrained control systems refers to an estimate on the distance of an arbitrary state trajectory from the subset of state trajectories that satisfy a given state constraint. Distance estimates have found widespread application in state constrained optimal control. They have been used to establish regularity properties of the value function, to establish the non-degeneracy of first order conditions of optimality, and to validate the characterization of the value function as a unique solution of the HJB equation. The most extensively applied estimates of this nature are so-called linear L∞ distance estimates. The earliest estimates of this nature were derived under hypotheses that required the multifunctions, or controlled differential equations, describing the dynamic constraint, to be locally Lipschitz continuous w.r.t. the time variable. Recently, it has been shown that the Lipschitz continuity hypothesis can be weakened to a one-sided absolute continuity hypothesis. This paper provides new, less restrictive, hypotheses on the time-dependence of the dynamic constraint, under which linear L∞ estimates are valid. Here, one-sided absolute continuity is replaced by the requirement of one-sided bounded variation. This refinement of hypotheses is significant because it makes possible the application of analytical techniques based on distance estimates to important, new classes of discontinuous systems including some hybrid control systems. A number of examples are investigated showing that, for control systems that do not have bounded variation w.r.t. time, the desired estimates are not in general valid, and thereby illustrating the important role of the bounded variation hypothesis in distance estimate analysis.

Festa A, Vinter RB, 2016, Decomposition of Differential Games with Multiple Targets, *Journal of Optimization Theory and Applications*, Vol: 169, Pages: 848-875, ISSN: 1573-2878

This paper provides a decomposition technique for the purpose of simplifying the solution of certain zero-sum differential games. The games considered terminate when the state reaches a target, which can be expressed as the union of a collection of target subsets considered as ‘multiple targets’; the decomposition consists in replacing the original target by each of the target subsets. The value of the original game is then obtained as the lower envelope of the values of the collection of games, resulting from the decomposition, which can be much easier to solve than the original game. Criteria are given for the validity of the decomposition. The paper includes examples, illustrating the application of the technique to pursuit/evasion games and to flow control.
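The decomposition can be stated compactly (the notation below is illustrative, not the paper's):

```latex
% Target expressed as a union of target subsets:
\mathcal{T} \;=\; \bigcup_{i=1}^{N} \mathcal{T}_i .
% Let V_i denote the value of the game obtained by replacing the target
% \mathcal{T} with the single subset \mathcal{T}_i. When the decomposition
% criteria hold, the value of the original game is the lower envelope
V(t, x) \;=\; \min_{i = 1, \dots, N} V_i(t, x),
% and each subgame can be much easier to solve than the original game.
```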

Bettiol P, Khalil N, Vinter RB, 2016, Normality of Generalized Euler-Lagrange Conditions for State Constrained Optimal Control Problems, *Journal of Convex Analysis*, Vol: 23, Pages: 291-311, ISSN: 0944-6532

We consider state constrained optimal control problems in which the cost to minimize comprises both integral and end-point terms, establishing normality of the generalized Euler-Lagrange condition. Simple examples illustrate that the validity of the Euler-Lagrange condition (and related necessary conditions), in normal form, depends crucially on the interplay between velocity sets, the left end-point constraint set and the state constraint set. We show that this is actually a common feature for general state constrained optimal control problems, in which the state constraint is represented by closed convex sets and the left end-point constraint is a closed set. In these circumstances classical constraint qualifications involving the state constraints and the velocity sets cannot be used alone to guarantee normality of the necessary conditions. A key feature of this paper is to prove that the additional information involving tangent vectors to the left end-point and the state constraint sets can be used to establish normality.

Boccia A, Vinter RB, 2016, The Maximum Principle for Optimal Control Problems with Time Delays, 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS), Publisher: ELSEVIER, Pages: 951-955, ISSN: 2405-8963

- Citations: 4

Vinter RB, 2015, Multifunctions of bounded variation, *Journal of Differential Equations*, Vol: 260, Pages: 3350-3379, ISSN: 1090-2732

Consider control systems described by a differential equation with a control term or, more generally, by a differential inclusion with velocity set F(t,x). Certain properties of state trajectories can be derived when it is assumed that F(t,x) is merely measurable w.r.t. the time variable t. But sometimes a refined analysis requires the imposition of stronger hypotheses regarding the time dependence. Stronger forms of necessary conditions for minimizing state trajectories can be derived, for example, when F(t,x) is Lipschitz continuous w.r.t. time. It has recently become apparent that significant additional properties of state trajectories can still be derived, when the Lipschitz continuity hypothesis is replaced by the weaker requirement that F(t,x) has bounded variation w.r.t. time. This paper introduces a new concept of multifunctions F(t,x) that have bounded variation w.r.t. time near a given state trajectory, of special relevance to control. We provide an application to sensitivity analysis.

Palladino M, Vinter RB, 2015, Regularity of the Hamiltonian Along Optimal Trajectories, *SIAM Journal on Control and Optimization*, Vol: 53, Pages: 1892-1919, ISSN: 1095-7138

This paper concerns state constrained optimal control problems, in which the dynamic constraint takes the form of a differential inclusion. If the differential inclusion does not depend on time, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, is independent of time. If the differential inclusion is Lipschitz continuous, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, is Lipschitz continuous. These two well-known results are examples of the following principle: the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, inherits the regularity properties of the differential inclusion, regarding its time dependence. We show that this principle also applies to another kind of regularity: if the differential inclusion has bounded variation w.r.t. time, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, has bounded variation. Two applications of these newly found properties are demonstrated. One is to derive improved conditions which guarantee the nondegeneracy of necessary conditions of optimality in the form of a Hamiltonian inclusion. The other application is to derive new conditions under which minimizers in the calculus of variations have bounded slope. The analysis is based on a recently proposed, local concept of differential inclusions that have bounded variation w.r.t. the time variable, in which conditions are imposed on the multifunction involved, only in a neighborhood of a given state trajectory.
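A sketch of the regularity-inheritance principle described in this abstract, in illustrative notation (not the paper's):

```latex
% Hamiltonian evaluated along the optimal trajectory x^*(\cdot)
% and costate p(\cdot):
h(t) \;:=\; H\big(t, x^*(t), p(t)\big)
      \;=\; \max_{v \in F(t, x^*(t))} \langle p(t), v \rangle .
% The principle: t \mapsto h(t) inherits the time-regularity of F:
%   F \text{ independent of } t      \;\Longrightarrow\; h \text{ constant},
%   F \text{ Lipschitz in } t        \;\Longrightarrow\; h \text{ Lipschitz},
%   F \text{ of bounded variation}   \;\Longrightarrow\; h \text{ of bounded variation}.
```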

Palladino M, Vinter RB, 2015, When are minimizing controls also minimizing relaxed controls?, *Discrete and Continuous Dynamical Systems*, Vol: 35, Pages: 4573-4592, ISSN: 1553-5231

Relaxation refers to the procedure of enlarging the domain of a variational problem or the search space for the solution of a set of equations, to guarantee the existence of solutions. In optimal control theory relaxation involves replacing the set of permissible velocities in the dynamic constraint by its convex hull. Usually the infimum cost is the same for the original optimal control problem and its relaxation. But it is possible that the relaxed infimum cost is strictly less than the infimum cost. It is important to identify such situations, because then we can no longer study the infimum cost by solving the relaxed problem and evaluating the cost of the relaxed minimizer. Following on from earlier work by Warga, we explore the relation between the existence of an infimum gap and abnormality of necessary conditions (i.e. they are valid with the cost multiplier set to zero). Two kinds of theorems are proved. One asserts that a local minimizer, which is not also a relaxed minimizer, satisfies an abnormal form of the Pontryagin Maximum Principle. The other asserts that a local relaxed minimizer that is not also a minimizer satisfies an abnormal form of the relaxed Pontryagin Maximum Principle.

Palladino M, Vinter RB, 2014, Minimizers That Are Not Also Relaxed Minimizers, *SIAM Journal on Control and Optimization*, Vol: 52, Pages: 2164-2179, ISSN: 1095-7138

Relaxation is a widely used regularization procedure in optimal control, involving the replacement of velocity sets by their convex hulls, to ensure the existence of a minimizer. It can be an important step in the construction of suboptimal controls for the original, unrelaxed, optimal control problem (which may not have a minimizer), based on obtaining a minimizer for the relaxed problem and approximating it. In some cases the infimum cost of the unrelaxed problem is strictly greater than the infimum cost over relaxed state trajectories; we need to identify such situations because then the above procedure fails. The noncoincidence of these two infima leads also to a breakdown of the dynamic programming method because, typically, solving the Hamilton--Jacobi equation yields the minimum cost of the relaxed, not the original, optimal control problem. Following on from earlier work by Warga, we explore the relation between, on the one hand, noncoincidence of the minimum cost of the optimal control and its relaxation and, on the other, abnormality of necessary conditions (in the sense that they take a degenerate form in which the cost multiplier is set to zero). Two kinds of theorems are proved, depending on whether we focus attention on minimizers of the unrelaxed or the relaxed formulation of the optimal control problem. One kind asserts that a local minimizer which is not also a relaxed local minimizer satisfies an abnormal form of the Hamiltonian inclusion. The other asserts that a relaxed local minimizer that is not also a local minimizer also satisfies an abnormal form of Hamiltonian inclusion.

Bettiol P, Frankowska H, Vinter RB, 2014, Improved Sensitivity Relations in State Constrained Optimal Control, *Applied Mathematics and Optimization*, Vol: 71, Pages: 353-377, ISSN: 1432-0606

Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both ‘full’ and ‘partial’ sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously. The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proof.

Gavriel C, Vinter RB, 2014, Second Order Sufficient Conditions for Optimal Control Problems with Non-unique Minimizers: An Abstract Framework, *Applied Mathematics and Optimization*, Vol: 70, Pages: 411-442, ISSN: 1432-0606

Standard second order sufficient conditions in optimal control theory provide not only the information that an extremum is a weak local minimizer, but also tell us that the extremum is locally unique. It follows that such conditions will never cover problems in which the extremum is continuously embedded in a family of constant cost extrema. Such problems arise in periodic control, when the cost is invariant under time translations, in shape optimization, where the cost is invariant under Euclidean transformations (translations and rotations of the extremal shape), and other areas where the domain of the optimization problem does not really comprise elements in a linear space, but rather an equivalence class of such elements. We supply a set of sufficient conditions for minimizers that are not locally unique, tailored to problems of this nature. The sufficient conditions are in the spirit of earlier conditions for ‘non-isolated’ minima, in the context of general infinite dimensional nonlinear programming problems provided by Bonnans, Ioffe and Shapiro, and require coercivity of the second variation in directions orthogonal to the constant cost set. The emphasis in this paper is on the derivation of directly verifiable sufficient conditions for a narrower class of infinite dimensional optimization problems of special interest. The role of the conditions in providing easy-to-use tests of local optimality of a non-isolated minimum, obtained by numerical methods, is illustrated by an example in optimal control.

Vinter RB, 2014, The Hamiltonian Inclusion for Nonconvex Velocity Sets, *SIAM Journal on Control and Optimization*, Vol: 52, Pages: 1237-1250, ISSN: 1095-7138

Since Clarke's 1973 proof of the Hamiltonian inclusion for optimal control problems with convex velocity sets, there has been speculation (and, more recently, speculation relating to a stronger, partially convexified version of the Hamiltonian inclusion) as to whether these necessary conditions are valid in the absence of the convexity hypothesis. The issue was in part resolved by Clarke himself when, in 2005, he showed that $L^{\infty}$ local minimizers satisfy the Hamiltonian inclusion. In this paper it is shown, by counterexample, that the Hamiltonian inclusion (and so also the stronger partially convexified Hamiltonian inclusion) are not in general valid for nonconvex velocity sets when the local minimizer in question is merely a $W^{1,1}$ local minimizer, not an $L^{\infty}$ local minimizer. The counterexample demonstrates that the need to consider $L^{\infty}$ local minimizers, not $W^{1,1}$ local minimizers, in the proof of the Hamiltonian inclusion for nonconvex velocity sets is fundamental, not just a technical restriction imposed by currently available proof techniques. The paper also establishes the validity of the partially convexified Hamiltonian inclusion for $W^{1,1}$ local minimizers under a normality assumption, thereby correcting earlier assertions in the literature.

Bettiol P, Boccia A, Vinter RB, 2013, Stratified Necessary Conditions for Differential Inclusions with State Constraints, *SIAM Journal on Control and Optimization*, Vol: 51, Pages: 3903-3917, ISSN: 1095-7138

The concept of stratified necessary conditions for optimal control problems, whose dynamic constraint is formulated as a differential inclusion, was introduced by F. H. Clarke. These are conditions satisfied by a feasible state trajectory that achieves the minimum value of the cost over state trajectories whose velocities lie in a time-varying open ball of specified radius about the velocity of the state trajectory of interest. Considering different radius functions stratifies the interpretation of “minimizer.” In this paper we prove stratified necessary conditions for optimal control problems involving pathwise state constraints. As was shown by Clarke in the state constraint-free case, we find that, also in our more general setting, the stratified necessary conditions yield generalizations of earlier optimality conditions for unbounded differential inclusions as simple corollaries. Some examples are provided, giving insights into the nature of the hypotheses invoked for the derivation of stratified necessary conditions and into the scope for their further refinement.

Bettiol P, Vinter RB, 2013, Estimates on trajectories in a closed set with corners for (t,x) dependent data, *Mathematical Control and Related Fields*, Vol: 3, Pages: 245-267, ISSN: 2156-8472

Estimates on the distance of a given process from the set of processes that satisfy a specified state constraint, in terms of the state constraint violation, are important analytical tools in state constrained optimal control theory; they have been employed to ensure the validity of the Maximum Principle in normal form, to establish regularity properties of the value function, to justify interpreting the value function as a unique solution of the Hamilton-Jacobi equation, and for other purposes. A range of estimates are required, which differ according to the metrics used to measure the ‘distance’ and the modulus θ(h) of state constraint violation h in terms of which the estimates are expressed. Recent research has shown that simple linear estimates are valid when the state constraint set A has smooth boundary, but do not generalize to a setting in which the boundary of A has corners. Indeed, for a velocity set F which does not depend on (t,x) and for state constraints taking the form of the intersection of two closed half-spaces (the simplest case of a boundary with corners), the best distance estimate we can hope for, involving the W^{1,1} metric on state trajectories, is a super-linear estimate expressed in terms of the h|log(h)| modulus. But distance estimates involving the h|log(h)| modulus are not in general valid when the velocity set F(.,x) is required merely to be continuous, while not even distance estimates involving the weaker, Hölder modulus h^α (with α arbitrarily small) are in general valid when F(.,x) is allowed to be discontinuous. This paper concerns the validity of distance estimates when the velocity set F(t,x) is (t,x)-dependent and satisfies standard hypotheses (linear growth, Lipschitz x-dependence and an inward pointing condition). Hypotheses are identified for the validity of distance estimates, involving both the h|log(h)| and linear moduli, within the framework of control systems described by a controlled differential inclusion.

Boccia A, Falugi P, Maurer H, et al., 2013, Free time optimal control problems with time delays, 52nd IEEE Annual Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 520-525, ISSN: 0743-1546

- Citations: 7

Festa A, Vinter RB, 2013, A decomposition technique for pursuit evasion games with many pursuers, 52nd IEEE Annual Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 5797-5802, ISSN: 0743-1546

- Citations: 8

Palladino M, Vinter RB, 2013, When Does Relaxation Reduce the Minimum Cost of an Optimal Control Problem?, 52nd IEEE Annual Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 526-531, ISSN: 0743-1546

- Citations: 1

Falugi P, Kountouriotis P-A, Vinter RB, 2012, Differential Games Controllers That Confine a System to a Safe Region in the State Space, With Applications to Surge Tank Control, *IEEE Transactions on Automatic Control*, Vol: 57, Pages: 2778-2788, ISSN: 1558-2523

Surge tanks are units employed in chemical processing to regulate the flow of fluids between reactors. A notable feature of surge tank control is the need to constrain the magnitude of the Maximum Rate of Change (MROC) of the surge tank outflow, since excessive fluctuations in the rate of change of outflow can adversely affect down-stream processing (through disturbance of sediments, initiation of turbulence, etc.). Proportional + Integral controllers, traditionally employed in surge tank control, do not take direct account of the MROC. It is therefore of interest to explore alternative approaches. We show that the surge tank controller design problem naturally fits a differential games framework, proposed by Dupuis and McEneaney, for controlling a system to confine the state to a safe region of the state space. We show furthermore that the differential game arising in this way can be solved by decomposing it into a collection of (one player) optimal control problems. We discuss the implications of this decomposition technique, for the solution of other controller design problems possessing some features of the surge tank controller design problem.

Clark JMC, Vinter RB, 2012, Stochastic exit time problems arising in process control, *Stochastics-An International Journal of Probability and Stochastic Processes*, Vol: 84, Pages: 667-681, ISSN: 1744-2516

This paper concerns the problem of controlling a stochastic system, with small noise parameter, to prevent it leaving a safe region of the state space. Such problems arise in flow control and other areas. We consider a formulation of the problem, in which a control is sought, to maximize a cost which is related to the expected exit time, but modified to reduce the probability of an early exit, according to a specified level of risk aversion (‘risk sensitive’ stochastic control). Formally letting the noise parameter tend to zero, we find that the optimal control strategy for this problem coincides with the optimal feedback control strategy for a differential game. We identify a class of differential games arising in this way, the so called decomposable differential games, for which the optimal control strategy can be easily obtained and illustrate the proposed solution technique by applying it to a flow control problem arising in process systems engineering.

Bettiol P, Frankowska H, Vinter RB, 2011, L∞ estimates on trajectories confined to a closed subset, *Journal of Differential Equations*, Vol: 252, Pages: 1912-1933, ISSN: 1090-2732

This paper concerns the validity of estimates on the distance of an arbitrary state trajectory from the set of state trajectories which lie in a given state constraint set. These so called distance estimates have widespread application in state constrained optimal control, including justifying the use of the Maximum Principle in normal form and establishing regularity properties of value functions. We focus on linear, L∞ distance estimates which, of all the available estimates, have so far been the most widely used. Such estimates are known to be valid for general, closed state constraint sets, provided the functions defining the dynamic constraint are Lipschitz continuous, with respect to the time and state variables. We ask whether linear, L∞ distance estimates remain valid when the Lipschitz continuity hypothesis governing t-dependence of the data is relaxed. We show by counter-example that these distance estimates are not valid in general if the hypothesis of Lipschitz continuity is replaced by continuity. We also provide a new hypothesis, ‘absolute continuity from the left’, for the validity of linear, L∞ estimates. The new hypothesis is less restrictive than Lipschitz continuity and even allows discontinuous time dependence in certain cases. It is satisfied, in particular, by differential inclusions exhibiting non-Lipschitz t-dependence at isolated points, governed, for example, by a fractional-power modulus of continuity. The relevance of distance estimates for state constrained differential inclusions permitting fractional-power time dependence is illustrated by an example in engineering design, where we encounter an isolated, square-root type singularity, concerning the t-dependence of the data.

Vinter RB, Bettiol P, 2011, Trajectories satisfying a state constraint: improved estimates and new non-degeneracy conditions, *IEEE Transactions on Automatic Control*, Vol: 56, Pages: 1090-1096

For a state-constrained control system described by a differential inclusion and a single functional inequality state constraint, it is known that, under an `inward pointing condition', the $W^{1,1}$ distance of an arbitrary state trajectory to the set of state trajectories, which have the same left endpoint and which satisfy the state constraint, is linearly related to the state constraint violation. In this paper we show that, in situations where the state-constrained control system is described instead by a controlled differential equation, this estimate can be improved by replacing the $W^{1,1}$ distance on state trajectories by the distance between the control functions, measured in the Ekeland metric. A counter-example reveals that a refinement of this nature is not in general valid for state constrained differential inclusions. Finally we show how the refined estimates may be used to establish new conditions for non-degeneracy of the state constrained Maximum Principle, in circumstances when the data depends discontinuously on the control variable.

Clark JMC, Kountouriotis PA, Vinter RB, 2011, A Gaussian Mixture Filter for Range-Only Tracking, *IEEE TRANSACTIONS ON AUTOMATIC CONTROL*, Vol: 56, Pages: 602-613, ISSN: 0018-9286

- Citations: 13

Bettiol P, Bressan A, Vinter RB, 2011, ESTIMATES FOR TRAJECTORIES CONFINED TO A CONE IN R^n, *SIAM JOURNAL ON CONTROL AND OPTIMIZATION*, Vol: 49, Pages: 21-41, ISSN: 0363-0129

- Citations: 11

Singh R, Pal BC, Jabr RA, et al., 2011, Meter Placement for Distribution System State Estimation: An Ordinal Optimization Approach, *IEEE Transactions on Power Systems*, Vol: 26, Pages: 2328-2335, ISSN: 0885-8950

This paper addresses the problem of meter placement for distribution system state estimation (DSSE). The approach taken is to seek a set of meter locations that minimizes the probability that the peak value of the relative errors in voltage magnitudes and angle estimates across the network exceeds a specified threshold. The proposed technique is based on ordinal optimization and employs exact calculations of the probabilities involved, rather than estimates of these probabilities as used in our earlier work. The use of ordinal optimization leads to a decrease in computational effort without compromising the quality of the solution. The benefits of the approach in terms of reduced estimation errors is illustrated by simulations involving a 95-bus UKGDS distribution network model.

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.