- Control Systems (optimal control of non-linear systems, non-linear feedback design, computation of optimal controls, distributed parameter control systems, hybrid control systems, differential games)
- Filtering and Estimation
- Calculus of Variations
- Non-Linear Analysis
Optimal Control concerns control strategies for dynamic systems that optimise a criterion of best performance.
Optimal Orbit Transfer
This is an example of a problem where optimization is a design objective. The objective is to find a control strategy that transfers a vehicle to a circular orbit of maximal radius, under constraints on the amount of fuel consumed and on the rate of fuel consumption.
Dynamic optimisation predicts bang-bang thrust with a continuously varying thrust angle. Many far more complex problems arising in mission planning have been investigated (‘gravity assist’ manoeuvres in outer-planet exploration, minimisation of atmospheric heating, etc.).
Autonomous Underwater Vehicle (AUV) control:
This is an example where an optimal control problem is formulated to capture, indirectly, a number of design objectives. The objective here is to maximise altitude over a specified time horizon, while reducing the effects on the trajectory of a variable drag coefficient:
A term penalizing ‘sensitivity’ to the drag coefficient is included in the cost. The graph illustrates the trade-off that is possible between minimizing the cost and reducing sensitivity.
R. B. Vinter, Optimal Control, Birkhäuser, Boston, 2000.
The following recent papers concern properties of optimal controls. Many of them relate to optimal control problems with path-wise state constraints. Topics include: first order necessary conditions generalizing the Pontryagin Maximum Principle or the Euler-Lagrange condition of the classical Calculus of Variations; sensitivity relations; ‘distance estimates’ concerning the proximity of a given state trajectory to the subset of state trajectories satisfying a state constraint, with applications to multiplier non-degeneracy, minimizer regularity, etc.; conditions under which relaxation procedures do not introduce an ‘infimum gap’; and optimality conditions expressed in terms of the Hamiltonian, generalizing the classical condition ‘the Hamiltonian is constant along optimal trajectories for autonomous optimal control problems’. Research is also reported on recent insights into longstanding questions in non-smooth optimal control concerning conditions for the validity of the Hamiltonian inclusion. Several papers provide a systematic exploration of properties of optimal trajectories for problems in which the time dependence of the dynamic constraint is discontinuous.
R. Vinter, The Hamiltonian Inclusion for Non-Convex Velocity Sets, SIAM J. Control and Optim., 52, 2, 2014, pp. 1237-125.
M. Palladino and R. B. Vinter, Minimizers That Are Not Also Relaxed Minimizers, SIAM J. Control and Optim. 52, 4, 2014, pp. 2164–2179.
M. Palladino and R. B. Vinter, Regularity of the Hamiltonian Along Optimal Trajectories, SIAM J. Control Optim., 53, 2, 2015, pp. 1892–1919.
P. Bettiol, H. Frankowska, R. B. Vinter, Improved Sensitivity Relations in State Constrained Optimal Control, Applied Mathematics & Optimization, (electronic version), 2014.
R. Vinter, Multifunctions of Bounded Variation, Journal of Differential Equations, to appear.
P. Bettiol, A. Boccia and R. B. Vinter, Stratified Necessary Conditions for Differential Inclusions with State Constraints, SIAM J. Control and Optim., 51, 5, 2013, pp. 3903-3917.
P. Bettiol, A. Bressan and R. B. Vinter, Estimates for Trajectories Confined to a Cone in R^n, SIAM J. on Control and Optimization, Vol. 49, No. 1, 2011, pp. 21-42.
P. Bettiol, A. Bressan and R. B. Vinter, On Trajectories Satisfying a State Constraint: W^1,1 Estimates and Counter-examples, SIAM J. Control and Optim., 49, 7, 2010, pp. 4664-4679.
P. Bettiol and R. B. Vinter, Trajectories Satisfying a State Constraint: Improved Estimates and New Non-degeneracy Conditions, IEEE Trans. Automatic Control, Vol. 56, No. 5, 2011, pp. 1090-1096.
P. Bettiol, H. Frankowska and R. B. Vinter, L-infinity Estimates on Trajectories Confined to a Closed Subset, J. of Differential Equations, 252, 2, 2012, pp. 1912-1933.
P. Bettiol and R. B. Vinter, Sensitivity Interpretations of the Co-State Variable for Optimal Control Problems with State Constraints, SIAM J. Control and Optim., 48, 5, 2010, pp. 3297-3317.
F. Rampazzo and R. B. Vinter, Degenerate Optimal Control Problems with State Constraints, SIAM J. Control and Optim., 39, 2000, pp. 989-1007.
R. B. Vinter, Mini-Max Optimal Control, SIAM J. Control and Optim., 44, pp. 939-968, 2005.
I. A. Shvartsman and R. B. Vinter, Regularity Properties of Optimal Controls with Time Varying State and Control Constraints, Journal of Nonlinear Analysis, 65, pp. 448-474, 2006.
G. Galbraith and R. B. Vinter, Lipschitz Continuity of Optimal Controls for State Constrained Problems, SIAM J. Control and Optim., 42, pp. 1727-1744, 2003.
D. Berovic and R. B. Vinter, The Application of Dynamic Programming to Optimal Inventory Control, IEEE Trans. Automatic Control, 49 (5), pp. 676-685, 2004.
H. Frankowska and R. B. Vinter, A Theorem on Existence of Neighbouring Feasible Trajectories: Applications to Dynamic Programming for State Constrained Optimal Control, J. Optim. Theory and Applic., 104, pp. 21-40, 2000.
Higher Order Sufficient Conditions of Optimality:
C. Gavriel and R. B. Vinter, Second Order Sufficient Conditions for Optimal Control Problems with Non-unique Minimizers: An Abstract Framework, Applied Mathematics & Optimization, 70, pp. 411-442, 2014.
J. C. Allwright and R. B. Vinter, Second Order Conditions for Periodic Optimal Control Problems, Control and Cybernetics, 34, 3, pp. 617-643, 2005.
The aim of this research is to develop and assess new, high precision algorithms for difficult tracking problems involving single and multiple targets, applicable in situations where traditional tracking algorithms perform badly or fail altogether. The algorithms are Bayesian; they are based on probabilistic modelling and the recursive construction of approximations to the evolving conditional distribution of target motion, given the observations. The problems considered include such features as ill-conditioned bearings-only measurements, target models with unknown parameters and tracking in high clutter environments. Research efforts have centred on developing and assessing a new algorithm, called the shifted Rayleigh filter, for bearings-only tracking of a single target. It takes its name from the fact that certain coefficients appearing in the algorithm can be interpreted as moments of a shifted Rayleigh distribution. Attention has also been given to developing similar algorithms for range-only tracking problems.
The Shifted Rayleigh Filter
In common with other moment matching algorithms, the shifted Rayleigh filter makes use of a normal approximation to the prior distribution of target motion. It is unusual, however, in incorporating an exact calculation of the updated distribution, to take account of a new measurement. Thus the only approximation introduced by the algorithm is to replace a conditional distribution by a matched normal distribution, at a single point in each iteration. The isolation of the approximation in this way is important because it simplifies the analysis of tracker performance and permits the construction of error bounds.
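The single approximation step described above can be illustrated with a minimal one-dimensional sketch. This is not the shifted Rayleigh filter itself: the quadratic measurement model, noise levels and grid resolution below are illustrative assumptions, chosen only to show the principle of computing an exact posterior and then replacing it by a moment-matched normal distribution.

```python
import numpy as np

def moment_matched_update(prior_mean, prior_var, z, h, meas_var):
    """One moment-matching update in 1D: compute the exact posterior
    p(x | z) on a grid, then summarize it by its mean and variance --
    the matched normal replaces the posterior at this single point."""
    s = np.sqrt(prior_var)
    x = np.linspace(prior_mean - 6 * s, prior_mean + 6 * s, 4001)
    dx = x[1] - x[0]
    prior = np.exp(-0.5 * (x - prior_mean) ** 2 / prior_var)   # normal prior
    lik = np.exp(-0.5 * (z - h(x)) ** 2 / meas_var)            # measurement likelihood
    post = prior * lik
    post /= post.sum() * dx          # normalize the exact posterior
    mean = (x * post).sum() * dx     # matched mean
    var = ((x - mean) ** 2 * post).sum() * dx   # matched variance
    return mean, var

# Hypothetical nonlinear measurement z = x**2 + noise
m, v = moment_matched_update(1.0, 0.5, z=1.2, h=lambda x: x ** 2, meas_var=0.1)
```

Because the Bayes update itself is exact here, all approximation error is confined to the final normal summary, which is the structural point made above about the shifted Rayleigh filter.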
The journal paper in which full details of the underlying analysis appear supplies a theoretical justification of the shifted Rayleigh algorithm. The paper also confirms that the algorithm is competitive with other moment matching algorithms and particle filters in a ‘benign’ scenario, which has been the basis of earlier comparative studies.
The conference papers provide an assessment of the shifted Rayleigh filter applied to more challenging bearings-only tracking problems where, according to earlier simulation studies reported in the literature, standard moment matching algorithms, such as the extended Kalman filter, fail to provide useful estimates. One paper reports on a comparative study of a particle filter and the shifted Rayleigh filter, where the purpose is to estimate the position of a moving target from noisy, bearings-only measurements taken by six drifting sonobuoys, whose positions are estimated from bearings-only measurements taken by a stationary monitoring sensor. Simulation studies reveal that the shifted Rayleigh filter performs favourably compared with the particle filter, while reducing the computational burden by an order of magnitude. Another paper concerns the application of the shifted Rayleigh filter to a high clutter variant of the preceding tracking problem. Here, the filter provides excellent estimates, even in scenarios in which the clutter probability is 67% and standard deviations on the bearings-only measurements are in excess of 16 degrees. A further paper assesses the performance of the shifted Rayleigh filter in a challenging scenario in which the extended Kalman filter fails altogether (the target passes under the sensor platform).
There are many tracking problems for which moment matching algorithms are not suitable, notably those where the distributions of interest are multi-modal. But moment matching algorithms offer such substantial computational savings over particle filters that it is important to explore their range of applicability. Perhaps the most significant aspect of this research is to point to new classes of nonlinear filtering problems for which moment matching algorithms, appropriately applied, are the best available choice.
Collaborator: J M C Clark
- J. M. C. Clark, P. A. Kountouriotis and R. B. Vinter, ‘A Gaussian mixture filter for range-only tracking’, IEEE Trans. Automatic Control, Vol. 56, No. 5, 2011, pp. 1090-1096.
- J. M. C. Clark, P. A. Kountouriotis and R. B. Vinter, ‘A new Gaussian mixture algorithm for GMTI tracking under a minimum detectable velocity constraint’, IEEE Trans. Automatic Control, 54, 12, pp. 2745-2756, 2009.
- J. M. C. Clark, R. B. Vinter and M. Yaqoob, ‘The Shifted Rayleigh Filter: A New Algorithm for Bearings Only Tracking’, IEEE Trans. Aerospace and Electronic Systems, 43, 4, pp. 1373-1384, 2007.
- J. M. C. Clark, S. Robiatti and R. B. Vinter, ‘The Shifted-Rayleigh Filter Mixture Algorithm for Bearings-Only Tracking of Manoeuvring Targets’, IEEE Trans. Signal Processing, 55, 7, pp. 3207-3218, 2007.
- Rajiv Arulampulam, Martin Clark and Richard Vinter, ‘Performance of the Shifted Rayleigh Filter in Single-sensor Bearings-only Tracking’, Proc. Fusion 2007, Quebec, 2007.
- R. B. Vinter and J. M. C. Clark, ‘A New Class of Moment Matching Filters for Nonlinear Tracking and Estimation Problems’, IEEE Nonlinear Statistical Signal Processing Workshop, Cambridge, 2006.
- J. M. C. Clark, S. Maskell, R. B. Vinter and M. Yaqoob, ‘Comparative Study of the Shifted Rayleigh Filter and a Particle Filter’, 2005 IEEE Aerospace Conference, Big Sky, Montana (Special Session on Monte Carlo Methods).
- J. M. C. Clark, R. B. Vinter and M. Yaqoob, ‘The Shifted Rayleigh Filter for Bearings Only Tracking’, Proc. Fusion 2005, Philadelphia, 2005.
DIFFERENTIAL GAMES AND ROBUST CONTROLLER DESIGN
The goal of this research is to develop new, practical approaches to the design of feedback controllers of nonlinear systems (such as flow control systems and dynamic telecommunications links), based on the theory of differential games. The approach takes account of ‘worst case’ disturbances and path-wise constraints (representing, for example, actuator saturation or the necessity to avoid ‘dangerous’ regions of the operational profile in an aeronautics or process control context).
What Are Differential Games?
Differential Games concern the balance of ‘optimal’ strategies applied by two opposing players, who have conflicting notions of ‘best’ performance of the dynamical system they are both trying to control. The field has its origins in pursuit-evasion games in a military context, but now has a much more important role in Robust Controller Design.
Relevance of Differential Games to Robust Controller Design
Robust Control concerns the design of control systems whose performance is not degraded by modelling inaccuracies or the presence of disturbances. It is linked to Differential Games, because disturbances and model changes can be interpreted as ‘strategies’ of an antagonistic player. The Differential Games approach provides controllers that deal with disturbances on a worst case basis.
- P. Falugi, P. A. Kountouriotis and R. B. Vinter, ‘Controllers that Confine a System to a Safe Region in the State Space’, IEEE Trans. Automat. Contr., 57, 11, 2012, pp. 2778-278.
- J. M. C. Clark and R. B. Vinter, ‘Stochastic Exit Time Problems Arising in Process Control’, Stochastics, 84, 5-6, 2012, pp. 667-681.
- J. M. C. Clark, M. R. James and R. B. Vinter, ‘The Interpretation of Discontinuous State Feedback Control Laws as Non-Anticipative Control Strategies in Differential Games’, IEEE Transactions Automatic Control, 49, 8, 2004, pp. 1360-1365.
- J. M. C. Clark and R. B. Vinter, ‘A Differential Dynamic Games Approach to Flow Control’, Proc. 42nd CDC, Hawaii, 2003.
- A. Festa and R. B. Vinter, ‘Decomposition of Differential Games with Multiple Targets’, J. Optim. Theory and Applic., to appear.
ROBUST MODEL PREDICTIVE CONTROL
The broad objective of this research is the development, analysis, assessment and exploitation of a new form of model predictive control (MPC), Feedback MPC, that is inherently robust in the face of uncertainty. The main aim is to devise a method, the complexity of which is considerably less than that of dynamic programming, for achieving feedback model predictive control of constrained dynamic systems that is robust to a wide class of uncertainties (unknown disturbances, model error and state estimation error when output feedback is used).
What is model predictive control?
Model Predictive Control is an approach to controller design that involves on-line optimization calculations. The on-line optimization problem takes account of system dynamics, constraints and control objectives. Conventional model predictive control requires the solution of an open-loop optimal control problem, in which the decision variable is a sequence of control actions. At each sample time the current control is set equal to the first term of the control sequence.
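The receding-horizon idea can be sketched in a few lines. This is a toy scalar example, not any specific scheme from this research: the dynamics, horizon, cost weights and control grid are illustrative assumptions, and the open-loop problem is solved by brute-force enumeration purely to keep the sketch self-contained.

```python
import itertools
import numpy as np

A, B = 1.0, 0.5                  # toy scalar dynamics: x+ = A x + B u
N = 3                            # prediction horizon
U = np.linspace(-1.0, 1.0, 9)    # admissible control levels (hard constraint |u| <= 1)

def open_loop_cost(x0, u_seq):
    """Quadratic cost over the horizon for one candidate control sequence."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += x ** 2 + 0.1 * u ** 2
        x = A * x + B * u
    return cost + 10.0 * x ** 2   # terminal penalty

def mpc_step(x0):
    """Solve the open-loop problem by enumeration; apply only the first control."""
    best = min(itertools.product(U, repeat=N), key=lambda s: open_loop_cost(x0, s))
    return best[0]

# Receding-horizon simulation: re-optimize at every sample time
x = 5.0
for _ in range(20):
    u = mpc_step(x)
    x = A * x + B * u
```

Note how the control constraint is enforced simply by restricting the decision variables, which is the feature of MPC emphasized in the discussion of hard constraints below.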
Model predictive control has a rich and unusual history. The main reason for the wide-scale adoption by industry of model predictive control is its ability to handle hard constraints on controls and states that arise in most applications. These constraints are particularly important in the petro-chemical industry where optimization of set points results in steady-state operation on, or close to, the boundary of the set of permissible states. Model predictive control is one of very few methods available for handling hard constraints, and it is precisely this fact that makes it so useful for the control engineer, particularly in the process industries where the plants being controlled are sufficiently ‘slow’ to permit its implementation.
Robust Model Predictive Control
Uncertainty often has a significant effect on stability and performance. Robust model predictive control therefore requires, in principle, the on-line solution of a min-max optimal control problem in which the decision variable is a sequence of control laws, which provides the feedback necessary for robustness, and the adversary is the uncertainty. Naive inclusion of feedback leads to a dynamic programming problem of overwhelming complexity. The real challenge is to devise a method, the complexity of which is considerably less than that of dynamic programming, for achieving feedback model predictive control of constrained dynamic systems that is robust to a wide class of uncertainties.
In order to overcome existing limitations of robust model predictive control, in our approach the predicted trajectory is replaced by a predicted tube in state space; the control policy (a sequence of control laws) is linear in the tube, and the tube and the policy are chosen so that all realizations of the state lie within the tube. State and control constraints are easily handled.
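A minimal sketch of the tube idea, under illustrative assumptions (a scalar system, a bounded additive disturbance, and a hypothetical tube feedback gain K): with the ancillary control u = v + K(x - z) around a nominal trajectory z, the error e = x - z obeys e+ = (A + B K) e + w, so the tube cross-section radius can be propagated by a simple recursion and remains bounded whenever |A + B K| < 1.

```python
A, B, K = 1.0, 0.5, -1.2   # toy scalar model and assumed tube feedback gain
W_MAX = 0.1                 # disturbance bound: |w| <= W_MAX
phi = abs(A + B * K)        # closed-loop error contraction factor (0.4 here)

def tube_radii(n):
    """Radius of the error-tube cross-section at each step:
    e+ = (A + B K) e + w  implies  r_{k+1} = phi * r_k + W_MAX."""
    r, out = 0.0, []
    for _ in range(n):
        out.append(r)
        r = phi * r + W_MAX
    return out

radii = tube_radii(10)
# radii grow monotonically toward the bounded limit W_MAX / (1 - phi)
```

Every realization of the state then lies within distance `radii[k]` of the nominal trajectory at step k, so constraints can be enforced on the nominal problem after tightening them by the tube radius.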
The on-line optimal control problem is more complex than that for conventional model predictive control, but the increase in complexity is relatively modest, permitting employment of this strategy in situations where robustness is required.
Collaborators: D Q Mayne and E Kerrigan
- D. Q. Mayne, J. B. Rawlings, C. V. Rao and P. O. M. Scokaert, ‘Constrained model predictive control: stability and optimality’, Automatica, 36, pp. 789-814, 2000 (survey paper).
- D. Q. Mayne, S. Rakovic, R. B. Vinter and E. C. Kerrigan, ‘Characterization of the solution to a constrained H-infinity optimal control problem’, Automatica, 42, pp. 371-382, 2006.
‘OPTIMAL CONTROL’, by R. B. VINTER
Publication Details: Birkhäuser, Boston, 2000, 507 pages, ISBN 0-8176-4075-4, $79.95.
Since the 1980s, new ideas in Optimal Control have led to far-reaching extensions of the theory. These include generalizations of the Pontryagin Maximum Principle, a rigorous framework for Dynamic Programming based on novel concepts of ‘solution’ to the Hamilton-Jacobi Equation, such as viscosity solutions, and new, unrestrictive conditions for minimizer regularity. A key element has been new analytic techniques that give sense to ‘gradients’ of functions that are not differentiable in the conventional sense (Nonsmooth Analysis).
Optimal Control brings together many of the important advances of the last two decades. The analysis is self-contained and incorporates many of the simplifications and unifying concepts revealed by recent research. Among other purposes, the book aims to meet the needs of readers with little prior exposure to modern Optimal Control, who seek quick answers to the questions: what are the main results, what were the deficiencies of the classical theory and to what extent have they been overcome? The book includes, for their benefit, a lengthy overview, in which analytical details are suppressed and emphasis is placed instead on communicating underlying ideas.
1) A self-contained and accessible exposition of Nonsmooth Analysis and its applications to the analysis of minimizing arcs.
2) A thorough investigation of necessary conditions, including nonsmooth maximum principles and Euler-Lagrange and Hamilton-type conditions for differential inclusion problems.
3) Self-contained coverage of Dynamic Programming from a system-theoretic point of view, with an emphasis on discontinuous value functions.
4) Detailed consideration of minimizer regularity, free-end time problems involving data discontinuous in time and other topics not previously treated in book form.
Chapter 1: Overview, pp. 1-60
Chapter 2: Measurable Multifunctions and Differential Inclusions, pp. 61-108
Chapter 3: Variational Principles, pp. 109-125
Chapter 4: Nonsmooth Analysis, pp. 127-170
Chapter 5: Subdifferential Calculus, pp. 179-197
Chapter 6: The Maximum Principle, pp. 201-228
Chapter 7: The Extended Euler-Lagrange and Hamilton Conditions, pp. 233-252
Chapter 8: Necessary Conditions for Free End-Time Problems, pp. 285-318
Chapter 9: The Maximum Principle for State Constrained Problems, pp. 321-359
Chapter 10: Differential Inclusions with State Constraints, pp. 361-396
Chapter 11: Regularity of Minimizers, pp. 397-432
Chapter 12: Dynamic Programming, pp. 435-487
Reviews: Automatica, 38, 8, 2002, by B. Piccoli (SISSA, Trieste); Mathematical Reviews 2001c:49001, by Q. J. Zhu.
- Control for Energy and Sustainability (EPSRC Programme Grant) (£4.2M), Director, 2009-2014
- Control and Power (EPSRC Portfolio Partnership Grant) (£2.42M), 2003-2008 (subsumes earlier EPSRC grants, including ‘Robust Optimal Control’)
- Integrated Programme in Aeronautical Research (EPSRC and BAe Systems)(£375K), 2004-2007
- Fault Detection and Condition Monitoring (Data and Information Fusion DTC) (£270K), 2003-2006.