Imperial College London

Emeritus Professor Richard Vinter

Faculty of Engineering, Department of Electrical and Electronic Engineering

Emeritus Professor in Electrical and Electronic Engineering
 
 
 

Contact

 

+44 (0)20 7594 6287
r.vinter

 
 

Assistant

 

Mrs Raluca Reynolds +44 (0)20 7594 6281

 

Location

 

618, Electrical Engineering, South Kensington Campus



 

Publications


206 results found

Bernis J, Bettiol P, Vinter RB, 2022, Solutions to the Hamilton-Jacobi equation for state constrained Bolza problems with discontinuous time dependence, Journal of Differential Equations, Vol: 341, Pages: 589-619, ISSN: 0022-0396

Journal article

Miao K, Vinter R, 2021, Optimal control of a growth/consumption model, Optimal Control Applications & Methods, Vol: 42, Pages: 1672-1688, ISSN: 0143-2087

Journal article

Mayne D, Vinter R, 2021, First-Order Necessary Conditions in Optimal Control, Journal of Optimization Theory and Applications, Vol: 189, Pages: 716-743, ISSN: 0022-3239

Journal article

Vinter R, Zheng H, 2021, Obituary, Stochastics: An International Journal of Probability and Stochastic Processes, Vol: 93, Pages: 1-2, ISSN: 1744-2508

Journal article

Bettiol P, Vinter RB, 2021, A New Look at the Weierstrass Condition in Optimal Control, 60th IEEE Conference on Decision and Control (CDC), Publisher: IEEE, Pages: 4564-4569, ISSN: 0743-1546

Conference paper

Bettiol P, Quincampoix M, Vinter RB, 2019, Existence and characterization of the values of two player differential games with state constraints, Applied Mathematics and Optimization, Vol: 80, Pages: 765-799, ISSN: 0095-4616

We consider a two-player, zero-sum differential game with a cost of Bolza type, subject to a state constraint. It is shown that, under a suitable hypothesis concerning the existence of inward pointing velocity vectors for the minimizing player at the boundary of the constraint set, the lower value of the game is Lipschitz continuous and is the unique viscosity solution (appropriately defined) of the lower Hamilton-Jacobi-Isaacs equation. If the inward pointing hypothesis is satisfied by the maximizing player’s velocity set, then the upper value of the game is Lipschitz continuous and is the unique solution of the upper Hamilton-Jacobi-Isaacs equation. Under the classical Isaacs condition, the upper and lower Hamilton-Jacobi-Isaacs equations coincide. In this case, even if the inward pointing hypothesis is satisfied w.r.t. both players, the value of the game might fail to exist; however, under stronger constraint qualifications (involving the existence of inward pointing vectors associated with saddle points of the Hamiltonian), the game value does exist and is the unique solution to this Hamilton-Jacobi-Isaacs equation. The novelty of our work resides in the fact that we permit the two players’ controls to be completely coupled within the dynamic constraint, the state constraint and the cost functional, in contrast to earlier work, in which the players’ controls are decoupled w.r.t. the dynamics and state constraint, and interaction between them occurs only through the cost function. Furthermore, the inward pointing hypotheses that we impose are of a verifiable nature and less restrictive than those earlier employed.
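In outline (generic notation, not the paper's formulation, with u the minimizing player's control, v the maximizing player's control, f the dynamics and L the running cost), the Isaacs condition referred to above requires the min-max and max-min of the pre-Hamiltonian to coincide:

\[
\min_{u \in U}\,\max_{v \in V}\,\big\{\, \langle p, f(t,x,u,v)\rangle + L(t,x,u,v) \,\big\}
\;=\;
\max_{v \in V}\,\min_{u \in U}\,\big\{\, \langle p, f(t,x,u,v)\rangle + L(t,x,u,v) \,\big\}
\quad \text{for all } (t,x,p),
\]

and when this holds the lower and upper Hamilton-Jacobi-Isaacs equations are the same equation.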

Journal article

Vinter RB, 2019, Free end-time optimal control problems: Conditions for the absence of an infimum gap, Vietnam Journal of Mathematics, Vol: 47, Pages: 757-768, ISSN: 0866-7179

This paper concerns free end-time optimal control problems, in which the dynamic constraint takes the form of a controlled differential inclusion. Such problems may fail to have a minimizer. Relaxation is a procedure for enlarging the domain of an optimization problem to guarantee existence of a minimizer. In the context of problems studied here, the standard relaxation procedure involves replacing the velocity sets in the original problem by their convex hulls. It is desirable that the original and relaxed versions of the problem have the same infimum cost. For then we can obtain a sub-optimal state trajectory, by obtaining a solution to the relaxed problem and approximating it. It is important, therefore, to investigate when the infimum costs of the two problems are the same; for otherwise the above strategy for generating sub-optimal state trajectories breaks down. We explore the relation between the existence of an infimum gap and abnormality of necessary conditions for the free-time problem. Such relations can translate into verifiable hypotheses excluding the existence of an infimum gap. Links between existence of an infimum gap and normality have previously been explored for fixed end-time problems. This paper establishes, for the first time, such links for free end-time problems.
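Schematically (assuming the standard differential-inclusion formulation, with velocity multifunction F), the relaxation described above replaces the dynamic constraint by its convexified counterpart,

\[
\dot x(t) \in F(t,x(t)) \ \text{a.e.}
\qquad\longrightarrow\qquad
\dot x(t) \in \mathrm{co}\, F(t,x(t)) \ \text{a.e.},
\]

so that always \(\inf(\text{relaxed}) \le \inf(\text{original})\); an infimum gap is the case of strict inequality, which the normality-type hypotheses of the paper are designed to exclude.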

Journal article

Friedman A, Forys U, de Pinho MDR, Vinter R, et al., 2019, Preface, Discrete and Continuous Dynamical Systems - Series B, Vol: 24, Pages: I-II, ISSN: 1531-3492

Journal article

Motta M, Rampazzo F, Vinter R, 2019, Normality and Gap Phenomena in Optimal Unbounded Control, ESAIM: Control, Optimisation and Calculus of Variations, Vol: 24, Pages: 1645-1673, ISSN: 1292-8119

Journal article

Vinter RB, 2019, Optimal control problems with time delays: constancy of the Hamiltonian, SIAM Journal on Control and Optimization, Vol: 57, Pages: 2574-2602, ISSN: 0363-0129

This paper concerns necessary conditions of optimality for optimal control problems with time delays in the state variable. It is well known that, when there are no time delays and the dynamics are autonomous, the standard necessary conditions in the form of a maximum principle can be supplemented by an extra condition, namely “constancy of the Hamiltonian” along optimal trajectories (and associated costate trajectories). This property, possibly supplemented by other invariance principles, has been used to investigate properties of optimal trajectories, such as solution regularity, without the need to solve the underlying extremal equations. In classical mechanics, for example, the constancy of the Hamiltonian condition can be used to derive a conservation of energy principle from Hamilton's principle of least action. While the maximum principle has been generalized to cover time delays, the validity of constancy of the Hamiltonian-type conditions has not been previously investigated. We provide the first “extra” optimality condition of this nature for autonomous, time delay optimal control problems. The new “constancy of the Hamiltonian” condition involves a correction term, without which the condition is not valid. We illustrate the significance of this condition by applications to minimizer regularity and conservation laws in nonclassical Hamiltonian mechanics.
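For orientation only (this is the classical, delay-free statement, not the paper's new condition, which requires a correction term): for an autonomous problem with dynamics \(\dot x = f(x,u)\), \(u \in U\), and no running cost, the standard result is that the maximized Hamiltonian is constant along an optimal state trajectory \(x^*\) and associated costate \(p\):

\[
t \;\mapsto\; \max_{u \in U}\, \langle p(t), f(x^*(t),u) \rangle
\quad \text{is constant on the underlying time interval.}
\]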

Journal article

Vinter RB, 2018, State constrained optimal control problems with time delays, Journal of Mathematical Analysis and Applications, Vol: 457, Pages: 1696-1712, ISSN: 0022-247X

Journal article

Hermosilla C, Vinter R, Zidani H, 2017, Hamilton-Jacobi-Bellman equations for optimal control processes with convex state constraints, Systems & Control Letters, Vol: 109, Pages: 30-36, ISSN: 0167-6911

Journal article

Bettiol P, Vinter RB, 2017, The Hamilton-Jacobi Equation for Optimal Control Problems with Discontinuous Time Dependence, SIAM Journal on Control and Optimization, Vol: 55, Pages: 1199-1225, ISSN: 0363-0129

Journal article

Boccia A, Vinter RB, 2017, The Maximum Principle for Optimal Control Problems with Time Delays, SIAM Journal on Control and Optimization, Vol: 55, Pages: 2905-2935, ISSN: 0363-0129

Journal article

Vinter RB, Boccia A, Pinho M, 2016, Optimal Control Problems with Mixed and Pure State Constraints, SIAM Journal on Control and Optimization, Vol: 54, Pages: 3061-3083, ISSN: 0363-0129

This paper provides necessary conditions of optimality for optimal control problems, in which the pathwise constraints comprise both ‘pure’ constraints on the state variable and also ‘mixed’ constraints on control and state variables. The proofs are along the lines of earlier analysis for mixed constraint problems, according to which Clarke’s theory of ‘stratified’ necessary conditions is applied to a modified optimal control problem resulting from absorbing the mixed constraint into the dynamics; the difference here is that necessary conditions which now take account of the presence of pure state constraints are applied to the modified problem. Necessary conditions are given for a rather general formulation of the problem containing both forms of the constraints, and then these are specialized to apply to problems having special structure. While combined pure state and mixed control/state problems have been previously treated in the literature, the necessary conditions in this paper are proved under less restrictive hypotheses and for novel formulations of the constraints.
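A minimal template of the two constraint types (generic notation; the paper's formulation is more general): pure state constraints restrict the state alone, while mixed constraints couple state and control,

\[
h(t, x(t)) \le 0 \ \ \text{for all } t \ \ \text{(pure)},
\qquad
g(t, x(t), u(t)) \le 0 \ \ \text{a.e.} \ \ \text{(mixed)},
\]

imposed along trajectories of \(\dot x(t) = f(t,x(t),u(t))\), \(u(t) \in U(t)\).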

Journal article

Bettiol P, Vinter RB, 2016, L∞ estimates on trajectories confined to a closed subset, for control systems with bounded time variation, Mathematical Programming, Vol: 168, Pages: 201-228, ISSN: 1436-4646

The term ‘distance estimate’ for state constrained control systems refers to an estimate on the distance of an arbitrary state trajectory from the subset of state trajectories that satisfy a given state constraint. Distance estimates have found widespread application in state constrained optimal control. They have been used to establish regularity properties of the value function, to establish the non-degeneracy of first order conditions of optimality, and to validate the characterization of the value function as a unique solution of the HJB equation. The most extensively applied estimates of this nature are so-called linear L∞ distance estimates. The earliest estimates of this nature were derived under hypotheses that required the multifunctions, or controlled differential equations, describing the dynamic constraint, to be locally Lipschitz continuous w.r.t. the time variable. Recently, it has been shown that the Lipschitz continuity hypothesis can be weakened to a one-sided absolute continuity hypothesis. This paper provides new, less restrictive, hypotheses on the time-dependence of the dynamic constraint, under which linear L∞ estimates are valid. Here, one-sided absolute continuity is replaced by the requirement of one-sided bounded variation. This refinement of hypotheses is significant because it makes possible the application of analytical techniques based on distance estimates to important, new classes of discontinuous systems including some hybrid control systems. A number of examples are investigated showing that, for control systems that do not have bounded variation w.r.t. time, the desired estimates are not in general valid, and thereby illustrating the important role of the bounded variation hypothesis in distance estimate analysis.
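Schematically (generic notation, under the standard setting of such estimates), a linear L∞ distance estimate asserts that, for every trajectory \(\hat x\) of the control system, there is a trajectory \(x\) satisfying the state constraint \(x(t) \in A\) for all \(t\) with

\[
\| x - \hat x \|_{L^\infty} \;\le\; K \,\max_{t}\, d_A(\hat x(t)),
\]

where \(d_A\) is the Euclidean distance to the constraint set \(A\) and the constant \(K\) does not depend on \(\hat x\).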

Journal article

Festa A, Vinter RB, 2016, Decomposition of Differential Games with Multiple Targets, Journal of Optimization Theory and Applications, Vol: 169, Pages: 848-875, ISSN: 1573-2878

This paper provides a decomposition technique for the purpose of simplifying the solution of certain zero-sum differential games. The games considered terminate when the state reaches a target, which can be expressed as the union of a collection of target subsets considered as ‘multiple targets’; the decomposition consists in replacing the original target by each of the target subsets. The value of the original game is then obtained as the lower envelope of the values of the collection of games, resulting from the decomposition, which can be much easier to solve than the original game. Criteria are given for the validity of the decomposition. The paper includes examples, illustrating the application of the technique to pursuit/evasion games and to flow control.
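The decomposition can be summarised as follows (schematic notation): if the target decomposes as \(\mathcal{T} = \bigcup_{i=1}^{N}\mathcal{T}_i\) and \(V_i\) denotes the value of the game with target \(\mathcal{T}_i\), then, under the validity criteria given in the paper, the value \(V\) of the original game is the lower envelope

\[
V(t,x) \;=\; \min_{1 \le i \le N} V_i(t,x).
\]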

Journal article

Bettiol P, Khalil N, Vinter RB, 2016, Normality of Generalized Euler-Lagrange Conditions for State Constrained Optimal Control Problems, Journal of Convex Analysis, Vol: 23, Pages: 291-311, ISSN: 0944-6532

We consider state constrained optimal control problems in which the cost to minimize comprises both integral and end-point terms, establishing normality of the generalized Euler-Lagrange condition. Simple examples illustrate that the validity of the Euler-Lagrange condition (and related necessary conditions), in normal form, depends crucially on the interplay between velocity sets, the left end-point constraint set and the state constraint set. We show that this is actually a common feature for general state constrained optimal control problems, in which the state constraint is represented by closed convex sets and the left end-point constraint is a closed set. In these circumstances classical constraint qualifications involving the state constraints and the velocity sets cannot be used alone to guarantee normality of the necessary conditions. A key feature of this paper is to prove that the additional information involving tangent vectors to the left end-point and the state constraint sets can be used to establish normality.

Journal article

Boccia A, Vinter RB, 2016, The Maximum Principle for Optimal Control Problems with Time Delays, 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS), Publisher: ELSEVIER, Pages: 951-955, ISSN: 2405-8963

Conference paper

Vinter RB, 2015, Multifunctions of bounded variation, Journal of Differential Equations, Vol: 260, Pages: 3350-3379, ISSN: 1090-2732

Consider control systems described by a differential equation with a control term or, more generally, by a differential inclusion with velocity set F(t,x). Certain properties of state trajectories can be derived when it is assumed that F(t,x) is merely measurable w.r.t. the time variable t. But sometimes a refined analysis requires the imposition of stronger hypotheses regarding the time dependence. Stronger forms of necessary conditions for minimizing state trajectories can be derived, for example, when F(t,x) is Lipschitz continuous w.r.t. time. It has recently become apparent that significant additional properties of state trajectories can still be derived when the Lipschitz continuity hypothesis is replaced by the weaker requirement that F(t,x) has bounded variation w.r.t. time. This paper introduces a new concept of multifunctions F(t,x) that have bounded variation w.r.t. time near a given state trajectory, of special relevance to control. We provide an application to sensitivity analysis.
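For orientation, one standard way to express bounded variation of \(t \mapsto F(t,x)\), for fixed \(x\) and bounded values, is in terms of the Hausdorff distance \(d_H\) over partitions of the time interval \([0,T]\):

\[
\sup\Big\{ \sum_{i=1}^{N} d_H\big(F(t_i,x),\, F(t_{i-1},x)\big) \;:\; 0 = t_0 < t_1 < \dots < t_N = T \Big\} \;<\; \infty .
\]

The paper's definition is a localised refinement of this idea, with conditions imposed only near a given state trajectory.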

Journal article

Palladino M, Vinter RB, 2015, Regularity of the Hamiltonian Along Optimal Trajectories, SIAM Journal on Control and Optimization, Vol: 53, Pages: 1892-1919, ISSN: 1095-7138

This paper concerns state constrained optimal control problems, in which the dynamic constraint takes the form of a differential inclusion. If the differential inclusion does not depend on time, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, is independent of time. If the differential inclusion is Lipschitz continuous, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, is Lipschitz continuous. These two well-known results are examples of the following principle: the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, inherits the regularity properties of the differential inclusion, regarding its time dependence. We show that this principle also applies to another kind of regularity: if the differential inclusion has bounded variation w.r.t. time, then the Hamiltonian, evaluated along the optimal state trajectory and the co-state trajectory, has bounded variation. Two applications of these newly found properties are demonstrated. One is to derive improved conditions which guarantee the nondegeneracy of necessary conditions of optimality in the form of a Hamiltonian inclusion. The other application is to derive new conditions under which minimizers in the calculus of variations have bounded slope. The analysis is based on a recently proposed, local concept of differential inclusions that have bounded variation w.r.t. the time variable, in which conditions are imposed on the multifunction involved, only in a neighborhood of a given state trajectory.
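The quantity whose regularity is at issue is, schematically, the Hamiltonian evaluated along the optimal state trajectory \(x^*\) and an associated costate \(p\),

\[
t \;\mapsto\; H\big(t, x^*(t), p(t)\big) \;=\; \sup_{v \in F(t,\,x^*(t))} \langle p(t), v \rangle ,
\]

and the principle stated above is that this function of \(t\) inherits the regularity (constancy, Lipschitz continuity, bounded variation) possessed by the time dependence of the differential inclusion.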

Journal article

Palladino M, Vinter RB, 2015, When are minimizing controls also minimizing relaxed controls?, Discrete and Continuous Dynamical Systems, Vol: 35, Pages: 4573-4592, ISSN: 1553-5231

Relaxation refers to the procedure of enlarging the domain of a variational problem or the search space for the solution of a set of equations, to guarantee the existence of solutions. In optimal control theory relaxation involves replacing the set of permissible velocities in the dynamic constraint by its convex hull. Usually the infimum cost is the same for the original optimal control problem and its relaxation. But it is possible that the relaxed infimum cost is strictly less than the infimum cost. It is important to identify such situations, because then we can no longer study the infimum cost by solving the relaxed problem and evaluating the cost of the relaxed minimizer. Following on from earlier work by Warga, we explore the relation between the existence of an infimum gap and abnormality of necessary conditions (i.e. they are valid with the cost multiplier set to zero). Two kinds of theorems are proved. One asserts that a local minimizer, which is not also a relaxed minimizer, satisfies an abnormal form of the Pontryagin Maximum Principle. The other asserts that a local relaxed minimizer that is not also a minimizer satisfies an abnormal form of the relaxed Pontryagin Maximum Principle.

Journal article

Vinter R, 2014, Obituary: Maria Petrou, leading authority on image processing (born 17 May 1953; died 15 October 2012), Pattern Recognition Letters, Vol: 48, Pages: 103-103, ISSN: 0167-8655

Journal article

Bettiol P, Frankowska H, Vinter RB, 2014, Improved Sensitivity Relations in State Constrained Optimal Control, Applied Mathematics and Optimization, Vol: 71, Pages: 353-377, ISSN: 1432-0606

Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both ‘full’ and ‘partial’ sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously. The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proof.
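In schematic form (sign and normalisation conventions vary across the literature; this is not the paper's precise statement), writing \(V\) for the value function, \(p\) for the costate and \(H(t)\) for the Hamiltonian evaluated along the optimal trajectory \(x^*\), the two relations read

\[
-p(t) \in \partial_x V\big(t, x^*(t)\big) \quad \text{(partial)},
\qquad
\big(H(t), -p(t)\big) \in \partial_{t,x} V\big(t, x^*(t)\big) \quad \text{(full)},
\]

where \(\partial\) denotes the Clarke subdifferential.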

Journal article

Palladino M, Vinter RB, 2014, Minimizers That Are Not Also Relaxed Minimizers, SIAM Journal on Control and Optimization, Vol: 52, Pages: 2164-2179, ISSN: 1095-7138

Relaxation is a widely used regularization procedure in optimal control, involving the replacement of velocity sets by their convex hulls, to ensure the existence of a minimizer. It can be an important step in the construction of suboptimal controls for the original, unrelaxed, optimal control problem (which may not have a minimizer), based on obtaining a minimizer for the relaxed problem and approximating it. In some cases the infimum cost of the unrelaxed problem is strictly greater than the infimum cost over relaxed state trajectories; we need to identify such situations because then the above procedure fails. The noncoincidence of these two infima leads also to a breakdown of the dynamic programming method because, typically, solving the Hamilton--Jacobi equation yields the minimum cost of the relaxed, not the original, optimal control problem. Following on from earlier work by Warga, we explore the relation between, on the one hand, noncoincidence of the minimum cost of the optimal control and its relaxation and, on the other, abnormality of necessary conditions (in the sense that they take a degenerate form in which the cost multiplier is set to zero). Two kinds of theorems are proved, depending on whether we focus attention on minimizers of the unrelaxed or the relaxed formulation of the optimal control problem. One kind asserts that a local minimizer which is not also a relaxed local minimizer satisfies an abnormal form of the Hamiltonian inclusion. The other asserts that a relaxed local minimizer that is not also a local minimizer also satisfies an abnormal form of Hamiltonian inclusion.

Journal article

Gavriel C, Vinter RB, 2014, Second Order Sufficient Conditions for Optimal Control Problems with Non-unique Minimizers: An Abstract Framework, Applied Mathematics and Optimization, Vol: 70, Pages: 411-442, ISSN: 1432-0606

Standard second order sufficient conditions in optimal control theory provide not only the information that an extremum is a weak local minimizer, but also tell us that the extremum is locally unique. It follows that such conditions will never cover problems in which the extremum is continuously embedded in a family of constant cost extrema. Such problems arise in periodic control, when the cost is invariant under time translations, in shape optimization, where the cost is invariant under Euclidean transformations (translations and rotations of the extremal shape), and other areas where the domain of the optimization problem does not really comprise elements in a linear space, but rather an equivalence class of such elements. We supply a set of sufficient conditions for minimizers that are not locally unique, tailored to problems of this nature. The sufficient conditions are in the spirit of earlier conditions for ‘non-isolated’ minima, in the context of general infinite dimensional nonlinear programming problems provided by Bonnans, Ioffe and Shapiro, and require coercivity of the second variation in directions orthogonal to the constant cost set. The emphasis in this paper is on the derivation of directly verifiable sufficient conditions for a narrower class of infinite dimensional optimization problems of special interest. The role of the conditions in providing easy-to-use tests of local optimality of a non-isolated minimum, obtained by numerical methods, is illustrated by an example in optimal control.

Journal article

Vinter RB, 2014, The Hamiltonian Inclusion for Nonconvex Velocity Sets, SIAM Journal on Control and Optimization, Vol: 52, Pages: 1237-1250, ISSN: 1095-7138

Since Clarke's 1973 proof of the Hamiltonian inclusion for optimal control problems with convex velocity sets, there has been speculation (and, more recently, speculation relating to a stronger, partially convexified version of the Hamiltonian inclusion) as to whether these necessary conditions are valid in the absence of the convexity hypothesis. The issue was in part resolved by Clarke himself when, in 2005, he showed that $L^{\infty}$ local minimizers satisfy the Hamiltonian inclusion. In this paper it is shown, by counterexample, that the Hamiltonian inclusion (and so also the stronger partially convexified Hamiltonian inclusion) are not in general valid for nonconvex velocity sets when the local minimizer in question is merely a $W^{1,1}$ local minimizer, not an $L^{\infty}$ local minimizer. The counterexample demonstrates that the need to consider $L^{\infty}$ local minimizers, not $W^{1,1}$ local minimizers, in the proof of the Hamiltonian inclusion for nonconvex velocity sets is fundamental, not just a technical restriction imposed by currently available proof techniques. The paper also establishes the validity of the partially convexified Hamiltonian inclusion for $W^{1,1}$ local minimizers under a normality assumption, thereby correcting earlier assertions in the literature.
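For reference, the two notions of local minimizer contrasted above differ only in the norm defining the neighbourhood of competing feasible trajectories: \(\bar x\) is an \(L^\infty\) (respectively \(W^{1,1}\)) local minimizer if, for some \(\epsilon > 0\), it minimizes the cost over feasible trajectories \(x\) with

\[
\| x - \bar x \|_{L^\infty} \le \epsilon
\qquad \text{respectively} \qquad
| x(0) - \bar x(0) | + \| \dot x - \dot{\bar x} \|_{L^1} \le \epsilon .
\]

Since \(W^{1,1}\)-closeness implies \(L^\infty\)-closeness, every \(L^\infty\) local minimizer is a \(W^{1,1}\) local minimizer, so necessary conditions asserted for all \(W^{1,1}\) local minimizers are the stronger statements.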

Journal article

Clark JMC, Kountouriotis PA, Vinter RB, 2014, Gaussian mixture filtering for range only tracking problems, Pages: 49-54

Range only tracking problems arise in extended data collection for inverse synthetic aperture radar applications, robotics, navigation and other areas. For such problems, the conditional density of the state variable given the measurement history is multi-modal or exhibits curvature, even in seemingly benign scenarios. For this reason, the use of the extended Kalman filter (EKF) and other nonlinear filtering techniques based on Gaussian approximations can result in inaccurate and unreliable estimates. In this paper, we introduce a new filter specifically designed for range only tracking, called the Gaussian mixture range only filter (GMROF). The filter recursively generates Gaussian mixture approximations to the conditional density. The filter equations are derived by analytic techniques based on the specific nonlinearities arising in range only tracking. Simulation results, based on scenarios taken from earlier comparative studies, indicate that the GMROF consistently outperformed the EKF, and achieved the accuracy of particle filters while significantly reducing the computational cost.
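The structural point can be sketched in generic notation (not the paper's specific filter equations): with a range-only measurement, the conditional density is approximated by a Gaussian mixture rather than a single Gaussian,

\[
y_k = \| x_k - s \| + v_k,
\qquad
p(x_k \mid y_{1:k}) \;\approx\; \sum_{i=1}^{N_k} w_k^i \,\mathcal{N}\big(x_k;\, \mu_k^i, \Sigma_k^i\big),
\qquad \sum_i w_k^i = 1,
\]

where \(s\) is the sensor position, \(v_k\) the measurement noise, and the weights, means and covariances are updated recursively; a single-Gaussian filter such as the EKF corresponds to \(N_k = 1\).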

Conference paper

Bettiol P, Boccia A, Vinter RB, 2013, Stratified Necessary Conditions for Differential Inclusions with State Constraints, SIAM Journal on Control and Optimization, Vol: 51, Pages: 3903-3917, ISSN: 1095-7138

The concept of stratified necessary conditions for optimal control problems, whose dynamic constraint is formulated as a differential inclusion, was introduced by F. H. Clarke. These are conditions satisfied by a feasible state trajectory that achieves the minimum value of the cost over state trajectories whose velocities lie in a time-varying open ball of specified radius about the velocity of the state trajectory of interest. Considering different radius functions stratifies the interpretation of “minimizer.” In this paper we prove stratified necessary conditions for optimal control problems involving pathwise state constraints. As was shown by Clarke in the state constraint-free case, we find that, also in our more general setting, the stratified necessary conditions yield generalizations of earlier optimality conditions for unbounded differential inclusions as simple corollaries. Some examples are provided, giving insights into the nature of the hypotheses invoked for the derivation of stratified necessary conditions and into the scope for their further refinement.
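The stratified notion of minimizer described above can be written schematically: given a radius function \(R(\cdot)\), the state trajectory \(x^*\) is required to minimize the cost only over feasible trajectories \(x\) whose velocities satisfy

\[
| \dot x(t) - \dot x^*(t) | \;\le\; R(t) \quad \text{a.e. } t,
\]

so that different choices of \(R\) give different senses in which \(x^*\) is a minimizer.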

Journal article

Bettiol P, Vinter RB, 2013, Estimates on trajectories in a closed set with corners for (t,x) dependent data, Mathematical Control and Related Fields, Vol: 3, Pages: 245-267, ISSN: 2156-8472

Estimates on the distance of a given process from the set of processes that satisfy a specified state constraint, expressed in terms of the state constraint violation, are important analytical tools in state constrained optimal control theory; they have been employed to ensure the validity of the Maximum Principle in normal form, to establish regularity properties of the value function, to justify interpreting the value function as a unique solution of the Hamilton-Jacobi equation, and for other purposes. A range of estimates are required, which differ according to the metrics used to measure the 'distance' and the modulus θ(h) of state constraint violation h in terms of which the estimates are expressed. Recent research has shown that simple linear estimates are valid when the state constraint set A has smooth boundary, but do not generalize to a setting in which the boundary of A has corners. Indeed, for a velocity set F which does not depend on (t,x) and for state constraints taking the form of the intersection of two closed half-spaces (the simplest case of a boundary with corners), the best distance estimate we can hope for, involving the W1,1 metric on state trajectories, is a super-linear estimate expressed in terms of the h|log(h)| modulus. But distance estimates involving the h|log(h)| modulus are not in general valid when the velocity set F(.,x) is required merely to be continuous, while not even distance estimates involving the weaker Hölder modulus h^α (with α arbitrarily small) are in general valid when F(.,x) is allowed to be discontinuous. This paper concerns the validity of distance estimates when the velocity set F(t,x) is (t,x)-dependent and satisfies standard hypotheses (linear growth, Lipschitz x-dependence and an inward pointing condition). Hypotheses are identified for the validity of distance estimates, involving both the h|log(h)| and linear moduli, within the framework of control systems described by a controlled differential equation.
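Schematically (generic notation), writing \(h\) for the maximal state-constraint violation of a given trajectory \(\hat x\), the two moduli discussed above correspond to estimates of the form

\[
\inf\big\{ \,\| x - \hat x \|_{W^{1,1}} \;:\; x \ \text{a system trajectory with } x(t) \in A \ \text{for all } t \,\big\}
\;\le\; K\, h\, |\log h|
\qquad \big(\text{respectively } \le K\,h \ \text{for the linear modulus}\big),
\]

with \(K\) independent of \(\hat x\); the paper gives hypotheses on the \((t,x)\)-dependent velocity set under which each form is valid.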

Journal article

