Imperial College London

Dr Thulasi Mylvaganam

Faculty of Engineering, Department of Aeronautics

Senior Lecturer in Control Engineering

Contact

 

+44 (0)20 7594 5129
t.mylvaganam

 
 

Location

 

221 City and Guilds Building, South Kensington Campus



 

Publications


53 results found

Nortmann B, Monti A, Sassano M, Mylvaganam T, et al., 2024, Nash equilibria for linear quadratic discrete-time dynamic games via iterative and data-driven algorithms, IEEE Transactions on Automatic Control, Pages: 1-15, ISSN: 0018-9286

Determining feedback Nash equilibrium solutions of nonzero-sum dynamic games is generally challenging. In this paper, we propose four different iterative algorithms to find Nash equilibrium strategies for discrete-time linear quadratic games. The strategy update laws are based on the solution of either Lyapunov or Riccati equations for each player. Local convergence criteria are discussed. Motivated by the fact that in many practical scenarios each player in the game may have access to different (incomplete) information, we also introduce purely data-driven implementations of the algorithms. This allows the players to reach a Nash equilibrium solution of the game via scheduled experiments and without knowledge of each other's performance criteria or of the system dynamics. The efficacy of the presented algorithms is illustrated via numerical examples and a practical example involving human-robot interaction.

Journal article
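
As a rough illustration of the kind of Riccati-based iteration described in the abstract above, the sketch below (illustrative only, with made-up problem data, and only one plausible update scheme rather than the authors' algorithms) alternates best responses computed from standard discrete-time algebraic Riccati equations. A scalar game is used so that each Riccati equation remains solvable at every step; as noted above, convergence is in general only local.

```python
# Best-response iteration for a two-player discrete-time LQ game
#   x_{k+1} = A x_k + B1 u1_k + B2 u2_k,
#   J_i = sum_k ( x_k' Qi x_k + u_{i,k}' Ri u_{i,k} ),   u_i = -Ki x_k.
# Hypothetical data; one possible Riccati-based scheme, shown for illustration.
import numpy as np
from scipy.linalg import solve_discrete_are

def lq_gain(A, B, Q, R):
    """Optimal state-feedback gain K (u = -K x) from the discrete-time ARE."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def nash_iteration(A, B1, B2, Q1, Q2, R1, R2, iters=100):
    K1 = np.zeros((B1.shape[1], A.shape[0]))
    K2 = np.zeros((B2.shape[1], A.shape[0]))
    for _ in range(iters):
        K1 = lq_gain(A - B2 @ K2, B1, Q1, R1)  # player 1 best-responds to K2
        K2 = lq_gain(A - B1 @ K1, B2, Q2, R2)  # player 2 best-responds to K1
    return K1, K2

# Scalar example (unstable open loop, a = 1.1).
A = np.array([[1.1]]); B1 = np.array([[1.0]]); B2 = np.array([[1.0]])
Q1 = np.array([[1.0]]); Q2 = np.array([[2.0]])
R1 = np.array([[1.0]]); R2 = np.array([[1.0]])
K1, K2 = nash_iteration(A, B1, B2, Q1, Q2, R1, R2)
rho = max(abs(np.linalg.eigvals(A - B1 @ K1 - B2 @ K2)))
print("K1 =", K1, " K2 =", K2, " closed-loop spectral radius =", rho)
```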

Monti A, Nortmann B, Mylvaganam T, Sassano M, et al., 2024, Feedback and open-loop Nash equilibria for LQ infinite-horizon discrete-time dynamic games, SIAM Journal on Control and Optimization, ISSN: 0363-0129

We consider dynamic games defined over an infinite horizon, characterized by linear, discrete-time dynamics and quadratic cost functionals. Considering such linear-quadratic (LQ) dynamic games, we focus on their solutions in terms of Nash equilibrium strategies. Both Feedback (F-NE) and Open-Loop (OL-NE) Nash equilibrium solutions are considered. The contributions of the paper are threefold. First, our detailed study reveals some interesting structural insights in relation to F-NE solutions. Second, as a stepping stone towards our consideration of OL-NE strategies, we consider a specific infinite-horizon discrete-time (single-player) optimal control problem, wherein the dynamics are influenced by a known exogenous input and draw connections between its solution obtained via Dynamic Programming and Pontryagin’s Minimum Principle. Finally, we exploit the latter result to provide a characterization of OL-NE strategies of the class of infinite-horizon dynamic games. The results and key observations made throughout the paper are illustrated via a numerical example.

Journal article

Scarpa ML, Mylvaganam T, 2024, Open-loop and feedback LQ potential differential games for Multi-Agent Systems, 2023 62nd IEEE Conference on Decision and Control, Publisher: Institute of Electrical and Electronics Engineers, ISSN: 2576-2370

Open-loop and feedback potential differential games for multi-agent systems are considered in this paper. Constructive sufficient conditions under which a linear quadratic differential game constitutes a potential differential game are provided. The conditions enable the construction of associated optimal control problems that yield (at significantly reduced computational complexity) solutions (in terms of open-loop and feedback Nash equilibrium strategies) of the original differential game. The results are demonstrated on a practically-motivated example that concerns spacecraft formation control.

Conference paper

Nortmann B, Mylvaganam T, 2023, Approximate Nash equilibria for discrete-time linear quadratic dynamic games, 22nd IFAC World Congress, Publisher: Elsevier, Pages: 1760-1765, ISSN: 2405-8963

It is generally challenging to determine Nash equilibrium solutions of nonzero-sum dynamic games, even for games characterised by a quadratic cost and linear dynamics, and particularly in the discrete-time, infinite-horizon case. Motivated by this, we propose and characterise a notion of approximate feedback Nash equilibrium solutions for this class of dynamic games, the epsilon-alpha-beta-Nash equilibrium, which provides guarantees on the convergence rate of the trajectories of the resulting closed-loop system. The efficacy of the results is demonstrated via a simulation example involving macroeconomic policy design.

Conference paper

Nortmann B, Monti A, Sassano M, Mylvaganam T, et al., 2023, Feedback Nash equilibria for scalar two-player linear-quadratic discrete-time dynamic games, 22nd IFAC World Congress, Publisher: Elsevier, Pages: 1772-1777, ISSN: 2405-8963

In this paper, we consider discrete-time, scalar, two-player, linear-quadratic dynamic games and study the coupled algebraic equations characterising feedback Nash equilibria. Using geometric arguments, we first analyse the possible number of distinct feedback Nash equilibrium solutions a game may admit and discuss properties of different solutions, before deriving conditions for the existence of no, one, two or three distinct feedback Nash equilibria. Finally, illustrative numerical simulations corroborate the theoretical findings.

Conference paper
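
In the scalar, two-player case considered above, the coupled algebraic equations characterising a feedback Nash equilibrium can be written out explicitly and handed to a numerical root-finder. The sketch below uses an assumed (standard) form of these conditions and made-up problem data, not the paper's code; different initial guesses may return different equilibria, consistent with the multiplicity analysis described in the abstract.

```python
# Feedback Nash conditions for a scalar two-player discrete-time LQ game
#   x_{k+1} = a x_k + b1 u1_k + b2 u2_k,  J_i = sum_k ( q_i x_k^2 + r_i u_{i,k}^2 ),
# with u_i = -k_i x.  Each player's stationarity condition is paired with its
# value equation; (k1, k2, p1, p2) solving all four is a feedback Nash equilibrium.
import numpy as np
from scipy.optimize import fsolve

a, b1, b2 = 1.2, 1.0, 1.0
q1, q2, r1, r2 = 1.0, 2.0, 1.0, 1.0

def residual(z):
    k1, k2, p1, p2 = z
    acl = a - b1 * k1 - b2 * k2
    return [
        (r1 + b1**2 * p1) * k1 - b1 * p1 * (a - b2 * k2),  # stationarity, player 1
        (r2 + b2**2 * p2) * k2 - b2 * p2 * (a - b1 * k1),  # stationarity, player 2
        p1 - (q1 + r1 * k1**2 + p1 * acl**2),              # value equation, player 1
        p2 - (q2 + r2 * k2**2 + p2 * acl**2),              # value equation, player 2
    ]

for guess in ([0.5, 0.5, 1.0, 1.0], [1.0, 0.1, 5.0, 1.0]):
    sol, _, ok, _ = fsolve(residual, guess, full_output=True)
    if ok == 1:
        print("k1=%.4f  k2=%.4f  p1=%.4f  p2=%.4f" % tuple(sol))
```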

Sassano M, Mylvaganam T, 2023, Finite-dimensional characterisation of optimal control laws over an infinite horizon for nonlinear systems, IEEE Transactions on Automatic Control, Vol: 68, Pages: 5954-5965, ISSN: 0018-9286

Infinite-horizon optimal control problems for nonlinear systems are considered. Due to the nonlinear and intrinsically infinite-dimensional nature of the task, solving such optimal control problems is challenging. In this paper an exact finite-dimensional characterisation of the optimal solution over the entire horizon is proposed. This is obtained via the (static) minimisation of a suitably defined function of (projected) trajectories of the underlying Hamiltonian dynamics on a hypersphere of fixed radius. The result is achieved in the spirit of the so-called shooting methods by introducing, via simultaneous forward/backward propagation, an intermediate shooting point much closer to the origin, regardless of the actual initial state. A modified strategy allows one to determine an arbitrarily accurate approximate solution by means of standard gradient-descent algorithms over compact domains. Finally, to further increase robustness of the control law, a receding-horizon architecture is envisioned by designing a sequence of shrinking hyperspheres. These aspects are illustrated by means of a benchmark numerical simulation.

Journal article

Nortmann B, Mylvaganam T, 2023, Direct data-driven control of LTV systems, IEEE Transactions on Automatic Control, Vol: 68, Pages: 4888-4895, ISSN: 0018-9286

Considering discrete-time linear time-varying systems with unknown dynamics, controllers guaranteeing bounded closed-loop trajectories, optimal performance and robustness to process and measurement noise are designed via convex feasibility and optimisation problems involving purely data-dependent linear matrix inequalities. For the special case of periodically time-varying systems, infinite-horizon guarantees are achieved based on finite-length data sequences.

Journal article
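
The paper above treats the time-varying case; as a simpler illustration of what a purely data-dependent linear matrix inequality looks like, the sketch below solves a standard data-driven stabilisation LMI for the time-invariant special case (a known formulation from the data-driven control literature, not necessarily the paper's exact conditions). The system matrices are made up and used only to generate the offline data.

```python
# Data-driven state-feedback design from input-state data only:
# find Q such that P = X0 Q is positive definite and
#   [ P        X1 Q ]
#   [ (X1 Q)'  P    ]  > 0,
# then K = U0 Q (X0 Q)^{-1} stabilises the (unknown) system.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5], [0.0, 1.1]])   # "unknown" system, used only for data
B = np.array([[0.0], [1.0]])
n, m, T = 2, 1, 10

U0 = rng.standard_normal((m, T))
X = np.zeros((n, T + 1))
X[:, 0] = rng.standard_normal(n)
for k in range(T):
    X[:, k + 1] = A @ X[:, k] + B @ U0[:, k]
X0, X1 = X[:, :T], X[:, 1:]

Q = cp.Variable((T, n))
P = X0 @ Q
lmi = cp.bmat([[P, X1 @ Q], [(X1 @ Q).T, P]])
prob = cp.Problem(cp.Minimize(0), [P == P.T, lmi >> 1e-6 * np.eye(2 * n)])
prob.solve(solver=cp.SCS)

K = U0 @ Q.value @ np.linalg.inv(X0 @ Q.value)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```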

Sassano M, Mylvaganam T, Astolfi A, 2023, Model-based policy iterations for nonlinear systems via controlled Hamiltonian dynamics, IEEE Transactions on Automatic Control, Vol: 68, Pages: 2683-2698, ISSN: 0018-9286

The infinite-horizon optimal control problem for nonlinear systems is studied. In the context of model-based, iterative learning strategies we propose an alternative definition and construction of the temporal difference error arising in Policy Iteration strategies. In such architectures the error is computed via the evolution of the Hamiltonian function (or, possibly, of its integral) along the trajectories of the closed-loop system. Herein the temporal difference error is instead obtained via two subsequent steps: first, the dynamics of the underlying costate variable in the Hamiltonian system is steered by means of a (virtual) control input in such a way that the stable invariant manifold becomes externally attractive. Then, the distance-from-invariance of the manifold, induced by approximate solutions, yields a natural candidate measure for the policy evaluation step. The policy improvement phase is then performed by means of standard gradient descent methods that allow the weights of the underlying functional approximator to be correctly updated. The above architecture then yields an iterative (episodic) learning scheme based on a scalar, constant reward at each iteration, the value of which is insensitive to the length of the episode, as in the original spirit of Reinforcement Learning strategies for discrete-time systems. Finally, the theory is validated by means of a numerical simulation involving an automatic flight control problem.

Journal article

Scarpa ML, Nortmann B, Pettersen KY, Mylvaganam T, et al., 2023, Data-driven control of planar snake robot locomotion, 61st IEEE Conference on Decision and Control, Publisher: IEEE

A direct data-driven strategy for snake-robot locomotion control is proposed in this paper. The approach leads to a time-varying state feedback controller with robustness guarantees. Instead of relying on exact model knowledge - which is often not available in practice - the proposed control strategy requires only input-state data collected during offline experiments. The efficacy of the proposed strategy is demonstrated via simulations. Notably, by using data to compensate for inaccurate models, the proposed control strategy can lead to significant improvements in closed-loop performance compared to existing (model-based) control strategies, while also eliminating the need for manual tuning of control parameters.

Conference paper

Nortmann B, Monti A, Mylvaganam T, Sassano M, et al., 2023, Nash equilibria for scalar LQ games: iterative and data-driven algorithms, 61st IEEE Conference on Decision and Control, Publisher: IEEE

Determining Nash equilibrium solutions of nonzero-sum dynamic games is generally challenging. In this paper, we propose four different iterative algorithms for finding Nash equilibrium strategies of discrete-time scalar linear quadratic games, with strategy updates based on the solution of either Lyapunov or Riccati equations. Local convergence criteria are discussed. Motivated by the fact that in many practical scenarios each player in the game may have access to different (incomplete) information, we introduce purely data-driven implementations of the algorithms. This allows the players to reach a Nash equilibrium solution of the game via scheduled experiments and without knowledge of each other’s performance criteria or of the system dynamics. The efficacy of the presented algorithms is illustrated via a numerical example.

Conference paper

Nortmann B, Mylvaganam T, 2022, Data-driven cost representation for optimal control and its relevance to a class of asymmetric linear quadratic dynamic games, 2022 European Control Conference, Publisher: IEEE, Pages: 2185-2190

Motivated by the fact that optimal performance criteria are often not known a priori, we present an approach to represent quadratic objective functions in the context of optimal control directly using finite, open-loop, non-optimal data trajectories of the state, input and a performance variable. Combined with a data-based representation of linear time-invariant systems this allows us to solve linear quadratic regulator problems with unknown dynamics and unknown cost matrices via data-dependent convex programmes. We show that this result is relevant to a specific class of linear quadratic games, in which one player is missing information regarding the control objectives of the other players and/or the system dynamics. The applicability of the presented results is highlighted via an example concerning human-robot interaction.

Conference paper
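
The observation underpinning the data-driven cost representation above is that a quadratic performance index is linear in the entries of its (unknown) weight matrices, so those entries can be recovered from non-optimal data. The sketch below is a simplified, hypothetical setup (not the paper's construction): it recovers the cost matrices by least squares from samples of the state, input and performance variable.

```python
# Recover unknown Q and R in  z = x' Q x + u' R u  from (x, u, z) samples.
# Each sample gives one linear equation in the entries of Q and R.
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 2, 1, 30
Q_true = np.array([[2.0, 0.5], [0.5, 1.0]])
R_true = np.array([[0.3]])

X = rng.standard_normal((N, n))
U = rng.standard_normal((N, m))
z = np.einsum('ki,ij,kj->k', X, Q_true, X) + np.einsum('ki,ij,kj->k', U, R_true, U)

def sym_features(V):
    # products V_i V_j for i <= j, off-diagonal terms doubled so that the
    # regression coefficients equal the entries of the symmetric weight matrix
    i, j = np.triu_indices(V.shape[1])
    feats = V[:, i] * V[:, j]
    feats[:, i != j] *= 2.0
    return feats

Phi = np.hstack([sym_features(X), sym_features(U)])
theta, *_ = np.linalg.lstsq(Phi, z, rcond=None)

nq = n * (n + 1) // 2
Q_hat = np.zeros((n, n))
Q_hat[np.triu_indices(n)] = theta[:nq]
Q_hat = Q_hat + Q_hat.T - np.diag(np.diag(Q_hat))
R_hat = theta[nq:].reshape(m, m)          # m = 1 here
print("Q_hat =\n", Q_hat, "\nR_hat =", R_hat)
```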

Bai H, Mylvaganam T, Scarciotti G, 2022, Model reduction for quadratic-bilinear systems using nonlinear moments, European Control Conference 2022, Publisher: IEEE, Pages: 1702-1707

We propose a steady-state based moment matching method for model reduction of quadratic-bilinear systems. Considering a large-scale quadratic-bilinear system possessing a stable equilibrium at the origin, the goal of this paper is to design a reduced order model which maintains certain properties of the original system. More precisely, it is required that the reduced order model also possesses a stable equilibrium at the origin and, further, it may be desirable that its relative degree matches that of the original system. We use the notion of nonlinear moments and exploit a formal power expansion to solve this problem. Two different families of reduced order models are provided in this paper and their use is demonstrated on an illustrative numerical example.

Conference paper

Sassano M, Mylvaganam T, Astolfi A, 2022, On the analysis of open-loop Nash equilibria admitting a feedback synthesis in nonlinear differential games, Automatica, Vol: 142, Pages: 1-8, ISSN: 0005-1098

Open-loop Nash equilibrium strategies for differential games described by nonlinear, input-affine, systems and cost functionals that are quadratic with respect to the control input are studied. First it is shown that the computation of such strategies hinges upon the solution of a system of nonlinear, time-varying, partial differential equations (PDEs) obtained by building on arguments borrowed from Pontryagin’s Minimum Principle and combined with Dynamic Programming considerations. Then, by relying on a state/costate interpretation of the above characterization, a feedback synthesis of the underlying open-loop strategy is obtained by solving linear first-order PDEs that ensure invariance of certain submanifolds in the state-space of the extended state/costate dynamics. These PDEs are the nonlinear counterpart of the well-known asymmetric Algebraic Riccati Equations arising in the study of linear quadratic Nash games.

Journal article

Sassano M, Mylvaganam T, Astolfi A, 2022, Infinite-horizon optimal control problems for nonlinear systems, IEEE Conference on Decision and Control (CDC 2021), Publisher: IEEE, Pages: 1721-1721

Infinite-horizon optimal control problems for nonlinear systems are studied and discussed. First, we thoroughly revisit the formulation of the underlying dynamic optimisation problem together with the classical results providing its solution. Then, we consider two alternative methods to construct solutions (or approximations thereof) of such problems, developed in recent years, that provide theoretical insights as well as computational benefits. While the considered methods are mostly based on tools borrowed from the theories of Dynamic Programming and Pontryagin’s Minimum Principle, or a combination of the two, the proposed control design strategies yield innovative, systematic and constructive methods to provide exact or approximate solutions of nonlinear optimal control problems. Interestingly, similar ideas can be extended also to linear and nonlinear differential games, namely dynamic optimisation problems involving several decision-makers. Due to their advantages in terms of computational complexity, the considered methods have found several applications. An example of this is provided through the consideration of the multi-agent collision avoidance problem, for which both simulations and experimental results are provided.

Conference paper

Mylvaganam T, Sassano M, Astolfi A, 2022, Nonlinear optimal control of a ballast-stabilized floating wind turbine via externally stabilised Hamiltonian dynamics, IEEE Conference on Decision and Control, Publisher: IEEE, Pages: 2428-2433

We consider the problem of controlling a ballast-stabilized offshore wind turbine. We formulate an optimal control problem with the objective of maximising the power generation while minimising structural fatigue of the wind turbine. Due to the nonlinear nature of the model, obtaining a solution to the above control task poses a severe challenge. Recalling that solutions of the optimal control problem are characterised by a certain (unstable) invariant manifold of the underlying Hamiltonian system, we demonstrate that nonlinear control strategies which approximate the solution of the optimal control problem can be constructed through the introduction of an externally stabilised Hamiltonian system. This observation enables the construction of an algorithm to compute (with relatively low computational complexity) an approximate solution of the optimal control problem, without ignoring nonlinearities in the control design. This approach has several benefits, as demonstrated via simulations on a ballast-stabilized offshore wind turbine.

Conference paper

Sassano M, Mylvaganam T, Astolfi A, 2021, Optimal control for nonlinear systems driven by a known exogenous signal, IEEE Transactions on Automatic Control, Vol: 67, Pages: 3678-3684, ISSN: 0018-9286

We consider optimal control problems for continuous-time systems with time-dependent dynamics, in which the time-dependence arises from the presence of a known exogenous signal. The problem has been elegantly solved in the case of linear input-affine systems, for which it has been shown that the solution has a remarkable structure: it is given by the sum of two contributions; a state feedback, which coincides with the unperturbed optimal control law, and a purely feedforward term in charge of compensating the effect of the exogenous signal. The objective of this note is to extend the above result to nonlinear input-affine systems. It is shown that, while some of the relevant features of the linear case indeed rely heavily on linearity and are not preserved in the nonlinear setting, several structural claims can be proved also in the nonlinear case.

Journal article

Cappello D, Mylvaganam T, 2021, Distributed differential games for control of multi-agent systems, IEEE Transactions on Control of Network Systems, Vol: 9, ISSN: 2325-5870

Motivated by the challenges arising in the field of multi-agent systems (MAS) control, we consider linear heterogeneous MAS subject to local communication and investigate the problem of designing distributed controllers for such systems. We provide a game theoretic framework for systematically designing distributed controllers, taking into account individual objectives of the agents and their possibly incomplete knowledge of the MAS. Linear state feedback control laws are obtained via the introduction of a distributed differential game, namely the combination of local non-cooperative differential games, which are solved in a decentralised fashion. Conditions for stability of the MAS are provided for the special cases of acyclic and strongly connected communication graph topologies. These results are then exploited to provide stability conditions for general graph topologies. The proposed framework is demonstrated on a tracking synchronisation problem associated with the design of a distributed secondary voltage controller for microgrids and on a numerical example.

Journal article

Cappello D, Garcin S, Mao Z, Sassano M, Paranjape A, Mylvaganam T, et al., 2021, A hybrid controller for multi-agent collision avoidance via a differential game formulation, IEEE Transactions on Control Systems Technology, Vol: 29, Pages: 1750-1757, ISSN: 1063-6536

We consider the multi-agent collision avoidance problem for a team of wheeled mobile robots. Recently, a local solution to this problem, based on a game theoretic formulation, has been provided and validated via numerical simulations. Due to its local nature the result is not well-suited for online application. In this paper we propose a novel hybrid implementation of the control inputs that yields a control strategy suited for the online navigation of mobile robots. Moreover, subject to a certain dwell time condition, the resulting trajectories are globally convergent. The control design is demonstrated both via simulations and experiments.

Journal article

Cappello D, Mylvaganam T, 2021, Approximate Nash equilibrium solutions of linear quadratic differential games, 21st IFAC World Congress, 2020, Publisher: IFAC Secretariat, Pages: 6685-6690, ISSN: 2405-8963

It is well known that finding Nash equilibrium solutions of nonzero-sum differential games is a challenging task. Focusing on a class of linear quadratic differential games, we consider three notions of approximate feedback Nash equilibrium solutions and provide a characterisation of these in terms of matrix inequalities which constitute quadratic feasibility problems. These feasibility problems are then recast first as bilinear feasibility problems and finally as rank constrained optimisation problems, i.e. a class of static problems frequently encountered in control theory.

Conference paper

Sassano M, Mylvaganam T, Astolfi A, 2021, (Cyclo-passive) Port-Controlled Hamiltonian dynamics in LQ differential games, American Control Conference, Publisher: IEEE

It is shown that the state/costate dynamics arising in a certain class of linear quadratic differential games can be interpreted as the interconnection of (cyclo-passive) Port-Controlled Hamiltonian systems. This property relies on the fact that the (virtual) energy functions associated to each player depend only on the interplay between the inputs of the players, as opposed to the system’s matrix or the individual cost functionals. Finally, it is shown that an arbitrarily accurate approximation of an open-loop Nash equilibrium strategy, obtained from the trajectories of the state/costate system, can be robustified by externally stabilizing the stable eigenspace of the underlying state/costate system.

Conference paper

Wrzos-Kaminska M, Mylvaganam T, Pettersen KY, Gravdahl JT, et al., 2020, Collision avoidance using mixed H2/H∞ control for an articulated intervention-AUV, European Control Conference, Publisher: IEEE, Pages: 881-888

In this paper we consider the problem of mixed H2/H∞ control to combine optimal and robust control for a double integrator system with nonlinear performance variables, and we apply this to control an articulated intervention autonomous underwater vehicle (AIAUV). The AIAUV has an articulated body like a snake robot, is equipped with thrusters, and can be used as a free-floating underwater manipulator. The objective is to control the joints of the AIAUV to desired setpoints without causing collisions between links or with obstacles in the environment. The mixed H2/H∞ problem is viewed as a differential game, and a set of matrix equations is solved in order to construct an approximate solution to the problem for a system described by double integrator dynamics and with nonlinear performance variables. A feedback linearising controller is derived to obtain the double integrator dynamics for the joints of the AIAUV, and the solution found for the mixed H2/H∞ control problem is applied to the resulting system. Simulations demonstrate that collisions between links of the manipulator are successfully avoided also in the presence of parameter uncertainties while regulating the joints to the desired setpoints, and the method can easily be extended to include collision avoidance with static and dynamic obstacles in the environment.

Conference paper

Nortmann B, Mylvaganam T, 2020, Data-Driven Control of Linear Time-Varying Systems, 59th IEEE Conference on Decision and Control

An identification-free control design strategy for discrete-time linear time-varying systems with unknown dynamics is introduced. The closed-loop system (under state feedback) is parametrised with data-dependent matrices obtained from an ensemble of input-state trajectories collected offline. This data-driven system representation is used to classify control laws yielding trajectories which satisfy a certain bound and to solve the linear quadratic regulator problem - both using data-dependent linear matrix inequalities only. The results are illustrated by means of a numerical example.

Conference paper

Cappello D, Mylvaganam T, 2020, A game theoretic framework for distributed control of multi-agent systems with acyclic communication topologies, 58th IEEE Conference on Decision and Control, Publisher: IEEE, Pages: 1-6

A multi-agent system consisting of heterogeneous agents, described by nonlinear dynamics and with inter-agent communication characterised by a directed acyclic graph, is considered in this paper. A framework for designing distributed control strategies obtained via the combination of local non-cooperative differential games is provided. The resulting dynamic (local) state-feedback control laws can be computed offline and in a decentralised manner. Conditions for ensuring stability of the overall closed-loop system are provided, before the proposed game theoretic framework is applied to a formation control problem.

Conference paper

Mylvaganam T, Sassano M, 2020, Disturbance attenuation by measurement feedback in nonlinear systems via immersion and algebraic conditions, IEEE Transactions on Automatic Control, Vol: 65, Pages: 854-860, ISSN: 0018-9286

In this paper we consider the problem of disturbance attenuation with internal stability for nonlinear, input-affine systems via measurement feedback. The solution to the above problem has been provided, three decades ago, in terms of the solution to a system of coupled nonlinear, first-order partial differential equations (PDEs). As a consequence, despite the rather elegant characterization of the solution, the presence of PDEs renders the control design synthesis almost infeasible in practice. Therefore, to circumvent such a computational bottleneck, in this paper we provide a novel characterization of the exact solution to the problem that does not hinge upon the explicit computation of the solution to any PDE. The result is achieved by considering the immersion of the nonlinear dynamics into an extended system for which locally positive definite functions solving the required PDEs may be directly provided in closed-form by relying only on the solutions to Riccati-like, state-dependent, algebraic matrix equations.

Journal article

Mylvaganam T, Possieri C, Sassano M, 2019, Global stabilization of nonlinear systems via hybrid implementation of dynamic continuous-time local controllers, Automatica, Vol: 106, Pages: 401-405, ISSN: 0005-1098

Given a continuous-time system and a dynamic control law such that the closed-loop system satisfies standard Lyapunov conditions for local asymptotic stability, we propose a hybrid implementation of the continuous-time control law. We demonstrate that subject to certain “relaxed” conditions, the hybrid implementation yields global asymptotic stability properties. These conditions can be further specialized to yield local/regional asymptotic stability with an enlarged basin of attraction with respect to the original control law. Two illustrative numerical examples are provided to demonstrate the main results.

Journal article

Scarciotti G, Mylvaganam T, 2019, Approximate infinite-horizon optimal control for stochastic systems, 57th IEEE Conference on Decision and Control (CDC), Publisher: IEEE

The policy of an optimal control problem for nonlinear stochastic systems can be characterized by a second-order partial differential equation for which solutions are not readily available. In this paper we provide a systematic method for obtaining approximate solutions for the infinite-horizon optimal control problem in the stochastic framework. The method is demonstrated on an illustrative numerical example in which the control effort is not weighted, showing that the technique is able to deal with one of the most striking features of stochastic optimal control.

Conference paper

Cappello D, Mylvaganam T, 2019, Distributed control of multi-agent systems via linear quadratic differential games with partial information, 57th IEEE Conference on Decision and Control, Publisher: Institute of Electrical and Electronics Engineers, ISSN: 0191-2216

A multi-agent system consisting of linear heterogeneous agents is considered in this paper. Distributed control laws for each agent are designed through the formulation of linear quadratic differential games with partial information. Exact and approximate solutions for the differential games are provided before the problem of formation control with limited communication is considered. A numerical example is provided to illustrate the theory.

Conference paper

Sassano M, Mylvaganam T, Astolfi A, 2019, An algebraic approach to dynamic optimisation of nonlinear systems: a survey and some new results, Journal of Control and Decision, Vol: 6, Pages: 1-29, ISSN: 2330-7706

Dynamic optimisation, with a particular focus on optimal control and nonzero-sum differential games, is considered. For nonlinear systems, solutions sought via the dynamic programming strategy are inevitably characterised by partial differential equations (PDEs) which are often difficult to solve. A detailed overview of a control design framework which enables the systematic construction of approximate solutions for optimal control problems and differential games without requiring the explicit solution of any PDE is provided, along with a novel design of a nonlinear control gain aimed at improving the ‘level of approximation’ achieved. Multi-agent systems are considered as a possible application of the theory.

Journal article

Mylvaganam T, Ortega R, Machado J, Astolfi A, et al., 2018, Dynamic zero finding for algebraic equations, European Control Conference, Publisher: IEEE, Pages: 1244-1249

In a variety of contexts, for example the solution of differential games and the control of power systems, the design of feedback control laws requires the solution of nonlinear algebraic equations: obtaining such solutions is often not trivial. Motivated by such situations we consider systems of nonlinear algebraic equations and propose a method for obtaining their solutions. In particular, a dynamical system is introduced and (locally) stabilizing control laws which ensure that elements of the state converge to a solution of the algebraic equations are given. Illustrative numerical examples are provided. In addition it is shown that the proposed method is applicable to determine the equilibria of electrical networks with constant power loads.

Conference paper
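
As a loose illustration of the idea above of recasting zero finding as a dynamical-systems problem, the sketch below integrates a simple Newton-type flow whose trajectories converge to a zero of a made-up pair of algebraic equations. This is a generic construction shown for illustration, not the specific dynamical system or stabilizing control laws proposed in the paper.

```python
# Solve F(x) = 0 by integrating the Newton flow  dx/dt = -J_F(x)^{-1} F(x),
# along which ||F(x(t))|| decays exponentially (while J_F stays invertible).
import numpy as np
from scipy.integrate import solve_ivp

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,   # circle of radius 2
                     x[1] - np.exp(-x[0])])     # exponential curve

def jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(-x[0]), 1.0]])

def flow(t, x):
    return -np.linalg.solve(jac(x), F(x))

sol = solve_ivp(flow, (0.0, 20.0), [1.0, 1.0], rtol=1e-8, atol=1e-10)
x_star = sol.y[:, -1]
print("approximate zero:", x_star, " residual:", F(x_star))
```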

Cristofaro A, Mylvaganam T, Bauso D, 2018, A two-point boundary value formulation of a class of multi-population mean-field games, IEEE Conference on Control Technology and Applications, Publisher: IEEE

We consider a multi-agent system consisting of several populations. The interaction between large populations of agents seeking to regulate their state on the basis of the distribution of the neighboring populations is studied. Examples of such interactions can typically be found in social networks and opinion dynamics, where heterogeneous agents or clusters are present and decisions are influenced by individual objectives as well as by global factors. In this paper, such a problem is posed as a multi-population mean-field game, for which solutions depend on two partial differential equations, namely the Hamilton-Jacobi-Bellman equation and the Fokker-Planck-Kolmogorov equation. The case in which the distributions of agents are sums of polynomials and the value functions are quadratic polynomials is considered. It is shown that for this class of problems, which can be considered as approximations of more general problems, a set of ordinary differential equations, with two-point boundary value conditions, can be solved in place of the more complicated partial differential equations characterizing the solution of the multi-population mean-field game.

Conference paper
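
As an illustration of how a two-point boundary value problem in ordinary differential equations can be solved numerically in place of the partial differential equations mentioned above, the sketch below applies a standard BVP solver to a stand-in scalar state/costate system (made-up dynamics and cost, not the multi-population mean-field game equations of the paper).

```python
# Two-point boundary value problem for a scalar LQ problem: minimise
#   (1/2) * integral of ( q x^2 + r u^2 ) dt  subject to  dx/dt = a x + b u,
# whose Hamiltonian conditions give
#   dx/dt = a x - (b^2/r) lam,   dlam/dt = -q x - a lam,
# with x(0) = x0 and lam(T) = 0 (free terminal state).
import numpy as np
from scipy.integrate import solve_bvp

a, b, q, r, x0, T = 0.5, 1.0, 1.0, 1.0, 1.0, 5.0

def odes(t, y):
    x, lam = y
    return np.vstack((a * x - (b**2 / r) * lam, -q * x - a * lam))

def bc(y_left, y_right):
    return np.array([y_left[0] - x0, y_right[1]])   # x(0) = x0, lam(T) = 0

t = np.linspace(0.0, T, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
u = -(b / r) * sol.sol(t)[1]          # optimal control recovered from the costate
print("u(0) ≈", u[0])
```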

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
