Title:

Dynamic Programs with Shared Resources and Signals: Dynamic Fluid Policies and Asymptotic Optimality

Abstract:

We consider a sequential decision problem involving shared resources and signals, in which a decision maker repeatedly observes some exogenous information (the signal), modeled as a finite-state Markov process, and then allocates a limited amount of a shared resource across a set of projects. The framework encompasses a number of applications and generalizes Markovian multi-armed bandit problems by (a) incorporating exogenous information through the signal and (b) allowing for more general resource allocation decisions. Such problems are naturally formulated as stochastic dynamic programs (DPs), but solving the DP is impractical unless the number of projects is small. In this paper, we develop a Lagrangian relaxation and a DP formulation of the corresponding fluid relaxation (a dynamic fluid relaxation) that provide upper bounds on the optimal value function as well as a feasible policy. We develop an iterative primal-dual algorithm for solving the dynamic fluid relaxation and analyze the performance of the feasible dynamic fluid policy. Our performance analysis implies that, under mild conditions, the dynamic fluid relaxation bound and the feasible policy are asymptotically optimal as the number of projects grows large. Our Lagrangian relaxation uses Lagrange multipliers that depend on the history of past signals in each period: we show that bounds and analogous policies based on restricted forms of Lagrange multipliers (e.g., multipliers that depend only on the current signal state in each period) in general lead to a performance gap that is linear in the number of projects and thus are not asymptotically optimal in the regime of many projects. We demonstrate the model and results in two applications: (i) a dynamic capital budgeting problem and (ii) a multi-location inventory management problem with limited production capacity and demands that are correlated across locations through a changing market state.
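
To give a rough sense of the relaxation described above, here is a minimal sketch in illustrative notation (the symbols N, T, b, x_{n,t}, a_{n,t}, r_n, and \lambda_t below are our own assumptions for exposition, not the talk's exact formulation). Suppose there are N projects with states x_{n,t}, the signal s_t is a finite-state Markov chain with history h_t = (s_1, \ldots, s_t), and the allocations a_{n,t} must satisfy a shared per-period budget \sum_{n=1}^{N} a_{n,t} \le b over a horizon of T periods. Dualizing this coupling constraint with history-dependent multipliers \lambda_t(h_t) \ge 0 gives, by weak duality, an upper bound on the optimal value V^{\star} that decomposes across projects:

\[
V^{\star} \;\le\; V^{\lambda}
\;=\; \mathbb{E}\!\left[\sum_{t=1}^{T} \lambda_t(h_t)\, b\right]
\;+\; \sum_{n=1}^{N} \max_{\pi_n}\,
\mathbb{E}\!\left[\sum_{t=1}^{T}
\bigl( r_n(x_{n,t}, a_{n,t}, s_t) - \lambda_t(h_t)\, a_{n,t} \bigr)\right].
\]

Each inner maximization is a single-project dynamic program driven by the common signal, which is what makes the relaxation tractable; restricting \lambda_t to depend only on the current signal state s_t, rather than the full history h_t, corresponds to the weaker restricted bounds discussed in the abstract.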

Biography:

David B. Brown is a Professor at the Fuqua School of Business, Duke University. His research focuses on decision-making under uncertainty, approximate dynamic programming, and dynamic resource allocation problems. His recent work includes the development and analysis of methods for network revenue management, dynamic pricing in shared vehicle systems, assortment optimization with demand learning, stochastic scheduling, and sequential search.

Zoom Meeting Details