## Publications


Ban G-Y, Keskin NB, 2021, Personalized Dynamic Pricing with Machine Learning: High-Dimensional Features and Heterogeneous Elasticity, *Management Science*, Vol: 67, Pages: 5549-5568, ISSN: 0025-1909

We consider a seller who can dynamically adjust the price of a product at the individual customer level, by utilizing information about customers’ characteristics encoded as a d-dimensional feature vector. We assume a personalized demand model whose parameters depend on s out of the d features. The seller initially does not know the relationship between the customer features and the product demand but learns this through sales observations over a selling horizon of T periods. We prove that the seller’s expected regret, that is, the revenue loss against a clairvoyant who knows the underlying demand relationship, is at least of order [Formula: see text] under any admissible policy. We then design a near-optimal pricing policy for a semiclairvoyant seller (who knows which s of the d features are in the demand model) that achieves an expected regret of order [Formula: see text]. We extend this policy to a more realistic setting, where the seller does not know the true demand predictors, and show that this policy has an expected regret of order [Formula: see text], which is also near-optimal. Finally, we test our theory on simulated data and on a data set from an online auto loan company in the United States. On both data sets, our experimentation-based pricing policy is superior to intuitive and/or widely practiced customized pricing methods, such as myopic pricing and segment-then-optimize policies. Furthermore, our policy improves upon the loan company’s historical pricing decisions by 47% in expected revenue over a six-month period.

This paper was accepted by Noah Gans, stochastic models and simulation.
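
The learn-then-price setting described in the abstract can be illustrated with a deliberately simplified sketch (not the paper's policy, which interleaves exploration and exploitation with near-optimal regret): randomize prices for an initial exploration phase, fit a linear demand model in the customer features and price, then charge each customer the revenue-maximizing price under the fitted model. All names and the demand form below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting (not the paper's policy): linear demand with
# customer features x, demand(p, x) = a0 + a.x - b*p + noise.
d_feat, T_explore = 5, 2000
a0_true, a_true, b_true = 10.0, rng.uniform(0.5, 1.5, d_feat), 2.0

# --- Explore: charge randomized prices and record sales. ---
X = rng.normal(size=(T_explore, d_feat))
p = rng.uniform(1.0, 5.0, T_explore)
demand = a0_true + X @ a_true - b_true * p + rng.normal(0, 0.1, T_explore)

# --- Learn: ordinary least squares on (1, x, p). ---
design = np.column_stack([np.ones(T_explore), X, p])
coef, *_ = np.linalg.lstsq(design, demand, rcond=None)
a0_hat, a_hat, b_hat = coef[0], coef[1:-1], -coef[-1]

# --- Exploit: revenue p*(a0 + a.x - b*p) is maximized at p = (a0 + a.x)/(2b). ---
def personalized_price(x):
    return (a0_hat + x @ a_hat) / (2 * b_hat)
```

A pure explore-then-exploit rule like this wastes revenue during exploration; the paper's contribution is precisely to balance that trade-off and to exploit sparsity (s of the d features) in high dimension.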

Ban G-Y, 2020, Confidence Intervals for Data-Driven Inventory Policies with Demand Censoring, *Operations Research*, ISSN: 0030-364X

Ban G-Y, Gallien J, Mersereau AJ, 2019, Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method, *Manufacturing & Service Operations Management*, Vol: 21, Pages: 798-815, ISSN: 1523-4614

Problem definition: We study the practice-motivated problem of dynamically procuring a new, short-life-cycle product under demand uncertainty. The firm does not know the demand for the new product but has data on similar products sold in the past, including demand histories and covariate information such as product characteristics. Academic/practical relevance: The dynamic procurement problem has long attracted academic and practitioner interest, and we solve it in an innovative data-driven way with proven theoretical guarantees. This work is also the first to leverage the power of covariate data in solving this problem. Methodology: We propose a new combined forecasting and optimization algorithm called the residual tree method and analyze its performance via epiconvergence theory and computations. Our method generalizes the classical scenario tree method by using covariates to link historical data on similar products to construct demand forecasts for the new product. Results: We prove, under fairly mild conditions, that the residual tree method is asymptotically optimal as the size of the data set grows. We also numerically validate the method for problem instances derived using data from the global fashion retailer Zara. We find that ignoring covariate information leads to systematic bias in the optimal solution, translating to a 6%–15% increase in the total cost for the problem instances under study. We also find that solutions based on trees using just two to three branches per node, which is common in the existing literature, are inadequate, resulting in 30%–66% higher total costs compared with our best solution. Managerial implications: The residual tree is a new and generalizable approach that uses past data on similar products to manage new product inventories. We also quantify the value of covariate information and of granular demand modeling.
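
The residual idea at the heart of the method can be sketched in a single-period simplification (the paper itself builds a full multi-period scenario tree from the residuals): fit a demand model on past products' covariates, treat the fitted residuals as demand scenarios for the new product, and solve the procurement problem by sample average approximation over those scenarios. All numbers and names below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Past products: covariates X_old (e.g., product characteristics)
# and observed total demands D_old.
n_old, d = 200, 3
X_old = rng.normal(size=(n_old, d))
beta_true = np.array([3.0, -1.0, 2.0])
D_old = 50 + X_old @ beta_true + rng.normal(0, 4.0, n_old)

# 1) Fit a demand model on the past products.
design = np.column_stack([np.ones(n_old), X_old])
coef, *_ = np.linalg.lstsq(design, D_old, rcond=None)

# 2) Residuals become demand scenarios for the new product:
#    point forecast from its covariates, plus each historical residual.
residuals = D_old - design @ coef
x_new = np.array([1.0, 0.0, -1.0])
scenarios = coef[0] + x_new @ coef[1:] + residuals

# 3) Solve the single-period procurement (newsvendor) problem by
#    sample average approximation: order the b/(b+h) quantile.
b, h = 4.0, 1.0  # underage / overage costs per unit
q_star = np.quantile(scenarios, b / (b + h))
```

Ignoring the covariates here would center the scenarios on the pooled historical mean rather than the new product's forecast, which is exactly the systematic bias the paper quantifies.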

Ban G-Y, Rudin C, 2019, The Big Data Newsvendor: Practical Insights from Machine Learning, *Operations Research*, Vol: 67, Pages: 90-108, ISSN: 0030-364X

In Ban and Rudin’s (2018) “The Big Data Newsvendor: Practical Insights from Machine Learning,” the authors take an innovative machine-learning approach to a classic problem solved by almost every company, every day, for inventory management. By allowing companies to use large amounts of data to predict the correct answers to decisions directly, they avoid intermediate questions, such as “how many customers will we get tomorrow?” and instead can tell the company how much inventory to stock for these customers. This has implications for almost all other decision-making problems considered in operations research, which has traditionally considered data estimation separately from decision optimization. Their proposed methods are shown to work both analytically and empirically, with the latter explored in a hospital nurse staffing example in which the best one-step, feature-based newsvendor algorithm (the kernel-weights optimization method) is shown to beat the best-practice benchmark by 24% in out-of-sample cost in a fraction of the computation time.
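
The "decide directly from data" idea can be sketched as follows, assuming the linear-decision-rule variant of the approach: instead of forecasting demand and then optimizing, learn an ordering rule q(x) = w·x that minimizes the empirical newsvendor cost itself (equivalently, quantile regression at level b/(b+h)). The data, step size, and iteration count below are illustrative assumptions, and plain subgradient descent stands in for a proper LP or kernel solver.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic demand that depends on one feature (e.g., day-of-week load).
n = 1000
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])  # intercept + feature
D = 20 + 30 * X[:, 1] + rng.normal(0, 3.0, n)

b, h = 9.0, 1.0              # underage and overage costs; target quantile 0.9
tau = b / (b + h)

def newsvendor_cost(w):
    """Empirical newsvendor cost of the linear ordering rule q(x) = w.x."""
    q = X @ w
    return np.mean(b * np.maximum(D - q, 0) + h * np.maximum(q - D, 0))

# Subgradient descent on the convex, piecewise-linear empirical cost:
# the subgradient in q is -b where demand exceeds the order, +h elsewhere.
w = np.zeros(2)
for _ in range(20000):
    q = X @ w
    grad = X.T @ np.where(D > q, -b, h) / n
    w -= 0.05 * grad
```

Comparing `newsvendor_cost(w)` against a constant order at the unconditional 0.9-quantile of demand shows the value of conditioning the decision on features, which is the effect the nurse-staffing experiment measures at scale.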

Ban G-Y, El Karoui N, Lim AEB, 2018, Machine Learning and Portfolio Optimization, *Management Science*, Vol: 64, Pages: 1136-1154, ISSN: 0025-1909

The portfolio optimization model has limited impact in practice because of estimation issues when applied to real data. To address this, we adapt two machine learning methods, regularization and cross-validation, for portfolio optimization. First, we introduce performance-based regularization (PBR), where the idea is to constrain the sample variances of the estimated portfolio risk and return, which steers the solution toward one associated with less estimation error in the performance. We consider PBR for both mean-variance and mean-conditional value-at-risk (CVaR) problems. For the mean-variance problem, PBR introduces a quartic polynomial constraint, for which we make two convex approximations: one based on rank-1 approximation and another based on a convex quadratic approximation. The rank-1 approximation PBR adds a bias to the optimal allocation, and the convex quadratic approximation PBR shrinks the sample covariance matrix. For the mean-CVaR problem, the PBR model is a combinatorial optimization problem, but we prove its convex relaxation, a quadratically constrained quadratic program, is essentially tight. We show that the PBR models can be cast as robust optimization problems with novel uncertainty sets and establish asymptotic optimality of both sample average approximation (SAA) and PBR solutions and the corresponding efficient frontiers. To calibrate the right-hand sides of the PBR constraints, we develop new, performance-based k-fold cross-validation algorithms. Using these algorithms, we carry out an extensive empirical investigation of PBR against SAA, as well as L1 and L2 regularizations and the equally weighted portfolio. We find that PBR dominates all other benchmarks for two out of three Fama–French data sets.

This paper was accepted by Yinyu Ye, optimization.
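
The two ingredients the abstract names, regularization and performance-based cross-validation, can be illustrated with a much simpler cousin of PBR: a minimum-variance portfolio with ridge shrinkage of the sample covariance, where the shrinkage level is chosen by k-fold CV on realized out-of-fold portfolio variance. This is only a sketch of the general idea; PBR itself constrains the sampling variance of the estimated risk and return, which is more involved. The return data and parameter grid are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 240, 10                       # 240 return observations, 10 assets
R = rng.normal(0.01, 0.05, (n, p))   # i.i.d. placeholder returns

def min_var_weights(returns, lam):
    """Minimum-variance weights with ridge shrinkage of the covariance."""
    S = np.cov(returns, rowvar=False) + lam * np.eye(p)
    w = np.linalg.solve(S, np.ones(p))
    return w / w.sum()               # enforce the budget constraint

# k-fold CV: pick lambda by realized out-of-fold portfolio variance,
# i.e., score candidates on performance, not on fit to the returns.
lambdas, k = [0.0, 1e-3, 1e-2, 1e-1], 5
folds = np.array_split(np.arange(n), k)
cv_var = []
for lam in lambdas:
    oos = []
    for f in folds:
        train = np.setdiff1d(np.arange(n), f)
        w = min_var_weights(R[train], lam)
        oos.extend(R[f] @ w)         # out-of-fold portfolio returns
    cv_var.append(np.var(oos))
best_lam = lambdas[int(np.argmin(cv_var))]
```

Scoring candidates on out-of-sample portfolio performance rather than statistical fit is the same "performance-based" principle the paper's k-fold calibration algorithms formalize.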

Lim AEB, Shanthikumar JG, Vahn G-Y, 2012, Robust Portfolio Choice with Learning in the Framework of Regret: Single-Period Case, *Management Science*, Vol: 58, Pages: 1732-1746, ISSN: 0025-1909

In this paper, we formulate a single-period portfolio choice problem with parameter uncertainty in the framework of relative regret. Relative regret evaluates a portfolio by comparing its return to a family of benchmarks, where the benchmarks are the wealths of fictitious investors who invest optimally given knowledge of the model parameters, and is a natural objective when there is concern about parameter uncertainty or model ambiguity. The optimal relative regret portfolio is the one that performs well in relation to all the benchmarks over the family of possible parameter values. We analyze this problem using convex duality and show that it is equivalent to a Bayesian problem, where the Lagrange multipliers play the role of the prior distribution, and the learning model involves Bayesian updating of these Lagrange multipliers/prior. This Bayesian problem is unusual in that the prior distribution is endogenously chosen by solving the dual optimization problem for the Lagrange multipliers, and the objective function involves the family of benchmarks from the relative regret problem. These results show that regret is a natural means by which robust decision making and learning can be combined.

This paper was accepted by Dimitris Bertsimas, optimization.

Lim AEB, Shanthikumar JG, Vahn G-Y, 2011, Conditional value-at-risk in portfolio optimization: Coherent but fragile, *Operations Research Letters*, Vol: 39, Pages: 163-171, ISSN: 0167-6377

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.