Speaker: Professor Milan Vojnovic (London School of Economics & Political Science (LSE))

Abstract: In this talk, we present new results on the rate of convergence of gradient descent and MM algorithms for fitting the parameters of generalised Bradley-Terry models of ranking data, including the Bradley-Terry model, Luce’s choice model, and the Plackett-Luce ranking model. We consider both maximum likelihood estimation and Bayesian inference. We establish that the rate of convergence is linear and present tight bounds on it; the MM algorithm is shown to have essentially the same convergence rate as gradient descent. For maximum likelihood estimation, the key parameter determining the convergence rate is the algebraic connectivity of a matrix whose elements are the counts of paired comparisons between distinct pairs of items. For Bayesian inference, we show that both gradient descent and MM algorithms can be arbitrarily slow, depending on the parameters of the prior distribution (we demonstrate this for a prior commonly used in practice). We then derive new, simple-to-implement, accelerated iterative optimization methods that resolve this slow-convergence issue.
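The talk concerns the convergence rate of the MM algorithm rather than its implementation, but for orientation, the following is a minimal sketch of the classical MM update for the plain Bradley-Terry model (as in Hunter, 2004), written in Python/NumPy. The function name, data layout, and example data are illustrative assumptions, not taken from the talk.

import numpy as np

def bradley_terry_mm(wins, num_iters=500, tol=1e-10):
    """Fit Bradley-Terry strengths w, with P(i beats j) = w_i / (w_i + w_j),
    by the classical MM algorithm; assumes a connected comparison graph.
    wins[i, j] = number of times item i beat item j (illustrative layout)."""
    n = wins.shape[0]
    comparisons = wins + wins.T              # n_ij: comparisons between i and j
    total_wins = wins.sum(axis=1)            # W_i: total wins of item i
    w = np.full(n, 1.0 / n)
    for _ in range(num_iters):
        # MM update: w_i <- W_i / sum_{j != i} n_ij / (w_i + w_j)
        denom = comparisons / (w[:, None] + w[None, :])
        np.fill_diagonal(denom, 0.0)
        w_new = total_wins / denom.sum(axis=1)
        w_new /= w_new.sum()                 # fix the scale (identifiability)
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

# Illustrative data: three items, wins[i, j] = times item i beat item j.
wins = np.array([[0., 6., 4.],
                 [2., 0., 5.],
                 [1., 3., 0.]])
print(bradley_terry_mm(wins))

As the abstract indicates, the linear convergence rate of iterations like this one is governed by the algebraic connectivity of the matrix of paired-comparison counts (the comparisons matrix above), so sparse or poorly connected comparison data is exactly the regime where convergence degrades.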