Imperial College London
Faculty of Engineering, Department of Electrical and Electronic Engineering

+44 (0)20 7594 6173 · c.ciliberto
Room 1003, Electrical Engineering, South Kensington Campus
BibTeX format

@inproceedings{denevi2018learning,
author = {Denevi, G and Ciliberto, C and Stamos, D and Pontil, M},
booktitle = {Advances in Neural Information Processing Systems},
pages = {10169--10179},
title = {Learning to learn around a common mean},
year = {2018}
}
RIS format (EndNote, RefMan)

AB - © 2018 Curran Associates Inc. All rights reserved. The problem of learning-to-learn (LTL), or meta-learning, is gaining increasing attention due to recent empirical evidence of its effectiveness in applications. The goal addressed in LTL is to select an algorithm that works well on tasks sampled from a meta-distribution. In this work, we consider the family of algorithms given by a variant of Ridge Regression in which the regularizer is the squared distance to an unknown mean vector. We show that, in this setting, the LTL problem can be reformulated as a Least Squares (LS) problem, and we exploit a novel meta-algorithm to solve it efficiently. At each iteration the meta-algorithm processes only one dataset. First, it estimates the stochastic LS objective by splitting this dataset into two subsets, used respectively to train and test the inner algorithm. Second, it performs a stochastic gradient step with the estimated value. Under specific assumptions, we present a bound for the generalization error of our meta-algorithm, which suggests the right splitting parameter to choose. When the hyper-parameters of the problem are fixed, this bound is consistent as the number of tasks grows, even if the sample size per task is kept constant. Preliminary experiments confirm our theoretical findings, highlighting the advantage of our approach over learning each task independently.
AU - Denevi,G
AU - Ciliberto,C
AU - Stamos,D
AU - Pontil,M
EP - 10179
PY - 2018///
SN - 1049-5258
SP - 10169
TI - Learning to learn around a common mean
ER -
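The abstract describes the meta-algorithm concretely enough to sketch in code: for each incoming task, split its dataset, solve the biased Ridge Regression in closed form on the training split, and take a stochastic gradient step on the common mean vector h using the test split. The sketch below is an illustrative reading of that procedure under simple assumptions (squared loss, a single pass over tasks, synthetic hyper-parameters), not the authors' implementation; the name `ltl_sgd` and the analytic gradient dw/dh = λA⁻¹, derived from the closed-form ridge solution, are this sketch's own choices.

```python
import numpy as np

def ltl_sgd(tasks, lam=0.1, lr=0.5, split=0.5):
    """One pass of the meta-algorithm: each task yields one stochastic
    gradient step on the common mean h.

    tasks : list of (X, y) pairs, X of shape (n, d), y of shape (n,).
    lam   : ridge parameter of the inner algorithm.
    lr    : step size of the outer stochastic gradient descent.
    split : fraction of each dataset used to train the inner algorithm.
    """
    d = tasks[0][0].shape[1]
    h = np.zeros(d)
    for X, y in tasks:
        k = int(split * len(y))
        Xtr, ytr, Xte, yte = X[:k], y[:k], X[k:], y[k:]
        A = Xtr.T @ Xtr / k + lam * np.eye(d)
        # Inner algorithm, in closed form: ridge regression whose
        # regularizer is the squared distance ||w - h||^2 to the mean h.
        w = h + np.linalg.solve(A, Xtr.T @ (ytr - Xtr @ h) / k)
        # Stochastic gradient of the test MSE w.r.t. h, using the fact
        # that the closed-form solution satisfies dw/dh = lam * A^{-1}.
        resid = Xte @ w - yte
        grad = lam * np.linalg.solve(A, Xte.T @ resid) * 2 / len(yte)
        h = h - lr * grad
    return h
```

Because the inner solution depends linearly on h, the outer objective is itself a least-squares problem in h, which is the reformulation the abstract refers to; on tasks whose weight vectors cluster around a common mean, the estimate h drifts toward that mean as tasks are processed.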