BEGIN:VCALENDAR
PRODID:-//eluceo/ical//2.0/EN
VERSION:2.0
CALSCALE:GREGORIAN
BEGIN:VEVENT
UID:be63ffc3db7d5ddbec8c2d813bf6c535
DTSTAMP:20240722T074808Z
SUMMARY:IPC Lab Seminars | Stefan Vlaski – Learning over Graphs – Beyon
d Consensus and Convexity
DESCRIPTION:If you are interested in attending this event\, please get in t
ouch using the contact information listed.\nAbstract\n\nMost learning prob
lems\, from linear or logistic regression to deep learning\, are formulate
d as optimization problems\, where the objective is to pursue a model that
best describes the available data. On the other hand\, a trend toward
an increasingly networked society has sparked a need for the development
and analysis of decentralized learning algorithms\, where a collection of
intelligent agents coordinate in solving a more challenging inference prob
lem without the need for a central parameter server. Instead\, agents exch
ange information only locally\, as defined by a graph topology\, resultin
g in scalable and robust mechanisms.\n\nWe describe two recent directi
ons in the study of learning algorithms over graphs. First\, we will deviate
from the widespread “consensus optimization” setting\, where agents are
forced to agree on a common model\, resulting in poor performance i
n heterogeneous settings. We will show how different task-relatedness mode
ls give rise to a family of multi-task learning algorithms over graphs\, w
hich allow for improved learning performance without the need to force con
sensus. We will then show how\, in the absence of a task-relatedness model
\, multi-task learning over graphs is still possible via a decentralized v
ariant of model-agnostic meta-learning (MAML).\n\nThe second part will stu
dy the impact of the loss function chosen to quantify model fit. The dynam
ics of decentralized learning algorithms with convex loss functions are fai
rly well understood\, yet performance guarantees in non-convex environment
s have long remained elusive. This is because non-convex loss
functions can be riddled with local minima and saddle-points\, where the
gradient vanishes (and hence gradient descent stagnates)\, while performan
ce can be arbitrarily poor. This is in contrast to the empirical success o
f deep learning\, which gives rise to non-convex loss surfaces\, suggestin
g that stochastic gradient descent\, as implemented via backpropagation\,
avoids saddle-points. We review recent results shedding light on these dyn
amics and show how decentralized algorithms can continue to match centrali
zed ones\, even when it comes to evading saddle-points.\n\nAbout the spe
aker\nStefan Vlaski received the B.Sc. degree in Electrical Engineering fr
om the Technical University of Darmstadt\, Germany\, in 2013 and the M.S. in Electr
ical Engineering as well as Ph.D. in Electrical and Computer Engineering f
rom the University of California\, Los Angeles in 2014 and 2019\, respective
ly. He is currently a postdoctoral researcher at the Adaptive Systems Labo
ratory\, EPFL\, Switzerland. His research interests are in machine learnin
g\, signal processing\, and optimization. His current focus is on the deve
lopment and study of learning algorithms with a particular emphasis on ada
ptive and decentralized solutions.
URL:https://www.imperial.ac.uk/events/136146/ipc-lab-seminars-stefan-vlaski
-learning-over-graphs-beyond-consensus-and-convexity/
DTSTART;TZID=Europe/London:20210616T140000
DTEND;TZID=Europe/London:20210616T153000
LOCATION:United Kingdom
END:VEVENT
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
DTSTART:20210328T010000
TZNAME:BST
TZOFFSETTO:+0100
TZOFFSETFROM:+0000
END:DAYLIGHT
END:VTIMEZONE
END:VCALENDAR