BEGIN:VCALENDAR
VERSION:2.0
PRODID:www.imperial.ac.uk
BEGIN:VEVENT
UID:610c61a9851e2
DTSTART:20210611T130000Z
SEQUENCE:0
TRANSP:OPAQUE
DTEND:20210611T140000Z
URL:https://www.imperial.ac.uk/events/135939/statistics-seminar-dr-omar-riv
 asplata-deepmind-ucl-tbc/
LOCATION:United Kingdom
SUMMARY:[Statistics Seminar] Dr Omar Rivasplata (DeepMind/UCL): PAC-Bayes
 Analysis Beyond the Usual Bounds
CLASS:PUBLIC
DESCRIPTION:Title: PAC-Bayes Analysis Beyond the Usual Bounds.\nAbstract:
 We focus on a stochastic learning model where the learner observes a
 finite set of training examples\, and the output of the learning
 process is a data-dependent distribution over a space of hypotheses.
 The learned data-dependent distribution is then used to make randomized
 predictions\, and the high-level theme addressed here is guaranteeing
 the quality of predictions on examples that were not seen during
 training\, i.e. generalization. In this setting the unknown quantity of
 interest is the expected risk of the data-dependent randomized
 predictor\, for which upper bounds can be derived via a PAC-Bayes
 analysis\, leading to PAC-Bayes bounds.\nSpecifically\, we present a
 basic PAC-Bayes inequality for stochastic kernels\, from which one may
 derive extensions of various known PAC-Bayes bounds as well as novel
 bounds. We clarify the role of the requirements of fixed 'data-free'
 priors\, bounded losses\, and i.i.d. data. We highlight that those
 requirements were used to upper-bound an exponential moment term\,
 while the basic PAC-Bayes theorem remains valid without those
 restrictions. We present three bounds that illustrate the use of
 data-dependent priors\, including one for the unbounded square loss.
X-ALT-DESC;FMTTYPE=text/html:<p><b>Title:</b> PAC-Bayes Analysis Beyond
 the Usual Bounds.</p><p><b>Abstract:</b> We focus on a stochastic
 learning model where the learner observes a finite set of training
 examples\, and the output of the learning process is a data-dependent
 distribution over a space of hypotheses. The learned data-dependent
 distribution is then used to make randomized predictions\, and the
 high-level theme addressed here is guaranteeing the quality of
 predictions on examples that were not seen during training\, i.e.
 generalization. In this setting the unknown quantity of interest is the
 expected risk of the data-dependent randomized predictor\, for which
 upper bounds can be derived via a PAC-Bayes analysis\, leading to
 PAC-Bayes bounds.</p><p>Specifically\, we present a basic PAC-Bayes
 inequality for stochastic kernels\, from which one may derive
 extensions of various known PAC-Bayes bounds as well as novel bounds.
 We clarify the role of the requirements of fixed 'data-free' priors\,
 bounded losses\, and i.i.d. data. We highlight that those requirements
 were used to upper-bound an exponential moment term\, while the basic
 PAC-Bayes theorem remains valid without those restrictions. We present
 three bounds that illustrate the use of data-dependent priors\,
 including one for the unbounded square loss.</p>
DTSTAMP:20210805T220945Z
END:VEVENT
END:VCALENDAR