Publications from our Researchers

Several of our current PhD candidates and fellow researchers at the Data Science Institute have published, or are in the process of publishing, papers presenting their research.

Citation

BibTeX format

@inproceedings{Rosas:2021:10.1109/itw46852.2021.9457579,
author = {Rosas, FE and Mediano, PAM and Gastpar, M},
doi = {10.1109/itw46852.2021.9457579},
pages = {1--5},
publisher = {IEEE},
title = {Learning, compression, and leakage: Minimising classification error via meta-universal compression principles},
url = {http://dx.doi.org/10.1109/itw46852.2021.9457579},
year = {2021}
}
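
To cite this paper from a LaTeX document, save the entry above in your bibliography file and reference it by its key. A minimal example follows; the file name publications.bib is a placeholder:

\documentclass{article}
\begin{document}
NML-based classification attains heuristic PAC learning \cite{Rosas:2021:10.1109/itw46852.2021.9457579}.
\bibliographystyle{ieeetr}
\bibliography{publications}  % publications.bib contains the entry above
\end{document}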

RIS format (EndNote, RefMan)

TY  - CPAPER
AB  - Learning and compression are driven by the common aim of identifying and exploiting statistical regularities in data, which opens the door for fertile collaboration between these areas. A promising group of compression techniques for learning scenarios is normalised maximum likelihood (NML) coding, which provides strong guarantees for compression of small datasets — in contrast with more popular estimators whose guarantees hold only in the asymptotic limit. Here we consider an NML-based decision strategy for supervised classification problems, and show that it attains heuristic PAC learning when applied to a wide variety of models. Furthermore, we show that the misclassification rate of our method is upper bounded by the maximal leakage, a recently proposed metric to quantify the potential of data leakage in privacy-sensitive scenarios.
AU  - Rosas, FE
AU  - Mediano, PAM
AU  - Gastpar, M
DO  - 10.1109/itw46852.2021.9457579
EP  - 5
PB  - IEEE
PY  - 2021///
SP  - 1
TI  - Learning, compression, and leakage: Minimising classification error via meta-universal compression principles
UR  - http://dx.doi.org/10.1109/itw46852.2021.9457579
UR  - https://ieeexplore.ieee.org/document/9457579
UR  - http://hdl.handle.net/10044/1/90016
ER  -
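
The abstract above turns on the normalised maximum likelihood (NML) codelength, so a small illustration may help. The Python sketch below computes the NML codelength for the Bernoulli model class and uses it in a toy two-class decision rule: a new observation is assigned to the class whose training data it compresses best. This is a minimal sketch of the general NML principle under assumptions made here (the Bernoulli model class, the example data, and the codelength-difference rule), not the algorithm from the paper.

from math import comb, log2

def bernoulli_nml_codelength(k, n):
    # NML codelength (in bits) of a binary sequence with k ones out of n
    # symbols, under the Bernoulli model class; this is the standard
    # textbook formula, not code from the cited paper.
    def max_log2lik(j):
        # log2 of the maximised likelihood (j/n)^j * ((n-j)/n)^(n-j)
        ll = 0.0
        if j > 0:
            ll += j * log2(j / n)
        if j < n:
            ll += (n - j) * log2((n - j) / n)
        return ll
    # Parametric complexity: maximised likelihood summed over all 2^n
    # sequences, grouped by their number of ones.
    comp = sum(comb(n, j) * 2.0 ** max_log2lik(j) for j in range(n + 1))
    return -max_log2lik(k) + log2(comp)

def nml_predict(x, class_data):
    # Toy decision rule: assign x to the class whose training sequence
    # grows the least in NML codelength when x is appended.
    def extra_bits(seq):
        base = bernoulli_nml_codelength(sum(seq), len(seq))
        return bernoulli_nml_codelength(sum(seq) + x, len(seq) + 1) - base
    return min(class_data, key=lambda label: extra_bits(class_data[label]))

# Hypothetical training bits for two classes.
data = {"A": [1, 1, 1, 0, 1, 1], "B": [0, 0, 1, 0, 0, 0]}
print(nml_predict(1, data))  # prints "A": the 1-heavy class compresses a 1 best

The small-sample guarantee mentioned in the abstract stems from the parametric-complexity term, which is computed exactly for each sample size n rather than holding only in the asymptotic limit.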