Machine learning experts from across a range of departments at Imperial have had papers accepted at the 37th NeurIPS conference.
A number of papers authored or co-authored by Imperial researchers from across the College have been accepted at the next edition of the Neural Information Processing Systems Conference (NeurIPS 2023), a prestigious machine learning and computational neuroscience conference.
Founded in 1987, NeurIPS is now a multi-track interdisciplinary annual meeting that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers.
Alongside the conference is a professional exposition focusing on machine learning in practice, a series of tutorials, and topical workshops that provide a less formal setting for the exchange of ideas.
This year in particular, a paper by Imperial researcher Dr Dario Paccagnan, written in collaboration with Marco Campi and Simone Garatti of the Polytechnic University of Milan, was accepted as a spotlight – a highly selective category. Their paper, entitled 'The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-training Performance', focuses on the importance of generalization bounds in understanding learning processes and assessing model performance on new data. The researchers introduced a new framework, called Pick-to-Learn, which helps to enhance the performance and reliability of machine learning models on unseen data.
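To give a flavour of the compression idea behind this line of work, here is a minimal, illustrative sketch of a Pick-to-Learn-style loop: a base learner is retrained on a growing "compression set", adding the worst-predicted example at each step until every remaining example is predicted correctly. The base learner here (a toy 1-nearest-neighbour classifier) and all function names are our own illustrative assumptions, not the paper's code; in the paper, the size of the final compression set is what drives the generalization bound.

```python
# Illustrative sketch of a Pick-to-Learn-style compression loop.
# The 1-nearest-neighbour base learner and the "badness" ranking are
# toy assumptions for illustration, not the paper's implementation.

def nn_predict(train, x):
    """Predict the label of x by 1-nearest-neighbour over `train`."""
    _, label = min(train, key=lambda p: abs(p[0] - x))
    return label

def pick_to_learn(data, seed):
    """Grow a compression set until the model fits every remaining point.

    `data` is a list of (x, y) pairs; `seed` is the initial pair.
    The size of the returned set is what a compression-based
    generalization bound would be computed from.
    """
    compressed = [seed]
    while True:
        # Points the current model gets wrong.
        wrong = [(x, y) for (x, y) in data
                 if (x, y) not in compressed and nn_predict(compressed, x) != y]
        if not wrong:
            return compressed
        # Add the "worst" mispredicted point (here: farthest from the set).
        worst = max(wrong,
                    key=lambda p: min(abs(p[0] - c[0]) for c in compressed))
        compressed.append(worst)

points = [(0.0, 'a'), (0.1, 'a'), (0.2, 'a'), (1.0, 'b'), (1.1, 'b')]
cset = pick_to_learn(points, seed=(0.0, 'a'))
print(len(cset), len(points))  # → 2 5: two points reproduce all five labels
```

A small compression set relative to the full dataset is what makes the resulting bound tight: fewer retained examples mean stronger guarantees on unseen data.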
In another paper, Imperial researchers Che Liu, Dr Sibo Cheng, Dr César Quilodrán Casas and Dr Rossella Arcucci from Imperial’s Data Science Institute and the Department of Earth Science and Engineering helped to create a new AI model that enables computers to understand medical data in different languages, reducing bias and improving accuracy. Their paper, ‘Med-UniC: Unifying Cross-Lingual Medical Vision-Language Pre-Training by Diminishing Bias’, was created in partnership with The Ohio State University, Peking University, The Chinese University of Hong Kong and The Hong Kong University of Science and Technology, and is explained in more detail in this Imperial News story.
Finally, in a paper by Arnaud Robert, Dr Ciara Pike-Burke and Professor Aldo Faisal, “Sample Complexity of Goal-Conditioned Hierarchical Reinforcement Learning”, the Imperial team explored how hierarchical decomposition improves the sample efficiency of goal-conditioned reinforcement learning algorithms: breaking complex tasks down into smaller, simpler sub-tasks can help algorithms learn more efficiently.
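The idea of decomposing a long-horizon goal into short-horizon sub-tasks can be sketched in a toy example. In this illustrative sketch (our own construction, not the paper's algorithm), a high-level policy proposes intermediate subgoals on a 1-D corridor, and a low-level policy only ever solves the short problem of reaching the next subgoal; all function names and the task itself are assumptions for illustration.

```python
# Toy illustration of goal-conditioned hierarchical decomposition
# (our own sketch, not the paper's algorithm): a high-level policy
# picks subgoals, a low-level policy reaches each one in turn.

def high_level_subgoals(start, goal, horizon):
    """Split the long-horizon goal into subgoals at most `horizon` apart."""
    subgoals, pos = [], start
    while pos != goal:
        step = max(-horizon, min(horizon, goal - pos))
        pos += step
        subgoals.append(pos)
    return subgoals

def low_level_reach(pos, subgoal):
    """Greedy low-level policy: unit steps toward the current subgoal."""
    steps = 0
    while pos != subgoal:
        pos += 1 if subgoal > pos else -1
        steps += 1
    return pos, steps

start, goal = 0, 10
pos, total_steps = start, 0
for sg in high_level_subgoals(start, goal, horizon=3):
    pos, steps = low_level_reach(pos, sg)
    total_steps += steps
print(pos, total_steps)  # → 10 10: the goal is reached via subgoals 3, 6, 9, 10
```

The point of the decomposition is that the low-level policy only ever faces a horizon of three steps, which is the kind of structure whose sample-efficiency benefits the paper analyses.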
The list of accepted papers for NeurIPS 2023 is below, with Imperial academics linked:
Wan, Z., Liu, C., Zhang, M., Fu, J., Wang, B., Cheng, S., Ma, L., Quilodrán-Casas, C. and Arcucci, R., 2023. Med-UniC: Unifying Cross-Lingual Medical Vision-Language Pre-Training by Diminishing Bias.
Paccagnan, D., Campi, M. C. and Garatti, S., 2023. The Pick-to-Learn Algorithm: Empowering Compression for Tight Generalization Bounds and Improved Post-training Performance (spotlight).
Schröder, T., Ou, Z., Lim, J. N., Li, Y., Vollmer, S. J. and Duncan, A. B., 2023. Energy Discrepancies: A Score-Independent Loss for Energy-Based Models.
Zhang, S., Salazar, J. S. C., Feldmann, C., Walz, D., Sandfort, F., Mathea, M., Tsay, C. and Misener, R., 2023. Optimizing over trained GNNs via symmetry breaking.
Swaminathan, S., Dedieu, A., Raju, R.V., Shanahan, M., Lazaro-Gredilla, M. and George, D., 2023. Schema-learning and rebinding as mechanisms of in-context learning and emergence.
Issa, Z., Horvath, B., Lemercier, M. and Salvi, C., 2023. Non-adversarial training of Neural SDEs with signature kernel scores.
Ward, F. R., Everitt, T., Belardinelli, F. and Toni, F., 2023. Honesty Is the Best Policy: Defining and Mitigating AI Deception.
Ekström Kelvinius, F., Georgiev, D., Petrov Toshev, A. and Gasteiger, J., 2023. Accelerating Molecular Graph Neural Networks via Knowledge Distillation.
Kaissis, G., Ziller, A., Kolek, S., Riess, A. and Rueckert, D., 2023. Optimal privacy guarantees against sub-optimal adversaries in differentially private machine learning.
Article text (excluding photos or graphics) © Imperial College London.
Photos and graphics subject to third party copyright used with permission or © Imperial College London.