Imperial College London

Professor Bjoern Schuller

Faculty of Engineering, Department of Computing

Professor of Artificial Intelligence
 
 
 

Contact

 

+44 (0)20 7594 8357
bjoern.schuller
Website

 
 

Location

 

574 Huxley Building, South Kensington Campus


Publications


1107 results found

Schmid J, Hoss A, Schuller BW, 2022, A Survey on Client Throughput Prediction Algorithms in Wired and Wireless Networks, ACM COMPUTING SURVEYS, Vol: 54, ISSN: 0360-0300

Journal article

Schuller BW, Loechner J, Qian K, Hu B et al., 2022, Digital Mental Health-Breaking a Lance for Prevention, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 9, Pages: 1584-1588, ISSN: 2329-924X

Journal article

Bartl-Pokorny KD, Pokorny FB, Garrido D, Schuller BW, Zhang D, Marschik PB et al., 2022, Vocalisation Repertoire at the End of the First Year of Life: An Exploratory Comparison of Rett Syndrome and Typical Development, JOURNAL OF DEVELOPMENTAL AND PHYSICAL DISABILITIES, Vol: 34, Pages: 1053-1069, ISSN: 1056-263X

Journal article

Kathan A, Harrer M, Kuester L, Triantafyllopoulos A, He X, Milling M, Gerczuk M, Yan T, Rajamani ST, Heber E, Grossmann I, Ebert DD, Schuller BW et al., 2022, Personalised depression forecasting using mobile sensor data and ecological momentary assessment, FRONTIERS IN DIGITAL HEALTH, Vol: 4

Journal article

Loechner JW, Schuller B, 2022, Child and Youth Affective Computing-Challenge Accepted, IEEE INTELLIGENT SYSTEMS, Vol: 37, Pages: 69-76, ISSN: 1541-1672

Journal article

Niu M, Zhao Z, Tao J, Li Y, Schuller BW et al., 2022, Selective Element and Two Orders Vectorization Networks for Automatic Depression Severity Diagnosis via Facial Changes, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, Vol: 32, Pages: 8065-8077, ISSN: 1051-8215

Journal article

Mehta Y, Stachl C, Markov K, Yun JT, Schuller BW et al., 2022, Future-generation personality prediction from digital footprints, FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, Vol: 136, Pages: 322-325, ISSN: 0167-739X

Journal article

Zhao S, Yao X, Yang J, Jia G, Ding G, Chua T-S, Schuller BW, Keutzer K et al., 2022, Affective Image Content Analysis: Two Decades Review and New Perspectives, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, Vol: 44, Pages: 6729-6751, ISSN: 0162-8828

Journal article

Hu B, Qian K, Dong Q, Luo Y, Yamamoto Y, Schuller BW et al., 2022, Psychological Field Versus Physiological Field: From Qualitative Analysis to Quantitative Modeling of the Mental Status, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 9, Pages: 1275-1281, ISSN: 2329-924X

Journal article

Xu X, Deng J, Zhang Z, Fan X, Zhao L, Devillers L, Schuller BW et al., 2022, Rethinking Auditory Affective Descriptors Through Zero-Shot Emotion Recognition in Speech, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 9, Pages: 1530-1541, ISSN: 2329-924X

Journal article

Hu B, Qian K, Zhang Y, Shen J, Schuller BW et al., 2022, The Inverse Problems for Computational Psychophysiology: Opinions and Insights, CYBORG AND BIONIC SYSTEMS, Vol: 2022

Journal article

Ottl S, Amiriparian S, Gerczuk M, Schuller BW et al., 2022, motilitAI: A machine learning framework for automatic prediction of human sperm motility, ISCIENCE, Vol: 25

Journal article

Pokorny FB, Schmitt M, Egger M, Bartl-Pokorny KD, Zhang D, Schuller BW, Marschik PB et al., 2022, Automatic vocalisation-based detection of fragile X syndrome and Rett syndrome, SCIENTIFIC REPORTS, Vol: 12, ISSN: 2045-2322

Journal article

Liu S, Mallol-Ragolta A, Yan T, Qian K, Parada-Cabaleiro E, Hu B, Schuller BW et al., 2022, Capturing Time Dynamics From Speech Using Neural Networks for Surgical Mask Detection, IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, Vol: 26, Pages: 4291-4302, ISSN: 2168-2194

Journal article

Qian K, Koike T, Nakamura T, Schuller B, Yamamoto Y et al., 2022, Learning Multimodal Representations for Drowsiness Detection, IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, Vol: 23, Pages: 11539-11548, ISSN: 1524-9050

Journal article

Schuller BW, Lochner J, Qian K, Hu B et al., 2022, COVID-19's Impact on Mental Health-The Hour of Computational Aid?, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 9, Pages: 967-973, ISSN: 2329-924X

Journal article

Schuller BW, Batliner A, Amiriparian S, Bergler C, Gerczuk M, Holz N, Larrouy-Maestri P, Bayerl SP, Riedhammer K, Mallol-Ragolta A, Pateraki M, Coppock H, Kiskin I, Sinka M, Roberts S et al., 2022, The ACM Multimedia 2022 Computational Paralinguistics Challenge: Vocalisations, Stuttering, Activity, & Mosquitoes, Publisher: arXiv

The ACM Multimedia 2022 Computational Paralinguistics Challenge addresses four different problems for the first time in a research competition under well-defined conditions: in the Vocalisations and Stuttering Sub-Challenges, a classification on human non-verbal vocalisations and speech has to be made; the Activity Sub-Challenge aims at beyond-audio human activity recognition from smartwatch sensor data; and in the Mosquitoes Sub-Challenge, mosquitoes need to be detected. We describe the Sub-Challenges, baseline feature extraction, and classifiers based on the usual ComParE and BoAW features, the auDeep toolkit, and deep feature extraction from pre-trained CNNs using the DeepSpectrum toolkit; in addition, we add end-to-end sequential modelling, and a log-mel-128-BNN.

Working paper
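One of the challenge baselines above is a "log-mel-128" representation, i.e. a front end that summarises the spectrum into 128 bands spaced evenly on the mel scale before taking logarithms. As a rough, self-contained sketch of that mel spacing (the 16 kHz sampling rate and the HTK-style mel formula are assumptions for illustration, not details taken from the paper):

```python
import numpy as np

# Mel scale conversion (HTK convention): mel = 2595 * log10(1 + f/700).
def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# 128 band centre frequencies, equally spaced in mel between 0 Hz and
# the Nyquist frequency of assumed 16 kHz audio (8 kHz).
edges = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 128 + 2)
centres = mel_to_hz(edges[1:-1])

# Low bands are narrowly spaced, high bands widely - mimicking hearing.
print(f"first centre: {centres[0]:.1f} Hz, last centre: {centres[-1]:.1f} Hz")
```

A log-mel-128 feature matrix is then the log of a power spectrogram pooled into these 128 bands, one column per analysis frame.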

Hecker P, Steckhan N, Eyben F, Schuller BW, Arnrich B et al., 2022, Voice Analysis for Neurological Disorder Recognition-A Systematic Review and Perspective on Emerging Trends, FRONTIERS IN DIGITAL HEALTH, Vol: 4

Journal article

Akman A, Coppock H, Gaskell A, Tzirakis P, Jones L, Schuller BW et al., 2022, Evaluating the COVID-19 identification ResNet (CIdeR) on the INTERSPEECH COVID-19 from audio challenges, Frontiers in Digital Health, Vol: 4, ISSN: 2673-253X

Several machine learning-based COVID-19 classifiers exploiting vocal biomarkers of COVID-19 have been proposed recently as digital mass testing methods. Although these classifiers have shown strong performances on the datasets on which they are trained, their methodological adaptation to new datasets with different modalities has not been explored. We report on cross-running the modified version of the recent COVID-19 Identification ResNet (CIdeR) on the two Interspeech 2021 COVID-19 diagnosis from cough and speech audio challenges: ComParE and DiCOVA. CIdeR is an end-to-end deep learning neural network originally designed to classify whether an individual is COVID-19-positive or COVID-19-negative based on coughing and breathing audio recordings from a published crowdsourced dataset. In the current study, we demonstrate the potential of CIdeR at binary COVID-19 diagnosis from both the COVID-19 Cough and Speech Sub-Challenges of INTERSPEECH 2021, ComParE and DiCOVA. CIdeR achieves significant improvements over several baselines. We also present the results of the cross-dataset experiments with CIdeR that show the limitations of using the current COVID-19 datasets jointly to build a collective COVID-19 classifier.

Journal article

Batliner A, Hantke S, Schuller B, 2022, Ethics and Good Practice in Computational Paralinguistics, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 13, Pages: 1236-1253, ISSN: 1949-3045

Journal article

Ren Z, Chang Y, Bartl-Pokorny KD, Pokorny FB, Schuller BW et al., 2022, The Acoustic Dissection of Cough: Diving Into Machine Listening-based COVID-19 Analysis and Detection., J Voice

OBJECTIVES: The coronavirus disease 2019 (COVID-19) has caused a crisis worldwide. Many efforts have been made to prevent and control COVID-19's transmission, from early screenings to vaccinations and treatments. Recently, with the emergence of many automatic disease recognition applications based on machine listening techniques, it would be fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge of the acoustic characteristics of COVID-19 cough sounds is limited but would be essential for structuring effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. METHODS: By applying conventional inferential statistics, we analyze the acoustic correlates of COVID-19 cough sounds based on the ComParE feature set, i.e., a standardized set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. RESULTS: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, bear essential acoustic information in terms of effect sizes for the differentiation between COVID-19 positive and COVID-19 negative cough samples. Our general automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set consisting of 1,411 cough samples (COVID-19 positive/negative: 210/1,201). CONCLUSIONS: Based on the acoustic correlates analysis on the ComParE feature set and the feature analysis in the effective COVID-19 detection approach, we find that several acoustic features that show higher effects in convention

Journal article
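The UAR of 0.632 reported in the abstract above is the unweighted average recall: the mean of per-class recalls, which is robust to the strong class imbalance of the data set (210 positive vs. 1,201 negative samples). With two classes, chance level is 0.5 no matter how skewed the classes are. A minimal illustration (the helper function below is our own sketch, not code from the paper):

```python
# Unweighted average recall (UAR): average the recall of each class,
# giving every class equal weight regardless of its frequency.
def uar(y_true, y_pred):
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        recalls.append(hits / total)
    return sum(recalls) / len(recalls)

# A classifier that always predicts the majority class scores 90%
# accuracy on this toy set, but only chance-level UAR (0.5):
# recall("neg") = 1.0, recall("pos") = 0.0.
y_true = ["neg"] * 9 + ["pos"]
print(uar(y_true, ["neg"] * 10))  # → 0.5
```

This is why paralinguistics challenges report UAR rather than accuracy: 0.632 is meaningfully above the 0.5 chance level even though the negatives dominate the data.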

Milling M, Pokorny FB, Bartl-Pokorny KD, Schuller BW et al., 2022, Is Speech the New Blood? Recent Progress in AI-Based Disease Detection From Audio in a Nutshell, FRONTIERS IN DIGITAL HEALTH, Vol: 4

Journal article

Latif S, Rana R, Khalifa S, Jurdak R, Epps J, Schuller BW et al., 2022, Multi-Task Semi-Supervised Adversarial Autoencoding for Speech Emotion Recognition, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 13, Pages: 992-1004, ISSN: 1949-3045

Journal article

Zhang Y, Weninger F, Schuller B, Picard RW et al., 2022, Holistic Affect Recognition Using PaNDA: Paralinguistic Non-Metric Dimensional Analysis, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 13, Pages: 769-780, ISSN: 1949-3045

Journal article

Zhang L, Li J, Li P, Lu X, Gong M, Shen P, Zhu G, Shah SA, Bennamoun M, Qian K, Schuller BW et al., 2022, MEDAS: an open-source platform as a service to help break the walls between medicine and informatics, NEURAL COMPUTING & APPLICATIONS, Vol: 34, Pages: 6547-6567, ISSN: 0941-0643

Journal article

Stappen L, Baird A, Lienhart M, Baetz A, Schuller B et al., 2022, An Estimation of Online Video User Engagement From Features of Time- and Value-Continuous, Dimensional Emotions, FRONTIERS IN COMPUTER SCIENCE, Vol: 4

Journal article

Mallol-Ragolta A, Semertzidou A, Pateraki M, Schuller B et al., 2022, Outer Product-Based Fusion of Smartwatch Sensor Data for Human Activity Recognition, FRONTIERS IN COMPUTER SCIENCE, Vol: 4

Journal article

Amiriparian S, Huebner T, Karas V, Gerczuk M, Ottl S, Schuller BW et al., 2022, DeepSpectrumLite: A Power-Efficient Transfer Learning Framework for Embedded Speech and Audio Processing From Decentralized Data, FRONTIERS IN ARTIFICIAL INTELLIGENCE, Vol: 5

Journal article

Ren Z, Chang Y, Bartl-Pokorny KD, Pokorny FB, Schuller BW et al., 2022, The Acoustic Dissection of Cough: Diving into Machine Listening-based COVID-19 Analysis and Detection

Purpose: The coronavirus disease 2019 (COVID-19) has caused a crisis worldwide. Many efforts have been made to prevent and control COVID-19's transmission, from early screenings to vaccinations and treatments. Recently, with the emergence of many automatic disease recognition applications based on machine listening techniques, it would be fast and cheap to detect COVID-19 from recordings of cough, a key symptom of COVID-19. To date, knowledge on the acoustic characteristics of COVID-19 cough sounds is limited, but would be essential for structuring effective and robust machine learning models. The present study aims to explore acoustic features for distinguishing COVID-19 positive individuals from COVID-19 negative ones based on their cough sounds. Methods: With the theory of computational paralinguistics, we analyse the acoustic correlates of COVID-19 cough sounds based on the ComParE feature set, i.e., a standardised set of 6,373 acoustic higher-level features. Furthermore, we train automatic COVID-19 detection models with machine learning methods and explore the latent features by evaluating the contribution of all features to the COVID-19 status predictions. Results: The experimental results demonstrate that a set of acoustic parameters of cough sounds, e.g., statistical functionals of the root mean square energy and Mel-frequency cepstral coefficients, are relevant for the differentiation between COVID-19 positive and COVID-19 negative cough samples. Our automatic COVID-19 detection model performs significantly above chance level, i.e., at an unweighted average recall (UAR) of 0.632, on a data set consisting of 1,411 cough samples (COVID-19 positive/negative: 210/1,201).

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
