Imperial College London

Professor Bjoern Schuller

Faculty of Engineering, Department of Computing

Professor of Artificial Intelligence
 
 
 

Contact

 

+44 (0)20 7594 8357 | bjoern.schuller | Website

 
 

Location

 

574 Huxley Building, South Kensington Campus


Publications

1107 results found

Schuller BW, Batliner A, Bergler C, Mascolo C, Han J, Lefter I, Kaya H, Amiriparian S, Baird A, Stappen L, Ottl S, Gerczuk M, Tzirakis P, Brown C, Chauhan J, Grammenos A, Hasthanasombat A, Spathis D, Xia T, Cicuta P, Rothkrantz LJM, Zwerts JA, Treep J, Kaandorp CS et al., 2021, The INTERSPEECH 2021 Computational Paralinguistics Challenge: COVID-19 Cough, COVID-19 Speech, Escalation & Primates, INTERSPEECH 2021, Pages: 431-435, ISSN: 2308-457X

Journal article

Cummins N, Schuller BW, 2020, Five Crucial Challenges in Digital Health, FRONTIERS IN DIGITAL HEALTH, Vol: 2

Journal article

Amiriparian S, Gerczuk M, Ottl S, Stappen L, Baird A, Koebe L, Schuller B et al., 2020, Towards cross-modal pre-training and learning tempo-spatial characteristics for audio recognition with convolutional and recurrent neural networks, EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, Vol: 2020, ISSN: 1687-4722

Journal article

Pokorny FB, Bartl-Pokorny KD, Zhang D, Marschik PB, Schuller D, Schuller BW et al., 2020, Efficient Collection and Representation of Preverbal Data in Typical and Atypical Development, JOURNAL OF NONVERBAL BEHAVIOR, Vol: 44, Pages: 419-436, ISSN: 0191-5886

Journal article

Pandit V, Schmitt M, Cummins N, Schuller B et al., 2020, I see it in your eyes: Training the shallowest-possible CNN to recognise emotions and pain from muted web-assisted in-the-wild video-chats in real-time, INFORMATION PROCESSING & MANAGEMENT, Vol: 57, ISSN: 0306-4573

Journal article

Baird A, Schuller B, 2020, Considerations for a More Ethical Approach to Data in AI: On Data Representation and Infrastructure, FRONTIERS IN BIG DATA, Vol: 3

Journal article

Zhang Z, Metaxas DN, Lee H-Y, Schuller BW et al., 2020, Guest Editorial Special Issue on Adversarial Learning in Computational Intelligence, IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, Vol: 4, Pages: 414-416, ISSN: 2471-285X

Journal article

Dong F, Qian K, Ren Z, Baird A, Li X, Dai Z, Dong B, Metze F, Yamamoto Y, Schuller BW et al., 2020, Machine Listening for Heart Status Monitoring: Introducing and Benchmarking HSS-The Heart Sounds Shenzhen Corpus, IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, Vol: 24, Pages: 2082-2092, ISSN: 2168-2194

Journal article

Qian K, Li X, Li H, Li S, Li W, Ning Z, Yu S, Hou L, Tang G, Lu J, Li F, Duan S, Du C, Cheng Y, Wang Y, Gan L, Yamamoto Y, Schuller BW et al., 2020, Computer Audition for Healthcare: Opportunities and Challenges, FRONTIERS IN DIGITAL HEALTH, Vol: 2

Journal article

Amiriparian S, Cummins N, Gerczuk M, Pugachevskiy S, Ottl S, Schuller B et al., 2020, "Are You Playing a Shooter Again?!" Deep Representation Learning for Audio-Based Video Game Genre Recognition, IEEE TRANSACTIONS ON GAMES, Vol: 12, Pages: 145-154, ISSN: 2475-1502

Journal article

Parada-Cabaleiro E, Costantini G, Batliner A, Schmitt M, Schuller BW et al., 2020, DEMoS: an Italian emotional speech corpus. Elicitation methods, machine learning, and perception, LANGUAGE RESOURCES AND EVALUATION, Vol: 54, Pages: 341-383, ISSN: 1574-020X

Journal article

Baur T, Heimerl A, Lingenfelser F, Wagner J, Valstar MF, Schuller B, Andre E et al., 2020, eXplainable Cooperative Machine Learning with NOVA, KÜNSTLICHE INTELLIGENZ, Vol: 34, Pages: 143-164, ISSN: 0933-1875

Journal article

Li X, Qian K, Xie L-L, Li X-J, Cheng M, Jiang L, Schuller BW et al., 2020, A Mini Review on Current Clinical and Research Findings for Children Suffering from COVID-19

Background: As the novel coronavirus triggering COVID-19 has broken out in Wuhan, China and spread rapidly worldwide, it threatens the lives of thousands of people and poses a global threat to the economies of the entire world. However, infection with COVID-19 is currently rare in children. Objective: To discuss the latest findings and research focus on the basis of characteristics of children confirmed with COVID-19, and to provide an insight into future treatment and research directions. Methods: We searched the terms "COVID-19 OR coronavirus OR SARS-CoV-2" AND "Pediatric OR children" on PubMed, Embase, the Cochrane Library, NIH, CDC, and CNKI. The authors also reviewed the guidelines published by the Chinese CDC and the Chinese NHC. Results: We included 25 published literature references related to the epidemiology, clinical manifestation, accessory examination, treatment, and prognosis of pediatric patients with COVID-19. Conclusion: The number of children with COVID-19 pneumonia infection is small, and most cases come from family aggregation. Symptoms are mainly mild or even asymptomatic, which allows children to be a risk factor for transmission. Thus, strict epidemiological history screening is needed for early diagnosis and segregation. This holds especially for infants, who are more susceptible to infection than other pediatric age groups but most likely show subtle and unspecific symptoms; they need to be paid more attention to. CT examination is a

Journal article

Kaklauskas A, Zavadskas EK, Schuller B, Lepkova N, Dzemyda G, Sliogeriene J, Kurasova O et al., 2020, Customized ViNeRS Method for Video Neuro-Advertising of Green Housing, INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH, Vol: 17

Journal article

Wu P, Sun X, Zhao Z, Wang H, Pan S, Schuller B et al., 2020, Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning, COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, Vol: 2020, ISSN: 1687-5265

Journal article

Parada-Cabaleiro E, Batliner A, Baird A, Schuller B et al., 2020, The perception of emotional cues by children in artificial background noise, INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, Vol: 23, Pages: 169-182, ISSN: 1381-2416

Journal article

Deng J, Schuller B, Eyben F, Schuller D, Zhang Z, Francois H, Oh E et al., 2020, Exploiting time-frequency patterns with LSTM-RNNs for low-bitrate audio restoration, NEURAL COMPUTING & APPLICATIONS, Vol: 32, Pages: 1095-1107, ISSN: 0941-0643

Journal article

Zhao Z, Bao Z, Zhang Z, Deng J, Cummins N, Wang H, Tao J, Schuller B et al., 2020, Automatic Assessment of Depression From Speech via a Hierarchical Attention Transfer Network and Attention Autoencoders, IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, Vol: 14, Pages: 423-434, ISSN: 1932-4553

Journal article

Yang Z, Qian K, Ren Z, Baird A, Zhang Z, Schuller B et al., 2020, Learning multi-resolution representations for acoustic scene classification via neural networks, Pages: 133-143, ISBN: 9789811527555

This study investigates the performance of wavelet features as well as conventional temporal and spectral features for acoustic scene classification, testing the effectiveness of both feature sets when combined with neural networks. The TUT Acoustic Scenes 2017 Database is used in the evaluation of the system. The model with the wavelet energy features achieved 74.8 % and 60.2 % on the development and evaluation sets respectively, which is better than the model using the temporal and spectral feature set (72.9 % and 59.4 %). Additionally, to optimise the generalisation and robustness of the models, a decision fusion method based on the posterior probability of each audio scene is used. Compared with the baseline system of the Detection and Classification of Acoustic Scenes and Events 2017 (DCASE 2017) challenge, the best decision fusion model achieves 79.2 % and 63.8 % on the development and evaluation sets, respectively, where both results significantly exceed the baseline system results of 74.8 % and 61.0 % (confirmed by one-tailed z-tests, p < 0.01 and p < 0.05 respectively).

Book chapter
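
The decision fusion described in the abstract above amounts to combining the per-class posterior probabilities of several classifiers before taking the argmax. Below is a minimal Python/NumPy sketch of that idea, assuming each model exposes softmax-style class posteriors; the weighting scheme and variable names are illustrative and not taken from the chapter itself.

    import numpy as np

    def fuse_posteriors(posteriors, weights=None):
        """Late (decision-level) fusion of per-model class posteriors.

        posteriors: list of arrays, each of shape (n_samples, n_classes),
                    e.g. softmax outputs of a wavelet-feature model and a
                    temporal/spectral-feature model.
        weights:    optional per-model weights; defaults to a plain average.
        """
        stacked = np.stack(posteriors, axis=0)            # (n_models, n_samples, n_classes)
        if weights is None:
            weights = np.ones(len(posteriors)) / len(posteriors)
        weights = np.asarray(weights).reshape(-1, 1, 1)
        fused = (weights * stacked).sum(axis=0)           # weighted mean posterior per scene
        return fused.argmax(axis=1)                       # predicted scene index per sample

    # Illustrative use with two hypothetical models on an acoustic scene task:
    # fused_labels = fuse_posteriors([model_wavelet_proba, model_spectral_proba])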

Costin H, Schuller B, Florea AM, 2020, Preface

Book

Amiriparian S, Schmitt M, Ottl S, Gerczuk M, Schuller B et al., 2020, Deep unsupervised representation learning for audio-based medical applications, Intelligent Systems Reference Library, Pages: 137-164

Feature learning denotes a set of approaches for transforming raw input data into representations that can be effectively utilised in solving machine learning problems. Classifiers or regressors require training data which is computationally suitable to process. However, real-world data, e.g., an audio recording from a group of people talking in a park whilst in the background a dog is barking and a musician is playing the guitar, or health-related data such as coughing and sneezing recorded by consumer smartphones, is of a remarkably variable and complex nature. For understanding such data, developing expert-designed, hand-crafted features often demands an exhaustive amount of time and resources. Another disadvantage of such features is the lack of generalisation, i.e., there is a need to re-engineer new features for new tasks. Therefore, it is necessary to develop automatic representation learning methods. In this chapter, we first discuss the preliminaries of contemporary representation learning techniques for computer audition tasks. Hereby, we differentiate between approaches based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We then introduce and evaluate three state-of-the-art deep learning systems for unsupervised representation learning from raw audio: (1) pre-trained image classification CNNs, (2) a deep convolutional generative adversarial network (DCGAN), and (3) a recurrent sequence-to-sequence autoencoder (S2SAE). For each of these algorithms, the representations are obtained from the spectrograms of the input audio data. Finally, for a range of audio-based machine learning tasks, including abnormal heart sound classification, snore sound classification, and bipolar disorder recognition, we evaluate the efficacy of the deep representations, which are: (i) the activations of the fully connected layers of the pre-trained CNNs, (ii) the activations of the discriminator in case of the DCGAN, and (iii) the activ

Book chapter
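
The first of the three representation types evaluated in this chapter, activations of the fully connected layers of a pre-trained image-classification CNN computed on spectrograms, can be sketched roughly as follows. This is a minimal illustration assuming librosa, PyTorch, and a recent torchvision (VGG16 with ImageNet weights); the file name, preprocessing choices, and layer cut are assumptions for demonstration, not the authors' own toolkit.

    import librosa
    import torch
    import torchvision

    # Load audio and turn it into a log-mel spectrogram "image" (file name is hypothetical).
    y, sr = librosa.load("example.wav", sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    logmel = librosa.power_to_db(mel)

    # Map the spectrogram to a 3-channel tensor of the size VGG16 expects.
    img = torch.tensor(logmel, dtype=torch.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # crude scaling to [0, 1]
    img = img.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)        # shape (1, 3, H, W)
    img = torch.nn.functional.interpolate(img, size=(224, 224))

    # Pre-trained image-classification CNN; keep everything up to the second
    # fully connected layer and use its activations as the audio representation.
    vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT)
    vgg.eval()
    with torch.no_grad():
        feats = vgg.features(img)
        feats = vgg.avgpool(feats).flatten(1)
        representation = vgg.classifier[:5](feats)             # fc2 activations, 4096-dim

    print(representation.shape)   # torch.Size([1, 4096])

The resulting fixed-length vector would then feed a conventional classifier or regressor for the downstream audio task, in the spirit of the chapter's evaluation.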

Zhang Z, Han J, Qian K, Janott C, Guo Y, Schuller B et al., 2020, Snore-GANs: improving automatic snore sound classification with synthesized data, IEEE Journal of Biomedical and Health Informatics, Vol: 24, Pages: 300-310, ISSN: 2168-2194

One of the frontier issues that severely hampers the development of automatic snore sound classification (ASSC) is the lack of sufficient supervised training data. To cope with this problem, we propose a novel data augmentation approach based on semi-supervised conditional Generative Adversarial Networks (scGANs), which aims to automatically learn a mapping strategy from a random noise space to the original data distribution. The proposed approach is capable of synthesizing ‘realistic’ high-dimensional data well, while requiring no additional annotation process. To handle the mode collapse problem of GANs, we further introduce an ensemble strategy to enhance the diversity of the generated data. The systematic experiments conducted on a widely used Munich-Passau snore sound corpus demonstrate that the scGANs-based systems can remarkably outperform other classic data augmentation systems, and are also competitive with other recently reported systems for ASSC.

Journal article
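
The skeleton below is not the paper's scGAN ensemble; it only illustrates the underlying idea of conditional GAN-based augmentation described in the abstract above: a generator maps noise plus a class label to a synthetic feature vector, and augmentation becomes conditional sampling from the trained generator. It is a hedged PyTorch sketch in which all dimensions, names, and architectural choices are illustrative assumptions.

    import torch
    import torch.nn as nn

    NOISE_DIM, N_CLASSES, FEAT_DIM = 100, 4, 512   # illustrative sizes only

    class Generator(nn.Module):
        """Maps (noise, class label) to a synthetic acoustic feature vector."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
            self.net = nn.Sequential(
                nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
                nn.Linear(256, FEAT_DIM), nn.Tanh(),
            )

        def forward(self, z, labels):
            return self.net(torch.cat([z, self.embed(labels)], dim=1))

    class Discriminator(nn.Module):
        """Scores whether a (feature vector, class label) pair looks real."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
            self.net = nn.Sequential(
                nn.Linear(FEAT_DIM + N_CLASSES, 256), nn.LeakyReLU(0.2),
                nn.Linear(256, 1),
            )

        def forward(self, x, labels):
            return self.net(torch.cat([x, self.embed(labels)], dim=1))

    # After adversarial training, augmentation is just conditional sampling:
    g = Generator()
    z = torch.randn(32, NOISE_DIM)
    labels = torch.randint(0, N_CLASSES, (32,))
    synthetic_batch = g(z, labels)   # 32 synthetic samples of the requested classes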

Littmann M, Selig K, Cohen-Lavi L, Frank Y, Hoenigschmid P, Kataka E, Moesch A, Qian K, Ron A, Schmid S, Sorbie A, Szlak L, Dagan-Wiener A, Ben-Tal N, Niv MY, Razansky D, Schuller BW, Ankerst D, Hertz T, Rost B et al., 2020, Validity of machine learning in biology and medicine increased through collaborations across fields of expertise, NATURE MACHINE INTELLIGENCE, Vol: 2, Pages: 18-24

Journal article

Keren G, Sabato S, Schuller B, 2020, Analysis of loss functions for fast single-class classification, KNOWLEDGE AND INFORMATION SYSTEMS, Vol: 62, Pages: 337-358, ISSN: 0219-1377

Journal article

Pateraki M, Fysarakis K, Sakkalis V, Spanoudakis G, Varlamis I, Maniadakis M, Lourakis M, Ioannidis S, Cummins N, Schuller B, Loutsetis E, Koutsouris D et al., 2020, Biosensors and Internet of Things in smart healthcare applications: challenges and opportunities, WEARABLE AND IMPLANTABLE MEDICAL DEVICES: APPLICATIONS AND CHALLENGES, VOL 7, Editors: Dey, Ashour, Fong, Bhatt, Publisher: ACADEMIC PRESS LTD-ELSEVIER SCIENCE LTD, Pages: 25-53, ISBN: 978-0-12-815369-7

Book chapter

Cummins N, Matcham F, Klapper J, Schuller B et al., 2020, Artificial intelligence to aid the detection of mood disorders, ARTIFICIAL INTELLIGENCE IN PRECISION HEALTH: FROM CONCEPT TO APPLICATIONS, Editors: Barh, Publisher: ACADEMIC PRESS LTD-ELSEVIER SCIENCE LTD, Pages: 231-255, ISBN: 978-0-12-817133-2

Book chapter

Cummins N, Ren Z, Mallol-Ragolta A, Schuller B et al., 2020, Machine learning in digital health, recent trends, and ongoing challenges, ARTIFICIAL INTELLIGENCE IN PRECISION HEALTH: FROM CONCEPT TO APPLICATIONS, Editors: Barh, Publisher: ACADEMIC PRESS LTD-ELSEVIER SCIENCE LTD, Pages: 121-148, ISBN: 978-0-12-817133-2

Book chapter

Diener L, Amiriparian S, Botelho C, Scheck K, Kuester D, Trancoso I, Schuller BW, Schultz T et al., 2020, Towards Silent Paralinguistics: Deriving Speaking Mode and Speaker ID from Electromyographic Signals, Interspeech Conference, Publisher: ISCA-INT SPEECH COMMUNICATION ASSOC, Pages: 3117-3121, ISSN: 2308-457X

Conference paper

Schuller BW, Batliner A, Bergler C, Messner E-M, Hamilton A, Amiriparian S, Baird A, Rizos G, Schmitt M, Stappen L, Baumeister H, MacIntyre AD, Hantke S et al., 2020, The INTERSPEECH 2020 Computational Paralinguistics Challenge: Elderly Emotion, Breathing & Masks, Interspeech Conference, Publisher: ISCA-INT SPEECH COMMUNICATION ASSOC, Pages: 2042-2046, ISSN: 2308-457X

Conference paper

MacIntyre AD, Rizos G, Batliner A, Baird A, Amiriparian S, Hamilton A, Schuller BW et al., 2020, Deep Attentive End-to-End Continuous Breath Sensing from Speech, Interspeech Conference, Publisher: ISCA-INT SPEECH COMMUNICATION ASSOC, Pages: 2082-2086, ISSN: 2308-457X

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
