Publications
1105 results found
Latif S, Rana R, Khalifa S, et al., 2023, Self Supervised Adversarial Domain Adaptation for Cross-Corpus and Cross-Language Speech Emotion Recognition, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 14, Pages: 1912-1926, ISSN: 1949-3045
Niu M, Zhao Z, Tao J, et al., 2023, Dual Attention and Element Recalibration Networks for Automatic Depression Level Prediction, IEEE Transactions on Affective Computing, Vol: 14, Pages: 1954-1965
Song M, Yang Z, Parada-Cabaleiro E, et al., 2023, Identifying languages in a novel dataset: ASMR-whispered speech, FRONTIERS IN NEUROSCIENCE, Vol: 17
Bayerl SP, Gerczuk M, Batliner A, et al., 2023, Classification of stuttering - The ComParE challenge and beyond, COMPUTER SPEECH AND LANGUAGE, Vol: 81, ISSN: 0885-2308
- Citations: 1
Mira R, Vougioukas K, Ma P, et al., 2023, End-to-End Video-to-Speech Synthesis Using Generative Adversarial Networks, IEEE TRANSACTIONS ON CYBERNETICS, Vol: 53, Pages: 3454-3466, ISSN: 2168-2267
- Citations: 8
Batliner A, Neumann M, Burkhardt F, et al., 2023, Ethical Awareness in Paralinguistics: A Taxonomy of Applications, INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, Vol: 39, Pages: 1904-1921, ISSN: 1044-7318
- Citations: 1
Mira R, Coutinho E, Parada-Cabaleiro E, et al., 2023, Automated composition of Galician Xota - tuning RNN-based composers for specific musical styles using deep Q-learning, PEERJ COMPUTER SCIENCE, Vol: 9
Latif S, Rana R, Khalifa S, et al., 2023, Survey of Deep Representation Learning for Speech Emotion Recognition, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 14, Pages: 1634-1654, ISSN: 1949-3045
- Citations: 25
Stappen L, Baird A, Schumann L, et al., 2023, The Multimodal Sentiment Analysis in Car Reviews (MuSe-CaR) Dataset: Collection, Insights and Improvements, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 14, Pages: 1334-1350, ISSN: 1949-3045
- Citations: 4
Gerczuk M, Amiriparian S, Ottl S, et al., 2023, EmoNet: A Transfer Learning Framework for Multi-Corpus Speech Emotion Recognition, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 14, Pages: 1472-1487, ISSN: 1949-3045
- Citations: 7
Wang Z, Qian K, Liu H, et al., 2023, Exploring interpretable representations for heart sound abnormality detection, BIOMEDICAL SIGNAL PROCESSING AND CONTROL, Vol: 82, ISSN: 1746-8094
Tian G, Qian K, Li X, et al., 2023, Can a Holistic View Facilitate the Development of Intelligent Traditional Chinese Medicine? A Survey, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 10, Pages: 700-713, ISSN: 2329-924X
- Citations: 1
Lawson J, Rizos G, Jasinghe D, et al., 2023, Automated acoustic detection of Geoffroy's spider monkey highlights tipping points of human disturbance, PROCEEDINGS OF THE ROYAL SOCIETY B-BIOLOGICAL SCIENCES, Vol: 290, ISSN: 0962-8452
- Citations: 1
Triantafyllopoulos A, Schuller BW, Iymen G, et al., 2023, An Overview of Affective Speech Synthesis and Conversion in the Deep Learning Era, PROCEEDINGS OF THE IEEE, ISSN: 0018-9219
- Citations: 1
Coppock H, Akman A, Bergler C, et al., 2023, A summary of the ComParE COVID-19 challenges, FRONTIERS IN DIGITAL HEALTH, Vol: 5
Amin M, Cambria EW, Schuller B, 2023, Will Affective Computing Emerge From Foundation Models and General Artificial Intelligence? A First Evaluation of ChatGPT, IEEE INTELLIGENT SYSTEMS, Vol: 38, Pages: 15-23, ISSN: 1541-1672
- Citations: 2
Lai W-H, Chou T-Y, Chou M-C, et al., 2023, Robust Audio Watermarking Based on Empirical Mode Decomposition and Group Differential Relations, JOURNAL OF THE AUDIO ENGINEERING SOCIETY, Vol: 71, Pages: 100-117, ISSN: 1549-4950
Triantafyllopoulos A, Reichel U, Liu S, et al., 2023, Multistage linguistic conditioning of convolutional layers for speech emotion recognition, FRONTIERS IN COMPUTER SCIENCE, Vol: 5
Calvo RA, Peters D, Moradbakhti L, et al., 2023, Assessing the feasibility of a text-based conversational agent for asthma support: protocol for a mixed methods observational study, JMIR Research Protocols, Vol: 12, Pages: 9-9, ISSN: 1929-0748
BACKGROUND: Despite efforts, the UK death rate from asthma is the highest in Europe, and 65% of people with asthma in the United Kingdom do not receive the professional care they are entitled to. Experts have recommended the use of digital innovations to help address the issues of poor outcomes and lack of care access. An automated SMS text messaging-based conversational agent (ie, chatbot) created to provide access to asthma support in a familiar format via a mobile phone has the potential to help people with asthma across demographics and at scale. Such a chatbot could help improve the accuracy of self-assessed risk, improve asthma self-management, increase access to professional care, and ultimately reduce asthma attacks and emergencies. OBJECTIVE: The aims of this study are to determine the feasibility and usability of a text-based conversational agent that processes a patient's text responses and short sample voice recordings to calculate an estimate of their risk for an asthma exacerbation and then offers follow-up information for lowering risk and improving asthma control; assess the levels of engagement for different groups of users, particularly those who do not access professional services and those with poor asthma control; and assess the extent to which users of the chatbot perceive it as helpful for improving their understanding and self-management of their condition. METHODS: We will recruit 300 adults through four channels for broad reach: Facebook, YouGov, Asthma + Lung UK social media, and the website Healthily (a health self-management app). Participants will be screened, and those who meet inclusion criteria (adults diagnosed with asthma and who use WhatsApp) will be provided with a link to access the conversational agent through WhatsApp on their mobile phones. Participants will be sent scheduled and randomly timed messages to invite them to engage in dialogue about their asthma risk during the period of study. After a data collection period (28
Qian K, Schuller BW, Guan X, et al., 2023, Intelligent Music Intervention for Mental Disorders: Insights and Perspectives, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 10, Pages: 2-9, ISSN: 2329-924X
- Citations: 1
Parada-Cabaleiro E, Batliner A, Schmitt M, et al., 2023, Perception and classification of emotions in nonsense speech: Humans versus machines, PLOS ONE, Vol: 18, ISSN: 1932-6203
Amiriparian S, Schuller BW, Asghar N, et al., 2023, Guest Editorial: Special Issue on Affective Speech and Language Synthesis, Generation, and Conversion, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 14, Pages: 3-5, ISSN: 1949-3045
Zhou K, Sisman B, Rana R, et al., 2023, Emotion Intensity and its Control for Emotional Voice Conversion, IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, Vol: 14, Pages: 31-48, ISSN: 1949-3045
- Citations: 3
Cheng J, Liang R, Zhao L, et al., 2023, Speech Denoising and Compensation for Hearing Aids Using an FTCRN-Based Metric GAN, IEEE SIGNAL PROCESSING LETTERS, Vol: 30, Pages: 374-378, ISSN: 1070-9908
Rajamani ST, Rajamani K, Venkateshvaran A, et al., 2023, Toward Detecting and Addressing Corner Cases in Deep Learning Based Medical Image Segmentation, IEEE ACCESS, Vol: 11, Pages: 95334-95345, ISSN: 2169-3536
Qian K, Hu B, Yamamoto Y, et al., 2023, The Voice of the Body: Why AI Should Listen to It and an Archive, Cyborg Bionic Syst, Vol: 4
The sound generated by the body carries important information about our health status, both physical and psychological. In the past decades, we have witnessed a plethora of successes in the field of body sound analysis. Nevertheless, the fundamentals of this young field are still not well established. In particular, publicly accessible databases are rarely developed, which dramatically restrains sustainable research. To this end, we are launching, and continuously calling for participation from the global scientific community in, the Voice of the Body (VoB) archive. We aim to build an open-access platform that collects well-established body sound databases in a standardised way. Moreover, we hope to organise a series of challenges based on the proposed VoB to promote the development of audio-driven methods for healthcare. We believe that VoB can help break the walls between different subjects toward an era of Medicine 4.0 enriched by audio intelligence.
Xie J, Zhong Y, Xiao T, et al., 2022, A multi-information fusion model for short term load forecasting of an architectural complex considering spatio-temporal characteristics, ENERGY AND BUILDINGS, Vol: 277, ISSN: 0378-7788
- Citations: 3
Liu S, Mallol-Ragolta A, Parada-Cabaleiro E, et al., 2022, Audio self-supervised learning: A survey, PATTERNS, Vol: 3, ISSN: 2666-3899
- Citations: 5
Schmid J, Hoss A, Schuller BW, 2022, A Survey on Client Throughput Prediction Algorithms in Wired and Wireless Networks, ACM COMPUTING SURVEYS, Vol: 54, ISSN: 0360-0300
- Citations: 2
Schuller BW, Loechner J, Qian K, et al., 2022, Digital Mental Health - Breaking a Lance for Prevention, IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, Vol: 9, Pages: 1584-1588, ISSN: 2329-924X
- Citations: 2
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.