Themes of Work

Our research centres on the body and how technology can improve the ways that body exists and interacts with its surrounding environment. We focus on the haptic and aural modalities, using textiles as the physical medium for building wearable computational systems. Some of our research projects focus exclusively on textile sensing and interfaces, whilst others focus solely on improving auditory displays for users. A growing area of our work looks at how these two complementary technologies can be brought together in novel applications.

Below is a non-exhaustive list of the research we have undertaken.

Citation

BibTeX format

@article{Mao:2022:10.1098/rsif.2021.0921,
author = {Mao, A and Giraudet, CSE and Liu, K and de Almeida Nolasco, I and Xie, Z and Xie, Z and Gao, Y and Theobald, J and Bhatta, D and Stewart, R and McElligott, AG},
doi = {10.1098/rsif.2021.0921},
journal = {Journal of the Royal Society Interface},
pages = {1--11},
title = {Automated identification of chicken distress vocalizations using deep learning models.},
url = {http://dx.doi.org/10.1098/rsif.2021.0921},
volume = {19},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - The annual global production of chickens exceeds 25 billion birds, which are often housed in very large groups, numbering thousands. Distress calling triggered by various sources of stress has been suggested as an 'iceberg indicator' of chicken welfare. However, to date, the identification of distress calls largely relies on manual annotation, which is very labour-intensive and time-consuming. Thus, a novel convolutional neural network-based model, light-VGG11, was developed to automatically identify chicken distress calls using recordings (3363 distress calls and 1973 natural barn sounds) collected on an intensive farm. The light-VGG11 was modified from VGG11 with significantly fewer parameters (9.3 million versus 128 million) and 55.88% faster detection speed while displaying comparable performance, i.e. precision (94.58%), recall (94.89%), F1-score (94.73%) and accuracy (95.07%), therefore more useful for model deployment in practice. To additionally improve light-VGG11's performance, we investigated the impacts of different data augmentation techniques (i.e. time masking, frequency masking, mixed spectrograms of the same class and Gaussian noise) and found that they could improve distress call detection by up to 1.52%. Our distress call detection demonstration on continuous audio recordings shows the potential for developing technologies to monitor the output of this call type in large, commercial chicken flocks.
AU - Mao,A
AU - Giraudet,CSE
AU - Liu,K
AU - de Almeida Nolasco,I
AU - Xie,Z
AU - Xie,Z
AU - Gao,Y
AU - Theobald,J
AU - Bhatta,D
AU - Stewart,R
AU - McElligott,AG
DO - 10.1098/rsif.2021.0921
EP - 11
PY - 2022///
SN - 1742-5662
SP - 1
TI - Automated identification of chicken distress vocalizations using deep learning models.
T2 - Journal of the Royal Society Interface
UR - http://dx.doi.org/10.1098/rsif.2021.0921
UR - https://www.ncbi.nlm.nih.gov/pubmed/35765806
UR - https://royalsocietypublishing.org/doi/10.1098/rsif.2021.0921
VL - 19
ER -
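
The abstract above names four spectrogram augmentation techniques: time masking, frequency masking, mixing spectrograms of the same class, and Gaussian noise. A minimal sketch of what these might look like in PyTorch/torchaudio follows; the mask widths, mixing weight and noise level are illustrative assumptions, not the settings reported in the paper.

import torch
import torchaudio.transforms as T

# SpecAugment-style masking transforms, as named in the abstract.
time_mask = T.TimeMasking(time_mask_param=30)       # zeroes out up to 30 consecutive time frames
freq_mask = T.FrequencyMasking(freq_mask_param=15)  # zeroes out up to 15 consecutive frequency bins

def augment(spec: torch.Tensor, same_class_spec: torch.Tensor) -> torch.Tensor:
    """Apply all four augmentations to a (channel, freq, time) spectrogram."""
    spec = time_mask(spec)
    spec = freq_mask(spec)
    # Mix with another spectrogram drawn from the same class
    # (the 50/50 blend weight is an assumption).
    spec = 0.5 * spec + 0.5 * same_class_spec
    # Additive Gaussian noise; the 0.01 standard deviation is an assumption.
    return spec + 0.01 * torch.randn_like(spec)

In a real training pipeline these augmentations would typically be applied stochastically, per sample or per batch, rather than all at once on every example.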