A primary motivation of our research is the monitoring of physical, physiological, and biochemical parameters - in any environment and without restricting activity or modifying behaviour - using miniaturised, wireless Body Sensor Networks (BSNs). Key research issues currently being addressed include novel sensor designs, ultra-low-power microprocessor and wireless platforms, energy scavenging, biocompatibility, system integration and miniaturisation, processing-on-node technologies combined with novel ASIC design, autonomic sensor networks, and lightweight communication protocols. Our research is aimed at addressing the future needs of life-long health, wellbeing and healthcare, particularly those related to the demographic changes associated with an ageing population and patients with chronic illnesses. This research theme is therefore closely aligned with the IGHI’s vision of providing safe, effective and accessible technologies for both developed and developing countries.

Some of our latest work was exhibited at the 2015 Royal Society Summer Science Exhibition.


Citation

BibTeX format

@inproceedings{Qiu:2019:10.1109/BSN.2019.8771095,
author = {Qiu, J and Lo, FP-W and Lo, B},
doi = {10.1109/BSN.2019.8771095},
publisher = {IEEE},
title = {Assessing individual dietary intake in food sharing scenarios with a 360 camera and deep learning},
url = {http://dx.doi.org/10.1109/BSN.2019.8771095},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - A novel vision-based approach for estimating individual dietary intake in food sharing scenarios is proposed in this paper, which incorporates food detection, face recognition and hand tracking techniques. The method is validated using panoramic videos which capture subjects' eating episodes. The results demonstrate that the proposed approach is able to reliably estimate food intake of each individual as well as the food eating sequence. To identify the food items ingested by the subject, a transfer learning approach is designed. 4,200 food images with segmentation masks, among which 1,500 are newly annotated, are used to fine-tune the deep neural network for the targeted food intake application. In addition, a method for associating detected hands with subjects is developed and the outcomes of face recognition are refined to enable the quantification of individual dietary intake in communal eating settings.
AU - Qiu,J
AU - Lo,FP-W
AU - Lo,B
DO - 10.1109/BSN.2019.8771095
PB - IEEE
PY - 2019///
SN - 2376-8886
TI - Assessing individual dietary intake in food sharing scenarios with a 360 camera and deep learning
UR - http://dx.doi.org/10.1109/BSN.2019.8771095
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000492872400035&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://ieeexplore.ieee.org/document/8771095
UR - http://hdl.handle.net/10044/1/75190
ER -
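
The abstract above mentions a transfer-learning step in which a deep neural network is fine-tuned on food images with segmentation masks. The Python sketch below illustrates one common way such fine-tuning is done - replacing the heads of a COCO-pretrained Mask R-CNN from torchvision - purely as an assumed example; it is not the authors' implementation, and the class count, learning rate and data loader are hypothetical placeholders.

# Minimal sketch (assumption, not the paper's code) of fine-tuning a pretrained
# instance-segmentation network on food images with segmentation masks.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_FOOD_CLASSES = 41  # hypothetical: 40 food categories + background

def build_food_segmentation_model(num_classes: int = NUM_FOOD_CLASSES):
    # Start from a Mask R-CNN pretrained on COCO and replace its box and mask
    # heads, a standard transfer-learning recipe for segmentation tasks.
    # (On torchvision < 0.13, use pretrained=True instead of weights="DEFAULT".)
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model

def fine_tune(model, data_loader, num_epochs=10, lr=0.005, device="cuda"):
    # Illustrative training loop: data_loader must yield (images, targets) in the
    # torchvision detection format (boxes, labels and masks per image).
    model.to(device).train()
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=5e-4)
    for epoch in range(num_epochs):
        for images, targets in data_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)   # returns a dict of losses in train mode
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model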