A primary motivation of our research is the monitoring of physical, physiological, and biochemical parameters - in any environment and without restricting activity or modifying behaviour - using miniaturised, wireless Body Sensor Networks (BSNs). Key research issues currently being addressed include novel sensor designs, ultra-low-power microprocessor and wireless platforms, energy scavenging, biocompatibility, system integration and miniaturisation, processing-on-node technologies combined with novel ASIC design, autonomic sensor networks, and lightweight communication protocols. Our research aims to address the future needs of life-long health, wellbeing and healthcare, particularly those arising from demographic changes associated with an ageing population and from patients with chronic illnesses. This research theme is therefore closely aligned with the IGHI’s vision of providing safe, effective and accessible technologies for both developed and developing countries.

Some of our latest works were exhibited at the 2015 Royal Society Summer Science Exhibition.


Citation

BibTeX format

@article{Lo:2018:10.3390/nu10122005,
author = {Lo, FP-W and Sun, Y and Qiu, J and Lo, B},
doi = {10.3390/nu10122005},
journal = {Nutrients},
pages = {1--20},
title = {Food volume estimation based on deep learning view synthesis from a single depth map},
url = {http://dx.doi.org/10.3390/nu10122005},
volume = {10},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - An objective dietary assessment system can help users to understand their dietary behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, measurement of the portion size or food volume is required. For volume estimation, previous research studies mostly focused on using model-based or stereo-based approaches which rely on manual intervention or require users to capture multiple frames from different viewing angles which can be tedious. In this paper, a view synthesis approach based on deep learning is proposed to reconstruct 3D point clouds of food items and estimate the volume from a single depth image. A distinct neural network is designed to use a depth image from one viewing angle to predict another depth image captured from the corresponding opposite viewing angle. The whole 3D point cloud map is then reconstructed by fusing the initial data points with the synthesized points of the object items through the proposed point cloud completion and Iterative Closest Point (ICP) algorithms. Furthermore, a database with depth images of food object items captured from different viewing angles is constructed with image rendering and used to validate the proposed neural network. The methodology is then evaluated by comparing the volume estimated by the synthesized 3D point cloud with the ground truth volume of the object items
AU - Lo,FP-W
AU - Sun,Y
AU - Qiu,J
AU - Lo,B
DO - 10.3390/nu10122005
EP - 20
PY - 2018///
SN - 2072-6643
SP - 1
TI - Food volume estimation based on deep learning view synthesis from a single depth map
T2 - Nutrients
UR - http://dx.doi.org/10.3390/nu10122005
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000455073200186&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://www.mdpi.com/2072-6643/10/12/2005
UR - http://hdl.handle.net/10044/1/75178
VL - 10
ER -
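
As a rough illustration of the volume-estimation pipeline described in the abstract (not the authors' released code), the sketch below back-projects a depth map into a point cloud, merges it with a set of synthesised points standing in for the predicted opposite view, and estimates volume from the convex hull of the fused points. The camera intrinsics, the synthesised points, and all function names are illustrative assumptions; the paper additionally uses point cloud completion and ICP to align the two views, which is omitted here.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# back-project a depth map to a point cloud, fuse it with points
# synthesised from the predicted opposite view, and estimate volume
# from the convex hull. Intrinsics and inputs are assumptions only.
import numpy as np
from scipy.spatial import ConvexHull

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map in metres to an (N, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # ignore missing depth readings
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def estimate_volume(front_depth, synthesized_points, fx, fy, cx, cy):
    """Fuse observed and synthesised points, then take the convex-hull volume.
    The synthesised points are assumed already expressed in the camera frame;
    the paper refines this alignment with point cloud completion and ICP."""
    front_points = depth_to_points(front_depth, fx, fy, cx, cy)
    fused = np.vstack([front_points, synthesized_points])
    return ConvexHull(fused).volume        # cubic metres for metric depth

if __name__ == "__main__":
    # Toy example: a flat 10 cm x 10 cm patch 0.5 m from the camera, with a
    # mirrored copy 5 cm behind it standing in for the synthesised back view.
    depth = np.full((100, 100), 0.5)
    pts_front = depth_to_points(depth, fx=500, fy=500, cx=50, cy=50)
    pts_back = pts_front + np.array([0.0, 0.0, 0.05])
    vol = estimate_volume(depth, pts_back, fx=500, fy=500, cx=50, cy=50)
    print(f"Estimated volume: {vol * 1e6:.1f} cm^3")
```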