Imperial College London

Professor Pantelis Georgiou

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Biomedical Electronics

Contact

 

+44 (0)20 7594 6326
pantelis
Website

Location

 

902, Electrical Engineering, South Kensington Campus

Publications

Citation

BibTeX format

@inproceedings{Zhu:2021:10.1007/978-3-030-53352-6_5,
author = {Zhu, T and Li, K and Georgiou, P},
doi = {10.1007/978-3-030-53352-6_5},
pages = {45--53},
title = {Personalized Dual-Hormone Control for Type 1 Diabetes Using Deep Reinforcement Learning},
url = {http://dx.doi.org/10.1007/978-3-030-53352-6_5},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - We introduce a dual-hormone control algorithm for people with Type 1 Diabetes (T1D) which uses deep reinforcement learning (RL). Specifically, double dilated recurrent neural networks are used to learn the control strategy, trained by a variant of Q-learning. The inputs to the model include the real-time sensed glucose and meal carbohydrate content, and the outputs are the actions necessary to deliver dual-hormone (basal insulin and glucagon) control. Without prior knowledge of the glucose-insulin metabolism, we develop a data-driven model using the UVA/Padova Simulator. We first pre-train a generalized model using long-term exploration in an environment with average T1D subject parameters provided by the simulator, then adopt importance sampling to train personalized models for each individual. In-silico, the proposed algorithm largely reduces adverse glycemic events, and achieves time in range, i.e., the percentage of normoglycemia, for the adults and for the adolescents, which outperforms previous approaches significantly. These results indicate that deep RL has great potential to improve the treatment of chronic diseases such as diabetes.
AU - Zhu,T
AU - Li,K
AU - Georgiou,P
DO - 10.1007/978-3-030-53352-6_5
EP - 53
PY - 2021///
SN - 1860-949X
SP - 45
TI - Personalized Dual-Hormone Control for Type 1 Diabetes Using Deep Reinforcement Learning
UR - http://dx.doi.org/10.1007/978-3-030-53352-6_5
ER -
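The abstract describes a controller that learns dual-hormone actions (basal insulin and glucagon) from glucose readings via a variant of Q-learning. The sketch below illustrates only the core Q-learning idea on a toy problem: the three-band glucose state, the one-band dose dynamics, and the in-range reward are all hypothetical stand-ins, not the paper's method, which uses double dilated recurrent neural networks trained in the UVA/Padova simulator.

```python
import random

N_STATES = 3                         # 0: hypo, 1: in range, 2: hyper
ACTIONS = ["glucagon", "none", "insulin"]

def step(state, action):
    """Toy dynamics: glucagon raises glucose one band, insulin lowers it."""
    return max(0, min(N_STATES - 1, state + (1 - action)))

def in_range_reward(state):
    """+1 for normoglycemia (time in range), -1 otherwise."""
    return 1.0 if state == 1 else -1.0

random.seed(0)
alpha, gamma, eps = 0.1, 0.9, 0.2    # learning rate, discount, exploration
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

state = 2                            # start hyperglycemic
for _ in range(2000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.randrange(len(ACTIONS))
    else:
        action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
    nxt = step(state, action)
    # one-step Q-learning update toward the temporal-difference target
    td_target = in_range_reward(nxt) + gamma * max(Q[nxt])
    Q[state][action] += alpha * (td_target - Q[state][action])
    # resample a random glucose band once the agent reaches range
    state = nxt if nxt != 1 else random.randrange(N_STATES)

policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[s][a])]
          for s in range(N_STATES)]
print(policy)  # greedy policy doses glucagon when hypo, insulin when hyper
```

The paper replaces this tabular Q with recurrent networks so the policy can condition on glucose and meal-carbohydrate history rather than a single reading, and personalizes it per subject via importance sampling.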