Xingchen Zhang is a member of the Personal Robotics Laboratory led by Prof. Yiannis Demiris. He works on the D-RISK project funded by Innovate UK, in collaboration with Transport for London (TfL), Claytex, DRisk.AI, and the Transport Logistics Laboratory of Imperial College London. His main research topics are autonomous driving, pedestrian intention detection, and human-vehicle interaction. In particular, he focuses on safe interaction between mobility devices (autonomous cars and wheelchairs) and pedestrians, working on multimodal pedestrian action prediction (using multiple camera types: RGB/D and thermal) and AV/wheelchair-pedestrian interaction.
Xingchen's other research areas include image fusion (visible-thermal, multi-focus, multi-exposure), visual object tracking (RGB-based, RGB-T), computer vision, and deep learning.
Before coming to Imperial, Xingchen was a postdoctoral research fellow and research assistant professor at Shanghai Jiao Tong University, China. He was also a member of the Science and Technology Expert Database of the Science and Technology Commission of Shanghai Municipality, the PI of a project funded by the Science and Technology Commission of Shanghai Municipality, and the co-PI of a project funded by the Science & Technology Department of Sichuan Province, China.
Xingchen has extensive experience in image fusion and its applications. He is the main author of VIFB, the first visible-infrared image fusion benchmark and the very first benchmark in image fusion. He is also the author of MFIFB and MEFB, the first multi-focus and the first multi-exposure image fusion benchmarks, respectively. He is the main author of the first comprehensive review of RGB-T fusion tracking and a co-author of the book Image Fusion, funded by the National Science and Technology Academic Publications Fund of China. He received the Best Paper Honorable Mention Award at the 9th Chinese Conference on Information Fusion.
Xingchen won several prizes as a team leader during his PhD studies at Queen Mary University of London, including second place in the 2016 Beijing Overseas Talents Entrepreneurship Competition, the championship of the 2016 Beijing Overseas Talent Entrepreneurship Challenge (UK Division), and second prize in the Mission on Mars Robot Challenge organized by MathWorks.
pedestrian intention prediction, pedestrian action prediction, human-vehicle interaction, image fusion, object tracking, RGB-T tracking, computer vision, deep learning.
[08/2020] The Image Fusion book I co-authored while a research assistant professor at Shanghai Jiao Tong University has been published by Springer Nature Singapore and Shanghai Jiao Tong University Press. The book was sponsored by the Chinese National Science and Technology Academic Publications Fund (2019).
[08/2020] The paper 'Real-time long-term tracking with reliability assessment and object recovery' has been accepted by IET Image Processing. The first author is a student I co-supervised at Shanghai Jiao Tong University.
2013 - 2017, PhD, Queen Mary University of London, London, United Kingdom
2012 - 2013, Postgraduate student, Shanghai Jiao Tong University, Shanghai, China
2008 - 2012, BSc, Huazhong University of Science and Technology, Wuhan, China
et al., 2020, Object fusion tracking based on visible and infrared images: A comprehensive review, Information Fusion, Vol:63, ISSN:1566-2535, Pages:166-187
et al., 2020, DSiamMFT: An RGB-T fusion tracking method via dynamic Siamese networks using multi-layer feature fusion, Signal Processing-Image Communication, Vol:84, ISSN:0923-5965
et al., 2019, Anti-occlusion object tracking based on correlation filter, Signal Image and Video Processing, Vol:14, ISSN:1863-1703, Pages:753-761
Zhang X, Ye P, Xiao G, VIFB: A Visible and Infrared Image Fusion Benchmark, CVPR Workshops, ISSN:1550-5499