Imperial College London

Professor Pier Luigi Dragotti

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Signal Processing

Contact

 

+44 (0)20 7594 6192
p.dragotti


Location

 

814 Electrical Engineering, South Kensington Campus


Publications

Citation

BibTeX format

@article{Liu:2023:10.1109/TPAMI.2023.3278940,
author = {Liu, S and Dragotti, PL},
doi = {10.1109/TPAMI.2023.3278940},
journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
pages = {12444--12458},
title = {Sensing diversity and sparsity models for event generation and video reconstruction from events},
url = {http://dx.doi.org/10.1109/TPAMI.2023.3278940},
volume = {45},
year = {2023}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Events-to-video (E2V) reconstruction and video-to-events (V2E) simulation are two fundamental research topics in event-based vision. Current deep neural networks for E2V reconstruction are usually complex and difficult to interpret. Moreover, existing event simulators are designed to generate realistic events, but research on how to improve the event generation process has been so far limited. In this paper, we propose a light, simple model-based deep network for E2V reconstruction, explore the diversity for adjacent pixels in V2E generation, and finally build a video-to-events-to-video (V2E2V) architecture to validate how alternative event generation strategies improve video reconstruction. For the E2V reconstruction, we model the relationship between events and intensity using sparse representation models. A convolutional ISTA network (CISTA) is then designed using the algorithm unfolding strategy. Long short-term temporal consistency (LSTC) constraints are further introduced to enhance the temporal coherence. In the V2E generation, we introduce the idea of having interleaved pixels with different contrast threshold and lowpass bandwidth and conjecture that this can help extract more useful information from intensity. Finally, V2E2V architecture is used to verify the effectiveness of this strategy. Results highlight that our CISTA-LSTC network outperforms state-of-the-art methods and achieves better temporal consistency. Sensing diversity in event generation reveals more fine details and this leads to a significantly improved reconstruction quality.
AU - Liu,S
AU - Dragotti,PL
DO - 10.1109/TPAMI.2023.3278940
EP - 12458
PY - 2023///
SN - 0162-8828
SP - 12444
TI - Sensing diversity and sparsity models for event generation and video reconstruction from events
T2 - IEEE Transactions on Pattern Analysis and Machine Intelligence
UR - http://dx.doi.org/10.1109/TPAMI.2023.3278940
UR - https://www.webofscience.com/api/gateway?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:001068816800058&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=a2bf6146997ec60c407a63945d4e92bb
UR - https://ieeexplore.ieee.org/document/10130595
UR - http://hdl.handle.net/10044/1/110060
VL - 45
ER -
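The abstract describes a convolutional ISTA network (CISTA) built by unfolding the classical Iterative Shrinkage-Thresholding Algorithm into network layers. As background, here is a minimal sketch of the classical (non-unfolded) ISTA iteration for the standard sparse-coding problem min_x 0.5||Ax - y||² + λ||x||₁; the dictionary `A`, step size, and stopping rule here are generic illustrations, not the paper's actual network:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iters=100):
    """Classical ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Each iteration is a gradient step on the data term followed by
    soft-thresholding; unfolding replaces A, step size, and threshold
    with learned, layer-specific parameters (as in CISTA).
    """
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)    # gradient of 0.5*||Ax - y||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Unfolding such an iteration into a fixed number of layers, with convolutional operators in place of the dense dictionary, yields an interpretable network whose structure mirrors the optimization algorithm.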