Imperial College London

Professor Pier Luigi Dragotti

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Signal Processing
 
 
 

Contact

 

+44 (0)20 7594 6192
p.dragotti

 
 

Location

 

814, Electrical Engineering, South Kensington Campus



 

Publications

Citation

BibTeX format

@article{Deng:2019:10.1109/tcsvt.2019.2923901,
author = {Deng, X and Song, P and Rodrigues, MRD and Dragotti, PL},
doi = {10.1109/tcsvt.2019.2923901},
journal = {IEEE Transactions on Circuits and Systems for Video Technology},
pages = {1--1},
title = {RADAR: robust algorithm for depth image super resolution based on FRI theory and multimodal dictionary learning},
url = {http://dx.doi.org/10.1109/tcsvt.2019.2923901},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Depth image super-resolution is a challenging problem, since normally high upscaling factors are required (e.g., 16×), and depth images are often noisy. In order to achieve large upscaling factors and resilience to noise, we propose a Robust Algorithm for Depth imAge super Resolution (RADAR) that combines the power of finite rate of innovation (FRI) theory with multimodal dictionary learning. Given a low-resolution (LR) depth image, we first model its rows and columns as piece-wise polynomials and propose a FRI-based depth upscaling (FDU) algorithm to super-resolve the image. Then, the upscaled moderate quality (MQ) depth image is further enhanced with the guidance of a registered high-resolution (HR) intensity image. This is achieved by learning multimodal mappings from the joint MQ depth and HR intensity pairs to the HR depth, through a recently proposed triple dictionary learning (TDL) algorithm. Moreover, to speed up the super-resolution process, we introduce a new projection-based rapid upscaling (PRU) technique that pre-calculates the projections from the joint MQ depth and HR intensity pairs to the HR depth. Compared with state-of-the-art deep learning based methods, our approach has two distinct advantages: we need a fraction of training data but can achieve the best performance, and we are resilient to mismatches between training and testing datasets. Extensive numerical results show that the proposed method outperforms other state-of-the-art methods on either noise-free or noisy datasets with large upscaling factors up to 16× and can handle unknown blurring kernels well.
AU - Deng,X
AU - Song,P
AU - Rodrigues,MRD
AU - Dragotti,PL
DO - 10.1109/tcsvt.2019.2923901
EP - 1
PY - 2019///
SN - 1051-8215
SP - 1
TI - RADAR: robust algorithm for depth image super resolution based on FRI theory and multimodal dictionary learning
T2 - IEEE Transactions on Circuits and Systems for Video Technology
UR - http://dx.doi.org/10.1109/tcsvt.2019.2923901
UR - https://ieeexplore.ieee.org/document/8741062
ER -
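
The abstract above outlines a two-stage pipeline: an FRI-based depth upscaling (FDU) step, followed by a refinement that maps joint (upscaled depth, HR intensity) patches to HR depth patches through pre-calculated projections (PRU). The Python sketch below illustrates only that projection idea in a heavily simplified form, assuming a single ridge-regularised linear projection learned on non-overlapping patches of synthetic data; it is not the authors' RADAR implementation, and the function names, patch size and regularisation value are illustrative assumptions.

# Minimal sketch of a projection-based patch mapping in the spirit of the PRU step
# described in the abstract. NOT the authors' implementation: it assumes a single
# linear least-squares projection from joint (MQ depth, HR intensity) patches to
# HR depth patches, learned offline and applied at test time. Patch size, ridge
# term and the synthetic data below are illustrative assumptions only.
import numpy as np

PATCH = 8  # hypothetical patch size

def extract_patches(img, size=PATCH):
    """Collect non-overlapping size x size patches as flattened row vectors."""
    h, w = img.shape
    patches = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            patches.append(img[i:i + size, j:j + size].ravel())
    return np.array(patches)

def learn_projection(mq_depth, hr_intensity, hr_depth, lam=1e-3):
    """Pre-calculate a linear projection P from joint (MQ depth, HR intensity)
    patches to HR depth patches via ridge-regularised least squares."""
    X = np.hstack([extract_patches(mq_depth), extract_patches(hr_intensity)])
    Y = extract_patches(hr_depth)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def apply_projection(mq_depth, hr_intensity, P, shape):
    """Refine an upscaled depth image with the pre-calculated projection."""
    X = np.hstack([extract_patches(mq_depth), extract_patches(hr_intensity)])
    Y = X @ P
    out = np.zeros(shape)
    idx = 0
    for i in range(0, shape[0] - PATCH + 1, PATCH):
        for j in range(0, shape[1] - PATCH + 1, PATCH):
            out[i:i + PATCH, j:j + PATCH] = Y[idx].reshape(PATCH, PATCH)
            idx += 1
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr_depth = rng.random((64, 64))                          # synthetic HR depth
    hr_intensity = hr_depth + 0.05 * rng.random((64, 64))    # registered HR intensity
    mq_depth = hr_depth + 0.10 * rng.random((64, 64))        # stand-in for FDU output
    P = learn_projection(mq_depth, hr_intensity, hr_depth)
    refined = apply_projection(mq_depth, hr_intensity, P, hr_depth.shape)
    print("mean refinement error:", np.abs(refined - hr_depth).mean())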