Imperial College London

Professor Abhijeet Ghosh

Faculty of Engineering, Department of Computing

Professor of Graphics and Imaging

Contact

+44 (0)20 7594 8351
abhijeet.ghosh

Location

376 Huxley Building, South Kensington Campus

Publications

Citation

BibTeX format

@article{Rainer:2019,
author = {Rainer, G and Jakob, W and Ghosh, A and Weyrich, T},
journal = {Computer Graphics Forum},
title = {Neural BTF compression and interpolation},
url = {http://reality.cs.ucl.ac.uk/projects/btf/rainer19neural.html},
volume = {38},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artefacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking...). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artifacts.
AU - Rainer,G
AU - Jakob,W
AU - Ghosh,A
AU - Weyrich,T
PY - 2019///
SN - 0167-7055
TI - Neural BTF compression and interpolation
T2 - Computer Graphics Forum
UR - http://reality.cs.ucl.ac.uk/projects/btf/rainer19neural.html
UR - http://hdl.handle.net/10044/1/70019
VL - 38
ER -
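
The abstract describes an autoencoder-style model in which each texel is compressed to a small set of latent coefficients, and a decoder maps those coefficients plus a light and view direction to a single RGB value. The PyTorch sketch below illustrates only that decoding step; the latent dimension, layer widths, and all names are illustrative assumptions, not the architecture used in the paper.

# Minimal sketch of the decoding step described in the abstract: a per-texel
# latent code plus light and view directions is mapped to one RGB value.
# Latent dimension, layer widths, and names are assumptions for illustration.
import torch
import torch.nn as nn

class BTFDecoder(nn.Module):
    def __init__(self, latent_dim: int = 8, hidden: int = 64):
        super().__init__()
        # Input: latent coefficients + 3D light direction + 3D view direction.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, latent, light_dir, view_dir):
        return self.net(torch.cat([latent, light_dir, view_dir], dim=-1))

# Because the decoder is a continuous function of the light and view directions,
# reflectance can be queried at arbitrary angles without interpolating between
# the discrete samples of the original BTF capture.
decoder = BTFDecoder()
latent = torch.randn(1, 8)               # latent coefficients stored per texel
light = torch.tensor([[0.0, 0.0, 1.0]])  # light direction
view = torch.tensor([[0.0, 0.0, 1.0]])   # view direction
rgb = decoder(latent, light, view)       # shape (1, 3)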