Imperial College London

Alexandros Lattas

Faculty of Engineering, Department of Computing

Academic Visitor

Contact

a.lattas Website

Location

Huxley Building, South Kensington Campus

Publications

Gitlina Y, Guarnera GC, Dhillon DS, Hansen J, Lattas A, Pai D, Ghosh A et al., 2020, Practical measurement and reconstruction of spectral skin reflectance, Computer Graphics Forum: the international journal of the Eurographics Association, Vol: 39, Pages: 75-89, ISSN: 0167-7055

We present two practical methods for measuring spectral skin reflectance suited to live subjects, and drive a spectral BSSRDF model of appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method illuminates a subject with two complementary uniform spectral illumination conditions, using a multispectral LED sphere, to estimate spatially varying chromophore parameters including melanin and hemoglobin concentration, melanin blend-type fraction, and epidermal hemoglobin fraction. We demonstrate that these complementary measurements enable higher-quality estimates of chromophores than those obtained under standard broadband illumination, while remaining suitable for integration with multiview facial capture using regular color cameras. Besides novel optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin patch measurements from a hand-held dermatological skin measurement device, the Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a novel neural-network approach to parameter estimation from the measurements, which is significantly faster than a lookup-table search and avoids parameter quantization. We demonstrate high-quality matches of skin appearance with photographs for a variety of skin types using our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.
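The neural-network parameter estimation mentioned above replaces a per-pixel lookup-table search. Below is a minimal sketch, assuming a small MLP that regresses normalized chromophore parameters from a handful of spectral measurements; the dimensions, architecture, and names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: regress per-pixel chromophore parameters from a
# few spectral reflectance measurements. Input/output dimensions and
# the architecture are illustrative assumptions, not the paper's model.
class ChromophoreRegressor(nn.Module):
    def __init__(self, n_measurements: int = 6, n_params: int = 4):
        super().__init__()
        # n_params stands in for e.g. melanin concentration, hemoglobin
        # concentration, melanin blend-type fraction, and epidermal
        # hemoglobin fraction, each normalized to [0, 1].
        self.net = nn.Sequential(
            nn.Linear(n_measurements, 64),
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, n_params),
            nn.Sigmoid(),
        )

    def forward(self, reflectance: torch.Tensor) -> torch.Tensor:
        return self.net(reflectance)

# Trained against samples from a forward skin reflectance model, inference
# becomes a single forward pass: faster than a table search and free of
# the table's parameter quantization.
model = ChromophoreRegressor()
pixels = torch.rand(1024, 6)   # a batch of per-pixel measurements
params = model(pixels)         # continuous parameter estimates, (1024, 4)
```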

Journal article

Lattas A, Moschoglou S, Gecer B, Ploumpis S, Triantafyllou V, Ghosh A, Zafeiriou S et al., 2020, AvatarMe: Realistically Renderable 3D Facial Reconstruction “in-the-wild”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

Conference paper

Lattas A, Moschoglou S, Gecer B, Ploumpis S, Triantafyllou V, Ghosh A, Zafeiriou S et al., 2020, AvatarMe: realistically renderable 3D facial reconstruction "in-the-wild", Publisher: arXiv

Over the last years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have accomplished astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method which can produce high-resolution photorealistic 3D faces from "in-the-wild" images, and this can be attributed to the: (a) scarcity of available data for training, and (b) lack of robust methodologies that can successfully be applied on very high-resolution data. In this paper, we introduce AvatarMe, the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail. To achieve this, we capture a large dataset of facial shape and reflectance, build on a state-of-the-art 3D texture and shape reconstruction method, and successively refine its results, while generating the per-pixel diffuse and specular components that are required for realistic rendering. As we demonstrate in a series of qualitative and quantitative experiments, AvatarMe outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image that, for the first time, bridges the uncanny valley.
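The abstract describes a staged pipeline: a base 3D reconstruction, successive refinement to increasing resolution, and decomposition into diffuse and specular components. Here is a schematic sketch of that data flow, in which every function is a placeholder for a learned component and all names and resolutions are invented for illustration, not taken from the authors' code.

```python
import numpy as np

# Schematic of the described pipeline. Every function is a placeholder
# for a learned or classical component; names and resolutions are
# illustrative assumptions, not the authors' code.

def base_reconstruction(image):
    """Fit a 3D face shape and a low-resolution UV texture (placeholder)."""
    shape = np.zeros((50000, 3), np.float32)       # mesh vertices
    texture = np.zeros((512, 512, 3), np.float32)  # completed low-res UV texture
    return shape, texture

def refine(texture, factor=2):
    """One super-resolution / detail-refinement pass (placeholder)."""
    h, w, c = texture.shape
    return np.zeros((h * factor, w * factor, c), texture.dtype)

def decompose_reflectance(texture):
    """Produce the per-pixel components needed for rendering (placeholder)."""
    diffuse = texture.copy()
    specular = np.zeros_like(texture)
    return diffuse, specular

image = np.zeros((256, 256, 3), np.float32)        # single "in-the-wild" photo
shape, texture = base_reconstruction(image)
for _ in range(3):                                 # successively refine: 512 -> 4096
    texture = refine(texture)
diffuse, specular = decompose_reflectance(texture)
```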

Working paper

Gecer B, Lattas A, Ploumpis S, Deng J, Papaioannou A, Moschoglou S, Zafeiriou S et al., 2020, Synthesizing Coupled 3D Face Modalities by Trunk-Branch Generative Adversarial Networks, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 12374 LNCS, Pages: 415-433, ISSN: 0302-9743

Generating realistic 3D faces is of high importance for computer graphics and computer vision applications. Generally, research on 3D face generation revolves around linear statistical models of the facial surface. Nevertheless, these models cannot faithfully represent either the facial texture or the normals of the face, both of which are crucial for photo-realistic face synthesis. Recently, it was demonstrated that Generative Adversarial Networks (GANs) can be used for generating high-quality textures of faces. Nevertheless, the generation process either omits the geometry and normals, or independent processes are used to produce 3D shape information. In this paper, we present the first methodology that jointly generates high-quality texture, shape, and normals, which can be used for photo-realistic synthesis. To do so, we propose a novel GAN that can generate data from different modalities while exploiting their correlations. Furthermore, we demonstrate how the generation can be conditioned on expression, creating faces with various facial expressions. The qualitative results shown in this paper are compressed due to size limitations; full-resolution results and the accompanying video can be found in the supplementary documents. The code and models are available at the project page: https://github.com/barisgecer/TBGAN.
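The trunk-branch design the abstract describes can be sketched compactly: a shared trunk captures the correlations between modalities, and separate per-modality branches decode texture, shape, and normals from the same features, so the sampled modalities stay mutually consistent. A minimal sketch, assuming illustrative layer sizes and tensor shapes (the authors' actual model is at the project page linked above):

```python
import torch
import torch.nn as nn

# Minimal sketch of a trunk-branch generator: the shared trunk models
# cross-modality correlations; each branch decodes one modality.
# Layer sizes and output shapes are illustrative assumptions.
class TrunkBranchGenerator(nn.Module):
    def __init__(self, z_dim: int = 512, feat_dim: int = 1024):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(z_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
        )
        # One branch per modality, all fed from the same trunk features.
        self.texture_branch = nn.Linear(feat_dim, 64 * 64 * 3)
        self.shape_branch = nn.Linear(feat_dim, 64 * 64 * 3)
        self.normals_branch = nn.Linear(feat_dim, 64 * 64 * 3)

    def forward(self, z: torch.Tensor):
        h = self.trunk(z)
        texture = self.texture_branch(h).view(-1, 3, 64, 64)
        shape = self.shape_branch(h).view(-1, 3, 64, 64)
        normals = self.normals_branch(h).view(-1, 3, 64, 64)
        return texture, shape, normals

generator = TrunkBranchGenerator()
texture, shape, normals = generator(torch.randn(4, 512))  # 4 coupled samples
```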

Journal article

Tzirakis P, Papaioannou A, Lattas A, Tarasiou M, Schuller B, Zafeiriou S et al., 2020, Synthesising 3D Facial Motion from "In-the-Wild" Speech, 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020), Pages: 265-272, ISSN: 2326-5396

Journal article

Lattas A, Wang M, Zafeiriou S, Ghosh A et al., 2019, Multi-view Facial Capture using Binary Spherical Gradient Illumination, ACM SIGGRAPH Conference, Publisher: Association for Computing Machinery

Conference paper

