FAN C, Lin Y, Ghosh A, 2023, Deep shape and SVBRDF estimation using smartphone multi-lens imaging, Computer Graphics Forum: the international journal of the Eurographics Association, Vol: 42, ISSN: 0167-7055
We present a deep neural network-based method that acquires high-quality shape and spatially varying reflectance of 3D objects using smartphone multi-lens imaging. Our method acquires two images simultaneously using a zoom lens and a wide-angle lens of a smartphone under either natural illumination or phone flash conditions, effectively functioning like a single-shot method. Unlike traditional multi-view stereo methods which require sufficient differences in viewpoint and only estimate depth at a certain coarse scale, our method estimates fine-scale depth by utilising an optical-flow field extracted from the subtle baseline and perspective differences due to different optics in the two images captured simultaneously. We further guide the SVBRDF estimation using the estimated depth, resulting in superior results compared to existing single-shot methods.
Rainer G, Bridgeman L, Ghosh A, 2023, Neural shading fields for efficient facial inverse rendering, Computer Graphics Forum: the international journal of the Eurographics Association, Vol: 42, ISSN: 0167-7055
Given a set of unstructured photographs of a subject under unknown lighting, 3D geometry reconstruction is relatively easy, but reflectance estimation remains a challenge. This is because it requires disentangling lighting from reflectance in the ambiguous observations. Solutions exist leveraging statistical, data-driven priors to output plausible reflectance maps even in the under-constrained single-view, unknown lighting setting. We propose a very low-cost inverse optimization method that does not rely on data-driven priors, to obtain high-quality diffuse and specular, albedo and normal maps in the setting of multi-view unknown lighting. We introduce compact neural networks that learn the shading of a given scene by efficiently finding correlations in the appearance across the face. We jointly optimize the implicit global illumination of the scene in the networks with explicit diffuse and specular reflectance maps that can subsequently be used for physically-based rendering. We analyze the veracity of results on ground truth data, and demonstrate that our reflectance maps maintain more detail and greater personal identity than state-of-the-art deep learning and differentiable rendering methods.
Lin A, Lin Y, Ghosh A, 2023, Practical acquisition of shape and plausible appearance of reflective and translucent objects, Computer Graphics Forum: the international journal of the Eurographics Association, Vol: 42, ISSN: 0167-7055
We present a practical method for acquisition of shape and plausible appearance of reflective and translucent objects for realistic rendering and relighting applications. Such objects are extremely challenging to scan with existing capture setups, and have previously required complex light stage hardware emitting continuous illumination. We instead employ a practical capture setup consisting of a set of desktop LCD screens to illuminate such objects with piece-wise continuous illumination for acquisition. We employ phase-shifted sinusoidal illumination for novel estimation of high quality photometric normals and transmission vector along with diffuse-specular separated reflectance/transmission maps for realistic relighting. We further employ neural in-painting to fill gaps in our measurements caused by gaps in screen illumination, and a novel NeuS-based neural rendering that combines these shape and reflectance maps acquired from multiple viewpoints for high-quality 3D surface geometry reconstruction along with plausible realistic rendering of complex light transport in such objects.
Lattas A, Moschoglou S, Ploumpis S, et al., 2022, AvatarMe++: Facial Shape and BRDF Inference With Photorealistic Rendering-Aware GANs, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 44, Pages: 9269-9284, ISSN: 0162-8828
Lattas A, Lin Y, Kannan J, et al., 2022, Practical and scalable desktop-based high-quality facial capture, European Conference on Computer Vision (ECCV) 2022, Publisher: Springer, Pages: 522-537, ISSN: 0302-9743
We present a novel desktop-based system for high-quality facial capture including geometry and facial appearance. The proposed acquisition system is highly practical and scalable, consisting purely of commodity components. The setup consists of a set of displays for controlled illumination for reflectance capture, in conjunction with multi-view acquisition of facial geometry. We additionally present a novel set of modulated binary illumination patterns for efficient acquisition of reflectance and photometric normals using our setup, with diffuse-specular separation. We demonstrate high-quality results with two different variants of the capture setup: one entirely consisting of portable mobile devices targeting static facial capture, and the other consisting of desktop LCD displays targeting both static and dynamic facial capture.
Nogue E, Lin Y, Ghosh A, 2022, Polarization-imaging surface reflectometry using near-field display, Eurographics Symposium on Rendering (EGSR) 2022, Publisher: Eurographics, Pages: 1-9
We present a practical method for measurement of spatially varying isotropic surface reflectance of planar samples using a combination of single-view polarization imaging and near-field display illumination. Unlike previous works that have required multiview imaging or more complex polarization measurements, our method requires only three linear polarizer measurements from a single viewpoint for estimating diffuse and specular albedo and spatially varying specular roughness. We obtain a high-quality estimate of the surface normal with two additional polarized measurements under a gradient illumination pattern. Our approach enables high-quality renderings of planar surfaces while reducing measurements to a near-optimal number for the estimated SVBRDF parameters.
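Three linear-polarizer measurements suffice because the intensity behind a polarizer at angle θ is a sinusoid in 2θ with three unknowns, the linear Stokes components. The sketch below (our illustration, not the paper's full SVBRDF pipeline; the specific angles 0°, 45°, 90° are an assumption) shows how three such measurements recover the Stokes vector and degree of linear polarization per pixel:

```python
import numpy as np

def stokes_from_three(i0, i45, i90):
    """Recover linear Stokes components from intensities measured through a
    linear polarizer at 0, 45 and 90 degrees, using the Malus-type model
    I(theta) = 0.5 * (S0 + S1*cos(2*theta) + S2*sin(2*theta))."""
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal/vertical polarization difference
    s2 = 2.0 * i45 - s0    # diagonal polarization difference
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2):
    """DoLP separates polarized (specular-dominated) from unpolarized
    (diffuse-dominated) reflection."""
    return np.sqrt(s1 * s1 + s2 * s2) / np.maximum(s0, 1e-8)
```

These quantities work identically on whole images (numpy arrays) instead of scalars.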
Guarnera GC, Gitlina Y, Deschaintre V, et al., 2022, Spectral upsampling approaches for RGB illumination, Eurographics Symposium on Rendering (EGSR) 2022, Publisher: Eurographics, Pages: 1-12
We present two practical approaches for high fidelity spectral upsampling of previously recorded RGB illumination in the form of an image-based representation such as an RGB light probe. Unlike previous approaches that require multiple measurements with a spectrometer or a reference color chart under a target illumination environment, our method requires no additional information for the spectral upsampling step. Instead, we construct a data-driven basis of spectral distributions for incident illumination from a set of six RGBW LEDs (three narrowband and three broadband) that we employ to represent a given RGB color using a convex combination of the six basis spectra. We propose two different approaches for estimating the weights of the convex combination using (a) a genetic algorithm, and (b) neural networks. We additionally propose a theoretical basis consisting of a set of narrow and broad Gaussians as a generalization of the approach, and also evaluate an alternate LED basis for spectral upsampling. We achieve good qualitative matches between the illumination spectrum predicted by our spectral upsampling approach and the ground-truth illumination spectrum, while achieving near-perfect matching of the RGB color of the given illumination in the vast majority of cases. We demonstrate that the spectrally upsampled RGB illumination can be employed for various applications including improved lighting reproduction as well as more accurate spectral rendering.
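The core fitting problem above is finding a convex combination of six basis spectra whose RGB matches a target color. As a minimal sketch (the paper uses a genetic algorithm and neural networks; the projected-gradient solver and the illustrative basis colors here are our assumptions), one can solve it as a simplex-constrained least-squares problem:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fit_convex_weights(basis_rgb, target_rgb, iters=5000, lr=0.2):
    """Projected gradient descent for w minimizing ||basis_rgb.T @ w - target_rgb||
    subject to w lying on the simplex (a valid convex combination).
    basis_rgb: (6, 3) array, RGB color of each LED basis spectrum."""
    A = basis_rgb.T                                 # (3, 6)
    w = np.full(len(basis_rgb), 1.0 / len(basis_rgb))
    for _ in range(iters):
        grad = A.T @ (A @ w - target_rgb)           # gradient of the squared residual
        w = project_to_simplex(w - lr * grad)       # stay a convex combination
    return w
```

The fitted weights then blend the six measured LED spectra into a full spectrum for the given RGB color.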
Deschaintre V, Lin Y, Ghosh A, 2021, Deep polarization imaging for 3D shape and SVBRDF acquisition, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 15562-15571
We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues. Unlike previous works that have exploited polarization to estimate material or object appearance under certain constraints (known shape or multiview acquisition), we lift such restrictions by coupling polarization imaging with deep learning to achieve a high-quality estimate of 3D object shape (surface normals and depth) and SVBRDF using single-view polarization imaging under frontal flash illumination. In addition to acquired polarization images, we provide our deep network with strong novel cues related to shape and reflectance, in the form of a normalized Stokes map and an estimate of diffuse color. We additionally describe modifications to network architecture and training loss which provide further qualitative improvements. We demonstrate our approach to achieve superior results compared to recent works employing deep learning in conjunction with flash illumination.
Riviere J, Gotardo P, Bradley D, et al., 2020, Single-shot high-quality facial geometry and skin appearance capture, ACM Transactions on Graphics, Vol: 39, ISSN: 0730-0301
We propose a new light-weight face capture system capable of reconstructing both high-quality geometry and detailed appearance maps from a single exposure. Unlike currently employed appearance acquisition systems, the proposed technology does not require active illumination and hence can readily be integrated with passive photogrammetry solutions. These solutions are in widespread use for 3D scanning humans as they can be assembled from off-the-shelf hardware components, but lack the capability of estimating appearance. This paper proposes a solution to overcome this limitation, by adding appearance capture to photogrammetry systems. The only additional hardware requirement to these solutions is that a subset of the cameras are cross-polarized with respect to the illumination, and the remaining cameras are parallel-polarized. The proposed algorithm leverages the images with the two different polarization states to reconstruct the geometry and to recover appearance properties. We do so by means of an inverse rendering framework, which solves for per-texel diffuse albedo, specular intensity, and high-resolution normals, as well as global specular roughness, considering the subsurface scattering nature of skin. We show results for a variety of human subjects of different ages and skin typology, illustrating how the captured fine-detail skin surface and subsurface scattering effects lead to realistic renderings of their digital doubles, also in different illumination conditions.
Gitlina Y, Guarnera GC, Dhillon DS, et al., 2020, Practical measurement and reconstruction of spectral skin reflectance, Computer Graphics Forum: the international journal of the Eurographics Association, Vol: 39, Pages: 75-89, ISSN: 0167-7055
We present two practical methods for measurement of spectral skin reflectance suited for live subjects, and drive a spectral BSSRDF model with appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method employs illuminating a subject with two complementary uniform spectral illumination conditions using a multispectral LED sphere to estimate spatially varying parameters of chromophore concentrations including melanin and hemoglobin concentration, melanin blend‐type fraction, and epidermal hemoglobin fraction. We demonstrate that our proposed complementary measurements enable higher‐quality estimate of chromophores than those obtained using standard broadband illumination, while being suitable for integration with multiview facial capture using regular color cameras. Besides novel optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin patch measurements using a hand‐held dermatological skin measurement device, a Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a novel approach for parameter estimation given the measurements using neural networks which is significantly faster than a lookup table search and avoids parameter quantization. We demonstrate high quality matches of skin appearance with photographs for a variety of skin types with our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.
Lattas A, Moschoglou S, Gecer B, et al., 2020, AvatarMe: Realistically Renderable 3D Facial Reconstruction “in-the-wild”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex data such as BTF texels can prove challenging, as models tend to describe restricted function spaces that cannot encompass real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material. Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials, and which projects reflectance measurements to a shared latent parameter space. Similarly to SVBRDF fitting, real-world materials are represented by parameter maps, and the decoder network is analogous to the analytic BRDF expression (also parametrized on light and view directions for practical rendering application). With this approach, encoding and decoding materials becomes a simple matter of evaluating the network. We train and validate on BTF datasets of the University of Bonn, but there are no prerequisites on either the number of angular reflectance samples, or the sample positions. Additionally, we show that the latent space is well-behaved and can be sampled from, for applications such as mipmapping and texture synthesis.
Lattas A, Moschoglou S, Gecer B, et al., 2020, AvatarMe: realistically renderable 3D facial reconstruction "in-the-wild", Publisher: arXiv
Over the last years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have accomplished astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method which can produce high-resolution photorealistic 3D faces from "in-the-wild" images, and this can be attributed to the: (a) scarcity of available data for training, and (b) lack of robust methodologies that can successfully be applied on very high-resolution data. In this paper, we introduce AvatarMe, the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail. To achieve this, we capture a large dataset of facial shape and reflectance and build on a state-of-the-art 3D texture and shape reconstruction method and successively refine its results, while generating the per-pixel diffuse and specular components that are required for realistic rendering. As we demonstrate in a series of qualitative and quantitative experiments, AvatarMe outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image that, for the first time, bridges the uncanny valley.
Lin Y, Peers P, Ghosh A, 2019, On-site example-based material appearance acquisition, Computer Graphics Forum, Vol: 38, ISSN: 1467-8659
We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge on the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance, by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible “rapid-appearance-modeling”.
Rainer G, Jakob W, Ghosh A, et al., 2019, Neural BTF compression and interpolation, Computer Graphics Forum, Vol: 38, ISSN: 0167-7055
The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artifacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking...). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artifacts.
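The decoder interface described above — per-texel latent code plus light and view direction in, one RGB value out — can be sketched as a small MLP forward pass. This is an illustration of the query pattern only: the layer sizes are arbitrary assumptions, and the weights here are random rather than trained jointly with the latent codes as in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT, HIDDEN = 8, 32  # illustrative sizes, not the paper's architecture

# Placeholder weights; in the actual method these are trained end-to-end
# together with the per-texel latent coefficients.
W1 = rng.normal(0.0, 0.1, (LATENT + 6, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, 3))
b2 = np.zeros(3)

def decode(latent, light_dir, view_dir):
    """Continuously query reflectance: concatenate the texel's latent code
    with the light/view directions and evaluate the decoder network."""
    x = np.concatenate([latent, light_dir, view_dir])
    h = np.maximum(W1.T @ x + b1, 0.0)   # ReLU hidden layer
    return W2.T @ h + b2                 # linear RGB output

rgb = decode(rng.normal(size=LATENT), np.array([0.0, 0.0, 1.0]),
             np.array([0.0, 0.0, 1.0]))
```

Because the decoder is a continuous function of the directions, no interpolation between discrete angular samples is needed at render time.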
Lattas A, Wang M, Zafeiriou S, et al., 2019, Multi-view Facial Capture using Binary Spherical Gradient Illumination, ACM SIGGRAPH Conference, Publisher: ACM
Gotardo P, Riviere J, Bradley D, et al., 2018, Practical dynamic facial appearance modeling and acquisition, ACM Transactions on Graphics, Vol: 37, ISSN: 0730-0301
We present a method to acquire dynamic properties of facial skin appearance, including dynamic diffuse albedo encoding blood flow, dynamic specular intensity, and per-frame high resolution normal maps for a facial performance sequence. The method reconstructs these maps from a purely passive multi-camera setup, without the need for polarization or requiring temporally multiplexed illumination. Hence, it is very well suited for integration with existing passive systems for facial performance capture. To solve this seemingly underconstrained problem, we demonstrate that albedo dynamics during a facial performance can be modeled as a combination of: (1) a static, high-resolution base albedo map, modeling full skin pigmentation; and (2) a dynamic, one-dimensional component in the CIE L*a*b* color space, which explains changes in hemoglobin concentration due to blood flow. We leverage this albedo subspace and additional constraints on appearance and surface geometry to also estimate specular reflection parameters and resolve high-resolution normal maps with unprecedented detail in a passive capture system. These constraints are built into an inverse rendering framework that minimizes the difference of the rendered face to the captured images, incorporating constraints from multiple views for every texel on the face. The presented method is the first system capable of capturing high-quality dynamic appearance maps at full resolution and video framerates, providing a major step forward in the area of facial appearance acquisition.
Toisoul A, Dhillon D, Ghosh A, 2018, Acquiring spatially varying appearance of printed holographic surfaces, ACM Transactions on Graphics, Vol: 37, ISSN: 0730-0301
We present two novel and complementary approaches to measure diffraction effects in commonly found planar spatially varying holographic surfaces. Such surfaces are increasingly found in various decorative materials such as gift bags, holographic papers, clothing and security holograms, and produce impressive visual effects that have not been previously acquired for realistic rendering. Such holographic surfaces are usually manufactured with one dimensional diffraction gratings that are varying in periodicity and orientation over an entire sample in order to produce a wide range of diffraction effects such as gradients and kinematic (rotational) effects. Our proposed methods estimate these two parameters and allow an accurate reproduction of these effects in real-time. The first method simply uses a point light source to recover both the grating periodicity and orientation in the case of regular and stochastic textures. Under the assumption that the sample is made of the same repeated diffractive tile, good results can be obtained using just one to five photographs on a wide range of samples. The second method is based on polarization imaging and enables an independent high resolution measurement of the grating orientation and relative periodicity at each surface point. The method requires a minimum of four photographs for accurate results, does not assume repetition of an exemplar tile, and can even reveal minor fabrication defects. We present point light source renderings with both approaches that qualitatively match photographs, as well as real-time renderings under complex environmental illumination.
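The two spatially varying parameters estimated above, grating periodicity and orientation, enter rendering through the standard one-dimensional grating equation, which relates the period d and the incident angle to the directions into which each wavelength is diffracted:

```latex
% First-order diffraction from a 1D grating of period d:
% order m of wavelength \lambda leaves at angle \theta_m for incidence \theta_i.
\sin\theta_m \;=\; \sin\theta_i \;+\; \frac{m\,\lambda}{d}, \qquad m \in \mathbb{Z}
```

Smaller periods d spread the visible wavelengths over wider angles, which is why period and orientation together determine the gradient and kinematic color effects the paper reproduces.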
Kim J, Ghosh A, 2018, Polarized Light Field Imaging for Single-Shot Reflectance Separation, Sensors, Vol: 18
Kampouris C, Zafeiriou S, Ghosh A, 2018, Diffuse-specular separation using binary spherical gradient illumination, Eurographics Symposium on Rendering (EGSR) 2018, Publisher: The Eurographics Association, ISSN: 1727-3463
We introduce a novel method for view-independent diffuse-specular separation of albedo and photometric normals without requiring polarization, using binary spherical gradient illumination. The key idea is that with binary gradient illumination, a dielectric surface oriented towards the dark hemisphere exhibits pure diffuse reflectance while a surface oriented towards the bright hemisphere exhibits both diffuse and specular reflectance. We exploit this observation to formulate diffuse-specular separation based on color-space analysis of a surface’s response to binary spherical gradients and their complements. The method does not impose restrictions on viewpoints and requires fewer photographs for multiview acquisition than polarized spherical gradient illumination. We further demonstrate an efficient two-shot capture using spectral multiplexing of the illumination that enables diffuse-specular separation of albedo and heuristic separation of photometric normals.
Kim J, Han G, Han H, et al., 2017, ThirdLight: low-cost and high-speed 3D interaction using photosensor markers, European Conference on Visual Media Production (CVMP), Publisher: ACM
We present a low-cost 3D tracking system for virtual reality, gesture modeling, and robot manipulation applications which require fast and precise localization of headsets, data gloves, props, or controllers. Our system removes the need for cameras or projectors for sensing, and instead uses cheap LEDs and printed masks for illumination, and low-cost photosensitive markers. The illumination device transmits a spatiotemporal pattern as a series of binary Gray-code patterns. Multiple illumination devices can be combined to localize each marker in 3D at high speed (333Hz). Our method has strengths in accuracy, speed, cost, ambient performance, large working space (1m-5m) and robustness to noise compared with conventional techniques. We compare with a state-of-the-art instrumented glove and vision-based systems to demonstrate the accuracy, scalability, and robustness of our approach. We propose a fast and accurate method for hand gesture modeling using an inverse kinematics approach with the six photosensitive markers. We additionally propose a passive markers system and demonstrate various interaction scenarios as practical applications.
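The spatiotemporal Gray-code patterns mentioned above are what let each photosensitive marker localize itself: consecutive stripe indices differ in exactly one bit, so a marker sitting on a stripe boundary can mis-read at most one pattern. A minimal sketch of the encoding/decoding (the pattern layout here is an illustrative assumption, not the paper's exact transmitter design):

```python
def to_gray(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by folding in successively shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns(bits):
    """One binary stripe pattern (0/1 per column x) per bit plane, MSB first.
    A marker at column x reads one bit from each pattern over time."""
    return [[(to_gray(x) >> k) & 1 for x in range(1 << bits)]
            for k in reversed(range(bits))]
```

A marker's observed bit sequence across the pattern series is the Gray code of its stripe index; `from_gray` then recovers its 1D position, and multiple illumination devices triangulate it in 3D.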
Toisoul A, Ghosh A, 2017, Real-time rendering of realistic surface diffraction using low-rank factorisation, European Conference on Visual Media Production (CVMP), Publisher: ACM
We propose a novel approach for real-time rendering of diffraction effects in surface reflectance in arbitrary environments. Such renderings are usually extremely expensive as they require the computation of a convolution at real-time framerates. In the case of diffraction, the diffraction lobes usually have high frequency details that can only be captured with high resolution convolution kernels, which make calculations even more expensive. Our method uses a low-rank factorisation of the diffraction lookup table to approximate a 2D convolution kernel by two simpler low-rank kernels, which allows the computation of the convolution at real-time framerates using two rendering passes. We show realistic renderings in arbitrary environments and achieve a performance of 50 to 100 FPS, making it possible to use such a technique in real-time applications such as video games and VR.
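The speedup comes from the classic separable-convolution identity: if a 2D kernel factors as an outer product of two 1D kernels, one expensive 2D pass becomes two cheap 1D passes. A minimal rank-1 sketch of this idea (the paper factorizes its diffraction lookup table; the SVD-based factorization and the Gaussian kernel below are our illustrative stand-ins):

```python
import numpy as np

def rank1_factors(kernel):
    """Best rank-1 approximation K ~ outer(u, v), via SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(kernel)
    return U[:, 0] * np.sqrt(s[0]), Vt[0] * np.sqrt(s[0])

def convolve_separable(img, u, v):
    """Two 1D passes (O(w + h) taps per pixel) replacing one full
    2D pass (O(w * h) taps per pixel)."""
    rows = np.apply_along_axis(lambda r: np.convolve(r, v, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, u, mode="same"), 0, rows)
```

In a shader this maps directly onto the two rendering passes the paper describes, one horizontal and one vertical.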
We present a novel approach for on-site acquisition of surface reflectance for planar, spatially varying, isotropic samples in uncontrolled outdoor environments. Our method exploits the naturally occurring linear polarization of incident and reflected illumination for this purpose. By rotating a linear polarizing filter in front of a camera at three different orientations, we measure the polarization reflected off the sample and combine this information with multi-view analysis and inverse rendering in order to recover per-pixel, high resolution reflectance and surface normal maps. Specifically, we employ polarization imaging from two near orthogonal views close to the Brewster angle of incidence in order to maximize polarization cues for surface reflectance estimation. To the best of our knowledge, our method is the first to successfully extract a complete set of reflectance parameters with passive capture in completely uncontrolled outdoor settings. To this end, we analyze our approach under the general, but previously unstudied, case of incident partial linear polarization (due to the sky) in order to identify the strengths and weaknesses of the method under various outdoor conditions. We provide practical guidelines for on-site acquisition based on our analysis, and demonstrate high quality results with an entry level DSLR as well as a mobile phone.
Kim J, Reshetouski I, Ghosh A, 2017, Acquiring axially-symmetric transparent objects using single-view transmission imaging, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 1484-1492, ISSN: 1063-6919
We propose a novel, practical solution for high quality reconstruction of axially-symmetric transparent objects. While a special case, such transparent objects are ubiquitous in the real world. Common examples of these are glasses, goblets, tumblers, carafes, etc., that can have very unique and visually appealing forms, making their reconstruction interesting for vision and graphics applications. Our acquisition setup involves imaging such objects from a single viewpoint while illuminating them from directly behind with a few patterns emitted by an LCD panel. Our reconstruction step is then based on optimization of the object's geometry and its refractive index to minimize the difference between observed and simulated transmission/refraction of rays passing through the object. We exploit the object's axial symmetry as a strong shape prior which allows us to achieve robust reconstruction from a single viewpoint using a simple, commodity acquisition setup. We demonstrate high quality reconstruction of several common rotationally symmetric as well as more complex n-fold symmetric transparent objects with our approach.
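Simulating the transmission of rays through the object, as the optimization above requires, rests on Snell's law at each surface intersection. A self-contained refraction helper in vector form (a standard formulation, shown here as an illustration rather than the paper's implementation):

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (pointing
    against d), with eta = n_incident / n_transmitted.
    Returns the unit transmitted direction, or None on total internal
    reflection."""
    cos_i = -np.dot(n, d)                      # cosine of incident angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i) # Snell: sin_t = eta * sin_i
    if sin2_t > 1.0:
        return None                            # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n
```

Tracing each camera ray through two such refractions (front and back surface) and comparing where it lands on the LCD panel against the observed pattern gives the residual the geometry/refractive-index optimization minimizes.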
Toisoul A, Ghosh A, 2017, Practical acquisition and rendering of diffraction effects in surface reflectance, ACM Transactions on Graphics, Vol: 36, ISSN: 1557-7368
We propose two novel contributions for measurement-based rendering of diffraction effects in surface reflectance of planar homogeneous diffractive materials. As a general solution for commonly manufactured materials, we propose a practical data-driven rendering technique and a measurement approach to efficiently render complex diffraction effects in real-time. Our measurement step simply involves photographing a planar diffractive sample illuminated with an LED flash. Here, we directly record the resultant diffraction pattern on the sample surface due to a narrow band point source illumination. Furthermore, we propose an efficient rendering method that exploits the measurement in conjunction with the Huygens-Fresnel principle to fit relevant diffraction parameters based on a first order approximation. Our proposed data-driven rendering method requires the precomputation of a single diffraction lookup table for accurate spectral rendering of complex diffraction effects. Secondly, for sharp specular samples, we propose a novel method for practical measurement of the underlying diffraction grating using out-of-focus “bokeh” photography of the specular highlight. We demonstrate how the measured bokeh can be employed as a height field to drive a diffraction shader based on a first order approximation for efficient real-time rendering. Finally, we also derive analytic solutions for a few special cases of diffraction from our measurements and demonstrate realistic rendering results under complex light sources and environments.
Toisoul A, Ghosh A, 2016, Image-based relighting using room lighting basis, European Conference on Visual Media Production, Publisher: ACM
We present a novel and practical approach for image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. The lighting basis includes diverse light sources such as the house lights and the natural illumination coming from the windows. Once the data is captured, we homogenize the reflectance field to take into account the variety of light source colours, minimising tone differences in the reflectance field. Additionally, we measure the room dark level corresponding to a small amount of global illumination with all lights switched off and blinds drawn. The dark level, due to some light leakage through the blinds, is removed from the individual local lighting basis conditions and employed as an additional global lighting basis. Finally, we optimize the projection of a desired lighting environment onto our room lighting basis to get a close approximation of the environment with our sparse lighting basis. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, including using a light stage, and we demonstrate effective relighting results in two different room configurations. We believe our approach can be applied for practical relighting applications with general studio lighting.
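The projection step described above can be sketched as a least-squares fit: find per-light weights so that the weighted sum of the basis lighting conditions best approximates the target environment. This is our minimal illustration (the paper's optimization details may differ); negative weights are clipped since a physical room light cannot subtract energy:

```python
import numpy as np

def fit_room_lighting(basis, target):
    """Weights w such that basis.T @ w approximates the target environment.
    basis:  (k, n) array, one flattened lighting-basis image per row
            (house lights, windows, dark level, ...).
    target: (n,) flattened desired lighting environment."""
    w, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
    return np.clip(w, 0.0, None)   # physical lights: non-negative intensity
```

The relit image is then the same weighted combination applied to the captured reflectance-field images of the object.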
Dhillon DS, Ghosh A, 2016, Efficient surface diffraction renderings with Chebyshev approximations, SIGGRAPH Asia 2016 Technical Briefs, Publisher: ACM
We propose an efficient method for reproducing diffraction colours on natural surfaces with complex nanostructures that can be represented as height fields. Our method employs Chebyshev approximations to accurately model the view-dependent iridescence of such a surface in its spectral bidirectional reflectance distribution function (BRDF). As its main contribution, our method significantly reduces the runtime memory footprint of precomputed lookup tables without compromising photorealism. Our accuracy is comparable to current state-of-the-art methods, and better at equal memory usage. Furthermore, the near-best approximation properties of a Chebyshev polynomial basis allow for scalable memory-vs-performance trade-offs. We show realistic diffraction effects with just two lookup textures for natural, quasi-periodic surface nanostructures. Performance-intensive applications such as games and VR can benefit from our method, especially on low-end GPU or mobile platforms.
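The core idea, fitting a tabulated function with a truncated Chebyshev series so that only the coefficients need to be stored, can be sketched in one dimension (a toy stand-in curve, not the paper's spectral BRDF tables):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate a sharply varying 1-D curve with a truncated Chebyshev
# series, trading coefficient count for accuracy.
x = np.linspace(-1.0, 1.0, 2001)
f = np.exp(-40.0 * x**2) * np.cos(12.0 * x)   # toy stand-in for a BRDF slice

coeffs = C.chebfit(x, f, deg=80)              # 81 coefficients vs 2001 samples
approx = C.chebval(x, coeffs)
max_err = np.max(np.abs(approx - f))
```

Because Chebyshev series of smooth functions converge rapidly, the degree (and thus the stored footprint) can be tuned against reconstruction error, which is the memory-vs-performance trade-off the method exploits.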
Kampouris C, Zafeiriou S, Ghosh A, et al., 2016, Fine-grained Material Classification using Micro-geometry and Reflectance, European Conference on Computer Vision 2016, Publisher: Springer, Pages: 778-792, ISSN: 0302-9743
In this paper we focus on an understudied computer vision problem: how the micro-geometry and reflectance of a surface can be used to infer its material. To this end, we introduce a new, publicly available database for fine-grained material classification, consisting of over 2000 surfaces of fabrics (http://ibug.doc.ic.ac.uk/resources/fabrics.). The database has been collected using a custom-made photometric stereo sensor that is portable, cheap and easy to assemble. We use the normal map and the albedo of each surface to recognize its material via handcrafted and learned features and various feature encodings. We also perform garment classification using the same approach. We show that fusing normal and albedo information outperforms standard methods that rely only on texture information. Our methodologies, both for data collection and for material classification, can be applied easily to many real-world scenarios, including the design of new robots able to sense materials, and industrial inspection.
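The photometric stereo step that produces the normal and albedo maps follows the classic Lambertian formulation: with at least three images under known distant lights, each pixel's intensities determine the albedo-scaled normal. A minimal single-pixel sketch on synthetic data (illustrative only, not the paper's sensor pipeline):

```python
import numpy as np

# Lambertian photometric stereo: intensities I = L @ (albedo * normal),
# where rows of L are unit light directions. Solve for g = albedo * normal,
# then factor it into albedo (magnitude) and normal (direction).
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

true_n = np.array([0.2, -0.1, 0.97])
true_n /= np.linalg.norm(true_n)
true_albedo = 0.8
I = L @ (true_albedo * true_n)          # simulated pixel intensities

g, *_ = np.linalg.lstsq(L, I, rcond=None)
albedo = np.linalg.norm(g)              # recovered reflectance
normal = g / albedo                     # recovered unit surface normal
```

In a real capture this solve runs per pixel, yielding the normal and albedo maps the classifier consumes.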
Kim J, Izadi S, Ghosh A, 2016, Single-shot layered reflectance separation using a polarized light field camera, 2016 Eurographics Symposium on Rendering, Publisher: The Eurographics Association, ISSN: 1727-3463
We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance as well as novel angular-domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach for efficient acquisition of facial reflectance, including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate our proposed single-shot layered reflectance separation to be comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
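The standard diffuse/specular separation that the two polarization states enable follows the usual polarization-difference imaging relations; a minimal sketch on toy pixel values (illustrative only, not the paper's light-field sampling or layered angular separation):

```python
import numpy as np

# Under polarized illumination, specular reflection preserves polarization
# while diffuse reflection depolarizes. With parallel- and cross-polarized
# captures of the same pixel (which the TPLF camera records simultaneously),
# a standard separation is:
#   diffuse  ~= 2 * cross        (half the depolarized light passes each state)
#   specular ~= parallel - cross
cross    = np.array([0.20, 0.25, 0.30])   # toy RGB pixel, cross-polarized
parallel = np.array([0.55, 0.45, 0.40])   # toy RGB pixel, parallel-polarized

diffuse  = 2.0 * cross
specular = parallel - cross
```

A multi-shot system would capture the two states sequentially; recording both in one exposure is what makes the separation single-shot.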
Guarnera D, Guarnera GC, Ghosh A, et al., 2016, BRDF Representation and Acquisition, Computer Graphics Forum, Vol: 35, ISSN: 1467-8659
Photorealistic rendering of real-world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, viewer perception of objects depends on the lighting and on object/material/surface characteristics: the way a surface interacts with light, how light is reflected, scattered or absorbed by the surface, and the impact these characteristics have on material appearance. In order to reproduce this, it is necessary to understand how materials interact with light; this is why the representation and acquisition of material models has become such an active research area. This survey of the state of the art in BRDF representation and acquisition presents an overview of BRDF (Bidirectional Reflectance Distribution Function) models used to represent surface/material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.