Deschaintre V, Lin Y, Ghosh A, 2021, Deep polarization imaging for 3D shape and SVBRDF acquisition, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE
We present a novel method for efficient acquisition of shape and spatially varying reflectance of 3D objects using polarization cues. Unlike previous works that have exploited polarization to estimate material or object appearance under certain constraints (known shape or multiview acquisition), we lift such restrictions by coupling polarization imaging with deep learning to achieve high-quality estimates of 3D object shape (surface normals and depth) and SVBRDF using single-view polarization imaging under frontal flash illumination. In addition to acquired polarization images, we provide our deep network with strong novel cues related to shape and reflectance, in the form of a normalized Stokes map and an estimate of diffuse color. We additionally describe modifications to network architecture and training loss which provide further qualitative improvements. We demonstrate that our approach achieves superior results compared to recent works employing deep learning in conjunction with flash illumination.
Riviere J, Gotardo P, Bradley D, et al., 2020, Single-shot high-quality facial geometry and skin appearance capture, ACM Transactions on Graphics, Vol: 39, ISSN: 0730-0301
We propose a new light-weight face capture system capable of reconstructing both high-quality geometry and detailed appearance maps from a single exposure. Unlike currently employed appearance acquisition systems, the proposed technology does not require active illumination and hence can readily be integrated with passive photogrammetry solutions. These solutions are in widespread use for 3D scanning humans as they can be assembled from off-the-shelf hardware components, but lack the capability of estimating appearance. This paper proposes a solution to overcome this limitation, by adding appearance capture to photogrammetry systems. The only additional hardware requirement to these solutions is that a subset of the cameras are cross-polarized with respect to the illumination, and the remaining cameras are parallel-polarized. The proposed algorithm leverages the images with the two different polarization states to reconstruct the geometry and to recover appearance properties. We do so by means of an inverse rendering framework, which solves per texel diffuse albedo, specular intensity, and high-resolution normals, as well as global specular roughness considering the subsurface scattering nature of skin. We show results for a variety of human subjects of different ages and skin typology, illustrating how the captured fine-detail skin surface and subsurface scattering effects lead to realistic renderings of their digital doubles, also in different illumination conditions.
Gitlina Y, Guarnera GC, Dhillon DS, et al., 2020, Practical measurement and reconstruction of spectral skin reflectance, Computer Graphics Forum: the international journal of the Eurographics Association, Vol: 39, Pages: 75-89, ISSN: 0167-7055
We present two practical methods for measurement of spectral skin reflectance suited for live subjects, and drive a spectral BSSRDF model with appropriate complexity to match skin appearance in photographs, including human faces. Our primary measurement method employs illuminating a subject with two complementary uniform spectral illumination conditions using a multispectral LED sphere to estimate spatially varying parameters of chromophore concentrations including melanin and hemoglobin concentration, melanin blend‐type fraction, and epidermal hemoglobin fraction. We demonstrate that our proposed complementary measurements enable higher‐quality estimate of chromophores than those obtained using standard broadband illumination, while being suitable for integration with multiview facial capture using regular color cameras. Besides novel optimal measurements under controlled illumination, we also demonstrate how to adapt practical skin patch measurements using a hand‐held dermatological skin measurement device, a Miravex Antera 3D camera, for skin appearance reconstruction and rendering. Furthermore, we introduce a novel approach for parameter estimation given the measurements using neural networks which is significantly faster than a lookup table search and avoids parameter quantization. We demonstrate high quality matches of skin appearance with photographs for a variety of skin types with our proposed practical measurement procedures, including photorealistic spectral reproduction and renderings of facial appearance.
Lattas A, Moschoglou S, Gecer B, et al., 2020, AvatarMe: Realistically Renderable 3D Facial Reconstruction “in-the-wild”, IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Rainer G, Jakob W, Ghosh A, et al., 2020, Unified neural encoding of BTFs, Computer Graphics Forum, Vol: 39
Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex data such as BTF texels can prove challenging, as models tend to describe restricted function spaces that cannot encompass real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material. Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials, and which projects reflectance measurements to a shared latent parameter space. Similarly to SVBRDF fitting, real-world materials are represented by parameter maps, and the decoder network is analogous to the analytic BRDF expression (also parametrized on light and view directions for practical rendering applications). With this approach, encoding and decoding materials becomes a simple matter of evaluating the network. We train and validate on BTF datasets of the University of Bonn, but there are no prerequisites on either the number of angular reflectance samples, or the sample positions. Additionally, we show that the latent space is well-behaved and can be sampled from, for applications such as mipmapping and texture synthesis.
Lin Y, Peers P, Ghosh A, 2019, On-site example-based material appearance acquisition, Computer Graphics Forum, Vol: 38, ISSN: 1467-8659
We present a novel example-based material appearance modeling method suitable for rapid digital content creation. Our method only requires a single HDR photograph of a homogeneous isotropic dielectric exemplar object under known natural illumination. While conventional methods for appearance modeling require prior knowledge on the object shape, our method does not, nor does it recover the shape explicitly, greatly simplifying on-site appearance acquisition to a lightweight photography process suited for non-expert users. As our central contribution, we propose a shape-agnostic BRDF estimation procedure based on binary RGB profile matching. We also model the appearance of materials exhibiting a regular or stationary texture-like appearance, by synthesizing appropriate mesostructure from the same input HDR photograph and a mesostructure exemplar with (roughly) similar features. We believe our lightweight method for on-site shape-agnostic appearance acquisition presents a suitable alternative for a variety of applications that require plausible “rapid-appearance-modeling”.
Rainer G, Jakob W, Ghosh A, et al., 2019, Neural BTF compression and interpolation, Computer Graphics Forum, Vol: 38, ISSN: 0167-7055
The Bidirectional Texture Function (BTF) is a data-driven solution to render materials with complex appearance. A typical capture contains tens of thousands of images of a material sample under varying viewing and lighting conditions. While capable of faithfully recording complex light interactions in the material, the main drawback is the massive memory requirement, both for storing and rendering, making effective compression of BTF data a critical component in practical applications. Common compression schemes used in practice are based on matrix factorization techniques, which preserve the discrete format of the original dataset. While this approach generalizes well to different materials, rendering with the compressed dataset still relies on interpolating between the closest samples. Depending on the material and the angular resolution of the BTF, this can lead to blurring and ghosting artifacts. An alternative approach uses analytic model fitting to approximate the BTF data, using continuous functions that naturally interpolate well, but whose expressive range is often not wide enough to faithfully recreate materials with complex non-local lighting effects (subsurface scattering, inter-reflections, shadowing and masking...). In light of these observations, we propose a neural network-based BTF representation inspired by autoencoders: our encoder compresses each texel to a small set of latent coefficients, while our decoder additionally takes in a light and view direction and outputs a single RGB vector at a time. This allows us to continuously query reflectance values in the light and view hemispheres, eliminating the need for linear interpolation between discrete samples. We train our architecture on fabric BTFs with a challenging appearance and compare to standard PCA as a baseline. We achieve competitive compression ratios and high-quality interpolation/extrapolation without blurring or ghosting artifacts.
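The encoder/decoder split described above lends itself to a compact sketch. The following minimal NumPy example shows the decoder-side idea only: per-texel latent coefficients are concatenated with light and view directions and pushed through a small MLP that outputs one RGB value. All layer sizes and the random, untrained weights are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def mlp_decode(latent, light_dir, view_dir, weights):
    """Decode one texel: latent code + light/view directions -> RGB.

    `weights` is a list of (W, b) layer parameters; sizes here are
    illustrative, not those of the published architecture.
    """
    x = np.concatenate([latent, light_dir, view_dir])
    for W, b in weights[:-1]:
        x = np.maximum(W @ x + b, 0.0)   # ReLU hidden layers
    W, b = weights[-1]
    return W @ x + b                     # linear RGB output

# Toy usage with random (untrained) weights: 8 latent dims + 3 + 3 inputs.
rng = np.random.default_rng(0)
sizes = [(32, 14), (3, 32)]
weights = [(rng.standard_normal(s) * 0.1, np.zeros(s[0])) for s in sizes]
rgb = mlp_decode(rng.standard_normal(8), np.array([0.0, 0.0, 1.0]),
                 np.array([0.0, 0.0, 1.0]), weights)
```

Because the decoder is just a function of (latent, light, view), reflectance can be queried at arbitrary continuous angles, which is the property the abstract contrasts with discrete interpolation.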
Lattas A, Wang M, Zafeiriou S, et al., 2019, Multi-view Facial Capture using Binary Spherical Gradient Illumination, Association-for-Computing-Machinery-Special-Interest-Group-on-Computer-Graphics-and-Interactive-Techniques (SIGGRAPH) Conference, Publisher: ASSOC COMPUTING MACHINERY
Toisoul A, Dhillon D, Ghosh A, 2018, Acquiring spatially varying appearance of printed holographic surfaces, ACM Transactions on Graphics, Vol: 37, ISSN: 0730-0301
We present two novel and complementary approaches to measure diffraction effects in commonly found planar spatially varying holographic surfaces. Such surfaces are increasingly found in various decorative materials such as gift bags, holographic papers, clothing and security holograms, and produce impressive visual effects that have not been previously acquired for realistic rendering. Such holographic surfaces are usually manufactured with one dimensional diffraction gratings that are varying in periodicity and orientation over an entire sample in order to produce a wide range of diffraction effects such as gradients and kinematic (rotational) effects. Our proposed methods estimate these two parameters and allow an accurate reproduction of these effects in real-time. The first method simply uses a point light source to recover both the grating periodicity and orientation in the case of regular and stochastic textures. Under the assumption that the sample is made of the same repeated diffractive tile, good results can be obtained using just one to five photographs on a wide range of samples. The second method is based on polarization imaging and enables an independent high resolution measurement of the grating orientation and relative periodicity at each surface point. The method requires a minimum of four photographs for accurate results, does not assume repetition of an exemplar tile, and can even reveal minor fabrication defects. We present point light source renderings with both approaches that qualitatively match photographs, as well as real-time renderings under complex environmental illumination.
Gotardo P, Riviere J, Bradley D, et al., 2018, Practical dynamic facial appearance modeling and acquisition, ACM Transactions on Graphics, Vol: 37, ISSN: 0730-0301
We present a method to acquire dynamic properties of facial skin appearance, including dynamic diffuse albedo encoding blood flow, dynamic specular intensity, and per-frame high resolution normal maps for a facial performance sequence. The method reconstructs these maps from a purely passive multi-camera setup, without the need for polarization or requiring temporally multiplexed illumination. Hence, it is very well suited for integration with existing passive systems for facial performance capture. To solve this seemingly underconstrained problem, we demonstrate that albedo dynamics during a facial performance can be modeled as a combination of: (1) a static, high-resolution base albedo map, modeling full skin pigmentation; and (2) a dynamic, one-dimensional component in the CIE L*a*b* color space, which explains changes in hemoglobin concentration due to blood flow. We leverage this albedo subspace and additional constraints on appearance and surface geometry to also estimate specular reflection parameters and resolve high-resolution normal maps with unprecedented detail in a passive capture system. These constraints are built into an inverse rendering framework that minimizes the difference of the rendered face to the captured images, incorporating constraints from multiple views for every texel on the face. The presented method is the first system capable of capturing high-quality dynamic appearance maps at full resolution and video framerates, providing a major step forward in the area of facial appearance acquisition.
Kampouris C, Zafeiriou S, Ghosh A, 2018, Diffuse-specular separation using binary spherical gradient illumination, Eurographics Symposium on Rendering (EGSR) 2018, Publisher: The Eurographics Association, ISSN: 1727-3463
We introduce a novel method for view-independent diffuse-specular separation of albedo and photometric normals without requiring polarization, using binary spherical gradient illumination. The key idea is that with binary gradient illumination, a dielectric surface oriented towards the dark hemisphere exhibits pure diffuse reflectance while a surface oriented towards the bright hemisphere exhibits both diffuse and specular reflectance. We exploit this observation to formulate diffuse-specular separation based on color-space analysis of a surface’s response to binary spherical gradients and their complements. The method does not impose restrictions on viewpoints and requires fewer photographs for multiview acquisition than polarized spherical gradient illumination. We further demonstrate an efficient two-shot capture using spectral multiplexing of the illumination that enables diffuse-specular separation of albedo and heuristic separation of photometric normals.
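The color-space analysis above can be illustrated with a simplified dichromatic sketch: the dark-hemisphere-lit observation supplies the diffuse chroma, and a mixed observation is decomposed into a component along that chroma plus a white (illuminant-colored) specular component. This is a hedged toy model, not the paper's exact formulation.

```python
import numpy as np

def separate(mixed_rgb, diffuse_chroma):
    """Split a mixed RGB observation into diffuse + specular parts.

    Assumes (as in dichromatic reflection models) that specular reflection
    takes the illuminant colour (here white) while diffuse reflection takes
    the surface chroma measured from the dark-hemisphere-lit image.
    """
    c = diffuse_chroma / np.linalg.norm(diffuse_chroma)
    w = np.ones(3) / np.sqrt(3.0)          # illuminant (white) direction
    A = np.stack([c, w], axis=1)           # 3x2 mixing matrix
    coeffs, *_ = np.linalg.lstsq(A, mixed_rgb, rcond=None)
    d, s = np.clip(coeffs, 0.0, None)      # reflectance is nonnegative
    return d * c, s * w                    # diffuse RGB, specular RGB

diffuse, specular = separate(np.array([0.9, 0.5, 0.3]),
                             np.array([0.8, 0.3, 0.1]))
```

The recovered diffuse part is by construction parallel to the measured chroma, which is the property the color-space formulation exploits.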
Kim J, Han G, Han H, et al., 2017, ThirdLight: low-cost and high-speed 3D interaction using photosensor markers, European Conference on Visual Media Production (CVMP), Publisher: ACM
We present a low-cost 3D tracking system for virtual reality, gesture modeling, and robot manipulation applications which require fast and precise localization of headsets, data gloves, props, or controllers. Our system removes the need for cameras or projectors for sensing, and instead uses cheap LEDs and printed masks for illumination, and low-cost photosensitive markers. The illumination device transmits a spatiotemporal pattern as a series of binary Gray-code patterns. Multiple illumination devices can be combined to localize each marker in 3D at high speed (333Hz). Our method has strengths in accuracy, speed, cost, ambient performance, large working space (1m-5m) and robustness to noise compared with conventional techniques. We compare with a state-of-the-art instrumented glove and vision-based systems to demonstrate the accuracy, scalability, and robustness of our approach. We propose a fast and accurate method for hand gesture modeling using an inverse kinematics approach with the six photosensitive markers. We additionally propose a passive markers system and demonstrate various interaction scenarios as practical applications.
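The spatiotemporal signalling mentioned above relies on the reflected binary Gray code, in which consecutive codewords differ by exactly one bit, making decoding robust to single-pattern timing errors. A minimal sketch of the encoding and decoding (illustrative, not the authors' firmware):

```python
def to_gray(n: int) -> int:
    """Binary -> reflected Gray code: successive values differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray code by cascading XORs of the higher bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# A marker that decodes 10 temporal bits recovers its position along one
# axis of the illumination pattern; here, 1024 distinct positions.
codes = [to_gray(i) for i in range(1024)]
```

Combining decoded positions from multiple illumination devices then triangulates each photosensitive marker in 3D.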
Toisoul A, Ghosh A, 2017, Real-time rendering of realistic surface diffraction using low-rank factorisation, European Conference on Visual Media Production (CVMP), Publisher: ACM
We propose a novel approach for real-time rendering of diffraction effects in surface reflectance in arbitrary environments. Such renderings are usually extremely expensive as they require the computation of a convolution at real-time framerates. In the case of diffraction, the diffraction lobes usually have high frequency details that can only be captured with high resolution convolution kernels, which makes calculations even more expensive. Our method uses a low-rank factorisation of the diffraction lookup table to approximate a 2D convolution kernel by two simpler low-rank kernels which allow the computation of the convolution at real-time framerates using two rendering passes. We show realistic renderings in arbitrary environments and achieve performance of 50 to 100 FPS, making it possible to use such a technique in real-time applications such as video games and VR.
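The low-rank factorisation at the heart of this technique is standard linear algebra: truncating the SVD of the 2D kernel yields the best rank-k approximation (Eckart-Young), and each rank-1 term corresponds to one cheap separable row/column convolution pass. An illustrative NumPy sketch; the Gaussian test kernel is an assumption standing in for the paper's diffraction lookup table:

```python
import numpy as np

def low_rank_kernels(kernel, rank=1):
    """Factor a 2D convolution kernel into `rank` pairs of 1D kernels.

    By Eckart-Young, truncating the SVD gives the best low-rank
    approximation in the least-squares sense; each rank-1 term is one
    separable (column pass + row pass) convolution.
    """
    U, s, Vt = np.linalg.svd(kernel)
    cols = U[:, :rank] * np.sqrt(s[:rank])       # vertical 1D kernels
    rows = (Vt[:rank].T * np.sqrt(s[:rank])).T   # horizontal 1D kernels
    return cols, rows

# A Gaussian lobe is exactly separable, so rank 1 already reconstructs it.
x = np.linspace(-2.0, 2.0, 9)
g = np.exp(-x**2)
kernel = np.outer(g, g)
cols, rows = low_rank_kernels(kernel, rank=1)
approx = cols @ rows     # sum of rank-1 terms
```

Real diffraction kernels are not exactly separable, so the rank controls the accuracy/performance trade-off the abstract describes.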
Riviere J, Reshetouski I, Ghosh A, 2017, Polarization imaging reflectometry in the wild, ACM Transactions on Graphics, Vol: 36
We present a novel approach for on-site acquisition of surface reflectance for planar, spatially varying, isotropic samples in uncontrolled outdoor environments. Our method exploits the naturally occurring linear polarization of incident and reflected illumination for this purpose. By rotating a linear polarizing filter in front of a camera at three different orientations, we measure the polarization reflected off the sample and combine this information with multi-view analysis and inverse rendering in order to recover per-pixel, high resolution reflectance and surface normal maps. Specifically, we employ polarization imaging from two near orthogonal views close to the Brewster angle of incidence in order to maximize polarization cues for surface reflectance estimation. To the best of our knowledge, our method is the first to successfully extract a complete set of reflectance parameters with passive capture in completely uncontrolled outdoor settings. To this end, we analyze our approach under the general, but previously unstudied, case of incident partial linear polarization (due to the sky) in order to identify the strengths and weaknesses of the method under various outdoor conditions. We provide practical guidelines for on-site acquisition based on our analysis, and demonstrate high quality results with an entry level DSLR as well as a mobile phone.
Kim J, Reshetouski I, Ghosh A, 2017, Acquiring axially-symmetric transparent objects using single-view transmission imaging, 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 1484-1492, ISSN: 1063-6919
We propose a novel, practical solution for high quality reconstruction of axially-symmetric transparent objects. While a special case, such transparent objects are ubiquitous in the real world. Common examples of these are glasses, goblets, tumblers, carafes, etc., that can have very unique and visually appealing forms making their reconstruction interesting for vision and graphics applications. Our acquisition setup involves imaging such objects from a single viewpoint while illuminating them from directly behind with a few patterns emitted by an LCD panel. Our reconstruction step is then based on optimization of the object's geometry and its refractive index to minimize the difference between observed and simulated transmission/refraction of rays passing through the object. We exploit the object's axial symmetry as a strong shape prior which allows us to achieve robust reconstruction from a single viewpoint using a simple, commodity acquisition setup. We demonstrate high quality reconstruction of several common rotationally symmetric as well as more complex n-fold symmetric transparent objects with our approach.
Toisoul A, Ghosh A, 2017, Practical acquisition and rendering of diffraction effects in surface reflectance, ACM Transactions on Graphics, Vol: 36, ISSN: 1557-7368
We propose two novel contributions for measurement-based rendering of diffraction effects in surface reflectance of planar homogeneous diffractive materials. As a general solution for commonly manufactured materials, we propose a practical data-driven rendering technique and a measurement approach to efficiently render complex diffraction effects in real-time. Our measurement step simply involves photographing a planar diffractive sample illuminated with an LED flash. Here, we directly record the resultant diffraction pattern on the sample surface due to a narrow band point source illumination. Furthermore, we propose an efficient rendering method that exploits the measurement in conjunction with the Huygens-Fresnel principle to fit relevant diffraction parameters based on a first order approximation. Our proposed data-driven rendering method requires the precomputation of a single diffraction lookup table for accurate spectral rendering of complex diffraction effects. Secondly, for sharp specular samples, we propose a novel method for practical measurement of the underlying diffraction grating using out-of-focus “bokeh” photography of the specular highlight. We demonstrate how the measured bokeh can be employed as a height field to drive a diffraction shader based on a first order approximation for efficient real-time rendering. Finally, we also derive analytic solutions for a few special cases of diffraction from our measurements and demonstrate realistic rendering results under complex light sources and environments.
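For a one-dimensional grating, the first-order model referred to above reduces to the classic grating equation sin(theta_m) = sin(theta_i) + m*lambda/d. A small scalar sketch (a textbook simplification, not the paper's full Huygens-Fresnel treatment):

```python
import math

def diffraction_angle(wavelength_nm, period_nm, incident_deg=0.0, order=1):
    """Grating equation: sin(theta_m) = sin(theta_i) + m * lambda / d.

    Returns the diffracted angle in degrees, or None if the requested
    order is evanescent (|sin| > 1, i.e. no propagating lobe).
    """
    s = math.sin(math.radians(incident_deg)) + order * wavelength_nm / period_nm
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Green light on a 1-micron-period grating at normal incidence.
angle = diffraction_angle(500.0, 1000.0)
```

Because the diffracted angle depends on wavelength, each order fans white light out into the rainbow-like lobes characteristic of diffractive surfaces.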
Toisoul A, Ghosh A, 2016, Image-based relighting using room lighting basis, European Conference on Visual Media Production, Publisher: ACM
We present a novel and practical approach for image-based relighting that employs the lights available in a regular room to acquire the reflectance field of an object. The lighting basis includes diverse light sources such as the house lights and the natural illumination coming from the windows. Once the data is captured, we homogenize the reflectance field to take into account the variety of light source colours to minimise the tone difference in the reflectance field. Additionally, we measure the room dark level corresponding to a small amount of global illumination with all lights switched off and blinds drawn. The dark level, due to some light leakage through the blinds, is removed from the individual local lighting basis conditions and employed as an additional global lighting basis. Finally we optimize the projection of a desired lighting environment on to our room lighting basis to get a close approximation of the environment with our sparse lighting basis. We achieve plausible results for diffuse and glossy objects that are qualitatively similar to results produced with dense sampling of the reflectance field, such as with a light stage, and we demonstrate effective relighting results in two different room configurations. We believe our approach can be applied for practical relighting applications with general studio lighting.
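The projection of a desired lighting environment onto a sparse lighting basis is, at its core, a least-squares problem over basis weights. A toy NumPy sketch; the clipping step is a crude nonnegativity heuristic standing in for whatever optimizer the paper actually uses:

```python
import numpy as np

def fit_basis_weights(basis, target):
    """Least-squares weights approximating a target lighting environment
    as a nonnegative combination of measured room-light basis images.

    `basis` is (n_lights, n_pixels); `target` is (n_pixels,).
    """
    w, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
    return np.clip(w, 0.0, None)   # lights cannot have negative intensity

# Toy example: three "room lights" over a 4-pixel environment, and a
# target that is exactly light 0 plus twice light 2.
basis = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
target = basis[0] + 2.0 * basis[2]
weights = fit_basis_weights(basis, target)
```

Relighting then amounts to summing the object's reflectance-field images with these per-light weights.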
Dhillon DS, Ghosh A, 2016, Efficient surface diffraction renderings with Chebyshev approximations, SIGGRAPH Asia 2016 Technical Briefs, Publisher: ACM
We propose an efficient method for reproducing diffraction colours on natural surfaces with complex nanostructures that can be represented as height-fields. Our method employs Chebyshev approximations to accurately model view-dependent iridescences for such a surface into its spectral bidirectional reflectance distribution function (BRDF). As our main contribution, our method significantly reduces the runtime memory footprint of precomputed lookup tables without compromising photorealism. Our accuracy is comparable with current state-of-the-art methods, and better at equal memory usage. Furthermore, a Chebyshev polynomial basis set with its near-best approximation properties allows for scalable memory-vs-performance trade-offs. We show realistic diffraction effects with just two lookup textures for natural, quasi-periodic surface nanostructures. Performance-intensive applications like games and VR can benefit from our method, especially on low-end GPU or mobile platforms.
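The memory-vs-accuracy trade-off exploited here can be reproduced with NumPy's Chebyshev utilities: a handful of near-best-approximation coefficients stand in for a dense lookup table. The target function below is an illustrative smooth stand-in, not an actual diffraction BRDF term:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Dense "lookup table" of a smooth oscillatory function on [-1, 1].
x = np.linspace(-1.0, 1.0, 401)
y = np.exp(-4.0 * x**2) * np.cos(6.0 * x)

# A degree-16 Chebyshev fit: 17 coefficients replace 401 table entries.
coeffs = C.chebfit(x, y, deg=16)
approx = C.chebval(x, coeffs)
max_err = np.max(np.abs(approx - y))
```

Because Chebyshev fits are near-best in the uniform norm, the degree gives a direct, scalable knob between memory footprint and reconstruction error.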
Kampouris C, Zafeiriou S, Ghosh A, et al., 2016, Fine-grained Material Classification using Micro-geometry and Reflectance, European Conference on Computer Vision 2016, Publisher: Springer, Pages: 778-792, ISSN: 0302-9743
In this paper we focus on an understudied computer vision problem, namely how the micro-geometry and the reflectance of a surface can be used to infer its material. To this end, we introduce a new, publicly available database for fine-grained material classification, consisting of over 2000 surfaces of fabrics (http://ibug.doc.ic.ac.uk/resources/fabrics). The database has been collected using a custom-made portable but cheap and easy-to-assemble photometric stereo sensor. We use the normal map and the albedo of each surface to recognize its material via the use of handcrafted and learned features and various feature encodings. We also perform garment classification using the same approach. We show that the fusion of normals and albedo information outperforms standard methods which rely only on the use of texture information. Our methodologies, both for data collection as well as for material classification, can be applied easily to many real-world scenarios, including the design of new robots able to sense materials and industrial inspection.
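The photometric stereo sensor mentioned above recovers per-pixel normals and albedo. The textbook Lambertian baseline (a generic sketch, not the authors' sensor pipeline) solves a small least-squares system per pixel, I = L (albedo * n):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Classic Lambertian photometric stereo for one pixel.

    `intensities`: (n_lights,) observations; `light_dirs`: (n_lights, 3).
    Solves I = L @ g with g = albedo * normal, then splits magnitude
    (albedo) from direction (unit normal).
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Synthetic check: a known normal and albedo reproduce themselves.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
n = np.array([0.0, 0.6, 0.8])
I = 0.5 * L @ n                  # Lambertian shading with albedo 0.5
albedo, normal = photometric_stereo(I, L)
```

Stacking such per-pixel solutions over an image yields exactly the normal map and albedo map that the classifier consumes.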
Kim J, Izadi S, Ghosh A, 2016, Single-shot layered reflectance separation using a polarized light field camera, 2016 Eurographics Symposium on Rendering, Publisher: The Eurographics Association, ISSN: 1727-3463
We present a novel computational photography technique for single-shot separation of diffuse/specular reflectance as well as novel angular domain separation of layered reflectance. Our solution consists of a two-way polarized light field (TPLF) camera which simultaneously captures two orthogonal states of polarization. A single photograph of a subject acquired with the TPLF camera under polarized illumination then enables standard separation of diffuse (depolarizing) and polarization-preserving specular reflectance using light field sampling. We further demonstrate that the acquired data also enables novel angular separation of layered reflectance, including separation of specular reflectance and single scattering in the polarization-preserving component, and separation of shallow scattering from deep scattering in the depolarizing component. We apply our approach for efficient acquisition of facial reflectance including diffuse and specular normal maps, and novel separation of photometric normals into layered reflectance normals for layered facial renderings. We demonstrate our proposed single-shot layered reflectance separation to be comparable to an existing multi-shot technique that relies on structured lighting, while achieving separation results under a variety of illumination conditions.
Guarnera D, Guarnera GC, Ghosh A, et al., 2016, BRDF Representation and Acquisition, Computer Graphics Forum, Vol: 35, ISSN: 1467-8659
Photorealistic rendering of real world environments is important in a range of different areas, including Visual Special Effects, Interior/Exterior Modelling, Architectural Modelling, Cultural Heritage, Computer Games and Automotive Design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, viewer perception of objects depends on the lighting and on object/material/surface characteristics: the way a surface interacts with light, how the light is reflected, scattered or absorbed by the surface, and the impact these characteristics have on material appearance. In order to reproduce this, it is necessary to understand how materials interact with light. This is why the representation and acquisition of material models has become such an active research area. This survey of the state of the art of BRDF representation and acquisition presents an overview of BRDF (Bidirectional Reflectance Distribution Function) models used to represent surface/material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.
Fyffe G, Graham P, Tunwattanapong B, et al., 2016, Near-instant capture of high-resolution facial geometry and reflectance, Computer Graphics Forum, Vol: 35, ISSN: 1467-8659
We present a near-instant method for acquiring facial geometry and reflectance using a set of commodity DSLR cameras and flashes. Our setup consists of twenty-four cameras and six flashes which are fired in rapid succession with subsets of the cameras. Each camera records only a single photograph and the total capture time is less than the 67ms blink reflex. The cameras and flashes are specially arranged to produce an even distribution of specular highlights on the face. We employ this set of acquired images to estimate diffuse color, specular intensity, specular exponent, and surface orientation at each point on the face. We further refine the facial base geometry obtained from multi-view stereo using estimated diffuse and specular photometric information. This allows final submillimeter surface mesostructure detail to be obtained via shape-from-specularity. The final system uses commodity components and produces models suitable for authoring high-quality digital human characters.
Riviere J, Peers P, Ghosh A, 2016, Mobile Surface Reflectometry, Computer Graphics Forum, Vol: 35, Pages: 191-202, ISSN: 1467-8659
We present two novel mobile reflectometry approaches for acquiring detailed spatially varying isotropic surface reflectance and mesostructure of a planar material sample using commodity mobile devices. The first approach relies on the integrated camera and flash pair present on typical mobile devices to support free-form handheld acquisition of spatially varying rough specular material samples. The second approach, suited for highly specular samples, uses the LCD panel to illuminate the sample with polarized second order gradient illumination. To address the limited overlap of the front facing camera’s view and the LCD illumination (and thus limited sample size), we propose a novel appearance transfer method that combines controlled reflectance measurements of a small exemplar section with uncontrolled reflectance measurements of the full sample under natural lighting. Finally, we introduce a novel surface detail enhancement method that adds fine scale surface mesostructure from close-up observations under uncontrolled natural lighting. We demonstrate the accuracy and versatility of the proposed mobile reflectometry methods on a wide variety of spatially varying materials.
Nagano K, Fyffe G, Alexander O, et al., 2015, Skin microstructure deformation with displacement map convolution, ACM Transactions on Graphics, Vol: 34, ISSN: 1557-7368
We present a technique for synthesizing the effects of skin microstructure deformation by anisotropically convolving a high-resolution displacement map to match normal distribution changes in measured skin samples. We use a 10-micron resolution scanning technique to measure several in vivo skin samples as they are stretched and compressed in different directions, quantifying how stretching smooths the skin and compression makes it rougher. We tabulate the resulting surface normal distributions, and show that convolving a neutral skin microstructure displacement map with blurring and sharpening filters can mimic normal distribution changes and microstructure deformations. We implement the spatially-varying displacement map filtering on the GPU to interactively render the effects of dynamic microgeometry on animated faces obtained from high-resolution facial scans.
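The blur/sharpen filtering of the displacement map can be mimicked in one dimension with a Gaussian blur (stretching smooths the skin) and an unsharp mask (compression roughens it). A toy sketch under those stated assumptions, not the paper's anisotropic, spatially-varying GPU filter:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def filter_displacement(disp, sigma, amount):
    """Filter a 1D slice of a displacement map.

    amount < 0: return the Gaussian blur (smoothing mimics stretch);
    amount > 0: unsharp mask, boosting detail (mimics compression).
    """
    k = gaussian_kernel(sigma, radius=3 * int(np.ceil(sigma)))
    blurred = np.convolve(disp, k, mode="same")
    if amount < 0:
        return blurred
    return disp + amount * (disp - blurred)

# Noise stands in for neutral skin microstructure displacement.
rng = np.random.default_rng(2)
disp = rng.standard_normal(256)
stretched = filter_displacement(disp, sigma=2.0, amount=-1.0)
compressed = filter_displacement(disp, sigma=2.0, amount=0.5)
```

Smoothing narrows the surface-normal distribution while sharpening widens it, which is the measured behaviour the paper's convolution is designed to match.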
Wang P, Bicazan D, Ghosh A, 2014, Rerendering Landscape Photographs, CVMP '14 11th European Conference on Visual Media Production (CVMP 2014), Publisher: Association for Computing Machinery
We present a practical approach for realistic rerendering of landscape photographs. We extract a view-dependent depth map from single input landscape images by examining global and local pixel color distributions, and demonstrate applications of depth-dependent rendering such as novel viewpoints, digital refocusing and dehazing. We also present a simple approach to relight the input landscape photograph under novel sky illumination. Here, we assume diffuse reflectance and relight landscapes by estimating the irradiance due to the sky in the input photograph. Finally, we also take into account specular reflections on water surfaces, which are common in landscape photography, and demonstrate a semi-automatic process for relighting scenes with still water.
Tunwattanapong B, Fyffe G, Graham P, et al., 2013, Acquiring Reflectance and Shape from Continuous Spherical Harmonic Illumination, ACM Transactions on Graphics, Vol: 32
Zhu Y, Garigipati P, Peers P, et al., 2013, Estimating diffusion parameters from polarized spherical-gradient illumination, IEEE Computer Graphics and Applications, Vol: 33, Pages: 34-43, ISSN: 0272-1716
The proposed method acquires subsurface-scattering parameters of heterogeneous translucent materials. It directly obtains dense per-surface-point scattering parameters from observations under cross-polarized spherical-gradient illumination of curved surfaces. This method does not require explicit fitting of observed scattering profiles. A variety of heterogeneous translucent objects illustrate its validity.
Graham P, Tunwattanapong B, Busch J, et al., 2013, Measurement Based Synthesis of Facial Microgeometry, Computer Graphics Forum, Vol: 32, Pages: 335-344
Guarnera GC, Peers P, Debevec P, et al., 2012, Estimating Surface Normals from Spherical Stokes Reflectance Fields, ECCV Workshop on Color and Photometry in Computer Vision (CPCV) 2012
Stratou G, Ghosh A, Debevec P, et al., 2012, Exploring the effect of illumination on automatic expression recognition using the ICT-3DRFE database, Image and Vision Computing, Vol: 30, Pages: 728-737, ISSN: 0262-8856
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.