What is a picture? And, just as importantly, what is it not?
If you take a picture of a scene - for the sake of argument, let's say a traditional 2D photograph of a face, although the same applies to any image - what you have is actually a view-dependent, instantaneous snapshot of the interactions between the lights and materials in the scene, in which:
- The color of the pixels is a product of the radiance emitted by the different light sources (depending on their directions, wavelengths, surface areas...) and diffused/reflected/refracted by the different objects' materials in the scene (depending on their position, shape, color, brightness, roughness...), then transmitted to the film or camera sensor.
- The pixels have absolutely no structure and carry no semantic information about the scene: while our brain is able to interpret the image and build this understanding, each pixel has no idea whether it represents the background or the subject, part of the nose or of the lips.
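The first point above can be sketched in a few lines of code. This is a deliberately minimal Lambertian (diffuse-only) model, not the full light transport a real camera records; the function name and the sample values are purely illustrative.

```python
import numpy as np

# Toy model of how a single pixel's color arises: the radiance from each
# light source is modulated by the surface's diffuse reflectance (albedo)
# and by the cosine of the incidence angle (Lambert's law), then summed.
# A real photograph also bakes in specular reflection, subsurface
# scattering, shadowing, etc. - which is exactly why the components are
# hard to untangle afterwards.

def shade_pixel(albedo, normal, lights):
    """albedo: RGB reflectance in [0, 1]; normal: unit surface normal;
    lights: list of (direction, rgb_radiance) pairs."""
    color = np.zeros(3)
    for direction, radiance in lights:
        d = direction / np.linalg.norm(direction)
        cos_theta = max(np.dot(normal, d), 0.0)  # facing away -> no light
        color += albedo * radiance * cos_theta
    return np.clip(color, 0.0, 1.0)

# A skin-like albedo lit by a white key light and a dim, bluish fill light.
skin_albedo = np.array([0.8, 0.6, 0.5])
normal = np.array([0.0, 0.0, 1.0])
lights = [(np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 1.0])),
          (np.array([1.0, 0.0, 1.0]), np.array([0.2, 0.2, 0.3]))]
print(shade_pixel(skin_albedo, normal, lights))
```

Note how the final RGB value is a single product-and-sum: once computed, there is no way to read the albedo or the light colors back out of it individually.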
Since all material properties are deeply mixed up in the color of the pixels, the direct use of photographs for rendering is only valid when the illumination of the 3D scene is equivalent or very similar to the capture conditions. In order to permit a photorealistic representation - i.e. a physically-based rendering of the subject under any lighting - we have to analyze and extract the different components that characterize its material properties. In short, physically-based shading and rendering call for physically-based textures.
In order to do so, our high-speed 'material cameras' capture the subject under multiple illumination conditions. The analysis of this mountain of data enables us to progressively decorrelate the different contributions of the signal, in order to first separate the diffuse color from the specular components.
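The post doesn't detail how this first separation is done, but one classic approach from light-stage-style capture is polarization difference imaging - so take the following as a hedged sketch of that technique, not of the actual pipeline. Specular reflection preserves the polarization of a polarized light source, while diffuse reflection scrambles it; so a camera polarizer oriented perpendicular ("cross") to the source sees only diffuse light, and the parallel-minus-cross difference isolates the specular part.

```python
import numpy as np

# Polarization-based diffuse/specular separation (a common technique,
# assumed here for illustration): two captures of the same subject, one
# with the camera polarizer parallel to the polarized source, one crossed.

def separate_diffuse_specular(parallel_img, cross_img):
    """Both inputs are float HxWx3 arrays of linear radiance."""
    diffuse = cross_img                              # specular is blocked
    specular = np.clip(parallel_img - cross_img, 0.0, None)
    return diffuse, specular

# Synthetic check: a flat diffuse patch with one shiny highlight pixel.
diffuse_true = np.full((2, 2, 3), 0.4)
specular_true = np.zeros((2, 2, 3))
specular_true[0, 0] = 0.5
cross = diffuse_true.copy()
parallel = diffuse_true + specular_true
d, s = separate_diffuse_specular(parallel, cross)
```

In practice the cross-polarized image also loses part of the diffuse energy, so a real pipeline rescales the two captures radiometrically before differencing; the sketch skips that calibration step.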
In the diffuse color, we then dissociate the albedo (the intrinsic color of the object, independent of any lighting) from the sub-surface scattering effects due to light refractions between the dermal and epidermal layers of the skin.
In the specular component, we characterize its intensity and estimate a roughness/glossiness value for the surface.
Simultaneously, this separation enables us to extract normal information per channel, up to the specular normal, which characterizes the real surface geometry down to its finest details.
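The standard textbook method for recovering per-pixel normals from multiple known illumination directions is photometric stereo; the actual capture rig is certainly more elaborate, but the core idea can be sketched for a single Lambertian pixel. Each observed intensity is i_k = rho * dot(n, l_k), so stacking the K observations gives a small linear system whose least-squares solution yields both the albedo rho and the unit normal n.

```python
import numpy as np

# Photometric stereo for one pixel (illustrative; real skin is not purely
# Lambertian, which is why the separation steps above come first).

def estimate_normal(intensities, light_dirs):
    """intensities: length-K vector of observations under K lights;
    light_dirs: Kx3 array of unit light directions."""
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    rho = np.linalg.norm(g)      # recovered albedo (scalar for one channel)
    normal = g / rho             # unit surface normal
    return normal, rho

# Synthetic check: a pixel with albedo 0.7, normal tilted toward +x,
# observed under four known light directions.
true_n = np.array([0.3, 0.0, 0.954])
true_n /= np.linalg.norm(true_n)
L = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [-1, 0, 1]], dtype=float)
L /= np.linalg.norm(L, axis=1, keepdims=True)
I = 0.7 * L @ true_n
n, rho = estimate_normal(I, L)
```

Running this per channel on the separated signals is what yields distinct diffuse and specular normal maps: the specular reflection comes straight off the surface, so its normals capture the true fine-scale geometry, while the diffuse signal is blurred by subsurface scattering.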
Other intricate properties like the refraction index, absorption ratio, and curvature can also be derived along the way.
A major benefit of breaking down the properties of a material in this way is that the analysis generates exactly the information layers involved in physically-based shading and rendering: processes central to rendering a photorealistic CG image, or to creating a realistic interactive application.
By relying on extracted material properties, our CG representation of the subject conforms precisely to reality, no matter what lighting conditions it is displayed under - something we'll explore further in future Tech Friday posts.