
Tech Friday #5: Physically-based shading and rendering

December 26, 2014


Hi folks! After examining the capture, analysis and reconstruction process, let's see how the resulting model enables us to faithfully reproduce the appearance of a subject in both offline and real-time renders.

We have had the pleasure of collaborating with the dancer, choreographer and actress Tatiana Seguin. With her kind agreement, we reconstructed her digital double to showcase our services.

For real-time applications, we chose to present this result in Unity, thanks to the versatility of this multi-platform game engine.

Relying on the reconstructed geometry and material properties, we have developed a physically-based shader in Unity 5 which integrates:
- diffuse albedo, specular intensity and roughness
- micro-surface displacement tessellation
- high-frequency details from the normal map
as well as screen-space subsurface scattering and ambient occlusion.
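
As a rough, engine-agnostic illustration of what such a shader computes, here is a minimal sketch in Python/NumPy combining a Lambertian diffuse lobe with a GGX/Cook-Torrance specular lobe. This is not our actual Unity shader; the function names and the skin reflectance value are assumptions made for the example.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def ggx_specular(n, v, l, roughness, f0):
    """Cook-Torrance specular term with a GGX distribution.

    n, v, l   : unit normal, view and light vectors
    roughness : perceptual roughness in [0, 1]
    f0        : specular reflectance at normal incidence
    """
    h = normalize(v + l)                       # half vector
    a = roughness * roughness                  # remapped roughness
    n_dot_h = max(np.dot(n, h), 0.0)
    n_dot_v = max(np.dot(n, v), 1e-4)
    n_dot_l = max(np.dot(n, l), 0.0)
    v_dot_h = max(np.dot(v, h), 0.0)

    # D: GGX normal distribution function
    d = a**2 / (np.pi * (n_dot_h**2 * (a**2 - 1.0) + 1.0) ** 2 + 1e-7)
    # G: Smith-Schlick geometric shadowing approximation
    k = (roughness + 1.0) ** 2 / 8.0
    g = (n_dot_v / (n_dot_v * (1 - k) + k)) * (n_dot_l / (n_dot_l * (1 - k) + k))
    # F: Schlick Fresnel approximation
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

    return (d * g * f) / (4.0 * n_dot_v * n_dot_l + 1e-4)

def shade(albedo, spec_intensity, roughness, n, v, l, light_color):
    """Combine the captured maps into one outgoing radiance value."""
    n_dot_l = max(np.dot(n, l), 0.0)
    diffuse = albedo / np.pi                              # Lambertian lobe
    specular = spec_intensity * ggx_specular(n, v, l, roughness, f0=0.028)
    return (diffuse + specular) * light_color * n_dot_l  # skin f0 ≈ 0.028
```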

We take a similar approach under Solid Angle's Arnold for offline rendering, based on the HD geometry and floating-point textures.

Real-Time digital double of Tatiana Seguin in Unity5

In addition to the "neutral" face, we generally recommend capturing two extreme expressions, associated with muscular contraction and relaxation, in order to characterize facial deformations.
These are of great help when dealing with real-time animation, as you will see in our future posts.

Neutral / Compressed / Uncompressed scans of Tatiana in Maya

Here are offline renders of the Digital Double of Tatiana using a dark Image-Based Lighting (IBL):
Offline rendering of the Digital Double of Tatiana Seguin in Arnold

Here are some pictures of Tatiana under uniform illumination:
Photos of Tatiana Seguin

Here are some snapshots of the real-time Digital Double of Tatiana using a light IBL:
Snapshots of the Real-Time Digital Double of Tatiana Seguin in Marmoset

To retain the fidelity of the reconstructed model, we developed a real-time blending mechanism in Unity which enables us to interpolate the mid-range displacements and high-frequency normals associated with the facial expressions, as sketched below.
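
As a simplified stand-in for that mechanism (the actual Unity implementation differs; the array shapes and names below are assumptions), this sketch linearly interpolates per-expression displacement and normal deltas and renormalizes the result:

```python
import numpy as np

def blend_expression_maps(neutral_disp, expr_disps, neutral_nrm, expr_nrms, weights):
    """Blend displacement and tangent-space normal maps across expressions.

    neutral_disp : (H, W) neutral displacement map
    expr_disps   : list of (H, W) per-expression displacement maps
    neutral_nrm  : (H, W, 3) neutral normal map, components in [-1, 1]
    expr_nrms    : list of (H, W, 3) per-expression normal maps
    weights      : per-expression blend weights in [0, 1]
    """
    disp = neutral_disp.copy()
    nrm = neutral_nrm.copy()
    for w, d, n in zip(weights, expr_disps, expr_nrms):
        disp += w * (d - neutral_disp)    # add the displacement delta
        nrm += w * (n - neutral_nrm)      # add the normal delta
    # renormalize the blended normals so they stay unit length
    nrm /= np.linalg.norm(nrm, axis=-1, keepdims=True)
    return disp, nrm
```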



The same approach can naturally be applied in offline CGI as well.




This makes a natural transition to set-up and animation... Stay tuned :)
Eisko

Tech Friday #4: Data Capture versus Model Reconstruction

December 19, 2014


Hi folks! Welcome to this fourth Tech Friday. Today we will investigate the fundamental difference between "data" and "model", trying to avoid too much technical terminology.


To understand how crucial this is, we need to consider what happens with the usual scanning techniques.
Whatever the properties you measure (position, luminance...) and whatever the capture technique you use (laser scan, photography...), the result of your acquisition is just an unstructured collection of samples.
You can use numerous techniques and various software packages to generate a 3D mesh from these acquisitions (extracting point clouds from photos and expressing them under a cylindrical or spherical parametrization through Agisoft's PhotoScan in the case of photogrammetry, for example), but the output will still remain unstructured and carry no semantic information about the subject you captured.

Traditionally, it looks like this:


In addition to being incomplete and noisy, as described in Tech Friday #3, the result of your acquisition is a messy soup of polygons... The vertices have absolutely no clue whether they represent the tip of the nose or the corner of the eye, so the resulting mesh actually has no use beyond a simple, rough visualization of the capture itself.

Most people's reflex is to open ZBrush or a similar modeling tool and do their utmost to clean the noise and retopologize this soup, sculpting a render-compliant mesh with clean UVs adapted to their production pipeline or to the constraints of real-time rendering.
Such a process is not only tedious, costly and time-consuming, but it also drastically impacts the accuracy of your data. Many subtle details get wiped away, so you often end up wondering why the resulting model does not look like the scanned person... Things get decidedly worse when you have to ensure the integrity of the mesh and associated UVs across different scanned expressions, to prevent the geometry and textures from 'sliding' during facial animation.


To prepare the model for animation and rendering while guaranteeing 100% fidelity to the captured subject, we reconstruct and structure this information by computing a dense (i.e. point-to-point) correspondence between our highly detailed acquisitions and a generic model (topology + UVs).
Depending on the customer's choice, this generic mesh can correspond to a topology provided by the video game studio, whose polygon count and texture coordinates perfectly match the requirements of the target game engine, or to our own generic male/female model.


Since this correspondence is driven by the registration of data/model features, it can account for any pose and facial expression while ensuring extreme accuracy in the positions of the vertices and UVs.
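
Our registration is feature-driven and far more robust than anything that fits in a few lines, but purely as a toy illustration of the idea of a point-to-point mapping, the sketch below matches each vertex of a generic template to its nearest sample on a dense scan using a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_correspondence(template_vertices, scan_points):
    """Toy point-to-point correspondence: nearest scan sample per template vertex.

    template_vertices : (N, 3) vertices of the generic model (fixed topology + UVs)
    scan_points       : (M, 3) unstructured samples from the acquisition
    Returns, for each template vertex, the index of (and distance to) its match.
    """
    tree = cKDTree(scan_points)              # spatial index over the raw scan
    dist, idx = tree.query(template_vertices)
    return idx, dist

# Every attribute sampled on the scan (color, normal, ...) can then be
# re-expressed on the template's vertices and UVs through this index map.
```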

This dense correspondence enables us to re-express the different material components as texture maps, according to the texture coordinates associated with the topology under consideration. These maps are originally encoded as 4K 32-bit EXR files; we generally derive PNG textures for real-time rendering.
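
Deriving those real-time PNGs from the float EXR masters essentially amounts to a gamma encode plus quantization. A minimal sketch, assuming an EXR-capable imageio backend (the function name below is ours, not part of our pipeline):

```python
import numpy as np
import imageio.v3 as iio

def exr_to_png(exr_path, png_path, gamma=2.2):
    """Convert a 32-bit linear EXR texture to an 8-bit gamma-encoded PNG."""
    linear = iio.imread(exr_path).astype(np.float32)        # linear float data
    encoded = np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)    # simple gamma encode
    iio.imwrite(png_path, (encoded * 255.0 + 0.5).astype(np.uint8))
```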


Simultaneously, mid-range and high-frequency details are transferred and encoded as displacement and normal maps, respectively. This retains all the tiny details of the HD geometric scan through both offline and real-time renders of the model.
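
To give a concrete sense of how a normal map encodes surface detail, here is an illustrative sketch deriving a tangent-space normal map from a height map by finite differences. In production the high-frequency normals come directly from the acquisition; this is only a sketch of the encoding:

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Derive a tangent-space normal map from a (H, W) height map.

    Central finite differences approximate the surface gradient;
    the normals are then packed into the usual [0, 1] RGB encoding.
    """
    dz_dx = np.gradient(height, axis=1) * strength
    dz_dy = np.gradient(height, axis=0) * strength
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(height)])
    n /= np.linalg.norm(n, axis=-1, keepdims=True)  # unit normals
    return 0.5 * n + 0.5                            # pack [-1,1] -> [0,1]
```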


We'll show you what the results look like in our next Tech Friday. Stay tuned!

Eisko

Tech Friday #3: What differentiates material acquisition from standard photography?

December 5, 2014



What is a picture? And, just as importantly, what is it not?

If you take a picture of a scene (for the sake of argument, let's say a traditional 2D photograph of a face, although the same applies to any image), what you have is actually a view-dependent, instantaneous snapshot of the interactions between lights and materials in the scene, in which:
- The color of the pixels is a product of the radiance emitted by the different light sources (depending on their directions, wavelengths, surface areas...) and diffused/reflected/refracted by the different object materials in the scene (depending on their position, shape, color, brightness, roughness...), then transmitted to the film or camera sensor.
- The pixels have absolutely no structure and carry no semantic information about the scene: while our brain is able to interpret the image and build that understanding, each pixel has no idea whether it represents the background or the subject, part of the nose or of the lips.

Since all material properties are deeply mixed up in the color of the pixels, the direct use of photographs for rendering is only valid when the illumination of the 3D scene is equivalent or very similar to the capture conditions. In order to permit a photorealistic representation, i.e. a physically-based rendering of the subject under any lighting, we have to analyze and extract the different components that characterize its material properties.


Because physically-based shading and rendering call for physically-based textures

To do so, our high-speed 'material cameras' capture the subject under multiple illumination conditions. Analyzing this mountain of data enables us to progressively decorrelate the different contributions to the signal, first separating the diffuse color from the specular component.
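
One classical way to obtain such a separation, shown here purely as an illustration (we are not claiming it is our exact method), relies on polarized illumination: a cross-polarized capture blocks the specular reflection and keeps only the diffuse component, so subtracting it from a parallel-polarized capture isolates the specular signal:

```python
import numpy as np

def separate_diffuse_specular(parallel_pol, cross_pol):
    """Polarization-based diffuse/specular separation (illustrative).

    parallel_pol : (H, W, 3) capture where specular reflection is preserved
    cross_pol    : (H, W, 3) capture where specular reflection is blocked
    """
    diffuse = cross_pol                                       # specular filtered out
    specular = np.clip(parallel_pol - cross_pol, 0.0, None)   # what remains
    return diffuse, specular
```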




Within the diffuse color, we then dissociate the albedo (the intrinsic color of the object, independent of any lighting effects) from the subsurface scattering effects due to light refraction between the dermal and epidermal layers of the skin.
In the specular component, we characterize the intensity and estimate a roughness/glossiness value for the surface.

Simultaneously, this separation enables us to extract normal information per channel, up to the specular normal, which characterizes the true surface geometry down to its finest details.
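
Per-channel normal extraction can be illustrated with a textbook photometric-stereo solve: intensities observed under several known light directions are fitted, per pixel, with a least-squares estimate of the surface normal. Again, this is a classical sketch, not our pipeline:

```python
import numpy as np

def photometric_stereo_normals(intensities, light_dirs):
    """Estimate per-pixel normals from images under known lights.

    intensities : (K, H, W) grayscale images under K point lights
    light_dirs  : (K, 3) unit light directions
    Solves I = L @ (rho * n) per pixel in the least-squares sense.
    """
    k, h, w = intensities.shape
    i_flat = intensities.reshape(k, -1)                      # (K, H*W)
    g, *_ = np.linalg.lstsq(light_dirs, i_flat, rcond=None)  # (3, H*W)
    rho = np.linalg.norm(g, axis=0)                          # per-pixel albedo
    n = (g / np.maximum(rho, 1e-8)).T.reshape(h, w, 3)       # unit normals
    return n, rho.reshape(h, w)
```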



Other intricate properties, such as the index of refraction, absorption ratio and curvature, can also be derived along the way.



A major benefit of breaking down the properties of a material in this way is that the analysis generates exactly the information layers involved in physically-based shading and rendering: processes central to rendering a photorealistic CG image or creating a realistic interactive application.

Because it relies on extracted material properties, our CG representation of the subject conforms precisely to reality, no matter what lighting conditions it is displayed under: something we'll explore further in future Tech Friday posts.

Eisko