Tech Friday #5: Physically-based shading and rendering

December 26, 2014

Hi folks! After examining the capture, analysis and reconstruction process, let's see how the resulting model enables us to faithfully reproduce the appearance of a subject in both offline and real-time renders.

We have had the chance and pleasure to collaborate with the dancer, choreographer and actress Tatiana Seguin. With her kind agreement, we have reconstructed her digital double to showcase our services.

For real-time applications, we have chosen to present this result in Unity due to the versatility of this multi-platform game engine.

Relying on the reconstructed geometry and material properties, we have developed a physically-based shader in Unity 5 which integrates:
- diffuse albedo, specular intensity and roughness
- micro-surface displacement tessellation
- high-frequency details of the normal map
as well as screen-space subsurface scattering and ambient occlusion.
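To give a feel for the specular term such a physically-based shader evaluates from the roughness and specular-intensity maps, here is a minimal Python sketch of the standard GGX/Cook-Torrance microfacet model. The function names and scalar dot-product inputs are our own illustration, not the actual shader code:

```python
import math

def ggx_ndf(n_dot_h, roughness):
    # GGX / Trowbridge-Reitz normal distribution (roughness in [0, 1])
    a2 = (roughness * roughness) ** 2
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_g1(n_dot_x, roughness):
    # Schlick-GGX visibility term for one direction (light or view)
    k = (roughness + 1.0) ** 2 / 8.0
    return n_dot_x / (n_dot_x * (1.0 - k) + k)

def fresnel_schlick(v_dot_h, f0):
    # Schlick's approximation of Fresnel reflectance
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h,
                           roughness, specular_intensity):
    # Microfacet specular: D * G * F / (4 (n.l)(n.v))
    d = ggx_ndf(n_dot_h, roughness)
    g = smith_g1(n_dot_l, roughness) * smith_g1(n_dot_v, roughness)
    f = fresnel_schlick(v_dot_h, specular_intensity)
    return d * g * f / max(4.0 * n_dot_l * n_dot_v, 1e-6)
```

At render time, the dot products come from the interpolated surface normal, light and view directions, while roughness and specular intensity (acting as the reflectance at normal incidence, F0) are read from the reconstructed texture maps.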

We take a similar approach for offline rendering in Solid Angle's Arnold, based on the HD geometry and floating-point textures.

Real-time digital double of Tatiana Seguin in Unity 5

In addition to the "neutral" face, we generally recommend capturing two extreme expressions, associated with muscular contraction and relaxation, in order to characterize facial deformations.
These are of great help when dealing with real-time animation, as you will see in our future posts.
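The idea behind these extra scans can be sketched as classic blendshape deltas: each extreme expression is stored as per-vertex offsets from the neutral scan, which can then be weighted during animation. The vertex data below is a toy illustration, not the actual scan data:

```python
# Hypothetical per-vertex positions (x, y, z) from two registered scans;
# in practice these come from the reconstruction pipeline.
neutral    = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
compressed = [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0)]

def deltas(target, base):
    # A blendshape target stores per-vertex offsets relative to the neutral scan
    return [tuple(t - b for t, b in zip(tv, bv))
            for tv, bv in zip(target, base)]

def apply_blendshape(base, delta, weight):
    # Linear blendshape: displace each vertex by weight * delta
    return [tuple(b + weight * d for b, d in zip(bv, dv))
            for bv, dv in zip(base, delta)]
```

With a weight of 1.0 the deformed mesh matches the extreme scan exactly; intermediate weights give plausible in-between poses.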

Neutral / Compressed / Uncompressed scans of Tatiana in Maya

Here are offline renders of the Digital Double of Tatiana using a dark Image-Based Lighting (IBL) environment:
Offline rendering of the Digital Double of Tatiana Seguin in Arnold

Here are some pictures of Tatiana under uniform illumination:
Photos of Tatiana Seguin

Here are some snapshots of the real-time Digital Double of Tatiana using a light IBL:
Snapshots of the Real-Time Digital Double of Tatiana Seguin in Marmoset

To preserve the fidelity of the reconstructed model, we have developed a real-time blending mechanism in Unity which interpolates the mid-range displacements and high-frequency normals associated with the facial expressions:
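In spirit, this blending works like the following minimal Python sketch: displacement values blend linearly against the neutral maps, while blended normals are renormalized to stay unit-length. The names and data layout are illustrative assumptions, not the Unity implementation:

```python
import math

def blend_maps(neutral_disp, neutral_nrm, expressions, weights):
    """Blend per-texel displacement scalars and tangent-space normals.

    `expressions` is a list of (disp, nrm) texel lists, one per expression;
    `weights` are the per-expression activation weights (hypothetical names).
    """
    disp = list(neutral_disp)
    nrm = [list(n) for n in neutral_nrm]
    for w, (e_disp, e_nrm) in zip(weights, expressions):
        for i in range(len(disp)):
            # Displacements add up linearly as deltas from the neutral map
            disp[i] += w * (e_disp[i] - neutral_disp[i])
            for c in range(3):
                nrm[i][c] += w * (e_nrm[i][c] - neutral_nrm[i][c])
    # Renormalize the blended normals so lighting stays correct
    out_nrm = []
    for n in nrm:
        length = math.sqrt(sum(c * c for c in n))
        out_nrm.append(tuple(c / length for c in n))
    return disp, out_nrm
```

In a shader this per-texel loop would of course run on the GPU, with the expression maps bound as textures and the weights driven by the animation.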

The same approach can naturally be considered in CGI as well:

This leads us directly to setup and animation... Stay tuned :)