
BLOG

Tech Friday #5: Physically-based shading and rendering

December 26, 2014


Hi folks! After examining the capture, analysis and reconstruction process, let's see how the resulting model enables us to faithfully reproduce the appearance of a subject in both offline and real-time renders.

We have had the chance and pleasure to collaborate with the dancer, choreographer and actress Tatiana Seguin. With her kind agreement, we have been able to reconstruct her digital double to showcase our services.

For real-time applications, we have chosen to present this result in Unity due to the versatility of this multi-platform game engine.

Relying on the reconstructed geometry and material properties, we have developed a physically-based shader in Unity 5 which integrates:
- diffuse albedo, specular intensity and roughness
- micro-surface displacement with tessellation
- high-frequency details from the normal map
as well as screen-space subsurface scattering and ambient occlusion.
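To make "physically-based" a bit more concrete, here is a minimal sketch of a shading model built from exactly these inputs - albedo, specular intensity and roughness. It is written in Python for readability rather than as shader code, and the Lambert + GGX/Cook-Torrance combination is a standard choice from the literature, not necessarily the exact model our Unity shader implements:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def shade(n, v, l, albedo, spec_intensity, roughness, light_color):
    """Evaluate a simple physically-based model: Lambertian diffuse
    plus a Cook-Torrance specular lobe with a GGX distribution."""
    n, v, l = normalize(n), normalize(v), normalize(l)
    h = normalize(v + l)                 # half vector between view and light
    nl = max(np.dot(n, l), 0.0)
    nv = max(np.dot(n, v), 1e-4)
    nh = max(np.dot(n, h), 0.0)
    vh = max(np.dot(v, h), 0.0)

    # GGX normal distribution function (a2 = alpha^2, alpha = roughness^2)
    a2 = roughness ** 4
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)

    # Schlick's Fresnel approximation, F0 taken from the specular intensity
    f = spec_intensity + (1.0 - spec_intensity) * (1.0 - vh) ** 5

    # Smith-style geometric attenuation (Schlick-GGX form)
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1.0 - k) + k)) * (nv / (nv * (1.0 - k) + k))

    specular = d * f * g / (4.0 * nl * nv + 1e-4)
    diffuse = albedo / np.pi             # Lambertian diffuse term
    return (diffuse + specular) * light_color * nl
```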

We follow a similar approach in Solid Angle's Arnold for offline rendering, based on the HD geometry and floating-point textures.

Real-time digital double of Tatiana Seguin in Unity 5

In addition to the "neutral" face, we generally recommend capturing two extreme expressions, associated with muscular contraction and relaxation, in order to characterize facial deformations.
These are of great help when dealing with real-time animations, as you will see in our future posts.

Neutral / Compressed / Uncompressed scans of Tatiana in Maya

Here are offline renders of the digital double of Tatiana using a dark image-based lighting (IBL) environment:
Offline rendering of the Digital Double of Tatiana Seguin in Arnold

Here are some pictures of Tatiana under uniform illumination:
Photos of Tatiana Seguin

Here are some snapshots of the real-time digital double of Tatiana using a bright IBL environment:
Snapshots of the Real-Time Digital Double of Tatiana Seguin in Marmoset

To retain the fidelity of the reconstructed model, we developed a real-time blending mechanism in Unity which enables us to interpolate the mid-range displacements and high-frequency normals associated with each facial expression:
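Conceptually (our actual implementation runs on the GPU inside Unity), this blending is a weighted interpolation of the per-expression maps, with the blended normals renormalized afterwards. A minimal sketch:

```python
import numpy as np

def blend_expression_maps(disp_maps, normal_maps, weights):
    """Blend per-expression displacement and tangent-space normal maps.

    disp_maps:   (E, H, W) float array, one displacement map per expression
    normal_maps: (E, H, W, 3) float array, normals remapped to [-1, 1]
    weights:     (E,) blend weights, e.g. driven by the animation rig
    """
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                              # normalize the blend weights

    disp = np.tensordot(w, disp_maps, axes=1)    # weighted sum of displacements

    n = np.tensordot(w, normal_maps, axes=1)     # weighted sum of normals...
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8  # ...then renormalize
    return disp, n
```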



The same approach naturally carries over to offline CGI as well:




This leads us straight into rig set-up and animation... Stay tuned :)
Eisko

Tech Friday #4: Data Capture versus Model Reconstruction

December 19, 2014


Hi folks! Welcome to this fourth Tech Friday. Today we will investigate the fundamental difference between "data" and "model", while trying to avoid too much technical terminology.


To understand how crucial this is, we need to consider what happens with the usual scanning techniques.
Whatever the properties you measure (position, luminance...) and whatever the capture technique you use (laser scanning, photography...), the result of your acquisition is just an unstructured collection of samples.
You can use numerous techniques and various software tools to generate a 3D mesh from these acquisitions - extracting point clouds from photos and expressing them under a cylindrical or spherical parametrization with Agisoft's PhotoScan, in the case of photogrammetry, for example - but the output will still remain unstructured and carry no semantic information about the subject you captured.

Traditionally, it looks like this:


In addition to being incomplete and noisy, as described in Tech Friday #3, the result of your acquisition is a messy soup of polygons... The vertices have absolutely no clue whether they represent the tip of the nose or the corner of the eye, so the resulting mesh has little value beyond serving as a crude visualization of the capture itself.

Most people's reflex is to open ZBrush or a similar sculpting tool and do their utmost to clean up the noise and retopologize this soup into a render-compliant mesh with clean UVs, adapted to their production pipeline or to the constraints of real-time rendering.
Such a process is not only tedious, costly and time-consuming, but it also drastically impacts the accuracy of your data. Many subtle details get wiped away, so you often end up wondering why the resulting model does not look like the scanned person... Things get decidedly worse when you have to ensure the integrity of the mesh and its associated UVs across different scanned expressions, to prevent the geometry and textures from 'sliding' during facial animation.


To prepare the model for animation and rendering while guaranteeing 100% fidelity to the captured subject, we reconstruct and structure this information by computing a dense (i.e. point-to-point) correspondence between our highly detailed acquisitions and a generic model (topology + UVs).
Depending on the customer's choice, this generic mesh can correspond either to a topology provided by the video game studio - with polygon counts and texture coordinates that perfectly match the requirements of the target game engine - or to our own generic male/female model.


Since this correspondence is driven by the registration of features between data and model, it can account for any pose and facial expression while ensuring extreme accuracy in the positions of the vertices and UVs.
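To illustrate the "point-to-point" structure of the output, here is a deliberately naive sketch: once the generic mesh has been registered to the scan, each generic vertex simply looks up its closest scan sample. Our actual pipeline is feature-driven and far more involved; the `scipy` lookup below is purely illustrative:

```python
import numpy as np
from scipy.spatial import cKDTree

def dense_correspondence(generic_vertices, scan_points):
    """For each vertex of the registered generic mesh, find the closest
    scan sample. The result maps every generic vertex (with its fixed
    topology and UVs) to a point of the unstructured acquisition."""
    tree = cKDTree(scan_points)               # spatial index on the raw scan
    dist, idx = tree.query(generic_vertices)  # nearest scan sample per vertex
    return scan_points[idx], dist             # corresponded positions + residuals
```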

This dense correspondence enables us to re-express the different material components as texture maps, according to the texture coordinates associated with the topology under consideration. These maps are originally encoded as 4K 32-bit EXR files; we generally derive PNG textures from them for real-time rendering.
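Deriving 8-bit PNGs from the 32-bit float masters essentially amounts to a linear-to-sRGB conversion followed by quantization. A minimal sketch, assuming the EXR has already been decoded into a float array (e.g. with OpenEXR or imageio):

```python
import numpy as np

def float_texture_to_png8(linear_rgb):
    """Convert a linear 32-bit float texture to 8-bit sRGB values,
    the usual encoding for real-time PNG textures. Values above 1.0
    are clipped, which is where the float master loses information."""
    c = np.clip(linear_rgb, 0.0, 1.0)
    # piecewise sRGB transfer function (linear toe + gamma segment)
    srgb = np.where(c <= 0.0031308,
                    12.92 * c,
                    1.055 * np.power(c, 1.0 / 2.4) - 0.055)
    return np.round(srgb * 255.0).astype(np.uint8)
```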


Simultaneously, mid-range and high-range frequencies are transferred and encoded as displacement and normal maps respectively. This preserves all the tiny details of the HD geometric scan in both offline and real-time renders of the model.
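As a small illustration of how geometric high frequencies end up in a normal map, here is a sketch (not our production code) that derives a tangent-space normal map from a height/displacement map by finite differences:

```python
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """Derive a tangent-space normal map from a height map.

    height:   (H, W) float array of displacements
    strength: scales the slopes, i.e. how pronounced the details look
    """
    dy, dx = np.gradient(height)      # finite-difference slopes (rows, cols)
    # surface z = h(x, y) has normal proportional to (-dh/dx, -dh/dy, 1)
    n = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return 0.5 * (n + 1.0)            # remap [-1, 1] -> [0, 1] for storage
```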


We'll show you what the results look like in our next Tech Friday. Stay tuned!

Eisko

Tech Friday #3: What differentiates material acquisition from standard photography?

December 5, 2014



What is a picture? And, just as importantly, what is it not?

If you take a picture of a scene - for the sake of argument, let's say a traditional 2D photograph of a face, although the same applies to any image - what you have is actually a view-dependent, instantaneous snapshot of the interactions between lights and materials in the scene, in which:
- The color of the pixels is a product of the radiance emitted by the different light sources (depending on their directions, wavelengths, surface areas...) and diffused/reflected/refracted by the materials of the different objects in the scene (depending on their position, shape, color, brightness, roughness...), then transmitted to the film or camera sensor.
- The pixels have absolutely no structure and contain no semantic information about the scene: while our brain is able to interpret the image to build this understanding, each pixel has no idea whether it represents the background or the subject, part of the nose or of the lips.

Since all material properties are deeply mixed up in the color of the pixels, the direct use of photographs for rendering is only valid when the illumination of the 3D scene is equivalent, or very similar, to the capture conditions. To permit a photorealistic representation - i.e. a physically-based rendering of the subject under any lighting - we have to analyze and extract the different components that characterize its material properties.


Because physically-based shading and rendering require physically-based textures

To do so, our high-speed 'material cameras' capture the subject under multiple illumination conditions. Analyzing this mountain of data enables us to progressively decorrelate the different contributions to the signal, first separating the diffuse color from the specular components.
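This post does not spell out the exact separation mechanism, but a classic approach from the literature exploits polarized illumination: specular reflection preserves the polarization of the incident light, while light that scatters under the skin is depolarized. A minimal sketch under that assumption:

```python
import numpy as np

def separate_diffuse_specular(cross_polarized, parallel_polarized):
    """Separate diffuse and specular components from a polarized image pair.

    The cross-polarized view blocks the specular reflection and therefore
    contains half of the (depolarized) diffuse signal; the parallel view
    contains the other diffuse half plus the full specular reflection.
    """
    diffuse = 2.0 * cross_polarized               # depolarized light splits evenly
    specular = parallel_polarized - cross_polarized
    return diffuse, np.clip(specular, 0.0, None)  # clamp sensor-noise negatives
```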




In the diffuse color, we then dissociate the albedo (the intrinsic color of the object, independent of any lighting) from the sub-surface scattering effects due to light refraction between the dermal and epidermal layers of the skin.
In the specular component, we characterize the intensity and estimate a roughness/glossiness value for the surface.

Simultaneously, this separation enables us to extract normal information per channel, up to the specular normal, which characterizes the true surface geometry down to its finest details.
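One well-known published technique for this kind of per-channel normal extraction (spherical gradient illumination, Ma et al. 2007) recovers a normal per pixel from three gradient-lit images plus a uniformly lit one. A minimal sketch, assuming that family of methods is in play:

```python
import numpy as np

def gradient_illumination_normals(ix, iy, iz, ifull):
    """Estimate per-pixel normals from images lit by linear spherical
    gradients along x, y, z, plus a uniformly lit ('full-on') image.

    Each gradient/full-on ratio encodes one component of the normal.
    Running this per color channel (or on the separated specular signal)
    yields per-channel and specular normals.
    """
    eps = 1e-6
    n = np.dstack((2.0 * ix / (ifull + eps) - 1.0,
                   2.0 * iy / (ifull + eps) - 1.0,
                   2.0 * iz / (ifull + eps) - 1.0))
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)
```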



Other intricate properties, such as the refraction index, absorption ratio and surface curvatures, can also be derived along the way.



A major benefit of breaking the properties of a material in this way is that this analysis generates the information layers involved in physically-based shading and rendering: processes central to rendering a photorealistic CG image, or creating a realistic interactive application.

By relying on extracted material properties, our CG representation of the subject conforms precisely to reality, no matter what lighting conditions it is displayed under - something we'll explore further in future Tech Friday posts.

Eisko

Tech Friday #2: Eisko capture protocol

November 28, 2014

Hi folks!

Last week we started to explain our capture system and approach. Today, let's focus on the protocol: what does the person being scanned actually have to do?

When it comes to capturing expressions, a lot of things have to be planned in advance: you cannot afford to recall the person later because you missed a pose. Nor can you capture thousands of expressions just to be on the safe side: in production, things usually need to be done very quickly.

Having worked on human capture for years, we have explored all of the existing capture protocols and taken the best aspects of each to create our own. After years of iteration and improvement, we think we have a well-optimized capture protocol. It basically has three main parts:

- Muscle deformations of the face (eyeball deformations, opening of the mouth, jaw movements, etc.)

- Emotional expressions. These are grouped into two categories: primary emotions (happiness, anger, fear, etc.) and secondary emotions (interest, irritation, disregard, sarcasm, etc.)

- Viseme deformations. Visemes are the visual counterparts of phonemes: basically, they represent the shapes a face makes while speaking. Recording them is a bit more complicated than reciting the letters of the alphabet, as the same phoneme induces different deformations in different contexts (depending on the surrounding words and the emotional state).
Visemes fall into three main categories: widened consonants, rounded consonants and monophthongs.

By asking the subject to recite a standard phrase that contains all the visemes of the English or French language, we can quickly capture all the speech-related facial deformations. These phrases are derived from our study of various scientific papers on the subject.


This protocol allows us to capture and characterize all the possible deformations linked to the 104 physiological degrees of freedom of your face, or my face, or anyone else's.

Once this is done, we are left with gigabytes of data. What we do with that data is a question we will answer in future Tech Fridays. Stay tuned!

Eisko


Tech Friday #1: Yet another capture system? Or: why photogrammetry just isn't good enough!

November 21, 2014



Hi Folks,

In our first Tech Friday post, we wanted to give you a brief insight into the motivations and approach underlying our capture system.

Since their introduction for computer-aided design and quality inspection, a number of different approaches to capturing 3D objects have been investigated:





Laser scanning / contact measurement / structured light




These approaches can be classified according to:
  • The physical properties they can acquire: '3D scanning' generally refers to geometry capture, potentially supplemented by photo-based texture mapping.
  • Whether the capture is passive or active: does the capture process modify or interfere with the phenomena it records?
  • The type of object captured: does the process record static or mobile/articulated/deformable objects?
While contact measurement and handheld scanners are not well suited to capturing non-rigid, deformable objects like the face, structured light and photogrammetry are good candidates for geometry capture.




Photogrammetry is based on the detection and registration of similar features across multiple DSLR images. External and internal calibration of the camera sensors then makes it possible to triangulate the corresponding positions in space.
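As a small illustration of that last step, here is a minimal two-view triangulation using the classic direct linear transform (DLT), assuming the 3x4 projection matrices of two calibrated cameras are known:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Triangulate one 3D point from two calibrated views by DLT.

    P1, P2: 3x4 camera projection matrices (internal + external calibration)
    x1, x2: (u, v) pixel coordinates of the same feature in each image
    """
    # each view contributes two linear constraints on the homogeneous point
    A = np.vstack((x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]))
    _, _, vt = np.linalg.svd(A)   # least-squares solution = last right singular vector
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenize
```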

Such capture rigs are easy to set up and increasingly affordable, thanks to the growing availability of good-quality cameras. This approach can also handle a subject in different poses, since shooting is almost instantaneous.

But on closer inspection, the resulting scan is noisy and unstructured...


It took us some time to realize that such artifacts result from two fundamental causes, independent of the number and resolution of the cameras used:
  • Light scattering caused by the specular and subsurface properties of the skin, surface fuzz and eyelashes, which reflect and absorb light directionally, so that epipolar features cannot be identical across different camera views.
  • The difficulty of achieving a perfectly accurate geometric calibration of the cameras - which remains an open problem for photogrammetry.
After years of technology surveys and experimentation in this area, we finally decided to develop our own capture system. Our objective was to ensure the simultaneous acquisition of both the detailed geometry and the physically-based textures which characterize the precise appearance of any pose of any subject.


To do so, we do not rely on a direct acquisition; instead, we analyze and dissociate the way the skin, the mouth and the eyes diffuse, reflect and absorb light under precisely calibrated illumination conditions.

Our capture system is made of 1,600 LED lights, which allow us to control the intensity and direction of the lighting very accurately. Our high-speed 'material cameras' enable us to separate the diffuse and specular components of the surfaces being recorded, and to extract normal information for each RGB color channel, among other things.

The result of this analysis is a clean 3D geometric scan, accurate down to 20 micrometers, complete with tiny details like wrinkles and pores, and totally free of the reflection and refraction artifacts traditional in photogrammetry :)







I hope this post has helped you understand why we decided to build yet another capture system. In future Tech Fridays, we'll look at what makes Eisko unique on both the capture and reconstruction sides.

Eisko

It's alive!

November 18, 2014

We have had the pleasure of collaborating with the dancer, choreographer and actress Tatiana Seguin. With her kind permission, we have been able to reconstruct her digital double.

This week Tatiana saw her digital double, animated in Unity. She was so taken with it that I started filming her reactions with one of our capture cameras. It is always extremely impressive to discover your digital self on screen.

"It's alive!" she said :)

(Yes, that is a drawing of the Rosetta probe on the whiteboard behind Tatiana.)


Eisko

Game Connection

October 30, 2014

Hi folks!

Cedric gave a talk at Game Connection in Paris on 29 October.




It was a great opportunity to unveil our new animation service. The presentation, which explains in some depth our process for capturing, reconstructing, rigging and animating a digital double, then integrating it into Unity 5.0, went down well with the audience.

We know that live presentation is a difficult exercise, especially when explaining such a unique and technical process. It was good to get this first presentation done, as we know we will keep improving with time. We will be sure to present this again - next stop, the famous GDC in San Francisco!

We will also be posting about our capture, analysis and reconstruction process on this blog. Stay tuned!

Eisko

Unite keynote

October 5, 2014



Hi folks! Unity showcased Eisko's digital double at the Unite keynote in Seattle. We collaborated with Unity to demonstrate the successful integration of our "digital doubles" in Unity 5.0 (and thus in any game engine or production pipeline), using the brand-new physically based rendering of Unity 5.0.

Stay tuned for more in-depth posts here, and be sure to check out www.eisko.com


Eisko

Siggraph 2014

October 3, 2014

Hi folks! SIGGRAPH is always an exciting moment for everyone passionate about CGI and digital imagery of any kind!



Pierre flew to Vancouver and happily presented Eisko's digital double service.




Much more content to come, stay tuned!

Eisko

We are live!

October 2, 2014

We are happy to announce some great news: we are live! Eisko has just launched as a new company brand.
Led by Cedric, we proudly offer "a breakthrough in photoreal digital characters" :)
We believe that our technology will change the character creation process in many entertainment applications. Scanning techniques have already shifted the landscape a little, but there is so much more to come...

A lot of fun posts are waiting to be published here, explaining who we are and what we do (and also why and how), so stay tuned!
Meanwhile, be sure to check out www.eisko.com!

Eisko