
tech friday #4: Data Capture versus Model Reconstruction

December 19, 2014


Hi Folks! Welcome to this fourth tech friday. Today we will investigate the fundamental difference between "Data" and "Model", trying to avoid too much technical terminology.


To understand how crucial this is, we need to consider what happens with the usual scanning techniques.
Whatever the properties you measure (position, luminance...) and whatever the capture technique you consider (laser scanning, photography...), the result of your acquisition is just an unstructured collection of samples.
You can use numerous techniques and various software tools to generate a 3D mesh from these acquisitions - extracting point clouds from photos, or expressing them under a cylindrical or spherical parametrization through Agisoft's PhotoScan in the case of photogrammetry, for example - but the output will still remain unstructured and carry no semantic information about the subject you captured.

Traditionally, it looks like this:


In addition to being incomplete and noisy, as described in tech friday #3, the result of your acquisition is a messy soup of polygons... The vertices have absolutely no clue whether they represent the tip of the nose or the corner of the eye, so the resulting mesh has no real use except as a simple, poor visualization of the capture itself.
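
To make this concrete, here is a tiny Python sketch of what such an acquisition looks like in memory (the file name and format are made up for illustration): just an unordered pile of samples, which you can shuffle without losing any "structure", because there is none.

```python
import numpy as np

# e.g. parsed from the XYZ/PLY export of a photogrammetry tool (hypothetical file)
points = np.loadtxt("face_scan.xyz")   # shape (N, 3): one x, y, z per sample

# Nothing here says which sample is the tip of the nose or the corner of
# the eye; a shuffled copy is an equally valid "scan".
rng = np.random.default_rng(0)
shuffled = points[rng.permutation(len(points))]
```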

Most people's reflex is to open ZBrush or a similar modeling tool and do their utmost to clean the noise and retopologize this soup, sculpting a render-compliant mesh with clean UVs adapted to their production pipeline or to the constraints of real-time rendering.
Such a process is not only tedious, costly and time-consuming, but it also drastically impacts the accuracy of your data. Many subtle details get wiped away, so you often end up wondering why the resulting model does not look like the scanned person... Things get definitely worse when you have to ensure the integrity of the mesh and associated UVs across the different scanned expressions, to prevent the geometry and textures from 'sliding' during facial animation.


To prepare the model for animation and rendering while guaranteeing 100% fidelity to the captured subject, we reconstruct and structure this information by computing a dense (i.e. point-to-point) correspondence between our highly detailed acquisitions and a generic model (topology + UVs).
Depending on the customer's choice, this generic mesh can be the topology provided by the video game studio - with polygon counts and texture coordinates that perfectly match the requirements of the targeted game engine - or our own generic male/female model.
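
For the curious, here is a deliberately naive Python sketch of the idea behind dense correspondence: a simple nearest-neighbour match with SciPy, with hypothetical file names. Our actual registration is far more robust than this, but the principle is the same - every vertex of the generic template gets matched to a point of the scan, so the clean topology and UVs now "carry" the captured geometry.

```python
import numpy as np
from scipy.spatial import cKDTree

template_vertices = np.load("generic_model_vertices.npy")  # (M, 3), hypothetical
scan_points = np.load("scan_points.npy")                   # (N, 3), N >> M

# For each template vertex, find the index of its closest sample on the scan
tree = cKDTree(scan_points)
distances, correspondence = tree.query(template_vertices)

# The template keeps its topology and UVs, but now sits on the captured surface
registered_vertices = scan_points[correspondence]
```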


Since this correspondence is driven by the registration of features between data and model, it can account for any pose and facial expression while ensuring extreme accuracy in the positions of the vertices and UVs.
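
As an illustration of the feature-driven part, here is a minimal sketch of how a few matched features - nose tip, eye corners... - can remove the pose with a rigid Procrustes fit before the dense matching takes over. The landmark files are hypothetical, and again this is a toy version of the idea, not our actual pipeline.

```python
import numpy as np

def rigid_procrustes(src, dst):
    """Best rotation R and translation t such that R @ src_i + t ~ dst_i."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:   # guard against a reflection
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, dst.mean(0) - R @ src.mean(0)

model_landmarks = np.load("model_landmarks.npy")   # (K, 3), hypothetical
scan_landmarks = np.load("scan_landmarks.npy")     # (K, 3), same K features
model_vertices = np.load("generic_model_vertices.npy")

R, t = rigid_procrustes(model_landmarks, scan_landmarks)
aligned_model = model_vertices @ R.T + t           # pose removed before matching
```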

This dense correspondence enables us to re-express the different material components as texture maps, according to the texture coordinates associated with the topology under consideration. These maps are originally encoded as 4K 32-bit EXR files; we generally derive PNG textures for real-time rendering.
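
As a rough illustration - not our actual baker - here is how values picked up through the correspondence could be splatted into the template's UV layout, with hypothetical inputs. A real baker interpolates across the triangles of the UV islands rather than splatting one texel per vertex.

```python
import numpy as np

uvs = np.load("template_uvs.npy")            # (M, 2), in [0, 1], hypothetical
albedo = np.load("matched_scan_albedo.npy")  # (M, 3), via the correspondence

size = 4096                                  # 4K, as mentioned above
texture = np.zeros((size, size, 3), dtype=np.float32)

# Nearest-texel splat of each vertex value into UV space
texels = np.clip((uvs * (size - 1)).astype(int), 0, size - 1)
texture[size - 1 - texels[:, 1], texels[:, 0]] = albedo  # image v axis is flipped

# 'texture' can then be written out as a 32-bit float EXR (e.g. with the
# OpenEXR bindings) and down-converted to PNG for real-time use.
```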


Simultaneously, medium- and high-frequency details are transferred and encoded as displacement and normal maps respectively. This preserves all the tiny details of the HD geometric scan in both offline and real-time renders of the model.
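
Here is a last simplified sketch, with hypothetical inputs, of the idea behind the displacement map: measure, for each vertex of the fitted template, the signed offset along its normal to the nearest point of the HD scan. Splatted into UV space exactly like the albedo above, these values become the displacement map; the normal map is produced in the same spirit from the HD surface normals.

```python
import numpy as np
from scipy.spatial import cKDTree

template_v = np.load("registered_vertices.npy")  # (M, 3), fitted template
template_n = np.load("template_normals.npy")     # (M, 3), unit normals
scan_points = np.load("scan_points.npy")         # (N, 3), HD acquisition

tree = cKDTree(scan_points)
_, nearest = tree.query(template_v)

# Signed offset: positive where the HD detail sits above the smooth
# template surface, negative where it dips below
offsets = scan_points[nearest] - template_v
displacement = np.einsum("ij,ij->i", offsets, template_n)  # (M,)
```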


We'll show you what the results look like in our next tech friday. Stay tuned!

Eisko