The making of a realistic facial animation with EasyCap Studio Facemotion

We would like to introduce an example in which creators used mocap data to achieve a life-like result.

As the goal was to reproduce as much natural movement as possible in a spectacular way, the team at Vertigo Digital sculpted a detailed, photorealistic 3D model of the actor’s face. They used a scan of the face as a reference and sculpted the final model manually. It contains about 40,000 triangles, making it a medium-density 3D model. Since the rig has to be ready for mocap data, building it can differ a bit from building rigs that will be animated manually.

After that, their skilled sculptors shaped the blendshapes for the facial expressions. Thirty blendshapes were made, and this part of the work took the most time (around 50 hours). At this point, the workload is exactly the same as in ‘manual’ 3D animation.
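
For readers unfamiliar with the technique, the sketch below shows how linear blendshape deformation is typically evaluated: the deformed mesh is the neutral pose plus a weighted sum of per-shape vertex offsets. The array names and shapes are illustrative assumptions, not Vertigo's or Facemotion's actual data layout.

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Standard linear blendshape evaluation.

    neutral: (V, 3) neutral-pose vertex positions.
    deltas:  (S, V, 3) per-shape offsets (sculpted shape minus neutral).
    weights: (S,) blendshape weights, usually in [0, 1].
    """
    # Deformed mesh = neutral pose + weighted sum of the shape deltas.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: one vertex, two hypothetical shapes ('smile', 'browUp').
neutral = np.zeros((1, 3))
deltas = np.array([[[0.0, 1.0, 0.0]],
                   [[0.5, 0.0, 0.0]]])
print(evaluate_blendshapes(neutral, deltas, np.array([0.8, 0.2])))
```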

This model’s movements are driven by the movements of a lower-detail model, but they are transposed through the blendshapes to add extra detail (for a more life-like result). The lower-detail model is driven directly by EasyCap Studio Facemotion mocap data. This way it is even possible to retarget mocap data onto a different head model at high quality. If the rig was built to be driven by mocap data, it is quite easy to set the proper facial expressions for the marker-cloud positions.
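
One common way to map a tracked marker cloud to expression weights like this is a non-negative least-squares fit: each blendshape contributes a known marker displacement, and the solver finds the weight combination that best reproduces the captured marker positions. The sketch below is a minimal illustration of that idea, not necessarily the exact method Facemotion uses; all names and shapes are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def solve_weights(markers, neutral_markers, shape_markers):
    """Fit blendshape weights so the rig's markers match the captured cloud.

    markers:         (M, 3) tracked marker positions for one frame.
    neutral_markers: (M, 3) the same markers on the neutral rig.
    shape_markers:   (S, M, 3) the markers on each blendshape at full weight.
    Returns an (S,) array of non-negative weights.
    """
    S = shape_markers.shape[0]
    # Each column of A is one shape's marker displacement, flattened to 3M values.
    A = (shape_markers - neutral_markers).reshape(S, -1).T
    b = (markers - neutral_markers).ravel()
    # Non-negative least squares keeps the weights physically plausible.
    weights, _ = nnls(A, b)
    return weights
```

In practice the solved weights would typically also be clipped to [0, 1] and smoothed over time before reaching the rig.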

The lower-detail model is optimised to accept mocap data according to a specific marker setup. Facemotion does not use pre-made setups; different ones can be created according to the current tasks and needs. In this example, the team used the following setup, with 68 markers (plus 2 markers for the eye centres). The system can export movements with or without the head movement, as shown in the video.
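
Exporting ‘without the head movement’ usually means rigidly aligning each frame’s marker cloud onto a reference frame so that only the facial deformation remains. Below is a minimal sketch of that stabilisation using the standard Kabsch algorithm, assuming simple (M, 3) marker arrays; this is a common approach, not a description of Facemotion’s internals.

```python
import numpy as np

def remove_head_motion(frame, reference):
    """Strip the rigid head transform so only facial deformation remains.

    frame, reference: (M, 3) marker positions; 'reference' is typically a
    neutral pose frame. Returns 'frame' rigidly aligned onto 'reference'.
    """
    cf, cr = frame.mean(axis=0), reference.mean(axis=0)
    # Kabsch: optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd((frame - cf).T @ (reference - cr))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return (frame - cf) @ R.T + cr
```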

Any kind of facial expression and performance can be transferred to the final model quite easily and quickly. Only the face mocap data with the needed content has to be produced, and the result is available within hours. With mocap data, production time depends less on the length of the performance, so considerable working time can be saved. Naturally, the result always needs to be supervised by an experienced animator, who can remove mistakes and add small details.

Working phase                                              | Workload (mocap-driven rig) | Theoretical workload (same result by manual animation)
Build the rig and textures                                 | 40 hrs                      | 40 hrs
Produce the blendshapes                                    | 50 hrs                      | 50 hrs
Prepare Facemotion system and actor for recording          | 0.75 hr                     | -
Record, track, export and clean mocap data                 | 0.5 hr                      | -
Apply mocap data to the rig                                | 2 hrs                       | -
Animation ‘by hand’                                        | -                           | 25 hrs
Total                                                      | 93.25 hrs                   | 115 hrs

Please note that the workload in every phase greatly depends on the required quality, so it can differ in other projects! The length of the performance has little effect on mocap data production time.

 
Further possibilities

If you need more control over the result, it is possible to create an additional layer for manual animation. With this layer, mocap-driven movements can be modified by hand, so important details can be added afterwards to arrive at exactly the result you imagined.
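
One straightforward way to implement such a layer is to keep the animator’s corrections as additive offsets on top of the mocap-driven weight curves, so the raw mocap result is reproduced whenever the offsets are zero. A minimal sketch with assumed array shapes, not a description of any particular tool:

```python
import numpy as np

def layered_weights(mocap_weights, manual_offsets):
    """Combine mocap-driven weight curves with a hand-animated offset layer.

    mocap_weights:  (F, S) blendshape weights per frame from the mocap solve.
    manual_offsets: (F, S) animator-authored corrections, zero by default.
    """
    # The manual layer is purely additive: zeroing it reproduces the raw
    # mocap result, and clipping keeps weights in the usual [0, 1] range.
    return np.clip(mocap_weights + manual_offsets, 0.0, 1.0)
```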

Textures can also be modified according to the facial expression (blood in the veins, etc.) to give the digital character even more life.
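
As a rough illustration, a ‘flushed’ skin texture can be blended in as an expression intensifies, driven for example by a solved blendshape weight. The sketch below is a simple CPU-side linear blend under assumed inputs; a production setup would do this in a shader:

```python
import numpy as np

def blend_skin_texture(neutral_tex, flushed_tex, expression_weight):
    """Blend toward a 'flushed' skin texture as an expression intensifies.

    neutral_tex, flushed_tex: (H, W, 3) float arrays in [0, 1].
    expression_weight: scalar in [0, 1], e.g. a solved blendshape weight
    for a strained expression that should drive visible blood flow.
    """
    t = float(np.clip(expression_weight, 0.0, 1.0))
    # Simple linear interpolation; a production setup would do this in a
    # shader, possibly with separate colour and subsurface maps.
    return (1.0 - t) * neutral_tex + t * flushed_tex
```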

As the team at Vertigo put it: "EasyCap's Facemotion could generate a rock-solid track so fast and so easily that it completely blew our minds. All track points were ready to even direct-drive our face rig without any cleanup process." So, with some experience in mocap, you can solve the same problem in several different ways. In most cases it is fully compatible with proven methods, and there are many possibilities for developing new ones. We advise you to have a look at our sample files (they are raw export results without any kind of cleaning) and see whether Facemotion could be a possibility for you.

Download (.pdf)