{"page":"413 - 426","status":"public","year":"2011","publisher":"Science Direct","day":"01","date_published":"2011-01-01T00:00:00Z","citation":{"ieee":"B. Bickel and M. Lang, “From sparse mocap to highly detailed facial animation,” in GPU Computing Gems Emerald Edition, Science Direct, 2011, pp. 413–426.","short":"B. Bickel, M. Lang, in: GPU Computing Gems Emerald Edition, Science Direct, 2011, pp. 413–426.","ama":"Bickel B, Lang M. From sparse mocap to highly detailed facial animation. In: GPU Computing Gems Emerald Edition. Science Direct; 2011:413-426. doi:10.1016/B978-0-12-384988-5.00027-9","mla":"Bickel, Bernd, and Manuel Lang. “From Sparse Mocap to Highly Detailed Facial Animation.” GPU Computing Gems Emerald Edition, Science Direct, 2011, pp. 413–26, doi:10.1016/B978-0-12-384988-5.00027-9.","apa":"Bickel, B., & Lang, M. (2011). From sparse mocap to highly detailed facial animation. In GPU Computing Gems Emerald Edition (pp. 413–426). Science Direct. https://doi.org/10.1016/B978-0-12-384988-5.00027-9","chicago":"Bickel, Bernd, and Manuel Lang. “From Sparse Mocap to Highly Detailed Facial Animation.” In GPU Computing Gems Emerald Edition, 413–26. Science Direct, 2011. https://doi.org/10.1016/B978-0-12-384988-5.00027-9.","ista":"Bickel B, Lang M. 2011. From sparse mocap to highly detailed facial animation. In: GPU Computing Gems Emerald Edition, 413–426."},"date_created":"2018-12-11T11:55:42Z","month":"01","doi":"10.1016/B978-0-12-384988-5.00027-9","author":[{"first_name":"Bernd","full_name":"Bernd Bickel","orcid":"0000-0001-6511-9385","last_name":"Bickel","id":"49876194-F248-11E8-B48F-1D18A9856A87"},{"first_name":"Manuel","full_name":"Manuel Lang","last_name":"Lang"}],"_id":"2098","title":"From sparse mocap to highly detailed facial animation","quality_controlled":0,"publication_status":"published","publication":"GPU Computing Gems Emerald Edition","abstract":[{"lang":"eng","text":"This chapter presents a method for real-time animation of highly detailed facial expressions based on sparse motion-capture data and a limited set of static example poses. The method decomposes facial geometry into large-scale motion and fine-scale details, such as expression wrinkles. Both the large- and fine-scale deformation algorithms run entirely on the GPU, and our CUDA-based implementation achieves an overall performance of about 30 fps. The face conveys the most relevant visual characteristics of human identity and expression; hence, realistic facial animation and interaction with virtual avatars are important for storytelling and gameplay. However, current approaches are either computationally expensive, require very specialized capture hardware, or are extremely labor intensive. At runtime, given an arbitrary facial expression, the algorithm computes the skin strain from the relative distances between marker points and derives fine-scale corrections for the large-scale deformation. During gameplay, only the sparse set of marker-point positions is transmitted to the GPU. The face animation is computed entirely on the GPU, where the resulting mesh can be used directly as input for the rendering stages. This marker data can easily be obtained with traditional capture hardware. The proposed in-game algorithm is fast. 
It is also easy to implement and maps well onto programmable GPUs."}],"publist_id":"4935","date_updated":"2021-01-12T06:55:17Z","type":"book_chapter","extern":1}