{"oa_version":"None","intvolume":" 34","publist_id":"5535","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","date_created":"2018-12-11T11:53:06Z","volume":34,"year":"2015","publisher":"ACM","scopus_import":1,"title":"Detailed spatio-temporal reconstruction of eyelids","quality_controlled":"1","status":"public","type":"conference","author":[{"full_name":"Bermano, Amit","last_name":"Bermano","first_name":"Amit"},{"full_name":"Beeler, Thabo","last_name":"Beeler","first_name":"Thabo"},{"full_name":"Kozlov, Yeara","first_name":"Yeara","last_name":"Kozlov"},{"full_name":"Bradley, Derek","first_name":"Derek","last_name":"Bradley"},{"id":"49876194-F248-11E8-B48F-1D18A9856A87","full_name":"Bickel, Bernd","first_name":"Bernd","last_name":"Bickel","orcid":"0000-0001-6511-9385"},{"full_name":"Gross, Markus","first_name":"Markus","last_name":"Gross"}],"day":"27","doi":"10.1145/2766924","month":"07","date_updated":"2021-01-12T06:52:05Z","citation":{"ieee":"A. Bermano, T. Beeler, Y. Kozlov, D. Bradley, B. Bickel, and M. Gross, “Detailed spatio-temporal reconstruction of eyelids,” presented at the SIGGRAPH: Special Interest Group on Computer Graphics and Interactive Techniques, Los Angeles, CA, United States, 2015, vol. 34, no. 4.","mla":"Bermano, Amit, et al. Detailed Spatio-Temporal Reconstruction of Eyelids. Vol. 34, no. 4, 44, ACM, 2015, doi:10.1145/2766924.","short":"A. Bermano, T. Beeler, Y. Kozlov, D. Bradley, B. Bickel, M. Gross, in:, ACM, 2015.","apa":"Bermano, A., Beeler, T., Kozlov, Y., Bradley, D., Bickel, B., & Gross, M. (2015). Detailed spatio-temporal reconstruction of eyelids (Vol. 34). Presented at the SIGGRAPH: Special Interest Group on Computer Graphics and Interactive Techniques, Los Angeles, CA, United States: ACM. https://doi.org/10.1145/2766924","ista":"Bermano A, Beeler T, Kozlov Y, Bradley D, Bickel B, Gross M. 2015. Detailed spatio-temporal reconstruction of eyelids. SIGGRAPH: Special Interest Group on Computer Graphics and Interactive Techniques vol. 34, 44.","chicago":"Bermano, Amit, Thabo Beeler, Yeara Kozlov, Derek Bradley, Bernd Bickel, and Markus Gross. “Detailed Spatio-Temporal Reconstruction of Eyelids,” Vol. 34. ACM, 2015. https://doi.org/10.1145/2766924.","ama":"Bermano A, Beeler T, Kozlov Y, Bradley D, Bickel B, Gross M. Detailed spatio-temporal reconstruction of eyelids. In: Vol 34. ACM; 2015. doi:10.1145/2766924"},"conference":{"location":"Los Angeles, CA, United States","end_date":"2015-08-13","name":"SIGGRAPH: Special Interest Group on Computer Graphics and Interactive Techniques","start_date":"2015-08-09"},"date_published":"2015-07-27T00:00:00Z","publication_status":"published","issue":"4","_id":"1625","article_number":"44","language":[{"iso":"eng"}],"department":[{"_id":"BeBi"}],"abstract":[{"text":"In recent years we have seen numerous improvements on 3D scanning and tracking of human faces, greatly advancing the creation of digital doubles for film and video games. However, despite the high-resolution quality of the reconstruction approaches available, current methods are unable to capture one of the most important regions of the face - the eye region. In this work we present the first method for detailed spatio-temporal reconstruction of eyelids. Tracking and reconstructing eyelids is extremely challenging, as this region exhibits very complex and unique skin deformation where skin is folded under while opening the eye. Furthermore, eyelids are often only partially visible and obstructed due to selfocclusion and eyelashes. 
Our approach is to combine a geometric deformation model with image data, leveraging multi-view stereo, optical flow, contour tracking and wrinkle detection from local skin appearance. Our deformation model serves as a prior that enables reconstruction of eyelids even under strong self-occlusions caused by rolling and folding skin as the eye opens and closes. The output is a person-specific, time-varying eyelid reconstruction with anatomically plausible deformations. Our high-resolution detailed eyelids couple naturally with current facial performance capture approaches. As a result, our method can greatly increase the fidelity of facial capture and the creation of digital doubles.","lang":"eng"}]}