Photorealistic Monocular Gaze Redirection Using Machine Learning

Published in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2018

Recommended citation: D. Kononenko, Y. Ganin, D. Sungatullina and V. Lempitsky, "Photorealistic Monocular Gaze Redirection Using Machine Learning," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 11, pp. 2696-2710, 1 Nov. 2018, doi: 10.1109/TPAMI.2017.2737423.

[PDF]

Abstract

We propose a general, machine-learning-based approach to the gaze redirection problem in images. The idea is to learn to re-synthesize images by training on pairs of images with known disparities between gaze directions. We show that such learning-based re-synthesis can achieve convincing gaze redirection from monocular input, and that the learned systems generalize well to people and imaging conditions unseen during training. We describe and compare three instantiations of our idea. The first system is based on efficient decision-forest predictors and redirects the gaze by a fixed angle in real time (on a single CPU), making it particularly suitable for videoconferencing gaze correction. The second system is based on a deep architecture and allows gaze redirection over a range of angles; it achieves higher photorealism, while being several times slower. The third system uses real-time decision forests at test time, while receiving supervision from a "teacher" deep network during training. In our experiments, the third system approaches the quality of the teacher network, and thus provides a highly realistic real-time monocular solution to the gaze correction problem. We present an in-depth assessment and comparison of the proposed systems based on quantitative measurements and a user study.
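
To make the re-synthesis idea concrete, below is a minimal, hypothetical PyTorch sketch of the kind of training setup the abstract describes: a model receives an eye crop plus a requested angular correction and is fit on image pairs with a known gaze disparity. It assumes a warping-based re-synthesizer (the model predicts a per-pixel displacement field and resamples the input), which is one plausible instantiation; the architecture, the `WarpingRedirector` name, and all hyperparameters here are illustrative, not the paper's actual systems.

```python
# Hypothetical sketch of learning-based gaze re-synthesis.
# Supervision: pairs (src, dst) of the same eye differing by a
# known gaze disparity `angle`; the model learns to map src -> dst.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpingRedirector(nn.Module):
    def __init__(self):
        super().__init__()
        # 3 image channels + 2 channels broadcasting the desired
        # (pitch, yaw) correction to every pixel.
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # per-pixel (dx, dy) offsets
        )

    def forward(self, image, angle):
        b, _, h, w = image.shape
        angle_map = angle.view(b, 2, 1, 1).expand(b, 2, h, w)
        flow = self.net(torch.cat([image, angle_map], dim=1))
        # Sampling grid = identity grid plus predicted offsets,
        # in grid_sample's normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
            indexing="ij")
        base = torch.stack([xs, ys], dim=-1).to(image).expand(b, h, w, 2)
        grid = base + flow.permute(0, 2, 3, 1)
        return F.grid_sample(image, grid, align_corners=True)

model = WarpingRedirector()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(src, dst, angle):
    """src, dst: (B, 3, H, W) eye crops whose gaze directions differ
    by the known (pitch, yaw) disparity `angle` of shape (B, 2)."""
    opt.zero_grad()
    loss = F.mse_loss(model(src, angle), dst)
    loss.backward()
    opt.step()
    return loss.item()
```

In the same spirit, the third system's teacher-student scheme would amount to fitting the fast decision-forest predictor against the deep network's outputs rather than (or in addition to) the recorded ground-truth pairs, trading a small quality gap for real-time CPU inference.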