Facial Composite Generation with Iterative Human Feedback
Florian Strohm, Ekta Sood, Dominike Thomas, Mihai Bâce, Andreas Bulling
Proc. The 1st Gaze Meets ML workshop, PMLR, pp. 165–183, 2023.
Abstract
We propose the first method in which human and AI collaborate to iteratively reconstruct the human’s mental image of another person’s face only from their eye gaze. Current tools for generating digital human faces involve a tedious and time-consuming manual design process. While gaze-based mental image reconstruction represents a promising alternative, previous methods still assumed prior knowledge about the target face, thereby severely limiting their practical usefulness. The key novelty of our method is a collaborative, iterative query engine: Based on the user’s gaze behaviour in each iteration, our method predicts which images to show to the user in the next iteration. Results from two human studies (N=12 and N=22) show that our method can visually reconstruct digital faces that are more similar to the mental image, and is more usable compared to other methods. As such, our findings point at the significant potential of human-AI collaboration for reconstructing mental images, potentially also beyond faces, and of human gaze as a rich source of information and a powerful mediator in said collaboration.
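To illustrate the general idea of such a gaze-driven iterative query loop, the following is a minimal Python sketch, not the method from the paper: the latent face space, the dwell-time weighting, and the names `reconstruct`, `candidates`, and `get_dwell_times` are all assumptions made for illustration; the paper's actual query engine and gaze-based prediction differ.

```python
import numpy as np

def gaze_weighted_estimate(latents, dwell_times):
    """Dwell-time-weighted average of the latent codes of the shown faces,
    used here as a stand-in estimate of the user's mental image."""
    w = np.asarray(dwell_times, dtype=float)
    w = w / (w.sum() + 1e-8)
    return (w[:, None] * latents).sum(axis=0)

def next_queries(estimate, candidates, n_show=8):
    """Select the candidate faces whose latent codes are closest to the estimate."""
    dists = np.linalg.norm(candidates - estimate, axis=1)
    return np.argsort(dists)[:n_show]

def reconstruct(candidates, get_dwell_times, n_iters=5, n_show=8, rng=None):
    """Iterative loop: show faces, read gaze feedback, refine the estimate.

    `candidates` is an (N, d) array of latent codes of candidate faces;
    `get_dwell_times(indices)` returns one dwell time per shown face
    (e.g. from an eye tracker). Both are illustrative placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    shown = rng.choice(len(candidates), n_show, replace=False)
    estimate = candidates[shown].mean(axis=0)
    for _ in range(n_iters):
        dwell = get_dwell_times(shown)                       # gaze feedback
        estimate = gaze_weighted_estimate(candidates[shown], dwell)
        shown = next_queries(estimate, candidates, n_show)   # next iteration's query
    return estimate  # decode with a face generator to obtain the composite image
```

In this toy version the next query simply consists of the nearest neighbours of the current estimate; the paper instead learns which images to show next from the user's gaze behaviour, which is the core contribution.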
@inproceedings{strohm23_gmml,
title = {Facial Composite Generation with Iterative Human Feedback},
author = {Strohm, Florian and Sood, Ekta and Thomas, Dominike and B{\^a}ce, Mihai and Bulling, Andreas},
booktitle = {Proc. The 1st Gaze Meets ML workshop, PMLR},
pages = {165--183},
year = {2023},
editor = {Lourentzou, Ismini and Wu, Joy and Kashyap, Satyananda and Karargyris, Alexandros and Celi, Leo Anthony and Kawas, Ban and Talathi, Sachin},
volume = {210},
series = {Proceedings of Machine Learning Research},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v210/strohm23a/strohm23a.pdf},
url = {https://proceedings.mlr.press/v210/strohm23a.html}
}