EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour
Andreas Bulling, Christian Weichel, Hans Gellersen
Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 305-308, 2013.
Abstract
In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record the eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluated a proof-of-concept EyeContext system that combines encoding of eye movements into strings with a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the large information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
Links
Paper: bulling13_chi.pdf
BibTeX
@inproceedings{bulling13_chi,
  author    = {Bulling, Andreas and Weichel, Christian and Gellersen, Hans},
  title     = {EyeContext: Recognition of High-level Contextual Cues from Human Visual Behaviour},
  booktitle = {Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)},
  year      = {2013},
  pages     = {305--308},
  doi       = {10.1145/2470654.2470697},
  video     = {https://www.youtube.com/watch?v=bhdVmWnnnIM}
}
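Method sketch
The abstract describes a pipeline that encodes eye movements into character strings and classifies them with a spectrum string kernel SVM. The Python sketch below illustrates one plausible realisation of that idea; it is not the paper's implementation. The four-character direction alphabet, the amplitude threshold, the k-mer length, and the names encode_saccades and spectrum_kernel are all illustrative assumptions.

# Hypothetical sketch of an EyeContext-style pipeline: saccade strings
# plus a k-spectrum string kernel SVM (scikit-learn). Parameters and
# names are illustrative, not taken from the paper.
from collections import Counter

import numpy as np
from sklearn.svm import SVC


def encode_saccades(gaze_xy, min_amplitude=1.0):
    """Encode consecutive gaze samples as saccade direction characters:
    L/R for dominant horizontal motion, U/D for dominant vertical motion."""
    chars = []
    for (x0, y0), (x1, y1) in zip(gaze_xy, gaze_xy[1:]):
        dx, dy = x1 - x0, y1 - y0
        if max(abs(dx), abs(dy)) < min_amplitude:
            continue  # small movement: treat as fixation, emit no character
        if abs(dx) >= abs(dy):
            chars.append('R' if dx > 0 else 'L')
        else:
            chars.append('D' if dy > 0 else 'U')
    return ''.join(chars)


def spectrum_kernel(strings_a, strings_b, k=2):
    """Gram matrix of the k-spectrum kernel: entry (i, j) is the dot
    product of the k-mer count vectors of strings_a[i] and strings_b[j]."""
    def kmer_counts(s):
        return Counter(s[i:i + k] for i in range(len(s) - k + 1))

    counts_b = [kmer_counts(s) for s in strings_b]
    gram = np.zeros((len(strings_a), len(strings_b)))
    for i, sa in enumerate(strings_a):
        ca = kmer_counts(sa)
        for j, cb in enumerate(counts_b):
            gram[i, j] = sum(v * cb[kmer] for kmer, v in ca.items())
    return gram


if __name__ == '__main__':
    # Toy gaze trace: right, down, left, up -> prints 'RDLU'
    gaze = [(0, 0), (5, 0), (5, 6), (0, 6), (0, 0)]
    print(encode_saccades(gaze))

    # Toy labelled strings standing in for encoded recording windows,
    # e.g. for one binary cue such as social interaction vs. none.
    train_strings = ['RLRLRLUD', 'RLRLUDUD', 'UDUDUDLR', 'UDUDLRLR']
    train_labels = [0, 0, 1, 1]
    test_strings = ['RLRLRLRL', 'UDUDUDUD']

    gram_train = spectrum_kernel(train_strings, train_strings)
    gram_test = spectrum_kernel(test_strings, train_strings)

    # SVM on the precomputed string kernel; predict takes the test-vs-train Gram matrix.
    clf = SVC(kernel='precomputed').fit(gram_train, train_labels)
    print(clf.predict(gram_test))  # expected: [0 1]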