Datasets


SalChartQA

ACM CHI 2024

Dataset contents: 6,000 question-driven attention maps on 3,000 visualisations, with 74,340 answers from crowd workers
Number of participants: 165

VisRecall++

ACM ETRA 2024

Dataset contents: Gaze data from 40 participants on 200 information visualisations, covering five recallability question types
Number of participants: 40

VisRecall

TVCG (IEEE VIS 2023)

Dataset contents: 200 information visualisations annotated with crowd-sourced human recallability scores obtained from 1,000 questions across five question types
Number of participants: 305

MSCOCO-EMMA and FigureQA-EMMA

CogSci 2023

Dataset contents: Large-scale, cognitively plausible synthetic gaze data for the images in the full MSCOCO and FigureQA datasets
Number of participants: N/A

ConAn

ACM ICMI 2021

Dataset contents: Four showcase videos taken with a 360° camera (Insta360 One X) depicting different interactions
Number of participants: N/A

VQA-MHUG

ACL CoNLL 2021

Dataset contents: Multimodal human gaze data over textual questions and their corresponding images
Number of participants: 49

MovieQA-Reading Comprehension (MQA-RC)

ACL CoNLL 2020

Dataset contents: Question-answer pairs, eye-tracking extension to the MovieQA dataset
Number of participants: 23

Everyday Mobile Visual Attention (EMVA)

ACM CHI 2020

Dataset contents: Video snippets, usage logs, interaction events, sensor data
Number of participants: 32

DEyeAdicContact

ACM ETRA 2020

Dataset contents: Fine-grained eye contact annotations for 74 hours of YouTube videos (videos not included)
Number of participants: N/A

MPIIDPEye: Privacy-Aware Eye Tracking Using Differential Privacy

ACM ETRA 2019

Dataset contents: Eye tracking data, eye movement features, ground truth annotation
Number of participants: 20

PrivacEye: Privacy-Preserving Head-Mounted Eye Tracking Using Egocentric Scene Image and Eye Movement Features

ACM ETRA 2019

Dataset contents: First-person video dataset with data annotations, features and ground truth, video frames and ground truth, and private segment statistics
Number of participants: 17

MPIIMobileAttention: Forecasting User Attention During Everyday Mobile Interactions Using Device-Integrated and Wearable Sensors

ACM MobileHCI 2018

Dataset contents: Everyday mobile phone interactions
Number of participants: 20

MPIIEgoFixation: Fixation Detection for Head-Mounted Eye Tracking Based on Visual Similarity of Gaze Targets

ACM ETRA 2018

Dataset contents: Data files, ground truth files (fixation IDs, start and end frame of corresponding scenes)
Number of participants: 5 (2,300+ fixations in total)

InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation

ACM IMWUT 2017

Dataset contents: 280,000 close-up eye images
Number of participants: 17

MPIIFaceGaze: It’s Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation

IEEE TPAMI 2019, IEEE CVPRW 2017

Dataset contents: MPIIGaze dataset augmented with human facial landmark annotation, pupil centers, face regions
Number of participants: N/A

Labeled pupils in the wild (LPW): A dataset for studying pupil detection in unconstrained environments

ACM ETRA 2016

Dataset contents: Eye region videos (95 fps, head-mounted eye tracker)
Number of participants: 22

3DGazeSim: 3D Gaze Estimation from 2D Pupil Positions on Monocular Head-Mounted Eye Trackers

ACM ETRA 2016

Dataset contents: 7+ hours of eye tracking data, 10 recordings per participant (2 recordings at each of 5 different depths)
Number of participants: 14

MPIIEmo

ACM ACII 2015

Dataset contents: 224 sequences, 8 viewpoints per sequence, 1792 video files
Number of participants: 16

Discovery of Everyday Human Activities From Long-term Visual Behaviour Using Topic Models

ACM UbiComp 2015

Dataset contents: 80+ hours of eye tracking data, ground truth annotations
Number of participants: 10

MPIIGaze: Appearance-Based Gaze Estimation in the Wild

IEEE TPAMI 2019, IEEE CVPR 2015

Dataset contents: 213,659 images
Number of participants: 15

Prediction of Search Targets From Fixations in Open-World Settings

IEEE CVPR 2015

Dataset contents: Fixation data
Number of participants: 18

Recognition of Visual Memory Recall Processes Using Eye Movement Analysis

ACM UbiComp 2011

Dataset contents: 7 hours of electrooculography (EOG) data, ground truth annotations
Number of participants: 7

Eye Movement Analysis for Activity Recognition Using Electrooculography

IEEE TPAMI 2011, ACM UbiComp 2009

Dataset contents: 8 hours of electrooculography (EOG) data, 2 experimental runs per participant, full ground truth annotations
Number of participants: 8

Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography

ACM TAP 2012, IEEE Pervasive 2008

Dataset contents: 6 hours of electrooculography (EOG) data, 4 experimental runs per participant, full ground truth annotations
Number of participants: 8