Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations

Minsuk Chang, Yao Wang, Huichen Will Wang, Andreas Bulling, Cindy Xiong Bearfield

Proc. 27th Annual Conference on Data Visualization (EuroVis), pp. 1–6, 2025.


Abstract

Knowing where people look in visualizations is key to effective design. Yet, existing research primarily focuses on task-agnostic saliency models, although visual attention is inherently task-dependent. Collecting task-relevant importance data remains a resource-intensive challenge. To address this, we introduce Grid Labeling, a novel annotation method for collecting task-specific importance data to enhance saliency prediction models. Grid Labeling dynamically segments visualizations into Adaptive Grids, enabling efficient, low-effort annotation that adapts to visualization structure. We conducted a human-subject study comparing Grid Labeling with two existing annotation methods, ImportAnnots and BubbleView, across multiple metrics. Results show that Grid Labeling produces the least noisy data and the highest inter-participant agreement with fewer participants, while requiring less physical (e.g., clicks or mouse movements) and cognitive effort.
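The core idea behind Adaptive Grids, segmenting a visualization more finely where it carries more visual detail, can be illustrated with a quadtree-style sketch. This is a hypothetical illustration only: the paper's actual segmentation algorithm is not reproduced here, and the variance-based split criterion, `min_size`, and `var_threshold` parameters are all assumptions for the sake of example.

```python
import numpy as np

def adaptive_grid(img, min_size=8, var_threshold=0.01):
    """Recursively split a grayscale image into grid cells, subdividing
    only where pixel variance is high (quadtree-style). Returns a list
    of (row, col, height, width) cells that exactly tile the image.
    Illustrative sketch, not the authors' implementation."""
    cells = []

    def split(r, c, h, w):
        region = img[r:r + h, c:c + w]
        # Stop splitting when the cell is near-uniform or too small.
        if region.var() <= var_threshold or min(h, w) < 2 * min_size:
            cells.append((r, c, h, w))
            return
        h2, w2 = h // 2, w // 2
        split(r, c, h2, w2)                      # top-left
        split(r, c + w2, h2, w - w2)             # top-right
        split(r + h2, c, h - h2, w2)             # bottom-left
        split(r + h2, c + w2, h - h2, w - w2)    # bottom-right

    split(0, 0, img.shape[0], img.shape[1])
    return cells

# Example: a synthetic "visualization" with a blank left half and a
# detailed right half; the grid stays coarse on the left and refines
# on the right.
img = np.zeros((64, 64))
img[:, 32:] = np.random.default_rng(0).random((64, 32))
cells = adaptive_grid(img)
```

Each resulting cell could then serve as one annotation unit, so participants rate a handful of regions instead of painting pixel-level importance maps.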

Links


BibTeX

@inproceedings{chang25_eurovis,
  title     = {Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations},
  author    = {Chang, Minsuk and Wang, Yao and Wang, Huichen Will and Bulling, Andreas and Bearfield, Cindy Xiong},
  year      = {2025},
  booktitle = {Proc. 27th Annual Conference on Data Visualization (EuroVis)},
  pages     = {1--6},
  url       = {https://arxiv.org/abs/2502.13902}
}