August 3, 2022
Animated GIFs, prevalent in social media, texting platforms, and websites, often lack adequate alt-text descriptions, leaving them inaccessible to blind or low-vision (BLV) users, who lose the meaning, context, and nuance these GIFs convey. In an article published in the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’22), a research team led by CREATE Co-director Jacob O. Wobbrock demonstrates a system called Ga11y (pronounced “galley”) for creating annotations that improve the accessibility of animated GIFs.
Ga11y combines machine intelligence with human crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests.
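As a rough illustration only, the sketch below shows how such a request pipeline might fit together: a submitted GIF is first handled by an automated pass, and falls back to the volunteer queue when no usable description is found. All names and the exact ordering are assumptions for illustration, not Ga11y’s actual implementation, which is described in the CHI ’22 paper.

```python
# Hypothetical sketch of an annotation-request flow (names are illustrative,
# not Ga11y's actual API).
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotationRequest:
    gif_url: str                    # GIF a BLV user wants described
    alt_text: Optional[str] = None  # filled in once an annotation exists
    status: str = "pending"         # pending -> machine_annotated / queued

def try_machine_description(gif_url: str) -> Optional[str]:
    """Placeholder for a machine-intelligence pass (e.g., reusing an
    existing annotation or generating one automatically)."""
    return None  # assume no usable description is found in this sketch

def queue_for_volunteers(request: AnnotationRequest) -> None:
    """Placeholder: hand the request off to the volunteer web interface."""
    request.status = "queued"

def handle_request(gif_url: str) -> AnnotationRequest:
    """Server-side flow: automated pass first, crowdsourcing as fallback."""
    request = AnnotationRequest(gif_url=gif_url)
    description = try_machine_description(gif_url)
    if description:
        request.alt_text = description
        request.status = "machine_annotated"
    else:
        queue_for_volunteers(request)
    return request

if __name__ == "__main__":
    print(handle_request("https://example.com/reaction.gif"))
```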
Wobbrock’s co-authors are Mingrui “Ray” Zhang, a Ph.D. candidate in the UW iSchool, and Mingyuan Zhong, a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering.
Part of this work was funded by CREATE.