Ga11y improves accessibility of animated GIFs for visually impaired users

Animated GIFs, prevalent on social media, texting platforms, and websites, often lack adequate alt-text descriptions, making them inaccessible to blind or low-vision (BLV) users and stripping meaning, context, and nuance from what those users read. In a paper published in the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’22), a research team led by CREATE Co-director Jacob O. Wobbrock demonstrated Ga11y (pronounced “galley”), a system for creating GIF annotations and improving the accessibility of animated GIFs.

Video describing Ga11y, an Automated GIF Annotation System for Visually Impaired Users. The video frame shows an obscure image and the question, “How would you describe this GIF to someone so they can understand it without seeing it?”

Ga11y combines machine intelligence with crowdsourcing and comprises three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests.

Wobbrock’s co-authors are Mingrui “Ray” Zhang, a Ph.D. candidate in the UW iSchool, and Mingyuan Zhong, a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering.

Part of this work was funded by CREATE.