Hard Mode: Accessibility, Difficulty and Joy for Gamers With Disabilities

Video games often pose accessibility barriers to gamers with disabilities, but there is no standard method for identifying which games have barriers, what those barriers are, and whether and how they can be overcome. CREATE and Allen School Ph.D. student Jesse Martinez has been working to understand the strategies and resources gamers with disabilities regularly use when trying to identify a game to play, and the challenges they face in this process, with the hope of advising the games industry on how to better support disabled members of its audience.

Martinez, with CREATE associate directors James Fogarty and Jon Froehlich as project advisors and co-authors, published the team’s findings at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2024).

Martinez will present the paper, Playing on Hard Mode: Accessibility, Difficulty and Joy in Video Game Adoption for Gamers With Disabilities, virtually at the hybrid conference, and will present it in person at UW DUB’s upcoming para.chi.dub event.

Martinez’s passion for this work came from personal experience: as someone who loves playing all kinds of games, he has spent lots of time designing new ways to play games to make them accessible for himself and other friends with disabilities. He also has experience working independently as a game and puzzle designer and has consulted on accessibility for tabletop gaming studio Exploding Kittens, giving him a unique perspective on how game designers create games and how disabled gamers hack them.

First, understand the game adoption process

The work focuses on the process of “game adoption,” which spans learning that a new game exists (game discovery), learning more about it to see whether it’s a good fit for one’s tastes and access needs (game evaluation), and getting set up with the game and making any modifications necessary to improve the overall experience (game adaptation). As Martinez notes in the paper, gamers with disabilities already do work to make gaming more accessible, so it’s very important not to overlook this work when designing new solutions.

To explore this topic, Martinez interviewed 13 people with a range of disabilities and very different sets of access needs. In the interviews, they discussed what each person’s unique game adoption process looked like, where they encountered challenges, and how they would want to see things change to better support their process.

Graphic from the research paper showing the progression from Discovery (finding a game to play), to Evaluation (assessing a game's fit), to Adaptation (getting set up with a game).

Game discovery

In discussing game discovery, the team found that social relationships and online disabled gaming communities were the most valuable resources for learning about new games. Game announcements rarely come with promises of accessibility, but if a friend suggested a game, it often meant the friend had already considered whether the game had a chance of being accessible. Participants also mentioned that since there is no equivalent of a games store for accessible games, it was sometimes hard to learn about new games. In his recommendations, Martinez suggests that game distributors like Steam and Xbox work to support this type of casual browsing of accessible games.

Game evaluation

In discussing game evaluation, the team found that community-created game videos on platforms like YouTube and Twitch were useful for making accessibility judgments. Interestingly, the videos didn’t need to be accessibility-focused, since just seeing how the game worked was useful information. One participant in the study highlights games’ accessibility options menus in their own Twitch streams and asks other streamers to do likewise, since this information can be tricky to find online.

Game adaptation

Martinez and team discovered many different approaches people took to make a game accessible to them, starting with enabling accessibility features like captions or getting the game to work with their screen reader. Some participants designed their own special tools to make the system work, such as a 3D-printed wrist mount for a gaming mouse. Participants shared that difficulty levels in a game are very important accessibility resources, especially when inaccessibility in the game already made things harder.

The important thing is that players be allowed to choose what challenges they want to face, rather than being forced to play on “hard mode” if they don’t want to.

Other participants discussed how they change their own playstyle to make the game accessible, such as playing as a character who fights with a ranged weapon or who can teleport across parts of the game world. Others went even further, creating their own new objectives in the game that better suited what they wanted from their experience. This included ignoring the competitive part of the racing game Mario Kart to just casually enjoy driving around its intricate worlds, and participating in a friendly roleplaying community in GTA V where they didn’t have to worry about the game’s fast-paced missions and inaccessible challenges.

Overcoming inaccessible games

Martinez uses all this context to introduce two concepts to the world of human-computer interaction and accessibility research: “access difficulty” and “disabled gaming.”

“Access difficulty” is how the authors describe the challenges created in a game specifically due to inaccessibility, which are different from the challenges a game designer intentionally creates to make the game harder. The authors emphasize that the important thing is that players be allowed to choose what challenges they actually want to face, rather than being forced to play on “hard mode” compared to nondisabled players.

“Disabled gaming” acknowledges the particular way gamers with disabilities play games, which is often very different from how nondisabled people play games. Disabled gaming is about taking the game you’re presented and turning it into something fun however you can, regardless of whether that’s what the game designer expects or wants you to do.

Martinez and his co-authors are very excited to share this work with the CREATE community and the world, and they encourage anyone interested in participating in a future study of disabled gaming to join the #study-recruitment channel. If you’re not on CREATE’s Slack, request to join.

New Book: Teaching Accessible Computing

March 14, 2024

A new, free, and community-sourced online book helps computer science educators integrate accessibility topics into their classes. Teaching Accessible Computing provides the foundations of accessibility relevant to computer science teaching and then presents teaching methods for integrating those topics into course designs.

From the first page of the book, a line drawing of a person hunched over a laptop with their face close to the screen which is populated by large, unreadable characters.

The editors are Alannah Oleson, a postdoctoral scholar and co-founder of the UW Center for Learning, Computing, and Imagination (LCI); CREATE and iSchool faculty member Amy J. Ko; and Richard Ladner, CREATE Director of Education Emeritus. You may recognize many CREATE faculty members’ research referenced throughout the guide. CREATE Director Jennifer Mankoff and CREATE Ph.D. student Kelly Avery Mack contributed a foundational chapter that advocates for teaching inclusively in addition to teaching about accessibility.

Letting the book speak for itself

“… we’ve designed this book as a free, open, living, web-first document. It’s free thanks to a National Science Foundation grant (NSF No. 2137312) that has funded our time to edit and publish the book. It’s open in that you can see and comment on the book at any time, creating community around its content. It’s living in that we expect it to regularly change and evolve as the community of people integrating accessibility into their CS courses grows and evolves. And it’s web-first in that the book is designed first and foremost as an accessible website to be read on desktops, laptops, and mobile devices, rather than as a print book or PDF. This ensures that everyone can read it, but also that it can be easily changed and updated as our understandings of how to teach accessibility in CS evolve.”

Introduction by Alannah Oleson, Amy J. Ko, Richard Ladner

“To write these chapters, we recruited some of the world’s experts on accessible computing and teaching accessible computing, giving them a platform to share both their content knowledge about how accessibility intersects with specific CS topics, but also their pedagogical content knowledge about how to teach those intersections in CS courses.”

Introduction by Alannah Oleson, Amy J. Ko, Richard Ladner

Alice Wong and Patty Berne: Two UW lectures moderated by CREATE researchers

January 29, 2024

Winter 2024 quarter kicked off with two outstanding conversations with women of color who are leaders in disability justice.

Alice Wong: Raising the visibility of disabled people

First, Alice Wong discussed topics important to her work in raising the visibility of disabled people. Wong’s book Year of the Tiger: An Activist’s Life was the topic of the Autumn 2023 CREATE Accessibility Seminar.

CREATE Director Jennifer Mankoff started the conversation asking Wong about her experience as a disabled person in academia and what needs to change. Wong said her work in disability justice was inspired in part by the “incredible amount of emotion and physical labor to ask for equal access” in academic settings. She had to spend precious time, money and energy to gain the accommodations and access she needed to succeed. But she realized that as soon as she transitioned out, her efforts would be lost and the next student would have to start over to prove their need and request a new set of accommodations. Wong was doubtful that large academic institutions can support the goal of collective liberation. It’s the “dog-eat-dog world [of] academia where the competition is stiff and everyone is pushed to their limits to produce and be valuable.” She encouraged instructors to incorporate books about disability justice in their syllabi (see the reading list below). 

Wong, who spoke with a text-to-voice tool and added emphasis with her facial expressions on the screen, also addressed the value and the limitations of assistive technology. She noted that the text-to-speech app she uses does not convey her personality. She also discussed how ableism appears in activist discourse.

One of her examples was a debate over gig economy delivery services, which are enormously important for many people with disabilities and yet under-compensate delivery work. She noted that blaming disabled people for undermining efforts for better wages was not the solution; collective efforts to make corporations compensate workers are the solution. She also explained that hashtag activism, which has been disparaged in popular discourse, is a crucial method for disabled people to participate in social justice activism. And she discussed her outrage when, as she prepared to give a talk to a public health school, her own access needs were used to censor her. Throughout her talk, Wong returned again and again to the principles of disability justice, and encouraged attendees to engage in collective forms of change.

Wong’s responses embodied a key component of disability justice principles: citational practices that name fellow contributors to collective disability justice wisdom. Her long list of recommended reading for the audience inspired us to build our new RDT reading list. Wong referenced Patty Berne several times, calling Berne her introduction to disability justice.

Patty Berne on disability justice: Centering intersectionality and liberation

A week later, two CREATE Ph.D. students, Aashaka Desai and Aaleyah Lewis, moderated a conversation with Patty Berne. Berne, who identifies as a Japanese-Haitian queer disabled woman, co-founded Sins Invalid, a disability justice-based arts project focusing on disabled artists of color and queer and gender non-conforming artists with disabilities. Berne defined disability justice as advocating for each other, understanding access needs, and normalizing those needs. On the topic of climate justice, she noted that state-sponsored disaster planning often overlooks the needs of people with motor impairments or life-sustaining medical equipment. This is where intersectional communities do, and should, take care of each other when disaster strikes.

Berne addressed language justice within the disability community, noting that “we don’t ‘language’ like able-bodied people.” For example, the use of ventilators and augmented speech technology changes the cadence of speech. Berne wants to normalize access needs for a more inclusive experience of everyday life. Watch the full conversation on YouTube.

ARTennis attempts to help low vision players

December 16, 2023

People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich‘s Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.

The team includes Jaewook Lee (Ph.D. student, UW CSE), Devesh P. Sarda (MS/Ph.D. student, University of Wisconsin), Eujean Lee (Research Assistant, UW Makeability Lab), Amy Seunghyun Lee (BS student, UC Davis), Jun Wang (BS student, UW CSE), Adrian Rodriguez (Ph.D. student, UW HCDE), and Jon Froehlich.

Their paper, Towards Real-time Computer Vision and Augmented Reality to Support Low Vision Sports: A Demonstration of ARTennis, was published at the 2023 ACM Symposium on User Interface Software and Technology (UIST).

ARTennis is their prototype system for tracking and enhancing the visual saliency of tennis balls from a first-person point of view (POV). Recent advances in deep learning have produced models like TrackNet, a neural network capable of tracking tennis balls in third-person recordings of tennis matches, which has been used to improve sports viewing for LV people. To enhance playability, the team first built a dataset of first-person POV images by having the authors wear an AR headset while playing tennis. They then streamed video from a pair of AR glasses to a back-end server, analyzed the frames using a custom-trained deep learning model, and sent the results back for real-time overlaid visualization.
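The paper describes the team’s actual pipeline; purely as an illustration of this kind of client-server loop, the sketch below streams camera frames to a hypothetical detection endpoint and overlays a crosshair at the returned ball position. The server URL, response format, and drawing details are assumptions, not the ARTennis implementation.

# Illustrative sketch only: stream frames to a detection server and overlay
# the returned ball position. The endpoint and JSON schema are hypothetical.
import cv2
import requests

SERVER_URL = "http://localhost:8000/detect"  # hypothetical back-end server

def overlay_crosshair(frame, x, y, radius=20):
    """Draw a high-contrast circle and crosshair around the detected ball."""
    cv2.circle(frame, (x, y), radius, (0, 255, 0), 3)
    cv2.line(frame, (x - radius, y), (x + radius, y), (0, 255, 0), 2)
    cv2.line(frame, (x, y - radius), (x, y + radius), (0, 255, 0), 2)
    return frame

def stream_and_annotate(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Send the encoded frame to the server, which runs the ball-tracking model.
        _, jpeg = cv2.imencode(".jpg", frame)
        resp = requests.post(SERVER_URL, files={"frame": jpeg.tobytes()}, timeout=1.0)
        result = resp.json()  # assumed shape: {"found": bool, "x": int, "y": int}
        if result.get("found"):
            frame = overlay_crosshair(frame, result["x"], result["y"])
        cv2.imshow("ball overlay (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    stream_and_annotate()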

After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.

Early evaluations have provided feedback that the prototype could help LV people enjoy ball-based sports, but there’s plenty of further work to be done. A larger field of view (FOV) and audio cues would improve a player’s ability to track the ball. The weight, bulk, and expense of the headset are also factors the team expects to improve with time, as Lee noted in an interview on Oregon Public Broadcasting.

“Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball.”

Jaewook Lee, Ph.D. student and lead author

Ph.D. student Jaewook Lee presents a research poster, Makeability Lab Demos - GazePointAR & ARTennis.

Off to the Park: A Geospatial Investigation of Adapted Ride-on Car Usage

November 7, 2023

Adapted ride-on cars (ROCs) are an affordable power mobility training tool for young children with disabilities. But weather and a lack of adequate drive space can create barriers to families’ adoption of their ROC.

CREATE Ph.D. student Mia E. Hoffman is the lead author on a paper that investigates the relationship between the built environment and ROC usage.

Mia Hoffman smiling into the sun. She has long, blonde hair. Behind her is part of the UW campus with trees and brick buildings.

With her co-advisors Kat Steele and Heather A. Feldner, Jon E. Froehlich (all three CREATE associate directors), and Kyle N. Winfree as co-authors, Hoffman found that play sessions took place more often within the participants’ homes. But when the ROC was used outside, children engaged in longer play sessions, actively drove for a larger portion of the session, and covered greater distances.

Accessibility scores for the sidewalks near a participant’s home on the left and the drive path of the participant on the right. Participant generally avoided streets that were not accessible.

Most notably, they found that children drove more in pedestrian-friendly neighborhoods and when in proximity to accessible paths, demonstrating that providing an accessible place for a child to move, play, and explore is critical in helping a child and family adopt the mobility device into their daily life.

UW News: Can AI help boost accessibility? CREATE researchers tested it for themselves

November 2, 2023 | UW News

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.

This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

Four AI-generated images show different interpretations of a doll-sized “crocheted lavender husky wearing ski goggles,” including two pictured outdoors and one against a white background.

The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”

The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.

Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

“When technology changes rapidly, there’s always a risk that disabled people get left behind.”

Jennifer Mankoff, CREATE Director, professor in the Allen School

Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.

The results of the other tests researchers selected were equally mixed:

  • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
  • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
  • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”

The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.”  The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”

Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”

Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.


For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.


This article was adapted from the UW News article by Stefan Milne.

UW News: A11yBoard accessible presentation software

October 30, 2023 | UW News

A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
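As a toy illustration (not A11yBoard’s code), the snippet below shows how the same slide objects produce very different reading orders when traversed by Z-order versus by their position on the slide.

# Toy illustration (not from A11yBoard): reading order by Z-order vs. by
# spatial position for a few slide objects.
slide_objects = [
    # (name, z_order, x, y) -- z_order reflects when each object was added
    ("footer text", 0, 50, 500),
    ("title", 1, 50, 20),
    ("photo", 2, 400, 200),
    ("caption", 3, 400, 350),
    ("body text", 4, 50, 120),
]

# What a screen reader following Z-order announces:
z_order_reading = [name for name, z, x, y in sorted(slide_objects, key=lambda o: o[1])]

# A spatially informed order (top to bottom, then left to right):
spatial_reading = [name for name, z, x, y in sorted(slide_objects, key=lambda o: (o[3], o[2]))]

print(z_order_reading)   # ['footer text', 'title', 'photo', 'caption', 'body text']
print(spatial_reading)   # ['title', 'body text', 'photo', 'caption', 'footer text']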

Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.

“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.

For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


This article was adapted from the UW News article by Stefan Milne.

Augmented Reality to Support Accessibility

October 25, 2023

RASSAR – Room Accessibility and Safety Scan in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR.
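The paper details how RASSAR actually detects issues; the sketch below is only a simplified illustration of the general idea of rule-based checks over detected objects. The object categories and threshold values are made up for illustration and are not RASSAR’s rules or values from any accessibility guideline.

# Simplified illustration of rule-based checks over detected objects. The
# categories and thresholds are placeholders, not RASSAR's actual rules.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectedObject:
    label: str       # object class predicted by the vision model
    height_m: float  # estimated height above the floor, in meters

MAX_REACH_HEIGHT = 1.2    # hypothetical reachable-height limit for controls
MIN_HEAD_CLEARANCE = 2.0  # hypothetical clearance for overhead objects

def check_object(obj: DetectedObject) -> Optional[str]:
    """Return a description of a potential issue, or None if nothing is flagged."""
    if obj.label in {"light switch", "thermostat"} and obj.height_m > MAX_REACH_HEIGHT:
        return f"{obj.label} at {obj.height_m:.2f} m may be out of reach"
    if obj.label == "hanging lamp" and obj.height_m < MIN_HEAD_CLEARANCE:
        return f"{obj.label} at {obj.height_m:.2f} m may be a head-height hazard"
    return None

# A small fake "scene" standing in for the output of a real-time 3D scan.
scene = [
    DetectedObject("light switch", 1.4),
    DetectedObject("thermostat", 1.1),
    DetectedObject("hanging lamp", 1.8),
]

for obj in scene:
    issue = check_object(obj)
    if issue:
        print("Potential issue:", issue)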

RASSAR researchers envision the tool as an aid in building and validating new construction, planning renovations, updating homes for health concerns, or conducting telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:



CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.

Illustration of a user holding a smartphone using the RASSAR prototype app to scan the room for accessibility issues.

CREATE Open Source Projects Awarded at Web4All

July 6, 2023

CREATE researchers shone this spring at the 2023 Web4All conference that, in part, seeks to “make the internet more accessible to the more than one billion people who struggle to interact with digital content each day due to neurodivergence, disability or other impairments.” Two CREATE-funded open source projects won accolades.

Best Technical Paper award:
Understanding and Improving Drilled-Down Information Extraction from Online Data Visualizations for Screen-Reader Users

Authors: Ather Sharif, Andrew Mingwei Zhang, CREATE faculty member Katharina Reinecke, and CREATE Associate Director Jacob O. Wobbrock

Building on prior research that developed taxonomies of the information screen-reader users seek when interacting with online data visualizations, the team used these taxonomies to extend the functionality of VoxLens—an open-source multi-modal system that improves the accessibility of data visualizations—by supporting drilled-down information extraction. They assessed the performance of their VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Their enhancements “closed the gap” between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, reducing interaction time by 22% in the process.

Accessibility Challenge Delegates’ Award:
UnlockedMaps: A Web-Based Map for Visualizing the Real-Time Accessibility of Urban Rail Transit Stations

Authors: Ather Sharif, Aneesha Ramesh, Qianqian Yu, Trung-Anh H. Nguyen, and Xuhai Xu

Ather Sharif’s work on another project, UnlockedMaps, was honored with the Accessibility Challenge Delegates’ Award. The paper details a web-based map that allows users to see in real time how accessible rail transit stations are in six North American cities and regions, including Seattle, Toronto, New York and the Bay Area. UnlockedMaps shows whether stations are accessible and whether they are currently experiencing elevator outages. The work includes a public website that enables users to make informed decisions about their commutes, and an open source API that developers, disability advocates, and policy makers can use for a variety of purposes, including shedding light on the frequency of elevator outages and their repair times to identify disparities between neighborhoods in a given city.

Read more

Welcome Mark Harniss, CREATE’s New Director for Education

June 13, 2023

CREATE is thrilled to have Mark Harniss as our new Director for Education. Harniss is an associate professor in the Department of Rehabilitation Medicine and director of the University Center for Excellence in Developmental Disabilities (UCEDD) and the Center for Technology and Disability. Until recently, he was the director of the Disability Studies Program but stepped down at the end of the 2023 academic year.

Mark Harniss, a white man in his 50s with short brown hair and blue eyes, wearing a dark polo shirt in front of fall-colored leaves.

Harniss’ professional background lies in special education and instructional technology, but his current focus revolves around knowledge translation, assistive technology, accessible information technology (IT), and disability law and policy.

In his role as CREATE Director for Education, Mark aims to foster collaboration and cooperation between UW “upper and lower campus,” particularly by forging connections between CREATE, the Disability Studies Program, the Institute on Human Development and Disability (IHDD), and the Department of Rehabilitation Medicine. Additionally, he intends to expand CREATE’s reach by establishing links with important external communities, ensuring that the innovations generated within CREATE are available to these communities. In turn, he envisions that these communities will provide valuable insights to CREATE researchers regarding their specific needs.

Deep Gratitude to Wobbrock, Ladner & Caspi

June 13, 2023

The CREATE community thanks three of our founding leaders for their energy and service in launching the center as we embark upon some transitions. “CREATE would not be where it is today without the vision, passion, and commitment that Jake, Richard, and Anat brought to their work leading the center,” says CREATE Director Jennifer Mankoff.

Co-Director Jacob O. Wobbrock: From vision, to launch, to sustainable leadership

Jacob O. Wobbrock, a 40-something white man with short hair, a beard, and glasses. He is smiling in front of a white board.

It was back in June 2019 that Jacob O. Wobbrock, CREATE’s founding Co-Director, was on a panel discussion at Microsoft’s IdeaGen 2030 event, where he talked about ability-based design. Also on that panel was future CREATE Associate Director Kat Steele. After the event, the two talked with Microsoft Research colleagues, particularly Dr. Meredith Ringel Morris, about the possibility of founding an accessible technology research center at the University of Washington.

Wobbrock and Steele thought that a center could bring faculty together and make them more than the sum of their parts. Within a few months, Wobbrock returned to Microsoft with Jennifer Mankoff, Richard Ladner, and Anat Caspi to pitch Microsoft’s Chief Accessibility Officer, Jenny Lay-Flurrie, on the idea of supporting the new Center for Research and Education on Accessible Technology and Experiences (CREATE). With additional support from Microsoft President Brad Smith, and input from Morris, the center was launched by Smith and UW President Ana Marie Cauce at Microsoft’s Ability Summit in Spring 2020.

Wobbrock, along with Mankoff, served as CREATE’s inaugural co-directors until June 2023, when Wobbrock stepped down into an associate director role, with Mankoff leading CREATE as sole Director. “I’m a founder by nature,” Wobbrock said. “I helped start DUB, the MHCI+D degree, a startup called AnswerDash, and then CREATE. I really enjoy establishing new organizations and seeing them take flight. Now that CREATE is soaring, it’s time for more capable hands than mine to pilot the plane. Jennifer Mankoff is one of the best, most capable, energetic, and visionary leaders I know. She will take CREATE into its next chapter and I can’t wait to see what she does.” Wobbrock will still be very active with the center.

Professor Emeritus Richard Ladner, one of CREATE’s founders and our inaugural Education Director

Headshot of Richard Ladner. He has grey hair and beard and is wearing a blue shirt and colorful tie.

We thank Professor Emeritus Richard Ladner for three years of leadership as one of our founders and CREATE’s inaugural Education Director. Ladner initiated the CREATE Student Minigrant Program, which helps fund small grants of up to $2,000 in support of student-initiated research projects.

Ladner has shepherded 10 minigrants and worked directly with eight Teach Access Study Away students. Through his AccessComputing program, he helped fund several summer research internships for undergraduate students working with CREATE faculty. All CREATE faculty contribute to accessibility-related education in their courses, an effort he has consistently encouraged.

Anat Caspi, inaugural Director of Translation

Anat Caspi: A white woman smiling into the camera. She is wearing a purple blouse.

Anat Caspi defined and elevated CREATE’s translation efforts, leveraging the center’s relationships with partners in industry, disability communities, and academia. Her leadership created sustainable models for translation and built on our prior successes. Collaborations with the Taskar Center, HuskyADAPT, and the UW Disability Studies Program have ensured that diverse voices inform innovation.

Director of Translation duties will be distributed across Mankoff, CREATE’s Community Engagement and Partnerships Manager Kathleen Quin Voss, and the Taskar Center for Accessible Technology, which Caspi directs.

CREATE’s Newest Ph.D. Graduates

June 9, 2023

We’re proud to see these talented, passionate students receive their Ph.D.s and excited to see how they continue their work in accessibility.

Alyssa Spomer, Ph.D. Mechanical Engineering

Dissertation: Evaluating multimodal biofeedback to target and improve motor control in cerebral palsy

Advisor: Kat Steele


Current: Clinical Scientist at Gillette Children’s Hospital, leading research in the Gillette Rehabilitation Department to improve healthcare outcomes for children with complex movement conditions.

Elijah Kuska, Ph.D. Mechanical Engineering

Elijah Kuska smiling with a sunset in the background

Dissertation: In Silico Techniques to Improve Understanding of Gait in Cerebral Palsy

Advisor: Kat Steele


Plans: Elijah will start as an assistant professor at the Colorado School of Mines in the Mechanical Engineering Department in January 2024.

Megan Ebers, Ph.D. Mechanical Engineering

Headshot of Megan Ebers, a young woman with dark wavy hair, smiling broadly.

Dissertation: Machine learning for dynamical models of human movement

Advisors: Kat Steele and Nathan Kutz

Awards, honors and articles:

  • Dual Ph.D.s in Mechanical Engineering and Applied Math
  • NSF Graduate Research Fellowship

Plans: Megan will join the UW AI Institute as a postdoc in Spring of 2023 to pursue clinical translation of her methods to evaluate digital biomarkers to support health and function from wearable data. 

Nicole Zaino, Ph.D. Mechanical Engineering

Headshot of Nicole Zaino, a young woman with wavy brown hair and teal eyeglasses.

Dissertation: Walking and rolling: Evaluating technology to support multimodal mobility for individuals with disabilities

Advisors: Kat Steele and Heather Feldner

Awards, honors and articles: 

  • National Science Foundation Graduate Research Fellow, 2018 – Present
  • Gatzert Child Welfare Fellowship, University of Washington, 2022
  • Best Paper Award at the European Society of Movement Analysis for Adults and Children, 2019.
  • Finalist, International Society of Biomechanics David Winter Young Investigator Award, 2019

Plans: Nicole is headed to Bozeman, Montana, to join the Crosscut Elite Training team and work toward joining the national Paralympic Nordic ski team for Milano-Cortina 2026, while working part-time with academia and industry partners.

Ricky Zhang

Headshot of Ricky Zhang, a young man with short hair, wearing black frame glasses and a gray business suit.

Dissertation: Pedestrian Path Network Mapping and Assessment with Scalable Machine Learning Approaches

Advisors: Anat Caspi and Linda Shapiro

Plans: Ricky will be a postdoc in Bill Howe’s lab at the University of Washington.


Kat Steele, who has been busy advising four out of five of these new Ph.D.s, noted, “We have an amazing crew of graduate students continuing and expanding upon much of this work. We’re excited for new collaborations and translating these methods into the clinic and community.”

CREATE Ph.D. Student Emma McDonnell Wins Dennis Lang Award

June 6, 2023

Congratulations to Emma McDonnell on receiving a Dennis Lang Award from the UW Disability Studies program! McDonnell, a fourth year Ph.D. candidate in Human Centered Design & Engineering, is advised by CREATE associate director Leah Findlater.

Emma McDonnell, a white woman in her 20s with short red hair, freckles, and a warm smile. In the background: a lush landscape and the Colosseum.

McDonnell’s research focuses on accessible communication technologies and explores how these tools could be designed to engage non-disabled people in making their communication approaches more accessible. She has studied how real-time captioning is used during videoconferencing and her current work is exploring how people caption their TikTok videos. 

The Dennis Lang Award recognizes undergraduate or graduate students across the UW who demonstrate academic excellence in disability studies and a commitment to social justice issues as they relate to people with disabilities.

This article is excerpted from Human Centered Design & Engineering news.

Jacob O. Wobbrock awarded Ten-Year Technical Impact Award

January 5, 2023

The Association for Computing Machinery (ACM) has honored CREATE Co-Director Jacob O. Wobbrock and colleagues with a 10-year lasting impact award for their groundbreaking work improving how computers recognize stroke gestures.

Jacob O. Wobbrock, a 40-something white man with short hair, a beard, and glasses. He is smiling in front of a white board.

Wobbrock, a professor in the Information School, and co-authors Radu-Daniel Vatavu and Lisa Anthony were presented with the 2022 Ten Year Technical Impact Award in November at the ACM International Conference on Multimodal Interaction (ICMI). The award honors their 2012 paper titled Gestures as point clouds: A $P recognizer for user interface prototypes, which also won ICMI’s Outstanding Paper Award when it was published.

The $P point-cloud gesture recognizer was a key advancement in the way computers recognize stroke gestures, such as swipes, shapes, or drawings on a touchscreen. It provided a new way to quickly and accurately recognize what users’ fingers or styluses were telling their devices to do, and could even be used with whole-hand gestures to accomplish more complex tasks such as typing in the air or controlling a drone with finger movements.

The research built on Wobbrock’s 2007 invention of the $1 unistroke recognizer, which made it much easier for devices to recognize single-stroke gestures, such as a circle or a triangle. Wobbrock called it “$1” — 100 pennies — because it required only 100 lines of code, making it easy for user interface developers to incorporate gestures in their prototypes.
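The published $1 recognizer also normalizes rotation and uses a golden-section search over candidate angles; the sketch below shows only the simplified core idea (resample the stroke, normalize scale and position, then compare average point-to-point distance against stored templates), so treat it as an illustration rather than the published algorithm.

# Simplified sketch of the core of a $1-style unistroke recognizer. Rotation
# normalization and the golden-section search from the published recognizer
# are omitted for brevity.
import math

def path_length(points):
    return sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))

def resample(points, n=64):
    """Resample a stroke to n evenly spaced points along its path."""
    interval = path_length(points) / (n - 1)
    pts = list(points)
    resampled, accumulated = [pts[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if accumulated + d >= interval and d > 0:
            t = (interval - accumulated) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)  # continue measuring from the interpolated point
            accumulated = 0.0
        else:
            accumulated += d
        i += 1
    while len(resampled) < n:  # guard against floating-point shortfall
        resampled.append(pts[-1])
    return resampled[:n]

def normalize(points, size=250.0):
    """Scale to a reference square and translate the centroid to the origin."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1e-9, (max(ys) - min(ys)) or 1e-9
    scaled = [(x * size / w, y * size / h) for x, y in points]
    cx = sum(p[0] for p in scaled) / len(scaled)
    cy = sum(p[1] for p in scaled) / len(scaled)
    return [(x - cx, y - cy) for x, y in scaled]

def recognize(stroke, templates):
    """Return the template name with the smallest average point-to-point distance."""
    candidate = normalize(resample(stroke))
    best_name, best_dist = None, float("inf")
    for name, template_stroke in templates.items():
        t = normalize(resample(template_stroke))
        d = sum(math.dist(a, b) for a, b in zip(candidate, t)) / len(candidate)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name, best_dist

A caller would store one example stroke per gesture (say, a circle and a triangle) in the templates dictionary and pass each new user stroke to recognize.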

This article was excerpted from the UW iSchool article, iSchool’s Wobbrock Honored for Lasting Impact, by Doug Parry.

Wobbrock Co-leads ACM UIST Conference, Brings Accessibility to the Conversation

CREATE founding Co-Director Jacob O. Wobbrock served as General Co-Chair for ACM User Interface Software and Technology (UIST) 2022, held at the end of October.

Nearly 500 people traveled to beautiful Bend, OR to share their latest innovations in user interface software and technology from fabrication and materials, to VR and AR, to interactive tools and interaction techniques. UIST showcased the very best inventive research in the field of human-computer interaction. “Attending UIST is like attending an exclusive preview of possible tomorrows, where one gazes into the future and imagines living there, if only for a moment,” said Wobbrock.

Two photos from UIST 2022 Conference: A table of attendees chatting animatedly and a photo of Jacob O. Wobbrock and closing keynote speaker Marissa Mayer

Bringing accessibility into the conversation, Wobbrock’s opening keynote questioned the assumptions made in statements we often see: “Just touch the screen,” for example, assumes the ability to see the screen, to move the hand, and so on.

For the closing keynote, available on YouTube, Wobbrock interviewed Marissa Mayer, former CEO of Yahoo and an early employee at Google. She studied Symbolic Systems and Computer Science with a focus on artificial intelligence at Stanford, where Wobbrock also studied. Mayer answered audience questions, including one about making design choices through a combination of crowdsourcing, an abundance of data, and strong opinions.

A Ph.D. Student’s Promising Research in Mobility in Cerebral Palsy

Whether she’s researching how biofeedback systems can guide gait training in children with cerebral palsy or leading toy adaptation events, Alyssa Spomer is committed to advancing accessible technology.

A Ph.D. student in UW Mechanical Engineering (ME) and advised by CREATE Associate Director Kat Steele, Spomer is the student chair of CREATE-sponsored HuskyADAPT. Her studies have been multidisciplinary, spanning ME and rehabilitation medicine. She uses her engineering skills to understand the efficacy of using robotic devices to target and improve neuromuscular control during walking.

“Delving into how the central nervous system controls movement and how these systems are impacted by brain injury has been such an interesting aspect of my work,” Spomer says. “My research is a mix of characterizing the capacity for individuals to adapt their motor control and movement patterns, and evaluating the efficacy of devices that may help advance gait rehabilitation.”

In her dissertation work, Spomer is primarily evaluating how individuals adapt movement patterns while using a pediatric robotic exoskeleton paired with an audiovisual biofeedback system that she helped design. The Biomotum SPARK exoskeleton senses and supports motion at the ankle, using motors worn on a hip belt to provide either resistance or assistance during walking. The audiovisual system is integrated into the device’s app and provides the user with real-time information on their ankle motion alongside a desired target to help guide movement correction.

Collage of two images. Left: Man walking on a treadmill wearing a robotic exoskeleton device around his hips and legs while a female researcher monitors the process through a tablet. Right: A closeup of the researcher's hands holding the tablet showing the real-time information.
The audiovisual system that Spomer helped design (shown on a screen in the right photo) provides the user with real-time information on their ankle motion alongside a desired target to help guide movement correction.

Inspired by CREATE’s Kat Steele and the Steele Lab

Spomer was drawn to ME by the Steele Lab’s focus on enhancing human mobility through engineering and design. Working with Kat Steele has been a highlight of her time at the UW.

“I really resonated with Kat’s approach to research,” Spomer says. “The body is the ultimate machine, meaning that we as engineers can apply much of our foundational curriculum in dynamics and control to characterize its function. The beauty of ME is that you are able to develop such a rich knowledge base with numerous applications which really prepares you to create and work in these multidisciplinary spaces.”

This winter, Spomer will begin a new job at Gillette Children’s Specialty Healthcare. She’s excited to pursue research that aligns with her Ph.D. work. Her goal remains the same: “How can we advance and improve the accessibility of healthcare strategies to help promote independent and long-term mobility?”

Excerpted from UW Mechanical Engineering, written by Lyra Fontaine with photos by Mark Stone, University of Washington. Read the full article

Increasing Data Equity Through Accessibility

Data equity can level the playing field for people with disabilities, both by opening new employment opportunities and by improving access to information, while data inequity may amplify disability by disenfranchising people with disabilities.

In response to the U.S. Office of Science and Technology Policy’s request for information (RFI) on better supporting intra- and extra-governmental collaboration around the production and use of equitable data, CREATE Co-Director Jennifer Mankoff co-authored a position statement with Frank Elavsky of Carnegie Mellon University and Arvind Satyanarayan of the MIT Visualization Group. The authors address the three questions most pertinent to the needs of disabled people.

They highlight the opportunity to expand upon the government’s use of accessible tools to produce accessible visualizations through broad-based worker training. “From the CDC to the Census Bureau, critical data that is highly important to all historically underrepresented peoples and should be available to underrepresented scholars and research institutions to access and use, must be accessible to fully include everyone.”

Reiterating “Nothing about us without us,” the statement notes that when authoring policies that involve data, access, and equitable technology, people with disabilities must be consulted. “Calls for information, involvement, and action should explicitly invite and encourage participation of those most affected.” In addition to process notes, the response addresses roles, education, laws, and tools.

Just having access to data is not enough, or just, when power, understanding and action are in the hands of government agents, computer scientists, business people and the many other stakeholders implementing data systems who do not themselves have disabilities.

The statement identifies access to the tools for producing accessible data, such as data visualizations, as low-hanging fruit and concludes with a call for funding of forward-thinking research that investigates structural and strategic limitations to equitable data access. More research is needed to investigate the ways that various cultural and socio-economic factors intersect with disability and access to technology.

Read the details in the full response on arxiv.org (PDF).

Large-Scale Analysis Finds Many Mobile Apps Are Inaccessible

Mobile apps have become a key feature of everyday life, with apps for banking, work, entertainment, communication, transportation, and education, to name a few. But many apps remain inaccessible to people with disabilities who use screen readers or other assistive technologies.

iStockPhoto image of several generic application icons such as weather, books, music, etc.

Any person who uses an assistive technology can describe negative experiences with apps that do not provide proper support. For example, screen readers unhelpfully announce “unlabeled button” when they encounter a screen widget without proper information provided by the developer.

We know that apps often lack adequate accessibility, but until now, it has been difficult to get a big picture of mobile app accessibility overall.

How good or bad is the state of mobile app accessibility? What are the common problems? What can be done?

Research led by Ph.D. student Anne Spencer Ross and co-advised by James Fogarty (CREATE Associate Director) and Jacob O. Wobbrock (CREATE Co-Director) has been examining these questions in first-of-their-kind large-scale analyses of mobile app accessibility. Their latest research automatically examined data from approximately 10,000 apps to identify seven common types of accessibility failures. Unfortunately, this analysis found that many apps are highly inaccessible. For example, 23% of the analyzed apps failed to provide accessibility metadata, known as a “content description,” for more than 90% of their image-based buttons. The functionality of those buttons will therefore be inaccessible when using a screen reader.

Bar chart from the analysis of 8,901 apps: 23 percent were missing labels on all of their elements, another 23 percent were missing no labels, and the rest were missing labels on 6 to 7 percent of their elements.
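As a much-simplified sketch of the kind of check underlying this analysis (not the study’s actual pipeline), the snippet below counts image-based buttons that lack a content description in a single captured view hierarchy; the dictionary structure is an assumption for illustration.

# Much-simplified illustration: count image-based buttons missing a content
# description in a captured view hierarchy. The dictionary structure is an
# assumption, not the format used in the study.
def iter_elements(node):
    """Walk a nested view-hierarchy dict, yielding every element."""
    yield node
    for child in node.get("children", []):
        yield from iter_elements(child)

def missing_label_rate(view_hierarchy: dict) -> float:
    """Fraction of ImageButton-like elements with no content description."""
    image_buttons = [
        el for el in iter_elements(view_hierarchy)
        if el.get("class", "").endswith("ImageButton")
    ]
    if not image_buttons:
        return 0.0
    unlabeled = [el for el in image_buttons if not el.get("content_description")]
    return len(unlabeled) / len(image_buttons)

# Example: a tiny fake screen with one labeled and one unlabeled image button.
screen = {
    "class": "android.widget.FrameLayout",
    "children": [
        {"class": "android.widget.ImageButton", "content_description": "Search"},
        {"class": "android.widget.ImageButton", "content_description": ""},
    ],
}
print(f"{missing_label_rate(screen):.0%} of image buttons are unlabeled")  # 50%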

Clearly, we need better approaches to ensuring all apps are accessible. This research has also shown that large-scale data can help identify reasons why such labeling failures occur. For example, “floating action buttons” are a relatively new Android element that typically presents a commonly used command as an image button floating atop other elements. The analyses found that 93% of such buttons lacked a content description, so they were even more likely than other buttons to be inaccessible. By examining this issue closely, Ross and her advisors found that commonly used software development tools do not detect this error. In addition to highlighting accessibility failures in individual apps, results like these suggest that identifying and addressing underlying failures in common developer tools can improve the accessibility of many apps.

Next, the researchers aim to detect a greater variety of accessibility failures and to include longitudinal analyses over time. Eventually, they hope to paint a complete picture of mobile app accessibility at scale.

CREATE + I-LABS: focus on access, mobility, and the brain

We are excited to celebrate the launch of a new research and innovation partnership between CREATE and the UW Institute of Learning and Brain Sciences (I-LABS) focusing on access, mobility, and the brain.

Mobility technologies such as manual and powered wheelchairs, scooters, and modified ride-on toy cars are essential tools for young children with physical disabilities to self-initiate exploration, make choices, and learn about the world. In essence, these devices are mobile learning environments.

Collage of several toddlers playing, learning and laughing on modified ride-on toys

The collaboration, led by Heather Feldner and Kat Steele from CREATE, and Pat Kuhl and Andy Meltzoff from I-LABS, brings together expertise from the fields of rehabilitation medicine and disability studies, engineering, language development, psychology, and learning. The team will address several critical knowledge gaps, starting with: How do early experiences with mobility technology impact brain development and learning outcomes? What are the critical periods for mobility?

Read the full Research Highlight.

Ga11y improves accessibility of automated GIFs for visually impaired users

Animated GIFs, prevalent in social media, texting platforms and websites, often lack adequate alt-text descriptions, resulting in inaccessible GIFs for blind or low-vision (BLV) users and the loss of meaning, context, and nuance in what they read. In an article published in the Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’22), a research team led by CREATE Co-director Jacob O. Wobbrock has demonstrated a system called Ga11y (pronounced “galley”) for creating GIF annotations and improving the accessibility of animated GIFs.

Video describing Ga11y, an Automated GIF Annotation System for Visually Impaired Users. The video frame shows an obscure image and the question, How would you describe this GIF to someone so they can understand it without seeing it?

Ga11y combines the power of machine intelligence and crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests.
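The paper describes Ga11y’s actual architecture; purely as a sketch of how a client might hand an annotation request to such a back end, the snippet below posts a GIF URL to a hypothetical endpoint. The URL, payload fields, and response shape are assumptions, not Ga11y’s real API.

# Rough sketch of submitting a GIF annotation request to a back-end service.
# The endpoint URL, payload fields, and response shape are assumptions for
# illustration, not Ga11y's actual API.
import requests

SERVER_URL = "http://localhost:5000/api/annotation-requests"  # hypothetical

def request_annotation(gif_url: str, context: str) -> dict:
    """Ask the server's machine/volunteer pipeline to describe a GIF."""
    payload = {
        "gif_url": gif_url,
        "context": context,  # e.g., surrounding message text, to aid volunteers
    }
    resp = requests.post(SERVER_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # assumed shape: {"request_id": str, "status": "pending"}

if __name__ == "__main__":
    result = request_annotation(
        "https://example.com/reaction.gif",
        context="Reply to: 'We got the grant!'",
    )
    print("Submitted annotation request:", result)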

Wobbrock’s co-authors are Mingrui “Ray” Zhang, a Ph.D. candidate in the UW iSchool, and Mingyuan Zhong, a Ph.D. student in the Paul G. Allen School of Computer Science & Engineering.

Part of this work was funded by CREATE.