Hard Mode: Accessibility, Difficulty and Joy for Gamers With Disabilities

Video games often pose accessibility barriers to gamers with disabilities, but there is no standard method for identifying which games have barriers, what those barriers are, and whether and how they can be overcome. CREATE and Allen School Ph.D. student Jesse Martinez has been working to understand the strategies and resources gamers with disabilities regularly use when trying to identify a game to play, and the challenges disabled gamers face in this process, with the hope of advising the games industry on how to better support disabled members of its audience.

Martinez, with CREATE associate directors James Fogarty and Jon Froehlich as project advisors and co-authors, published the team’s findings at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2024).

Martinez will present the paper, Playing on Hard Mode: Accessibility, Difficulty and Joy in Video Game Adoption for Gamers With Disabilities, virtually at the hybrid conference, and will present it in person at UW DUB’s upcoming para.chi.dub event.

Martinez’s passion for this work came from personal experience: as someone who loves playing all kinds of games, he has spent lots of time designing new ways to play games to make them accessible for himself and other friends with disabilities. He also has experience working independently as a game and puzzle designer and has consulted on accessibility for tabletop gaming studio Exploding Kittens, giving him a unique perspective on how game designers create games and how disabled gamers hack them.

First, understand the game adoption process

The work focuses on the process of “game adoption,” which spans learning that a new game exists (game discovery), assessing whether it is a good fit for one’s tastes and access needs (game evaluation), and getting set up with the game and making any modifications necessary to improve the overall experience (game adaptation). As Martinez notes in the paper, gamers with disabilities already do work to make gaming more accessible, so it is important not to overlook this work when designing new solutions.

To explore this topic, Martinez interviewed 13 people with a range of disabilities and very different sets of access needs. In the interviews, they discussed what each person’s unique game adoption process looked like, where they encountered challenges, and how they would want to see things change to better support their process.

Graphic from the research paper showing the progression from Discovery (finding a game to play), to Evaluation (assessing a game's fit), to Adaptation (getting set up with a game).

Game discovery

In discussing game discovery, the team found that social relationships and online disabled gaming communities were the most valuable resources for learning about new games. Game announcements rarely come with promises of accessibility, but if a friend suggested a game, it often meant the friend had already considered whether the game had a chance of being accessible. Participants also mentioned that since there is no equivalent of a games store for accessible games, it was sometimes hard to learn about new games at all. In his recommendations, Martinez suggests that game distributors like Steam and Xbox work to support this type of casual browsing for accessible games.

Game evaluation

In discussing game evaluation, the team found that community-created game videos on platforms like YouTube and Twitch were useful for making accessibility judgments. Interestingly, the videos didn’t need to be accessibility-focused, since just seeing how the game worked was useful information. One participant in the study highlighted the accessibility options menus in their own Twitch streams, and asked other streamers to do likewise, since this information can be tricky to find online.

Game adaptation

Martinez and team discovered many different approaches people took to make a game accessible to them, starting with enabling accessibility features like captions or getting the game to work with their screen reader. Some participants designed their own special tools to make the system work, such as a 3D-printed wrist mount for a gaming mouse. Participants shared that difficulty levels in a game are very important accessibility resources, especially when inaccessibility in the game already made things harder.

The important thing is that players be allowed to choose what challenges they want to face, rather than being forced to play on “hard mode” if they don’t want to.

Other participants discussed how they change their own playstyle to make the game accessible, such as playing as a character who fights with a ranged weapon or who can teleport across parts of the game world. Others went even further, creating their own new objectives in the game that better suited what they wanted from their experience. This included ignoring the competitive part of the racing game Mario Kart to just casually enjoy driving around its intricate worlds, and participating in a friendly roleplaying community in GTA V where they didn’t have to worry about the game’s fast-paced missions and inaccessible challenges.

Overcoming inaccessible games

Martinez uses all this context to introduce two concepts to the world of human-computer interaction and accessibility research: “access difficulty” and “disabled gaming.”

“Access difficulty” is how the authors describe the challenges created in a game specifically due to inaccessibility, which are different from the challenges a game designer intentionally creates to make the game harder. The authors emphasize that the important thing is that players be allowed to choose what challenges they actually want to face, rather than being forced to play on “hard mode” compared to nondisabled players.

“Disabled gaming” acknowledges the particular way gamers with disabilities play games, which is often very different from how nondisabled people play games. Disabled gaming is about taking the game you’re presented and turning it into something fun however you can, regardless of whether that’s what the game designer expects or wants you to do.

Martinez and his co-authors are very excited to share this work with the CREATE community and the world, and they encourage anyone interested in participating in a future study of disabled gaming to join the #study-recruitment channel. If you’re not on CREATE’s Slack, request to join.

DUB hosts para.chi event

March 1, 2024

Para.chi is a worldwide event held in parallel with CHI ’24 for those unable or unwilling to attend the main conference. UW Design. Use. Build. (DUB) is hosting para.chi.dub with members of the DUB team, and maybe you.

  • Live session for accepted virtual papers
  • Networking opportunities
  • Accessibility for students and early career researchers locally and online

Wednesday, May 8, 2024 
Hybrid event: Seattle location to be announced and virtual info shared upon registration
Presenter applications due March 15 
Register to attend by Monday, April 1.

Do you have a virtual paper and wish to get feedback from a live audience? Perhaps you have a journal paper accepted to an HCI venue and wish to present it live? Then consider joining us!

Note that presenter space is somewhat limited. Decisions about how to distribute poster, presenter, and hybrid opportunities will be made after March 15.

Seattle and beyond

Each regional team is offering a different event, from mini-conferences to virtual paper sessions to mentoring and networking events. 

Learn more:

UW News: Can AI help boost accessibility? CREATE researchers tested it for themselves

November 2, 2023 | UW News

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.

This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

Four AI-generated images show different interpretations of a doll-sized “crocheted lavender husky wearing ski goggles,” including two pictured outdoors and one against a white background.

The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”

The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.

Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

“When technology changes rapidly, there’s always a risk that disabled people get left behind.”

Jennifer Mankoff, CREATE Director, professor in the Allen School

Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.

The results of the other tests researchers selected were equally mixed:

  • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
  • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
  • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”

The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.” The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”

Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”

Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.


For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.


This article was adapted from the UW News article by Stefan Milne.

UW News: A11yBoard accessible presentation software

October 30, 2023 | UW News

A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.

Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.

“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.

For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


This article was adapted from the UW News article by Stefan Milne.

ASSETS 2023 Papers and Posters

October 4, 2023


Augmented Reality to Support Accessibility

CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.

Illustration of a user holding a smartphone using the RASSAR prototype app to scan the room for accessibility issues.

As has become customary, CREATE faculty, students and alumni will have a large presence at the 2023 ASSETS Conference. It’ll be quiet on campus October 23-25 with these folks in New York.

Papers and presentations

How Do People with Limited Movement Personalize Upper-Body Gestures? Considerations for the Design of Personalized and Accessible Gesture Interfaces
Monday, Oct 23 at 11:10 a.m. Eastern time
Momona Yamagami, Alexandra A Portnova-Fahreeva, Junhan Kong, Jacob O. Wobbrock, Jennifer Mankoff

Understanding Digital Content Creation Needs of Blind and Low Vision People
Monday, Oct 23 at 1:40 p.m. Eastern time
Lotus Zhang, Simon Sun, Leah Findlater

Notably Inaccessible — Data Driven Understanding of Data Science Notebook (In)Accessibility
Monday, Oct 23 at 4 p.m. Eastern time
Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff

A Large-Scale Mixed-Methods Analysis of Blind and Low-vision Research in ACM and IEEE
Tuesday, Oct 24 at 11:10 a.m. Eastern time
Yong-Joon Thoo, Maximiliano Jeanneret Medina, Jon E. Froehlich, Nicolas Ruffieux, Denis Lalanne

Working at the Intersection of Race, Disability and Accessibility
Tuesday, Oct 24 at 1:40 p.m. Eastern time
Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, Jennifer Mankoff

Comparing Locomotion Techniques in Virtual Reality for People with Upper-Body Motor Impairments
Wednesday, Oct 25 at 8:45 a.m. Eastern time
Rachel L. Franz, Jinghan Yu, Jacob O. Wobbrock

Jod: Examining the Design and Implementation of a Videoconferencing Platform for Mixed Hearing Groups
Wednesday, Oct 25 at 11:10 a.m. Eastern time
Anant Mittal, Meghna Gupta, Roshni Poddar, Tarini Naik, SeethaLakshmi Kuppuraj, James Fogarty, Pratyush Kumar, Mohit Jain

Azimuth: Designing Accessible Dashboards for Screen Reader Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, Jennifer Mankoff

Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Zhuohao Zhang, Gene S-H Kim, Jacob O. Wobbrock

Experience Reports

An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility
Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

Maintaining the Accessibility Ecosystem: a Multi-Stakeholder Analysis of Accessibility in Higher Education
Kelly Avery Mack, Natasha A Sidik, Aashaka Desai, Emma J McDonnell, Kunal Mehta, Christina Zhang, Jennifer Mankoff

TACCESS Papers

“I’m Just Overwhelmed”: Investigating Physical Therapy Accessibility and Technology Interventions for People with Disabilities and/or Chronic Conditions

Momona Yamagami, Kelly Mack, Jennifer Mankoff, Katherine M. Steele

The Global Care Ecosystems of 3D Printed Assistive Devices

Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, Jon Schull, Jennifer Mankoff

Posters

Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means
Ather Sharif, Ruican Zhong, Yadi Wang

U.S. Deaf Community Perspectives on Automatic Sign Language Translation
Nina Tran, Richard E. Ladner, Danielle Bragg (Microsoft Research)

Workshops

Bridging the Gap: Towards Advancing Privacy and Accessibility
Rahaf Alharbi, Robin Brewer, Gesu India, Lotus Zhang, Leah Findlater, and Abigale Stangl

Tackling the Lack of a Practical Guide in Disability-Centered Research
Emma McDonnell, Kelly Avery Mack, Kathrin Gerling, Katta Spiel, Cynthia Bennett, Robin N. Brewer, Rua M. Williams, and Garreth W. Tigwell

A11yFutures: Envisioning the Future of Accessibility Research
Foad Hamidi, Kirk Crawford, Jason Wiese, Kelly Avery Mack, Jennifer Mankoff

Demos

A Demonstration of RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Wyatt Olson, Jon E. Froehlich

BusStopCV: A Real-time AI Assistant for Labeling Bus Stop Accessibility Features in Streetscape Imagery
Chaitanyashareef Kulkarni, Chu Li, Jaye Ahn, Katrina Oi Yau Ma, Zhihan Zhang, Michael Saugstad, Kevin Wu, Jon E. Froehlich; with Valerie Novack and Brent Chamberlain (Utah State University)

Papers and presentations by CREATE associates and alumni

  • Monday, Oct 23 at 4:00 p.m. Eastern time
    Understanding Challenges and Opportunities in Body Movement Education of People who are Blind or have Low Vision
    Madhuka Thisuri De Silva, Leona M Holloway, Sarah Goodwin, Matthew Butler
  • Tuesday, Oct 24 at 8:45 a.m. Eastern time
    AdaptiveSound: An Interactive Feedback-Loop System to Improve Sound Recognition for Deaf and Hard of Hearing Users
    Hang Do, Quan Dang, Jeremy Zhengqi Huang, Dhruv Jain
  • Tuesday, Oct 24 at 8:45 a.m. Eastern time
    “Not There Yet”: Feasibility and Challenges of Mobile Sound Recognition to Support Deaf and Hard-of-Hearing People
    Jeremy Zhengqi Huang, Hriday Chhabria, Dhruv Jain
  • Tuesday, Oct 24 at 4:00 p.m. Eastern time
    The Potential of a Visual Dialogue Agent In a Tandem Automated Audio Description System for Videos
    Abigale Stangl, Shasta Ihorn, Yue-Ting Siu, Aditya Bodi, Mar Castanon, Lothar D Narins, Ilmi Yoon

Research at the Intersection of Race, Disability and Accessibility

October 13, 2023

What are the opportunities for research to engage the intersection of race and disability?

What is the value of considering how constructs of race and disability work alongside each other within accessibility research studies?

Two CREATE Ph.D. students have explored these questions and found little focus on this intersection within accessibility research. In their paper, Working at the Intersection of Race, Disability and Accessibility (PDF), they observe that we’re missing out on the full nuance of marginalized and “otherized” groups. 

The Allen School Ph.D. students, Aashaka Desai and Aaleyah Lewis, and collaborators will present their findings at the ASSETS 2023 conference on Tuesday, October 24.

Spurred by the conversation at the Race, Disability & Technology research seminar earlier in the year, members of the team realized they lacked a framework for thinking about work at this intersection. In response, they assembled a larger team to conduct an analysis of existing work within accessibility research.

The resulting paper presents a review of considerations for engaging with race and disability in the research and education process. It offers analyses of exemplary papers, highlights opportunities for intersectional engagement, and presents a framework to explore race and disability research. Case studies exemplify engagement at this intersection throughout the course of research, in designs of socio-technical systems, and in education. 


Case studies

  • Representation in image descriptions: How to describe appearance, factoring preferences for self-descriptions of identity, concerns around misrepresentation by others, interest in knowing others’ appearance, and guidance for AI-generated image descriptions.
  • Experiences of immigrants with disabilities: Cultural barriers, including cultural disconnects and differing levels of stigma about disability between refugees and host countries, compound language barriers.
  • Designing for intersectional, interdependent accessibility: How access practices as well as cultural and racial practices influence every stage of research design, method, and dissemination, in the context of work with communities of translators.

Composite image of the six authors of a variety of backgrounds: Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, and Jennifer Mankoff
Authors, left to right: Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, and Jennifer Mankoff


CREATE Open Source Projects Awarded at Web4All

July 6, 2023

CREATE researchers shone this spring at the 2023 Web4All conference, which, in part, seeks to “make the internet more accessible to the more than one billion people who struggle to interact with digital content each day due to neurodivergence, disability or other impairments.” Two CREATE-funded open source projects won accolades.

Best Technical Paper award:
Understanding and Improving Drilled-Down Information Extraction from Online Data Visualizations for Screen-Reader Users

Authors: Ather Sharif, Andrew Mingwei Zhang, CREATE faculty member Katharina Reinecke, and CREATE Associate Director Jacob O. Wobbrock

Built on prior research to develop taxonomies of information sought by screen-reader users to interact with online data visualizations, the team’s research used these taxonomies to extend the functionality of VoxLens—an open-source multi-modal system that improves the accessibility of data visualizations—by supporting drilled-down information extraction. They assessed the performance of their VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Their enhancements “closed the gap” between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, reducing interaction time by 22% in the process.

Accessibility Challenge Delegates’ Award:
UnlockedMaps: A Web-Based Map for Visualizing the Real-Time Accessibility of Urban Rail Transit Stations

Authors: Ather Sharif, Aneesha Ramesh, Qianqian Yu, Trung-Anh H. Nguyen, and Xuhai Xu

Ather Sharif’s work on another project, UnlockedMaps, was honored with the Accessibility Challenge Delegates’ Award. The paper details a web-based map that allows users to see in real time how accessible rail transit stations are in six North American cities, including Seattle, Toronto, New York and the Bay Area. UnlockedMaps shows whether stations are accessible and whether they are currently experiencing elevator outages. The work includes a public website that enables users to make informed decisions about their commutes, as well as an open source API for developers, disability advocates, and policy makers. Among other uses, the API can shed light on the frequency of elevator outages and their repair times, identifying disparities between neighborhoods in a given city.

Read more

Wobbrock Co-leads ACM UIST Conference, Brings Accessibility to the Conversation

CREATE founding Co-Director Jacob O. Wobbrock served as General Co-Chair for ACM User Interface Software and Technology (UIST) 2022, held at the end of October.

Nearly 500 people traveled to beautiful Bend, OR to share their latest innovations in user interface software and technology from fabrication and materials, to VR and AR, to interactive tools and interaction techniques. UIST showcased the very best inventive research in the field of human-computer interaction. “Attending UIST is like attending an exclusive preview of possible tomorrows, where one gazes into the future and imagines living there, if only for a moment,” said Wobbrock.

Two photos from UIST 2022 Conference: A table of attendees chatting animatedly and a photo of Jacob O. Wobbrock and closing keynote speaker Marissa Mayer

Bringing accessibility into the conversation, Wobbrock’s opening keynote questioned the assumptions embedded in statements we often see: “Just touch the screen,” for example, assumes the ability to see the screen, to move the hand, and so on.

For the closing keynote, available on YouTube, Wobbrock interviewed Marissa Mayer, former CEO of Yahoo and an early employee at Google. She studied Symbolic Systems and Computer Science, with a focus on artificial intelligence, at Stanford, as did Wobbrock. Mayer answered audience questions, including one about making design choices through a combination of crowdsourcing, an abundance of data, and strong opinions.

CREATE Leadership at ASSETS’22 Conference

ASSETS 2022 logo, composed of a PCB-style Parthenon outline with three people standing and communicating with each other in the Parthenon, representing three main iconic disabilities: blind, mobility impaired, deaf and hard of hearing.

CREATE Associate Director Jon Froehlich was the General Chair for ASSETS’22, the premier ACM conference for research on the design, evaluation, use, and education related to computing for people with disabilities and older adults. This year, over 300 participants from 37 countries engaged with state-of-the-art research in the design and evaluation of technology for people with disabilities. UW CREATE was a proud sponsor of ASSETS’22.

Keynote speaker Haben Girma is the first Deafblind graduate of Harvard Law School and a leading human rights advocate in disability. Girma highlighted systemic ableism in education, employment, and tech and opportunities for change in her speech.

“There is a myth that non-disabled people are independent and disabled people are dependent. We are all interdependent. Many of you like drinking coffee; very few of you grow your own beans,” she pointed out.

ASSETS’22 was held in Athens, Greece. “The birthplace of democracy, we were surrounded by so many beautiful antiquities that highlighted the progress and innovation of humanity and served as inspiration to our community,” said Froehlich.

“Perhaps my favorite experience was the accessible private tours of the Acropolis Museum with conference attendees—hearing of legends, seeing the artistic craft, and moving about a state-of-the-art event center all in the shadow of the looming Acropolis was an experience I’ll never forget,” he added.

Artifact awards

CREATE Ph.D. student Venkatesh Potluri, advised by CREATE Co-Director Jennifer Mankoff in the Make4All Group, and his team tied for 1st place for the Artifact Award. Potluri presented their work on CodeWalk: Facilitating Shared Awareness in Mixed-Ability Collaborative Software Development.

Third place went to Ather Sharif’s team, advised by Jacob O. Wobbrock, for UnlockedMaps: Visualizing Real-Time Accessibility of Urban Rail Transit Using a Web-Based Map.

Future of urban accessibility

As part of the conference, Froehlich, Heather Feldner, and Anat Caspi held a virtual workshop entitled “The Future of Urban Accessibility.” Learn more: https://accessiblecities.github.io/UrbanAccess2022/

Accessible CS Education workshop focuses on inclusive experiences

Amid a global pandemic, innovative thinkers have been hard at work developing plans to improve equity in modern learning environments. The Accessible Computer Science Education Fall Workshop was held November 17-19, 2020, and jointly sponsored by Microsoft, The Coleman Institute for Cognitive Disabilities, and CREATE.

Each day of the event focused on strategies to improve classroom experiences for students and faculty with disabilities. You can watch recorded sessions where speakers provided a wide range of perspectives on computer science pedagogy and how to increase diversity, equity, and inclusion in computing disciplines.

Two students work together in a lab on a computer screen using accessibility tools


The event provided an intimate environment to share work and establish new collaborations. The program resulted in more than conversations: its most visible outcome, for now, is five formal white papers and action plans drawn from the break-out group reports, with CREATE faculty contributors noted, that will guide future research and collaboration.


Throughout the workshop, participants focused on four areas:

  1. Education for employment pathways
  2. Making K-12 computing education accessible
  3. Making higher education in computing accessible
  4. Building accessible hardware and systems

Conversations generated ideas about technologies that can boost employment and assist people with disabilities who experience barriers in various learning environments.

The committee behind the event successfully cultivated a productive and inclusive atmosphere that sponsors hope will translate to future projects. Members of the committee include Andrew Begel, Heather Dowty, Cecily Morrison, Teddy Seyed, and Roy Zimmerman from Microsoft; Anat Caspi and Richard Ladner from UW CREATE; and Clayton Lewis from the University of Colorado Boulder.