Formally, Dr. Joshua Miele describes himself as a blind scientist, designer, performance artist and disability activist who is focused on the overlap of technology, disability, and equity. But in his personable and humorous lecture, he listed a few more identities: Interrupter. Pain in the ass. “CAOS” promoter.
Miele’s passions are right in line with CREATE’s work and he started his lecture, after being introduced by CREATE Director Jennifer Mankoff, with a compliment we heartily accept: “This community at the University of Washington is one of the largest, one of the most vibrant communities of people thinking and working around disability, accessibility, and technology.”
Miele shared his enthusiasm for disability-inclusive design and its impact on global disability equity and inclusion. Drawing on examples and counterexamples from his own life and career, Dr. Miele described some of the friction the accessibility field has faced and speculated about what challenges may lie ahead, with particular emphasis on the centrality of user-centered practices, and the exhilarating potential of open source solutions and communities.
When he received the MacArthur grant, Miele had to decide what to do with the spotlight on his work. He shared his hopes for a Center for Accessibility and Open Source (CAOS, pronounced “chaos”) to promote global digital equity for people with disabilities by making low-cost accessible tools available to everyone, whether or not they have financial resources. He invited anyone interested in global equity, disability, direct action, performance art, and CAOS/chaos to reach out and collaborate on this incredibly important work.
CREATE Director Jennifer Mankoff started the conversation by asking Wong about her experience as a disabled person in academia and what needs to change. Wong said her work in disability justice was inspired in part by the “incredible amount of emotional and physical labor to ask for equal access” in academic settings. She had to spend precious time, money, and energy to gain the accommodations and access she needed to succeed. But she realized that as soon as she transitioned out, her efforts would be lost and the next student would have to start over to prove their need and request a new set of accommodations. Wong was doubtful that large academic institutions can support the goal of collective liberation. It’s the “dog-eat-dog world [of] academia where the competition is stiff and everyone is pushed to their limits to produce and be valuable.” She encouraged instructors to incorporate books about disability justice in their syllabi (see the reading list below).
Wong, who spoke with a text-to-voice tool and added emphasis with her facial expressions on the screen, also addressed the value and the limitations of assistive technology. She noted that the text-to-speech app she uses does not convey her personality. She also discussed how ableism appears in activist discourse.
One of her examples was a debate over gig economy delivery services, which are enormously important for many people with disabilities but also under-compensate delivery workers. She noted that blaming disabled people for undermining efforts for better wages was not the solution; collective efforts to make corporations compensate workers fairly are the solution. She also explained that hashtag activism, which has been disparaged in popular discourse, is a crucial method for disabled people to participate in social justice activism. And she discussed her outrage when, as she prepared to give a talk at a public health school, her own access needs were used to censor her. Throughout her talk, Wong returned again and again to the principles of disability justice, and encouraged attendees to engage in collective forms of change.
Wong’s responses embodied a key component of disability justice principles: citational practices that name fellow contributors to collective disability justice wisdom. Her long list of recommended reading for the audience inspired us to build our new RDT reading list. Wong referenced Patty Berne several times, calling Berne her introduction to disability justice.
Patty Berne on disability justice: Centering intersectionality and liberation
A week later, two CREATE Ph.D. students, Aashaka Desai and Aaleyah Lewis, moderated a conversation with Patty Berne. Berne, who identifies as a Japanese-Haitian queer disabled woman, co-founded Sins Invalid, a disability justice-based arts project focusing on disabled artists of color and queer and gender non-conforming artists with disabilities. Berne defined disability justice as advocating for each other, understanding access needs, and normalizing those needs. On the topic of climate justice, she noted that state-sponsored disaster planning often overlooks the needs of people with motor impairments or life-sustaining medical equipment. This is where intersectional communities do, and should, take care of each other when disaster strikes.
Berne addressed language justice within the disability community, noting that “we don’t ‘language’ like able-bodied people.” For example, the use of ventilators and augmented speech technology changes the cadence of speech. Berne wants to normalize access needs for a more inclusive experience of everyday life. Watch the full conversation on YouTube.
In January 2024, the CREATE community was invited to participate in the Taskar Center’s 2024 Annual Ben Taskar Memorial Event, themed “Transportation and Responsible AI.”
Sessions
Project Poster Viewing and Team Discussions
Explore innovative projects from the course “Responsible Data Science in Urban Spaces,” taught under the guidance of TCAT director Anat Caspi, work that contributed to Dr. Caspi’s recent Human Rights Education Award.
Community Townhall Meeting with Dragomir Anguelov
Join us for a captivating discussion with Dragomir Anguelov, VP and Head of Research at Waymo, as he shares Waymo’s insights into operating an autonomous ride-share fleet that has covered over 7 million miles. The session, moderated by Anat Caspi, focuses on responsible AI in transportation. (This session will not be recorded and will not be available to remote participants.)
Spotlight on AccessMap Multimodal
Discover the latest advancements in accessible transportation with a spotlight on the recent deployment of AccessMap Multimodal. The session will highlight the personalized trip planner for travelers with disabilities and provide insights into the user experience, including the use of screen readers.
Ben Taskar Memorial Distinguished Lecture: Dragomir Anguelov: Toward Total Scene Understanding for Autonomous Driving
In this engaging lecture, Drago Anguelov will delve into recent Waymo research on performant ML models and architectures that handle the variety and complexity of real-world environments in autonomous driving. He will also discuss the impact of progress in building Autonomous Driving agents on people with disabilities and explore current open questions about enhancing embodied AI agent capabilities with ML.
Congratulations to Anat Caspi on receiving the 2023 Human Rights Education Award from the Seattle Human Rights Commission!
Caspi, a CREATE associate director and the founder and director of the Taskar Center for Accessible Technology, thanked the commission for recognition of her individual dedication and emphasized that it also celebrates the collective efforts of the Taskar Center community.
In her role as CREATE’s Director of Strategy and Operations, Olivia Banner, Ph.D., will help develop and oversee organizational strategy, design and implement new programs, manage center operations, and help ensure a sustainable trajectory of high-quality work in service of CREATE’s core mission.
Banner is a disabled author and educator who has taught courses on disability, technology, and media. She comes to Seattle and the UW from the University of Texas at Dallas, where she was an associate professor of Critical Media Studies. She is the author of Communicative Biocapitalism: The Voice of the Patient in Digital Health and the Health Humanities. Her new book about technology, psychiatry, and practices of mutual care is forthcoming with Duke University Press. Her research has been published in Catalyst: Feminism, Theory, Technoscience, Literature and Medicine and is forthcoming in Disability Studies Quarterly.
Banner says she looks forward to integrating disability principles into projects with tangible effects on disabled people’s lives, including AI + Accessibility, integrating disabled perspectives into projects, and race, technology, and disability, areas that align with her previous academic work. She is personally invested in fostering just technological futures through collaborative work and is very excited about the Center’s aim of expanding access through community partnerships.
CREATE Director Jennifer Mankoff is equally excited about the vision Banner brings for CREATE’s future, her policy experience, her administrative skills, and her commitment to amplifying the voices of those she serves. “Her principles and commitment to intersectional work caught our attention in our conversations about the new role. We are so lucky to have her joining CREATE!” says Mankoff.
In her research, scholarship, and teaching, Banner has centered disability knowledge as a method for envisioning technological futures. Her work extends to multiple collaborative projects, including co-teaching a seminar on surveillance with a computer science professor, serving on a Lancet-sponsored commission developing policies for global digital health development, and co-directing Social Practice & Community Engagement Media, a lab that used low-tech methods to reimagine campus practices of care. Toward the goal of improving access on the UT Dallas campus, Banner conducted critical access mapping projects, led Teach-Ins and workshops on disability and equity and on accessible course design, and served on the University Accessibility Committee.
Having served as managing editor of an academic journal and as Associate Dean of Graduate Studies for her School, she also brings professional experience working with faculty, students, staff, and community members from varied disciplines and professions, and anticipates generative conversations on the horizon. She joins CREATE eager to support and enhance its visions of accessible and equitable technology.
People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich‘s Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.
ARTennis is their prototype system that tracks tennis balls and enhances their visual saliency from a first-person point of view (POV). Recent advances in deep learning have produced models like TrackNet, a neural network that tracks tennis balls in third-person recordings of tennis games and has been used to improve sports viewing for LV people. To move from viewing to playing, the team first built a dataset of first-person POV images by having the authors wear an AR headset while playing tennis. They then streamed video from the AR glasses to a back-end server, analyzed the frames with a custom-trained deep learning model, and sent the results back for real-time overlaid visualization.
After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.
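On the client side, the overlay step is conceptually simple. Below is a minimal sketch in Python with OpenCV, assuming the back-end server returns the ball’s pixel coordinates for each frame; the function, colors, and sizes are illustrative assumptions, not the team’s actual code.

```python
import cv2  # OpenCV, used here only for drawing the overlay

def draw_ball_overlay(frame, ball_xy, radius=18):
    """Overlay a high-contrast ring and crosshair on the tracked ball.

    `ball_xy` is the (x, y) pixel position returned by the tracking
    server for this frame, or None if no ball was detected.
    """
    if ball_xy is None:
        return frame  # no detection this frame; show the raw view
    x, y = int(ball_xy[0]), int(ball_xy[1])
    # Bright, high-contrast ring to boost the ball's visual saliency.
    cv2.circle(frame, (x, y), radius, (0, 255, 0), thickness=3)
    # Crosshair lines extending past the ring help a low-vision player
    # find the ball even at the edge of their field of view.
    cv2.line(frame, (x - 2 * radius, y), (x + 2 * radius, y), (0, 255, 0), 2)
    cv2.line(frame, (x, y - 2 * radius), (x, y + 2 * radius), (0, 255, 0), 2)
    return frame
```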
Early evaluations suggest the prototype could help LV people enjoy ball-based sports, but there’s plenty of further work to be done. A larger field-of-view (FOV) and audio cues would improve a player’s ability to track the ball. The headset’s weight and bulk, along with its expense, are also factors the team expects to improve with time, as Lee noted in an interview on Oregon Public Broadcasting.
“Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball.”
Students from CSE 493 and additional CREATE researchers shared their work at the December 2023 CREATE Research Showcase. The event was well attended by CREATE students, faculty, and community partners. Projects included an analysis of the accessibility of transit stations and a tool to aid navigation within them; an app to help colorblind people of color pick makeup; and an exploration of the accessibility of generative AI that also considered the ableist implications of limited training data.
CSE 493 student projects
In its first offering in Autumn quarter 2023, CSE’s undergraduate accessibility class focused on the importance of centering first-person accounts in disability-focused technology work. Students worked this quarter on assignments ranging from accessibility assessments of county voting systems to disability justice analyses to open-ended final projects.
Alti is a Discord bot that automatically generates alt text for any image uploaded to Discord. Once added to a Discord server, Alti uses artificial intelligence (AI) to generate alt text for each uploaded image.
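As a minimal sketch of how such a bot could be wired up using the discord.py library (the captioning function is a hypothetical placeholder for whatever image-description model is used):

```python
import discord

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

async def generate_alt_text(image_url: str) -> str:
    # Hypothetical placeholder: call an image-captioning model or API here.
    raise NotImplementedError

@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return  # ignore our own replies and other bots
    for attachment in message.attachments:
        if attachment.content_type and attachment.content_type.startswith("image/"):
            alt_text = await generate_alt_text(attachment.url)
            # Reply in-channel so screen-reader users get the description
            # right next to the original image.
            await message.reply(f"Alt text: {alt_text}")

client.run("YOUR_BOT_TOKEN")
```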
Allows individuals with color blindness to upload an image of their skin and receive a matching makeup foundation. Additionally, individuals can upload existing swatches and receive filtered photos that better show the matching accuracy (see the color-matching sketch below).
Arianna Montoya, Anusha Gani, Claris Winston, Joo Kim
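A rough sketch of one way the matching could work: average the skin tone of the uploaded photo and find the nearest shade in a catalog, comparing colors in the CIELAB space, which tracks perceived color difference better than raw RGB. The catalog, shade names, and approach are illustrative assumptions, not the team’s implementation.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab

# Hypothetical foundation catalog: shade name -> representative sRGB color.
FOUNDATIONS = {
    "porcelain": (244, 226, 210),
    "warm beige": (222, 188, 153),
    "golden tan": (198, 155, 112),
    "deep mocha": (121, 77, 52),
}

def match_foundation(skin_image_path: str) -> str:
    """Return the catalog shade closest to the photo's average skin tone."""
    rgb = io.imread(skin_image_path)[..., :3] / 255.0
    skin_lab = rgb2lab(rgb).reshape(-1, 3).mean(axis=0)
    best, best_dist = None, float("inf")
    for name, srgb in FOUNDATIONS.items():
        shade_lab = rgb2lab(np.array([[srgb]]) / 255.0)[0, 0]
        # Euclidean distance in CIELAB approximates perceptual difference.
        dist = np.linalg.norm(skin_lab - shade_lab)
        if dist < best_dist:
            best, best_dist = name, dist
    return best
```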
Parses menus on restaurants’ websites to provide information on how menu items meet dietary restrictions, supporting individuals with specific dietary requirements, such as vegan or vegetarian diets, and those with celiac disease (see the keyword-tagging sketch below).
Julia Tawfik, Kenneth Ton, Balbir Singh, Aaron Brown
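A toy sketch of the tagging idea; the keyword rules are illustrative, and a deployed parser would need real menu scraping and far more robust ingredient analysis.

```python
# Keyword rules: dietary tag -> words that signal it on a menu.
DIET_FLAGS = {
    "vegan": {"vegan"},
    "vegetarian": {"vegetarian", "vegan"},  # vegan items are vegetarian too
    "gluten-free": {"gluten-free", "gf"},   # relevant for celiac disease
}

def tag_menu_item(description: str) -> list[str]:
    """Return the dietary tags whose keywords appear in a menu item."""
    text = description.lower()
    return [diet for diet, words in DIET_FLAGS.items()
            if any(word in text for word in words)]

print(tag_menu_item("Garden bowl (vegan, GF)"))
# ['vegan', 'vegetarian', 'gluten-free']
```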
A “Laser Cutter Generator” interface that displays a form to select shapes and set dimensions for SVG creation.
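A minimal sketch of the SVG-generation step using the svgwrite library; the shape options and dimensions are illustrative, not the project’s actual code.

```python
import svgwrite  # pip install svgwrite

def make_cut_file(shape: str, width_mm: float, height_mm: float,
                  path: str = "cut.svg") -> None:
    """Write a simple SVG outline sized for a laser cutter."""
    dwg = svgwrite.Drawing(path, size=(f"{width_mm}mm", f"{height_mm}mm"))
    # A hairline stroke with no fill is a common "cut line" convention.
    style = {"stroke": "red", "fill": "none", "stroke_width": 0.1}
    if shape == "rectangle":
        dwg.add(dwg.rect(insert=(0, 0), size=(width_mm, height_mm), **style))
    elif shape == "ellipse":
        dwg.add(dwg.ellipse(center=(width_mm / 2, height_mm / 2),
                            r=(width_mm / 2, height_mm / 2), **style))
    dwg.save()

make_cut_file("rectangle", 80, 40)
```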
CREATE student and faculty projects
Designing and Implementing Social Stories in Technology: Enhancing Collaboration for Parents and Children with Neurodiverse Needs
Elizabeth Castillo, Annuska Zolyomi, Ting Zhou
Our research project, conducted through interviews in Panama, focuses on the user-centered design of technology to enhance autism social stories for children with neurodiverse needs. We aim to improve collaboration between parents, therapists, and children by creating a platform for authoring, sharing, and tracking the usage of social stories. While our initial research was conducted in Panama, we are eager to collaborate with individuals from Japan and other parts of the world where we have connections, to further advance our work in supporting neurodiversity.
An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility
Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff
With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI in particular. To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.
Machine Learning for Quantifying Rehabilitation Responses in Children with Cerebral Palsy
Charlotte D. Caskey, Siddhi R. Shrivastav, Alyssa M. Spomer, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele
Increases in step length and decreases in step width are often a rehabilitation goal for children with cerebral palsy (CP) participating in long-term treadmill training. But it can be challenging to quantify the non-linear, highly variable, and interactive response to treadmill training when parameters such as treadmill speed increase over time. Here we use a machine learning method, Bayesian Additive Regression Trees, to show that there is a direct effect of short-burst interval locomotor treadmill training on increasing step length and modulating step width for four children with CP, even after controlling for the confounding parameters of speed, treadmill incline, and time within session.
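As a rough illustration of the modeling setup (not the study’s code or data), a Bayesian Additive Regression Trees model in Python with the PyMC-BART library might look like the sketch below, where the columns of X would hold training parameters and confounders such as treadmill speed, incline, and time within session, and y would hold the gait outcome.

```python
import pymc as pm
import pymc_bart as pmb

def fit_gait_model(X, y):
    """Fit step length (y) as a flexible function of training parameters (X)."""
    with pm.Model() as model:
        mu = pmb.BART("mu", X, y, m=50)     # sum of 50 regression trees
        sigma = pm.HalfNormal("sigma", 1.0)
        pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
        idata = pm.sample()                 # posterior over the tree ensemble
    return model, idata
```

Posterior predictions from such a model can then be compared with and without the training variable of interest to estimate its direct effect while holding confounders fixed.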
Spinal Stimulation Improves Spasticity and Motor Control in Children with Cerebral Palsy
Victoria M. Landrum, Charlotte D. Caskey, Siddhi R. Shrivastav, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele
Cerebral palsy (CP) is caused by a brain injury around the time of birth that leads to less refined motor control and causes spasticity, a velocity-dependent stretch reflex that can make it harder to bend and move joints, and thus impairs walking function. Many surgical interventions that target spasticity lead to negative impacts on walking function and motor control, but transcutaneous spinal cord stimulation (tSCS), a novel, non-invasive intervention, may amplify the neurological response to traditional rehabilitation methods. Results from a 4-subject pilot study indicate that long-term use of tSCS with treadmill training led to improvements in spasticity and motor control, indicating better walking function.
Adaptive Switch Kit
Kate Bokowy, Mia Hoffman, Heather A. Feldner, Katherine M. Steele
We are developing a switch kit for clinicians and parents to build customizable switches for children with disabilities. These switches are used to help children play with computer games and adapted toys as an early intervention therapy.
We explored the benefits of a dynamic harness system using partial body weight support (PBWS) within an enriched play environment on gross motor development for infants with Down syndrome, using a randomized crossover study design. The effectiveness of the overall intervention was clearly evident: it positively affected activity levels. However, the direct impact of the harness itself on gross motor development remains unclear.
StreetComplete for Better Pedestrian Mapping
Sabrina Fang, Kohei Matsushima
StreetComplete is a gamified, structured, and user-friendly mobile application for users to improve OpenStreetMap data by completing pilot quests. OpenStreetMap is an open-source, editable world map created and maintained by a community of volunteers. The goal of this research project is to design pilot quests in StreetComplete to accurately collect information about “accessibility features,” such as sidewalk width and the quality of lighting, to improve accessibility for pedestrian mapping.
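For concreteness, a completed quest writes simple key-value tags back to OpenStreetMap. The tags below are illustrative examples of accessibility-relevant data such quests might collect: sidewalk width in metres, presence of lighting, surface type, and a lowered curb at a crossing.

```
sidewalk:width=1.8
lit=yes
surface=asphalt
kerb=lowered
```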
Transit Stations Are So Confusing!
Jackie Chen, Milena Johnson, Haochen Miao, and Raina Scherer
We are collecting data on the wayfinding nodes in four different Sound Transit light rail stations and interpreting them through the GTFS-pathways schema. In the future, we plan to visualize this information through AccessMap so that it can be referenced by all users.
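As an illustration of the target schema (the station and stop IDs are made up), GTFS-pathways describes station interiors as rows in a pathways.txt file, where a pathway_mode of 2 is stairs, 4 is an escalator, and 5 is an elevator:

```
pathway_id,from_stop_id,to_stop_id,pathway_mode,is_bidirectional,stair_count
p1,entrance_5th,mezzanine,2,1,24
p2,entrance_5th,mezzanine,5,1,
p3,mezzanine,platform_nb,4,0,
```

A trip planner can then route a traveler who cannot use stairs through p2, the elevator pathway, instead of p1.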
The project is born out of a commitment to enhance the quality of life for individuals with disabilities in the city of Seattle. The primary objective is to systematically analyze and improve the allocation and management of curbside parking spaces designated for disabled individuals. By improving accessibility for individuals with disabilities, the project contributes to fostering a more equitable and welcoming urban environment.
Developing Accessible Tele-Operation Interfaces for Assistive Robots with Occupational Therapists
Vinitha Ranganeni, Maya Cakmak
The research is motivated by the potential of tele-operation interfaces for assistive robots, such as the Stretch RE2, to enhance the independence of individuals with motor limitations in completing activities of daily living (ADLs). We explored the impact of customizing tele-operation interfaces and deployed the Stretch RE2 in a home for several weeks; facilitated by an occupational therapist, the deployment enabled a user with quadriplegia to perform daily activities more independently. Ultimately, this work aims to empower users and occupational therapists to optimize assistive robots for individual needs.
HuskyADAPT: Accessible Design and Play Technology
HuskyADAPT Student Organization
HuskyADAPT is a multidisciplinary community at the University of Washington that supports the development of accessible design and play technology. Our community aims to initiate conversations regarding accessibility and ignite change through engineering design. It is our hope that we can help train the next generation of inclusively minded engineers, clinicians, and educators to help make the world a more equitable place.
A11yBoard for Google Slides: Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users
Zhuohao (Jerry) Zhang, Gene S-H Kim, Jacob O. Wobbrock
Presentation software is largely inaccessible to blind users due to the limitations of screen readers with 2-D artboards. This study introduces an advanced version of A11yBoard, initially developed by Zhang & Wobbrock (CHI 2023), which now integrates with Google Slides and addresses real-world challenges. The enhanced A11yBoard, developed through participatory design including a blind co-author, demonstrates through case studies that blind users can independently read and create slides, leading to design guidelines for accessible digital content creation tools.
“He could go wherever he wanted”: Driving Proficiency, Developmental Change, and Caregiver Perceptions following Powered Mobility Training for Children 1-3 Years with Disabilities
Heather A. Feldner, PT, MPT, PhD; Anna Fragomeni, PT; Mia Hoffman, MS; Kim Ingraham, PhD; Liesbeth Gijbels, PhC; Kiana Keithley, SPT; Patricia K. Kuhl, PhD; Audrey Lynn, SPT; Andrew Meltzoff, PhD; Nicole Zaino, PhD; Katherine M. Steele, PhD
The objective of this study was to investigate how a powered mobility intervention for young children (ages 1-3 years) with disabilities impacted: 1) Driving proficiency over time; 2) Global developmental outcomes; 3) Learning tool use (i.e., joystick activation); and 4) Caregiver perceptions about powered mobility devices and their child’s capabilities.
Access to Frequent Transit in Seattle
Darsh Iyer, Sanat Misra, Angie Niu, Dr. Anat Caspi, Cole Anderson
The research project focuses on analyzing access to public transit in Seattle, particularly frequent transit stops, by considering factors like median household income. We wrote scripts in QGIS, analyzed walksheds, and examined demographic data surrounding Seattle’s frequent transit stops to understand the equity of transit access in different neighborhoods. Our goal was to visualize and analyze the data to gain insights into the relationship between transit access, median household income, and other demographic factors in Seattle.
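A rough sketch of this kind of analysis in Python with GeoPandas appears below. It is not the team’s actual QGIS scripting: a fixed-radius buffer stands in for a true network walkshed, and the file paths and income column name are placeholders.

```python
import geopandas as gpd

def income_near_stops(stops_path, tracts_path, walk_m=800):
    """Average census-tract income within a simple buffer 'walkshed' per stop."""
    stops = gpd.read_file(stops_path).to_crs(epsg=32610)   # UTM 10N, metres
    tracts = gpd.read_file(tracts_path).to_crs(epsg=32610)
    stops["geometry"] = stops.buffer(walk_m)               # ~10-minute walk
    joined = gpd.sjoin(stops, tracts[["median_income", "geometry"]],
                       predicate="intersects")
    # One income estimate per frequent-transit stop.
    return joined.groupby(joined.index)["median_income"].mean()
```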
Health Service Accessibility
Seanna Qin, Keona Tang, Anat Caspi, Cole Anderson
Our research aims to discover any correlation between median household income and driving duration from census tracts to the nearest urgent care location in the Bellevue and Seattle region.
Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means
Ather Sharif, Ruican Zhong, and Yadi Wang
Incorporating uncertainty in data visualizations is critical for users to interpret and reliably draw informed conclusions from the underlying data. However, visualization creators conventionally convey the information regarding uncertainty in data visualizations using visual techniques (e.g., error bars), which disenfranchises screen-reader users, who may be blind or have low vision. In this preliminary exploration, we investigated ways to convey uncertainty in data visualizations to screen-reader users.
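One simple non-visual rendering is textual: turn the numbers behind an error bar into a sentence a screen reader can speak. The sketch below is illustrative phrasing only, not the renderings the study evaluated.

```python
def describe_uncertainty(label: str, mean: float, ci_low: float,
                         ci_high: float, unit: str = "") -> str:
    """Render a point estimate and its interval as screen-reader-friendly text."""
    return (f"{label}: estimated {mean:g}{unit}, with a 95 percent "
            f"confidence interval from {ci_low:g}{unit} to {ci_high:g}{unit}.")

print(describe_uncertainty("Average commute", 32.4, 29.8, 35.1, " minutes"))
# Average commute: estimated 32.4 minutes, with a 95 percent confidence
# interval from 29.8 minutes to 35.1 minutes.
```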
Training a robot to feed people presents an array of challenges for researchers. Foods come in a nearly endless variety of shapes and states (liquid, solid, gelatinous), and each person has a unique set of needs and preferences. A team led by CREATE Ph.D. students Ethan K. Gordon and Amal Nanavati created a set of 11 actions a robotic arm can make to pick up nearly any food attainable by fork.
In tests with this set of actions, the robot picked up the foods more than 80% of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal. UW News talked with co-lead authors Gordon and Nanavati, both doctoral students in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding. The team presented its findings Nov. 7 at the 2023 Conference on Robotic Learning in Atlanta.
The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?
Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.
Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.
EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.
In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.
In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.
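To make that concrete, here is a schematic of how a robot might choose among a small discrete library of trajectories and learn per-food success rates over repeated bites. This epsilon-greedy loop is an illustrative stand-in, not the paper’s actual learning method.

```python
import random
from collections import defaultdict

ACTIONS = list(range(11))     # the library of 11 fork trajectories

successes = defaultdict(int)  # (food, action) -> successful pickups
attempts = defaultdict(int)   # (food, action) -> total tries

def choose_action(food: str, epsilon: float = 0.1) -> int:
    """Usually pick the best-known trajectory for this food;
    occasionally try a random one to keep exploring."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    def rate(a):
        n = attempts[(food, a)]
        return successes[(food, a)] / n if n else 1.0  # optimistic if untried
    return max(ACTIONS, key=rate)

def record_outcome(food: str, action: int, picked_up: bool) -> None:
    attempts[(food, action)] += 1
    successes[(food, action)] += int(picked_up)
```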
We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, and put the robot either on a wheelchair, if they have the mounting apparatus ready, or a tripod next to their wheelchair, and run through an entire meal.
For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?
EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.
TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?
Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.
There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.
A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.
What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?
EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.
AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities such as breaking your arm. All of us might face disability at some time as we age and we want to make sure that we have the tools necessary to ensure that we can all live dignified lives and independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality of life improvement will hopefully open up greater access.
Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.
Adapted ride-on cars (ROC) are an affordable power mobility training tool for young children with disabilities. But weather and the lack of adequate drive space can create barriers to families’ adoption of their ROC.
CREATE Ph.D. student Mia E. Hoffman is the lead author on a paper that investigates the relationship between the built environment and ROC usage.
With her co-advisors Kat Steele and Heather A. Feldner, Jon E. Froehlich (all three CREATE associate directors), and Kyle N. Winfree as co-authors, Hoffman found that play sessions took place more often within the participants’ homes. But when the ROC was used outside, children engaged in longer play sessions, actively drove for a larger portion of the session, and covered greater distances.
Most notably, they found that children drove more in pedestrian-friendly neighborhoods and when in proximity to accessible paths, demonstrating that providing an accessible place for a child to move, play, and explore is critical in helping a child and family adopt the mobility device into their daily life.
PAVE’s mission is to provide support, training, information, and resources to empower and give voice to individuals, youth, and families living with disabilities throughout Washington State.
“Without technology—accessible technology—PAVE would never be able to support those who rely on us for accurate information and resources,” says Barb Koumjian, Project Coordinator for Lifespan Respite WA at PAVE. This includes the highly accessible PAVE website, with links to parent training programs, family health resources, and support systems.
“All of us at PAVE are deeply committed to addressing the concerns of parents worried about their loved one in school, navigating medical supports, or caregiving for a family member. PAVE’s goal is to provide a seamless online experience, allowing everyone to find information quickly, get support, and hopefully get some peace of mind,” adds Communications Specialist Nicol Walsh.
PAVE supports accessibility via adaptive technology: “For the families I support at PAVE, there is an uprising of parents advocating for AAC, in any capacity, at an early age with an autism diagnosis,” says Shawnda Hicks, PAVE Coordinator. “Giving children communication in early learning stages reduces frustration and high behaviors.”
“As a statewide organization, we’re deeply committed to accessibility and equity for everyone, and we value our collaborations with UW CREATE for all we serve in Washington,” says Tracy Kahlo, PAVE Executive Director.
Thanks to these PAVE staff members for contributing words, data, and perspective: Barb Koumjian, Nicol Walsh, Shawnda Hicks, and Tracy Kahlo.
Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.
This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.
“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”
The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.
Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.
Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.
The results of the other tests researchers selected were equally mixed:
One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.
“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”
The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.” The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”
Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.
“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”
Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.
A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.
Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.
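A schematic of the core interaction, mapping a touch point to the object under it and composing a spoken description, might look like the sketch below; the fields and wording are illustrative, not A11yBoard’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SlideObject:
    name: str    # e.g., "title textbox"
    x: float     # left edge, in slide coordinates
    y: float     # top edge
    w: float
    h: float
    color: str

def describe_at(objects: list[SlideObject], tx: float, ty: float) -> str:
    """Return screen-reader text for the object under the finger."""
    for obj in objects:
        if obj.x <= tx <= obj.x + obj.w and obj.y <= ty <= obj.y + obj.h:
            return (f"{obj.name}, {obj.color}, at {obj.x:.0f}, {obj.y:.0f}, "
                    f"{obj.w:.0f} wide by {obj.h:.0f} tall")
    return "empty canvas"
```

Unlike Z-order traversal, this spatial lookup reflects where objects actually sit on the slide, which is what makes the layout legible by touch.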
“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”
A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.
For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.
The testers also highlighted spots for improvement: remaining continuously aware of objects’ positions while editing them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.
Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.
“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”
RASSAR – Room Accessibility and Safety Scan in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a real-time model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR.
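As a sketch of the rule-checking step (the object classes, height threshold, and messages are illustrative, not RASSAR’s actual rule set):

```python
# Accessibility rules keyed by detected object class.
RULES = {
    # class -> (max height in metres, issue to display in AR)
    "light_switch": (1.2, "switch may be above comfortable reach range"),
    "door_handle": (1.2, "handle may be above comfortable reach range"),
}

def check_detection(obj_class: str, height_m: float) -> str | None:
    """Return an issue label to overlay in AR, or None if the object passes."""
    rule = RULES.get(obj_class)
    if rule and height_m > rule[0]:
        return rule[1]
    return None
```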
RASSAR researchers envision the tool as an aid in building and validating new construction, planning renovations, updating homes for health concerns, or conducting telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:
Augmented Reality to Support Accessibility
CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.
As has become customary, CREATE faculty, students and alumni will have a large presence at the 2023 ASSETS Conference. It’ll be quiet on campus October 23-25 with these folks in New York.
Understanding Digital Content Creation Needs of Blind and Low Vision People
Monday, Oct 23 at 1:40 p.m. Eastern time
Lotus Zhang, Simon Sun, Leah Findlater

Notably Inaccessible — Data Driven Understanding of Data Science Notebook (In)Accessibility
Monday, Oct 23 at 4 p.m. Eastern time
Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff

A Large-Scale Mixed-Methods Analysis of Blind and Low-vision Research in ACM and IEEE
Tuesday, Oct 24 at 11:10 a.m. Eastern time
Yong-Joon Thoo, Maximiliano Jeanneret Medina, Jon E. Froehlich, Nicolas Ruffieux, Denis Lalanne

Working at the Intersection of Race, Disability and Accessibility
Tuesday, Oct 24 at 1:40 p.m. Eastern time
Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, Jennifer Mankoff

Comparing Locomotion Techniques in Virtual Reality for People with Upper-Body Motor Impairments
Wednesday, Oct 25 at 8:45 a.m. Eastern time
Rachel L. Franz, Jinghan Yu, Jacob O. Wobbrock

Jod: Examining the Design and Implementation of a Videoconferencing Platform for Mixed Hearing Groups
Wednesday, Oct 25 at 11:10 a.m. Eastern time
Anant Mittal, Meghna Gupta, Roshni Poddar, Tarini Naik, SeethaLakshmi Kuppuraj, James Fogarty, Pratyush Kumar, Mohit Jain

Azimuth: Designing Accessible Dashboards for Screen Reader Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, Jennifer Mankoff

Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Zhuohao Zhang, Gene S-H Kim, Jacob O. Wobbrock
Experience Reports
An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility
Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

Maintaining the Accessibility Ecosystem: a Multi-Stakeholder Analysis of Accessibility in Higher Education
Kelly Avery Mack, Natasha A Sidik, Aashaka Desai, Emma J McDonnell, Kunal Mehta, Christina Zhang, Jennifer Mankoff
TACCESS Papers
“I’m Just Overwhelmed”: Investigating Physical Therapy Accessibility and Technology Interventions for People with Disabilities and/or Chronic Conditions
Momona Yamagami, Kelly Mack, Jennifer Mankoff, Katherine M. Steele
The Global Care Ecosystems of 3D Printed Assistive Devices
Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, Jon Schull, Jennifer Mankoff
Posters
Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means
Ather Sharif, Ruican Zhong, Yadi Wang

U.S. Deaf Community Perspectives on Automatic Sign Language Translation
Nina Tran, Richard E. Ladner, Danielle Bragg (Microsoft Research)
Workshops
Bridging the Gap: Towards Advancing Privacy and Accessibility
Rahaf Alharbi, Robin Brewer, Gesu India, Lotus Zhang, Leah Findlater, and Abigale Stangl

Tackling the Lack of a Practical Guide in Disability-Centered Research
Emma McDonnell, Kelly Avery Mack, Kathrin Gerling, Katta Spiel, Cynthia Bennett, Robin N. Brewer, Rua M. Williams, and Garreth W. Tigwell

A11yFutures: Envisioning the Future of Accessibility Research
Foad Hamidi, Kirk Crawford, Jason Wiese, Kelly Avery Mack, Jennifer Mankoff
Demos
A Demonstration of RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Wyatt Olson, Jon E. Froehlich

BusStopCV: A Real-time AI Assistant for Labeling Bus Stop Accessibility Features in Streetscape Imagery
Chaitanyashareef Kulkarni, Chu Li, Jaye Ahn, Katrina Oi Yau Ma, Zhihan Zhang, Michael Saugstad, Kevin Wu, Jon E. Froehlich; with Valerie Novack and Brent Chamberlain (Utah State University)
Papers and presentations by CREATE associates and alumni
Understanding Challenges and Opportunities in Body Movement Education of People who are Blind or have Low Vision
Monday, Oct 23 at 4:00 p.m. Eastern time
Madhuka Thisuri De Silva, Leona M Holloway, Sarah Goodwin, Matthew Butler

AdaptiveSound: An Interactive Feedback-Loop System to Improve Sound Recognition for Deaf and Hard of Hearing Users
Tuesday, Oct 24 at 8:45 a.m. Eastern time
Hang Do, Quan Dang, Jeremy Zhengqi Huang, Dhruv Jain

“Not There Yet”: Feasibility and Challenges of Mobile Sound Recognition to Support Deaf and Hard-of-Hearing People
Tuesday, Oct 24 at 8:45 a.m. Eastern time
Jeremy Zhengqi Huang, Hriday Chhabria, Dhruv Jain

The Potential of a Visual Dialogue Agent In a Tandem Automated Audio Description System for Videos
Tuesday, Oct 24 at 4:00 p.m. Eastern time
Abigale Stangl, Shasta Ihorn, Yue-Ting Siu, Aditya Bodi, Mar Castanon, Lothar D Narins, Ilmi Yoon
Two recent publications address unnecessary challenges faced by parents with disabilities and how those challenges are made extraordinary by a legal system that is not protecting parents or their children.
The National Council on Disability report finds that roughly 4 million parents in the U.S. who are disabled (about 6% of parents) are the only distinct community that must struggle to retain custody of their children.
While we have moved (somewhat) beyond the blatant eugenics of the 20th century, some of those tactics persist. Further, “parents with disabilities are the only distinct community of Americans who must struggle to retain custody of their children.” This is also connected to other intersectional factors. For example, “Because children from African American and Native American families are more likely to be poor, they are more likely to be exposed to mandated reporters as they turn to the public social service system for support in times of need…”
Research has shown that exposure bias is evident at each decision point in the child welfare system.
Author Robyn Powell details how centers for independent living and other existing programs have the potential to support these parents. Instead, “The child welfare system, more accurately referred to as the family policing system, employs extensive surveillance that disproportionately targets marginalized families, subjecting them to relentless oversight.”
One particular story in that article highlights the role of technology in this ‘policing’: “…just as the Hackneys were preparing to bring [their 8-month-old] home, the Allegheny County DHS [alleged] negligence due to [the parents’] disabilities… More than a year later, their toddler remains in the foster care system, an excruciating separation for the Hackneys. The couple is left questioning whether DHS’ use of a predictive artificial intelligence (“AI”) tool unfairly targeted them based on their disabilities.”
As technologists, we wonder whether this AI tool was tested for racial or disability bias. It is essential that the technologies we create are equitable before they are deployed.
What are the opportunities for research to engage the intersection of race and disability?
What is the value of considering how constructs of race and disability work alongside each other within accessibility research studies?
Two CREATE Ph.D. students have explored these questions and found little focus on this intersection within accessibility research. In their paper, Working at the Intersection of Race, Disability and Accessibility (PDF), they observe that we’re missing out on the full nuance of marginalized and “otherized” groups.
The Allen School Ph.D. students, Aashaka Desai and Aaleyah Lewis, and collaborators will present their findings at the ASSETS 2023 conference on Tuesday, October 24.
Spurred by the conversation at the Race, Disability & Technology research seminar earlier in the year, members of the team realized they lacked a framework for thinking about work at this intersection. In response, they assembled a larger team to conduct an analysis of existing work at this intersection within accessibility research.
The resulting paper presents a review of considerations for engaging with race and disability in the research and education process. It offers analyses of exemplary papers, highlights opportunities for intersectional engagement, and presents a framework to explore race and disability research. Case studies exemplify engagement at this intersection throughout the course of research, in designs of socio-technical systems, and in education.
Case studies
Representation in image descriptions: How to describe appearance, factoring in preferences for self-descriptions of identity, concerns around misrepresentation by others, interest in knowing others’ appearance, and guidance for AI-generated image descriptions.
Experiences of immigrants with disabilities: How cultural barriers, including cultural disconnects and differing levels of stigma around disability between refugees’ home and host countries, compound language barriers.
Designing for intersectional, interdependent accessibility: How access practices as well as cultural and racial practices influence every stage of research design, method, and dissemination, in the context of work with communities of translators.
Authors, left to right: Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, and Jennifer Mankoff
Authors
Christina N. Harrington, Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon
The U.S. Department of Health and Human Services’ (HHS) Office for Civil Rights published a proposed update to the HHS regulations implementing Section 504 of the Rehabilitation Act of 1973, which prohibits disability discrimination by recipients of federal funding.
If you have any questions, reach out to CREATE at create-contact@uw.edu.
This is the first comprehensive update to the regulations since they were first put in place more than 40 years ago. The proposed rule includes new requirements prohibiting discrimination in the areas of:
Medical treatment
The use of value assessments
Web, mobile, and kiosk accessibility
Requirements for accessible medical equipment, so that persons with disabilities have an opportunity to participate in or benefit from health care programs and activities that is equal to the opportunity afforded others.
For 60 days starting on September 14, HHS will be seeking public comment on the proposed rule. Input from the disability and aging communities is essential!
Note that CREATE also provided a review guide and CREATE’s response in an accessible and tagged PDF document (53 pages) for a previous public comment invitation, specifically for the U.S. Department of Justice in the areas of digital accessibility.
CREATE has submitted a response, in collaboration with colleagues within the UW and at peer institutions, to the U.S. Department of Justice (DoJ) proposal for new digital accessibility guidelines for entities that receive federal funds (schools, universities, agencies, etc.). The DoJ proposal invited review of the proposed guidelines.
CREATE’s official response, in collaboration with UW and other colleagues, is posted on the DOJ site temporarily.
If you have any questions, reach out to CREATE at create-contact@uw.edu.
The response commends the Department of Justice for addressing the issue of inaccessible websites and mobile apps for Title II entities through the approach proposed through the Notice of Proposed Rulemaking (NPRM). The future popularity of websites and apps was not anticipated when the Americans with Disabilities Act was signed into law in 1990. Since then, websites, non-web documents, mobile apps, and other software have become popular ways for Title II entities to reach out and inform the public, to offer benefits and activities, and to use as a part of their offerings to members of the public. In recent years, many entities have asked for clearer legal guidance, so we appreciate the Department’s efforts to address these issues in proposed rulemaking.
The designers of the Virtual Traffic Stop app aim to ease tensions and prevent misunderstandings between drivers and law enforcement during traffic stops. Deaf or hard-of-hearing drivers can use the app to communicate with law enforcement via text chat during the video call. Users can add family members and invite them to the chat for additional assistance.
A Gainesville, Florida, K-12 school has announced its endorsement of Virtual Traffic Stop and has encouraged parents and their children to sign up and start using the app. Currently, the app is being used by the University of Florida and Gainesville, Florida, police departments.
If your community is interested in using the app, contact Dr. Juan E. Gilbert, a former CREATE Advisory Board member and Chair of the Human-Centered Computing Department at the University of Florida, by calling 352-392-1527 or emailing juan@ufl.edu.
In September 2023, the Director of the National Institute on Minority Health and Health Disparities announced the designation of people with disabilities as a population with health disparities. The designation is one of several steps the National Institutes of Health (NIH) is taking to address health disparities faced by people with disabilities and ensure their representation in NIH research.
Dr. Eliseo J. Pérez-Stable, in consultation with Dr. Robert Otto Valdez, Director of the Agency for Healthcare Research and Quality, cited careful consideration of the National Advisory Council on Minority Health and Health Disparities’ final report, input from the disability community, and a review of the science and evidence.