CREATE Papers and Presentations at CHI 2024

This is a work in progress as there are many papers and presentations from CREATE researchers at CHI 2024, the ACM CHI conference on Human Factors in Computing Systems. We appreciate your patience! For a list of all papers from UW researchers, see DUB’s roundup.

Papers

A Virtual Reality Scene Taxonomy: Identifying and Designing Accessible Scene-Viewing Techniques

Rachel L. Franz, UW Information School (iSchool); Sasa Junuzovic, Microsoft; Martez E Mott, Microsoft.

An Emotion Translator: Speculative Design by Neurodiverse Dyads

Annuska Zolyomi, CREATE faculty and assistant professor at UW Bothell Computing & Software Systems; Jaime Snyder, iSchool.

BLIP: Facilitating the Exploration of Undesirable Consequences of Digital Technologies

Rock Yuren Pang and his advisor Katharina Reinecke, CREATE faculty member; Sebastin Santy; Rene Just — all from the Allen School of Computer Science & Engineering (Allen School).

“Caption It in an Accessible Way That Is Also Enjoyable”: Characterizing User-Driven Captioning Practices on TikTok

Emma J McDonnell, UW Human Centered Design & Engineering (HCDE); CREATE associate directors Jon E. Froehlich, Allen School, and Leah Findlater, HCDE; Tessa Eagle, University of California, Santa Cruz; Pitch Sinlapanuntakul, HCDE; Soo Hyun Moon, HCDE; Kathryn E Ringland, University of California, Santa Cruz.

Designing Accessible Obfuscation Support for Blind Individuals’ Visual Privacy Management

Lotus Zhang, HCDE; Abigale Stangl, former CREATE/HCDE postdoctoral researcher; Tanusree Sharma, University of Illinois Urbana-Champaign; Yu-Yun Tseng, University of Colorado; Inan Xu, University of California, Santa Cruz; Danna Gurari, University of Colorado; Yang Wang, University of Illinois Urbana-Champaign; Leah Findlater, HCDE.

Avery Mack, Allen School; and from Google: Rida Qadri, Remi Denton, Shaun K Kane, Cynthia L Bennett.

GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality

Jaewook Lee, CREATE Ph.D. student, Allen School; Jun Wang, Allen School; Elizabeth Brown, UW Linguistics; Liam Chu, UW Applied and Computational Mathematical Sciences; Sebastian S Rodriguez, University of Illinois Urbana-Champaign; Jon E. Froehlich, Allen School.

Playing on Hard Mode: Accessibility, Difficulty and Joy in Video Game Adoption for Gamers With Disabilities

Jesse J. Martinez, CREATE Ph.D. student, Allen School; James Fogarty and Jon E. Froehlich, Allen School.

RASSAR: Room Accessibility and Safety Scanning in Augmented Reality

Xia Su, Ph.D. student, Allen School; Kaiming Cheng, Allen School; Han Zhang, Allen School; Jaewook Lee, Allen School; Qiaochu Liu, Tsinghua University; Wyatt Olson, UW Art + Art History + Design; Jon E Froehlich, Allen School.

Empowering users with disabilities through customized interfaces for assistive robots

March 15, 2024

For people with severe physical limitations such as quadriplegia, the ability to tele-operate personal assistant robots could bring a life-enhancing level of independence and self-determination. Allen School Ph.D. candidate Vinitha Ranganeni and her advisor, CREATE faculty member Maya Cakmak, have been working to understand and meet the needs of users of assistive robots.

This month, Ranganeni and Cakmak presented a video at the Human Robot Interaction (HRI) conference that illustrates the practical (and touching) ways deploying an assistive robot in a test household has helped Henry Evans require a bit less from his caregivers and connect to his family.

The research was funded by NIA/NIH Phase II SBIR Grant #2R44AG072982-02 and NIBIB Grant #1R01EB034580-01.

Captioned video of Henry Evans demonstrating how he can control an assistive robot using the customized graphical user interface he co-designed with CREATE Ph.D. student/Allen School Ph.D. candidate Vinitha Ranganeni.

Their earlier study, Evaluating Customization of Remote Tele-operation Interfaces for Assistive Robots, assessed the usability and effectiveness of a customized tele-operation interface for the Stretch RE2 assistive robot. The authors show that no single interface configuration satisfies all users’ needs and preferences. Users perform better when using the customized interface for navigation, and the differences in preferences between participants with and without motor impairments are significant.

Last summer, as a robotics engineering consultant for Hello Robot, Ranganeni led the development of the interface for deploying an assistive robot in a test household, that of Henry and Jane Evans. Henry was a Silicon Valley CFO when a stroke suddenly left him non-speaking and with quadriplegia. His wife Jane is one of his primary caregivers.

The research team developed a highly customizable graphical user interface to control Stretch, a relatively simple and lightweight robot that has enough range of motion to reach from the floor to countertops.

Work in progress, but still meaningful independence

Stretch can’t lift heavy objects or climb stairs. Assistive robots are expensive, prone to shutting down, and the customization is still very complex and time-intensive. As noted in an IEEE Spectrum article about the Evans’ installation, getting the robot’s assistive autonomy to a point where it’s functional and easy to use is the biggest challenge right now, and more work needs to be done on providing simple interfaces, like voice control.

The article states, “Perhaps we should judge an assistive robot’s usefulness not by the tasks it can perform for a patient, but rather on what the robot represents for that patient, and for their family and caregivers. Henry and Jane’s experience shows that even a robot with limited capabilities can have an enormous impact on the user. As robots get more capable, that impact will only increase.”

In a few short weeks, Stretch made a difference for Henry Evans. “They say the last thing to die is hope. For the severely disabled, for whom miraculous medical breakthroughs don’t seem feasible in our lifetimes, robots are the best hope for significant independence,” says Henry.


Collaborator, advocate, and community researcher Tyler Schrenk

Though it has been many months since the death of Tyler Schrenk, a CREATE-funded researcher and a frequent collaborator, his impact is still felt in our collective research.

Tyler Schrenk making a presentation at the head of a lecture room. He has brown spiky hair, a full beard, and is seated in his power wheelchair.

Schrenk was a dedicated expert in the assistive technology field and led the way in teaching individuals and companies how to use assistive technologies to create independence. He was President & Executive Director of the Tyler Schrenk Foundation until his death in 2023. 


Related reading:

ARTennis attempts to help low vision players

December 16, 2023

People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich‘s Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.

The team includes Jaewook Lee (Ph.D. student, UW CSE), Devesh P. Sarda (MS/Ph.D. student, University of Wisconsin), Eujean Lee (Research Assistant, UW Makeability Lab), Amy Seunghyun Lee (BS student, UC Davis), Jun Wang (BS student, UW CSE), Adrian Rodriguez (Ph.D. student, UW HCDE), and Jon Froehlich.

Their paper, Towards Real-time Computer Vision and Augmented Reality to Support Low Vision Sports: A Demonstration of ARTennis, was published at the 2023 ACM Symposium on User Interface Software and Technology (UIST).

ARTennis is their prototype system capable of tracking and enhancing the visual saliency of tennis balls from a first-person point-of-view (POV). Recent advancements in deep learning have led to models like TrackNet, a neural network capable of tracking tennis balls in third-person recordings of tennis games, which has been used to improve sports viewing for LV people. To enhance playability, the team first built a dataset of first-person POV images by having the authors wear an AR headset and play tennis. They then streamed video from a pair of AR glasses to a back-end server, analyzed the frames using a custom-trained deep learning model, and sent back the results for real-time overlaid visualization.

After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.
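The client side of such a pipeline can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the team’s code: the /detect endpoint, the JSON response shape, and the use of a webcam in place of AR glasses are all assumptions.

```python
# Illustrative client loop: stream frames to a detection server and overlay
# the returned ball position with a high-contrast ring and crosshair.
import cv2
import requests

SERVER_URL = "http://localhost:8000/detect"  # hypothetical back-end endpoint

def stream_and_overlay(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)  # webcam stands in for AR glasses
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-encode the frame and send it to the server for inference.
        _, buf = cv2.imencode(".jpg", frame)
        resp = requests.post(SERVER_URL, files={"frame": buf.tobytes()})
        det = resp.json()  # assumed shape: {"found": true, "x": 312, "y": 148}
        if det.get("found"):
            x, y = int(det["x"]), int(det["y"])
            # Enhance saliency: high-contrast ring plus crosshair on the ball.
            cv2.circle(frame, (x, y), 18, (0, 255, 0), 3)
            cv2.drawMarker(frame, (x, y), (0, 255, 0),
                           markerType=cv2.MARKER_CROSS, markerSize=30, thickness=2)
        cv2.imshow("ARTennis-style overlay", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    stream_and_overlay()
```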

Early evaluations have provided feedback that the prototype could help LV people enjoy ball-based sports, but there’s plenty of further work to be done. A larger field-of-view (FOV) and audio cues would improve a player’s ability to track the ball. The weight and bulk of the headset, in addition to its expense, are also factors the team expects to improve with time, as Lee noted in an interview on Oregon Public Broadcasting.

“Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball.”

Jaewook Lee, Ph.D. student and lead author

Ph.D. student Jaewook Lee presents a research poster, Makeability Lab Demos - GazePointAR & ARTennis.

Winter 2023 CREATE Research Showcase

December 12, 2023

Students from CSE 493 and additional CREATE researchers shared their work at the December 2023 CREATE Research Showcase. The event was well attended by CREATE students, faculty, and community partners. Projects included, for example: an analysis of the accessibility of transit stations and a tool to aid navigation within transit stations; an app to help colorblind people of color pick makeup; and an exploration of the accessibility of generative AI that also considers the ableist implications of limited training data.

CSE 493 student projects

In its first offering in Autumn quarter 2023, CSE’s undergraduate Accessibility class focused on the importance of centering first-person accounts in disability-focused technology work. Students worked this quarter on assignments ranging from accessibility assessments of county voting systems to disability justice analyses to open-ended final projects.

Alti Discord Bot »

Keejay Kim, Ben Kosa, Lucas Lee, Ashley Mochizuki

Alti is a Discord bot that automatically generates alt text for any image that gets uploaded onto Discord. Once you add Alti to your Discord server, Alti will automatically generate alt text for the image using artificial intelligence (AI).
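As a sketch of how such a bot can hook into Discord, the snippet below uses the discord.py library; generate_alt_text is a hypothetical placeholder for the project’s image-captioning model, whose details aren’t described here.

```python
# Minimal Alti-style bot sketch using discord.py (pip install discord.py).
import discord

intents = discord.Intents.default()
intents.message_content = True  # needed to see message attachments
client = discord.Client(intents=intents)

def generate_alt_text(image_url: str) -> str:
    """Hypothetical stand-in for a call to an image-captioning model."""
    raise NotImplementedError

@client.event
async def on_message(message: discord.Message) -> None:
    if message.author.bot:  # ignore bots, including ourselves
        return
    for attachment in message.attachments:
        # Only caption attachments that Discord identifies as images.
        if attachment.content_type and attachment.content_type.startswith("image/"):
            await message.reply(f"Alt text: {generate_alt_text(attachment.url)}")

client.run("YOUR_BOT_TOKEN")  # token supplied by the bot's operator
```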

Enhancing Self-Checkout Accessibility at QFC »

Abosh Upadhyaya, Ananya Ganapathi, Suhani Arora

Makes self-checkout more accessible to visually impaired individuals

Complexion Cupid: Color Matching Foundation Program »

Ruth Aramde, Nancy Jimenez-Garcia, Catalina Martinez, Nora Medina

Allows individuals with color blindness to upload an image of their skin, and provides a makeup foundation match. Additionally, individuals can upload existing swatches and will be provided with filtered photos that better show the matching accuracy.

Twitter Content Warnings »

Stefan D’Souza, Aditya Nair

A Chrome extension meant to be used in conjunction with twitter.com to help people with PTSD

Lettuce Eat! A Map App for Accessible Dietary Restrictions »

Arianna Montoya, Anusha Gani, Claris Winston, Joo Kim

Parses menus on restaurants’ websites to provide dietary information, supporting individuals with specific dietary requirements, such as vegan and vegetarian diets and Celiac disease.

Form-igate »

Sam Assefa

A Chrome extension that allows users with motor impairments to interact with Google Forms using voice commands, enhancing accessibility.

Lite Lingo: Plain Text Translator »

Ryan Le, Michelle Vu, Chairnet Muche, Angelo Dauz

A plain text translator to help individuals with learning disabilities

Matplotalt: Alt text for matplotlib figures »

Kai Nailund

[No abstract]

PadMap: Accessible Map for Menstrual Products »

Kirsten Graham, Maitri Dedhia, Sandy Cheng, Aaminah Alam

Our goal is to ensure that anywhere on campus, people can search up the closest free menstrual products to them and get there in an accessible way.

SCRIBE: Crowdsourcing Scientific Alt Text »

Sanjana Chintalapati, Sanjana Sridhar, Zage Strassberg-Phillips

A prototype plugin for arXiv that adds alt text to requested papers via crowdwork.

PalPalette »

Pu Thavikulwat, Masaru Chida, Srushti Adesara, Angela Lee

A web app that helps combat loneliness and isolation for young adults with disabilities

SpeechIT »

Pranati Dani, Manasa Lingireddy, Aryan Mahindra

A presentation speech checker to ensure a user’s verbal speech during a presentation is accessible and understandable for everyone.

Enhancing Accessibility in SVG Design: A Fabric.js Solution »

Julia Tawfik, Kenneth Ton, Balbir Singh, Aaron Brown

A ‘Laser Cutter Generator’ interface which displays a form to select shapes and set dimensions for SVG creation.

CREATE student and faculty projects

Designing and Implementing Social Stories in Technology: Enhancing Collaboration for Parents and Children with Neurodiverse Needs

Elizabeth Castillo, Annuska Zolyomi, Ting Zhou

Our research project, conducted through interviews in Panama, focuses on the user-centered design of technology to enhance autism social stories for children with neurodiverse needs. We aim to improve collaboration between parents, therapists, and children by creating a platform for creating, sharing, and tracking the usage of social stories. While our initial research was conducted in Panama, we are eager to collaborate with individuals from Japan and other parts of the world where we have connections, to further advance our work in supporting neurodiversity.

An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility

Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known specifically about GAI in particular. To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Machine Learning for Quantifying Rehabilitation Responses in Children with Cerebral Palsy

Charlotte D. Caskey, Siddhi R. Shrivastav, Alyssa M. Spomer, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele

Increases in step length and decreases in step width are often a rehabilitation goal for children with cerebral palsy (CP) participating in long-term treadmill training. But it can be challenging to quantify the non-linear, highly variable, and interactive response to treadmill training when parameters such as treadmill speed increase over time. Here we use a machine learning method, Bayesian Additive Regression Trees, to show that there is a direct effect of short-burst interval locomotor treadmill training on increasing step length and modulating step width for four children with CP, even after controlling for confounding parameters of speed, treadmill incline, and time within session.

Spinal Stimulation Improves Spasticity and Motor Control in Children with Cerebral Palsy

Victoria M. Landrum, Charlotte D. Caskey, Siddhi R. Shrivastav, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele

Cerebral palsy (CP) is caused by a brain injury around the time of birth that leads to less refined motor control and causes spasticity, a velocity dependent stretch reflex that can make it harder to bend and move joints, and thus impairs walking function. Many surgical interventions that target spasticity often lead to negative impacts on walking function and motor control, but transcutaneous spinal cord stimulation (tSCS), a novel, non-invasive intervention, may amplify the neurological response to traditional rehabilitation methods. Results from a 4-subject pilot study indicate that long-term usage of tSCS with treadmill training led to improvements in spasticity and motor control, indicating better walking function.

Adaptive Switch Kit

Kate Bokowy, Mia Hoffman, Heather A. Feldner, Katherine M. Steele

We are developing a switch kit for clinicians and parents to build customizable switches for children with disabilities. These switches are used to help children play with computer games and adapted toys as an early intervention therapy.

Developing a Sidewalk Improvement Cost Function

Alex Kirchmeier, Cole Anderson, Anat Caspi

In this ongoing project, I am developing a Python script that uses a sidewalk issues dataset to determine the cost of improving Seattle’s sidewalks. My intention is to create a customizable function that will help users predict the costs associated with making sidewalks more accessible.
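As a rough illustration of what such a function might look like, the sketch below assumes a hypothetical dataset schema (issue_type and severity columns) and made-up unit costs; the project’s actual data and cost model may differ.

```python
# Illustrative sidewalk-improvement cost estimate over an issues dataset.
import pandas as pd

UNIT_COSTS = {                 # assumed repair cost per issue type (USD)
    "cracked_surface": 1200.0,
    "missing_curb_ramp": 8000.0,
    "obstruction": 300.0,
}

def estimate_improvement_cost(issues: pd.DataFrame,
                              unit_costs: dict = UNIT_COSTS) -> float:
    """Sum per-issue unit costs, scaled by each issue's severity (0 to 1)."""
    costs = issues["issue_type"].map(unit_costs).fillna(0.0)
    return float((costs * issues["severity"]).sum())

issues = pd.DataFrame({
    "issue_type": ["cracked_surface", "missing_curb_ramp", "obstruction"],
    "severity":   [0.5, 1.0, 0.2],
})
print(estimate_improvement_cost(issues))  # 600 + 8000 + 60 = 8660.0
```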

Exploring the Benefits of a Dynamic Harness System Using Partial Body Weight Support on Gross Motor Development for Infants with Down Syndrome

Reham Abuatiq, PT, MSc; Mia Hoffman, ME, BSc; Alyssa Fiss, PT, PhD; Julia Looper, PT, PhD; & Heather Feldner, PT, PhD, PCS

We explored the benefits of a Dynamic Harness System Using Partial Body Weight Support (PBWS) within an enriched play environment on gross motor development for infants with Down syndrome, using a randomized crossover study design. We found that the effectiveness of the PBWS harness system on gross motor development was clearly evident. The overall intervention positively affected activity levels; however, the direct impact of the harness remains unclear.

StreetComplete for Better Pedestrian Mapping

Sabrina Fang, Kohei Matsushima

StreetComplete is a gamified, structured, and user-friendly mobile application for users to improve OpenStreetMap data by completing pilot quests. OpenStreetMap is an open-source, editable world map created and maintained by a community of volunteers. The goal of this research project is to design pilot quests in StreetComplete to accurately collect information about “accessibility features,” such as sidewalk width and the quality of lighting, to improve accessibility for pedestrian mapping.

Transit Stations Are So Confusing!

Jackie Chen, Milena Johnson, Haochen Miao, and Raina Scherer

We are collecting data on the wayfinding nodes in four different Sound Transit light rail stations, and interpreting them through the GTFS-pathways schema. In the future, we plan on visualizing this information through AccessMaps such that it can be referenced by all users.

Optimizing Seattle Curbside Disability Parking Spots

Wendy Bu, Cole Anderson, Anat Caspi

The project is born out of a commitment to enhance the quality of life for individuals with disabilities in the city of Seattle. The primary objective is to systematically analyze and improve the allocation and management of curbside parking spaces designated for disabled individuals. By improving accessibility for individuals with disabilities, the project contributes to fostering a more equitable and welcoming urban environment.

Developing Accessible Tele-Operation Interfaces for Assistive Robots with Occupational Therapists

Vinitha Ranganeni, Maya Cakmak

The research is motivated by the potential of using tele-operation interfaces with assistive robots, such as the Stretch RE2, to enhance the independence of individuals with motor limitations in completing activities of daily living (ADLs). We explored the impact of customizing tele-operation interfaces and deployed the Stretch RE2 in a home for several weeks; the deployment, facilitated by an occupational therapist, enabled a user with quadriplegia to perform daily activities more independently. Ultimately, this work aims to empower users and occupational therapists in optimizing assistive robots for individual needs.

HuskyADAPT: Accessible Design and Play Technology

HuskyADAPT Student Organization

HuskyADAPT is a multidisciplinary community at the University of Washington that supports the development of accessible design and play technology. Our community aims to initiate conversations regarding accessibility and ignite change through engineering design. It is our hope that we can help train the next generation of inclusively minded engineers, clinicians, and educators to help make the world a more equitable place.

A11yBoard for Google Slides: Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users

Zhuohao (Jerry) Zhang, Gene S-H Kim, Jacob O. Wobbrock

Presentation software is largely inaccessible to blind users due to the limitations of screen readers with 2-D artboards. This study introduces an advanced version of A11yBoard, initially developed by Zhang & Wobbrock (CHI 2023), which now integrates with Google Slides and addresses real-world challenges. The enhanced A11yBoard, developed through participatory design including a blind co-author, demonstrates through case studies that blind users can independently read and create slides, leading to design guidelines for accessible digital content creation tools.

“He could go wherever he wanted”: Driving Proficiency, Developmental Change, and Caregiver Perceptions following Powered Mobility Training for Children 1-3 Years with Disabilities

Heather A. Feldner, PT, MPT, PhD; Anna Fragomeni, PT; Mia Hoffman, MS; Kim Ingraham, PhD; Liesbeth Gijbels, PhC; Kiana Keithley, SPT; Patricia K. Kuhl, PhD; Audrey Lynn, SPT; Andrew Meltzoff, PhD; Nicole Zaino, PhD; Katherine M. Steele, PhD

The objective of this study was to investigate how a powered mobility intervention for young children (ages 1-3 years) with disabilities impacted: 1) Driving proficiency over time; 2) Global developmental outcomes; 3) Learning tool use (i.e., joystick activation); and 4) Caregiver perceptions about powered mobility devices and their child’s capabilities.

Access to Frequent Transit in Seattle

Darsh Iyer, Sanat Misra, Angie Niu, Dr. Anat Caspi, Cole Anderson

The research project in Seattle focuses on analyzing access to public transit, particularly frequent transit stops, by considering factors like median household income. We scripted in QGIS, analyzed walksheds, and examined demographic data surrounding Seattle’s frequent transit stops to understand the equity of transit access in different neighborhoods. Our goal was to visualize and analyze the data to gain insights into the relationship between transit access, median household income, and other demographic factors in Seattle.

Health Service Accessibility

Seanna Qin, Keona Tang, Anat Caspi, Cole Anderson

Our research aims to discover any correlation between median household income and driving duration from census tracts to the nearest urgent care location in the Bellevue and Seattle region.

Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means

Ather Sharif, Ruican Zhong, and Yadi Wang

Incorporating uncertainty in data visualizations is critical for users to interpret and reliably draw informed conclusions from the underlying data. However, visualization creators conventionally convey the information regarding uncertainty in data visualizations using visual techniques (e.g., error bars), which disenfranchises screen-reader users, who may be blind or have low vision. In this preliminary exploration, we investigated ways to convey uncertainty in data visualizations to screen-reader users.

UW News: How an assistive-feeding robot went from picking up fruit salads to whole meals

November 2023

In tests with this set of actions, the robot picked up the foods more than 80% of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal.

An assistive-feeding robotic arm attached to a wheelchair uses a fork to stab a piece of fruit on a plate among other fruits.

The team presented its findings Nov. 7 at the 2023 Conference on Robotic Learning in Atlanta.

UW News talked with co-lead authors Ethan K. Gordon and Amal Nanavati, both doctoral students and CREATE members in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding for the 1.8 million people in the U.S. (according to data from 2010) who can’t eat on their own.

The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?

Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.

Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.

EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.

In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.

In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.

We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, and put the robot either on a wheelchair, if they have the mounting apparatus ready, or a tripod next to their wheelchair, and run through an entire meal.

For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?

EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.

TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?

Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.

There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.

A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.

What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?

EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.

AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities such as breaking your arm. All of us might face disability at some time as we age and we want to make sure that we have the tools necessary to ensure that we can all live dignified lives and independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality of life improvement will hopefully open up greater access.

Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.

For more information, contact Gordon at ekgordon@cs.uw.edu, Nanavati at amaln@cs.uw.edu and Faulkner at taylorkf@cs.washington.edu.


Excerpted and adapted from the UW News story by Stefan Milne.

Off to the Park: A Geospatial Investigation of Adapted Ride-on Car Usage

November 7, 2023

Adapted ride-on cars (ROC) are an affordable power mobility training tool for young children with disabilities. But weather and a lack of adequate drive space create barriers to families’ adoption of their ROC.

CREATE Ph.D. student Mia E. Hoffman is the lead author on a paper that investigates the relationship between the built environment and ROC usage.

Mia Hoffman smiling into the sun. She has long, blonde hair. Behind her is part of the UW campus with trees and brick buildings.

With her co-advisors Kat Steele and Heather A. Feldner, Jon E. Froehlich (all three CREATE associate directors), and Kyle N. Winfree as co-authors, Hoffman found that play sessions took place more often within the participants’ homes. But when the ROC was used outside, children engaged in longer play sessions, actively drove for a larger portion of the session, and covered greater distances.

Accessibility scores for the sidewalks near a participant’s home on the left and the drive path of the participant on the right. Participant generally avoided streets that were not accessible.

Most notably, they found that children drove more in pedestrian-friendly neighborhoods and when in proximity to accessible paths, demonstrating that providing an accessible place for a child to move, play, and explore is critical in helping a child and family adopt the mobility device into their daily life.

UW News: Can AI help boost accessibility? CREATE researchers tested it for themselves

November 2, 2023 | UW News

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.

This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

Four AI-generated images show different interpretations of a doll-sized “crocheted lavender husky wearing ski goggles,” including two pictured outdoors and one against a white background.

The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”

The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.

Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

“When technology changes rapidly, there’s always a risk that disabled people get left behind.”

Jennifer Mankoff, CREATE Director, professor in the Allen School

Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.

The results of the other tests the researchers selected were equally mixed:

  • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
  • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
  • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”

The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.”  The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”

Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”

Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.


For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.


This article was adapted from the UW News article by Stefan Milne.

UW News: A11yBoard accessible presentation software

October 30, 2023 | UW News

A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
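A toy example (illustrative only, not A11yBoard’s code) makes the mismatch concrete: reading objects in Z-order yields a different narration sequence than a spatial, top-to-bottom ordering.

```python
# Objects in Z-order (stacking order), each with its 2-D slide position.
slide = [
    {"name": "footer",  "x": 40,  "y": 680},
    {"name": "title",   "x": 40,  "y": 40},
    {"name": "chart",   "x": 360, "y": 200},
    {"name": "caption", "x": 360, "y": 520},
]

# A screen reader that follows Z-order narrates the slide in this sequence:
print([o["name"] for o in slide])
# ['footer', 'title', 'chart', 'caption']

# A spatially aware ordering (top-to-bottom, then left-to-right) better
# matches the slide's visual layout:
print([o["name"] for o in sorted(slide, key=lambda o: (o["y"], o["x"]))])
# ['title', 'chart', 'caption', 'footer']
```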

Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.

“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.

For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


This article was adapted from the UW News article by Stefan Milne.

Augmented Reality to Support Accessibility

October 25, 2023

RASSAR – Room Accessibility and Safety Scanning in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a real-time model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR.
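Conceptually, the issue-identification step amounts to checking detected objects against accessibility rules. The sketch below is a loose illustration with assumed labels and height ranges (inspired by ADA reach-range guidance); it is not RASSAR’s actual detector or rule set.

```python
# Toy rule check: flag objects whose height falls outside an accessible range.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    height_m: float  # height of the actionable part above the floor

RULES = {                     # assumed accessible height ranges, in meters
    "light_switch": (0.38, 1.22),
    "door_handle":  (0.86, 1.12),
}

def check_accessibility(objects: list) -> list:
    issues = []
    for obj in objects:
        if obj.label in RULES:
            lo, hi = RULES[obj.label]
            if not lo <= obj.height_m <= hi:
                issues.append(
                    f"{obj.label} at {obj.height_m:.2f} m is outside {lo}-{hi} m")
    return issues

print(check_accessibility([DetectedObject("light_switch", 1.40)]))
# ['light_switch at 1.40 m is outside 0.38-1.22 m']
```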

RASSAR researchers envision the tool as an aid in building and validating new construction, planning renovations, updating homes for health concerns, and conducting telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:


Augmented Reality to Support Accessibility

CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.

Illustration of a user holding a smartphone using the RASSAR prototype app to scan the room for accessibility issues.

ASSETS 2023 Papers and Posters

October 4, 2023



As has become customary, CREATE faculty, students and alumni will have a large presence at the 2023 ASSETS Conference. It’ll be quiet on campus October 23-25 with these folks in New York.

Papers and presentations

How Do People with Limited Movement Personalize Upper-Body Gestures? Considerations for the Design of Personalized and Accessible Gesture Interfaces
Monday, Oct 23 at 11:10 a.m. Eastern time
Momona Yamagami, Alexandra A Portnova-Fahreeva, Junhan Kong, Jacob O. Wobbrock, Jennifer Mankoff

Understanding Digital Content Creation Needs of Blind and Low Vision People
Monday, Oct 23 at 1:40 p.m. Eastern time
Lotus Zhang, Simon Sun, Leah Findlater

Notably Inaccessible — Data Driven Understanding of Data Science Notebook (In)Accessibility
Monday, Oct 23 at 4 p.m. Eastern time
Venkatesh Potluri, Sudheesh Singanamalla, Nussara Tieanklin, Jennifer Mankoff

A Large-Scale Mixed-Methods Analysis of Blind and Low-vision Research in ACM and IEEE
Tuesday, Oct 24 at 11:10 a.m. Eastern time
Yong-Joon Thoo, Maximiliano Jeanneret Medina, Jon E. Froehlich, Nicolas Ruffieux, Denis Lalanne

Working at the Intersection of Race, Disability and Accessibility
Tuesday, Oct 24 at 1:40 p.m. Eastern time
Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, Jennifer Mankoff

Comparing Locomotion Techniques in Virtual Reality for People with Upper-Body Motor Impairments
Wednesday, Oct 25 at 8:45 a.m. Eastern time
Rachel L. Franz, Jinghan Yu, Jacob O. Wobbrock

Jod: Examining the Design and Implementation of a Videoconferencing Platform for Mixed Hearing Groups
Wednesday, Oct 25 at 11:10 a.m. Eastern time
Anant Mittal, Meghna Gupta, Roshni Poddar, Tarini Naik, SeethaLakshmi Kuppuraj, James Fogarty, Pratyush Kumar, Mohit Jain

Azimuth: Designing Accessible Dashboards for Screen Reader Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Arjun Srinivasan, Tim Harshbarger, Darrell Hilliker, Jennifer Mankoff

Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users
Wednesday, Oct 25 at 1:25 p.m. Eastern time
Zhuohao Zhang, Gene S-H Kim, Jacob O. Wobbrock

Experience Reports

An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility
Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

Maintaining the Accessibility Ecosystem: a Multi-Stakeholder Analysis of Accessibility in Higher Education
Kelly Avery Mack, Natasha A Sidik, Aashaka Desai, Emma J McDonnell, Kunal Mehta, Christina Zhang, Jennifer Mankoff

TACCESS Papers

“I’m Just Overwhelmed”: Investigating Physical Therapy Accessibility and Technology Interventions for People with Disabilities and/or Chronic Conditions

Momona Yamagami, Kelly Mack, Jennifer Mankoff, Katherine M. Steele

The Global Care Ecosystems of 3D Printed Assistive Devices

Saiph Savage, Claudia Flores-Saviaga, Rachel Rodney, Liliana Savage, Jon Schull, Jennifer Mankoff

Posters

Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means
Ather Sharif, Ruican Zhong, Yadi Wang

U.S. Deaf Community Perspectives on Automatic Sign Language Translation
Nina Tran, Richard E. Ladner, Danielle Bragg (Microsoft Research)

Workshops

Bridging the Gap: Towards Advancing Privacy and Accessibility
Rahaf Alharbi, Robin Brewer, Gesu India, Lotus Zhang, Leah Findlater, and Abigale Stangl

Tackling the Lack of a Practical Guide in Disability-Centered Research
Emma McDonnell, Kelly Avery Mack, Kathrin Gerling, Katta Spiel, Cynthia Bennett, Robin N. Brewer, Rua M. Williams, and Garreth W. Tigwell

A11yFutures: Envisioning the Future of Accessibility Research
Foad Hamidi, Kirk Crawford, Jason Wiese, Kelly Avery Mack, Jennifer Mankoff

Demos

A Demonstration of RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
Xia Su, Kaiming Cheng, Han Zhang, Jaewook Lee, Wyatt Olson, Jon E. Froehlich

BusStopCV: A Real-time AI Assistant for Labeling Bus Stop Accessibility Features in Streetscape Imagery
Chaitanyashareef Kulkarni, Chu Li, Jaye Ahn, Katrina Oi Yau Ma, Zhihan Zhang, Michael Saugstad, Kevin Wu, Jon E. Froehlich; with Valerie Novack and Brent Chamberlain (Utah State University)

Papers and presentations by CREATE associates and alumni

  • Monday, Oct 23 at 4:00 p.m. Eastern time
    Understanding Challenges and Opportunities in Body Movement Education of People who are Blind or have Low Vision
    Madhuka Thisuri De Silva, Leona M Holloway, Sarah Goodwin, Matthew Butler
  • Tuesday, Oct 24 at 8:45 a.m. Eastern time
    AdaptiveSound: An Interactive Feedback-Loop System to Improve Sound Recognition for Deaf and Hard of Hearing Users
    Hang Do, Quan Dang, Jeremy Zhengqi Huang, Dhruv Jain
  • Tuesday, Oct 24 at 8:45 a.m. Eastern time
    “Not There Yet”: Feasibility and Challenges of Mobile Sound Recognition to Support Deaf and Hard-of-Hearing People
    Jeremy Zhengqi Huang, Hriday Chhabria, Dhruv Jain
  • Tuesday, Oct 24 at 4:00 p.m. Eastern time
    The Potential of a Visual Dialogue Agent In a Tandem Automated Audio Description System for Videos
    Abigale Stangl, Shasta Ihorn, Yue-Ting Siu, Aditya Bodi, Mar Castanon, Lothar D Narins, Ilmi Yoon

Research at the Intersection of Race, Disability and Accessibility

October 13, 2023

What are the opportunities for research to engage the intersection of race and disability?

What is the value of considering how constructs of race and disability work alongside each other within accessibility research studies?

Two CREATE Ph.D. students have explored these questions and found little focus on this intersection within accessibility research. In their paper, Working at the Intersection of Race, Disability and Accessibility (PDF), they observe that we’re missing out on the full nuance of marginalized and “otherized” groups. 

The Allen School Ph.D. students, Aashaka Desai and Aaleyah Lewis, and collaborators will present their findings at the ASSETS 2023 conference on Tuesday, October 24.

Spurred by the conversation at the Race, Disability & Technology research seminar earlier in the year, members of the team realized they lacked a framework for thinking about work at this intersection. In response, they assembled a larger team to conduct an analysis of existing work at this intersection within accessibility research.

The resulting paper presents a review of considerations for engaging with race and disability in the research and education process. It offers analyses of exemplary papers, highlights opportunities for intersectional engagement, and presents a framework to explore race and disability research. Case studies exemplify engagement at this intersection throughout the course of research, in designs of socio-technical systems, and in education. 


Case studies

  • Representation in image descriptions: How to describe appearance, factoring preferences for self-descriptions of identity, concerns around misrepresentation by others, interest in knowing others’ appearance, and guidance for AI-generated image descriptions.
  • Experiences of immigrants with disabilities: Cultural barriers that include cultural disconnects and levels of stigma about disability between refugees and host countries compound language barriers.
  • Designing for intersectional, interdependent accessibility: How access practices as well as cultural and racial practices influence every stage of research design, method, and dissemination, in the context of work with communities of translators.

Composite image of the six authors of a variety of backgrounds: Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, and Jennifer Mankoff
Authors, left to right: Christina Harrington, Aashaka Desai, Aaleyah Lewis, Sanika Moharana, Anne Spencer Ross, and Jennifer Mankoff


CREATE Open Source Projects Awarded at Web4All

July 6, 2023

CREATE researchers shone this spring at the 2023 Web4All conference, which, in part, seeks to “make the internet more accessible to the more than one billion people who struggle to interact with digital content each day due to neurodivergence, disability or other impairments.” Two CREATE-funded open source projects won accolades.

Best Technical Paper award:
Understanding and Improving Drilled-Down Information Extraction from Online Data Visualizations for Screen-Reader Users

Authors: Ather Sharif, Andrew Mingwei Zhang, CREATE faculty member Katharina Reinecke, and CREATE Associate Director Jacob O. Wobbrock

Building on prior research that developed taxonomies of information sought by screen-reader users interacting with online data visualizations, the team used these taxonomies to extend the functionality of VoxLens—an open-source multi-modal system that improves the accessibility of data visualizations—by supporting drilled-down information extraction. They assessed the performance of their VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Their enhancements “closed the gap” between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, reducing interaction time by 22% in the process.

Accessibility Challenge Delegates’ Award:
UnlockedMaps: A Web-Based Map for Visualizing the Real-Time Accessibility of Urban Rail Transit Stations

Authors: Ather Sharif, Aneesha Ramesh, Qianqian Yu, Trung-Anh H. Nguyen, and Xuhai Xu

Ather Sharif’s work on another project, UnlockedMaps, was honored with the Accessibility Challenge Delegates’ Award. The paper details a web-based map that lets users see, in real time, how accessible rail transit stations are in six North American metro areas, including Seattle, Toronto, New York, and the California Bay Area. UnlockedMaps shows whether stations are accessible and whether they are currently experiencing elevator outages. The work includes a public website that helps users make informed decisions about their commutes, plus an open source API that developers, disability advocates, and policy makers can use for a variety of purposes, such as tracking the frequency of elevator outages and their repair times to identify disparities between neighborhoods in a given city.
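As a minimal sketch of how a developer might consume this kind of data, consider the hypothetical records below; the field names (accessible, elevator_outage) and station entries are invented and are not the actual UnlockedMaps API schema.

```python
# Hypothetical station records -- invented fields, not the UnlockedMaps schema.
stations = [
    {"name": "Pioneer Square", "city": "Seattle",  "accessible": True,  "elevator_outage": True},
    {"name": "Othello",        "city": "Seattle",  "accessible": True,  "elevator_outage": False},
    {"name": "City Hall",      "city": "New York", "accessible": False, "elevator_outage": False},
]

def usable_now(city: str) -> list[str]:
    """Stations marked accessible that have no active elevator outage."""
    return [s["name"] for s in stations
            if s["city"] == city and s["accessible"] and not s["elevator_outage"]]

print(usable_now("Seattle"))  # -> ['Othello']
```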

Read more

Accessible Technology Research Showcase – Spring 2023

June 30, 2023

Poster session in progress, with 9 or so posters on easels in view and student presenters talking to attendees.

In June 2023, CREATE and HuskyADAPT co-hosted a showcase and celebration of outstanding UW research on accessible technology. The showcase featured poster presentations and live demonstrations by faculty, students, and researchers, and was altogether vibrant and exciting. Over 100 attendees viewed 25 projects, presentations, and posters.

Congratulations and appreciation to CREATE Engagement and Partnerships Manager Kathleen Quin Voss and HuskyADAPT Student Executive Chair Mia Hoffman for putting on an amazing research showcase!

View the Projects


Codesigning Videoconferencing Tools for Small Groups with Mixed Hearing Status

June 12, 2023

CREATE students and faculty have published a new paper at CHI 2023, “‘Easier or Harder, Depending on Who the Hearing Person Is’: Codesigning Videoconferencing Tools for Small Groups with Mixed Hearing Status.”

Led by Human Centered Design and Engineering (HCDE) Ph.D. candidate Emma McDonnell and supported by CREATE, this work investigates how groups with both hearing and d/Deaf and hard of hearing (DHH) members could be better supported when using captions during videoconferences. 

Emma McDonnell, a white woman in her 20s with short red hair, freckles, and a warm smile. In the background: a lush landscape and the Colosseum.

Researchers recruited four groups to participate in a series of codesign sessions, a method that de-centers researchers’ priorities and seeks to empower participants to lead the development of new design ideas. In the study, participants reflected on their experiences using captioning, sketched and discussed ideas for technology that could help build accessible group norms, and then critiqued video prototypes the researchers created from those ideas.

One major finding from this research is that participants’ relationships with each other shape what kinds of accessibility support the group would benefit from.

For example, one group that participated in the study were cousins who had been close since childhood. Now in their mid-twenties, they found they did not have to actively plan for accessibility; they had their own ways of communicating and would stop and clarify if things broke down. On the other hand, a group of colleagues who work on technology for DHH people had many explicit norms they used to ensure communication accessibility. One participant, Blake, noted, “I was pretty emotional after the first meeting because it was just so inclusive.” These different approaches demonstrate that there is no one-size-fits-all approach to communication accessibility; people work together as a group to develop an approach that works for them.

This paper also contributes new priorities for the design of videoconferencing software. Participants focused on designing add-ons to videoconferencing systems that would better support their group in communicating accessibly. Their designs fell into four categories: 

  • Speaker Identity and Overlap: Having video conferencing tools identify speakers and warn groups when multiple people speak at once, since overlapping speech can’t be captioned accurately. Participants found this to be critical, and often missing, information.
  • Support for Behavioral Feedback: Building in ways for people to subtly notify conversation partners when they need to adjust their behavior. Participants wanted tools to flag when someone’s camera needs adjusting, when captions contain critical errors, and when speech is getting too fast. They considered, but decided against, a general-purpose conversation-breakdown warning.
  • Videoconferencing Infrastructure for Accessibility: Adding more features and configurable settings around conversational accessibility to videoconferencing platforms. Participants desired basic controls, such as color and font size, as well as the ability to preset and share group accessibility norms and customize behavior feedback tools. 
  • Sound Information: Providing more information about the sound happening during a conversation. Participants were excited about building sound recognition into captioning tools, and considered conveying speech volume via font weight, but decided it would be overwhelming and ambiguous. 

This research also has implications for broader captioning and videoconferencing design. While captioning tools are often designed for individual d/Deaf and hard of hearing people, the researchers argue that we should design for the entire group having a conversation. This shift in focus revealed many ways that, on top of transcribing a conversation, technology could help groups communicate in ways that can be more effectively captioned. Many of these tools are easy to build with current technology, such as being able to click on a confusing caption to request clarification. The research team hopes that their work can illuminate the need to pay attention to groups’ social context when studying captioning and can offer videoconferencing platform designers an approach to better support groups with mixed hearing abilities.

McDonnell is advised by CREATE Associate Directors Leah Findlater, HCDE, and Jon Froehlich, Paul G. Allen School of Computer Science & Engineering.

User-informed, robot-assisted social dining for people with motor impairments

June 1, 2023

A team of Allen School robotics researchers has published a paper on the finer aspects of robot-assisted dining with friends. “A meal should be memorable, and not for a potential faux pas from the machine,” notes co-author Patrícia Alves-Oliveira. Supported by a CREATE Student minigrant and in the spirit of “nothing about us without us,” they are working with the Tyler Schrenk Foundation to address the design of robot-assisted feeding systems that facilitate meaningful social dining experiences.

The team is led by Ph.D. student Amal Nanavati and postdoc Patrícia Alves-Oliveira, and includes CREATE faculty member Maya Cakmak and community researcher Tyler Schrenk.

Teleconference screenshot of 4 people: Patrícia Alves-Oliveira (top left), Amal Nanavati (top right), Tyler Schrenk (bottom left), and an anonymous participant (bottom right)

Learn more:

Rethinking Disability and Advancing Access

UW CREATE collaborates toward a world with fewer problems and more solutions for people of all abilities.

The UW College of Engineering showcased CREATE’s mission, moonshots, and collaborative successes in a feature article, Rethinking disability and advancing access, written by Alice Skipton. The article is reproduced and reformatted here.

A person sitting in a wheelchair looking at a phone while two people are looking over her shoulder at the phone.
CREATE researchers and partners work on high-impact projects — such as those focused on mobility and on mobile device accessibility — advancing inclusion and participation for people with disabilities.

According to the Centers for Disease Control and Prevention (CDC), one in four people in the United States lives with a disability.

“The presence of disability is everywhere. But how disability has been constructed, as an individual problem that needs to be fixed, leads to exclusion and discrimination.”

— Heather Feldner, UW Medicine assistant professor in Rehabilitation Medicine and a CREATE associate director

The construct also ignores the reality that people’s physical and mental abilities continually change. Examples include pregnancy, childbirth, illness, injuries, accidents and aging. Additionally, assuming that people all move, think or communicate in a certain way fails to recognize diverse bodies and minds. By ignoring this reality, technology and access solutions have traditionally been limited and limiting.

UW CREATE logo with icon of person with prosthetic arm holding a lightbulb and Center for Research and Education on Accessible Technology and Experiences, University of Washington

UW CREATE, a practical, applied research center, exists to counter this problem by making technology accessible and the world accessible through technology. Launched in early 2020 with support from Microsoft, the Center connects research to industry and the community.

On campus, it brings together accessibility experts and work-in-progress from across engineering, medicine, disability studies, computer science, information science and more, with the model always open to new collaborators. 

“Anyone interested in working in the area of accessible technology is invited to become part of CREATE,” says Jacob O. Wobbrock, a professor in the UW Information School and one of the founders and co-director of the Center.

Shooting for the moon

A toddler-aged child in a ride-on toy gaining mobility to explore other toys, accompanied by a researcher.
CREATE is partnering with UW I-LABS to explore how accessibility impacts young children’s development, identity and agency. Their study uses the only powered mobility device available in the U.S. designed for children one to three years old. Photo courtesy of UW CREATE.

“We have an amazing critical mass at UW of faculty doing accessibility research,” says Jennifer Mankoff, a professor in the Paul G. Allen School of Computer Science & Engineering and another founder and co-director of CREATE. “There’s also a lot of cross-talk with Microsoft, other technology leaders, and local and national community groups. CREATE wants to ensure people joining the workforce know about accessibility and technology and that the work they do while they are at UW directly and positively impacts the disability community.” The Center’s community and corporate partnerships approach increases creativity and real-world impact.

The concept of moonshots — technology breakthroughs resulting from advances in space exploration — offers a captivating way of thinking about the potential of CREATE’s research. The Center currently has four research moonshots for addressing technological accessibility problems. One focuses on how accessibility impacts young children’s development, identity and agency and includes a mobility and learning study with the UW Institute for Learning & Brain Sciences (I-LABS) that employs the only powered mobility device available in the U.S. market specifically designed for children one to three years old. Another looks more broadly at mobility indoors and outdoors, such as sidewalk and transit accessibility. A third seeks ways to make mobile and wearable devices more accessible along with the apps people use every day to access such essentials as banking, gaming, transportation and more. A fourth works toward addressing access, equity and inclusion for multiply marginalized people.

“CREATE wants to ensure people joining the workforce know about accessibility and technology and that the work they do while they are at UW directly and positively impacts the disability community.”

— Jennifer Mankoff, founder and co-director of CREATE

For CREATE, advancing these moonshots isn’t just about areas where technologies already exist, like improving an interface to meet more people’s needs. It’s about asking questions and pushing research to address larger issues and inequities. “In certain spaces, disabled people are overrepresented, like in the unhoused or prison populations, or in health-care settings,” Mankoff says. “In others, they are underrepresented, such as in higher education, or simply overlooked. For example, disabled people are more likely to die in disaster situations because disaster response plans often don’t include them. We need to ask how technology contributes to these problems and how it can be part of the solution.”

Broader problem-solving abilities

For even greater impact, CREATE has situated these research moonshots within a practical framework for change that involves education initiatives, translation work and research funding. Seminars, conversations, courses, clubs and internship opportunities all advance the knowledge and expertise of the next generation of accessibility leaders. Translation work ensures that ideas get shaped and brought to life by community stakeholders and through collaborations with UW entities like the TASKAR Center for Accessible Technology, HuskyADAPT and the UW Disability Studies Program, as well as through collaborations with industry leaders like Microsoft, Google and Meta. CREATE’s research funding adds momentum by supporting education, translation and direct involvement of people with disabilities.

Related story:
Sidewalk Equity

A person in a wheelchair and another, standing person on a city sidewalk

Engineering and computer science researchers seek to make digital wayfinding more equitable and accessible to more people.

Nicole Zaino, a mechanical engineering Ph.D. student participating in CREATE’s early childhood mobility technology research, describes the immense benefits of having her education situated in the context of CREATE. “It’s broadened my research and made me a better engineer,” she says. She talks about the critical importance of end-user expertise, like the families participating in the mobility and learning study. Doing collaborative research and taking classes in other disciplines gives her more insights into intersecting issues. That knowledge and new vocabulary inform her work because she can search out research from different fields she otherwise wouldn’t have known about.

More equity advocates

At the same time, Zaino’s lived experience with her disability also broadens her perspective and enhances her research. She became interested in her current field when testing out new leg braces and seeing other assistive technology on the shelves at the clinic. For Mankoff, it was the reverse: she worked in the field and then experienced disability when diagnosed with Lyme disease, something she’s incorporated into her research. Wobbrock got a front-row seat to mobility and accessibility challenges when he severely herniated his L5-S1 disc and couldn’t sit down for two years. For Feldner, although she studied disability academically as a physical therapist and in disability studies, first-hand experience came later in her career, when she became a parent and a disability advocate for one of her children. At CREATE, more than 50% of those involved have some lived experience with disability. This strengthens the Center by bringing a diversity of perspectives and first-hand knowledge about how assumptions often get in the way of progress.

Closeup image of a smartphone with many small app icons.

Seeking to push progress further on campus, CREATE has an initiative on research at the intersection of race, disability and technology with the Allen School, the Simpson Center for the Humanities, the Population Health Initiative, the Office of Minority Affairs and Diversity, the Buerk Center for Entrepreneurship, and the Office of the ADA Coordinator. 

CDC statistics show that the number of people experiencing a disability is higher when examined through the lens of race and ethnicity. With events and an open call for proposals, the initiative seeks increased research and institutional action in higher education, health care, artificial intelligence, biased institutions and more. 

“If we anticipate that people don’t conform to certain ability assumptions, we can think ahead,” says Wobbrock. “What would that mean for a particular technology design? It’s a longstanding tenet of accessibility research that better access for some people results in better access for all people.”

 

Make a gift

By supporting UW CREATE, you can help make technology accessible and make the world accessible through technology.

Donate to CREATE

 

A11yBoard Seeks to Make Digital Artboards Accessible to Blind and Low-Vision Users

Just about everybody in business, education, and artistic settings needs to use presentation software like Microsoft PowerPoint, Google Slides, and Adobe Illustrator. These tools use artboards to hold objects such as text, shapes, images, and diagrams. But for blind and low vision (BLV) people, using such software adds a new level of challenge beyond keeping our bullet points short and images meaningful. They experience:

  • High added cognitive load
  • Difficulty determining relationships between objects
  • Uncertainty about whether an operation has succeeded

Screen readers, which were built for 1-D text information, don’t handle 2-D information spaces like artboards well.

For example, NVDA and Windows Narrator report artboard objects only in their Z-order, regardless of where those objects are located or whether they visually overlap, and announce only each object’s shape name without any other useful information.
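To make the failure mode concrete, here is a toy Python sketch, with invented objects rather than NVDA’s actual output, contrasting Z-order reading with a simple spatial reading order:

```python
# Toy illustration (invented objects, not NVDA's actual output): reading an
# artboard in Z-order ignores where objects sit on the canvas.
objects = [  # (name, x, y, z_order)
    ("Title text",     40,  10, 2),
    ("Blue rectangle", 40, 120, 0),
    ("Arrow",          45, 125, 3),  # visually overlaps the rectangle
    ("Caption text",   40, 200, 1),
]

# Legacy screen-reader order: by Z-order, spatially scrambled.
print([name for name, *_ in sorted(objects, key=lambda o: o[3])])
# -> ['Blue rectangle', 'Caption text', 'Title text', 'Arrow']

# Spatial reading order (top-to-bottom, then left-to-right) preserves layout.
print([name for name, *_ in sorted(objects, key=lambda o: (o[2], o[1]))])
# -> ['Title text', 'Blue rectangle', 'Arrow', 'Caption text']
```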

From A11yBoard video: still image of an artboard with different shapes and the unhelpful NVDA & Windows Narrator explanation as text.

To address these challenges Zhuohao (Jerry) Zhang, a CREATE Ph.D. student advised by Jacob O. Wobbrock at the ACE Lab, asked: 

  • Can digital artboards in presentation software be made accessible for blind and low-vision users to read and edit on their own?
  • Can we design interaction techniques to deliver rich 2-D information to screen reader users?

The answer is yes! 

They developed A11yBoard, a multi-device, multimodal interaction system that mirrors the desktop’s canvas on a mobile touchscreen device and enables rapid finger-driven screen reading via touch, gesture, and speech.

Blind and low-vision users can explore the artboard by using a “reading finger” to move across objects and receive audio tone feedback. They can also use a second finger to “split-tap” on the screen to receive detailed information and select this object for further interactions.

From A11yBoard video: still image showing touch and gesture combos that help blind and low vision users lay out images and text.

“Walkie-talkie mode,” activated by dwelling a finger on the screen, like turning on a switch, lets users “talk” to the application.

Users can then access a wealth of detail about objects’ properties and relationships. For example, they can ask for the closest objects to a selection to learn what is nearby and worth exploring; a toy sketch of such a query appears just below. For operations that are hard to perform with touch, gesture, and speech alone, the team also designed an intelligent keyboard search interface that lets blind and low-vision users carry out every object-related task.
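The sketch uses invented object names and coordinates, not A11yBoard’s actual code or data model; it simply ranks objects by the distance between their centers.

```python
import math

# Toy sketch of a "closest objects" query (hypothetical names and coordinates).
objects = {
    "Title text": (250, 40),
    "Photo":      (120, 300),
    "Caption":    (120, 420),
    "Logo":       (460, 60),
}

def closest_objects(selected: str, k: int = 2) -> list[str]:
    """Return the k objects whose centers are nearest the selected object."""
    center = objects[selected]
    others = [(math.dist(center, c), name)
              for name, c in objects.items() if name != selected]
    return [name for _, name in sorted(others)[:k]]

print(closest_objects("Photo"))  # -> ['Caption', 'Title text']
```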

Through a series of evaluations with blind users, A11yBoard was shown to provide intuitive spatial reasoning, multimodal access to objects’ properties and relationships, and an eyes-free reading and editing experience for 2-D objects.

Currently, much digital content has been made accessible for blind and low-vision people to read and “digest.” But few technologies make the creation process accessible so that blind and low-vision users can produce visual content on their own. With A11yBoard, the team takes a step toward a bigger goal: making heavily visual content creation accessible to blind and low-vision people.


Paper author Zhuohao (Jerry) Zhang is a second-year Ph.D. student at the UW iSchool. His work in HCI and accessibility focuses on designing assistive technologies for blind and low-vision people. Zhang has published and presented at the CHI, UIST, and ASSETS conferences, earning a CHI best paper honorable mention and a UIST best poster honorable mention, and winning the CHI Student Research Competition; his work was also featured in Microsoft’s New Future of Work Report 2022. He is advised by CREATE Co-Director Jacob O. Wobbrock.

Zhuohao (Jerry) Zhang standing in front of a poster, wearing a black sweater and a pair of black glasses, smiling.

Postdoc Research Spotlight: Making Biosignal Interfaces Accessible

The machines and devices we use every day – for example, touch screens, gas pedals, and computer track pads – interpret our actions and intentions via sensors. But these sensors are designed based on assumptions about our height, strength, dexterity, and abilities. When they aim for the average person (who does not actually exist), they end up unusable or inaccessible for many people.

CREATE postdoctoral scholar Momona Yamagami seeks to integrate personalization and customization into sensor design and the resulting algorithms baked into the products we use. Her research has shown that biosignal interfaces that use electromyography sensors, accelerometers, and other biosignals as inputs hold promise for improving accessibility for people with disabilities.

In a recent presentation of her research as a CREATE postdoctoral scholar, she emphasized that generalized models not personalized to an individual’s abilities, body size, and skin tone may not perform well.

Momona Yamagami presenting her biosignal research, with a slide noting that biosignals fluctuate and are higher on the neural circuitry.
Momona Yamagami presenting her biosignal research, with a slide noting that biosignals fluctuate and are higher on the neural circuitry, and a smartwatch as an “always on” sensor for continuous health monitoring.

Individualized interfaces that are personalized to the individual and their abilities could significantly enhance accessibility. Continuous (i.e., 2-dimensional trajectory-tracking) and discrete (i.e., gesture) electromyography (EMG) interfaces can be personalized to the individual: 

  • For the continuous task, the team used methods from game theory to iteratively optimize a linear model that mapped EMG input to cursor position (a simplified sketch of such a linear decoder follows this list).
  • For the discrete task, they developed a dataset of participants with and without disabilities performing gestures that are accessible to them.
  • As biosignal interfaces become more widely available, it is important to ensure they perform well across a wide spectrum of users.
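The published continuous-task work iteratively optimized the decoder with game-theoretic methods as user and interface co-adapt. Purely as a sketch of the underlying idea, a personalized linear map from EMG features to 2-D cursor position, here is a minimal ridge-regression version on synthetic data; the shapes, feature extraction, and regularization value are all assumptions, not the paper’s method.

```python
import numpy as np

# Synthetic stand-in data: X = windowed EMG features (n_samples x n_channels),
# Y = matching 2-D cursor targets. All shapes and values here are assumptions.
rng = np.random.default_rng(0)
n_samples, n_channels = 500, 8
X = rng.standard_normal((n_samples, n_channels))
true_W = rng.standard_normal((n_channels, 2))
Y = X @ true_W + 0.1 * rng.standard_normal((n_samples, 2))

# Fit a per-user linear decoder W by ridge-regularized least squares:
#   W = (X^T X + lambda * I)^{-1} X^T Y
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

def decode(emg_features: np.ndarray) -> np.ndarray:
    """Map one window of EMG features to a 2-D cursor position."""
    return emg_features @ W

print(decode(X[0]), Y[0])  # decoded position vs. target for one sample
```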


Momona Yamagami is completing her time as a CREATE postdoctoral scholar, advised by CREATE Co-Director Jennifer Mankoff. Starting in summer 2023, Yamagami will be an assistant professor in Electrical & Computer Engineering at Rice University as part of its Digital Health Initiative.

Jacob O. Wobbrock awarded Ten-Year Technical Impact Award

January 5, 2023

The Association for Computing Machinery (ACM) has honored CREATE Co-Director Jacob O. Wobbrock and colleagues with a 10-year lasting impact award for their groundbreaking work improving how computers recognize stroke gestures.

Jacob O. Wobbrock, a 40-something white man with short hair, a beard, and glasses. He is smiling in front of a white board.

Wobbrock, a professor in the Information School, and co-authors Radu-Daniel Vatavu and Lisa Anthony were presented with the 2022 Ten-Year Technical Impact Award in November at the ACM International Conference on Multimodal Interaction (ICMI). The award honors their 2012 paper, “Gestures as point clouds: A $P recognizer for user interface prototypes,” which also won ICMI’s Outstanding Paper Award when it was published.

The $P point-cloud gesture recognizer was a key advancement in the way computers recognize stroke gestures, such as swipes, shapes, or drawings on a touchscreen. It provided a new way to quickly and accurately recognize what users’ fingers or styluses were telling their devices to do, and could even be used with whole-hand gestures to accomplish more complex tasks such as typing in the air or controlling a drone with finger movements.

The research built on Wobbrock’s 2007 invention of the $1 unistroke recognizer, which made it much easier for devices to recognize single-stroke gestures, such as a circle or a triangle. Wobbrock called it “$1” — 100 pennies — because it required only 100 lines of code, making it easy for user interface developers to incorporate gestures in their prototypes.
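For readers curious about the mechanics, the sketch below is a heavily simplified Python rendition of the point-cloud idea behind $P: resample each gesture to a fixed number of points, normalize scale and position, then score candidates with a greedy nearest-point matching. The published recognizer differs in its details (notably its weighting scheme and its search over multiple start indices), so treat this as an approximation under those assumptions, not the reference implementation.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n points evenly spaced along its path."""
    pts = [tuple(p) for p in points]
    path_len = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    interval = path_len / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)   # q becomes the reference for the next segment
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalize(points):
    """Scale to a unit box and translate the centroid to the origin."""
    xs, ys = zip(*points)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def greedy_cloud_distance(a, b):
    """Greedily match each point in a to its nearest unmatched point in b."""
    matched = [False] * len(b)
    total = 0.0
    for i, p in enumerate(a):
        best_j, best_d = -1, float("inf")
        for j, q in enumerate(b):
            if not matched[j] and math.dist(p, q) < best_d:
                best_j, best_d = j, math.dist(p, q)
        matched[best_j] = True
        total += (1 - i / len(a)) * best_d   # earlier matches weigh more
    return total

def recognize(candidate, templates, n=32):
    """Return the name of the template with the smallest cloud distance."""
    c = normalize(resample(candidate, n))
    best_name, best_d = None, float("inf")
    for name, tpl in templates.items():
        t = normalize(resample(tpl, n))
        d = min(greedy_cloud_distance(c, t), greedy_cloud_distance(t, c))
        if d < best_d:
            best_name, best_d = name, d
    return best_name

templates = {"line": [(0, 0), (0, 1)], "vee": [(0, 0), (1, 1), (2, 0)]}
print(recognize([(0.02, 0.0), (0.01, 0.55), (0.0, 1.0)], templates))  # -> line
```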

This article was excerpted from the UW iSchool article, “iSchool’s Wobbrock Honored for Lasting Impact,” by Doug Parry.

UnlockedMaps provides real-time accessibility info for rail transit users

Congratulations to CREATE Ph.D. student Ather Sharif, Orson (Xuhai) Xu, and team for this great project on transit access! Together they developed UnlockedMaps, a web-based map that allows users to see in real time how accessible rail transit stations are in six metro areas, including Seattle, Philadelphia (where the project was first conceived by Sharif and a friend at a hackathon), Chicago, Toronto, New York, and the California Bay Area.

screenshot of UnlockedMaps in New York. Stations that are labeled green are accessible while stations that are labeled orange are not accessible. Yellow stations have elevator outages reported.

Shown here is a screenshot of UnlockedMaps in New York. Stations that are labeled green are accessible while stations that are labeled orange are not accessible. Yellow stations have elevator outages reported.

Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering advised by CREATE Co-Director Jacob O. Wobbrock, said the team also included nearby and accessible restaurant and bathroom data. “I think restaurants and restrooms are two of the most common things that people look for when they plan their commute. But no other maps really let you filter those out by accessibility. You have to individually click on each restaurant and check if it’s accessible or not, using Google Maps. With UnlockedMaps, all that information is right there!”

Adapted from UW News interview with Ather Sharif. Read full article »