Congrats to CREATE’s Graduating Ph.D. Students 2024!

May 30, 2024

Four of CREATE’s influential and productive doctoral students are graduating this spring. Please join us in congratulating Avery Mack, Emma McDonnell, Venkatesh Potluri, and Ather Sharif and wishing them well.

Avery Mack, a white, femme-presenting person with curly light brown hair shaved close on one side wearing a green blazer and grey top

Avery Mack will receive their Ph.D. from the Paul G. Allen School of Computer Science & Engineering. Advised by CREATE Director Jennifer Mankoff, their research focuses on representation of people with disabilities in digital technologies like avatars and generative AI tools. They have recently investigated how technology can support people with fluctuating access needs, such as neurodiverse people and people with chronic or mental health conditions.

Mack has been an invaluable resource at CREATE, co-leading graduate seminars, presenting workshops on accessibility, and contributing to CREATE’s accessibility research. “CREATE has been a great place to meet other accessibility researchers and get in contact with disabled people in our community,” says Mack. “As someone who tries to align my research with community needs and desires, this connection to the Seattle disability community is invaluable.”

Mack, whose thesis is titled Understanding, Designing, and Building Adaptable Technology for Fluctuating Accessibility Needs in Group Settings, is on the job market, interested in a research scientist position in industry.

Emma McDonnell, a white woman in her 20s with short red hair, freckles, and a warm smile. In the background: a lush landscape and the Colosseum.

Earning her Ph.D. from Human Centered Design and Engineering, Emma McDonnell is advised by CREATE associate director Leah Findlater. McDonnell’s research blends computer science, design, and disability studies to explore ways that technology can be designed to align with disability politics and social worlds.

Her dissertation research explores how communication technology, specifically captioning, could be redesigned to encourage mixed-ability groups to take a collective approach to accessibility. Along with CREATE associate directors Leah Findlater and Jon Froehlich, McDonnell studied captioning practices on TikTok and offered steps toward a standard for user-generated captioning. With fellow Ph.D. student Avery Mack, McDonnell presented a workshop on accessible presentations for CREATE’s GAAD Day 2024, contextualizing the importance of accessibility within the longer history of disability discrimination and activism.

Looking ahead

McDonnell is interested in postdoctoral opportunities to continue exploring new possibilities for technology design anchored in critical disability perspectives. 

Venkatesh Potluri leans toward the camera smiling with eyes cast downward

Advised by CREATE Director Jennifer Mankoff in the Paul G. Allen School of Computer Science & Engineering, Venkatesh Potluri’s research examines accessibility barriers experienced by blind or visually impaired (BVI) developers participating in professional programming domains such as user interface design, data science, and physical computing. His work contributes real-world systems to improve developer tools and new interaction techniques to address these access barriers. His thesis is titled, A Paradigm Shift in Nonvisual Programming.

While at the UW, Potluri has been selected as an Apple Scholar and a Google Lime Scholar, and contributed to the Accessibility Guide for Data Science and STEM Classes. He presented a paper on a large-scale analysis of the accessibility of Jupyter notebooks, as well as a new tool that enables blind and visually impaired people to create their own data visualizations to explore streaming data.

Asked about his experience with CREATE, Potluri responded, "Since CREATE's founding, I've been thrilled by its mission to take a holistic approach to accessibility with disabled experts and stakeholders—from education to research to translation. I'm grateful to have been part of this beacon of high-quality research informed by a deep understanding of disability. I aspire to carry the torch forward and help make the world accessible!"

Future plans

Potluri will join the University of Michigan as an assistant professor in the Information School in Fall 2024.

 

Headshot of Ather Sharif outside on a sunny balcony with blue sky behind him

Graduating with a Ph.D. from the Paul G. Allen School of Computer Science & Engineering, Ather Sharif is co-advised by CREATE faculty Katharina Reinecke and CREATE associate director Jacob O. Wobbrock. Sharif’s research in Human-Computer Interaction (HCI) focuses on making online data visualizations accessible to screen-reader users. He created VoxLens, a first-of-its-kind system that uses voice assistants to let screen-reader users extract information from online data visualizations. He also created UnlockedMaps, an open-data map that helps users with mobility disabilities make informed decisions about their commute.

Sharif has garnered many awards while at the UW.

Sharif credits CREATE leaders, including his advisors as well as Richard Ladner, Jennifer Mankoff, and Anat Caspi, “who are not only prominent allies for disabled people but are always willing to advise and guide students to be the best researchers they can be.”

“I cannot begin to express how incredible it is to have CREATE as part of our ecosystem,” says Ather. “It advances the state of accessible technologies for people with disabilities through cutting-edge research. Personally, as someone with a disability, it means the world to me. As a researcher, CREATE has funded almost all of my research work at UW.”

After graduation, Sharif will be traveling on a 2024 UW Bonderman Fellowship. He plans to visit Brazil, Argentina, Peru, Costa Rica, Japan, Vietnam, South Korea, and Thailand to learn about disability rights history and the physical infrastructure each country provides for wheelchair users, and to broaden his perspectives, challenge his viewpoints, and identify real-life barriers disabled people face.

With too many accomplishments among them to list here, these almost-minted Ph.D.s collaborated on projects that have contributed to CREATE’s growth and success. In addition to mentoring undergraduate students, publishing and presenting papers, and working in labs and with researchers, here are a few of the ways Sharif, Potluri, McDonnell, and Mack have worked together:

  • Avery Mack and Venkatesh Potluri contributed to the Accessibility Guide for Data Science and STEM Classes, available via the A11y in Action resource link on the CREATE website. They, with the other lead contributors, received the 2024 UW Digital Accessibility Team Award as part of UW Accessible Technology’s Global Accessibility Awareness Day celebration.
  • Potluri and Mack also co-led five CREATE Accessibility Seminars to discuss relevant reading and share accessibility research.
  • Mack and Ather Sharif collaborated with Lucille Njoo to dispel common myths about students with disabilities in an article in the Winter 2024 Allen School DEIA newsletter.
  • McDonnell and Mack presented an accessible presentations workshop as part of UW’s 2024 Global Accessibility Awareness Day celebration.

Empowering users with disabilities through customized interfaces for assistive robots

March 15, 2024

For people with severe physical limitations such as quadriplegia, the ability to tele-operate personal assistant robots could bring a life-enhancing level of independence and self-determination. Allen School Ph.D. candidate Vinitha Ranganeni and her advisor, CREATE faculty member Maya Cakmak, have been working to understand and meet the needs of users of assistive robots.

This month, Ranganeni and Cakmak presented a video at the Human Robot Interaction (HRI) conference that illustrates the practical (and touching) ways deploying an assistive robot in a test household has helped Henry Evans require a bit less from his caregivers and connect to his family.

The research was funded by NIA/NIH Phase II SBIR Grant #2R44AG072982-02 and NIBIB Grant #1R01EB034580-01.

Captioned video of Henry Evans demonstrating how he can control an assistive robot using the customized graphical user interface he co-designed with CREATE Ph.D. student/Allen School Ph.D. candidate Vinitha Ranganeni.

Their earlier study, Evaluating Customization of Remote Tele-operation Interfaces for Assistive Robots, evaluated the usability and effectiveness of a customized, tele-operation interface for the Stretch RE2 assistive robot. The authors show that no single interface configuration satisfies all users’ needs and preferences. Users perform better when using the customized interface for navigation, and the differences in preferences between participants with and without motor impairments are significant.

Last summer, as a robotics engineering consultant for Hello Robot, Ranganeni led the development of the interface for deploying an assistive robot in a test household, that of Henry and Jane Evans. Henry was a Silicon Valley CFO when a stroke suddenly left him non-speaking and with quadriplegia. His wife Jane is one of his primary caregivers.

The research team developed a highly customizable graphical user interface to control Stretch, a relatively simple and lightweight robot that has enough range of motion to reach from the floor to countertops.

Work in progress, but still meaningful independence

Stretch can’t lift heavy objects or climb stairs. Assistive robots are expensive, prone to shutting down, and customization is still complex and time-intensive. As noted in an IEEE Spectrum article about the Evanses’ installation, getting the robot’s assistive autonomy to a point where it’s functional and easy to use is the biggest challenge right now, and more work needs to be done on providing simple interfaces, like voice control.

The article states, “Perhaps we should judge an assistive robot’s usefulness not by the tasks it can perform for a patient, but rather on what the robot represents for that patient, and for their family and caregivers. Henry and Jane’s experience shows that even a robot with limited capabilities can have an enormous impact on the user. As robots get more capable, that impact will only increase.”

In a few short weeks, Stretch made a difference for Henry Evans. “They say the last thing to die is hope. For the severely disabled, for whom miraculous medical breakthroughs don’t seem feasible in our lifetimes, robots are the best hope for significant independence,” says Henry.


Collaborator, advocate, and community researcher Tyler Schrenk

Though it has been many months since the death of Tyler Schrenk, a CREATE-funded researcher and a frequent collaborator, his impact is still felt in our collective research.

Tyler Schrenk making a presentation at the head of a lecture room. He has brown spiky hair, a full beard, and is seated in his power wheelchair.

Schrenk was a dedicated expert in the assistive technology field and led the way in teaching individuals and companies how to use assistive technologies to create independence. He was President & Executive Director of the Tyler Schrenk Foundation until his death in 2023. 



Winter 2023 CREATE Research Showcase

December 12, 2023

Students from CSE 493 and additional CREATE researchers shared their work at the December 2023 CREATE Research Showcase. The event was well attended by CREATE students, faculty, and community partners. Projects included, for example: an analysis of the accessibility of transit stations and a tool to aid navigation within them; an app to help colorblind people of color pick makeup; and a study of the accessibility of generative AI that also considers the ableist implications of limited training data.

CSE 493 student projects

In its first offering, Autumn quarter 2023, CSE’s undergraduate Accessibility class focused on the importance of centering first-person accounts in disability-focused technology work. Students worked this quarter on assignments ranging from accessibility assessments of county voting systems to disability justice analyses to open-ended final projects.

Alti Discord Bot »

Keejay Kim, Ben Kosa, Lucas Lee, Ashley Mochizuki

Alti is a Discord bot that automatically generates alt text for any image that gets uploaded onto Discord. Once you add Alti to your Discord server, Alti will automatically generate alt text for the image using artificial intelligence (AI).
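The core decision a bot like Alti has to make is which uploaded attachments actually need generated alt text. The sketch below is illustrative only (the function names and file-type list are assumptions, and the discord.py event wiring and the AI captioning call are omitted); it shows the filtering and reply-formatting logic such a bot might use.

```python
# Hypothetical helper logic for an alt-text bot like Alti.
# The captioning model call and Discord event wiring are not shown.

IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def needs_alt_text(filename, existing_description=None):
    """An attachment needs generated alt text if it is an image and the
    uploader did not already supply their own description."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in IMAGE_EXTENSIONS and not existing_description

def format_reply(filename, caption):
    """Compose the bot's reply so screen-reader users can associate the
    generated caption with the image it describes."""
    return f"Alt text for {filename}: {caption}"
```

In a discord.py `on_message` handler, the bot would loop over `message.attachments`, call a captioning model for each attachment where `needs_alt_text` is true, and post the formatted reply in the channel.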

Enhancing Self-Checkout Accessibility at QFC »

Abosh Upadhyaya, Ananya Ganapathi, Suhani Arora

Makes self-checkout more accessible to visually impaired individuals

Complexion Cupid: Color Matching Foundation Program »

Ruth Aramde, Nancy Jimenez-Garcia, Catalina Martinez, Nora Medina

Allows individuals with color blindness to upload an image of their skin, and provides a makeup foundation match. Additionally, individuals can upload existing swatches and will be provided with filtered photos that better show the matching accuracy.

Twitter Content Warnings »

Stefan D’Souza, Aditya Nair

A Chrome extension used in conjunction with twitter.com to help people with PTSD

Lettuce Eat! A Map App for Accessible Dietary Restrictions »

Arianna Montoya, Anusha Gani, Claris Winston, Joo Kim

Parses menus on restaurants’ websites to provide information on dietary restrictions, supporting individuals with specific dietary requirements, such as vegans, vegetarians, and those with celiac disease.

Form-igate »

Sam Assefa

A Chrome extension that allows users with motor impairments to interact with Google Forms using voice commands, enhancing accessibility.

Lite Lingo: Plain Text Translator »

Ryan Le, Michelle Vu, Chairnet Muche, Angelo Dauz

A plain text translator to help individuals with learning disabilities

Matplotalt: Alt text for matplotlib figures »

Kai Nailund

[No abstract]

PadMap: Accessible Map for Menstrual Products »

Kirsten Graham, Maitri Dedhia, Sandy Cheng, Aaminah Alam

Our goal is to ensure that anywhere on campus, people can search up the closest free menstrual products to them and get there in an accessible way.

SCRIBE: Crowdsourcing Scientific Alt Text »

Sanjana Chintalapati, Sanjana Sridhar, Zage Strassberg-Phillips

A prototype plugin for arXiv that adds alt text to requested papers via crowdwork.

PalPalette »

Pu Thavikulwat, Masaru Chida, Srushti Adesara, Angela Lee

A web app that helps combat loneliness and isolation for young adults with disabilities

SpeechIT »

Pranati Dani, Manasa Lingireddy, Aryan Mahindra

A presentation speech checker to ensure a user’s verbal speech during presentation is accessible and understandable for everyone.

Enhancing Accessibility in SVG Design: A Fabric.js Solution »

Julia Tawfik, Kenneth Ton, Balbir Singh, Aaron Brown

A “Laser Cutter Generator” interface which displays a form to select shapes and set dimensions for SVG creation.

CREATE student and faculty projects

Designing and Implementing Social Stories in Technology: Enhancing Collaboration for Parents and Children with Neurodiverse Needs

Elizabeth Castillo, Annuska Zolyomi, Ting Zhou

Our research project, conducted through interviews in Panama, focuses on the user-centered design of technology to enhance autism social stories for children with neurodiverse needs. We aim to improve collaboration between parents, therapists, and children by creating a platform for creating, sharing, and tracking the usage of social stories. While our initial research was conducted in Panama, we are eager to collaborate with individuals from Japan and other parts of the world where we have connections, to further advance our work in supporting neurodiversity.

An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility

Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known specifically about GAI in particular. To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Machine Learning for Quantifying Rehabilitation Responses in Children with Cerebral Palsy

Charlotte D. Caskey, Siddhi R. Shrivastav, Alyssa M. Spomer, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele

Increases in step length and decreases in step width are often a rehabilitation goal for children with cerebral palsy (CP) participating in long-term treadmill training. But it can be challenging to quantify the non-linear, highly variable, and interactive response to treadmill training when parameters such as treadmill speed increase over time. Here we use a machine learning method, Bayesian Additive Regression Trees, to show that there is a direct effect of short-burst interval locomotor treadmill training on increasing step length and modulating step width for four children with CP, even after controlling for confounding parameters of speed, treadmill incline, and time within session.

Spinal Stimulation Improves Spasticity and Motor Control in Children with Cerebral Palsy

Victoria M. Landrum, Charlotte D. Caskey, Siddhi R. Shrivastav, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele

Cerebral palsy (CP) is caused by a brain injury around the time of birth that leads to less refined motor control and causes spasticity, a velocity dependent stretch reflex that can make it harder to bend and move joints, and thus impairs walking function. Many surgical interventions that target spasticity often lead to negative impacts on walking function and motor control, but transcutaneous spinal cord stimulation (tSCS), a novel, non-invasive intervention, may amplify the neurological response to traditional rehabilitation methods. Results from a 4-subject pilot study indicate that long-term usage of tSCS with treadmill training led to improvements in spasticity and motor control, indicating better walking function.

Adaptive Switch Kit

Kate Bokowy, Mia Hoffman, Heather A. Feldner, Katherine M. Steele

We are developing a switch kit for clinicians and parents to build customizable switches for children with disabilities. These switches are used to help children play with computer games and adapted toys as an early intervention therapy.

Developing a Sidewalk Improvement Cost Function

Alex Kirchmeier, Cole Anderson, Anat Caspi

In this ongoing project, I am developing a Python script that uses a sidewalk issues dataset to determine the cost of improving Seattle’s sidewalks. My intention is to create a customizable function that will help users predict the costs associated with making sidewalks more accessible.
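A cost function like the one described might map each reported sidewalk issue to an estimated repair cost and sum over the dataset. The sketch below is a minimal illustration only; the issue types and dollar figures are made-up placeholders, not the project’s actual data or Seattle’s real costs.

```python
# Illustrative sidewalk-improvement cost function.
# Issue types and unit costs are hypothetical placeholders.

UNIT_COSTS = {
    "cracked_surface": 1200.0,    # dollars per reported issue (assumed)
    "missing_curb_ramp": 8000.0,
    "obstruction": 300.0,
}

def estimate_cost(issues, unit_costs=UNIT_COSTS, default_cost=500.0):
    """Sum a cost estimate over issue records (dicts with an
    'issue_type' key); unknown types fall back to default_cost."""
    return sum(unit_costs.get(i["issue_type"], default_cost) for i in issues)
```

Making `unit_costs` and `default_cost` parameters is what keeps the function customizable: users can plug in their own per-issue cost assumptions without changing the summation logic.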

Exploring the Benefits of a Dynamic Harness System Using Partial Body Weight Support on Gross Motor Development for Infants with Down Syndrome

Reham Abuatiq, PT, MSc; Mia Hoffman, ME, BSc; Alyssa Fiss, PT, PhD; Julia Looper, PT, PhD; & Heather Feldner, PT, PhD, PCS

We explored the benefits of a Dynamic Harness System Using Partial Body Weight Support (PBWS) within an enriched play environment on gross motor development for infants with Down syndrome, using a randomized crossover study design. The effectiveness of the PBWS harness system on gross motor development was clearly evident. The overall intervention positively affected activity levels; however, the direct impact of the harness remains unclear.

StreetComplete for Better Pedestrian Mapping

Sabrina Fang, Kohei Matsushima

StreetComplete is a gamified, structured, and user-friendly mobile application for users to improve OpenStreetMap data by completing pilot quests. OpenStreetMap is an open-source, editable world map created and maintained by a community of volunteers. The goal of this research project is to design pilot quests in StreetComplete to accurately collect information about “accessibility features,” such as sidewalk width and the quality of lighting, to improve accessibility for pedestrian mapping.

Transit Stations Are So Confusing!

Jackie Chen, Milena Johnson, Haochen Miao, and Raina Scherer

We are collecting data on the wayfinding nodes in four different Sound Transit light rail stations, and interpreting them through the GTFS-pathways schema. In the future, we plan on visualizing this information through AccessMaps such that it can be referenced by all users.

Optimizing Seattle Curbside Disability Parking Spots

Wendy Bu, Cole Anderson, Anat Caspi

The project is born out of a commitment to enhance the quality of life for individuals with disabilities in the city of Seattle. The primary objective is to systematically analyze and improve the allocation and management of curbside parking spaces designated for disabled individuals. By improving accessibility for individuals with disabilities, the project contributes to fostering a more equitable and welcoming urban environment.

Developing Accessible Tele-Operation Interfaces for Assistive Robots with Occupational Therapists

Vinitha Ranganeni, Maya Cakmak

The research is motivated by the potential of using tele-operation interfaces with assistive robots, such as the Stretch RE2, to enhance the independence of individuals with motor limitations in completing activities of daily living (ADLs). We explored the impact of customization of tele-operation interfaces and deployed the Stretch RE2 in a home for several weeks, facilitated by an occupational therapist, enabling a user with quadriplegia to perform daily activities more independently. Ultimately, this work aims to empower users and occupational therapists in optimizing assistive robots for individual needs.

HuskyADAPT: Accessible Design and Play Technology

HuskyADAPT Student Organization

HuskyADAPT is a multidisciplinary community at the University of Washington that supports the development of accessible design and play technology. Our community aims to initiate conversations regarding accessibility and ignite change through engineering design. It is our hope that we can help train the next generation of inclusively minded engineers, clinicians, and educators to help make the world a more equitable place.

A11yBoard for Google Slides: Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users

Zhuohao (Jerry) Zhang, Gene S-H Kim, Jacob O. Wobbrock

Presentation software is largely inaccessible to blind users due to the limitations of screen readers with 2-D artboards. This study introduces an advanced version of A11yBoard, initially developed by Zhang & Wobbrock (CHI2023), which now integrates with Google Slides and addresses real-world challenges. The enhanced A11yBoard, developed through participatory design including a blind co-author, demonstrates through case studies that blind users can independently read and create slides, leading to design guidelines for accessible digital content creation tools.

“He could go wherever he wanted”: Driving Proficiency, Developmental Change, and Caregiver Perceptions following Powered Mobility Training for Children 1-3 Years with Disabilities

Heather A. Feldner, PT, MPT, PhD; Anna Fragomeni, PT; Mia Hoffman, MS; Kim Ingraham, PhD; Liesbeth Gijbels, PhC; Kiana Keithley, SPT; Patricia K. Kuhl, PhD; Audrey Lynn, SPT; Andrew Meltzoff, PhD; Nicole Zaino, PhD; Katherine M. Steele, PhD

The objective of this study was to investigate how a powered mobility intervention for young children (ages 1-3 years) with disabilities impacted: 1) Driving proficiency over time; 2) Global developmental outcomes; 3) Learning tool use (i.e., joystick activation); and 4) Caregiver perceptions about powered mobility devices and their child’s capabilities.

Access to Frequent Transit in Seattle

Darsh Iyer, Sanat Misra, Angie Niu, Dr. Anat Caspi, Cole Anderson

The research project in Seattle focuses on analyzing access to public transit, particularly frequent transit stops, by considering factors like median household income. We scripted in QGIS, analyzed walksheds, and examined demographic data surrounding Seattle’s frequent transit stops to understand the equity of transit access in different neighborhoods. Our goal was to visualize and analyze the data to gain insights into the relationship between transit access, median household income, and other demographic factors in Seattle.

Health Service Accessibility

Seanna Qin, Keona Tang, Anat Caspi, Cole Anderson

Our research aims to discover any correlation between median household income and driving duration from census tracts to the nearest urgent care location in the Bellevue and Seattle region.

Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means

Ather Sharif, Ruican Zhong, and Yadi Wang

Incorporating uncertainty in data visualizations is critical for users to interpret and reliably draw informed conclusions from the underlying data. However, visualization creators conventionally convey the information regarding uncertainty in data visualizations using visual techniques (e.g., error bars), which disenfranchises screen-reader users, who may be blind or have low vision. In this preliminary exploration, we investigated ways to convey uncertainty in data visualizations to screen-reader users.
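One non-visual alternative to an error bar is to render the estimate and its interval as a sentence a screen reader can speak. The sketch below is not from the paper; it is a minimal illustration of that idea, with an assumed function name and fixed 95% interval wording.

```python
# Illustrative rendering of uncertainty as screen-reader-friendly text,
# in place of a visual error bar. Hypothetical helper, not from the paper.

def describe_uncertainty(label, mean, ci_low, ci_high, unit=""):
    """Render a point estimate and its 95% confidence interval as a
    spoken-style sentence instead of a visual error bar."""
    u = f" {unit}" if unit else ""
    return (f"{label}: {mean}{u}, with a 95 percent confidence interval "
            f"from {ci_low}{u} to {ci_high}{u}.")
```

For example, `describe_uncertainty("Average commute", 32, 28, 36, "minutes")` yields a full sentence a screen reader announces alongside the chart, rather than leaving the error bars invisible to the user.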

UW News: How an assistive-feeding robot went from picking up fruit salads to whole meals

November, 2023

In tests with this set of actions, the robot picked up the foods more than 80% of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal. UW News talked with co-lead authors Gordon and Nanavati, both doctoral students in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding. The team presented its findings Nov. 7 at the 2023 Conference on Robotic Learning in Atlanta.

An assistive-feeding robotic arm attached to a wheelchair uses a fork to stab a piece of fruit on a plate among other fruits.

The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?

Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.

Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.

EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.

In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.

In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.

We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, and put the robot either on a wheelchair, if they have the mounting apparatus ready, or a tripod next to their wheelchair, and run through an entire meal.

For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?

EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.

TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?

Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.

There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.

A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.

What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?

EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.

AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities, such as breaking your arm. All of us might face disability at some point as we age, and we want to make sure we have the tools necessary to live dignified, independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality-of-life improvement will hopefully open up greater access.

Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.

For more information, contact Gordon at ekgordon@cs.uw.edu, Nanavati at amaln@cs.uw.edu and Faulkner at taylorkf@cs.washington.edu.


Excerpted and adapted from the UW News story by Stefan Milne.

UW News: A11yBoard accessible presentation software

October 30, 2023 | UW News

A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.
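To make the Z-order problem concrete, here is a minimal sketch (not A11yBoard’s actual code) contrasting the stacking-based order a screen reader typically follows with a reading order derived from each object’s 2-D position:

```python
# Illustrative sketch: why Z-order and spatial reading order diverge.
# The object names and coordinates below are invented for this example.
from dataclasses import dataclass

@dataclass
class SlideObject:
    name: str
    x: float  # left edge, in slide coordinates
    y: float  # top edge
    z: int    # stacking order (higher = drawn later, on top)

objects = [
    SlideObject("footer", x=50, y=500, z=0),
    SlideObject("title", x=50, y=20, z=1),
    SlideObject("body text", x=50, y=120, z=2),
]

# A typical screen reader walks objects in Z-order...
z_order = [o.name for o in sorted(objects, key=lambda o: o.z)]

# ...but a reading order based on 2-D position (top-to-bottom, then
# left-to-right) matches how the slide actually looks.
spatial_order = [o.name for o in sorted(objects, key=lambda o: (o.y, o.x))]

print(z_order)        # ['footer', 'title', 'body text']
print(spatial_order)  # ['title', 'body text', 'footer']
```

Here the footer is announced first simply because it was drawn first, while the spatial sort recovers the layout a sighted viewer perceives.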

Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.

“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.

For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


This article was adapted from the UW News article by Stefan Milne.

Proposed Federal Accessibility Standards: CREATE’s Guide to Reviewing and Commenting

A proposal for new digital accessibility guidelines for entities receiving federal funds was released for review by the U.S. Department of Justice on August 4, 2023. Anyone affected by these guidelines had until October 3, 2023 to comment.

  • CREATE’s official response, in collaboration with colleagues within the UW and at peer institutions, is posted on the DOJ site temporarily.
  • The response is available as an accessible and tagged PDF document (53 pages).
  • If you have any questions, reach out to CREATE at create-contact@uw.edu.

August 2023 announcement

Note that the comment period has ended.

The U.S. Department of Justice (DOJ) is proposing new digital accessibility requirements under the Americans with Disabilities Act (ADA). The goal is to give public entities clear and concrete standards for how to fulfill their obligations under the ADA Title II regulations, ensuring that public entities provide equal access to all services, programs, and activities that are provided via the web and mobile apps.

These standards impact mobile apps, websites, and course materials created by and for government bodies, including public schools (K-12 and universities), and public services of all kinds.

Below, we have tried to summarize and explain some of the most important aspects of the proposed rule. However, in summarizing we have naturally emphasized things we think are important. Topics the rule touches on that we summarize below include the proposed timeline for making digital content accessible; the proposed rules impacting K-12 and college/university course content; the standards that digital content must meet to be accessible, for websites, apps, and live audio captioning; and how compliance should be assessed.

Note that submitted comments are publicly available online at: DOJ-CRT-2023-0007 on www.regulations.gov.

You can comment on any aspect of the proposed rule, but the DOJ has asked a number of very specific questions that you might want to comment on.

Notable questions, highlighted

We highlight several of the DOJ questions below, labeled with the Question # that the DOJ uses for them. You will see that we present these questions out of order – we present them in the order that made sense to us when we summarized this proposed rule. You can read the whole proposed rule and all the questions, in order, on the posting of Docket (DOJ-CRT-2023-0007) on www.regulations.gov. Sometimes we write Question to Consider before a question; these are questions we think you might want to comment on even though the DOJ did not ask about them.

Why submit comments?

The DOJ is still trying to decide exactly what the rule should say, how quickly public entities should improve digital accessibility, and what exceptions to allow. For example, the current rule states that course content posted on a password-protected website (such as a learning management system (LMS) like Canvas) does not have to be made accessible until a student with a disability needs access to that content. If a student registers for the course, or transfers into it, then the course content has to be made fully accessible to all disabilities by the start of the term or within 5 days (if the term has already started). In addition, the course needs to stay accessible over time.

If you agree with this, you might want to say so in your comments, because someone else might think this is unreasonable, and the DOJ should hear from both sides. But you might disagree, in which case you should also comment.  

What are the new standards about?

These standards affect web content and mobile apps. These are very broadly defined and include almost any digital content that is important to interacting with public entities. 

Web content is defined as “information or sensory experience that is communicated to the user by a web browser or other software. This includes text, images, sounds, videos, controls, animations, navigation menus, and documents.” It also includes things like web content posted on social media apps, to the extent possible (for example, if the app supports it, the public entity should add image descriptions to images it posts).


Question 1: The DOJ’s definition of “conventional electronic documents” consists of an exhaustive list of specific file types. Should the DOJ instead craft a more flexible definition that generally describes the types of documents that are covered or otherwise change the proposed definition, such as by including other file types (e.g., images or movies), or removing some of the listed file types?


Mobile applications, or “apps,” are defined as “software applications that are designed to be downloaded and run on mobile devices such as smartphones and tablets.” A public entity may use a mobile app that someone else designed and built (an “external mobile app”); in this case it still needs to be accessible. 


Question 25: What types of external mobile apps, if any, do public entities use to offer their services, programs, and activities to members of the public, and how accessible are these apps? … should [these apps be exempt]? If so, should this exception expire after a certain time, and how would this exception impact persons with disabilities?


Timeline for making web content and mobile apps accessible

Almost any content or app that is important to interacting with a public entity has to be made accessible within 2 years of the date when the rule becomes official, regardless of whether a disabled person asks for it or is known to be using that content or app. For public entities that serve a small number of people (<50,000), the proposed deadline is 3 years. For example, a small county police department in a county with <50,000 people would have 3 years. However, only truly independent entities qualify for this exception. For example, the same police department in a county with >50,000 people would only have two years; similarly, a small public school in a large county would only have two years.

If a public entity feels this would be too costly, under the proposed rule it must prove this, and it still has to support its disabled constituents “to the maximum extent possible.”


Question 4: What compliance costs and challenges might small public entities face in conforming with this rule? … [do they have internal staff for addressing accessibility? If they have recently addressed accessibility, how much did that cost?]

Question 5: Should the DOJ adopt a different WCAG version or conformance level for small entities or a subset of small entities?


Exemptions

The new rule does include some exceptions, meaning it allows some content to be inaccessible. However, when a disabled person needs the inaccessible content, existing regulations implementing title II of the ADA may come into effect, typically requiring the content to be made accessible.

Archived and pre-existing non-web documents

The following exemptions are intended to reduce the burden of the new rule for large collections of rarely used documents:

  • Archived content not in active use
  • Pre-existing non-web documents

The DOJ has several questions about these exemptions (see Questions 15, 16, 17, 18, 19, and 20) which relate to how such content is currently used, where it is posted, and how these exemptions would impact people with disabilities.


Course content

Generally speaking, course content (such as a public syllabus or handout) has to be made accessible. However, course content inside a password protected website such as a learning management system (for both K-12 schools and colleges/universities) is exempted if the content is only available specifically to admitted students enrolled in the relevant course (and disabled parents, in the case of K-12 materials). 

Once an institution knows, or should have known that a student (or, for K-12 courses, parent) with a disability is enrolled in the course, “all existing course content must be made fully accessible by the [start of the academic term] for that course… New content added throughout the term for the course must also comply… at the time it is added to the website.”

Under today’s interpretation of the ADA, transferring to a course during the add/drop period or from a waitlist often means that the course is less accessible. The DOJ guidelines address this, requiring in these cases that course material be made accessible within five business days of the student’s enrollment. The DOJ also requires “auxiliary aids and services… that enable the student with a disability to participate” while a course is being made accessible. Notably, the relevant material would need to be fully accessible to all disabilities, “not merely the criteria related to that student or parent’s disability.”

Importantly, the obligation to make the course content accessible is “ongoing for the duration of the course” and “as long as that content is available to students on the password-protected course website.” It is not clear whether this applies to future offerings of the same course, as typically use of an LMS involves creating a “new” password-protected site for each offering.

The DOJ has a lengthy analysis of the tangible and intangible benefits of this ruling, as well as its expected costs. They estimate that “[b]y the end of year four (two years after postsecondary schools begin to remediate course content), 96 percent of courses offered by public four-year and postgraduate institutions and 90 percent of courses offered by community colleges will have been remediated.” They further estimate that postsecondary institutions will finish remediation on their own, to preempt requests, in the following year. They have similar estimates about K-12 education.


Question to Consider: Should “the duration of the course” apply to a single offering of a course for a single term, or to all offerings of that course in all terms (even if a separate LMS site, with a separate password, was created for each offering)? How might this impact the likelihood that most courses are fully accessible within four years?

Question to Consider: Do you agree with the prediction that, under the proposed rule, 96% of courses will be accessible? How might the variable representation of people with disabilities across fields affect this? For example, people with disabilities are particularly under-represented in STEM fields, where diagrams and math equations are often especially inaccessible. What could be changed about the proposed rule to make this prediction more likely to come true?


The DOJ also has several questions; we highlight some of them below. We have combined the questions for K-12 and post-secondary educational institutions by referring to [public educational institutions], since the primary difference is that, in the case of K-12 education only, parents with disabilities also trigger the need to make documents accessible.


Question 27 & 36: How difficult would it be for [public educational institutions] to comply with this rule in the absence of this exception?

Question 28 & 37: What would the impact of this exception be on people with disabilities?

Question 33 & 42: How long would it take to make course content available on a public entity’s password-protected or otherwise secured website for a particular course accessible, and does this vary based on the type of course? Do students need access to course content before the first day of class? How much delay in accessing online course content can a student reasonably overcome in order to have an equal opportunity to succeed in a course, and does the answer change depending on the point in the academic term that the delay occurs?

Question 35 & 44: Should the DOJ consider an alternative approach, such as requiring that all newly posted course content be made accessible on an expedited time frame, while adopting a later compliance date for remediating existing content?


This includes third-party content. For example, a third-party website for practicing math problems, if required to complete coursework, would need to be accessible. The rules do not specifically mention textbooks. However, the DOJ asks:


Question 26: Are there particular issues relating to the accessibility of digital books and textbooks that the DOJ should consider in finalizing this rule? Are there particular issues that the DOJ should consider regarding the impact of this rule on libraries?

Question to Consider: Has textbook accessibility been a barrier to accessing courses? What are some examples of problems you’ve encountered? How common are these problems? What could help?


Other exemptions

The DOJ also exempts linked third-party information (if it is not providing a direct service) and individualized, password-protected documents (such as personal utility bills). However, it specifies that if these documents have deadlines associated with them and are not accessible, the public entity must adjust those deadlines “to ensure that a person with a disability has equal access to its services, programs, or activities.” The DOJ asks whether proper processes are in place:


Question to Consider: How might a delay in receiving an accessible document affect you? For example, could it affect whether you receive care services, money for food, or healthcare services that could cause harm if delayed? If you think this is a concern, what would be a reasonable deadline for receiving these documents?

Question 46: Do public entities have adequate systems for receiving notification that an individual with a disability requires access to an individualized, password-protected conventional electronic document? What kinds of burdens do these notification systems place on individuals with disabilities and how easy are these systems to access? Should the DOJ consider requiring a particular system for notification or a particular process or timeline that entities must follow when they are on notice that an individual with a disability requires access to such a document?


How is “accessible” defined?

The ADA has always included digital accessibility. However, a lack of specific standards in the past has left public entities to define for themselves what compliance looks like. The result has been a lack of consistent attention to accessibility. According to the DOJ, “voluntary compliance … has been insufficient in providing access.”

Now, the DOJ is requiring public entities to follow the Web Content Accessibility Guidelines (WCAG) version 2.1, at the AA level. This is a carefully tested web standard that has recently been expanded to address mobile accessibility needs as well.
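As one concrete example of what WCAG 2.1 Level AA requires, Success Criterion 1.4.3 sets a minimum contrast ratio of 4.5:1 for normal-size text. The ratio is computed from the relative luminance of the foreground and background colors; the sketch below follows the formulas in the WCAG 2.1 specification:

```python
# Contrast-ratio check per WCAG 2.1 Success Criterion 1.4.3 (Level AA).
def relative_luminance(rgb):
    """Relative luminance of an sRGB color, per the WCAG 2.1 definition."""
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio (L1 + 0.05) / (L2 + 0.05), lighter color on top."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background passes AA for normal text (21:1),
# while a mid-grey on white (roughly 3.8:1) falls short of 4.5:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))      # 21.0
print(contrast_ratio((130, 130, 130), (255, 255, 255)) >= 4.5)   # False
```

Checks like this one are easy to automate, which is part of why contrast failures are among the most commonly reported accessibility issues.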


Question 3: Are there technical standards or performance standards other than WCAG 2.1 that the Department should consider? … If so, what is a reasonable time frame for State and local compliance with WCAG 2.2 and why? Is there any other standard that the Department should consider, especially in light of the rapid pace at which technology changes?


The DOJ notes that “the Access Board’s section 508 standards include additional requirements applicable to mobile apps that are not in WCAG 2.1 [including]: interoperability requirements to ensure that a mobile app does not disrupt a device’s assistive technology for persons with disabilities (e.g., screen readers for persons who are blind or have low vision); requirements for mobile apps to follow preferences on a user’s phone such as settings for color, contrast, and font size; and requirements for caption controls and audio description controls that enable users to adjust caption and audio description functions.”


Question 8: Is WCAG 2.1 Level AA the appropriate accessibility standard for mobile apps? Should the Department instead adopt another accessibility standard or alternative for mobile apps, such as the requirements from section 508 discussed above?


The DOJ also notes that this includes captioning of “live audio,” such as in real-time presentations. They note that many meetings have moved online since the start of the COVID-19 pandemic, making live audio captioning “even more critical for individuals with certain types of disabilities to participate fully in civic life.” Proper live audio captioning includes speaker identification as well as accurate transcription of spoken text, sound effects, and other significant audio. Live audio captioning of this sort cannot be automated, and the DOJ is concerned about costs. They ask:


Question 13: Should the Department consider a different compliance date for the captioning of live-audio content in synchronized media or exclude some public entities from the requirement?

Question 14: What types of live-audio content do public entities and small public entities post? What has been the cost for providing live-audio captioning?


Finally, the DOJ notes that “WCAG 2.1 can be interpreted to permit the development of two separate websites—one for individuals with relevant disabilities and another for individuals without relevant disabilities—even when doing so is unnecessary and when users with disabilities would have a better experience using the main web page.” They rightly point out that this raises “concerns about user experience, segregation of users with disabilities, unequal access to information, and maintenance”. Thus the proposed rule explicitly states that parallel development of a separate website, document or app is “permissible only where it is not possible to make websites and web content directly accessible due to technical limitations (e.g., technology is not yet capable of being made accessible) or legal limitations (e.g., web content is protected by copyright).”  They go on to ask:


Question 49: Would allowing [a separate alternate version of a website, document, or app] due to technical or legal limitations result in individuals with disabilities receiving unequal access to a public entity’s services, programs, and activities?


How will compliance be measured?

The DOJ has many questions about the best way to measure compliance. The DOJ acknowledges that a public entity might reasonably not be in full compliance with all of WCAG 2.1’s AA standards at all times. This is because web content changes frequently, assessments may not always agree, and websites may include thousands of pages of content, making compliance more difficult than ensuring access to, say, “a town hall that is renovated once a decade…. The Department also believes that slight deviations from WCAG 2.1 Level AA may be more likely to occur without having a detrimental impact on access than is the case with the ADA Standards. Additionally, it may be easier for an aggrieved individual to find evidence of noncompliance with WCAG 2.1 Level AA than noncompliance with the ADA Standards, given the availability of many free testing tools and the fact that public entities’ websites can be accessed from almost anywhere.”

They discuss several alternatives that could allow for the necessity of slight deviations and short periods of noncompliance while still promoting high compliance overall, including a percentage-based standard (which may be difficult to implement, and may need to weight different aspects of WCAG 2.1 differently to achieve equity); a standard based on policies for feedback, testing and remediation (which may be inconsistently applied); or “organizational maturity” meaning the organization can show it has a robust accessibility program in place (which may not translate to full accessibility or compliance). They solicit commentary on compliance: 


The DOJ asks what evidence an allegation of noncompliance requires (Question 50); whether organizational feedback practices, testing policies, remediation practices, or organizational maturity should matter in assessing compliance (Questions 51, 55, 58); and what specific feedback practices, testing policies, remediation policies, and level of organizational maturity are needed (Questions 52, 53, 54, 59). They also ask:

Question 62: Should the Department address the different level of impact that different instances of nonconformance with a technical standard might have on the ability of people with disabilities to access the services, programs, and activities that a public entity offers via the web or a mobile app? If so, how?


To conclude, the DOJ’s proposed rule covers a number of topics that are of great importance to people with disabilities. We strongly urge you to comment on the rule.

If you have any questions, reach out to CREATE at create-contact@uw.edu.

CREATE Open Source Projects Awarded at Web4All

July 6, 2023

CREATE researchers shone this spring at the Web4All 2023 conference, which, in part, seeks to “make the internet more accessible to the more than one billion people who struggle to interact with digital content each day due to neurodivergence, disability or other impairments.” Two CREATE-funded open source projects won accolades.

Best Technical Paper award:
Understanding and Improving Drilled-Down Information Extraction from Online Data Visualizations for Screen-Reader Users

Authors: Ather Sharif, Andrew Mingwei Zhang, CREATE faculty member Katharina Reinecke, and CREATE Associate Director Jacob O. Wobbrock

Building on prior research that developed taxonomies of the information screen-reader users seek when interacting with online data visualizations, the team used these taxonomies to extend the functionality of VoxLens—an open-source multi-modal system that improves the accessibility of data visualizations—by supporting drilled-down information extraction. They assessed the performance of their VoxLens enhancements through task-based user studies with 10 screen-reader and 10 non-screen-reader users. Their enhancements “closed the gap” between the two groups by enabling screen-reader users to extract information with approximately the same accuracy as non-screen-reader users, reducing interaction time by 22% in the process.

Accessibility Challenge Delegates’ Award:
UnlockedMaps: A Web-Based Map for Visualizing the Real-Time Accessibility of Urban Rail Transit Stations

Authors: Ather Sharif, Aneesha Ramesh, Qianqian Yu, Trung-Anh H. Nguyen, and Xuhai Xu

Ather Sharif’s work on another project, UnlockedMaps, was honored with the Accessibility Challenge Delegates’ Award. The paper details a web-based map that allows users to see in real time how accessible rail transit stations are in six North American cities, including Seattle, Toronto, New York and the Bay Area. UnlockedMaps shows whether stations are accessible and if they are currently experiencing elevator outages. Their work includes a public website that enables users to make informed decisions regarding their commute and an open source API that can be used by developers, disability advocates, and policy makers for a variety of purposes, including shedding light on the frequency of elevator outages and their repair times to identify the disparities between neighborhoods in a given city.

Read more

Spring 2023 Accessible Technology Research Showcase

May 15, 2023

Faculty and students will present research projects at the 2023 Spring Accessible Technology Research Showcase, hosted by CREATE and HuskyADAPT.

Accessible card games: Switch scanning-enabled card holder and dispenser

Project lead: Katrina Ma

While playing card games, individuals with motor disabilities or limited hand and finger use experience a lack of confidentiality, frustration at having to depend on a caregiver, and difficulty connecting with other players. Our team aims to create a simple, universal, user-friendly, and affordable device that allows individuals with motor disabilities to independently hold and play cards.

Our idea is to incorporate switch-scanning technology in a device that can hold up to twelve cards and allow the user to dispense a chosen card to other players at the click of a switch. The device has a universal jack to accommodate users' own switches.

Accessisteer

Project lead: Michelle Jin

In order to provide a comfortable bike-riding experience to a child with hemiplegia, our mission is to create seat, handlebar, and stability modifications to a commercially available bicycle. This allows for cycling without the need for external assistance.

Adapted Ride-on Car+

Project lead: Mia Hoffman

Early self-initiated mobility is fundamental to a young child’s development. Adapted ride-on cars (ROCs) are an affordable alternative mobility option for young children with disabilities. We will be investigating the impact that control type has on a child’s directional control and engagement during play, using ROCs that are joystick-controlled and manually steered.

Current ROCs depend on manual steering, which can be difficult for a young child, especially one with limited motor function. Modifications that allow the child to steer with a joystick are now becoming available. To quantify the child’s device interaction, we have developed a custom data logger, the ROC+, using an Arduino Nano 33 IoT. The data logger measures switch activation for the traditional ROC, steering wheel rotation or joystick position, wheel rotation, acceleration, and angular velocity. We can also measure when an adult has taken control of the device using a remote control. The ROC+ will be used in a forthcoming study to quantify a child’s driving ability and the relationship between a parent and a child while the child is learning to use a powered mobility device.
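As a purely hypothetical sketch (the ROC+ firmware is not shown here, and every field name below is an assumption, not the project’s actual format), log records like those described could be summarized offline to separate child-driven time from adult-takeover time:

```python
# Hypothetical offline analysis of ROC+-style log data. The CSV schema
# (timestamp_ms, joystick_x, remote_override) is invented for this sketch.
import csv
import io

log_csv = """timestamp_ms,joystick_x,remote_override
0,0.0,0
100,0.4,0
200,0.6,0
300,0.0,1
400,0.0,1
"""

SAMPLE_MS = 100  # assumed fixed sampling interval

child_ms = 0
adult_ms = 0
for row in csv.DictReader(io.StringIO(log_csv)):
    # Attribute each sample to the adult whenever the remote control
    # indicates a takeover; otherwise credit the child with driving.
    if row["remote_override"] == "1":
        adult_ms += SAMPLE_MS
    else:
        child_ms += SAMPLE_MS

print(child_ms, adult_ms)  # 300 200
```

A real analysis would also draw on the steering, wheel-rotation, and inertial channels the article lists.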

Blocks4All: A screen reader and switch accessible block-based programming environment

Project lead: Yitong Shan

Blocks4All is a block-based programming environment for all children including those with disabilities. It is accessible with VoiceOver, Switch Control, and Voice Control. Children can learn beginning programming concepts by placing blocks on the app to control the Dash robot.

Chronically Under-Addressed: Considerations for HCI Accessibility Practice with Chronically Ill People

Project lead: Kelly Mack and Emma McDonnell

Accessible design and technology could support the large and growing group of people with chronic illnesses. However, human-computer interaction (HCI) has largely approached people with chronic illnesses through a lens of medical tracking or treatment rather than accessibility. We describe and demonstrate a framework for designing technology in ways that center the chronically ill experience.

First, we identify guiding tenets: 1) treating chronically ill people not as patients but as people with access needs and expertise, 2) recognizing the way that variable ability shapes accessibility considerations, and 3) adopting a theoretical understanding of chronic illness that attends to the body. We then illustrate these tenets through autoethnographic case studies of two chronically ill authors using technology. Finally, we discuss implications for technology design, including designing for consequence-based accessibility, considering how to engage care communities, and how HCI research can engage chronically ill participants in research.

Cultivating inclusive play and maker mindset among neurodiverse children in a preschool classroom

Project lead: Maitraye Das

Young neurodivergent children need equitable opportunities to co-engage in high-quality learning activities alongside neurotypical peers from early childhood. While there has been critical movement toward increasing participation of neurodivergent children in classrooms, much of this work involves school-age children (6 years or older), leaving open questions around how neurodivergent preschoolers aged 3-5 years might engage in collaborative play with and around technologies.

We aim to understand whether and how programmable toy robots (e.g., KIBO) can foster inclusive play and maker mindset among neurodiverse children in preschool classrooms.

We partnered with the Experimental Education Unit (EEU) at the UW Haring Center. We conducted our research in two preschool classrooms, each including 16 children between the ages of 3 and 5. Six to eight children in each classroom have neurodevelopmental conditions including autism, developmental delays, and speech difficulties. Our research activities center on supporting children in making and interacting with the toy robot KIBO. Preliminary findings show that, through careful and accessible adaptation of activities, KIBO could enhance understanding of cause and effect, trial and error, enthusiasm for making and imagination, and a sense of collaboration (and at times competition and negotiation) among neurodiverse groups of children.

Design Principles for Robot-Assisted Feeding in Social Contexts

Project lead: Amal Nanavati

Social dining is a meaningful and culturally significant experience. However, the 1.8 million Americans with motor impairments who cannot eat without assistance face challenges that restrict them from enjoying this social ritual. In this work, we identify the needs of participants with motor impairments during social dining and how robot-assisted feeding can address them.

Following a community-based participatory research method, we worked with a community researcher with motor impairments throughout this study. We contribute (a) insights into how a robot can help overcome challenges in social dining and (b) design principles for creating robot-assisted feeding systems to facilitate meaningful social dining.

“Easier or Harder, Depending on Who the Hearing Person Is”: Codesigning Videoconferencing Tools for Small Groups with Mixed Hearing Status

Project lead: Emma McDonnell

With improvements in automated speech recognition and increased use of videoconferencing, real-time captioning has changed significantly. This shift toward broadly available but less accurate captioning invites exploration of the role hearing conversation partners play in shaping the accessibility of a conversation to d/Deaf and hard of hearing (DHH) captioning users.

While recent work has explored DHH individuals’ videoconferencing experiences with captioning, we focus on established groups’ current practices and priorities for future tools to support more accessible online conversations.

Our study consists of three codesign sessions, conducted with four groups (17 participants total, 10 DHH, 7 hearing). We found that established groups crafted social accessibility norms that met their relational contexts. We also identify promising directions for future captioning design, including the need to standardize speaker identification and customization, opportunities to provide behavioral feedback during a conversation, and ways that videoconferencing platforms could enable groups to set and share norms.

The Effect of Increased Sensory Feedback from Neuromodulation and Exoskeleton use on Ankle Co-contraction in Children with Cerebral Palsy

Project lead: Charlotte Caskey

Children with cerebral palsy (CP) have altered gait that limits mobility through the simultaneous activation of antagonistic muscle pairs. This study will quantify changes in muscle co-contraction during walking with two devices that increase sensory feedback.

Children with cerebral palsy (CP) have altered gait that limits mobility. One hallmark of CP gait is increased muscle co-contraction, or the activation of antagonistic muscle pairs at the same time. This may contribute to increased energy expenditure and reduced physical activity for children with CP. Amplifying sensory feedback may help combat this by prompting more refined motor control, leading to reduced co-contraction. The purpose of this study is to quantify changes in muscle co-contraction during walking with two devices that increase sensory feedback: an ankle exoskeleton with audiovisual feedback (Exo) and transcutaneous spinal cord stimulation (tSCS). The Exo provides increased haptic feedback targeting external sensory information, while tSCS boosts neural communication internally. We hypothesized that co-contraction would decrease when walking with spinal stimulation and the ankle exoskeleton. We compared changes in co-contraction of the biceps femoris and rectus femoris (BF/RF) and the tibialis anterior and soleus (TA/Sol) with 1) no devices, 2) Exo only, 3) tSCS only, and 4) Exo+tSCS for 5 children with CP. We found that tSCS alone led to the greatest reduction in TA/Sol co-contraction, while Exo alone and Exo+tSCS led to the greatest reductions in BF/RF co-contraction. This work is fundamental in helping us understand how children with CP respond within a single session of using these devices and how the devices might be used for longer-term rehabilitation.
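
Co-contraction of an antagonist pair is commonly quantified from EMG envelopes. As a minimal sketch, here is one standard formulation (a Falconer-Winter style index; this is an illustrative assumption, not necessarily the exact metric used in this study):

```python
def cocontraction_index(emg_a, emg_b):
    """One common co-contraction index: twice the overlapping (shared)
    activation of an antagonist muscle pair, divided by their summed
    activation, expressed as a percentage. Inputs are normalized EMG
    envelope samples over a gait cycle."""
    common = sum(min(a, b) for a, b in zip(emg_a, emg_b))
    total = sum(emg_a) + sum(emg_b)
    return 100.0 * 2.0 * common / total if total else 0.0
```

Under this index, identical envelopes yield 100% (full co-contraction), while muscles that are never active at the same time yield 0%.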

The Effects of Weakness, Contracture, and Altered Control on Walking Energetics During Crouch Gait

Project lead: Elijah Kuska

Cerebral palsy (CP) is the result of a pediatric brain injury that primarily alters motor control. However, individuals with CP often develop progressive, secondary impairments like weakness and contracture. These multi-modal impairments, spanning both control and morphology, impose complex restrictions on mobility and elevate the energetic cost of walking. This study uses modeling, simulation, and machine learning to parse the relative effects of multi-modal impairments during non-disabled and CP gait, identifying the primary impairment driving elevated energetics.

Evaluating Customization of Remote Tele-operation Interfaces for Assistive Robots

Project lead: Vinitha Ranganeni

Mobile manipulator platforms, like the Stretch RE1 robot, make the promise of in-home robotic assistance feasible. For people with severe physical limitations, like those with quadriplegia, the ability to tele-operate these robots means they can perform physical tasks they could not otherwise do themselves, increasing their level of independence.

In order for users with physical limitations to operate these robots, their interfaces must be accessible and cater to the specific needs of all users. As physical limitations vary amongst users, it is difficult to make a single interface that will accommodate all users. Instead, such interfaces should be customizable to each individual user.

In this work we explore the value of customizing a browser-based interface for tele-operating the Stretch RE1 robot. More specifically, we evaluate the usability and effectiveness of a customized interface in comparison to the default interface configurations from prior work. We present a user study involving participants with motor impairments (N=10) and participants without motor impairments who could serve as caregivers (N=13), all of whom use the robot to perform mobile manipulation tasks in a real kitchen environment.

Our study demonstrates that no single interface configuration satisfies all users' needs and preferences. Users perform better with the customized interface for navigation, but not for manipulation, due to the higher complexity of learning to manipulate through the robot. All participants are able to use the robot to complete all tasks, and participants with motor impairments believe that having the robot in their home would make them more independent.

Exploring Virtual Whiteboard Sessions in Mixed Hearing Environments

Project lead: Shaun Kalweit

Traditional ideation processes have been challenged by current hybrid work environments and the reliance on telecommunication tools. Our sponsor, Microsoft Teams, offers a platform for collaborative work, including brainstorming with whiteboards. However, these virtual sessions pose accessibility issues for d/Deaf and hard of hearing (DHH) individuals. This project aims to address the challenges faced by DHH users of Microsoft Teams Whiteboard during remote meetings and to develop a solution that enhances inclusivity and accessibility.

GoBabyGo Modification

Project lead: Nadia Sanchez

The student engineers on this team will redesign a joystick-control modification for a ride-on car for young children and develop an easy-to-follow assembly manual so that GoBabyGo volunteers can assemble it more easily.

How Do People with Limited Movement Personalize Upper-Body Gestures?

Project lead: Momona Yamagami

Biosignal interfaces that use electromyography sensors, accelerometers, and other biosignals as inputs show promise for improving accessibility for people with disabilities. However, generalized models that are not personalized to an individual’s abilities, body size, and skin tone may not perform well. Individualized interfaces that are personalized to the user and their abilities could significantly enhance accessibility.

In this work, I discuss how electromyography gesture interfaces can be personalized to each user's abilities and characterize personalized gestures for 25 participants with upper-body motor impairments. As biosignal interfaces become more commonly available, it is important to ensure that such interfaces have high performance across a wide spectrum of users.

Husky Adapt: Switch Kit

Project lead: Jordan Huang

The current switch kit raises concerns regarding accessibility, safety, and durability. Our project seeks to modify and enhance a switch kit that enables children with disabilities to engage in collaborative play, providing a safe, enjoyable, and inclusive experience for all.

An Interactive Mat for Inclusive Gross Motor Play

Project lead: Sabrina Lin

Our mission is to design an accessible solution that accommodates diverse needs and encourages inclusivity, and to co-create other educational models that establish an equitable learning environment for students with and without disabilities at the Experimental Education Unit, an inclusive early childhood school community associated with the University of Washington. We focused on creating an interactive sensory mat for children to play “Floor is Lava,” encouraging them to further develop their gross motor skills and play collaboratively.

Notably Inaccessible – Understanding Data Science Notebook (In)Accessibility

Project lead: Venkatesh Potluri 

Computational notebooks, tools that facilitate storytelling through exploration, data analysis, and information visualization, have become the widely accepted standard in the data science community both in academia and industry. While there is extensive research that investigates how data scientists use these notebooks, identifies their pain points, and enables collaborative data science practices, very little is known about the various accessibility barriers experienced by blind and visually impaired (BVI) notebook users.

We present findings from a large-scale analysis of 100K Jupyter notebooks, showing that BVI notebook users may experience accessibility barriers due to authoring practices, data representations in these notebooks, and the limitations of the tools and infrastructures used to work with them. We make recommendations to improve the accessibility of notebook artifacts, suggest authoring practices, and propose changes to infrastructure to make notebooks accessible.
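
Because .ipynb files are plain JSON, some authoring-practice barriers can be checked mechanically. A small illustrative lint, sketching one possible check rather than the study's actual analysis pipeline, flags Markdown images with empty alt text:

```python
import json
import re

# Matches a Markdown image whose alt text is empty: ![](path)
IMG_NO_ALT = re.compile(r"!\[\s*\]\(")

def missing_alt_count(notebook_json: str) -> int:
    """Count Markdown images with empty alt text in a notebook's JSON."""
    nb = json.loads(notebook_json)
    count = 0
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "markdown":
            source = "".join(cell.get("source", []))
            count += len(IMG_NO_ALT.findall(source))
    return count
```

A check like this could run in a notebook linter or CI step, nudging authors toward descriptions that screen readers can voice.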

A Pilot Study of Sidewalk Equity in Seattle Using Crowdsourced Sidewalk Assessment Data

Project lead: Chu Li

We examine the potential of using large-scale open crowdsourced sidewalk data from Project Sidewalk to study the distribution and condition of sidewalks in Seattle, WA. While potentially noisier than professionally gathered sidewalk datasets, crowdsourced data enables large, cross-regional studies that would otherwise be expensive and difficult to manage.

As an initial case study, we examine spatial patterns of sidewalk quality in Seattle and their relationship to racial diversity, income level, built density, and transit modes. We close with a reflection on our approach, key limitations, and opportunities for future work.

Quantifying device and environment exploration during powered mobility use in toddlers

Project lead: Nicole Zaino

Toddlers with mobility disabilities and delays require technology to access self-initiated mobility at an early age, which is critical for development, mobility, and social interaction. My work investigates the toddler-device-environment relationship and interaction for toddlers learning how to navigate and explore with a pediatric powered mobility device (Permobil Explorer Mini).

Sports Chair

Project lead: Yusuke Maruo

Our mission is to create a towing device that significantly improves the Seattle Adaptive Sports Center’s basketball athletes’ ability to transport their sports chair using their daily chair.

Steering Modifications to Support On-time Powered Mobility Use

Project lead: Kate Bokowy

Adapted ride-on cars are a great mobility learning tool for young kids with disabilities, but they can be hard to steer. We have created 3D-printed steering modifications to make it easier for a child to turn the steering wheel using different modalities.

Toward Open and Shared Pedestrian Path Network Mapping and Assessment at Scale

Project lead: Ricky Zhang

Manual mapping of pedestrian path networks is often a challenging task due to the substantial data requirements and potential for errors. In response, we’ve developed AI-powered automated tools that integrate diverse types of globally available data for proactive generation and analysis of pedestrian path and network data, with a keen focus on accessibility considerations. The resulting pedestrian path network data is represented in a standardized format per the OpenSidewalks data schema, making it readily usable in downstream routing and analytic applications.
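
A path segment in such a format is essentially a tagged GeoJSON feature. The sketch below is illustrative, in the spirit of OpenSidewalks; the property names are assumptions, not the normative schema:

```python
import json

# An illustrative sidewalk segment as a GeoJSON Feature. Coordinates are
# [longitude, latitude] pairs; properties are assumed for illustration.
sidewalk = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [[-122.3321, 47.6062], [-122.3318, 47.6070]],
    },
    "properties": {
        "highway": "footway",
        "footway": "sidewalk",
        "surface": "concrete",
        "width": 1.8,     # meters
        "incline": 0.02,  # rise over run
    },
}

encoded = json.dumps(sidewalk)
```

Attributes like surface, width, and incline are what let downstream routers compute accessibility-aware pedestrian routes.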

Wireless Switch for Accessible Play

Project lead: Spencer Madrid

Our mission is to create a viable wireless switch that is more affordable than commercially available switches and is adapted to increase accessibility in any situation.


Classification of light rail stations using semantic segmentation

Project lead: Anat Caspi