Empowering users with disabilities through customized interfaces for assistive robots

March 15, 2024

For people with severe physical limitations such as quadriplegia, the ability to tele-operate personal assistant robots could bring a life-enhancing level of independence and self-determination. Allen School Ph.D. candidate Vinitha Ranganeni and her advisor, CREATE faculty member Maya Cakmak, have been working to understand and meet the needs of users of assistive robots.

This month, Ranganeni and Cakmak presented a video at the Human-Robot Interaction (HRI) conference that illustrates the practical (and touching) ways deploying an assistive robot in a test household has helped Henry Evans require a bit less from his caregivers and connect with his family.

The research was funded by NIA/NIH Phase II SBIR Grant #2R44AG072982-02 and NIBIB Grant #1R01EB034580-01.

Captioned video of Henry Evans demonstrating how he can control an assistive robot using the customized graphical user interface he co-designed with CREATE student and Allen School Ph.D. candidate Vinitha Ranganeni.

Their earlier study, Evaluating Customization of Remote Tele-operation Interfaces for Assistive Robots, evaluated the usability and effectiveness of a customized tele-operation interface for the Stretch RE2 assistive robot. The authors show that no single interface configuration satisfies all users’ needs and preferences. Users perform better when using the customized interface for navigation, and the differences in preferences between participants with and without motor impairments are significant.

Last summer, as a robotics engineering consultant for Hello Robot, Ranganeni led the development of the interface for deploying an assistive robot in a test household, that of Henry and Jane Evans. Henry was a Silicon Valley CFO when a stroke suddenly left him non-speaking and with quadriplegia. His wife Jane is one of his primary caregivers.

The research team developed a highly customizable graphical user interface to control Stretch, a relatively simple and lightweight robot that has enough range of motion to reach from the floor to countertops.

Work in progress, but still meaningful independence

Stretch can’t lift heavy objects or climb stairs. Assistive robots are expensive and prone to shutting down, and customization is still complex and time-intensive. And, as noted in an IEEE Spectrum article about the Evans’ installation, getting the robot’s assistive autonomy to a point where it’s functional and easy to use is the biggest challenge right now. More work also needs to be done on providing simple interfaces, like voice control.

The article states, “Perhaps we should judge an assistive robot’s usefulness not by the tasks it can perform for a patient, but rather on what the robot represents for that patient, and for their family and caregivers. Henry and Jane’s experience shows that even a robot with limited capabilities can have an enormous impact on the user. As robots get more capable, that impact will only increase.”

In a few short weeks, Stretch made a difference for Henry Evans. “They say the last thing to die is hope. For the severely disabled, for whom miraculous medical breakthroughs don’t seem feasible in our lifetimes, robots are the best hope for significant independence,” says Henry.


Collaborator, advocate, and community researcher Tyler Schrenk

Though it has been many months since the death of Tyler Schrenk, a CREATE-funded researcher and a frequent collaborator, his impact is still felt in our collective research.

Tyler Schrenk making a presentation at the head of a lecture room. He has brown spiky hair, a full beard, and is seated in his power wheelchair.

Schrenk was a dedicated expert in the assistive technology field and led the way in teaching individuals and companies how to use assistive technologies to create independence. He was President & Executive Director of the Tyler Schrenk Foundation until his death in 2023. 



Zhang is CREATE’s Newest Apple AIML fellow

March 18, 2024

Congratulations to Zhuohao (Jerry) Zhang – the most recent CREATE Ph.D. student to receive an Apple Scholars in AIML PhD fellowship. The prestigious award supports students through funding, internship opportunities, and mentorship with an Apple researcher. 

Zhang is a third-year iSchool Ph.D. student advised by Prof. Jacob O. Wobbrock. His research focuses on using human-AI interactions to address real-world accessibility problems. He is particularly interested in designing and evaluating intelligent assistive technologies to make creativity tasks accessible.

Zhuohao (Jerry) Zhang standing in front of a poster, wearing a black sweater and a pair of black glasses, smiling.

Zhang joins previous CREATE-advised Apple AIML fellows:

Venkatesh Potluri (Apple AIML Ph.D. fellow 2022), advised by CREATE Director Jennifer Mankoff in the Allen School. His research makes overlooked software engineering spaces such as IoT and user interface development accessible to developers who are blind or visually impaired. His work systematically understands the accessibility gaps in these spaces and addresses them by enhancing widely used programming tools.

Venkatesh Potluri leans toward the camera smiling with eyes cast downward

Rachel Franz (Apple AIML Ph.D. fellow 2021) is also advised by Wobbrock in the iSchool. Her research focuses on accessible technology design and evaluation for users with functional impairments and low digital literacy. Specifically, she is focused on using AI to make virtual reality more accessible to individuals with mobility limitations.

Rachel Franz, a woman with long blond hair and light skin, photographed in front of a rock wall.

Three Myths and Three Actions: “Accommodating” Disabled Students

February 29, 2024

Excerpted from the Winter 2024 Allen School DEIA newsletter article contributed by CREATE Ph.D. students Kelly Avery Mack and Ather Sharif, with Lucille Njoo.

Completing graduate school is difficult for any student, but it’s especially difficult when you’re trying to learn at an institution that isn’t built for you. Students with disabilities at UW face extra challenges every day because our university doesn’t support equitable participation in educational activities like research and mentorship – those of us who don’t fit the mold face an uphill struggle to make ourselves heard in an academic culture that values maximum efficiency over unique perspectives. In this article, we share three common myths about students with disabilities, reveal the reality of our inequitable experience as grad students at UW, and propose a few potential solutions to begin ameliorating this reality, both at our university and beyond.

Myth 1: DRS (Disability Resources for Students) handles all accessibility accommodations.

This is an incorrect expectation of the role DRS serves in a campus ecosystem. The term “accommodations,” in the first place, frames us as outcasts, implying that someone needs to “review” and “approve” of our “requests” to simply exist equitably; but given that this is the term folks are most familiar with, we’ll continue referring to them as “accommodations” for ease of communication. While DRS can provide some assistance, they are outrageously under-staffed, and UW research has demonstrated that they are only part of the ecosystem. Instructors need to consider accessibility when building their courses and when teaching their classes. Accessibility, like computer security, works best when it is considered from the beginning, but it’s not too late to start repairing inaccessible PDFs or lecture slides for a future quarter. UW DO-IT has a great resource for accessible teaching.

Myth 2: Making my materials accessible is all I have to do for disabled students, right?

Disability is highly individual, and no matter how much an instructor prepares, a student might need further accommodations than what was prepared ahead of time. Listen to and believe disabled students when they discuss the accessibility barriers they face. Questioning their disability or using language that makes them doubt their self-worth is a hard no. Then, work with the student to decide on a solution moving forward, and remember that students are the number-one experts on their own accessibility needs.

Myth 3: Advising a student with a disability is the same as advising a student without a disability.

Disabled students have very different experiences of grad school, and they need advisors who are informed, aware, and proactive about those differences. If you are taking on a disabled student, the best ways to prepare yourself are:

Educate yourself about disability.

Disabled students are tired of explaining the same basic accessibility practices over and over again. Be willing to listen if your student wants to educate you more about their experience with disability, and recognize action items from the conversation that you can incorporate to improve your methods.

Expect that timelines might look different.

Disabled students deal with all kinds of barriers, from inaccessible technology to multiple-week hospital stays, so they may do things faster or slower than other students (as is true for any student). This does not mean they are not as productive or deserving of research positions. Disabled students produce high-quality research and award-winning papers, and their unique perspectives have the potential to strengthen every field, not just those related to disability studies. And they are able to do their best work when they have an advisor who recognizes their intellectual merit and right to be a part of the program.

Be prepared to be your student’s number-one ally.

Since DRS cannot fulfill all accessibility needs, you might need to figure out how to meet them yourself. Can you find $200 in a grant to purchase an OCR tool to help make PDFs accessible for a blind student? (Yes, you can.) Can you advocate for them if their instructor isn’t meeting accessibility requests? (Yes, you can.) Not only will this help them do their best work, but it also sets an example for the other students in your lab and establishes an academic culture that values students of all abilities.

ARTennis attempts to help low vision players

December 16, 2023

People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich‘s Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.

The team includes Jaewook Lee (Ph.D. student, UW CSE), Devesh P. Sarda (MS/Ph.D. student, University of Wisconsin), Eujean Lee (Research Assistant, UW Makeability Lab), Amy Seunghyun Lee (BS student, UC Davis), Jun Wang (BS student, UW CSE), Adrian Rodriguez (Ph.D. student, UW HCDE), and Jon Froehlich.

Their paper, Towards Real-time Computer Vision and Augmented Reality to Support Low Vision Sports: A Demonstration of ARTennis, was published in the proceedings of the 2023 ACM Symposium on User Interface Software and Technology (UIST).

ARTennis is their prototype system capable of tracking and enhancing the visual saliency of tennis balls from a first-person point-of-view (POV). Recent advances in deep learning have produced models like TrackNet, a neural network that tracks tennis balls in third-person recordings of tennis games and has been used to improve sports viewing for LV people. To enhance playability, the team first built a dataset of first-person POV images by having the authors wear an AR headset and play tennis. They then streamed video from a pair of AR glasses to a back-end server, analyzed the frames using a custom-trained deep learning model, and sent back the results for real-time overlaid visualization.
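
The core loop described above (grab a frame, run the detector, draw high-contrast cues) can be sketched in a few lines of Python. This is a minimal, hypothetical sketch: detect_ball() stands in for the team’s custom-trained model, and a local webcam stands in for the stream from the AR glasses.

```python
import cv2

def detect_ball(frame):
    """Placeholder for the custom-trained ball detector; returns (x, y) or None."""
    return None  # a real model would return the ball's pixel coordinates

def enhance(frame, center, radius=18):
    """Overlay a high-contrast ring and a crosshair at the ball's position."""
    if center is not None:
        cv2.circle(frame, center, radius, (0, 255, 0), 3)
        cv2.drawMarker(frame, center, (0, 0, 255),
                       markerType=cv2.MARKER_CROSS, markerSize=2 * radius)
    return frame

cap = cv2.VideoCapture(0)  # stand-in for the first-person stream from AR glasses
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("ARTennis-style overlay", enhance(frame, detect_ball(frame)))
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```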

After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.

Early evaluations suggest the prototype could help LV people enjoy ball-based sports, but there’s plenty of further work to be done. A larger field-of-view (FOV) and audio cues would improve a player’s ability to track the ball. The headset’s weight, bulk, and expense are also factors the team expects to improve with time, as Lee noted in an interview on Oregon Public Broadcasting.

“Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball.”

Jaewook Lee, Ph.D. student and lead author

Ph.D. student Jaewook Lee presents a research poster, Makeability Lab Demos - GazePointAR & ARTennis.

Winter 2023 CREATE Research Showcase

December 12, 2023

Students from CSE 493 and additional CREATE researchers shared their work at the December 2023 CREATE Research Showcase. The event was well attended by CREATE students, faculty, and community partners. Projects included, for example: an analysis of the accessibility of transit stations and a tool to aid navigation within them; an app to help colorblind people of color pick makeup; and an exploration of the accessibility of generative AI that also considers the ableist implications of limited training data.

CSE 493 student projects

In its first offering, in Autumn quarter 2023, CSE’s undergraduate Accessibility class focused on the importance of centering first-person accounts in disability-focused technology work. Students worked this quarter on assignments ranging from accessibility assessments of county voting systems to disability justice analyses to open-ended final projects.

Alti Discord Bot »

Keejay Kim, Ben Kosa, Lucas Lee, Ashley Mochizuki

Alti is a Discord bot that automatically generates alt text for any image uploaded to Discord. Once you add Alti to your Discord server, it generates alt text for each image using artificial intelligence (AI).

Enhancing Self-Checkout Accessibility at QFC »

Abosh Upadhyaya, Ananya Ganapathi, Suhani Arora

Makes self-checkout more accessible to visually impaired individuals

Complexion Cupid: Color Matching Foundation Program »

Ruth Aramde, Nancy Jimenez-Garcia, Catalina Martinez, Nora Medina

Allows individuals with color blindness to upload an image of their skin, and provides a makeup foundation match. Additionally, individuals can upload existing swatches and will be provided with filtered photos that better show the matching accuracy.

Twitter Content Warnings »

Stefan D’Souza, Aditya Nair

A Chrome extension meant to be used in conjunction with twitter.com to help people with PTSD

Lettuce Eat! A Map App for Accessible Dietary Restrictions »

Arianna Montoya, Anusha Gani, Claris Winston, Joo Kim

Parses menus on restaurants’ websites to provide information on dietary options, supporting individuals with specific dietary requirements, such as vegans, vegetarians, and those with celiac disease.

Form-igate »

Sam Assefa

A Chrome extension that allows users with motor impairments to interact with Google Forms using voice commands, enhancing accessibility.

Lite Lingo: Plain Text Translator »

Ryan Le, Michelle Vu, Chairnet Muche, Angelo Dauz

A plain text translator to help individuals with learning disabilities

Matplotalt: Alt text for matplotlib figures »

Kai Nailund

[No abstract]

PadMap: Accessible Map for Menstrual Products »

Kirsten Graham, Maitri Dedhia, Sandy Cheng, Aaminah Alam

Our goal is to ensure that anywhere on campus, people can search up the closest free menstrual products to them and get there in an accessible way.

SCRIBE: Crowdsourcing Scientific Alt Text »

Sanjana Chintalapati, Sanjana Sridhar, Zage Strassberg-Phillips

A prototype plugin for arXiv that adds alt text to requested papers via crowdwork.

PalPalette »

Pu Thavikulwat, Masaru Chida, Srushti Adesara, Angela Lee

A web app that helps combat loneliness and isolation for young adults with disabilities

SpeechIT »

Pranati Dani, Manasa Lingireddy, Aryan Mahindra

A presentation speech checker to ensure a user’s verbal speech during a presentation is accessible and understandable for everyone.

Enhancing Accessibility in SVG Design: A Fabric.js Solution »

Julia Tawfik, Kenneth Ton, Balbir Singh, Aaron Brown

A “Laser Cutter Generator” interface that displays a form to select shapes and set dimensions for SVG creation.

CREATE student and faculty projects

Designing and Implementing Social Stories in Technology: Enhancing Collaboration for Parents and Children with Neurodiverse Needs

Elizabeth Castillo, Annuska Zolyomi, Ting Zhou

Our research project, conducted through interviews in Panama, focuses on the user-centered design of technology to enhance autism social stories for children with neurodiverse needs. We aim to improve collaboration between parents, therapists, and children by creating a platform for creating, sharing, and tracking the usage of social stories. While our initial research was conducted in Panama, we are eager to collaborate with individuals from Japan and other parts of the world where we have connections, to further advance our work in supporting neurodiversity.

An Autoethnographic Case Study of Generative Artificial Intelligence’s Utility for Accessibility

Kate S Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, Jennifer Mankoff

With the recent rapid rise in Generative Artificial Intelligence (GAI) tools, it is imperative that we understand their impact on people with disabilities, both positive and negative. However, although we know that AI in general poses both risks and opportunities for people with disabilities, little is known about GAI in particular. To address this, we conducted a three-month autoethnography of our use of GAI to meet personal and professional needs as a team of researchers with and without disabilities. Our findings demonstrate a wide variety of potential accessibility-related uses for GAI while also highlighting concerns around verifiability, training data, ableism, and false promises.

Machine Learning for Quantifying Rehabilitation Responses in Children with Cerebral Palsy

Charlotte D. Caskey, Siddhi R. Shrivastav, Alyssa M. Spomer, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele

Increases in step length and decreases in step width are often a rehabilitation goal for children with cerebral palsy (CP) participating in long-term treadmill training. But it can be challenging to quantify the non-linear, highly variable, and interactive response to treadmill training when parameters such as treadmill speed increase over time. Here we use a machine learning method, Bayesian Additive Regression Trees, to show that there is a direct effect of short-burst interval locomotor treadmill training on increasing step length and modulating step width for four children with CP, even after controlling for the confounding parameters of speed, treadmill incline, and time within session.

Spinal Stimulation Improves Spasticity and Motor Control in Children with Cerebral Palsy

Victoria M. Landrum, Charlotte D. Caskey, Siddhi R. Shrivastav, Kristie F. Bjornson, Desiree Roge, Chet T. Moritz, Katherine M. Steele

Cerebral palsy (CP) is caused by a brain injury around the time of birth that leads to less refined motor control and causes spasticity, a velocity-dependent stretch reflex that can make it harder to bend and move joints and thus impairs walking function. Many surgical interventions that target spasticity lead to negative impacts on walking function and motor control, but transcutaneous spinal cord stimulation (tSCS), a novel, non-invasive intervention, may amplify the neurological response to traditional rehabilitation methods. Results from a four-subject pilot study indicate that long-term use of tSCS with treadmill training led to improvements in spasticity and motor control, indicating better walking function.

Adaptive Switch Kit

Kate Bokowy, Mia Hoffman, Heather A. Feldner, Katherine M. Steele

We are developing a switch kit for clinicians and parents to build customizable switches for children with disabilities. These switches are used to help children play with computer games and adapted toys as an early intervention therapy.

Developing a Sidewalk Improvement Cost Function

Alex Kirchmeier, Cole Anderson, Anat Caspi

In this ongoing project, I am developing a Python script that uses a sidewalk issues dataset to determine the cost of improving Seattle’s sidewalks. My intention is to create a customizable function that will help users predict the costs associated with making sidewalks more accessible.
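
As a rough illustration of what such a customizable cost function might look like, here is a minimal Python sketch; the column names, issue types, and unit costs are invented placeholders, not Seattle’s actual data.

```python
import csv

# Illustrative per-foot repair rates; real rates would come from city data.
UNIT_COST = {
    "cracking": 40,
    "uplift": 90,
    "missing_curb_ramp": 150,
}

def improvement_cost(rows, severity_weight=0.5):
    """Sum estimated repair costs, scaling each issue by a 1-5 severity rating."""
    total = 0.0
    for row in rows:
        base = UNIT_COST.get(row["issue_type"], 60)  # fallback rate
        factor = 1 + severity_weight * (int(row["severity"]) - 1)
        total += base * float(row["length_ft"]) * factor
    return total

# Expects columns: issue_type, severity, length_ft
with open("sidewalk_issues.csv", newline="") as f:
    print(f"Estimated total: ${improvement_cost(csv.DictReader(f)):,.0f}")
```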

Exploring the Benefits of a Dynamic Harness System Using Partial Body Weight Support on Gross Motor Development for Infants with Down Syndrome

Reham Abuatiq, PT, MSc; Mia Hoffman, ME, BSc; Alyssa Fiss, PT, PhD; Julia Looper, PT, PhD; & Heather Feldner, PT, PhD, PCS

We explored the benefits of a dynamic harness system using partial body weight support (PBWS) within an enriched play environment on gross motor development for infants with Down syndrome, using a randomized crossover study design. The effectiveness of the PBWS harness system on gross motor development was clearly evident. The overall intervention positively affected activity levels; however, the direct impact of the harness remains unclear.

StreetComplete for Better Pedestrian Mapping

Sabrina Fang, Kohei Matsushima

StreetComplete is a gamified, structured, and user-friendly mobile application for users to improve OpenStreetMap data by completing pilot quests. OpenStreetMap is an open-source, editable world map created and maintained by a community of volunteers. The goal of this research project is to design pilot quests in StreetComplete to accurately collect information about “accessibility features,” such as sidewalk width and the quality of lighting, to improve accessibility for pedestrian mapping.

Transit Stations Are So Confusing!

Jackie Chen, Milena Johnson, Haochen Miao, and Raina Scherer

We are collecting data on the wayfinding nodes in four different Sound Transit light rail stations and interpreting them through the GTFS-Pathways schema. In the future, we plan on visualizing this information through AccessMap so that it can be referenced by all users.

Optimizing Seattle Curbside Disability Parking Spots

Wendy Bu, Cole Anderson, Anat Caspi

The project is born out of a commitment to enhance the quality of life for individuals with disabilities in the city of Seattle. The primary objective is to systematically analyze and improve the allocation and management of curbside parking spaces designated for disabled individuals. By improving accessibility for individuals with disabilities, the project contributes to fostering a more equitable and welcoming urban environment.

Developing Accessible Tele-Operation Interfaces for Assistive Robots with Occupational Therapists

Vinitha Ranganeni, Maya Cakmak

The research is motivated by the potential of tele-operation interfaces for assistive robots, such as the Stretch RE2, to enhance the independence of individuals with motor limitations in completing activities of daily living (ADLs). We explored the impact of customizing tele-operation interfaces and deployed the Stretch RE2 in a home for several weeks; the deployment, facilitated by an occupational therapist, enabled a user with quadriplegia to perform daily activities more independently. Ultimately, this work aims to empower users and occupational therapists in optimizing assistive robots for individual needs.

HuskyADAPT: Accessible Design and Play Technology

HuskyADAPT Student Organization

HuskyADAPT is a multidisciplinary community at the University of Washington that supports the development of accessible design and play technology. Our community aims to initiate conversations regarding accessibility and ignite change through engineering design. It is our hope that we can help train the next generation of inclusively minded engineers, clinicians, and educators to help make the world a more equitable place.

A11yBoard for Google Slides: Developing and Deploying a Real-World Solution for Accessible Slide Reading and Authoring for Blind Users

Zhuohao (Jerry) Zhang, Gene S-H Kim, Jacob O. Wobbrock

Presentation software is largely inaccessible to blind users due to the limitations of screen readers with 2-D artboards. This study introduces an advanced version of A11yBoard, initially developed by Zhang & Wobbrock (CHI2023), which now integrates with Google Slides and addresses real-world challenges. The enhanced A11yBoard, developed through participatory design including a blind co-author, demonstrates through case studies that blind users can independently read and create slides, leading to design guidelines for accessible digital content creation tools.

“He could go wherever he wanted”: Driving Proficiency, Developmental Change, and Caregiver Perceptions following Powered Mobility Training for Children 1-3 Years with Disabilities

Heather A. Feldner, PT, MPT, PhD; Anna Fragomeni, PT; Mia Hoffman, MS; Kim Ingraham, PhD; Liesbeth Gijbels, PhC; Kiana Keithley, SPT; Patricia K. Kuhl, PhD; Audrey Lynn, SPT; Andrew Meltzoff, PhD; Nicole Zaino, PhD; Katherine M. Steele, PhD

The objective of this study was to investigate how a powered mobility intervention for young children (ages 1-3 years) with disabilities impacted: 1) driving proficiency over time; 2) global developmental outcomes; 3) learning tool use (i.e., joystick activation); and 4) caregiver perceptions about powered mobility devices and their child’s capabilities.

Access to Frequent Transit in Seattle

Darsh Iyer, Sanat Misra, Angie Niu, Dr. Anat Caspi, Cole Anderson

The research project in Seattle focuses on analyzing access to public transit, particularly frequent transit stops, by considering factors like median household income. We scripted in QGIS, analyzed walksheds, and examined demographic data surrounding Seattle’s frequent transit stops to understand the equity of transit access in different neighborhoods. Our goal was to visualize and analyze the data to gain insights into the relationship between transit access, median household income, and other demographic factors in Seattle.

Health Service Accessibility

Seanna Qin, Keona Tang, Anat Caspi, Cole Anderson

Our research aims to discover any correlation between median household income and driving duration from census tracts to the nearest urgent care location in the Bellevue and Seattle region.

Conveying Uncertainty in Data Visualizations to Screen-Reader Users Through Non-Visual Means

Ather Sharif, Ruican Zhong, and Yadi Wang

Incorporating uncertainty in data visualizations is critical for users to interpret and reliably draw informed conclusions from the underlying data. However, visualization creators conventionally convey the information regarding uncertainty in data visualizations using visual techniques (e.g., error bars), which disenfranchises screen-reader users, who may be blind or have low vision. In this preliminary exploration, we investigated ways to convey uncertainty in data visualizations to screen-reader users.
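
One non-visual approach, sketched below purely as an illustration (the thresholds and phrasing are invented, not the study’s design), is to verbalize the interval that an error bar would otherwise encode visually.

```python
def describe_uncertainty(label, mean, ci_low, ci_high):
    """Render a point estimate and its 95% CI as screen-reader-friendly text."""
    spread = ci_high - ci_low
    rel = spread / mean if mean else float("inf")
    certainty = "high" if rel < 0.1 else "moderate" if rel < 0.3 else "low"
    return (f"{label}: {mean:g} units; 95% confidence interval "
            f"{ci_low:g} to {ci_high:g} ({certainty} certainty).")

print(describe_uncertainty("Average task time", 120, 112, 128))
# Average task time: 120 units; 95% confidence interval 112 to 128 (moderate certainty).
```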

UW News: How an assistive-feeding robot went from picking up fruit salads to whole meals

November 2023

In tests with this set of actions, the robot picked up the foods more than 80% of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal.

An assistive-feeding robotic arm attached to a wheelchair uses a fork to stab a piece of fruit on a plate among other fruits.

The team presented its findings Nov. 7 at the 2023 Conference on Robot Learning in Atlanta.

UW News talked with co-lead authors Gordon and Nanavati, both CREATE members and doctoral students in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding for the 1.8 million people in the U.S. (according to data from 2010) who can’t eat on their own.

The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?

Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.

Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.

EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.

In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.

In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.

We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, and put the robot either on a wheelchair, if they have the mounting apparatus ready, or a tripod next to their wheelchair, and run through an entire meal.
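
As a toy illustration of the idea (not the lab’s actual code), a small trajectory library keyed to the detected food class might look like this in Python; the classes, angles, and motion names are invented stand-ins.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trajectory:
    name: str
    pitch_deg: float  # fork tine angle at contact (90 = straight down)
    motion: str       # "skewer", "scoop", or "wiggle"

# Toy subset; the paper distills 11 trajectories from human eating motions.
LIBRARY = {
    "grape":   Trajectory("vertical skewer", 90, "skewer"),
    "banana":  Trajectory("angled skewer", 45, "skewer"),
    "mashed":  Trajectory("scoop", 20, "scoop"),
    "noodles": Trajectory("wiggle skewer", 60, "wiggle"),
}

def plan_acquisition(food_class: str) -> Trajectory:
    """Choose an approach for the detected food, defaulting to straight down."""
    return LIBRARY.get(food_class, Trajectory("vertical skewer", 90, "skewer"))

print(plan_acquisition("banana"))  # Trajectory(name='angled skewer', pitch_deg=45, motion='skewer')
```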

For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?

EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.

TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?

Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.

There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.

A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.

What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?

EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.

AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities such as breaking your arm. All of us might face disability at some time as we age and we want to make sure that we have the tools necessary to ensure that we can all live dignified lives and independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality of life improvement will hopefully open up greater access.

Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.

For more information, contact Gordon at ekgordon@cs.uw.edu, Nanavati at amaln@cs.uw.edu and Faulkner at taylorkf@cs.washington.edu.


Excerpted and adapted from the UW News story by Stefan Milne.

UW News: Can AI help boost accessibility? CREATE researchers tested it for themselves

November 2, 2023 | UW News

Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies, fail at basic reasoning, and perpetuate ableist biases.

This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

Four AI-generated images show different interpretations of a doll-sized “crocheted lavender husky wearing ski goggles,” including two pictured outdoors and one against a white background.

The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

“When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”

The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.

Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

“When technology changes rapidly, there’s always a risk that disabled people get left behind.”

Jennifer Mankoff, CREATE Director, professor in the Allen School

Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.

The results of the other tests researchers selected were equally mixed:

  • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
  • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
  • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

“I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”

The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.”  The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”

Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

“Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”

Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.


For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.


This article was adapted from the UW News article by Stefan Milne.

UW News: A11yBoard accessible presentation software

October 30, 2023 | UW News

A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.

Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.

“We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

“For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with “artboards” — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.

For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

“That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


This article was adapted from the UW News article by Stefan Milne.

Augmented Reality to Support Accessibility

October 25, 2023

RASSAR – Room Accessibility and Safety Scan in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a real-time model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR. 
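
The final rule-checking step of such a pipeline might look like the toy sketch below; the object labels and height ranges are illustrative stand-ins (loosely inspired by ADA reach-range guidance), not RASSAR’s actual rules.

```python
# Acceptable (min, max) mounting heights in meters; illustrative values only.
ACCESSIBLE_HEIGHT_M = {
    "light_switch": (0.38, 1.22),
    "grab_bar": (0.84, 0.91),
}

def check_scene(detections):
    """detections: (label, height_m) pairs taken from the reconstructed 3D scene."""
    issues = []
    for label, height in detections:
        lo, hi = ACCESSIBLE_HEIGHT_M.get(label, (0.0, float("inf")))
        if not lo <= height <= hi:
            issues.append(f"{label} at {height:.2f} m is outside {lo}-{hi} m")
    return issues

print(check_scene([("light_switch", 1.40), ("grab_bar", 0.88)]))
# ['light_switch at 1.40 m is outside 0.38-1.22 m']
```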

RASSAR researchers envision the tool as an aid in building and validating new construction, planning renovations, updating homes for health concerns, or conducting telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:



CREATE students Xia Su and Jae Lee, advised by CREATE Associate Director Jon Froehlich in the Makeability Lab, discuss their work using augmented reality to support accessibility. The Allen School Ph.D. students are presenting their work at ASSETS and UIST this year.

Illustration of a user holding a smartphone using the RASSAR prototype app to scan the room for accessibility issues.

Accessible Technology Research Showcase – Spring 2023

June 30, 2023

Poster session in progress, with 9 or so posters on easels in view and student presenters talking to attendees.

In June 2023, CREATE and HuskyADAPT co-hosted a showcase — and celebration — of outstanding UW research on accessible technology. The showcase featured poster presentations and live demonstrations by our faculty, students, and researchers, and was altogether vibrant and exciting. Over 100 attendees viewed 25 projects, presentations, and posters.

Congratulations and appreciation to CREATE Engagement and Partnerships Manager Kathleen Quin Voss and HuskyADAPT Student Executive Chair Mia Hoffman for putting on an amazing research showcase!

View the Projects


CREATE’s Newest Ph.D. Graduates

June 9, 2023

We’re proud to see these talented, passionate students receive their Ph.D.s and excited to see how they continue their work in accessibility.

Alyssa Spomer, Ph.D. Mechanical Engineering

Dissertation: Evaluating multimodal biofeedback to target and improve motor control in cerebral palsy

Advisor: Kat Steele

Current: Clinical Scientist at Gillette Children’s Hospital, leading research in the Gillette Rehabilitation Department to improve healthcare outcomes for children with complex movement conditions.

Elijah Kuska, Ph.D. Mechanical Engineering

Elijah Kuska smiling with a sunset in the background

Dissertation: In Silico Techniques to Improve Understanding of Gait in Cerebral Palsy

Advisor: Kat Steele

Plans: Elijah will start as an assistant professor at the Colorado School of Mines in the Mechanical Engineering Department in January 2024.

Megan Ebers, Ph.D. Mechanical Engineering

Headshot of Megan Ebers, a young woman with dark wavy hair, smiling broadly.

Dissertation: Machine learning for dynamical models of human movement

Advisors: Kat Steele and Nathan Kutz

Awards, honors and articles:

  • Dual Ph.D.s in Mechanical Engineering and Applied Math
  • NSF Graduate Research Fellowship

Plans: Megan will join the UW AI Institute as a postdoc in spring 2023 to pursue clinical translation of her methods, evaluating digital biomarkers from wearable data to support health and function.

Nicole Zaino, Ph.D. Mechanical Engineering

Headshot of Nicole Zaino, a young woman with wavy brown hair and teal eyeglasses.

Dissertation: Walking and rolling: Evaluating technology to support multimodal mobility for individuals with disabilities

Advisors: Kat Steele and Heather Feldner

Awards, honors and articles: 

  • National Science Foundation Graduate Research Fellow, 2018 – Present
  • Gatzert Child Welfare Fellowship, University of Washington, 2022
  • Best Paper Award at the European Society of Movement Analysis for Adults and Children, 2019
  • Finalist, International Society of Biomechanics David Winter Young Investigator Award, 2019

Plans: Nicole is headed to Bozeman, Montana, to join the Crosscut Elite Training team and work toward joining the national Paralympic Nordic ski team for Milano-Cortina 2026, while working part-time with academia and industry partners.

Ricky Zhang

Headshot of Ricky Zhang, a young man with short hair, wearing black frame glasses and a gray business suit.

Dissertation: Pedestrian Path Network Mapping and Assessment with Scalable Machine Learning Approaches

Advisors: Anat Caspi and Linda Shapiro

Plans: Ricky will be a postdoc in Bill Howe’s lab at the University of Washington.


Kat Steele, who has been busy advising four of these five new Ph.D.s, noted, “We have an amazing crew of graduate students continuing and expanding upon much of this work. We’re excited for new collaborations and translating these methods into the clinic and community.”

CREATE Ph.D. Student Emma McDonnell Wins Dennis Lang Award

June 6, 2023

Congratulations to Emma McDonnell on receiving a Dennis Lang Award from the UW Disability Studies program! McDonnell, a fourth year Ph.D. candidate in Human Centered Design & Engineering, is advised by CREATE associate director Leah Findlater.

Emma McDonnell, a white woman in her 20s with short red hair, freckles, and a warm smile. In the background: a lush landscape and the Colosseum.

McDonnell’s research focuses on accessible communication technologies and explores how these tools could be designed to engage non-disabled people in making their communication approaches more accessible. She has studied how real-time captioning is used during videoconferencing, and her current work explores how people caption their TikTok videos.

The Dennis Lang Award recognizes undergraduate or graduate students across the UW who demonstrate academic excellence in disability studies and a commitment to social justice issues as they relate to people with disabilities.

This article is excerpted from Human Centered Design & Engineering news.

A11yBoard Seeks to Make Digital Artboards Accessible to Blind and Low-Vision Users

Just about everybody in business, education, and artistic settings needs to use software like Microsoft PowerPoint, Google Slides, and Adobe Illustrator. These tools use artboards to hold objects such as text, shapes, images, and diagrams. But for blind and low-vision (BLV) people, using such software adds a new level of challenge beyond keeping bullet points short and images meaningful. They experience:

  • High added cognitive load
  • Difficulty determining relationships between objects
  • Uncertainty if an operation has been successful

Screen readers, which were built for 1-D text information, don’t handle 2-D information spaces like artboards well.

For example, NVDA and Windows Narrator report artboard objects only in their Z-order – regardless of where those objects are located or whether they are visually overlapping – and report only each object’s shape name, without any other useful information.

From A11yBoard video: still image of an artboard with different shapes and the unhelpful NVDA & Windows Narrator explanation as text.
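
A toy example makes the difference concrete; the object model below is hypothetical, not the actual slide representation.

```python
# Each object: (name, x, y, z) where z is the stacking (Z-) order.
objects = [
    ("Title textbox", 50, 10, 2),
    ("Logo image", 400, 10, 0),
    ("Body textbox", 50, 120, 1),
]

z_order = [name for name, *_ in sorted(objects, key=lambda o: o[3])]
spatial = [name for name, *_ in sorted(objects, key=lambda o: (o[2], o[1]))]

print("Screen reader (Z-order):", z_order)  # ['Logo image', 'Body textbox', 'Title textbox']
print("Reading order (spatial):", spatial)  # ['Title textbox', 'Logo image', 'Body textbox']
```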

To address these challenges, Zhuohao (Jerry) Zhang, a CREATE Ph.D. student advised by Jacob O. Wobbrock in the ACE Lab, asked:

  • Can digital artboards in presentation software be made accessible for blind and low-vision users to read and edit on their own?
  • Can we design interaction techniques to deliver rich 2-D information to screen reader users?

The answer is yes! 

They developed a multidevice, multimodal interaction system – A11yBoard – that mirrors the desktop’s canvas on a mobile touchscreen device and enables rapid finger-driven screen reading via touch, gesture, and speech.

Blind and low-vision users can explore the artboard by using a “reading finger” to move across objects and receive audio tone feedback. They can also use a second finger to “split-tap” on the screen to receive detailed information and select this object for further interactions.

From A11yBoard video: still image showing touch and gesture combos that help blind and low vision users lay out images and text.

“Walkie-talkie mode,” when turned on by dwelling a finger on the screen like turning on a switch, lets users “talk” to the application. 

Users can therefore access a wealth of details about objects’ properties and relationships. For example, they can ask for a given number of the closest objects to understand what is nearby to explore. For operations that are not easily performed using touch, gesture, and speech, we also designed an intelligent keyboard search interface that lets blind and low-vision users perform all remaining object-related tasks.
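
For instance, the closest-objects query reduces to a distance ranking over object centers, as in this hypothetical sketch (the object names and coordinates are invented):

```python
import math

# Hypothetical object centers on the artboard, in pixels.
centers = {"title": (240, 40), "chart": (160, 260), "caption": (160, 380)}

def closest_objects(selected, k=2):
    """Rank the other objects by distance from the selected object's center."""
    others = (name for name in centers if name != selected)
    return sorted(others, key=lambda n: math.dist(centers[n], centers[selected]))[:k]

print(closest_objects("chart"))  # ['caption', 'title']
```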

Through a series of evaluations with blind users, A11yBoard was shown to provide intuitive spatial reasoning, multimodal access to objects’ properties and relationships, and an eyes-free reading and editing experience with 2-D objects.

Currently, much digital content has been made accessible for blind and low-vision people to read and “digest.” But few technologies help make the creation process accessible so that blind and low-vision users can produce visual content on their own. With A11yBoard, the team has taken a step toward a bigger goal: making heavily visual content creation accessible to blind and low-vision people.


Paper author Zhuohao (Jerry) Zhang is a second-year Ph.D. student at the UW iSchool. His work in HCI and accessibility focuses on designing assistive technologies for blind and low-vision people. Zhang has published and presented at the CHI, UIST, and ASSETS conferences; his honors include a CHI best paper honorable mention, a UIST best poster honorable mention, and first place in the CHI Student Research Competition, and his work was featured in Microsoft’s New Future of Work Report 2022. He is advised by CREATE Co-Director Jacob O. Wobbrock.

Zhuohao (Jerry) Zhang standing in front of a poster, wearing a black sweater and a pair of black glasses, smiling.

CSE course sequence designed with “accessibility from the start”

The CSE 121, 122, and 123 introductory course sequence lets students choose their entry point into computer science and engineering studies, whatever their background, experience, or confidence level. And, as part of the effort to improve diversity, equity, inclusion, and accessibility (DEIA), the courses were designed with “accessibility from the start.”

The course development team included a dedicated accessibility expert tasked with developing guidelines for producing accessible materials: using HTML tags correctly, providing alt text for all images, and ensuring accurate captions on all videos. The team audited both content and platforms, including the course website, for accessibility concerns.
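
One example of the kind of automated check such an audit can rely on (a sketch, not the team’s actual tooling) is scanning a page for images that lack alt text:

    // Flag <img> elements with missing or empty alt attributes on a page.
    const missingAlt = [...document.querySelectorAll('img')]
      .filter(img => !img.hasAttribute('alt') || img.alt.trim() === '');
    missingAlt.forEach(img => console.warn('Image missing alt text:', img.src));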

In CSE’s DEIA Newsletter article, author Brett Wortzman, Associate Teaching Faculty, points out that “many of the guidelines followed are good universal design, helping all students, not just those with disabilities, and at the same time reducing the work for instructors needing to comply with many DRS [Disability Resources for Students] accommodations.”


Excerpted from Brett Wortzman’s article in CSE’s DEIA Newsletter.

Postdoctoral Fellowship application open: Accessibility researcher in physical computing and fabrication

Update: January 2, 2024

CREATE, the Paul G. Allen School of Computer Science & Engineering, the College of Engineering, and the Department of Rehabilitation Medicine have an opening for a Postdoctoral Scholar.

The goal of this fellowship is to train leaders in accessibility research who can harness advances in physical computing and fabrication to enhance community living and participation for people with disabilities. Specifically, we seek applicants who are interested in developing their skills and expertise in investigating how fabrication technologies (e.g., 3D printing and machine knitting) and physical computing technologies can be used to address challenges in rehabilitation technology and accessibility. Applicants from technical backgrounds (e.g., computer science or engineering), rehabilitation medicine (e.g., physical or occupational therapy), or disability studies are encouraged to apply. Multiple postdoctoral fellows with complementary backgrounds will be recruited to collaborate and advance multidisciplinary innovation. Each postdoctoral scholar will be mentored by at least two faculty from the CREATE center.

Application deadlines

Application review begins February 15, 2024 and continues until the position is filled. Start date is flexible but September 2024 is preferred. 

CREATE’s mission includes ensuring that people with disabilities are able to participate in the research process, and CREATE’s faculty and students include people with disabilities. CREATE also has funding to help address accessibility concerns above and beyond the support offered by the UW campus disability offices. CREATE’s mission also includes a focus on racial equity and representation across intersectional identities.

Postdoctoral scholar appointments are full time, with a 12-month service period. Reappointment may be possible, subject to limits that count all postdoctoral experience at other institutions. Anticipated start is September 2024. This individual will work closely with a team of computer scientists, engineers, rehabilitation professionals, disability studies scholars, and human-computer interaction experts from CREATE to improve accessibility for people with disabilities.

For this NIDILRR-funded research, the postdoctoral fellows will engage in 70% research, 20% didactics, and 10% community engagement. The primary responsibilities for each fellow will be to:

  • Propose and execute an accessibility research project that uses physical computing and fabrication applications to improve community living for people with disabilities, disseminated through scholarly publications and presentations
  • Engage in coursework and seminars that supplement existing knowledge in areas of engineering, rehabilitation, and disability studies
  • Engage with community organizations that serve disability communities in the Western Washington region to identify participation and technology needs
  • Facilitate a community-based physical computing workshop

We are looking for candidates who have a passion for multidisciplinary research and expertise in one or more of the following: the technical aspects of accessibility, rehabilitation technology, disability studies, and fabrication/physical computing technologies. You will work closely with people with disabilities, engineers, rehabilitation professionals, and other scientists throughout the research project. This training grant is led by four faculty from the Center for Research and Education in Accessible Technology and Experiences (CREATE).

The overarching mission of CREATE is to make technology accessible and to make the world accessible through technology. We take a needs-based, human-centered approach to accessibility research and education, work closely with stakeholders in disability communities, and apply knowledge and skills across computer science, rehabilitation medicine, engineering, design, and disability studies to improve access and quality of life for diverse populations. More information about our center and ongoing research can be found on the CREATE website.

Qualifications

Applicants must have a Ph.D. or foreign equivalent, at the start date of the position, in engineering, human centered design, or rehabilitation science. Other life sciences may be considered. Rehabilitation professionals should be licensed or eligible for licensure in their respective discipline in the State of Washington. Strong oral and written communication skills and the ability to work as an effective member of a multidisciplinary team are critical for the success of this research. Candidates may have no more than 48 months of prior postdoc experience in order to fulfill the initial 1-year appointment period.

Application instructions

Applicants should provide all of the following:

  1. A cover letter clearly describing your interest and relevant background in this project
  2. A CV
  3. Copies of two representative publications
  4. Contact information for three references

Submit application and materials to create-jobs@uw.edu.

Questions about the project and application may also be submitted to create-jobs@uw.edu.

Carl James Dunlap Memorial Scholarship

University of Washington student Carl James Dunlap had a powerful impact on the UW community with his vibrant personality and persistent advocacy for students with disabilities. To honor his legacy, the Dunlap family established the Carl James Dunlap Memorial Endowment. The Dunlap Memorial Endowment seeks to support students with disabilities encountering unique challenges when attending and completing higher education. The D Center is grateful to further Carl’s legacy by awarding two $2,000 Carl James Dunlap Memorial Scholarships to UW students for Winter 2023.

To be eligible for the Dunlap Memorial Scholarship, applicants must be UW students who identify as having a disability and are currently receiving financial aid.

Apply no later than January 31

If you have any questions, please contact the D Center at dcenter@uw.edu.


The Carl James Dunlap Memorial Fund is accepting donations to further help students with disabilities.

Flyer for the Carl James Dunlap Memorial Scholarship with a link to contact dcenter@uw.edu for details and a picture of the UW Seattle campus in fall.

UnlockedMaps provides real-time accessibility info for rail transit users

Congratulations to CREATE Ph.D. student Ather Sharif, Orson (Xuhai) Xu, and team for this great project on transit access! Together they developed UnlockedMaps, a web-based map that shows in real time how accessible rail transit stations are in six metro areas: Seattle, Philadelphia (where Sharif and a friend first conceived the project at a hackathon), Chicago, Toronto, New York, and the San Francisco Bay Area.

Screenshot of UnlockedMaps in New York: stations labeled green are accessible, stations labeled orange are not, and yellow stations have reported elevator outages.

Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering advised by CREATE Co-Director Jacob O. Wobbrock, said the team also included data on nearby restaurants and bathrooms and their accessibility. “I think restaurants and restrooms are two of the most common things that people look for when they plan their commute. But no other maps really let you filter those out by accessibility. You have to individually click on each restaurant and check if it’s accessible or not, using Google Maps. With UnlockedMaps, all that information is right there!”
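
The station color coding described above boils down to a small mapping from a station’s live status to a marker color. A hedged sketch follows; the field names are hypothetical, not UnlockedMaps’ actual data model:

    // Map a station's accessibility status to the marker colors described above.
    function markerColor(station) {
      if (!station.accessible) return 'orange'; // not accessible
      if (station.elevatorOutage) return 'yellow'; // accessible, elevator down
      return 'green'; // accessible
    }

    console.log(markerColor({ accessible: true, elevatorOutage: true })); // "yellow"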

Adapted from UW News interview with Ather Sharif. Read full article »

CREATE Leadership at ASSETS’22 Conference

ASSETS 2022 logo, composed of a PCB-style Parthenon outline with three people standing and communicating inside it, representing three disability communities: blind, mobility impaired, and deaf and hard of hearing.

CREATE Associate Director Jon Froehlich was the General Chair for ASSETS’22, the premier ACM conference for research on the design, evaluation, use, and education related to computing for people with disabilities and older adults. This year, over 300 participants from 37 countries engaged with state-of-the-art research in the design and evaluation of technology for people with disabilities. UW CREATE was a proud sponsor of ASSETS’22.

Keynote speaker Haben Girma, the first Deafblind graduate of Harvard Law School and a leading disability rights advocate, highlighted systemic ableism in education, employment, and tech, and opportunities for change in her speech.

“There is a myth that non-disabled people are independent and disabled people are dependent. We are all interdependent. Many of you like drinking coffee; very few of you grow your own beans,” she pointed out.

ASSETS’22 was held in Athens, Greece. “The birthplace of democracy, we were surrounded by so many beautiful antiquities that highlighted the progress and innovation of humanity and served as inspiration to our community,” said Froehlich.

“Perhaps my favorite experience was the accessible private tours of the Acropolis Museum with conference attendees—hearing of legends, seeing the artistic craft, and moving about a state-of-the-art event center all in the shadow of the looming Acropolis was an experience I’ll never forget,” he added.

Artifact awards

CREATE Ph.D. student Venkatesh Potluri, advised by CREATE Co-Director Jennifer Mankoff in the Make4All Group, and his team tied for first place for the Artifact Award. Potluri presented their work on CodeWalk: Facilitating Shared Awareness in Mixed-Ability Collaborative Software Development.

Third place went to Ather Sharif’s team, advised by Jacob O. Wobbrock, for UnlockedMaps: Visualizing Real-Time Accessibility of Urban Rail Transit Using a Web-Based Map.

Future of urban accessibility

As part of the conference, Froehlich, Heather Feldner, and Anat Caspi held a virtual workshop entitled “The Future of Urban Accessibility.” More here: https://accessiblecities.github.io/UrbanAccess2022/

CREATE becomes a principal sponsor of HuskyADAPT

CREATE is pleased to be a financial and advisory sponsor of HuskyADAPT, an interdisciplinary community that is dedicated to improving the availability of accessible technology in Washington and fostering conversations about the importance of accessible design. 

HuskyADAPT is led by a team of UW students and six faculty advisors, including CREATE directors Kat Steele, Heather Feldner, Anat Caspi, and Jennifer Mankoff. Open to all, HuskyADAPT has three primary focus areas: annual design projects, K-12 outreach, and toy adaptation workshops, where volunteers learn how to modify off-the-shelf toys to make them switch accessible. The team also collaborates closely with Go Baby Go!.

Sign up for HuskyADAPT’s newsletter

HuskyADAPT logo, with three hexagons containing icons of tools, people, and vehicles.

Wobbrock team’s VoxLens allows screen-reader users to interact with data visualizations

A screen reader with a refreshable Braille display. Credit: Elizabeth Woolner/Unsplash

Working with screen-reader users, CREATE graduate student Ather Sharif and Co-Director Jacob O. Wobbrock, along with other UW researchers, have designed VoxLens, a JavaScript plugin that lets them interact with online data visualizations. To implement VoxLens, visualization designers add just one line of code.
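
That single line is the call that attaches VoxLens to an existing chart. The sketch below follows the pattern in the project’s public documentation, but the sample data, the container element, and the exact option names are assumptions; consult the VoxLens docs for the precise API.

    import voxlens from 'voxlens';

    // Illustrative data and a container assumed to hold a D3 chart.
    const data = [
      { city: 'Seattle', population: 737000 },
      { city: 'Tacoma', population: 220000 },
    ];
    const container = document.querySelector('#chart');

    // The single integration call: library name, element, data, options.
    voxlens('d3', container, data, {
      x: 'city', // assumed option names; see the VoxLens documentation
      y: 'population',
      title: 'Population by city',
    });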

Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities, or motion sensitivity. But visually oriented graphics often are not accessible to people who use screen readers. VoxLens lead author Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, noted, “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”

With written content, there is a beginning, middle, and end of a sentence, explained co-senior author Wobbrock. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”

In the study, participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization. Compared with participants who did not have access to the tool, VoxLens users completed the tasks with 122% greater accuracy and 36% less interaction time.

Learn more


This article was excerpted from a UW News article. Read the full article for additional details about the project.