CREATE Newsfeed


  • Amy Ko named ACM Distinguished Member

    March 18, 2024

    Congratulations to CREATE faculty Amy J. Ko, who has been recognized as a Distinguished Member of the Association for Computing Machinery (ACM) for her work on human-centered theories of program understanding and the development of tools and learning technologies. 

    Amy J. Ko, a 40-something white/Asian woman with brown hair and black rimmed eyeglasses.

    “I'm honored to be recognized by my nominators, all of whom have been role models and mentors in my career,” said Ko, a professor in the iSchool. “It makes me want to pay their giving and caring work forward to more junior scholars across my community.” 

    Ko has made substantial contributions to research in computing education, human-computer interaction, and humanity’s struggle to understand computing and harness it for creativity, equity, and justice. She is one of the editors of the newly released, open-source book Teaching Accessible Computing, and has released a beta version of Wordplay, an educational programming language created particularly for adolescents with disabilities and those who are not fluent in English, groups that have so often been left behind in learning about computing. (She invites undergraduates interested in making programming languages more playful, global, and accessible to join Wordplaypen, a community that helps design, build, and maintain Wordplay.)

    The ACM is the world’s largest computing society. It recognizes up to 10 percent of its worldwide membership as distinguished members based on their professional experience, groundbreaking achievements, and longstanding participation in computing. The ACM has three tiers of recognition: fellows, distinguished members and senior members.


    This article has been excerpted from an iSchool article.

    Read more


  • Empowering users with disabilities through customized interfaces for assistive robots

    March 15, 2024

    For people with severe physical limitations such as quadriplegia, the ability to tele-operate personal assistant robots could bring a life-enhancing level of independence and self-determination. Allen School Ph.D. candidate Vinitha Ranganeni and her advisor, CREATE faculty member Maya Cakmak, have been working to understand and meet the needs of users of assistive robots.

    This month, Ranganeni and Cakmak presented a video at the Human Robot Interaction (HRI) conference that illustrates the practical (and touching) ways deploying an assistive robot in a test household has helped Henry Evans require a bit less from his caregivers and connect to his family.

    The research was funded by NIA/NIH Phase II SBIR Grant #2R44AG072982-02 and NIBIB Grant #1R01EB034580-01.

    https://youtu.be/K2U7wwEMLDU?si=wkyZO5cX75UqCjgN
    Captioned video of Henry Evans demonstrating how he can control an assistive robot using the customized graphical user interface he co-designed with Allen School Ph.D. candidate and CREATE student Vinitha Ranganeni.

    Their earlier study, Evaluating Customization of Remote Tele-operation Interfaces for Assistive Robots, evaluated the usability and effectiveness of a customized, tele-operation interface for the Stretch RE2 assistive robot. The authors show that no single interface configuration satisfies all users' needs and preferences. Users perform better when using the customized interface for navigation, and the differences in preferences between participants with and without motor impairments are significant.

    Last summer, as a robotics engineering consultant for Hello Robot, Ranganeni led the development of the interface for deploying an assistive robot in a test household, that of Henry and Jane Evans. Henry was a Silicon Valley CFO when a stroke suddenly left him non-speaking and with quadriplegia. His wife Jane is one of his primary caregivers.

    The research team developed a highly customizable graphical user interface to control Stretch, a relatively simple and lightweight robot that has enough range of motion to reach from the floor to countertops.

    Work in progress, but still meaningful independence

    Stretch can’t lift heavy objects or climb stairs. Assistive robots are expensive, prone to shutting down, and the customization is still very complex and time-intensive. And, as noted in an IEEE Spectrum article about the Evans’ installation, getting the robot’s assistive autonomy to a point where it’s functional and easy to use is the biggest challenge right now. And more work needs to be done on providing simple interfaces, like voice control. 

    The article states, “Perhaps we should judge an assistive robot’s usefulness not by the tasks it can perform for a patient, but rather on what the robot represents for that patient, and for their family and caregivers. Henry and Jane’s experience shows that even a robot with limited capabilities can have an enormous impact on the user. As robots get more capable, that impact will only increase."

    In a few short weeks, Stretch made a difference for Henry Evans. “They say the last thing to die is hope. For the severely disabled, for whom miraculous medical breakthroughs don’t seem feasible in our lifetimes, robots are the best hope for significant independence,” says Henry.


    Collaborator, advocate, and community researcher Tyler Schrenk

    Though it has been many months since the death of Tyler Schrenk, a CREATE-funded researcher and a frequent collaborator, his impact is still felt in our collective research.

    Tyler Schrenk making a presentation at the head of a lecture room. He has brown spiky hair, a full beard, and is seated in his power wheelchair.

    Schrenk was a dedicated expert in the assistive technology field and led the way in teaching individuals and companies how to use assistive technologies to create independence. He was President & Executive Director of the Tyler Schrenk Foundation until his death in 2023. 


    Related reading:

    Read more


  • Zhang is CREATE’s Newest Apple AIML fellow

    March 18, 2024

    Congratulations to Zhuohao (Jerry) Zhang – the most recent CREATE Ph.D. student to receive an Apple Scholars in AIML PhD fellowship. The prestigious award supports students through funding, internship opportunities, and mentorship with an Apple researcher. 

    Zhang is a 3rd-year iSchool Ph.D. student advised by Prof. Jacob O. Wobbrock. His research focuses on using human-AI interactions to address real-world accessibility problems. He is particularly interested in designing and evaluating intelligent assistive technologies to make creativity tasks accessible.

    Zhuohao (Jerry) Zhang standing in front of a poster, wearing a black sweater and a pair of black glasses, smiling.

    Zhang joins previous CREATE-advised Apple AIML fellows:

    Venkatesh Potluri (Apple AIML Ph.D. fellow 2022), advised by CREATE Director Jennifer Mankoff in the Allen School. His research makes overlooked software engineering spaces such as IoT and user interface development accessible to developers who are blind or visually impaired. His work systematically understands the accessibility gaps in these spaces and addresses them by enhancing widely used programming tools.

    Venkatesh Potluri leans toward the camera smiling with eyes cast downward

    Rachel Franz (Apple AIML Ph.D. fellow 2021) is also advised by Wobbrock in the iSchool. Her research focuses on accessible technology design and evaluation for users with functional impairments and low digital literacy. Specifically, she is focused on using AI to make virtual reality more accessible to individuals with mobility limitations.

    Rachel Franz, a woman with long blond hair and light skin, photographed in front of a rock wall.

    Read more


  • New Book: Teaching Accessible Computing

    March 14, 2024

    A new, free, and community-sourced online book helps computer science educators integrate accessibility topics into their classes. Teaching Accessible Computing provides the foundations of accessibility relevant to computer science teaching and then presents teaching methods for integrating those topics into course designs.

    From the first page of the book, a line drawing of a person hunched over a laptop with their face close to the screen which is populated by large, unreadable characters.

    The editors are Alannah Oleson, a postdoctoral scholar and co-founder of the UW Center for Learning, Computing, and Imagination (LCI); CREATE and iSchool faculty Amy Ko; and Richard Ladner, CREATE Director of Education Emeritus. You may recognize many CREATE faculty members’ research referenced throughout the guide. CREATE Director Jennifer Mankoff and CREATE Ph.D. student Kelly Avery Mack contributed a foundational chapter that advocates for teaching inclusively in addition to teaching about accessibility.

    Letting the book speak for itself

    "... we’ve designed this book as a freeopenlivingweb-first document. It’s free thanks to a National Science Foundation grant (NSF No. 2137312) that has funded our time to edit and publish the book. It’s open in that you can see and comment on the book at any time, creating community around its content. It’s living in that we expect it to regularly change and evolve as the community of people integrating accessibility into their CS courses grows and evolves. And it’s web-first in that the book is designed first and foremost as an accessible website to be read on desktops, laptops, and mobile devices, rather than as a print book or PDF. This ensures that everyone can read it, but also that it can be easily changed and updated as our understandings of how to teach accessibility in CS evolve."

    Introduction by Alannah Oleson, Amy J. Ko, Richard Ladner

    "To write these chapters, we recruited some of the world’s experts on accessible computing and teaching accessible computing, giving them a platform to share both their content knowledge about how accessibility intersects with specific CS topics, but also their pedagogical content knowledge about how to teach those intersections in CS courses."

    Introduction by Alannah Oleson, Amy J. Ko, Richard Ladner

    Read more


  • CREATE AI+Accessibility Hackfest - Winter '24

    March 6, 2024 - post-event update

    The event featured invited speakers Heather Nolis, Ian Stenseng, and Shaun Kane and exciting workshops on building custom GPTs and creating accessible Jupyter notebooks. See the full lineup of brainstorming, hacking, and presentation sessions.

    The 3-day hackfest attendees included those with no experience in coding or hacking, others with advanced experience in generative AI and building software or tools, and, at the center, attendees with lived experiences of disabilities who contributed their experiences and expertise to invent an accessible AI-enabled future.

    Prizes awarded

    While appreciation and congratulations go to all participants, these projects were awarded prizes:

    First place: LookLoud.ai

    Nishit Bhasin and Lakshya Garg

    LookLoud.ai is voice-activated assistance technology, powered by GPT-4 Vision, and designed to make e-commerce accessible to everyone. Users can navigate, select, and buy products using simple voice commands. 

    Second place: AI Posture Monitor & Intervention Alerts for Home Health

    Max Smoot, Lige Yang, and Richard Li

    AI Posture Monitor & Intervention Alerts for Home Health monitors someone’s seated position to identify when they are in an at-risk posture and subsequently alerts a caretaker with recommended corrections.

    Third place: Formflow.ai

    Abdul Hussein, Abreham Tegenge, and Aelaph Elias

    Formflow.ai reads PDFs, mail, and forms and gives an easy-to-read summarization, with the goal of helping people read and understand documents and forms. 

    Fourth place: Clearview Assist

    Dhruv Khanna, Ritika Rajpal, Minal Naik, and Menita Agarwal

    (No description provided.)

    Fifth place: Student Success Portal

    Mia Vong, Cameron Jacob Miller, Keyvyn Rogers, and Jerid Stevenot

    Student Success Portal provides AI-powered assistance for challenges in supporting K-12 students with Individualized Education Programs (IEPs).

    Sessions, workshops and hack time

    • Introductory session about the potential of AI for accessibility (also on Zoom)
    • Invited speaker Ian Stenseng, Director of Innovation & Accessibility at The Lighthouse for the Blind, Inc. (also on Zoom)
    • Brainstorming project ideas

      • Learn from community members with lived experiences of disabilities to make sure your hack is solving a real accessibility need.

    • Lunch (provided) and conversation, mentoring, team forming, idea hatching
    • Invited speaker Heather Nolis, Principal Machine Learning Engineer of the Digital AI Team and Chair of the Accessibility Community at T-Mobile (ACT) at T-Mobile (also on Zoom)
    • Optional Workshops and hack time
    • Hack time
    • Pizza dinner and opportunities to get feedback from mentors

    Saturday

    • Work time
    • Lunch (provided) and opportunity to present for feedback from mentors
    • Presentation of judging rubric
    • Invited speaker, Shaun Kane, Researcher at Google AI and Director of the Superhuman Computing Lab at University of Colorado Boulder (also on Zoom)
    • Hack time

    Sunday

    • Optional hack time
    • How to present accessibly & sample pitch presentation (also on Zoom)

    Monday

    • Presentations to judges (also on Zoom)
    • Judges deliberation
    • Announcements, prizes, and closing keynote (also on Zoom)

    Brainstorming ideas

    Relevant topics will be driven by community needs to increase access to technology, and to the world through technology. These topics could include, for example:

    • AI’s use for generating plain language summaries of rights
    • Accessibility of AI tools and interfaces
    • Using AI to increase the accessibility of written and visual content
    • Robotic control for access
    • Tools for designing accessible physical objects
    • Using AI to get feedback on the accessibility of things you’re making
    • AI for embodied agent interactions
    • AI applications for health and wellbeing
    • Modalities for human/generative AI interactions such as voice or touch
    • Guidelines or ideas around agents that may be used for accessibility
    • What disability simulation might look like in the age of AI agents
    • Best practices and pitfalls

    Read more


  • DUB hosts para.chi event

    March 1, 2024

    Para.chi is a worldwide parallel event to CHI ’24 for those unable or unwilling to join CHI ’24. UW Design. Use. Build. (DUB) is hosting para.chi.dub with members of the DUB team, and maybe you.

    • Live session for accepted virtual papers
    • Networking opportunities
    • Accessibility for students and early career researchers locally and online

    Wednesday, May 8, 2024 
    Hybrid event: Seattle location to be announced and virtual info shared upon registration
    Presenter applications due March 15 
    Register to attend by Monday, April 1.

    Do you have a virtual paper and wish to get feedback from a live audience? Perhaps you have a journal paper accepted to an HCI venue and wish to present it live? Then consider joining us!

    Note that presenter space is somewhat limited. Decisions about how to distribute poster, presenter, and hybrid opportunities will be made after March 15.

    Seattle and beyond

    Each regional team is offering a different event, from mini-conferences to virtual paper sessions to mentoring and networking events. 

    Learn more:

    Read more


  • Three Myths and Three Actions: “Accommodating” Disabled Students

    CREATE Ph.D. students Kelly Avery Mack and Ather Sharif, along with Lucille Njoo, share three common myths about students with disabilities. They reveal the reality of their inequitable experience as grad students at UW, and propose a few potential solutions to begin ameliorating this reality, both at our university and beyond.

    Read more


  • Wheels in motion: Improving mobility technologies for children

    February 28, 2024

    Being able to easily get from the house to the playground affects how long and how often children use an adapted ride-on car, according to a study, Off to the park: a geospatial investigation of adapted ride-on car usage, published by CREATE Ph.D. student Mia Hoffman with CREATE associate director Heather A. Feldner, who is the lead researcher on the project. Their research demonstrates the importance of accessibility in the built environment and that advocating for environmental accessibility should include both the indoors and outdoors.

    Two children ride in small toy cars, one of which has an adapted steering wheel to make it accessible for the child to use.

    For a recent study, adapted ride-on cars were provided to 14 families with young children in locations across Western Washington. Photo courtesy of Heather Feldner.

    Ride-on cars are miniature toy cars for children, with a steering wheel and a battery-powered pedal. Adapted ride-on cars are an easy-to-use temporary solution for children with mobility issues. Although wheelchairs offer finer control, insurance typically covers new wheelchairs only every five years. Children under age 5 can use adapted ride-on cars to explore their surroundings if they outgrow their wheelchair, or if they aren’t able to be in a wheelchair yet.

    Exploration is critical to language, social and physical development. There are big benefits when a child starts moving.

    Mia Hoffman, CREATE Ph.D. student

    “Adapted ride-on cars allow children to explore by themselves,” says Mia Hoffman, the Ph.D. candidate in mechanical engineering who co-authored the paper published in fall 2023. “Exploration is critical to language, social and physical development. There are big benefits when a child starts moving.”

    The researchers adapted the ride-on cars to make them more accessible. Instead of a foot pedal, children might start the car with a different option that’s accessible to them, such as a large button or a sip-and-puff, a pneumatic device that responds to air blown into it. Researchers also added structural supports to the device, such as a backrest made of kickboards or PVC side supports.

    Adapted ride-on cars were provided to 14 families with young children in locations across Western Washington. Heather Feldner, an assistant professor in the Department of Rehabilitation Medicine and adjunct assistant professor in ME, trained families on how to use the cars. The families then spent a year playing with the cars. Each car had an integrated data logger that tracked how often the child pressed the switch to move the car, and GPS data indicated how far they traveled.
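
    The logging approach described above lends itself to a simple analysis: grouping timestamped switch presses into distinct play sessions. The sketch below is illustrative only, not the study's actual code; the gap threshold, timestamps, and function name are all hypothetical.

```python
from datetime import datetime, timedelta

def sessions_from_presses(timestamps, gap=timedelta(minutes=10)):
    """Split a sorted list of switch-press datetimes into sessions.

    A new session starts whenever the gap between consecutive
    presses exceeds the threshold (hypothetical: 10 minutes).
    """
    sessions = []
    for t in timestamps:
        if sessions and t - sessions[-1][-1] <= gap:
            sessions[-1].append(t)  # continue the current session
        else:
            sessions.append([t])    # start a new session
    return sessions

# Hypothetical logger output: presses at minutes 0, 2, 5, then a long
# break, then presses at minutes 120, 121, 125.
presses = [datetime(2023, 5, 1, 9, 0) + timedelta(minutes=m)
           for m in (0, 2, 5, 120, 121, 125)]
sess = sessions_from_presses(presses)
print(len(sess))                   # two distinct play sessions
print(sess[0][-1] - sess[0][0])    # duration of the first session (5 min)
```

    Frequency is then the number of sessions per day, and duration the span between a session's first and last press; GPS traces could be segmented with the same gap-based grouping.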

    The study found that most play sessions occurred indoors, underscoring the importance of indoor accessibility for children’s mobility technology. However, children used the car longer outdoors, and identifying an accessible route increased the frequency and duration of outside play sessions. Study participants drove outdoors more often in pedestrian-friendly neighborhoods, measured by researchers with the Walk Score, and when close to accessible paths, measured by Project Sidewalk’s AccessScore.

    “Families can sometimes be uncertain about introducing powered mobility for their children in these early stages of development,” says Feldner. “But ride-on cars and other small devices designed for kids open up so many opportunities — from experiencing the joy of mobility, learning more about the world around them, enjoying social time with family and friends in new environments, and working on developmental skills. We want to work with kids and families to show them what is possible with these devices, listen to their needs and ideas, and continue working to ensure that both our technology designs and our community environments are accessible and available for all.”

    Exploring different mobility devices

    Heather Feldner and Mia Hoffman stand next to their poster board about adapted ride-on cars research at a conference.

    As a graduate student, Hoffman conducts research on children ages 3 and under who might crawl, roll, sit up, or cruise in a power mobility device. Besides processing sensor data and other data analysis, Hoffman’s work also involves getting to know families, “playing with a lot of toys, singing, and entertaining kids,” she jokes.

    Research involving pediatrics and accessibility like the adapted ride-on cars study is why Hoffman joined the Steele Lab. She became interested in biomechanics in sixth grade, when she learned that working on engineering and medical design was possible. As an undergraduate at the University of Notre Dame, Hoffman studied brain biomechanics, computational design and assistive technology. She worked on projects such as analyzing the morphology of monkey brains and creating 3D-printed prosthetic hands for children.

    After connecting with Feldner and Kat Steele, Albert Kobayashi Professor in Mechanical Engineering and CREATE associate director, Hoffman realized that the Steele Lab, which often collaborates with UW Medicine, was the perfect fit.

    Hoffman is currently working on research with Feldner and Steele that compares children’s usage of a commercial pediatric powered mobility device to their usage of adapted ride-on cars in the community environment. Next, Hoffman will conduct one of the first comparative studies of how using supported mobility, in the form of a partial body-weight support system, or using a powered wheelchair affects children’s exploration patterns. The study involves children with Down syndrome, who often have delayed motor development and who are underrepresented in mobility research.

    There can be stigma associated with using a wheelchair instead of a walker or another mobility device that may help with motor development, but Hoffman says the study could demonstrate that both are important.

    “The goal is to show that children can simultaneously work on motor gains while using powered wheelchairs or other mobility devices to explore their environment,” she says.

    “Our hope is for kids to just be kids,” says Hoffman. “We want them to be mobile and experience life at the same time as their peers. It’s about meeting a kid where they’re at and supporting them so that they can move around and play with their friends and family.”


    This article was excerpted from an article written by Lyra Fontaine for Mechanical Engineering.

    Read more


  • Joshua Miele: Driving Accessibility through Open Source

    February 15, 2024

    Formally, Dr. Joshua Miele describes himself as a blind scientist, designer, performance artist and disability activist who is focused on the overlap of technology, disability, and equity. But in his personable and humorous lecture, he listed a few more identities: Interrupter. Pain in the ass. “CAOS” promoter.

    The Allen School Distinguished Lecture took place earlier this month and is a worthwhile listen on YouTube.

    Miele’s passions are right in line with CREATE’s work and he started his lecture, after being introduced by CREATE Director Jennifer Mankoff, with a compliment we heartily accept: “This community at the University of Washington is one of the largest, one of the most vibrant communities of people thinking and working around disability, accessibility, and technology.”

    Miele shared his enthusiasm for disability-inclusive design and its impact on global disability equity and inclusion. Drawing on examples and counterexamples from his own life and career, Dr. Miele described some of the friction the accessibility field has faced and speculated about what challenges may lie ahead, with particular emphasis on the centrality of user-centered practices, and the exhilarating potential of open source solutions and communities.

    When he received the MacArthur grant, Miele had to decide what to do with the spotlight on his work. He shared his hopes for a Center for Accessibility and Open Source (CAOS, pronounced “chaos”) to promote global digital equity for people with disabilities by making low-cost accessible tools available to everyone, whether they have financial resources or not. He invited anyone interested in global equity, disability, direct action, performance art, and CAOS/chaos to reach out and collaborate on this incredibly important work.

    More about Miele and the lecture

    Read more


  • Alice Wong and Patty Berne: Two UW lectures moderated by CREATE researchers

    January 29, 2024

    Winter 2024 quarter kicked off with two outstanding conversations with women of color who are leaders in disability justice.

    Alice Wong: Raising the visibility of disabled people

    First, Alice Wong discussed topics important to her work in raising the visibility of disabled people. Wong’s book Year of the Tiger: An Activist’s Life was the topic of the Autumn 2023 CREATE Accessibility Seminar.

    CREATE Director Jennifer Mankoff started the conversation by asking Wong about her experience as a disabled person in academia and what needs to change. Wong said her work in disability justice was inspired in part by the “incredible amount of emotion and physical labor to ask for equal access” in academic settings. She had to spend precious time, money, and energy to gain the accommodations and access she needed to succeed. But she realized that as soon as she transitioned out, her efforts would be lost and the next student would have to start over to prove their need and request a new set of accommodations. Wong was doubtful that large academic institutions can support the goal of collective liberation. It’s the “dog-eat-dog world [of] academia where the competition is stiff and everyone is pushed to their limits to produce and be valuable.” She encouraged instructors to incorporate books about disability justice in their syllabi (see the reading list below).

    Wong, who spoke with a text-to-voice tool and added emphasis with her facial expressions on the screen, also addressed the value and the limitations of assistive technology. She noted that the text-to-speech app she uses does not convey her personality. She also discussed how ableism appears in activist discourse.

    One of her examples was a debate over gig economy delivery services, which are enormously important for many people with disabilities but also under-compensate delivery workers. She noted that blaming disabled people for undermining efforts for better wages was not the solution; collective efforts to make corporations compensate workers are the solution. She also explained that hashtag activism, which has been disparaged in popular discourse, is a crucial method for disabled people to participate in social justice activism. And she discussed her outrage when, as she prepared to give a talk to a public health school, her own access needs were used to censor her. Throughout her talk, Wong returned again and again to the principles of disability justice, and encouraged attendees to engage in collective forms of change.

    Wong’s responses embodied a key component of disability justice principles: citational practices that name fellow contributors to collective disability justice wisdom. Her long list of recommended reading for the audience inspired us to build our new RDT reading list. Wong referenced Patty Berne several times, calling Berne her introduction to disability justice.

    Patty Berne on disability justice: Centering intersectionality and liberation

    A week later, two CREATE Ph.D. students, Aashaka Desai and Aaleyah Lewis, moderated a conversation with Patty Berne. Berne, who identifies as a Japanese-Haitian queer disabled woman, co-founded Sins Invalid, a disability justice-based arts project focusing on disabled artists of color and queer and gender non-conforming artists with disabilities. Berne defined disability justice as advocating for each other, understanding access needs, and normalizing those needs. On the topic of climate justice, she noted that state-sponsored disaster planning often overlooks the needs of people with motor impairments or life-sustaining medical equipment. This is where intersectional communities do, and should, take care of each other when disaster strikes.

    Berne addressed language justice within the disability community, noting that “we don’t ‘language’ like able-bodied people.” For example, the use of ventilators and augmented speech technology changes the cadence of speech. Berne wants to normalize access needs for a more inclusive experience of everyday life. Watch the full conversation on YouTube.

    Read more


  • Ben Taskar Memorial Event

    January 26, 2024

    In January 2024, the CREATE community was invited to participate in the Taskar Center's 2024 Annual Ben Taskar Memorial Event, themed "Transportation and Responsible AI."

    Sessions

    Project Poster Viewing and Team Discussions

    Explore innovative projects from the course "Responsible Data Science in Urban Spaces," developed under the guidance of TCAT director Anat Caspi, whose teaching recently earned the Human Rights Educator Award.

    Community Townhall Meeting with Dragomir Anguelov

    Join us for a captivating discussion with Dragomir Anguelov, VP and Head of Research at Waymo, as he shares Waymo's insights into operating an autonomous ride-share fleet covering over 7 million miles. The session, moderated by Anat Caspi, focuses on responsible AI in transportation. (This session will not be recorded and will not be available to remote participants.)

    Spotlight on AccessMap Multimodal

    Discover the latest advancements in accessible transportation with a spotlight on the recent deployment of AccessMap Multimodal. The session will highlight the personalized trip planner for travelers with disabilities and provide insights into the user experience, including the use of screen readers.

    Ben Taskar Memorial Distinguished Lecture:
    Dragomir Anguelov: Toward Total Scene Understanding for Autonomous Driving

    In this engaging lecture, Drago Anguelov will delve into recent Waymo research on performant ML models and architectures that handle the variety and complexity of real-world environments in autonomous driving. He will also discuss the impact of progress in building Autonomous Driving agents on people with disabilities and explore current open questions about enhancing embodied AI agent capabilities with ML.

    Read more


  • Anat Caspi receives Human Rights Educator Award

    Congratulations to Anat Caspi on receiving the 2023 Human Rights Educator Award from the Seattle Human Rights Commission!

    Caspi, a CREATE associate director and the founder and director of the Taskar Center for Accessible Technology, thanked the commission for recognition of her individual dedication and emphasized that it also celebrates the collective efforts of the Taskar Center community.

    You can watch as Olivia Quesada accepts the award on Caspi's behalf at the ceremony.

    Olivia Quesada stands at a podium to accept the 2023 Human Rights Educator Award for Anat Caspi whose photo is shown on a large screen in the background.

    Read more


  • CREATE Welcomes Dr. Olivia Banner!

    January 2, 2024

    Olivia Banner, a white woman with a warm smile and smiling eyes.

    In her role as CREATE’s Director of Strategy and Operations, Olivia Banner, Ph.D., will help develop and oversee organizational strategy, design and implement new programs, manage center operations, and help ensure a sustainable trajectory of high-quality work in service of CREATE’s core mission.

    Banner is a disabled author and educator who has taught courses on disability, technology, and media. She comes to Seattle and the UW from the University of Texas at Dallas, where she was an associate professor of Critical Media Studies. She is the author of Communicative Biocapitalism: The Voice of the Patient in Digital Health and the Health Humanities. Her new book about technology, psychiatry, and practices of mutual care is forthcoming with Duke University Press. Her research has been published in Catalyst: Feminism, Theory, Technoscience and in Literature and Medicine, and is forthcoming in Disability Studies Quarterly.

    “Her principles and commitment to intersectional work caught our attention in our conversations about the new role. We are so lucky to have her joining CREATE!”

    Jennifer Mankoff, CREATE Director

    Banner says she looks forward to integrating disability principles into projects with tangible effects on disabled people’s lives, including work on AI and accessibility, on bringing disabled perspectives into projects, and on race, technology, and disability, all areas that align with her previous academic work. She is personally invested in fostering just technological futures through collaborative work and is very excited about the Center’s aim of expanding access through community partnerships.

    CREATE Director Jennifer Mankoff is equally excited about the vision Banner brings for CREATE's future, her policy experience, her administrative skills, and her commitment to amplifying the voices of those she serves. “Her principles and commitment to intersectional work caught our attention in our conversations about the new role. We are so lucky to have her joining CREATE!” says Mankoff.

    In her research, scholarship, and teaching, Banner has centered disability knowledge as a method for envisioning technological futures. Her work extends to multiple collaborative projects, including co-teaching a seminar on surveillance with a computer science professor, serving on a Lancet-sponsored commission developing policies for global digital health development, and co-directing Social Practice & Community Engagement Media, a lab that used low-tech methods to reimagine campus practices of care. Toward the goal of improving access on the UT Dallas campus, Banner conducted critical access mapping projects, led Teach-Ins and workshops on disability and equity and on accessible course design, and served on the University Accessibility Committee.

    Having served as managing editor of an academic journal and as Associate Dean of Graduate Studies for her School, she also brings professional experience working with faculty, students, staff, and community members from varied disciplines and professions, and anticipates generative conversations on the horizon. She joins CREATE eager to support and enhance its visions of accessible and equitable technology.

    Read more


  • ARTennis attempts to help low vision players

    December 16, 2023

    People with low vision (LV) have had fewer options for physical activity, particularly in competitive sports such as tennis and soccer that involve fast, continuously moving elements such as balls and players. A group of researchers from CREATE associate director Jon E. Froehlich's Makeability Lab hopes to overcome this challenge by enabling LV individuals to participate in ball-based sports using real-time computer vision (CV) and wearable augmented reality (AR) headsets. Their initial focus has been on tennis.

    The team includes Jaewook Lee (Ph.D. student, UW CSE), Devesh P. Sarda (MS/Ph.D. student, University of Wisconsin), Eujean Lee (Research Assistant, UW Makeability Lab), Amy Seunghyun Lee (BS student, UC Davis), Jun Wang (BS student, UW CSE), Adrian Rodriguez (Ph.D. student, UW HCDE), and Jon Froehlich.

    Their paper, Towards Real-time Computer Vision and Augmented Reality to Support Low Vision Sports: A Demonstration of ARTennis was published in the 2023 ACM Symposium on User Interface Software and Technology (UIST).

    ARTennis is their prototype system capable of tracking and enhancing the visual saliency of tennis balls from a first-person point-of-view (POV). Recent advancements in deep learning have led to models like TrackNet, a neural network capable of tracking tennis balls in third-person recordings of tennis games, which has been used to improve sports viewing for LV people. To enhance playability, the team first built a dataset of first-person POV images by having the authors wear an AR headset and play tennis. They then streamed video from a pair of AR glasses to a back-end server, analyzed the frames using a custom-trained deep learning model, and sent the results back for real-time overlaid visualization.

    After a brainstorming session with an LV research team member, the team added visualization improvements to enhance the ball’s color contrast and add a crosshair in real-time.
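    The crosshair overlay described above can be sketched in a few lines. This is a hypothetical illustration, not the team's code: given the ball center returned by the tracking model, it computes four short line segments with a gap at the center so the ball itself stays visible.

    ```python
    # Hypothetical sketch of an ARTennis-style crosshair overlay (not the
    # team's code). Given the ball center (cx, cy) from the tracking model,
    # compute four line segments forming a crosshair with a central gap.

    def crosshair_segments(cx, cy, size=20, gap=6):
        """Return four (start, end) line segments centered on (cx, cy)."""
        return [
            ((cx - size, cy), (cx - gap, cy)),  # left arm
            ((cx + gap, cy), (cx + size, cy)),  # right arm
            ((cx, cy - size), (cx, cy - gap)),  # top arm
            ((cx, cy + gap), (cx, cy + size)),  # bottom arm
        ]
    ```

    Each segment could then be drawn into the AR view every frame, alongside a high-contrast recoloring of the ball region.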

    Early evaluations have provided feedback that the prototype could help LV people enjoy ball-based sports, but there's plenty of further work to be done. A larger field-of-view (FOV) and audio cues would improve a player's ability to track the ball. The weight and bulk of the headset, in addition to its expense, are also factors the team expects to improve with time, as Lee noted in an interview on Oregon Public Broadcasting.

    "Wearable AR devices such as the Microsoft HoloLens 2 hold immense potential in non-intrusively improving accessibility of everyday tasks. I view AR glasses as a technology that can enable continuous computer vision, which can empower BLV individuals to participate in day-to-day tasks, from sports to cooking. The Makeability Lab team and I hope to continue exploring this space to improve the accessibility of popular sports, such as tennis and basketball."

    Jaewook Lee, Ph.D. student and lead author

    Ph.D. student Jaewook Lee presents a research poster, Makeability Lab Demos - GazePointAR & ARTennis.

    Read more


  • UW News: How an assistive-feeding robot went from picking up fruit salads to whole meals

    November 2023

    In tests with this set of actions, the robot picked up the foods more than 80% of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal.

    An assistive-feeding robotic arm attached to a wheelchair uses a fork to stab a piece of fruit on a plate among other fruits.

    The team presented its findings Nov. 7 at the 2023 Conference on Robotic Learning in Atlanta.

    UW News talked with co-lead authors Gordon and Nanavati, both CREATE members and doctoral students in the Paul G. Allen School of Computer Science & Engineering, and with co-author Taylor Kessler Faulkner, a UW postdoctoral scholar in the Allen School, about the successes and challenges of robot-assisted feeding for the 1.8 million people in the U.S. (according to data from 2010) who can’t eat on their own.

    The Personal Robotics Lab has been working on robot-assisted feeding for several years. What is the advance of this paper?

    Ethan K. Gordon: I joined the Personal Robotics Lab at the end of 2018 when Siddhartha Srinivasa, a professor in the Allen School and senior author of our new study, and his team had created the first iteration of its robot system for assistive applications. The system was mounted on a wheelchair and could pick up a variety of fruits and vegetables on a plate. It was designed to identify how a person was sitting and take the food straight to their mouth. Since then, there have been quite a few iterations, mostly involving identifying a wide variety of food items on the plate. Now, the user with their assistive device can click on an image in the app, a grape for example, and the system can identify and pick that up.

    Taylor Kessler Faulkner: Also, we’ve expanded the interface. Whatever accessibility systems people use to interact with their phones — mostly voice or mouth control navigation — they can use to control the app.

    EKG: In this paper we just presented, we’ve gotten to the point where we can pick up nearly everything a fork can handle. So we can’t pick up soup, for example. But the robot can handle everything from mashed potatoes or noodles to a fruit salad to an actual vegetable salad, as well as pre-cut pizza or a sandwich or pieces of meat.

    In previous work with the fruit salad, we looked at which trajectory the robot should take if it’s given an image of the food, but the set of trajectories we gave it was pretty limited. We were just changing the pitch of the fork. If you want to pick up a grape, for example, the fork’s tines need to go straight down, but for a banana they need to be at an angle, otherwise it will slide off. Then we worked on how much force we needed to apply for different foods.

    In this new paper, we looked at how people pick up food, and used that data to generate a set of trajectories. We found a small number of motions that people actually use to eat and settled on 11 trajectories. So rather than just the simple up-down or coming in at an angle, it’s using scooping motions, or it’s wiggling inside of the food item to increase the strength of the contact. This small number still had the coverage to pick up a much greater array of foods.

    We think the system is now at a point where it can be deployed for testing on people outside the research group. We can invite a user to the UW, and put the robot either on a wheelchair, if they have the mounting apparatus ready, or a tripod next to their wheelchair, and run through an entire meal.

    https://youtu.be/6j2ymtDI8LI?si=rGys4bODV3EwATkC

    For you as researchers, what are the vital challenges ahead to make this something people could use in their homes every day?

    EKG: We’ve so far been talking about the problem of picking up the food, and there are more improvements that can be made here. Then there’s the whole other problem of getting the food to a person’s mouth, as well as how the person interfaces with the robot, and how much control the person has over this at least partially autonomous system.

    TKF: Over the next couple of years, we’re hoping to personalize the robot to different people. Everyone eats a little bit differently. Amal did some really cool work on social dining that highlighted how people’s preferences are based on many factors, such as their social and physical situations. So we’re asking: How can we get input from the people who are eating? And how can the robot use that input to better adapt to the way each person wants to eat?

    Amal Nanavati: There are several different dimensions that we might want to personalize. One is the user’s needs: How far the user can move their neck impacts how close the fork has to get to them. Some people have differential strength on different sides of their mouth, so the robot might need to feed them from a particular side of their mouth. There’s also an aspect of the physical environment. Users already have a bunch of assistive technologies, often mounted around their face if that’s the main part of their body that’s mobile. These technologies might be used to control their wheelchair, to interact with their phone, etc. Of course, we don’t want the robot interfering with any of those assistive technologies as it approaches their mouth.

    There are also social considerations. For example, if I’m having a conversation with someone or at home watching TV, I don’t want the robot arm to come right in front of my face. Finally, there are personal preferences. For example, among users who can turn their head a little bit, some prefer to have the robot come from the front so they can keep an eye on the robot as it’s coming in. Others feel like that’s scary or distracting and prefer to have the bite come at them from the side.

    A key research direction is understanding how we can create intuitive and transparent ways for the user to customize the robot to their own needs. We’re considering trade-offs between customization methods where the user is doing the customization, versus more robot-centered forms where, for example, the robot tries something and says, “Did you like it? Yes or no.” The goal is to understand how users feel about these different customization methods and which ones result in more customized trajectories.

    What should the public understand about robot-assisted feeding, both in general and specifically the work your lab is doing?

    EKG: It’s important to look not just at the technical challenges, but at the emotional scale of the problem. It’s not a small number of people who need help eating. There are various figures out there, but it’s over a million people in the U.S. Eating has to happen every single day. And to require someone else every single time you need to do that intimate and very necessary act can make people feel like a burden or self-conscious. So the whole community working towards assistive devices is really trying to help foster a sense of independence for people who have these kinds of physical mobility limitations.

    AN: Even these seven-digit numbers don’t capture everyone. There are permanent disabilities, such as a spinal cord injury, but there are also temporary disabilities such as breaking your arm. All of us might face disability at some time as we age and we want to make sure that we have the tools necessary to ensure that we can all live dignified lives and independent lives. Also, unfortunately, even though technologies like this greatly improve people’s quality of life, it’s incredibly difficult to get them covered by U.S. insurance companies. I think more people knowing about the potential quality of life improvement will hopefully open up greater access.

    Additional co-authors on the paper were Ramya Challa, who completed this research as an undergraduate student in the Allen School and is now at Oregon State University, and Bernie Zhu, a UW doctoral student in the Allen School. This research was partially funded by the National Science Foundation, the Office of Naval Research and Amazon.

    For more information, contact Gordon at ekgordon@cs.uw.edu, Nanavati at amaln@cs.uw.edu and Faulkner at taylorkf@cs.washington.edu.


    Excerpted and adapted from the UW News story by Stefan Milne.

    Read more


  • Off to the Park: A Geospatial Investigation of Adapted Ride-on Car Usage

    November 7, 2023

    Adapted ride-on cars (ROC) are an affordable power mobility training tool for young children with disabilities. But weather and a lack of adequate drive space can create barriers to families' adoption of their ROC.

    CREATE Ph.D. student Mia E. Hoffman is the lead author on a paper that investigates the relationship between the built environment and ROC usage.

    Mia Hoffman smiling into the sun. She has long, blonde hair. Behind her is part of the UW campus with trees and brick buildings.

    With her co-advisors Kat Steele and Heather A. Feldner, Jon E. Froehlich (all three CREATE associate directors), and Kyle N. Winfree as co-authors, Hoffman found that play sessions took place more often within the participants' homes. But when the ROC was used outside, children engaged in longer play sessions, actively drove for a larger portion of the session, and covered greater distances.

    Accessibility scores for the sidewalks near a participant’s home (left) and the participant’s drive path (right). The participant generally avoided streets that were not accessible.

    Most notably, they found that children drove more in pedestrian-friendly neighborhoods and when in proximity to accessible paths, demonstrating that providing an accessible place for a child to move, play, and explore is critical in helping a child and family adopt the mobility device into their daily life.

    Read more


  • Community Partner Spotlight: PAVE

    November 8, 2023

    CREATE is pleased to work with PAVE (Partnerships for Action | Voices for Empowerment) to help guide our efforts and shape solutions around the needs and limitations of accessible technology. They’ve supported our grant applications, shared opportunities for participation in CREATE research projects with their community, and published CREATE research on the importance of self-initiated mobility for children, particularly children with disabilities. 


    PAVE logo, with the V in a light green color and stylized to look like a flower.

    PAVE’s mission is to provide support, training, information, and resources to empower and give voice to individuals, youth, and families living with disabilities throughout Washington State.


    “Without technology—accessible technology—PAVE would never be able to support those who rely on us for accurate information and resources,” says Barb Koumjian, Project Coordinator for Lifespan Respite WA at PAVE. This includes the highly accessible PAVE website, with links to parent training programs, family health resources, and support systems.

    "All of us at PAVE are deeply committed to addressing the concerns of parents worried about their loved one in school, navigating medical supports, or caregiving for a family member. PAVE's goal is to provide a seamless online experience, allowing everyone to find information quickly, get support, and hopefully get some peace of mind," adds Communications Specialist Nicol Walsh.

    PAVE supports accessibility via adaptive technology: "For the families I support at PAVE, there is an uprising of parents advocating for AAC, in any capacity, at an early age with an autism diagnosis," says Shawnda Hicks, PAVE Coordinator. "Giving children communication in early learning stages reduces frustration and high behaviors."

    Connecting with PAVE

    Cute, mixed race child during hearing exam wears special headphones.

    Proud to be a UW CREATE Community Partner

    "As a statewide organization, we're deeply committed to accessibility and equity for everyone, and we value our collaborations with UW CREATE for all we serve in Washington," says Tracy Kahlo, PAVE Executive Director. 


    Thanks to these PAVE staff members for contributing words, data, and perspective: Barb Koumjian, Nicol Walsh, Shawnda Hicks, and Tracy Kahlo.

    Read more


  • UW News: Can AI help boost accessibility? CREATE researchers tested it for themselves

    November 2, 2023 | UW News

    Generative artificial intelligence tools like ChatGPT, an AI-powered language tool, and Midjourney, an AI-powered image generator, can potentially assist people with various disabilities. They could summarize content, compose messages, or describe images. Yet they also regularly spout inaccuracies and fail at basic reasoning, perpetuating ableist biases.

    This year, seven CREATE researchers conducted a three-month autoethnographic study — drawing on their own experiences as people with and without disabilities — to test AI tools’ utility for accessibility. Though researchers found cases in which the tools were helpful, they also found significant problems with AI tools in most use cases, whether they were generating images, writing Slack messages, summarizing writing or trying to improve the accessibility of documents.

    Four AI-generated images show different interpretations of a doll-sized “crocheted lavender husky wearing ski goggles,” including two pictured outdoors and one against a white background.

    The team presented its findings Oct. 22 at the ASSETS 2023 conference in New York.

    “When technology changes rapidly, there’s always a risk that disabled people get left behind,” said senior author Jennifer Mankoff, CREATE’s director and a professor in the Paul G. Allen School of Computer Science & Engineering. “I’m a really strong believer in the value of first-person accounts to help us understand things. Because our group had a large number of folks who could experience AI as disabled people and see what worked and what didn’t, we thought we had a unique opportunity to tell a story and learn about this.”

    The group presented its research in seven vignettes, often amalgamating experiences into single accounts to preserve anonymity. For instance, in the first account, “Mia,” who has intermittent brain fog, deployed ChatPDF.com, which summarizes PDFs, to help with work. While the tool was occasionally accurate, it often gave “completely incorrect answers.” In one case, the tool was both inaccurate and ableist, changing a paper’s argument to sound like researchers should talk to caregivers instead of to chronically ill people. “Mia” was able to catch this, since the researcher knew the paper well, but Mankoff said such subtle errors are some of the “most insidious” problems with using AI, since they can easily go unnoticed.

    Yet in the same vignette, “Mia” used chatbots to create and format references for a paper they were working on while experiencing brain fog. The AI models still made mistakes, but the technology proved useful in this case.

    “When technology changes rapidly, there’s always a risk that disabled people get left behind.”

    Jennifer Mankoff, CREATE Director, professor in the Allen School

    Mankoff, who’s spoken publicly about having Lyme disease, contributed to this account. “Using AI for this task still required work, but it lessened the cognitive load. By switching from a ‘generation’ task to a ‘verification’ task, I was able to avoid some of the accessibility issues I was facing,” Mankoff said.

    The results of the researchers’ other tests were equally mixed:

    • One author, who is autistic, found AI helped to write Slack messages at work without spending too much time troubling over the wording. Peers found the messages “robotic,” yet the tool still made the author feel more confident in these interactions.
    • Three authors tried using AI tools to increase the accessibility of content such as tables for a research paper or a slideshow for a class. The AI programs were able to state accessibility rules but couldn’t apply them consistently when creating content.
    • Image-generating AI tools helped an author with aphantasia (an inability to visualize) interpret imagery from books. Yet when they used the AI tool to create an illustration of “people with a variety of disabilities looking happy but not at a party,” the program could conjure only fraught images of people at a party that included ableist incongruities, such as a disembodied hand resting on a disembodied prosthetic leg.

    “I was surprised at just how dramatically the results and outcomes varied, depending on the task,” said lead author Kate Glazko, a UW doctoral student in the Allen School. “In some cases, such as creating a picture of people with disabilities looking happy, even with specific prompting — can you make it this way? — the results didn’t achieve what the authors wanted.”

    The researchers note that more work is needed to develop solutions to problems the study revealed. One particularly complex problem involves developing new ways for people with disabilities to validate the products of AI tools, because in many cases when AI is used for accessibility, either the source document or the AI-generated result is inaccessible. This happened in the ableist summary ChatPDF gave “Mia” and when “Jay,” who is legally blind, used an AI tool to generate code for a data visualization. He could not verify the result himself, but a colleague said it “didn’t make any sense at all.”  The frequency of AI-caused errors, Mankoff said, “makes research into accessible validation especially important.”

    Mankoff also plans to research ways to document the kinds of ableism and inaccessibility present in AI-generated content, as well as investigate problems in other areas, such as AI-written code.

    “Whenever software engineering practices change, there is a risk that apps and websites become less accessible if good defaults are not in place,” Glazko said. “For example, if AI-generated code were accessible by default, this could help developers to learn about and improve the accessibility of their apps and websites.”

    Co-authors on this paper are Momona Yamagami, who completed this research as a UW postdoctoral scholar in the Allen School and is now at Rice University; Aashaka Desai, Kelly Avery Mack and Venkatesh Potluri, all UW doctoral students in the Allen School; and Xuhai Xu, who completed this work as a UW doctoral student in the Information School and is now at the Massachusetts Institute of Technology. This research was funded by Meta, the Center for Research and Education on Accessible Technology and Experiences (CREATE), Google, an NIDILRR ARRT grant and the National Science Foundation.


    For more information, contact Glazko at glazko@cs.washington.edu and Mankoff at jmankoff@cs.washington.edu.


    This article was adapted from the UW News article by Stefan Milne.

    Read more


  • UW News: A11yBoard accessible presentation software

    October 30, 2023 | UW News

    A team led by CREATE researchers has created A11yBoard for Google Slides, a browser extension and phone or tablet app that allows blind users to navigate through complex slide layouts, objects, images, and text. Here, a user demonstrates the touchscreen interface. Team members Zhuohao (Jerry) Zhang, Jacob O. Wobbrock, and Gene S-H Kim presented the research at ASSETS 2023.

    A user demonstrates creating a presentation slide with A11yBoard on a touchscreen tablet and computer screen.

    Screen readers, which convert digital text to audio, can make computers more accessible to many disabled users — including those who are blind, low vision or dyslexic. Yet slideshow software, such as Microsoft PowerPoint and Google Slides, isn’t designed to make screen reader output coherent. Such programs typically rely on Z-order — which follows the way objects are layered on a slide — when a screen reader navigates through the contents. Since the Z-order doesn’t adequately convey how a slide is laid out in two-dimensional space, slideshow software can be inaccessible to people with disabilities.

    Combining a desktop computer with a mobile device, A11yBoard lets users work with audio, touch, gesture, speech recognition and search to understand where different objects are located on a slide and move these objects around to create rich layouts. For instance, a user can touch a textbox on the screen, and the screen reader will describe its color and position. Then, using a voice command, the user can shrink that textbox and left-align it with the slide’s title.
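    The touch-then-speak interaction described above can be illustrated with a small sketch. The object model and function names here are hypothetical, not A11yBoard's actual code: each artboard object carries a 2-D position and size, so the system can describe it aloud and apply spoken commands such as left-aligning one object with another.

    ```python
    # Hypothetical sketch of an artboard object model (not A11yBoard's code):
    # objects carry 2-D position and size so a screen reader can describe
    # them and voice commands can reposition them.
    from dataclasses import dataclass

    @dataclass
    class ArtboardObject:
        name: str
        x: float       # left edge in slide coordinates
        y: float       # top edge
        width: float
        height: float

    def describe(obj):
        """A screen-reader-style spoken description of an object's layout."""
        return f"{obj.name} at ({obj.x}, {obj.y}), size {obj.width} by {obj.height}"

    def left_align(target, reference):
        """Handle a command like 'left-align the textbox with the title'."""
        target.x = reference.x
    ```

    A spoken command would be parsed into a call like `left_align(textbox, title)`, after which the screen reader could announce the object's new position.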

    "We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box."

    Jacob O. Wobbrock, CREATE associate director and professor in the UW Information School

    “For a long time and even now, accessibility has often been thought of as, ‘We’re doing a good job if we enable blind folks to use modern products.’ Absolutely, that’s a priority,” said senior author Jacob O. Wobbrock, a UW professor in the Information School. “But that is only half of our aim, because that’s only letting blind folks use what others create. We want to empower people to create their own content, beyond a PowerPoint slide that’s just a title and a text box.”

    A11yBoard for Google Slides builds on a line of research in Wobbrock’s lab exploring how blind users interact with "artboards" — digital canvases on which users work with objects such as textboxes, shapes, images and diagrams. Slideshow software relies on a series of these artboards. When lead author Zhuohao (Jerry) Zhang, a UW doctoral student in the iSchool, joined Wobbrock’s lab, the two sought a solution to the accessibility flaws in creativity tools, like slideshow software. Drawing on earlier research from Wobbrock’s lab on the problems blind people have using artboards, Wobbrock and Zhang presented a prototype of A11yBoard in April. They then worked to create a solution that’s deployable through existing software, settling on a Google Slides extension.



    For the current paper, the researchers worked with co-author Gene S-H Kim, an undergraduate at Stanford University, who is blind, to improve the interface. The team tested it with two other blind users, having them recreate slides. The testers both noted that A11yBoard greatly improved their ability to understand visual content and to create slides themselves without constant back-and-forth iterations with collaborators; they needed to involve a sighted assistant only at the end of the process.

    The testers also highlighted spots for improvement: Remaining continuously aware of objects’ positions while trying to edit them still presented a challenge, and users were forced to do each action individually, such as aligning several visual groups from left to right, instead of completing these repeated actions in batches. Because of how Google Slides functions, the app’s current version also does not allow users to undo or redo edits across different devices.

    Ultimately, the researchers plan to release the app to the public. But first they plan to integrate a large language model, such as GPT, into the program.

    “That will potentially help blind people author slides more efficiently, using natural language commands like, ‘Align these five boxes using their left edge,’” Zhang said. “Even as an accessibility researcher, I’m always amazed at how inaccessible these commonplace tools can be. So with A11yBoard we’ve set out to change that.”

    This research was funded in part by the University of Washington’s Center for Research and Education on Accessible Technology and Experiences (UW CREATE). For more information, contact Zhang at zhuohao@uw.edu and Wobbrock at wobbrock@uw.edu.


    This article was adapted from the UW News article by Stefan Milne.

    Read more


  • Augmented Reality to Support Accessibility

    October 25, 2023

    RASSAR – Room Accessibility and Safety Scan in Augmented Reality – is a novel smartphone-based prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues. With RASSAR, the user holds out their phone and scans a space. The tool uses LiDAR and camera data, real-time machine learning, and AR to construct a real-time model of the 3D scene, attempts to identify and classify known accessibility and safety issues, and visualizes potential problems overlaid in AR. 
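    As a rough illustration of the rule-checking step (a hypothetical sketch, not the RASSAR implementation), one class of checks compares a detected object's mounting height against an accessible reach range; the 15-48 inch range below follows common ADA reach-range guidance.

    ```python
    # Hypothetical sketch of a RASSAR-style accessibility check (not the
    # actual implementation): flag detected objects, such as light switches,
    # whose mounting height falls outside an accessible reach range.

    ACCESSIBLE_REACH_IN = (15.0, 48.0)  # approximate ADA unobstructed reach, inches

    def check_reach(object_height_in):
        """Classify a detected object's height against the reach range."""
        lo, hi = ACCESSIBLE_REACH_IN
        if object_height_in < lo:
            return "too low"
        if object_height_in > hi:
            return "too high"
        return "ok"
    ```

    In a full pipeline, a result other than "ok" would be rendered as an AR annotation anchored to the object in the 3D scene.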

    RASSAR researchers envision the tool as an aid in building and validating new construction, planning renovations, updating homes for health concerns, or conducting telehealth home visits with occupational therapists. UW News interviewed two CREATE Ph.D. students about their work on the project:



    Read more