Leah Findlater, Associate Director

I am interested in creating technologies that adapt to accommodate individual user needs and preferences, whether for basic interactions such as touchscreen text entry or for more complex tasks such as working with machine learning models. My research goal is to ensure that the next generation of computing technologies is designed to meet the needs of the broadest range of users.

Affiliations:

Associate Professor, Human Centered Design & Engineering

Adjunct Associate Professor, Allen School of Computer Science & Engineering

Director, Inclusive Design Lab

Research highlights

Expanding voice-based interaction

Over the past few years, conversational voice assistants (VAs) such as Amazon Alexa and Google Assistant have become ubiquitous. We have shown that VAs offer tremendous potential to support equal access to information, particularly for blind and low vision users: they are inherently accessible regardless of vision level and, as tools designed for novices, they offer an approachable introduction to audio-based interaction for people unfamiliar with screen readers. However, VAs currently support only a limited set of tasks. In collaboration with researchers at Microsoft, we are investigating how to combine the strengths of screen readers, which are powerful expert tools, with the approachability of VA interaction. One example is our VERSE web search tool.

Real-time captioning and sound awareness support

With advances in wearable computing and machine learning, Jon Froehlich and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smartwatches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to help identify design opportunities and pain points and to solicit feedback on our designs.
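
The smartwatch thread of this work rests on running a sound classifier directly on the watch. The Kotlin sketch below is a rough illustration of that pipeline, not SoundWatch's actual implementation: it records a one-second window of microphone audio and runs it through a TensorFlow Lite model. The model file, input length, and label set are illustrative assumptions, and a real deployment would also need the RECORD_AUDIO permission and a continuous listening loop.

    import android.media.AudioFormat
    import android.media.AudioRecord
    import android.media.MediaRecorder
    import org.tensorflow.lite.Interpreter
    import java.io.File

    class SoundClassifier(modelFile: File) {
        private val interpreter = Interpreter(modelFile)

        // Hypothetical label set; a real classifier ships its own.
        private val labels = listOf("dog bark", "door knock", "fire alarm", "speech")

        private val sampleRate = 16_000
        private val windowSamples = sampleRate  // one-second window (assumption)

        fun listenOnce(): String {
            val minBuf = AudioRecord.getMinBufferSize(
                sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)
            val recorder = AudioRecord(
                MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT,
                maxOf(minBuf, windowSamples * 2))

            // Capture one window of raw 16-bit PCM audio.
            val pcm = ShortArray(windowSamples)
            recorder.startRecording()
            recorder.read(pcm, 0, windowSamples)
            recorder.stop()
            recorder.release()

            // Scale PCM to [-1, 1] floats, the input most audio models expect.
            val input = arrayOf(FloatArray(windowSamples) { pcm[it] / 32768f })
            val scores = arrayOf(FloatArray(labels.size))
            interpreter.run(input, scores)

            // Report the top-scoring sound class.
            val best = scores[0].indices.maxByOrNull { scores[0][it] } ?: 0
            return labels[best]
        }
    }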


James Fogarty, Associate Director

My broad research interests are in Human-Computer Interaction, User Interface Software and Technology, and Ubiquitous Computing. My focus is on developing, deploying, and evaluating new approaches to addressing the human obstacles that surround widespread everyday adoption of ubiquitous sensing and intelligent computing technologies.

Affiliations:

Professor, Allen School of Computer Science & Engineering

Research highlights

Large-Scale Android Accessibility Analyses

My research group is leading the largest-known open analyses of the accessibility of Android apps, providing new understanding of the current state of mobile accessibility and new insights into the factors in the ecosystem that contribute to accessibility failures (ASSETS 2017, ASSETS 2018, TACCESS 2020). For example, our analyses found that 45% of apps are missing screen reader labels for more than 90% of their image-based buttons, leaving much of the functionality of those apps inaccessible to many people. Such results also highlight that pervasive accessibility failures require continued research and new approaches to addressing contributing factors in the technology ecosystem. Our analyses of common failure scenarios have directly led to Google improvements in the accessibility ecosystem (e.g., corrections to inaccessible code snippets in Android documentation, which had propagated accessibility failures into the many apps that copied them) and have motivated additional research (e.g., our ongoing work on developer tools that better scaffold developer learning about how to correctly apply accessibility metadata).
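
To make the failure mode concrete: an image-based button is announced sensibly by a screen reader only if the developer supplies accessibility metadata. The Kotlin fragment below is a minimal sketch of the missing line; the button and its "Play" label are hypothetical.

    import android.widget.ImageButton

    // Hypothetical example: give an image-based "play" button the label a
    // screen reader needs. Without a contentDescription, TalkBack can only
    // announce this control as an unlabeled button; with one, it can
    // announce the control's purpose.
    fun labelPlayButton(playButton: ImageButton) {
        playButton.contentDescription = "Play"
    }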

Runtime Mobile Accessibility Repair and Enhancement

My research group is developing new techniques for runtime repair and enhancement of mobile accessibility. Key to these approaches is a new ability to support third-party runtime enhancements within Android’s security model and without requiring modification to apps (CHI 2017, UIST 2018). We have applied these approaches both to accessibility repair (e.g., techniques that allow social annotation of apps with missing screen reader data) and to entirely new forms of tactile accessibility enhancement (ASSETS 2018). These techniques therefore provide a research basis for improving current accessibility and for exploring new forms of future accessibility enhancements.
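
Android's accessibility APIs are the platform mechanism that such runtime enhancements build on. The sketch below is a simplified illustration, not one of the group's published systems: an AccessibilityService walks the active window's node tree and flags clickable image buttons that a screen reader would announce as unlabeled, which is the point at which a repair layer could substitute, say, a socially annotated label.

    import android.accessibilityservice.AccessibilityService
    import android.view.accessibility.AccessibilityEvent
    import android.view.accessibility.AccessibilityNodeInfo

    class UnlabeledButtonDetector : AccessibilityService() {

        override fun onAccessibilityEvent(event: AccessibilityEvent?) {
            rootInActiveWindow?.let { scan(it) }
        }

        // Recursively walk the view hierarchy of the active window.
        private fun scan(node: AccessibilityNodeInfo) {
            val imageBased = node.className?.toString() == "android.widget.ImageButton"
            if (imageBased && node.isClickable &&
                node.contentDescription.isNullOrEmpty() && node.text.isNullOrEmpty()) {
                // Unlabeled image button found: a screen reader has nothing
                // to announce here. A repair layer could supply a label,
                // e.g., one contributed through social annotation.
            }
            for (i in 0 until node.childCount) {
                node.getChild(i)?.let { scan(it) }
            }
        }

        override fun onInterrupt() {
            // Required override; nothing to clean up in this sketch.
        }
    }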


Jon Froehlich, Associate Director

My research focuses on designing, building, and evaluating interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personalized health and wellness.

Affiliations:

Associate Professor, Allen School of Computer Science & Engineering

Research highlights

Real-time captioning and sound awareness support

With advances in wearable computing and machine learning, Leah Findlater and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smartwatches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to help identify design opportunities and pain points and to solicit feedback on our designs.

Project Sidewalk

Project Sidewalk combines remote crowdsourcing and AI to identify and assess sidewalk accessibility in online imagery. Working with people who have mobility disabilities, local government partners, and NGOs, we have deployed Project Sidewalk in five cities (Washington, DC; Seattle, WA; Newberg, OR; Columbus, OH; and Mexico City, Mexico), collecting over 500,000 geo-tagged sidewalk accessibility labels on curb ramps, surface problems, and other obstacles.
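
For a sense of the data involved, the sketch below models a single geo-tagged label as a Kotlin record. The field names, label taxonomy, and severity scale are assumptions drawn from the description above, not Project Sidewalk's actual schema.

    // Assumed label taxonomy, based on the categories named above.
    enum class LabelType { CURB_RAMP, MISSING_CURB_RAMP, SURFACE_PROBLEM, OBSTACLE }

    data class SidewalkLabel(
        val latitude: Double,   // geo-tag of the labeled point
        val longitude: Double,
        val type: LabelType,    // what the labeler saw
        val severity: Int,      // e.g., 1 (minor) to 5 (impassable); assumed scale
        val source: String      // "crowd" or "model", reflecting the crowd + AI pipeline
    )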


Kat Steele, Associate Director

My research focuses on using novel computational and experimental tools to understand human movement and improve treatment and quality of life for individuals with cerebral palsy, stroke, and other neurological disorders.

My research strives to connect engineering and medicine, creating solutions that not only advance our understanding of human ability but also translate research results to the clinic and daily life.

Affiliations:

Albert S. Kobayashi Endowed Professor of Mechanical Engineering

Ability & Innovation Lab

AMP Lab

HuskyADAPT and AccessEngineering

Research highlights

Ubiquitous Rehabilitation

Ubiquitous Rehabilitation seeks to develop the sensors, algorithms, and data visualization techniques required to deploy wearable technology that can reduce the burdens of rehabilitation and improve outcomes. Biomechanical principles guide the design of hardware and software that integrate rehabilitation into daily life.
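
As a small illustration of the algorithmic side of this work, the Kotlin sketch below reduces a window of raw accelerometer samples to a simple movement measure of the kind that wearable rehabilitation metrics can build on. The windowing scheme and the gravity-deviation metric are illustrative assumptions, not a published method from the lab.

    import kotlin.math.abs
    import kotlin.math.sqrt

    data class AccelSample(val x: Float, val y: Float, val z: Float)  // m/s^2

    // Mean deviation of acceleration magnitude from gravity over a window
    // of samples: near zero at rest, growing with movement intensity.
    fun activityLevel(window: List<AccelSample>): Float {
        if (window.isEmpty()) return 0f
        val gravity = 9.81f
        val deviations = window.map { s ->
            abs(sqrt(s.x * s.x + s.y * s.y + s.z * s.z) - gravity)
        }
        return deviations.sum() / deviations.size
    }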

Open-Orthoses

Open-Orthoses leverages advances in 3D printing, scanning, and fabrication to build innovative hand and arm orthoses (also known as exoskeletons). Multidisciplinary teams of engineers and clinicians work with individuals with disabilities to co-design customized devices, rigorously test those devices, and provide open-source designs that accelerate development.

AccessEngineering

AccessEngineering was founded in 2015 to (1) support and encourage individuals with disabilities to pursue careers in engineering, and (2) train all engineers in principles of accessible and inclusive design. This program has trained over 60 engineering faculty, facilitates communities of practice for engineering professionals with disabilities, and curates a knowledge base with over 100 articles for engineering students, faculty, and professionals.
