Jacob O. Wobbrock, Founding Co-Director

My research seeks to scientifically understand people’s experiences of computers and information, and to improve those experiences through design and engineering, especially for people with disabilities. My specific research topics include input & interaction techniques, human performance measurement & modeling, HCI research & design methods, mobile computing, and accessible computing.

Affiliations:

Professor, The Information School

Adjunct Professor, Paul G. Allen School of Computer Science & Engineering

Director, ACE Lab

Research highlights

Slide Rule

A project that invented the world’s first touch-based, finger-driven screen reader for smartphones. The interaction techniques employed by Slide Rule influenced Apple in creating VoiceOver, its built-in smartphone screen reader, and subsequently TalkBack on Android. Developed in 2007-2008, Slide Rule has directly influenced products shipping on billions of touch devices, and the work was recently honored for its impact.

Ability-Based Design

A design approach, developed from 2008 to 2020, that emphasizes what people can do and seeks to tailor technologies to people’s specific abilities through adaptation, customization, and ability-focused design practice. Interfaces that adapt to their users’ abilities, touch recognizers that model their users’ touch behaviors, and mouse cursors that dynamically adjust their speeds to make pointing more accurate all came from, and informed, ability-based design. Its 2018 Communications of the ACM article has been influential at major companies, including Microsoft.
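One such project, the Angle Mouse, adjusts pointer speed based on how scattered a user's recent movement directions are. As a rough sketch of that idea (not the published algorithm; the constants here are invented), a pointer can lower its control-display gain when movement angles spread widely, a sign the user is struggling to home in on a target:

    import math

    def angular_deviation(points):
        """Spread of movement angles over a window of (x, y) cursor
        samples; high deviation suggests corrective struggling.
        (Angle wrap-around at +/- pi is ignored for simplicity.)"""
        angles = [math.atan2(y2 - y1, x2 - x1)
                  for (x1, y1), (x2, y2) in zip(points, points[1:])]
        if len(angles) < 2:
            return 0.0
        mean = sum(angles) / len(angles)
        return math.sqrt(sum((a - mean) ** 2 for a in angles) / len(angles))

    def adaptive_gain(points, base_gain=2.0, min_gain=0.5, k=1.5):
        """Lower the control-display gain as angular deviation rises,
        slowing the cursor so fine corrective movements land."""
        return max(min_gain, base_gain - k * angular_deviation(points))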

Accessible Input Techniques

Mouse pointing and text entry are still the most fundamental inputs we give desktop and laptop computing systems, but for many users, these bedrock input capabilities remain inaccessible. Since my doctoral research from 2001-2006, I have been inventing and evaluating more accessible means of providing input to computing systems. For example, my EdgeWrite technology provided more accessible text input on handheld devices, wheelchair joysticks, touchpads, and trackballs. More recently, my Pointing Magnifier 2 software, which began as a research project with Leah Findlater, has provided a cursor replacement for Microsoft Windows that is useful to people with motor or visual impairments, older adults, and graphic designers.
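EdgeWrite recognizes characters by the sequence of corners a stroke visits in a square input area, which is what makes it robust to tremor. The sketch below conveys that idea in miniature; the corner-to-letter table is hypothetical, and the real alphabet and corner-detection details differ:

    # Corners of the unit-square input area:
    # 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
    CORNERS = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0)}

    # Hypothetical corner-sequence alphabet, for illustration only.
    ALPHABET = {(0, 2, 3): "l", (0, 1, 2, 3): "z", (0, 2, 3, 1): "u"}

    def nearest_corner(x, y):
        return min(CORNERS, key=lambda c: (CORNERS[c][0] - x) ** 2 +
                                          (CORNERS[c][1] - y) ** 2)

    def recognize(stroke):
        """Reduce a stroke (a list of normalized (x, y) points) to the
        sequence of distinct corners it visits, then look up that
        sequence in the alphabet."""
        seq = []
        for x, y in stroke:
            c = nearest_corner(x, y)
            if not seq or seq[-1] != c:
                seq.append(c)
        return ALPHABET.get(tuple(seq))

    print(recognize([(0.1, 0.1), (0.1, 0.9), (0.9, 0.9)]))  # "l"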



Anat Caspi, Director for Translation

I am interested in exploring ways in which collaborative commons and cooperation can challenge and transform the current economics of assistive technology and incentivize rapid development and deployment of ethically built accessible technologies. My research focuses on engineering machine-intelligent solutions for customizable, real-time, responsive technologies in the context of work, play, and urban street environments.

Affiliations:

Affiliate Assistant Professor, Electrical & Computer Engineering

Director and co-founder, Taskar Center for Accessible Technology

Research highlights

Equity in Transportation Data

All travelers want directions they can trust, but most maps and automated pedestrian routers lack the data that travelers with accessibility requirements need. When we built AccessMap, a personalized, automated pedestrian routing application that takes mobility limitations into consideration, it became clear that municipalities and agencies have not been effective in collecting and maintaining detailed pedestrian-centric map information. Users of AccessMap, which currently serves Seattle, Bellingham, and Mt. Vernon, have made it clear with over 35,000 routing requests that people of all abilities need better mobility apps providing customized information about the pedestrian environment. To scale our efforts, we created the OpenSidewalks data standard along with understandable tools for gathering sidewalk network data, focusing on (1) tools for individual citizen-scientist data entry, (2) mass-import tools for municipal datasets, and (3) automated computer vision pipelines that map geo-located videos. Our standard and methods for effective data exchange and sharing were recently adopted by King County Metro, Sound Transit, and MVTransit Inc, the largest paratransit operator with a worldwide presence.
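At the heart of AccessMap is the idea that the best route depends on the traveler: the same sidewalk network yields different paths under different ability profiles. A minimal sketch of such personalized routing follows; the edge attributes are illustrative stand-ins, not the actual OpenSidewalks schema:

    import networkx as nx

    # Toy sidewalk network with made-up attributes.
    G = nx.Graph()
    G.add_edge("a", "b", length=80, incline=0.02, curb_ramp=True)
    G.add_edge("b", "c", length=60, incline=0.09, curb_ramp=True)
    G.add_edge("a", "c", length=150, incline=0.01, curb_ramp=True)

    def make_weight(max_incline):
        """Personalized edge cost: edges beyond the traveler's incline
        limit (or without curb ramps) are excluded entirely; passable
        edges cost distance with a slope penalty."""
        def weight(u, v, d):
            if d["incline"] > max_incline or not d["curb_ramp"]:
                return None  # networkx treats None as an impassable edge
            return d["length"] * (1 + 10 * d["incline"])
        return weight

    # A traveler who tolerates at most a 5% incline is routed the
    # longer, flatter way: ['a', 'c'] rather than through 'b'.
    print(nx.shortest_path(G, "a", "c", weight=make_weight(0.05)))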

The Taskar Center for Accessible Technology (TCAT)

An initiative co-founded by Anat Caspi at the Paul G. Allen School of Computer Science & Engineering to develop, translate, and deploy open-source, accessible technologies, with a focus on benefiting individuals with motor limitations or speech impairments. TCAT’s translation efforts promote collaborative use of data commons and shared community resources, recognizing that bringing novel accessible technologies to users requires challenging the traditional technology-transfer path. With our partners, we launched the first assistive technology and adapted toy lending library in the Pacific Northwest, which lends physical technologies and offers online resources for others to replicate. Over the past five years, TCAT has engaged more than 200 undergraduate and 50 graduate design and engineering students in participatory design and inclusive design practices with our communities of practice, bringing together people of diverse abilities, backgrounds, and skill sets towards a common goal of designing for the fullness of human abilities and experiences.



Richard Ladner, Director for Education

I am interested in accessibility technology research, especially technology for deaf, deaf-blind, hard-of-hearing, and blind people. Active in promoting the inclusion of people with disabilities in computing fields, I am the Principal Investigator for the National Science Foundation-funded AccessComputing and AccessCSforAll.

Affiliations:

Professor Emeritus, Allen School of Computer Science & Engineering

Principal Investigator, AccessComputing

Principal Investigator, AccessCSforAll

Research highlights

ASL-STEM Forum

ASL-STEM Forum is a website where scientists who know American Sign Language (ASL) upload signs for terms in science, technology, engineering, and mathematics (STEM) fields. These signs can be used by teachers, interpreters, and other professionals who need to know how to sign a particular STEM term. Since 2010, more than 3,000 signs have been uploaded, garnering more than 1.3 million views on YouTube.

Perkinput

Perkinput is a non-visual text entry method for touchscreens based on Braille, developed by Shiri Azenkot, a student of Richard Ladner and Jacob Wobbrock. The method does not use specific targets; instead, it tracks fingers as they type six-dot Braille characters on the screen. Braille can be input with one hand on a small touchscreen or with two hands on a larger touchscreen. In studies, users typed up to 17 words per minute with one hand and 37 words per minute with two hands, with high accuracy. Braille-based text entry is now common on touchscreen devices.
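The core of the technique is classifying a set of simultaneous touch points into one of the six-dot Braille cells. The toy sketch below bins touches by screen halves and thirds; the actual method instead calibrates to where each user's fingers naturally land:

    # Standard six-dot Braille: dots 1-3 top-to-bottom in the left
    # column, dots 4-6 in the right column.
    BRAILLE = {frozenset({1}): "a", frozenset({1, 2}): "b",
               frozenset({1, 4}): "c", frozenset({1, 4, 5}): "d",
               frozenset({1, 5}): "e"}  # ...and so on

    def decode_touches(touches, width, height):
        """Classify simultaneous (x, y) touch points into a Braille cell."""
        dots = set()
        for x, y in touches:
            col = 0 if x < width / 2 else 1    # left or right column
            row = min(2, int(3 * y / height))  # top, middle, or bottom
            dots.add(1 + row + 3 * col)
        return BRAILLE.get(frozenset(dots), "?")

    # Two fingers in the left column, top and middle rows -> dots {1, 2} -> "b"
    print(decode_touches([(100, 50), (100, 400)], width=600, height=900))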

Blocks4All

Blocks4All is an accessible block-based programming environment for young children, developed by Lauren Milne, a student of Richard Ladner. Block-based programming environments such as Scratch and Alice are the most popular way for young children to learn computing concepts such as conditionals and loops. Unfortunately, none of these environments is accessible to young screen reader users. Blocks4All is the first block-based programming environment for touchscreen devices that is fully accessible.

AccessComputing

AccessComputing is a National Science Foundation program, founded in 2006 and centered at the University of Washington, with the goal of increasing the participation and success of individuals with disabilities in computing fields. It is a joint project of the Allen School, the Information School, and the DO-IT Center. To date, it has served more than one thousand students across the United States, providing professional development, peer mentoring, industry and research internships, and funding for travel to conferences. With its 65+ academic, organizational, and industry partners, it has also focused on institutional change, influencing computing departments, organizations, and companies to make sure they are welcoming and accessible to people with disabilities.



Leah Findlater, Associate Director

I am interested in how to create technologies that adapt to accommodate individual user needs and preferences, whether to improve basic interactions such as touchscreen text entry or more complex tasks such as working with machine learning models. My research goal is to ensure that the next generation of computing technologies is designed to meet the needs of the broadest range of users.

Affiliations:

Associate Professor, Human Centered Design & Engineering

Adjunct Associate Professor, Allen School of Computer Science & Engineering

Director, Inclusive Design Lab

Research highlights

Expanding voice-based interaction

Over the past few years, conversational voice assistants (VAs) such as Amazon Alexa and Google Assistant have become ubiquitous. We have shown that VAs offer tremendous potential to support equal access to information, particularly for blind and low-vision users: they are inherently accessible regardless of vision level and, as tools designed for novices, they offer an approachable introduction to audio-based interaction for people unfamiliar with screen readers. However, VAs currently support only a limited set of tasks. In collaboration with researchers at Microsoft, we are investigating how to combine the strengths of screen readers, which are powerful expert tools, with the approachability of VA interaction. An example is our VERSE web search tool.

Real-time captioning and sound awareness support

With advances in wearable computing and machine learning, Jon Froehlich and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smartwatches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to identify design opportunities and pain points and to solicit feedback on our designs.



James Fogarty, Associate Director

My broad research interests are in Human-Computer Interaction, User Interface Software and Technology, and Ubiquitous Computing. My focus is on developing, deploying, and evaluating new approaches to the human obstacles surrounding widespread everyday adoption of ubiquitous sensing and intelligent computing technologies.

Affiliations:

Professor, Allen School of Computer Science & Engineering

Research highlights

Large-Scale Android Accessibility Analyses

Fogarty’s research group is leading the largest-known open analyses of the accessibility of Android apps, providing new understanding of the current state of mobile accessibility and new insights into factors in the ecosystem that contribute to accessibility failures (ASSETS 2017, ASSETS 2018, TACCESS 2020). For example, our analyses found that 45% of apps are missing screen reader labels for more than 90% of their image-based buttons, leaving much of those apps’ functionality inaccessible to many people. Such results also highlight that pervasive accessibility failures require continued research and new approaches to addressing contributing factors in the technology ecosystem. Our analyses of common failure scenarios have directly led to Google improvements in the accessibility ecosystem (e.g., corrections to Android documentation code snippets that were inaccessible and thus created many accessibility failures as the snippets were reused in apps) and motivated additional research (e.g., our ongoing work on developer tools that better scaffold developer learning about how to correctly apply accessibility metadata).
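The kind of check these analyses perform at scale is simple to state: find image-based buttons whose accessibility metadata is empty. The sketch below runs that check over a single screen, assuming a uiautomator-style XML dump of the view hierarchy:

    import xml.etree.ElementTree as ET

    def unlabeled_image_buttons(xml_path):
        """Count ImageButtons with no content description in a
        'uiautomator dump' of an Android screen; a missing label
        leaves the button's purpose silent to a screen reader."""
        root = ET.parse(xml_path).getroot()
        total = missing = 0
        for node in root.iter("node"):
            if node.get("class") == "android.widget.ImageButton":
                total += 1
                if not node.get("content-desc"):
                    missing += 1
        return total, missing

    total, missing = unlabeled_image_buttons("window_dump.xml")
    if total:
        print(f"{missing}/{total} image buttons lack labels")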

Runtime Mobile Accessibility Repair and Enhancement

Fogarty’s research group is developing new techniques for runtime repair and enhancement of mobile accessibility. Key to these approaches is a new ability to support third-party runtime enhancements within Android’s security model and without requiring modification to apps (CHI 2017, UIST 2018). We have applied these approaches to accessibility repair (e.g., techniques that allow social annotation of apps with missing screen reader data) and also to enable entirely new forms of tactile accessibility enhancement (ASSETS 2018). These techniques provide a research basis both for improving current accessibility and for exploring new forms of future accessibility enhancements.



Jon Froehlich, Associate Director

My research focuses on designing, building, and evaluating interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personalized health and wellness.

Affiliations:

Associate Professor, Allen School of Computer Science & Engineering

Research highlights

Real-time captioning and sound awareness support

With advances in wearable computing and machine learning, Leah Findlater and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smartwatches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to identify design opportunities and pain points and to solicit feedback on our designs.
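As a toy illustration of the sense-classify-notify loop behind on-watch sound identification (a deployed system like SoundWatch relies on a trained model and far richer features; the class centroids below are invented), consider nearest-centroid classification of coarse loudness and brightness features:

    import numpy as np

    def features(window, rate=16000):
        """Coarse features for a short audio window: RMS loudness and
        spectral centroid (brightness), in kHz."""
        spectrum = np.abs(np.fft.rfft(window))
        freqs = np.fft.rfftfreq(len(window), d=1 / rate)
        centroid = (freqs * spectrum).sum() / (spectrum.sum() + 1e-9)
        return np.array([np.sqrt((window ** 2).mean()), centroid / 1000])

    # Invented per-class feature centroids, for illustration only.
    CENTROIDS = {"door knock": np.array([0.30, 0.8]),
                 "smoke alarm": np.array([0.25, 3.2])}

    def identify(window):
        f = features(window)
        return min(CENTROIDS, key=lambda c: np.linalg.norm(f - CENTROIDS[c]))

    # A loud 3.2 kHz tone lands nearest the "smoke alarm" centroid.
    t = np.linspace(0, 0.5, 8000, endpoint=False)
    print(identify(0.35 * np.sin(2 * np.pi * 3200 * t)))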

Project Sidewalk

Project Sidewalk combines remote crowdsourcing and AI to identify and assess sidewalk accessibility in online imagery. Working with people who have mobility disabilities, local government partners, and NGOs, we have deployed Project Sidewalk in five cities (Washington, DC; Seattle, WA; Newberg, OR; Columbus, OH; and Mexico City, Mexico), collecting over 500,000 geo-tagged sidewalk accessibility labels on curb ramps, surface problems, and other obstacles.
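Turning half a million individual crowd labels into a usable map requires aggregating labels that refer to the same physical feature. Below is a simplified sketch of that step, using greedy proximity grouping with a majority vote; the distances and thresholds are made up, and this is not the project's published aggregation method:

    from collections import Counter

    def cluster_labels(labels, radius_m=5.0):
        """Greedily group geo-tagged labels within a few meters of an
        existing cluster, then majority-vote each cluster's type."""
        meters_per_deg = 111_000  # rough conversion; fine for a sketch
        clusters = []
        for lat, lng, kind in labels:
            for c in clusters:
                clat, clng = c["center"]
                if (abs(lat - clat) + abs(lng - clng)) * meters_per_deg < radius_m:
                    c["kinds"].append(kind)
                    break
            else:
                clusters.append({"center": (lat, lng), "kinds": [kind]})
        return [(c["center"], Counter(c["kinds"]).most_common(1)[0][0])
                for c in clusters]

    labels = [(47.60621, -122.33207, "curb ramp"),
              (47.60622, -122.33206, "curb ramp"),
              (47.60622, -122.33207, "surface problem"),
              (47.60900, -122.33100, "obstacle")]
    print(cluster_labels(labels))  # two clusters: a curb ramp and an obstacle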



Kat Steele, Associate Director

My research focuses on using novel computational and experimental tools to understand human movement and improve treatment and quality of life for individuals with cerebral palsy, stroke, and other neurological disorders.

My research strives to connect engineering and medicine, creating solutions that not only advance our understanding of human ability but also translate research results to the clinic and daily life.

Affiliations:

Albert S. Kobayashi Endowed Professor of Mechanical Engineering

Ability & Innovation Lab

AMP Lab

HuskyADAPT and AccessEngineering

Research highlights

Ubiquitous Rehabilitation

Ubiquitous Rehabilitation seeks to develop the sensors, algorithms, and data visualization techniques required to deploy wearable technology that can reduce the burdens of rehabilitation and improve outcomes. Biomechanical principles guide the design of hardware and software that integrate rehabilitation into daily life.
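A small example of the kind of algorithm this involves: counting exercise repetitions from a wrist-worn accelerometer by peak-picking the acceleration magnitude. This is an illustrative sketch, not the lab's method, and the thresholds are invented:

    import numpy as np

    def count_repetitions(accel, rate_hz=50, min_gap_s=1.0, thresh=1.5):
        """Count repetitions in an (n, 3) accelerometer trace (in g)
        by finding magnitude peaks at least min_gap_s apart."""
        mag = np.linalg.norm(accel, axis=1)
        min_gap = int(min_gap_s * rate_hz)
        reps, last_peak = 0, -min_gap
        for i in range(1, len(mag) - 1):
            is_peak = (mag[i] > thresh and
                       mag[i] >= mag[i - 1] and mag[i] >= mag[i + 1])
            if is_peak and i - last_peak >= min_gap:
                reps += 1
                last_peak = i
        return reps

    # Synthetic trace: gravity on the z-axis plus three movement bursts.
    t = np.arange(0, 10, 1 / 50)
    z = 1 + 0.8 * (np.sin(2 * np.pi * 0.3 * t) > 0.95)
    xyz = np.column_stack([np.zeros_like(t), np.zeros_like(t), z])
    print(count_repetitions(xyz))  # 3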

Open-Orthoses

Open-Orthoses leverages advances in 3D printing, scanning, and fabrication to build innovative hand and arm orthoses (also known as exoskeletons). Multidisciplinary teams of engineers and clinicians work with individuals with disabilities to co-design customized devices, rigorously test them, and publish open-source designs that accelerate development.

AccessEngineering

AccessEngineering was founded in 2015 to (1) support and encourage individuals with disabilities to pursue careers in engineering, and (2) train all engineers in principles of accessible and inclusive design. This program has trained over 60 engineering faculty, facilitates communities of practice for engineering professionals with disabilities, and curates a knowledge base with over 100 articles for engineering students, faculty, and professionals.

