Jennifer Mankoff, Founding Co-Director

My research focuses on accessibility and 3D printing. I have led efforts to better understand both clinical and DIY stakeholders in accessible technology production, and have developed better, more usable fabrication tools. Together, these advances can enhance the capabilities and participation of all users in today's manufacturing revolution.

Affiliations:

Richard E. Ladner Professor, Paul G. Allen School of Computer Science & Engineering

Director, Make4all Lab

Research highlights

Better data sets that capture the varied experiences of people with disabilities

Better data sets that capture the varied experiences of people with disabilities are crucial to building better accessibility solutions. Mankoff has been involved in multiple pioneering data collection efforts. Most recently, her work capturing fine-grained, longitudinal behavioral data about the experiences of college undergraduates with and without disabilities has allowed her to study how the societal disruptions of COVID-19 had unequal impacts on students with disabilities. She has also collected, and is currently exploring, the first data set containing fine-grained, end-to-end trip data from over 60 people with disabilities, combined with self-reports of successes and failures. Earlier, she collected over a year of real-world mouse data from individuals with various impairments, a data set whose size is unparalleled in a community that usually tests ideas on 1-10 individuals in lab settings. With this data, she pioneered pixel-based analysis methods that improve on standard accessibility APIs, raising accuracy in identifying on-screen targets from 75% to 89%; demonstrated the huge variability both within a single user and among many users with impairments that affect desktop computer use; and developed classifiers that can dynamically determine a user's pointing ability from a single sample with 92% accuracy.
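
To give a flavor of what single-sample analysis of pointing data involves, the sketch below extracts a few common pointing-performance features from one cursor trajectory and applies a simple decision rule. The type names, feature set, and thresholds are invented for illustration; this is not the published classifier.

```kotlin
import kotlin.math.hypot

// One timestamped cursor position within a pointing movement.
data class CursorSample(val tMillis: Long, val x: Double, val y: Double)

// A few features commonly used to characterize pointing performance.
data class PointingFeatures(
    val movementTimeMillis: Long, // total duration of the movement
    val pathRatio: Double,        // path length / straight-line distance (1.0 = perfectly direct)
    val pauses: Int               // near-stationary segments longer than 100 ms
)

fun extractFeatures(trajectory: List<CursorSample>): PointingFeatures {
    val pathLength = trajectory.zipWithNext { a, b -> hypot(b.x - a.x, b.y - a.y) }.sum()
    val direct = hypot(
        trajectory.last().x - trajectory.first().x,
        trajectory.last().y - trajectory.first().y
    )
    val pauses = trajectory.zipWithNext().count { (a, b) ->
        hypot(b.x - a.x, b.y - a.y) < 1.0 && b.tMillis - a.tMillis > 100
    }
    return PointingFeatures(
        movementTimeMillis = trajectory.last().tMillis - trajectory.first().tMillis,
        pathRatio = if (direct > 0) pathLength / direct else Double.MAX_VALUE,
        pauses = pauses
    )
}

// Stand-in for a trained model: flag movements that are indirect or hesitant.
// The thresholds here are made up for illustration.
fun suggestsPointingDifficulty(f: PointingFeatures): Boolean =
    f.pathRatio > 1.8 || f.pauses > 3
```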

Better understanding of clinical and DIY accessible technology production

The advent of consumer-grade fabrication technology, most notably low-cost 3D printing, has opened the door to increasing power and participation in do-it-yourself and do-for-others accessible technology production. However, such production faces challenges not only at the level of process and policy, but also with respect to materials, design tools, and follow-up. As summarized in a 2019 Communications of the ACM article, Mankoff has led the effort to better understand both clinical and DIY stakeholders in this process, and has developed better, more usable tools for production. Together, these advances can enhance the capabilities and participation of all users in today's manufacturing revolution.

AccessSIGCHI directorship

Mankoff is the long-time director of AccessSIGCHI, the volunteer group that has helped improve conference accessibility in one of ACM's largest professional organizations and is working collaboratively to set standards and document best practices for use across ACM.

Richard Ladner, Director for Education

I am interested in accessibility technology research, especially technology for deaf, deaf-blind, hard-of-hearing, and blind people. Active in promoting the inclusion of people with disabilities in computing fields, I am the Principal Investigator for the National Science Foundation-funded AccessComputing and AccessCSforAll.

Affiliations:

Professor Emeritus, Allen School of Computer Science & Engineering

Principal Investigator, AccessComputing

Principal Investigator, AccessCSforAll

Research highlights

ASL-STEM Forum

ASL-STEM Forum is a website where scientists who know American Sign Language (ASL) can upload signs for terms in science, technology, engineering, and mathematics (STEM) fields. These signs can be used by teachers, interpreters, and other professionals who need to know how to sign a particular STEM term. Since 2010, more than 3,000 signs have been uploaded, garnering more than 1.3 million views on YouTube.

Perkinput

Perkinput is a non-visual text entry method for touchscreens based on Braille, developed by Shiri Azenkot, a student of Richard Ladner and Jacob Wobbrock. Rather than requiring users to hit specific targets, the method tracks fingers as they type six-dot Braille characters on the screen. Braille can be input with one hand on a small touchscreen or with two hands on a larger touchscreen. In studies, users typed up to 17 words per minute with one hand and 37 words per minute with two hands, with high accuracy. Braille-based text entry is now common on touchscreen devices.
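
To illustrate the core idea of target-free Braille input, here is a minimal sketch that assigns each touch to the nearest calibrated dot position and looks the resulting dot pattern up in a Braille table. All names are illustrative, and Perkinput's actual finger tracking and calibration are considerably more sophisticated.

```kotlin
// A single touch location on the screen.
data class Touch(val x: Float, val y: Float)

// Standard six-dot Braille numbering: dots 1-2-3 down the left column,
// dots 4-5-6 down the right column. Only a few letters shown.
val brailleToChar = mapOf(
    setOf(1) to 'a',
    setOf(1, 2) to 'b',
    setOf(1, 4) to 'c',
    setOf(1, 4, 5) to 'd',
    setOf(1, 5) to 'e'
    // ... remaining characters omitted
)

// Assign each touch to the nearest calibrated dot position (no fixed
// on-screen targets), then look up the resulting dot pattern.
fun decodeCell(touches: List<Touch>, calibratedDots: Map<Int, Touch>): Char? {
    val pressed = touches.map { touch ->
        calibratedDots.entries.minByOrNull { (_, dot) ->
            (touch.x - dot.x) * (touch.x - dot.x) + (touch.y - dot.y) * (touch.y - dot.y)
        }!!.key
    }.toSet()
    return brailleToChar[pressed]
}
```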

Blocks4All

Blocks4All is an accessible block-based programming environment for young children developed by Lauren Milne, a student of Richard Ladner. Block-based programming environments such as Scratch and Alice are the most popular way for young children to learn computing concepts such as conditionals and loops. Unfortunately, none of these environments is accessible to young screen reader users. Blocks4All is the first block-based programming environment for touchscreen devices that is fully accessible.

AccessComputing

AccessComputing is a National Science Foundation program, founded in 2006 and centered at the University of Washington, with the goal of increasing the participation and success of individuals with disabilities in computing fields. It is a joint project of the Allen School, the Information School, and the DO-IT Center. To date, it has served more than one thousand students across the United States, providing professional development, peer mentoring, industry and research internships, and funding for travel to conferences. With its 65+ academic, organizational, and industry partners, it has also focused on institutional change, influencing computing departments, organizations, and companies to ensure they are welcoming and accessible to people with disabilities.

James Fogarty, Associate Director

My broad research interests are in Human-Computer Interaction, User Interface Software and Technology, and Ubiquitous Computing. My focus is on developing, deploying, and evaluating new approaches to the human obstacles surrounding widespread everyday adoption of ubiquitous sensing and intelligent computing technologies.

Affiliations:

Professor, Allen School of Computer Science & Engineering

Research highlights

Large-Scale Android Accessibility Analyses

Fogarty’s research group is leading the largest-known open analyses of the accessibility of Android apps, providing new understanding of the current state of mobile accessibility and new insights into factors in the ecosystem that contribute to accessibility failures (ASSETS 2017, ASSETS 2018, TACCESS 2020). For example, our analyses found that 45% of apps are missing screenreader labels for more than 90% of their image-based buttons, leaving much of those apps' functionality inaccessible to many people. Such results also highlight that pervasive accessibility failures require continued research and new approaches to addressing contributing factors in the technology ecosystem. Our analyses of common failure scenarios have directly led to Google improvements in the accessibility ecosystem (e.g., corrections to inaccessible code snippets in Android documentation, which had spread accessibility failures into the many apps that reused them) and have motivated additional research (e.g., our ongoing work on developer tools that better scaffold developer learning about how to correctly apply accessibility metadata).
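
The most common failure these analyses surface has a simple fix: giving an image-based button a contentDescription that a screenreader such as TalkBack can announce. A minimal sketch using standard Android APIs (the activity, layout, and resource names are hypothetical):

```kotlin
import android.app.Activity
import android.os.Bundle
import android.widget.ImageButton

class ComposeActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_compose)

        val sendButton = findViewById<ImageButton>(R.id.send_button)
        // Without this line, a screenreader can only announce "unlabeled button".
        sendButton.contentDescription = getString(R.string.send_message)
    }
}
```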

Runtime Mobile Accessibility Repair and Enhancement

Fogarty’s research group is developing new techniques for runtime repair and enhancement of mobile accessibility. Key to these approaches is a new ability to support third-party runtime enhancements within Android's security model and without requiring modification to apps (CHI 2017, UIST 2018). We have applied these approaches to accessibility repair (e.g., techniques that allow social annotation of apps with missing screenreader data) and to enabling entirely new forms of tactile accessibility enhancements (ASSETS 2018). These techniques therefore provide a research basis both for improving current accessibility and for exploring new forms of future accessibility enhancements.
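
As a rough sketch of the repair idea (not the published systems, which rely on interaction-proxy techniques and more careful engineering), an Android accessibility service can walk the active window's node tree, detect unlabeled image buttons, and look up labels contributed by other users. The crowdLabels store below is hypothetical:

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

class LabelRepairService : AccessibilityService() {

    // Hypothetical store of crowd-contributed labels, keyed by each
    // node's view id resource name.
    private val crowdLabels: Map<String, String> = emptyMap()

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        rootInActiveWindow?.let { visit(it) }
    }

    private fun visit(node: AccessibilityNodeInfo) {
        val unlabeledImageButton =
            node.className?.toString() == "android.widget.ImageButton" &&
            node.contentDescription.isNullOrEmpty()
        if (unlabeledImageButton) {
            val label = node.viewIdResourceName?.let { crowdLabels[it] }
            // A real system would surface `label` to the screenreader,
            // e.g., through an interaction proxy overlaying the app.
        }
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { visit(it) }
        }
    }

    override fun onInterrupt() {}
}
```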

Jon Froehlich, Associate Director

My research focuses on designing, building, and evaluating interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personalized health and wellness.

Affiliations:

Associate Professor, Allen School of Computer Science & Engineering

Research highlights

Real-time captioning and sound awareness support

With advances in wearable computing and machine learning, Leah Findlater and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and on wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smartwatches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to identify design opportunities and pain points and to solicit feedback on our designs.

Project Sidewalk

Project Sidewalk combines remote crowdsourcing and AI to identify and assess sidewalk accessibility in online imagery. Working with people who have mobility disabilities, local government partners, and NGOs, we have deployed Project Sidewalk in five cities (Washington, DC; Seattle, WA; Newberg, OR; Columbus, OH; and Mexico City, Mexico), collecting over 500,000 geo-tagged sidewalk accessibility labels for curb ramps, surface problems, and other obstacles.
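
Each contribution is essentially a geo-tagged, typed label. A sketch of what one such record might look like (the field and type names are illustrative, not Project Sidewalk's actual schema):

```kotlin
// Possible shape of one crowd- or AI-contributed label.
enum class LabelType { CURB_RAMP, MISSING_CURB_RAMP, SURFACE_PROBLEM, OBSTACLE }

data class SidewalkLabel(
    val lat: Double,        // geo-tag: latitude
    val lng: Double,        // geo-tag: longitude
    val type: LabelType,
    val severity: Int,      // e.g., 1 (minor) to 5 (impassable)
    val fromModel: Boolean  // true if proposed by AI rather than a person
)
```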
