My research focuses on designing, building, and evaluating interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personalized health and wellness.
Affiliations:
Associate Professor, Allen School of Computer Science & Engineering
Research highlights
Real-time captioning and sound awareness support
With advances in wearable computing and machine learning, Leah Findlater and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smartwatches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to identify design opportunities and pain points and to solicit feedback on our designs.
Project Sidewalk
Project Sidewalk combines remote crowdsourcing and AI to identify and assess sidewalk accessibility in online imagery. Working with people who have mobility disabilities, local government partners, and NGOs, we have deployed Project Sidewalk in five cities (Washington, DC; Seattle, WA; Newberg, OR; Columbus, OH; and Mexico City, Mexico), collecting over 500,000 geo-tagged sidewalk accessibility labels on curb ramps, surface problems, and other obstacles.
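To give a rough sense of the kind of data these deployments produce, here is a minimal sketch of a geo-tagged accessibility label and a simple severity filter. The field names, label taxonomy, and severity scale below are illustrative assumptions, not Project Sidewalk's actual schema or API.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical label taxonomy; the real Project Sidewalk schema may differ.
class LabelType(Enum):
    CURB_RAMP = "curb_ramp"
    SURFACE_PROBLEM = "surface_problem"
    OBSTACLE = "obstacle"

@dataclass
class SidewalkLabel:
    """One geo-tagged accessibility label placed on online street imagery."""
    lat: float             # WGS84 latitude of the labeled feature
    lng: float             # WGS84 longitude of the labeled feature
    label_type: LabelType  # what kind of accessibility feature/barrier this is
    severity: int          # assumed 1 (minor) to 5 (severe) rating

def severe_obstacles(labels):
    """Filter labels down to high-severity obstacles, e.g., for repair triage."""
    return [l for l in labels
            if l.label_type is LabelType.OBSTACLE and l.severity >= 4]

if __name__ == "__main__":
    labels = [
        SidewalkLabel(38.9072, -77.0369, LabelType.CURB_RAMP, 1),
        SidewalkLabel(47.6062, -122.3321, LabelType.OBSTACLE, 5),
    ]
    print(severe_obstacles(labels))
```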
Related news
- Mobile 3D printer can autonomously add accessibility features to a room
- Hard Mode: Accessibility, Difficulty and Joy for Gamers With Disabilities
- ARTennis attempts to help low vision players
- Off to the Park: A Geospatial Investigation of Adapted Ride-on Car Usage
- Augmented Reality to Support Accessibility