Jon E. Froehlich, Associate Director

My research focuses on designing, building, and evaluating interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personalized health and wellness.

Affiliations

Professor, Allen School of Computer Science & Engineering

Director, Makeability Lab

Research highlights

Disability Parking Support

Accessible parking is critical for people with disabilities (PwDs), allowing equitable access to destinations, independent mobility, and community participation. Despite mandates, there has been no large-scale investigation of the quality or allocation of disability parking in the US, nor significant research on PwD perspectives on and uses of disability parking. We introduce AccessParkCV, a new deep learning pipeline and annotated parking dataset for automatically detecting disability parking and inferring quality characteristics (e.g., width) from orthorectified aerial imagery. Our work contributes new qualitative understandings of disability parking, a novel detection pipeline and open dataset, and design guidelines for future tools.

Semi-automatic Room Reconstruction and Accessibility Scanning

To help improve the safety and accessibility of indoor spaces, researchers and health professionals have created assessment instruments that enable homeowners and trained experts to audit and improve homes. With advances in computer vision, augmented reality (AR), and mobile sensors, new approaches are now possible. We introduce RASSAR (Room Accessibility and Safety Scanning in Augmented Reality), a new proof-of-concept prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues using LiDAR + camera data, machine learning, and AR. We present an overview of the current RASSAR prototype and a preliminary evaluation in a single home.

AI-assisted Vision for Low Vision Sports Participation

Individuals with low vision (LV) can experience vision-related challenges when participating in sports, especially those with fast-moving objects. We introduce ARTennis, a prototype for wearable augmented reality (AR) that uses real-time computer vision (CV) to enhance the visual saliency of tennis balls. As an initial design, a red dot is placed over the tennis ball and four green arrows point at the ball, forming a crosshair. As AR and CV technologies continue to improve, we expect head-worn AR to broaden the inclusivity of sports such as tennis and basketball.
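To illustrate the crosshair design, here is a minimal sketch (a hypothetical helper, not the ARTennis implementation) that, given a detected ball center and radius in screen coordinates, computes the red dot position and the four green arrow tip positions that surround the ball; the `gap` factor is an assumed parameter controlling how far the arrows sit from the ball.

```python
def crosshair_overlay(cx: float, cy: float, r: float, gap: float = 1.5):
    """Sketch of the crosshair layout: a dot on the ball center plus
    four arrow tips offset by gap * r on each side, pointing inward.

    cx, cy -- detected ball center (screen coordinates)
    r      -- detected ball radius
    gap    -- assumed offset factor (how far arrows sit from the ball)
    """
    d = gap * r
    dot = (cx, cy)
    arrows = {
        "from_below": (cx, cy + d),  # arrow beneath the ball, pointing up
        "from_above": (cx, cy - d),  # arrow above the ball, pointing down
        "from_right": (cx + d, cy),  # arrow right of the ball, pointing left
        "from_left":  (cx - d, cy),  # arrow left of the ball, pointing right
    }
    return dot, arrows
```

In a real AR pipeline these positions would be recomputed each frame from the CV detector's output and rendered as screen-space overlay graphics.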

Real-time Captioning and Sound Awareness Support

With advances in wearable computing and machine learning, Leah Findlater and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smart watches and mobile phones (SoundWatch and SPECTRA). Throughout this work, we’ve engaged with over 250 DHH participants to help identify design opportunities, pain points, and to solicit feedback on our designs.

Project Sidewalk

Project Sidewalk’s mission is to transform how the world’s sidewalks are mapped and assessed using crowdsourcing, machine learning, and online satellite and streetscape imagery. Working with local community groups and governmental partners, we have deployed Project Sidewalk in 40 cities across nine countries, including the US, Mexico, Ecuador, the Netherlands, Switzerland, and New Zealand (with more to come). In total, Project Sidewalk users have contributed over 2 million data points—the largest crowdsourced sidewalk accessibility dataset in existence.


Related news