
Jon E. Froehlich, Associate Director

My research focuses on designing, building, and evaluating interactive technology that addresses high-value social issues such as environmental sustainability, computer accessibility, and personalized health and wellness.

Affiliations

Professor, Allen School of Computer Science & Engineering

Research highlights

Semi-automatic Room Reconstruction and Accessibility Scanning

To help improve the safety and accessibility of indoor spaces, researchers and health professionals have created assessment instruments that enable homeowners and trained experts to audit and improve homes. With advances in computer vision, augmented reality (AR), and mobile sensors, new approaches are now possible. We introduce RASSAR (Room Accessibility and Safety Scanning in Augmented Reality), a new proof-of-concept prototype for semi-automatically identifying, categorizing, and localizing indoor accessibility and safety issues using LiDAR + camera data, machine learning, and AR. We present an overview of the current RASSAR prototype and a preliminary evaluation in a single home.
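To make the semi-automatic auditing idea concrete, here is a minimal sketch of one plausible step: checking the estimated mounting height of detected fixtures against a reachable-range rule. This is illustrative only; the DetectedObject structure, labels, and thresholds are hypothetical (loosely inspired by common accessibility guidelines) and are not RASSAR's published pipeline.

```python
# Hypothetical audit step: flag fixtures whose estimated height
# (e.g., from LiDAR depth data) falls outside a reachable range.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str       # e.g., "light_switch", from a vision model (assumed)
    height_m: float  # estimated height above the floor, in meters

# Illustrative reachable-range rule, roughly echoing ADA side-reach limits.
REACH_RANGE_M = {"light_switch": (0.38, 1.22), "thermostat": (0.38, 1.22)}

def audit(objects: list[DetectedObject]) -> list[str]:
    """Return a human-readable issue for each out-of-range fixture."""
    issues = []
    for obj in objects:
        lo, hi = REACH_RANGE_M.get(obj.label, (0.0, float("inf")))
        if not (lo <= obj.height_m <= hi):
            issues.append(f"{obj.label} at {obj.height_m:.2f} m is outside "
                          f"the {lo:.2f}-{hi:.2f} m reachable range")
    return issues

print(audit([DetectedObject("light_switch", 1.45),
             DetectedObject("thermostat", 1.10)]))
```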

AI-assisted Vision for Low Vision Sports Participation

Individuals with low vision (LV) can experience vision-related challenges when participating in sports, especially those with fast-moving objects. We introduce ARTennis, a wearable augmented reality (AR) prototype that uses real-time computer vision (CV) to enhance the visual saliency of tennis balls. As an initial design, a red dot is placed over the tennis ball with four green arrows pointing at it, forming a crosshair. As AR and CV technologies continue to improve, we expect head-worn AR to broaden the inclusivity of sports such as tennis and basketball.
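The sketch below approximates the described visualization (red dot plus four inward-pointing green arrows) on a plain image with OpenCV. ARTennis itself runs on head-worn AR hardware, and its ball detector is out of scope here, so the ball center is passed in directly; the function name and parameters are illustrative assumptions.

```python
# Draw the described saliency cue at a given ball position.
import cv2
import numpy as np

def draw_saliency_cue(frame: np.ndarray, center: tuple[int, int],
                      gap: int = 30, length: int = 25) -> np.ndarray:
    cx, cy = center
    cv2.circle(frame, (cx, cy), 6, (0, 0, 255), -1)  # red dot (BGR order)
    # Four green arrows pointing inward at the ball, forming a crosshair.
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        tail = (cx + dx * (gap + length), cy + dy * (gap + length))
        tip = (cx + dx * gap, cy + dy * gap)
        cv2.arrowedLine(frame, tail, tip, (0, 255, 0), 2, tipLength=0.4)
    return frame

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
cv2.imwrite("crosshair.png", draw_saliency_cue(frame, (320, 240)))
```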

Real-time captioning and sound awareness support

With advances in wearable computing and machine learning, Leah Findlater and I have been investigating new opportunities for real-time captioning and sound awareness support for people who are deaf/Deaf and hard of hearing (DHH). Our work spans three primary areas: real-time captioning in augmented reality and wearables (ARCaptions), sound awareness support in the “smart home” (HomeSound), and real-time sound identification on smart watches (SoundWatch, website forthcoming). Throughout this work, we’ve engaged with over 250 DHH participants to identify design opportunities and pain points and to solicit feedback on our designs.
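To illustrate the classify-a-short-audio-window idea behind real-time sound identification, here is a minimal sketch using Google's open YAMNet model from TensorFlow Hub. YAMNet is a stand-in for demonstration; it is not the model these projects ship, and the synthetic waveform stands in for a real microphone buffer.

```python
# Classify a one-second audio window with a pretrained sound classifier.
import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/yamnet/1")

# YAMNet expects mono float32 audio at 16 kHz; synthetic noise used here.
waveform = np.random.uniform(-1, 1, 16000).astype(np.float32)
scores, embeddings, spectrogram = model(waveform)

# Map the top-scoring class index to a human-readable label.
class_map = model.class_map_path().numpy().decode("utf-8")
with tf.io.gfile.GFile(class_map) as f:
    names = [row["display_name"] for row in csv.DictReader(f)]
mean_scores = scores.numpy().mean(axis=0)  # average over time frames
print("Predicted sound:", names[int(mean_scores.argmax())])
```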

Project Sidewalk

Project Sidewalk’s mission is to transform how the world’s sidewalks are mapped and assessed using crowdsourcing, machine learning, and online satellite and streetscape imagery. Working with local community groups and governmental partners, we have deployed Project Sidewalk in 20 cities across seven countries, including the US, Mexico, Ecuador, the Netherlands, Switzerland, and New Zealand (with more to come). In total, Project Sidewalk users have contributed over 1.5 million data points—the largest crowdsourced sidewalk accessibility dataset in existence.
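The crowdsourced labels are openly accessible. The sketch below pulls labels for a small bounding box from the Seattle deployment's public API; treat the exact URL, parameter names, and response fields as assumptions that may have changed since this was written.

```python
# Fetch crowdsourced accessibility labels inside a bounding box and
# tally them by label type (e.g., CurbRamp, SurfaceProblem).
import requests

BASE = "https://sidewalk-sea.cs.washington.edu/v2/access/attributes"
params = {  # a small bounding box in downtown Seattle (illustrative)
    "lat1": 47.615, "lng1": -122.332,
    "lat2": 47.620, "lng2": -122.327,
}
resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()
features = resp.json().get("features", [])  # GeoJSON FeatureCollection

counts: dict[str, int] = {}
for f in features:
    label = f.get("properties", {}).get("label_type", "Unknown")
    counts[label] = counts.get(label, 0) + 1
print(counts)
```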


Related news