Codesigning Videoconferencing Tools for Small Groups with Mixed Hearing Status

June 12, 2023

CREATE students and faculty have published a new paper at CHI 2023, “‘Easier or Harder, Depending on Who the Hearing Person Is’: Codesigning Videoconferencing Tools for Small Groups with Mixed Hearing Status.”

Led by Human Centered Design and Engineering (HCDE) Ph.D. candidate Emma McDonnell and supported by CREATE, this work investigates how groups with both hearing and d/Deaf and hard of hearing (DHH) members could be better supported when using captions during videoconferences. 

Photo: Emma McDonnell, a white woman in her 20s with short red hair, freckles, and a warm smile, with a lush landscape and the Colosseum in the background.

Researchers recruited four groups to participate in a series of codesign sessions; codesign is a method that de-centers researchers’ priorities and seeks to empower participants to lead the development of new design ideas. In the study, participants reflected on their experiences using captioning, sketched and discussed ideas for technology that could help build accessible group norms, and then critiqued video prototypes the researchers created of their ideas.

One major finding from this research is that participants’ relationships with each other shape what kinds of accessibility support the group would benefit from.

For example, one group in the study consisted of cousins who had been close since childhood. Now in their mid-twenties, they found they did not have to actively plan for accessibility; they had their own ways of communicating and would stop and clarify if things broke down. On the other hand, a group of colleagues who work on technology for DHH people had many explicit norms they used to ensure communication accessibility. One participant, Blake, noted, “I was pretty emotional after the first meeting because it was just so inclusive.” These different approaches demonstrate that there is no one-size-fits-all approach to communication accessibility; people work together as a group to develop an approach that works for them.

This paper also contributes new priorities for the design of videoconferencing software. Participants focused on designing add-ons to videoconferencing systems that would better support their group in communicating accessibly. Their designs fell into four categories: 

  • Speaker Identity and Overlap: Having video conferencing tools identify speakers and warn groups when multiple people speak at once, since overlapping speech can’t be captioned accurately. Participants found this to be critical, and often missing, information.
  • Support for Behavioral Feedback: Building in ways for people to subtly notify conversation partners when they need to adjust their behavior. Participants wanted tools to flag when someone needs to adjust their camera, when captions contain critical errors, and when speech becomes too fast. They considered, but decided against, a general-purpose conversation breakdown warning.
  • Videoconferencing Infrastructure for Accessibility: Adding more features and configurable settings around conversational accessibility to videoconferencing platforms. Participants desired basic controls, such as color and font size, as well as the ability to preset and share group accessibility norms and customize behavior feedback tools. 
  • Sound Information: Providing more information about the sound happening during a conversation. Participants were excited about building sound recognition into captioning tools, and considered conveying speech volume via font weight, but decided it would be overwhelming and ambiguous. 

This research also has implications for broader captioning and videoconferencing design. While captioning tools are often designed for individual d/Deaf and hard of hearing people, the researchers argue that designers should instead consider the entire group having a conversation. This shift in focus revealed many ways that, beyond transcribing a conversation, technology could help groups communicate in ways that can be captioned more effectively. Many of these tools would be easy to build with current technology, such as letting users click on a confusing caption to request clarification. The research team hopes their work illuminates the need to pay attention to groups’ social context when studying captioning and offers videoconferencing platform designers a design approach to better support groups with mixed hearing abilities.

McDonnell is advised by CREATE Associate Directors Leah Findlater, HCDE, and Jon Froehlich, Paul G. Allen School of Computer Science & Engineering.