Despite many recent advances in automatic speech recognition (ASR), linguists and language communities engaged in language documentation projects continue to face the obstacle of the “transcription bottleneck”. Researchers in NLP typically do not distinguish between widely spoken languages that currently happen to have few training resources and endangered languages that will never have abundant data. As a result, we often fail to thoroughly explore when ASR is helpful for language documentation, what architectures work best for the sorts of languages that are in need of documentation, and how data can be collected and organized to produce optimal results. In this talk I describe several projects that attempt to bridge the gap between the promise of ASR for language documentation and the reality of using this technology in real-world settings.
Dr. Emily Prud’hommeaux is the Gianinno Family Sesquicentennial Assistant Professor in the Department of Computer Science at Boston College. She received her BA (Harvard) and MA (UCLA) in Linguistics, and her PhD in Computer Science and Engineering (OHSU/OGI). Funded by the US National Science Foundation, the US National Institutes of Health, and the Computing Research Association, her research focuses on natural language processing in low-resource settings, with a particular emphasis on endangered languages and the language of individuals with conditions that affect communication and cognition.