Resolving Implicit References in Instructional Texts

Proceedings of the 2nd Workshop on Computational Approaches to Discourse, pages 58–71, Punta Cana, Dominican Republic and Online. Association for Computational Linguistics, November 2021.

Abstract

The usage of (co-)referring expressions in discourse contributes to the coherence of a text. However, text comprehension can be difficult when referring expressions are non-verbalized and have to be resolved in the discourse context. In this paper, we propose a novel dataset of such implicit references, which we automatically derive from insertions of references in collaboratively edited how-to guides. Our dataset consists of 6,014 instances, making it one of the largest datasets of implicit references and a useful starting point to investigate misunderstandings caused by underspecified language. We test different methods for resolving implicit references in our dataset based on the Generative Pre-trained Transformer model (GPT) and compare them to heuristic baselines. Our experiments indicate that GPT can accurately resolve the majority of implicit references in our data. Finally, we investigate remaining errors and examine human preferences regarding different resolutions of an implicit reference given the discourse context.