Abstract
The question of what kinds of linguistic information are encoded in different layers
of Transformer-based language models is of considerable interest to the NLP community.
Existing work, however, has overwhelmingly focused on word-level representations and
on encoder-only language models trained with a masked-token objective.
In this paper, we present experiments with semantic structural probing,
a method for studying sentence-level representations
by finding a subspace of the embedding space that yields
suitable task-specific pairwise distances between data points.
We apply our method to language models from different families (encoder-only, decoder-only,
encoder-decoder) and of different sizes in the context of two tasks, semantic textual similarity
and natural-language inference. We find that model families differ substantially in their
performance and layer dynamics, but that the results are largely model-size invariant.
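To make the probing setup concrete, the following is a minimal sketch (not the paper's actual code) of a structural probe of the kind described above: a rank-k linear map B is trained so that squared distances between projected sentence embeddings approximate gold task-specific distances (e.g. one minus a normalized STS similarity score). All identifiers, shapes, and the placeholder data are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
n_pairs, emb_dim, probe_rank = 512, 768, 64

# Placeholder tensors standing in for frozen sentence embeddings from one
# model layer and for gold pairwise distances derived from task annotations.
h1 = torch.randn(n_pairs, emb_dim)
h2 = torch.randn(n_pairs, emb_dim)
gold_dist = torch.rand(n_pairs)

# The probe: a single learnable projection into a rank-k subspace.
B = torch.nn.Parameter(torch.randn(emb_dim, probe_rank) * 0.01)
opt = torch.optim.Adam([B], lr=1e-3)

def probed_sq_distance(a, b):
    # Squared L2 distance after projecting the embedding difference through B.
    diff = (a - b) @ B
    return (diff ** 2).sum(dim=-1)

for step in range(200):
    # L1 loss between probed squared distances and gold task distances.
    loss = torch.abs(probed_sq_distance(h1, h2) - gold_dist).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Probe quality on a given layer can then be summarized, for instance, as the rank correlation between probed and gold distances on held-out sentence pairs.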