Rhetorical Figures, Grammatical Constructions, and Form/Meaning Alliances in Pretrained Language Models

Abstract

The speakers of a language respond to formal patterns—rhetorical figures—which have long been studied as tools of eloquence. More recent linguistics has produced construction grammars to analyze different sorts of patterns, from idioms to regular syntax, that compose discourse. We ask how well computational models, particularly Large Language Models (LLMs), can process these two kinds of patterns in the absence of the contextual and sensory information people have access to. Our pilot study tests seven LLMs on five well-investigated grammatical constructions and five rhetorical figures, using an authored sample, and analyzes the generated outputs statistically and qualitatively. Evaluating LLMs on both offers a testing ground for how well they process the formal and conceptual dimensions of rhetorical and grammatical patterns. The results suggest greater challenges for LLMs in processing salient rhetorical figures than arbitrary constructions: statistical frequency alone may fail to capture important formal dimensions of how people put words together.

Keywords

Computational Rhetoric, Construction Grammar, Figures, Pretrained Language Models, Large Language Models, Rhetorical Figures

How to Cite

Duvvoori, K. & Harris, R., (2026) “Rhetorical Figures, Grammatical Constructions, and Form/Meaning Alliances in Pretrained Language Models”, POROI 20(1): 8. doi: https://doi.org/10.17077/2151-2957.33942


Licence

CC BY-NC 4.0

Peer Review

This article has been peer reviewed.

File Checksums (MD5)

  • PDF: bab40ea7d1215e3610e9d31041e918df