From theory to data

Semantic theory has achieved remarkable success in characterizing the compositional structure of natural language meaning. Through decades of careful theoretical work, semanticists have developed elegant formal systems that capture how complex meanings arise from the systematic combination of simpler parts. These theories explain two fundamental types of judgments that speakers make: acceptability judgments about whether strings are well-formed, and inference judgments about what follows from what speakers say.

The field now stands at an exciting juncture. The rise of large-scale experimental methods and computational modeling opens new opportunities to test and refine these theoretical insights against rich behavioral data. The challenge—and opportunity—is to connect our elegant formal theories to the messy, gradient patterns we observe when hundreds of speakers make thousands of judgments. How can we maintain the theoretical insights that formal semantics has achieved while extending them to account for this new empirical richness?

Probabilistic Dynamic Semantics (PDS) aims to provide a systematic bridge between these theoretical insights and behavioral data. It takes the compositional analyses developed using traditional Montagovian methods and maps them to probabilistic models that can be quantitatively evaluated against experimental results. The goal is not to replace traditional semantics but to extend its reach, allowing us to test theoretical predictions at unprecedented scale while maintaining formal rigor.

Traditional Semantic Methodology: Foundations of Success

Semanticists study the systematic relationships between linguistic expressions and the inferences they support. The field’s methodology centers on two types of judgments:

Acceptability judgments assess whether strings are well-formed relative to a language and a particular context of use (see Schütze 2016 and references therein). For example, in a context where a host asks what a guest wants with coffee, (1) is clearly acceptable, while (2) is not (Sprouse and Villata 2021):

  1. What would you like with your coffee?
  2. #What would you like and your coffee?

Inference judgments assess relationships between strings (see Davis and Gillon 2004). When speakers hear (3), they typically infer (4) (White 2019):

  3. Jo loved that Mo left.
  4. Mo left.

Observational Adequacy

A core desideratum for semantic theories is observational adequacy (Chomsky 1964): for any string \(s \in \Sigma^*\), we should predict how acceptable speakers find it in context, and for acceptable strings \(s, s'\), we should predict whether speakers judge \(s'\) inferable from \(s\). Achieving observational adequacy requires mapping vocabulary elements to abstractions that predict judgments parsimoniously.
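As a toy illustration of this desideratum, the two prediction problems can be phrased as functions over strings. Everything here is illustrative scaffolding, not part of any actual semantic theory: the names `toy_acceptability` and `toy_inference` and their placeholder logic are assumptions made for the sketch.

```python
from typing import Callable

Context = str
String = str

# Predict how acceptable speakers find string s in context c
# (a gradient value in [0, 1] rather than a binary verdict).
Acceptability = Callable[[String, Context], float]

# Predict how strongly speakers judge s_prime inferable from s.
Inference = Callable[[String, String], float]

def toy_acceptability(s: String, c: Context) -> float:
    """Placeholder: a real theory derives this from a grammar.
    Here we just treat the '#' diacritic as marking unacceptability."""
    return 0.0 if "#" in s else 1.0

def toy_inference(s: String, s_prime: String) -> float:
    """Placeholder: a real theory derives this compositionally.
    Here we use naive substring matching as a stand-in."""
    return 1.0 if s_prime in s else 0.0
```

The point of the sketch is only the shape of the mapping: observational adequacy asks for functions of exactly these types, defined over all of \(\Sigma^*\), whose outputs match speakers' judgments.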

These abstractions may be discrete or continuous, simple or richly structured. Through careful analysis of consistent inference patterns, semanticists have identified powerful generalizations. For instance, examining predicates like love, hate, be surprised, and know, theorists observed that all of them give rise to inferences about their complement clauses that survive under negation and questioning. This led to positing a shared property, factivity, that predicts systematic inferential behavior across diverse predicates (Kiparsky and Kiparsky 1970; cf. Karttunen 1971).
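To make the generalization concrete, here is a minimal sketch, assuming a hypothetical lexicon in which each predicate carries a single boolean flag for the posited property. The `LEXICON` entries and the `inference_pattern` function are invented for illustration; the flag simply encodes the observation that factive predicates license the complement-clause inference in plain, negated, and questioned frames alike, while non-factives like think do not.

```python
# Toy lexicon: the `factive` flag stands in for the shared property
# posited by Kiparsky and Kiparsky (1970). Entries are illustrative.
LEXICON = {
    "love":         {"factive": True},
    "hate":         {"factive": True},
    "be surprised": {"factive": True},
    "know":         {"factive": True},
    "think":        {"factive": False},
    "say":          {"factive": False},
}

def inference_pattern(predicate: str) -> dict:
    """For 'x PRED that S', does each frame license the inference S?

    e.g. 'Jo loved that Mo left'        => 'Mo left'
         "Jo didn't love that Mo left"  => still 'Mo left'
         'Did Jo love that Mo left?'    => still 'Mo left'
    """
    factive = LEXICON[predicate]["factive"]
    return {
        "plain": factive,       # unembedded assertion
        "negated": factive,     # inference projects through negation
        "questioned": factive,  # inference projects through questioning
    }
```

A single abstraction thus predicts three judgment patterns per predicate, which is exactly the kind of parsimony the text describes: one lexical property in place of many predicate-by-predicate stipulations.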

Descriptive Adequacy and Theoretical Depth

Beyond observational adequacy lies descriptive adequacy: capturing data “in terms of significant generalizations that express underlying regularities in the language” (Chomsky 1964, 63). This drive for deeper explanation motivates the field’s emphasis on parsimony and formal precision.

The history of generative syntax illustrates two approaches to achieving descriptive adequacy:

  1. Analysis-driven: Start with observationally adequate analyses in expressive formalisms, then extract generalizations as constraints (Baroni 2022).
  2. Hypothesis-driven: Begin with constrained formalisms, such as CCG (Steedman 2000) or minimalist grammars (Stabler 1997), and test their empirical coverage.

The hypothesis-driven approach, which PDS adopts for semantics, aims to delineate phenomena through representational constraints. This becomes crucial when developing models that both accord with theoretical assumptions and can be evaluated quantitatively.

The Power and Natural Boundaries of Traditional Methods

This methodology has yielded profound insights into semantic composition, scope phenomena, discourse dynamics, and the semantics-pragmatics interface more generally. By focusing on carefully constructed examples and native speaker intuitions, theorists have uncovered deep regularities in how meaning is constructed and interpreted.

Yet every methodology has natural boundaries. Traditional semantic methods excel at identifying patterns and building theories but face practical constraints when we ask:

  • How well do our generalizations, based on examining 5-10 predicates, extend to the thousands of predicates in the lexicon?
  • What factors beyond semantic knowledge influence the judgments we observe?
  • How exactly does abstract semantic knowledge produce concrete behavioral responses?

References

Baroni, Marco. 2022. “On the Proper Role of Linguistically Oriented Deep Net Analysis in Linguistic Theorising.” In Algebraic Structures in Natural Language. CRC Press.
Chomsky, Noam. 1964. “Current Issues in Linguistic Theory.” In The Structure of Language, edited by J. Fodor and J. Katz, 50–118. New York: Prentice Hall.
———. 1973. “Conditions on Transformations.” In A Festschrift for Morris Halle, edited by S. Anderson and P. Kiparsky, 232–86. New York: Holt, Rinehart, & Winston.
Davis, Steven, and Brendan S. Gillon. 2004. Semantics: A Reader. New York: Oxford University Press.
Karttunen, Lauri. 1971. “Some Observations on Factivity.” Paper in Linguistics 4 (1): 55–69. https://doi.org/10.1080/08351817109370248.
Kiparsky, Paul, and Carol Kiparsky. 1970. “Fact.” In Progress in Linguistics, 143–73. De Gruyter Mouton. https://doi.org/10.1515/9783111350219.143.
Pavlick, Ellie. 2023. “Symbols and Grounding in Large Language Models.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 381 (2251). https://doi.org/10.1098/rsta.2022.0041.
Ross, John Robert. 1967. “Constraints on Variables in Syntax.” PhD thesis, Massachusetts Institute of Technology.
Schütze, Carson T. 2016. The Empirical Base of Linguistics. Classics in Linguistics 2. Berlin: Language Science Press. https://doi.org/10.17169/langsci.b89.100.
Sprouse, Jon, and Sandra Villata. 2021. “Island Effects.” In The Cambridge Handbook of Experimental Syntax, edited by Grant Goodall, 227–57. Cambridge Handbooks in Language and Linguistics. Cambridge University Press. https://doi.org/10.1017/9781108569620.010.
Stabler, Edward. 1997. “Derivational Minimalism.” In Logical Aspects of Computational Linguistics, edited by Christian Retoré, 68–95. Lecture Notes in Computer Science. Berlin, Heidelberg: Springer. https://doi.org/10.1007/BFb0052152.
Steedman, Mark. 2000. The Syntactic Process. Cambridge: MIT Press.
White, Aaron Steven. 2019. “Lexically Triggered Veridicality Inferences.” In Handbook of Pragmatics, 22:115–48. John Benjamins Publishing Company. https://doi.org/10.1075/hop.22.lex4.