The experimental turn

The traditional methodology’s success has created a foundation solid enough to support exciting new extensions. Experimental semantics brings the tools of behavioral experimentation to bear on questions about meaning, allowing us to test and refine theoretical insights at unprecedented scale.

Scaling semantic investigation

Where traditional methods might examine a handful of predicates, experimental approaches can investigate entire lexical domains. To extend our example involving the verb love: English has thousands of clause-embedding predicates like it, each potentially varying in its inferential properties. We can now test whether generalizations based on canonical examples hold across this vast lexicon.

The MegaAttitude project (White and Rawlins 2016, 2018, 2020; White et al. 2018; An and White 2020; Moon and White 2020; Kane, Gantt, and White 2022) is one example of this approach. The project collects inference judgments for hundreds of predicates across multiple contexts and inference types. This scale reveals patterns that are difficult to detect, let alone evaluate, with traditional methods: subtle distinctions between near-synonyms, unexpected clusters of predicates, and systematic variation across semantic domains.
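To make the scale concrete, here is a minimal sketch of how predicate clusters might be surfaced from a MegaAttitude-style dataset. The file name and column names (participant, predicate, frame, response) are assumptions for illustration, not the project's actual release format.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical long-format judgment data: one row per (participant, predicate,
# frame) trial with a response on a 1-7 scale. Column names are assumed.
df = pd.read_csv("judgments.csv")

# Average over participants to get one rating per predicate-frame cell, then
# pivot so that each predicate becomes a vector of per-frame mean ratings.
profile = (
    df.groupby(["predicate", "frame"])["response"]
      .mean()
      .unstack("frame")
      .dropna(axis=1)  # keep only frames rated for every predicate
)

# Hierarchically cluster predicates by the similarity of their frame profiles.
distances = pdist(profile.values, metric="correlation")
tree = linkage(distances, method="average")
clusters = fcluster(tree, t=10, criterion="maxclust")

# Print a few members of each cluster; near-synonyms that pattern together
# (and unexpected bedfellows) show up as shared cluster membership.
for cluster_id in sorted(set(clusters)):
    members = profile.index[clusters == cluster_id]
    print(cluster_id, list(members)[:5])
```

Nothing in this pipeline is specific to attitude predicates; the same aggregate, pivot, and cluster pattern applies to any judgment dataset large enough to support it.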

Teasing apart contributing factors

Experimental methods also allow us to investigate the rich array of factors that influence inference judgments:

  • Semantic knowledge: The core meanings of expressions
  • World knowledge: Prior beliefs about plausibility
  • Contextual factors: The discourse context and QUD
  • Individual differences: Variation in how speakers interpret expressions
  • Response strategies: How participants use rating scales

Rather than viewing these as confounds, we can see them as windows into the cognitive processes underlying semantic interpretation. For instance, Degen and Tonhauser (2021) systematically manipulated world knowledge to show how prior beliefs modulate the strength of factive inferences, revealing the interplay between semantic and pragmatic factors.
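One standard way to tease these factors apart is a mixed-effects regression that estimates their contributions simultaneously. The sketch below is in the spirit of Degen and Tonhauser's (2021) design, not a reimplementation of it: the file and column names (participant, item, predicate, prior_belief, rating) are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: each row is one projection rating, paired
# with a norming-based prior-belief score for the embedded content.
df = pd.read_csv("projection_judgments.csv")

# The fixed effect of prior_belief estimates how world knowledge modulates
# ratings, its interaction with predicate captures predicate-specific
# (semantic) contributions, and random intercepts for participants absorb
# individual differences in scale use.
model = smf.mixedlm(
    "rating ~ prior_belief * predicate",
    data=df,
    groups=df["participant"],
)
result = model.fit()
print(result.summary())
```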

Making linking hypotheses explicit

Perhaps most importantly, experimental approaches force us to make explicit what traditional methods leave implicit: the link between semantic representations and behavioral responses (Jasbi, Waldon, and Degen 2019; Waldon and Degen 2020; Phillips et al. 2021). When we say speakers judge that an inference follows, what cognitive processes produce that judgment? How do abstract semantic representations map onto responses on a rating scale?

This is not merely a methodological detail; it is a substantive theoretical question. Different linking hypotheses make different predictions about response patterns, allowing us to test not just our semantic theories but also our assumptions about how those theories connect to behavior. Even if our ultimate interest is in characterizing speakers' semantic representations, we cannot ignore how those representations map onto responses in a given task.
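As an illustration, here are two toy linking functions for a seven-point rating task; the latent quantities, cutpoints, and noise level are all assumptions chosen for exposition, not a published model.

```python
import numpy as np
from scipy.stats import logistic

# Cutpoints dividing a latent scale into seven ordered response categories.
CUTPOINTS = np.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5])

def graded_link(theta):
    """Cumulative-logit link: a graded latent inference strength theta yields
    a unimodal distribution over the seven response options."""
    cdf = logistic.cdf(CUTPOINTS - theta)
    return np.diff(np.concatenate(([0.0], cdf, [1.0])))

def categorical_link(p_infer, noise=0.1):
    """Mixture link: each speaker either draws the inference (responds 7) or
    does not (responds 1), plus uniform response noise. Unlike the graded
    link, this predicts bimodal response distributions."""
    endpoints = np.zeros(7)
    endpoints[0], endpoints[6] = 1.0 - p_infer, p_infer
    return (1.0 - noise) * endpoints + noise / 7.0

print("graded:     ", np.round(graded_link(0.8), 2))
print("categorical:", np.round(categorical_link(0.7), 2))
```

The same latent commitment, run through different links, yields visibly different response distributions, which is exactly what lets experiments adjudicate between them.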

References

An, Hannah, and Aaron White. 2020. “The Lexical and Grammatical Sources of Neg-Raising Inferences.” Proceedings of the Society for Computation in Linguistics 3 (1): 220–33. https://doi.org/10.7275/yts0-q989.
Degen, Judith, and Judith Tonhauser. 2021. “Prior Beliefs Modulate Projection.” Open Mind 5 (September): 59–70. https://doi.org/10.1162/opmi_a_00042.
Jasbi, Masoud, Brandon Waldon, and Judith Degen. 2019. “Linking Hypothesis and Number of Response Options Modulate Inferred Scalar Implicature Rate.” Frontiers in Psychology 10 (February). https://doi.org/10.3389/fpsyg.2019.00189.
Kane, Benjamin, Will Gantt, and Aaron Steven White. 2022. “Intensional Gaps: Relating Veridicality, Factivity, Doxasticity, Bouleticity, and Neg-Raising.” Semantics and Linguistic Theory 31 (0): 570–605. https://doi.org/10.3765/salt.v31i0.5137.
Moon, Ellise, and Aaron White. 2020. “The Source of Nonfinite Temporal Interpretation.” In Proceedings of the 50th Annual Meeting of the North East Linguistic Society, edited by Mariam Asatryan, Yixiao Song, and Ayana Whitmal, 3:11–24. Amherst: GLSA Publications.
Phillips, Colin, Phoebe Gaston, Nick Huang, and Hanna Muller. 2021. “Theories All the Way Down: Remarks on Theoretical and Experimental Linguistics.” In The Cambridge Handbook of Experimental Syntax, edited by Grant Goodall, 587–616. Cambridge Handbooks in Language and Linguistics. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108569620.023.
Waldon, Brandon, and Judith Degen. 2020. “Modeling Behavior in Truth Value Judgment Task Experiments.” In Proceedings of the Society for Computation in Linguistics 2020, edited by Allyson Ettinger, Gaja Jarosz, and Joe Pater, 238–47. New York, New York: Association for Computational Linguistics. https://aclanthology.org/2020.scil-1.29/.
White, Aaron Steven, and Kyle Rawlins. 2016. “A Computational Model of S-Selection.” Semantics and Linguistic Theory 26 (0): 641–63. https://doi.org/10.3765/salt.v26i0.3819.
———. 2018. “The Role of Veridicality and Factivity in Clause Selection.” In NELS 48: Proceedings of the Forty-Eighth Annual Meeting of the North East Linguistic Society, edited by Sherry Hucklebridge and Max Nelson, 48:221–34. University of Iceland: GLSA (Graduate Linguistics Student Association), Department of Linguistics, University of Massachusetts.
———. 2020. “Frequency, Acceptability, and Selection: A Case Study of Clause-Embedding.” Glossa: A Journal of General Linguistics 5 (1). https://doi.org/10.5334/gjgl.1001.
White, Aaron Steven, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. “Lexicosyntactic Inference in Neural Models.” In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4717–24. Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1501.