
Friday, April 17, 2020

9:00 AM - 12:00 PM


8:30 Check-in and light breakfast 

9:00 Opening remarks – Prof. Scott Jarvis, University of Utah 

Speaker: Dr. Jesse Egbert, Northern Arizona University 

A Corpus Linguistic Approach to Quantifying Surplusage in Statutes


Statutory interpretation often relies on linguistic canons of construction, or widely accepted ‘rules of thumb’ for interpreting the language. One such canon that is frequently appealed to is the ‘surplusage’ canon which, according to Cooley (1988), states that “The courts must […] lean in favor of a construction which will render every word operative, rather than one which may make some idle and nugatory”. This legal canon has a counterpart in linguistics, known as the Maxim of Quantity: “Do not make your contribution more informative than is required” (Grice, 1975). In spite of these widely cited expectations, it is also generally accepted that “legal drafters often include redundant language on purpose to cover unforeseen gaps or simply for no good reason at all” (Jellum, 2008; see also Scalia & Garner, 2012). As a result, it is not uncommon for questions of statutory interpretation to hinge on whether a phrase or provision contains language that is redundant or superfluous. In spite of this, there are currently no reliable methods for detecting possible surplusage in statutes. In this talk, I present a new method for determining whether there is linguistic evidence that words in binomials (e.g. care and support, liens and claims, null and void) violate the canon of surplusage (i.e. are semantically redundant), and should thus not be assigned independent meanings. This method relies on linguistic methods—applied to a corpus (a large sample of naturally produced language)—and is designed to reliably quantify the degree to which binomials are formulaic and semantically similar. This new method is applied to a set of binomials that have been the subject of dispute in previous legal cases. I will conclude by discussing this new method in relation to the role of linguistic data and judicial discretion in statutory interpretation.

Speaker: Prof. Lawrence Solum, Georgetown University  

Please check back soon for title and abstract. 

Speaker: Dr. Tammy Gales, Hofstra University 

Please check back soon for title and abstract. 

Speaker: Associate Chief Justice Thomas Lee, Utah Supreme Court 

Corpus Linguistics in the Courts: Critiques, Responses, and the Path Forward


Lawyers and jurists have long sought to discern the “ordinary meaning” of the language of the law. In interpreting statutes, constitutional provisions, and contracts, our courts claim to be applying the “plain” or “ordinary” meaning of legal language. But judicial tools for discerning such meaning have long fallen short. The judge’s traditional toolbox includes dictionaries, etymology, and old-fashioned judicial intuition, each of which may have a role to play but all of which ultimately fall short.

In the past decade a few judges have begun to utilize additional tools, borrowed from the field of linguistics, to sharpen this inquiry. In opinions in a few state supreme courts and federal courts of appeals, judges have proposed to utilize corpus linguistic tools of collocation analysis and concordance line analysis to assemble transparent evidence of ordinary usage of legal language. Because premises of “ordinary meaning” seem to turn on actual usage of language by ordinary people, judges have suggested that the law’s assessment of ordinary meaning should be informed by statistical analysis of actual language usage in naturally occurring samples of language—in corpora like the Corpus of Contemporary American English, the News on the Web Corpus, or the Corpus of Historical American English.

This move has prompted a series of critiques and concerns. Some judges have suggested that there is a judicial ethics problem with a judge conducting his own corpus linguistic analysis without the benefit of expert witness testimony. Others have asserted that the data assembled from a corpus cannot, in any event, inform the “ordinary meaning” questions posed in the law. And some commentators have questioned the statistical or scientific relevance or salience of corpus analysis in law, suggesting the possible need for alternative approaches—the use of different corpora, or other means of empirical inquiry (such as the use of human-subject surveys).

This paper summarizes these developments and describes and responds to critiques of the corpus linguistics movement in the courts. It first explains that judges are as ethically free to use corpus linguistics tools to inquire into ordinary meaning as they are to consult various dictionaries or perform historical research into the original meaning of a provision of the Constitution. It then concedes that refinements in corpus methodology are needed to improve on the utility of these methods in the law of interpretation, but notes that these tools fare better than any other set of tools used to date by judges. In conclusion, the paper highlights misunderstandings in the empirical criticisms of the use of corpus methods, as well as shortcomings in the proposed use of human-subject surveys in this field.

Closing remarks – Prof. William Eggington, Brigham Young University 

12:00 noon adjourn








Last Updated: 2/13/20