Friday, April 17, 2020
Jesse Egbert is Associate Professor of Applied Linguistics at Northern Arizona University, where he received a Ph.D. in Applied Linguistics in 2014. Jesse specializes in register variation, particularly in academic and online writing. His research also explores issues related to quantitative linguistic research, including corpus design and representativeness, and methodological triangulation. He is General Editor of the international peer-reviewed journal Register Studies. He has authored or edited three books: Triangulating Methodological Approaches in Corpus Linguistic Research (Routledge, 2016), Register Variation Online (Cambridge, 2018), and Using Corpus Methods to Triangulate Linguistic Analysis (Routledge, 2019). He has published more than 60 papers in journals such as Language Variation and Change, Corpus Linguistics and Linguistic Theory, Journal of English Linguistics, Journal of Applied Statistics, and International Journal of Corpus Linguistics. He has been an active researcher in the area of statutory interpretation, with related publications in the BYU Law Review and The Routledge Handbook of Corpus Approaches to Discourse Analysis.
A Corpus Linguistic Approach to Quantifying Surplusage in Statutes
Statutory interpretation often relies on linguistic canons of construction, or widely accepted ‘rules of thumb’ for interpreting legal language. One such canon that is frequently appealed to is the ‘surplusage’ canon, which, according to Cooley (1988), states that “The courts must […] lean in favor of a construction which will render every word operative, rather than one which may make some idle and nugatory”. This legal canon has a counterpart in linguistics, known as the Maxim of Quantity: “Do not make your contribution more informative than is required” (Grice, 1975). In spite of these widely cited expectations, it is also generally accepted that “legal drafters often include redundant language on purpose to cover unforeseen gaps or simply for no good reason at all” (Jellum, 2008; see also Scalia & Garner, 2012). As a result, it is not uncommon for questions of statutory interpretation to hinge on whether a phrase or provision contains language that is redundant or superfluous. Yet there are currently no reliable methods for detecting possible surplusage in statutes. In this talk, I present a new method for determining whether there is linguistic evidence that words in binomials (e.g. care and support, liens and claims, null and void) violate the canon of surplusage (i.e. are semantically redundant), and should thus not be assigned independent meanings. The method applies linguistic analysis to a corpus (a large sample of naturally produced language) and is designed to reliably quantify the degree to which binomials are formulaic and semantically similar. This new method is applied to a set of binomials that have been the subject of dispute in previous legal cases. I will conclude by discussing this new method in relation to the role of linguistic data and judicial discretion in statutory interpretation.
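The abstract's measures are not published here, but one ingredient of any corpus-based test of binomial formulaicity is counting how often a pair occurs in its fixed order (e.g. null and void) versus reversed, relative to each word's independent frequency. The following is a minimal, hypothetical Python sketch of that counting step over a toy corpus; the function name and the toy text are illustrative assumptions, not the author's actual method or data.

```python
import re
from collections import Counter

def binomial_stats(text, a, b):
    """Count toy-corpus evidence that 'a and b' is formulaic:
    occurrences of the fixed-order pair, the reversed pair,
    and each word's independent frequency."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    pair = sum(1 for i in range(len(tokens) - 2)
               if tokens[i:i + 3] == [a, "and", b])
    reversed_pair = sum(1 for i in range(len(tokens) - 2)
                        if tokens[i:i + 3] == [b, "and", a])
    return {"pair": pair, "reversed": reversed_pair,
            "a": counts[a], "b": counts[b]}

corpus = ("The contract was declared null and void. "
          "Every lien and claim was released, and the null entry was void.")
print(binomial_stats(corpus, "null", "void"))
# → {'pair': 1, 'reversed': 0, 'a': 2, 'b': 2}
```

A strongly formulaic binomial would show a high fixed-order count, few or no reversals, and comparatively few independent uses of each word outside the pair; a real analysis would of course draw on a large corpus and add statistical association measures.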
Please check back soon for bio and abstract.
Tammy Gales is an Associate Professor of Linguistics and the Director of Research at the Institute for Forensic Linguistics, Threat Assessment, and Strategic Analysis at Hofstra University, New York. She currently serves on the Executive Committee for the International Association of Forensic Linguists. She received her Ph.D. in Linguistics from the University of California, Davis, and performed her dissertation research on threatening communications with the Academy Group, the world’s largest private behavioral analysis firm of retired Supervisory Special Agents from the FBI. Her research interests span the related fields of language and the law and forensic linguistics. Within language and the law, she applies corpus linguistic methods to the interpretation of meaning in legal statutes and to disputed meanings in trademark cases. Within forensic linguistics, she applies corpus and discourse analytic methods to the examination of authorial stance in threatening communications as well as to other contexts such as the cross-examination of victims of sexual assault and parole board hearings in which certain populations are disproportionately denied parole. She has presented her research at universities such as Georgetown, Yale, and Princeton; has trained law enforcement from agencies across Canada and the U.S.; and has worked on criminal and civil cases for both the prosecution and defense.
Please check back soon for Tammy Gales's abstract.
Thomas Rex Lee
Thomas R. Lee serves as Associate Chief Justice of the Utah Supreme Court. He is a graduate (with High Honors) of the University of Chicago Law School and a former law clerk to Justice Clarence Thomas and to Judge J. Harvie Wilkinson III. Before his appointment to the Utah Supreme Court, Lee was a full-time law professor at Brigham Young University. In his spare time, he teaches as a Lecturer at Brigham Young, Harvard, and the University of Chicago.
Lee has written extensively at the intersection of law and linguistics. His judicial opinions and academic scholarship advocate the use of linguistic theories and tools in interpreting the language of the law. His judicial and academic work on law and language has been cited in a range of federal and state courts. His contributions to this field will be synthesized in a forthcoming monograph, Law & Corpus Linguistics (Oxford Univ. Press 2020).
During his years as a full-time law professor, Lee developed a part-time appellate practice, arguing cases in federal courts throughout the country and in the United States Supreme Court. In 2004-05, Justice Lee served as Deputy Assistant Attorney General in the Civil Division of the U.S. Department of Justice.
Corpus Linguistics in the Courts: Critiques, Responses, and the Path Forward
Thomas R. Lee
Lawyers and jurists have long sought to discern the “ordinary meaning” of the language of the law. In interpreting statutes, constitutional provisions, and contracts, our courts claim to be applying the “plain” or “ordinary” meaning of legal language. But judicial tools for discerning such meaning have long fallen short. Dictionaries, etymology, and old-fashioned judicial intuition may each have a role, but none is sufficient on its own.
In the past decade a few judges have begun to utilize additional tools, borrowed from the field of linguistics, to sharpen this inquiry. In opinions in a few state supreme courts and federal courts of appeals, judges have proposed to use corpus linguistic tools of collocation analysis and concordance line analysis to assemble transparent evidence of ordinary usage of legal language. Because premises of “ordinary meaning” seem to turn on actual usage of language by ordinary people, judges have suggested that the law’s assessment of ordinary meaning should be informed by statistical analysis of actual language usage in naturally occurring samples of language—in corpora like the Corpus of Contemporary American English, the News on the Web Corpus, or the Corpus of Historical American English.
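The concordance line analysis mentioned above presents every occurrence of a node word with its immediate context (a KWIC, or "key word in context," display), which lets a judge inspect actual usage directly. As a rough illustration of what such a display contains, here is a minimal, hypothetical Python sketch; the function and sample sentence are assumptions for demonstration, not taken from any of the corpora or opinions discussed.

```python
def concordance(tokens, node, window=3):
    """Return simple KWIC (key word in context) lines showing each
    occurrence of a node word with `window` tokens of context."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{node}] {right}".strip())
    return lines

sample = "no vehicle may enter the park unless the vehicle is a bicycle".split()
for line in concordance(sample, "vehicle"):
    print(line)
# → no [vehicle] may enter the
# → park unless the [vehicle] is a bicycle
```

In actual corpus work the tokens would come from a corpus of millions of words, and the resulting lines would be sorted and coded for sense, but the underlying display is this simple: the disputed word, surrounded by its real contexts of use.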
This move has prompted a series of critiques and concerns. Some judges have suggested that there is a judicial ethics problem with a judge conducting his own corpus linguistic analysis without the benefit of expert witness testimony. Others have asserted that the data assembled from a corpus cannot, in any event, inform the “ordinary meaning” questions posed in the law. And some commentators have questioned the statistical or scientific relevance or salience of corpus analysis in law, suggesting the possible need for alternative approaches—the use of different corpora, or other means of empirical inquiry (such as the use of human-subject surveys).
This paper summarizes these developments and describes and responds to critiques of the corpus linguistics movement in the courts. It first explains that judges are as ethically free to use corpus linguistics tools to inquire into ordinary meaning as they are to consult various dictionaries or perform historical research into the original meaning of a provision of the Constitution. It then concedes that refinements in corpus methodology are needed to improve the utility of these methods in the law of interpretation, but notes that these tools fare better than any other set of tools used to date by judges. In conclusion, the paper highlights misunderstandings in the empirical criticisms of the use of corpus methods, as well as shortcomings in the proposed use of human-subject surveys in this field.
Please check back soon for more information on Language and Law 2020 forum speakers.