Published on July 31, 2015 | by LawNews
LLT Lab Presents Paper on Argumentation Mining at ASAIL 2015 Workshop
Professor Vern R. Walker, Parisa Bagheri ’15, and Andrew J. Lauria ’15, all members of Hofstra Law’s Research Laboratory for Law, Logic and Technology (LLT Lab), presented a paper at the 2015 Workshop on Automated Detection, Extraction and Analysis of Semantic Information in Legal Texts (ASAIL 2015), held in conjunction with the 15th International Conference on Artificial Intelligence and Law (ICAIL), on June 8-12, 2015, in San Diego.
The paper, “Argumentation Mining from Judicial Decisions: The Attribution Problem and the Need for Legal Discourse Models,” discusses the “attribution problem” lawyers encounter when trying to understand judicial decisions — that is, the problem of determining who believes a stated proposition to be true. This problem is particularly difficult for natural-language processing software, which must determine who treats an expressed proposition as (probably) true, or who relies upon it as support for an argument.
The problem arises for judicial decisions because the author of the decision (the judge) does not always believe the propositional content expressed by every sentence she writes in the decision. For example, a judge might write the sentence “The varicella vaccine can cause neuropathy in humans,” but the sentence might report an allegation of a party, or the testimony of an expert witness, or the text of a document exhibit, as well as — or even in contrast to — a conclusion or finding of fact by the judge herself.
Solving this attribution problem requires development of an adequate “legal discourse model” — that is, a data structure representing the actors in a legal proceeding, together with the argument-related information about those actors that is important for understanding the meaning of a judicial decision. The paper discusses some basic content for such a legal discourse model that would be useful in making attribution determinations, drawing upon vaccine-injury compensation decisions analyzed by the LLT Lab.
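To make the idea of a legal discourse model concrete, the core notion — tracking both who expressed a proposition and who accepts it as true — can be sketched as a simple data structure. The classes, roles, and names below are purely illustrative assumptions for this article, not the representation used in the paper:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a minimal legal discourse model.
# All class and attribute names here are illustrative, not the paper's schema.

@dataclass
class Actor:
    """A participant in the legal proceeding (judge, party, expert, etc.)."""
    name: str
    role: str

@dataclass
class Proposition:
    """A proposition expressed in a decision, with its attribution."""
    text: str
    source: Actor                       # who expressed the proposition
    accepted_by: List[Actor] = field(default_factory=list)  # who treats it as true

# The same sentence can appear as expert testimony without (yet) being
# a finding of fact by the judge:
judge = Actor("Special Master", "judge")
expert = Actor("Expert Witness A", "expert witness")  # hypothetical actor

claim = Proposition(
    "The varicella vaccine can cause neuropathy in humans.",
    source=expert,
)

# Only if the decision-writer adopts the proposition does it become a finding:
claim.accepted_by.append(judge)
```

The key design point is that the proposition's text alone is never enough: the model must separately record its source and which actors, if any, accept it — which is exactly the distinction the attribution problem requires software to draw.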
The paper also argues that adequate development of a legal discourse model requires empirical investigation of actual judicial decisions.