The LLT Lab: Scientific Research at Hofstra Law School

Vern R. Walker, Professor of Law and Director of the Research Laboratory for Law, Logic and Technology, Hofstra Law School

Hofstra Law School has created a new kind of research institution: a research laboratory for law modeled on research laboratories in the sciences. The Research Laboratory for Law, Logic and Technology (LLT Lab) conducts empirical research on the reasoning in legal decisions that connects the evidence in the case to the findings of fact (usually called “fact-finding”). In conducting this research, the LLT Lab operates out of a theoretical framework, formulates and tests hypotheses, and disseminates its work products for replication and use by others. This innovative program employs a team approach to data generation and analysis, and integrates research with legal education. The goal is not only to improve legal research and education, but also to have an impact on legal decision making in society.

Many important aspects of life depend upon accuracy and fairness in decision making – such as legal decisions about employment, housing, education, immigration, disability, and health care benefits. Decisions in these areas by courts or administrative agencies have two components: deciding what the legal rules are (conclusions of law), and deciding whether those rules apply in a particular case (fact-finding). Fact-finding is critical but under-studied.

Justice and the rule of law require that findings of fact be based reasonably and transparently on the evidence, that similar cases be decided similarly, and that outcomes be reasonably predictable. At the same time, increased complexity in legal rules and evidence (including expert and scientific evidence) has increased societal costs and has limited access to justice for many Americans. By making its work available to all participants in legal decision-making processes, the LLT Lab aims to increase the transparency, accuracy, efficiency, and accessibility of such decision making.

Legal Reasoning and the Need for Empirical Research

Law is a pragmatic profession. Judges and regulators always balance two different types of objectives: the epistemic objective of producing findings of fact that are as accurate as possible and warranted by the available evidence, and non-epistemic objectives such as procedural fairness to parties, administrative efficiency, and specific substantive objectives (such as protecting public health from unsafe food). In addition, judges and regulators must make important decisions in real time, based on incomplete evidence. The reasoning structures they employ have evolved to serve this pragmatic orientation. Legal reasoning tends to be dynamic and probabilistic in nature, efficiently arriving at plausible conclusions, but those conclusions are subject to revision if new evidence arises or old evidence needs reanalysis. These characteristics make legal reasoning a leading example of what logicians call “default reasoning.”

The pragmatic nature of legal reasoning requires empirical research into how such competing values are balanced in different legal contexts. Trying to solve legal problems under the rule of law creates reasoning patterns that are effective in solving those problems, and each particular area of law evolves new concepts and modes of reasoning tailored to achieving its own balance of objectives. Only empirical research into the reasoning of actual decisions can discover what factfinders in different areas find plausible, and how those factfinders evaluate nonexpert and expert evidence to reach their conclusions or findings.
Aspects of a New Research Paradigm

Just as science laboratories generate data by classifying and measuring real-world objects or events, the LLT Lab generates data by modeling the logical structure of the reasoning recorded in legal decisions. Such “logic models,” which capture the essential inference structure of the factfinder’s reasoning, have two major components: the legal rules applicable to all similar cases, and the evidentiary reasoning applying those rules to the particular case.

First, lab researchers create “rule trees” constructed out of propositions and logical connectives, as models of the legal rules governing the decision-making process. These rule trees are inverted, with the (root) proposition to be proved at the top, and branches extending downward containing the propositions needed to prove the immediately higher proposition. A complete rule tree identifies all the issues of fact in the case, and all the acceptable lines of proof for the ultimate issue.

For example, a major research project in the LLT Lab studies proof of causation in vaccine cases – that is, how to prove whether or not a vaccination caused a patient’s later injury or medical condition. Such difficult issues are decided by “special masters” within the United States Court of Federal Claims in Washington, D.C. Figure 1 shows part of the lab’s rule tree for compensation claims in vaccine cases. The top proposition of the entire tree is the ultimate issue the petitioner must prove – namely, that the petitioner is entitled to compensation. At the bottom of the diagram is a three-part test for proving causation. The petitioner filing the claim must prove: (1) that a “medical theory causally connect[s]” the vaccination and the injury; (2) that a “logical sequence of cause and effect” shows that the vaccination “was the reason for” the injury; and (3) that a “proximate temporal relationship” exists between the vaccination and the injury. (The quotations are from the lead case of Althen v. Secretary of Health and Human Services, 418 F.3d 1274, 1278 (Fed. Cir. 2005).)

Figure 1. Part of the vaccine rule tree, showing three sub-issues for proving causation and the logical connectives AND, OR and UNLESS.

Figure 1 also shows three logical connectives used in constructing rule trees: “AND” (all connected conditions must be true in order to prove the conclusion); “OR” (at least one connected condition must be true); and “UNLESS” (if the defeating condition is true, then the conclusion is false, even if the other conditions are true).
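To make this structure concrete, the following is a minimal sketch of how a rule tree built from propositions and the AND, OR and UNLESS connectives might be represented and evaluated. It is an illustration only: the class design and the wording of the propositions are assumptions made for this sketch, not the data structures of the Legal Apprentice™ software.

```python
# Illustrative sketch of a rule tree with AND, OR and UNLESS connectives.
# Class design and proposition wording are hypothetical, not taken from Legal Apprentice(TM).

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RuleNode:
    proposition: str                      # the proposition to be proved at this node
    connective: Optional[str] = None      # "AND", "OR", or None for a leaf issue of fact
    children: List["RuleNode"] = field(default_factory=list)
    unless: Optional["RuleNode"] = None   # defeating condition attached by UNLESS, if any
    found: Optional[bool] = None          # the factfinder's finding on a leaf issue

    def is_proved(self) -> Optional[bool]:
        """AND requires all children to be proved, OR requires at least one,
        and UNLESS defeats the conclusion whenever the defeating condition is proved."""
        if self.connective is None:
            result = self.found
        elif self.connective == "AND":
            result = all(child.is_proved() for child in self.children)
        elif self.connective == "OR":
            result = any(child.is_proved() for child in self.children)
        else:
            raise ValueError(f"unknown connective: {self.connective}")
        if self.unless is not None and self.unless.is_proved():
            return False
        return result


# Hypothetical fragment of the causation branch: all three conditions must be proved.
causation = RuleNode(
    proposition="The vaccination caused the injury",
    connective="AND",
    children=[
        RuleNode("A medical theory causally connects the vaccination and the injury", found=True),
        RuleNode("A logical sequence of cause and effect shows the vaccination was the reason", found=True),
        RuleNode("A proximate temporal relationship exists between vaccination and injury", found=True),
    ],
)
print(causation.is_proved())  # True
```

In this toy version, the UNLESS branch is what makes the inference defeasible: a conclusion that is presently proved can later be defeated if the evidence establishes the defeating condition, which mirrors the default character of legal reasoning described above.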
Second, in modeling the evidentiary reasoning in a particular case, LLT Lab researchers attach the findings of fact to the issues identified by the rule tree, and then create logic models of the reasoning supporting those findings. Thus, the logic model for an entire case includes the generic rule tree with the reasoning of the particular factfinder attached. For example, Figure 2 is a picture of a computer screen showing some of the modeled reasoning from the vaccine decision Casey v. Secretary of Health and Human Services, Case No. 97-612V (December 12, 2005). The special master found that there was indeed an adequate medical theory of causation, and supported that conclusion by two alternative lines of reasoning based on two causal pathways (direct viral infection and immune-mediated inflammatory response). In modeling this reasoning, LLT Lab researchers used the plausibility connective “MAX,” which assigns to the conclusion the highest degree of plausibility assigned to any one of the supporting lines of reasoning. On a color computer display or a page printed in color, the round icon before each sentence in the model has a color that indicates the plausibility value assigned to that assertion. In the complete case model, each of these two alternative lines of reasoning contains further reasoning that proves its conclusion.

Figure 2. Illustration of a portion of the logic model for the Casey decision using the Legal Apprentice™ software.

The LLT Lab uses special software called Legal Apprentice™ (a product of Apprentice Systems, Inc.) to create its logic models. The software keeps track of the logic, and propagates plausibility values and truth values up the tree, from individual items of evidence to the ultimate conclusion. The software also creates HTML documents of the logic models, as well as files of the models formatted in XML (a standard format used in Internet-based programs).
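A small sketch can illustrate how such a plausibility value is propagated upward through a MAX connective. The five-valued ordinal scale used here is an assumption made purely for illustration, and the function is not the Legal Apprentice™ implementation.

```python
# Illustrative propagation of plausibility with a MAX connective: the conclusion
# receives the highest plausibility among its alternative supporting lines of reasoning.
# The ordinal scale and the example values are hypothetical.

PLAUSIBILITY_SCALE = [
    "highly implausible",
    "implausible",
    "undecided",
    "plausible",
    "highly plausible",
]
RANK = {value: i for i, value in enumerate(PLAUSIBILITY_SCALE)}


def propagate_max(supporting_lines):
    """Return the plausibility value assigned to a conclusion supported by
    alternative lines of reasoning (a mapping from line name to plausibility)."""
    return max(supporting_lines.values(), key=lambda value: RANK[value])


# Hypothetical values echoing the Casey example: two alternative causal pathways.
lines = {
    "direct viral infection": "plausible",
    "immune-mediated inflammatory response": "highly plausible",
}
print(propagate_max(lines))  # highly plausible
```

Recomputing such a function bottom-up over a whole tree is, in this simplified picture, what it means to propagate plausibility values from individual items of evidence to the ultimate conclusion: when the value assigned to an item of evidence changes, the values above it can be recomputed.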
As with any scientific research, the next phase in the LLT Lab is to analyze patterns and trends within the data collected. After a lab project (such as the Vaccine-Injury Project, illustrated in Figure 2) selects a sample of decisions to study and generates models for the reasoning in those decisions, lab researchers identify, abstract and formalize the inference patterns that recur within those decisions. The LLT Lab is especially interested in discovering “plausibility schemas,” which are patterns of reasoning that warrant default inferences to presumptively true conclusions. The research tries to identify which patterns the factfinders consider persuasive or not, and why. Because complete evidence is almost never available, this usually means developing “theories of uncertainty” – explanations about what evidence is missing, what uncertainty (potential for error) is inherent in drawing the conclusion, and how it could be reasonable to draw the conclusion even without the missing evidence.

The mission of the LLT Lab is not merely to study fact-finding using scientific methods, but also to improve actual decision making in society. The lab uses its website to make publicly available its database of logic models of decisions. Lab researchers also post commentary on those decisions in the form of blogs, as well as articles about patterns and trends they discover across multiple cases, and about broad aspects of the reasoning they study. A priority is developing and providing useful tools that will assist parties, attorneys and decision makers in reaching accurate decisions more efficiently.

The LLT Lab’s systematic focus on description and critique of reasoning and its mission to improve actual decision making in society, together with its organizational structure, enable an integration of research, education and practice. Faculty and students work in teams – reviewing each other’s logic models for decisions, orienting and training new researchers in the LLT Lab’s methodology, writing commentary on cases and topics through blog entries and articles, and brainstorming about hypotheses to test and the patterns discovered in decisions. Research, education and practice are three dimensions of the same core activity. Conducting the research is simultaneously training in logic skills and education in reasoning, while the research products are useful tools in legal practice.

Finally, the LLT Lab’s research methodology is designed to be collaborative not only within the lab itself, but also with other research laboratories. Because the methodology is logic-based, it is possible to compare rule trees and evidentiary reasoning across different areas of law, across different legal systems, and across time. And because the methodology is standardized, it can be used to produce comparable data (models) in multiple labs. For example, the lab currently has a joint research project with the International and Comparative Law Research Laboratory (Lider-Lab) of the Scuola Superiore Sant’Anna in Pisa, Italy. Together, the two labs are conducting comparative investigations of medical malpractice decisions in the United States and Italy, looking for similarities and dissimilarities in the rule systems and proof patterns. Using a single modeling framework allows the two labs to create logic models that can be compared directly to each other.

Photo: Professor Walker and Professor Giovanni Comandé, director of the International and Comparative Law Research Laboratory (Lider-Lab), standing on the steps of the courthouse in Pisa.

Hypotheses at the Cutting Edge

True to its roots in scientific method, the LLT Lab formulates and tests hypotheses about both its legal subject matter and its own methodology. For example, one objective of the lab is to refine its protocols for generating the logic models for legal decisions, and to test the reliability of those protocols and the validity of the resulting models. Scientific “reliability” here means the degree of variability in modeling when different researchers model the same decision, and scientific “validity” means the degree to which a model accurately captures the reasoning reported by the factfinder. It is a working hypothesis of the lab that it can develop protocols that will reliably produce acceptably accurate models for legal decisions written by a variety of authors in a natural language such as English. Such protocols provide orientation materials for training new lab researchers, as well as general educational materials for training students in logic skills. They may also make it possible to automate parts of the modeling process by developing computer software.
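One conventional way to quantify that kind of reliability is an inter-rater agreement statistic, such as percent agreement or Cohen’s kappa, computed over the model elements that two researchers independently assign to the same sentences of a decision. The sketch below is a generic illustration of that idea, with hypothetical labels; it is not the LLT Lab’s actual protocol or metric.

```python
# Generic inter-rater reliability sketch: percent agreement and Cohen's kappa
# between two researchers labeling the same sentences of a decision (hypothetical labels).

from collections import Counter


def percent_agreement(labels_a, labels_b):
    """Fraction of sentences to which two researchers assigned the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)


def cohens_kappa(labels_a, labels_b):
    """Observed agreement corrected for the agreement expected by chance."""
    n = len(labels_a)
    observed = percent_agreement(labels_a, labels_b)
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Two researchers classifying the same five sentences from a decision.
researcher_1 = ["finding", "evidence", "evidence", "rule", "reasoning"]
researcher_2 = ["finding", "evidence", "reasoning", "rule", "reasoning"]
print(percent_agreement(researcher_1, researcher_2))          # 0.8
print(round(cohens_kappa(researcher_1, researcher_2), 2))     # 0.74
```

High agreement across independently produced models of the same decisions would be evidence that the modeling protocols are reliable in the sense defined above.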
An example of a substantive hypothesis about the law involves the influence of legal policy on fact-finding. The hypothesis being tested in the LLT Lab’s vaccine project is that the special masters who act as factfinders have developed default inference patterns peculiar to this area of law, in which the presumptive warrant is furnished in critical part by social policies. The lab is investigating the extent to which those policies guide decisions about how much evidence is sufficient to establish an issue of fact, when residual uncertainty is acceptable, and when burdens of proof shift among the parties. Gathering data about whether and how this actually occurs may lead to a normative critique of the extent to which it should occur.

A third example of a testable hypothesis involves the dynamics within fact-finding processes. The hypothesis is that certain fact-finding structures are more likely to develop “soft rules” of inference. Soft rules are general patterns of default reasoning that have become “safe havens” of inference because a reviewing authority (such as an appellate court) has decided that a particular finding is a reasonable inference from particular evidence. The hypothesis is that in an area of complex cases (such as the vaccine compensation cases), with a small number of repeat factfinders (the special masters), and documentation of the supervisory decision once it occurs (the court judgments), at least some patterns determined by authority to be reasonable would become “safe havens” for factfinders who do not wish to be reversed and who have an incentive to be efficient in deciding cases. Such patterns might become de facto default rules of inference in evidence assessment, and carry over from case to case. They are not rules of law, but “soft rules” of practice. The extent of such a phenomenon might have implications not only for increased efficiency in fact-finding, but also for decreased fairness to parties in later cases.

Expected Impact of the LLT Lab

The LLT Lab’s approach to research and education has considerable potential as a paradigm. With respect to benefits to society generally, the goal is to produce databases of logic models for legal decisions in important social areas (such as vaccine-injury compensation), together with libraries of reasoning patterns that may be useful across many areas of law. By making this research publicly available to all participants in the legal process, the LLT Lab’s work should increase the transparency and predictability of future decisions, and help ensure that similar cases will be decided similarly. Accuracy should increase as fact-finding reasoning is scrutinized. Moreover, decision-making processes should become more efficient because all participants will be able to better organize their evidence and better assess the settlement value of their cases. Finally, justice should increase because information and insights generated by the LLT Lab will be accessible to parties that could not otherwise afford such expensive and challenging research. These benefits to society (increased transparency, predictability, accuracy, efficiency, and access to justice) should be achievable in many areas of the law, as work at the LLT Lab and other legal research labs progresses.

With respect to impact on research, the LLT Lab demonstrates how to apply scientific methods of modeling and measurement to legal reasoning, and especially to the reasoning of factfinders in actual cases. The research develops libraries of plausibility schemas, or normative patterns of default reasoning, and tests important hypotheses about the structure and dynamics of fact-finding. Moreover, the LLT Lab shows how the model of a research laboratory in the sciences can be applied in a legal setting, so that teams of students and faculty, employing tested methods of data gathering and analysis, can produce research that is valuable to society. This work can also provide a paradigm for research in non-legal areas where documented decision making is available.

The LLT Lab’s databases and pattern libraries should also provide valuable resources for research in related fields outside the law. The lab’s modeling protocols and databases of analyzed legal decisions should provide resources for formal and informal logic theory, as well as for natural-language research in linguistics (especially semantics). Moreover, the LLT Lab’s work should expand the empirical basis for research on artificial intelligence and law, particularly in the area of evidentiary reasoning, and the lab’s modeling protocols should assist artificial-intelligence researchers in automating the extraction of reasoning from natural-language documents. The subtleties of legal reasoning are difficult for non-lawyers to study, but the LLT Lab’s methodology makes legal logic more accessible to them.
With regard to the impact on education, the LLT Lab provides a unique paradigm for legal education and for higher education generally. The same techniques developed for analyzing the reasoning of a factfinder will be useful in training students in logic and argumentation skills. The database of modeled cases provides numerous examples of evidentiary reasoning for students to study. Through the use of a team approach to research, the LLT Lab demonstrates how students can acquire logic skills in a research laboratory, while simultaneously producing important databases and tools for society. As a result, the education process, in both law and elsewhere, might become more effective pedagogically, more engaging to students, and more productive for society.

Professor Vern Walker holds a doctorate in philosophy from the University of Notre Dame, with specialization in knowledge theory, artificial intelligence, deductive and inductive logic, and the conceptual foundations and methodologies of the sciences. His doctoral dissertation was on the perception of objects by biological and mechanical systems. He taught philosophy for four years at Creighton University in Omaha, Nebraska, including courses in logic, philosophy of science, ethics and bioethics. He earned the J.D. at Yale Law School, where he was also an editor of the Yale Law Journal.

Prior to joining the Hofstra Law School faculty, Professor Walker was a partner in the Washington, D.C., law firm of Swidler & Berlin. His practice included representation before state and federal administrative agencies and before courts on judicial review of agency actions. His administrative practice focused primarily on issues concerning public health, safety, and the environment. He also represented clients in civil litigation alleging products liability and toxic torts. While in law practice, he worked extensively with expert witnesses and scientific evidence, and he co-authored the book Product Risk Reduction in the Chemical Industry. At Hofstra, Professor Walker teaches courses in scientific evidence, torts, administrative law, administrative health law, and European Union law, and he is director of the Research Laboratory for Law, Logic and Technology. He is on the editorial board of the journal Law, Probability and Risk, as well as the editorial review board for the International Journal of Agent Technologies and Systems. He is a past president of the Risk Assessment and Policy Association. He has been a consultant to both private and governmental institutions in the United States and Europe. Professor Walker has published extensively on the logic of legal reasoning and fact-finding, the design of fact-finding processes, and the use of scientific evidence in legal proceedings.
His writings also explore the substantive topics of risk assessment, risk management, and scientific uncertainty. In addition, he designs computer software for capturing legal knowledge and modeling legal reasoning, and he explores ways to use logical analysis and artificial intelligence in his teaching.