
Ending up in legal informatics was probably more or less inevitable for me, as I wanted to study both law and electrical engineering from early on, and I just hoped that the combination would start making some sense sooner or later. ICT law (which I still pursue sporadically) emerged as an obvious choice, but AI and law seemed a better fit for my inner engineer.

The topic for my (still ongoing-ish) doctoral project just sort of emerged. While I was reading through the books recommended by my master’s thesis supervisor (professor Peter Blume in Copenhagen), a sentence in Cecilia Magnusson Sjöberg‘s dissertation caught my eye: “According to Bench-Capon and Sergot, fuzzy logic is unsuitable for modelling vagueness in law.” (my translation) Since I had some previous experience with fuzzy control, this seemed like an interesting question to study in more detail. To me, vagueness and uncertainty did indeed seem like good places to use fuzzy logic, even in the legal domain.

After going through loads of relevant literature, I started looking for an example domain in which to do some experiments. The result was MOSONG, a fairly simple model of trademark similarity that used Type-2 fuzzy logic to represent both vagueness and uncertainty at the same time. Testing MOSONG yielded perfect results on the first validation set as well, which to me seemed more suspicious than reassuring. If the user/coder could decide the cases correctly without the help of the system, would that not affect the coding process as well? As a consequence, I also started testing the system on a non-expert population (undergraduates, of course), and the performance started to conform better to my expectations.
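This post does not spell out MOSONG’s internals, but as a rough illustration of how an interval Type-2 fuzzy value can carry vagueness and uncertainty at the same time, consider the following sketch. It is not the actual MOSONG implementation; the class, the similarity dimensions and the numbers are all hypothetical.

# A minimal sketch, NOT the actual MOSONG implementation: it only illustrates
# how an interval type-2 fuzzy value can carry vagueness and uncertainty at once.
from dataclasses import dataclass

@dataclass
class IT2Truth:
    """Interval type-2 fuzzy truth value: a [lower, upper] membership band."""
    lower: float   # the most conservative degree of similarity
    upper: float   # the most generous degree of similarity

    def and_(self, other: "IT2Truth") -> "IT2Truth":
        # Standard min-conjunction, applied to both ends of the interval.
        return IT2Truth(min(self.lower, other.lower), min(self.upper, other.upper))

# Hypothetical coder judgments for a pair of trademarks, per similarity dimension.
visual     = IT2Truth(0.6, 0.8)   # fairly similar, but the coders disagree a bit
aural      = IT2Truth(0.7, 0.9)
conceptual = IT2Truth(0.2, 0.5)   # both vague and uncertain

overall = visual.and_(aural).and_(conceptual)
print(f"overall similarity lies somewhere in [{overall.lower}, {overall.upper}]")

The width of the resulting interval is itself informative: a wide band signals that the coders’ judgments are uncertain, not merely that the marks are somewhat similar.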

My original idea for the thesis was to look at different aspects of legal knowledge by building a few working prototypes like MOSONG and then explaining them in terms of established legal theory (the usual suspects, starting from Hart, Dworkin, and Ross). Testing MOSONG had, however, made me perhaps more attuned to the perspective of an extremely naive reasoner, certainly a closer match for an AI system than a trained professional. From this perspective I found conventional legal theory thoroughly lacking, and so I turned to the more general psychological literature on reasoning and decision-making. After all, there is considerable overlap between cognitive science and artificial intelligence as multidisciplinary ventures. Around this time, the planned title of my thesis also received a subtitle, thus becoming Fuzzy Systems and Legal Knowledge: Prolegomena to a Cognitive Theory of Law, and a planned monograph morphed into an article-based dissertation instead.

[Illustration by David Plunkett]

One particularly useful thing I found was the dual-process theory of cognition, on which I presented a paper at IVR-2011 just a couple of months before Daniel Kahneman’s Thinking, Fast and Slow came out and everyone started thinking they understood what System 1 and System 2 meant. In my opinion, the dual-process theory has important implications for AI and law, and it also explains why the field has struggled to create widespread systems of practical utility. Representing legal reasoning only in classically rational System 2 terms may be adequate for expert human reasoners (and simple prototype systems), but AI needs to represent the ecological rationality (as opposed to the cognitive biases) of System 1 as well, and to do this properly, different methods are needed, and on a different scale. Hello, Big Dada!

In practice this means that the ultimate way to properly test one’s theories of legal reasoning computationally is through the full-scale R&D of an AI system that hopefully does something useful. In an academic setting, doing the R part is no problem, but the D part is a different matter altogether, both because much of the work required is fairly routine and uninteresting from a publication standpoint, and because the sheer volume of work makes the project incompatible with normal levels of research funding. Instead, an interested external recipient is typically required in order to obtain adequate funding. A relevant problem domain and a base of critical test users should also follow as part of the bargain.

In the case of legal technology, the judiciary and the public administration are obvious potential recipients. Unfortunately, there are at least two major obstacles to this. One is attitudinal, as exemplified by the recent case of a Swedish candidate judge whose career path was cut short after he created a more usable IR system for case law on his own initiative. The other is structural: public sector software procurement is generally in a state of crisis, due both to a limited understanding of how to develop software systems that produce efficiency rather than frustration, and to the constraints of procurement law and associated practices, which make such projects almost impossible to carry out successfully even if the required will and know-how were there.

The private sector is of course the other alternative. With law firms, the prevailing business model based on hourly billing offers no financial incentive for technological innovation, as Richard Susskind has most notably pointed out, and the attitudinal problems may not be all that different. Legal publishers are generally not much better, either. More broadly, in large companies the organizational culture is usually geared towards optimal execution of plans from above, making it too rigid to properly foster innovation, while for established small companies the required investment and the associated financial risk are too great.

So what is the solution? To all early-stage legal informatics researchers out there: find yourselves a start-up! Either start one yourself (with a few other people with complementary skillsets) or find an existing one that is already trying to do something where your skills and knowledge should come in handy, perhaps just on a consultancy basis. In the US, there are already over a hundred start-ups in the legal technology field. The number of start-ups doing intelligent legal technology (and of European start-ups in the legal field in general) is much smaller, so it should not be too difficult to gain a considerable advantage over the competition with the right idea and a solid implementation. I myself am fortunate enough to have found a way to leverage all the work I have done on MOSONG by co-founding Onomatics earlier this year.

This is not to say that just any idea, even one good enough to be the foundation of a doctoral thesis, will make for a successful business. This is a common pitfall in the commercialization of academic research in general. Starting with an existing idea, a prototype or even a complete system and then trying to find problems it (as such) could solve is a proven path to failure. If all you have is a hammer, all your problems start to look like nails, and this holds just as much for more sophisticated tools. A better approach is to first find a market need and then work towards a marketable technological solution for it, using all of one’s existing knowledge and technology whenever applicable, but without being constrained by them when other methods work better.

Testing one’s theories by seeing whether they can actually be used to solve real-world problems is the best way to move one’s own work towards broader relevance. Doing so typically involves a considerable amount of work that is neither scientifically interesting nor economically justifiable in an academic context, but which is nevertheless necessary to see whether things work as they should. Because of this, such real-world integration is more feasible when done on a commercial basis. Herein lies a considerable risk that the findings of this type of applied research will remain entirely confidential and proprietary as trade secrets, rather than being published at least to some degree, where they could fuel future research in the broader research community and not just within the individual company. To avoid this, active cooperation between industry and academia should be encouraged.

Anna Ronkainen is currently working as the Chief Scientist of Onomatics, Inc., a legal technology start-up of which she is a co-founder. She has previously worked with language technology, both commercially and academically, for over fifteen years. She is a serial dropout with (somehow) an LL.M. from the University of Copenhagen, and she expects to defend her LL.D. thesis Fuzzy Systems and Legal Knowledge: Prolegomena to a Cognitive Theory of Law at the University of Helsinki during the 2013/14 academic year. She blogs at www.legalfuturology.com (with Anniina Huttunen) and blog.onomatics.com.


VoxPopuLII is edited by Judith Pratt. Editors-in-Chief are Stephanie Davidson and Christine Kirchberger, to whom queries should be directed.

There has been much discussion on this blog about law-related information retrieval systems, ontologies, and metadata. Today, I’d like to take you into another corner of legal informatics: rule-based legal information systems. I’ll tell you what they are, what their strengths and limitations are, and how they’re made. I’ll also explain why I’m optimistic about their potential to expand public access to law and to improve the way legal expertise is deployed and consumed.

First, what are they?

A rule-based expert system represents knowledge of a particular domain — such as medicine, finance, or law — in the form of “if-then” rules. Here’s an example of a rule:

the employee is entitled to standard FMLA leave IF
the employee is an eligible employee AND
the reason for the leave is enumerated in 29 U.S.C. § 2612

A rule consists of a bunch of variables (here, three Boolean statements) together with some logical operators (if, then, and, or, not, mathematical operators, etc.). Rules are chained together to form a rulebase, which is basically a database of rules. “Chained together” means that the rules connect to each other: a condition in one rule is the consequent or conclusion in another rule. For example, here’s a rule that links to our first rule:

the reason for the leave is enumerated in 29 U.S.C. § 2612 IF
the employee needs to care for a newborn child OR
the employee is becoming an adoptive or foster parent OR
the employee’s relative has a serious health condition OR
the employee cannot perform their job due to a serious health condition

Each of the conditions in this new rule can be defined by yet more rules. And other rules can sprout off of the main rule tree to form a complex web of inference. If we were to visualize such a network of rules, it might begin to look something like this:

[Figure: a rulebase visualized as a network of chained rules, with inputs and goals highlighted]

The rulebase inputs are shown in blue and the outputs – or “goals” – are highlighted in orange. The core function of the inference engine (or rule engine) is to figure out what conclusions can be drawn from the input facts. Also, given incomplete information, an inference engine will figure out what additional facts are needed in order to reach one of the goals.
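To make both the chaining and the inference concrete, here is a deliberately tiny sketch in Python of the two ideas together: the two FMLA rules above encoded as data, and a backward-chaining procedure that derives what it can from the known facts and collects the inputs it still needs. This is purely illustrative and hypothetical; it is not how any particular commercial rule engine represents or executes rules.

# A hypothetical, engine-agnostic sketch: the two FMLA rules above as data,
# plus a minimal backward-chaining procedure over them. Illustrative only.

RULEBASE = {
    "entitled to standard FMLA leave": {
        "all": [  # IF ... AND ...
            "is an eligible employee",
            "reason for leave is enumerated in 29 U.S.C. § 2612",
        ],
    },
    "reason for leave is enumerated in 29 U.S.C. § 2612": {
        "any": [  # IF ... OR ...
            "needs to care for a newborn child",
            "is becoming an adoptive or foster parent",
            "relative has a serious health condition",
            "cannot perform their job due to a serious health condition",
        ],
    },
}

def prove(goal, facts, rulebase, missing):
    """Return True/False when the goal is decided by the facts and rules,
    or None while recording which base-level inputs are still unknown."""
    if goal in facts:                       # the user has already answered this
        return facts[goal]
    rule = rulebase.get(goal)
    if rule is None:                        # a base-level input with no rule behind it
        missing.add(goal)
        return None
    if "all" in rule:                       # conjunction: every condition must hold
        results = [prove(c, facts, rulebase, missing) for c in rule["all"]]
        if False in results:
            return False
        return True if all(r is True for r in results) else None
    results = [prove(c, facts, rulebase, missing) for c in rule["any"]]  # disjunction
    if True in results:
        return True
    return False if all(r is False for r in results) else None

known = {"is an eligible employee": True}
still_needed = set()
print(prove("entitled to standard FMLA leave", known, RULEBASE, still_needed))
print(still_needed)   # the facts an interview screen would ask about next

Real engines add far more than this (data types, temporal reasoning, explanations, interview generation), but the underlying chaining principle is the same.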

Rule-based systems in context

From this extremely simple example we can start to get a sense of the strengths and limitations of rule-based representations of legal knowledge. Let’s start with the strengths. First, the law, to a significant degree, seems to consist of rules, and representing them in a constrained, logical language is fairly straightforward and natural. As a result, rule-based systems are transparent: the system code looks a lot like the text that’s being represented. This “isomorphism” means that you can trace the system logic back to the original source material, easily spot errors, and quickly adapt to changes in the law. Furthermore, rule-based systems can justify their determinations by explaining how they arrived at a particular conclusion and by providing audit trails. It’s also fairly easy for people to interact with rule-based systems, as they integrate well with interviews. In short, it’s relatively easy to put legal knowledge into rule-based systems, easy to maintain it, and easy to get it out.

But all this simplicity comes at a price: the sophistication of the knowledge that can be represented. For one thing, common sense knowledge does not lend itself to simple rule-based representations, as the decades-long Cyc project illustrates. A significant portion of my own rule-authoring effort is spent representing mundane concepts, like figuring out whether a given date falls on a legal holiday or counting the number of weeks in which a given condition is true. Secondly, there’s the problem of how to model vague or “open-textured” concepts. For instance, if a liability determination turns upon whether a person’s conduct was “reasonable”, the uncertainty and fuzziness of that term can’t be modeled in a way analogous to human thinking. A third limitation facing rule-based systems is the “knowledge acquisition bottleneck”: the effort required to codify, test, and validate expert domain knowledge. Part of the challenge derives from the reasons I’ve already mentioned, and part results from the need to capture the knowledge of human subject matter experts who don’t always think in complete and precise “if-then” constructs. Another criticism often lodged at legal expert systems is that law is in essence not rule-based but is instead a fray of competing textual interpretations that cannot be accurately modeled.
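To give a flavour of the first point, even the mundane date bookkeeping mentioned above has to be spelled out by hand somewhere. The sketch below is hypothetical, with a made-up and far-from-complete holiday list; it only shows the kind of “common sense” helper logic that ends up surrounding the legal rules proper.

# Hypothetical helper code for the kind of mundane bookkeeping described above.
from datetime import date, timedelta

# A made-up, incomplete holiday set, purely for illustration.
LEGAL_HOLIDAYS = {date(2013, 1, 1), date(2013, 7, 4), date(2013, 12, 25)}

def is_legal_holiday(d: date) -> bool:
    return d in LEGAL_HOLIDAYS

def weeks_condition_true(days_condition_held: set, start: date, end: date) -> int:
    """Count the calendar weeks between start and end that contain at least one
    day on which the condition held (e.g., the employee was on leave)."""
    weeks = set()
    d = start
    while d <= end:
        if d in days_condition_held:
            weeks.add(d.isocalendar()[:2])   # (ISO year, ISO week number)
        d += timedelta(days=1)
    return len(weeks)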

My view is that, even given these limitations, there are still many problems that can be solved by rule-based systems. No one is asking them to solve all legal automation problems, or claiming that all legal knowledge can be represented in the form of rules. (Part of why little attention is paid to these systems today is that they were over-hyped during the artificial intelligence boom of the 1970s and 80s.) But there is a place for them, and that place is quite large even given the semantic confines that I just described. Rule-based systems are ideal for encoding legal principles found in statutes, regulations, and agency decisions — that is, law that’s explicit and knowable, but logically complicated. And there are millions of pages of such law, across thousands of jurisdictions around the world, just waiting to be embedded in rule-based systems.

Let me give you a few examples of what rule-based information systems can do, although chances are that you’ve already encountered one. Perhaps, like millions of American taxpayers, you used TurboTax tax preparation software to file your taxes this year. This and other tax preparation programs interview you about your income and finances, perform a multitude of behind-the-scenes calculations, and then fill out the relevant tax forms for you. I don’t actually know how this software was constructed, but if I were doing it I would absolutely take a rule-based approach. In fact, my team did use a rule engine when tasked to build a tax law advisory system for the IRS. That system, the Interactive Tax Assistant, answers seven common tax questions, is driven by about 1,300 rules, and contains around 200 question screens. Rule-based design can also produce systems like the Australian Visa Wizard, DirectLaw, and The Benefit Bank. Other rule-driven systems work behind the scenes at government agencies and corporations to process claims by making fast, consistent, and transparent decisions.

Available tools

In my view, the premier tool for engineering rule-based legal information systems is Oracle Policy Modeling (OPM, formerly known as Haley Office Rules, RuleBurst, and Softlaw). (Full disclosure: I used to work for Oracle.) OPM lets you write natural language rules that capture statutory text, calculations, date and time-based reasoning, and basic ontological relationships. It has decent debugging and rulebase visualization features (that’s how I created the rule network diagram above), and an excellent regression testing facility. OPM lets you deploy rulebases as Web interviews and integrate them into other computer systems. The major downside to OPM is its cost: I understand the list price to be in the ballpark of $100K per license.

You can also model legal rules using other business rule engines, such as ILOG, Blaze Advisor, JBoss Drools (free), and Jess (free). JBoss Drools has a promising feature that lets you create Domain Specific Languages by mapping natural language expressions to the underlying programming code. You could also use traditional logic programming / expert system languages like Prolog or CLIPS, which are extremely powerful but which do not allow for isomorphic representation of the law. OWL-centric ontology editors such as Protege are also beginning to support rule-based knowledge representation.

To address the lack of freely-available, practical legal modeling tools, I’ve been working on Jureeka.org, a project affiliated with Stanford’s CodeX Center for Computers and Law. Jureeka is an open, Web-based rule authoring platform that lets lawyers, law students, and other subject matter experts represent their knowledge as “if-then” rules. Jureeka then uses the rules to generate jurisdiction-specific interviews, which present the relevant topic in a digestible manner. Its strengths are that it’s completely Web-based, it makes navigation of the rules easy, and it lets rule authors work collaboratively to rapidly develop knowledge bases in a wiki-like fashion. The motivating vision is to provide a way for legal knowledge engineers to build topical rulebases, and then connect these modules together to form an information backbone that drives other IT systems and helps the general public get answers to their legal questions.

[Figure: screenshot of a Jureeka interview]

Jureeka is very much a work in progress, and I’ll be the first to admit that its main weakness is the oversimplicity of its rule syntax. (For example, I’m currently working on an ontology layer and a way to reason across multiple instances of an object or variable.) But this is the type of knowledge-generating project that I’d like to see a developer community coalesce around.

Future potential

Rule-based programming is not the be-all and end-all of legal informatics, but it does have significant untapped potential. Government agencies are beginning to adopt rule-based legal information systems as a way to better serve the public. I think there are also lucrative opportunities available for law firms to seize the first mover advantage by automating slices of the law of interest to consumers. Rule-based systems can help nonprofit organizations advance their missions by guiding constituents through labyrinthine legal processes. And these systems are of obvious benefit to corporations, which need to comply with a variety of regulations across numerous jurisdictions.

Rule-based systems can also benefit the legislative drafting process. For example, an early incarnation of the OPM software helped the Australian Taxation Office simplify that country’s tax code. In addition to this kind of legislative refactoring (which entails clarifying and reorganizing Rube Goldberg-like legal texts), legislatures could also promulgate law in an “inference-ready” machine readable form. That is, portions of the law could be written in a syntax that both humans and machines can read, making the law not only accessible but executable. I’m not merely referring to high-level metadata; I’m talking about code that is intended to be run in an inference engine and that can be deployed as is into society’s computing infrastructure. [See, e.g., Professor Monica Palmirani’s example of legal rules coded in the Legal Knowledge Interchange Format (LKIF) (at slides 48 through 50); please note that this is a 4.5M download.]

Some people have raised the objection that rule-based systems and their creators engage in the unauthorized practice of law by dispensing “legal advice.” I think this concern is overblown and founded upon a lack of understanding of how these systems work. Legal advice entails applying the law to the facts of a particular case or, conversely, interpreting facts in light of the applicable law. Rule-based systems don’t do that.  Instead, they break up complicated legal provisions into atomic pieces and ask users to determine how each atom applies to them. Conceptually, it’s no different than reading a plain language description of legal rules and applying those rules to your own situation.

My goal in this post has been to introduce you to something that you may not have heard about and to convince you that it is a viable and worthwhile activity. Rule-based legal information systems have been around for a few decades, but we still have a long way to go until our rule-based legal modeling tools are as sophisticated as the Mathematica software is in the domain of mathematical computation. As we move in that direction, and as our legal knowledge engineering proficiency grows, we can advance toward the day when all people can take equal advantage of their legal rights. Knowing that they have them is the first step.

Michael Poulshock is a consultant specializing in legal knowledge engineering and a Fellow at Stanford University’s CodeX Center for Computers and Law. He is the creator of Jureeka.org and the Jureeka legal research browser add-on for Firefox and Chrome. He was previously a human rights lawyer.

VoxPopuLII is edited by Judith Pratt. Editor in chief is Robert Richards.


Crime investigation is a difficult and laborious process. In a large case, investigators, judges and jurors are faced with a mass of unstructured evidence of which they have to make sense. They are expected, often without any prior formal training, to map out complex scenarios and assess the potential relevance of a vast amount of evidence to each of these hypothetical scenarios. Humans can only process a limited amount of information at once, and various cognitive and social biases, such as tunnel vision, groupthink and confirmation bias, may lead to unwanted situations and mistakes. Such mistakes, which seem almost unavoidable given the difficult nature of the task, can have a large impact on those involved in the case, and in the past they have led to a number of miscarriages of justice.

Reasoning with criminal evidence requires one to structure the individual pieces of incoming information. In addition to conventional database and spreadsheet programs, a number of programs, such as those produced by CaseSoft and i2, have been designed specifically for intelligence analysis. However, these tools have one major drawback: they do not allow analysts to express their reasoning about the case. The creation and evaluation of scenarios using evidence still take place in the heads of the analysts. At a time when knowledge and argument mapping are taking off as fields to be taken seriously, this seems like a missed opportunity.

The project Making Sense of Evidence, which ran from 2005 to 2009, set out to develop a specialist support tool in which not only the evidence and the scenarios or stories can be structured in a simple way, but in which it is also possible to express one’s reasoning about the evidence and stories using a sound underlying theory. Using insights from such diverse fields as legal theory, legal psychology, philosophy, argumentation theory, cognitive modelling and artificial intelligence (AI), a broad theory that both describes and prescribes how crime investigation and criminal legal decision making (should) take place was developed by me in conjunction with Henry Prakken, Bart Verheij and Peter van Koppen. At the same time, Susan van den Braak (together with Gerard Vreeswijk) developed a support tool for crime investigation based on this theory and extensively tested this tool with police analysts (together with Herre van Oostendorp).

Crime investigation, legal decision making and the process of proof

Crime investigation and legal decision making both fall under what Wigmore calls the process of proof: an iterative process of discovering, testing and justifying various hypotheses in the case. Pirolli and Card have proposed an insightful model of intelligence analysis. In their model, the process consists of two main phases, namely foraging and sense-making. In the foraging phase, basic structure is given to a mass of evidence by schematizing the raw evidence into categories, time lines or relation schemes. In the sense-making phase, complex hypotheses consisting of scenarios and evidence are built and evaluated, and the results are then presented. It is this last phase in which we are particularly interested: the existing tools for evidence analysis already support the foraging phase.

Reasoning with evidence: stories or arguments?

In the research on reasoning with criminal evidence, two main trends can be distinguished: the argumentative approach and the narrative approach. Arguments are constructed by taking items of evidence and reasoning towards a conclusion about the facts at issue in the case. This approach has its roots in Toulmin’s argument structure and Wigmore’s evidence charts and has been adapted by influential legal theorists. It has been characterized as evidential reasoning because of the relations underlying each reasoning step: ‘a witness testifying to some event is evidence for the occurrence of the event’. Argumentative reasoning has also been called atomistic because the various elements of a case (i.e. hypotheses, evidential data) are considered separately and the case is not considered ‘as a whole’.

Hypothetical stories based on the evidence can be constructed, telling us what (might have) happened in a case. Alternative stories about what happened before, during and after the crime should then be compared according to their plausibility and the amount of evidence they explain. This approach has been advocated by people from the field of cognitive psychology as the most natural approach to evidential reasoning. It has been characterized as causal reasoning because of the relations between the events in a story: ‘Because the suspect did not want to get caught by the police, he got in his car and drove off’. The story-based approach has also been called holistic (as opposed to atomistic), because the events are considered as a whole and the individual elements receive less attention.

Both the argument-based and the story-based approaches have their advantages. The argument-based approach, which builds on a significant academic tradition of research, is well suited to a thorough analysis of the individual pieces of evidence, whilst the empirically tested story-based approach is appreciated for its natural account of crime scenarios and causal reasoning. Therefore, in my thesis I have proposed a hybrid theory that combines stories and arguments into one theory. In this hybrid theory, hypothetical stories about what (might have) happened in a case can be anchored in evidence using evidential arguments. Furthermore, arguments can be used to reason about the plausibility of a story.
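The formal details of the hybrid theory are in the thesis, but the gist can be sketched informally: a story is a set of causally linked events, and each event can be anchored in (or attacked by) items of evidence through evidential arguments. The toy data structure below is only my own informal illustration, not the formalism from the thesis, and all the names in it are made up.

# A toy sketch of the hybrid idea, not the formalism from the thesis: a story is
# a set of causally linked events, and each event can be anchored in (or attacked
# by) items of evidence via evidential arguments. All names here are made up.
from dataclasses import dataclass, field

@dataclass
class Event:
    description: str
    causes: list = field(default_factory=list)    # events this one causally leads to
    support: list = field(default_factory=list)   # evidence supporting the event
    attack: list = field(default_factory=list)    # evidence contradicting the event

suspect_seen = Event("Suspect was seen near the scene",
                     support=["witness W1 testimony"],
                     attack=["CCTV footage placing the suspect elsewhere"])
suspect_fled = Event("Suspect drove off to avoid the police")
suspect_seen.causes.append(suspect_fled)

# An event that is not anchored in any evidence is a gap in the story and a
# natural starting point for further investigation or argument.
for event in (suspect_seen, suspect_fled):
    status = "anchored in evidence" if event.support else "not anchored in evidence"
    print(f"{event.description}: {status}")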

Sense-making using argument mapping

In recent times, interest in so-called sense-making tools has increased enormously. In contrast to classic knowledge-based systems from artificial intelligence, these sense-making systems do not contain a knowledge base and do not reason automatically. Instead, they are intended to help users make sense of a problem by allowing them to logically structure their knowledge and reasoning, and to store, share and search that knowledge in a structured and intuitive way. The techniques used in sense-making systems include mind maps, concept maps, issue maps and argument maps. Whilst each of these techniques has its own merits, the technique of argument mapping is of particular interest to the current discussion.

Argument mapping, or argument visualization, traces its origins back to Wigmore, who carefully defined a complex visual language for reasoning with a mass of evidence. In the 1990s, the advent of faster computers with graphical user interfaces stimulated interest in argument mapping and in software tools for producing such visualizations. For example, in 1998 Robert Horn released a series of complex maps about one of the main debates in AI: can computers think? Software tools for argument visualization have since been used for a variety of purposes. For example, Araucaria is used in legal education, making students familiar with legal argument, and in legal practice, aiding judges in handling simple cases by providing checklists in the form of critical questions to an argument. Rationale is used in university courses to teach critical thinking and in a variety of consultancy tasks, such as producing a report for the army on whether or not to buy a new tank. Debategraph is a wiki debate visualization tool which aims to increase the transparency and rigor of public debate on the internet; the program has made it into mainstream media, as it will be used by CNN’s Christiane Amanpour. Cohere has similar aims, allowing for the visualization of ideas and debates on the web. The Online Visualisation of Argument (OVA) suite of argument mapping tools, while similar to Debategraph and Cohere in that it is built to support the idea of a global World Wide Argument Web, has its own niche appeal in that it deals specifically with structured arguments and is explicitly based on rigorous academic theories of (computational) argumentation.

Tools for argument visualization work because they force one to make explicit the various elements of one’s reasoning, such as the premises and conclusions of an argument or the claims made by the participants in a discussion. Thus, certain ambiguities can be avoided. For example, (evidential) relations between the various elements in an argument can be clearly represented as arrows, whereas in natural language arguments the clues that point to possible inferences are often left implicit or phrased ambiguously. Argument visualization also aids complex reasoning when there is more than one reason for a conclusion: as natural language text by its very nature imposes a sequential structure, visualizing the argument can help a great deal.

Story-mapping?

Tools for argument mapping can be worthwhile additions to the existing support software for crime investigation, because these tools enable the structuring, not only of the evidence itself, but also of the reasoning based on this evidence. However, as was argued above, reasoning in the process of proof does not just involve argumentation; stories or narratives play an equally important role.

The existing support software, such as Analyst’s Notebook, makes it possible to incorporate skeleton stories by drawing timelines. However, it is not just the events or their sequence that make a story. A proper story also needs to be coherent; that is, its (causal) structure needs to be believable. Because the plausibility of a story depends on someone’s prior beliefs, it is very subjective and therefore open to argument. The existing argument mapping software does not allow for the visualization of stories. Arguments in this software mainly focus on one or two main claims, whereas a story is usually about the greater whole. Although arguments for individual events in a story can be visualized in the current tools, those tools do not allow for the explicit representation of a story’s structure and the relations between the events in a story.

Our project developed a tool, AVERS, which allows for the visualization of causally connected scenarios as well as the arguments supporting or attacking these scenarios. Thus, AVERS allows one to show how a scenario is contradicted by evidence, and to reason about the stories themselves. Arguments can be directly linked to source documents, and the type of evidence used in those arguments can be indicated.

Looking at the future

The AVERS tool and the hybrid theory on which it is based are important first steps towards developing powerful support and visualization tools tailored to a specific task such as crime investigation or legal decision making. On the theoretical side, further interdisciplinary research is necessary to achieve a truly integrated “science of evidence.” On the practical side, further testing and development of support tools are needed. While visualization can ease the interpretation of complex arguments, complex argument visualizations can quickly become “boxes-and-arrow-spaghetti.” Depending on the context, a visual or a textual representation may be preferred, and any sense-making tool for argumentation should allow for a combination of the two modes of representation.

Floris Bex is a research assistant at the Argumentation Research Group of the University of Dundee, working on the Dialectical Argumentation Machines (DAM) project. He has an M.Sc. in Cognitive Artificial Intelligence from Utrecht University. In 2009, he was awarded his Ph.D. from the University of Groningen (Centre for Law and ICT) for his thesis entitled “Evidence for a Good Story: A Hybrid Theory of Stories, Arguments and Criminal Evidence”. His thesis outlines a hybrid theory of reasoning with stories and arguments in the context of criminal evidence.

VoxPopuLII is edited by Judith Pratt. Managing editor is Rob Richards.