
“To be blunt, there is just too much stuff.” (Robert C. Berring, 1994 [1])

Law is an information profession in which legal professionals act as information intermediaries for their clients. Today, those legal professionals routinely use online legal research services like Westlaw and LexisNexis to gain electronic access to legislative, judicial, and scholarly legal documents.

Put simply, legal service providers make legal documents available online and enable users to search these text collections in order to find documents relevant to their information needs. For quite some time, providers’ main focus has been adding more and more documents to their online collections. Unlike other areas, such as Web search, where the growth in available documents has been accompanied by major changes in the search technology employed, the search systems used in online legal research services have changed little since the early days of computer-assisted legal research (CALR).

It is my belief, however, that the search technology employed in CALR systems will have to change dramatically in the coming years. The future of online legal research services will increasingly depend on the systems’ ability to produce useful result lists for users’ queries. The continuing need to make additional texts available will only accelerate that change. Electronic availability of a sufficient number of potentially relevant texts is no longer the main issue; quick findability of a few highly relevant documents among hundreds or even thousands of other potentially relevant ones is.

To reach that goal, from a search system’s perspective, relevance ranking is key. In a constantly growing number of situations – just as Professor Berring stated almost 20 years ago (see above) – even carefully chosen keywords bring back “too much stuff”. Successful ranking, that is, the ordering of search results according to their estimated relevance, becomes the main issue. A system’s ability to correctly assess the relevance of texts for every single individual user, and for every single one of their queries, will quickly become – or has arguably already become in most cases – the next holy grail of computer-assisted legal research.

Until a few years ago, providers could still successfully argue that search systems should not be blamed for lacking “theoretically, maybe, sometimes feasible” relevance-ranking capabilities; rather, users were to blame for their missing search skills. I rarely hear that line of argument any longer, and this certainly has nothing to do with any improvement in end users’ (Boolean) search skills. Representatives of service providers no longer dare to argue this way, I think, because every single day every one of them uses Google, punching in vague, short queries and still mostly getting back sufficiently relevant top results. Why should this not work in CALR systems?

Indeed. Why, one might ask, is there not more Web search technology in contemporary computer-assisted legal research? Sure, as system providers often stress, computer-assisted legal research is different from Web search: in Web search we typically do not mind low recall as long as it guarantees high precision, while in CALR trading off recall for precision is problematic. But even given those clear differences, I have not heard a single plausible argument why the cornerstone of modern Web search, link analysis, should not be used successfully in every single CALR system out there.
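Link analysis here means scoring documents by who cites them, much as PageRank scores web pages by their inbound links. As a purely illustrative sketch – the toy citation data and the simplified handling of cases that cite nothing are my own assumptions, not any provider’s algorithm – a PageRank-style computation over a small case-citation graph could look like this:

```python
# Illustrative sketch: PageRank-style authority scores over a case-citation
# graph. Case names and citations are invented; dangling nodes (cases that
# cite nothing) are handled in a deliberately simplified way.

def pagerank(citations, damping=0.85, iterations=50):
    """citations maps each case to the list of cases it cites."""
    cases = set(citations) | {c for cited in citations.values() for c in cited}
    score = {c: 1.0 / len(cases) for c in cases}
    for _ in range(iterations):
        new = {c: (1.0 - damping) / len(cases) for c in cases}
        for citing, cited_list in citations.items():
            if cited_list:
                share = damping * score[citing] / len(cited_list)
                for cited in cited_list:
                    new[cited] += share
        score = new
    return score

# Toy citation network: each opinion cites earlier opinions.
citations = {
    "Case A": ["Case C"],
    "Case B": ["Case A", "Case C"],
    "Case C": [],
}
for case, s in sorted(pagerank(citations).items(), key=lambda kv: -kv[1]):
    print(f"{case}: {s:.3f}")
```

Heavily cited opinions float to the top, which is exactly the kind of signal a relevance-ranking CALR system could fold into its result ordering.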

These statements are certainly blunt and provocative generalizations. Erich Schweighofer, for example, showed as early as 1999 (pre-mainstream Web), in his aptly named piece “The Revolution in Legal Information Retrieval or: The Empire Strikes Back”, that there had in fact been technological changes in legal information retrieval. And there have also been free CALR systems like PreCYdent that fully employed citation-analysis techniques in computer-assisted legal research and thereby – even if they did not manage to stay profitable – demonstrated “one of the most innovative SE [search engine] algorithms”, according to experts.

I certainly cannot offer an exhaustive and objective discussion of the various factors that contribute to the slow pace of technological change in computer-assisted legal research, least of all in this short post. There is a whole mix of reasons why there is not (yet) more “Google” in CALR, including providers’ fear of being held liable for query modifications that might (theoretically) lead to wrong expert advice, and the lack of pressure from potential and existing customers to adopt more modern search technology.

What I want to highlight, however, is one more general explanation which is seldom put forward explicitly. What slows down technological innovation in online legal research, in my opinion, is also the interest of the whole legal profession in holding on to a conception of “legal relevance” that is immune to any kind of computer algorithm. A successfully employed, Web search-like ranking algorithm in CALR would, after all, not only produce comfortable, highly relevant search results, but would also reveal certain truths about legal research: the search for documents of high “legal relevance” to a specific factual or legal situation is, in most cases, a process that follows clear rules. Many legal research routines follow clear, pre-defined patterns which could be translated into algorithms. The legal profession will have to accept that truth at some point, and will therefore have to define and communicate “legal relevance” much less mystically and more pragmatically.

Again, one might ask “Why?” I am certain that if the legal profession – that is, legal professionals and their CALR service providers – does not bring up-to-date search technology into its CALR systems, someone else will eventually do so, with very little involvement of legal professionals. To be blunt: right now Google can still serve as an example for our systems; soon it might simply set an example instead of them.

Anton Geist is Law Librarian at WU (Vienna University of Economics and Business) University Library. He holds law degrees from the University of Vienna (2006) and the University of Edinburgh (2010). He is grateful for feedback and discussions and can be contacted at home@antongeist.com.

[1] Berring, Robert C. (1994), Collapse of the Structure of the Legal Research Universe: The Imperative of Digital Information, 69 Wash. L. Rev. 9.

VoxPopuLII is edited by Judith Pratt. Editors-in-Chief are Stephanie Davidson and Christine Kirchberger, to whom queries should be directed. The information above should not be considered legal advice. If you require legal representation, please consult a lawyer.

[Editor’s Note] For topic-related VoxPopuLII posts please see: Núria Casellas, Semantic Enhancement of legal information … Are we up for the challenge?; Marcie Baranich, HeinOnline Takes a New Approach to Legal Research With Subject Specific Research Platforms; Elisabetta Fersini, The JUMAS Experience: Extracting Knowledge From Judicial Multimedia Digital Libraries; João Lima, et al., LexML Brazil Project; Joe Carmel, LegisLink.Org: Simplified Human-Readable URLs for Legislative Citations; Robert Richards, Context and Legal Informatics Research; John Sheridan, Legislation.gov.uk

A new style of legal research

An attorney/author in Baltimore is writing an article about state bans of teachers’ religious clothing. She finds one of the tersely written statutes online. The website then does a query of its own and tells her about a useful statute she wasn’t aware of—one setting out the permitted disciplinary actions. When she views it, the site makes the connection clear by showing her where the second statute references the original. This new information makes her article’s thesis stronger.

Meanwhile, 2800 miles away in Oregon, a law student is researching the relationship between the civil and criminal state codes. Browsing a research site, he notices a pattern of civil laws making use of the criminal code, often to enact civil punishments or enable adverse actions. He then engages the website in an interactive text-based dialog, modifying his queries as he considers the previous results. He finally arrives at an interesting discovery: the offenses with the least additional civil burdens are white collar crimes.

A new kind of research system

A new field of computer-assisted legal research is emerging: one that encompasses research in both the academic and the practical “legal research” senses. The two scenarios above both took place earlier this year, enabled by the OregonLaws.org research system that I created and which typifies these new developments.

Interestingly, this kind of work is very recent; it’s distinct from previous uses of computers for researching the law and assisting with legal work. In the past, techniques drawn from computer science have been most often applied to areas such as document management, court administration, and inter-jurisdiction communication. Working to improve administrative systems’ efficiency, people have approached these problem domains through the development of common document formats and methods of data interchange.

The new trend, in contrast, looks in the opposite direction: divergently tackling new problems as opposed to convergently working towards a few focused goals. This organic type of development is occurring because programming and computer science research is vastly cheaper—and much more fun—than it has ever been in the past. Here are a couple of examples of this new trend:

“Computer Programming and the Law”

Law professor Paul Ohm recently wrote a proposal for a new “interdisciplinary research agenda” which he calls “Computer Programming and the Law.” (The law review article is itself also a functioning computer program, written in the literate programming style.) He envisions “researcher-programmers,” enabled by the steadily declining cost of computer-aided research, using computers in revolutionary ways for empirical legal scholarship. He illustrates four new methods for this kind of research: developing computer programs to “gather, create, visualize, and mine data” that can be found in diverse and far-flung sources.

“Computational Legal Studies”

Grad students Daniel Katz and Michael Bommarito (researcher-programmers, as Paul Ohm would call them) created the Computational Legal Studies Blog in March 2009. The website is a growing collection of visualizations applied to diverse legal and policy issues. The site is part showcase for the authors’ own work and part catalog of the current work of others.

OregonLaws.org

I started the OregonLaws.org project because I wanted faster and easier access to the 2007 Oregon Revised Statutes (ORS) and other primary and secondary sources. I had a couple of very statute-heavy courses (Wills & Trusts, and Criminal Law) and I frequently needed to quickly find an ORS section. But as I got further into the development, I realized that it could become a platform for experimenting with computational analysis of legal information, similar to the work being done on the Computational Legal Studies Blog.

I developed the system using pretty much the steps that Paul Ohm discussed:

  1. Gathering data: I downloaded and cleaned up the ORS source documents, converting them from MS Word/HTML to plain text;
  2. Creating: I parsed the texts, creating a database model reflecting the taxonomy of the ORS: Volumes, Titles, Chapters, etc.;
  3. Creating: I created higher-level database entities based on insights into the documents: for example, modeling textual references between sections explicitly as reference objects, each capturing a relationship between a referrer and a referent; and
  4. Mining and Visualizing: Finally, I’ve begun making web-based views of these newly found objects and relationships.
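To make steps 2 and 3 concrete, here is a minimal Python sketch of what such an object model could look like. The class and field names, and the citation-matching pattern, are my own illustrative assumptions, not the actual OregonLaws.org schema.

```python
# Hypothetical sketch of an ORS-style object model (not the actual
# OregonLaws.org schema): Titles contain Chapters, Chapters contain
# Sections, and cross-references are first-class Reference objects.
import re
from dataclasses import dataclass, field

@dataclass
class Section:
    number: str   # e.g. "659.850"
    text: str

@dataclass
class Chapter:
    number: str
    name: str
    sections: list = field(default_factory=list)

@dataclass
class Title:
    number: str
    name: str
    chapters: list = field(default_factory=list)

@dataclass
class Reference:
    referrer: Section   # the section containing the citation
    referent: Section   # the section being cited

def extract_references(sections):
    """Step 3: turn textual cross-references into explicit objects."""
    by_number = {s.number: s for s in sections}
    refs = []
    for s in sections:
        for match in re.finditer(r"ORS\s+(\d+\.\d+)", s.text):
            target = by_number.get(match.group(1))
            if target is not None:
                refs.append(Reference(referrer=s, referent=target))
    return refs
```

Once references exist as rows in their own right, features like the cross-reference notice in the Baltimore scenario become simple queries rather than text searches.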

The object database is the key to intelligent research

By taking the time to go through the steps listed above, we can create powerful new features. Below are some additions to the features described in the introductory scenarios:

We can search smarter. In a previous VoxPopuLII post, Julie Jones advocates dropping our usual search methods and applying techniques like subject-based indexing (à la Factiva) to legal content.

This is straightforward to implement with an object model. The Oregon Legislature created the ORS with a conceptual structure similar to that of most states: the actual content is found in Sections, which are grouped into Chapters, which are in turn grouped into Titles. I was impressed by the organization and architecture that I was discovering: insights that are obscured by the ways statutes are traditionally presented.

[Screenshot: search-filter.png]

And so I sought out ways to make use of the legislature’s efforts whenever it made sense. In the case of search results, the Title organization and naming were extremely useful. Each Section returned by the search engine “knows” what Chapter and Title it belongs to. A small piece of code can then calculate which Titles are represented in the results, and how frequently. The resulting bar graph doubles as an easy way for users to specify filtering by “subject area”. The screenshot above shows a search for “forest”.
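As a rough illustration of that small piece of code – the names, data shapes, and example output here are assumptions for the sketch, not the production implementation – counting Title frequencies over a result set might look like this:

```python
# Illustrative sketch: count how many search hits fall under each ORS Title,
# producing the data behind the bar-graph / subject-area filter.
from collections import Counter

def title_facets(search_hits):
    """search_hits is an iterable of section objects, each knowing its Title."""
    counts = Counter(hit.title_name for hit in search_hits)
    # Most frequent Titles first, ready to render as a filterable bar graph.
    return counts.most_common()

# Hypothetical output for a query like "forest":
#   [("Forestry", 120), ("Water Laws", 14), ("Counties", 3)]
```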

The ORS’s framework of Volumes, Titles, and Chapters was essentially a tag cloud waiting to be discovered.

We can get better authentication. In another VoxPopuLII post, John Joergensen discussed the need for authentication of digital resources. One aspect of this is showing the user the chain of custody from the original source to the current presentation. His ideas about using digital signatures are excellent: they describe a scenario in which an electronic document’s legitimacy can be verified with complete assurance.

[Screenshot: glossary-citations.png]

We can get a good start towards this goal by explicitly modeling content sources. A source is given attributes for everything we’d want to know to create a citation: the date last accessed, the URL it is available at, etc. Every content object in the database is linked to one of these source objects. Now, every time we display a document, we can create properly formatted citations to the original sources.
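A minimal sketch of this kind of source modeling follows; the field names and the citation format are invented for illustration rather than taken from the actual system.

```python
# Hypothetical sketch: explicit source objects that every content object
# links back to, so a citation to the original can always be generated.
from dataclasses import dataclass
from datetime import date

@dataclass
class Source:
    title: str        # e.g. "2007 Oregon Revised Statutes"
    publisher: str
    url: str
    accessed: date    # date the source document was last retrieved

@dataclass
class ContentObject:
    body: str
    source: Source    # every content object is linked to its source

def citation(obj: ContentObject) -> str:
    """Render a simple citation to the original source of a content object."""
    s = obj.source
    return f"{s.title}, {s.publisher}, {s.url} (accessed {s.accessed:%B %d, %Y})"
```

Storing the chain-of-custody data alongside the content also leaves room for adding digital signatures later, along the lines Joergensen describes.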

The gather/create/mine/visualize and object-based approaches open up so many new possibilities that they can’t all be discussed in one short article. It sometimes seems that each new step taken enables previously unforeseen features. A few of these others are new documents created by re-sorting and aggregating content, web service APIs, and extra annotations that enhance clarity. I believe that in the end, the biggest accomplishment of projects like this will be to raise our expectations for electronic legal research services, increase their quality, and lower their cost.

Robb Shecter is a software engineer and third-year law student at Lewis & Clark Law School in Portland, Oregon. He is Managing Editor for the Animal Law Review, plays jazz bass, and has published articles in Linux Journal, Dr. Dobbs Journal, and Java Report.
