by Anne Washington

When the United States first mandated the preservation of government proceedings, clerks wrote in bound blank books. In 1789, Congress passed a law providing for the “safe-keeping” of legislative and government documents (1 Stat. 68).

When the Cornell Legal Information Institute started, digital information was shared on CD-ROMs and 3-1/2 inch disks. In 1992, two people on a cold day in upstate New York turned on a server box to provide law on the Internet.

To celebrate the remarkable work of the Cornell Legal Information Institute in its 25th year, and the 228th year of the first US Congress, this is a provocation about what we might celebrate in the future.

In 1945, Vannevar Bush imagined a machine he called a Memex that would enable the instant retrieval of “trails” of memory. He briefly discussed how it could apply to the law: “The lawyer has at his touch the associated opinions and decisions … The patent attorney has on call the millions of issued patents, with familiar trails to every point of his client’s interest.” (Bush, 1945, p. 8)

What features might be in a future legal Memex?

Imagine that it is 2042 and the 50th anniversary of LII.

The Vice President is listening to a debate on the Senate floor about a bill related to a campaign issue. She tracks public opinion on the social media site Face-oogle, particularly noting what the Majority Leader’s constituents are saying. More importantly, she receives an automated predictive score from the Congressional Data & Budget Office (CDB) about which existing laws might be impacted by a proposed amendment.

She focuses on the predicted laws that meet her preferred portfolio factors. She realizes that this would change an important law. She requests 3-5 potential placements of the amendment in the United States Code and creates composite texts. After comparing the composite text with the official United States Code source, she is ready to spread the word.

The daughter of the first librarian in the White House notifies the Attorney General.

The Attorney General quickly turns to the Legal Information Institute which now tracks all USC citations and their use in case law. The law has been used in several appeals court cases. LII has a detailed description of a pending case before the US Supreme Court that is related to this law. The Attorney General specifically notes the cases that have appeared before the appeals courts associated with the Majority Leader’s state.

In a brief summary, the Attorney General gives legal advice about potential enforcement issues. The message contains the proposed amendment, as it would appear in the United States Code if passed.

In an unprecedented continuation of two American political dynasties, Vice President Barbara Pierce Bush sends a confidential instant message to President Malia Obama. The two are known for their ability to bridge partisan divides through quick access to open government documents.

The President wants to understand the impact of the amendment on administrative law. A quick LII search provides a list of federal regulations that may be related to changes in the United States Code. In combination with internal government sources, her assistant is able to identify pending federal regulations that might be impacted by the amendment. He reviews the historic point-in-time directories of the Code of Federal Regulations. He completes an analysis of how this amendment might streamline workflows or create conflicts between agencies.

The Vice-President reaches across the aisle to the Majority Leader and shares her analysis. They agree on slight changes to the language in order to meet constituent opinion, address the policy concerns, and to keep the government running effectively.

When the amendment passes, LII notifies all lawyers who have argued or written about the law and who have opted in to notification services. The Senate immediately publishes everything as open data, and it appears on G! government entertainment live-casting.

My legal Memex builds a network of the people and laws available in the public records of politicians and organizations. The infrastructure for this vision relies on open data, free access to law, and instantaneous availability. The text analysis and machine learning assume a neutral analysis that is both defensible and leaves room for interpretation.

How far are we from meeting this vision?

As a scholar of legislative organizations, I consider the necessary institutional foundations for a successful legal information institute (LII). The past gives us encouragement. The past also shows how some fundamental principles are necessary to create reliable, accurate, timely, and authoritative data.

A few examples from the history of the United States show the progression that laid the basis for the transition from documents to data.

Two weeks before the end of the first Congress, the chambers passed a law ordering the printing of government records. By the 13th Congress, they had realized that distribution is just as important as printing. The Federal Depository Library Program (FDLP), which ensures that documents and skilled librarians are available in every jurisdiction, has its origins in 3 Stat. 140 (1813).

The idea that there was too much legal information for any one person to track was evident by the early 20th century. The United States Code, finished in 1926, provided subject access to general and permanent federal laws. Congress thought it was a good idea to create indexes and summaries of all pending legislation as well. The Digest of Public General Bills, which first appeared in 1936, continued to be published in annual volumes through 1990, when it was subsumed into an online database. Congress embraced structured data formats as early as 1999, so today at least the United States Code, votes, legislation, and administrative law (the Code of Federal Regulations) are published in XML.
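The shift from printed indexes to structured data is easy to see in a minimal sketch. The XML fragment below is hypothetical, only loosely inspired by the markup Congress publishes for the United States Code (the real USLM schema is richer and differs in its element names); the point is that once law is data, a section becomes an addressable record rather than a page in a bound volume.

```python
# Sketch: querying a hypothetical, USLM-style XML fragment of codified law.
# The element names here are illustrative, not the actual schema.
import xml.etree.ElementTree as ET

fragment = """
<title num="44">
  <section num="2101">
    <heading>Definitions</heading>
    <text>As used in this chapter ...</text>
  </section>
  <section num="2102">
    <heading>Establishment</heading>
    <text>There shall be ...</text>
  </section>
</title>
"""

root = ET.fromstring(fragment)

# "Subject access" becomes a simple query over structured records:
sections = {s.get("num"): s.findtext("heading") for s in root.iter("section")}
print(sections)  # {'2101': 'Definitions', '2102': 'Establishment'}
```

The same query that once required a printed index and a reading room takes a few lines against structured text.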

Law must be documented, printed, distributed, indexed, and structured. While important to the free access to law movement, these developments are equally vital to the internal procedures of public sector organizations. The legislative, judicial, and executive branches could learn from each other and grow together.

The transition from documents to data is vital to modernizing the functioning of our centuries-old bureaucracies around the world, as my fellow 25 for 25 authors have attested. The projects initiated by Sara Frug, Sylvia Kwakye, and others working on LII in 2017 lead the way with innovative solutions for tomorrow.

Law and legislation do not sit still. For instance, the law governing the promulgation of statutes, the US Records Act, was updated in 1795, 1796, 1814, 1816, 1820, and 1842. We need patience as we move forward. Policy has always been iterative.

It is time for the next generation of legal, policy, and technology pioneers to move the free access to law movement forward.

What is your ideal legal Memex?

Anne L. Washington is a digital government scholar who specializes in informatics and technology management. Her expertise on government data currently addresses the emerging policy and governance needs of data science. The National Science Foundation has funded her research on open government data and data-intensive political science. Her work draws on both interpretive research methods and computational text analysis. She is an Assistant Professor at the Schar School of Policy and Government at George Mason University in Arlington, VA, where she teaches organizational ethnography, socio-technical analysis, and electronic government. Prof. Washington serves on the Advisory Board of the Electronic Privacy Information Center (EPIC) and the Open Government Foundation. She has also served on the United Nations World e-Parliament Working Group on XML in Parliament, the OASIS LegalXML technical committee on citations, and the Federal Web Content Managers Usability Task Force. She was an invited expert to the W3C E-Government Interest Group and the W3C Government Linked Data Working Group. She holds a Bachelor of Arts (BA) in computer science from Brown University and a Master of Library and Information Science (MLIS) from Rutgers University. She earned a PhD in Information Systems and Technology Management from The George Washington University School of Business with a secondary field in Organizational Behavior. Prior to completing her doctorate, she gained extensive work experience in information architecture and information technology during years with the Congressional Research Service at the Library of Congress, Barclays Global Investors, Wells Fargo Nikko Investment Advisors, and Apple Computer.

by Adam Ziegler

If I could snap my fingers and make it so, the Web would offer free and open access to every statute, regulation and court ruling ever issued. Unfortunately, finger-snapping doesn’t seem to work.

What does work is … work. Committed, painstaking, imperfect, incremental work over a long period of time, by people like Tom Bruce, Peter Martin, Sara Frug and their colleagues at LII. This series fittingly celebrates their extraordinary 25-year mission to ensure that “everyone [is] able to read and understand the laws that govern them, without cost.” No organization has had a more positive impact on access to law in the internet era.

Unfortunately, despite LII’s remarkable effort and impact, we remain a long way from fully realizing LII’s vision. One area that still requires substantial work is caselaw – the official rulings, decisions and opinions issued by our state and federal courts. Our official caselaw, for the most part, is locked inside print volumes and proprietary databases that offer limited access to the privileged few.

An Early, Profound Commitment to Access

For centuries our courts fulfilled their obligation to ensure public access to law by publishing and disseminating their written decisions in books, called “reporters.” The work by courts, judges, reporters of decisions, publishers, libraries and others to produce and preserve these books over many years has been monumental.

If you study the prefaces and introductory notes of early case reporters, as I have, you gain a profound appreciation for what it took to publish the law during this “book-only” legal publishing period. This was hard work, driven by a commitment to the idea that maximizing access was good for the legal profession and the public:

It has long been a subject of complaint, in this state, that we had no reports of the decisions of our courts of judicature. The importance of having authentic reports of cases argued and determined in the Supreme Judicial Court, the only court in the state whose decisions are considered as authorities, must be obvious to all who have any pretensions to information on the subject. (Ephraim Williams, Reporter of Decisions, Supreme Judicial Court of Massachusetts, 1805, published in Vol. 1 of the Massachusetts Reports)

I need not here enlarge upon the great utility, to the profession, especially, of books of Reports, nor on the necessity that exists in all countries, where the law is the rule of action, that it should be certain and known. The legislature may enact laws, but it is the courts that expound them, and if their expositions remain unpublished, much mischief and litigation must be the consequence. (Sidney Breese, 1831, published in Vol. 1 of the Illinois Reports)

The Federal Reporter is devoted exclusively to the prompt and complete publication of the judicial opinions delivered in each of the United States circuit and district courts. It publishes both oral and written opinions, and such charges to juries as are deemed of general importance…It is believed that by this means many able and learned opinions will be rescued from a most undeserved oblivion, while greater uniformity in the interpretation of the federal statutes and the practice of the various federal courts will at the same time be secured. It would seem, therefore, that such an undertaking is not only possessed of great intrinsic merit, but, now that it has been fairly inaugurated, it actually appears to present itself in the light of a public necessity. (West Publishing Company, 1880, published in Vol. 1 of the Federal Reporter)

This commitment to access shines through in so many of the early reporter volumes we’ve digitized as part of the Caselaw Access Project I lead at Harvard Law School. My favorite example is the Reporter’s Note in Volume 32 of the Georgia Reports, which tells the amazing personal story of George Lester’s efforts to publish the law during and after the Civil War, despite being wounded as a Confederate soldier, the burning of his house and papers, and finding himself “poor and destitute” at the close of the war. It’s hard to imagine someone more dedicated to access to law.

Is the Commitment to Access Fading?

Today, books are not the only or the best way for courts to deliver on their longstanding commitment to access. The “book-only” publishing model is long gone, thankfully. Yet a “book-first” publishing model still prevails for most courts and in most jurisdictions. In this model, courts send commercial publishers their decisions, and the “official” versions of those decisions are collected into bound volumes sold by the publishers to libraries. The publishers also get unique access to the final, digital versions of the decisions, which they use to populate expensive, subscription-only databases they alone control. Meanwhile the inferior, unofficial versions of decisions are sometimes made available, often temporarily, through court websites.

The unfortunate result is that today everyone has to pay to access and read the law. Even if you pay, your access is severely limited. And this takes place in an age in which it’s all too easy for anyone to post anything online for everyone to read, for free. What would Ephraim Williams, Sidney Breese, George Lester and their contemporaries say if they knew that it was possible for courts to make every ruling immediately, freely accessible to the entire world, yet many were not doing so? They might think the commitment to access had faded.

I don’t believe the courts’ commitment to access has faded. It remains every bit as profound and intense as it was centuries ago. Every conversation I have with judges or court officials reinforces this. The access problem today does not reflect a lack of commitment. It reflects, instead, the fundamental difficulty of changing behavior inside institutions designed, for good reason, to make change hard.

This attitude toward change is evident in the slow pace with which courts adopt new technology, which Chief Justice Roberts celebrated in his 2014 Year-End Report on the Federal Judiciary. According to Roberts, “[c]ourts are simply different in important respects when it comes to adopting technology, including information technology,” and this tendency toward caution is an institutional virtue. Technology experts scoff at this claim, because many court technology systems – PACER, for one – are fundamentally defective and unjustifiably difficult to use, and have been for a long time. Far from protecting courts from bad technology, courts’ resistance to change often prolongs their exposure to bad technology.

Nevertheless, those of us who want change in the way courts publish their decisions must respect this dynamic. We must work hard to appreciate the concerns and reservations courts have, to increase awareness and understanding of technological solutions, and to demonstrate paths forward that allow courts to fulfill their commitment to access without compromising other important values.

Access in a Modern World: Digital-First Publishing

Going forward, ensuring public access means publishing and distributing court decisions online as free and open data. That is unquestionably what every court in every jurisdiction should be moving toward.

Courts should focus their digital-publishing efforts forward. They should not worry about providing access to their historical decisions. The Free Law Project, led by Mike Lissner, has already amassed and made accessible a huge, growing collection of historical decisions and other legal materials, including federal trial court opinions from PACER. Our Library Innovation Lab, in partnership with Ravel Law, will provide public access to the Harvard Law Library’s full collection of historical court decisions extracted from roughly 40,000 bound reporter volumes. While bulk access to this data will be restricted temporarily, those restrictions cease once a state or federal court transitions to digital-first publishing. Thus, by making the transition prospectively, courts can also ensure free public access to all of their historical caselaw.

Because each court system has different challenges, constraints and opportunities, we should expect to see different approaches to the transition from book-first to digital-first publishing. We should not expect a one-size-fits-all solution. But we can try to identify a common, achievable standard.

To that end, described below is a set of proposed guidelines for any state making this transition to digital-first publishing. These guidelines recognize the need for flexibility. They outline an achievable standard but do not dictate particular means or methods. For states able to administer their own digital-first publishing systems, these guidelines can inform that system’s priorities and design. For states that will continue relying on the software and/or services of a partner, these guidelines can help define an RFP and inform negotiations and contracting.

Essential characteristics: To fulfill the court’s basic commitment to access, a digital-first publishing system should possess at least these characteristics:

  1. Online – Court decisions should be issued and available online via the Web.
  1. Free and Open – Court decisions should be accessible without charge and without any technical or contractual restrictions on access or usage.
  1. Comprehensive – All decisions should be made available digitally in the same fashion, using the same system. If a state distinguishes between precedential and non-precedential decisions, that distinction should not affect access.
  1. Official – The digital version of a decision should be the official version.
  1. Citable – The digital version of a decision should be citable in and by the courts of the relevant state, using a vendor neutral citation format.
  1. Machine Readable – The decisions should be made available in machine readable formats, meaning at least digitally created PDFs.

Desirable characteristics: To maximize access and to provide a greater public benefit, a court’s digital-first publishing system should possess these additional characteristics:

  1. Digitally Signed – Decisions should be digitally signed by the issuing court to permit authentication.
  1. Versioned – Decisions should be issued using a version control system that makes corrections easy for the courts and transparent to those relying on the decisions.
  1. Structured Data – Decisions should be issued with accompanying metadata that describes, according to a publicly disclosed standard, key attributes of the decisions, such as case name, citation, court name, attorneys, participating judges and authoring judge.
  1. Medium-Neutral – Decisions should include paragraph-numbering and avoid page-dependency.
  1. Archived – Decisions should be preserved, and the archived decisions should be made available online.
  1. Searchable – Decisions should be searchable using keywords and metadata fields.
  1. Bulk Downloadable – Decisions should be downloadable in bulk.
  1. API – Decisions should be accessible to any programmer via a public, documented Application Programming Interface.
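Several of the characteristics above (structured data, digital signing, machine readability) can be sketched together. The record below is purely illustrative: the field names, case details, and citation form are hypothetical, a real court system would publish its own metadata standard, and a production system would use an asymmetric digital signature rather than the bare SHA-256 digest shown here, which only illustrates integrity checking.

```python
# Sketch (hypothetical fields): a structured, verifiable record for one decision.
# A true "Digitally Signed" characteristic needs an asymmetric key pair held by
# the court; the SHA-256 digest below demonstrates only tamper detection.
import hashlib
import json

decision_text = "Opinion of the Court. The judgment below is affirmed."

record = {
    "case_name": "Doe v. Roe",                # hypothetical
    "citation": "2042 XX 17",                 # vendor- and medium-neutral form
    "court": "Supreme Court of Examplestate", # hypothetical
    "authoring_judge": "Hon. A. Jurist",      # hypothetical
    "sha256": hashlib.sha256(decision_text.encode("utf-8")).hexdigest(),
}

def verify(text: str, rec: dict) -> bool:
    """Check that a copy of the decision text matches the published digest."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest() == rec["sha256"]

print(json.dumps(record, indent=2))
print(verify(decision_text, record))   # True  -- authentic copy
print(verify("tampered text", record)) # False -- altered copy detected
```

A record like this, published alongside each decision under a disclosed standard, would make decisions searchable by metadata fields, bulk-downloadable, and independently verifiable by anyone.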

My hope is that each court system, in furtherance of its longstanding commitment to access, will work to understand these guidelines and to adopt these as priorities. As LII has shown over 25 years, however, the hard work of ensuring access to law is not the government’s obligation alone. We all – libraries, law schools, lawyers, entrepreneurs – should find ways to advocate for and actively participate in creating the world envisioned by LII, in which “everyone [is] able to read and understand the laws that govern them, without cost.” We have a long way to go to realize this vision, especially for caselaw, but we all are fortunate to have LII’s example to follow.


Adam Ziegler is the Managing Director of the Library Innovation Lab at Harvard Law School Library, where he leads several legal technology and information projects, including the Caselaw Access Project, an effort to digitize and make publicly available Harvard’s full collection of historical court decisions. Before joining Harvard in 2014, he founded a small legal startup and represented clients in court for over 10 years.

Ginevra Peruginelli (Institute of Theory and Techniques of Legal Information of the National Research Council of Italy)

[Ed. note: This installment of our 25-for-25 looks, at first, like a bit of a departure for us — it talks about different methods of evaluating legal scholarship. But with a little reading-between-the-lines, it’s not hard to see how well it ties in with questions that are very present for American legal experts. The problem of evaluating the quality of legal expertise expressed, consumed, and commented upon in different online environments — blogs such as this one, online commentary, and nontraditional channels of all kinds — is a stubborn one that is gaining increased attention. How do you measure the quality of scholarship, or its impact? Other disciplines have struggled with this, as reliance on particular publication vehicles becomes obsolete in the face of new methods of dissemination, community discussion, and response. It is high time that we looked at legal scholarship as well. Of late, law librarians interested in so-called “alt-metrics” have begun to.]

The evaluation of the quality of legal publications is now at the center of debate in legal academia in Europe (among others, Flückiger and Tanquerel 2015). Nowadays, in principle, peer review remains the preferred method for assessing the quality of legal scholarship, partly due to the failure of purely metrics-based systems in this area. In the legal sciences, where research output usually takes the form of long written texts, research performance is hard to assess using quantitative indicators: bibliometric methods are not sufficiently capable of measuring research performance in legal scholarship and are not considered trustworthy by the legal community.

In 1992, Edward L. Rubin, professor of law at Vanderbilt University Law School, argued that there is no theory of evaluation for the legal sciences (Rubin 1992). He stated that what actually leads legal academics to assess a work is an undefined concept of quality, which creates a number of conceptual and practical difficulties and produces confusion and unease in the field. It is a matter of fact that many of the most heated discussions on legal scholarship concern the evaluation process, and a sizable number of these are repetitive and unproductive because of the total absence of an evaluation theory. Rubin directly tackles the question of what the foundation for evaluation should be and recommends an epistemological approach to formulating an evaluation theory. His writings raise some interesting issues: the need for criteria such as clarity, persuasiveness, and significance, and the need for evaluators to consider their own uncertainty, especially for topics somewhat removed from their own discipline.

A strong debate is still going on over criteria, and even over the possibility of objective, reliable evaluation in the law domain; major critical issues remain, and no innovative solutions have yet been brought forward.

According to one part of the literature (van Gestel and Vranken 2011; van Gestel 2015; Gutwirth 2009; Epstein and King 2002; Siems 2008), it is possible to identify some critical issues at the core of the debate on legal research assessment at the European level. These are reported below in the form of questions and comments based on the current debate.

(a) Following the research assessment exercises of various European countries, content-based criteria such as originality, significance, and societal impact are adopted. Is there general consensus on the value and interpretation of such criteria?

Depending on the type of research, the literary genre, and the area of law, the above content-based quality criteria can apply very differently. Legal scholarship dedicated to interpreting recent case law or a legal provision has more difficulty meeting the standard of originality than theoretical research on general concepts, problems, and principles of the law (Siems 2008). Similar difficulties arise in evaluating criteria such as internationalization and societal impact, particularly in fields of law that are not part of the international arena, in terms of relevance, competitiveness, and approval by the scientific community, including explicit collaboration with researchers and research teams from other countries.

(b) Is it possible to assess legal research on the basis of bibliometric evaluation techniques more or less widely accepted in other scientific disciplines?

Such alternatives should of course be thoroughly analyzed, taking into account their methodological justification in legal research. Although peer review is the best way to assess legal research and scientific publication, its time-consuming process, the scarce availability of reviewers with expertise in this domain, and the increasing demand for evaluation of research outputs limit the peer review method in legal science. Moreover, governments and policymakers increasingly request background figures that can be used to support the allocation of funds (Gutwirth 2009). This situation has created a need for quantitative measurements of scientific output as support tools for peer review. Performance indicators used in the assessment of the exact sciences are now a strong part of the debate over how to evaluate non-bibliometric areas such as law. However, simply adopting the criteria, evaluation processes, and methods used in other sciences is not a good solution. It would be more appropriate to create transnational standards for legal research quality assessment, taking into account the actual internationalization of research in this area, the increasing mobility of students, and the development of international law schools. The establishment of harmonized standards, or of generally accepted quality indicators, is a challenge to be met despite the differences between national assessment methods, publishing cultures, and academic traditions.

(c) How reliable is peer review?

Finding highly qualified peer reviewers is a difficult task when a pre-selection must be performed, and it is often unclear how reviewers are recruited and selected (Lamont 2009). Beyond that, subjectivity, unconscious biases, and prejudices are impossible to eliminate. Honesty, accountability, openness, and integrity are vital qualities for all reviewers, who should be able to pursue their work in an atmosphere free from prejudice. In addition, if we look at the problem from the point of view of legal journals and their publishing practices, it is important to reach clarity and consensus within editorial boards about how criteria are used and decisions are taken. Editorial boards should follow a well-documented procedure and make it clear to the audience (van Gestel and Vranken 2011). It is also up to the editorial boards to check that submitted papers include a clear explanation of the research question and the research design. Importantly, submissions dealing with comparative law should explain which jurisdictions are taken into account and which methods of analysis are employed. In several European countries there is no common policy framework for articles submitted to national law journals: every journal or publisher follows its own practice in assessing the quality of legal research outputs.

(d) What are the advantages and disadvantages of law journal rankings?

Over the past few years, legal academics and their institutions have become obsessed with the star ratings of the journals in which they publish. On one hand, journal rankings give university management a convenient method of assessing research performance; on the other, research evidence suggests that journal ranking is not a good proxy for the value and impact of an article. Moreover, when journal rankings are based on journal citation scores, the number of citations a journal receives in other periodicals is a very indirect indicator of the quality of any given article in that periodical.
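Why a journal-level citation score says little about a typical article can be shown with a toy calculation (the numbers below are invented purely for illustration): citation distributions are highly skewed, so a journal's average can be driven almost entirely by one or two heavily cited pieces.

```python
# Toy illustration with invented numbers: citations per article in one journal.
from statistics import mean, median

citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 182]  # one outlier dominates the sum

print(mean(citations))    # 20.0 -- the journal-level "score"
print(median(citations))  # 2.0  -- closer to the typical article
```

Judging every article in this journal by the mean of 20 would flatter nine of the ten; the journal score measures the outlier, not the article in hand.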

In particular, the law journal ranking system encourages a situation in which academics become more interested in publishing in specific high-impact journals than in doing research of real value. Moreover, highly qualified researchers are pushed to publish in high-impact journals abroad, and it is no surprise that national periodicals suffer from a lack of top-level submissions. In the longer term, this could threaten the very existence of local legal periodicals.

The idea of a European ranking of law journals represents a great challenge because it would require a cross-border classification of journals. A multilingual law journal database would be an important achievement, reflecting differences of legal cultures and jurisdictions (van Gestel 2015).

(e) Is the relation between legal science and legal practice important for research assessment?

Nowadays a close connection exists between legal science and legal practice, given that both rely on similar instruments for analysis, practical argumentation, and reasoning. Legal science is both the science of law and one of the authoritative and influential sources of that law. This is why legal science and legal practice are so closely correlated. As a result, legal science has to pass two “exams”: a quality test within legal academia, which evaluates its robustness as scientific research, and an assessment of its pertinence and relevance to legal practice. These overlapping dimensions produce legally relevant knowledge, and both should be considered in the process of evaluating legal science (Gutwirth 2009).

(f) Is the harmonization of legal research assessment exercises at European level desirable in years to come?

Legal research could take advantage of its delay relative to the evaluation procedures already developed and carried out for the other social sciences by initiating a scientific debate on the benefits and disadvantages of the various quality evaluation systems. The goal would be to eventually promote uniformity in the definition of indicators and standards (van Gestel and Vranken 2011).

These are some of the key questions most likely to form a framework for future debate, not only because they can promote lively discussions, but because they are also capable of involving countries that have only recently addressed the question of legal research assessment. Legal scholars within each country are the main actors in this discussion. In particular, quality indicators should not be imposed upon legal scholars from a top-down perspective, and transparency as well as accountability are to be valued in the evaluation process so as to build a strong evaluation culture.


Epstein L. and King G. (2002). The Rules of Inference. University of Chicago Law Review, 69: 1–209.

Flückiger A. and Tanquerel T. (2015). L’évaluation de la recherche en droit / Assessing research in law Enjeux et méthodes / Stakes and methods. Bruxelles, Bruylant.

van Gestel R. (2015). Sense and non-sense of a European ranking of law schools and law journals. Legal Studies, 35: 165–185. doi: 10.1111/lest.12050.

van Gestel R. and Vranken J. (2011). Assessing Legal Research: Sense and Nonsense of Peer Review versus Bibliometrics and the Need for a European Approach, German Law Journal, Vol. 12, no. 3 p. 901-929.

Gutwirth S. (2009). The evaluation of legal science. The Vl.I.R.-model for integral quality assessment of research in law: what next ? Brussels, It takes two to do science. The puzzling interactions between science and society, Available at:

Lamont M. (2009). How professors think. Inside the curious world of academic judgment, Harvard University Press, 336 pp.

Rubin E.L. (1992). On Beyond Truth: A Theory for Evaluating Legal Scholarship, 80 California Law Review vol. 80 n. 4 pp. 889-963 (Reprinted in Readings in Race and Law: A Guide to Critical Race Theory, Alex Johnson, ed., West, 2002).

Siems M.M. (2008). Legal Originality, 28 Oxford Journal of Legal Studies 174.


Ginevra Peruginelli is a researcher at ITTIG-CNR. She has a degree in Law and a Ph.D. in Telematics and Information Society from the University of Florence, as well as a Master’s degree in Computer Science from the University of Northumbria, Newcastle. Since 2003 she has been entitled to practice as a lawyer.
She has been involved in several projects at European and national level such as the NiR (Norme in Rete – Legislation on the Net) portal, MINERVA (Ministerial Network for Valorising Activities in Digitisation), DALOS (Drafting Legislation with Ontology-based Support), CARE (Citizens Consular Assistance Regulation in Europe) and e-Codex (e-Justice Communication via Online Data Exchange). She has also worked in a research project promoted by the Publications Office of the EU concerning interoperability issues between the Eurovoc thesaurus and other European thesauri. In 2004 and in 2006 she won two CNR research fellowships as visiting scientist at the Institute of Advanced Legal Studies of the University of London and the Centre de recherche en droit public at the Faculty of Law of the University of Montréal.
Ginevra is the editor-in-chief of the Journal of Open Access to Law, a joint effort of ITTIG, the Autonomous University of Barcelona’s Center for Law and Technology, and the Legal Information Institute.

by David Curle

In order to agree to write about something that is 25 years old, you almost have to admit to being old enough to have something to say about it. So I might as well get my old codger bona fides out of the way. I came of age at the very cusp of the digital revolution in legal information. Two months after my college graduation ceremony in June 1981, IBM launched its first PC. I thus belong to the last generation of students who produced their term papers on a typewriter.

The Former Next Great Thing

When I later entered law school, PCs were pretty well established (we used WordPerfect to write our briefs, of course), and the cutting edge of technology had shifted to new legal research tools. Between trips to the library stacks to track down digests or to tediously Shepardize cases manually, we learned of Lexis and Westlaw, which in my first year were accessed via an acoustic-coupled modem and an IBM 3101 dumb terminal, squirreled away in a tiny lab-like room next to the reference desk in the library. One terminal to serve an entire law school. Sign up to use it via a schedule on the door. Intrigued by this new world of digital information, I took a job in the law library, eventually teaching other students how to search on Lexis and Westlaw between shifts at the reference desk.

By my second or third year, the 3101 was replaced by Lexis’ and Westlaw’s UBIQ and WALT dedicated terminals. My boss Tom Woxland, Reference Librarian and Head of Public Services at the University of Minnesota Law School, wrote an amusing article in Legal Reference Services Quarterly about a conflict between WALT and the library staff’s refrigerator that will give you a good sense of the level of technology sophistication we dealt with on a daily basis in those days.  

It was just a few years after this refrigerator incident that Tom Bruce and Peter Martin started up LII. It’s hard to overestimate the imagination and vision that this must have taken, because the digital legal world was still in its infancy. But they could see the way the world was headed in 1992, and not only that, they did something about it in starting LII.

UBIQ and WALT, locked away in that room in the library, awakened an interest that turned into a career in legal information systems. I gradually lost interest in legal practice as a career as my interest in electronic information systems of all kinds grew.  By the time I first met Tom Bruce, it was in my capacity as a token representative of the commercial side of the legal information world; I was an analyst at the research firm Outsell, Inc., which tracks various information markets, and I covered Thomson Reuters, Reed Elsevier (RELX), Wolters Kluwer, and all of the smaller players nipping at their heels in the legal information hierarchies of the time. Tom called on me to help explain this commercial world to his community of people working in the more open and non-commercial part of the legal information landscape.  

I don’t intend this piece to be a tribute to LII, nor was I asked to provide one. Rather, Tom Bruce asked me to say a few words about the relationship between free and fee-based legal materials and how they relate to each other. In one big sense, that relationship has evolved in the face of new technologies, and that evolution is the focus of this essay. A fundamental shift in the way the legal market approaches legal information is underway: We no longer think of legal information simply as sets of documents; we are starting to see legal information as data.  

To go back to the chronicle of my digital awakening, there were several things about the new legal information systems that excited me even way back in the 1980s:

  • New entry points. Free-text searching in Westlaw and Lexis freed us from having to use finding tools such as digests, legal encyclopedias, and secondary analytical legal literature in order to find relevant cases. Suddenly any aspect of a case was open to search, not just those that legal indexers or secondary legal materials might have chosen to highlight. Dan Dabney, the former Senior Director, Classification Services at Thomson Reuters, wrote a thoughtful piece about the relationship between searching the natural language of the law, on the one hand, and, on the other, the artificial languages, like the Key Number System, that we use to describe the law. He identified the advantages and disadvantages of both, but it was clear that free-text search was a leap forward. His article has held up well and is worth a read: The Universe of Thinkable Thoughts: Literary Warrant and West’s Key Number System
  • Universal availability.  Another aspect of the new legal databases that seemed obvious to me pretty early on was that comprehensive databases of electronic legal materials would be available anywhere, anytime. This had implications for the role of libraries, and for the workflow of lawyers.  It also had access to justice implications, because while most law libraries were open to the public and free (if inconvenient to use), online databases were, at the time, mostly commercial operations with paywalls. If theoretically available anytime and anywhere, legal materials were nonetheless limited to those who could invest the money to subscribe and the time to master their still-complex search syntax.
  • Hyperlinking. While the full hyperlinking possibilities of the World Wide Web were a decade off, I could see that online access to legal materials would shorten the steps between legal arguments and supporting sources. Where before one might jot down a series of case citations in a text and then go to the stacks one by one to evaluate their relevancy, online you could do this all in one sitting. The editorial cross-referencing that already went on in annotations, footnotes, and in-line cites in cases was about to become an orgy of cross-linking (across all kinds of content, not just legal content) that could be carried out at the click of a mouse.

But as revolutionary as these new approaches were, electronic legal research systems still operated primarily as finding tools. The process of legal research was still oriented toward a single goal: leading the researcher to the documents that contained the answers to legal questions. The onus was still on lawyers to extract meaning from those documents and embed that meaning in their work product.  

A New Mindset: Data not Documents

In recent years, however, a shift in mindset has occurred. Some lawyers, with the help of data scientists, are now starting to think of legal information sources not as collections of individual documents that need to stand on their own in order to have meaning, but as data sets from which new kinds of meaning can be extracted.  

Some of those new applications for “law as data” are:

  • Lawyer and court analytics. Lex Machina and Ravel Law, both recently acquired by LexisNexis, are the poster children for this phenomenon, but others are joining the fray. Lex Machina takes court docket information and analyzes it not for its legal content but for performance data – how fast does this court handle a certain kind of motion, how well has that firm performed? The goal is to identify trends and make predictions based on objective performance data, which is quite a different inquiry than looking at a case on the merits alone.
  • Citation analysis and visualization. Its value is open to discussion, but some commercial players are bringing new techniques to citation analysis, and quite often the result is some form of visualization. Ravel Law and Fastcase offer various kinds of visualizations that take sets of case law data and turn them into visual representations intended to illuminate and reveal relationships that traditional, more linear citation analysis might not find.
  • Usage analysis. The content of documents is valuable, but so are the trails of crumbs that users leave as they move from one document to another. Finding meaning in those patterns of usage is just as useful for lawyers as it is for consumers in the Amazon age of “people who bought this also bought that.” Knowing where other researchers have been is valuable data, and systems like Westlaw are able to track relationships between documents and leverage them as information that can be as valuable as any editorial classification scheme.  
  • Entity extraction. Legal documents are full of named entities: people, companies, product names, places, other organizations. Computers are getting better at finding and extracting those entity names from documents. This has a number of uses beyond just helping to standardize the nomenclature used within a data source. Open standards for entity names mean legal data can more easily be integrated with other types of data sources. One such open standard identifier is Thomson Reuters’ PermID.
  • Statutes and regulations as inputs to smart contracts. It’s only a matter of time before large classes of contracts become automated and self-executing smart contracts supported by distributed ledgers and blockchains.  A classic example of such a smart contract is a shipping contract, where one party is obligated to pay another when goods arrive in a harbor, and GPS data on the location of a ship can be the signal that triggers such payment. But electronically stored statutes and regulations, especially to the extent that they govern quantitative measures such as time frames, currencies, or interest rates, can also become inputs to smart contracts, dynamically changing contract terms or triggering actions or obligations without human (i.e. lawyerly) intervention.
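To make the last bullet concrete, here is a minimal sketch of a smart-contract-style trigger, written in plain Python rather than an actual distributed-ledger language. Everything in it is invented for illustration: the harbor coordinates, the crude geofence check, and the structured “regulation” holding a statutory interest rate that feeds a payment calculation.

```python
from dataclasses import dataclass

# Hypothetical machine-readable "regulation": a statutory late-payment
# interest rate, as it might appear in a structured legal data source.
REGULATION = {"late_payment_interest_rate": 0.08}  # 8% per annum (illustrative)

HARBOR = (53.54, 9.98)   # hypothetical harbor coordinates (lat, lon)
GEOFENCE_DEG = 0.05      # crude geofence radius, in degrees

@dataclass
class ShippingContract:
    """Toy shipping contract: payment is triggered by a GPS signal."""
    amount_due: float
    paid: bool = False

    def on_gps_update(self, lat: float, lon: float) -> bool:
        # When the ship's position enters the harbor geofence,
        # the payment obligation fires without human intervention.
        if (abs(lat - HARBOR[0]) <= GEOFENCE_DEG
                and abs(lon - HARBOR[1]) <= GEOFENCE_DEG):
            self.paid = True
        return self.paid

def late_payment_due(principal: float, days_late: int) -> float:
    # The contract term is not hard-coded: it is read from the
    # structured regulation, so a change in the published rate
    # dynamically changes the obligation.
    rate = REGULATION["late_payment_interest_rate"]
    return principal * rate * days_late / 365
```

The point of the sketch is the data flow, not the mechanics: one input is sensor data (the GPS position), the other is a statute rendered as structured data rather than prose, and both drive contract behavior directly.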



In all of these applications, we are moving quite a bit away from seeing legal documents for their “face value,” the intrinsic legal principle(s) that each document stands for. Rather, documents and interrelated sets of documents are sources of data points that can be leveraged in different ways to speed up and/or improve legal and business decisions. The data embedded in sets of legal documents becomes more than simply the sum of their substantive legal content; other meanings with strategic or commercial value can be surfaced.

The Future: Better Data, Not Just Open Data

If there is one thing that the application of a lot of data science to the law has revealed, it’s that the law is a mess. Certain jurisdictions are better than others, of course, but in the US the raw data that we call the law is delivered to the public in an unholy variety of formats, with inconsistent frequency, various levels of comprehensiveness, and with self-imposed limitations on access. On the state level alone, Sarah Glassmeyer, in her State Legal Information Census, identified 14 different barriers to access ranging from lack of search capability to lack of authoritativeness to restrictions on access for re-use. Add to that the problematic publishing practices at the federal level (PACER, anyone?) and the free-for-all at the county and municipal levels, and it’s nothing less than an untamed data jungle.

It is notoriously difficult to acquire and analyze what has been called the operating system of democracy: the law. When Lex Machina was acquired by LexisNexis, one of the primary motivations it gave was the high cost of acquiring, and then normalizing, the imperfect legal data that comes out of the federal courts. LexisNexis had already made the significant investment in building that data set; Lex Machina wanted to focus on what it was good at rather than spend its time acquiring and cleaning up the government’s data.

When a large collection of US case law was made available to the public via Google Scholar in 2009, many saw this as the beginning of the end.  Finally, they thought, access to the law would no longer be a problem.  Since then, more and more legal sources – judicial, legislative, and administrative – have been brought to the public domain. But is that kind of access the beginning of the end, or the end of the beginning? Or the beginning of a new mission?

In a thoughtful 2014 essay about Google Scholar’s addition of case law, Tom Bruce reminded us not to get too self-congratulatory about simple access to legal documents. Wider and freer availability of legal documents does solve one set of problems, especially for one set of users: lawyers. For the public at large, however, even free and open legal information is as impenetrable as if it had been locked up behind the most expensive paywalls, because most legal information is written and delivered as if only lawyers need it. In his essay, he sees the “what’s next” for the Open Access movement as opening legal information to the people who, despite not being lawyers, are nonetheless affected by the law every minute of their lives.

Yes, that “what next” does include pushing to make more primary legal documents freely available in the public domain. Yes, it does mean that organizations like LII can continue to help make law and regulations easier for non-lawyers to find, understand, and apply in their lives, jobs, and industries.  But Tom Bruce provided a few hints at what is now clearly an equally important imperative. Among his prescriptions for the future: “We need to increase the density of connections between documents by making connections easier for machines (rather than human authors) to create.”

Operating in a “law as data” mindset, lawyers, legal tech companies, and data-savvy players of all kinds will be looking for cleaner, better-structured, more machine-readable, and more consistently formatted legal data. I think this might be a good role for the LIIs of the world in the future – not instead of, but in addition to, their core mission of making raw legal content more available to everyone. In a 2015 article, I lamented the fact that so much legal technology expertise is wasted on simply making sense of the unstructured mess found in legal documents. Someday, all the effort used to make sense of messy data might stimulate a movement to make the data less messy in the first place. I cited Paul Lippe on this, in his discussion of the long-term effects of artificial intelligence in the legal system: “Watson will force a much more rigorous conversation about the actual structure of legal knowledge. Statutes, regulations, how-to-guides, policies, contracts and of course case law don’t work together especially well, making it challenging for systems like Watson to interpret them. This Tower of Babel says as much about the complex way we create law as it does about the limitations of Watson.”

LII and the Free Access to Law Movement have spent 25 years bringing the legal Tower of Babel into the sunlight. A worthy goal for the next 25 years would be to help guide that “rigorous conversation about the structure of legal knowledge.”

David Curle is the director of Market Intelligence at Thomson Reuters Legal, providing research and thought leadership around the competitive environment and the changing legal services industry.

by Mariya Badeva-Bright

Twenty-five years ago the LII at Cornell showed the world that access to the law via the Internet for all is possible.  It is not only possible, but can be cheap, even free.  And that “free” can be sustained.  It was and continues to be illuminating, even in the remotest places in Africa. The importance of the pioneering work of the LII, as it translates in Africa, is best understood against the background of complete absence of law reports and updated legislation in many African countries.  

Before free access to law touched down in South Africa in 1995, legal information was distributed primarily through a duopoly of commercial legal publishers. Court reporters – usually advocates practicing in the region of the court – would act as correspondents for the legal publishers. Cases would take months to be printed in the law reports and, due to the constraints of the paper medium, heavy filtering could prevent the publication of really interesting cases from courts lower in the judicial hierarchy. Sometimes judgments marked by the presiding judge as reportable were omitted from publication too. Space in the reports came at a premium – few got in.

This frustrated users of legal information (and most judges, who could not showcase their work and missed out on promotions!). It meant that additional resources were spent on informal networks for gathering much-needed legal information. It also usually meant that only the handful of rich law firms in the major urban areas of the country had access to court judgments, which gave them an advantage in preparing for litigation. Hunting for judgments from colleagues, court registries and court libraries was commonplace, as candidate attorneys were sent to the courts’ archives to look for precedent. It was not efficient, but it often proved effective for those who could afford this kind of information-gathering. Magistrates, judges and government lawyers could not dream of having this kind of information at their disposal. Citizens rarely had a chance to read a full judgment for themselves.

Imagine (remember?) that time! Well, this would still be the situation in South Africa, and most definitely in many other African countries today, were it not for SAFLII, AfricanLII, and 15 other LII projects across our continent that make the law available to all for free. SAFLII started at the University of the Witwatersrand when the then Head of the Law Library, Ruth Ward, inspired by what Cornell had been doing for the previous three years, enlisted the help of a law student with an unusual interest in computers to develop a website to host the judgments of the newly created South African Constitutional Court (there was yuuge demand for this material locally and regionally). The Law School later partnered with AustLII to upgrade the software infrastructure, and SAFLII was born, a new member of the Free Access to Law Movement.

In one of a few firsts in the FAL movement – almost exclusively academic until then – SAFLII was acquired by and moved to the Constitutional Court of South Africa. I remember some expressed apprehension – what would happen to an independent academic project under government? – but this turned out to be the best move. SAFLII flourished with the backing of the Constitutional Court judges and expanded its content through a partnership with the Southern African Chief Justices Forum. An unprecedented amount of African legal content slowly made its way to the web. LexUM and CanLII helped us a lot with advice on editorial practices and processing content, while Andrew and Philip of AustLII would fly in once or twice a year to work on site and fine-tune the software.

We dreamt of systems the magnitude of AustLII and CanLII, and the sophistication of the LII.  But our reality was different.  When we were not busy digitizing paper-based content, we were engaged in training our users in electronic legal research. Yet users continued to demand the convenience of digested cases and consolidated legislation. Capacity was hard to come by.  Our friends at Kenya Law Reports right about then decided to open access to their (government funded) material.  This raised the bar higher – every judge in our network wanted their own Kenya Law.

To some extent, this became one of the core reasons for setting up the AfricanLII operation – a programme that would contextualize the experience our team had gathered developing SAFLII, to help build locally responsive LII operations. The justice sector in most of our countries of operation was starving for proper legal information – in the vast majority of places there is no regular law reporting or law consolidation – and that affected their work and impacted society and individuals’ rights, sometimes in the most adverse ways. Both law revision and law reporting are expensive undertakings, especially when one has to start from scratch. But building a massive collection of materials would not be useful if our users could not or would not make use of it. So we had to adapt and, with our meager resources, devolve a centralised model (SAFLII) into local operations that allowed for better contextualization of the LIIs.

The proper development of legal infrastructure, which is what LIIs mostly do in many African countries, means moving in step with the overhaul of vital areas of substantive law – human rights, environment, business and commercial law, ICTs and media – all areas developing at a considerable pace in the region. How do we adapt our LIIs to assist this development and remain relevant?

In this vein, I remember a sustainability workshop back in 2009 with LexUM, the LII and others, at which Tom Bruce made a point about being strategic in the choices informing our LII development plans. Of course he raised it in his inimitable style – the fable involved something about throwing bottles into an ocean of bottles and the effects of that – but the advice was right on point. When faced with a complete vacuum, as we were with the lack of digital legal information in Africa, the easiest thing to propose and attempt is to throw all your available resources at digitizing all information, to serve all potential users out there.

African LIIs, operating with scarce funding and in difficult economic times, are now more than ever orienting themselves towards capitalizing on, and further developing, the few collections, competencies and advantages that deliver maximum value for their users. Having built a solid base of legal material, we are now looking at arranging and communicating it in ways that are responsive to the needs of the justice sector. For most LIIs, that means digesting legal information (or sourcing interpretative material) and pushing it through social media channels with the aim of educating citizens. Or editorializing legal information to serve commercial audiences – and derive income for the LIIs. Or packaging our LIIs and shipping them for off-line use by magistrates working in remote, unconnected areas of Africa. All of this has meant that we have had to strike a balance: pulling resources out of digitization (the ocean of content) and investing in services (new kinds of bottles) that have the potential to sustain our African LIIs into the future.

The LII at Cornell was a pioneer 25 years ago, but Tom, Sara and crew continue to push the envelope – innovating not only in technology but also in the business of free law. I suspect their flexibility and adaptability are among the reasons why the LII is still going strong and growing 25 years into its existence. And this has been the ultimate lesson for me as I continue to work with a tight-knit group of committed individuals across the African continent, forging ahead and cementing their African LIIs into the future of their countries. Our collective hats off to the LII @ Cornell for helping us figure things out along the way!

Mariya Badeva-Bright is the co-founder of the African Legal Information Institute. From 2006 to 2010, she was the head of Legal Informatics and Policy for the Southern African Legal Information Institute (SAFLII).  She has taught undergraduate courses in Legal Information Literacy and coordinated the postgraduate program in Cyberlaw at the University of the Witwatersrand in Johannesburg.  She holds a Magister Iuris in law from the Plovdivski universitet “Paisii Hilendarski” in Bulgaria, and an LLM in legal informatics from Stockholm University.



by Robert J. Ambrogi

On the 25th anniversary of the Legal Information Institute, I’m wondering how to talk about its significance without sounding like an old fogey. You know what I mean – those statements from your elders that always start, “Why, when I was younger …”

But to understand how far ahead of its time the LII was, you need to understand something about the world of legal information as it existed in 1992, the year it launched.

First of all, the Internet was still in its infancy, relative to what we know it as today. It was still a limited, text-only medium used primarily by academics and scientists, navigable only through archaic protocols with names such as Gopher, Archie, Jughead and Veronica (I’m not making those up), using esoteric and confusing commands.  

The first functioning hyperlinked version of the Internet – what we came to call the World Wide Web – had been developed just the year before, by Tim Berners-Lee at CERN in Switzerland. The web only began to gain momentum in 1993 – a year after the LII’s founding – with the development of the first two browsers that allowed graphical elements on web pages. One was Mosaic, whose developers went on to build Netscape Navigator; the other, Cello, was the first browser designed to work with Microsoft Windows 3.1, which had gone on the market the year before.

Remarkably, the Cello browser was created at the LII by cofounder Thomas R. Bruce so that the LII could begin to implement its vision of publishing hyperlinked legal materials on the Internet. How’s that for ahead of its time? No one had yet created a graphical browser that worked with Windows, so the LII built the first one.

As for the availability of legal information on the Internet in 1992 – fuggedaboutit, there wasn’t any. Neither Westlaw nor Lexis-Nexis was accessible through the Internet; access required either a proprietary dial-up terminal connecting over painfully slow phone lines or a visit to the library for hard-copy volumes. There was virtually no online access to court opinions or statutes or legal materials of any kind. Few legal professionals had even heard of the Internet.

In fact, the LII was the first legal site on the Internet. Think about that – about the proliferation and ubiquity of law-related websites today – and consider how prescient and trailblazing were Bruce and cofounder Peter W. Martin when they started the LII that quarter-century ago.

In short order, they developed and set in motion a model of free legal publishing that carried us to where we are today. They were the first to begin regularly publishing Supreme Court opinions on the Internet – at least a decade before the Supreme Court even had a website of its own. They published the first online edition of the U.S. Code in 1994. They created the first “crowdsourced” legal encyclopedia, Wex.

Blazing the Internet Trail

I came to the Internet party a bit later. In 1995, I began syndicating a column for lawyers about the Internet. In my third column, in May 1995, I surveyed the availability of court decisions on the Internet. Apart from the Supreme Court decisions then available through the LII’s website and some academic FTP sites, the only other decisions that could be found for free on the Web were those of two federal circuits, the 3rd and 11th (and only a year’s worth); the New York Court of Appeals; and the Alaska Supreme Court and Court of Appeals. North Carolina opinions were online via an older Gopher site.

In short, even three years after the LII’s launch, the Internet was still far from a viable medium for legal research. Here is how I described the situation in a December 1995 column:

When it comes to legal research, the Internet remains a promise waiting to be fulfilled. The promise is of virtually no-cost, electronic access to vast libraries of information, of an easily affordable alternative to Westlaw and Lexis that will put solo and small-firm lawyers on the same footing as their large-firm brothers and sisters.

The reality is that the Information Superhighway is littered with speed bumps. Courts, legislatures and government agencies have been slow to put their resources online. Those that do offer only recent information, with little in the way of archives. Secondary sources, such as treatises, remain even rarer. On top of it all, information on the Internet can be hard to find, requiring resort to a variety of indexes and search engines.

Yes, youngsters, we used to call it the Information Superhighway. Blame Al Gore.

The point is, we’ve come a long way baby. And there is little question in my mind that we would not be where we are today had Tom and Peter not had the crazy idea to launch the LII. From the start, their notion was to make the law freely and easily available to everyone. As the website says to this day, the LII “believes everyone should be able to read and understand the laws that govern them, without cost.”

In 1992, that was a revolutionary concept. Heck, in 2017, it is a revolutionary concept.

They didn’t have to go that route. They could have pursued a commercial enterprise in the hope of cashing in on the potential they saw in this emerging medium. But they didn’t. They chose the route they now call “law-not-com.”

So successful was the LII’s model that it inspired a world of copycats promoting free access to legal information all across the globe. These include the Asian Legal Information Institute, the Australasian Legal Information Institute, the British and Irish Legal Information Institute, the Canadian Legal Information Institute, the Hong Kong Legal Information Institute, the Southern African Legal Information Institute, the Uganda Legal Information Institute, and even the World Legal Information Institute, to name just some.

Continuing to Set the Standard

In my old-fogey nostalgia, I’ve been speaking about the LII in the past tense. Yet what is perhaps most remarkable about the LII is that it continues to set the standard for excellence and innovation in legal publishing. In the technology world, trailblazers often get left in the dust of the stampede that follows in their paths. But the LII continues to expand and innovate, both in the collections it houses and in its reach to global audiences.

Last year, for example, the LII became the new home for Oyez, the definitive collection of audio recordings of Supreme Court oral arguments. And, as more and more citizens take an interest in understanding their legal rights, traffic to the LII has been booming.

Twenty-five years after the LII ventured out into a largely barren Internet, striving to make legal information more widely available to the public, it is remarkable how far we’ve come. Even so, it is also disappointing how far we still have to go. Unfortunately, the legal-information landscape remains dotted with locked bunkers that keep many primary legal materials outside the public domain.

I don’t begrudge commercial publishing and research companies their right to charge for content they’ve created and innovations they’ve engineered. But I staunchly believe that there needs to be a baseline of free access for everyone to legal and government information. That was the goal of the LII when it launched in 1992 and that is the goal it has continued to work towards ever since. Were it not for the work of the LII, we would be nowhere as near to achieving that goal as we are today.

Robert J. Ambrogi is a lawyer and journalist who has been writing and speaking about legal technology and the Internet for over two decades. He writes the award-winning blog LawSites and is a technology columnist for Above the Law, ABA Journal and Law Practice magazine. Bob is a fellow of the College of Law Practice Management and was named in 2011 to the inaugural Fastcase 50, honoring “the law’s smartest, most courageous innovators, techies, visionaries and leaders.”


This year, I was lucky enough to attend the annual LVI conference, held in Limassol, Cyprus.  A truly beautiful place, where Laris Vrahimis from CyLaw and the Cyprus Bar went out of their way to make a memorable event.  It was also an ongoing affirmation that the Free Access to Law Movement is alive and working.  But there was also a note of frustration and pessimism in the air.  The note of frustration was summed up in the question “where do we go from here?”  After 25 years of LIIs, this is a fair question.

It’s a very important question.  The LIIs across the world have been working on making primary source law available to their fellow citizens, and have gotten pretty good at it.  There are still far too few LIIs, but the ones that are around have the basics down pretty well.  But most are stuck at that basic level.  This is a problem with several levels.  The first is that the basics themselves are not all that easy.  It’s a lot of work to gather, process, and publish the law on the shoestring budgets that we all have.  And it is of crucial importance that the basic primary source law stay available.  This basic level must be maintained.

But what about everything else that ought to be done?  Here are three things to do.  There are places, like Cornell, that are doing some already, but there is room for every LII to think about and work on these steps.  

Access to Justice

The first item is assisting users with interpretive materials and guidance.  Fortunately, the Cornell LII, the Center for Computer Assisted Legal Instruction (CALI), and Justia have been doing things along these lines already.  For years now, Cornell LII has been developing WEX, a free legal encyclopedia and dictionary.  They also have the Supreme Court Bulletin.  Justia has similar services in the form of the Justia blog and the Justia Verdict legal commentary site, as well as its crowdsourced court decision annotations.  In the case of the LII, the labor and expertise is supplied mostly through the students of Cornell Law School, under the supervision of Cornell LII editors.  In the case of Justia, it is lawyers and academics who wish to be published, and who are getting advantages from the Justia service in return for their efforts.

CALI does not have decision commentary, but has developed its A2J guided interview software system.  A2J allows law clinics to develop online interviews that guide clients through all the information needed to address a selected legal issue, and provide needed information or even print court or other documents ready for filing.

Translating these kinds of services to other LIIs might be harder or easier depending on their individual circumstances.  Some LIIs may be in a position to recruit volunteer labor, in which case generation of commentary and guidance for popular benefit could be a practical path.  As to the CALI A2J system, it is available to anyone.  However, its use requires a great deal of initial dedication and labor to produce an interview, and any interview produced will require maintenance.


Preservation

This may be the least interesting thing that we can be involved with.  It is certainly not going to generate interest (or donations) from the public.  However, it is of great importance.  How easy would it be for 25 years of vital legal information to be wiped out in a small and terrible flash?  Even more insidious is a slow bleed from bit rot.  It’s the kind of problem that we won’t be aware of until it’s already upon us.

Now it goes without saying that we all back up.  And we all back up carefully and regularly.  But as we move forward, and look to the long term, we know that real disasters will come upon us at some point.  We can assure ourselves that it won’t happen anytime soon, or on our watch.  But of course, that is exactly the sort of thinking that the librarians in Alexandria engaged in.  It did work for a long time, but not indefinitely.  The only real solution to data longevity is the old solution that the print world has been using since the development of the printing press: replication and distribution.  Many copies, distributed as widely as possible.
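The “many copies” strategy only protects against bit rot if the copies can be checked against one another.  One common way to do that is to compare cryptographic digests of the same document across mirrors; a mismatch flags a copy that has silently rotted.  Here is a minimal sketch of such an audit, assuming each mirror holds the collection under the same relative file names (the paths and helper names are illustrative, not any LII’s actual tooling):

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 digest of a file, read in chunks to handle large documents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def audit(mirrors: list[Path], relative_name: str) -> dict[str, list[Path]]:
    """Group the mirrors' copies of one document by digest.

    If every copy is intact, there is exactly one group.  More than one
    group means at least one copy differs -- and the majority group is a
    reasonable guess at the authentic text.
    """
    groups: dict[str, list[Path]] = {}
    for mirror in mirrors:
        d = digest(mirror / relative_name)
        groups.setdefault(d, []).append(mirror)
    return groups
```

With three or more mirrors, a periodic audit like this can both detect a rotted copy and identify which institutions still hold a good one to restore from.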

To the computer scientist, this seems horribly inefficient.  It is.  But they must overcome their horror, and understand that efficiency is not an end in itself.  Longevity is far more important.  And to live indefinitely, data must be immune from institutional failure.  The only way to guarantee that is not to rely on single institutions.  

A more serious barrier to widespread replication of data is distrust, both within institutions and nationally.  On the institutional level, there are understandable fears concerning reputation, prestige and funding.  If an LII allows other institutions to have a copy of the material they work so hard to develop, they will no longer get the credit they deserve.  In the long run, this will lead to a lack of support for the LII.  On the national level, some LIIs fear that sharing their data with institutions outside of their country will damage their standing with the governmental bodies they rely on for their data and for support.

Both of these are real problems that cannot be dismissed lightly.  However, as with the computer scientists, these hesitancies should not stand in the way of long term viability of the data that LIIs work so hard to develop.  To the extent we can do so, we need to distribute our data.  If this is just to places willing to act as repositories (with an agreement not to republish), that would be enough to insure the survival of the data.  For others, acknowledgement of their efforts through branding, etc. may be enough.  But in the end something like this needs to happen.  As a librarian, I can see that if every law library (law library defined as any institution that collects law) in the world had an electronic copy of all the world’s law, it would be very difficult to lose anyone’s law.  That would be quite something.  

A U.S. Problem: Administrative Decisions

I was very interested and encouraged to read Pierre-Paul Lemyre’s February 22 post, “A Short Case Study of Administrative Decision Publishing,” in which Washington state’s PERC decisions are being made public.  For me, this is the next frontier of legal publishing that is badly in need of attention.  In the U.S., all 50 states and the federal government have elaborate administrative law structures that include administrative tribunals.  These tribunals are not a part of the regular judiciary, but are attached to the executive branch of government, usually the department with subject-matter jurisdiction.  In the past, the most important of these tribunals had their decisions published in print, usually by the GPO.  Of course, sending information to the GPO is not something agencies do very much any more, and from the way many government agency websites are organized, many either do not publish their ALJ decisions or hide them deep within their websites.  In the best of cases, they are not well searchable, and there is certainly no easy way to compare one department’s decisions with any other.

The result of the above situation is that only the ALJs and expert practitioners are even aware of the existence of ALJ decisions in any particular field.  Even among those practitioners, there is little or no knowledge of how other agencies adjudicate identical issues.  On the state level this situation is often worse (except in places like New Jersey, where there is a central Office of Administrative Law which hears all administrative cases and diligently publishes their decisions).

Imagine, however, the possibilities raised by gathering and publishing federal ALJ decisions in an integrated collection.  In New Jersey, where these decisions are published, there is a large body of administrative common law which lends the consistency of stare decisis to their decisions.  This applies not only to decisions within each agency, but on similar issues between agencies as well.  The unified cadre of ALJs certainly makes this possible, but even without that, the existence of a full set of decisions which can easily be browsed and compared gives great impetus towards uniformity and predictability in decision making.  It is a great aid to the agencies, the bar and the public.

Unfortunately, I despair of ever convincing the federal government to embrace this sort of arrangement.  However, this is exactly the sort of project that an LII can excel at.  The gathering will be difficult, but doing this will greatly improve the state of American law.

John P. Joergensen is the Senior Associate Dean for Information Services, a Professor of Law and an award-winning Director of the Law Library that serves Rutgers Law Schools in both Newark and Camden.

Professor Joergensen organized the New Jersey Courtweb Project, which provides free Internet access to the full text of the decisions of the New Jersey Supreme Court and appellate courts, Tax Court, administrative law decisions, U.S. District Court of the District of New Jersey decisions, and the New Jersey Supreme Court’s Ethics Committee opinions. His work also included digitizing U.S. congressional documents, the deliberations of state Constitutional Conventions, and other historical records. In 2007 he received the Public Access to Government Information Award from the American Association of Law Libraries and in 2011 was named to the Fastcase 50 as one of the country’s “most interesting and provocative leaders in the combined fields of law, scholarship and technology.”


by G. Burgess Allison



The pioneer is a curious thing.  In the Old Days, pioneers were pretty easy to understand: There’s a mountain way over there that nobody’s crossed before—why don’t we cross it and see what’s on the other side?  But as we tamed our various geographical wilderni, pioneers had to tell much more difficult stories: No seriously, we’re gonna use electricity to talk to each other.  But of course we’ll have to hook up Really Long Wires between every building in the country.  (Well, until we switch to fiber.)

Cornell’s Legal Information Institute stands as one of those pioneers—one that was faced with telling a difficult story to a generally skeptical and plainly technophobic audience:  

No seriously, there’s this thing called the Internet.  (And—we’re off to a rocky start already.)  It’ll give us the opportunity to radically transform access to information of significance to the entire legal profession.  

Really?  Like Lexis and Westlaw?  Because we have that already.  

No, no, think broader than that.  And access will be free.  

Free?  Who’s gonna do headnotes and Key Numbers for “all information” … for free?  

Oh man, you just don’t get it.  The whole world is gonna change.

No I don’t.  Call me when the world has changed.

Here’s the thing about pioneers.  First and most importantly, after exploring the wilderness, after falling into traps and digging themselves out again, after making mistakes and learning lessons the hard way, they come back to the rest of us and tell everything!  Pioneers suffer all the personal pains of trailblazing, then return with the stories and findings, and with just a little bit of nurturing tell you exactly how to avoid all the difficulties.  With a tiny amount of encouragement, they’ll even offer to go out again and guide you along the way.

25 years ago, that’s exactly what we needed.  Tom Bruce and Peter Martin had the vision to see transformative change over the horizon, then set up shop to provide a home for experiments and new opportunities.  The LII was built to explore and try things out.  Some of those things would succeed, some would fail—but as we watched and followed the LII we learned that each effort was rolled out with a genuine enthusiasm and an open mind for the possibilities.  Don’t get me wrong, the Internet is not a judgment-free zone where every player wins a trophy.  This is an ENTJ wilderness with an embarrassingly-high score in Judging.  Technologies that don’t make it get pushed aside in a heartbeat.  Of course this is difficult for a laboratory like the LII.  Intellectually, you want to give each new technology time—time to show what it can do, time to make mistakes and attempt corrections, time to mature.  But realistically, tempus fugits faster on the Internet than anywhere else—the Internet does not embrace patience.  Any more than the legal profession embraces change.

Speaking of which … while the profession has well earned its reputation for resisting change (cf. IBM mag card typewriters), that does not mean the entire profession stuck its head in the technological sand.  Indeed, as an occasional speaker at the American Bar Association’s annual TECHSHOW conference, I was stunned at the audiences we drew on Internet-related topics: we filled the hallways when they put us in a smaller room; and we still went SRO when they put us in a bigger room.  So high was the enthusiasm (and so compelling was the pioneer spirit to share what people had discovered) that some already-robust panel discussions turned quickly into even-more-robust audience discussions as discoveries and new web sites were shouted from the audience.  The topic became lightning in a bottle.  One of the most popular programs at TECHSHOW became Sixty Sites in Sixty Minutes.  The excitement was palpable.

Certainly I was excited about what was happening as well.  In my own case, I was fortunate enough to have an outlet in the column I wrote for the ABA’s Section of Law Practice, Technology Update.  I tried, ever so hard, to explain to my readership just how big a change was coming.  The responses I got showed an intense level of interest, but a continued lack of information.  That in turn led to writing The Lawyers Guide to the Internet—which included Erik Heels’s groundbreaking list of online legal resources, The Legal List, and Lyonette Louis-Jacques’ list of law-related discussion groups, Lawlists.  While Lawyers Guide barely scratched the surface of Internet basics, it became the best-selling title in the ABA’s book publishing program.  Interest was high.

Two quick notes about Lawyers Guide:  First, it speaks volumes about how far we’ve come that Erik’s Legal List could actually contain every law-related web site and online resource.  Second, the first drafts of Lawyers Guide didn’t include this “new” technology called web sites—they hadn’t been invented yet.  They were added during the review of proof pages—not normally the time you would make such a significant change (with sincere gratitude to the ABA book program).  The only screen shot of a web page in the book came from the first and most prominent hypertext-enabled law-related web site.

The LII was site #1 in what I called Burge’s Bookmarks.  And it was featured so many times in the Sixty Sites programs that we eventually retired it to Hall of Fame status—to make room for sites and capabilities that were newer and less well-known.

The LII was, and remains, the best of the wilderness.  A place where pioneers are welcomed, to experiment and try things out.  A place where the Rest Of Us can come and see what the pioneers are up to.  And a place where the pioneers are so excited about what they’re doing that they just can’t help but share what they’ve learned.

Thank you LII, thank you Tom and Peter … now back to work, there’s so much more to be done!

I love technology. 🙂

G. Burgess Allison is a Fellow in the College of Law Practice Management and is an active member of the American Bar Association’s Law Practice Management Section (LPMS). He wrote the “Technology Update” column in Law Practice Management magazine for 18 years, and authored “The Lawyer’s Guide to the Internet,” the best-selling publication in the history of the ABA’s book-publishing program. He has served on the Council for LPMS, and as Publisher and Technical Editor for LPM magazine. Burgess has a J.D. from the University of Michigan and a B.A. from the University of Delaware. Prior to his retirement, he was the IT Director for MITRE’s Center for Advanced Aviation System Development (CAASD).


[Ed. note:  two of Professor Perritt’s papers have strongly influenced the LII.  We recommend them here as “extra credit” reading.  “Federal Electronic Information Policy” provides the rationale on which the LII has always built its innovations.  “Should Local Governments Sell Local Spatial Databases Through State Monopolies?” unpacks issues around the resale of public information that are still with us (and likely always will be).]

In 1990 something called the “Internet” was just becoming visible to the academic community. Word processing on small proprietary networks had gained traction in most enterprises, and PCs had been more than toys for about five years. The many personal computing magazines predicted that each year would be “the Year of the LAN”–local area network–but the year of the LAN always seemed to be next year. The first edition of my How to Practice Law With Computers, published in 1988, said that email could be useful to lawyers and had a chapter on Westlaw and Lexis. It predicted that electronic exchange of documents over wide-area networks would be useful, as would electronic filing of documents, but the word “Internet” did not appear in the index. My 1991 Electronic Contracting, Publishing and EDI Law, co-authored with Michael Baum, focused on direct mainframe connections for electronic commerce between businesses, but barely mentioned the Internet. The word Internet did appear in the index, but was mentioned in only two sentences in 871 pages.

Then, in 1990, the Kennedy School at Harvard University held a conference on the future of the Internet. Computer scientists from major universities, Department of Defense officials, and a handful of representatives of commercial entities considered how to release the Internet from its ties to defense and university labs and to embrace the growing desire to exploit it commercially. I was fortunate to be on sabbatical leave at the Kennedy School and to be one of the few lawyers participating in the conference. In a chapter of a book published by Harvard afterwards on market structures, I said, “A federally sponsored high-speed digital network with broad public, non-profit and private participation presents the possibility of a new kind of market for electronic information products, one in which the features of information products are ‘unbundled’ and assembled on a network.”

The most important insight from the 1990 conference was that the Internet would permit unbundling of value. My paper for the Harvard conference and a law review article I published shortly thereafter in the Harvard Journal of Law and Technology talked about ten separate value elements, ranging from content to payment systems, with various forms of indexing and retrieval in the middle. The Internet meant that integrated products were a thing of the past; you didn’t have to go inside separate walled gardens to shop. You didn’t have to pay for West’s key numbering system in order to get the text of a judicial opinion written by a public employee on taxpayer time. Soon, you wouldn’t have to buy the whole music album with nine songs you didn’t particularly like in order to get the one song you wanted. Eventually, you wouldn’t have to buy the whole cable bundle in order to get the History Channel, or to be a Comcast cable TV subscriber to get a popular movie or the Super Bowl streamed to your mobile device.

A handful of related but separate activities developed some of the ideas from the Harvard conference further. Ron Staudt, Peter Martin, Tom Bruce,  and I experimented with unbundling of legal information on small servers connected to the Internet to permit law students, lawyers, and members of public to obtain access to court decisions, statutes, and administrative agency decisions in new ways. Cornell’s Legal Information Institute was the result.

David Johnson, Ron Plesser, Jerry Berman, Bob Gellman, Peter Weiss, and I worked to shape the public discourse on how the law should channel new political and economic currents made possible by the Internet. Larry Lessig was a junior recruit to some of these earliest efforts, and he went on to be the best of us all in articulating a philosophy.

By 1996, I wrote a supplement to How to Practice Law With Computers, called Internet Basics for Lawyers, which encouraged lawyers to use the Internet for email and eventually to deliver legal services and to participate in litigation and agency rulemaking and adjudication. In the same year, I published a law review article explaining how credit-card dispute resolution procedures protected consumers in ecommerce.

One by one, the proprietary bastions fell—postal mail, libraries, bookstores, department stores, government agency reading rooms—as customers chose the open and ubiquitous over the closed and incompatible. Now, only a few people remember MCImail, Western Union’s EasyLink, dial-up EarthLink, or CompuServe. AOL is a mere shadow of its former self, trying to grab the tail of the Internet that it too long resisted. The Internet gradually absorbed not only libraries and government book shops but also consumer markets and the legislative and adjudicative processes. Blockbuster video stores are gone. Borders Books is gone. The record labels are mostly irrelevant. Google, Amazon, and Netflix are crowding Hollywood. Millions of small merchants sell their goods every second on Amazon and eBay. The shopping malls are empty. Amazon is building brick-and-mortar fulfillment warehouses all over the place. Tens of millions of artists are able to show their work on YouTube.

Now the Internet is well along in absorbing television and movies, and has begun absorbing the telephone system and two-way radio. Video images move as bit streams within IP packets. The rate at which consumers are cutting the cord and choosing to watch their favorite TV shows or the latest Hollywood blockbusters through the Internet is dramatic.

Television and other video entertainment are filling up the Internet’s pipes. New content delivery networks bypass the routers and links that serve general Internet users in the core of the Internet. But the most interesting engineering developments relate to the edge of the Internet, not its core. “Radio Access Networks,” including cellphone providers, are rushing to develop micro-, nano-, pico-, and femto-cells beyond the traditional cell towers to offload some of the traffic. Some use Wi-Fi, and some use licensed spectrum with LTE cellphone modulation. Television broadcasters meanwhile are embracing ATSC 3.0, which will allow their hundred megawatt transmitters to beam IP traffic over their footprint areas and – a first for television – to be able to charge subscribers for access.

The telephone industry and the FCC both have acknowledged that within a couple of years the public telephone system will no longer be the Public Switched Telephone System; circuit switches will be replaced completely by IP routers.

Already, the infrastructure for land mobile radio (public safety and industrial and commercial services) comprises individual handsets and other mobile transceivers communicating by VHF and UHF radio with repeater sites or satellites, tied together through the Internet.

Four forces have shaped success: Conservatism, Catastrophe forecasts, Keepers of the Commons, and Capitalism. Conservatism operates by defending the status quo and casting doubt about technology’s possibilities. Opponents of technology have never been shy. A computer on every desk? “Never happen,” the big law firms said. “We don’t want our best and brightest young lawyers to be typists.”

Communicate by email?  “It would be unethical,” dozens of CLE presenters said. “Someone might read the emails in transit while they are resting on store-and-forward servers.” (The email technology of the day did not use store-and-forward servers.)

Buy stuff online? “It’s a fantasy,” the commercial lawyers said. “No one will trust a website with her credit card number. Someone will have to invent a whole new form of cybermoney.”

Catastrophe has regularly been forecast. “Social interaction will atrophy. Evil influences will ruin our kids. Unemployment will skyrocket,” assistant professors eager for tenure and journalists jockeying to lead the evening news warned. “The Internet is insecure!” cybersecurity experts warned. “We’ve got to stick with paper and unplugged computers.” The innovators went ahead anyway and catastrophe did not happen. A certain level of hysteria about how new technologies will undermine life is normal. It is always easier to ring alarm bells than to understand the technology and think about its potential.

Keepers of the Commons—the computer scientists who invented the Internet—articulated two philosophies, which proved more important than engineering advances in enabling the Internet to replace one after another of preceding ways of organizing work, play, and commerce.  To be sure, new technologies mattered. Faster, higher quality printers were crucial in placing small computers and the Internet at the heart of new virtual libraries, first Westlaw and Lexis and then Google and Amazon. Higher speed modems and the advanced modulation schemes they enabled made it faster to retrieve an online resource than to walk to the library and pull the same resource off the shelf. One-click ordering made e-commerce more attractive. More than 8,000 RFCs define technology standards for the Internet.

The philosophies shaped use of the technologies.  The first was the realization that an open architecture opens up creative and economic opportunities for millions of individuals and small businesses that otherwise were barred from the marketplace by high barriers to entry. Second was the realization that being able to contribute creatively can be a powerful motivator for activity, alongside expectations of profit. The engineers who invented the Internet have been steadfast in protecting the commons: articulating the Internet’s philosophy of indifference to content, leaving application development for the territory beyond its edges, and contributing untold hours to the development of open standards called “Requests for Comment” (“RFCs”). Continued work on RFCs, services like Wikipedia and LII, and posts to YouTube show that being able to contribute is a powerful motivator, regardless of whether you make any money. Many of these services prove that volunteers can add a great deal of value to the marketplace, with higher quality, often, than commercial vendors.

Capitalism has operated alongside the Commons, driving the Internet’s buildout and flourishing as a result. Enormous fortunes have been made in Seattle and Silicon Valley. Many of the largest business enterprises in the world did not exist in 1990.

Internet Capitalism was embedded in evangelism. The fortunes resulted from revolutionary ideas, not from a small-minded extractive philosophy well captured by the song “Master of the House” in the musical play, Les Miserables:

Nothing gets you nothing, everything has got a little price

Reasonable charges plus some little extras on the side

Charge ‘em for the lice, extra for the mice,

Two percent for looking in the mirror twice

Here a little slice, there a little cut,

Three percent for sleeping with the window shut.

Throughout most of the 1990s, the established, legacy firms were Masters of the House, unwilling to let the smallest sliver of an intellectual property escape their clutches without a payment of some kind. They reinforced the walls around their asset gardens and recruited more tolltakers than creative talent. The gates into the gardens work differently, but each charges a toll.

Meanwhile, the Apples, Googles, and Amazons of the world flourished because they offered something completely different–more convenient and more tailored to the way that consumers wanted to do things. Nobody ever accused Steve Jobs of giving away much for free or being shy in his pricing, but he made it clear that when you bought something from him you were buying something big and something new.

The tension between Commons and Capitalism continues. In the early days, it was a contest between those who wanted to establish a monopoly over some resource—governmental information such as patent office records, Securities and Exchange Commission filings, or judicial opinions and statutes—and new entrants who fought to pry open new forms of access. Now the video entertainment industry’s Master of the House habits are getting in the way of the necessary adaptation to cord cutting, big time. The video entertainment industry is scrambling to adapt its business models.

Intellectual property law can be an incentive to innovate, but it also represents a barrier to innovation. Throughout much of the 1980s, when the Internet was taking shape, law was uncertain whether either patent or copyright was available for computer software. Yet new businesses by the hundreds of thousands flocked to offer the fruits of their innovative labors to the marketplace. To be sure, as Internet-related industries matured, their managers and the capital markets supporting them seek copyright and patent protection of assets to encourage investment.

Whether you really believe in a free market depends on where you sit at a particular time. When you have just invented something, you think a free market is great as you try to build a customer base. Interoperability should be the norm. Once you have a significant market share, you think barriers to entry are great, and so do your potential investors. Switching costs should be as high as possible.

The Master of the House still operates his inns and walled gardens. Walled gardens reign supreme with respect to video entertainment. Popular social media sites like Facebook, Snapchat, Twitter, and YouTube are walled gardens. Technologically the Master of the House favors mobile apps at the expense of mobile web browsers; it’s easier to lock customers in a walled garden with an app; an app is a walled garden.

An Internet architecture designed to handle video entertainment bits in the core of the Internet will make it more difficult to achieve net neutrality. Content delivery networks (CDNs) are private networks, outside the scope of the FCC’s Open Internet Order. They are free to perpetuate and extend the walled gardens that Balkanize the public space with finely diced and chopped intellectual property rights.

Net Neutrality is fashionable, but it also is dangerous. Almost 100 years of experience with the Interstate Commerce Commission, the FCC, and the Civil Aeronautics Board shows that regulatory schemes adopted initially to ensure appropriate conduct in the marketplace also make the coercive power of the government available to legacy defenders of the status quo who use it to stifle innovation and the competition that results from it.

It’s too easy for some heretofore unappreciated group to claim that it is being underserved, that there is a new “digital divide,” and that resources need to be devoted primarily to making Internet use equitable. In addition, assistant professors seeking tenure and journalists seeking the front page or the lead of the evening news are always eager to write stories and articles about how new Internet technologies present a danger to strongly held values. Regulating the Internet like the telephone companies provides a well-established channel for these political forces to override economic and engineering decisions.

Security represents another potent threat. Terrorists, cyberstalkers, thieves, spies, and saboteurs use the Internet – like everyone else. Communication networks, from the earliest days of telegraph, telephones, and radio, have made the jobs of police and counterintelligence agencies more difficult. The Internet does so now. Calls for closer monitoring of Internet traffic, banning certain content, and improving security are nothing new. Each new threat, whether it be the organization of terrorist cells, more creative email phishing exploits, or Russian interference in American elections, intensifies calls for restrictions on the Internet. The incorporation of the telephone system and public safety two-way radio into the Internet will make it easier for exaggerated concerns about network security to justify restrictions that make the Internet harder to use. Security can always be improved by disconnecting, or by hiding usefulness behind layers of guards. The Internet may be insecure, but it is easy to use.

These calls have always had a certain level of resonance with the public, but so far have mostly given way to stronger voices protecting the Internet’s philosophy of openness. Whether that will continue to happen is uncertain, given the weaker commitment to freedom of expression and entrepreneurial latitude in places like China, or even some places in Europe. Things might be different this time around because of the rise of a know-nothing populism around the world.

The law has actually had very little to do with the Internet’s success. The Internet has been shaped almost entirely by entrepreneurs and engineers. The two most important Internet laws are shields. In 1992, my first law review article on the Internet, based on my participation in the Harvard conference, said:

Any legal framework . . . should serve the following three goals: (1) There should be a diversity of information products and services in a competitive marketplace; this means that suppliers must have reasonable autonomy in designing their products; (2) users and organizers of information content should not be foreclosed from access to markets or audiences; and (3) persons suffering legal injury because of information content should be able to obtain compensation from someone if they can prove traditional levels of fault.

It recommended immunizing intermediaries from liability for harmful content as long as they acted like common carriers, not discriminating among content originators – a concept codified in the safe harbor provisions of the Digital Millennium Copyright Act and section 230 of the Communications Decency Act, both of which shield intermediaries from liability for allegedly harmful content sponsored by others. In that article and other early articles and policy papers, I urged a light touch for regulation.

David Johnson, David Post, Ron Plesser, Jack Goldsmith, and I used to argue in the late 1990s about whether the world needed some kind of new Internet law or constitution. Goldsmith took the position that existing legal doctrines of tort, contract, and civil procedure were perfectly capable of adapting themselves to new kinds of Internet disputes. He was right.

Law is often criticized for being behind technology. That is not a weakness; it is a strength. For law to be ahead of technology stifles innovation. What is legal depends on guesses lawmakers have made about the most promising directions of technological development. Those guesses are rarely correct. Law should follow technology, because only if it does so will it be able to play its most appropriate role of filling in gaps and correcting the directions of other societal forces that shape behavior: economics, social pressure embedded in the culture, and private lawsuits.

Here is the best sequence: a new technology is developed. A few bold entrepreneurs take it up and build it into their business plans. In some cases it will be successful and spread; in most cases it will not. The technologies that spread will impact other economic players: they will threaten to erode their market shares; they will confront them with choosing new technology if they wish to remain viable businesses; they will goad them into seeking innovations in their legacy technologies.

The new technology will probably cause accidents, injuring and killing some of its users and injuring the property and persons of bystanders. Widespread use of the technology also will have adverse effects on other, intangible interests, such as privacy and intellectual property. Those suffering injury will seek compensation from those using the technology and try to get them to stop using it.

Most of these disputes will be resolved privately without recourse to governmental institutions of any kind. Some of them will find their way to court. Lawyers will have little difficulty framing the disputes in terms of well-established rights, duties, privileges, powers, and liabilities. The courts will hear the cases, with lawyers on opposing sides presenting creative arguments as to how the law should be understood in light of the new technology. Judicial decisions will result, carefully explaining where the new technology fits within long-accepted legal principles.

Law professors, journalists, and interest groups will write about the judicial opinions, and, gradually, conflicting views will crystallize as to whether the judge-interpreted law is correct for channeling the technology’s benefits and costs. Eventually, if the matter has sufficient political traction, someone will propose a bill in a city council, state legislature, or the United States Congress. Alternately, an administrative agency will issue a notice of proposed rulemaking, and a debate over codification of legal principles will begin.

This is a protracted, complex, unpredictable process, and that may make it seem undesirable. But it is beneficial, because the kind of interplay that results from a process like this produces good law. It is the only way to test legal ideas thoroughly and assess their fit with the actual costs and benefits of technology as it is actually deployed in a market economy.

A look backwards warrants optimism for the future, despite new or renewed threats. The history of the Internet has always featured skeptics who said it would never take off because people would prefer established ways of doing business. It has always been subjected to various economic and legal attempts to block its use by new competitors. The Master of the House has always lurked. Shrill voices have always warned about its catastrophic social effects. Despite these enemies, it has prevailed and opened up new pathways for human fulfillment. The power of that vision and the experience of that fulfillment will continue to push aside the forces that are afraid of the future.

Henry H. Perritt, Jr. is Professor of Law and Director of the Graduate Program in Financial Services Law at the Chicago-Kent College of Law. A pioneer in Federal information policy, he served on President Clinton’s Transition Team, working on telecommunications issues, and drafted principles for electronic dissemination of public information, which formed the core of the Electronic Freedom of Information Act Amendments adopted by Congress in 1996. During the Ford administration, he served on the White House staff and as deputy under secretary of labor.

Professor Perritt served on the Computer Science and Telecommunications Policy Board of the National Research Council, and on a National Research Council committee on “Global Networks and Local Values.” He was a member of the interprofessional team that evaluated the FBI’s Carnivore system. He is a member of the bars of Virginia (inactive), Pennsylvania (inactive), the District of Columbia, Maryland, Illinois and the United States Supreme Court. He is a published novelist and playwright.


From an equally long time ago, and in one of those galaxies so far, far away it is sometimes mistaken for the mythical Oz, we received Tom Bruce’s call for reflection on the history of free access to legal information. “Here’s what we *thought* we were doing, and here’s what it really turned into”, he suggested, so I have taken him up on that. Andrew Mowbray and I started the Australasian Legal Information Institute (AustLII) in 1995, and our second employee, Philip Chung, now AustLII’s Executive Director, joined us within a year. We are still working together 22 years later.

AustLII had a back-story, a preceding decade of collaborative research from 1985, in which Andrew and I were two players in the first wave of ‘AI and law’ (aka ‘legal expert systems’). Our ‘DataLex Project’ research was distinctive in one respect: we insisted that ‘inferencing systems’ (AI) could not be a closed box, but must be fully integrated with both hypertext and text retrieval (for reasons beyond this post). Andrew wrote our own search engine, hypertext engine, and inferencing engine; we developed applications on IP and on privacy, and had modest commercial success with them in the early 90s. Tools for relatively large-scale automation of mark-up of texts for hypertext and retrieval purposes were a necessary by-product. In that pre-Web era, when few had CD-ROM drives, and free access to anything was impractical and unknown, products were distributed on bundles of disks. Our pre-Web ideology of ‘integrated legal information systems’ is encapsulated in a 1995 DataLex article. But a commercial publisher pulled the plug on our access to necessary data, and DataLex turned out to have more impact in its non-commercial after-life as AustLII.

Meanwhile, in January 1995 Andrew and I (for UTS and UNSW Law Schools) had obtained a grant of AUD $100,000 from the Australian Research Council’s research infrastructure fund, in order to explore the novel proposition that the newly-developing World Wide Web could be used to distribute legal information, and for free access, to assist academic legal research. A Sun SPARCstation, one ex-student employee, and a part-time consultant followed. Like Peter & Tom we sang from Paul Simon’s text, ‘let’s get together and call ourselves an Institute’, because it sounded so established.

What were we thinking? (and doing)

What were we thinking when we obtained this grant, and put it into practice in that first year? We can reconstruct this somewhat, not simply from faulty memories, but from what we actually did, and from our first article about AustLII in 1995, which contained something of a manifesto about the obligations of public bodies to facilitate free access to law. So here are things we did think we were doing in 1995 – no doubt we also had some bad ideas, now conveniently forgotten, but these ones have more or less stuck.

  1. End monopolies – Australia had been plagued for a decade by private sector and public sector monopolies (backed by Crown copyright) over computerised legal data. Our core principle was (polite) insistence on the ‘right to republish’ legislation, cases, and other publicly funded legal information. We appropriated our first large database (Federal legislation), but got away with it. The High Court told the federal government to supply ‘its cases’ to AustLII, and other courts followed.
  2. Rely on collaboration – Our 1995 ‘manifesto’ insisted that courts and legislative offices should provide the best quality data available to all who wished to republish it. Insistence on collaboration was a survival strategy, because we would never have enough resources to manage any other way. From the start, some courts started to email cases, and adopt protocols for consistent presentation, and eventually all did so.
  3. Disrupt publishing – Much Australian commercial legal publishing in 1995 was not much more than packaging raw legal materials, with little added value, for obscene prices. We stated that we intended to force 2nd-rate publishing to lift its game (‘you can’t compete with free’). It did, and what survived, thrived.
  4. Stay independent – While we had material support from our two Law Schools, and an ARC start-up grant, we tried from the start to be financially independent of any single source. Within a year we had other funds from a Foundation, and a business group (for industrial law), and were negotiating funding from government agencies. Later, as the funds needed for sustainability became larger, this was much more of a challenge. However, independence meant we could publish any types of content that we could fund, with no one else dictating what was appropriate. A 93 volume Royal Commission report on ‘Aboriginal deaths in custody’ for which the federal government had ‘lost’ the master copy was an early demonstration of this.
  5. Automate, integrate, don’t edit – The DataLex experience gave us good tools for complex automated mark-up of large sets of legislation, cases etc. Collaboration in data supply from official bodies multiplied the effect of this. We edited documents only when unavoidable. Sophisticated hypertexts also distinguished the pioneering work of the LII (Cornell) and LexUM from the chaff of commercial publishers. AustLII inherited from DataLex a preoccupation with combining the virtues of hypertext and text retrieval, most apparent from day 1 in the ‘Noteup’ function.
  6. Cater for all audiences – Our initial grant’s claim to serve academic research was only ever a half-truth, and our intention was to try to build a system that would cater for all audiences from practitioners to researchers to the general public. The LII (Cornell) had already demonstrated that there was a ‘latent legal market’, an enormous demand for primary legal materials from the public at large.
  7. All data types welcome – We believed that legislation, cases, treaties, law reform, and some publicly-funded scholarship should all be free access, and a LII should aim to provide them, as its resources allowed. This was a corollary of aiming to ‘serve all audiences’. In AustLII’s first year we included examples of all of these (and a Royal Commission report), the final element being the Department of Foreign Affairs agreement to partner a Treaties Library. It took us much longer to develop serious ‘law for the layperson’ content.
  8. ‘Born digital’ only – In 1995 there was already more digital data to gather than AustLII could handle, and scanning/OCR’ing data from paper was too expensive and low quality, so we ignored it, for more than a decade.
  9. ‘Comprehensiveness’ – As Daniel Poulin says in this series, AustLII was the first to aim to create a nationally comprehensive free access system, and the first to succeed. But the initial aims of comprehensiveness were limited to the current legislation of all 9 Australian jurisdictions, and the decisions of the superior courts of each. That took 4 years to achieve. Adding the decisions of all lower courts and tribunals, and historical materials, were much later ambitions, still not quite achieved.
  10. ‘Australasian’ but ‘LII’ – We asked Cornell if we could borrow the ‘LII’ tag, and had vague notions that we might be part of a larger international movement, but no plans.  Our 1995 article exaggerates in saying ‘AustLII is part of the expanding international network of public legal information servers’ – we wished! However, the ‘Australasian’ aim was serious: NZLII’s superb content is a major part of AustLII, but PNG content found a better home on PacLII.
  11. Neutral citations, backdated – As soon as AustLII started receiving cases, we applied our own ‘neutral citations’ (blind to medium or publisher) to them, and applied this retrospectively to back-sets, partly so that we could automate the insertion of hypertext links. As in Canada, this was a key technical enabler. A couple of years later, the High Court of Australia led the Council of Chief Justices to adopt officially a slight variation of what AustLII had done (and we amended our standard). The neutral citation standard set with ‘[1998] HCA 1’ has since been  adopted in many common law countries. AustLII has applied it retrospectively as a parallel citation, for example ‘[1220] EngR 1’ and so on. Later, the value of neutral citations as a common-law-wide interconnector enabled the LawCite citator.
  12. Reject ‘value-adding’ – We saw invitations to distinguish ‘value-added’ (now ‘freemium’ or chargeable) services  from  AustLII’s ‘basic’ free content as a slippery slope, a recipe for free access always being second rate. So AustLII has stayed 100% free access content, including all technical innovations.
  13. ‘Free’ includes free from surveillance – Access was and is anonymous with no logins, cookies, advertisements or other surveillance mechanisms beyond logging of IP addresses. We used the Robot Exclusion Standard to prevent spidering/searching of case law by Google etc, and most (not all) other LIIs have done likewise. This has helped establish a reasonable balance between privacy and open justice in many common law jurisdictions. It also helps prevent asset stripping – AustLII is a free access publisher, not  a repository.
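The neutral citation format described in point 11 – ‘[year] COURT number’, as in ‘[1998] HCA 1’ – is regular enough that links can be inserted mechanically, which is part of why it was such a key technical enabler. Here is a minimal sketch of that idea; the function name and the example.org URL scheme are illustrative assumptions, not AustLII’s actual mark-up tooling.

```python
import re

# Neutral citations take the form "[year] COURT number", e.g. "[1998] HCA 1".
# Illustrative sketch only - not AustLII's actual mark-up code.
CITATION = re.compile(r"\[(\d{4})\]\s+([A-Z][A-Za-z]+)\s+(\d+)")

def link_citations(text: str, base_url: str = "https://example.org/cases") -> str:
    """Replace each neutral citation in the text with a hypothetical hyperlink."""
    def to_link(m: re.Match) -> str:
        year, court, number = m.groups()
        # The citation itself encodes everything needed to build a stable URL.
        return f'<a href="{base_url}/{court}/{year}/{number}">{m.group(0)}</a>'
    return CITATION.sub(to_link, text)

print(link_citations("See [1998] HCA 1 and [1220] EngR 1."))
```

Because the citation is blind to medium and publisher, the same pattern works for retrospectively applied back-set citations like ‘[1220] EngR 1’ just as it does for current decisions.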

This ‘baker’s dozen’ of aspirations comes from another century, but the issues and questions they address still need consideration by anyone aiming to provide free access to law.

Why we were lucky

In at least five respects, we did not know how fortunate we were in Australia: the Australian Research Council awarded annual competitive grant funding for development of research infrastructure, not just for research; all Australian law schools were willing to back AustLII as a joint national facility (already in 1995 ‘supported by the Australian Committee of Law Deans’); UNSW and UTS Law Faculties backed us with both material assistance and academic recognition; later, we obtained charitable status for donations; and our courts never required AustLII to redact cases (contrast Canada and New Zealand), they did it themselves where it was necessary. Our colleagues in other common law jurisdictions were often not so fortunate.

Cornell, LexUM and AustLII were all also fortunate to be better prepared than most commercial or government legal information publishers to take advantage of the explosion of  public usage of the Internet (and the then-new WWW) in 1994/5. None of us were ‘just another publisher’, but were seen as novel developments. Later LIIs did not have this ‘first mover advantage’, and often operated in far more difficult circumstances in developing countries.


Given what AustLII, and free access to law globally, have developed into, what did we not imagine back in 1995? Here are a few key unforeseen developments.

Digitisation from paper did not become financially feasible for AustLII until about 2007. Since then, capturing historical data has become a major part of what AustLII does, with results such as the complete back-sets of over 120 non-commercial Australasian law journals, and almost all Australasian reported cases and annual legislation 1788-1950. The twin aims of ‘horizontal’ comprehensiveness of all current significant sources of law, and ‘vertical’ comprehensiveness of past sources, are new and no longer seem crazy or unsustainable.

We did not envisage the scale of what AustLII would need to manage, whether data (currently 749 Australasian databases, and almost as much again internationally), sources (hundreds of email feeds), page accesses (about 1M per day), or collaborations (daily replication of other LII content), nor the equipment (and funding) demands this scale would pose. Independence allowed us to obtain hundreds of funding contributors for maintenance. Innovative developments are still supported by ARC and other grants. The future holds no guarantees, but as Poulin says, history has now demonstrated that sustainable large-scale LII developments are possible.

While AustLII’s initial aims were limited to Australasia, by the late 90s requests for assistance to create similar free access LIIs involved AustLII, LexUM and the LII (Cornell) in various countries. The Free Access to Law Movement (FALM) has expanded to nearly 70 members, has directly delivered considerable benefits of free access to law in many countries, and has encouraged governments almost everywhere to accept that free access to legislation and cases is now the norm, in a way that it was not in the early 90s. The delivery of free access content by independent LIIs has, for reasons Poulin outlines, turned out to sit more comfortably in common law than in civil law jurisdictions, and no global access to law via a LII framework has emerged. However, although this was not envisaged back in 1995, AustLII has been able to play a coordinating role in a network of collaborating LIIs from many common law jurisdictions, with compatible standards and software, resulting in access via CommonLII to nearly 1500 databases, and to the daily interconnection of their citations via LawCite. This extent of collaboration was not foreseeable in 1995.

Every free access to law provider has a different story to tell, with different challenges to overcome in environments typically much more difficult than Australia. Somewhere in each of our stories there is a corner reserved for the pioneering contributions of Martin, Bruce and the LII at Cornell. The LII (Cornell) continues to innovate 25 years after they set the wheels in motion.

Graham Greenleaf is Professor of Law & Information Systems at UNSW Australia. He was co-founder of AustLII, with Andrew Mowbray, and Co-Director (with Andrew and Philip Chung) until 2016, and is now Senior Researcher.