by David Curle

In order to agree to write about something that is 25 years old, you almost have to admit to being old enough to have something to say about it. So I might as well get my old codger bona fides out of the way. I came of age at the very cusp of the digital revolution in legal information. A couple of months after my college graduation ceremony in June 1981, IBM launched its first PC. I thus belong to the last generation of students who produced their term papers on a typewriter.

The Former Next Great Thing

When I later entered law school, PCs were pretty well established (we used WordPerfect to write our briefs, of course), and the cutting edge of technology shifted to new legal research tools. Between trips to the library stacks to track down digests or to tediously Shepardize cases manually, we learned of Lexis and Westlaw, which in my first year were accessed via an acoustic-coupled modem and an IBM 3101 dumb terminal, squirreled away in a tiny lab-like room next to the reference desk in the library. One terminal to serve an entire law school. Sign up to use it via a schedule on the door. Intrigued by this new world of digital information, I took a job in the law library, eventually teaching other students how to search on Lexis and Westlaw between shifts at the reference desk.

By my second or third year, the 3101 was replaced by Lexis’ and Westlaw’s dedicated UBIQ and WALT terminals. My boss Tom Woxland, Reference Librarian and Head of Public Services at the University of Minnesota Law School, wrote an amusing article in Legal Reference Services Quarterly about a conflict between WALT and the library staff’s refrigerator that will give you a good sense of the level of technological sophistication we dealt with on a daily basis in those days.

It was just a few years after this refrigerator incident that Tom Bruce and Peter Martin started up LII. It’s hard to overestimate the imagination and vision that this must have taken, because the digital legal world was still in its infancy. But they could see the way the world was headed in 1992, and not only that, they did something about it by starting LII.

UBIQ and WALT, locked away in that room in the library, awakened an interest that turned into a career in legal information systems. I gradually lost interest in legal practice as a career as my interest in electronic information systems of all kinds grew. When I first met Tom Bruce, it was in my capacity as a token representative of the commercial side of the legal information world; I was an analyst at the research firm Outsell, Inc., which tracks various information markets, and I covered Thomson Reuters, Reed Elsevier (RELX), Wolters Kluwer, and all of the smaller players nipping at their heels in the legal information hierarchies of the time. Tom called on me to help explain this commercial world to his community of people working in the more open and non-commercial part of the legal information landscape.

I don’t intend this piece to be a tribute to LII, nor was I asked to provide one. Rather, Tom Bruce asked me to say a few words about the relationship between free and fee-based legal materials and how they relate to each other. In one big sense, that relationship has evolved in the face of new technologies, and that evolution is the focus of this essay. A fundamental shift in the way the legal market approaches legal information is underway: We no longer think of legal information simply as sets of documents; we are starting to see legal information as data.  

To go back to the chronicle of my digital awakening, there were several things about the new legal information systems that excited me even way back in the 1980s:

  • New entry points. Free-text searching in Westlaw and Lexis freed us from having to use finding tools such as digests, legal encyclopedias, and secondary analytical legal literature in order to find relevant cases. Suddenly any aspect of a case was open to search, not just those that legal indexers or secondary legal materials might have chosen to highlight. Dan Dabney, the former Senior Director, Classification Services at Thomson Reuters, wrote a thoughtful piece about the relationship between searching the natural language of the law, on the one hand, and, on the other, the artificial languages like the Key Number System that we use to describe the law. He identified the advantages and disadvantages of both, but it was clear that free-text search was a leap forward. His article has held up well and is worth a read: The Universe of Thinkable Thoughts: Literary Warrant and West’s Key Number System
  • Universal availability. Another aspect of the new legal databases that seemed obvious to me pretty early on was that comprehensive databases of electronic legal materials would be available anywhere, anytime. This had implications for the role of libraries and for the workflow of lawyers. It also had access-to-justice implications, because while most law libraries were open to the public and free (if inconvenient to use), online databases were, at the time, mostly commercial operations with paywalls. Though theoretically available anytime and anywhere, legal materials were nonetheless limited to those who could invest the money to subscribe and the time to master their still-complex search syntax.
  • Hyperlinking. While the full hyperlinking possibilities of the World Wide Web were a decade off, I could see that online access to legal materials would shorten the steps between legal arguments and supporting sources. Where before one might jot down a series of case citations in a text and then go to the stacks one by one to evaluate their relevance, online you could do this all in one sitting. The editorial cross-referencing that already went on in annotations, footnotes, and in-line cites in cases was about to become an orgy of cross-linking (across all kinds of content, not just legal content) that could be carried out at the click of a mouse.

But as revolutionary as these new approaches were, electronic legal research systems still operated primarily as finding tools. The process of legal research was still oriented toward a single goal: leading the researcher to the documents that contained the answers to legal questions. The onus was still on lawyers to extract meaning from those documents and embed that meaning in their work product.  

A New Mindset: Data not Documents

In recent years, however, a shift in mindset has occurred. Some lawyers, with the help of data scientists, are now starting to think of legal information sources not as collections of individual documents that need to stand on their own in order to have meaning, but as data sets from which new kinds of meaning can be extracted.  

Some of those new applications for “law as data” are:

  • Lawyer and court analytics. Lex Machina and Ravel Law, recently acquired by LexisNexis, are poster boys for this phenomenon, but others are joining the fray. Lex Machina takes court docket information and analyzes it not for its legal content but for performance data – how fast does this court handle a certain kind of motion, how well has that firm performed. The goal is to identify trends and make predictions based on objective performance data, which is quite a different inquiry than looking at a case on the merits alone.
  • Citation analysis and visualization. Its value is open to discussion, but some commercial players are bringing new techniques to citation analysis, and quite often the result is some form of visualization. Ravel Law and Fastcase offer various kinds of visualizations that take sets of case law data and turn them into visual representations intended to illuminate relationships that traditional, more linear citation analysis might not find.
  • Usage analysis. The content of documents is valuable, but so are the trails of crumbs that users leave as they move from one document to another. Finding meaning in those patterns of usage is just as useful for lawyers as it is for consumers in the Amazon age of “people who bought this also bought that.” Knowing where other researchers have been is valuable data, and systems like Westlaw are able to track relationships between documents and leverage them as information that can be as valuable as any editorial classification scheme.  
  • Entity extraction. Legal documents are full of named entities: people, companies, product names, places, other organizations. Computers are getting better at finding and extracting those entity names from documents. This has a number of uses beyond simply helping to standardize the nomenclature used within a data source. Open standards for entity names mean legal data can more easily be integrated with other types of data sources. One such open-standard identifier is Thomson Reuters’ PermID. (A minimal extraction sketch follows this list.)
  • Statutes and regulations as inputs to smart contracts. It’s only a matter of time before large classes of contracts become automated, self-executing smart contracts supported by distributed ledgers and blockchains. A classic example of such a smart contract is a shipping contract, where one party is obligated to pay another when goods arrive in a harbor, and GPS data on the location of the ship can be the signal that triggers payment. But electronically stored statutes and regulations, especially to the extent that they govern quantitative measures such as time frames, currencies, or interest rates, can also become inputs to smart contracts, dynamically changing contract terms or triggering actions or obligations without human (i.e., lawyerly) intervention. (A second sketch after this list illustrates this idea.)
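
To make the entity-extraction idea concrete, here is a minimal sketch using the open-source spaCy library and its small English model; the sample opinion text, the party names in it, and the concluding comment about identifier mapping are illustrative assumptions, not a description of any vendor’s actual pipeline.

```python
# Minimal named-entity extraction sketch using spaCy. Assumes the model is installed:
#   pip install spacy && python -m spacy download en_core_web_sm
# The sample text and any downstream mapping are purely illustrative.
import spacy

nlp = spacy.load("en_core_web_sm")

opinion_text = (
    "Acme Widgets, Inc. filed suit in the Southern District of New York "
    "on March 3, 2016, seeking damages from Consolidated Shipping Ltd."
)

doc = nlp(opinion_text)
for ent in doc.ents:
    # ent.label_ is the entity type, e.g. ORG, GPE, DATE
    print(ent.text, ent.label_)

# Extracted organization names could then be reconciled against an open
# identifier scheme (such as PermID) so that documents from different
# sources refer to the same entity in the same way.
```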
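
And to illustrate the shipping example above, here is a minimal, hypothetical sketch in plain Python rather than on any actual distributed-ledger platform: a statutory interest rate (the invented “Example Late Payment Act”) and a ship’s GPS position jointly determine whether, and in what amount, a payment obligation is triggered. The names, coordinates, and rates are all made up for illustration.

```python
# Hypothetical sketch: a statutory parameter and a GPS reading together
# trigger a contractual payment. Plain Python, not a real smart-contract platform.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class StatutoryRate:
    citation: str       # e.g. a section of a (fictional) late-payment statute
    annual_rate: float  # e.g. 0.08 for 8% per annum

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def amount_due(ship_pos, harbor_pos, radius_km, principal, days_late, rate):
    """Return the amount owed once the ship is inside the harbor geofence,
    with statutory interest added for late arrival; None if not yet triggered."""
    if distance_km(ship_pos, harbor_pos) > radius_km:
        return None  # goods have not arrived; no obligation yet
    return principal + principal * rate.annual_rate * (days_late / 365)

# Example: a fictional 8% statutory rate and a ship just inside a harbor geofence.
rate = StatutoryRate(citation="Example Late Payment Act s. 4", annual_rate=0.08)
print(amount_due((51.95, 4.14), (51.95, 4.12), 5.0, 100_000.0, 10, rate))
```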

In all of these applications, we are moving quite a bit away from seeing legal documents for their “face value,” the intrinsic legal principle(s) that each document stands for. Rather, documents and interrelated sets of documents are sources of data points that can be leveraged in different ways in order to speed up and/or improve legal and business decisions. The data embedded in sets of legal documents becomes more than the sum of their substantive legal content; other meanings, with strategic or commercial value, can be surfaced.

The Future: Better Data, Not Just Open Data

If there is one thing that the application of a lot of data science to the law has revealed, it’s that the law is a mess. Certain jurisdictions are better than others, of course, but in the US the raw data that we call the law is delivered to the public in an unholy variety of formats, with inconsistent frequency, at various levels of comprehensiveness, and with self-imposed limitations on access. On the state level alone, Sarah Glassmeyer, in her State Legal Information Census, identified 14 different barriers to access, ranging from lack of search capability to lack of authoritativeness to restrictions on access for re-use. Add to that the problematic publishing practices at the federal level (PACER, anyone?) and the free-for-all at the county and municipal levels, and it’s nothing less than an untamed data jungle.

It is notoriously difficult to acquire and analyze what has been called the operating system of democracy, the law. When Lex Machina was acquired by LexisNexis, one of the primary motivations it gave was the high cost of acquiring, and then normalizing, the imperfect legal data that comes out of the federal courts. LexisNexis had already made the significant investment in building that data set; Lex Machina wanted to focus on what it was good at rather than spend its time acquiring and cleaning up the government’s data.

When a large collection of US case law was made available to the public via Google Scholar in 2009, many saw this as the beginning of the end.  Finally, they thought, access to the law would no longer be a problem.  Since then, more and more legal sources – judicial, legislative, and administrative – have been brought to the public domain. But is that kind of access the beginning of the end, or the end of the beginning? Or the beginning of a new mission?

In a thoughtful 2014 essay about Google Scholar’s addition of case law, Tom Bruce reminded us not to get too self-congratulatory about simple access to legal documents. Wider and freer availability of legal documents does solve one set of problems, especially for one set of users: lawyers. For the public at large, however, even free and open legal information is as impenetrable as if it had been locked up behind the most expensive paywalls. The reason is that most legal information is written and delivered as if only lawyers need it. In his essay, he sees the “what’s next” for the Open Access movement as opening legal information to the people who, despite not being lawyers, are nonetheless affected by the law every minute of their lives.

Yes, that “what’s next” does include pushing to make more primary legal documents freely available in the public domain. Yes, it does mean that organizations like LII can continue to help make law and regulations easier for non-lawyers to find, understand, and apply in their lives, jobs, and industries. But Tom Bruce provided a few hints at what is now clearly an equally important imperative. Among his prescriptions for the future: “We need to increase the density of connections between documents by making connections easier for machines (rather than human authors) to create.”

Operating in a “law as data” mindset, lawyers, legal tech companies, and data-savvy players of all kinds will be looking for cleaner, better-structured, more machine-readable, and more consistently formatted legal data. I think this might be a good role for the LIIs of the world in the future. Not instead of, but in addition to, the core mission of making raw legal content more available to everyone. In a 2015 article, I lamented the fact that so much legal technology expertise is wasted on simply making sense of the unstructured mess found in legal documents. Someday, all the effort used to make sense of messy data might stimulate a movement to make the data less messy in the first place. I cited Paul Lippe on this point, in his discussion of the long-term effects of artificial intelligence on the legal system: “Watson will force a much more rigorous conversation about the actual structure of legal knowledge. Statutes, regulations, how-to-guides, policies, contracts and of course case law don’t work together especially well, making it challenging for systems like Watson to interpret them. This Tower of Babel says as much about the complex way we create law as it does about the limitations of Watson.”

LII and the Free Access to Law Movement have spent 25 years bringing the legal Tower of Babel into the sunlight. A worthy goal for the next 25 years would be to help guide that “rigorous conversation about the structure of legal knowledge.”

David Curle is the director of Market Intelligence at Thomson Reuters Legal, providing research and thought leadership around the competitive environment and the changing legal services industry.

Twenty-five years ago the LII at Cornell showed the world that access to the law via the Internet for all is possible. It is not only possible, but can be cheap, even free. And that “free” can be sustained. It was and continues to be illuminating, even in the remotest places in Africa. The importance of the pioneering work of the LII, as it translates in Africa, is best understood against the background of a complete absence of law reports and updated legislation in many African countries.

Before free access to law touched down in South Africa in 1995, legal information was distributed primarily through the duopoly of the commercial legal publishers. Court reporters, usually advocates practicing in the region of the court, would act as correspondents for the legal publishers. Cases would take months to be printed in the law reports and, due to the constraints of the paper medium, heavy filtering could prevent the publication of really interesting cases from courts lower in the judicial hierarchy. Sometimes judgments marked by the presiding judge as reportable would be omitted from publication too. Space in the reports came at a premium – few got in.

This frustrated users of legal information (and most judges, who could not showcase their work and missed out on promotions!). It meant that additional resources were spent on using informal networks for gathering much-needed legal information. It also usually meant that only the handful of rich law firms residing in the major urban areas of the country had access to court judgments, which gave them an advantage in preparing for litigation. Hunting for judgments from colleagues, court registries and court libraries was commonplace, as candidate attorneys were sent to the courts’ archives to look for precedent. It was not efficient, but it often proved effective for those who could afford this kind of information-gathering. Magistrates, judges and government lawyers could not dream of having this kind of information at their disposal. Citizens rarely had a chance to read a full judgment for themselves.

Imagine (remember?) that time! Well, this would still be the situation in South Africa, and most definitely in many other African countries today, if it were not for SAFLII, AfricanLII, and 15 other LII projects across our continent that make the law available to all for free. SAFLII started at the University of the Witwatersrand when the then Head of the Law Library, Ruth Ward, inspired by what Cornell had been doing for the previous three years, enlisted the help of a law student with an unusual interest in computers to develop a website to host the judgments of the newly created South African Constitutional Court (there was yuuge demand for this material locally and regionally). The Law School later partnered with AustLII to upgrade the software infrastructure, and SAFLII was born, a new member of the Free Access to Law Movement.

In one of a few firsts for the FAL movement, almost exclusively academic until then, SAFLII was acquired and moved to the Constitutional Court of South Africa. I remember some expressed apprehension — what would happen to an independent academic project under government? — but this turned out to be the best move. SAFLII flourished with the backing of the Constitutional Court judges and expanded its content through a partnership with the Southern African Chief Justices Forum. An unprecedented amount of African legal content slowly made its way to the web. LexUM and CanLII helped us a lot with advice on editorial practices and processing content, while Andrew and Philip of AustLII would fly in once or twice a year to work on site to fine-tune the software.

We dreamt of systems the magnitude of AustLII and CanLII, and the sophistication of the LII. But our reality was different. When we were not busy digitizing paper-based content, we were engaged in training our users in electronic legal research. Yet users continued to demand the convenience of digested cases and consolidated legislation. Capacity was hard to come by. Right about then, our friends at Kenya Law Reports decided to open access to their (government-funded) material. This raised the bar higher – every judge in our network wanted their own Kenya Law.

To some extent, this became one of the core reasons for setting up the AfricanLII operation – as a programme that would contextualize the experience our team had gathered in developing SAFLII, to help build locally responsive LII operations. The justice sector in most of our countries of operation was starving for proper legal information – in the vast majority of places there is no regular law reporting or law consolidation, and that affected the sector’s work and sometimes impacted society and individuals’ rights in the most adverse ways. Both law revision and law reporting are expensive undertakings, especially when one has to start from scratch. But building a massive collection of materials would not be useful if our users could not or would not make use of it. So we had to adapt and, with our meager resources, devolve a centralised model (SAFLII) into local operations that allowed for better contextualization of the LIIs.

Developing proper legal infrastructure, which is what LIIs mostly do in many African countries, means keeping pace with the overhaul of vital areas of substantive law – human rights, environment, business and commercial law, ICTs and media – all developing at a considerable pace in the region. How do we adapt our LIIs to assist this development and remain relevant?

In this vein, I remember that during a sustainability workshop with LexUM, the LII and others back in 2009, Tom Bruce made a point about being strategic in the choices informing our LII development plans. Of course he raised it in his inimitable style – the fable involved something about throwing bottles into an ocean of bottles and the effects of that – but the advice was right on point. When faced with a complete vacuum, as we were with the lack of digital legal information in Africa, the easiest thing to propose and attempt is to throw all your available resources at digitizing all information, to serve all potential users out there.

African LIIs, operating with scarce funding and in difficult economic times, are now more than ever orienting themselves towards capitalizing on, and further developing, the few collections, competencies and advantages that deliver maximum value to their users. Having built a solid base of legal material, we are now looking at arranging it and communicating it in a way that is responsive to the needs of the justice sector. For most LIIs, that means digesting legal information (or sourcing interpretative material) and pushing it through social media channels with the aim of educating citizens. Or editorializing legal information to serve commercial audiences – and derive income for the LIIs. Or packaging our LIIs and shipping them for off-line use by magistrates working in remote, unconnected areas of Africa. All of this has meant that we’ve had to strike a balance and pull resources out of digitization (the ocean of content) and invest in services (new kinds of bottles) that have the potential to sustain our African LIIs into the future.

The LII at Cornell was a pioneer 25 years ago, but Tom, Sara and crew continue to push the envelope – innovating not only the technology but also the business of free law. I guess their flexibility and adaptability are some of the reasons why the LII is still going strong and growing 25 years into its existence. And this has been the ultimate lesson for me as I continue to work together with a tight-knit group of committed individuals across the African continent, forging ahead and cementing their African LIIs into the future of their countries. Our collective hats off to the LII @ Cornell for helping us figure things out along the way!

Mariya Badeva-Bright is the co-founder of the African Legal Information Institute. From 2006 to 2010, she was the head of Legal Informatics and Policy for the Southern African Legal Information Institute (SAFLII).  She has taught undergraduate courses in Legal Information Literacy and coordinated the postgraduate program in Cyberlaw at the University of the Witwatersrand in Johannesburg.  She holds a Magister Iuris in law from the Plovdivski universitet “Paisii Hilendarski” in Bulgaria, and an LLM in legal informatics from Stockholm University.

This year, I was lucky enough to be able to attend the annual LVI conference, held in Limassol, Cyprus. A truly beautiful place, where Laris Vrahimis from CyLaw and the Cyprus Bar went out of their way to make a memorable event. It was also an ongoing affirmation that the Free Access to Law Movement is alive and working. But there was also a note of frustration and pessimism in the air, summed up in the question “where do we go from here?” After 25 years of LIIs, this is a fair question.

It’s a very important question.  The LIIs across the world have been working on making primary source law available to their fellow citizens, and have gotten pretty good at it.  There are still far too few LIIs, but the ones that are around have the basics down pretty well.  But most are stuck at that basic level.  This is a problem with several levels.  The first is that the basics themselves are not all that easy.  It’s a lot of work to gather, process, and publish the law on the shoestring budgets that we all have.  And it is of crucial importance that the basic primary source law stay available.  This basic level must be maintained.

But what about everything else that ought to be done?  Here are three things to do.  There are places, like Cornell, that are doing some already, but there is room for every LII to think about and work on these steps.  

Access to Justice

The first item is assisting users with interpretive materials and guidance. Fortunately, the Cornell LII, the Center for Computer-Assisted Legal Instruction (CALI), and Justia have been doing things along these lines already. For years now, Cornell LII has been developing WEX (https://www.law.cornell.edu/wex), a free legal encyclopedia and dictionary. They also have the Supreme Court Bulletin (https://www.law.cornell.edu/supct/cert). Justia has similar services in the form of the Justia blog and the Justia Verdict legal commentary site (https://verdict.justia.com/), as well as its crowdsourced court decision annotations. In the case of the LII, the labor and expertise are supplied mostly by the students of Cornell Law School, under the supervision of Cornell LII editors. In the case of Justia, it is lawyers and academics who wish to be published, and who get advantages from the Justia service in return for their efforts.

CALI does not have decision commentary, but it has developed its A2J guided-interview software (https://www.a2jauthor.org/). A2J allows law clinics to develop online interviews that guide clients through all the information needed to address a selected legal issue, provide needed information, or even print court or other documents ready for filing.

Translating these kinds of services to other LIIs might be harder or easier depending on their individual circumstances. Some LIIs may be in a position to recruit volunteer labor, in which case generating commentary and guidance for the public’s benefit could be a practical path. As for the CALI A2J system, it is available to anyone. However, its use requires a great deal of initial dedication and labor to produce an interview, and any interview produced will require maintenance.

Archiving

This is maybe the least interesting thing that we can be involved with. It is certainly not going to generate interest (or donations) from the public. However, it is of great importance. How easy would it be for 25 years of vital legal information to be wiped out in one small and terrible flash? Even more insidious is the slow bleed of bit rot. It’s the kind of problem that we won’t be aware of until it’s already upon us.

Now it goes without saying that we all back up.  And we all back up carefully and regularly.  But as we move forward, and look to the long term, we know that real disasters will come upon us at some point.  We can assure ourselves that it won’t happen anytime soon, or on our watch.  But of course, that is exactly the sort of thinking that the librarians in Alexandria engaged in.  It did work for a long time, but not indefinitely.  The only real solution to data longevity is the old solution that the print world has been using since the development of the printing press: replication and distribution.  Many copies, distributed as widely as possible.

To the computer scientist, this seems horribly inefficient.  It is.  But they must overcome their horror, and understand that efficiency is not an end in itself.  Longevity is far more important.  And to live indefinitely, data must be immune from institutional failure.  The only way to guarantee that is not to rely on single institutions.  

A more serious barrier to widespread replication of data is distrust, both within institutions and nationally. On the institutional level, there are understandable fears concerning reputation, prestige and funding. If an LII allows other institutions to have a copy of the material it works so hard to develop, the worry goes, it will no longer get the credit it deserves, and in the long run this will lead to a lack of support for the LII. On the national level, some LIIs fear that sharing their data with institutions outside of their country will damage their standing with the governmental bodies they rely on for their data and for support.

Both of these are real problems that cannot be dismissed lightly. However, as with the computer scientists, these hesitancies should not stand in the way of the long-term viability of the data that LIIs work so hard to develop. To the extent we can do so, we need to distribute our data. Even if this is just to places willing to act as repositories (with an agreement not to republish), that would be enough to ensure the survival of the data. For others, acknowledgement of their efforts through branding, etc., may be enough. But in the end something like this needs to happen. As a librarian, I can see that if every law library (a law library being defined as any institution that collects law) in the world had an electronic copy of all the world’s law, it would be very difficult to lose anyone’s law. That would be quite something.

A U.S. Problem: Administrative Decisions

I was very interested and encouraged to read Pierre-Paul Lemyre’s February 22 post, “A Short Case Study of Administrative Decision Publishing,” describing how Washington State’s PERC decisions are being made public. For me, this is the next frontier of legal publishing that is badly in need of attention. In the U.S., all 50 states and the federal government have elaborate administrative law structures that include administrative tribunals. These tribunals are not a part of the regular judiciary, but are attached to the executive branch of government, usually the department with subject-matter jurisdiction. In the past, the most important of these tribunals had their decisions published in print, usually by the GPO. Of course, sending information to the GPO is not something agencies do very much any more, and from the way many government agency websites are organized, many either do not publish their ALJ decisions or hide them deep within their websites. Even in the best of cases, the decisions are not well searchable, and there is certainly no easy way to compare one department’s decisions with any other’s.

The result of the above situation is that only the ALJs and expert practitioners are even aware of the existence of ALJ decisions in any particular field. Even among those practitioners, there is little or no knowledge of how other agencies adjudicate identical issues. On the state level this situation is often worse (except in places like New Jersey, where a central Office of Administrative Law hears all administrative cases and diligently publishes its decisions; see https://njlaw.rutgers.edu/collections/oal).

Imagine, however, the possibilities raised by gathering and publishing federal ALJ decisions in an integrated collection.  In New Jersey, where these decisions are published, there is a large body of administrative common law which lends the consistency of stare decisis to their decisions.  This applies not only to decisions within each agency, but on similar issues between agencies as well.  The unified cadre of ALJs certainly makes this possible, but even without that, the existence of a full set of decisions which can easily be browsed and compared gives great impetus towards uniformity and predictability in decision making.  It is a great aid to the agencies, the bar and the public.

Unfortunately, I despair of ever convincing the federal government to embrace this sort of arrangement.  However, this is exactly the sort of project that an LII can excel at.  The gathering will be difficult, but doing this will greatly improve the state of American law.

John P. Joergensen is the Senior Associate Dean for Information Services, a Professor of Law, and an award-winning Director of the Law Library that serves Rutgers Law Schools in both Newark and Camden.

Professor Joergensen organized the New Jersey Courtweb Project, which provides free Internet access to the full text of the decisions of the New Jersey Supreme Court and appellate courts, the Tax Court, administrative law decisions, U.S. District Court for the District of New Jersey decisions, and the New Jersey Supreme Court’s Ethics Committee opinions. His work has also included digitizing U.S. congressional documents, the deliberations of state Constitutional Conventions, and other historical records. In 2007 he received the Public Access to Government Information Award from the American Association of Law Libraries, and in 2011 he was named to the Fastcase 50 as one of the country’s “most interesting and provocative leaders in the combined fields of law, scholarship and technology.”

by G. Burgess Allison

The pioneer is a curious thing.  In the Old Days, pioneers were pretty easy to understand: There’s a mountain way over there that nobody’s crossed before—why don’t we cross it and see what’s on the other side?  But as we tamed our various geographical wilderni, pioneers had to tell much more difficult stories: No seriously, we’re gonna use electricity to talk to each other.  But of course we’ll have to hook up Really Long Wires between every building in the country.  (Well, until we switch to fiber.)

Cornell’s Legal Information Institute stands as one of those pioneers—one that was faced with telling a difficult story to a generally skeptical and plainly technophobic audience:  

No seriously, there’s this thing called the Internet.  (And—we’re off to a rocky start already.)  It’ll give us the opportunity to radically transform access to information of significance to the entire legal profession.  

Really?  Like Lexis and Westlaw?  Because we have that already.  

No, no, think broader than that.  And access will be free.  

Free?  Who’s gonna do headnotes and Key Numbers for “all information” … for free?  

Oh man, you just don’t get it.  The whole world is gonna change.

No I don’t.  Call me when the world has changed.

Here’s the thing about pioneers.  First and most importantly, after exploring the wilderness, after falling into traps and digging themselves out again, after making mistakes and learning lessons the hard way, they come back to the rest of us and tell everything!  Pioneers suffer all the personal pains of trailblazing, then return with the stories and findings, and with just a little bit of nurturing tell you exactly how to avoid all the difficulties.  With a tiny amount of encouragement, they’ll even offer to go out again and guide you along the way.

25 years ago, that’s exactly what we needed.  Tom Bruce and Peter Martin had the vision to see transformative change over the horizon, then set up shop to provide a home for experiments and new opportunities.  The LII was built to explore and try things out.  Some of those things would succeed, some would fail—but as we watched and followed the LII we learned that each effort was rolled out with a genuine enthusiasm and an open mind for the possibilities.  Don’t get me wrong, the Internet is not a judgment-free zone where every player wins a trophy.  This is an ENTJ wilderness with an embarrassingly-high score in Judging.  Technologies that don’t make it get pushed aside in a heartbeat.  Of course this is difficult for a laboratory like the LII.  Intellectually, you want to give each new technology time—time to show what it can do, time to make mistakes and attempt corrections, time to mature.  But realistically, tempus fugits faster on the Internet than anywhere else—the Internet does not embrace patience.  Any more than the legal profession embraces change.

Speaking of which … while the profession has well earned its reputation for resisting change (cf. IBM mag card typewriters), that does not mean the entire profession stuck its head in the technological sand.  Indeed, as an occasional speaker at the American Bar Association’s annual TECHSHOW conference, I was stunned at the audiences we drew on Internet-related topics: we filled the hallways when they put us in a smaller room; and we still went SRO when they put us in a bigger room.  So high was the enthusiasm (and so compelling was the pioneer spirit to share what people had discovered) that some already-robust panel discussions turned quickly into even-more-robust audience discussions as discoveries and new web sites were shouted from the audience.  The topic became lightning in a bottle.  One of the most popular programs at TECHSHOW became Sixty Sites in Sixty Minutes.  The excitement was palpable.

Certainly I was excited about what was happening as well. In my own case, I was fortunate enough to have an outlet in Technology Update, the column I wrote for the ABA’s Law Practice Management Section. I tried, ever so hard, to explain to my readership just how big a change was coming. The responses I got showed an intense level of interest, but a continued lack of information. That in turn led to writing The Lawyers Guide to the Internet—which included Erik Heels’s groundbreaking list of online legal resources, The Legal List, and Lyonette Louis-Jacques’ list of law-related discussion groups, Lawlists. While Lawyers Guide barely scratched the surface of Internet basics, it became the best-selling title in the ABA’s book publishing program. Interest was high.

Two quick notes about Lawyers Guide:  First, it speaks volumes about how far we’ve come that Erik’s Legal List could actually contain every law-related web site and online resource.  Second, the first drafts of Lawyers Guide didn’t include this “new” technology called web sites—they hadn’t been invented yet.  They were added during the review of proof pages—not normally the time you would make such a significant change (with sincere gratitude to the ABA book program).  The only screen shot of a web page in the book came from the first and most prominent hypertext-enabled law-related web site.  At www.law.cornell.edu.

The LII was site #1 in what I called Burge’s Bookmarks.  And it was featured so many times in the Sixty Sites programs that we eventually retired it to Hall of Fame status—to make room for sites and capabilities that were newer and less well-known.

The LII was, and remains, the best of the wilderness.  A place where pioneers are welcomed, to experiment and try things out.  A place where the Rest Of Us can come and see what the pioneers are up to.  And a place where the pioneers are so excited about what they’re doing that they just can’t help but share what they’ve learned.

Thank you LII, thank you Tom and Peter … now back to work, there’s so much more to be done!

I love technology. 🙂

G. Burgess Allison is a Fellow in the College of Law Practice Management and is an active member of the American Bar Association’s Law Practice Management Section (LPMS). He wrote the “Technology Update” column in Law Practice Management magazine for 18 years, and authored “The Lawyer’s Guide to the Internet,” the best-selling publication in the history of the ABA’s book-publishing program. He has served on the Council for LPMS, and as Publisher and Technical Editor for LPM magazine. Burgess has a J.D. from the University of Michigan and a B.A. from the University of Delaware. Prior to his retirement, he was the IT Director for MITRE’s Center for Advanced Aviation System Development (CAASD).

[ Ed. note: two of Professor Perritt’s papers have strongly influenced the LII. We recommend them here as “extra credit” reading. “Federal Electronic Information Policy” provides the rationale on which the LII has always built its innovations. “Should Local Governments Sell Local Spatial Databases Through State Monopolies?” unpacks issues around the resale of public information that are still with us (and likely always will be). ]

In 1990 something called the “Internet” was just becoming visible to the academic community. Word processing on small proprietary networks had gained traction in most enterprises, and PCs had been more than toys for about five years. The many personal computing magazines predicted that each year would be “the Year of the LAN” (local area network), but the year of the LAN always seemed to be next year. The first edition of my How to Practice Law With Computers, published in 1988, said that email could be useful to lawyers and had a chapter on Westlaw and Lexis. It predicted that electronic exchange of documents over wide-area networks would be useful, as would electronic filing of documents, but the word “Internet” did not appear in the index. My 1991 Electronic Contracting, Publishing and EDI Law, co-authored with Michael Baum, focused on direct mainframe connections for electronic commerce between businesses, but barely mentioned the Internet. The word did appear in the index, but was used in only two sentences in 871 pages.

Then, in 1990, the Kennedy School at Harvard University held a conference on the future of the Internet. Computer scientists from major universities, Department of Defense officials, and a handful of representatives of commercial entities considered how to release the Internet from its ties to defense and university labs and to embrace the growing desire to exploit it commercially. I was fortunate to be on sabbatical leave at the Kennedy School and to be one of the few lawyers participating in the conference. In a chapter on market structures in a book Harvard published afterwards, I said, “A federally sponsored high-speed digital network with broad public, non-profit and private participation presents the possibility of a new kind of market for electronic information products, one in which the features of information products are ‘unbundled’ and assembled on a network.”

The most important insight from the 1990 conference was that the Internet would permit unbundling of value. My paper for the Harvard conference and a law review article I published shortly thereafter in the Harvard Journal of Law and Technology talked about ten separate value elements, ranging from content to payment systems, with various forms of indexing and retrieval in the middle. The Internet meant that integrated products were a thing of the past; you didn’t have to go inside separate walled gardens to shop. You didn’t have to pay for West’s key numbering system in order to get the text of a judicial opinion written by a public employee on taxpayer time. Soon, you wouldn’t have to buy the whole music album with nine songs you didn’t particularly like in order to get the one song you wanted. Eventually, you wouldn’t have to buy the whole cable bundle in order to get the History Channel, or to be a Comcast cable TV subscriber to get a popular movie or the Super Bowl streamed to your mobile device.

A handful of related but separate activities developed some of the ideas from the Harvard conference further. Ron Staudt, Peter Martin, Tom Bruce, and I experimented with unbundling of legal information on small servers connected to the Internet to permit law students, lawyers, and members of the public to obtain access to court decisions, statutes, and administrative agency decisions in new ways. Cornell’s Legal Information Institute was the result.

David Johnson, Ron Plesser, Jerry Berman, Bob Gellman, Peter Weiss, and I worked to shape the public discourse on how the law should channel new political and economic currents made possible by the Internet. Larry Lessig was a junior recruit to some of these earliest efforts, and he went on to be the best of us all in articulating a philosophy.

In 1996, I wrote a supplement to How to Practice Law With Computers, called Internet Basics for Lawyers, which encouraged lawyers to use the Internet for email and, eventually, to deliver legal services and to participate in litigation and agency rulemaking and adjudication. In the same year, I published a law review article explaining how credit-card dispute resolution procedures protected consumers in e-commerce.

One by one, the proprietary bastions fell—postal mail, libraries, bookstores, department stores, government agency reading rooms—as customers chose the open and ubiquitous over the closed and incompatible. Now, only a few people remember MCImail, Western Union’s EasyLink, dial-up Earthlink, or CompuServe. AOL is a mere shadow of its former self, trying to grab the tail of the Internet that it too long resisted. The Internet gradually absorbed not only libraries and government book shops but also consumer markets and the legislative and adjudicative processes. Blockbuster video stores are gone. Borders Books is gone. The record labels are mostly irrelevant. Google, Amazon, and Netflix are crowding Hollywood. Millions of small merchants sell their goods every second on Amazon and eBay. The shopping malls are empty. Amazon is building brick-and-mortar fulfillment warehouses all over the place. Tens of millions of artists are able to show their work on YouTube.

Now the Internet is well along in absorbing television and movies, and has begun absorbing the telephone system and two-way radio. Video images move as bit streams within IP packets. The rate at which consumers are cutting the cord and choosing to watch their favorite TV shows or the latest Hollywood blockbusters through the Internet is dramatic.

Television and other video entertainment are filling up the Internet’s pipes. New content delivery networks bypass the routers and links that serve general Internet users in the core of the Internet. But the most interesting engineering developments relate to the edge of the Internet, not its core. “Radio Access Networks,” including cellphone providers, are rushing to develop micro-, nano-, pico-, and femto-cells beyond the traditional cell towers to offload some of the traffic. Some use Wi-Fi, and some use licensed spectrum with LTE cellphone modulation. Television broadcasters meanwhile are embracing ATSC 3.0, which will allow their hundred megawatt transmitters to beam IP traffic over their footprint areas and – a first for television – to be able to charge subscribers for access.

The telephone industry and the FCC both have acknowledged that within a couple of years the public telephone system will no longer be the Public Switched Telephone System; circuit switches will be replaced completely by IP routers.

Already, the infrastructure for land mobile radio (public safety and industrial and commercial services) comprises individual handsets and other mobile transceivers communicating by VHF and UHF radio with repeater sites or satellites, tied together through the Internet.

Four forces have shaped success: Conservatism, Catastrophe forecasts, Keepers of the Commons, and Capitalism. Conservatism operates by defending the status quo and casting doubt about technology’s possibilities. Opponents of technology have never been shy. A computer on every desk? “Never happen,” the big law firms said. “We don’t want our best and brightest young lawyers to be typists.”

Communicate by email? “It would be unethical,” dozens of CLE presenters said. “Someone might read the emails in transit while they are resting on store-and-forward servers.” (The email technology of the day did not use store-and-forward servers.)

Buy stuff online? “It’s a fantasy,” the commercial lawyers said. “No one will trust a website with her credit card number. Someone will have to invent a whole new form of cybermoney.”

Catastrophe has regularly been forecast. “Social interaction will atrophy. Evil influences will ruin our kids. Unemployment will skyrocket,” assistant professors eager for tenure and journalists jockeying to lead the evening news warned. “The Internet is insecure!” cybersecurity experts warned. “We’ve got to stick with paper and unplugged computers.” The innovators went ahead anyway and catastrophe did not happen. A certain level of hysteria about how new technologies will undermine life is normal. It is always easier to ring alarm bells than to understand the technology and think about its potential.

Keepers of the Commons—the computer scientists who invented the Internet—articulated two philosophies, which proved more important than engineering advances in enabling the Internet to replace one after another of the preceding ways of organizing work, play, and commerce. To be sure, new technologies mattered. Faster, higher-quality printers were crucial in placing small computers and the Internet at the heart of new virtual libraries, first Westlaw and Lexis and then Google and Amazon. Higher-speed modems and the advanced modulation schemes they enabled made it faster to retrieve an online resource than to walk to the library and pull the same resource off the shelf. One-click ordering made e-commerce more attractive. More than 8,000 RFCs define technology standards for the Internet.

The philosophies shaped use of the technologies. The first was the realization that an open architecture opens up creative and economic opportunities for millions of individuals and small businesses that otherwise would be barred from the marketplace by high barriers to entry. Second was the realization that being able to contribute creatively can be a powerful motivator for activity, alongside expectations of profit. The engineers who invented the Internet have been steadfast in protecting the commons: articulating the Internet’s philosophy of indifference to content, leaving application development for the territory beyond its edges, and contributing untold hours to the development of open standards called “Requests for Comments” (“RFCs”). Continued work on RFCs, services like Wikipedia and LII, and posts to YouTube show that being able to contribute is a powerful motivator, regardless of whether you make any money. Many of these services prove that volunteers can add a great deal of value to the marketplace, often with higher quality than commercial vendors.

Capitalism has operated alongside the Commons, driving the Internet’s buildout and flourishing as a result. Enormous fortunes have been made in Seattle and Silicon Valley. Many of the largest business enterprises in the world did not exist in 1990.

Internet Capitalism was embedded in evangelism. The fortunes resulted from revolutionary ideas, not from the small-minded extractive philosophy well captured by the song “Master of the House” in the musical Les Misérables:

Nothing gets you nothing, everything has got a little price

Reasonable charges plus some little extras on the side

Charge ‘em for the lice, extra for the mice,

Two percent for looking in the mirror twice

Here a little slice, there a little cut,

Three percent for sleeping with the window shut.

Throughout most of the 1990s, the established, legacy firms were Masters of the House, unwilling to let the smallest sliver of an intellectual property escape their clutches without a payment of some kind. They reinforced the walls around their asset gardens and recruited more tolltakers than creative talent. The gates into the gardens work differently, but each charges a toll.

Meanwhile, the Apples, Googles, and Amazons of the world flourished because they offered something completely different–more convenient and more tailored to the way that consumers wanted to do things. Nobody ever accused Steve Jobs of giving away much for free or being shy in his pricing, but he made it clear that when you bought something from him you were buying something big and something new.

The tension between Commons and Capitalism continues. In the early days, it was a contest between those who wanted to establish a monopoly over some resource – governmental information such as patent office records, Securities and Exchange Commission filings, or judicial opinions and statutes – and new entrants who fought to pry open new forms of access. Now the video entertainment industry’s Master of the House habits are getting in the way of the necessary adaptation to cord cutting, big time. The video entertainment industry is scrambling to adapt its business models.

Intellectual property law can be an incentive to innovate, but it also represents a barrier to innovation. Throughout much of the 1980s, when the Internet was taking shape, the law was uncertain as to whether either patent or copyright protection was available for computer software. Yet new businesses by the hundreds of thousands flocked to offer the fruits of their innovative labors to the marketplace. To be sure, as Internet-related industries matured, their managers and the capital markets supporting them sought copyright and patent protection of assets to encourage investment.

Whether you really believe in a free market depends on where you sit at a particular time. When you have just invented something, you think a free market is great as you try to build a customer base. Interoperability should be the norm. Once you have a significant market share, you think barriers to entry are great, and so do your potential investors. Switching costs should be as high as possible.

The Master of the House still operates his inns and walled gardens. Walled gardens reign supreme with respect to video entertainment. Popular social media sites like Facebook, Snapchat, Twitter, and YouTube are walled gardens. Technologically the Master of the House favors mobile apps at the expense of mobile web browsers; it’s easier to lock customers in a walled garden with an app; an app is a walled garden.

An Internet architecture designed to handle video entertainment bits in the core of the Internet will make it more difficult to achieve net neutrality. CDNs are private networks, outside the scope of the FCC’s Open Internet Order. They are free to perpetuate and extend the walled gardens that Balkanize the public space with finely diced and chopped intellectual property rights.

Net Neutrality is fashionable, but it also is dangerous. Almost 100 years of experience with the Interstate Commerce Commission, the FCC, and the Civil Aeronautics Board shows that regulatory schemes adopted initially to ensure appropriate conduct in the marketplace also make the coercive power of the government available to legacy defenders of the status quo who use it to stifle innovation and the competition that results from it.

It’s too easy for some heretofore unappreciated group to claim that it is being underserved, that there is a new “digital divide,” and that resources need to be devoted primarily to making Internet use equitable. In addition, assistant professors seeking tenure and journalists seeking the front page or the lead story on the evening news are always eager to write about how new Internet technologies present a danger to strongly held values. Regulating the Internet like the telephone companies provides a well-established channel for these political forces to override economic and engineering decisions.

Security represents another potent threat. Terrorists, cyberstalkers, thieves, spies, and saboteurs use the Internet – like everyone else. Communication networks, from the earliest days of telegraph, telephones, and radio, have made the jobs of police and counterintelligence agencies more difficult. The Internet does now. Calls for closer monitoring of Internet traffic, banning certain content, and improving security are nothing new. Each new threat, whether it be the organization of terrorist cells, more creative email phishing exploits, or Russian interference in American elections, intensifies calls for restrictions on the Internet. The incorporation of the telephone system and public safety two-way radio into the Internet will give exaggerated concerns about network security even more power to make the Internet harder to use. Security can always be improved by disconnecting. It can be improved by obscuring usefulness behind layers of guards. The Internet may be insecure, but it is easy to use.

These calls have always had a certain level of resonance with the public, but so far have mostly given way to stronger voices protecting the Internet’s philosophy of openness. Whether that will continue to happen is uncertain, given the weaker commitment to freedom of expression and entrepreneurial latitude in places like China, or even some places in Europe. Things might be different this time around because of the rise of a know-nothing populism around the world.

The law actually has had very little to do with the Internet’s success. The Internet has been shaped almost entirely by entrepreneurs and engineers. The two most important Internet laws are shields. In 1992, my first law review article on the Internet, based on my participation in the Harvard conference, said:

Any legal framework . . . should serve the following three goals: (1) There should be a diversity of information products and services in a competitive marketplace; this means that suppliers must have reasonable autonomy in designing their products; (2) users and organizers of information content should not be foreclosed from access to markets or audiences; and (3) persons suffering legal injury because of information content should be able to obtain compensation from someone if they can prove traditional levels of fault.

It recommended immunizing intermediaries from liability for harmful content as long as they acted like common carriers, not discriminating among content originators—a concept codified in the safe harbor provisions of the Digital Millennium Copyright Act and section 230 of the Communications Decency Act, both of which shield intermediaries from liability for allegedly harmful content sponsored by others. In that article and other early articles and policy papers, I urged a light touch for regulation.

David Johnson, David Post, Ron Plesser, Jack Goldsmith, and I used to argue in the late 1990s about whether the world needed some kind of new Internet law or constitution. Goldsmith took the position that existing legal doctrines of tort, contract, and civil procedure were perfectly capable of adapting themselves to new kinds of Internet disputes. He was right.

Law is often criticized for being behind technology. That is not a weakness; it is a strength. For law to be ahead of technology stifles innovation. What is legal depends on guesses lawmakers have made about the most promising directions of technological development. Those guesses are rarely correct. Law should follow technology, because only if it does so will it be able to play its most appropriate role of filling in gaps and correcting the directions of other societal forces that shape behavior: economics, social pressure embedded in the culture, and private lawsuits.

Here is the best sequence: a new technology is developed. A few bold entrepreneurs take it up and build it into their business plans. In some cases it will be successful and spread; in most cases it will not. A technology that spreads will impact other economic players. It will threaten to erode their market shares; it will confront them with choosing new technology if they wish to remain viable businesses; it will goad them into seeking innovations in their legacy technologies.

The new technology will probably cause accidents, injuring and killing some of its users and injuring the property and persons of bystanders. Widespread use of the technology also will have adverse effects on other, intangible interests, such as privacy and intellectual property. Those suffering injury will seek compensation from those using the technology and try to get them to stop using it.

Most of these disputes will be resolved privately without recourse to governmental institutions of any kind. Some of them will find their way to court. Lawyers will have little difficulty framing the disputes in terms of well-established rights, duties, privileges, powers, and liabilities. The courts will hear the cases, with lawyers on opposing sides presenting creative arguments as to how the law should be understood in light of the new technology. Judicial decisions will result, carefully explaining where the new technology fits within long-accepted legal principles.

Law professors, journalists, and interest groups will write about the judicial opinions, and, gradually, conflicting views will crystallize as to whether the judge-interpreted law is correct for channeling the technology’s benefits and costs. Eventually, if the matter has sufficient political traction, someone will propose a bill in a city council, state legislature, or the United States Congress. Alternately, an administrative agency will issue a notice of proposed rulemaking, and a debate over codification of legal principles will begin.

This is a protracted, complex, unpredictable process, and that may make it seem undesirable. But it is beneficial, because the kind of interplay that results from a process like this produces good law. It is the only way to test legal ideas thoroughly and assess their fit with the actual costs and benefits of technology as it is actually deployed in a market economy.

A look backwards warrants optimism for the future, despite new or renewed threats. The history of the Internet has always involved arguments with those who said it would never take off because people would prefer established ways of doing business. It has always been subjected to various economic and legal attempts to block its use by new competitors. The Master of the House has always lurked. Shrill voices have always warned about its catastrophic social effects. Despite these enemies, it has prevailed and opened up new pathways for human fulfillment. The power of that vision and the experience of that fulfillment will continue to push aside the forces that are afraid of the future.

Henry H. Perritt, Jr. is Professor of Law and Director of the Graduate Program in Financial Services Law at the Chicago-Kent College of Law. A pioneer in federal information policy, he served on President Clinton’s Transition Team, working on telecommunications issues, and drafted principles for electronic dissemination of public information, which formed the core of the Electronic Freedom of Information Act Amendments adopted by Congress in 1996. During the Ford administration, he served on the White House staff and as deputy under secretary of labor.

Professor Perritt served on the Computer Science and Telecommunications Policy Board of the National Research Council, and on a National Research Council committee on “Global Networks and Local Values.” He was a member of the interprofessional team that evaluated the FBI’s Carnivore system. He is a member of the bars of Virginia (inactive), Pennsylvania (inactive), the District of Columbia, Maryland, Illinois and the United States Supreme Court. He is a published novelist and playwright.

 

From an equally long time ago, and in one of those galaxies so far far away it is sometimes mistaken for the mythical Oz, we received Tom Bruce’s call for reflection on the history of free access to legal information. “Here’s what we *thought* we were doing, and here’s what it really turned into”, he suggested, so I have taken him up on that. Andrew Mowbray and I started the Australasian Legal Information Institute (AustLII) in 1995, and our second employee, Philip Chung, now AustLII’s Executive Director, joined us within a year. We are still working together 22 years later.

AustLII had a back-story, a preceding decade of collaborative research from 1985, in which Andrew and I were two players in the first wave of ‘AI and law’ (aka ‘legal expert systems’). Our ‘DataLex Project’ research was distinctive in one respect: we insisted that ‘inferencing systems’ (AI) could not be a closed box, but must be fully integrated with both hypertext and text retrieval (for reasons beyond this post). Andrew wrote our own search engine, hypertext engine, and inferencing engine; we developed applications on IP and on privacy, and had modest commercial success with them in the early 90s. Tools for relatively large-scale automation of mark-up of texts for hypertext and retrieval purposes were a necessary by-product. In that pre-Web era, when few had CD-ROM drives and free access to anything was impractical and unknown, products were distributed on bundles of disks. Our pre-Web ideology of ‘integrated legal information systems’ is encapsulated in a 1995 DataLex article. But a commercial publisher pulled the plug on our access to necessary data, and DataLex turned out to have more impact in its non-commercial after-life as AustLII.

Meanwhile, in January 1995 Andrew and I (for UTS and UNSW Law Schools) had obtained a grant of AUD $100,000 from the Australian Research Council’s research infrastructure fund, in order to explore the novel proposition that the newly-developing World-Wide-Web could be used to distribute legal information, and for free access, to assist academic legal research. A Sun SPARCstation, one ex-student employee, and a part-time consultant followed. Like Peter & Tom we sang from Paul Simon’s text, ‘let’s get together and call ourselves an Institute’, because it sounded so established.

What were we thinking? (and doing)

What were we thinking when we obtained this grant, and put it into practice in that first year? We can reconstruct this somewhat, not simply from faulty memories, but from what we actually did, and from our first article about AustLII in 1995, which contained something of a manifesto about the obligations of public bodies to facilitate free access to law. So here are things we did think we were doing in 1995 – no doubt we also had some bad ideas, now conveniently forgotten, but these ones have more or less stuck.

  1. End monopolies – Australia had been plagued for a decade by private sector and public sector monopolies (backed by Crown copyright) over computerised legal data. Our core principle was (polite) insistence on the ‘right to republish’ legislation, cases, and other publicly funded legal information. We appropriated our first large database (Federal legislation), but got away with it. The High Court told the federal government to supply ‘its cases’ to AustLII, and other courts followed.
  2. Rely on collaboration – Our 1995 ‘manifesto’ insisted that courts and legislative offices should provide the best quality data available to all who wished to republish it. Insistence on collaboration was a survival strategy, because we would never have enough resources to manage any other way. From the start, some courts began to email cases and to adopt protocols for consistent presentation, and eventually all did so.
  3. Disrupt publishing – Much Australian commercial legal publishing in 1995 was not much more than packaging raw legal materials, with little added value, for obscene prices. We stated that we intended to force 2nd-rate publishing to lift its game (‘you can’t compete with free’). It did, and what survived, thrived.
  4. Stay independent – While we had material support from our two Law Schools, and an ARC start-up grant, we tried from the start to be financially independent of any single source. Within a year we had other funds from a Foundation, and a business group (for industrial law), and were negotiating funding from government agencies. Later, as the funds needed for sustainability became larger, this was much more of a challenge. However, independence meant we could publish any types of content that we could fund, with no one else dictating what was appropriate. A 93 volume Royal Commission report on ‘Aboriginal deaths in custody’ for which the federal government had ‘lost’ the master copy was an early demonstration of this.
  5. Automate, integrate, don’t edit – The DataLex experience gave us good tools for complex automated mark-up of large sets of legislation, cases etc. Collaboration in data supply from official bodies multiplied the effect of this. We edited documents only when unavoidable. Sophisticated hypertexts also distinguished the pioneering work of the LII (Cornell) and LexUM from the chaff of commercial publishers. AustLII inherited from DataLex a preoccupation with combining the virtues of hypertext and text retrieval, most apparent from day 1 in the ‘Noteup’ function.
  6. Cater for all audiences – Our initial grant’s claim to serve academic research was only ever a half-truth, and our intention was to try to build a system that would cater for all audiences from practitioners to researchers to the general public. The LII (Cornell) had already demonstrated that there was a ‘latent legal market’, an enormous demand for primary legal materials from the public at large.
  7. All data types welcome – We believed that legislation, cases, treaties, law reform, and some publicly-funded scholarship should all be free access, and a LII should aim to provide them, as its resources allowed. This was a corollary of aiming to ‘serve all audiences’. In AustLII’s first year we included examples of all of these (and a Royal Commission report), the final element being the Department of Foreign Affairs agreement to partner a Treaties Library. It took us much longer to develop serious ‘law for the layperson’ content.
  8. ‘Born digital’ only – In 1995 there was already more digital data to gather than AustLII could handle, and scanning/OCR’ing data from paper was too expensive and low quality, so we ignored it, for more than a decade.
  9. ‘Comprehensiveness’ – As Daniel Poulin says in this series, AustLII was the first to aim to create a nationally comprehensive free access system, and the first to succeed. But the initial aims of comprehensiveness were limited to the current legislation of all 9 Australian jurisdictions, and the decisions of the superior courts of each. That took 4 years to achieve. The addition of decisions of all lower courts and tribunals, and of historical materials, was a much later ambition, still not quite achieved.
  10. ‘Australasian’ but ‘LII’ – We asked Cornell if we could borrow the ‘LII’ tag, and had vague notions that we might be part of a larger international movement, but no plans.  Our 1995 article exaggerates in saying ‘AustLII is part of the expanding international network of public legal information servers’ – we wished! However, the ‘Australasian’ aim was serious: NZLII’s superb content is a major part of AustLII, but PNG content found a better home on PacLII.
  11. Neutral citations, backdated – As soon as AustLII started receiving cases, we applied our own ‘neutral citations’ (blind to medium or publisher) to them, and applied this retrospectively to back-sets, partly so that we could automate the insertion of hypertext links. As in Canada, this was a key technical enabler. A couple of years later, the High Court of Australia led the Council of Chief Justices to adopt officially a slight variation of what AustLII had done (and we amended our standard). The neutral citation standard set with ‘[1998] HCA 1’ has since been  adopted in many common law countries. AustLII has applied it retrospectively as a parallel citation, for example ‘[1220] EngR 1’ and so on. Later, the value of neutral citations as a common-law-wide interconnector enabled the LawCite citator.
  12. Reject ‘value-adding’ – We saw invitations to distinguish ‘value-added’ (now ‘freemium’ or chargeable) services  from  AustLII’s ‘basic’ free content as a slippery slope, a recipe for free access always being second rate. So AustLII has stayed 100% free access content, including all technical innovations.
  13. ‘Free’ includes free from surveillance – Access was and is anonymous, with no logins, cookies, advertisements or other surveillance mechanisms beyond logging of IP addresses. We used the Robot Exclusion Standard to prevent spidering/searching of case law by Google etc., and most (not all) other LIIs have done likewise (a minimal robots.txt sketch follows immediately below). This has helped establish a reasonable balance between privacy and open justice in many common law jurisdictions. It also helps prevent asset stripping – AustLII is a free access publisher, not a repository.
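
By way of illustration only, here is roughly what that use of the Robot Exclusion Standard looks like as a robots.txt file. The paths are hypothetical, not AustLII’s actual directory layout; a real deployment would list its own case-law paths.

    # Hypothetical robots.txt sketch: ask crawlers to stay out of
    # full-text case law while leaving the rest of the site indexable.
    User-agent: *
    Disallow: /cases/      # full-text judgments (illustrative path)
    Disallow: /cgi-bin/    # search and Noteup scripts (illustrative path)

Compliance with robots.txt is voluntary on the crawler’s side, so the balance between privacy and open justice described in item 13 ultimately depends on the major search engines continuing to honour it.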

This ‘baker’s dozen’ of aspirations comes from another century, but the issues and questions they address still need consideration by anyone aiming to provide free access to law.

Why we were lucky

In at least five respects, we did not know how fortunate we were in Australia: the Australian Research Council awarded annual competitive grant funding for development of research infrastructure, not just for research; all Australian law schools were willing to back AustLII as a joint national facility (already in 1995 ‘supported by the Australian Committee of Law Deans’); UNSW and UTS Law Faculties backed us with both material assistance and academic recognition; later, we obtained charitable status for donations; and our courts never required AustLII to redact cases (contrast Canada and New Zealand) – they did the redacting themselves where it was necessary. Our colleagues in other common law jurisdictions were often not so fortunate.

Cornell, LexUM and AustLII were all also fortunate to be better prepared than most commercial or government legal information publishers to take advantage of the explosion of  public usage of the Internet (and the then-new WWW) in 1994/5. None of us were ‘just another publisher’, but were seen as novel developments. Later LIIs did not have this ‘first mover advantage’, and often operated in far more difficult circumstances in developing countries.

Unimaginables

Given what AustLII, and free access to law globally, have developed into, what did we not imagine back in 1995? Here are a few key unforeseens.

Digitisation from paper did not become financially feasible for AustLII until about 2007. Since then, capturing historical data has become a major part of what AustLII does, with results such as the complete back-sets of over 120 non-commercial Australasian law journals, and almost all Australasian reported cases and annual legislation 1788-1950. The twin aims of ‘horizontal’ comprehensiveness of all current significant sources of law and ‘vertical’ comprehensiveness of past sources are new, and no longer seem crazy or unsustainable.

We did not envisage the scale of what AustLII would need to manage, whether data (currently 749 Australasian databases, and almost as much again internationally), sources (hundreds of email feeds), page accesses (about 1M per day), or collaborations (daily replication of other LII content), nor the equipment (and funding) demands this scale would pose. Independence allowed us to obtain hundreds of funding contributors for maintenance. Innovative developments are still supported by ARC and other grants. The future holds no guarantees, but as Poulin says, history has now demonstrated that sustainable large-scale LII developments are possible.

While AustLII’s initial aims were limited to Australasia, by the late 90s requests for assistance to create similar free access LIIs involved AustLII, LexUM and the LII (Cornell) in various countries. The Free Access to Law Movement (FALM) has expanded to nearly 70 members, has directly delivered considerable benefits of free access to law in many countries, and has encouraged governments almost everywhere to accept that free access to legislation and cases is now the norm, in a way that it was not in the early 90s. The delivery of free access content by independent LIIs has, for reasons Poulin outlines, turned out to sit more comfortably in common law than in civil law jurisdictions, and no global access to law via a LII framework has emerged. However, although this was not envisaged back in 1995, AustLII has been able to play a coordinating role in a network of collaborating LIIs from many common law jurisdictions, with compatible standards and software, resulting in access via CommonLII to nearly 1500 databases, and to the daily interconnection of their citations via LawCite. This extent of collaboration was not foreseeable in 1995.

Every free access to law provider has a different story to tell, with different challenges to overcome in environments typically much more difficult than Australia. Somewhere in each of our stories there is a corner reserved for the pioneering contributions of Martin, Bruce and the LII at Cornell. The LII (Cornell) continues to innovate 25 years after they set the wheels in motion.

Graham Greenleaf is Professor of Law & Information Systems at UNSW Australia. He was co-founder of AustLII, with Andrew Mowbray, and Co-Director (with Andrew and Philip Chung) until 2016, and is now Senior Researcher.

[ This post was contributed by Daniel Poulin, the founding director of CanLII, the first open-access publisher of law outside the United States, and a good friend of ours for many years ].

The Origins

In the eighties and nineties, the nascent Internet was closely connected with a culture of sharing. In those times, sharing did not mean “sharing economy” in today’s sense (Airbnb, Uber, etc.), but making something available for free on the Internet. The dream was that over time even more things would be available for free and that everybody would benefit from it. That is the context in which I personally became interested in making Canadian law accessible for free on the Internet.

My first models were FTP sites accepting anonymous connections. I vaguely remember one at Stanford giving access to computer fonts and executable programs. I thought that the same approach could serve legal purposes, and I was not alone in thinking so. Indeed, at the beginning of the nineties, several American university professors, researchers and technology specialists started to use the Gopher technology to publish legal documents, generally case law collections. These offerings were not necessarily up-to-date or coherent, to say nothing of being complete. We were nevertheless in awe to discover that legal information could be made freely accessible as simply as that. I decided to do the same at the Centre de recherche en droit public (CRDP) of the University of Montreal and I set up a Gopher server.

I had made a start, yet the real epiphany for me was a presentation by Peter W. Martin and Thomas R. Bruce about the Legal Information Institute in the fall of 1992 in Montreal. Even better than a Gopher site, they were developing a World Wide Web site, and they were ambitious. This was exactly what I wanted to do. I briefly met with them after their talk. I remember that my worries about converting decisions to HTML were brushed off by Tom in the offhand manner he always had with perceived technical difficulties. I was not alone in being impressed by the LII’s work. As a matter of fact, in the following years, Internet publishing initiatives started adding up in American law schools (see Fig. 1).

 

Fig 1: Screen shot of the CRDP Gopher server circa 1993 listing legal Gopher sites

However, the general enthusiasm for Internet legal publishing did not last long. By the end of the nineties, most of these initiatives had been abandoned, although the model set by Peter W. Martin, Thomas R. Bruce and their small team at Cornell remained, and had also found followers abroad, in Canada and Australia and, a few years later, in the UK.

A web server was launched by Lexum at the U. of Montreal in the summer of 1994 to publish the decisions of the Supreme Court of Canada. In 1995, the Australasian Legal Information Institute (AustLII) was set up in Sydney and soon joined in with a web server as well. In retrospect, these three initial teams (LII, Lexum and AustLII), all still active today, were characterized by a mix of research activities, technical development and publishing. The pure-play publishers in law schools probably never found a way to obtain the institutional and financial support required to keep going.

At the turn of the century there were a handful of groups in academia who were actively exploring the potential of the Internet to serve the law and were maintaining free access to law resources. This was the nucleus which was to grow and become the Free Access to Law Movement. Soon BaiLII, PacLII, HKLII, CyLAW and several others were to follow the LII model and start publishing.

The Evolution of LIIs

The approach initially developed by the LIIs, continued and further developed by several other academic groups, was apparently taking off. It was already successful in several common law countries. Colleagues in various European countries were starting to pay attention to the LII model. The older and better-established LIIs were involved in empowerment projects aiming at establishing free access in developing countries. We started to envision a constellation of LIIs covering the world. At the Law via the Internet Conference in 2002, a founding document was drafted (the Montreal Declaration on Free Access to Law) and an informal organization of legal information institutes, the Free Access to Law Movement, was established to further develop the LII model and to reach out to all those interested in maximizing access to public legal information.

More than twenty years later, taking stock of the progress made, we can only note spectacular changes in many countries. In Canada, the Canadian Legal Information Institute (CanLII) now constitutes the primary source of legal information for legal professionals. CanLII will soon have 2 million decisions published, frequently in both French and English. All statutes and regulations enacted over the last 15 years in all fourteen Canadian jurisdictions are also available. According to a survey of legal professionals prepared for CanLII in 2012, 56% of respondents start their legal research on CanLII. Four years later, CanLII’s usage statistics had doubled again.

Beyond CanLII, to understand the strength of the Canadian free access to law system today, one must consider the favorable policies adopted by the Canadian Judicial Council and subsequently implemented by all Canadian courts. The first to note is the adoption of a neutral citation system. The existence of an authoritative way to cite judgments outside the privately-owned sphere of commerce now constitutes a central element of the legal information system in Canada. Courts add a citation they own and control to all the decisions they distribute (something like “2017 QCCA 16”). This identifying element pertains to the decision and must follow it. Furthermore, decisions distributed by courts are final. There are no rules precluding the citation of a court decision in a counsel’s authorities beyond the principles of stare decisis as they apply in Canada. As a result, any court decision, whether taken from a court’s own website, from CanLII or of course from a law report, can be cited in court when relevant. Since decisions’ paragraphs are numbered, pinpoint references are available. Today, counsel mix references to law reports and to CanLII in the authorities they submit to a court, and judges do the same in preparing their reasons for judgment (see Fig. 2).

Fig 2: Use of citations to CanLII (based on neutral citation) in a decision of the Ontario Court of Appeal, 2015 ONCA 495 (CanLII)
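
Because a neutral citation travels with the decision and follows a predictable pattern, a publisher can detect it and turn it into a hyperlink mechanically. The following Python sketch is illustrative only: it is not CanLII’s or Lexum’s code, and the URL scheme is made up. A real system would also validate the court code against a known list of tribunals and handle the bracketed form used in Australia and the UK, such as ‘[1998] HCA 1’.

    import re

    # Detect Canadian-style neutral citations such as "2017 QCCA 16" or
    # "2015 ONCA 495" and rewrite them as links (hypothetical URL scheme).
    NEUTRAL_CITATION = re.compile(r"\b(\d{4})\s+([A-Z]{2,8})\s+(\d+)\b")

    def link_citations(text: str) -> str:
        def to_link(match: re.Match) -> str:
            year, court, number = match.groups()
            url = f"https://example.org/{court.lower()}/{year}/{number}.html"
            return f'<a href="{url}">{match.group(0)}</a>'
        return NEUTRAL_CITATION.sub(to_link, text)

    # Example: wraps the citation in an <a> tag pointing at the sketch URL.
    print(link_citations("See 2015 ONCA 495 at para 12."))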

The outlook is similar for Australia. Today, clearly, AustLII is the main outlet for case law in Australia. It must be recognized that the principals at AustLII were the first to establish almost country-wide comprehensive free access, in 1997. It took four more years to reach that stage in Canada. Cornell’s LII demonstrated how the law can be published for free, but it is AustLII’s team that showed how this model can be expanded to full scale.

Several other legal information institutes are now well-established and would call for a more complete description. Unfortunately, such a description goes far beyond what is possible in a short blog post. SafLII and AfricanLII are superb achievements in improving access to legal information in Africa, and both are developing the legal information institute approach in conditions difficult to imagine for a Canadian living in Montreal. PacLII is doing similar work to serve the needs of some twenty developing countries in the South Pacific. BaiLII, the British and Irish Legal Information Institute, would also merit being more fully described here, for its very small team maintains a good offering with very limited resources. CyLAW, established in collaboration with the Cyprus Bar Association, is an illustration of competence and institutional viability in a smaller state. Other extremely valuable initiatives can be found on the Free Access to Law Movement web site.

To sum up, the pioneering work done at Cornell 25 years ago led to the establishment of viable, efficient free access to law resources in several countries, especially countries belonging to the common law tradition. However, the full-fledged internationale of LIIs has never materialized. Many factors contribute to explaining that. First, in countries of the continental law tradition, case law plays a less central role as a source of law, the publication of legislative material is often taken on by the legislative authority itself (which is not a bad thing), and doctrinal commentary and treatises play a much larger role. Altogether these differences have made the development of a LII more challenging. Second, most of the LIIs which reached sustainability started in universities, and many are still attached to academia. One must admit that such academic ventures are more valued (or face less prejudice) in North America and Australia than, let us say, in France. Third, Martin and Bruce were — and still are — “entrepreneurs”. They did whatever was needed to finance their LII (consulting for government bodies or industry was not out of the question), they took risks, and they persevered. The principals at AustLII, SafLII and Lexum were venturers too (even swashbucklers, some would say). Some deans could have been less understanding of the activism required to maintain an LII. A related and final ingredient required for survival and growth was the capacity to obtain financial support. In this regard, all of us in developed countries were privileged. In the developing world, international development organizations supported some LIIs for a limited time, but the end of their funding forced many free legal information ventures to shut down their servers.

The Future

The next question is to try to figure out where all this is going: the LII at Cornell, the other LIIs, the Free Access to Law Movement. Divination is not my specialty, but I will try to single out some of the results of the last 25 years that appear most durable and mix them with trends observable today that may shape the future of the free access to law idea.

The very first thing to say is that governments and courts are much more present and active than before. For instance, in all Canadian jurisdictions, statutes and regulations are freely accessible on the web, and most courts and tribunals publish their decisions on their websites. Commercial legal publishing has gone through a major transformation over the last twenty years: most publishers specializing in case law reports have disappeared or been acquired by bigger competitors. The survivors, linked to major global publishing groups, bundle all they have (case law, legislation, and various doctrinal writings) and offer integrated information products by practice domain. This kind of offering finds takers, and business seems to be good. Finally, free access to law seems to have found its footing. CanLII’s funding is provided by the legal professions, which obtain through CanLII a national common legal library serving their missions to ensure the competence of their members and to serve the public.

To conclude, let’s say that free access to law is possible and sustainable. Capture and control of official legal information by private interests is avoidable. The well-established nation-wide systems in Canada and Australia, to name only these two jurisdictions, demonstrate that trustworthy, efficient publishing of law is compatible with public domain status and that public legal information can be made accessible for free.

Technical standards are key. The adoption of neutral citation and related policies by the courts played a significant role in ensuring that country-wide free access to law publishers, such as CanLII and AustLII, achieved their potential. More standardization would help deliver even more benefits.

Funding and benefits can be linked. Canada with its communist public health system is sometimes perceived as a middle-of-the-road society; not squarely socialist like those small countries in Northern Europe, but not entirely liberal either. Then, go figure, CanLII is privately funded by the legal professions and operated under contract by a for-profit company, Lexum. The reason private interests can serve a social mission is that those who pay are those who get the most out of the benefits. The business of the for-profit company is to help make the law accessible, not only as a service provider to CanLII, but also through all the other products it sells.

– 0 –

Since inception, Cornell’s LII has favored quality over quantity. LII’s siblings, Lexum (then CanLII), AustLII and BaiLII, and later on PacLII and SafLII, have gone for volume, to make a difference in access to law in their respective countries. This is not to minimize the practical significance of Cornell’s LII: it has made a difference too, but not in the same way. Instead of trying to offer comprehensive access to US law, which would have been an overwhelming objective, the principals at the LII decided to put their talent into achieving excellence within the more defined boundaries of specific collections, such as the US Supreme Court decisions, the US Code and now the Code of Federal Regulations. These are not tiny corpuses. All are significant bodies of law heavily used not only in the US but abroad as well. Even though other legal information initiatives produced innovations, none aside from Cornell’s LII made knowledge development the central product of its activities. Even 25 years after starting, LII is still on the edge, figuring out how to accelerate the development of a legal semantic web. Beyond the fact that they were the first LII, this constant contribution to knowledge may be the real reason for the ongoing influence and prestige of the institute established 25 years ago by Peter and Tom.

Twenty-five years later, the model initiated at Cornell has flourished. Members of the LII family have found their own ways. Looking at the global picture, one can only be pleased to see how an idea born in academia, and in large part at Cornell Law School, has influenced many legal information systems for the better. Even more surprising, it seems that we have not yet seen the end of it. The Cornell LII remains, after all these years, a hotbed of innovation with friends all around the world.

Happy anniversary and warm regards to the LII from all of us at Lexum,

Daniel Poulin is the founding director of CanLII, the Canadian Legal Information Institute.  He is widely known for innovation in both legal publishing and in the business apparatus needed to sustain open-access efforts over the long term. He is now Emeritus Professor of the Law Faculty of the University of Montreal, and President of Lexum Information Juridique, a legal-information technology company spun off from his original research group at the University of Montreal.

[Note:  the second post in our “25 for 25” series is from Peter W. Martin, the LII’s founding co-director,  and former dean of the Cornell Law School.  Peter is, as well, a pioneer in the use of computers in law teaching, where he was the first to teach for-credit distance-learning courses (in copyright and Social Security law) across multiple institutions — another, less well known, LII first.  He is also the author of the immensely popular online guide, “An Introduction to Basic Legal Citation“.]

Tom Bruce has explained how he secured the Sun computer that launched our Internet escapades. I’ve been charged with adding further detail to the LII’s origin story.

Honesty compels a confession of how unclear we were at the outset about the role the Internet would play in our collaborative enterprise, but my files do contain notes on a January 1992 talk heralding the publication potential of the Internet. The venue was the annual meeting of the Association of American Law Schools; the presenter, Mitch Kapor, then chairman of the Electronic Frontier Foundation.

Three previous years spent designing and building an electronic reference work on Social Security law had persuaded me that:

  • law publishers were not reliable partners for such novel endeavors;
  • hypertext (not present in either Westlaw or Lexis at the time) had enormous value when applied to legal materials;
  • electronic media had the potential to erode barriers that had limited law schools to the role of consumers of information products; and
  • a zone of digital innovation could be established within a U.S. law school with only modest amounts of external funding.

A New York non-profit, the National Center for Automated Information Retrieval or NCAIR, spending down funds generated by the establishment of LEXIS, had supported my Social Security project. In April 1992, with the endorsement of Dean Russell Osgood, Tom and I submitted a proposal to NCAIR seeking a handsome sum for establishment “of an institute of legal information technologies” at Cornell. At NCAIR’s request, we scaled back our multi-year proposal and asked only for start-up funds. Critically, those included salary monies to buy Tom freedom from his duties as the school’s director of educational technologies. Upon securing that funding, we declared ourselves co-directors of an institute – coining a name that has stuck for 25 years and has, as Tom notes, been taken on by many others.

The two labels proved to be startlingly important. To the external world “institute” suggested significant scale and permanence. Within the academic environment that surrounded us – where boundaries of academic discipline and status framed most activity – “co-directors” expressed a partnership that straddled both, making possible the most exhilarating and fulfilling collaboration of my career.

The agenda we sketched for NCAIR included disk publication of important U.S. statutes (electronic course supplements) and experimentation with use of the Internet as a mode of electronic publication and exchange. Unquestionably, we did not anticipate the pace, scope, or longevity of those experiments. Much of what followed was a consequence of fortuitous timing. Looking back, one can see that by 1992 several critical factors had aligned.

First, thanks to LEXIS and Westlaw, U.S. legal insiders (lawyers, judges, legislators, and participants in legal education) had grown familiar with computer-based legal information.

Second, the bodies from which legal texts emerge – courts, legislatures, administrative agencies – had begun using computers in drafting and revision. Most continued to consider the resulting word processing files simply a more efficient means of producing print, but some offered journalists, lawyers, and interested others dial-up access to the digital originals. In 1990 the US Supreme Court took a further step, distributing its judgments electronically on the morning of release to a small number of redistributors. A dozen or so media companies and law publishers subscribed. But so, too, did one university, Case Western Reserve, placing the decision files on the Internet at an ftp site. This was a far cry from effective public access. To retrieve the syllabus and opinions in a specific case one had to have a dial-up connection to the Case Western site or be part of the scientific community then connected to the Internet. One also needed to know the docket number, download multiple files, and have compatible word-processing software.

Nonetheless, the availability of primary legal texts in digital format straight from the source and unencumbered by copyright (another favorable factor) spared electronic publishers, including upstarts like us, the cost of digitizing print – a huge expense burdening early legal database research and LEXIS and Westlaw during their first decades. Together with the spread of PCs and the emergence of CD-ROM as a high-capacity distribution medium, this opened the U.S. legal information market to a disruptive wave of fresh competition.

Disk distribution offered important functionality, including hypertext and copy and paste, that the major online systems did not then deliver. Software exploiting this potential had appeared by the early 1990s. By 1992 work was underway to bring the best of them to a scrollable graphical user interface, capable of displaying and printing legal documents with all the information carried in print by font size, style, graphics, and layout. This development was in turn made possible by Microsoft’s release, only a year and a half before, of Windows 3.0.

Neither the major legal information vendors nor U.S. public bodies responded nimbly to the opportunities opened by disk distribution or the Internet. And that created space for our uninhibited, experimental non-profit.

We ventured into that space during the summer of 1992, with a handful of disk publications and the Gopher server Tom has described. Gopher could not, however, deliver important features we had been able to realize in the LII disk publications. It was a desire to bring the quality we had achieved on disk to the Internet that led us to html and the Web in early 1993. Way back then, the Web, also in its infancy, was the tool of a technical community that worked on UNIX machines that had high bandwidth connections to the Internet. Our principal audience as we then conceived it consisted of legal insiders working with PCs and dial-up connections. For them no Web browser existed nor was one in sight. So Tom set to work and created the first Windows-based Web browser, Cello.

By then the infrastructure that would allow the explosion of the World Wide Web was in place. The capacity and speed of the Internet’s backbone had just been dramatically improved. Congress had removed the ban on commercial traffic over that backbone imposed by NSF’s “acceptable use policy,” and privatization was underway.

These developments put distribution of legal information to a broader public within reach, but only for those who were aware of and had access to the Net. In 1992 and 1993 that was a small population. During 1992 the word “Internet” appeared in only twenty-two New York Times articles and not once in the American Bar Association Journal. In December 1993, when the product “Internet in a box” was announced, estimates of Internet users had climbed into the 15-20 million range (a ten-fold increase over the course of only a year or two). By then the LII was an established Web destination, and Tom and I had begun to appreciate that the public that valued our growing collection of legal information was far broader than the set of legal insiders we initially had in mind.

As early as 1995 all the ingredients that enabled our initial experiment to germinate, grow, and reach a global audience had come together. Although the LII enjoyed significant first mover advantage, the period since then has been filled with repeated challenges and hard choices – as public bodies, commercial entities with business plans that incorporated public access to legal information, new search engines and other finding aids crowded into the sector the LII once shared with only a few others. Casualties furnished a steady reminder that in this rapidly changing environment, survival, let alone impact, could not be taken for granted. Long gone, for example, are the fine treaty collection hosted by the University of Tromsø, the index to federal agency material offered by Villanova, and Indiana’s law meta list. In some cases disappearance was the result of being displaced by something better, in others of having attempted too much, and in still others of shifts in the allegiance or priorities of key personnel or the host institution.

Successfully threading a path through these and other obstacles, the LII has had to address a set of critical and recurring questions, concerning:

  • the institute’s relationship with Cornell;
  • whether to work in concert with any of the increasingly numerous commercial and public players in this field, and if so on what terms;
  • how to staff, organize, and fund expansion in scale and longevity beyond the initial experiment; and
  • how to continue to innovate while maintaining the information services essential to holding and growing the LII’s audience.

As the years have gone by, these questions have not grown less difficult. In the 13 years since retirement removed me from the founding partnership, Tom and his team have addressed them with such success that I have high confidence that this venture, begun so long ago, will continue to break fresh ground, while meeting important public needs, long past this 25th anniversary.

Peter W. Martin, the Jane M.G. Foster Professor of Law, Emeritus, at Cornell, co-founded the Legal Information Institute with Thomas R. Bruce and served as its co-director until 2003.


[ Note: This year marks the LII’s 25th year of operation.  In honor of the occasion, we’re going to offer “25 for 25” — blog posts by 25 leading thinkers in the field of open access to law and legal informatics, published here in the VoxPopuLII blog.  Submissions will appear irregularly, approximately twice per month.  We’ll open with postings from the LII’s original co-directors, and conclude with posts from the LII’s future leadership. Today’s post is by Tom Bruce; Peter Martin’s will follow later in the month.
]

It all started with two gin-and-tonics, consumed by a third party.  At the time I was the Director of Educational Technologies for the Cornell Law School.  Every Friday afternoon, there was a small gathering of people like me in the bar of the Statler Hotel, maybe 8 or 10 from different holes and corners in Cornell’s computer culture.  A lot of policy issues got solved there, and more than a few technical ones.  I first heard about something called “Perl” there.

The doyen of that group was Steve Worona, then some kind of special-assistant-for-important-stuff in the office of Cornell’s VP for Computing.  Knowing that the law school had done some work with CD-ROM based hypertext, he had been trying to get me interested in a Campus-Wide Information System (CWIS) platform called Gopher and, far off into the realm of wild-eyed speculation, this thing called the World-Wide Web.  One Friday in early 1992, noting that Steve was two gin-and-tonics into a generous mood, I asked him if he might have a Sun box lying around that I might borrow to try out a couple of those things.

He did, and the Sun 4-c in question became fatty.law.cornell.edu — named after Fatty the Bookkeeper, a leading character in the Brecht-Weill opera “Mahagonny”, which tells the story of a “City of Nets”.   It was the first institutional web server that had information about something other than high-energy physics, and somewhere around the 30th web server in the world. We still get a fair amount of traffic via links to “fatty”, though the machine name has not been in use for a decade and a half (in fact, we maintain a considerable library of redirection code so that most of the links that others have constructed to us over a quarter-century still work).
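
For readers curious what such a redirection layer amounts to, here is a minimal sketch of the idea, with invented paths and targets; I have no knowledge of the LII’s actual implementation, which is certainly far more extensive. The point is simply a lookup table of “fatty”-era URLs consulted before a request is allowed to fall through to a 404.

    # Hypothetical WSGI sketch of a legacy-link redirector; the paths
    # and targets below are invented for illustration only.
    LEGACY_REDIRECTS = {
        "/uscode/17/index.html": "/uscode/text/17",
        "/supct/index.html": "/supremecourt/text/home",
    }

    def application(environ, start_response):
        """Map old paths to current URLs with a permanent redirect."""
        target = LEGACY_REDIRECTS.get(environ.get("PATH_INFO", ""))
        if target:
            start_response("301 Moved Permanently", [("Location", target)])
            return [b""]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Not found\n"]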

What did we put there?  First, a Gopher server.  Gopher pages were either menus or full text — it was more of a menuing system than full hypertext, and did not permit internal links.  Our first effort — Peter’s idea — was Title 17 of the US Code, whose structure was an excellent fit with Gopher’s capabilities, and whose content (copyright) was an excellent fit with the obsessions of those who had Internet access in those days.  It got a lot of attention, as did Peter’s first shot at a Supreme Court opinion in HTML form, Two Pesos v. Taco Cabana.

Other things followed rapidly, and later that year we began republishing all Supreme Court opinions in electronic form.  Initially we linked to the Cleveland Freenet site; then we began republishing them from text in ATEX format; later we were to add our own Project Hermes subscription.  Not long after we began publishing, I undertook to develop the first-ever web browser for Microsoft Windows — mostly because at the time it seemed unlikely that anyone else would, anytime soon.  We were just as interested in editorial innovations.   Our first legal commentary (then called “Law About...”, and now WEX) was put together in 1993, based on work done by Peter and by Jim Milles in constructing a topics list useful to both lawyers and the general public. A full US Code followed in 1994.  Our work with CD-ROM continued for a surprisingly long time — we offered statutory supplements and leading Supreme Court cases on CD for a number of years, and our version of Title 26 was the basis for a CD distributed by the IRS into the new millennium.  Back in the day when there was, plausibly, a book called “The Whole Internet User’s Guide and Catalog”, we appeared in it eight times.

To talk about the early days solely in terms of technical firsts or Internet-publishing landmarks is, I think, to miss the more important parts of what we did.  First, as Peter Martin remarks in a promotional video that we made several years ago, we showed that there was tremendous potential for law schools to become creative spaces for innovation in all things related to legal information (they still have that tremendous potential, though very few are exercising it).  We used whatever creativity we could muster to break the stranglehold of commercial publishers not just on legal publishing as a product, but also on thinking about how law should be published, and where, and for whom.  In those days, it was all about caselaw, all about lawyers, and a mile wide and an inch deep. Legal academia, and the commercial publishers, were preoccupied with caselaw and with the relative prestige and authority of courts that publish it; they did not seem to imagine that there was a need for legal information outside the community of legal practitioners.  We thought, and did, differently.  

We were followed in that by others, first in Canada, and then in Australia, and later in a host of other places.  Many of those organizations — around 20 among them, I think — have chosen to use “LII” as part of their names, and “Legal Information Institute” has become a kind of brand name for open access to law.   Many of our namesakes offer comprehensive access to the laws of their countries, and some are de facto official national systems.  Despite recurring fascination with the idea of a “free Westlaw”, a centralized free-to-air system has never been a practical objective for an academically-based operation in the United States. We have, from the outset, seen our work  as a practical exploration of legal information technology, especially as it facilitates integration and aggregation of numerous providers.  The ultimate goal has been to develop new methods that will help people — all people, lawyers or not —  find and understand the law, without fees.

It was obvious to us from the start that effective work in this area would require deep, equal-status collaboration between legal experts and technologists, beginning with the two of us.  My collaboration with Peter was the center of my professional life for 20 years.  I was lucky to have the opportunity.  Legal-academic culture and institutions are often indifferent or hostile to such collaborations, and they are far rarer and much harder to maintain than they should be.  These days, it’s all the rage to talk about “teaching lawyers to code”. I think that lawyers would get better results if they would learn to communicate and collaborate with those who already know how.

Finally, we felt then – as we do now – that the best test of ideas was to implement them in practical, full-scale systems offered to the public in all its Internet-based, newfound diversity.  The resulting work, and the LII itself,  have been defined by the dynamism of opposites — technological expertise vs. legal expertise, practical publishing vs. academic research, bleeding-edge vs. when-the-audience-is-ready,  an audience of lawyers vs. an audience of non-lawyer professionals and private citizens.  That is a complicated, multidirectional balancing act — but we are still on the high-wire after 25 years, and that balancing act has been the most worthwhile thing about the organization, and one that will enable a new set of collaborators to do many more important things in the years to come.

Thomas R. Bruce is the Director of the Legal Information Institute, which he co-founded with Peter W. Martin in 1992.