
As a first year law student, a handful of things are given to you (at least where I studied): a pre-fabricated schedule, a non-negotiable slate of professors, and a basic history lesson — illustrated through individual cases.  During my first year, the professor I fought with the most was my property law teacher.  Now, I realize that it wasn’t her that I couldn’t accept; it was the implications of the worldview she presented.  She saw “property law” as a construct through which wealthy people protected their interests at the expense of those who didn’t have the means to defend themselves.  Every case — from “fast fox, loose fox” on down — was an example of someone’s manipulating or changing the rules to exclude the poor from fighting for their interests.  It was a pretty radical position to accept and I, maybe to my own discredit, ignored it.

Then, I graduated. I began looking at legal systems around the world and tried to get a sense of how they actually function in practice. I found something a bit startling: they don’t function.  Or, at least not for most of us.

Justice: Inaccessible

At first glance, that may seem alarmist.  Honestly, it feels a bit radical to say.  But, then consider that in 2008, the United Nations issued a report entitled Making the Law Work for Everyone, which estimated that 4 billion people (of a global population of 6 billion at the time) lacked “meaningful” access to the rule of law.

Stop for a second. Read that again. Two-thirds of the world’s population don’t have access to rule-of-law institutions. This means that they lack, not just substantive representation or equal treatment, but even the most basic access to justice.

Now, before you write me, and the UN, off completely as crackpots, I must make some necessary caveats. “Rule-of-law institutions,” in the UN report, means formal, governmentally sponsored systems.  The term leaves out pluralistic systems, which rely on adapted or traditional actors, many of which exist exclusively outside of the purview of government, to settle civil or small-scale criminal disputes.  Similarly, the word “meaningful,” in this context, is somewhat ambiguous. Making the Law Work for Everyone isn’t clear about what standards it uses to determine what constitutes “access,” “fairness,” or relevant and substantive law (i.e., the number and content of laws). While the report’s major focus was on adopting an appropriate level of formalism in order to create inclusive systems, its strategy of defining away pluralism and cultural relativism while assessing a global standard for an internationally (and often constitutionally) protected service significantly complicates its analysis.

What’s causing the gap in access to justice?

So, let’s work from the basics. The global population has been rising steadily for, well, a while now, increasing the volume of need addressed by legal systems.  Concurrently, the number of countries has grown, and with them young legal systems, often without precedents or established institutional infrastructures. As the number of legal systems has grown, so too have the public’s expectations of the ability of these systems to provide formalized justice procedures. Within each of these nations, trends like urbanization, the emergence of new technologies, and the expansion of regulatory frameworks add complexity to the number of laws each domestic justice system is charged with enforcing. On top of this, the internationalization and proliferation of treaties, trade agreements, and institutions imposes another layer of complexity on what are often already over-burdened enforcement mechanisms. It’s understandable, then, why just about every government in the world struggles, not only to create justice systems that address all of these very complicated issues, but also to administer these systems so that they offer universally equal access and treatment.

Predictably, private industry observed these trends a long time ago. As a result, it should be no surprise that the cost of legal services has been steadily rising for at least 20 years.  Law is unusual in that it is in charge of creating its own complexity, which is also the basis of its business model.  The harder the law is to understand, the more work there is for lawyers, and the fewer the people who have the specialized skills and relationships necessary to successfully achieve an outcome through the legal system.

What’s even more confusing is that because clients’ needs and circumstances vary so significantly, it’s very difficult to reliably judge the quality of service a lawyer provides.  The result is a market where people, lacking any other reliable indicator, judge by price, aesthetics, and reputation.  To a limited extent, this enables lawyers to self-create high-end market demand by inflating prices and, well, wearing really nice suits. (Yes, this is an oversimplification. But they do, often, wear REALLY nice suits).   The result is the exclusion (or short-shrifting) of middle- and low-income clients who need the same level of care, but are less concerned with the attire.  Incidentally, the size and spending power of the market being excluded — even despite growing wealth inequality — are enormous.

Redesigning legal services

I don’t mean to be simplistic or to re-state widely understood criticisms of legal systems.  Instead, I want to establish the foundations for my understanding of things. See, I approach this from a design viewpoint. The two perspectives above — namely, that of governments trying to implement systems, and that of law firms trying to capitalize on available service markets — often neglect the one design perspective that determines success: that of the user. When we’re judging the success of legal systems, we don’t spend nearly enough time thinking about what the average person encounters when trying to engage those systems.  For most people, the accessibility (both physical and intellectual) and procedures of law are as determinative of participation in the justice system as whether the system meets international standards.

The individuals and organizations on the cutting edge of this thinking, in my understanding, are those tasked with delivering legal services to low-resource and rural populations. Commercial and governmental legal service providers simply haven’t figured out a model that enables them to effectively engage these populations, who are also the world’s largest (relatively) untapped markets.  Legal aid providers, however, encounter the individuals who have to overcome barriers like cost, time, education, and distance to just preserve the status quo, as well as those who seek protection.  From the perspective of legal aid clients, the biggest challenge to accessing the justice system may be the fact that courts are often located dozens of miles away from clients’ homes, over miserable roads.  Or the biggest challenge may be the fact that clients have to appear in court repeatedly to accomplish what seem like small tasks, such as extensions or depositions. Or the biggest challenge may be simply not knowing whom to approach to accomplish their law-related goals.  Each of these challenges represents a barrier to access to justice.  Each barrier to access, when alleviated, represents an opportunity for engagement and, if done correctly, an opportunity for financial sustainability.

Mobile points the way

None of this is intended as criticism — almost every major service industry in the world grapples with the same challenges.  Well, with the exception of at least one: the mobile phone industry.  The emergence of mobile phones presents two amazing opportunities for the legal services industry: 1) the very real opportunity for effective engagement with low-income and rural communities; and 2) an example of how, when service offerings are appropriately priced, these communities can represent immensely profitable commercial opportunities.

Let’s begin with a couple of quick points of information.  Global mobile penetration (the number of people with active cell phone subscriptions) is approximately 5.3 billion, or 78 percent of the world’s population.  There are two things that every single one of those mobile phone accounts can do: 1) make calls; and 2) send text messages.  Text messaging, or SMS (Short Message Service), is particularly interesting in the context of legal services because it is a way to actively engage with a potential or current client immediately, cheaply, and digitally.  There are 4.3 billion active SMS users in the world (more than twice the global Internet population of 2 billion) and, in 2010, the world sent 6.1 trillion text messages, a figure that has tripled in the last three years and is projected to double again by 2013.  It’s no exaggeration, at this point, to say that mobile technology is transformative to, basically, everything.  What has not been fully explored is why and how mobile devices can transform service delivery in particular settings.

Why is SMS so promising?

Something well-understood in the technology space is the value of approaching people using the platforms that they’re already familiar with.  In fact, in technology, the thing that matters most is use. Everything has to make sense to a user, and has to make a task easier than it would be without the system.  This thinking largely takes place in technology spaces, in the niche called “user-interface design.” (Forgive the nerdy term, lawyers. Forgive the simplicity, fellow tech nerds.)  These are the people who design the way that people engage with a new piece of technology.

In this way, considering that it has 4.3 billion users, SMS is one of the best (and most simply) designed technologies ever.  SMS is instant, (usually) cheap, private, digital, standardized, asynchronous (unlike a phone call, people can respond whenever they want), and very easy to use. These benefits have made it the most used digital text-based communication tool in human history.

User-Interface-Design Principles + SMS + Legal Services = ?

So. What happens when you take user-interface design thinking, and apply it to legal systems?  Recognizing that the assumptions underlying most formal legal systems arose when those systems originated (most of the time hundreds of years ago), how would we update or change what we do to improve the functioning of legal systems?

There are a lot of good answers to those questions, and moves toward transactional representation, form standardization (à la LegalZoom), legal process outsourcing (à la Pangea3), legal information systems (there are a lot), and process automation (such as document assembly) are all tremendously interesting approaches to this work.  Unfortunately, I’m not an expert on any of those.

FrontlineSMS:Legal

I work for an organization called FrontlineSMS, where I also founded our FrontlineSMS:Legal project.  What we do, at FrontlineSMS, is design simple pieces of technology that make it easier to use SMS to do complex and professional engagement.  The FrontlineSMS:Legal project seeks to capitalize on the benefits of SMS to improve access to justice and the efficiency of legal services.  That is, I spend a lot of my time thinking about all the ways in which SMS can be used to provide legal services to more people, more cheaply.

And the good news is, I think, that there are a lot of ways to do this.  Pardon me while I geek out on a few.

Intake and referral

The process of remote legal client intake and referral takes a number of forms, depending on the organization, procedural context, and infrastructure. Within most legal processes, the initial interview between a service provider and a client is an exceptionally important and complex interaction. There are, however, often a number of simpler communications that precede and coordinate the initial interview, such as very basic information collection and appointment scheduling, which could be conducted remotely via SMS.

Given the complexity of legal institutions, providing remote intake and referral can significantly reduce the inefficiencies that so-called “last-mile” populations — i.e., populations who live in “areas …beyond the reach of basic government infrastructure or services” — face in seeking access to services. The issue of complexity is often compounded by the centralization of legal service providers in urban areas, which requires potential clients to travel just to begin these processes. Furthermore, most rural or extension services operate with paper records, which are physically transported to central locations at fixed intervals. These records are not particularly practical from a workflow management perspective and often are left unexamined in unwieldy filing systems. FrontlineSMS:Legal can reduce these barriers by creating mobile interfaces for digital intake and referral systems, which enable clients to undertake simple interactions, such as identifying the appropriate service provider and scheduling an appointment.
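To make the idea concrete, here is a minimal sketch of what a digital intake interface might do with a structured message. The keyword format, field names, district names, and provider list are all invented for illustration; they are not FrontlineSMS:Legal’s actual message format or data.

```python
# Hypothetical sketch: parsing a structured intake SMS and routing the client.
# Invented message format: "INTAKE <district> <issue> <client name...>"

PROVIDERS = {
    # district -> legal aid office covering it (illustrative data only)
    "kisumu": "Kisumu Legal Aid Centre",
    "nakuru": "Nakuru Justice Desk",
}

def parse_intake(sms_text):
    """Split an intake SMS into structured fields; return None if malformed."""
    parts = sms_text.strip().split()
    if len(parts) < 4 or parts[0].upper() != "INTAKE":
        return None
    return {
        "district": parts[1].lower(),
        "issue": parts[2].lower(),
        "client_name": " ".join(parts[3:]),
    }

def route(intake):
    """Refer the client to the provider covering their district, if any."""
    return PROVIDERS.get(intake["district"], "central referral desk")

record = parse_intake("INTAKE Kisumu land Jane Wanjiru")
provider = route(record)
```

Even this toy version captures the point: a client dozens of miles from the nearest office can be registered and referred without anyone travelling anywhere.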

Client and case management

After intake, most legal processes require service providers to interact with their clients on multiple occasions, in order to gather follow-up information, prepare the case, and manage successive court hearings. Given that each such meeting requires people from last-mile communities to travel significant distances, the iterative nature of these processes often imposes a burden on clients that is disproportionate to the desired outcome. In addition, many countries struggle to provide sufficient postal or fixed-line telephone services, meaning that organizing follow-up appointments with clients can be a significant challenge. These challenges become considerably more complicated in cases that have multiple elements requiring coordination between both clients and institutions.

Similarly, in order to follow up with clients, service providers must place person-to-person phone calls, which can take significant chunks of time. Moreover, internal case management systems originate from paper records, causing large amounts of duplicative data entry and lags in data availability.

To alleviate these problems, we propose that legal service providers install a FrontlineSMS:Legal hub in a central location, such as a law firm or public defender’s office. During the intake interview, service agents would record the client’s mobile number and use SMS as an ongoing communications platform.

By creating a sustained communications channel between service providers and clients, lawyers and governments could communicate simple information, such as hearing reminders, probation compliance reminders, and simple case details. Additionally, these communications could be automated and sent to entire groups of clients, thereby reducing the amount of time required to manage clients and important case deadlines. This set of tools would reduce the barriers to communication with last-mile clients and create digital records of these interactions, enabling service providers to view all of these exchanges in one easy-to-use interface, reducing duplicative data entry and improving information usability.
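As a sketch of the automated group reminders described above (the data shapes, phone numbers, and message wording here are my own inventions, not FrontlineSMS’s actual API):

```python
from datetime import date

# Illustrative case records; in practice these would come from the intake
# database that the SMS hub maintains.
CASES = [
    {"client": "A. Otieno", "phone": "+254700000001", "hearing": date(2012, 3, 14)},
    {"client": "B. Achieng", "phone": "+254700000002", "hearing": date(2012, 3, 14)},
    {"client": "C. Mwangi", "phone": "+254700000003", "hearing": date(2012, 4, 2)},
]

def reminders_for(hearing_date, cases):
    """Build (phone, message) pairs for every client with a hearing that day."""
    return [
        (c["phone"],
         "Reminder: your hearing is on %s. Reply HELP for assistance."
         % c["hearing"].isoformat())
        for c in cases
        if c["hearing"] == hearing_date
    ]

# One pass over the caseload replaces a morning of person-to-person calls.
batch = reminders_for(date(2012, 3, 14), CASES)
```

The same loop, pointed at a different template, would handle probation compliance reminders or simple case-detail updates, and every message sent becomes part of the digital record.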

Caseload- and service-extension agent management

Although this article focuses largely on innovations that improve direct access to legal services for last-mile populations, the same tools also have the effect of improving internal system efficiency by digitizing records and enabling a data-driven approach to measuring outcomes. Both urban and rural service extension programs have a difficult time monitoring their caseloads and agents in the field. The same communication barriers that limit a service provider’s ability to connect with last-mile clients also prevent communication with remote agents. Mobile interfaces have the effect of lowering these barriers, enabling both intake and remote reporting processes to feed digital interfaces. These digital record systems, when used effectively, inform a manager’s ability to allocate cases to the most available service provider.

Applied to legal processes, supervising attorneys can use the same SMS hubs that administer intake and case management processes to digitize their internal management structures. One central hub, fed by the intake process that information desks often perform, and remote input where service extension agents exist can allow managers to assign cases to individual service providers, and then track them through disposition. In doing so, legal service coordinators will be able to track each employee’s workload in real time. In addition, system administrators will be able to look at the types and frequency of cases they take on, which will inform their ability to allocate resources effectively. If, for example, one area has a dramatically higher number of cases than another, it may make sense to deploy multiple community legal advisors to adequately address the area of greatest need.
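A minimal sketch of that allocation logic, with invented agent names, areas, and caseload numbers:

```python
# Hypothetical sketch of data-driven case allocation: assign each new case
# to the field agent with the lightest current caseload in the relevant area.

AGENTS = [
    {"name": "Agent 1", "area": "north", "caseload": 7},
    {"name": "Agent 2", "area": "north", "caseload": 3},
    {"name": "Agent 3", "area": "south", "caseload": 2},
]

def assign_case(area, agents):
    """Pick the least-loaded agent covering the area and increment their load."""
    candidates = [a for a in agents if a["area"] == area]
    if not candidates:
        return None  # no coverage: flag for the coordinator instead
    chosen = min(candidates, key=lambda a: a["caseload"])
    chosen["caseload"] += 1
    return chosen["name"]

first = assign_case("north", AGENTS)
second = assign_case("north", AGENTS)
```

Because every assignment updates the same records, a coordinator can see at a glance which areas are saturated, which is exactly the signal needed to decide where to deploy additional community legal advisors.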

Ultimately, though, SMS use in legal services remains largely untested.  FrontlineSMS is currently working with several partners to design specific mobile interfaces that meet their needs.  These efforts will definitely turn up new and interesting things that can be done using SMS and, particularly, FrontlineSMS.  These projects, however, are still largely in the design phase.

In addition to practical implementation challenges, there are a large number of challenges that lie ahead, as we begin to consider the implications of the professional use of SMS.  Issues such as security, privacy, identity, and chain of custody will all need to be addressed as systems adapt to include new technologies.  There are a number of brilliant minds well ahead on this, and we’ve even jury-rigged a few solutions ourselves, but there will be plenty to learn along the way.

The potential is great

What is clear, though, is that SMS has the potential to improve cost efficiencies, engage new populations, and, for the first time, build a justice system that works for the people who need it most.

I don’t think any of this will square me with my property-law professor.  I’m not sure I’ll ever fix property law.  But I do think that by reaching out to new populations using the technologies in their pockets, we can make a difference in the way people interact with the law. And even if that’s just a little bit, even if it just enables one percent more people to protect their homes, start a business, or pursue a better life, isn’t that worth it?

[Editor’s Note: For other VoxPopuLII posts on using technology to improve access to justice, please see Judge Dory Reiling, IT and the Access-to-Justice Crisis; Nick Holmes, Accessible Law; and Christine Kirchberger, If the mountain will not come to the prophet, the prophet will go to the mountain.]

Sean Martin McDonald is the Director of Operations at FrontlineSMS and the founding Director of FrontlineSMS:Legal. He holds JD and MA degrees from American University. He is the author, most recently, of The Case for mLegal.

VoxPopuLII is edited by Judith Pratt. Editor-in-Chief is Robert Richards, to whom queries should be directed. The statements above are not  legal  advice or legal representation. If you require legal advice,  consult a  lawyer. Find a lawyer in the Cornell LII Lawyer Directory.

The Civic Need

Civic morale in the U.S. is punishingly low and bleeding out. When it comes to recent public approval of the U.S. Congress, we’re talking imminent negative territory, if such were possible. Gallows chuckles were shared over an October 2011 NYT/CBS poll that found approval of the U.S. Congress down to 9% — lower than, yes, communism, the British Petroleum company during the oil spill, and King George III at the time of the American Revolution. The trends are beyond grim: Gallup in November tracked Congress falling to 13% approval, tying an all-time low. For posterity, this is indeed the first branch of the federal government in America’s constitutional republic, the one with “the power of the purse“, our mostly-millionaire law-makers. Also: the branch whose leadership recently attempted to hole up in an anti-democratic, unaccountable “SuperCommittee” to make historic decisions affecting public policy in secret. Members of Congress are the most fallible, despised elected officials in our representative democracy.

OpenCongress: Responding with open technology

Such was the visceral distrust of government (and apathy about the wider political process, in all its messy necessity) that our non-profit organization, the Participatory Politics Foundation (PPF), sought to combat with our flagship Web application, OpenCongress.org. Launched in 2007, its original motto was: “Bringing you the real story about what’s happening in Congress.” Our premise, then as today, is that radical transparency in government will increase public accountability, reduce systemic corruption in government, and result in better legislative outcomes. We believe free and open-source technology can push forward and serve a growing role in a much more deliberative democratic process — with an eye towards comprehensive electoral reform and increased voter participation. The technology buffet includes, in part, the following: software (in the code that powers OpenCongress); Web applications (like the user-friendly OpenCongress pages and engagement tools); mobile (booming, of course, globally); libre data and open standards; copyleft licensing; and more. One articulation of our goal is to encourage government, as the primary source, to comply exhaustively with the community-generated Principles of Open Government Data (which, it should be noted, are continually being revised and amended by #opengov advocates, as one would expect in a healthy, dynamic, and responsive community of watchdogs with itchy social sharing fingers). Another articulation of our goal, put reductively: we’ll know we’re doing better when voter participation rates rise in the U.S. from our current ballpark of 48% to levels comparable to those of other advanced democracies. Indeed, there has been a very strong and positive public demand for user-friendly Web interfaces and open data access to official government information. Since its launch, OpenCongress has grown to become the most-visited not-for-profit government transparency site in the U.S. (and possibly the world), with over one million visits per month, hundreds of thousands of users, and millions of automated data requests filled every week.

OpenGovernment.org: Opening up state legislatures

The U.S. Congress, unfortunately, remains insistently closed-off from the taxpaying public — living, breathing people and interested constituent communities — in its data inputs and outputs, while public approval keeps falling (for a variety of reasons, more than can be gestured towards here). This discouraging sentiment might be familiar to you — even cliché — if you’re an avid consumer of political news media, political blogs, and social media. But what’s happening in your state legislature? What bills in your state House or Senate chambers are affecting issues you care about? What are special interests saying about them, and how are campaign contributions influencing them? Even political junkies might not have conversational knowledge of key votes in state legislatures, which — if I may be reductive — take all the legislative arcane-ness of the federal Congress and boil it down to an even more restrictive group of state capitol “insiders” who really know the landscape. A June 2011 study by the University of Buffalo PoliSci Department found that, as summarized on Ballotpedia:

First, the American mass public seems to know little about their state governments. In a survey of Ohio, Patterson, Ripley, and Quinlan (1992) found that 72 percent of respondents could not name their state legislator. More recently, an NCSL-sponsored survey found that only 33 percent of respondents over 26 years old could correctly identify even the party that controlled their state legislature.

Further, state legislative elections are rarely competitive, and frequently feature only one major party candidate on the ballot. In the 2010 elections, 32.7 percent of districts had only one major party candidate running. (Ballotpedia 2010) In 18 of the 46 states holding legislative elections in 2010, over 40 percent of seats faced no major-party challenge, and in only ten states was the proportion of uncontested seats lower than 20 percent. In such an environment, the ability to shirk with limited consequences seems clear.[1]

To open up state government, PPF created OpenGovernment.org as a joint project with the non-profit Sunlight Foundation and the community-driven Open States Project (of Sunlight Labs). Based on the proven OpenCongress model of transparency, OpenGovernment combines official government information with news and blog coverage, social media mentions, campaign contribution data, public discussion forums, and a suite of free engagement tools. The result, in short, is the most user-friendly page anywhere on the Web for accessing bill information at the state level. The site, launched in a public beta on January 18th, 2011, currently contains information for six U.S. state legislatures: California, Louisiana, Maryland, Minnesota, Texas, and Wisconsin. In March 2011, OpenGovernment was named a semi-finalist in the Accelerator Contest at South by Southwest Interactive conference.

Skimming a state homepage — for example, California — gives a good overview of the site’s offerings: every bill, legislator, vote, and committee, with as much full bill text as is technically available; plus issues, campaign contributions, key vote analysis, special interest group positions, and a raft of social wisdom. A bill page — for example, Wisconsin’s major freedom of association bill, SB 11 — shows how it all comes together in a highly user-friendly interface and, we hope, the best online user experience. Users can track, share, and comment on legislation, and then contact their elected officials over email directly from OpenGovernment pages. OpenGovernment remains in active open-source development. Our developer hub has more information. See also our wish-list and how anyone can help us grow, as we seek to roll out to all 50 U.S. state legislatures before the November 2012 elections.

Opening up state legislative data: The benefits

To make the value proposition for researchers explicit, I believe fundamentally there is clear benefit in having a go-to Web resource to access official, cited information about any and all legislative objects in a given state legislature (as there is with OpenCongress and the U.S. Congress). It’s desirable for researchers to know they have a permalink of easy-to-skim info for bills, votes, and more on OpenGovernment — as opposed to clunky, outmoded official state legislative websites (screenshots of which can be found in our launch blog post, if you’re brave enough to bear them). Full bill text is, of course, vital for citation, as is someday having fully-transparent version-control by legislative assistants and lobbyists and members themselves. For now, the site’s simple abilities to search legislation, sort by “most-viewed,” sort by date, sort by “most-in-the-news,” etc., all offer a highly contemporary user experience, like those found by citizens elsewhere on the Web (e.g., as online consumers or on social media services). Our open API and code and data repositories ensure that researchers and outside developers (e.g., data specialists) have bulk access to the data we aggregate, in order to remix and sift through for discoveries and insights. Bloggers and journalists can use OpenGovernment (OG) in their political coverage, just as OpenCongress (OC) continues to be frequently cited by major media sites and blog communities. Issue advocates and citizen watchdogs can use OG to find, track, and contact their state legislators, soon with free online organizing features like Contact-Congress on OC. OpenGovernment‘s launch was covered by Alex Howard of O’Reilly Radar, the National Council of State Legislatures (The Thicket blog), and Governing, with notes as well from many of PPF and Sunlight’s #opengov #nonprofit allies, and later on by Knight Foundation, Unmatched Style, and dozens of smaller state-based political blogs.

The technology that powers OpenGovernment.org

The technology behind OpenGovernment was assembled by PPF’s former Director of Technology (and still good friend-of-PPF, following his amicable transition to personal projects) Carl Tashian. In designing it, Carl and I were driven first by a desire to ensure the code was not only relatively remixable but also as modular as possible. Remixable, because we hope and expect that other open-source versions of OpenGovernment will spring up, creating (apologies for the cliché, but it’s one I am loath to relinquish, as it’s really the richest, most apt description of a desirable state of affairs) a diverse ecosystem of government watchdog sites for state legislatures. Open data and user-focused Web design can bring meaningful public accountability not only to state legislatures, but also to the executive and judicial branches of state government. PPF seeks non-profit funding support to bring OpenGovernment down to the municipal level — county, city, and local town councils, as hyper-local and close to the neighborhood block as possible — and up to foreign countries and international institutions like the United Nations. In theory, any government entity with official documents and elected official roles is a candidate for a custom version of OpenGovernment facing the public on the open Web — even those without fully-open data sets, which, of course, most countries don’t have. But by making OpenGovernment as modular as possible, we aimed to ensure that the site could work with a variety of data inputs and formats. The software is designed to handle a best-case data stream — an API of legislative info — or less-than-best, such as XML feeds, HTML scraping, or even a static set of uploaded documents and spreadsheets.
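One way to picture that modularity: every input channel is an adapter that produces the same normalized record, so the rest of the site never cares where the data came from. The sketch below uses Python with invented field names (OpenGovernment itself is a Ruby application), purely to illustrate the design.

```python
# Sketch of the modular-input idea: each adapter turns its source format
# (API JSON, a scraped page, a static spreadsheet row) into one normalized
# bill record. All shapes and names here are illustrative, not OG's internals.

def normalize(bill_id, title, status):
    """The single internal shape every adapter must produce."""
    return {"bill_id": bill_id, "title": title, "status": status}

def from_api_json(payload):
    # Best case: a structured API response (key names are invented).
    return normalize(payload["id"], payload["title"], payload["status"])

def from_spreadsheet_row(row):
    # Worst case: a static upload, e.g. ["SB 11", "Budget Repair", "passed"]
    return normalize(row[0], row[1], row[2])

api_bill = from_api_json({"id": "SB 11", "title": "Budget Repair", "status": "passed"})
csv_bill = from_spreadsheet_row(["SB 11", "Budget Repair", "passed"])
```

Swapping a scraper for a proper API then means writing one new adapter, not touching the application.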

Speaking of software, OpenGovernment is powered by GovKit, an open-source Ruby gem for aggregating and displaying open government APIs from around the Web. Diagrammed here, they are summarized below with a few notes:

  • Open States – a RESTful API of official government data, e.g., bills, votes, legislators, committees, and more. This data stream forms the backbone of OpenGovernment. A significantly volunteer effort coordinated by the talented and dedicated team at Sunlight Labs, Open States fulfills a gigantic public need for standardized data about state legislation — largely through the time-intensive process of scraping HTML from unstandardized official government websites. It is really remarkable, precedent-setting public-interest work; updates are posted by James Turk on the Labs Blog. Data are received daily in JSON format and, wherever possible, bill text is displayed in the smooth open-source DocumentCloud text viewer (e.g., WI SB11).
  • OpenCongress – API for federal bills, votes, people, and news and blog coverage. OpenGovernment is primarily focused on finding and tracking state bills and legislators, but one of our premises in designing the public resource was that the vast majority of users would first need to look up their elected officials by street address. (Can you name your state legislators with confidence offhand? I couldn’t before developing OpenCongress in 2007.) So since users were likely to take that action, we used our sibling site OpenCongress to find and display federal legislators underneath state ones (e.g., CA zip 94110).
  • Google News, Google Blog Search, Bing API – we use these methods to aggregate news and blog coverage of bills and members, as on OpenCongress: searching for specific search terms and thereby assembling pages that allow a user to skim down recent mentions of a bill (along with headlines and sources) without straying far from OpenGovernment. One key insight of OpenCongress was that lists of bills “most in the news” and “most-on-blogs” can point users towards what’s likely most-pressing or most-discussed or most-interesting to them, as search engine or even intra-site keyword searches on, say, “climate change bill” don’t always return most-relevant results, even when lightly editorially curated for SEO. On pages of news results for individual bills (e.g., CA SB 9) or members (e.g., WI Sen. Tim Carpenter), it’s certainly useful to get a sense of the latest news by skimming down aggregated headlines, even given known issues with bringing in similarly titled bills (e.g., SB 9 in Texas, not California) or sports statistics or spam. Future enhancements to OpenGovernment will do more to highlight trusted news sources from open data standards — a variety of services like NewsTrust exist on this front, and there’s no shortage of commercial partnerships possible (or via Facebook Connect and other closed social media), but PPF’s focus is on mitigating the “filter bubble” and staying in play on the open Web.
  • Transparency Data API (by Sunlight Labs) to bring in campaign contribution data from FollowTheMoney. If Open States data is the backbone of OpenGovernment, this money-in-politics data is its heart. PPF’s work is first and foremost motivated by a desire to work in the public interest to mitigate the harmful effects of systemic corruption at every level of government, from the U.S. Congress on down. (See, e.g., Lessig, Rootstrikers, and innumerable academic studies and news investigations into the biased outcomes of a system where, for example, federal members of Congress spend between 30 and 70 percent of their time fundraising instead of connecting with constituents.) Part of this is vocally endorsing comprehensive electoral reforms such as non-partisan redistricting, right-to-vote legal frameworks, score voting, parliamentary representation, and the Fair Elections Now Act for full public financing of elections. But the necessary first step is radical transparency of campaign contributions by special interests to elected officials, accompanied by real-time financial disclosure, stronger ethics laws, aggressive oversight, and regulation to stop the revolving door with lobbyists and corporations that results in oligarchical elites and a captured government. Hence “The Money Trail” on OpenGovernment, e.g., for Texas, is a vital resource for connecting bills, votes, and donations. The primary source for money figures is our much-appreciated and detail-oriented non-profit partners at the National Institute on Money in State Politics, who receive data in either electronic or paper files from the state disclosure agencies with which candidates must file their campaign finance reports. Future enhancements to OG will integrate with MAPLight’s unique analysis of industries supporting and opposing individual bills with their donations. MAPLight has data for CA and WI that we’re looking to bring in, with more to come.
  • Project VoteSmart’s API brings in special-interest group ratings for state government and allows OpenGovernment to highlight the most-impactful legislation in each state, marking their non-partisan “key vote” bills (e.g., for TX). VoteSmart does remarkable legislative analysis that neatly ties bills to issue areas, but VoteSmart doesn’t have a built-in money-in-politics tie-in on their pages, or tools to track and share legislation. (This is just another way in which OpenGovernment, by aggregating the best available data in a more user-focused design, adds value, we hope, in an open-source Web app, about which more below.) Project VoteSmart’s work is hugely valuable, but the data is again ornery — special interest group ratings are frequently sparse and vary in scale, and are therefore difficult to accurately summarize or average — so for members, where applicable, we show the number of ratings in each category (e.g., for TX Sen. Dan Patrick) and link to a fuller description.
  • Wikipedia – OG first attempts to match on a legislator’s full name to a bio page on Wikipedia, with largely good but occasionally false-positive results. Of course many politicians go by nicknames, so this is a straightforward enhancement we’ll make once we can prioritize it with our available resources. See, e.g., TX Sen. Joan Huffman on OG, and her bio on Wikipedia.
  • Twitter – OG has a first-pass implementation of bringing in mentions of a state hashtag and bill number, e.g., #txbill #sb7, and for members, state name and legislator name, e.g., Texas Joan Huffman. This is another relatively straightforward engineering enhancement that we can make more responsive and more accurate with additional resources — for example, bringing in more accurate mentions and highlighting ones made by influential publishers on social media. Spending our time working within walled gardens to capture mentions of key votes isn’t inherently pleasant, but bringing out vital chatter onto the open Web and making it available via our open API will be worth the time and investment.
  • Miro Community, free and open-source software from PPF’s sibling non-profit the Participatory Culture Foundation (PCF), makes it possible to crowdsource streaming online video about state legislatures (e.g., CA).
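To make the query-building step described above concrete, here is a rough sketch in Python. The function names are mine, not OpenGovernment’s actual code; only the query formats (#txbill #sb7, Texas Joan Huffman) come from the description above.

```python
# Illustrative sketch: build the social-media search strings used to
# pull in mentions of a state bill or a state legislator.

def bill_query(state: str, chamber_prefix: str, number: int) -> str:
    """Build a hashtag query like '#txbill #sb7' for a state bill."""
    state = state.lower()
    return f"#{state}bill #{chamber_prefix.lower()}{number}"

def member_query(state_name: str, full_name: str) -> str:
    """Build a plain-keyword query like 'Texas Joan Huffman' for a member."""
    return f"{state_name} {full_name}"

if __name__ == "__main__":
    print(bill_query("TX", "SB", 7))              # → #txbill #sb7
    print(member_query("Texas", "Joan Huffman"))  # → Texas Joan Huffman
```

The same pattern extends to the accuracy improvements mentioned above, e.g., adding a state name to disambiguate similarly numbered bills.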

The OpenGovernment.org Web app is free, libre, and written in open-source Ruby on Rails code (developer hub). Like OpenCongress, the site is not-for-profit, non-commercial, promotes #opengovdata and open standards, and offers an open API, with volunteer contributions and remixes welcome and encouraged. Two features: most pages on the site are available for query via JSON and JSONP; and we offer free lookup of federal and state elected officials by latitude / longitude by URL. PostgreSQL and PostGIS power the back-end — we’ve seen with OpenCongress that the database of aggregated info can become huge, so laying a solid foundation early was important. The app uses the terrific open-source GeoServer to display vote maps — many enhancements possible there — and Jammit for asset packaging. For more technical details, see this enjoyable Changelog podcast w/ Carl from February 2011.
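As a sketch of what consuming such a JSON API looks like from a client’s side: the endpoint URL and response shape below are illustrative assumptions, not OpenGovernment’s documented interface.

```python
# Hypothetical client for a lat/long legislator-lookup JSON endpoint.
import json
from urllib.parse import urlencode

BASE = "http://api.example.org/legislators"  # placeholder, not the real URL

def lookup_url(lat: float, lon: float) -> str:
    """Build a legislator-lookup URL from a latitude/longitude pair."""
    return f"{BASE}?{urlencode({'lat': lat, 'long': lon})}"

def parse_members(payload: str) -> list:
    """Extract legislator names from a JSON response body."""
    return [m["full_name"] for m in json.loads(payload)]

if __name__ == "__main__":
    print(lookup_url(37.75, -122.41))
    sample = '[{"full_name": "Jane Doe"}, {"full_name": "John Roe"}]'
    print(parse_members(sample))  # → ['Jane Doe', 'John Roe']
```

JSONP works the same way, with the JSON body wrapped in a caller-supplied callback function.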

Web design on this beta OG Web app is by PPF and PCF’s former designer (and still good friend after an amicable parting) Morgan Knutson, now a designer with Google. As product manager, my goal was creating a user interface that — like the code base — would be as modular as possible. Lots of placeholder notes remain throughout the beta version pointing to areas of future enhancement that we can pursue with more resources and open-source volunteer help. Many of the engagement features of the site — from tracking to commenting to social sharing — were summarized brilliantly by Rob Richards in this Slaw.ca interview with me from July 29th, 2011 — viz., walking users up the “chain of engagement.” It’s a terrific, much-appreciated introduction to the civic-engagement goals of our organization and our beliefs regarding how well-designed web pages can do more than one might think to improve a real-life community in the near-term.

More on open government data and online civic engagement

To briefly run through more academic or data-driven research on the public benefits of #opengovdata and open-source Web tools for civic engagement (not intended to be comprehensive, of course, and with more caveats than I could fit here):

OpenGovernment.org: Some metrics

To wrap up this summary of OpenGovernment in 2011, then, I’ll share some of the metrics we’ve seen on Google Analytics — with limited outreach and no paid advertising or commercial partnerships, OpenGovernment beta with its six states will have received over half a million pageviews in its first year of existence. As with OpenCongress, by far the most-viewed content as of now is bills, found via search engines by their official number; search engines send approximately two-thirds of all traffic (and of that, Google alone sends over half). Hot bills in Texas and the WI organizing bill constitute three of OG’s top ten most-viewed pages sitewide. After hearing about a firearms bill in the news or from a neighbor, for example, users type “texas bill 321” or “sb 321” into Google and end up on OG, where they’re able to skim the bill’s news coverage, view the campaign contributions (for example) and interest group ratings (for example) of its authors and sponsors, and notify their legislators of their opinions by finding and writing their elected officials.

OpenGovernment.org: Next steps, and How you can help

In addition to rolling out to all 50 U.S. states and launching pilot projects in municipal areas, one of our main goals for OpenGovernment is integration with the free organizing features we launched this past summer on OpenCongress version 3. Enabling OG users to email-their-state-reps directly from bill pages will significantly increase the amount of publicly transparent, linkable, query-able constituent communication on the open Web. Allowing issue-advocacy organizations and political blog communities to create campaigns as part of future MyOG Groups will coordinate whipping of state legislators for a more continually-connected civic experience. And as always, tweaks to the beta site’s user interface will allow us to highlight the best-available information about how money affects politics and votes in state legislatures, to fight systemic corruption, and to bring about a cleaner and more trustworthy democratic process. Help us grow and contact us anytime with questions or feedback. As a public charity, PPF aspires to grow into an organization more akin to the Wikimedia Foundation (behind Wikipedia), Mozilla (behind Firefox), and MySociety (behind TheyWorkForYou, for the UK Parliament, and other projects). We’re working towards a future where staying in touch with what’s happening in state capitols is just as easy and as immediately rewarding as, for example, seeing photos from friends on Facebook, sharing a joke on Twitter, or loading a movie on Netflix.com.

David Moore is the Executive Director of the Participatory Politics Foundation, a non-profit organization using technology for civic engagement. He lives in Brooklyn, NY.

VoxPopuLII is edited by Judith Pratt. Editor-in-Chief is Robert Richards, to whom queries should be directed. The statements above are not legal advice or legal representation. If you require legal advice, consult a lawyer. Find a lawyer in the Cornell LII Lawyer Directory.

Prosumption: shifting the barriers between information producers and consumers

One of the major revolutions of the Internet era has been the shifting of the frontiers between producers and consumers [1]. Prosumption refers to the emergence of a new category of actors who not only consume but also contribute to content creation and sharing. Under the umbrella of Web 2.0, many sites indeed enable users to share multimedia content, data, experiences [2], views and opinions on different issues, and even to act cooperatively to solve global problems [3]. Web 2.0 has become a fertile terrain for the proliferation of valuable user data enabling user profiling, opinion mining, trend and crisis detection, and collective problem solving [4].

The private sector has long understood the potential of user data and has used it to analyse customer preferences and satisfaction, find sales opportunities, develop marketing strategies, and drive innovation. Recently, corporations have relied on Web platforms for gathering new ideas from clients on the improvement of existing products and services or the development of new ones (see for instance Dell’s Ideastorm; salesforce’s IdeaExchange; and My Starbucks Idea). Similarly, Lego’s Mindstorms encourages users to share their robot-building projects online, whereby a design becomes public knowledge and can be freely reused by Lego (and anyone else), as indicated by the Terms of Service. Furthermore, companies have recently been mining social network data to foresee future actions of the Occupy Wall Street movement.

Even scientists have caught up and adopted collaborative methods that enable the participation of laymen in scientific projects [5].

Now, how far has government gone in taking up this opportunity?

Some recent initiatives indicate that the public sector is aware of the potential of the “wisdom of crowds.” In the domain of public health, MedWatcher is a mobile application that allows the general public to submit information about any experienced drug side effects directly to the US Food and Drug Administration. In other cases, governments have asked for general input and ideas from citizens, such as the brainstorming session organized by the Obama administration, the wiki launched by the New Zealand Police to get suggestions from citizens for the drafting of a new policing act to be presented to the parliament, or the Website of the Department of Transport and Main Roads of the State of Queensland, which encourages citizens to share their stories related to road tragedies.

Even in so crucial a task as the drafting of a constitution, governments have relied on citizens’ input through crowdsourcing [6]. More recently, several other initiatives have fostered crowdsourcing for constitutional reform in Morocco and in Egypt.

It is thus undeniable that we are witnessing an accelerated redefinition of the frontiers between experts and non-experts, scientists and non-scientists, doctors and patients, public officers and citizens, professional journalists and street reporters. The ‘Net has provided the infrastructure and the platforms for enabling collaborative work. Network connection is hardly a problem anymore. The problem is data analysis.

In other words: how to make sense of the flood of data produced and distributed by heterogeneous users? And more importantly, how to make sense of user-generated data in the light of more institutional sets of data (e.g., scientific, medical, legal)? The efficient use of crowdsourced data in public decision making requires building an informational flow between user experiences and institutional datasets.

Similarly, enhancing user access to public data has to do with matching user case descriptions with institutional data repositories (“What are my rights and obligations in this case?”; “Which public office can help me?”; “What is the delay in the resolution of my case?”; “How many cases like mine have there been in this area in the last month?”).

From the point of view of data processing, we are clearly facing a problem of semantic mapping and data structuring. The challenge is thus to overcome the flood of isolated information while avoiding excessive management costs. There is still a long way to go before tools for content aggregation and semantic mapping are generally available. This is why private firms and governments still mostly rely on the manual processing of user input.

The new producers of legally relevant content: a taxonomy

Before digging deeper into the challenges of efficiently managing crowdsourced data, let us take a closer look at the types of user-generated data flowing through the Internet that have some kind of legal or institutional flavour.

One type of user data emerges spontaneously from citizens’ online activity, and can take the form of:

  • citizens’ forums
  • platforms gathering open public data and comments over them (see for instance data-publica)
  • legal expert blogs (blawgs)
  • or the journalistic coverage of the legal system.

User data can as well be prompted by institutions as a result of participatory governance initiatives, such as:

  • crowdsourcing (targeting a specific issue or proposal by government as an open brainstorming session)
  • comments and questions addressed by citizens to institutions through institutional Websites or through e-mail contact.

This variety of media formats and knowledge producers gives rise to a plurality of textual genres, semantically rich but difficult to manage given their heterogeneity and rapid evolution.

Managing crowdsourcing

The goal of crowdsourcing in an institutional context is to extract and aggregate content relevant for the management of public issues and for public decision making. Knowledge management strategies vary considerably depending on the ways in which user data have been generated. We can think of three possible strategies for managing the flood of user data:

Pre-structuring: prompting the citizen narrative in a strategic way

A possible solution is to elicit user input in a structured way; that is to say, to impose some constraints on user input. This is the solution adopted by IdeaScale, a software application that was used by the Open Government Dialogue initiative of the Obama Administration. In IdeaScale, users are asked to check whether their idea has already been covered by other users and, if not, to add a new idea. They are also invited to vote for the best ideas, so that it is the community itself that rates, and thus indirectly filters, the users’ input.

The MIT Deliberatorium, a technology aimed at supporting large-scale online deliberation, follows a similar strategy. Users are expected to follow a series of rules to enable the correct creation of a knowledge map of the discussion. Each post should be limited to a single idea, it should not be redundant, and it should be linked to a suitable part of the knowledge map. Furthermore, posts are validated by moderators, who should ensure that new posts follow the rules of the system. Other systems that implement the same idea are featurelist and Debategraph [7].
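The two pre-structuring mechanisms just described, checking for duplicate ideas before posting and letting community votes act as an indirect filter, can be sketched in a few lines. The similarity measure and threshold below are illustrative toys, not what IdeaScale or the Deliberatorium actually use.

```python
# Toy sketch of duplicate detection and vote-based ranking for
# crowdsourced ideas (measure and threshold are illustrative).

def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two idea texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def is_duplicate(new_idea: str, existing: list, threshold: float = 0.5) -> bool:
    """Flag a new idea that is too similar to one already posted."""
    return any(word_overlap(new_idea, e) >= threshold for e in existing)

def top_ideas(votes: dict, n: int = 3) -> list:
    """Rank ideas by community votes, highest first."""
    return sorted(votes, key=votes.get, reverse=True)[:n]
```

A real deployment would use stronger text similarity, but the division of labor is the same: the structure constrains input, and the community does the filtering.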

While these systems enhance the creation and visualization of structured argument maps and promote community engagement through rating systems, they present a series of limitations. The most important of these is the fact that human intervention is needed to manually check the correct structure of the posts. Semantic technologies can play an important role in bridging this gap.

Semantic analysis through ontologies and terminologies

Ontology-driven analysis of user-generated text implies finding a way to bridge Semantic Web data structures, such as formal ontologies expressed in RDF or OWL, with the unstructured, implicit ontologies emerging from user-generated content. Sometimes these emergent lightweight ontologies take the form of unstructured lists of terms used by users for tagging online content. Some research has addressed this issue, especially in the field of social tagging of Web resources in online communities. More concretely, several works have proposed models for reconciling top-down metadata structures (ontologies) with bottom-up tagging mechanisms (folksonomies).

The possibilities range from transforming folksonomies into lightly formalized semantic resources (Lux and Dsinger, 2007; Mika, 2005) to mapping folksonomy tags to the concepts and the instances of available formal ontologies (Specia and Motta, 2007; Passant, 2007). As the basis of these works we find the notion of emergent semantics (Mika, 2005), which questions the autonomy of engineered ontologies and emphasizes the value of meaning emerging from distributed communities working collaboratively through the Web.

We have recently worked on several case studies in which we have proposed a mapping between legal and lay terminologies. We followed the approach proposed by Passant (2007) and enriched the available ontologies with the terminology appearing in lay corpora. For this purpose, OWL classes were complemented with a has_lexicalization property linking them to lay terms.
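As a toy illustration of that enrichment step: the sketch below mocks the ontology as a plain dictionary (a real implementation would work on the OWL resource itself), with each class carrying a has_lexicalization list of lay terms. The class and term names are examples, not the actual ONTOMEDIA resources.

```python
# Mock of an OWL class enriched with a has_lexicalization property
# linking it to lay terms; lookup goes from lay term to class.

ontology = {
    "Seller":   {"has_lexicalization": ["venditore", "negoziante", "professionista"]},
    "Contract": {"has_lexicalization": ["contratto"]},
}

def classes_for_lay_term(term: str) -> list:
    """Return the ontology classes for which a lay term is a lexicalization."""
    term = term.lower()
    return [cls for cls, props in ontology.items()
            if term in props["has_lexicalization"]]
```

This is the entry point mentioned above: a consumer's common-sense query term resolves to the legal concept that anchors the institutional knowledge.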

The first case study that we conducted belongs to the domain of consumer justice, and was framed in the ONTOMEDIA project. We proposed to reuse the available Mediation-Core Ontology (MCO) and Consumer Mediation Ontology (COM) as anchors to legal, institutional, and expert knowledge, and therefore as entry points for the queries posed by consumers in common-sense language.

The user corpus contained around 10,000 consumer questions and 20,000 complaints addressed from 2007 to 2010 to the Catalan Consumer Agency. We applied a traditional terminology extraction methodology to identify candidate terms, which were subsequently validated by legal experts. We then manually mapped the lay terms to the ontological classes. The relations used for mapping lay terms with ontological classes are mostly has_lexicalisation and has_instance.

A second case study in the domain of consumer law was carried out with Italian corpora. In this case domain terminology was extracted from a normative corpus (the Code of Italian Consumer law) and from a lay corpus (around 4000 consumers’ questions).

In order to further explore the particularities of each corpus with respect to the semantic coverage of the domain, terms were gathered into a common taxonomic structure [8]. This task was performed with the aid of domain experts. When confronted with the two lists of terms, both laypersons and technical experts would link most of the validated lay terms to the technical list of terms through one of the following relations:

  • Subclass: the lay term denotes a particular type of legal concept. This is the most frequent case. For instance, in the class objects, telefono cellulare (cell phone) and linea telefonica (phone line) are subclasses of the legal terms prodotto (product) and servizio (service), respectively. Similarly, in the class actors, agente immobiliare (estate agent) can be seen as a subclass of venditore (seller). In other cases, the linguistic structures extracted from the consumers’ corpus denote conflictual situations in which the seller has not fulfilled his obligations and the consumer is therefore entitled to certain rights, such as diritto alla sostituzione (entitlement to a replacement). These types of phrases are subclasses of more general legal concepts such as consumer right.
  • Instance: the lay term denotes a concrete instance of a legal concept. In some cases, terms extracted from the consumer corpus are named entities that denote particular individuals, such as Vodafone, an instance of a domain actor, a seller.
  • Equivalent: a legal term is used in lay discourse. For instance, contratto (contract) or diritto di recessione (withdrawal right).
  • Lexicalisation: the lay term is a lexical variant of the legal concept. This is the case for instance of negoziante, used instead of the legal term venditore (seller) or professionista (professional).
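The four relations above can be recorded as simple data and then queried. The entries below are examples drawn from the text; the record format itself is an assumption for illustration.

```python
# Lay-to-legal term mappings as (lay term, relation, legal term) records.

mappings = [
    ("telefono cellulare", "subclass",       "prodotto"),
    ("agente immobiliare", "subclass",       "venditore"),
    ("Vodafone",           "instance",       "venditore"),
    ("contratto",          "equivalent",     "contratto"),
    ("negoziante",         "lexicalisation", "venditore"),
]

def by_relation(relation: str) -> list:
    """Return the lay terms mapped through a given relation."""
    return [lay for lay, rel, _ in mappings if rel == relation]
```

Tabulating the records this way makes it easy to compute, for example, how often each relation occurs, which is how claims like "subclass is the most frequent case" can be checked against the corpus.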

The distribution of normative and lay terms per taxonomic level shows that, whereas normative terms populate mostly the upper levels of the taxonomy [9], deeper levels in the hierarchy are almost exclusively represented by lay terms.

Term distribution per taxonomic level

The result of this type of approach is a set of terminological-ontological resources that provide some insight into the nature of laypersons’ cognition of the law, such as the fact that citizens’ domain knowledge is mainly factual and therefore populates the deeper levels of the taxonomy. Moreover, such resources can be used for the further processing of user input. However, this strategy presents some limitations as well. First, it is mainly driven by domain conceptual systems, which may limit the potentialities of user-generated corpora. Second, it is not necessarily scalable: these terminological-ontological resources have to be rebuilt for each legal subdomain (such as consumer law, private law, or criminal law), and it is thus difficult to foresee mechanisms for performing an automated mapping between lay terms and legal terms.

Beyond domain ontologies: information extraction approaches

One of the most important limitations of ontology-driven approaches is the lack of scalability. In order to overcome this problem, a possible strategy is to rely on informational structures that occur generally in user-generated content. These informational structures go beyond domain conceptual models and identify mostly discursive, emotional, or event structures.

Discursive structures formalise the way users typically describe a legal case. It is possible to identify stereotypical situations appearing in citizens’ descriptions of legal cases (i.e., the nature of the problem, the conflict-resolution strategies, etc.). At the core of those situations are usually predicates, so it is possible to formalize them as frame structures containing different frame elements. We followed such an approach for the mapping of the Spanish corpus of consumers’ questions to the classes of the domain ontology (Fernández-Barrera and Casanovas, 2011). The same technique was applied for mapping a set of citizens’ complaints in the domain of acoustic nuisances to a legal domain ontology (Bourcier and Fernández-Barrera, 2011). By describing the general structures citizens use to recount legal cases, we ensure scalability.
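A minimal sketch of such a frame structure, with a toy slot-filler: the element names, predicates, and matching logic here are illustrative inventions, far simpler than the frame-based representations in the cited papers.

```python
# Illustrative frame structure for a citizen's case description.
from dataclasses import dataclass, field

@dataclass
class CaseFrame:
    predicate: str                                 # e.g. "bought", "complained"
    elements: dict = field(default_factory=dict)   # frame elements (slots)

def fill_frame(text: str) -> CaseFrame:
    """Toy slot-filler: spot a known predicate and a known object in the text."""
    frame = CaseFrame(predicate="unknown")
    for verb in ("bought", "returned", "complained"):
        if verb in text:
            frame.predicate = verb
            break
    for obj in ("phone", "contract"):
        if obj in text:
            frame.elements["object"] = obj
            break
    return frame
```

Because the frames describe situations rather than domain concepts, the same structures can be reused across legal subdomains, which is the scalability argument made above.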

Emotional structures are extracted by current algorithms for opinion and sentiment mining. User data in the legal domain often contain a significant number of subjective elements (especially in the case of complaints and feedback on public services) that could be effectively mined and used in public decision making.
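At its simplest, opinion mining of complaint text can be a lexicon-based polarity count; the tiny lexicon below is an illustrative stand-in, not a real sentiment resource, and production systems use far richer models.

```python
# Toy lexicon-based polarity score for complaint text.

NEGATIVE = {"broken", "late", "refused", "noise", "unacceptable"}
POSITIVE = {"helpful", "quick", "resolved", "satisfied"}

def polarity(text: str) -> int:
    """Count of positive words minus count of negative words in the text."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)
```

Aggregated over thousands of complaints, even a crude score like this can surface services or districts with unusually negative feedback.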

Finally, event structures, which have already been explored in depth, could be useful for extracting information from user complaints and feedback, or for automatically classifying queries according to the situation described.

Crowdsourcing in e-government: next steps (and precautions?)

Legal prosumers’ input currently outstrips the capacity of governments to extract meaningful content in a cost-efficient way. Some developments are under way, among them argument-mapping technologies and semantic matching between legal and lay corpora. The scalability of these methodologies is the main obstacle to overcome in order to enable the matching of user data with open public data in several domains.

However, as technologies for the extraction of meaningful content from user-generated data develop and are used in public decision making, a series of issues will have to be dealt with. For instance, should the system developer bear responsibility for the erroneous or biased analysis of data? Ethical questions arise as well: May governments legitimately analyse any type of user-generated content? Content-analysis systems might be used for trend and crisis detection; but what if they are also used for restricting freedoms?

The “wisdom of crowds” can certainly be valuable in public decision making, but the fact that citizens’ online behaviour can be observed and analysed by governments without citizens’ knowledge poses serious ethical issues.

Thus, technical development in this domain will have to be coupled with the definition of ethical guidelines and standards, maybe in the form of a system of quality labels for content-analysis systems.

[Editor’s Note: For earlier VoxPopuLII commentary on the creation of legal ontologies, see Núria Casellas, Semantic Enhancement of Legal Information… Are We Up for the Challenge? For earlier VoxPopuLII commentary on Natural Language Processing and legal Semantic Web technology, see Adam Wyner, Weaving the Legal Semantic Web with Natural Language Processing. For earlier VoxPopuLII posts on user-generated content, crowdsourcing, and legal information, see Matt Baca and Olin Parker, Collaborative, Open Democracy with LexPop; Olivier Charbonneau, Collaboration and Open Access to Law; Nick Holmes, Accessible Law; and Staffan Malmgren, Crowdsourcing Legal Commentary.]


[1] The idea of prosumption actually existed long before the Internet, as Ritzer and Jurgenson (2010) highlight: the customer of a fast-food restaurant is to some extent also the producer of the meal, since he is expected to be his own waiter, as is the driver who pumps his own gasoline at the filling station.

[2] The Experience Project enables registered users to share life experiences, and contained around 7 million stories as of January 2011: http://www.experienceproject.com/index.php.

[3] For instance, the United Nations Volunteers Online platform (http://www.onlinevolunteering.org/en/vol/index.html) helps volunteers to cooperate virtually with non-governmental organizations and other volunteers around the world.

[4] See for instance the experiment run by mathematician Gowers on his blog: he posted a problem and asked a large number of mathematicians to work collaboratively to solve it. They eventually succeeded faster than if they had worked in isolation: http://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/.

[5] The Galaxy Zoo project asks volunteers to classify images of galaxies according to their shapes: http://www.galaxyzoo.org/how_to_take_part. See as well Cornell’s projects Nestwatch (http://watch.birds.cornell.edu/nest/home/index) and FeederWatch (http://www.birds.cornell.edu/pfw/Overview/whatispfw.htm), which invite people to introduce their observation data into a Website platform.

[6] http://www.participedia.net/wiki/Icelandic_Constitutional_Council_2011.

[7] See the description of Debategraph in Marta Poblet’s post, Argument mapping: visualizing large-scale deliberations (http://serendipolis.wordpress.com/2011/10/01/argument-mapping-visualizing-large-scale-deliberations-3/).

[8] Terms have been organised in the form of a tree having as root nodes nine semantic classes previously identified. Terms have been added as branches and sub-branches, depending on their degree of abstraction.

[9] It should be noted that legal terms are mostly situated at the second level of the hierarchy rather than the first one. This is natural if we take into account the nature of the normative corpus (the Italian consumer code), which contains mostly domain specific concepts (for instance, withdrawal right) instead of general legal abstract categories (such as right and obligation).

REFERENCES

Bourcier, D., and Fernández-Barrera, M. (2011). A frame-based representation of citizen’s queries for the Web 2.0. A case study on noise nuisances. E-challenges conference, Florence 2011.

Fernández-Barrera, M., and Casanovas, P. (2011). From user needs to expert knowledge: Mapping laymen queries with ontologies in the domain of consumer mediation. AICOL Workshop, Frankfurt 2011.

Lux, M., and Dsinger, G. (2007). From folksonomies to ontologies: Employing wisdom of the crowds to serve learning purposes. International Journal of Knowledge and Learning (IJKL), 3(4/5): 515-528.

Mika, P. (2005). Ontologies are us: A unified model of social networks and semantics. In Proc. of Int. Semantic Web Conf., volume 3729 of LNCS, pp. 522-536. Springer.

Passant, A. (2007). Using ontologies to strengthen folksonomies and enrich information retrieval in Weblogs. In Int. Conf. on Weblogs and Social Media, 2007.

Poblet, M., Casellas, N., Torralba, S., and Casanovas, P. (2009). Modeling expert knowledge in the mediation domain: A Mediation Core Ontology, in N. Casellas et al. (Eds.), LOAIT- 2009. 3rd Workshop on Legal Ontologies and Artificial Intelligence Techniques joint with 2nd Workshop on Semantic Processing of Legal Texts. Barcelona, IDT Series n. 2.

Ritzer, G., and Jurgenson, N. (2010). Production, consumption, prosumption: The nature of capitalism in the age of the digital “prosumer.” In Journal of Consumer Culture 10: 13-36.

Specia, L., and Motta, E. (2007). Integrating folksonomies with the Semantic Web. Proc. Euro. Semantic Web Conf., 2007.

Meritxell Fernández-Barrera is a researcher at CERSA (Centre d’Études et de Recherches de Sciences Administratives et Politiques), CNRS / Université Paris 2. She works on the application of natural language processing (NLP) to legal discourse and legal communication, and on the potentialities of Web 2.0 for participatory democracy.



The Uniform Electronic Legal Material Act, referred to as UELMA, is ready for introduction into state legislatures.  It has undergone its final proofing and formatting process by the National Conference of Commissioners of Uniform State Laws (NCCUSL, or ULC) and has been posted on NCCUSL’s archival Website at the University of Pennsylvania, and is soon to come to NCCUSL’s official site.  The Act will be sent to the American Bar Association’s (ABA’s) House of Delegates for approval at the ABA Midyear Meeting in February, 2012 in New Orleans.

The UELMA addresses important issues in information management, providing sound guidance to states that are transitioning legal publications to digital formats.   The Act is citizen-oriented, and leaves all issues concerning commercial publishing to state policy and contract law.   Most importantly, the Act is outcomes-based, keeping it flexible in the face of changing technologies and evolving state practice.  A brief account of UELMA’s development and its main provisions is included in this posting.

The UELMA was drafted in response to a request from the American Association of Law Libraries (AALL), following the AALL’s 2007 National Summit on Authentication of Digital Legal Information. The purpose of the Summit was to bring national attention to the issues surrounding the rapid rise in the number of states publishing primary legal information resources electronically and, in some cases, cancelling print resources and publishing legal information only in electronic format.  Foremost among the issues were ensuring the trustworthiness of online legal resources  and preserving the electronic publications to provide for continuing accessibility.   The drafting of a uniform act on these topics was one of the top recommendations of the Summit’s attendees.

The ULC agreed to consider the development of a uniform law and appointed a Study Committee for that purpose.  The Study Committee recommended that a law be developed and a Drafting Committee was charged with the task.  After two years of consideration, including several face-to-face meetings, conference calls, and circulation of numerous drafts by email, the UELMA was read to and debated for the second time at the Annual Meeting of NCCUSL in July 2011.  After more than six hours of floor consideration, the NCCUSL Committee of the Whole passed the draft act, sending it to a Vote of the States.  UELMA passed its final hurdle with a positive Vote of the States, gaining approval by a vote of 45-0 (with 1 abstention and 7 jurisdictions not voting).

The UELMA, as it passed the Conference, requires a state that publishes official versions of its legal information in electronic format to do three things:

1.  Authenticate the information, by providing a method to determine that the legal material is unaltered from the version published by the state officer or employee that publishes the material;
2.  Preserve the information; and
3.  Ensure public accessibility on a permanent basis.

At a minimum, legal material that is covered by the Act includes the most basic of state-level legal information resources, including the state constitution, session laws, codified laws or statutes, and state agency rules with the effect of law.  In recognition of potential separation of powers issues, the UELMA does not automatically include judicial or executive materials, leaving it to the enacting state to decide whether and how to include those resources.  States may choose to include court rules and decisions, state administrative agency decisions, executive official documents, or almost any other information resources they designate as legal material.

For each type of legal material, the state must name a state agency or official as the “official publisher.”  The official publisher has the responsibility to authenticate, preserve, and provide access to the legal material. If legal material defined by the Act is published only electronically, that material must be designated “official” and meet the requirements of the Act.  If there is a print version of the legal material, an official publisher may designate the online version “official,” but the requirements of the Act to authenticate, preserve, and provide access must be met for the electronic version.

The Act’s requirements do not end when official electronic legal material is superseded, overruled, or otherwise ceases to be current law. Legal material retains its value even when it is no longer in effect. Accordingly, once a source is designated as official, it remains covered by the provisions of the UELMA: historical sources must be preserved and made available.

The Act does not affect any relationships between an official state publisher and a commercial publisher, leaving those relationships to contract law.  Copyright laws are unaffected by the Act. The Act does not affect the rules of evidence; judges continue to make decisions about the admissibility of electronic evidence in their courtrooms.

The comments to the UELMA provide a great deal of background on the decisions and intent of the Drafting Committee.  In many instances, the comments offer guidance to legislators who will be asked to consider the UELMA for passage.  The comments are included with the Act on the University of Pennsylvania’s Biddle Law Library Website.

Some issues specific to one of the three parts of the Act (authentication, preservation, and public access) are as follows.  More information on these points can be found in the comments to the Act.

Authentication (Sections 5 and 6):

The Drafting Committee considered a wide range of approaches to authentication before settling on a policy of presenting a technology-neutral, outcomes-based document, leaving the choice of method used to authenticate legal material up to the states. This approach also leaves it to each state’s discretion to change methods as necessary or desirable. What is required is that the official publisher provide a method for the user to determine that the electronic record is unaltered from the one published by the official publisher.
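As one concrete illustration (not a method the Act mandates, since the UELMA is technology-neutral), a state might publish a cryptographic hash of each record so that users can verify their copy is unaltered. The helper below is a hypothetical sketch:

```python
import hashlib

# Hypothetical sketch: hash-based verification is one possible way to meet
# an outcomes-based authentication requirement; the Act itself does not
# prescribe any particular technology.
def sha256_digest(record_bytes: bytes) -> str:
    """Return the hex SHA-256 digest of an electronic record."""
    return hashlib.sha256(record_bytes).hexdigest()

# The official publisher posts the digest alongside the record...
official_record = b"SECTION 1. Be it enacted by the Legislature..."
published_digest = sha256_digest(official_record)

# ...and a user re-hashes a downloaded copy to confirm it is unaltered.
downloaded_copy = b"SECTION 1. Be it enacted by the Legislature..."
print(sha256_digest(downloaded_copy) == published_digest)  # True if unaltered
```

If even one byte of the downloaded copy differs, the digests will not match, which is what makes this style of method attractive for the Act's "unaltered from the version published" outcome.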

By the terms of the Act, the authenticated electronic legal material will receive a presumption of accuracy, the same presumption that is created by publication of legal material in print form.  The burden of proving inaccuracy shifts to the party that disputes the accuracy of the electronic legal material.   Electronic legal material from other states with substantially similar laws will receive the same presumption of accuracy.

Preservation (Section 7):

The Drafting Committee spent considerable time debating the preservation provisions.   The biggest issues were finding a way to describe what legal material would be covered by a preservation requirement, and how legal material should be preserved.

The Drafting Committee decided that, ultimately, all legal material covered by the Act’s authentication provisions should also be subject to its preservation requirements.  This was stated simply as requiring preservation of legal material “that is or was designated as official” under the Act.  This language requires that states preserve superseded or amended legal material, which retains importance despite its no longer being currently effective.  The comments to Section 7 make clear that the Drafting Committee intended the Act to cover not only the text of the law, but also the materials commonly published with the legal material.  This would mean that the lists of legislators and state officials typically published with session laws would be preserved, as would proposed or final state constitutional amendments, legislative resolutions, and any other type of information published with a legal material source.

The Drafting Committee decided to use an outcomes-based approach for the preservation requirements, similar to its approach to authentication.   The ultimate outcome of preservation is that legal material may be preserved in an electronic format, in print, or by whatever method the state may choose in the future; consistent with an outcomes-based approach, state policy and preference dictate the preservation method.

If legal material is preserved electronically, the UELMA requires that the integrity of the record be ensured, including through backup and disaster recovery preparations, and that the legal material remain usable. Recent natural disasters in the U.S. have highlighted the importance of disaster recovery preparations, and information preserved in an unusable format is of no value. The comments make clear that migration to new formats or storage media will be required from time to time.

The comments also note that the Drafting Committee intended that legally significant formatting be preserved.  The complexity of presentation of some legal materials — evident in indentations, italicization, and numbering of internal subdivisions, for example — may indicate or explain legislative or regulatory intent.  Preservation should not change the meaning of the legal material, but rather should ensure that the legal material is capable of being authenticated.

The Act recognizes that states have decades, and in some instances centuries, of expertise in preserving print materials, and does not specify preservation requirements or outcomes if the state chooses to preserve legal material in print.  Nor does the Act impose a duty on an enacting state to retrospectively convert its print material to an electronic format.  If, however, the state chooses to digitize previously non-electronic legal material, and if that newly electronic legal material is designated as official, then the requirements of the Act must be met.  Publication of legal material in an official electronic version subsequent to the adoption of the UELMA, even if the same legal material was published previously in print, triggers the requirements of the Act.

Permanent Access (Section 8):

Citizens must be informed about government actions if they are to participate effectively in their government, and legal material is an essential source of that information. The UELMA recognizes this by requiring that legal material remain reasonably available on a permanent basis, even when it has been amended, repealed, or superseded.

The Drafting Committee debated conditions of access over several meetings, finally concluding that states already have long-term, relevant experience in making other materials available through archives, libraries, and state offices. The Committee therefore decided that each state may set its own requirements for access to legal material preserved under the Act, as long as that access is reasonable and permanent; the enacting state has discretion to decide where, when, and how to provide access. Section 8’s requirement of permanent access does not require a state to provide unlimited access to its preserved legal information, and the Act does not address whether states may charge fees for such access. This drafting decision is consistent with the rest of the UELMA, which defers to state policy and practice in its other provisions.

The Standards section of the Act (Section 9) directs official publishers of electronic legal material to consider developing standards and best practices as they choose and to implement methods for the authentication, preservation, and permanent access of electronic records.  The “Guiding Principles to Be Considered in Developing a Future Instrument,” the best practices document of the Hague Conference on Private International Law, were important guidelines that were repeatedly consulted in the drafting process.

Throughout its deliberations, the Drafting Committee was advised and informed by a large number of advisors and observers who came from federal and state governments, commercial legal publishers and software vendors, and a number of interested organizations.  Two American Bar Association advisors brought knowledge of and experience with technologies to the drafting process.  The observers were very helpful in assisting the Committee in its understanding of the possible impacts of proposed sections of the Act.  In some instances, the observers were able to explain existing and emerging technologies that might be used to accomplish the Act’s specified outcomes.  The Committee watched technology demonstrations and investigated various authentication processes already in effect.  The drafting process was strengthened by the level of support and expertise the advisors and observers brought, but, in the end, the Act was entirely the Committee’s work.

By designating the Committee’s product a uniform law, the ULC recognized the importance of the topic and urged wide adoption of the Act.  The final step in the UELMA’s development will be its introduction into state legislatures.   Bill sponsors are being identified, and the ULC anticipates the UELMA will be introduced in at least 8 states in January 2012, with the possibility of introduction in as many as 12.

The ULC has appointed an Enactment Committee for the UELMA to assist the larger ULC Legislative Committee with its charge to “endeavor to secure the enactment of [uniform] legislation.”   The Enactment Committee prepares “talking points” and summaries of the legislation, and works with individual legislatures, on occasion, to answer questions and further the introduction and approval of the Act.  Volunteers from several interested associations are also preparing to work towards the Act’s approval.  With strong support from the ULC and volunteers working on its behalf, by next summer the Uniform Act may itself become “legal material” in one or more states.

Barbara Bintliff
Barbara Bintliff is the Joseph C. Hutcheson Professor in Law at The University of Texas School of Law, and Director of Research at the School’s Tarlton Law Library and Jamail Center for Legal Research. She is The Reporter for The Uniform Electronic Legal Material Act.



In his recent post, Fastcase CEO Ed Walters called on American states to tear down the copyright paywall for statutes. States that assert copyright over public laws limit their citizens’ access to such laws and impede a free and educated society. Convincing states (and publishers) to surrender these claims, however, is going to take some time.

A parallel problem involves The Bluebook and the courts that endorse it as a citation authority. By requiring parties to cite to an official published version of a statutory code, the courts are effectively restricting participants in the legal research market. Nowhere is this more evident than in those states where the government has delegated the publishing of the official code to a private publisher, as is the situation in more than half of the states.  Thus, even if the state itself or another company, such as Justia, publishes the law online for free, a brief cannot cite to these versions of the code.

To remedy this problem, we (and others) propose applying a system of vendor neutral (universal) citation to all primary legal source material, starting with the state codes. Assigning a universal, uniform identifier for state codes will make them easier to find, use, and cite. While we do not expect an immediate endorsement from The Bluebook, we hope that once these citations find their way into the stream of information, people will use them and states will take notice. We think it’s time to bring disruptive technology to bear on the legal information industry.

About Universal Citation


“Universal citation” refers to a non-proprietary legal citation that is applied the instant a document is created; it is also called a “vendor-neutral,” “media-neutral,” or “public domain” citation. Sixteen U.S. states have adopted universal citation for caselaw, but no state has yet applied it to statutes. A review of how universal citation works for caselaw is helpful in understanding how it may be applied to statutes.

Briefly, a case follows this process before appearing as an official reported decision:

When issuing a written decision, a court first releases a draft called a slip opinion, which is often posted on the court’s Website. Private publishers then republish the slip opinion in various legal databases. A party can cite the slip opinion using a variety of citation formats, depending on the database.

Afterwards, the court transmits the slip opinion to the jurisdiction’s Reporter of Decisions, who may be a member of the judicial system or a private company. The Reporter edits the opinions, and then collects and reprints them in a bound volume with a citation. To cite a particular page within a case (known as pinpoint citation), a party cites the case name, the publication, the volume, and the specific page number containing the cited content.

Before the advent of electronic publishing, these books were the primary source for legal research. And, while publishers still print cases in book format, the majority of users now read cases in digital form. Opinions in online databases, however, lack physical pages. To address this, online publishers insert page numbers into the digital version of an opinion to correspond to page breaks in the print version. Thus, the pinpoint citation (or star pagination) for an opinion, whether in print or online, is the same.

Under most court rules, and Bluebook guidance, once the official opinion is published, the Reporter citation must be used (see Bluebook Rule 10.3.1).

The decisions are published by a private company, usually Thomson West, and anyone wanting to read them must license the material from the company. Thus, if you want to cite to judicial law, you must pay to access the Reporter’s opinions. (Public law libraries offer books and database access, but readers must visit the physical library to use those resources. Google Scholar also provides free access to official cases online, but Google must pay to obtain and license the opinions; in other words, Google, not the end user, is paying for the access.)

Universal citation bypasses the private publisher, and allows courts to create official opinions immediately. Under this system, judges assign a citation to the case when they release it. They insert paragraph numbers into the body of the opinion to allow pinpoint citation. This way, the case is instantly citeable. There is no intermediary lag time between slip and official opinion where different publishers cite the case differently, and there is no need to license proprietary databases in order to read and cite the work. In the jurisdictions that have adopted this system, the court’s opinion is the final, official version. Private publishers may republish and add their own parallel citations, but in most jurisdictions the court does not require citation to private publishers’ versions. (However, Louisiana and Montana require parallel citation to the regional reporter.)

The American Association of Law Libraries (AALL) developed the initial standards for vendor neutral citation formats. AALL published the Universal Citation Guide in 1999, and released an updated edition in 2004. The Bluebook adopted a similar scheme in Rule 10.3.3 – Public Domain Format. Under this format, a universal citation should include the following:

  • Year of decision
  • State’s 2-letter postal code
  • Court name abbreviation
  • Sequential number of the decision
  • “U” for unpublished cases
  • Pinpoint citation by paragraph number, instead of page number
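The components above can be assembled mechanically. The following sketch is purely illustrative (the helper name and defaults are my own, not part of the AALL or Bluebook standards), following the common pattern in which the court abbreviation is omitted for a state's highest court:

```python
# Hypothetical helper: assemble a vendor-neutral caselaw citation from the
# components listed above. Names and defaults are illustrative only.
def universal_citation(year, state, seq_num, court="",
                       unpublished=False, paragraph=None):
    parts = [str(year), state]
    if court:  # court abbreviation, omitted for the state's highest court
        parts.append(court)
    # "U" flags an unpublished decision
    parts.append(f"{seq_num}{'U' if unpublished else ''}")
    cite = " ".join(parts)
    if paragraph is not None:  # pinpoint cite by paragraph, not page
        cite += f", \u00b6 {paragraph}"
    return cite

print(universal_citation(2011, "WI", 25))  # 2011 WI 25
print(universal_citation(2010, "VT", 7, court="App",
                         unpublished=True, paragraph=12))  # 2010 VT App 7U, ¶ 12
```

Because every element is assigned by the court at release, no volume or page number from any publisher's product is needed to produce the citation.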

The majority of states employing universal citation follow the AALL/Bluebook standard, but a few have adopted their own styles. (Illinois, Louisiana, Mississippi, New Mexico, and Ohio employ universal citation but use a different format than the AALL/Bluebook recommendation.)

Most states that use universal citation adopted it in the 1990s. Cornell Law Professor Peter Martin details these events in his article Neutral Citation, Court Websites, and Access to Authoritative Caselaw. Professor Ian Gallacher has also written about the history of this movement in Cite Unseen: How Neutral Citation and America’s Law Schools Can Cure Our Strange Devotion to Bibliographical Orthodoxy and the Constriction of Open and Equal Access to the Law. To date, 16 states assign universal citations to their highest court opinions: Arkansas, Illinois, Louisiana, Maine, Mississippi, Montana, New Mexico, North Carolina, North Dakota, Ohio, Oklahoma, South Dakota, Utah, Vermont, Wisconsin, and Wyoming. Illinois is the most recent state to adopt the measure (in June 2011), and the concept has been gaining traction in the legal blogosphere. John Joergensen at Rutgers-Camden School of Law started a cooperative effort called UniversalCitation.org this summer.

Universal Citation and State Codes

Applying universal citation to state statutes can provide the same benefits as it does for caselaw, making statutes easier to find and cite, and improving access. While all states publish some form of their laws online for the public, as Ed has noted, these versions of state laws are often burdened by copyright and licensing restrictions. With these restrictions in place, users are not free to reuse, remix, or republish the law, which stifles innovation and imposes costs on users of poorly designed Websites that take longer to search.

Though the AALL provides guidance on universal citation for statutes, no state has adopted it. The Bluebook does not specifically reference universal forms of citations for statutes and generally requires citation to official code compilations. There are exceptions for the digital version of the official code, parallel citations to other sources, and the use of unofficial sources where they are the only available source. (Bluebook Rule 12 provides for citation to statutes, generally. The Bluebook addresses Internet sources in Rule 18.)

The AALL’s Universal Citation Guide provides a schema for citing statutes in a neutral format. Rules 305-307 lay out standardized code designations, numbering, and dating rules, and each state has a full description in the Appendices. Basically, the format uses the state postal code, abbreviations for the name of the statutes (Consolidated, Revised, etc.), and a date.

As a result, the universal citations look similar to the official citations.

The AALL universal citation uses the state’s postal code and an abbreviation of the name of the statute compilation, without the periods used in Bluebook-style abbreviations. It also uses a different convention for the year: the Guide recommends dating the code by a “legislative event,” making the date more precise. “Current through” dating provides a timestamp for the version of the code being used, which is less ambiguous than listing the year alone.

States like California and Texas have very large, segmented code systems with more complicated official citation schemes. The AALL mirrors these with the universal version, giving each subject matter code an abbreviation similar to the one used by The Bluebook.

Universal citation does not designate whether the code version is annotated, and of course it does not mention the publisher of the source.

Experimenting with Universal Citation

Justia is now applying the AALL’s universal citation to the code compilations on our site. We add this citation to the most granular instance of the code citation, along with a statement identifying and explaining it. So far, we’ve added citations to the state codes of Hawaii, Idaho, Maine, and South Dakota.

We started with Hawaii. The official citation and the universal citation are fairly similar:

Official: Haw. Rev. Stat. § 5-9 (2010)
Universal: HI Rev Stat § x-x (2010 Reg Sess)
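A minimal sketch of how such a citation could be generated (both helper functions are hypothetical; only the output format follows the AALL pattern shown above):

```python
def to_aall_abbrev(bluebook_abbrev):
    """AALL format drops the periods used in Bluebook-style abbreviations,
    e.g. "Rev. Stat." becomes "Rev Stat". Hypothetical helper."""
    return bluebook_abbrev.replace(".", "")

def universal_statute_citation(state, compilation, section, currency):
    """Hypothetical helper: state postal code, compilation abbreviation,
    section number, and a "current through" legislative-event date."""
    return f"{state} {compilation} \u00a7 {section} ({currency})"

print(universal_statute_citation("HI", to_aall_abbrev("Rev. Stat."),
                                 "5-9", "2010 Reg Sess"))
# HI Rev Stat § 5-9 (2010 Reg Sess)
```

Note how the currency string carries the session information that, in the official citation, is compressed into a bare year.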

This is how the code looks on the Hawaii Legislature’s site:

This is how the code section looks on Justia. We added the citation right above the text of the statute.

On our site, the full citation is visible, so readers can quickly identify and cite to it.  The “What’s This?” link next to the citation explains the universal citation.

We used the Legislature’s site to determine the date.

We also added the universal citation to the title tags. This allows search engine users to see the universal citation in their search results. It makes the search results more readable, because the text of the statute name appears next to the citation. For example, compare a search for “Haw Rev Stat 5-9”

with “HI Rev Stat 5-9”:

With the search results for the universal citation (properly tagged), more information about that citation is presented. This helps the user quickly identify and digest the best search results.
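As a sketch of the title-tag technique described above (the " :: " separator and the helper function are illustrative assumptions, not Justia's actual markup):

```python
import html

def title_tag(universal_cite, statute_name):
    """Hypothetical helper: compose an HTML <title> so search-engine
    results show the universal citation next to the statute's name.
    Both pieces are escaped so the tag stays valid HTML."""
    return (f"<title>{html.escape(universal_cite)} :: "
            f"{html.escape(statute_name)}</title>")

print(title_tag("HI Rev Stat \u00a7 5-9 (2010 Reg Sess)", "Records & files"))
# <title>HI Rev Stat § 5-9 (2010 Reg Sess) :: Records &amp; files</title>
```

Because search engines display the title element as the result's heading, putting the citation first makes it the most visible part of the snippet.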

We hope to accomplish three objectives by attaching universal citations to our codes. First, we want to give people an easy way to cite the code without having to look at proprietary publications. Not all citation goes into legal briefs or other documents that require formal citation to “official” sources listed by The Bluebook. The AALL universal citation scheme is easy to read and understand, and uses familiar abbreviations (like postal codes). Providing a citation right on the page of the code section will help people talk about, use, and cite to code sections without having to access “official” sources behind a paywall.

Second, we hope to demonstrate that universal citation can be applied in an easy and straightforward manner. The AALL has already developed a rigorous standard for universal citation; we are happy to use it and not reinvent the wheel. Legal folks here at Justia researched the AALL citation and the proper year/date information, and programmers applied the citation to the corpus. Anyone can do this, including the states.

Third, we want to encourage the adoption and widespread use of vendor-neutral citation schemes. There’s been a lot of talk about vendor-neutral citation for caselaw, and we are excited by efforts like UniversalCitation.org. Applying these principles to state codes will help get universal citation into the stream of legal information online. Just seeing the citation and the “What’s This?” page next to it will introduce readers to the concept. The more people use universal citations for state statutes, the more states will be forced to examine their reliance on third party publishers as the “official” source.

Next Steps

We plan to apply the universal citation to all of the codes in our corpus, but we have encountered some obstacles to achieving this for all 50 states. First, some of the codes are quite large and difficult to parse. Ari Hershowitz has documented his efforts to convert the California code into usable HTML. States like California, Texas, and New York will be more labor intensive. Second, the currency, or timestamp, is not always readily apparent on the state code site. With Idaho, I had to make a call to the Legislative Office to find out exactly when they last updated the code.

Source: AALL Universal Citation Guide (First Edition).

The third, and perhaps most troubling, issue is the “unofficial” status of the online state code repositories. With the exception of a few states (see Colorado), the codes hosted on the states’ own Websites are papered over with disclaimers about their authenticity. While I understand the preference for “official” sources when citing a code, there seems to be no good reason why the official statutes of any state are not available online, for free, for everyone. These are the laws we must obey and to which we are held accountable. Does the public really deserve something less than the official version? The states are passing the buck by disclaiming all responsibility for publishing their own laws, and relying on third-party publishers, which charge taxpayers to view the laws that the taxpayers paid for. I hope that as we apply a universal citation to our state statutes, the law will become more usable for the public. By taking disruptive action and applying these rules to our large corpus of data, we hope that more people will see the statutes and cite them using universal principles, and that the states will take notice.

We have assigned a universal citation to the first few states as a proof of concept. We will also be sharing our efforts by supplying copies of the code with the universal citations included for bulk download at public.resource.org. As we move forward with the remaining 46 states, we would love your input.  Comment here or contact me directly with your thoughts.

Peace and Onward.

[Editor’s Note: For other VoxPopuLII posts on universal citation and the status of content in legal repositories, see Ivan Mokanov’s post on the Canadian neutral citation standard, and John Joergensen’s post on authentication of digital legal repositories.]

Courtney Minick is an attorney and product manager at Justia, where she works on free law and open access initiatives. She can be found pushing her agenda at the Justia Law, Technology, and Legal Marketing Blog and on Twitter: @caminick.


In May of this year, one of us wrote a post discussing two research projects being conducted at the University of Montreal’s Chair in Legal Information. One of those projects, known by its team as the “Free Access to Law – Is It Here to Stay?” Project, has just concluded. This co-authored post is about that project, the stories we heard throughout conducting the research, and what we can learn from those stories about sustaining legal information institutes (LIIs) — a concern that came up on many occasions at this year’s Law via the Internet Conference in Hong Kong, and again in the blogosphere in Eve Gray’s recent post, and Sean Hocking’s post on Slaw, among others.

The first section of this post — written by Isabelle Moncion of Lexum — is about the “Free Access to Law – Is It Here to Stay?” project as a whole, and the second portion, written by AfricanLII co-founder Mariya Badeva-Bright, focuses on lessons learned as applied to The African Legal Information Institute (AfricanLII).

First, a few words about the methodology of the “Free Access to Law – Is It Here to Stay?” project. In 11 countries and regions – Burkina Faso, Hong Kong, India, Indonesia, Kenya, Malawi, Mali, Niger, the Philippines, South Africa, and Uganda – researchers under the coordination of the Chair in Legal Information, AfricanLII, and the Centre for Internet and Society interviewed users of Free Access to Law (FAL) services, and practitioners who create and maintain those services, for purposes of building case studies on one FAL initiative per country. The research was guided by the Local Researcher’s Methodology Guide, which among other things asked the question, “What determines the sustainability of operations of Free Access to Law initiatives?” Along with the case studies (available here, published in the language in which they were written), a Good Practices Handbook was written based on the results of the case studies. The handbook was humbly named “Good” rather than “Best” practices, as the stories from the FAL initiatives showed that, unfortunately but not surprisingly, no set of practices is always successful. The handbook will be online soon.

Do check out the case studies and good practices to find out more, as they will be able to provide you with much more in-depth analyses than we can provide in this post. But for now, allow me (Isabelle Moncion) to share a few stories and observations, and perhaps a preview of some good practices, before Mariya shows how these stories can be applied to building new, and supporting existing, LIIs.

PART 1

Sustainability… isn’t just about funding –

This statement is as much a conclusion from the case studies as it is the result of group discussions — held prior to the field research — devoted to defining “sustainability.” Did sustainability mean how we fund LIIs, or was it start-to-finish practices leading to that funding? We went with the latter, and field stories showed that that was the right choice.

Organisational capacity is pivotal to a FAL initiative’s ability to stick around. In Mali, funding wasn’t so much the issue: the FAL site disappeared when the student intern who had decided to launch the site — after noticing the immense quantity and quality of legal information available at the NGO where he was working, and concluding that this information should be made available online — completed his internship. In Indonesia, funding is without a doubt a challenge, but the Indonesian FAL site currently depends on a single individual, who is unable to devote the time required to maintain the site. The situation is similar in Niger, where the editor must go from court registry to court registry with an external hard drive to collect judgments. The Hong Kong Legal Information Institute’s (HKLII’s) team is also small, but thanks to a judiciary-supported workflow, the team has been able to offer its users a high-quality, reliable service. The Southern African Legal Information Institute (SAFLII) case study further demonstrates that organisational capacity facilitates response to financial crises. To quote from the Good Practices Handbook, “… it is important to build redundancy and transfer knowledge to ensure continuity even on tight budgets. Having a meaningful internship programme with intense mentoring covering the two core skill areas of IT [information technology] and content management, coupled with good documentation, could contribute enormously to the viability of the FAL initiative.”

Organisational capacity also means knowing where one is headed. How many FAL initiatives did we encounter whose personnel told us their objective is to “reinforce the rule of law” and their target audience is “everyone”? These are no doubt admirable, overarching goals of FAL, but unless coupled with specific objectives, they do little to help an organisation set priorities or respond to the needs of a particular stakeholder group that is potentially capable of financing the FAL initiative in the future.

Innovation… isn’t just another buzzword –

After using “sustainability” as many times as I have in this post, and now throwing in “innovation,” I beg you to indulge me in this section, and assure you that I will attach meaning to my list of buzzwords. (I promise I’ll save “empowerment” or “participatory governance” for another day, but I may have to use “capacity building” soon.)

Innovation seems like an obvious “good practice” – but what does it mean in the context of FAL? Many organisations now claim to have “innovation” as part of their values, but as Ginger Grant pointed out so well at a conference on Managing by Values, when asked, “Who are the organisation’s troublemakers?” bosses and managers seem proud to reply that they have none. Well, if you have no troublemakers, asks Grant, who’s innovating?

Small FAL teams with limited resources have been able to succeed. Small teams seem to favour the birth of new ideas, which face less resistance than they may in larger teams. Larger teams have managed to reach their size precisely because they initially did something that no one else was doing at the time, but staying innovative can become an increasingly challenging feat.

Having a team knowledgeable in both (legal) information management and IT; knowing who the users are and what their needs are (e.g., making the effort to find out why and how users use the service, and how else they might use it if resources were unlimited; using Web 2.0 technologies for all they have to offer with respect to gathering user feedback; etc.); and staying in touch with others doing similar work (the Free Access to Law Movement (FALM); the open source software movement; various open-access, access-to-knowledge, and open-knowledge movements) are just some of the ways FAL initiatives have managed to stay ahead of the curve. This is in part how SAFLII and Kenya Law Reports became among the first LIIs to look into mobile services. This is how the Canadian Legal Information Institute (CanLII) began offering point-in-time comparison of statutes. This is also how Indian Kanoon — described in this VoxPopuLII post — rests upon a single software engineer and hasn’t stopped growing since its launch.

Where there’s a will –

… there may not always be a way, but there is definitely no way without a will.

In each of the eleven countries studied, the success of FAL initiatives is often the result of key individuals who are passionate about the task at hand. Where FAL initiatives have suffered, it is often the result of a lack of interest or of competing priorities. Working to (here it comes) build capacity and foster innovation is the M.O. of FAL practitioners, motivated often by nothing more than a conviction that “it’s the right thing to do.”

And I hear now what we’ve been told so often throughout the course of the study: “But what do you do when there just isn’t any money?” Of course, this is a monumental challenge for a number of FAL initiatives, but where legal information is being produced, legal information needs to be accessed. The beauty (and essence) of FAL is that content is available both to users who need it for professional reasons and to any other user, whether he or she is interested in legal information for personal matters, education, social justice, etc. But each of those users may have different needs, and going back to what I was saying above, this is why, particularly with limited resources, it’s important to know whose needs will be prioritized.

Users requiring legal information for their profession are a great stakeholder group to target, as they are likely to come with funds. Ensure they are receiving a service that facilitates their work and they will see the benefit in keeping the service around. (This is part of CanLII’s story.) But, as in the case of West Africa, the legal profession itself isn’t always well funded. So, although I started by stating that sustainability isn’t all about funding, allow me to conclude by admitting that funding is often FAL initiatives’ greatest concern. In the course of the study, we identified the following funding sources:

  • Advertising on the FAL initiative’s Website
  • Government, including the judiciary
  • International development agencies
  • Law societies
  • NGOs, or members of civil society with similar missions
  • Private donations from users
  • Selling parallel, value-added services to subsidize the FAL portion of the initiative
  • University grants

Funding from each of these sources comes with strengths and challenges, but such funding also comes with the risk of drying up. Sustainable FAL initiatives have been able to offer user-targeted services, and to identify funding sources accordingly.

PART 2

The lessons from the Free Access to Law Study

Access to the law of many African countries is difficult, as this law is either locked away in expensive commercial databases, only available in a few law libraries housing out-of-date law reports, or simply not available. The free access to law movement in Africa, through the pioneering efforts of the Southern African Legal Information Institute (SAFLII) and the National Council for Law Reporting (incorporating Kenya Law Reports and KenLII), proved that this deplorable situation can be changed by applying information and communication technologies (ICTs) to the legal information domain. However, my personal experiences, and those of my team, in setting up and running SAFLII (until April 2010) revealed that the solution is not as easily implementable as we would have imagined it. Thoughts on the challenges faced are available through early VoxPopuLII posts by SAFLII’s team here and here.

Passion is a necessary prerequisite for a free access to law project to succeed. What we, then as a SAFLII team, learnt through our experience was that besides zeal, IT expertise, and legal information knowledge, a great deal of business sense, structured business planning, and development were also required. We did have access to business expertise, but applying business principles to a novel, and non-profit, enterprise, without systematic guidance from those who had done it before, was very difficult. We learned to navigate the landscape “on the job.” The formulation of a business-development approach to these projects, without compromising the basic tenets of free access to law, has increasingly come into focus for many legal information institutes (LIIs) around the world and in Africa.

The first attempt at formalizing the business-development and project-management knowledge around free access to law projects was the sustainability study undertaken by LexUM and SAFLII in 2009, aptly entitled “Free Access to Law – Is It Here to Stay?”  The methodology guide produced during the study was especially useful as the guide systematized all functional, operational, and strategic areas that a free access to law project should account for in its development. All areas would presumably contribute to the strengthening, hence sustainability, of such projects. While I should immediately discount the notion that all new and existing LIIs should be implementing the elaborate structures and extensive practices detailed in the methodology guide assessment matrix (and this is clearly what emerges when we review the case studies produced), a combination of approaches within the broad areas coupled with contextualization for each country would, in my opinion, foster the development of more sustainable LIIs. In that sense, a discernable outcome of the FAL study has been the elaboration of a blueprint for the development of LIIs. The blueprint is based on the collective, two-decades-old knowledge of the free access to law community.

A major aim of the study has been proving the social value that free access to law delivers. To put it squarely, that means linking free access to primary legal materials to values such as democracy, rule of law, and transparency, as well as to more concrete outcomes such as facilitating education and investment, professional capacity, etc. The study does not establish precise causal links between what FAL projects do and these high democratic values. The case studies are largely committed to individual stories that may serve as a basis for a larger study. But the study has managed to isolate links between processes, projects, outputs, and some outcomes of LII projects.  The study, through the Good Practices Handbook, has identified causal links between a LII project’s design, implementation, and results. In doing so, the study has also provided the FAL and donor communities with a monitoring and evaluation framework for free access to law projects.

Free access to law projects are usually assessed on indicators such as the number of documents published, the number of databases created, the number of unique visitors and hits to the Website, etc. But what meaning do growing document collections, growing usage, and a few words from grateful users have if the free access to law project does not use these indicators to channel support for its continued operation? The FAL study has provided us with the means to identify priorities and determine the relevance of projects in terms of fulfilling objectives efficiently and effectively, all the while focusing on sustainability. The study prompts a FAL project manager to collect, and donors to seek, credible and useful information that will enable a clear picture of the status of the FAL project to emerge. In addition, incorporating the lessons learned into the review and development of the project’s operations and strategy will be vital.

To sum up, the main lessons that I have learned from the free access to law study are about streamlining operations and strategy around core thematic areas crucial for the sustainable future of a free access to law project. As a core set of principles that should guide a LII, my LII blueprint includes the following highlights:

  • Think sustainability from Day 1
  • Demonstrate value from Day 2
  • Build a solid organization (no matter how small)
  • Identify champions for the cause and make friends for the LII
  • Involve all stakeholders early in the life of the LII
  • Be transparent about overall objectives and how to achieve them in an efficient and effective way
  • Be transparent about income received and expenditures made
  • Review strategy and develop operations with an aim of achieving sustainable free access to law

AfricanLII

The approach to free access to law that my new project — the African Legal Information Institute (AfricanLII) — takes is in many ways informed by the “Free Access to Law – Is It Here to Stay?” study. Having had the benefit of working on both elaborating the study’s methodology and conducting two of the case studies, I feel that we can continue to develop and apply the knowledge thus gathered to building a solid foundation for free access to law in Africa. The AfricanLII will be the hub that provides that platform.

Many people had spoken about the idea of establishing an AfricanLII before my colleagues Tererai Mafukidze and Kerry Anderson and I decided to form the Institute. Naturally, there were differences of opinion about what AfricanLII should do and how it should be structured. The commonality was that all saw AfricanLII as a continent-wide portal of African legal information, similar to what WorldLII, CommonLII, and AsianLII offered. The AfricanLII that we envisaged, however, is quite different from those systems. It is not a centralized access point for African primary legal information. AfricanLII does not collect, digitize, and directly publish legal information from African jurisdictions. We do facilitate finding that information via a federated search facility and the African Legal Index. We do plan on building services around African legal information. But AfricanLII’s mission is to enable access to African legal information by entrenching free access to law principles at the national level. We do this by working with institutions in individual African jurisdictions, and helping them establish national legal information institutes and develop and maintain them in a sustainable way.

A standardized approach to delivering free access to law through a regional collection point is not a viable option in Africa. I have learned this through my experience working for a regional portal of free law — SAFLII — operating in the context of a diverse, largely non-digitized, legal information environment. The regional approach does go a long way to prove value and incentivize commitment from national institutions and donors, but it does not provide room for meaningful outcomes, engagement, and a sustainable future for the concept of free access to law on our continent. (See the SAFLII case study on the FAL Project Website for more details.)

AfricanLII works with national LIIs (currently SwaziLII, MalawiLII, MozLII, SeyLII, SierraLII, and LesothoLII) to translate their particular environments into successful and sustainable free access to law operations. We implement sustainability measures on both national and regional levels. For example, targeting government and professional users to support content collection and publication in a jurisdiction is best achieved when the free access to law project is based in that jurisdiction and constantly interacts with the stakeholders to improve the value of its offering. Value additions are also best achieved by locals. AfricanLII assists national LIIs in formulating and executing strategies around local engagement. As a regional hub, we implement sustainability initiatives that make sense only on a regional level.  Website monetization activities — web advertisements, directory services, and services around aggregated content, such as news and legal content or free and premium commercial publisher content mashups — are all examples of projects that are best undertaken at a regional level, where more data and more traffic make the activities more profitable. Profits are then channelled into the free access to law work of national LIIs and AfricanLII. We have planned a rollout of financial sustainability initiatives that will take effect in the short, medium, and long terms.

Financial sustainability is achievable only if national LIIs stay on track and develop sound practices in pursuit of a clear strategy. AfricanLII provides contextual operational and strategic assistance, advice, and training to new LIIs, which helps these projects develop to their potential. In doing so, we engage in rapid skills transfer to organizations with little to no experience in free access to law projects. AfricanLII remains available for continued support beyond the initialization phase.

The Open Society Initiative of Southern Africa (OSISA), the Open Society Institute (OSI), and Freedom House have all provided start-up funding to AfricanLII and some of the national LIIs we support. AfricanLII has developed a monitoring and evaluation framework based on this FAL study, which ensures that donor money is well spent and real outcomes are achieved. AfricanLII collects and presents donors with relevant, timely, and accurate information against indicators derived in a credible process.

In conclusion, the Free Access to Law study has had a tremendous, and perhaps not entirely expected, impact on the work of free access to law publishers in Africa. I expect that we will continue to use and develop the study to suit our projects and create new ones based on it.

Isabelle Moncion is a project manager with Lexum, and was a research assistant at the Chair in Legal Information of the University of Montreal until the end of the above-described research project. She holds an MA in political science with a specialisation in international development from the University of Quebec in Montreal, and a B.Sc. in political science and communications from the University of Ottawa.

Mariya Badeva-Bright, Magister Iuris (Bulgaria), LL.M. (Law and Information Technology, Stockholm), co-founded AfricanLII as a project of the Southern Africa Litigation Centre, and works primarily on content, legal information management, electronic legal research training, and policy development for new LIIs in Africa. She is the former Head of Legal Informatics and Policy at SAFLII. She is also a sessional lecturer at the School of Law, University of the Witwatersrand, South Africa.

VoxPopuLII is edited by Judith Pratt. Editor-in-Chief is Robert Richards, to whom queries should be directed. The statements above are not legal advice or legal representation. If you require legal advice, consult a lawyer. Find a lawyer in the Cornell LII Lawyer Directory.

Farmland outside Matatiele

My father was, as was his father before him, a country lawyer in a remote but very beautiful part of South Africa, in the foothills of the Maluti mountains on the border between South Africa and Lesotho. Prominent in his legal office near the Magistrate’s Court were shelves of leather-bound volumes of South African statutes, cases, and law reports, which I found impressive, with their gold blocking on red spines. Even back then, South African lawyers were well supplied with legal publications, the production of which dated back to the mid-19th century, when a Dutch immigrant, Jan Carel Juta (who was married to Karl Marx’s sister), published the first law reports. This means that the legal profession in South Africa has access to a century and a half of legal records, something of undoubted value, given that many African countries have no legal publications at all.

If it was a court day, one could hear from my father’s office the hubbub of conversations in Sotho, Xhosa, English, and Afrikaans floating down the road from outside the Magistrate’s Court, where blanket-clad Sotho men down from the mountains had tied up their horses at a hitching post alongside police vans and farmers’ trucks.

Rural settlements

This was Wild West country in the 19th century — and cross-border cattle-rustling cases continue to loom large — but when I grew up, in the wake of the Second World War, it presented itself as a quiet village, in a prosperous farming area surrounded by very large ‘trust lands’ (in colonial- and apartheid-speak) of traditional Black peasant communities, where the place names were those of the presiding chiefs. This naming was a symptom of the colonial manipulation of the legal system, described by Mahmood Mamdani, to impose an autocratic and patriarchal ‘customary’ system, a heritage that lingers on in a democratic South Africa. In a legal practice like my father’s, there was a startling dichotomy between the well-paid work done for the prosperous white community with its commercial- and property-law needs, and the customary-law and criminal cases that came from the overwhelmingly larger black communities, dependent on legal aid or paying their fees in small cash installments to a clerk in a back office.

Village traders

I was thus aware at a young age of conflicting values at the intersection between western concepts of the law, its formal and Latinate expression and punctilious enforcement, and the needs of rural black communities; the problematic role that language played in the adversarial ritual of criminal court procedure, alien to many participants; and the difficulties inherent in responding to the needs of very large and widely geographically dispersed poor and disenfranchised communities. The stories my father told about his days in court as a defending attorney were often tales of incomprehension compounded by mistranslation.

This rural setting provides a vivid and useful map of divergent needs for access to legal information in the complexity of an African context. In fact this setting throws a stark spotlight on issues of legal access that are easily obscured in the global North. In an urban setting in South Africa, the issues would differ in their details, but would generally be the same: the question is how to bridge the gap between the formalities and rituals of colonially-based and imported legal discourse and the ways in which the legal system impacts the lives of most of the population. In this context, how does one transform into action Nick Holmes’s concerns, as expressed in his VoxPopuLII blog, about making the law accessible, i.e., suited to meeting the needs of citizens and lawyers in less privileged practices, in an appropriate language and format? Or, to use Isabelle Moncion’s distinction between the law and justice, how does one communicate the law in such a way as to reach the people who need the information? And lastly — of vital importance in an African setting where resources are scarce — how does one make such a publishing enterprise sustainable?

I do not come to this discussion with a legal training. I would have become a lawyer, no doubt, like the generations of my father’s family, but 1950s gender stereotypes got in the way. Instead, I became an academic publisher, and then a consultant and researcher on the potential of digital media in Africa. This trajectory gives a particular coloration to my concerns for access to legal information in Africa: my approach brings together an acknowledgement of the need for professional skills and sustainability with an awareness of the serious limitations of the current publishing regime in providing comprehensive access to legal information.

Law publishing in South Africa

The fact that South Africa has a well-established legal publishing sector sets that nation apart from the rest of Africa. The strength of the legal publishing industry is a reflection not only of South Africa’s prosperity, but also of the distinctiveness of the South African legal system, a fusion of Romano-Dutch and British legal traditions. The uniqueness of this system meant that South African law publishing could not rely on purely British sources, and gave local South African legal publishers a market not subject to competition from Britain. However, the nature of this legal system also gave it a tendency, at least in its early stages, towards a particularly impenetrable mode of expression, fueled by the Latinisms of its Roman roots.

Lawyers in practice, the legal departments of big companies, and the courts are relatively well served by the South African legal publishing industry, and the system is self-sustaining. However, there are problems. One is that the industry still clings to print-based business models. The focus is on the readership that can pay and on the topics that are of interest to this readership. The danger resides in seeing this situation as sufficient: in seeing the relatively wealthy market being served as the whole market, and the narrow range of publications produced as satisfying the totality of publication needs. With the South African legal profession still struggling to diversify out of white male dominance, this is an important issue.

As global media have consolidated in the last few decades, South African legal publishers have shown a decreasing willingness to try to find ways of addressing commercially marginal markets. This has meant that, although mainstream legal publishers in South Africa have long produced digital publications, there is reliance on a high-price market model. In other countries one might talk of a failure to address niche markets, but in South Africa it is the mass of the population who are marginalised by this business model. A smaller specialist publisher, Simon Sefton’s Siber Ink, seems more aware than the bigger players of the need for accessible language and affordable prices for legal resources, as well as active social media engagement to create debates about key community issues.

Some hope of solutions to the question of access by otherwise marginalised readers lies in the development, on the margins of the publishing industry, of innovative smaller players leveraging digital media to reach new readerships, often using open source models that combine free and paid-for content.

Access to legal information – The role of government

The main efforts being put into access to legal information in South Africa are quite rightly focused on government-generated information, which, being taxpayer funded, should be in the public domain and is indeed available on the South African Government Information site. Progress is being made by the Southern African Legal Information Institute (SAFLII) in improving the accessibility of primary legal resources, and success would mean a substantial body of information becoming available for interpretation and translation.

Beyond this, government practice in ensuring this level of access is patchy. Some departments are good at posting legislation on their Websites, others less so. Government Gazettes, although theoretically accessible to all, can be difficult to find and navigate; and the collation of legislative amendments with the original Acts is also patchy. There is — at least in theory — an acceptance of the need in government for an open government approach, but the fact that there is a publishing industry serving the profession and the courts ironically reduces the pressure to achieve this goal.

South Africa Truth and Reconciliation Commission Report

The Truth and Reconciliation Commission

There is a danger, however, when government sees the print-publication profit model as the natural and only way of producing sustainable publications. This was brought home in 1998 with a very important publication: the Report on the Truth and Reconciliation Commission (TRC). This sad and salutary story is worth telling in some detail.  But first, a disclaimer: I was working at the time for the company that distributed the Report, and I was actively involved in securing the bid from publishers, although I was not supportive of the business model that was imposed in the end.

Five volumes of testimony, analysis, and findings from the Commission were produced to high production standards. The compilers saw the archival material that lay behind these volumes as ‘the Commission’s greatest legacy’ and the published volumes as ‘a window on this incredible resource, offering a road map to those who wish to travel into our past’ (p.2). The Department of Justice, working from the stereotypical view of how publication works, insisted that production and printing costs had to be fully recovered. The Department set a high price to be charged by the appointed distributor, Juta Law and Academic Publishers.

A second set of problems arose with the digital version of the publication that Juta had offered to develop. The digital division of the legal publisher insisted on high prices. It was this inappropriate digital business model that created a row in the press. Then, a ‘pirate’ version of the publication was produced by the developer of the TRC Website, who claimed that he had the rights to a free online product. Public opinion was firmly behind the idea that the digital version should be free and that the publisher was profiteering from South Africa’s pain.

In the end, hardly any copies of the Report were sold. The lesson was a hard one for a publishing company: digital content that is seen as part of the national heritage cannot be subjected to high-price commercial strategies.

The full text of the TRC Report is now online on the South African Government Information Website.

The LRC Website

Leaping the divide – Law and land

What is more difficult and diffuse is the route to providing access to really useful information that could help communities engage with the impact of legislation on their lives, whether the issue be housing policy or land tenure legislation, gender rights or press freedom.

If we go back to my initial example of rural communities and their access to the law, there is a dauntingly wide range of issues at stake — questions of individual agency, gender rights, fair labour practice, property rights and access to land, food sustainability, and a number of human rights issues — including legislative process as the ANC government implements the Communal Land Rights Act of 2004. In Matatiele, the village in which my father practised, there has been a long-drawn-out dispute about provincial boundaries, with the community challenging the legislative process in the Constitutional Court.

Questions of access to this kind of information are addressed in an ecosystem broader than the conventional publishing industry. NGOs and research units based in universities and national research councils address the wider concerns of community justice; using a variety of business models, these organizations produce a range of publications and work closely with communities. In the case of the Communal Land Rights Act, the Legal Resources Centre (LRC) supported a Constitutional Court challenge and published a book on the Act and its problems. The LRC, like other organisations of its kind, makes booklets, brochures, and reports freely available online. These efforts tend to be donor-funded and, increasingly, donors like the Canadian International Development Research Centre (IDRC) insist that publications be distributed under Creative Commons licenses. In the case of books published by commercial publishers, this means an open access digital version, and a print version for sale.

A major problem in providing commentary on legislative issues for the general public is that of ensuring a lack of bias. In the case of the Communal Land Rights Act — as well as for the other critical justice issues that it covers — the LRC explicitly aimed to provide a comprehensive insight into the issues for experts and the general public; the Centre accordingly placed the full text of its submissions to the hearings as well as answering affidavits on a CD-ROM and online. It also produces a range of resources, online text, and audio, targeted at communities.

Similar publication efforts are undertaken by a number of other NGOs and research centres — such as the Institute for Poverty, Land, and Agrarian Studies (PLAAS) at the University of the Western Cape and the African Centre for Cities at the University of Cape Town — on a wide range of issues. These organizations’ publishing activities tend to be interdisciplinary and the general practice is to place reports and other publications online for free download. There is a growing wave, in scholarly publishing in particular, to seek a redefinition of what constitutes ‘proper’ publishing; this process has yielded the notion of a continuum between scholarly (and professional) work and the ‘translation’ of this work into more accessible versions.

A useful strategic exercise would be to tag and aggregate the legal publishing contributions of NGOs and research centres — as these resources are often difficult to track, or hidden deep in university Websites — preferably with social networking spaces for discussion and evaluation.

Sustainability models

These civil society publishers are generally dependent on donor funding. What is needed is to recognise them as part of the publishing ecosystem. The question is how to create publishing models that can offer longer-term sustainability that might work beyond a well-resourced country like South Africa. The most promising and sustainable future looks to be in small and innovative digital companies using open source publishing models, offering free content as well as value-added services for sale. Examples are currently mostly to be found in textbook and training models, like the Electric Book Works Health Care series, which offers free content online, with payment for print books, training, and accreditation.

What is clear is that multi-pronged solutions must be found over time to the question of how to bridge the divide in African access to reliable and relevant legal information, and that a promising site for these solutions is the intersection between research and civil society organisations and community activists.

Eve Gray is an Honorary Research Associate in the Centre for Educational Technology at the University of Cape Town and an Associate in the IP Law and Policy Research Unit. She is a specialist in scholarly communications in the digital age, working on strategies for leveraging information technologies to grow African voices in an unequal global environment.

Photos: Eve Gray CC BY
