
Where the culture of assessment meets actual learning about users.

These days, anyone with a pulse can sign up for a free SurveyMonkey account, ask users a set of questions, and call it a survey. The software will tally the results and create attractive charts to send upstairs, reporting on anything you want to know about your users. The technology of running surveys is easy, certainly, but thinking about what the survey should accomplish, and ensuring that it meets your needs, is not. Finding accurate measures (of the effectiveness of instructional programs, of the library’s overall service quality or efficiency, of how well we’re serving the law school’s mission) is still very, very hard. But librarians like to know that programs are effective, and deans, ranking bodies, and prospective students all want to be able to compare libraries, so the draw of survey tools is strong. If the logistics are easy, where are the problems with assessment?

Between user surveys and various external questionnaires, we gather a lot of data about law libraries. Do these instruments provide us with satisfactory methods of evaluating the quality of our libraries? Do they offer satisfactory methods for comparing and ranking libraries? The data we gather is rooted in an old model of the law library, in which collections could be measured in volumes and that number was accepted as the basis for comparing library collections. We’ve now rejected that method of assessment, but we still struggle to find a more suitable yardstick. The culture of assessment from the broader library community has also entered law librarianship, bringing standardized service quality assessment tools. But despite these tools, and a lot of work on finding the right measurement of library quality, are we actually moving forward, or is some of this work holding us back from improvement? Two types of measurement are widely used to evaluate law libraries: assessments and surveys, which tend to be inward-looking, and data such as budget figures and square footage, which can be used to compare and rank libraries. These are compared below, followed by an introduction to qualitative techniques for studying libraries and users.

(Self)Assessment

There are many tools available for conducting surveys of users, but the one most familiar to law librarians is probably LibQUAL+®. Distributed as a package by the Association of Research Libraries (ARL), LibQUAL+® is a “suite of services that libraries use to solicit, track, understand, and act upon users’ opinions of service quality.” The instrument itself is well-vetted, making it possible for libraries to run it without any pre-testing.

The goal is straightforward: to help librarians assess the quality of library services by asking patrons what they think. So, in “22 items and a box,” users can report on whether the library is doing things they expect, and whether the librarians are helpful. LibQUAL+® aligns with the popular “culture of assessment” in libraries, helping administrators to support regular assessment of the quality of their services. Though LibQUAL+® can help libraries assess user satisfaction with what they’re currently doing, it’s important to note that the survey results don’t tell a library what it’s not doing (and should, perhaps, be doing). It doesn’t identify gaps in service, or capture opinions on the library’s relevance to users’ work. And as others have noted, such surveys focus entirely on patron satisfaction, which is contextual and constantly shifting. Users with low expectations will be satisfied under very different conditions than users with higher expectations, and the standard instrument can’t fully account for that.
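
To make the arithmetic behind those 22 items concrete: respondents rate each item three times, giving the minimum service level they would accept, the level they desire, and the level they perceive, each on a nine-point scale, and analysis looks at the gaps between those ratings. What follows is a minimal sketch in Python of that gap scoring, with invented item labels and sample data; it illustrates the logic of the instrument, not ARL’s actual scoring code.

    # Sketch of LibQUAL+-style gap scoring. The three ratings per item
    # (minimum, desired, perceived) follow the instrument's design; the
    # item label and sample responses are invented for illustration.
    from statistics import mean

    responses = [
        {"item": "courteous staff", "minimum": 6, "desired": 8, "perceived": 7},
        {"item": "courteous staff", "minimum": 7, "desired": 9, "perceived": 6},
    ]

    def gap_scores(responses):
        """Average adequacy and superiority gaps for each item."""
        by_item = {}
        for r in responses:
            by_item.setdefault(r["item"], []).append(r)
        results = {}
        for item, rs in by_item.items():
            results[item] = {
                # Adequacy gap: perceived minus minimum. Negative means
                # service falls below users' minimum expectations.
                "adequacy": mean(r["perceived"] - r["minimum"] for r in rs),
                # Superiority gap: perceived minus desired. Usually
                # negative; nearer zero means closer to the desired level.
                "superiority": mean(r["perceived"] - r["desired"] for r in rs),
            }
        return results

    print(gap_scores(responses))

Notice that the two invented respondents judge nearly identical service very differently, simply because their minimums differ; that is the expectations problem described above, in miniature.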

Ranking Statistics

The more visible, external data gathering for law libraries occurs annually, when libraries answer questionnaires from their accrediting bodies. The focus of these instruments is on numbers: quantitative data that can be used to rate and rank law libraries. The ABA’s annual questionnaire counts both space and money. Site visits every seven years add detail and richness to the picture of the institution and provide additional criteria for assessment against the ABA’s standards, but the annually reported data is primarily quantitative. The ABA also asks which methods libraries use “to survey student and faculty satisfaction of library services,” but it does not collect the results of those surveys.

The ALL-SIS Statistics Committee has been working on developing better measures for the quality of libraries, leading discussions on the AALLNet list (requires login) and inviting input from the wider law librarian community, but this is difficult work, and so far few Big Ideas have emerged. One proposal suggested reporting, via the ALL-SIS supplemental form, responses from students, faculty, and staff regarding how the library’s services, collections and databases contribute to scholarship and teaching/learning, and how the library’s space contributes to their work. This is promising, but it would require more work to build rich qualitative data.

Another major external data gathering initiative is coordinated by ARL itself, which collects data on law libraries as part of its general data collection for ARL-member (university) libraries. ARL statistics are similarly heavy on numbers, though: the questionnaire counts volumes (dropped just this year from the ABA questionnaire) and current serials, as well as money spent.

Surveys ≠ Innovation

When assessing the quality of libraries, two options for measurement dominate: user satisfaction, and collection size (using dollars spent, volumes, space allocated, or a combination of those). Both present problems. The former is simply insufficient as the sole measure of library quality, and is not useful for comparing libraries. The latter ignores fundamental differences in collection development and access between libraries, making the supposedly comparable figures nearly meaningless: a library that is part of a larger university campus will likely have a long list of resources paid for by the main library, and a stand-alone law school won’t. Trying to use the budget figures of these two libraries to compare the size of their collections, or the quality of the libraries, would be like comparing apples and apple-shaped things. There’s also something limiting about rating libraries primarily on size; is the size of the collection, or the money spent on it, really the strongest indicator of quality? The Yankees don’t win the World Series every year, after all, despite their monetary advantages.

The field of qualitative research (a.k.a. naturalistic or ethnographic research) could offer some hope. The techniques of naturalistic inquiry have deep roots in the social sciences, but have not yet gained a foothold in library and information science. Naturalistic techniques could be particularly useful for understanding the diverse community of law library users. While not necessarily applicable as a means for rating or ranking libraries, they could lead to a greater understanding of law library users and their needs, and help libraries develop measures that directly address the match between the library and users’ needs.

How many of us have learned things about a library simply by having lunch with students, or chatting with faculty at a college event, or visiting another library? Participants in ABA Site Visits, for instance, get to know an institution in a way that numbers and reports can’t convey. Naturalistic techniques formalize the process of getting to know users, their culture and work, and the way that they use the library. Qualitative research could help librarians to see past habits and assumptions, teaching us about what our users do and what they need. Could those discoveries also shape our definition of service quality, and lead to better measures of quality?

In 2007, librarians at the University of Rochester River Campus conducted an ethnographic study with the help of their resident Lead Anthropologist (!). (The Danes did something similar a few years ago, coordinated through DEFF, the Danish libraries’ group.) The Rochester researchers asked: what do students really do when they write papers? The librarians had set goals to do more, reach more students, and better support the University’s educational mission. Through a variety of techniques, including short surveys, photo diaries, and charrette-style workshops, they learned a great deal about how students work, how that work is integrated into the rest of students’ lives, and how students view the library. Some results led to immediate pilot programs, such as a late-night librarian program during crunch times. But equally important to the researchers was understanding the students’ perspective on space design and layout in preparation for a reading room renovation.

Concerns about how libraries will manage the increased responsibilities that may accrue from such studies are premature. Our service planning should take into account the priorities of our users. Perhaps some longstanding library services just aren’t that important to our users, after all. Carl Yirka recently challenged librarians on the assumption that everything we currently do is still necessary — and so far, few have risen to the challenge. Some of the things that librarians place value on are not ours to value; our patrons decide whether Saturday reference, instructional sessions on using the wireless internet, and routing of print journals are valuable services. Many services provided by librarians are valuable because they’re part of our responsibility as professionals: to select high-quality information, to organize and maintain it, and to help users find what they need. But the specific ways we do that may always be shifting. Having the Federal Reporter in your on-site print collection is not, in and of itself, a valuable thing, or an indicator of the strength of your collection.

“Measuring more is easy; measuring better is hard.”
Charles Handy (from Joseph R. Matthews, Strategic Planning and Management for Library Managers (2005))

Thinking is Hard


Where does this leave us? The possibilities for survey research may be great, and the tools easy to use, but the discussion is still very difficult. At the ALL-SIS-sponsored “Academic Law Library of 2015” workshop this past July, one small group addressed the question of what users would miss if the library didn’t do what we currently do. If functions like purchasing and management of space were absorbed by other units on campus or in the college, what would be lost? Even for such an experienced group, it was a very challenging question. There were a few concrete ideas that the group could agree were unique values contributed by law librarians, including the following:

  • Assessment of the integrity of legal information
  • Evaluation of technologies and resources
  • Maintaining an eye on the big picture/long term life of information

The exercise was troubling, particularly in light of statements throughout the day by the many attendees who insisted on the necessity of existing services, while unable to articulate the unique value of librarians and libraries to the institution. The Yirka question (and follow-up) was a suggestion to release some tasks in order to absorb new ones, but we ought to be open to the possibility that we need a shift in the kind of services we provide, in addition to balancing the workload. As a professional community, we’re still short on wild fantasies of the library of the future, and our users may be more than happy to help supply some of their own.

Doing Qualitative Work

Could good qualitative research move the ball forward? Though good research is time-consuming, it could help us answer fundamental questions about how patrons use legal information services, how they use the library, and why they do or don’t use the library for their work. Qualitative research could also explore patron expectations in greater detail than quantitative studies like LibQUAL+®, following up on how the library compares to other physical spaces and other sources of legal information that patrons use.
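
To give a flavor of what working with such data involves: qualitative studies commonly code interview or diary segments with themes, then examine how those themes cluster across participant groups. Below is a minimal sketch in Python of that kind of tally, with invented groups, codes, and segments; in a real study the coding scheme emerges from the data over multiple passes, and nothing here reflects any particular study’s protocol.

    # Sketch of tallying coded qualitative data. Groups, codes, and
    # segments are all invented for illustration.
    from collections import Counter

    # Each record: (participant group, codes assigned to one segment)
    coded_segments = [
        ("student", ["prefers-home", "database-confusion"]),
        ("student", ["late-night-work", "prefers-home"]),
        ("faculty", ["delegates-to-ra", "print-browsing"]),
        ("faculty", ["database-confusion"]),
    ]

    by_group = {}
    for group, codes in coded_segments:
        by_group.setdefault(group, Counter()).update(codes)

    # The most frequent themes per group hint at where needs and
    # expectations differ across the community.
    for group, counts in by_group.items():
        print(group, counts.most_common(3))

The point is not the counting, which is trivial, but that the categories themselves come from listening to users, rather than from a fixed list of 22 items.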

It’s important, though, that librarians tap into resources on campus to support survey research, whether qualitative or quantitative. When possible, librarians should use previously vetted instruments, pretested for validity and reliability. This may also be a great opportunity for AALL to work with researchers in library and information science on building a survey instrument that could be shared across academic law libraries.
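
Pretesting need not be exotic, either. A standard first check on a pilot instrument is internal consistency, commonly measured with Cronbach’s alpha: for k items, alpha = (k / (k - 1)) * (1 - (sum of item variances) / (variance of total scores)). Here is a minimal sketch in Python with invented pilot data; a campus survey-research center would run this, and the harder validity checks, far more rigorously.

    # Sketch of Cronbach's alpha for a pilot run of a survey instrument.
    # Rows are respondents, columns are items; the data is invented.
    from statistics import pvariance

    def cronbach_alpha(rows):
        """alpha = k/(k-1) * (1 - sum(item variances) / var(total scores))"""
        k = len(rows[0])                 # number of items
        items = list(zip(*rows))         # transpose: one tuple per item
        item_var = sum(pvariance(col) for col in items)
        total_var = pvariance([sum(row) for row in rows])
        return (k / (k - 1)) * (1 - item_var / total_var)

    pilot = [
        [4, 5, 4, 5],
        [2, 3, 3, 2],
        [5, 5, 4, 4],
        [3, 2, 3, 3],
    ]
    print(round(cronbach_alpha(pilot), 2))  # ~0.92 for this toy data

A conventional rule of thumb reads values above roughly 0.7 as acceptable consistency, though that threshold (and alpha itself) has its critics.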

Stephanie Davidson is Head of Public Services at the University of Illinois in Champaign. Her research addresses public services in the academic law library, and understanding patron needs and expectations. She is currently preparing a qualitative study of the information behavior of legal scholars.

VoxPopuLII is edited by Judith Pratt
