{"id":72,"date":"2009-11-09T10:09:13","date_gmt":"2009-11-09T15:09:13","guid":{"rendered":"http:\/\/blog.law.cornell.edu\/voxpop\/2009\/11\/09\/surveying-is-hard\/"},"modified":"2009-11-09T11:34:16","modified_gmt":"2009-11-09T16:34:16","slug":"surveying-is-hard","status":"publish","type":"post","link":"https:\/\/blog.law.cornell.edu\/voxpop\/2009\/11\/09\/surveying-is-hard\/","title":{"rendered":"Surveying is Hard"},"content":{"rendered":"

Where the culture of assessment meets actual learning about users.

These days, anyone with a pulse can sign up for a free SurveyMonkey account, ask users a set of questions, and call it a survey. The software will tally the results and create attractive charts to send upstairs, reporting on anything you want to know about your users. The technology of running surveys is easy, certainly, but thinking about what the survey should accomplish and ensuring that it meets your needs is not. Finding accurate measures, whether of the effectiveness of instructional programs, of the library’s overall service quality or efficiency, or of how well we’re serving the law school’s mission, is still something that is very, very hard. But librarians like to know that programs are effective, and deans, ranking bodies, and prospective students all want to be able to compare libraries, so the draw of survey tools is strong. The logistics are easy, so where are the problems with assessment?

Between user surveys and various external questionnaires, we gather a lot of data about law libraries. Do they provide us with satisfactory methods of evaluating the quality of our libraries? Do they offer satisfactory methods for comparing and ranking libraries? The data we gather is rooted in an old model of the law library, where collections could be measured in volumes and that number was accepted as the basis for comparing library collections. We’ve now rejected that method of assessment, but struggle nevertheless for a more suitable yardstick. The culture of assessment from the broader library community has also entered law librarianship, bringing standardized service quality assessment tools. But despite these tools, and a lot of work on finding the right measurement of library quality, are we actually moving forward, or is some of this work holding us back from improvement? There are two types of measurement widely used to evaluate law libraries: assessments and surveys, which tend to be inward-looking, and data such as budget figures and square footage, which can be used to compare and rank libraries. These are compared below, followed by an introduction to qualitative techniques for studying libraries and users.

(Self)Assessment

There are many tools available for conducting surveys of users, but the tool most familiar to law librarians is probably LibQUAL+®. Distributed as a package by ARL, LibQUAL+® is a “suite of services that libraries use to solicit, track, understand, and act upon users’ opinions of service quality.” The instrument itself is well-vetted, making it possible for libraries to run it without any pre-testing.

The goal is straightforward: to help librarians assess the quality of library services by asking patrons what they think. So, in “22 items and a box,” users can report on whether the library is doing things they expect, and whether the librarians are helpful. LibQUAL+® aligns with the popular “culture of assessment” in libraries, helping administrators to support regular assessment of the quality of their services. Though LibQUAL+® can help libraries assess user satisfaction with what they’re currently doing, it’s important to note that the survey results don’t tell a library what it’s not doing (and/or should be doing). It doesn’t identify gaps in service, or capture opinions on the library’s relevance to users’ work. And as others have noted, such surveys focus entirely on patron satisfaction, which is contextual and constantly shifting. Users with low expectations will be satisfied under very different conditions than users with higher expectations, and the standard instrument can’t fully account for that.
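The expectation problem is easier to see in the gap arithmetic LibQUAL+® is built on: each item is rated three times, for the minimum acceptable, the desired, and the perceived level of service, and quality is reported as the distance between perception and those two baselines. Below is a minimal sketch of that arithmetic in Python, using invented ratings rather than real survey data:

```python
# Sketch of LibQUAL-style gap scoring on its 1-9 scale (all ratings invented).
# Each respondent rates an item three ways: minimum acceptable ("min"),
# desired ("des"), and perceived ("per") service level.

responses = [
    {"min": 5, "des": 8, "per": 7},  # perceives a 7, comfortably above minimum
    {"min": 8, "des": 9, "per": 7},  # perceives the same 7, but below minimum
    {"min": 3, "des": 6, "per": 5},  # lower bar all around, equally "satisfied"
]

for i, r in enumerate(responses, start=1):
    adequacy = r["per"] - r["min"]     # negative means below the tolerance zone
    superiority = r["per"] - r["des"]  # rarely positive in practice
    print(f"respondent {i}: adequacy {adequacy:+d}, superiority {superiority:+d}")
```

The first two respondents perceive identical service, yet one lands comfortably inside the tolerance zone and the other below its floor; only the expectation baseline differs, which is exactly why raw satisfaction is such an unstable yardstick.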

Ranking Statistics

The more visible, external data gathering for law libraries occurs annually, when libraries answer questionnaires from their accrediting bodies. The focus of these instruments is on numbers: quantitative data that can be used to rate and rank law libraries. The ABA’s annual questionnaire counts both space and money. Site visits every seven years add detail and richness to the picture of the institution and provide additional criteria for assessment against the ABA’s standards, but the annually reported data is primarily quantitative. The ABA also asks which methods libraries use “to survey student and faculty satisfaction of library services,” but it doesn’t gather the results of those surveys.

The ALL-SIS Statistics Committee has been working on developing better measures for the quality of libraries, leading discussions on the AALLNet list (requires login) and inviting input from the wider law librarian community, but this is difficult work, and so far few Big Ideas have emerged. One proposal suggested reporting, via the ALL-SIS supplemental form, responses from students, faculty, and staff regarding how the library’s services, collections, and databases contribute to scholarship and teaching/learning, and how the library’s space contributes to their work. This is promising, but it would require more work to build rich qualitative data.

Another major external data gathering initiative is coordinated by ARL itself, which collects data on law libraries as part of its general data collection for ARL-member (university) libraries. ARL statistics are similarly heavy on numbers, though: the questionnaire counts volumes (dropped just this year from the ABA questionnaire) and current serials, as well as money spent.

Surveys ≠ Innovation

When assessing the quality of libraries, two options for measurement dominate: user satisfaction, and collection size (measured in dollars spent, volumes, space allocated, or some combination of those). Both present problems: the former is simply insufficient as the sole measure of library quality and is not useful for comparing libraries, while the latter ignores fundamental differences between the collection development and access issues of different libraries, making the supposedly comparable figures nearly meaningless. A library that is part of a larger university campus will likely have a long list of resources paid for by the main library; a stand-alone law school won’t. Trying to use the budget figures of these two libraries to compare the size of the collection or the quality of the library would be like comparing apples and apple-shaped things. There’s also something limiting about rating libraries primarily on their size; is the size of the collection, or the money spent on it, really the strongest indicator of quality? The Yankees don’t win the World Series every year, after all, despite monetary advantages.
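To put rough numbers on the apples-and-apple-shaped-things problem, here is a toy comparison in Python with entirely invented figures: a campus-affiliated law library whose shared databases sit on the main library’s budget, next to a stand-alone library that pays for everything itself.

```python
# Toy comparison (all figures invented) showing why reported budgets
# are not comparable across differently structured law libraries.

libraries = {
    # reported budget vs. law-relevant spending carried by the main library
    "Campus-affiliated": {"reported": 1_800_000, "carried_by_main": 600_000},
    "Stand-alone":       {"reported": 2_100_000, "carried_by_main": 0},
}

for name, b in libraries.items():
    effective = b["reported"] + b["carried_by_main"]
    print(f"{name}: reported ${b['reported']:,}, effective ${effective:,}")

# Output:
#   Campus-affiliated: reported $1,800,000, effective $2,400,000
#   Stand-alone: reported $2,100,000, effective $2,100,000
```

Ranked by reported budget, the stand-alone library comes out ahead; count the spending carried by the main library and the ordering flips. Nothing about either collection changed, only the bookkeeping.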

The field of qualitative research (a.k.a. naturalistic or ethnographic research) could offer some hope. The techniques of naturalistic inquiry have deep roots in the social sciences, but have not yet gained a foothold in library and information science. Naturalistic techniques could be particularly useful for understanding the diverse community of law library users. While not necessarily applicable as a means of rating or ranking libraries, these techniques could lead to a greater understanding of law library users and their needs, and help libraries to develop measures that directly address the match between the library and its users’ needs.

How many of us have learned things about a library simply by having lunch with students, or chatting with faculty at a college event, or visiting another library? Participants in ABA site visits, for instance, get to know an institution in a way that numbers and reports can’t convey. Naturalistic techniques formalize the process of getting to know users, their culture and work, and the way that they use the library. Qualitative research could help librarians to see past habits and assumptions, teaching us about what our users do and what they need. Could those discoveries also shape our definition of service quality, and lead to better measures of quality?

In 2007, librarians at the University of Rochester River Campus conducted an ethnographic study with the help of their resident Lead Anthropologist (!). (The Danes did something similar a few years ago, coordinated through DEFF, the Danish libraries’ group.) The Rochester researchers asked: what do students really do when they write papers? The librarians had set goals to do more, reach more students, and better support the University’s educational mission. Through a variety of techniques, including short surveys, photo diaries, and charrette-style workshops, the librarians learned a great deal about how students work, how their work is integrated into their other life activities, and how students view the library. Some results led to immediate pilot programs: a late-night librarian program during crunch times, for instance. But equally important to the researchers was understanding the students’ perspective on space design and layout in preparation for a reading room renovation.

Concerns about how libraries will manage the increased responsibilities that may accrue from such studies are premature. Our service planning should take into account the priorities of our users. Perhaps some longstanding library services just aren’t that important to our users, after all. Carl Yirka recently challenged librarians on the assumption that everything we currently do is still necessary, and so far few have risen to the challenge. Some of the things that librarians place value on are not ours to value: our patrons decide whether Saturday reference, instructional sessions on using the wireless internet, and routing of print journals are valuable services. Many services provided by librarians are valuable because they’re part of our responsibility as professionals: to select high-quality information, to organize and maintain it, and to help users find what they need. But the specific ways we do that may always be shifting. Having the Federal Reporter in your on-site print collection is not, in and of itself, a valuable thing, or an indicator of the strength of your collection.

“Measuring more is easy; measuring better is hard.”
Charles Handy (quoted in Joseph R. Matthews, Strategic Planning and Management for Library Managers (2005))

Thinking is Hard

\"upside<\/p>\n

Where does this leave us? The possibilities for survey research may be great, and the tools easy to use, but the discussion itself remains very difficult. At the ALL-SIS-sponsored “Academic Law Library of 2015” workshop this past July, one small group addressed the question of what users would miss if the library didn’t do what we currently do. If functions like purchasing and management of space were absorbed by other units on campus or in the college, what would be lost? Despite the experience of the group, it was a very challenging question. There were a few concrete ideas the group could agree were unique values contributed by law librarians, including the following: