How Well Are You Doing Your Job? You Don’t Know. No One Does.
In Brief: The outputs librarians are measuring are not directly associated with specific practices that lead to improved lives for the people we serve. If we cannot make that connection, we have no way of knowing how well we are doing our jobs. This article suggests four measurable outcomes that libraries and librarians could use to make sure their activities are improving their constituents’ well-being, and to compare their effectiveness with one another, allowing less effective libraries to learn from those achieving greater success.
How well are you, as a librarian, doing your job? That’s what I ask myself every day. But to answer that question, we have to answer another one first: “What is a library?”
I think of a library as a cooperative for infrequently needed, relatively inexpensive, durable goods. This description captures the traditional view of libraries. For most people, an individual item like a library book, DVD, or CD is infrequently needed and relatively inexpensive to replace if it’s lost or broken. These items are also durable, in the sense that dozens of your neighbors, students, faculty, or colleagues can make use of them before they wear out.
Facts or other pieces of information we acquire the right to access, organize, and teach others to find and use are also infrequently needed, relatively inexpensive, and durable. The same applies to ebooks and individual sessions at a shared computer, so in my opinion BiblioTech, the “library without books” in San Antonio that’s been grabbing headlines, is a traditional library. As are libraries that lend seeds, cake pans, and tools.
The book remains libraries’ best known brand, and as brands go it’s a wonderful object with which to be associated, but libraries are more than book warehouses. Like many cooperatives, libraries employ people whose job it is to make sure their operations run smoothly and their members receive high quality service. And, like other cooperatives, programming and education are central to what libraries do: the International Co-operative Alliance includes “Education, Training and Information” among its seven principles. Libraries and other cooperatives don’t just provide less expensive access to goods or services; we also provide programs that enable people to understand how to use those goods or services, place them into context, enjoy them more, and interact with others who share similar interests.
The International Co-operative Alliance’s other six principles apply to us as well: we have voluntary and open membership; we are accountable to our members; our members contribute equitably to our capital; we are autonomous; libraries cooperate with each other; and we work for the sustainable development of our communities.
That’s what a library is. What should it do?
Improving Our Constituents’ Well-Being
Libraries are responsible, like every publicly funded agency, for increasing our constituents’ overall well-being.
Even if the library where you work receives no public funding, you still, like those of us in the public sector, have a moral and fiduciary responsibility to your colleagues, students, or anyone else who funds your ongoing employment and who relies on you to provide services that have the potential to make their life better, their studies richer, or their time at work more productive. It’s our job to complement other public and private services by making experiences and opportunities available that are more difficult or expensive to access in other ways. It’s our job to improve our constituents’ well-being in ways that make sense economically.
To this point in the article, I’ve discussed libraries in general. From this point forward, I’ll only discuss libraries and other agencies in the United States, and I’ll mostly discuss public libraries. The principles I’ll discuss are borrowed from agencies other than libraries, and just as I adapt these principles to American public libraries, readers can adapt these processes and suggestions to other types of American libraries, or to any type of library anywhere in the world. Though I would like to make more specific suggestions, some factors, such as funding models or cultural expectations, vary enough that accounting for everything is impossible for any single author: the data takes so long to gather and analyze that it changes before the process can be completed. But I think the general ideas I discuss in this article are universal.
Measuring Our Success
It’s librarians’ job to improve our constituents’ well-being. The Report by the Commission on the Measurement of Economic Performance and Social Progress defines “well-being” across eight dimensions that “should be considered simultaneously”:
- Material living standards (income, consumption and wealth);
- Health;
- Education;
- Personal activities including work;
- Political voice and governance;
- Social connections and relationships;
- Environment (present and future conditions);
- Insecurity, of an economic as well as a physical nature. (Stiglitz, Sen, & Fitoussi, 2009, pp. 14-15)
As its title suggests, this 292-page report discusses the reasons for measuring well-being, both objective and subjective (p. 15), as well as methods that researchers can use to accomplish these tasks. An alternative way to describe this general concept, within the library context, is “impact”. An ISO standard released on April 10, 2014, Information and Documentation—Methods and Procedures for Assessing the Impact of Libraries, defines impact as “difference or change in an individual or group resulting from the contact with library services,” and notes that “(t)he change can be tangible or intangible” (ISO 16439:2014). Examples of impact on individuals could include “changes in skills and competences; changes in attitudes and behaviour; higher success in research, study or career; and increase of individual well-being” by which they seem to mean subjective well-being, while examples of impact on society could include “social inclusion; free access to information; options of education and life-long learning; local culture and identity; and better health care” (Poll, 2012, pp. 124-125).
I prefer the term “well-being” because (1) I value its association with the Report by the Commission on the Measurement of Economic Performance and Social Progress and other complementary scholarship, (2) it has more clearly articulated metrics associated with its measurement, (3) I think it’s more elegant to combine individuals and society when establishing appropriate metrics for libraries, and (4) I prefer to write that “our goal is improving or increasing our constituents’ well being” to writing that “our goal is to have a greater impact on our constituents.” ((Having “an impact on” people seems to put them into a passive position, and seems to imply the violence of condescension as well as physical threat.)) Though regardless of how we state it, this seems like a worthwhile goal. The more important questions deal with how we should measure our success.
Libraries currently measure the outputs required by IMLS or ARL, such as library visits, the number of items that get checked out or accessed in other ways, the number of people who attend our programs, the number of people a library employs (and how much it pays them), or the amount a library spends on acquisitions. These are important to know, as are other, more sophisticated outputs, such as user satisfaction (as measured by projects such as LibQUAL+), public perception (as reported by organizations like Pew and OCLC), or observational studies on how people actually use the library, such as a study conducted by the Public Library of Cincinnati and Hamilton County. These are all outputs, measures of the stuff we do, buy, or produce, including the subjective reactions we elicit or the behaviors we encourage. They are not outcomes, which are objective measures of the difference we make in our constituents’ lives. ((Not all outcomes are positive; there can be negative outcomes. And while there may be positive outcomes that would not be considered improvements in well-being, I am not aware of them.))
As the philosopher John Rawls wrote in 1971, “it is irrational to advance one end rather than another simply because it can be more accurately estimated.” But that is exactly what libraries are doing. We measure the number of people who walk into the building and the number of books they take with them when they leave, rather than the difference the library and the books are making in their lives. We measure how many sessions we teach or how satisfied our students are with our services, rather than our contributions to who they become after graduation. Of course we need to continue to measure our outputs; if we were not doing it already, this article would be about the need to start. My point is not that we should stop measuring our outputs, but that we should begin measuring our outcomes and adjust our priorities accordingly.
Comparing Ourselves to Other Public Agencies
The difference between outputs and outcomes, and the process and value of measuring outcomes, can be easier to appreciate if we look at other types of agencies. For instance, in healthcare, we see measures of quality of care or of overall community or population health replacing tallies of the number of operations performed or the percentage of hospital beds occupied. Medicare’s Hospital Compare program includes measures of readmissions, complications, and deaths, as well as pain, addressing four of the most important patient outcomes. Accountable Care Organizations, which are reported to serve 17% of Americans, “get paid more for keeping their patients healthy and out of the hospital.”
Similarly, in evaluating police practices, we see measures of effectiveness that reflect a reduction in criminal behavior, rather than an output, such as an increase in the number of patrol hours or arrests. And in K-12 education, rather than relying solely on outputs like graduation rates, A Blueprint for Reform: The Reauthorization of the Elementary and Secondary Education Act proposes to “support the development and use of a new generation of assessments that are aligned with college- and career-ready standards, to better determine whether students have acquired the skills they need for success.” In other words, A Blueprint for Reform suggests that it is possible to create a definition or definitions for success, work backward from that definition to identify the skills that lead to it, and then create assessments that measure those skills. Even studies on pre-K programs are linked to outcomes, such as studies on the Brookline Early Education Project (BEEP), the Carolina Abecedarian Project, and the Perry Preschool Program in which longitudinal studies demonstrate measurable improvements in well-being. As one report states, “Why did urban BEEPers surpass their peers in educational achievement and income as well as in physical and mental health? The executive skills participants had acquired in their earliest years of schooling … were applicable to non-school tasks and gave these young adults distinct advantages when they became responsible for their own lives….” (Carnegie, 2006, p. 11).
These other publicly funded agencies establish goals whose outcomes result in increases in our overall health, our safety, our success, or our life choices. They create measures that seem likely to be associated with these goals, conduct studies to assess their hypotheses, and use the information they collect to change their practices in ways that seem likely to improve their constituents’ well-being.
I believe libraries could do the same thing. There are numerous studies about libraries that discuss outcomes: the output vs. outcome distinction is too important for librarians to have missed. Unfortunately, these studies do not acknowledge that individual librarians or even individual libraries make choices that differ from those of their peers, and that these decisions cause some librarians and libraries to outperform their peers in specific ways and lag them in others. They create a fictional world in which we are the same, when we are actually autonomous agencies that vary a great deal. By promulgating this fiction, we lose the ability to identify the most effective practices within our profession, which denies the least effective among us the opportunity to follow their lead.
The case for aggregating our results, aside from the fact that it is easier to accomplish, is that it creates a simpler narrative when we discuss the intrinsic value of a library or librarian, or wish to argue that more funding for libraries is associated with desirable outcomes. This is a trait the reports on aggregated outcome measures share with the reports on return on investment (ROI) that have been collected on the ALA website. These ROI reports were assembled by consultants and intended for lobbyists to use in meeting with politicians: their implicit assumption is that every library is like a citizen of Lake Wobegon: strong, good looking, and above average.
We can believe in libraries and in the work we do, and still acknowledge that all of us are comparatively strong in some areas and comparatively weak in others. Practitioners of other publicly funded professions have accepted this fact and it has led to improved efficiency and more desirable outcomes as the practices that produce better results are identified and disseminated.
“You Mean, What Do We Tell Other People?”
In addition to conducting solitary research, I consistently ask these questions in discussions with other librarians: “What’s the best way to measure how you’re making your constituents’ lives better? Or, in other words, how do you know how well you’re doing your job?”
The Taiga Forum, an assembly of Associate University Librarians, has open meetings at ALA conferences. I like to attend these meetings, and follow Taiga’s work, because they are fond of asking difficult or controversial questions. Because I am not an AUL, I have always kept quiet at Taiga meetings, but at ALA Midwinter in January 2014, I asked them my question: “How do you know you’re doing your job well?”
“You mean, what do we tell other people?” asked one of Taiga’s conveners. “Or do you mean, what do we tell ourselves at night when we’re sitting quietly with our thoughts?”
The Taiga members discussed this question, but no one had a particularly satisfying response they could use to reassure themselves when they were sitting quietly with their thoughts. There was general consensus that the University of Minnesota Libraries seemed to be on the right track with its Library Data and Student Success project, as well as some enthusiasm for Megan Oakleaf’s report for ACRL, Value of Academic Libraries. ((Edit: Thanks to Megan Oakleaf for pointing out the public libraries section in this report. She does a very good job of summarizing several different approaches to valuing public libraries.))
I agree that UM’s Libraries appear to be on the right track, but I also think there’s a big difference between its scope and initial inquiries, including measures of pass/fail rates, grades, GPA, retention, and library use, and the kinds of outcomes that are used as goals in health care, public safety, K-12 education, and preschool. I have similar optimism, but also similar reservations, about two new Bill and Melinda Gates Foundation-funded projects, the Edge Initiative and the Impact Survey. Neither appears likely to produce immediate answers, but both appear to be potentially useful tools in enabling outcomes-based assessment by collecting data that could be used to test hypotheses and adjust our decisions and practices accordingly.
Four Responses to the Question: Why Aren’t Libraries Already Measuring Outcomes?
The same question arises every time I bring up this topic with my colleagues in other libraries: “How could libraries ever know we’re increasing well-being?” Once again, it’s useful to answer another question first: “Why aren’t libraries already measuring outcomes?” Because if you want to start doing something, it’s useful to know why it isn’t happening already.
I have encountered four answers to this question, and I want to address each of them individually: (1) We are measuring outcomes, though with very little precision; (2) It’s too expensive to conduct the kinds of studies that are taking place in health care and education; (3) We would study outcomes if anyone other than libraries cared; (4) It’s impossible to measure libraries’ outcomes.
1. We Are
I agree that libraries likely are doing some very limited, unscientific planning based on the well-being we help to promote for our constituents. Librarians are smart, educated, hard-working people who have elected to work in a service profession. I think we pay attention to the feedback we get from interacting with and observing the people who use the library, and we react to the needs we perceive within the communities we serve. But the cultural shifts within a profession and the evolution of its practices can be inexact and are often misguided. Librarians are prone to all of the same cognitive biases as other people. Over time, we’re likely trending toward optimizing libraries to enhance well-being among the people who rely on our services, but that’s not science, and it’s not a satisfactory way to fulfill our obligations.
2. It’s Expensive
Of course, science is expensive. Healthcare has plenty of money to spend on assessment: the US spends almost 18% of its gross domestic product on healthcare. Education also has greater capacity to spend money, since that’s where over 5% of US GDP is spent. Public libraries have significantly less money than either: public libraries receive the equivalent of about 2% of what federal, state, and local governments allocate to K-12 public schools in the US. ((
| | Libraries | Schools | Percentage |
| --- | --- | --- | --- |
| Federal Exp. | 46,868,000 (1) | 55,900,112,000 (2)\*\* | 0.08% |
| State Exp. | 481,579,000 (3)\* | 276,153,850,000 (2)\*\* | 0.17% |
| Local Exp. | 10,896,002,000 (3)\* | 258,893,617,000 (2)\*\* | 4.21% |
| Total Exp. | 11,424,593,261 | 590,994,447,000 | 1.93% |

\* U.S. Census Bureau, 2011b. The Institute of Museum and Library Services (IMLS) collects self-reported data from libraries. In 2009, the IMLS reports that libraries received $873,327,000 in state funding and $9,757,162,000 in local funding.
\*\* Data is from U.S. Census Bureau, 2011a, which includes data for federal, state, and local expenditures. U.S. Census Bureau, 2011b, reports $9,876,299,000 in state expenditures and $637,062,012,000 in local expenditures; these latter figures include capital outlays, which account for 16% of state expenditures and 11% of local expenditures.
Sources: Swan et al., 2011 (1); U.S. Census Bureau, 2011a (2) and 2011b (3).
))
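As a sanity check, the percentage column in the table above can be recomputed from the table’s own dollar figures. Here is a minimal sketch (the figures are copied from the table; the total percentage is computed by summing the three rows rather than taken from the table’s Total row, which was compiled separately):

```python
# Recompute the Libraries-as-a-percentage-of-Schools column from the
# dollar figures in the table above (federal, state, and local rows).
library_exp = {"federal": 46_868_000, "state": 481_579_000, "local": 10_896_002_000}
school_exp = {"federal": 55_900_112_000, "state": 276_153_850_000, "local": 258_893_617_000}

for level in library_exp:
    pct = 100 * library_exp[level] / school_exp[level]
    print(f"{level:>7}: {pct:.2f}%")

# Overall: library spending as a share of K-12 school spending.
total_pct = 100 * sum(library_exp.values()) / sum(school_exp.values())
print(f"  total: {total_pct:.2f}%")  # rounds to the article's "about 2%"
```

Run as-is, this reproduces the 0.08%, 0.17%, 4.21%, and 1.93% figures to two decimal places.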
The fact that so little is spent on libraries, compared to other public agencies, tends to reduce the scrutiny libraries face. Cutting library funding saves relatively little money for the political or social backlash it invites; an elected official could realize equivalent savings by trimming other public services’ much larger budgets by a far smaller percentage.
3. No One Cares
This does not mean that no one cares, or that libraries will forever be relatively insulated from the level of scrutiny and the types of questions that other publicly funded services have encountered. The government and insurers have been significant factors in forcing healthcare providers to focus on outcomes, and the government has also forced educators to focus on outcomes, despite their objections. At this point, libraries may be the largest recipient of public funding that has yet to identify relevant outcomes, or develop coherent models and methods for assessing our efficiency in improving our constituents’ well-being. I believe we’re next, that it’s just a matter of time, and if we don’t do this work ourselves that it will be done for us by people who don’t appreciate our mission and our work as much as we do.
4. It’s Impossible
I do not think assessing our performance in enhancing our constituents’ well-being is impossible, nor do I think it has to be particularly expensive, especially if libraries work together. As Hugh Rundle points out in “What We Talk About When We Talk About Public Libraries,” “there is no incentive to produce research and no sanction for failing to do so….” This leads to a “positive feedback loop” in which the absence of useful research on public libraries provides little incentive for librarians who work at public libraries to read or contribute to our professional literature. It becomes easier not to question what we think is best when we are rarely, if ever, confronted with those questions.
One of the issues educators faced during the recent widespread introduction of standardized testing that accompanied “No Child Left Behind” was the insistence by many teachers that it was impossible to evaluate their performance. They stated, correctly, that every situation is different. What I think many failed to appreciate is that statisticians can correct for these differences, and can do it with a great deal of precision, if they have relevant, accurate data. The National Education Association’s official position supports collecting relevant, accurate data: “We must replace the cheap, flawed standardized tests now used with second-generation assessment systems that (1) provide students with multiple ways to show what they have learned over time and (2) provide educators with valid data to improve instruction and enhance support for students.” But it doesn’t offer a specific, affordable, better alternative to those cheap, flawed tests. Educators found themselves in a defensive position, in part, because they were forced to issue critiques, rather than alternatives, when they failed to anticipate US politicians’ willingness to enact standards, whether the teachers supported them or not. I don’t want to see this happen to libraries as well.
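The kind of statistical correction described above can be illustrated with a toy “value-added” model: regress each library’s outcome on a community covariate, then rank libraries by how far they exceed or fall short of expectation rather than by raw scores. Everything below (library names, incomes, scores) is invented for illustration; a real analysis would use many covariates and proper inference:

```python
# Toy value-added sketch: compare each library's outcome to the outcome
# predicted from a community covariate, so libraries serving very different
# populations can be compared fairly. All names and numbers are hypothetical.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form, one covariate)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

libraries = {  # name: (median household income in $1000s, mean literacy score)
    "Ashville": (38, 61), "Brookton": (55, 70), "Cedar Falls": (72, 74),
    "Dunmore": (45, 69), "Elkhart": (63, 68),
}
xs = [income for income, _ in libraries.values()]
ys = [score for _, score in libraries.values()]
a, b = fit_line(xs, ys)

for name, (income, score) in libraries.items():
    expected = a + b * income  # score predicted from the covariate alone
    print(f"{name:12} raw={score}  expected={expected:.1f}  value-added={score - expected:+.1f}")
```

In this invented data, Cedar Falls has the highest raw score but Dunmore the highest value-added: it outperforms what its service area’s income would predict. That is the distinction raw output counts cannot make.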
Measurable Ways of Assessing Libraries’ Contributions to Our Constituents’ Well-Being
I’ve identified the reasons we are not yet measuring outcomes, and I think the reasons are honest, but they are not valid excuses. That’s an important distinction: honest reasons vs. valid excuses. You can have perfectly honest reasons for not doing something and still fail to have a valid excuse for not taking an important action. Here’s an example: the media and the political polling organizations had reasons for not predicting elections accurately, but, as Nate Silver demonstrated, their reasons were not valid excuses. The data was there; it just had to be associated with the outcome that best served our overall well-being: accurate forecasts. Speaking of voting…
Voting
Part of Andrew Carnegie’s motivation for providing the seed funding that brought public libraries to thousands of towns and cities in America and around the world was, as characterized recently by the Carnegie Corporation, “to promote democracy through access to knowledge.” Libraries often invoke this idea, but I have never seen it measured, even though measuring participation in democracy, at least in a relatively crude form, is simple.
Are the people in the area your library serves voting? That’s the basic form of the research question. And the basic form of the question leads naturally to somewhat more sophisticated versions of the question: Are people voting more than we would expect them to based on the demographic or other factors that contribute to how frequently people typically vote? How frequently are they voting in different types of elections, such as local, state, or national? Are they voting more or less in your service area than they are in comparable areas?
Taking this to the next level, libraries could research what motivates people to vote and how they feel after they do. Are there emotional factors that go into the decision to vote? Are people who are more educated on the issues more likely to vote? Do they vote if they know more about the candidates? Do they vote more if they know more of the people in their community?
Finally, libraries can investigate what they can do to promote participation in democracy. Maybe people with library cards are more likely to vote. Or people who attend programs at the library, regardless of the topic. Or if they know at least one librarian by name, or if at least one librarian knows their name, because that makes them feel more connected to the services they receive. ((I’m using the term “librarian” in the vernacular, to connote anyone who works in a library. As I wrote previously, I “use the word to describe anyone who works in a library, anyone who works specifically or primarily for the benefit of libraries, or anyone who has a degree in librarianship. Many of the best librarians I’ve met don’t (or don’t yet) have library degrees, don’t work solely for libraries, or aren’t employed by libraries directly. Librarians are people whose work benefits library users, and I think of the best librarians as the people whose work provides these users with the greatest benefit.”)) Maybe people who belong to community groups that meet at the library are more likely to vote.
There are dozens of reasonable hypotheses. We could associate our practices with the many polls conducted to determine how informed voters are about various issues. Or measure grassroots involvement or other ways in which people get involved in politics or political issues. If we decide we want to find evidence that libraries are capable of literally “promoting democracy,” we can do it. And if we want to increase participation in democracy by finding ways that our services can increase the elements of well-being that are associated with voting, I believe we’re capable of doing that as well.
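Any one of these hypotheses can be made testable. For instance, the hypothesis that library cardholders vote at higher rates than non-cardholders could be checked with a standard two-proportion z-test. The survey counts below are invented; a real study would also need to control for demographic confounders, since cardholders are not a random sample of a service area:

```python
# Two-proportion z-test for a hypothetical survey: do library cardholders
# vote at a higher rate than non-cardholders? Counts below are invented.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided z-test that group A's rate exceeds group B's."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided upper tail
    return z, p_value

# Invented survey: 410 of 600 cardholders voted vs. 300 of 550 non-cardholders.
z, p = two_proportion_z(410, 600, 300, 550)
print(f"z = {z:.2f}, one-sided p = {p:.6f}")
```

With these made-up counts the difference is large and the p-value tiny, but even a significant result would only show association; establishing that library services cause higher turnout would take the kind of controlled, comparative studies described above.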
Literacies
We already know that improving people’s literacy is associated with both objective and subjective improvements in well-being. Libraries, because of their association with books and reading, are already associated with literacy as well. What I have not yet seen are the kinds of objective measures for literacy rates that I’m proposing for voting rates. We say all the time that libraries promote literacy, and it appears to be true, but by how much? Which libraries are doing it best? And how can the rest of us learn from them?
Measuring literacy is expensive, but the cost is hardly prohibitive, especially if we work together to fund baseline testing. NCES has already worked, if somewhat erratically, at estimating literacy at the state and county level. We can gather more accurate, more granular data, and then we can begin to assess and adjust our programs accordingly. We can also use our data to make accurate connections between our services and advances in well-being within the communities we serve.
While we’re at it, we should also evaluate our contributions to other kinds of literacies, such as numeracy, information literacy, health literacy, and cultural literacy. We can evaluate how increases in different kinds of literacy affect people’s lives, and also test for improvements. Some of these tests already exist, such as the ETS iSkills test. Purchasing this specific test for widespread assessment may not be feasible, either at the scale we would need or based on the price ETS may demand, but the fact that this test exists proves that it’s possible to evaluate an enormous number of people within a short amount of time. And the fact that it’s possible, and that it would help guide our decisions toward producing demonstrably meaningful outcomes for our constituents, leads me to hope we will start using this kind of method in the near future.
Perhaps one path toward that future is Los Angeles Public Library’s Career Online High School program. This program seems likely to increase well-being among LAPL’s constituents by increasing multiple literacies. It is also a technology that could be used, in the future, to measure the improvements in their well-being that accompany those literacies.
Employment
Employment figures are both difficult to measure and in widespread use. The Bureau of Labor Statistics releases monthly unemployment figures for the entire USA. What libraries need is a way to come up with similar estimates for the constituencies we serve. We then need to figure out what we can do to reduce those figures, by demonstrating either that libraries are capable of introducing specific programs or that doing a particularly good job at existing library programs will lead directly to reduced unemployment. Again, the key is differentiating library services not only from similar services offered by other agencies, but differentiating one library’s expected and actual performance from comparable libraries’ performance.
We also want to provide similar evidence for our ability to reduce under-employment, improve income trajectories, shorten spells of unemployment when people are laid off, help people find jobs they like, or help people start their own companies. There are dozens of ways for libraries to demonstrate their value to people who wish to find jobs. We need an understanding of where the people in our community are, where they would be expected to be, and the effect our services have on improving their objective and subjective well-being.
Social Capital
If I could only pick one area for libraries to research, social capital would be it. Numerous disciplines are researching social capital—it’s “hot”—which is a good thing, because it means there’s funding out there for advancing knowledge of social capital, and there are also plenty of opportunities out there for collaborations, i.e., learning what we need to know without having to do all the research or acquire all the necessary funding on our own.
I’m most excited about social capital, though, because I think it’s the best fit I have encountered for explaining the value of All Libraries Everywhere. The idea behind social capital is that interpersonal contacts affect our objective and subjective well-being. Throughout their existence, libraries have helped increase the number and quality of our constituents’ contacts, and in our “Bowling Alone” contemporary culture, the need for libraries to play this role appears to be increasing.
Library programs can improve social capital directly by creating opportunities for people to get to know their neighbors. ((See also: my suggestion in the section on voting about knowing one another’s names)) We can also improve social capital by helping people learn more about the topics that others are discussing. When people who don’t have high-speed internet connections at home or who aren’t comfortable enough to use computers without assistance come to the library, they learn about the things their neighbors are discussing. When they play the latest game or watch the latest movie or read the latest book or listen to songs they might not hear otherwise, they can become part of the conversation. ((See also, “What Happens in the Library…” and “What We Talk About When We Talk About Brangelina“))
Once again, there are dozens of ways we can demonstrate our contributions to increasing social capital. Perhaps, in this instance, hundreds or thousands. It’s an enormous, rich area for research, and some scholars who are interested in both social capital and libraries have begun making connections between the two. My hope is to see them create testable hypotheses, determine which libraries are providing demonstrably greater value to our constituents, isolate the practices that are producing the best results, and help us share our most successful techniques with each other.
Conclusion: We Can Do This the Easier Way…
The examples I’ve given are not intended to be comprehensive. I feel certain the good ideas I have not identified far outnumber the ideas I’ve described. Could the many libraries that participate in the United States Department of Agriculture’s Summer Food Service Program pair it with other library-based programs to produce dramatically life-changing results? Could we welcome the homeless into libraries in ways that produce measurable improvements in the well-being of some of the most vulnerable members of our community?
The point I’m trying to make is that what you measure tends to drive what you focus on improving. Right now, what we are measuring is not directly associated with specific practices that lead to improved lives for the people we serve, and if we cannot make that connection scientifically, we have no way of knowing how well we are doing our jobs, or even if we’re doing our jobs well at all.
The International Co-operative Alliance defines a cooperative as “an autonomous association of persons united voluntarily to meet their common economic, social, and cultural needs and aspirations through a jointly-owned and democratically-controlled enterprise.” Many US public libraries were brought into being by a vote, can only be dissolved by a vote, and rely on voters, either directly or indirectly, for their funding and their governance. These voters (along with their non-voting children, parents, and neighbors) trust us to help them meet their economic, social, and cultural needs and aspirations. We owe it to them to know how well we are doing.
I am heartened that librarians are creating conferences about assessment, writing articles about evidence-based library practices, and, most notably at the Public Library of Cincinnati and Hamilton County, using data collection and analysis as the basis for all of their strategic thinking. What I haven’t yet seen is libraries deciding which aspects of well-being they wish to promote, and then working backwards from those goals to figure out how they should allocate their resources.
Fortunately, there seems to be some willingness within libraries to begin catching up to our colleagues in other public services. The president of the Public Library Association, Carolyn Anthony, has written about the importance of “New Measures for a New Era” and created a three-year charge for a Performance Measurement Task Force. I hope they will spend those three years identifying appropriate outcomes to which we could make a meaningful contribution and measuring our degree of participation in helping our constituents achieve them.
I’m optimistic because I like what Carolyn Anthony has written and the composition of the task force she has appointed. I’m also pleased that the Gates Foundation appears to be interested in funding the kinds of outcome-based studies on libraries that it has funded in other areas, most notably global health. ((Edit: Bad timing on my part: the Gates Foundation announced a few hours after I published this article that it will end its Global Libraries program over the next 3-5 years.)) And, despite my misgivings about some of its terminology, I am excited about the possibility that ISO 16439:2014—Methods and Procedures for Assessing the Impact of Libraries could establish useful ways for similar libraries to compare their “impact.”
But I’m also fearful, because we may be racing against the clock. If we don’t agree on outcomes and ways to measure them, and we don’t quickly and voluntarily begin working together, I suspect that others will mandate assessment measures for us, even if those measures do not represent our values or our understanding of how we might best serve our constituents.
Thanks to Paula Brehm-Heeger and to my In the Library with the Lead Pipe colleague, Gretchen Kolderup, for their comments on an earlier draft of this article. Thanks, too, to all the librarians and non-librarians who have discussed this topic with me in person or online. In particular, I want to thank the LIS faculty members at Rutgers University whose classes advanced my understanding of library research in general, and this topic in particular: Marie Radford, Marija Dalbello, Dan O’Connor, Ross Todd, Betty Turock, and Gus Friedrich.
References and Further Reading
Braun, L.W., Hartman, M.L., Hughes-Hassell, S., Kumasi, K., and Yoke, B. (2014). The future of library services for and with teens: A call to action. National Forum on Teens & Libraries. Retrieved from http://www.ala.org/yaforum/sites/ala.org.yaforum/files/content/YALSA_nationalforum_final.pdf
Carnegie Corporation of New York. (2006). Why preschool pays off: A breakthrough study links early education to better life choices. Carnegie Results, Summer. Retrieved from http://carnegie.org/fileadmin/Media/Publications/PDF/summer_06_results.pdf
Durrance, J. C., & Fisher-Pettigrew, K. E. (2002). Toward developing measures of the impact of library and information services. Reference & User Services Quarterly, 42(1), 43.
Gawande, A. (June 1, 2009). The cost conundrum. The New Yorker, 36. Retrieved from http://www.newyorker.com/reporting/2009/06/01/090601fa_fact_gawande?currentPage=all
Holt, G. E., & Elliott, D. (2003). Measuring outcomes: Applying cost-benefit analysis to middle-sized and smaller public libraries. Library Trends, 51(3), 424.
Holt, L. (2007). How to succeed at public library service: Using outcome planning. Public Library Quarterly, 26(3/4), 109-118. doi:10.1300/J118v26n03_06
Huysmans, F., & Oomes, M. (2013). Measuring the public library’s societal value: A methodological research program. IFLA Journal, 39(2), 168-177. doi:10.1177/0340035213486412
Institute of Museum and Library Services. (n.d.). Lib-Value: Value, outcome and return on investment of academic libraries. Retrieved from http://libvalue.cci.utk.edu/
ISO (2014). ISO 16439:2014—Information and documentation—Methods and procedures for assessing the impact of libraries. Geneva: ISO.
Johnson, C. A. (2010). Do public libraries contribute to social capital: A preliminary investigation into the relationship. Library & Information Science Research, 32(2), 147-155. doi:10.1016/j.lisr.2009.12.006
Kennedy, D. M. (2009). Deterrence and crime prevention: Reconsidering the prospect of sanction. New York: Routledge.
Kyrillidou, M. (2002). From input and output measures to quality and outcome measures, or, from the user in the life of the library to the library in the life of the user. Journal of Academic Librarianship, 28(1/2), 42.
Library Research Service. (n.d.). School libraries impact studies. Retrieved from: http://www.lrs.org/data-tools/school-libraries/impact-studies/
Lown, C., & Davis, H. (2009). Are you worth it? What return on investment can and can’t tell you about your library. In The Library With The Lead Pipe. Retrieved from https://www.inthelibrarywiththeleadpipe.org/2009/are-you-worth-it-what-return-on-investment/
Organisation for Economic Co-operation and Development. (2013). OECD guidelines on measuring subjective well-being. Paris: OECD.
Poll, R. (2003). Impact/outcome measures for libraries. Liber Quarterly: The Journal Of European Research Libraries, 13(1-4), 329-342.
Poll, R. (2012). Can we quantify the library’s influence? Creating an ISO standard for impact assessment. Performance Measurement and Metrics, 13(2), 121-130.
Poll, R., & Boekhorst, P. (2007). Measuring quality: Performance measurement in libraries. München: K.G. Saur.
Rankin, C. (2012). The potential of generic social outcomes in promoting the positive impact of the public library: Evidence from the national year of reading in Yorkshire. Evidence Based Library and Information Practice, 7(1), 7-21. Retrieved from https://ejournals.library.ualberta.ca/index.php/EBLIP/article/view/11727/13685
Rawls, J. (1971). A theory of justice. Cambridge, MA: Belknap Press of Harvard University Press.
Rooney-Browne, C. (2011). Methods for demonstrating the value of public libraries in the UK: A literature review. Library & Information Research, 35(109), 3-39.
Spinks, A. (2009). Library media programs and student achievement. Retrieved from http://www.cobbk12.org/librarymedia/proof/research.pdf
Stiglitz, J. E., Sen, A., & Fitoussi, J.-P. (2009). Report by the commission on the measurement of economic performance and social progress. Paris: The Commission.
Swan, D. W., Miller, K. A., Craig, T., Dorinski, S., Freeman, M., Isaac, N., … Scotto, J. (2011). Public libraries in the United States: Fiscal year 2009. Washington, DC: Institute of Museum and Library Services. Retrieved from https://harvester.census.gov/imls/pubs/pls/pub_detail.asp?id=140
U.S. Census Bureau. (2011a). Public education finances: 2009. Washington, DC. Retrieved from http://www.census.gov/govs/school/
U.S. Census Bureau. (2011b). State and local government finances summary: 2009. Washington, DC. Retrieved from http://www2.census.gov/govs/estimate/09_summary_report.pdf
Vårheim, A. (2008). Theoretical approaches on public libraries as places creating social capital. IFLA Conference Proceedings, 1-13.
Weil, S. E., Rudd, P. D., & Institute of Museum and Library Services (U.S.). (2000). Perspectives on outcome based evaluation for libraries and museums. Washington, DC: Institute of Museum and Library Services.
As usual, lots of great food for thought from Brett. I hope other professionals will take up the call for finding ways to measure and codify public libraries’ impact through outcome measurement. Looking forward to reading more on this subject.
Thanks, Paula, for this comment and for making the article much better than it would have been without your critique and recommendations. In this article, I link to the award-winning article that Paula co-authored with her colleague, Greg Edwards. I encourage anyone who has any interest in this topic to read it: http://publiclibrariesonline.org/2013/05/remaking-one-of-the-nations-busiest-main-libraries/
“We can believe in libraries and in the work we do, and still acknowledge that all of us are comparatively strong in some areas and comparatively weak in others. Practitioners of other publicly funded professions have accepted this fact and it has led to improved efficiency and more desirable outcomes as the practices that produce better results are identified and disseminated.”
Fear is a great motivator to do nothing. I can’t speak for public libraries, having never worked in one, but in academic libraries we’re frequently on the defensive, competing for funding, personnel lines, and (in some cases) status (as in faculty or not) with our colleagues in traditional faculty roles. I think that academic librarians are often afraid to ask the question posed by the title of your article because the answer might very well be no. We have this narrative that the academic library is the heart of the college and librarians are vital to student learning but we have no mechanism for supporting that claim other than my gate counts and instructional stats (as in how many classes/students we teach).
I’m wrapping up an assessment project that in some ways shows that a piece of our instructional program is NOT effective. My first reaction upon running the statistical analysis that revealed this was–oh crap, how can I spin this? Which is just plain WRONG. After recovering from that moment, I changed the question to: What are we doing wrong? We need to not be afraid to show and share our weaknesses if we are to have any hope of improving what we do.
Thank you so much for this article. I read it at the time when I needed it most!!!
Thanks, Veronica. It feels great to know that you found this article useful.
I agree with everything you’ve written about fear, and I think almost all of it applies to almost all librarians — and to many, many non-librarians as well. In the article, I used positive examples of people in peer professions who are asking the kinds of questions I hope we’ll ask, but there are at least as many examples of people who seem to be as afraid to ask those questions as we appear to be.
I hope you’re able to fix the piece of your instruction program that isn’t working the way you’d like, but whether you do or not, it would be interesting to read about what went wrong and the steps you’re taking to fix things. Authors and journals appear to have a bias against publishing negative results, but I often find them fascinating, regardless of whether the author is able to follow up on the negative results with a “how we done good” story.
Thanks for a very interesting, in-depth article. I’ll follow up on some of those links. While I’m not from America, you and your readers may be interested in the new set of Welsh Public Library Standards, launched on 1st May 2014, which include outcome and impact measures for the first time.
More about them available here in a blog post http://libalyson.wordpress.com/2014/05/01/new-standards-for-welsh-public-libraries/ including links to the document. The main outcomes measured are things like skills, wellbeing, making a difference, community, support for development. There are input indicators as well, for a holistic approach.
I’ll be updating my blog post with a link to your article.
Thanks for sharing this report. The full report (28 pages) is thoughtful, clearly explained, and beautifully designed… and then you created a six-page summary report as well, entitled “How good is your public library service?” I hope other libraries and library consortia around the world will steal not only your methods, but also the idea of publishing your framework in such an inviting way.
While I wrote primarily about American public libraries, at least to the extent that some of what I wrote was prescriptive (part of ItLwtLP’s mission is to “argue for solutions”), much of my thinking was shaped by non-US libraries or international agencies, as well as other types of libraries in the US, especially school and academic libraries. Though I would like to have made more generic recommendations, even the differences among American public libraries make it almost impossible to suggest broadly applicable metrics. That said, I believe strongly that our constituents will be best served if librarians who work in very different kinds of libraries are diligent about sharing our activities with each other. Again, I’m grateful to you for sharing your great work with me and with other Lead Pipe readers.