Competency N

Evaluate programs and services using measurable criteria

Because libraries are essentially service organizations (albeit of a non-profit nature), it stands to reason that they share some of the same goals, strategies, and methods as their service-oriented peers in the business world. Among these goals are promoting client satisfaction and loyalty, reaching new patrons and communities, and increasing the use of their services, programs, and resources. Evaluation and assessment, Dugan, Hernon, and Nitecki (2011) assert, are “separate but connected concepts and processes.” Together, they help libraries capture important quantitative and qualitative data, make evidence-based improvements to services and resources, and meet near-term objectives and longer-term goals. On the evaluative side, librarians seek insight into how patrons view the usefulness, relevance and quality of programs and services – particularly whether those offerings help them refine their research skills and achieve their goals. Assessment, often reliant on qualitative data, seeks to clarify the overall impact that various initiatives are having on the desired outcomes of the library and its parent institution. Both frameworks – each drawing on quantitative and qualitative data from multiple sources, and each with its own perspective and focus – help define and report on performance metrics, which in turn enhances the value that the library delivers to its customers “downstream.”

These reports can capture a very wide range of data points: print and electronic usage, ILL transfers and costs, Patron Driven Acquisition (PDA) statistics, patron satisfaction with different services, event attendance, reference transactions, instructional services, ROI metrics for various programs, in-depth feedback from users, and many others. This information is of interest not only to the analysts and managers who base future collection decisions on it, but also to administrators and Boards of Directors, who want to understand the key performance indicators, value delivered, and stakeholder impact. Assessment helps librarians understand the progress that is being made (or not) on the library’s mission and its most important goals. Reporting on the successes and failures of library services, programs and patron satisfaction in various contexts often plays a decisive role in the library’s strategic planning as well as its funding, departmental budgets, and various financial outlays.

Assessment and evaluation also provide feedback for altering and improving programs and services, where the “overall goal is data-based and user-centered continuous improvement of library collections and services” (Ryan, 2006). Regular evaluations and assessments made across the array of services, programs, collections, and marketing and outreach campaigns that the library manages enable an ongoing, incremental process of change and refinement. Equally critical, they provide a means to present this data to stakeholders, particularly administrators and Boards of Directors, and, ideally, to tell a positive story. As collection development has become more complex with the advent of “Big Deal Packages” and other contemporary purchasing, subscription and borrowing options, the need for more sophisticated reporting metrics to measure Cost per Use (CPU) and other salient spending and usage indicators has become more urgent.
Many vendors, publishers and aggregators in the library collection space – particularly those serving academic libraries – have been working over the past couple of decades to meet the need for these analytical tools, though arguably more progress has been made on the library side of the ledger, as I’ll explore later on.
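To make the arithmetic behind CPU concrete, the following is a minimal sketch of how cost per use might be computed and compared across electronic resource packages; the package names, annual costs, and usage counts are invented for illustration, not drawn from any real vendor report.

```python
# Minimal cost-per-use (CPU) sketch: annual spend divided by recorded uses.
# All package names, costs, and usage counts below are hypothetical.

packages = [
    {"name": "Package A (big deal)", "annual_cost": 85000.00, "uses": 41250},
    {"name": "Package B (subject bundle)", "annual_cost": 12400.00, "uses": 3100},
    {"name": "Package C (single journal)", "annual_cost": 2300.00, "uses": 180},
]

for pkg in packages:
    cpu = pkg["annual_cost"] / pkg["uses"]  # dollars per recorded use
    print(f'{pkg["name"]}: ${cpu:.2f} per use')
```

Even this toy comparison shows why a large package can look expensive in absolute terms yet inexpensive per use, which is exactly the kind of nuance the reporting tools discussed below aim to surface.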

The Reference and User Services Association (RUSA) provides guidance on how to evaluate reference interviews and other transactions, whether mediated by staff or not. Reference services have evolved in recent decades to include virtual methods of communication like online chat and email, and will no doubt continue to embrace new technologies. RUSA suggests five criteria, corresponding to the steps of a typical reference interview, for gauging the librarian’s effectiveness: visibility / approachability, interest, listening / inquiring, searching and follow-up (2013). The principles are generally the same whether the transaction is conducted in person, on the phone or online, even while the tools and practices differ. It’s difficult to imagine a reference transaction that doesn’t begin with a high degree of visibility and approachability, qualities that enable a librarian to answer questions “at the patron’s point of need” (2013). Moreover, this step largely sets the “tone, depth and level” (2013) of the rest of the interaction. Both the interest and listening / inquiring steps help refine the librarian’s understanding of the information query – and frequently trigger new insights by the patron into their own information needs – and also lend momentum to the resolution of the interview. RUSA maintains that interest consists of “non-verbal confirmation of understanding patron’s needs” and a “high degree of objective, nonjudgmental interest” (2013) that can be conveyed with eye contact and body language. Listening and inquiring is commonly practiced through reflective listening, in which a patron’s inquiry is repeated back to them. It’s also important for reference librarians to ask open-ended questions intended to elicit more description from the patron about the information and help they’re seeking.

Searching is the pivotal function performed by a librarian in their effort to retrieve accurate information that fulfills the patron’s request. There are several best practices related to search strategy, from using relevant search terms to identifying the sources most likely to contain material that aligns with the patron’s query. RUSA argues that unless the librarian’s searching is precise and useful results are discovered – even when this requires a more extended search process – patrons may become discouraged (2013). One compelling aspect of this step is that it offers the librarian an opportunity to share with the patron techniques for retrieving and appraising results from various sources. The librarian can demonstrate their expertise in efficiently and effectively locating resources appropriate to an information need or, for that matter, to a request for entertainment. In many cases a dialogue can take place in which the patron explains what they’ve already tried, search topics are narrowed or broadened, and the librarian “offers pointers, detailed search paths, and names of resources used to find the answer” (2013). Beyond this impromptu sharing of practical knowledge, somewhat more formal instruction sessions aimed at student learning may make sense in school or academic libraries. The last step of the reference transaction is follow-up. Follow-up is critical both for marking the completion of the librarian’s task at hand and as a performance metric for gauging patron satisfaction. The librarian should always ask whether the patron’s information needs have been satisfied, and if, after exhausting the librarian’s know-how and search tactics, they still have not, the librarian should provide the patron with other resources to consult, including contact details for specialists in germane fields.

Individual reference collection items – including electronic sources – should also be evaluated on a regular basis against several criteria to determine their relevance and value to users. To condense Singer’s (2016) recommendations, the criteria for all formats should include content and authority, while electronic resources should additionally be judged by their user interfaces, branding and customization, provision of full text [search], accessibility, and cost and licensing. She adds that print resources should also be evaluated by their physical attributes, indexing and cost (2016). While cost-per-use and ROI – whether based on checkouts, reference interactions or downloads – are always top of mind for library administrators and funders, Singer’s criteria advocate for a multi-faceted view of the overall value that different reference resources offer by factoring in both features and cost effectiveness.

Collection development practices and the policies that govern them were recurring subjects in several classes I took during my MLIS studies. Beyond the reference services and resources just discussed, there are many other areas within the library that are rightfully scrutinized to glean clues about collection development and how it’s performing vis-à-vis departmental and institutional goals. Among others, these measurements could include usage costs by collection subject and format; ROI and gap analyses; patron satisfaction studies and citation statistics; and potential opportunities to build on successful ventures and rein in poorly performing ones. In an attempt to establish basic boundaries of library collections measurement, Borin and Yi argue that “capacity and usage are general ways of assessing the collection that are particularly useful for both program reviews and accreditation and both have been commonly cited in the collection assessment and evaluation literature” (2008). There are myriad methods and metrics for tracking capacity and usage, which apply, with some variation, to all types of libraries. In the past two decades or so there has been a sea change in the volume and types of content available for librarians to purchase, from electronic databases and scholarly journals to popular literature and entertainment media. The “just in case” model of collection development – aggressively purchasing materials across many subject areas so that they might be on hand (physically or electronically) in the event that an item is requested, materials which otherwise have only potential value – has been giving way to a “just in time” model which aims to meet patrons at their moment and place of need.

The practice of collection development has been revolutionized – and arguably made more complex – by the proliferation of electronic content against a backdrop of ongoing purchases of monographs and other print titles, though there has been more weeding and less replacement of the latter. Aggregators introduced “big deal” packages in the early 2000s, which expanded the selection and customization of content – ostensibly at a lower cost per title, resource and use – though they also introduced new complications in evaluating those costs. Perry and Weber articulate a somewhat pessimistic (some would say merely realistic) view that “with dramatic changes in the nature of library collections, off‐campus consortial holdings, networked digital resources and multimedia and other graphic materials, it has become near impossible early in the twenty‐first century to evaluate the effectiveness and the adequacy of library collections” (2001). This argument raises the question of whether more traditional concepts of “effectiveness” and “adequacy” should still constitute the chief focus of collection developers, or whether that focus needs recalibration. Traditional approaches to evaluating collection development for print materials – and to a large extent reference desk transactions – have typically been criteria-based, relying on measures like the number of volumes and the depth of subject collections, while in the internet era collection developers have experimented with various electronic usage metrics. In some cases, new technologies and approaches to collection development have evolved in tandem with new librarian roles. For example, the tools made available by publishers and aggregators to librarians responsible for managing subject collections have expanded in both print and electronic mediums, requiring that subject librarians develop new collection development skills, overhaul relevant policies, and even adopt new philosophies of collection management. As Agee notes, contemporary collection evaluation, managed correctly, is of particular help to subject specialists (and likely to generalists as well): it can identify gaps in electronic and print collections, sketch a picture of the historical depth and currency of these resources, and frame collection issues for supervisors and colleagues (2005).

Among the several common methods that have been used to evaluate the “capacity” of a library’s collections, two of the most traditional are the list-checking method, by which the library’s holdings are compared with one or more lists of selected titles, and “conspectus methodology.” List-checking can sometimes be used for electronic resources as well – from eBooks to electronic journals, databases, and media that the library owns or subscribes to – though there can be complications in concatenating various lists from different publishers and vendors. To gain a more comprehensive view of collection capacity that accounts for print and electronic resources, Borin and Yi recommend “[measuring] resources using dollar expenditures in terms of one‐time expenditures or ongoing expenditures” (2011). Moreover, Borin and Yi argue in favor of juxtaposing collection capacity with indicators of patron usage, which they sensibly assert should be hybrid data spanning print and electronic content: “we are in a transition period from primarily print to primarily electronic collections and measures of assessing the collection need to take account of this shifting landscape” (2011). These usage statistics include downloads, accesses and printings of articles, ILL transfers, PDA eBook requests and website analytics, which offer proxies for evaluating patron behavior both at the granular level of an individual article or book and at the vendor or package level. These newer metrics remain congruent with the traditional expectations of “breadth and scholarly integrity” that are used as a yardstick to assess collections.
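As a small illustration of the hybrid approach Borin and Yi describe, the sketch below sets a capacity indicator (dollar expenditure per subject) alongside combined usage indicators (downloads, ILL transfers and PDA requests); the subject names and figures are hypothetical, and a real analysis would draw on the library’s own acquisitions and usage data.

```python
# Juxtaposing a capacity indicator (expenditure per subject) with usage
# indicators (downloads + ILL transfers + PDA requests). All subject
# names and figures are invented for illustration.

expenditures = {"History": 18000, "Biology": 42000, "Engineering": 56000}

usage = {
    "History":     {"downloads": 5200,  "ill": 310, "pda": 95},
    "Biology":     {"downloads": 24100, "ill": 120, "pda": 40},
    "Engineering": {"downloads": 18900, "ill": 260, "pda": 75},
}

for subject, spend in expenditures.items():
    total_uses = sum(usage[subject].values())
    print(f"{subject}: {total_uses} uses against ${spend:,} spent "
          f"(${spend / total_uses:.2f} per use)")
```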

Driven by the escalating costs and volume of content, and the unsustainability of traditional efforts to secure the library’s reputation by purchasing an exhaustive diversity of material across all academic subject areas, collection developers have pivoted towards “just in time” methods to limit excessive and speculative (unrequested) buying. PDA and ILL requests have been shown to increase circulation rates of the items acquired through them, which in turn improves circulation of collections more broadly. This also underscores a kind of built-in evaluative function of these tools: user requests for electronic and print titles lead to more efficient allocation of collection funds. They tip collection developers off to which subjects, formats and publishers are the most popular with patrons, thereby providing guidance on how to better calibrate firm and standing orders, profiles and approval plans. Other innovative solutions emerging from library-affiliated organizations like “Our Research,” as well as from publishers and aggregators, are both reacting to and catalyzing transformative changes. Not long after the advent of PDA purchasing, for example, vendors were eager to partner with libraries in promoting this new method of collection development, as it presented an opportunity to further market their holdings and increase sales. The marketplace of vendors offering electronic and print content has diversified over the past decade, and this has resulted in a good deal of innovation in products, including hybrids of approval plans and e-preferred approval plans, add-on standing and firm orders, next-generation PDA platforms, and custom bundling of media and scholarly content. Gorman and Miller (2001) assert that “collections are now more varied, less stable and less predictable, but also more responsive, more immediate and more demand driven.” Large industry players like GOBI Library Solutions and ProQuest Rialto have attempted to meet collection developers’ needs by optimizing the resulting benefits and offsetting new complications. They’ve done so by offering more sophisticated reporting, hybrid print and e-preferred approval plans, flexible standing and firm ordering of print, media, and subscription resources, database access and, in many cases, integration of PDA, ILL and Open Access (OA) “platform” transactions. An important and influential player in this transformative process is Project COUNTER (“Counting Online Usage of NeTworked Electronic Resources”), an international non-profit consisting of libraries, publishers and vendors which was launched in 2003 to “provide credible usage statistics in a chaotic statistical environment as we move from print to electronic” (2021). COUNTER has been under continuous development ever since, and by general consensus among librarians, publishers and vendors alike, it has been an object of equal parts hope and frustration.

By creating a “Code of Practice” for publishers and vendors to follow when reporting usage and cost statistics to libraries, COUNTER has attempted to provide collection developers with insights that could enable them to make more robust and nuanced use of their budgets. The Code of Practice data also helps librarians evaluate and compare spending across different content providers. Ideally this data fosters better alignment of investments across numerous collection formats, subjects and publishers with patrons’ interests, feedback, and usage history. By illuminating what appears to be working well and what isn’t, COUNTER aims to show collection developers how they might improve their strategic outlook and decision making. However, the problems that COUNTER’s team has attempted to solve have never been trivial, a challenge compounded by the highly dynamic fields of academic publishing and aggregation and by related technological change. One commonplace example is the difficulty of “de-duplicating” the usage statistics for a particular electronic article or journal. Until COUNTER’s Release 5 Code of Practice took effect in 2019, when a patron viewed the HTML version of an article and then clicked a link to open a PDF of that same article, COUNTER – like other methods and tools for measuring this transaction – would record two downloads or database access transactions. This double counting would seriously skew any report detailing the number of times various resources have been accessed. Other problems in trying to calculate CPU and related metrics can occur when patron accesses of ILL content, OA articles and perpetual access entitlements are not comprehensively accounted for. The complex relationships between electronic resources and library collections include numerous classes of OA content, current and forward-looking projections of ILL costs, and allocations for perpetual access entitlements. UnSub, a tool developed by Our Research (mentioned above), allows libraries to upload this cost information to create a more complete and multi-dimensional picture of collection budgets and of the depth and scope of various collections. Working from this data, UnSub allows librarians to preview what would happen to their collections and budgets if they cancelled a big deal package and instead made more of their purchasing decisions through e-preferred approval plans, standing and firm orders, open access journals, and consortial borrowing. While it will no doubt require a significant investment of effort to learn how to ingest data, optimize settings and configure reports, UnSub promises to be a powerful evaluative tool that can save libraries considerable money while increasing the value of their electronic resources to their patrons.
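To illustrate why de-duplication matters, the following simplified sketch (not COUNTER’s actual processing rules) contrasts a naive tally of requests with one de-duplicated by session and article, in the spirit of the distinction Release 5 draws between total and unique item requests; the event log and journal cost are invented.

```python
# Simplified illustration of the HTML-then-PDF double-counting problem and
# its effect on cost per use. This is not COUNTER's actual algorithm; the
# event log and annual cost are hypothetical.

annual_cost = 4800.00

# Each event: (session_id, article_id, format)
events = [
    ("s1", "art-101", "HTML"),
    ("s1", "art-101", "PDF"),   # same article re-opened as a PDF
    ("s1", "art-102", "PDF"),
    ("s2", "art-101", "HTML"),
    ("s2", "art-103", "HTML"),
    ("s2", "art-103", "PDF"),   # double-counted under a naive tally
]

total_requests = len(events)
unique_requests = len({(session, article) for session, article, _ in events})

print(f"Total requests:  {total_requests} -> ${annual_cost / total_requests:.2f} per use")
print(f"Unique requests: {unique_requests} -> ${annual_cost / unique_requests:.2f} per use")
```

Even in this toy example the de-duplicated figure changes the apparent cost per use, which is why the choice of metric matters when comparing packages or justifying cancellations.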

References

Agee, J. (2005). Collection evaluation: a foundation for collection development. Collection Building, Vol. 24 No. 3, pp. 82-85.

Borin, J. & Yi, H. (2008). Indicators for collection evaluation: a new dimensional framework. Collection Building, Vol. 27 No. 4, pp. 136-143. https://doi-org.libaccess.sjlibrary.org/10.1108/01604950810913698

Borin, J. & Yi, H. (2011). Assessing an academic library collection through capacity and usage indicators: testing a multi‐dimensional model. Collection Building, Vol. 30 No. 3, pp. 120-125. https://doi-org.libaccess.sjlibrary.org/10.1108/01604951111146956

Dugan, R., Hernon, P. & Nitecki, D. (2011). Engaging in Evaluation and Assessment Research. Santa Barbara, CA: Libraries Unlimited.

Gorman, M. & Miller, R. (2001). Collection evaluation: new measures for a new environment. Advances in Librarianship, Vol. 25, pp. 67-69.

Singer, C. (2016). Selection and Evaluation of Reference Sources. Reference and Information Services. Santa Barbara, California: Libraries Unlimited.

Evidence for Competency N

Evidence 1

For a course on “Issues in Academic Libraries” (INFO 230, Fall 2020), I wrote a paper entitled “DDA: Limitations and Adaptations” in which I surveyed the development of DDA (Demand Driven Acquisition) – also known as Patron Driven Acquisition (PDA) – platforms as a key component of collection development and a tool for evaluating patron interests. DDA features – along with Interlibrary Loan (ILL) and other consortia-based lending – present actionable data about patrons’ interests and needs. While DDA does not currently support article-level downloads from electronic journals, some experts suggest that this is the direction the technology is heading. Much has been made of how DDA redefines collection-building as a service provided at the point and moment of need, but it’s equally important to acknowledge its role as an evaluative indicator of what library materials are or are not in demand. As described in the essay above, DDA and ILL activity offers a powerful predictor of future patron interest at the individual title, subject and, at times, publisher levels, and as such it provides far more evaluative insight than traditional collection development with its attendant scope and circulation statistics. While DDA is for the most part limited to eBook acquisition, there are signs that it may continue to evolve and encompass new formats. Another potentially promising avenue could involve vendors integrating DDA and ILL data into the spending and performance dashboards and reports they offer to collection developers.

INFO-230-Josh-Simpson-White-Paper-2

Evidence 2

For INFO 202, Information Retrieval System Design, I worked on a team project evaluating, critiquing, and proposing changes to Discovery.com, the website of the Discovery Channel. While the course primarily focused on data structures, vocabulary design and database content, it also included a section on evaluating and designing websites. It was an unexpected assignment for a class on information retrieval system design – at least it seemed that way to me at the beginning of the course. The conceptual bridge was the notion that a website is composed of content organized along database design principles: websites contain data structured into both primary hierarchical and secondary “horizontal” relationships. One assumption carried over from our course readings was that usability and user experience (UX) are both affected by the organization of the website’s architecture and – inseparably – its navigation. Discovery.com had a functional but not fully developed site map, a situation that put the site at a disadvantage in terms of search engine visibility. We created a site map of our own with proposed changes based on Search Engine Optimization principles, written to make the site easier for users to browse, for developers to build and maintain over a long timeline, and for search engines to index. We found a lot to admire about the existing site structure and implied sitemap, but we nevertheless came to believe there was still room for improvement. Our primary goal was to streamline the site design from a user-centered perspective. Towards that end we suggested several changes to the organization and navigation of content on menus based on best practices discussed in class. We felt that the site should put forward a more explicit hierarchy of content while including links to individual pages that didn’t fit within any content section, like the Help and Store pages, on the template used for every page of the site. We also recommended conducting user testing as a basis for new site designs. Of course, we could have done a much deeper dive into how users interact with the site had we had access to its web analytics data; the more that evaluation of the site can be based on traffic data, the greater the chance of beneficial outcomes.
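If the sitemap we proposed were also expressed as an XML sitemap for search engines (one reading of the SEO concern described above), a minimal sketch using Python’s standard library might look like the following; the URLs are placeholders rather than Discovery.com’s actual page structure.

```python
# Hypothetical sketch: generate a bare-bones sitemap.xml from a list of
# placeholder URLs using the standard library. The URLs are illustrative
# and do not reflect Discovery.com's real structure.
import xml.etree.ElementTree as ET

pages = [
    "https://www.example.com/",
    "https://www.example.com/shows/",
    "https://www.example.com/shows/sample-show/",
    "https://www.example.com/help/",
    "https://www.example.com/store/",
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for page in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = page

ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```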

For this group project I wrote the introduction and co-wrote the recommendations section with Nicole Shaw. The entire group proofread and suggested edits for this site / structure data analysis assignment. Once tasks were assigned, the work proceeded smoothly and everyone not only finished their pieces of the project but stayed in close contact with the rest of the group by email. We used Google Docs to collaborate on the document, creating annotations for group members to review later and comment on.

The team was organized as follows:

Josh – Leader

Jaelynn – Scribe

Nicole – Editor

Alejandra – Tech

Susan – Editor

As project leader, I paid close attention to the progress we made on a daily (and at some points hourly) basis, and I frequently got involved in shepherding the work and addressing details in the documents and database design. I tried to strike a balance between delegating tasks and inserting myself into the editorial process, while encouraging my teammates to assert themselves about what they wanted to do and why they’d made certain decisions.

INFO-202-Project-3-Evaluating-Designing-Websites