Competency E

Design, query, and evaluate information retrieval systems.

The skillful use of information retrieval (IR) tools is a central activity for information professionals and our clients. The design and optimization of IR systems and tools is a dynamic area of development and innovation for experts in data-related fields within or adjacent to librarianship. Manning et al. provide the following frequently cited definition of information retrieval: “finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers)” (2008). IR can be further described as a tool for finding particular documents or records within a larger set of documents. It is also commonly construed to include searching the text contents of those documents and records, a process known as “full text” or natural language search. Both the design of IR systems and the use of IR tools entail compromises (in the latter case, often unbeknownst to the searcher) between the “precision” or “discrimination” of search results – how close they are to being highly relevant – and a broader “recall” or “aggregation” of all potentially relevant documents. Judith Weedman argues that “maximizing the ability to discriminate between relevant and irrelevant documents is the goal of information system design … a system should retrieve all and only the relevant information” (2019). It’s a truism of design in general, but perhaps more pointedly so when designing IR systems, that many decisions and compromises must be made on the way to a solution that is necessarily imperfect. On the positive side, librarians and their colleagues working in IR are uniquely qualified, by way of our social knowledge about our users and our insight into classification systems, to “explore information needs at many levels and to create systems to meet them” (Weedman, 2019).
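To make the precision/recall trade-off concrete, the short Python sketch below computes both measures for a single query; the document IDs and relevance judgments are invented for illustration.

```python
# A minimal sketch of precision and recall for one query.
# Document IDs and relevance judgments are invented examples.

retrieved = {"d1", "d2", "d3", "d4", "d5"}  # documents the system returned
relevant = {"d2", "d4", "d6", "d7"}         # documents a judge deemed relevant

true_positives = retrieved & relevant

# Precision: what fraction of the retrieved documents are relevant?
precision = len(true_positives) / len(retrieved)

# Recall: what fraction of all relevant documents were retrieved?
recall = len(true_positives) / len(relevant)

print(f"precision = {precision:.2f}")  # 2/5 = 0.40
print(f"recall    = {recall:.2f}")     # 2/4 = 0.50
```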

Taking a high-level view, a successful IR transaction that delivers the right content to someone with an information need depends on a complex combination of factors: the design of the IR system, the presence of relevant contents that the system represents, and a searcher’s ability to create focused search queries – often through iterative improvements in phrasing and the use of special operators and filters. Knowledge of the inner workings of the tool and the details of its representations is crucial for effective search, discovery, and evaluation of results. Using a well-designed IR system, a searcher who is conversant with it should be able to quickly retrieve relevant records, documents, and data.

Because many users are not well versed in IR tools beyond the “Googlized” search box of the major web search engines, they struggle to acquire the techniques needed for digital research in tools like the scholarly journal databases used in academia. While librarians can and do play a major role in teaching patrons how to improve their search methods – from using optional fields and special operators to evaluating the authority of the results – IR developers have also embraced user-centered design. “Requirements analysis” in IR design entails understanding the goals for the system, ascertaining whether those goals or the market will evolve over time, and hypothesizing about the future needs of administrators and users. It is vital to learn as much as you can about the target audience and users of your IR system and their reasons for using it. There are many ways to solicit input from these users, both before development begins and as it progresses: by sharing text outlines and visual mockups of core functions, through questionnaires and focus groups, and even through ethnographic studies. The cost and complexity of researching users should be matched to the cost and complexity of the IR project.

Across the spectrum of information retrieval systems, a core design feature is the representation of a system’s contents. Without representation there is no access. Representation can take many different forms, from simple analog descriptions to sophisticated digital metadata schemas: labelling folders in a file cabinet, website design (structuring content through navigation elements and tagging), database search (using subject headings and classification frameworks to enable faceted and thesaurus-based queries), enterprise web search portals (user-centric search of private intranet networks), and extremely popular natural language web search tools like Google, which are built on indexes of massive collections of website content. In the broadest sense of the word, all representations of documents and objects are types of metadata. The National Information Standards Organization (NISO) offers a helpful general definition of metadata: “structured information that describes, explains, locates, or otherwise makes it easier to retrieve, use, or manage an information resource” (2004). The consideration and selection of the type(s) of metadata to deploy have extensive design consequences. Some common forms of “subject representation” (to name one of the most important “fields” or headings in the representation of a document or object) are controlled vocabularies, natural language (full text or parts such as title, abstract, and author), and classification (“hierarchical or faceted categories that allow you to place documents with other documents about the same topic”) (Weedman, 2019).
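As a small illustration of NISO’s definition, the hypothetical record below represents a resource with a handful of Dublin Core elements; all field values are invented, and the subject-matching helper is only a sketch of how a field-restricted search queries the representation rather than the resource itself.

```python
# A hypothetical surrogate record using a few Dublin Core elements;
# all field values are invented for illustration.
record = {
    "dc:title": "Urban Beekeeping: A Field Guide",
    "dc:creator": "Alvarez, Maria",
    "dc:subject": ["Bee culture", "Urban agriculture"],  # controlled terms
    "dc:date": "2018",
    "dc:description": "An introduction to keeping honey bees in cities.",
}

# A field-restricted search matches against the metadata representation,
# not the full text of the underlying resource.
def matches_subject(rec, term):
    return any(term.lower() in s.lower() for s in rec["dc:subject"])

print(matches_subject(record, "urban"))  # True
```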

The ABC-CLIO Online Dictionary for Library and Information Science defines “controlled vocabulary” as an “established list of preferred terms from which a cataloger or indexer must select when assigning subject headings or descriptors in a bibliographic record, to indicate the content of the work in a library catalog, index, or bibliographic database” (2004). This kind of metadata is familiar to librarians who have done cataloging (online or analog) and accessioning work using MARC/BIBFRAME, LCC, or Dublin Core-based records. The ongoing cost of growing and maintaining an IR system that depends on human input of controlled vocabulary, classification terms, or other metadata will be much greater than that of the algorithmic approach of the large web search engines, which use spiders to crawl online content and build their indexes. That said, controlled vocabularies and classification systems used in conjunction with databases have the advantage of handling more complex queries and delivering more precise results to searchers. Spanning a range of applications from WorldCat’s database of library collections to electronic journal “discovery” tools like Ex Libris’s Primo and the finding aids used in online archives, a variety of controlled vocabularies and classification systems are well established and appear to have a robust future in IR design, even in the face of rapid changes in standards and schemas.
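The sketch below illustrates the basic mechanics of a controlled vocabulary: entry terms (“use for” references) funnel a searcher’s natural language toward a single preferred descriptor. The mapping is a toy example in the spirit of LCSH-style cross-references, not an actual authority file.

```python
# A minimal sketch of controlled-vocabulary lookup: entry terms map to a
# single preferred descriptor, so catalogers and searchers converge on the
# same heading. The mapping is a toy example, not a real authority file.
USE_FOR = {
    "cars": "Automobiles",
    "autos": "Automobiles",
    "motor vehicles": "Automobiles",
    "felines": "Cats",
}

def preferred_term(user_term: str) -> str:
    """Return the preferred descriptor for a searcher's term, if one exists."""
    return USE_FOR.get(user_term.lower(), user_term)

print(preferred_term("Cars"))  # -> "Automobiles"
```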

Natural language search engines dominate web searching to such an extent that their alpha exemplar has become a verb. An extreme simplification of the architecture of these IR tools would at a minimum describe their use of crawlers/spiders to discover millions of websites and create representations of their contents. These representations are added to a very large index file that is distributed across many data centers for faster local retrieval. The index allows for extremely rapid searching of the metadata and text contents of a significant portion of the entire public web. Many dozens of signals are used in the “black box” of algorithms at the heart of the system to determine the most relevant and authoritative matches to a search query. Natural language search allows users to search the full text of a web page in addition to its metadata. Over the years, accuracy in situations where the system can’t make an exact word or phrase match between user query and indexed text has improved dramatically; search engines are now able to detect synonyms and other semantic parallels, approaching the point where the algorithms understand the meaning of search terms and indexed web content. As with any IR system, there are pros and cons to this architecture in terms of precision and recall and the mix of relevant and irrelevant results. Many search engines have added Boolean operators, filters, and access to special content (books, scholarly work, videos, etc.) that in some cases behave similarly to controlled vocabulary searches of databases. The IR design of full text search stands alongside the efforts made by web developers to design and optimize websites so that they achieve strong positioning in the search engine’s results pages.
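The inverted index is the core data structure behind this architecture: each term maps to the set of documents containing it, so a query can be answered without scanning every page. The toy corpus below is invented, and real engines layer ranking signals on top of this basic structure.

```python
from collections import defaultdict

# Toy corpus: doc ID -> text. Documents are invented for illustration.
docs = {
    1: "library catalogs organize books",
    2: "search engines index the web",
    3: "web crawlers discover library sites",
}

# Build the inverted index: term -> set of doc IDs containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

print(sorted(index["web"]))      # [2, 3]
print(sorted(index["library"]))  # [1, 3]
```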

Social tagging, a feature often found in social media platforms and other “web 2.0” applications like blogs, presents a phenomenon some call “emergent vocabularies” “because they take shape as users converge on certain preferred tags for particular topics” (Weedman, 2019). The collective process of social tagging can result in the kind of efficient organization of content that a controlled vocabulary offers, at lower cost and with even more relevance and utility for users. It also incorporates natural language search insofar as it matches query text against tags embedded in a webpage’s content in addition to scanning the tags present in metadata.
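One way to observe an emergent vocabulary is to count the tags different users apply to the same resource and watch them converge on a few preferred terms; the tags below are invented for illustration.

```python
from collections import Counter

# Hypothetical tags applied by different users to the same photo.
tags = ["nyc", "newyork", "new_york", "nyc", "manhattan", "nyc", "newyork"]

# The most common tags emerge as the de facto preferred terms.
print(Counter(tags).most_common(2))  # [('nyc', 3), ('newyork', 2)]
```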

Creating a search query inherently involves balancing on the fine line between precision and recall. Too broad a search will retrieve less relevant documents, while an overly specific query can exclude potentially useful results. This is a non-trivial problem for both IR designers and professional searchers (including librarians), who need a concrete understanding of the role of relevance in search tools. As Weedman frames the dilemma: “the key to evaluating an information system is relevance … but relevance is in the eye of the beholder” (2019). Sophisticated searchers like information professionals must be well versed in different search tactics and strategies, and in evaluating search results. Multiple searches using different tools and resources – database queries, web searches, retrieving bibliographic metadata, performing citation lookups, etc. – along with close attention to the exact subject-related language used in various systems and in their contents, are frequently required to locate the right information. Familiarity with a topic and its relation to other knowledge domains is a huge benefit to an IR user, but the patient and resourceful searcher lacking this knowledge can overcome that deficit in time. While there are legitimate reasons to be optimistic about the progress of IR systems, Loren Doyle offered a wry observation back in 1963 that reverberates even today: “‘Relevance’ will serve its purpose, but will decline as the realization slowly comes that an individual’s information need is so complex that it cannot be accurately stated in a simple request.”
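Boolean operators make this balancing act tangible: AND narrows the result set (favoring precision) while OR broadens it (favoring recall). The tiny term-to-document index below is invented for illustration.

```python
# AND narrows results (favoring precision); OR broadens them (favoring
# recall). The tiny index below is an invented example.
index = {
    "web": {2, 3},
    "library": {1, 3},
    "catalogs": {1},
}

def boolean_and(index, *terms):
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

def boolean_or(index, *terms):
    hits = set()
    for t in terms:
        hits |= index.get(t, set())
    return hits

print(boolean_and(index, "web", "library"))  # {3} -- narrower, higher precision
print(boolean_or(index, "web", "library"))   # {1, 2, 3} -- broader, higher recall
```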

It’s worth noting that just as content in an IR system must be represented by some kind of metadata that makes it accessible to users, queries in turn are representations of a user’s information needs or gaps, which the IR system must interpret; the search process brings the two representations together. A fair evaluation of IR system performance would necessarily consider both the functionality of the representation of contents and the skillfulness and “literacy” of the queries. IR design has increasingly embraced users’ perspectives and user experience-based presentation, and the preferences of users have become an important evaluation tool (Zhou & Yao, 2010). Other related and important IR evaluation questions include how easily users can find the features they need, such as controlled vocabulary, special operators, and site navigation elements. A user-centered interface and an effective, elegant representation of content through metadata together form an ideal blueprint for successful IR.
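One simplified flavor of preference-based evaluation (a sketch in the spirit of, not a reproduction of, Zhou and Yao’s method) is to count how many of a user’s pairwise preferences a ranked result list satisfies; the ranking and preference pairs below are invented.

```python
# Count how many of a user's pairwise preferences a ranked list satisfies.
# The ranking and preference pairs are invented for illustration.

ranking = ["d3", "d1", "d4", "d2"]          # system's ranked results
preferences = [("d1", "d2"), ("d3", "d4"),  # (preferred, less preferred)
               ("d4", "d1")]

# Map each document to its rank position (lower index = ranked higher).
position = {doc: i for i, doc in enumerate(ranking)}

satisfied = sum(1 for better, worse in preferences
                if position[better] < position[worse])

print(f"{satisfied}/{len(preferences)} user preferences satisfied")  # 2/3
```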

References

Reitz, J. (2004). Online Dictionary for Library and Information Science. ABC-CLIO. Retrieved March 2, 2021, from https://products.abc-clio.com/ODLIS/odlis_c.aspx

Manning, C. D., Raghavan, P., & Schütze, H. (2008). Introduction to Information Retrieval. Cambridge University Press. Retrieved from https://nlp.stanford.edu/IR-book/

National Information Standards Organization. (2004). Understanding Metadata. Retrieved March 4, 2021, from https://www.lter.uaf.edu/metadata_files/UnderstandingMetadata.pdf

Weedman, J. (2019). Designing for search. In V. Tucker (Ed.), Information Retrieval: Designing, Querying, and Evaluating Information Systems (Edition 6.0, pp. 118-155). (n.p.)

Zhou, B., & Yao, Y. (2010). Evaluating information retrieval system performance based on user preference. Journal of Intelligent Information Systems, 34(3), 227-248.

Evidence for Competency E

Evidence 1

I was a member of a team project for INFO 202-17 (Information Retrieval System Design) in which we evaluated various elements of the Discovery Channel’s website that affect the “representation” and “retrievability” of the site’s multimedia content. These elements included the labels used for internal links, menus, and breadcrumb navigation, and both the human-readable and XML sitemaps optimized for search engine crawlers. We made recommendations to redesign the site’s information architecture in order to organize the content hierarchically for better “findability” (for both browsers and algorithms) and improved overall user experience. After proposing myriad small changes and a few broader ones, we argued in favor of conducting user testing with the new design. Ideally, a focus group could be convened with a mix of regular users of the website and people who aren’t familiar with it. After collecting detailed qualitative data, follow-up internal design discussions could take this valuable information into account before making final decisions about changing particular elements. I wrote the Site Map of Existing Site & Discussion section, co-wrote the introduction, and helped edit the whole document.

INFO-204-02-Organizational-Analysis