Competency L

Demonstrate understanding of quantitative and qualitative research methods, the ability to design a research project, and the ability to evaluate and synthesize research literature.

Academic libraries are greatly enriched by the scholarly research and publishing activity of their staff, and this work frequently benefits librarians and others beyond the specific institution as well. Research in library and information science often begins by identifying a knowledge deficit or need, or a problem that has arisen in the course of daily professional practice. The former is typically labeled basic or theoretical research, while the latter is described as applied or operations research. These are not fixed correlations, however, and crossover between the two approaches is common. For example, practitioner-researchers may undertake theoretical research in pursuit of practical solutions, and studies conducted and published by faculty librarians in hopes of solving practical problems can end up contributing to basic disciplinary knowledge. In fact, the burgeoning concept of “reflective practice” in academic librarianship is based on the integration of theory and practice. A further distinction can be drawn between primary and secondary research, terms which describe the sources of the data gathered and analyzed. Primary research entails the creation of new data during the course of a study, while secondary research examines and synthesizes existing primary sources. Regardless of the type of research or the sources it is based on, the research question ultimately directs the inquiry and provides it with boundaries that enable a careful literature review, a sound overall design, and the clear execution of all study processes.

Some common sources of research questions include personal observations and experiences, the scholarly literature, real-world problems, programmatic needs, theory, and sponsors (Brancolini, Kennedy & Luo, 2017). Regardless of the specific motivations of a given study, academic libraries encourage research because of the numerous benefits it brings librarians, the institution, and the professional field itself: 1) it builds on existing data and analysis in ways that add to the knowledge base and generate solutions to salient problems, 2) it encourages career advancement for librarians, 3) it expands cognizance of the research process among library staff, 4) it offers a way to “[look] analytically at librarianship through research [which] fosters growth, curiosity, awareness and promotes new learning” (Crumley & Koufogiannakis, 2002), and 5) it produces a robust framework for practitioner communities to develop and thrive (Haddow & Klobas, 2004). Moreover, an increasing number of accreditation bodies mandate that academic libraries perform self-assessment reviews and create evidence-based policies, and it has become commonplace for academic libraries to use research methods to evaluate their own effectiveness and efficiency in key areas. In the unique environment of the academic library, in which research is foregrounded as a source for enlarging both theoretical and practical knowledge in the name of “evidence-based librarianship” (EBL), many academic librarians become practitioner-researchers: professionals who, in the words of Watson-Boone, “approach projects and problems in ways that yield 1) solutions, 2) an enlarged understanding of their actual field of work – their practice –, and 3) improvements in practice” (2000).

The spark for many studies comes from the realization of an information need or gap, which prompts a search for weaknesses in the relevant literature. The research process involves the “planned and systematic collection, analysis, and interpretation of data” (Powell et al., 2002) with the aim of answering a specific research question and satisfying a research need. Dervin (1998) argues that by defining information needs and uses, researchers attempt to “fill fundamental and pervasive discontinuities or gaps in their movement through time and space, [thus bridging] gaps in your professional or personal reality.” The presence of gaps related to a topic of interest within the scholarly record, institutional policies, or daily activities offers opportunities for researchers to shed light on poorly understood phenomena and/or practical issues. The research topic initiates the exploration of the scholarly literature, which proceeds incrementally and dialectically to narrow the topic down to a focused research question, allowing the literature to fine-tune both the topic and the question. Once a research topic has been chosen and potential research questions have been generated, a more thoroughgoing literature review can begin.

Any rigorous study will be grounded in and informed by the scholarly literature related to its research topic or question. A literature review contextualizes the research question and presents a fuller picture of the study’s background. Without knowing what has already been written about a topic – what data has been collected, analyzed, and published – it is impossible to discover gaps in the literature that new research might address, or to have any confidence that a given research question could make an original contribution to the scholarly record. Moreover, many scholarly articles helpfully point out potential future research directions. Researchers may revise and fine-tune their question based on what they learn during the literature review, especially in its earliest stage, an interchange which is more common and likely more impactful in qualitative studies. Searching and reviewing the relevant literature is thus intertwined with the development of the research question: beyond constituting the legwork of the review itself, examining the literature often unearths information that helps clarify the research question and develop operational definitions.

The process of a literature review typically begins with searching library resources like databases of scholarly journals, though some researchers start with secondary sources like bibliographies, magazine articles, reference sources, and even other literature reviews. A literature review entails discovering, examining, summarizing, evaluating, and synthesizing the relevant literature. It is vital to demonstrate familiarity with landmark studies and with the history of the relevant schools of thought and paradigms, though it is seldom possible to do so in a completely exhaustive way. There are several techniques for identifying and locating pertinent source materials, including following citations both from and to an article of interest. Current journal database offerings from universities and some library consortia allow for exploration of relevant content by subject headings, thesauri, and other classification schemas. The task of synthesizing the literature can be time consuming and effortful, but by synthesizing the most germane and important scholarly work, a theme-based presentation of how a research topic has been investigated by others will reveal how the new study relates to the literature. It is also not uncommon for researchers conducting literature reviews to encounter a paucity of relevant work. In this case the review will likely be shorter, but expanding the conceptual scope to include adjacent topics can be helpful both to readers and to the researchers’ own process of figuring out how their work can span currently unconnected areas of inquiry within their field. A literature review needs to do more than simply summarize existing research; it must show how different works fit together and relate to the topic. Beyond analysis, the review also interprets the literature and organizes it around shared themes.
Dawidowicz (2010) recommends applying six higher-order thinking skills in reviewing and interpreting the literature: analysis, comparison, contrast, evaluation, synthesis, and integration.

The research question and literature review will suggest the most apt research design, methods, and instruments for data collection and analysis. Babbie (2013) observed that research is a language of variables. The research question itself contains the key variables that the study needs to define, relate, and operationalize. These operational variables are project-specific and have meaning only in the context of the study (Brancolini, Kennedy & Luo, 2017). They function as a bridge between the research question and the data collection design, and through these steps researchers create a data collection instrument tailored to the operational variables. Though the research question contains the operational variables that must be included in the data collection instrument, it is the research design that steers the data collection itself as well as its analysis, structures the presentation of the study’s results, and even suggests choices for disseminating the work for maximum impact. The research design is akin to a map which directs the researcher along a pathway of evidence gathering and analysis; the strategy it offers brings together the different components of the study in a functional and logical way.

Many sources testify that the most popular research methods are surveys, in-depth interviews, and content analysis. Within academic library research more specifically, popular research designs include cross-sectional design, longitudinal design, experimental/quasi-experimental design, and case studies. Other key factors to consider within the scope of the study include the size of the study sample; staffing and funding requirements; time constraints on performing the literature review and collecting and analyzing data; and the steps necessary to guarantee that the research meets Institutional Review Board (IRB) ethics standards, including obtaining participants’ consent.

With an understanding of the variables of a research question and the best sampling method, a grounding in the scholarly literature, and careful attention to the ultimate purpose of the study, an empirically based decision can be made about what kind of data should be collected and analyzed: quantitative (numeric and statistical) or qualitative (narrative and thematic). Of the popular methods named above, surveys typically hinge on quantitative data, while in-depth interviews and content analysis are qualitative methods. Another complementary way to categorize research questions is as descriptive, relationship-based, or causality-based, categories which can guide the researcher at several stages of the study. In terms of data type, descriptive questions are often qualitative in nature, while relationship and causality questions deal with quantitative data and experimental studies. Quantitative data lends itself to research topics that enjoy widespread coverage in the scholarly literature; in fact, it is common for researchers to draw upon quantitative data created, analyzed, and published by other scholars – along with other elements of those studies, like scholarly citations and data collection instruments. A quantitative study might even be based on analyzing existing statistics. While this can be true of qualitative data as well, it is rarely done, because the richly nuanced, narrative nature of qualitative data makes it difficult to replicate or “plug into” a new study. Quantitative studies are frequently used to present statistic-based reports to stakeholders in service of funding and grant requests. They can also provide valuable information about the library’s user communities which can be applied to collection management and other programs and services.

The types of data and analysis needed to answer the research question also have implications for which type of population sampling should be used: probability or non-probability. In general, quantitative studies, especially ones that seek to understand a larger population or data set in statistical or numeric terms, will use probability sampling. Probability sampling randomizes the selection of participants, giving every individual in a large “sample frame” the same likelihood of being picked, thus allowing extrapolation from study samples to characterize groups or data sets that cannot be sampled in full. Qualitative data, on the other hand, is best served by non-probability sampling, which offers a way to collect narrative and thematic data from smaller groups in which each participant supplies detailed feedback, using methods like in-depth interviews and focus groups, among others. In addition to the aforementioned consent forms and IRB review, a pilot test using a small selection from the target population should be conducted before the main sampling process begins; the pilot test provides critical feedback for improving the data collection instrument. A related imperative is to examine the cultural sensitivity and relevance of the instrument by asking if there is anything culturally irrelevant, inappropriate, or objectionable in it (UC Davis Center for Evaluation and Research, 2016).
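The core idea of probability sampling can be sketched in a few lines of code. The following minimal Python example, using an invented sample frame of patron IDs (all names here are hypothetical), illustrates simple random sampling, the most basic probability method, in which every member of the frame has an equal chance of selection:

```python
import random

def simple_random_sample(sample_frame, n, seed=None):
    """Draw a simple random sample of size n: every member of the
    frame has an equal likelihood of being picked."""
    rng = random.Random(seed)  # seeding makes the draw reproducible
    return rng.sample(sample_frame, n)

# Hypothetical sample frame: 500 patron IDs
frame = [f"patron-{i:04d}" for i in range(1, 501)]

# Select 25 study participants at random
participants = simple_random_sample(frame, 25, seed=42)
print(len(participants))  # 25
```

Non-probability approaches like convenience or purposive sampling would instead select participants by availability or by researcher judgment, so no analogous randomization step applies.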

Quantitative data lends itself to studies which hinge on objective measurements and aim to perform mathematical, statistical, or numerical analysis with the goal of explaining an observed phenomenon or generalizing the analysis across groups of people. Some quantitative studies propose and test a hypothesis, but generally speaking a hypothesis is only necessary when the study is predictive and seeks to evaluate the strength of relationships in the data. Some of the most common quantitative data collection tools are polls, questionnaires, and surveys, all “structured” instruments which are well disposed to statistical analyses like frequency distribution, mode, and correlation coefficients, among others. Quantitative data is often presented in tables, charts, figures, and other visual formats. The variables presented in a quantitative data collection instrument like a survey questionnaire are exhaustive and mutually exclusive: all possible responses are covered, and a respondent cannot select more than one valid choice for a given item. When using composite variables with multiple indicators, researchers cluster scored or weighted attributes together to create an overall score, allowing for more sophisticated measurement of behaviors or phenomena. In all cases the variables are expressed as numbers, whether of the nominal, ordinal, interval, or ratio type.
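As a small illustration of the simpler analyses named above, the following Python sketch computes a frequency distribution and the mode for a hypothetical set of Likert-scale (ordinal) responses to a single survey item; the data is invented for the example:

```python
from collections import Counter
from statistics import mode

# Hypothetical ordinal (Likert-scale, 1-5) responses to one survey item
responses = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3, 4, 1, 4, 5, 3]

frequency = Counter(responses)    # frequency distribution: value -> count
modal_response = mode(responses)  # the most common response

print(dict(sorted(frequency.items())))  # {1: 1, 2: 1, 3: 3, 4: 6, 5: 4}
print(modal_response)                   # 4
```

Richer analyses such as correlation coefficients follow the same pattern of feeding structured, numeric responses into standard statistical functions.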

The primary purpose of quantitative research methods is to establish the relationship between an independent variable and a dependent variable. This relationship can be identified as a correlation or association between variables (“descriptive” design) or as a causal one (“experimental” design), and making this determination should precede further research design decisions. Quantitative research design tries to isolate the influence of other variables so that the effect of the independent variable can be observed as discretely as possible. In an experimental design, the independent variable has only two attributes: presence and absence. Changes in the dependent variable that occur when these attributes are toggled demonstrate the influence of the independent variable. In this scenario, an experimental group created through probability sampling is exposed to the experimental stimulus (the independent variable) while a control group (created through the same probability sampling process) is not, and the researcher measures the dependent variable before and after the experimental stimulus. While experimental design is very useful for homing in on the effect of the independent variable, it doesn’t explain why that effect behaves as it does. Moreover, preventing the dependent variable from being affected by variables other than the independent variable under investigation is often a non-trivial task. The term “internal validity” refers to the robustness of an experimental study’s design in terms of how well it eliminates the influence of other variables on the dependent variable. According to de Vaus (2001), “where the logic and structure of a design are faulty and fail to eliminate competing explanations of results then the design lacks internal validity.” It is imperative to ascertain whether the research design’s internal validity is susceptible to corruption by the unintended influence of variables outside the ones being measured.
This can occur, among other instances, when the experimental and control groups don’t closely mirror one another in demographic variables like age, education, gender or socio-economic status.
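The pretest/posttest logic of this design can be illustrated with a toy calculation. In this hypothetical Python sketch (all scores invented), the stimulus effect is estimated as the mean change in the experimental group minus the mean change in the control group, which subtracts out influences that affected both groups alike:

```python
from statistics import mean

# Hypothetical pretest/posttest scores on the dependent variable.
# The experimental group receives the stimulus; the control group does not.
experimental = {"pre": [60, 55, 70, 65], "post": [72, 68, 80, 75]}
control      = {"pre": [62, 58, 68, 64], "post": [63, 59, 70, 64]}

def mean_change(group):
    """Average shift in the dependent variable from pretest to posttest."""
    return mean(group["post"]) - mean(group["pre"])

# Estimated effect of the stimulus: 11.25 - 1.0 = 10.25
effect = mean_change(experimental) - mean_change(control)
print(effect)  # 10.25
```

A real study would of course pair this arithmetic with a significance test and with checks that the two groups were comparable at pretest.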

Qualitative research methods elicit textual or narrative data in a variety of media which can be analyzed, synthesized, and presented in a variety of ways. Trochim, Arora, and Donnelly (2004) suggest that the primary purposes for using qualitative research are generating new theories or hypotheses, achieving a deep understanding of the issues, and developing detailed stories to describe a phenomenon. Brancolini, Kennedy & Luo write that, in qualitative studies, “research questions are typically developed or refined in all stages of a reflexive and interactive inquiry journey” (2017). Qualitative research questions don’t need to be as tightly focused as quantitative ones, since excessive focus may hinder the understanding and analysis of qualitative data. When there is scant coverage of a particular topic in the scholarly literature, qualitative methods can provide a helpful way to explore new research avenues. Qualitative research design is based on data collection instruments that are unstructured compared to quantitative instruments; in-depth “semi-structured” interviews, focus groups, and content analysis are some of the most common qualitative research methods. Qualitative analysis isn’t used to measure variables in terms of frequency, correlation, causation, etc., as quantitative analysis is, but rather to capture a richer depiction of the variations found in responses to a given variable; qualitative data collection instruments capture a much fuller view of participants’ thoughts than multiple-choice surveys can. Qualitative methods are utilized when it is more important to capture the experiences, opinions, and attitudes of a study population – or the concepts and other descriptive information, whether latent or explicit, that can be extracted by a “content analysis” of a body of texts. Content analysis can apply to various media: not simply texts but also recordings of various types, data, chat transcripts, emails, social media posts and comments, etc.
These are all examples of “unobtrusive research,” as they require no interaction with study participants. In general, the mode of qualitative analysis is thematic: discerning and “coding” (labeling) themes as well as their nuances, contexts, relations, and even contradictions. In the case of a collection of in-depth interviews, themes and patterns unearthed from the interviews are synthesized via codes to answer the research question; codes used in the relevant scholarly literature can sometimes be borrowed and repurposed for new studies, and the interviews themselves can later be used to create new surveys for larger-scale studies. As with the attention paid to “internal validity” in quantitative research methods, researchers conducting qualitative studies must take steps to ensure that they are studying precisely what they intended to study, and that their results can be repeated. Transparency in methods as well as explicit documentation of study procedures is indispensable.
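The tallying step of thematic coding can be sketched briefly. In this hypothetical Python example, invented codes applied to excerpts from three interview transcripts are aggregated to surface the most frequent themes (the transcripts, codes, and counts are all illustrative assumptions, not data from any real study):

```python
from collections import Counter

# Hypothetical codes applied to excerpts from three interview transcripts
coded_transcripts = {
    "interview-1": ["remote-work", "skill-transfer", "isolation"],
    "interview-2": ["skill-transfer", "technology-comfort", "remote-work"],
    "interview-3": ["remote-work", "isolation", "skill-transfer"],
}

# Tally how often each code appears across all transcripts
code_counts = Counter(
    code for codes in coded_transcripts.values() for code in codes
)
print(code_counts["remote-work"])  # 3
print(code_counts["isolation"])    # 2
```

In focused coding, narrower codes would then be grouped under overarching ones and the tally repeated; the counting itself is trivial, while the human work lies in assigning and refining the codes.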

Designing the interview guides used in conducting interviews, moderating focus groups, and performing content analysis all depend on practicing and cultivating particular skills. The interpersonal engagement implicit in qualitative methods – at least in the first two cases – has no analog in quantitative methods. Interviews and focus groups aim to foster detail-rich conversation and as such allow a certain amount of freedom: the researcher prompts participants to encourage both topical replies and open-ended reflection, all while remaining “on script.” The end result is ideally the documentation of a range of perspectives held by participants and a detailed description of the variations in their responses. Repeating ideas, themes, categories, and patterns can be detected within these documents and analyzed. In what is called focused coding, the codes that label these themes go through multiple rounds of refinement by grouping more narrowly focused codes under more important, overarching ones. Keene & Zimmermann describe the goal in almost poetic terms: “synthesis is about organizing the different pieces to create a mosaic, a meaning, a beauty greater than the sum of each shiny piece” (2007). This synthesizing activity demands non-numerical human input, and to guard against subjective judgement creeping into the process, more than one data analyst/synthesist should code the same qualitative data set. For these and other reasons, qualitative data analysis is often very time consuming, demanding on schedules, and expensive. Because of these costs, Brancolini, Kennedy & Luo remind us that “if recorded content already exists that could serve your research purpose, you should use it … you ought to always consider whether unobtrusive methods such as content analysis are a possibility” (2017).

A final consideration in the lifecycle of the research process is dissemination and publishing. Though answering a research question brings considerable value to the researcher’s own work – suggesting solutions to practical problems, grounding decisions in data and tested hypotheses, and bridging gaps in their professional or personal reality – the results could also be helpful to other scholars researching similar questions, broadening the impact of the work in both theoretical and practice-based domains. These domains include, among others, libraries with similar users, as well as researchers who come across the published study and make use of its literature review, data collection instrument, sampling design, and data analysis to improve their own research practice and output. Submitting finished papers to three to five journals that publish work related to the relevant research topics increases the odds of publication. Identifying conferences that likewise cover fields of interest, and becoming familiar with their style, format, and other requirements, is the most direct path to presenting new scholarly work to an interested audience.

Evidence for Competency L

Evidence 1

For a class on Applied Research Methods (INFO 285) we worked on a semester-long Research Proposal project which covered all the major steps of the research process except the actual execution of the study: formulating a research question; producing a comprehensive literature review; outlining a research design, including operationalizing the question’s variables; developing the data collection instrument; choosing a sampling method; creating a pilot test along with consent forms and submitting to the IRB for ethics evaluation; administering the data collection instrument and gathering responses; analyzing and/or synthesizing the captured data; and publishing and presenting the work in journals and at conferences. In my research proposal I decided to investigate a phenomenon related to new academic librarians using a qualitative method: in-depth interviews. After introducing the broad topic of my study – the effect of coronavirus library closures on recent graduates of distance learning MLIS programs – I narrowed my focus to this question: “How do new academic librarians currently working from home perceive the impact of the skills they gained in an online MLIS program on their job performance?” My hypothesis, though not directly addressed in the study, was that new academic librarians who received an MLIS degree from an online program were better adapted to working from home during the coronavirus shutdown.

During the literature review, I discovered that the body of scholarly work closely related to my research question was very thin. I decided to expand the lens of my review to include non-scholarly articles and to survey adjacent themes and topics. While somewhat challenging, my exploration of the latter presented a rich if disparate context for my study. The key operational variables in my research question were qualitative, intended to elicit longer responses: how participants perceive the *impact of skills* they learned in a distance learning graduate degree program on their *job performance* during the pandemic. I go on to describe a convenience sampling process that uses listservs and the email addresses of academic library directors to find qualified participants. My data collection instrument was an in-depth interview, pilot tested with an employee of the SJSU King Library who fits my sample profile: a recent distance learning MLIS graduate (2019) now working from home as a User Experience designer at the King Library. The interview guide asks a series of questions about skills acquired during the MLIS program and whether and how they have been helpful in the respondent’s current academic library job. I describe the rest of the study process in detail: documenting all changes to the research design and methods, collecting and analyzing/synthesizing the data, my qualifications for running the study, and the study’s significance, which is to enrich the understanding of students, librarians, and educators about the links between professional academic librarianship in the online context and skills-based (as opposed to knowledge-based) learning in online MLIS programs.

INFO-285-Research-Proposal-Josh-Simpson

Evidence 2

I wrote an extensive if perhaps unconventional literature review for Information Communities (INFO 200). Instead of finding, summarizing, and synthesizing the relevant scholarly literature in a narrative format, we created a “Literature Review Matrix.” We still searched for and identified multiple studies (in my case, eight) related to our semester-long project on a specific community’s information needs and behavior. I focused on the online community of community gardeners – a community within another community. The “discovery” process most often begins with searching the databases of scholarly journals that most colleges subscribe to; using a combination of techniques like filtered (faceted) search, exploring subject headings, and following citations, scholars and students work to locate further relevant articles. I was unable to find many peer-reviewed scholarly articles on my topic, so I broadened my reach to include Google searches, non-scholarly work, and the wealth of grassroots urban agriculture websites in the US, as well as regional advocacy groups, university Extension programs, and even global NGOs.

The matrix was created to describe nine aspects of each study. The first six, labeled “What They Say,” are Authors/Date, Main Ideas, Theoretical/Conceptual Framework, Methods, Results & Analysis, and Conclusions. The last three are labeled “What I Say” and include Comments (Your Analysis), Future Research Implications, and Information Professional Practices Implications. These elements are of course found in most literature reviews, though conventionally blended into a narrative that outlines a time frame, acknowledges landmark works in terms of impact and citations, establishes relationships between studies, describes recurrent themes, etc. Breaking the articles down into the first six intrinsic elements is a great way to grasp the structure of studies and the purpose of all their constituent parts. The last three elements extrapolate from the studies’ components to a somewhat more subjective analysis of future research implications and implications for information professional practices. The “future research implications” element is especially helpful for someone beginning a research study on a related topic, as it may identify gaps in the literature and point to opportunities for contributing to the scholarly record in that area of inquiry.

INFO-200-Literature-Review-Matrix