
Chapter 2. Research Design

Getting started

When I teach undergraduates qualitative research methods, the final product of the course is a “research proposal” that incorporates all they have learned and enlists that knowledge in an original design that addresses a particular research question. I highly recommend you think about designing your own research study as you progress through this textbook. Even if you don’t have a study in mind yet, it can be a helpful exercise. But how to start? How can you design a research study before you even know what research looks like? This chapter will serve as a brief overview of the research design process to orient you to what will be coming in later chapters. Think of it as a “skeleton” of what you will read in more detail in later chapters. Ideally, you will read this chapter both now (in sequence) and later during your reading of the remainder of the text. Do not worry if you have questions the first time you read this chapter. Many things will become clearer as the text advances and as you gain a deeper understanding of all the components of good qualitative research. This is just a preliminary map to get you on the right road.


Research Design Steps

Before you even get started, you will need to have a broad topic of interest in mind. [1] In my experience, students can confuse this broad topic with the actual research question, so it is important to clearly distinguish the two. And the place to start is the broad topic. It might be, as was the case with me, working-class college students. But what about working-class college students? What’s it like to be one? Why are there so few compared to others? How do colleges assist (or fail to assist) them? What interested me was something I could barely articulate at first and went something like this: “Why was it so difficult and lonely to be me?” And by extension, “Did others share this experience?”

Once you have a general topic, reflect on why this is important to you. Sometimes we connect with a topic and we don’t really know why. Even if you are not willing to share the real underlying reason you are interested in a topic, it is important that you know the deeper reasons that motivate you. Otherwise, it is quite possible that at some point during the research, you will find yourself turned around facing the wrong direction. I have seen it happen many times. The reason is that the research question is not the same thing as the general topic of interest, and if you don’t know the reasons for your interest, you are likely to design a study answering a research question that is beside the point—to you, at least. And this means you will be much less motivated to carry your research to completion.

Researcher Note

Why do you employ qualitative research methods in your area of study? What are the advantages of qualitative research methods for studying mentorship?

Qualitative research methods are a huge opportunity to increase access, equity, inclusion, and social justice. Qualitative research allows us to engage and examine the uniquenesses/nuances within minoritized and dominant identities and our experiences with these identities. Qualitative research allows us to explore a specific topic, and through that exploration, we can link history to experiences and look for patterns or offer up a unique phenomenon. There’s such beauty in being able to tell a particular story, and qualitative research is a great mode for that! For our work, we examined the relationships we typically use the term mentorship for but didn’t feel that was quite the right word. Qualitative research allowed us to pick apart what we did and how we engaged in our relationships, which then allowed us to more accurately describe what was unique about our mentorship relationships, which we ultimately named liberationships (McAloney and Long 2021). Qualitative research gave us the means to explore, process, and name our experiences; what a powerful tool!

How do you come up with ideas for what to study (and how to study it)? Where did you get the idea for studying mentorship?

Coming up with ideas for research, for me, is kind of like Googling a question I have, not finding enough information, and then deciding to dig a little deeper to get the answer. The idea to study mentorship actually came up in conversation with my mentorship triad. We were talking in one of our meetings about our relationship—kind of meta, huh? We discussed how we felt that mentorship was not quite the right term for the relationships we had built. One of us asked what was different about our relationships and mentorship. This all happened when I was taking an ethnography course. During the next session of class, we were discussing auto- and duoethnography, and it hit me—let’s explore our version of mentorship, which we later went on to name liberationships (McAloney and Long 2021). The idea and questions came out of being curious and wanting to find an answer. As I continue to research, I see opportunities in questions I have about my work or during conversations that, in our search for answers, end up exposing gaps in the literature. If I can’t find the answer already out there, I can study it.

—Kim McAloney, PhD, College Student Services Administration Ecampus coordinator and instructor

When you have a better idea of why you are interested in what it is that interests you, you may be surprised to learn that the obvious approaches to the topic are not the only ones. For example, let’s say you think you are interested in preserving coastal wildlife. And as a social scientist, you are interested in policies and practices that affect the long-term viability of coastal wildlife, especially around fishing communities. It would be natural then to consider designing a research study around fishing communities and how they manage their ecosystems. But when you really think about it, you realize that what interests you the most is how people whose livelihoods depend on a particular resource act in ways that deplete that resource. Or, even deeper, you contemplate the puzzle, “How do people justify actions that damage their surroundings?” Now, there are many ways to design a study that gets at that broader question, and not all of them are about fishing communities, although that is certainly one way to go. Maybe you could design an interview-based study that includes and compares loggers, fishers, and desert golfers (those who golf in arid lands that require a great deal of wasteful irrigation). Or design a case study around one particular example where resources were completely used up by a community. Without knowing what it is you are really interested in, what motivates your interest in a surface phenomenon, you are unlikely to come up with the appropriate research design.

These first stages of research design are often the most difficult, but have patience. Taking the time to consider why you are going to go through a lot of trouble to get answers will prevent a lot of wasted energy in the future.

There are distinct reasons for pursuing particular research questions, and it is helpful to distinguish between them.  First, you may be personally motivated.  This is probably the most important and the most often overlooked.   What is it about the social world that sparks your curiosity? What bothers you? What answers do you need in order to keep living? For me, I knew I needed to get a handle on what higher education was for before I kept going at it. I needed to understand why I felt so different from my peers and whether this whole “higher education” thing was “for the likes of me” before I could complete my degree. That is the personal motivation question. Your personal motivation might also be political in nature, in that you want to change the world in a particular way. It’s all right to acknowledge this. In fact, it is better to acknowledge it than to hide it.

There are also academic and professional motivations for a particular study.  If you are an absolute beginner, these may be difficult to find. We’ll talk more about this when we discuss reviewing the literature. Simply put, you are probably not the only person in the world to have thought about this question or issue and those related to it. So how does your interest area fit into what others have studied? Perhaps there is a good study out there of fishing communities, but no one has quite asked the “justification” question. You are motivated to address this to “fill the gap” in our collective knowledge. And maybe you are really not at all sure of what interests you, but you do know that [insert your topic] interests a lot of people, so you would like to work in this area too. You want to be involved in the academic conversation. That is a professional motivation and a very important one to articulate.

Practical and strategic motivations are a third kind. Perhaps you want to encourage people to take better care of the natural resources around them. If this is also part of your motivation, you will want to design your research project in a way that might have an impact on how people behave in the future. There are many ways to do this, one of which is using qualitative research methods rather than quantitative research methods, as the findings of qualitative research are often easier to communicate to a broader audience than the results of quantitative research. You might even be able to engage the community you are studying in the collecting and analyzing of data, something taboo in quantitative research but actively embraced and encouraged by qualitative researchers. But there are other practical reasons, such as getting “done” with your research in a certain amount of time or having access (or no access) to certain information. There is nothing wrong with considering constraints and opportunities when designing your study. Or maybe one of the practical or strategic goals is about learning competence in this area so that you can demonstrate the ability to conduct interviews and focus groups with future employers. Keeping that in mind will help shape your study and prevent you from getting sidetracked using a technique that you are less invested in learning about.

STOP HERE for a moment

I recommend you write a paragraph (at least) explaining your aims and goals. Include a sentence about each of the following: personal/political goals, professional/academic goals, and practical/strategic goals. Think through how all of the goals are related and can be achieved by this particular research study. If they can’t, have a rethink. Perhaps this is not the best way to go about it.

You will also want to be clear about the purpose of your study. “Wait, didn’t we just do this?” you might ask. No! Your goals are not the same as the purpose of the study, although they are related. You can think about purpose as lying on a continuum from “theory” to “action” (figure 2.1). Sometimes you are doing research to discover new knowledge about the world, while other times you are doing a study because you want to measure an impact or make a difference in the world.

Figure 2.1. The purpose continuum: basic research, applied research, summative evaluation, formative evaluation, action research.

Basic research involves research that is done for the sake of “pure” knowledge—that is, knowledge that, at least at this moment in time, may not have any apparent use or application. Often, and this is very important, knowledge of this kind is later found to be extremely helpful in solving problems. So one way of thinking about basic research is that it is knowledge for which no use is yet known but will probably one day prove to be extremely useful. If you are doing basic research, you do not need to argue its usefulness, as the whole point is that we just don’t know yet what this might be.

Researchers engaged in basic research want to understand how the world operates. They are interested in investigating a phenomenon to get at the nature of reality with regard to that phenomenon. The basic researcher’s purpose is to understand and explain (Patton 2002:215).

Basic research is interested in generating and testing hypotheses about how the world works. Grounded Theory is one approach to qualitative research methods that exemplifies basic research (see chapter 4). Most academic journal articles publish basic research findings. If you are working in academia (e.g., writing your dissertation), the default expectation is that you are conducting basic research.

Applied research in the social sciences is research that addresses human and social problems. Unlike in basic research, the researcher expects that the research will help contribute to resolving a problem, if only by identifying its contours, history, or context. In my experience, most students have this as their baseline assumption about research. Why do a study if not to make things better? But this is a common mistake. Students and their committee members are often working with default assumptions here—the former thinking about applied research as their purpose, the latter thinking about basic research: “The purpose of applied research is to contribute knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment. While in basic research the source of questions is the tradition within a scholarly discipline, in applied research the source of questions is in the problems and concerns experienced by people and by policymakers” (Patton 2002:217).

Applied research is less geared toward theory in two ways. First, its questions do not derive from previous literature. For this reason, applied research studies have much more limited literature reviews than those found in basic research (although they make up for this by having much more “background” about the problem). Second, it does not generate theory in the same way as basic research does. The findings of an applied research project may not be generalizable beyond the boundaries of this particular problem or context. The findings are more limited. They are useful now but may be less useful later. This is why basic research remains the default “gold standard” of academic research.

Evaluation research is research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. We already know the problems, and someone has already come up with solutions. There might be a program, say, for first-generation college students on your campus. Does this program work? Are first-generation students who participate in the program more likely to graduate than those who do not? These are the types of questions addressed by evaluation research. There are two types of research within this broader frame, one more action-oriented than the other. In summative evaluation, an overall judgment about the effectiveness of a program or policy is made. Should we continue our first-gen program? Is it a good model for other campuses? Because the purpose of such summative evaluation is to measure success and to determine whether this success is scalable (capable of being generalized beyond the specific case), quantitative data is more often used than qualitative data. In our example, we might have “outcomes” data for thousands of students, and we might run various tests to determine if the better outcomes of those in the program are statistically significant so that we can generalize the findings and recommend similar programs elsewhere. Qualitative data in the form of focus groups or interviews can then be used for illustrative purposes, providing more depth to the quantitative analyses. In contrast, formative evaluation attempts to improve a program or policy (to help “form” or shape its effectiveness). Formative evaluations rely more heavily on qualitative data—case studies, interviews, focus groups. The findings are meant not to generalize beyond the particular but to improve this program. If you are a student seeking to improve your qualitative research skills and you do not care about generating basic research, formative evaluation studies might be an attractive option for you to pursue, as there are always local programs that need evaluation and suggestions for improvement. Again, be very clear about your purpose when talking through your research proposal with your committee.
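To make the quantitative side of this example concrete, here is a minimal sketch of the kind of significance test a summative evaluation might run on graduation outcomes. The group sizes, counts, and the choice of a chi-square test are illustrative assumptions, not figures from any real program.

```python
# Hypothetical summative-evaluation check: are first-generation students who
# participate in the program more likely to graduate than those who do not?
# All counts below are invented for illustration.
from scipy.stats import chi2_contingency

# Rows: program participants / non-participants; columns: graduated / did not.
observed = [
    [430, 70],    # 500 participants, 86% graduated
    [1010, 290],  # 1,300 non-participants, about 78% graduated
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Graduation rates differ beyond what chance would easily explain.")
else:
    print("No statistically significant difference detected.")
```

In a real evaluation, a test like this would sit alongside the qualitative material (focus groups, interviews) that helps explain why the program does or does not work.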

Action research takes a further step beyond evaluation, even formative evaluation, to being part of the solution itself. This is about as far from basic research as one could get and definitely falls beyond the scope of “science,” as conventionally defined. The distinction between action and research is blurry, the research methods are often in constant flux, and the only “findings” are specific to the problem or case at hand and often are findings about the process of intervention itself. Rather than evaluate a program as a whole, action research often seeks to change and improve some particular aspect that may not be working—maybe there is not enough diversity in an organization or maybe women’s voices are muted during meetings and the organization wonders why and would like to change this. In a further step, participatory action research , those women would become part of the research team, attempting to amplify their voices in the organization through participation in the action research. As action research employs methods that involve people in the process, focus groups are quite common.

If you are working on a thesis or dissertation, chances are your committee will expect you to be contributing to fundamental knowledge and theory (basic research). If your interests lie more toward the action end of the continuum, however, it is helpful to talk to your committee about this before you get started. Knowing your purpose in advance will help avoid misunderstandings during the later stages of the research process!

The Research Question

Once you have written your paragraph and clarified your purpose and truly know that this study is the best study for you to be doing right now, you are ready to write and refine your actual research question. Know that research questions are often moving targets in qualitative research, that they can be refined up to the very end of data collection and analysis. But you do have to have a working research question at all stages. This is your “anchor” when you get lost in the data. What are you addressing? What are you looking at and why? Your research question guides you through the thicket. It is common to have a whole host of questions about a phenomenon or case, both at the outset and throughout the study, but you should be able to pare it down to no more than two or three sentences when asked. These sentences should both clarify the intent of the research and explain why this is an important question to answer. More on refining your research question can be found in chapter 4.

Chances are, you will have already done some prior reading before coming up with your interest and your questions, but you may not have conducted a systematic literature review. This is the next crucial stage to be completed before venturing further. You don’t want to start collecting data and then realize that someone has already beaten you to the punch. A review of the literature that is already out there will let you know (1) if others have already done the study you are envisioning; (2) if others have done similar studies, which can help you out; and (3) what ideas or concepts are out there that can help you frame your study and make sense of your findings. More on literature reviews can be found in chapter 9.

In addition to reviewing the literature for similar studies to what you are proposing, it can be extremely helpful to find a study that inspires you. This may have absolutely nothing to do with the topic you are interested in but is written so beautifully or organized so interestingly or otherwise speaks to you in such a way that you want to post it somewhere to remind you of what you want to be doing. You might not understand this in the early stages—why would you find a study that has nothing to do with the one you are doing helpful? But trust me, when you are deep into analysis and writing, having an inspirational model in view can help you push through. If you are motivated to do something that might change the world, you probably have read something somewhere that inspired you. Go back to that original inspiration and read it carefully and see how they managed to convey the passion that you so appreciate.

At this stage, you are still just getting started. There are a lot of things to do before setting forth to collect data! You’ll want to consider and choose a research tradition and a set of data-collection techniques that both help you answer your research question and match all your aims and goals. For example, if you really want to help migrant workers speak for themselves, you might draw on feminist theory and participatory action research models. Chapters 3 and 4 will provide you with more information on epistemologies and approaches.

Next, you have to clarify your “units of analysis.” What is the level at which you are focusing your study? Often, the unit in qualitative research methods is individual people, or “human subjects.” But your units of analysis could just as well be organizations (colleges, hospitals) or programs or even whole nations. Think about what it is you want to be saying at the end of your study—are the insights you are hoping to make about people or about organizations or about something else entirely? A unit of analysis can even be a historical period! Every unit of analysis will call for a different kind of data collection and analysis and will produce different kinds of “findings” at the conclusion of your study. [2]

Regardless of what unit of analysis you select, you will probably have to consider the “human subjects” involved in your research. [3] Who are they? What interactions will you have with them—that is, what kind of data will you be collecting? Before answering these questions, define your population of interest and your research setting. Use your research question to help guide you.

Let’s use an example from a real study. In Geographies of Campus Inequality, Benson and Lee (2020) list three related research questions: “(1) What are the different ways that first-generation students organize their social, extracurricular, and academic activities at selective and highly selective colleges? (2) how do first-generation students sort themselves and get sorted into these different types of campus lives; and (3) how do these different patterns of campus engagement prepare first-generation students for their post-college lives?” (3).

Note that we are jumping into this a bit late, after Benson and Lee have described previous studies (the literature review) and what is known about first-generation college students and what is not known. They want to know about differences within this group, and they are interested in ones attending certain kinds of colleges because those colleges will be sites where academic and extracurricular pressures compete. That is the context for their three related research questions. What is the population of interest here? First-generation college students . What is the research setting? Selective and highly selective colleges . But a host of questions remain. Which students in the real world, which colleges? What about gender, race, and other identity markers? Will the students be asked questions? Are the students still in college, or will they be asked about what college was like for them? Will they be observed? Will they be shadowed? Will they be surveyed? Will they be asked to keep diaries of their time in college? How many students? How many colleges? For how long will they be observed?

Recommendation

Take a moment and write down suggestions for Benson and Lee before continuing on to what they actually did.

Have you written down your own suggestions? Good. Now let’s compare those with what they actually did. Benson and Lee drew on two sources of data: in-depth interviews with sixty-four first-generation students and survey data from a preexisting national survey of students at twenty-eight selective colleges. Let’s ignore the survey for our purposes here and focus on those interviews. The interviews were conducted between 2014 and 2016 at a single selective college, “Hilltop” (a pseudonym). They employed a “purposive” sampling strategy to ensure an equal number of male-identifying and female-identifying students as well as equal numbers of White, Black, and Latinx students. Each student was interviewed once. Hilltop is a selective liberal arts college in the northeast that enrolls about three thousand students.

How did your suggestions match up to those actually used by the researchers in this study? Is it possible your suggestions were too ambitious? Beginning qualitative researchers often make that mistake. You want a research design that is both effective (it matches your question and goals) and doable. You will never be able to collect data from your entire population of interest (unless your research question is really so narrow as to be relevant to very few people!), so you will need to come up with a good sample. Define the criteria for this sample, as Benson and Lee did when deciding to interview an equal number of students by gender and race categories. Define the criteria for your sample setting too. Hilltop is typical of selective colleges. That was a research choice made by Benson and Lee. For more on sampling and sampling choices, see chapter 5.
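If you wanted to operationalize a purposive quota like Benson and Lee’s (equal numbers by gender identity and by race) against a pool of volunteers, the bookkeeping might look something like the sketch below. The pool, field names, and quota size are all hypothetical; this is not the authors’ actual procedure.

```python
# Hypothetical purposive sampling with equal quotas per gender-by-race cell.
import random
from collections import defaultdict

def purposive_sample(pool, keys=("gender", "race"), per_cell=6, seed=1):
    """Return up to `per_cell` volunteers from each combination of `keys`."""
    random.seed(seed)
    cells = defaultdict(list)
    for person in pool:
        cells[tuple(person[k] for k in keys)].append(person)

    sample = []
    for cell, members in cells.items():
        random.shuffle(members)
        if len(members) < per_cell:
            print(f"Warning: only {len(members)} volunteer(s) for {cell}")
        sample.extend(members[:per_cell])
    return sample

# A toy pool of volunteers who responded to a recruitment email.
pool = [
    {"id": 1, "gender": "woman", "race": "Black"},
    {"id": 2, "gender": "man", "race": "Latinx"},
    {"id": 3, "gender": "woman", "race": "white"},
    # ...more volunteers would be listed here...
]

print(len(purposive_sample(pool)), "students selected")
```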

Benson and Lee chose to employ interviews. If you also would like to include interviews, you have to think about what will be asked in them. Most interview-based research involves an interview guide, a set of questions or question areas that will be asked of each participant. The research question helps you create a relevant interview guide. You want to ask questions whose answers will provide insight into your research question. Again, your research question is the anchor you will continually come back to as you plan for and conduct your study. It may be that once you begin interviewing, you find that people are telling you something totally unexpected, and this makes you rethink your research question. That is fine. Then you have a new anchor. But you always have an anchor. More on interviewing can be found in chapter 11.
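If it helps to picture what an interview guide can look like before you reach chapter 11, here is one hypothetical fragment for the first-generation student example, written as a small data structure so that each question area stays tied to the research question. The wording and topics are invented; this is not Benson and Lee’s actual guide.

```python
# A hypothetical interview-guide fragment: question areas, opening questions,
# and optional probes, all anchored to the research question.
interview_guide = {
    "research_question": (
        "How do first-generation students organize their social, "
        "extracurricular, and academic activities?"
    ),
    "question_areas": [
        {
            "area": "Academic life",
            "opening": "Walk me through a typical week of classes.",
            "probes": [
                "How did you decide on your major?",
                "Where do you usually study, and with whom?",
            ],
        },
        {
            "area": "Extracurricular involvement",
            "opening": "What clubs or activities are you part of, if any?",
            "probes": [
                "How did you first hear about them?",
                "What keeps you coming back, or what kept you away?",
            ],
        },
    ],
}

for block in interview_guide["question_areas"]:
    print(block["area"], "->", block["opening"])
```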

Let’s imagine Benson and Lee also observed college students as they went about doing the things college students do, both in the classroom and in the clubs and social activities in which they participate. They would have needed a plan for this. Would they sit in on classes? Which ones and how many? Would they attend club meetings and sports events? Which ones and how many? Would they participate themselves? How would they record their observations? More on observation techniques can be found in both chapters 13 and 14.

At this point, the design is almost complete. You know why you are doing this study, you have a clear research question to guide you, you have identified your population of interest and research setting, and you have a reasonable sample of each. You also have put together a plan for data collection, which might include drafting an interview guide or making plans for observations. And so you know exactly what you will be doing for the next several months (or years!). To put the project into action, there are a few more things necessary before actually going into the field.

First, you will need to make sure you have any necessary supplies, including recording technology. These days, many researchers use their phones to record interviews. Second, you will need to draft a few documents for your participants. These include informed consent forms and recruiting materials, such as posters or email texts, that explain what this study is in clear language. Third, you will draft a research protocol to submit to your institutional review board (IRB) ; this research protocol will include the interview guide (if you are using one), the consent form template, and all examples of recruiting material. Depending on your institution and the details of your study design, it may take weeks or even, in some unfortunate cases, months before you secure IRB approval. Make sure you plan on this time in your project timeline. While you wait, you can continue to review the literature and possibly begin drafting a section on the literature review for your eventual presentation/publication. More on IRB procedures can be found in chapter 8 and more general ethical considerations in chapter 7.

Once you have approval, you can begin!

Research Design Checklist

Before data collection begins, do the following:

  • Write a paragraph explaining your aims and goals (personal/political, practical/strategic, professional/academic).
  • Define your research question; write two to three sentences that clarify the intent of the research and why this is an important question to answer.
  • Review the literature for similar studies that address your research question or similar research questions; think laterally about some literature that might be helpful or illuminating but is not exactly about the same topic.
  • Find a written study that inspires you—it may or may not be on the research question you have chosen.
  • Consider and choose a research tradition and set of data-collection techniques that (1) help answer your research question and (2) match your aims and goals.
  • Define your population of interest and your research setting.
  • Define the criteria for your sample (How many? Why these? How will you find them, gain access, and acquire consent?).
  • If you are conducting interviews, draft an interview guide.
  •  If you are making observations, create a plan for observations (sites, times, recording, access).
  • Acquire any necessary technology (recording devices/software).
  • Draft consent forms that clearly identify the research focus and selection process.
  • Create recruiting materials (posters, email, texts).
  • Apply for IRB approval (proposal plus consent form plus recruiting materials).
  • Block out time for collecting data.

Notes

[1] At the end of the chapter, you will find a "Research Design Checklist" that summarizes the main recommendations made here.

[2] For example, if your focus is society and culture, you might collect data through observation or a case study. If your focus is individual lived experience, you are probably going to be interviewing some people. And if your focus is language and communication, you will probably be analyzing text (written or visual) (Marshall and Rossman 2016:16).

[3] You may not have any "live" human subjects. There are qualitative research methods that do not require interactions with live human beings; see chapter 16, "Archival and Historical Sources." But for the most part, you are probably reading this textbook because you are interested in doing research with people. The rest of the chapter will assume this is the case.

Key Terms

Ethnography: One of the primary methodological traditions of inquiry in qualitative research, ethnography is the study of a group or group culture, largely through observational fieldwork supplemented by interviews. It is a form of fieldwork that may include participant-observation data collection. See chapter 14 for a discussion of deep ethnography.

Case study: A methodological tradition of inquiry and research design that focuses on an individual case (e.g., a setting, institution, or sometimes an individual) in order to explore its complexity, history, and interactive parts. As an approach, it is particularly useful for obtaining a deep appreciation of an issue, event, or phenomenon of interest in its particular context.

Purpose: The controlling force in research; can be understood as lying on a continuum from basic research (knowledge production) to action research (effecting change).

Theory: In its most basic sense, a theory is a story we tell about how the world works that can be tested with empirical evidence. In qualitative research, we use the term in a variety of ways, many of which are different from how it is used by quantitative researchers. Although some qualitative research can be described as “testing theory,” it is more common to “build theory” from the data using inductive reasoning, as done in Grounded Theory. There are so-called “grand theories” that seek to integrate a whole series of findings and stories into an overarching paradigm about how the world works, and much smaller theories or concepts about particular processes and relationships. Theory can even be used to explain particular methodological perspectives or approaches, as in Institutional Ethnography, which is both a way of doing research and a theory about how the world works.

Basic research: Research that is interested in generating and testing hypotheses about how the world works.

Grounded Theory: A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction. This approach was pioneered by the sociologists Glaser and Strauss (1967). The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties: “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences. Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation” (36).

Qualitative research: An approach to research that is “multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives” (Denzin and Lincoln 2005:2). Contrast with quantitative research.

Applied research: Research that contributes knowledge that will help people to understand the nature of a problem in order to intervene, thereby allowing human beings to more effectively control their environment.

Evaluation research: Research that is designed to evaluate or test the effectiveness of specific solutions and programs addressing specific social problems. There are two kinds: summative and formative.

Summative evaluation: Research in which an overall judgment about the effectiveness of a program or policy is made, often for the purpose of generalizing to other cases or programs. Generally uses qualitative research as a supplement to primary quantitative data analyses. Contrast formative evaluation research.

Formative evaluation: Research designed to improve a program or policy (to help “form” or shape its effectiveness); relies heavily on qualitative research methods. Contrast summative evaluation research.

Action research: Research carried out at a particular organizational or community site with the intention of effecting change; often involves research subjects as participants of the study. See also participatory action research.

Participatory action research: Research in which both researchers and participants work together to understand a problematic situation and change it for the better.

Unit of analysis: The level of the focus of analysis (e.g., individual people, organizations, programs, neighborhoods).

Population of interest: The large group of interest to the researcher. Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken. For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.” In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample. In qualitative research, defining the population is conceptually important for clarity.

Pseudonym: A fictional name assigned to give anonymity to a person, group, or place. Pseudonyms are important ways of protecting the identity of research participants while still providing a “human element” in the presentation of qualitative data. There are ethical considerations to be made in selecting pseudonyms; some researchers allow research participants to choose their own.

Informed consent form: A requirement for research involving human participants; the documentation of informed consent. In some cases, oral consent or assent may be sufficient, but the default standard is a single-page, easy-to-understand form that both the researcher and the participant sign and date. Under federal guidelines, all researchers "shall seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence. The information that is given to the subject or the representative shall be in language understandable to the subject or the representative. No informed consent, whether oral or written, may include any exculpatory language through which the subject or the representative is made to waive or appear to waive any of the subject's rights or releases or appears to release the investigator, the sponsor, the institution, or its agents from liability for negligence" (21 CFR 50.20). Your IRB office will be able to provide a template for use in your study.

Institutional review board (IRB): An administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated. The IRB is charged with the responsibility of reviewing all research involving human participants. The IRB is concerned with protecting the welfare, rights, and privacy of human subjects. The IRB has the authority to approve, disapprove, monitor, and require modifications in all research activities that fall within its jurisdiction as specified by both the federal regulations and institutional policy.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Types of qualitative research designs


Researchers often conduct qualitative studies to gain a detailed understanding of a particular topic through a small, focused sample. Qualitative research methods can also help explain why something observed in a larger quantitative study is happening.

To determine whether qualitative research is the best choice for your study, let’s look at the different types of qualitative research design.


What are qualitative research designs?

Qualitative research designs are research methods that collect and analyze non-numerical data. The research uncovers why or how a particular behavior or occurrence takes place. The information is usually subjective and in a written format instead of numerical.

Researchers may use interviews, focus groups, case studies, journaling, and open-ended questions to gather in-depth information. Qualitative research designs can help you understand users' concepts, develop a hypothesis, or add context to data from a quantitative study.

Characteristics of qualitative research design

Most often, qualitative data answers how or why something occurs. Certain characteristics are usually present in all qualitative research designs to ensure accurate data. 

The most common characteristics of qualitative research design include the following:

Natural environment

It’s best to collect qualitative research as close to the subject’s original environment as possible to encourage natural behavior and accurate insights.

Empathy is key

Qualitative researchers collect the best data when they’re in sync with their users’ concerns and motivations. They can play into natural human psychology by combining open-ended questioning and subtle cues.

They may mimic body language, adopt the users’ terminology, and use pauses or trailing sentences to encourage their participants to fill in the blanks. The more empathic the interviewer, the purer the data.

Participant selection

Qualitative research depends on the meaning obtained from participants instead of the meaning conveyed in similar research or studies. To increase research accuracy, you choose participants randomly from carefully chosen groups of potential participants.

Different research methods or multiple data sources

To gain in-depth knowledge, qualitative research designs often rely on multiple research methods within the same group. 

Emergent design

Qualitative research constantly evolves, meaning the initial study plan might change after you collect data. This evolution might result in changes in research methods or the introduction of a new research problem.

Inductive reasoning

Since qualitative research seeks in-depth meaning, you need complex reasoning to get the right results. Qualitative researchers build categories, patterns, and themes from separate data sets to form a complete conclusion.

Interpretive data

Once you collect the data, you need to read between the lines rather than just noting what your participant said. Qualitative research is unique as we can attach actions to feedback. 

If a user says they love the look of your design but haven’t completed any tasks, it’s up to you to interpret this as a failed test, even with their positive sentiments.  

Holistic account

To paint a large picture of an issue and potential solutions, a qualitative researcher works to develop a complex description of the research problem. You can avoid a narrow cause-and-effect perspective by describing the problem’s wider perspectives. 

When to use qualitative research design

Qualitative research aims to get a detailed understanding of a particular topic. To accomplish this, you’ll typically use small focus groups to gather in-depth data from varied perspectives. 

This approach is only effective for some types of study. For instance, a qualitative approach wouldn’t work for a study that seeks to understand a statistically relevant finding.

When determining if a qualitative research design is appropriate, remember the goal of qualitative research is understanding the “why.”

Qualitative research design gathers in-depth information that stands on its own. It can also answer the “why” of a quantitative study or be a precursor to forming a hypothesis. 

You can use qualitative research in these situations:

  • Developing a hypothesis for testing in a quantitative study
  • Identifying customer needs
  • Developing a new feature
  • Adding context to the results of a quantitative study
  • Understanding the motivations, values, and pain points that guide behavior

Difference between qualitative and quantitative research design

Qualitative and quantitative research designs gather data, but that's where the similarities end. Consider the difference between quality and quantity. Both are useful in different ways.

Qualitative research gathers in-depth information to answer how or why . It uses subjective data from detailed interviews, observations, and open-ended questions. Most often, qualitative data is thoughts, experiences, and concepts.

In contrast, quantitative research designs gather large amounts of objective data that you can quantify mathematically. You typically express quantitative data in numbers or graphs, and you use it to test or confirm hypotheses.

Qualitative research designs generally share the same goals, but there are various ways to achieve them. Researchers may use one or more of the following approaches in qualitative research.

Historical study

This is where you use extensive information about people and events in the past to draw conclusions about the present and future.

Phenomenology

Phenomenology investigates a phenomenon, activity, or event using data from participants' perspectives. Often, researchers use a combination of methods.

Grounded theory

Grounded theory uses interviews and existing data to build a theory inductively.

Ethnography

Researchers immerse themselves in the target participant's environments to understand goals, cultures, challenges, and themes with ethnography .

Case study

A case study is where you use multiple data sources to examine a person, group, community, or institution. Participants must share a connection to the research question you’re studying.

Advantages and disadvantages of qualitative research

All qualitative research design types share the common goal of obtaining in-depth information. Achieving this goal generally requires extensive data collection methods that can be time-consuming. As such, qualitative research has advantages and disadvantages. 

Advantages

Natural settings

Since you can collect data closer to an authentic environment, it offers more accurate results.  

The ability to paint a picture with data

Quantitative studies don't always reveal the full picture. With multiple data collection methods, you can expose the motivations and reasons behind data.

Flexibility

Analysis processes aren't set in stone, so you can adapt the process as ideas or patterns emerge.

Generation of new ideas

Using open-ended responses can uncover new opportunities or solutions that weren't part of your original research plan.

Small sample sizes

You can generate meaningful results with small groups.

Disadvantages

Potentially unreliable

A natural setting can be a double-edged sword. The inability to attach findings to anything statistically relevant can make data more difficult to quantify. 

Subjectivity

Since the researcher plays a vital role in collecting and interpreting data, qualitative research is subject to the researcher's skills. For example, they may miss a cue that changes some of the context of the quotes they collected.

Labor-intensive

You generally collect qualitative data through manual processes like extensive interviews, open-ended questions, and case studies.

Qualitative research designs allow researchers to provide an in-depth analysis of why specific behaviors or events occur. They can offer fresh insights, generate new ideas, or add context to statistics from quantitative studies. Depending on your needs, qualitative data might be a great way to gain the information your organization needs to move forward.


Qualitative Research: Characteristics, Design, Methods & Examples


Qualitative research is a type of research methodology that focuses on gathering and analyzing non-numerical data to gain a deeper understanding of human behavior, experiences, and perspectives.

It aims to explore the “why” and “how” of a phenomenon rather than the “what,” “where,” and “when” typically addressed by quantitative research.

Unlike quantitative research, which focuses on gathering and analyzing numerical data for statistical analysis, qualitative research involves researchers interpreting data to identify themes, patterns, and meanings.

Qualitative research can be used to:

  • Gain deep contextual understandings of the subjective social reality of individuals
  • To answer questions about experience and meaning from the participant’s perspective
  • To develop hypotheses or theory by first exploring what is important, before quantitative research begins

Examples of qualitative research questions include: 

  • How does stress influence young adults’ behavior?
  • What factors influence students’ school attendance rates in developed countries?
  • How do adults interpret binge drinking in the UK?
  • What are the psychological impacts of cervical cancer screening in women?
  • How can mental health lessons be integrated into the school curriculum? 

Characteristics 

Naturalistic setting

Individuals are studied in their natural setting to gain a deeper understanding of how people experience the world. This enables the researcher to understand a phenomenon close to how participants experience it. 

Naturalistic settings provide valuable contextual information to help researchers better understand and interpret the data they collect.

The environment, social interactions, and cultural factors can all influence behavior and experiences, and these elements are more easily observed in real-world settings.

Reality is socially constructed

Qualitative research aims to understand how participants make meaning of their experiences – individually or in social contexts. It assumes there is no objective reality and that the social world is interpreted (Yilmaz, 2013). 

The primacy of subject matter 

The primary aim of qualitative research is to understand the perspectives, experiences, and beliefs of individuals who have experienced the phenomenon selected for research rather than the average experiences of groups of people (Minichiello, 1990).

An in-depth understanding is attained since qualitative techniques allow participants to freely disclose their experiences, thoughts, and feelings without constraint (Tenny et al., 2022). 

Variables are complex, interwoven, and difficult to measure

Factors such as experiences, behaviors, and attitudes are complex and interwoven, so they cannot be reduced to isolated variables, making them difficult to measure quantitatively.

However, a qualitative approach enables participants to describe what, why, or how they were thinking/ feeling during a phenomenon being studied (Yilmaz, 2013). 

Emic (insider’s point of view)

The phenomenon being studied is centered on the participants’ point of view (Minichiello, 1990).

Emic is used to describe how participants interact, communicate, and behave in the research setting (Scarduzio, 2017).

Interpretive analysis

In qualitative research, interpretive analysis is crucial in making sense of the collected data.

This process involves examining the raw data, such as interview transcripts, field notes, or documents, and identifying the underlying themes, patterns, and meanings that emerge from the participants’ experiences and perspectives.

Collecting Qualitative Data

There are four main research design methods used to collect qualitative data: observations, interviews, focus groups, and ethnography.

Observations

This method involves watching and recording phenomena as they occur in nature. Observation can be divided into two types: participant and non-participant observation.

In participant observation, the researcher actively participates in the situation/events being observed.

In non-participant observation, the researcher is not an active part of the observation and tries not to influence the behaviors they are observing (Busetto et al., 2020). 

Observations can be covert (participants are unaware that a researcher is observing them) or overt (participants are aware of the researcher’s presence and know they are being observed).

However, awareness of an observer’s presence may influence participants’ behavior. 

Interviews

Interviews give researchers a window into the world of a participant by seeking their account of an event, situation, or phenomenon. They are usually conducted on a one-to-one basis and can be distinguished according to the level at which they are structured (Punch, 2013).

Structured interviews involve predetermined questions and sequences to ensure replicability and comparability. However, they are unable to explore emerging issues.

Informal interviews consist of spontaneous, casual conversations which are closer to the truth of a phenomenon. However, information is gathered using quick notes made by the researcher and is therefore subject to recall bias. 

Semi-structured interviews have a flexible structure, phrasing, and placement so emerging issues can be explored (Denny & Weckesser, 2022).

The use of probing questions and clarification can lead to a detailed understanding, but semi-structured interviews can be time-consuming and subject to interviewer bias. 

Focus groups 

Similar to interviews, focus groups elicit a rich and detailed account of an experience. However, focus groups are more dynamic since participants with shared characteristics construct this account together (Denny & Weckesser, 2022).

A shared narrative is built between participants to capture a group experience shaped by a shared context. 

The researcher takes on the role of a moderator, who will establish ground rules and guide the discussion by following a topic guide to focus the group discussions.

Typically, focus groups have 4-10 participants as a discussion can be difficult to facilitate with more than this, and this number allows everyone the time to speak.

Ethnography

Ethnography is a methodology used to study a group of people’s behaviors and social interactions in their environment (Reeves et al., 2008).

Data are collected using methods such as observations, field notes, or structured/ unstructured interviews.

The aim of ethnography is to provide detailed, holistic insights into people’s behavior and perspectives within their natural setting. In order to achieve this, researchers immerse themselves in a community or organization. 

Due to the flexibility and real-world focus of ethnography, researchers are able to gather an in-depth, nuanced understanding of people’s experiences, knowledge and perspectives that are influenced by culture and society.

In order to develop a representative picture of a particular culture/ context, researchers must conduct extensive field work. 

This can be time-consuming as researchers may need to immerse themselves into a community/ culture for a few days, or possibly a few years.

Qualitative Data Analysis Methods

Different methods can be used for analyzing qualitative data. The researcher chooses based on the objectives of their study. 

The researcher plays a key role in the interpretation of data, making decisions about the coding, theming, decontextualizing, and recontextualizing of data (Starks & Trinidad, 2007). 

Grounded theory

Grounded theory is a qualitative method specifically designed to inductively generate theory from data. It was developed by Glaser and Strauss in 1967 (Glaser & Strauss, 2017).

 This methodology aims to develop theories (rather than test hypotheses) that explain a social process, action, or interaction (Petty et al., 2012). To inform the developing theory, data collection and analysis run simultaneously. 

There are three key types of coding used in grounded theory: initial (open), intermediate (axial), and advanced (selective) coding. 

Throughout the analysis, memos should be created to document methodological and theoretical ideas about the data. Data should be collected and analyzed until data saturation is reached and a theory is developed. 

Content analysis

Content analysis was first used in the early twentieth century to analyze textual materials such as newspapers and political speeches.

Content analysis is a research method used to identify and analyze the presence and patterns of themes, concepts, or words in data (Vaismoradi et al., 2013). 

This research method can be used to analyze data in different formats, which can be written, oral, or visual. 

The goal of content analysis is to develop themes that capture the underlying meanings of data (Schreier, 2012). 

Qualitative content analysis can be used to validate existing theories, support the development of new models and theories, and provide in-depth descriptions of particular settings or experiences.

The following six steps provide a guideline for how to conduct qualitative content analysis; a small illustrative code sketch follows the list.
  • Define a Research Question : To start content analysis, a clear research question should be developed.
  • Identify and Collect Data : Establish the inclusion criteria for your data. Find the relevant sources to analyze.
  • Define the Unit or Theme of Analysis : Categorize the content into themes. Themes can be a word, phrase, or sentence.
  • Develop Rules for Coding your Data : Define a set of coding rules to ensure that all data are coded consistently.
  • Code the Data : Follow the coding rules to categorize data into themes.
  • Analyze the Results and Draw Conclusions : Examine the data to identify patterns and draw conclusions in relation to your research question.
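To make the coding steps above concrete, here is a minimal Python sketch of rule-based coding, in which hypothetical keyword rules assign text segments to themes. The themes, keywords, and segments are illustrative assumptions rather than any published coding scheme; real content analysis involves far more careful rule development and interpretation.

```python
# Minimal sketch of steps 4-6: rule-based coding of text segments into themes.
# The coding rules and example data below are hypothetical illustrations.
from collections import defaultdict

# Step 4: coding rules mapping each theme to the keywords that signal it
coding_rules = {
    "access_to_care": ["appointment", "waiting list", "travel"],
    "cost": ["afford", "expensive", "insurance"],
}

# Step 5: code the data by checking each segment against the rules
segments = [
    "I could not afford the follow-up visit.",
    "The waiting list for an appointment was three months long.",
]

coded = defaultdict(list)
for segment in segments:
    for theme, keywords in coding_rules.items():
        if any(keyword in segment.lower() for keyword in keywords):
            coded[theme].append(segment)

# Step 6: examine the results, e.g. how often each theme appears
for theme, hits in coded.items():
    print(theme, len(hits), hits)
```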

Discourse analysis

Discourse analysis is a research method used to study written/ spoken language in relation to its social context (Wood & Kroger, 2000).

In discourse analysis, the researcher interprets the details of language materials and the context in which they are situated.

Discourse analysis aims to understand the functions of language (how language is used in real life) and how meaning is conveyed by language in different contexts. Researchers use discourse analysis to investigate social groups and how language is used to achieve specific communication goals.

Different methods of discourse analysis can be used depending on the aims and objectives of a study. However, the following steps provide a guideline on how to conduct discourse analysis; a small illustrative code sketch follows the list.
  • Define the Research Question : Develop a relevant research question to frame the analysis.
  • Gather Data and Establish the Context : Collect research materials (e.g., interview transcripts, documents). Gather factual details and review the literature to construct a theory about the social and historical context of your study.
  • Analyze the Content : Closely examine various components of the text, such as the vocabulary, sentences, paragraphs, and structure of the text. Identify patterns relevant to the research question to create codes, then group these into themes.
  • Review the Results : Reflect on the findings to examine the function of the language, and the meaning and context of the discourse. 
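One small, automatable aid to the "Analyze the Content" step is counting word frequencies in a transcript to spot candidate vocabulary patterns. The Python sketch below uses a hypothetical transcript excerpt and an ad hoc stopword list; interpreting what the patterns mean in their social context remains a manual, interpretive task.

```python
# Minimal sketch: word frequencies in a transcript as a starting point for
# identifying vocabulary patterns. The excerpt and stopword list are hypothetical.
from collections import Counter
import re

transcript = (
    "We just have to manage. Management keeps saying we should manage our time, "
    "but managing a ward short-staffed is not a time problem."
)

words = re.findall(r"[a-z']+", transcript.lower())
stopwords = {"we", "to", "a", "is", "not", "but", "our", "have", "just", "should", "the"}
frequencies = Counter(w for w in words if w not in stopwords)

print(frequencies.most_common(5))
```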

Thematic analysis

Thematic analysis is a method used to identify, interpret, and report patterns in data, such as commonalities or contrasts. 

Although the origins of thematic analysis can be traced back to the early twentieth century, its current understanding and clarity are largely attributed to Braun and Clarke (2006).

Thematic analysis aims to develop themes (patterns of meaning) across a dataset to address a research question. 

In thematic analysis, qualitative data is gathered using techniques such as interviews, focus groups, and questionnaires. Audio recordings are transcribed. The dataset is then explored and interpreted by a researcher to identify patterns. 

This occurs through the rigorous process of data familiarisation, coding, theme development, and revision. These identified patterns provide a summary of the dataset and can be used to address a research question.

Themes are developed by exploring the implicit and explicit meanings within the data. Two different approaches are used to generate themes: inductive and deductive. 

An inductive approach allows themes to emerge from the data. In contrast, a deductive approach uses existing theories or knowledge to apply preconceived ideas to the data.

Phases of Thematic Analysis

Braun and Clarke (2006) describe six phases of thematic analysis: (1) familiarising yourself with the data, (2) generating initial codes, (3) searching for themes, (4) reviewing themes, (5) defining and naming themes, and (6) producing the report. These phases can be applied flexibly to fit the research question and data.

Template analysis

Template analysis refers to a specific method of thematic analysis which uses hierarchical coding (Brooks et al., 2014).

Template analysis is used to analyze textual data, for example, interview transcripts or open-ended responses on a written questionnaire.

To conduct template analysis, a coding template must be developed (usually from a subset of the data) and subsequently revised and refined. This template represents the themes identified by researchers as important in the dataset. 

Codes are ordered hierarchically within the template, with the highest-level codes demonstrating overarching themes in the data and lower-level codes representing constituent themes with a narrower focus.
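As an illustration of this hierarchical structure, the minimal Python sketch below represents an initial coding template as nested dictionaries. The themes are hypothetical; in practice the template would be revised repeatedly as it is applied to further data.

```python
# Minimal sketch of a hierarchical coding template with hypothetical themes.
# Top-level keys are overarching themes; nested dictionaries hold narrower
# constituent themes, mirroring the hierarchy described above.
initial_template = {
    "barriers to quitting smoking": {
        "social environment": {
            "smoking among friends": {},
            "workplace culture": {},
        },
        "stress and coping": {},
    },
    "motivations to quit": {
        "health concerns": {},
        "family pressure": {},
    },
}

def print_template(template: dict, level: int = 1) -> None:
    """Display the template with indentation reflecting the code hierarchy."""
    for code, children in template.items():
        print("  " * (level - 1) + f"Level {level}: {code}")
        print_template(children, level + 1)

# Applying the template to further data would involve adding, removing, or
# renaming codes before finalizing it for the full dataset.
print_template(initial_template)
```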

A guideline for the main procedural steps for conducting template analysis is outlined below.
  • Familiarization with the Data : Read (and reread) the dataset in full. Engage, reflect, and take notes on data that may be relevant to the research question.
  • Preliminary Coding : Identify initial codes using guidance from the a priori codes, identified before the analysis as likely to be beneficial and relevant to the analysis.
  • Organize Themes : Organize themes into meaningful clusters. Consider the relationships between the themes both within and between clusters.
  • Produce an Initial Template : Develop an initial template. This may be based on a subset of the data.
  • Apply and Develop the Template : Apply the initial template to further data and make any necessary modifications. Refinements of the template may include adding themes, removing themes, or changing the scope/title of themes. 
  • Finalize Template : Finalize the template, then apply it to the entire dataset. 

Frame analysis

Frame analysis is a comparative form of thematic analysis which systematically analyzes data using a matrix output.

Ritchie and Spencer (1994) developed this set of techniques to analyze qualitative data in applied policy research. Frame analysis aims to generate theory from data.

Frame analysis encourages researchers to organize and manage their data using summarization.

This results in a flexible and unique matrix output, in which individual participants (or cases) are represented by rows and themes are represented by columns. 

Each intersecting cell is used to summarize findings relating to the corresponding participant and theme.
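A minimal sketch of such a matrix, using the pandas library with hypothetical participants, themes, and cell summaries, might look like this:

```python
# Minimal sketch of a framework matrix: rows are participants (cases),
# columns are themes, and each cell holds a brief summary. All values are hypothetical.
import pandas as pd

participants = ["P01", "P02", "P03"]
themes = ["Access to services", "Cost", "Trust in providers"]

matrix = pd.DataFrame(index=participants, columns=themes, dtype="object")
matrix.loc["P01", "Cost"] = "Delayed treatment until symptoms worsened; cites insurance gaps."
matrix.loc["P02", "Access to services"] = "Two-hour travel to nearest clinic; relies on neighbours for lifts."

print(matrix)
```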

Frame analysis has five distinct phases which are interrelated, forming a methodical and rigorous framework.
  • Familiarization with the Data : Familiarize yourself with all the transcripts. Immerse yourself in the details of each transcript and start to note recurring themes.
  • Develop a Theoretical Framework : Identify recurrent/ important themes and add them to a chart. Provide a framework/ structure for the analysis.
  • Indexing : Apply the framework systematically to the entire study data.
  • Summarize Data in Analytical Framework : Reduce the data into brief summaries of participants’ accounts.
  • Mapping and Interpretation : Compare themes and subthemes and check against the original transcripts. Group the data into categories and provide an explanation for them.

Preventing Bias in Qualitative Research

To evaluate qualitative studies, the CASP (Critical Appraisal Skills Programme) checklist for qualitative studies can be used to ensure all aspects of a study have been considered (CASP, 2018).

The quality of research can be enhanced and assessed using criteria such as checklists, reflexivity, co-coding, and member-checking. 

Co-coding 

Relying on only one researcher to interpret rich and complex data may risk key insights and alternative viewpoints being missed. Therefore, coding is often performed by multiple researchers.

A common strategy must be defined at the beginning of the coding process  (Busetto et al., 2020). This includes establishing a useful coding list and finding a common definition of individual codes.

Transcripts are initially coded independently by researchers and then compared and consolidated to minimize error or bias and to bring confirmation of findings. 
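Once independent coding is complete, agreement between coders can be checked before consolidation. The Python sketch below is a minimal illustration using hypothetical segment codes; it computes raw percentage agreement and Cohen's kappa (via scikit-learn), which corrects for agreement expected by chance. Many teams resolve differences purely through discussion, so treat this only as one possible aid.

```python
# Minimal sketch of comparing two coders' independent codes for the same segments.
# The segment codes are hypothetical; cohen_kappa_score comes from scikit-learn.
from sklearn.metrics import cohen_kappa_score

coder_a = ["cost", "access", "cost", "trust", "access", "cost"]
coder_b = ["cost", "access", "trust", "trust", "access", "cost"]

# Simple percentage agreement
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa corrects for agreement expected by chance
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Raw agreement: {agreement:.2f}")
print(f"Cohen's kappa: {kappa:.2f}")

# Disagreements (here, the third segment) would be discussed and consolidated by the team.
```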

Member checking

Member checking (or respondent validation) involves checking back with participants to see if the research resonates with their experiences (Russell & Gregory, 2003).

Data can be returned to participants after data collection or when results are first available. For example, participants may be provided with their interview transcript and asked to verify whether this is a complete and accurate representation of their views.

Participants may then clarify or elaborate on their responses to ensure they align with their views (Shenton, 2004).

This feedback becomes part of data collection and ensures accurate descriptions/ interpretations of phenomena (Mays & Pope, 2000). 

Reflexivity in qualitative research

Reflexivity typically involves examining your own judgments, practices, and belief systems during data collection and analysis. It aims to identify any personal beliefs which may affect the research. 

Reflexivity is essential in qualitative research to ensure methodological transparency and complete reporting. This enables readers to understand how the interaction between the researcher and participant shapes the data.

Depending on the research question and the population being researched, factors to consider include the researcher’s prior experience, how contact with participants was established and maintained, and characteristics such as the researcher’s age, gender, and ethnicity.

These details are important because, in qualitative research, the researcher is a dynamic part of the research process and actively influences the outcome of the research (Boeije, 2014). 

Reflexivity Example

Who you are and your characteristics influence how you collect and analyze data. Here is an example of a reflexivity statement for research on smoking:

“I am a 30-year-old white female from a middle-class background. I live in the southwest of England and have been educated to master’s level. I have been involved in two research projects on oral health. I have never smoked, but I have witnessed how smoking can cause ill health from my volunteering in a smoking cessation clinic. My research aspirations are to help to develop interventions to help smokers quit.”

Establishing Trustworthiness in Qualitative Research

Trustworthiness is a concept used to assess the quality and rigor of qualitative research. Four criteria are used to assess a study’s trustworthiness: credibility, transferability, dependability, and confirmability.

Credibility in Qualitative Research

Credibility refers to how accurately the results represent the reality and viewpoints of the participants.

To establish credibility in research, participants’ views and the researcher’s representation of their views need to align (Tobin & Begley, 2004).

To increase the credibility of findings, researchers may use data source triangulation, investigator triangulation, peer debriefing, or member checking (Lincoln & Guba, 1985). 

Transferability in Qualitative Research

Transferability refers to how generalizable the findings are: whether the findings may be applied to another context, setting, or group (Tobin & Begley, 2004).

Transferability can be enhanced by giving thorough and in-depth descriptions of the research setting, sample, and methods (Nowell et al., 2017). 

Dependability in Qualitative Research

Dependability is the extent to which the study could be replicated under similar conditions and the findings would be consistent.

Researchers can establish dependability using methods such as audit trails so readers can see the research process is logical and traceable (Koch, 1994).

Confirmability in Qualitative Research

Confirmability is concerned with establishing that there is a clear link between the researcher’s interpretations/ findings and the data.

Researchers can achieve confirmability by demonstrating how conclusions and interpretations were arrived at (Nowell et al., 2017).

This enables readers to understand the reasoning behind the decisions made. 

Audit Trails in Qualitative Research

An audit trail provides evidence of the decisions made by the researcher regarding theory, research design, and data collection, as well as the steps they have chosen to manage, analyze, and report data. 

The researcher must provide a clear rationale to demonstrate how conclusions were reached in their study.

A clear description of the research path must be provided to enable readers to trace through the researcher’s logic (Halpern, 1983).

Researchers should maintain records of the raw data, field notes, transcripts, and a reflective journal in order to provide a clear audit trail. 

Discovery of unexpected data

Open-ended questions in qualitative research mean the researcher can probe an interview topic and enable the participant to elaborate on responses in an unrestricted manner.

This allows unexpected data to emerge, which can lead to further research into that topic. 

The exploratory nature of qualitative research helps generate hypotheses that can be tested quantitatively (Busetto et al., 2020).

Flexibility

Data collection and analysis can be modified and adapted to take the research in a different direction if new ideas or patterns emerge in the data.

This enables researchers to investigate new opportunities while firmly maintaining their research goals. 

Naturalistic settings

The behaviors of participants are recorded in real-world settings. Studies that use real-world settings have high ecological validity since participants behave more authentically. 

Limitations

Time-consuming

Qualitative research results in large amounts of data which often need to be transcribed and analyzed manually.

Even when software is used, transcription can be inaccurate, and using software for analysis can result in many codes which need to be condensed into themes. 

Subjectivity 

The researcher has an integral role in collecting and interpreting qualitative data. Therefore, the conclusions reached are from their perspective and experience.

Consequently, interpretations of data from another researcher may vary greatly. 

Limited generalizability

The aim of qualitative research is to provide a detailed, contextualized understanding of an aspect of the human experience from a relatively small sample size.

Despite rigorous analysis procedures, conclusions drawn cannot be generalized to the wider population since data may be biased or unrepresentative.

Therefore, results are only applicable to a small group of the population. 

Extraneous variables

Qualitative research is often conducted in real-world settings. This may cause results to be unreliable since extraneous variables may affect the data, for example:

  • Situational variables : different environmental conditions may influence participants’ behavior in a study. The random variation in factors (such as noise or lighting) may be difficult to control in real-world settings.
  • Participant characteristics : this includes any characteristics that may influence how a participant answers/ behaves in a study. This may include a participant’s mood, gender, age, ethnicity, sexual identity, IQ, etc.
  • Experimenter effect : experimenter effect refers to how a researcher’s unintentional influence can change the outcome of a study. This can occur when (i) the researcher’s interactions with participants unintentionally change participants’ behavior, or (ii) errors in observation, interpretation, or analysis skew the findings. 

What sample size should qualitative research be?

A minimum of around 12 participants has been recommended for qualitative studies to reach data saturation (Braun & Clarke, 2013).

Are surveys qualitative or quantitative?

Surveys can be used to gather information from a sample qualitatively or quantitatively. Qualitative surveys use open-ended questions to gather detailed information from a large sample using free text responses.

The use of open-ended questions allows for unrestricted responses where participants use their own words, enabling the collection of more in-depth information than closed-ended questions.

In contrast, quantitative surveys consist of closed-ended questions with multiple-choice answer options. Quantitative surveys are ideal to gather a statistical representation of a population.

What are the ethical considerations of qualitative research?

Before conducting a study, you must think about any risks that could occur and take steps to prevent them.
  • Participant protection : Researchers must protect participants from physical and mental harm. This means you must not embarrass, frighten, offend, or harm participants.
  • Transparency : Researchers are obligated to clearly communicate how they will collect, store, analyze, use, and share the data.
  • Confidentiality : You need to consider how to maintain the confidentiality and anonymity of participants’ data.

What is triangulation in qualitative research?

Triangulation refers to the use of several approaches in a study to comprehensively understand phenomena. This method helps to increase the validity and credibility of research findings. 

Types of triangulation include method triangulation (using multiple methods to gather data), investigator triangulation (using multiple researchers to collect or analyze data), theory triangulation (comparing several theoretical perspectives to explain a phenomenon), and data source triangulation (using data from various times, locations, and people) (Carter et al., 2014).

Why is qualitative research important?

Qualitative research allows researchers to describe and explain the social world. The exploratory nature of qualitative research helps to generate hypotheses that can then be tested quantitatively.

In qualitative research, participants are able to express their thoughts, experiences, and feelings without constraint.

Additionally, researchers are able to follow up on participants’ answers in real-time, generating valuable discussion around a topic. This enables researchers to gain a nuanced understanding of phenomena which is difficult to attain using quantitative methods.

What is coding data in qualitative research?

Coding data is a qualitative data analysis strategy in which a section of text is assigned a label that describes its content.

These labels may be words or phrases which represent important (and recurring) patterns in the data.

This process enables researchers to identify related content across the dataset. Codes can then be used to group similar types of data to generate themes.

What is the difference between qualitative and quantitative research?

Qualitative research involves the collection and analysis of non-numerical data in order to understand experiences and meanings from the participant’s perspective.

This can provide rich, in-depth insights on complicated phenomena. Qualitative data may be collected using interviews, focus groups, or observations.

In contrast, quantitative research involves the collection and analysis of numerical data to measure the frequency, magnitude, or relationships of variables. This can provide objective and reliable evidence that can be generalized to the wider population.

Quantitative data may be collected using closed-ended questionnaires or experiments.

What is trustworthiness in qualitative research?

Trustworthiness is a concept used to assess the quality and rigor of qualitative research. Four criteria are used to assess a study’s trustworthiness: credibility, transferability, dependability, and confirmability. 

Credibility refers to how accurately the results represent the reality and viewpoints of the participants. Transferability refers to whether the findings may be applied to another context, setting, or group.

Dependability is the extent to which the findings are consistent and reliable. Confirmability refers to the objectivity of findings (not influenced by the bias or assumptions of researchers).

What is data saturation in qualitative research?

Data saturation is a methodological principle used to guide the sample size of a qualitative research study.

Data saturation is proposed as a necessary methodological component in qualitative research (Saunders et al., 2018) as it is a vital criterion for discontinuing data collection and/or analysis. 

The intention of data saturation is to find “no new data, no new themes, no new coding, and ability to replicate the study” (Guest et al., 2006). Therefore, enough data has been gathered to make conclusions.
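One simple way to operationalise this idea is to track how many new codes each additional interview contributes and to stop once several interviews in a row add nothing new. The Python sketch below uses hypothetical codes per interview and an arbitrary stopping rule; in practice, judging saturation also involves theoretical considerations, not just counting codes.

```python
# Minimal sketch of monitoring data saturation with hypothetical codes per interview.
# Collection stops once no new codes appear for `stop_after` consecutive interviews.
codes_per_interview = [
    {"cost", "access"},   # interview 1
    {"cost", "trust"},    # interview 2: "trust" is new
    {"access", "trust"},  # interview 3: nothing new
    {"cost"},             # interview 4: nothing new
]

def saturation_point(interviews, stop_after=2):
    seen, runs_without_new = set(), 0
    for i, codes in enumerate(interviews, start=1):
        new_codes = codes - seen
        seen |= codes
        runs_without_new = 0 if new_codes else runs_without_new + 1
        if runs_without_new >= stop_after:
            return i  # saturation judged to be reached at this interview
    return None  # keep collecting data

print(saturation_point(codes_per_interview))  # 4
```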

Why is sampling in qualitative research important?

In quantitative research, large sample sizes are used to provide statistically significant quantitative estimates.

This is because quantitative research aims to provide generalizable conclusions that represent populations.

However, the aim of sampling in qualitative research is to gather data that will help the researcher understand the depth, complexity, variation, or context of a phenomenon. The small sample sizes in qualitative studies support the depth of case-oriented analysis.

Boeije, H. (2014). Analysis in qualitative research. Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative research in psychology , 3 (2), 77-101. https://doi.org/10.1191/1478088706qp063oa

Brooks, J., McCluskey, S., Turley, E., & King, N. (2014). The utility of template analysis in qualitative psychology research. Qualitative Research in Psychology , 12 (2), 202–222. https://doi.org/10.1080/14780887.2014.955224

Busetto, L., Wick, W., & Gumbinger, C. (2020). How to use and assess qualitative research methods. Neurological research and practice , 2 (1), 14-14. https://doi.org/10.1186/s42466-020-00059-z 

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology nursing forum , 41 (5), 545–547. https://doi.org/10.1188/14.ONF.545-547

Critical Appraisal Skills Programme. (2018). CASP checklist: 10 questions to help you make sense of qualitative research. https://casp-uk.net/images/checklist/documents/CASP-Qualitative-Studies-Checklist/CASP-Qualitative-Checklist-2018_fillable_form.pdf (Accessed March 15, 2023).

Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. London: Sage.

Denny, E., & Weckesser, A. (2022). How to do qualitative research?: Qualitative research methods. BJOG : an international journal of obstetrics and gynaecology , 129 (7), 1166-1167. https://doi.org/10.1111/1471-0528.17150 

Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory. The Discovery of Grounded Theory , 1–18. https://doi.org/10.4324/9780203793206-1

Guest, G., Bunce, A., & Johnson, L. (2006). How many interviews are enough? An experiment with data saturation and variability. Field Methods, 18 (1), 59-82. doi:10.1177/1525822X05279903

Halpern, E. S. (1983). Auditing naturalistic inquiries: The development and application of a model (Unpublished doctoral dissertation). Indiana University, Bloomington.

Hammarberg, K., Kirkman, M., & de Lacey, S. (2016). Qualitative research methods: When to use them and how to judge them. Human Reproduction , 31 (3), 498–501. https://doi.org/10.1093/humrep/dev334

Koch, T. (1994). Establishing rigour in qualitative research: The decision trail. Journal of Advanced Nursing, 19, 976–986. doi:10.1111/ j.1365-2648.1994.tb01177.x

Lincoln, Y., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320(7226), 50–52.

Minichiello, V. (1990). In-Depth Interviewing: Researching People. Longman Cheshire.

Nowell, L. S., Norris, J. M., White, D. E., & Moules, N. J. (2017). Thematic Analysis: Striving to Meet the Trustworthiness Criteria. International Journal of Qualitative Methods, 16 (1). https://doi.org/10.1177/1609406917733847

Petty, N. J., Thomson, O. P., & Stew, G. (2012). Ready for a paradigm shift? part 2: Introducing qualitative research methodologies and methods. Manual Therapy , 17 (5), 378–384. https://doi.org/10.1016/j.math.2012.03.004

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. London: Sage

Reeves, S., Kuper, A., & Hodges, B. D. (2008). Qualitative research methodologies: Ethnography. BMJ , 337 (aug07 3). https://doi.org/10.1136/bmj.a1020

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: exploring its conceptualization and operationalization. Quality & quantity , 52 (4), 1893–1907. https://doi.org/10.1007/s11135-017-0574-8

Scarduzio, J. A. (2017). Emic approach to qualitative research. The International Encyclopedia of Communication Research Methods, 1–2 . https://doi.org/10.1002/9781118901731.iecrm0082

Schreier, M. (2012). Qualitative content analysis in practice. Sage.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

Starks, H., & Trinidad, S. B. (2007). Choose your method: a comparison of phenomenology, discourse analysis, and grounded theory. Qualitative health research , 17 (10), 1372–1380. https://doi.org/10.1177/1049732307307031

Tenny, S., Brannan, J. M., & Brannan, G. D. (2022). Qualitative Study. In StatPearls. StatPearls Publishing.

Tobin, G. A., & Begley, C. M. (2004). Methodological rigour within a qualitative framework. Journal of Advanced Nursing, 48, 388–396. doi:10.1111/j.1365-2648.2004.03207.x

Vaismoradi, M., Turunen, H., & Bondas, T. (2013). Content analysis and thematic analysis: Implications for conducting a qualitative descriptive study. Nursing & health sciences , 15 (3), 398-405. https://doi.org/10.1111/nhs.12048

Wood L. A., Kroger R. O. (2000). Doing discourse analysis: Methods for studying action in talk and text. Sage.

Yilmaz, K. (2013). Comparison of Quantitative and Qualitative Research Traditions: epistemological, theoretical, and methodological differences. European journal of education , 48 (2), 311-325. https://doi.org/10.1111/ejed.12014




Research Design | Step-by-Step Guide with Examples


A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions


Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenology. They often take similar approaches to data collection but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
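The difference between the two approaches can be illustrated with a minimal Python sketch; the population of 500 hypothetical students and the sample size of 50 are arbitrary choices made for the example.

```python
# Minimal sketch contrasting a probability sample (simple random sampling) with a
# non-probability convenience sample. The population list is hypothetical.
import random

population = [f"student_{i:03d}" for i in range(1, 501)]  # 500 students

# Probability sampling: every member has an equal, known chance of selection.
random.seed(42)  # for a reproducible illustration
probability_sample = random.sample(population, k=50)

# Non-probability (convenience) sampling: whoever is easiest to reach,
# e.g. the first 50 students who respond -- prone to selection bias.
convenience_sample = population[:50]

print(len(probability_sample), len(convenience_sample))
```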

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.
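As a minimal illustration, the Python sketch below operationalises a hypothetical “satisfaction” concept as the mean of three Likert-scale questionnaire items; the item names and scoring are assumptions made for the example, not an established instrument.

```python
# Minimal sketch of operationalising an abstract concept ("satisfaction") as the
# mean of several 1-5 Likert items. Item names and responses are hypothetical.
def satisfaction_score(responses: dict[str, int]) -> float:
    """Average of three Likert items; higher means greater reported satisfaction."""
    items = ["enjoys_course", "would_recommend", "meets_expectations"]
    return sum(responses[item] for item in items) / len(items)

participant = {"enjoys_course": 4, "would_recommend": 5, "meets_expectations": 3}
print(satisfaction_score(participant))  # 4.0
```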

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
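As a minimal illustration of both kinds of analysis, the Python sketch below computes descriptive statistics for one group of hypothetical test scores and runs an independent-samples t test comparing two groups using SciPy.

```python
# Minimal sketch of descriptive statistics and a comparison test.
# The two groups of test scores are hypothetical.
import statistics
from scipy import stats

group_a = [72, 85, 90, 68, 77, 95, 88, 79]
group_b = [65, 70, 74, 80, 62, 71, 69, 75]

# Descriptive statistics: central tendency and variability
print(statistics.mean(group_a), statistics.stdev(group_a))

# Inferential statistics: independent-samples t test for a difference in means
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```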

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.



9.4 Types of qualitative research designs

Learning Objectives

  • Define focus groups and outline how they differ from one-on-one interviews
  • Describe how to determine the best size for focus groups
  • Identify the important considerations in focus group composition
  • Discuss how to moderate focus groups
  • Identify the strengths and weaknesses of focus group methodology
  • Describe case study research, ethnography, and phenomenology.

There are various types of approaches to qualitative research.  This chapter presents information about focus groups, which are often used in social work research.  It also introduces case studies, ethnography, and phenomenology.

Focus Groups

Focus groups resemble qualitative interviews in that a researcher may prepare a guide in advance and interact with participants by asking them questions. But anyone who has conducted both one-on-one interviews and focus groups knows that each is unique. In an interview, usually one member (the research participant) is most active while the other (the researcher) plays the role of listener, conversation guider, and question-asker. Focus groups, on the other hand, are planned discussions designed to elicit group interaction and “obtain perceptions on a defined area of interest in a permissive, nonthreatening environment” (Krueger & Casey, 2000, p. 5). In focus groups, the researcher plays a different role than in a one-on-one interview. The researcher’s aim is to get participants talking to each other, to observe interactions among participants, and to moderate the discussion.


There are numerous examples of focus group research. For example, Amy Slater and Marika Tiggemann (2010) conducted six focus groups with 49 adolescent girls between the ages of 13 and 15 to learn more about girls’ attitudes toward participation in sports. In order to get focus group participants to speak with one another rather than with the group facilitator, the focus group interview guide contained just two questions: “Can you tell me some of the reasons that girls stop playing sports or other physical activities?” and “Why do you think girls don’t play as much sport/physical activity as boys?” In another focus group study, Virpi Ylanne and Angie Williams (2009) held nine focus group sessions with adults of different ages to gauge their perceptions of how older characters are represented in television commercials. Among other considerations, the researchers were interested in discovering how focus group participants position themselves and others in terms of age stereotypes and identities during the group discussion. In both examples, the researchers’ core interest in group interaction could not have been assessed had interviews been conducted on a one-on-one basis, making the focus group method an ideal choice.

Who should be in your focus group?

In some ways, focus groups require more planning than other qualitative methods of data collection, such as one-on-one interviews, in which a researcher may be better able to control the dialogue. Researchers must take care to form focus groups with members who will want to interact with one another and to control the timing of the event so that participants are neither asked nor expected to stay longer than they’ve agreed to. The researcher should also be prepared to inform focus group participants of their responsibility to maintain the confidentiality of what is said in the group. But while the researcher can and should encourage all focus group members to maintain confidentiality, she should also clarify to participants that the unique nature of the group setting prevents her from being able to promise that confidentiality will be maintained by other participants. Once focus group members leave the research setting, researchers cannot control what they say to other people.


Group size should be determined in part by the topic of the interview and your sense of the likelihood that participants will have much to say without much prompting. If the topic is one about which you think participants feel passionately and will have much to say, a group of 3–5 could make sense. Groups larger than that, especially for heated topics, can easily become unmanageable. Some researchers say that a group of about 6–10 participants is the ideal size for focus group research (Morgan, 1997); others recommend that groups should include 3–12 participants (Adler & Clark, 2008).  The size of the focus group is ultimately the decision of the researcher. When forming groups and deciding how large or small to make them, take into consideration what you know about the topic and participants’ potential interest in, passion for, and feelings about the topic. Also consider your comfort level and experience in conducting focus groups. These factors will help you decide which size is right in your particular case.

It may seem counterintuitive, but in general, it is better to form focus groups consisting of participants who do not know one another than to create groups consisting of friends, relatives, or acquaintances (Agar & MacDonald, 1995). The reason is that group members who know each other may share taken-for-granted knowledge or assumptions that they never state aloud. In research, it is precisely this taken-for-granted knowledge that is often of interest; thus, the focus group researcher should avoid setting up interactions where participants may be discouraged to question or raise issues that they take for granted. However, group members should not be so different from one another that participants will be unlikely to feel comfortable talking with one another.

Focus group researchers must carefully consider the composition of the groups they put together. In his text on conducting focus groups, Morgan (1997) suggests that “homogeneity in background and not homogeneity in attitudes” (p. 36) should be the goal, since participants must feel comfortable speaking up but must also have enough differences to facilitate a productive discussion.  Whatever composition a researcher designs for her focus groups, the important point to keep in mind is that focus group dynamics are shaped by multiple social contexts (Hollander, 2004). Participants’ silences as well as their speech may be shaped by gender, race, class, sexuality, age, or other background characteristics or social dynamics—all of which might be suppressed or exacerbated depending on the composition of the group. Hollander (2004) suggests that researchers must pay careful attention to group composition, must be attentive to group dynamics during the focus group discussion, and should use multiple methods of data collection in order to “untangle participants’ responses and their relationship to the social contexts of the focus group” (p. 632).

The role of the moderator

In addition to the importance of group composition, focus groups also require skillful moderation. A moderator is the researcher tasked with facilitating the conversation in the focus group. Participants may ask each other follow-up questions, agree or disagree with one another, display body language that tells us something about their feelings about the conversation, or even come up with questions not previously conceived of by the researcher. It is just these sorts of interactions and displays that are of interest to the researcher. A researcher conducting focus groups collects data on more than people’s direct responses to her question, as in interviews.

The moderator’s job is not to ask questions to each person individually, but to stimulate conversation between participants. It is important to set ground rules for focus groups at the outset of the discussion. Remind participants you’ve invited them to participate because you want to hear from all of them. Therefore, the group should aim to let just one person speak at a time and avoid letting just a couple of participants dominate the conversation. One way to do this is to begin the discussion by asking participants to briefly introduce themselves or to provide a brief response to an opening question. This will help set the tone of having all group members participate. Also, ask participants to avoid having side conversations; thoughts or reactions to what is said in the group are important and should be shared with everyone.

As the focus group gets rolling, the moderator will play a less active role as participants talk to one another. There may be times when the conversation stagnates or when you, as moderator, wish to guide the conversation in another direction. In these instances, it is important to demonstrate that you’ve been paying attention to what participants have said. Being prepared to interject statements or questions such as “I’d really like to hear more about what Sunil and Joe think about what Dominick and Jae have been saying” or “Several of you have mentioned X. What do others think about this?” will be important for keeping the conversation going. It can also help redirect the conversation, shift the focus to participants who have been less active in the group, and serve as a cue to those who may be dominating the conversation that it is time to allow others to speak. Researchers may choose to use multiple moderators to make managing these various tasks easier.

Moderators are often too busy working with participants to take diligent notes during a focus group. It is helpful to have a note-taker who can record participants’ responses (Liamputtong, 2011). The note-taker creates, in essence, the first draft of interpretation for the data in the study. They note themes in responses, nonverbal cues, and other information to be included in the analysis later on. Focus groups are analyzed in a similar way as interviews; however, the interactive dimension between participants adds another element to the analytical process. Researchers must attend to the group dynamics of each focus group, as “verbal and nonverbal expressions, the tactical use of humour, interruptions in interaction, and disagreement between participants” are all data that are vital to include in analysis (Liamputtong, 2011, p. 175). Note-takers record these elements in field notes, which allows moderators to focus on the conversation.

Strengths and weaknesses of focus groups

Focus groups share many of the strengths and weaknesses of one-on-one qualitative interviews. Both methods can yield very detailed, in-depth information; are excellent for studying social processes; and provide researchers with an opportunity not only to hear what participants say but also to observe what they do in terms of their body language. Focus groups offer the added benefit of giving researchers a chance to collect data on human interaction by observing how group participants respond and react to one another. Like one-on-one qualitative interviews, focus groups can also be quite expensive and time-consuming. However, there may be some savings with focus groups as it takes fewer group events than one-on-one interviews to gather data from the same number of people. Another potential drawback of focus groups, which is not a concern for one-on-one interviews, is that one or two participants might dominate the group, silencing other participants. Careful planning and skillful moderation on the part of the researcher are crucial for avoiding, or at least dealing with, such possibilities. The various strengths and weaknesses of focus group research are summarized in Table 9.1.

Grounded Theory

Grounded theory has been widely used since its development in the late 1960s (Glaser & Strauss, 1967). Largely derived from schools of sociology, grounded theory involves immersion of the researcher in the field and in the data. Researchers follow a systematic set of procedures and a simultaneous approach to data collection and analysis. Grounded theory is most often used to generate rich explanations of complex actions, processes, and transitions. The primary mode of data collection is one-on-one participant interviews. Sample sizes tend to range from 20 to 30 individuals, sampled purposively (Padgett, 2016). However, sample sizes can be larger or smaller, depending on data saturation. Data saturation is the point in the qualitative research data collection process when no new information is being discovered. Researchers use a constant comparative approach in which previously collected data are analyzed during the same time frame as new data are being collected. This allows the researchers to determine when new information is no longer being gleaned from data collection and analysis — that data saturation has been reached — in order to conclude the data collection phase.

Rather than apply or test existing grand theories, or “Big T” theories, grounded theory focuses on “small t” theories (Padgett, 2016). Grand theories, or “Big T” theories, are systems of principles, ideas, and concepts used to predict phenomena. These theories are backed up by facts and tested hypotheses. “Small t” theories are speculative and contingent upon specific contexts. In grounded theory, these “small t” theories are grounded in events and experiences and emerge from the analysis of the data collected.

One notable application of grounded theory produced a “small t” theory of acceptance following cancer diagnoses (Jakobsson, Horvath, & Ahlberg, 2005). Using grounded theory, the researchers interviewed nine patients in western Sweden. Data collection and analysis stopped when saturation was reached. The researchers found that action and knowledge, given with respect and continuity, led to confidence, which in turn led to acceptance. This “small t” theory continues to be applied and further explored in other contexts.

Case study research

Case study research is an intensive longitudinal study of a phenomenon at one or more research sites for the purpose of deriving detailed, contextualized inferences and understanding the dynamic process underlying a phenomenon of interest. Case research is a unique research design in that it can be used in an interpretive manner to build theories or in a positivist manner to test theories. The previous chapter on case research discusses both techniques in depth and provides illustrative exemplars. Furthermore, the case researcher is a neutral observer (direct observation) in the social setting rather than an active participant (participant observation). As with any other interpretive approach, drawing meaningful inferences from case research depends heavily on the observational skills and integrative abilities of the researcher.

Ethnography

The ethnographic research method, derived largely from the field of anthropology, emphasizes studying a phenomenon within the context of its culture. The researcher must be deeply immersed in the social culture over an extended period of time (usually 8 months to 2 years) and should engage, observe, and record the daily life of the studied culture and its social participants within their natural setting. The primary mode of data collection is participant observation, and data analysis involves a “sense-making” approach. In addition, the researcher must take extensive field notes and narrate her experience in descriptive detail so that readers may experience the same culture as the researcher. In this method, the researcher has two roles: to rely on her unique knowledge and engagement to generate insights (theory), and to convince the scientific community of the trans-situational nature of the studied phenomenon.

The classic example of ethnographic research is Jane Goodall’s study of primate behaviors, where she lived with chimpanzees in their natural habitat at Gombe National Park in Tanzania, observed their behaviors, interacted with them, and shared their lives. During that process, she learnt and chronicled how chimpanzees seek food and shelter, how they socialize with each other, their communication patterns, their mating behaviors, and so forth. A more contemporary example of ethnographic research is Myra Bluebond-Langner’s (1996) study of decision making in families with children suffering from life-threatening illnesses, and the physical, psychological, environmental, ethical, legal, and cultural issues that influence such decision-making. The researcher followed the experiences of approximately 80 children with incurable illnesses and their families for a period of over two years. Data collection involved participant observation and formal/informal conversations with children, their parents and relatives, and health care providers to document their lived experience.

Phenomenology

Phenomenology is a research method that emphasizes the study of conscious experiences as a way of understanding the reality around us. Phenomenology is concerned with the systematic reflection and analysis of phenomena associated with conscious experiences, such as human judgment, perceptions, and actions, with the goal of (1) appreciating and describing social reality from the diverse subjective perspectives of the participants involved, and (2) understanding the symbolic meanings (“deep structure”) underlying these subjective experiences. Phenomenological inquiry requires that researchers eliminate any prior assumptions and personal biases, empathize with the participant’s situation, and tune into existential dimensions of that situation, so that they can fully understand the deep structures that drive the conscious thinking, feeling, and behavior of the studied participants.

Some researchers view phenomenology as a philosophy rather than as a research method. In response to this criticism, Giorgi and Giorgi (2003) developed an existential phenomenological research method to guide studies in this area. This method can be grouped into data collection and data analysis phases. In the data collection phase, participants embedded in a social phenomenon are interviewed to capture their subjective experiences and perspectives regarding the phenomenon under investigation. Examples of questions that may be asked include “Can you describe a typical day?” or “Can you describe that particular incident in more detail?” These interviews are recorded and transcribed for further analysis. During data analysis, the researcher reads the transcripts to: (1) get a sense of the whole, and (2) establish “units of significance” that can faithfully represent participants’ subjective experiences. Examples of such units of significance are concepts such as “felt space” and “felt time,” which are then used to document participants’ psychological experiences. For instance, did participants feel safe, free, trapped, or joyous when experiencing a phenomenon (“felt-space”)? Did they feel that their experience was pressured, slow, or discontinuous (“felt-time”)? Phenomenological analysis should take into account the participants’ temporal landscape (i.e., their sense of past, present, and future), and the researcher must transpose herself in an imaginary sense into the participant’s situation (i.e., temporarily live the participant’s life). The participants’ lived experience is described in the form of a narrative or using emergent themes. The analysis then delves into these themes to identify multiple layers of meaning while retaining the fragility and ambiguity of subjects’ lived experiences.

Key Takeaways

  • In terms of focus group composition, homogeneity of background among participants is recommended while diverse attitudes within the group are ideal.
  • The goal of a focus group is to get participants to talk with one another rather than the researcher.
  • Like one-on-one qualitative interviews, focus groups can yield very detailed information, are excellent for studying social processes, and provide researchers with an opportunity to observe participants’ body language; they also allow researchers to observe social interaction.
  • Focus groups can be expensive and time-consuming, as are one-on-one interviews; there is also the possibility that a few participants will dominate the group and silence others in the group.
  • Other types of qualitative research include case studies, ethnography, and phenomenology.
  • Data saturation – the point in the qualitative research data collection process when no new information is being discovered
  • Focus groups – planned discussions designed to elicit group interaction and “obtain perceptions on a defined area of interest in a permissive, nonthreatening environment” (Krueger & Casey, 2000, p. 5)
  • Moderator – the researcher tasked with facilitating the conversation in the focus group

Image attributions

target group by geralt CC-0

workplace team by Free-Photos CC-0

Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Qualitative Research – Methods, Analysis Types and Guide


Qualitative research is a type of research methodology that focuses on exploring and understanding people’s beliefs, attitudes, behaviors, and experiences through the collection and analysis of non-numerical data. It seeks to answer research questions through the examination of subjective data, such as interviews, focus groups, observations, and textual analysis.

Qualitative research aims to uncover the meaning and significance of social phenomena, and it typically involves a more flexible and iterative approach to data collection and analysis compared to quantitative research. Qualitative research is often used in fields such as sociology, anthropology, psychology, and education.

Qualitative Research Methods


Qualitative Research Methods are as follows:

One-to-One Interview

This method involves conducting an interview with a single participant to gain a detailed understanding of their experiences, attitudes, and beliefs. One-to-one interviews can be conducted in-person, over the phone, or through video conferencing. The interviewer typically uses open-ended questions to encourage the participant to share their thoughts and feelings. One-to-one interviews are useful for gaining detailed insights into individual experiences.

Focus Groups

This method involves bringing together a group of people to discuss a specific topic in a structured setting. The focus group is led by a moderator who guides the discussion and encourages participants to share their thoughts and opinions. Focus groups are useful for generating ideas and insights, exploring social norms and attitudes, and understanding group dynamics.

Ethnographic Studies

This method involves immersing oneself in a culture or community to gain a deep understanding of its norms, beliefs, and practices. Ethnographic studies typically involve long-term fieldwork and observation, as well as interviews and document analysis. Ethnographic studies are useful for understanding the cultural context of social phenomena and for gaining a holistic understanding of complex social processes.

Text Analysis

This method involves analyzing written or spoken language to identify patterns and themes. Text analysis can be quantitative or qualitative. Qualitative text analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Text analysis is useful for understanding media messages, public discourse, and cultural trends.

Case Study

This method involves an in-depth examination of a single person, group, or event to gain an understanding of complex phenomena. Case studies typically involve a combination of data collection methods, such as interviews, observations, and document analysis, to provide a comprehensive understanding of the case. Case studies are useful for exploring unique or rare cases, and for generating hypotheses for further research.

Process of Observation

This method involves systematically observing and recording behaviors and interactions in natural settings. The observer may take notes, use audio or video recordings, or use other methods to document what they see. Process of observation is useful for understanding social interactions, cultural practices, and the context in which behaviors occur.

Record Keeping

This method involves keeping detailed records of observations, interviews, and other data collected during the research process. Record keeping is essential for ensuring the accuracy and reliability of the data, and for providing a basis for analysis and interpretation.

Surveys

This method involves collecting data from a large sample of participants through a structured questionnaire. Surveys can be conducted in person, over the phone, through mail, or online. Surveys are useful for collecting data on attitudes, beliefs, and behaviors, and for identifying patterns and trends in a population.

Qualitative data analysis is the process of turning unstructured data into meaningful insights. It involves extracting and organizing information from sources such as interviews, focus groups, and surveys. The goal is to understand people’s attitudes, behaviors, and motivations.

Qualitative Research Analysis Methods

Qualitative Research analysis methods involve a systematic approach to interpreting and making sense of the data collected in qualitative research. Here are some common qualitative data analysis methods:

Thematic Analysis

This method involves identifying patterns or themes in the data that are relevant to the research question. The researcher reviews the data, identifies keywords or phrases, and groups them into categories or themes. Thematic analysis is useful for identifying patterns across multiple data sources and for generating new insights into the research topic.
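
As a concrete illustration of this grouping step, the short Python sketch below tags a handful of excerpts with codes and then collects the codes under broader themes. The excerpts, code labels, and theme names are invented for illustration; in an actual thematic analysis both the coding and the grouping are interpretive decisions made by the researcher rather than automatic operations.

```python
# Minimal sketch of the bookkeeping behind a thematic analysis:
# excerpts are tagged with codes, and codes are grouped into broader themes.
from collections import defaultdict

# Hypothetical coded excerpts: (excerpt, code)
coded_excerpts = [
    ("I never knew which bus would get me to the clinic.", "transport barriers"),
    ("The nurse explained everything twice, which helped.", "supportive staff"),
    ("Petrol money was the real problem for us.", "travel costs"),
    ("My doctor took time to answer my questions.", "supportive staff"),
]

# The researcher decides which codes belong to which theme.
code_to_theme = {
    "transport barriers": "Access to care",
    "travel costs": "Access to care",
    "supportive staff": "Relationships with providers",
}

themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[code_to_theme[code]].append((code, excerpt))

for theme, items in themes.items():
    print(theme)
    for code, excerpt in items:
        print(f"  [{code}] {excerpt}")
```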

Content Analysis

This method involves analyzing the content of written or spoken language to identify key themes or concepts. Content analysis can be quantitative or qualitative. Qualitative content analysis involves close reading and interpretation of texts to identify recurring themes, concepts, and patterns. Content analysis is useful for identifying patterns in media messages, public discourse, and cultural trends.

Discourse Analysis

This method involves analyzing language to understand how it constructs meaning and shapes social interactions. Discourse analysis can involve a variety of methods, such as conversation analysis, critical discourse analysis, and narrative analysis. Discourse analysis is useful for understanding how language shapes social interactions, cultural norms, and power relationships.

Grounded Theory Analysis

This method involves developing a theory or explanation based on the data collected. Grounded theory analysis starts with the data and uses an iterative process of coding and analysis to identify patterns and themes in the data. The theory or explanation that emerges is grounded in the data, rather than preconceived hypotheses. Grounded theory analysis is useful for understanding complex social phenomena and for generating new theoretical insights.

Narrative Analysis

This method involves analyzing the stories or narratives that participants share to gain insights into their experiences, attitudes, and beliefs. Narrative analysis can involve a variety of methods, such as structural analysis, thematic analysis, and discourse analysis. Narrative analysis is useful for understanding how individuals construct their identities, make sense of their experiences, and communicate their values and beliefs.

Phenomenological Analysis

This method involves analyzing how individuals make sense of their experiences and the meanings they attach to them. Phenomenological analysis typically involves in-depth interviews with participants to explore their experiences in detail. Phenomenological analysis is useful for understanding subjective experiences and for developing a rich understanding of human consciousness.

Comparative Analysis

This method involves comparing and contrasting data across different cases or groups to identify similarities and differences. Comparative analysis can be used to identify patterns or themes that are common across multiple cases, as well as to identify unique or distinctive features of individual cases. Comparative analysis is useful for understanding how social phenomena vary across different contexts and groups.

Applications of Qualitative Research

Qualitative research has many applications across different fields and industries. Here are some examples of how qualitative research is used:

  • Market Research: Qualitative research is often used in market research to understand consumer attitudes, behaviors, and preferences. Researchers conduct focus groups and one-on-one interviews with consumers to gather insights into their experiences and perceptions of products and services.
  • Health Care: Qualitative research is used in health care to explore patient experiences and perspectives on health and illness. Researchers conduct in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education: Qualitative research is used in education to understand student experiences and to develop effective teaching strategies. Researchers conduct classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work : Qualitative research is used in social work to explore social problems and to develop interventions to address them. Researchers conduct in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology : Qualitative research is used in anthropology to understand different cultures and societies. Researchers conduct ethnographic studies and observe and interview members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology : Qualitative research is used in psychology to understand human behavior and mental processes. Researchers conduct in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy : Qualitative research is used in public policy to explore public attitudes and to inform policy decisions. Researchers conduct focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

How to Conduct Qualitative Research

Here are some general steps for conducting qualitative research:

  • Identify your research question: Qualitative research starts with a research question or set of questions that you want to explore. This question should be focused and specific, but also broad enough to allow for exploration and discovery.
  • Select your research design: There are different types of qualitative research designs, including ethnography, case study, grounded theory, and phenomenology. You should select a design that aligns with your research question and that will allow you to gather the data you need to answer your research question.
  • Recruit participants: Once you have your research question and design, you need to recruit participants. The number of participants you need will depend on your research design and the scope of your research. You can recruit participants through advertisements, social media, or through personal networks.
  • Collect data: There are different methods for collecting qualitative data, including interviews, focus groups, observation, and document analysis. You should select the method or methods that align with your research design and that will allow you to gather the data you need to answer your research question.
  • Analyze data: Once you have collected your data, you need to analyze it. This involves reviewing your data, identifying patterns and themes, and developing codes to organize your data. You can use different software programs to help you analyze your data, or you can do it manually.
  • Interpret data: Once you have analyzed your data, you need to interpret it. This involves making sense of the patterns and themes you have identified, and developing insights and conclusions that answer your research question. You should be guided by your research question and use your data to support your conclusions.
  • Communicate results: Once you have interpreted your data, you need to communicate your results. This can be done through academic papers, presentations, or reports. You should be clear and concise in your communication, and use examples and quotes from your data to support your findings.

Examples of Qualitative Research

Here are some real-life examples of qualitative research:

  • Customer Feedback: A company may conduct qualitative research to understand the feedback and experiences of its customers. This may involve conducting focus groups or one-on-one interviews with customers to gather insights into their attitudes, behaviors, and preferences.
  • Healthcare : A healthcare provider may conduct qualitative research to explore patient experiences and perspectives on health and illness. This may involve conducting in-depth interviews with patients and their families to gather information on their experiences with different health care providers and treatments.
  • Education : An educational institution may conduct qualitative research to understand student experiences and to develop effective teaching strategies. This may involve conducting classroom observations and interviews with students and teachers to gather insights into classroom dynamics and instructional practices.
  • Social Work: A social worker may conduct qualitative research to explore social problems and to develop interventions to address them. This may involve conducting in-depth interviews with individuals and families to understand their experiences with poverty, discrimination, and other social problems.
  • Anthropology : An anthropologist may conduct qualitative research to understand different cultures and societies. This may involve conducting ethnographic studies and observing and interviewing members of different cultural groups to gain insights into their beliefs, practices, and social structures.
  • Psychology : A psychologist may conduct qualitative research to understand human behavior and mental processes. This may involve conducting in-depth interviews with individuals to explore their thoughts, feelings, and experiences.
  • Public Policy: A government agency or non-profit organization may conduct qualitative research to explore public attitudes and to inform policy decisions. This may involve conducting focus groups and one-on-one interviews with members of the public to gather insights into their perspectives on different policy issues.

Purpose of Qualitative Research

The purpose of qualitative research is to explore and understand the subjective experiences, behaviors, and perspectives of individuals or groups in a particular context. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research aims to provide in-depth, descriptive information that can help researchers develop insights and theories about complex social phenomena.

Qualitative research can serve multiple purposes, including:

  • Exploring new or emerging phenomena : Qualitative research can be useful for exploring new or emerging phenomena, such as new technologies or social trends. This type of research can help researchers develop a deeper understanding of these phenomena and identify potential areas for further study.
  • Understanding complex social phenomena : Qualitative research can be useful for exploring complex social phenomena, such as cultural beliefs, social norms, or political processes. This type of research can help researchers develop a more nuanced understanding of these phenomena and identify factors that may influence them.
  • Generating new theories or hypotheses: Qualitative research can be useful for generating new theories or hypotheses about social phenomena. By gathering rich, detailed data about individuals’ experiences and perspectives, researchers can develop insights that may challenge existing theories or lead to new lines of inquiry.
  • Providing context for quantitative data: Qualitative research can be useful for providing context for quantitative data. By gathering qualitative data alongside quantitative data, researchers can develop a more complete understanding of complex social phenomena and identify potential explanations for quantitative findings.

When to use Qualitative Research

Here are some situations where qualitative research may be appropriate:

  • Exploring a new area: If little is known about a particular topic, qualitative research can help to identify key issues, generate hypotheses, and develop new theories.
  • Understanding complex phenomena: Qualitative research can be used to investigate complex social, cultural, or organizational phenomena that are difficult to measure quantitatively.
  • Investigating subjective experiences: Qualitative research is particularly useful for investigating the subjective experiences of individuals or groups, such as their attitudes, beliefs, values, or emotions.
  • Conducting formative research: Qualitative research can be used in the early stages of a research project to develop research questions, identify potential research participants, and refine research methods.
  • Evaluating interventions or programs: Qualitative research can be used to evaluate the effectiveness of interventions or programs by collecting data on participants’ experiences, attitudes, and behaviors.

Characteristics of Qualitative Research

Qualitative research is characterized by several key features, including:

  • Focus on subjective experience: Qualitative research is concerned with understanding the subjective experiences, beliefs, and perspectives of individuals or groups in a particular context. Researchers aim to explore the meanings that people attach to their experiences and to understand the social and cultural factors that shape these meanings.
  • Use of open-ended questions: Qualitative research relies on open-ended questions that allow participants to provide detailed, in-depth responses. Researchers seek to elicit rich, descriptive data that can provide insights into participants’ experiences and perspectives.
  • Sampling based on purpose and diversity: Qualitative research often involves purposive sampling, in which participants are selected based on specific criteria related to the research question. Researchers may also seek to include participants with diverse experiences and perspectives to capture a range of viewpoints.
  • Data collection through multiple methods: Qualitative research typically involves the use of multiple data collection methods, such as in-depth interviews, focus groups, and observation. This allows researchers to gather rich, detailed data from multiple sources, which can provide a more complete picture of participants’ experiences and perspectives.
  • Inductive data analysis: Qualitative research relies on inductive data analysis, in which researchers develop theories and insights based on the data rather than testing pre-existing hypotheses. Researchers use coding and thematic analysis to identify patterns and themes in the data and to develop theories and explanations based on these patterns.
  • Emphasis on researcher reflexivity: Qualitative research recognizes the importance of the researcher’s role in shaping the research process and outcomes. Researchers are encouraged to reflect on their own biases and assumptions and to be transparent about their role in the research process.

Advantages of Qualitative Research

Qualitative research offers several advantages over other research methods, including:

  • Depth and detail: Qualitative research allows researchers to gather rich, detailed data that provides a deeper understanding of complex social phenomena. Through in-depth interviews, focus groups, and observation, researchers can gather detailed information about participants’ experiences and perspectives that may be missed by other research methods.
  • Flexibility : Qualitative research is a flexible approach that allows researchers to adapt their methods to the research question and context. Researchers can adjust their research methods in real-time to gather more information or explore unexpected findings.
  • Contextual understanding: Qualitative research is well-suited to exploring the social and cultural context in which individuals or groups are situated. Researchers can gather information about cultural norms, social structures, and historical events that may influence participants’ experiences and perspectives.
  • Participant perspective : Qualitative research prioritizes the perspective of participants, allowing researchers to explore subjective experiences and understand the meanings that participants attach to their experiences.
  • Theory development: Qualitative research can contribute to the development of new theories and insights about complex social phenomena. By gathering rich, detailed data and using inductive data analysis, researchers can develop new theories and explanations that may challenge existing understandings.
  • Validity : Qualitative research can offer high validity by using multiple data collection methods, purposive and diverse sampling, and researcher reflexivity. This can help ensure that findings are credible and trustworthy.

Limitations of Qualitative Research

Qualitative research also has some limitations, including:

  • Subjectivity : Qualitative research relies on the subjective interpretation of researchers, which can introduce bias into the research process. The researcher’s perspective, beliefs, and experiences can influence the way data is collected, analyzed, and interpreted.
  • Limited generalizability: Qualitative research typically involves small, purposive samples that may not be representative of larger populations. This limits the generalizability of findings to other contexts or populations.
  • Time-consuming: Qualitative research can be a time-consuming process, requiring significant resources for data collection, analysis, and interpretation.
  • Resource-intensive: Qualitative research may require more resources than other research methods, including specialized training for researchers, specialized software for data analysis, and transcription services.
  • Limited reliability: Qualitative research may be less reliable than quantitative research, as it relies on the subjective interpretation of researchers. This can make it difficult to replicate findings or compare results across different studies.
  • Ethics and confidentiality: Qualitative research involves collecting sensitive information from participants, which raises ethical concerns about confidentiality and informed consent. Researchers must take care to protect the privacy and confidentiality of participants and obtain informed consent.


About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • Open access
  • Published: 27 May 2020

How to use and assess qualitative research methods

Loraine Busetto (ORCID: orcid.org/0000-0002-9228-7875), Wolfgang Wick & Christoph Gumbinger

Neurological Research and Practice, volume 2, Article number: 14 (2020)


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived” , but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in " research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...) " [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCT) is visible in the idea of a hierarchy of research evidence which assumes that some research designs are objectively better than others, and that choosing a "lesser" design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – "questions before methods" [ 2 , 7 , 8 , 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as " a complex, multicomponent intervention – essentially a process of social change" susceptible to a range of different context factors including leadership or organisation history. According to him, "[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect" [ 8 ] . Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods , including qualitative ones, which for "these specific applications, (...) are not compromises in learning how to improve; they are superior" [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it : “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaption and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Figure 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews the focus on the different (blocks of) questions may differ and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Figure 2: Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with “theoretical” terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and Atlas.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Figure 3: From data collection to data analysis

Attributions for icons: see Fig. 2 , also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
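
The point above that coding makes raw data sortable can be illustrated with a small Python sketch: once segments from different sources carry code tags, every segment with a given code can be pulled together for comparison. The sources, codes, and text below are invented examples loosely based on the EVT scenario; in practice this bookkeeping is handled by dedicated software such as NVivo, MAXQDA or Atlas.ti rather than hand-written scripts.

```python
# Minimal sketch of retrieving all coded segments for one code across sources.
from dataclasses import dataclass, field

@dataclass
class Segment:
    source: str   # e.g. "SOP", "ER observation", "patient interview"
    text: str
    codes: list = field(default_factory=list)

# Hypothetical coded segments from three different data sources.
segments = [
    Segment("SOP", "Stroke calls are routed to the tele-neurology hub.",
            ["tele-neurology consultation"]),
    Segment("ER observation", "Video link took twelve minutes to establish.",
            ["tele-neurology consultation", "delay"]),
    Segment("patient interview", "We waited a long time before anyone came.",
            ["delay"]),
]

def segments_with_code(segments, code):
    """Return all segments tagged with the given code, across all sources."""
    return [s for s in segments if code in s.codes]

for s in segments_with_code(segments, "tele-neurology consultation"):
    print(f"{s.source}: {s.text}")
```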

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support main findings by relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods [ …] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 , 25 , 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed method designs are the convergent parallel design , the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Figure 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study would be used to understand where and why these occurred, and how they could be improved. In the exploratory sequential design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative study on which topics dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
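As a rough, purely illustrative sketch of this stopping rule, the iterative cycle can be expressed as a loop. Here collect_interviews and extract_codes are hypothetical placeholders for a team's own data collection and coding steps; in practice the judgement that saturation has been reached is interpretive rather than mechanical.

```python
# Illustrative only: a "collect, analyse, collect more" cycle with a saturation
# stopping rule. collect_interviews() and extract_codes() are hypothetical
# placeholders for a team's own data collection and coding procedures.

def run_until_saturation(collect_interviews, extract_codes, batch_size=5):
    known_codes = set()
    while True:
        batch = collect_interviews(batch_size)   # e.g. five more interviews
        new_codes = set(extract_codes(batch))    # analyse before sampling again
        if new_codes.issubset(known_codes):      # nothing new: saturation reached
            return known_codes
        known_codes |= new_codes                 # new variants found: keep sampling
```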

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “ purposive sampling” , in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).
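To make the coverage logic of purposive sampling concrete, the following small sketch uses an invented sampling frame; the aim is simply that every pre-defined combination of professional group and shift present in the frame ends up represented in the sample.

```python
from itertools import product

# Hypothetical sampling frame: candidates listed with professional group and shift.
candidates = [
    {"name": "A", "group": "neurologist",  "shift": "day"},
    {"name": "B", "group": "nurse",        "shift": "night"},
    {"name": "C", "group": "radiographer", "shift": "weekend"},
    {"name": "D", "group": "nurse",        "shift": "day"},
]

groups = ["neurologist", "nurse", "radiographer"]
shifts = ["day", "night", "weekend"]

# Select one candidate for every group x shift combination available in the frame,
# so that all pre-defined variants are represented in the sample.
sample = []
for group, shift in product(groups, shifts):
    match = next((c for c in candidates
                  if c["group"] == group and c["shift"] == shift), None)
    if match is not None:
        sample.append(match)

print(len(sample), "of", len(groups) * len(shifts), "possible variants covered")
```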

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is the pilot interview, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length of an interview is for patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but it is not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research that recruitment of study participants, data collection and data analysis be carried out by different people. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio-recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can be helpful in providing additional context for the interpretation of data, e.g. on whether something might have been meant as a joke [ 18 ].
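For readers who do encounter such scores, the following toy example (with invented code assignments from two coders) shows how one common agreement statistic, Cohen's kappa, is calculated; it is purely illustrative and not a recommendation to report it.

```python
# Illustrative only: Cohen's kappa for two coders' (invented) code assignments.
coder_a = ["delay", "delay", "teamwork", "communication", "teamwork", "delay"]
coder_b = ["delay", "teamwork", "teamwork", "communication", "delay", "delay"]

n = len(coder_a)
labels = set(coder_a) | set(coder_b)

# Observed agreement: share of segments where both coders assigned the same code.
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
# Expected agreement by chance, from each coder's marginal code frequencies.
expected = sum((coder_a.count(lab) / n) * (coder_b.count(lab) / n) for lab in labels)

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, Cohen's kappa = {kappa:.2f}")
```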

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

References

Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend [Qualitative research: useful, indispensable and challenging]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.

Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches . London: Sage.

Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22 (3), 109–113.

Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Science, 8 (1), 1–12.

Howick J, Chalmers I, Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM evidence levels of evidence (introductory document) . Oxford Center for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/ .

Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22 (2), 107–118.

May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage [Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report].

Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299 (10), 1182–1184.

Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20 (1), 72–80.

Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352:i154.

Drabble, S. J., & O’Cathain, A. (2015). Moving from Randomized Controlled Trials to Mixed Methods Intervention Evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science : IS, 8 , 117.

Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk . [Observation methods in qualitative research] (pp. 13–25). Houten: Bohn Stafleu van Loghum.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36 , 717–732.

Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek [Participating in illness and care: more about qualitative research]. Huisarts en Wetenschap, 49 (4), 283–287.

Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.

Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and implementation of the qualitative survey]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.

Pv, R., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41 (5), 545–547.

Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

Hunter, A., & Brewer, J. (2015). Designing Multimethod Research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14 (2), 5–33.

Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a Mixed Methods Design. In Designing and Conducting Mixed Methods Research . Thousand Oaks: SAGE Publications.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320 (7226), 50–52.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine : Journal of the Association of American Medical Colleges, 89 (9), 1245–1251.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52 (4), 1893–1907.

Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24 (1), 9–18.

Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research : an international journal of quality of life aspects of treatment, care and rehabilitation, 24 (5), 1057–1067.

Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4 , 2054358117703070–2054358117703070.

Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations . 22(4):824–35.

Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal Cancer Care (Engl), 28 (1), e12935.

Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations : an international journal of public participation in health care and health policy, 18 (5), 995–1008.

Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2 , 35.

Elsie Baker, S., & Edwards, R. (2012). How many qualitative interviews is enough? In National Centre for Research Methods Review Paper . National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf .

Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18 (2), 179–183.

Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21 (5), 619–634.

Acknowledgements

No external funding.

Author information

Authors and Affiliations

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick

Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z

Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020

DOI : https://doi.org/10.1186/s42466-020-00059-z

Keywords: Qualitative research, Mixed methods, Quality assessment

Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of potentially making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.
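To illustrate (with invented ratings, not data from the source), the analysis of such a descriptive survey would typically stop at summary statistics rather than testing relationships between variables:

```python
# Illustrative only: summarising invented survey responses (agreement ratings
# from 1 = strongly disagree to 5 = strongly agree) with descriptive statistics.
from statistics import mean, median

ratings = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]
print(f"n = {len(ratings)}, mean = {mean(ratings):.1f}, median = {median(ratings)}")
print(f"share rating 4 or 5: {sum(r >= 4 for r in ratings) / len(ratings):.0%}")
```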

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
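Here's a minimal sketch of such a test, using invented data for weekly exercise sessions and resting heart rate; scipy's pearsonr returns the correlation coefficient and a p-value.

```python
# Illustrative only: testing for a linear association between two variables
# using invented data for ten participants.
from scipy import stats

exercise_per_week = [0, 1, 1, 2, 3, 3, 4, 5, 5, 6]
resting_heart_rate = [78, 76, 74, 72, 70, 69, 66, 64, 63, 60]

r, p_value = stats.pearsonr(exercise_per_week, resting_heart_rate)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # strong negative correlation in this toy data
```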

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling other variables, and measure the resulting change in the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
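As a minimal sketch of that comparison, the invented growth measurements below are compared with a one-way ANOVA, which tests whether mean growth differs between the groups.

```python
# Illustrative only: one-way ANOVA on invented plant growth measurements (cm)
# for two fertiliser groups and a no-fertiliser control.
from scipy import stats

fertiliser_a = [12.1, 13.4, 11.8, 12.9, 13.0]
fertiliser_b = [14.2, 15.1, 14.8, 13.9, 15.3]
no_fertiliser = [9.8, 10.2, 9.5, 10.6, 10.0]

f_stat, p_value = stats.f_oneway(fertiliser_a, fertiliser_b, no_fertiliser)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a small p suggests group means differ
```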

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
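As a minimal sketch (with invented participant IDs), random assignment can be as simple as shuffling the participant list and splitting it in half, so each person has an equal chance of landing in either group.

```python
import random

# Illustrative only: shuffle invented participant IDs, then split the shuffled
# list in half to form treatment and control groups.
participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)

half = len(participants) // 2
treatment, control = participants[:half], participants[half:]
print(f"treatment n = {len(treatment)}, control n = {len(control)}")
```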

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.

Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.

Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

Types Of Qualitative Research Designs And Methods

Types Of Qualitative Research Designs

Qualitative research design comes in many forms. Understanding what qualitative research is and the various methods that fall under its umbrella can help determine which method or design to use. Various techniques can achieve results, depending on the subject of study.

Types of qualitative research to explore social behavior or understand interactions within specific contexts include interviews, focus groups, observations and surveys. These identify concepts and relationships that aren’t easily observed through quantitative methods. Figuring out what to explore through qualitative research is the first step in picking the right study design.

Let’s look at the most common types of qualitative methods.

What Is Qualitative Research Design?

There are several types of qualitative research. The term refers to in-depth, exploratory studies that discover what people think, how they behave and the reasons behind their behavior. The qualitative researcher believes that to best understand human behavior, they need to know the context in which people are acting and making decisions.

Let’s define some basic terms.

Qualitative Method

A group of techniques that allow the researcher to gather information from participants to learn about their experiences, behaviors or beliefs. The types of qualitative research methods used in a specific study should be chosen as dictated by the data being gathered. For instance, to study how employers rate the skills of the engineering students they hired, qualitative research would be appropriate.

Quantitative Method

A group of techniques that allows the researcher to gather information from participants to measure variables. The data is numerical in nature. For instance, quantitative research can be used to study how many engineering students enroll in an MBA program.

Research Design

A plan or outline of how the researcher will proceed with the proposed research project. This defines the sample, the scope of work, the goals and objectives. It may also lay out a hypothesis to be tested. Research design could also combine qualitative and quantitative techniques.

Both qualitative and quantitative research are significant. Depending on the subject and the goals of the study, researchers choose one or the other or a combination of the two. This is all part of the qualitative research design process.

Before we look at some different types of qualitative research, it’s important to note that there’s no one correct approach to qualitative research design. No matter what the type of study, it’s important to carefully consider the design to ensure the method is suitable to the research question. Here are the types of qualitative research methods to choose from:

Cluster Sampling

This technique involves selecting participants from specific locations or teams (clusters). A researcher may set out to observe, interview, or create a focus group with participants linked by location, organization or some other commonality. For example, the researcher might select the top five teams that produce an organization’s finest work. The same can be done by looking at locations (stores in a geographic region). The benefit of this design is that it’s efficient in collecting opinions from specific working groups or areas. However, this limits the sample size to only those people who work within the cluster.
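Here's a minimal sketch of that logic with invented locations and staff names: whole clusters are selected and everyone within them joins the sample. (The clusters could just as easily be chosen purposively, such as the top-performing teams, rather than at random.)

```python
import random

# Illustrative only: cluster sampling with invented locations (clusters) and
# staff names. Two whole clusters are selected, and everyone within a selected
# cluster is included in the sample.
clusters = {
    "Cardiff": ["Ana", "Ben", "Carys"],
    "Swansea": ["Dev", "Efa"],
    "Newport": ["Frank", "Gwen", "Huw"],
    "Bangor": ["Iolo", "Jess"],
}

chosen = random.sample(list(clusters), k=2)
participants = [person for c in chosen for person in clusters[c]]
print(chosen, participants)
```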

Random Sampling

This design involves randomly selecting participants from groups defined by a set of variables (location, gender, race, occupation), so that each member of a group has an equal chance of being selected for the study. For example, if the researcher wants to study how students from different colleges differ from one another in terms of workplace habits and friendships, a random sample could be chosen from the student population at each of these colleges. The purpose of this design is to create a more even distribution of participants across all groups. The researcher will still need to choose which groups to include in the study.
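Here's a minimal sketch with invented student lists for two colleges: the same number of students is drawn at random from each college, and every student has an equal chance of selection.

```python
import random

# Illustrative only: drawing a random sample of equal size from each (invented)
# college group, so every student has the same chance of being selected.
colleges = {
    "Engineering": ["Asha", "Bilal", "Chen", "Dara", "Elin", "Femi"],
    "Business": ["Grace", "Hari", "Ines", "Jon", "Kai", "Lena"],
}

sample = {college: random.sample(students, k=3)
          for college, students in colleges.items()}
print(sample)
```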

Focus Groups

A focus group is a small group that meets to discuss specific issues. Participants are usually recruited randomly, although sometimes they might be recruited because of personal relationships with each other or because they represent part of a certain demographic (age, location). Focus groups are one of the most popular styles of qualitative research because they allow for individual views and opinions to be shared without introducing bias. Researchers gather data through face-to-face conversation or recorded observation.

Observation

This technique involves observing the interaction patterns in a particular situation. Researchers collect data by closely watching the behaviors of others. This method can only be used in certain settings, such as in the workplace or homes.

Interviews

An interview is an open-ended conversation between a researcher and a participant in which the researcher asks predetermined questions. Successful interviews require careful preparation to ensure that participants are able to give accurate answers. This method allows researchers to collect specific information about their research topic, and participants are more likely to be honest when telling their stories. However, there’s no way to control the number of unique answers, and certain participants may feel uncomfortable sharing their personal details with a stranger.

Surveys

A survey is a questionnaire used to gather information from a pool of people to get a large sample of responses. This study design allows researchers to collect more data than they would with individual interviews and observations. Depending on the nature of the survey, it may also not require participants to disclose sensitive information or details. On the flip side, it’s time-consuming and may not yield the answers researchers were looking for. It’s also difficult to collect and analyze answers from larger groups.

A large study can combine several of these methods. For instance, it can involve a survey to better understand which kind of organic produce consumers are looking for. It may also include questions on the frequency of such purchases—a numerical data point—alongside their views on the legitimacy of the organic tag, which is an open-ended qualitative question.

Knowledge of the types of qualitative research designs will help you achieve the results you desire.

How Are Qualitative Answers Analyzed?

With quantitative research, analysis of results is fairly straightforward. But, the nature of qualitative research design is such that turning the information collected into usable data can be a challenge. To do this, researchers have to code the non-numerical data for comparison and analysis.

The researcher goes through all their notes and recordings and codes them using a predetermined scheme. Codes are created by ‘stripping out’ words or phrases that seem to answer the questions posed. The researcher will need to decide which categories to code for. Sometimes this process can be time-consuming and difficult to do during the first few passes through the data. So, it’s a good idea to start off by coding a small amount of the data and conducting a thematic analysis to get a better understanding of how to proceed.
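Here's a minimal sketch of that bookkeeping with an invented coding scheme and transcript segments; real coding is interpretive, so this only illustrates the mechanics of tagging segments with codes.

```python
# Illustrative only: applying a predetermined coding scheme in which each code
# is linked to a few keywords, and every (invented) transcript segment is
# tagged with the codes whose keywords it contains.
codes = {
    "workload": ["busy", "overtime", "no time"],
    "teamwork": ["colleague", "support", "team"],
    "training": ["course", "learn", "mentor"],
}

segments = [
    "We were so busy that handovers happened in the corridor.",
    "My colleagues always support me on the night shift.",
]

coded = {
    segment: [code for code, words in codes.items()
              if any(word in segment.lower() for word in words)]
    for segment in segments
}
print(coded)
```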

The data collected must be organized and analyzed to answer the research questions. There are three approaches to analyzing the data: exploratory, confirmatory and descriptive.

Exploratory Data Analysis

This approach involves looking for relationships within the data to make sense of it. This design can be useful if the research question is ambiguous or open-ended. Exploratory analysis is very flexible and can be used in a number of settings. But, it generally looks at the relationship between variables while the researcher is working with the data.

Confirmatory Data Analysis

This design is used when there’s a hypothesis or theory to be tested. Confirmatory research seeks to test how well past findings apply to new observations, using statistical tests that quantify relationships between variables. It can also use prior research findings to predict new results.

Descriptive Data Analysis

In this design, the researcher will describe patterns that can be observed from the data. The researcher will take raw data and interpret it with an eye for patterns to formulate a theory that can eventually be tested with quantitative data. The qualitative design is ideal for exploring events that can’t be observed (such as people’s thoughts) or when a process is being evaluated.

Qualitative Research Design In Business

With careful planning and insightful analysis, qualitative research is a versatile and useful tool in business, public policy and social studies. In the workplace, managers can use it to understand markets and consumers better or to study the health of an organization.


  • Open access
  • Published: 16 May 2024

Integrating qualitative research within a clinical trials unit: developing strategies and understanding their implementation in contexts

  • Jeremy Segrott   ORCID: orcid.org/0000-0001-6215-0870 1 ,
  • Sue Channon 2 ,
  • Amy Lloyd 4 ,
  • Eleni Glarou 2 , 3 ,
  • Josie Henley 5 ,
  • Jacqueline Hughes 2 ,
  • Nina Jacob 2 ,
  • Sarah Milosevic 2 ,
  • Yvonne Moriarty 2 ,
  • Bethan Pell 6 ,
  • Mike Robling 2 ,
  • Heather Strange 2 ,
  • Julia Townson 2 ,
  • Qualitative Research Group &
  • Lucy Brookes-Howell 2  

Trials volume 25, Article number: 323 (2024)

Background/aims

The value of using qualitative methods within clinical trials is widely recognised. How qualitative research is integrated within trials units to achieve this is less clear. This paper describes the process through which qualitative research has been integrated within Cardiff University’s Centre for Trials Research (CTR) in Wales, UK. We highlight facilitators of, and challenges to, integration.

Methods

We held group discussions on the work of the Qualitative Research Group (QRG) within CTR. The content of these discussions, materials for a presentation in CTR, and documents relating to the development of the QRG were interpreted at a workshop attended by group members. Normalisation Process Theory (NPT) was used to structure analysis. A writing group prepared a document for input from members of CTR, forming the basis of this paper.

Results

Actions to integrate qualitative research comprised: its inclusion in Centre strategies; formation of a QRG with dedicated funding/roles; embedding of qualitative research within operating systems; capacity building/training; monitoring opportunities to include qualitative methods in studies; maximising the quality of qualitative research and developing methodological innovation. Facilitators of these actions included: the influence of the broader methodological landscape within trial/study design and its promotion of the value of qualitative research; and close physical proximity of CTR qualitative staff/students allowing sharing of methodological approaches. Introduction of innovative qualitative methods generated interest among other staff groups. Challenges included: pressure to under-resource qualitative components of research, preference for a statistical stance historically in some research areas and funding structures, and difficulties faced by qualitative researchers carving out individual academic profiles when working across trials/studies.

Conclusions

Given that CTUs are pivotal to the design and conduct of RCTs and related study types across multiple disciplines, integrating qualitative research into trials units is crucial if its contribution is to be fully realised. We have made explicit one trials unit’s experience of embedding qualitative research and present this to open dialogue on ways to operationalise and optimise qualitative research in trials. NPT provides a valuable framework with which to theorise these processes, including the importance of sense-making and legitimisation when introducing new practices within organisations.

Peer Review reports

The value of using qualitative methods within randomised controlled trials (RCTs) is widely recognised [ 1 , 2 , 3 ]. Qualitative research generates important evidence on factors affecting trial recruitment/retention [ 4 ] and implementation, aiding interpretation of quantitative data [ 5 ]. Though RCTs have traditionally been viewed as sitting within a positivist paradigm, recent methodological innovations have developed new trial designs that draw explicitly on both quantitative and qualitative methods. For instance, in the field of complex public health interventions, realist RCTs seek to understand the mechanisms through which interventions generate hypothesised impacts, and how interactions across different implementation contexts form part of these mechanisms. Proponents of realist RCTs—which integrate experimental and realist paradigms—highlight the importance of using quantitative and qualitative methods to fully realise these aims and to generate an understanding of intervention mechanisms and how context shapes them [ 6 ].

A need for guidance on how to conduct good quality qualitative research is being addressed, particularly in relation to feasibility studies for RCTs [ 7 ] and process evaluations embedded within trials of complex interventions [ 5 ]. There is also guidance on the conduct of qualitative research within trials at different points in the research cycle, including development, conduct and reporting [ 8 , 9 ].

A high proportion of trials are based within or involve clinical trials units (CTUs). In the UK the UKCRC Registered CTU Network describes them as:

… specialist units which have been set up with a specific remit to design, conduct, analyse and publish clinical trials and other well-designed studies. They have the capability to provide specialist expert statistical, epidemiological, and other methodological advice and coordination to undertake successful clinical trials. In addition, most CTUs will have expertise in the coordination of trials involving investigational medicinal products which must be conducted in compliance with the UK Regulations governing the conduct of clinical trials resulting from the EU Directive for Clinical Trials.

Thus, CTUs provide the specialist methodological expertise needed for the conduct of trials, and in the case of trials of investigational medicinal products, their involvement may be mandated to ensure compliance with relevant regulations. As the definition above suggests, CTUs also conduct and support other types of study apart from RCTs, providing a range of methodological and subject-based expertise.

However, despite their central role in the design and conduct of trials (and other evaluation designs), little has been written about how CTUs have integrated qualitative work within their organisations at a time when such methods are, as stated above, recognised as an important aspect of RCTs and evaluation studies more generally. This is a significant gap, since integration at the organisational level arguably shapes how qualitative research is integrated within individual studies, and it is therefore valuable to understand how CTUs have approached the task. There are different ways of involving qualitative work in trials units, such as partnering with other departments (e.g. social science) or employing qualitative researchers directly. Qualitative research can also be imagined and configured in different ways: as a method that generates data to inform future trial and intervention design, as an embedded component within an RCT or other evaluation type, or as a parallel strand of research focusing on lived experiences of illness, for instance. Understanding how trials units have integrated qualitative research can shed light on which strategies show promise, and in which contexts, and on how qualitative research is positioned within the field of trials research, foregrounding its value. Yet although much has been written about the use of qualitative methods within trials, few accounts exist of how trials units have integrated qualitative research within their systems and structures.

This paper discusses the process of embedding qualitative research within the work of one CTU—Cardiff University’s Centre for Trials Research (CTR). It highlights facilitators of this process and identifies challenges to integration. We use Normalisation Process Theory (NPT) as a framework to structure our experience and approach. The key gap addressed by this paper is the implementation of strategies to integrate qualitative research (a relatively newly adopted set of practices and processes) within CTU systems and structures. We acknowledge from the outset that there are multiple ways of approaching this task. What follows is therefore not a set of recommendations for a preferred or best way to integrate qualitative research, as this will comprise diverse actions according to specific contexts. Rather, we examine the processes through which integration occurred in our own setting and highlight the potential value of these insights for others engaged in the work of promoting qualitative research within trials units.

Background to the integration of qualitative research within CTR

The CTR was formed in 2015 [10]. It brought together three existing trials units at Cardiff University: the South East Wales Trials Unit, the Wales Cancer Trials Unit, and the Haematology Clinical Trials Unit. From its inception, the CTR had a stated aim of developing a programme of qualitative research and integrating it within trials and other studies. In the sections below, we map these approaches onto the framework offered by Normalisation Process Theory to understand the processes through which they helped achieve embedding and integration of qualitative research.

CTR’s aims (including those relating to the development of qualitative research) were included within its strategy documents and communicated to others through infrastructure funding applications, annual reports and its website. A Qualitative Research Group (QRG), which had previously existed within the South East Wales Trials Unit, with dedicated funding for methodological specialists and group lead academics, was a key mechanism through which the development of a qualitative portfolio was put into action. Integration of qualitative research within Centre systems and processes occurred through the inclusion of qualitative research in study adoption processes and representation on committees. The CTR’s study portfolio provided a basis to track qualitative methods in new and existing studies, identify opportunities to embed qualitative methods within recently adopted studies (at the funding application stage) and manage staff resources. Capacity building and training were an important focus of the QRG’s work, including training courses, mentoring, creation of an academic network open to university staff and practitioners working in the field of healthcare, presentations at CTR staff meetings and the securing of PhD studentships. Standard operating procedures and methodological guidance on the design and conduct of qualitative research (e.g. templates for developing analysis plans) aimed to create a shared understanding of how to undertake high-quality research, and a means to monitor the implementation of rigorous approaches. As the QRG expanded its expertise, it sought to develop innovative approaches, including the use of visual [11] and ethnographic methods [12].

Understanding implementation—Normalisation Process Theory (NPT)

Normalisation Process Theory (NPT) provides a model with which to understand the implementation of new sets of practices and their normalisation within organisational settings. The term ‘normalisation’ refers to how new practices become routinised (part of the everyday work of an organisation) through embedding and integration [13, 14]. NPT defines implementation as ‘the social organisation of work’ and is concerned with the social processes that take place as new practices are introduced. Embedding involves ‘making practices routine elements of everyday life’ within an organisation. Integration takes the form of ‘sustaining embedded practices in social contexts’, and how these processes lead to the practices becoming (or not becoming) ‘normal and routine’ [14]. NPT is concerned with the factors which promote or ‘inhibit’ attempts to embed and integrate the operationalisation of new practices [13, 14, 15].

Embedding new practices is therefore achieved through implementation—which takes the form of interactions in specific contexts. Implementation is operationalised through four ‘generative mechanisms’—coherence, cognitive participation, collective action and reflexive monitoring [14]. Each mechanism is characterised by components comprising immediate and organisational work, with actions of individuals and organisations (or groups of individuals) interdependent. The mechanisms operate partly through forms of investment (i.e. meaning, commitment, effort, and comprehension) [14].

Coherence refers to how individuals/groups make sense of, and give meaning to, new practices. Sense-making concerns the coherence of a practice—whether it ‘holds together’, and its differentiation from existing activities [15]. Communal and individual specification involve understanding new practices and their potential benefits for oneself or an organisation. Individuals consider what new practices mean for them in terms of tasks and responsibilities (internalisation) [14].

NPT frames the second mechanism, cognitive participation, as the building of a ‘community of practice’. For a new practice to be initiated, individuals and groups within an organisation must commit to it [14, 15]. Cognitive participation occurs through enrolment—how people relate to the new practice; legitimation—the belief that it is right for them to be involved; and activation—defining which actions are necessary to sustain the practice and their involvement [14]. Making the new practices work may require changes to roles (new responsibilities, altered procedures) and reconfiguring how colleagues work together (changed relationships).

Third, collective action refers to ‘the operational work that people do to enact a set of practices’ [14]. Individuals engage with the new practices (interactional workability), reshaping how members of an organisation interact with each other through the creation of new roles and expectations (relational interaction) [15]. Skill set workability concerns how the work of implementing a new set of practices is distributed and the necessary roles and skillsets defined [14]. Contextual integration draws attention to the incorporation of a practice within social contexts, and the potential for aspects of these contexts, such as systems and procedures, to be modified as a result [15].

Reflexive monitoring is the final implementation mechanism. Collective and individual appraisal evaluate the value of a set of practices, which depends on the collection of information—formally and informally (systematisation). Appraisal may lead to reconfiguration, in which procedures of the practice are redefined or reshaped [14, 15].

We sought to map the following: (1) the strategies used to embed qualitative research within the Centre, (2) key facilitators, and (3) barriers to their implementation. Through focused group discussions during the monthly meetings of the CTR QRG, and in discussion with the CTR senior management team throughout 2019–2020, we identified nine types of documents (22 individual documents in total) produced within the CTR which contained relevant information about the integration of qualitative research within its work (Table 1). The QRG had an ‘open door’ policy to membership and welcomed all staff/students with an interest in qualitative research. It included researchers who were employed specifically to undertake qualitative research and other staff with a range of study roles, including trial managers, statisticians, and data managers. There was also diversity in terms of career stage, including PhD students, mid-career researchers and members of the Centre’s Executive team. Membership was therefore largely self-selected and comprised individuals with a role related to, or an interest in, embedding qualitative research within trials. However, the group brought together diverse methodological perspectives and did not consist solely of methodological ‘champions’ whose job it was to promote the development of qualitative research within the Centre. Thus, whilst the group (and, by extension, the authors of this paper) had a shared appreciation of the value of qualitative research within a trials centre, they also brought varied methodological perspectives and ways of engaging with it.

All members of the QRG (n = 26) were invited to take part in a face-to-face, day-long workshop in February 2019 on ‘How to optimise and operationalise qualitative research in trials: reflections on CTR structure’. The workshop was attended by 12 members of staff and PhD students, including members of the QRG and the CTR’s senior management team. Recruitment to the workshop was therefore inclusive, and to some extent opportunistic, but all members of the QRG were able to contribute to discussions during regular monthly group meetings and the drafting of the current paper.

The aim of the workshop was to bring together information from the documents in Table 1 to generate discussion around the key strategies (and their component activities) that had been adopted to integrate qualitative research into CTR, as well as barriers to, and facilitators of, their implementation. The agenda for the workshop involved four key areas: development and history of the CTR model; mapping the current model within CTR; discussing the structure of other CTUs; and exploring the advantages and disadvantages of the CTR model.

During the workshop, we discussed the use of NPT to conceptualise how qualitative research had been embedded within CTR’s systems and practices. The group produced spider diagrams to map strategies and actions onto the four key domains (or ‘generative mechanisms’) of NPT summarised above, to aid understanding of how they had functioned and of the utility of NPT as a framework. This is summarised in Table 2.
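
Purely as an illustration, the sketch below shows one hypothetical way the outputs of such a mapping exercise could be recorded in a short script. The mechanism names follow NPT, but the strategy labels and the helper function are assumptions made for this sketch and do not reproduce the contents of Table 2.

```python
# Hypothetical sketch: recording strategies against NPT's four generative
# mechanisms, in the spirit of the workshop's spider diagrams.
from collections import defaultdict

NPT_MECHANISMS = (
    "coherence",
    "cognitive participation",
    "collective action",
    "reflexive monitoring",
)

def add_strategy(mapping, mechanism, strategy):
    """Attach a strategy or action to one of the four NPT mechanisms."""
    if mechanism not in NPT_MECHANISMS:
        raise ValueError(f"Unknown NPT mechanism: {mechanism}")
    mapping[mechanism].append(strategy)

mapping = defaultdict(list)
add_strategy(mapping, "coherence", "Qualitative research highlighted in Centre strategy documents")
add_strategy(mapping, "cognitive participation", "Qualitative representation on the study adoption committee")
add_strategy(mapping, "collective action", "SOPs and costing templates for qualitative components")
add_strategy(mapping, "reflexive monitoring", "Qualitative fields in the study portfolio database")

for mechanism in NPT_MECHANISMS:
    print(f"{mechanism}: {mapping[mechanism]}")
```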

Detailed notes were made during the workshop. A core writing group then used these notes and the documents in Table 1 to develop a draft of the current paper. This was circulated to all members of the CTR QRG (n = 26) and stored within a central repository accessible to them, to allow involvement and to incorporate the views of those who were not able to attend the workshop. The draft was presented again for comments at the monthly CTR QRG meeting in February 2021, attended by 10 members. The SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence) guidelines were used to inform the structure and content of the paper (see supplementary material) [16].

In the following sections, we describe the strategies CTR adopted to integrate qualitative research. These are mapped against NPT’s four generative mechanisms to explore the processes through which the strategies promoted integration, and facilitators of and barriers to their implementation. A summary of the strategies and their functioning in terms of the generative mechanisms is provided in Table 2.

Coherence—making sense of qualitative research

In CTR, many of the actions taken to build a portfolio of qualitative research were aimed at enabling colleagues, and external actors, to make sense of this set of methodologies. Centre-level strategies and grant applications for infrastructure funding highlighted the value of qualitative research, the added benefits it would bring, and positioned it as a legitimate set of practices alongside existing methods. For example, a 2014 application for renewal of trials unit infrastructure funding stated:

We are currently in the process of undertaking […] restructuring for our qualitative research team and are planning similar for trial management next year. The aim of this restructuring is to establish greater hierarchical management and opportunities for staff development and also provide a structure that can accommodate continuing growth.

Within the CTR, various forms of communication on the development of qualitative research were designed to enable staff and students to make sense of it, to think through its potential value for them, and to consider ways in which they might engage with it. These included presentations at staff meetings, informal meetings between project teams and the qualitative group lead, and the visibility of qualitative research on the public-facing Centre website and within Centre committees and systems. For instance, qualitative methods were included (and framed as a distinct set of practices) within study adoption forms and committee agendas. Information for colleagues described how qualitative methods could be incorporated within funding applications for RCTs and other evaluation studies to generate new insights into questions research teams were already keen to answer, such as influences on intervention implementation fidelity. Where externally based chief investigators approached the Centre to be involved in new grant applications, the existence of the qualitative team and group lead enabled the inclusion of qualitative research to be actively promoted at an early stage, and such opportunities were highlighted in the Centre’s brochure for new collaborators. Monthly qualitative research network meetings, advertised across CTR and to external research collaborators, were also designed to create a shared understanding of qualitative research methods and their utility within trials and other study types (e.g. intervention development, feasibility studies, and observational studies). Training events (discussed in more detail below) also aided sense-making.

Several factors facilitated the promotion of qualitative research as a distinctive and valuable entity. Among these was the influence of the broader methodological landscape within trial design, which promoted the value of qualitative research, such as guidance on the evaluation of complex interventions by the Medical Research Council [17], and the growing emphasis placed on process evaluations within trials (with qualitative methods important in understanding participant experience and influences on implementation) [5]. The attention given to lived experience (both through process evaluations and the move to embed public involvement in trials) helped to frame qualitative research within the Centre as something that was appropriate, legitimate, and of value. Recognition by research funders of the value of qualitative research within studies was also helpful in normalising and legitimising its adoption within grant applications.

The inclusion of qualitative methods within influential methodological guidance helped CTR researchers to develop a ‘shared language’ around these methods and provided a means through which a common understanding of the role of qualitative research could be generated. One barrier to such sense-making work was the varying extent to which staff and teams had existing knowledge or experience of qualitative research. This varied across methodological and subject groups within the Centre and reflected the histories of the individual trials units which had merged to form the Centre.

Cognitive participation—legitimising qualitative research

Senior CTR leaders promoted the value and legitimacy of qualitative research. Its inclusion in Centre strategies, infrastructure funding applications, and public-facing materials (e.g. the website and investigator brochures) signalled that it was appropriate for individuals to conduct qualitative research within their roles, or to support others in doing so. Legitimisation also took place through informal channels, such as senior leadership support for qualitative research methods in staff meetings and participation in QRG seminars. Continued development of the QRG (with dedicated infrastructure funding) provided a visible identity and equivalence with other methodological groups (e.g. trial managers, statisticians).

Staff were asked to engage with qualitative research in two main ways. First, there was an expansion in the number of staff for whom qualitative research formed part of their formal role and responsibilities. One of the three trials units that merged to form CTR brought with it a qualitative team comprising methodological specialists and a group lead. CTR continued the expansion of this group with the creation of new roles and an enlarged nucleus of researchers for whom qualitative research was the sole focus of their work. In part, this was linked to the successful award of projects that included a large qualitative component and that were coordinated by CTR (see Table 3, which describes the PUMA study).

Members of the QRG were encouraged to develop their own research ideas and to gain experience as principal investigators, and group seminars were used to explore new ideas and provide peer support. This was communicated through line management, appraisal, and informal peer interaction. Boundaries were not strictly demarcated (i.e. staff located outside the qualitative team were already using qualitative methods), but the new team became a central focus for developing a growing programme of work.

Second, individuals and studies were called upon to engage in new ways with qualitative research and with the qualitative team. A key goal for the Centre was that groups developing new research ideas should give greater consideration to the potential value and inclusion of qualitative research within their funding applications. Specifically, they were asked to do this by thinking about qualitative research at an early point in their application’s development (rather than ‘bolting it on’ after other elements had been designed) and by drawing on the expertise and input of the qualitative team. An example was the inclusion of questions on qualitative methods within the Centre’s study adoption form and representation from the qualitative team on the committee which reviewed new adoption requests. Where adoption requests indicated the inclusion of qualitative methods, colleagues were encouraged to liaise with the qualitative team, facilitating the integration of its expertise from an early stage. Qualitative seminars offered an informal and supportive space in which researchers could share initial ideas and refine their methodological approach. The benefits of this included the provision of sufficient time for methodological specialists to be involved in the design of the proposed qualitative component and the assurance that adequate costings had been drawn up. At study adoption group meetings, scrutiny of new proposals included consideration of whether they might be strengthened through the use of qualitative methods where these had not initially been included. Meetings of the QRG—which reviewed the Centre’s portfolio of new studies and gathered intelligence on new ideas—also helped to identify, early on, opportunities to integrate qualitative methods. Communication across teams was useful in identifying new research ideas and embedding qualitative researchers within emerging study development groups.

Actions to promote greater use of qualitative methods in funding applications fed through into a growing number of studies with a qualitative component. This helped to increase the visibility and legitimacy of qualitative methods within the Centre. For example, the PUMA study [12], which brought together a large multidisciplinary team to develop and evaluate a paediatric early warning system, drew heavily on qualitative methods, with the qualitative research located within the QRG. The project introduced an extensive network of collaborators and clinical colleagues to qualitative methods and to how they could be used during intervention development and the generation of case studies. Further information about the PUMA study is provided in Table 3.

Increasing the legitimacy of qualitative work across an extensive network of staff, students and collaborators was a complex process. Set against the continuing dominance of quantitative methods within clinical trials, there were variations in the extent to which clinicians and other collaborators embraced the value of qualitative methods. Research funding schemes, which often continued to emphasise the quantitative element of randomised controlled trials, inevitably fed through into the focus of new research proposals. Staff and external collaborators were sometimes uncertain about the added value that qualitative methods would bring to their trials. Across the CTR there were variations in the speed at which qualitative research methods gained legitimacy, partly based on disciplinary traditions and their influences. For instance, population health trials, often located within non-health settings such as schools or community settings, frequently involved collaboration with social scientists who brought with them experience in qualitative methods. Methodological guidance in this field, such as MRC guidance on process evaluations, highlighted the value of qualitative methods and of alternatives to the positivist paradigm, such as realist RCTs. In other, more clinical areas, positivist paradigms had greater dominance. Established practices and methodological traditions across different funders also influenced the ease of obtaining funding to include qualitative research within studies. For trials of investigational medicinal products (CTIMPs), the influence of regulatory frameworks on study design, data collection and the allocation of staff resources may have played a role. Over time, teams gained repeated experience of embedding qualitative research (and researchers) within their work and took this learning with them to subsequent studies. For example, the senior clinician quoted within the PUMA case study (Table 3 below) described how they had gained an appreciation of the rigour of qualitative research and an understanding of its language. Through these repeated interactions, embedding of qualitative research within studies started to become the norm rather than the exception.

Collective action—operationalising qualitative research

Collective action concerns the operationalisation of new practices within organisations—the allocation and management of the work, how individuals interact with each other, and the work itself. In CTR, the formation of a Qualitative Research Group helped to allocate and organise the work of building a portfolio of studies. Researchers across the Centre were called upon to interact with qualitative research in new ways. Presentations at staff meetings and the inclusion of qualitative research methods in portfolio study adoption forms were examples of this (interactional workability). This was operationalised by encouraging study teams to liaise with the qualitative research lead. Development of standard operating procedures, templates for costing qualitative research and methodological guidance (e.g. on analysis plans) also helped encourage researchers to interact with these methods in new ways. For some qualitative researchers who had been trained in the social sciences, working within a trials unit meant that they needed to interact in new and sometimes unfamiliar ways with standard operating procedures, risk assessments, and other trial-based systems. Thus, training needs and capacity-building efforts were multidirectional.

Whereas there had been a tendency for qualitative research to be ‘bolted on’ to proposals for RCTs, the systems described above were designed to embed thinking about the value and design of the qualitative component from the outset. They were also intended to integrate members of the qualitative team with trial teams from an early stage to promote effective integration of qualitative methods within larger trials and build relationships over time.

Standard Operating Procedures (SOPs), formal and informal training, and interaction between the qualitative team and other researchers increased the relational integration of qualitative methods within the Centre—the confidence individuals felt in including these methods within their studies, and their accountability for doing so. For instance, study adoption forms prompted researchers to interact routinely with the qualitative team at an early stage, whilst guidance on costing grants provided clear expectations about the resources needed to deliver proposed qualitative data collection.

Formation of the Qualitative Research Group, comprising methodological specialists, created new roles and skillsets (skill set workability). Research teams were encouraged to draw on these when writing funding applications for projects that included a qualitative component. Capacity-building initiatives were used to increase the number of researchers with the skills needed to undertake qualitative research, and to enable these individuals to develop their expertise over time. This was achieved through formal training courses, academic seminars, mentoring from experienced colleagues, and informal knowledge exchange. Links with external collaborators and centres engaged in building qualitative research supported these efforts. Within the Centre, the co-location of qualitative researchers with other methodological and trial teams facilitated knowledge exchange and the building of collaborative relationships, whilst the grouping of the qualitative team within a dedicated office space supported a collective identity and opportunities for informal peer support.

Some aspects of the context in which qualitative research was being developed created challenges to operationalisation. Dependence on project grants to fund qualitative methodologists meant that there was a continuing need to write further grant applications, whilst also limiting the time available to do so. Similarly, researchers within the team whose roles were funded largely by specific research projects could sometimes find it hard to create sufficient time to develop their personal methodological interests. However, the cultivation of a methodologically varied portfolio of work enabled members of the team to build significant expertise in different approaches (e.g. ethnography, discourse analysis) that connected individual studies.

Reflexive monitoring—evaluating the impact of qualitative research

Inclusion of questions/fields relating to qualitative research within the Centre’s study portfolio database was a key way in which information was collected (systematisation). It captured numbers of funding applications and funded studies, research design, and income generation. Alongside this database, a qualitative resource planner spreadsheet was used to link individual members of the qualitative team with projects and to facilitate resource planning, further reinforcing the core responsibilities and roles of qualitative researchers within CTR. As with all staff in the Centre, members of the qualitative team were placed on ongoing rather than fixed-term contracts, reflecting their core role within CTR. Planning and strategy meetings used the database and resource planner to assess the integration of qualitative research within Centre research, identify opportunities for increasing involvement, and manage staff recruitment and the sustainability of researcher posts. Academic meetings and day-to-day interaction provided informal appraisal of the group’s development and its position within the Centre. Individual appraisal was also important, with members of the qualitative team given opportunities to shape their role, reflect on progress, identify training needs, and further develop their skillset, particularly through line management systems.
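
As an illustration only, records of the kind held in such a portfolio database and resource planner might be structured along the lines sketched below; the field names, the example entries and the helper function are assumptions made for this sketch and do not reproduce the Centre’s actual systems.

```python
# Hypothetical sketch of portfolio-database and resource-planner records used
# for reflexive monitoring. All field names and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StudyRecord:
    study_id: str
    design: str                      # e.g. "RCT with embedded process evaluation"
    has_qualitative_component: bool  # captured via the study adoption form
    funding_stage: str               # e.g. "application" or "funded"
    income_generated: float = 0.0

@dataclass
class ResourcePlanEntry:
    study_id: str
    qualitative_researchers: List[str] = field(default_factory=list)
    fte_allocated: float = 0.0

def studies_without_qualitative_staffing(studies, plan):
    """Return funded studies with a qualitative component but no researcher allocated."""
    staffed = {entry.study_id for entry in plan if entry.qualitative_researchers}
    return [s for s in studies
            if s.has_qualitative_component
            and s.funding_stage == "funded"
            and s.study_id not in staffed]

# Example usage with made-up entries:
studies = [StudyRecord("STUDY-001", "RCT with embedded process evaluation", True, "funded")]
plan = []
print(studies_without_qualitative_staffing(studies, plan))  # -> [StudyRecord(...)]
```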

These forms of systematisation and appraisal were used to reconfigure the development of qualitative research and its integration within the Centre. For example, group strategies considered how to move from the initial embedding of qualitative research to its long-term integration, by further promoting the belief that it formed a core part of the Centre’s business. The visibility and legitimacy of qualitative research were promoted through initiatives such as greater prominence on the Centre’s website. Ongoing review of the qualitative portfolio and discussion at academic meetings enabled the identification of areas where increased capacity would be helpful, both for qualitative staff and more broadly within the Centre. This prompted the qualitative group to develop an introductory course in qualitative methods open to all Centre staff and PhD students, aimed at increasing understanding and awareness. As the qualitative team built its expertise and experience, it also sought to develop new and innovative approaches to conducting qualitative research. These included the use of visual and diary-based methods [11] and the adoption of ethnography to evaluate system-level clinical interventions [12]. Restrictions on conventional face-to-face qualitative data collection during the COVID-19 pandemic prompted rapid adoption of virtual/online methods for interviews and observation, and the use of new internet platforms such as Padlet, a form of digital noticeboard.

Discussion

In this paper, we have described the work undertaken by one CTU to integrate qualitative research within its studies and organisational culture. The parallel efforts of many trials units to achieve these goals arguably come at an opportune time. The traditional designs of RCTs have been challenged and re-imagined by the increasing influence of realist evaluation [6, 18] and the widespread acceptance that trials need to understand implementation and intervention theory as well as assess outcomes [17], hence the widespread adoption of embedded mixed-methods process evaluations within RCTs. These broad shifts in methodological orthodoxies, the production of high-profile methodological guidance, and the expectations of research funders all create fertile ground for the continued expansion of qualitative methods within trials units. However, whilst much has been written about the importance of developing qualitative research and the possible approaches to integrating qualitative and quantitative methods within studies, much less has been published on how to operationalise this within trials units. Filling this lacuna is important. Our paper highlights how the integration of a new set of practices within an organisation can become embedded as part of its ‘normal’ everyday work whilst also shaping the practices being integrated. In the case of CTR, it could be argued that the integration of qualitative research helped shape how this work was done (e.g. systems to assess progress and innovation).

In our trials unit, the presence of a dedicated research group of methodological specialists was a key action that helped realise the development of a portfolio of qualitative research and was perhaps the most visible evidence of a commitment to do so. However, our experience demonstrates that to fully realise the goal of developing qualitative research, much work focuses on the interaction between this ‘new’ set of methods and the organisation into which it is introduced. Whilst the team of methodological specialists was tasked with, and ‘able’ to do the work, the ‘work’ itself needed to be integrated and embedded within the existing system. Thus, alongside the creation of a team and methodological capacity, promoting the legitimacy of qualitative research was important to communicate to others that it was both a distinctive and different entity, yet similar and equivalent to more established groups and practices (e.g. trial management, statistics, data management). The framing of qualitative research within strategies, the messages given out by senior leaders (formally and informally) and the general visibility of qualitative research within the system all helped to achieve this.

Normalisation Process Theory draws our attention to the concepts of embedding (making a new practice routine, normal within an organisation) and integration —the long-term sustaining of these processes. An important process through which embedding took place in our centre concerned the creation of messages and systems that called upon individuals and research teams to interact with qualitative research. Research teams were encouraged to think about qualitative research and consider its potential value for their studies. Critically, they were asked to do so at specific points, and in particular ways. Early consideration of qualitative methods to maximise and optimise their inclusion within studies was emphasised, with timely input from the qualitative team. Study adoption systems, centre-level processes for managing financial and human resources, creation of a qualitative resource planner, and awareness raising among staff, helped to reinforce this. These processes of embedding and integration were complex and they varied in intensity and speed across different areas of the Centre’s work. In part this depended on existing research traditions, the extent of prior experience of working with qualitative researchers and methods, and the priorities of subject areas and funders. Centre-wide systems, sometimes linked to CTR’s operation as a CTU, also helped to legitimise and embed qualitative research, lending it equivalence with other research activity. For example, like all CTUs, CTR was required to conform with the principles of Good Clinical Practice, necessitating the creation of a quality management system, operationalised through standard operating procedures for all areas of its work. Qualitative research was included, and became embedded, within these systems, with SOPs produced to guide activities such as qualitative analysis.

NPT provides a helpful way of understanding how trials units might integrate qualitative research within their work. It highlights how new practices interact with existing organisational systems and the work needed to promote effective interaction. That is, alongside the creation of a team or programme of qualitative research, much of the work concerns how members of an organisation understand it, engage with it, and create systems to sustain it. Embedding a new set of practices may be just as important as the quality or characteristics of the practices themselves. High-quality qualitative research is of little value if it is not recognised and drawn upon within new studies for instance. NPT also offers a helpful lens with which to understand how integration and embedding occur, and the mechanisms through which they operate. For example, promoting the legitimacy of a new set of practices, or creating systems that embed it, can help sustain these practices by creating an organisational ambition and encouraging (or requiring) individuals to interact with them in certain ways, redefining their roles accordingly. NPT highlights the ways in which integration of new practices involves bi-directional exchanges with the organisation’s existing practices, with each having the potential to re-shape the other as interaction takes place. For instance, in CTR, qualitative researchers needed to integrate and apply their methods within the quality management and other systems of a CTU, such as the formalisation of key processes within standard operating procedures, something less likely to occur outside trials units. Equally, project teams (including those led by externally based chief investigators) increased the integration of qualitative methods within their overall study design, providing opportunities for new insights on intervention theory, implementation and the experiences of practitioners and participants.

We note two aspects of the normalisation processes within CTR that are slightly less well conceptualised by NPT. The first concerns the emphasis within coherence on identifying the distinctiveness of new practices, and how they differ from existing activities. Whilst differentiation was an important aspect of the integration of qualitative research in CTR, such integration could be seen as operating partly through processes of de-differentiation, or at least equivalence. That is, part of the integration of qualitative research was to see it as similar in terms of rigour, coherence, and importance to other forms of research within the Centre. To be viewed as similar, or at least comparable to existing practices, was to be legitimised.

Second, whilst NPT focuses mainly on the interaction between a new set of practices and the organisational context into which it is introduced, our own experience of introducing qualitative research into a trials unit was shaped by broader organisational and methodological contexts. For example, the increasing emphasis placed upon understanding implementation processes and the experiences of research participants in the field of clinical trials (e.g. by funders) created an environment conducive to the development of qualitative research methods within our Centre. Attempts to integrate qualitative research within studies were also cross-organisational, given that many of the studies managed within the CTR drew together multi-institutional teams. This provided important opportunities to integrate qualitative research within a portfolio of studies that extended beyond CTR and to build a network of collaborators who increasingly included qualitative methods within their funding proposals. The work of growing and integrating qualitative research within a trials unit is ongoing: ever-shifting macro-level influences can help or hinder, and the organisations within which we work are never static in terms of barriers and facilitators.

The importance of utilising qualitative methods within RCTs is now widely recognised. Increased emphasis on the evaluation of complex interventions, the influence of realist methods directing greater attention to complexity and the widespread adoption of mixed methods process evaluations are key drivers of this shift. The inclusion of qualitative methods within individual trials is important and previous research has explored approaches to their incorporation and some of the challenges encountered. Our paper highlights that the integration of qualitative methods at the organisational level of the CTU can shape how they are taken up by individual trials. Within CTR, it can be argued that qualitative research achieved high levels of integration, as conceptualised by Normalisation Process Theory. Thus, qualitative research became recognised as a coherent and valuable set of practices, secured legitimisation as an appropriate focus of individual and organisational activity and benefitted from forms of collective action which operationalised these organisational processes. Crucially, the routinisation of qualitative research appeared to be sustained, something which NPT suggests helps define integration (as opposed to initial embedding). However, our analysis suggested that the degree of integration varied by trial area. This variation reflected a complex mix of factors including disciplinary traditions, methodological guidance, existing (un)familiarity with qualitative research, and the influence of regulatory frameworks for certain clinical trials.

NPT provides a valuable framework with which to understand how these processes of embedding and integration occur. Our use of NPT draws attention to the importance of sense-making and legitimisation as important steps in introducing a new set of practices within the work of an organisation. Integration also depends, across each mechanism of NPT, on the building of effective relationships, which allow individuals and teams to work together in new ways. By reflecting on our experiences and the decisions taken within CTR we have made explicit one such process for embedding qualitative research within a trials unit, whilst acknowledging that approaches may differ across trials units. Mindful of this fact, and the focus of the current paper on one trials unit’s experience, we do not propose a set of recommendations for others who are working to achieve similar goals. Rather, we offer three overarching reflections (framed by NPT) which may act as a useful starting point for trials units (and other infrastructures) seeking to promote the adoption of qualitative research.

First, whilst research organisations such as trials units are highly heterogeneous, processes of embedding and integration, which we have foregrounded in this paper, are likely to be important across different contexts in sustaining the use of qualitative research. Second, developing a plan for the integration of qualitative research will benefit from mapping out the characteristics of the extant system. For example, it is valuable to know how familiar staff are with qualitative research and any variations across teams within an organisation. Third, NPT frames integration as a process of implementation which operates through key generative mechanisms: coherence, cognitive participation, collective action and reflexive monitoring. These mechanisms can help guide understanding of which actions help achieve embedding and integration. Importantly, they span multiple aspects of how organisations, and the individuals within them, work. The ways in which people make sense of a new set of practices (coherence), their commitment towards it (cognitive participation), how it is operationalised (collective action) and the evaluation of its introduction (reflexive monitoring) are all important. Thus, for example, qualitative research, even when well organised and operationalised within an organisation, is unlikely to be sustained if appreciation of its value is limited or people are not committed to it.

We present our experience of engaging with the processes described above to open dialogue with other trials units on ways to operationalise and optimise qualitative research in trials. Understanding how best to integrate qualitative research within these settings may help to fully realise the significant contribution which it makes to the design and conduct of trials.

Availability of data and materials

Some documents cited in this paper are either freely available from the Centre for Trials Research website or can be requested from the author for correspondence.

References

1. O’Cathain A, Thomas KJ, Drabble SJ, Rudolph A, Hewison J. What can qualitative research do for randomised controlled trials? A systematic mapping review. BMJ Open. 2013;3(6):e002889.

2. O’Cathain A, Thomas KJ, Drabble SJ, Rudolph A, Goode J, Hewison J. Maximising the value of combining qualitative research and randomised controlled trials in health research: the QUAlitative Research in Trials (QUART) study – a mixed methods study. Health Technol Assess. 2014;18(38):1–197.

3. Clement C, Edwards SL, Rapport F, Russell IT, Hutchings HA. Exploring qualitative methods reported in registered trials and their yields (EQUITY): systematic review. Trials. 2018;19(1):589.

4. Hennessy M, Hunter A, Healy P, Galvin S, Houghton C. Improving trial recruitment processes: how qualitative methodologies can be used to address the top 10 research priorities identified within the PRioRiTy study. Trials. 2018;19:584.

5. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258.

6. Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Realist randomised controlled trials: a new approach to evaluating complex public health interventions. Soc Sci Med. 2012;75(12):2299–306.

7. O’Cathain A, Hoddinott P, Lewin S, Thomas KJ, Young B, Adamson J, et al. Maximising the impact of qualitative research in feasibility studies for randomised controlled trials: guidance for researchers. Pilot Feasibility Stud. 2015;1:32.

8. Cooper C, O’Cathain A, Hind D, Adamson J, Lawton J, Baird W. Conducting qualitative research within Clinical Trials Units: avoiding potential pitfalls. Contemp Clin Trials. 2014;38(2):338–43.

9. Rapport F, Storey M, Porter A, Snooks H, Jones K, Peconi J, et al. Qualitative research within trials: developing a standard operating procedure for a clinical trials unit. Trials. 2013;14:54.

10. Cardiff University. Centre for Trials Research. Available from: https://www.cardiff.ac.uk/centre-for-trials-research. Accessed 10 May 2024.

11. Pell B, Williams D, Phillips R, Sanders J, Edwards A, Choy E, et al. Using visual timelines in telephone interviews: reflections and lessons learned from the STAR Family Study. Int J Qual Methods. 2020;19:160940692091367.

12. Thomas-Jones E, Lloyd A, Roland D, Sefton G, Tume L, Hood K, et al. A prospective, mixed-methods, before and after study to identify the evidence base for the core components of an effective Paediatric Early Warning System and the development of an implementation package containing those core recommendations for use in th. BMC Pediatr. 2018;18:244.

13. May C, Finch T, Mair F, Ballini L, Dowrick C, Eccles M, et al. Understanding the implementation of complex interventions in health care: the normalization process model. BMC Health Serv Res. 2007;7:148.

14. May C, Finch T. Implementing, embedding, and integrating practices: an outline of normalization process theory. Sociology. 2009;43(3):535–54.

15. May CR, Mair F, Finch T, Macfarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: normalization process theory. Implement Sci. 2009;4:29.

16. Ogrinc G, Davies L, Goodman D, Batalden PB, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf. 2016;25:986–92.

17. Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008;337:a1655.

18. Jamal F, Fletcher A, Shackleton N, Elbourne D, Viner R, Bonell C. The three stages of building and testing mid-level theories in a realist RCT: a theoretical and methodological case-example. Trials. 2015;16(1):466.


Acknowledgements

Members of the Centre for Trials Research (CTR) Qualitative Research Group were collaborating authors: C Drew (Senior Research Fellow—Senior Trial Manager, Brain Health and Mental Wellbeing Division), D Gillespie (Director, Infection, Inflammation and Immunity Trials, Principal Research Fellow), R Hale (now Research Associate, School of Social Sciences, Cardiff University), J Latchem-Hastings (now Lecturer and Postdoctoral Fellow, School of Healthcare Sciences, Cardiff University), R Milton (Research Associate—Trial Manager), B Pell (now PhD student, DECIPHer Centre, Cardiff University), H Prout (Research Associate—Qualitative), V Shepherd (Senior Research Fellow), K Smallman (Research Associate), H Stanton (Research Associate—Senior Data Manager). Thanks are due to Kerry Hood and Aimee Grant for their involvement in developing processes and systems for qualitative research within CTR.

Funding

No specific grant was received to support the writing of this paper.

Author information

Authors and Affiliations

Centre for Trials Research, DECIPHer Centre, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Jeremy Segrott

Centre for Trials Research, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Sue Channon, Eleni Glarou, Jacqueline Hughes, Nina Jacob, Sarah Milosevic, Yvonne Moriarty, Mike Robling, Heather Strange, Julia Townson & Lucy Brookes-Howell

Division of Population Medicine, School of Medicine, Cardiff University, Neuadd Meirionnydd, Heath Park, Cardiff, CF14 4YS, UK

Eleni Glarou

Wales Centre for Public Policy, Cardiff University, Sbarc I Spark, Maindy Road, Cardiff, CF24 4HQ, UK

School of Social Sciences, Cardiff University, King Edward VII Avenue, Cardiff, CF10 3WA, UK

Josie Henley

DECIPHer Centre, School of Social Sciences, Cardiff University, Sbarc I Spark, Maindy Road, Cardiff, CF24 4HQ, UK

Bethan Pell


Qualitative Research Group

  • D. Gillespie
  • J. Latchem-Hastings
  • R. Milton
  • V. Shepherd
  • K. Smallman
  • H. Stanton

Contributions

JS contributed to the design of the work and interpretation of data and was responsible for leading the drafting and revision of the paper. SC contributed to the design of the work, the acquisition of data and the drafting and revision of the paper. AL contributed to the design of the work, the acquisition of data and the drafting and revision of the paper. EG contributed to a critical review of the manuscript and provided additional relevant references. JH provided feedback on initial drafts of the paper and contributed to subsequent revisions. JHu provided feedback on initial drafts of the paper and contributed to subsequent revisions. NG provided feedback on initial drafts of the paper and contributed to subsequent revisions. SM was involved in the acquisition and analysis of data and provided a critical review of the manuscript. YM was involved in the acquisition and analysis of data and provided a critical review of the manuscript. MR was involved in the interpretation of data and critical review and revision of the paper. HS contributed to the conception and design of the work, the acquisition and analysis of data, and the revision of the manuscript. JT provided feedback on initial drafts of the paper and contributed to subsequent revisions. LB-H made a substantial contribution to the design and conception of the work, led the acquisition and analysis of data, and contributed to the drafting and revision of the paper.

Corresponding author

Correspondence to Jeremy Segrott .

Ethics declarations

Ethics approval and consent to participate

Ethical approval was not sought as no personal or identifiable data was collected.

Consent for publication

Competing interests

All authors are or were members of staff or students in the Centre for Trials Research. JS is an associate editor of Trials .

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Segrott, J., Channon, S., Lloyd, A. et al. Integrating qualitative research within a clinical trials unit: developing strategies and understanding their implementation in contexts. Trials 25, 323 (2024). https://doi.org/10.1186/s13063-024-08124-7


Received: 20 October 2023

Accepted: 17 April 2024

Published: 16 May 2024

DOI: https://doi.org/10.1186/s13063-024-08124-7


Keywords:
  • Qualitative research
  • Qualitative methods
  • Trials units
  • Normalisation Process Theory
  • Randomised controlled trials




What is Qualitative in Qualitative Research

Patrik Aspers

1 Department of Sociology, Uppsala University, Uppsala, Sweden

2 Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland

3 Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway

What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.

If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.

To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance, Howard Becker (1963) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also become aware that those sensations are related to using it. Third, they need to learn to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963:56).

Another central aspect, derived through the common-to-all-research interplay between induction and deduction (Becker 2017), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and, eventually, his findings. Each phase typically unfolds through social interaction, and often with input from experienced users, in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963:235). In this study the increased understanding of smoking dope is a result of a combination of the meanings of the actors and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017).

Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.

The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.

Others (e.g. Hammersley 2018; Fine and Hancock 2017) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of “qualitative research.” This threat can be further divided into that which comes from inside academia, such as the critique voiced by “quantitative research,” and that which comes from outside of academia, including, for example, New Public Management. Hammersley (2018), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine (2003), and before him Gans (1999), he writes that ethnography has acquired a range of meanings and comes in many different versions, these often reflecting sharply divergent epistemological orientations. And already more than twenty years ago, while reviewing Denzin and Lincoln’s Handbook of Qualitative Methods, Fine argued:

While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).

According to Hammersley, there are today “serious threats to the practice of ethnographic work, on almost any definition” (2018:1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on “big data” and the emphasis on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley also discusses some more or less insufficient existing definitions of ethnography.

The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.

Towards a Definition of Qualitative Research

Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.

Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski’s (1922) Argonauts of the Western Pacific, although even this study does not explicitly address the meaning of “qualitative.” In Weber’s ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982).

If we look through major sociology journals like the American Sociological Review, American Journal of Sociology, or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker’s study (1963), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995). Our point is that, before this disciplinary self-awareness, the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005). In the US, World War II seems to have sparked a critique of sociological work, including “qualitative work,” that did not follow the scientific canon (Rawls 2018), which was underpinned by a scientifically oriented and value-free philosophy of science. As a result, the attempt and practice of integrating qualitative and quantitative sociology at Chicago lost ground to the more survey-oriented and quantitative sociology pursued at Columbia under Merton and Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951) that facilitated the use of this approach and its “methods.” The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.

This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: a definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is:

Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)

Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches (2007:2–4; 2002:6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either elided or eschewed.

We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982:239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach: looking at the attempts to codify these practices in the form of a definition, and, to a minor extent, taking Becker’s study as an exemplar of what qualitative researchers actually do and of what the characteristic of being “qualitative” denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.

Lingering problems of “generalizability” and “how many cases do I need” (Small 2009) are blocking advancement – in this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009). Additionally, quantitative researchers sometimes unfairly criticize the former based on their own quality criteria. Scholars like Goertz and Mahoney (2012) have successfully focused on the different norms and practices behind what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker (2017), who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.

The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982:239). A third consequence is providing an opening for critiques by scholars operating within different traditions (Valsiner 2000:101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012:9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research, involving strategies to improve it and to develop standards of its evaluation. However, a specific focus on its distinguishing feature of being “qualitative,” while implicitly acknowledged, was discussed only briefly (for example, Best 2004).

In 2014 a theme issue was published in this journal on “Methods, Materials, and Meanings: Designing Cultural Analysis,” discussing central issues in (cultural) qualitative research (Berezin 2014; Biernacki 2014; Glaeser 2014; Lamont and Swidler 2014; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called “quantitative research” is of utmost importance to avoid misunderstandings and misguided debates between “qualitative” and “quantitative” researchers. Our strategy means that researchers, whether “qualitative” or “quantitative,” may in their actual practice combine qualitative and quantitative work.

In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work ( 1963 ). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large amount of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.

In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990 ). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods and textbooks, mainly in English, which is common to methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.

In an ideal situation we should expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched in a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.

In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We zoomed in on this time frame because we expected that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation.

As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles published during the same period (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.

Lastly we picked two additional journals, Qualitative Research and Qualitative Sociology , in which we could expect to find texts addressing the notion of “qualitative.” From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of the issue three issues prior. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. This selection procedure was used to obtain a manageable sample for the analysis.

The coding process of the 89 references we gathered in our selected review began soon after the first round of material was gathered, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993:22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This – which may appear as an illogical grouping – merely reflects the “context” in which the matter of “qualitative” is discussed. While the selection process of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature and that indicate what the core of qualitative research is. We have labeled them: “distinctions,” “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.

Analysis – What is Qualitative Research?

In this section we describe the four categories we identified in the coding, how they differently discuss qualitative research, as well as their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.

Qualitative and Quantitative

This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.

The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:

Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)

Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).

Others simply say that qualitative methods are inherently unscientific (Jovanović 2011:19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006:219). Qualitative methodologists, in turn, have raised concerns over the limitations of quantitative templates (Brady et al. 2004:4). Scholars such as King et al. (1994), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Also, researchers such as Becker (1966:59; 1970:42–43) have asserted that, if conducted properly, qualitative research and in particular ethnographic field methods can lead to more accurate results than quantitative studies, in particular survey research and laboratory experiments.

Some researchers, such as Kalof, Dan, and Dietz (2008:79) claim that the boundaries between the two approaches are becoming blurred, and Small (2009) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994:4). Ragin (1994), and similarly Becker (1996:53) and Marchel and Owens (2007:303), think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994:xii). Some claim that quantitative data can be utilized to discover associations, but in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009:35). Consequently, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009:35). Others contend that qualitative research is particularly well suited both to identify causality and to uncover fine descriptive distinctions (Fine and Hallett 2014; Lichterman and Isaac Reed 2014; Katz 2015).

There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, or concerned with scientific approaches to societal change or vice versa; Snow and Morrill 1995; Denzin and Lincoln 2005), or whether it should develop falsifiable statements (Best 2004).

We propose that quantitative research is largely concerned with pre-determined variables (Small 2008); the analysis concerns the relations between variables. These categories are primarily not questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study (1963), we see that he questioned pre-dispositional theories of deviant behavior working with pre-determined variables such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marijuana consumption by developing “variables” as part of the investigation. Thereby he presented new variables, or as we would say today, theoretical concepts, which are grounded in the empirical material.

Qualitative Research

This category contains quotations that refer to descriptions of qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003 ; 2005 ), citing Nelson et al. (1992:4), argue that because qualitative research is “interdisciplinary, transdisciplinary, and sometimes counterdisciplinary” it is difficult to derive one single definition of it (Jovanović 2011 :3). According to them, in fact, “the field” is “many things at the same time,” involving contradictions, tensions over its focus, methods, and how to derive interpretations and findings ( 2003 : 11). Similarly, others, such as Flick ( 2007 :ix–x) contend that agreeing on an accepted definition has increasingly become problematic, and that qualitative research has possibly matured different identities. However, Best holds that “the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions” ( 2004 :54). Atkinson’s position is more definite: “the current state of qualitative research and research methods is confused” ( 2005 :3–4).

Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?

Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005:2; Creswell 2009), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be conducted in situ? And does qualitative research have to be inherently concerned with meaning? Flick (2007), referring to Denzin and Lincoln (2005), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin (1994:85), note that qualitative research is often (especially early on in the project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility that can lead to potentially better, but also worse, results. But is this not a feature of this type of research, rather than a defining description of its essence? Wouldn’t this comment also apply, albeit to varying degrees, to quantitative research?

In addition, Strauss (2003), along with others such as Alvesson and Kärreman (2011:10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partially because they tend to collect a large amount of data. While his analysis is correct at some points – “It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data” (Strauss 2003:10) – much of his analysis concerns the supposed focus of qualitative research and its challenges, rather than exactly what it is about. But even in this instance, arguing that these are strictly the defining features of qualitative research would make for a weak case. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.

We can also see that in this category there is no consensus about specific qualitative methods nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides indeed requires interpretation. However, there is no interpretation of quantitative raw data, i.e., numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material that is being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describe the defining characteristics of this approach.

Although qualitative research does not appear to be defined in terms of a specific method, it is certainly common that fieldwork, i.e., research that entails that the researcher spends considerable time in the field that is studied and uses the knowledge gained as data, is seen as emblematic of or even identical to qualitative research. But because fieldwork, as we understand it, tends to focus primarily on the collection and analysis of qualitative data, we expected to find within it discussions on the meaning of “qualitative.” Again, however, this was not the case.

Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).

We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988:1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018). This may be accomplished by direct observation and also participation (Neuman 2007:276), although others, such as Denzin (1970:185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), by how it is carried out (a variety of methods are applied) or, for some most importantly, by involving an active, empathetic immersion in those being studied (Emerson 1988:2). We also retrieved descriptions of the goals it pursues in relation to how it is taught (understanding the subjective meanings of the people studied, primarily developing theory, or contributing to social change) (see for example, Corte and Irwin 2017; Frankfort-Nachmias and Nachmias 1996:281; Trier-Bieniek 2012:639) by collecting the richest possible data (Lofland et al. 2006) to derive “thick descriptions” (Geertz 1973), and/or to aim at theoretical statements of general scope and applicability (for example, Emerson 1988; Fine 2003). We have identified guidelines on how to evaluate it (for example Becker 1996; Lamont 2004) and have retrieved instructions on how it should be conducted (for example, Lofland et al. 2006). For instance, analysis should take place while the data gathering unfolds (Emerson 1988; Hammersley and Atkinson 2007; Lofland et al. 2006), observations should be of long duration (Becker 1970:54; Goffman 1989), and data should be of high quantity (Becker 1970:52–53). We also found other, questionable, distinctions between fieldwork and other methods:

Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself. This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995:5)

But could not, for example, a strictly interview-based study be carried out with the same amount of flexibility, such as sequential interviewing (for example, Small 2009 )? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge and close interaction with people, their practices and their lifeworld.

It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. In Becker’s case, he had general experience from fields in which marihuana was used, based on which he did interviews with actual users in several fields.

Grounded Theory

Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010 ) original articulation, Strauss and Corbin ( 1998 ) and Charmaz ( 2006 ), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003 :xi). We identified explanations of how this task can be accomplished – such as through two main procedures: constant comparison and theoretical sampling (Emerson 1998:96), and how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998 :1). We also read descriptions of its main traits, what it entails and fosters – for instance, an exceptional flexibility, an inductive approach (Strauss and Corbin 1998 :31–33; 1990; Esterberg 2002 :7), an ability to step back and critically analyze situations, recognize tendencies towards bias, think abstractly and be open to criticism, enhance sensitivity towards the words and actions of respondents, and develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998 :5–6). Accordingly, we identified discussions of the value of triangulating different methods (both using and not using grounded theory), including quantitative ones, and theories to achieve theoretical development (most comprehensively in Denzin 1970 ; Strauss and Corbin 1998 ; Timmermans and Tavory 2012 ). We have also located arguments about how its practice helps to systematize data collection, analysis and presentation of results (Glaser and Strauss [1967] 2010 :16).

Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement of identifying questions and developing new concepts or making further distinctions with regard to old concepts. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process, and the numerous fine-tuned distinctions that the researcher makes during the process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.

Defining Qualitative Research

In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, “getting close” and making distinctions. These notions capture aspects of what we think is “qualitative.” However, a comprehensive definition that is useful and that can further develop the field is lacking, and not even a clear picture of its essential elements appears. In other words no definition emerges from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted and numbered variables. This means that the data consist of variables on different scales, such as ordinal, but frequently ratio and absolute scales, and that the representation of the variables by numbers, i.e., the justification for assigning numbers to an object or phenomenon, is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.

Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.

Distinctions

We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. It clearly sets it apart from quantitative analysis, which works with taken-for-granted variables, although, as mentioned, meta-analyses such as factor analysis may result in new variables. “Quality” refers essentially to distinctions, as already pointed out by Aristotle. He discusses the term “quality,” commenting: “By a quality I mean that in virtue of which things are said to be qualified somehow” (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge; a point, as we will see, that also has implications for “quantitative research.” The notion of being “significant” is paramount. New distinctions by themselves are not enough; just adding concepts only increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987; Davidsson [1988] 2001) to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: “Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data” (1994:79). When Becker (1963) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke. This is a classic example of how the strategy of “getting close” to the material, for example the text, people or pictures that are subject to analysis, may enable researchers to obtain deeper insight and new knowledge by making distinctions – in this instance on the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. (1995), for example, hold that “qualitative coding is a way of opening up avenues of inquiry,” meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on data (Emerson et al. 1995:151). Goodwin and Horowitz highlight making distinctions in relation to theory-building, writing: “Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data” (2002:37). In ideal-typical quantitative research only existing and, so to speak, given variables would be used. If this is the case, no new distinctions are made. But would not many “quantitative” researchers also make new distinctions?

Process

Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation between theory and evidence, analysis and generating material, between first- and second-order constructs (Schütz 1962:59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.

In the research process, a pilot-study may be used to get a first glance of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straightjackets (Becker 1970 :43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine tune it, and test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis ( 1963 :44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means to move between “theory” and “material” but also to deal with negative cases, and Becker ( 1998 ) describes how discovering these negative cases impacted his research design and ultimately its findings.

Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data, or iteration between data, evidence, hypotheses, empirical work, and theory. The data, quantified variables, are in most cases fixed. Merging of data, which of course can be done in a quantitative research process, does not mean new data. New hypotheses are frequently tested, but the “raw data” is often “the same.” Obviously, over time new datasets are made available and put into use.

Closeness

Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman (1989) put it, ought to, get closer to the phenomenon being studied and their data than quantitative researchers (for example, Silverman 2009:85). Put differently, essentially because of their methods qualitative researchers get into direct, close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started out his interview study, as we noted, by talking to those he knew in the field of music to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got even closer to the field.

Additionally, the ethnographic design enables researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, new data (related, for example, to archival research as additional sources, and, for ethnography, to situations not necessarily previously thought of as instrumental – what Mannay and Morgan (2015) term the “waiting field”), serendipity (Merton and Barber 2004; Åkerström 2013), and possibly reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker’s finding of what medical students call “crocks” (Becker et al. 1961:317), and Geertz’s (1973) study of “deep play” in Balinese society.

By getting and staying so close to their data – be it pictures, text or humans interacting (Becker was himself a musician) – for a long time, as the research progressively focuses, qualitative researchers are prompted to continually test their hunches, presuppositions and hypotheses. They test them against a reality that often (but certainly not always), and practically, as well as metaphorically, talks back, whether by validating them, or disqualifying their premises – correctly, as well as incorrectly (Fine 2003 ; Becker 1970 ). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he was initially reading psychological theories, but when facing the data he develops a theory that looks at, you may say, everything but psychological dispositions to explain the use of marihuana. Especially researchers involved with ethnographic methods have a fairly unique opportunity to dig up and then test (in a circular, continuous and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.

Let us stress that getting close is by no means restricted to ethnography. The notion of hermeneutic circle and hermeneutics as a general way of understanding implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can literally also make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material or when analyzing it.

Quantitative research, we maintain, in the ideal-typical representation cannot get closer to the data. The data is essentially numbers in tables making up the variables (Franzosi 2016 :138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the number may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done, is not about the data itself—the numbers—but what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode without direct empirical evidence (cf. Becker 2017 ).

Improved Understanding

While distinction, process and getting closer refer to the qualitative work of the researcher, improved understanding refers to the conditions and outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding, since explanation does not include an idea of how knowledge is gained (Manicas 2006:15). Understanding, we argue, is the core concept of what we call the outcome of the process when research has made use of all the other elements that were integrated in the research. Understanding, then, has a special status in qualitative research since it refers both to the conditions of knowledge and to the outcome of the process. Understanding can to some extent be seen as the condition of explanation and occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990). It is fundamentally connected to knowing, and to the knowing of how to do things (Heidegger [1927] 2001). Conceptually the term hermeneutics is used to account for this process. Heidegger (1988) ties hermeneutics to human being and holds that it cannot be separated from the understanding of being. Here we use it in a broader sense, more connected to method in general (cf. Seiffert 1992). The abovementioned aspects – for example, “objectivity” and “reflexivity” – of the approach are conditions of scientific understanding. Understanding is the result of a circular process and means that the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, or in other words, some knowledge of the phenomenon studied. This pre-understanding, even in the form of prejudices, is questioned in the qualitative research process, which we see as iterative, and gradually or suddenly changes due to the iteration of data, evidence and concepts. Qualitative research thus generates understanding in the iterative process in which the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data that changes the evidence, and, ultimately, the findings. Questioning – asking questions and putting what one assumes, prejudices and presumptions, into question – is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.

Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we get to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014 ). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when the elements are integrated in an iterative process.

It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To be qualified as “scientific,” the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough in order to speak of “scientific understanding.” Though we as a collective can increase understanding of everything in virtually all potential directions as a result also of qualitative work, we refrain from this “objective” way of understanding, which has no means of discriminating between what we gain in understanding. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962:35–38, 46, 63), and that it rests on the pre-understanding that the scientists have and must have in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective perspective. Improved understanding is related to the question(s) at hand. Understanding, in order to represent an improvement, must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber’s famous note on objectivity, namely that scientific work aims at truths “which … can claim, even for a Chinese, the validity appropriate to an empirical analysis” ([1904] 1949:59). By qualifying “improved understanding” we argue that it is a general defining characteristic of qualitative research. Becker’s (1966) study and other research on deviant behavior increased our understanding of the social learning processes of how individuals start a behavior. And it also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make the same large contribution as Becker’s, but they are nonetheless qualitative research.

Understanding in the phenomenological sense, which we argue is a hallmark of qualitative research, requires meaning, and this meaning is derived from the context, and above all from the data being analyzed. Ideal-typical quantitative research operates with given variables that take different numbers. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning at the level of the data analyzed is understood. There are, then, explanations that do meet the criteria of understanding, for example the explanation of how one becomes a marihuana smoker presented by Becker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations, or rather correlations, that are not really understood.

We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The “raw data” that quantitative research, as an ideal-typical activity, refers to are not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016: 138). If the researcher is to do “more” or “change” something, this will be done through conjectures based on theoretical knowledge or on the researcher’s lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger (2001) and Heisenberg (cited in Franzosi 2010:619). Qualitative research, as we have argued, involves the interaction and questioning of concepts (theory), data, and evidence.

Ragin (2004:22) points out that “a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques).” We define qualitative research as an iterative process in which improved understanding for the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, thus combines two criteria: (i) how to do things, namely generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions, and (ii) the outcome, improved understanding that is novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion “qualitative research” is the outcome of an iterative process, involving both deduction and induction, in which we identified the categories that we analyzed. We thus claim to meet the first criterion, “how to do things.” The second criterion can be judged by us only in a partial way, namely whether the outcome, in concrete form the definition, improves understanding for others in the scientific community.

We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning the pre-given (taken-for-granted) variables, but it is also about making new distinctions of any type of phenomenon, for example by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material and previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles, all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.

We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed but can vary in terms of dimensions, such as frequency or number. Age is an example; as a variable it can take on different numbers. In relation to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the process comes to a halt: the researcher becomes more distanced from her data, and it is no longer possible to make new distinctions that increase our understanding. We have discussed above the components of our definition in relation to quantitative research. Our conclusion is that the research that is called quantitative contains frequent and necessary qualitative elements.

Further, we propose that comparative empirical research on researchers primarily working with “quantitative” approaches and those working with “qualitative” approaches would perhaps show that there are many similarities in the practices of these two approaches. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, differences may be exaggerated (e.g., Becker 1996: 53, 2017; Marchel and Owens 2007:303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.

Several things follow from our findings. The most important concerns the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either “quantitative” or “qualitative.” By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements and how they, when combined in practice, generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled “qualitative,” such as interviews or participant observation, are inherently “qualitative.” What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, up to the point at which the results are produced. Consequently, our analysis also suggests that researchers working with what in the literature and in jargon is often called “quantitative research” are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many “quantitative” researchers are, at least to some extent, engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between “qualitative” and “quantitative,” or start out as “qualitative” and later move into a “quantitative” phase (a distinct strategy that is not the same as “mixed methods” or simply combining induction and deduction). More generally, the categories of “qualitative” and “quantitative” unfortunately often cover up practices, and they may lead to “camps” of researchers opposing one another. For example, regardless of whether the researcher is primarily oriented to “quantitative” or “qualitative” research, the role of theory is often neglected (cf. Swedberg 2017). Our results open the way for an interaction characterized not by differences but by different emphases, and by similarities.

Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative ones. Franzosi (2010) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood by what is taken for granted, and by what is part of the lifeworld (Schütz 1962). Franzosi shows how one can use qualitative and quantitative methods and data to address scientific questions, analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers (2006) studied the meanings held by fashion photographers. He uses an empirical phenomenological approach and establishes meaning at the level of actors. In a second step, this meaning, and the different ideal-typical photographers constructed on the basis of participant observation and interviews, are tested using quantitative data from a database: in the first phase to verify the different ideal types, and in the second phase to use these types to establish new knowledge about them. In both of these cases, and more examples can be found, the authors move from qualitative data and try to preserve the meaning established when using the quantitative data.

A second main result of our study is that a definition, and we have provided one, offers a way for researchers to clarify, and even evaluate, what is done. Hence, our definition can guide researchers and students, informing them how to think about the concrete research problems they face and showing what it means to get closer in a process in which new distinctions are made. The definition can also be used to evaluate results, given that it provides a standard of evaluation (cf. Hammersley 2007): to see whether new distinctions are made and whether this improves our understanding of what is researched, in addition to evaluating how the research was conducted. By making explicit what qualitative research is, it becomes easier to communicate findings, and it is thereby much harder to fly under the radar with substandard research, since there are standards of evaluation that make it easier to separate “good” from “not so good” qualitative research.

To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the “internal” issues of what qualitative research is and the “external” critiques that make it harder to do qualitative research, critiques to which both pressure from quantitative methods and general changes in society contribute.

Acknowledgements

Financial support for this research was provided by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from many useful comments by the three reviewers and the editor, from comments by members of the Uppsala Laboratory of Economic Sociology, and from Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.

Biographies

Patrik Aspers is professor of sociology at the Department of Sociology, Uppsala University and Universität St. Gallen. His main focus is economic sociology, and in particular, markets. He has published numerous articles and books, including Orderly Fashion (Princeton University Press 2010), Markets (Polity Press 2011) and Re-Imagining Economic Sociology (edited with N. Dodd, Oxford University Press 2015). His book Ethnographic Methods (in Swedish) has already gone through several editions.

Ugo Corte is associate professor of sociology at the Department of Media and Social Sciences, University of Stavanger. His research has been published in journals such as Social Psychology Quarterly, Sociological Theory, Teaching Sociology, and Music and Arts in Action. As an ethnographer he is working on a book on the social world of big-wave surfing.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Patrik Aspers, Email: [email protected] .

Ugo Corte, Email: [email protected] .

  • Åkerström M. Curiosity and serendipity in qualitative research. Qualitative Sociology Review. 2013; 9 (2):10–18. [ Google Scholar ]
  • Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence . Oxford: Oxford University Press.
  • Alvesson M, Kärreman D. Qualitative research and theory development . Mystery as method . London: SAGE Publications; 2011. [ Google Scholar ]
  • Aspers, Patrik. 2006. Markets in Fashion: A Phenomenological Approach. London: Routledge.
  • Atkinson P. Qualitative research. Unity and diversity. Forum: Qualitative Social Research. 2005; 6 (3):1–15. [ Google Scholar ]
  • Becker HS. Outsiders. Studies in the sociology of deviance . New York: The Free Press; 1963. [ Google Scholar ]
  • Becker HS. Whose side are we on? Social Problems. 1966; 14 (3):239–247. [ Google Scholar ]
  • Becker HS. Sociological work. Method and substance. New Brunswick: Transaction Books; 1970. [ Google Scholar ]
  • Becker HS. The epistemology of qualitative research. In: Richard J, Anne C, Shweder RA, editors. Ethnography and human development. Context and meaning in social inquiry. Chicago: University of Chicago Press; 1996. pp. 53–71. [ Google Scholar ]
  • Becker HS. Tricks of the trade. How to think about your research while you're doing it. Chicago: University of Chicago Press; 1998. [ Google Scholar ]
  • Becker, Howard S. 2017. Evidence. Chicago: University of Chicago Press.
  • Becker H, Geer B, Hughes E, Strauss A. Boys in White, student culture in medical school. New Brunswick: Transaction Publishers; 1961. [ Google Scholar ]
  • Berezin M. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology. 2014; 37 (2):141–151. [ Google Scholar ]
  • Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , eds . Charles, Ragin, Joanne, Nagel, and Patricia White, 53-54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf .
  • Biernacki R. Humanist interpretation versus coding text samples. Qualitative Sociology. 2014; 37 (2):173–188. [ Google Scholar ]
  • Blumer H. Symbolic interactionism: Perspective and method. Berkeley: University of California Press; 1969. [ Google Scholar ]
  • Brady H, Collier D, Seawright J. Refocusing the discussion of methodology. In: Henry B, David C, editors. Rethinking social inquiry. Diverse tools, shared standards. Lanham: Rowman and Littlefield; 2004. pp. 3–22. [ Google Scholar ]
  • Brown AP. Qualitative method and compromise in applied social research. Qualitative Research. 2010; 10 (2):229–248. [ Google Scholar ]
  • Charmaz K. Constructing grounded theory. London: Sage; 2006. [ Google Scholar ]
  • Corte, Ugo, and Katherine Irwin. 2017. “The Form and Flow of Teaching Ethnographic Knowledge: Hands-on Approaches for Learning Epistemology” Teaching Sociology 45(3): 209-219.
  • Creswell JW. Research design. Qualitative, quantitative, and mixed method approaches. 3. Thousand Oaks: SAGE Publications; 2009. [ Google Scholar ]
  • Davidson D. The myth of the subjective. In: Davidson D, editor. Subjective, intersubjective, objective. Oxford: Oxford University Press; 1988. pp. 39–52. [ Google Scholar ]
  • Denzin NK. The research act: A theoretical introduction to sociological methods. Chicago: Aldine Publishing Company; 1970. [ Google Scholar ]
  • Denzin NK, Lincoln YS. Introduction. The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, editors. Collecting and interpreting qualitative materials. Thousand Oaks: SAGE Publications; 2003. pp. 1–45. [ Google Scholar ]
  • Denzin NK, Lincoln YS. Introduction. The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, editors. The Sage handbook of qualitative research. Thousand Oaks: SAGE Publications; 2005. pp. 1–32. [ Google Scholar ]
  • Emerson RM, editor. Contemporary field research. A collection of readings. Prospect Heights: Waveland Press; 1988. [ Google Scholar ]
  • Emerson RM, Fretz RI, Shaw LL. Writing ethnographic fieldnotes. Chicago: University of Chicago Press; 1995. [ Google Scholar ]
  • Esterberg KG. Qualitative methods in social research. Boston: McGraw-Hill; 2002. [ Google Scholar ]
  • Fine, Gary Alan. 1995. Review of “handbook of qualitative research.” Contemporary Sociology 24 (3): 416–418.
  • Fine, Gary Alan. 2003. “ Toward a Peopled Ethnography: Developing Theory from Group Life.” Ethnography . 4(1):41-60.
  • Fine GA, Hancock BH. The new ethnographer at work. Qualitative Research. 2017; 17 (2):260–268. [ Google Scholar ]
  • Fine GA, Hallett T. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography. 2014; 3 (2):188–203. [ Google Scholar ]
  • Flick U. Qualitative research. State of the art. Social Science Information. 2002; 41 (1):5–24. [ Google Scholar ]
  • Flick U. Designing qualitative research. London: SAGE Publications; 2007. [ Google Scholar ]
  • Frankfort-Nachmias C, Nachmias D. Research methods in the social sciences. 5. London: Edward Arnold; 1996. [ Google Scholar ]
  • Franzosi R. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919- 1922)? Theory and Society. 2010; 39 (6):593–629. [ Google Scholar ]
  • Franzosi R. From method and measurement to narrative and number. International journal of social research methodology. 2016; 19 (1):137–141. [ Google Scholar ]
  • Gadamer, Hans-Georg. 1990. Wahrheit und Methode, Grundzüge einer philosophischen Hermeneutik . Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.
  • Gans H. Participant Observation in an Age of “Ethnography” Journal of Contemporary Ethnography. 1999; 28 (5):540–548. [ Google Scholar ]
  • Geertz C. The interpretation of cultures. New York: Basic Books; 1973. [ Google Scholar ]
  • Gilbert N. Researching social life. 3. London: SAGE Publications; 2009. [ Google Scholar ]
  • Glaeser A. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology. 2014; 37 :207–241. [ Google Scholar ]
  • Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.
  • Goertz G, Mahoney J. A tale of two cultures: Qualitative and quantitative research in the social sciences. Princeton: Princeton University Press; 2012. [ Google Scholar ]
  • Goffman E. On fieldwork. Journal of Contemporary Ethnography. 1989; 18 (2):123–132. [ Google Scholar ]
  • Goodwin J, Horowitz R. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology. 2002; 25 (1):33–47. [ Google Scholar ]
  • Habermas, Jürgen. [1981] 1987. The theory of communicative action . Oxford: Polity Press.
  • Hammersley M. The issue of quality in qualitative research. International Journal of Research & Method in Education. 2007; 30 (3):287–305. [ Google Scholar ]
  • Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.
  • Hammersley M. What is ethnography? Can it survive should it? Ethnography and Education. 2018; 13 (1):1–17. [ Google Scholar ]
  • Hammersley M, Atkinson P. Ethnography . Principles in practice . London: Tavistock Publications; 2007. [ Google Scholar ]
  • Heidegger M. Sein und Zeit. Tübingen: Max Niemeyer Verlag; 2001. [ Google Scholar ]
  • Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität. Gesamtausgabe, II. Abteilung: Vorlesungen 1919–1944, Band 63. Frankfurt am Main: Vittorio Klostermann.
  • Hempel CG. Philosophy of the natural sciences. Upper Saddle River: Prentice Hall; 1966. [ Google Scholar ]
  • Hood JC. Teaching against the text. The case of qualitative methods. Teaching Sociology. 2006; 34 (3):207–223. [ Google Scholar ]
  • James W. Pragmatism. New York: Meridian Books; [1907] 1955. [ Google Scholar ]
  • Jovanović G. Toward a social history of qualitative research. History of the Human Sciences. 2011; 24 (2):1–27. [ Google Scholar ]
  • Kalof L, Dan A, Dietz T. Essentials of social research. London: Open University Press; 2008. [ Google Scholar ]
  • Katz J. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research. 2015; 44 (1):108–144. [ Google Scholar ]
  • King G, Keohane RO, Verba S. Designing social inquiry: Scientific inference in qualitative research. Princeton: Princeton University Press; 1994. [ Google Scholar ]
  • Lamont M. Evaluating qualitative research: Some empirical findings and an agenda. In: Lamont M, White P, editors. Report from workshop on interdisciplinary standards for systematic qualitative research. Washington, DC: National Science Foundation; 2004. pp. 91–95. [ Google Scholar ]
  • Lamont M, Swidler A. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology. 2014; 37 (2):153–171. [ Google Scholar ]
  • Lazarsfeld P, Barton A. Some functions of qualitative analysis in social research. In: Kendall P, editor. The varied sociology of Paul Lazarsfeld. New York: Columbia University Press; 1982. pp. 239–285. [ Google Scholar ]
  • Lichterman, Paul, and Isaac Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods and Research. Prepublished 27 October 2014; 10.1177/0049124114554458.
  • Lofland J, Lofland L. Analyzing social settings. A guide to qualitative observation and analysis. 3. Belmont: Wadsworth; 1995. [ Google Scholar ]
  • Lofland J, Snow DA, Anderson L, Lofland LH. Analyzing social settings. A guide to qualitative observation and analysis. 4. Belmont: Wadsworth/Thomson Learning; 2006. [ Google Scholar ]
  • Long AF, Godfrey M. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology. 2004; 7 (2):181–196. [ Google Scholar ]
  • Lundberg G. Social research: A study in methods of gathering data. New York: Longmans, Green and Co.; 1951. [ Google Scholar ]
  • Malinowski B. Argonauts of the Western Pacific: An account of native Enterprise and adventure in the archipelagoes of Melanesian New Guinea. London: Routledge; 1922. [ Google Scholar ]
  • Manicas P. A realist philosophy of science: Explanation and understanding. Cambridge: Cambridge University Press; 2006. [ Google Scholar ]
  • Marchel C, Owens S. Qualitative research in psychology. Could William James get a job? History of Psychology. 2007; 10 (4):301–324. [ PubMed ] [ Google Scholar ]
  • McIntyre LJ. Need to know. Social science research methods. Boston: McGraw-Hill; 2005. [ Google Scholar ]
  • Merton RK, Barber E. The travels and adventures of serendipity . A Study in Sociological Semantics and the Sociology of Science. Princeton: Princeton University Press; 2004. [ Google Scholar ]
  • Mannay D, Morgan M. Doing ethnography or applying a qualitative technique? Reflections from the ‘waiting field‘ Qualitative Research. 2015; 15 (2):166–182. [ Google Scholar ]
  • Neuman LW. Basics of social research. Qualitative and quantitative approaches. 2. Boston: Pearson Education; 2007. [ Google Scholar ]
  • Ragin CC. Constructing social research. The unity and diversity of method. Thousand Oaks: Pine Forge Press; 1994. [ Google Scholar ]
  • Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , 22, ed. Charles C. Ragin, Joane Nagel, Patricia White. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf
  • Rawls, Anne. 2018. The Wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of ‘science,’ European Journal of Social Theory (Online first).
  • Schütz A. Collected papers I: The problem of social reality. The Hague: Nijhoff; 1962. [ Google Scholar ]
  • Seiffert H. Einführung in die Hermeneutik. Tübingen: Franke; 1992. [ Google Scholar ]
  • Silverman D. Doing qualitative research. A practical handbook. 2. London: SAGE Publications; 2005. [ Google Scholar ]
  • Silverman D. A very short, fairly interesting and reasonably cheap book about qualitative research. London: SAGE Publications; 2009. [ Google Scholar ]
  • Silverman D. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review. 2013; 9 (2):48–55. [ Google Scholar ]
  • Small ML. “How many cases do I need?” on science and the logic of case selection in field-based research. Ethnography. 2009; 10 (1):5–38. [ Google Scholar ]
  • Small, Mario L. 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research, eds. Michèle Lamont and Patricia White, 165–171. Washington, DC: National Science Foundation.
  • Snow DA, Anderson L. Down on their luck: A study of homeless street people. Berkeley: University of California Press; 1993. [ Google Scholar ]
  • Snow DA, Morrill C. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography. 1995; 24 (3):341–349. [ Google Scholar ]
  • Strauss AL. Qualitative analysis for social scientists. 14. Cambridge: Cambridge University Press; 2003. [ Google Scholar ]
  • Strauss AL, Corbin JM. Basics of qualitative research. Techniques and procedures for developing grounded theory. 2. Thousand Oaks: Sage Publications; 1998. [ Google Scholar ]
  • Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.
  • Swedberg R. The new 'Battle of Methods'. Challenge January–February. 1990; 3 (1):33–38. [ Google Scholar ]
  • Timmermans S, Tavory I. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory. 2012; 30 (3):167–186. [ Google Scholar ]
  • Trier-Bieniek A. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research. 2012; 12 (6):630–644. [ Google Scholar ]
  • Valsiner J. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information. 2000; 39 (1):99–113. [ Google Scholar ]
  • Weber, Max. [1904] 1949. ‘Objectivity’ in social science and social policy. Ed. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.

Determinants of appropriate antibiotic and NSAID prescribing in unscheduled outpatient settings in the Veterans Health Administration

  • Michael J. Ward 1 , 2 , 3 , 4 ,
  • Michael E. Matheny 1 , 4 , 5 , 6 ,
  • Melissa D. Rubenstein 3 ,
  • Kemberlee Bonnet 7 ,
  • Chloe Dagostino 7 ,
  • David G. Schlundt 7 ,
  • Shilo Anders 4 , 8 ,
  • Thomas Reese 4 &
  • Amanda S. Mixon 1 , 9  

BMC Health Services Research, volume 24, article number 640 (2024)


Despite efforts to enhance the quality of medication prescribing in outpatient settings, potentially inappropriate prescribing remains common, particularly in unscheduled settings where patients can present with infectious and pain-related complaints. Two of the most commonly prescribed medication classes in outpatient settings with frequent rates of potentially inappropriate prescribing include antibiotics and nonsteroidal anti-inflammatory drugs (NSAIDs). In the setting of persistent inappropriate prescribing, we sought to understand a diverse set of perspectives on the determinants of inappropriate prescribing of antibiotics and NSAIDs in the Veterans Health Administration.

We conducted a qualitative study guided by the Consolidated Framework for Implementation Research and Theory of Planned Behavior. Semi-structured interviews were conducted with clinicians, stakeholders, and Veterans from March 1, 2021 through December 31, 2021 within the Veteran Affairs Health System in unscheduled outpatient settings at the Tennessee Valley Healthcare System. Stakeholders included clinical operations leadership and methodological experts. Audio-recorded interviews were transcribed and de-identified. Data coding and analysis were conducted by experienced qualitative methodologists adhering to the Consolidated Criteria for Reporting Qualitative Studies guidelines. Analysis was conducted using an iterative inductive/deductive process.

We conducted semi-structured interviews with 66 participants: clinicians (N = 25), stakeholders (N = 24), and Veterans (N = 17). We identified six themes contributing to potentially inappropriate prescribing of antibiotics and NSAIDs: 1) perceived versus actual Veteran expectations about prescribing; 2) the influence of a time-pressured clinical environment on prescribing stewardship; 3) limited clinician knowledge, awareness, and willingness to use evidence-based care; 4) prescriber uncertainties about the Veteran's condition at the time of the clinical encounter; 5) limited communication; and 6) technology barriers of the electronic health record and patient portal.

Conclusions

The diverse perspectives on prescribing underscore the need for interventions that recognize the detrimental impact of high workload on prescribing stewardship and the need to design interventions with the end-user in mind. This study revealed actionable themes that could be addressed to improve guideline concordant prescribing to enhance the quality of prescribing and to reduce patient harm.


Adverse drug events (ADEs) are the most common iatrogenic injury. [ 1 ] Efforts to reduce these events have primarily focused on the inpatient setting. However, the emergency department (ED), urgent care, and urgent primary care clinics are desirable targets for interventions to reduce ADEs because approximately 70% of all outpatient encounters occur in one of these settings. [ 2 ] Two of the most commonly prescribed drug classes during acute outpatient care visits that have frequent rates of potentially inappropriate prescribing include antibiotics and non-steroidal anti-inflammatory drugs (NSAIDs). [ 3 , 4 ]

An estimated 30% of all outpatient oral antibiotic prescriptions may be unnecessary. [ 5 , 6 ] The World Health Organization identified overuse of antibiotics and its resulting antimicrobial resistance as a global threat. [ 7 ] The Centers for Disease Control and Prevention (CDC) conservatively estimates that in the US there are nearly 3 million antibiotic-resistant infections that cause 48,000 deaths annually. [ 8 ] Antibiotics were the second most common source of adverse events with nearly one ADE resulting in an ED visit for every 100 prescriptions. [ 9 ] Inappropriate antibiotic prescriptions (e.g., antibiotic prescription for a viral infection) also contribute to resistance and iatrogenic infections such as C. difficile (antibiotic associated diarrhea) and Methicillin-resistant Staphylococcus aureus (MRSA) . [ 8 ] NSAID prescriptions, on the other hand, result in an ADE at more than twice the rate of antibiotics (2.2%), [ 10 ] are prescribed to patients at an already increased risk of potential ADEs, [ 4 , 11 ] and frequently interact with other medications. [ 12 ] Inappropriate NSAID prescriptions contribute to serious gastrointestinal, [ 13 ] renal, [ 14 ] and cardiovascular [ 15 , 16 ] ADEs such as gastrointestinal bleeding, acute kidney injury, and myocardial infarction or heart failure, respectively. Yet, the use of NSAIDs is ubiquitous; according to the CDC, between 2011 and 2014, 5% of the US population were prescribed an NSAID whereas an additional 2% take NSAIDs over the counter. [ 11 ]

Interventions to reduce inappropriate antibiotic prescribing commonly take the form of antimicrobial stewardship programs. However, no such national programs exist for NSAIDs, particularly in acute outpatient care settings. There is a substantial body of evidence supporting the effectiveness of such stewardship programs. [ 17 ] The CDC recognizes that such outpatient programs should consist of four core elements of antimicrobial stewardship, [ 18 ] including commitment, action for policy and practice, tracking and reporting, and education and expertise. The opportunities to extend antimicrobial stewardship in EDs are nonetheless vast. Despite this effectiveness, there is a recognized need to understand which implementation strategies work and how to implement multifaceted interventions. [ 19 ] Given the unique time-pressured environment of acute outpatient care settings, not all antimicrobial stewardship strategies work in these settings, necessitating the development of approaches tailored to these environments. [ 19 , 20 ]

One particularly vulnerable population is served by the Veterans Health Administration. With more than 9 million enrollees, Veterans who receive care in Veterans Affairs (VA) hospitals and outpatient clinics may be particularly vulnerable to ADEs. Older Veterans have greater medical needs than younger patients, given their concomitant medical and mental health conditions as well as cognitive and social issues. Among Veterans seen in VA EDs and Urgent Care Clinics (UCCs), 50% are age 65 and older, [ 21 ] nearly three times the rate of non-VA emergency care settings (18%). [ 22 ] Inappropriate prescribing in ED and UCC settings is problematic, with inappropriate antibiotic prescribing estimated to be higher than 40%. [ 23 ] In a sample of older Veterans discharged from VA ED and UCC settings, NSAIDs were found to be implicated in 77% of drug interactions. [ 24 ]

Learning from antimicrobial stewardship programs and applying those lessons to a broader base of prescribing in acute outpatient care settings requires understanding why potentially inappropriate prescribing remains a problem not only for antibiotics but also for medications (e.g., NSAIDs) that have previously received little stewardship focus. This understanding is essential to develop and implement interventions to reduce iatrogenic harm for vulnerable patients seen in unscheduled settings. In the setting of the Veterans Health Administration, we sought to use these two drug classes (antibiotics and NSAIDs), which have frequent rates of inappropriate prescribing in unscheduled outpatient care settings, to understand a diverse set of perspectives on why potentially inappropriate prescribing continues to occur.

Selection of participants

Participants were recruited from three groups in outpatient settings representing emergency care, urgent care, and urgent primary care in the VA: 1) Clinicians: VA clinicians such as physicians, advanced practice providers, and pharmacists; 2) Stakeholders: VA and non-VA clinical operational and clinical content experts, such as local and regional medical directors and national clinical, research, and administrative leadership in emergency care, primary care, and pharmacy, including geriatrics; and 3) Veterans seeking unscheduled care for infectious or pain symptoms.

Clinicians and stakeholders were recruited using email, informational flyers, faculty/staff meetings, national conferences, and snowball sampling, in which existing participants identify additional potential research subjects for recruitment. [ 25 ] Snowball sampling is useful for identifying and recruiting participants who may not be readily apparent to investigators or who are hard to reach. Clinician inclusion criteria consisted of: 1) at least 1 year of VA experience; and 2) ≥ 1 clinical shift in the last 30 days at any VA ED, urgent care, or primary care setting in which unscheduled visits occur. Veterans were recruited in person at the VA by key study personnel. Inclusion criteria consisted of: 1) clinically stable as determined by the treating clinician; 2) 18 years or older; and 3) seeking care for infectious or pain symptoms in the local VA Tennessee Valley Healthcare System (TVHS). TVHS includes an ED at the Nashville campus with over 30,000 annual visits, an urgent care clinic in Murfreesboro, TN, with approximately 15,000 annual visits, and multiple primary care locations throughout the middle Tennessee region. This study was approved by the VA TVHS Institutional Review Board as minimal risk.

Data collection

Semi-structured interview guides (Supplemental Table 1) were developed using the Consolidated Framework for Implementation Research (CFIR) [ 26 ] and the Theory of Planned Behavior [ 27 , 28 ] to understand attitudes and beliefs as they relate to behaviors, as well as potential determinants of a future intervention. Interview guides were modified and finalized by conducting pilot interviews with three members of each participant group. Interview guides were tailored to each group of respondents and consisted of questions relating to: 1) determinants of potentially inappropriate prescribing; and 2) integration into practice (Table 1). Clinicians were also asked about knowledge and awareness of evidence-based prescribing practices for antibiotics and NSAIDs. The interviewer asked follow-up questions to elicit clearer and more detailed responses.

Each interview was conducted by a trained interviewer (MDR). Veteran interviews were conducted in-person while Veterans waited for clinical care so as not to disrupt clinical operations. Interviews with clinicians and stakeholders were scheduled virtually. All interviews (including in-person) were recorded and transcribed in a manner compliant with VA information security policies using Microsoft Teams (Redmond, WA). The audio-recorded interviews were transcribed and de-identified by a transcriptionist and stored securely behind the VA firewall using Microsoft Teams. Study personnel maintained a recording log on a password-protected server and each participant was assigned a unique participant ID number. Once 15 interviews were conducted per group, we planned to review interviews with the study team to discuss content, findings, and to decide collectively when thematic saturation was achieved, the point at which no new information was obtained. [ 29 ] If not achieved, we planned to conduct at least 2 additional interviews prior to group review for saturation. We estimated that approximately 20–25 interviews per group were needed to achieve thematic saturation.
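To make the saturation check concrete, here is a minimal sketch in Python of how one could track whether successive interviews still introduce new codes. The interview data and code labels are hypothetical illustrations, not the study's data, and the study team judged saturation collectively in discussion rather than by a fixed numeric rule.

```python
# Minimal sketch: count how many previously unseen codes each successive
# interview introduces. Saturation is approached when this count stays at zero.
# Code labels below are hypothetical, not the study's actual codebook.

def new_codes_per_interview(coded_interviews):
    """Return the number of previously unseen codes added by each interview."""
    seen = set()
    counts = []
    for codes in coded_interviews:
        counts.append(len(set(codes) - seen))
        seen.update(codes)
    return counts

# Codes applied in four successive (hypothetical) clinician interviews.
interviews = [
    {"expectations", "time_pressure", "knowledge"},
    {"time_pressure", "uncertainty"},
    {"expectations", "communication"},
    {"uncertainty", "communication"},  # nothing new: a sign of saturation
]

print(new_codes_per_interview(interviews))  # [3, 1, 1, 0]
```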

Qualitative data coding and analysis were managed by the Vanderbilt University Qualitative Research Core. A hierarchical coding system (Supplemental Table 2) was developed and refined using an iterative inductive/deductive approach [ 30 , 31 , 32 ] guided by a combination of: 1) the Consolidated Framework for Implementation Research (CFIR) [ 26 ]; 2) the Theory of Planned Behavior [ 27 , 28 ]; 3) the interview guide questions; and 4) a preliminary review of the transcripts. Eighteen major categories (Supplemental Table 3) were identified and further divided into subcategories, with some subcategories having additional levels of hierarchical division. Definitions and rules were written for the use of each coding category. The process was iterative in that the coding system was both theoretically informed and derived from the qualitative data. The coding system was finalized after it was piloted by the coders. Data coding and analysis met the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines. [ 33 ]
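As an illustration of what a hierarchical coding system can look like in machine-readable form, here is a small sketch with hypothetical category names, definitions, and rules; the study's actual 18 major categories are in the supplemental tables, and its materials were managed in Excel and SPSS rather than in code.

```python
# Sketch of a hierarchical codebook with definitions and coding rules.
# Category names, definitions, and rules are hypothetical placeholders.

codebook = {
    "prescribing_determinants": {
        "definition": "Factors the respondent links to the decision to prescribe.",
        "subcodes": {
            "patient_expectations": {
                "definition": "Perceived or stated patient expectations for a prescription.",
                "rule": "Apply only when the expectation is tied to prescribing.",
            },
            "time_pressure": {
                "definition": "Workload or crowding constraints on the encounter.",
                "rule": "Apply to mentions of shift conditions, crowding, or throughput.",
            },
        },
    },
}

def list_codes(node, prefix=""):
    """Flatten the hierarchy into dotted labels that coders can apply to quotes."""
    labels = []
    for name, entry in node.items():
        full = prefix + name
        labels.append(full)
        labels.extend(list_codes(entry.get("subcodes", {}), full + "."))
    return labels

print(list_codes(codebook))
# ['prescribing_determinants', 'prescribing_determinants.patient_expectations',
#  'prescribing_determinants.time_pressure']
```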

Four experienced qualitative coders were trained by independently coding two transcripts from each of the three participant categories. Coding was then compared, and any discrepancies resolved by reconciliation. After establishing reliability in using the coding system, the coders divided and independently coded the remaining transcripts in sequential order. Each statement was treated as a separate quote and could be assigned up to 21 different codes. Coded transcripts were combined and sorted by code.
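The step of combining coded transcripts and sorting quotes by code can be pictured with the short sketch below. The participants and quote fragments echo the themes reported later, but the data structure itself is a hypothetical illustration; the study handled this work in Excel and SPSS.

```python
# Sketch: combine coded statements from several transcripts and group them by code.
# Each statement may carry multiple codes (the study allowed up to 21 per quote).
from collections import defaultdict

coded_statements = [
    ("Clinician 49", "patients expect an antibiotic", ["patient_expectations"]),
    ("Stakeholder 28", "get the patient out and move on", ["time_pressure"]),
    ("Patient 31", "I will follow what the doctor says",
     ["patient_expectations", "communication"]),
]

quotes_by_code = defaultdict(list)
for participant, quote, codes in coded_statements:
    for code in codes:  # a single quote can appear under several codes
        quotes_by_code[code].append((participant, quote))

for code in sorted(quotes_by_code):
    print(code, quotes_by_code[code])
```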

Following thematic saturation, the frequency of each code was calculated to understand the distribution of quotes. Quotes were then cross-referenced with the barrier coding to understand potential determinants of inappropriate prescribing. A thematic analysis of the barriers was conducted and presented in an iterative process with the research team of qualitative methodologists and clinicians to understand the nuances and to refine the themes and subthemes from the coded transcripts. Transcripts, quotations, and codes were managed using Microsoft Excel and SPSS version 28.0.
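A minimal sketch of the frequency count and the barrier cross-reference described above is given below; the flags and code labels are hypothetical, and the actual analysis was conducted in Excel and SPSS rather than Python.

```python
# Sketch: code frequencies across all quotes, and frequencies restricted to
# quotes that were also coded as describing a barrier. Data are hypothetical.
from collections import Counter

quotes = [
    {"codes": ["patient_expectations"], "barrier": True},
    {"codes": ["time_pressure"], "barrier": True},
    {"codes": ["patient_expectations", "communication"], "barrier": False},
]

frequency = Counter(code for q in quotes for code in q["codes"])
barrier_frequency = Counter(code for q in quotes if q["barrier"] for code in q["codes"])

print(frequency)          # distribution of quotes across codes
print(barrier_frequency)  # codes co-occurring with the "barrier" coding
```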

We approached 132 individuals, and 66 (50%) agreed to be interviewed. Participants included 25 clinicians, 24 stakeholders, and 17 Veterans, whose demographic characteristics are presented in Table 2. The clinicians came from 14 VA facilities throughout the US and included 20 physicians and five advanced practice providers. Of the clinicians, 21 (84%) worked in either an ED or urgent care, while the remainder practiced in primary care. The 24 stakeholders included 13 (54%) clinical service chiefs or deputy chiefs (including medical directors), five (21%) national directors, and six (25%) experts in clinical content and methodology. The 17 Veterans interviewed included 15 (88%) who were seen for pain complaints.

Results are organized by the six thematic categories with several subthemes in each category. Themes and subthemes are presented in Table 3  and are visually represented in Fig.  1 . The six themes were: 1) perceived versus actual Veterans expectations about prescribing, 2) the influence of a time-pressured clinical environment on prescribing stewardship, 3) limited clinician knowledge, awareness, and willingness to use evidence-based care, 4) uncertainties about the Veteran condition at the time of the clinical encounter, 5) limited communication, and 6) technology barriers.

Figure 1. Visual representation of themes and subthemes from 66 clinician, stakeholder, and Veteran interviews

Theme 1: Perception that Veterans routinely expect a medication from their visit, despite clinical inappropriateness

According to clinicians, Veterans frequently expect to receive a prescription even when this decision conflicts with good clinical practice.

Certainly lots of people would say you know if you feel like you’re up against some strong expectations from the patients or caregivers or families around the utility of an antibiotic when it’s probably not indicated…In the emergency department the bias is to act and assume the worst and assume like the worst for the clinical trajectory for the patient rather than the reverse. [Clinician 49, Physician, ED]

Stakeholders further stated that patient prescription expectations are quite influential and are likely shaped by Veterans’ prior experiences.

I think the patients, particularly for antibiotics, have strong feelings about whether they should or shouldn’t get something prescribed. [Stakeholder 34]

You know I think the biggest challenge, I think, is adjusting patients’ expectations because you know they got better the last time they were doing an antibiotic. [Stakeholder 64]

Patient satisfaction and clinician workload may also influence the clinician’s prescription decision.

We have a lot of patients that come in with back pain or knee pain or something. We’ll get an x-ray and see there’s nothing actually wrong physically that can be identified on x-ray at least and you have to do something. Otherwise, patient satisfaction will dip, and patients leave angry. [Clinician 28, Physician, urgent care clinic]

For some clinicians it’s just easier to prescribe an antibiotic when they know that’s the patient’s expectation and it shortens their in-room discussion and evaluation. [Clinician 55, Physician, ED]

Despite clinician perception, Veterans communicated that they did not necessarily expect a prescription and were instead focused on the clinical interaction and the clinician’s decision.

I’m not sure if they’ll give me [unintelligible] a prescription or what they’ll do. I don’t care as long as they stop the pain. [Patient 40, urgent care clinic]

I don’t expect to [receive a prescription], but I mean whatever the doctor finds is wrong with me I will follow what he says. [Patient 31, ED]

Theme 2: Hectic clinical environments and unique practice conditions in unscheduled settings provide little time to focus on prescribing practices

Clinicians and stakeholders reported that the time-constrained clinical environment and need to move onto the next patient were major challenges to prescribing stewardship.

The number one reason is to get a patient out of your office or exam bay and move on to the next one. [Stakeholder 28]

It takes a lot of time and you have to be very patient and understanding. So, you end up having to put a fair bit of emotional investment and intelligence into an encounter to not prescribe. [Stakeholder 1]

Stakeholders also noted that unique shift conditions and clinician perceptions that their patients were “different” might influence prescribing practices.

A common pushback was ‘well my patients are different.’ [Stakeholder 4]

Providers who worked different types of shifts, so if you happened to work on a Monday when the clinics were open and had more adults from the clinics you were more likely to prescribe antibiotics than if you worked over night and had fewer patients. Providers who worked primarily holidays or your Friday prescribing pattern may be very different if you could get them into a primary care provider the next day. [Stakeholder 22]

Clinicians also reported that historical practices in the clinical environment may contribute to inappropriate prescribing.

I came from working in the [outpatient] Clinic as a new grad and they’re very strict about prescribing only according to evidence-based practice. And then when I came here things are with other colleagues are a little more loose with that type of thing. It can be difficult because you start to adopt that practice to. [Clinician 61, Nurse Practitioner, ED]

Theme 3: Clinician knowledge, awareness, and willingness to use evidence-based care

Stakeholders felt that clinicians lacked knowledge about the prescribing of NSAIDs and antibiotics.

Sometimes errors are a lack of knowledge or awareness of the need to maybe specifically dose for let’s say impaired kidney function or awareness of current up to date current antibiotic resistance patterns in the location that might inform a more tailored antibiotic choice for a given condition. [Stakeholder 37]

NSAIDs are very commonly used in the emergency department for patients of all ages…the ED clinician is simply not being aware that for specific populations this is not recommended and again just doing routine practice for patients of all ages and not realizing that for older patients you actually probably should not be using NSAIDs. [Stakeholder 40]

Some clinicians may be unwilling to change their prescribing practices due to outright resistance, entrenched habits, or lack of interest in doing so.

It sounds silly but there’s always some opposition to people being mandated to do something. But there are some people who would look and go ‘okay we already have a handle on that so why do we need something else? I know who prescribes inappropriately and who doesn’t. Is this a requirement, am I evaluated on it? That would come from supervisors. Is this one more thing on my annual review?’ [Stakeholder 28]

If people have entrenched habits that are difficult to change and are physicians are very individualistic people who think that they are right more often than the non-physician because of their expensive training and perception of professionalism. [Stakeholder 4]

Theme 4: Uncertainty about whether an adverse event will occur

Clinicians cited the challenge of understanding the entirety of a Veteran’s condition, potential drug-drug interactions, and existing comorbidities when judging whether an NSAID prescription may result in an adverse event.

It’s oftentimes a judgement call if someone has renal function that’s right at the precipice of being too poor to merit getting NSAIDs that may potentially cause issues. [Clinician 43, Physician, inpatient and urgent care]

It depends on what the harm is. So, for instance, you can’t always predict allergic reactions. Harm from the non-steroidals would be more if you didn’t pre-identify risk factors for harm. So, they have ulcer disease, they have kidney problems where a non-steroidal would not be appropriate for that patient. Or potential for a drug-drug interaction between that non-steroid and another medication in particular. [Clinician 16, Physician, ED]

Rather than concern about adverse events resulting from the medication itself, stakeholders identified the uncertainty that clinicians experience about whether a Veteran may suffer an adverse event from an infection if nothing is done. This uncertainty contributes to the prescription of an antibiotic.

My experience in working with providers at the VA over the years is that they worry more about the consequences of not treating an infection than about the consequences of the antibiotic itself. [Stakeholder 19]

Sometimes folks like to practice conservatively and they’ll say even though I didn’t really see any hard evidence of a bacterial infection, the patient’s older and sicker and they didn’t want to risk it. [Stakeholder 16]

Theme 5: Limited communication during and after the clinical encounter

The role and type of communication about prescribing depended upon the respondent. Clinicians identified inadequate communication and coordination with the Veteran’s primary care physician during the clinical encounter.

I would like to have a little more communication with the primary doctors. They don’t seem to be super interested in talking to anyone in the emergency room about their patients… A lot of times you don’t get an answer from the primary doctor or you get I’m busy in clinic. You can just pick something or just do what you think is right. [Clinician 25, Physician, ED]

Stakeholders, alternatively, identified the lack of post-encounter feedback on patient outcomes and clinical performance as a potential barrier.

Physicians tend to think that they are doing their best for every individual patient and without getting patient by patient feedback there is a strong cognitive bias to think well there must have been some exception and reason that I did it in this setting. [Stakeholder 34]

It’s really more their own awareness of like their clinical performance and how they’re doing. [Stakeholder 40]

Veterans, however, prioritized communication during the clinical encounter. They expressed the need for clear and informative communication with the clinician, including a rationale for the choice and medication-specific details, along with the opportunity to ask questions.

I expect him to tell me why I’m taking it, what it should do, and probably the side effects. [Patient 25, ED]

I’d like to have a better description of how to take it because I won’t remember all the time and sometimes what they put on the bottle is not quite as clear. [Patient 22, ED]

Veterans reported their desire for a simple way to learn about medication information. They provided feedback on the current approaches to educational materials about prescriptions.

Probably most pamphlets that people get they’re not going to pay attention to them. Websites can be overwhelming. [Patient 3, ED]

Posters can be offsetting. If you’re sick, you’re not going to read them…if you’re sick you may glance at that poster and disregard it. So, you’re not really going to see it but if you give them something in the hand people will tend to look at it because it’s in their hand. [Patient 19, ED]

It would be nice if labels or something just told me what I needed to know. You know take this exactly when and reminds me here’s why you’re taking it for and just real clear and not small letters. [Patient 7, ED]

Theme 6: Technology barriers limited the usefulness of clinical decision support for order checking and patient communication tools

Following the decision to prescribe a medication, clinicians complained that electronic health record pop-ups with clinical decision support warnings for potential safety concerns (e.g., drug-drug interactions) were both excessive and not useful in a busy clinical environment.

The more the pop ups, the more they get ignored. So, it’s finding that sweet spot right where you’re not constantly having to click out of something because you’re so busy. Particularly in our clinical setting where we have very limited amount of time to read the little monograph. Most of the time you click ‘no’ and off you go. [Clinician 16, Physician, ED]

Some of these mechanisms like the EMR [electronic medical record] or pop-up decision-making windows really limit your time. If you know the guidelines appropriately and doing the right thing, even if you’re doing the right thing it takes you a long time to get through something. [Clinician 19, Physician, Primary care clinic]

Building on Theme 5 about patient communication, patients reported finding the VA patient portal (MyHealtheVet) challenging to use for post-encounter communication with their primary care physician and for reviewing the medications they were prescribed.

I’ve got to get help to get onto MyHealtheVet but I would probably like to try and use that, but I haven’t been on it in quite some time. [Patient 22, ED]

I tried it [MyHealtheVet] once and it’s just too complicated so I’m not going to deal with it. [Patient 37, Urgent care]

This work examined attitudes and perceptions of barriers to appropriate prescribing of antibiotics and NSAIDs in unscheduled outpatient care settings in the Veterans Health Administration. Expanding on prior qualitative work on antimicrobial stewardship programs, we also included an examination of NSAID prescribing, a medication class that has received little attention focused on prescribing stewardship. This work seeks to advance the understanding of fundamental problems underlying prescribing stewardship in order to facilitate interventions designed to improve not only the decision to prescribe antibiotics and NSAIDs but also the safety checks once a decision to prescribe has been made. Specifically, we identified six themes during these interviews: perceived versus actual Veteran expectations about prescribing; the influence of a time-pressured clinical environment on prescribing stewardship; limited clinician knowledge, awareness, and willingness to use evidence-based care; uncertainties about the Veteran's condition at the time of the clinical encounter; limited communication; and technology barriers.

Sensitive to patient expectations, clinicians believed that Veterans would be dissatisfied if they did not receive an antibiotic prescription, [ 34 ] even though most patients presenting to the ED for upper respiratory tract infections do not expect antibiotics. [ 35 ] However, recent work by Staub et al. found that among patients with respiratory tract infections, receipt of an antibiotic was not independently associated with improved satisfaction. [ 36 ] Instead, they found that receipt of antibiotics had to match the patient’s expectations to affect patient satisfaction, and they recommended that clinicians communicate with their patients about prescribing expectations. This finding complements our results in the present study; communication about expectations is similarly important for NSAID prescribing.

A commitment to stewardship and modification of clinician behavior may be compromised by the time-pressured clinical environment, numerous potential drug interactions, the comorbidities of a vulnerable Veteran population, and normative practices. The decision to prescribe medications such as antibiotics is a complex clinical decision and may be influenced by both clinical and non-clinical factors. [ 34 , 37 , 38 ] ED crowding, which occurs when the demand for services exceeds a system’s ability to provide care, [ 39 ] is a well-recognized manifestation of a chaotic clinical environment and is associated with detrimental effects on the hospital system and patient outcomes. [ 40 , 41 ] Congestion and wait times are unlikely to improve, as the COVID-19 pandemic has exacerbated the already existing crowding and boarding crisis in EDs. [ 42 , 43 ]

Another theme was the uncertainty in the anticipation of adverse events that was exacerbated by the lack of a feedback loop. Feedback on clinical care processes and patient outcomes is uncommonly provided in emergency care settings, [ 44 ] yet may provide an opportunity to change clinician behavior, particularly for antimicrobial stewardship. [ 45 ] However, the frequent use of ineffective feedback strategies [ 46 ] compromises the ability to implement effective feedback interventions; feedback must be specific [ 47 ] and address the Intention-to-Action gap [ 48 ] by including co-interventions to address recipient characteristics (i.e., beliefs and capabilities) and context to maximize impact. Without these, feedback may be ineffective.

An additional barrier identified in this work is the limited communication with primary care following discharge. A 2017 National Quality Forum report on ED care transitions [49] recommended that EDs and their supporting hospital systems expand infrastructure and enhance health information technology to support care transitions, since Veterans may not understand discharge instructions, may not receive post-ED or urgent care, [50, 51, 52] or may not receive a newly prescribed medication. [24] Mechanisms already exist for communication between the ED and primary care teams, such as notifications when a Veteran presents to the ED and when an emergency clinician copies a primary care physician on a note, but these mechanisms are insufficient to address care transition gaps and vary in how consistently best practices are applied. To address this variability, the VA ED PACT Tool was developed using best practices (standardized processes, "closed-loop" communication, embedding into workflow) to facilitate and standardize communication between VA EDs and follow-up care clinicians. [53] While the ED PACT Tool is implemented at the Greater Los Angeles VA and can create a care coordination order upon ED discharge, its use has not yet been widely adopted throughout the VA.
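
The "closed-loop" element the ED PACT Tool is built around can be summarized as a follow-up order that is not considered complete until the receiving primary care team acknowledges it and documents the follow-up action. The sketch below is an illustrative Python model of that idea only; the class, field, and method names are hypothetical and do not reflect the actual ED PACT Tool data model or the VA's EHR.

# Hypothetical sketch of a closed-loop care-coordination order created at ED
# discharge; names are illustrative, not the VA ED PACT Tool implementation.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class CareCoordinationOrder:
    patient_id: str
    discharging_clinician: str
    receiving_team: str
    follow_up_reason: str
    created_at: datetime = field(default_factory=datetime.now)
    acknowledged_at: Optional[datetime] = None
    completed_at: Optional[datetime] = None

    def acknowledge(self) -> None:
        # Receiving team confirms the order has been seen.
        self.acknowledged_at = datetime.now()

    def complete(self) -> None:
        # The loop is closed only after the follow-up action is documented.
        if self.acknowledged_at is None:
            raise ValueError("Order must be acknowledged before it can be closed.")
        self.completed_at = datetime.now()

    @property
    def loop_closed(self) -> bool:
        return self.completed_at is not None

In such a model, any order whose loop_closed flag remains false past a defined window could be surfaced on a worklist, which is what distinguishes closed-loop communication from one-way notifications.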

In the final theme, technology barriers, once the decision has been made to prescribe a medication, the electronic tools that are key components of stewardship interventions designed to curtail potentially inappropriate prescriptions may be compromised by their lack of usability. For example, clinician and stakeholder interview respondents described how usability concerns (e.g., with electronic health record clinical decision support tools) were exacerbated in a time-pressured clinical environment. Clinical decision support is an effective tool to improve healthcare process measures across a diverse group of clinical environments; [54] however, usability remains a barrier when alerts must be frequently overridden. [55, 56] Alert fatigue, expressed in our interviews about order checking and recognized within the VA's EHR, [57, 58] may contribute to excessive overrides that reduce the benefit of clinical decision support. [56, 59] Notably, there was little discussion about the decision to initiate appropriate prescriptions, which is a key action of the CDC's outpatient antibiotic stewardship campaign. [18] Thus, a potentially more effective, albeit challenging, approach is to "nudge" clinicians toward appropriate prescribing and away from the initial decision to prescribe (e.g., inappropriate antibiotic prescribing for viral upper respiratory tract infections), either with default order sets for symptom management or by enhancing prescription decisions through reminders about potential contraindications for specific indications (e.g., high-risk comorbidities). Beyond EHR-based solutions that might change clinician behavior, the CDC's outpatient antibiotic stewardship program provides a framework to change the normative practices around inappropriate prescribing and includes a commitment to appropriate prescribing, action for policy and change, tracking and reporting, and education and expertise. [18]
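
To make the "nudge" concept concrete, the sketch below expresses a contraindication reminder for an NSAID order as a simple rule: stay silent for most patients to limit alert fatigue, and fire only when the problem list or active medication list contains a documented high-risk condition or interacting drug. The condition and medication lists, the message text, and the function name are hypothetical illustrations, not the logic of any VA clinical decision support tool.

# Hypothetical contraindication "nudge" for NSAID orders; rule content is
# illustrative only and not drawn from the study or the VA EHR.
HIGH_RISK_CONDITIONS = {"chronic kidney disease", "heart failure", "peptic ulcer disease"}
INTERACTING_MEDS = {"warfarin", "apixaban", "lisinopril"}

def nsaid_reminder(problem_list, active_meds):
    # Return a reminder message when risk factors are documented; return None
    # otherwise so the clinician's workflow is not interrupted.
    risky_conditions = HIGH_RISK_CONDITIONS & {p.lower() for p in problem_list}
    risky_meds = INTERACTING_MEDS & {m.lower() for m in active_meds}
    if not risky_conditions and not risky_meds:
        return None
    reasons = ", ".join(sorted(risky_conditions | risky_meds))
    return ("Consider the non-NSAID default order set for symptom management: "
            "documented " + reasons + " may increase NSAID risk.")

print(nsaid_reminder({"Chronic Kidney Disease", "Hypertension"}, {"Lisinopril"}))  # fires
print(nsaid_reminder({"Low back pain"}, {"Acetaminophen"}))  # None

Returning nothing for the majority of orders is a deliberate design choice in this sketch: it reflects the interview finding that frequent, overridable alerts erode the benefit of clinical decision support.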

Patients face another technology barrier in patient-facing electronic tools such as the VA's MyHealtheVet portal, which was developed to enhance patient communication following care transitions and to allow Veterans to review their medications and communicate with their primary care team. Patient portals can be an effective tool for medication adherence [60] and offer promise for patient education [61] following a clinical encounter. However, they are similarly limited by usability concerns, which represent an adoption barrier to broader Veteran use after unscheduled outpatient care visits, [62] particularly in an older patient population.

These interviews further underscored that the lack of usability of clinical decision support for order checking, which arises from ineffective design, is a key barrier preventing health information technology from reaching its promise of improving patient safety. [63] A common and recognized reason for these design challenges is the failure to place the user (i.e., the acute care clinician) at the center of the design process, resulting in underutilization, workarounds, [64] and unintended consequences, [65] all of which diminish patient safety practices and fail to change clinician behavior (i.e., prescribing). Complex adaptive systems work best when the relative strengths of humans (e.g., context sensitivity, situation specificity) are properly integrated with the information processing power of computerized systems. [66] Integrating user-centered design into technology development is one potential approach to address these usability concerns and offers an opportunity to build more clinician- and patient-centric systems of care, advancing prescribing stewardship interventions that previously lacked broader adoption. As antimicrobial stewardship and other prescribing stewardship efforts focus on time-pressured environments where usability is essential to adoption, taking a user-centered design approach, not only in developing electronic tools but also in addressing the identified prescribing barriers, represents a promising way to enhance the quality of prescribing.

Limitations

These findings should be considered in light of the study's limitations. First, the setting for this work was the Veterans Health Administration, the largest integrated health system in the US, and while we focused on the stewardship of two drug classes, numerous additional drug classes are prescribed in these settings; our findings may not generalize to other settings or drug classes. Second, while clinician and stakeholder perspectives included diverse, national representation, the Veterans interviewed were local to the Tennessee Valley Healthcare System. Given the concurrent COVID-19 pandemic at the time of enrollment, most of the Veterans were seen for pain-related complaints, and only two infection-related complaints were included; however, we also asked these Veterans about antibiotic prescribing. Clinician and stakeholder narratives may not completely reflect their practice patterns, as their responses could be influenced by social desirability bias. Third, responses may be subject to recall bias, which may influence the data collected. Finally, the themes and subthemes identified may overlap and interact. While we used an iterative process to identify discrete themes and subthemes, prescription decisions represent a complex decision process that is influenced by numerous patient and contextual factors, and the themes may not be completely independent.

Despite numerous interventions to improve the quality of prescribing, the appropriate prescription of antibiotics and NSAIDs in unscheduled outpatient care settings remains a challenge. In the Veterans Health Administration, this study found that challenges to high-quality prescribing include perceived Veteran expectations about receipt of medications; a hectic clinical environment that deprioritizes stewardship; limited clinician knowledge, awareness, and willingness to use evidence-based care; uncertainty about the potential for adverse events; limited communication; and technology barriers. Findings from these interviews suggest that interventions should account for the detrimental impact of high workload on prescribing stewardship, fit within clinician workflow, target the initial decision to prescribe medications, and incorporate end-users into the intervention design process. Doing so is a promising approach to increase adoption of high-quality prescribing practices and thereby improve the quality of NSAID and antibiotic prescribing and associated patient outcomes.

Availability of data and materials

De-identified datasets used and/or analysed during the current study will be made available from the corresponding author on reasonable request.

Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324(6):377–384.

Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620–9.

Palms DL, Hicks LA, Bartoces M, et al. Comparison of antibiotic prescribing in retail clinics, urgent care centers, emergency departments, and traditional ambulatory care settings in the United States. JAMA Intern Med. 2018;178(9):1267–9.

Davis JS, Lee HY, Kim J, et al. Use of non-steroidal anti-inflammatory drugs in US adults: changes over time and by demographic. Open Heart. 2017;4(1):e000550.

Fleming-Dutra KE, Hersh AL, Shapiro DJ, et al. Prevalence of inappropriate antibiotic prescriptions among US ambulatory care visits, 2010–2011. JAMA. 2016;315(17):1864–73.

Shively NR, Buehrle DJ, Clancy CJ, Decker BK. Prevalence of Inappropriate Antibiotic Prescribing in Primary Care Clinics within a Veterans Affairs Health Care System. Antimicrob Agents Chemother. 2018;62(8):e00337–18. https://doi.org/10.1128/AAC.00337-18 .  https://pubmed.ncbi.nlm.nih.gov/29967028/ .

World Health Organization. Global antimicrobial resistance and use surveillance system (GLASS) report: 2022. 2022.

Centers for Disease Control and Prevention. COVID-19: U.S. Impact on Antimicrobial Resistance, Special Report 2022. Atlanta: U.S. Department of Health and Human Services, CDC; 2022.

Shehab N, Lovegrove MC, Geller AI, Rose KO, Weidle NJ, Budnitz DS. US emergency department visits for outpatient adverse drug events, 2013–2014. JAMA. 2016;316(20):2115–25.

Fassio V, Aspinall SL, Zhao X, et al. Trends in opioid and nonsteroidal anti-inflammatory use and adverse events. Am J Manag Care. 2018;24(3):e61–72.

Centers for Disease Control and Prevention. Chronic Kidney Disease Surveillance System—United States. http://www.cdc.gov/ckd . Accessed 21 March 2023.

Cahir C, Fahey T, Teeling M, Teljeur C, Feely J, Bennett K. Potentially inappropriate prescribing and cost outcomes for older people: a national population study. Br J Clin Pharmacol. 2010;69(5):543–52.

Gabriel SE, Jaakkimainen L, Bombardier C. Risk for serious gastrointestinal complications related to use of nonsteroidal anti-inflammatory drugs: a meta-analysis. Ann Intern Med. 1991;115(10):787–96.

Zhang X, Donnan PT, Bell S, Guthrie B. Non-steroidal anti-inflammatory drug induced acute kidney injury in the community dwelling general population and people with chronic kidney disease: systematic review and meta-analysis. BMC Nephrol. 2017;18(1):256.

McGettigan P, Henry D. Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. PLoS Med. 2011;8(9): e1001098.

Holt A, Strange JE, Nouhravesh N, et al. Heart Failure Following Anti-Inflammatory Medications in Patients With Type 2 Diabetes Mellitus. J Am Coll Cardiol. 2023;81(15):1459–70.

Davey P, Marwick CA, Scott CL, et al. Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database Syst Rev. 2017;2(2):CD003543.

Sanchez GV, Fleming-Dutra KE, Roberts RM, Hicks LA. Core Elements of Outpatient Antibiotic Stewardship. MMWR Recomm Rep. 2016;65(6):1–12.

May L, Martin Quiros A, Ten Oever J, Hoogerwerf J, Schoffelen T, Schouten J. Antimicrobial stewardship in the emergency department: characteristics and evidence for effectiveness of interventions. Clin Microbiol Infect. 2021;27(2):204–9.

May L, Cosgrove S, L'Archeveque M, et al. A call to action for antimicrobial stewardship in the emergency department: approaches and strategies. Ann Emerg Med. 2013;62(1):69–77 e62.

Veterans Health Administration Emergency Medicine Management Tool. EDIS GeriatricsAgeReport v3.

Cairns C, Kang K, Santo L. National Hospital Ambulatory Medical Care Survey: 2020 emergency department summary tables. https://www.cdc.gov/nchs/data/nhamcs/web_tables/2020-nhamcs-ed-web-tables-508.pdf . Accessed 20 Dec 2022.

Lowery JL, Alexander B, Nair R, Heintz BH, Livorsi DJ. Evaluation of antibiotic prescribing in emergency departments and urgent care centers across the Veterans’ Health Administration. Infect Control Hosp Epidemiol. 2021;42(6):694–701.

Hastings SN, Sloane RJ, Goldberg KC, Oddone EZ, Schmader KE. The quality of pharmacotherapy in older veterans discharged from the emergency department or urgent care clinic. J Am Geriatr Soc. 2007;55(9):1339–48.

Goodman LA. Snowball sampling. Ann Math Stat. 1961;32(1):148–70.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

Ajzen I. The theory of planned behaviour: reactions and reflections. Psychol Health. 2011;26(9):1113–27.  https://doi.org/10.1080/08870446.2011.613995 .  https://www.tandfonline.com/doi/full/10.1080/08870446.2011.613995 .

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Azungah T. Qualitative research: deductive and inductive approaches to data analysis. Qual Res J. 2018;18(4):383–400.

Tjora A. Qualitative research as stepwise-deductive induction. Routledge; 2018.  https://www.routledge.com/Qualitative-Research-as-Stepwise-Deductive-Induction/Tjora/p/book/9781138304499 .

Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: A hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2006;5(1):80–92.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Patel A, Pfoh ER, Misra Hebert AD, et al. Attitudes of High Versus Low Antibiotic Prescribers in the Management of Upper Respiratory Tract Infections: a Mixed Methods Study. J Gen Intern Med. 2020;35(4):1182–8.

May L, Gudger G, Armstrong P, et al. Multisite exploration of clinical decision making for antibiotic use by emergency medicine providers using quantitative and qualitative methods. Infect Control Hosp Epidemiol. 2014;35(9):1114–25.

Staub MB, Pellegrino R, Gettler E, et al. Association of antibiotics with veteran visit satisfaction and antibiotic expectations for upper respiratory tract infections. Antimicrob Steward Healthc Epidemiol. 2022;2(1): e100.

Schroeck JL, Ruh CA, Sellick JA Jr, Ott MC, Mattappallil A, Mergenhagen KA. Factors associated with antibiotic misuse in outpatient treatment for upper respiratory tract infections. Antimicrob Agents Chemother. 2015;59(7):3848–52.

Hruza HR, Velasquez T, Madaras-Kelly KJ, Fleming-Dutra KE, Samore MH, Butler JM. Evaluation of clinicians’ knowledge, attitudes, and planned behaviors related to an intervention to improve acute respiratory infection management. Infect Control Hosp Epidemiol. 2020;41(6):672–9.

American College of Emergency Physicians Policy Statement. Crowding. https://www.acep.org/globalassets/new-pdfs/policy-statements/crowding.pdf . Published 2019. Accessed 11 Oct 2023.

Bernstein SL, Aronsky D, Duseja R, et al. The effect of emergency department crowding on clinically oriented outcomes. Acad Emerg Med. 2009;16(1):1–10.

Rasouli HR, Esfahani AA, Nobakht M, et al. Outcomes of crowding in emergency departments; a systematic review. Arch Acad Emerg Med. 2019;7(1):e52.

Janke AT, Melnick ER, Venkatesh AK. Monthly Rates of Patients Who Left Before Accessing Care in US Emergency Departments, 2017–2021. JAMA Netw Open. 2022;5(9): e2233708.

Janke AT, Melnick ER, Venkatesh AK. Hospital Occupancy and Emergency Department Boarding During the COVID-19 Pandemic. JAMA Netw Open. 2022;5(9): e2233964.

Lavoie CF, Plint AC, Clifford TJ, Gaboury I. “I never hear what happens, even if they die”: a survey of emergency physicians about outcome feedback. CJEM. 2009;11(6):523–8.

Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. https://doi.org/10.1002/14651858.CD000259.pub3 .

Hysong SJ, SoRelle R, Hughes AM. Prevalence of Effective Audit-and-Feedback Practices in Primary Care Settings: A Qualitative Examination Within Veterans Health Administration. Hum Factors. 2022;64(1):99–108.

Presseau J, McCleary N, Lorencatto F, Patey AM, Grimshaw JM, Francis JJ. Action, actor, context, target, time (AACTT): a framework for specifying behaviour. Implement Sci. 2019;14(1):102.

Desveaux L, Ivers NM, Devotta K, Ramji N, Weyman K, Kiran T. Unpacking the intention to action gap: a qualitative study understanding how physicians engage with audit and feedback. Implement Sci. 2021;16(1):19.

National Quality Forum. Emergency Department Transitions of Care: A Quality Measurement Framework—Final Report: DHHS contract HHSM‐500–2012–000091, Task Order HHSM‐500‐T0025. Washington, DC: National Quality Forum; 2017.

Kyriacou DN, Handel D, Stein AC, Nelson RR. Brief report: factors affecting outpatient follow-up compliance of emergency department patients. J Gen Intern Med. 2005;20(10):938–42.

Vukmir RB, Kremen R, Ellis GL, DeHart DA, Plewa MC, Menegazzi J. Compliance with emergency department referral: the effect of computerized discharge instructions. Ann Emerg Med. 1993;22(5):819–23.

Engel KG, Heisler M, Smith DM, Robinson CH, Forman JH, Ubel PA. Patient comprehension of emergency department care and instructions: are patients aware of when they do not understand? Ann Emerg Med. 2009;53(4):454–461 e415.

Cordasco KM, Saifu HN, Song HS, et al. The ED-PACT Tool Initiative: Communicating Veterans’ Care Needs After Emergency Department Visits. J Healthc Qual. 2020;42(3):157–65.

Bright TJ, Wong A, Dhurjati R, et al. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians’ decisions to override computerized drug alerts in primary care. Arch Intern Med. 2003;163(21):2625–31.

van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2006;13(2):138–47.

Shah T, Patel-Teague S, Kroupa L, Meyer AND, Singh H. Impact of a national QI programme on reducing electronic health record notifications to clinicians. BMJ Qual Saf. 2019;28(1):10–4.

Lin CP, Payne TH, Nichol WP, Hoey PJ, Anderson CL, Gennari JH. Evaluating clinical decision support systems: monitoring CPOE order check override rates in the Department of Veterans Affairs’ Computerized Patient Record System. J Am Med Inform Assoc. 2008;15(5):620–6.

Middleton B, Bloomrosen M, Dente MA, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013;20(e1):e2-8.

Han HR, Gleason KT, Sun CA, et al. Using Patient Portals to Improve Patient Outcomes: Systematic Review. JMIR Hum Factors. 2019;6(4): e15038.

Johnson AM, Brimhall AS, Johnson ET, et al. A systematic review of the effectiveness of patient education through patient portals. JAMIA Open. 2023;6(1):ooac085.

Lazard AJ, Watkins I, Mackert MS, Xie B, Stephens KK, Shalev H. Design simplicity influences patient portal use: the role of aesthetic evaluations for technology acceptance. J Am Med Inform Assoc. 2016;23(e1):e157-161.

Institute of Medicine. Health IT and Patient Safety: Building Safer Systems for Better Care. Washington, DC: National Academies Press; 2012.

Koppel R, Wetterneck T, Telles JL, Karsh BT. Workarounds to barcode medication administration systems: their occurrences, causes, and threats to patient safety. J Am Med Inform Assoc. 2008;15(4):408–23.

Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc. 2007;14(4):415–23.

Hollnagel E, Woods D. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Boca Raton: CRC Press; 2006.

Acknowledgements

This material is based upon work supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development (I01HX003057). The content is solely the responsibility of the authors and does not necessarily represent the official views of the VA.

Author information

Authors and affiliations

Geriatric Research, Education, and Clinical Center (GRECC), VA Tennessee Valley Healthcare System, 2525 West End Avenue, Ste. 1430, Nashville, TN, 37203, USA

Michael J. Ward, Michael E. Matheny & Amanda S. Mixon

Medicine Service, Tennessee Valley Healthcare System, Nashville, TN, USA

Michael J. Ward

Department of Emergency Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Michael J. Ward & Melissa D. Rubenstein

Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN, USA

Michael J. Ward, Michael E. Matheny, Shilo Anders & Thomas Reese

Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA

Michael E. Matheny

Division of General Internal Medicine & Public Health, Vanderbilt University Medical Center, Nashville, TN, USA

Department of Psychology, Vanderbilt University, Nashville, TN, USA

Kemberlee Bonnet, Chloe Dagostino & David G. Schlundt

Center for Research and Innovation in Systems Safety, Vanderbilt University Medical Center, Nashville, TN, USA

Shilo Anders

Section of Hospital Medicine, Vanderbilt University Medical Center, Nashville, TN, USA

Amanda S. Mixon

Contributions

Conceptualization: MJW, ASM, MEM, DS, SA. Methodology: MJW, ASM, MEM, DS, KB, SA, TR. Formal analysis: KB, DS, CD, MJW. Investigation: MJW, MDR, DS. Resources: MJW, MEM. Writing—Original Draft Preparation: MJW, ASM, KB, MDR. Writing—Review & Editing: All investigators. Supervision: MJW, ASM, MEM. Funding acquisition: MJW, MEM.

Corresponding author

Correspondence to Michael J. Ward .

Ethics declarations

Ethics approval and consent to participate

This study was approved by the VA Tennessee Valley Healthcare System Institutional Review Board as minimal risk (#1573619). A waiver of informed consent was approved, and each subject provided verbal consent prior to the interviews. The IRB determined that all requirements set forth in 38 CFR 16.111 for human subjects research were satisfied. All methods were carried out according to the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Ward, M.J., Matheny, M.E., Rubenstein, M.D. et al. Determinants of appropriate antibiotic and NSAID prescribing in unscheduled outpatient settings in the Veterans Health Administration. BMC Health Serv Res 24, 640 (2024). https://doi.org/10.1186/s12913-024-11082-0

Received: 11 October 2023

Accepted: 07 May 2024

Published: 18 May 2024

DOI: https://doi.org/10.1186/s12913-024-11082-0

Keywords

  • Non-Steroidal Anti-Inflammatory Drugs
  • Antibiotics
  • Qualitative Methods
  • Emergency Department
  • Urgent Care
  • Primary Care
  • Prescribing Stewardship
