
Sample Size in Qualitative Interview Studies: Guided by Information Power

Authors: Kirsti Malterud, Volkert Dirk Siersma, Ann Dorrit Guassora

Affiliations

  • 1 University of Copenhagen, Copenhagen, Denmark.
  • 2 Uni Research Health, Bergen, Norway.
  • 3 University of Bergen, Bergen, Norway.
  • PMID: 26613970
  • DOI: 10.1177/1049732315617444

Sample sizes must be determined in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed.

Keywords: information power; methodology; participants; qualitative; sample size; saturation.


Sample Size Policy for Qualitative Studies Using In-Depth Interviews

  • Published: 12 September 2012
  • Volume 41, pages 1319–1320 (2012)


  • Shari L. Dworkin 1  


In recent years, there has been an increase in submissions to the Journal that draw on qualitative research methods. This increase is welcome and indicates not only the interdisciplinarity embraced by the Journal (Zucker, 2002) but also its commitment to a wide array of methodologies.

For those who do select qualitative methods, and use grounded theory and in-depth interviews in particular, authors appear to have many questions about how to write a rigorous Method section. This topic will be addressed in a subsequent Editorial. At this time, however, the most common question we receive is: “How large does my sample size have to be?” I would therefore like to take this opportunity to answer this question by discussing relevant debates and then the policy of the Archives of Sexual Behavior.

The sample size used in qualitative research methods is often smaller than that used in quantitative research methods. This is because qualitative research methods are often concerned with garnering an in-depth understanding of a phenomenon or are focused on meaning (and heterogeneities in meaning)—which are often centered on the how and why of a particular issue, process, situation, subculture, scene or set of social interactions. In-depth interview work is not as concerned with making generalizations to a larger population of interest and does not tend to rely on hypothesis testing but rather is more inductive and emergent in its process. As such, the aim of grounded theory and in-depth interviews is to create “categories from the data and then to analyze relationships between categories” while attending to how the “lived experience” of research participants can be understood (Charmaz, 1990, p. 1162).

There are several debates concerning what sample size is the right size for such endeavors. Most scholars argue that the concept of saturation is the most important factor to think about when mulling over sample size decisions in qualitative research (Mason, 2010). Saturation is defined by many as the point at which the data collection process no longer offers any new or relevant data. Another way to state this is that conceptual categories in a research project can be considered saturated “when gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories” (Charmaz, 2006, p. 113). Saturation depends on many factors and not all of them are under the researcher’s control. Some of these include: How homogeneous or heterogeneous is the population being studied? What are the selection criteria? How much money is in the budget to carry out the study? Are there key stratifiers (e.g., conceptual, demographic) that are critical for an in-depth understanding of the topic being examined? What is the timeline that the researcher faces? How experienced is the researcher in being able to determine when she or he has actually reached saturation (Charmaz, 2006)? Is the author carrying out theoretical sampling and is, therefore, concerned with ensuring depth on relevant concepts and examining a range of concepts and characteristics that are deemed critical for emergent findings (Glaser & Strauss, 1967; Strauss & Corbin, 1994, 2007)?

While some experts in qualitative research avoid the topic of “how many” interviews “are enough,” there is indeed variability in what is suggested as a minimum. An extremely large number of articles, book chapters, and books offer guidance and suggest anywhere from 5 to 50 participants as adequate. All of these pieces of work engage in nuanced debates when responding to the question of “how many” and frequently respond with a vague (and, actually, reasonable) “it depends.” Numerous factors are said to be important, including “the quality of data, the scope of the study, the nature of the topic, the amount of useful information obtained from each participant, the use of shadowed data, and the qualitative method and study design used” (Morse, 2000, p. 1). Others argue that the “how many” question can be the wrong question and that the rigor of the method “depends upon developing the range of relevant conceptual categories, saturating (filling, supporting, and providing repeated evidence for) those categories,” and fully explaining the data (Charmaz, 1990). Indeed, there have been countless conferences and conference sessions on these debates, numerous reports, and myriad publications (for a compilation of debates, see Baker & Edwards, 2012).

Taking all of these perspectives into account, the Archives of Sexual Behavior is putting forward a policy for authors in order to provide more clarity on what is expected in terms of sample size for studies drawing on grounded theory and in-depth interviews. The policy of the Archives of Sexual Behavior will be to adhere to the recommendation that 25–30 participants is the minimum sample size required to reach saturation and redundancy in grounded theory studies that use in-depth interviews. This number is considered adequate for publication in journals because it (1) may allow for thorough examination of the characteristics that address the research questions and for distinguishing conceptual categories of interest, (2) maximizes the possibility that enough data have been collected to clarify relationships between conceptual categories and identify variation in processes, and (3) maximizes the chances that negative cases and hypothetical negative cases have been explored in the data (Charmaz, 2006; Morse, 1994, 1995).

The Journal does not want to paradoxically and rigidly quantify sample size when the endeavor at hand is qualitative in nature and the debates on this matter are complex. However, we are providing this practical guidance because we want to ensure that more of our submissions have an adequate sample size and thus come closer to the goal of saturation and redundancy across relevant characteristics and concepts. The current recommendation does not include any comment on other qualitative methodologies, such as content and textual analysis, participant observation, focus groups, case studies, clinical cases or mixed quantitative–qualitative methods. Nor does it apply to phenomenological studies or life history approaches. The current guidance is intended to offer one clear and consistent standard for research projects that use grounded theory and draw on in-depth interviews.

Editor’s note: Dr. Dworkin is an Associate Editor of the Journal and is responsible for qualitative submissions.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods. Available at: http://eprints.ncrm.ac.uk/2273/.

Charmaz, K. (1990). ‘Discovering’ chronic illness: Using grounded theory. Social Science and Medicine, 30, 1161–1172.

Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. London: Sage Publications.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine Publishing Co.

Mason, M. (2010). Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research, 11(3) [Article No. 8].

Morse, J. M. (1994). Designing funded qualitative research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (pp. 220–235). Thousand Oaks, CA: Sage Publications.

Morse, J. M. (1995). The significance of saturation. Qualitative Health Research, 5, 147–149.

Morse, J. M. (2000). Determining sample size. Qualitative Health Research, 10, 3–5.

Strauss, A. L., & Corbin, J. M. (1994). Grounded theory methodology. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 273–285). Thousand Oaks, CA: Sage Publications.

Strauss, A. L., & Corbin, J. M. (2007). Basics of qualitative research: Techniques and procedures for developing grounded theory. Thousand Oaks, CA: Sage Publications.

Zucker, K. J. (2002). From the Editor’s desk: Receiving the torch in the era of sexology’s renaissance. Archives of Sexual Behavior, 31, 1–6.


Author information

Authors and Affiliations

Department of Social and Behavioral Sciences, University of California at San Francisco, 3333 California St., LHTS #455, San Francisco, CA, 94118, USA

Shari L. Dworkin


Corresponding author

Correspondence to Shari L. Dworkin.


About this article

Dworkin, S. L. Sample Size Policy for Qualitative Studies Using In-Depth Interviews. Arch Sex Behav 41, 1319–1320 (2012). https://doi.org/10.1007/s10508-012-0016-6


Published: 12 September 2012

Issue Date: December 2012

DOI: https://doi.org/10.1007/s10508-012-0016-6


How to Justify Sample Size in Qualitative Research

  • March 21, 2023

Article Summary: Sample sizes in qualitative research can be much lower than sample sizes in quantitative research. The key is having the right participant segmentation and study design. Data saturation is also a key principle to understand.

Qualitative research is a bit of a puzzle for new practitioners: since it is done by interviewing participants, observing them, or studying people’s patterns and movements (in the case of user experience design), one obviously can’t have a huge, statistically significant sample size. Interviewing 200+ people is not only incredibly time-consuming but also quite expensive.

Moreover, the goal of qualitative research is not to understand how much or how many. The goal is to collect themes and see patterns: to uncover the “why” rather than the amount.

So in this post, we’re going to explore the question every qualitative researcher asks, at one point or another: How do you justify the sample size in qualitative research?

Here are some guidelines.

Qualitative sample size guideline #1: Segmentation of participants

In qualitative research, because the goal is to understand the themes and patterns of a particular subset (versus a broad population), the first step is segmentation. You may also know this as “persona” development, but regardless of what you call it, the idea is to first bucket your various buyer/customer types into like categories. For example, if you’re selling sales software, your target isn’t every single company that sells products. It’s likely much more specific: say, VP-level sales executives at mid-market companies who have a technology product and use a cloud-based CRM. If that’s your main buyer, that’s the segment you would focus on in qualitative research.

Generally, most companies have multiple targets, so the trick is to think about all the various buyers/consumers and identify which underlying traits they have in common, as well as which traits differentiate them from other targets. Typically, this is where quantitative data comes into play, either through internal data analysis or surveys. Whatever your process, this is step 1: figuring out the segments you will bucket participants into so you can move into the qualitative phase, where you’ll ask each segment in-depth questions via interviews. At this stage, it’s time to bring in your recruiting company to find your participants.

Qualitative sample size guideline #2: Figure out the appropriate study design

After you’ve tackled your segmentation exercise and know how to divide up your participants, you’ll need to think through the qualitative methodology that is most appropriate for answering your research questions. At InterQ Research, we always design studies through the lens of contextual research. This means that you want to set up your studies to be as close to real life as possible. Is your product purchased through a group discussion or an individual decision? Often, when teams decide on software or technology stacks, they’ll want to test it and talk amongst themselves. If this is the case, you would need to interview the team, or a team of like-minded professionals, to see how they come to a decision. Here, focus groups would be a great methodology.

Conversely, if your product is thought through on an individual basis, like, perhaps, a person navigating a website when purchasing a plane ticket, then you’d want to interview the individual alone. In this case, you’d choose a hybrid approach: a user experience/journey mapping exercise combined with an in-depth interview.

In qualitative research there are numerous methodologies, and frequently mixed methodologies work best, in order to see the context of how people behave as well as to understand how they think.

But I digress. Let’s get back to sample sizes in qualitative research.

Qualitative sample size guideline #3: Your sample size is complete when you reach saturation

So far we’ve covered how to first segment your audiences, and then we’ve talked about the methodology to choose, based on context. The third principle in qualitative research is to understand the theory of data saturation.

Saturation in qualitative research means that, when interviewing a distinct segment of participants, you are able to surface all of the themes that the sample set holds in common. In other words, after doing, let’s say, 15 interviews about a specific topic, you start to hear the participants all say similar things. Since you have a fairly homogeneous sample, these themes will start to come out after 10–20 interviews, if you’ve done your recruiting well (and sometimes after as few as 6 interviews). Once you hear the same themes, with no new information, you have reached data saturation.
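As a concrete illustration of that logic (a minimal sketch, not an InterQ procedure), here is how the saturation point might be detected once each interview has been reduced to a set of theme codes. The example data and the three-interview stopping window are hypothetical choices:

```python
# Hypothetical saturation check: flag the point after which several
# consecutive interviews contribute no themes we haven't already seen.

def saturation_point(interviews, window=3):
    """Return the 1-based index of the last interview that added a new
    theme, once `window` consecutive interviews add nothing new;
    return None if that never happens (keep interviewing)."""
    seen = set()
    quiet = 0  # consecutive interviews with no new themes
    for i, themes in enumerate(interviews, start=1):
        new = set(themes) - seen
        seen |= new
        quiet = 0 if new else quiet + 1
        if quiet >= window:
            return i - window
    return None

# Made-up theme codes per interview:
interviews = [
    {"price", "trust"}, {"price", "onboarding"}, {"trust", "support"},
    {"support"}, {"price"}, {"trust"},
]
print(saturation_point(interviews))  # -> 3
```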

The beauty of qualitative research is that if you:

  • Segment your audiences carefully into distinct groups, and
  • Choose the right methodology,

you’ll start to hit saturation, and you will get diminishing returns with more interviews. In this manner, qualitative research can have smaller sample sizes than quantitative research, since it’s thematic versus statistical.

Let’s wrap it up: So what is the ideal sample size in qualitative research?

To bring this one home, let’s answer the question we set out to investigate: what is the right sample size in qualitative research?

Typically, sample sizes will range from 6–20 per segment. (So if you have 5 segments and use 6 interviews per segment as your multiplier, you would have a total sample size of 30.) For very specific tasks, such as in user experience research, moderators will see the same themes after as few as 5–6 interviews. In most studies, though, researchers will reach saturation after 10–20 interviews. The variable here depends on how homogeneous the sample is, as well as the type of questions being asked. Some researchers aim for a baker’s dozen (13) and check whether they’ve reached saturation at that point. If not, the study can be expanded to recruit more participants so that all the themes can be explored. But 13 is a good place to start.

Interested in running a qualitative research study? Request a proposal > 

Author Bio: Joanna Jones is the founder and CEO of InterQ Research. At InterQ, she oversees study design, manages clients, and moderates studies.


What’s a good sample size for qualitative research?


I consistently come across folks who are unfamiliar with sample size requirements for qualitative research -- assuming it needs statistical significance like a quantitative test such as an A/B test.

Instead of statistical significance, the methodological principle used is 'saturation'.

A common benchmark in qualitative research is that it takes 12-13 responses to reach saturation -- meaning whether you survey 13 or 130 people, the number of insights/themes you get is about the same.

There are folks who debate the exact number of participants, but most in the scientific community agree it's below 20. 

A review of 23 peer-reviewed articles suggests that 9–17 participants can be sufficient to reach saturation, especially for studies with homogenous populations and narrowly defined objectives.

Hence our recommendation is to aim for ~15 people as a target sample size in your qualitative research.

What is data saturation?

Data saturation is the point at which new data no longer provides new insights into the research question. 

It’s when you have learned everything you can from the data and cannot find anything new. Data saturation is not about the numbers per se, but about the depth of the data (Burmeister & Aitken, 2012).

There is no one-size-fits-all answer to how many participants you need to reach data saturation. However, researchers agree on some general principles:

  • If you are not getting any new data, themes, or codes, then you have reached data saturation.
  • You should also be able to repeat the study and get the same results.

Some researchers have found that you can reach data saturation with as few as six participants (Guest et al., 2006), but it depends on the population you are studying.

More participants just means more costs

The vast majority of your target customer research should be qualitative. The point is to collect insights to drive demand, not big numbers to impress people.

Qualitative research with 15 people is a good investment because it yields the most findings at a lower cost. Running qualitative research studies with more than 15 people provides little additional benefit (you will hit saturation at around 15 people and identify 99% of insights) but costs quite a bit.

Spend that extra budget on more studies, not more participants.

Why qualitative research doesn’t need the same numbers as quantitative research

Qualitative research doesn't need the same numbers as quantitative research because it is focused on understanding the depth and complexity of people's experiences, rather than making generalizations about the broader population.

This type of understanding cannot be achieved by simply collecting data from a large number of people.

Just because 10 people in a 15-person study claim a strong interest in X does not mean that we can say that 66% of the overall population will have a similar preference.

Qualitative research is also often exploratory in nature.

This means that people conducting the research are not sure what they are going to find before they start collecting data. 

Qualitative research is often based on small samples of participants who are carefully selected to represent the group of people that the researcher is interested in studying.

This means that the researcher can be confident that their findings are relevant to the group they are studying, even if the sample size is small.

Sample sizes in user testing

Nielsen Norman Group recommends testing with 5 to 15 users to find most usability problems, as testing more people yields diminishing returns.

The math is explained in the chart below:

  • 0 users will find 0 problems. (Duh)
  • 5 users will be able to identify 85% of the usability problems
  • 10 users will find over 95% of issues
  • 15 users will identify over 99% of issues
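These figures are consistent with the classic problem-discovery model popularised by Nielsen and Landauer, P(found) = 1 − (1 − p)^n, using the commonly cited average per-user discovery rate of roughly p = 0.31 (the value of p is an assumption drawn from that literature, not from this article). A quick sketch reproduces the numbers:

```python
# Problem-discovery model: expected share of usability problems found
# by n users, if each user independently surfaces a given problem
# with probability p.
p = 0.31  # commonly cited average discovery rate (assumed here)

for n in (0, 5, 10, 15):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} users -> {found:.1%} of problems found")
# 0 -> 0.0%, 5 -> 84.4%, 10 -> 97.6%, 15 -> 99.6%
```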


The same principles can be applied to message testing, as the key idea is the same: you’re trying to identify sources of friction.


  • Research article
  • Open access
  • Published: 21 November 2018

Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period

  • Konstantina Vasileiou, ORCID: orcid.org/0000-0001-5047-3920 1,
  • Julie Barnett 1,
  • Susan Thorpe 2 &
  • Terry Young 3

BMC Medical Research Methodology, volume 18, Article number: 148 (2018)


Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research. Nevertheless, research shows that sample size sufficiency reporting is often poor, if not absent, across a range of disciplinary fields.

A systematic analysis of single-interview-per-participant designs within three health-related journals from the disciplines of psychology, sociology and medicine, over a 15-year period, was conducted to examine whether and how sample sizes were justified and how sample size was characterised and discussed by authors. Data pertinent to sample size were extracted and analysed using qualitative and quantitative analytic techniques.

Our findings demonstrate that provision of sample size justifications in qualitative health research is limited; is not contingent on the number of interviews; and relates to the journal of publication. Defence of sample size was most frequently supported across all three journals with reference to the principle of saturation and to pragmatic considerations. Qualitative sample sizes were predominantly – and often without justification – characterised as insufficient (i.e., ‘small’) and discussed in the context of study limitations. Sample size insufficiency was seen to threaten the validity and generalizability of studies’ results, with the latter being frequently conceived in nomothetic terms.

Conclusions

We recommend, firstly, that qualitative health researchers be more transparent about evaluations of their sample size sufficiency, situating these within broader and more encompassing assessments of data adequacy. Secondly, we invite researchers critically to consider how saturation parameters found in prior methodological studies and sample size community norms might best inform, and apply to, their own project, and encourage that data adequacy is best appraised with reference to features that are intrinsic to the study at hand. Finally, those reviewing papers have a vital role in supporting and encouraging transparent study-specific reporting.


Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size. It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [1] and is implicated – particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises – in appraisals of validity and generalizability [2, 3, 4, 5].

Samples in qualitative research tend to be small in order to support the depth of case-oriented analysis that is fundamental to this mode of inquiry [5]. Additionally, qualitative samples are purposive, that is, selected by virtue of their capacity to provide richly-textured information, relevant to the phenomenon under investigation. As a result, purposive sampling [6, 7] – as opposed to probability sampling employed in quantitative research – selects ‘information-rich’ cases [8]. Indeed, recent research demonstrates the greater efficiency of purposive sampling compared to random sampling in qualitative studies [9], supporting related assertions long put forward by qualitative methodologists.

Sample size in qualitative research has been the subject of enduring discussions [4, 10, 11]. Whilst the quantitative research community has established relatively straightforward statistics-based rules to set sample sizes precisely, the intricacies of qualitative sample size determination and assessment arise from the methodological, theoretical, epistemological, and ideological pluralism that characterises qualitative inquiry (for a discussion focused on the discipline of psychology see [12]). This militates against clear-cut guidelines that can be invariably applied. Despite these challenges, various conceptual developments have sought to address the issue with guidance and principles [4, 10, 11, 13, 14, 15, 16, 17, 18, 19, 20], and more recently, an evidence-based approach to sample size determination has sought to ground the discussion empirically [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35].

Focusing on single-interview-per-participant qualitative designs, the present study aims to contribute further to the dialogue on sample size in qualitative research by offering empirical evidence about the justification practices associated with sample size. We next review the existing conceptual and empirical literature on sample size determination.

Sample size in qualitative research: Conceptual developments and empirical investigations

Qualitative research experts argue that there is no straightforward answer to the question of ‘how many’ and that sample size is contingent on a number of factors relating to epistemological, methodological and practical issues [36]. Sandelowski [4] recommends that qualitative sample sizes are large enough to allow the unfolding of a ‘new and richly textured understanding’ of the phenomenon under study, but small enough so that the ‘deep, case-oriented analysis’ (p. 183) of qualitative data is not precluded. Morse [11] posits that the more useable data are collected from each person, the fewer participants are needed. She invites researchers to take into account parameters, such as the scope of study, the nature of topic (i.e. complexity, accessibility), the quality of data, and the study design. Indeed, the level of structure of questions in qualitative interviewing has been found to influence the richness of data generated [37], and so requires attention; empirical research shows that open questions, which are asked later on in the interview, tend to produce richer data [37].

Beyond such guidance, specific numerical recommendations have also been proffered, often based on experts’ experience of qualitative research. For example, Green and Thorogood [38] maintain that the experience of most qualitative researchers conducting an interview-based study with a fairly specific research question is that little new information is generated after interviewing 20 people or so belonging to one analytically relevant participant ‘category’ (pp. 102–104). Ritchie et al. [39] suggest that studies employing individual interviews conduct no more than 50 interviews so that researchers are able to manage the complexity of the analytic task. Similarly, Britten [40] notes that large interview studies will often comprise 50 to 60 people. Experts have also offered numerical guidelines tailored to different theoretical and methodological traditions and specific research approaches, e.g. grounded theory, phenomenology [11, 41]. More recently, a quantitative tool was proposed [42] to support a priori sample size determination based on estimates of the prevalence of themes in the population. Nevertheless, this more formulaic approach raised criticisms relating to assumptions about the conceptual [43] and ontological status of ‘themes’ [44] and the linearity ascribed to the processes of sampling, data collection and data analysis [45].
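To give the flavour of such a tool, the simplest version of the underlying logic is binomial (this is a sketch of the general idea, not necessarily the exact model proposed in [42]): to observe at least one instance of a theme held by a proportion p of the population with probability q, the number of interviews n must satisfy 1 − (1 − p)^n ≥ q, i.e. n ≥ ln(1 − q) / ln(1 − p).

```python
import math

def n_for_theme(prevalence, confidence=0.95):
    """Smallest n with P(theme observed at least once) >= confidence,
    assuming interviews sample theme-holders independently
    (a simplifying assumption)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

# A theme held by 20% of the population, captured with 95% probability:
print(n_for_theme(0.20))  # -> 14 interviews
```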

In terms of principles, Lincoln and Guba [17] proposed that sample size determination be guided by the criterion of informational redundancy, that is, sampling can be terminated when no new information is elicited by sampling more units. Following the logic of informational comprehensiveness, Malterud et al. [18] introduced the concept of information power as a pragmatic guiding principle, suggesting that the more information power the sample provides, the smaller the sample size needs to be, and vice versa.

Undoubtedly, the most widely used principle for determining sample size and evaluating its sufficiency is that of saturation. The notion of saturation originates in grounded theory [15] – a qualitative methodological approach explicitly concerned with empirically-derived theory development – and is inextricably linked to theoretical sampling. Theoretical sampling describes an iterative process of data collection, data analysis and theory development whereby data collection is governed by emerging theory rather than predefined characteristics of the population. Grounded theory saturation (often called theoretical saturation) concerns the theoretical categories – as opposed to data – that are being developed and becomes evident when ‘gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories’ [46, p. 113]. Saturation in grounded theory, therefore, does not equate to the more common focus on data repetition and moves beyond a singular focus on sample size as the justification of sampling adequacy [46, 47]. Sample size in grounded theory cannot be determined a priori as it is contingent on the evolving theoretical categories.

Saturation – often under the terms of ‘data’ or ‘thematic’ saturation – has diffused into several qualitative communities beyond its origins in grounded theory. Alongside the expansion of its meaning, being variously equated with ‘no new data’, ‘no new themes’, and ‘no new codes’, saturation has emerged as the ‘gold standard’ in qualitative inquiry [2, 26]. Nevertheless, and as Morse [48] asserts, whilst saturation is the most frequently invoked ‘guarantee of qualitative rigor’, ‘it is the one we know least about’ (p. 587). Certainly researchers caution that saturation is less applicable to, or appropriate for, particular types of qualitative research (e.g. conversation analysis [49]; phenomenological research [50]) whilst others reject the concept altogether [19, 51].

Methodological studies in this area aim to provide guidance about saturation and develop a practical application of processes that ‘operationalise’ and evidence saturation. Guest, Bunce, and Johnson [26] analysed 60 interviews and found that saturation of themes was reached by the twelfth interview. They noted that their sample was relatively homogeneous and their research aims focused, so studies of more heterogeneous samples and with a broader scope would be likely to need a larger size to achieve saturation. Extending the enquiry to multi-site, cross-cultural research, Hagaman and Wutich [28] showed that sample sizes of 20 to 40 interviews were required to achieve data saturation of meta-themes that cut across research sites. In a theory-driven content analysis, Francis et al. [25] reached data saturation at the 17th interview for all their pre-determined theoretical constructs. The authors further proposed two main principles upon which specification of saturation be based: (a) researchers should a priori specify an initial analysis sample (e.g. 10 interviews) which will be used for the first round of analysis, and (b) a stopping criterion, that is, a number of further interviews (e.g. 3) the analysis of which will not yield any new themes or ideas. For greater transparency, Francis et al. [25] recommend that researchers present cumulative frequency graphs supporting their judgment that saturation was achieved. A comparative method for themes saturation (CoMeTS) has also been suggested [23] whereby the findings of each new interview are compared with those that have already emerged and, if it does not yield any new theme, the ‘saturated terrain’ is assumed to have been established. Because the order in which interviews are analysed can influence saturation thresholds depending on the richness of the data, Constantinou et al. [23] recommend reordering and re-analysing interviews to confirm saturation. Hennink, Kaiser and Marconi’s [29] methodological study sheds further light on the problem of specifying and demonstrating saturation. Their analysis of interview data showed that code saturation (i.e. the point at which no additional issues are identified) was achieved at 9 interviews, but meaning saturation (i.e. the point at which no further dimensions, nuances, or insights of issues are identified) required 16–24 interviews. Although breadth can be achieved relatively soon, especially for high-prevalence and concrete codes, depth requires additional data, especially for codes of a more conceptual nature.
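The cumulative frequency graphs recommended by Francis et al. [25] are easy to operationalise. The following is a minimal, hypothetical sketch (illustrative data and function name) of the data behind such a graph; a flat tail on the resulting curve is the visual evidence of saturation:

```python
def cumulative_theme_counts(interview_themes):
    """Number of distinct themes identified after each successive
    interview - the data behind a cumulative frequency graph."""
    seen, counts = set(), []
    for themes in interview_themes:
        seen |= set(themes)
        counts.append(len(seen))
    return counts

# Hypothetical coded interviews; the curve is flat from interview 3 on:
print(cumulative_theme_counts([{"a", "b"}, {"b", "c"}, {"c"}, {"a"}, {"b"}]))
# -> [2, 3, 3, 3, 3]
```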

Critiquing the concept of saturation, Nelson [19] proposes five conceptual depth criteria for assessing the robustness of the developing theory in grounded theory projects: theoretical concepts should (a) be supported by a wide range of evidence drawn from the data; (b) be demonstrably part of a network of inter-connected concepts; (c) demonstrate subtlety; (d) resonate with existing literature; and (e) be capable of being successfully submitted to tests of external validity.

Other work has sought to examine practices of sample size reporting and sufficiency assessment across a range of disciplinary fields and research domains, from nutrition [34] and health education [32], to education and the health sciences [22, 27], information systems [30], organisation and workplace studies [33], human computer interaction [21], and accounting studies [24]. Others investigated PhD qualitative studies [31] and grounded theory studies [35]. Incomplete and imprecise sample size reporting is commonly pinpointed by these investigations, whilst assessment and justifications of sample size sufficiency are even more sporadic.

Sobal [34] examined the sample size of qualitative studies published in the Journal of Nutrition Education over a period of 30 years. Studies that employed individual interviews (n = 30) had an average sample size of 45 individuals and none of these explicitly reported whether their sample size sought and/or attained saturation. A minority of articles discussed how sample-related limitations (with the latter most often concerning the type of sample, rather than the size) limited generalizability. A further systematic analysis [32] of health education research over 20 years demonstrated that interview-based studies averaged 104 participants (range 2 to 720 interviewees). However, 40% did not report the number of participants. An examination of 83 qualitative interview studies in leading information systems journals [30] indicated little defence of sample sizes on the basis of recommendations by qualitative methodologists, prior relevant work, or the criterion of saturation. Rather, sample size seemed to correlate with factors such as the journal of publication or the region of study (US vs Europe vs Asia). These results led the authors to call for more rigor in determining and reporting sample size in qualitative information systems research and to recommend optimal sample size ranges for grounded theory (i.e. 20–30 interviews) and single case (i.e. 15–30 interviews) projects.

Similarly, fewer than 10% of articles in organisation and workplace studies provided a sample size justification relating to existing recommendations by methodologists, prior relevant work, or saturation [33], whilst only 17% of focus group studies in health-related journals provided an explanation of sample size (i.e. number of focus groups), with saturation being the most frequently invoked argument, followed by published sample size recommendations and practical reasons [22]. The notion of saturation was also invoked by 11 out of the 51 most highly cited studies that Guetterman [27] reviewed in the fields of education and health sciences, of which six were grounded theory studies, four phenomenological and one a narrative inquiry. Finally, analysing 641 interview-based articles in accounting, Dai et al. [24] called for more rigor since a significant minority of studies did not report precise sample size.

Despite increasing attention to rigor in qualitative research (e.g. [52]) and more extensive methodological and analytical disclosures that seek to validate qualitative work [24], sample size reporting and sufficiency assessment remain inconsistent and partial, if not absent, across a range of research domains.

Objectives of the present study

The present study sought to enrich existing systematic analyses of the customs and practices of sample size reporting and justification by focusing on qualitative research relating to health. Additionally, this study attempted to expand previous empirical investigations by examining how qualitative sample sizes are characterised and discussed in academic narratives. Qualitative health research is an inter-disciplinary field that, due to its affiliation with medical sciences, often faces views and positions reflective of a quantitative ethos. Thus qualitative health research constitutes an emblematic case that may help to reveal underlying philosophical and methodological differences across the scientific community that are crystallised in considerations of sample size. The present research, therefore, incorporates a comparative element on the basis of three different disciplines engaging with qualitative health research: medicine, psychology, and sociology. We chose to focus our analysis on single-interview-per-participant designs as this not only represents a popular and widespread methodological choice in qualitative health research, but is also the method where consideration of sample size – defined as the number of interviewees – is particularly salient.

Study design

A structured search for articles reporting cross-sectional, interview-based qualitative studies was carried out and eligible reports were systematically reviewed and analysed employing both quantitative and qualitative analytic techniques.

We selected journals which (a) follow a peer review process, (b) are considered high quality and influential in their field as reflected in journal metrics, and (c) are receptive to, and publish, qualitative research (Additional File 1 presents the journals’ editorial positions in relation to qualitative research and sample considerations where available). Three health-related journals were chosen, each representing a different disciplinary field: the British Medical Journal (BMJ) representing medicine, the British Journal of Health Psychology (BJHP) representing psychology, and the Sociology of Health & Illness (SHI) representing sociology.

Search strategy to identify studies

Employing the search function of each individual journal, we used the terms ‘interview*’ AND ‘qualitative’ and limited the results to articles published between 1 January 2003 and 22 September 2017 (i.e. a 15-year review period).

Eligibility criteria

To be eligible for inclusion in the review, the article had to report a cross-sectional study design. Longitudinal studies were thus excluded, whilst studies conducted within a broader research programme (e.g. interview studies nested in a trial, as part of a broader ethnography, or as part of longitudinal research) were included if they reported only single-time qualitative interviews. The method of data collection had to be individual, synchronous qualitative interviews (i.e. group interviews, structured interviews and e-mail interviews over a period of time were excluded), and the data had to be analysed qualitatively (i.e. studies that quantified their qualitative data were excluded). Mixed method studies and articles reporting more than one qualitative method of data collection (e.g. individual interviews and focus groups) were excluded. Figure 1, a PRISMA flow diagram [53], shows the numbers of articles obtained from the searches and screened, papers assessed for eligibility, and articles included in the review (Additional File 2 provides the full list of articles included in the review and their unique identifying codes, e.g. BMJ01, BJHP02, SHI03). One review author (KV) assessed the eligibility of all papers identified from the searches. When in doubt, discussions about retaining or excluding articles were held between KV and JB in regular meetings, and decisions were jointly made.

Figure 1: PRISMA flow diagram

Data extraction and analysis

A data extraction form was developed (see Additional File 3) recording three areas of information: (a) information about the article (e.g. authors, title, journal, year of publication, etc.); (b) information about the aims of the study, the sample size and any justification for this, the participant characteristics, the sampling technique and any sample-related observations or comments made by the authors; and (c) information about the method or technique(s) of data analysis, the number of researchers involved in the analysis, the potential use of software, and any discussion around epistemological considerations. The Abstract, Methods and Discussion (and/or Conclusion) sections of each article were examined by one author (KV) who extracted all the relevant information. This was directly copied from the articles and, when appropriate, comments, notes and initial thoughts were written down.

To examine the kinds of sample size justifications provided by articles, an inductive content analysis [54] was initially conducted. On the basis of this analysis, categories that expressed qualitatively different sample size justifications were developed.

We also extracted or coded quantitative data regarding the following aspects:

  • Journal and year of publication
  • Number of interviews
  • Number of participants
  • Presence of sample size justification(s) (Yes/No)
  • Presence of a particular sample size justification category (Yes/No), and
  • Number of sample size justifications provided

Descriptive and inferential statistical analyses were used to explore these data.

A thematic analysis [55] was then performed on all scientific narratives that discussed or commented on the sample size of the study. These narratives were evident both in papers that justified their sample size and in those that did not. To identify these narratives, in addition to the methods sections, the discussion sections of the reviewed articles were also examined and relevant data were extracted and analysed.

In total, 214 articles – 21 in the BMJ, 53 in the BJHP and 140 in the SHI – were eligible for inclusion in the review. Table 1 provides basic information about the sample sizes – measured in number of interviews – of the studies reviewed across the three journals. Figure 2 depicts the number of eligible articles published each year per journal.

Figure 2: Number of eligible articles published each year per journal

The publication of qualitative studies in the BMJ was significantly reduced from 2012 onwards, and this appears to coincide with the launch of BMJ Open, to which qualitative studies were possibly directed.

Pairwise comparisons following a significant Kruskal-Wallis test indicated that the studies published in the BJHP had significantly (p < .001) smaller sample sizes than those published either in the BMJ or the SHI. Sample sizes of BMJ and SHI articles did not differ significantly from each other.

Sample size justifications: Results from the quantitative and qualitative content analysis

Ten (47.6%) of the 21 BMJ studies, 26 (49.1%) of the 53 BJHP papers and 24 (17.1%) of the 140 SHI articles provided some sort of sample size justification. As shown in Table 2, the majority of articles which justified their sample size provided one justification (70% of articles); fourteen studies (25%) provided two distinct justifications; one study (1.7%) gave three justifications; and two studies (3.3%) expressed four distinct justifications.

There was no association between the number of interviews (i.e. sample size) conducted and the provision of a justification (rpb = .054, p = .433). Within journals, Mann-Whitney tests indicated that sample sizes of ‘justifying’ and ‘non-justifying’ articles in the BMJ and SHI did not differ significantly from each other. In the BJHP, ‘justifying’ articles (mean rank = 31.3) had significantly larger sample sizes than ‘non-justifying’ studies (mean rank = 22.7; U = 237.000, p < .05).

There was a significant association between the journal a paper was published in and the provision of a justification (χ²(2) = 23.83, p < .001). BJHP studies provided a sample size justification significantly more often than would be expected (z = 2.9); SHI studies significantly less often (z = −2.4). If an article was published in the BJHP, the odds of providing a justification were 4.8 times higher than if published in the SHI. Similarly, if published in the BMJ, the odds of a study justifying its sample size were 4.5 times higher than in the SHI.
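For readers curious about the mechanics, the association test can be reproduced approximately from the counts reported above (10/21 BMJ, 26/53 BJHP and 24/140 SHI articles justifying). This is an illustrative sketch using standard SciPy calls, not the authors' own analysis script; the paper's odds ratios may come from a model-based estimate and so differ slightly:

```python
from scipy.stats import chi2_contingency

# Rows: BMJ, BJHP, SHI; columns: justified, not justified
table = [[10, 11],
         [26, 27],
         [24, 116]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.5f}")  # chi2(2) = 23.83

# Simple odds ratios against SHI (close to the 4.8 and 4.5 reported):
odds_shi = 24 / 116
print(f"BJHP vs SHI: {(26 / 27) / odds_shi:.1f}")  # ~4.7
print(f"BMJ  vs SHI: {(10 / 11) / odds_shi:.1f}")  # ~4.4
```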

The qualitative content analysis of the scientific narratives identified eleven different sample size justifications. These are described below and illustrated with excerpts from relevant articles. By way of a summary, the frequency with which these were deployed across the three journals is indicated in Table 3.

Saturation was the most commonly invoked principle (55.4% of all justifications) deployed by studies across all three journals to justify the sufficiency of their sample size. In the BMJ, two studies claimed that they achieved data saturation (BMJ17; BMJ18) and one article referred descriptively to achieving saturation without explicitly using the term (BMJ13). Interestingly, BMJ13 included data in the analysis beyond the point of saturation in search of ‘unusual/deviant observations’ and with a view to establishing findings consistency.

Thirty three women were approached to take part in the interview study. Twenty seven agreed and 21 (aged 21–64, median 40) were interviewed before data saturation was reached (one tape failure meant that 20 interviews were available for analysis). (BMJ17).

No new topics were identified following analysis of approximately two thirds of the interviews; however, all interviews were coded in order to develop a better understanding of how characteristic the views and reported behaviours were, and also to collect further examples of unusual/deviant observations. (BMJ13).

Two articles reported pre-determining their sample size with a view to achieving data saturation (BMJ08 – see extract in section In line with existing research; BMJ15 – see extract in section Pragmatic considerations) without further specifying whether this was achieved. One paper claimed theoretical saturation (BMJ06), conceived as the point when there were “no further recurring themes emerging from the analysis”, whilst another study argued that although the analytic categories were highly saturated, it was not possible to determine whether theoretical saturation had been achieved (BMJ04). One article (BMJ18) cited a reference to support its position on saturation.

In the BJHP, six articles claimed that they achieved data saturation (BJHP21; BJHP32; BJHP39; BJHP48; BJHP49; BJHP52) and one article stated that, given their sample size and the guidelines for achieving data saturation, it anticipated that saturation would be attained (BJHP50).

Recruitment continued until data saturation was reached, defined as the point at which no new themes emerged. (BJHP48).

It has previously been recommended that qualitative studies require a minimum sample size of at least 12 to reach data saturation (Clarke & Braun, 2013; Fugard & Potts, 2014; Guest, Bunce, & Johnson, 2006). Therefore, a sample of 13 was deemed sufficient for the qualitative analysis and scale of this study. (BJHP50).

Two studies argued that they achieved thematic saturation (BJHP28 – see extract in section Sample size guidelines; BJHP31) and one article (BJHP30), explicitly concerned with theory development and deploying theoretical sampling, claimed both theoretical and data saturation.

The final sample size was determined by thematic saturation, the point at which new data appears to no longer contribute to the findings due to repetition of themes and comments by participants (Morse, 1995). At this point, data generation was terminated. (BJHP31).

Five studies argued that they achieved (BJHP05; BJHP33; BJHP40; BJHP13 – see extract in section Pragmatic considerations) or anticipated (BJHP46) saturation without any further specification of the term. BJHP17 referred descriptively to a state of achieved saturation without specifically using the term. Saturation of coding, but not saturation of themes, was claimed to have been reached by one article (BJHP18). Two articles explicitly stated that they did not achieve saturation, instead arguing that a level of theme completeness (BJHP27) or the replication of themes (BJHP53) was sufficient grounds for their sample size.

Furthermore, data collection ceased on pragmatic grounds rather than at the point when saturation point was reached. Despite this, although nuances within sub-themes were still emerging towards the end of data analysis, the themes themselves were being replicated indicating a level of completeness. (BJHP27).

Finally, one article criticised and explicitly renounced the notion of data saturation claiming that, on the contrary, the criterion of theoretical sufficiency determined its sample size (BJHP16).

According to the original Grounded Theory texts, data collection should continue until there are no new discoveries ( i.e. , ‘data saturation’; Glaser & Strauss, 1967). However, recent revisions of this process have discussed how it is rare that data collection is an exhaustive process and researchers should rely on how well their data are able to create a sufficient theoretical account or ‘theoretical sufficiency’ (Dey, 1999). For this study, it was decided that theoretical sufficiency would guide recruitment, rather than looking for data saturation. (BJHP16).

Ten out of the 20 BJHP articles that employed the argument of saturation used one or more citations relating to this principle.

In the SHI, one article (SHI01) claimed that it achieved category saturation based on authors’ judgment.

This number was not fixed in advance, but was guided by the sampling strategy and the judgement, based on the analysis of the data, of the point at which ‘category saturation’ was achieved. (SHI01).

Three articles described a state of achieved saturation without using the term or specifying what sort of saturation they had achieved (i.e. data, theoretical, or thematic saturation) (SHI04; SHI13; SHI30), whilst another four articles explicitly stated that they achieved saturation (SHI100; SHI125; SHI136; SHI137). Two papers stated that they achieved data saturation (SHI73 – see extract in section Sample size guidelines; SHI113), two claimed theoretical saturation (SHI78; SHI115) and two referred to achieving thematic saturation (SHI87; SHI139) or to saturated themes (SHI29; SHI50).

Recruitment and analysis ceased once theoretical saturation was reached in the categories described below (Lincoln and Guba 1985). (SHI115).

The respondents’ quotes drawn on below were chosen as representative, and illustrate saturated themes. (SHI50).

One article stated that thematic saturation was anticipated with its sample size (SHI94). Briefly referring to the difficulty in pinpointing achievement of theoretical saturation, SHI32 (see extract in section Richness and volume of data) defended the sufficiency of its sample size on the basis of “the high degree of consensus [that] had begun to emerge among those interviewed”, suggesting that information from interviews was being replicated. Finally, SHI112 (see extract in section Further sampling to check findings consistency) argued that it achieved saturation of discursive patterns. Seven of the 19 SHI articles cited references to support their position on saturation (see Additional File 4 for the full list of citations used by articles to support their position on saturation across the three journals).

Overall, it is clear that the concept of saturation encompassed a wide range of variants, expressed in terms such as saturation, data saturation, thematic saturation, theoretical saturation, category saturation, saturation of coding, saturation of discursive patterns, and theme completeness. It is noteworthy, however, that although these various claims were sometimes supported with reference to the literature, they were not evidenced in relation to the study at hand.
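None of the reviewed articles operationalised these claims, but the basic stopping rule that most variants share – stop when further data yield nothing new – can be made concrete. The following minimal sketch is purely illustrative and is not drawn from any of the reviewed studies: it assumes interviews are coded incrementally, and treats saturation as a run of consecutive interviews that contribute no previously unseen code (the run length `window` is an arbitrary choice).

```python
# Illustrative only: one simple operationalisation of "no new themes emerged".
# codes[i] is the set of codes applied to interview i, in interview order.
def saturation_point(codes, window=3):
    """Return the 1-based index of the interview at which a run of `window`
    consecutive interviews has added no previously unseen code, else None."""
    seen, run = set(), 0
    for i, interview_codes in enumerate(codes, start=1):
        new = set(interview_codes) - seen
        seen |= new
        run = 0 if new else run + 1
        if run >= window:
            return i
    return None

# Hypothetical coded interviews: new codes dry up after the fifth interview.
coded = [{"a", "b"}, {"b", "c"}, {"d"}, {"a", "e"}, {"f"},
         {"b", "f"}, {"a", "c"}, {"e"}, {"d", "f"}]
print(saturation_point(coded))  # -> 8 (interviews 6-8 add nothing new)
```

A rule of this kind would at least make a saturation claim auditable; the choice of coding granularity and run length remain judgement calls, which is precisely why unevidenced claims of saturation are difficult to assess.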

Pragmatic considerations

The determination of sample size on the basis of pragmatic considerations was the second most frequently invoked argument (9.6% of all justifications), appearing in all three journals. In the BMJ, one article (BMJ15) appealed to pragmatic reasons, relating to time constraints and the difficulty of accessing certain study populations, to justify its sample size.

On the basis of the researchers’ previous experience and the literature, [30, 31] we estimated that recruitment of 15–20 patients at each site would achieve data saturation when data from each site were analysed separately. We set a target of seven to 10 caregivers per site because of time constraints and the anticipated difficulty of accessing caregivers at some home based care services. This gave a target sample of 75–100 patients and 35–50 caregivers overall. (BMJ15).

In the BJHP, four articles mentioned pragmatic considerations relating to time or financial constraints (BJHP27 – see extract in section Saturation; BJHP53), the participant response rate (BJHP13), and the fixed (and thus limited) size of the participant pool from which interviewees were sampled (BJHP18).

We had aimed to continue interviewing until we had reached saturation, a point whereby further data collection would yield no further themes. In practice, the number of individuals volunteering to participate dictated when recruitment into the study ceased (15 young people, 15 parents). Nonetheless, by the last few interviews, significant repetition of concepts was occurring, suggesting ample sampling. (BJHP13).

Finally, three SHI articles explained their sample size with reference to practical aspects: time constraints and project manageability (SHI56), limited availability of respondents and project resources (SHI131), and time constraints (SHI113).

The size of the sample was largely determined by the availability of respondents and resources to complete the study. Its composition reflected, as far as practicable, our interest in how contextual factors (for example, gender relations and ethnicity) mediated the illness experience. (SHI131).

Qualities of the analysis

This sample size justification (8.4% of all justifications) was mainly employed by BJHP articles and referred to an intensive, idiographic and/or latently focused analysis, i.e. an analysis that moved beyond description. More specifically, six articles defended their sample size on the basis of an intensive analysis of transcripts and/or the idiographic focus of the study/analysis. Four of these papers (BJHP02; BJHP19; BJHP24; BJHP47) adopted an Interpretative Phenomenological Analysis (IPA) approach.

The current study employed a sample of 10 in keeping with the aim of exploring each participant’s account (Smith et al., 1999). (BJHP19).

BJHP47 explicitly renounced the notion of saturation within an IPA approach. The other two BJHP articles conducted thematic analysis (BJHP34; BJHP38). The level of analysis – i.e. latent as opposed to a more superficial descriptive analysis – was also invoked as a justification by BJHP38, alongside the argument of an intensive analysis of individual transcripts.

The resulting sample size was at the lower end of the range of sample sizes employed in thematic analysis (Braun & Clarke, 2013). This was in order to enable significant reflection, dialogue, and time on each transcript and was in line with the more latent level of analysis employed, to identify underlying ideas, rather than a more superficial descriptive analysis (Braun & Clarke, 2006). (BJHP38).

Finally, one BMJ paper (BMJ21) defended its sample size with reference to the complexity of the analytic task.

We stopped recruitment when we reached 30–35 interviews, owing to the depth and duration of interviews, richness of data, and complexity of the analytical task. (BMJ21).

Meet sampling requirements

Meeting sampling requirements (7.2% of all justifications) was another argument employed by two BMJ and four SHI articles to explain their sample size. Achieving maximum variation sampling in terms of specific interviewee characteristics determined and explained the sample size of two BMJ studies (BMJ02; BMJ16 – see extract in section Meet research design requirements).

Recruitment continued until sampling frame requirements were met for diversity in age, sex, ethnicity, frequency of attendance, and health status. (BMJ02).

Regarding the SHI articles, two papers explained their numbers on the basis of their sampling strategy (SHI01 – see extract in section Saturation; SHI23), whilst sampling requirements that would help attain sample heterogeneity in terms of a particular characteristic of interest were cited by one paper (SHI127).

The combination of matching the recruitment sites for the quantitative research and the additional purposive criteria led to 104 phase 2 interviews (Internet (OLC): 21; Internet (FTF): 20; Gyms (FTF): 23; HIV testing (FTF): 20; HIV treatment (FTF): 20). (SHI23).

Of the fifty interviews conducted, thirty were translated from Spanish into English. These thirty, from which we draw our findings, were chosen for translation based on heterogeneity in depressive symptomology and educational attainment. (SHI127).

Finally, the pre-determination of sample size on the basis of sampling requirements was stated by one article though this was not used to justify the number of interviews (SHI10).

Sample size guidelines

Five BJHP articles (BJHP28; BJHP38 – see extract in section Qualities of the analysis; BJHP46; BJHP47; BJHP50 – see extract in section Saturation) and one SHI paper (SHI73) relied on citing existing sample size guidelines or norms within research traditions to determine and subsequently defend their sample size (7.2% of all justifications).

Sample size guidelines suggested a range between 20 and 30 interviews to be adequate (Creswell, 1998). Interviewer and note taker agreed that thematic saturation, the point at which no new concepts emerge from subsequent interviews (Patton, 2002), was achieved following completion of 20 interviews. (BJHP28).

Interviewing continued until we deemed data saturation to have been reached (the point at which no new themes were emerging). Researchers have proposed 30 as an approximate or working number of interviews at which one could expect to be reaching theoretical saturation when using a semi-structured interview approach (Morse 2000), although this can vary depending on the heterogeneity of respondents interviewed and complexity of the issues explored. (SHI73).

In line with existing research

Sample sizes of published literature in the area of the subject matter under investigation (3.5% of all justifications) were used by two BMJ articles as guidance and a precedent for determining and defending their own sample size (BMJ08; BMJ15 – see extract in section Pragmatic considerations).

We drew participants from a list of prisoners who were scheduled for release each week, sampling them until we reached the target of 35 cases, with a view to achieving data saturation within the scope of the study and sufficient follow-up interviews and in line with recent studies [8–10]. (BMJ08).

Similarly, BJHP38 (see extract in section Qualities of the analysis) claimed that its sample size was within the range of sample sizes of published studies that use its analytic approach.

Richness and volume of data

BMJ21 (see extract in section Qualities of the analysis) and SHI32 referred to the richness, detailed nature, and volume of data collected (2.3% of all justifications) to justify the sufficiency of their sample size.

Although there were more potential interviewees from those contacted by postcode selection, it was decided to stop recruitment after the 10th interview and focus on analysis of this sample. The material collected was considerable and, given the focused nature of the study, extremely detailed. Moreover, a high degree of consensus had begun to emerge among those interviewed, and while it is always difficult to judge at what point ‘theoretical saturation’ has been reached, or how many interviews would be required to uncover exception(s), it was felt the number was sufficient to satisfy the aims of this small in-depth investigation (Strauss and Corbin 1990). (SHI32).

Meet research design requirements

Determining the sample size so that it is in line with, and serves the requirements of, the research design adopted by the study (2.3% of all justifications) was another justification used by two BMJ papers (BMJ16; BMJ08 – see extract in section In line with existing research).

We aimed for diverse, maximum variation samples [20] totalling 80 respondents from different social backgrounds and ethnic groups and those bereaved due to different types of suicide and traumatic death. We could have interviewed a smaller sample at different points in time (a qualitative longitudinal study) but chose instead to seek a broad range of experiences by interviewing those bereaved many years ago and others bereaved more recently; those bereaved in different circumstances and with different relations to the deceased; and people who lived in different parts of the UK; with different support systems and coroners’ procedures (see Tables 1 and 2 for more details). (BMJ16).

Researchers’ previous experience

The researchers’ previous experience (possibly referring to experience with qualitative research) was invoked by BMJ15 (see extract in section Pragmatic considerations) as a justification for the determination of sample size.

Nature of study

One BJHP paper argued that the sample size was appropriate for the exploratory nature of the study (BJHP38).

A sample of eight participants was deemed appropriate because of the exploratory nature of this research and the focus on identifying underlying ideas about the topic. (BJHP38).

Further sampling to check findings consistency

Finally, SHI112 argued that once it had achieved saturation of discursive patterns, further sampling was decided and conducted to check for consistency of the findings.

Within each of the age-stratified groups, interviews were randomly sampled until saturation of discursive patterns was achieved. This resulted in a sample of 67 interviews. Once this sample had been analysed, one further interview from each age-stratified group was randomly chosen to check for consistency of the findings. Using this approach it was possible to more carefully explore children’s discourse about the ‘I’, agency, relationality and power in the thematic areas, revealing the subtle discursive variations described in this article. (SHI112).

Thematic analysis of passages discussing sample size

This analysis resulted in two overarching thematic areas; the first concerned the variation in the characterisation of sample size sufficiency, and the second related to the perceived threats deriving from sample size insufficiency.

Characterisations of sample size sufficiency

The analysis showed that there were three main characterisations of the sample size in the articles that provided relevant comments and discussion: (a) the vast majority of these qualitative studies (n = 42) considered their sample size as ‘small’ and this was seen and discussed as a limitation (only two articles viewed their small sample size as desirable and appropriate); (b) a minority of articles (n = 4) proclaimed that their achieved sample size was ‘sufficient’; and (c) finally, a small group of studies (n = 5) characterised their sample size as ‘large’. Whilst achieving a ‘large’ sample size was sometimes viewed positively because it led to richer results, there were also occasions when a large sample size was problematic rather than desirable.

‘Small’ but why and for whom?

A number of articles which characterised their sample size as ‘small’ did so against an implicit or explicit quantitative framework of reference. Interestingly, three studies that claimed to have achieved data saturation or ‘theoretical sufficiency’ with their sample size nonetheless noted their ‘small’ sample size as a limitation in their discussion, raising the question of why, or for whom, the sample size was considered small given that the qualitative criterion of saturation had been satisfied.

The current study has a number of limitations. The sample size was small (n = 11) and, however, large enough for no new themes to emerge. (BJHP39).

The study has two principal limitations. The first of these relates to the small number of respondents who took part in the study. (SHI73).

Other articles appeared to accept and acknowledge that their sample was flawed because of its small size (as well as other compositional ‘deficits’ e.g. non-representativeness, biases, self-selection) or anticipated that they might be criticized for their small sample size. It seemed that the imagined audience – perhaps reviewer or reader – was one inclined to hold the tenets of quantitative research, and certainly one to whom it was important to indicate the recognition that small samples were likely to be problematic. That one’s sample might be thought small was often construed as a limitation couched in a discourse of regret or apology.

Very occasionally, the articulation of the small size as a limitation was explicitly aligned against an espoused positivist framework and quantitative research.

This study has some limitations. Firstly, the 100 incidents sample represents a small number of the total number of serious incidents that occurs every year. 26 We sent out a nationwide invitation and do not know why more people did not volunteer for the study. Our lack of epidemiological knowledge about healthcare incidents, however, means that determining an appropriate sample size continues to be difficult. (BMJ20).

Indicative of an apparent oscillation of qualitative researchers between the different requirements and protocols demarcating the quantitative and qualitative worlds, there were a few instances of articles which briefly recognised their ‘small’ sample size as a limitation, but then defended their study on more qualitative grounds, such as their ability and success at capturing the complexity of experience and delving into the idiographic, and at generating particularly rich data.

This research, while limited in size, has sought to capture some of the complexity attached to men’s attitudes and experiences concerning incomes and material circumstances. (SHI35).

Our numbers are small because negotiating access to social networks was slow and labour intensive, but our methods generated exceptionally rich data. (BMJ21).

This study could be criticised for using a small and unrepresentative sample. Given that older adults have been ignored in the research concerning suntanning, fair-skinned older adults are the most likely to experience skin cancer, and women privilege appearance over health when it comes to sunbathing practices, our study offers depth and richness of data in a demographic group much in need of research attention. (SHI57).

‘Good enough’ sample sizes

Only four articles expressed some degree of confidence that their achieved sample size was sufficient. For example, SHI139, in line with the justification of thematic saturation that it offered, expressed trust in its sample size sufficiency despite the poor response rate. Similarly, BJHP04, which did not provide a sample size justification, argued that it targeted a larger sample size in order to eventually recruit a sufficient number of interviewees, due to anticipated low response rate.

Twenty-three people with type I diabetes from the target population of 133 ( i.e. 17.3%) consented to participate but four did not then respond to further contacts (total N = 19). The relatively low response rate was anticipated, due to the busy life-styles of young people in the age range, the geographical constraints, and the time required to participate in a semi-structured interview, so a larger target sample allowed a sufficient number of participants to be recruited. (BJHP04).

Two other articles (BJHP35; SHI32) linked the claimed sufficiency to the scope (i.e. ‘small, in-depth investigation’), aims and nature (i.e. ‘exploratory’) of their studies, thus anchoring their numbers to the particular context of their research. Nevertheless, claims of sample size sufficiency were sometimes undermined when they were juxtaposed with an acknowledgement that a larger sample size would be more scientifically productive.

Although our sample size was sufficient for this exploratory study, a more diverse sample including participants with lower socioeconomic status and more ethnic variation would be informative. A larger sample could also ensure inclusion of a more representative range of apps operating on a wider range of platforms. (BJHP35).

‘Large’ sample sizes - Promise or peril?

Three articles (BMJ13; BJHP05; BJHP48), all of which provided the justification of saturation, characterised their sample size as ‘large’ and narrated this oversufficiency in positive terms, as it allowed richer data and findings and enhanced the potential for generalisation. The type of generalisation aspired to (BJHP48) was not, however, further specified.

This study used rich data provided by a relatively large sample of expert informants on an important but under-researched topic. (BMJ13).

Qualitative research provides a unique opportunity to understand a clinical problem from the patient’s perspective. This study had a large diverse sample, recruited through a range of locations and used in-depth interviews which enhance the richness and generalizability of the results. (BJHP48).

And whilst a ‘large’ sample size was endorsed and valued by some qualitative researchers, within the psychological tradition of IPA a ‘large’ sample size was counter-normative and therefore needed to be justified. Four BJHP studies, all adopting IPA, expressed the appropriateness or desirability of ‘small’ sample sizes (BJHP41; BJHP45) or hastened to explain why they included a larger than typical sample size (BJHP32; BJHP47). For example, BJHP32 below provides a rationale for how an IPA study can accommodate a large sample size and how this was indeed suitable for the purposes of the particular research. To strengthen the explanation for choosing a non-normative sample size, previous IPA research citing a similar sample size approach was used as a precedent.

Small scale IPA studies allow in-depth analysis which would not be possible with larger samples (Smith et al., 2009). (BJHP41).

Although IPA generally involves intense scrutiny of a small number of transcripts, it was decided to recruit a larger diverse sample as this is the first qualitative study of this population in the United Kingdom (as far as we know) and we wanted to gain an overview. Indeed, Smith, Flowers, and Larkin (2009) agree that IPA is suitable for larger groups. However, the emphasis changes from an in-depth individualistic analysis to one in which common themes from shared experiences of a group of people can be elicited and used to understand the network of relationships between themes that emerge from the interviews. This large-scale format of IPA has been used by other researchers in the field of false-positive research. Baillie, Smith, Hewison, and Mason (2000) conducted an IPA study, with 24 participants, of ultrasound screening for chromosomal abnormality; they found that this larger number of participants enabled them to produce a more refined and cohesive account. (BJHP32).

The IPA articles found in the BJHP were the only instances where a ‘small’ sample size was advocated and a ‘large’ sample size problematized and defended. These IPA studies illustrate that the characterisation of sample size sufficiency can be a function of researchers’ theoretical and epistemological commitments rather than the result of an ‘objective’ sample size assessment.

Threats from sample size insufficiency

As shown above, the majority of articles that commented on their sample size simultaneously characterised it as small and problematic. On those occasions when authors did not simply cite their ‘small’ sample size as a study limitation but went on to provide an account of how and why a small sample size was problematic, two important scientific qualities of the research seemed to be threatened: the generalizability and validity of results.

Generalizability

Those who characterised their sample as ‘small’ connected this to the limited potential for generalisation of the results. Other features related to the sample – often some kind of compositional particularity – were also linked to limited potential for generalisation. Though articles did not always explicitly articulate what form of generalisation they referred to (see BJHP09), generalisation was mostly conceived in nomothetic terms, that is, it concerned the potential to draw inferences from the sample to the broader study population (‘representational generalisation’ – see BJHP31) and less often to other populations or cultures.

It must be noted that samples are small and whilst in both groups the majority of those women eligible participated, generalizability cannot be assumed. (BJHP09).

The study’s limitations should be acknowledged: Data are presented from interviews with a relatively small group of participants, and thus, the views are not necessarily generalizable to all patients and clinicians. In particular, patients were only recruited from secondary care services where COFP diagnoses are typically confirmed. The sample therefore is unlikely to represent the full spectrum of patients, particularly those who are not referred to, or who have been discharged from dental services. (BJHP31).

Without explicitly using the term generalisation, two SHI articles noted how their ‘small’ sample size imposed limits on ‘the extent that we can extrapolate from these participants’ accounts’ (SHI114) or to the possibility ‘to draw far-reaching conclusions from the results’ (SHI124).

Interestingly, only a minority of articles alluded to, or invoked, a type of generalisation that is aligned with qualitative research, that is, idiographic generalisation (i.e. generalisation that can be made from and about cases [ 5 ]). These articles, all published in the discipline of sociology, defended their findings in terms of the possibility of drawing logical and conceptual inferences to other contexts and of generating understanding that has the potential to advance knowledge, despite their ‘small’ size. One article (SHI139) clearly contrasted nomothetic (statistical) generalisation to idiographic generalisation, arguing that the lack of statistical generalizability does not nullify the ability of qualitative research to still be relevant beyond the sample studied.

Further, these data do not need to be statistically generalisable for us to draw inferences that may advance medicalisation analyses (Charmaz 2014). These data may be seen as an opportunity to generate further hypotheses and are a unique application of the medicalisation framework. (SHI139).

Although a small-scale qualitative study related to school counselling, this analysis can be usefully regarded as a case study of the successful utilisation of mental health-related resources by adolescents. As many of the issues explored are of relevance to mental health stigma more generally, it may also provide insights into adult engagement in services. It shows how a sociological analysis, which uses positioning theory to examine how people negotiate, partially accept and simultaneously resist stigmatisation in relation to mental health concerns, can contribute to an elucidation of the social processes and narrative constructions which may maintain as well as bridge the mental health service gap. (SHI103).

Only one article (SHI30) used the term transferability to argue for the potential of wider relevance of the results which was thought to be more the product of the composition of the sample (i.e. diverse sample), rather than the sample size.

Validity

The second major concern that arose from a ‘small’ sample size pertained to the internal validity of findings (i.e. here the term is used to denote the ‘truth’ or credibility of research findings). Authors expressed uncertainty about the degree of confidence in particular aspects or patterns of their results, primarily those that concerned some form of differentiation on the basis of relevant participant characteristics.

The information source preferred seemed to vary according to parents’ education; however, the sample size is too small to draw conclusions about such patterns. (SHI80).

Although our numbers were too small to demonstrate gender differences with any certainty, it does seem that the biomedical and erotic scripts may be more common in the accounts of men and the relational script more common in the accounts of women. (SHI81).

In other instances, articles expressed uncertainty about whether their results accounted for the full spectrum and variation of the phenomenon under investigation. In other words, a ‘small’ sample size (alongside compositional ‘deficits’ such as a not statistically representative sample) was seen to threaten the ‘content validity’ of the results which in turn led to constructions of the study conclusions as tentative.

Data collection ceased on pragmatic grounds rather than when no new information appeared to be obtained (i.e., saturation point). As such, care should be taken not to overstate the findings. Whilst the themes from the initial interviews seemed to be replicated in the later interviews, further interviews may have identified additional themes or provided more nuanced explanations. (BJHP53).

…it should be acknowledged that this study was based on a small sample of self-selected couples in enduring marriages who were not broadly representative of the population. Thus, participants may not be representative of couples that experience postnatal PTSD. It is therefore unlikely that all the key themes have been identified and explored. For example, couples who were excluded from the study because the male partner declined to participate may have been experiencing greater interpersonal difficulties. (BJHP03).

In other instances, articles attempted to preserve a degree of credibility of their results, despite the recognition that the sample size was ‘small’. Clarity and sharpness of emerging themes and alignment with previous relevant work were the arguments employed to warrant the validity of the results.

This study focused on British Chinese carers of patients with affective disorders, using a qualitative methodology to synthesise the sociocultural representations of illness within this community. Despite the small sample size, clear themes emerged from the narratives that were sufficient for this exploratory investigation. (SHI98).

Discussion

The present study sought to examine how qualitative sample sizes in health-related research are characterised and justified. In line with previous studies [22, 30, 33, 34], the findings demonstrate that reporting of sample size sufficiency is limited; just over 50% of articles in the BMJ and BJHP and 82% in the SHI did not provide any sample size justification. Providing a sample size justification was not related to the number of interviews conducted, but it was associated with the journal that the article was published in, indicating the influence of disciplinary or publishing norms, also reported in prior research [30]. This lack of transparency about sample size sufficiency is problematic given that most qualitative researchers would agree that it is an important marker of quality [56, 57]. Moreover, with the rise of qualitative research in social sciences, efforts to synthesise existing evidence and assess its quality are obstructed by poor reporting [58, 59].

When authors justified their sample size, our findings indicate that sufficiency was mostly appraised with reference to features that were intrinsic to the study, in agreement with general advice on sample size determination [4, 11, 36]. The principle of saturation was the most commonly invoked argument [22], accounting for 55% of all justifications. A wide range of variants of saturation was evident, corroborating the proliferation of the meaning of the term [49] and reflecting different underlying conceptualisations or models of saturation [20]. Nevertheless, claims of saturation were never substantiated in relation to procedures conducted in the study itself, endorsing similar observations in the literature [25, 30, 47]. Claims of saturation were sometimes supported with citations of other literature, suggesting a removal of the concept away from the characteristics of the study at hand. Pragmatic considerations, such as resource constraints or participant response rate and availability, were the second most frequently used argument, accounting for approximately 10% of justifications; another 23% of justifications also represented intrinsic-to-the-study characteristics (i.e. qualities of the analysis, meeting sampling or research design requirements, richness and volume of the data obtained, nature of study, and further sampling to check findings consistency).

Only 12% of mentions of sample size justification pertained to arguments that were external to the study at hand, in the form of existing sample size guidelines and prior research that sets precedents. Whilst community norms and prior research can establish useful rules of thumb for estimating sample sizes [60] – and reveal what sizes are more likely to be acceptable within research communities – researchers should avoid adopting these norms uncritically, especially when such guidelines [e.g. 30, 35] might be based on research that does not provide adequate evidence of sample size sufficiency. Similarly, whilst methodological research that seeks to demonstrate the achievement of saturation is invaluable, since it explicates the parameters upon which saturation is contingent and indicates when a research project is likely to require a smaller or a larger sample [e.g. 29], the specific numbers at which saturation was achieved within these projects cannot be routinely extrapolated to other projects. We concur with existing views [11, 36] that the consideration of the characteristics of the study at hand – such as the epistemological and theoretical approach, the nature of the phenomenon under investigation, the aims and scope of the study, the quality and richness of data, and the researcher’s experience and skill in conducting qualitative research – should be the primary guide in determining sample size and assessing its sufficiency.

Moreover, although numbers in qualitative research are not unimportant [61], sample size should not be considered alone but should be embedded in the more encompassing examination of data adequacy [56, 57]. Erickson’s [62] dimensions of ‘evidentiary adequacy’ are useful here. He explains the concept in terms of adequate amounts of evidence, adequate variety in kinds of evidence, adequate interpretive status of evidence, adequate disconfirming evidence, and adequate discrepant case analysis. Not all dimensions will be relevant across all qualitative research designs, but this illustrates the thickness of the concept of data adequacy, taking it beyond sample size.

The present research also demonstrated that sample sizes were commonly seen as ‘small’ and insufficient and discussed as a limitation. Often unjustified (and in two cases incongruent with the articles’ own claims of saturation), these findings imply that sample size in qualitative health research is often adversely judged (or expected to be judged) against an implicit, yet omnipresent, quasi-quantitative standpoint. Indeed there were a few instances in our data where authors appeared, possibly in response to reviewers, to resist some sort of quantification of their results. This implicit reference point became more apparent when authors discussed the threats deriving from an insufficient sample size. Whilst the concerns about internal validity might be legitimate to the extent that qualitative research projects, which are broadly related to realism, are set to examine phenomena in sufficient breadth and depth, the concerns around generalizability revealed a conceptualisation that is not compatible with purposive sampling. The limited potential for generalisation, as a result of a small sample size, was often discussed in nomothetic, statistical terms. Only occasionally was analytic or idiographic generalisation invoked to warrant the value of the study’s findings [5, 17].

Strengths and limitations of the present study

We note, first, the limited number of health-related journals reviewed, so that only a ‘snapshot’ of qualitative health research has been captured. Examining additional disciplines (e.g. nursing sciences) as well as inter-disciplinary journals would add to the findings of this analysis. Nevertheless, our study is the first to provide some comparative insights on the basis of disciplines that are differently attached to the legacy of positivism, and it analysed literature published over a lengthy period of time (15 years). Guetterman [27] also examined health-related literature, but that analysis was restricted to the 26 most highly cited articles published over a period of five years, whilst Carlsen and Glenton’s [22] study concentrated on focus group health research. Moreover, although it was our intention to examine sample size justification in relation to the epistemological and theoretical positions of articles, this proved to be challenging, largely due to the absence of relevant information or the difficulty of clearly discerning articles’ positions [63] and classifying them under specific approaches (e.g. studies often combined elements from different theoretical and epistemological traditions). We believe that such an analysis would yield useful insights, as it links the methodological issue of sample size to the broader philosophical stance of the research. Despite these limitations, the analysis of the characterisation of sample size and of the threats seen to accrue from insufficient sample size enriches our understanding of sample size (in)sufficiency argumentation by linking it to other features of the research. As the peer-review process becomes increasingly public, future research could usefully examine how reporting around sample size sufficiency and data adequacy might be influenced by the interactions between authors and reviewers.

Conclusions

The past decade has seen a growing appetite in qualitative research for an evidence-based approach to sample size determination and to evaluations of the sufficiency of sample size. Despite the conceptual and methodological developments in the area, the findings of the present study confirm previous studies in concluding that appraisals of sample size sufficiency are either absent or poorly substantiated. To ensure and maintain high quality research that will encourage greater appreciation of qualitative work in health-related sciences [64], we argue that qualitative researchers should be more transparent and thorough in their evaluation of sample size as part of their appraisal of data adequacy. We would encourage the practice of appraising sample size sufficiency with close reference to the study at hand, and would thus caution against responding to the growing methodological research in this area with a decontextualised application of sample size numerical guidelines, norms and principles. Although researchers might find that sample size community norms serve as useful rules of thumb, we recommend that methodological knowledge be used to critically consider how saturation and other parameters that affect sample size sufficiency pertain to the specifics of the particular project. Those reviewing papers have a vital role in encouraging transparent, study-specific reporting. The review process should support authors to exercise nuanced judgments in decisions about sample size determination in the context of the range of factors that influence sample size sufficiency and the specifics of a particular study. In light of the growing methodological evidence in the area, transparent presentation of such evidence-based judgement is crucial and in time should surely obviate the seemingly routine practice of citing the ‘small’ size of qualitative samples among the study limitations.

A non-parametric test of difference for independent samples was performed since the variable number of interviews violated assumptions of normality according to the standardized scores of skewness and kurtosis (BMJ: z skewness = 3.23, z kurtosis = 1.52; BJHP: z skewness = 4.73, z kurtosis = 4.85; SHI: z skewness = 12.04, z kurtosis = 21.72) and the Shapiro-Wilk test of normality ( p  < .001).
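For readers wishing to reproduce this kind of screening, the sketch below shows one way to compute standardised skewness and kurtosis scores and the Shapiro-Wilk test, followed by a non-parametric comparison across the three journals. The interview counts are synthetic placeholders rather than the study's data, and the use of the Kruskal-Wallis test is our assumption: the text names only "a non-parametric test of difference for independent samples", for which Kruskal-Wallis is the standard choice with three groups.

```python
# Sketch of the normality screen and non-parametric test described above.
# The interview counts below are synthetic placeholders, not the study's data.
import numpy as np
from scipy import stats

def standardised_skew_kurt(x):
    """Skewness and excess kurtosis divided by their standard errors."""
    n = len(x)
    se_skew = np.sqrt(6 * n * (n - 1) / ((n - 2) * (n + 1) * (n + 3)))
    se_kurt = 2 * se_skew * np.sqrt((n ** 2 - 1) / ((n - 3) * (n + 5)))
    return stats.skew(x) / se_skew, stats.kurtosis(x) / se_kurt

rng = np.random.default_rng(42)
journals = {  # placeholder per-article interview counts for each journal
    "BMJ": rng.poisson(30, size=20),
    "BJHP": rng.poisson(15, size=50),
    "SHI": rng.poisson(25, size=140),
}

for name, counts in journals.items():
    z_skew, z_kurt = standardised_skew_kurt(counts)
    _, p = stats.shapiro(counts)
    print(f"{name}: z_skew={z_skew:.2f}, z_kurt={z_kurt:.2f}, Shapiro-Wilk p={p:.3f}")

# If normality is violated, compare the three journals non-parametrically.
h, p = stats.kruskal(*journals.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.3f}")
```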

Abbreviations

BJHP: British Journal of Health Psychology

BMJ: British Medical Journal

IPA: Interpretative Phenomenological Analysis

SHI: Sociology of Health & Illness

Spencer L, Ritchie J, Lewis J, Dillon L. Quality in qualitative evaluation: a framework for assessing research evidence. National Centre for Social Research 2003 https://www.heacademy.ac.uk/system/files/166_policy_hub_a_quality_framework.pdf Accessed 11 May 2018.

Fusch PI, Ness LR. Are we there yet? Data saturation in qualitative research. Qual Rep. 2015;20(9):1408–16.


Robinson OC. Sampling in interview-based qualitative research: a theoretical and practical guide. Qual Res Psychol. 2014;11(1):25–41.


Sandelowski M. Sample size in qualitative research. Res Nurs Health. 1995;18(2):179–83.


Sandelowski M. One is the liveliest number: the case orientation of qualitative research. Res Nurs Health. 1996;19(6):525–9.

Luborsky MR, Rubinstein RL. Sampling in qualitative research: rationale, issues, and methods. Res Aging. 1995;17(1):89–113.

Marshall MN. Sampling for qualitative research. Fam Pract. 1996;13(6):522–6.

Patton MQ. Qualitative evaluation and research methods. 2nd ed. Newbury Park, CA: Sage; 1990.

van Rijnsoever FJ. (I Can’t get no) saturation: a simulation and guidelines for sample sizes in qualitative research. PLoS One. 2017;12(7):e0181689.

Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147–9.

Morse JM. Determining sample size. Qual Health Res. 2000;10(1):3–5.

Gergen KJ, Josselson R, Freeman M. The promises of qualitative inquiry. Am Psychol. 2015;70(1):1–9.

Borsci S, Macredie RD, Barnett J, Martin J, Kuljis J, Young T. Reviewing and extending the five-user assumption: a grounded procedure for interaction evaluation. ACM Trans Comput Hum Interact. 2013;20(5):29.

Borsci S, Macredie RD, Martin JL, Young T. How many testers are needed to assure the usability of medical devices? Expert Rev Med Devices. 2014;11(5):513–25.

Glaser BG, Strauss AL. The discovery of grounded theory: strategies for qualitative research. Chicago, IL: Aldine; 1967.

Kerr C, Nixon A, Wild D. Assessing and demonstrating data saturation in qualitative inquiry supporting patient-reported outcomes research. Expert Rev Pharmacoecon Outcomes Res. 2010;10(3):269–81.

Lincoln YS, Guba EG. Naturalistic inquiry. London: Sage; 1985.


Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2015;26:1753–60.

Nelson J. Using conceptual depth criteria: addressing the challenge of reaching saturation in qualitative research. Qual Res. 2017;17(5):554–70.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2017. https://doi.org/10.1007/s11135-017-0574-8 .

Caine K. Local standards for sample size at CHI. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. ACM; 2016. p. 981–92.

Carlsen B, Glenton C. What about N? A methodological study of sample-size reporting in focus group studies. BMC Med Res Methodol. 2011;11(1):26.

Constantinou CS, Georgiou M, Perdikogianni M. A comparative method for themes saturation (CoMeTS) in qualitative interviews. Qual Res. 2017;17(5):571–88.

Dai NT, Free C, Gendron Y. Interview-based research in accounting 2000–2014: a review. November 2016. https://ssrn.com/abstract=2711022 or https://doi.org/10.2139/ssrn.2711022 . Accessed 17 May 2018.

Francis JJ, Johnston M, Robertson C, Glidewell L, Entwistle V, Eccles MP, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229–45.

Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82.

Guetterman TC. Descriptions of sampling practices within five approaches to qualitative research in education and the health sciences. Forum Qual Soc Res. 2015;16(2):25. http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256 . Accessed 17 May 2018.

Hagaman AK, Wutich A. How many interviews are enough to identify metathemes in multisited and cross-cultural research? Another perspective on Guest, Bunce, and Johnson’s (2006) landmark study. Field Methods. 2017;29(1):23–41.

Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591–608.

Marshall B, Cardon P, Poddar A, Fontenot R. Does sample size matter in qualitative research?: a review of qualitative interviews in IS research. J Comput Inform Syst. 2013;54(1):11–22.

Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual Soc Res 2010;11(3):8. http://nbn-resolving.de/urn:nbn:de:0114-fqs100387 . Accessed 17 May 2018.

Safman RM, Sobal J. Qualitative sample extensiveness in health education research. Health Educ Behav. 2004;31(1):9–21.

Saunders MN, Townsend K. Reporting and justifying the number of interview participants in organization and workplace research. Br J Manag. 2016;27(4):836–52.

Sobal J. Sample extensiveness in qualitative nutrition education research. J Nutr Educ. 2001;33(4):184–92.

Thomson SB. Sample size and grounded theory. JOAAG. 2010;5(1). http://www.joaag.com/uploads/5_1__Research_Note_1_Thomson.pdf . Accessed 17 May 2018.

Baker SE, Edwards R. How many qualitative interviews is enough?: expert voices and early career reflections on sampling and cases in qualitative research. National Centre for Research Methods Review Paper. 2012; http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf . Accessed 17 May 2018.

Ogden J, Cornwell D. The role of topic, interviewee, and question in predicting rich interview data in the field of health research. Sociol Health Illn. 2010;32(7):1059–71.

Green J, Thorogood N. Qualitative methods for health research. London: Sage; 2004.

Ritchie J, Lewis J, Elam G. Designing and selecting samples. In: Ritchie J, Lewis J, editors. Qualitative research practice: a guide for social science students and researchers. London: Sage; 2003. p. 77–108.

Britten N. Qualitative research: qualitative interviews in medical research. BMJ. 1995;311(6999):251–3.

Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. London: Sage; 2007.

Fugard AJ, Potts HW. Supporting thinking on sample sizes for thematic analyses: a quantitative tool. Int J Soc Res Methodol. 2015;18(6):669–84.

Emmel N. Themes, variables, and the limits to calculating sample size in qualitative research: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):685–6.

Braun V, Clarke V. (Mis) conceptualising themes, thematic analysis, and other problems with Fugard and Potts’ (2015) sample-size tool for thematic analysis. Int J Soc Res Methodol. 2016;19(6):739–43.

Hammersley M. Sampling and thematic analysis: a response to Fugard and Potts. Int J Soc Res Methodol. 2015;18(6):687–8.

Charmaz K. Constructing grounded theory: a practical guide through qualitative analysis. London: Sage; 2006.

Bowen GA. Naturalistic inquiry and the saturation concept: a research note. Qual Res. 2008;8(1):137–52.

Morse JM. Data were saturated. Qual Health Res. 2015;25(5):587–8.

O’Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res. 2013;13(2):190–7.

van Manen M, Higgins I, van der Riet P. A conversation with Max van Manen on phenomenology in its original sense. Nurs Health Sci. 2016;18(1):4–7.

Dey I. Grounding grounded theory. San Francisco, CA: Academic Press; 1999.

Hays DG, Wood C, Dahl H, Kirk-Jenkins A. Methodological rigor in journal of counseling & development qualitative research articles: a 15-year review. J Couns Dev. 2016;94(2):172–83.

Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Boyatzis RE. Transforming qualitative information: thematic analysis and code development. Thousand Oaks, CA: Sage; 1998.

Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual Psychol. 2017;4(1):2–22.

Morrow SL. Quality and trustworthiness in qualitative research in counseling psychology. J Couns Psychol. 2005;52(2):250–60.

Barroso J, Sandelowski M. Sample reporting in qualitative studies of women with HIV infection. Field Methods. 2003;15(4):386–404.

Glenton C, Carlsen B, Lewin S, Munthe-Kaas H, Colvin CJ, Tunçalp Ö, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings—paper 5: how to assess adequacy of data. Implement Sci. 2018;13(Suppl 1):14.

Onwuegbuzie AJ, Leech NL. A call for qualitative power analyses. Qual Quant. 2007;41(1):105–21.

Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.

Erickson F. Qualitative methods in research on teaching. In: Wittrock M, editor. Handbook of research on teaching. 3rd ed. New York: Macmillan; 1986. p. 119–61.

Bradbury-Jones C, Taylor J, Herber O. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.

Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A, et al. An open letter to the BMJ editors on qualitative research. BMJ. 2016;352:i563.


Acknowledgments

We would like to thank Dr. Paula Smith and Katharine Lee for their comments on a previous draft of this paper as well as Natalie Ann Mitchell and Meron Teferra for assisting us with data extraction.

This research was initially conceived of and partly conducted with financial support from the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) programme (EP/F063822/1 and EP/G012393/1). The research continued and was completed independent of any support. The funding body did not have any role in the study design, the collection, analysis and interpretation of the data, in the writing of the paper, and in the decision to submit the manuscript for publication. The views expressed are those of the authors alone.

Availability of data and materials

Supporting data can be accessed in the original publications. Additional File 2 lists all eligible studies that were included in the present analysis.

Author information

Authors and affiliations

Department of Psychology, University of Bath, Building 10 West, Claverton Down, Bath, BA2 7AY, UK

Konstantina Vasileiou & Julie Barnett

School of Psychology, Newcastle University, Ridley Building 1, Queen Victoria Road, Newcastle upon Tyne, NE1 7RU, UK

Susan Thorpe

Department of Computer Science, Brunel University London, Wilfred Brown Building 108, Uxbridge, UB8 3PH, UK

Terry Young


Contributions

JB and TY conceived the study; KV, JB, and TY designed the study; KV identified the articles and extracted the data; KV and JB assessed eligibility of articles; KV, JB, ST, and TY contributed to the analysis of the data, discussed the findings and early drafts of the paper; KV developed the final manuscript; KV, JB, ST, and TY read and approved the manuscript.

Corresponding author

Correspondence to Konstantina Vasileiou .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

Terry Young is an academic who undertakes research and occasional consultancy in the areas of health technology assessment, information systems, and service design. He is unaware of any direct conflict of interest with respect to this paper. All other authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional Files

Additional File 1:

Editorial positions on qualitative research and sample considerations (where available). (DOCX 12 kb)

Additional File 2:

List of eligible articles included in the review ( N  = 214). (DOCX 38 kb)

Additional File 3:

Data Extraction Form. (DOCX 15 kb)

Additional File 4:

Citations used by articles to support their position on saturation. (DOCX 14 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Vasileiou, K., Barnett, J., Thorpe, S. et al. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med Res Methodol 18 , 148 (2018). https://doi.org/10.1186/s12874-018-0594-7


Received : 22 May 2018

Accepted : 29 October 2018

Published : 21 November 2018

DOI : https://doi.org/10.1186/s12874-018-0594-7


Keywords

  • Sample size
  • Sample size justification
  • Sample size characterisation
  • Data adequacy
  • Qualitative health research
  • Qualitative interviews
  • Systematic analysis

BMC Medical Research Methodology

ISSN: 1471-2288



Factors to determine your sample size for qualitative research


A common question that comes up when working with new and existing clients is sample size. It always crops up alongside further enquiries: what is the minimum sample size, what constitutes a large sample, and should I be aiming to recruit more people, as I would for quantitative research?

The health of your qualitative research, as always, will depend on what you are aiming to discover. Many elements affect research, and in this blog we will look at some key factors you can weigh up when setting a sample size for qualitative research.

The factors that determine your sample for a qualitative research project can have a huge impact on your findings.

It's a journey

Sample size for qualitative research is an organic process.

From the outset, defining your sample size will be the first hurdle to clear, but in these early stages we need to acknowledge that the people you initially recruit may not be your final selection. Often, when building a sample for qualitative research, a good place to start is with a quantitative method such as a survey or questionnaire. From there you can segment your audience into the population you are targeting or hoping to attract.

For example, if you were an academic institution such as a university and you wanted insight into how to improve your student services, you would first weigh up what an adequate sample size would be and what you were hoping to discover. One approach is to canvass a large number of learners via a quantitative survey sent to their student accounts, then segment the responses into a smaller group of 30–50 students that best represents the audience you want to reach. Within that new group, you can launch a qualitative research campaign that is representative of your institution. A minimal sketch of this screening step follows.
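The hypothetical sketch below illustrates that screening step in Python with pandas: filter a survey export down to eligible respondents, then draw a stratified subgroup of roughly the target size. The file name, column names, and eligibility criteria are all invented for illustration.

```python
# Hypothetical screener: reduce a large survey sample to a ~40-person
# qualitative pool. File and column names are invented for illustration.
import pandas as pd

survey = pd.read_csv("student_survey.csv")

# Keep respondents in the target audience who agreed to be re-contacted.
eligible = survey[(survey["used_student_services"] == "yes")
                  & (survey["consent_followup"] == "yes")]

# Sample roughly 40 participants, stratified by year of study so the smaller
# group still mirrors the wider student population.
target = 40
frac = min(1.0, target / len(eligible))
pool = (eligible.groupby("year_of_study", group_keys=False)
                .apply(lambda g: g.sample(frac=frac, random_state=1)))

print(f"{len(pool)} of {len(eligible)} eligible students invited to interviews")
```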

Planning the journey of your qualitative research.

1. Have different research methods for different stages of your research journey.
2. Be open to new methods of collecting data and information.
3. Break up your larger sample into smaller groups depending on how they answer or score in preliminary research activities.

Determining sample sizes in qualitative research does matter

(but bigger isn't always better).

Modern technology and AI are changing the way we manage research methods. These advancements mean you are able to process large sets of data. When determining sample size, we need to step back from our traditional understanding of what qualitative research is and ask: what question needs answering, and what sample do we have readily available?

Focus groups, in-depth interviews and consultations can produce great sample data and a range of information for you and your team to discuss, but ensuring the people you are sampling are relevant and appropriate is half the battle. Often, the more you research and understand what you want to discover, the clearer your sample size will become.

Experience tells us that sample sizes depend on the campaign you are deploying, so consider one of the following methods.

  • Start large and reduce. Quantitative research is a great way to recruit a large sample of people, which can then be reduced and introduced to qualitative methodologies. This approach enables you to handpick a population of individuals that match your criteria or target audience - aim to lower the initial sample size down to around 30-40 participants.
  • Communities. These can accommodate large numbers of participants - often around 200 - and, using platforms designed in a WhatsApp style, can provide qualitative data: you can interact with participants using a range of qualitative (and quantitative) tools, guide them onto related points and answer any questions. Campaigns such as these could operate anywhere between 3 and 14 days.
  • Long-term campaigns. This form of qualitative research can involve a large sample - sometimes up to 1,000 participants - and studies can last up to 12 months. Discussion is ongoing, with a range of different activities to keep respondents engaged.

In community projects, participants can be contacted for qualitative discussions or interviews

Managing your sample size in qualitative research is key

Let's be clear: there is no one-size-fits-all approach to determining sample size; we can, however, be as scientific as possible with our methodology. When your research calls for larger samples, you'll need to manage that group of participants in a practical and nurturing way. If you are running a long-term community project with a high number of participants, engagement and analysis can get expensive, but managing relationships and interactions will yield the best results. So how do we achieve this?

Incentivise your project

When launching your project, consider adding an incentive that will motivate your sample to respond regularly and in detail. Look at the range of participants that you are working with and assess what a suitable reward could be.

Communicate in their language on a regular basis

This might seem like an obvious point to make, but depending on your campaign, interaction and engagement with your sample will yield greater results. If respondents feel valued, and moderators are taking note of their thoughts and responses, your findings are less likely to flatten into repetitive, saturated data.

Choose a platform that works for you

With a number of research platforms now on the market, it's essential to choose one that suits the way you work and the budget you have available. When making a choice, check that it can centralise and store depth interviews, quantitative research and links to articles, and that it performs a high level of analysis. If it cannot do all of this - especially the analysis - look for a platform that does.

Sample size in qualitative research is about insight

The most important factor when conducting qualitative research is the quality of insight you are obtaining. At each stage of planning, consider what data you want to study and, in particular, what you are going to do with it. You do not want your sample to suffer from saturation, so it's vital to ask the right questions and deploy the right methodology for your campaign.

Why do I write this? Because qualitative research is all about the level of insight you want to harness. As mentioned previously, work with a platform that can centralise all the information, but, importantly, does so in a way that makes sense - in a way that provides value. Insight is about providing a relevant narrative for your work, whether at an in-house company level or a full national level. Researchers will always tell you that insight is key and that you need to choose the correct means of collecting it in order to answer the questions you established at the start.

Insight can take the form of heat maps, an interview or discussion of theoretical viewpoints, to name a few, and any platform should deliver live, real-time results. Seeing a large share of your campaign's population delivering information as it happens enables you to make decisions more quickly and efficiently.

Budgeting for research

How much will it cost?

There is an age-old assumption that qualitative research is more expensive than quantitative because you have to spend longer analysing the data - the notion being that the larger the sample size, the more time it takes to understand and interpret the research. Historically, researchers have taken their time to study a chosen method, and this has been a solid, practical way to establish a focus or action point. But as mentioned at the start, technology advances at a lightning pace, which means features are constantly updated and prices are driven down. Yes, a quick, small study will probably be cheaper - it requires fewer working hours and involves a smaller amount of analysis - so it will provide good value and is ideal for projects where time and budget are limited.

Does a larger sample mean a larger bill?

The answer is: it varies. Your sample size will inevitably play a part in the overall cost, but it depends on what you are asking participants to do and the level of support required. The size of your sample will eat a large chunk of your budget if you want a high volume of video content transcribed and analysed for sentiment. That said, design a narrative into your project so that your sample size can be reduced as the project develops; this way you can save the more data-intensive elements for when you have whittled down your initial cohort.

In our experience, most companies are passionate about research and want to work with anyone who has respect for the field. When establishing your research budget, consider how much support you require, the amount of prep work you can do in-house, and the type of project you want to deploy.



Is There a Relationship between Salivary Cortisol and Temporomandibular Disorder: A Systematic Review

1. Introduction

2. Materials and Methods

2.1. Protocol and Registration

2.2. Eligibility Criteria

2.3. Information Sources and Search Strategies

2.4. Study Selection

2.5. Data Extraction and Data Items

2.6. Data Synthesis (Meta-Analysis)

2.7. Risk of Bias in Individual Studies

3. Results

3.1. Characteristics of Included Studies (Table 2)

3.2. Salivary Parameters of the Participants of the Included Studies (Table 3)

3.3. Risk of Bias Assessment (Tables 4-6)

3.4. Certainty of Evidence (GRADE Analysis) (Table 7)

Table 2. Characteristics of the included studies.

Author, Year, Region | Study Design | Age Range/Average (yrs) | Sample Size (Test/Control) | Study Population | Key Findings | Conclusion
Rosar et al., 2021, Brazil | Cross-sectional | 19–30 | 43 (28/15) | TMD group; healthy group | Similar salivary cortisol levels found between groups on awakening and after 30 min | Cortisol levels were not associated with the number or duration of bruxism (TMD) episodes
Venkatesh et al., 2021, India | Cross-sectional | 18–23 | 44 (22/22) | Test group with TMD; controls without TMD | Salivary cortisol levels showed a statistically significant difference between the TMD and control groups | Salivary cortisol can be used as a biological marker of stress in TMD
Goyal et al., 2020, India | RCT | 24.05 ± 2.3 | 60 (20/20/20) | TMD with positive depression levels; TMD without depression; healthy controls | Statistically significantly higher salivary cortisol in TMD with depression, compared with TMD without depression and controls | Salivary cortisol could be a promising tool for identifying underlying psychological factors associated with TMDs
D'Avilla, 2019, Brazil | Cross-sectional | 25.3 ± 5.1 | 60 (45/15) | Group I: no TMD, clinically normal occlusion; Group II: TMD and malocclusion; Group III: TMD, clinically normal occlusion; Group IV: no TMD, with malocclusion | Salivary cortisol was significantly higher in individuals with TMD (G2 and G3), independent of the presence/absence of malocclusion | Quality of life, pain and emotional stress are associated with and impaired by the TMD condition, regardless of malocclusion
Bozovic et al., 2018, Bosnia and Herzegovina | Case–control | 19.35 | 60 (30/30) | TMD group; healthy controls | Salivary cortisol levels were significantly higher in the study group than in the control group | Salivary cortisol plays a vital role in TMD development
Chinthakanan et al., 2018, Thailand | Case–control | 24 | 44 (21/23) | TMD group; control group | The salivary cortisol level of the TMD group was significantly greater than that of the control group | Patients with TMD demonstrated autonomic nervous system (ANS) imbalance and increased stress levels
Magri et al., 2018, Brazil | RCT | 18–40 | 64 (41/23) | Laser group (TMD); placebo group; without-treatment group | Women with lower cortisol levels (below 10 ng/mL) were more responsive to active and placebo laser treatment than women with higher cortisol levels (above 10 ng/mL) | The cluster most responsive to active and placebo LLLT was women with low levels of anxiety and salivary cortisol below 10 ng/mL
Rosar et al., 2017, Brazil | RCT | 19–30 | 43 (28/15) | Sleep bruxism group (Gsb); control group (Gc) | Salivary cortisol showed a significant decrease between baseline and T1 in the test group, which was not observed in the control group | Short-term treatment with interocclusal splints had a positive effect on salivary cortisol levels in subjects with sleep bruxism
Poorian et al., 2016, Iran | Case–control | 19–40 | 41 (15/26) | TMD patients; healthy people | Salivary cortisol levels in TMD patients were significantly higher than in healthy people | An increase in salivary cortisol levels increases the probability of suffering from TMD
Tosato et al., 2015, Brazil | Cross-sectional | 18–40 | 49 (26/25) | Women with TMD; healthy women | Moderate to strong correlations were found between salivary cortisol and EMG activity in women with severe TMD | An increase in cortisol levels corresponded with greater muscle activity and TMD severity
Almeida et al., 2014, Brazil | Case–control | 19–32 | 48 (25/23) | With TMD; without TMD | Results show no difference between groups | No relationship between saliva cortisol, TMD and depression
Nilsson and Dahlstrom, 2010, Sweden | Case–control | 18–24 | 60 (30/30) | RDC/TMD criteria I; RDC/TMD criteria II; control group with no TMD | No statistically significant differences were found between any of the groups | Waking cortisol levels were not associated with symptoms of TMD and did not differentiate between the groups
Quartana et al., 2010, USA | Case–control | 29.85 | 61 (39/22) | TMD patients; healthy controls | Pain index was not associated with cortisol levels | There was no association between markers of pain sensitivity and adrenocortical responses
Jones et al., 1997, Canada | Case–control | 27.07 | 75 (36/39) | TMD group; control group | No significant differences between TMD and control cortisol levels at baseline, but values were significantly higher in the TMD group at both 30 and 50 min | No relationship was found between psychological factors and hypersecretion of cortisol in the TMD group
Table 3. Salivary parameters of the participants of the included studies.

Study (Author, Year) | Saliva Collection | Salivary Cortisol Levels in Test Group | Salivary Cortisol Levels in Controls | Statistically Significant?
Rosar et al., 2021 | Stimulated saliva; collected immediately after waking and 30 min after waking | Upon waking: 0.19 ± 0.21; after 30 min: 0.24 ± 0.28 μg/dL | Upon waking: 0.16 ± 0.13; after 30 min: 0.16 ± 0.09 μg/dL | No (p > 0.05)
Venkatesh et al., 2021 | Stimulated saliva; collected 9:30 a.m. to 10:00 a.m. | 1.107 ± 0.17 | 0.696 ± 0.16 | Yes (p < 0.001)
Goyal et al., 2020 | Unstimulated saliva; collected twice, between 7.00 and 8.00 h and between 20.00 and 22.00 h | Morning: 52.45 ± 18.62 (TMD with depression), 20.35 ± 10.59 (TMD without depression); evening: 28.13 ± 10.88 (with depression), 12.33 ± 6.15 (without depression) | Morning: 12.85 ± 4.28; evening: 8.51 ± 4.32 | Yes (p = 0.0001)
D'Avilla, 2019 | Stimulated whole saliva | G2: 7.45 ± 4.93; G3: 7.87 ± 3.52; G4: 4.35 ± 2.59 μg/dL | 3.83 ± 2.72 μg/dL | Yes (p < 0.05)
Bozovic et al., 2018 | Stimulated saliva | 2.8 µg/dL | 0.6 µg/dL | Yes (p < 0.001)
Chinthakanan et al., 2018 | Unstimulated saliva; collected in the morning, over five minutes | 29.78 ± 2.67 ng/mL | 22.88 ± 1.38 ng/mL | Yes (p < 0.05)
Magri et al., 2018 | Unstimulated saliva; collected between 7 and 10 a.m. | Under 10 ng/mL: 5/7; above 10 ng/mL: 15/14 | Under 10 ng/mL: 6; above 10 ng/mL: 17 | Yes (p < 0.05)
Rosar et al., 2017 | Stimulated saliva; collected in the morning | Baseline: 5.9; T1: 2.6; T2: 2.5 | Baseline: 4.9; T1: 4.4; T2: 4.3 | Yes (p < 0.05)
Poorian et al., 2016 | Unstimulated saliva; collected between 9 and 11 a.m. | 29.0240 ± 5.27835 ng/mL | 8.8950 ± 9.58974 ng/mL | Yes (p = 0.000)
Tosato et al., 2015 | Unstimulated saliva; collected between 8 and 9 a.m. | Mild: 25.39; moderate: 116.7; severe: 250.1 µg/dL | — | Yes (p < 0.05 for moderate and severe)
Almeida et al., 2014 | Unstimulated saliva; collected between 9:00 and 9:25 a.m. | 0.272 µg/dL | 0.395 µg/dL | No (p = 0.121)
Nilsson and Dahlstrom, 2010 | Stimulated saliva | 10.53 ± 5.05 / 12.61 ± 8.17 nmol/L | 13.68 ± 9.96 nmol/L | No (p > 0.05)
Quartana et al., 2010 | Stimulated saliva; collected immediately prior to pain testing, immediately following the pain-testing procedures, and 20 min after | High PCS: BL 0.8, post-pain 0.85, 20 min after 0.9 µg/dL; low PCS: BL 0.92, post-pain 0.75, 20 min after 0.7 µg/dL | — | No (p > 0.05)
Jones et al., 1997 | Unstimulated saliva; collected at baseline (0 min), peak secretion (30 min) and after 20 min of rest (50 min) | 0 min: 6.41; 30 min: 11.96; 50 min: 10.28 | 0 min: 5.89; 30 min: 7.63; 50 min: 6.39 | Yes (p < 0.01)
Table 4. Risk of bias in randomized trials (RoB 2).

Authors/Year | Randomization Process | Deviation from Intended Intervention | Missing Outcome Data | Measurement of the Outcome | Selection of the Reported Results | Overall Bias
Goyal, 2020 | Low | Low | Low | Some concern | Low | Low
Magri, 2017 | Low | Low | Low | Low | Low | Low
Rosar, 2017 | High | High | Low | High | High | High
Table 5. Risk of bias in case–control studies (Newcastle–Ottawa scale). Selection items: case definition adequate? / representativeness of the cases / selection of controls / definition of controls. Comparability: comparability of cases and controls based on the design or analysis. Exposure items: ascertainment of exposure / same method of ascertainment for cases and controls / non-response rate.

Author, Year | Selection | Comparability | Exposure | Risk of Bias
Almeida et al., 2014 | 1 0 1 1 | 1 | 0 1 1 | Medium (6)
Bozovic et al., 2018 | 1 1 1 1 | 1 | 1 1 1 | Low (8)
Chinthakanan et al., 2018 | 1 1 1 1 | 0 | 0 1 1 | Medium (6)
Jones et al., 1997 | 1 1 0 1 | 1 | 0 1 1 | Medium (6)
Nilsson and Dahlstrom, 2010 | 1 1 0 1 | 1 | 0 1 0 | Medium (5)
Poorian et al., 2016 | 1 0 0 0 | 0 | 0 1 1 | High (3)
Quartana et al., 2010 | 1 1 1 1 | 1 | 1 1 1 | Low (8)
Table 6. Risk of bias in cross-sectional studies (Newcastle–Ottawa scale).

Study | Representativeness of the Sample | Sample Size | Non-Respondents | Ascertainment of the Exposure (Risk Factor) | Comparability of Outcome Groups (Confounding Controlled) | Assessment of the Outcome | Statistical Test | Risk of Bias
D'Avilla, 2019 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | Low (8)
Rosar et al., 2021 | 1 | 1 | 1 | 2 | 1 | 1 | 1 | Low (8)
Tosato et al., 2015 | 1 | 1 | 1 | 1 | 1 | 2 | 1 | Low (8)
Venkatesh et al., 2021 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | Medium (5)
Table 7. Certainty of evidence (GRADE analysis).

No. of Studies | Study Design | Risk of Bias | Inconsistency | Indirectness | Imprecision | Other Considerations | No. of Events/Individuals | Certainty | Importance
3 | Randomized trials | Not serious | Serious | Serious | Very serious | Strong association; all plausible residual confounding would reduce the demonstrated effect | — | ⨁⨁◯◯ Low | IMPORTANT
4 | Observational studies (cross-sectional) | Serious | Serious | Not serious | Serious | All plausible residual confounding would suggest a spurious effect, while no effect was observed | 151/196 | ⨁⨁◯◯ Low | IMPORTANT
7 | Observational studies (case–control) | Serious | Not serious | Serious | Serious | Publication bias strongly suspected; strong association; all plausible residual confounding would suggest a spurious effect, while no effect was observed | — | ⨁⨁◯◯ Low | IMPORTANT

Explanatory notes from the table: We cannot provide examples extracted from our review, since our review was not intentionally limited to a specific prognostic factor. Instead, our goal has been to explore salivary cortisol levels at different times of day that have been investigated to date as potential risks for the persistence of a variety of chronic pain conditions and their associated TMDs. However, this poor representation would happen, for instance, if we were interested in exploring the effects of various levels of salivary cortisol on types of TMD. The studies included were only investigating the prognostic effect of salivary cortisol on TMD at a specific age. When conducting comprehensive systematic reviews of the effects of cortisol levels on TMD incidence among young adults, authors reported that the evidence for increasing salivary cortisol as a prognostic factor for chronic TMD pain has serious limitations; this evidence comes from four studies, all of them with a moderate risk of bias.

4. Discussion

4.1. Association of Cortisol and TMD

4.2. Evidence from Randomized Controlled Trials

4.3. Evidence from Case–Control Studies

4.4. Evidence from Cross-Sectional Studies

4.5. Evidence from Systematic Reviews

5. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

  • Murphy, M.K.; MacBarb, R.F.; Wong, M.E. Temporomandibular Joint Disorders: A Review of Etiology, Clinical Management, and Tissue Engineering Strategies. Int. J. Oral Maxillofac. Implants 2013, 28, e393.
  • Giannakopoulos, N.N.; Keller, L.; Rammelsberg, P.; Kronmüller, K.-T.; Schmitter, M. Anxiety and depression in patients with chronic temporomandibular pain and in controls. J. Dent. 2010, 38, 369–376.
  • Tanaka, E.; Detamore, M.S.; Mercuri, L.G. Degenerative disorders of the temporomandibular joint: Etiology, diagnosis, and treatment. J. Dent. Res. 2008, 87, 296–307.
  • Liu, F.; Steinkeler, A. Epidemiology, diagnosis, and treatment of temporomandibular disorders. Dent. Clin. N. Am. 2013, 57, 465–479.
  • McNeill, C. Management of temporomandibular disorders: Concepts and controversies. J. Prosthet. Dent. 1997, 77, 510–522.
  • Valesan, L.F.; Da-Cas, C.D.; Reus, J.C.; Denardin, A.C.S.; Garanhani, R.R.; Bonotto, D.; Januzzi, E.; de Souza, B.D.M. Prevalence of temporomandibular joint disorders: A systematic review and meta-analysis. Clin. Oral Investig. 2021, 25, 441–453.
  • Chisnoiu, A.M.; Picos, A.M.; Popa, S.; Chisnoiu, P.D.; Lascu, L.; Picos, A.; Chisnoiu, R. Factors involved in the etiology of temporomandibular disorders—A literature review. Med. Pharm. Rep. 2015, 88, 473–478.
  • Atsu, S.S.; Guner, S.; Palulu, N.; Bulut, A.C.; Kurkcuoglu, I. Oral parafunctions, personality traits, anxiety and their association with signs and symptoms of temporomandibular disorders in the adolescents. Afr. Health Sci. 2019, 19, 1801–1810.
  • Ohrbach, R.; Michelotti, A. The Role of Stress in the Etiology of Oral Parafunction and Myofascial Pain. Oral Maxillofac. Surg. Clin. 2018, 30, 369–379.
  • Smith, S.M.; Vale, W.W. The role of the hypothalamic-pituitary-adrenal axis in neuroendocrine responses to stress. Dialogues Clin. Neurosci. 2006, 8, 383–395.
  • De Leeuw, R.; Bertoli, E.; Schmidt, J.E.; Carlson, C.R. Prevalence of traumatic stressors in patients with temporomandibular disorders. J. Oral Maxillofac. Surg. 2005, 63, 42–50.
  • Gameiro, G.H.; da Silva Andrade, A.; Nouer, D.F.; Ferraz de Arruda Veiga, M.C. How may stressful experiences contribute to the development of temporomandibular disorders? Clin. Oral Investig. 2006, 10, 261–268.
  • Cui, Q.; Liu, D.; Xiang, B.; Sun, Q.; Fan, L.; He, M.; Wang, Y.; Zhu, X.; Ye, H. Morning Serum Cortisol as a Predictor for the HPA Axis Recovery in Cushing's Disease. Int. J. Endocrinol. 2021, 2021, 4586229.
  • El-Farhan, N.; Rees, D.A.; Evans, C. Measuring cortisol in serum, urine and saliva—Are our assays good enough? Ann. Clin. Biochem. 2017, 54, 308–322.
  • Kirschbaum, C.; Hellhammer, D.H. Salivary cortisol in psychoneuroendocrine research: Recent developments and applications. Psychoneuroendocrinology 1994, 19, 313–333.
  • Almeida, C.D.; Paludo, A.; Stechman-Eto, J.; Amenábar, J.M. Saliva cortisol levels and depression in individuals with temporomandibular disorder: Preliminary study. Rev. Dor 2014, 15, 169–172.
  • D'Avilla, B.M.; Pimenta, M.C.; Furletti, V.F.; Vedovello Filho, M.; Venezian, G.C.; Custodio, W. Comorbidity of TMD and malocclusion: Impacts on quality of life, masticatory capacity and emotional features. Braz. J. Oral Sci. 2019, 18, e191679.
  • Kobayashi, F.Y.; Gavião, M.B.D.; Marquezin, M.C.S.; Fonseca, F.L.A.; Montes, A.B.M.; Barbosa, T.S.; Castelo, P.M. Salivary stress biomarkers and anxiety symptoms in children with and without temporomandibular disorders. Braz. Oral Res. 2017, 31, e78.
  • Suprajith, T.; Wali, A.; Jain, A.; Patil, K.; Mahale, P.; Niranjan, V. Effect of Temporomandibular Disorders on Cortisol Concentration in the Body and Treatment with Occlusal Equilibrium. J. Pharm. Bioallied Sci. 2022, 14, S483–S485.
  • Fritzen, V.M.; Colonetti, T.; Cruz, M.V.; Ferraz, S.D.; Ceretta, L.; Tuon, L.; Da Rosa, M.I.; Ceretta, R.A. Levels of Salivary Cortisol in Adults and Children with Bruxism Diagnosis: A Systematic Review and Meta-Analysis. J. Evid.-Based Dent. Pract. 2022, 22, 101634.
  • Lu, L.; Yang, B.; Li, M.; Bao, B. Salivary cortisol levels and temporomandibular disorders—A systematic review and meta-analysis of 13 case-control studies. Trop. J. Pharm. Res. 2022, 21, 1341–1349.
  • Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gotzsche, P.C.; Ioannidis, J.P.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Med. 2009, 6, e1000100.
  • Huguet, A.; Hayden, J.A.; Stinson, J.; McGrath, P.J.; Chambers, C.T.; Tougas, M.E.; Wozney, L. Judging the quality of evidence in reviews of prognostic factor research: Adapting the GRADE framework. Syst. Rev. 2013, 2, 71.
  • Sterne, J.A.C.; Savovic, J.; Page, M.J.; Elbers, R.G.; Blencowe, N.S.; Boutron, I.; Cates, C.J.; Cheng, H.Y.; Corbett, M.S.; Eldridge, S.M.; et al. RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ 2019, 366, l4898.
  • Stang, A. Critical evaluation of the Newcastle-Ottawa scale for the assessment of the quality of nonrandomized studies in meta-analyses. Eur. J. Epidemiol. 2010, 25, 603–605.
  • Dubey, V.P.; Kievisiene, J.; Rauckiene-Michealsson, A.; Norkiene, S.; Razbadauskas, A.; Agostinis-Sobrinho, C. Bullying and Health Related Quality of Life among Adolescents—A Systematic Review. Children 2022, 9, 766.
  • Rosar, J.V.; Barbosa, T.S.; Dias, I.O.V.; Kobayashi, F.Y.; Costa, Y.M.; Gaviao, M.B.D.; Bonjardim, L.R.; Castelo, P.M. Effect of interocclusal appliance on bite force, sleep quality, salivary cortisol levels and signs and symptoms of temporomandibular dysfunction in adults with sleep bruxism. Arch. Oral Biol. 2017, 82, 62–70.
  • Goyal, G.; Gupta, D.; Pallagatti, S. Salivary Cortisol Could Be a Promising Tool in the Diagnosis of Temporomandibular Disorders Associated with Psychological Factors. J. Indian Acad. Oral Med. Radiol. 2021, 32, 354–359.
  • Magri, L.V.; Carvalho, V.A.; Rodrigues, F.C.C.; Bataglion, C.; Leite-Panissi, C.R.A. Non-specific effects and clusters of women with painful TMD responders and non-responders to LLLT: Double-blind randomized clinical trial. Lasers Med. Sci. 2018, 33, 385–392.
  • Rosar, J.V.; Marquezin, M.C.S.; Pizzolato, A.S.; Kobayashi, F.Y.; Bussadori, S.K.; Pereira, L.J.; Castelo, P.M. Identifying predictive factors for sleep bruxism severity using clinical and polysomnographic parameters: A principal component analysis. J. Clin. Sleep Med. 2021, 17, 949–956.
  • de Paiva Tosato, J.; Caria, P.H.; de Paula Gomes, C.A.; Berzin, F.; Politti, F.; de Oliveira Gonzalez, T.; Biasotto-Gonzalez, D.A. Correlation of stress and muscle activity of patients with different degrees of temporomandibular disorder. J. Phys. Ther. Sci. 2015, 27, 1227–1231.
  • Venkatesh, S.B.; Shetty, S.S.; Kamath, V. Prevalence of temporomandibular disorders and its correlation with stress and salivary cortisol levels among students. Pesqui. Bras. Odontopediatria Clín. Integr. 2021, 21, e0120.
  • Božović, Đ.; Ivković, N.; Račić, M.; Ristić, S. Salivary cortisol responses to acute stress in students with myofascial pain. Srpski Arhiv za Celokupno Lekarstvo 2018, 146, 20–25.
  • Chinthakanan, S.; Laosuwan, K.; Boonyawong, P.; Kumfu, S.; Chattipakorn, N.; Chattipakorn, S.C. Reduced heart rate variability and increased saliva cortisol in patients with TMD. Arch. Oral Biol. 2018, 90, 125–129.
  • Jones, D.A.; Rollman, G.B.; Brooke, R.I. The cortisol response to psychological stress in temporomandibular dysfunction. Pain 1997, 72, 171–182.
  • Nilsson, A.M.; Dahlstrom, L. Perceived symptoms of psychological distress and salivary cortisol levels in young women with muscular or disk-related temporomandibular disorders. Acta Odontol. Scand. 2010, 68, 284–288.
  • Poorian, B.; Dehghani, N.; Bemanali, M. Comparison of Salivary Cortisol Level in Temporomandibular Disorders and Healthy People. Int. J. Rev. Life Sci. 2015, 5, 1105–1113.
  • Quartana, P.J.; Buenaver, L.F.; Edwards, R.R.; Klick, B.; Haythornthwaite, J.A.; Smith, M.T. Pain catastrophizing and salivary cortisol responses to laboratory pain testing in temporomandibular disorder and healthy participants. J. Pain 2010, 11, 186–194.
  • Anna, S.; Joanna, K.; Teresa, S.; Maria, G.; Aneta, W. The influence of emotional state on the masticatory muscles function in the group of young healthy adults. Biomed. Res. Int. 2015, 2015, 174013.
  • Apkarian, A.V.; Baliki, M.N.; Geha, P.Y. Towards a theory of chronic pain. Prog. Neurobiol. 2009, 87, 81–97.
  • Hannibal, K.E.; Bishop, M.D. Chronic stress, cortisol dysfunction, and pain: A psychoneuroendocrine rationale for stress management in pain rehabilitation. Phys. Ther. 2014, 94, 1816–1825.
  • Scrivani, S.J.; Keith, D.A.; Kaban, L.B. Temporomandibular disorders. N. Engl. J. Med. 2008, 359, 2693–2705.
  • Jasim, H.; Louca, S.; Christidis, N.; Ernberg, M. Salivary cortisol and psychological factors in women with chronic and acute oro-facial pain. J. Oral Rehabil. 2014, 41, 122–132.
  • Nadendla, L.K.; Meduri, V.; Paramkusam, G.; Pachava, K.R. Evaluation of salivary cortisol and anxiety levels in myofascial pain dysfunction syndrome. Korean J. Pain 2014, 27, 30–34.


PubMed: ("hydrocortisone"[MeSH Terms] OR "hydrocortisone"[All Fields] OR "cortisol"[All Fields]) AND (("Temporomandibular disorder"[MeSH Terms] OR "TMD"[All Fields]) OR ("temporomandibular disfunction"[MeSH Terms] OR ("Facial muscle pain"[All Fields] AND "young adults"[All Fields])))
Scopus: (TITLE-ABS-KEY ("craniomandibular disorder*" OR "temporomandibular joint disorder*" OR "temporomandibular disorder*" OR tmjd OR tmd OR "tmj disorder*" OR ((facial OR jaw OR orofacial OR craniofacial OR trigem*) AND pain))) AND (TITLE-ABS-KEY (pcs OR "Salivary cortisol" OR hydrocortisone* OR cortisol AND ("young adults")))
Web of Science: cortisol* OR hydrocortisone* AND temporomandibular disorder* OR TMD* AND young adults
Google Scholar: (cortisol OR salivary cortisol AND temporomandibular disorder OR TMD AND young adults)

Share and Cite

AlSahman, L.; AlBagieh, H.; AlSahman, R. Is There a Relationship between Salivary Cortisol and Temporomandibular Disorder: A Systematic Review. Diagnostics 2024 , 14 , 1435. https://doi.org/10.3390/diagnostics14131435




Sample size: how many participants do I need in my research? *

Jeovany Martínez-Mesa

1 Latin American Cooperative Oncology Group - Porto Alegre (RS), Brazil.

David Alejandro González-Chica

2 Universidade Federal de Santa Catarina (UFSC) - Florianópolis (SC), Brazil.

João Luiz Bastos

Renan Rangel Bonamigo

3 Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA) - Porto Alegre (RS), Brazil.

Rodrigo Pereira Duquia

The importance of estimating sample sizes is rarely understood by researchers, when planning a study. This paper aims to highlight the centrality of sample size estimations in health research. Examples that help in understanding the basic concepts involved in their calculation are presented. The scenarios covered are based more on the epidemiological reasoning and less on mathematical formulae. Proper calculation of the number of participants in a study diminishes the likelihood of errors, which are often associated with adverse consequences in terms of economic, ethical and health aspects.

INTRODUCTION

Investigations in the health field are oriented by research problems or questions, which should be clearly defined in the study project. Sample size calculation is an essential item to be included in the project to reduce the probability of error, respect ethical standards, define the logistics of the study and, last but not least, improve its success rates, when evaluated by funding agencies.

Let us imagine that a group of investigators decides to study the frequency of sunscreen use and how the use of this product is distributed in the "population". In order to carry out this task, the authors define two research questions, each of which involves a distinct sample size calculation: 1) What is the proportion of people that use sunscreen in the population?; and 2) Are there differences in the use of sunscreen between men and women, between individuals who are white or of another skin color group, between the wealthiest and the poorest, or between people with more and fewer years of schooling? Before doing the calculations, it is necessary to review a few fundamental concepts and identify the parameters required to determine them.

WHAT DO WE MEAN, WHEN WE TALK ABOUT POPULATIONS?

First of all, we must define what a population is. A population is the group of individuals restricted to a geographical region (neighborhood, city, state, country, continent etc.) or to certain institutions (hospitals, schools, health centers etc.); that is, a set of individuals that have at least one characteristic in common. The target population corresponds to the portion of this population about which one intends to draw conclusions, that is, the part of the population whose characteristics are an object of interest to the investigator. Finally, the study population is that which will actually take part in the study, which will be evaluated and will allow conclusions to be drawn about the target population, as long as it is representative of the latter. Figure 1 demonstrates how these concepts are interrelated.

[Figure 1. Graphic representation of the concepts of population, target population and study population]

We will now separately consider the required parameters for sample size calculation in studies that aim at estimating the frequency of events (prevalence of health outcomes or behaviors, for example), at testing associations between risk/protective factors and dichotomous health conditions (yes/no), and at testing associations with health outcomes measured in numerical scales. 1 The formulas used for these calculations may be obtained from different sources - we recommend the free online software OpenEpi (www.openepi.com). 2

WHICH PARAMETERS DOES SAMPLE SIZE CALCULATION DEPEND UPON FOR A STUDY THAT AIMS AT ESTIMATING THE FREQUENCY OF HEALTH OUTCOMES, BEHAVIORS OR CONDITIONS?

When approaching the first research question defined at the beginning of this article (What is the proportion of people that use sunscreen?), the investigators need to conduct a prevalence study. In order to do this, some parameters must be defined to calculate the sample size, as demonstrated in chart 1 .

Description of different parameters to be considered in the calculation of sample size for a study aiming at estimating the frequency of health outcomes, behaviors or conditions

Parameter | What it is | Comments
Population size | Total population size from which the sample will be drawn and about which researchers will draw conclusions (target population). | Information regarding population size may be obtained from secondary data (hospitals, health centers, census surveys, schools etc.). The smaller the target population (for example, fewer than 100 individuals), the proportionally larger the sample will be.
Expected prevalence of the outcome or event of interest | The study outcome must be a percentage, that is, a number that varies from 0% to 100%. | Information on the expected prevalence should be obtained from the literature or by carrying out a pilot study. When this information is not available and a pilot study cannot be carried out, the value that maximizes the sample size (50%, for a fixed sample error) is used.
Sample error of the estimate | The value we are willing to accept as error in the estimate obtained by the study. | The smaller the sample error, the larger the sample size and the greater the precision. In health studies, values between two and five percentage points are usually recommended.
Significance level | The probability that the expected prevalence will be within the established error margin. | The higher the confidence level (greater expected precision), the larger the sample size. This parameter is usually fixed at 95%.
Design effect | Necessary when participants are chosen by cluster selection procedures: instead of being individually selected (simple, systematic or stratified sampling), participants are first divided into randomly selected groups (census tracts, neighborhoods, households, days of the week etc.) and then selected within these groups. Greater similarity is expected among respondents within a group than in the general population, which generates a loss of precision that needs to be compensated by increasing the sample size. | The value of the design effect may be obtained from the literature. When not available, a value between 1.5 and 2.0 may be adopted, and the investigators should evaluate the actual design effect after the study is completed and report it in their publications. The greater the homogeneity within each cluster (the more similar the respondents are within it), the greater the design effect and the larger the sample size required to preserve precision. In studies without cluster selection (simple, systematic or stratified sampling), the design effect is null, i.e. 1.0.
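To make these parameters concrete, the sketch below implements the usual normal-approximation formula for a prevalence estimate, with an optional finite population correction and design effect. This is a minimal illustration, not the exact routine used by OpenEpi (whose default method includes further refinements); the function name and defaults are ours.

```python
from math import ceil
from statistics import NormalDist

def sample_size_prevalence(prevalence, error, population=None,
                           confidence=0.95, design_effect=1.0):
    """Minimum n to estimate a prevalence within +/- `error`
    (both given as proportions, e.g. 0.30 and 0.03)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. 1.96 for 95%
    n = (z ** 2) * prevalence * (1 - prevalence) / error ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite population correction
    return ceil(n * design_effect)          # inflate for cluster sampling
```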

Chart 2 presents some sample size simulations, according to the outcome prevalence, sample error and the type of target population investigated. The same basic question was used in this table (prevalence of sunscreen use), but considering three different situations (at work, while doing sports or at the beach), as in the study by Duquia et al. conducted in the city of Pelotas, state of Rio Grande do Sul, in 2005. 3

Sample size calculation to estimate the frequency (prevalence) of sunscreen use in the population, considering different scenarios but keeping the significance level (95%) and the design effect (1.0) constant

[Chart 2 placeholder: the numeric cells of this chart did not survive extraction. Its rows were four target populations - health center users investigated in a single day (population = 100); all users in the area covered by a health center (population = 1,000); all users from the areas covered by all health centers in a city (population = 10,000); and the entire city population (N = 40,000) - and its columns were combinations of expected prevalence and sample error (in p.p. = percentage points), each cell giving the required sample size.]

The calculations show that, by holding the sample error and the significance level constant, the higher the expected prevalence, the larger will be the required sample size. However, when the expected prevalence surpasses 50%, the required sample size progressively diminishes - the sample size for an expected prevalence of 10% is the same as that for an expected prevalence of 90%.

The investigator should also define beforehand the precision level to be accepted for the investigated event (sample error) and the confidence level of this result (usually 95%). Chart 2 demonstrates that, holding the expected prevalence constant, the higher the precision (smaller sample error) and the higher the confidence level (in this case, 95% was considered for all calculations), the larger also will be the required sample size.

Chart 2 also demonstrates that there is a direct relationship between the target population size and the number of individuals to be included in the sample. Nevertheless, when the target population size is sufficiently large, that is, surpasses an arbitrary value (for example, one million individuals), the resulting sample size tends to stabilize. The smaller the target population, the larger the sample will be; in some cases, the sample may even correspond to the total number of individuals from the target population - in these cases, it may be more convenient to study the entire target population, carrying out a census survey, rather than a study based on a sample of the population.
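Running the hypothetical sketch above with a sample error of three percentage points reproduces the three regularities just described:

```python
# Symmetry: 10% and 90% expected prevalence need the same n (both ~385).
print(sample_size_prevalence(0.10, 0.03), sample_size_prevalence(0.90, 0.03))

# 50% maximizes the requirement for a fixed sample error (~1,068).
print(sample_size_prevalence(0.50, 0.03))

# The sample grows with the target population but stabilizes for large N:
# roughly 92, 517, 965 and 1,066 for the populations below.
for n_pop in (100, 1_000, 10_000, 1_000_000):
    print(n_pop, sample_size_prevalence(0.50, 0.03, population=n_pop))
```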

SAMPLE CALCULATION TO TEST THE ASSOCIATION BETWEEN TWO VARIABLES: HYPOTHESES AND TYPES OF ERROR

When the study objective is to investigate whether there are differences in sunscreen use according to sociodemographic characteristics (such as, for example, between men and women), the existence of association between explanatory variables (exposure or independent variables, in this case sociodemographic variables) and a dependent or outcome variable (use of sunscreen) is what is under consideration.

In these cases, we need first to understand what the hypotheses are, as well as the types of error that may result from their acceptance or refutation. A hypothesis is a "supposition arrived at from observation or reflection, that leads to refutable predictions". 4 In other words, it is a statement that may be questioned or tested and that may be falsified in scientific studies.

In scientific studies, there are two types of hypothesis: the null hypothesis (H 0 ) or original supposition that we assume to be true for a given situation, and the alternative hypothesis (H A ) or additional explanation for the same situation, which we believe may replace the original supposition. In the health field, H 0 is frequently defined as the equality or absence of difference in the outcome of interest between the studied groups (for example, sunscreen use is equal in men and women). On the other hand, H A assumes the existence of difference between groups. H A is called two-tailed when it is expected that the difference between the groups will occur in any direction (men using more sunscreen than women or vice-versa). However, if the investigator expects to find that a specific group uses more sunscreen than the other, he will be testing a one-tailed H A .

In the sample investigated by Duquia et al., the frequency of sunscreen use at the beach was greater in men (32.7%) than in women (26.2%). 3 Although this is what was observed in the sample - that is, men do wear more sunscreen than women - the investigators must decide whether to refute or accept H 0 in the target population (which contends that there is no difference in sunscreen use according to sex). Given that the entire target population is hardly ever investigated to confirm or refute the difference observed in the sample, the authors have to be aware that, independently of their decision (accepting or refuting H 0 ), their conclusion may be wrong, as can be seen in figure 2.

[Figure 2. Types of possible results when performing a hypothesis test]

In case the investigators conclude that both in the target population and in the sample sunscreen use is also different between men and women (rejecting H 0 ), they may be making a type I or Alpha error, which is the probability of rejecting H 0 based on sample results when, in the target population, H 0 is true (the difference between men and women regarding sunscreen use found in the sample is not observed in the target population). If the authors conclude that there are no differences between the groups (accepting H 0 ), the investigators may be making a type II or Beta error, which is the probability of accepting H 0 when, in the target population, H 0 is false (that is, H A is true) or, in other words, the probability of stating that the frequency of sunscreen use is equal between the sexes, when it is different in the same groups of the target population.

In order to accept or refute H 0 , the investigators need to previously define the maximum probability of type I and type II errors that they are willing to incorporate into their results. In general, the type I error is fixed at a maximum value of 5% (0.05, or a confidence level of 95%), since the consequences of this type of error are considered more harmful. For example, stating that an exposure/intervention affects a health condition when this does not happen in the target population may bring about behaviors or actions (therapeutic changes, implementation of intervention programs etc.) with adverse consequences in ethical, economic and health terms. In the study conducted by Duquia et al., when the authors contend that the use of sunscreen was different according to sex, the p value presented (<0.001) indicates that the probability of not observing such a difference in the target population is less than 0.1% (confidence level >99.9%). 3

Although the type II or Beta error is less harmful, it should also be avoided, since if a study contends that a given exposure/intervention does not affect the outcome, when this effect actually exists in the target population, the consequence may be that a new medication with better therapeutic effects is not administered or that some aspects related to the etiology of the damage are not considered. This is the reason why the value of the type II error is usually fixed at a maximum value of 20% (or 0.20). In publications, this value tends to be mentioned as the power of the study, which is the ability of the test to detect a difference, when in fact it exists in the target population (usually fixed at 80%, as a result of the 1-Beta calculation).
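The trade-off between these error rates and sample size can be illustrated with a standard normal-approximation power calculation for comparing two proportions. The sketch below is ours, not taken from the article; the percentages from Duquia et al. are used only as plug-in values, and the group sizes are invented.

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_group, alpha=0.05):
    """Approximate power (1 - Beta) of a two-tailed z-test comparing
    two proportions with `n_per_group` subjects in each group."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se_h0 = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)          # SE if H0 true
    se_ha = sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group)  # SE if HA true
    return NormalDist().cdf((abs(p1 - p2) - z_alpha * se_h0) / se_ha)

# Sunscreen use at the beach: 32.7% in men vs 26.2% in women.
print(power_two_proportions(0.327, 0.262, n_per_group=400))  # ~0.52
print(power_two_proportions(0.327, 0.262, n_per_group=800))  # ~0.81
```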

SAMPLE CALCULATION FOR STUDIES THAT AIM AT TESTING THE ASSOCIATION BETWEEN A RISK/PROTECTIVE FACTOR AND AN OUTCOME, EVALUATED DICHOTOMOUSLY

In cases where the exposure variables are dichotomous (intervention/control, man/woman, rich/poor etc.) and so is the outcome (negative/positive outcome, to use sunscreen or not), the required parameters to calculate sample size are those described in chart 3 . According to the previously mentioned example, it would be interesting to know whether sex, skin color, schooling level and income are associated with the use of sunscreen at work, while doing sports and at the beach. Thus, when the four exposure variables are crossed with the three outcomes, there would be 12 different questions to be answered and consequently an equal number of sample size calculations to be performed. Using the information in the article by Duquia et al. 3 for the prevalence of exposures and outcomes, a simulation of sample size calculations was used for each one of these situations ( Chart 4 ).

Parameter | What it is | Comments
Type I or Alpha error | The probability of rejecting H0 based on the sample results when H0 is true in the target population. Usually fixed at 5%. | Expressed by the p value; usually 5% (p < 0.05). For sample size calculation, the confidence level (calculated as 1 - Alpha, usually 95%) may be adopted. The smaller the Alpha error (the greater the confidence level), the larger the sample size.
Statistical power (1 - Beta) | The ability of the test to detect a difference in the sample when it exists in the target population. | Calculated as 1 - Beta. The greater the power, the larger the required sample size. A value between 80% and 90% is usually used.
Relationship between non-exposed/exposed groups in the sample | The ratio of non-exposed to exposed individuals in the sample. | For observational studies, the data are usually obtained from the scientific literature. In intervention studies a 1:1 ratio is frequently adopted, indicating that half of the individuals receive the intervention and the other half serve as the control or comparison group; some intervention studies use more controls than intervention subjects. The further this ratio is from one, the larger the required sample size.
Prevalence of the outcome in the non-exposed group (percentage of positives among the non-exposed) | The proportion of individuals with the disease (outcome) among those not exposed to the risk factor (or in the control group). | Data usually obtained from the literature. When this information is not available but the general prevalence/incidence in the population is known, that value may be used (attributed to the control group in intervention studies), or the prevalence may be estimated with the formula P_ONE = pO / (pNE + pE × PR), where pO = prevalence of the outcome, pNE = percentage of non-exposed, pE = percentage of exposed, and PR = prevalence ratio (usually a value between 1.5 and 2.0).
Expected prevalence ratio | The ratio between the prevalence of disease in the exposed (intervention) group and in the non-exposed group, indicating how many times higher (or lower) the prevalence is expected to be among the exposed. | It is the value the investigators expect to find as HA, with the corresponding H0 equal to one (similar outcome prevalence in exposed and non-exposed groups). For the sample size estimate, the expected outcome prevalence in the non-exposed group, or the expected difference in prevalence between the groups, may be used. Usually a value between 1.50 and 2.00 (exposure as a risk factor) or between 0.50 and 0.75 (protective factor) is adopted; in intervention studies, the clinical relevance of this value should be considered. The smaller the prevalence ratio (the smaller the expected difference between the groups), the larger the required sample size.
Type of statistical test | The test may be one-tailed or two-tailed, depending on the type of HA. | Two-tailed tests require larger sample sizes.

H0 = null hypothesis; HA = alternative hypothesis
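A sketch of how these parameters combine: the first helper below implements the P_ONE formula from chart 3, and the second applies a common normal-approximation formula for comparing two proportions (without the continuity correction that some programs, such as OpenEpi's Fleiss-based default, add). All names and example values are illustrative, not the article's own routine.

```python
from math import ceil, sqrt
from statistics import NormalDist

def outcome_prevalence_non_exposed(p_outcome, pct_non_exposed, pct_exposed, pr):
    """Chart 3's formula: P_ONE = pO / (pNE + pE * PR), all as proportions."""
    return p_outcome / (pct_non_exposed + pct_exposed * pr)

def sample_size_two_proportions(p_non_exposed, prevalence_ratio,
                                alpha=0.05, power=0.80, ratio_ne_to_e=1.0):
    """Group sizes needed to detect `prevalence_ratio` with a two-tailed test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_e = p_non_exposed * prevalence_ratio       # expected prevalence among exposed
    k = ratio_ne_to_e                            # non-exposed subjects per exposed
    p_bar = (p_e + k * p_non_exposed) / (1 + k)  # pooled prevalence
    n_e = ((z_a * sqrt((1 + 1 / k) * p_bar * (1 - p_bar))
            + z_b * sqrt(p_e * (1 - p_e) + p_non_exposed * (1 - p_non_exposed) / k))
           / (p_e - p_non_exposed)) ** 2
    return ceil(n_e), ceil(n_e * k)              # (exposed n, non-exposed n)

# A 20% outcome prevalence among the non-exposed and an expected PR of 1.5:
print(sample_size_two_proportions(0.20, 1.5))   # -> (294, 294)
```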

      
      
      
Sex: Female, 56% (E) - n=1298, n=388, n=487, n=134, n=136, n=28; Male, 44% (NE) - n=1738, n=519, n=652, n=179, n=181, n=38
Skin color: White, 82% (E) - n=2630, n=822, n=970, n=276, n=275, n=49; Other, 18% (NE) - n=3520, n=1100, n=1299, n=370, n=368, n=66
Schooling: 0-4 years, 25% (E) - n=1340, n=366, n=488, n=131, n=138, ND; >4 years, 75% (NE) - n=1795, n=490, n=654, n=175, n=184, ND
Income: ≤133, 50% (E) - n=1228, n=360, n=458, n=124, n=128, n=28; >133, 50% (NE) - n=1644, n=480, n=612, n=166, n=170, n=36

(The column headers of chart 4 - the combinations of outcome and calculation parameters to which the six sample sizes in each row correspond - did not survive extraction.)

E = exposed group; NE = non-exposed group; r = NE/E relationship; PONE = prevalence of the outcome in the non-exposed group (percentage of positives among the non-exposed), estimated with the formula from chart 3 considering a PR of 1.50; PR = prevalence ratio/incidence or expected relative risk; n = minimum necessary sample size; ND = value could not be determined, as the prevalence of the outcome in the exposed group would be above 100%, according to the specified parameters.

Estimates show that studies with more power or that intend to find a difference of a lower magnitude in the frequency of the outcome (in this case, the prevalence rates) between exposed and non-exposed groups require larger sample sizes. For these reasons, in sample size calculations, an effect measure between 1.5 and 2.0 (for risk factors) or between 0.50 and 0.75 (for protective factors), and an 80% power are frequently used.

Considering the values in each column of chart 4, we may also conclude that, when the non-exposed/exposed relationship moves away from one (similar proportions of exposed and non-exposed individuals in the sample), the sample size increases. For this reason, intervention studies usually work with the same proportion of individuals in the intervention and control groups. Upon analysis of the values on each line, it can be concluded that there is an inverse relationship between the prevalence of the outcome and the required sample size.

Based on these estimates, assuming that the authors intended to test all of these associations, it would be necessary to choose the largest estimated sample size (2,630 subjects). In case the required sample size is larger than the target population, the investigators may decide to perform a multicenter study, lengthen the period for data collection, modify the research question or face the possibility of not having sufficient power to draw valid conclusions.

Additional aspects need to be considered in the previous estimates to arrive at the final sample size, which may include the possibility of refusals and/or losses in the study (an additional 10-15%), the need for adjustments for confounding factors (an additional 10-20%, applicable to observational studies), the possibility of effect modification (which implies an analysis of subgroups and the need to duplicate or triplicate the sample size), as well as the existence of design effects (multiplication of sample size by 1.5 to 2.0) in case of cluster sampling.
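As a simple bookkeeping aid (our own illustration, using the ranges mentioned above as defaults), these adjustments can be chained onto a base estimate:

```python
from math import ceil

def final_sample_size(base_n, losses=0.10, confounding=0.0,
                      subgroup_factor=1, design_effect=1.0):
    """Chain the adjustments listed above onto a base estimate:
    +10-15% for refusals/losses, +10-20% for confounder adjustment
    (observational studies), x2-3 for subgroup analyses, and
    x1.5-2.0 for cluster sampling."""
    n = base_n * (1 + losses) * (1 + confounding) * subgroup_factor * design_effect
    return ceil(n)

# Largest estimate above (2,630), plus 10% for losses and 15% for confounding:
print(final_sample_size(2630, losses=0.10, confounding=0.15))  # -> 3327
```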

SAMPLE CALCULATIONS FOR STUDIES THAT AIM AT TESTING THE ASSOCIATION BETWEEN A DICHOTOMOUS EXPOSURE AND A NUMERICAL OUTCOME

Suppose that the investigators intend to evaluate whether the daily quantity of sunscreen used (in grams), the time of daily exposure to sunlight (in minutes) or a laboratory parameter (such as vitamin D levels) differ according to the socio-demographic variables mentioned. In all of these cases, the outcomes are numerical variables (discrete or continuous) 1 , and the objective is to answer whether the mean outcome in the exposed/intervention group is different from the non-exposed/control group.

In this case, the first three parameters from chart 4 (alpha error, power of the study, and the ratio between non-exposed and exposed groups) are required, and the conclusions about their influence on the final sample size still apply. In addition, investigators need to define the expected outcome means in each group, or the expected mean difference between non-exposed and exposed groups (usually at least 15% of the mean value in the non-exposed group), as well as the standard deviation in each group. There is a direct relationship between the standard deviation and the required sample size, which is why, for asymmetric (skewed) variables, the sample size would be overestimated. In such cases, investigators may estimate the sample size with calculations specific to asymmetric variables, or use a percentage of the median value (for example, 25%) as a substitute for the standard deviation.
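For illustration, here is a minimal sketch of the corresponding two-means calculation, assuming equal standard deviations in both groups and the usual normal approximation; the example figures (a 3 g mean difference with SD = 8 g) are invented for demonstration, not taken from the hypothetical study.

```python
# Minimal sketch of a two-means sample size calculation (normal
# approximation, equal standard deviations assumed in both groups).
from math import ceil
from scipy.stats import norm

def sample_size_two_means(mean_diff, sd, r=1.0, alpha=0.05, power=0.80):
    """mean_diff: expected difference in means between the groups;
    sd: common standard deviation; r: non-exposed/exposed ratio (NE/E)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    # exposed-group size; the non-exposed group is r times larger
    n_e = (1 + 1 / r) * (sd ** 2) * (z_a + z_b) ** 2 / mean_diff ** 2
    return ceil(n_e), ceil(r * n_e)

# Example: detect a 15% difference around a mean of 20 g/day of sunscreen
# (i.e., 3 g), assuming SD = 8 g, equal groups, 80% power
print(sample_size_two_means(mean_diff=3.0, sd=8.0))  # -> (112, 112)
```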

SAMPLE SIZE CALCULATIONS FOR OTHER TYPES OF STUDY

There are also specific calculations for other types of quantitative studies, such as those aiming to assess correlations (where exposure and outcome are both numerical variables), time until an event (death, cure, relapse, etc.), or the validity of diagnostic tests, but they are not described in this article, given that they were discussed elsewhere.5

Sample size calculation is always an essential step in the planning of scientific studies. A sample that is too small may be unable to demonstrate the desired difference, or to estimate the frequency of the event of interest with acceptable precision. A sample that is too large may add to the complexity of the study and its associated costs, rendering it unfeasible. Both situations are ethically unacceptable and should be avoided by the investigator.

Conflict of Interest: None

Financial Support: None

* Work carried out at the Latin American Cooperative Oncology Group (LACOG), Universidade Federal de Santa Catarina (UFSC), and Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Brazil.

How to cite this article: Martínez-Mesa J, González-Chica DA, Bastos JL, Bonamigo RR, Duquia RP. Sample size: how many participants do I need in my research? An Bras Dermatol. 2014;89(4):609-15.


Testing a novel haptic tram master controller technology via virtual reality: feasibility and user acceptance considerations

Journal of Workplace Learning

ISSN : 1366-5626

Article publication date: 9 July 2024

Purpose

Virtual reality (VR) has been explored as a training and testing environment in a range of work contexts, and increasingly so in transport. There is, however, a lack of research exploring the role of VR in the training of tram drivers, and in providing an environment in which advances in tram technology can be tested safely. This study aimed to test a novel haptic tram master controller within a tram-based virtual environment (VE).

Design/methodology/approach

The master controller is the primary mechanism for operating a tram, and its effective manipulation can significantly influence the comfort and well-being of passengers, as well as the overall safety of the tram system. Here, the authors tested, with 16 tram drivers, a haptically enhanced master controller that provides additional sensory information. The feasibility and user acceptance of the novel technology were determined through surveys.

Findings

The results indicate that the drivers see the haptic master controller as beneficial, suggesting that it could enhance their driving, and show good acceptance of it. The VE provided a potential training environment that was accepted by the drivers and did not cause adverse effects (e.g. sickness).

Research limitations/implications

Although this study involved actual tram drivers from a local tram company, the authors acknowledge that the sample size was small, and additional research is needed to broaden perspectives and gather more user feedback. Furthermore, while this study focused on subjective feedback to gauge user acceptance of the new haptic technology, the authors agree that future evaluations should incorporate additional objective measures.

Practical implications

The insights gained from this VE-based research can contribute to future training scenarios and inform the development of technology used in real-world tram operations.

Originality/value

Through this investigation, the authors showed the broader possibilities of haptics in enhancing the functionality and user experience of various technological devices, while also contributing to the advancement of tram systems for safer and more efficient urban mobility.

  • Virtual reality environment
  • Master controller
  • Normal operation
  • Emergency scenario

Acknowledgements

This research was sponsored by Coventry University (UK) through the Grant Scheme “Cross-Centre International and Interdisciplinary Pilot Projects” (Award No 13705-03) and received support from Deakin University (Australia). Any dissemination reflects the authors’ views only, and neither Coventry University nor Deakin University is responsible for any use that may be made of the information it contains.

Callari, T.C. , Moody, L. and Horan, B. (2024), "Testing a novel haptic tram master controller technology via virtual reality: feasibility and user acceptance considerations", Journal of Workplace Learning , Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JWL-01-2024-0010

Emerald Publishing Limited

Copyright © 2024, Emerald Publishing Limited


COMMENTS

  1. Big enough? Sampling in qualitative inquiry

So there was no uniform answer to the question, and the ranges varied according to methodology. In fact, Shaw and Holland (2014) claim that sample size will largely depend on the method (p. 87). "In truth," they write, "many decisions about sample size are made on the basis of resources, purpose of the research," among other factors (p. 87).

  2. Sample sizes for saturation in qualitative research: A systematic

These research objectives are typical of much qualitative health research. The sample size of the datasets used varied from 14 to 132 interviews and 1 to 40 focus groups. All datasets except one (Francis et al., 2010) had a sample that was much larger than the sample ultimately needed for saturation, making them effective for assessing saturation.

  3. Determining the Sample Size in Qualitative Research

finds a variation of the sample size from 1 to 95 (averages being 31 in the first case and 28 in the second). The research region, one of the cultural factors, plays a significant role in ...

  4. Series: Practical guidance to qualitative research. Part 3: Sampling

    In quantitative research, by contrast, the sample size is determined by a power calculation. The usually small sample size in qualitative research depends on the information richness of the data, the variety of participants (or other units), the broadness of the research question and the phenomenon, the data collection method (e.g., individual ...

  5. Sample Size in Qualitative Interview Studies: Guided by Information

    The prevailing concept for sample size in qualitative studies is "saturation." ... the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy ...

  6. Sample size for qualitative research

    Sample size in qualitative research is always mentioned by reviewers of qualitative papers but discussion tends to be simplistic and relatively uninformed. The current paper draws attention to how sample sizes, at both ends of the size continuum, can be justified by researchers. This will also aid reviewers in their making of comments about the ...

  7. Characterising and justifying sample size sufficiency in interview

    Background. Choosing a suitable sample size in qualitative research is an area of conceptual debate and practical uncertainty. That sample size principles, guidelines and tools have been developed to enable researchers to set, and justify the acceptability of, their sample size is an indication that the issue constitutes an important marker of the quality of qualitative research.

  8. Sample Size in Qualitative Interview Studies: Guided by Information

    We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the ...

  9. Sample Size and its Importance in Research

    The sample size for a study needs to be estimated at the time the study is proposed; too large a sample is unnecessary and unethical, and too small a sample is unscientific and also unethical. The necessary sample size can be calculated, using statistical software, based on certain assumptions. If no assumptions can be made, then an arbitrary ...

  10. Sample Size Justification

    A good sample size justification in qualitative research is based on 1) an identification of the populations, including any sub-populations, 2) an estimate of the number of codes in the (sub-)population, 3) the probability a code is encountered in an information source, and 4) the sampling strategy that is used.

  11. Sample Size Policy for Qualitative Studies Using In-Depth Interviews

There are several debates concerning what sample size is the right size for such endeavors. Most scholars argue that the concept of saturation is the most important factor to think about when mulling over sample size decisions in qualitative research (Mason, 2010). Saturation is defined by many as the point at which the data collection process no longer offers any new or relevant data.

  12. (PDF) Qualitative Research Designs, Sample Size and Saturation: Is

The burden of offering adequate sample sizes in research has been one of the major criticisms against qualitative studies. One of the most acceptable standards in qualitative research is to ...

  13. What's In A Number? Understanding The Right Sample Size For Qualitative

    Between 15-30. Based on research conducted on this issue, if you are building similar segments within the population, InterQ's recommendation for in-depth interviews is to have a sample size of 15-30. In some cases, a minimum of 10 is sufficient, assuming there has been integrity in the recruiting process. With the goal to maintain a rigorous ...

  14. How to Justify Sample Size in Qualitative Research

    Typically, sample sizes will range from 6-20, per segment. (So if you have 5 segments, 6 is your multiplier for the total number you'll need, so you would have a total sample size of 30.) For very specific tasks, such as in user experience research, moderators will see the same themes after as few as 5-6 interviews.

  15. Sample Sizes in Qualitative UX Research: A Definitive Guide

    A formula for determining qualitative sample size. In 2013, Research by Design published a whitepaper by Donna Bonde which included research-backed guidelines for qualitative sampling in a market research context. Victor Yocco, writing in 2017, drew on these guidelines to create a formula determining qualitative sample sizes.

  16. What's a good sample size for qualitative research?

    A review of 23 peer-reviewed articles suggests that 9-17 participants can be sufficient to reach saturation, especially for studies with homogenous populations and narrowly defined objectives. Hence our recommendation is to target ~15 people as a target sample size in your qualitative research.

  17. Characterising and justifying sample size sufficiency in interview

Sample adequacy in qualitative inquiry pertains to the appropriateness of the sample composition and size. It is an important consideration in evaluations of the quality and trustworthiness of much qualitative research [] and is implicated - particularly for research that is situated within a post-positivist tradition and retains a degree of commitment to realist ontological premises - in ...

  18. Qualitative Sample Size Calculator

    What is a good sample size for a qualitative research study? Our sample size calculator will work out the answer based on your project's scope, participant characteristics, researcher expertise, and methodology. Just answer 4 quick questions to get a super actionable, data-backed recommendation for your next study.

  19. Factors to determine your sample size for qualitative research

    Consider: Planning the journey of your qualitative research. 1. Have different research methods for different stages of your research journey. 2. Be open to new methods of collecting data and information. 3. Break up your larger sample into smaller groups depending on how they answer or score in preliminary research activities.

  20. Sampling in Qualitative Research

    How many subjects is the perennial question. There is seldom a simple answer to the question of sample or cell size in qualitative research. There is no single formula or criterion to use. A "gold standard" that will calculate the number of people to interview is lacking (cf. Morse 1994). The question of sample size cannot be determined by ...

  21. PDF Quantitative Approaches for Estimating Sample Size for Qualitative

    Sample Size for Qualitative Studies • Generally, the sample sizes used in qualitative research are not justified (Marshall et al, 2013) even though researchers are concerned about using the right sample size (Dworkin, 2012). • Need to ensure there is enough, but not too much, data (>30 too large; Boddy, 2016).

  22. Can sample size in qualitative research be determined a priori?

    There has been considerable recent interest in methods of determining sample size for qualitative research a priori, rather than through an adaptive approach such as saturation. Extending previous literature in this area, we identify four distinct approaches to determining sample size in this way: rules of thumb, conceptual models, numerical ...

  23. Diagnostics

    Of these, eleven were observational studies (four cross-sectional and seven case-control), and three were randomized control trials. Eleven of the included studies presented a low to moderate risk in the qualitative synthesis. The total sample size of the included studies was 751 participants.

  24. The experience of young carers in Australia: a qualitative systematic

    The following sample characteristics were also collected: sample size, gender ratio, age range, race/ethnicity, care ... and young carers supports could be beneficial. Further research is certainly required to understand the best way to provide such information to families. ... Qualitative research practice: A guide for social science students ...

  25. Evaluating the impact of students' generative AI use in educational

    The research is centered on a university master's-level course in Instructional Design that incorporates GenAI as an instructional tool. The research will consider the broader context of the use of GenAI technologies, aligning with ongoing efforts to develop GenAI guidelines, principles and educational resources for students.

  26. Sample size: how many participants do I need in my research?

    It is the ability of the test to detect a difference in the sample, when it exists in the target population. Calculated as 1-Beta. The greater the power, the larger the required sample size will be. A value between 80%-90% is usually used. Relationship between non-exposed/exposed groups in the sample.

  27. RC002 Discussion (docx)

    Marketing document from University of Alaska, Anchorage, 1 page, Discussion: What are 3 key differences between quantitative and qualitative research designs? Response: Qualitative Research Understanding Behavior from a smaller sample size Data is then collected and synthesized in textual Form Data is interpreted Explo

  28. Using confirmatory factor analysis and qualitative methods to translate

    The Icelandic versions of the HEXACO-100 and HEXACO-60 were evaluated using confirmatory factor analysis. The sample for HEXACO-100 consisted of N = 716, and the sample for HEXACO-60 of N = 319. Six respondents were recruited for cognitive interviews. Modification indices, factor loadings, and cognitive interviews were used to identify items in need of revision, identifying a total of 56 items ...

  29. Testing a novel haptic tram master controller technology via virtual

    The VE has provided a potential training environment that was accepted by the drivers and did not cause adverse effects (e.g. sickness).,Although this study involved actual tram drivers from a local tram company, the authors acknowledge that the sample size was small, and additional research is needed to broaden perspectives and gather more ...