Pilot Study in Research: Definition & Examples

Julia Simkus

Editor at Simply Psychology

BA (Hons) Psychology, Princeton University

Julia Simkus is a graduate of Princeton University with a Bachelor of Arts in Psychology. She began a Master's Degree in Counseling for Mental Health and Wellness in September 2023. Julia's research has been published in peer-reviewed journals.


Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


A pilot study, also known as a feasibility study, is a small-scale preliminary study conducted before the main research to check the feasibility or improve the research design.

Pilot studies can be very important before conducting a full-scale research project, helping design the research methods and protocol.

How Does it Work?

Pilot studies are a fundamental stage of the research process. They can help identify design issues and evaluate a study’s feasibility, practicality, resources, time, and cost before the main research is conducted.

A pilot study involves selecting a few people and trying out the study procedures on them. Identifying flaws in the procedures at this stage can save time and, in some cases, money.

A pilot study can help the researcher spot ambiguities or unclear wording in the information given to participants, as well as problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect because hardly any participants can score or complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling.”
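As a toy illustration (not from the article), a screening check for floor and ceiling effects in pilot scores might look like this in Python; the function name and the 80% cutoff are arbitrary assumptions for the sketch:

```python
def flag_floor_ceiling(scores, max_score, threshold=0.8):
    """Flag a possible floor or ceiling effect in a set of pilot task scores.

    If most scores sit at the minimum, the task may be too hard (floor
    effect); if most sit at the maximum, it may be too easy (ceiling
    effect). The 80% threshold is an illustrative cutoff, not a standard.
    """
    n = len(scores)
    at_floor = sum(1 for s in scores if s == 0) / n
    at_ceiling = sum(1 for s in scores if s == max_score) / n
    if at_floor >= threshold:
        return "possible floor effect: task may be too hard"
    if at_ceiling >= threshold:
        return "possible ceiling effect: task may be too easy"
    return "scores are spread across the range"

# Pilot run where nearly everyone hits full marks out of 10
print(flag_floor_ceiling([10, 10, 10, 10, 9], max_score=10))
```

A real pilot analysis would inspect the full score distribution rather than a single cutoff, but even a crude check like this can prompt a redesign of the task before the main study.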

Piloting enables researchers to predict an appropriate sample size, budget accordingly, and improve the study design before performing a full-scale project.

Pilot studies also provide researchers with preliminary data to gain insight into the potential results of their proposed experiment.

However, pilot studies should not be used to test hypotheses since the appropriate power and sample size are not calculated. Rather, pilot studies should be used to assess the feasibility of participant recruitment or study design.
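To make the sample-size point concrete: the main study's sample size comes from a formal power calculation, which pilot data can inform but not replace. Below is a minimal sketch of the standard two-group normal-approximation formula using only the Python standard library; the function name and the numbers are illustrative, and a pilot-based SD estimate should be treated as unstable:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-arm comparison of means.

    Standard normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_power) * sd / delta) ** 2
    where delta is the smallest difference worth detecting. The sd might be
    informed (cautiously) by pilot data, which is noisy at small n.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # about 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) * sd / delta) ** 2)

# Detect a 5-point difference when the pilot suggests sd around 10
print(n_per_group(delta=5, sd=10))  # 63 participants per group
```

Note that the pilot itself is not powered this way; the calculation is for the main study, and the pilot's role is at most to supply a rough variability estimate.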

By conducting a pilot study, researchers will be better prepared to face the challenges that might arise in the larger study. They will be more confident with the instruments they will use for data collection.

Multiple pilot studies may be needed in some studies, and qualitative and/or quantitative methods may be used.

To avoid bias, pilot studies are usually carried out on individuals who are as similar as possible to the target population but not on those who will be a part of the final sample.

Feedback from participants in the pilot study can be used to improve the experience for participants in the main study. This might include reducing the burden on participants, improving instructions, or identifying potential ethical issues.

Experiment Pilot Study

In a pilot study with an experimental design, you would want to ensure that your measures of the dependent variables are reliable and valid.

You would also want to check that you can effectively manipulate your independent variables and that you can control for potential confounding variables.

A pilot study allows the research team to gain experience and training, which can be particularly beneficial if new experimental techniques or procedures are used.

Questionnaire Pilot Study

It is important to conduct a questionnaire pilot study for the following reasons:
  • Check that respondents understand the terminology used in the questionnaire.
  • Check that emotive questions are not used, as they make people defensive and could invalidate their answers.
  • Check that leading questions have not been used as they could bias the respondent’s answer.
  • Ensure that the questionnaire can be completed in a reasonable amount of time. If it’s too long, respondents may lose interest or not have enough time to complete it, which could affect the response rate and the data quality.

By identifying and addressing issues in the pilot study, researchers can reduce errors and risks in the main study. This increases the reliability and validity of the main study’s results.

Advantages of conducting a pilot study include:

  • Assessing the practicality and feasibility of the main study
  • Testing the efficacy of research instruments
  • Identifying and addressing any weaknesses or logistical problems
  • Collecting preliminary data
  • Estimating the time and costs required for the project
  • Determining what resources are needed for the study
  • Identifying the necessity to modify procedures that do not elicit useful data
  • Adding credibility and dependability to the study
  • Pretesting the interview format
  • Enabling researchers to develop consistent practices and familiarize themselves with the procedures in the protocol
  • Addressing safety issues and management problems


Limitations of pilot studies include:

  • Requiring extra costs, time, and resources.
  • Not guaranteeing the success of the main study.
  • Contamination (i.e., if data from the pilot study or pilot participants are included in the main study results).
  • The risk that funding bodies will be reluctant to fund a further study if the pilot study results are published.
  • Lacking the power to assess treatment effects due to small sample size.

Examples of pilot studies:

  • Viscocanalostomy: A Pilot Study (Carassa, Bettin, Fiori, & Brancato, 1998)
  • WHO International Pilot Study of Schizophrenia (Sartorius, Shapiro, Kimura, & Barrett, 1972)
  • Stephen LaBerge of Stanford University ran a series of experiments in the 1980s investigating lucid dreaming. In 1985, he performed a pilot study demonstrating that time perception in lucid dreams is the same as during wakefulness. Specifically, he had participants enter a lucid dream and count out ten seconds, signaling the start and end with pre-determined eye movements measured with electrooculography (EOG).
  • Negative Word-of-Mouth by Dissatisfied Consumers: A Pilot Study (Richins, 1983)
  • A pilot study and randomized controlled trial of the mindful self‐compassion program (Neff & Germer, 2013)
  • Pilot study of secondary prevention of posttraumatic stress disorder with propranolol (Pitman et al., 2002)
  • In unstructured observations, the researcher records all relevant behavior without a system. There may be too much to record, and the behaviors recorded may not necessarily be the most important, so the approach is usually used as a pilot study to see what type of behaviors would be recorded.
  • Perspectives of the use of smartphones in travel behavior studies: Findings from a literature review and a pilot study (Gadziński, 2018)

Further Information

  • Lancaster, G. A., Dodd, S., & Williamson, P. R. (2004). Design and analysis of pilot studies: recommendations for good practice. Journal of evaluation in clinical practice, 10 (2), 307-312.
  • Thabane, L., Ma, J., Chu, R., Cheng, J., Ismaila, A., Rios, L. P., … & Goldsmith, C. H. (2010). A tutorial on pilot studies: the what, why and how. BMC Medical Research Methodology, 10 (1), 1-10.
  • Moore, C. G., Carter, R. E., Nietert, P. J., & Stewart, P. W. (2011). Recommendations for planning pilot studies in clinical and translational research. Clinical and translational science, 4 (5), 332-337.

Carassa, R. G., Bettin, P., Fiori, M., & Brancato, R. (1998). Viscocanalostomy: a pilot study. European journal of ophthalmology, 8 (2), 57-61.

Gadziński, J. (2018). Perspectives of the use of smartphones in travel behaviour studies: Findings from a literature review and a pilot study. Transportation Research Part C: Emerging Technologies, 88 , 74-86.

In, J. (2017). Introduction of a pilot study. Korean Journal of Anesthesiology, 70 (6), 601–605. https://doi.org/10.4097/kjae.2017.70.6.601

LaBerge, S., LaMarca, K., & Baird, B. (2018). Pre-sleep treatment with galantamine stimulates lucid dreaming: A double-blind, placebo-controlled, crossover study. PLoS One, 13 (8), e0201246.

Leon, A. C., Davis, L. L., & Kraemer, H. C. (2011). The role and interpretation of pilot studies in clinical research. Journal of psychiatric research, 45 (5), 626–629. https://doi.org/10.1016/j.jpsychires.2010.10.008

Malmqvist, J., Hellberg, K., Möllås, G., Rose, R., & Shevlin, M. (2019). Conducting the Pilot Study: A Neglected Part of the Research Process? Methodological Findings Supporting the Importance of Piloting in Qualitative Research Studies. International Journal of Qualitative Methods. https://doi.org/10.1177/1609406919878341

Neff, K. D., & Germer, C. K. (2013). A pilot study and randomized controlled trial of the mindful self‐compassion program. Journal of Clinical Psychology, 69 (1), 28-44.

Pitman, R. K., Sanders, K. M., Zusman, R. M., Healy, A. R., Cheema, F., Lasko, N. B., … & Orr, S. P. (2002). Pilot study of secondary prevention of posttraumatic stress disorder with propranolol. Biological psychiatry, 51 (2), 189-192.

Richins, M. L. (1983). Negative word-of-mouth by dissatisfied consumers: A pilot study. Journal of Marketing, 47 (1), 68-78.

Sartorius, N., Shapiro, R., Kimura, M., & Barrett, K. (1972). WHO International Pilot Study of Schizophrenia. Psychological medicine, 2 (4), 422-425.

van Teijlingen, E. R., & Hundley, V. (2001). The importance of pilot studies. Social Research Update, (35).



What is a pilot study?

Posted on 31st July 2017 by Luiz Cadete


Pilot studies can play a very important role prior to conducting a full-scale research project

Pilot studies are small-scale, preliminary studies which aim to investigate whether crucial components of a main study – usually a randomized controlled trial (RCT) – will be feasible. For example, they may be used in an attempt to predict an appropriate sample size for the full-scale project and/or to improve upon various aspects of the study design. Often RCTs require a lot of time and money to be carried out, so it is crucial that the researchers have confidence in the key steps they will take when conducting this type of study to avoid wasting time and resources.

Thus, a pilot study must answer a simple question: “Can the full-scale study be conducted in the way that has been planned or should some component(s) be altered?”

The reporting of pilot studies must be of high quality to allow readers to interpret the results and implications correctly. This blog will highlight some key things for readers to consider when they are appraising a pilot study.

What are the main reasons to conduct a pilot study?

Pilot studies are conducted to evaluate the feasibility of some crucial component(s) of the full-scale study. Typically, these can be divided into 4 main aspects:

  • Process: where the feasibility of the key steps in the main study is assessed (e.g. recruitment rate; retention levels and eligibility criteria)
  • Resources: assessing problems with time and resources that may occur during the main study (e.g. how much time the main study will take to be completed; whether use of some equipment will be feasible or whether the form(s) of evaluation selected for the main study are as good as possible)
  • Management: problems with data management and with the team involved in the study (e.g. whether there were problems with collecting all the data needed for future analysis; whether the collected data are highly variable and whether data from different institutions can be analyzed together)
  • Scientific: assessing treatment safety, determining dose levels and response, and estimating the treatment effect and its variance

Reasons for not conducting a pilot study

A study should not simply be labelled a ‘pilot study’ by researchers hoping to justify a small sample size. Pilot studies should always have their objectives linked with feasibility and should inform researchers about the best way to conduct the future, full-scale project.

How to interpret a pilot study

Readers must interpret pilot studies carefully. Below are some key things to consider when assessing a pilot study:

  • The objectives of pilot studies must always be linked with feasibility and the crucial component that will be tested must always be stated.
  • The method section must present the criteria for success. For example: “the main study will be feasible if the retention rate of the pilot study exceeds 90%”. Sample size may vary in pilot studies (different articles present different sample size calculations) but the pilot study population, from which the sample is formed, must be the same as the main study. However, the participants in the pilot study should not be entered into the full-scale study. This is because participants may change their later behaviour if they had previously been involved in the research.
  • The pilot study may or may not be a randomized trial (depending on the nature of the study). If the researchers do randomize the sample in the pilot study, it is important that the process for randomization is kept the same in the full-scale project. If the authors decide to test the randomization feasibility through a pilot study, different kinds of randomization procedures could be used.
  • As well as the method section, the results of the pilot studies should be read carefully. Although pilot studies often present results related to the effectiveness of the interventions, these results should be interpreted as “potential effectiveness”. The focus in the results of pilot studies should always be on feasibility, rather than statistical significance. However, results of the pilot studies should nonetheless be provided with measures of variability (such as confidence intervals), particularly as the sample size of these studies is usually relatively small, and this might produce biased results.

After an interpretation of results, pilot studies should conclude with one of the following:

(1) the main study is not feasible;

(2) the main study is feasible, with changes to the protocol;

(3) the main study is feasible without changes to the protocol OR

(4) the main study is feasible with close monitoring.

Any recommended changes to the protocol should be clearly outlined.

Take home message

  • A pilot study must provide information about whether a full-scale study is feasible and list any recommended amendments to the design of the future study.




Pilot Study in Research


A pilot study is a preliminary small-scale study that researchers conduct in order to help them decide how best to conduct a large-scale research project. Using a pilot study, a researcher can identify or refine a research question, figure out what methods are best for pursuing it, and estimate how much time and resources will be necessary to complete the larger version, among other things.

Key Takeaways: Pilot Studies

  • Before running a larger study, researchers can conduct a pilot study: a small-scale study that helps them refine their research topic and study methods.
  • Pilot studies can be useful for determining the best research methods to use, troubleshooting unforeseen issues in the project, and determining whether a research project is feasible.
  • Pilot studies can be used in both quantitative and qualitative social science research.

Large-scale research projects tend to be complex, take a lot of time to design and execute, and typically require quite a bit of funding. Conducting a pilot study beforehand allows a researcher to design and execute a large-scale project in as methodologically rigorous a way as possible, and can save time and costs by reducing the risk of errors or problems. For these reasons, pilot studies are used by both quantitative and qualitative researchers in the social sciences.

Advantages of Conducting a Pilot Study

Pilot studies are useful for a number of reasons, including:

  • Identifying or refining a research question or set of questions
  • Identifying or refining a hypothesis or set of hypotheses
  • Identifying and evaluating a sample population, research field site, or data set
  • Testing research instruments like survey questionnaires, interview and discussion guides, or statistical formulas
  • Evaluating and deciding upon research methods
  • Identifying and resolving as many potential problems or issues as possible
  • Estimating the time and costs required for the project
  • Gauging whether the research goals and design are realistic
  • Producing preliminary results that can help secure funding and other forms of institutional investment

After conducting a pilot study and taking the steps listed above, a researcher will know what to do in order to proceed in a way that will make the study a success. 

Example: Quantitative Survey Research

Say you want to conduct a large-scale quantitative research project using survey data to study the relationship between race and political party affiliation. To best design and execute this research, you would first select a data set to use, such as the General Social Survey, download one of its data sets, and then use a statistical analysis program to examine this relationship. In the process of analyzing the relationship, you are likely to realize the importance of other variables that may have an impact on political party affiliation. For example, place of residence, age, education level, socioeconomic status, and gender may impact party affiliation (either on their own or in interaction with race). You might also realize that the data set you chose does not offer all the information you need to best answer this question, so you might choose to use another data set or combine another with the original one you selected. Going through this pilot study process allows you to work out the kinks in your research design and then execute high-quality research.
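As a sketch of the exploratory cross-tabulation step described above (the respondent records are fabricated for illustration; a real analysis would load an actual data set such as the GSS):

```python
from collections import Counter

# Fabricated (race, party) records standing in for real survey rows
respondents = [
    ("white", "republican"), ("white", "democrat"), ("black", "democrat"),
    ("black", "democrat"), ("white", "republican"), ("black", "independent"),
]

# First exploratory step: cross-tabulate the two variables
crosstab = Counter(respondents)
for (race, party), count in sorted(crosstab.items()):
    print(f"{race:6} {party:12} {count}")
```

Running a quick tabulation like this on pilot data often reveals sparse cells or missing categories, signaling that other variables (age, education, region) need to be brought into the design.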

Example: Qualitative Interview Studies

Pilot studies can also be useful for qualitative research studies, such as interview-based studies. For example, imagine that a researcher is interested in studying the relationship that Apple consumers have to the company's brand and products. The researcher might choose to first do a pilot study consisting of a couple of focus groups in order to identify questions and thematic areas that would be useful to pursue in-depth, one-on-one interviews. A focus group can be useful to this kind of study because while a researcher will have a notion of what questions to ask and topics to raise, she may find that other topics and questions arise when members of the target group talk among themselves. After a focus group pilot study, the researcher will have a better idea of how to craft an effective interview guide for a larger research project.

  • Open access
  • Published: 06 January 2010

A tutorial on pilot studies: the what, why and how

  • Lehana Thabane 1 , 2 ,
  • Jinhui Ma 1 , 2 ,
  • Rong Chu 1 , 2 ,
  • Ji Cheng 1 , 2 ,
  • Afisi Ismaila 1 , 3 ,
  • Lorena P Rios 1 , 2 ,
  • Reid Robson 3 ,
  • Marroon Thabane 1 , 4 ,
  • Lora Giangregorio 5 &
  • Charles H Goldsmith 1 , 2  

BMC Medical Research Methodology volume  10 , Article number:  1 ( 2010 ) Cite this article


A Correction to this article was published on 11 March 2023


Pilot studies for phase III trials - which are comparative randomized trials designed to provide preliminary evidence on the clinical efficacy of a drug or intervention - are routinely performed in many clinical areas. Also commonly known as "feasibility" or "vanguard" studies, they are designed to assess the safety of treatment or interventions; to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; and to increase clinical experience with the study medication or intervention for the phase III trials. They are the best way to assess feasibility of a large, expensive full-scale study, and in fact are an almost essential pre-requisite. Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies for phase III trials, including: 1) the general reasons for conducting a pilot study; 2) the relationships between pilot studies, proof-of-concept studies, and adaptive designs; 3) the challenges of and misconceptions about pilot studies; 4) the criteria for evaluating the success of a pilot study; 5) frequently asked questions about pilot studies; 6) some ethical aspects related to pilot studies; and 7) some suggestions on how to report the results of pilot investigations using the CONSORT format.

1. Introduction

The Concise Oxford Thesaurus [ 1 ] defines a pilot project or study as an experimental, exploratory, test, preliminary, trial or try out investigation. Epidemiology and statistics dictionaries provide similar definitions of a pilot study as a small scale

" ... test of the methods and procedures to be used on a larger scale if the pilot study demonstrates that the methods and procedures can work" [ 2 ];

"...investigation designed to test the feasibility of methods and procedures for later use on a large scale or to search for possible effects and associations that may be worth following up in a subsequent larger study" [ 3 ].

Table 1 provides a summary of definitions found on the Internet. A closer look at these definitions reveals that they are similar to the ones above in that a pilot study is synonymous with a feasibility study intended to guide the planning of a large-scale investigation. Pilot studies are sometimes referred to as "vanguard trials" (i.e. pre-studies) intended to assess the safety of treatment or interventions; to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; to evaluate surrogate marker data in diverse patient cohorts; to increase clinical experience with the study medication or intervention, and identify the optimal dose of treatments for the phase III trials [ 4 ]. As suggested by an African proverb from the Ashanti people in Ghana " You never test the depth of a river with both feet ", the main goal of pilot studies is to assess feasibility so as to avoid potentially disastrous consequences of embarking on a large study - which could potentially "drown" the whole research effort.

Feasibility studies are routinely performed in many clinical areas. It is fair to say that every major clinical trial had to start with some piloting or a small scale investigation to assess the feasibility of conducting a larger scale study: critical care [ 5 ], diabetes management intervention trials [ 6 ], cardiovascular trials [ 7 ], primary healthcare [ 8 ], to mention a few.

Despite their noted importance, the reality is that pilot studies receive little or no attention in scientific research training. Few epidemiology or research textbooks cover the topic with the necessary detail. In fact, we are not aware of any textbook that dedicates a chapter on this issue - many just mention it in passing or provide a cursory coverage of the topic. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies. In the next section, we narrow the focus of our definition of a pilot to phase III trials. Section 3 covers the general reasons for conducting a pilot study. Section 4 deals with the relationships between pilot studies, proof-of-concept studies, and adaptive designs, while section 5 addresses the challenges of pilot studies. Evaluation of a pilot study (i.e. how to determine if a pilot study was successful) is covered in Section 6. We deal with several frequently asked questions about pilot studies in Section 7 using a "question-and-answer" approach. Section 8 covers some ethical aspects related to pilot studies; and in Section 9, we follow the CONSORT format [ 9 ] to offer some suggestions on how to report the results of pilot investigations.

2. Narrowing the focus: Pilot studies for randomized studies

Pilot studies can be conducted in both quantitative and qualitative studies. Adopting a similar approach to Lancaster et al . [ 10 ], we focus on quantitative pilot studies - particularly those done prior to full-scale phase III trials. Phase I trials are non-randomized studies designed to investigate the pharmacokinetics of a drug (i.e. how a drug is distributed and metabolized in the body) including finding a dose that can be tolerated with minimal toxicity. Phase II trials provide preliminary evidence on the clinical efficacy of a drug or intervention. They may or may not be randomized. Phase III trials are randomized studies comparing two or more drugs or intervention strategies to assess efficacy and safety. Phase IV trials, usually done after registration or marketing of a drug, are non-randomized surveillance studies to document experiences (e.g. side-effects, interactions with other drugs, etc) with using the drug in practice.

For the purposes of this paper, our approach to utilizing pilot studies relies on the model for complex interventions advocated by the British Medical Research Council - which explicitly recommends the use of feasibility studies prior to Phase III clinical trials, but stresses the iterative nature of the processes of development, feasibility and piloting, evaluation and implementation [ 11 ].

3. Reasons for Conducting Pilot Studies

Van Teijlingen et al . [ 12 ] and van Teijlingen and Hundley [ 13 ] provide a summary of the reasons for performing a pilot study. In general, the rationale for a pilot study can be grouped under several broad classifications - process, resources, management and scientific (see also http://www.childrens-mercy.org/stats/plan/pilot.asp for a different classification):

Process: This assesses the feasibility of the steps that need to take place as part of the main study. Examples include determining recruitment rates, retention rates, etc.

Resources: This deals with assessing time and budget problems that can occur during the main study. The idea is to collect some pilot data on such things as the length of time to mail or fill out all the survey forms.

Management: This covers potential human and data optimization problems such as personnel and data management issues at participating centres.

Scientific: This deals with the assessment of treatment safety, determination of dose levels and response, and estimation of treatment effect and its variance.

Table 2 summarizes this classification with specific examples.

4. Relationships between Pilot Studies, Proof-of-Concept Studies, and Adaptive Designs

A proof-of-concept (PoC) study is defined as a clinical trial carried out to determine if a treatment (drug) is biologically active or inactive [ 14 ]. PoC studies usually use surrogate markers as endpoints. In general, they are phase I/II studies - which, as noted above, investigate the safety profile, dose level and response to new drugs [ 15 ]. Thus, although designed to inform the planning of phase III trials for registration or licensing of new drugs, PoC studies may not necessarily fit our restricted definition of pilot studies aimed at assessing feasibility of phase III trials as outlined in Section 2.

An adaptive trial design refers to a design that allows modifications to be made to a trial's design or statistical procedures during its conduct, with the purpose of efficiently identifying clinical benefits/risks of new drugs or to increase the probability of success of clinical development [ 16 ]. The adaptations can be prospective (e.g. stopping a trial early due to safety or futility or efficacy at interim analysis); concurrent (e.g. changes in eligibility criteria, hypotheses or study endpoints) or retrospective (e.g. changes to statistical analysis plan prior to locking database or revealing treatment codes to trial investigators or patients). Piloting is normally built into adaptive trial designs by determining a priori decision rules to guide the adaptations based on cumulative data. For example, data from interim analyses could be used to refine sample size calculations [ 17 , 18 ]. This approach is routinely used in internal pilot studies - which are primarily designed to inform sample size calculation for the main study, with recalculation of the sample size as the key adaptation. Unlike other phase III pilots, an internal pilot investigation does not usually address any other feasibility aspects - because it is essentially part of the main study [ 10 , 19 , 20 ].
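The recalculation step of an internal pilot can be sketched as follows: re-estimate the outcome's standard deviation from the interim data, then re-run the usual planning formula. This is a hedged standard-library sketch with made-up numbers; real internal pilots fix the decision rules a priori and often adjust the procedure for the interim look:

```python
import math
from statistics import NormalDist, stdev

def recalculated_n(interim_outcomes, delta, alpha=0.05, power=0.80):
    """Internal-pilot style sample-size recalculation (illustrative only).

    Re-estimates the outcome SD from interim observations and re-runs the
    two-group normal-approximation formula for the target difference delta.
    """
    sd = stdev(interim_outcomes)  # sample SD from the interim data
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

# Interim outcomes with sample SD of about 3.16; target difference of 2
print(recalculated_n([4, 6, 8, 10, 12], delta=2))  # 40 per group
```

Because the interim data are part of the main study, this is the one kind of pilot where participants are not discarded; the recalculation simply updates how many more need to be recruited.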

Nonetheless, we need to emphasize that whether or not a study is a pilot depends on its objectives. An adaptive method is a strategy used to reach those objectives. Both a pilot and a non-pilot study could be adaptive.

5. Challenges of and Common Misconceptions about Pilot Studies

Pilot studies can be very informative, not only to the researchers conducting them but also to others doing similar work. However, many of them never get published, often because of the way the results are presented [ 13 ]. Quite often the emphasis is wrongly placed on statistical significance rather than on feasibility - the main focus of the pilot study. Our experience in reviewing submissions to a research ethics board also shows that most pilot projects are not well designed: there are no clear feasibility objectives, no clear analytic plans, and certainly no clear criteria for success of feasibility.

In many cases, pilot studies are conducted to generate data for sample size calculations. This seems especially sensible in situations where there are no data from previous studies to inform this process. However, it can be dangerous to use pilot studies to estimate treatment effects, as such estimates may be unrealistic or biased because of the limited sample sizes. Therefore, if not used cautiously, the results of pilot studies can potentially mislead sample size or power calculations [ 21 ] - particularly if the pilot study was done to see whether there is likely to be a treatment effect in the main study. In Section 6, we provide guidance on how to proceed with caution in this regard.

There are also several misconceptions about pilot studies. Below are some of the common reasons that researchers have put forth for calling their study a pilot.

The first common reason is that a pilot study is a small single-centre study. For example, researchers often cite a lack of resources for a large multi-centre study as a reason for doing a pilot. The second common reason is that a pilot investigation is a small study that is similar in size to someone else's published study. In reviewing submissions to a research ethics board, we have come across sentiments such as:

So-and-so did a similar study with 6 patients and got statistical significance - ours uses 12 patients (double the size)!

We did a similar pilot before (and it was published!)

The third common reason is that a pilot is a small study done by a student or an intern - which can be completed quickly and does not require funding. Specific arguments include:

I have funding for 10 patients only;

I have limited seed (start-up) funding;

This is just a student project!

My supervisor (boss) told me to do it as a pilot.

None of the above arguments qualifies as a sound reason for calling a study a pilot. A study should only be conducted if the results will be informative; studies conducted for the reasons above may produce findings of limited utility, which would waste the researchers' and participants' efforts. The focus of a pilot study should be on the assessment of feasibility, unless the study was appropriately powered to assess statistical significance. Further, there is a vast number of poorly designed and poorly reported studies. Assessing the quality of a published report can help guide decisions about whether that report should be used in planning or designing new studies. Finally, if a trainee or researcher is assigned a project as a pilot, it is important to discuss how the results will inform the planning of the main study. In addition, clearly defined feasibility objectives and a rationale to justify piloting should be provided.

Sample Size for Pilot Studies

In general, sample size calculations may not be required for some pilot studies. It is important that the sample for a pilot be representative of the target study population and that it be based on the same inclusion/exclusion criteria as the main study. As a rule of thumb, a pilot study should be large enough to provide useful information about the aspects being assessed for feasibility. Note that PoC studies require sample size estimation based on surrogate markers [ 22 ], but they are usually not powered to detect meaningful differences in clinically important endpoints. The sample used in the pilot may be included in the main study, but caution is needed to ensure that the key features of the main study are preserved in the pilot (e.g. blinding in randomized controlled trials). We recommend that if any pooling of pilot and main study data is considered, it be planned beforehand and described clearly in the protocol, with a clear discussion of the statistical consequences and methods. The goal is to avoid or minimize the potential bias that may occur due to multiple testing or other opportunistic actions by investigators. In general, pooling, when done appropriately, can increase the efficiency of the main study [ 23 ].

As noted earlier, a carefully designed pilot study may be used to generate information for sample size calculations. Two approaches may help optimize the information from a pilot study in this context. First, consider eliciting qualitative data to supplement the quantitative information obtained in the pilot; for example, consider holding discussions with clinicians, using the approach suggested by Lenth [ 24 ], to elicit additional information on possible effect size and variance estimates. Second, consider creating a sample size table for various values of the effect or variance estimates to acknowledge the uncertainty surrounding the pilot estimates.
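The second suggestion - a sample size table across a range of effect and variance values - can be sketched as follows. This is a minimal illustration using the standard normal-approximation formula for a two-sample comparison of means; the tabulated effect sizes and SDs are hypothetical stand-ins for pilot estimates, not values from this paper.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means with effect delta and SD sigma."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Tabulate sample sizes over a plausible range around the pilot
# estimates (an effect of 0.5 and SD of 1.0 are hypothetical values).
for delta in (0.3, 0.4, 0.5, 0.6):
    for sigma in (0.8, 1.0, 1.2):
        print(f"effect={delta:.1f}  SD={sigma:.1f}  n/group={n_per_group(delta, sigma)}")
```

Reading across such a table makes visible how sensitive the main study's sample size is to the uncertainty in the pilot estimates.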

In some cases, one could use a confidence interval (CI) approach to estimate the sample size required to establish feasibility. For example, suppose we had a pilot trial designed primarily to determine adherence rates to a standardized risk assessment form to enhance venous thromboprophylaxis in hospitalized patients. Suppose it was also decided a priori that the criterion for success would be: the main trial would be 'feasible' if the risk assessment form is completed for ≥ 70% of eligible hospitalized patients.
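As a rough sketch of the CI approach for this adherence example, one can choose the sample size so that a two-sided normal-approximation confidence interval around the anticipated adherence rate is acceptably narrow. The ±5 percentage-point margin and 95% confidence level below are hypothetical choices for illustration, not figures from the study.

```python
import math
from statistics import NormalDist

def ci_sample_size(p, margin, confidence=0.95):
    """Smallest n for which the normal-approximation CI for a
    proportion p has half-width no larger than `margin`."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Anticipated adherence of 70% (the feasibility threshold in the text),
# estimated to within +/- 5 percentage points with 95% confidence.
n = ci_sample_size(0.70, 0.05)
print(n)  # number of eligible patients the pilot would need to observe
```

Widening the margin or lowering the confidence level shrinks the required pilot size, which is exactly the trade-off a feasibility protocol should state explicitly.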

6. How to Interpret the Results of a Pilot Study: Criteria for Success

It is always important to state the criteria for success of a pilot study. The criteria should be based on the primary feasibility objectives; they provide the basis for interpreting the results of the pilot study and determining whether it is feasible to proceed to the main study. In general, the outcome of a pilot study can be one of the following: (i) Stop - main study not feasible; (ii) Continue, but modify protocol - feasible with modifications; (iii) Continue without modifications, but monitor closely - feasible with close monitoring; and (iv) Continue without modifications - feasible as is.

For example, the Prophylaxis of Thromboembolism in Critical Care Trial (PROTECT) was designed to assess the feasibility of a large-scale trial with the following criteria for determining success [ 25 ]:

98.5% of patients had to receive study drug within 12 hours of randomization;

91.7% of patients had to receive every scheduled dose of the study drug in a blinded manner;

90% or more of patients had to have lower limb compression ultrasounds performed at the specified times; and

> 90% of necessary dose adjustments had to have been made appropriately in response to pre-defined laboratory criteria.

In a second example, the PeriOperative Epidural Trial (POET) Pilot Study was designed to assess the feasibility of a large, multicentre trial with the following criteria for determining success [ 26 ]:

one subject per centre per week (i.e., 200 subjects from four centres over 50 weeks) can be recruited;

at least 70% of all eligible patients can be recruited;

no more than 5% of all recruited subjects crossed over from one modality to the other; and

complete follow-up in at least 95% of all recruited subjects.
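A priori criteria like these lend themselves to a simple programmatic check at the end of the pilot. The sketch below is hypothetical: the observed values are invented, and the thresholds merely echo the POET-style criteria; a real pilot would map the result onto the four outcomes (stop; modify; monitor closely; continue) described at the start of Section 6.

```python
# Hypothetical end-of-pilot feasibility check. Observed values are
# invented for illustration; thresholds echo the POET-style criteria.
criteria = [
    # (description, minimum required, observed)
    ("eligible patients recruited", 0.70, 0.74),
    ("complete follow-up",          0.95, 0.97),
]
crossover_max, crossover_obs = 0.05, 0.03  # at most 5% may cross over

failures = [name for name, req, obs in criteria if obs < req]
if crossover_obs > crossover_max:
    failures.append("crossover rate")

print("feasible as is" if not failures else f"review protocol: {failures}")
```

Pre-registering both the thresholds and this decision rule keeps the interpretation of the pilot focused on feasibility rather than on post hoc significance testing.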

7. Frequently asked questions about pilot studies

In this section, we offer our thoughts on some frequently asked questions about pilot studies. These could be helpful not only to clinicians and trainees, but to anyone interested in health research.

Can I publish the results of a pilot study?

- Yes, every attempt should be made to publish.

Why is it important to publish the results of pilot studies?

- To provide information about feasibility to the research community to save resources being unnecessarily spent on studies that may not be feasible. Further, having such information can help researchers to avoid duplication of efforts in assessing feasibility.

- Finally, researchers have an ethical and scientific obligation to attempt to publish the results of every research endeavor. However, the focus should be on feasibility goals. Emphasis should not be placed on statistical significance when pilot studies are not powered to detect minimal clinically important differences. Such studies typically do not show statistically significant results - remember that underpowered studies (with no statistically significant results) are inconclusive, not negative, since "no evidence of effect" is not "evidence of no effect" [ 27 ].

Can I combine data from a pilot with data from the main study?

- Yes, provided the sampling frame and methodologies are the same. This can increase the efficiency of the main study - see Section 5.

Can I combine the results of a pilot with the results of another study or in a meta-analysis?

- Yes, provided the sampling frame and methodologies are the same.

- No, if the main study is reported and it includes the pilot study.

Can the results of a pilot study be valid on their own, without the existence of the main study?

- Yes, if the results show that it is not feasible to proceed to the main study or there is insufficient funding.

Can I apply for funding for a pilot study?

- Yes. Like any grant, it is important to justify the need for piloting.

- The pilot has to be placed in the context of the main study.

Can I randomize patients in a pilot study?

- Yes. For a phase III pilot study, one of the goals could be to assess how a randomization procedure might work in the main study or whether the idea of randomization might be acceptable to patients [ 10 ]. In general, it is always best for a pilot to maintain the same design as the main study.

How can I use the information from a pilot to estimate the sample size?

- Use with caution, as results from pilot studies can potentially mislead sample size calculations.

- Consider supplementing the information with qualitative discussions with clinicians - see section 5; and

- Create a sample size table to acknowledge the uncertainty of the pilot information - see section 5.

Can I use the results of a pilot study to treat my patients?

- Not a good idea!

- Pilot studies are primarily for assessing feasibility.

What can I do with a failed or bad pilot study?

- No study is a complete failure; it can always serve as a bad example! However, it is worth making clear that a pilot study showing that the main study is not likely to be feasible is not a failed (pilot) study. In fact, it is a success - because you avoided wasting scarce resources on a study destined for failure!

8. Ethical Aspects of Pilot Studies

Halpern et al. [ 28 ] stated that conducting underpowered trials is unethical. However, they proposed that underpowered trials are ethical in two situations: (i) small trials of interventions for rare diseases, which require documenting explicit plans for including the results with those of similar trials in a prospective meta-analysis; and (ii) early-phase trials in the development of drugs or devices, provided they are adequately powered for defined purposes other than randomized treatment comparisons. Pilot studies of phase III trials (dealing with common diseases) are not addressed in their proposal. It is therefore prudent to ask: is it ethical to conduct a study whose feasibility cannot be guaranteed (i.e. one without a high probability of success)?

It seems unethical to consider running a phase III study without having sufficient data or information about the feasibility. In fact, most granting agencies often require data on feasibility as part of their assessment of the scientific validity for funding decisions.

There is, however, one important ethical aspect of pilot studies that has received little or no attention from researchers, research ethics boards and ethicists alike. This pertains to the obligation that researchers have to disclose the feasibility nature of pilot studies to the patients or participants in a trial. This is essential given that some pilot studies may not lead to further studies. A review of the commonly cited research ethics guidelines - the Nuremberg Code [ 29 ], the Helsinki Declaration [ 30 ], the Belmont Report [ 31 ], ICH Good Clinical Practice [ 32 ], and the International Ethical Guidelines for Biomedical Research Involving Human Subjects [ 33 ] - shows that pilot studies are not addressed in any of them. Canadian researchers are also encouraged to follow the Tri-Council Policy Statement (TCPS) [ 34 ]; it too does not address how pilot studies should be approached. It seems to us that, given the special nature of feasibility or pilot studies, the disclosure of their purpose to study participants requires special wording that informs them of the definition of a pilot study, the feasibility objectives of the study, and the criteria for success of feasibility. To fully inform participants, we suggest using the following wording in the consent form:

" The overall purpose of this pilot study is to assess the feasibility of conducting a large study to [state primary objective of the main study]. A feasibility or pilot study is a study that... [state a general definition of a feasibility study]. The specific feasibility objectives of this study are ... [state the specific feasibility objectives of the pilot study]. We will determine that it is feasible to carry on the main study if ... [state the criteria for success of feasibility] ."

9. Recommendation for Reporting the Results of Pilot Studies

Adapted from the CONSORT Statement [ 9 ], Table 3 provides a checklist of items to consider including in a report of a pilot study.

Title and abstract

Item #1: The title or abstract should indicate that the study is a "pilot" or "feasibility" study.

As the first summary of the contents of any report, it is important for the title to indicate clearly that the report is of a pilot or feasibility study. This is also helpful to other researchers searching electronically for information about feasibility issues. Our quick search of PubMed [on July 13, 2009] using the terms "pilot" OR "feasibility" OR "proof-of-concept" revealed 24423 (16%) hits with these terms in the title or abstract, compared with 149365 hits with these terms anywhere in the text.

Item #2: Scientific background for the main study and explanation of rationale for assessing feasibility through piloting

The rationale for initiating a pilot should be based on the need to assess feasibility for the main study. Thus, the background of the main study should clearly describe what is known or not known about important feasibility aspects to provide context for piloting.

Item #3: Participants and setting of the study

The description of the inclusion-exclusion or eligibility criteria for participants should be the same as in the main study. The settings and locations where the data were collected should also be clearly described.

Item #4: Interventions

Give precise details of the interventions intended for each group and how and when they were actually administered (if applicable). State clearly whether any aspects of the intervention are being assessed for feasibility.

Item #5: Objectives

State the specific scientific primary and secondary objectives and hypotheses for the main study and the specific feasibility objectives. It is important to clearly indicate the feasibility objectives as the primary focus for the pilot.

Item #6: Outcomes

Clearly define primary and secondary outcome measures for the main study. Then, clearly define the feasibility outcomes and how they were operationalized - these should include key elements such as recruitment rates, consent rates, completion rates, variance estimates, etc. In some cases, a pilot study may be conducted with the aim to determine a suitable (clinical or surrogate) endpoint for the main study. In such a case, one may not be able to define the primary outcome of the main study until the pilot is finished. However, it is important that determining the primary outcome of the main study be clearly stated as part of feasibility outcomes.

Item #7: Sample Size

Describe how the sample size was determined. If the pilot is a proof-of-concept study, was the sample size calculated based on the primary or key surrogate marker(s)? In general, if the pilot is for a phase III study, there may be no need for a formal sample size calculation. However, the confidence interval approach may be used to calculate and justify the sample size based on the key feasibility objective(s).

Item #8: Feasibility criteria

Clearly describe the criteria for assessing success of feasibility - these should be based on the feasibility objectives.

Item #9: Statistical Analysis

Describe the statistical methods for the analysis of primary and secondary feasibility outcomes.

Item #10: Ethical Aspects

State whether the study received research ethics approval. Describe how informed consent was handled - given the feasibility nature of the study.

Item #11: Participant Flow

Describe the flow of participants through each stage of the study (use of a flow diagram is strongly recommended - see CONSORT [ 9 ] for a template). Describe deviations from the pilot study protocol as planned, with reasons for the deviations. State the number of exclusions at each stage and the corresponding reasons for exclusion.

Item #12: Recruitment

Report the dates defining the periods of recruitment and follow-up.

Item #13: Baseline Data

Report the baseline demographic and clinical characteristics of the participants.

Item #14: Outcomes and Estimation

For each primary and secondary feasibility outcome, report the point estimate of effect and its precision (e.g., 95% CI) - if applicable.

Item #15: Interpretation

Interpretation of the results should focus on feasibility, taking into account the stated criteria for success of feasibility, study hypotheses, sources of potential bias or imprecision (given the feasibility nature of the study) and the dangers associated with multiplicity - repeated testing on multiple outcomes.

Item #16: Generalizability

Discuss the generalizability (external validity) of the feasibility aspects observed in the study. State clearly what modifications in the design of the main study (if any) would be necessary to make it feasible.

Item #17: Overall evidence of feasibility

Discuss the general results in the context of overall evidence of feasibility. It is important that the focus be on feasibility.

10. Conclusions

Pilot or vanguard studies provide a good opportunity to assess the feasibility of large full-scale studies. Indeed, pilot studies are the best way to assess the feasibility of a large, expensive full-scale study, and are in fact an almost essential prerequisite. Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. Pilot studies should be well designed, with clear feasibility objectives, clear analytic plans, and explicit criteria for determining success of feasibility. They should be used cautiously for determining treatment effects and variance estimates for power or sample size calculations. Finally, they should be scrutinized the same way as full-scale studies, and every attempt should be made to publish the results in peer-reviewed journals.

Change history

11 March 2023

A Correction to this paper has been published: https://doi.org/10.1186/s12874-023-01880-1

References

1. Waite M: Concise Oxford Thesaurus. 2002, Oxford, England: Oxford University Press, 2.

2. Last JM, editor: A Dictionary of Epidemiology. 2001, Oxford University Press, 4.

3. Everitt B: Medical Statistics from A to Z: A Guide for Clinicians and Medical Students. 2006, Cambridge: Cambridge University Press, 2.

4. Tavel JA, Fosdick L, ESPRIT Vanguard Group, ESPRIT Executive Committee: Closeout of four phase II Vanguard trials and patient rollover into a large international phase III HIV clinical endpoint trial. Control Clin Trials. 2001, 22: 42-48. 10.1016/S0197-2456(00)00114-8.

5. Arnold DM, Burns KE, Adhikari NK, Kho ME, Meade MO, Cook DJ: The design and interpretation of pilot trials in clinical research in critical care. Crit Care Med. 2009, 37 (Suppl 1): 69-74. 10.1097/CCM.0b013e3181920e33.

6. Computerization of Medical Practice for the Enhancement of Therapeutic Effectiveness. Last accessed August 8, 2009, [ http://www.compete-study.com/index.htm ]

7. Heart Outcomes Prevention Evaluation Study. Last accessed August 8, 2009, [ http://www.ccc.mcmaster.ca/hope.htm ]

8. Cardiovascular Health Awareness Program. Last accessed August 8, 2009, [ http://www.chapprogram.ca/resources.html ]

9. Moher D, Schulz KF, Altman DG, CONSORT Group (Consolidated Standards of Reporting Trials): The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. J Am Podiatr Med Assoc. 2001, 91: 437-442.

10. Lancaster GA, Dodd S, Williamson PR: Design and analysis of pilot studies: recommendations for good practice. J Eval Clin Pract. 2004, 10: 307-312. 10.1111/j..2002.384.doc.x.

11. Craig N, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M: Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008, 337: a1655. 10.1136/bmj.a1655.

12. Van Teijlingen ER, Rennie AM, Hundley V, Graham W: The importance of conducting and reporting pilot studies: the example of the Scottish Births Survey. J Adv Nurs. 2001, 34: 289-295. 10.1046/j.1365-2648.2001.01757.x.

13. Van Teijlingen ER, Hundley V: The importance of pilot studies. Social Research Update. 2001, 35. [ http://sru.soc.surrey.ac.uk/SRU35.html ]

14. Lawrence Gould A: Timing of futility analyses for 'proof of concept' trials. Stat Med. 2005, 24: 1815-1835. 10.1002/sim.2087.

15. Fardon T, Haggart K, Lee DK, Lipworth BJ: A proof of concept study to evaluate stepping down the dose of fluticasone in combination with salmeterol and tiotropium in severe persistent asthma. Respir Med. 2007, 101: 1218-1228. 10.1016/j.rmed.2006.11.001.

16. Chow SC, Chang M: Adaptive design methods in clinical trials - a review. Orphanet J Rare Dis. 2008, 3: 11. 10.1186/1750-1172-3-11.

17. Gould AL: Planning and revising the sample size for a trial. Stat Med. 1995, 14: 1039-1051. 10.1002/sim.4780140922.

18. Coffey CS, Muller KE: Properties of internal pilots with the univariate approach to repeated measures. Stat Med. 2003, 22: 2469-2485. 10.1002/sim.1466.

19. Zucker DM, Wittes JT, Schabenberger O, Brittain E: Internal pilot studies II: comparison of various procedures. Stat Med. 1999, 18: 3493-3509. 10.1002/(SICI)1097-0258(19991230)18:24<3493::AID-SIM302>3.0.CO;2-2.

20. Kieser M, Friede T: Re-calculating the sample size in internal pilot designs with control of the type I error rate. Stat Med. 2000, 19: 901-911. 10.1002/(SICI)1097-0258(20000415)19:7<901::AID-SIM405>3.0.CO;2-L.

21. Kraemer HC, Mintz J, Noda A, Tinklenberg J, Yesavage JA: Caution regarding the use of pilot studies to guide power calculations for study proposals. Arch Gen Psychiatry. 2006, 63: 484-489. 10.1001/archpsyc.63.5.484.

22. Yin Y: Sample size calculation for a proof of concept study. J Biopharm Stat. 2002, 12: 267-276. 10.1081/BIP-120015748.

23. Wittes J, Brittain E: The role of internal pilot studies in increasing the efficiency of clinical trials. Stat Med. 1990, 9: 65-71. 10.1002/sim.4780090113.

24. Lenth R: Some practical guidelines for effective sample size determination. The American Statistician. 2001, 55: 187-193. 10.1198/000313001317098149.

25. Cook DJ, Rocker G, Meade M, Guyatt G, Geerts W, Anderson D, Skrobik Y, Hebert P, Albert M, Cooper J, Bates S, Caco C, Finfer S, Fowler R, Freitag A, Granton J, Jones G, Langevin S, Mehta S, Pagliarello G, Poirier G, Rabbat C, Schiff D, Griffith L, Crowther M, PROTECT Investigators, Canadian Critical Care Trials Group: Prophylaxis of Thromboembolism in Critical Care (PROTECT) Trial: a pilot study. J Crit Care. 2005, 20: 364-372. 10.1016/j.jcrc.2005.09.010.

26. Choi PT, Beattie WS, Bryson GL, Paul JE, Yang H: Effects of neuraxial blockade may be difficult to study using large randomized controlled trials: the PeriOperative Epidural Trial (POET) Pilot Study. PLoS One. 2009, 4 (2): e4644. 10.1371/journal.pone.0004644.

27. Altman DG, Bland JM: Absence of evidence is not evidence of absence. BMJ. 1995, 311: 485.

28. Halpern SD, Karlawish JH, Berlin JA: The continuing unethical conduct of underpowered clinical trials. JAMA. 2002, 288: 358-362. 10.1001/jama.288.3.358.

29. The Nuremberg Code, Research ethics guideline. 2005. Last accessed August 8, 2009, [ http://www.hhs.gov/ohrp/references/nurcode.htm ]

30. The Declaration of Helsinki, Research ethics guideline. Last accessed December 22, 2009, [ http://www.wma.net/en/30publications/10policies/b3/index.html ]

31. The Belmont Report, Research ethics guideline. Last accessed August 8, 2009, [ http://ohsr.od.nih.gov/guidelines/belmont.html ]

32. The ICH Harmonized Tripartite Guideline - Guideline for Good Clinical Practice. Last accessed August 8, 2009, [ http://www.gcppl.org.pl/ma_struktura/docs/ich_gcp.pdf ]

33. The International Ethical Guidelines for Biomedical Research Involving Human Subjects. Last accessed August 8, 2009, [ http://www.fhi.org/training/fr/Retc/pdf_files/cioms.pdf ]

34. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans, Government of Canada. Last accessed August 8, 2009, [ http://www.pre.ethics.gc.ca/english/policystatement/policystatement.cfm ]

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/10/1/prepub



Dr Lehana Thabane is clinical trials mentor for the Canadian Institutes of Health Research. We thank the reviewers for insightful comments and suggestions which led to improvements in the manuscript.

Author information

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada

Lehana Thabane, Jinhui Ma, Rong Chu, Ji Cheng, Afisi Ismaila, Lorena P Rios, Marroon Thabane & Charles H Goldsmith

Biostatistics Unit, St Joseph's Healthcare Hamilton, Hamilton, ON, Canada

Lehana Thabane, Jinhui Ma, Rong Chu, Ji Cheng, Lorena P Rios & Charles H Goldsmith

Department of Medical Affairs, GlaxoSmithKline Inc., Mississauga, ON, Canada

Afisi Ismaila & Reid Robson

Department of Medicine, Division of Gastroenterology, McMaster University, Hamilton, ON, Canada

Marroon Thabane

Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada

Lora Giangregorio


Corresponding author

Correspondence to Lehana Thabane.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

LT drafted the manuscript. All authors reviewed several versions of the manuscript, read and approved the final version.

The original online version of this article was revised: the authors would like to correct the number of sample size in the fourth paragraph under the heading Sample Size for Pilot Studies from “75 patients” to “289 patients”.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article.

Thabane, L., Ma, J., Chu, R. et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol 10, 1 (2010). https://doi.org/10.1186/1471-2288-10-1


Received : 09 August 2009

Accepted : 06 January 2010

Published : 06 January 2010

DOI : https://doi.org/10.1186/1471-2288-10-1


  • Pilot Study
  • Sample Size Calculation
  • Research Ethic Board
  • Adaptive Design

BMC Medical Research Methodology

ISSN: 1471-2288


Enago Academy

Why Is a Pilot Study Important in Research?


Are you working on a new research project? We know that you are excited to start, but before you dive in, make sure your study is feasible. You don’t want to end up having to process too many samples at once or realize you forgot to add an essential question to your questionnaire.

What is a Pilot Study?

You can determine the feasibility of your research design with a pilot study before you start. This is a preliminary, small-scale “rehearsal” in which you test the methods you plan to use for your research project. You will then use the results to guide the methodology of your large-scale investigation. Pilot studies should be performed for both qualitative and quantitative studies. Here, we discuss the importance of the pilot study and how it will save you time, frustration, and resources.

“ You never test the depth of a river with both feet ” – African proverb

Components of a Pilot Study

Whether your research is a clinical trial of a medical treatment or a survey in the form of a questionnaire, you want your study to be informative and add value to your research field. Things to consider in your pilot study include:

  • Sample size and selection. Your data needs to be representative of the target study population. Use statistical methods to estimate an appropriate sample size.
  • Determine the criteria for a successful pilot study based on the objectives of your study. How will your pilot study address these criteria?
  • When recruiting subjects or collecting samples, ensure that the process is practical and manageable.
  • Always test the measurement instrument. This could be a questionnaire, equipment, or methods used. Is it realistic and workable? How can it be improved?
  • Data entry and analysis. Run the trial data through your proposed statistical analysis to see whether the analysis is appropriate for your data set.
  • Create a flow chart of the process.

How to Conduct a Pilot Study

Conducting a pilot study is an essential step in many research projects. Here’s a general guide on how to conduct a pilot study:

Step 1: Define Objectives

Identify which specific aspects of your main study you want to test or evaluate in your pilot study.

Step 2: Evaluate Sample Size

Decide on an appropriate sample size for your pilot study. This can be smaller than your main study but should still be large enough to provide meaningful feedback.

Step 3: Select Participants

Choose participants who are similar to those you’ll include in the main study. Ensure they match the demographics and characteristics of your target population.

Step 4: Prepare Materials

Develop or gather all the materials needed for the study, such as surveys, questionnaires, protocols, etc.

Step 5: Explain the Purpose of the Study

Briefly explain the purpose and implementation method of the pilot study to participants. Pay attention to the study duration to help you refine your timeline for the main study.

Step 6: Gather Feedback

Gather feedback from participants through surveys, interviews, or discussions. Ask about their understanding of the questions, clarity of instructions, time taken, etc.

Step 7: Analyze Results

Analyze the collected data and identify any trends or patterns. Take note of any unexpected issues, confusion, or problems that arise during the pilot.

Step 8: Report Findings

Write a brief report detailing the process, results, and any changes made.

Based on the results observed in the pilot study, make necessary adjustments to your study design, materials, procedures, etc. Furthermore, ensure you are following ethical guidelines for research, even in a pilot study.

Importance of Pilot Study in Research

Pilot studies should be routinely incorporated into research designs because they:

  • Help define the research question
  • Test the proposed study design and process. This could alert you to issues which may negatively affect your project.
  • Educate yourself on different techniques related to your study.
  • Test the safety of the medical treatment in preclinical trials on a small number of participants. This is an essential step in clinical trials.
  • Determine the feasibility of your study, so you don’t waste resources and time.
  • Provide preliminary data that you can use to improve your chances for funding and convince stakeholders that you have the necessary skills and expertise to successfully carry out the research.

Are Pilot Studies Always Necessary?

We recommend pilot studies for all research. Scientific research does not always go as planned; therefore, you should optimize the process to minimize unforeseen events. Why risk disastrous and expensive mistakes that could have been discovered and corrected in a pilot study?

An Essential Component for Good Research Design

Pilot work not only gives you a chance to determine whether your project is feasible but also an opportunity to publish its results. You have an ethical and scientific obligation to get your information out to assist other researchers in making the most of their resources.

A successful pilot study does not ensure the success of a research project. However, it does help you assess your approach and practice the techniques required for your project, and it gives you an indication of whether your project will work. Would you start a research project without a pilot study?



Source: U.S. Department of Health and Human Services, National Institutes of Health

Pilot Studies: Common Uses and Misuses

Although pilot studies are a critical step in the process of intervention development and testing, several misconceptions exist on their true uses and misuses. NCCIH has developed a Framework for Developing and Testing Mind and Body Interventions that includes brief information on pilot studies. Here we offer additional guidance specifically on the do’s and don’ts of pilot work.

A pilot study is defined as “A small-scale test of the methods and procedures to be used on a larger scale” (Porta, Dictionary of Epidemiology , 5th edition, 2008). The goal of pilot work is not to test hypotheses about the effects of an intervention, but rather, to assess the feasibility/acceptability of an approach to be used in a larger scale study. Thus, in a pilot study you are not answering the question “Does this intervention work?” Instead you are gathering information to help you answer “Can I do this?”

Uses of Pilot Studies

There are many aspects of feasibility and acceptability to examine to address the “Can I do this?” question. Here are some examples:

You may be able to think of other feasibility questions relevant to your specific intervention, population, or design. When designing a pilot study, it is important to set clear quantitative benchmarks for feasibility measures by which you will evaluate successful or unsuccessful feasibility (e.g., a benchmark for assessing adherence rates might be that at least 70 percent of participants in each arm will attend at least 8 of 12 scheduled group sessions). These benchmarks should be relevant to the specific treatment conditions and population under study, and thus will vary from one study to another.

While using a randomized design is not always necessary for pilot studies, having a comparison group can provide a more realistic examination of recruitment rates, randomization procedures, implementation of interventions, procedures for maintaining blinded assessments, and the potential to assess differential dropout rates. Feasibility measures are likely to vary between “open-label” designs, where participants know what they are signing up for, and randomized designs, where participants are assigned to a group.
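A quantitative benchmark like the one described above is straightforward to operationalize. Here is a minimal sketch (the attendance counts are hypothetical; the 70 percent / 8-of-12 threshold simply echoes the example in the text):

```python
# Hypothetical pilot adherence check. The benchmark values mirror the
# illustrative example above; the attendance data are invented.
def arm_meets_benchmark(sessions_attended, required_sessions=8,
                        required_fraction=0.70):
    """Return True if enough participants attended enough sessions."""
    adherent = sum(1 for s in sessions_attended if s >= required_sessions)
    return adherent / len(sessions_attended) >= required_fraction

treatment_arm = [12, 10, 9, 8, 7, 11, 12, 8, 6, 10]  # 8 of 10 adherent
control_arm = [5, 12, 7, 8, 6, 9, 4, 8, 7, 6]        # 4 of 10 adherent

print(arm_meets_benchmark(treatment_arm))  # True  (0.80 >= 0.70)
print(arm_meets_benchmark(control_arm))    # False (0.40 <  0.70)
```

Setting the threshold in advance, as here, is the point: the pilot then yields a clear pass/fail feasibility answer per arm rather than a post hoc judgment.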

In addition to providing important feasibility data as described above, pilot studies also provide an opportunity for study teams to develop good clinical practices to enhance the rigor and reproducibility of their research. This includes the development of documentation and informed consent procedures, data collection tools, regulatory reporting procedures, and monitoring procedures.

The goal of pilot studies is not to test hypotheses; thus, no inferential statistics should be proposed. Therefore, it is not necessary to provide power analyses for the proposed sample size of your pilot study. Instead, the proposed pilot study sample size should be based on practical considerations including participant flow, budgetary constraints, and the number of participants needed to reasonably evaluate feasibility goals.

This testing of the methods and procedures to be used in a larger scale study is the critical groundwork we wish to support in PAR-14-182 , to pave the way for the larger scale efficacy trial. As part of this process, investigators may also spend time refining their intervention through iterative development and then test the feasibility of their final approach.

Misuses of Pilot Studies

Rather than focusing on feasibility and acceptability, too often, proposed pilot studies focus on inappropriate outcomes, such as determining “preliminary efficacy.” The most common misuses of pilot studies include:

  • Attempting to assess safety/tolerability of a treatment,
  • Seeking to provide a preliminary test of the research hypothesis, and
  • Estimating effect sizes for power calculations of the larger scale study.

Why can’t pilot studies be used to assess safety and tolerability?

Investigators often propose to examine “preliminary safety” of an intervention within a pilot study; however, due to the small sample sizes typically involved in pilot work, they cannot provide useful information on safety except for extreme cases where a death occurs or repeated serious adverse events surface. For most interventions proposed by NCCIH investigators, suspected safety concerns are quite minimal/rare and thus, unlikely to be picked up in a small pilot study. If any safety concerns are detected, group-specific rates with 95 percent confidence intervals should be reported for adverse events. However, if no safety concerns are demonstrated in the pilot study, investigators cannot conclude that the intervention is safe.
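The width of a confidence interval around a rare-event rate makes this point concrete. A minimal sketch (hypothetical counts; the Wilson score interval is a standard method for proportions, not one prescribed by NCCIH):

```python
import math

def wilson_ci(events, n, z=1.96):
    """95% Wilson score confidence interval for a proportion (events out of n)."""
    p = events / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return max(0.0, center - half), min(1.0, center + half)

# Zero adverse events among 20 pilot participants: the interval still
# extends to roughly a 16% true event rate.
lo, hi = wilson_ci(0, 20)
print(f"0/20 events: 95% CI = ({lo:.3f}, {hi:.3f})")  # (0.000, 0.161)
```

Even with zero observed events in 20 participants, the data remain consistent with a true adverse-event rate as high as about 16 percent, which is why a clean pilot cannot establish that an intervention is safe.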

Why can’t pilot studies provide a “preliminary test” of the research hypothesis?

We routinely see specific aims for feasibility pilot studies that propose to evaluate “preliminary efficacy” of intervention A for condition X. However, there are two primary reasons why pilot studies cannot be used for this purpose. First, at the time a pilot study is conducted, there is a limited state of knowledge about the best methods to implement the intervention in the patient population under study. Therefore, conclusions about whether the intervention “works” are premature because you don’t yet know whether you implemented it correctly. Second, due to the smaller sample sizes used in pilot studies, they are not powered to answer questions about efficacy. Thus, any estimated effect size is uninterpretable—you do not know whether the “preliminary test” has returned a true result, a false positive result, or a false negative result (see Figure 1).

Why can’t pilot studies estimate effect sizes for power calculations of the larger scale study?

Since any effect size estimated from a pilot study is unstable, it does not provide a useful estimation for power calculations. If a very large effect size was observed in a pilot study and it achieves statistical significance, it only proves that the true effect is likely not zero, but the observed magnitude of the effect may be overestimating the true effect. Power calculations for the subsequent trial based on such effect size would indicate a smaller number of participants than actually needed to detect a clinically meaningful effect, ultimately resulting in a negative trial. On the other hand, if the effect size estimated from the pilot study was very small, the subsequent trial might not even be pursued due to assumptions that the intervention does not work. If the subsequent trial was designed, the power calculations would indicate a much larger number of participants than actually needed to detect an effect, which may reduce chances of funding (too expensive), or if funded, would expose an unnecessary number of participants to the intervention arms (see Figure 1).
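This sensitivity can be made concrete with the standard normal-approximation formula for a two-sample comparison, n per arm ≈ 2(z₁₋α/₂ + z₁₋β)² / d². The sketch below uses hypothetical effect sizes (the formula is standard; the numbers are illustrative only):

```python
import math

def n_per_arm(d):
    """Approximate n per arm for a two-sample test of standardized effect d,
    two-sided alpha = 0.05 and power = 0.80 (normal approximation)."""
    z_alpha, z_beta = 1.96, 0.8416  # z-quantiles for alpha/2 and power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# If a small pilot happens to observe d = 0.6 but the true effect is 0.3,
# the trial planned from the pilot is far too small: halving d quadruples n.
print(n_per_arm(0.6))  # 44 per arm
print(n_per_arm(0.3))  # 175 per arm
```

An observed pilot effect of d = 0.6 suggests about 44 participants per arm, but if the true effect is d = 0.3, roughly 175 per arm are needed, so a trial sized from the inflated pilot estimate would be badly underpowered.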


So what else can you do to provide effect sizes for power calculations?

Because pilot studies provide unstable estimates of effect size, the recommended approach is to base sample size calculations for efficacy studies on estimates of a clinically meaningful difference as illustrated in Figure 2. Investigators can estimate clinically meaningful differences by considering what effect size would be necessary to change clinical behaviors and/or guideline recommendations. In this process it might be beneficial to convene stakeholder groups to determine what type of difference would be meaningful to patient groups, clinicians, practitioners, and/or policymakers. In determining a clinically meaningful effect, researchers should also consider the intensity of the intervention and the risk of harm versus the expectation of benefit. Observational data and the effect size seen with a standard treatment can provide useful starting points to help determine clinically meaningful effects. For all of these methods, you should ask the question, “What would make a difference for you?” You might consider using several of these methods and determining a range of effect sizes as a basis for your power calculations.


Pilot studies should not be used to test hypotheses about the effects of an intervention. The “Does this work?” question is best left to the full-scale efficacy trial, and the power calculations for that trial are best based on clinically meaningful differences. Instead, pilot studies should assess the feasibility/acceptability of the approach to be used in the larger study, and answer the “Can I do this?” question. You can read more about the other steps involved in developing and testing mind and body interventions on our NCCIH Research Framework page.

Additional Resources:

  • Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research.   Journal of Psychiatric Research. 2011;45(5):626-629.
  • Kraemer HC, Mintz J, Noda A, et al. Caution regarding the use of pilot studies to guide power calculations for study proposals. Archives of General Psychiatry. 2006;63(5):484-489.
  • Kistin C, Silverstein M. Pilot studies: a critical but potentially misused component of interventional research. JAMA. 2015;314(15):1561-1562.
  • Keefe RS, Kraemer HC, Epstein RS, et al. Defining a clinically meaningful effect for the design and interpretation of randomized controlled trials. Innovations in Clinical Neuroscience. 2013;10(5-6 Suppl A):4S-19S.
What is pilot testing?

Last updated 12 February 2023 · Reviewed by Tanya Williams

When you have a new project in mind, conducting a pilot test can help you get a better feel for how it will ultimately perform. However, a strong understanding of what pilot testing is, how it works, and what you may need to do with it is essential to the overall performance of your test and your product.


  • Why pilot testing is important

In many cases, a pilot test can go a long way toward providing more information and deeper insights into your future study.

Learn more about potential costs

You will likely have a specific budget related to your project. Therefore, you will want to get the best possible results from your study within that budget, but you may not be exactly sure how the budget constraints will ultimately impact your project. 

Conducting a pilot study can help you determine what the cost of a larger study will look like, which can ensure that you manage your sample size and testing plans accordingly. 

Provide key information about possible issues

A pilot test can help provide deeper insights into any issues you might face when running your larger study. With a pilot test, you can learn more about the effectiveness of the methods you've chosen, the feasibility of gathering the information you need, and the practicality of your study. Furthermore, you may notice any possible problems with your study early, which means you can adjust your methods when you begin that larger-scale testing. 

Determine feasibility

In some cases, you may find through pilot testing that the study you want to perform simply isn't realistic, based on the availability of key data and/or the way your brand functions. For example, you might have a hard time getting real-world answers from your customers, or you might discover that customers aren't using your products the way you had hoped. 

By determining feasibility through the pilot test, you can avoid wasting money on a larger-scale study that might not provide you with the information you need. 

Shape your research

Sometimes, your pilot study may quickly reveal that the information you thought to be true actually isn't. You may discover that customers are looking for different features or options than you initially expected, or that your customers aren't interested in a specific product. 

On the other hand, your pilot study may uncover that customers have a much deeper use for some feature or product than you thought, making efforts to remove it counterproductive. With a pilot study, you can shape your future research efforts more effectively. 

  • Uses for pilot studies

Pilot studies can be used in a variety of ways. Some of these include:

  • Trials of new products
  • Testing customer focus groups
  • Conducting product testing
  • Seeking more information about your target audience
  • Market research

  • Misuses of pilot studies

While pilot studies have a number of critical uses, they can, in some cases, be misused. Common challenges that can interfere with accurate data collection include:

  • Misreporting of data: makes it difficult for researchers to see the information they originally sought to obtain through pilot testing
  • Improper testing methods: pilot studies may use inaccurate or inappropriate testing methods, leading researchers to errant conclusions
  • Inaccurate predictions: if used to inform future testing methods, they may bias the final results of the study

Properly conducting pilot studies is essential to using that data and information correctly in the future. The data is only as good as the methodology used to procure it.

  • Objectives of pilot testing

Pilot testing has several objectives, including: 

  • Identify the potential cost of a larger study
  • Determine the feasibility of a study
  • Get a closer look at risks, time involved, and ultimate performance in a larger study

  • How to do pilot testing

Conducting a pilot test involves several key steps, such as:

  • Determine the objective of the study
  • Choose key data points to analyze based on the study's goals
  • Prepare for the pilot test, including making sure that all researchers or testers are well informed
  • Deploy the pilot test, including all research
  • Evaluate the results
  • Use the results of the pilot test to make any changes to the larger test

  • Steps after evaluation of pilot testing

Once you have evaluated your pilot test, there are several steps you may want to take. These can include:

  • Identifying any potential risks associated with the study and taking steps to mitigate them
  • Analyzing the results of your pilot and the feasibility of continuing
  • Developing methods for collecting and analyzing data for your larger study, including making any changes indicated by the product pilot test

  • The benefits of pilot testing

Pilot testing offers a number of important benefits:

  • Learn more about your study methods and get a feel for what your actual test will look like
  • Avoid costly errors that could interfere with your results or prevent you from finishing your study
  • Make sure your study is realistic and feasible based on current data and capability
  • Get early insights into the possible results of a larger-scale test

Often, the results of a pilot test can help inform future testing methodology or shape the course of the future study.

  • Best practices for pilot testing

Understanding good practices for pilot testing can help you build a test that reflects the current needs and status of your organization. It’s important to consider the following:

Make sure all personnel are fully trained and understand the data they need to collect and methods for reporting and collecting that data. Incorrect data collection and/or analysis can interfere with your study and make it more difficult to get the information you need.

Identify clear key metrics for later analysis. Make sure you know what you are planning to analyze and what you want to learn from the pilot study. 

Base results on evidence, rather than simply collecting evidence to support a hypothesis . Using unbiased data collection methods can make a big difference in the outcome of your study.

Use pilot testing results to make changes to your future study that can help cut costs and improve outcomes. 

Remain open to different outcomes in the final test. While pilot testing can provide some insights, it may not provide the same information as a larger-scale test, especially when you conduct a pilot test with a limited segment of your target audience. 

  • Pilot testing vs. beta testing

During pilot testing, researchers are able to gather data prior to releasing or deploying a product. A pilot test is designed to offer insights into the product and/or customers. 

A beta test, on the other hand, actively deploys a version of the product into the customer’s environment and allows them to use it and provide feedback. Beta testing is generally conducted when a product is nearing completion, while a pilot test may be conducted early in the process.

Why is it called a pilot test?

A pilot test is an initial test or a miniature version of a larger-scale study or project. The term "pilot" means to test a plan, project, or other strategy before implementing it more fully across an organization. A pilot test is generally conducted before beta testing in the case of a product or software release.

What is pilot testing of a product?

A pilot test invites a limited group of users to test out a new product or solution and provide feedback. During a pilot test, the product will be released to a very limited group of reviewers, often hand-picked by the testing organization.

What is the difference between a pilot test and a pretest?

Generally, a pretest involves only a small selection of the elements involved in the larger-scale study. A pretest might help identify immediate concerns or provide deeper insight into a product's functionality or desirability. A pilot test, on the other hand, is a miniature version of the final test, conducted with the same attributes as the final research study.

Is pilot testing the same as alpha testing?

Alpha testing is a testing process usually applied to software. It is designed specifically to look at the bugs in a product before it is launched in a public form, including beta test form. Pilot testing, on the other hand, is a full test of the entire product and its features, and may involve end users.

While alpha testing is generally performed by employees of the organization and may involve testing strategies designed to identify challenges and problems, pilot testing usually involves use of the product by end users. Those users will then report on their findings and provide more insight into the product's overall functionality.



Guidelines for Designing and Evaluating Feasibility Pilot Studies

Jeanne A. Teresi

1 Columbia University Stroud Center at New York State Psychiatric Institute, 1051 Riverside Drive, Box 42, Room 2714, New York, New York, 10032-3702, USA

2 Research Division, Hebrew Home at Riverdale, 5901 Palisade Avenue, Riverdale New York 10471

Xiaoying Yu

3 Office of Biostatistics, Department of Preventive Medicine and Population Health, University of Texas Medical Branch at Galveston, 301 University Boulevard, Galveston, Texas, 77555-1147

Anita L. Stewart

4 University of California, San Francisco, Institute for Health & Aging, 490 Illinois St., 12 th floor, Box 0646, San Francisco, CA 94143

Ron D. Hays

5 University of California, Los Angeles; Division of General Internal Medicine and Health Services Research, 1100 Glendon Avenue, Suite 850, Los Angeles, California 90024


Pilot studies test the feasibility of methods and procedures to be used in larger-scale studies. Although numerous articles describe guidelines for the conduct of pilot studies, few have included specific feasibility indicators or strategies for evaluating multiple aspects of feasibility. Additionally, using pilot studies to estimate effect sizes to plan sample sizes for subsequent randomized controlled trials has been challenged; however, there has been little consensus on alternative strategies.

In Section 1 , specific indicators (recruitment, retention, intervention fidelity, acceptability, adherence, and engagement) are presented for feasibility assessment of data collection methods and intervention implementation. Section 1 also highlights the importance of examining feasibility when adapting an intervention tested in mainstream populations to a new more diverse group. In Section 2 , statistical and design issues are presented, including sample sizes for pilot studies, estimates of minimally important differences, design effects, confidence intervals and non-parametric statistics. An in-depth treatment of the limits of effect size estimation as well as process variables is presented. Tables showing confidence intervals around parameters are provided. With small samples, effect size, completion and adherence rate estimates will have large confidence intervals.


This commentary offers examples of indicators for evaluating feasibility, and of the limits of effect size estimation in pilot studies. As demonstrated, most pilot studies should not be used to estimate effect sizes, provide power calculations for statistical tests or perform exploratory analyses of efficacy. It is hoped that these guidelines will be useful to those planning pilot/feasibility studies before a larger-scale study.

Pilot studies are a necessary first step to assess the feasibility of methods and procedures to be used in a larger study. Some consider pilot studies to be a subset of feasibility studies ( 1 ), while others regard feasibility studies as a subset of pilot studies. As a result, the terms have been used interchangeably ( 2 ). Pilot studies have been used to estimate effect sizes to determine the sample size needed for a larger-scale randomized controlled trial (RCT) or observational study. However, this practice has been challenged because pilot study samples are usually small and unrepresentative, and estimates of parameters and their standard errors may be inaccurate, resulting in misleading power calculations ( 3 , 4 ). Other questionable goals of pilot studies include assessing safety and tolerability of interventions and obtaining preliminary answers to key research questions ( 5 ).

Because of these challenges, the focus of pilot studies has shifted to examining feasibility. The National Center for Complementary and Integrative Health (NCCIH) defines a pilot study as “a small-scale test of methods and procedures to assess the feasibility/acceptability of an approach to be used in a larger scale study” ( 6 ). Others note that pilot studies aim to “field-test logistical aspects of the future study and to incorporate these aspects into the study design” ( 5 ). Results can inform modifications, increasing the likelihood of success in the future study ( 7 ).

Although pilot studies can still be used to inform sampling decisions for larger studies, the emphasis now is on confidence intervals (CI) rather than the point estimate of effect sizes. However, as illustrated below, CIs will be large for small sample sizes. Addressable questions are whether data collection protocols are feasible, intervention fidelity is maintained, and participant adherence and retention are achieved.
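To illustrate how wide those confidence intervals are at pilot sample sizes, the following sketch (hypothetical retention counts; Wilson score interval, a standard method for proportions) compares the same 80 percent retention rate estimated from 20 versus 200 participants:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Same 80% observed retention rate, very different precision.
for kept, n in [(16, 20), (160, 200)]:
    lo, hi = wilson_ci(kept, n)
    print(f"{kept}/{n} retained: 95% CI = ({lo:.2f}, {hi:.2f}), "
          f"width {hi - lo:.2f}")
```

With n = 20 the interval spans roughly 0.58 to 0.92 (width about 0.34), far too wide to plan with; at n = 200 it narrows to about 0.74 to 0.85. The same logic applies to effect-size estimates, which is why the point estimate from a small pilot should not drive the power calculation.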

Although many in the scientific community have accepted the new focus on feasibility for pilot studies, there has not been universal adoption. Numerous articles describe guidelines for conducting feasibility pilot studies ( 8 – 10 ), both randomized and non-randomized ( 2 , 11 ). A useful next step is to augment general guidelines with specific feasibility indicators and describe strategies for evaluating multiple aspects of feasibility in one pilot study. Additionally, studies of health disparities face special feasibility issues. Interventions that were tested initially in mainstream populations may require adaptation for use in ethnically or socio-demographically diverse groups and measures may not be appropriate for those with lower education or limited English proficiency.

Building on a framework developed by the NCCIH ( 6 ), Figure 1 presents an overview of questions to address. Section 1 of this commentary provides guidelines for assessments, data collection, and intervention implementation. Section 2 addresses statistical and design issues related to conducting pilot studies.

Figure 1. Framework of Feasibility Questions for Pilot Studies

These guidelines were generated to assist investigators from several National Institutes of Health Centers that fund pilot studies. Presenters at Work in Progress meetings have expressed the need for help in framing pilot studies consistent with current views about their use and limitations. A goal of this commentary is to provide guidance to early and mid-stage investigators conducting pilot studies.

Section 1. Assessing Feasibility in Pilot Studies

Assessments and data collection.

Can participants comply with data collection protocols? Data can be obtained via questionnaires, performance tests (e.g., cardiopulmonary fitness, cognitive functioning), lab tests (e.g., imaging), and biospecimens (e.g., saliva, blood). Data may vary in complexity (e.g., repeated saliva samples over 3 days, maintaining a food diary), and intrusiveness (e.g., collecting mental health data or assessing cognition). The logistics can be challenging, e.g., conducting assessments at a clinic or university or scheduling imaging scans. With the COVID pandemic, an important issue is the feasibility of conducting assessments remotely, e.g., using telehealth software.

A detailed protocol is needed to test data collection feasibility, assure assessment completion, and track compliance. Measures may require administration via tablet or laptop in the community, with secure links for uploading and storing data; links and data collection software require testing during pilot studies. For biospecimens, the protocol should include details on storing and transferring samples (e.g., some may require refrigeration).

Feasibility indicators can include completion rates and times for specific components, perceived burden, inconvenience, and reasons for non-completion (9), all of which may inform assessment protocol modification. Assessments can be scheduled in community settings for convenience, and briefer measures may be used to reduce respondent burden. Instructions to interviewers and participants can be tested in the pilot study. For example, to facilitate compliance with a complex biospecimen collection protocol, a video, together with in-person support and instruction, was provided to Spanish-speaking Latinas (12).

Are needed data available from administrative records and how are variables defined and scored? Studies in clinical settings may use medical record data or administrative sources to assess medical conditions, and healthcare provider data may be used to determine eligibility. Feasibility issues include obtaining permission, demonstrating access, and capability to merge data across sources. Also important is how demographic or clinical characteristics are measured, and their accuracy and completeness. Race and ethnicity data are often obtained through the medical record, possibly as a stratification variable, but may be assessed in ways that make it of questionable validity (e.g., by observation). When an important clinical measure is not available in the medical records, one can explore the feasibility of self-report measures, which may be reliable and valid in relation to objective measures, e.g., weight and height ( 13 ) or CD4 counts ( 14 ).

Conceptual and Psychometric Adequacy of Measures

Are the measures acceptable, appropriate, and relevant to the target population? Measures developed primarily in mainstream populations may not be culturally appropriate for some racial and ethnic groups. There can be group differences in the interpretation of questions, or in the relevance of the concept measured. In a physical activity intervention study, the activities assessed excluded those typically performed by bariatric surgery patients, thus missing important changes in activity level (15). In a feasibility patient safety survey, respondents evaluated the survey's usefulness and understandability and whether it missed important issues (16).

Qualitative methods such as cognitive interviews and focus groups are key to determining conceptual adequacy and equivalence ( 17 ), and to ensure that the targeted sample members understand the questions ( 18 , 19 ). For example, the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology uses the Delphi method ( 20 ).

Is there evidence of reliability and validity (including responsiveness to change) of measures in the target population? Do measures developed in mainstream populations meet standard psychometric criteria when applied to the target population? This includes potentially testing the equivalence of administering measures via paper/pencil and electronically. Interrater reliability should be established for interviewer-administered measures, and a certification form developed and tested in the pilot study.
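Interrater reliability for interviewer-administered measures with categorical ratings is often summarized with Cohen's kappa, which corrects observed agreement for chance agreement. A minimal sketch (the pass/fail certification ratings below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters assigning categorical codes to the same cases."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement: product of each rater's marginal proportions, summed over categories
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical certification ratings (1 = pass, 0 = fail) from two interviewers
rater_a = [1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 1, 0, 0, 0, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.75
```

Here observed agreement is 7/8 = 0.875 and chance agreement is 0.5, giving kappa = 0.75.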

For example, the Patient-Reported Outcome Measurement Information System (PROMIS®) measures were developed with qualitative methods in an attempt to ensure conceptual equivalence across groups ( 21 ). However, later work examined the psychometric properties in new applications and translations ( 22 ), and physical function items were found to perform differently across language groups ( 19 , 23 ). Translation or different cultural understanding of phrases or words could result in lack of measurement equivalence.

Quantitative methods include obtaining preliminary estimates of reliability (e.g., test-retest, internal consistency, inter-rater), score distributions (range of values), floor or ceiling effects, skewness, and the patterns and extent of missing data, all of which are relevant for power calculations. Optimal qualitative methods to examine group differences in concepts, and quantitative methods for assessing psychometric properties and measurement equivalence were described in a special issue of Medical Care ( 24 ) and later summarized ( 25 ). Although pilot studies will not have sufficient sample sizes to test measurement equivalence, investigators can review literature describing performance in diverse groups. Identifying measures with evidence of conceptual and psychometric adequacy in the target population increases the likelihood that only minimal feasibility testing will be necessary. Feasibility testing can focus on multiple primary outcome measures to determine if one or more are not acceptable or understood as intended.
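Internal consistency, one of the preliminary reliability estimates mentioned above, can be computed directly as Cronbach's alpha. A minimal sketch with invented item scores (items that move in lockstep yield alpha = 1.0):

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one list per item)."""
    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(row) for row in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item scale completed by 4 respondents (perfectly correlated items)
item_scores = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
print(round(cronbach_alpha(item_scores), 6))  # → 1.0
```

With real pilot data one would also inspect the score distributions, floor/ceiling effects, and missingness noted above, not alpha alone.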

Intervention Implementation

Four aspects of the feasibility of implementing interventions are given in Figure 1. For interventionists, the questions are whether they can be recruited, trained, and retained, and whether they can deliver the intervention as intended. For participants, the main issue is whether they will adhere to and engage in the program components. The acceptability of treatment conditions pertains to both participants and interventionists. Testing feasibility is particularly important when evidence-based interventions found effective in a mainstream population are adapted or translated for a more diverse population (26).

Specific steps for each question are summarized in Table 1 , including feasibility assessment strategies and examples. A combination of quantitative and qualitative methods (mixed methods) is required for assessing implementation feasibility. Quantitative data can be obtained from structured surveys. Qualitative data are generated from open-ended interviews of interventionists or participants regarding adherence and acceptability, e.g., reasons for not attending sessions or difficulty implementing program elements.

Table 1. Methods for Examining Feasibility of Implementing Interventions

Recruiting and training interventionists, and assessing whether the intervention is delivered as intended are often overlooked, particularly when interventionists are recruited from community settings (e.g., promotores, community health workers). Intervention delivery as intended (implementation or treatment fidelity) is determined by observation and structured ratings of delivery. In a feasibility study, investigators can focus on modifiable factors affecting treatment fidelity with the goal of modifying the intervention immediately if needed, thus improving the chances of resolution.

Acceptability by both intervention and control groups (how suitable, satisfying, and attractive) ( 27 ) is critical for diverse populations, to assure that treatment conditions are sensitive to cultural issues and relevant. Acceptability, reported by participants and interventionists, can be determined prior to implementation through formative research and debriefing post-intervention interviews.

Although participant adherence to the intervention and retention are standard components of reporting (CONSORT), in a feasibility study, more detailed data are collected. Adherence can be tracked to each component, including assessment of reasons for non-adherence. If tracked in real time, results can highlight components that require modification. Interventionists can report whether participants can carry out the intervention activities ( 27 ) or have difficulty with some components, and participants can report whether components are too complicated or not useful. Adherence also includes engagement in the intervention (treatment receipt) ( 28 ). Engagement differs from adherence in that it is more focused on completion of all activities and/or practicing skills and understanding the material along the way.

Section 2. Statistical and Design Issues in Planning Pilot Studies

Sample sizes for pilot feasibility studies.

What sample size is needed for a pilot study to address feasibility issues? NCCIH notes that sample size should be based on “practical considerations including participant flow, budgetary constraints, and the number of participants needed to reasonably evaluate feasibility goals.” For qualitative work, sample sizes of 30 or fewer may suffice to reach saturation. For quantitative studies, a sample of 30 per group (intervention and control) may be adequate to establish feasibility (29).

Many rules of thumb exist regarding sample sizes for pilot studies (30–34), resulting in a confusing array of recommendations. Under reasonable scenarios for the statistics a pilot study might generate for process and outcome variables, relatively large samples are required. If parameters such as the proportion within treatment groups adhering to a regimen, or correlations among variables, are to be estimated, CIs may be very large with sample sizes less than 70–100 per group. If the goal is to examine the CI around feasibility process outcomes such as acceptance rates, adherence rates, or the proportion of eligible participants who are consented or who agree to be randomized, then sample sizes of at least 70 may be needed, depending on the point estimate and CI width (see Appendix Table 1).
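As a rough illustration of why 70–100 per group may be needed, the normal-approximation (Wald) sample size for estimating a single proportion to a desired precision can be sketched as follows (the 80% retention rate and the half-widths are assumed values for illustration):

```python
from math import ceil

def n_for_ci_halfwidth(p, halfwidth, z=1.96):
    """Approximate n so a 95% Wald CI for proportion p has the given half-width."""
    return ceil(z**2 * p * (1 - p) / halfwidth**2)

# Assumed 80% retention rate, estimated to within +/-10 and +/-5 percentage points
print(n_for_ci_halfwidth(0.8, 0.10))  # → 62
print(n_for_ci_halfwidth(0.8, 0.05))  # → 246
```

Halving the desired half-width roughly quadruples the required sample size, which is why precise estimation is rarely achievable with 30 per group.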

Group Differences and Effect Sizes

Can the pilot study be used to estimate group differences and generate effect sizes? Because the focus is on feasibility, results of statistical tests are generally not informative for powering the main trial outcomes. Additionally, feasibility process outcomes may be poorly estimated.

Pilot study investigators often include a section on power and statistical analyses in grant proposals. Usually, these sections are not well developed or justified. Often, design features and measure reliability, two features affecting power, are not considered. Most studies will require relatively large sample sizes to make inferential statements even for simple designs; complex designs and tests of mediation and moderation require even larger samples. Thus, most pilot studies are limited in terms of estimation and inference. Some investigators have written acceptable analysis plans to be used in a future, larger study, and propose to test algorithms and software and to produce results in an exploratory fashion. This may be acceptable if the intent is to test the analytic procedures. If a statistical plan is provided for a future larger study, it should be clearly indicated as such. Some investigators provide exploratory analyses of efficacy, which is not advised because the results will not be trustworthy.

What types of statistical analyses should be proposed for pilot studies? Descriptive statistics may be examined. For example, the mean and standard deviation for continuous measures, and the frequency and percentage for categorical measures, can be calculated overall and by subgroup. In large pilot trials, CIs may be provided to reflect the uncertainty of the main feasibility outcome by group.

It may be possible to ascertain the minimally important difference (MID) with which to power a future trial (35). For larger pilot trials, preparatory to large multi-site studies, the variance of the primary outcome measure may be useful for determining the standardized effect size. Note that the MID alone does not supply the variance estimate required to calculate an effect size.

Specifying the Minimally Important Difference

Are there available estimates of minimally important differences? While it is recognized that a MID cannot be generated using pilot data, such a specification based on earlier research may be important in planning a larger study. Methods for determining MIDs and treatment response have been reviewed ( 36 , 37 ). The MID is “the average change in the domain of interest on the target measure among the subgroup of people deemed to change a minimal (but important) amount according to an ‘anchor’” ( 38 ). Estimating the MID is a special case of examining responsiveness to change: the ability of a measure to reflect underlying change, e.g., in health (clinical status), intervening health events, interventions of known or expected efficacy, and retrospective reports of change by patients or providers. In estimating the MID, the best anchors (retrospective measure of change and clinical parameters) are ones that identify those who have changed but not too much. Clinical input may be useful to identify the subset of people who have experienced minimal change ( 6 , 39 ).

Variance Estimates

Are there estimates of the variances of outcomes in study arms/subgroups? Variance estimates have an important impact on future power calculations. One could use the observed variance to form a range of estimates around that value in sensitivity analyses, and check if variances are similar to those of other studies using the same measures. The CI around that estimate should be calculated, rather than just the point estimate. However, values derived from small pilot studies may change with larger sample sizes and may be inaccurate. Thus, this estimation will only apply to large pilot studies.

Confidence Intervals

How large will confidence intervals be for process outcomes? Although we advise against calculating effect sizes for efficacy outcomes, and caution about calculating feasibility outcomes involving proportions, information on CIs is included below because there are specialized pilot studies that are designed to be large enough to accurately estimate these indices. Additionally, it is instructive to show how wide the CI could be if used to examine group differences in feasibility indices or outcomes. CIs are presented for feasibility process outcomes such as recruitment, adherence and retention rates, and for correlations of the outcomes before and after an intervention. In general, point estimates will not be accurate. There are several rules of thumb ( 30 , 40 ). Leon, Davis, and Kraemer ( 7 ) provide examples of how wide CIs will be with small samples.

Examples of CI estimation for process outcomes.

The 95% Clopper-Pearson exact CI for one proportion, and the Wald method with continuity correction CI for the difference in two proportions, were calculated under various scenarios. With the α level set at 0.05, the limits of the 95% CI for one proportion are given by Leemis and Trivedi (41), and the Wald method CI for the difference in two proportions by Fleiss, Levin, and Paik (42).

The lower and upper limits for one proportion are

L = [1 + (n − n1 + 1) / (n1 × F(α/2, 2n1, 2(n − n1 + 1)))]^(−1)
U = [1 + (n − n1) / ((n1 + 1) × F(1 − α/2, 2(n1 + 1), 2(n − n1)))]^(−1)

where n is the total sample size, n1 is the number of events, and F(α/2, b, c) is the (α/2)th percentile of the F distribution with b and c degrees of freedom.

As shown in Appendix Table 1, the 95% CI for a single proportion of 0.1 with a total sample size of 30 is (0.021, 0.265), a width of 0.244. The width narrows with increased sample size, but it remains relatively large (0.185) even with a sample size of 50.
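The exact interval above can be reproduced without specialized software by inverting the binomial tail probabilities with bisection; a minimal sketch using only the standard library:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) CI for a proportion with x events out of n."""
    def solve(f):  # bisection: f is False below the root, True above it
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if f(mid):
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2
    # lower limit solves P(X >= x | p) = alpha/2; upper solves P(X <= x | p) = alpha/2
    lower = 0.0 if x == 0 else solve(lambda p: 1 - binom_cdf(x - 1, n, p) > alpha / 2)
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) < alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(3, 30)   # observed proportion 0.1 with n = 30
print(round(lo, 3), round(hi, 3))  # → 0.021 0.265
```

The output matches the (0.021, 0.265) interval cited in the text.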

For the difference between two proportions (0.2 vs 0.1), when the sample size per group is 10, the 95% CI is (−0.310, 0.510) and the width is 0.820 (see Appendix Table 2 ). When the group sample size is 30, the width is 0.425. Even with 50 per group, the CI width is relatively large (0.317).
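The two-proportion interval can likewise be checked directly. A sketch of the Wald method with continuity correction described above:

```python
from math import sqrt

def wald_cc_diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald CI with continuity correction for a difference in two proportions."""
    d = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    cc = 0.5 / n1 + 0.5 / n2  # continuity correction term
    return d - z * se - cc, d + z * se + cc

lo, hi = wald_cc_diff_ci(0.2, 10, 0.1, 10)
print(round(lo, 3), round(hi, 3), round(hi - lo, 3))  # → -0.31 0.51 0.82
```

This reproduces the (−0.310, 0.510) interval and width of 0.820 given in the text for 10 per group.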

The tables and figures provide other examples. As shown in Table 2 , Appendix Figure 1 and Appendix Table 1 , the minimum width for a CI for a single proportion is large for sample sizes less than 70. Table 3 , Appendix Figure 2 and Appendix Table 2 show that if one wished to estimate the difference in retention rates with accuracy, a sample size of at least 50 per group would be required.

Table 2. Minimum and Maximum Length for 95% Clopper-Pearson Exact CI for a Single Proportion

Note: The minimum and maximum values for the CI width were computed for proportions ranging from 0.1 to 0.9 by 0.1. The maximum width of a CI for a single proportion can be as large as 0.37 for a sample size of 30. For a given sample size, the 95% CI is widest for a proportion of 0.5 and narrowest when proportions are further away from 0.5. For example, when the proportion is 0.5, the maximum is 0.37 for n of 30; the minimum length is 0.24 when the proportion is 0.10 or 0.90.

Table 3. Minimum and Maximum Length for 95% Confidence Intervals for a Difference in Two Proportions

Note: The Wald method with continuity correction was used to calculate 95% CI for the difference (d) in two proportions (p2 - p1 = d, set p1=0.1, 0.2, 0.3, 0.4, d=0.1, 0.2, 0.3, then p2 = 0.2, 0.3, 0.4, 0.5, 0.6, 0.7 based on the value of d). The proportions are selected based on clinically relevant estimates and their differences. Setting p1=0.9, 0.8, 0.7, 0.6, given the same d=p1-p2 and corresponding p2=0.8, 0.7, 0.6, 0.5, 0.4, 0.3, will yield the same estimates of the width of CI (differing only in the label of the events). The maximum width of a CI for a difference in two proportions can be as large as 0.57 for a group sample size of 30.

For example, given n=30, the maximum width occurs when p2=0.55 and p1=0.45 and the minimum width occurs when p2=0.2 and p1=0.1.

Note also that in this example, p1 and p2 were restricted to the less extreme values indicated above. If p1 and p2 are not limited, and any two proportions are selected, the maximum values occur when p1 and p2 are close to 0.5 and within the range of proportions we considered; thus the value is still very close to the numbers in the table. If we consider more extreme proportions close to 0 and 1 then the Wald method of calculating confidence intervals for their difference can underestimate the width of the interval. For example, for n=30, the maximum occurs when p1 and p2 are very close to 0.5; for p1=0.5001 and p2=0.4999, the width is 0.5727. The minimum occurs when p1 and p2 are very close to 0 or 1; for p1=0.0001 and p2=0.0002 (or p1 = 0.9999 and p2 = 0.9998), the width is 0.0791. Detailed values are provided in Appendix Table 2 .

Correlations of the Outcomes Before and After the Intervention.

Table 4 shows the formulas and the minimum and maximum length of the 95% CI for Pearson correlation coefficients from 0.100 to 0.900. As shown in Appendix Table 3, the 95% CI for a correlation coefficient of 0.500 with a total sample size of 30 is (0.170, 0.729), a width of 0.559. When the sample size is 50, the width is 0.426. What is obvious from Table 4, Appendix Table 3, and Figure 3 is that with sample sizes below 100, one cannot estimate a correlation coefficient with accuracy, except when the correlation is as high as 0.900 and the sample size is over 50.

Table 4. Minimum and Maximum Length for 95% CI for Pearson Correlation Coefficient (0.1–0.9 by 0.1)

Note: The 95% CI for the correlation coefficient was obtained by using Fisher’s Z transformation (3). First, compute a 95% CI for the parameter (1/2)·ln[(1 + ρ)/(1 − ρ)] using the formula (1/2)·ln[(1 + r)/(1 − r)] ± 1.96/√(n − 3), where r is the sample correlation coefficient and n is the sample size.

Denote the limits of this 95% CI as (Lz, Uz). The limits of the 95% CI on the original scale, (Lρ, Uρ), can then be calculated using the conversion formulas Lρ = (e^(2Lz) − 1)/(e^(2Lz) + 1) and Uρ = (e^(2Uz) − 1)/(e^(2Uz) + 1).
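The Fisher Z interval is short to compute directly, since atanh and tanh implement the transformation and its inverse; a minimal sketch:

```python
from math import atanh, tanh, sqrt

def pearson_r_ci(r, n, z=1.96):
    """95% CI for a Pearson correlation via Fisher's Z transformation."""
    half = z / sqrt(n - 3)           # half-width on the Z scale
    zr = atanh(r)                    # (1/2) ln[(1 + r) / (1 - r)]
    return tanh(zr - half), tanh(zr + half)  # back-transform to the r scale

lo, hi = pearson_r_ci(0.5, 30)
print(round(lo, 3), round(hi, 3), round(hi - lo, 3))  # → 0.17 0.729 0.559
```

This reproduces the (0.170, 0.729) interval and width of 0.559 cited above for r = 0.5 and n = 30.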

Problems with Use of Non-parametric Statistics

Are non-parametric statistics a rescue method for small pilot studies? Some investigators believe, incorrectly, that non-parametric tests get around the problem of poor estimation with parametric tests. Parametric tests rely on distributional assumptions; for example, normality is assumed for a two-sample t-test comparing the means of two independent groups when the population variance is unknown. If the normality assumption is violated, a non-parametric test such as the Wilcoxon rank-sum test is often used, but that test carries its own important assumption: equality of population variances. Pilot studies are typically conducted with small sample sizes, and tests of normality are then unreliable, due either to lack of power to detect non-normality or to small-sample-induced non-normality. When variances are unequal, the two-sample t-test with Satterthwaite’s approximation of the degrees of freedom is robust, except under severe deviation from normality. Although a non-parametric test has higher power when the true underlying distribution is far from normal (given that its other assumptions are met), it typically has lower power than the parametric test when the underlying distribution is normal or close to normal. Unless there is strong evidence of a violation of normality based on the data (with a reasonable sample size) and/or established knowledge of the underlying distribution, the parametric test is generally preferred. Non-parametric tests are not assumption-free; they are neither a rescue method nor a substitute for parametric tests with small sample sizes.
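The Satterthwaite-adjusted t-test mentioned above adjusts the degrees of freedom when group variances differ; a sketch of the statistic and adjusted degrees of freedom (the samples are invented, and in practice one would use a vetted statistical package):

```python
from math import sqrt

def welch_t(a, b):
    """Two-sample t statistic with Satterthwaite (Welch) degrees of freedom."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance (ddof = 1)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    va, vb = var(a) / len(a), var(b) / len(b)  # per-group variance of the mean
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    # Satterthwaite approximation of the degrees of freedom
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Invented samples with clearly unequal variances
t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(round(t, 3), round(df, 2))  # → -1.897 5.88
```

Note that the adjusted df (5.88) is well below the pooled-variance value of n1 + n2 − 2 = 8, reflecting the variance inequality.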

Evaluation of Randomization Algorithms and Specification of Design Features and MIDs

The preceding presentation provided caveats regarding generating effect sizes, calculating power, estimating confidence intervals and use of non-parametric statistics. Below is a discussion of statistical or design factors that may be examined in pilot studies.

Randomization Algorithm

Is the randomization algorithm working correctly? One can check the procedures and protocol for randomization and whether the correct group assignment was made after randomization. With small sample sizes, chance imbalance between arms or within subgroups is expected and cannot be distinguished from a faulty algorithm using pilot data or early in a study. Therefore, examining balance between groups does not inform about the performance of the randomization procedure.
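One direct algorithm check is that a permuted-block scheme yields the intended within-block balance, independent of any chance imbalance in covariates. A minimal sketch (the block size and seed are arbitrary choices for illustration):

```python
import random

def permuted_block_assignments(n_blocks, block_size=4, seed=0):
    """Generate a permuted-block randomization list (equal A/B within every block)."""
    rng = random.Random(seed)  # fixed seed so the allocation list is reproducible
    assignments = []
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments

alloc = permuted_block_assignments(5)
# Algorithm check: every block of 4 contains exactly 2 A's and 2 B's
for i in range(0, len(alloc), 4):
    block = alloc[i:i + 4]
    assert block.count("A") == 2 and block.count("B") == 2
print(alloc.count("A"), alloc.count("B"))  # → 10 10
```

The per-block assertion is the point: it verifies the algorithm's guaranteed property, rather than inspecting baseline balance, which is uninformative in small samples.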

Dose and Separation

Is there separation between groups in terms of dose delivered? Does the dose need adjustment? Is there a difference between groups in program delivery? For example, in a study of behavioral interventions of diet and exercise changes to reduce blood pressure, did the usual care group members also change their diets or increase exercise, thus reducing the potential effects of the study? Group separation on intervention variables may be examined in studies that have an indicator of whether the intervention is affecting the targeted index, e.g., determining if blood levels of a drug are actually different between usual care and intervention groups.

Design Effect Estimates

Are there estimates of the design effects? The cluster size and intracluster correlation coefficient (ICC) can affect power. These may be difficult to estimate with small pilot studies; however, one can usually get some idea about the cluster size from other information, which can be used in planning a larger study. For example, in a study of a pain intervention, patients will be clustered within physicians/practices. Investigators can determine in advance about how many patients are cared for within a practice that may be sampled.
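For equal cluster sizes, the design effect is commonly computed as DEFF = 1 + (m − 1) × ICC, and the individually randomized sample size is inflated by that factor. A minimal sketch with assumed values (20 patients per practice, ICC of 0.05, and a hypothetical base n of 128):

```python
from math import ceil

def design_effect(cluster_size, icc):
    """DEFF = 1 + (m - 1) * ICC for equal cluster sizes."""
    return 1 + (cluster_size - 1) * icc

deff = design_effect(20, 0.05)  # assumed 20 patients per practice, ICC = 0.05
n_individual = 128              # hypothetical n from an individually randomized design
print(round(deff, 2), ceil(n_individual * deff))  # → 1.95 250
```

Even a modest ICC nearly doubles the required sample size here, which is why a rough cluster-size estimate is worth obtaining before planning the larger study.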

A goal of this commentary was to provide guidelines for testing multiple components of a pilot study ( 2 ), a likely strategy for early and mid-stage investigators conducting studies as part of a training grant or center. Guidelines on recruitment feasibility are also available ( 43 ), including issues faced when studying disparities populations.

Estimation issues for group differences in outcome measures, as well as in process indicators such as completion or adherence rates, were discussed, and it was demonstrated that both will have large CIs with small sample sizes. If a goal of a pilot study is to estimate group differences, this objective should be stated clearly and the requisite sample sizes specified, often as large as 70 to 100 per group. A typical pilot study with 30 respondents per group is too small to provide reasonable power or precision. It has thus been argued that only counts, means, and percentages of feasibility outcomes should be calculated and later compared with (albeit subjective) thresholds specified a priori, such as achieving a retention rate of at least 80%.

It has been suggested that indicators of feasibility should be stated in terms of “clear quantitative benchmarks” or progression criteria by which successful feasibility is judged. For example, NCCIH guidelines suggest adherence benchmarks such as “at least 70% of participants in each arm will attend at least 8 of 12 scheduled group sessions” ( 6 ). For testing the feasibility of methods to reach diverse populations these data may be used to modify the methods rather than as strict criteria for progression to a full-scale study. For example, some research has shown that a trial can be effective with fewer sessions as long as key sessions are attended.
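A benchmark of this kind reduces to a simple computation over attendance counts; a sketch with invented data (the 8-of-12 and 70% values follow the NCCIH example above):

```python
def meets_adherence_benchmark(sessions_attended, required_sessions=8, threshold=0.70):
    """Proportion of participants attending at least `required_sessions`,
    compared against the a priori benchmark `threshold`."""
    met = sum(s >= required_sessions for s in sessions_attended)
    rate = met / len(sessions_attended)
    return rate, rate >= threshold

# Hypothetical attendance counts (out of 12 scheduled sessions) for one arm
attendance = [12, 10, 9, 8, 8, 7, 11, 12, 6, 10]
print(meets_adherence_benchmark(attendance))  # → (0.8, True)
```

In a disparities-focused pilot, a result just below threshold might prompt modifying the delivery methods rather than halting progression, as discussed above.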


Several indicators that can be examined in pilot feasibility studies include recruitment, retention, intervention fidelity, acceptability, adherence, and engagement. Additional indicators include randomization algorithms, capability to merge data, reliability of measures, interrater reliability of assessors, design features such as cluster sizes, and specification of an MID if one exists. As demonstrated in this commentary, most pilot studies should not be used to estimate effect sizes, provide power calculations for statistical tests or perform exploratory analyses of efficacy. It is hoped that these guidelines may be useful to those planning pilot/feasibility studies preparatory to a larger-scale study.

Supplementary Material


This article was a collaboration of the Analytic Cores from several National Institute on Aging Centers: Resource Centers for Minority Aging Research (UCSF, grant number 2P30AG015272-21, Karliner; UCLA, grant number P30-AG021684, Mangione; and University of Texas, grant number P30AG059301, Markides), an Alzheimer’s Disease - RCMAR Center (Columbia University, grant number 1P30AG059303, Manly, Luchsinger) an Edward R. Roybal Translational Research Center (Cornell University, grant number 5P30AG022845, Reid, Pillemer and Wethington), and the Measurement Methods and Analysis Core of a Claude D. Pepper Older Americans Independence Center (National Institute on Aging, 1P30AG028741, Siu). These funding agencies played no role in the writing of this manuscript. XY is supported by a research career development award (K12HD052023: Building Interdisciplinary Research Careers in Women’s Health Program-BIRCWH; Berenson, PI) from the National Institutes of Health/Office of the Director (OD)/National Institute of Allergy and Infectious Diseases (NIAID), and Eunice Kennedy Shriver National Institute of Child Health & Human Development (NICHD).

Conflict of Interest: The authors have no conflicts of interest.


How Pew Research Center will report on generations moving forward

Journalists, researchers and the public often look at society through the lens of generation, using terms like Millennial or Gen Z to describe groups of similarly aged people. This approach can help readers see themselves in the data and assess where we are and where we’re headed as a country.

Pew Research Center has been at the forefront of generational research over the years, telling the story of Millennials as they came of age politically and as they moved more firmly into adult life. In recent years, we’ve also been eager to learn about Gen Z as the leading edge of this generation moves into adulthood.

But generational research has become a crowded arena. The field has been flooded with content that’s often sold as research but is more like clickbait or marketing mythology. There’s also been a growing chorus of criticism about generational research and generational labels in particular.

Recently, as we were preparing to embark on a major research project related to Gen Z, we decided to take a step back and consider how we can study generations in a way that aligns with our values of accuracy, rigor and providing a foundation of facts that enriches the public dialogue.

A typical generation spans 15 to 18 years. As many critics of generational research point out, there is great diversity of thought, experience and behavior within generations.

We set out on a yearlong process of assessing the landscape of generational research. We spoke with experts from outside Pew Research Center, including those who have been publicly critical of our generational analysis, to get their take on the pros and cons of this type of work. We invested in methodological testing to determine whether we could compare findings from our earlier telephone surveys to the online ones we’re conducting now. And we experimented with higher-level statistical analyses that would allow us to isolate the effect of generation.

What emerged from this process was a set of clear guidelines that will help frame our approach going forward. Many of these are principles we’ve always adhered to, but others will require us to change the way we’ve been doing things in recent years.

Here’s a short overview of how we’ll approach generational research in the future:

We’ll only do generational analysis when we have historical data that allows us to compare generations at similar stages of life. When comparing generations, it’s crucial to control for age. In other words, researchers need to look at each generation or age cohort at a similar point in the life cycle. (“Age cohort” is a fancy way of referring to a group of people who were born around the same time.)

When doing this kind of research, the question isn’t whether young adults today are different from middle-aged or older adults today. The question is whether young adults today are different from young adults at some specific point in the past.

To answer this question, it’s necessary to have data that’s been collected over a considerable amount of time – think decades. Standard surveys don’t allow for this type of analysis. We can look at differences across age groups, but we can’t compare age groups over time.

Another complication is that the surveys we conducted 20 or 30 years ago aren’t usually comparable enough to the surveys we’re doing today. Our earlier surveys were done over the phone, and we’ve since transitioned to our nationally representative online survey panel, the American Trends Panel. Our internal testing showed that on many topics, respondents answer questions differently depending on the way they’re being interviewed. So we can’t use most of our surveys from the late 1980s and early 2000s to compare Gen Z with Millennials and Gen Xers at a similar stage of life.

This means that most generational analysis we do will use datasets that have employed similar methodologies over a long period of time, such as surveys from the U.S. Census Bureau. A good example is our 2020 report on Millennial families, which used census data going back to the late 1960s. The report showed that Millennials are marrying and forming families at a much different pace than the generations that came before them.

Even when we have historical data, we will attempt to control for other factors beyond age in making generational comparisons. If we accept that there are real differences across generations, we’re basically saying that people who were born around the same time share certain attitudes or beliefs – and that their views have been influenced by external forces that uniquely shaped them during their formative years. Those forces may have been social changes, economic circumstances, technological advances or political movements.

The tricky part is isolating those forces from events or circumstances that have affected all age groups, not just one generation. These are often called “period effects.” An example of a period effect is the Watergate scandal, which drove down trust in government among all age groups. Differences in trust across age groups in the wake of Watergate shouldn’t be attributed to the outsize impact that event had on one age group or another, because the change occurred across the board.

Changing demographics also may play a role in patterns that might at first seem like generational differences. We know that the United States has become more racially and ethnically diverse in recent decades, and that race and ethnicity are linked with certain key social and political views. When we see that younger adults have different views than their older counterparts, it may be driven by their demographic traits rather than the fact that they belong to a particular generation.

Controlling for these factors can involve complicated statistical analysis that helps determine whether the differences we see across age groups are indeed due to generation or not. This additional step adds rigor to the process. Unfortunately, it’s often absent from current discussions about Gen Z, Millennials and other generations.
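One simple form of that statistical control is stratification: compare cohorts within levels of a demographic trait rather than overall. The toy data below is entirely made up for illustration; it shows a raw gap between two cohorts that vanishes once responses are compared within the same demographic group, meaning the apparent "generational" difference was really a compositional one.

```python
from statistics import mean

# (cohort, demographic_group, supports_policy) -- synthetic rows only.
rows = [
    ("younger", "A", 1), ("younger", "A", 1), ("younger", "B", 0),
    ("older",   "A", 1), ("older",   "B", 0), ("older",   "B", 0),
]

def support_rate(cohort, group=None):
    """Share supporting the policy, optionally within one demographic group."""
    vals = [s for c, g, s in rows
            if c == cohort and (group is None or g == group)]
    return mean(vals)

# The raw comparison suggests a cohort difference (2/3 vs. 1/3)...
raw_gap = support_rate("younger") - support_rate("older")

# ...but within each demographic group the cohorts look identical:
gap_in_A = support_rate("younger", "A") - support_rate("older", "A")
gap_in_B = support_rate("younger", "B") - support_rate("older", "B")
```

In practice this is done with regression models over many controls at once, but the logic is the same: the cohort effect is whatever difference survives after holding the other traits fixed.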

When we can’t do generational analysis, we still see value in looking at differences by age and will do so where it makes sense. Age is one of the most common predictors of differences in attitudes and behaviors. And even if age gaps aren’t rooted in generational differences, they can still be illuminating. They help us understand how people across the age spectrum are responding to key trends, technological breakthroughs and historical events.

Each stage of life comes with a unique set of experiences. Young adults are often at the leading edge of changing attitudes on emerging social trends. Take views on same-sex marriage, for example, or attitudes about gender identity.

Many middle-aged adults, in turn, face the challenge of raising children while also providing care and support to their aging parents. And older adults have their own obstacles and opportunities. All of these stories – rooted in the life cycle, not in generations – are important and compelling, and we can tell them by analyzing our surveys at any given point in time.

When we do have the data to study groups of similarly aged people over time, we won’t always default to using the standard generational definitions and labels. While generational labels are simple and catchy, there are other ways to analyze age cohorts. For example, some observers have suggested grouping people by the decade in which they were born. This would create narrower cohorts in which the members may share more in common. People could also be grouped relative to their age during key historical events (such as the Great Recession or the COVID-19 pandemic) or technological innovations (like the invention of the iPhone).

By choosing not to use the standard generational labels when they’re not appropriate, we can avoid reinforcing harmful stereotypes or oversimplifying people’s complex lived experiences.

Existing generational definitions also may be too broad and arbitrary to capture differences that exist among narrower cohorts. A typical generation spans 15 to 18 years. As many critics of generational research point out, there is great diversity of thought, experience and behavior within generations. The key is to pick a lens that’s most appropriate for the research question that’s being studied. If we’re looking at political views and how they’ve shifted over time, for example, we might group people together according to the first presidential election in which they were eligible to vote.
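As a sketch of these alternative groupings, the helpers below assign respondents to a birth-decade cohort and to the first presidential election in which they were old enough to vote. This is a simplified rule (eligibility at age 18, election years as multiples of four, exact birthdays and election dates ignored), and the function names are illustrative, not Pew's:

```python
def birth_decade(birth_year: int) -> int:
    """Cohort label: the decade of birth, e.g. 1987 -> 1980."""
    return (birth_year // 10) * 10

def first_eligible_election(birth_year: int) -> int:
    """First U.S. presidential election year (a multiple of 4) in which
    a person is at least 18. Simplified: ignores exact birth and
    election dates."""
    year_turning_18 = birth_year + 18
    return year_turning_18 + (-year_turning_18) % 4

# Two alternative cohort labels for a few synthetic birth years.
cohorts = {by: (birth_decade(by), first_eligible_election(by))
           for by in (1965, 1987, 1996, 2004)}
```

Either grouping yields narrower, more interpretable cohorts than a 15- to 18-year generation, and the right one depends on the research question at hand.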

With these considerations in mind, our audiences should not expect to see a lot of new research coming out of Pew Research Center that uses the generational lens. We’ll only talk about generations when it adds value, advances important national debates and highlights meaningful societal trends.


Kim Parker is director of social trends research at Pew Research Center.




ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center


