
Biology library

Course: Biology library > Unit 1: The scientific method

  • Controlled experiments
  • The scientific method and experimental design

Introduction

  • Make an observation.
  • Ask a question.
  • Form a hypothesis, or testable explanation.
  • Make a prediction based on the hypothesis.
  • Test the prediction.
  • Iterate: use the results to make new hypotheses or predictions.

Scientific method example: Failure to toast

1. Make an observation.

  • Observation: the toaster won't toast.

2. Ask a question.

  • Question: Why won't my toaster toast?

3. Propose a hypothesis.

  • Hypothesis: Maybe the outlet is broken.

4. Make predictions.

  • Prediction: If I plug the toaster into a different outlet, then it will toast the bread.

5. Test the predictions.

  • Test of prediction: Plug the toaster into a different outlet and try again.
  • If the toaster does toast, then the hypothesis is supported—likely correct.
  • If the toaster doesn't toast, then the hypothesis is not supported—likely wrong.

6. Iterate.

  • Iteration time!
  • If the hypothesis was supported, we might do additional tests to confirm it, or revise it to be more specific. For instance, we might investigate why the outlet is broken.
  • If the hypothesis was not supported, we would come up with a new hypothesis. For instance, the next hypothesis might be that there's a broken wire in the toaster.
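The toaster walkthrough above is essentially a loop: propose a hypothesis, derive a prediction, test it, and iterate until a hypothesis is supported. Here is a minimal Python sketch of that loop; the `diagnose` helper and the stand-in test results are hypothetical, purely for illustration:

```python
# A toy model of the iterate step: try hypotheses until one is supported.
# Each hypothesis pairs a testable explanation with a test of its prediction.

def diagnose(hypotheses):
    """Run each hypothesis's test; return the first supported explanation."""
    for explanation, test in hypotheses:
        if test():  # "If the toaster does toast, the hypothesis is supported."
            return explanation
    return None  # No hypothesis supported: time to propose new ones.

# Hypothetical observations standing in for real experiments.
outlet_is_broken = True

hypotheses = [
    ("Maybe the outlet is broken.", lambda: outlet_is_broken),
    ("Maybe there's a broken wire in the toaster.", lambda: True),
]

print(diagnose(hypotheses))  # prints the first supported hypothesis
```

In a real investigation, each lambda would be an actual experiment, and an unsupported run would feed back into forming new, more specific hypotheses.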


Write a Critical Review of a Scientific Journal Article

1. Identify how and why the research was carried out.
2. Establish the research context.
3. Evaluate the research.
4. Establish the significance of the research.


Read the article(s) carefully and use the questions below to help you identify how and why the research was carried out. Look at the following sections: 

Introduction

  • What was the objective of the study?
  • What methods were used to accomplish this purpose (e.g., systematic recording of observations, analysis and evaluation of published research, assessment of theory, etc.)?

Methods

  • What techniques were used and how was each technique performed?
  • What kind of data can be obtained using each technique?
  • How are such data interpreted?
  • What kind of information is produced by using the technique?

Results

  • What objective evidence was obtained from the authors’ efforts (observations, measurements, etc.)?
  • What were the results of the study?
  • How was each technique used to obtain each result?
  • What statistical tests were used to evaluate the significance of the conclusions based on numeric or graphic data?

Discussion

  • How did each result contribute to answering the question or testing the hypothesis raised in the introduction?
  • How were the results interpreted? How were they related to the original problem (authors’ view of evidence rather than objective findings)?
  • Were the authors able to answer the question (test the hypothesis) raised?
  • Did the research provide new factual information, a new understanding of a phenomenon in the field, or a new research technique?
  • How was the significance of the work described?
  • Do the authors relate the findings of the study to literature in the field?
  • Did the reported observations or interpretations support or refute observations or interpretations made by other researchers?

These questions were adapted from the following sources: Kuyper, B.J. (1991). Bringing up scientists in the art of critiquing research. Bioscience 41(4), 248-250. Wood, J.M. (2003). Research Lab Guide. MICR*3260 Microbial Adaptation and Development Web Site. Retrieved July 31, 2006.

Once you are familiar with the article, you can establish the research context by asking the following questions:

  • Who conducted the research? What were/are their interests?
  • When and where was the research conducted?
  • Why did the authors do this research?
  • Was this research pertinent only within the authors’ geographic locale, or did it have broader (even global) relevance?
  • Were many other laboratories pursuing related research when the reported work was done? If so, why?
  • For experimental research, what funding sources met the costs of the research?
  • On what prior observations was the research based? What was and was not known at the time?
  • How important was the research question posed by the researchers?


Remember that simply disagreeing with the material is not considered to be a critical assessment of the material.  For example, stating that the sample size is insufficient is not a critical assessment.  Describing why the sample size is insufficient for the claims being made in the study would be a critical assessment.

Use the questions below to help you evaluate the quality of the authors’ research:

Title

  • Does the title precisely state the subject of the paper?

Abstract

  • Read the statement of purpose in the abstract. Does it match the one in the introduction?

Acknowledgments

  • Could the source of the research funding have influenced the research topic or conclusions?

Introduction

  • Check the sequence of statements in the introduction. Does all the information lead coherently to the purpose of the study?

Methods

  • Review all methods in relation to the objective(s) of the study. Are the methods valid for studying the problem?
  • Check the methods for essential information. Could the study be duplicated from the methods and information given?
  • Check the methods for flaws. Is the sample selection adequate? Is the experimental design sound?
  • Check the sequence of statements in the methods. Does all the information belong there? Is the sequence of methods clear and pertinent?
  • Was there mention of ethics? Which research ethics board approved the study?

Results

  • Carefully examine the data presented in the tables and diagrams. Does the title or legend accurately describe the content?
  • Are column headings and labels accurate?
  • Are the data organized for ready comparison and interpretation? (A table should be self-explanatory, with a title that accurately and concisely describes content and column headings that accurately describe information in the cells.)
  • Review the results as presented in the text while referring to the data in the tables and diagrams. Does the text complement, and not simply repeat, data? Are there discrepancies between the results in the text and those in the tables?
  • Check all calculations and presentation of data.
  • Review the results in light of the stated objectives. Does the study reveal what the researchers intended?

Discussion

  • Does the discussion clearly address the objectives and hypotheses?
  • Check the interpretation against the results. Does the discussion merely repeat the results?
  • Does the interpretation arise logically from the data, or is it too far-fetched?
  • Have the faults, flaws, or shortcomings of the research been addressed?
  • Is the interpretation supported by other research cited in the study?
  • Does the study consider key studies in the field?
  • What is the significance of the research? Do the authors mention wider implications of the findings?
  • Is there a section on recommendations for future research? Are there other research possibilities or directions suggested?

Consider the article as a whole

  • Reread the abstract. Does it accurately summarize the article?
  • Check the structure of the article (first headings and then paragraphing). Is all the material organized under the appropriate headings? Are sections divided logically into subsections or paragraphs?
  • Are stylistic concerns, logic, clarity, and economy of expression addressed?


After you have evaluated the research, consider whether the research has been successful. Has it led to new questions being asked, or new ways of using existing knowledge? Are other researchers citing this paper?

You should consider the following questions:

  • How did other researchers view the significance of the research reported by your authors?
  • Did the research reported in your article result in the formulation of new questions or hypotheses (by the authors or by other researchers)?
  • Have other researchers subsequently supported or refuted the observations or interpretations of these authors?
  • Did the research make a significant contribution to human knowledge?
  • Did the research produce any practical applications?
  • What are the social, political, technological, and medical implications of this research?
  • How do you evaluate the significance of the research?

To answer these questions, look at review articles to find out how reviewers view this piece of research. Look at research articles and databases like Web of Science to see how other people have used this work. What range of journals have cited this article?


  • Last Updated: Jan 11, 2024 12:42 PM
  • URL: https://guides.lib.uoguelph.ca/WriteCriticalReview

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


EJIFCC, v.25(3); 2014 Oct

Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide

Jacalyn Kelly

1 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada

Tara Sadeghieh

Khosrow Adeli

2 Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada

3 Chair, Communications and Publications Division (CPD), International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), Milan, Italy

The authors declare no conflicts of interest regarding publication of this article.

Peer review has been defined as a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It functions to encourage authors to meet the accepted high standards of their discipline and to control the dissemination of research data to ensure that unwarranted claims, unacceptable interpretations or personal views are not published without prior expert review. Despite its widespread use by most journals, the peer review process has also been widely criticized for the slowness with which new findings are published and for perceived bias by the editors and/or reviewers. Within the scientific community, peer review has become an essential component of the academic writing process. It helps ensure that papers published in scientific journals answer meaningful research questions and draw accurate conclusions based on professionally executed experimentation. Submission of low-quality manuscripts has become increasingly prevalent, and peer review acts as a filter to prevent this work from reaching the scientific community. The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation and acts as a poor screen against plagiarism. Despite its downfalls, no foolproof system has yet been developed to replace peer review; however, researchers have been looking into electronic means of improving the process. Unfortunately, the recent explosion in online-only/electronic journals has led to mass publication of a large number of scientific articles with little or no peer review. This poses significant risk to advances in scientific knowledge and its future potential. The current article summarizes the peer review process, highlights the pros and cons associated with different types of peer review, and describes new methods for improving peer review.

WHAT IS PEER REVIEW AND WHAT IS ITS PURPOSE?

Peer Review is defined as “a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field” ( 1 ). Peer review is intended to serve two primary purposes. Firstly, it acts as a filter to ensure that only high quality research is published, especially in reputable journals, by determining the validity, significance and originality of the study. Secondly, peer review is intended to improve the quality of manuscripts that are deemed suitable for publication. Peer reviewers provide suggestions to authors on how to improve the quality of their manuscripts, and also identify any errors that need correcting before publication.

HISTORY OF PEER REVIEW

The concept of peer review was developed long before the scholarly journal. In fact, the peer review process is thought to have been used as a method of evaluating written work since ancient Greece ( 2 ). The peer review process was first described by a physician named Ishaq bin Ali al-Rahwi of Syria, who lived from 854-931 CE, in his book Ethics of the Physician ( 2 ). There, he stated that physicians must take notes describing the state of their patients’ medical conditions upon each visit. Following treatment, the notes were scrutinized by a local medical council to determine whether the physician had met the required standards of medical care. If the medical council deemed that the appropriate standards were not met, the physician in question could receive a lawsuit from the maltreated patient ( 2 ).

The invention of the printing press in 1453 allowed written documents to be distributed to the general public ( 3 ). At this time, it became more important to regulate the quality of the written material that became publicly available, and editing by peers increased in prevalence. In 1620, Francis Bacon wrote the work Novum Organum, where he described what eventually became known as the first universal method for generating and assessing new science ( 3 ). His work was instrumental in shaping the Scientific Method ( 3 ). In 1665, the French Journal des sçavans and the English Philosophical Transactions of the Royal Society were the first scientific journals to systematically publish research results ( 4 ). Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process in 1665 ( 5 ), however, it is important to note that peer review was initially introduced to help editors decide which manuscripts to publish in their journals, and at that time it did not serve to ensure the validity of the research ( 6 ). It did not take long for the peer review process to evolve, and shortly thereafter papers were distributed to reviewers with the intent of authenticating the integrity of the research study before publication. The Royal Society of Edinburgh adhered to the following peer review process, published in their Medical Essays and Observations in 1731: “Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.” ( 7 ). The Royal Society of London adopted this review procedure in 1752 and developed the “Committee on Papers” to review manuscripts before they were published in Philosophical Transactions ( 6 ).

Peer review in the systematized and institutionalized form has developed immensely since the Second World War, at least partly due to the large increase in scientific research during this period ( 7 ). It is now used not only to ensure that a scientific manuscript is experimentally and ethically sound, but also to determine which papers sufficiently meet the journal’s standards of quality and originality before publication. Peer review is now standard practice by most credible scientific journals, and is an essential part of determining the credibility and quality of work submitted.

IMPACT OF THE PEER REVIEW PROCESS

Peer review has become the foundation of the scholarly publication system because it effectively subjects an author’s work to the scrutiny of other experts in the field. Thus, it encourages authors to strive to produce high quality research that will advance the field. Peer review also supports and maintains integrity and authenticity in the advancement of science. A scientific hypothesis or statement is generally not accepted by the academic community unless it has been published in a peer-reviewed journal ( 8 ). The Institute for Scientific Information (ISI) only considers journals that are peer-reviewed as candidates to receive Impact Factors. Peer review is a well-established process which has been a formal part of scientific communication for over 300 years.

OVERVIEW OF THE PEER REVIEW PROCESS

The peer review process begins when a scientist completes a research study and writes a manuscript that describes the purpose, experimental design, results, and conclusions of the study. The scientist then submits this paper to a suitable journal that specializes in a relevant research field, a step referred to as pre-submission. The editors of the journal will review the paper to ensure that the subject matter is in line with that of the journal, and that it fits with the editorial platform. Very few papers pass this initial evaluation. If the journal editors feel the paper sufficiently meets these requirements and is written by a credible source, they will send the paper to accomplished researchers in the field for a formal peer review. Peer reviewers are also known as referees (this process is summarized in Figure 1 ). The role of the editor is to select the most appropriate manuscripts for the journal, and to implement and monitor the peer review process. Editors must ensure that peer reviews are conducted fairly, and in an effective and timely manner. They must also ensure that there are no conflicts of interest involved in the peer review process.

[Figure 1: Overview of the review process]

When a reviewer is provided with a paper, he or she reads it carefully and scrutinizes it to evaluate the validity of the science, the quality of the experimental design, and the appropriateness of the methods used. The reviewer also assesses the significance of the research, and judges whether the work will contribute to advancement in the field by evaluating the importance of the findings, and determining the originality of the research. Additionally, reviewers identify any scientific errors and references that are missing or incorrect. Peer reviewers give recommendations to the editor regarding whether the paper should be accepted, rejected, or improved before publication in the journal. The editor will mediate author-referee discussion in order to clarify the priority of certain referee requests, suggest areas that can be strengthened, and overrule reviewer recommendations that are beyond the study’s scope ( 9 ). If the paper is accepted, as per suggestion by the peer reviewer, the paper goes into the production stage, where it is tweaked and formatted by the editors, and finally published in the scientific journal. An overview of the review process is presented in Figure 1 .
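As a rough summary, the editorial flow described above (initial screening by editors, referee recommendations, then an editorial decision) can be modeled as a small decision function. This is a simplified sketch under assumed recommendation labels ("accept", "reject", "improve"), not any journal's actual policy:

```python
# Toy model of the editorial flow: editor screening, then referee
# recommendations, then a decision (a simplification of the process above).

def editorial_decision(in_scope, referee_recommendations):
    """Return a manuscript's fate given scope fit and referee verdicts."""
    if not in_scope:
        return "desk reject"   # fails the editors' initial evaluation
    if all(r == "accept" for r in referee_recommendations):
        return "accept"        # goes on to the production stage
    if any(r == "reject" for r in referee_recommendations):
        return "reject"
    return "revise"            # mixed/"improve" verdicts go back to the author

print(editorial_decision(True, ["accept", "improve"]))  # prints "revise"
```

In practice the editor mediates rather than mechanically aggregating verdicts, weighing referee requests and sometimes overruling recommendations, as the paragraph above notes.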

WHO CONDUCTS REVIEWS?

Peer reviews are conducted by scientific experts with specialized knowledge on the content of the manuscript, as well as by scientists with a more general knowledge base. Peer reviewers can be anyone who has competence and expertise in the subject areas that the journal covers. Reviewers range from early-career researchers to senior experts in the field. Often, the younger reviewers are the most responsive and deliver the best quality reviews, though this is not always the case. On average, a reviewer will conduct approximately eight reviews per year, according to a study on peer review by the Publishing Research Consortium (PRC) ( 7 ). Journals will often have a pool of reviewers with diverse backgrounds to allow for many different perspectives. They will also keep a rather large reviewer bank, so that reviewers do not get burnt out, overwhelmed or time constrained from reviewing multiple articles simultaneously.

WHY DO REVIEWERS REVIEW?

Referees are typically not paid to conduct peer reviews and the process takes considerable effort, so the question is raised as to what incentive referees have to review at all. Some feel an academic duty to perform reviews, and are of the mentality that if their peers are expected to review their papers, then they should review the work of their peers as well. Reviewers may also have personal contacts with editors, and may want to assist as much as possible. Others review to keep up-to-date with the latest developments in their field, and reading new scientific papers is an effective way to do so. Some scientists use peer review as an opportunity to advance their own research as it stimulates new ideas and allows them to read about new experimental techniques. Other reviewers are keen on building associations with prestigious journals and editors and becoming part of their community, as sometimes reviewers who show dedication to the journal are later hired as editors. Some scientists see peer review as a chance to become aware of the latest research before their peers, and thus be first to develop new insights from the material. Finally, in terms of career development, peer reviewing can be desirable as it is often noted on one’s resume or CV. Many institutions consider a researcher’s involvement in peer review when assessing their performance for promotions ( 11 ). Peer reviewing can also be an effective way for a scientist to show their superiors that they are committed to their scientific field ( 5 ).

ARE REVIEWERS KEEN TO REVIEW?

A 2009 international survey of 4000 peer reviewers conducted by the charity Sense About Science at the British Science Festival at the University of Surrey, found that 90% of reviewers were keen to peer review ( 12 ). One third of respondents to the survey said they were happy to review up to five papers per year, and an additional one third of respondents were happy to review up to ten.

HOW LONG DOES IT TAKE TO REVIEW ONE PAPER?

On average, it takes approximately six hours to review one paper ( 12 ), however, this number may vary greatly depending on the content of the paper and the nature of the peer reviewer. One in every 100 participants in the “Sense About Science” survey claims to have taken more than 100 hours to review their last paper ( 12 ).

HOW TO DETERMINE IF A JOURNAL IS PEER REVIEWED

Ulrichsweb is a directory that provides information on over 300,000 periodicals, including information regarding which journals are peer reviewed ( 13 ). After logging into the system using an institutional login (e.g., from the University of Toronto), search terms, journal titles or ISSN numbers can be entered into the search bar. The database provides the title, publisher, and country of origin of the journal, and indicates whether the journal is still actively publishing. The black book symbol (labelled ‘refereed’) reveals that the journal is peer reviewed.

THE EVALUATION CRITERIA FOR PEER REVIEW OF SCIENTIFIC PAPERS

As previously mentioned, when a reviewer receives a scientific manuscript, he/she will first determine if the subject matter is well suited for the content of the journal. The reviewer will then consider whether the research question is important and original, a process which may be aided by a literature scan of review articles.

Scientific papers submitted for peer review usually follow a specific structure that begins with the title, followed by the abstract, introduction, methodology, results, discussion, conclusions, and references. The title must be descriptive and include the concept and organism investigated, and potentially the variable manipulated and the systems used in the study. The peer reviewer evaluates if the title is descriptive enough, and ensures that it is clear and concise. A study by the National Association of Realtors (NAR) published by the Oxford University Press in 2006 indicated that the title of a manuscript plays a significant role in determining reader interest, as 72% of respondents said they could usually judge whether an article will be of interest to them based on the title and the author, while 13% of respondents claimed to always be able to do so ( 14 ).

The abstract is a summary of the paper, which briefly mentions the background or purpose, methods, key results, and major conclusions of the study. The peer reviewer assesses whether the abstract is sufficiently informative and if the content of the abstract is consistent with the rest of the paper. The NAR study indicated that 40% of respondents could determine whether an article would be of interest to them based on the abstract alone 60-80% of the time, while 32% could judge an article based on the abstract 80-100% of the time ( 14 ). This demonstrates that the abstract alone is often used to assess the value of an article.

The introduction of a scientific paper presents the research question in the context of what is already known about the topic, in order to identify why the question being studied is of interest to the scientific community, and what gap in knowledge the study aims to fill ( 15 ). The introduction identifies the study’s purpose and scope, briefly describes the general methods of investigation, and outlines the hypothesis and predictions ( 15 ). The peer reviewer determines whether the introduction provides sufficient background information on the research topic, and ensures that the research question and hypothesis are clearly identifiable.

The methods section describes the experimental procedures, and explains why each experiment was conducted. The methods section also includes the equipment and reagents used in the investigation. The methods section should be detailed enough that it can be used to repeat the experiment ( 15 ). Methods are written in the past tense and in the active voice. The peer reviewer assesses whether the appropriate methods were used to answer the research question, and if they were written with sufficient detail. If information is missing from the methods section, it is the peer reviewer’s job to identify what details need to be added.

The results section is where the outcomes of the experiment and trends in the data are explained without judgement, bias or interpretation ( 15 ). This section can include statistical tests performed on the data, as well as figures and tables in addition to the text. The peer reviewer ensures that the results are described with sufficient detail, and determines their credibility. Reviewers also confirm that the text is consistent with the information presented in tables and figures, and that all figures and tables included are important and relevant ( 15 ). The peer reviewer will also make sure that table and figure captions are appropriate both contextually and in length, and that tables and figures present the data accurately.

The discussion section is where the data is analyzed. Here, the results are interpreted and related to past studies ( 15 ). The discussion describes the meaning and significance of the results in terms of the research question and hypothesis, and states whether the hypothesis was supported or rejected. This section may also provide possible explanations for unusual results and suggestions for future research ( 15 ). The discussion should end with a conclusions section that summarizes the major findings of the investigation. The peer reviewer determines whether the discussion is clear and focused, and whether the conclusions are an appropriate interpretation of the results. Reviewers also ensure that the discussion addresses the limitations of the study, any anomalies in the results, the relationship of the study to previous research, and the theoretical implications and practical applications of the study.

The references are found at the end of the paper, and list all of the information sources cited in the text to describe the background, methods, and/or interpret results. Depending on the citation method used, the references are listed in alphabetical order according to author last name, or numbered according to the order in which they appear in the paper. The peer reviewer ensures that references are used appropriately, cited accurately, formatted correctly, and that none are missing.

Finally, the peer reviewer determines whether the paper is clearly written and if the content seems logical. After thoroughly reading through the entire manuscript, they determine whether it meets the journal’s standards for publication, and whether it falls within the top 25% of papers in its field ( 16 ) to determine priority for publication. An overview of what a peer reviewer looks for when evaluating a manuscript, in order of importance, is presented in Figure 2.

[Figure 2: How a peer reviewer evaluates a manuscript]

To increase the chance of success in the peer review process, the author must ensure that the paper fully complies with the journal guidelines before submission. The author must also be open to criticism and suggested revisions, and learn from mistakes made in previous submissions.

ADVANTAGES AND DISADVANTAGES OF THE DIFFERENT TYPES OF PEER REVIEW

The peer review process is generally conducted in one of three ways: open review, single-blind review, or double-blind review. In an open review, both the author of the paper and the peer reviewer know one another’s identity. Alternatively, in single-blind review, the reviewer’s identity is kept private, but the author’s identity is revealed to the reviewer. In double-blind review, the identities of both the reviewer and author are kept anonymous. Open peer review is advantageous in that it prevents the reviewer from leaving malicious comments, being careless, or procrastinating completion of the review ( 2 ). It encourages reviewers to be open and honest without being disrespectful. Open reviewing also discourages plagiarism amongst authors ( 2 ). On the other hand, open peer review can also prevent reviewers from being honest for fear of developing bad rapport with the author. The reviewer may withhold or tone down their criticisms in order to be polite ( 2 ). This is especially true when younger reviewers are given a more esteemed author’s work, in which case the reviewer may be hesitant to provide criticism for fear that it will dampen their relationship with a superior ( 2 ). According to the Sense About Science survey, editors find that completely open reviewing decreases the number of people willing to participate, and leads to reviews of little value ( 12 ). In the aforementioned study by the PRC, only 23% of authors surveyed had experience with open peer review ( 7 ).

Single-blind peer review is by far the most common. In the PRC study, 85% of authors surveyed had experience with single-blind peer review ( 7 ). This method is advantageous as the reviewer is more likely to provide honest feedback when their identity is concealed ( 2 ). This allows the reviewer to make independent decisions without the influence of the author ( 2 ). The main disadvantage of reviewer anonymity, however, is that reviewers who receive manuscripts on subjects similar to their own research may be tempted to delay completing the review in order to publish their own data first ( 2 ).

Double-blind peer review is advantageous as it prevents the reviewer from being biased against the author based on their country of origin or previous work ( 2 ). This allows the paper to be judged based on the quality of the content, rather than the reputation of the author. The Sense About Science survey indicates that 76% of researchers think double-blind peer review is a good idea ( 12 ), and the PRC survey indicates that 45% of authors have had experience with double-blind peer review ( 7 ). The disadvantage of double-blind peer review is that, especially in niche areas of research, it can sometimes be easy for the reviewer to determine the identity of the author based on writing style, subject matter or self-citation, and thus, impart bias ( 2 ).

Masking the author’s identity from peer reviewers, as is the case in double-blind review, is generally thought to minimize bias and maintain review quality. A study by Justice et al. in 1998 investigated whether masking author identity affected the quality of the review ( 17 ). One hundred and eighteen manuscripts were randomized; 26 were peer reviewed as normal, and 92 were moved into the ‘intervention’ arm, where editor quality assessments were completed for 77 manuscripts and author quality assessments were completed for 40 manuscripts ( 17 ). There was no perceived difference in quality between the masked and unmasked reviews. Additionally, the masking itself was often unsuccessful, especially with well-known authors ( 17 ). However, a previous study conducted by McNutt et al. had different results ( 18 ). In this case, blinding was successful 73% of the time, and they found that when author identity was masked, the quality of review was slightly higher ( 18 ). Although Justice et al. argued that this difference was too small to be consequential, their study targeted only biomedical journals, and the results cannot be generalized to journals of a different subject matter ( 17 ). Additionally, there were problems masking the identities of well-known authors, introducing a flaw in the methods. Regardless, Justice et al. concluded that masking author identity from reviewers may not improve review quality ( 17 ).

In addition to open, single-blind and double-blind peer review, there are two experimental forms of peer review. In some cases, following publication, papers may be subjected to post-publication peer review. As many papers are now published online, the scientific community has the opportunity to comment on these papers, engage in online discussions and post a formal review. For example, online publishers PLOS and BioMed Central have enabled scientists to post comments on published papers if they are registered users of the site ( 10 ). Philica is another journal launched with this experimental form of peer review. Only 8% of authors surveyed in the PRC study had experience with post-publication review ( 7 ). Another experimental form of peer review called Dynamic Peer Review has also emerged. Dynamic peer review is conducted on websites such as Naboj, which allow scientists to conduct peer reviews on articles in the preprint media ( 19 ). The peer review is conducted on repositories and is a continuous process, which allows the public to see both the article and the reviews as the article is being developed ( 19 ). Dynamic peer review helps prevent plagiarism as the scientific community will already be familiar with the work before the peer reviewed version appears in print ( 19 ). Dynamic review also reduces the time lag between manuscript submission and publishing. An example of a preprint server is the ‘arXiv’ developed by Paul Ginsparg in 1991, which is used primarily by physicists ( 19 ). These alternative forms of peer review are experimental and not yet established, whereas traditional peer review is time-tested and still highly utilized. All methods of peer review have their advantages and deficiencies, and all are prone to error.

PEER REVIEW OF OPEN ACCESS JOURNALS

Open access (OA) journals are becoming increasingly popular as they allow the potential for widespread distribution of publications in a timely manner ( 20 ). Nevertheless, there can be issues regarding the peer review process of open access journals. In a study published in Science in 2013, John Bohannon submitted 304 slightly different versions of a fictional scientific paper (written by a fake author, working out of a non-existent institution) to a selected group of OA journals. This study was performed in order to determine whether papers submitted to OA journals are properly reviewed before publication in comparison to subscription-based journals. The journals in this study were selected from the Directory of Open Access Journals (DOAJ) and Beall’s List, a list of journals which are potentially predatory, and all required a fee for publishing ( 21 ). Of the 304 journals, 157 accepted the fake paper, suggesting that acceptance was based on financial interest rather than the quality of the article itself, while 98 journals promptly rejected the fakes ( 21 ). Although this study highlights useful information on the problems associated with lower quality publishers that do not have an effective peer review system in place, the article also generalizes the study results to all OA journals, which can be detrimental to the general perception of OA journals. There were two limitations of the study that made it impossible to accurately determine the relationship between peer review and OA journals: 1) there was no control group (subscription-based journals), and 2) the fake papers were sent to a non-randomized selection of journals, resulting in bias.

JOURNAL ACCEPTANCE RATES

Based on a recent survey, the average acceptance rate for papers submitted to scientific journals is about 50% ( 7 ). Of all submitted manuscripts, 20% are rejected prior to review and 30% are rejected following review ( 7 ). Of the 50% that are accepted, 41% are accepted on the condition of revision, while only 9% are accepted without any request for revision ( 7 ).
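The survey percentages above can be turned into concrete counts with a short script. The batch size of 1,000 submissions below is a hypothetical illustration, not data from the survey:

```python
# Illustrative breakdown of review outcomes for a hypothetical batch of
# 1,000 submissions, using the survey percentages cited above ( 7 ).
total = 1000

desk_rejected = round(total * 0.20)          # rejected prior to review
rejected_after_review = round(total * 0.30)  # rejected following review
accepted_with_revision = round(total * 0.41)
accepted_outright = round(total * 0.09)

# The four outcomes should account for every submission.
assert (desk_rejected + rejected_after_review
        + accepted_with_revision + accepted_outright) == total

print(desk_rejected, rejected_after_review,
      accepted_with_revision, accepted_outright)
```

For 1,000 submissions this yields 200 desk rejections, 300 rejections after review, 410 conditional acceptances, and 90 outright acceptances.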

SATISFACTION WITH THE PEER REVIEW SYSTEM

Based on a recent survey by the PRC, 64% of academics are satisfied with the current system of peer review, and only 12% claimed to be ‘dissatisfied’ ( 7 ). The large majority, 85%, agreed with the statement that ‘scientific communication is greatly helped by peer review’ ( 7 ). There was a similarly high level of support (83%) for the idea that peer review ‘provides control in scientific communication’ ( 7 ).

HOW TO PEER REVIEW EFFECTIVELY

The following are ten tips on how to be an effective peer reviewer as indicated by Brian Lucey, an expert on the subject ( 22 ):

1) Be professional

Peer review is a mutual responsibility among fellow scientists, and scientists are expected, as part of the academic community, to take part in peer review. If one is to expect others to review their work, they should commit to reviewing the work of others as well, and put effort into it.

2) Be pleasant

If the paper is of low quality, suggest that it be rejected, but do not leave ad hominem comments. There is no benefit to being ruthless.

3) Read the invite

When a journal emails a scientist to invite them to conduct a peer review, it will usually provide a link to either accept or decline the invitation. Do not reply to the email; respond through the link.

4) Be helpful

Suggest how the authors can overcome the shortcomings in their paper. A review should guide the author on what is good and what needs work from the reviewer’s perspective.

5) Be scientific

The peer reviewer plays the role of a scientific peer, not an editor for proofreading or decision-making. Don’t fill a review with comments on editorial and typographic issues. Instead, focus on adding value with scientific knowledge and commenting on the credibility of the research conducted and conclusions drawn. If the paper has a lot of typographical errors, suggest that it be professionally proof edited as part of the review.

6) Be timely

Stick to the timeline given when conducting a peer review. Editors track who is reviewing what and when and will know if someone is late on completing a review. It is important to be timely both out of respect for the journal and the author, as well as to not develop a reputation of being late for review deadlines.

7) Be realistic

The peer reviewer must be realistic about the work presented, the changes they suggest, and their role. If reviewers set the bar too high by proposing overly ambitious changes, editors must override them.

8) Be empathetic

Ensure that the review is scientific, helpful and courteous. Be sensitive and respectful with word choice and tone in a review.

9) Be open

Remember that both specialists and generalists can provide valuable insight when peer reviewing. Editors will try to get both specialised and general reviewers for any particular paper to allow for different perspectives. If someone is asked to review, the editor has determined they have a valid and useful role to play, even if the paper is not in their area of expertise.

10) Be organised

A review requires structure and logical flow. A reviewer should proofread their review before submitting it for structural, grammatical and spelling errors as well as for clarity. Most publishers provide short guides on structuring a peer review on their website. Begin with an overview of the proposed improvements; then provide feedback on the paper structure, the quality of data sources and methods of investigation used, the logical flow of argument, and the validity of conclusions drawn. Then provide feedback on style, voice and lexical concerns, with suggestions on how to improve.

In addition, the American Physiological Society (APS) recommends in its Peer Review 101 Handout that peer reviewers should put themselves in both the editor’s and author’s shoes to ensure that they provide what both the editor and the author need and expect ( 11 ). To please the editor, the reviewer should ensure that the peer review is completed on time, and that it provides clear explanations to back up recommendations. To be helpful to the author, the reviewer must ensure that their feedback is constructive. It is suggested that the reviewer take time to think about the paper; they should read it once, wait at least a day, and then re-read it before writing the review ( 11 ). The APS also suggests that graduate students and researchers pay attention to how peer reviewers edit their work, as well as to what edits they find helpful, in order to learn how to peer review effectively ( 11 ). Additionally, it is suggested that graduate students practice reviewing by editing their peers’ papers and asking a faculty member for feedback on their efforts. It is recommended that young scientists offer to peer review as often as possible in order to become skilled at the process ( 11 ). The majority of students, fellows and trainees do not get formal training in peer review, but rather learn by observing their mentors. According to the APS, one acquires experience through networking and referrals, and should therefore try to strengthen relationships with journal editors by offering to review manuscripts ( 11 ). The APS also suggests that experienced reviewers provide constructive feedback to students and junior colleagues on their peer review efforts, and encourages them to peer review to demonstrate the importance of this process in improving science ( 11 ).

The peer reviewer should only comment on areas of the manuscript that they are knowledgeable about ( 23 ). If there is any section of the manuscript they feel they are not qualified to review, they should mention this in their comments and not provide further feedback on that section. The peer reviewer is not permitted to share any part of the manuscript with a colleague (even if they may be more knowledgeable in the subject matter) without first obtaining permission from the editor ( 23 ). If a peer reviewer comes across something they are unsure of in the paper, they can consult the literature to try and gain insight. It is important for scientists to remember that if a paper can be improved by the expertise of one of their colleagues, the journal must be informed of the colleague’s help, and approval must be obtained for their colleague to read the protected document. Additionally, the colleague must be identified in the confidential comments to the editor, in order to ensure that he/she is appropriately credited for any contributions ( 23 ). It is the job of the reviewer to make sure that the colleague assisting is aware of the confidentiality of the peer review process ( 23 ). Once the review is complete, the manuscript must be destroyed and cannot be saved electronically by the reviewers ( 23 ).

COMMON ERRORS IN SCIENTIFIC PAPERS

When performing a peer review, there are some common scientific errors to look out for. Most of these errors are violations of logic and common sense: these may include contradicting statements, unwarranted conclusions, suggestion of causation when there is only support for correlation, inappropriate extrapolation, circular reasoning, or pursuit of a trivial question ( 24 ). It is also common for authors to suggest that two variables are different because the effects of one variable are statistically significant while the effects of the other variable are not, rather than directly comparing the two variables ( 24 ). Authors sometimes overlook a confounding variable and do not control for it, or forget to include important details on how their experiments were controlled or the physical state of the organisms studied ( 24 ). Another common fault is the author’s failure to define terms or use words with precision, as these practices can mislead readers ( 24 ). Jargon and/or misused terms can be a serious problem in papers. Inaccurate statements about specific citations are also a common occurrence ( 24 ). Additionally, many studies produce knowledge that can be applied to areas of science outside the scope of the original study; therefore, it is better for reviewers to look at the novelty of the idea, conclusions, data, and methodology, rather than scrutinize whether or not the paper answered the specific question at hand ( 24 ). Although it is important to recognize these points, when performing a review it is generally better practice for the peer reviewer to not focus on a checklist of things that could be wrong, but rather carefully identify the problems specific to each paper and continuously ask themselves if anything is missing ( 24 ). An extremely detailed description of how to conduct peer review effectively is presented in the paper How I Review an Original Scientific Article written by Frederic G. Hoppin, Jr.
It can be accessed through the American Physiological Society website under the Peer Review Resources section.

CRITICISM OF PEER REVIEW

A major criticism of peer review is that there is little evidence that the process actually works: that it is an effective screen for good quality scientific work, or that it improves the quality of the scientific literature. As a 2002 study published in the Journal of the American Medical Association concluded, ‘Editorial peer review, although widely used, is largely untested and its effects are uncertain’ ( 25 ). Critics also argue that peer review is not effective at detecting errors. Highlighting this point, an experiment by Godlee et al. published in the British Medical Journal (BMJ) inserted eight deliberate errors into a paper that was nearly ready for publication, and then sent the paper to 420 potential reviewers ( 7 ). Of the 420 reviewers that received the paper, 221 (53%) responded. The average number of errors spotted by reviewers was two, no reviewer spotted more than five errors, and 35 reviewers (16%) did not spot any.
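The rounded percentages quoted from the Godlee et al. experiment can be reproduced directly from the raw counts given above:

```python
# Reproducing the summary percentages from the Godlee et al. error-seeding
# experiment: 420 reviewers were sent the paper, 221 responded, and
# 35 of the respondents spotted none of the eight seeded errors.
contacted = 420
responded = 221
spotted_none = 35

response_rate = responded / contacted    # ~0.526, reported as 53%
none_rate = spotted_none / responded     # ~0.158, reported as 16%

print(f"Response rate: {response_rate:.0%}")
print(f"Spotted no errors: {none_rate:.0%}")
```

Note that the 16% figure is a fraction of the 221 respondents, not of all 420 reviewers contacted.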

Another criticism of peer review is that the process is not conducted thoroughly by scientific conferences with the goal of obtaining large numbers of submitted papers. Such conferences often accept any paper sent in, regardless of its credibility or the prevalence of errors, because the more papers they accept, the more money they can make from author registration fees ( 26 ). This misconduct was exposed in 2014 by three MIT graduate students by the names of Jeremy Stribling, Dan Aguayo and Maxwell Krohn, who developed a simple computer program called SCIgen that generates nonsense papers and presents them as scientific papers ( 26 ). Subsequently, a nonsense SCIgen paper submitted to a conference was promptly accepted. Nature recently reported that French researcher Cyril Labbé discovered that sixteen SCIgen nonsense papers had been used by the German academic publisher Springer ( 26 ). Over 100 nonsense papers generated by SCIgen were published by the US Institute of Electrical and Electronic Engineers (IEEE) ( 26 ). Both organisations have been working to remove the papers. Labbé developed a program to detect SCIgen papers and has made it freely available to ensure publishers and conference organizers do not accept nonsense work in the future. It is available at this link: http://scigendetect.on.imag.fr/main.php ( 26 ).

Additionally, peer review is often criticized for being unable to accurately detect plagiarism. However, many believe that detecting plagiarism cannot practically be included as a component of peer review. As explained by Alice Tuff, development manager at Sense About Science, ‘The vast majority of authors and reviewers think peer review should detect plagiarism (81%) but only a minority (38%) think it is capable. The academic time involved in detecting plagiarism through peer review would cause the system to grind to a halt’ ( 27 ). Publishing house Elsevier began developing electronic plagiarism tools with the help of journal editors in 2009 to help improve this issue ( 27 ).

It has also been argued that peer review has lowered research quality by limiting creativity amongst researchers. Proponents of this view claim that peer review has repressed scientists from pursuing innovative research ideas and bold research questions that have the potential to make major advances and paradigm shifts in the field, as they believe that this work will likely be rejected by their peers upon review ( 28 ). Indeed, in some cases peer review may result in rejection of innovative research, as some studies may not seem particularly strong initially, yet may be capable of yielding very interesting and useful developments when examined under different circumstances, or in the light of new information ( 28 ). Scientists that do not believe in peer review argue that the process stifles the development of ingenious ideas, and thus the release of fresh knowledge and new developments into the scientific community.

Another criticism is that the number of people competent to conduct peer review is limited compared to the vast number of papers that need reviewing. An enormous number of papers are published each year (1.3 million papers in 23,750 journals in 2006), far more than the available pool of competent peer reviewers could review ( 29 ). Thus, people who lack the required expertise to analyze the quality of a research paper are conducting reviews, and weak papers are being accepted as a result. It is now possible to publish any paper in an obscure journal that claims to be peer-reviewed, though the paper or journal itself could be substandard ( 29 ). On a similar note, the US National Library of Medicine indexes 39 journals that specialize in alternative medicine, and though they all identify themselves as “peer-reviewed”, they rarely publish any high-quality research ( 29 ). This highlights the fact that peer review of more controversial or specialized work is typically performed by people who are interested in it and hold views or opinions similar to the author’s, which can bias their review. For instance, a paper on homeopathy is likely to be reviewed by fellow practicing homeopaths, and thus is likely to be accepted as credible, though other scientists may find the paper to be nonsense ( 29 ). In some cases, papers are initially published, but their credibility is challenged at a later date and they are subsequently retracted. Retraction Watch is a website dedicated to revealing papers that have been retracted after publishing, potentially due to improper peer review ( 30 ).

Additionally, despite its many positive outcomes, peer review is also criticized for delaying the dissemination of new knowledge into the scientific community, and for being an unpaid activity that takes scientists’ time away from activities that they would otherwise prioritize, such as research and teaching, for which they are paid ( 31 ). As described by Eva Amsen, Outreach Director for F1000Research, peer review was originally developed as a means of helping editors choose which papers to publish when journals had to limit the number of papers they could print in one issue ( 32 ). However, nowadays most journals are available online, either exclusively or in addition to print, and many journals have very limited printing runs ( 32 ). Since there are no longer page limits to journals, any good work can and should be published. Consequently, being selective for the purpose of saving space in a journal is no longer a valid excuse that peer reviewers can use to reject a paper ( 32 ). However, some reviewers have used this excuse when they have personal ulterior motives, such as getting their own research published first.

RECENT INITIATIVES TOWARDS IMPROVING PEER REVIEW

F1000Research was launched in January 2013 by Faculty of 1000 as an open access journal that immediately publishes papers (after an initial check to ensure that the paper is in fact produced by a scientist and has not been plagiarised), and then conducts transparent post-publication peer review ( 32 ). F1000Research aims to prevent delays in new science reaching the academic community that are caused by prolonged publication times ( 32 ). It also aims to make peer reviewing more fair by eliminating any anonymity, which prevents reviewers from delaying the completion of a review so they can publish their own similar work first ( 32 ). F1000Research offers completely open peer review, where everything is published, including the name of the reviewers, their review reports, and the editorial decision letters ( 32 ).

PeerJ was founded by Jason Hoyt and Peter Binfield in June 2012 as an open access, peer reviewed scholarly journal for the Biological and Medical Sciences ( 33 ). PeerJ selects articles to publish based only on scientific and methodological soundness, not on subjective determinants of ‘impact’, ‘novelty’ or ‘interest’ ( 34 ). It works on a “lifetime publishing plan” model which charges scientists for publishing plans that give them lifetime rights to publish with PeerJ, rather than charging them per publication ( 34 ). PeerJ also encourages open peer review, and authors are given the option to post the full peer review history of their submission with their published article ( 34 ). PeerJ also offers a pre-print review service called PeerJ Pre-prints, in which paper drafts are reviewed before being sent to PeerJ to publish ( 34 ).

Rubriq is an independent peer review service designed by Shashi Mudunuri and Keith Collier to improve the peer review system ( 35 ). Rubriq is intended to decrease redundancy in the peer review process so that the time lost in redundant reviewing can be put back into research ( 35 ). According to Keith Collier, over 15 million hours are lost each year to redundant peer review, as papers get rejected from one journal and are subsequently submitted to a less prestigious journal where they are reviewed again ( 35 ). Authors often have to submit their manuscript to multiple journals, and are often rejected multiple times before they find the right match. This process could take months or even years ( 35 ). Rubriq makes peer review portable in order to help authors choose the journal that is best suited for their manuscript from the beginning, thus reducing the time before their paper is published ( 35 ). Rubriq operates under an author-pay model, in which the author pays a fee and their manuscript undergoes double-blind peer review by three expert academic reviewers using a standardized scorecard ( 35 ). The majority of the author’s fee goes towards a reviewer honorarium ( 35 ). The papers are also screened for plagiarism using iThenticate ( 35 ). Once the manuscript has been reviewed by the three experts, the most appropriate journal for submission is determined based on the topic and quality of the paper ( 35 ). The paper is returned to the author in 1-2 weeks with the Rubriq Report ( 35 ). The author can then submit their paper to the suggested journal with the Rubriq Report attached. The Rubriq Report will give the journal editors a much stronger incentive to consider the paper as it shows that three experts have recommended the paper to them ( 35 ). Rubriq also has its benefits for reviewers; the Rubriq scorecard gives structure to the peer review process, and thus makes it consistent and efficient, which decreases time and stress for the reviewer. 
Reviewers also receive feedback on their reviews and most significantly, they are compensated for their time ( 35 ). Journals also benefit, as they receive pre-screened papers, reducing the number of papers sent to their own reviewers, which often end up rejected ( 35 ). This can reduce reviewer fatigue, and allow only higher-quality articles to be sent to their peer reviewers ( 35 ).

According to Eva Amsen, peer review and scientific publishing are moving in a new direction, in which all papers will be posted online, and a post-publication peer review will take place that is independent of specific journal criteria and solely focused on improving paper quality ( 32 ). Journals will then choose papers that they find relevant based on the peer reviews and publish those papers as a collection ( 32 ). In this process, peer review and individual journals are uncoupled ( 32 ). In Keith Collier’s opinion, post-publication peer review is likely to become more prevalent as a complement to pre-publication peer review, but not as a replacement ( 35 ). Post-publication peer review will not serve to identify errors and fraud but will provide an additional measurement of impact ( 35 ). Collier also believes that as journals and publishers consolidate into larger systems, there will be stronger potential for “cascading” and shared peer review ( 35 ).

CONCLUDING REMARKS

Peer review has become fundamental in assisting editors in selecting credible, high quality, novel and interesting research papers to publish in scientific journals and in ensuring the correction of any errors or issues present in submitted papers. Though the peer review process still has some flaws and deficiencies, a more suitable screening method for scientific papers has not yet been proposed or developed. Researchers have begun, and must continue, to look for ways of addressing the current issues with peer review so that it becomes a reliable system that releases only quality research papers into the scientific community.

A Step-by-Step Guide to Writing a Scientific Review Article

Manisha Bahl, A Step-by-Step Guide to Writing a Scientific Review Article, Journal of Breast Imaging , Volume 5, Issue 4, July/August 2023, Pages 480–485, https://doi.org/10.1093/jbi/wbad028


Scientific review articles are comprehensive, focused reviews of the scientific literature written by subject matter experts. The task of writing a scientific review article can seem overwhelming; however, it can be managed by using an organized approach and devoting sufficient time to the process. The process involves selecting a topic about which the authors are knowledgeable and enthusiastic, conducting a literature search and critical analysis of the literature, and writing the article, which is composed of an abstract, introduction, body, and conclusion, with accompanying tables and figures. This article, which focuses on the narrative or traditional literature review, is intended to serve as a guide with practical steps for new writers. Tips for success are also discussed, including selecting a focused topic, maintaining objectivity and balance while writing, avoiding tedious data presentation in a laundry list format, moving from descriptions of the literature to critical analysis, avoiding simplistic conclusions, and budgeting time for the overall process.

Scientific review articles provide a focused and comprehensive review of the available evidence about a subject, explain the current state of knowledge, and identify gaps that could be topics for potential future research.

Detailed tables reviewing the relevant scientific literature are important components of high-quality scientific review articles.

Tips for success include selecting a focused topic, maintaining objectivity and balance, avoiding tedious data presentation, providing a critical analysis rather than only a description of the literature, avoiding simplistic conclusions, and budgeting time for the overall process.

The process of researching and writing a scientific review article can be a seemingly daunting task but can be made manageable, and even enjoyable, if an organized approach is used and a reasonable timeline is given. Scientific review articles provide authors with an opportunity to synthesize the available evidence about a specific subject, contribute their insights to the field, and identify opportunities for future research. The authors, in turn, gain recognition as subject matter experts and thought leaders in the field. An additional benefit to the authors is that high-quality review articles can often be cited many years after publication ( 1 , 2 ). The reader of a scientific review article should gain an understanding of the current state of knowledge on the subject, points of controversy, and research questions that have yet to be answered ( 3 ).

There are two types of review articles, narrative or traditional literature reviews and systematic reviews, which may or may not be accompanied by a meta-analysis ( 4 ). This article, which focuses on the narrative or traditional literature review, is intended to serve as a guide with practical steps for new writers. It is geared toward breast imaging radiologists who are preparing to write a scientific review article for the Journal of Breast Imaging but can also be used by any writer, reviewer, or reader. In the narrative or traditional literature review, the available scientific literature is synthesized and no new data are presented. This article first discusses the process of selecting an appropriate topic. Then, practical tips for conducting a literature search and analyzing the literature are provided. The structure of a scientific review article is outlined and tips for success are described.

Scientific review articles are often solicited by journal editors and written by experts in the field. For solicited or invited articles, a senior expert in the field may be contacted and, in turn, may ask junior faculty or trainees to help with the literature search and writing process. Most journals also consider proposals for review article topics. The journal’s editorial office can be contacted via e-mail with a topic proposal, ideally with an accompanying outline or an extended abstract to help explain the proposal.

When selecting a topic for a scientific review article, the following considerations should be taken into account: The authors should be knowledgeable about and interested in the topic; the journal’s audience should be interested in the topic; and the topic should be focused, with a sufficient number of current research studies ( Figure 1 ). For the Journal of Breast Imaging , a scientific review article on breast MRI would be too broad in scope. Examples of more focused topics include abbreviated breast MRI ( 5 ), concerns about gadolinium deposition in the setting of screening MRI ( 6 ), Breast Imaging Reporting and Data System (BI-RADS) 3 assessments on MRI ( 7 , 8 ), the science of background parenchymal enhancement ( 9 ), and screening MRI in women at intermediate risk ( 10 ).

Summary of the factors to consider when selecting a topic for a scientific review article. Adapted with permission from Dhillon et al (2).

Once a well-defined topic is selected, the next step is to conduct a literature search. There are multiple indexing databases that can be used for a literature search, including PubMed, SCOPUS, and Web of Science ( 11–13 ). A list of databases with links can be found on the National Institutes of Health website ( 14 ). It is advised to keep track of the search terms that are used so that the search could be replicated if needed.

While reading articles, taking notes and keeping track of findings in a spreadsheet or database can be helpful. The following points should be considered for each article: What is the purpose of the article, and is it relevant to the review article topic? What was the study design (eg, retrospective analysis, randomized controlled trial)? Are the conclusions that are drawn based on the presented data valid and reasonable? What are the strengths and limitations of the study? In the discussion section, do the authors discuss other literature that both supports and contradicts their findings? It can also be helpful to read accompanying editorials, if available, that are written by experts to explain the importance of the original scientific article in the context of other work in the field.
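As a concrete illustration of this kind of note keeping, the short sketch below (the column names and sample entry are illustrative, not prescribed by the article) writes per-article notes to a CSV file that can be opened and sorted as a spreadsheet:

```python
import csv

# Illustrative columns mirroring the questions suggested above; adapt as needed.
FIELDS = ["citation", "purpose", "study_design", "key_findings",
          "strengths", "limitations", "relevant_section"]

# One hypothetical entry; in practice, add a row per article as you read.
rows = [
    {"citation": "Author A et al, 2021",
     "purpose": "Cancer detection rate of screening DBT",
     "study_design": "retrospective cohort",
     "key_findings": "higher CDR vs digital mammography",
     "strengths": "large sample",
     "limitations": "single institution",
     "relevant_section": "Cancer detection rates"},
]

with open("review_notes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Keeping a "relevant_section" column from the start makes it easier later to write thematically (by finding rather than by author), as recommended below.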

If previous review articles on the same topic are discovered during the literature search, then the following strategies could be considered: discussing approaches used and limitations of past reviews, identifying a new angle that has not been previously covered, and/or focusing on new research that has been published since the most recent reviews on the topic ( 3 ). It is highly encouraged to create an outline and solicit feedback from co-authors before writing begins.

Writing a high-quality scientific review article is “a balancing act between the scientific rigor needed to select and critically appraise original studies, and the art of telling a story by providing context, exploring the known and the unknown, and pointing the way forward” ( 15 ). The ideal scientific review article is balanced and authoritative and serves as a definitive reference on the topic. Review articles tend to be 4000 to 5000 words in length, with 80% to 90% devoted to the body.

When preparing a scientific review article, writers can consider using the Scale for the Assessment of Narrative Review Articles, which has been proposed as a critical appraisal tool to help editors, reviewers, and readers assess non–systematic review articles ( 16 ). It is composed of the following six items, which are rated from 0 to 2 (with 0 being low quality and 2 being high quality): explanation of why the article is important, statement of aims or questions to be addressed, description of the literature search strategy, inclusion of appropriate references, scientific reasoning, and appropriate data presentation. In a study with three raters each reviewing 30 articles, the scale was felt to be feasible in daily editorial work and had high inter-rater reliability.
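Since each of the six items is rated 0 to 2, a completed assessment reduces to a simple tally between 0 and 12. The sketch below is a hypothetical helper (not an official SANRA tool) that validates and sums per-item ratings:

```python
# The six SANRA items as described above; each is rated 0 (low) to 2 (high).
SANRA_ITEMS = [
    "Explanation of why the article is important",
    "Statement of aims or questions to be addressed",
    "Description of the literature search strategy",
    "Inclusion of appropriate references",
    "Scientific reasoning",
    "Appropriate data presentation",
]

def sanra_total(ratings):
    """Validate six per-item ratings and return the total (0-12)."""
    if len(ratings) != len(SANRA_ITEMS):
        raise ValueError("SANRA has exactly six items")
    if any(r not in (0, 1, 2) for r in ratings):
        raise ValueError("each item is rated 0, 1, or 2")
    return sum(ratings)

print(sanra_total([2, 2, 1, 2, 1, 2]))  # prints 10 (out of a possible 12)
```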

The components of a scientific review article include the abstract, introduction, body, conclusion, references, tables, and figures, which are described below.

Abstracts are typically structured as a single paragraph, ranging from 200 to 250 words in length. The abstract briefly explains why the topic is important, provides a summary of the main conclusions that are being drawn based on the research studies that were included and analyzed in the review article, and describes how the article is organized ( 17 ). Because the abstract should provide a summary of the main conclusions being drawn, it is often written last, after the other sections of the article have been completed. It does not include references.

The introduction provides detailed background about the topic and outlines the objectives of the review article. It is important to explain why the literature on that topic should be reviewed (eg, no prior reviews, different angle from prior reviews, new published research). The problem-gap-hook approach can be used, in which the topic is introduced, the gap is explained (eg, lack of published synthesis), and the hook (or why it matters) is provided ( 18 ). If there are prior review articles on the topic, particularly recent ones, then the authors are encouraged to justify how their review contributes to the existing literature. The content in the introduction section should be supported with references, but specific findings from recent research studies are typically not described, instead being discussed in depth in the body.

In a traditional or narrative review article, a methods section is optional. If one is included, it should list the databases and years that were searched, the search terms that were used, and a summary of the inclusion and exclusion criteria for articles ( 17 , 19 ).

The body can take different forms depending on the topic but should be organized into sections with subheadings, with each subsection having an independent introduction and conclusion. In the body, published studies should be reviewed in detail and in an organized fashion. In general, each paragraph should begin with a thesis statement or main point, and the sentences that follow it should consist of supporting evidence drawn from the literature. Research studies need not be discussed in chronological order, and the results from one research study may be discussed in different sections of the body. For example, if writing a scientific review article on screening digital breast tomosynthesis, cancer detection rates reported in one study may be discussed in a separate paragraph from the false-positive rates that were reported in the same study.

Emphasis should be placed on the significance of the study results in the broader context of the subject. The strengths and weaknesses of individual studies should be discussed. An example of this type of discussion is as follows: “Smith et al found no differences in re-excision rates among breast cancer patients who did and did not undergo preoperative MRI. However, there were several important limitations of this study. The radiologists were not required to have breast MRI interpretation experience, nor was it required that MRI-detected findings undergo biopsy prior to surgery.” Other examples of phrases that can be used for constructive criticism are available online ( 20 ).

The conclusion section ties everything together and clearly states the conclusions that are being drawn based on the research studies included and analyzed in the article. The authors are also encouraged to provide their views on future research, important challenges, and unanswered questions.

Scientific review articles tend to have a large number of supporting references (up to 100). When possible, referencing the original article (rather than a review article referring to the original article) is preferred. The use of a reference manager, such as EndNote (Clarivate, London, UK) ( 21 ), Mendeley Desktop (Elsevier, Amsterdam, the Netherlands) ( 22 ), Paperpile (Paperpile LLC, Cambridge, MA) ( 23 ), RefWorks (ProQuest, Ann Arbor, MI) ( 24 ), or Zotero (Corporation for Digital Scholarship, Fairfax, VA) ( 25 ), is highly encouraged, as it ensures appropriate reference ordering even when text is moved or added and can facilitate the switching of formats based on journal requirements ( 26 ).

Tables and Figures

The inclusion of tables and figures can improve the readability of the review. Detailed tables that review the scientific literature are expected ( Table 1 ). A table listing gaps in knowledge as potential areas for future research may also be included ( 17 ). Although scientific review articles are not expected to be as figure-rich as educational review articles, figures can be beneficial to illustrate complex concepts and summarize or synthesize relevant data ( Figure 2 ). Of note, if nonoriginal figures are used, permission from the copyright owner must be obtained.

Example of an Effective Table From a Scientific Review Article About Screening MRI in Women at Intermediate Risk of Breast Cancer.

Abbreviations: ADH, atypical ductal hyperplasia; ALH, atypical lobular hyperplasia; CDR, cancer detection rate; LCIS, lobular carcinoma in situ; NR, not reported; PPV, positive predictive value. Note: The detailed table provides a summary of the relevant scientific literature on screening MRI in women with lobular neoplasia or ADH. Adapted with permission from Bahl ( 10 ).

a The reported CDR is an incremental CDR in the studies by Friedlander et al and Chikarmane et al. In all studies, some, but not all, included patients had a prior MRI examination, so the reported CDR represents a combination of both the prevalent and incident CDRs.

b This study included 455 patients with LCIS (some of whom had concurrent ALH or ADH). Twenty-nine cancers were MRI-detected, and 115 benign biopsies were prompted by MRI findings.

Example of an effective figure from a scientific review article about breast cancer risk assessment. The figure provides a risk assessment algorithm for breast cancer. Reprinted with permission from Kim et al (28).

Select a Focused but Broad Enough Topic

A common pitfall is to be too ambitious in scope, resulting in a very time-consuming literature search and superficial coverage of some aspects of the topic. The ideal topic should be focused enough to be manageable but with a large enough body of available research to justify the need for a review article. One article on the topic of scientific reviews suggests that at least 15 to 20 relevant research papers published within the previous five years should be easily identifiable to warrant writing a review article ( 2 ).

Provide a Summary of Main Conclusions in the Abstract

Another common pitfall is to only introduce the topic and provide a roadmap for the article in the abstract. The abstract should also provide a summary of the main conclusions that are being drawn based on the research studies that were included and analyzed in the review article.

Be Objective

The content and key points of the article should be based on the published scientific literature and not biased toward one’s personal opinion.

Avoid Tedious Data Presentation

Extensive lists of statements about the findings of other authors (eg, author A found Z, author B found Y, while author C found X, etc) make it difficult for the reader to understand and follow the article. It is best for the writing to be thematic based on research findings rather than author-centered ( 27 ). Each paragraph in the body should begin with a thesis statement or main point, and the sentences that follow should consist of supporting evidence drawn from the literature. For example, in a scientific review article about artificial intelligence (AI) for screening mammography, one approach would be to write that article A found a higher cancer detection rate, higher efficiency, and a lower false-positive rate with use of the AI algorithm and article B found a similar cancer detection rate and higher efficiency, while article C found a higher cancer detection rate and higher false-positive rate. Rather, a better approach would be to write one or more paragraphs summarizing the literature on cancer detection rates, one or more paragraphs on false-positive rates, and one or more paragraphs on efficiency. The results from one study (eg, article A) need not all be discussed in the same paragraph.

Move from Description (Summary) to Analysis

A common pitfall is to describe and summarize the published literature without providing a critical analysis. The purpose of the narrative or traditional review article is not only to summarize relevant discoveries but also to synthesize the literature, discuss its limitations and implications, and speculate on the future.

Avoid Simplistic Conclusions

The scientific review article’s conclusions should consider the complexity of the topic and the quality of the evidence. When describing a study’s findings, it is best to use language that reflects the quality of the evidence rather than making definitive statements. For example, rather than stating that “The use of preoperative breast MRI leads to a reduction in re-excision rates,” the following comments could be made: “Two single-institution retrospective studies found that preoperative MRI was associated with lower rates of positive surgical margins, which suggests that preoperative MRI may lead to reduced re-excision rates. Larger studies with randomization of patients are needed to validate these findings.”

Budget Time for Researching, Synthesizing, and Writing

The amount of time necessary to write a high-quality scientific review article can easily be underestimated. The process of searching for and synthesizing the scientific literature on a topic can take weeks to months to complete depending on the number of authors involved in this process.

Scientific review articles are common in the medical literature and can serve as definitive references on the topic for other scientists, clinicians, and trainees. The first step in the process of preparing a scientific review article is to select a focused topic. This step is followed by a literature search and critical analysis of the published data. The components of the article include an abstract, introduction, body, and conclusion, with the majority devoted to the body, in which the relevant literature is reviewed in detail. The article should be objective and balanced, with summaries and critical analysis of the available evidence. Budgeting time for researching, synthesizing, and writing; taking advantage of the resources listed in this article and available online; and soliciting feedback from co-authors at various stages of the process (eg, after an outline is created) can help new writers produce high-quality scientific review articles.

The author thanks Susanne L. Loomis (Medical and Scientific Communications, Strategic Communications, Department of Radiology, Massachusetts General Hospital, Boston, MA) for creating Figure 1 in this article.

None declared.

M.B. is a consultant for Lunit (medical AI software company) and an expert panelist for 2nd.MD (a digital health company). She also receives funding from the National Institutes of Health (K08CA241365). M.B. is an associate editor of the Journal of Breast Imaging . As such, she was excluded from the editorial process.

1. Ketcham CM, Crawford JM. The impact of review articles. Lab Invest 2007;87(12):1174–1185.
2. Dhillon P. How to write a good scientific review article. FEBS J 2022;289(13):3592–3602.
3. Pautasso M. Ten simple rules for writing a literature review. PLoS Comput Biol 2013;9(7):e1003149. doi:10.1371/journal.pcbi.1003149.
4. Gregory AT, Denniss AR. An introduction to writing narrative and systematic reviews—tasks, tips and traps for aspiring authors. Heart Lung Circ 2018;27(7):893–898.
5. Heacock L, Reig B, Lewin AA, Toth HK, Moy L, Lee CS. Abbreviated breast MRI: road to clinical implementation. J Breast Imag 2020;2(3):201–214.
6. Neal CH. Screening breast MRI and gadolinium deposition: cause for concern? J Breast Imag 2022;4(1):10–18.
7. Morris EA, Comstock CE, Lee CH, et al. ACR BI-RADS® Magnetic Resonance Imaging. In: ACR BI-RADS® Atlas, Breast Imaging Reporting and Data System. Reston, VA: American College of Radiology; 2013.
8. Nguyen DL, Myers KS, Oluyemi E, et al. BI-RADS 3 assessment on MRI: a lesion-based review for breast radiologists. J Breast Imag 2022;4(5):460–473.
9. Vong S, Ronco AJ, Najafpour E, Aminololama-Shakeri S. Screening breast MRI and the science of premenopausal background parenchymal enhancement. J Breast Imag 2021;3(4):407–415.
10. Bahl M. Screening MRI in women at intermediate breast cancer risk: an update of the recent literature. J Breast Imag 2022;4(3):231–240.
11. National Library of Medicine. PubMed. Available at: https://pubmed.ncbi.nlm.nih.gov/. Accessed October 5, 2022.
12. Elsevier. Scopus. Available at: https://www.elsevier.com/solutions/scopus/. Accessed October 5, 2022.
13. Clarivate. Web of Science. Available at: https://clarivate.com/webofsciencegroup/solutions/web-of-science/. Accessed October 5, 2022.
14. National Institutes of Health (NIH) Office of Management. NIH Library. Available at: https://www.nihlibrary.nih.gov/services/systematic-review-service/literature-search-databases-and-gray-literature/. Accessed October 5, 2022.
15. Vidal EIO, Fukushima FB. The art and science of writing a scientific review article. Cad Saude Publica 2021;37(4):e00063121. doi:10.1590/0102-311X00063121.
16. Baethge C, Goldbeck-Wood S, Mertens S. SANRA—a scale for the quality assessment of narrative review articles. Res Integr Peer Rev 2019;4:5. doi:10.1186/s41073-019-0064-8.
17. Sanders DA. How to write (and how not to write) a scientific review article. Clin Biochem 2020;81:65–68.
18. Lingard L, Colquhoun H. The story behind the synthesis: writing an effective introduction to your scoping review. Perspect Med Educ 2022;11(5):289–294.
19. Murphy CM. Writing an effective review article. J Med Toxicol 2012;8(2):89–90.
20. The University of Manchester Academic Phrasebank. Being critical. Available at: https://www.phrasebank.manchester.ac.uk/being-critical/. Accessed October 5, 2022.
21. Clarivate. EndNote. Available at: https://endnote.com/. Accessed October 5, 2022.
22. Mendeley. Getting started with Mendeley Desktop. Available at: https://www.mendeley.com/guides/desktop/. Accessed October 5, 2022.
23. Paperpile. Available at: https://paperpile.com/. Accessed October 5, 2022.
24. RefWorks. Available at: https://www.refworks.com/refworks2/. Accessed October 5, 2022.
25. Zotero. Available at: https://www.zotero.org/. Accessed October 5, 2022.
26. Grimm LJ, Harvey JA. Practical steps to writing a scientific manuscript. J Breast Imag 2022;4(6):640–648.
27. Gasparyan AY, Ayvazyan L, Blackmore H, Kitas GD. Writing a narrative biomedical review: considerations for authors, peer reviewers, and editors. Rheumatol Int 2011;31(11):1409–1417.
28. Kim G, Bahl M. Assessing risk of breast cancer: a review of risk prediction models. J Breast Imag 2021;3(2):144–155.



Scientific Method

Science is an enormously successful human enterprise. The study of scientific method is the attempt to discern the activities by which that success is achieved. Among the activities often identified as characteristic of science are systematic observation and experimentation, inductive and deductive reasoning, and the formation and testing of hypotheses and theories. How these are carried out in detail can vary greatly, but characteristics like these have been looked to as a way of demarcating scientific activity from non-science, where only enterprises which employ some canonical form of scientific method or methods should be considered science (see also the entry on science and pseudo-science ). Others have questioned whether there is anything like a fixed toolkit of methods which is common across science and only science. Some reject privileging one view of method as part of rejecting broader views about the nature of science, such as naturalism (Dupré 2004); some reject any restriction in principle (pluralism).

Scientific method should be distinguished from the aims and products of science, such as knowledge, predictions, or control. Methods are the means by which those goals are achieved. Scientific method should also be distinguished from meta-methodology, which includes the values and justifications behind a particular characterization of scientific method (i.e., a methodology) — values such as objectivity, reproducibility, simplicity, or past successes. Methodological rules are proposed to govern method and it is a meta-methodological question whether methods obeying those rules satisfy given values. Finally, method is distinct, to some degree, from the detailed and contextual practices through which methods are implemented. The latter might range over: specific laboratory techniques; mathematical formalisms or other specialized languages used in descriptions and reasoning; technological or other material means; ways of communicating and sharing results, whether with other scientists or with the public at large; or the conventions, habits, enforced customs, and institutional controls over how and what science is carried out.

While it is important to recognize these distinctions, their boundaries are fuzzy. Hence, accounts of method cannot be entirely divorced from their methodological and meta-methodological motivations or justifications. Moreover, each aspect plays a crucial role in identifying methods. Disputes about method have therefore played out at the detail, rule, and meta-rule levels. Changes in beliefs about the certainty or fallibility of scientific knowledge, for instance (which is a meta-methodological consideration of what we can hope for methods to deliver), have meant different emphases on deductive and inductive reasoning, or on the relative importance attached to reasoning over observation (i.e., differences over particular methods). Beliefs about the role of science in society will affect the place one gives to values in scientific method.

The issue which has shaped debates over scientific method the most in the last half century is the question of how pluralist do we need to be about method? Unificationists continue to hold out for one method essential to science; nihilism is a form of radical pluralism, which considers the effectiveness of any methodological prescription to be so context sensitive as to render it not explanatory on its own. Some middle degree of pluralism regarding the methods embodied in scientific practice seems appropriate. But the details of scientific practice vary with time and place, from institution to institution, across scientists and their subjects of investigation. How significant are the variations for understanding science and its success? How much can method be abstracted from practice? This entry describes some of the attempts to characterize scientific method or methods, as well as arguments for a more context-sensitive approach to methods embedded in actual scientific practices.

1. Overview and organizing themes

This entry could have been given the title Scientific Methods and gone on to fill volumes, or it could have been extremely short, consisting of a brief summary rejection of the idea that there is any such thing as a unique Scientific Method at all. Both unhappy prospects are due to the fact that scientific activity varies so much across disciplines, times, places, and scientists that any account which manages to unify it all will either consist of overwhelming descriptive detail, or trivial generalizations.

The choice of scope for the present entry is more optimistic, taking a cue from the recent movement in philosophy of science toward a greater attention to practice: to what scientists actually do. This “turn to practice” can be seen as the latest form of studies of methods in science, insofar as it represents an attempt at understanding scientific activity, but through accounts that are neither meant to be universal and unified, nor singular and narrowly descriptive. To some extent, different scientists at different times and places can be said to be using the same method even though, in practice, the details are different.

Whether the context in which methods are carried out is relevant, or to what extent, will depend largely on what one takes the aims of science to be and what one’s own aims are. For most of the history of scientific methodology the assumption has been that the most important output of science is knowledge and so the aim of methodology should be to discover those methods by which scientific knowledge is generated.

Science was seen to embody the most successful form of reasoning (but which form?) to the most certain knowledge claims (but how certain?) on the basis of systematically collected evidence (but what counts as evidence, and should the evidence of the senses take precedence, or rational insight?). Section 2 surveys some of the history, pointing to two major themes. One theme is seeking the right balance between observation and reasoning (and the attendant forms of reasoning which employ them); the other is how certain scientific knowledge is or can be.

Section 3 turns to 20 th century debates on scientific method. In the second half of the 20 th century the epistemic privilege of science faced several challenges and many philosophers of science abandoned the reconstruction of the logic of scientific method. Views changed significantly regarding which functions of science ought to be captured and why. For some, the success of science was better identified with social or cultural features. Historical and sociological turns in the philosophy of science were made, with a demand that greater attention be paid to the non-epistemic aspects of science, such as sociological, institutional, material, and political factors. Even outside of those movements there was an increased specialization in the philosophy of science, with more and more focus on specific fields within science. The combined upshot was very few philosophers arguing any longer for a grand unified methodology of science. Sections 3 and 4 survey the main positions on scientific method in 20 th century philosophy of science, focusing on where they differ in their preference for confirmation or falsification or for waiving the idea of a special scientific method altogether.

In recent decades, attention has primarily been paid to scientific activities traditionally falling under the rubric of method, such as experimental design and general laboratory practice, the use of statistics, the construction and use of models and diagrams, interdisciplinary collaboration, and science communication. Sections 4–6 attempt to construct a map of the current domains of the study of methods in science.

As these sections illustrate, the question of method is still central to the discourse about science. Scientific method remains a topic for education, for science policy, and for scientists. It arises in the public domain where the demarcation or status of science is at issue. Some philosophers have recently returned, therefore, to the question of what it is that makes science a unique cultural product. This entry will close with some of these recent attempts at discerning and encapsulating the activities by which scientific knowledge is achieved.

Attempting a history of scientific method compounds the vast scope of the topic. This section briefly surveys the background to modern methodological debates. What can be called the classical view goes back to antiquity, and represents a point of departure for later divergences. [ 1 ]

We begin with a point made by Laudan (1968) in his historical survey of scientific method:

Perhaps the most serious inhibition to the emergence of the history of theories of scientific method as a respectable area of study has been the tendency to conflate it with the general history of epistemology, thereby assuming that the narrative categories and classificatory pigeon-holes applied to the latter are also basic to the former. (1968: 5)

To see knowledge about the natural world as falling under knowledge more generally is an understandable conflation. Histories of theories of method would naturally employ the same narrative categories and classificatory pigeon holes. An important theme of the history of epistemology, for example, is the unification of knowledge, a theme reflected in the question of the unification of method in science. Those who have identified differences in kinds of knowledge have often likewise identified different methods for achieving that kind of knowledge (see the entry on the unity of science ).

Different views on what is known, how it is known, and what can be known are connected. Plato distinguished the realms of things into the visible and the intelligible ( The Republic , 510a, in Cooper 1997). Only the latter, the Forms, could be objects of knowledge. The intelligible truths could be known with the certainty of geometry and deductive reasoning. What could be observed of the material world, however, was by definition imperfect and deceptive, not ideal. The Platonic way of knowledge therefore emphasized reasoning as a method, downplaying the importance of observation. Aristotle disagreed, locating the Forms in the natural world as the fundamental principles to be discovered through the inquiry into nature ( Metaphysics Z , in Barnes 1984).

Aristotle is recognized as giving the earliest systematic treatise on the nature of scientific inquiry in the western tradition, one which embraced observation and reasoning about the natural world. In the Prior and Posterior Analytics , Aristotle reflects first on the aims and then the methods of inquiry into nature. A number of features can be found which are still considered by most to be essential to science. For Aristotle, empiricism, careful observation (but passive observation, not controlled experiment), is the starting point. The aim is not merely recording of facts, though. For Aristotle, science ( epistêmê ) is a body of properly arranged knowledge or learning—the empirical facts, but also their ordering and display are of crucial importance. The aims of discovery, ordering, and display of facts partly determine the methods required of successful scientific inquiry. Also determinant is the nature of the knowledge being sought, and the explanatory causes proper to that kind of knowledge (see the discussion of the four causes in the entry on Aristotle on causality ).

In addition to careful observation, then, scientific method requires a logic as a system of reasoning for properly arranging, but also inferring beyond, what is known by observation. Methods of reasoning may include induction, prediction, or analogy, among others. Aristotle’s system (along with his catalogue of fallacious reasoning) was collected under the title the Organon . This title would be echoed in later works on scientific reasoning, such as Novum Organon by Francis Bacon, and Novum Organon Renovatum by William Whewell (see below). In Aristotle’s Organon reasoning is divided primarily into two forms, a rough division which persists into modern times. The division, known most commonly today as deductive versus inductive method, appears in other eras and methodologies as analysis/synthesis, non-ampliative/ampliative, or even confirmation/verification. The basic idea is that there are two “directions” to proceed in our methods of inquiry: one away from what is observed, to the more fundamental, general, and encompassing principles; the other, from the fundamental and general to instances or implications of principles.

The basic aim and method of inquiry identified here can be seen as a theme running throughout the next two millennia of reflection on the correct way to seek after knowledge: carefully observe nature and then seek rules or principles which explain or predict its operation. The Aristotelian corpus provided the framework for a commentary tradition on scientific method independent of science itself (cosmos versus physics). During the medieval period, figures such as Albertus Magnus (1206–1280), Thomas Aquinas (1225–1274), Robert Grosseteste (1175–1253), Roger Bacon (1214/1220–1292), William of Ockham (1287–1347), Andreas Vesalius (1514–1564), Giacomo Zabarella (1533–1589) all worked to clarify the kind of knowledge obtainable by observation and induction, the source of justification of induction, and the best rules for its application. [ 2 ] Many of their contributions we now think of as essential to science (see also Laudan 1968). As Aristotle and Plato had employed a framework of reasoning either “to the forms” or “away from the forms”, medieval thinkers employed directions away from the phenomena or back to the phenomena. In analysis, a phenomenon was examined to discover its basic explanatory principles; in synthesis, explanations of a phenomenon were constructed from first principles.

During the Scientific Revolution these various strands of argument, experiment, and reason were forged into a dominant epistemic authority. The 16 th –18 th centuries were a period of not only dramatic advance in knowledge about the operation of the natural world—advances in mechanical, medical, biological, political, economic explanations—but also of self-awareness of the revolutionary changes taking place, and intense reflection on the source and legitimation of the method by which the advances were made. The struggle to establish the new authority included methodological moves. The Book of Nature, according to the metaphor of Galileo Galilei (1564–1642) or Francis Bacon (1561–1626), was written in the language of mathematics, of geometry and number. This motivated an emphasis on mathematical description and mechanical explanation as important aspects of scientific method. Through figures such as Henry More and Ralph Cudworth, a neo-Platonic emphasis on the importance of metaphysical reflection on nature behind appearances, particularly regarding the spiritual as a complement to the purely mechanical, remained an important methodological thread of the Scientific Revolution (see the entries on Cambridge platonists ; Boyle ; Henry More ; Galileo ).

In Novum Organum (1620), Bacon was critical of the Aristotelian method for leaping from particulars to universals too quickly. The syllogistic form of reasoning readily mixed those two types of propositions. Bacon aimed at the invention of new arts, principles, and directions. His method would be grounded in methodical collection of observations, coupled with correction of our senses (and particularly, directions for the avoidance of the Idols, as he called them, kinds of systematic errors to which naïve observers are prone). The community of scientists could then climb, by a careful, gradual and unbroken ascent, to reliable general claims.

Bacon’s method has been criticized as impractical and too inflexible for the practicing scientist. Whewell would later criticize Bacon in his Philosophy of the Inductive Sciences for paying too little attention to the practices of scientists. It is hard to find convincing examples of Bacon’s method being put into practice in the history of science, but there are a few who have been held up as real examples of 17 th century scientific, inductive method, even if not in the rigid Baconian mold: figures such as Robert Boyle (1627–1691) and William Harvey (1578–1657) (see the entry on Bacon ).

It is to Isaac Newton (1642–1727), however, that historians of science and methodologists have paid greatest attention. Given the enormous success of his Principia Mathematica and Opticks , this is understandable. The study of Newton’s method has had two main thrusts: the implicit method of the experiments and reasoning presented in the Opticks, and the explicit methodological rules given as the Rules for Philosophising (the Regulae) in Book III of the Principia . [ 3 ] Newton’s law of gravitation, the linchpin of his new cosmology, broke with explanatory conventions of natural philosophy, first for apparently proposing action at a distance, but more generally for not providing “true”, physical causes. The argument for his System of the World ( Principia , Book III) was based on phenomena, not reasoned first principles. This was viewed (mainly on the continent) as insufficient for proper natural philosophy. The Regulae counter this objection, re-defining the aims of natural philosophy by re-defining the method natural philosophers should follow. (See the entry on Newton’s philosophy .)

To his list of methodological prescriptions should be added Newton’s famous phrase “ hypotheses non fingo ” (commonly translated as “I frame no hypotheses”). The scientist was not to invent systems but infer explanations from observations, as Bacon had advocated. This would come to be known as inductivism. In the century after Newton, significant clarifications of the Newtonian method were made. Colin Maclaurin (1698–1746), for instance, reconstructed the essential structure of the method as having complementary analysis and synthesis phases, one proceeding away from the phenomena in generalization, the other from the general propositions to derive explanations of new phenomena. Denis Diderot (1713–1784) and editors of the Encyclopédie did much to consolidate and popularize Newtonianism, as did Francesco Algarotti (1712–1764). The emphasis was often as much on the character of the scientist as on their process, a character which is still commonly assumed. The scientist is humble in the face of nature, not beholden to dogma, obeys only his eyes, and follows the truth wherever it leads. It was certainly Voltaire (1694–1778) and du Chatelet (1706–1749) who were most influential in propagating the latter vision of the scientist and their craft, with Newton as hero. Scientific method became a revolutionary force of the Enlightenment. (See also the entries on Newton , Leibniz , Descartes , Boyle , Hume , enlightenment , as well as Shank 2008 for a historical overview.)

Not all 18 th century reflections on scientific method were so celebratory. Famous also are George Berkeley’s (1685–1753) attack on the mathematics of the new science, as well as the over-emphasis of Newtonians on observation; and David Hume’s (1711–1776) undermining of the warrant offered for scientific claims by inductive justification (see the entries on: George Berkeley ; David Hume ; Hume’s Newtonianism and Anti-Newtonianism ). Hume’s problem of induction motivated Immanuel Kant (1724–1804) to seek new foundations for empirical method, though as an epistemic reconstruction, not as any set of practical guidelines for scientists. Both Hume and Kant influenced the methodological reflections of the next century, such as the debate between Mill and Whewell over the certainty of inductive inferences in science.

The debate between John Stuart Mill (1806–1873) and William Whewell (1794–1866) has become the canonical methodological debate of the 19 th century. Although often characterized as a debate between inductivism and hypothetico-deductivism, the role of the two methods on each side is actually more complex. On the hypothetico-deductive account, scientists work to come up with hypotheses from which true observational consequences can be deduced—hence, hypothetico-deductive. Because Whewell emphasizes both hypotheses and deduction in his account of method, he can be seen as a convenient foil to the inductivism of Mill. However, equally if not more important to Whewell’s portrayal of scientific method is what he calls the “fundamental antithesis”. Knowledge is a product of the objective (what we see in the world around us) and subjective (the contributions of our mind to how we perceive and understand what we experience, which he called the Fundamental Ideas). Both elements are essential according to Whewell, and he was therefore critical of Kant for too much focus on the subjective, and John Locke (1632–1704) and Mill for too much focus on the senses. Whewell’s fundamental ideas can be discipline relative. An idea can be fundamental even if it is necessary for knowledge only within a given scientific discipline (e.g., chemical affinity for chemistry). This distinguishes fundamental ideas from the forms and categories of intuition of Kant. (See the entry on Whewell .)

Clarifying fundamental ideas would therefore be an essential part of scientific method and scientific progress. Whewell called this process “Discoverer’s Induction”. It was induction, following Bacon or Newton, but Whewell sought to revive Bacon’s account by emphasising the role of ideas in the clear and careful formulation of inductive hypotheses. Whewell’s induction is not merely the collecting of objective facts. The subjective plays a role through what Whewell calls the Colligation of Facts, a creative act of the scientist, the invention of a theory. A theory is then confirmed by testing, where more facts are brought under the theory, called the Consilience of Inductions. Whewell felt that this was the method by which the true laws of nature could be discovered: clarification of fundamental concepts, clever invention of explanations, and careful testing. Mill, in his critique of Whewell, and others who have cast Whewell as a fore-runner of the hypothetico-deductivist view, seem to have under-estimated the importance of this discovery phase in Whewell’s understanding of method (Snyder 1997a,b, 1999). Down-playing the discovery phase would come to characterize methodology of the early 20 th century (see section 3 ).

Mill, in his System of Logic , put forward a narrower view of induction as the essence of scientific method. For Mill, induction is the search first for regularities among events. Among those regularities, some will continue to hold for further observations, eventually gaining the status of laws. One can also look for regularities among the laws discovered in a domain, i.e., for a law of laws. Which law of laws will hold is time and discipline dependent and open to revision. One example is the Law of Universal Causation, and Mill put forward specific methods for identifying causes—now commonly known as Mill’s methods. These five methods look for circumstances which are common among the phenomena of interest, those which are absent when the phenomena are, or those for which both vary together. Mill’s methods are still seen as capturing basic intuitions about experimental methods for finding the relevant explanatory factors ( System of Logic (1843), see Mill entry). The methods advocated by Whewell and Mill, in the end, look similar. Both involve inductive generalization to covering laws. They differ dramatically, however, with respect to the necessity of the knowledge arrived at; that is, at the meta-methodological level (see the entries on Whewell and Mill ).
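Two of Mill’s five methods, agreement and difference, lend themselves to a mechanical statement. The following is a minimal sketch only: the function names and the toy data about meals and illness are invented for illustration, and real applications face the problem of choosing which circumstances to record in the first place.

```python
# A toy sketch of two of Mill's methods. Each observation pairs a set
# of circumstances (factors) with whether the phenomenon occurred.

def method_of_agreement(observations):
    """Factors present in every case where the phenomenon occurs."""
    positive = [factors for factors, occurred in observations if occurred]
    if not positive:
        return set()
    common = set(positive[0])
    for factors in positive[1:]:
        common &= set(factors)
    return common

def method_of_difference(observations):
    """Factors that alone separate a positive case from an otherwise
    similar negative case."""
    candidates = set()
    for f_pos, occurred_pos in observations:
        if not occurred_pos:
            continue
        for f_neg, occurred_neg in observations:
            if not occurred_neg:
                diff = set(f_pos) - set(f_neg)
                if len(diff) == 1:  # differ in exactly one circumstance
                    candidates |= diff
    return candidates

# Hypothetical cases: did illness follow each meal?
cases = [
    ({"oysters", "wine", "bread"}, True),
    ({"oysters", "beer", "salad"}, True),
    ({"wine", "bread", "salad"}, False),
]
print(method_of_agreement(cases))   # {'oysters'}
print(method_of_difference(cases))  # {'oysters'}
```

Note that agreement alone cannot rule out a factor that merely co-occurs with the true cause in every positive case, which is one reason Mill presented the methods as complementary.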

3. Logic of method and critical responses

The quantum and relativistic revolutions in physics in the early 20 th century had a profound effect on methodology. Conceptual foundations of both theories were taken to show the defeasibility of even the most seemingly secure intuitions about space, time and bodies. Certainty of knowledge about the natural world was therefore recognized as unattainable. Instead a renewed empiricism was sought which rendered science fallible but still rationally justifiable.

Analyses of the reasoning of scientists emerged, according to which the aspects of scientific method which were of primary importance were the means of testing and confirming of theories. A distinction in methodology was made between the contexts of discovery and justification. The distinction could be used as a wedge between the particularities of where and how theories or hypotheses are arrived at, on the one hand, and the underlying reasoning scientists use (whether or not they are aware of it) when assessing theories and judging their adequacy on the basis of the available evidence, on the other. By and large, for most of the 20 th century, philosophy of science focused on the second context, although philosophers differed on whether to focus on confirmation or refutation as well as on the many details of how confirmation or refutation could or could not be brought about. By the mid-20 th century these attempts at defining the method of justification and the context distinction itself came under pressure. During the same period, philosophy of science developed rapidly, and from section 4 this entry will therefore shift from a primarily historical treatment of the scientific method towards a primarily thematic one.

Advances in logic and probability held out promise of the possibility of elaborate reconstructions of scientific theories and empirical method, the best example being Rudolf Carnap’s The Logical Structure of the World (1928). Carnap attempted to show that a scientific theory could be reconstructed as a formal axiomatic system—that is, a logic. That system could refer to the world because some of its basic sentences could be interpreted as observations or operations which one could perform to test them. The rest of the theoretical system, including sentences using theoretical or unobservable terms (like electron or force) would then either be meaningful because they could be reduced to observations, or they had purely logical meanings (called analytic, like mathematical identities). This has been referred to as the verifiability criterion of meaning. According to the criterion, any statement not either analytic or verifiable was strictly meaningless. Although the view was endorsed by Carnap in 1928, he would later come to see it as too restrictive (Carnap 1956). Another familiar version of this idea is the operationalism of Percy Williams Bridgman. In The Logic of Modern Physics (1927) Bridgman asserted that every physical concept could be defined in terms of the operations one would perform to verify the application of that concept. Making good on the operationalisation of a concept even as simple as length, however, can easily become enormously complex (for measuring very small lengths, for instance) or impractical (measuring large distances like light years).

Carl Hempel’s (1950, 1951) criticisms of the verifiability criterion of meaning had enormous influence. He pointed out that universal generalizations, such as most scientific laws, were not strictly meaningful on the criterion. Verifiability and operationalism both seemed too restrictive to capture standard scientific aims and practice. The tenuous connection between these reconstructions and actual scientific practice was criticized in another way. In both approaches, scientific methods are instead recast in methodological roles. Measurements, for example, were looked to as ways of giving meanings to terms. The aim of the philosopher of science was not to understand the methods per se , but to use them to reconstruct theories, their meanings, and their relation to the world. When scientists perform these operations, however, they will not report that they are doing them to give meaning to terms in a formal axiomatic system. This disconnect between methodology and the details of actual scientific practice would seem to violate the empiricism the Logical Positivists and Bridgman were committed to. The view that methodology should correspond to practice (to some extent) has been called historicism, or intuitionism. We turn to these criticisms and responses in section 3.4 . [ 4 ]

Positivism also had to contend with the recognition that a purely inductivist approach, along the lines of Bacon-Newton-Mill, was untenable. There was no pure observation, for starters. All observation was theory laden. Theory is required to make any observation, therefore not all theory can be derived from observation alone. (See the entry on theory and observation in science .) Even granting an observational basis, Hume had already pointed out that one could not deductively justify inductive conclusions without begging the question by presuming the success of the inductive method. Likewise, positivist attempts at analyzing how a generalization can be confirmed by observations of its instances were subject to a number of criticisms. Goodman (1965) and Hempel (1965) both point to paradoxes inherent in standard accounts of confirmation. Recent attempts at explaining how observations can serve to confirm a scientific theory are discussed in section 4 below.

The standard starting point for a non-inductive analysis of the logic of confirmation is known as the Hypothetico-Deductive (H-D) method. In its simplest form, a sentence of a theory which expresses some hypothesis is confirmed by its true consequences. As noted in section 2 , this method had been advanced by Whewell in the 19 th century, as well as Nicod (1924) and others in the 20 th century. Often, Hempel’s (1966) description of the H-D method, illustrated by the case of Semmelweis’ inferential procedures in establishing the cause of childbed fever, has been presented as a key account of H-D as well as a foil for criticism of the H-D account of confirmation (see, for example, Lipton’s (2004) discussion of inference to the best explanation; also the entry on confirmation ). Hempel described Semmelweis’ procedure as examining various hypotheses explaining the cause of childbed fever. Some hypotheses conflicted with observable facts and could be rejected as false immediately. Others needed to be tested experimentally by deducing which observable events should follow if the hypothesis were true (what Hempel called the test implications of the hypothesis), then conducting an experiment and observing whether or not the test implications occurred. If the experiment showed the test implication to be false, the hypothesis could be rejected. If the experiment showed the test implications to be true, however, this did not prove the hypothesis true. The confirmation of a test implication does not verify a hypothesis, though Hempel did allow that “it provides at least some support, some corroboration or confirmation for it” (Hempel 1966: 8). The degree of this support then depends on the quantity, variety and precision of the supporting evidence.
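The bare H-D schema just described can be put schematically in code. This is a hedged sketch only: the function, the hypothesis wording, and the observation results are all invented, and it deliberately flattens what Hempel stressed, namely that passing tests yields corroboration in degrees, not proof.

```python
# A minimal H-D loop: deduce test implications from a hypothesis,
# check each against observation, reject on any failure.

def hd_test(hypothesis, test_implications, observe):
    """Reject the hypothesis if any deduced test implication fails;
    otherwise report it as corroborated (never proven)."""
    for implication in test_implications:
        if not observe(implication):
            return f"{hypothesis}: rejected (failed: {implication})"
    return f"{hypothesis}: corroborated by all tests so far"

# Toy stand-in for experimental results (loosely Semmelweis-flavored,
# but the entries are made up for this sketch).
results = {
    "chlorine handwashing lowers mortality in Division 1": True,
    "mortality in Division 1 falls to Division 2 levels": True,
}
verdict = hd_test(
    "cadaveric matter causes childbed fever",
    list(results),
    lambda implication: results[implication],
)
print(verdict)
```

The asymmetry the schema encodes is that a single failed implication suffices for rejection, while no number of passed tests entails truth.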

Another approach that took off from the difficulties with inductive inference was Karl Popper’s critical rationalism or falsificationism (Popper 1959, 1963). Falsification is deductive and similar to H-D in that it involves scientists deducing observational consequences from the hypothesis under test. For Popper, however, the important point was not the degree of confirmation that successful prediction offered to a hypothesis. The crucial thing was the logical asymmetry between confirmation, based on inductive inference, and falsification, which can be based on a deductive inference. (This simple opposition was later questioned, by Lakatos, among others. See the entry on historicist theories of scientific rationality. )

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing—but without implying that this is also a measure for the probability that it is true.

Popper was also motivated by his doubts about the scientific status of theories like the Marxist theory of history or psycho-analysis, and so wanted to demarcate between science and pseudo-science. Popper saw this as an importantly different distinction than demarcating science from metaphysics. The latter demarcation was the primary concern of many logical empiricists. Popper used the idea of falsification to draw a line instead between pseudo and proper science. Science was science because its method involved subjecting theories to rigorous tests which offered a high probability of failing and thus refuting the theory.

A commitment to the risk of failure was important. Avoiding falsification could be done all too easily. If a consequence of a theory is inconsistent with observations, an exception can be added by introducing auxiliary hypotheses designed explicitly to save the theory, so-called ad hoc modifications. This Popper saw done in pseudo-science where ad hoc theories appeared capable of explaining anything in their field of application. In contrast, science is risky. If observations showed the predictions from a theory to be wrong, the theory would be refuted. Hence, scientific hypotheses must be falsifiable. Not only must there exist some possible observation statement which could falsify the hypothesis or theory, were it observed, (Popper called these the hypothesis’ potential falsifiers) it is crucial to the Popperian scientific method that such falsifications be sincerely attempted on a regular basis.

The more potential falsifiers of a hypothesis, the more falsifiable it would be, and the more the hypothesis claimed. Conversely, hypotheses without falsifiers claimed very little or nothing at all. Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions. The ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions. Popper sought to reconcile the view by blurring the distinction between falsifiable and not falsifiable, and speaking instead of degrees of testability (Popper 1985: 41f.).

From the 1960s on, sustained meta-methodological criticism emerged that drove philosophical focus away from scientific method. A brief look at those criticisms follows, with recommendations for further reading at the end of the entry.

Thomas Kuhn’s The Structure of Scientific Revolutions (1962) begins with a well-known shot across the bow for philosophers of science:

History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. (1962: 1)

The image Kuhn thought needed transforming was the a-historical, rational reconstruction sought by many of the Logical Positivists, though Carnap and other positivists were actually quite sympathetic to Kuhn’s views. (See the entry on the Vienna Circle .) Kuhn shares with others of his contemporaries, such as Feyerabend and Lakatos, a commitment to a more empirical approach to philosophy of science. Namely, the history of science provides important data, and necessary checks, for philosophy of science, including any theory of scientific method.

The history of science reveals, according to Kuhn, that scientific development occurs in alternating phases. During normal science, the members of the scientific community adhere to the paradigm in place. Their commitment to the paradigm means a commitment to the puzzles to be solved and the acceptable ways of solving them. Confidence in the paradigm remains so long as steady progress is made in solving the shared puzzles. Method in this normal phase operates within a disciplinary matrix (Kuhn’s later concept of a paradigm) which includes standards for problem solving, and defines the range of problems to which the method should be applied. An important part of a disciplinary matrix is the set of values which provide the norms and aims for scientific method. The main values that Kuhn identifies are prediction, problem solving, simplicity, consistency, and plausibility.

An important by-product of normal science is the accumulation of puzzles which cannot be solved with resources of the current paradigm. Once accumulation of these anomalies has reached some critical mass, it can trigger a communal shift to a new paradigm and a new phase of normal science. Importantly, the values that provide the norms and aims for scientific method may have transformed in the meantime. Method may therefore be relative to discipline, time or place.

Feyerabend also identified the aims of science as progress, but argued that any methodological prescription would only stifle that progress (Feyerabend 1988). His arguments are grounded in re-examining accepted “myths” about the history of science. Heroes of science, like Galileo, are shown to be just as reliant on rhetoric and persuasion as they are on reason and demonstration. Others, like Aristotle, are shown to be far more reasonable and far-reaching in their outlooks than they are given credit for. As a consequence, the only rule that could provide what he took to be sufficient freedom was the vacuous “anything goes”. More generally, even the methodological restriction that science is the best way to pursue knowledge, and to increase knowledge, is too restrictive. Feyerabend suggested instead that science might, in fact, be a threat to a free society, because it and its myth had become so dominant (Feyerabend 1978).

An even more fundamental kind of criticism was offered by several sociologists of science from the 1970s onwards who rejected the methodology of providing philosophical accounts for the rational development of science and sociological accounts of the irrational mistakes. Instead, they adhered to a symmetry thesis on which any causal explanation of how scientific knowledge is established needs to be symmetrical in explaining truth and falsity, rationality and irrationality, success and mistakes, by the same causal factors (see, e.g., Barnes and Bloor 1982, Bloor 1991). Movements in the Sociology of Science, like the Strong Programme, or in the social dimensions and causes of knowledge more generally led to extended and close examination of detailed case studies in contemporary science and its history. (See the entries on the social dimensions of scientific knowledge and social epistemology .) Well-known examinations by Latour and Woolgar (1979/1986), Knorr-Cetina (1981), Pickering (1984), Shapin and Schaffer (1985) seem to bear out that it was social ideologies (on a macro-scale) or individual interactions and circumstances (on a micro-scale) which were the primary causal factors in determining which beliefs gained the status of scientific knowledge. As they saw it therefore, explanatory appeals to scientific method were not empirically grounded.

A late, and largely unexpected, criticism of scientific method came from within science itself. Beginning in the early 2000s, a number of scientists attempting to replicate the results of published experiments could not do so. There may be close conceptual connection between reproducibility and method. For example, if reproducibility means that the same scientific methods ought to produce the same result, and all scientific results ought to be reproducible, then whatever it takes to reproduce a scientific result ought to be called scientific method. Space limits us to the observation that, insofar as reproducibility is a desired outcome of proper scientific method, it is not strictly a part of scientific method. (See the entry on reproducibility of scientific results .)

By the close of the 20 th century the search for the scientific method was flagging. Nola and Sankey (2000b) could introduce their volume on method by remarking that “For some, the whole idea of a theory of scientific method is yester-year’s debate …”.

Despite the many difficulties that philosophers encountered in trying to provide a clear methodology of confirmation (or refutation), important progress has nonetheless been made on understanding how observation can provide evidence for a given theory. Work in statistics has been crucial for understanding how theories can be tested empirically, and in recent decades a huge literature has developed that attempts to recast confirmation in Bayesian terms. Here these developments can be covered only briefly, and we refer to the entry on confirmation for further details and references.
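The Bayesian recasting of confirmation rests on Bayes’ theorem: evidence E confirms hypothesis H just in case P(H | E) > P(H). A minimal numerical illustration follows; the probability values are arbitrary, chosen only to show an update, and the function name is our own.

```python
# Bayesian confirmation in miniature: compute the posterior P(H | E)
# and compare it to the prior P(H).

def posterior(prior, likelihood, likelihood_alt):
    """P(H | E) via Bayes' theorem, with a single alternative ~H.

    prior          = P(H)
    likelihood     = P(E | H)
    likelihood_alt = P(E | ~H)
    """
    marginal = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / marginal

p = posterior(prior=0.2, likelihood=0.9, likelihood_alt=0.3)
print(round(p, 3))  # 0.429: above the prior of 0.2, so E confirms H
```

On this account the degree of confirmation depends on how much more probable the evidence is under the hypothesis than under its rivals, which is one way of making precise Hempel’s informal talk of “some support”.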

Statistics has come to play an increasingly important role in the methodology of the experimental sciences from the 19 th century onwards. At that time, statistics and probability theory took on a methodological role as an analysis of inductive inference, and attempts to ground the rationality of induction in the axioms of probability theory have continued throughout the 20 th century and into the present. Developments in the theory of statistics itself, meanwhile, have had a direct and immense influence on the experimental method, including methods for measuring the uncertainty of observations such as the Method of Least Squares developed by Legendre and Gauss in the early 19 th century, criteria for the rejection of outliers proposed by Peirce by the mid-19 th century, and the significance tests developed by Gosset (a.k.a. “Student”), Fisher, Neyman & Pearson and others in the 1920s and 1930s (see, e.g., Swijtink 1987 for a brief historical overview; and also the entry on C.S. Peirce ).
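The Method of Least Squares mentioned above has a simple closed form in the one-variable case. The sketch below fits a line y = a + b·x by minimizing the sum of squared residuals; the data points are invented for illustration.

```python
# Closed-form ordinary least squares for a line y = a + b*x.

def least_squares(xs, ys):
    """Return intercept a and slope b minimizing sum of (y - a - b*x)^2."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]  # made-up observations with scatter
a, b = least_squares(xs, ys)
print(round(a, 2), round(b, 2))  # 0.15 1.94
```

The methodological point is that the fitted line is the one that distributes observational error as evenly as the squared-error criterion allows, giving a principled rule for combining discordant measurements.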

These developments within statistics then in turn led to a reflective discussion among both statisticians and philosophers of science on how to perceive the process of hypothesis testing: whether it was a rigorous statistical inference that could provide a numerical expression of the degree of confidence in the tested hypothesis, or whether it should be seen as a decision between different courses of action that also involved a value component. This led to a major controversy between Fisher on the one side and Neyman and Pearson on the other (see especially Fisher 1955, Neyman 1956 and Pearson 1955, and for analyses of the controversy, e.g., Howie 2002, Marks 2000, Lenhard 2006). On Fisher’s view, hypothesis testing was a methodology for deciding when to accept or reject a statistical hypothesis: a hypothesis should be rejected by evidence if this evidence would be unlikely relative to other possible outcomes, were the hypothesis true. In contrast, on Neyman and Pearson’s view, the consequences of error also had to play a role when deciding between hypotheses. Introducing the distinction between the error of rejecting a true hypothesis (type I error) and that of accepting a false hypothesis (type II error), they argued that the consequences of each error determine whether it is more important to avoid rejecting a true hypothesis or accepting a false one. Hence, Fisher aimed for a theory of inductive inference that enabled a numerical expression of confidence in a hypothesis. To him, the important point was the search for truth, not utility. In contrast, the Neyman–Pearson approach provided a strategy of inductive behaviour for deciding between different courses of action. Here, the important point was not whether a hypothesis was true, but whether one should act as if it were.
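The two error types can be illustrated with a small simulation of a one-sided significance test. All concrete numbers here (sample size, effect size, the 5% cutoff) are illustrative choices, not drawn from the historical debate.

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

N, n, z_crit = 2000, 25, 1.645  # trials, sample size, 5% one-sided z cutoff

def rejects(mu):
    """Draw one sample of size n from N(mu, 1) and apply the z-test of
    H0: mu = 0 against the one-sided alternative mu > 0 (known sigma = 1)."""
    sample = [random.gauss(mu, 1.0) for _ in range(n)]
    z = statistics.mean(sample) * n ** 0.5
    return z > z_crit

# Type I error rate: how often a true H0 (mu = 0) is rejected; close to 0.05.
type_I = sum(rejects(0.0) for _ in range(N)) / N
# Type II error rate: how often a false H0 (true mu = 0.5) is retained.
type_II = sum(not rejects(0.5) for _ in range(N)) / N
```

On the Neyman–Pearson view, the experimenter fixes the type I rate in advance (here 5%) and then chooses a sample size that makes the type II rate acceptably low, weighing the practical costs of each kind of mistake.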

Similar discussions are found in the philosophical literature. On the one side, Churchman (1948) and Rudner (1953) argued that because scientific hypotheses can never be completely verified, a complete analysis of the methods of scientific inference includes ethical judgments, in which the scientist must decide whether the evidence is sufficiently strong, or the probability sufficiently high, to warrant the acceptance of the hypothesis, which in turn will depend on the importance of making a mistake in accepting or rejecting it. Others, such as Jeffrey (1956) and Levi (1960), disagreed and instead defended a value-neutral view of science on which scientists should bracket their attitudes, preferences, temperament, and values when assessing the correctness of their inferences. For more details on this value-free ideal in the philosophy of science and its historical development, see Douglas (2009) and Howard (2003). For a broad set of case studies examining the role of values in science, see, e.g., Elliott & Richards 2017.

In recent decades, philosophical discussions of the evaluation of probabilistic hypotheses by statistical inference have largely focused on Bayesianism, which understands probability as a measure of a person’s degree of belief in an event, given the available information, and frequentism, which instead understands probability as the long-run frequency of a repeatable event. Hence, for Bayesians probabilities refer to a state of knowledge, whereas for frequentists probabilities refer to frequencies of events (see, e.g., Sober 2008, chapter 1 for a detailed introduction to Bayesianism and frequentism as well as to likelihoodism). Bayesianism aims at providing a quantifiable, algorithmic representation of belief revision, where belief revision is a function of prior beliefs (i.e., background knowledge) and incoming evidence. Bayesianism employs a rule based on Bayes’ theorem, a theorem of the probability calculus which relates conditional probabilities. The probability that a particular hypothesis is true is interpreted as a degree of belief, or credence, of the scientist. There will also be a probability and a degree of belief that a hypothesis will be true conditional on a piece of evidence (an observation, say) being true. Bayesianism prescribes that it is rational for the scientist to update their belief in the hypothesis to that conditional probability should it turn out that the evidence is, in fact, observed (see, e.g., Sprenger & Hartmann 2019 for a comprehensive treatment of Bayesian philosophy of science). Originating in the work of Neyman and Pearson, frequentism aims at providing the tools for reducing long-run error rates, such as the error-statistical approach developed by Mayo (1996) that focuses on how experimenters can avoid both type I and type II errors by building up a repertoire of procedures that detect errors if and only if they are present.
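Bayes’ theorem itself is simple enough to state as a few lines of code. The sketch below updates a degree of belief in a hypothesis H on observing evidence E; the prior and likelihoods are invented numbers chosen only for illustration.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior credence in H after observing E, by Bayes' theorem:
    P(H|E) = P(E|H) P(H) / P(E), with P(E) from the law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start undecided (prior 0.5); E is three times likelier if H is true.
posterior = bayes_update(0.5, 0.9, 0.3)  # approximately 0.75

# Updating is iterative: yesterday's posterior is today's prior.
posterior2 = bayes_update(posterior, 0.9, 0.3)  # approximately 0.9
```

The second call illustrates the algorithmic character the text describes: each new piece of evidence feeds the previous posterior back in as the new prior.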
Both Bayesianism and frequentism have developed over time, they are interpreted in different ways by their various proponents, and their relation to earlier criticisms of attempts at defining scientific method is viewed differently by proponents and critics. The literature, surveys, reviews and criticism in this area are vast, and the reader is referred to the entries on Bayesian epistemology and confirmation.

5. Method in Practice

Attention to scientific practice, as we have seen, is not itself new. However, the recent turn to practice in the philosophy of science can be seen as a correction to the pessimism with respect to method in philosophy of science in the later parts of the 20th century, and as an attempted reconciliation between sociological and rationalist explanations of scientific knowledge. Much of this work sees method as detailed and context-specific problem-solving procedures, and sees methodological analyses as simultaneously descriptive, critical and advisory (see Nickles 1987 for an exposition of this view). The following subsections survey some of these practice focuses; here we turn fully to topics rather than chronology.

5.1 Creative and exploratory practices

A problem with the distinction between the contexts of discovery and justification that figured so prominently in philosophy of science in the first half of the 20th century (see section 2 ) is that no such distinction can be clearly seen in scientific activity (see Arabatzis 2006). Thus, in recent decades, it has been recognized that the study of conceptual innovation and change should not be confined to psychology and sociology of science, but is also an important aspect of scientific practice which philosophy of science should address (see also the entry on scientific discovery ). Looking for the practices that drive conceptual innovation has led philosophers to examine both the reasoning practices of scientists and the wide realm of experimental practices that are not directed narrowly at testing hypotheses, that is, exploratory experimentation.

Examining the reasoning practices of historical and contemporary scientists, Nersessian (2008) has argued that new scientific concepts are constructed as solutions to specific problems by systematic reasoning, and that analogy, visual representation and thought-experimentation are among the important reasoning practices employed. These ubiquitous forms of reasoning are reliable—but also fallible—methods of conceptual development and change. On her account, model-based reasoning consists of cycles of construction, simulation, evaluation and adaptation of models that serve as interim interpretations of the target problem to be solved. Often, this process will lead to modifications or extensions, and a new cycle of simulation and evaluation. However, Nersessian also emphasizes that

creative model-based reasoning cannot be applied as a simple recipe, is not always productive of solutions, and even its most exemplary usages can lead to incorrect solutions. (Nersessian 2008: 11)

Thus, while on the one hand she agrees with many previous philosophers that there is no logic of discovery, discoveries can derive from reasoned processes, such that a large and integral part of scientific practice is

the creation of concepts through which to comprehend, structure, and communicate about physical phenomena …. (Nersessian 1987: 11)

Similarly, work on heuristics for discovery and theory construction by scholars such as Darden (1991) and Bechtel & Richardson (1993) presents science as problem solving and investigates scientific problem solving as a special case of problem-solving in general. Drawing largely on cases from the biological sciences, much of their focus has been on reasoning strategies for the generation, evaluation, and revision of mechanistic explanations of complex systems.

Addressing another aspect of the context distinction, namely the traditional view that the primary role of experiments is to test theoretical hypotheses according to the H-D model, other philosophers of science have argued for additional roles that experiments can play. The notion of exploratory experimentation was introduced to describe experiments driven by the desire to obtain empirical regularities and to develop concepts and classifications in which these regularities can be described (Steinle 1997, 2002; Burian 1997; Waters 2007). However, the difference between theory-driven experimentation and exploratory experimentation should not be seen as a sharp distinction. Theory-driven experiments are not always directed at testing hypotheses, but may also be directed at various kinds of fact-gathering, such as determining numerical parameters. Vice versa, exploratory experiments are usually informed by theory in various ways and are therefore not theory-free. Instead, in exploratory experiments phenomena are investigated without first limiting the possible outcomes of the experiment on the basis of extant theory about the phenomena.

The development of high-throughput instrumentation in molecular biology and neighbouring fields has given rise to a special type of exploratory experimentation that collects and analyses very large amounts of data. These new ‘omics’ disciplines are often said to represent a break with the ideal of hypothesis-driven science (Burian 2007; Elliott 2007; Waters 2007; O’Malley 2007) and are instead described as data-driven research (Leonelli 2012; Strasser 2012), or as a special kind of “convenience experimentation” in which many experiments are done simply because they are extraordinarily convenient to perform (Krohs 2012).

5.2 Computer methods and ‘new ways’ of doing science

The field of omics just described is possible because of the ability of computers to process, in a reasonable amount of time, the huge quantities of data required. Computers allow for more elaborate experimentation (higher speed, better filtering, more variables, sophisticated coordination and control), but also, through modelling and simulations, might constitute a form of experimentation themselves. Here, too, we can pose a version of the general question of method versus practice: does the practice of using computers fundamentally change scientific method, or merely provide a more efficient means of implementing standard methods?

Because computers can be used to automate measurements, quantifications, calculations, and statistical analyses where, for practical reasons, these operations cannot be otherwise carried out, many of the steps involved in reaching a conclusion on the basis of an experiment are now made inside a “black box”, without the direct involvement or awareness of a human. This has epistemological implications, regarding what we can know, and how we can know it. To have confidence in the results, computer methods are therefore subjected to tests of verification and validation.

The distinction between verification and validation is easiest to characterize in the case of computer simulations. In a typical computer simulation scenario computers are used to numerically integrate differential equations for which no analytic solution is available. The equations are part of the model the scientist uses to represent a phenomenon or system under investigation. Verifying a computer simulation means checking that the equations of the model are being correctly approximated. Validating a simulation means checking that the equations of the model are adequate for the inferences one wants to make on the basis of that model.
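A toy example may help fix the distinction. In the sketch below (a hypothetical model, chosen because its analytic solution is known), verification amounts to checking that the numerical integrator approximates the model equation dy/dt = -y well, with the error shrinking as the step size is refined; validating the model would instead require comparing its output against measurements of the phenomenon it is meant to represent.

```python
import math

def euler(f, y0, t_end, dt):
    """Integrate dy/dt = f(y) from t = 0 to t_end with Euler's method."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:  # guard against floating-point drift in t
        y += dt * f(y)
        t += dt
    return y

# Model: dy/dt = -y with y(0) = 1, whose analytic solution is exp(-t).
exact = math.exp(-1.0)
y_coarse = euler(lambda y: -y, 1.0, 1.0, 0.1)
y_fine = euler(lambda y: -y, 1.0, 1.0, 0.001)

# Verification check: refining the step size reduces the approximation error.
assert abs(y_fine - exact) < abs(y_coarse - exact)
```

In realistic simulations no analytic solution is available, which is precisely why verification must rely on indirect checks such as convergence under step-size refinement, as in the final assertion here.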

A number of issues related to computer simulations have been raised. The identification of verification and validation as the testing methods has been criticized. Oreskes et al. (1994) raise concerns that “validation”, because it suggests deductive inference, might lead to over-confidence in the results of simulations. The distinction itself is probably too clean, since actual practice in the testing of simulations mixes and moves back and forth between the two (Weissart 1997; Parker 2008a; Winsberg 2010). Computer simulations do seem to have a non-inductive character, given that the principles by which they operate are built in by the programmers, and any results of the simulation follow from those in-built principles in such a way that those results could, in principle, be deduced from the program code and its inputs. The status of simulations as experiments has therefore been examined (Kaufmann and Smarr 1993; Humphreys 1995; Hughes 1999; Norton and Suppe 2001). This literature considers the epistemology of these experiments: what we can learn by simulation, and also the kinds of justifications which can be given in applying that knowledge to the “real” world (Mayo 1996; Parker 2008b). As pointed out, part of the advantage of computer simulation derives from the fact that huge numbers of calculations can be carried out without requiring direct observation by the experimenter/​simulator. At the same time, many of these calculations are approximations to the calculations which would be performed first-hand in an ideal situation. Both factors introduce uncertainties into the inferences drawn from what is observed in the simulation.

For many of the reasons described above, computer simulations do not seem to belong clearly to either the experimental or theoretical domain. Rather, they seem to crucially involve aspects of both. This has led some authors, such as Fox Keller (2003: 200), to argue that we ought to consider computer simulation a “qualitatively different way of doing science”. The literature in general tends to follow Kaufmann and Smarr (1993) in referring to computer simulation as a “third way” for scientific methodology (theoretical reasoning and experimental practice are the first two ways). It should also be noted that the debates around these issues have tended to focus on the form of computer simulation typical in the physical sciences, where models are based on dynamical equations. Other forms of simulation might not have the same problems, or may have problems of their own (see the entry on computer simulations in science ).

In recent years, the rapid development of machine learning techniques has prompted some scholars to suggest that the scientific method has become “obsolete” (Anderson 2008, Carrol and Goodstein 2009). This has resulted in an intense debate on the relative merits of data-driven and hypothesis-driven research (for samples, see, e.g., Mazzocchi 2015 or Succi and Coveney 2018). For a detailed treatment of this topic, we refer to the entry on scientific research and big data .

6. Discourse on scientific method

Despite philosophical disagreements, the idea of the scientific method still figures prominently in contemporary discourse on many different topics, both within science and in society at large. Often, reference to scientific method is used in ways that convey either the legend of a single, universal method characteristic of all science, or that grant a particular method or set of methods privileged status as a special ‘gold standard’, often with reference to particular philosophers to vindicate the claims. Discourse on scientific method also typically arises when there is a need to distinguish between science and other activities, or to justify the special status conveyed to science. In these areas, the philosophical attempts at identifying a set of methods characteristic of scientific endeavors are closely related to the philosophy of science’s classical problem of demarcation (see the entry on science and pseudo-science ) and to the philosophical analysis of the social dimension of scientific knowledge and the role of science in democratic society.

One of the settings in which the legend of a single, universal scientific method has been particularly strong is science education (see, e.g., Bauer 1992; McComas 1996; Wivagg & Allchin 2002). [ 5 ] Often, ‘the scientific method’ is presented in textbooks and educational web pages as a fixed four- or five-step procedure starting from observations and description of a phenomenon and progressing through formulation of a hypothesis which explains the phenomenon, designing and conducting experiments to test the hypothesis, analyzing the results, and ending with drawing a conclusion. Such references to a universal scientific method can be found in educational material at all levels of science education (Blachowicz 2009), and numerous studies have shown that the idea of a general and universal scientific method often forms part of both students’ and teachers’ conception of science (see, e.g., Aikenhead 1987; Osborne et al. 2003). In response, it has been argued that science education needs to focus more on teaching about the nature of science, although views have differed on whether this is best done through student-led investigations, contemporary cases, or historical cases (Allchin, Andersen & Nielsen 2014).

Although occasionally phrased with reference to the H-D method, important historical roots of the legend in science education of a single, universal scientific method are the American philosopher and psychologist Dewey’s account of inquiry in How We Think (1910) and the British mathematician Karl Pearson’s account of science in Grammar of Science (1892). On Dewey’s account, inquiry is divided into the five steps of

(i) a felt difficulty, (ii) its location and definition, (iii) suggestion of a possible solution, (iv) development by reasoning of the bearing of the suggestions, (v) further observation and experiment leading to its acceptance or rejection. (Dewey 1910: 72)

Similarly, on Pearson’s account, scientific investigations start with measurement of data and observation of their correlation and sequence, from which scientific laws can be discovered with the aid of creative imagination. These laws have to be subject to criticism, and their final acceptance will have equal validity for “all normally constituted minds”. Both Dewey’s and Pearson’s accounts should be seen as generalized abstractions of inquiry and not restricted to the realm of science—although both Dewey and Pearson referred to their respective accounts as ‘the scientific method’.

Occasionally, scientists make sweeping statements about a simple and distinct scientific method, as exemplified by Feynman’s simplified version of a conjectures and refutations method presented, for example, in the last of his 1964 Cornell Messenger lectures. [ 6 ] However, just as often scientists have come to the same conclusion as recent philosophy of science that there is not any unique, easily described scientific method. For example, the physicist and Nobel Laureate Weinberg described in the paper “The Methods of Science … And Those By Which We Live” (1995) how

The fact that the standards of scientific success shift with time does not only make the philosophy of science difficult; it also raises problems for the public understanding of science. We do not have a fixed scientific method to rally around and defend. (1995: 8)

Interview studies with scientists on their conception of method show that scientists often find it hard to figure out whether available evidence confirms their hypothesis, and that there are no direct translations between general ideas about method and specific strategies to guide how research is conducted (Schickore & Hangel 2019, Hangel & Schickore 2017).

Reference to the scientific method has also often been used to argue for the scientific nature or special status of a particular activity. Philosophical positions that argue for a simple and unique scientific method as a criterion of demarcation, such as Popperian falsification, have often attracted practitioners who felt that they had a need to defend their domain of practice. For example, references to conjectures and refutation as the scientific method are abundant in much of the literature on complementary and alternative medicine (CAM)—alongside the competing position that CAM, as an alternative to conventional biomedicine, needs to develop its own methodology different from that of science.

Also within mainstream science, reference to the scientific method is used in arguments regarding the internal hierarchy of disciplines and domains. A frequently seen argument is that research based on the H-D method is superior to research based on induction from observations because in deductive inferences the conclusion follows necessarily from the premises. (See, e.g., Parascandola 1998 for an analysis of how this argument has been made to downgrade epidemiology compared to the laboratory sciences.) Similarly, based on an examination of the practices of major funding institutions such as the National Institutes of Health (NIH), the National Science Foundation (NSF) and the Biotechnology and Biological Sciences Research Council (BBSRC) in the UK, O’Malley et al. (2009) have argued that funding agencies seem to have a tendency to adhere to the view that the primary activity of science is to test hypotheses, while descriptive and exploratory research is seen as merely preparatory activities that are valuable only insofar as they fuel hypothesis-driven research.

In some areas of science, scholarly publications are structured in a way that may convey the impression of a neat and linear process of inquiry from stating a question, devising the methods by which to answer it, collecting the data, to drawing a conclusion from the analysis of data. For example, the codified format of publications in most biomedical journals known as the IMRAD format (Introduction, Methods, Results, and Discussion) is explicitly described by the journal editors as “not an arbitrary publication format but rather a direct reflection of the process of scientific discovery” (see the so-called “Vancouver Recommendations”, ICMJE 2013: 11). However, scientific publications do not in general reflect the process by which the reported scientific results were produced. For example, under the provocative title “Is the scientific paper a fraud?”, Medawar argued that scientific papers generally misrepresent how the results have been produced (Medawar 1963/1996). Similar views have been advanced by philosophers, historians and sociologists of science (Gilbert 1976; Holmes 1987; Knorr-Cetina 1981; Schickore 2008; Suppe 1998) who have argued that scientists’ experimental practices are messy and often do not follow any recognizable pattern. Publications of research results, they argue, are retrospective reconstructions of these activities that often do not preserve the temporal order or the logic of these activities, but are instead often constructed in order to screen off potential criticism (see Schickore 2008 for a review of this work).

Philosophical positions on the scientific method have also made it into the court room, especially in the US where judges have drawn on philosophy of science in deciding when to confer special status to scientific expert testimony. A key case is Daubert v. Merrell Dow Pharmaceuticals (92–102, 509 U.S. 579, 1993). In this case, the Supreme Court argued in its 1993 ruling that trial judges must ensure that expert testimony is reliable, and that in doing this the court must look at the expert’s methodology to determine whether the proffered evidence is actually scientific knowledge. Further, referring to works of Popper and Hempel the court stated that

ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge … is whether it can be (and has been) tested. (Justice Blackmun, Daubert v. Merrell Dow Pharmaceuticals; see Other Internet Resources for a link to the opinion)

But as argued by Haack (2005a,b, 2010) and by Foster & Huber (1999), by equating the question of whether a piece of testimony is reliable with the question of whether it is scientific as indicated by a special methodology, the court produced an inconsistent mixture of Popper’s and Hempel’s philosophies, and this has led to considerable confusion in subsequent case rulings that drew on the Daubert case (see Haack 2010 for a detailed exposition).

The difficulties around identifying the methods of science are also reflected in the difficulties of identifying scientific misconduct in the form of improper application of the method or methods of science. One of the first and most influential attempts at defining misconduct in science was the US definition from 1989 that defined misconduct as

fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community . (Code of Federal Regulations, part 50, subpart A., August 8, 1989, italics added)

However, the “other practices that seriously deviate” clause was heavily criticized because it could be used to suppress creative or novel science. For example, the National Academy of Science stated in their report Responsible Science (1992) that it

wishes to discourage the possibility that a misconduct complaint could be lodged against scientists based solely on their use of novel or unorthodox research methods. (NAS: 27)

This clause was therefore later removed from the definition. For an entry into the key philosophical literature on conduct in science, see Shamoo & Resnik (2009).

7. Conclusion

The question of the source of the success of science has been at the core of philosophy since the beginning of modern science. If viewed as a matter of epistemology more generally, scientific method is a part of the entire history of philosophy. Over that time, science and whatever methods its practitioners may employ have changed dramatically. Today, many philosophers have taken up the banners of pluralism or of practice to focus on what are, in effect, fine-grained and contextually limited examinations of scientific method. Others hope to shift perspectives in order to provide a renewed general account of what characterizes the activity we call science.

One such perspective has been offered recently by Hoyningen-Huene (2008, 2013), who argues from the history of philosophy of science that after three lengthy phases of characterizing science by its method, we are now in a phase where the belief in the existence of a positive scientific method has eroded and what has been left to characterize science is only its fallibility. First was a phase from Plato and Aristotle up until the 17th century where the specificity of scientific knowledge was seen in its absolute certainty established by proof from evident axioms; next was a phase up to the mid-19th century in which the means to establish the certainty of scientific knowledge had been generalized to include inductive procedures as well. In the third phase, which lasted until the last decades of the 20th century, it was recognized that empirical knowledge was fallible, but it was still granted a special status due to its distinctive mode of production. But now in the fourth phase, according to Hoyningen-Huene, historical and philosophical studies have shown how “scientific methods with the characteristics as posited in the second and third phase do not exist” (2008: 168) and there is no longer any consensus among philosophers and historians of science about the nature of science. For Hoyningen-Huene, this is too negative a stance, and he therefore poses the question about the nature of science anew. His own answer to this question is that “scientific knowledge differs from other kinds of knowledge, especially everyday knowledge, primarily by being more systematic” (Hoyningen-Huene 2013: 14). Systematicity can have several different dimensions: among them are more systematic descriptions, explanations, predictions, defense of knowledge claims, epistemic connectedness, ideal of completeness, knowledge generation, representation of knowledge and critical discourse.
Hence, what characterizes science is the greater care in excluding possible alternative explanations, the more detailed elaboration with respect to data on which predictions are based, the greater care in detecting and eliminating sources of error, the more articulate connections to other pieces of knowledge, etc. On this position, what characterizes science is not that the methods employed are unique to science, but that the methods are more carefully employed.

Another, similar approach has been offered by Haack (2003). She sets off, similarly to Hoyningen-Huene, from a dissatisfaction with the recent clash between what she calls Old Deferentialism and New Cynicism. The Old Deferentialist position is that science progressed inductively by accumulating true theories confirmed by empirical evidence, or deductively by testing conjectures against basic statements; the New Cynics’ position is that science has no epistemic authority and no uniquely rational method and is merely politics. Haack insists that, contrary to the views of the New Cynics, there are objective epistemic standards, and there is something epistemologically special about science, even though the Old Deferentialists pictured this in the wrong way. Instead, she offers a new Critical Commonsensist account on which standards of good, strong, supportive evidence and well-conducted, honest, thorough and imaginative inquiry are not exclusive to the sciences, but are the standards by which we judge all inquirers. In this sense, science does not differ in kind from other kinds of inquiry, but it may differ in the degree to which it requires broad and detailed background knowledge and a familiarity with a technical vocabulary that only specialists may possess.

  • Aikenhead, G.S., 1987, “High-school graduates’ beliefs about science-technology-society. III. Characteristics and limitations of scientific knowledge”, Science Education , 71(4): 459–487.
  • Allchin, D., H.M. Andersen and K. Nielsen, 2014, “Complementary Approaches to Teaching Nature of Science: Integrating Student Inquiry, Historical Cases, and Contemporary Cases in Classroom Practice”, Science Education , 98: 461–486.
  • Anderson, C., 2008, “The end of theory: The data deluge makes the scientific method obsolete”, Wired magazine , 16(7): 16–07
  • Arabatzis, T., 2006, “On the inextricability of the context of discovery and the context of justification”, in Revisiting Discovery and Justification , J. Schickore and F. Steinle (eds.), Dordrecht: Springer, pp. 215–230.
  • Barnes, J. (ed.), 1984, The Complete Works of Aristotle, Vols I and II , Princeton: Princeton University Press.
  • Barnes, B. and D. Bloor, 1982, “Relativism, Rationalism, and the Sociology of Knowledge”, in Rationality and Relativism , M. Hollis and S. Lukes (eds.), Cambridge: MIT Press, pp. 1–20.
  • Bauer, H.H., 1992, Scientific Literacy and the Myth of the Scientific Method , Urbana: University of Illinois Press.
  • Bechtel, W. and R.C. Richardson, 1993, Discovering complexity , Princeton, NJ: Princeton University Press.
  • Berkeley, G., 1734, The Analyst in De Motu and The Analyst: A Modern Edition with Introductions and Commentary , D. Jesseph (trans. and ed.), Dordrecht: Kluwer Academic Publishers, 1992.
  • Blachowicz, J., 2009, “How science textbooks treat scientific method: A philosopher’s perspective”, The British Journal for the Philosophy of Science , 60(2): 303–344.
  • Bloor, D., 1991, Knowledge and Social Imagery , Chicago: University of Chicago Press, 2nd edition.
  • Boyle, R., 1682, New experiments physico-mechanical, touching the air , Printed by Miles Flesher for Richard Davis, bookseller in Oxford.
  • Bridgman, P.W., 1927, The Logic of Modern Physics , New York: Macmillan.
  • –––, 1956, “The Methodological Character of Theoretical Concepts”, in The Foundations of Science and the Concepts of Science and Psychology , Herbert Feigl and Michael Scriven (eds.), Minnesota: University of Minneapolis Press, pp. 38–76.
  • Burian, R., 1997, “Exploratory Experimentation and the Role of Histochemical Techniques in the Work of Jean Brachet, 1938–1952”, History and Philosophy of the Life Sciences , 19(1): 27–45.
  • –––, 2007, “On microRNA and the need for exploratory experimentation in post-genomic molecular biology”, History and Philosophy of the Life Sciences , 29(3): 285–311.
  • Carnap, R., 1928, Der logische Aufbau der Welt , Berlin: Bernary, transl. by R.A. George, The Logical Structure of the World , Berkeley: University of California Press, 1967.
  • –––, 1956, “The methodological character of theoretical concepts”, Minnesota studies in the philosophy of science , 1: 38–76.
  • Carroll, S., and D. Goodstein, 2009, “Defining the scientific method”, Nature Methods , 6: 237.
  • Churchman, C.W., 1948, “Science, Pragmatics, Induction”, Philosophy of Science , 15(3): 249–268.
  • Cooper, J. (ed.), 1997, Plato: Complete Works , Indianapolis: Hackett.
  • Darden, L., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Dewey, J., 1910, How we think , New York: Dover Publications (reprinted 1997).
  • Douglas, H., 2009, Science, Policy, and the Value-Free Ideal , Pittsburgh: University of Pittsburgh Press.
  • Dupré, J., 2004, “Miracle of Monism”, in Naturalism in Question , Mario De Caro and David Macarthur (eds.), Cambridge, MA: Harvard University Press, pp. 36–58.
  • Elliott, K.C., 2007, “Varieties of exploratory experimentation in nanotoxicology”, History and Philosophy of the Life Sciences , 29(3): 311–334.
  • Elliott, K. C., and T. Richards (eds.), 2017, Exploring inductive risk: Case studies of values in science , Oxford: Oxford University Press.
  • Falcon, Andrea, 2005, Aristotle and the science of nature: Unity without uniformity , Cambridge: Cambridge University Press.
  • Feyerabend, P., 1978, Science in a Free Society , London: New Left Books.
  • –––, 1988, Against Method , London: Verso, 2nd edition.
  • Fisher, R.A., 1955, “Statistical Methods and Scientific Induction”, Journal of The Royal Statistical Society. Series B (Methodological) , 17(1): 69–78.
  • Foster, K. and P.W. Huber, 1999, Judging Science. Scientific Knowledge and the Federal Courts , Cambridge: MIT Press.
  • Fox Keller, E., 2003, “Models, Simulation, and ‘computer experiments’”, in The Philosophy of Scientific Experimentation , H. Radder (ed.), Pittsburgh: Pittsburgh University Press, 198–215.
  • Gilbert, G., 1976, “The transformation of research findings into scientific knowledge”, Social Studies of Science , 6: 281–306.
  • Gimbel, S., 2011, Exploring the Scientific Method , Chicago: University of Chicago Press.
  • Goodman, N., 1965, Fact, Fiction, and Forecast , Indianapolis: Bobbs-Merrill.
  • Haack, S., 1995, “Science is neither sacred nor a confidence trick”, Foundations of Science , 1(3): 323–335.
  • –––, 2003, Defending science—within reason , Amherst: Prometheus.
  • –––, 2005a, “Disentangling Daubert: an epistemological study in theory and practice”, Journal of Philosophy, Science and Law , 5, available online. doi:10.5840/jpsl2005513
  • –––, 2005b, “Trial and error: The Supreme Court’s philosophy of science”, American Journal of Public Health , 95: S66–S73.
  • –––, 2010, “Federal Philosophy of Science: A Deconstruction-and a Reconstruction”, NYUJL & Liberty , 5: 394.
  • Hangel, N. and J. Schickore, 2017, “Scientists’ conceptions of good research practice”, Perspectives on Science , 25(6): 766–791.
  • Harper, W.L., 2011, Isaac Newton’s Scientific Method: Turning Data into Evidence about Gravity and Cosmology , Oxford: Oxford University Press.
  • Hempel, C., 1950, “Problems and Changes in the Empiricist Criterion of Meaning”, Revue Internationale de Philosophie , 41(11): 41–63.
  • –––, 1951, “The Concept of Cognitive Significance: A Reconsideration”, Proceedings of the American Academy of Arts and Sciences , 80(1): 61–77.
  • –––, 1965, Aspects of scientific explanation and other essays in the philosophy of science , New York–London: Free Press.
  • –––, 1966, Philosophy of Natural Science , Englewood Cliffs: Prentice-Hall.
  • Holmes, F.L., 1987, “Scientific writing and scientific discovery”, Isis , 78(2): 220–235.
  • Howard, D., 2003, “Two left turns make a right: On the curious political career of North American philosophy of science at midcentury”, in Logical Empiricism in North America , G.L. Hardcastle & A.W. Richardson (eds.), Minneapolis: University of Minnesota Press, pp. 25–93.
  • Hoyningen-Huene, P., 2008, “Systematicity: The nature of science”, Philosophia , 36(2): 167–180.
  • –––, 2013, Systematicity. The Nature of Science , Oxford: Oxford University Press.
  • Howie, D., 2002, Interpreting probability: Controversies and developments in the early twentieth century , Cambridge: Cambridge University Press.
  • Hughes, R., 1999, “The Ising Model, Computer Simulation, and Universal Physics”, in Models as Mediators , M. Morgan and M. Morrison (eds.), Cambridge: Cambridge University Press, pp. 97–145
  • Hume, D., 1739, A Treatise of Human Nature , D. Fate Norton and M.J. Norton (eds.), Oxford: Oxford University Press, 2000.
  • Humphreys, P., 1995, “Computational science and scientific method”, Minds and Machines , 5(1): 499–512.
  • ICMJE, 2013, “Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals”, International Committee of Medical Journal Editors, available online , accessed August 13 2014
  • Jeffrey, R.C., 1956, “Valuation and Acceptance of Scientific Hypotheses”, Philosophy of Science , 23(3): 237–246.
  • Kaufmann, W.J., and L.L. Smarr, 1993, Supercomputing and the Transformation of Science , New York: Scientific American Library.
  • Knorr-Cetina, K., 1981, The Manufacture of Knowledge , Oxford: Pergamon Press.
  • Krohs, U., 2012, “Convenience experimentation”, Studies in History and Philosophy of Biological and Biomedical Sciences , 43: 52–57.
  • Kuhn, T.S., 1962, The Structure of Scientific Revolutions , Chicago: University of Chicago Press.
  • Latour, B. and S. Woolgar, 1986, Laboratory Life: The Construction of Scientific Facts , Princeton: Princeton University Press, 2nd edition.
  • Laudan, L., 1968, “Theories of scientific method from Plato to Mach”, History of Science , 7(1): 1–63.
  • Lenhard, J., 2006, “Models and statistical inference: The controversy between Fisher and Neyman-Pearson”, The British Journal for the Philosophy of Science , 57(1): 69–91.
  • Leonelli, S., 2012, “Making Sense of Data-Driven Research in the Biological and the Biomedical Sciences”, Studies in the History and Philosophy of the Biological and Biomedical Sciences , 43(1): 1–3.
  • Levi, I., 1960, “Must the scientist make value judgments?”, Journal of Philosophy , 57(11): 345–357.
  • Lindley, D., 1991, Theory Change in Science: Strategies from Mendelian Genetics , Oxford: Oxford University Press.
  • Lipton, P., 2004, Inference to the Best Explanation , London: Routledge, 2nd edition.
  • Marks, H.M., 2000, The progress of experiment: science and therapeutic reform in the United States, 1900–1990 , Cambridge: Cambridge University Press.
  • Mazzocchi, F., 2015, “Could Big Data be the end of theory in science?”, EMBO reports , 16: 1250–1255.
  • Mayo, D.G., 1996, Error and the Growth of Experimental Knowledge , Chicago: University of Chicago Press.
  • McComas, W.F., 1996, “Ten myths of science: Reexamining what we think we know about the nature of science”, School Science and Mathematics , 96(1): 10–16.
  • Medawar, P.B., 1963/1996, “Is the scientific paper a fraud?”, in The Strange Case of the Spotted Mouse and Other Classic Essays on Science , Oxford: Oxford University Press, 33–39.
  • Mill, J.S., 1963, Collected Works of John Stuart Mill , J. M. Robson (ed.), Toronto: University of Toronto Press.
  • NAS, 1992, Responsible Science: Ensuring the integrity of the research process , Washington DC: National Academy Press.
  • Nersessian, N.J., 1987, “A cognitive-historical approach to meaning in scientific theories”, in The process of science , N. Nersessian (ed.), Berlin: Springer, pp. 161–177.
  • –––, 2008, Creating Scientific Concepts , Cambridge: MIT Press.
  • Newton, I., 1726, Philosophiae naturalis Principia Mathematica (3rd edition), in The Principia: Mathematical Principles of Natural Philosophy: A New Translation , I.B. Cohen and A. Whitman (trans.), Berkeley: University of California Press, 1999.
  • –––, 1704, Opticks or A Treatise of the Reflections, Refractions, Inflections & Colors of Light , New York: Dover Publications, 1952.
  • Neyman, J., 1956, “Note on an Article by Sir Ronald Fisher”, Journal of the Royal Statistical Society. Series B (Methodological) , 18: 288–294.
  • Nickles, T., 1987, “Methodology, heuristics, and rationality”, in Rational changes in science: Essays on Scientific Reasoning , J.C. Pitt (ed.), Berlin: Springer, pp. 103–132.
  • Nicod, J., 1924, Le problème logique de l’induction , Paris: Alcan. (Engl. transl. “The Logical Problem of Induction”, in Foundations of Geometry and Induction , London: Routledge, 2000.)
  • Nola, R. and H. Sankey, 2000a, “A selective survey of theories of scientific method”, in Nola and Sankey 2000b: 1–65.
  • –––, 2000b, After Popper, Kuhn and Feyerabend. Recent Issues in Theories of Scientific Method , London: Springer.
  • –––, 2007, Theories of Scientific Method , Stocksfield: Acumen.
  • Norton, S., and F. Suppe, 2001, “Why atmospheric modeling is good science”, in Changing the Atmosphere: Expert Knowledge and Environmental Governance , C. Miller and P. Edwards (eds.), Cambridge, MA: MIT Press, 88–133.
  • O’Malley, M., 2007, “Exploratory experimentation and scientific practice: Metagenomics and the proteorhodopsin case”, History and Philosophy of the Life Sciences , 29(3): 337–360.
  • O’Malley, M., C. Haufe, K. Elliot, and R. Burian, 2009, “Philosophies of Funding”, Cell , 138: 611–615.
  • Oreskes, N., K. Shrader-Frechette, and K. Belitz, 1994, “Verification, Validation and Confirmation of Numerical Models in the Earth Sciences”, Science , 263(5147): 641–646.
  • Osborne, J., S. Simon, and S. Collins, 2003, “Attitudes towards science: a review of the literature and its implications”, International Journal of Science Education , 25(9): 1049–1079.
  • Parascandola, M., 1998, “Epidemiology—2nd-Rate Science”, Public Health Reports , 113(4): 312–320.
  • Parker, W., 2008a, “Franklin, Holmes and the Epistemology of Computer Simulation”, International Studies in the Philosophy of Science , 22(2): 165–83.
  • –––, 2008b, “Computer Simulation through an Error-Statistical Lens”, Synthese , 163(3): 371–84.
  • Pearson, K., 1892, The Grammar of Science , London: J.M. Dent and Sons, 1951.
  • Pearson, E.S., 1955, “Statistical Concepts in Their Relation to Reality”, Journal of the Royal Statistical Society , B, 17: 204–207.
  • Pickering, A., 1984, Constructing Quarks: A Sociological History of Particle Physics , Edinburgh: Edinburgh University Press.
  • Popper, K.R., 1959, The Logic of Scientific Discovery , London: Routledge, 2002.
  • –––, 1963, Conjectures and Refutations , London: Routledge, 2002.
  • –––, 1985, Unended Quest: An Intellectual Autobiography , La Salle: Open Court Publishing Co.
  • Rudner, R., 1953, “The Scientist Qua Scientist Making Value Judgments”, Philosophy of Science , 20(1): 1–6.
  • Rudolph, J.L., 2005, “Epistemology for the masses: The origin of ‘The Scientific Method’ in American Schools”, History of Education Quarterly , 45(3): 341–376.
  • Schickore, J., 2008, “Doing science, writing science”, Philosophy of Science , 75: 323–343.
  • Schickore, J. and N. Hangel, 2019, “‘It might be this, it should be that…’ uncertainty and doubt in day-to-day science practice”, European Journal for Philosophy of Science , 9(2): 31. doi:10.1007/s13194-019-0253-9
  • Shamoo, A.E. and D.B. Resnik, 2009, Responsible Conduct of Research , Oxford: Oxford University Press.
  • Shank, J.B., 2008, The Newton Wars and the Beginning of the French Enlightenment , Chicago: The University of Chicago Press.
  • Shapin, S. and S. Schaffer, 1985, Leviathan and the air-pump , Princeton: Princeton University Press.
  • Smith, G.E., 2002, “The Methodology of the Principia”, in The Cambridge Companion to Newton , I.B. Cohen and G.E. Smith (eds.), Cambridge: Cambridge University Press, 138–173.
  • Snyder, L.J., 1997a, “Discoverers’ Induction”, Philosophy of Science , 64: 580–604.
  • –––, 1997b, “The Mill-Whewell Debate: Much Ado About Induction”, Perspectives on Science , 5: 159–198.
  • –––, 1999, “Renovating the Novum Organum: Bacon, Whewell and Induction”, Studies in History and Philosophy of Science , 30: 531–557.
  • Sober, E., 2008, Evidence and Evolution. The logic behind the science , Cambridge: Cambridge University Press.
  • Sprenger, J. and S. Hartmann, 2019, Bayesian philosophy of science , Oxford: Oxford University Press.
  • Steinle, F., 1997, “Entering New Fields: Exploratory Uses of Experimentation”, Philosophy of Science (Proceedings), 64: S65–S74.
  • –––, 2002, “Experiments in History and Philosophy of Science”, Perspectives on Science , 10(4): 408–432.
  • Strasser, B.J., 2012, “Data-driven sciences: From wonder cabinets to electronic databases”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 85–87.
  • Succi, S. and P.V. Coveney, 2018, “Big data: the end of the scientific method?”, Philosophical Transactions of the Royal Society A , 377: 20180145. doi:10.1098/rsta.2018.0145
  • Suppe, F., 1998, “The Structure of a Scientific Paper”, Philosophy of Science , 65(3): 381–405.
  • Swijtink, Z.G., 1987, “The objectification of observation: Measurement and statistical methods in the nineteenth century”, in The probabilistic revolution. Ideas in History, Vol. 1 , L. Kruger (ed.), Cambridge MA: MIT Press, pp. 261–285.
  • Waters, C.K., 2007, “The nature and context of exploratory experimentation: An introduction to three case studies of exploratory research”, History and Philosophy of the Life Sciences , 29(3): 275–284.
  • Weinberg, S., 1995, “The methods of science… and those by which we live”, Academic Questions , 8(2): 7–13.
  • Weissert, T., 1997, The Genesis of Simulation in Dynamics: Pursuing the Fermi-Pasta-Ulam Problem , New York: Springer Verlag.
  • Harvey, W., 1628, Exercitatio Anatomica de Motu Cordis et Sanguinis in Animalibus , in On the Motion of the Heart and Blood in Animals , R. Willis (trans.), Buffalo: Prometheus Books, 1993.
  • Winsberg, E., 2010, Science in the Age of Computer Simulation , Chicago: University of Chicago Press.
  • Wivagg, D. & D. Allchin, 2002, “The Dogma of the Scientific Method”, The American Biology Teacher , 64(9): 645–646.

Copyright © 2021 by Brian Hepburn <brian.hepburn@wichita.edu> and Hanne Andersen <hanne.andersen@ind.ku.dk>



How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes . Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.

What is the purpose of a literature review?

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Examples of literature reviews

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines.

Step 1 – Search for relevant literature

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, for a research question about the effect of social media on body image among Generation Z, your keyword list might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use boolean operators to help narrow down your search.
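
For instance, operators like AND, OR, and NOT let you require every key concept while allowing any of its synonyms. As a minimal sketch (the helper below is illustrative, not a real database API), here is how such a query string can be assembled from the keyword groups listed above:

```python
# Build a boolean search query from groups of synonyms.
# Terms within a group are joined with OR; groups are joined with AND.
def build_query(*groups):
    clauses = []
    for group in groups:
        # Quote multi-word terms so they are searched as exact phrases
        terms = [f'"{t}"' if " " in t else t for t in group]
        clauses.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(clauses)

query = build_query(
    ["social media", "Instagram", "TikTok"],
    ["body image", "self-esteem"],
    ["adolescents", "teenagers"],
)
print(query)
# ("social media" OR Instagram OR TikTok) AND ("body image" OR self-esteem) AND (adolescents OR teenagers)
```

The resulting string can be pasted into most databases' advanced-search boxes; exact operator syntax varies slightly between databases, so check each database's help page.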

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

Step 2 – Evaluate and select sources

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.
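
One lightweight way to keep such records consistent is a simple structured note per source. The sketch below is one possible layout, not a standard; the field names are illustrative choices:

```python
from dataclasses import dataclass, field

@dataclass
class SourceNote:
    """One entry in an annotated bibliography."""
    citation: str                 # full citation, e.g. in APA style
    summary: str                  # what the source argues or finds
    evaluation: str = ""          # strengths, weaknesses, relevance
    themes: list = field(default_factory=list)  # tags for later grouping

notes = [
    SourceNote(
        citation="McCombes, S. (2023). How to write a literature review. Scribbr.",
        summary="Five-step guide to searching, evaluating, and structuring a review.",
        themes=["methodology"],
    ),
]

# Group entries by theme to prepare a thematic structure later
by_theme = {}
for note in notes:
    for theme in note.themes:
        by_theme.setdefault(theme, []).append(note)
```

Tagging each note with themes as you read makes the next step, spotting themes, debates, and gaps across sources, largely mechanical.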

Step 3 – Identify themes, debates, and gaps

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in the literature on social media and body image, you might find that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

Step 4 – Outline your literature review’s structure

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Step 5 – Write your literature review

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.


Frequently asked questions

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.


How to write a good scientific review article

Affiliation

  • 1 The FEBS Journal Editorial Office, Cambridge, UK.
  • PMID: 35792782
  • DOI: 10.1111/febs.16565

Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up to date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the importance of building review-writing into a scientific career cannot be overstated. In this instalment of The FEBS Journal's Words of Advice series, I provide detailed guidance on planning and writing an informative and engaging literature review.

© 2022 Federation of European Biochemical Societies.

Review articles: purpose, process, and structure

  • Published: 02 October 2017
  • Volume 46, pages 1–5 (2018)

  • Robert W. Palmatier
  • Mark B. Houston
  • John Hulland


Many research disciplines feature high-impact journals that are dedicated outlets for review papers (or review–conceptual combinations) (e.g., Academy of Management Review , Psychological Bulletin , Medicinal Research Reviews ). The rationale for such outlets is the premise that research integration and synthesis provides an important, and possibly even a required, step in the scientific process. Review papers tend to include both quantitative (i.e., meta-analytic, systematic reviews) and narrative or more qualitative components; together, they provide platforms for new conceptual frameworks, reveal inconsistencies in the extant body of research, synthesize diverse results, and generally give other scholars a “state-of-the-art” snapshot of a domain, often written by topic experts (Bem 1995). Many premier marketing journals publish meta-analytic review papers too, though authors often must overcome reviewers’ concerns that their contributions are limited due to the absence of “new data.” Furthermore, relatively few non-meta-analysis review papers appear in marketing journals, probably due to researchers’ perceptions that such papers have limited publication opportunities or their beliefs that the field lacks a research tradition or “respect” for such papers. In many cases, an editor must provide strong support to help such review papers navigate the review process. Yet, once published, such papers tend to be widely cited, suggesting that members of the field find them useful (see Bettencourt and Houston 2001).

In this editorial, we seek to address three topics relevant to review papers. First, we outline a case for their importance to the scientific process, by describing the purpose of review papers. Second, we detail the review paper editorial initiative conducted over the past two years by the Journal of the Academy of Marketing Science ( JAMS ), focused on increasing the prevalence of review papers. Third, we describe a process and structure for systematic (i.e., non-meta-analytic) review papers, referring to Grewal et al.’s (2018) insights into parallel meta-analytic (effects estimation) review papers. (For some strong recent examples of marketing-related meta-analyses, see Knoll and Matthes 2017; Verma et al. 2016.)

Purpose of review papers

In their most general form, review papers “are critical evaluations of material that has already been published,” some that include quantitative effects estimation (i.e., meta-analyses) and some that do not (i.e., systematic reviews) (Bem 1995, p. 172). They carefully identify and synthesize relevant literature to evaluate a specific research question, substantive domain, theoretical approach, or methodology and thereby provide readers with a state-of-the-art understanding of the research topic. Many of these benefits are highlighted in Hanssens’ (2018) paper titled “The Value of Empirical Generalizations in Marketing,” published in this same issue of JAMS.

The purpose of and contributions associated with review papers can vary depending on their specific type and research question, but in general, they aim to

Resolve definitional ambiguities and outline the scope of the topic.

Provide an integrated, synthesized overview of the current state of knowledge.

Identify inconsistencies in prior results and potential explanations (e.g., moderators, mediators, measures, approaches).

Evaluate existing methodological approaches and unique insights.

Develop conceptual frameworks to reconcile and extend past research.

Describe research insights, existing gaps, and future research directions.

Not every review paper can offer all of these benefits, but this list represents their key contributions. To provide a sufficient contribution, a review paper needs to achieve three key standards. First, the research domain needs to be well suited for a review paper, such that a sufficient body of past research exists to make the integration and synthesis valuable—especially if extant research reveals theoretical inconsistencies or heterogeneity in its effects. Second, the review paper must be well executed, with an appropriate literature collection and analysis techniques, sufficient breadth and depth of literature coverage, and a compelling writing style. Third, the manuscript must offer significant new insights based on its systematic comparison of multiple studies, rather than simply a “book report” that describes past research. This third, most critical standard is often the most difficult, especially for authors who have not “lived” with the research domain for many years, because achieving it requires drawing some non-obvious connections and insights from multiple studies and their many different aspects (e.g., context, method, measures). Typically, after the “review” portion of the paper has been completed, the authors must spend many more months identifying the connections to uncover incremental insights, each of which takes time to detail and explicate.

The increasing methodological rigor and technical sophistication of many marketing studies also means that they often focus on smaller problems with fewer constructs. By synthesizing these piecemeal findings, reconciling conflicting evidence, and drawing a “big picture,” meta-analyses and systematic review papers become indispensable to our comprehensive understanding of a phenomenon, among both academic and practitioner communities. Thus, good review papers provide a solid platform for future research, in the reviewed domain but also in other areas, in that researchers can use a good review paper to learn about and extend key insights to new areas.

This domain extension, outside of the core area being reviewed, is one of the key benefits of review papers that often gets overlooked. Yet it also is becoming ever more important with the expanding breadth of marketing (e.g., econometric modeling, finance, strategic management, applied psychology, sociology) and the increasing velocity in the accumulation of marketing knowledge (e.g., digital marketing, social media, big data). Against this backdrop, systematic review papers and meta-analyses help academics and interested managers keep track of research findings that fall outside their main area of specialization.

JAMS’ review paper editorial initiative

With a strong belief in the importance of review papers, the editorial team of JAMS has purposely sought out leading scholars to provide substantive review papers, both meta-analysis and systematic, for publication in JAMS. Many of the scholars approached have voiced concerns about the risk of such endeavors, due to the lack of alternative outlets for these types of papers. Therefore, we have instituted a unique process, in which the authors develop a detailed outline of their paper, key tables and figures, and a description of their literature review process. On the basis of this outline, we grant assurances that the contribution hurdle will not be an issue for publication in JAMS, as long as the authors execute the proposed outline as written. Each paper still goes through the normal review process and must meet all publication quality standards, of course. In many cases, an Area Editor takes an active role to help ensure that each paper provides sufficient insights, as required for a high-quality review paper. This process gives the author team confidence to invest effort in the process. An analysis of the marketing journals in the Financial Times (FT 50) journal list for the past five years (2012–2016) shows that JAMS has become the most common outlet for these papers, publishing 31% of all review papers that appeared in the top six marketing journals.

As a next step in positioning JAMS as a receptive marketing outlet for review papers, we are conducting a Thought Leaders Conference on Generalizations in Marketing: Systematic Reviews and Meta-Analyses, with a corresponding special issue (see www.springer.com/jams ). We will continue our process of seeking out review papers as an editorial strategy in areas that could be advanced by the integration and synthesis of extant research. We expect that, ultimately, such efforts will become unnecessary, as authors initiate review papers on topics of their own choosing to submit them to JAMS. In the past two years, JAMS already has increased the number of papers it publishes annually, from just over 40 to around 60 papers per year; this growth has provided “space” for 8–10 review papers per year, reflecting our editorial target.

Consistent with JAMS’ overall focus on managerially relevant and strategy-focused topics, all review papers should reflect this emphasis. For example, the domains, theories, and methods reviewed need to have some application to past or emerging managerial research. A good rule of thumb is that the substantive domain, theory, or method should attract the attention of readers of JAMS.

The efforts of multiple editors and Area Editors in turn have generated a body of review papers that can serve as useful examples of the different types and approaches that JAMS has published.

Domain-based review papers

Domain-based review papers review, synthesize, and extend a body of literature in the same substantive domain. For example, in “The Role of Privacy in Marketing” (Martin and Murphy 2017), the authors identify and define various privacy-related constructs that have appeared in recent literature. Then they examine the different theoretical perspectives brought to bear on privacy topics related to consumers and organizations, including ethical and legal perspectives. These foundations lead into their systematic review of privacy-related articles over a clearly defined date range, from which they extract key insights from each study. This exercise of synthesizing diverse perspectives allows these authors to describe state-of-the-art knowledge regarding privacy in marketing and identify useful paths for research. Similarly, a new paper by Cleeren et al. (2017), “Marketing Research on Product-Harm Crises: A Review, Managerial Implications, and an Agenda for Future Research,” provides a rich systematic review, synthesizes extant research, and points the way forward for scholars who are interested in issues related to defective or dangerous market offerings.

Theory-based review papers

Theory-based review papers review, synthesize, and extend a body of literature that uses the same underlying theory. For example, Rindfleisch and Heide’s (1997) classic review of research in marketing using transaction cost economics has been cited more than 2200 times, with a significant impact on applications of the theory to the discipline in the past 20 years. A recent paper in JAMS with similar intent, which could serve as a helpful model, focuses on “Resource-Based Theory in Marketing” (Kozlenkova et al. 2014). The article dives deeply into a description of the theory and its underlying assumptions, then organizes a systematic review of relevant literature according to various perspectives through which the theory has been applied in marketing. The authors conclude by identifying topical domains in marketing that might benefit from additional applications of the theory (e.g., marketing exchange), as well as related theories that could be integrated meaningfully with insights from the resource-based theory.

Method-based review papers

Method-based review papers review, synthesize, and extend a body of literature that uses the same underlying method. For example, in “Event Study Methodology in the Marketing Literature: An Overview” (Sorescu et al. 2017), the authors identify published studies in marketing that use an event study methodology. After a brief review of the theoretical foundations of event studies, they describe in detail the key design considerations associated with this method. The article then provides a roadmap for conducting event studies and compares this approach with a stock market returns analysis. The authors finish with a summary of the strengths and weaknesses of the event study method, which in turn suggests three main areas for further research. Similarly, “Discriminant Validity Testing in Marketing: An Analysis, Causes for Concern, and Proposed Remedies” (Voorhies et al. 2016) systematically reviews existing approaches for assessing discriminant validity in marketing contexts, then uses Monte Carlo simulation to determine which tests are most effective.

Our long-term editorial strategy is to make sure JAMS becomes and remains a well-recognized outlet for both meta-analysis and systematic managerial review papers in marketing. Ideally, review papers would come to represent 10%–20% of the papers published by the journal.

Process and structure for review papers

In this section, we review the process and typical structure of a systematic review paper, which lacks any long or established tradition in marketing research. The article by Grewal et al. (2018) provides a summary of effects-focused review papers (i.e., meta-analyses), so we do not discuss them in detail here.

Systematic literature review process

Some review papers submitted to journals take a “narrative” approach. They discuss current knowledge about a research domain, yet they often are flawed, in that they lack criteria for article inclusion (or, more accurately, article exclusion), fail to discuss the methodology used to evaluate included articles, and avoid critical assessment of the field (Barczak 2017). Such reviews tend to be purely descriptive, with little lasting impact.

In contrast, a systematic literature review aims to “comprehensively locate and synthesize research that bears on a particular question, using organized, transparent, and replicable procedures at each step in the process” (Littell et al. 2008, p. 1). Littell et al. describe six key steps in the systematic review process. The extent to which each step is emphasized varies by paper, but all are important components of the review.

Topic formulation. The author sets out clear objectives for the review and articulates the specific research questions or hypotheses that will be investigated.

Study design. The author specifies relevant problems, populations, constructs, and settings of interest. The aim is to define explicit criteria that can be used to assess whether any particular study should be included in or excluded from the review. Furthermore, it is important to develop a protocol in advance that describes the procedures and methods to be used to evaluate published work.

Sampling. The aim in this third step is to identify all potentially relevant studies, including both published and unpublished research. To this end, the author must first define the sampling unit to be used in the review (e.g., individual, strategic business unit) and then develop an appropriate sampling plan.

Data collection. By retrieving the potentially relevant studies identified in the third step, the author can determine whether each study meets the eligibility requirements set out in the second step. For studies deemed acceptable, the data are extracted from each study and entered into standardized templates. These templates should be based on the protocols established in step 2.

Data analysis. The degree and nature of the analyses used to describe and examine the collected data vary widely by review. Purely descriptive analysis is useful as a starting point but rarely is sufficient on its own. The examination of trends, clusters of ideas, and multivariate relationships among constructs helps flesh out a deeper understanding of the domain. For example, both Hult (2015) and Huber et al. (2014) use bibliometric approaches (e.g., examine citation data using multidimensional scaling and cluster analysis techniques) to identify emerging versus declining themes in the broad field of marketing.

Reporting. Three key aspects of this final step are common across systematic reviews. First, the results from the fifth step need to be presented, clearly and compellingly, using narratives, tables, and figures. Second, core results that emerge from the review must be interpreted and discussed by the author. These revelatory insights should reflect a deeper understanding of the topic being investigated, not simply a regurgitation of well-established knowledge. Third, the author needs to describe the implications of these unique insights for both future research and managerial practice.
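The six steps above can be sketched, in miniature, as a screening-and-extraction pipeline. Everything in this sketch—the records, field names, and inclusion criteria—is hypothetical, intended only to make the workflow concrete:

```python
# Hypothetical records retrieved from a database search (input to step 4).
candidates = [
    {"title": "Privacy and trust in online retail", "year": 2015, "empirical": True},
    {"title": "A history of marketing thought", "year": 1998, "empirical": False},
    {"title": "Data privacy and firm value", "year": 2016, "empirical": True},
]

# Step 2: explicit inclusion criteria, defined in advance of screening.
def eligible(record):
    return record["year"] >= 2000 and record["empirical"]

# Step 4: screen the candidates and extract each accepted study into a
# standardized template, as the protocol from step 2 prescribes.
TEMPLATE_FIELDS = ("title", "year")
included = [
    {field: rec[field] for field in TEMPLATE_FIELDS}
    for rec in candidates
    if eligible(rec)
]

# Step 6: report simple descriptive results of the screening.
n_screened, n_included = len(candidates), len(included)
```

In a real review the eligibility function would encode the full protocol (populations, constructs, settings), and the template would capture far more fields, but the logic of explicit criteria applied uniformly to every candidate is the same.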

A new paper by Watson et al. (2017), “Harnessing Difference: A Capability-Based Framework for Stakeholder Engagement in Environmental Innovation,” provides a good example of a systematic review, starting with a cohesive conceptual framework that helps establish the boundaries of the review while also identifying core constructs and their relationships. The article then explicitly describes the procedures used to search for potentially relevant papers and clearly sets out criteria for study inclusion or exclusion. Next, a detailed discussion of core elements in the framework weaves published research findings into the exposition. The paper ends with a presentation of key implications and suggestions for the next steps. Similarly, “Marketing Survey Research Best Practices: Evidence and Recommendations from a Review of JAMS Articles” (Hulland et al. 2017) systematically reviews published marketing studies that use survey techniques, describes recent trends, and suggests best practices. In their review, Hulland et al. examine the entire population of survey papers published in JAMS over a ten-year span, relying on an extensive standardized data template to facilitate their subsequent data analysis.

Structure of systematic review papers

There is no cookie-cutter recipe for the exact structure of a useful systematic review paper; the final structure depends on the authors’ insights and intended points of emphasis. However, several key components are likely integral to a paper’s ability to contribute.

Depth and rigor

Systematic review papers must avoid falling into two potential “ditches.” The first ditch threatens when the paper fails to demonstrate that a systematic approach was used for selecting articles for inclusion and capturing their insights. If a reader gets the impression that the author has cherry-picked only articles that fit some preset notion, or has not been thorough enough and has omitted articles that make significant contributions to the field, the paper will be consigned to the proverbial side of the road when it comes to the discipline’s attention.

Authors who fall into the other ditch present a thorough, complete overview that offers only a mind-numbing recitation, without evident organization, synthesis, or critical evaluation. Although comprehensive, such a paper is more of an index than a useful review. The reviewed articles must be grouped in a meaningful way to guide the reader toward a better understanding of the focal phenomenon and provide a foundation for insights about future research directions. Some scholars organize research by scholarly perspectives (e.g., the psychology of privacy, the economics of privacy; Martin and Murphy 2017); others classify the chosen articles by objective research aspects (e.g., empirical setting, research design, conceptual frameworks; Cleeren et al. 2017). The method of organization chosen must allow the author to capture the complexity of the underlying phenomenon (e.g., including temporal or evolutionary aspects, if relevant).

Replicability

Processes for the identification and inclusion of research articles should be described in sufficient detail, such that an interested reader could replicate the procedure. The procedures used to analyze chosen articles and extract their empirical findings and/or key takeaways should be described with similar specificity and detail.
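As a small illustration of the level of detail that makes a search replicable, the search protocol can be archived in a machine-readable form alongside the manuscript. The database, query, date, and counts below are invented for the example:

```python
import json

# Hypothetical search-protocol record; logging these details lets an
# interested reader rerun the same query and compare hit counts.
protocol = {
    "database": "Web of Science",
    "query": '"product-harm crisis" AND marketing',
    "date_searched": "2017-06-01",
    "hits": 143,
    "inclusion_criteria": ["peer-reviewed", "English language", "2000-2016"],
}

# Serialize deterministically so the protocol can be versioned and shared.
protocol_json = json.dumps(protocol, indent=2, sort_keys=True)

# Anyone with the file can reload the exact protocol that was used.
restored = json.loads(protocol_json)
```

The same record-keeping discipline applies to the extraction stage: the standardized template used to code each article should be preserved in full, not just summarized.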

We already have noted the potential usefulness of well-done review papers. Some scholars always are new to the field or domain in question, so review papers also need to help them gain foundational knowledge. Key constructs, definitions, assumptions, and theories should be laid out clearly (for which purpose summary tables are extremely helpful). An integrated conceptual model can be useful to organize cited works. Most scholars integrate the knowledge they gain from reading the review paper into their plans for future research, so it is also critical that review papers clearly lay out implications (and specific directions) for research. Ideally, readers will come away from a review article filled with enthusiasm about ways they might contribute to the ongoing development of the field.

Helpful format

Because such a large body of research is being synthesized in most review papers, simply reading through the list of included studies can be exhausting for readers. We cannot overstate the importance of tables and figures in review papers, used in conjunction with meaningful headings and subheadings. Vast literature review tables often are essential, but they must be organized in a way that makes their insights digestible to the reader; in some cases, a sequence of more focused tables may be better than a single, comprehensive table.

In summary, articles that review extant research in a domain (topic, theory, or method) can be incredibly useful to the scientific progress of our field. Whether integrating the insights from extant research through a meta-analysis or synthesizing them through a systematic assessment, the promised benefits are similar. Both formats provide readers with a useful overview of knowledge about the focal phenomenon, as well as insights on key dilemmas and conflicting findings that suggest future research directions. Thus, the editorial team at JAMS encourages scholars to continue to invest the time and effort to construct thoughtful review papers.

Barczak, G. (2017). From the editor: writing a review article. Journal of Product Innovation Management, 34 (2), 120–121.


Bem, D. J. (1995). Writing a review article for psychological bulletin. Psychological Bulletin, 118 (2), 172–177.

Bettencourt, L. A., & Houston, M. B. (2001). Assessing the impact of article method type and subject area on citation frequency and reference diversity. Marketing Letters, 12 (4), 327–340.

Cleeren, K., Dekimpe, M. G., & van Heerde, H. J. (2017). Marketing research on product-harm crises: a review, managerial implications, and an agenda for future research. Journal of the Academy of Marketing Science, 45 (5), 593–615.

Grewal, D., Puccinelli, N. M., & Monroe, K. B. (2018). Meta-analysis: error cancels and truth accrues. Journal of the Academy of Marketing Science, 46 (1).

Hanssens, D. M. (2018). The value of empirical generalizations in marketing. Journal of the Academy of Marketing Science, 46 (1).

Huber, J., Kamakura, W., & Mela, C. F. (2014). A topical history of JMR . Journal of Marketing Research, 51 (1), 84–91.

Hulland, J., Baumgartner, H., & Smith, K. M. (2017). Marketing survey research best practices: evidence and recommendations from a review of JAMS articles. Journal of the Academy of Marketing Science. https://doi.org/10.1007/s11747-017-0532-y .

Hult, G. T. M. (2015). JAMS 2010—2015: literature themes and intellectual structure. Journal of the Academy of Marketing Science, 43 (6), 663–669.

Knoll, J., & Matthes, J. (2017). The effectiveness of celebrity endorsements: a meta-analysis. Journal of the Academy of Marketing Science, 45 (1), 55–75.

Kozlenkova, I. V., Samaha, S. A., & Palmatier, R. W. (2014). Resource-based theory in marketing. Journal of the Academy of Marketing Science, 42 (1), 1–21.

Littell, J. H., Corcoran, J., & Pillai, V. (2008). Systematic reviews and meta-analysis . New York: Oxford University Press.


Martin, K. D., & Murphy, P. E. (2017). The role of data privacy in marketing. Journal of the Academy of Marketing Science, 45 (2), 135–155.

Rindfleisch, A., & Heide, J. B. (1997). Transaction cost analysis: past, present, and future applications. Journal of Marketing, 61 (4), 30–54.

Sorescu, A., Warren, N. L., & Ertekin, L. (2017). Event study methodology in the marketing literature: an overview. Journal of the Academy of Marketing Science, 45 (2), 186–207.

Verma, V., Sharma, D., & Sheth, J. (2016). Does relationship marketing matter in online retailing? A meta-analytic approach. Journal of the Academy of Marketing Science, 44 (2), 206–217.

Voorhies, C. M., Brady, M. K., Calantone, R., & Ramirez, E. (2016). Discriminant validity testing in marketing: an analysis, causes for concern, and proposed remedies. Journal of the Academy of Marketing Science, 44 (1), 119–134.

Watson, R., Wilson, H. N., Smart, P., & Macdonald, E. K. (2017). Harnessing difference: a capability-based framework for stakeholder engagement in environmental innovation. Journal of Product Innovation Management. https://doi.org/10.1111/jpim.12394 .

Download references

Author information

Authors and Affiliations

Foster School of Business, University of Washington, Box: 353226, Seattle, WA, 98195-3226, USA

Robert W. Palmatier

Neeley School of Business, Texas Christian University, Fort Worth, TX, USA

Mark B. Houston

Terry College of Business, University of Georgia, Athens, GA, USA

John Hulland


Corresponding author

Correspondence to Robert W. Palmatier .


About this article

Palmatier, R.W., Houston, M.B. & Hulland, J. Review articles: purpose, process, and structure. J. of the Acad. Mark. Sci. 46 , 1–5 (2018). https://doi.org/10.1007/s11747-017-0563-4



Scientific Method

Illustration by J.R. Bee. ThoughtCo. 


The scientific method is a series of steps followed by scientific investigators to answer specific questions about the natural world. It involves making observations, formulating a hypothesis , and conducting scientific experiments . Scientific inquiry starts with an observation followed by the formulation of a question about what has been observed. The steps of the scientific method are as follows:

Observation

The first step of the scientific method involves making an observation about something that interests you. This is very important if you are doing a science project because you want your project to be focused on something that will hold your attention. Your observation can be on anything from plant movement to animal behavior, as long as it is something you really want to know more about.​ This is where you come up with the idea for your science project.

Question

Once you've made your observation, you must formulate a question about what you have observed. Your question should tell what it is that you are trying to discover or accomplish in your experiment. When stating your question you should be as specific as possible. For example, if you are doing a project on plants, you may want to know how plants interact with microbes. Your question may be: Do plant spices inhibit bacterial growth?

Hypothesis

The hypothesis is a key component of the scientific process. A hypothesis is an idea that is suggested as an explanation for a natural event, a particular experience, or a specific condition that can be tested through definable experimentation. It states the purpose of your experiment, the variables used, and the predicted outcome of your experiment. It is important to note that a hypothesis must be testable: your experiment must be able to either support or falsify it. An example of a good hypothesis is: If there is a relation between listening to music and heart rate, then listening to music will cause a person's resting heart rate to either increase or decrease.
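To see what makes the heart-rate hypothesis testable, consider how its prediction would be checked against measurements. The numbers below are entirely made up for illustration:

```python
from statistics import mean

# Fabricated resting heart rates (beats per minute), for illustration only:
# the same people measured while listening to music and while sitting in silence.
with_music = [68, 71, 66, 70, 69]
without_music = [72, 74, 71, 73, 75]

# The hypothesis predicts that the two conditions will differ.
difference = mean(with_music) - mean(without_music)
hypothesis_supported = difference != 0
```

A real test would need a proper experimental design and a statistical test of whether the difference could have arisen by chance, but the point stands: a testable hypothesis tells you in advance what comparison to make.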

Experiment

Once you've developed a hypothesis, you must design and conduct an experiment that will test it. You should develop a procedure that states very clearly how you plan to conduct your experiment, and that procedure should identify a control. Controls allow us to test a single variable in an experiment because they remain unchanged. We can then make observations and comparisons between our control and the experimental group, in which the independent variable (the thing that changes in the experiment) is manipulated, to develop an accurate conclusion.

Results

The results are where you report what happened in the experiment. That includes detailing all observations and data collected during your experiment. Most people find it easier to visualize the data by charting or graphing the information.

Conclusion

The final step of the scientific method is developing a conclusion. This is where all of the results from the experiment are analyzed and a determination is reached about the hypothesis. Did the experiment support or reject your hypothesis? If your hypothesis was supported, great. If not, repeat the experiment or think of ways to improve your procedure.
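The cycle from hypothesis to conclusion can be sketched as a simple loop, echoing the toaster example earlier in this unit: when an experiment rejects a hypothesis, you form a new one and test again. The fault and the hypotheses here are invented for the illustration:

```python
# Toy model of iterating the scientific method: the true cause of the
# failure is hidden from the investigator, who tests hypotheses in turn.
actual_fault = "broken wire"  # hypothetical ground truth

def run_experiment(hypothesis):
    # Stand-in for a real experiment: the test succeeds only when the
    # hypothesis matches the actual fault.
    return hypothesis == actual_fault

hypotheses = ["broken outlet", "broken wire"]
tested = []
supported = None
for hypothesis in hypotheses:
    tested.append(hypothesis)
    if run_experiment(hypothesis):
        supported = hypothesis
        break  # hypothesis supported; stop iterating
    # otherwise the hypothesis is rejected and the next one is tried
```

The first hypothesis ("broken outlet") fails its test, so the investigator moves on; only the second survives the experiment. Real science rarely terminates so neatly, but the structure of the loop is the same.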

1.2 The Scientific Methods

Section Learning Objectives

By the end of this section, you will be able to do the following:

  • Explain how the methods of science are used to make scientific discoveries
  • Define a scientific model and describe examples of physical and mathematical models used in physics
  • Compare and contrast hypothesis, theory, and law

Teacher Support

The learning objectives in this section will help your students master the following standards:

  • (A) know the definition of science and understand that it has limitations, as specified in subsection (b)(2) of this section;
  • (B) know that scientific hypotheses are tentative and testable statements that must be capable of being supported or not supported by observational evidence. Hypotheses of durable explanatory power which have been tested over a wide variety of conditions are incorporated into theories;
  • (C) know that scientific theories are based on natural and physical phenomena and are capable of being tested by multiple independent researchers. Unlike hypotheses, scientific theories are well-established and highly-reliable explanations, but may be subject to change as new areas of science and new technologies are developed;
  • (D) distinguish between scientific hypotheses and scientific theories.

Section Key Terms

[OL] Pre-assessment for this section could involve students sharing or writing down an anecdote about when they used the methods of science. Then, students could label their thought processes in their anecdote with the appropriate scientific methods. The class could also discuss their definitions of theory and law, both outside and within the context of science.

[OL] It should be noted and possibly mentioned that a scientist , as mentioned in this section, does not necessarily mean a trained scientist. It could be anyone using methods of science.

Scientific Methods

Scientists often plan and carry out investigations to answer questions about the universe around us. These investigations may lead to natural laws. Such laws are intrinsic to the universe, meaning that humans did not create them and cannot change them. We can only discover and understand them. Their discovery is a very human endeavor, with all the elements of mystery, imagination, struggle, triumph, and disappointment inherent in any creative effort. The cornerstone of discovering natural laws is observation. Science must describe the universe as it is, not as we imagine or wish it to be.

We all are curious to some extent. We look around, make generalizations, and try to understand what we see. For example, we look up and wonder whether one type of cloud signals an oncoming storm. As we become serious about exploring nature, we become more organized and formal in collecting and analyzing data. We attempt greater precision, perform controlled experiments (if we can), and write down ideas about how data may be organized. We then formulate models, theories, and laws based on the data we have collected, and communicate those results with others. This, in a nutshell, describes the scientific method that scientists employ to decide scientific issues on the basis of evidence from observation and experiment.

An investigation often begins with a scientist making an observation. The scientist observes a pattern or trend within the natural world. Observation may generate questions that the scientist wishes to answer. Next, the scientist may perform some research about the topic and devise a hypothesis. A hypothesis is a testable statement that describes how something in the natural world works. In essence, a hypothesis is an educated guess that explains something about an observation.

[OL] An educated guess is used throughout this section in describing a hypothesis to combat the tendency to think of a theory as an educated guess.

Scientists may test the hypothesis by performing an experiment. During an experiment, the scientist collects data that will help them learn about the phenomenon they are studying. Then the scientist analyzes the results of the experiment (that is, the data), often using statistical, mathematical, and/or graphical methods. From the data analysis, they draw conclusions. They may conclude that their experiment either supports or rejects their hypothesis. If the hypothesis is supported, the scientist usually goes on to test another hypothesis related to the first. If their hypothesis is rejected, they will often then test a new and different hypothesis in their effort to learn more about whatever they are studying.

Scientific processes can be applied to many situations. Let’s say that you try to turn on your car, but it will not start. You have just made an observation! You ask yourself, "Why won’t my car start?" You can now use scientific processes to answer this question. First, you generate a hypothesis such as, "The car won’t start because it has no gasoline in the gas tank." To test this hypothesis, you put gasoline in the car and try to start it again. If the car starts, then your hypothesis is supported by the experiment. If the car does not start, then your hypothesis is rejected. You will then need to think up a new hypothesis to test such as, "My car won’t start because the fuel pump is broken." Hopefully, your investigations lead you to discover why the car won’t start and enable you to fix it.
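The iterative loop described above can be sketched in code; the hypotheses and their simulated "test" outcomes below are invented stand-ins for the real checks:

```python
# Hedged sketch of the iterative scientific method applied to the car example.
# Each test function pretends to run an experiment and report its outcome.

def tank_is_empty():
    return False   # pretend: adding gasoline did not fix the car

def fuel_pump_is_broken():
    return True    # pretend: replacing the pump fixed it

hypotheses = [
    ("The car has no gasoline in the tank.", tank_is_empty),
    ("The fuel pump is broken.", fuel_pump_is_broken),
]

for statement, test in hypotheses:
    if test():
        print(f"Supported: {statement}")
        break
    print(f"Rejected: {statement} -> form a new hypothesis and iterate")
```

The point of the sketch is the control flow: each rejected hypothesis feeds back into the next round of questioning rather than ending the investigation.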

A model is a representation of something that is often too difficult (or impossible) to study directly. Models can take the form of physical models, equations, computer programs, or simulations—computer graphics/animations. Models are tools that are especially useful in modern physics because they let us visualize phenomena that we normally cannot observe with our senses, such as very small objects or objects that move at high speeds. For example, we can understand the structure of an atom using models, without seeing an atom with our own eyes. Although images of single atoms are now possible, these images are extremely difficult to achieve and are only possible due to the success of our models. The existence of these images is a consequence rather than a source of our understanding of atoms. Models are always approximate, so they are simpler to consider than the real situation; the more complete a model is, the more complicated it must be. Models put the intangible or the extremely complex into human terms that we can visualize, discuss, and hypothesize about.

Scientific models are constructed based on the results of previous experiments. Even so, models often describe a phenomenon only partially or in a few limited situations. Some phenomena are so complex that it may be impossible to model them in their entirety, even using computers. An example is the electron cloud model of the atom, in which electrons move around the atom's center in distinct clouds (Figure 1.12) that represent the likelihood of finding an electron in different places. This model helps us to visualize the structure of an atom. However, it does not show us exactly where an electron will be within its cloud at any one particular time.

As mentioned previously, physicists use a variety of models, including equations, physical models, and computer simulations. For example, three-dimensional models are commonly used in chemistry and physics to model molecules. Properties other than appearance or location are usually modeled using mathematics, where functions show how these properties relate to one another. Processes such as the formation of a star or the planets can also be modeled using computer simulations. Once a simulation is correctly programmed based on actual experimental data, it can allow us to view processes that happened in the past or happen too quickly or slowly for us to observe directly. In addition, scientists can run virtual experiments using computer-based models. In a model of planet formation, for example, the scientist could alter the amount or type of rocks present in space and see how it affects planet formation.

Scientists use models and experimental results to construct explanations of observations or design solutions to problems. For example, one way to make a car more fuel efficient is to reduce the friction or drag caused by air flowing around the moving car. This can be done by designing the body shape of the car to be more aerodynamic, such as by using rounded corners instead of sharp ones. Engineers can then construct physical models of the car body, place them in a wind tunnel, and examine the flow of air around the model. This can also be done mathematically in a computer simulation. The air flow pattern can be analyzed for regions of smooth air flow and for eddies that indicate drag. The model of the car body may have to be altered slightly to produce the smoothest pattern of air flow (i.e., the least drag). The pattern with the least drag may be the solution to increasing fuel efficiency of the car. This solution might then be incorporated into the car design.
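The drag comparison the engineers perform can be sketched numerically with the standard drag equation F_d = ½ρv²C_dA; the speed, frontal area, and drag coefficients below are illustrative values, not measurements of any real car:

```python
# Hedged sketch: comparing drag force for two body shapes using the standard
# drag equation F_d = 0.5 * rho * v**2 * C_d * A. All values are illustrative.
rho = 1.2        # air density, kg/m^3 (near sea level)
v = 30.0         # speed, m/s (about 108 km/h)
area = 2.2       # frontal area, m^2

def drag_force(c_d):
    return 0.5 * rho * v**2 * c_d * area

boxy = drag_force(0.45)       # sharp-cornered body (assumed coefficient)
rounded = drag_force(0.30)    # more aerodynamic, rounded body (assumed coefficient)
print(f"boxy: {boxy:.0f} N, rounded: {rounded:.0f} N")
```

Lowering the drag coefficient directly lowers the force the engine must overcome, which is why the smoother air flow pattern translates into better fuel efficiency.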

Using Models and the Scientific Processes

Be sure to secure loose items before opening the window or door.

In this activity, you will learn about scientific models by making a model of how air flows through your classroom or a room in your house.

  • One room with at least one window or door that can be opened
  • Work with a group of four, as directed by your teacher. Close all of the windows and doors in the room you are working in. Your teacher may assign you a specific window or door to study.
  • Before opening any windows or doors, draw a to-scale diagram of your room. First, measure the length and width of your room using the tape measure. Then, transform the measurement using a scale that could fit on your paper, such as 5 centimeters = 1 meter.
  • Your teacher will assign you a specific window or door to study air flow. On your diagram, add arrows showing your hypothesis (before opening any windows or doors) of how air will flow through the room when your assigned window or door is opened. Use pencil so that you can easily make changes to your diagram.
  • On your diagram, mark four locations where you would like to test air flow in your room. To test for airflow, hold a strip of single ply tissue paper between the thumb and index finger. Note the direction that the paper moves when exposed to the airflow. Then, for each location, predict which way the paper will move if your air flow diagram is correct.
  • Now, each member of your group will stand in one of the four selected areas. Each member will test the airflow. Agree upon an approximate height at which everyone will hold their papers.
  • When your teacher tells you to, open your assigned window and/or door. Each person should note the direction that their paper points immediately after the window or door is opened. Record your results on your diagram.
  • Did the airflow test data support or refute the hypothetical model of air flow shown in your diagram? Why or why not? Correct your model based on your experimental evidence.
  • With your group, discuss how accurate your model is. What limitations did it have? Write down the limitations that your group agreed upon.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.

This Snap Lab! has students construct a model of how air flows in their classroom. Each group of four students will create a model of air flow in their classroom using a scale drawing of the room. Then, the groups will test the validity of their model by placing weathervanes that they have constructed around the room and opening a window or door. By observing the weather vanes, students will see how air actually flows through the room from a specific window or door. Students will then correct their model based on their experimental evidence. The following material list is given per group:

  • One room with at least one window or door that can be opened (An optimal configuration would be one window or door per group.)
  • Several pieces of construction paper (at least four per group)
  • Strips of single ply tissue paper
  • One tape measure (long enough to measure the dimensions of the room)
  • Group size can vary depending on the number of windows/doors available and the number of students in the class.
  • The room dimensions could be provided by the teacher. Also, students may need a brief introduction in how to make a drawing to scale.
  • This is another opportunity to discuss controlled experiments in terms of why the students should hold the strips of tissue paper at the same height and in the same way. One student could also serve as a control and stand far away from the window/door or in another area that will not receive air flow from the window/door.
  • You will probably need to coordinate this when multiple windows or doors are used. Only one window or door should be opened at a time for best results. Between openings, allow a short period (5 minutes) when all windows and doors are closed, if possible.

Answers to the Grasp Check will vary, but the air flow in the new window or door should be based on what the students observed in their experiment.

Scientific Laws and Theories

A scientific law is a description of a pattern in nature that is true in all circumstances that have been studied. That is, physical laws are meant to be universal, meaning that they apply throughout the known universe. Laws are often also concise, whereas theories are more complicated. A law can be expressed in the form of a single sentence or mathematical equation. For example, Newton's second law of motion, which relates the motion of an object to the force applied (F), the mass of the object (m), and the object's acceleration (a), is simply stated using the equation F = ma.
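A hedged numeric sketch of Newton's second law, F = ma, using arbitrary example values:

```python
# Hedged sketch: Newton's second law, F = m * a.
# The mass and acceleration are arbitrary example numbers, not from the text.
m = 2.0      # mass in kilograms
a = 3.0      # acceleration in meters per second squared
F = m * a    # net force in newtons
print(F)     # 6.0
```

The law's conciseness is the point: one short equation applies to every object, mass, and acceleration studied so far.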

Scientific ideas and explanations that are true in many, but not all, situations in the universe are usually called principles. An example is Pascal's principle, which explains properties of liquids, but not solids or gases. However, the distinction between laws and principles is sometimes not carefully made in science.

A theory is an explanation for patterns in nature that is supported by much scientific evidence and verified multiple times by multiple researchers. While many people confuse theories with educated guesses or hypotheses, theories have withstood more rigorous testing and verification than hypotheses.

[OL] Explain to students that in informal, everyday English the word theory can be used to describe an idea that is possibly true but that has not been proven to be true. This use of the word theory often leads people to think that scientific theories are nothing more than educated guesses. This is not just a misconception among students, but among the general public as well.

As a closing idea about scientific processes, we want to point out that scientific laws and theories, even those that have been supported by experiments for centuries, can still be changed by new discoveries. This is especially true when new technologies emerge that allow us to observe things that were formerly unobservable. Imagine how viewing previously invisible objects with a microscope or viewing Earth for the first time from space may have instantly changed our scientific theories and laws! What discoveries still await us in the future? The constant retesting and perfecting of our scientific laws and theories allows our knowledge of nature to progress. For this reason, many scientists are reluctant to say that their studies prove anything. By saying support instead of prove, scientists keep the door open for future discoveries, even if they won't occur for centuries or even millennia.

[OL] With regard to scientists avoiding using the word prove , the general public knows that science has proven certain things such as that the heart pumps blood and the Earth is round. However, scientists should shy away from using prove because it is impossible to test every single instance and every set of conditions in a system to absolutely prove anything. Using support or similar terminology leaves the door open for further discovery.

Check Your Understanding

  • Models are simpler to analyze.
  • Models give more accurate results.
  • Models provide more reliable predictions.
  • Models do not require any computer calculations.
  • They are the same.
  • A hypothesis has been thoroughly tested and found to be true.
  • A hypothesis is a tentative assumption based on what is already known.
  • A hypothesis is a broad explanation firmly supported by evidence.
  • A scientific model is a representation of something that can be easily studied directly. It is useful for studying things that can be easily analyzed by humans.
  • A scientific model is a representation of something that is often too difficult to study directly. It is useful for studying a complex system or systems that humans cannot observe directly.
  • A scientific model is a representation of scientific equipment. It is useful for studying working principles of scientific equipment.
  • A scientific model is a representation of a laboratory where experiments are performed. It is useful for studying requirements needed inside the laboratory.
  • The hypothesis must be validated by scientific experiments.
  • The hypothesis must not include any physical quantity.
  • The hypothesis must be a short and concise statement.
  • The hypothesis must apply to all the situations in the universe.
  • A scientific theory is an explanation of natural phenomena that is supported by evidence.
  • A scientific theory is an explanation of natural phenomena without the support of evidence.
  • A scientific theory is an educated guess about the natural phenomena occurring in nature.
  • A scientific theory is an uneducated guess about natural phenomena occurring in nature.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is an educated guess about a natural phenomenon.
  • A hypothesis is an educated guess about natural phenomenon, while a scientific theory is an explanation of natural world with experimental support.
  • A hypothesis is experimental evidence of a natural phenomenon, while a scientific theory is an explanation of the natural world with experimental support.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is experimental evidence of a natural phenomenon.

Use the Check Your Understanding questions to assess students’ achievement of the section’s learning objectives. If students are struggling with a specific objective, the Check Your Understanding will help identify which objective and direct students to the relevant content.



Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-physics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/physics/pages/1-introduction
  • Authors: Paul Peter Urone, Roger Hinrichs
  • Publisher/website: OpenStax
  • Book title: Physics
  • Publication date: Mar 26, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/physics/pages/1-introduction
  • Section URL: https://openstax.org/books/physics/pages/1-2-the-scientific-methods

© Jan 19, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.

Scientific Method: Role and Importance Essay

The scientific method is a problem-solving strategy that is at the heart of biology and other sciences. It includes five steps: making an observation, asking a question, forming a hypothesis (a testable explanation), making a prediction based on the hypothesis, and testing that prediction. After that, in the feedback step of iterating, the results are used to make new predictions. The scientific method is almost always an iterative process. In other words, rather than a straight line, it is a cycle. The outcome of one round of questioning generates feedback that helps to enhance the next round of questioning.

Science is an activity that involves the logical explanation, prediction, and control of empirical phenomena. The concepts of reasoning applicable to the pursuit of this endeavor are referred to as scientific reasoning (Cowles, 2020). They include topics such as experimental design, hypothesis testing, and data interpretation. All sciences, including social sciences, follow the scientific method (Cowles, 2020). Different questions and tests are asked and performed by scientists in various domains. They do, however, have a common approach to finding logical and evidence-based answers.

Scientific reasoning is fundamental for all types of scientific study, not simply institutional research. Scientists do employ specific ideas that non-scientists do not have to use in everyday life. However, many reasoning principles are useful in everyday life. Even if one is not a scientist, one must use sound reasoning to understand, anticipate, and regulate the events that occur in the environment. When people want to start their careers, preserve their finances, or enhance their health, they need to acquire evidence to determine the most effective method for achieving their goals. Good scientific thinking skills come in handy in all of these situations.

Experiments, surveys, case studies, descriptive studies, and non-descriptive studies are all forms of research used in the scientific method. In an experiment, a researcher manipulates certain factors in a controlled environment and assesses their impact on other variables (Black, 2018). Descriptive research focuses on the nature of the relationship between the variables being studied rather than on cause and effect. A case study is a detailed examination of a single instance in which something unexpected has occurred. This is normally done with a single individual in extreme or exceptional instances. Large groups of individuals are polled to answer questions about certain topics in surveys. Correlational approaches are used in non-descriptive investigations to anticipate the link between two or more variables.

The Lau and Chan technique describes how to assess the validity of a theory or hypothesis using the scientific method, also known as the hypothetical-deductive method (Lau & Chan, 2017). For testing theories or hypotheses, the hypothetical-deductive technique (HD method) is highly useful. It is sometimes referred to as "scientific procedure." This is not quite right because science cannot possibly employ only one approach. However, the HD method is critical since it is one of the most fundamental approaches used in many scientific disciplines, including economics, physics, and biochemistry. Its implementation can be broken down into four stages: identifying a testable hypothesis, generating predictions from the hypothesis, using experiments to check those predictions, and evaluating the outcome (Cowles, 2020). If the tested predictions turn out to be correct, the hypothesis is confirmed; if the results are incorrect, the hypothesis is disconfirmed.
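The four HD stages can be sketched as a short program; the hypothesis, prediction, and simulated experiment below are invented placeholders, not part of the Lau and Chan tutorials:

```python
# Hedged sketch of the hypothetical-deductive (HD) method as four stages.
# The hypothesis, prediction, and experiment are invented placeholders.

def experiment():
    """Stand-in for a real experiment; returns the observed outcome."""
    return "bread toasts in outlet B"

hypothesis = "The wall outlet, not the toaster, is broken."   # 1. identify a testable hypothesis
prediction = "bread toasts in outlet B"                       # 2. derive a prediction from it
observation = experiment()                                    # 3. test the prediction experimentally
confirmed = (observation == prediction)                       # 4. compare outcome with prediction
print("hypothesis confirmed" if confirmed else "hypothesis disconfirmed")
```

A disconfirmed hypothesis would send us back to stage 1 with a new candidate explanation, which is exactly the iterative cycle described earlier.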

The HD method instructs us on how to test a hypothesis, and each scientific theory must be testable.

One cannot discover evidence to show whether a theory is likely or not if it cannot be tested. In that circumstance, it cannot be considered scientific information. Consider the possibility that there are ghosts that people cannot see, cannot communicate with, and cannot detect directly or indirectly. This hypothesis is defined in such a way that testing is not possible. It could still be real, and there could be such ghosts, but people would never know; thus, this cannot be considered a scientific hypothesis. In general, validating a theory's predictions raises the likelihood that it is right. However, this does not establish definitively that the theory is right in and of itself. Often a hypothesis generates a prediction only when combined with additional auxiliary assumptions. When such a prediction fails, the hypothesis itself may still be valid, because one of the auxiliary assumptions, rather than the hypothesis, may be at fault.

When a theory makes a faulty prediction, it might be difficult to determine whether the theory should be rejected or whether the auxiliary assumptions are flawed. Astronomers in the 19th century, for example, discovered that Newtonian physics could not adequately explain the orbit of the planet Mercury. This is due to the fact that Newtonian physics is incorrect, and you require relativity to get a more accurate orbit prediction. When astronomers discovered Uranus in 1781, they discovered that its orbit did not match Newtonian physics predictions. However, astronomers concluded that it could be explained if Uranus was being affected by another planet, and Neptune was discovered as a result.

I have had several instances where I made assumptions on an important issue regardless of evidence. Once, when I prepared work on the topic of power distribution in the workplace and its relation to gender, I assumed that, possibly because of general feminine traits, women are less likely than men to create a strong image of power. In fact, such a hypothesis needs to be tested, and it is testable. For example, I could first define what is meant by feminine traits by collecting data from different biological and psychological sources. After that, I could gather information about which factors or behavior patterns contribute to establishing power in the workplace. If I found a correlation between feminine character traits, communication style, and behavioral patterns and the distribution of power in the workplace, then I could confirm my hypothesis.
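Such a correlation test can be sketched with a Pearson coefficient; the scores below are fabricated toy data, not real survey results:

```python
# Hedged sketch: checking the workplace-power hypothesis with a correlation.
# The scores are fabricated toy data for illustration only.
import math

trait_scores = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. rated strength of "feminine" communication style
power_scores = [5.0, 4.5, 3.0, 2.5, 1.0]   # e.g. rated influence in the workplace

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(trait_scores, power_scores)
print(f"r = {r:.2f}")  # a strong negative r would be consistent with the hypothesis
```

Correlation alone would only be consistent with the hypothesis, not proof of it; as the essay notes elsewhere, confounding variables and auxiliary assumptions would still need to be ruled out.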

Thus, applying the scientific method can help to improve critical reasoning by using tools from scientific reasoning. By supporting a hypothesis with evidence from scientific research and statistical data, one can make a claim more credible and objective. The scientific method is essential for the creation of scientific theories that explain information and ideas in a scientifically rational manner. In a typical scientific method application, a researcher makes a hypothesis, tests it using various methods, and then alters it based on the results of the tests and experiments. The new hypothesis is then retested, further changed, and retested until it matches observable events and testing results. Hypotheses serve as tools for scientists to collect data in this way. Scientists can build broad general explanations, or scientific theories, based on that evidence and the numerous scientific experiments conducted to investigate possibilities. In conclusion, the scientific method is an important approach to examining a hypothesis. By using its tools, inferences become rational and objective.

Black, M. (2018). Critical thinking: An introduction to logic and scientific method . Pickle Partners Publishing.

Cowles, H. M. (2020). The Scientific Method . Harvard University Press.

Lau, J., & Chan, J. (2017). Scientific methodology: Tutorials 1-9 .


IvyPanda. (2023, March 14). Scientific Method: Role and Importance. https://ivypanda.com/essays/scientific-method-role-and-importance/




12 February 2024

China conducts first nationwide review of retractions and research misconduct

Smriti Mallapaty


The reputation of Chinese science has been "adversely affected" by the number of retractions in recent years, according to a government notice. Credit: Qilai Shen/Bloomberg/Getty

Chinese universities are days away from the deadline to complete a nationwide audit of retracted research papers and probe of research misconduct. By 15 February, universities must submit to the government a comprehensive list of all academic articles retracted from English- and Chinese-language journals in the past three years. They need to clarify why the papers were retracted and investigate cases involving misconduct, according to a 20 November notice from the Ministry of Education’s Department of Science, Technology and Informatization.

The government launched the nationwide self-review in response to Hindawi, a London-based subsidiary of the publisher Wiley, retracting a large number of papers by Chinese authors. These retractions, along with those from other publishers, “have adversely affected our country’s academic reputation and academic environment”, the notice states.

A Nature analysis shows that last year, Hindawi issued more than 9,600 retractions, of which the vast majority — about 8,200 — had a co-author in China. Nearly 14,000 retraction notices, of which some three-quarters involved a Chinese co-author, were issued by all publishers in 2023.

This is “the first time we’ve seen such a national operation on retraction investigations”, says Xiaotian Chen, a library and information scientist at Bradley University in Peoria, Illinois, who has studied retractions and research misconduct in China. Previous investigations have largely been carried out on a case-by-case basis — but this time, all institutions have to conduct their investigations simultaneously, says Chen.

Tight deadline

The ministry’s notice set off a chain of alerts, cascading to individual university departments. Bulletins posted on university websites required researchers to submit their retractions by a range of dates, mostly in January — leaving time for universities to collate and present the data.

Although the alerts included lists of retractions that the ministry or the universities were aware of, they also called for unlisted retractions to be added.


According to Nature’s analysis, which includes only English-language journals, more than 17,000 retraction notices for papers with Chinese co-authors have been issued since 1 January 2021, the start of the review period specified in the notice. The analysis, an update of one conducted in December, used the Retraction Watch database, augmented with retraction notices collated from the Dimensions database, with assistance from Guillaume Cabanac, a computer scientist at the University of Toulouse in France. It is unclear whether the official lists contain the same number of retracted papers.
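The analysis described above, which combines retraction notices from two databases, deduplicates them, and filters by date and co-author affiliation, can be sketched with standard-library Python (the sample records and field names are invented for illustration; the real analysis drew on the Retraction Watch and Dimensions databases):

```python
from datetime import date

# Invented sample records; real data would come from the Retraction Watch
# and Dimensions databases, keyed by DOI.
retraction_watch = [
    {"doi": "10.1000/a", "date": date(2021, 3, 1), "countries": ["China", "US"]},
    {"doi": "10.1000/b", "date": date(2020, 6, 1), "countries": ["Germany"]},
]
dimensions = [
    {"doi": "10.1000/a", "date": date(2021, 3, 1), "countries": ["China", "US"]},
    {"doi": "10.1000/c", "date": date(2022, 9, 1), "countries": ["China"]},
]

def merge_notices(*sources):
    """Deduplicate retraction notices across sources by DOI."""
    merged = {}
    for source in sources:
        for notice in source:
            merged.setdefault(notice["doi"], notice)
    return list(merged.values())

def in_review_period(notice, start=date(2021, 1, 1)):
    """The ministry's review covers retractions since 1 January 2021."""
    return notice["date"] >= start

notices = [n for n in merge_notices(retraction_watch, dimensions)
           if in_review_period(n)]
china_share = sum("China" in n["countries"] for n in notices) / len(notices)
print(len(notices), china_share)
```

On these toy records, one notice falls before the review window and is dropped, and the overlapping DOI is counted once; the same merge-then-filter shape scales to the thousands of notices the article reports.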

Regardless, the timing to submit the information will be tight, says Shu Fei, a bibliometrics scientist at Hangzhou Dianzi University in China. The ministry gave universities less than three months to complete their self-review — and this was cut shorter by the academic winter break, which typically starts in mid-January and concludes after the Chinese New Year, which fell this year on 10 February.

“The timing is not good,” he says. Shu expects that universities are most likely to submit only a preliminary report of their researchers’ retracted papers included on the official lists.

But Wang Fei, who studies research-integrity policy at Dalian University of Technology in China, says that because the ministry has set a deadline, universities will work hard to submit their findings on time.

Researchers with retracted papers will have to explain whether the retraction was owing to misconduct, such as image manipulation, or an honest mistake, such as authors identifying errors in their own work, says Chen: “In other words, they may have to defend themselves.” Universities then must investigate and penalize misconduct. If a researcher fails to declare their retracted paper and it is later uncovered, they will be punished, according to the ministry notice. The cost of not reporting is high, says Chen. “This is a very serious measure.”

It is not known what form punishment might take, but in 2021, China’s National Health Commission posted the results of its investigations into a batch of retracted papers. Punishments included salary cuts, withdrawal of bonuses, demotions and timed suspensions from applying for research grants and rewards.

The notice states explicitly that the first corresponding author of a paper is responsible for submitting the response. This requirement will largely address the problem of researchers shirking responsibility for collaborative work, says Li Tang, a science- and innovation-policy researcher at Fudan University in Shanghai, China. The notice also emphasizes due process, says Tang. Researchers alleged to have committed misconduct have a right to appeal during the investigation.

The notice is a good approach for addressing misconduct, says Wang. Previous efforts by the Chinese government have stopped at issuing new research-integrity guidelines that were poorly implemented, she says. And when government bodies did launch self-investigations of published literature, they were narrower in scope and lacked clear objectives. This time, the target is clear — retractions — and the scope is broad, involving the entire university research community, she says.

“Cultivating research integrity takes time, but China is on the right track,” says Tang.

It is not clear what the ministry will do with the flurry of submissions. Wang says that, because the retraction notices are already freely available, publicizing the collated lists and underlying reasons for retraction could be useful. She hopes that a similar review will be conducted every year “to put more pressure” on authors and universities to monitor research integrity.

What happens next will reveal how seriously the ministry regards research misconduct, says Shu. He suggests that, if the ministry does not take further action after the Chinese New Year, the notice could be an attempt to respond to the reputational damage caused by the mass retractions last year.

The ministry did not respond to Nature’s questions about the misconduct investigation.

Chen says that, regardless of what the ministry does with the information, the reporting process itself will help to curb misconduct because it is “embarrassing to the people in the report”.

But it might primarily affect researchers publishing in English-language journals. Retraction notices in Chinese-language journals are rare.

Nature 626, 700-701 (2024)

doi: https://doi.org/10.1038/d41586-024-00397-x

Data analysis by Richard Van Noorden.




