Scientific Misconduct and Medical Journals

  • 1 JAMA and the JAMA Network, Chicago, Illinois

According to the US Department of Health and Human Services, “Research misconduct means fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.”1 Other important irregularities involving the biomedical research process include, but are not limited to, ethical issues (eg, failure to obtain informed consent, failure to obtain appropriate approval from an institutional review board, and mistreatment of research participants), issues involving authorship responsibilities and disputes, duplicate publication, and failure to report conflicts of interest. When authors are found to have been involved with research misconduct or other serious irregularities involving articles that have been published in scientific journals, editors have a responsibility to ensure the accuracy and integrity of the scientific record.2,3

Although not much is known about the prevalence of scientific misconduct, several studies with limited methods have estimated that the prevalence of scientists who have been involved in scientific misconduct ranges from 1% to 2%.4-6 During the last 5 years, JAMA and the JAMA Network journals have published 12 notices of Retraction about 15 articles (including recent Retractions of 6 articles by the same author)7 and 6 notices of Expression of Concern about 9 articles. These notices were published primarily because the original studies were found to involve fabrication or falsification of data that invalidated the research and the published articles; in some cases, postpublication investigations could not provide evidence that the original research was valid. Since 2015, JAMA and the JAMA Network journals also have retracted and replaced 12 articles for instances of inadvertent pervasive error resulting from incorrect data coding or incorrect analyses and without evidence of research misconduct.8 During the same period, 1021 correction notices have been published in these journals. The JAMA Network policies regarding corrections and retraction with replacement have been published previously.8,9 In this Editorial, the focus is on a more complex and challenging issue—scientific misconduct involving fabrication, falsification, and plagiarism in the reporting of research.1

The Role and Responsibilities of Editors

JAMA and the JAMA Network journals receive numerous communications from readers, such as letters to the editor and emails, that are critical of published content. Most of the critiques involve matters of interpretation, the need for clarification of content, or differences of opinion; some address ethical concerns, some are frivolous complaints, and some include calls for retraction. However, typically 10 to 12 times each year these journals receive allegations of scientific misconduct. All matters related to allegations of scientific misconduct for articles published in JAMA and the JAMA Network journals are evaluated and managed by the senior staff of JAMA, including the editor in chief of JAMA, the executive editor, the executive managing editor, and the editorial counsel. This provides a consistent process for dealing with potential scientific misconduct. If the allegation involves an article published in a network journal, the editor in chief of that journal is involved and kept informed about the progress of the investigation. In addition, when necessary, additional expertise is obtained.

Allegations of scientific misconduct brought to journals are challenging and time-consuming for the authors, for editors, and potentially for institutions. The first step involves determining the validity of the allegation and an assessment of whether the allegation is consistent with the definition of research misconduct. In some cases, when authors are accused of misconduct, the criticism represents a different interpretation of the data or disagreement with the statistical approach used, rather than scientific misconduct. This initial step also involves determining whether the individuals alleging misconduct have relevant conflicts of interest. In some cases, it appears that financial interests and strongly held views (intellectual conflict of interest) may have led to the allegation. This does not mean that potential conflicts of interest on the part of the persons bringing the allegations preclude the possibility of scientific misconduct on the part of the authors, but rather, evaluation of conflict of interest is part of the assessment process.

If scientific misconduct or the presence of other substantial research irregularities is a possibility, the allegations are shared with the corresponding author, who, on behalf of all of the coauthors, is requested to provide a detailed response. Depending on the nature of the allegation, it can take months for some authors to respond to the concerns. After the response is received and evaluated, additional review and involvement of experts (such as statistical reviewers) may be obtained. In the majority of cases, the authors’ responses and the additional information provided are sufficient to determine whether the allegations are likely to represent misconduct. For cases in which misconduct is unlikely to have occurred, clarifications or additional analyses, published as letters to the editor and often accompanied by a correction notice and a correction to the published article, are sufficient. To date, JAMA has had very few disagreements with individuals making allegations of scientific misconduct, although some have been critical of the time it has taken for JAMA and other journals to resolve an issue of alleged scientific misconduct.10-12

However, if the authors’ responses to the allegations raised are unsatisfactory or unconvincing, or if there is any doubt as to whether scientific misconduct has occurred, additional information and investigation are usually necessary, and the appropriate institution is contacted with a request to conduct a formal evaluation. At that time, and depending on the nature of the allegations, the journal may publish a notice of Expression of Concern about the published reports in question, indicating that issues of validity or other concerns have arisen and are under investigation.2

Involving institutions is done with great care for several reasons. First, even an allegation of misconduct can harm the reputation of an individual. Individuals involved in such allegations have expressed this concern, and notification of an institution increases the level of scrutiny directed toward the involved person. In these cases, institutions are responsible for ensuring appropriate due process and confidentiality, based on their policies and procedures. Second, just as JAMA receives allegations of scientific misconduct and research irregularities, so too do institutions. It simply is not possible for every institution to conduct a detailed investigation of every allegation received; thus, JAMA and the JAMA Network journals ensure that institutions are asked to become involved only after a determination has been made that scientific misconduct is a possibility and the authors have not adequately responded to the concerns raised.

The Role and Responsibilities of Institutions

Institutions are expected to conduct an appropriate and thorough investigation of allegations of scientific misconduct. Some institutions are immediately responsive, acknowledging receipt of the letter from the journal describing the concerns, and quickly begin an investigation. In other cases, it may take time to identify the appropriate institutional individuals to contact, and even then, many months to receive a response. Some institutions appear well equipped to conduct investigations, whereas other institutions appear to have little experience in such matters or fail to conduct adequate investigations13; these institutions can take months to years to provide JAMA with an adequate response. In some cases involving questions of misconduct from outside of the United States, institutions have indicated that further investigation must wait until numerous legal issues are resolved, further delaying a response.

The type of investigation an institution conducts depends on the specific allegations and the institutional policies and procedures. In some cases, the investigation has involved reviewing the data, the article and related articles, and the analysis. In other cases, the investigation has involved reanalysis by the authors, or independent statistical analysis by a third party not involved in the initial study. Other cases have involved investigation of ethical issues related to the research, such as appropriate ethical review and approval of the study, informed consent for study participants, and notification of study participants about information related to risks of an intervention. No single approach is appropriate in all cases; rather, the approach depends on the specific allegation. In 2017, a group of representatives who deal with scientific misconduct, including university and institutional leaders and research integrity officers, federal officials, researchers, journal editors, journalists, and attorneys representing respondents, whistle-blowers, and institutions, examined best and failed practices related to institutional investigation of scientific misconduct.14 The group developed a checklist that institutions can use to follow reasonable standards when investigating an allegation of scientific misconduct and to provide an appropriate and complete report following the investigation.14

JAMA editors request institutions to provide periodic updates on the status of an investigation, and once the investigation is completed, institutions are expected to provide the editors with a detailed report of their findings. For cases in which misconduct has been identified, the institution and the authors may recommend and request retraction of the published article. In other cases, based on the report of the investigation from the institution, the journal editors make the determination of what actions are needed, such as whether an article should be retracted; or when a notice of Expression of Concern had been posted, whether it should be subsequently followed by a notice of Retraction. In each case, the notices are linked to and from the original article, and retracted articles are clearly watermarked as retracted so that readers and researchers are properly alerted to the invalid nature of the original articles.2

Conclusions

Allegations of scientific misconduct are challenging. Not all such allegations warrant investigation, but some require extensive evaluation. JAMA reviews its approach to allegations of scientific misconduct on a regular basis to ensure that the process is timely, objective, and fair to authors and their institutions, and results in evidence that will directly address the allegations of misconduct. Ultimately, authors, journals, and institutions have an important obligation to ensure the accuracy of the scientific record. By responding appropriately to concerns about scientific misconduct, and taking necessary actions based on evaluation of these concerns, such as corrections, retractions with replacement, notices of Expression of Concern, and Retractions, JAMA and the JAMA Network journals will continue to fulfill the responsibilities of ensuring the validity and integrity of the scientific record.

Corresponding Author: Howard Bauchner, MD, Editor in Chief, JAMA, 330 N Wabash Ave, Chicago, IL 60611 ([email protected]).

Published Online: October 19, 2018. doi:10.1001/jama.2018.14350

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Ms Flanagin reports serving as an unpaid member of the board of STM: International Association of Scientific, Technical, and Medical Publishers. No other disclosures were reported.

Bauchner H, Fontanarosa PB, Flanagin A, Thornton J. Scientific Misconduct and Medical Journals. JAMA. 2018;320(19):1985-1987. doi:10.1001/jama.2018.14350

Research article | Open access | Published: 30 April 2021

A scoping review of the literature featuring research ethics and research integrity cases

  • Anna Catharina Vieira Armond (ORCID: orcid.org/0000-0002-7121-5354)1,
  • Bert Gordijn2,
  • Jonathan Lewis2,
  • Mohammad Hosseini2,
  • János Kristóf Bodnár1,
  • Soren Holm3,4 &
  • Péter Kakuk5

BMC Medical Ethics, volume 22, Article number: 50 (2021)


The areas of Research Ethics (RE) and Research Integrity (RI) are rapidly evolving. Cases of research misconduct, other transgressions related to RE and RI, and forms of ethically questionable behaviors have been frequently published. The objective of this scoping review was to collect RE and RI cases, analyze their main characteristics, and discuss how these cases are represented in the scientific literature.

The search included cases involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework. A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without language or date restriction. Data relating to the articles and the cases were extracted from the case descriptions.

A total of 14,719 records were identified, and 388 items were included in the qualitative synthesis. The papers contained 500 case descriptions. After applying the eligibility criteria, 238 cases were included in the analysis. In the case analysis, fabrication and falsification were the most frequently tagged violations (44.9%). Non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), followed by patient safety issues (11.1%) and plagiarism (6.9%). Of the cases, 80.8% were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from the Social Sciences, 2.1% from Engineering and Technology, and 1.3% from the Humanities. Paper retraction was the most prevalent sanction (45.4%), followed by exclusion from funding applications (35.5%).

Conclusions

Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show an overrepresentation of biomedical research cases relative to other scientific fields, compared with biomedicine's share of scientific publications. The cases mostly involve fabrication, falsification, and patient safety issues. This finding could have a significant impact on the academic representation of misbehaviors. The predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations, and from recently emerging forms of misbehavior.

There has been an increase in academic interest in research ethics (RE) and research integrity (RI) over the past decade. This is due, among other reasons, to the changing research environment with new and complex technologies, increased pressure to publish, greater competition in grant applications, increased university-industry collaborative programs, and growth in international collaborations [1]. In addition, part of the academic interest in RE and RI is due to highly publicized cases of misconduct [2].

There is a growing body of published RE and RI cases, which may contribute to public attitudes regarding both science and scientists [3]. Different approaches have been used to analyze RE and RI cases. Studies focusing on ORI (Office of Research Integrity) files [2], retracted papers [4], quantitative surveys [5], data audits [6], and media coverage [3] have been conducted to understand the context, causes, and consequences of these cases.

Analyses of RE and RI cases often influence policies on responsible conduct of research [1]. Moreover, details about cases facilitate a broader understanding of issues related to RE and RI and can drive interventions to address them. Currently, there are no comprehensive studies that have collected and evaluated the RE and RI cases available in the academic literature. This review has been developed by members of the EnTIRE consortium to generate information on the cases that will be made available on the Embassy of Good Science platform (www.embassy.science). Two separate analyses have been conducted. The first analysis uses the identified research articles to explore how the literature presents cases of RE and RI, in relation to the year of publication, country, article genre, and violation involved. The second analysis uses the cases extracted from the literature to characterize them and analyze the violations involved, sanctions, and fields of science.

This scoping review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and the PRISMA Extension for Scoping Reviews (PRISMA-ScR). The full protocol was pre-registered and is available at https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5bde92120&appId=PPGMS .

Eligibility

Articles with non-fictional case(s) involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework were included. Cases unrelated to scientific activities, research institutions, or academic or industrial research and publication were excluded. Articles that did not contain a substantial description of the case were also excluded.

A normative framework consists of explicit rules, formulated in laws, regulations, codes, and guidelines, as well as implicit rules, which structure local research practices and influence the application of explicitly formulated rules. Therefore, if a case involves a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework, then it does so on the basis of explicit and/or implicit rules governing RE and RI practice.

Search strategy

A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without any language or date restrictions. Two parallel searches were performed with two sets of medical subject heading (MeSH) terms, one for RE and another for RI. The parallel searches generated two sets of data, thereby enabling us to analyze and further investigate the overlaps in, differences in, and evolution of the representation of RE and RI cases in the academic literature. The terms used in the first search were: (("research ethics") AND (violation OR unethical OR misconduct)). The terms used in the parallel search were: (("research integrity") AND (violation OR unethical OR misconduct)). The search strategy’s validity was tested in a pilot search, in which different keyword combinations and search strings were used, and the abstracts of the first hundred hits in each database were read (Additional file 1).
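The two parallel boolean strings can be assembled programmatically. The sketch below is illustrative only: `build_query` and `MISBEHAVIOR_TERMS` are hypothetical names (not from the study's materials), and real databases differ in field-tag syntax.

```python
# Illustrative sketch: assembling the two parallel search strings
# quoted in the text. Generic boolean syntax only; actual field tags
# (e.g. MeSH qualifiers) differ per database.

MISBEHAVIOR_TERMS = ("violation", "unethical", "misconduct")

def build_query(anchor_phrase: str) -> str:
    """Combine an anchor phrase with the OR-joined misbehavior terms."""
    or_block = " OR ".join(MISBEHAVIOR_TERMS)
    return f'(("{anchor_phrase}") AND ({or_block}))'

re_query = build_query("research ethics")      # RE search string
ri_query = build_query("research integrity")   # RI search string
```

Running `build_query("research ethics")` yields exactly the RE string quoted above, so the same helper covers both searches.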

After searching the databases with these two search strings, the titles and abstracts of extracted items were read by three contributors independently (ACVA, PK, and KB). Articles that could potentially meet the inclusion criteria were identified. After independent reading, the three contributors compared their results to determine which studies were to be included in the next stage. In case of a disagreement, items were reassessed in order to reach a consensus. Subsequently, qualified items were read in full.

Data extraction

Data extraction was divided among three assessors (ACVA, PK, and KB). Each list of extracted data generated by one assessor was cross-checked by the other two. In case of any inconsistency, the case was reassessed to reach a consensus. The following categories were employed to analyze the data of each extracted item (where available): (I) author(s); (II) title; (III) year of publication; (IV) country (according to the first author's affiliation); (V) article genre; (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification) [7]; (X) types of violation (see below); (XI) case description; and (XII) consequences for persons or institutions involved in the case.

Two sets of data were created after the data extraction process. One set was used for the analysis of articles and their representation in the literature, and the other set was created for the analysis of cases. In the set for the analysis of articles, all eligible items, including duplicate cases (cases found in more than one paper, e.g. Hwang case, Baltimore case) were included. The aim was to understand the historical aspects of violations reported in the literature as well as the paper genre in which cases are described and discussed. For this set, the variables of the year of publication (III); country (IV); article genre (V); and types of violation (X) were analyzed.

For the analysis of cases, all duplicated cases and cases that did not contain enough information about particularities to differentiate them from others (e.g. names of the people or institutions involved, country, date) were excluded. In this set, prominent cases (i.e. those found in more than one paper) were listed only once, generating a set containing solely unique cases. These additional exclusion criteria were applied to avoid multiple representations of cases. For the analysis of cases, the variables: (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification); (X) types of violation; (XI) case details; and (XII) consequences for persons or institutions involved in the case were considered.

Article genre classification

We used ten categories to capture the differences in genre. We included a case description in the “news” genre if the case was published in the news section of a scientific journal or newspaper. Although we did not develop a search strategy for newspaper articles, some newspapers (e.g. the New York Times) are indexed in scientific databases such as PubMed. The same method was used to allocate case descriptions to the “editorial”, “commentary”, “misconduct notice”, “retraction notice”, “review”, “letter”, or “book review” genres. We applied the “case analysis” genre if a case description included a normative analysis of the case. The “educational” genre was used when a case description was incorporated to illustrate RE and RI guidelines or institutional policies.

Categorization of violations

For the extraction process, we used the articles’ own terminology when describing the violations/ethical issues involved in the event (e.g. plagiarism, falsification, ghost authorship, conflict of interest, etc.) to tag each article. In cases where the terminology was incompatible with the case description, other categories were added to the original terminology for the same case. Subsequently, the resulting list of terms was standardized using the list of major and minor misbehaviors developed by Bouter and colleagues [8]. This list consists of 60 items classified into four categories: study design, data collection, reporting, and collaboration issues (Additional file 2).

Systematic search

A total of 11,641 records were identified through the RE search and 3078 through the RI search. The results of the parallel searches were combined and the duplicates removed. The remaining 10,556 records were screened, and at this stage 9750 items were excluded because they did not fulfill the inclusion criteria; 806 items were selected for full-text reading. Subsequently, 388 articles were included in the qualitative synthesis (Fig. 1).

Figure 1. Flow diagram
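As a quick sanity check, the screening counts reported in this section are internally consistent; the snippet below is plain arithmetic on the figures quoted in the text, not a recomputation from the study data.

```python
# Plain-arithmetic check of the reported screening flow.
re_records, ri_records = 11_641, 3_078     # parallel RE and RI searches
identified = re_records + ri_records
assert identified == 14_719                # "A total of 14,719 records"

screened = 10_556                          # remaining after duplicate removal
excluded_at_screening = 9_750
full_text = screened - excluded_at_screening
assert full_text == 806                    # items selected for full-text reading

included = 388                             # articles in the qualitative synthesis
assert included <= full_text               # implies 806 - 388 = 418 full-text exclusions
```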

Of the 388 articles, 157 were identified only via the RE search, 87 only via the RI search, and 144 via both search strategies. The eligible articles contained 500 case descriptions, which were used for the analysis of the articles. 256 case descriptions discussed the same 50 cases. The Hwang case was the most frequently described, discussed in 27 articles. Furthermore, the top 10 most described cases were found in 132 articles (Table 1).

For the analysis of cases, 206 (41.2% of the case descriptions) duplicates were excluded, and 56 (11.2%) cases were excluded for not providing enough information to distinguish them from other cases, resulting in 238 eligible cases.
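The case-selection arithmetic above can be checked the same way (figures quoted from the paragraph, not recomputed from the data):

```python
# Consistency check of the reported case-selection numbers.
case_descriptions = 500
duplicates = 206        # stated as 41.2% of case descriptions
insufficient = 56       # stated as 11.2%

assert round(100 * duplicates / case_descriptions, 1) == 41.2
assert round(100 * insufficient / case_descriptions, 1) == 11.2
assert case_descriptions - duplicates - insufficient == 238  # eligible cases
```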

Analysis of the articles

The categories used to classify the violations include those that pertain to the different kinds of scientific misconduct (falsification, fabrication, plagiarism), detrimental research practices (authorship issues, duplication, peer review, errors in experimental design, and mentoring), and “other misconduct” (according to the definitions from the National Academies of Sciences, Engineering, and Medicine [1]). Each case could involve more than one type of violation; the majority of cases presented more than one violation or ethical issue, with a mean of 1.56 violations per case. Figure 2 presents the frequency of each violation tagged to the articles. Falsification and fabrication were the most frequently tagged violations, accounting respectively for 29.1% and 30.0% of the taggings (n = 780) and involved in 46.8% and 45.4% of the articles (n = 500 case descriptions). Problems with informed consent represented 9.1% of the taggings and 14% of the articles, followed by patient safety (6.7% and 10.4%) and plagiarism (5.4% and 8.4%). Detrimental research practices, such as authorship issues, duplication, peer review, errors in experimental design, mentoring, and self-citation, were mentioned cumulatively in 7.0% of the articles.

Figure 2. Tagged violations from the article analysis
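A back-of-envelope check of the article-analysis tagging figures (all numbers taken from the text; the implied absolute counts are a derivation, not values the paper states):

```python
# Quick consistency check of the article-analysis tagging statistics.
taggings, descriptions = 780, 500
mean_per_description = taggings / descriptions
assert round(mean_per_description, 2) == 1.56  # "a mean of 1.56 violations per case"

# The stated shares of taggings imply absolute counts of roughly
# 227 falsification and 234 fabrication taggings.
falsification = round(0.291 * taggings)
fabrication = round(0.300 * taggings)
assert falsification == 227 and fabrication == 234
```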

Analysis of the cases

Figure 3 presents the frequency and percentage of each violation found in the cases. Each case could include more than one item from the list: the 238 cases were tagged 305 times, with a mean of 1.28 items per case. Fabrication and falsification were the most frequently tagged violations (44.9% of taggings), involved in 57.7% of the cases (n = 238). Non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), involved in 20.2% of the cases. Patient safety issues were the third most frequently tagged (11.1%), involved in 14.3% of the cases, followed by plagiarism (6.9% and 8.8%). The list of major and minor misbehaviors [8] classifies the items into study design, data collection, reporting, and collaboration issues. Our results show that 56.0% of the tagged violations involved issues in reporting, 16.4% in data collection, 15.1% in collaboration, and 12.5% in study design. Items from the original list that do not appear in the results were not involved in any of the collected cases.

Figure 3. Major and minor misbehavior items from the analysis of cases
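The case-analysis mean and the four category shares can likewise be verified from the stated counts (figures quoted from the text):

```python
# Back-of-envelope check of the case-analysis statistics.
case_taggings, cases = 305, 238
mean_items = case_taggings / cases
assert round(mean_items, 2) == 1.28  # "a mean of 1.28 items per case"

# The four category shares of tagged violations should sum to 100%.
category_shares = {
    "reporting": 56.0,
    "data collection": 16.4,
    "collaboration": 15.1,
    "study design": 12.5,
}
assert abs(sum(category_shares.values()) - 100.0) < 0.01
```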

Article genre

The articles were mostly classified as “news” (33.0%), followed by “case analysis” (20.9%), “editorial” (12.1%), “commentary” (10.8%), “misconduct notice” (10.3%), “retraction notice” (6.4%), “letter” (3.6%), “educational paper” (1.3%), “review” (1%), and “book review” (0.3%) (Fig. 4). The articles classified as “news” and “case analysis” predominantly included prominent cases. Items classified as “news” often explored the investigation findings step by step as each case progressed, which might explain their high prevalence. The case analyses mainly included normative assessments of prominent cases. The misconduct and retraction notices included the largest number of unique cases, although a relatively large portion of the retraction and misconduct records could not be included because of insufficient case details. The articles classified as “editorial”, “commentary”, and “letter” also included unique cases.

Figure 4. Article genre of included articles

Article analysis

The dates of the eligible articles range from 1983 to 2018, with notable peaks between 1990 and 1996, most probably associated with the Gallo [9] and Imanishi-Kari [10] cases, and around 2005 with the Hwang [11], Wakefield [12], and CNEP trial [13] cases (Fig. 5). The trend line shows an increase in the number of articles over the years.

Figure 5. Frequency of articles according to the year of publication

Case analysis

The dates of the included cases range from 1798 to 2016. Two cases occurred before 1910, one in 1798 and the other in 1845. Figure 6 shows the number of cases per year from 1910 onwards. The number of cases began to rise in the early 1980s, reaching its highest frequency in 2004, with 13 cases.

Figure 6. Frequency of cases per year

Geographical distribution

The first analysis concerned the authors’ affiliation and the corresponding author’s address. Where an article listed more than one country among the affiliations, only the first author’s location was considered. Eighty-one articles were excluded because the authors’ affiliations were not available, leaving 307 articles in the analysis. The articles originated from 26 different countries (Additional file 3). Most emanated from the USA and the UK (61.9% and 14.3% of articles, respectively), followed by Canada (4.9%), Australia (3.3%), China (1.6%), Japan (1.6%), Korea (1.3%), and New Zealand (1.3%). Some of the most discussed cases occurred in the USA: the Imanishi-Kari, Gallo, and Schön cases [9, 10]. Intensely discussed cases are also associated with Canada (the Fisher/Poisson and Olivieri cases), the UK (the Wakefield and CNEP trial cases), South Korea (the Hwang case), and Japan (the RIKEN case) [12, 14]. In terms of percentages, North America and Europe stand out in the number of articles (Fig. 7).

Figure 7. Percentage of articles and cases by continent

The case analysis concerned the location where the case took place, taking into account the institutions involved. For cases involving more than one country, all countries were considered. Three cases were excluded from the analysis due to insufficient information. In total, 40 countries were involved in 235 different cases (Additional file 4). Our findings show that most of the reported cases occurred in the USA and the UK (59.6% and 9.8% of cases, respectively). In addition, a number of cases occurred in Canada (6.0%), Japan (5.5%), China (2.1%), and Germany (2.1%). In terms of percentages, North America and Europe stand out in the number of cases (Fig. 7). To enable comparison, we also collected the number of published documents by country, available on SCImago Journal & Country Rank [16]; the numbers correspond to documents published from 1996 to 2019. The USA occupies first place in the number of documents, with 21.9%, followed by China (11.1%), the UK (6.3%), Germany (5.5%), and Japan (4.9%).

Field of science

The cases were classified according to the field of science. Four cases (1.7%) could not be classified due to insufficient information. Where information was available, 80.8% of cases were from the medical and health sciences, 11.5% from the natural sciences, 4.3% from the social sciences, 2.1% from engineering and technology, and 1.3% from the humanities (Fig. 8). Additionally, we retrieved the number of published documents by scientific field, available on SCImago [16]. Of the total number of scientific publications, 41.5% relate to the natural sciences, 25.1% to the medical and health sciences, 22% to engineering, 7.8% to the social sciences, 1.9% to the agricultural sciences, and 1.7% to the humanities.

Figure 8. Field of science from the analysis of cases

Sanctions

This variable aimed to collect information on possible consequences and sanctions imposed by funding agencies, scientific journals, and/or institutions. Ninety-seven cases could not be classified due to insufficient information, leaving 141 cases in the analysis. Each case could include more than one outcome. Most cases (45.4%) involved paper retraction, followed by exclusion from funding applications (35.5%) (Table 2).

Discussion

RE and RI cases have been increasingly discussed publicly, affecting public attitudes towards scientists and raising awareness about ethical issues, violations, and their wider consequences [5]. Different approaches have been applied to quantify and address research misbehaviors [5, 17, 18, 19]. However, most cases are investigated confidentially and the findings remain undisclosed even after the investigation [19, 20]. This study therefore aimed to collect the RE and RI cases available in the scientific literature, to understand how the cases are discussed, and to identify the potential of case descriptions to raise awareness of RE and RI.

We collected and analyzed 500 detailed case descriptions from 388 articles; our results show that they mostly relate to extensively discussed and notorious cases. Approximately half of the included cases were mentioned in at least two different articles, and the ten most commonly mentioned cases were discussed in 132 articles.

The prominence of certain cases in the literature, indicated by the number of duplicate cases we found (e.g., the Hwang case), can be explained by the type of article in which cases are discussed and the type of violation involved. In the article genre analysis, 33% of the cases were described in the news sections of scientific publications. Our findings show that almost all article genres discuss cases that are new and in vogue. Once a case enters the public domain, it is intensely discussed in the media and by scientists, and some prominent cases have been discussed for more than 20 years (Table 1). Misconduct and retraction notices were exceptions in the article genre analysis, as they mostly presented unique cases. The misconduct notices were mainly found in the NIH repository, which is indexed in the searched databases. Some federal funding agencies, such as the NIH, usually publicize investigation findings associated with the research they fund. The results derived from the NIH repository also explain the large proportion of articles from the USA (61.9%), although in some cases only a few details are provided. For cases that have not received federal funding and have not been reported to federal authorities, the investigation is conducted by local institutions; in such instances, the reporting of findings depends on each institution’s policy and willingness to disclose information [21]. The other exception involves retraction notices. Despite the existence of ethical guidelines [22], there is no uniform, common approach to how a journal should report a retraction. The Retraction Watch website suggests two lists of information that a retraction notice should include to satisfy minimum and optimum requirements [22, 23].
As well as disclosing the reason for the retraction and information regarding the retraction process, optimal notices should include: (I) the date when the journal was first alerted to potential problems; (II) details regarding institutional investigations and associated outcomes; (III) the effects on other papers published by the same authors; (IV) statements about more recent replications, only if and when these have been validated by a third party; (V) details regarding the journal’s sanctions; and (VI) details regarding any lawsuits that have been filed regarding the case. The lack of transparency and information in retraction notices has also been noted in studies that collected and evaluated retractions [24]. According to Resnik and Dinse [25], retraction notices related to cases of misconduct tend to avoid naming the specific violation involved: their study found that only 32.8% of such notices identified the actual problem, such as fabrication, falsification, or plagiarism, while 58.8% reported the case as a replication failure, loss of data, or error. Potential explanations for euphemisms and vague claims in retraction notices authored by editors include the possibility of legal action by the authors, honest or self-reported errors, and a lack of resources to conduct thorough investigations. The lack of transparency can also be explained by the conflicts of interest of the article’s author(s), since notices are often written by the authors of the retracted article.

The analysis of violations/ethical issues shows the dominance of fabrication and falsification cases and explains the high prevalence of prominent cases. Non-adherence to laws and regulations (REC approval, informed consent, and data protection) was the second most prevalent issue, followed by patient safety, plagiarism, and conflicts of interest. For the five most tagged violations, prevalence in the case analysis was higher than in the article analysis, the only exceptions being fabrication and falsification, which represented 45% of the tagged violations in the case analysis but 59.1% in the article analysis. This disproportion shows a predilection for publishing discussions of fabrication and falsification over other serious violations. Complex cases involving these types of violation make good headlines, following a familiar pattern of writing about cases that catch the public’s and the media’s attention [26]. The way cases of RE and RI violations are explored in the literature gives the sense that only a few scientists are “bad apples” and that they are usually discovered, investigated, and sanctioned accordingly, implying that the integrity of science in general remains relatively untouched by these violations. However, studies on the determinants of misconduct show that scientific misconduct is a systemic problem involving not only individual but also structural and institutional factors, and that a combined effort is necessary to change this scenario [27, 28].

Analysis of cases

A notable increase in RE and RI cases occurred in the 1990s, with a gradual rise until approximately 2006. This result agrees with studies that evaluated paper retractions [24, 29]; although our study did not focus only on retractions, the trend is similar. The increase in cases should not be attributed solely to the growth in the number of publications, since studies of retractions show that the percentage of retractions due to fraud has increased almost tenfold since 1975, relative to the total number of articles. Our results also show a gradual reduction in the number of cases from 2011 and a sharper drop in 2015. However, this reduction should be interpreted cautiously, because many investigations take years to complete and to have their findings disclosed. The ORI has shown that from 2001 to 2010 the investigation of its cases took an average of 20.48 months, with a maximum investigation time of more than 9 years [24].

The countries from which most cases were reported were the USA (59.6%), the UK (9.8%), Canada (6.0%), Japan (5.5%), and China (2.1%). When analyzed by continent, the highest percentage of cases took place in North America, followed by Europe, Asia, Oceania, Latin America, and Africa. The predominance of cases from the USA is predictable, since the country publishes more scientific articles than any other, with 21.8% of total documents according to SCImago [16]. However, the same interpretation does not apply to China, which occupies second position in the ranking, with 11.2%. These differences in geographical distribution were also found in a study that collected published research on research integrity [30]. The results of Aubert Bonn and Pinxten (2019) show that studies from the United States accounted for more than half of their sample, whereas China, although one of the leaders in scientific publications, represented only 0.7%. Our findings can also be explained by our search strategy, which included only English keywords. Since the majority of RE and RI cases are investigated and have their findings disclosed locally, the use of English keywords and terms in the search strategy is a limitation. Moreover, our findings do not allow us to draw inferences regarding the incidence or prevalence of misconduct around the world; instead, they show where there is a culture of publicly disclosing information and openly discussing RE and RI cases in English-language documents.

Scientific field analysis

The results show that 80.8% of reported cases occurred in the medical and health sciences, whereas only 1.3% occurred in the humanities. This disciplinary difference has also been observed in studies of research integrity climates. A study by Haven and colleagues [28] associated seven subscales of research climate with disciplinary field: (1) Responsible Conduct of Research (RCR) resources, (2) regulatory quality, (3) integrity norms, (4) integrity socialization, (5) supervisor/supervisee relations, (6) (lack of) integrity inhibitors, and (7) expectations. The results, based on the seven subscale scores, show that researchers from the humanities and social sciences have the lowest perception of the RI climate; by contrast, the natural sciences expressed the highest perception, followed by the biomedical sciences. There are also significant differences in the depth and extent of the regulatory environments of different disciplines (e.g., the existence of laws, codes of conduct, policies, and relevant ethics committees or authorities). These findings corroborate our results, as those areas of science most familiar with RI tend to explore the subject further and, consequently, are more likely to publish case details. Although the volume of published research in each area also influences the number of cases, the predominance of medical and health sciences cases is not aligned with trends in the volume of published research. According to SCImago Journal & Country Rank [16], the natural sciences occupy first place in the number of publications (41.5%), followed by the medical and health sciences (25.1%), engineering (22%), the social sciences (7.8%), and the humanities (1.7%). Moreover, biomedical journals are overrepresented among the top scientific journals by impact factor, and these journals usually have clear policies for research misconduct.
High-impact journals have higher visibility and scrutiny and are consequently more likely to have been the subject of misconduct investigations. Additionally, the best-known general medical journals, including the NEJM, The Lancet, and the BMJ, employ journalists to write their news sections. Since these journals have the resources to produce extensive news sections, it is more likely that medical cases will be discussed.

Violations analysis

In the analysis of violations, the cases were categorized into major and minor misbehaviors. Most cases involved data fabrication and falsification, followed by non-adherence to laws and regulations, patient safety, plagiarism, and conflicts of interest. When classified by category, 12.5% of the tagged violations involved issues in study design, 16.4% in data collection, 56.0% in reporting, and 15.1% in collaboration. Approximately 80% of the tagged violations involved serious research misbehaviors, based on the ranking of research misbehaviors proposed by Bouter and colleagues. However, as demonstrated in a meta-analysis by Fanelli (2009), most self-declared cases involve questionable research practices: 33.7% of scientists admitted to questionable research practices, and 72% reported them when asked about the behavior of colleagues, in contrast with rates of 1.97% and 14.12%, respectively, for fabrication, falsification, and plagiarism. Fanelli’s meta-analysis, however, does not cover research misbehavior in its wider sense but focuses on behaviors that bias research results (i.e., fabrication and falsification, intentional non-publication of results, biased methodology, and misleading reporting). In our study, the majority of cases involved FFP (66.4%). Overrepresentation of some types of violation, and underrepresentation of others, might lead to misguided efforts, as cases that receive intense publicity eventually influence policies relating to scientific misconduct and RI [20].

Sanctions analysis

The five most prevalent outcomes were paper retraction, followed by exclusion from funding applications, exclusion from service or position, dismissal or suspension, and paper correction. This result is similar to that of Redman and Merz [31], who collected data on misconduct cases provided by the ORI. Their results also show that fabrication and falsification cases are 8.8 times more likely than others to receive funding exclusions, and that such cases received, on average, 0.6 more sanctions per case. Punishments for misconduct remain under discussion, ranging from the criminalization of the more serious forms of misconduct [32] to social punishments, such as those recently introduced by China [33]. The most common sanction identified by our analysis, paper retraction, is consistent with the most prevalent types of violation, namely falsification and fabrication.

Publicizing scientific misconduct

The lack of publicly available summaries of misconduct investigations makes it difficult to share experiences and to evaluate the effectiveness of policies and training programs. Publicizing scientific misconduct can have serious consequences and creates a stigma around those involved; for instance, publicized allegations can damage the reputation of the accused even when they are later exonerated [21]. Thus, for published cases, it is the responsibility of authors and editors to determine whether the names of those involved should be disclosed. On the one hand, disclosing names may encourage others in the community to foster good standards; on the other, it is argued that someone who has made a mistake should have the chance to defend their reputation. Regardless of whether names are withheld or disclosed, case reports have an important educational function and can help guide RE- and RI-related policies [34]. A recent paper by Gunsalus [35] proposes a three-part approach to strengthening transparency in misconduct investigations: the first part consists of a checklist [36]; the second suggests that an external peer reviewer be involved in investigative reporting; and the third calls for the publication of the peer reviewer’s findings.

Limitations

One possible limitation of our study is the search strategy. Although we conducted pilot searches and sensitivity tests to arrive at the most feasible and precise strategy, we cannot exclude the possibility of having missed important cases. Furthermore, the use of English keywords was another limitation: since most investigations are performed locally and published in local repositories, our search only allowed us to access cases from English-speaking countries or cases discussed in academic publications written in English. Additionally, the published cases are not representative of all instances of misconduct, since most are never discovered, and of those discovered, not all are fully investigated or have their findings published. The lack of information in the extracted case descriptions is a further limitation that affects the interpretation of our results. In our review, only 25 retraction notices contained sufficient information to be included in the analysis in conformance with the inclusion criteria. Although our search strategy did not focus specifically on retraction and misconduct notices, we believe that, had sufficiently detailed information been available in such notices, the search strategy would have identified them.

Conclusions

Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show an overrepresentation of biomedical research cases relative to other scientific fields when compared with the volume of publications each field produces. Moreover, published cases mostly involve fabrication, falsification, and patient safety issues. This could have a significant impact on the academic representation of ethical issues in RE and RI: the predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations and ethical issues, and from recently emerging forms of misbehavior.

Availability of data and materials

This review has been developed by members of the EnTIRE project in order to generate information on the cases that will be made available on the Embassy of Good Science platform ( www.embassy.science ). The dataset supporting the conclusions of this article is available in the Open Science Framework (OSF) repository in https://osf.io/3xatj/?view_only=313a0477ab554b7489ee52d3046398b9 .

National Academies of Sciences, Engineering, and Medicine. Fostering Integrity in Research. Washington (DC): National Academies Press; 2017.

Davis MS, Riske-Morris M, Diaz SR. Causal factors implicated in research misconduct: evidence from ORI case files. Sci Eng Ethics. 2007;13(4):395–414. https://doi.org/10.1007/s11948-007-9045-2 .

Ampollini I, Bucchi M. When public discourse mirrors academic debate: research integrity in the media. Sci Eng Ethics. 2020;26(1):451–74. https://doi.org/10.1007/s11948-019-00103-5 .

Hesselmann F, Graf V, Schmidt M, Reinhart M. The visibility of scientific misconduct: a review of the literature on retracted journal articles. Curr Sociol La Sociologie contemporaine. 2017;65(6):814–45. https://doi.org/10.1177/0011392116663807 .

Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–8. https://doi.org/10.1038/435737a .

Loikith L, Bauchwitz R. The essential need for research misconduct allegation audits. Sci Eng Ethics. 2016;22(4):1027–49. https://doi.org/10.1007/s11948-016-9798-6 .

OECD. Revised field of science and technology (FoS) classification in the Frascati manual. Working Party of National Experts on Science and Technology Indicators 2007. p. 1–12.

Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 .

Greenberg DS. Resounding echoes of Gallo case. Lancet. 1995;345(8950):639.

Dresser R. Giving scientists their due. The Imanishi-Kari decision. Hastings Center Rep. 1997;27(3):26–8.

Hong ST. We should not forget lessons learned from the Woo Suk Hwang’s case of research misconduct and bioethics law violation. J Korean Med Sci. 2016;31(11):1671–2. https://doi.org/10.3346/jkms.2016.31.11.1671 .

Opel DJ, Diekema DS, Marcuse EK. Assuring research integrity in the wake of Wakefield. BMJ (Clinical research ed). 2011;342(7790):179. https://doi.org/10.1136/bmj.d2 .

Wells F. The Stoke CNEP Saga: did it need to take so long? J R Soc Med. 2010;103(9):352–6. https://doi.org/10.1258/jrsm.2010.10k010 .

Normile D. RIKEN panel finds misconduct in controversial paper. Science. 2014;344(6179):23. https://doi.org/10.1126/science.344.6179.23 .

Wager E. The Committee on Publication Ethics (COPE): Objectives and achievements 1997–2012. La Presse Médicale. 2012;41(9):861–6. https://doi.org/10.1016/j.lpm.2012.02.049 .

SCImago. SJR: SCImago Journal & Country Rank [portal]. http://www.scimagojr.com . Accessed 3 Feb 2021.

Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009;4(5):e5738. https://doi.org/10.1371/journal.pone.0005738 .

Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics. 2006;12(1):53–74. https://doi.org/10.1007/PL00022268 .

DuBois JM, Anderson EE, Chibnall J, Carroll K, Gibb T, Ogbuka C, et al. Understanding research misconduct: a comparative analysis of 120 cases of professional wrongdoing. Account Res. 2013;20(5–6):320–38. https://doi.org/10.1080/08989621.2013.822248 .

National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press; 1992.

Bauchner H, Fontanarosa PB, Flanagin A, Thornton J. Scientific misconduct and medical journals. JAMA. 2018;320(19):1985–7. https://doi.org/10.1001/jama.2018.14350 .

COPE Council. COPE Guidelines: Retraction Guidelines. 2019. https://doi.org/10.24318/cope.2019.1.4 .

Retraction Watch. What should an ideal retraction notice look like? 2015, May 21. https://retractionwatch.com/2015/05/21/what-should-an-ideal-retraction-notice-look-like/ .

Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci USA. 2012;109(42):17028–33. https://doi.org/10.1073/pnas.1212247109 .

Resnik DB, Dinse GE. Scientific retractions and corrections related to misconduct findings. J Med Ethics. 2013;39(1):46–50. https://doi.org/10.1136/medethics-2012-100766 .

de Vries R, Anderson MS, Martinson BC. Normal misbehavior: scientists talk about the ethics of research. J Empir Res Hum Res Ethics JERHRE. 2006;1(1):43–50. https://doi.org/10.1525/jer.2006.1.1.43 .

Sovacool BK. Exploring scientific misconduct: isolated individuals, impure institutions, or an inevitable idiom of modern science? J Bioethical Inquiry. 2008;5(4):271. https://doi.org/10.1007/s11673-008-9113-6 .

Haven TL, Tijdink JK, Martinson BC, Bouter LM. Perceptions of research integrity climate differ between academic ranks and disciplinary fields: results from a survey among academic researchers in Amsterdam. PLoS ONE. 2019;14(1):e0210599. https://doi.org/10.1371/journal.pone.0210599 .

Trikalinos NA, Evangelou E, Ioannidis JPA. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. J Clin Epidemiol. 2008;61(5):464–70. https://doi.org/10.1016/j.jclinepi.2007.11.019 .

Aubert Bonn N, Pinxten W. A decade of empirical research on research integrity: What have we (not) looked at? J Empir Res Hum Res Ethics. 2019;14(4):338–52. https://doi.org/10.1177/1556264619858534 .

Redman BK, Merz JF. Scientific misconduct: do the punishments fit the crime? Science. 2008;321(5890):775. https://doi.org/10.1126/science.1158052 .

Bülow W, Helgesson G. Criminalization of scientific misconduct. Med Health Care Philos. 2019;22(2):245–52. https://doi.org/10.1007/s11019-018-9865-7 .

Cyranoski D. China introduces “social” punishments for scientific misconduct. Nature. 2018;564(7736):312. https://doi.org/10.1038/d41586-018-07740-z .

Bird SJ. Publicizing scientific misconduct and its consequences. Sci Eng Ethics. 2004;10(3):435–6. https://doi.org/10.1007/s11948-004-0001-0 .

Gunsalus CK. Make reports of research misconduct public. Nature. 2019;570(7759):7. https://doi.org/10.1038/d41586-019-01728-z .

Gunsalus CK, Marcus AR, Oransky I. Institutional research misconduct reports need more credibility. JAMA. 2018;319(13):1315–6. https://doi.org/10.1001/jama.2018.0358 .

Acknowledgements

The authors wish to thank the EnTIRE research group. The EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) aims to create an online platform that makes RE and RI information easily accessible to the research community. The EnTIRE Consortium comprises VU Medical Center Amsterdam, gesinn.it GmbH & Co. KG, KU Leuven, the University of Split School of Medicine, Dublin City University, Central European University, the University of Oslo, the University of Manchester, and the European Network of Research Ethics Committees.

The EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement N 741782. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and affiliations.

Department of Behavioural Sciences, Faculty of Medicine, University of Debrecen, Móricz Zsigmond krt. 22. III. Apartman Diákszálló, Debrecen, 4032, Hungary

Anna Catharina Vieira Armond & János Kristóf Bodnár

Institute of Ethics, School of Theology, Philosophy and Music, Dublin City University, Dublin, Ireland

Bert Gordijn, Jonathan Lewis & Mohammad Hosseini

Centre for Social Ethics and Policy, School of Law, University of Manchester, Manchester, UK

Center for Medical Ethics, HELSAM, Faculty of Medicine, University of Oslo, Oslo, Norway

Center for Ethics and Law in Biomedicine, Central European University, Budapest, Hungary

Péter Kakuk

Contributions

All authors (ACVA, BG, JL, MH, JKB, SH and PK) developed the idea for the article. ACVA, PK, JKB performed the literature search and data analysis, ACVA and PK produced the draft, and all authors critically revised it. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Anna Catharina Vieira Armond .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare that they have no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Pilot search and search strategy.

Additional file 2. List of major and minor misbehavior items (developed by Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 ).

Additional file 3. Table containing the number and percentage of countries included in the analysis of articles.

Additional file 4. Table containing the number and percentage of countries included in the analysis of the cases.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Armond, A.C.V., Gordijn, B., Lewis, J. et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics 22 , 50 (2021). https://doi.org/10.1186/s12910-021-00620-8

Download citation

Received : 06 October 2020

Accepted : 21 April 2021

Published : 30 April 2021

DOI : https://doi.org/10.1186/s12910-021-00620-8


  • Research ethics
  • Research integrity
  • Scientific misconduct

BMC Medical Ethics

ISSN: 1472-6939


Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis

  • Original Research/Scholarship
  • Published: 29 June 2021
  • Volume 27, article number 41 (2021)


  • Yu Xie 1,2,
  • Kai Wang 1 &
  • Yan Kong 1


Irresponsible research practices that damage the value of science have been an increasing concern among researchers, but previous work has not estimated the prevalence of all forms of irresponsible research behavior, and earlier analyses did not include articles published in the last decade (2011–2020). This study provides an updated meta-analysis that calculates pooled estimates of the prevalence of research misconduct (RM) and questionable research practices (QRPs), and explores the factors associated with the prevalence of these issues. The pooled estimates of researchers who admitted committing at least 1 act of RM (falsification, fabrication, or plagiarism; FFP) or at least 1 (unspecified) QRP were 2.9% (95% CI 2.1–3.8%) and 12.5% (95% CI 10.5–14.7%), respectively. In addition, 15.5% (95% CI 12.4–19.2%) of researchers had witnessed others commit at least 1 RM, while 39.7% (95% CI 35.6–44.0%) were aware of others who had used at least 1 QRP. The results document that response proportion, limited recall period, career level, disciplinary background and location all significantly affect the prevalence of these issues. This meta-analysis addresses a gap in existing meta-analyses by estimating the prevalence of all forms of RM and QRPs, thus providing a better understanding of irresponsible research behaviors.
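Pooled prevalence estimates with confidence intervals like those above come from a random-effects meta-analysis of proportions. One standard approach (a sketch of the general technique, not necessarily the authors' exact procedure) is a DerSimonian-Laird model on logit-transformed proportions, illustrated here with hypothetical survey counts:

```python
import math

def inv_logit(x):
    """Back-transform a logit to a proportion."""
    return 1 / (1 + math.exp(-x))

def pool_proportions(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions on the
    logit scale; returns (estimate, ci_low, ci_high) at the 95% level."""
    ys, vs = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0           # continuity correction guards zero cells
        p = e / n
        ys.append(math.log(p / (1 - p)))  # logit of the study proportion
        vs.append(1 / e + 1 / (n - e))    # approximate within-study variance

    w = [1 / v for v in vs]               # fixed-effect (inverse-variance) weights
    y_fe = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c)  # between-study variance

    w_re = [1 / (v + tau2) for v in vs]       # random-effects weights
    y = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return inv_logit(y), inv_logit(y - 1.96 * se), inv_logit(y + 1.96 * se)

# Hypothetical surveys: number admitting misconduct out of respondents
est, low, high = pool_proportions([6, 12, 3], [200, 350, 150])
print(f"pooled prevalence {est:.1%} (95% CI {low:.1%}-{high:.1%})")
```

The logit transform keeps the pooled estimate and its interval inside (0, 1), which matters for rare outcomes such as admitted FFP.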



Availability of data and material

Full data from the current meta-analysis can be retrieved from Harvard Dataverse, available at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/F9C8OK .

Code availability

The code for the analysis can be retrieved from Harvard Dataverse, available at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/F9C8OK .



Acknowledgements

We thank Han Xiao for selecting eligible studies and extracting data and Yanyan Lin for developing the research strategy.

There was no funding for this study.

Author information

Authors and affiliations

School of Humanities and Social Sciences, University of Science and Technology of China, Jinzhai Road 96, Hefei, 230026, Anhui, People’s Republic of China

Yu Xie, Kai Wang & Yan Kong

Student Working Office, Xuancheng Campus, Hefei University of Technology, Tunxi Road 193, Hefei, 230009, Anhui, People’s Republic of China


Contributions

Designed research: YX and YK; collected data: YX; analyzed data: YX; wrote the paper: YX, KW and YK; revised the paper: YX, KW and YK.

Corresponding author

Correspondence to Yan Kong .

Ethics declarations

Conflicts of interest.

The authors declare no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (DOCX 602 KB)

Supplementary file 2 (DOCX 218 KB)

Rights and permissions

Reprints and permissions

About this article

Xie, Y., Wang, K. & Kong, Y. Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis. Sci Eng Ethics 27 , 41 (2021). https://doi.org/10.1007/s11948-021-00314-9

Download citation

Received : 22 March 2020

Accepted : 23 May 2021

Published : 29 June 2021

DOI : https://doi.org/10.1007/s11948-021-00314-9


  • Research misconduct
  • Questionable research practices
  • Research integrity
  • Meta-analysis


Open Access

Peer-reviewed

Research Article

Research misconduct in health and life sciences research: A systematic review of retracted literature from Brazilian institutions

Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

* E-mail: [email protected]

Affiliation Department of Nursing, College of Health Sciences, University of Brasilia, Brasília, Federal District, Brazil


Roles Data curation, Funding acquisition, Investigation, Methodology, Writing – review & editing

Roles Data curation, Formal analysis, Writing – review & editing

Affiliation Department of Statistics, Telecomunicações do Brasil – Telebrás, Brasília, Federal District, Brazil

Contributed equally to this work with: Fábio Zicker, Maria Rita Carvalho Garbi Novaes, César Messias de Oliveira

Roles Writing – review & editing

Affiliation Center for Technological Development in Health, Oswaldo Cruz Foundation, Brasília, Federal District, Brazil

Affiliation Department of Nursing, College of Health Sciences, Health Sciences Education and Research Foundation – ESCS/Fepecs, Brasília, Federal District, Brazil

Affiliation Department of Epidemiology & Public Health, Institute of Epidemiology & Health Care, University College London, London, United Kingdom

Roles Conceptualization, Funding acquisition, Project administration, Supervision, Writing – review & editing

  • Rafaelly Stavale, 
  • Graziani Izidoro Ferreira, 
  • João Antônio Martins Galvão, 
  • Fábio Zicker, 
  • Maria Rita Carvalho Garbi Novaes, 
  • César Messias de Oliveira, 
  • Dirce Guilhem


  • Published: April 15, 2019
  • https://doi.org/10.1371/journal.pone.0214272


Measures to ensure research integrity have been widely discussed because of its social, economic and scientific impact. In the past few years, financial support for health research in emerging countries has steadily increased, resulting in a growing number of scientific publications. These achievements, however, have been accompanied by a rise in retracted publications, followed by concerns about the quality and reliability of such publications.

This systematic review aimed to investigate the profile of medical and life sciences research retractions from authors affiliated with Brazilian academic institutions. The chronological trend between publication and retraction date, reasons for the retraction, citation of the article after the retraction, study design, and the number of retracted publications by author and affiliation were assessed. Additionally, the quality, availability and accessibility of data regarding retracted papers from the publishers are described.

Two independent reviewers searched for articles that had been retracted since 2004 via PubMed, Web of Science, Biblioteca Virtual em Saúde (BVS) and Google Scholar databases. Indexed keywords from Medical Subject Headings (MeSH) and Descritores em Ciências da Saúde (DeCS) in Portuguese, English or Spanish were used. Data were also collected from the Retraction Watch website ( www.retractionwatch.com ). This study was registered with the PROSPERO systematic review database (CRD42017071647).

A final sample of 65 articles was retrieved from 55 different journals with reported impact factors ranging from 0 to 32.86 (median 4.40, mean 4.69). The types of documents found were erratum (1), retracted articles (3), retracted articles with a retraction notice (5), retraction notices with erratum (3), and retraction notices (45). The assessment of the Retraction Watch website added 8 articles that were not identified by the search strategy using the bibliographic databases. The retracted publications covered a wide range of study designs. Experimental studies (40) and literature reviews (15) accounted for 84.6% of the retracted articles. Within the field of health and life sciences, medical science was the field with the largest number of retractions (34), followed by biological sciences (17). Some articles were retracted for at least two distinct reasons (13). Among the retrieved articles, plagiarism was the main reason for retraction (60%). Missing data were found in 57% of the retraction notices, a limitation of this review. In addition, 63% of the articles were cited after their retraction.

Publications are not retracted solely for research misconduct but also for honest error. Nevertheless, considering authors affiliated with Brazilian institutions, this review concluded that most of the retracted health and life sciences publications were retracted due to research misconduct. Because the number of publications is the most valued indicator of scientific productivity for funding and career progression purposes, a systematic effort from the national research councils, funding agencies, universities and scientific journals is needed to avoid an escalating trend of research misconduct. More investigations are needed to comprehend the underlying factors of research misconduct and its increasing manifestation.

Citation: Stavale R, Ferreira GI, Galvão JAM, Zicker F, Novaes MRCG, Oliveira CMd, et al. (2019) Research misconduct in health and life sciences research: A systematic review of retracted literature from Brazilian institutions. PLoS ONE 14(4): e0214272. https://doi.org/10.1371/journal.pone.0214272

Editor: Angeliki Kerasidou, University of Oxford, UNITED KINGDOM

Received: June 22, 2018; Accepted: March 11, 2019; Published: April 15, 2019

Copyright: © 2019 Stavale et al. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data are within the manuscript and its Supporting Information files.

Funding: Researchers involved in this review were supported by a grant from the following agencies: the Federal District Research Foundation – FAPDF (1629 018); Coordination for the Improvement of Higher Education Personnel - CAPES Brazil (1651856), Special Programme for Research and training in Tropical Diseases –TDR/WHO (B20359), UNIEURO, and the Brazilian National Council for Scientific and Technological Development – CNPq. These supporting institutions did not contribute to the study design, data collection or analysis, manuscript writing or publishing. JAMG is employed by Telecomunicações do Brasil – Telebrás. Telecomunicações do Brasil – Telebrás provided support in the form of salary for author JAMG, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific role of this author is articulated in the ‘author contributions’ section.

Competing interests: JAMG is employed by Telecomunicações do Brasil – Telebrás. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials.

Introduction

Research integrity relies on rigorous methodological approaches during planning, conduct, documentation and reporting of studies [ 1 ]. Practices known to harm these steps are classified as research misconduct [ 2 ], [ 3 ]. It has become more common for studies addressing the impact of misconduct to be published as a warning to the scientific community [ 4 ], [ 5 ], [ 6 ]. In 2012, Fang and colleagues conducted a systematic review of retracted publications in the field of biomedical and life sciences using PubMed. Their findings showed that most of the retractions were due to fraud, and they addressed the impact of these findings since these studies are mainly publicly funded [ 4 ].

Research misconduct occurs when plagiarism, data manipulation, fabrication, poor study reporting, or a lack of transparency enters the scientific process [ 2 ]. These acts compromise the validity and reliability of research results [ 7 ], [ 8 ], [ 9 ], and on many occasions they have led to a retraction notice. Retraction notices are intended to alert readers to serious errors, whether unintentional or of a misconduct nature, that result in unreliable conclusions [ 7 ]. They also aim to prevent these studies from being used as a basis for future investigations, except for research about scientific integrity itself. Additionally, retractions are an important tool for evaluating scientific production, and the study of retractions supports measures to avoid error and misconduct.

Misconduct has scientific, social and economic impacts [ 5 ], [ 8 ], [ 10 ]. Economically, it has been estimated that billions of dollars have been wasted on funding studies based on retracted publications [ 11 ]. Socially, it affects evidence-based medicine by exposing study volunteers and the population as a whole to wrong medical decisions [ 10 ]. Scientifically, further investigations based on unreliable findings and unethical research lead to untrustworthy conclusions, compromising the advancement of scientific knowledge [ 9 ], [ 12 ], [ 13 ]. Corrupted research conduct may therefore generate a chain of misconduct [ 6 ], [ 10 ].

Financial support for health and life sciences research has steadily increased in Brazil, and this has been followed by a rising number of scientific publications. Simultaneously, there has been a growing number of retracted publications, raising concerns about the quality and reliability of these articles. The first retraction reported in health and life sciences from Brazilian institutions was a nursing paper published in 2004 [ 14 ]. At the time, the author admitted to plagiarism. Since then, other cases of research misconduct have been discovered, generating apprehension about scientific advances in the country.

Brazil is a member of the BRICS (Brazil, Russia, India, China, South Africa) cooperative group, which is responsible for some of the 1% most-cited publications in the world [ 15 ]. Although the citation impact of the country is below the global average, it has increased by 15% over the past six years [ 15 ]. The country's higher-impact publications were produced mainly in collaboration with other BRICS institutions. The scientific influence of the country, as well as its participation in collaborative funds and networks for promoting health research, is growing worldwide [ 15 ].

The understanding of research integrity and research misconduct varies institutionally and culturally [ 16 ], [ 17 ], [ 18 ], so it is important to understand the factors underlying the retractions of Brazilian scientific publications and the notable increase in retractions.

Despite the relevance of research misconduct and the awareness of breaches of research integrity, the analysis of retracted publications in Brazil is quite new. In this context, this systematic review proposed the following research question: What are the main reasons for retracted publications in the field of health and life sciences that were published by researchers who are affiliated with Brazilian institutions? Answering this research question will pave the way for future investigations about research integrity in Brazil by recognizing the particularities of the country.

This review intended to characterize the underlying causes of retraction, to assess the extent of research misconduct, to support discussions of possible solutions, and ultimately, to promote further investigations. To carry out this review, data were collected regarding reasons for retraction, temporal trends from publication to retraction, citation pattern after retraction, and the impact factors and ethical guidelines endorsements of the journals. Additionally, this review evaluated the quality of retraction notices considering whether complete information was provided in accordance with the COPE guidelines [ 1 ]–a fundamental aspect of research transparency.

Materials and methods

Protocol and registration.

This review protocol was registered with PROSPERO (CRD42017071647).

Information source

The screening of eligible publications was performed from late July to early August 2017 in accordance with the preapproved registered protocol.

Search strategy

Details of the search strategy are available via the following link: https://www.crd.york.ac.uk/PROSPEROFILES/71647_STRATEGY_20170610.pdf .

Study selection

For this review, retraction notices published from January 2004 until August 2017 were selected for articles that had at least one author affiliated with a Brazilian institution, irrespective of authorship position and regardless of the publication year of the original article. The start date was the publication year of the first retracted article in nursing science written by authors affiliated with a Brazilian institution [ 14 ].

Studies in the field of life and health sciences, following the classification of the Brazilian National Council for Scientific and Technological Development, CNPq (from the Portuguese Conselho Nacional de Desenvolvimento Científico e Tecnológico) [ 19 ], that were published in English, Portuguese or Spanish in national or international journals were eligible for this review.

All retracted articles, regardless of study design and with complete or incomplete retraction notice information according to the Committee on Publication Ethics (COPE) guidelines [ 2 ], were eligible for this review when they were in accordance with the protocol. Retraction notices, articles with a retraction notice attached, or any other information indicating a retraction were considered for data collection. Studies about research integrity were excluded, as were studies from other fields of scientific knowledge.

Sampling and data collection process

Two independent reviewers searched for retracted articles in the PubMed, Web of Science and Brazilian Virtual Library of Health (BVS) databases. Google Scholar and the Retraction Watch website [20] were searched to identify additional publications and gray literature; the latter is an open-access portal reporting retracted papers worldwide. The results were compared, and a consolidated list of retracted articles was produced according to the protocol.

Data were collected and analyzed according to the reason for retraction, the time from publication to retraction, the citation pattern after retraction, the journal impact factor, the quality of the retraction notice information, the authors' affiliations, and adherence to the COPE or CONSORT guidelines on ethics and standard reporting.

Data collection rationale

  • Publication year and retraction year trend : The time between the date of publication and the date of retraction was calculated in years. Articles published and retracted in the same year were considered to have a time of 0. Publications without complete information regarding these dates were labeled as “not applicable” for this analysis.
  • Author’s affiliation : This analysis was limited to one author per paper. Data were collected from the last authors because they are typically responsible for mentoring and supervising the research planning, conduct and reporting [21]. Three articles were excluded from this analysis because the last author was not affiliated with a Brazilian institution.
  • Journal’s name and impact factor (IF) : The 5-year impact factor was collected from Thomson Reuters indicators. Previous research has shown an increase in citations of retracted papers published in high-impact journals [9]. This review investigated whether the same pattern exists in Brazilian publications.
  • Ethical and reporting guidelines endorsement : It was assumed that journals endorsing either the CONSORT or the COPE guidelines followed ethical guidelines.
  • Area of study : The health and life sciences were categorized into the following subgroups: medical science, biological science, nutrition, dentistry, sports science, nursing science, physiotherapy, and pharmacology.
  • Retraction indicator : The presentation of retraction notices or retracted articles reflected how editors and databases did or did not facilitate their visibility. Transparency was ensured when retraction notices were attached to the original article and had a clear warning of retraction/withdrawal.
  • Reasons for retraction : The reasons for retraction were classified as a) error (inappropriate study design, data collection or reporting); b) fraud (data or image manipulation); c) author’s dispute (publication without the consent or recognition of all authors, sponsors or industry manufacturers of the tested product); d) duplicated publication (the same article published more than once by authors or editors); e) irregular citation pattern, or citation stacking (an artifice used to inflate a journal’s impact factor); f) unknown (no reason for retraction was mentioned); g) plagiarism (image, text or unspecified forms of plagiarism); and h) no informed consent obtained for the use and publication of participants’ images.
  • Retracted by : Retraction notices are expected to acknowledge who retracted the article. Retractions by authors indicate good faith and are considered retractions due to an honest mistake. Retractions by editors, depending on the reason, may indicate honest mistakes by the editorial board or misconduct by the authors.
  • Retraction endorsement by authors : Authors usually participate in and/or agree with the wording of the retraction. Reporting the authors’ participation and endorsement indicates transparency in the retraction process.
  • Citation pattern of retracted articles : The number of times an article has been cited reflects its visibility and possible impact on the scientific community [22]. Therefore, the citation pattern before and after retraction was analyzed by calculating, for each article, the mean citations per year from the date of publication to the date of retraction. Similarly, the mean citations per year from the date of retraction to 2017 were calculated. For comparison purposes, articles with a higher mean number of citations per year before retraction were considered to have a positive-citation pattern , while those with a higher mean number of citations per year after retraction were considered to have a negative-citation pattern .
  • Quality of retraction notices : According to the COPE recommendations [ 2 ], [ 7 ], retraction notices must contain: the date of retraction, motives for the retraction, whether the retraction was endorsed by the authors, who requested the retraction, and the proper citation of the original article in the retraction notice. A complete report of this information accounts for a high-quality retraction notice.
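The citation-pattern classification described above can be sketched in a few lines of code. This is not the authors' code: the function names and the example figures are hypothetical, chosen only to illustrate the stated definitions (Python used here for illustration).

```python
# Sketch of the review's citation-pattern classification.
# Function names and example numbers are hypothetical, not the authors' code.

def mean_citations_per_year(citations, start_year, end_year):
    """Mean citations per year over the period; an article published and
    retracted in the same year is treated as spanning one year."""
    span = max(end_year - start_year, 1)
    return citations / span

def citation_pattern(cit_before, cit_after, pub_year, ret_year, census_year=2017):
    """'positive' if the mean citation rate was higher before retraction,
    'negative' if it was higher after (per the review's definitions)."""
    before = mean_citations_per_year(cit_before, pub_year, ret_year)
    after = mean_citations_per_year(cit_after, ret_year, census_year)
    return "positive" if before > after else "negative"

# Hypothetical article: published 2007, retracted 2016, with 432 citations
# before retraction and 58 after (counted through 2017).
print(citation_pattern(432, 58, 2007, 2016))  # 48.0/yr before vs 58.0/yr after
```

Note that a heavily cited article can still end up with a negative-citation pattern if its post-retraction citation rate exceeds its pre-retraction rate, as in the example above.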

The PRISMA statement checklist was used to assure the quality of this systematic review; it is provided as S1 Table. Some items did not apply to this study because the review evaluated only retraction notices and excluded the original articles. Consequently, the methods for assessing the risk of bias of individual studies, summary measures, synthesis of results, and risk of bias across studies were not used.

Statistical analysis

The Shapiro-Wilk normality test was applied to the citation patterns before and after retraction and to the journal impact factors; these variables exhibited a non-normal distribution. Hence, the Spearman correlation test and a descriptive analysis were performed using the R statistical program, version 3.4.2, and Excel for Mac 2011, version 14.4.3. The conducted tests are available in S1 File.
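The normality check followed by a rank-based correlation can be illustrated with a short script. The review performed this analysis in R 3.4.2; the sketch below is an equivalent Python/scipy illustration, and all data values are hypothetical.

```python
# Illustrative only: the review ran this analysis in R 3.4.2; this
# Python/scipy version uses hypothetical citation and impact-factor data.
from scipy import stats

citations_per_year = [1.2, 0.5, 3.4, 10.0, 0.1, 7.5, 2.2, 0.8]  # hypothetical
impact_factors     = [2.1, 1.5, 4.0, 12.3, 0.2, 8.8, 3.1, 0.9]  # hypothetical

# Shapiro-Wilk: a small p-value suggests the data are not normally
# distributed, motivating a rank-based (Spearman) correlation over Pearson.
w_stat, p_norm = stats.shapiro(citations_per_year)

# Spearman's rho measures the monotonic association between the
# citation rate and the journal impact factor.
rho, p_value = stats.spearmanr(citations_per_year, impact_factors)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```

The same two-step logic (test normality, then fall back to a rank-based correlation) matches the analysis the authors describe.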

Retraction notice selection

A final sample of 65 retracted articles was retrieved ( Fig 1 ) from 55 different journals, with impact factors ranging from 0 to 32.86 (mean 4.7). The document types included were erratum (n = 1), retracted article (n = 3), retracted article with its retraction notice attached (n = 5), retraction notice with erratum (n = 3) and retraction notice (n = 45). The Retraction Watch search [13] added 8 articles that were not identified by the search strategy in the bibliographic databases.


Study selection flowchart showing initial number of records to final sample retrieved for analysis.

https://doi.org/10.1371/journal.pone.0214272.g001

The retracted publications covered a wide range of study types. Experimental studies (n = 40) and literature reviews (n = 15) accounted for 84.6% of the included articles (Table 1). Medical science was the field with the largest number of retractions, at 52% (n = 34), followed by biological sciences at 26% (n = 17), dentistry at 7.7% (n = 5), sports sciences at 3% (n = 2), pharmacology at 3% (n = 2), nutrition at 1.5% (n = 1), nursing sciences at 1.5% (n = 1), and physiotherapy at 1.5% (n = 1).


https://doi.org/10.1371/journal.pone.0214272.t001

Ethical and standard reporting guidelines.

Of the journals that published the retraction notices, only 7 clearly complied with the COPE and CONSORT guidelines. A total of 41.5% of the selected journals were neither members of COPE nor on CONSORT's list, although references to these two main ethical and reporting guidelines were found in the Guide for Authors sections of these journals.

Affiliation, number of retractions and area of study of the authors.

A total of 26 Brazilian institutions had at least one research article retracted. Of these, 20 (77%) were public institutions, 5 (19%) were private institutions and 1 (4%) was a nonprofit organization. The University of São Paulo had the highest number of retracted publications (n = 17), followed by the University of Campinas (n = 16); both are leading Brazilian academic institutions with the highest scientific productivity [15]. Of the 62 articles analyzed, 48 (77.4%) were published by authors affiliated with institutions located in southeastern Brazil, the region that concentrates the largest number of postgraduate programs in the country [23]. The University of Campinas (São Paulo) also accounted for the highest number of retractions by a single author (Table 2); one author had 8 retractions during the studied period. Plagiarism was the main cause of the retractions of the two authors with the most retractions, both affiliated with this university (Table 3).


https://doi.org/10.1371/journal.pone.0214272.t002


https://doi.org/10.1371/journal.pone.0214272.t003

Time trend between publication and retraction.

The time to retraction varied from 0 to 19 years. Five retraction notices (7.7%), 3 from 2011 and 2 from 2012, did not specify the year of retraction. In 2017, one article was retracted less than a year after it was published ( Fig 2 ).


Distribution of the number of articles by reason for retraction. Plagiarism is subdivided into: a) unknown (purple bar), b) plagiarism of text (blue bar), and c) plagiarism of image (light green bar).

https://doi.org/10.1371/journal.pone.0214272.g002

The overall mean time to retraction was 3.36 years. Most articles (55%) took one to three years from publication to retraction. The data showed that the number of retracted articles increased significantly starting in 2012.

Number of citations after retraction

The analysis of post-retraction citations is a proxy for the influence of articles on scientific activity despite their retraction. A total of 37% of the retrieved articles had a positive-citation pattern , while 63% had a negative-citation pattern . The most cited article with a negative-citation pattern was published in 2007 and retracted in 2016 [24]. Thus far, it has received a total of 490 citations, 58 of which came after the retraction.

Association between impact factor and post retraction citation number

There was a strong positive correlation between an article's number of citations per year after its retraction and the impact factor of the journal that published its retraction notice (Spearman rho = 0.69, p < 0.05). The details of this analysis can be found in S1 File.

Association between the impact factor and the number of citations before the retraction

There was a moderate correlation between an article's number of citations per year before its retraction and the impact factor of the journal in which it was published (Spearman rho = 0.43, p < 0.05).

This review's sample size did not allow for a multivariate analysis. The details of this analysis can be found in S1 File.

Quality of data from the retraction notices

Retraction notices are supposed to cite the original article [7]. However, our results showed that a proper citation of the original article was present in only 22 (33%) retraction notices; 42 retraction notices did not cite the original article, and 1 article was cited three times in its retraction, implying that the retraction notice applied to more than one publication. Missing data were found in 57% of the retrieved retraction notices, mainly regarding the date of retraction (7%), the reason for retraction (7%), who requested the retraction (3%) and endorsement by the authors (38.4%). Retraction warnings, such as a red withdrawn/retracted sign over the article, were also absent in 37% of cases.

Reasons for retraction

The identified reasons for retraction are illustrated in Fig 2. Thirteen articles (20%) were retracted for at least two distinct reasons. Fraud was responsible for the retraction of three articles: two were retracted for image manipulation [16], [17] and one for data manipulation. Errors were attributed to inappropriate statistical analysis (n = 4), study design (n = 2) and inadequate data collection (n = 6). Retractions for duplicated publication were attributed to authors in 71% of the cases and to editors in 4.6% of the cases. Although an author's dispute should not in itself lead to a retraction [6], two articles were retracted due to an author's dispute. However, no additional information is available for these retractions, so it is not possible to assume this was the only reason.

The understanding of research integrity and of the consequences of misconduct varies across cultures [16], [17], [18]. Likewise, the concepts of research integrity and research misconduct differ from institution to institution [2], [3]. In general, all institutions agree that fabrication, fraud and plagiarism negatively affect science to some extent and characterize research misconduct [3], [13], although misconduct can have a wider definition [2]. Research integrity refers to a broader concept that does not necessarily imply misconduct or a direct effect on scientific integrity [13]. This diversity may explain the disparities among journals, publishers, research institutions, funders, and researchers in taking measures to prevent and report misconduct or breaches of research integrity, and it represents a challenge for academic studies on the matter.

In fact, for this review, the traditional bibliographic sources did not provide a complete picture of retracted articles: a total of eight articles (15%) were identified only on the Retraction Watch website, highlighting the difficulty of retrieving retractions and suggesting poor transparency in their reporting.

Another obstacle to research transparency is the diversity of journal policies on this subject [6]: journals do not always follow the COPE recommendations for the publication of retraction notices. Examples include the use of footnotes or reader comments as the only alert of a retraction [25], [26], and the absence of any warning in the database or in the article available from the journal. In addition, this review identified an erratum that was actually a retraction notice. These results indicate that some journal policies disregard research integrity flaws.

Legal threats to publishers influence their positions regarding misconduct and, therefore, the issue of retractions [7]. Despite publishers' concern over litigation, this review found complete information, transparency and clarity in other retraction notices, supporting the existence of disparities in how editors and publishers handle errors or misconduct.

The fact that public institutions funded the majority of the retracted articles also underscores the importance of coordinated action among institutions to prevent research misconduct and to ensure the responsible investment of public funds.

In 2013, a Brazilian citation-stacking scheme used to increase journal impact factors was revealed [24]. Thomson Reuters discovered that four journals were engaged in irregular citation practices in order to boost their impact factors [27]. Despite the considerable number of retractions that resulted from this scheme, this review's search strategy identified only one paper retracted for an irregular citation pattern [28], known as citation stacking. This again highlights the difficulties in finding retracted articles [29], [30] and, therefore, the need for efforts to maintain transparency at every step of scientific publishing.

Previous studies have shown that fraud and error account for most retractions of biomedical articles [4], [28]; however, the present review revealed a larger number of retractions due to plagiarism. Fraud refers to the fabrication, falsification or manipulation of data, while error implies no intention to compromise the study [13]. Plagiarism may refer to the unjust appropriation of ideas (text plagiarism) or images (image plagiarism). This review showed that 76% of the reported plagiarism involved images. Among the cases of image plagiarism, 15% of the retractions clearly stated similarities to images in previous publications and raised manipulation concerns. In addition, 33.3% of the retractions due to plagiarism did not specify the type of plagiarism.

With regard to image editing, there is a fine line between what is allowed and what is not, and scientific journals have no standardized guidelines [13], [31]. Coordinated action is needed to establish guidelines and to educate authors about image editing and the rationale for what is considered misconduct [32].

The underlying factors explaining why image plagiarism is the major cause of misconduct remain unclear. Nevertheless, the notable increase in retractions indicates growing awareness of scientific misconduct [33], of the different forms of plagiarism, and of the need for actions to prevent this behavior.

Are the increasing numbers of retracted publications a sign of scientific awareness of misconduct?

The results of this review are in line with previous studies of chronological trends in retracted publications [33], [34], which showed an increasing number of retractions in recent years. It is not possible to affirm that misconduct is increasing by evaluating only the retractions of authors affiliated with Brazilian institutions; deeper investigation is needed to evaluate this aspect.

The increasing number of retracted publications over the years may be a sign of scientific awareness and of the response of authors, readers and institutions to flag questionable research [33], [34]. This can be illustrated by authors requesting the withdrawal of their own articles or by other researchers alerting editors. In addition, more retractions reflect advances in technology that can identify plagiarism and data manipulation [33], [34]; for instance, software that detects image manipulation and plagiarism may increase the detection of such misconduct. Likewise, with a faster publication process, retractions and investigations, when needed, can be published more efficiently with the participation and collaboration of authors, institutions, researchers, and journals.

What is the purpose of a retraction if not to be used to avoid more scientific misconduct?

A recent publication explored the nature of citations of retracted articles [9]. The authors classified the citations as positive, neutral or negative. An interesting aspect of this study was its evaluation of a proper citation method for retracted articles; without one, a retracted article is cited as legitimate and, hence, reliable. In most cases, it is not possible to assess whether a retracted article served as a basis for a new scientific investigation despite its retraction or whether it was cited without careful attention. Our findings on post-retraction citation patterns showed how often retracted articles continue to receive positive citations without accurate identification of the retraction.

Further investigation is needed to understand why unreliable studies are still cited as legitimate [35]. Nevertheless, it is important to note that retracted publications might be used in new scientific production. A proper citation of a retracted publication brings awareness to the causes of its withdrawal and prevents authors from ignoring the retraction, giving researchers the tools to make decisions in light of the obvious ethical implications.

The role of distinct actors in the publication of retractions

Retractions are published at the request of an author, publisher, editor, or the community [4], [7], [8], [9]. The intention of a retraction is to promote transparency and clarity regarding research misconduct or an honest error that led to a flawed article [4], [6], [7]. Thus, in accordance with the COPE Guidelines for Retractions , retractions should be published as soon as possible to prevent new citations of the unreliable work, researchers acting on its findings, or further erroneous conclusions. Because the main goal is to minimize a chain of flaws, retractions should be transparent regarding the reason for the retraction, endorsement by the authors, the date of retraction, a reference to the retracted article, a DOI, attachment to the original article, and visibility [7], [36].

This review encompassed a wide range of journal retraction policies, from the wording of the retraction to how the article is red-flagged [6], [7]. Regarding wording, the reasons for retraction were sometimes vague or absent, and information on the retraction date and the citation of the retracted article was missing from some publications. Regarding methods to signal a retraction to readers, practices varied from a large red withdrawn/retracted notice ( red flag ) to a simple footnote. The lack of a standardized format for retraction notices is a possible explanation for the difficulties in retrieving articles for this review. Such practices run counter to the very purpose of publishing retractions: transparency.

Endeavors to promote transparency are a safeguard against unethical practices and concern everyone involved in scientific activity: scientists, publishers, editors, and academic institutions [18], [35], [36]. Each has a specific role and can contribute to minimizing misconduct; everybody has a part to play.

Limitations and strengths

Incomplete information in the retraction notices reduced the accuracy of our analysis. Hence, the results may underestimate the number of retractions, owing to the restrictions of our search strategy, the level of transparency of the published retractions, and their availability in the bibliographic databases.

Additionally, our analysis did not include an assessment of the original papers' quality; therefore, it is not possible to draw conclusions regarding the relationship between research quality and retraction. Further investigations should be performed for this purpose, since a retraction does not necessarily indicate a completely invalid study [1].

Since research integrity is a worldwide concern, the findings of this review, although restricted to Brazilian institutions, provide useful insights and could serve as a basis for future investigations.

Retraction notices do not account only for research misconduct; they also alert readers to honest mistakes in scientific practice [6]. Nevertheless, these incidents compromise the quality and validity of research results. Considering authors affiliated with Brazilian institutions, this review concluded that most retractions of articles in the health and life sciences were due to research misconduct.

Journals, funders, academic institutions, and researchers have an important educational and surveillance role to play in preventing research misconduct. The enforcement of disciplinary and educational measures is fundamental to reducing the incidence of corrupted science. In addition, a standard instrument for reporting retraction notices would support the discussion of ethical policies and promote the uniform publication of retractions.

This study sought to emphasize the importance of coordinated action among all those involved in scientific production in order to promote research transparency. Good practices in conducting investigations and in reporting and publishing retraction notices have a positive impact. The underlying factors involved in research misconduct remain unclear. Measures to prevent misconduct may need to take into account the particularities of each society, including its weaknesses and strengths, depending on cultural aspects. However, the impact of bad science is borderless and is not culture-dependent.

Supporting information

S1 Table. PRISMA checklist.

https://doi.org/10.1371/journal.pone.0214272.s001

S2 Table. Study data.

https://doi.org/10.1371/journal.pone.0214272.s002

S1 File. Statistical analysis pipelines and rationale.

https://doi.org/10.1371/journal.pone.0214272.s003

Acknowledgments

We would like to thank the editors, publishers, institutions and authors who contributed to clear and transparent retraction notices; without their integrity, this review would not have been possible.

  • 2. Smith R. What is research misconduct? In: The COPE Report 2000. Committee on Publication Ethics, BMJ Books; 2000. https://publicationethics.org/files/u7141/COPE2000pdfcomplete.pdf
  • 11. Neimark J. Line of attack: Christopher Korch is adding up the costs of contaminated cell lines. Science. 2015;347(6225). [cited 2017 Nov]. Available from: https://www.jillneimark.com/pdf/line-of-attack.pdf
  • 13. Shaw D. Interpreting integrity: a conceptual schema for research integrity. 2017. https://wcrif.org/images/2017/documents/3.%20Wednesday%20May%2031,%202017/4.%202A-00/D.%20Shaw%20-%20Interpreting%20integrity;%20A%20conceptual.pdf
  • 15. Cross D, Thomson S, Sinclair A. Research in Brazil: a report for CAPES. Clarivate Analytics; 2018. https://www.capes.gov.br/images/stories/download/diversos/17012018CAPES-InCitesReport-Final.pdf
  • 18. InterAcademy Council (IAP). Responsible conduct in the global research enterprise: a policy report. The Netherlands: IAP; 2012. http://www.interacademies.net/file.aspx?id=19789
  • 19. Brasil. Tabela de Áreas do Conhecimento. Conselho Nacional de Desenvolvimento Científico e Tecnológico, Ministério da Ciência, Tecnologia, Inovações e Comunicações; 2017. http://www.cnpq.br/documents/10157/186158/TabeladeAreasdoConhecimento.pdf
  • 20. Oransky I. Retraction Watch: tracking retractions as a window into the scientific process. 2018. https://retractionwatch.com
  • 21. Venkatraman V. Conventions of scientific authorship. Science. 2010. [cited 2017 Nov 30]. https://doi.org/10.1126/science.caredit.a1000039
  • 23. GEOCAPES. Sistema de dados estatísticos da CAPES: distribuição de programas de pós-graduação no Brasil em 2017. Brasil; 2018. [cited 2018 Dec]. https://geocapes.capes.gov.br/geocapes/

  • NEWS FEATURE
  • 20 July 2022

Exclusive: investigators found plagiarism and data falsification in work from prominent cancer lab

  • Richard Van Noorden


Over the past decade, questions have swirled around the work coming out of a prominent US cancer-research laboratory run by Carlo Croce at the Ohio State University (OSU). Croce, a member of the US National Academy of Sciences, made his name with his work on the role of genes in cancer. But for years, he has faced allegations of plagiarism and falsified images in studies from his group. All told, 11 papers he has co-authored have been retracted, and 21 have required corrections.


Nature 607 , 650-652 (2022)

doi: https://doi.org/10.1038/d41586-022-02002-5


Supplementary Information

  • OSU report from committee investigating Flavia Pichiorri
  • OSU report from committee investigating Michela Garofalo


Fraud and Deceit in Medical Research: Insights and Current Perspectives


The number of scientific articles published per year has been steadily increasing, and so have instances of misconduct in medical research. While increasing scientific knowledge is beneficial, it is imperative that research be authentic and bias-free. This article explores why fraud and other misconduct occur, presents the consequences of the phenomenon, and proposes measures to eliminate unethical practices in medical research. The main reason scientists engage in unethical practices is the pressure to publish, which is directly tied to academic advancement and career development. Additional factors include the pressure to obtain research funds, pressure from funding sources on researchers to deliver results, the way scientific publishing has evolved over the years, and the over-publication of research in general. Fraud in medical research damages trust in and the reliability of science and potentially harms individuals.

Keywords: Fraud, Misconduct, Undue influence, Research Ethics, Publish, Unified Patient Lobby

Since the introduction of Evidence-Based Medicine (EBM) in the early 1990s, scientific articles published per year have increased steadily. No one knows the exact number of scientific articles published per year, but several estimates point to around 2,000,000. [1] EBM aims to integrate the clinical experience and the best available scientific knowledge in managing individual patients. [2] The EBM model is based on the accumulation of as much clinical and research data as possible, which has propelled a significant rise in research. Unfortunately, its incentive structure has also led to a rise in research misconduct.

“Fraud in science has a long history.” [3] Cases of misconduct began to surface in the late 1980s and increased during the 1990s. Experts suggest that today fraud is “endemic in many scientific disciplines and in most countries.” [4] In recent reporting, the majority of cases of scientific fraud involved falsification and fabrication of data, while plagiarism was much less frequent. In a Dutch study of 6,813 researchers, 8 percent of scientists overall, and 10 percent of medical and life-sciences researchers, admitted to falsifying data at least once between 2017 and 2021, while more than half engaged in at least one questionable research practice. [5] Questionable research practices include research design flaws or unfairness in decisions surrounding publication or grants. [6] In an older study, closer to 2 percent of those surveyed reported having engaged in falsification or fabrication, [7] while in a more recent survey of 3,000 scientists with NIH grants in the United States, 0.3 percent of respondents admitted to fabricating research data and 1.4 percent admitted to plagiarizing. [8] These numbers almost certainly understate the true incidence of fraud: many scientists admitted to a range of behaviors beyond fabrication, falsification, and plagiarism that undermine the integrity of science, such as changing the results of a study under pressure from a funding source or failing to present data that contradict one’s previous research. It is also unclear whether surveys are the best method to investigate misconduct, because a scientist answering a survey may be unsure of anonymity and may not be truthful.

This article explores why misconduct occurs, presents the consequences, and proposes measures to eliminate unethical practices in medical research. In the 1999 Joint Consensus Conference on Misconduct in Biomedical Research, “scientific fraud” was defined as any “behavior by a researcher, intentional or not, that falls short of good ethical and scientific standards.” [9]

I. The Scientific Publishing Landscape

There are several reasons scientists may commit misconduct and engage in unethical practices. There is increasing pressure to publish, reflected in the motto “publish or perish.” [10] The number of scientific papers published by a researcher is directly related to their academic advancement and career development. Similarly, academic institutions rely on scientific publications to gain prestige and access research grants.

Pressure to secure research grants may create environments that make it challenging to maintain research integrity. Researchers are often tempted to alter their data to achieve the desired results, to report the results of a single study in multiple separate publications, commonly referred to as “salami publication,” or even to submit their scientific articles to more than one journal simultaneously. This creates a vicious cycle in which the need for funding leads to scientific misconduct, which in turn secures more research funding. Meanwhile, pressure from funding sources cannot be overlooked either. Although researchers must report the role of their funding sources, selection and publication bias often advantage articles that support the interests of the financial sponsor. Disclosure does not alter the conflict of interest.

The growing number of scientific articles published per year has practically overwhelmed the peer-review system. Manuscript submissions are often reviewed superficially or assigned to inexperienced reviewers; therefore, cases of misconduct may go unnoticed. The rise of “predatory” journals, which charge authors publication fees and do not review work for authenticity, and the dissemination of information through preprints have worsened the situation.

The way that profits influence scientific publishing has very likely contributed to the phenomenon of misconduct. The publishing industry is a highly profitable business. [11] The increased reliance on funding from sources that expect the research to appear in prestigious, open-access journals often creates conflicts of interest and funding bias. At the same time, high-impact journals have given little space to negative results and previous failures. Nonsignificant findings commonly remain unpublished, a phenomenon known as “the file drawer problem.” Scientists often manipulate their data to fit their initial hypothesis, or change their hypothesis to fit their results, leading to outcome-reporting bias.

II. Misconduct in Reporting and Publishing Data

The types of misconduct vary and have different implications for the scientist’s career and for those relying on the research. For example, plagiarism is generally not punishable by law unless it violates the original author’s copyright. Nevertheless, publishers that detect plagiarism impose penalties such as rejection of the submitted article and expulsion of the author. While plagiarism can be either accidental or deliberate, in either case it is a serious violation of academic integrity, as it involves passing off someone else’s “work or ideas” as one’s own. [12] Plagiarism can be “verbatim” (copying sentences or paragraphs from previously published work without using quotation marks or referencing the source) or can consist of rephrasing someone’s work or ideas without citing them. In “mosaic” plagiarism, the plagiarized work comes from various sources. “Self-plagiarism” is an author’s reproduction of their own previous publications or ideas in the same or altered words.

According to most scientific journals, all authors of an article must have contributed to the conception and design of the study, drafted the article or revised it critically, and approved its final version. [13] The use of a ghost author (usually a professional writer who is not named as an author) is generally unethical, as it undermines the requirement that the listed authors created the article.

Wasteful publication is another practice that contributes to misconduct. It includes dividing the results of a single study into multiple publications (“salami slicing”), republishing the same results in one or more articles, or extending a previously published article by adding new data without reaching new conclusions. Wasteful publication not only skews the scientific databases but also wastes the time of readers, editors, and reviewers. It is considered unethical because it unreasonably inflates the authors’ citation records. Authors caught engaging in such behaviors may be banned from submitting articles for years, while the submitted article is automatically rejected. Wasteful publication is an example of how the pressure to publish more articles leads to dishonest behavior, making it look like a researcher has conducted more studies and has more experience.

Conflicts of interest are not strictly prohibited in medicine but require disclosure. Although disclosure of financial interests is a critical step, it does not guarantee the absence of bias. Researchers with financial ties to a pharmaceutical company funding their research are more likely to report results that favor the sponsor, which ultimately undermines the integrity of research. [14] Financial sponsors should not be allowed to influence publication; rather, authors should publish their results based on their own decisions and findings.

III. Misconduct in Carrying Out Scientific Research Studies

Common forms of fabrication include concealing negative results, changing the results to fit the initial hypothesis, or selectively reporting outcomes. Falsification is the manipulation of experimental data that leads to an inaccurate presentation of research results. Falsified data includes deliberately manipulated images, omitted or added data points, and outliers removed from a dataset for the sake of manipulating the outcome. In contrast to plagiarism, this type of misconduct is very difficult to detect. Scientists who fabricate or falsify their data may be banned from receiving funding grants or terminated from their institutions. Falsification and fabrication are dangerous to the public, as they can result in people giving and receiving incorrect medical advice. Relying on falsified data can lead to death or injury, or lead patients to take a drug or treatment, or use a medical device, that is less effective than perceived. Thus, some members of the scientific community support criminalizing this type of misconduct. [15]

Research involving human participants requires respect for persons, beneficence, justice, voluntary consent, respect for autonomy, and confidentiality. Violating those principles constitutes unethical human experimentation. The Declaration of Helsinki is a statement of ethical principles for biomedical research involving human subjects, including research on identifiable human material and data. Similarly, research in which animals are subjects is also regulated. The first set of limits on the practice of animal experimentation was the Cruelty to Animals Act passed in 1876 by the Parliament of the United Kingdom. Currently, all animal experiments in the EU should be carried out in accordance with the European Directive (2010/63/EU), [16] and in the US, there are many state and federal laws governing research involving animals. The incentives to compromise the ethical responsibilities surrounding human and animal practices may differ from the pressure to publish, yet some are in the same vein. They may generally include taking shortcuts, rushing to get necessary approvals, or using duress to get more research subjects, all actions that reflect a sense of urgency.

IV. Consequences of Scientific Misconduct

Fraud in medical research damages science by creating data that other researchers may be urged to follow or reproduce, wasting time, effort, and funds. Scientific misconduct undermines trust among researchers and the public’s trust in science. Meanwhile, fraud in medical trials may lead to the release of ineffective or unsafe drugs or processes that could potentially harm individuals.

Scientific misconduct is associated with reputational and financial costs, including wasted funds for research that is practically useless, costs of an investigation into the fraudulent research, and costs to settle litigation connected with the misconduct. The retraction of scientific articles for misconduct between 1992 and 2002 accounted for $58 million in lost funding by the NIH (which is the primary source of public funds for biomedical research in the US). [17]

Of retracted articles, over half are retracted due to “fabrication, falsification, and plagiarism.” [18] Yet it is likely that many articles containing falsified research are never retracted. A study of 12,000 journals revealed that most had never retracted an article. The same study suggests that some journals have improved their oversight, but many have not. [19]

V. Oversight and Public Interest Organizations

The Committee on Publication Ethics (COPE), founded in 1997, established practices and policies for journals and publishers to achieve the highest standards in publication ethics. [20] The Office of Research Integrity (ORI) is an organization created in the US to do the same. In 1996, the International Conference on Harmonisation (ICH) adopted the international Good Clinical Practice (GCP) guidelines. [21] Finally, in 2017 the Parliamentary Office of Science and Technology (POST) initiated a formal inquiry into trends and developments in fraud and misconduct in research and the publication of research results. [22]

Despite the increasing efforts of regulatory organizations, scientific misconduct remains a major issue. To eliminate unethical practices in medical research, we must get to the root of the problem: the pressures put on scientists to increase output at the expense of quality.

In the absence of altered incentives, criminalization is a possibility. However, several less severe remedies for reducing the prevalence of scientific misconduct exist. Institutions first need to foster open and frank discussion and promote collegiality. Reducing high-stakes competition for career advancement would also help realign the incentives that lead researchers to compromise research ethics. In career advancement, emphasis should be placed on the quality rather than the quantity of scientific publications. Mentorship of lab assistants by senior, experienced researchers can bolster ethical training. Adopting clear codes of conduct and close supervision of research practices in the lab and beyond should also be formalized.

The publication system plays a critical role in preserving research integrity. Computer-assisted tools that detect plagiarism and other types of misconduct need to be developed or upgraded. To improve transparency, scientific journals should establish clear authorship criteria and require that the data supporting the findings of a study be made available, a movement that is already underway. Preprint repositories might also help with transparency, though they could lead people to act on data that has not been peer-reviewed. Finally, publishing negative results is necessary so that the totality of research is not skewed by the omission of studies that are informative but did not produce the results researchers hoped for. Consistently publishing negative results may create a new industry standard and help researchers see that all data are important.

Any medical trial, research project, or scientific publication must be conducted to develop science and improve medicine and public health. However, the pressures from the pharmaceutical industry and academic competition pose significant threats to the trustworthiness of science. Thus, it is up to every scientist to respect and follow ethical rules, while responsible organizations, [23] regulatory bodies, and scientific journals should make every effort to prevent research misconduct.

*Paper was updated September 27, 2022

[1]   World Bank. “Scientific and technical journal articles”. World Development Indicators, The World Bank Group. https://data.worldbank.org/indicator/IP.JRN.ARTC.SC?year_low_desc=true. 

[2] Masic I, Miokovic M, Muhamedagic B. “Evidence Based Medicine - New Approaches and Challenges.” Acta Inform Med. 2008;16(4):219-25. https://www.bibliomed.org/mnsfulltext/6/6-1300616203.pdf?1643160950

[3] Dickenson, D. “The Medical Profession and Human Rights: Handbook for a Changing Agenda.” Zed Books . 2002;28(5):332. doi: 10.1136/jme.28.5.332.

[4] Ranstam J, Buyse M, George SL, Evans S, Geller NL, Scherrer B, et al. “Fraud in Medical Research: An International Survey of Biostatisticians, ISCB Subcommittee on Fraud.” Control Clin Trials. 2000;21(5):415-27. doi: 10.1016/s0197-2456(00)00069-6.

[5] Gopalakrishna G, Riet GT, Vink G, Stoop I, Wicherts JM, Bouter L. “Prevalence of questionable research practices, research misconduct and their potential explanatory factors: a survey among academic researchers in The Netherlands.” MetaArXiv. July 6, 2021. doi:10.31222/osf.io/vk9yt; Chawla, Dalmeet Singh. “8% of researchers in Dutch survey have falsified or fabricated data.” Nature. 2021. https://www.nature.com/articles/d41586-021-02035-2 (The Dutch study’s author suggests the results could be an underestimate; she also notes an older similar study that found 4.5 percent.)

[6] Chawla.

[7] Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data.  PLoS One . 2009;4(5):e5738. doi:10.1371/journal.pone.0005738

[8] Martinson BC, Anderson MS, de Vries R. “Scientists behaving badly” Nature . 2005;435(7043):737-8. https://www.nature.com/articles/435737a.

[9] Munby J, Weetman, DF. Joint Consensus Conference on Misconduct in Biomedical Research: The Royal College of Physicians of Edinburgh. Indoor Built Environ. 1999;8:336–338. doi: 10.1177/1420326X9900800511.

[10] Stephen Beale “Large Dutch Survey Shines Light on Fraud and Questionable Research Practices in Medical Studies Published in Scientific Journals,” The Dark Daily, Aug 30, 2021. https://www.darkdaily.com/2021/08/30/large-dutch-survey-shines-light-on-fraud-and-questionable-research-practices-in-medical-studies-published-in-scientific-journals/

[11] Buranyi S. Is the staggeringly profitable business of scientific publishing bad for science? The Guardian. June 27, 2017. https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science  

[12] Cambridge English Dictionary.  https://dictionary.cambridge.org/us/dictionary/english/plagiarism

[13] International Committee of Medical Journal Editors. Defining the Role of Authors and Contributors http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

[14] Resnik DB, Elliott KC. “Taking Financial Relationships into Account When Assessing Research.” Accountability in Research. 2013;20(3):184-205. doi: 10.1080/08989621.2013.788383.

[15] Bülow W, Helgesson G. Criminalization of scientific misconduct. Med Health Care and Philos. 2019;22:245–252.

[16] Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes. Official Journal of the European Union. https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2010:276:0033:0079:en:PDF

[17] Stern AM, Casadevall A, Steen RG, Fang FC. “Financial Costs and Personal Consequences of Research Misconduct Resulting in Retracted Publications,” eLife . 2014;3:e02956. doi: 10.7554/eLife.02956.

[18] Brainard, Jeffrey and Jia You, “What a massive database of retracted papers reveals about science publishing's ‘death penalty': Better editorial oversight, not more flawed papers, might explain flood of retractions,” Science, Oct 25, 2018  https://www.science.org/content/article/what-massive-database-retracted-papers-reveals-about-science-publishing-s-death-penalty

[19] Brainard and You.

[20] Doherty M, Van De Putte Lbacope. Guidelines on Good Publication Practice; Annals of the Rheumatic Diseases 2000;59:403-404.

[21] Dixon JR Jr. The International Conference on Harmonization Good Clinical Practice Guideline. Qual Assur. 1998 Apr-Jun;6(2):65-74. doi: 10.1080/105294199277860. PMID: 10386329.

[22] “Research Integrity Terms of Reference.” Science and Technology Committee, 14 Sept. 2017, committees.parliament.uk/committee/135/science-and-technology-committee/news/100920/research-integrity-terms-of-reference/.

Frideriki Poutoglidou

Department of Clinical Pharmacology, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki


This work is licensed under a Creative Commons Attribution 4.0 International License .


Research integrity and academic medicine: the pressure to publish and research misconduct

This narrative review article explores research integrity and the implications of scholarly work in medical education. The paper describes how the current landscape of medical education emphasizes research and scholarly activity for medical students, resident physicians, and faculty physician educators. There is a gap in the existing literature that fully explores research integrity, the challenges surrounding the significant pressure to perform scholarly activity, and the potential for ethical lapses by those involved in medical education.

The objectives of this review article are to provide a background on authorship and publication safeguards, outline common types of research misconduct, describe the implications of publication in medical education, discuss the consequences of ethical breaches, and outline possible solutions to promote research integrity in academic medicine.

To complete this narrative review, the authors explored the current literature utilizing multiple databases beginning in June of 2021, and they completed the literature review in January of 2023. To capture the wide scope of the review, numerous searches were performed. A number of Medical Subject Headings (MeSH) terms were utilized to identify relevant articles. The MeSH terms included “scientific misconduct,” “research misconduct,” “authorship,” “plagiarism,” “biomedical research/ethics,” “faculty, medical,” “fellowships and scholarships,” and “internship and residency.” Additional references were accessed to include medical school and residency accreditation standards, residency match statistics, regulatory guidelines, and standard definitions.

Within the realm of academic medicine, research misconduct and misrepresentation continue to occur without clear solutions. There is a wide range of severity in breaches of research integrity, ranging from minor infractions to fraud. Throughout the medical education system in the United States, there is pressure to publish research and scholarly work. Higher rates of publications are associated with a successful residency match for students and academic promotion for faculty physicians. For those who participate in research misconduct, there is a multitude of potential adverse consequences. Potential solutions to ensure research integrity exist but are not without barriers to implementation.

Conclusions

Pressure in the world of academic medicine to publish contributes to the potential for research misconduct and authorship misrepresentation. Lapses in research integrity can result in a wide range of potentially adverse consequences for the offender, their institution, the scientific community, and the public. If adopted, universal research integrity policies and procedures could make major strides in eliminating research misconduct in the realm of academic medicine.

The landscape of academic medicine in the United States places a strong emphasis on scholarly work and publications in peer-reviewed journals. Publications are often required for career advancement, to procure grant funding, and to maintain accreditation with regulatory bodies [ 1 ], [ 2 ], [ 3 ], [ 4 ]. The pressure to publish can be felt at all levels of academic medicine, from medical students to resident physicians to faculty physicians [ 5 ], [ 6 ], [ 7 ], [ 8 ]. The longstanding culture of the scientific world, including medical education, has been described as a “publish or perish” [ 3 ] mentality, which highlights the highly intertwined relationship between prolific publication and career advancement in academic medicine [ 1 , 2 , 4 , 9 ]. Unfortunately, the pressure to publish can lead some participants to engage in research misconduct to bolster productivity [ 2 , 3 ]. In this review, we explore existing guidelines for authorship, common types of research misconduct, the implications of scholarly work for medical learners and faculty, consequences of research misconduct, and finally, possible solutions to promote integrity in academic medicine.

The authors (MK, MD, EAG) identified a gap in the existing literature regarding the holistic examination of research integrity and its implications across the spectrum of medical education, from students to resident physicians to faculty physicians. To complete a narrative review, the authors explored the current literature utilizing PubMed, Scopus, and Google Scholar beginning in June of 2021 and completed the literature review in January of 2023. References included publications between 2001 and 2022 to ensure updated, quality information that is applicable to the academic medicine population today. To capture the wide scope of the review, numerous searches were performed. A number of Medical Subject Headings (MeSH) terms were utilized to identify relevant articles. MeSH terms included: Authorship; Misconduct; Research Integrity; Plagiarism; Medical Education; Residency Applications; Faculty, Medical; Fellowships and Scholarships; and Internship and Residency. Papers were included if they were published within the set time frame, contained one or more of the previously mentioned MeSH terms, and were full-length articles. Additional references were accessed to include medical school and residency accreditation standards, residency match statistics, regulatory guidelines, and standard definitions. Articles were excluded if they were published outside of the specified time frame, if the majority of the paper did not relate to the attached MeSH term, or if the full-length article was inaccessible to the authors. Ultimately, the articles to be included were discussed between two of the authors (MK and MD), with the final decision made by the primary author (MK). The faculty author (EAG) of this review has extensive experience as a physician educator, residency program director, and advisor to medical students applying to residency. Table 1 outlines the sources utilized for this narrative review.

Table 1: The sources utilized for this narrative review.

The International Committee of Medical Journal Editors (ICMJE) defines four criteria, all of which must be met, for authorship:

  • Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work
  • Drafting the work or revising it critically for important intellectual content
  • Final approval of the version to be published
  • Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved [ 10 ].

The fourth criterion was added in 2013 to put an emphasis on accountability for all authors listed on the paper [ 7 ]. Adhering to these authorship criteria may prevent misrepresentations and hyper-authorship. Many well-known peer-reviewed journals follow the ICMJE suggestions; however, individual journals can set their own criteria for what constitutes an author [ 11 ].

There are multiple checkpoints that a project goes through to preserve scientific integrity and prevent research misconduct. Fostering a culture of academic honesty begins at the local institutional level and is reinforced by strict oversight from federal regulatory bodies, such as the US Office of Research Integrity (ORI) [ 12 , 13 ]. The three major safeguards in place to maintain scientific integrity are peer review for funding, the referee system of peer review for publication, and replication of results [ 13 ]. When these checkpoints fail, the possibility for misconduct to occur increases.

Research misconduct

Research misconduct is defined by the US ORI as “the fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting results” [ 12 , 14 , 15 ]. There are numerous types of infractions in integrity with a range of seriousness. Plagiarism may present as superficial paraphrasing or the use of exact words without proper citation, or it may come in the form of utilizing others’ ideas and misrepresenting them as one’s own [ 2 , 16 ]. During the process of data collection, research misconduct may take place in the form of data “trimming,” which is the removal of irregular results to strengthen desired results, or data “cooking,” which involves the deliberate manipulation of data to produce the desired results [ 2 , 12 ]. The most egregious form of misconduct is the overt fabrication or falsification of data or results. Additional definitions related to misconduct are listed in Table 2 .

Table 2: Definitions of common terms seen in relation to authorship misrepresentation and research misconduct.

ICMJE, International Committee of Medical Journal Editors.

Although not strictly covered by the ORI’s definition of research misconduct, there are a number of improper research practices that should be noted. One of the most common improper practices is assigning “honorary authorship,” which involves listing undeserving authors on a publication [ 2 , 12 , 17 ]. The motivations for this practice can be vast and variable. For example, honorary authorship may be offered by a junior researcher as reciprocity to a senior researcher who holds rank, influence, or funding in the department. Conversely, a senior, influential researcher may coerce a junior researcher to list them as an author without having met the criteria of authorship as defined by the ICMJE [ 2 , 10 , 17 ]. Alternatively, a junior researcher may list a highly influential honorary author as a tool to increase the likelihood of obtaining publication or funding from an entity that would otherwise be unattainable [ 2 , 17 ]. With regard to misconduct during the peer review process, an influential author could generate bias, either positive or negative, from the reviewer [ 2 ].

Scholarly activity in medical education

For medical students, the motivation or pressure to engage in scholarly work may be multifactorial. To maintain accreditation, both allopathic and osteopathic medical schools in the United States must demonstrate support for faculty- and student-driven research. They must produce evidence of student participation in research and scholarly activity [ 18 , 19 ]. Students may feel institutional pressure to participate in research as the medical school aims to maintain accreditation.

In addition to completing their academic work and clinical training, medical students must actively prepare to apply to postgraduate training programs, commonly known as “residency programs.” Securing and successfully completing residency training is imperative to the unrestricted practice of medicine in the United States. Students apply for residency during their final year of medical school and must undergo a rigorous application and interview process to match into a program. Residency programs rank applicants based on numerous characteristics such as academic performance, including United States Medical Licensing Examination (USMLE) and Comprehensive Osteopathic Medical Licensing Examination (COMLEX) scores.

Furthermore, residency programs evaluate students on participation in research and scholarly activity, service, leadership, and interpersonal skills. Prior to 2022, USMLE Step 1 and COMLEX Level 1 were graded numerically; both transitioned to pass/fail scoring in 2022, a change with potentially far-reaching implications for medical students and residency programs. It is speculated that the loss of numeric board scores will prompt programs to weigh other elements of the application more heavily, including research and scholarly activity. Indeed, results from the 2022 residency match reveal that more research experience and more publications are associated with higher match rates in all specialties for allopathic students and in most specialties for osteopathic students [20, 21]. For students pursuing surgical specialties, such as general surgery [22], orthopedic surgery [23], or neurosurgery [6], research productivity plays a substantial role in matching into residency.

Facing pressures similar to those of medical students, resident physicians and fellows frequently find themselves entrenched in a “publish or perish” culture [3]. In the United States, residency training programs are overseen by the Accreditation Council for Graduate Medical Education (ACGME), which requires participation in scholarly activity both for programs to maintain accreditation and for individual residents to remain in good standing [24]. Research and other scholarly activity also make up an important part of fellowship applications for resident physicians. Unfortunately, evidence suggests that a substantial minority of students and resident physicians misrepresent their scholarly work [8, 25, 26, 27].

In addition to misconduct during the research and publication process, there is potential for misconduct and misrepresentation in reporting one’s work. Job applications, residency or fellowship applications, and curricula vitae (CVs) not uncommonly include misrepresentations of varying severity, ranging from egregious actions, such as listing fake publications, to lesser infractions, such as embellishing the status of listed work [8, 25, 26, 27]. Yeh et al. [25] found that one in eight (12 %) student candidates interviewing for general surgery residency at their institution had serious discrepancies between reported and actual publications. Oke and Mantagos [28] found that 4.7 % of applicants to a pediatric ophthalmology fellowship listed unverifiable publications on their applications. Cortez et al. [8] examined the accuracy of publications listed by applicants to an orthopedic sports medicine fellowship program: within their cohort, 68 % of the articles were reported as “completed,” of which 5.7 % were found to be inaccurate, and 31.6 % were reported as “submitted,” of which 28.3 % remained unpublished 2 to 6 years later [8]. In order from most to least common, the types of misrepresentation found by Wiggins [26] were listing nonexistent articles, misstating authorship order, and nonauthorship. That analysis also revealed no consensus as to what constitutes misrepresentation [26]. None of these studies reports whether the applicants faced consequences, but together they show that misrepresentation has been a problem for many years.

Scholarly activity by faculty physicians

Physicians who pursue a career in academic medicine, commonly referred to as “faculty physicians,” encounter numerous motivations to produce scholarly work and publications. As with medical students and resident physicians, faculty physicians face regulatory mandates from accrediting bodies to engage in research and scholarly activity [18, 19, 24]. Institutional pressure also stems from the relationship between publications and reputation within the academic community: higher publication rates and publication in higher-impact journals are associated with more prestigious institutional reputations [2, 3].

Furthermore, publication output and citation counts are primary factors in decisions about promotion, job retention and mobility, and tenure [2, 3, 4, 29, 30, 31]. Within academic medicine, the h-index is used as an objective measure of the impact of an individual’s work; it is calculated from both the number of publications and the number of citations each publication has received from other authors [29]. Higher publication rates and higher h-index values are associated with higher academic rank, NIH funding procurement, and career advancement for faculty physicians [2, 3, 4, 29, 30, 31]. High-impact publication rates are thus difficult to disentangle from career advancement, and advancement opportunities frequently carry additional financial incentives, further encouraging prolific publication.
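To make the metric concrete, the standard h-index calculation can be sketched in a few lines of Python (a minimal illustration; the function name and sample citation counts are hypothetical, not drawn from any study cited here):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h publications, each cited at least h times."""
    h = 0
    # Rank papers from most to least cited; the h-index is the last
    # rank at which the citation count still meets or exceeds the rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times.
# Four papers each have at least 4 citations, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))
```

Note that the metric rewards a sustained body of well-cited work: a single highly cited paper yields an h-index of only 1, which is part of why it is treated as a career-long measure of impact.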

With numerous incentives to publish, academic physicians may attempt to fast-track the path to success through research misconduct or misrepresentation. DuBois et al. [32] investigated the motivations driving researchers to commit research misconduct: 33 % of cases involved pressure to publish, and 48 % could be attributed to a self-centered personality, including confidence in not being caught, arrogance, prestige-seeking, and greed [32]. More serious cases of fabrication, falsification, or plagiarism were more often associated with seniority, financial incentives, and oversight failures [32]. Although data for US physicians are lacking, a survey of physicians conducted in England in 2000 found that 59.8 % of respondents felt pressured to publish to advance their careers [1, 33]. It also revealed that 5.7 % of respondents had participated in honorary authorship and that 4.1 % would be willing to falsify data to improve a grant application [1, 33].

Consequences of research misconduct

There is no universal process for addressing, or standard consequence for, research misconduct or misrepresentation in authorship. As a result, research-related infractions can draw a range of sanctions, including article retraction, action by academic institutions or administrations, civil penalties, and criminal penalties [34]. Authorship disputes, including allegations of ghost authorship, are not viewed by the ORI as plagiarism, leaving journals and administrations to handle claims of misrepresentation as they see fit [34]. Journals have not yet adopted a universally accepted protocol for investigating and addressing research misrepresentation [11]. Many well-known journals, including Cell, Nature, and The Lancet, follow the guidelines proposed by the Committee on Publication Ethics (COPE) [35, 36, 37], which encourage retraction when necessary, along with publication of a retraction notice outlining the reason [38]. The lack of government oversight and the variability in journal responses may contribute to the high and rising prevalence of authorship misrepresentation.

The ORI maintains lists of individuals who have committed research misconduct. Administrative actions imposed by the ORI depend on the seriousness and impact of the misconduct, along with the individual’s history of behavior [39]. Possible sanctions are listed below:

… debarment from eligibility to receive Federal funds for grants and contracts; prohibition from service on PHS [Public Health Service] advisory committees, peer review committees, or as consultants; certification of information sources by respondent that is forwarded by institution; certification of data by institution; imposition of supervision on the respondent by the institution; submission of a correction of a published article by respondent; submission of a retraction of a published articles by respondent [ 39 ].

In the 1980s, a cardiology fellow was discovered to have fabricated data over a period of approximately 14 years. The author received a 10-year suspension from participating in federally funded research, and a total of 82 papers from two separate institutions were retracted [13]. This case exemplifies a failure of both the peer-review system and oversight from the institutions where the research was conducted, and it underscores the importance of data reproducibility. A 2011 study reviewed articles retracted from PubMed between 2000 and 2010 [40]. Of 742 articles, 73.5 % were retracted due to scientific error and 26.6 % due to fraud, including falsification and fabrication of data [40]. Concerningly, 31.8 % of the retracted articles carried no indication that they had been retracted [40]. Both examples reveal a systemic gap that allows articles containing falsified or fraudulent material to remain available for citation by unsuspecting researchers; relying on retracted articles diminishes the value of the citing work and the credibility of its author.

Beyond the professional and legal consequences of research misconduct, ethical breaches in research can cause public harm and erode confidence in the scientific community ( Table 3 ). Perhaps the most socially impactful case of research misconduct is that of Andrew Wakefield, who published a paper in The Lancet in 1998 claiming that the measles, mumps, and rubella (MMR) vaccine caused autism in children [41]. Wakefield faced article retraction, removal from the United Kingdom medical register, and worldwide media attention exposing him as a fraud. Although Wakefield was proven to have falsified data and many studies have affirmed that there is no link between the MMR vaccine and autism, the social damage had already been done: his fabricated work frightened the public, MMR vaccination rates decreased, and the decline resulted in a resurgence of measles, a disease that had previously been eliminated by the vaccine [41]. Not only did this lead to the deaths of children in Britain and the United States, but it also spurred an antivaccination movement that puts more children at risk for many preventable diseases [41]. The implications of research misconduct are far-reaching for individuals, institutions, the scientific community, and the greater public.

Table 3: Laws related to research misconduct.

CFR, Code of Federal Regulations; USC, United States Code.

Potential solutions

Scientific research and publications serve as a conduit for furthering knowledge and understanding of biology and medicine. Publications are a tool to communicate research and motivate one’s peers [3]. Published works can initiate discussions regarding the interpretation of results, the application of findings, and the generation of ideas for future work. The development and execution of research studies require significant dedication and effort.

Within the realm of academic medicine, research misconduct and misrepresentation continue to occur without clear solutions. In the arena of clinical research, Bando et al. [4] propose a multifaceted approach to ensure research integrity; although clinical research represents only one facet of academic medicine, their approach could be extrapolated to almost all areas of academic medical research and scholarly work. Their first recommendation is the creation of a network of independent ORIs working in tandem with local Institutional Review Boards (IRBs): whereas IRBs focus on ethics and the protection of human subjects, local ORIs would ensure the integrity of the scientific process [4]. Their second proposal is the creation and maintenance of a local database collecting raw data from clinical trials, allowing data review by a local ORI or journal editor, with data preserved and accessible to authorized parties for a minimum of 10 years [4]. The third element calls for medical schools to incorporate the ORI’s responsible conduct of research (RCR) program into their curricula, ensuring universal knowledge of ethical research practices among US-trained physicians [4]. The fourth and final proposal is for clinical researchers to implement a “Checklist for Clinical Research,” with checkpoints at each step of a project, from inception to publication, to ensure integrity throughout the process [4]. It also calls for transparency and collaboration among all members of the research team.

Although this comprehensive approach, if adopted, offers much-needed solutions to the problem of research misconduct in academic medicine, there are barriers to implementation. The most likely barrier for academic institutions is funding: the funds required to establish an independent ORI and a central database would be cost prohibitive for most nonprofit organizations, including universities, osteopathic medical schools, and hospitals. Other interventions, such as a research integrity curriculum and a research integrity checklist system, are much more feasible. Additional barriers include regulatory fatigue, department cultures that resist buy-in, and time constraints for those tasked with implementation. Currently, osteopathic medical schools are required to include research methodology and research ethics in the curriculum but have no obligation to implement comprehensive research integrity systems [19]. Widespread, universal implementation of the changes proposed by Bando et al. in osteopathic education would likely require a mandate from the Commission on Osteopathic College Accreditation (COCA), the accrediting body for osteopathic medical schools.

Limitations

This narrative review was limited by restricted access to full-length articles. Additionally, only articles published after 2001 were included, limiting the historical context of this information. Narrative reviews are inherently limited by the completeness of the literature search, and the authors’ interpretation of the material may introduce bias; the authors attempted to mitigate this by including direct quotations from source material and providing relevant information as completely as possible.

The pressure to publish in academic medicine is shared by the greater scientific community. Beginning early in medical school, young physicians learn that engaging in research and publishing manuscripts opens the door to opportunity, and for physicians pursuing a career in academia, the pressure to publish continues. Research misconduct and authorship misrepresentation, where they occur, can result in a wide range of adverse consequences for the offender, their institution, the scientific community, and the general public. Universal research integrity policies and procedures (such as those suggested by Bando et al.), a universal process for investigating and addressing authorship misconduct, and adherence to the authorship criteria set forth by the ICMJE could make major strides toward eliminating research misconduct in the realm of academic medicine.

Research ethics: Not applicable.

Informed consent: Not applicable.

Author contributions: All authors provided substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; all authors drafted the article or revised it critically for important intellectual content; all authors gave final approval of the version of the article to be published; and all authors agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Competing interests: None declared.

Research funding: None declared.

Data availability: All data are available upon request to the corresponding author.

1. Rahman, H, Ankier, S. Dishonesty and research misconduct within the medical profession. BMC Med Ethics 2020;21:22. https://doi.org/10.1186/s12910-020-0461-z.

2. Mousavi, T, Abdollahi, M. A review of the current concerns about misconduct in medical sciences publications and the consequences. Daru 2020;28:359–69. https://doi.org/10.1007/s40199-020-00332-1.

3. Guraya, SY, Norman, RI, Khoshhal, KI, Guraya, SS, Forgione, A. Publish or perish mantra in the medical field: a systematic review of the reasons, consequences and remedies. Pak J Med Sci 2016;32:1562–7. https://doi.org/10.12669/pjms.326.10490.

4. Bando, K, Schaff, HV, Sato, T, Hashimoto, K, Cameron, DE. A multidisciplinary approach to ensure scientific integrity in clinical research. Ann Thorac Surg 2015;100:1534–6. https://doi.org/10.1016/j.athoracsur.2015.06.097.

5. Salehi, PP, Azizzadeh, B, Lee, YH. Pass/fail scoring of USMLE Step 1 and the need for residency selection reform. Otolaryngol Head Neck Surg 2021;164:9–10. https://doi.org/10.1177/0194599820951166.

6. Yaeger, KA, Schupper, AJ, Gilligan, JT, Germano, IM. Making a match: trends in the application, interview, and ranking process for the neurological surgery residency programs. J Neurosurg 2021;135:1882–8. https://doi.org/10.3171/2020.11.JNS203637.

7. Alfonso, F, Zelveian, P, Monsuez, JJ, Aschermann, M, Böhm, M, Hernandez, AB, et al. Authorship: from credit to accountability. Reflections from the Editors’ network. Basic Res Cardiol 2019;114:23. https://doi.org/10.1007/s00395-019-0729-y.

8. Cortez, XC, Freshman, RD, Feeley, BT, Ma, CB, Lansdown, DA, Zhang, AL. An evaluation of self-reported publications in orthopaedic sports medicine fellowship applications. Orthop J Sports Med 2020;8:2325967120920782. https://doi.org/10.1177/2325967120920782.

9. Niles, MT, Schimanski, LA, McKiernan, EC, Alperin, JP. Why we publish where we do: faculty publishing values and their relationship to review, promotion and tenure expectations. PLoS One 2020;15:e0228914. https://doi.org/10.1371/journal.pone.0228914.

10. Defining the role of authors and contributors. http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html [Accessed 14 Jul 2021].

11. Bordewijk, EM, Li, W, van Eekelen, R, Wang, R, Showell, M, Mol, BW, et al. Methods to assess research misconduct in health-related research: a scoping review. J Clin Epidemiol 2021;136:189–202. https://doi.org/10.1016/j.jclinepi.2021.05.012.

12. Definition of research misconduct. Office of Research Integrity, US Dept. of Health and Human Services. hhs.gov [Accessed 16 Jan 2023].

13. Robishaw, JD, DeMets, DL, Wood, SK, Boiselle, PM, Hennekens, CH. Establishing and maintaining research integrity at academic institutions: challenges and opportunities. Am J Med 2020;133:e87–90. https://doi.org/10.1016/j.amjmed.2019.08.036.

14. 42 CFR § 93.103 – Research misconduct. 2020. Office of the Federal Register, National Archives and Records Administration.

15. Resnik, DB, Neal, T, Raymond, A, Kissling, GE. Research misconduct definitions adopted by U.S. research institutions. Account Res 2015;22:14–21. https://doi.org/10.1080/08989621.2014.891943.

16. Roig, M. Avoiding plagiarism, self-plagiarism, and other questionable writing practices: a guide to ethical writing; 2015. Available from: https://ori.hhs.gov/sites/default/files/plagiarism.pdf.

17. Justin, GA, Pelton, RW, Woreta, FA, Legault, GL. Authorship ethics: a practical approach. Am J Ophthalmol 2021;224:A3–5. https://doi.org/10.1016/j.ajo.2020.09.022.

18. Functions and structure of a medical school: standards for accreditation of medical education programs leading to the MD degree. Liaison Committee on Medical Education. www.lcme.org [Accessed 13 Dec 2022].

19. 2019 COM continuing accreditation standards. Commission on Osteopathic College Accreditation. https://osteopathic.org/wp-content/uploads/2018/02/com-continuing-accreditation-standards.pdf [Accessed 13 Dec 2022].

20. Charting outcomes in the match: senior students of U.S. MD medical schools. The National Resident Matching Program. Available from: https://www.nrmp.org/wp-content/uploads/2022/07/Charting-Outcomes-MD-Seniors-2022_Final.pdf.

21. Charting outcomes in the match: senior students of U.S. DO medical schools. The National Resident Matching Program. Available from: https://www.nrmp.org/wp-content/uploads/2022/07/Charting_Outcomes_DO_Seniors_2022_Final-Updated.pdf.

22. Iwai, Y, Lenze, NR, Becnel, CM, Mihalic, AP, Stitzenberg, KB. Evaluation of predictors for successful residency match in general surgery. J Surg Educ 2022;79:579–86. https://doi.org/10.1016/j.jsurg.2021.11.003.

23. Smolev, ET, Coxe, FR, Iyer, S, Kelly, AM, Nguyen, JT, Fufa, DT. Orthopaedic surgery residency match after an early-exposure research program for medical students. J Am Acad Orthop Surg Glob Res Rev 2021;5:e21.00113. https://doi.org/10.5435/JAAOSGlobal-D-21-00113.

24. ACGME common program requirements. Accreditation Council for Graduate Medical Education. Available from: https://www.acgme.org/globalassets/pfassets/programrequirements/cprresidency_2022v3.pdf.

25. Yeh, DD, Reynolds, JM, Pust, GD, Sleeman, D, Meizoso, JP, Menzel, C, et al. Publication inaccuracies listed in general surgery residency training program applications. J Am Coll Surg 2021;233:545–53. https://doi.org/10.1016/j.jamcollsurg.2021.07.002.

26. Wiggins, MN. A meta-analysis of studies of publication misrepresentation by applicants to residency and fellowship programs. Acad Med 2010;85:1470–4. https://doi.org/10.1097/ACM.0b013e3181e2cf2b.

27. Yannuzzi, NA, Smith, L, Yadegari, D, Venincasa, MJ, Al-Khersan, H, Patel, NA, et al. Analysis of the vitreoretinal surgical fellowship applicant pool: publication misrepresentations and predictors of future academic output. Retina 2020;40:2026–33. https://doi.org/10.1097/IAE.0000000000002698.

28. Oke, I, Mantagos, IS. Rates of unverifiable and incomplete publications in pediatric ophthalmology fellowship applications. J AAPOS 2021;25:295–7. https://doi.org/10.1016/j.jaapos.2021.04.013.

29. Zhu, E, Shemesh, S, Iatridis, J, Moucha, C. The association between scholarly impact and National Institutes of Health funding in orthopaedic surgery. Bull Hosp Jt Dis 2017;75:257–63.

30. Zaorsky, NG, O’Brien, E, Mardini, J, Lehrer, EJ, Holliday, E, Weisman, CS. Publication productivity and academic rank in medicine: a systematic review and meta-analysis. Acad Med 2020;95:1274–82. https://doi.org/10.1097/ACM.0000000000003185.

31. Lam, A, Heslin, MJ, Tzeng, CD, Chen, H. The effects of tenure and promotion on surgeon productivity. J Surg Res 2018;227:67–71. https://doi.org/10.1016/j.jss.2018.02.020.

32. DuBois, JM, Anderson, EE, Chibnall, J, Carroll, K, Gibb, T, Ogbuka, C, et al. Understanding research misconduct: a comparative analysis of 120 cases of professional wrongdoing. Account Res 2013;20:320–38. https://doi.org/10.1080/08989621.2013.822248.

33. Geggie, D. A survey of newly appointed consultants’ attitudes towards research fraud. J Med Ethics 2001;27:344–6. https://doi.org/10.1136/jme.27.5.344.

34. Horner, J, Minifie, FD. Research ethics III: publication practices and authorship, conflicts of interest, and research misconduct. J Speech Lang Hear Res 2011;54:S346–62. https://doi.org/10.1044/1092-4388(2010/09-0263).

35. Editorial ethics; 2021. https://www.cell.com/editorialethics [Accessed 18 Jul 2021].

36. Corrections, retractions and matters arising | Nature. https://www.nature.com/nature/editorial-policies/correction-and-retraction-policy [Accessed 19 Jul 2021].

37. About the Lancet medical journal. https://www.thelancet.com/lancet/about [Accessed 19 Jul 2021].

38. Retraction guidelines. https://publicationethics.org/retraction-guidelines [Accessed 19 Jul 2021].

39. Administrative actions. https://ori.hhs.gov/administrative-actions [Accessed 17 Jul 2021].

40. Steen, RG. Retractions in the scientific literature: is the incidence of research fraud increasing? J Med Ethics 2011;37:249–53. https://doi.org/10.1136/jme.2010.040923.

41. Godlee, F, Smith, J, Marcovitch, H. Wakefield’s article linking MMR vaccine and autism was fraudulent. BMJ 2011;342:c7452. https://doi.org/10.1136/bmj.c7452.

42. § 93.104 – Requirements for findings of research misconduct. 2020. Office of the Federal Register, National Archives and Records Administration.

43. § 287 – False, fictitious or fraudulent claims. 2020. U.S. Government Publishing Office.

44. 31 USC § 3729 – False claims. 2020. U.S. Government Publishing Office.

45. Tavenner, M. Medicare, Medicaid, Children’s Health Insurance Programs transparency reports and reporting of physician ownership or investment interests. Department of Health and Human Services; 2013.

46. Fong, EA, Wilhite, AW, Hickman, C, Lee, Y. The legal consequences of research misconduct: false investigators and grant proposals. J Law Med Ethics 2020;48:331–9. https://doi.org/10.1177/1073110520935347.

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.


Journal of Osteopathic Medicine

Trump on trial: Scrutiny is on jury selection as Trump returns to court for the second day of his hush money trial. Follow live coverage.

Prestigious cancer research institute has retracted 7 studies amid controversy over errors

Dana-Farber Cancer Institute

Seven studies from researchers at the prestigious Dana-Farber Cancer Institute have been retracted over the last two months after a scientist blogger alleged that images used in them had been manipulated or duplicated.

The retractions are the latest development in a monthslong controversy around research at the Boston-based institute, which is a teaching affiliate of Harvard Medical School. 

The issue came to light after Sholto David, a microbiologist and volunteer science sleuth based in Wales, published a scathing post on his blog in January, alleging errors and manipulations of images across dozens of papers produced primarily by Dana-Farber researchers . The institute acknowledged errors and subsequently announced that it had requested six studies to be retracted and asked for corrections in 31 more papers. Dana-Farber also said, however, that a review process for errors had been underway before David’s post. 

Now, at least one more study has been retracted than Dana-Farber initially indicated, and David said he has discovered an additional 30 studies from authors affiliated with the institute that he believes contain errors or image manipulations and therefore deserve scrutiny.

The episode has imperiled the reputation of a major cancer research institute and raised questions about one high-profile researcher there, Kenneth Anderson, who is a senior author on six of the seven retracted studies. 

Anderson is a professor of medicine at Harvard Medical School and the director of the Jerome Lipper Multiple Myeloma Center at Dana-Farber. He did not respond to multiple emails or voicemails requesting comment. 

The retractions and new allegations add to a larger, ongoing debate in science about how to protect scientific integrity and reduce the incentives that could lead to misconduct or unintentional mistakes in research. 

The Dana-Farber Cancer Institute has moved relatively swiftly to seek retractions and corrections. 

“Dana-Farber is deeply committed to a culture of accountability and integrity, and as an academic research and clinical care organization we also prioritize transparency,” Dr. Barrett Rollins, the institute’s integrity research officer, said in a statement. “However, we are bound by federal regulations that apply to all academic medical centers funded by the National Institutes of Health among other federal agencies. Therefore, we cannot share details of internal review processes and will not comment on personnel issues.”

The retracted studies were originally published in two journals: One in the Journal of Immunology and six in Cancer Research. Six of the seven focused on multiple myeloma, a form of cancer that develops in plasma cells. Retraction notices indicate that Anderson agreed to the retractions of the papers he authored.

Elisabeth Bik, a microbiologist and longtime image sleuth, reviewed several of the papers’ retraction statements and scientific images for NBC News and said the errors were serious. 

“The ones I’m looking at all have duplicated elements in the photos, where the photo itself has been manipulated,” she said, adding that these elements were “signs of misconduct.” 

Dr.  John Chute, who directs the division of hematology and cellular therapy at Cedars-Sinai Medical Center and has contributed to studies about multiple myeloma, said the papers were produced by pioneers in the field, including Anderson. 

“These are people I admire and respect,” he said. “Those were all high-impact papers, meaning they’re highly read and highly cited. By definition, they have had a broad impact on the field.” 

Chute said he did not know the authors personally but had followed their work for a long time.

“Those investigators are some of the leading people in the field of myeloma research and they have paved the way in terms of understanding our biology of the disease,” he said. “The papers they publish lead to all kinds of additional work in that direction. People follow those leads and industry pays attention to that stuff and drug development follows.”

The retractions offer additional evidence for what some science sleuths have been saying for years: The more you look for errors or image manipulation, the more you might find, even at the top levels of science. 

Scientific images in papers are typically used to present evidence of an experiment’s results. Commonly, they show cells or mice; other types of images show key findings like western blots — a laboratory method that identifies proteins — or bands of separated DNA molecules in gels. 

Science sleuths sometimes examine these images for irregular patterns that could indicate errors, duplications or manipulations. Some artificial intelligence companies are training computers to spot these kinds of problems, as well. 

Duplicated images could be a sign of sloppy lab work or data practices. Manipulated images — in which a researcher has modified an image heavily with photo editing tools — could indicate that images have been exaggerated, enhanced or altered in an unethical way that could change how other scientists interpret a study’s findings or scientific meaning. 

Top scientists at big research institutions often run sprawling laboratories with lots of junior scientists. Critics of science research and publishing systems allege that a lack of opportunities for young scientists, limited oversight and pressure to publish splashy papers that can advance careers could incentivize misconduct. 

These critics, along with many science sleuths, allege that errors or sloppiness are too common, that research organizations and authors often ignore concerns when they’re identified, and that the path from complaint to correction is sluggish.

“When you look at the amount of retractions and poor peer review in research today, the question is, what has happened to the quality standards we used to think existed in research?” said Nick Steneck, an emeritus professor at the University of Michigan and an expert on science integrity.

David told NBC News that he had shared some, but not all, of his concerns about additional image issues with Dana-Farber. He added that he had not identified any problems in four of the seven studies that have been retracted. 

“It’s good they’ve picked up stuff that wasn’t in the list,” he said. 

NBC News requested an updated tally of retractions and corrections, but Ellen Berlin, a spokeswoman for Dana-Farber, declined to provide a new list. She said that the numbers could shift and that the institute did not have control over the form, format or timing of corrections. 

“Any tally we give you today might be different tomorrow and will likely be different a week from now or a month from now,” Berlin said. “The point of sharing numbers with the public weeks ago was to make clear to the public that Dana-Farber had taken swift and decisive action with regard to the articles for which a Dana-Farber faculty member was primary author.” 

She added that Dana-Farber was encouraging journals to correct the scientific record as promptly as possible. 

Bik said it was unusual to see a highly regarded U.S. institution have multiple papers retracted. 

“I don’t think I’ve seen many of those,” she said. “In this case, there was a lot of public attention to it and it seems like they’re responding very quickly. It’s unusual, but how it should be.”

Evan Bush is a science reporter for NBC News. He can be reached at [email protected].

  • Open access
  • Published: 14 March 2024

Knowledge, attitudes and practices about research misconduct among medical residents in southwest China: a cross-sectional study

  • Lulin Chen,
  • Yizhao Li,
  • Jie Wang,
  • Xiaoli Tan &
  • Xiaoyan Guo

BMC Medical Education, volume 24, Article number: 284 (2024)


Background

With the emergence of numerous scientific outputs, growing attention is being paid to research misconduct. This study aimed to investigate knowledge, attitudes and practices about research misconduct among medical residents in southwest China.

Methods

A cross-sectional study was conducted in southwest China from November 2022 through March 2023. The links to the questionnaire were sent to the directors of the teaching management departments in 17 tertiary hospitals. Answers were collected and analyzed. Logistic regression analysis was performed to explore the factors associated with research misconduct among residents.

Results

A total of 6200 residents were enrolled in the study. Although 88.5% of participants had attended a course on research integrity, 53.7% admitted to having committed at least one form of research misconduct. Having a postgraduate degree or above, publishing papers as the first author or corresponding author, attending a course on research integrity, lower self-reported knowledge of research integrity and lower perceived consequences for research misconduct were positively correlated with research misconduct. Serving as a primary investigator for a research project was negatively associated with research misconduct. Most residents (66.3%) agreed that the reason for research misconduct is that researchers lack research ability.

Conclusions

The high self-reported rate of research misconduct among residents in southwest China underscores a universal necessity for enhancing research integrity courses in residency programs. The ineffectiveness of current training in China suggests a possible global need for reevaluating and improving educational approaches to foster research integrity. Addressing these challenges is imperative not only for the credibility of medical research and patient care in China but also for maintaining the highest ethical standards in medical education worldwide. Policymakers, educators, and healthcare leaders on a global scale should collaborate to establish comprehensive strategies that ensure the responsible conduct of research, ultimately safeguarding the integrity of medical advancements and promoting trust in scientific endeavors across borders.

Introduction

With the emergence of numerous scientific outputs, growing attention is being paid to research misconduct throughout the world [ 1 ]. Chinese scientific output has increased dramatically in recent years, accounting for 23.4% of all scientific papers and 27.2% of the top 1% most frequently cited papers between 2018 and 2020, overtaking the US [ 2 ]. This overwhelming volume of output has also brought international attention to research misconduct in China. According to the Retraction Watch Database, 5561 articles from China were retracted in 2023, accounting for 78.5% of retracted articles worldwide [ 3 ]. A report in Nature indicates that China’s first nationwide review of retractions and scientific misconduct, covering the period since 2021, has identified more than 17,000 retractions with Chinese co-authors [ 4 ]. An investigation of scientific misconduct in Chinese tertiary hospitals suggested that approximately 40% of researchers admitted to having committed research misconduct, with inappropriate authorship being the most common form [ 5 ]. A survey of nursing students reported that 44.1% of participants were involved in at least one form of research misconduct [ 6 ]. Research misconduct leads to a variety of detrimental consequences, such as misleading other researchers and hindering scientific innovation and development [ 7 ]. The medical sciences should impose especially strict requirements for research integrity because of their direct bearing on human health [ 8 ].

Residents’ participation in research can encourage academic careers, enhance clinical reasoning, promote evidence-based practice, and ultimately improve patient outcomes [ 9 , 10 ]. In some countries, residency programs make teaching the basic principles of research, alongside other scholarly activities, mandatory [ 11 ]. Earlier studies evaluating residents’ knowledge, attitudes and practices toward research have produced mixed results. Most residents regarded research activity as an important part of their career, but a lack of protected time, limited experience with research skills and heavy clinical workloads were the major barriers to research; insufficient research training, limited access to research methodologies, and peer pressure also had a negative impact [ 1 , 12 ]. These barriers may lead to research misconduct: in one survey, 67.4% of researchers held that a lack of research ability was the reason for research misconduct [ 13 ]. Residents facing these barriers may therefore be prone to committing research misconduct.

Although many studies have been conducted to assess the researchers’ knowledge, attitudes, and practices toward research [ 12 , 14 , 15 ], there is still limited knowledge about the prevalence and associated factors of research misconduct among residents around the world. Therefore, we conducted this cross-sectional study to identify these issues in southwest China, and we also investigated the perceived reasons for research misconduct among residents.

Materials and methods

Study design

A cross-sectional investigation was conducted in southwest China from November 2022 through March 2023. Participants were residents at 17 tertiary hospitals from 8 cities: Nanning, Liuzhou, Yulin, Qinzhou, Wuzhou, Beihai, Guilin, and Baise. Of the 17 hospitals, 9 were provincial and 8 were municipal. We applied convenience sampling in the study. The links to the questionnaire were sent to the directors of the teaching management departments in the above hospitals, who were asked to forward the investigation to their residents. The informed consent form and questionnaire were both completed online. After reading and submitting the informed consent form, residents could choose to fill out the questionnaire, which was then returned anonymously, with no information that could identify individual residents.

Questionnaire design

The questionnaire was designed according to previous studies. It consists of 5 parts: demographic characteristics and research experience, knowledge of research integrity, perceived consequences for research misconduct, residents’ involvement in research misconduct, and perceived reasons for research misconduct. All the questions were closed-ended. The responses to parts 2–3 were recorded on 5-point Likert-type scales.

Knowledge about research integrity

We designed 12 questions according to an official document released by the Ministry of Education of the People’s Republic of China [ 16 ] and a previous study [ 5 ]. Of the 12 questions, 9 were used to evaluate research integrity during the process of writing research proposals or applying for research projects, conducting research, and publishing papers, and the other 3 were about authorship, research ethics and documentation on scientific integrity issued by regulatory authorities, respectively. The questions could be answered on a scale from 1 to 5 (completely don’t know = 1, know a little = 2, know some = 3, know = 4, completely know = 5). The total scores range from 12 to 60, with higher scores indicating a higher level of knowledge about research integrity. The Cronbach’s α of these items was 0.980, and the KMO index was 0.963.
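As a rough illustration of the reported reliability statistic (a minimal sketch, not the authors’ code), Cronbach’s α for a set of Likert items can be computed from the per-item variances and the variance of the summed scale:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (each a list of item scores)."""
    k = len(items[0])                                   # number of items
    cols = list(zip(*items))                            # per-item score columns
    item_vars = sum(variance(c) for c in cols)          # sum of sample item variances
    total_var = variance([sum(row) for row in items])   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: 5 respondents answering 3 perfectly consistent items gives alpha = 1.0
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]))
```

Values close to 1, such as the 0.980 reported for this 12-item scale, indicate highly consistent responses across items.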

Perceived consequences for research misconduct

Residents’ perceived consequences for research misconduct were assessed using a 7-item checklist, with reference to a previous study [ 17 ]. The questions could be answered on a scale from 1 to 5 (no influence = 1, a little influence = 2, moderate influence = 3, strong influence = 4, very strong influence = 5). The total scores range from 7 to 35, with higher scores indicating greater perceived severity of the consequences of research misconduct. The Cronbach’s α of these items was 0.972, and the KMO index was 0.940.

Residents’ involvement in research misconduct

This part was designed according to the definition of research misconduct by the Ministry of Education of the People’s Republic of China, covering 5 common situations [ 16 ]. In addition, multiple submissions and duplicate publications were added. The frequency of each form of research misconduct was recorded at 5 levels: never, 1 time, 2 times, 3 times and ≥ 4 times.

Perceived reasons for research misconduct

We evaluated residents’ perceived reasons for research misconduct using a 7-item checklist based on a previous study [ 17 ]. The checklist covers both internal and external reasons, such as “researchers lack research ability” and “researchers are influenced by the academic environment”. The questions could be answered with “agree”, “neutral” or “disagree”.

Data analysis

SPSS 25.0 (IBM, Chicago, IL, USA) was used to analyze the data. Categorical variables are described using frequencies and percentages, and continuous variables are expressed as the mean (M) and standard deviation (SD). In our study, responses on the 5-point Likert scales were summed to obtain total scores. The mean score was calculated, and respondents who scored at or above the mean were assigned to the high group, while the others were classified as the low group. Logistic regression analysis was performed to explore the factors associated with research misconduct among residents. A P value less than 0.05 was considered statistically significant.
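The mean-split dichotomization described above can be sketched as follows (a minimal illustration under the stated scoring scheme, not the authors’ SPSS procedure):

```python
def mean_split(scores):
    """Dichotomize total scores at the sample mean: score >= mean -> 'high', else 'low'."""
    m = sum(scores) / len(scores)
    return ["high" if s >= m else "low" for s in scores]

# Example: total scores of 4 hypothetical respondents (mean = 25)
print(mean_split([10, 20, 30, 40]))  # ['low', 'low', 'high', 'high']
```

The resulting binary groups can then serve as categorical predictors in the logistic regression described in the text.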

Results

A total of 6553 questionnaires were collected, of which 6200 were valid after excluding those with contradictory records, giving a valid-response rate of 94.61%. Residents’ demographic characteristics are shown in Table  1 . Of the residents, 60.2% had an undergraduate degree or below, and 39.8% had a postgraduate degree or above. In terms of research and publishing experience, 37.6% of the participants had served as a primary investigator for a research project and 38.2% had published a paper as the first author or corresponding author. Most residents (88.5%) had attended a course on research integrity.

Table  2 shows residents’ self-reported knowledge of research integrity. The highest-scoring items were research ethics and the disposition of research misconduct, and the lowest-scoring item was documentation on scientific integrity issued by regulatory authorities. The average total self-reported knowledge score among residents was 39.44 ± 14.46.

As shown in Table  3 , of the 7 listed consequences, harm to the entire academic environment was the most strongly perceived (3.68 ± 1.29) and harm to personal academic reputation the least (3.46 ± 1.37). The average total score for perceived consequences of research misconduct among residents was 25.16 ± 8.47.

Table  4 presents the residents’ involvement in research misconduct. A total of 3331 residents (53.7%) admitted to having committed at least one of the seven listed forms of research misconduct (Table  5 ). The most common form was multiple submissions (50.6%), and the least common was buying or selling papers, having others write papers, or writing papers for others (46.7%).

We entered all the demographic characteristics, self-reported research integrity knowledge and perceived consequences for research misconduct into the model to determine the factors associated with research misconduct; the results are displayed in Table  5 . Having a postgraduate degree or above (OR = 2.457, 95% CI = 2.076–2.909, P  < 0.01), publishing papers as the first author or corresponding author (OR = 4.271, 95% CI = 3.641–5.009, P  < 0.01), attending a course on research integrity (OR = 4.242, 95% CI = 3.226–5.579, P  < 0.01), lower self-reported knowledge of research integrity (OR = 2.374, 95% CI = 1.937–2.908, P  < 0.01) and lower perceived consequences for research misconduct (OR = 20.411, 95% CI = 16.325–25.528, P  < 0.01) were positively correlated with research misconduct, whereas serving as a primary investigator for a research project (OR = 0.600, 95% CI = 0.510–0.715, P  < 0.01) was negatively associated with it.

Table  6 depicts the perceived reasons for research misconduct among residents. The most endorsed reason was that researchers lack research ability (66.3%), followed by the influence of the academic environment (65.7%). The reason with the lowest agreement rate was a lack of research integrity training (62.0%).

Discussion

To our knowledge, this study is the first survey of residents’ knowledge, attitudes and practices regarding research misconduct, and of its associated factors, in China. The questionnaire showed good reliability and validity, with reference to previous studies. The results may help policy-makers and hospital managers identify residents at higher risk of research misconduct and inform the content of residency training. Our study suggests that only a limited proportion of residents had served as a primary investigator for a research project (37.6%) or published a paper as the first author or corresponding author (38.2%). Meanwhile, 53.7% admitted to having committed research misconduct. The average total self-reported knowledge score among residents was 39.44 ± 14.46 (possible range 12 to 60), and the average total score for perceived consequences of research misconduct was 25.16 ± 8.47 (possible range 7 to 35). Previous studies have already shown a worrisome prevalence of self-reported research misconduct among medical faculty members [ 5 , 12 , 18 ]. Our study revealed worse results among residents in hospitals, which demonstrates the insufficiency of research integrity management in hospitals.

In our study, multiple submission (50.6%) was the most frequent form of research misconduct, and 12.2% of the residents had submitted a manuscript to multiple journals ≥ 4 times. Because multiple submission is treated as a serious deviation mainly by scientific journals, some researchers regard it as causing little apparent harm [ 19 ]. The Ministry of Education of the People’s Republic of China has not emphasized the severity of multiple submission [ 16 ], and some researchers do not even consider it research misconduct [ 20 ]. Falsifying research data, materials, literature or annotations, or fabricating research results was the second most common form, with an alarming self-reported rate of 49.0%. Hofmann et al. reported that 10.0% of participants believed that falsification, fabrication and plagiarism (FFP) were common, and that some respondents were willing to commit FFP when they believed their conclusions were true [ 21 ]. FFP was also considered more acceptable, and less important, in grant applications than in publications, so the actual prevalence of FFP may be worse than expected.

Residents with a postgraduate degree or above may be more likely to commit research misconduct. Asman et al. [ 22 ] reported that PhD-prepared nurses tend to fabricate, select or omit data to improve their chances of publication, and Khadem-Rezaiyan and Dadgarmoghaddam [ 23 ] suggested that postgraduate students report a higher estimated prevalence of research misconduct than undergraduate students. The additional statistical skills of postgraduate students may make it easier for them to fabricate, select or omit data, and the desire to succeed may drive them toward misconduct [ 19 ]. In addition, the tension between limited time and heavy workloads, together with high demands for postgraduates’ scientific achievement, may contribute to research misconduct. Those serving as a primary investigator for a research project showed a lower inclination toward research misconduct. This may be related to research experience: primary investigators usually have relatively rich research experience and knowledge, whereas junior researchers usually have poor knowledge of research misconduct [ 20 ]. Moreover, the primary investigator bears the greatest responsibility, which makes them pay more attention to research integrity and gives them more opportunities to be exposed to relevant knowledge and cases.

In our study, most residents (88.5%) had attended a course on research integrity, which was surprisingly at odds with the high prevalence of research misconduct. Indeed, attending a course on research integrity was positively associated with research misconduct, indicating shortcomings in current research integrity courses and the urgency of updating their content and delivery. Traditional lectures are still the most commonly used teaching method in China, and they are significantly less effective and efficient than the seminar teaching method or combined problem-based and case-based learning [ 24 , 25 ]. Furthermore, there is no research integrity course specifically designed for residents; they are usually trained together with hospital staff. The academic environment is complex, and junior residents usually learn about research integrity from their supervisors or senior students [ 20 ], which may influence their knowledge, attitudes and practices toward research misconduct. Further studies should explore residents’ perceptions of research integrity courses, and in-depth interviews with residents should be adopted to optimize course design.

Lower self-reported knowledge was associated with a higher prevalence of research misconduct. We also collected residents’ self-reported reasons for research misconduct. The top 3 were “researchers lack research ability”, “researchers are influenced by the academic environment” and “researchers deviate in personal values”. These results indicate that a lack of personal research ability and of knowledge about research misconduct contributes to residents’ involvement in it. The reputation and income of medical staff are closely tied to professional title, and research achievements such as publications and grants play an important role in the title promotion and evaluation systems of Chinese tertiary hospitals. The tension between research ability and promotion pressure may therefore contribute to the incidence of research misconduct. Previous studies also suggest that promotion pressure and individual morality are the main perceived reasons for research misconduct [ 1 , 5 ]; consistent with earlier surveys, personal morality was a main influencing factor, suggesting the importance of strengthening it [ 19 ]. Lower perceived consequences for research misconduct were significantly correlated with research misconduct, with a notably large OR, which may provide clues for the design of training. Some researchers have suggested that education on research misconduct helps reduce its incidence [ 5 ]. Therefore, more courses focusing on the consequences of research misconduct should be offered to residents.

Our results reflect the weakness of current research integrity courses, the importance of perceived consequences and practices regarding research misconduct, and the factors linked with research misconduct among residents. Those who had a postgraduate degree or above, scored lower on research misconduct knowledge and perceived consequences, did not serve as a primary investigator for a research project, published papers as the first author or corresponding author, or attended a course on research integrity tended to commit research misconduct. This reveals a troubling phenomenon in which residents with research experience tend to commit research misconduct despite having been trained, suggesting the need to reform residents’ education. Hospital managers and policy-makers should pay more attention to residents’ education, and we propose several recommendations to improve research integrity. First, government administrations should emphasize the importance of research integrity and include it as a compulsory course in residency training programs, applying the seminar teaching method or combined problem-based and case-based learning instead of relying on traditional lectures. Second, in-depth interviews with residents may be conducted to optimize curriculum design; the consequences of research misconduct should be emphasized in the courses, and multiple submission and duplicate publication need to be highlighted because of their high prevalence. Third, an auditing and surveillance system can be implemented in hospitals: a department in charge of research integrity should be established and maintain its authority and independence, and the in-hospital review process should be strictly applied before residents’ scholarly activity.

The study has a few limitations. First, convenience sampling was applied because of the sensitivity of research misconduct, which may affect the results. Second, the questionnaire relied on self-report, and bias may be involved despite assurances of anonymity. Finally, although based on other studies, the questionnaire was self-designed, and its measurement properties may not fully match the study objectives.

Data availability

The data used during the study are available from the corresponding author on reasonable request.

Rahman H, Ankier S. Dishonesty and research misconduct within the medical profession. BMC Med Ethics. 2020;21(1):22.

The Guardian. China overtakes the US in scientific research output. 2022. Available from: https://www.theguardian.com/world/2022/aug/11/china-overtakes-the-us-in-scientific-research-output .

The Retraction Watch Database. New York: The Center for Scientific Integrity. 2018 [cited 2024-2-21]. Available from: http://retractiondatabase.org/ .

Mallapaty S. China conducts first nationwide review of retractions and research misconduct. Nature. 2024;626(8000):700–1.

Yu L, Miao M, Liu W, Zhang B, Zhang P. Scientific misconduct and associated factors: a survey of researchers in three Chinese tertiary hospitals. Account Res. 2021;28(2):95–114.

Bloomfield JG, Crawford T, Fisher M. Registered nurses understanding of academic honesty and the perceived relationship to professional conduct: findings from a cross-sectional survey conducted in Southeast Asia. Nurse Educ Today. 2021;100:104794.

Kaiser M, Drivdal L, Hjellbrekke J, Ingierd H, Rekdal OB. Questionable Research Practices and Misconduct among Norwegian researchers. Sci Eng Ethics. 2021;28(1):2.

Armond ACV, Gordijn B, Lewis J, Hosseini M, Bodnár JK, Holm S, et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics. 2021;22(1):50.

Tooke J, Wass J. Nurturing tomorrow’s clinician scientists. Lancet. 2013;381(Suppl 1):S1–2.

Seaburg LA, Wang AT, West CP, Reed DA, Halvorsen AJ, Engstler G, et al. Associations between resident physicians’ publications and clinical performance during residency training. BMC Med Educ. 2016;16:22.

Shanmugalingam A, Ferreria SG, Norman RMG, Vasudev K. Research experience in psychiatry residency programs across Canada: current status. Can J Psychiatry. 2014;59(11):586–90.

Al-Taha M, Youha SA, Al-halabi B, Stone J, Retrouvey H, Samargandi O, et al. Barriers and attitudes to Research among residents in Plastic and reconstructive surgery: a National Multicenter cross-sectional study. J Surg Educ. 2017;74(6):1094–104.

Han S, Li K, Gao S, Zhang Y, Yang X, Li C, et al. Research misconduct knowledge and associated factors among nurses in China: a national cross-sectional survey. Appl Nurs Res. 2023;69:151658.

Clancy AA, Posner G. Attitudes toward Research during Residency: a survey of Canadian residents in Obstetrics and Gynecology. J Surg Educ. 2015;72(5):836–43.

Hofmann B, Thoresen M, Holm S. Research Integrity attitudes and behaviors are difficult to alter: results from a ten Year follow-up study in Norway. J Empir Res Hum Res Ethics. 2023;18(1–2):50–7.

Ministry of Education of the People’s Republic of China. Measures for preventing and handling academic misconduct in institutions of higher learning. 2016. Available from: http://www.moe.gov.cn/srcsite/A02/s5911/moe_621/201607/t20160718_272156.html .

Jie W, Xiaowei L, Zhiwen W, Zhengxin W. Graduate nursing students’ attitude towards research integrity. Chin Nurs Manage. 2016;16(10):1352–7.

Shamsoddin E, Torkashvand-Khah Z, Sofi-Mahmudi A, Janani L, Kabiri P, Shamsi-Gooshki E, et al. Assessing research misconduct in Iran: a perspective from Iranian medical faculty members. BMC Med Ethics. 2021;22(1):74.

Yi N, Nemery B, Dierickx K. Perceptions of research integrity and the Chinese situation: In-depth interviews with Chinese biomedical researchers in Europe. Account Res. 2019;26(7):405–26.

Yi N, Nemery B, Dierickx K. Integrity in Biomedical Research: a systematic review of studies in China. Sci Eng Ethics. 2019;25(4):1271–301.

Hofmann B, Jensen LB, Eriksen MB, Helgesson G, Juth N, Holm S. Research Integrity among PhD students at the Faculty of Medicine: a comparison of three scandinavian universities. J Empir Res Hum Res Ethics. 2020;15(4):320–9.

Asman O, Melnikov S, Barnoy S, Tabak N. Experiences, behaviors, and perceptions of registered nurses regarding research ethics and misconduct. Nurs Ethics. 2019;26(3):859–69.

Khadem-Rezaiyan M, Dadgarmoghaddam M. Research misconduct: a report from a developing country. Iran J Public Health. 2017;46(10):1374–8.

Zhao W, He L, Deng W, Zhu J, Su A, Zhang Y. The effectiveness of the combined problem-based learning (PBL) and casebased learning (CBL) teaching method in the clinical practical teaching of thyroid disease. BMC Med Educ. 2020;20(1):381.

Zeng HL, Chen DX, Li Q, Wang XY. Effects of seminar teaching method versus lecture-based learning in medical education: a meta-analysis of randomized controlled trials. Med Teach. 2020;42(12):1343–9.

Acknowledgements

We express our thanks to all the investigated hospitals. We offer special gratitude to the participants of the study for their support.

This study was supported by self-funded research project of The Health Committee of Guangxi Zhuang Autonomous Regions, Grant/Award Number: Z-A20221187.

Author information

Lulin Chen and Yizhao Li contributed equally to this manuscript.

Authors and Affiliations

Department of Preventive Health, The Second Nanning People’s Hospital, The Third Affiliated Hospital of Guangxi Medical University, Nanning, People’s Republic of China

Lulin Chen

Department of Science and Education, The Second Nanning People’s Hospital, The Third Affiliated Hospital of Guangxi Medical University, Nanning, People’s Republic of China

Yizhao Li, Jie Wang, Yue Li, Xiaoli Tan & Xiaoyan Guo

Contributions

Lulin Chen analyzed the data and wrote the main manuscript text. Jie Wang designed this study. Yizhao Li and Xiaoyan Guo collected the data and reviewed the manuscript. Yue Li and Xiaoli Tan collected the data. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Xiaoyan Guo.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the institutional review board of The Second Nanning People’s Hospital. Electronic informed consent was obtained from all participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Chen, L., Li, Y., Wang, J. et al. Knowledge, attitudes and practices about research misconduct among medical residents in southwest China: a cross-sectional study. BMC Med Educ 24 , 284 (2024). https://doi.org/10.1186/s12909-024-05277-6


Received : 07 October 2023

Accepted : 07 March 2024

Published : 14 March 2024

DOI : https://doi.org/10.1186/s12909-024-05277-6


Keywords: Research misconduct

BMC Medical Education

ISSN: 1472-6920


Prestigious Medical Journal Ignored Nazi Atrocities, Historians Find

The New England Journal of Medicine published an article condemning its own record during World War II.

A black-and-white archival photo shows two rows of Nazi doctors and scientists in a trial courtroom in Nuremberg, Germany. Several other people sit at desks in front of them, looking at documents or listening via headphones to the proceedings.

By Alexander Nazaryan

A new article in the New England Journal of Medicine, one of the oldest and most esteemed publications for medical research, criticizes the journal for paying only “superficial and idiosyncratic attention” to the atrocities perpetrated in the name of medical science by the Nazis.

The journal was “an outlier in its sporadic coverage of the rise of Nazi Germany,” wrote the article’s authors, Allan Brandt and Joelle Abi-Rached, both medical historians at Harvard. Often, the journal simply ignored the Nazis’ medical depredations, such as the horrific experiments conducted on twins at Auschwitz, which were based largely on Adolf Hitler’s spurious “ racial science .”

In contrast, two other leading science journals — Science and the Journal of the American Medical Association — covered the Nazis’ discriminatory policies throughout Hitler’s tenure, the historians noted. The New England journal did not publish an article “explicitly damning” the Nazis’ medical atrocities until 1949 , four years after World War II ended.

The new article, published in this week’s issue of the journal, is part of a series started last year to address racism and other forms of prejudice in the medical establishment. Another recent article described the journal’s enthusiastic coverage of eugenics throughout the 1930s and ’40s.

“Learning from our past mistakes can help us going forward,” said the journal’s editor, Dr. Eric Rubin, an infectious disease expert at Harvard. “What can we do to ensure that we don’t fall into the same sorts of objectionable ideas in the future?”

In the publication’s archives, Dr. Abi-Rached discovered a paper endorsing Nazi medical practices: “Recent changes in German health insurance under the Hitler government,” a 1935 treatise written by Michael Davis , an influential figure in health care, and Gertrud Kroeger, a nurse from Germany. The article praised the Nazis’ emphasis on public health , which was infused with dubious ideas about Germans’ innate superiority.

“There is no reference to the slew of persecutory and antisemitic laws that had been passed,” Dr. Abi-Rached and Dr. Brandt wrote. In one passage, Dr. Davis and Ms. Kroeger described how doctors were made to work in Nazi labor camps. Duty there, the authors blithely wrote, was an “opportunity to mingle with all sorts of people in everyday life.”

“Apparently, they considered the discrimination against Jews irrelevant to what they saw as reasonable and progressive change,” Dr. Abi-Rached and Dr. Brandt wrote.

For the most part, however, the two historians were surprised at how little the journal had to say about the Nazis, who murdered some 70,000 disabled people before turning to the slaughter of Europe’s Jews, as well as other groups.

“When we opened the file drawer, there was almost nothing there,” Dr. Brandt said. Instead of discovering articles either condemning or justifying the Nazis’ perversions of medicine, there was instead something more puzzling: an evident indifference that lasted until well after the end of World War II.

The journal acknowledged Hitler in 1933, the year he began implementing his antisemitic policies. Seven months after the advent of the Third Reich, the journal published “The Abuse of the Jewish Physicians,” an article that today would most likely face criticism for lacking moral clarity. It appeared to be largely based on reporting by The New York Times.

“Without providing any details, the notice reported that there was some indication of ‘a bitter and relentless opposition to the Jewish people,’” the new article said.

Other journals saw the threat of Nazism more clearly. Science expressed alarm about the “crass repression” of Jews, which took place not only in medicine but also in law, the arts and other professions.

“The journal, and America, had tunnel vision,” said John Michalczyk , co-director of Jewish Studies at Boston College. American corporations avidly did business with Hitler’s regime. The Nazi dictator, in turn, looked favorably at the slaughter and displacement of Native Americans, and sought to adopt the eugenics efforts that had taken place across the United States throughout the early 20th century.

“Our hands are not clean,” Dr. Michalczyk said.

Dr. Abi-Rached said she and Dr. Brandt wanted to avoid being “anachronistic” and viewing the journal’s silence on Nazism through a contemporary lens. But once she saw that other medical publications had taken a different tack, the journal’s silence took on a fraught new meaning. What was said was dwarfed by what was never spoken.

“We were looking for strategies to understand how racism works,” Dr. Brandt said. It seemed to work, in part, through apathy. Later, many institutions would claim that they would have acted to save more of the Holocaust’s victims had they known the extent of the Nazis’ atrocities.

That excuse rings hollow to experts who point out that there were enough eyewitness reports to merit action.

“Sometimes, silence contributes to these kinds of radical, immoral, catastrophic shifts,” Dr. Brandt said. “That’s implicit in our paper.”


Published on 12.4.2024 in Vol 26 (2024)

Application of AI in Multilevel Pain Assessment Using Facial Images: Systematic Review and Meta-Analysis

Authors of this article:


  • Jian Huo 1 * , MSc   ; 
  • Yan Yu 2 * , MMS   ; 
  • Wei Lin 3 , MMS   ; 
  • Anmin Hu 2, 3, 4 , MMS   ; 
  • Chaoran Wu 2 , MD, PhD  

1 Boston Intelligent Medical Research Center, Shenzhen United Scheme Technology Company Limited, Boston, MA, United States

2 Department of Anesthesia, Shenzhen People's Hospital, The First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Key Medical Discipline, Shenzhen, China

3 Shenzhen United Scheme Technology Company Limited, Shenzhen, China

4 The Second Clinical Medical College, Jinan University, Shenzhen, China

*these authors contributed equally

Corresponding Author:

Chaoran Wu, MD, PhD

Department of Anesthesia

Shenzhen People's Hospital, The First Affiliated Hospital of Southern University of Science and Technology

Shenzhen Key Medical Discipline

No 1017, Dongmen North Road

Shenzhen, 518020

Phone: 86 18100282848

Email: [email protected]

Background: The continuous monitoring and recording of patients’ pain status is a major challenge in current research on postoperative pain management. Among the many original and review articles on different approaches to pain assessment, many researchers have investigated how computer vision (CV) can help by capturing facial expressions. However, results have not been properly compared across studies to identify current research gaps.

Objective: The purpose of this systematic review and meta-analysis was to investigate the diagnostic performance of artificial intelligence models for multilevel pain assessment from facial images.

Methods: The PubMed, Embase, IEEE, Web of Science, and Cochrane Library databases were searched for related publications before September 30, 2023. Studies that used facial images alone to estimate multiple pain values were included in the systematic review. A study quality assessment was conducted using the Quality Assessment of Diagnostic Accuracy Studies, 2nd edition tool. The performance of these studies was assessed by metrics including sensitivity, specificity, log diagnostic odds ratio (LDOR), and area under the curve (AUC). The intermodel variability was assessed and presented by forest plots.

Results: A total of 45 reports were included in the systematic review. The reported test accuracies ranged from 0.27-0.99, and the other metrics, including the mean squared error (MSE), mean absolute error (MAE), intraclass correlation coefficient (ICC), and Pearson correlation coefficient (PCC), ranged from 0.31-4.61, 0.24-2.8, 0.19-0.83, and 0.48-0.92, respectively. In total, 6 studies were included in the meta-analysis. Their combined sensitivity was 98% (95% CI 96%-99%), specificity was 98% (95% CI 97%-99%), LDOR was 7.99 (95% CI 6.73-9.31), and AUC was 0.99 (95% CI 0.99-1). The subgroup analysis showed that the diagnostic performance was acceptable, although imbalanced data were still emphasized as a major problem. All studies had at least one domain with a high risk of bias, and for 20% (9/45) of studies, there were no applicability concerns.

Conclusions: This review summarizes recent evidence on automatic multilevel pain estimation from facial expressions and compares the test accuracies of included studies in a meta-analysis. Current CV algorithms show promising performance for pain estimation from facial images. Weaknesses in current studies were also identified, suggesting that larger databases and metrics evaluating multiclass classification performance could improve future studies.

Trial Registration: PROSPERO CRD42023418181; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=418181

Introduction

The definition of pain was revised to “an unpleasant sensory and emotional experience associated with, or resembling that associated with, actual or potential tissue damage” in 2020 [ 1 ]. Acute postoperative pain management is important, as pain intensity and duration are critical influencing factors for the transition of acute pain to chronic postsurgical pain [ 2 ]. To avoid the development of chronic pain, guidelines were promoted and discussed to ensure safe and adequate pain relief for patients, and clinicians were recommended to use a validated pain assessment tool to track patients’ responses [ 3 ]. However, these tools, to some extent, depend on communication between physicians and patients, and continuous data cannot be provided [ 4 ]. The continuous assessment and recording of patient pain intensity will not only reduce caregiver burden but also provide data for chronic pain research. Therefore, automatic and accurate pain measurements are necessary.

Researchers have proposed different approaches to measuring pain intensity. Physiological signals, for example, electroencephalography and electromyography, have been used to estimate pain [ 5 - 7 ]. However, it was reported that current pain assessment from physiological signals has difficulties isolating stress and pain with machine learning techniques, as they share conceptual and physiological similarities [ 8 ]. Recent studies have also investigated pain assessment tools for certain patient subgroups. For example, people with deafness or an intellectual disability may not be able to communicate well with nurses, and an objective pain evaluation would be a better option [ 9 , 10 ]. Measuring pain intensity from patient behaviors, such as facial expressions, is also promising for most patients [ 4 ]. As the most comfortable and convenient method, computer vision techniques require no attachments to patients and can monitor multiple participants using 1 device [ 4 ]. However, pain intensity, which is important for pain research, is often not reported.

With the growing trend of assessing pain intensity using artificial intelligence (AI), it is necessary to summarize current publications to determine the strengths and gaps of current studies. Existing research has reviewed machine learning applications for acute postoperative pain prediction, continuous pain detection, and pain intensity estimation [ 10 - 14 ]. Input modalities, including facial recordings and physiological signals such as electroencephalography and electromyography, have also been reviewed [ 5 , 8 ]. There have also been studies focusing on deep learning approaches [ 11 ], and AI has been applied to pain evaluation in children and infants as well [ 15 , 16 ]. However, no study has focused specifically on multilevel pain intensity measurement, and no comparison of test accuracy results has been made.

Current AI applications in pain research can be categorized into 3 types: pain assessment, pain prediction and decision support, and pain self-management [ 14 ]. We consider accurate and automatic pain assessment to be the most important area and the foundation of future pain research. In this study, we performed a systematic review and meta-analysis to assess the diagnostic performance of current publications for multilevel pain evaluation.

This study was registered with PROSPERO (International Prospective Register of Systematic Reviews; CRD42023418181) and carried out strictly following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [ 17 ] .

Study Eligibility

Studies that reported AI techniques for multiclass pain intensity classification were eligible. Records including nonhuman or infant participants or 2-class pain detection were excluded. Only studies using facial images of the test participants were accepted. Studies whose reference standard was a clinical pain assessment tool, such as the visual analog scale (VAS) or numerical rating scale (NRS), or another pain intensity indicator were excluded from the meta-analysis. Textbox 1 presents the eligibility criteria.

Study characteristics and inclusion criteria

  • Participants: children and adults aged 12 months or older
  • Setting: no restrictions
  • Index test: artificial intelligence models that measure pain intensity from facial images
  • Reference standard: no restrictions for systematic review; Prkachin and Solomon pain intensity score for meta-analysis
  • Study design: no need to specify

Study characteristics and exclusion criteria

  • Participants: infants aged 12 months or younger and animal subjects
  • Setting: no need to specify
  • Index test: studies that use other information such as physiological signals
  • Reference standard: other pain evaluation tools, e.g., NRS, VAS, were excluded from meta-analysis
  • Study design: reviews

Report characteristics and inclusion criteria

  • Year: published between January 1, 2012, and September 30, 2023
  • Language: English only
  • Publication status: published
  • Test accuracy metrics: no restrictions for systematic reviews; studies that reported contingency tables were included for meta-analysis

Report characteristics and exclusion criteria

  • Year: no need to specify
  • Language: no need to specify
  • Publication status: preprints not accepted
  • Test accuracy metrics: studies that reported insufficient metrics were excluded from meta-analysis

Search Strategy

In this systematic review, databases including PubMed, Embase, IEEE, Web of Science, and the Cochrane Library were searched until December 2022, and no restrictions were applied. Keywords were “artificial intelligence” AND “pain recognition.” Multimedia Appendix 1 shows the detailed search strategy.

Data Extraction

Two reviewers independently screened titles and abstracts to assess eligibility, and disagreements were resolved by discussion with a third collaborator. A prespecified, standardized data extraction sheet was used to summarize study characteristics independently. Table S5 in Multimedia Appendix 1 shows the detailed items and explanations for data extraction. Diagnostic accuracy data were extracted into contingency tables, including true positives, false positives, false negatives, and true negatives. These data were used to calculate the pooled diagnostic performance of the different models. Some studies included multiple models, and these models were treated as independent of each other.
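Concretely, the four extracted cells determine the per-table metrics that feed the pooled analysis. A minimal sketch, using hypothetical counts (not figures from any included study):

```python
import math

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and log diagnostic odds ratio (LDOR)
    from one contingency table; the 0.5 continuity correction keeps
    the odds ratio finite when a cell is zero."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))
    return sensitivity, specificity, math.log(dor)

# hypothetical counts for one model at one pain threshold
sens, spec, ldor = diagnostic_metrics(tp=90, fp=5, fn=10, tn=95)
```

Whether the included studies applied a continuity correction is not stated here; it is shown only as common practice in diagnostic accuracy analysis.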

Study Quality Assessment

All included studies were independently assessed by 2 reviewers using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool [ 18 ]. QUADAS-2 assesses bias risk across 4 domains: patient selection, index test, reference standard, and flow and timing. The first 3 domains are also assessed for applicability concerns. In the systematic review, a specific extension of QUADAS-2, namely QUADAS-AI, was used to specify the signaling questions [ 19 ].

Meta-Analysis

Meta-analyses were conducted between different AI models. Models with different algorithms or training data were considered different. To evaluate the performance differences between models, the contingency tables during model validation were extracted. Studies that did not report enough diagnostic accuracy data were excluded from meta-analysis.

Hierarchical summary receiver operating characteristic (SROC) curves were fitted to evaluate the diagnostic performance of AI models. These curves were plotted with 95% CIs and prediction regions around averaged sensitivity, specificity, and area under the curve estimates. Heterogeneity was assessed visually by forest plots. A funnel plot was constructed to evaluate the risk of bias.
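The hierarchical SROC fitting itself was done with a Bayesian bivariate model in R (meta4diag, see below). Purely as a deliberately simplified stand-in for intuition, not the authors' method, fixed-effect inverse-variance pooling of sensitivities on the logit scale, with hypothetical counts, looks like this:

```python
import math

def pool_logit(successes, totals):
    """Fixed-effect inverse-variance pooling on the logit scale.
    Each (successes, totals) pair could be (TP, TP + FN) for one
    model's sensitivity; 0.5 continuity corrections keep the
    logits and variances finite."""
    num = den = 0.0
    for s, n in zip(successes, totals):
        p = (s + 0.5) / (n + 1.0)
        logit = math.log(p / (1.0 - p))
        var = 1.0 / (s + 0.5) + 1.0 / (n - s + 0.5)  # approximate variance of the logit
        w = 1.0 / var  # inverse-variance weight
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to a proportion

# pooled sensitivity over three hypothetical models
pooled_sens = pool_logit([45, 88, 30], [50, 90, 33])
```

A bivariate model additionally accounts for the correlation between sensitivity and specificity across thresholds, which this univariate sketch ignores.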

Subgroup meta-analyses were conducted to evaluate the performance differences at both the model level and task level, and subgroups were created based on different tasks and the proportion of positive and negative samples.

All statistical analyses and plots were produced using R (version 4.2.2; R Core Team) in RStudio and the R package meta4diag (version 2.1.1; Guo J and Riebler A) [ 20 ].

Study Selection and Included Study Characteristics

A flow diagram representing the study selection process is shown in Figure 1 . After removing 1039 duplicates, the titles and abstracts of a total of 5653 papers were screened, and the percentage agreement of title or abstract screening was 97%. After screening, 51 full-text reports were assessed for eligibility, among which 45 reports were included in the systematic review [ 21 - 65 ]. The percentage agreement of the full-text review was 87%. In 40 of the included studies, contingency tables could not be made. Meta-analyses were conducted based on 8 AI models extracted from 6 studies. Individual study characteristics included in the systematic review are provided in Tables 1 and 2 . The facial feature extraction methods can be categorized into 2 classes: geometrical features (GFs) and deep features (DFs). One typical method of extracting GFs is to calculate the distances between facial landmarks; DFs are usually extracted by convolution operations. A total of 20 studies included temporal information, and most of them (18) extracted temporal information through the 3D convolution of video sequences. Feature transformation was also commonly applied, either to reduce training time or to fuse features extracted by different methods before inputting them into the classifier. For classifiers, support vector machines (SVMs) and convolutional neural networks (CNNs) were the most used. Table 1 presents the model designs of the included studies.
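The landmark-distance idea behind GFs can be sketched as follows (the landmark coordinates are hypothetical; real pipelines use dozens of detected facial landmarks):

```python
import math
from itertools import combinations

def pairwise_distances(landmarks):
    """Geometric features (GFs) as Euclidean distances between all
    pairs of 2D facial landmarks, ordered by landmark index."""
    return [math.dist(a, b) for a, b in combinations(landmarks, 2)]

# 4 hypothetical landmarks -> 6 distance features for a classifier such as an SVM
feats = pairwise_distances([(0, 0), (3, 4), (6, 0), (3, 1)])
```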


a No temporal features are shown by – symbol, time information extracted from 2 images at different time by +, and deep temporal features extracted through the convolution of video sequences by ++.

b SVM: support vector machine.

c GF: geometric feature.

d GMM: gaussian mixture model.

e TPS: thin plate spline.

f DML: distance metric learning.

g MDML: multiview distance metric learning.

h AAM: active appearance model.

i RVR: relevance vector regressor.

j PSPI: Prkachin and Solomon pain intensity.

k I-FES: individual facial expressiveness score.

l LSTM: long short-term memory.

m HCRF: hidden conditional random field.

n GLMM: generalized linear mixed model.

o VLAD: vector of locally aggregated descriptor.

p SVR: support vector regression.

q MDS: multidimensional scaling.

r ELM: extreme learning machine.

s Labeled to distinguish different architectures of ensembled deep learning models.

t DCNN: deep convolutional neural network.

u GSM: gaussian scale mixture.

v DOML: distance ordering metric learning.

w LIAN: locality and identity aware network.

x BiLSTM: bidirectional long short-term memory.

a UNBC: University of Northern British Columbia-McMaster shoulder pain expression archive database.

b LOSO: leave one subject out cross-validation.

c ICC: intraclass correlation coefficient.

d CT: contingency table.

e AUC: area under the curve.

f MSE: mean squared error.

g PCC: Pearson correlation coefficient.

h RMSE: root mean squared error.

i MAE: mean absolute error.

j ICC: intraclass correlation coefficient.

k CCC: concordance correlation coefficient.

l Reported both external and internal validation results and summarized as intervals.

Table 2 summarizes the characteristics of model training and validation. Most studies used publicly available databases, for example, the University of Northern British Columbia-McMaster shoulder pain expression archive database [ 57 ]. Table S4 in Multimedia Appendix 1 summarizes the public databases. A total of 7 studies used self-prepared databases. Frames from video sequences were the most common test objects, as 37 studies output frame-level pain intensity, while few measured pain intensity from whole video sequences or photos. It was common for a study to redefine pain levels to have fewer classes than the ground-truth labels. For model validation, cross-validation and leave-one-subject-out validation were commonly used. Only 3 studies performed external validation. For reporting test accuracies, different evaluation metrics were used, including sensitivity, specificity, mean absolute error (MAE), mean squared error (MSE), Pearson correlation coefficient (PCC), and intraclass correlation coefficient (ICC).
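A small worked example of these regression-style metrics, using hypothetical frame-level pain intensities:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, and Pearson correlation coefficient (PCC) for
    frame-level pain intensity predictions."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    pcc = cov / (st * sp)
    return mae, mse, pcc

# hypothetical ground-truth and predicted intensities for 5 frames
mae, mse, pcc = regression_metrics([0, 1, 2, 3, 4], [0, 1, 3, 2, 4])
```

As noted in the limitations, none of these metrics alone allows reconstruction of a contingency table, which is why studies reporting only them were excluded from the meta-analysis.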

Methodological Quality of Included Studies

Table S2 in Multimedia Appendix 1 presents the study quality summary, as assessed by QUADAS-2. There was a risk of bias in all studies, specifically in terms of patient selection, caused by 2 issues. First, the training data were highly imbalanced, and any method used to adjust the data distribution may introduce bias. Second, the QUADAS-AI correspondence letter [ 19 ] specifies that preprocessing of images that changes the image size or resolution may introduce bias. However, the applicability concern is low, as the images properly represent the feeling of pain. Studies that used k-fold cross-validation or leave-one-out cross-validation were considered to have a low risk of bias. Although the Prkachin and Solomon pain intensity (PSPI) score was used by most of the studies, its ability to represent individual pain levels has not been clinically validated; as such, the risk of bias and applicability concerns were considered high when the PSPI score was used as the index test. As an advantage of computer vision techniques, the time interval between the index tests was short and was assessed as having a low risk of bias. Risk proportions are shown in Figure 2 . For all 315 entries, 39% (124) were assessed as high risk. In total, 5 studies had the lowest risk of bias, with 6 domains assessed as low risk [ 26 , 27 , 31 , 32 , 59 ].


Pooled Performance of Included Models

The 6 studies included in the meta-analysis contributed 8 different models. The characteristics of these models are summarized in Table S1 in Multimedia Appendix 2 [ 23 , 24 , 26 , 32 , 41 , 57 ]. Classification of PSPI scores greater than 0, 2, 3, 6, and 9 was selected and considered as different tasks to create contingency tables. The test performance is shown in Figure 3 as hierarchical SROC curves; 27 contingency tables were extracted from the 8 models. The sensitivity, specificity, and LDOR were calculated: the combined sensitivity was 98% (95% CI 96%-99%), the specificity was 98% (95% CI 97%-99%), the LDOR was 7.99 (95% CI 6.73-9.31), and the AUC was 0.99 (95% CI 0.99-1).
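The thresholding that turns multilevel outputs into one contingency table per task can be sketched as follows (the labels below are hypothetical PSPI-style values, not data from any included study):

```python
def contingency_at_threshold(y_true, y_pred, thr):
    """Stack PSPI classes into a binary task ('pain level > thr') and
    count TP, FP, FN, TN, reducing a multilevel output to one
    contingency table per threshold."""
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        actual, predicted = t > thr, p > thr
        if actual and predicted:
            tp += 1
        elif not actual and predicted:
            fp += 1
        elif actual and not predicted:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

# hypothetical frame-level labels and predictions, one table per task threshold
tables = {thr: contingency_at_threshold([0, 2, 3, 6, 9, 12], [0, 3, 3, 5, 9, 12], thr)
          for thr in (0, 2, 3, 6, 9)}
```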


Subgroup Analysis

In this study, subgroup analysis was conducted to investigate the performance differences within models. A total of 8 models were separated and summarized as a forest plot in Multimedia Appendix 3 [ 23 , 24 , 26 , 32 , 41 , 57 ]. The pooled sensitivity, specificity, and LDOR were, respectively:

  • Model 1: 95% (95% CI 86%-99%), 99% (95% CI 98%-100%), and 8.38 (95% CI 6.09-11.19)
  • Model 2: 94% (95% CI 84%-99%), 95% (95% CI 88%-99%), and 6.23 (95% CI 3.52-9.04)
  • Model 3: 100% (95% CI 99%-100%), 100% (95% CI 99%-100%), and 11.55 (95% CI 8.82-14.43)
  • Model 4: 83% (95% CI 43%-99%), 94% (95% CI 79%-99%), and 5.14 (95% CI 0.93-9.31)
  • Model 5: 92% (95% CI 68%-99%), 94% (95% CI 78%-99%), and 6.12 (95% CI 1.82-10.16)
  • Model 6: 94% (95% CI 74%-100%), 94% (95% CI 78%-99%), and 6.59 (95% CI 2.21-11.13)
  • Model 7: 98% (95% CI 90%-100%), 97% (95% CI 87%-100%), and 8.31 (95% CI 4.3-12.29)
  • Model 8: 98% (95% CI 93%-100%), 97% (95% CI 88%-100%), and 8.65 (95% CI 4.84-12.67)

Heterogeneity Analysis

The meta-analysis results indicated that AI models are applicable for estimating pain intensity from facial images. However, extreme heterogeneity existed within the models except for models 3 and 5, which were proposed by Rathee and Ganotra [ 24 ] and Semwal and Londhe [ 32 ]. A funnel plot is presented in Figure 4 . A high risk of bias was observed.


Pain management has long been a critical problem in clinical practice, and the use of AI may be a solution. For acute pain management, automatic measurement of pain can reduce the burden on caregivers and provide timely warnings. For chronic pain management, as specified by Glare et al [ 2 ], further research is needed, and measurements of pain presence, intensity, and quality are one of the issues to be solved for chronic pain studies. Computer vision could improve pain monitoring through real-time detection for clinical use and data recording for prospective pain studies. To our knowledge, this is the first meta-analysis dedicated to AI performance in multilevel pain level classification.

In this study, one model’s performance at specific pain levels was described by stacking multiple classes into one to make each task a binary classification problem. After careful selection in both the medical and engineering databases, we observed promising results of AI in evaluating multilevel pain intensity through facial images, with high sensitivity (98%), specificity (98%), LDOR (7.99), and AUC (0.99). It is reasonable to believe that AI can accurately evaluate pain intensity from facial images. Moreover, the study quality and risk of bias were evaluated using an adapted QUADAS-2 assessment tool, which is a strength of this study.

To investigate the source of heterogeneity, it was assumed that a well-designed model should have similar effect sizes across different levels, and a subgroup meta-analysis was conducted. The funnel and forest plots exhibited extreme heterogeneity. Each model’s performance at specific pain levels was described and summarized by a forest plot. Within-model heterogeneity was observed in Multimedia Appendix 3 [ 23 , 24 , 26 , 32 , 41 , 57 ] except for 2 models. Models 3 and 5 differed in many aspects, including their algorithms and validation methods, but both were trained with a relatively small data set in which the proportion of positive and negative classes was relatively close to 1. Training with imbalanced data is a critical problem in computer vision studies [ 66 ]; for example, in the University of Northern British Columbia-McMaster pain data set, fewer than 10 frames out of 48,398 had a PSPI score greater than 13. We therefore emphasize that imbalanced data sets are a major cause of heterogeneity, resulting in the poorer performance of AI algorithms.

We tentatively propose a method to minimize the effect of training with imbalanced data by stacking multiple classes into one class, an approach already present in studies included in the systematic review [ 26 , 32 , 42 , 57 ]. Common methods to minimize bias include resampling and data augmentation [ 66 ]. The stacking method is also used in the meta-analysis to compare the test results of different studies, and it is applicable when classes differ only in intensity. A disadvantage of combining classes is that the resulting model would be insufficient for clinical practice when the number of classes is low. Commonly used pain evaluation tools, such as the VAS, have 10 discrete levels. We therefore recommend that future studies use at least 10 pain levels for model training.
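One of those common methods, random oversampling, can be sketched as follows (the frame identifiers and labels are hypothetical; real pipelines typically combine this with augmentation):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Balance a data set by duplicating samples of minority classes
    until every class matches the size of the largest one."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in sorted(by_class.items()):
        # keep all originals, then draw random duplicates up to the target size
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_samples.extend(picks)
        out_labels.extend([y] * target)
    return out_samples, out_labels

# 4 "no pain" frames vs 1 "severe pain" frame -> 4 of each after oversampling
xs, ys = random_oversample(["f1", "f2", "f3", "f4", "f5"], [0, 0, 0, 0, 3])
```

Note that duplicating minority-class frames can still bias patient selection, which is one of the QUADAS-2 concerns raised above.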

This study is limited for several reasons. First, insufficient data were included because most studies reported performance metrics (mean squared error and mean absolute error) that could not be summarized into a contingency table. To create a contingency table that can be included in a meta-analysis, a study should report the number of samples used in each pain class for model validation, together with the accuracy, sensitivity, specificity, and F1-score for each pain class. Such a table cannot be created if a study reports only the MAE, PCC, and other metrics commonly used in AI development. Second, a small-study effect was observed in the funnel plot, and the heterogeneity could not be minimized. Another limitation is that the PSPI score is not clinically validated and is not the only tool that assesses pain from facial expressions. There are other clinically validated pain intensity assessment methods, such as the Faces Pain Scale-revised, Wong-Baker Faces Pain Rating Scale, and Oucher Scale [ 3 ], and more databases could be created based on these tools. Finally, AI-assisted pain assessment is supposed to cover larger populations, including incommunicable patients, for example, patients with dementia or patients with masked faces. However, only 1 study considered patients with dementia, which is also a consequence of the limited databases available [ 50 ].

AI is a promising tool for future pain research. In this systematic review and meta-analysis, one approach using computer vision was investigated to measure pain intensity from facial images. Despite some risk of bias and applicability concerns, CV models can achieve excellent test accuracy. More CV studies in pain estimation, reporting accuracy in contingency tables, and more pain databases are encouraged for future work. Specifically, the creation of a balanced public database that contains not only healthy but also nonhealthy participants should be prioritized, and recording would ideally take place in a clinical environment. Researchers are then encouraged to report validation results in terms of accuracy, sensitivity, specificity, or contingency tables, as well as the number of samples for each pain class, to enable inclusion in a meta-analysis.

Acknowledgments

WL, AH, and CW contributed to the literature search and data extraction. JH and YY wrote the first draft of the manuscript. All authors contributed to the conception and design of the study, the risk of bias evaluation, and data analysis and interpretation, and all approved the final version of the manuscript.

Data Availability

The data sets generated and analyzed during this study are available in the Figshare repository [ 67 ].

Conflicts of Interest

None declared.

PRISMA checklist, risk of bias summary, search strategy, database summary and reported items and explanations.

Study performance summary.

Forest plot presenting pooled performance of subgroups in meta-analysis.

  • Raja SN, Carr DB, Cohen M, Finnerup NB, Flor H, Gibson S, et al. The revised International Association for the Study of Pain definition of pain: concepts, challenges, and compromises. Pain. 2020;161(9):1976-1982. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Glare P, Aubrey KR, Myles PS. Transition from acute to chronic pain after surgery. Lancet. 2019;393(10180):1537-1546. [ CrossRef ] [ Medline ]
  • Chou R, Gordon DB, de Leon-Casasola OA, Rosenberg JM, Bickler S, Brennan T, et al. Management of postoperative pain: a clinical practice guideline from the American Pain Society, the American Society of Regional Anesthesia and Pain Medicine, and the American Society of Anesthesiologists' Committee on Regional Anesthesia, Executive Committee, and Administrative Council. J Pain. 2016;17(2):131-157. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hassan T, Seus D, Wollenberg J, Weitz K, Kunz M, Lautenbacher S, et al. Automatic detection of pain from facial expressions: a survey. IEEE Trans Pattern Anal Mach Intell. 2021;43(6):1815-1831. [ CrossRef ] [ Medline ]
  • Mussigmann T, Bardel B, Lefaucheur JP. Resting-State Electroencephalography (EEG) biomarkers of chronic neuropathic pain. A systematic review. Neuroimage. 2022;258:119351. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moscato S, Cortelli P, Chiari L. Physiological responses to pain in cancer patients: a systematic review. Comput Methods Programs Biomed. 2022;217:106682. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Thiam P, Hihn H, Braun DA, Kestler HA, Schwenker F. Multi-modal pain intensity assessment based on physiological signals: a deep learning perspective. Front Physiol. 2021;12:720464. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rojas RF, Brown N, Waddington G, Goecke R. A systematic review of neurophysiological sensing for the assessment of acute pain. NPJ Digit Med. 2023;6(1):76. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Mansutti I, Tomé-Pires C, Chiappinotto S, Palese A. Facilitating pain assessment and communication in people with deafness: a systematic review. BMC Public Health. 2023;23(1):1594. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • El-Tallawy SN, Ahmed RS, Nagiub MS. Pain management in the most vulnerable intellectual disability: a review. Pain Ther. 2023;12(4):939-961. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gkikas S, Tsiknakis M. Automatic assessment of pain based on deep learning methods: a systematic review. Comput Methods Programs Biomed. 2023;231:107365. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Borna S, Haider CR, Maita KC, Torres RA, Avila FR, Garcia JP, et al. A review of voice-based pain detection in adults using artificial intelligence. Bioengineering (Basel). 2023;10(4):500. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • De Sario GD, Haider CR, Maita KC, Torres-Guzman RA, Emam OS, Avila FR, et al. Using AI to detect pain through facial expressions: a review. Bioengineering (Basel). 2023;10(5):548. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zhang M, Zhu L, Lin SY, Herr K, Chi CL, Demir I, et al. Using artificial intelligence to improve pain assessment and pain management: a scoping review. J Am Med Inform Assoc. 2023;30(3):570-587. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hughes JD, Chivers P, Hoti K. The clinical suitability of an artificial intelligence-enabled pain assessment tool for use in infants: feasibility and usability evaluation study. J Med Internet Res. 2023;25:e41992. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fang J, Wu W, Liu J, Zhang S. Deep learning-guided postoperative pain assessment in children. Pain. 2023;164(9):2029-2035. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Whiting PF, Rutjes AWS, Westwood ME, Mallett S, Deeks JJ, Reitsma JB, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern Med. 2011;155(8):529-536. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sounderajah V, Ashrafian H, Rose S, Shah NH, Ghassemi M, Golub R, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med. 2021;27(10):1663-1665. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guo J, Riebler A. meta4diag: Bayesian bivariate meta-analysis of diagnostic test studies for routine practice. J Stat Soft. 2018;83(1):1-31. [ CrossRef ]
  • Hammal Z, Cohn JF. Automatic detection of pain intensity. Proc ACM Int Conf Multimodal Interact. 2012;2012:47-52. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Adibuzzaman M, Ostberg C, Ahamed S, Povinelli R, Sindhu B, Love R, et al. Assessment of pain using facial pictures taken with a smartphone. 2015. Presented at: 2015 IEEE 39th Annual Computer Software and Applications Conference; July 01-05, 2015;726-731; Taichung, Taiwan. [ CrossRef ]
  • Majumder A, Dutta S, Behera L, Subramanian VK. Shoulder pain intensity recognition using Gaussian mixture models. 2015. Presented at: 2015 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE); December 19-20, 2015;130-134; Dhaka, Bangladesh. [ CrossRef ]
  • Rathee N, Ganotra D. A novel approach for pain intensity detection based on facial feature deformations. J Vis Commun Image Represent. 2015;33:247-254. [ CrossRef ]
  • Sikka K, Ahmed AA, Diaz D, Goodwin MS, Craig KD, Bartlett MS, et al. Automated assessment of children's postoperative pain using computer vision. Pediatrics. 2015;136(1):e124-e131. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rathee N, Ganotra D. Multiview distance metric learning on facial feature descriptors for automatic pain intensity detection. Comput Vis Image Und. 2016;147:77-86. [ CrossRef ]
  • Zhou J, Hong X, Su F, Zhao G. Recurrent convolutional neural network regression for continuous pain intensity estimation in video. 2016. Presented at: 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); June 26-July 01, 2016; Las Vegas, NV. [ CrossRef ]
  • Egede J, Valstar M, Martinez B. Fusing deep learned and hand-crafted features of appearance, shape, and dynamics for automatic pain estimation. 2017. Presented at: 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017); May 30-June 03, 2017;689-696; Washington, DC. [ CrossRef ]
  • Martinez DL, Rudovic O, Picard R. Personalized automatic estimation of self-reported pain intensity from facial expressions. 2017. Presented at: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW); July 21-26, 2017;2318-2327; Honolulu, HI. [ CrossRef ]
  • Bourou D, Pampouchidou A, Tsiknakis M, Marias K, Simos P. Video-based pain level assessment: feature selection and inter-subject variability modeling. 2018. Presented at: 2018 41st International Conference on Telecommunications and Signal Processing (TSP); July 04-06, 2018;1-6; Athens, Greece. [ CrossRef ]
  • Haque MA, Bautista RB, Noroozi F, Kulkarni K, Laursen C, Irani R. Deep multimodal pain recognition: a database and comparison of spatio-temporal visual modalities. 2018. Presented at: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018); May 15-19, 2018;250-257; Xi'an, China. [ CrossRef ]
  • Semwal A, Londhe ND. Automated pain severity detection using convolutional neural network. 2018. Presented at: 2018 International Conference on Computational Techniques, Electronics and Mechanical Systems (CTEMS); December 21-22, 2018;66-70; Belgaum, India. [ CrossRef ]
  • Tavakolian M, Hadid A. Deep binary representation of facial expressions: a novel framework for automatic pain intensity recognition. 2018. Presented at: 2018 25th IEEE International Conference on Image Processing (ICIP); October 07-10, 2018;1952-1956; Athens, Greece. [ CrossRef ]
  • Tavakolian M, Hadid A. Deep spatiotemporal representation of the face for automatic pain intensity estimation. 2018. Presented at: 2018 24th International Conference on Pattern Recognition (ICPR); August 20-24, 2018;350-354; Beijing, China. [ CrossRef ]
  • Wang J, Sun H. Pain intensity estimation using deep spatiotemporal and handcrafted features. IEICE Trans Inf & Syst. 2018;E101.D(6):1572-1580. [ CrossRef ]
  • Bargshady G, Soar J, Zhou X, Deo RC, Whittaker F, Wang H. A joint deep neural network model for pain recognition from face. 2019. Presented at: 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS); February 23-25, 2019;52-56; Singapore. [ CrossRef ]
  • Casti P, Mencattini A, Comes MC, Callari G, Di Giuseppe D, Natoli S, et al. Calibration of vision-based measurement of pain intensity with multiple expert observers. IEEE Trans Instrum Meas. 2019;68(7):2442-2450. [ CrossRef ]
  • Lee JS, Wang CW. Facial pain intensity estimation for ICU patient with partial occlusion coming from treatment. 2019. Presented at: BIBE 2019; The Third International Conference on Biological Information and Biomedical Engineering; June 20-22, 2019;1-4; Hangzhou, China.
  • Saha AK, Ahsan GMT, Gani MO, Ahamed SI. Personalized pain study platform using evidence-based continuous learning tool. 2019. Presented at: 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC); July 15-19, 2019;490-495; Milwaukee, WI. [ CrossRef ]
  • Tavakolian M, Hadid A. A spatiotemporal convolutional neural network for automatic pain intensity estimation from facial dynamics. Int J Comput Vis. 2019;127(10):1413-1425. [ FREE Full text ] [ CrossRef ]
  • Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Ensemble neural network approach detecting pain intensity from facial expressions. Artif Intell Med. 2020;109:101954. [ CrossRef ] [ Medline ]
  • Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Enhanced deep learning algorithm development to detect pain intensity from facial expression images. Expert Syst Appl. 2020;149:113305. [ CrossRef ]
  • Dragomir MC, Florea C, Pupezescu V. Automatic subject independent pain intensity estimation using a deep learning approach. 2020. Presented at: 2020 International Conference on e-Health and Bioengineering (EHB); October 29-30, 2020;1-4; Iasi, Romania. [ CrossRef ]
  • Huang D, Xia Z, Mwesigye J, Feng X. Pain-attentive network: a deep spatio-temporal attention model for pain estimation. Multimed Tools Appl. 2020;79(37-38):28329-28354. [ CrossRef ]
  • Mallol-Ragolta A, Liu S, Cummins N, Schuller B. A curriculum learning approach for pain intensity recognition from facial expressions. 2020. Presented at: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020); November 16-20, 2020;829-833; Buenos Aires, Argentina. [ CrossRef ]
  • Peng X, Huang D, Zhang H. Pain intensity recognition via multi‐scale deep network. IET Image Process. 2020;14(8):1645-1652. [ FREE Full text ] [ CrossRef ]
  • Tavakolian M, Lopez MB, Liu L. Self-supervised pain intensity estimation from facial videos via statistical spatiotemporal distillation. Pattern Recognit Lett. 2020;140:26-33. [ CrossRef ]
  • Xu X, de Sa VR. Exploring multidimensional measurements for pain evaluation using facial action units. 2020. Presented at: 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020); November 16-20, 2020;786-792; Buenos Aires, Argentina. [ CrossRef ]
  • Pikulkaew K, Boonchieng W, Boonchieng E, Chouvatut V. 2D facial expression and movement of motion for pain identification with deep learning methods. IEEE Access. 2021;9:109903-109914. [ CrossRef ]
  • Rezaei S, Moturu A, Zhao S, Prkachin KM, Hadjistavropoulos T, Taati B. Unobtrusive pain monitoring in older adults with dementia using pairwise and contrastive training. IEEE J Biomed Health Inform. 2021;25(5):1450-1462. [ CrossRef ] [ Medline ]
  • Semwal A, Londhe ND. S-PANET: a shallow convolutional neural network for pain severity assessment in uncontrolled environment. 2021. Presented at: 2021 IEEE 11th Annual Computing and Communication Workshop and Conference (CCWC); January 27-30, 2021;0800-0806; Las Vegas, NV. [ CrossRef ]
  • Semwal A, Londhe ND. ECCNet: an ensemble of compact convolution neural network for pain severity assessment from face images. 2021. Presented at: 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence); January 28-29, 2021;761-766; Noida, India. [ CrossRef ]
  • Szczapa B, Daoudi M, Berretti S, Pala P, Del Bimbo A, Hammal Z. Automatic estimation of self-reported pain by interpretable representations of motion dynamics. 2021. Presented at: 2020 25th International Conference on Pattern Recognition (ICPR); January 10-15, 2021;2544-2550; Milan, Italy. [ CrossRef ]
  • Ting J, Yang YC, Fu LC, Tsai CL, Huang CH. Distance ordering: a deep supervised metric learning for pain intensity estimation. 2021. Presented at: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA); December 13-16, 2021;1083-1088; Pasadena, CA. [ CrossRef ]
  • Xin X, Li X, Yang S, Lin X, Zheng X. Pain expression assessment based on a locality and identity aware network. IET Image Process. 2021;15(12):2948-2958. [ FREE Full text ] [ CrossRef ]
  • Alghamdi T, Alaghband G. Facial expressions based automatic pain assessment system. Appl Sci. 2022;12(13):6423. [ FREE Full text ] [ CrossRef ]
  • Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al. Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images. Sci Rep. 2022;12(1):17297. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fontaine D, Vielzeuf V, Genestier P, Limeux P, Santucci-Sivilotto S, Mory E, et al. Artificial intelligence to evaluate postoperative pain based on facial expression recognition. Eur J Pain. 2022;26(6):1282-1291. [ CrossRef ] [ Medline ]
  • Hosseini E, Fang R, Zhang R, Chuah CN, Orooji M, Rafatirad S, et al. Convolution neural network for pain intensity assessment from facial expression. 2022. Presented at: 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC); July 11-15, 2022;2697-2702; Glasgow, Scotland. [ CrossRef ]
  • Huang Y, Qing L, Xu S, Wang L, Peng Y. HybNet: a hybrid network structure for pain intensity estimation. Vis Comput. 2021;38(3):871-882. [ CrossRef ]
  • Islamadina R, Saddami K, Oktiana M, Abidin TF, Muharar R, Arnia F. Performance of deep learning benchmark models on thermal imagery of pain through facial expressions. 2022. Presented at: 2022 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT); November 03-05, 2022;374-379; Solo, Indonesia. [ CrossRef ]
  • Swetha L, Praiscia A, Juliet S. Pain assessment model using facial recognition. 2022. Presented at: 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS); May 25-27, 2022;1-5; Madurai, India. [ CrossRef ]
  • Wu CL, Liu SF, Yu TL, Shih SJ, Chang CH, Mao SFY, et al. Deep learning-based pain classifier based on the facial expression in critically ill patients. Front Med (Lausanne). 2022;9:851690. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ismail L, Waseem MD. Towards a deep learning pain-level detection deployment at UAE for patient-centric-pain management and diagnosis support: framework and performance evaluation. Procedia Comput Sci. 2023;220:339-347. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vu MT, Beurton-Aimar M. Learning to focus on region-of-interests for pain intensity estimation. 2023. Presented at: 2023 IEEE 17th International Conference on Automatic Face and Gesture Recognition (FG); January 05-08, 2023;1-6; Waikoloa Beach, HI. [ CrossRef ]
  • Kaur H, Pannu HS, Malhi AK. A systematic review on imbalanced data challenges in machine learning: applications and solutions. ACM Comput Surv. 2019;52(4):1-36. [ CrossRef ]
  • Data for meta-analysis of pain assessment from facial images. Figshare. 2023. URL: https:/​/figshare.​com/​articles/​dataset/​Data_for_Meta-Analysis_of_Pain_Assessment_from_Facial_Images/​24531466/​1 [accessed 2024-03-22]


Edited by A Mavragani; submitted 26.07.23; peer-reviewed by M Arab-Zozani, M Zhang; comments to author 18.09.23; revised version received 08.10.23; accepted 28.02.24; published 12.04.24.

©Jian Huo, Yan Yu, Wei Lin, Anmin Hu, Chaoran Wu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 12.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.


J Res Med Sci. 2012 Nov;17(11)

Fraud and deceit in medical research

Umran Sarwar

Division of Surgery, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, Hammersmith Hospital Campus, London W12 ONN, England

Marios Nicolaou

1 Department of Plastic Surgery, Salisbury District Hospital, Salisbury SP2 8BJ, England

Publication of medical research is the cornerstone for the propagation and dissemination of medical knowledge, culminating in significant effects on the health of the world's population. However, instances of individuals and institutions subverting the ethos of honesty and integrity on which medical research is built, in order to advance personal ambitions, have been well documented. Many definitions of this unethical behavior have been proposed, although the most descriptive is the “FFP” (fabrication, falsification, and plagiarism) model put forward by the United States’ Office of Research Integrity. Research misconduct has many ramifications, which the world's media are all too keen to publicize. Many high-profile cases the world over have demonstrated this lack of ethics in medical research: esteemed professionals and highly regarded institutions have succumbed to the ambitions of a few who, for personal gain, have behaved unethically in pursuit of their own ideals. Although institutions have been set up to confront these issues directly, it would appear that a great deal more is still required on the part of journals and their editors to combat this pattern of behavior. Individuals starting out in very junior positions in medical research ought to be taught the basics of research ethics so that populations are not failed by the very people they turn to for assistance in times of need. This article reviews many of the issues of research misconduct and allows readers to reflect on and think through their own experiences of research. This will hopefully prompt individuals to start asking questions on what is often a poorly discussed topic in medical research.

INTRODUCTION

Medical research is the cornerstone of scientific research. It has the potential to engender a better state of physical and psychological health. Therefore, it is imperative that medical research is genuine and free from bias. When conducting medical research, one must abide by the ethical and moral obligations as outlined by the Nuremberg code in 1947[ 1 ] and the subsequent Declaration of Helsinki 1964 (and later revised in 2002),[ 2 , 3 ] which explain the responsibilities of scientists and physicians when conducting medical research on humans. However, despite the morality underpinning medical research, scientific research has a long history of fraud and deception,[ 4 , 5 , 6 ] with this behavior adversely affecting the very lives researchers are seeking to help.

Additionally, the seriousness of fraud in the biological sciences – science directly influencing the physical and psychological well-being of the individual – should also be acknowledged. As detection policies have been implemented and regulatory bodies have managed an unprecedented increase in misconduct cases, the prevalence of fraud and deceit has become increasingly well documented within research circles.[ 7 ] In recognition of the seriousness of the situation, multiple organizations have been created to deal with the problem.

However, despite the publication of cases in the media and in working sessions of regulatory bodies throughout the world,[ 8 , 9 , 10 , 11 , 12 ] fraud and deception in medical research have often been underreported. One reason for this could be that there is no standard definition of what constitutes scientific deception,[ 5 ] making it more difficult to identify cases and to prevent them from continuing. In order to fully understand this, we must discuss the definitions available to us.

RESEARCH MISCONDUCT

The Oxford English Dictionary describes fraud as “wrongful or criminal deception intended to result in financial or personal gain” and deceit as “the action or practice of deceiving someone by concealing or misrepresenting the truth.”[ 13 ] Research organizations and the literature have defined these behavioral patterns within the umbrella title of “Research Misconduct.”[ 14 ]

An array of definitions of research misconduct is used in the literature, depending on the country of origin. Given the international nature of publications and research, and the cross-fertilization of research across continents through departmental and institutional collaboration in the 21st century, it is surprising that a single global definition has yet to be adopted.[ 14 ]

From the United Kingdom (UK) perspective, following much impetus for change by Stephen Lock,[ 15 ] in 1999 the Royal College of Physicians of Edinburgh hosted the Consensus Conference on Misconduct in Biomedical Research, which aimed to address the issues in research misconduct.[ 16 ] Its definition was the broadest yet from the UK: “Behaviour by a researcher, intentional or not, that falls short of good ethical and scientific standard.” The UK Committee on Publication Ethics (COPE) describes misconduct as the “intention to cause others to regard as true that which is not true.”[ 17 ] Additionally, the United States of America's key regulatory body, the Office of Research Integrity (ORI), defines research misconduct using the FFP model, i.e. the serious aspects of misconduct. These include:[ 18 , 19 ]

  • Fabrication – Making up data or results and recording or reporting them.
  • Falsification – Manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.
  • Plagiarism – Appropriating another person's ideas, processes, results, or words without giving appropriate credit.
  • Research misconduct does not include honest error or differences of opinion.

Consequently, given the breadth of definitions, it is clear that the question “what is misconduct?” arises. Evidently, varying degrees of medical research misconduct do exist, ranging from the serious (i.e. the FFP model) to cases of “ghost” authors and duplication of presentations, often regarded as trivial deviations from standards. Richard Smith has described a “taxonomy of research misconduct” illustrating the spectrum of definitions and their relative seriousness [ Table 1 ].[ 16 ]

A taxonomy of research misconduct*


Fabrication and falsification

As Smith states, fabrication and falsification of data, and neglecting to seek ethical approval for research that involves human participants, are both unethical and go against the spirit of scientific research. However, it is questionable whether a clinical researcher who fabricates data to enrol a terminally ill patient into a trial that ultimately may lead to that individual receiving treatment that may prolong their life should receive the same penalty as someone fabricating data for their own professional gain.

Whilst recognized as morally wrong, it is debateable whether the third branch of the FFP model, plagiarism – the use of published or unpublished material without due acknowledgment of the primary author – constitutes research misconduct in the same way as the fabrication and/or falsification of data. Arguably, the repercussion of plagiarism is merely damage to the ego of the individual whose ideas or words are taken. Moreover, since the work is already published and in the public domain, there is, arguably, no harm in utilizing the same information, saving further expense and time. Daniel David, editor of The Journal of Cognitive and Behavioural Psychotherapies , believes, “if duplication of content… helps the author to reach a new or larger readership…” and “if text recycling within these constraints helps to present the same idea more accurately across several publications, they become legitimate conduct.”[ 20 ]

Referring to the United States’ ORI definition of plagiarism as “unattributed textual copying,” many have questioned its applicability in real-life situations. One definition suggests plagiarism is the repetition of 11 consecutive words or the overlap of strings of 30 letters,[ 21 ] although this is by no means a standard definition.
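The "11 consecutive words" criterion can be made concrete with a toy n-gram check. The following is an illustrative sketch only, not a standard plagiarism-detection tool; the function name and the two example texts are invented:

```python
# Illustrative sketch of the "repetition of 11 words" criterion:
# flag every run of 11 consecutive words that two texts share.

def shared_word_runs(text_a, text_b, run_length=11):
    """Return the word runs of length run_length found in both texts."""
    def runs(text):
        words = text.lower().split()
        return {tuple(words[i:i + run_length])
                for i in range(len(words) - run_length + 1)}
    return runs(text_a) & runs(text_b)

source = ("publication of medical research is the cornerstone for the "
          "propagation and dissemination of medical knowledge")
suspect = ("as others have noted publication of medical research is the "
           "cornerstone for the propagation and dissemination of knowledge")

overlaps = shared_word_runs(source, suspect)
print(len(overlaps))  # the two texts share 3 distinct 11-word runs
```

Under this definition any nonempty result would count as plagiarism, however minor the lifted passage, which illustrates why such mechanical thresholds remain contested.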

Furthermore, “salami-slicing” – the selective use of research-project results to maximize the number of presentations possible – has also been classed as a type of plagiarism by some, but not by others.[ 22 ] The contention here is whether this constitutes misconduct. Berk argues that although it is difficult to qualify the degree of deceit, plagiarism is “a breach of professional ethics that must be explored and unreservedly deplored.”[ 23 ]

Is fraud and deceit in medical research black and white?

When considering the definitions of “deceit” and “deception,” there is little agreement on less serious cases. It is debateable whether dual publication (submitting to several journals simultaneously) or placing an individual's name on the list of authors of a publication when their contribution is minimal (gift/ghost authorship) amounts to the same level of misconduct as fabrication of data, or whether it amounts to misconduct at all. Furthermore, the current lack of funding for research may potentially drive many to commit “deception” in order to reach their goals. However, some may argue that such minor indiscretions can lead to more serious breaches of research conduct. In addition, senior authors who fail to supervise the work are just as culpable for the indiscretions.[ 12 ]

Sismondo et al . and others describe the implications for a nation's health of ghost-authoring by pharmaceutical companies, which exert their financial might in “controlling” and “shaping” crucial steps of research and publication, allowing the pharmaceutical industry to “shape the literature in ways that serve its interests.”[ 24 , 25 ] The important question is how this stream of deception in research can be halted, and who is available to assist in this cause.

NATIONAL BODIES

Following revelations of fraudulent research in the UK, medical editors set up COPE in 1997. It now has over 7000 members worldwide from a variety of academic disciplines and covers a number of significant publishers. Although COPE provides advice, support, and guidance to editors and publishers on publication ethics,[ 17 ] it is unable to offer sanctions other than to expel members from its panel. The UK Research Integrity Office (UKRIO) is another body representing the interests of over 50 universities and organizations dedicated to scientific research.[ 26 ] Set up in 2006, its aims are to:[ 26 ]

  • promote the good governance, management, and conduct of academic, scientific, and medical research;
  • share good practice on how to address poor practice, misconduct, and unethical behavior; and
  • give confidential, independent, and expert advice and guidance about the conduct of academic, scientific, and medical research.

Many medical practitioners undertake research at some point in their careers, with the vast majority of medical schools now incorporating this within the undergraduate curriculum. Although the General Medical Council (GMC) has statutory powers, it has no authority to monitor and regulate a medical practitioner's research conduct.

One of the oldest organizations dealing with research misconduct is the ORI in the United States.[ 18 ] Set up in 1992, it oversees and directs the research integrity activities of the Public Health Service (PHS). With a budget of some $30 billion, the PHS provides significant funds in the areas of health, research, and development, and oversees bodies such as the National Institutes of Health and the Office of Public Health and Science.

PREVALENCE OF RESEARCH MISCONDUCT

There are no accurate data on the prevalence of research misconduct.[ 12 ] The absence of a standardized definition in the global world of publications has reinforced the traditional view that deception is rare.[ 27 ] Koshland goes further, stating that “99.9999% of all reports are accurate and truthful,” and that science should not change its practices, thus allowing the propagation of knowledge.[ 28 ] However, as conveyed by numerous cases in the international media, there is potential for serious harm to a nation where research misconduct takes place.

Surprisingly, some reports suggest that a psychology of deceit develops at a young age, before much exposure to research has been obtained. Taradi et al ., in a survey of 508 medical students, showed that over 90% of the students admitted to engaging in some form of educational dishonesty and over 78% to academic misconduct.[ 29 ] Nilstun et al . contradict this in their study of doctoral students, suggesting that “students appear to be too inexperienced to have cheated by themselves….”[ 30 ] Furthermore, Martinson et al . surveyed a total of 3247 mid-career (majority at the associate professor level or above) and early-career scientists (majority at the postdoctoral level) working in the United States about their research practices. The results showed that, at the serious end of the spectrum (falsifying or fabricating data), the percentage engaging in such activity was low (<2%). However, over 33% of the respondents described involvement in research misconduct that would warrant investigation by their institution or federal agencies. Interestingly, the more senior group demonstrated a greater propensity to engage in questionable activity than did their juniors.[ 7 ]

The first meta-analysis of the prevalence of research misconduct was performed by Fanelli.[ 9 ] Examining only “scientific behaviors that distort scientific knowledge,” he showed that 2% of scientists admitted to serious misconduct (falsification or fabrication of data) at least once, and up to 34% admitted to other questionable research practices. When participants were asked about their colleagues’ practices, the figures were even higher (14% for falsification of data and 72% for other questionable practices).[ 9 ] However, Fanelli suggests these results may represent only a conservative estimate of the real prevalence of research misconduct, an argument echoed by Ranstam et al., whose survey of biostatisticians found a majority of respondents reporting knowledge of at least one fraudulent project in the preceding 10 years.[ 31 ] Geggie reported that the majority of newly qualified medical consultants showed evidence of previous misconduct.[ 32 ] Of the respondents, 18% were either willing to commit research misconduct in the future or were unsure whether they would. This may reflect the 17% of participants who reported having received no training in research ethics despite their seniority.[ 32 ]

Research misconduct can occur at a number of levels – individual researcher, department, institution, journal, and funding body.[ 33 ] Among the reasons for research misconduct is an underlying desire to succeed in science, coupled with a fear of failure.[ 25 ] Securing grants, financial incentives from pharmaceutical companies, and professional career progression are all cited as causes.[ 34 ] Arguably, many researchers and departments have come to equate research success with “quantity” rather than “quality.” The association between the number of publications and suitability for funding or career progression is long established.[ 35 ] When applying for senior posts, surgical trainees are repeatedly questioned on the number of publications achieved, with little regard for the quality of the publication or the journal. Beisiegel et al. and Smith suggest this attitude has contributed to a massive rise in journal titles, many of which are of low quality and poorly maintained.[ 35 , 36 ]

Broad and Wade argue that in order to improve the quality of research, “what is needed is greater competition brought about by a sharp reduction in the number of journals, especially in medicine and biology,” further claiming that “careerism” is the cause of much research fraud. They suggest a greater separation between medical education, which is perceived to create the foundations for students’ cheating behaviors, and medical research.[ 37 ]

There appears to be some disagreement over whether organizations are successfully tackling the root causes of misconduct.

On the issue of authorship, progress has been made. The International Committee of Medical Journal Editors published the “Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Writing and Editing for Biomedical Publication,”[ 38 , 39 ] which bases authorship credit on meeting all three of the criteria below:

  1. substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data;
  2. drafting the article or revising it critically for important intellectual content; and
  3. final approval of the version to be published.

However, these criteria have not been universally adopted by all journals. Some journals go further and request a statement detailing each author’s contribution to the submitted work. Journal editors, working closely with other groups such as biostatisticians and external editors, can play a significant role.[ 33 ] Peer review also helps ensure research quality and can identify misconduct; any suspicion should prompt the editor to request the raw data for verification.

Furthermore, there is a perception that results published in high-impact-factor journals are automatically dependable. Moreover, delayed retractions can result in the propagation of inaccurate information through other works. Wilmshurst points out that the education and training of individual researchers and supervisors is crucial to combating misconduct, as is creating an environment in which whistleblowers can speak out.[ 12 ]

In conclusion, there are evidently a number of reasons why research misconduct takes place: academic pressure, personal desire for fame, “sloppy” science, financial gain, and an inability to distinguish right from wrong, to name a few. This demonstrates the need for a new system of prevention, investigation, and education to curtail research misconduct and thereby instill in the general public a renewed sense of trust and respect for medical research.

To prevent research misconduct, further discussion of its definition and its various facets is needed, leading ultimately to an international consensus on a single, universal definition of what constitutes research misconduct. Additionally, ethical standards need to be made explicit so that researchers can determine whether their work breaches particular codes. There also needs to be an alleviation of the pressure on researchers, as well as greater oversight of research sponsored by outside organizations.

Moreover, organizations’ investigations into research irregularities must be fair, prompt, and transparent, and must allow retractions to be made promptly once evidence of misconduct has been confirmed. There also needs to be greater protection for whistleblowers, along with a guaranteed right of appeal. Investigations can be conducted at either the institutional or the national level, depending on the gravity of the misconduct; in either case, the organizations involved need effective resources and a recognized standing in the wider community to ensure public confidence.

Lastly, researchers and future researchers urgently need to be educated on what constitutes research misconduct and on the seriousness of its repercussions. There is limited publicity and information about the regulatory bodies in medical institutions and workplaces. It has been almost 25 years since Lock suggested a closer look at this issue,[ 40 ] yet we are still faced with cases of fraud on an epic scale.[ 41 ] It may be too late to change the ways of our seniors,[ 32 ] but we have a responsibility to act. As Martinson et al. stated, “it is time to consider what aspects of the research environment are most salient to research integrity, which aspects are most amenable to change, and what changes are likely to be most fruitful in ensuring integrity in science.”[ 7 ] Educating potential researchers at an early stage (e.g. at medical school) in research ethics is essential to solving this problem and to ensuring that careers are built on honesty and integrity.

Source of Support: Nil

Conflict of Interest: None declared.
