
Responding, Evaluating, Grading

Rubric for a Research Proposal

Matthew Pearson - Writing Across the Curriculum


UGRA Proposal Guidelines

NOTICE: The proposal format requirements have changed slightly since last semester.

Your UGRA proposal will be evaluated using one of the following rubrics provided to faculty reviewers: Research Project Rubric (.docx) or Creative Project Rubric (.docx). We recommend that you read over the appropriate rubric and use it while revising your proposal.

Your proposal should:

  • be no more than 5 total pages, including your text content (~2000 words), figures, images, image captions, references, footnotes, appendices, etc.
  • be single-spaced and typed using Times New Roman 12 point font for main content. Additional text can be used as needed to support figures, images, captions, footnotes, etc.
  • have 1 inch margins top, bottom, right, and left.
  • have a title at the top of the first page. Please do not include your name.
  • include the following sections: abstract/summary, background and introduction, methods and approach, applicant's preparation, conclusion, and references. Optional content may include figures, charts, and images.
  • be saved as a PDF with file name LastnameProposal.PDF. For example, SmithProposal.PDF

Note: Proposals not meeting the criteria outlined above may not be considered for review.
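If you would like a quick self-check of the mechanical requirements before submitting, a short script can catch the most common slips. The sketch below is illustrative only and is not an official UGRA tool; it assumes the third-party pypdf package is installed and checks just the file-name pattern and the five-page limit (margins, font, and word count still need a manual check).

```python
# Illustrative self-check for the formatting criteria above (not an official UGRA tool).
# Assumes the third-party "pypdf" package is installed: pip install pypdf
import os
import re
import sys

from pypdf import PdfReader

MAX_PAGES = 5
# Expected file name, e.g. SmithProposal.PDF
NAME_PATTERN = re.compile(r"^[A-Za-z]+Proposal\.pdf$", re.IGNORECASE)


def check_proposal(path: str) -> list[str]:
    """Return a list of problems found with the proposal file."""
    problems = []
    filename = os.path.basename(path)
    if not NAME_PATTERN.match(filename):
        problems.append(f"File name '{filename}' does not match LastnameProposal.PDF")
    pages = len(PdfReader(path).pages)
    if pages > MAX_PAGES:
        problems.append(f"Proposal is {pages} pages; the limit is {MAX_PAGES}")
    return problems


if __name__ == "__main__":
    for message in check_proposal(sys.argv[1]) or ["No problems found."]:
        print(message)
```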

Guidelines for:

  • Students working in groups: Students applying as part of a group need to each submit their own proposal. Proposals should not be written together and, therefore, should not share written content (i.e., identical sentences or paragraphs). Reviewers must be able to see that each student has a full understanding of the project, since each student will receive an individual scholarship.
  • Students who have previously received a UGRA: If applying for a second award, students should submit a full proposal even when continuing the same project. This proposal needs to include a brief update on progress in either the Background and Introduction section or the Methods and Approach section; the Methods and Approach section should then describe the next steps of the project. Much of your proposal may stay the same, but be sure to include any newly relevant background information if the project has shifted direction or new information has been published.

Your UGRA proposal is required to include the following sections:

1. Abstract/Summary 

Purpose: In one paragraph, summarize your proposal. Give the reader a general sense of the field, the problem or idea your work will address, and how you will accomplish this project.

Guiding Questions

  • Why will you do this work?
  • What will you do (think broadly for this section)?
  • And how will you do it?
  • This is your chance to make a good first impression on your readers; it should clearly convey what your project is and why it is important enough to fund.
  • Connect your project to the big picture.
  • This section is a summary of your entire proposal, so write it last.
  • For tips on writing research proposals, see The Professor Is In blog's "Foolproof Research Grant Template," as well as posts on how to talk about the big issue in your project and the contribution to the literature.
  • Visit the KU Writing Center’s webpage.

2. Background and Introduction 

Purpose: This section has two goals: 1) summarize the work that’s been done in your area and 2) explain how your work will contribute to this field of study. In many fields, this section is referred to as the literature review. It must include citations of previous research or creative work related to your topic.

  • What is already known or has been done in this area?
  • For creative projects: Which artists have done similar work or explored similar themes? 
  • How will this project add to what is already known or has been done?
  • For creative projects: What is your creative vision for the project? What is the inspiration for your project?
  • This section is commonly referred to as a literature review.  The purpose is to position your project within the academic conversation about your topic.
  • You must cite the published work that you review in this section and list it in the References section. Proposals that do not cite other works in this section and include them in the References section will not be funded.
  • Focus on the key publications needed to outline the current state of the field; typical UGRA proposals include 5-10 sources.
  • Be sure to synthesize your sources; this section should read more like a story than a list. Avoid direct quotes; they make it harder for you to synthesize multiple works into a story. Show how your project continues the story by explaining your contribution.
  • For some tips, watch this video about the B.E.A.M. system for organizing sources.

3. Methods and Approach

Purpose: Describe what you will actually do for your project and why you will take this approach. You need to include a timeline that clearly details the work that you will complete during the semester of the award.

  • What will you actually do? What data will you be using?  How will you collect it?  How will you analyze it? What materials or resources will you need? 
  • What are the major steps to complete this project?
  • How will the results of these methods allow you to address your original question?
  • Is the project that you’ve outlined feasible in one semester?
  • Will you work with human subjects? If so, how will you meet the requirements of the KU Human Subjects Committee (HSCL)? Consult your mentor for help with this process.
  • For creative projects: How will you approach and get feedback on your work?
  • Why did you select the particular methods/techniques you’ve described?
  • Be specific to show the reviewer that you have thought through the process and are prepared to begin your project.
  • Relevant details you might mention (depending on project type) include: descriptions of methods and rationale for choosing them, any software or equipment you’ll use and why, a description of your creative process, and/or controls for proposed experiments.
  • Explain the choices you have made in designing your project.  Why are you choosing this method over another?  Are there other studies that have used a similar approach?  Show the reviewer that you understand not just what you are doing for your project, but why you are doing it.
  • Use the timeline to help you and the reviewers ensure that you are proposing a feasible project for one semester. A chart or table is an easy way to provide the timeline.
  • If the project is part of a larger research program or a long-term interest, make clear what part of the larger project will be completed during the one semester term of the grant.
  • Cite your method's origin paper or other work using this technique to show that your approach is standard in the field.
  • Use a first-person narrative here, especially when you are working as part of a research group, so reviewers can clearly tell what you will do versus what others will do.
  • Don't forget to describe your data analysis plan, especially any statistical methods you plan to use, and how this analysis will tie back to the original question you set out to address (a brief hypothetical sketch follows this list). Omitting the analysis plan is a common mistake that reviewers catch.
  • If you are working on a multi-semester project, be sure to provide the most details about the award period that you are applying for.  The reviewer will want to see what work would be funded if you had the award.
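As a purely hypothetical illustration of what a concrete analysis plan can look like (this sketch is not part of the UGRA guidelines, and the question, variable names, and data are invented), a proposal might name the planned test and how its result answers the original question:

```python
# Hypothetical analysis plan sketch; the study question and data are placeholders.
# Example question: do seedlings under treatment X grow taller than untreated controls?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder measurements (cm); a real proposal would describe how, and how many,
# measurements will be collected.
control = rng.normal(loc=12.0, scale=2.0, size=20)
treatment = rng.normal(loc=14.0, scale=2.0, size=20)

# Planned analysis: two-sided Welch's t-test comparing mean heights,
# which ties the result directly back to the original question about the treatment effect.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```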

4. Applicant's Preparation

Purpose: Describe your preparation and qualifications to complete this project.

  • What experiences, coursework, or training have you done that will give you the needed background knowledge and skills to undertake the project?
  • Did you complete coursework that is relevant? What specific skills or background information did you learn in these classes that prepared you for your project?
  • Did you learn a language, technique, or laboratory skill that you’ll use?
  • Or have you already been doing faculty mentored research or independent study on this topic?
  • Do not skimp on this section; be sure to write at least one paragraph here to make the case that you can complete this project.  The reviewer needs to be able to see whether you have the skills and background knowledge needed to complete the project.
  • Rather than telling the reviewer that you are qualified, show them.  Saying "I am prepared to do this research project" is not as convincing as saying "I used X technique in my BIOL 123 class, earned an A in my BIOL 456 course, and have already begun preparations to do Y procedure in my work in Prof. Z's lab this semester."
  • Keep in mind that UGRA reviewers will not see your transcript as part of your UGRA application, so if you have taken relevant courses you should mention them, note the grades you received, and explain how they will help you complete the proposed project.
  • If you do not already have a skill that you will need to complete the project, be sure to address how you will get that knowledge or training. 

5. Conclusion

Purpose: Show a clear connection between the different parts of your proposal.  Summarize key points of your proposal for one final reminder of what you’re doing, how you’ll do it, and why. This is your final sales pitch to the reviewer and a good time to return to how your project relates to the big picture.

  • How will the results and outcomes of your proposed work tie back to your original intent?   In other words, explain how and why your proposed approach will help you achieve your goal. 
  • How will you disseminate your work? 
  • What criteria will you use to evaluate your success?
  • Clearly show the reviewer the connections between your initial intent, proposed work, and anticipated outcomes.  You want to convince your reviewer that the overall goals of your project are important, and that the plan you’ve outlined will move you toward those goals.

6. References

Purpose: List the materials you are citing in your proposal. 

  • Did you list every source you cited in the text?
  • Did you include the most important and relevant sources for your project?
  • Use the citation style most commonly used in your discipline for both the in-text citations and the reference list.
  • Your references do not count toward your 2,000-word limit.
  • You should not include any references that are not cited in the text of the proposal.

7. (Optional) Figures, Charts, and Images

Purpose: You may include any figures, charts, images, etc. that are helpful in explaining your work, either as an appendix or within the body of your work. 

  • Is there an idea you’re trying to communicate in words that would be easier to understand in picture form?
  • Do you have portfolio pieces that will demonstrate the type of artwork or product you are proposing to create?
  • Do you have a survey or interview tool you’d like to reference as an appendix?
  • Do you have preliminary data showing that a new technique works?
  • Keep it simple. Only include information that is needed to understand the proposal. Don’t include a figure or image just to have one.
  • Any figures, charts, images, and examples of artwork need to be referred to within the text of the proposal. Without explanation, the reader does not know why you are including them.
  • Label any figures, charts, and images with a descriptive title, caption, and/or legend for easy reference.


Leveling the field: Development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts

Jaime Jordan

1 Department of Emergency Medicine, David Geffen School of Medicine at UCLA, Los Angeles, California, USA

2 Department of Emergency Medicine, Ronald Reagan UCLA Medical Center, Los Angeles, California, USA

Laura R. Hopson

3 Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan, USA

Caroline Molins

4 AdventHealth Emergency Medicine Residency, Orlando, Florida, USA

Suzanne K. Bentley

5 Icahn School of Medicine at Mount Sinai, New York, New York, USA

Nicole M. Deiorio

6 Virginia Commonwealth University School of Medicine, Richmond, Virginia, USA

Sally A. Santen

7 University of Cincinnati College of Medicine, Cincinnati, Ohio, USA

Lalena M. Yarris

8 Department of Emergency Medicine, Oregon Health & Science University, Portland, Oregon, USA

Wendy C. Coates

Michael A. Gisondi

9 Department of Emergency Medicine, Stanford University, Palo Alto, California, USA

Abstract

Research abstracts are submitted for presentation at scientific conferences; however, criteria for judging abstracts are variable. We sought to develop two rigorous abstract scoring rubrics for education research submissions reporting (1) quantitative data and (2) qualitative data and then to collect validity evidence to support score interpretation.

We used a modified Delphi method to achieve expert consensus for scoring rubric items to optimize content validity. Eight education research experts participated in two separate modified Delphi processes, one to generate quantitative research items and one for qualitative. Modifications were made between rounds based on item scores and expert feedback. Homogeneity of ratings in the Delphi process was calculated using Cronbach's alpha, with increasing homogeneity considered an indication of consensus. Rubrics were piloted by scoring abstracts from 22 quantitative publications from AEM Education and Training “Critical Appraisal of Emergency Medicine Education Research” (11 highlighted for excellent methodology and 11 that were not) and 10 qualitative publications (five highlighted for excellent methodology and five that were not). Intraclass correlation coefficient (ICC) estimates of reliability were calculated.

Each rubric required three rounds of a modified Delphi process. The resulting quantitative rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Cronbach's α for the third round = 0.922, ICC for total scores during piloting = 0.893). The resulting qualitative rubric contained seven items: quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Cronbach's α for the third round = 0.913, ICC for the total scores during piloting = 0.788).

We developed scoring rubrics to assess quality in quantitative and qualitative medical education research abstracts to aid in selection for presentation at scientific meetings. Our tools demonstrated high reliability.

INTRODUCTION

The scientific abstract is the standard method for researchers to communicate brief written summaries of their findings. The written abstract is the gatekeeper for selection for presentation at professional society meetings. 1 A research presentation serves many purposes including dissemination of new knowledge, an opportunity for feedback, and the prospect of fostering an investigator's academic reputation. Beyond the presentation, abstracts, as written evidence of scientific conference proceedings, often endure through publication in peer‐reviewed journals. Because of the above, abstracts may be assessed in a number of potentially high‐stakes situations.

Abstracts are selected for presentation at conferences through a competitive process based on factors such as study rigor, importance of research findings, and relevance to the sponsoring professional society. Prior literature has shown poor observer agreement in the abstract selection process. 2 Scoring rubrics are often used to guide abstract reviewers in an attempt to standardize the process, reduce bias, support equity, and promote quality. 3 There are limited data describing the development and validity evidence of such scoring rubrics but the data available suggest that rubrics may be based on quality scoring tools for full research reports and published guidelines for abstracts. 2 , 4 , 5 Medical conferences often apply rubrics designed for judging clinical or basic science submissions, which reflect standard hypothesis‐testing methods and often use a single subjective Gestalt rating for quality decisions. 6 This may result in the systematic exclusion of studies that employ alternate, but equally rigorous methods, such as research in medical education. Existing scoring systems, commonly designed for biomedical research, may not accurately assess the scope, methods, and types of results commonly reported in medical education research abstracts, which may lead to a disproportionately high rate of rejection of these abstracts. There are additional challenges in reviewing qualitative research abstracts using a standard hypothesis‐testing rubric. In these qualitative studies, word‐count constraints may limit the author's ability to convey the study's outcome appropriately. 7 It is problematic for qualitative studies to be constrained to a standard quantitative abstract template, which may lead to low scores by those applying the rubric and a potential systematic bias against qualitative research.

Prior literature has described tools to assess quality in medical education research manuscripts, such as the Medical Education Research Study Quality Instrument (MERSQI) and the Newcastle‐Ottawa Scale–Education (NOS‐E). 8 A limited attempt to utilize the MERSQI tool to retrospectively assess internal medicine medical education abstracts achieving manuscript publication showed increased scores for the journal abstract relative to the conference abstract. 4 However, the MERSQI and similar tools were not developed specifically for judging abstracts, and there is a lack of published validity evidence to support score interpretation based on these tools. To equitably assess the quality of education research abstracts to scholarly venues, which may have downstream effects on researcher scholarship, advancement, and reputation, there is a need for a rigorously developed abstract scoring rubric that is based on a validity evidence framework. 9 , 10

The aim of this paper is to describe the development and pilot testing of a dedicated rubric to assess the quality of both quantitative and qualitative medical education research studies. We describe the development process, which aimed to optimize content and response process validity, and initial internal structure and relation to other variables validity evidence to support score interpretation using these instruments. The rubrics may be of use to researchers developing studies, as well as to abstract and paper reviewers, and may be applied to medical education research assessment in other specialties.

METHODS

Study design

We utilized a modified Delphi technique to achieve consensus on items for a scoring rubric to assess quality of emergency medicine (EM) education research abstracts. The modified Delphi technique is a systematic group consensus strategy designed to increase content validity. 11 Through this method we developed individual rubrics to assess quantitative and qualitative EM medical education research abstracts. This study was approved by the institutional review board of the David Geffen School of Medicine at UCLA.

Study setting and population

The first author identified eight EM education researchers with successful publication records from diverse regions across the United States and invited them to participate in the Delphi panel. Previous work has suggested that six to 10 experts is an appropriate number for obtaining stable results in the modified Delphi method. 12 , 13 , 14 All invited panelists agreed to participate. The panel included one assistant professor, two associate professors, and five professors. All panelists serve as reviewers for medical education journals and four hold editorial positions. We collected data in September and October 2020.

Study protocol

We followed Messick's framework for validity, which includes five types of validity evidence: content, response process, internal structure, relation to other variables, and consequential. 15 Our study team drafted initial items for the scoring rubrics after a review of the literature and existing research abstract scoring rubrics to optimize content validity. We created separate items for research abstracts reporting quantitative and qualitative data. We sent the draft items to the Society for Academic Emergency Medicine (SAEM) education committee for review and comment to gather stakeholder feedback and for further content and response process validity evidence. 16 One author (JJ) who was not a member of the Delphi panel then revised the initial lists of items based on committee feedback to create the initial Delphi surveys. We used an electronic survey platform (SurveyMonkey) to administer and collect data from the Delphi surveys. 17 Experts on the Delphi panel rated the importance of including each item in a scoring rubric on a 1 to 9 Likert scale, with 1 labeled as “not at all important” and 9 labeled as “extremely important.” The experts were invited to provide additional written comments, edits, and suggestions for each item. They were also encouraged to suggest additional items that they felt were important but not currently listed. We determined a priori that items with a mean score of 7 or greater advanced to the next round and items with a mean score of 3 or below were eliminated. The Delphi panel moderator (JJ) applied discretion for items scoring between 4 and 6, with the aim of both adhering to the opinions of the experts and creating a comprehensive scoring rubric. For example, if an item received a middle score but had comments supporting inclusion in a revised form, the moderator would make the suggested revisions and include the item in the next round.
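As a concrete illustration of the advancement rule just described, the short sketch below (ours, not the authors' code) classifies an item by its mean panel rating: a mean of 7 or greater advances, 3 or below is eliminated, and scores between 4 and 6 are left to moderator discretion.

```python
# Illustrative sketch of the item-advancement rule described above;
# thresholds are taken from the text, the code is not from the study.
def triage_item(ratings: list[float]) -> str:
    """Classify a Delphi item from its panel ratings on the 1-9 importance scale."""
    mean = sum(ratings) / len(ratings)
    if mean >= 7:
        return "advance"           # carried forward to the next round
    if mean <= 3:
        return "eliminate"         # dropped from the rubric
    return "moderator_discretion"  # 4-6: revised/included based on comments


print(triage_item([8, 9, 7, 8, 9, 7, 8, 8]))  # advance
print(triage_item([5, 6, 4, 5, 6, 5, 5, 4]))  # moderator_discretion
```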

Each item consisted of a stem and anchored choices with associated point‐value assignments. Panelists commented on the stems, content, and assigned point value of choices and provided narrative unstructured feedback. The moderator made modifications between rounds based on item scores and expert feedback. After each round, we provided panelists with aggregate mean item scores, written comments, and an edited version of the item list derived from the responses in the previous round. The panelists were then asked to rate the revised items and provide additional edits or suggestions.

We considered homogeneity of ratings in the Delphi process to be an indication of consensus. After consensus was achieved, we created final scoring rubrics for quantitative and qualitative medical education research abstracts. We then piloted the scoring rubrics to gather internal structure and further response process validity evidence. Five raters from the study group (JJ, LH, MG, CM, SB) participated in piloting. We piloted the final quantitative research rubric by scoring abstracts from publications identified in the most recent critical appraisal of EM education research by Academic Emergency Medicine / AEM Education and Training, “Critical Appraisal of Emergency Medicine Education Research: The Best Publications of 2016”. 18 All 11 papers highlighted for excellent methodology in this issue were included in the pilot. 18 Additionally, we included an equal number of randomly selected citations that were included in the issue but not selected as top papers, for a total of 22 quantitative publications. 18 Given the limited number of qualitative studies cited in this issue of the critical appraisal series, we chose to pilot the qualitative rubric on publications from this series from the last 5 years available (2012–2016). 18 , 19 , 20 , 21 , 22 We randomly selected one qualitative publication that was highlighted for excellent methodology and one that was not from each year for a total of 10 qualitative publications. 18 , 19 , 20 , 21 , 22 The same five raters who performed the quantitative pilot also conducted the qualitative pilot.

Data analysis

We calculated and reported descriptive statistics for item scoring during Delphi rounds. We used Cronbach's alpha to assess homogeneity of ratings in the Delphi process. Increasing homogeneity was considered to be an indication of consensus among the expert panelists. We used intraclass correlation coefficient (ICC) estimates to assess reliability among raters during piloting, based on a mean-rating (k = 5), absolute-agreement, two-way random-effects model. We performed all analyses in SPSS (IBM SPSS Statistics for Windows, Version 27.0).
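The study performed these analyses in SPSS. Purely as an illustration of the statistics involved (not the authors' code, and using random placeholder data rather than the study data), the sketch below computes Cronbach's alpha from a score matrix with NumPy and the two-way random-effects, absolute-agreement, mean-of-k-raters ICC (often labeled ICC2k) with the third-party pingouin package.

```python
# Illustrative re-creation of the reported statistics; the study used SPSS,
# and the ratings below are random placeholders, not the study data.
import numpy as np
import pandas as pd
import pingouin as pg


def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


rng = np.random.default_rng(1)
ratings = rng.integers(10, 30, size=(10, 5)).astype(float)  # 10 abstracts x 5 raters

print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")

# ICC: two-way random effects, absolute agreement, mean of k = 5 raters (ICC2k).
long_scores = pd.DataFrame(
    {
        "abstract": np.repeat(np.arange(10), 5),
        "rater": np.tile(np.arange(5), 10),
        "score": ratings.flatten(),
    }
)
icc = pg.intraclass_corr(data=long_scores, targets="abstract", raters="rater", ratings="score")
print(icc.loc[icc["Type"] == "ICC2k", ["Type", "ICC"]])
```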

RESULTS

Quantitative rubric

Three Delphi rounds were completed, each with a 100% response rate. Mean item scores for each round are depicted in Table 1. After the first round, three items were deleted, one item was added, and five items underwent wording changes. After the second round, one item was deleted and eight items underwent wording changes. After the third round, items were reordered for flow and ease of use, but no further changes were made to content or wording. Cronbach's alpha for the third round was 0.922, indicating high internal consistency. The final rubric contained nine items: quality of objectives, appropriateness of methods, outcomes, data analysis, generalizability, importance to medical education, innovation, quality of writing, and strength of conclusions (Data Supplement S1, Appendix S1, available as supporting information in the online version of this paper at http://onlinelibrary.wiley.com/doi/10.1002/aet2.10654/full). The ICC for the total scores during piloting was 0.893, indicating excellent agreement. ICCs for individual rubric items ranged from 0.406 to 0.878 (Table 3).

Table 1. Items and mean scores of expert review during the Delphi process for the quantitative scoring rubric.

Table 3. Inter-rater reliability results during piloting.

Qualitative rubric

Three Delphi rounds were completed, each with a 100% response rate. Mean item scores for each round are depicted in Table 2. After the first round, two items were deleted, one item was added, and nine items underwent wording changes. After the second round, three items were deleted and four underwent wording changes. After the third round, no further changes were made. The resulting tool contained seven items reflecting the domains of quality of study aims, general methods, data collection, sampling, data analysis, writing quality, and strength of conclusions (Appendix S2). Cronbach's alpha for the third round was 0.913, indicating high internal consistency. The ICC for the total scores during piloting was 0.788, indicating good agreement. The item on writing quality had an ICC of –0.301, likely due to the small scale of the item and the sample size leading to limited variance. ICCs for the remainder of the items ranged from 0.176 to 0.897 (Table 3).

Table 2. Items and mean scores of expert review during the Delphi process for the qualitative scoring rubric.

DISCUSSION

We developed novel and distinct abstract scoring rubrics for assessing quantitative and qualitative medical education abstract quality through a Delphi process. It is important to evaluate medical education research abstracts that utilize accepted education methods as a distinctly different class than basic, clinical, and translational research. Through our Delphi and piloting processes we have provided multiple types of validity evidence in support of these rubrics aligned with Messick's framework including content, response process, and internal structure. 15 Similar to other tools assessing quality in medical education research, our rubrics assess aspects such as study design, sampling, data analysis, and outcomes that represent the underpinnings of rigorous research. 8 , 23 , 24 , 25 , 26 Unlike many medical education research assessments published in the literature, our tool was designed specifically for the assessment of abstracts rather than full‐text manuscripts, and therefore the specific item domains and characteristics reflect this unique purpose.

We deliberately created separate rubrics for abstracts reporting quantitative and qualitative data because each has unique methods. When designing a study, education researchers must decide the best method to address their questions. Often, in the exploratory phase of inquiry, a qualitative study is the most appropriate choice to identify key topics that merit further study. These often may be narrow in scope and may employ one or more qualitative methods (e.g., ethnography, focus groups, personal interviews). The careful and rigorous analysis may reveal points that can be studied later via quantitative methods to test a hypothesis gleaned during the qualitative phase. 27 Specific standards for reporting on qualitative research have been widely disseminated and are distinct from standards for reporting quantitative research. 28 Even an impeccably designed and executed qualitative study would fail to meet major criteria for excellent quantitative studies. For example, points may be subtracted for lack of generalizability or conduct of the qualitative study in multiple institutions as well as for the absence of common quantitative statistical analytics. The qualitative abstract itself may necessarily lack the common structure of a quantitative report and lead to a lower score. The obvious problem is that a well‐conducted study might not be shared with the relevant research community if it is judged according to quantitative standards. A similar outcome would occur if quantitative work were judged by qualitative standards; therefore, we advocate for using scoring rubrics specific to the type of research being assessed.

Our work has several possible applications. The rubrics we developed may be adopted as scoring tools for medical education research studies that are submitted for presentation to scientific conferences. The presence of specific scoring rubrics for medical education research may address disparities in acceptance rates and ensure presentation of rigorously conducted medical education research at scientific conferences. Further, publication of abstract scoring rubrics such as ours sets expectations for certain elements to be included and defines an acceptable level of submission quality. Dissemination and usage of the rubrics may therefore help improve research excellence. The rubrics themselves can serve as educational tools in resident and faculty training. For example, the rubrics could serve as illustrations or practice material in teaching how to prepare a strong abstract for submission. The inclusive wording of the items allows the rubrics to be adapted to medical education work in any medical specialty. Medical educators may also benefit from using the methods described here to create their own scoring rubrics or provide evidence‐based best practice approaches for other venues. Finally, this study provides a tool that could lay the groundwork for future scholarship on assessing the quality of educational research.

LIMITATIONS

Our study has several limitations. First, the modified Delphi technique is a consensus technique that can force agreement of respondents, and the existence of consensus does not denote a correct response. 11 Since the method is implemented electronically, there is limited discussion and elaboration. Second, the team of experts were all researchers in EM; therefore, the rubrics may not generalize to other specialties. The rubrics were intended for quantitative and qualitative education research abstract submissions, so they may not perform well for abstracts that include both quantitative and qualitative data or those focused on early work, innovations, instrument development, validity evidence, or program evaluation. Finally, there are two limitations to the pilot testing. An a priori power calculation to determine sample size was not possible since the rubrics were novel. The ICCs of individual items on the scoring rubrics were variable, and we chose not to eliminate items with low ICCs given the small sample size during piloting and a desire to create a tool comprehensive of key domains. Future studies of use of these tools incorporating larger samples may provide data for additional refinement. Faculty who piloted the rubrics were familiar with the constructs and rubrics, and it is not known how the rubrics would have performed with general abstract reviewers nor what training might be required. The success of separate rubrics may rely on the expertise of the reviewers in the methodology being assessed.

We offer two medical education abstract scoring rubrics with supporting preliminary reliability and validity evidence. Future studies could add additional validity evidence including use with trained and untrained reviewers and relationship to other variables, e.g., a comparison between rubric scores and expert judgment. Additional studies could be performed to provide consequential validity evidence by comparing the number and quality of accepted medical education abstracts before and after the rubric's implementation or whether the number of abstracts that eventually lead to publication increases.

CONCLUSIONS

Using the modified Delphi technique for consensus building, we developed two scoring rubrics to assess quality in quantitative and qualitative medical education research abstracts with supporting validity evidence. Application of these rubrics demonstrated high reliability.

CONFLICTS OF INTEREST

The authors have no potential conflicts to disclose.

AUTHOR CONTRIBUTIONS

Jaime Jordan and Michael A. Gisondi conceived the study. Jaime Jordan, Michael A. Gisondi, Laura R. Hopson, Caroline Molins, and Suzanne K. Bentley contributed to the design of the study. Jaime Jordan, Laura R. Hopson, Caroline Molins, Suzanne K. Bentley, Nicole M. Deiorio, Sally A. Santen, Lalena M. Yarris, Wendy C. Coates, and Michael A. Gisondi contributed to data collection. Jaime Jordan analyzed the data. Jaime Jordan, Laura R. Hopson, Caroline Molins, Suzanne K. Bentley, Nicole M. Deiorio, Sally A. Santen, Lalena M. Yarris, Wendy C. Coates, and Michael A. Gisondi contributed to drafting of the manuscript and critical revision.

Supporting information

Data Supplement S1 . Supplemental material.

ACKNOWLEDGMENTS

The authors acknowledge that this project originated to meet an SAEM Education Committee Objective and thank all the committee members for their support of this work.

Jordan J, Hopson LR, Molins C, et al. Leveling the field: development of reliable scoring rubrics for quantitative and qualitative medical education research abstracts. AEM Educ Train. 2021;5:e10654. doi:10.1002/aet2.10654

Presented at Society for Academic Emergency Medicine Virtual Meeting, May 13, 2021.

Supervising Editor: Esther H. Chen, MD.
