National Academies Press: OpenBook

Testing Teacher Candidates: The Role of Licensure Tests in Improving Teacher Quality (2001)

Appendix F: Alternative Assessment Case Studies

PERFORMANCE ASSESSMENT OF EXPERIENCED TEACHERS BY THE NATIONAL BOARD FOR PROFESSIONAL TEACHING STANDARDS

The National Board for Professional Teaching Standards (NBPTS) provides an example of a large-scale, high-stakes performance assessment of teaching that draws on portfolio and assessment center exercises. While NBPTS assessments are intended for voluntary certification of experienced teachers (teachers who have been practicing in their subject areas for at least three years), this case is relevant to the committee’s focus on assessment of beginning teaching in at least three ways. First, it provides an example of a model that states may choose or have chosen to emulate. Second, states may decide to grant licenses to candidates certified by the NBPTS. Finally, NBPTS certification is viewed by its proponents as an integral phase in a teacher’s career development that can and should be consonant with earlier phases of assessment and development.

The NBPTS is an independent organization governed by a 63-member board of directors, most of whom are classroom teachers. Its mission is to “establish high and rigorous standards for what accomplished teachers should know and be able to do, to develop and operate a national voluntary system to assess and certify teachers who meet these standards, and to advance related education reforms for improving student learning in American schools” < www.nbpts.org >. In addition, NBPTS seeks to forge “a national professional consensus, to reliably identify teachers who meet [these] standards,” and to communicate “what accomplished teaching looks like” (Moss and Schutz, 1999:681). The NBPTS is in the process of developing standards and assessments for more than 30 certification fields identified by the subject or subjects taught and by the developmental level of the students.

For each certificate, development work starts by articulating a set of content standards, based on the five core propositions set forth in the NBPTS central policy statement (National Board for Professional Teaching Standards, 1996:2–3; see Box F-1). The drafting of the content standards for each certificate based on these core propositions is handled by a committee composed primarily of teachers experienced in the relevant subject area along with experts in child development, teacher education, and the relevant academic discipline. Public review and comment are obtained for the content standards, and the feedback received is used in the final revision of the standards (Moss and Schutz, 1999).

The final version of a content standards document states that the standards “represent a professional consensus on the critical aspects of practice that distinguish exemplary teachers in the field from novice or journeymen teachers. Cast in terms of actions that teachers take to advance student outcomes, these standards also incorporate the essential knowledge, skills, dispositions, and commitments that allow teachers to practice at a high level” (e.g., National Board for Professional Teaching Standards, 1996:1). Moss and Schutz (1999:682–683) describe the Early Adolescence/English Language Arts Standards:

[T]here are 14 distinct Early Adolescence/English Language Arts (EA/ELA) Standards…Standard II states that:

“Accomplished EA/ELA teachers set attainable and worthwhile learning goals for students and develop meaningful learning opportunities while extending to students an increasing measure of control over how those goals are pursued.”

This standard is then elaborated into a full-page description that includes statements such as, “Educational goal-setting is an interactive process in the middle-grades English teacher’s classroom…. These activities often include a strong mixture of student involvement and direction”; “in carrying out learning direction activities, accomplished teachers adjust their practice, as appropriate, based on student feedback and provide many alternative avenues to the same learning destinations”; or “the planning process is inclusive, no one is allowed to disappear.”

Box F-2 provides an overview of the standards for EA/ELA. These content standards are then used to guide all aspects of assessment development for that certification assessment.

Once the content standards are created for any given certificate, the assessment developers (now primarily at Educational Testing Service, or ETS) are joined by a second committee of approximately eight experienced teachers in the development and pilot testing of assessment tasks and the accompanying rubrics that will be used to score candidates’ performances. Although each assessment task is designed in light of the standards specific to a given certificate, NBPTS has adopted a framework that generalizes across certificates to guide the work of all development teams.

Each assessment consists of two major parts: a portfolio to be completed by candidates in their home schools and a half-day of testing at an assessment center. The school-based portfolio consists of (1) three classroom-based entries (two built around videotapes of the candidate’s teaching and one documenting the candidate’s practice through student work) and (2) one entry that documents the candidate’s work with students’ families and the community and collaboration with other professionals. The six assessment center exercises require candidates to demonstrate their knowledge of subject matter content.

Box F-3 provides a detailed overview of the EA/ELA assessment tasks. A formal multistate pilot test is administered before the assessments are released for the first operational use.

Each assessment task is scored in accordance with a rubric prepared during the task development phase. NBPTS scoring rubrics encompass four levels of performance on a particular task, with the second-highest level designated as meeting the standards of accomplishment. Further, the terminology used in each scoring rubric closely mirrors the language of the relevant content standards.

Following initial use of the assessment, at least three extensively trained assessors and other teaching experts select a sample of responses to be used in training and certifying the scorers. The small group charged with selecting the benchmarks is instructed to choose them so that assessors see that there are different ways to achieve a score. Those who select benchmarks and score portfolios complete a series of training exercises designed to help them identify and control any personal biases that might influence their evaluations.

Most exercises are scored independently by two assessors, with a third, more experienced assessor used to adjudicate scores that differ by more than a prespecified amount. For some exercises, where interrater reliability is deemed sufficient, only a sample of exercises is double scored. A weighting strategy is employed to combine scores on the exercises into a total score on the assessment; the strategy is chosen by another committee of teachers from among four predetermined options. To make certification decisions (accomplished/not accomplished), the total score is compared to a predetermined passing score that is uniform across certificates. This performance standard is equivalent to receiving a just-passing score on each of the exercises, although high scores on one exercise can compensate for low scores on others.
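The compensatory logic described above (a weighted total compared to a uniform passing score, so that strength on one exercise can offset weakness on another) can be sketched in a few lines. The exercise names, weights, and cut score below are hypothetical illustrations, not NBPTS’s actual values, which are set by its teacher committees.

```python
# Illustrative sketch of a compensatory, weighted scoring model.
# All names, weights, and the passing score are hypothetical.

def weighted_total(scores, weights):
    """Combine exercise scores (each on a 4-point rubric) into a weighted average."""
    assert scores.keys() == weights.keys()
    total_weight = sum(weights.values())
    return sum(scores[ex] * weights[ex] for ex in scores) / total_weight

def certification_decision(scores, weights, passing_score):
    """Compensatory decision: only the weighted total is compared to the cut,
    so a high score on one exercise can offset a low score on another."""
    return weighted_total(scores, weights) >= passing_score

# Hypothetical candidate: weak on one portfolio entry, strong elsewhere.
scores  = {"video_1": 3.5, "video_2": 3.0, "student_work": 2.0,
           "documented_accomplishments": 3.25}
weights = {"video_1": 1.0, "video_2": 1.0, "student_work": 1.0,
           "documented_accomplishments": 1.0}

# A uniform passing score equivalent to "just passing" each exercise.
PASSING = 2.75
print(certification_decision(scores, weights, PASSING))  # → True
```

With equal weights the weighted total here is 2.9375, so the candidate passes despite the 2.0 on one entry; under a conjunctive rule (every exercise above the cut) the same candidate would fail.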

The uniform performance standard set by the NBPTS was based on a series of empirical standard-setting studies. These studies explored a number of different standard-setting methods (see Jaeger, 1998; Educational Testing Service, 1998). In all of the investigations, panels of experienced teachers were asked to make decisions based on profiles of scores across exercises. The process was “typically iterative, in which individual panelists—thoroughly familiar with the tasks, rubrics, and standards—were given an opportunity to discuss and revise their individual decisions about score profiles in light of feedback on other panelists’ decisions and on the practical implications of their own decisions. From this set of revised individual decisions, the assessment developers computed the ‘recommended’ performance standard” (Moss and Schutz, 1999:685). The NBPTS adopted the uniform performance standard for all certificates based on the results of these early studies along with consideration of the practical implications of setting the standard at different levels (e.g., minimizing adverse impact for groups of candidates, minimizing the anticipated proportion of candidates who are misclassified as failing the exam due to measurement error).

In addition to documentation of the development process, five other kinds of validity evidence are routinely gathered and examined for each assessment:

Content-related evidence of validity is examined by convening a panel of experienced teachers in the subject area to independently rate (a) the extent to which each of the content standards describes a critical aspect of highly accomplished teaching and (b) the importance and relevance of each exercise and rubric to each content standard and to the overall domain of accomplished teaching.

“Scoring validation,” as defined by the assessment developers, is evaluated by assembling another panel of experienced teachers in the subject area to rank randomly selected pairs of exercise responses. These rankings are then compared to the rankings obtained from the official scoring.

Information regarding reliability and errors of measurement is reported as (a) error associated with scores given by different assessors, (b) error associated with the sampling of exercises, and (c) misclassification estimates (i.e., estimates of the proportion of candidates incorrectly passing and failing) due to both of these sources of error.

Confirmation of the predetermined passing standard is obtained by convening a panel of experienced teachers, who have also served as assessors. The panel examines different possible profiles of exercise scores, rank ordered by total score, and draws lines where they believe the passing standard should be.

Evidence regarding bias and adverse impact is provided by (a) reporting certification rates by gender, ethnicity (where sample sizes permit), and teaching context and (b) investigating the influence of having exercises scored by assessors of the same and different ethnicity as the candidate.
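The misclassification estimates among these evidence types can be illustrated with a small simulation: give a candidate a true score, add assessor and exercise-sampling error, and count how often the observed score falls on the wrong side of the cut. All numbers here are hypothetical, not NBPTS’s published error estimates.

```python
# A minimal simulation sketch of misclassification due to measurement error.
# The cut score and error magnitudes are hypothetical.
import random

random.seed(0)

CUT = 2.75          # hypothetical passing score
RATER_SD = 0.15     # hypothetical error from different assessors
TASK_SD = 0.25      # hypothetical error from the sampling of exercises

def observed(true_score):
    """One simulated administration: true score plus the two error sources."""
    return (true_score
            + random.gauss(0, RATER_SD)
            + random.gauss(0, TASK_SD))

def false_fail_rate(true_score, trials=100_000):
    """Proportion of administrations in which a candidate whose true score
    passes the cut nonetheless receives an observed score below it."""
    if true_score < CUT:
        raise ValueError("true score must be at or above the cut")
    fails = sum(observed(true_score) < CUT for _ in range(trials))
    return fails / trials

# A candidate whose true performance is just above the standard is the
# most likely to be misclassified as failing.
print(round(false_fail_rate(2.90), 3))
```

The simulation makes the qualitative point behind the committee’s concern: misclassification risk is concentrated among candidates near the cut score and shrinks rapidly as true performance moves away from it.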

The test developer, ETS, publishes a technical manual (Educational Testing Service, 1998) that describes the methodology for these studies, provides annual updates of technical information for each certificate, and reports outcomes for each administration. The technical manual is available to the public. The annual updates are available only to NBPTS members, staff, consultants, and advisory panel members. Special studies are frequently presented at national conferences. The NBPTS’s validity research agenda, technical reports, and reports of special studies are routinely reviewed and commented on by the Measurement Research Advisory Panel, a panel of independent scholars whose backgrounds are primarily in educational measurement.

In addition to these routine studies, NBPTS has undertaken a number of special validity/impact studies, many of which are still in progress. These include (1) an external review of complete portfolios by a diverse panel exploring differences in performance between African American and white candidates (Bond, 1998); (2) a study of rater severity and the effects of different scoring designs (Engelhard et al., 2000); (3) a survey of how teachers prepare for the exam (Lynch et al., 2000); (4) a survey of assessors’ impressions about the impact of training and scoring on their professional development (Howell and Gitomer, 2000); and (5) a comparison of NBPTS evaluations with others based on classroom observations and interviews (Wylie et al., 2000). A recently released study compared the teaching practices of NBPTS-certified teachers with other teachers and compared samples of student work from classrooms of the two groups of teachers (Bond et al., 2000).

NBPTS assessments are voluntary; passing the assessment results in a certification of accomplishment. To be eligible for NBPTS certification, a teacher must have completed a baccalaureate degree; must have a minimum of three years of teaching experience at the early childhood, elementary, middle, or secondary levels; and must have a valid state teaching license for each of those years or, where a license is not required, the teacher must be teaching in a school recognized and approved to operate by the state.

For the 2000–2001 testing year, the examination fee was $2,300. A number of states and other education agencies have programs in place to subsidize this cost, and NBPTS publicizes that information on its website. Candidates are given approximately 10 months to complete the portfolio tasks and attend the assessment centers. Since each exercise is scored independently, a candidate who does not pass the assessment may bank scores on the exercises passed and retake exercises on which a failing score was received (below a 3 on a four-point scale) for a period of two years after being notified of his or her initial scores. NBPTS publishes a list of newly certified teachers each year. States and local education agencies have their own policies about how the scores are used, including how teachers are recognized and rewarded for receiving NBPTS certification. These include the granting of a state license for teachers transferring into the state, salary increases and bonuses, and opportunities to assume new roles. (See < www.nbpts.org > for a summary of state incentives.)

The NBPTS’s direct involvement in professional development and support activities is limited. For instance, the board offers short-term institutes for “facilitators” who plan to support candidates for certification and a series of “Teacher Development Exercises” that can be purchased and used in local workshops. NBPTS does, however, work informally with state and local education agencies and with teacher education institutions to support and publicize local initiatives (see Box F-4). For example, NBPTS provides a list of ways that state and local education agencies and institutions might support its work and offers contact information for local agencies to interested teachers. In addition, NBPTS’s web page contains information about the activities of its affiliates and provides examples of the variety of professional roles that board-certified teachers have assumed. While NBPTS supplies information on local contacts and activities, however, it does not monitor the quality, relevance, or usefulness of this information.

Through its certification assessments and related activities, NBPTS hopes to “leverage change” in the contexts and culture of teaching. The board hopes to (1) make “it possible for teachers to advance in responsibility, status, and compensation without having to leave the classroom” and (2) encourage “among teachers the search for new knowledge and better practice through a study regimen of collaboration and reflection with peers and others” (National Board for Professional Teaching Standards, 1996:7).

CONNECTICUT’S TEACHER PREPARATION AND INDUCTION PROGRAM

Connecticut, working in collaboration with the Interstate New Teacher Assessment and Support Consortium (INTASC), provides an example of a state that has implemented a licensing system that relies on performance assessments. Connecticut’s Beginning Educator Support and Training program is a comprehensive three-year induction program that involves mentoring and support for beginning teachers as well as a portfolio assessment. The philosophy behind Connecticut’s teacher preparation and induction program is that effective teaching involves more than demonstration of a particular set of technical skills. The program is based on the fundamental principle that all students must have the opportunity to be taught by a caring, competent teacher and that, in addition to command of subject matter, effective teaching requires a deep concern about students and their success, a strong commitment to student achievement, and the conviction that all students can attain high levels of achievement (Connecticut Department of Education, 2000). This philosophy is reflected in Connecticut’s Common Core of Teaching (CCT), which is intended to present a comprehensive view of the accomplished teacher, detailing the subject-specific knowledge, skills, and competencies the state believes teachers need in order to ensure that students learn and perform at high levels. The CCT encompasses (1) foundational skills and competencies common to all teachers from prekindergarten through grade 12 and (2) discipline-based professional standards that represent the necessary knowledge, skills, and competencies (Connecticut Department of Education, 1999). The specific components of the CCT appear in Box F-5.

Connecticut’s program covers three aspects of teachers’ development: preservice training, beginning teacher induction, and teacher evaluation and continuing professional growth. The CCT guides state policies related to each of these phases, which are described below.

Preservice Training

At the preservice phase, teacher education programs are expected to demonstrate that teacher candidates are knowledgeable about the CCT as well as the state’s achievement test batteries: the Connecticut Mastery Tests and the Connecticut Academic Performance Test (Connecticut State Department of Education, 1999). The training requirements for prospective teachers are specified in terms of a set of standards, as distinct from a list of required courses. The standards encompass the body of knowledge and skills the state believes individuals should develop as they progress through teacher education programs. The approval process for teacher education programs is based on these standards, and programs are expected to demonstrate that their students achieve them. Prospective teachers in Connecticut must pass Praxis I, must have a minimum B-minus average to enter a teacher preparation program, and must pass Praxis II to be recommended for initial licensure (Connecticut State Board of Education, 2000).

Beginning Educator Support and Training (BEST) Program

BEST is a comprehensive three-year induction program that is required for all beginning teachers. The program has two components: (1) support and instructional assistance through mentorship, seminars, distance learning, and support teams over a two-year period and (2) assessment of teaching performance through a discipline-specific portfolio assessment (Connecticut State Board of Education, 2000). The goals of the BEST program include:

ensuring that all students have high-quality, committed, and caring teachers;

promoting effective teaching practice leading to increased student learning;

providing effective support and feedback to new teachers so that they continue to develop their knowledge base and skills and choose to remain in the profession;

providing standards-based professional development for both novice and experienced teachers; and

developing teacher leaders by recognizing and relying on experienced teachers to support, assess, and train beginning teachers.

As part of the program, school administrators assign new teachers to mentors, provide opportunities for new teachers to work collaboratively with more senior teachers, provide time for professional development, ensure that new teachers have access to resources and support structures, evaluate teachers and offer constructive feedback, and provide an ongoing orientation program < www.state.ct.us/sde >.

During their first year, beginning teachers are evaluated at the district level through live and video observations. They learn how to systematically document their teaching practices and to use student work to demonstrate student learning, and they develop a long-term plan for professional growth. During year two, teachers use feedback from year one to improve their teaching techniques, and they begin to plan for portfolio development. With guidance from their principal, teachers design a professional development plan that supports the BEST portfolio process and that uses the portfolio components as an avenue to support growth in planning, instructing, student work analysis, and reflecting. The portfolio is submitted in the spring of the second year. During the third year of the induction phase, teachers continue to work on the portfolio, as needed, and share their portfolio experience with other beginning teachers. Third-year teachers begin to expand their goals to focus on district and school improvement goals. They also begin to document their professional responsibilities in preparation for professional growth and tenure < www.state.ct.us/sde >.

The Portfolio Assessment

The portfolio assessment is intended to provide a thorough representation of a teacher’s performance through documentation of teaching over time and by focusing on a specific content/discipline area. In the portfolio, teachers document their methods of lesson planning, teaching and facilitation of student learning, assessment of student work, and self-reflection within a unit of instruction. The portfolio includes evidence from multiple sources, such as lesson plans, videotapes of teaching, teacher commentaries, examples of student work, and formal and informal assessments (Connecticut Department of Education, 1999). Portfolio assessment requirements vary depending on the beginning teacher’s teaching assignment and license endorsement. Content-focused seminars, run by trained teachers, help beginning teachers learn about ways to meet subject-specific standards so as to demonstrate content-specific teaching practices (Connecticut State Department of Education, 1999:4).

Scorers are trained to evaluate the portfolios using criteria based on content-focused professional teaching standards. Each portfolio is evaluated by at least two assessors with extensive teaching experience in the same disciplinary area as the beginning teacher. Scorers receive up to 70 hours of training and must meet a proficiency standard to be eligible to score portfolios. In scoring a portfolio, scorers first review the included materials and make notes about the evidence provided. They then organize the evidence around a series of guiding questions derived from the discipline-based standards. The two scorers independently evaluate the quality of the teaching documented in the portfolio according to scoring rubrics and benchmarks and then convene to decide on a final portfolio score. Any portfolio that does not meet the standard of “Competent” is rescored by another pair of assessors. If the second evaluation results in a different score, a “Lead” assessor adjudicates the final score. Teachers whose portfolios do not meet the competency standard are eligible for a personal conference with a portfolio assessor, during which they receive individualized feedback about the evaluation (Connecticut State Board of Education, 2000). Such teachers may submit another portfolio during their third year of teaching. A teacher who fails to meet the standard by the third year is ineligible to apply for a provisional certificate and cannot teach in Connecticut public schools. To regain eligibility for certification, an individual must successfully complete a formal program of study approved by the state.
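The scoring and adjudication sequence just described can be sketched as a small decision procedure. The numeric levels and the “Competent” cut used here are placeholders, not Connecticut’s actual rubric values.

```python
# A minimal sketch of the BEST portfolio adjudication flow described above.
# The scale and the "Competent" cut are hypothetical placeholders.

COMPETENT = 2  # hypothetical minimum level for "Competent"

def final_score(first_pair_score, second_pair_score=None, lead_score=None):
    """first_pair_score: consensus of the first two scorers.
    If it meets the standard, scoring stops there. Otherwise a second
    pair rescores; if the two evaluations differ, a lead assessor's
    score is final."""
    if first_pair_score >= COMPETENT:
        return first_pair_score
    if second_pair_score is None:
        raise ValueError("below-standard portfolios must be rescored")
    if second_pair_score == first_pair_score:
        return first_pair_score
    if lead_score is None:
        raise ValueError("disagreement requires a lead assessor")
    return lead_score

print(final_score(3))        # meets the standard on the first evaluation → 3
print(final_score(1, 2, 2))  # the two pairs disagree; lead assessor decides → 2
```

The design point is that extra scoring effort is spent only where the stakes are highest: portfolios below the “Competent” line always get a second, independent evaluation before any adverse decision stands.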

Support for Mentors

School districts are responsible for appointing mentors to beginning teachers. Mentors must have at least three years of teaching experience and must be willing to make the time commitment involved in mentorship activities. Mentors are required to attend a series of training workshops designed to help them prepare for their role in offering guidance to the beginning teacher and in preparing for the portfolio assessment. Mentors and support team leaders receive 20 hours of training focused on the CCT and 10 hours of annual update training.

Professional Development

Throughout the continuous professional growth phase, the components of the CCT are used to establish standards for the evaluation of teachers (according to the Guidelines for Comprehensive Professional Development and Teacher Evaluation) and to guide teachers in selecting appropriate professional development to meet individual as well as local district goals (Connecticut Department of Education, 1999). Teachers who hold either a provisional or professional license must also complete 90 hours of professional development every five years to maintain their licenses. Professional development activities should be directly related to improving teaching and learning.

Role of Other Staff

A number of staff play a role in Connecticut’s induction and evaluation program. The school principal is responsible for ensuring that beginning teachers are aware of the BEST program’s requirements. Principals facilitate opportunities for mentors and beginning teachers to meet (and to observe each other’s classrooms); assist with arranging classroom coverage; and provide resources for portfolio preparation (e.g., videotaping equipment). District facilitators are responsible for providing BEST program orientation sessions, ensuring that beginning teachers receive adequate support from mentors, and arranging for release time so that the mentor and beginning teacher can meet. Release time is acquired by using substitute teachers for the beginning teacher and the mentor(s). The state department of education (through six regional educational service centers) provides distance learning seminars to train mentors and portfolio assessors on the portfolio assessment. The department manages the program; sets policies, standards, and procedures; and ensures that the portfolio assessments are reliable and valid.

Studies of Technical Characteristics

The state has conducted numerous studies associated with developing the portfolio assessment, evaluating its psychometric characteristics, and assessing its overall impact on teacher competence in the state. For all subject areas, the discipline-specific standards were developed by committees of experts in the particular subject area, including teachers, curriculum specialists, administrators, and higher-education representatives. As part of the development process, job analysis studies were conducted. Representative samples of teachers, administrators, and higher-education faculty participated in surveys in which they rated the standards in terms of their importance for beginning and experienced teachers, their relevance to the job of teaching, and their importance for advancing student achievement. The standards were further validated by having teachers complete journals to document the degree to which the standards were represented in their actual teaching.

Experts’ judgments and portfolio performance data are used in setting the standards for the portfolio assessment and determining the cutpoints. Bias and sensitivity reviews are conducted for the portfolio handbook (the guidelines given to beginning teachers for constructing their portfolios) and for all related scoring materials, and portfolio performance is broken down by race, gender, type of community, and socioeconomic status to examine disparate impact. Generalizability studies are conducted to estimate the reliability of portfolio scores.
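One common form of the generalizability studies mentioned above is a one-facet design in which each portfolio is scored by the same set of raters: person (true-score) variance and rater-by-person error variance are estimated from the score table, and a G coefficient gives the reliability of an average over k raters. The computation below is a standard textbook sketch, and the ratings are invented for illustration; Connecticut’s actual designs and estimates appear in its technical reports.

```python
# A minimal one-facet generalizability (G) study: persons crossed with
# raters, one score per cell. Data are invented for illustration.

def g_study(scores):
    """scores[p][r] = rating of person p by rater r (fully crossed)."""
    n_p, n_r = len(scores), len(scores[0])
    grand = sum(map(sum, scores)) / (n_p * n_r)
    p_means = [sum(row) / n_r for row in scores]
    r_means = [sum(scores[p][r] for p in range(n_p)) / n_p for r in range(n_r)]

    ss_p = n_r * sum((m - grand) ** 2 for m in p_means)
    ss_r = n_p * sum((m - grand) ** 2 for m in r_means)
    ss_tot = sum((scores[p][r] - grand) ** 2
                 for p in range(n_p) for r in range(n_r))
    ss_res = ss_tot - ss_p - ss_r

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))

    var_p = max((ms_p - ms_res) / n_r, 0.0)  # person (true-score) variance
    var_res = ms_res                         # rater-by-person error variance

    def g_coefficient(k):
        """Relative G coefficient for a k-rater average score."""
        return var_p / (var_p + var_res / k)

    return var_p, var_res, g_coefficient

# Invented ratings: 6 portfolios, each scored by the same 2 assessors.
data = [[2, 3], [3, 3], [1, 2], [4, 4], [2, 2], [3, 4]]
var_p, var_res, g = g_study(data)
print(round(g(2), 2))  # reliability of the 2-rater average → 0.91
```

Such estimates answer a practical design question: whether two raters per portfolio yield scores reliable enough for a licensing decision, or whether portfolios near the cut need additional ratings.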

A number of studies have been conducted to collect construct-related evidence of validity. These studies have examined the relationships between teachers’ performance on the portfolio and (1) other quantitative measures, such as their grade point averages, SAT scores, and Praxis I and II scores; (2) case studies of teachers’ performance in the classroom; and (3) student achievement test results in English/language arts, reading, and mathematics.

Additional studies have examined the consequences associated with program implementation. Program effectiveness is evaluated through surveys of mentors, portfolio assessors, school administrators, principals, central office personnel, beginning teachers, and higher-education faculty. In addition, portfolio scorers evaluate their own teaching skills before and after participating in scorer training activities. This information is used qualitatively to judge the impact of scorer training on actual classroom teaching practices.

OHIO’S TEACHER INDUCTION PROGRAM

Ohio provides an example of a state that plans to incorporate a commercially available program into its licensing system, which it is in the process of redesigning. The new system, which will be implemented in 2002, includes a comprehensive induction program for beginning teachers, requirements for performance assessment during the induction year administered by the Ohio Department of Education, and procedures to ensure continued professional development for experienced teachers. The new system will eliminate procedures through which teachers were awarded permanent “lifetime” licenses and instead will implement a two-stage system. During the first stage, individuals who complete an approved teacher preparation program and pass the required tests (the Praxis II Principles of Learning and Teaching test and the appropriate Praxis II subject matter examination) receive a provisional license. The provisional license is good for two years, is renewable, and entitles individuals to work as full-time or substitute teachers. Teachers who successfully complete Ohio’s new Entry Year Program and pass a performance assessment (Praxis III) will receive a professional license (the second stage). The professional license is good for five years, and renewal requires completion of an approved professional development plan and a master’s degree or the equivalent after 10 years of teaching.

The Entry Year Program

The new system will require beginning teachers to participate in the Entry Year Program, an induction program designed to provide mentorship, support, and additional learning experiences for new teachers. Ohio has spent the past seven years piloting the Praxis III Classroom Performance Assessment for beginning teachers and approaches for teacher induction programs and is in the process of standardizing approaches to its Entry Year Program. Development and implementation of an Entry Year Program will be the responsibility of individual school districts, but the state will provide guidelines and financial support for development activities. Districts are required to develop an implementation plan, provide orientation sessions, identify and train mentors, arrange for release time for mentors to work with entry-year teachers, conduct assessment activities, and coordinate with higher-education institutions.

As part of the Entry Year Program, experienced teachers serve as mentors to beginning teachers. Whenever possible, mentors teach the same subject or grade level and are located in the same building as the beginning teacher. Serving as a mentor can be incorporated into a veteran teacher’s professional development plan and can count toward licensure renewal. Mentor teachers work on an ongoing basis with beginning teachers. They engage in activities designed to help beginning teachers develop their skills in instructional planning and preparation, presentation of various learning activities, and assessment of students’ learning.

According to state guidelines, individuals selected to be mentors should be experienced teachers and should demonstrate an awareness of instructional methods and professional responsibilities needed to improve teaching skills and increase student learning. Districts are required to operate an in-depth training program for mentors on ways to conduct observations, provide feedback, and offer professional guidance and support to beginning teachers.

Ohio is focusing on using ETS’s PATHWISE Induction Program-Praxis III Version as the basis for its Entry-Year Program. In Ohio this program is called the Ohio FIRST (Formative Induction Results in Stronger Teaching) Year Program.

The PATHWISE Induction Program-Praxis III Version

The PATHWISE Induction Program-Praxis III Version grew from a need for mentors to assist new teachers in focusing on successful teaching under the Praxis III framework (D. Gitomer, ETS, personal correspondence, July 1999). The two pieces of the program—the observation/induction system (provided by the district) and the performance assessment (administered by the Ohio Department of Education)—are built on the same domains of teaching skills. These domains are:

Organizing content knowledge for student learning —how teachers use their understanding of students and subject matter to establish learning goals, design or select appropriate activities and instructional materials, sequence instruction, and design or select evaluation strategies.

Creating an environment for student learning —the social and emotional components of learning as prerequisites to and context for academic achievement, including classroom interactions between teachers and students and among students.

Teaching for student learning —making learning goals and instructional procedures clear to students, making content comprehensible to students, encouraging students to extend their thinking, monitoring students’ understanding of content, and using instructional time effectively.

Teacher professionalism —reflecting on the extent to which the learning goals were met, demonstrating a sense of efficacy, building professional relationships with colleagues, and communicating with parents or guardians about student learning.

Each domain consists of a set of four or five assessment criteria (for a total of 19) that represent critical aspects of teaching (see Box F-6). These criteria were developed over a six-year period based on information collected from job analyses, reviews of empirical and theoretical research, and examinations of states’ licensing requirements for teachers (Wesley et al., 1993).

The job analyses involved asking teachers and others familiar with teaching about the importance of the various tasks beginning teachers perform. The job analyses were conducted separately for elementary (Rosenfeld et al., 1992b), middle (Rosenfeld et al., 1992c), and secondary school teachers (Rosenfeld et al., 1992a). A series of literature reviews was also conducted to document what is known both empirically and theoretically about good practice for teachers in general and for beginning teachers in particular (Dwyer, 1994). Finally, information on the teacher licensing requirements of all 50 states and the District of Columbia was compiled. A nationwide content analysis of state performance assessment requirements also was carried out to determine the content overlap among the systems and to highlight distinctive differences (Dwyer, 1994).

Based on the findings of these studies, the test developers drafted an initial set of assessment criteria which were presented to a national advisory committee for comment. Committee comments led to revisions in the criteria, followed by a pilot test of the criteria in two states (Minnesota and Delaware). The results of this field testing formed the basis for finalizing the assessments made available for inspection and initial use by states in the fall of 1992.

According to the ETS developers, the criteria were developed so as to “infuse a multicultural perspective throughout the system” (Dwyer, 1994:4). This perspective is based on the premise that “effective teaching requires familiarity with students’ background knowledge and experiences (including their cultural resources), and effective teachers use this familiarity to devise appropriate instruction” (p. 4). The criteria are intended not to prescribe a particular way of teaching but to allow for flexibility in how they can be demonstrated in various classroom contexts.

Praxis III: Classroom Performance Assessment

Praxis III is a performance assessment designed to measure beginning teachers’ skills in relation to 19 criteria (see Box F-6). The Praxis III assessment uses three data collection methods: (1) direct observation of classroom practice, (2) written descriptions of students and lesson plans, and (3) interviews structured around the classroom observation. Prior to being observed, the beginning teacher provides the trained assessor with written documentation that conveys the general classroom context and the students in the class, as well as specific information about the lesson to be observed. The observation allows assessors to gain a first-hand understanding of the teacher’s practices and decisions. Semistructured interviews with the teacher before and after the observation provide an opportunity to explore and reflect on decisions and teaching practices. The interviews also allow assessors to evaluate the teacher’s skill in relating instructional decisions to contextual factors, such as student characteristics and prior knowledge (Dwyer, 1994:2–3).

The design of Praxis III was based on the premise that “effective teaching requires both action and decision making and that learning is a process of the active construction of knowledge.” This guiding conception makes explicit the belief that “because good teaching is dependent on the subject matter and the students, assessments should not attempt to dictate a teaching method or style that is to be applied in all contexts” (Dwyer, 1994).

PATHWISE Induction Program

The PATHWISE Induction Program is designed to prepare beginning teachers for the Praxis III assessment and to provide opportunities for professional growth and development. The PATHWISE system is intended to be a “flexible system responsive to an individual’s personal teaching style,” incorporating “constructive assessment that fosters growth and professional development in students and first-year teachers by recognizing their strengths as well as their weaknesses” (Educational Testing Service, 1995b:3). The system provides opportunities for beginning teachers to interact with mentors to (1) identify their strengths and weaknesses, (2) develop a plan for improving their teaching skills, and (3) improve their skill in reflecting on their teaching practices.

The program consists of a series of structured interactions between beginning teachers and their mentors. The interactions are referred to as “events,” and the activities associated with each event are designed to encourage collaboration between beginning teachers and their mentors. A brief description of the types of events follows:

The initial interaction is called the Teaching Environment Profile. This event requires beginning teachers to examine the context of their teaching by collecting information about their students and about the environment (including the district and the community) within which the school is located.

The next interaction is called Profiles of Practice—Observation of Classroom Practice. Two observation events take place in which the mentor teacher observes the beginning teacher leading an instructional lesson. The mentor then provides feedback based on the observation and on review of planning materials, oral and written reflections, and examples of student work.

Three inquiry tasks are also scheduled for beginning teachers. These tasks require beginning teachers to explore specific aspects of their teaching practice. To complete each event, beginning teachers must gather information about a selected aspect of teaching using a variety of resources, including their colleagues, research journals, and texts. Together with the mentor, the beginning teacher develops a plan of action to try out in the classroom, implements the plan, and reflects on the experience. The inquiry events focus on ways to establish a positive classroom environment, design an instructional experience, and analyze student work.

Teachers also participate in two events called Individual Growth Plans. The individual growth plans are designed to help beginning teachers determine how best to focus their efforts throughout the entire induction process. To complete these events, beginning teachers must prepare a plan for professional learning that takes into account their teaching practices, school or district initiatives, and other challenges they may face.

The final induction activities are referred to as Closure Events: Assessment and Colloquium. These interactions help bring closure to the year by engaging beginning teachers in self-assessment, encouraging a final evaluation of professional learning, and promoting the sharing of professional knowledge with other beginning teachers.

A total of 10 events occur over the course of the school year. All of the inquiries and observations involve cycles of planning, teaching, reflecting, and applying what is learned. The activities encourage beginning teachers to participate in reflective writing, conversations with experienced colleagues, and ongoing examination of teaching in relation to student learning (Educational Testing Service, 1995a).

Training of Mentors and Assessors

Since the PATHWISE Induction Program-Praxis III Version relies on effective use of observational data, extensive training is required. ETS offers three levels of training for the program. An initial two-day training in the PATHWISE Classroom Observation system acquaints individuals with the 19 criteria that form the basis for the program and is a prerequisite for other levels of training. Participants receive instruction in recording observational data, analyzing written contextual information, using written and observational information to evaluate performance, writing summaries of teacher performance, and providing feedback to beginning teachers. Training relies on simulations, case studies, sample evaluation forms, and videotapes.

A four-day training session familiarizes individuals with the PATHWISE Induction Program-Praxis III Version. This seminar focuses on training mentors in the uses and purposes of the 10 events and in developing their mentoring and coaching skills.

The seminar for Praxis III assessors is the final level of training and is five days in duration. The training consists of a series of structured activities during which participants learn to recognize the presence of each of the 19 Praxis III criteria. Participants learn to evaluate written information provided by the teacher, take notes during classroom observations, and conduct semistructured interviews. The training process utilizes a variety of stimuli, including worksheets, sample records of evidence, simulations, case studies, and videotapes. Trainees receive feedback from instructors and fellow participants. They also engage in professional reflection through journal writing.

Implementation of the PATHWISE Induction Program-Praxis III Version in Ohio

ETS offers training for mentors and assessors but also provides instructional modules for individuals to learn how to conduct the training sessions. Individuals in Ohio have learned the ETS procedures, and the state now offers its own training sessions.

Training in the PATHWISE Classroom Observation System has been incorporated into Ohio’s preservice program to introduce students to the 19 criteria. Many state institutions have incorporated the domains into their preservice education programs, and their focus will be on using the PATHWISE rubrics to evaluate students’ progress. PATHWISE Classroom Observation System training is also available for teachers who plan to become mentors. Approximately 15,000 teachers have completed this training (John Nickelson, Ohio Department of Education, personal communication, March 19, 2001).

Implementation of the Praxis III Performance Assessment

Beginning teachers in Ohio will be required to pass Praxis III starting in 2002. Ohio teachers who do not pass it after their first year may try again during the second year. Teachers who fail to complete the Entry-Year Program’s requirements after the second attempt will lose the provisional license until they complete additional coursework, supervised field experiences, and/or clinical experiences designated by a college or university approved for teacher preparation and are recommended by that institution.

ABILITY-BASED TEACHER EDUCATION AT ALVERNO COLLEGE

In this case study the committee provides a description of teacher education and assessment as practiced at Alverno College in Milwaukee, Wisconsin. Alverno College undertook development of a performance-based baccalaureate degree over 20 years ago (Diez et al., 1998). This change resulted in an overhaul of the college’s curriculum and its approach to teaching. The approach is characterized by publicly articulated learning outcomes, realistic classroom activities and field experiences, and ongoing performance assessments of learning progress. Alverno’s program is of interest because it provides an example of a system in which a party other than a state or district could warrant teacher competence. The focus here is on Alverno as a working program that can expand the debate about other models for warranting teacher competence.

Alverno College has an enrollment of approximately 1,900 students in 66 fields of study; about 300 students are education majors. With the exception of a postbaccalaureate teacher certification program and the master of arts in education program, all programs admit only female students. Two-thirds of Alverno’s students are from the Milwaukee area, and about 30 percent are members of minority groups. There are about 100 faculty members, and the average class size is 25. The next section provides a brief overview of Alverno’s philosophy and practices.

Abilities and Learning Outcomes for the Baccalaureate Degree

At Alverno College an ability is defined as “a complex integration of knowledge, behaviors, skills, values, attitudes, and self-perceptions” (Diez et al., 1994:9). The general education courses provide students with the opportunity to expand and demonstrate each of eight abilities:

Communication —an ability to communicate effectively by integrating a variety of communication abilities (speaking, writing, listening, reading, quantitative, media literacy) to meet the demands of increasingly complex communication situations.

Analysis —an ability to be a clear thinker, fusing experience, reasoning, and training into considered judgment.

Problem solving —an ability to define problems and integrate a range of abilities and resources to reach decisions, make recommendations, or implement action plans.

Values within decision making —an ability to reflect and to habitually seek to understand the moral dimensions of decisions and to accept responsibility for the consequences of actions.

Social interaction —an understanding of how to get things done in committees, task forces, team projects, and other group efforts.

Global perspective —an ability to articulate interconnections between and among diverse opinions, ideas, and beliefs about global issues.

Effective citizenship —an ability to make informed choices and develop strategies for collaborative involvement in community issues.

Aesthetic responsiveness —an ability to make informed responses to artistic works that are grounded in knowledge of the theoretical, historical, and cultural contexts.

The abilities cut across disciplines and are subdivided into six developmental levels. The six levels represent a developmental sequence that begins with objective awareness of one’s own performance process for a given ability and specifies increasingly complex knowledge, skills, and dispositions. Students must demonstrate consistent performance at level 4 for each of the eight abilities prior to graduation. An example of the development levels for problem solving appears in Box F-7.

SOURCE: Alverno College Faculty, 1973/2000.

Each of Alverno’s educational programs also defines a set of abilities distinctive to each major and minor area. These outcomes, identified by faculty as essential learning outcomes, relate to and extend the general education abilities (Loacker and Mentkowski, 1993). Within the major area of study, students are expected to achieve at least a level 5 for each of the program’s abilities (Zeichner, 2000).

Abilities and Learning Outcomes in Teacher Education

Alverno’s Department of Education offers degree programs in elementary, early childhood, secondary, bilingual, music, art, and adult education. All education programs are designed to foster the same set of teaching abilities. These teaching abilities define professional levels of proficiency that are required for graduation with a major in any of the teacher education programs. The teaching abilities refine and extend the general education abilities into the professional teaching context. While the professional teaching abilities are introduced in the first year, they receive heavy emphasis during the junior and senior years. The teaching abilities include:

Conceptualization —integrating content knowledge with educational frameworks and a broadly based understanding of the liberal arts in order to plan and implement instruction.

Diagnosis —relating observed behavior to relevant frameworks in order to determine and implement learning prescriptions.

Coordination —managing resources effectively to support learning goals.

Communication —using verbal, nonverbal, and media modes of communication to establish the environment of the classroom and to structure and reinforce learning.

Integrative interaction —acting with professional values as a situational decision maker, adapting to the changing needs of the environment in order to develop students as learners.

Each of the above education abilities is further described for faculty and candidates through maps (Diez et al., 1998). In the maps, development of the ability is defined in terms of what teachers would be expected to do with their knowledge and skills at various stages of their development. An example based on skill in integrative interaction follows: the beginning teacher would demonstrate ability in integrative interaction by showing respect for varied learner perspectives; the experienced teacher would provide structures within which learners create their own perspectives; and the master teacher would assist learners in the habit of taking on multiple perspectives. This type of mapping is intended to “capture the interactions between knowing and doing” (Diez et al., 1998:43).

Alverno’s Program for Education Majors

Alverno’s program for education majors is designed to address the developmental needs of learners. Concepts are addressed in an integrated fashion, across multiple courses and settings to enable a “deepened understanding” that comes with repetition of concepts (Diez, 1999:233). The program is characterized by extensive opportunities for field experiences that require candidates to apply what they have learned. Coursework and field experiences are sequenced to build developmentally across the years of the program. For example, candidates begin with coursework and field experiences that require them to apply the frameworks they are learning with individual students or small groups in tutorial settings. They progress to more complex tasks with larger groups and whole-class instruction. The assignments gradually increase in complexity, requiring candidates to attend to multiple factors in their planning, their analysis of the classroom, and their implementation of learning experiences (Diez, 1999:233).

Self-reflection and self-assessment skills are emphasized at Alverno. Faculty have developed a set of reflective logs that guide students in each of four semester-long field experiences prior to student teaching. These logs are intended to help students develop their skills in the five education abilities. According to Diez (1999), the logs direct students to make links between theoretical knowledge and practical application (which develops skill in conceptualization and diagnosis), to observe processes and environments of learning (coordination skills), to translate content knowledge into suitable short presentations or learning experiences (communication skills), and to begin to translate their philosophy of education into decisions regarding all aspects of teaching environments and processes (integrative interaction).

The first stage of the education program is the preprofessional level. To apply to the preprofessional level, students must have completed one year of coursework, a required one-credit human relations workshop, and a portion of the math content requirements (Zeichner, 2000). The preprofessional stage lasts two semesters. During this time, education students begin to integrate the knowledge bases of the liberal arts disciplines with the process for applying the material from these disciplines (Alverno College Institute, 1996). The subject area methods courses are taught during this stage. These courses connect teaching methods with material learned in liberal arts general education courses. Performance assessments during this stage may consist of such activities as requiring teacher candidates to create a lesson for a given grade that incorporates knowledge about developmental psychology. Other performance assessments involve simulations in which prospective teachers take on the various roles that teachers play, such as conducting a parent-teacher conference, being part of a multidisciplinary evaluation team, or working with district planning activities. This period includes two pre-student teaching experiences.

After two semesters in the preprofessional stage and completion of two of the pre-student teaching experiences, teacher candidates can apply for admission to the professional level. To be admitted, they must successfully complete the first two field experiences (and provide letters of recommendation), demonstrate a specific ability, meet the statewide minimum cutoff scores on the required Praxis exams, and pass several standard assessment exercises that are spread throughout the program (Zeichner, 2000). An example of one of these standard assessments is the Behavioral Event Interview and Self-Assessment described by Zeichner (2000:10):

[This assessment is] an hour-long interview conducted in the second semester of field experiences. Each education department member interviews two students each semester. The aim of the interview is to give students a chance to talk about their actions and thinking in relation to working with pupils. It focuses on stories elicited by questions (e.g., Can you tell me about the time you came to a realization about children’s development through an experience with a child or children?). The students then are asked to use their stories as data for a self-assessment process focusing on the five advanced education abilities (e.g., Where do you see yourself drawing upon x ability? Where do you see a need to strengthen this ability?). The interview is audiotaped and students take the tape with them to complete a written self-assessment. They set goals for their next stage of development in the teacher education program and then meet for a second session with the faculty interviewer.

The final two semesters are considered the beginning of professional practice, during which student teaching occurs. To be accepted for student teaching, students must demonstrate communication ability at level 4, successfully complete all four pre-student teaching experiences, and pass another standard assessment exercise—the Professional Group Discussion Assessment. Zeichner (2000:11) also describes this assessment:

Students compile a portfolio that includes a videotape of their teaching together with a written analysis of that teaching in relation to the five advanced education abilities, cooperating teacher evaluations, etc. The student then participates in a half-day interview with principals and teachers from area schools who are part of a pool of over 400 educators helping to assess students’ readiness for student teaching.

Assessment as Learning

The philosophy behind Alverno’s program is based on the premise that only through integrating knowledge, skills, and dispositions in observable performances can evidence of learning be shown. Assessment is treated as integral to learning. It is used both to document the development of the abilities and to contribute to candidates’ development. Alverno’s faculty describe their approach as assessment as learning (Diez et al., 1998). As practiced at Alverno, assessment as learning has the following features:

Expected learning outcomes or abilities are stated, and candidates are aware of the goals toward which they are working.

Explicit criteria for performance are outlined to guide candidates’ work and to provide structure for self-assessment.

Evaluations are based on expert judgments using evidence of candidates’ performance and weighing it against established criteria.

Feedback is intended to be productive; it is aimed not at judgment alone but at ongoing development.

All assessments include the experience of reflective self-assessment.

Assessment is a process involving multiple performances. Candidates experience many assessments using multiple modes, methods, and times to provide a cumulative picture of their development.

Assessment begins during orientation and continues through graduation. For instance, all new students complete an initial communications assessment that includes writing and presenting. The presentation is recorded on videotape, and each student evaluates her own performance before receiving diagnostic and prescriptive feedback from expert assessors.

Alverno faculty believe that performance assessments should be as realistic as possible and should closely mimic the experiences of practicing teachers. In developing the curriculum, they have identified the variety of roles that teachers play. Performance assessments include simulations of parent-teacher interactions, multidisciplinary team evaluation, the teacher’s work with district or building planning, and the teacher’s citizenship role, as well as actual classroom teaching (Diez et al., 1998:2).

Each course is structured around the assessments and learning outcomes that must be demonstrated to claim mastery of the course material. Box F-8 contains two examples: the learning outcomes for a course at Alverno and a course assessment designed to evaluate mastery of one of the outcomes.

Coursework is intentionally sequenced to reflect developmental growth and to provide for cross-course application of concepts. For example, a mathematics methods course assessment might ask students to (1) create a mathematics lesson plan for first graders that incorporates concepts from developmental psychology, (2) teach the lesson, and (3) describe the responses of the learners and the adaptations made.

Alverno requires that student teachers perform all of the duties of a teacher effectively, assuming full responsibility for the classroom for a minimum of four weeks in each placement. They start and end each day of teaching on the same schedule as the cooperating teacher. Their performance is assessed on the five professional teaching abilities by the cooperating teacher, the college supervisor(s), and the student teacher herself.

Research and Program Evaluation

Alverno has documented in detail the ways in which its program abilities were developed. The eight general education abilities were identified during discussions with faculty members about “what it means to say that a student has completed a liberal arts baccalaureate degree” (Diez, 1999:41). The abilities emerged from faculty discussions focused on such questions as: What should a woman educated in liberal arts be able to do with her knowledge? How does the curriculum provide coherent and developmental support to her learning? What counts as evidence that a student has achieved the expectations of the degree, the major, and the professional preparation? Once the abilities were defined, the discussions centered on pedagogical, developmental, and measurement questions. Descriptions of the abilities and corresponding performance levels have evolved over the years as a result of many factors. There is ongoing interdisciplinary consultation and review. Each faculty member is expected to serve on a program evaluation and development committee.

Alverno’s performance-based assessments are designed to reflect the actual work of beginning professional teachers. They cover the content and skills considered relevant to the tasks that teachers are expected to perform. In addition, the context for assessments is intended to reflect real-life teaching situations, representing a broad sample of performance situations (broader than would be expected for assessments that focus on basic skills and subject matter knowledge). Committees of faculty members routinely audit the contents of the assessments (during regularly scheduled departmental meetings) to verify that they are appropriate and that they reflect current thinking about what teachers should know and be able to do.

Multiple judgments are obtained of each student’s skills and knowledge. Each student is observed and assessed hundreds of times as she participates in classroom and field activities. Evaluations utilize multiple contexts, multiple modes, and multiple evaluators. There are formal, “milestone” assessments staged at relevant points in the curriculum, such as the Behavioral Event Interview and Self-Assessment and the Professional Group Discussion Assessment described above, as well as less formal, ongoing, in-class assessments. The institution has processes in place for refining and updating criteria for judging students’ performance.

There have been both internal and external reviews of the program. The institution has maintained an Office of Research and Evaluation since 1976 (Mentkowski, 1991). Through this office, a comprehensive longitudinal study of 750 students was conducted that tracked students from entry into the program through two years after graduation. For this study, researchers collected information on (1) student performance in the curriculum on college-designed ability measures; (2) student perceptions of reasons for learning, the process of learning, and its value for their own career and life goals; and (3) students’ personal growth after graduation (Mentkowski and Doherty, 1984).

Another piece of the longitudinal study involved collecting data from a group of alumnae five years after graduation. This research focused on how abilities learned in college transferred to the work setting, the extent to which alumnae continued to grow and develop after college, and how graduates were doing in their careers and further learning in the long term. The study used 17 research instruments to collect data, including questionnaires, tests, essays, and interviews (Alverno College, 1992–1993).

In addition, surveys have collected data from Alverno graduates on their perceptions about the college’s program and the extent to which they felt prepared to teach upon graduation (Zeichner, 2000). Surveys and focus groups with principals have examined employers’ perceptions of the preparedness of Alverno graduates (Zeichner, 2000). Surveys have also compared longevity in the teaching field for Alverno alumnae with that of graduates of other programs.

Licensing Practices in Wisconsin

Wisconsin currently requires that students admitted into its teacher education programs pass a basic skills test. Alverno also has developed its own assessments for reading, listening, writing, speaking, and quantitative literacy through four levels, and these are required for graduation. Entering the classroom requires endorsement from the institution from which one graduates. Alverno uses successful completion of field experiences and course requisites and its many formal and informal assessments of students as the basis for warranting readiness to teach.

In 2004 the state of Wisconsin will require candidates for teaching positions to present portfolios demonstrating their performance in relation to several professional teaching standards. Candidates who pass the portfolio review will be granted provisional licenses to teach for three to five years while pursuing professional development goals related to the standards. The portfolios are meant to comprise performance assessments compiled near the time of graduation. In addition, the state’s standards make provision for a yet-to-be-specified content examination.

Americans have adopted a reform agenda for their schools that calls for excellence in teaching and learning. School officials across the nation are hard at work targeting instruction at high levels for all students. Gaps remain, however, between the nation's educational aspirations and student achievement. To address these gaps, policy makers have recently focused on the qualifications of teachers and the preparation of teacher candidates.

This book examines the appropriateness and technical quality of teacher licensure tests currently in use, evaluates the merits of using licensure test results to hold states and institutions of higher education accountable for the quality of teacher preparation and licensure, and suggests alternatives for developing and assessing beginning teacher competence.

Teaching is a complex activity. Definitions of quality teaching have changed and will continue to change over time as society's values change. This book provides policy makers, teacher testers, and teacher educators with advice on how to use current tests to assess teacher candidates and evaluate teacher preparation, ensuring that America's youth are being taught by the most qualified candidates.


Center for Teaching and Assessment of Learning

Alternative Summative Assessment

Changing the format of a major assessment away from an exam can seem daunting, but there are significant advantages for you and your students. The alternative assessments described below allow students to demonstrate their learning in more authentic ways. Authentic assessments are those that more closely replicate the work students would do outside of a classroom.

Courses that should consider alternative assessments:

  • Small to medium enrollment courses (fewer than 60 students in a section)
  • Mid- to upper-level courses (300 level and above)
  • Courses with student learning outcomes that focus on writing, oral communication, or other skills not well quantified by examinations.

Courses that should consider modifications to exams:

Courses whose exams include a significant number of analytical questions, questions that involve making claims or judgment calls, or questions where students must justify their answers are good candidates. You may wish to formally convert your exam to an “open book” exam. Set a time limit, or word count, for the exam. Allow students to use materials they have at hand, and communicate your expectation that their answers be detailed, thorough, and specific to the course content.

  • Extension: Ask students to upload their notes, study guides, or scratch paper for additional exam credit.
  • Extension: Ask students to complete an  “exam wrapper”  that describes their preparation for the exam, a self-assessment of their performance, and a short reflective statement.
  • Extension: Add an open-ended or essay question at the end of the exam where students must reflect on which course materials they used to prepare for the exam.

Considerations Before Implementing an Alternative Assessment

Many of the examples of alternative assessments listed below require students to demonstrate skills in information searching, writing, or presentation that you might not be explicitly teaching in your course. This means that students may need additional practice or support to successfully complete the assessment. For example, you might provide students with templates they can use instead of requiring them to create the entire product from scratch. You might also need to adjust your expectations and grading standards to account for students’ lack of experience and expertise in some areas, particularly if those areas are outside the scope of the learning outcomes for your course.

Additionally, it may be helpful to provide students with multiple examples of the kind of product(s) you are asking them to create. You can review and critique some examples together as a class to demonstrate and explain your expectations. You can also require students to review and critique one another’s work in class or in Canvas discussion boards; you can join the discussions at appropriate junctures and provide corrections and amplifications of important points.

Tools and Resources

If you choose to use any of the alternative assessments described here or elsewhere, we suggest creating a rubric, as well as taking advantage of Canvas’s SpeedGrader function, to speed up your grading process. CTAL has resources and many examples of rubrics.
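One advantage of a rubric is that it can be expressed as data and scored consistently. The sketch below shows one possible way to encode a weighted rubric in Python; the criteria, levels, and weights are hypothetical illustrations, not a CTAL or Canvas format.

```python
# A minimal sketch of a scoring rubric expressed as data. The criteria,
# levels, and weights below are hypothetical examples.
RUBRIC = {
    # criterion: (weight, points awarded at performance levels 0-3)
    "Thesis & argument": (0.4, [0, 5, 8, 10]),
    "Use of evidence":   (0.4, [0, 4, 7, 10]),
    "Mechanics & style": (0.2, [0, 6, 8, 10]),
}

def score(levels):
    """Convert per-criterion levels (0-3) into a weighted score out of 10."""
    total = 0.0
    for criterion, level in levels.items():
        weight, points = RUBRIC[criterion]
        total += weight * points[level]
    return round(total, 1)

# Example: strong evidence, adequate thesis, weak mechanics.
essay = score({"Thesis & argument": 2, "Use of evidence": 3, "Mechanics & style": 1})
```

Keeping the weights and level descriptors in one structure makes it easy to share the rubric with students before the assessment and to apply it uniformly during grading.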

The Library has assignment packages for podcast, graphic design, and interactive timeline projects. They include a sample timeline, suggested tools, rubrics, and resources for students.

“Library Resources for Multimedia Projects” is a module available for download in Canvas Commons. The module contains pages for video, podcast, and graphic design projects. Depending on what your students are creating, you can unpublish and customize the pages as needed.

Alternative Major Assessments Categorized by Discipline

Each course is unique, but it can be helpful to think broadly about the kind of work frequently done within a discipline when considering what kind of alternative assessment is right for you and your students. Below you’ll find assessments organized by disciplinary categories; while they are grouped by discipline, please do not feel that a particular assessment can *only* be used within that discipline.

Arts and Humanities

Many arts and humanities courses incorporate some form of writing, and it may be a natural inclination to assign a final paper in lieu of an exam. While that is an effective solution for many courses, some students can be overwhelmed by having multiple large papers due at the same time at the end of the semester. Here are some suggestions that do not rely on a traditional essay format:

Digital portfolios

  • If students will be completing several smaller assignments remotely, or if your course used discussion boards, you may want to ask them to submit a digital “portfolio” where they select and highlight some of their best work and comments. Students can write a reflection about what resources and tools enhanced their learning, as well as add information to responses on earlier assessments. Adding this final, reflective portfolio component is a way to bring together many disparate elements into a cohesive final project, and requires very little advance planning on the part of the instructor.

Digital reflections

Ask students to reflect on their learning in a digital format, such as:

  • Google Slides
  • An audio recording
  • A video recording
  • An infographic
  • The Library has great resources for multimedia projects of all kinds.

Course anthology

  • Ask students to compile an anthology of course materials that address a prompt, annotating each selection to explain its relevance.
  • Extension: Ask students to seek out 2-3 additional sources (outside of the course) that they feel will also address the prompt and have them annotate those as well. Note: If research skills are outside of the course goals and are not explicitly taught/assessed, consider providing your students with a tutorial from the Library.

Syllabus extension

  • Ask students what they would like to explore if the class were to continue for another module or unit. Have them identify the key theme of the unit, a variety of course materials (articles, videos, open educational resources…) and an essay question or other idea for assessment that would help demonstrate mastery of the materials and concepts.

Demonstrations or simulations

  • For arts courses that include hands-on practice or kinesthetic skill-building, you may want to have students demonstrate skills in a recording. For example, dance or music classes may include a video ‘recital’ of some kind, while studio art courses could ask students to film themselves completing an especially challenging part of a work or project. Even if your course is largely asynchronous, you may want to consider a synchronous session where students share their demonstrations live with each other and reconnect with their classmates.
  • Students can borrow recording equipment from the Student Multimedia Design Center in the Library.

Social Sciences (Qualitative-Focused Courses)

Evaluation of case studies

  • UD’s own PBL Clearinghouse has many examples of problems that can be used as case studies. 
  • Sage Research Methods , a database available through the Library, has access to over 1100 case studies for teaching.

Autoethnography

  • Our world is increasingly interlinked socially, economically, culturally, and politically. Autoethnography asks a learner to describe and systematically analyze their personal experiences within the context of this interconnectedness in order to find their place in a community, to work and live with people whose experiences and perspectives differ from their own, and to think through the ethical challenges they will face over a lifetime.

Digital Storytelling & Podcasting

  • Individual learners, or teams of learners, create a digital story or podcast that details the various facets of a cultural institution (local farmer’s market, museum, community center, etc.) that interests them and aligns with course learning objectives. Learners engage in a combination of non-fiction writing, journalism, and/or ethnography to report on their observations and conclusions. To increase rigor, instructors can ask learners to include quoted audio from interview subjects and, if appropriate, music and atmospheric sound to help provide narrative shape and transitions.
  • The Library has resources for video and audio projects , and a video on the digital storytelling process .

Annotated Bibliography

  • In addition to providing an alphabetical list of research sources, an annotated bibliography asks learners to write a concise summary of each source and assess its value or relevance to the course and/or a research question. Such an assignment often requires the student to identify and paraphrase each source’s thesis (or research question), methodology, and main conclusion(s). This process asks learners to go beyond simply listing content and instead account for why the content is relevant to the course and its learning objectives.
  • Extension for information literacy skills: consider asking students to look for a variety of source types outside of books and journal articles, and provide information within their bibliography about why they chose those sources, applying evaluative criteria across formats.  The Library teaching team offers a lesson plan for such an activity.

Social Sciences (Quantitative-Focused Courses)

Analysis of data sets

  • For quantitative courses, or courses with a lab component, you may want to locate online data sets or identify previously collected data (e.g., student-collected data from labs run in previous semesters, or data included as supplementary material for a published article) as resources for assessment. Generate a short list of questions, calculations, statistical models, or functions that you want to ensure students have mastered. You can make the questions about the data available ahead of time so that students can prepare in advance, but only unlock the Canvas module with the data set during a limited window of time. With this approach, you can even reuse the same questions, giving students multiple data sets on which to demonstrate their knowledge.
  • You may also consider having students create visualizations of data sets.  The Library provides a guide on data visualization projects .
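To make the "fixed questions, varying data" idea concrete, here is a minimal sketch of the kind of scripted analysis a student might be asked to submit. The measurements and condition names are hypothetical, not drawn from any real course data set.

```python
# A sketch of a data-set assessment: students receive a small data set and
# a fixed list of calculations to perform. The reaction-time measurements
# (in ms) below are hypothetical.
import statistics

condition_a = [312, 298, 305, 321, 290, 315]
condition_b = [355, 340, 362, 348, 351, 344]

def summarize(sample):
    """Return the summary statistics students are asked to compute."""
    return {
        "n": len(sample),
        "mean": round(statistics.mean(sample), 1),
        "stdev": round(statistics.stdev(sample), 1),
    }

summary_a = summarize(condition_a)
summary_b = summarize(condition_b)

# One of the "questions about the data": how far apart are the means?
mean_difference = round(summary_b["mean"] - summary_a["mean"], 1)
```

Because the questions stay the same while the numbers change, each student (or each semester) can receive a different data set without rewriting the assessment.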

Infographic

  • An infographic can help learners gain an increased understanding of complex disciplinary jargon and hone their summarizing and paraphrasing skills in an aesthetically pleasing and compelling way. Students will also improve research, organization, and design skills, which are important in interdisciplinary fields and in an increasingly multimodal workforce.
  • You can link to the Library’s resources for creating infographics to get students started.

Article for magazine, newspaper, or YouTube

  • Related learning activity: Have students practice locating the source material upon which a popular press article was based, and discuss the differences in treatment between the two. Use this discussion to build a list of evaluative criteria or class rubric for what a reliable piece of scientific reporting should look like.
  • The Library has great resources for multimedia projects of all kinds.

Policy petition

  • A successful petition has a clear goal and communicates an immediate emotional connection to a given cause. Language may also vary depending on the intended audience. The goal of the assignment might be, for example, to demand ethical reform in human subject review, promote the use of new data in environmental policy, or advocate for the publication of a specific article submission. A petition should always aim to propel the issue forward within a level of mutual respect. This assignment might also ask the learner to identify and include compelling media such as photos, videos, or recorded audio for additional impact.
  • You can access policy exemplars in Library databases such as HeinOnline Academic and NexisUNI .
  • Practice data sets are available in Sage Research Methods ; you can search data by discipline, method, or data type.

Policy statements and advocacy

  • A successful policy statement follows the same principles as a petition (a clear goal, an immediate emotional connection to the cause, and language adapted to the intended audience), but the assignment must require students to use accurate and compelling descriptions of scientific principles and results as the core of their argument or as a key component. This assignment might also ask the learner to identify and include compelling media such as photos, videos, or recorded audio for additional impact.

Poster presentations

  • Requiring students to present findings of experiments or experimental designs using a scientific poster allows them the opportunity to not only describe their findings or design but also practice critical communication skills. Scientists are familiar with this medium and often able to provide ready feedback based on their professional experiences. Students should be provided with the general format of a poster with templates they can readily use and descriptions of the purposes of each section. Students who may not have had experience presenting posters, particularly those in lower-level courses, should be afforded the opportunity to submit drafts for feedback and revision. Posters can also be shared outside of the classroom (e.g., posted in the hallway, shared with colleagues invited to the class) to provide a more authentic audience.
  • The Library has tips and templates for research poster design .

Research proposals

  • In upper-level courses where students are considering graduate school as an option, it may be helpful to include a research proposal as an assessment tool. The proposal could be scaffolded across the length of a semester, and students could review each other’s proposals and a review panel could be convened to replicate (and introduce students to) the proposal review process. This type of assessment may be especially valuable when placed in the context of the NSF Graduate Research Fellowship Program or similar pre-doctoral fellowship. 

Storyboards or Infographics

  • It can often be helpful and powerful to portray scientific processes, theories, and methods as stories. Requiring students to create storyboards and embody the primary elements of the concept(s) as characters and actions can allow them to creatively illustrate their understanding (or lack thereof).
  • A primer on graphic design can be found on the library’s site (including resources for free and web-based graphic design tools).

Health Sciences

Case studies

  • There are multiple modes of assessment that you can use with case studies, as well as online repositories. UD’s own PBL Clearinghouse has many examples of problems that can be used as case studies. You can select a few relevant case studies and ask students to write short-answer responses, answer “best fit” multiple-choice questions, or even write brief position papers. Case studies can be presented in a written format, video vignettes, or a live simulation.
  • Case studies in the health sciences are also available in Library databases such as Sage Research Methods and in open textbooks .

Poster Sessions

  • Poster sessions are not only a unique way for students to demonstrate their understanding of a given topic but also expose students to a skill they can carry with them beyond graduation. Many health care disciplines offer poster sessions at conferences to share research findings, interesting case studies, or novel approaches. When using posters to assess learning in your course, start by identifying the learning outcomes you will be assessing. Then provide students with a selection of topics or questions to investigate. Since learning how to design a poster is not likely a learning outcome for your course, it is advisable to provide students with a template so that they can focus their energy on the content rather than the design. Students display their main ideas on a poster and present it to the class, the instructor, or to attendees at a departmental poster session.
  • The Library has tips and templates for research poster design

Create a fact sheet or “at a glance” document

  • Fact sheets are very common in healthcare as a means to communicate important information in a clear and concise format. Fact sheets need to be accurate, current, and effective in communicating a message to a target audience.
  • Examples of CDC fact sheets and at-a-glance documents can be found on the CDC’s website.

What about oral exams?

Oral examinations can be equally effective in an undergraduate or graduate course when the learning outcomes that you are seeking to assess include:

  • Professional demeanor 
  • Technical communication
  • Self-reflection

Teaching an online course?

Conventional high-stakes exams are challenging to administer online in a way that supports academic integrity without placing students in unfair or uncomfortable positions through the use of remote proctoring services. These services have become both controversial (Coghlan et al., 2020) and expensive in the last year, and many instructors are seeking ways to assess student learning without relying on them. Alternative assessments, or the strategies above, are options to consider.

Coghlan, S., et al. (2020). “Good Proctor or ‘Big Brother’? AI Ethics and Online Exam Supervision Technologies.” arXiv abs/2011.07647.

Additional Considerations

Many instructors now recognize the efficacy of building real-world context into their courses and assignments by using data sets, popular news articles, and contemporary controversies as part of their assessments. Recently, we have experienced a series of national and international crises related to the COVID-19 pandemic, a rise in anti-Asian hate crimes, political violence, and the repeated instances of Black Americans harmed by police brutality. Caution should be taken in using data or news coverage related to these crises within a course this semester. Recognize that while it may not be possible to shield our students from these issues, we can do significant harm by naively using them within our classrooms (both face-to-face and digital). Many of our students, faculty, and staff have been directly impacted by the virus, and their emotional and mental wellbeing should be safeguarded through the learning process.

Many faculty have added an additional small-stakes assessment that gives students an opportunity to reflect on their learning in this difficult situation. Providing students with the chance to reflect on how they were able to learn, what they were able to learn, and how they will plan to learn more in the future can give our students agency during this crisis.  

More Resources

  • Tips for Exams and Alternative Assessments from Rutgers University’s School of Arts and Sciences
  • Alternatives to Traditional Testing from UC Berkeley’s Center for Teaching & Learning
  • Alternatives to Traditional Exams and Papers from Indiana University’s Center for Innovative Teaching and Learning
  • Measuring Student Learning With Exams: What COVID-19 Can Teach Us from the AAC&U Liberal Education Blog. 

University of Victoria: Teach Anywhere

Alternative forms of assessment


Although high-stakes exams have been a common assessment practice, many instructors are increasingly using alternative methods to promote other forms of learning while also offering flexibility for learners.

Alternative assessment ideas

While high stakes examinations are common (and sometimes necessary), alternative forms of assessment are an excellent way of facilitating critical thinking, problem solving, communication skills, real-world learning, and application of knowledge. 

How to select an alternative assessment

When selecting an alternative assessment, consider how it supports students to achieve and demonstrate the intended learning outcomes. Assessments should measure the skills or knowledge that students are meant to learn in a given course. They should also give students the opportunity and guidance to demonstrate these skills and articulate what they have learned.

Assessments have an enormous impact on learners: they affect what students attend to, how they work, and how they go about their studying. Consider what alternative format could give students an authentic way to demonstrate achievement of the learning outcomes.

Examples of alternative forms of assessment

In many cases, current assessment practices can be easily adapted either for in-person or online courses. Please visit the sections below for more details and links to external resources.

Case Studies

When used for assessment purposes, case studies often require students to work through a scenario or narrative to respond to specific questions, identify problems, and offer analysis or solutions. Good scenarios usually involve realistic situations, often based on a true story or event that happened in the past. See this resource for more details on how to use case studies to teach and assess students.

Student Choice

Offer students some choice in how they are assessed. This can come in the form of which questions to complete within an assessment, the type of assessment (test, project, paper, video, etc.), or the grading scheme. In the case of choice within the grading scheme, this could involve dropping the lowest one or two marks from students’ overall grade calculation, which can reduce stress for students who have had an “off” day.
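The "drop the lowest marks" scheme described above is easy to make precise. The sketch below is one possible implementation for equally weighted assessments; the quiz scores are hypothetical.

```python
# A small sketch of the "drop the lowest marks" grading scheme. The
# scores below are hypothetical percentage grades for equally weighted
# assessments.
def overall_grade(scores, drop_lowest=1):
    """Average a student's scores after discarding the lowest few."""
    if drop_lowest >= len(scores):
        raise ValueError("cannot drop every score")
    kept = sorted(scores)[drop_lowest:]  # discard the lowest mark(s)
    return round(sum(kept) / len(kept), 1)

# A student who had one "off" day is not penalized for it:
quiz_scores = [88, 92, 41, 79, 85]
grade = overall_grade(quiz_scores, drop_lowest=1)  # the 41 is dropped
```

Stating the rule this explicitly in the syllabus (how many marks are dropped, and that the remaining ones are averaged equally) avoids disputes at the end of term.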

Paper (with submission of drafts)

Assign a major paper and require submission of many of the parts (at least in draft form) throughout the course. A problem statement or thesis might come first. After that, an initial annotated bibliography, literature review, or outline of major points can be submitted and marked before the final paper is submitted.

Authentic Assessment

This is an assignment or activity type whose outcome could eventually be applied to a real-life situation. It is authentic in the sense that it is not an artificial classroom exercise; the result is often an artifact applicable to diverse contexts. Examples include laboratory experiments with presentation of findings (oral or written), debates, research proposals, policy briefs, reports, podcast episodes, blog or vlog posts, and poster presentations, to name a few. See this resource to learn more about authentic assessment.

Oral Assessment

Oral assessments are used to evaluate student learning through spoken language. These can include presentations on a given topic, individual or group interviews, and demonstration of skills and abilities. They can also be used in conjunction with written assessments. When using oral assessment, consider accessibility: unreliable internet access, anxiety disorders, language backgrounds, and hearing challenges. See more about oral assessments and exams.

Concept Map

This is an assignment type in which students map out how different concepts are organized, intertwined, and connected. Organizing and structuring knowledge helps deepen learners’ understanding and comprehension. It can also reveal confusions and areas for development. Click to download a guide on Assessing and Evaluating Concept Maps.

Discussions or Debate

Discussions (either online or in-person) provide a way for students to think about and respond critically to questions, think analytically, build on one another’s ideas, or work collaboratively. Visit our Online Discussions Guide to learn more.

Annotated Portfolios or e-Portfolios

In this type of assessment, students pull together or compile their best work from across the term and write a critical introduction to the collection. Collections of evidence can include text, images, videos and more. Portfolios can be especially useful in applied disciplines. See our resource on e-portfolios .

Annotated Bibliography

An annotated bibliography usually entails asking learners to compile a list of duly referenced sources followed by a summary or analysis. The idea is to have students demonstrate they understand what the sources are about and how they relate to one another. See this resource to learn more about using an annotated bibliography to assess learning.

Engagement Activities

Instead of grading activities for accuracy or awarding points for frequency of participation, consider assessing meaningful engagement. Learn more about assessing engagement.

Other Examples of Alternative Assessments

  • Alternative Online Assessments (University of Calgary)
  • A to Z of Assessment Methods (PDF) (Toronto Metropolitan University)

Suggestions for tests and exams

If you feel a more traditional exam is the best approach for your learners, here are some suggestions to increase flexibility and accessibility:

Open Book Exams

Open book exams allow learners to complete exams at their own pace and allow instructors to pose more complex questions that are less dependent on memorizing and recalling factual information. Examples include problems, short answers, or essays. Alternatively, instructors can provide questions in advance but ask students to respond to all (or some) in class. Read more about Open Book Assessments.

Lower-Stake Assignments

Instead of one or two very high-stakes assessments, consider a series of lower-stakes assessments, so that both you and your students know how they are doing before it is too late to adjust or get help. If adding low-stakes assessments throughout the term, make sure not to overload students with too many assignments. See more about Low-Stakes Writing Assignments.

Flexibly Timed Online Exams

Flexibly timed online exams have a limited time duration (e.g., 2 hours), but students can write them anytime during a specific time window. For example, the exam might start December 7, and a student can write it any time before the designated end date (e.g., December 10). Once a student begins the exam, they have a specified time limit in which to finish (e.g., 2 hours). See our guide on different types of online exams.
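The timing rule above amounts to: a student may start anywhere inside the availability window, and the attempt is due at the earlier of (start time + time limit) and the window close. A minimal sketch, with hypothetical dates and durations:

```python
# A sketch of the timing logic behind a flexibly timed online exam.
# The window dates and the 2-hour limit below are hypothetical.
from datetime import datetime, timedelta

WINDOW_OPEN = datetime(2024, 12, 7, 0, 0)
WINDOW_CLOSE = datetime(2024, 12, 10, 23, 59)
TIME_LIMIT = timedelta(hours=2)

def attempt_deadline(started_at):
    """Return when an attempt is due; reject starts outside the window."""
    if not (WINDOW_OPEN <= started_at <= WINDOW_CLOSE):
        raise ValueError("exam is not available at this time")
    # Due at start + limit, but never later than the window close.
    return min(started_at + TIME_LIMIT, WINDOW_CLOSE)

# Starting mid-window leaves the full two hours...
mid = attempt_deadline(datetime(2024, 12, 8, 14, 0))
# ...but starting near the close truncates the attempt at the window end.
late = attempt_deadline(datetime(2024, 12, 10, 23, 0))
```

Note the truncation at the window end: students who start very late get less than the full time limit, so it is worth stating this behavior explicitly in the exam instructions.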

References & Additional Resources

Adapted from BC Campus: FLO Bootcamp Alternative Assessment Challenge

Additional References:

  • Alternative online assessments (University of Calgary)
  • Assessment Strategies Module (Queen’s University).
  • Best Practices: Alternative Assessments (Ryerson University)
  • Pros and cons of 16 diverse assessments that can be done virtually
  • 13 Alternatives to Traditional Testing
  • Responding to Student Papers Effectively and Efficiently (University of Toronto)


We acknowledge and respect the Lək̓ʷəŋən (Songhees and Esquimalt) Peoples on whose territory the university stands, and the Lək̓ʷəŋən and W̱SÁNEĆ Peoples whose historical relationships with the land continue to this day.


Office of IDEAS

Alternative assessment refers to non-traditional methods of evaluating students’ learning, such as projects, portfolios, presentations, and multimedia, as opposed to traditional exams and quizzes. It focuses on assessing students’ deeper understanding and skills rather than memorization and recall.

Alternative assessment methods can provide a more comprehensive and authentic view of students' abilities. They promote critical thinking, problem-solving, and creativity, which are valuable skills for real-world applications. Additionally, they can reduce the potential for cheating or plagiarism and can reduce stress related to high-stakes exams.

Our team of Instructional Designers is available for one-on-one consultations to explore innovative forms of assessment and learning strategies that not only engage students but also uphold academic integrity. By reimagining assessment formats, you can deter cheating while enriching the educational experience.

Schedule a consultation

Using Alternative Assessment Tools

Traditional high-stakes assessments, like midterms and finals, can exert pressure and cause anxiety that impact students' performance, and they often emphasize memorization over genuine understanding. Alternative assessment strategies, or the integration of recurring low-stakes assignments, can focus on students' abilities and how they apply their knowledge rather than on what they simply know or recall.

  • Low Stakes: Use more low-stakes assessments like homework and quizzes instead of high-stakes exams. This gives students more chances for practice and feedback and gives instructors meaningful data points on understanding and progress. Low-stakes assessments are assigned fewer points and sometimes may be graded on completion instead of accuracy.
  • Reducing High Stakes in Assessments: Strategies may include dropping the lowest score, allowing retakes, requiring drafts, providing revision opportunities, or reducing a single assessment's impact on the overall grade.
  • Panopto: Panopto's in-video quizzing capabilities help instructors test comprehension, reinforce key concepts, improve knowledge retention, and make their videos more engaging. Panopto quizzes can be integrated directly into the Canvas gradebook.
  • Projects: Projects as an assessment method bridge theoretical knowledge with real-world application and can provide a comprehensive view of a student's abilities, preparing them effectively for professional settings.
  • Video: Videos can illustrate students' understanding, whether explaining a concept, showcasing a skill, or offering reflections on course material.
  • Open-Book Exams: Open-book exams containing more conceptual or applied questions are more difficult to answer by looking things up online.
  • Reflective Paper or Presentation: Reflective assessments give students time to absorb what they learned and to engage with thinking about how they have learned it.

Alternative Assessment Tools in Canvas

  • Assignments - Create assignments for students to upload their work instead of assigning a quiz/exam that requires Respondus Monitor. Peer Review capability can be added to assignments, as well as rubrics.
  • Academic integrity statements - Examples: (a) "I have done my own work and have neither given nor received unauthorized assistance on this work." (add as a question; from the Fresno State Honor Code); (b) "You may use your books and notes while taking the test, but you must work on your own. Do not share your answers or discuss them with anyone, even after completing the test. You will have 60 minutes to complete the test up until the deadline of Tuesday at 11:55 PM." (add to quiz instructions); (c) "Please read the statement below carefully before beginning the test: By selecting Attempt quiz now, I acknowledge that I am the assigned student taking the quiz and the work is entirely my own." (add to quiz instructions).
  • Drop lowest score(s) - Setting up Assignment Groups (categories) in the Assignments area gives you the ability to drop the lowest score(s). Besides offering multiple low-stakes exams, dropping the lowest score(s) reduces the pressure on students to cheat.
  • Use Question Banks (Classic Quizzes)/Item Banks (New Quizzes) when building your quiz to pull a specific number of questions from a larger pool. The quiz will randomly pull questions from the bank, so students' tests will not be the same.
  • Shuffle Questions - Done through a Question Bank in Classic Quizzes and through the Settings in New Quizzes.
  • Shuffle Answers in Multiple Choice/Multiple Answer questions. Be sure all options are worded so they still make sense in any order (e.g., "all of these options" rather than "all of the above").
  • Add a time limit - Once a student begins a quiz, they have only a set period of time to complete it. This reduces the time students have to fact-check or look up answers (suggested times: 30-45 seconds per True/False question, 60-90 seconds per Multiple Choice question).
  • Show one question at a time - Displays only the question the student is currently working on rather than the entire quiz. A sub-option, Lock questions after answering, prevents students from going back to change a response once a question is answered.
  • Use Short Answer/Essay questions instead of multiple-choice questions, requiring students to analyze and evaluate rather than recall facts.
  • Choose to release only scores, withholding students' correct/incorrect answers until everyone has completed the exam, or don't release that information at all.
  • Assume it's open book!
  • Discussions - Can be graded or ungraded. Use discussion boards for students to post replies to a prompt you give them and to reply to each other.
  • Google Assignments - You can create an online assignment that embeds a document directly from your Google Drive folder. Accepted assignment types are Google Documents, Spreadsheets, and Slides.
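The per-question timing guidance above (30-45 seconds per True/False item, 60-90 seconds per multiple-choice item) can be turned into a quick time-limit estimate. The function below is a minimal sketch; the function name, the default seconds, and the grace-time buffer are illustrative choices, not Canvas settings.

```python
def suggested_time_limit(n_true_false: int, n_multiple_choice: int,
                         seconds_tf: int = 45, seconds_mc: int = 90,
                         buffer_minutes: int = 5) -> int:
    """Return a suggested quiz time limit in whole minutes, using the
    upper bound of the per-question guidance plus a small grace buffer."""
    total_seconds = n_true_false * seconds_tf + n_multiple_choice * seconds_mc
    minutes = -(-total_seconds // 60)  # ceiling division
    return minutes + buffer_minutes

# A quiz with 10 True/False and 20 multiple-choice questions:
# 10*45 + 20*90 = 2250 s -> 38 min, plus a 5-minute buffer = 43 min.
print(suggested_time_limit(10, 20))  # 43
```

The upper bounds are used so that a slow but honest test-taker is not penalized; tighten the defaults if look-up time is a bigger concern than test anxiety.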

Note: We have become aware of an issue when students access an exam/quiz on an iPad that has been created as a Canvas New Quiz with the Respondus LockDown Browser enabled. Students will be locked out of not only the exam/quiz but also their iPad.

iPads can still be used in this scenario if students follow the steps below. Students cannot tell whether an exam/quiz was created as a Classic Quiz or a New Quiz until they have begun it, so please let your students know when an exam/quiz is a New Quiz, and share these instructions with them if you are using New Quizzes with Respondus enabled:

  • Open the mobile browser (Chrome or Safari) on your iPad - NOT the Canvas Student app.
  • Log into Canvas ( fresnostate.instructure.com ) and navigate to the quiz.
  • Start the quiz; the LockDown Browser app will launch automatically.

Turnitin is an originality-checking and plagiarism-prevention service. Use it as part of a Canvas Assignment.

Alternatives to Remote Proctoring


What are some practical examples of alternative assessment strategies I can implement in my classes?

You can explore options like group projects, reflective journals, case studies, peer assessments, oral presentations, and ePortfolios. These methods encourage active engagement and allow students to demonstrate their understanding in diverse ways.

How can I ensure the fairness and reliability of alternative assessments?

To maintain fairness, provide clear assessment criteria and guidelines to students. Consider using rubrics to evaluate their work consistently. You can also incorporate self-assessment and peer-assessment components to enhance objectivity.

Are there any challenges associated with using alternative assessment, and how can I address them?

Challenges may include increased grading time, varied student abilities, and resistance to change. Address these challenges by setting realistic expectations, offering support and training to students, and gradually incorporating alternative assessment methods into your teaching practice.

What options exist for continuing to use exams or proctoring?

  • The Academic Technology and Resource Center is a resource where faculty can receive assistance in creating Canvas quizzes with features that promote academic integrity. The Center accepts requests to create Canvas quizzes for you or can provide hands-on guidance to optimize your assessments for secure and reliable test-taking. Request Quiz Creation
  • The Bulldog Testing Center offers a secure, proctored environment for faculty who want to administer supervised exams. Faculty can schedule a window of time for students to take their exams under the supervision of professional proctors. Contact Bulldog Testing Center

Last Updated Oct 4, 2023

iStudy for Success!

Online learning tutorials for essential college skills.


Alternative Assessments


A performance assessment requires you to perform a task rather than select an answer from a ready-made list. In this method of assessment, you are actively involved in demonstrating what you have learned. Performance assessments may be more valid indicators of your knowledge and abilities than written tests.

Portfolios are also a form of performance assessment. Student portfolios are a collection of evidence, prepared by the student and evaluated by the faculty member, to demonstrate mastery, comprehension, application, and synthesis of a given set of concepts. To create a high quality portfolio, students must organize, synthesize, and clearly describe their achievements and effectively communicate what they have learned.

For more information on e-portfolios, check out the e-Portfolio iStudy tutorial and the e-Portfolio at Penn State website.

Case-based assessment instruments evaluate the extent to which you are able to handle authentic, real-world problems. Case-based learning focuses on building knowledge within a group that is working together to examine the facts presented in the case. Much of case-based learning involves learners striving to resolve questions that have no single right answer. Emphasis is placed on the process of resolving a stated problem rather than on the actual answers to the questions.



Students’ Perception of Alternative Assessment: A Systematic Literature Review

Zia ur Rahman Zaheer

Many studies have been conducted on the implementation of alternative assessments on students. However, this study was carried out to explore the definitions, characteristics, and students' perceptions of alternative assessment at the university and school levels. One hundred and seventeen (n=117) journal articles were found through different search engines; only twenty-four (n=24) recent and relevant publications, published between 2002 and 2018, were included in this study, and the remaining were excluded. Among the inclusions were ten (n=10) quantitative studies, six (n=6) qualitative studies, seven (n=7) mixed-method studies, and one (n=1) review paper. The overall respondents of the studies numbered two thousand eight hundred and seven (n=2807). Most of the studies were carried out in Asian countries such as Indonesia, Iran, Turkey, Malaysia, Bosnia, Thailand, and Egypt, and some were conducted in the USA, the UK, Scotland, and the Netherlands. The findings reveal that learners have a positive perception of implementing alternative assessment. Furthermore, some studies found alternative assessment preferable, while others viewed it favorably. Several studies also offered recommendations for its implementation.

Related Papers

Students' Perception of Alternative Assessment: A Systematic Literature Review, International Journal of Linguistics, Literature and Translation (IJLLT)

wahidullah Alokozay , Zia ur Rahman Zaheer


Procedia-Social and …

NURFARADILLA NASRI

Assad Yousafzai

The Educational Forum

Amma Akrofi

IJSRP Journal

Assessment has necessarily become the vehicle and engine that drives the delivery of education and other related educational processes. It is a truism that 'what is assessed becomes what is valued, which becomes what is taught' (Broadfoot, 2004). Governments across the globe have realized the potential of educational assessment in engendering the much-coveted educational goal of enhanced pupil learning. The impact of alternative assessment forms on pupils' learning can be discerned from the fact that this framework of assessment is popularly called assessment for learning. Recent decades have witnessed marked changes in the assessment perspective, assessment systems, and assessment regimes.

PsycEXTRA Dataset

SMART MOVES JOURNAL IJELLH

Alternative assessment is an ongoing process of making judgments about students' progress in language by using nonconventional strategies. This paper attempts to elaborate a number of alternative techniques and distinguishes between alternative assessment and traditional forms of assessment. It advocates the opportunities of alternative assessment that help students to become active, dynamic, and spontaneous in their language learning process. At the end of the paper, I discuss some obstacles of using this nontraditional, time-consuming test format and suggest combinations of perspectives and approaches that might be relevant in implementing these authentic, process-oriented assessments.

International Journal of Language and Literary Studies

Khadija ANASSE

Assessment is a fundamental part in language teaching/learning process. It is a guiding factor that provides insight to teachers and learners about the best way to proceed. The literature about language assessment is rich. It includes different forms and techniques of language assessment. In this paper, however, the focus is mainly on alternative assessment. The latter is different both in form and nature from traditional assessment. Researchers confirm that if applied properly, alternative assessment can reflect students’ progress and motivate them to keep up the hard work. This paper, hence, aims to study the attitude of language teachers toward alternative assessment and the main obstacles that may hinder its application in the Moroccan classroom. This research is quantitative. It uses a questionnaire as the main data collection tool. The findings indicate that teachers hold a positive attitude toward alternative assessment, but they fail to apply it in their classroom due to dif...

The aim of this study is to investigate the factors affecting the assessment preferences of ELT (English Language Teaching) students. Data were collected from 150 ELT students studying at four universities located in Ankara (Turkey). To analyze the collected data, multiple regression was calculated. Level of preference for alternative assessment methods was defined as the dependent variable. Several learning-related characteristics were defined as predictive (independent) variables: critical thinking learning strategy, metacognitive learning strategy, self-efficacy for learning, level of preference for higher-order thinking tasks, and kinesthetic, auditory, and visual learning modalities. Before the multiple regression analysis, assumptions such as multicollinearity, multivariate normality, and homogeneity of variance were tested. The multiple regression results showed that those independent variables explain 30 percent of the variance in the level of preference for alternative assessment methods. Level of preference for higher-order thinking tasks has the strongest effect; self-efficacy for learning and level of adoption of metacognitive learning strategies have the second and third strongest effects, respectively. Moreover, level of adoption of critical thinking learning strategies and learning modalities had no significant effect on the dependent variable.

Davina Klein



4.3. Frameworks for assessment of alternatives


So, what does it take to change the conventional practice with possibly hazardous or harmful chemicals or processes to a more sustainable solution?

The major step is assessment of alternative solutions, which takes into account a wide range of criteria. When thinking about replacing the existing process with an innovative alternative, chemists and engineers try to avoid so-called "regrettable substitutions". In other words, avoid switching to an alternative process or chemical that either transfers risk to another point in the production chain or lifecycle or contains unknown future risks.

Ideally, the alternatives chosen in the concluded assessment must:

  • be technologically feasible;
  • provide the same or better value in performance and cost;
  • have an improved profile for human health and environment;
  • account for economic and social considerations;
  • have the potential to be sustainable over a long period of time (watch for restrictions that may arise in the future; for example, a shortage of rare elements).

We see that choosing the best alternative requires careful investigation! Such an investigation must be comprehensive (i.e., it will require cross-disciplinary expertise) and based on high-quality data.

Let us look at some typical criteria that may be used in the chemical industry and research for evaluating various processes and reactions. In the "green chemistry" context, the main emphasis is put on the environmental profile of a chemical alternative, while economic feasibility is included in the picture at the stage of technology transfer.

Evaluation criteria

Table 4.1 below presents the set of criteria that can be effectively used for assessing alternatives in chemical and material manufacturing. On the left, the top-level criteria are listed, which are key points of concern when introducing new chemicals to the manufacturing process. The middle column lists some sub-criteria, which show how the impacts can be distributed. The right column lists specific measures for each type of impact, which effectively become guides for data search and analysis.

The list of criteria given in Table 4.1 has proved effective for some case studies. While it puts the main emphasis on hazard assessment and environmental impact, the technical and economic criteria are also included and can play a significant role even at the stage of selecting particular chemical reagents for the process. Note that the above list of criteria and sub-criteria is not written in stone; it is presented here as an illustration. For each specific assessment project, the choice of criteria needs to be justified through expert and stakeholder involvement and will depend on the goals of the assessment. Depending on the assessment team's decisions, some criteria can be added, others removed, and the weights of all factors can be tuned. Clear identification and justification of the selected criteria is critical.

Data collection

In any assessment project, clear and consistent requirements should be set for the sources of data to be used. Information should meet specific data quality criteria for inclusion into the assessment. Quality of data will determine their utility. Data selection should follow the internationally recognized definition for reliable information: "Reliable information is from studies or data generated according to valid accepted testing protocols in which the test parameters documented are based on specific testing guidelines or in which all parameters described are comparable to a guideline method. Where such studies or data are not available, the results from accepted models and quantitative structure activity relationship (QSAR) approaches may be considered. The methodology by Organization for Economic Cooperation and Development (OECD) can be used for the determination of reliable studies." (Principles of Alternative Assessment, 2012)

Preferably, data should be obtained from authoritative bodies, such as those referenced by US government agencies (e.g., the EPA). The following are links to some such resources:

  • U.S. EPA PBT Profiler software can be used to gain information on persistence, bioaccumulation potential and toxicity of organic substances.
  • Government Toxicology Data Network

Technical data sources

Information should be obtained from published studies or directly from technical experts or users of the alternatives. In other cases, information can be requested from product manufacturers. The specific performance information (reactions, energy effects, thermodynamic analysis) available from experimental labs may be needed to draw conclusions about technical feasibility for each individual application. Clear referencing of the data sources is important.

Economic data sources

Data sources for financial information may include manufacturers, stakeholders, the Chemical Economics Handbook , and other standard reference sources. For many emerging alternatives, hard cost information may be unavailable. Cost comparisons today may not be directly extrapolated to emerging technologies because learning curves, scaling, and other factors can affect costs over time. Assumptions and use of surrogate data should be clearly explained in the assessment.

Ranking the alternatives

Quantification (scoring) of the impacts based on the criteria listed above is typically done via a multi-criteria analysis (MCA) model, appropriately built for the project. MCA provides techniques for comparing and ranking different outcomes of existing and alternative processes. When setting up an assessment project, it is important that the scoring system be transparent and consistently applied to all scenarios under consideration.

MCA is a great tool for comparing different options, but it is hardly objective, because the choice of criteria and the metrics used to quantify impacts vary from case to case. In contrast, cost analysis aims to provide an objective measure of economic feasibility based on predicted cash flow. Cost analysis requires impacts to be expressed in monetary terms; MCA can use both monetary and non-monetary measures, as well as both quantitative and qualitative measures.

In MCA, ranking of chemicals or processes with respect to the listed criteria can be done in a variety of ways. One way is to assign each criterion a score that spans from 0 to 1, with the value of 1 corresponding to the best (most preferable) choice and the value of 0 corresponding to the worst (least preferable) choice among those available. The rest of the choices score in between.

For example, if substance A performs better than substances B and C on the acute toxicity criterion, and substance B performs the worst of the three, then A receives a score of 1 and B a score of 0. In a qualitative assessment, substance C receives a score of 0.5 (linear dependence). In a quantitative assessment, the utility values may be tied to the acute toxicity measure, placing substance C on a relative scale (i.e., taking into account how much more toxic it is than substance A and how much less toxic than substance B). This approach will be illustrated in one of the case studies further in this lesson.
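As a minimal sketch of this relative (linear utility) scoring, the best option scores 1, the worst scores 0, and the rest are placed proportionally along the raw measure. The function name and the toxicity numbers below are invented for illustration.

```python
def utility_scores(values: dict, lower_is_better: bool = True) -> dict:
    """Map raw criterion values to [0, 1] utility scores (1 = best, 0 = worst)."""
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:                      # all options tie on this criterion
        return {name: 1.0 for name in values}
    span = hi - lo
    return {
        name: (hi - v) / span if lower_is_better else (v - lo) / span
        for name, v in values.items()
    }

# Hypothetical acute-toxicity indices (lower is better): A is least toxic
# (score 1.0), B is most toxic (score 0.0), and C lands at 0.75.
print(utility_scores({"A": 10.0, "B": 50.0, "C": 20.0}))
```

In a purely qualitative assessment the middle option would simply receive 0.5; the quantitative version above instead reflects how far C actually sits between A and B on the raw measure.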

Another possible approach for assigning scores is outranking. There is no relative scoring; instead, alternatives are compared on each criterion in pairs (two at a time). This way, we identify the extent to which one alternative outperforms the other. In the end, the pairwise performance scores (1 = "win"; 0 = "lose") are aggregated, and a preference index is calculated for each alternative.

For example, substance A outperforms B and C on acute toxicity, thus getting a cumulative score of 2 (1 point for each "win"). Respectively, substance C receives a score of 1 for beating B, and B is left with 0. One of the case studies described further in this lesson uses both approaches in order to compare the outcomes.
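The pairwise outranking tally can be sketched as follows (function name and values illustrative; ties here award no points, one of several conventions):

```python
from itertools import combinations

def outranking_scores(values: dict, lower_is_better: bool = True) -> dict:
    """Pairwise outranking: 1 point per head-to-head 'win' on the criterion."""
    scores = {name: 0 for name in values}
    for a, b in combinations(values, 2):
        if values[a] == values[b]:
            continue                  # a tie earns neither option a point
        a_wins = (values[a] < values[b]) == lower_is_better
        scores[a if a_wins else b] += 1
    return scores

# A beats both B and C (2 points), C beats B (1 point), B loses both (0):
print(outranking_scores({"A": 10.0, "B": 50.0, "C": 20.0}))  # {'A': 2, 'B': 0, 'C': 1}
```

Note that outranking preserves only the ordering of alternatives, not the distances between them, which is why comparing both approaches on the same case can be informative.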

Evaluation of the economic impacts associated with the implementation of a new product or practice generally focuses on changes in capital and operational costs and revenues. (These terms of cost analysis were reviewed in Lesson 3.) The main areas where impact is expected are:

  • cost of new equipment or production process;
  • operation and maintenance costs (labor costs, energy costs, etc.);
  • cost differences for different substances;
  • cost of transportation;
  • cost of design, monitoring, and training;
  • regulatory costs.

The data on economic impacts is collected in consultation with relevant supply chain actors and possibly trade associations. Evaluation can be an iterative process, starting from qualitative comparison of the old and new scenarios and ending at quantification of impacts with monetary values.

The European Chemicals Agency (ECHA) website provides a more detailed guide to economic assessment of alternatives and can be used as a resource for this task. There are some documents linked that you are not required to read unless you're specifically interested in the socio-economic assessment.

Weighing factors

In most situations, decision-makers are not equally concerned about all highlighted criteria. For instance, a particular decision-maker may place more importance on whether a household cleaner causes cancer than on whether it contributes to smog formation. Thus, the decision-making method should account for the respective "weight" of each criterion in the evaluation process. Since different stakeholders may place different weights upon criteria, weighting raises significant questions in the context of a regulatory program. For example, can we consistently compare the alternatives without regulating the weights of factors? This is something to watch out for.

The criteria weights can be established by three methods:

  • using generic or recommended weights;
  • calculating the weights based on objective measures; and
  • eliciting weights from stakeholders or experts.

Method (1) is exemplified by Table 4.2, which lists several sets of generic weights recommended by the National Institute of Standards and Technology (NIST), based on data from the Environmental Protection Agency (EPA) and a Harvard study, for a set of criteria usually used in life cycle assessment (LCA).

In the above table, the NIST panel generated weights through stakeholder consultation involving 7 building product manufacturers, 7 product users, and 5 LCA experts. The EPA and Harvard weights were derived by NIST from sets of qualitative rankings of impacts developed respectively by EPA's Science Advisory Board in 1990 and Harvard researchers in 1992.

Method (2), calculating the weights, can be based on a distance-to-target approach, in which each criterion is weighted by the variance between the existing and desired conditions. For example, if the global community is further from achieving its goal for global warming than its goal for ozone depletion, then greater weight is given to global warming potential. Another route to such a calculation is monetary evaluation, in which weighting is based on the cost of the environmental consequences.

Method (3), which obtains weights from stakeholders directly, may be based on public opinion surveys, community working group decisions, and various multi-criteria analysis models. The main types of stakeholders to consider are environmental non-governmental organizations, industry, policymakers, and consumers (the public). Weight assignments collected through surveys are then averaged across the board of stakeholders and normalized to 100%.
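The average-then-normalize step can be sketched as below; the criterion names, stakeholder roles, and survey numbers are invented for illustration.

```python
def normalized_weights(survey: list) -> dict:
    """Average per-criterion weights across stakeholder responses,
    then normalize the averages so they sum to 100%."""
    criteria = survey[0].keys()
    avg = {c: sum(resp[c] for resp in survey) / len(survey) for c in criteria}
    total = sum(avg.values())
    return {c: round(100 * v / total, 1) for c, v in avg.items()}

# Hypothetical survey responses (weights out of 100 from each stakeholder):
stakeholder_responses = [
    {"human_health": 50, "environment": 30, "cost": 20},   # NGO
    {"human_health": 30, "environment": 20, "cost": 50},   # industry
    {"human_health": 40, "environment": 40, "cost": 20},   # policymaker
]
print(normalized_weights(stakeholder_responses))
# {'human_health': 40.0, 'environment': 30.0, 'cost': 30.0}
```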

The choice among these methods depends on the goals of the assessment project, its scope, resources, and timeline. When building an assessment project, the weighting process should be transparent and well justified. When comparing different cases within one study, keep the weighting scale the same across the evaluation criteria.

Within the MCA approach, the final score (S_i) of a particular option (alternative) with respect to any major top-level criterion i is estimated as the average of all sub-criteria scores under that criterion:

S_i = (1/n) ∑_{j=1}^{n} s_{i,j}

where n is the number of sub-criteria or metrics used to assess the option under top-level criterion i, and s_{i,j} is the option's score on the j-th sub-criterion. The final total score (S_tot) is the weighted sum of all top-level criteria scores:

S_tot = ∑_{i=1}^{N} S_i w_i

where N is the number of top-level criteria considered in the assessment and w_i is the weight factor of a particular criterion. The example study presented in the next section of this lesson demonstrates how the MCA scores are calculated and compared.

Consider the following supplemental reading materials on this topic:

Supplemental (Optional) Reading - Alternatives Assessment Methodology

  • Principles of Alternatives Assessment, Industry Coalition, 2012.

These recommendations were developed by the Industry Coalition on how the assessment of chemical alternatives should be conducted.

  • Guidance on the Preparation of Socio-Economic Analysis as Part of an Application for Authorization, European Chemicals Agency, Version 1, January 2011.

This website provides some advice on socio-economic analysis of chemical alternatives under REACH regulation program. 

Supplemental (Optional) Reading - Multi-Criteria Analysis

  • Linkov, I., Moberg, E., Multi-Criteria Decision Analysis: Environmental Applications and Case Studies, CRC Press, 2011.

This book is available online through Penn State Library system. It provides in-depth explanation on MCA methods and shows its applications to environmental science.

Do Your Students Know How to Analyze a Case—Really?


Just as actors, athletes, and musicians spend thousands of hours practicing their craft, business students benefit from practicing their critical-thinking and decision-making skills. Students, however, often have limited exposure to real-world problem-solving scenarios; they need more opportunities to practice tackling tough business problems and deciding on, and executing, the best solutions.

To ensure students have ample opportunity to develop these critical-thinking and decision-making skills, we believe business faculty should shift from teaching mostly principles and ideas to mostly applications and practices. And in doing so, they should emphasize the case method, which simulates real-world management challenges and opportunities for students.

To help educators facilitate this shift and help students get the most out of case-based learning, we have developed a framework for analyzing cases. We call it PACADI (Problem, Alternatives, Criteria, Analysis, Decision, Implementation); it can improve learning outcomes by helping students better solve and analyze business problems, make decisions, and develop and implement strategy. Here, we’ll explain why we developed this framework, how it works, and what makes it an effective learning tool.

The Case for Cases: Helping Students Think Critically

Business students must develop critical-thinking and analytical skills, which are essential to their ability to make good decisions in functional areas such as marketing, finance, operations, and information technology, as well as to understand the relationships among these functions. For example, the decisions a marketing manager must make include strategic planning (segments, products, and channels); execution (digital messaging, media, branding, budgets, and pricing); and operations (integrated communications and technologies), as well as how to implement decisions across functional areas.

Faculty can use many types of cases to help students develop these skills. These include the prototypical “paper cases”; live cases , which feature guest lecturers such as entrepreneurs or corporate leaders and on-site visits; and multimedia cases , which immerse students into real situations. Most cases feature an explicit or implicit decision that a protagonist—whether it is an individual, a group, or an organization—must make.

For students new to learning by the case method—and even for those with case experience—some common issues can emerge; these issues can sometimes be a barrier for educators looking to ensure the best possible outcomes in their case classrooms. Unsure of how to dig into case analysis on their own, students may turn to the internet or rely on former students for “answers” to assigned cases. Or, when assigned to provide answers to assignment questions in teams, students might take a divide-and-conquer approach but not take the time to regroup and provide answers that are consistent with one another.

To help address these issues, which we commonly experienced in our classes, we wanted to provide our students with a more structured approach for how they analyze cases—and to really think about making decisions from the protagonists’ point of view. We developed the PACADI framework to address this need.

PACADI: A Six-Step Decision-Making Approach

The PACADI framework is a six-step decision-making approach that can be used in lieu of traditional end-of-case questions. It offers a structured, integrated, and iterative process that requires students to analyze case information, apply business concepts to derive valuable insights, and develop recommendations based on these insights.

Prior to beginning a PACADI assessment, which we’ll outline here, students should first prepare a two-paragraph summary—a situation analysis—that highlights the key case facts. Then, we task students with providing a five-page PACADI case analysis (excluding appendices) based on the following six steps.

Step 1: Problem definition. What is the major challenge, problem, opportunity, or decision that has to be made? If there is more than one problem, choose the most important one. Often when solving the key problem, other issues will surface and be addressed. The problem statement may be framed as a question; for example, How can brand X improve market share among millennials in Canada? Usually the problem statement has to be re-written several times during the analysis of a case as students peel back the layers of symptoms or causation.

Step 2: Alternatives. Identify in detail the strategic alternatives to address the problem; three to five options generally work best. Alternatives should be mutually exclusive, realistic, creative, and feasible given the constraints of the situation. Doing nothing or delaying the decision to a later date are not considered acceptable alternatives.

Step 3: Criteria. What are the key decision criteria that will guide decision-making? In a marketing course, for example, these may include relevant marketing criteria such as segmentation, positioning, advertising and sales, distribution, and pricing. Financial criteria useful in evaluating the alternatives should be included—for example, income statement variables, customer lifetime value, payback, etc. Students must discuss their rationale for selecting the decision criteria and the weights and importance for each factor.

Step 4: Analysis. Provide an in-depth analysis of each alternative based on the criteria chosen in step three. Decision tables using criteria as columns and alternatives as rows can be helpful. The pros and cons of the various choices as well as the short- and long-term implications of each may be evaluated. Best, worst, and most likely scenarios can also be insightful.
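The weighted scoring behind such a decision table can be sketched in a few lines of code. In this sketch the alternatives, criteria, weights, and 1-to-10 scores are all hypothetical placeholders, not drawn from any actual case:

```python
# Hypothetical decision table: criteria as columns, alternatives as rows.
# Scores are 1-10 ratings; weights reflect the rationale set in step 3.
criteria = ["Market fit", "Profitability", "Feasibility"]
weights = [0.5, 0.3, 0.2]  # must sum to 1

alternatives = {
    "A1. Line extension": [8, 6, 9],
    "A2. New channel":    [6, 8, 7],
    "A3. Rebranding":     [7, 5, 6],
}

for name, scores in alternatives.items():
    total = sum(s * w for s, w in zip(scores, weights))
    print(f"{name}: weighted score = {total:.1f}")
```

Ranking the weighted totals points to the leading alternative, which students then stress-test in steps 5 and 6 rather than accept mechanically.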

Step 5: Decision. Students propose their solution to the problem. This decision is justified based on an in-depth analysis. Explain why the recommendation made is the best fit for the criteria.

Step 6: Implementation plan. Sound business decisions may fail due to poor execution. To enhance the likelihood of a successful outcome, students describe the key steps (activities) needed to implement the recommendation, the timetable, projected costs, expected competitive reactions, success metrics, and the risks in the plan.

“Students note that using the PACADI framework yields ‘aha moments’—they learned something surprising in the case that led them to think differently about the problem and their proposed solution.”

PACADI’s Benefits: Meaningfully and Thoughtfully Applying Business Concepts

The PACADI framework covers all of the major elements of business decision-making, including implementation, which is often overlooked. By stepping through the whole framework, students apply relevant business concepts and solve management problems via a systematic, comprehensive approach; they’re far less likely to surface piecemeal responses.

As students explore each part of the framework, they may realize that they need to make changes to a previous step. For instance, when working on implementation, students may realize that the alternative they selected cannot be executed or will not be profitable, and thus need to rethink their decision. Or, they may discover that the criteria need to be revised since the list of decision factors they identified is incomplete (for example, the factors may explain key marketing concerns but fail to address relevant financial considerations) or is unrealistic (for example, they suggest a 25 percent increase in revenues without proposing an increased promotional budget).

In addition, the PACADI framework can be used alongside quantitative assignments, in-class exercises, and business and management simulations. The structured, multi-step decision framework encourages careful and sequential analysis to solve business problems. Incorporating PACADI as an overarching decision-making method across different projects will ultimately help students achieve desired learning outcomes. As a practical “beyond-the-classroom” tool, the PACADI framework is not a contrived course assignment; it reflects the decision-making approach that managers, executives, and entrepreneurs exercise daily. Case analysis introduces students to the real-world process of making business decisions quickly and correctly, often with limited information. This framework supplies an organized and disciplined process that students can readily defend in writing and in class discussions.

PACADI in Action: An Example

Here’s an example of how students used the PACADI framework for a recent case analysis on CVS, a large North American drugstore chain.

The CVS Prescription for Customer Value*

PACADI Stage

Summary Response

Problem

How should CVS Health evolve from the “drugstore of your neighborhood” to the “drugstore of your future”?

Alternatives

A1. Kaizen (continuous improvement)

A2. Product development

A3. Market development

A4. Personalization (micro-targeting)

Criteria (include weights)

C1. Customer value: service, quality, image, and price (40%)

C2. Customer obsession (20%)

C3. Growth through related businesses (20%)

C4. Customer retention and customer lifetime value (20%)

Analysis

Each alternative was analyzed by each criterion using a Customer Value Assessment Tool.

Decision

Alternative 4 (A4): Personalization was selected. This is operationalized via: segmentation—move toward segment-of-1 marketing; geodemographics and lifestyle emphasis; predictive data analysis; relationship marketing; people, principles, and supply chain management; and exceptional customer service.

Implementation

Partner with leading medical school

Curbside pick-up

Pet pharmacy

E-newsletter for customers and employees

Employee incentive program

CVS beauty days

Expand to Latin America and Caribbean

Healthier/happier corner

Holiday toy drives/community outreach

*Source: A. Weinstein, Y. Rodriguez, K. Sims, R. Vergara, “The CVS Prescription for Superior Customer Value—A Case Study,” Back to the Future: Revisiting the Foundations of Marketing from Society for Marketing Advances, West Palm Beach, FL (November 2, 2018).

Results of Using the PACADI Framework

When faculty members at our respective institutions, Nova Southeastern University (NSU) and the University of North Carolina Wilmington, have used the PACADI framework, our classes have been more structured and engaging. Students vigorously debate each element of their decision and note that this framework yields “aha moments”—they learned something surprising in the case that led them to think differently about the problem and their proposed solution.

These lively discussions enhance individual and collective learning. As one external metric of this improvement, we have observed a 2.5 percent increase in student case grade performance at NSU since this framework was introduced.

Tips to Get Started

The PACADI approach works well in in-person, online, and hybrid courses. This is particularly important as more universities have moved to remote learning options. Because students have varied educational and cultural backgrounds, work experience, and familiarity with case analysis, we recommend that faculty members have students work on their first case using this new framework in small teams (two or three students). Additional analyses should then be solo efforts.

To use PACADI effectively in your classroom, we suggest the following:

Advise your students that your course will stress critical thinking and decision-making skills, not just course concepts and theory.

Use a varied mix of case studies. As marketing professors, we often address consumer and business markets; goods, services, and digital commerce; domestic and global business; and small and large companies in a single MBA course.

As a starting point, provide a short explanation (about 20 to 30 minutes) of the PACADI framework with a focus on the conceptual elements. You can deliver this face to face or through videoconferencing.

Give students an opportunity to practice the case analysis methodology via an ungraded sample case study. Designate groups of five to seven students to discuss the case and the six steps in breakout sessions (in class or via Zoom).

Ensure case analyses are weighted heavily as a grading component. We suggest 30–50 percent of the overall course grade.

Once cases are graded, debrief with the class on what they did right and areas needing improvement (30- to 40-minute in-person or Zoom session).

Encourage faculty teams that teach common courses to build appropriate instructional materials, grading rubrics, videos, sample cases, and teaching notes.

When selecting case studies, we have found that the best ones for PACADI analyses are about 15 pages long and revolve around a focal management decision. This length provides adequate depth yet is not protracted. Some of our tested and favorite marketing cases include Brand W , Hubspot , Kraft Foods Canada , TRSB(A) , and Whiskey & Cheddar .

Art Weinstein

Art Weinstein , Ph.D., is a professor of marketing at Nova Southeastern University, Fort Lauderdale, Florida. He has published more than 80 scholarly articles and papers and eight books on customer-focused marketing strategy. His latest book is Superior Customer Value—Finding and Keeping Customers in the Now Economy . Dr. Weinstein has consulted for many leading technology and service companies.

Herbert V. Brotspies

Herbert V. Brotspies , D.B.A., is an adjunct professor of marketing at Nova Southeastern University. He has over 30 years’ experience as a vice president in marketing, strategic planning, and acquisitions for Fortune 50 consumer products companies working in the United States and internationally. His research interests include return on marketing investment, consumer behavior, business-to-business strategy, and strategic planning.

John T. Gironda

John T. Gironda , Ph.D., is an assistant professor of marketing at the University of North Carolina Wilmington. His research has been published in Industrial Marketing Management, Psychology & Marketing , and Journal of Marketing Management . He has also presented at major marketing conferences including the American Marketing Association, Academy of Marketing Science, and Society for Marketing Advances.
