Using Rubrics for Performance-Based Assessment: A Practical Guide to Evaluating Student Work
Ebook · 206 pages · 3 hours

About this ebook

Writing a rubric that can accurately evaluate student work can be tricky. Rather than having a single right or wrong answer, rubrics leave room for interpretation, and thus for subjectivity. How can a teacher who wants to use performance-based assessment in the era of SMART goals reliably assess student work? The solution is to write clear rubrics that allow the evaluator to assess student work objectively. This book shows classroom teachers not only how to create their own objective rubrics for evaluating performance assessments, but also how to empower students to create rubrics tailored to their own work.

Language: English
Publisher: Sourcebooks
Release date: May 30, 2019
ISBN: 9781618219329
Author

Todd Stanley

Todd Stanley is the author of 10 teacher education books including "Project-Based Learning for Gifted Students: A Handbook for the 21st-Century Classroom." He was a classroom teacher for 18 years and is currently the gifted services coordinator for Pickerington Local Schools, where he lives with his wife and two daughters.


    Book preview

    Using Rubrics for Performance-Based Assessment - Todd Stanley


    Introduction

    The Unintended Consequences of SMART Goals

    As anyone who has spent time in education knows, the field is shaped by recurring movements: the standards movement, the small schools movement, the college- and career-ready movement, and the push to integrate technology into the classroom, among others. One prominent movement of the past decade has been the educational data movement. Its essential question is, "What data can you show me to prove that students are learning?" In the past, educators depended on subjective measures, such as grades, teacher recommendations, and graduation requirements, to determine whether a student was succeeding. Now the expectation is that objective measures will show educators whether a child is really learning. The goal is to convert the measurement of mastery into a quantifiable number that will prove whether or not a student is growing as a learner.

    Part of this movement led to the use of SMART goals (The Mind Tools Content Team, n.d.) in education. These are goals that are:

    ✓S—specific,

    ✓M—measurable,

    ✓A—achievable,

    ✓R—realistic, and

    ✓T—time-related.

    The idea behind SMART goals is to conduct an assessment that can measure growth. If the assessment determines that a student is not growing as a learner, then practices are put into place to help him or her do so.

    Different groups use SMART goals to measure their success. Businesses use them to increase profits, improve customer service, and make operations more efficient. The medical field can track patient visits, quality of care, and health outcomes. Sports teams use SMART goals to make players more effective, improve statistics, and increase wins. If these goals work for such organizations, why would education not want to use such a system—especially if it is going to improve student achievement?

    These goals can be used at every level of an educational organization: the central office setting big-picture goals for the district, schools measuring a building's success on particular skills, individual teachers demonstrating the growth of their students, and even students setting their own SMART goals as they persevere toward new achievements.

    For example, a teacher might use SMART goals in the classroom by giving a class a math test. One student misses questions 2, 4, 7, and 11, all of which have to do with fractions. This might indicate to the teacher that this student needs more intervention with fractions, so the teacher works with the student further, breaking concepts down into more understandable chunks and providing more practice. If several students in the class show a similar pattern, the lesson on fractions may need to be retaught to the whole class. On the other hand, the results of the test might show a teacher the areas in which a student displays strengths, which would hopefully inspire this teacher to find ways to challenge the child by learning about the concept in more depth.

    SMART goals sound great, but here is the problem: SMART goals seek an unbiased, cold hard fact that indicates a student's mastery. Either the student mastered the objective or he did not. How black and white can you get? To make the evaluation of mastery as objective as possible, providing questions with one correct answer seems logical. This eliminates the subjectivity of open-ended questions, which have multiple possible answers and are subject to interpretation. Having one correct answer avoids the problem of one teacher evaluating a student's response one way while another teacher sees it differently; there is simply too much discrepancy in what may be considered mastery-level work. This is why most state assessments use mostly, or entirely, multiple-choice questions—the testing coordinators do not have to train evaluators to be as unbiased as possible.

    In the classroom, this might involve a teacher writing a 10-question multiple-choice test to determine whether students understand proper sentence structure. One student gets eight of the questions correct. This student displays mastery of the skill, perhaps only requiring a brief review of the two missed questions. Another student gets only three of the questions correct, showing a lack of mastery of the skill, requiring intervention. These examples are objective uses of data. A multiple-choice test is easy and quick to grade—the answers are clear.

    Although this is the ideal way to assess mastery in an ideal world, and certainly the easiest, certain skills are more difficult to measure objectively, such as public speaking. After all, how does one rate a student in public speaking? When you are rating a performance, a lot of interpretation can take place. One person might think the student spoke very well, while another might have seen some issues. Because this rating is based on opinion, many would argue that this type of assessment does not meet the tenets of SMART goals; if it did, everyone who evaluated the student would arrive at the same result. Also, the time it takes for the student to give his or her speech, and for the evaluator to rate it, is much greater than for other, more objective forms of assessment—and time is one of those things that teachers in today's world of content standards do not have a whole lot of. So, if you are trying to create an objective and easy-to-grade assessment, a logical choice would be a multiple-choice test. A student could have the following question on an assessment:

    Which of the following is the best way to present a speech to a crowd?

    a. Reading from your notes word for word

    b. Speaking to one spot on the wall

    c. Glancing around the room in a fidgety manner

    d. Scanning the room while talking, occasionally making eye contact

    If a student selects D, then he or she would be correct. The problem is this: that knowledge does not make the student a good public speaker. If a teacher truly wants to assess this skill, the student is going to have to actually give a speech to an audience and be evaluated on how effective his or her speaking skills are.

    This is the problem for the so-called soft skills—they are challenging to assess. For example, three people might watch a student's speech: one may think the speaker did a really good job, one may think the speaker did an average job, and the third may think the speaker was not very good. The three audience members might be evaluating different things. There is no right or wrong answer, no black or white choice, no mastery/nonmastery option. There is a lot of gray area. There are many qualities to presenting a good speech; a student might do some well while other aspects need a lot of improvement. How do you assess mastery when there are so many factors involved? To put it simply, the biggest challenge is that there is too much subjectivity in evaluating skills such as this.

    You could create an assessment that better defines what good public speaking looks like and what evaluators should look for, but this is harder. If I tasked you with writing a multiple-choice test or writing a comprehensive rubric, which one would take you more time and effort to complete? Because of the subjectivity of the rubric, some doubt is cast on whether the skill being measured aligns with a SMART goal. After all, SMART goals are supposed to be specific as well as measurable. Creating an objective assessment that properly evaluates these harder-to-measure skills is quite a challenge—one that causes a lot of educators to simply take the easy way out and no longer evaluate these skills at all.

    Consider the difference between a multiple-choice assessment and an essay assignment. There are few educators who would argue that multiple-choice responses provide a clearer picture of whether a student understands a concept or not than an essay. After all, on a multiple-choice assessment, there are only three to four choices students must contemplate, and the correct answer is right there in front of them. There’s also the luck factor—a student has a 25% chance of guessing the correct answer. Guessing on an essay question is difficult. A student has to have basic understanding in order to address an essay assignment. Thus, an essay provides a more extensive and detailed look at whether a student has indeed mastered a particular concept or skill. Unfortunately, administering an assessment often comes down to time, that precious commodity that teachers often do not have enough of. If I have a classroom of 30 students, and I administer an assessment with three essay questions on it, suddenly I have to read 90 essays, all of varying length. This is going to take longer than a single planning period. If, however, I give those same 30 students a 20-question multiple-choice assessment, I have considerably cut down the amount of time needed to grade. Because this is the easier path, hard-to-measure skills that require a rubric or more of a time commitment simply get passed over. Instead, we measure those skills that fit into the easy mold of a SMART goal—usually the memorization of content.

    First, let me be clear. There is nothing wrong with assessing students’ understanding of content. It serves as the building blocks to higher level conversations. The problem is when content is the only focus of the learning. The unintended consequence is that, because content is knowing a specific fact or method for solving a problem, students then play the memorization game. They memorize the content without understanding the context and when the information should be used. That is where the soft skills come into play. How can you adapt the content in an authentic, real-world situation so that students are not just memorizing the content, but learning it at a much deeper level that will lead to an enduring understanding?

    Oftentimes, soft, 21st-century skills, such as …

    ✓public speaking,

    ✓collaboration,

    ✓creativity,

    ✓critical thinking,

    ✓adaptability,

    ✓leadership,

    ✓problem solving,

    ✓grit,

    ✓initiative, and

    ✓global and social awareness

    … get left in the dust or are unintentionally overlooked. There are few people who would argue against the importance of such skills. Educators are aware that students who are skilled in these areas will be much better equipped in the real world than students who only have facts and content memorized. Despite this, many educators focus on the content because that is what is easy to measure and can be quantified into a number, letter grade, or SMART goal.

    The ultimate examples of this are Advanced Placement (AP) tests. AP is a national program created by the College Board that offers college-level classes to high schoolers. When a student takes an AP course, in most cases, no matter what the student’s effort is, no matter how well he or she does with ongoing assignments, no matter how good his or her body of work for the class is, the student’s actual results all come down to the AP Exam. If a student comes to class and does not ever use the skills needed to pass a college course, such as initiative, grit, critical thinking, problem solving, collaboration, adaptability, or creativity, he or she could still receive college credit if he or she sits
