April 1, 2022 | Vol. 79, No. 7

Fine-Tuning Assessments for Better Feedback

Quizzes and tests don’t count as feedback if educators and students can’t act upon the information they provide.

A few years ago, while attending a well-known grading conference, a group of us took a break to go holiday shopping. One of the educators in our party had a mission—to find his daughter a holiday-themed snow globe.
To our colleague's delight, we came upon a store focused entirely on Christmas. With an air of confidence, he swung open the door, all but reaching for his credit card to obtain his daughter's wish. As we scoured the festive boutique and failed to immediately find the crystal ball, he approached a sales associate.
"Where might I find your snow globes?" he asked.
"You are the fifth person today who's asked for a snow globe, and I'll say for the fifth time—we don't sell them," she said.
It was hard not to see this as an instructive example of someone being oblivious to feedback.
Another example: In his fascinating book The Secret War (Harper, 2016), Max Hastings explores the complex and perilous world of WWII espionage and intelligence. One of his conclusions is that intelligence was only as valuable as a country's ability to (a) disseminate it and (b) act on it. Among the many examples he cites are the 14 pages of intercepted radio signals suggesting that Pearl Harbor would be attacked. These intercepts had been in the possession of a U.S. intelligence officer, but they were read only as the fires raged and the ships sank.
Perhaps things are not so different in our schools. We have a lot of data, and we claim to value feedback, but to what extent is that feedback data disseminated and acted upon? Would we educators have seen the signs that the planes were on their way? Are we getting signals that we should stock snow globes only to inadvertently ignore customers' requests? Perhaps the airwaves are buzzing with valuable feedback data that's not even on our radar.

Defining Effective Feedback

Recently, one of us, Myron, designed a professional development session focused entirely on feedback. At the start of the session, he asked the teachers to confirm that they regularly provide students with feedback. Then he suggested that if they were going to be examining feedback, they should define it. The response in the room was a blank stare.
According to Sadler (1989), feedback in educational contexts is information provided to a learner to reduce the gap between current performance and a desired goal. First, we need to articulate that desired goal: effective feedback needs clear, straightforward objectives, ideally derived from established standards. Second, students need to understand their own current performance, which requires time for review, revision, and learning from mistakes. Finally, if students are going to reduce the gap between the goal and their current performance, they need to know how. Feedback has not been given unless the student has a clear, personalized direction as to what to do next (Hattie, 2012; Sadler, 1989).
Before we get to a few effective feedback tools and examples, tried and tested in real high school classrooms, let's establish a few elements of effective feedback that we will specifically look for in those examples.
Effective feedback:
  • Must involve student voice, choice, and agency. The teacher and student should form a co-pilot relationship on this flight.
  • Does not need to "count" in the gradebook—in fact, it's often much better if it doesn't.
  • Thrives on fewer categories to describe the learning. If you're using percentage grades that have 100 categories, consider moving to a proficiency scale.
  • Should be an integral part of a larger classroom-assessment system that contributes positively to student disposition and learning.
  • Can be used by the teacher and student in a timely fashion to make informed "next-step" decisions.
Now that we've established a definition for feedback, and elements to consider in its implementation, let's explore a few examples from our own classrooms.

Method #1: Effective Quiz Feedback—Immediate, Informal, and Informative

When we started looking into the research on effective feedback, we began to realize that we might not be providing clear objectives, nor monitoring the airwaves for informative feedback. Perhaps our students were asking for snow globes, and we were selling toasters.
Ben decided to look more closely at his in-class quiz structure. He's always been a fan of quizzes—they are easy to create, efficient to administer, and perfect for checking student understanding on specific learning targets. After students completed one of these short assessments, Ben could typically grade the quizzes before heading home, which made it possible to go over them as a class the next day. These moments of revision were great for conversations and checking in on understanding … or so he thought.
Right around the time of the snow globe incident, Ben realized that he needed to rethink how he could better utilize quiz feedback. He wanted to find a more effective way to give students the opportunity to close the gap between their current performance and the established goal and improve the quality of the feedback they were getting.
He implemented a few changes—big and small—that dramatically improved his use of feedback within the constraints of available class time. Key changes included making his quizzes entirely formative—they were not "counted" toward the final student grade. He also stopped collecting and grading the quizzes and shifted the assessment process to his students by having them self-assess their progress rather than just giving them a grade. Here's how his new quiz feedback structure works:
Ben's grade 11 chemistry course has 25 main learning targets organized into 5 units of study. After a day or two of instruction, lessons, labs, and activities, the students can expect a short quiz to assess their understanding. These quizzes are not counted toward the student grade. Each quiz takes approximately 10–15 minutes; then the class goes over it together. Some students may finish earlier than others, so while waiting they might compare their answers with a neighbor, look up previous notes, or try alternative solutions. The main purpose of the quiz is not to accumulate points, but rather for the students to check their level of understanding on a specific learning target. If a student is stuck on a question, or bumps into confusion, they can take steps to change that.
Reviewing the answers to the quiz is an opportunity for discussion, debate, and the airing of opinions. At the end of the discussion, the students self-assess their original responses and identify where they had gaps in their understanding of the concept. Based on this discussion, students use the table in Figure 1 (see below) to identify, on a scale of 1–6, the statement that best describes their level of understanding of the established learning outcome.
Figure 1. A 1–6 self-assessment scale with statements describing levels of understanding of the learning outcome.
After the students are done with the self-assessment, Ben walks around the room and records their self-evaluations—but only to track their progress, not to form any part of their summative grade. The power of this process is that it provides him, the teacher, with valuable feedback. He might ask students, "How are you feeling after that quiz?", "Did anything stand out to you?", or "Do you know where to find some practice questions?" This way, he can quickly identify a common problem or question and determine concepts that he should reteach. He can also identify instructional elements and activities that may have fallen short of what his students needed and reteach a certain concept in a new way. Throughout this process, Ben's work is in line with Hattie's (2012) suggestion that feedback is for the teacher, and with research showing a positive relationship between student performance and assessment when the teacher is willing to modify or improve instruction (Brown, Peterson, & Irving, 2009).
Ben's quiz structure reflects other research surrounding memory. According to Cepeda et al. (2008), "To achieve enduring retention, people must usually study information on multiple occasions" (p. 1095). To encourage "enduring retention," Ben offers a re-quiz a few days later on the same learning target, with different questions or a different format. Once again, the re-quiz is not counted toward students' overall grades. The point is to give students more feedback so they can take risks, analyze their routines, and demonstrate a better understanding of the learning target. As Susan Brookhart (2012) argues, "Feedback can't be left hanging; it can't work if students don't have an immediate opportunity to use it" (p. 27).

Method #2: Rearranging the Test Structure

While Ben restructured his quizzes for better feedback, Myron tackled his tests. According to Bjork and Bjork, "The effectiveness of tests as learning events remains largely underappreciated, in part because testing is typically viewed as a vehicle of assessment and not a vehicle of learning" (2011, p. 62).
One of the barriers Myron bumped into when trying to use test data for effective feedback was that his tests weren't designed for it. All of his test sections had been separated and organized by format: true/false, short answer, fill-in-the-blank, and so on. Each of these sections was a blend of various learning priorities, and therefore the score or grade for a section, or for the entire test for that matter, was a broad and nebulous concoction. This approach flew in the face of Wiliam's suggestion that "we need to ensure that feedback causes a cognitive rather than an emotional reaction" (2018, p. 153). Students would be eager to see their overall score, an emotional response, only to cast it aside immediately after. Myron yearned for a cognitive response—a change in their thinking—and he hoped that feedback on specific outcomes would achieve it.
Myron decided to restructure his regular unit test, separating the sections according to the key learning outcomes rather than according to format. As a result, students and teacher alike immediately gleaned more useful information. The students received feedback on how well they grasped specific learning priorities, and Myron could see which outcomes had been effectively conveyed to the students, individually and as a whole. This feedback for the teacher allowed Myron to improve upon or revisit specific learning objectives. Lastly, he could reorganize his gradebook, creating three or four columns organized by outcomes, rather than one column of a blended score. In one easy step, he achieved a standards-based gradebook rather than an event-based one (Dueck, 2021).

Once the unit tests were divided by outcomes, it was far easier to use portions of the assessment for formative or summative purposes, depending entirely on how the students (or Myron as the teacher) responded to the feedback data (Dueck, 2021). In other words, the student could decide which sections of the unit test would count in the gradebook, and which sections would be re-tested. The first test, for example, could be entirely formative, or partially formative if the student decided to revisit certain sections. If a student was satisfied with the original test, then it was rendered entirely summative.

Acting on Feedback

The two of us are not alone in the quest to re-engineer many of our assessment tools and processes to better align with the main tenets of effective feedback. Myron has seen evidence of this in his work internationally as well. Everyone—students and teachers—learns through clearly identifying objectives and then using feedback to improve understanding of those objectives. Considering Sadler's definition of feedback, perhaps the most important question is: To what extent are we helping our students close the gap between the identified goal and their current performance in relation to it?
When wading through the reams of grading data we collect every year, our challenge is not dissimilar to that of the WWII codebreakers or the owners of Christmas-themed gift shops. To harness the power of feedback, we need to disseminate it and act on it.

References

Bjork, R. A., & Bjork, E. L. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world (2nd ed.). Worth Publishers.

Brookhart, S. (2012). Preventing feedback fizzle. Educational Leadership, 70(1), 25–29.

Brown, G. T. L., Peterson, E. R., & Irving, S. E. (2009). Self-regulatory beliefs about assessment predict mathematics achievement. In D. M. McInerney, G. T. L. Brown, & G. A. D. Liem (Eds.), Student perspectives on assessment: What students can tell us about assessment for learning (pp. 159–186). Information Age Publishing.

Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science, 19(11), 1095–1102.

Dueck, M. (2021). Giving students a say: Smarter assessment practices to empower and engage. ASCD.

Hattie, J. (2012). Know thy impact. Educational Leadership, 70(1), 18–23.

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119–144.

Wiliam, D. (2018). Embedded formative assessment (2nd ed.). Solution Tree.

End Notes

1 For an example of a test cover that indicates how sections are tied to specific outcomes, as well as the corresponding student tracking sheet, visit my website.

For 23 years, Myron Dueck has worked as an educator and administrator. Through his current district position, as well as his work with educators around the world, he continues to develop grading, assessment, and reporting systems that give students a greater opportunity to show what they understand, adapt to the feedback they receive, and play a significant role in reporting their learning.

Dueck has been a part of administrative teams, district groups, school committees, and governmental bodies in both Canada and New Zealand, sharing his stories, tools, and firsthand experiences that have further broadened his access to innovative ideas. He is the author of the bestselling book Grading Smarter, Not Harder (ASCD, 2014) and the new book Giving Students a Say (ASCD, 2021).

From our issue: Feedback for Impact (April 2022).