assessment, feedback, grades

Rethinking Feedback: Shifting the Power to Students

We know feedback matters. I think of all the ways I have grown because my students, my husband, my editor, and so many others have bothered to share their wisdom with me. Sometimes it stings. Sometimes it sits in the back of my mind, waiting for the right moment. And sometimes, it changes everything.

And yet, when it comes to students, we often act as if feedback is something we do to them rather than with them. We spend hours writing comments, circling errors, suggesting revisions. But how often do students actually use it? How often does our feedback feel more like judgment than guidance?

Maybe it’s time to rethink who gives feedback, how it’s given, and why it even matters. And maybe we can shift our feedback practices in ways that actually work for kids—without adding more to our plates. Here are four shifts that put students in charge of their own growth.

1. Ditch the Teacher-Only Feedback Model

We shouldn’t be the only ones giving feedback. In fact, we might be the worst at it—too rushed, too generic, too focused on what we think matters instead of what they care about.

💡 New idea: What if students got more feedback from peers, younger students, real-world audiences, and even AI tools—and less from us?

👉 Try this:

  • Have students share their writing with a younger class. It’s wild how quickly they’ll simplify, clarify, and revise when they realize a first grader is their audience. I have done this for years with speeches and even our nonfiction picture book unit; it alters the entire process.
  • Use AI to generate feedback alongside human feedback—then have students compare. What’s useful? What’s missing?
  • Create a “feedback portfolio” where students collect and analyze all feedback received (not just yours) and decide what’s worth acting on.

2. Scrap the Grade—But Not for the Reason You Think

We talk about “going gradeless” to reduce stress and make learning more meaningful, but removing grades doesn’t matter if students still see feedback as punishment.

💡 New idea: It’s not about eliminating grades—it’s about making assessments feel like coaching instead of judgment.

👉 Try this: Instead of “no grades,” try collaborative grading. Sit down with a student and decide their grade together based on evidence of growth. Let them argue their case. Shift the power.

I have done this for many years, not just with student self-assessments but also their report cards. The conversations you end up having as a way to figure out where to land offer immeasurable insight into how kids see themselves as learners.

3. Let Students Give YOU Feedback First

What if every piece of feedback we gave students had to start with them giving us feedback first?

💡 New idea: Before turning in a project, students answer:

  • “What’s the best part of this work?”
  • “Where did I struggle?”
  • “What specific feedback do I want from you?”

👉 Try this: Make a rule: no teacher feedback without student reflection first. If they can’t identify a strength and a challenge, they’re not ready for feedback yet.

4. The One-Word Feedback Challenge

Ever spend time crafting detailed feedback, only to have students glance at the grade and move on?

💡 New idea: What if our feedback had to fit in one word? Instead of writing long paragraphs that students ignore, we give a single word that sparks curiosity: Tension. Clarity. Depth. Risk. Precision.

👉 Try this: Give students one-word feedback and make them consider what it means. Have them write a short reflection: Why did my teacher choose this word? How does it apply to my work? This forces them to engage with feedback before receiving explanations.

Feedback shouldn’t feel like a dead-end—it should be a conversation. When we shift the balance, when students take ownership, feedback stops being something they receive and starts being something they use. And isn’t that the whole point?


assessment, discussion, feedback, grades, Student Engagement

Let Kids Reject Feedback (Yes, Really!)

“Good feedback isn’t about control, it’s about conversation.”

What if kids had the right to ignore our feedback? Not because they’re stubborn or disengaged, but because they understand it—and decide to make a different choice.

Too often, feedback feels like a demand: Fix this. Change that. Do it this way. But writers? They get feedback, weigh it, and sometimes say, “No, I’m keeping this.” That’s not disengagement—it’s ownership.

Let’s Build Feedback Negotiation into the Process

Instead of expecting students to accept every suggestion, teach them to think critically about feedback—to question, challenge, and ultimately make their own choices.

1️⃣ Shift the Conversation – Before giving feedback, set the tone:
🗣️ “You don’t have to take every suggestion. Your job is to think about it.”
Ask them: What do you want my feedback on? Where are you stuck? Make it a dialogue, not a directive. I’ve written about this before in the context of only looking at one thing in writing conferences.

2️⃣ Teach Kids to Push Back (the Right Way)
When students disagree with feedback, they need language to explain why. Try modeling this:

  • “I see what you’re saying, but I’m keeping this word because it’s my character’s voice.”
  • “I understand your point, but I want this to feel unfinished on purpose.”
  • “I’ll change this part, but I’m going to keep this sentence because it’s important to me.”

If we want students to engage with feedback, we have to let them practice rejecting it thoughtfully—just like writers do.

3️⃣ Make Choice Part of the Process – Instead of requiring students to change everything, try this:
🔹 Pick one piece of feedback to apply and one to challenge. Explain why.
This simple step forces them to consider feedback instead of just following orders.

4️⃣ Celebrate Thoughtful Resistance
When students defend their choices, it means they care. That’s the goal. Instead of marking something as “wrong,” ask:

  • Why did you make this choice?
  • What effect are you going for?
  • How can you make this even stronger while keeping your vision?

Good feedback isn’t about control. It’s about conversation. And if we want kids to become confident writers, we have to teach them that their voices matter—even if that means telling us no.

assessment, feedback

If Kids Don’t Understand the Feedback, It’s a Waste of Time

I haven’t used this blog in a long time. With the move back to Denmark, navigating the world as a mom of neurodivergent kids, and just the world (waving hands around me), this blog has been quiet. But with the decision to shut down my Patreon, I also might just come back here more. After all, my mind is still going a million miles a minute and perhaps, somewhere, someone could use a few of the ideas that I have. So hello again. It’s nice to be here.

Ever had a kid read your carefully written comment—something insightful, brilliant even—only to ask, “What does that mean?” Yeah. Me too.

If feedback is just for us, if it’s full of teacher-speak or rubrics no one actually reads, kids will ignore it. Not because they don’t care, but because it doesn’t feel like theirs.

Let’s fix that.

Instead of handing them a rubric, build it with them. Here’s how:

1️⃣ Look at real work – Show them examples (past student work, mentor texts, whatever fits). Ask: What makes this good? What makes it confusing? Let them lead.

2️⃣ List what matters – Write down their words. Not “clear transitions” but “It flows” or “I know what’s happening.” Keep it in their language, not ours.

3️⃣ Make it theirs – Turn their words into a checklist, an anchor chart, or a simple, student-friendly rubric. Let them help decide what matters most.

4️⃣ Use it. Every time. – When they write, when they revise, when they give each other feedback. Ask, “How does your work match what we said makes this strong?”

If we want kids to actually use feedback, it has to belong to them. Because the best feedback isn’t what we tell them—it’s what they understand enough to use.

assessment, Be the change, testing

Dear STAR Test, We Need to Talk Again…

Three years ago, almost to this date, I wrote my first blog post about the STAR test, a test sold by Renaissance Learning and employed in thousands of districts across the United States. That post started a discussion with the people behind STAR, and while I wish I could say that it created change (isn’t that, after all, what we always hope for?), it didn’t. Three years later, on the eve of my final STAR reading test of the year, I return to those same questions, once again hoping for some clarity, some light to be shed on how this test can be sold as a valid assessment tool.

Because, dear STAR test, it just doesn’t seem like you have evolved much from when we first started together. It doesn’t seem that in the three years since I last wrote to you hoping for some answers, you have changed much. I guess I could count your fancy new interface as change, but really, all that has done is cause me to spend more time searching for the things I need in order to figure out what my students’ results supposedly are and what they may mean. But the essence of you, a comprehensive reading test that will quickly give me an elaborate understanding of 46 reading skills in 11 different domains, remains the same. And much like so many of your cousins, all of the other computer tests that are supposed to be useful in our instruction, I keep feeling like I get the short end of the stick. Like a fool when I tell my students to show off their knowledge, to prove to the computer what we already know: just how much they have grown, just how much stronger they are.

Because according to the tests today, I have pretty much made all of my students worse readers than when they started. Or amazing super readers whose results are so incredible I want to cry tears of joy. It happens every year, it seems: the computer test tells us that they exploded, or that they didn’t grow, or in fact reversed their abilities, but the face-to-face assessments tell us a different story. The conversations in their book clubs that show deep critical analysis and understanding. The written depth of their knowledge as they explore what it means to think about others’ stories and how those stories may affect them. How we see them share books, read books, recommend books.

And so that old letter stands the test of time, which is why I am reposting it, because honestly, now three years into this relationship, I am still wondering why I bother. Why I get my hopes up for reliable, usable data? Why I tell my students to try their hardest? Why we take the time to try to do it right? Because I want to believe in you, STAR, I really do, but at this point, I am just not sure you are worth my time.

So Dear STAR test, we need to talk…again

We first met five years ago. I was fresh out of a relationship with MAP, that stalwart older brother of yours who had taken up hours of my 5th graders’ time. They took their time, and the results were ok; at least we thought so sometimes, but we were not sure. But oh, the time MAP and I spent together that could have been used for so many better things.

So when I heard about you, STAR, and how you would give me 46 reading skills in 11 different domains in just 30 or so questions, I was intrigued.  After all, 34 timed questions meant that most of my students would spend about 20 or so minutes with you.  You promised me flexibility and adaptation to my students with your fancy language where you said you “…combine computer-adaptive technology with a specialized psychometric test design.”  While I am not totally sure what psychometric means, I was always a sucker for fancy words.   Game on.

With your fast-paced questions, I thought of all the time we would save.  After all, tests should be quick and painless so we can get on with things, right?  Except giving my students only 90 seconds to read a question and answer it correctly meant they got awfully good at skimming, skipping lines, and in general being more worried about timing out than being able to read the whole text.

In fact, every year I have a child in tears who tells me that the timer popped up when they were still reading, that their anxiety is peaking because of that timer. (Fun fact: if a child times out of a question, it is treated as incorrect.) For vocabulary, all they get is 45 seconds, because either they know it or they don’t, never mind that some of my kids try to sound words out and double-check their answer all within those precious seconds, just like I have taught them to do. I watched in horror as students’ anxiety grew. In fact, your 90-second time limit on reading passages meant that students started to believe that being a great reader was all about speed. Never mind that Thomas Newkirk’s research into reading pace tells us that we should strive for a comfortable pace and not a fast one. So yes, slow reader = bad reader.

And sure, we could just turn the timer off, except that is not a decision I am allowed to make as an educator, because that power is given to the administration level, not the individual. On a larger scale, the fact that the product even comes with a time limit should be debated further; what does time have to do with reading comprehension and vocabulary knowledge besides the selling point of being able to administer the test quickly, or as you say, “there are time limits for individual items intended to keep the test moving and maintain test security”? What does that do to bolster the validity of our test? How is that supported by best practice?

And so for some reason, year after year, I keep hoping that this will be the year when the data will truly be useful. Where I will gain knowledge that I can use to shape my teaching; isn’t that, after all, the whole purpose of collecting data on our students? But much like previous years, the results are a kaleidoscope of fragmented stories that refuse to fit together into a valid picture. Students whose scores dropped 4 grade levels and students whose scores jumped 4 grade levels. Students who made no growth at all. Once again, I spend the day questioning my capabilities as a teacher because I don’t know what to take credit for. Is it possible that I am the worst teacher ever to have taught 7th grade ELA, or perhaps the best? You confuse me, STAR, on so many occasions.

As in previous years, students whose score differences are significant sometimes get to re-test; after all, perhaps they are just having a bad day? And sure, sometimes they have gone up more than 250 points, all in the span of 24 hours, but other times they have dropped that amount as well. That is a lot of unmotivated or “bad day” students, apparently. And yet, you tell me that your scores are reliable, and you’re not alone; many studies say you are too, yet that is simply not what we see every day in our classroom. Although this study (sponsored by you) did point out that you are most reliable between 1st and 4th grade, so where does that leave my 7th graders?

And last time I dug around your reports, I found that according to your own research, at the 7th-grade reading level you only got a score of 0.73 retest reliability (page 54), which you say is really good but to me doesn’t sound that way. 0.73 – shouldn’t it be closer to 1.0? If we look at the Cronbach’s alpha reliability, that is only “acceptable.” And I guess that’s what I keep coming back to: is your reliability simply measured as compared to other tests that are also problematic in their assessment methods and that we also know do not give us overly valid results? Who knows; you would need a math degree to dig through your technical manual to make sense of all of the numbers.
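To put a reliability number like 0.73 in rough perspective, classical test theory translates reliability into a standard error of measurement: SEM = SD × √(1 − r). The sketch below uses a made-up scale-score standard deviation of 100, which is an illustration only and not a figure from the STAR manual:

```python
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement from classical test theory:
    SEM = SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

# Hypothetical numbers: a scale-score SD of 100 (illustrative only)
# and a 0.73 retest reliability.
error = sem(sd=100.0, reliability=0.73)
band = 1.96 * error  # half-width of an approximate 95% confidence band

print(f"SEM is about {error:.0f} points")
print(f"95% band is about plus or minus {band:.0f} points")
```

Under these illustrative assumptions, a single student’s observed score carries an uncertainty band of roughly ±100 points, which is one way to make sense of large score swings between sittings.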

Yet through all of this, you have dazzled me with your data, even now when I dig into your research I keep getting tripped up in your promises of reliable test scores, of comparable test results, of scores that mean something, but what they actually mean, I am not quite sure of. With all of the reports that I could print out and pore over. Perhaps you were not accurate for all of my students, but certainly you had to be for some. It wasn’t until a long night spent pondering why some of my students’ scores were so low that I realized that in your 0.73 reliability lies my 0.27 insecurity. After all, who are those kids whose scores are not reliable? I could certainly guess, but the whole point of having an accurate assessment is that I shouldn’t have to. So it doesn’t feel like you are keeping up your end of the deal anymore, STAR test. In fact, I am pretty sure that my own child will never make your acquaintance, at least not if we, her parents, have anything to say about it.

So dear STAR test, I love data as much as the next person. I love reliable, accurate data that doesn’t stress my students out. That doesn’t make them really quiet when they realize that perhaps they didn’t make the growth. I love data that I can rely on, and it turns out, STAR, I just don’t think you fit that description, despite the efforts of those who take you. Perhaps I should have realized that sooner when I saw your familial relationship with Accelerated Reader. Don’t even get me started on that killer of reading joy. You even mention it yourself in your technical manual that there may be measurement errors. You said, “Measurement error causes students’ scores to fluctuate around their ‘true scores’. About half of all observed scores are smaller than the students’ true scores; the result is that some students’ capabilities are underestimated to some extent.” Granted, it wasn’t until page 81. So you can wow me with all of your data reports. With all of your breakdowns and your fancy graphs. You can even try to woo me with your trend scores, your anticipated rate of growth, and your national percentile rankings. Your comparability scores to other state testing. But it is not enough, because none of that matters if I can’t count on you to provide me with accurate results. It doesn’t matter if I can’t trust what you tell me about my students.

So I wish I could break up with you, but it seems we have been matched for the long run for now. All I can be thankful for is that I work for a district that sees my students as more than just one test, as more than just their points, because does anyone actually know what those points mean? I can be so thankful that I work in a district that encourages us to use STAR as only one piece of the data puzzle, that chooses to see beyond it so we can actually figure out a child’s needs. But I know I am lucky; not everyone who is with you has that same environment. So dear STAR, I wish you actually lived up to all of your fancy promises, but from this tired educator to you: it turns out I don’t need you to see if my students are reading better, because I can just ask them, watch them, and see them grow as they pick up more and more books. So that’s what I plan on doing rather than staring at your reports, because in the end, it’s not really you, it’s me. I am only sorry it took me so long to realize it.

Best,

Pernille

PS: In case it needs to be spelled out, this post does not reflect the official view of my employer.

assessment, assumptions, grades, No grades, Personalized Learning

Using the Single Point Rubric for Better Assessment Conversations

A few years ago, I read a post discussing single-point rubrics by Jennifer Gonzalez on her incredible blog Cult of Pedagogy. The post discussed the idea of using a single-point rubric for assessment rather than the multi-point rubrics I was taught to use, and how single-point rubrics were not only easier to create but also offered students an opportunity to understand their assessment in a deeper way. Intrigued, we started tinkering with it over the last few years as an English department, developing our process as we went. The other day, I realized that I have never shared that work on here and thought that, perhaps if someone had missed Jen’s post or was wondering what this looks like implemented, a blog post may be helpful.

So first of all, what does a single-point rubric look like? Here is an example of one we used with an assessment after finishing the book Refugee for The Global Read Aloud.

We operate on a 1-4 standards-based assessment system, so the difference between multi-point and single-point is the descriptive language found for each score. Where under a multi-point rubric you would fill in the description for 1 through 4, with a single-point rubric you just focus on what you would expect an at grade-level product to contain. This is what sets it apart in my mind; it allows us to focus on what we are specifically looking for and to recognize that students don’t always fall into the other categorizations that we set, no matter how much we break them down.

This is one of the major reasons why I have loved using single point rubrics; it allows me to leave more meaningful feedback for students when they are either not meeting the grade-level target or are exceeding it. Rather than trying to think of all of the ways a student may not be at grade-level, I can focus on what would place them there assessment-wise and then reflect on when they are not. This has allowed me to leave more meaningful, personalized feedback, while also really breaking down what at grade-level thinking contains.

So what is the process for creating one?

  1. Determine the standards or learning targets that will be assessed. Students should be a part of this process, whether through discussion and creation of the rubric or, at the very least, seeing and understanding the rubric before anything is turned in; after all, we want students to fully understand what we are trying to discover about their learning.
  2. Once the standards have been determined, decide what “at grade-level” understanding will contain. While the rubric shown above shows only one box per standard, sometimes our rubrics are broken down further within the standard in order for students to see exactly what it is we are hoping to see from them. (See the example below).
  3. Discuss with students if you haven’t done so already. Do they understand what at grade-level understanding looks like and what it contains? Is the rubric a helpful tool for them to take control of their learning? If not, go back to the drawing board with the rubric.
  4. Add reflective questions for students so that their voice is heard and further ownership is created over the learning process. This is important because too often assessment is something that is done to students rather than a process that allows students to fully see what they are able to do independently, as well as set goals for what they need to work on.
A few reflective questions – to see the original rubric, go here

Using the single-point rubric is a breeze for me compared to the multi-point rubric. First of all, it takes less time to create because we really just focus on that “at grade-level” understanding. Secondly, and this is the big one for me, it allows me to deeply reflect on why my gut or the rubric is telling me that a child is not showing “at grade-level” understanding or above it somehow. I have to really think about what it is within their understanding that moves them into a different category. One that is not limited by the few things that I could brainstorm before I saw their work. I then have to formulate that into written or spoken feedback in order to help that child understand how they can continue to grow. This allows our assessment conversations to change from grades to reflection.

Tips for implementing:

  1. Discuss it with students before using it the first time. Our students had not seen a rubric like this before and so we took the time to discuss it with them before we used it. This would happen for any assessment rubric, but it took a little bit longer because it looked different.
  2. Set the tone for assessment. I have written extensively about my dislike of grades and how I try to shift the focus, and yet I work within a system that tells me I have to assess with numbers attached to it. So there are a few things that need to be in place with the biggest one being the ongoing conversation that assessment is a tool for reflection and not the end of the journey. This is why students always self-assess first in order to reflect on their own journey and what they need from us. This can be messy in the beginning but through the year it gets easier for students to accurately reflect on their own journey and what they need to grow. They then hand that to me in order for me to look at their work and then it culminates in a final discussion if needed.
  3. Break it down. It is easy to get caught up in too many things to assess; using the single-point rubric has allowed us to focus in on a few important things. This is important so that students can work on those skills specifically rather than feel overwhelmed by everything within the process.

What do students think?

Our students seem to like them, or at least that is what they say. They mostly understand what they are being assessed on, and they understand the feedback that is given to them. Having them self-assess and reflect prior to our assessment is also huge, as it shows students that they are in charge of their assessment and their growth and that we want them to fully invest in their learning. It gives them an opportunity to see how they are growing and what their next step is before I add my opinion in there. This can also help reduce the “shame” factor that is sometimes associated with grades. When we discuss repeatedly with students that there is nothing wrong with being below grade level and instead let the assessment guide us to the next steps, it shifts the assessment process, as well as the internalization of grades.

Overall, the single-point rubric has been another tool that allows us to help students become more reflective learners, while also helping us get to know the students’ needs more, resulting in a more impactful assessment experience for everyone involved. While we started small, the single-point rubric is now almost exclusively the type of rubric we use in English, and for that I am grateful. If you haven’t tried it yet, I would highly recommend you do. If you have any questions (after all, my brain is tired from traveling), please leave them in the comments.

If you are wondering where I will be in the coming year or would like to have me speak, please see this page. If you like what you read here, consider reading my book, Passionate Readers – The Art of Reaching and Engaging Every Child.  This book focuses on the five keys we can implement into any reading community to strengthen student reading experiences, even within the 45 minute English block.  If you are looking for solutions and ideas for how to re-engage all of your students consider reading my very first book  Passionate Learners – How to Engage and Empower Your Students.   

assessment, being a teacher, Literacy, Reading

A Notecard Check – A Simple Way to Check for Understanding

Students’ answers to when I asked when does reading suck…

Ask our students what makes them hate reading and many of them will say the work that comes after. The reading logs, the essays, the taking notes when reading, the post-its, the to-do’s. Not the act of reading itself. They share their truths year after year, and year after year I wonder how I am going to see whether they really are understanding and learning without making them drown in assignments that make them hate reading. It is a hard balance to find, especially if your students, like ours, have reading abilities that range from years above grade level to years below.

While the students will be working on other skills with their reading, right now we are working on increasing stamina and enjoying their books, skills that some of our students need a lot of work on. When we give them too much to do, that is when they end up not really working on their reading but rather hunting the text for their answer. This is when they start to dislike reading. While being able to dissect a text and do the heavy work of text analysis is important, I cannot have them do that all of the time, not every time they read. After all, how many adults do that every time they read?

This year, my colleague Reidun offered up a great idea: the simple notecard. The notecard is unassuming. It is limited in its scope by its size, and it also does not take much time. Rather than writing anything long, which we only do once in a while, when students have been introduced to a teaching point, such as writers using emotive language, we ask them to return to their own self-selected text and look for an example. As they read, they find a sentence or two, write it down, and hand it to us.

A student’s example of descriptive language found within her text.

When I have a moment, I am able to quickly scan through to see who got it and who didn’t, make a note of it, and then figure out who needs to be in one of our small groups. The kids spend most of their time reading, rather than taking notes, and I get a chance to peek into their thought process.

As the year progresses, our skill focus will change, our questions will deepen, and yet, offering students time to “simply” read is something that we will continue to protect every single day.  The notecard allows me to peek at skills, to inform my instruction, and to collect data.  All without causing a major interruption in their time with the text.
