
SBG is more than teach, test, reassess [A fable]

We're nearing the end of the school year, which means a lot of educators, including myself, are doing a bit of self-reflection.  Through several recent conversations with teachers in my district, it's been exciting to hear about the progress we've made in our systematic transition to standards-based grading.  To further support this anecdotal evidence, student survey data indicates we're doing a better job communicating reassessment opportunities and procedures this spring compared to the fall.  In addition, 75% of students agree that "I have an understanding of where I am in my learning and the areas that I need to continue to learn." That's reason to celebrate!

At the same time, we still have room to improve in implementing our grading guidelines:

  1. Entries in the grade book that count towards the final grade will be limited to course or grade level standards.**
  2. Extra credit will not be given at any time.
  3. Students will be allowed multiple opportunities to demonstrate their understanding of classroom standards in various ways. Retakes and revisions will be allowed.
  4. Teachers will determine grade book entries by considering multiple points of data, emphasizing the most recent data, and will provide evidence to support their determination.
  5. Students will be provided multiple opportunities to practice standards independently through homework or other class work. Practice assignments and activities will be consistent with classroom standards for the purpose of providing feedback. Practice assignments, including homework, will not be included as part of the final grade.

** Exceptions will be made for midterm and/or final summative assessments. These assessments, limited to no more than one per nine-week period, may be reported as a whole in the grade book.
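Guideline 4's instruction to emphasize "the most recent data" can be made concrete in many ways. Here is one hypothetical sketch (the decay weighting and function name are my own illustration, not district policy) of how a teacher might combine several scores on one standard:

```python
def standard_score(scores, decay=0.5):
    """Combine multiple data points for one standard, weighting
    the most recent evidence most heavily.

    scores: scores in chronological order (oldest first).
    decay: how quickly older evidence loses influence (0-1).
    """
    if not scores:
        raise ValueError("need at least one data point")
    # Most recent score gets weight 1, the one before it `decay`,
    # the one before that `decay**2`, and so on.
    weights = [decay ** i for i in range(len(scores))][::-1]
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

# A student who started shaky but finished strong on a 4-point scale:
print(round(standard_score([1, 2, 4]), 2))  # → 3.0
```

A simple average of the same scores would be about 2.3; the decaying weights let later evidence of learning count for more, which is the point of the guideline.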

The biggest "aha" in recent conversations with our extremely dedicated secondary teaching staff has been in the context of reassessments.  The flowchart below has been an eye opener for us.


The recent light bulb moments have taken place when discussing the need for more classroom feedback and informal assessment.  Here's an example of how SBG should not work in a middle school math class:
Mr. Jones teaches the area of a triangle on Monday and assigns some practice problems to complete in and outside of class.  Some of the students complete all of the practice problems.  Some of them do not. All students are provided the answers ahead of time on the board.  Mr. Jones teaches the area of a circle on Tuesday and assigns some practice problems to complete in and outside of class.  Again, students are provided the answers to the practice problems ahead of time.  Some of the students complete the practice problems and some do not.  On Wednesday, Mr. Jones gives all students a quiz on these two standards.  After Mr. Jones looks at the quizzes, he sees that about half of the class still doesn't understand how to find the area of a triangle or the area of a circle.  He thinks to himself, "Well, I'm really glad we have standards-based grading, because these students can reassess."  The next day, he hands back the quiz and tells students what they need to do before they can participate in a reassessment.  When only a few students show up for a reassessment opportunity during the next week, Mr. Jones becomes flustered and wonders why students aren't taking advantage of reassessments.
When I look at the visual above and think about Mr. Jones' SBG practices, I believe he's missing the "classroom feedback and informal assessment" part of the flowchart.  Mr. Jones appears to think standards-based grading is merely teaching, testing and offering reassessment opportunities.

Here's an example of what SBG might look like in a middle school math class:
Mr. Johnson teaches the area of a triangle on Monday.  Before he assigns some practice problems, he asks each student to complete a problem on their small whiteboard and hold it up in the air.  Mr. Johnson can quickly see which students are still struggling to understand the concept.  Rather than assigning everyone the same practice problems to complete in and outside of class, Mr. Johnson makes a quick adjustment and groups together several students who appear to still be struggling.  They will work with Mr. Johnson for some of the remaining class time and will also complete different practice problems than their classmates.  The next day, Mr. Johnson asks each student to view a solution to a completed practice problem that is already written on the board.  Each student must write a brief paragraph explaining whether the solution is correct, along with evidence to support their reasoning.  Mr. Johnson walks around the room while students are writing their paragraphs.  Next, Mr. Johnson asks students to pair up and share their paragraphs with each other.  Finally, he asks several students to share their written responses aloud, and the class collectively decides on the correct solution to the problem.
Mr. Johnson teaches the area of a circle to round out the class period on Tuesday.  Rather than assigning practice problems from the text, he asks each student to find the area of a circle found in their home.  Each student will be asked to share their findings tomorrow in class.  On Wednesday, Mr. Johnson decides to administer a quiz that he knows will never land in the grade book.  He uses the quiz as an opportunity to provide written feedback to every student, but only after each student has once again self-assessed in pencil against the standards.  Mr. Johnson writes comments by many of the students' solutions and then circles where each student is on a continuum of understanding for each standard.

Mr. Johnson asks students with relative strengths and weaknesses to pair up for seven minutes during class on Thursday.  Josie understood the area of a triangle at a high level, but stunk it up on the area of a circle.  She'll be conferencing with Alex, who didn't have a clue on the area of a triangle, but dominated the area of a circle.
Later in the week, all students complete another assessment, but this time it goes into the grade book.  Mr. Johnson feels pretty good about the assessment results, because he had the opportunity to see and hear students' thinking during class and was able to provide them with structured feedback through the ungraded quiz prior to the most recent assessment.   Reassessment opportunities are offered to students after the most recent assessment as well.
This fable is far from the ideal classroom; however, I think it illustrates an aspect of standards-based grading that deserves more attention in my own conversations with fellow educators: less grading and more feedback.

Visualizing the assessment process [Standards-Based Grading]

In my work with teachers, the topic of reassessment comes up time and time again when digging deep into the work of standards-based grading.

Beyond the practical "how-to" of reassessment, the flowchart below has helped some teachers better understand how the tenets of standards-based grading are grounded in feedback (a.k.a. formative assessment, assessment for learning).



What is missing from this visual?  What doesn't make sense?  

Visual: the assessment/instruction shift

From...

To...
[Thanks to Eric & Russ for helping me think through these visuals over the past few years]

Feedback drives our professional learning

After every all-district professional learning day, we send out a Google Form to every teacher in the district.


Think of this form as our professional learning's formative assessment.  The results are used by our Iowa Core team (a district leadership team composed of two or more teachers from each building, principals and district administrators) to plan for the next professional learning opportunity.  I wanted to take a moment to explain why we chose these specific statements and questions.  [Although I briefly explained it earlier as well]

The question, "Which statement best describes your current view of our professional learning opportunities?" is one that was added due to the timing of the year.  This probe helps us determine if our staff is feeling any sort of initiative fatigue at the halfway point of the year.

The statements about theory, demonstration, practice and collaboration are all tied to Iowa's Professional Development Model.  At any given time, our goal is to ensure that no one of these aspects of quality professional development is overtaking the others.  In the past, we've heard voices telling us things like, "What I learned today makes sense, but I didn't have any time to begin putting it into practice."  Striking this balance is challenging, but we've been using this language to collect staff's perceptions of these ideas all year and will continue to do so.

The statement, "This professional learning opportunity will directly benefit my students," is one on which we strive to earn high marks every time.  If the staff give our learning low marks in this area, we know we missed the mark.

We also believe it's important to differentiate between a quality (or poor) messenger and a quality (or poor) message.  This is why we continue to use the statement, "The professional learning facilitator(s) was/were effective in his/her role(s)."

"I understand how this professional learning experience relates to building and district goals" is also an important prompt for our district leadership team, because it keeps us grounded in the work we've agreed to focus on for the year.

We've used the statements about follow-up from colleagues and administrators as an attempt to gauge the perceived gaps in our implementation.  To my knowledge, our survey responses this year have not identified any major discrepancies between staff and administration, but we continue to use the statements in the event it becomes a discussion point in the future.

We've only recently started using the most/least beneficial questions.  The results here have helped us break down the specific activities.  I liken this to more "standards-based" feedback.  We're attempting to differentiate the homers from the gomers.

Common formative assessments were the biggest new learning for the day, so it was important to get a feeling for how much more time and resources we might need to spend on this topic.  The results from the statement, "When I think about the common formative assessments work ahead of me and my team this year, I feel..." will give us a better idea of our future steps.

Finally, the open-ended questions give our staff members a chance to sound off and provide specific tips on what went well, what didn't go well and the areas they feel we should spend time on in the future.

Our feedback survey questions and statements have evolved over time, but our use of this qualitative data remains the same -- provide the best possible professional learning opportunities for our staff given the precious amount of time we're given.

(Note: I'm open for critiques of our form and would be delighted to learn about the questions, prompts and statements being used in your neck of the woods driving professional learning planning.  Comments are open.)




Using Google Forms as feedback loops


Creating surveys using Google Forms is a fairly straightforward task. In my school district, we often use Google Forms rather than other survey tools, because all students in grades 4-12 and staff members already have Google Apps for Education accounts. It doesn’t require setting up another account!


Feedback Loops
After a professional learning experience, the staff member or team of teachers who planned the activities sends out a survey to assess the perceived effectiveness of the content and delivery, as well as to get a feel for the next needed steps. Using consistent Likert-scale statements has been effective, because it helps compare one professional learning experience to others.
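Because the same Likert statements recur from one survey to the next, comparing experiences can be as simple as lining up summary statistics per session. A small sketch (the response data and session labels are made up for illustration):

```python
from statistics import mean

# Made-up 1-5 Likert responses to the same statement
# ("This professional learning opportunity will directly benefit my students"),
# collected after two different professional learning days.
responses = {
    "October session": [4, 5, 3, 4, 4, 5, 2, 4],
    "January session": [5, 5, 4, 4, 5, 3, 4, 5],
}

for session, ratings in responses.items():
    avg = mean(ratings)
    share_agree = sum(r >= 4 for r in ratings) / len(ratings)
    print(f"{session}: mean {avg:.2f}, {share_agree:.0%} rated 4 or 5")
```

Keeping the statements identical across surveys is what makes this side-by-side comparison meaningful.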





Explicitly asking staff "What next steps could be taken…?" helps leadership teams charged with planning the next professional learning experience connect one day or afternoon to the next.
Asking staff members to complete the survey within two or three days of their experience has been effective to capture emotions, questions and thoughtful suggestions in close to real-time. Finally, we’ve found that sending a summary of the results to staff after the data has been collected creates a sense of transparency that leads to increased trust between those crafting the learning experience and those taking part in it. 


The feedback loop is complete when the lead professional developer kicks off the next day or afternoon by saying, 
"Your feedback influenced today’s agenda.  Let me explain how…”


Whether it’s Google Forms or Survey Monkey, what’s holding your district back from using feedback loops to create connections from one professional learning experience to the next?


(cross-posted at SchoolCIO)

Professional Learning Feedback: Asking the Right Questions

I've written several times about the importance of feedback as a classroom teacher.  I'm a big fan of using feedback from teachers to improve future professional learning as an administrator, too.  Just like in the classroom setting, it's not easy crafting the right kind of questions for adults either.  Here are a few Likert-scale statements that have solicited valuable insights in the past.

  • The facilitator was effective in his/her role (Were the people leading the learning doing a good job?)
  • I believe there will be adequate follow-up from my colleagues related to this learning (Is there an underlying belief this was a one-and-done learning opportunity or will it be sustainable?)
  • I believe there will be adequate follow-up from building/district administrators related to this learning (Will the administration be a viable support system or was this an isolated waste of time?)
  • I believe this learning will directly benefit my students (Perhaps one of the most important questions - Do staff members feel what they learned today will help the students they teach?)
Here is one open-ended question that seemed to resonate with staff members.
What should our district's next steps be related to ________________? (This question has helped our leadership team get a better sense for the desire of the entire staff)

Finally, this is an open-ended question I've used with mixed results.
Please add constructive feedback related to this professional learning experience

I'd like to do a better job receiving feedback from teachers so that our leadership team can be even more focused in our future professional learning planning.  I believe one way we can improve is by taking a careful look at the questions asked of our staff.  What questions is your building, district or leadership team using?  What questions have I missed?  Leave them in the comments below.

How to help when students (sort of) want help

In my class, students have access to all of the answers in the problem sets.  I post them in the front of the room every day.


Students no longer have an incentive to copy answers from their neighbor or from the back of the book.  These answers are available the same day the assignment is given and stay posted for 24-48 hours, providing instant right/wrong feedback.  I always thought it was strange when math teachers would hide the answers until the next day (it took me four years to work up the guts to break "tradition" in this area of my professional career) - if I was doing something wrong, I would want to know sooner rather than later, so why wouldn't my students desire the same timely feedback?  I regularly encourage students to check their answers during and outside of class and to come in for extra help if they have more than three unrelated and unresolved questions.  I'll admit that students don't regularly come in outside of class for extra help, but that may be a direct result of the support they're provided between bells.  Here's the way it works:

At the beginning of class each day, students check any remaining answers and write the numbers of the problems they still do not understand on the board (just to the right of the posted problems).  I like this system for two reasons.
  1. It gives me a quick sense of the concepts students are still struggling to understand.
  2. Anonymous marks are the norm.  No single student stands out in the crowd for being the "dumb" one.
It did not take long to realize this is an imperfect system.  Time does not allow us to go over every single problem students don't understand.  If only a few students wish to go over #15, is it worth the entire class's time to discuss it?  Time is a realistic factor that comes into play each and every day.  Students don't always have time to come in outside of class for help, and we don't legitimately have time during class to answer all of their questions either.  What is the solution?

At the beginning of the year, I amended the process described above by adding one more step.  After I've finished answering several of the questions from the board and before students hand their papers in, I ask them to write one of the following words that best describes their current understanding of the problems: green, yellow or red.  Green means the student has a good idea of what's going on. Yellow means the student is a bit shaky on one or more ideas/problems.  Red means the student is really lost and hopefully plans on coming in outside of class to get extra help.  I also encourage students to write specific questions and/or problem numbers that they still do not understand on their papers.  I regularly vow to them that I will provide written feedback to their questions.  I feel like it's a natural extension of the question answering session at the beginning of class and at least in my mind, makes up for the fact that I can't answer all of the questions students are still unsure about.  A typical yellow paper looks like this:


Student A


Note at the top of the paper that Student A had questions about #s 3, 12, and 20.  Notice also that Student A did not attempt the problems or give me any indication of his/her misconceptions.  My notes are in red.  A few questions are running through my head...


Because Student A did not even attempt the problem, am I working harder than him/her by providing the first steps to each of the problems?  Am I encouraging this type of cop-out behavior via the green, yellow, red and written feedback system?  Sometimes the system yields great questions with practical feedback, as seen in Student B's paper below.







Student B


Is there a better system I could be piloting?  I have tried peer feedback in place of my all-class remediation at the beginning of class based on the questions students ask.  It usually looks something like, "Who understands #5?  Okay, if you were one of the students that wrote #5 on the board, please see Suzie in a few moments.  Who understands #12?  Okay, if you..."  It usually works the first time or two, but eventually morphs into social time very quickly, because so many high school students aren't mature enough to get help from one friend on one problem and then move to another friend who can help with a second or third problem.  Is there an alternative I'm missing?

Does Student A REALLY want help, or is he/she just going through the motions to please Mr. Townsley?  After all, if he/she had a strong desire to learn these concepts, wouldn't he/she have put in more effort initially (a la Student B) and/or come in outside of class to virtually guarantee one-on-one instruction to remedy any misconceptions?  That's admittedly the old-school teacher side of me coming out.

Last, if Rick DuFour is right in saying that learning should be the constant while time and support are the variables, how does this play out in a high school classroom?   How much responsibility for taking the initiative to learn lies on me, the teacher and how much of it lies on the student?  Is it realistic to encourage students to come in outside of class or should my remediation take place solely between bells?
I'm just not sure how to help students who (sort of) want help.  Tweaks?  Overhauling the system?  I'm willing to learn from your experience in the comments section.

(In)formative assessment: LESS grading and MORE feedback

It's an exciting time of the year. Classes start in less than 48 hours. Lots of district, building, leadership and curriculum meetings have taken place the past few days. One common theme has been "assessment." Even though our district continues to perform very well on standardized tests, we have been charged to go from "good" to "great" by the administrative team. I can't express in words how excited I am about the direction our district is going through the boulevard called assessment. I truly believe that transforming assessment practices is the beginning of so many other great conversations and classroom changes. To keep this in the front of our minds, each faculty member is being asked to document his/her assessments from August to December. The documentation is loosely associated with Rick DuFour's three essential questions.

1. What do we want all students to learn?
As educators, we must think about the essential learnings (standards, benchmarks, learning targets, objectives, take your pick!) our students should have as a result of taking our course. These may change slightly from year to year depending on the students, but we should be able to identify the "core" ideas and concepts each student is expected to learn.

2. How will we know when each student has learned it?
As educators, we should be able to articulate the connection between the essential learnings and the assessments we administer in our classrooms. This involves more than just printing out the textbook publisher's test and assuming it "fits" our intended purposes. It is also not merely giving students pop quizzes covering the night's reading and moving on when they haven't a clue what they were to have learned. What's the best way to clearly connect assessments and learning targets? Standards-based grading! It's been a hard sell the past few days in my conversations with colleagues, but I look forward to sharing my successes and failures in developing the implementation of this idea further on this blog.

3. How will we respond when a student experiences difficulty?
As educators, how are our assessment and instruction practices set up to support students who struggle? Are we caught up in the "assess and move on" rut? Or are our assessments created and graded (or not) to inform future instruction? The buzzword commonly used here is "formative assessment." I discussed this idea in many previous posts, including this one.

I really don't feel like I have a firm grasp on #3. Last year, I reported out to my students their successes and failures on quizzes (my bi-unit written formative assessments) the same way I did on tests, via a 4-point scale per learning target. I keep thinking about Susan Brookhart's comments in her Dec. '07/Jan. '08 Educational Leadership article, "Feedback that Fits," when she said,

"Formative assessment...Here is how close you are to the knowledge or skills you are trying to develop, and here's what you need to do next....Good feedback contains information students can use....For feedback to drive the formative assessment cycle, it needs to describe where the student is in relation to the learning goal..."
My "old" standards-based reporting on quizzes looked like the image below. I gave students written feedback on individual problems and then a score for each learning target assessed, correlating to a narrative describing their current state of understanding.
I used to argue that the learning target score was a way of communicating to students how well they were doing in relation to the learning goal. I think it still does make sense in this context, but it does not give them the feedback they need and deserve describing what they need to do next to improve their learning. Looking back, I was giving my students a red, yellow or green light, but never a map to tell them where to turn next. My "next" step is changing the "scoring" into a rubric that instead gives students an idea of where they fit on the continuum of concept mastery.
I hope this continuum and the more "student-friendly" wording along the bottom is information students can better use. I will also continue to give feedback on individual problems so that students can understand what they need to do to better understand the topic or overcome their misconception. Last year's practice of grouping students according to their relative strengths and weaknesses (related to the learning targets) will continue so that students have the opportunity to learn not only from my feedback, but also from their peers. My goal in this is to give students more meaningful feedback and less grading. This subtle change, I believe, takes the emphasis away from a "number" and instead places it on the feedback.
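One way to picture the shift from a bare score to a continuum is as a lookup from each learning-target score to student-friendly wording plus a next step. The wording below is my own hypothetical example, not the rubric language from the image:

```python
# Hypothetical continuum wording; the actual rubric language is not reproduced here.
CONTINUUM = {
    1: ("I'm just beginning to understand this idea.",
        "Come in for a re-teaching session before reassessing."),
    2: ("I understand parts of this idea, but I have some misconceptions.",
        "Rework the problems marked in red, then check your answers."),
    3: ("I understand this idea with only minor errors.",
        "Review the written feedback and try a similar problem."),
    4: ("I understand this idea and can explain it to someone else.",
        "Pair up with a classmate who is still working on this target."),
}

def continuum_feedback(score):
    """Turn a 4-point learning-target score into wording plus a next step."""
    where, next_step = CONTINUUM[score]
    return f"{where} Next: {next_step}"

print(continuum_feedback(2))
```

The point of the pairing is that every mark on the continuum carries a map (the next step), not just a location.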

What flaws or critiques do you see with this change in philosophy? How would you react as a student if you did not receive a "grade" (in the form of a number or percentage) but rather a mark on a continuum to complement written feedback on problems?

Use effective feedback

Note: This is the sixth post in a series based on the book Never Work Harder Than Your Students & Other Principles of Great Teaching by Robyn Jackson.

"If we just grade assignments and never use that information to help inform our instruction, we have wasted our students' time and we have reinforced to students the false notion that the only reason they are learning the material is to take the test." (p. 127)
Yet another quote from the book that resonated with me was...
"It is one thing to collect feedback about students' progress, but if you simply collect this feedback and never use it to adjust your instruction, then you are collecting it in vain." (p. 132)
Of all the principles listed in Jackson's book, this is one of my absolute favorites. I have a passion for assessment, there's no question about it. The quotes above allude to the age-old practice of simply "grading" everything and not doing much with those letters, numbers and percentages. In this chapter, the author discusses more than just "feedback." It's actually a nice summary of what many other authors are calling "assessment for learning." For example, Jackson says...
"The purpose of assessment is to provide you and your students with feedback on how well students are mastering the objectives of your course." (p. 137)
(For a more in-depth discussion of formative assessment and assessment for learning, check out this Ed. Leadership article or a few of my previous posts.) Notice how Jackson's statement is centered on the student and how s/he is learning. The purpose of feedback/assessment is to guide the student towards mastery. I have long been guilty of assuming that numbers and letters were great ways of providing feedback to my students. Here's a challenge: the next time you grade a student's paper and give him/her a "B" or "85%", follow it up by asking what that grade or percentage means. What type of effort, mastery, and/or feedback does your "classroom grading scale" give to your students? From my experience, it is typically a reference point for honor rolls, pleasing parents, allowance bonuses, staying eligible for sports or some combination of the previously mentioned reasons. Rarely have these measures been enough to spur students on to improve their own work in a specific and meaningful way.

Rick DuFour, probably most well-known for his work on professional learning communities, recently blogged about grades, homework and feedback.
In most schools, what a grade represents remains in the eye of the beholder of the individual teacher. Some teachers grade homework; some do not. Some allow students to retake a test; some do not. Some provide students with additional time and support; some do not. Some provide extra credit for tasks unrelated to the curriculum; some do not. Some consider behavior, participation, and promptness in determining a grade; some do not. It is time for educators to grapple with the question, "What does a grade represent in our school?" in a more meaningful way.
Grappling with the "What does a grade represent?" question is an excellent conversation starter, but Jackson's principle takes it one step further. From my personal experience, I used to think that the hardest questions to answer from the mouths of students were along the lines of "Will this be graded?" or "How much will this affect my grade?" With a more laser-like focus on learning targets and a change in the culture of my classroom, I am now hoping to eliminate these types of questions. But until I got past the "everything must be graded and recorded" mentality, it was impossible for me to see the value of effective feedback. My feedback (grades, numbers, etc.) was focused on "now" rather than the future. Jackson goes on...
"Evaluative feedback keeps students focused on the now. Coaching feedback focuses students on the next time." (p. 142)
This principle teaches us that much of our "grading" should actually be "coaching" instead. A few pages later, the author nails this idea.
"The best thing we can do for our students who fail is to provide them with an honest assessment of why they failed and show them how to do better the next time." (p. 144)
Letter grades, percentages and points just don't provide this type of feedback to our students. My grading-to-coaching ratio is really out of whack. In summary, I've learned that I need to do less "grading now" and more "coaching for the future."

What about you? What is your grading-to-coaching ratio? How much effective feedback are you giving your students?




Know where your students are going

Note: This is the third post in a series based on the book Never Work Harder Than Your Students & Other Principles of Great Teaching by Robyn Jackson.

Do we really plan homework, tests, etc. around our learning objectives?
This is a question I grappled with last year as I transformed my grading practices into a more standards-based system. A hiccup I ran into was using assessments I had created the year before. With a renewed laser-like focus on learning targets, I slowly realized that my assessments did not always clearly align with my intended outcomes. To be brutally honest, "time" was a factor that often hindered me from creating assessments that explicitly measured my learning targets. My intentions were great through an improved reporting and grading system, but the assessments needed to be tweaked.

    Robyn Jackson says...
    "Master teachers spend more time unpacking standards and objectives than they do planning learning activities because they understand that clear learning goals will drive everything else they do." (p. 58)
    Several pages later, she alludes to an idea found in a book, Understanding by Design, written by Jay McTighe and Grant Wiggins. In my spring reading, I found this book to be very helpful in thinking about the order in which to design assessments and instruction. Jackson states it clearly...
    "For each learning goal, decide how you will know when students have achieved that goal and how you will know when students are on the right track. Explain these indicators to students." (p. 63)
    I think I have a pretty firm grasp on this concept of keeping the end in mind. I need to take a second look at the list of learning targets (objectives) for each course I teach and then ensure that I have formative assessment prompts ready to help students see when they are on the right track. Furthermore, explaining to students my philosophical difference between "grades" and "feedback" will be key. Initially, I anticipate students questioning my practice of not using traditional scores to mark up their formative assessments. My hope is that this will be a prime opportunity to show them the value of written and verbal feedback rather than a single score.
    "To be effective, feedback needs to cause thinking. Grades don't do that. Scores don't do that. And comments like 'Good job' don't do that either. What does cause thinking is a comment that addresses what the student needs to do to improve" - November 2005 Educational Leadership article.
    Before I implemented standards-based grading last year, a conversation might have looked something like this:
    Me: You earned 10/16 points. Which ideas did you understand and which ones do you still need work on?
    Student: Well, I missed one point on question #1 and five points on question #5. I think I need to better understand how to do #5. Can you tell me how to do it?
    With standards-based grading in place, the conversations now look more like this:

    Me: How do you think you did on your quiz?
    Student: I got 4/4 on learning targets 1-3, but 2/4 on learning targets 4 and 5. I really don't understand the ideas behind [insert learning target here]. Can you help me?
    I need to take my "feedback" one step further by not only giving students a score for each standard, but also a narrative outlining what each student needs to do to improve. I believe that this small tweak to my formative assessments will help students see not only how they've done (past tense), but also what they can do to improve (future tense) - the "thinking" mentioned in the Educational Leadership quote above.

    All of these ideas are great, but my first order of business needs to be taking a solid look at my summative assessments. Jackson paraphrases the ideas of Wiggins and McTighe,
    "It is not until you define your assessment instrument that you have clearly spelled out what your true objective is." (p. 66)
    Wouldn't it be nice if students no longer asked "what's going to be on the test?" What if the norm instead was students who knew exactly what to expect on the test and worked diligently to ensure that they understood those concepts and skills at a level in which they could articulate them not only on a "test," but also to their parents and peers? I think this is what the Iowa Core Curriculum is calling "Teaching for Understanding."

    Personal action items from this principle are as follows:
    1. Take a detailed look at summative assessments for each course. If an assessment does not clearly match up with its intended learning targets, modify it to fit.
    2. Create a rubric or blank space on each formative assessment for detailed feedback. This will remind me and the students of the "feedback narrative" expectation mentioned above.
    Through the implementation of these action items this summer, I hope to have a year similar to the master educator described on p. 76 of Jackson's book:
    "Because I had invested the time up front to unpack my standards, define mastery and the steps toward mastery, and identify how I would determine whether my students had reached mastery, I had more time during the year to relax and teach."
    Now is the time to plan with the "end" in mind and know where your students are going.

    The Assessment Dentist

    I had my biannual trip to the dentist yesterday. Aside from a half day off from teaching, it gave me some extra time to think about MeTA. This may be a stretch, but I think there are lots of connections between assessment practices and going to the dentist. Let me explain:

    As I sat in the chair listening to the hygienist lecture me on flossing techniques (she was right!), I was reminded of what makes assessment useful - meaningful feedback. If I went to the dentist and all I got was a good cleaning and a few fillings, I wouldn't be bettering myself. My problems would seem "fixed" temporarily, but my habits would inevitably stay the same. The plaque on my teeth, which I believe beautifully represents students' misconceptions, may go away with a few shots of red ink, but by the next visit (assessment?!), it will surely find its way back. We're doing the same thing to our students when we simply "check" their work, note the "correct" way and add another score to the grade book.

    A worthwhile dental visit "educates."
    My hygienist gave me flossing tips because she genuinely wanted to help me take better care of my teeth. She wanted me to come back next time with as little plaque as possible. As teachers, we must help our students debug their own work. Spewing out correct answers isn't enough. Understanding where a student's thinking is coming from can be the first step towards helping him/her to get to the next level along the continuum of learning.

    Look in the mirror.
    Our goal should be to help students look in the mirror (a la self-assessment) and see where their own mistakes lie. Yes, a dental visit typically reveals much plaque, but it doesn't have to be the only time/place where plaque is identified. Continuing with the same theme, educators should continually be looking in the mirror to identify poor assessment tools and revise them accordingly.

    Brushing often?
    I go to the dentist two times per year, but I brush my teeth twice daily. How often are we assessing our students? How meaningful is our "brushing" and "flossing"...is it getting rid of the plaque? After all, it is possible to brush and floss without removing much plaque. When we get our new toothpaste, toothbrush and floss at the end of the visit, have we been inspired to use it? I wonder if our assessment often hinders, rather than helps, our students in making meaningful changes in the way they think not only about the important concepts, but about school in general.

    Clean teeth expectation.
    Patients expect to have clean teeth when they leave the dentist's office. What do our students expect to gain from their assessments? What do we as educators expect to learn from our assessments? In the midst of rolling out a more "standards-based" reporting system, I'm realizing that my students are shocked to receive feedback in the form of multiple scores rather than a single number or grade describing their performance. This form of reporting makes the academic expectations of students much more transparent. I look forward to the day when students expect detailed feedback on their work - more than just a single score or letter grade. I admit that breaking this "tradition" of schooling can be a difficult task.

    Have you or your students been to the assessment dentist lately? What are your personal "assessment cavities?"

    Practice with feedback still matters

    Practice with feedback matters, according to a 2007 study (pdf) published in the British Journal of Educational Technology. I've been hooked on this idea ever since I read Bransford's How People Learn two years ago. It's on my "highly recommend" list for other educators.

    "The use of frequent formative assessment helps make students' thinking visible to themselves, their peers, and their teacher." (Bransford et al., 2000, p. 19)

    As I mentioned in a previous post, I'm intrigued by the intersection between standards-based assessment and 21st century learning. In fact, I'm on a continual search for ways in which math education, assessment and technology might work best together. Where does "practice with feedback" fit in, then? Without a doubt, formative assessment is a pretty hot topic right now at the Iowa Department of Education. In fact, I was at a district meeting of sorts last night and the "instructional decision making" model came up time and time again in conversation. IDM is the idea that instruction driven by data can help all students improve. K-6 teachers seem to "get it" through their use of DIBELS, BRI and DRA screening probes, whereas we secondary folk are...well, assigning homework and giving out quizzes a few times a week! I believe that standards-based assessment/reporting has the potential to enhance my formative assessment practices, because it will better enable me to focus on what I'm teaching and how it should best be assessed. When I bring up connecting standards with assessment, the response I typically get is something like "Well, our textbook matches our standards and we use the textbook tests and quizzes, so it must match up." I'm not convinced.

    On the flip side, when I hear elementary teachers talk about setting up supplemental groups and choosing several different novels for their students to read based on ability, I admit that I get a bit jealous. It seems to me like the secondary folk are missing the boat somewhere. Math teachers are seldom accused of not giving our students enough practice, but I would like to propose that we are too often lacking in the area of meaningful feedback. The IDM probes referenced above are one way of enabling educators to provide appropriate instruction and in turn, meaningful feedback to those who need it, when they need it, and at a level that is appropriate for them.

    Then there's the typical high school math classroom: giving students a daily score out of five based on the neatness and accuracy of their answers, their ability to show their work, and whether they remembered to bring a checking pen to class doesn't seem like meaningful feedback to me - especially when it takes 24 hours to collect papers, record scores and hand them back. A student said to me a few months ago, "I like knowing if I'm doing it right or wrong...today!" What is the "answer" to this problem? Several technology-related answers come to mind:
    • Eliminate homework and make the move to more problem-based learning using appropriate technology tools
    • Create some sort of electronic means (Moodle?) of providing meaningful feedback on a daily basis
    • Who cares about homework?! Use student-response systems as daily probes to assess students' understanding
    I have already decided to make the answers available to students ahead of time in order to help them get immediate feedback. I'm hoping to create a system that truly does make the students' thinking visible to both them and me. Reporting it out in a standards-based way seems like part of the picture, too. What am I missing in order to make practice with feedback a more visible part of my daily routine?
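    To make the "electronic means" idea a bit more concrete, here is a minimal sketch of what a standards-based grade book could look like under the hood. All class and variable names are hypothetical (a real tool such as a Moodle plugin would handle rosters, storage and display); the point is simply that every attempt per learning target is kept, narrative feedback travels with each score, and the report emphasizes the most recent evidence rather than an average:

    ```python
    # Hypothetical sketch of an electronic standards-based grade book.
    from collections import defaultdict

    class StandardsGradebook:
        """Keeps every attempt per learning target and reports the most
        recent score plus its narrative feedback (not an average)."""

        def __init__(self):
            # student -> learning target -> list of (score, comment) attempts
            self.attempts = defaultdict(lambda: defaultdict(list))

        def record(self, student, target, score, comment):
            self.attempts[student][target].append((score, comment))

        def report(self, student):
            # Emphasize the most recent evidence for each learning target.
            return {target: history[-1]
                    for target, history in self.attempts[student].items()}

    gb = StandardsGradebook()
    gb.record("Ann", "LT4", 2, "Confuses slope with y-intercept; rework #5.")
    gb.record("Ann", "LT4", 4, "Reassessment shows clear understanding.")
    print(gb.report("Ann"))  # LT4 now reports the 4, not an average of 2 and 4
    ```

    Keeping the full history means a teacher can still show the evidence behind a grade book entry, while the report students and parents see reflects where the learner is now.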