Deron Durflinger is a secondary principal here in the great state of Iowa. Even though we only live a few hours away from each other, I've never had a chance to meet him or visit his district. Perhaps some of his ideas about teacher pay and teacher quality will be closer to reality by the time we actually cross paths. In his recent post, he brings up many good points. Here is the one that hit home with me:
"We also know that teachers are not paid at the level they should be, especially our best teachers. The factory model we currently use supports mediocrity and encourages teachers to be paid for their experience or their level of degree, not for their ability to help kids learn."
I've been interested in merit pay ever since I went into education, but not because I ever felt I was being under-compensated. Instead, I've always felt the salary scale is set up to promote mediocrity. To summarize Deron, the system encourages teachers to do the wrong things: stay in the profession longer and take more classes. Since neither of these factors directly improves a teacher's practice, the compensation system, from my perspective, is just as flawed as many of our classroom assessment practices. In a nutshell, here's what I mean:
In the typical classroom, teachers create a currency of "point accumulation" rather than learning. Students learn to merely turn in work to earn points (just put in your time...), to complete and take seriously only the assignments with large point values, and to ask for extra credit assignments when they desire a better grade, even though we know extra credit doesn't usually enhance their level of understanding. Our compensation system in education is flawed in the same way: we encourage mediocre teachers to stay longer so they can get paid more (just put in your time...). Worse yet, we've created incentives for them to take graduate courses and seminars so they can get even more money, even if these seminars don't directly improve their practice (sounds like extra credit!). The parallel seems pretty obvious to me. How silly is our system?
Before the flames begin, I have no idea what the solution to this problem might be. Using test scores to reward teachers doesn't make sense, but neither does our current system. Those are my thoughts on teacher pay. Feel free to post your own thoughts, solutions and rebuttals in the comments.
"Most of us know teachers who teach very successfully in a textbook-lecture, teacher centered style and have had terrific student achievement doing so." (Nunley, 2006, p. 103)
It's possible, right? It is entirely possible that your classroom is full of auditory learners who don't mind listening to a bald guy talk while swapping slides. It is entirely possible that all of your students are satisfied taking notes and look forward to college lecture halls in their not-so-distant future. If memorizing and regurgitating facts are what you're looking for AND if students are satisfied AND if parents endorse this type of teaching AND if administration is kosher with the results, then keep doing what you're doing.
If the above commentary describes your situation, keep doing what you're doing.
Chances are pretty good that your classroom is different. It's broken. It needs fixing. For your students' sake...you (and I) do NEED to differentiate.
"Many of us took a course in classroom management during our pre-service education. If we really want to move education in the direction of student centeredness and encourage differentiation, then teacher education courses in classroom management should expire. We need to replace them with 'classroom leadership' courses. Strike the term 'classroom management' from education because the concept self-perpetuates the problem. The more you manage others, the more they need to be managed. You cannot manage people into being responsible, intrinsically motivated, cooperative people who strive to reach their own personal potential. For that we need classroom leaders."
[Differentiating the High School Classroom, Nunley, p. 84]
In my first year or two of teaching, I struggled with control. I didn't struggle in the stereotypical way of classroom management, but instead the exact opposite. Perhaps my biggest downfall was thinking that I could control students for part of the class period and then they would magically figure the rest out on their own. I had this picture in my mind that my think-alouds and rigorous math problems would someday yield critical thinkers and students who would naturally mature into being independent learners. I was wrong.
Kathie Nunley suggests that a teacher's desire for control in his/her classroom is one of eighteen obstacles to differentiating learning experiences within the classroom walls. The more I think about it, the more I agree. A few quick bullet points sum up my thinking:
My natural tendency is to AVOID FAILURE and create successful opportunities for my students. Truth: Without failure, nothing changes. Failure has the potential to encourage innovation and discovery.
My natural tendency is to create the SAME learning environment for ALL students. It's a control issue. Truth: Providing students with several legitimate choices to learn/demonstrate the same learning target empowers students and increases motivation.
My natural tendency is to believe that a CONTROLLED class is a WELL-RUN class. Truth: Learning, not control, is the currency of education.
Nunley sums up this obstacle well on p. 81.
"One of the main reasons many teachers resist differentiated approaches to teaching is they think it will cause them to lose control in their classrooms. Teachers like control...Whether or not learning is occurring is beside the point when what matters most is the control they have over their students."
In an era of administrators conducting classroom walk-throughs, it is hard not to get caught up in the view from the window rather than carefully observing students' conversations and productions. When my students are discussing their quizzes in groups for five minutes and it seems like chaos, that's okay. When my students are spending two minutes in a think-pair-share, that's okay. When students are sitting around the room in groups, discussing their practice problems while sending individuals to the board to check their answers, only to find out they got them all wrong at first, that's okay. If our focus is truly on learning, does it matter if our classrooms look messy and out-of-control to the uninformed eye?
I don't really know what it means to be a "classroom leader," but I do know that managing my students for several years wasn't working out so well. Letting go and in turn encouraging more student discussion and reflection has yielded incredible results. I'm slowly figuring out that success has less to do with managing students and more to do with pointing them in the right direction: learning.
Will you join me in putting the KIBOSH on the term "classroom management" for the sake of our students?
"A class is not differentiated when assignments are the same for all learners and the adjustments consist of varying the level of difficulty of questions for certain students, grading some students harder than others, or letting students who finish early play games for enrichment. It is not appropriate to have more advanced learners do extra math problems, extra book reports, or after completing their "regular" work be given extension assignments." [How to differentiate instruction in mixed ability classrooms, 1995, p. 9]
If you've heard about differentiation, you've probably come across some of Tomlinson's writing. She's practically the differentiation guru. I want to be the first to admit it publicly: if the description above is NOT differentiation, I have a lot to learn about this topic. I know that differentiation happens through product, process and content, but if it doesn't involve extra math problems and extension assignments, then my pre-service AND in-service education has been...how do I say this politely?...misleading!
I'm looking forward to reading the rest of Differentiating the High School Classroom by Kathie F. Nunley. I received close to a dozen books for Christmas and this was not one of them. It's checked out from the local area education agency's professional library and is due back in a few weeks, so I figured I had better read it before digging into the new stack. Look for future posts related to differentiation as I learn from Nunley's book.
Nearly every educator I know claims to differentiate in his/her classroom. As I look to reshape my own definition of differentiation-in-action, I'd like to know what it looks like with your students. Feel free to leave a comment below with your best differentiation thoughts and stories.
I am looking for more ways to encourage students to use data that is relevant and interesting to them in my Statistics & Discrete Math class. In a recent project, students were asked to answer the question, "Is size a useful predictor of a house's assessed value?" Data from websites such as the Johnson County Assessor were made readily available for students to access and research the ins and outs of houses on their own streets and in their housing developments. Additional resources were posted on my website for students to use as they created a second statistical model with Iowa City or Cedar Rapids (two cities within 20 minutes that are much larger than the town I teach in) houses as data points.
Students were asked to produce a 1-2 page double-spaced report summarizing their findings:
A brief summary of the data in each mini-study (local neighborhood and CR/IC). This included a response to the following question using data, graphs, regression lines, significance tests and correlation coefficients to back up your opinion - "Is size a useful predictor of price in your neighborhood? ...and in Iowa City or Cedar Rapids?"
A prediction of the student's house's assessed value based on square feet using his/her neighborhood model and the Iowa City or Cedar Rapids model. Does this match up with the assessed value according to the websites? Explain a few possible reasons why or why not.
Finally, include a paragraph explaining any possible errors observed in the way data was collected or the process in which this study took place.
Scatterplots, regression line equations and prediction intervals using Microsoft Excel were the norm for each student.
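For readers who prefer a script to a spreadsheet, here is a minimal sketch of the same analysis in Python using only the standard library. The square footage and assessed values below are made-up illustration data, not numbers from the Johnson County Assessor, and the t critical value for df = n - 2 is read from a t-table; this is a sketch of the standard simple linear regression prediction interval, not my students' actual spreadsheet work.

```python
import math

# Hypothetical data: square footage and assessed value (thousands of dollars)
sqft  = [1100, 1350, 1500, 1720, 1950, 2100, 2400, 2650]
value = [148, 165, 172, 190, 214, 222, 251, 268]

n = len(sqft)
mean_x = sum(sqft) / n
mean_y = sum(value) / n

# Least-squares slope and intercept
sxx = sum((x - mean_x) ** 2 for x in sqft)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(sqft, value))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Residual standard error with n - 2 degrees of freedom
residuals = [y - (intercept + slope * x) for x, y in zip(sqft, value)]
s = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))

# 95% prediction interval for a single new 1800 sq ft house
# t critical value for df = 6 at 95% confidence (two-tailed), from a t-table
t_crit = 2.447
x0 = 1800
y_hat = intercept + slope * x0
margin = t_crit * s * math.sqrt(1 + 1 / n + (x0 - mean_x) ** 2 / sxx)

print(f"Predicted value: {y_hat:.1f} +/- {margin:.1f} (thousands)")
```

Note the extra 1 under the square root: that term is what distinguishes a prediction interval for a single new house from a confidence interval for the mean assessed value at that size.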
This project parallels quadrant D learning as written in the 9-12 Data Analysis/Statistics & Probability Iowa Core Curriculum essential concepts.
I appreciate fellow math educator and edu-blogger David Cox and the way he consistently peppers me with questions about my assessment practices on this blog. Most recently David asked:
"Do you offer a pre-test before you begin instruction on the learning target(s)? If so, what do you do with the student who aces the pre-test while you are going through the initial instruction?"
I replied to David briefly in the comments, but a proper explanation warrants a much more thorough response. Nearly every educator I know is familiar with the idea of "using data to drive instruction." I'd like to describe two experiments (let's call them "action research projects" to make them sound more professional) currently in the works in my classroom.
Using data to assess relative strengths and weaknesses of an entire class
At the beginning of the most recent unit of study, students took a thirteen-question multiple choice "pre-test" to assess their current level of understanding of surface area and volume. I provided all of the necessary formulas and attempted to choose two questions that correlated with each learning target. While multiple choice questions are not the norm in my class and present some obvious issues with guessing correct answers, I decided to go this route to create a quick turnaround. Because the pre-test was a new idea, I prefaced it to my students as an opportunity to find out what they already know and what they need more help understanding. The worst case scenario from the students' perspective, I explained, was not taking the assessment seriously and in turn suffering through an entire unit full of ideas and concepts they already understood. That incentive seemed to make sense to them, because I did not observe any dot patterns or hear any grumblings.
I have several ideas in mind for utilizing the pre-test data:
Just as I explained to my students, if a learning target seems to be understood by many on the pre-test, I will spend very little time teaching this concept. On the flip side, learning targets that students struggled with trigger a "make sure we spend lots of time discussing this learning target and any misconceptions" response in my mind.
I will be giving an alternate form (same questions, different order) to the students at the end of the unit, in conjunction with the usual open-ended summative assessment. I will be teaching one class using the traditional textbook sequence: lateral & surface area of prisms, lateral & surface area of pyramids, volume of prisms, volume of pyramids, etc. The second class will be taught using an approach I've always wanted to try: teach the difference between lateral area, surface area and volume followed by each formula in one 84 minute block. The next few class periods will be spent working on many practice problems and identifying which formula to apply and any misconceptions that arise. Using the pre-/post-test difference data between the two classes in conjunction with several other data points to-be-determined, I would like to find out which approach seems to be more effective. Look for a future post describing the results.
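As a sketch of how the comparison between the two classes might eventually be run, assuming I end up with a gain score (post-test minus pre-test) for each student, here is Welch's two-sample t statistic computed by hand in Python. The gain scores below are purely hypothetical placeholders, not real class data, and this is only one of several ways the difference data could be analyzed.

```python
import math
from statistics import mean, stdev

# Hypothetical gain scores (post-test minus pre-test) for each class
traditional = [3, 5, 2, 4, 6, 3, 5, 4, 2, 5]
big_picture = [5, 7, 4, 6, 8, 5, 6, 7, 4, 6]

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom
    (no equal-variance assumption between the two classes)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch_t(big_picture, traditional)
print(f"t = {t:.2f}, df = {df:.1f}")
# Compare |t| against a t-table critical value for the computed df;
# |t| larger than the critical value suggests a real difference in average gains.
```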
Using data to differentiate instruction and assignments for individual students
My building has a goal to increase the computation scores of students as reported on standardized tests. Our department selected Geometry as the class to focus on skills such as long division, dividing mixed numbers, multiplying decimals and simplifying radicals, all without a calculator. The system we came up with was to administer pre- and post-tests at the beginning and end of the course and focus on two skills each week. Typically the skill remediation involved a five to ten minute re-teaching session during class followed by a self-scoring drill and kill worksheet (DaK for short). At the end of each week, a short six-question quiz provided an interim picture of students' progress on the two skills. As the semester progressed, the completion rate of the DaK worksheets almost always declined, and the students who needed the practice the most tended to be the ones who chose not to do it. The system had the best of intentions, but it just wasn't working.

Two weeks ago, I decided to give the quiz first. Students who successfully demonstrated an understanding of the concepts were exempt from the DaK worksheet and their quiz score was recorded. All other students were given the DaK and were only permitted to take the post-quiz once their DaK was completed. New evidence of learning replaced old evidence in the grade book. Students love this new idea so far! It works for them, because an incentive exists to do well on the first quiz as well as on the DaK worksheet, but only if a need exists to complete it. It also works for me, because during the five or ten minute re-teaching session, students are hungry to learn based on their mediocre quiz scores. Students who have already mastered the concept can be pinpointed as additional resources for struggling learners. I admit that this system works because the instructional time is limited and it only happens one time per week.
If I could figure out a way to generalize this idea of differentiation and manage classroom behavior all at once, I could probably write a book about it and retire. :)
Are these examples of data-driven instruction? How are you using data to differentiate for your class as well as individual students?
The verdict is still out on the validity of grading homework in the eyes of many educators in my sphere of influence. If the purpose of homework isn't to check for students' understanding, what activities do classroom teachers rely on to figure out who "gets it" and who is still lost? The next two diagrams summarize what I hear:
I regularly ask teachers I come in contact with to describe their formative assessment strategies. Answers such as "think-pair-share" and "observe the non-verbal cues of the students throughout the class period" come up quite frequently. It's the stuff that often doesn't land on paper. Great. Those are all examples of formative assessment that can drive future instruction. Here lies the controversy:
Once students complete a few problems or write an essay on paper (or take notes, or complete a word find, or sometimes just write their name and date and turn it in - how sad is that?), many educators I know automatically feel like a point value needs to be assigned. Why? How is a "thumbs up/thumbs down" prompt any different from an activity that involves students writing down an answer or their thoughts on paper, especially when they serve the same purpose? Why is it universally acceptable to grade one and not the other?
.....I just don't get it. Can someone explain this to me?
A commenter named Carol recently raised a few questions that I felt warranted a more in-depth response.
I am really enjoying diving into this stuff... but I can't wrap my mind around the homework. How is it that you "require" them to do their homework, when it has no impact on their assessment? And, if a student never completes their homework, yet masters the standards - would they receive the same mark as the ones who do complete all the assignments? What becomes the point of having a deadline? Help!
It seems like the question of grading homework comes up quite often when discussing standards-based grading on this blog and in my face-to-face conversations with colleagues, too.
It's easier than you think to 'require' students to continue to do their homework while not using it to impact their grade. I conducted the following exercise with my Geometry students last year:
Me: "How many of you would still do the homework if it was worth 2 points rather than 3?" (Most of the class raised their hand) Me: "How many of you would still do the homework if it was only worth 1 point?" (Fewer students raised their hands) Me: "How many of you would still do the homework if it was worth zero points?" (Several students, but clearly a minority, raised their hand) Me: "If you did not do the homework, how well do you think you would do on the test?" (A few students chimed in saying they probably wouldn't do very well) Me: "How many tests do you think you'd have to fail before you realized that you need to do the homework to be successful?" (Some said one test while others said a few tests) Me: "Okay, now that you know this about homework and tests...why would you stop doing your homework if it wasn't worth any points?" (We then had a discussion about how homework is practice, answers are freely available so why NOT do it?! and that it is acts like "insurance.")
In my standards-based grading system, students can be re-assessed for full credit on any learning target they'd like to improve on, but one pre-requisite for this second chance opportunity is that they put in the time in the first place. By completing the homework, they have "purchased insurance" which gives them the right to cash in when a crisis hits. If a student does not complete the homework, but masters the standard, then "insurance" wasn't needed on his/her end - shame on me for not catching this in a pre-test! Full disclosure: this scenario of students not completing their homework and mastering a learning target happens more often than I originally envisioned - I will be piloting a few tweaks to help alleviate this "problem" over the next few weeks. Stay tuned for the results.
For additional commentary on these subjects, you may want to consider reading a few of my previous posts:
"Grading: Points vs. Learning" - de-emphasizing a culture of "points" in the classroom and how quizzes, homework and standards-based grading fit in.
Avid MeTA musings readers, how do you handle homework? If you don't grade it, how do you get buy-in from students, parents and administration? If you are currently grading homework, what's holding you back?
When discussing formative assessment and my current standards-based grading system with colleagues, various aspects of it seem like a lot of "work" (such as inputting multiple scores into the grade book rather than just a single summative assessment score) while other parts are viewed as downright controversial.
In Revisiting Professional Learning Communities at Work, DuFour et al. (2008) suggest three components of formative assessment:
The assessment is used to identify students who are experiencing difficulty in their learning.
A system of intervention is in place to ensure students experiencing difficulty devote additional time to and receive additional support for their learning.
Those students are provided another opportunity to demonstrate their learning and are not penalized for their earlier difficulty. (emphasis mine, pp. 216-217)
Allowing new evidence of learning to replace old evidence is such a hard sell. I hear responses such as "Won't students give a mediocre effort the first time?", "We're not teaching responsibility!" and "This isn't how college works." I've used this example in a previous post, and it's also the one I use to illustrate the point that new evidence of learning should replace old evidence:
Consider the following example. Assume that homework is graded on completion and quizzes/tests on content mastery.
Student A did not understand the concepts and therefore did not complete the homework. Somewhere between the "quiz" and the "test" Student A came in for extra help and finally "understood" the concept which explains his/her sudden improvement on the "test."
In the traditional grading system, which student earns a better grade? Student B, of course. A traditional points system penalizes "later learners." On the "test," both students demonstrated the same level of understanding, but Student A is penalized for initially struggling. Do we have a realistic expectation that students will "get it" the first day we teach concepts to them? If so, then why not have daily tests?
DuFour et al. go on to explain this point concisely in their book:
"Our position has been challenged in several ways. Some have argued students should not be given a second opportunity to learn, or, at the very least, their initial failure should be included in calculating the grade. They claim it would be unfair to allow low-performing students the opportunity to earn a grade similar to those of students who were proficient on the initial assessment. Our response is that every school mission statement we have read asserts the school is committed to helping all students learn. We have yet to find a mission statement that says, “They must all learn fast or the first time we teach it.” If some students must work longer and harder to succeed, but they become proficient, their grade should reflect their ultimate proficiency, not their early difficulty." (p. 219)
I am becoming increasingly convinced that any classroom claiming to involve formative assessment or "assessment for learning" must allow new evidence to replace the old. It just makes sense.
I had an interesting conversation with a colleague today that followed up on a heated discussion that took place yesterday among several staff members over lunch in the teachers' lounge. Yesterday's ongoing question was, "Do our grading practices promote compliance or learning?" For example, when a teacher continually awards five points for merely completing a homework assignment, we're sending a hidden message to parents that the way their student can raise his/her grade is to make sure that all assignments are turned in. Consider the following fictitious email communication:
Dear Mr. Townsley,
I logged on to PowerSchool last night and saw that Johnny's grade dropped from a C to a D. Is there any extra credit he can do to raise his grade?
I look forward to hearing back from you today!
Jane Doe
or I'm guessing any secondary teacher can relate to this one:
Dear Mr. Townsley,
Jessica told me that she has been turning in all of her work, but her grade is still an "F." Could you send me a list of the assignments she is missing so that she can get her grade up?
Thanks,
John Smith
How often does parent communication emphasize turning in assignments, late work and following directions? These are all examples of compliance or "doing more work." Wouldn't it be great if parent communication instead looked something like this?
Dear Mr. Townsley,
Suzie does not seem to have a passing grade right now. Could you send me a list of the concepts and ideas that she still does not understand so that she and I can work on them together?
Thanks,
Sam Johnson
My colleague had a great "aha" moment as she thought about this conversation last night. She agreed that traditional grading systems promote compliance and often report out responsibility rather than learning. She challenged my thinking by suggesting that many parents are happy to ask about compliance and responsibility in relation to grades not only because it is what they were used to in school, but also because it is something they can control. Parents can quickly and easily ensure that their children are completing assignments. There is great satisfaction in holding homework completion up as bait for free time watching television or playing video games. "Once you have your essay written, you may play Halo."

She went on to suggest that many parents might begin to feel helpless once they realized the reason their student was failing Algebra wasn't because he/she wasn't turning in homework, but instead because he/she is unable to solve two-step linear equations.

Further evidence of this idea plays out when thinking about elementary report cards, which tend to be descriptions of skills and abilities rather than letter grades. These skills are typically less cognitive and presumably a larger portion of our society has mastered them, so parents may be willing to accept a skills-based report card at this age. Parents are better able to control and teach those abilities at home, so there is more of a sense of ownership. This sense of control seems to decrease as the content students are expected to know and apply becomes more complex and unfamiliar to the average citizen as students progress through the public school system. Anatomy and calculus have a stigma attached to them that writing in cursive and subtracting integers don't seem to possess.
Here lies a new reality in challenging system-wide change towards a more "standards-based" reporting method in our schools today: traditional grading schemes promote a "compliance" mentality and parents seem to be happy with it because they feel like they have more control. "Un-schooling" both students and parents is a natural first step, but what does this look like? As one of only two teachers in my building currently embracing standards-based reporting, I see a steep hill ahead. Is "parent control" a valid challenge to standards-based reporting? If so, what can be done at a system-wide level to overcome this challenge?
Yesterday, I observed my student teacher's statistics lesson on confidence intervals using a t-distribution. We both agreed that the lesson went fairly well and without a doubt emphasized the main ideas needed to help students understand the day's learning objective:
Construct and interpret confidence intervals for a population mean using t-tables when n < 30 and/or the population standard deviation is unknown.
This topic is a common one taught in any introductory statistics class during the inferential statistics half of the syllabus. In fact, just the school day before, students were exposed to the underlying ideas behind confidence intervals and the difference between descriptive statistics and inferential statistics. After yesterday's lesson, my student teacher and I both lamented how tired and unresponsive the class as a whole was. No matter how many jokes he cracked or the stories told about the history and relevance of confidence intervals, the students were very subdued and content with the silence. Aside from a few strategies we discussed about alternative ways to engage the class, it inevitably seemed like one of those "it will hopefully be better tomorrow" conclusions.
Then, "after school" happened. A student who is not currently and has never previously enrolled in Statistics strolled into our room. For the sake of anonymity, we'll call her "Barbara." Barbara is working on her science fair project and her teacher suggested that she pay a visit to the resident statisticians (my student teacher and me) to critique her data analysis. (Note: I'm really excited that this type of interdisciplinary collaboration is becoming more common between Dawn and our statistics department - last week our class critiqued her class' data tables, graphs and charts for appropriate use of descriptive statistics) As it turns out, Barbara was testing the effect of several variables on a person's ability to run. Her averages, in seconds, were within one or two of each other. Based on her descriptive statistics toolkit (mean, median, mode and range), it appeared as though there was a meaningful difference between her variables due to the averages being separated by as much as two seconds. After all, to the average US citizen, a runner who has raced eight times with an average time of 23.5 seconds seems faster than another running who has also raced eight times with an average of 24.4 seconds, right?!
Enter inferential statistics and confidence intervals.
Over the next thirty minutes, my student teacher and I taught Barbara an abbreviated version of the same lesson he had taught earlier that afternoon to our statistics class. Barbara added standard deviation to her descriptive statistics toolkit and asked questions until she not only understood the idea behind confidence intervals, but also felt like she was able to explain it to someone else, a requirement for the science fair project. Barbara asked questions like "Would my intervals change if I increased the number of trials?" and "Why should I choose a 95% level of confidence instead of a 99% level?" She left the room satisfied and ready to add on to the data analysis piece of her project as well as add more depth to her "results, discussion and conclusion" component based on her questions.
I am 95% confident that Barbara left the room with a deeper understanding of inferential statistics than those currently enrolled in the statistics course. What a difference context makes...
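For the curious, here is roughly the calculation we walked Barbara through, sketched in Python with only the standard library. Her actual race times aren't published here, so the times below are hypothetical values chosen to match the averages from the story (about 23.5 and 24.4 seconds over eight races each); the t critical value for df = 7 comes from a t-table.

```python
import math
from statistics import mean, stdev

# Hypothetical race times (seconds), chosen so the averages land
# near 23.5 and 24.4 - the gap that looked "meaningful" at first glance
runner_a = [22.1, 24.3, 23.0, 25.1, 22.8, 24.0, 23.2, 23.5]
runner_b = [23.0, 25.8, 24.1, 26.0, 23.4, 24.9, 23.8, 24.2]

def t_interval(data, t_crit=2.365):
    """95% confidence interval for a population mean; t_crit is the
    t-table value for df = 7 at 95% confidence (two-tailed)."""
    n = len(data)
    margin = t_crit * stdev(data) / math.sqrt(n)
    return mean(data) - margin, mean(data) + margin

lo_a, hi_a = t_interval(runner_a)
lo_b, hi_b = t_interval(runner_b)
print(f"Runner A: ({lo_a:.2f}, {hi_a:.2f})")
print(f"Runner B: ({lo_b:.2f}, {hi_b:.2f})")
# The two intervals overlap: the 0.9-second gap in averages
# may be nothing more than run-to-run noise.
```

Even though the averages differ by nearly a full second, the two 95% intervals overlap, which is exactly the caution that inferential statistics added to Barbara's descriptive toolkit.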
"Children are not mature enough to understand the ramifications of academic failure; therefore we cannot leave achievement to student interest alone. In this country, we require individuals to be age 16 to drive an automobile, 18 to vote in an election, and 21 to drink alcohol, yet we regularly give children "licenses to fail" at a much younger age when they do not exhibit immediate interest in academics..." (p. 25)
In my standards-based grading system, students may re-take parts of the "test" and their newest level of understanding will always replace the old; however, the decision to put in the extra time outside of class is up to the individual student. Is this practice still giving students a "license to fail?"
When, as K-12 educators, can/should we say, "The responsibility of learning is ultimately up to the student"? In high school? Never? The day before graduation?
I was asked to present at a professional development day for several school districts in Eastern Iowa. My presentation is entitled, "How do you know if they know? Re-examining assessment through the lens of learning" and will describe the theory behind my current grading and assessment practices. The theme of the day is the Iowa Core Curriculum and I believe my assessment practices mirror at least some of what is being framed as a movement here in Iowa towards "assessment for learning."
Regular readers of this blog know that I no longer report a single score for a test, but instead report students' understanding through multiple learning targets. Homework and quizzes are viewed as feedback opportunities rather than summative assessments. Students may re-take parts of the "test" and their newest level of understanding will always replace the old.
The slides below will be used at the session and make the most sense if you view them full-screen along with the speaker notes.
In addition, the packet with an outline of the presentation and a few selected Educational Leadership articles that all session attendees will receive is available here.
Finally, I created a small website with links to additional commentary on standards-based grading and suggested further reading.
On Monday and Tuesday, I attended my first education technology conference. ITEC lived up to my expectations of meeting up with so many people I've been following via Twitter. A fellow edu-blogger, Russ Goerend, and I made the trek together to Coralville, Iowa and within several hours had introduced ourselves to at least half a dozen tweeps. The sessions were not earth-shattering. Russ posted some thoughts on this aspect of the conference on his blog.
"Why is it so easy to skip the "cool tool" sessions? Because I've already heard about them through my PLN. As much of a buzzword as it is, having a strong PLN (online and off) is huge. I think it's once people have that PLN established that we can move past the "cool tools" phase and get some real work done."
His tweet also sums it up nicely. There were a few sessions focused on "cool tools" such as Moodle, Animoto and iPod Touches. Did the participants come out of those sessions with some new technology knowledge? Probably. Will it change the way they teach and students learn? Probably not.
Evan Abbey commented on his blog about an ironic takeaway from this technology conference, as tweeted by Seth Denney. An anecdote from the conference supporting this idea was a session led by a technology director discussing his district's implementation of Moodle as a learning management system. He talked about his successes, struggles and future aspirations for the first thirty minutes of the session and then began showing the audience some of the courses the staff had created thus far. He asked the audience to identify common themes as well as ways his staff could improve their courses (a welcome opportunity for interaction in what is typically a passive sit-and-listen format). Some audience members noted the differences among courses: some had only the syllabus posted, while others had their entire course in digital format. Up until this point, seemingly every person in the room had been wowed by this district's endeavor. I waited a bit and then raised my hand.
"The tool hasn't changed the way these educators teach. They've just transferred their worksheets to pdfs and made them available to download for students."
The presenter acknowledged this observation and the entire tone of his session changed. He admitted his utter disappointment in the Moodle roll-out process. I had the chance to talk with him in the hallway after his session. He was at a loss because he was under the impression that a new tool had the power to change his staff's teaching practices. His sentiments parallel a recent tweet by Bill Ferriter:
"Technology is not automatically good pedagogy. Instead good pedagogy is just made easier by technology."
Literacy, PLNs and good pedagogy all have something in common: a distinct "un-technology" emphasis. I've quoted Larry Cuban once before...
“It is not about technology; it is about learning” (2001, p. 184).
...and I quoted him again in my own ITEC breakout session. I came away from this conference having gained more meaningful relationships with my tweeps, but also an even more cynical view towards "the solution to education's problems is more 21st century technology in the hands of our students and educators" movement. Wesley Fryer summed it up a few weeks ago on his blog,
"While it certainly is true 'kids are into technology' today, it is a fallacy that providing these technologies to teachers in the classroom will automatically result in better learning experiences for students."
When will those in charge of our "technology" conferences get it, too?
"Grading as it has been done traditionally promotes a culture of point accumulation, not learning. It encourages competition rather than collaboration. It often focuses on activities instead of results. It makes all assessments summative (assessment of learning) because everything students do gets a score, and every score ends up in the grade book. " (p. 127)
O'Connor also admits that eliminating grades is probably not going to happen anytime soon, so we, as educators, must work within the system to change the way we report student learning.
Recently, I've been working diligently to change the culture of the classroom to one more focused on learning and to report grades in a standards-based fashion. It is admittedly a drawn-out process, and I'm far from the ideal. The more books and articles I read, the more assessment and grading seem like a giant running back coming full steam at my puny physical frame.
How are you tackling grades? Do your grading practices promote points or learning? How are you framing assessment as a learning tool in your classroom rather than as a reporting mechanism? Leave a comment below.
In Building Leadership Capacity in Schools, Linda Lambert mentions the need for educational leaders to sometimes break the "norm of silence." The n.o.s. looks something like this:
"I won't talk with you about anything you're uncomfortable with."(p. 54)
I admit that this has been my attitude towards some (but not all) of my colleagues regarding many of the ideas written about on this blog. Looking back on this practice, I am ashamed to see the change that "could have happened" but didn't due to my silence. For example, I have been mulling over a way to change homework grading practices for several years. It led to the assessment and grading revolution my regular readers know I have been working through and sharing via this blog. I remember when my math education colleagues finally were convinced that posting homework answers on the board for students to see anytime as they worked through the problem sets was a good idea.

Why am I, still to this day, ashamed to share my ideas about assessment reform with my colleagues? This hit home several days ago when I sat on a panel of "veteran teachers" speaking to a group of pre-service educators at an evening class. One student asked the question, "What is one thing you would change about the educational system?" I suggested that the way we grade and report student progress needs quite a bit of fixing and that I had some ideas on how this might be done, but would only share them if there was enough time at the end of the Q&A session. After my colleagues on the panel discussed NCLB and "too much paperwork" as their pet peeves, I could only smile. Was that any surprise to these pre-service teachers? I'm guessing any current introduction-to-education textbook mentions the pitfalls of NCLB, but grading?!
Sure enough, a brave middle-aged man asked me a follow-up question about grading towards the end of the time allocated for the teacher panel. I boldly laid out an assessment-for-learning rich classroom with a reporting scheme based on learning targets rather than specific assessments. By comparing my system with the traditional grading system, it was easier than I thought to gain the attention and respect of these pre-service teachers. It seemed so easy. Maybe it was the follow-up email from one of the students wanting to know more about this "anti-grades" idea? Maybe it was the conversation with a colleague in the parking lot after the panel about how he might work towards this ideal? I do know that my "assessment secrets" should no longer be purposefully hidden in a box.
What's stopping me from breaking the "norm of silence" with my own colleagues? What's stopping you from sharing all of the ideas you read, tweet and blog about with your education-minded colleagues?
Full disclosure: A big thanks goes out to Laura Berry, ASCD Communications Specialist, for sending me a complimentary copy after I submitted an essay for consideration in one of their themed publications that never came to fruition. I have not received any compensation to write this review and did not receive the book under any obligation to write this post. Now, on to the review.
The author, Cathy Vatterott, is an academic at the University of Missouri-St. Louis and is commonly referred to as "The Homework Lady." In this book, she breaks down homework research and paints a fairly objective picture of the limits of this research as well. In addition, Vatterott lays out the common objections raised by homework critics such as Alfie Kohn and attempts to create a practical spin on what this does and does not mean for the average classroom teacher. The third chapter is entitled "Homework Research and Common Sense," and it lives up to its title from the get-go. Her objective approach is summed up in the following quote.
"...the gist of the research, then, is that a small amount of homework may be good for learning, but too much homework can actually be bad for learning." (p. 62)
With such a middle-of-the-road attitude, it's hard not to at least take her arguments seriously. Vatterott even asks some of the hard questions typically heard in faculty lounges during the lunch hour.
"What if more time spent grading homework equaled less time to plan quality classroom instruction, which could affect the quality and amount of learning that occurs in the classroom?" (p. 79)
Okay, maybe the question has never been posed that formally, but who hasn't heard the occasional griping about the usefulness of homework, particularly when students don't complete it and when grading it takes so much of an educator's time? My guess is that I'm not the only one who hears this sentiment from time to time.
Perhaps the most useful chapter is the fourth one focusing on effective homework practices. The author draws a line in the sand regarding homework and grading - an idea that took me years to agree with, but could cause some readers to immediately close the book and never pick it up again.
"Homework's role should be as formative assessment - assessment for learning that takes place during learning. Homework's role is not assessment of learning; therefore it should not be graded." (p. 112, emphasis mine)
Because I happen to believe that grading homework does indeed get in the way of learning and is counterproductive towards documenting understanding in a way that allows new evidence of achievement to replace old evidence, I continued reading with great enthusiasm.
I especially enjoyed a section describing how "grading homework" is different from "checking homework."
"The purpose of homework should be to provide feedback to the teacher and the student about how learning is progressing....Checking (providing feedback) is diagnostic - the teacher is working as an advocate for the student." (p. 112)
Rethinking Homework is well written and provides many thought-provoking ideas related to homework, grading and formative assessment. Personally, I had already read several of the articles and books Vatterott quotes in her writing, so the underlying ideas often seemed like old hat. If you've read Marzano, O'Connor, Fisher & Frey, Stiggins, Guskey and even Robyn Jackson's latest ASCD book (which I recently reviewed as well here) in great detail, this book may seem like a broken record. On the other hand, if you're looking for a single book that might challenge the status quo in the way you and your colleagues view homework and, more largely, assessment, I highly recommend Rethinking Homework by Cathy Vatterott.
"...changing classroom assessment is the beginning of a revolution - a revolution in classroom practices of all kinds...Getting classroom assessment right is not a simplistic, either-or situation. It is a complex mix of challenging personal beliefs, rethinking instruction and learning new ways to assess for different purposes." (2003, pp. 15-16)
Perhaps this book will finally be the spark to unite a blaze of conversations in your school that change the way educators view teaching and learning. How do you think your colleagues would respond to this book's premises?
I used to penalize students who didn't complete their assignments by taking off a point or two. After several years, I finally realized it was not changing the behavior of my students. The easiest way out for students is to take a hit by missing a few points every once in a while. This year, I'm trying out an experiment reflecting a different philosophy towards points, grading and late work. In the students' eyes, it might even be worse than taking off points.
I require students to finish work they do not complete the first time. Here is my thought process:
If the assignment was important enough to complete the day I assigned it, why wouldn't it also be important enough to finish it, even if it's a day late?
I can foresee several rebuttals to this philosophy - many of which end in something like...
Aren't you teaching your students that it is okay to be irresponsible?
In my mind everything I do should revolve around helping students "learn." I will let the rest of the school "teach" responsibility by taking off points for incomplete assignments. In the meantime, I will require students to be responsible by completing the important math assignments I have carefully selected for them to conquer. By requiring them to do their assignments, I believe that I am helping them learn.
This afternoon, I facilitated a breakout session for our district's "technology showcase." We've been using a "pick your two favorite 60-minute sessions led by a colleague who volunteers once per year to show off the cool technology stuff in his/her classroom" format for several years now. In the past, I've led sessions on Web 2.0 apps and, most recently, an introduction to Moodle last year. Aside from the English teacher next door who has gone completely paperless this year with Moodle, I have not seen a whole lot of "change" come out of the sessions I've led or those led by others. In fact, a few colleagues have even suggested that this format, while exciting initially, may have run its course. Many of the sessions (my own as well) have been very "tool-centered" in the past, so a colleague encouraged me to change things up and lead a more philosophical discussion on the proper role of technology in education and the steps we might take locally to move in this direction. The program for the afternoon listed my session description as follows:
TECHNOLOGY INTEGRATION – Matt Townsley – 1:00 PM, Room 409
Always? Never? Why? Come join us for a philosophical discussion on issues related to the use of technology (or not) in your classroom. You will leave this session with a refined outlook on the purpose and relationship between technology, pedagogy and content.
Here are the slides I used. The slides were not intended to speak for themselves as I created them in quasi-zen style, so I'll add some commentary below to illustrate the points I was trying to make.
Slide 1: Put your thinking caps on today. You're going to need them!
Slide 2: Disclaimer: Today's presentation is being recorded.
Slide 3: What is technology? Computers? Overhead projectors? Pencils? Microscopes? (Discuss)
Slide 4: Scenario about George: "George knows how to open Word documents and take attendance online using PowerSchool. He wants to transfer his lecture notes to PowerPoint instead of using the overhead. He also wants his social studies students to create web pages for their final projects rather than doing a research paper. He wonders if this will make a difference in how much his students learn or will enjoy his class." (Discuss whether or not George benefits from days like today's technology showcase; Is George using technology to change the way he teaches? Are George's students benefiting by his use of technology? How can we help George continue this path of using technology to help his students better learn?)
Slide 5: Scenario about Joyce: "Joyce is a teacher who knows quite a bit about technology. She has a Facebook page, a Twitter account and live blogs at her own kids' sporting events. She wants to use these tools in her classroom. She has even gone to a few workshops such as "Blogs in the classroom" and "How to create better wikis." Joyce is always looking at new cool "tools" and wondering how to use them in her classroom." (Discuss how Joyce differs from George. Are her students necessarily learning more/better? Does Joyce benefit from days like today's technology showcase? What does our district do to support teachers like Joyce?)
Slide 6: Spork. What if we've been doing it "wrong"? What if we've been using forks when we need spoons? What if we've been looking at the "tool" too much and not enough at the desired learning outcomes? Some call this "technocentric" planning. Remember the spork in this slide.
Slide 7: Technology in education is a double-edged sword. Cuban hits on the "Georges" in our schools who use technology to continue doing what they've always done. Mishra & Koehler allude to the Joyces who need to connect their technology tools with their teaching strategies and desired learning outcomes.
Slide 8: Our district (and perhaps many others, too) has a problem. We have teaching PD (i.e. differentiation, co-teaching, 6+1 traits of writing). We have technology days (like today). We also have curriculum team time where we focus on materials, standards and benchmarks. When do we have explicit conversations about those areas together?
Slide 9: Solution: We need to be in the middle of this Venn diagram. Discussed example of using Geometer's Sketchpad to match teaching strategy (student-centered "construct a concept" pedagogy) with content (know the angle sum of triangles at a deep enough level to realize it works for any and all Euclidean triangles) with technology (Geometer's Sketchpad allows students to create and manipulate an infinite number of triangles in several minutes so that they can generalize the concept through discovery; this is in contrast to the "old" technology of compass and protractor, which has the potential to lead to miscalculations and student misconceptions). Some of us have bigger technology circles (i.e. Joyce). Some of us don't (i.e. George). Some of us may need to enlarge our teaching circle by looking at new strategies. Some of us may need a more in-depth understanding of our content areas. The key is creativity, per Mishra. We must be able to creatively think about our content and teaching strategies and then about how to use software and hardware, not often created for educational use, in our classrooms. Per other scholars, we need to look at the activity types and strategies that "work" in our disciplines and match them up with the available technology tools.
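The Sketchpad discovery activity described in this slide can be mimicked outside the software as well: generate many random triangles and confirm the interior angles always total 180 degrees. This is a hypothetical Python sketch of that idea, not part of the actual classroom materials.

```python
import math
import random

def angle_sum(a, b, c):
    """Interior angle sum, in degrees, of the triangle with vertices a, b, c."""
    def angle_at(p, q, r):
        # Angle at vertex p between the rays p->q and p->r
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norms = math.hypot(*v1) * math.hypot(*v2)
        # Clamp to guard against floating-point drift just outside [-1, 1]
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))
    return angle_at(a, b, c) + angle_at(b, a, c) + angle_at(c, a, b)

random.seed(1)
for _ in range(1000):
    # Three random points are almost surely non-collinear, so they form a triangle
    a, b, c = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    assert abs(angle_sum(a, b, c) - 180) < 1e-4
print("All 1000 random triangles have an angle sum of 180 degrees")
```

Like dragging vertices in Sketchpad, the loop lets students see the invariant hold across an arbitrary family of triangles rather than one hand-drawn example.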
Slide 10: How do we get there? Prensky proposed a framework several years ago. (Discuss: How do we help dabblers such as George move up? How do we help teachers like Joyce do new things with their technology knowledge?)
Ending comment:
It's not about technology. It's about learning.
Kudos to fellow edu-blogger, Russ Goerend, for an encouraging phone call yesterday as I was putting the final touches on these slides.
Feel free to leave your thoughts in the comments area below.
It's an exciting time of the year. Classes start in less than 48 hours. Lots of district, building, leadership and curriculum meetings have taken place the past few days. One common theme has been "assessment." Even though our district continues to perform very well on standardized tests, we have been charged to go from "good" to "great" by the administrative team. I can't express in words how excited I am about the direction our district is going down the boulevard called assessment. I truly believe that transforming assessment practices is the beginning of so many other great conversations and classroom changes. To keep this in the front of our minds, each faculty member is being asked to document his/her assessments from August to December. The documentation is loosely associated with Rick DuFour's three essential questions.
1. What do we want all students to learn? As educators, we must think about the essential learnings (standards, benchmarks, learning targets, objectives, take your pick!) our students should have as a result of taking our course. These may change slightly from year to year depending on the students, but we should be able to identify the "core" ideas and concepts each student is expected to learn.
2. How will we know when each student has learned it? As educators, we should be able to articulate the connection between the essential learnings and the assessments we administer in our classrooms. This involves more than just printing out the textbook publisher's test and assuming it "fits" our intended purposes. It is also not merely giving students pop quizzes covering the night's reading and moving on when they haven't a clue what they were to have learned. What's the best way to clearly connect assessments and learning targets? Standards-based grading! It's been a hard sell the past few days in my conversations with colleagues, but I look forward to sharing my successes and failures in developing the implementation of this idea further on this blog.
3. How will we respond when a student experiences difficulty? As educators, how are our assessment and instruction practices set up to support students who struggle? Are we caught up in the "assess and move on" rut? Or are our assessments created, graded (or not) to inform future instruction? The buzzword commonly used here is "formative assessment." I discussed this idea in many previous posts, including this one.
I really don't feel like I have a firm grasp on #3. Last year, I reported out to my students their successes and failures on quizzes (my bi-unit written, formative assessments) the same way I did on tests, via a 4-point scale per learning target. I keep thinking about Susan Brookhart's comments in her Dec. '07/Jan. '08 Educational Leadership article, "Feedback that Fits," when she said,
"Formative assessment..Here is how close you are to the knowledge or skills you are trying to develop, and here's what you need to do next....Good feedback contains information students can use....For feedback to drive the formative assessment cycle, it needs to describe where the student is in relation to the learning goal..."
My "old" standards-based reporting on quizzes looked like the image below. I gave students written feedback on individual problems and then a score for each learning target assessed correlating to a narrative describing their current state of understanding. I used to argue that the learning target score was a way of communicating to students how well they were doing in relation to the learning goal. I think it still does make sense in this context, but it does not give them the feedback they need and deserve describing what they need to do next to improve their learning. Looking back, I was giving my students a red, yellow or green light, but never a map to tell them where to turn next. My "next" step is changing the "scoring" into a rubric that instead gives students an idea of where they fit on the continuum of concept mastery. I hope this continuum and more "student-friendly" wording along the bottom is information students can better use. I will also continue to give feedback on individual problems so that students can understand what they need to do to better understand the topic or overcome their misconception. Last year's practice of grouping students according to their relative strengths and weaknesses (related to the learning targets) will continue so that students not have the opportunity to learn from my feedback, but also from their peers. My goal in this give students more meaningful feedback and less grading. This subtle change, I believe, takes the emphasis away from a "number" and instead on the feedback.
What flaws or critiques do you see with this change in philosophy? How would you react as a student if you did not receive a "grade" (in the form of a number or percentage) but rather a mark on a continuum to complement written feedback on problems?