Open letter to "21st Century School Authorities"

Dear "21st Century School" Authorities:

I've been thinking a lot about your ongoing discussion at Dangerously Irrelevant regarding what a "21st century school" might look like. You have kindly suggested that a 1:1 laptop initiative is one possible characteristic of a 21st century school. While I don't disagree, I feel like the student to computer ratio is attracting far too much attention in the current paradigm and may be harming educators' ability to understand how a "21st century school" is different from any other school. Does the hardware and software combination define the "century" of the school? I hope not.

Some recent commentary by Ryan Bretag on Web 2.0 tools illustrates a new train of thought:

"Yes, teachers are using some of the tools or even a lot of the tools. While this is great and provides wonderful new contexts for students, I'm not convinced this will fundamentally shift education if we continue to retrofit these tools instead of embracing the philosophy of participatory and connective learning. In other words, it is time we start seeing these tools as the tip of the ice berg not the identifier of classrooms or schools that have become 21st Century, that have become participatory."
If a given school has a 1:1 computer to student ratio, but is not using them to do "new things" then is it truly a "21st century" school? Simply putting a computer in each student's lap doesn't mean new, innovative, engaging or even effective teaching and learning is going to take place. Contrast this with a school that has a 1:2 or 1:4 computer to student ratio that is using the hardware and software more effectively via student-centered activities, education-friendly networking, rigorous and relevant project-based learning and the like. We often get hung up on the hardware and stuck on the software rather than their connection to teaching and learning.

From hardware/software to pedagogy
I realize that a 1:1 initiative is merely one of many possible characteristics of a 21st century school. A "shift" in our educational system is not going to happen due to fancy new websites and hardware tools. It will begin to take place when we step away from "retrofitting" these new tools and instead spend our time and effort brainstorming how the tools might match up with pedagogy that "works." To better emphasize this, I would like to suggest that a "21st century school" should not merely be measured by its hardware purchases, but rather by what happens with the tools for educational use. Answering the question, "How are research-based strategies being used in connection with 21st century technology tools?" seems to be an appropriate filter to add to the paradigm. The TPACK framework is a great first step for helping educators connect technology, pedagogy and content.

A simple change in "marketing" 21st century schools as more than a "building with up-to-date hardware" may be a useful first step in helping others begin to identify examples of this "undefined" term. Let's continue the conversation in terms of how schools are using the technology rather than simply their ability to secure as many laptops as possible.

Sincerely,

Matt

Grading: Points vs. Learning

I have been conversing with a few colleagues about traditional grading practices. Everyone knows what the "points" system is like: all assignments, tests and quizzes are assigned some pre-determined point value. Sometimes the categories (homework, projects, tests, lab reports, etc.) are even weighted separately in order to come up with a percentage that translates into a "letter" for the purpose of quarter/semester grades. The fuel for this conversation comes from some notes from The Homework Lady and a book entitled Developing Grading and Reporting Systems for Student Learning by Guskey and Bailey (2001). Here's a quick snippet from p. 49.

"Teachers must decide, therefore what purpose each source of evidence is to serve and then tailor their assessment practices to fit that purpose. In particular, they must be careful in their efforts to increase quantity of evidence available for grading and reporting that they don't use formative assessment information for summative purposes...It would be inappropriate, for example, to use the results from formative quizzes constructed to check students' learning progress and prescribe corrective activities, or from homework assignments designed to offer additional practice on difficult concepts or skills in determining students' summative grades in a subject area or course." (Emphasis mine)
In other words, "practice" assignments/activities shouldn't really be a part of calculating a final (summative) grade. I think it's fair to say my math department believes that "practicing" well (and being allowed to make mistakes and ask questions) should and will lead to "performing" (testing) well, but are we practicing what we preach in the way we report student learning?

Consider the following example. Assume that homework is graded on completion and quizzes/tests on content mastery.

Student A: Homework: 50% Quiz: 60% Test: 100%
Student B: Homework: 100% Quiz 100% Test: 100%

Student A did not understand the concepts and therefore did not complete the homework. Somewhere between the "quiz" and the "test" Student A came in for extra help and finally "understood" the concept which explains his/her sudden improvement on the "test."

In the traditional grading system, which student earns a better grade? Student B, of course. A traditional points system penalizes "later learners." On the "test," both students demonstrated the same level of understanding, but Student A is penalized for initially struggling. Do we have a realistic expectation that students will "get it" the first day we teach concepts to them? If so, then why not have daily tests?
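To put numbers on it, here is a minimal sketch (Python, with hypothetical category weights of 20% homework, 30% quizzes and 50% tests; any real grade book would set its own) of how a traditional weighted-points grade treats these two students:

# Hypothetical category weights -- a stand-in, not any particular school's policy.
weights = {"homework": 0.20, "quiz": 0.30, "test": 0.50}

def weighted_grade(scores):
    # Traditional approach: weight each category's percentage and add them up.
    return sum(weights[category] * percent for category, percent in scores.items())

student_a = {"homework": 50, "quiz": 60, "test": 100}
student_b = {"homework": 100, "quiz": 100, "test": 100}

print(weighted_grade(student_a))  # 78.0 -- a "C", despite a perfect test
print(weighted_grade(student_b))  # 100.0 -- an "A"

Both students walk out of the test with identical demonstrated understanding, yet the averaging machinery reports them a full letter grade or more apart.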

Two questions have come out of this conversation:
1) Why not simply grade homework for completion?
This clearly skews the reporting process. For example, what does a "95% - A" grade mean? Some might argue it means a student has completed 95% of the assignments. Others might claim it means a student has mastered 95% of the concepts. Yet another group suggests some blend of effort and mastery.

2) If I don't grade homework/assignments, then why would students do them?
I think precisely the opposite is true. We are encouraging our students to copy, cheat and only turn in the assignments they feel are "worth it" when they are assigned a point value.
"Students begin to view academic wealth as determined by the number of points they can accumulate. Teachers set the currency rate when they establish their grading standards and simplify the required bookkeeping with modern computerized grading programs. Savvy students keep track of current exchange rates, calculating far in advance the exact number of points they need to attain the grade they want, and adjust their efforts accordingly. They know they must plan cautiously since they can lose points or be fined for certain transgressions, such as not completing a homework assignment or turning in a project late. They also make note of contingencies that allow them to earn points or bonuses, such as doing special projects or volunteering for work outside of class" (Guskey & Bailey, 2001, p. 19).
Most secondary educators have experienced students asking, "how many points is this worth" or parents emailing about "turning in late assignments to boost their child's grade." I want my students to "practice" or complete learning activities for the sake of learning. I'm guessing I am not the only one with this mindset.
"Sadly, this emphasis on earning points in order to procure the grade commodity diminishes the value of learning"
(Guskey & Bailey, 2001, p. 20).
I believe the answer to this "grading" issue is related to creating an increased awareness and classroom culture of formative assessment and reporting out student learning via standards-based grading.
"Teachers must seek an appropriate balance between the formative instructional purposes of assessment of student learning, and the summative evaluative purposes required in grading"
(Guskey & Bailey, 2001, p. 31).
This balance is an ongoing challenge for me. What arguments for/against the "points" system are missing from the conversation? What strategies are you trying out in order to better report student learning?


PLNs: medicine or vitamins?

In a previous post, I suggested personal learning networks as an ideal model for differentiated professional development. In the midst of diffusing this idea through communication with virtual colleagues as well as in a face-to-face format locally, several questions have come up. I'll try to sum them up:

  1. Will PLNs replace existing professional development? If not, how might they supplement existing initiatives and state/federal mandates?
  2. I thought PLNs were the "end-all" solution to differentiating professional development, why can't I find any resources/thoughts/commentary on ___________?
These both seem like logical questions given the recent "hype" surrounding personal learning networks, particularly as I have expressed my enthusiasm with building-level colleagues about its possibilities. I would like to take a few moments and address these two questions using a metaphor that came to mind today.

Personal learning networks should be viewed as "vitamins" rather than "aspirin."

Just as technology alone will not improve teaching and learning, PLNs are not the "aspirin" solution to our system's professional development headaches. The information gathered from blogs, wikis, Twitter and our favorite podcasts has the capacity to shape our philosophy of education as well as provide a medium for learning about new teaching resources. Personal learning networks are inherently de-centralized. School districts will always need to push out information to the masses through some form of centralized medium. Currently, this is often in the form of face-to-face in-service meetings to ensure that everyone is on the same page.

I see PLNs as more of a "vitamin" approach to learning and professional development. The edubloggers I follow and journals I subscribe to strengthen my practice as an educator. Whenever I load up my RSS or Twitter feed, I am looking for new ideas and insights into the areas of math education, technology and assessment. Some tidbits are timely, whereas most of it is archived for later retrieval. I may read about an interesting use of cell phones in the classroom setting, but not actually find a use for it until weeks or months down the road. The ability to archive and search so much of what I read via my PLN makes it an invaluable resource to counter the "why can't I find information on ________?" question. If I can't find someone who is currently talking/chatting/Tweeting/writing about a given topic (or even willing to), I may still find something that will come in handy down the road. Just as vitamins strengthen the body for extended healthy use, I believe my personal learning network has greatly enhanced my educational practices.

While the idea of "personal learning networks" continues to evolve, I am interested to see how it will be individually marketed. It starts with those who are currently enjoying the benefits of networking for professional growth's sake.

What purpose does your PLN serve? Do you see it as "medicine" or a "vitamin" for your professional growth?

The BEST ed tech tool

I just finished reading the May edition of Leading and Learning with Technology published by the ISTE. In the midst of some commentary on p. 47, a somewhat ironic quote (considering the publication's emphasis) stood out:

"The most influential ed tech tool is not a tool at all, but a person..."
It reminded me of a book I read by Doug Johnson several years ago entitled Machines are the easy part; people are the hard part: observations about making technology work in schools. This book is a must-read for anyone interested in engaging in meaningful educational technology discussions. I have attended conferences, read articles and skimmed through countless blog posts on topics such as "using wikis in your classroom" or "10 ways to engage your students using podcasts." Once the "tool" has been mastered and potentially even tried out several times in the classroom setting, the natural reaction of any educator is to begin the "evangelist" stage and share the new tool and its application with as many colleagues as possible. This is a good thing, right?

In his 30th mini-chapter, Johnson makes a point that resonates with me:
"Technology-literate folks know when to do things the old fashioned way."
We can't get so bogged down with technology that the tool itself takes precedence over quality teaching and learning.

The 51st mini-chapter hammers home this point:
"Rule of Restructuring Education with Technology: the real changes are in teaching practices not technology."
Requiring your students to create PowerPoint slides for a presentation is not true technology integration. Presentations are nothing new. Neither is having them type up a history time line. History and time lines have been around forever! (Pun intended.)

We can keep attending our ed. tech conferences, Tweet about our favorite technology tools, and blog about how our students are missing out if we don't use 21st century tools in our schools, but until the person becomes the BEST ed tech tool, the battle to have a meaningful impact on student learning will never be won. We, as educators, not Google or Moodle, have the potential to change the status quo in education.

Are you up for the challenge?

metacognition + wait time = effective teaching

I feel like I had a great pre-service education at Wartburg College. My recent graduate training in curriculum and instructional technology at Iowa State University was outstanding, too. The education department faculty that mentored me at both institutions were truly dedicated to the profession and often did an excellent job "walking the walk" in addition to "talking the talk."

Today, I came across a video of a professor at Iowa State who really seems to value the importance of metacognition, which I also blogged about several weeks ago. While I never had Dr. Mike Clough as an instructor, his work has piqued my interest. It is well worth eight minutes of your time to watch the video linked on this page as he models metacognition and wait time for a small group of future science educators.

Reflecting upon my own practice, I'm most likely too impatient when it comes to soliciting students' thoughts in a large group setting. Seeing wait time through the lens of metacognition and the quest towards better understanding students' misconceptions sheds new light on its importance. A simple tweak to this aspect of instruction seems like it has the potential to pay big dividends.

How would you rate your regular use of wait time?

Self-assessment via project-based learning

Today, my Statistics students were charged with applying their newly acquired hypothesis testing skills in a "real life situation." I typically roll out a new project like this one by outlining my expectations and showing off several samples of previous students' work. A rubric accompanies each project so that students know how it will be "graded." A new layer I've added this year is requiring students to self-assess their work before they turn it in. So far, students have responded fairly well to the idea and seem to understand that the rubric is as much for "them" as it is for me. We've had several meaningful conversations focused on project-based learning and how a rubric should take the "mystery" out of the grade and expectations.



The self-assessment piece typically happens the day students are asked to turn in or present their work. I decided to add a new component today by requiring my current students to look at a small sample of previous students' projects (before even beginning the planning process of their own project) and compare them to the rubric. Students were asked to choose the two sample projects they felt were done the best.

Here was the "student's choice." Neatness, readability, and ability to communicate the hypothesis test process were reasons given for its popularity.
It seemed like a nice "twist" to the culture of self-assessment I'm attempting to foster in my classroom. By allowing students to critically analyze others' work, I'm hoping it will carry over into their own work as well. This quasi-peer assessment exercise on the front end will hopefully help them see what "quality work" looks like in this context. I wanted to share this as a working example in my quest towards modeling self-assessment.

Caught speeding!

As I entered the city limits on my way to work today, I saw a radar speed sign presumably placed by the county sheriff's department. This is not the first time, nor probably the last, that I will see this sign. Rather than paying a deputy to sit at the city limit sign, they are using a piece of technology with the intent of slowing down traffic as it enters town.

No, I didn't get picked up for speeding, but the idea of a "warning" in this form reminded me of how we should be assessing our students. How often do we wait until the "test" to communicate students' misconceptions to them? The radar sign seems to be a nice metaphor for formative assessment. It gives feedback to the driver (student) in a way that is meaningful. Rather than an officer saying, "Sorry, you get a ticket today," (penalty-driven, summative assessment?) the radar sign kindly reminds us to slow down. We must help our students "slow down" via better feedback in the form of formative assessment techniques. Waiting until the mighty officer turns on his/her sirens is simply too late!

Does this metaphor resonate with you? Are you more often the deputy or the radar sign? Leave your comments below.

Everyday is "formative assessment day"

I have been reading a PDK article by Paul Black and Dylan Wiliam entitled, "Inside the Black Box: Raising Standards Through Classroom Assessment." A colleague and I have been talking quite a bit about formative assessment and she recommended this article. A few excerpts from the article hit me as I was going about my usual teaching responsibilities. The first one helped solidify all of the thought I've been giving to self-assessment and metacognition on this blog.

"Thus self-assessment by pupils, far from being a luxury, is in fact an essential component of formative assessment...if formative assessment is to be productive, pupils should be trained in self-assessment so that they can understand the main purposes fo their learning adn thereby grasp what they need to do to achieve."
I started to pat myself on the back a bit, but then quickly tasted a slice of humility pie as I read on...
"...the choice of tasks for classroom work and homework is important. Tasks have to be justified in terms of the learning aims they serve, and they can work well only if opportunities for pupils to communicate their evolving understanding are built into the planning. Discussion, observation of activities, and marking of written work can all be used to provide these opportunities, but it is then important to look at or listen carefully to the talk, the writing, and the actions through which pupils develop and display the state of their understanding..." (Emphasis mine)
In my effort to be more "data-driven" in my formative assessments, I feel like I've been a bit too analytical and quantitative. Rather than simply touting standards-based reporting as the end-all solution, I realize I need to listen to the students discuss their misconceptions rather than just seeing the level of understanding as some sort of number on paper. I decided it was worth giving a try.

"Listening" in action
I gave a brief quiz yesterday in one of my math classes. From a standards perspective, it assessed two of this chapter's learning targets. One was a "new" learning target and one was a learning target that was assessed via a quiz once already last week. Today, I handed back the quiz and matched up students according to their relative strengths and weaknesses on their current level of understanding of the learning targets. This was my previous practice. Today I took it a step further. I circled the room, helped groups who were struggling, and listened in with the intent of getting a better grasp of their current misconceptions. This led to a more informed and meaningful whole-class discussion after the groups had time to share with each other.
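For anyone curious about the mechanics of the matching, here is a rough sketch of the idea (Python, with made-up names and scores; in reality I do this by hand while flipping through the quizzes): rank students by which learning target they are stronger on, then pair opposite ends of the list so each partner can help with the other's weaker target.

# Made-up 1-4 learning target scores from the quiz (LT1 is new, LT2 is the repeat).
scores = {
    "Ann":  {"LT1": 4,   "LT2": 2},
    "Ben":  {"LT1": 2,   "LT2": 4},
    "Cara": {"LT1": 3.5, "LT2": 2},
    "Drew": {"LT1": 2,   "LT2": 3.5},
}

# Students strongest on LT1 float to the front; students strongest on LT2 sink to the back.
ranked = sorted(scores, key=lambda name: scores[name]["LT1"] - scores[name]["LT2"], reverse=True)

# Pair the two ends of the list, then work inward.
pairs = [(ranked[i], ranked[-(i + 1)]) for i in range(len(ranked) // 2)]
print(pairs)  # [('Ann', 'Ben'), ('Cara', 'Drew')]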

I believe that listening is allowing me to take a step forward towards answering a key question the article posed:
"Do I really know enough about the understanding of my pupils to be able to help each of them?"
Formative assessment guides instruction
In addition, it led me to reaffirm my belief that an assessment is not "formative" unless it truly guides instruction.

I need to use this quantitative and qualitative "data" (by taking a detailed look at the mistakes students made on the quiz and listening to their conversations as the students talk about their mistakes) to decide how to proceed with instruction even more in the future. Am I transparent in the way I communicate this with students, or do they think a "quiz" is merely a "mini-test?" I wonder if we too often separate "instruction" from "assessment" in our classrooms. "Today is a quiz day..." when in fact, the two should not be mutually exclusive. I came up with a new motto. It's not really earth-shattering, and it's pretty simple:
Everyday is "formative assessment day"
It is more than a change in semantics. Our assessment and instruction should be a cyclical process if we're truly using "formative assessment" techniques. In computer programming lingo, it looks something like this:
START
IF STUDENTS < "GET IT" THEN INSTRUCT
ASSESS
IF ASSESS >= "GET IT" THEN "DONE" ELSE GOTO START
I'm not 100% satisfied with either version, probably because there's no one easy way of figuring out the "GET IT" part. Standards-based grading and listening are two possible components, but the puzzle doesn't seem to be solved yet. What other components or key characteristics do you see as essential in effective formative assessment?

New tools to do old things? Or new tools for a new day?

Dr. Mark Stock asked me to guest blog for him recently at the Hope Foundation. The topic was doing "old things in new ways OR new things in new ways?" in the context of technology integration. Many of my thoughts have been influenced by the TPACK framework, an idea developed by Mishra and Koehler at Michigan State University. If you haven't read the landmark article (pdf), I highly recommend it.

Head on over and check out the post by clicking here.

What makes an "A"?

Dr. Mark Stock at the Hope Foundation was asked a thought-provoking question by one of his "pre-administrator" students.

"How are cut scores for proficiency determined?”
I am assuming that he is making reference to cut-off scores such as the 40th percentile cut-off mark for "proficient" on standardized tests for the purposes of reporting AYP.

Mark's response follows.
"....But…..the dirty little secret to setting proficiency levels is still rooted in the question, “How many do we want to fail?'"
I think Mark would agree that we (educators) don't want any students to fail. His point is obviously to answer with a question that continues the conversation in his classroom. I'd like to take it one step further as it relates to standards-based grading/reporting. As I mentioned in an earlier post, I am in the midst of transitioning to a 1-4 point scale for the purpose of reporting out student understanding of learning targets. My current scale looks like this:
4 – demonstrates thorough understanding
3.5 – high level of understanding, but with small errors
3 – demonstrates understanding, but with significant gaps
2 – shows some understanding, but insufficient for a passing grade
1 – Attempts the problem
Scale meets letter grade
Until now, I had not thought about the implications of this quasi-rubric matching up with my overall grading scale of A: 90 and up; B: 80 and up; C: 70 and up; D: 60 and up; F: 59 and below. Breaking the 1-4 scale down into percentages and then assigning a letter grade makes for an interesting discussion. Let's assume a student is being assessed on two learning targets. On one, the student demonstrates thorough understanding (4), and on the second shows some understanding, but insufficient for a passing grade (2). Assuming all learning targets are weighted equally, as they are now in my grade book, this student has earned a "C," or 6/8 learning target points. I subsequently report out that they have a "75%" level of understanding. By mastering one learning target and failing another, should a student receive a "C"?

I wonder if a better way of reporting out student learning might be by the number of learning targets they've mastered. I'm thinking about a hybrid system that might require students to have a "4" level of understanding on at least 90% of the learning targets in order to get an "A." The percentage here is up for discussion, but if that system is a step in the right direction, what would an appropriate breakdown for the other letter grades look like? Should there also be a required number of "3" or "3.5" learning targets in order to earn a given grade? In other words, should a student who has demonstrated thorough understanding (4) on 90% of the learning targets, but has significant gaps (3) on the rest, earn a different letter grade than a student who has also mastered 90% of the learning targets, but has only small errors (most likely computational, in my math course; 3.5) on the remainder?
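To see how differently these two approaches can label the very same student, here is a quick sketch (Python; the letter cut-offs are my current scale, and the 90% mastery rule is the hypothetical one floated above):

scores = [4, 2]  # the two learning targets from the example above

# Current approach: total points -> percentage -> letter grade.
percent = sum(scores) / (4 * len(scores)) * 100   # 75.0
if percent >= 90:
    letter = "A"
elif percent >= 80:
    letter = "B"
elif percent >= 70:
    letter = "C"
elif percent >= 60:
    letter = "D"
else:
    letter = "F"

# Hypothetical hybrid: count the proportion of targets mastered (a 4) toward a 90% rule.
mastered = sum(1 for s in scores if s == 4) / len(scores)  # 0.5

print(letter)                                        # C
print("A" if mastered >= 0.90 else "not yet an A")   # not yet an A

The hybrid refuses to let a mastered target average away a failed one, which is exactly the behavior the percentage conversion hides.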

I can think of many more scenarios similar to the one I just described. A quick fix is eliminating letter grades, but unfortunately that's not an option for me as a public school teacher who is required to use a student information system. The bigger question worth discussing is: what makes an "A"?

Standards-based grading & late work

An excellent comment was made on my last post. I started to reply in the comments section, but got a bit long-winded and realized it would be better illustrated in a full-fledged post.

My initial thoughts are below, but they are in no way "best practice" as I'm constantly tweaking this new way of reporting learning.

How do you grade late work?
This could be rolled into a citizenship grade or as you suggested into some sort of new learning target. A virtual colleague suggested to me that 5-10% of a student's grade be something along the lines of "responsibility" (just to play the 'grading game' and keep students accountable), perhaps giving students one point per day for coming to class prepared and ready to ask questions about the homework. Right now, I'm just recording who turns it in, how much they did and if they turned it in late. I should have also mentioned in my initial commentary that I post the answers to all "practice" assignments on the board, so there's no incentive to copy from a friend. Check out some of my previous posts on feedback for more insight on this philosophy. If/when it is entered into the grade book, I want to do it in a way (numerically, I admit this may be difficult) that clearly makes it stand apart from the learning targets. Since understanding and responsibility are not easy to pull out from the "old system," I don't want to fall into this trap again.

Another aspect of this system is allowing students to be re-assessed on a learning target if they're not happy with how they did. For example, I would give the student who got all 4's and one 2 the opportunity to come in for re-teaching, do extra problems/activities, and then eventually take a mini-assessment on that learning target. An improved score replaces the old score in the grade book so that students are not penalized for "getting it" later than others. I explained to students that re-takes must take place outside of class, are only available to students who have put forth a solid effort on the homework, must be arranged ahead of time, and may only be done once it is evident that new learning has taken place. It puts the burden on the shoulders of the student, eliminates the "anyone can re-take at anytime, so why do the homework?" mentality, and helps students understand that it takes more time on my end to create an alternative assessment, so I expect additional effort from them, too.

I hope this helps and am open to any critiques on this current practice of reporting learning.

Standards-based grading & student information systems: The beginning

In response to Evan Abbey's thoughts on assessment for learning and student information systems as it related to the Iowa Core Curriculum, I felt compelled to share my current progress towards standards-based reporting. A week ago, we started a new reporting period so I decided to make a drastic change by going (nearly) 100% standards-based reporting in two of my math classes. I had been assessing my students based on "learning targets" (standards) for a few weeks at the end of the last grading period, but had still been reporting their assessment scores as total points in our student information system.

"Old" way of reporting
Here is what the grade book looked like from my perspective in the "old" way. Notice the typical categories along the top: test, homework, quiz, etc.


Standards-based reporting
Here is a screen shot of what my grade book looks like this quarter so far. Notice the different categories along the top. The learning targets are the main (only?) area of focus.
I give students a list of the learning targets - the "big ideas" I hope they'll learn from our unit of study. Here's an example from the assessment reported above.

  1. Define and classify special types of quadrilaterals.
  2. Use relationships among sides, angles and diagonals of parallelograms.
  3. Use properties of rhombuses and rectangles.
  4. Write ratios and solve proportions.
  5. Identify and apply similar polygons.
  6. Use and apply AA, SAS, and SSS similarity statements.
  7. Find and use relationships in similar right triangles.
  8. Use the Side-Splitter Theorem and Triangle-Angle Bisector Theorem.
Each learning target is assessed on a sliding four-point scale. Multiple measures would be ideal for determining where each student's understanding is for each target, but I'm still working on how this looks in a math class. For right now, I've come up with the following scale:
4 – demonstrates thorough understanding
3.5 – high level of understanding, but with small errors
3 – demonstrates understanding, but with significant gaps
2 – shows some understanding, but insufficient for a passing grade
1 – Attempts the problem
Student response to standards-based reporting
Students in my class are now accustomed to detailed feedback in the form of 1-4 scores on each learning target; however, a few were a bit hesitant when I told them that beginning in the fourth quarter, all tests would be recorded by learning target ONLY in the grade book. Their parents would now see (via the student information system) a breakdown of their current level of knowledge by learning target rather than seeing only a single score.
One student chimed in, "I don't want my parents to see that I got an F on one learning target."
He was making reference to the fact that he had 4's on all learning targets with the exception of one. I explained that previously using the "old" system, his score would have been recorded as 30/32 or 94%. Now he would see seven 4's and one 2 in the grade book - still 94% overall if calculated by points, but the standards-based reporting approach makes his learning more transparent to me, him and his parents. Parents now see that their son has a solid understanding of seven key ideas, but has failed to demonstrate a sufficient knowledge for a passing grade for one learning target. The student wasn't convinced that this was better for him, because he did not want to face the possible repercussions at home for a "failing" grade!
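For the record, here is that arithmetic written out (a small Python sketch using this student's scores), contrasting the single "old" number with the per-target view his parents now see:

targets = [4, 4, 4, 4, 4, 4, 4, 2]  # seven 4's and one 2

# "Old" system: collapse everything into one number.
points, possible = sum(targets), 4 * len(targets)
print(points, "/", possible, "=", round(points / possible * 100), "%")  # 30 / 32 = 94 %

# Standards-based view: the same scores, reported target by target.
for number, score in enumerate(targets, start=1):
    print("Learning target", number, ":", score)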

Looking ahead:
You'll notice that my grade book only includes scores for learning targets and no longer homework or quiz scores. Here is my thought process on adding homework and quiz scores:

Homework and quizzes are both "practice" or "formative assessment" opportunities and should not negatively impact a student's overall "grade."

Consider Student A who bombs the homework, bombs the quiz, but "gets it" between the formative assessments and the test (assessment that "goes in the grade book"). Now consider Student B who "gets it" on the homework, aces the quiz and aces the test. In the old system, Student A is penalized for not "getting it" early, so his/her grade suffers. Student B's grade looks outstanding because he/she "got it" right away and earned all of the points along the way. If Students A & B both have the same level of understanding, I believe that what is reported out about these two students should be consistent as well.

At the current time, I am keeping track of students' homework completion and learning target scores on quizzes (formative assessment that drives my instruction), but am not sure if/how I will add these to the grade book. As I figure out how best to play the "grading game," I think I may need to somehow record the homework scores in the grade book to give the students an external incentive to do it. Many, but not all, students see the connection between completing the practice (homework) and formative assessments (quizzes) and the positive impact it has on understanding/learning. On the flip side, I don't want to simultaneously report out "understanding" via learning targets and "effort" via homework completion, because I fear it is a step backwards towards the "old" system and it also confuses stakeholders along the way. I have heard of other schools using citizenship grades, which I think lends itself nicely to separating the reporting of understanding and responsibility.

This new way of reporting learning has been a time-consuming, but rewarding process. It is also a work in progress that will surely need to be tweaked along the way. I'm open to any suggestions/comments from those who may also be stepping up to the challenge of standards-based reporting.

The Assessment Dentist

I had my biannual trip to the dentist yesterday. Aside from a half day off from teaching, it gave me some extra time to think about MeTA. This may be a stretch, but I think there are lots of connections between assessment practices and going to the dentist. Let me explain:

As I sat in the chair listening to the hygienist lecture me on flossing techniques (she was right!), I was reminded of what makes assessment useful - meaningful feedback. If I went to the dentist and all I got was a good cleaning and a few shots of lead injected into my teeth, I wouldn't be bettering myself. My problems would seem "fixed" temporarily, but my habits would inevitably stay the same. The plaque on my teeth, which I believe beautifully represents students' misconceptions, may go away with a few shots of red ink, but by the next visit (assessment?!), it will surely find its way back. We're doing the same thing to our students when we simply "check" their work, note the "correct" way and add another score to the grade book.

A worthwhile dental visit "educates."
My hygienist gave me flossing tips because she genuinely wanted to help me take better care of my teeth. She wanted me to come back next time with as little plaque as possible. As teachers, we must help our students debug their own work. Spewing out correct answers isn't enough. Understanding where a student's thinking is coming from can be the first step towards helping him/her to get to the next level along the continuum of learning.

Look in the mirror.
Our goal should be to help students look in the mirror (a la self-assessment) and see where their own mistakes lie. Yes, a dental visit typically reveals much plaque, but it doesn't have to be the only time/place where plaque is identified. Continuing with the same theme, educators should continually be looking in the mirror to identify poor assessment tools and revise them accordingly.

Brushing often?
I go to the dentist two times per year, but I brush my teeth twice daily. How often are we assessing our students? How meaningful is our "brushing" and "flossing"...is it getting rid of the plaque? After all, it is possible to brush and floss without removing much plaque. When we get our new toothpaste, toothbrush and floss at the end of the visit, have we been inspired to use it? I wonder if our assessment often hinders, rather than helps, our students in making meaningful changes in the way they think, not only about the important concepts, but about school in general.

Clean teeth expectation.
Patients expect to have clean teeth when they leave the dentist's office. What do our students expect to gain from their assessments? What do we as educators expect to learn from our assessments? In the midst of rolling out a more "standards-based" reporting system, I'm realizing that my students are shocked to receive feedback in the form of multiple scores rather than a single number or grade describing their performance. This form of reporting makes the academic expectations of students much more transparent. I look forward to the day when students expect detailed feedback on their work - more than just a single score or letter grade. I admit that breaking this "tradition" of schooling can be a difficult task.

Have you or your students been to the assessment dentist lately? What are your personal "assessment cavities?"

PLN as differentiated professional development model

I've been thinking about personal learning networks as an ideal model for differentiated professional development. A complaint I often hear regarding in-service time is "how can I use THAT in my classroom?" I brought this up at our building leadership team meeting this week and responded with a question of my own:

What if EVERY professional development session was catered to your interests?
After all, if we're supposed to be differentiating content, process and product in our classrooms, why shouldn't we expect the same from our in-service time each year? I used this Educational Leadership article by Bill Ferriter as an example of how it might be done through the use of personal learning networks (PLNs). I'm happy to report that my building principal thought it was a great idea and asked me to present it at the next morning's staff meeting.

I knew that my five minute allotment at the meeting wasn't going to be enough to "sell" the idea and definitely not enough time to thoroughly explain the complexities of what a PLN might look like. Instead, I decided to give a "teaser" to the staff to see if I could recruit five staff members who might be interested in giving the idea a try. Twenty-four hours later, the fifth volunteer approached me as I walked into work to make a few copies. I'm in the midst of meeting with each of the five educators and helping them set up their own quasi-PLN through the use of Google Reader and various edublogger RSS feeds. Other social networking tools such as Twitter, the Ning I set up earlier this year, and subscribing to journals via Ebsco's RSS feed will be "phase two" for these individuals if all goes well.

Furthermore, I have the green light to use our May in-service time to get all of the staff members in my building set up with their own personal learning network. I'm hoping the five volunteers will not only be able to help with their newly acquired technical expertise, but that their enthusiasm toward the idea will spill over between now and then as well. In my previous attempts to diffuse innovations, my interpersonal communication has been lackluster at best. My hope is that by interacting with these five over the next month, the differentiated professional development model might catch on and be something that we can spend even more time developing next academic year. Look for more posts in the future as this model develops over the next month or so.


Note: This is not my original idea. Others, including Evan Abbey at Heartland AEA have suggested using PLNs as professional development. This is my humble attempt at documenting and sharing the idea as it unfolds so that others, too, may hopefully benefit someday.

Burned? or More of the same?

From an article that recently ran in T.H.E. Journal:

"One of the biggest reasons we face resistance is because so many times we give instruments to the teacher with no follow-up or no training...I don't think fear is the right word. Some teachers have been burned by technology in the past. They used it and found it was either not great or incomplete, or whatever, and so they're not interested in trying again."

-Tom Nolan, curriculum support specialist for the Albuquerque Public Schools
I've seen plenty of educators with this mindset. They may have tried out a cool new software application or website once upon a time, but the technology just didn't "work" for them. Maybe the laptops were low on battery power. Maybe the internet was slow or down that day. Maybe the software itself had glitches or didn't work like it was advertised. Perhaps the comfort level just wasn't there with the tool to begin with, and this lack of confidence spilled over into the instruction, causing students to be turned off. Regardless of the issue, it left a sour taste in his/her mouth. The first impression was sour to the point of no return.

I've seen another angle to this problem. Marc Prensky's 2005 Edutopia article sums up a four step process to technology integration that leads to this point:
  1. Dabbling.
  2. Doing old things in old ways.
  3. Doing old things in new ways.
  4. Doing new things in new ways.
When an educator sees technology as simply "play" on the student's end (#1) or not improving their instruction (#s 2 & 3), why would they care about trying out Moodle, a Flip camera or The Geometer's Sketchpad? Their response to "technology integration" is the same as it's been to any other professional development experience: It's more of the same "stuff" I don't need in my classroom.

An academic recently posed a challenging scenario to me. It went something like this:
"Imagine I am a teacher down the hall from you at your building. I think my teaching strategies are pretty good. In fact, I've been teaching for a while and my students seem to be learning a lot, too. Why should I change?"
In light of teachers being burned by technology in the past and seeing it as more of the same, what answer would you give to the "teacher down the hall?"

Towards better metacognition - Debugging

(This post is the third in a series based on metacognition as a way of improving classroom assessment and instruction)

Debugging concepts vs. procedures

"Outstanding teaching requires teachers to have a deep understanding of the subject matter and its structure, as well as an equally through understanding of the kinds of teaching activities that help students understand the subject matter in order to be capable of asking probing questions." (Bransford, 2000, p. 188)
When educators are focused on conceptual understanding, debugging can become an effective means of incorporating metacognition in the classroom. With a shallow level of content and pedagogical knowledge in my earlier years of teaching, I found debugging to only happen from a procedural perspective. I was more inclined to point out a step that a student missed in his/her computation and/or algebra rather than stepping back and analyzing the "bigger picture" misunderstanding behind the student's misconceptions. With these thoughts in mind, I embarked several days ago on changing the way I help students revise their formative assessments.

Debugging Opportunities
In my quest towards standards-based grading, I have been conducting formal formative assessments every few days (and without much resistance from my students either, I might add!) and reporting progress by standard. One of the many reasons I believe in standards-based grading is that it gives feedback to students on the specific standards ("learning targets") they "get" as well as the learning targets they may still need more work on.

Students took the quiz one day, and the following day I paired up students according to their relative strengths and weaknesses on the learning targets and set them loose for about five minutes to ask questions of each other. I'm starting to make this a regular part of my formative assessments. Compare this with my previous practice of marking them up heavily with red ink and verbally regurgitating the preferred solution to commonly missed problems - an exercise that not only bored the students, but never seemed to produce positive results either. Several minutes into this diagnostic session, a really neat conversation was initiated. Two groups were eager to get my attention with similar inquiries:
"Mr. T...we don't understand how to do this problem - we did the same thing! Can you help us?"
This is the type of question that gets a math teacher chomping at the bit! Two typical responses come to mind...
Scenario #1: That's a great question. Let me show you how to get the right answer...
This scenario is what I call a return to the teacher-as-answer-holder system. In my previous post, I laid out several problems with this approach. It takes the responsibility of "thinking" away from the student and places it in the hands of the teacher. Hopes for metacognition and student-initiated debugging are quickly squashed.

Scenario #2: Wow. I'm so glad you asked. (Insert probing question)
This scenario is the one we should be aiming for, but as Bransford suggests, this process requires a deep understanding of the subject matter in order to ask questions that will help students overcome their misconceptions - a skill I admit improves daily with practice. This is when the debugging begins on the student's part! Colleagues lament that this is the ideal behavior we wish all of our students would exhibit, but seldom do. I believe the keys to this practice of encouraging students' debugging are wrapped up in connecting collaboration with explicit opportunities for revision. We must teach and value revision through a debugging process. Not only must students be exposed to and practice debugging, the educator must also have a general idea of the common misconceptions students will face for any given standard/lesson.
"Expert teachers know the kinds of difficulties that students are likely to face, they know how to tap into students' existing knowledge in order to make new information meaningful, and the know how to assess their students' progress." (Bransford, 2000, p. 45)
Just as parents remind their children not to touch a hot oven, children must also be taught how to properly use a hot pad when navigating those same steamy temperatures. The quote above reminds me of the importance of modeling to students "how not to do" something as well, so that they are better able to debug their own academic work. I will never forget my fifth grade teacher who would often purposefully write sentences with mistakes on the board or solve math problems in an incorrect way and wait for a student in the class to notice. He modeled (and expected) debugging in his classroom! This was a one-of-a-kind occurrence during my K-12 years. I only hope that I can provide this same memorable (and academically stimulating) experience for my students.

A virtual colleague of mine deserves credit for a recent post in which he asked students two thought-provoking questions:
  1. What do you want to be able to do a week from today that you can't do now?
  2. What can I do to help you get there?
Kudos to Mr. G over at TAGmirror for putting metacognition into practice! Perhaps some debugging will follow as the students come back with plans that potentially need revision.

Is "debugging" a natural part of your classroom practice? What type of debugging events are you planning for the remainder of the year?