I appreciate fellow math educator and edu-blogger David Cox and the way he consistently peppers me with questions about my assessment practices on this blog.  Most recently David asked:

"Do you offer a pre-test before you begin instruction on the learning target(s)? If so, what do you do with the student who aces the pre-test while you are going through the initial instruction?"
I replied to David briefly in the comments, but a proper explanation warrants a much more thorough response.  Nearly every educator I know is familiar with the idea of "using data to drive instruction."  I'd like to describe two experiments (let's call them "action research projects" to make them sound more professional) currently in the works in my classroom.


Using data to assess relative strengths and weaknesses of an entire class
At the beginning of the most recent unit of study, students took a thirteen-question multiple-choice "pre-test" to assess their current level of understanding of surface area and volume.  I provided all of the necessary formulas and attempted to pair two questions with each learning target.  While multiple-choice questions are not the norm in my class and present some obvious issues with guessing correct answers, I went this route to allow for a quick turnaround.  Because the pre-test was a new idea, I prefaced it to my students as an opportunity to find out what they already know and what they need more help understanding.  The worst-case scenario from the students' perspective, I explained, was not taking the assessment seriously and in turn suffering through an entire unit full of ideas and concepts they already understood.  That incentive seemed to make sense to them, because I did not observe any dot patterns or hear any grumbling.
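For context, the unit leans on the standard right-prism and regular-pyramid formulas.  The actual formula sheet isn't reproduced here, so the set below is an assumption about its contents, with P for base perimeter, B for base area, h for height, and ℓ for slant height:

```latex
% Standard formulas for right prisms and regular pyramids
% P = base perimeter, B = base area, h = height, \ell = slant height
\begin{align*}
\text{Prism:}   \quad L &= Ph,                & S &= Ph + 2B,               & V &= Bh \\
\text{Pyramid:} \quad L &= \tfrac{1}{2}P\ell, & S &= \tfrac{1}{2}P\ell + B, & V &= \tfrac{1}{3}Bh
\end{align*}
```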

I have several ideas in mind for utilizing the pre-test data:

  1. Just as I explained to my students, if a learning target appears to be well understood by many on the pre-test, I will spend very little time teaching that concept.  On the flip side, a learning target that many students struggled with gets flagged in my mind as "make sure we spend plenty of time discussing this target and any misconceptions."  (The sketch after this list shows one way to tally this.)

  2. I will be giving an alternate form (same questions, different order) to the students at the end of the unit, in conjunction with the usual open-ended summative assessment.  I will be teaching one class using the traditional textbook sequence: lateral & surface area of prisms, lateral & surface area of pyramids, volume of prisms, volume of pyramids, etc.  The second class will be taught using an approach I've always wanted to try: teach the difference between lateral area, surface area, and volume, followed by every formula, in one 84-minute block.  The next few class periods will then be spent working through many practice problems, identifying which formula to apply, and addressing any misconceptions that arise.  Using the pre-/post-test difference data between the two classes, in conjunction with several other data points to be determined, I would like to find out which approach is more effective (the mean_gain function in the sketch after this list is the core of that comparison).  Look for a future post describing the results.
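To make both ideas concrete, here is a minimal sketch of the two aggregations, written in Python.  Everything in it is hypothetical for illustration: the question-to-target mapping, the fake class data, and the function names are inventions for this post, not an actual scoring system.

```python
from collections import defaultdict

# Hypothetical mapping of the thirteen pre-test questions to learning
# targets; the real pairing (roughly two questions per target) isn't
# listed here, so this is an assumed layout.
QUESTION_TO_TARGET = {
    1: "lateral area of prisms",    2: "lateral area of prisms",
    3: "surface area of prisms",    4: "surface area of prisms",
    5: "lateral area of pyramids",  6: "lateral area of pyramids",
    7: "surface area of pyramids",  8: "surface area of pyramids",
    9: "volume of prisms",         10: "volume of prisms",
    11: "volume of pyramids",      12: "volume of pyramids",
    13: "volume of pyramids",  # the odd thirteenth question
}

def target_correct_rates(class_responses, mapping=QUESTION_TO_TARGET):
    """Idea 1: fraction of correct answers per learning target.

    class_responses is a list with one dict per student, mapping
    question number -> True/False for a correct/incorrect answer.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for student in class_responses:
        for question, is_correct in student.items():
            target = mapping[question]
            total[target] += 1
            correct[target] += is_correct  # True counts as 1
    return {target: correct[target] / total[target] for target in total}

def mean_gain(pre_scores, post_scores):
    """Idea 2: average post-test minus pre-test score for one section."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

if __name__ == "__main__":
    # Two fake students: one who missed every odd question, one who aced it.
    fake_class = [
        {q: q % 2 == 0 for q in range(1, 14)},
        {q: True for q in range(1, 14)},
    ]
    for target, rate in sorted(target_correct_rates(fake_class).items(),
                               key=lambda item: item[1]):
        print(f"{rate:5.0%}  {target}")
```

Targets near the bottom of that printout get the bulk of class time; targets near the top get a quick review.  At the end of the unit, comparing mean_gain for the textbook-sequence class against the all-formulas-up-front class gives a rough per-section difference score, though with only one section per approach it will be suggestive rather than conclusive.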


Using data to differentiate instruction and assignments for individual students
My building has a goal of increasing students' computation scores as reported on standardized tests.  Our department selected Geometry as the class in which to focus on skills such as long division, dividing mixed numbers, multiplying decimals, and simplifying radicals, all without a calculator.  The system we came up with was to administer pre- and post-tests at the beginning and end of the course and to focus on two skills each week.  Typically the skill remediation involved a five-to-ten-minute re-teaching session during class followed by a self-scoring drill-and-kill worksheet (DaK for short).  At the end of each week, a short six-question quiz provided an interim picture of students' progress on the two skills.  As the semester progressed, though, the completion rate of the DaK worksheets declined, as it almost always does, and the students who needed the practice the most tended to be the ones who chose not to do it.  The system had the best of intentions, but it just wasn't working.

Two weeks ago, I decided to give the quiz first.  Students who successfully demonstrated an understanding of the concepts were exempt from the DaK worksheet, and their quiz score was recorded.  All other students were given the DaK and were only permitted to take the post-quiz once their DaK was completed.  New evidence of learning replaced old evidence in the grade book.  Students love this new idea so far!  It works for them because there is now an incentive to do well on the first quiz, and the DaK worksheet only has to be completed if a genuine need exists.  It also works for me, because during the five- or ten-minute re-teaching session, students who earned mediocre quiz scores are hungry to learn.  Students who have already mastered the concept can be pinpointed as additional resources for struggling learners.  I admit that this system works partly because the instructional time is limited and it only happens once per week.  If I could figure out a way to generalize this idea of differentiation and manage classroom behavior all at once, I could probably write a book about it and retire.  :)
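For anyone who wants the mechanics spelled out, here is a minimal sketch of the quiz-first routine for a single student in a single week.  The passing cutoff and the record format are assumptions for illustration, not our department's actual policy:

```python
PASSING_SCORE = 5  # hypothetical cutoff out of six quiz questions

def weekly_grade(quiz, dak_complete, post_quiz=None, passing=PASSING_SCORE):
    """Return the score that goes in the grade book for one skill week.

    quiz          -- score on the six-question first-attempt quiz
    dak_complete  -- whether the DaK worksheet was finished
    post_quiz     -- score on the retake, or None if not yet taken
    """
    if quiz >= passing:
        # Demonstrated understanding up front: exempt from the DaK.
        return quiz
    if not dak_complete or post_quiz is None:
        # The retake is locked until the DaK is done; the first
        # quiz score stands in the meantime.
        return quiz
    # New evidence of learning replaces old evidence -- the retake
    # overwrites the original score rather than averaging with it.
    return post_quiz

# A student scores 3/6, completes the DaK, retakes, and earns 6/6.
assert weekly_grade(quiz=3, dak_complete=True, post_quiz=6) == 6
# An exempt student keeps the first score and skips the worksheet.
assert weekly_grade(quiz=6, dak_complete=False) == 6
```

The overwrite in the last return is the whole point: the recorded grade reflects the most recent evidence, so the retake is worth taking seriously.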

Are these examples of data-driven instruction?  How are you using data to differentiate for your class as a whole as well as for individual students?