This article was originally posted on Getting Smart on December 18, 2012
The irony is hardly lost on anyone when, at education-related professional conferences, educators sit in the audience while experts lecture them about how to teach as a guide-on-the-side rather than a sage-on-the-stage. It is a “do-as-I-say-not-as-I-do” moment that often has even the lecturer chuckling. The habits, traditions, and structural constraints of conferences make such absurdity inevitable, and we all tend to take it in stride with a dose of self-deprecating humor. Back in classrooms, buildings, and districts, however, similar habits, traditions, and constraints have a much more serious, and less obviously absurd, effect.
On the one hand, educators are being asked to help students become question-creators and knowledge-seekers rather than compliant order-takers and rote performers. Educators are being asked to help students develop integrated skill sets that are richer and deeper than what can be assessed by a single high-stakes test. On the other hand, we are asking educators to accept measures of “accountability” that turn educators themselves into rote performers, continually drilling their students to improve their test scores.
The most simplistic accountability measures translate directly into student test scores: Good teachers are the ones whose students score well. Recognizing that this metric could be unfair, since many students start the school year academically behind, many places now use “student growth” to assess teachers instead: Good teachers are the ones whose students increase their test scores by one grade level every year. So instead of teaching to a single test, the teacher is now teaching to multiple tests, depending on how many different starting points are represented among the students at the beginning of the year. If teaching to a single test has crowded out more authentic learning experiences that lead to higher-order thinking, how much more will teaching to “student growth” distort learning in classrooms where many students are already behind by differing amounts?
Another challenge with “student growth” as a metric of accountability is that a student’s ability to demonstrate a year’s worth of academic growth depends on factors beyond his or her academic starting point. Is the student emotionally, physically, cognitively, and socially ready to learn? What supports beyond the teacher does the student have? If a teacher helps a student gain the social and emotional growth that makes him or her more prepared to learn, is that less valuable, or less valued, than drilling the student on sample tests? Which will benefit the student more in the long term?
In theory, sufficiently sophisticated measures of student growth combined with other measures of teacher effectiveness could address some of these fairness issues. The measures would account for student needs beyond academic instruction. The measures would account for the number and severity of disadvantages that show up in a given classroom. Measures of an educator’s skill in using high quality practices in the classroom could augment the measures of student growth, and some states, districts, and schools are attempting just this. Theoretically, a factory “quality-control” model of educators is possible. Eventually, we can imagine, a principal might sit in his office watching scores, giving more money to teachers with high scores and firing the ones whose scores are too low – just as in any other business.
Even if it were possible to create accurate measures within a representative and predictive quality-control model (which is the kind of absurdity that calls for those self-deprecating chuckles), this approach completely misses the point and runs counter to every meaningful definition of personalized education. In my next post, I’ll explain why.