Value Added Blues
More articles are appearing on the statistical problems with value-added testing, including this one from Mathbabe, bluntly titled “The Value Added Teacher Model Sucks.” A graph in the middle of her post is preceded by the following explanation: “(Here’s) a scatter plot of scores for the same teacher, in the same year, teaching the same subject to kids in different grades. So, for example, a teacher might teach math to 6th graders and to 7th graders and get two different scores; how different are those scores? Here’s how different:”
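One plausible reading of that diffuse scatter is that the scores are mostly noise. Here is a minimal simulation (hypothetical numbers, not the actual value-added model) showing that when classroom-level noise swamps a teacher’s “true” effect, two scores for the same teacher in the same year barely correlate:

```python
# Illustrative sketch: each teacher has a "true" effect, but each class's
# value-added score adds independent noise that is twice as large. The two
# scores for the same teacher then correlate only weakly.
import random

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

teachers = [random.gauss(0, 1) for _ in range(2000)]    # true effect, sd = 1
score_6th = [t + random.gauss(0, 2) for t in teachers]  # 6th-grade class, noise sd = 2
score_7th = [t + random.gauss(0, 2) for t in teachers]  # 7th-grade class, noise sd = 2

r = pearson(score_6th, score_7th)
print(f"correlation between a teacher's two scores: {r:.2f}")
```

Under these assumed variances the expected correlation is signal variance over total variance, 1 / (1 + 4) = 0.2 — a cloud, not a line.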
One of the comments on her post included a link to this Washington Post article, which told the story of a teacher who was fired because 50% of her evaluation was based on her students’ value-added scores. What makes this teacher’s story especially unnerving is that the majority of the baseline data used to calculate her “added value” came from students whose test scores are under investigation because of a high number of erasures on their answer sheets. Ay yi yi!!!
Finally, there’s this article from the NYTimes describing how two teachers at a high-flying school in Brooklyn scored abysmally on the value-added algorithm because their students “slipped” from a 97% mastery level to 89%. Given how these tests are designed, that slip means the students missed, on average, less than one additional question. While teachers of students at the low end of the bell curve can realize huge gains when a student makes modest progress, teachers of students at the high end of the curve can suffer correspondingly huge drops in “performance.” And though it is clear the emperor has no clothes, countless dollars will be spent trying to fix these flawed assessments instead of devising metrics that would actually improve teaching.