Here’s a list of Pearson’s errors in administering standardized high stakes tests compiled by FairTest and blogged by Diane Ravitch. The list is as unsurprising as it is long.
The first course I took as a graduate student in educational administration in 1970 was on test construction. To get us started on understanding the flaws in standardized tests, the instructor distributed copies of the Stanford Achievement Test and asked us to find five errors in the construction of the questions after reading Chapter One of the assigned text. At the time the local Philadelphia newspapers used the Stanford Achievement Test results to “rank” schools in the city, adding credence to the test’s validity and precision. In all, the test had roughly 75 questions… 13 of which were poorly constructed based on flaws described in Chapter One. In some cases there were two correct answers, and in other cases there was no clear correct answer. Needless to say, I’ve been a skeptic of the “precision” of standardized testing ever since.
As a middling Facebook user, I was interested to read about a recent study demonstrating that readers’ moods were affected by the news feeds they received. Those receiving upbeat news feeds were demonstrably happier than those who received negative news feeds. While many news outlets engaged in hand-wringing over this, Cathy O’Neil, aka The Mathbabe, was elated:
It’s got everything a case study should have: ethical dilemmas, questionable methodology, sociological implications, and questionable claims, not to mention a whole bunch of media attention and dissection.
By the way, if I sound gleeful, it’s partly because I know this kind of experiment happens on a daily basis at a place like Facebook or Google. What’s special about this experiment isn’t that it happened, but that we get to see the data. And the response to the critiques might be, sadly, that we never get another chance like this, so we have to grab the opportunity while we can.
Of course she’s right that this kind of study happens daily: it HAS happened for decades in advertising agencies trying to find ways to connect with consumers, and it was a source of deep concern for George Orwell in his analysis of Hitler’s rise to power. I think the study overlooks a paradox of technology: the more media outlets there are, the less willing people are to consider another individual’s or group’s perspective. That led me to make the following comment:
Here’s another hypothesis this study might support: the customization of news feeds has contributed to the polarization of politics in our country. There was a time when only three major news sources were available to people on a daily basis, and the news they provided was governed by a fairness doctrine. The segmentation that began with cable TV has increased with the internet, making it possible for people to get, for example, “Christian News”. This segmentation leads to a situation where one’s worldview is constantly reinforced, making it harder for open-mindedness to prevail.
Her short post links to the study itself, which was an interesting read. The bottom line from my perspective is that we need to include mindfulness in schools as soon as possible so people can gain a clearer understanding of how their minds work.
This could just as easily have been the title of Canadian Jonathan Gatehouse’s Maclean’s article, which was more politely titled “America Dumbs Down”. The article, which offered several examples of legislation designed to deny scientific truths dealing with evolution, climate change, and gun control, had one paragraph that jumped out at me:
The term “elitist” has become one of the most used, and feared, insults in American life. Even in the country’s halls of higher learning, there is now an ingrained bias that favours the accessible over the exacting.
Nowhere is this “accessible over exacting” bias more evident than in public policy regarding the rating of education, where state after state has succumbed to single-letter rankings for schools… or magazines that develop seemingly exact rankings of colleges based on mathematical formulae… or value-added ratings for teachers based on the comparative scores of tests administered once a year to students. Instead of a complicated means of evaluation that includes human judgment, everything in education is increasingly reduced to a single number or rating that is “easy to understand” but muddled. Our desire to make everything “easy to understand” is making us all simple.