I have held college rankings in disdain for years. They are reductionist to an extreme, measuring the easy-to-measure elements that differentiate one college from another and, because the metrics are mathematical, yielding seemingly exact numeric differentials among colleges and universities that are, upon close inspection, inconsequential. For example, the difference between the top-ranked and second-ranked college in one category (say, engineering schools) may not be the same as in another category (say, art schools), and the numeric difference between the third- and fourth-ranked colleges may be no different than the difference between the fourth- and sixteenth-ranked colleges within a category.
Despite my misgivings about rankings, it is evident from the sales US News and World Report experiences when it publishes its annual ratings that most Americans love them… and based on the way colleges respond to the rankings it is evident they are valued by prospective students and their parents. I do believe, however, that the US News and World Report rankings are seldom deal-breakers or deal-makers when it comes to students making their final decision. They may save a cross-country trip to visit a campus, but I doubt that a student with acceptances to two or three schools refers to them to make his or her final decision… but they DO sell magazines, they DO generate lots of faculty room and coffee-klatch conversation and, if a college is highly rated, they generate lots of calls and mailings to alumni.
Given my misgivings, I was dismayed when I read that President Obama is advocating a rating system for colleges that incorporates some kind of cost/benefit analysis… and, in response to this emerging trend from the US government and the profits realized by US News and World Report, more and more media outlets are jumping on the rating bandwagon, including Money Magazine. Kevin Carey reported on this development in an article that appeared in Tuesday’s NYTimes Upshot section… and the title of the article tells you all you need to know about the rating system: “Building a Better College Rating System. Wait! Babson Beat Harvard!” With an undergraduate degree from Drexel University and two graduate degrees from the University of Pennsylvania, I can testify to the fact that there is NOT that much difference in rigor between a “middle tier” university and an “elite” Ivy League school… and I can also testify to the fact that what separates the two institutions defies a simplistic mathematical metric like “earnings after 10 years”, no matter how sophisticated the weighting of various exogenous factors…
Measuring quality is a difficult proposition and, in my judgment, not worthwhile. But it is relatively easy to identify and regulate institutions that mislead prospective entrants, fail to support enrolled students, employ unqualified and/or underpaid staff that turn over frequently, and have abysmal graduation rates. The time and money spent developing arcane statistical calculations to create gradations between good and excellent schools would be better spent aggressively monitoring those institutions that are profiteering at the expense of gullible students.
Today’s NYTimes Magazine features an article by Elizabeth Green titled “Why Do Americans Stink At Math?”, an article well worth reading because it provides a good description of what it would take to make Americans perform at a higher level, but one that underemphasizes or overlooks some of the subtle reasons that contribute to our deficiencies.
Ms. Green contrasts the Japanese methods of teaching mathematics with those used in the US, focusing on Akihiko Takahashi, an education reformer from Japan, and Takeshi Matsuyama, an elementary teacher affiliated with a university-based lab school who was his mentor. Together, they transformed mathematics instruction in Japan. Like Deming before them, Takahashi and Matsuyama implemented the recommendations of US experts, recommendations that our country rejected because they did not fit the hierarchical “factory model” of management that blinds us to new and different ways of thinking. Surprisingly, Ms. Green overlooked this parallel to Deming’s experience, a pattern that continues to limit our ability to innovate.
Ms. Green also contrasts the Japanese method of teacher training, which is ongoing and organic, with the virtual absence of training in our country. Instead of stand-alone workshops or the accumulation of graduate credits, Japanese teachers engage in “lesson study”, time provided for teachers to meet to discuss their teaching methods and to observe each other’s instruction. But she fails to emphasize the funding that would be required to give teachers the time needed for lesson study, nor does she note the shift in thinking that would be required to move away from our credential-based method of measuring teacher learning, a method that is often based on seat time.
As one who led school districts from 1980 through 2011, I saw two other factors that Ms. Green overlooked or underemphasized: our country’s obsession with standardized tests and the unwillingness of parents and school boards to accept “non-traditional” ways of teaching mathematics and scheduling teacher time.
Ms. Green described how the emphasis on standardized tests reinforces “traditional” methods of teaching when she noted that while “…lesson study (in Japan) is pervasive in elementary and middle school, it is less so in high school, where the emphasis is on cramming for college entrance exams”. In our country, the emphasis is on cramming for examinations from the very outset… and that emphasis is deleterious, especially since, to date, standardized tests have NOT measured the kinds of mathematics instruction valued by NCTM: they have focused on the “skills” traditionally taught to today’s parents and school board members, skills that are easy to test (see yesterday’s post for evidence of this).
Ms. Green made no mention of how any effort to introduce “non-traditional” methods of mathematics instruction meets with resistance from parents who complain that “they can’t help their children with homework” because they “don’t understand” the work assigned. And when that attitude is combined with our obsession with test scores, if the scores don’t jump immediately the “new math” books are soon abandoned in favor of the worksheets that match the tested curriculum, and the meme about the “failure of new mathematics” is reinforced.
School boards not only face resistance from parents, they also face budget challenges, which can pose the biggest obstacle to introducing innovation. When administrators contemplate implementing something akin to “lesson study”, they need to hire additional staff to provide release time for teachers to engage in such a program. One way to provide more release time is to increase class sizes (Japan has much larger class sizes than the US), a recommendation that flies in the face of conventional wisdom in the US and meets resistance from teachers as well as parents.
Finally, as noted repeatedly in this blog, we need to stop thinking of our schools as factories that pour information into students who progress along an assembly line in lockstep based on their age and whose progress is measured by standardized tests and hours spent in the classroom. The bottom line: until we stop thinking of our schools as factories we will see no meaningful change or improvement.
Here’s a list of Pearson’s errors in administering standardized high stakes tests compiled by FairTest and blogged by Diane Ravitch. The list is as unsurprising as it is long.
The first course I took as a graduate student in educational administration in 1970 was on test construction. To get us started on gaining an understanding of the flaws in standardized tests, the instructor distributed copies of the Stanford Achievement Test and asked us to find five errors in the construction of the questions after reading Chapter One of the assigned text. At the time, the local Philadelphia newspapers used the Stanford Achievement Test results to “rank” schools in the city, adding credence to the test’s validity and precision. In all, the test had roughly 75 questions… 13 of which were poorly constructed based on flaws described in Chapter One. In some cases there were two correct answers, and in other cases there was no clear correct answer. Needless to say, I’ve been a skeptic of the “precision” of standardized testing ever since.