
Fordham Institute’s Ratings of the Ratings Underscores One of ESSA’s Biggest Flaws

November 15, 2017

I read with dismay the Fordham Institute’s recent analysis, written by Brandon Wright and Michael Petrilli, of each state’s rating system for public schools. The assessment itself was a problem for me, but its framework was drawn from ESSA’s language, which in turn rests on the premise that public schools are a commodity that can be rated like motels, automobiles, and restaurants. Here’s a synopsis of the Wright/Petrilli algorithm used to assess each state’s accountability plan:

The Every Student Succeeds Act grants states more authority over their accountability systems than did No Child Left Behind, but have they seized the opportunity to develop school ratings that are clearer and fairer than those in the past? Our new analysis examines the plans submitted by all fifty states and the District of Columbia, and whether they are strong or weak (or in-between) in achieving three objectives:

  1. Assigning annual ratings to schools that are clear and intuitive for parents, educators, and the public;
  2. Encouraging schools to focus on all students, not just their low performers; and
  3. Fairly measuring and judging all schools, including those with high rates of poverty.

Their overall findings are summarized in bullet form below (my emphases added):

Key findings include:

  • Thirty-four states—67 percent—received a “strong” grade for using clear and intuitive ratings such as A–F grades, five-star ratings, or user-friendly numerical systems. These labels immediately convey to all observers how well a given school is performing, and are a major improvement over the often Orwellian school ratings of the NCLB era.
  • The country is also doing much better in signaling that every child is important, not just the “bubble kids” near the proficiency cut-off. Twenty-three states earned strong grades on this objective, and another fourteen earned medium marks.
  • There is somewhat less progress when it comes to making accountability systems fair to high-poverty schools. Only eighteen states are strong here. But twenty-four others earn a medium grade, which is still an improvement over NCLB.

The fact that Wright and Petrilli place a high value on ratings such as A–F grades, five-star ratings, or user-friendly numerical systems means that they simultaneously place a high value on anything that can be measured numerically and devalue any element of schooling that cannot be reduced to a number. This would likely contradict their second finding in any state that places a high value on standardized testing, since the best way for a school to improve its standardized test scores is to target the so-called “bubble kids” near the proficiency cut-off.

In assessing the states that received low marks on their grading systems, Wright and Petrilli show their true colors: their real intent is for states to use some form of rank ordering, despite the fact that ESSA does not mandate such an approach:

On the flip side, three states received weak grades in each of the three areas: California, Idaho, and North Dakota. They rely on proficiency rates, don’t emphasize student growth, and propose using a dashboard-like approach with myriad data points and no bottom line for reporting school quality to parents, beyond identifying their very worst schools, as required by federal law.

So the three states that used a nuanced, detailed approach to rating schools and identified only the lowest-performing schools received low ratings in the Wright/Petrilli algorithm, while states that settled on a simplistic rating scheme earned higher marks. One thing we’ve learned in education is that the aphorism “what gets measured gets done” is absolutely true. That aphorism created the “bubble kids,” and it created the endless gaming of the US News and World Report rankings, which ultimately reward the colleges, universities, and public schools that spend the most and punish the schools that serve first-time enrollees and/or children raised in poverty.

KISS (Keep It Simple, Stupid) is a great marketing strategy if your plan is to rank order schools and thereby “discover” that 50% of them are “failing.” If you want to improve schools for all children, you might instead seek a system that flags only the worst schools and uses a dashboard approach. My advice in examining the Wright/Petrilli algorithm: think of it as upside down.

