Beware of rankings
Over at The American Interest, Walter Russell Mead asserted a few weeks back that "when it comes to education, red states rule." He bases this finding on data collected for Newsweek's recently released high school rankings. (As it turns out, three of the top ten schools in the country are in right-to-work Texas, and two more are in Florida, also a right-to-work state.) Unfortunately, this article is just more evidence of an increasingly common education-policy trend. Far too often, statistics, scores, and school rankings are flaunted as proof of grandiose policy victories, no matter how tenuous the connection or how shaky the underlying data. Looking at Jay Mathews's rankings of the best-performing high schools in the nation, for example, the top five schools (which draw from wealthy communities or have rigorous admissions standards) cannot validly be compared to run-of-the-mill neighborhood schools. And to assert, as Mead does, that the existence of these top-tier schools settles the debate over whether right-to-work states provide better education is a bridge too far. (To be clear, my gripe isn't with Texas's or Florida's education systems, which are generally solid, but with the cherry-picking of data.) Using these rankings to draw conclusions about the quality of a state's educational system as a whole has no validity: there is no demonstrated causal link between the achievement of a state's top high schools and the overall quality of its public education system.
It's not just the Meads of the world who partake of this lazy statistical reasoning: A few months ago, a similar discussion occurred around ACT and SAT scores. This time, the point being made was that states with strong teacher unions and collective-bargaining laws (like Massachusetts and Vermont) score much higher on these college-entrance exams than states without such laws (like Texas and Florida). This point was especially played up in Wisconsin (remember that whole Wisconsin collective-bargaining travail?). Using old (and none too rigorous) data, many would-be analysts turned this loose correlation into the conclusion that powerful teacher unions produce a better education system. For the same reasons as above (too many variables, small samples of schools or students, and a disregard for "selection bias" and "control variables," among other things), this reasoning does not hold up. Here we have it: two sets of rankings and two opposite interpretations, both equally invalid.
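A hidden "third variable" can manufacture exactly this kind of spurious gap. The sketch below uses entirely made-up numbers (no real state resembles them) to show how a confounder, here labeled "wealth," can drive both union status and test scores, producing a large score gap between unionized and non-unionized states even though unions have zero causal effect in the simulation:

```python
import random

random.seed(0)

# Synthetic illustration only: "wealth" is a hidden confounder that drives
# BOTH union status and test scores. Unions have no causal effect here.
states = []
for _ in range(1000):
    wealth = random.gauss(0, 1)                        # hidden confounder
    unionized = wealth + random.gauss(0, 1) > 0        # wealthier states unionize more often
    score = 1000 + 50 * wealth + random.gauss(0, 20)   # wealth, not unions, drives scores
    states.append((unionized, score))

union_scores = [s for u, s in states if u]
other_scores = [s for u, s in states if not u]
gap = sum(union_scores) / len(union_scores) - sum(other_scores) / len(other_scores)
print(round(gap, 1))  # a sizable gap, despite zero causal effect of unions
```

The raw comparison "union states score higher" is true in this toy world, yet the causal claim drawn from it is false; only controlling for the confounder would reveal that.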
It's not just residents of the blogosphere or the world of Twitter who mishandle numbers in this manner. For example, in Buffalo Public Schools, the high school graduation rate for African American males is a dismal 25 percent. But does this mean that each of a given African American parent's sons has a 25 percent chance of graduating, as Kayla Webley wrote in Time a few weeks back? It most certainly does not, unless these sons are no more than dice ready to be cast. The quality of the school district (if we assume that the graduation rate is the appropriate yardstick for measuring it) is just one of many factors that shape a given young man's actual chance of graduating. Admittedly, this last example is of little global consequence; but sometimes such uses and abuses of data and rankings play a major role in shaping public opinion.
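The arithmetic behind this objection is simple: an aggregate rate is a weighted average, and it can match 25 percent exactly while no individual student faces 25 percent odds. The subgroup shares and probabilities below are invented purely for the sake of the calculation; they are not Buffalo's actual figures:

```python
# Hypothetical subgroups: (share of students, graduation probability within subgroup).
# The weighted average reproduces the district's 25% rate, yet no student's
# individual odds are anywhere near 25%.
subgroups = [
    (0.5, 0.05),   # e.g., students facing severe out-of-school obstacles
    (0.3, 0.35),
    (0.2, 0.60),   # e.g., students in stable, well-resourced settings
]

district_rate = sum(share * p for share, p in subgroups)
print(district_rate)  # 0.25
```

The district-wide 25 percent tells you about the mix of circumstances in the district, not the fate awaiting any particular child.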
International comparative studies have become a perfect example of how careful we should be in our use of rankings. Repeated references to them by public figures and in reports and publications, without context, ultimately (and wrongly) give them legitimacy in the public mind. In October 2010, an editorial in the New York Times bemoaning the poor quality of America's math and science education was titled "48th is not a good place." It was referring to an international comparative study from the World Economic Forum. Now, the Times editorialists are right: Forty-eighth is not a great perch to inhabit. But before booking your flight to Belgium (ranked third by the World Economic Forum), Tunisia (seventh), or Qatar (twelfth), you might want to browse through the report itself. Do so, and you'll see that this ranking is based solely on a survey of executives (the "Executive Opinion Survey"). It asked them a single question: "On a scale of one to seven (one being poor and seven being excellent, among the best in the world), how would you assess the quality of math and science education in your country's schools?" Only 200 executives took the survey for the United States. So what is presented as a ranking of the quality of U.S. math and science education is nothing more than the unscientific musings of a non-representative group of executives. (It goes without saying that the statistical validity of this ranking is minimal.) The point here isn't to debunk every ranking study (though prudence does call for a methods check of each report of this type), nor is it to say that we shouldn't have a sober conversation about the state of math and science education in America. As a scientist who is not the product of American schools, I know we should. This example simply reinforces the point that bogus rankings are ubiquitous and are used to create a sense of urgency.
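A quick back-of-the-envelope calculation shows why 200 opinions make for a shaky ranking. Even setting aside the non-representative sample, the sampling noise alone is substantial. Assuming (hypothetically) a response standard deviation of 1.5 points on the one-to-seven scale, which is plausible for this sort of survey item, the 95 percent margin of error on the mean score is:

```python
import math

# Rough precision check for a 200-respondent opinion survey on a 1-7 scale.
# The standard deviation of 1.5 is an assumed, illustrative value.
n = 200
sd = 1.5
margin = 1.96 * sd / math.sqrt(n)   # 95% margin of error on the mean score
print(round(margin, 2))
```

With dozens of countries separated by only a few tenths of a point on that scale, a margin of error of roughly a fifth of a point can move a country many places up or down the table, before we even get to the question of whether executives' impressions measure educational quality at all.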
The propagation of flawed or misused information fuels the rhetoric about the state of education in the United States.
These are just a few examples to remind us that, in our intensely polarized education-policy world, it is always tempting (and far too easy) to simply grab at data to reinforce our points. Refrain from eating the apple. Scientific rigor and solid methodology must be our guides. While comparative studies and rankings can be meaningful when conducted and analyzed with the requisite care, too often we settle for easy sound bites built on spurious causal claims. And that does no one any good.
Laurent Rigal, Education Pioneers Fellow, Thomas B. Fordham Institute
May 23, 2013