Add a Little Salt

(Guest Post by Jonathan Butcher)

Last week, a South Carolina education blog called “The Voice for School Choice” posted links to an article on the worst schools in the U.S.  South Carolinians should be particularly irked by the article, because 11 SC schools made the top 25.  All is not what it seems, though; below is a touch of salt to add to the article’s results (“25 Worst Performing Public Schools in the U.S.”).  At issue is neither the authors’ intelligence nor their ability; rather, they make very strong claims about the significance of their findings, and readers should be aware of the foundation on which those claims about student achievement rest.

“Worst Schools” was written by a website called “Neighborhood Scout” and published on a financial blog operated by AOL called “WalletPop.”  Neighborhood Scout specializes in “nationwide relocation software, retail site selection, and real estate investment advertising.”  It is neither an academic department at a university nor a policy research institution, and its founders do not have backgrounds in education or education policy research.  The founders’ specialty is geography, computer mapping, and web design (there is no evidence that the article’s authors are different from those described on Neighborhood Scout’s web page).

Neighborhood Scout created their own methodology for the “Worst Schools” article.  They subtracted the percentage of students who “passed” NAEP in a particular state (I assume they mean students who scored at proficient or above, though it could mean basic or above) from the “average percentage” of students in the same state who scored at the proficient or advanced level on the state’s mandatory test.  Their objective was to find states with a large difference between the percentage of students proficient on the state test and the percentage proficient on NAEP, and to use that gap to judge the difficulty (or lack thereof) of the state test.  The article does not compare similar student populations, as NAEP does, or at least the methodology section does not indicate any such disaggregation.
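
As best I can reconstruct it, the arithmetic amounts to the short sketch below.  Neighborhood Scout has published no code, so the per-school discount is my reading of their write-up, and all of the figures are invented for illustration:

    # Sketch of the "Worst Schools" adjustment as I read the methodology.
    # All figures are invented for illustration; the per-school discount
    # is an assumption based on the article's description, not published code.
    state_test_proficient = 0.72  # statewide % proficient/advanced, state test
    naep_proficient = 0.30        # statewide % "passing" NAEP

    # The statewide "inflation" gap between the state test and NAEP.
    inflation = state_test_proficient - naep_proficient  # 0.42

    # Each school's own rate is then discounted by the statewide gap
    # before schools are ranked nationally.
    school_proficient = 0.48
    adjusted_rate = school_proficient - inflation  # roughly 0.06
    print(f"Adjusted proficiency used for ranking: {adjusted_rate:.0%}")

If that reading is right, every school in a state receives the same discount, regardless of whom the school serves.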

Of note is that the study gives no indication of being peer-reviewed, and peer review is a useful robustness check even for research reports not submitted to journals.  In addition, the study is a snapshot of test scores.  It does not account for improvement over time or changes in the student population, nor does it compare scores to any baseline indicator.  For example, in the past three years, 6th graders at W.A. Perry (one of the SC schools in the bottom 25) have gone from 48% meeting or exceeding state standards in math to 66%.  They are still below the state average, but more students are meeting or exceeding state standards now than three years ago.  Similar results can be found in English/Language Arts.

Admittedly, W.A. Perry’s 6th graders’ scores are below the state average; however, they are making progress.  My aim is not to defend schools that may be low-performing, but a snapshot of test scores at a single point in time does not a failing school make.  NCLB agrees with me: a school must be designated in need of improvement for three years before significant intervention takes place.

Additionally, the article gives no indication of the student populations these schools serve.  For example, Milwaukee Spectrum School (#25) has a total population of 90 at-risk students who had records of truancy at other schools.  The school is often a last stop for students ready to drop out of high school altogether.  Of course the school is struggling; it is intended to serve struggling students.

In the article, schools serving different grades are mixed together: high schools are compared not just to other high schools, but to elementary, middle, and high schools alike.  This presents a problem because the general trend on NAEP, across subjects, is that more elementary students score proficient than middle school students, and more middle school students score proficient than high school students.

Further, scores are not reported for every grade in every subject.  So a high school with low-scoring 11th graders may be on the “Worst Schools” list right before a middle school that has low-scoring 8th graders but a class of 6th graders with scores closer to the state average.

In the end, of course, readers will decide if this list of worst performing schools is convincing.  However, before sinking your teeth in, take the article with a grain of salt.

5 Responses to Add a Little Salt

  1. JEMEDM says:

    Thank you for the additional information. I had never heard of this organization and wondered about their “angle” in this publication. You confirmed some suspicions as well about their methodology.

  2. Ralph Porter says:

    I am the director and founder of CHOiCES Charter School in Florence and Darlington County. Our school is an alternative school that takes students, ages 12-17, who have been expelled from Florence and Darlington Counties. The majority of students come to us not even close to being on grade level. We have students improve sometimes two grade levels in one year. But if a student is enrolled as a 7th grader and tests on a 3rd grade level, that is counted against us, not the school he came from. Our mission is to provide a learning opportunity to students who otherwise would not be in school anywhere. We have an 85% success rate in returning students to their original schools with no returns. We have also had 27 students pass their GED, and seven of those are in college. So yes, I agree that the ranking of schools does not take into account individual student improvements but only justifies itself with someone else’s data. I’ll be glad to close this school down and give them back to the schools that failed them.

  3. Karl Priest says:

    Thank you for linking to the Continuing Collapse.

    I love that “newsletter”!

  4. Morgan says:

    This list is pretty absurd. I’m glad someone affiliated with one of the schools posted something about it. This seems really basic, but you can’t just look at a school’s scores and then immediately say, “this is one of the worst schools in the country.”

  5. John Cronin, Ph.D. says:

    My personal reservations about their methodology are as follows:

    The methodology as described simply subtracts a state’s NAEP proficiency rate from the state’s own reported proficiency rate on its test. The difference between the two (according to the website) is used to adjust each school’s proficiency rate and eventually derive a ranking for each school. The primary weakness of the methodology is that it attempts to report achievement by considering only three points of reference, two of which have nothing to do with the school: the state’s NAEP proficiency rate, the average statewide proficiency rate on the state test, and the school’s own proficiency rate. One problem with this methodology is that it may not work well if the state’s proficiency level is set at a point that doesn’t discriminate well between high- and low-performing schools.

    For example, for our study The Accountability Illusion we estimated the difficulty of the South Carolina standards, which are quite high, relative to those of other states, including Michigan, which are relatively low. We then looked at how a group of eighteen elementary and middle schools would perform relative to each state’s standards. Using South Carolina’s cut score, our highest performing school achieved about 70% proficiency and our lowest 18%. These same schools, when using Michigan’s cut scores, would have rates of 98% and 78% proficiency, respectively. These are the same schools, but in one state the gap between the best and worst schools in our sample is 52 points; in the other, 20. Thus it is no surprise that there are no Michigan schools on the worst-25 list. The reason is that the South Carolina test is discriminating; that is, results are reported in a way that strongly differentiates between high- and low-performing schools, because the cut score is set near the middle of the distribution. In Michigan the test doesn’t discriminate as well because the standard is set near the low end of the distribution. As a result, the low-performing school in our sample didn’t seem to perform that much worse than the best.
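
    To put the arithmetic in one place, here is a minimal sketch using only the numbers above:

        # Same eighteen schools, two states' cut scores (numbers from above).
        sc_best, sc_worst = 0.70, 0.18  # SC: cut score near mid-distribution
        mi_best, mi_worst = 0.98, 0.78  # MI: cut score near the low end

        # The best-to-worst spread collapses when the cut score sits where
        # nearly every school clears it.
        print(f"SC spread: {sc_best - sc_worst:.0%}")  # 52%
        print(f"MI spread: {mi_best - mi_worst:.0%}")  # 20%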

    The point: simply applying a transformation that is the difference between a state’s average score and its NAEP performance isn’t going to resolve a problem that is grounded in the lack of sensitivity of the metric itself. The evidence: isn’t it rather strange that, among neighborhoodscout.com’s 100 lowest performing schools in America, not a single one comes from Tennessee, Alabama, New Mexico, Louisiana, or Mississippi, the five lowest performing states on NAEP math?
