Pondiscio: Choice Is Not About Test Scores

March 6, 2017


(Guest Post by Jason Bedrick)

In case you missed it, in today’s U.S. News & World Report, the inimitable Robert Pondiscio gently chides fellow school choice advocates for getting caught up in a debate over test scores, which are ancillary to the true value of school choice:

Wonky battles over research studies can be illuminating. They can also be irrelevant or premature. While [school choice] advocates are correct that the preponderance of evidence tends to favor school choice, this entire debate puts the cart before the horse. When we look to test-based evidence – and look no further – to decide whether choice “works,” we are making two rather extraordinary, unquestioned assumptions: that the sole purpose of schooling is to raise test scores, and that district schools have a place of privilege against which all other models must justify themselves.

That’s really not what choice is about. Choice exists to allow parents to educate their children in accordance with their own needs, desires and values. If diversity is a core value of yours, for example, you might seek out a school where your child can learn alongside peers from different backgrounds. If your child is a budding artist, actor or musician, the “evidence” that might persuade you is whether he or she will have the opportunity to study with a working sculptor or to pound the boards in a strong theater or dance program. If your child is an athlete, the number of state titles won by the lacrosse team or sports scholarships earned by graduates might be compelling evidence. If faith is central to your family, you will want a school that allows your child to grow and be guided by your religious beliefs. There can be no doubt that, if you are fortunate enough to select a school based on your child’s talents or interests or your family’s values and traditions, the question of whether school choice “works” has already been answered. It’s working perfectly for you.

Deciding whether or not to permit parents to choose based on test-based evidence is presumptuous. It says, in effect, that one’s values, aspirations and priorities for one’s child amount to nothing. Worse, our evidence-based debate presumes that a single, uniform school structure is and ought to be the norm, and that every departure from that system must justify itself in terms of a narrow set of outcomes that may not reflect parents’ – or society’s – priorities. Academic outcomes matter, of course, but so do civic outcomes, character development, respect for diversity and faith and myriad others.

This isn’t to say that the research on the effect of school choice on test scores is meaningless. But it has to be read and understood in the broader context. Test scores are important, but they’re far from what’s most important about exercising educational choice. As Pondiscio concludes:

School choice proponents who seek to prove that vouchers, tax credits and scholarships “work” by citing test-score-based research have allowed themselves to be lured into an argument that can never be completely won. They have tacitly agreed to a reductive frame and a debate over what evidence is acceptable (test scores) and what it means to “win” (better test scores). This is roughly akin to arguing whether to shop at your neighborhood grocery store vs. Wal-Mart based on price alone. Price is important, but you may have reasons for choosing the Main Street Grocery that matter more to you than the 50 cents per pound you’d save on ground beef. Perhaps Main Street’s fresh local produce and personal service are more important to you.

If we limit the frame of this debate to academic outputs alone, every new study provides ammunition, but never a conclusion. The real debate we should be having is, “What kind of system do we want?” Answer that question first, then use evidence to improve the school designs, policies and programs we have agreed deserve public support.

Amen, brother!


Add a Little Salt

March 20, 2009

(Guest Post by Jonathan Butcher)

Last week, a South Carolina education blog called “The Voice for School Choice” posted links to an article on the worst schools in the U.S.  South Carolinians should be particularly irked with the article because 11 SC schools made the top 25.  All is not what it seems, though; below is a touch of salt to be added to the results of this article (“25 Worst Performing Public Schools in the U.S.”).  At issue is not the intelligence or ability of the authors.  They do, however, make very strong claims about the significance of their findings, and readers should be aware of the foundation on which those claims about student achievement rest.

“Worst Schools” was composed by a website called “Neighborhood Scout” and published on a financial blog operated by AOL called “WalletPop.”  Neighborhood Scout specializes in “nationwide relocation software, retail site selection, and real estate investment advertising.”  It is neither an academic department at a university nor a policy research institution, and its founders do not have backgrounds in education or education policy research.  The founders’ specialties are geography, computer mapping, and web design (there is no evidence that the article’s authors are different from the people described on Neighborhood Scout’s web page).

Neighborhood Scout created its own methodology for the “Worst Schools” article.  It subtracted the percentage of students who “passed” NAEP in a particular state (I am assuming this means students who scored at proficient or above, though it could mean basic or above) from the “average percentage” of students in the same state who scored at the proficient or advanced level on the state’s mandatory test.  The objective was to find schools in states where there is a large difference between the percentage of students proficient on the state test and the percentage proficient on NAEP, and to use that gap to make judgments about the difficulty (or lack thereof) of the state test.  The article does not compare similar student populations, as NAEP does; at the least, the methodology section does not indicate any such disaggregation.
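To make the arithmetic concrete, here is a minimal sketch of the gap calculation as I read the article’s description.  The state abbreviations and percentages below are purely illustrative (Neighborhood Scout did not publish its data file), and it remains unclear which NAEP cut point “passed” refers to.

```python
# Illustrative sketch only: made-up numbers, hypothetical structure.
# Neighborhood Scout did not publish its data, and "passed" NAEP may mean
# proficient-or-above or basic-or-above.

# Statewide percent of students "passing" NAEP (illustrative values)
naep_pct = {"SC": 30.0, "OH": 36.0}

# Statewide average percent proficient-or-advanced on the state's own test
state_test_pct = {"SC": 70.0, "OH": 75.0}

def state_gap(state: str) -> float:
    """State-test proficiency minus NAEP proficiency for one state.

    The article treats a large positive gap as evidence that the state's
    test is much easier than NAEP, and then ranks schools from states with
    large gaps among the "worst" performers.
    """
    return state_test_pct[state] - naep_pct[state]

for state in sorted(state_test_pct):
    print(f"{state}: state test {state_test_pct[state]:.0f}% - "
          f"NAEP {naep_pct[state]:.0f}% = gap of {state_gap(state):+.0f} points")
```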

Of note is that the study gives no indication of being peer-reviewed, and peer review is a robustness check even for research reports not submitted to journals.  In addition, the study is a snapshot of test scores.  It does not account for improvement over time or changes in the student population, nor does it compare scores to any baseline indicator.  For example, over the past three years, 6th graders at W.A. Perry (one of the SC schools in the bottom 25) have gone from 48% meeting or exceeding state standards in math to 66%.  They are still below the state average, but more students are meeting or exceeding state standards now than three years ago.  Similar results can be found in English/Language Arts.

Admittedly, W.A. Perry’s 6th graders’ scores are below the state average; however, they are making progress.  My aim is not to defend schools that may be low-performing, but a snapshot of a school’s test scores at one point in time does not a failing school make.  NCLB agrees with me, as a school must be in need of improvement for three years before significant intervention takes place.

Additionally, the article gives no indication of the student populations served at these schools.  For example, Milwaukee Spectrum School (#25) has a total population of 90 at-risk students who had a record of truancy at other schools.  The school is often a last stop for students ready to drop out of high school altogether.  Of course the school is struggling; it is intended to serve struggling students.

In the article, schools serving different grade spans are compared against one another.  High schools are not compared only to other high schools, but to elementary, middle, and high schools alike.  This presents a problem because the general trend in NAEP is that more elementary students score proficient than middle school students, and more middle school students score proficient than high school students (this is true across subjects).

Further, scores are not reported for every grade in every subject.  So a high school with low-scoring 11th graders may be on the “Worst Schools” list right before a middle school that has low-scoring 8th graders but a class of 6th graders with scores closer to the state average.

In the end, of course, readers will decide if this list of worst performing schools is convincing.  However, before sinking your teeth in, take the article with a grain of salt.

When “Sorry” Means “J’Accuse”

July 28, 2008

The letter "J'accuse"

The following column about a letter appeared on the front page of the Cleveland Plain Dealer on Wednesday, July 23.  The letter is in the form of an apology, but it is actually a series of accusations about testing and accountability.  Like another front-page letter of accusation, this one has all of Emile Zola’s moral outrage but none of his justification.

I’ve reprinted it here with my comments in blue italics.

Students pass state test, but at what cost to their education?

by Regina Brett

The school report cards came out in June.

Rocky River Middle School passed the 2008 Ohio Achievement Tests, earned an Excellent rating from the state and met the requirements for Adequate Yearly Progress.

For all of those accomplishments, Principal David Root has only one thing to say to the students, staff and citizens of Rocky River:

He’s sorry.

Root wants to issue an apology. He sent it to me typed out in two pages, single spaced.

He’s sorry that he spent thousands of tax dollars on test materials, practice tests, postage and costs for test administration.

Actually, he did not spend the money.  The taxpayers did when they decided through their elected representatives to adopt a testing and accountability system.  They then hired David Root to implement this policy in his capacity as principal at a public school.

Sorry that his teachers spent less time teaching American history because most of the social studies test questions are about foreign countries.

I guess the people of Ohio thought it was important for students to learn about foreign countries when they, through their elected representatives and hired agents, devised the state curriculum and test.  Besides, if students learned more about foreign countries they might know who Emile Zola was.

Sorry that he didn’t suspend a student for assaulting another because that student would have missed valuable test days.

Sounds pretty irresponsible.  Would he have made a different decision if the student had instead missed valuable instructional days?  If so, whose fault is that?  Oh yes, I forgot that this is an accusation, not an apology.

Sorry he didn’t strictly enforce attendance because all absences count against the school on the State Report Card.

So, is David Root saying that he cheated on the state accountability system?  Isn’t this like lying to your boss about your job performance?  Will he be fired, sanctioned, or resign to make amends for his infraction?

He’s sorry for pulling children away from art, music and gym, classes they love, so they could take test-taking strategies.

Why didn’t he just follow the state curriculum and let the scores show what students knew?  The decision to take time away for “test-taking strategies” was completely unnecessary given that more than 90% of Rocky River students have been scoring above the proficient level in reading, math, and writing.  It sounds like they would have done just fine on the state test without the test-taking strategies, even with that time spent on art, music, and gym instead.

Sorry that he has to give a test where he can’t clarify any questions, make any comments to help in understanding or share the results so students can actually learn from their mistakes.

How reliable would the results be if principals could clarify questions, help in understanding, or share secure test items that would be re-used on future tests?  Does every assessment have to be a formative assessment?

Sorry that he kept students in school who became sick during the test because if they couldn’t finish the test due to illness, the student automatically fails it.

This sounds like a difficult decision.  Football coaches similarly have to think about whether to take injured players out of the game versus having the players tough it out.  We pay leaders to make these difficult decisions, balancing competing interests wisely.

Sorry that the integrity of his teachers is publicly tied to one test.

Actually, the state accountability system (to say nothing of “the integrity of his teachers”) is not based on one test.  The overall rating of Rocky River Middle School is based on several test results (in Reading, Math, Writing, Social Studies, and Science), the progress students have made in those subjects, and (as we already heard) the possibly fraudulent attendance rate.

He apologized for losing eight days of instruction due to testing activities.

I thought Root didn’t want everything riding on one test; well, it takes time to administer several.  While testing takes place on eight days, it does not (or at least does not have to) consume the entirety of those days.  My understanding is that the average student spent only two mornings being tested, as testing occurred in different grades and subjects for different students across those eight days.

For making decisions on assemblies, field trips and musical performances based on how that time away from reading, math, social studies and writing will impact state test results.

I would hope that the principal would think about how assemblies, field trips, and musical performances impact instructional time for other academic subjects regardless of whether those subjects are part of a state accountability system.

For arranging for some students to be labeled “at risk” in front of their peers and put in small groups so the school would have a better chance of passing tests.

Again, if smaller group instruction would help certain students, the principal should arrange for that regardless of the state accountability system.  And the principal would have to think of a way to provide that necessary assistance without stigmatizing the students who need it.

For making his focus as a principal no longer helping his staff teach students but helping them teach test indicators.

Why didn’t he just help his staff teach the subjects with confidence that the test indicators would show what they had learned?  This is especially puzzling given how likely it is that students at Rocky River would pass the state test without paying any special attention to test-taking strategies.

Root isn’t anti-tests. He’s all for tests that measure progress and help set teaching goals. But in his eyes, state achievement tests are designed for the media to show how schools rank against each other.

Seems like the state accountability system does measure progress and help set teaching goals.  What’s wrong with it also informing the public and policymakers (via the media) about how their schools are doing?

He’s been a principal for 24 years, half of them at Rocky River Middle School, the rest in Hudson, Alliance and Zanesville. He loves working with 6th, 7th and 8th graders.

“I have a strong compassion for the puberty stricken,” he joked.

His students, who are 11, 12, 13 and 14, worry that teachers they love will be let go based on how well they perform.

One asked him, “If I don’t do well, will you fire my teacher?”

He cringed when he heard one say, “I really want to do well, but I’m not that smart.”

Has a single tenured teacher in Ohio (or in the United States) been let go based on performance on state accountability tests?  Maybe he should reassure the students that their concern is misplaced.

He wants students to learn how to think, not take tests.

Can’t they do both?

“We don’t teach kids anymore,” he said. “We teach test-taking skills. We all teach to the test. I long for the days when we used to teach kids.”

Why not just return to those days and let the test results show what kids have learned?

Unless we get back to those days, principals and teachers all over Ohio will continue to spend your tax dollars to help students become the best test takers they can be.

The people of Ohio decided to adopt an accountability system because the schools weren’t doing an adequate job teaching kids to think without it.  The “just trust us to do a good job” approach wasn’t working.

(edited to add color)

