Don’t Test Me, Bro!

January 23, 2014

(Guest Post by James Shuls)

Back in December, the Fordham Foundation put out a clever parody video of “What Does the Fox Say?” Playing on Fordham’s moniker, The Education Gadfly, the video was entitled “What Does Gadfly Say?” More recently, Fordham released a paper that calls for private schools in state-sponsored school choice programs to be subject to state accountability tests. We have listened to what the Gadfly has to say. Maybe Fordham should listen to what many private schools and parents have to say. What I’m hearing is…“Don’t test me, bro!”

I recently surveyed private schools in St. Louis and Kansas City, Missouri, regarding their participation in a potential state-sponsored private school choice program. I found that many schools, 88% to be exact, already administer some form of standardized test. Nearly half of the schools said they would not participate in a school choice program if they were forced to administer state accountability tests. Aside from upholding admissions criteria and allowing students to opt out of religious services, this was the most important factor for Missouri private schools.

The Fordham Foundation conducted a similar survey among private schools in states with private school choice programs. In total, there were seven items that were similar between the Show-Me Institute’s survey and Fordham’s survey. A rank ordering of the survey items shows the responses in the two surveys were quite similar. Germane to this conversation is the requirement to participate in state testing. A quarter of schools in the Fordham study said this was “Very Important” or “Extremely Important” to their participation in a school choice program. That figure was higher, 37%, among non-participants.

[Chart: rank ordering of survey items from the Show-Me Institute and Fordham surveys]

Is this reason enough to excuse private schools from being required to administer state tests? No, but private school leaders aren’t the only ones saying, “Don’t test me, bro!” This cry is also ringing out from many parents and students. Last September, the AP reported on the growing movement to opt out of standardized tests. Only a fraction of parents are participating in this form of “civil disobedience,” but many others simply don’t value standardized test scores. Indeed, recent reports by the Friedman Foundation and the Fordham Foundation note that parents care about a lot more than test scores.

The Fordham report, What Parents Want, categorized just 23% of parents as “test-score hawks.” More parents, 24%, fell into the “Jeffersonian” category. These parents are inclined to choose a school that “emphasizes instruction in citizenship, democracy, and leadership.” Still more, 36%, were categorized as “Pragmatists,” meaning they valued vocational and job-related training. “Multiculturalists” were just behind the “test-score hawks” with 22%. What Parents Want makes the same point found in a recent paper by the Friedman Foundation: parents value “more than scores.”

The real question behind this debate is: Who should be the arbiter of school quality? In other words, what is the purpose of school choice? The Fordham Foundation suggests private schools that accept students receiving state support should be held accountable to the taxpayer. This means, Fordham argues, they should be subject to state tests. In essence, Fordham is saying that the state is the arbiter of quality. The state has selected the standards, the state has chosen the state tests, the state will set performance standards, and the state will not allow low-performing schools to participate.

If you believe the ultimate goal of school choice is to improve student achievement, as measured by state accountability tests, then you should agree with Fordham. If, however, the goal of school choice is to afford parents the ability to choose the school that meets their needs, then you should probably disagree with Fordham. That is, if parents are the arbiters of school quality, Fordham is wrong.

——————————

James Shuls is the Director of Education Policy at the Show-Me Institute. He earned his doctorate in education policy from the University of Arkansas’ Department of Education Reform. You can read his full paper, Available Seats: Survey Analysis of Missouri Private School Participation in Potential State Scholarship Programs, here.


Am I Being Consistent on Testing Requirements?

January 20, 2014

Rick Hess has a thoughtful post today on last week’s dust-up over whether choice schools should be required to take state tests.  Rick is generally sympathetic with the arguments I was making but raises two objections.

First, Rick worries about whether I (and others) are being consistent in opposing testing requirements for choice schools while having “long slammed districts and promoted school choice by pointing to reading and math scores.”  He continues, “I’ve got a lot of sympathy for those who feel like Greene’s position constitutes something of a bait-and-switch, with choice advocates changing the rules when it suits them.”

Second, Rick thinks there is an inconsistency in my suspicion that test-prep and manipulation are largely responsible for test score improvements by Milwaukee choice schools after they were required to take high-stakes tests, while I interpret research from Florida as showing schools made exceptional test score gains when faced with the prospect of having vouchers offered to their students if scores did not improve.  Why would I believe the former is an artifact of test prep, but not the latter?

Let me deal first with Rick’s second objection because it is easier and quicker to address.  I was concerned about whether test prep and manipulation were responsible for the exceptional gains made by low-graded schools that faced the prospect of voucher competition if their results did not improve.  So, Marcus Winters and I examined results from the Stanford-9, a nationally normed low-stakes test, as well as the state’s high-stakes FCAT, to see if the results were similar.  Here is what we wrote:

Schools are not held accountable for their students’ performance on the Stanford-9. As a result, they have little incentive to manipulate the results by “teaching to the test” or through outright cheating. Thus, if gains are witnessed on both the FCAT and the Stanford-9, we can be reasonably confident that the gains reflect genuine improvements in student learning.

The results were similar, showing exceptional gains on both high- and low-stakes exams, which gave me confidence that the improvements in Florida were real.  In Milwaukee we do not have a similar check on whether learning gains were real after high-stakes testing requirements were imposed.  In the absence of a low-stakes check, I’m highly skeptical that choice schools suddenly improved in quality when they were required to administer the high-stakes tests that the study subjects had been taking all along with lower results.

Rick’s first point — essentially, that I am being hypocritical in opposing testing for choice schools but not for traditional public schools — requires a more complicated response.  I would be happy opposing state testing requirements for all schools (choice and traditional public) if those schools had some reasonable mechanism for accountability.  Choice schools are accountable without testing requirements because parents can choose whether to send their children (and the resources that follow those students) to those schools or not.  If those schools are not accomplishing what parents want, choice schools have difficulty attracting and retaining students and resources.

Most traditional public schools, however, have no meaningful system of accountability.  They receive students and resources regardless of whether they are accomplishing what families want or not.  If schools are not held accountable by choice, then they have to be accountable by some mechanism.  One way to produce this accountability is to require that they administer state tests and meet certain performance benchmarks.  This type of top-down accountability is far less efficient and comprehensive than choice accountability, but it may have to do in the absence of choice.  But if charter, private, and Tiebout choice were to expand to the point where no school was guaranteed students and revenues regardless of performance, then I’d be fine with getting rid of all testing requirements.

Of course, there would still be plenty of information about schools because most schools in choice systems voluntarily administer tests and report results.  They just choose their own tests, just as they choose their own standards, curriculum, and pedagogy.  And since tests only capture a tiny portion of what most schools are trying to accomplish, parents would collect information on these other outcomes of education just as consumers collect information on the quality of other complicated services their children receive, such as summer camp, piano lessons, and babysitters.  We don’t have state-required testing — or even any testing — for most of these services, so parents rely on reputation, word of mouth, direct observation, and other techniques to collect information and make choices.  No system is perfect and people will make mistakes, but I’d rather that parents make their own mistakes than have bureaucrats impose mistakes upon them.

This skepticism about state testing does represent a shift in my thinking that has been underway for a few years now.  I’m sure someone could dig up an old quote from me embracing top-down accountability in a way that I would not do now.  But I’ve seen more evidence and collected more experience over the last several years that has made me much less enamored of state testing.  I’m convinced that state tests are highly imprecise, very limited in what they cover, subject to test-prep and manipulation, unable to capture the diversity of school goals and circumstances, and seldom used to make intelligent decisions about improving schools.  Simply put, I am no longer a supporter of top-down school accountability regimes.  But until we have expanded choice further, I see no practical alternative to continuing state testing for schools not subject to meaningful choice accountability.


Testing Requirements Hurt Choice

January 17, 2014

In this week’s debate over the wisdom of requiring choice students to take state tests, three points deserve greater emphasis.  First, testing requirements hurt choice because test results fail to capture most of the benefits produced by choice schools.  As Collin Hitt’s piece persuasively argued, a series of rigorous studies have found large long-term benefits for students able to attend schools of choice even when short-term test results show little or no benefit.  Those studies show that charter and private choice schools cause students to graduate high school and go to college at much higher rates.  Those students go to more competitive universities at much higher rates.  And choice causes those students to enjoy much higher salaries later in life.  But if you only looked at short-term test results for these students you would not have expected the magnitude of these benefits.

One (of the many) problems with imposing testing requirements on schools of choice is that it highlights a measure of performance that grossly under-states the benefits of choice.  Given the precarious political position of choice programs, highlighting a measure that severely under-states performance puts those programs in jeopardy.  I can understand why choice opponents favor testing requirements — since they want ammunition to shut choice down or regulate it into oblivion.  But why would choice supporters favor this?  It’s a huge mistake.

Second, the only piece of evidence that Fordham presents to support the claim that state testing requirements improve performance at choice schools is the finding that scores rose when Milwaukee private choice schools were required to take the high-stakes state test.  But as Pat Wolf, one of the authors of that study, noted — the score increase may well be just an artifact of private choice schools deciding to start prepping students for that high-stakes test now that they were required to take it.  In other words, Fordham is confusing real learning increases with test manipulation.  Pat was gently warning Fordham not to misinterpret the results in this way.  Despite that warning, Fordham continues to misuse this research to make their point.

If Fordham continues to incorrectly cite this bit of evidence to support their point, they are in danger of becoming the Diane Ravitch of think tanks.  Wolf similarly warned Ravitch that she misunderstood the graduation rate component of the Milwaukee voucher study, wrongly claiming that attrition was biasing results to show higher graduation rates for voucher students.  Ravitch did not grasp that the result was based on an intention-to-treat analysis and, if anything, that type of analysis under-states the positive effect of choice on graduation rates.  Ravitch either couldn’t understand this or didn’t care about getting it right, so she continues to repeat this incorrect interpretation of the research to advance her agenda.  In the argument about choice and state testing requirements, Fordham is similarly repeating a faulty interpretation that they’ve been warned is mistaken.

Third, despite the lack of evidence that state testing requirements improve outcomes or ensure quality (as they largely acknowledge in an earlier report, “The Proficiency Illusion”), Mike Petrilli continues to push for them because… well, because we’ve got to do something:

Bad schools happen. They happen in the public sector, the charter sector, and, yes, the private sector. And since education is a “public good” as well as a “private good”—because kids’ lives literally hang in the balance and so does the future of the society whose taxpayers are underwriting these costs—we can’t just look the other way….

But the answer cannot be “let the market figure it out.” Because it hasn’t, and it won’t—and somebody must.

Of course, doing something that is ineffective or counter-productive may be worse than doing nothing.  If state testing requirements don’t necessarily make schools better and fail to capture the bulk of the benefits choice schools are producing, then imposing state testing requirements on choice schools just to do something is a really bad idea.  In an effort to prevent all bad things from happening, Fordham may ensure that more bad things will happen.

Fordham’s argument that we need to do something reminds me of the brilliant song Jason Segel wrote for the fictional band, Aldous Snow and the Infant Sorrow, in the movie Forgetting Sarah Marshall — appropriately titled “We’ve Got to Do Something!”  As he is grabbing a cane from a blind man in the music video Aldous (Russell Brand) sings:

You gotta do something,
We gotta do something,
Sometimes I sit in my room and I don’t know what to do,
but we’ve gotta do something!

…and if I was in Government,
Then I’d Government things much more differentlier,
because it ain’t the best way to government things,

UPDATE — On reflection, the Ravitch comparison was too harsh.  Ravitch repeated a factual error even after the error was pointed out to her.  Fordham is repeating an ambiguous finding, not necessarily a factually incorrect one.  One could interpret the test score gain produced by choice schools in Milwaukee after high-stakes testing was required as a test prep artifact or as a real learning gain.  I’m strongly inclined toward the test prep explanation, but the other interpretation is not factually mistaken.  Both the original study and Pat’s post warned Fordham about the ambiguity of this result, yet Fordham continues to cite it as proof without clarification or qualification.  It’s not pulling a Ravitch, but it’s also not good.


Florida Charter Schools: Show me the money!

January 16, 2014

(Guest Post by Collin Hitt)

There’s mounting evidence that charter schools decrease dropout rates, increase college attendance rates and improve the quality of colleges that college-bound students attend. But so what if these kids go to college? Do they actually graduate? And if charter schools really have lasting impacts, shouldn’t charter schools actually have an impact on how much money students earn? A new working paper examines these questions and the answer, in a word, is yes.

Kevin Booker, Tim Sass, Brian Gill and Ron Zimmer have now extended their previous research on charter high schools. (Jay wrote about their research and their clever research design a few years back.) They look at students in Chicago and Florida who attend charter schools in eighth grade, some of whom go on to attend charter high schools and some of whom go on to attend district-run high schools.

They find that students who attend charter high schools are more likely to graduate high school, attend college and persist in college. Such findings are extremely important. But the paper is truly novel in that it also examines the labor market outcomes for students. From the study:

In Florida, we also examine data on the subsequent earnings of students in our analytic sample, at a point after they could have earned college degrees. Charter high school attendance is associated with an increase in maximum annual earnings for students between ages 23 and 25 of $2,347—or about 12.7 percent higher earnings than for comparable students who attended a charter middle school but matriculated to a traditional high school.

Two years ago, the front page of the New York Times carried a story reporting that teachers can have lasting impacts on students’ earnings in adulthood, citing groundbreaking work by Jonah Rockoff, Raj Chetty and John Friedman. For a single school year, a one standard deviation increase in teacher quality – as measured by a teacher’s value-added impact on test scores – increased a student’s annual earnings at age 28 by $182. Compare that to the impact of attending a charter high school in Florida: a $2,347 increase in annual earnings by age 25. Using Rockoff, Chetty and Friedman’s estimate, that’s equivalent to a student experiencing a one standard deviation increase in teacher quality every year from kindergarten through the twelfth grade.
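A quick back-of-the-envelope check of that equivalence (a sketch using only the two figures quoted above; the variable names are mine, not from either study):

```python
# Rockoff/Chetty/Friedman: one school year with a teacher one standard
# deviation above average raises annual adult earnings by about $182.
per_year_gain = 182      # dollars per one-SD teacher-quality year

# Booker et al.: Florida charter high school attendance is associated
# with $2,347 higher maximum annual earnings at ages 23-25.
charter_gain = 2347      # dollars

# How many one-SD teacher-quality years would match the charter effect?
equivalent_years = charter_gain / per_year_gain
print(round(equivalent_years, 1))  # prints 12.9
```

Thirteen years of schooling (kindergarten through twelfth grade) at $182 per year comes to $2,366, almost exactly the $2,347 charter effect.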

So these findings stand out. Moreover, Booker and colleagues close the paper with a key observation. In Florida, as in other school choice research, a paradox is apparent: the improvements in long-term outcomes were in no way predicted by earlier research on test score impacts.

The substantial positive impacts of charter high schools on attainment and earnings are especially striking, given that charter schools in the same jurisdictions have not been shown to have large positive impacts on students’ test scores (Sass, 2006; Zimmer et al., 2012)…

 Positive impacts on long-term attainment outcomes and earnings are, of course, more consequential than outcomes on test scores in school. It is possible that charter schools’ full long-term impacts on their students have been underestimated by studies that examine only test scores. More broadly, the findings suggest that the research examining the efficacy of educational programs should examine a broader array of outcomes than just student achievement.

This, I can promise, will be a recurrent theme in school choice research in the coming years. Recall this passage from Will Dobbie and Roland Fryer’s research on the Harlem Children’s Zone Promise Academy, where they found large gains in college attendance:

 “…the cross-sectional correlation between test scores and adult outcomes may understate the true impact of a high quality school, suggesting that high quality schools change more than cognitive ability. Importantly, the return on investment for high-performing charter schools could be much larger than that implied by the short-run test score increases.”

Test scores are supposed to be an indicator of how kids will fare later in life. Now we have another piece of school choice research finding that test scores missed the true positive impact that schools (and choice) had on kids. Something to think about if you’re going to argue that schools of choice should be held more accountable to state tests.


We’re Number 7!

January 8, 2014

The greatest thing about Rick Hess’ ranking of Edu-Scholar Public Influence is how much anger and denunciation it riles up.  People argue about the methodology, complain about who is excluded from the ranking, and dismiss the whole enterprise as irrelevant, all the while secretly hoping that they and their friends will rise higher in the rankings next year.  Rick is pretty up front about how imperfect his ranking is.  And he probably just views the whole thing as an amusing recreation.

Ranking scholars is like ranking actors or ranking albums.  It’s great fun and provokes lots of debates, but it doesn’t really mean much.  The Oscars and Rolling Stone lists tell us something about what people think about movies and music, but they are far from objective methods of identifying excellence.  You are free to like what you like and make your own ranking.

That being said, I’m going to go ahead and abuse Rick’s ranking and brag about how the University of Arkansas Department of Education Reform is the 7th most influential education policy program in the country.  Three of our six regular faculty members are listed among the 87 most influential scholars.  Only 6 other universities have more scholars in the top 87: Stanford (with 14), Harvard (with 9), Columbia (with 6), and NYU, Vanderbilt, and UCLA (each with 4).  We tie the Universities of Virginia, Wisconsin, Michigan, California-Berkeley, and Pennsylvania in having 3 scholars in the top 87.

And since 3 of our 6 faculty members are in the top 87, you could argue that we are the most influential department since no other program has half of its scholars in the top 100.  Anyone interested in working with these excellent people to get a Ph.D. in education policy should check out our doctoral program and learn how to apply there.  We cover tuition and fees and offer a generous stipend for all students admitted to our program.

Now I’m going to check out this list of the top 100 Bob Dylan songs.  What?  No Way!  Sad Eyed Lady of the Lowlands and It’s All Over Now, Baby Blue should definitely be 1 and 2.  How’d they do this stupid ranking?  Well, just enjoy this Joan Baez cover.


Let Local School Leaders Do Their Job

January 7, 2014

(Guest Post by James Shuls)

Traditionally trained teachers typically enter the profession after completing coursework that is designed to prepare them for the classroom. This training includes a student teaching experience—a hands-on opportunity to practice their craft. Alternatively certified teachers, on the other hand, often enter the classroom with little to no pedagogical training or classroom experience. So, how do alternatively certified teachers compare to traditionally trained teachers in terms of effectiveness? Many scholars have examined this question, but Julie Trivitt and I are the first to do so using Arkansas data. The results from our analysis of elementary and middle school teachers were recently published in Educational Policy. Like many others, we find the difference between the two groups to be negligible.

Here is a quick summary of the findings:

On average, alternatively certified teachers tend to perform slightly lower than traditionally certified teachers, but there is more variation within each group than between groups. Furthermore, the differences between groups tend to be small and marginally significant only when we control for prior academic achievement as measured by teacher licensure exams. Because alternatively certified teachers score significantly higher on licensure exams, on average, including these scores biases the estimates of alternative certification downward. Nevertheless, the coefficient on alternative certification remains negative, but insignificant, when teacher test scores are not included. We conclude that traditionally certified teachers gain some experience through their training program, which translates to close to a year of experience. Alternatively certified teachers seem to make up the difference as they gain from years of experience at a more rapid rate than traditionally certified teachers.

How could it be that teachers who have undergone training are no more effective than teachers who have not? One possible explanation is that the types of individuals who enter the classroom via the two routes are significantly different from one another; at least, that’s what we found. Alternatively certified teachers in our sample scored significantly higher on all sections of the Praxis I and on the Praxis II professional knowledge exams. The biggest difference between the groups was in math, where alternatively certified teachers scored roughly a half of a standard deviation higher than traditionally certified teachers.

Alternative routes to the classroom seem to be attracting individuals who have higher academic capabilities, on average, than the traditional route to the classroom. This finding is not unique to Arkansas. Tim Sass (2011) found that alternatively certified teachers in Florida scored significantly higher on the SAT. In New York, a team of researchers found that alternatively certified teachers from more selective programs performed significantly better than traditionally trained teachers: “Only 5 percent of newly hired Teaching Fellows and TFA teachers in 2003 failed the Liberal Arts and Sciences Test (LAST) exam on their first attempt, while 16.2 percent of newly hired traditional teachers failed the LAST exam…”

So what does this say about traditional teacher training programs? Some might argue that they are of no use, but that is not exactly what the data say. What we see in the Arkansas data, and in the results from other states, is that colleges of education take individuals who have lower academic capabilities, on average, and make them equally effective as individuals who are more academically capable.

There is indeed value in teacher training programs, but there is also value in alternative routes to the classroom. Each route has its benefits and its drawbacks. That is why Julie and I conclude that “teachers, and students, would be best served by equipping schools with more authority to hire the individuals they believe are qualified for the job and to certify those individuals who meet the expectations in the classroom.” Expand routes to the classroom and let local school leaders do their job. Let’s let them decide which teacher is the right fit for their school.

————————————————-

James Shuls is the Director of Education Policy at the Show-Me Institute.  He earned his doctorate in education policy from the University of Arkansas’ Department of Education Reform.


Why I Hate the Olympics

January 6, 2014

I hate the Olympics.  I hate everything about them… their show-casing of murderous authoritarian regimes, their graft and corruption, their promotion of obscure sports that generate little genuine interest, their hypocritical claim of being non-commercial and non-political, their subordination of athletic excellence to soap-opera story-telling… everything.

But soon it will be nearly impossible to escape the media-hype of the Olympics.  NBC has an enormous investment in broadcast rights they need to recoup.  Putin needs to advertise the greatness of re-hashed fascism.  And every hyper-nationalist has to obscure his regime’s abuses and claim superiority based on the defeat of a proximate foe.  Dictators, oppressors, exploiters, and scumbags of every stripe love the Olympics.  I don’t see why we should.

Unfortunately, even in the education policy world we will see folks attempt to channel some of the attention the Olympics generate toward their policy talking-points.  I say ignore them.  Even better — rather than worship at the altar of the Olympics, every time someone in the education policy world tries to harness this authoritarian and corrupt institution as part of an attention-seeking gambit, I propose that we should take a moment to sing the praises of those who advance the cause of liberty.

Sports and competition are great things.  But they are only great when they are organized, engaged-in, and voluntarily paid for by free people.  Otherwise they are just the bread and circuses of the new-age Caesars.

(Typo corrected)


Marc Tucker and Diane Ravitch, Please Contact Santa

December 19, 2013

As I’ve written before, every stripe of education charlatan has been cherry-picking PISA data to support whatever policies he or she prefers.  From Diane Ravitch’s obsession with imitating Finland to Marc Tucker’s divining of lessons from the “top performers,”  we’ve seen a host of causal claims attributed to the relationship between PISA results and particular practices or policies that are not causal at all.

Over on the Education Next blog, Matt Chingos has a brilliant piece demonstrating the relationship between Christmas spirit and student achievement. Matt even runs some regressions to “prove” his point — something that Diane Ravitch, Marc Tucker, and most other “best practices” gurus can’t or don’t bother doing.  Apparently, spending more on Christmas shopping “predicts” higher student achievement, controlling for some demographic factors.  Clearly, we don’t need smaller classes or better teacher-training to make schools better, we just need more Christmas spirit (or at least consumption).

This is why random-assignment and other research designs that more strongly identify causation are so important.  And this is why we should focus on random assignment research on private and charter choice  rather than the results of weaker research designs on those questions.


More #1

December 17, 2013


2013 has been a very good year.  In addition to having my piece with Brian Kisida and Dan Bowen about field trips to art museums be the most viewed and emailed piece in the Sunday New York Times, and having the research on which that piece was based be the most viewed piece in Education Next, I’ve now learned that one of my blog posts was the most read post on the Education Next site.

This most-viewed Ed Next blog post was one I wrote about whether high school athletic success comes at the expense of academic success.  It was based on an article that Dan Bowen and I wrote for the Journal of Research in Education.

A few other observations about these popular pieces:

  • They are about art and athletics, not math and reading.  Education reformers (including myself) have gone too far in focusing narrowly on math and reading achievement scores, as if those were the only things about schools that matter.  As it turns out, people clearly think that the arts, athletics, and other things are also important and would like to read more articles about them.  I also think they would like schools and policymakers to pursue a diversity of goals and not just maximize math and reading achievement scores.
  • Dan Bowen was co-author on both the art and athletics research projects.  Dan just graduated from our doctoral program in education policy and is currently a post-doc at Rice University.  Next year he’ll be back on the academic job market and I think having two #1 research projects won’t hurt.
  • Department of Education Reform folks had 3 of the top 10 spots for blog posts and 4 of the top 20 articles in Education Next.  Way to go team!

Field Trip Research #1 Again

December 13, 2013


In addition to being the most viewed and emailed article in the Sunday New York Times, my field trip research with Brian Kisida and Dan Bowen is the most read article in the journal Education Next during 2013.  The University of Arkansas may not have the #1 football or basketball teams (and “Well, I say let Harvard have its football and academics, Yale will always be first in gentlemanly club life”) but we are #1 in research on the effects of field trips to cultural institutions.  Go team!