Reading First Disappoints

May 1, 2008

A new study released today by the U.S. Department of Education finds that Reading First, the phonics-based, federally funded reading program, failed to yield improvements in reading comprehension.  The study doesn’t demonstrate that it is ineffective to emphasize phonics, only that it was ineffective to adopt this particular program.  That could be because there are problems with the design or implementation of Reading First.  Or it could be because the control group was also receiving phonics instruction, just without the particular elements of Reading First.  And it is possible that phonics is less effective than some people thought.

Whatever the explanation, this well-designed study undermines confidence that instructional reforms like Reading First can, by themselves, transform the educational system.


State Regulation of Private Schools: the Good, the Bad and the Ugly

April 30, 2008

(Guest post by Greg Forster)

 

Today, the Friedman Foundation for Educational Choice releases a report that evaluates how each of the 50 states regulates private schools. While all states regulate things like health and safety, most states go further and impose unreasonable and unnecessary burdens on private schools. This creates barriers to entry, hindering competition and thereby reducing the quality of both public and private schools; it also limits the freedom of parents to choose how their children will be educated. Friedman Foundation Senior Fellow Christopher Hammons graded each state based on how good a job it does of regulating private schools. Scroll down to see the grades.  

 

Accompanying the report, we have compiled lists of all the laws and regulations governing private schools in each of the 50 states. The lists are now available on our website.  

 

Our goal is to educate the public on two fronts. First, we often hear private schools described as “unregulated” by forces hostile to school choice. Private schools are in fact regulated and are accountable to the public for following a large body of laws and regulations. Second, there is wide variation from state to state in the quality of private school regulation. We hope to make the public aware of these disparities so that states with poor regulatory systems will themselves be accountable to the public.  

 

To help ensure the accuracy of our list of private school laws and regulations in each state, we contacted each of the 50 state departments of education, asking them to review our lists and let us know if we had anything missing or incorrect. Each state has an extremely large body of laws and regulations, so any effort to locate all the laws and regulations on a particular topic is very difficult, and we wanted to do everything possible to make sure we didn’t miss anything. As you will see below, some states were more helpful than others.

The Good

The Good #1: About one-third of the states (18) earned a grade in the A or B range. Florida and New Jersey were tied for having the nation’s best regulatory systems for private schools, followed closely by Connecticut and Delaware.

 

The Good #2: I will admit that I expected most of our e-mails to the state departments of education would be ignored. As it turned out, most of the states – 29 of them – not only got back to us but went over our lists and either said they were OK as is or offered corrections. In fact, publication of the report was delayed so that we would have time to process all the constructive input we were getting from state departments of education. So let me pour myself a big, delicious bowl of crow and apologize to the departments of education in Connecticut, Delaware, Florida, Georgia, Iowa, Illinois, Kentucky, Louisiana, Maryland, Michigan, Missouri, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Ohio, Oregon, South Carolina, South Dakota, Tennessee, Texas, Virginia, Washington, West Virginia, Wisconsin, and Wyoming. I’m sorry I doubted you, and we greatly appreciate your help.  

 

In addition, Arkansas and Arizona deserve recognition for getting back to us and letting us know that they were unable to help us with our request.

The Bad 

The Bad #1: Almost half the states (22) receive D or F grades for the unnecessary burdens imposed on private schools by their laws and regulations. North Dakota ranked the worst in the nation by a large margin, followed by South Dakota, Alabama, Maryland, New York and Tennessee.  

 

The Bad #2: The departments of education in 17 states did not respond to our attempts to contact them. California, Colorado, Hawaii, Idaho, Indiana, Kansas, Maine, Massachusetts, Minnesota, Missouri Mississippi, [oops – apologies to the DOE of Missouri and the schoolchildren of Mississippi] Montana, Nebraska, North Dakota, Oklahoma, Pennsylvania, Rhode Island, and Utah, please check whether you still have a department of education.  

 

Mysteriously, Alaska responded to our initial inquiry, but then didn’t respond to our follow-up communications. 

The Ugly 

Alabama’s department of education deserves special recognition for its efforts to help us. Our request was considered so important that it was ultimately handled by no less than the department’s general counsel.  

 

The department’s first response was to ask where we had gotten our list of Alabama’s private school laws and regulations, and how we were planning to publish it.  

 

I did not ask why they wanted to know, or whom they were planning to pass the information on to once I told them. Instead, I replied that we had compiled our list from the state’s publicly available laws and regulations, and that we were going to post the list on our website and publish a report looking at the laws and regulations in all 50 states.  

 

Their response to that was: “After continued review by appropriate persons and because of the depth of information that you have forwarded to us, it has been determined that this request needs to be reviewed by our SDE Legal Department.” They also asked for more time, which we were happy to give them, as we did for every department that asked for it.  

 

The next and final communication we received was this, which I reprint in its entirety:

I am the General Counsel for the State Department of Education. I have been asked by the Deputy Superintendent of Education, Dr. Eddie Johnson, to review and respond to your request. There are numerous errors contained in the four page document titled ALABAMA. I submit that a further review of our laws and regulations might be helpful. You can access our statutes at www.legislature.state.al.us. The Administrative Code for the Alabama Department of Education can be found at our website, www.alsde.edu/html/home.asp. Thank you for your interest in Alabama.

 

The message was signed “Larry Craven.” Really.  

 

I offer no speculation as to why Mr. Craven would tell us that our document contained numerous errors, but decline to specify any of them.  

 

If at any time he or any other party will be so kind as to specify anything in our list of laws and regulations for Alabama or any other state that’s wrong or missing, we will gladly make any necessary corrections. In a project of this size, combing through countless thousands of laws and regulations to find the ones relevant to private schools, there would be no shame in having missed some. We make a point of saying so both in the report itself and in a disclaimer that appears on each of the 50 state lists we compiled and put on our website.  

 

That said, this also should be said: we wouldn’t have to comb through countless thousands of laws and regulations, a process inherently subject to this kind of difficulty, if the 50 state departments of education provided this information to the public in an easily accessible format. (Some do, but most don’t.) Our only goal here is to get public-domain information actually delivered to the public. We wish we could say that goal was shared by everyone in charge of running the nation’s education system. 

Grades for State Laws and Regulations Governing Private Schools

Alabama F
Alaska B
Arizona A-
Arkansas A-
California B
Colorado B
Connecticut A
Delaware A
Florida A
Georgia A-
Hawaii C+
Idaho C+
Illinois C+
Indiana D-
Iowa D
Kansas F
Kentucky B
Louisiana D
Maine D+
Maryland F
Massachusetts C-
Michigan C-
Minnesota B+
Mississippi F
Missouri A-
Montana F
Nebraska F
Nevada F
New Hampshire C+
New Jersey A
New Mexico C+
New York F
North Carolina D
North Dakota F
Ohio C-
Oklahoma B
Oregon C+
Pennsylvania D
Rhode Island D
South Carolina F
South Dakota F
Tennessee F
Texas B-
Utah A-
Vermont D
Virginia B
Washington F
West Virginia C-
Wisconsin A-
Wyoming F

 Edited for typos


It’s Only a Flesh Wound

April 29, 2008

(Guest post by Ryan Marsh)

Many reform strategies are predicated on the belief that teachers have the largest impact on student achievement and that we can measure a teacher’s contribution with reasonable accuracy. Policies such as performance pay, or other efforts to recruit and retain effective teachers, require reasonably accurate identification of which teachers are the most effective, and which the least, at adding to their students’ achievement.

Value-added models, or VAMs, are the statistical models commonly used for this purpose. VAMs attempt to estimate teacher effectiveness by controlling for prior achievement and other student characteristics.
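To make that concrete, here is a minimal sketch of one common way a VAM can be set up: regress current scores on prior scores and student characteristics, with teacher indicators whose coefficients serve as the teacher-effect estimates. This is illustrative only, not the specification used in the papers discussed below, and the file and column names are hypothetical.

```python
# Minimal value-added sketch (illustrative only; not the models from the
# papers discussed in this post). Regress current scores on prior scores,
# student characteristics, and teacher indicators; the coefficients on the
# teacher indicators serve as crude value-added estimates.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_year_data.csv")  # hypothetical student-year file

vam = smf.ols(
    "score ~ prior_score + free_lunch + ell + C(teacher_id)",
    data=df,
).fit()

# Estimated teacher effects, each relative to the omitted reference teacher.
teacher_effects = vam.params.filter(like="C(teacher_id)")
print(teacher_effects.sort_values(ascending=False).head(10))
```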

Two recent working papers have started a very important debate about the use of VAMs, a debate that will greatly influence future education policy and research. Economist Jesse Rothstein has a working paper in which he performs a critical analysis of these VAMs and their ability to estimate teacher effectiveness. His analysis focuses on the question of whether students are randomly assigned to teachers.  If they are not, then the results of a VAM should not necessarily be interpreted as causal estimates of teacher effectiveness.  That is, if some teachers are non-randomly assigned students who will learn at a faster rate than others, then our estimates of who is an effective teacher could be biased.

Without getting too technical, Rothstein checks to see whether a student’s future teacher can “predict” that student’s past or present scores. If a teacher’s identity predicts growth in achievement for students before he or she becomes their teacher, then we have evidence of non-random assignment of students to teachers.  After all, teachers could not have caused things that happened in the past.
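A rough sketch of that kind of falsification check, under the same hypothetical data layout as above (and not Rothstein’s exact specification), would regress this year’s achievement gain on indicators for next year’s teacher and test whether those indicators jointly matter:

```python
# Sketch of a Rothstein-style falsification check (not his exact
# specification). Under random assignment, next year's teacher should not
# "predict" this year's gain; a significant joint F-test on the
# future-teacher indicators is evidence of non-random sorting.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("student_year_data.csv")  # hypothetical student-year file
df["gain"] = df["score"] - df["prior_score"]

restricted = smf.ols("gain ~ prior_score", data=df).fit()
full = smf.ols("gain ~ prior_score + C(next_year_teacher_id)", data=df).fit()

# Nested-model F-test: do the future-teacher indicators add explanatory power?
print(anova_lm(restricted, full))
```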

But even if non-random assignment of students to teachers biases VAMs, the question is how seriously our assessments of who is an effective teacher are distorted.  Many measures have biases and imperfections, but we still rely on them because the distortions are relatively minor.  Rothstein recognizes this when he suggests on p. 32 a way of assessing the magnitude of the bias:

“An obvious first step is to compare non-experimental estimates of individual teachers’ effects in random assignment experiments with those based on pre- or post- experimental data (as in Cantrell, Fullerton, et. al 2007).”

The working paper he cites (by Steven Cantrell, Jon Fullerton, Thomas J. Kane, and Douglas O. Staiger) uses data from an experimental analysis of National Board for Professional Teaching Standards (NBPTS) certification. In that paper, the authors use a random assignment process in which NBPTS applicant teachers are paired with non-applicant comparison teachers in their school, and principals set up two classrooms that they would be willing to assign to the NBPTS teacher. One class is randomly assigned to each teacher and compared with the class not chosen. The paper also uses VAMs to estimate teacher effectiveness before the experiment was run, and those prior effectiveness estimates are used to predict how much better a teacher’s students performed during the experiment than the students in the comparison classrooms. This allows the researchers to test how well the VAM estimates hold up against a random assignment experiment.

That is, teacher effectiveness was measured using VAMs before students were randomly assigned to teachers and then teacher effectiveness was measured after students were randomly assigned, when no bias would be present.  The two correlate well, suggesting little distortion from the non-random assignment.  As the authors conclude, the VAM estimates have “considerable predictive power in predicting student achievement during the experiment.”
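The logic of that validation exercise can be sketched in a few lines. This is illustrative only, not the actual Cantrell et al. analysis, and the file and column names are hypothetical: for each matched pair of teachers, see how well the gap in their pre-experiment VAM estimates predicts the gap in their classes’ achievement under random assignment.

```python
# Illustrative sketch of the validation logic (not the actual Cantrell et al.
# analysis). One row per matched teacher pair:
#   vam_gap          - difference in pre-experiment VAM estimates
#   experimental_gap - difference in class-average achievement after
#                      classrooms were randomly assigned
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

pairs = pd.read_csv("teacher_pairs.csv")

fit = smf.ols("experimental_gap ~ vam_gap", data=pairs).fit()
print(fit.params)
print("correlation:", pairs["vam_gap"].corr(pairs["experimental_gap"]))
```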

In short, Rothstein raises a potentially lethal concern for policies based on value-added models, but the paper by Cantrell et al. suggests that the concern may be little more than a flesh wound.


New Special Ed Voucher Study

April 29, 2008

Marcus Winters and I have a new study out today on the effects of special education vouchers in Florida on the academic achievement of disabled students who remain in public schools.  As we write in an op-ed in this morning’s Washington Times: “we found that those students with relatively mild disabilities — the vast majority of special-education students in the state and across the nation — made larger academic gains when the number of private options nearby increased. Students with more severe disabilities were neither helped nor harmed by the addition of McKay scholarship-receiving private schools near their public school.” The findings are based on an analysis of individual student data using a fixed effects model.
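For readers curious about the mechanics, here is a rough illustration of what a student fixed-effects specification looks like. This is a sketch under assumed data, not the study’s actual model, and the file and column names are hypothetical.

```python
# Illustrative student fixed-effects sketch (not the study's actual model).
# Demeaning within student absorbs fixed student characteristics, so the
# estimate relies on changes over time in the number of nearby
# McKay-accepting private schools. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fl_special_ed_panel.csv")  # hypothetical student-year panel

for col in ["score_gain", "nearby_mckay_schools"]:
    df[col + "_dm"] = df[col] - df.groupby("student_id")[col].transform("mean")

# Cluster standard errors by school; in practice one would also estimate
# this separately for students with mild and with severe disabilities.
fe = smf.ols("score_gain_dm ~ nearby_mckay_schools_dm", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(fe.summary())
```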

The results of this analysis of Florida’s special education voucher program have important implications for the four other states (Arizona, Georgia, Ohio, and Utah) that have similar programs.  It also suggests ways that the federal legislation governing special education, the Individuals with Disabilities Education Act (IDEA), could be reformed.  We have two more op-eds coming out this week in The Washington Times that will explore these issues.

Lastly, this new study speaks to the general question of whether expanded choice and competition improve achievement in public schools.  Like the bulk of previous research, including Belfield and Levin, Chakrabarti, Greene and Forster, Hoxby, Rouse et al., and West and Peterson (a partial list), the new study finds that student achievement in public schools improves as vouchers expand the set of private options.

UPDATE

There is also an editorial endorsing continuation of the voucher program in DC in the Washington Post and another embracing vouchers in the Wall Street Journal.


Manipulatives Make Math Mushy

April 25, 2008

An interesting item in this morning’s New York Times: someone has finally done an experimental study of the math instructional technique that emphasizes the use of blocks, balls, and other concrete “manipulatives” to teach math.  Researchers at Ohio State University created an experiment in which they randomly assigned subjects to be taught a new math concept either by focusing on the abstract math rule, by focusing on the use of manipulatives, or by combining both techniques.  They then tested how well subjects had learned the math concept by having them apply it to a new situation.  It turns out that students taught with manipulatives did the worst, the ones taught abstractly did the best, and those taught with the combined approach fell in the middle.  It appears that those taught math with more concrete examples had a harder time transferring the math concept to a different concrete example.  Kids taught math with tennis balls have a harder time applying the principle to railroad cars.
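For those who like to see the mechanics, a comparison like this is straightforward to run once you have transfer-test scores by condition. Here is a minimal sketch with a hypothetical data file; it is not the Ohio State researchers’ actual analysis.

```python
# Minimal sketch of comparing transfer-test scores across the three
# instructional conditions (abstract, manipulatives, combined) with a
# one-way ANOVA. Illustrative only; not the Ohio State researchers'
# analysis. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("transfer_scores.csv")  # columns: condition, score

groups = [g["score"].to_numpy() for _, g in df.groupby("condition")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Group means show the ordering (which condition did best, worst, middle).
print(df.groupby("condition")["score"].mean())
```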


Surprise! What Researchers Don’t Know about Florida’s Vouchers

April 21, 2008

(Guest post by Greg Forster)

 

Florida’s A+ program, with its famous voucher component, has been studied to death. Everybody finds that the A+ program has produced major improvements in failing public schools, and among those who have tried to separate the effect of the vouchers from other possible impacts of the program, everybody finds that the vouchers have a positive impact. At this point our understanding of the impact of A+ vouchers ought to be pretty well-formed.

 

But guess what? None of the big empirical studies on the A+ program has looked at the program’s impact after 2002-03. That was the year in which large numbers of students became eligible for vouchers for the first time, so it’s natural that a lot of research would be done on the impact of the program in that year. Still, you would think somebody out there would be interested in finding out, say, whether the program continued to produce gains in subsequent years. In particular, you’d think people would be interested in finding out whether the program produced gains in 2006-07, the first school year after the Florida Supreme Court struck down the voucher program in a decision that quickly became notorious for its numerous false assumptions, internal inconsistencies, factually inaccurate assertions and logical fallacies.

 

Yet as far as I can tell, nobody has done any research on the impact of the A+ program after 2002-03. Oh, there’s a study that tracked the schools that were voucher-eligible in 2002-03 to see whether the gains made in those schools were sustained over time. But that gives us no information about whether the A+ program continued to produce improvements in other schools that were designated as failing in later years. For some reason, nobody seems to have looked at the crucial question of how vouchers impacted Florida public schools after 2002-03.

 

[format=shameless self-promotion]

 

That is, until now! I recently conducted a study that examines the impact of Florida’s A+ program separately in every school year from 2001-02 through 2006-07. I found that the program produced moderate gains in failing Florida public schools in 2001-02, before large numbers of students were eligible for vouchers; big gains in 2002-03, when large numbers of students first became eligible for vouchers; significantly smaller but still healthy gains from 2003-04 through 2005-06, when artificial obstacles to participation blocked many parents from using the vouchers; and only moderate gains (smaller even than the ones in 2001-02) after the vouchers were removed in 2006-07.

 

[end format=shameless self-promotion]

 

It seems to me that this is even stronger evidence than previous studies provided that the public school gains from the A+ program were largely driven by the healthy competitive incentives provided by vouchers. The A+ program did not undergo significant changes from year to year between 2001-02 and 2006-07 that would explain the dramatic swings in the size of the effect, except for the vouchers. In each year, the positive effects of the A+ program track the status of vouchers in the program. If the improvements in failing public schools are not primarily from vouchers, what’s the alternative explanation for these results?
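The basic shape of a year-by-year analysis like this can be sketched quickly. What follows is a hypothetical illustration, not the study’s actual model, with made-up file and column names: interact the failing-school indicator with school-year dummies so the estimated effect is allowed to differ in each year.

```python
# Sketch of a year-by-year specification (a hypothetical illustration, not
# the study's actual model). Interacting the failing-school indicator with
# school-year dummies lets the estimated effect differ in each year, so the
# pattern can be compared with the status of vouchers in that year.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("fl_school_panel.csv")  # hypothetical school-year panel

model = smf.ols(
    "score_gain ~ C(year) + failing_school:C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["school_id"]})

# One failing-school coefficient per year.
print(model.params.filter(like="failing_school"))
```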

 


Obviously the most newsworthy finding is that the A+ program is producing much smaller gains now that the vouchers are gone. But we should also look more closely at the finding that the program produced smaller (though still quite substantial) gains in 2003-04 through 2005-06 than it did in 2002-03.

 

As I have indicated, I think the most plausible explanation is the reduced participation rates for vouchers during those years, attributable to the many unnecessary obstacles that were placed in the path of parents wishing to use the vouchers. (These obstacles are detailed in the study; I won’t summarize them here so that your curiosity will drive you to go read the study.) While the mere presence of a voucher program might be expected to produce at least some gains – except where voucher competition is undermined by perverse incentives arising from bribery built into the program, as in the D.C. voucher – it appears that public schools may be more responsive to programs with higher participation levels.

 

There’s a lot that could be said about this, but the thing that jumps to my mind is this: if participation rates do drive greater improvements in public schools, we can reasonably expect that once we have universal vouchers, the public school gains will be dramatically larger than anything we’re getting from the restricted voucher programs we have now.

 

One more question that deserves to be raised: how come nobody else bothered to look at the impact of the A+ program after 2002-03 until now? We should have known a long time ago that the huge improvements we saw in that year got smaller in subsequent years.

 

It might, for example, have caused Rajashri Chakrabarti to modify her conclusion in this study that failing-schools vouchers can be expected to produce bigger improvements in public schools than broader vouchers. In this context it is relevant to point out that many of the obstacles that blocked Florida parents from using the vouchers arose from the failing-schools design of the program. Chakrabarti does great work, but the failing-schools model introduces a lot of problems that will generally keep participation levels low even when the program isn’t being actively sabotaged by the state department of education. If participation levels do affect the magnitude of the public school benefit from vouchers, then the failing-schools model isn’t so promising after all.

 

So why didn’t we know this? I don’t know, but I’ll offer a plausible (and conveniently non-falsifiable) theory. The latest statistical fad is regression discontinuity, and if you’re going to do regression discontinuity in Florida, 2002-03 is the year to do it. And everybody wants to do regression discontinuity these days. It’s cutting-edge; it’s the avant-garde. It’s like smearing a picture of the Virgin Mary with elephant dung – except with math.

 

You see the problem? It’s like the old joke about the guy who drops his keys in one place but looks for them in another place because the light is better there. I think the stats profession is constantly in danger of neglecting good research on urgent questions simply because it doesn’t use the latest popular technique.

 

I don’t want to overstate the case. Obviously the studies that look at the impact of the A+ program in 2002-03 are producing real and very valuable knowledge, unlike the guy looking for his keys under the street lamp (to say nothing of the elephant dung). But is that the only knowledge worth having?

 

(Edited to fix a typo and a link.)