Tampa Tribune Beats the Rush

July 1, 2009

Greetings from Tampa

(Guest post by Greg Forster)

The editors of the Tampa Tribune have decided not to join the misinformed rush to judgment on Florida’s tax-credit scholarship program:

It’s too early to accurately gauge the students’ academic progress, as the University of Florida economics professor who oversaw the report emphasized. It measured only first-year test gains. Researcher David Figlio was handicapped by incomplete data for a baseline.

I’m shocked to see that in print. A newspaper actually checked the facts!

I do have to quibble with the editorial’s assertion that the Figlio study shows students who select into the program are among the most “academically challenged.” We don’t, in fact, know that. We know that they are more likely to come from schools that are among the most academically challenged. But school characteristics and individual student characteristics are two different things; a student from a low-performing school is not necessarily a low-performing student.

This matters because choice opponents have relied upon unsupported assertions about selection bias to wave away the consistent empirical research consensus showing that school choice works. In fact, the Figlio study doesn’t allow us to address this question, as the study itself explicitly says. Other research that does examine this question has not turned up any serious evidence that vouchers either “cream” (selecting high performers) or “dredge” (selecting low performers).

But the editors are back on solid ground when it comes to finances:

The program is a good deal for taxpayers.

Attending public school costs more. When local, state and federal costs, plus capital costs, are factored in, the average cost per student in public school is $12,000.

In the voucher program, the maximum scholarship is $3,950, about 57 percent of the roughly $7,000 the state pays per public school student.

And a scholarship parent pays on average $1,000 a year for their child to attend the private school. The program requires the parents and child to be motivated.

By taking challenging students from poor-performing schools, the Tax Credit Scholarships are easing the burden on the public school system, not diverting resources.
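To make the editorial’s arithmetic explicit, here is a quick back-of-the-envelope check using the figures quoted above. The last two lines are my own calculation, not the Tribune’s, and they ignore complications like fixed costs that don’t disappear when a single student leaves:

```python
# Back-of-the-envelope check of the Tribune's figures. The per-student amounts
# come from the editorial; the savings lines are my own rough calculation.
max_scholarship = 3950.0        # maximum tax-credit scholarship
state_cost_per_pupil = 7000.0   # rough state payment per public school student
total_cost_per_pupil = 12000.0  # local + state + federal + capital costs per student

print(f"Scholarship as share of state payment: {max_scholarship / state_cost_per_pupil:.0%}")  # about 56-57%
print(f"State outlay avoided per scholarship student: ${state_cost_per_pupil - max_scholarship:,.0f}")
print(f"Total cost avoided per scholarship student: ${total_cost_per_pupil - max_scholarship:,.0f}")
```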

Kudos to the Tribune for checking the facts rather than rushing to judgment!


PJM on Free to Teach

June 1, 2009

Free to Teach cover

(Guest post by Greg Forster)

Today Pajamas Media runs my column on why the government school monopoly is bad for teachers:

Everyone knows a monopoly is bad for the people who rely on its services. But monopolies are also bad for the people who work for them. Just like the monopoly’s clients, its employees have few alternatives. If they’re not treated well at work, they can’t go work for a competing employer. That means the monopoly doesn’t have to worry about keeping them happy.

And the education monopoly also locks out parental pressure for better teaching, which probably helps explain the better working conditions teachers enjoy in private schools. Public schools are government-owned and government-run, so the main pressure on them comes from political imperatives. The main pressure on private schools is keeping parents happy. Given that parents primarily want better teaching, which of those two options do you think is better for teachers?

A certain recent study is mentioned in the column.


Free to Teach: What America’s Teachers Say about Teaching in Public and Private Schools

May 20, 2009

Free to Teach cover

(Guest post by Greg Forster)

Today the Friedman Foundation releases Free to Teach: What America’s Teachers Say about Teaching in Public and Private Schools, a study I co-authored with my Friedman colleague Christian D’Andrea.

It’s a simple study with a powerful finding. We used the teacher data from the Schools and Staffing Survey, a very large, nationally representative, confidential survey of school employees conducted by the U.S. Department of Education. We just separated public school teachers from private school teachers and compared their answers on questions covering their working conditions.
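For the statistically curious, here is a minimal sketch of the kind of sector-by-sector comparison involved. The file and column names are invented for illustration (the actual survey uses its own variable codes), and a proper analysis would also apply the survey’s sampling weights:

```python
# Illustrative sketch of comparing public and private school teachers' answers.
# File and column names are hypothetical; the real SASS data use their own codes
# and should be analyzed with the survey's sampling weights.
import pandas as pd

def percent_top_response(df: pd.DataFrame, question: str, top: str = "strongly_agree") -> pd.Series:
    """Share of teachers in each sector giving the top response to a question."""
    return (
        df.groupby("sector")[question]
        .apply(lambda answers: (answers == top).mean() * 100)
        .round(1)
    )

teachers = pd.read_csv("sass_teacher_extract.csv")  # hypothetical extract of the survey
for question in ["control_over_textbooks", "influence_on_curriculum", "has_needed_supplies"]:
    print(question)
    print(percent_top_response(teachers, question))  # public vs. private percentages
```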

We found that the government school system is not providing the best environment for teaching. Public school teachers fare worse than private school teachers on virtually every measurement – sometimes by large margins. They have less autonomy in the classroom, less influence over school policy, less ability to keep order, less support from administrators and peers, and less safety. So it’s not surprising that they also have less job satisfaction on a variety of measures. About the only thing they have more of is burnout. (The measures of teacher burnout were some of the more eye-popping numbers we found in the federal data set.)

Free to Teach box scores

The Schools and Staffing Survey is observational, so we can’t run causal statistical analyses. But it’s really not hard to figure out why private schools provide a better teaching environment. The government school system responds mainly to political imperatives, because anything owned and run by government is inherently political and always will be. Meanwhile, the biggest pressure on private schools is from parents, because if the schools don’t please the parents, the parents can take their children elsewhere.

Which of the two sources of influence – politics or parents – do you think is more focused on demanding that schools provide better teaching?

That’s why private schools deliver a better education even when they serve the same students and families as public schools, and public schools improve when parents can choose their schools.

Parents and teachers are traditionally thought of as antagonists. And no wonder – under the current system, parents have no effective control over their children’s education other than what they can extract from their teachers by pestering and nagging them. The status quo is designed to force parents and teachers into an antagonistic relationship.

But in the big picture, parents are the best friends teachers have. Ultimately, it’s parents who provide the pressure for better teaching, and – if what we’re seeing in the Schools and Staffing Survey is any indication – that pressure for better teaching provides better working conditions for teachers.

Here’s the executive summary:

Many people claim to speak on behalf of America’s teachers, but we rarely get the opportunity to find out what teachers actually have to say about their work – especially when people are debating government control of schooling.

This study presents data from a major national survey of teachers conducted by the U.S. Department of Education: the Schools and Staffing Survey. We break down these observational data for public and private school teachers in order to compare what teachers have to say about their work in each of the two school sectors.

These are eye-opening data for the teaching profession. They show that public school teachers are currently working in a school system that doesn’t provide the best environment for teaching. Teachers are victims of the dysfunctional government school system right alongside their students. Much of the reason government schools produce mediocre results for their students is that the teachers in those schools are hindered from doing their jobs as well as they could and as well as they want to. By listening to teachers in public and private schools, we discover numerous ways in which their working conditions differ—differences that certainly help explain the gap in educational outcomes between public and private schools. Exposing schools to competition, as is the case in the private school sector, is good for learning partly because it’s good for teaching.

Key findings include:

• Private school teachers are much more likely to say they will continue teaching as long as they are able (62 percent v. 44 percent), while public school teachers are much more likely to say they’ll leave teaching as soon as they are eligible for retirement (33 percent v. 12 percent) and that they would immediately leave teaching if a higher paying job were available (20 percent v. 12 percent).

• Private school teachers are much more likely to have a great deal of control over selection of textbooks and instructional materials (53 percent v. 32 percent) and content, topics, and skills to be taught (60 percent v. 36 percent).

• Private school teachers are much more likely to have a great deal of influence on performance standards for students (40 percent v. 18 percent), curriculum (47 percent v. 22 percent), and discipline policy (25 percent v. 13 percent).

• Public school teachers are much more likely to report that student misbehavior (37 percent v. 21 percent) or tardiness and class cutting (33 percent v. 17 percent) disrupt their classes, and are four times as likely to say student violence is a problem on at least a monthly basis (48 percent v. 12 percent).

• Private school teachers are much more likely to strongly agree that they have all the textbooks and supplies they need (67 percent v. 41 percent).

• Private school teachers are more likely to agree that they get all the support they need to teach special needs students (72 percent v. 64 percent).

• Seven out of ten private school teachers report that student racial tension never happens at their schools, compared to fewer than half of public school teachers (72 percent v. 43 percent).

• Although salaries are higher in public schools, private school teachers are more likely to be satisfied with their salaries (51 percent v. 46 percent).

• Measurements of teacher workload (class sizes, hours worked, and hours teaching) are similar in public and private schools.

• Private school teachers are more likely to teach in urban environments (39 percent v. 29 percent) while public school teachers are more likely to teach in rural environments (22 percent versus 11 percent).

• Public school teachers are twice as likely as private school teachers to agree that the stress and disappointments they experience at their schools are so great that teaching there isn’t really worth it (13 percent v. 6 percent).

• Public school teachers are almost twice as likely to agree that they sometimes feel it is a waste of time to try to do their best as a teacher (17 percent v. 9 percent).

• Nearly one in five public school teachers has been physically threatened by a student, compared to only one in twenty private school teachers (18 percent v. 5 percent). Nearly one in ten public school teachers has been physically attacked by a student, three times the rate in private schools (9 percent v. 3 percent).

• One in eight public school teachers reports that physical conflicts among students occur every day; only one in 50 private school teachers says the same (12 percent v. 2 percent).


PJM Column on Milwaukee Study

March 30, 2009

(Guest post by Greg Forster)

This morning, Pajamas Media carries my column on the results of the new Milwaukee studies released last week by the School Choice Demonstration Project:

It’s bad enough that everyone seems to be ignoring the program’s positive impact on public schools. About four-fifths of the students are still in public schools. Why look only at the results for the voucher students, only one-fifth of the total? If you had a medical treatment that would help four-fifths of all patients suffering from some horrible disease — and what else can you call the present state of our education system but a horrible disease? — that would be considered a fantastic result.

But it gets worse. These results don’t just show that the program improves education for students in public schools. They also indicate that the program improves education for the students who are using vouchers.


Evidence Shows Vouchers Are a Win-Win Solution

February 23, 2009

Win-Win study cover

(Guest post by Greg Forster)

On Friday, the Friedman Foundation released my new report, “A Win-Win Solution: The Empirical Evidence on How Vouchers Affect Public Schools.” It goes over all the available empirical evidence on . . . well, on how vouchers affect public schools.

Here’s the supercool graphic:

Win-Win study chart

Worth a thousand words, isn’t it? I mean, at what point are we allowed to say that people are either lying, or have been hoodwinked by other people’s lies, when they say that the research doesn’t support a positive impact from vouchers on public schools?

There’s always room for more research. What would we all do with our time if there weren’t? But on the question of what the research we now have says, the verdict is not in dispute.

Here’s the executive summary of the report:

This report collects the results of all available empirical studies on how vouchers affect academic achievement in public schools. Contrary to the widespread claim that vouchers hurt public schools, it finds that the empirical evidence consistently supports the conclusion that vouchers improve public schools. No empirical study has ever found that vouchers had a negative impact on public schools.

There are a variety of explanations for why vouchers might improve public schools, the most important being that competition from vouchers introduces healthy incentives for public schools to improve.

The report also considers several alternative explanations, besides the vouchers themselves, that might explain why public schools improve where vouchers are offered to their students. It concludes that none of these alternatives is consistent with the available evidence. Where these claims have been directly tested, the evidence has not borne them out. The only consistent explanation that accounts for all the data is that vouchers improve public schools.

Key findings include:

  • A total of 17 empirical studies have examined how vouchers affect academic achievement in public schools. Of these studies, 16 find that vouchers improved public schools and one finds no visible impact. No empirical studies find that vouchers harm public schools.
  • Vouchers can have a significant positive impact on public schools without necessarily producing visible changes in the overall performance of a large city’s schools. The overall performance of a large school system is subject to countless different influences, and only careful study using sound scientific methods can isolate the impact of vouchers from all other factors so it can be accurately measured. Thus, the absence of dramatic “miracle” results in cities with voucher programs has no bearing on the question of whether vouchers have improved public schools; only scientific analysis can answer that question.
  • Every empirical study ever conducted in Milwaukee, Florida, Ohio, Texas, Maine and Vermont finds that voucher programs in those places improved public schools.
  • The single study conducted in Washington D.C. is the only study that found no visible impact from vouchers. This is not surprising, since the D.C. voucher program is the only one designed to shield public schools from the impact of competition. Thus, the D.C. study does not detract from the research consensus in favor of a positive effect from voucher competition.
  • Alternative explanations such as “stigma effect” and “regression to the mean” do not account for the positive effects identified in these studies. When these alternative explanations have been evaluated empirically, the evidence has not supported them.

Research Round-Up

February 10, 2009

The U.S. Department of Education released a study on how alternatively certified teachers affect student achievement.  The bottom line is that they find: “students of teachers who chose to enter teaching through an alternative route did not perform statistically different from students of teachers who chose a traditional route to teaching.  This finding was the same for those programs that required comparatively many as well as few hours of coursework. However, among those alternative route teachers who reported taking coursework while teaching, their students performed lower than their traditional counterparts.” 

I’m sure that the headlines will be:  “Alternative Certification Fails to Improve Student Achievement.”  But they will have it backwards.  The real headline should be: “Years of Teacher Education Coursework Yields No Benefits for Student Achievement.”

Besides, the real question is whether alternatively certified teachers are better than the teachers districts would have hired if they had been constrained to hiring only traditionally certified candidates.

And in other research news, the forthcoming issue of Education Next has an article by Paul Peterson and Matthew Chingos comparing student achievement in Philadelphia’s for-profit-managed schools versus district-managed schools.  They find: “the effect of for-profit management of schools is positive relative to district schools, with math impacts being statistically significant. Over the last six years, students learned each year an average of 25 percent of a standard deviation more in math — roughly 60 percent of a year’s worth of learning — than they would have had the school been under district management. In reading, the estimated average annual impact of for-profit management is a positive 10 percent of a standard deviation — approximately 36 percent of a year’s worth of reading. Only the math differences are statistically significant, however.”
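For readers who haven’t seen effect sizes translated into “years of learning,” the conversion is just division by an assumed typical one-year gain. The benchmark below is backed out from the article’s own math figures purely for illustration; it is not a universal constant:

```python
# Translating an effect size into a share of a year's learning: divide by an
# assumed typical one-year gain. The 0.42 SD/year benchmark is backed out from
# the article's own math numbers for illustration; it is not a universal constant.
effect_math_sd = 0.25           # reported annual math effect, in standard deviations
assumed_annual_gain_sd = 0.42   # assumed typical one-year math gain (illustrative)

share_of_a_year = effect_math_sd / assumed_annual_gain_sd
print(f"{share_of_a_year:.0%} of a year's worth of math learning")  # roughly 60 percent
```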


Charters Work, Unions Don’t

January 7, 2009


(Guest post by Greg Forster)

On Monday the Boston Foundation released a study by researchers from Harvard, MIT and Duke, examining Boston’s charter schools and “pilot” schools using a random assignment method (HT Joanne Jacobs).

Pilot schools were created in Massachusetts in 1995 as a union-sponsored alternative to charter schools, which came to the state a year earlier. Pilot schools are owned and operated by the school district. Like charter schools, pilot schools serve students who choose to be there (though it’s easier to get into a charter school than a pilot school; see below). Like charter schools, pilot schools have some autonomy over budget, staffing, governance, curriculum, assessment, and calendar. Like charter schools, pilot schools are regularly reviewed and can be shut down for poor performance.

There are two main differences between charter schools and pilot schools. First, the teachers’ unions. Pilot schools have them, and all the shackles on effective school management that come with them. Charter schools don’t.

Second, some pilot schools are only nominally schools of choice, not real schools of choice like charter schools. Elementary and middle pilot schools – which make up a slender majority of the total – participate in the city’s so-called “choice” program for public schools, and thus have an attendance zone where students are guaranteed admission, and admit by lottery for the spaces left over.  So while on paper everyone who goes to a pilot school “chooses” to be there, some of them will be there only because the city’s so-called “choice” system has frozen them out of other schools. The students compared in the study are all lottery applicants and are thus genuinely “choice students” – they are really there by choice, not because they had no practical alternatives elsewhere. However, the elementary and middle pilot schools are not “choice schools.” (Pilot high schools do not have guaranteed attendance zones and are thus real schools of choice.)

The Boston Foundation examined two treatment groups: students who were admitted by lottery to charter schools and students who were admitted by lottery to pilot schools. The control groups are made up of students who applied to the same schools in the same lotteries, but did not receive admission and returned to traditional public schools.

As readers of Jay P. Greene’s Blog probably know already, random assignment is the gold standard for empirical research because it ensures that the treatment and control groups are very similar. The impact of the treatment (in this case, charter and pilot schools) is isolated from unobserved variables like family background.
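For readers who want to see that logic in miniature, here is a hedged sketch of a lottery-winner versus lottery-loser comparison. The file and column names are hypothetical, and the actual study’s estimation is more sophisticated (it accounts for which lottery each student entered and for actual attendance), but the core idea is a simple comparison of two randomly separated groups:

```python
# Minimal sketch of a lottery-based (random assignment) comparison.
# Data file and column names are hypothetical; the real study also adjusts for
# which lottery each student entered and for whether winners actually attended.
import pandas as pd
from scipy import stats

applicants = pd.read_csv("lottery_applicants.csv")  # one row per lottery applicant

winners = applicants.loc[applicants["won_lottery"] == 1, "test_score_gain"]
losers = applicants.loc[applicants["won_lottery"] == 0, "test_score_gain"]

# Because winners and losers were separated by chance, the difference in their
# average gains estimates the effect of being offered a seat.
effect = winners.mean() - losers.mean()
t_stat, p_value = stats.ttest_ind(winners, losers, equal_var=False)
print(f"Estimated effect of a lottery offer: {effect:.2f} points (p = {p_value:.3f})")
```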

The results? Charter schools produce bigger academic gains than regular public schools; pilot schools don’t.

The two perennial fatal flaws of “public school choice” would both seem to be at work here. First, public school choice is always a choice among schools that all partake of the same systemic deficiencies (read: unions). Choice is not choice if it doesn’t include a real variety of options. And second, public school choice typically offers a theoretical choice but makes it impossible to exercise that choice in practice. In this particular case, if each school has a guaranteed-admission attendance zone, the practical result will be fewer open slots in each school available for choice. (Other kinds of public school choice have other ways of blocking parents from effectively using choice, such as giving districts a veto over transfers.)

Charter schools are only an imperfect improvement on “public school choice” in both of these respects. Charters have more autonomy and thus can offer more variety of choice, but not nearly as much as real freedom of choice would provide. And with charters, as with public school choice, government controls and limits the admissions process.

But charters are an improvement over the status quo, even if only a modest one, as a large body of research has consistently shown.

There are some limitations to the Boston Foundation study, as with all studies. Pilot high schools are not required to admit by lottery if they are oversubscribed, while charter schools are. (Funny how the union-sponsored alternative gets this special treatment – random admission is apparently demanded by the conscience of the community when independent operators are involved, but not for the unions.) Of the city’s pilot high schools, two admit by lottery, five do not, and one admits by lottery for some students but not others. Thus, the lottery comparison doesn’t include five of the pilot high schools. It does include three high schools and all of the elementary and middle schools.

As always, we shouldn’t allow the limitations to negate the evidence we do have. Insofar as we have evidence to address the question, more freedom consistently produces better results, and more unionization consistently doesn’t.


The TIMSS Rorschach Test

December 9, 2008

The Rorschach inkblot test is a psychological test once used to assess personality and emotions.  The way people interpreted ambiguous inkblot images was supposed to say something about who they really were.

The same is true for the interpretations being applied to the results of the 2007 TIMSS (Trends in International Mathematics and Science Study) released today.

Over at Flypaper, Mike Petrilli interprets the gains the US has made in math but not science as suggesting that accountability testing is shifting resources toward math and away from science: “The lesson is that what gets tested gets taught. Under the No Child Left Behind act, and state accountability systems before that, elementary schools have been held accountable for boosting performance in math and reading. There is evidence that American elementary schools are spending less time teaching science, and this is showing up in the international testing data.”

And Mike interprets the relatively good results that Minnesota had (yes, MN took the test as if it were a country) as supporting rigorous standards: “There’s also good news out of Minnesota today, which has made dramatic gains since adopting new, more rigorous math standards.”

But also at Flypaper, Diane Ravitch offers different interpretations.  She sees the gains even in math results as “actually small, only four points.”  She also declines to credit NCLB for any of those gains, even as a perverse result of resource shifting away from science.  She notes that gains were at least as large in the US during the period prior to implementation of NCLB.  And on the topic of Minnesota she takes issue with Mike’s explanation for success: “Minnesota showed dramatic gains on TIMSS not because of ‘new, more rigorous standards,’ but because of that state’s decision to implement a coherent grade-by-grade curriculum in mathematics.”  Umm, I would explain the difference but I got so bored trying to distinguish standards from curriculum that I dozed off for a bit.

Rather than focusing on the gains (or lack of gains) made by the US relative to itself in the past, Mark Schneider at Education Week focuses on the comparison between the US and other countries.  He notes that while the US looks relatively strong on the TIMSS, that is distorted by the large number of  “low-performing countries in the calculation of the international average [including Jordan, Romania, Morocco, and South Africa that] drives down that average, improving the relative performance of our students.”

He further notes that we fare worse on the PISA, which reports results from the 30 OECD countries that are our major trading partners and economic competitors: “We do better in TIMSS than we do on PISA, but this is a function of the countries that participate in each, and we should not let the relatively good TIMSS results lull us into a false sense of complacency. Even in the relatively easier playing field of TIMSS, we are lagging far too many countries in overall math performance and in the performance of our best students.”

And at Huffington Post Gerald Bracey was able to offer his reaction to the results last week, before they were released.  He wrote: “It might be good to keep a few things in mind when considering the data:

1. The Institute for Management Development rates the U. S. #1 in global competitiveness.

2. The World Economic Forum ranks the U. S. #1 in global competitiveness.

3. The U. S. has the most productive workforce in the world.

4. “The fact is that test-score comparisons tell us little about the quality of education in any country.” (Iris Rotberg, Education Week June 11, 2008).

5. ‘That the U. S., the world’s top economic performing country, was found to have schooling attainments that are only middling casts fundamental doubts on the value, and approach, of these surveys…'”

Bracey also said that our students could beat up the students in other countries with higher TIMSS scores.  (Actually, I made that last bit up.)

To summarize, Mike Petrilli sees evidence supporting his past concerns about the narrowing of the curriculum and the need for rigorous standards.  Diane Ravitch sees no evidence to alter her negative view of NCLB.  Mark Schneider, the former head of the National Center for Education Statistics, sees the need to look beyond TIMSS to tougher international comparisons.  And Gerald Bracey doesn’t even have to see the results to know that our education system is doing a great job.  And when I look at the inkblot I see a pudgy guy with a beard and male-pattern baldness laughing.

(edited for clarity)


Replication: The True Test of Research Quality

December 2, 2008

When people can’t argue the facts, they argue peer review.  That’s been my experience when I’ve released non-peer reviewed reports.  Without peer review, folks wonder, how can we know whether to trust these results?

The reality is that even with peer review people still need to wonder whether to trust results.  Peer review is by definition irresponsible — by which I mean that the reviewers have no responsibility.  By being anonymous, reviewers offer their opinions on the merit of research without any meaningful consequence to themselves.  Many reviewers do a laudable job, but there is nothing to stop them from using their reviews to advance findings they prefer and block findings they dislike regardless of the true merit of the work.  Peer review is often little more than the anonymous committee vote of a panel composed of some mix of competitors and allies.  It is about as reliable as the Miss Congeniality vote at a beauty contest.  Do we really think she’s the nicest contestant or did the other contestants voting anonymously have ulterior motives for burying her with faint praise?

The true test of research quality is replication.  Science doesn’t determine the truth by having an anonymous committee vote on what is true.  Science identifies the truth by replicating past experiments and applying them to new situations to see whether the results continue to hold up.

I’m pleased to say that several pieces of my work have been successfully replicated.  By successful replication I mean that the basic findings are upheld.  Replicators almost always make new and different choices about how to handle data or run an analysis.  The question is whether the same basic conclusion is found even when those different choices are made.

The evaluation I did with Paul Peterson and Jiangtao Du of the Milwaukee voucher experiment was successfully replicated by Cecilia Rouse.  The evaluation I did of the Charlotte voucher program was successfully replicated by Josh Cowen.  My study of Florida’s A+ voucher and accountability program was successfully replicated three times — by Raj Chakrabarti; Rouse, et al.; and West and Peterson.  And my graduation rate work has been successfully replicated by Rob Warren and Chris Swanson.

The interesting thing is that every one of my studies above was initially released without peer review.  And every one of them was attacked as unreliable because it had not been peer reviewed.  When they were all later published in peer-reviewed journals (except the grad rate work) and successfully replicated, I don’t remember ever hearing anyone retract their accusations of unreliability.

(edited for typos)


Ohio Charters Save Money for Public Schools and Taxpayers

November 14, 2008

(Guest post by Greg Forster)

It’s raining studies! After this one and then this one comes a study out today from Matthew Carr and Beth Lear of the Buckeye Institute. It’s a fiscal analysis of how charter schools impact the finances of regular public schools in Ohio’s “Big 8” cities.

When a student leaves a regular public school for a charter school (or a private school, for that matter), the district loses the state revenue stream associated with that student. But it gains on the local revenue side: local revenues don’t go down, so the district can take that student’s share of local funds and redirect it to educating the students who remain behind. The net fiscal impact depends on which is bigger, the state revenue stream per student or the local property taxes per student.
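As a back-of-the-envelope illustration of that comparison (the dollar amounts below are placeholders, not figures from the Buckeye Institute study):

```python
# Rough net fiscal impact on a district when one student leaves for a charter.
# Dollar figures are illustrative placeholders, not the Carr and Lear numbers.
state_revenue_lost_per_student = 5500.0      # state aid that follows the student out (assumed)
local_revenue_retained_per_student = 7000.0  # local property-tax share that stays behind (assumed)

net_gain_to_district = local_revenue_retained_per_student - state_revenue_lost_per_student
if net_gain_to_district > 0:
    print(f"District keeps ${net_gain_to_district:,.0f} per departing student for the students who remain")
else:
    print(f"District loses ${-net_gain_to_district:,.0f} per departing student")
```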

Carr and Lear find that in Ohio’s Big 8, the regular public schools are fiscal winners when students leave for charter schools. The biggest savings are in Cincinnati, where the net gain is $4,030 per student; the smallest are in Canton, where the net gain is $918 per student.

Charters in Ohio’s Big 8 also keep overall educational costs down by providing a better education (as Carr’s previous work in Ohio has shown) for less money per student.