Fordham Foundation on K-12 Economic Segregation

February 18, 2010

(Guest Post by Matthew Ladner)

Fordham has a new study on what they call “private public schools,” a.k.a. schools that serve hardly any low-income children. Personally, I prefer the term Economic Segregation Academies.

Yes kids, calm down, they have data for specific metro areas available online.  So much for the common school myth.

Question for Sara Mead

June 9, 2009

(Guest Post by Matthew Ladner)

I saw a documentary on Napoleon’s Egyptian campaign a few years ago. After a nasty military setback, Napoleon retreated in defeat to Cairo, but then ordered a victory parade to be held before fleeing the country entirely.

Watching Fordham’s pre-school event online, I can’t help but think that pre-k advocates are trying to do the same thing with Oklahoma: pretend it’s a victory, when in fact it looks more like their Waterloo.

I watched the Fordham Foundation pre-school event online yesterday. I was especially taken by Sara Mead’s claim that universal preschool could lead to dynamic changes in K-12, and that disadvantaged kids in Oklahoma’s pre-k program made larger gains than other students.

The biggest problem for universal pre-k advocates, in my view, is that the academic gains associated with pre-k programs fade out. Consider the blue line in the chart below: 4th-grade NAEP scores from Oklahoma. In 1998, Oklahoma adopted a universal pre-k program.

[Chart: FL vs. OK, 4th-grade NAEP scores]

I assume that Ms. Mead has a basis to say that disadvantaged children make bigger gains under the Oklahoma pre-k program. The more important question is whether those gains are sustained over time.

Based upon the NAEP scores, Oklahoma’s program looks like a dud, increasing all of one point between 1998 and 2007.

The best spin one can put on the Oklahoma situation is that scores might have dropped in the absence of the program, but at that point you are really grasping at straws. I seriously doubt that anyone who voted for this program in 1998 could be anything other than disappointed.

The red line, Florida, shows what can be done with a vigorous effort to improve K-12 schools. Florida’s low-income children improved by 23 points between 1998 and 2007.

Florida voters created a universal pre-k program, which was implemented as a voucher, but none of those students had reached the 4th grade by 2007.

Mead would likely argue, and I think she did at the event, that pre-k and K-12 reform aren’t mutually exclusive, and I agree. It seems fair to ask, however: is pre-k a waste of time as an education improvement strategy? If not, why are the Oklahoma results so dreadfully unimpressive?

The Fordham Accountability Study

March 25, 2009


(Guest Post by Matthew Ladner)

So I have been off in Miami for a couple of days and return to find Greg and Jay busting on the new Fordham report on school voucher accountability. My take is different.

Let me preface my remarks by saying I haven’t read the final report, but rather an almost final report.

So, if you recall the only Star Trek: The Next Generation movie worth watching, there is a great scene where the crew tries to convince Captain Picard that the Borg have captured the ship, and that they ought to abandon it and set the auto-destruct.

Picard, consumed with hatred for the Borg, refuses to do so. “The line must be drawn here! This far, no further!” Picard bellows with rage.

We get that reaction from many people when the subject of accountability for private schools participating in choice programs comes up. I agree that there are lines that ought not to be crossed, most obviously forcing private schools to take state exams. Otherwise, you slide down the path to homogenized private schools on the French Catholic model, which can essentially be distinguished from public schools only by a religion class or two. Lines must be drawn: this far and no further.

The appropriate line, however, is not at zero transparency.

Going into the reasons why I believe this is the case would take a longer post than I can write at this time. In short, I believe it is in our interest as school choice supporters to embrace a reasonable level of financial and academic transparency in choice programs.

Further, I believe that what the Fordham Foundation has published (at least in the draft I saw) takes a very reasonable approach.

More later…

The Professional Judgment Un-Dead

March 25, 2009

It’s time we drive a stake through the heart of “professional judgment” methodologies in education.  Unfortunately, the method has come back from the grave in the most recent Fordham report on regulating vouchers in which an expert panel was asked about the best regulatory framework for voucher programs.

The methodology was previously known for its use in school funding adequacy lawsuits.  In those cases a group of educators and experts was gathered to determine the amount of spending that is required to produce an adequate education.  Not surprisingly, their professional judgment was always that we need to spend billions and billions (use Carl Sagan voice) more than we spend now.  In the most famous use of the professional judgment method, an expert panel convinced the state courts to order the addition of $15 billion to the New York City school system — that’s an extra $15,000 per student.

And advocates for school construction have relied on professional judgment methodologies to argue that we need $127 billion in additional spending to get school facilities in adequate shape.  And who could forget the JPGB professional judgment study that determined that this blog needs a spaceship, pony, martinis, cigars, and junkets to Vegas to do an adequate job?

Of course, the main problem with the professional judgment method is that it more closely resembles a political rather than a scientific process.  Asking involved parties to recommend solutions may inspire haggling, coalition-building, and grandstanding, but it doesn’t produce truth.  If we really wanted to know the best regulatory framework, shouldn’t we empirically examine the relationship between regulation and outcomes that we desire? 

Rather than engage in the hard work of collecting or examining empirical evidence, it seems to be popular among beltway organizations to gather panels of experts and ask them what they think.  Even worse, the answers depend heavily on which experts are asked and what the questions are. 

For example, do high stakes pressure schools to sacrifice learning in certain academic subjects to improve results in others that have high stakes attached?  The Center on Education Policy employed a variant of the professional judgment method by surveying school district officials to ask them if this was happening.  They found that 62% of districts reported an increase in time on high-stakes subjects and 44% reported a decrease in other subjects, so CEP concluded that high stakes were narrowing the curriculum.  But the GAO surveyed teachers and found that 90% reported no change in time spent on the low-stakes subject of art.  About 4% reported an increase in focus on art and 7% reported a decrease.  So the GAO, also employing the professional judgment method, got a very different answer than CEP.  Obviously, which experts you ask and what you ask them make an enormous difference.

Besides, if we really wanted to know about whether high stakes narrow the curriculum, shouldn’t we try to measure the outcome directly rather than ask people what they think?  Marcus Winters and I did this by studying whether high stakes in Florida negatively impinged on achievement in the low-stakes subject of science.  We found no negative effect on science achievement from raising the stakes on math and reading.  Schools that were under pressure to improve math and reading results also improved their science results.

Even if you aren’t convinced by our study, it is clear that this is a better way to get at policy questions than by using the professional judgment method.  Stop organizing committees of selected “experts” and start analyzing actual outcomes.

Real Men of Accountability Illusion Genius

February 19, 2009

(Guest Post by Matthew Ladner)

Fordham strikes again, following up their great Proficiency Illusion study with the Accountability Illusion. This time, they took 18 elementary and 18 middle schools, and applied the varying accountability rules of 28 different states under NCLB to see which of them would make AYP under which set of rules.

In other words, which states have jimmied the gory details to make it really easy to make AYP? Things like the minimum number of students required to form a subgroup and the error margins a state adopts make a big, big difference.

I can’t tell you how shocking it was to see Arizona as the second easiest state studied in which to make AYP.

That is to say, I was shocked that someone had actually made it easier to do than Arizona. This should be a statewide scandal in Wisconsin.

<Cue cheesy singer and Charlton Heston-like voice about here>

Real men of GENIUS!!!!!!!!!!!!!!!!!!!!!!

Here’s to you, Mr. Wisconsin No Child Left Behind compliance guy.

Mr. Wisconsin No Child Left Behind compliance guy!

When those federal bureaucrats required us to test students in return for federal dollars, you figured out how to drop your academic standards lower than anyone. Beating out Arizona…that’s really impressive. They said it couldn’t be done, but you did it!

Watch out! Falling cut scores!

When you’ve got schools making AYP in Wisconsin that don’t make it anywhere else, you deserve the satisfaction of a hard day’s work! We want everyone to feel good about their schools after all, whether the students learn anything or not.

Don’t feel bad: trophies for everyone!!

That partial credit scheme for kids that fail was inspired! Why let the sunbelt states have all the fun with low academic standards? Don’t worry about those darned meddling Fordham kids and their fancy study! You can still get away with it!

Oh YEAH! Where’s my Scooby snack?!?

So here’s to you Mr. Wisconsin NCLB compliance guy!  When it comes to creative insubordination, no one can match your GUSTO! Keep taking those federal dollars and giving them hell!

Mr. Wisconsin I’m Too Scared of Adults to Care About Kids NCLB Compliance guuuuuuy!!!!!!!!!!!!!!!!!

The Proficiency Illusion

November 13, 2008

(Guest Post by Matthew Ladner)

I had a chance to see John Cronin from the Northwest Evaluation Association present on the Fordham Foundation’s study The Proficiency Illusion at the Arizona Education Research Organization conference last week. It was more than interesting enough to have me check out the study. From the foreword by Checker and Mike:

Standards-based education reform is in deeper trouble than we knew, both the Washington-driven, No Child Left Behind version and the older versions that most states undertook for themselves in the years since A Nation at Risk (1983) and the Charlottesville education summit (1989). It’s in trouble for multiple reasons. Foremost among these: on the whole, states do a bad job of setting (and maintaining) the standards that matter most—those that define student proficiency for purposes of NCLB and states’ own results-based accountability systems.

In short, the accountability and standards reform strategy has morphed into a pig’s breakfast. We’ve all known for some time that most states have failed to set globally competitive standards, and have monkeyed about with their cut scores. One of the revelations of The Proficiency Illusion (to me) is that many states have proficiency standards lacking internal consistency. For example, some states have incredibly low cut scores in the elementary grades, only to amp them way up in 8th grade. Parents will receive multiple notices saying that their child is “at grade level,” only to be shocked to learn later that their child is well short.

Other types of problems exist as well. Two years ago at the same AERO conference, I saw a presentation showing that the Arizona AIMS writing test produced bell curves that stacked on top of one another rather than being vertically linked across grades. In short, it was impossible to tell from the state exam whether 7th graders were writing any better than 4th graders.

Cronin’s presentation contained other insights, including just how arbitrary AYP can be. It depends hugely on the N requirement for subgroups, which varies state by state: some schools wind up with lots of subgroups and some don’t. This means that some relatively high-performing schools miss AYP. In fact, Cronin demonstrated what I take to be a fairly common scenario in which a middle school misses AYP yet performs at a higher level than all of the public school transfer options in the vicinity.
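The mechanics are easy to sketch. Here is a minimal, hypothetical illustration of how a state’s minimum subgroup size can flip an AYP determination; the function, school data, and thresholds below are all invented for illustration, and real AYP rules involve additional machinery (confidence intervals, safe harbor, participation rates) that is omitted here:

```python
# Hypothetical sketch: the same school can pass AYP in one state and fail
# in another purely because of the minimum subgroup size ("N requirement").

def makes_ayp(subgroups, min_n, proficiency_target):
    """A subgroup counts only if it meets the state's minimum size.
    The school fails AYP if any counted subgroup misses the target."""
    for name, (n_students, pct_proficient) in subgroups.items():
        if n_students < min_n:
            continue  # subgroup too small to count in this state
        if pct_proficient < proficiency_target:
            return False
    return True

school = {
    "all_students": (400, 0.72),
    "ELL":          (35,  0.48),  # below target, but a small group
    "low_income":   (120, 0.61),
}

# State A counts any subgroup of 30+; State B requires 50.
print(makes_ayp(school, min_n=30, proficiency_target=0.60))  # False: ELL counts and misses
print(makes_ayp(school, min_n=50, proficiency_target=0.60))  # True: ELL is too small to count
```

Identical school, identical scores, opposite verdicts: all that changed was the N requirement.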

Checker and Mike go on to argue for national standards as a solution to these problems, but concede that adoption doesn’t seem likely. My modest suggestion on this front would be to adopt the A-Plus plan and, as states sought alternatives to AYP, to have the US Department of Education require the creation of internally consistent standards as a starting point for negotiations. Given that the states would be able to determine their own set of sanctions (or lack thereof), I can’t see why an increase in rigor would be outside the realm of these discussions for states with absurdly easy-to-pass tests.

Deeply wedded to inconsistent standards? Fine: have fun with AYP and the 2014 train wreck.

In other words, if the feds would abandon Utopian nonsense like the 2014 deadline and the highly qualified teacher provision, they might be able to play a productive role in providing technical guidance and nudging states in better directions with their testing programs.

I am not a fan of NCLB, but even I will concede that it has, to date, had a net positive impact by increasing transparency in public schooling. This will be lost, however, if the 2014 problem isn’t addressed, or if we go down the absurd road of portfolio assessments, and I do view transparency as vitally important.

The Proficiency Illusion shows us that much of the data we’ve been getting from state testing programs isn’t nearly as useful or reliable as imagined. This is a problem, and it must be addressed.