Camp Education

July 21, 2008

“I desire macaroni pictures! And those little shaker things where you put beans inside of paper plates that are glued together! And let us put patterns of glue on the outside of those paper plates so we can then pour glitter on them so they look nice and sparkly!”

As I drop the kids off at sleep-away summer camp, I’ve been thinking about whether school should be more like camp.  At camp the kids learn an enormous amount, including a good deal of traditional academic content.  Two of my children are at a Jewish camp where they learn Hebrew and Judaics in addition to more typical camp activities.  (And no, there is no giant Moses in the shape of the CPU from Tron demanding macaroni pictures.)  My oldest goes to a special needs camp that offers an emphasis on independent living skills (just like school) in addition to the usual camp stuff.

They all learn a lot.  But unlike school, the kids love it.  Don’t get me wrong, they like school quite a bit — but they love camp.  They love it even though they are made to do all sorts of challenging or sometimes unpleasant things that they rarely do at home.  They have to do all of the cleaning, they serve and clear all of the meals, and they fold their own clothes.  It can be broiling during the day and freezing at night.  They help tend farm animals.  They climb to the top of a high tower.  They go for long hikes.

The camps my kids go to have very nice facilities and are considered expensive.  Their camps offer activities not usually found at other summer camps, including go-carts, mountain biking, computers, water trampolines, and tennis.  The ratio of campers to counselors at the Jewish camp is less than 5 to 1, and at the special needs camp it is about 2 to 1 (including specialists).

How are these camps able to teach kids a lot, get them to work hard, and get the kids to love it, while schools struggle to do any of these things?

What’s more, even these expensive camps are less expensive than the average public school.  The Jewish camp costs $151.92 per day, which, given that the kids are cared for 24 hours per day, comes out to $6.33 per hour.  The average public school, as of 2006-07, cost $10,725 per pupil for 180 days, which works out to $59.58 per day, or $8.51 per hour for the 7 hours students are in school.  Even the special needs camp, which seems quite expensive, costs less per hour than average special education in public schools: $11.02 compared to $16.17.  I also looked up the tuition of a popular Christian camp in the area.  The charge there is only $3.33 per hour.
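The per-hour figures above follow directly from the per-day and per-pupil costs. Here is a quick sketch of the arithmetic — the dollar figures come from this post; the helper function and variable names are just for illustration:

```python
# Per-hour cost of care, given a total cost, the number of days it covers,
# and the hours of care provided per day.

def cost_per_hour(total_cost, days, hours_per_day):
    return total_cost / (days * hours_per_day)

# Jewish camp: $151.92 per day, 24 hours of care per day.
jewish_camp = cost_per_hour(151.92, 1, 24)

# Average public school, 2006-07: $10,725 per pupil over 180 seven-hour days.
public_school = cost_per_hour(10725, 180, 7)

print(round(jewish_camp, 2))    # 6.33
print(round(public_school, 2))  # 8.51
```

Note that $10,725 over 180 days is $59.58 per day, which is where the per-hour figure for public schools comes from.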

How do sleep-away camps get kids to work hard, learn a lot, broaden their experiences, and love it — all for less than the cost of public schooling?  A big difference is that most of the counselors are young college kids.  They don’t get paid very much but tend to be enthusiastic, bright, and energetic.  Some will later be doctors or lawyers, but they are happy to be counselors for a few summers in the meantime.  It’s easier to get talented people for low pay for a short time than for an entire career.  Camps always have some wise old hands to keep the young staff in check and to maintain the norms and mission of the organization, but camps mostly succeed at low cost because of their energetic young counselors.

Could schools be more like camps?  Could we hire a lot of enthusiastic, bright, and energetic teachers fresh out of college, who know full well that most of them will leave in a few years to become lawyers, doctors, or something else?  A few old hands would stick around to keep the young staff in check and to maintain the norms and mission of the organization.  But schools could potentially attract more talented people as teachers at lower cost if they followed the camp model.  And perhaps schools with a high-turnover, young staff would better connect with students and convey the love of learning and working hard.

I know that current research finds that teachers tend to be less effective in their first few years and that turnover is harmful.  But those are findings about new teachers and high turnover under the current system that rewards teachers for sticking around for 20-25 years.  We can’t simply extrapolate from that to what would happen under a system that attracted a different crop of new teachers and where turnover was effectively encouraged (reform of the pension system and pay scale could move us in that direction).

Maybe the intensity of camp just couldn’t be sustained for an entire school year.  Maybe adding even a little more academic content would ruin the camp magic.  I’m sure many things would go wrong if we tried to make schools more like camps, but I think it’s worth thinking about what we can learn from camps to make schools more effective.


Bigger is Not Better in Education

July 20, 2008

I have a piece in this morning’s Arkansas Democrat Gazette arguing that consolidating school districts in Arkansas into 75 countywide school districts is not a promising reform strategy.  A number of state officials, as well as the Dem Gaz, have floated the idea of cutting the number of districts to less than one-third of the current number as a way of saving superintendent and football coach salaries while improving the capacity of high schools to offer state-required courses.  I argue that the salary savings would be minimal, that there are better ways to help high schools offer courses (such as distance ed), and that student achievement tends to suffer in larger schools and school districts.

Now, this doesn’t mean that reconfiguring larger urban high schools into “small” schools within a school, as the Gates Foundation once pushed, is likely to produce much of an improvement either.  The benefits of smaller schools and school districts may be related to the tighter connection they have to their communities and the more competitive market provided by having more districts.  Simply breaking up big high schools may not better connect schools to communities or create more competitive pressure. 

Being able to choose among schools within a district is like being able to choose among the menu items at McDonald’s.  It’s nice that you could choose the Filet-O-Fish if you prefer to eat fish, but there is no change in competitive pressure from adding that menu item — all of the money still ends up in the same place.  The same is true for choice within school districts: because school districts are the main organizational unit of education funding, all of the money still stays with the district, so its motivation is not significantly altered by your choice among its schools.  We should only expect significant competitive pressure when money leaves one organization and enters another as a result of consumer choice.


Would You Pay $43,479 for a 1971 Impala?

July 19, 2008

Andrew Coulson at Cato does a great job of illustrating how disastrous it is to have had stagnant achievement outcomes for 17-year-old public school students since 1970, while per pupil spending has increased by a factor of 2.3 (adjusted for inflation).  He likens it to paying $43,479 for a 1971 Chevy Impala, which is 2.3 times the $19,011 inflation-adjusted price back then ($3,460 before adjusting for inflation).  Meanwhile, a brand new 2008 Impala sells for $21,975 and comes with features like OnStar, side air bags, and anti-lock brakes that weren’t even imagined in 1971.
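Coulson’s multiple is just the ratio of the two inflation-adjusted figures. A quick check using the numbers cited above (the variable names are mine):

```python
# Ratio of the spending-matched hypothetical price to the
# inflation-adjusted 1971 Impala price, per the figures in the post.
price_1971_adjusted = 19011    # 1971 price in inflation-adjusted dollars
price_if_like_schools = 43479  # price had it risen like per pupil spending

multiple = price_if_like_schools / price_1971_adjusted
print(round(multiple, 1))  # 2.3
```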

In the automotive industry, cars keep getting better with little increase in cost (after inflation), while education has not improved significantly and costs 2.3 times as much (after inflation).  It isn’t every day that you hear people wish an industry were as efficient as the car-makers.


Being Misquoted

July 17, 2008

Dean Millot has a new post, promoted by Eduwonkette on her own site, attacking me on the peer-review issue.

But Dean Millot is being fundamentally dishonest in that he misquotes me. He says that I argue: “In short, I see no problem with research becoming public with little or no review.”

In fact I wrote: “In short, I see no problem with research initially becoming public with little or no review.”  (See here.)

The absence of the word “initially” makes quite a difference and sets up the straw man that Millot wishes to knock down. The issue is not whether research can benefit from peer review, but whether it is inappropriate to make it publicly available INITIALLY, before it has received peer review.

Readers may wonder about the credibility of Millot’s claim that “One of the reasons I do my best to quote the very words of people I write about in edbizbuzz is that I prefer to fight fair.”

And so much for Eduwonkette’s praise of Millot’s “measured, careful, and thoughtful analysis.”

I’m waiting for the correction and apology from both of them.


It Never Ends

July 14, 2008

I thought that the exchange with Eduwonkette over the appropriateness of releasing research without peer review had run its course with my last post.  But it seems that it will never end.  Here is her latest post and here is the reply that I posted in her comment section:

Eduwonkette is attempting to change the subject. I’ve never disputed that peer review can help provide additional assurances to readers about quality.  The issue is whether research ought to be available to the public even if it has not been peer reviewed.  In attacking the release of my most recent study Eduwonkette seems to be arguing that it is inappropriate to release research without peer review, at least under certain conditions that she only applies to research whose findings she does not like.  If she were going to be consistent, she would have to criticize anyone who releases working papers of their research, which would be almost everyone doing serious research.


What’s more, she is still trapped in a contradiction: she can’t say that we should analyze the motives of people who release research directly to the public when assessing whether it is appropriate, while she prevents analysis of her own motives because she blogs anonymously.  As I have now said several times, either she drops the suggestion that we analyze motives or she drops her role as an anonymous blogger.  If she refuses to resolve this contradiction, Ed Week should stop lending her their reputation by hosting her blog.  Let her be inconsistent in blogging at the expense of her own anonymous persona and not drain the respectability of Ed Week.


Lastly, the comparison of the market for education policy information and the market for cars comes from my most recent post in our exchange, but she oddly does not credit me here. (See https://jaypgreene.com/2008/07/12/see-were-in-italy/ )  Her position seems to be that we ought to forbid (or at least shun) the sale of used cars without warranties (translation: research without peer review).  My argument is that used cars without warranties come at a risk but there are compensating benefits.  Similarly, non-peer-reviewed research has its risks but also its benefits.


UPDATE — My exchange with Eduwonkette continues although it seems increasingly pointless.  Here is my (slightly edited) last comment on her site:

“Let’s make this very concrete. Was it inappropriate for Marcus Winters and me to release our social promotion findings in 2004 without peer review, or should we have waited until they had been peer-reviewed and published (in various forms) in 2006, 2007, and again in 2008? If the appropriate thing is to wait, would interest groups, editorial boards, and bloggers similarly hold their tongues until the additional evidence came in?  Would policymakers hold off on decisions that might have come out differently if they had the suppressed information?

Would it have been OK to release in 2004 as long as we tried to make it obscure enough so that people were less likely to find it? What if interest groups, bloggers, etc. found our obscure findings and promoted them (as has happened with Jesse Rothstein’s paper)?

And in saying ‘working papers and thinktank reports are released for entirely different functions’ you are repeating your call for an analysis of motives. You’ve said that think tanks want to influence policy (bad motive) while academics are trying to advance knowledge with each other (good motive). But if academics are serving the public good, shouldn’t they ultimately want to influence policy? I am an academic who also releases working papers through a think tank. Does that make my motives good or bad? I think all of this analysis of motives is silly when the real issue is the truth of claims, not why people are making those claims. Calling for an analysis of motives is especially silly for someone who is trying to influence people anonymously. The fact that you are trying to influence people through a blog does not give you a free pass from having to be consistent on this.”


See, we’re in Italy…

July 12, 2008

Stripes

“See, we’re in Italy.  The guy on the top bunk has gotta make the guy on the bottom’s bed all the time.  It’s in the regulations.  If we were in Germany I would have to make yours.  But we’re in Italy, so you’ve gotta make mine. It’s regulations.”

This is more or less Eduwonkette’s response to my complaint that she can’t argue that the source of information is important in assessing the truth of claims while blogging anonymously.  Her answer is that it’s different for bloggers (in Italy) than for researchers (in Germany).  It’s regulations.

She goes on to describe some differences between different types of information in education policy debates, but it’s not clear why any of those differences would be relevant to whether assessing the source is important for one and not for another.  The closest she comes to explaining why things are meaningfully different is when she says, “And let’s be realistic: an anonymous blogger isn’t shaping public policy.”  So, if information will have no bearing on policy debates, then its source is unimportant.

This would be a consistent argument if she really believed that bloggers had NO influence.  But of course they have at least some influence.  Why else would she and the rest of us be bothering with this?  And if bloggers have some influence, then the same basic principles should apply: either we should analyze the motives of sources of information to assess the truth of claims or we shouldn’t.  I’m in favor of not analyzing motives for anyone since I think that the truth of claims is independent of the motives of the source.  Even bad people can make true arguments.

At the risk of belaboring this issue, maybe I can clarify things by describing the market of ideas in policy debates as being like the market for cars.  We have different levels of confidence in cars that have gone through different processes before being made available for sale.  We could buy a used car from the corner used car dealer with no warranty.  That would be like reading blogs.  We don’t really know whether we are getting a lemon or not, since almost no assurances have been made about quality.  Or we could buy a used car from a larger chain with at least some warranty.  That would be like getting information from newspapers or magazines.  There has been some review and assurance of quality, but we still don’t quite know what we’ll get.  Or we could buy a new car from a major dealer and buy the extended warranty.  That would be like getting information from a peer-reviewed journal.  It may still be a lemon, but we’ve received a lot of assurances that it is not.  And I suppose reading an anonymous blogger is like buying a used car from someone you don’t know in the want ads.  There are trade-offs in getting cars with these different levels of assurance about quality, just as there are trade-offs in getting information that has gone through different processes to assure quality.

Eduwonkette’s argument is essentially that the same rules regarding these trade-offs don’t apply to the market for cars without warranties that do apply to the market for cars with warranties.  My view is that there are only differences in degree, not kind.  Even bad people can sell cars that are good values.

I’ve also noticed that Marc Dean Millot has weighed in on this issue.  He’s just knocking down a straw man.  It is not my position that research doesn’t benefit from peer review.  He can check out my CV to see that I have two dozen peer-reviewed publications, many of which were earlier released directly to the public without review.

I’ve been arguing that the public benefits from seeing research even before it has received peer review because it gets more information faster.  Without the assurances of peer review people will tend to have lower confidence in that research, and their confidence may increase as the research receives those additional assurances.  Millot seems to want to embargo information from the public until it receives peer review.  If he really believes that, then he should criticize every researcher with working papers on the web.  That’s almost everyone doing serious research.

And on his points about ideology tainting research I would suggest that people read Greg Forster’s excellent earlier post on Vouchers: Evidence and Ideology.


Eduresponses to Edubloggers

July 10, 2008

My recent posts on the release of our new study on the effects of high-stakes testing in Florida, and posts here and here on the appropriateness of releasing it before it has appeared in a scholarly journal, have produced a number of reactions.  Let me briefly note and respond to some of those reactions.

First, Eduwonkette, who started this all, has oddly not responded.  This is strange because I caught her in a glaring contradiction: she asserts that the credibility of the source of information is an important part of assessing the truth of a claim yet her anonymity prevents everyone from assessing her credibility.  I prefer that she resolve this contradiction by agreeing with my earlier defense of her anonymity that the truth of a claim is not dependent on who makes it.  But she has to resolve this one way or another — either she ends her anonymity or she drops the argument that we should assess the source when determining truth.

But apparently she doesn’t have to do anything.  Whose reputation suffers if she refuses to be consistent?  Her anonymity is producing just the sort of irresponsibility that Andy Rotherham warned about in the NY Sun and that I acknowledged even as I defended her.  The only reputation that is getting soiled is that of Education Week for agreeing to host her blog anonymously.  If she doesn’t resolve her double-standard by either revising her argument or dropping her anonymity, Education Week should stop hosting her.  They shouldn’t lend their reputation to someone who will tarnish it.

Mike Petrilli over at Flypaper praises our new study on high-stakes testing but takes issue with our referencing, in the “pre-release spin,” comments by Chester Finn and Diane Ravitch about high-stakes testing narrowing the curriculum.  I agree with him that this study is not “the last word on the ‘narrowing of the curriculum.’”  But to the extent that it shows that another part of the curriculum (science) benefits when stakes are applied only to math and reading, it alleviates the concerns Checker and Diane have expressed.

As we fully acknowledge in the study, we don’t have evidence on what happens to history, art, or other parts of the curriculum.  And we only have evidence from Florida, so we don’t know if there are different effects in other states.  But the evidence that high stakes in math and reading contribute to learning in science should make us less convinced that all low stakes subjects are harmed.  Perhaps school-wide reforms that flow from high stakes in math and reading produce improvements across the curriculum.  Perhaps improved basic skills in literacy and numeracy have spill-over benefits in history, art, and everything else as students can more effectively read their art texts and analyze data in history.

Andy Rotherham at Eduwonk laments that what I describe as our “caveat emptor market of ideas” doesn’t work very well.  I agree with him that people make plenty of mistakes.  But I also agree with him that “in terms of remedies there is no substitute for smart consumption of information and research…”  There is no Truth Committee that will figure everything out for us.  And any process of reviewing claims before release will make its own errors and will come at some expense of delay.  Think Tank West has added some useful points on this issue.

Sherman Dorn, who rarely has a kind word for me, says: “Jay Greene (one of the Manhattan Institute report’s authors and a key part of the think tank’s stable of writers) replied with probably the best argument against eduwonkette(or any blogger) in favor of using PR firms for unvetted research: as with blogs, publicizing unvetted reports involves a tradeoff between review and publishing speed, a tradeoff that reporters and other readers are aware of.”  He goes on to have a very lengthy discussion of the issue, but I was hypnotized by his rare praise, so I haven’t yet had a chance to take in everything else he said.


Eduwonkette Apologizes

July 8, 2008

I appreciate Eduwonkette’s apology, posted on her blog and in a personal email to me.  It is a danger inherent in the rapid-fire nature of blogging that people will write things more strongly and sweepingly than they might upon further reflection.  I’ve already done this on a number of occasions in only a few months of blogging, so I am completely sympathetic and un-offended.

One could argue that these errors demonstrate why people shouldn’t write or read blogs.  In fact some people have argued that ideas need a process of review and editing before they should be shown to the public.  These people tend to be ink-stained employees of “dead-tree” industries or academia, but they have a point: there are costs to making information available to people faster and more easily.

Despite these costs, the ranks of bloggers and web-readers have swelled.  The benefits of making more information available to more people, much faster, are even greater than the costs of doing so.  People who read blogs and other material on the internet are generally aware of the greater potential for error, so they usually have a lower level of confidence in information obtained from these sources than from other sources with more elaborate review and editing processes.  Some material from blogs eventually finds its way into print and more traditional outlets, and readers increase their confidence level as that information receives further review.

Of course, the same exact dynamics are at work in the research arena.  Releasing research directly to the public and through the mass media and internet improves the speed and breadth of information available, but it also comes with greater potential for errors.  Consumers of this information are generally aware of these trade-offs and assign higher levels of confidence to research as it receives more review, but they appreciate being able to receive more of it sooner with less review.

In short, I see no problem with research initially becoming public with little or no review.  It would be especially odd for a blogger to see a problem with this speed/error trade-off without also objecting to the speed/error trade-offs that bloggers have made in displacing newspapers and magazines.  If bloggers really think ideas need review and editing processes before they are shown to the public, they should retire their laptops and cede the field to traditional print outlets. 

We have a caveat emptor market of ideas that generally works pretty well.

So it was disappointing that following Eduwonkette’s graceful apology, she attempted to draw new lines to justify her earlier negative judgment about our study released directly to the public.  She no longer believes that the problem is in public dissemination of non-peer-reviewed research.  She’s drawn a new line that non-peer-reviewed research is OK for public consumption if it contains all technical information, isn’t promoted by a “PR machine,” isn’t “trying to persuade anybody in particular of anything,” and is released by trustworthy institutions.

The last two criteria are especially bothersome because they involve an analysis of motives rather than an analysis of evidence.  I defended Eduwonkette’s anonymity on the grounds that it doesn’t matter who she is, only whether what she writes is true.  But if Eduwonkette believes that the credibility of the source is an important part of assessing the truth of a claim, then how can she continue to insist on her anonymity and still expect her readers to believe her?  How do we know that she isn’t trying to persuade us of something and isn’t affiliated with an untrustworthy institution if we don’t know who she is?  Eduwonkette can’t have it both ways.  Either she reveals who she is or she remains consistent with the view that the source is not an important factor in assessing the truth of a claim.

No sooner does Eduwonkette establish her new criteria for the appropriate public dissemination of research than we discover that she has not stuck to those criteria herself.  Kevin DeRosa asks her in the comments why she felt comfortable touting a non-peer-reviewed Fordham report on accountability testing.  That report was released directly to the public without full technical information, was promoted by a PR machine, and came from an organization that is arguably trying to persuade people of something and whose trustworthiness at least some people question.

So, she articulates a new standard: releasing research directly to the public is OK if it is descriptive and straightforward.  I haven’t combed through her blog’s archives, but I am willing to bet that she cites more than a dozen studies that fail to meet any of these standards.  Her reasoning seems like an ad hoc attempt to justify criticizing the release of a study whose findings she dislikes.

Diane Ravitch also chimes in with a comment on Eduwonkette’s post: “The study in this case was embargoed until the day it was released, like any news story. What typically happens is that the authors write a press release that contains findings, and journalists write about the press release. Not many journalists have the technical skill to probe behind the press release and to seek access to technical data. When research findings are released like news stories, it is impossible to find experts to react or offer ‘the other side,’ because other experts will not have seen the study and not have had an opportunity to review the data.”

Diane Ravitch is a board member of the Fordham Foundation, which releases numerous studies on an embargoed basis to reporters “like any news story.”  Is it her position that this Fordham practice is mistaken and needs to stop?


Eduwonkette and Eduwonk Aren’t Edumarried?

July 8, 2008

The New York Sun had a nice profile yesterday of Eduwonkette.  Well, it’s not exactly a profile because Eduwonkette writes anonymously.  In the article some folks complain that her anonymity is a problem: “A co-director of the Education Sector think tank, Andrew Rotherham, suggested on his blog Eduwonk that Eduwonkette might be unfairly pretending to be unbiased because she has ‘skin in the game… It’s this issue of you got all this information to readers, without a vital piece of information for them to put it in context.'”

I think Andy’s mistaken on this.  (Did they have some kind of edu-break-up?)  The issue is not who Eduwonkette is, but whether she is right or not.  Knowing who she is does not make her evidence or arguments any more or less compelling.  I wish we all spent a whole lot less time analyzing people’s motives and a whole lot more time on their evidence and arguments.

The only major problem with anonymity is lack of responsibility for being wrong.  There is a reputational price for making bad arguments or getting the evidence wrong that Eduwonkette avoids paying professionally — although she does pay a reputational price to the name brand of Eduwonkette.

Speaking of being wrong, Eduwonkette knocks the study Marcus Winters, Julie Trivitt, and I released today through the Manhattan Institute.  She complains: “It may be an elegantly executed study, or it may be a terrible study. The trouble is that based on the embargoed version released to the press, on which many a news article will appear today, it’s impossible to tell. There is a technical appendix, but that wasn’t provided up front to the press with the glossy embargoed study. Though the embargo has been lifted now and the report is publicly available, the technical appendix is not.”

This isn’t correct.  Embargoed copies of the study were provided to reporters upon their request.  If they requested the technical report, they could get that.  Both were available well in advance to reporters so that they could take time to read it and circulate it to other experts before writing a story.  Both the study and the technical report were made publicly available today (although there seems to be a glitch with the link to the technical report that should be fixed within hours).  The technical report can be found here.

And while we are on the subject of Eduwonkette being wrong, her attacks on test-based promotion policies are overdone.  The Jacob and Lefgren paper does raise concerns, but there is more positive evidence from the experience in Florida.  As I wrote in a previous post: “In a study I did with Marcus Winters that was published in Education Finance and Policy, we found that retained students significantly outperformed their comparable peers over the next two years.  In another study we published in the Economics of Education Review, we found that schools were not effective at identifying which students should be exempted from this test-based promotion policy and appeared to discriminate in applying these exemptions.  That is, white students were more likely to be exempted by school officials in Florida from being retained, but those students suffered academically by being exempted.”

Our results may actually be consistent with what Jacob and Lefgren find.  We find academic benefits for students retained in third grade.  They find: “that grade retention leads to a modest increase in the probability of dropping out for older students, but has no significant effect on younger students.”  It could be that test-based promotion is more beneficial when done with younger students.  It could also be that the policy has positive effects on achievement with some cost to graduation. 

And particularly severe problems with the integrity of test results used for promotion decisions in Chicago may limit the ability to generalize from Chicago’s experience.  In Chicago it may have been easier to move retained students forward by cheating on the next test than actually teaching them the basic skills they need to succeed in the next grade.

Besides, I’m sure that Eduwonkette wouldn’t put too much stock in Jacob and Lefgren’s non-peer-reviewed paper released straight to the public.  I’m sure she would be consistent in her view that: “By the time the study’s main findings already have been widely disseminated, some sucker with expertise in regression discontinuity may find a mistake while combing through that appendix, one that could alter the results of the study. But the news cycle will have moved on by then. Good luck interesting a reporter in that story… So as much as I like to kvetch about peer review and the pain and suffering it inflicts, it makes educational research better. It catches many problems and errors before studies go prime time, even if it doesn’t always work perfectly.”

Or do these standards only apply to studies whose findings she doesn’t like?   If Eduwonkette isn’t careful she might get a reputation.


New Study Release Tomorrow

July 7, 2008

Keep your eyes peeled for the release tomorrow by the Manhattan Institute of a new study on the effect of high-stakes testing on achievement in low-stakes subjects. The study, led by Marcus Winters and co-authored by me and Julie Trivitt, examines whether achievement in math and reading comes at the expense of science on Florida standardized tests.  Because there are meaningful consequences for performance in math and reading, but not for the rest of the curriculum, many people have worried that schools would improve their math and reading results by skimping on science and other subjects.

These concerns are not just coming from the usual critics of school accountability.  Even accountability advocates have expressed second thoughts.  For example, Chester Finn writes in the National Review Online: “Do the likely benefits exceed the ever clearer costs? Boosting skill levels and closing learning gaps are praiseworthy societal goals. But even if we were surer that NCLB would attain them, plenty of people — parents, teachers, lawmakers, and interest groups — are alarmed by the price. I don’t refer primarily to dollars. (They’re in dispute, too, with most Democrats wrongly insisting that they’re insufficient.) I refer to things like a narrowing curriculum that sacrifices history, art, and literature on the altar of reading and math skills…”

Diane Ravitch has similarly stepped on the high-stakes brakes, expressing concern about the crowding out of other academic subjects and activities: “a new organization called Common Core was launched on February 26 at a press conference in Washington, D.C., to advocate on behalf of the subjects that are neglected by the federal No Child Left Behind legislation and by pending STEM legislation. These subjects include history, literature, the sciences, the arts, geography, civics, even recess (although recess is not a subject, it is a necessary break in the school day that seems to be shrinking or disappearing in some districts). I serve as co-chair of CC with Toni Cortese, executive vice-president of the American Federation of Teachers.”

To find out whether these concerns are supported by the empirical evidence from Florida, tune in to the Manhattan Institute web site tomorrow to see the study.