The increase in staff is largely aides (for special education support, for example); a smaller percentage is for administration. In the early 1970s, support staff made up under 2% of staffing; today it is over 10%. See figure 4 here: http://edex.s3-us-west-2.amazonaws.com/publication/pdfs/Hidden-Half-School-Employees-Who-Dont-Teach-FINAL_0.pdf

I know that, as a recent student, my school had a total of 25 teachers for a student body of 900 students (about a 36:1 ratio), and 8 teachers/educational assistants for a classroom of 11 special-needs students (about 1.3:1).

So yes, there were a lot more teacher-hours, and more educational dollars, spent on each special-education child. Ironically, at university, I see that the special-needs students also have a basic life-skills course where they maintain that 1.3:1 ratio even after high school. I wonder how long they’ll spend in that program, and whether, if you take into account their 8 years of schooling versus my 5 in secondary school, the costs are at all similar.

I would resign myself to concluding that 9- and 13-year-olds in the three major racial sub-groups of the general population (white students, black students, and Hispanic students) have shown steady improvement in terms of NAEP scores over the last 30+ years. In addition, the two lowest-achieving of those sub-groups (black and Hispanic students) have come to represent a larger portion of the total student population, while the highest-achieving of those three sub-groups (white students) has come to represent a smaller portion. To me, that suggests (but does not prove) that public elementary and middle schools have done an increasingly effective job with a student population that has, over time, come to school less prepared to be academically successful. Those improvements, however, are incremental and modest.

I would further resign myself to concluding that 17-year-olds across all three major racial sub-groups have shown little to no improvement in terms of NAEP scores over the last 30+ years, which suggests that public high schools are not doing a more effective job with a student population that has, over time, come to school less prepared to be academically successful. To gain a better understanding of high school effectiveness, I would also want to look at other indicators peculiar to that level, such as graduation rates, SAT scores, higher education attendance and completion rates, and others.

The question of whether the additional money spent on K-12 was well or poorly spent is not, in my mind, answered by these data. I would additionally want to know how the extra money has been spent (as I asked in an earlier comment), and would want to look at additional outcomes data.

I think my beef with your teachability index was primarily with the conclusion that you drew from it earlier in this conversation: “In fact, the aggregate student population has gotten easier to serve.”

Again, I would be more cautious. I would suggest that, according to your teachability index — which assumes that 16 specific features comprise student teachability, that those features can be accurately grouped into six categories, and that those categories each equally contribute to overall teachability — the aggregate student population may have become easier to serve. Given the multiple counter-arguments that might be presented to your index — for example, that you have not identified accurate features of teachability, that your features are not given accurate weightings, etc. — I would not be as bold as you in the conclusions I draw from it.

Thanks again for taking the time to talk through this!

Parry

“Which underperforming subgroups are getting smaller?” is the whole subject of the Teachability Index study. Some examples of shrinking underperforming subgroups taken from that study include:

- Poor children (as measured in two different ways)

- Children whose parents didn’t go to college

- Migrant children

- Children with health problems

- Teenage mothers

The hypothetical phenomenon you describe, where all subgroups make gains but the aggregate score conceals the fact that gains have been made, cannot occur. If all subgroups make gains, the aggregate score will rise. It’s true that if gains are made and the subgroup composition also changes, the magnitude of the gains may be reduced in the aggregation. But if the aggregate score is flat, not all subgroups made gains.
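The dampening effect conceded here can be sketched with a share-weighted mean (all scores and population shares below are hypothetical numbers for illustration, not figures from NAEP or the Teachability Index study):

```python
# Aggregate score = share-weighted mean of subgroup scores.
# Hypothetical case: both subgroups gain 5 points, but the
# lower-scoring subgroup grows as a share of the population,
# so the aggregate gain is smaller than either subgroup's gain.

def aggregate(scores, shares):
    """Share-weighted mean of subgroup scores."""
    return sum(sc * sh for sc, sh in zip(scores, shares))

before = aggregate([80, 50], [0.70, 0.30])  # about 71.0
after = aggregate([85, 55], [0.65, 0.35])   # about 74.5

# Each subgroup gained 5 points, but the aggregate gained only about 3.5.
print(before, after, after - before)
```

With these illustrative numbers the aggregate still rises, just by less than either subgroup's gain, which is the "magnitude reduced in the aggregation" effect described above.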

That question is now moot, however. If you concede my point that only the 17-year-old scores count, then we agree that outcomes are flat. We’ve more than doubled our investment and have nothing to show for it.

I did not say or imply that we attached no weights. I said *any* weights we attached would have been effectively arbitrary. My point is that there’s no avoiding that problem if we have any measurement of teachability at all, so the question is whether we want an imperfect measurement or none.

As Milton Friedman once said, if you can measure it, measure it, and if you can’t measure it, measure it anyway.

Thanks. And I was unaware that the IQ cut-off might vary by state. I (apparently mistakenly) thought the dividing line was 70, as in clinical settings (with the additional criterion of functional life-skill impairment or not). Thanks for making me more aware of these issues.

Thanks for your response. I had a question and a point.

First of all, which underperforming sub-groups are getting smaller? I wasn’t sure which sub-groups you were referring to. Also, if every major (i.e., reasonably sized) sub-group within a larger population has improved over time (e.g., white students, black students, Hispanic students), but the aggregate numbers have remained flat because the under-performing sub-groups have grown as a percentage of the overall population, how can you argue that improvement hasn’t occurred? Simply resorting to the aggregate strikes me as a simplistic math trick that hides the larger picture. Your point about 17-year-olds, however, is an important one.

My point relates to your study. You seem to suggest that you ended up not attaching weights because they would have been arbitrary. But the fact is, you did attach weights (at least, as far as I can tell). You created an overall composite score that reflects a combination of each of the six indices, and your conclusions are based on the way in which this composite score changed over time. In order to create the composite score, you had to create a mathematical formula that combined the indices. Thus, each index represents about 17% of the overall score, which is an assigned weight. Each of the sixteen factors also has a mathematical weight within your overall score. Religious observance ends up representing approximately 4% of your overall index, while single parenthood (i.e., living with a single parent) represents approximately 8%. Therefore, according to your formula, religious observance is half as powerful a factor as single parenthood in determining the teachability of a student population.

As an additional example, preschool attendance rates, percentages of English language learners, and parents’ education levels are all assumed to be mathematically equivalent (i.e., to have equal weight) in determining your overall composite score. Had you decided, for example, that the percentage of English language learners has a greater impact on teachability than preschool attendance rates, this could have dramatically affected your overall composite, and thus your conclusions.
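The implicit-weight arithmetic described above can be sketched as follows. The category names and factor counts here are my illustrative assumptions (chosen so the percentages match the ones quoted), not the study's actual structure:

```python
# With six equally weighted category indices, each index carries 1/6 of
# the composite, and a factor's implicit weight in the composite is
# 1 / (6 * number_of_factors_in_its_category).
# Hypothetical category structure, for illustration only:
factors_per_category = {
    "family": 2,     # e.g., single parenthood: 1/(6*2) ~ 8.3%
    "community": 4,  # e.g., religious observance: 1/(6*4) ~ 4.2%
}

def implicit_weight(category):
    """Implicit composite weight of one factor in the given category."""
    return 1 / (6 * factors_per_category[category])

print(round(implicit_weight("family") * 100, 1))     # about 8.3 percent
print(round(implicit_weight("community") * 100, 1))  # about 4.2 percent
```

The point the arithmetic illustrates: even when no weights are consciously chosen, the number of factors folded into each equally weighted index assigns each factor an effective weight.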

Again, I really like the idea of a teachability index. But the composite score you’ve developed does have weights (it has to; it’s based on a mathematical formula). Before accepting your conclusions as potentially valid (i.e., that the student population has become easier to teach), I would expect to see a justification from the research literature for the weights you have assigned to the various factors.

If I am completely misreading the mathematical approach you used to create your composite score, please correct me.

Parry

I generally associate cognitive disabilities with the 1-2% of the population who are entitled to alternate testing under NCLB. I believe that various screening definitions are used at the state and district level, IQ being among the most prevalent. Essentially, these are the students who would fall under the heading of MR or MRDD. It is my understanding that the IQ cut-off varies by state. In any case, the majority of students served under IDEIA have disabilities that are NOT defined in this way.

…and all-too-easy rationale that special ed. is to blame for any and all of education’s problems. Well thought; well said!

And John Wills Lloyd’s points about how we have extensive knowledge about effective instructional interventions for students with learning exceptionalities are equally appreciated.

One of the big issues in learning is application: in the learning hierarchy, the gap between knowing what works and doing what works is large. An overly simplistic example is weight loss. I am willing to bet that most people who are overweight are encyclopedias of knowledge on current research about how to lose weight. But whether their actual behavior changes is another story. The same is true in education. The dynamics of the real-world classroom, embedded in multiple complex organizational systems, make consistent application of empirically based instructional interventions challenging (note I did not say impossible). Teachers need support and resources, not blame.

And Margo, I am interested in how you are defining cognitive disabilities. Thank you.

My response was perfectly relevant to your question. Jay quoted my statement that spending is up but results are flat. You asserted (among other things) that “underperforming sub-groups have come to represent proportionately larger pieces of the overall pie”. I responded that this is not true, and linked to the study Jay and I wrote. While some underperforming sub-groups (the ones you choose to focus on) are larger, others (the ones you don’t mention) are smaller.

With regard to the weighting of the factors in the teachability index: since we lack adequate data to say with any confidence how important each of the factors is for student outcomes, any weights we attach would be effectively arbitrary. The question is, do we want to have some measurement, however imperfect, or none at all?

I might just as easily ask why you arbitrarily choose to focus on underperforming subgroups that are getting bigger, rather than underperforming subgroups that are getting smaller. Which brings me to my next point.

If the question is what kind of return we’re getting for our money, the aggregate number is the only one that counts. Think about it: if the aggregate level is unchanged while some subgroups are performing better than they used to, what must have happened to the performance of the students who aren’t in those subgroups? If we invest more dollars, we have a right to expect a net improvement, not an improvement just in some arbitrarily favored group coming at the expense of a worse education for everybody else.
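Holding subgroup shares fixed, this is simple algebra: if the aggregate is a share-weighted mean and it stays flat while one subgroup's score rises, the other subgroup's score must fall. A minimal sketch with hypothetical numbers (shares and scores are illustrative only):

```python
# aggregate = share_a * score_a + share_b * score_b
# Hold the shares and the aggregate fixed, raise subgroup A's score,
# and solve for subgroup B's score in each period.
share_a, share_b = 0.6, 0.4
flat_aggregate = 62.0  # unchanged across both periods

a_before, a_after = 70.0, 75.0
b_before = (flat_aggregate - share_a * a_before) / share_b  # about 50.0
b_after = (flat_aggregate - share_a * a_after) / share_b    # about 42.5

# Subgroup A's 5-point gain was offset by a decline in subgroup B.
assert b_after < b_before
```

With changing shares the algebra has more moving parts, but the accounting identity is the same: a flat weighted mean leaves no room for across-the-board gains at fixed composition.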

And the 17-year-old figures are also the only ones that count for this question. What does it matter if kids are a little bit smarter in third grade than they used to be, if those gains are lost by the time they leave the school system?
