(Guest Post by Matthew Ladner)
Yesterday I did a quick face validity test on Arizona’s new school grades by comparing the schools closest to where I live. That didn’t go terribly well for the grading system. This morning, before my bike ride, I decided to do a second two-minute test: comparing the district high school in the neighborhood I used to live in (Shadow Mountain High School in the Paradise Valley district) to the one I live near today (Arcadia High School in Scottsdale Unified).
If someone would like to justify the red columns getting a “B” while the blue columns get a “C,” the comment section awaits. If you are running Arcadia High, this is going to look to you like Shadow Mountain had a larger swoon on ELA than you did from 2016 to 2017 and less math improvement than you (you improved a smidge; they were flat). Moreover, you outscored them in both math and reading in 2017, yet they got a “B” and you got a “C.” Good luck getting the Arcadia High folks on board with this.
GreatSchools, btw, gives both Arcadia and Shadow Mountain a 6 out of 10 for their academics. In other words, GreatSchools gives them both the equivalent of a D. With proficiency rates in the twenties for Shadow Mountain and the thirties for Arcadia, it is hard to argue with that assessment. If there is a D-plus to be had here, Arcadia is clearly the school more deserving of it. The state could really, really use higher levels of achievement at both of these schools, btw.
The subject at hand, however, is the grading formula used to create these preliminary grades. It certainly seems to lack face validity to me, while the GreatSchools ratings seem defensible and thus useful to parents.
One of the priorities the state charged the A-F Ad Hoc Committee with incorporating into the grading system was a reduction in the correlation between performance and poverty. While there were other priorities, this one became paramount. The Growth Formula, which combines two measures (SGP and SGT) weighted against each other and across performance classifications in two pages of rubrics, was the heart of the effort to identify a ‘valid’ metric that did not correlate with poverty.
Armed with this tool, the State Board of Education approved a formula for K-8 schools in which half of the final grade is based on growth. One fifth of the 9-12 formula is based on growth, but other non-test factors make up half of the high school grade (compared to one fifth of the elementary grade). In both formulas, basic proficiency makes up a mere 30% of the overall score.
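To make the weighting concrete, here is a minimal sketch of how such a composite score works. The weights below come from the post (K-8: 50% growth, 30% proficiency; 9-12: 20% growth, 30% proficiency, 50% non-test factors); the component scores, the lumped "other" category, and the function itself are purely illustrative assumptions, not the state's actual computation.

```python
def composite_score(components: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of component scores (each on a 0-100 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 100%
    return sum(components[k] * weights[k] for k in weights)

# K-8 formula: growth counts for half, proficiency for 30%;
# the remaining 20% is lumped here as "other" indicators.
k8_weights = {"growth": 0.50, "proficiency": 0.30, "other": 0.20}

# 9-12 formula: growth is only one fifth, proficiency is still 30%,
# and non-test factors (graduation, readiness, etc.) make up half.
hs_weights = {"growth": 0.20, "proficiency": 0.30, "other": 0.50}

# A hypothetical school: strong growth, weak proficiency.
school = {"growth": 70.0, "proficiency": 30.0, "other": 60.0}

print(round(composite_score(school, k8_weights), 1))  # 56.0
print(round(composite_score(school, hs_weights), 1))  # 53.0
```

The point of the toy example: the same school lands on a noticeably different composite depending on which formula applies, because growth dominates the K-8 weighting while non-test factors dominate the high school weighting.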
When you talk about high schools, one factor that separated many of them was performance on the “College and Career Readiness” indicator. This is a long list of measures seen by the SBE as showing that graduates are prepared for success: AP test taking, college-level course completion, CTE courses and program performance, ACT passing, and a number of other factors. Performance on this indicator is tied to a school’s access to programs that support student success. I’m not aware of anything in the GreatSchools reporting that includes this type of metric, but it is clear that these factors matter for assessing whether a high school is preparing students for success after high school. Just being taught to pass the test isn’t going to help them with what comes after.
The development of the letter grade system was a complicated and cumbersome process that included hours of feedback from dozens of groups of stakeholders. In the end, I think everybody agrees it is a flawed system, but it is the best we are capable of producing. It is like Churchill’s comment on democracy: the worst system, “except for all those other forms that have been tried from time to time.”
What parent groups were involved? I haven’t come across any who like the grading system that came out of this process. “The development of the letter grade system was a complicated and cumbersome process that included hours of feedback from dozens of groups of stakeholders.”
When I taught elementary school, parents and teachers decided what would be on a report card, not “stakeholder” groups.
I understand the impulse to reduce the correlation between income and outcomes, but the fact of the matter is that the achievement gap does exist, and twisting the formula into complex gymnastics ultimately only produces bizarre results. The aim of a good A-F system should be to help improve the outcomes of low-income students over time, and thus to reduce the income achievement gap. Contorting the formula in a willful effort to sweep that gap under the rug does not achieve this goal and yields nonsensical results.