Monday, August 11, 2008

Rating Schools

Last week I threw out a comment about suburban school districts whining about State Report Cards that don't let them coast on their demographic profiles. Paul from Save the Hilliard Schools took some exception in the comments, in part because I seemed to imply that Bexley's success was as meaningful as Dublin's (and Hilliard's) shortcomings. It was a fair point, and what I really meant to say was a) that Dublin's shortfall this year was much more meaningful than its previous 'excellent' rating (or Bexley's current 'excellent' rating, for that matter), and b) that the mere idea that Dublin and Cleveland deserved the same rating was not nearly as ludicrous as the Dispatch tried to make it out to be.

As if in response (that's been happening a lot lately...), a press release has come out describing work on school ratings done by two OSU researchers:

Currently, most people believe that it is obvious which schools are the best – the ones with the highest achievement scores. But using achievement scores to measure school quality assumes that all schools have students with equivalent backgrounds and opportunities that will give them equal opportunities to succeed in school. And that's obviously not true, von Hippel said.

...

By comparing test scores at the end of kindergarten and the beginning of first grade, the researchers could measure learning rates during summer vacation.

Comparing test scores from the beginning and end of first grade allowed the researchers to see how much children learn during the school year.

They then were able to calculate how much faster students learned during the first-grade school year compared to when they were on summer vacation. This was the "impact" score that showed how much schools were actually helping students learn.

"If we evaluate schools that way, things change quite a bit as far as which ones we would identify as failing," Downey said.

...

Based on achievement scores, failing schools tend to be in urban areas, serve a higher percentage of children who qualify for a free lunch, and have a high minority population.

But if you look at impact scores, failing schools are not as concentrated in poor, urban areas with high minority populations.

"When you shift the focus from achievement to impact, there are still schools that do very well and some that do poorly," von Hippel said. "But they are not necessarily where you think they are. There are high-impact schools in every kind of neighborhood, serving every kind of child. The same is true of low-impact schools."
The research appears in the current issue of the journal Sociology of Education (for those who wonder about such things, it has an ISI impact factor of 1.4, putting it in the top 15-20% of journals in either Sociology or Education). I went back to the actual journal article (subscription only, no link) to answer a question I had, and was satisfied both with the overall methodology (those sociologists certainly know how to isolate variance) and with the answer to my question:

A final concern is that impact-based evaluation may penalize schools with high achievement. It may be difficult for any school, no matter how good, to accelerate learning during the school year for high-achievement children. Our study, however, did not find a negative correlation between impact and achievement; to the contrary, the correlation between achievement and impact was positive, although small (see Table 3). And among schools in the top quintile on achievement, 26 percent were also in the top quintile on impact (see Table 4), suggesting that it is possible for a high-achieving school to have high impact as well.
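
To make the arithmetic concrete, here's a minimal sketch in Python of the impact calculation, using made-up scores for two hypothetical schools. (The actual paper fits multilevel growth models to isolate the school's contribution; this is just the back-of-the-envelope version of the idea described above.)

```python
# Back-of-envelope version of the "impact" calculation described above.
# The real study fits multilevel growth models to isolate school effects;
# this just shows the core arithmetic on made-up numbers.

def impact_score(end_k, start_1, end_1, summer_months=3, school_months=9):
    """Return (summer_rate, school_rate, impact) in points per month.

    end_k   -- average test score at the end of kindergarten
    start_1 -- average test score at the start of first grade
    end_1   -- average test score at the end of first grade
    """
    summer_rate = (start_1 - end_k) / summer_months  # learning while school is out
    school_rate = (end_1 - start_1) / school_months  # learning while school is in
    return summer_rate, school_rate, school_rate - summer_rate

# Two hypothetical schools: "A" scores higher, "B" adds more learning.
for name, scores in [("School A", (52.0, 53.0, 62.0)),
                     ("School B", (40.0, 39.5, 51.5))]:
    summer, school, impact = impact_score(*scores)
    print(f"{name}: summer {summer:+.2f}/mo, "
          f"school year {school:+.2f}/mo, impact {impact:+.2f}/mo")
```

In this contrived example the higher-scoring school (A) has the lower impact: its students arrive ahead and keep gaining over the summer, while School B's students lose ground over the summer but learn more during the school year. That is exactly the kind of reversal the researchers describe.
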
I would love for Franklin County to pilot an impact-based evaluation model, and it would certainly be a chance for Dublin and Hilliard to show that they really are doing more for their students than Southwest or Whitehall are doing for theirs.

1 comment:

Paul said...

I think the question you raise is whether Ohio's report card system is supposed to measure the achievement level of the kids, or how much they improve during a year.

I remember my first days in Navy ROTC at Ohio State. We were given a physical fitness test in which we were told to do as many situps, pullups and pushups as we could. I was in decent shape in those days, and wanting to make a good showing, did something like 100 situps. After the test was over, I found out that on the next PT test in a couple of months, we would be evaluated on how much we had IMPROVED. Crap. Thank you, upper-classmen, for hiding that piece of intelligence from us. I really felt sorry for the super-fit guys who planned to become Marine officers and who did like a zillion pushups to show off to us squids.

So it seems like the report card should tell both stories. One score would report on absolute achievement levels, another on progress. It might mean that some pretty hard questions have to be asked to ensure that the test really captures how much improvement there is in a population of kids taking college-level AP classes. Maybe we should just use the PSAT or SAT as the basis for measurement - very few kids get a perfect score. But we would have to use the same testing instrument every year to get a true picture of improvement, I would think.

But the question remains - what are the scores used for? Is it to assure parents that they've selected a good school system and that their tax money is being well spent? Or, in the same way, to tell parents that their school system sucks and they need to get out? Seems to me that parents can figure this stuff out without the help of the state.

What value does the ranking system have when great school systems like Dublin and Hilliard get dinged, yet the residents know they're great school systems anyway? Seems to me like it lessens the value and validity of the system. You can bet that in Hilliard, those who are really involved in the schools are disgusted with the report card system and wish it would go away.

But there is also a danger that Dublin and Hilliard residents who don't follow school matters that closely, such as the seniors who have no kids in school but are faithful voters, will use this complex and flawed report card system to make decisions about how to vote on levies.

I've got an idea for a report card statistic - Test scores divided by average household income. This would attenuate the scores of wealthy districts and further demoralize those folks who are funding not only their own districts, but maybe one or two other poorer districts as well.

Standardized testing is necessary - we can't trust that an "A" in Bexley represents the same performance as an "A" in Danville. But we have to be careful how those scores are used. I fear they'll be used in funding decisions, and not necessarily in a good way.

PL