Nothing, in theory. The NYCDOE has just released a report
comparing the traditional teacher preparation programs at the local institutions
that supply more than half of the teachers for the city’s public schools.
There isn’t tremendous variation when you look across the charts, and when you
read the fine print and discover the caveats about the reliability of the data,
as Valerie Strauss of the Washington Post did, you might just shrug off the
whole thing as no big deal.
There are the various rankings, starting with the best known: U.S. News and World Report, Peterson’s, Forbes, even Standard & Poor’s. Each picks a mix of variables and criteria and weighs them in different ways to line the institutions up from best to worst. Most recently, a controversial report from NCTQ and U.S. News & World Report, one riddled with errors, missing data, and other validity problems, gave mostly scathing marks to all but four of the institutions preparing teachers. The study was seriously flawed, and critics have pointed out that it is not very useful to look only at inputs such as syllabi and other material gathered from the internet (see Stanford Professor Linda Darling-Hammond’s commentary and Columbia University Teachers College Professor Aaron Pallas’s humorous take here, or the consolidated debunking assembled by Professor Roxana Marachi here).
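To see why such lists disagree so readily, note that a ranking of this kind is usually just a weighted sum of scores: change the weights and the “best” institution changes. Here is a toy sketch in Python; every school name, variable, and number in it is invented for illustration.

```python
# Toy illustration: the same three schools, two different weightings,
# two different "best" institutions. All numbers are invented.
scores = {                 # (selectivity, funding, graduation rate)
    "School A": (0.9, 0.4, 0.7),
    "School B": (0.5, 0.9, 0.8),
    "School C": (0.7, 0.7, 0.6),
}

def rank(weights):
    """Order schools by the weighted sum of their scores."""
    totals = {name: sum(w * s for w, s in zip(weights, vals))
              for name, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(rank((0.6, 0.2, 0.2)))   # selectivity-heavy: School A comes out on top
print(rank((0.2, 0.6, 0.2)))   # funding-heavy: School B comes out on top
```

Neither ordering is more “true” than the other; the choice of weights does the work.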
But in the era of accountability craziness, somebody decided
that an important measure of teacher preparation programs was how well kids did
on standardized tests in schools that hired graduates of the programs, and this
has jumped to the head of the line among output measures. There are
plenty of problems with measuring teacher quality by how kids do on
standardized tests over time, known in the jargon as VAM, or value-added measures
(a sketch of what such a model looks like follows the quote below), but the
problems with connecting those dots back to the teacher’s graduate program take
the cake: the bias, the validity problems, and even just the logistics make the
problems exponentially greater. My brilliant advisor at the University of
Michigan, Professor Virginia Richardson, noted back in 2007:
“So why argue against the use of student achievement scores
as the outcome measure of interest? Probably because success in getting students
to learn curriculum material—as measured by standardized tests—is, by
itself, a simplistic, inadequate,
misleading and perhaps immoral measure of teacher education quality. We must
also look at the ways in which teachers conduct their classrooms, and at other
elements that affect the success of teaching, including such aspects as
willingness and effort by the learner, a social surround supportive of teaching
and learning, and opportunity to teach and learn.”
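To make the dot-connecting problem concrete: in its simplest form, a value-added model regresses each student’s current test score on the prior year’s score plus an indicator for the student’s teacher, and the teacher coefficients become the estimated “effects.” Below is a minimal sketch with synthetic data; it uses only numpy, every number in it is made up, and real VAMs layer on many more controls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 300 students spread across 10 teachers.
n_students, n_teachers = 300, 10
teacher = rng.integers(0, n_teachers, size=n_students)
prior_score = rng.normal(500, 50, size=n_students)

# "True" teacher effects, known only because we invented them.
true_effects = rng.normal(0, 5, size=n_teachers)
current_score = (0.8 * prior_score
                 + true_effects[teacher]
                 + rng.normal(0, 20, size=n_students))  # everything else

# Design matrix: prior score plus one indicator column per teacher.
X = np.column_stack([prior_score, np.eye(n_teachers)[teacher]])

# Ordinary least squares; coefficients 1..n are each teacher's
# estimated "value added."
beta, *_ = np.linalg.lstsq(X, current_score, rcond=None)
print("true:     ", np.round(true_effects, 1))
print("estimated:", np.round(beta[1:], 1))
```

Even in this clean simulated setting the estimates are noisy at realistic class sizes, and rolling them up from teachers to the programs that trained them, across different schools, grades, and years, only compounds that noise.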
What’s more, the tests that students are taking are in the
process of transitioning to align with the Common Core State
Standards. The latest results in New York City were so messed up as to be
deemed invalid and worthless by many, yet they will become the basis for
judgments about teacher performance, and now teacher education program quality.
Even a veteran reporter like Beth Fertig had a hard time figuring out what to
make of them, and testing expert Daniel Koretz wasn’t much help, judging from
this interview. Chancellor Merryl Tisch, in a radio interview with Brian Lehrer, gave vague answers to questions about how cutoff scores were
determined and about critiques from historian Diane Ravitch, who has recently
called for the resignation of Commissioner John King on her blog.
What’s perhaps most worrisome is that this keen focus on
student achievement as measured by tests, together with the misuse and
distortion of the resulting data, is a growing trend nationally. In the New York Times article covering the
report comparing the twelve colleges in New York City, the reporter ended with
a quote from Secretary of Education Arne Duncan, who feels this study was a
“major step forward.” I would add – into a bottomless pit.
You might also note that the DOE did not reveal whether ANY of the differences were statistically significant.
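For what it’s worth, a significance check of that kind is routine to run and to report. Here is a sketch of the sort of test the DOE could have disclosed, using entirely hypothetical program-level scores and a Welch two-sample t-test from scipy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-graduate "value-added" scores for two programs.
program_a = rng.normal(0.02, 0.15, size=40)  # 40 graduates
program_b = rng.normal(0.00, 0.15, size=35)  # 35 graduates

# Welch's t-test: is the gap between program means larger than
# noise alone would produce?
t_stat, p_value = stats.ttest_ind(program_a, program_b, equal_var=False)
print(f"mean gap = {program_a.mean() - program_b.mean():.3f}, p = {p_value:.2f}")
```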