Saturday, August 17, 2013

What’s wrong with evaluating teacher education programs in New York City?


Nothing, in theory. The NYCDOE has just released a report comparing the traditional teacher preparation programs at local institutions that provide over half of the teachers in the city’s public schools. There isn’t tremendous variation across the charts, and once you read the fine print and discover the caveats about the reliability of the data, as Valerie Strauss of the Washington Post did, you might shrug the whole thing off as no big deal.

There are valid arguments for this sort of study. For starters, those considering teaching as a profession have a right to know whether the program they choose will prepare them well and help them find employment, among other things. It would also be good if the decision about which program to attend could rest on more than transit and billboard advertising, flashy brochures, tuition costs, and the courses and credits needed to finish.

For example, students searching for a teacher preparation program might like to know some fairly important things: the quality of the professors and adjuncts, the size of the classes, the presence of helpful advisors, the library resources, course scheduling and flexibility, the opportunity to choose electives or have options among requirements, the schools in which they will do fieldwork and student teaching, and the teachers who will mentor them in those schools, just to name a few. There’s no Yelp for these things, and Rate My Professors isn’t much help either.

There is always word of mouth. I heard some young women on the subway discussing the merits of the program at Relay. One was explaining to another, “I don’t even have to go to class. You pass these tests and you get an exemption from attending.” (Relay’s website refers to the possibility of “substitutions” for time in class.)

As for accreditation, there were two national organizations, NCATE and TEAC, which have merged into CAEP under a new set of standards. The transition will be murky and will take years, and many in the field feel that everybody can get accredited, so accreditation is not a particularly meaningful measure of quality.

Then there are the various rankings, starting with the best known: U.S. News & World Report, Peterson’s, Forbes, even Standard & Poor’s. These pick a mix of variables and criteria and weight them in different ways to line the institutions up from best to worst (a toy version of that arithmetic is sketched below). Most recently, a controversial report from NCTQ and U.S. News & World Report, prone to errors, missing data, and other validity problems, gave scathing marks to all but four institutions preparing teachers. It’s evident the study was seriously flawed, and critics have pointed out that it is not very useful to look only at inputs such as syllabi and other material gathered from the internet (see Stanford Professor Linda Darling-Hammond’s commentary, the humorous take by Professor Aaron Pallas of Columbia University’s Teachers College here, or the consolidated debunking assembled by Professor Roxana Marachi here).

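To make concrete what such rankings do under the hood, here is a toy sketch of a weighted composite. Everything in it is invented for illustration: the criteria, weights, and scores are hypothetical, not drawn from any actual publication. The point is how much rides on the weights, which is exactly where the rankings quietly differ.

    # Toy weighted-composite ranking; the criteria, weights, and scores
    # below are invented for illustration, not from any real ranking.
    programs = {
        "Program A": {"selectivity": 0.8, "outcomes": 0.6, "resources": 0.7},
        "Program B": {"selectivity": 0.5, "outcomes": 0.9, "resources": 0.6},
        "Program C": {"selectivity": 0.7, "outcomes": 0.7, "resources": 0.9},
    }
    weights = {"selectivity": 0.4, "outcomes": 0.4, "resources": 0.2}

    # Composite score = weighted sum of criteria; change the weights and
    # the same institutions line up in a different order.
    composite = {
        name: sum(weights[c] * value for c, value in criteria.items())
        for name, criteria in programs.items()
    }
    for name, score in sorted(composite.items(), key=lambda item: -item[1]):
        print(f"{name}: {score:.2f}")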

But in the era of accountability craziness, somebody decided that an important measure of teacher preparation programs was how well kids did on standardized tests in the schools that hired the programs’ graduates, and this has jumped to the head of the line among output measures. There are plenty of problems with judging teacher quality by how kids do on standardized tests over time, known in jargon as VAM, or value-added measures (a bare-bones sketch of such a model follows the quote below), but the problems with connecting the dots back to the teacher’s graduate program take the cake: the bias, the validity issues, and even just the logistics make them exponentially worse. My brilliant advisor at the University of Michigan, Professor Virginia Richardson, noted back in 2007:

“So why argue against the use of student achievement scores as the outcome measure of interest? Probably because success on getting students to learn curriculum material—as measured by standardized tests—is, by itself, a simplistic, inadequate, misleading and perhaps immoral measure of teacher education quality. We must also look at the ways in which teachers conduct their classrooms, and at other elements that affect the success of teaching, including such aspects as willingness and effort by the learner, a social surround supportive of teaching and learning, and opportunity to teach and learn.”

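For readers who want to see what is actually computed when people say “VAM,” here is a bare-bones sketch of the core idea: regress each student’s current score on a prior score plus teacher indicators, and read the teacher coefficients as “value added.” The file and column names here are hypothetical, and real VAM systems layer on many controls, shrinkage, and multi-year averaging; this is only the skeleton that program evaluation then stretches one more link.

    # Minimal sketch of a value-added model (VAM). The file and column
    # names are hypothetical; real systems are far more elaborate.
    import pandas as pd
    import statsmodels.formula.api as smf

    students = pd.read_csv("student_scores.csv")  # hypothetical dataset

    # Regress the current score on the prior score plus teacher fixed
    # effects; each teacher's coefficient gets read as "value added."
    model = smf.ols("post_score ~ pre_score + C(teacher_id)",
                    data=students).fit()
    teacher_effects = model.params.filter(like="teacher_id")
    print(teacher_effects.sort_values(ascending=False))

    # The contested further step: average these noisy estimates by the
    # preparation program each teacher attended and call the result a
    # measure of program quality.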
What’s more, the tests students take are in transition, being realigned to the Common Core State Standards. The latest results in New York City were so messed up that many deemed them invalid and worthless, yet they will become the basis for judgments about teacher performance, and now about teacher education program quality. Even a veteran reporter like Beth Fertig had a hard time figuring out what to make of them, and testing expert Daniel Koretz wasn’t much help, judging from this interview. Chancellor Merryl Tisch, in a radio interview with Brian Lehrer, gave vague answers to questions about how cutoff scores were determined and about critiques from historian Diane Ravitch, who has recently called for the resignation of Commissioner John King on her blog.

What’s perhaps most worrisome is that this keen focus on student achievement as measured by tests, and the misuse and distortion of that data, is a growing trend nationally. In the New York Times article covering the report comparing the twelve colleges in New York City, the reporter ended with a quote from Secretary of Education Arne Duncan, who feels this study was a “major step forward.” I would add: into a bottomless pit.


1 comment:

  1. You might also note that the DOE did not reveal whether ANY of the differences were statistically significant.
