
Sunday, February 1, 2015

Letter to Duncan on Proposed Regulations

Docket ID ED-2014-OPE-0057
January 30, 2015
The Honorable Arne Duncan
Secretary, U.S. Department of Education
400 Maryland Ave, SW
Washington, DC 20202



Dear Secretary Duncan:
I am a professor of childhood education at Mercy College, based in Dobbs Ferry, New York, where I work in New York City’s public schools with our teacher candidates. Mercy is a private, nonsectarian, minority-serving institution with an enrollment of over 11,000 undergraduate and graduate students. We have a rich tradition of community service both locally and abroad. I am responding to the U.S. Department of Education’s proposed regulations for teacher preparation programs released in the Notice of Proposed Rulemaking (NPRM) on December 3, 2014.
Like other teacher preparation programs in institutions of higher education throughout the nation, we have been actively engaged in improving our programs, earning national accreditation (Mercy College was the first to gain dual initial accreditation from NCATE and CAEP), and working with our partnering districts to provide exceptional beginning teachers for our state’s schools.
In New York State, our teacher preparation programs have also undergone continual reform driven by the latest policy changes requiring more difficult licensure exams, which felt to many of us like an intentional effort to certify fewer candidates, even among those who had already passed the previous exams and met the prior requirements. Professor Linda Darling-Hammond even noted at the annual meeting of AERA in 2014, “New York is a prototype of how not to implement teacher performance assessment.”
The regulatory proposal put forward by the Department is flawed for many reasons, but I will focus on what I consider to be the four most important ones. It is an unfunded mandate that grossly underestimates the burden of the proposed regulations; it is a top-down mandate lacking requisite deliberation and input from stakeholders; it creates what are nothing short of perverse incentives for institutions of higher education that will ultimately harm rather than help educational goals; and finally, it uses inappropriate measures and proxies for those measures despite an abundance of evidence regarding the damage resulting from the irresponsible misuse of standardized tests to measure things they were never designed to measure.  
Overall, if these proposed regulations were adopted, they would create an overreach of federal authority into what is currently state-level and institution-level decision-making, and they would set a dangerous precedent to alter federal financial aid policy through regulation rather than through the legislative process. This simply cannot stand as it violates fundamental principles of our democratic processes.
The Department mandates the following indicators as the basis of the rating system to be used by each state (NPRM, pp. 71833-71836):
· Student Learning Outcomes
· Placement and Retention Data
· Graduate and Employer Surveys
· Accreditation or State Program Approval
The costs associated with gathering and analyzing all of these data are prohibitive, for states and for institutions. The regulations also place a burden of time and effort on institutions and their faculty and staff, who are already stretched beyond capacity by the demands of accreditation. Ultimately, such an unfunded mandate will harm all stakeholders, from teacher educators, to teachers, to children, and to the educational institutions meant to help them.
As a top-down mandate, the regulations ignore extant research on a number of issues and clearly lack the deliberation and input from experts that is required when seeking such broad and far-reaching changes to teacher education. They continue an inappropriate reliance on standardized tests for accountability purposes that lack scientific validity, such as the use of value-added modeling (VAM) in the determination of student growth and in teacher evaluations where student growth is among the measures used, as evidenced on p. 71837 of the NPRM. The regulations seek to hold institutions accountable for things over which they have no direct control, including the working conditions of their graduates, employment trends, the ability to collect valid and complete data, and so on. Privacy concerns are not addressed, even as we witness fast-growing national concern about the implications of the massive data tracking now occurring in education.
These proposed regulations create perverse incentives that will harm rather than help the needed improvements in teacher education. Institutions will be rewarded for not partnering with the high-poverty schools that could benefit from having pre-service teachers and faculty working with them, for not accepting non-traditional candidates into their programs, and for not helping to place graduates in the schools where they are most needed. I wrote of these perverse incentives on my blog in 2013, when the CAEP standards were first released, and expressed the concern that defining teacher effectiveness by rising test scores would crowd out attention to the many other factors that matter in judging quality, to the detriment of P-12 students. The “rigorous teacher candidate entry and exit qualifications” (NPRM, p. 71835) now in place in our state are already having a negative impact on our work, with little evidence to suggest that these licensure requirements will lead to better teaching. For example, the considerable work entailed in completing the edTPA during student teaching has meant a greatly diminished focus on preparing pre-service teachers with the confidence and skills to be successful first-year teachers. I am not alone in believing that this type of performance assessment belongs in the induction years rather than at a time of intense learning as a guest in another’s classroom.
What concerns me the most is the continued irresponsible misuse of standardized test scores to measure things they were never designed to measure. Even as a measure of student learning, we know that standardized tests have considerable limitations. When high stakes are attached, there can be unintended consequences, including cheating and data manipulation. VAMs and SLOs were not designed to measure teacher quality. While good teaching is an important factor in meaningful student learning, effort and interest on the part of the learner are also required, as are opportunities to learn and a social surround that supports learning. That includes a high-quality curriculum, professional support and development for teachers, extra-curricular activities that enhance students’ educational experiences, access to technology, and more. We know that, due to inequities in funding, segregated schools and communities, and the staggering growth of families living in poverty in our country, these elements are not in place in too many of our schools.
Similarly, placing such an overwhelming emphasis on outcome measures in teacher education neglects other important factors in examining the quality of programs. While teachers’ knowledge and skills matter, so do other qualities, such as their ability to communicate and partner with families, to contribute to the school community in meaningful ways, and to connect with students and make a difference in their lives. Not everything that matters can be measured by a score or a number. To suggest otherwise is to perpetuate a false view of the purposes of education in a democratic society.
Institutions responsible for preparing teachers are driven by a mission to improve education. Other disciplines may seek knowledge simply for knowledge’s sake, to expand our understanding of the world, but in education we have always worked to make things better. In this era of high-stakes accountability, however, we have seen that federal overreach can have disastrous consequences. Our national obsession with competition, rankings and ratings, and the misuse of data is demoralizing the ranks of educators who see in their work a deeper meaning than a better test score.
Thank you for your consideration. I urge you to listen to the voices of professionals who have grave concerns about the proposed regulations and the harm that would result from their implementation.
Sincerely,
/s/ Alexandra Miletta

Assistant Professor of Childhood Education, Mercy College

Friday, January 2, 2015

Standardized Testing: The Final Frontier

Tests seem so reasonable at first: teachers teach, students learn, and demonstrate mastery by passing a test. But as Daniel Koretz says at the start of his 2008 book, Measuring Up: What Educational Testing Really Tells Us, “Achievement testing is a very complex enterprise, and as a result, test scores are widely misunderstood and misused.” Now that is what I call an understatement. Furthermore, despite Common Core claims that better standards and tests mean fewer reasons for concern about their misuse, as Vito Perrone of Harvard University pointed out, “Most items on these various standardized tests remain well within the longstanding technology of testing, primarily to support the mechanical scoring procedures. They still seem to be limited instruments with too much influence” (1999, p. 152).

The testing “enterprise” is poised for a warp drive record-breaker of misuse insanity. In a nutshell, here’s how they plan to connect the dots.

A tiny fraction of what a student knows and can do is hypothetically captured, with some modicum of so-called scientific accuracy, by converting the number of correct answers out of the total number of questions on a standardized test into a raw score. Keep in mind that this single raw score is still prone to error in its attempt to measure what the student knows: the student may have guessed (correctly or not), or may simply have had other contextual reasons for the performance, including illness, distraction, or nerves. The test is also imperfect by design and is likely biased in some ways.

That raw score then goes through some psychometric process: either it is normed to a scale that compares it to other test scores, or it is ranked somewhere between unacceptable and excellent based on someone’s judgment of what students should know and be able to do, or both. This is where all hell breaks loose, as that converted score gets used.
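
To make the arithmetic concrete, here is a minimal sketch in Python of that conversion chain: raw count, to percent correct, to scaled score, to label. The 100-to-300 reporting scale, the linear scaling, and the cut scores are all invented for illustration; real testing programs use far more elaborate (and proprietary) psychometric models.

```python
# Hypothetical illustration only: the scale and cut scores below are made up.

def raw_to_percent(correct: int, total: int) -> float:
    """Percent of items answered correctly."""
    return 100.0 * correct / total

def percent_to_scaled(percent: float, low: int = 100, high: int = 300) -> float:
    """Map percent correct linearly onto an arbitrary reporting scale."""
    return low + (high - low) * percent / 100.0

def performance_label(scaled: float) -> str:
    """Attach a descriptive label using invented cut scores."""
    if scaled < 180:
        return "does not meet standard"
    if scaled < 240:
        return "meets standard"
    return "exceeds standard"

if __name__ == "__main__":
    correct, total = 32, 50                   # 32 of 50 items answered correctly
    percent = raw_to_percent(correct, total)  # 64.0
    scaled = percent_to_scaled(percent)       # 228.0
    print(percent, scaled, performance_label(scaled))  # 64.0 228.0 meets standard
```

Every step after the raw count (the choice of scale, the placement of the cut scores, the wording of the labels) is a judgment call, which is exactly where the misuse described below begins.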

How might it get used? For one, to tell students and their parents or guardians how “well” they did, which can involve labeling the converted score with a percentile rank, a grade-level equivalent, or just a descriptive meaning such as “meets standard.” However, it will likely be used in what is called a “high stakes” way: to assign students to special education, to hold them back a year, or to track them into homogeneous groups.

The most pernicious use is to group the scores to make claims about the quality of individual teachers. From there, it’s easy to see how tempting it is to make a claim about the quality of a school, and then a whole district. While we’re at it, let’s compare counties, states, regions, countries.

The cold hard truth, in Koretz’s words, is this:
Scores on a single test are now routinely used as if they were a comprehensive summary of what students know or what schools produce (pp. 44-45).
He goes on later to add:
Simply attributing differences in scores to school quality or, similarly, simply assuming that scores themselves are sufficient to reveal educational effectiveness, is unrealistic. And more generally, simple explanations of performance differences are usually naïve. All of this is established science (p. 142).

Things get really tricky when hierarchical linear modeling kicks in to provide a “value-added” way to compare actual scores to a prediction and to use the difference to rate teachers’ effectiveness. Ignoring warnings from experts, policymakers have misused these value-added models, or VAMs, by letting them weigh heavily in the annual evaluation of teachers. Carol Burris, an outspoken principal who opposed this misuse of standardized test scores, recently wrote of a teacher’s lawsuit filed in New York State by my friend, Sheri Lederman, who hopes her case can become “a tipping point” in bringing this damaging, unreliable practice to a grinding halt.
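
Stripped of the hierarchical machinery, the core of a value-added calculation is a residual: fit a model that predicts each student’s current score from prior scores, then credit or blame the teacher for the average gap between actual and predicted. Here is a minimal sketch of that idea, with entirely made-up data and a simple least-squares fit standing in for the hierarchical linear model; nothing here reflects any state’s actual formula.

```python
# Toy value-added sketch: predict current scores from prior scores, then
# average each teacher's residuals. Data are simulated; a real VAM uses a
# hierarchical linear model with many more covariates and shrinkage.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_teachers = 200, 10

prior = rng.normal(300, 40, n_students)            # last year's scaled scores
teacher = rng.integers(0, n_teachers, n_students)  # random teacher assignment
# Simulated current scores: no true teacher effect at all, just noise.
current = 50 + 0.9 * prior + rng.normal(0, 25, n_students)

# Predict current score from prior score with ordinary least squares.
slope, intercept = np.polyfit(prior, current, 1)
residual = current - (intercept + slope * prior)

# A teacher's "value-added" estimate is the mean residual of their students.
for t in range(n_teachers):
    students = teacher == t
    print(f"teacher {t}: value-added {residual[students].mean():+6.2f} "
          f"({students.sum()} students)")
```

Because the simulated teachers here have no effect whatsoever, every nonzero “value-added” number this prints is pure noise, a compact illustration of the instability the experts keep warning about.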

That may be wishful thinking, because now the dots are being connected to the colleges and universities that educate teachers. They too are to be evaluated and ranked based on their candidates’ performance on the standardized tests required for teacher certification, of which there can be more than four in some cases. New federal regulations, currently open for public comment until February 2nd, would require these institutions of higher education to also track their teacher graduates and collect their annual evaluation ratings, including the VAM measure, in order to remain eligible for the TEACH grant program. (I have previously written of how similar perverse incentives plague the new CAEP accreditation standards for these institutions.)

Here’s a test question for Arne Duncan, our Secretary of Education:
TRUE OR FALSE?
“A program’s ability to train future teachers who produce positive results in student learning [as measured by standardized testing] is a clear and important standard of teacher preparation program quality.” (from p. 63 of the proposed regulations document)
Here’s a hint, provided by Benjamin Campbell of Richmond, Virginia in the public comments posted to the Federal Register: “Current research indicates that no more than 14% – and often far less – of a student’s learning as measured by standard tests – the only standardized measure – can be attributed to the teacher.”

The bad news is that Arne Duncan, and a whole slew of politicians and policymakers in line behind him, think the correct answer to this question is TRUE. They actually believe harsh punitive consequences work and lead to improvement. They think closing schools and teacher education programs is a good idea. They don’t care if any of their plans are based on faulty data, junk science, or illogical statistics. They blithely ignore extant research, recommendations from experts, and, to put it bluntly, common sense. The question remains – what are we going to do about it?


As Captain Jean-Luc Picard would say, “Engage.”