Tests and rankings, twists and turns. Few things are as confusing as the new plans for evaluating teachers and principals. So here at SchoolBook, we thought we would offer an explanation based on our best sense of where things stand.
First the news:
The New York City Department of Education is eliminating its controversial Teacher Data Reports, which rank teachers based on students' standardized test scores, but it is not giving up ranking teachers by test scores altogether, officials said Friday.
The city is considering introducing a roster of new local tests beginning next year, as part of the development of a teacher evaluation system, and officials said there would be some kind of new effectiveness ranking linked to those tests, or to other existing measures of performance.
So, though it is several steps in the future, New York City teachers could eventually face two rankings: one, tied to state tests, rating them from ineffective to highly effective, and another tied to the local assessments, which are expected to rely less on multiple choice and more on longer word problems and essays.
“We will have to develop some way to measure growth on those local measures,” said Matthew Mittenthal, a Department of Education spokesman.
Now for the background:
The basics: In 2010, New York State passed a law to transform how teachers and principals across the state are evaluated from the old unsatisfactory/satisfactory system to a much more complicated one. By law, there will be two basic components of the evaluation:
Indicators of student achievement (tests): 40 percent
This 40 percent is broken down into two categories:
- 20 percent state standardized test scores or other “rigorous, comparable” measures.
- 20 percent local or district tests, or other “rigorous, comparable” measures.
Other local measures (more subjective ones): 60 percent
Most of this will be based, for teachers, on principals' observations of their work. Districts can also come up with their own goals by which to judge success, like a review of a portfolio of student work, or a teacher's overall contribution to the school.
Now, we'll break it down further.
The 40 percent from tests:
The test score component of the measurement has received the most attention, because for years the teachers' unions were dead set against permitting the results of student test scores to factor into a teacher's evaluation.
The unions have agreed to the 40 percent weight for scores, but with one important caveat: Before the entire evaluation system can go into effect, the local union chapter must agree to it, including to how the test scores are going to be used. A common assumption is that the unions will want to water down this component.
The 20 percent from state tests
The state's job now is to figure out how it will use its state tests to rate teachers. That means it must come up with formulas that tie the growth or decline of student test scores to the classroom teacher.
This presents a lot of problems. First, there are not enough tests. Large groups of teachers, art teachers among them, teach subjects that are not tested, and the state has to figure out how to work around that.
In other areas, like high school tests, it is practically impossible to measure learning by comparing, say, a biology test with a physics test, because they are different subjects. The state's new contractor, American Institutes for Research, is supposed to figure out how to tackle that problem.
For the areas where there are suitable tests — namely the math and English tests given in the third through eighth grades — the contractor must figure out how to tie results of these student test scores to their classroom teachers. That means some kind of mathematical formula must be used to account for all the things not in a teacher's control (like the poverty of the student) that could affect results. Experts disagree about whether this is truly possible beyond anything but a broad guess.
The 20 percent from local tests
Different districts will handle this in different ways, but here in New York City, officials are considering introducing new tests that will be given in addition to the state tests. These are meant to be less multiple choice and more like complex word problems that would mimic a regular classroom assignment.
The city has issued a request for proposals to test developers for these exams (which will cost millions of dollars) but has not made a final decision. In part, that is because the city may be able to use state test results for some subjects, depending on the outcome of an Albany court fight (see below). And the local teachers' union still needs to weigh in.
It is likely that the city will create a patchwork for this 20 percent that includes a mix of options for different grades and subjects, including existing tests, new tests, and possibly school-based assessments, provided the city can guarantee rigor across classrooms.
"We're working to develop approaches teachers can use to integrate existing classroom assessments and projects that can be compared across classrooms," said Mr. Mittenthal, the DOE spokesman. "A range of these approaches will be tried in more than 100 schools this year."
The other 60 percent
Not much has been written about this, but some schools are already rolling out new guidelines for how teachers will be observed. The Danielson framework, developed by the educational consultant Charlotte Danielson, will probably govern observations; the union likes it.
Four years ago, Joel I. Klein, as chancellor, tried to propel New York City to the cutting edge of teacher evaluation by ranking teachers based on standardized test scores. Fourth- through eighth-grade teachers in English or math would get an annual report, known as a Teacher Data Report, that would rank them as ineffective or effective based on how much improvement their students made on standardized tests.
These reports began factoring into tenure decisions, but they were used by principals only informally in annual evaluations. They will most likely be released to the public sometime in the near future, as the union has lost two court attempts to keep them private. But because these are based on state test scores, and the state is in charge of its 20 percent of the test component of the evaluation, the city announced Thursday that it would stop producing the Teacher Data Reports.
The state will now produce its own version of the rankings, and the first such reports are due in June 2012. By 2013, the state system is supposed to be reliable enough to count for 25 percent of the evaluation, with the local tests counting for only 15 percent.
But does that mean that the city is now out of the ranking game altogether?
No, city officials said Friday.
That's right, the news is that there will be new local rankings of some sort, based on those new New York City tests we just mentioned.
What these will look like, exactly, is still many steps away. After all, the city has still not decided what these local tests will be. And talks with the union on the details have not begun in earnest. But at some point down the road, maybe next year, maybe later, teachers will be getting one or more documents annually, spelling out if they were effective teachers on two different kinds of student tests.
The final twist is the lawsuits. There are two active lawsuits right now dealing with teacher evaluations. The first is by the city teachers' union, which is suing to keep the Teacher Data Reports private.
The union has lost on appeal, but it is attempting a second appeal. If it fails again, you will soon be able to look up, on this Web site and many others, how the city's formula ranked 12,000 city teachers based on their students' scores. (More on this from SchoolBook in the coming days.)
Then there is the state lawsuit. The New York State United Teachers has sued over how the Board of Regents interpreted the new state law. The union won in the lower courts, but the Regents are appealing.
This means that for now, there is no clarity on important facets of the system, including whether a local district can just decide to forgo its own tests and use the state tests for its local measure, too.