Beth Fertig is the contributing editor for education, covering the New York City public school system for WNYC on air and online at SchoolBook.org. She has covered education in the city for more than 15 years.
Test Driving a Pilot Teacher Evaluation System
Wednesday, March 14, 2012 - 05:56 AM
As an experienced principal, Katherine Moloney knows that good teachers can make a lesson out of almost anything. So she was delighted when she saw a lively class of fifth graders at Public School 100 The Coney Island School using recipe books to construct their own math problems with fractions.
"We’re making Mexican-style hash," one girl told the principal. Her group was planning to serve this imaginary meal with chocolate chip ice cream cake, and the kids were writing down how much of each ingredient was necessary.
Ms. Moloney spent about 20 minutes talking to students and looking at bulletin boards. "The work that’s going on in there, the discussion, the excitement, that’s what good teaching is about," she said, as she left the classroom.
Ms. Moloney has been testing a new framework for evaluating teachers this year at the school, which is actually in Brighton Beach, after receiving training over the summer. It was designed by Charlotte Danielson, whose common-sense framework is meant to help both teachers and administrators identify good teaching.
It’s similar to a tool kit, with 22 strategies every teacher should master. The city is trying out the Danielson framework at 107 schools to learn how much training principals need so they can become certified evaluators once the state's evaluation system goes into effect, said Kirsten Busch, executive director of the Office of Teacher Effectiveness. The city has until next January to negotiate an evaluation system with its teachers' union.
At P.S. 100, Ms. Moloney and her teachers believe classroom observations are far more valid than the controversial rating system the city used, which was based solely on student progress on state exams.
When the city released its teacher data reports, the fifth-grade teacher who was using recipes to teach math, Nicole Weingard, got one of the lowest scores in the school. She received just an 18 for her effectiveness at teaching math, putting her in the bottom fifth citywide. Her English score was even lower. But most of Ms. Weingard’s students easily passed their state exams.
"I’ve been teaching for eight years," she said. "I can probably count on one hand how many of my students didn’t perform well."
Ms. Weingard most likely got low marks because the city's rating system put greater emphasis on student progress than on absolute performance. She teaches the honors class, and there wasn't much room for her high-scoring students to improve.
Education officials agree that test scores alone are not a sufficient way to rate a teacher. That is why New York State is using its $700 million federal Race to the Top grant to develop a new teacher evaluation system in which test scores will count toward 40 percent of a teacher's rating.
The other 60 percent will come from observing teachers at work.
New York City is considering using the Danielson system for that 60 percent. As a participant in the study, Ms. Moloney visits each of her teachers about four to six times over the year, focusing on a few of Ms. Danielson's strategies at a time. This month she is looking at student engagement. She carries a clipboard with examples of good and bad engagement techniques and writes down what she sees, talking to students and looking at bulletin boards for examples of their work.
In a fourth-grade classroom, she was struck by the way the teacher, Arielle Lutzer, asked her students to create a "parking lot" on the bulletin board, a place for them to write down whether or not they were confused during a lesson. Later, Ms. Moloney sat down with Ms. Lutzer for about 15 minutes and asked her questions. Ms. Moloney told the teacher that she thought she was "highly effective" at student engagement.
Ms. Lutzer said she appreciated the instant feedback. But she acknowledged that these observations can be a little nerve-racking.
"When someone comes in, it might be the 15 minutes that you don’t want them to see, maybe it’s the 15 minutes where you just finished disciplining someone and getting back focus, or starting a lesson and it’s confusing, or in transition," she said.
Ms. Moloney said the Danielson framework helped take the edge off an evaluation process that could easily seem subjective. If she sees a teacher struggling with classroom management, she can send him or her to watch another teacher who's doing a good job.
"So it’s not just a rating and leave you hanging," she said. "It’s a rating with discussion, with feedback and then with next steps."
Currently, the city’s teachers are formally observed once or twice a year and the vast majority are rated as satisfactory.
When Ms. Danielson's framework was tried in Chicago, a study found that about 8 percent of teachers ranked at the lowest level, a larger share than under New York City's current system. Ms. Danielson said low-rated teachers could get better. Her method was intended to help supervisors figure out how individual teachers can improve.
But as cities and states rush to adopt new evaluation systems, she fears they might lose sight of that goal.
"So that worries me," she said. "That in their zeal, and under the pressure to do this quickly, that school districts will cut corners and will in fact boil the complex work of teaching down to a simple checklist that they can quickly frame people on."