
Chief Academic Officer Says Tests Don't Always Count Most

Monday, July 22, 2013 - 01:29 PM

In the following Q&A, Chief Academic Officer Shael Polakow-Suransky builds upon our recent report on the Bloomberg administration's legacy when it comes to data in education policy. Suransky tells Beth Fertig that the Department of Education has been looking “for ways to find a balance between quantitative and qualitative information” by improving the quality reviews, which include annual school visits by educators.

Below are excerpts of the interview.

Q: Could you describe the factors other than test scores used by the city in evaluating schools?

We do care about data. But we care about data, and we also care about what's happening day-to-day in classrooms. And we care about the kinds of professional development teachers are getting, and we care about the leadership practice of our school leaders, and we care about the enrichment opportunities and the way that community is being built in the school, and all of those pieces are also important. And so when we evaluate school quality, we now never make any decision to close a school, to change the structure of the school, to remove a leader, without first looking at both our quantitative and our qualitative information. And the quality review is a big piece of that. The surveys [of parents, teachers and students] are another piece of that.

Q: So you have these very thorough quality reviews, but in the elementary and middle schools at least 80 percent of the school’s grade is still based on test scores. So can you understand why the public believes the test scores remain the priority?

Well, there were two reasons for that. One is that at the high school level, there's a lot of data available that isn't available at the elementary and middle school level across schools [credit accumulation, subject grades]. And so what we set out to do over the past couple of years is try to create ways to measure school quality quantitatively at the elementary and middle school level that go beyond just the test scores. And so the first step in that direction was creating a system to capture the grades that middle school teachers were giving to their students.

Q: I understand there are many more factors to consider in high schools, but people would still come back to saying, well, so much of the [elementary and middle school] report card is based on progress on state exams. So does that mean that you believe the state exams are really such an important measurement?

I think they are one of the important measurements in the balance. The high school progress report, where about a third is based on the state exam, is closer to the balance I'd like to see eventually on all three reports. But in order to do that you have to do it in a thoughtful and careful way. You can't just snap your fingers and say we wish it were different, because you actually need to have data that differentiates schools and kids from one another, that is used by all schools in that age group, in order to include it on a quantitative progress report.

And so this year for the first time, schools at the elementary level will start entering grades in a uniform way into our data system. And it's going to allow us eventually to start using those teacher grades as one of the data elements on the progress reports.

Another thing we're doing at the middle school level is we're looking at credit accumulation for ninth graders and back-mapping it to the middle schools they came from. There aren't a lot of points associated with that yet, but it's another effort in this direction to try and use a measure other than the state exam that actually can help us tell ‘how good is this middle school?’

Q: So how much can a quality review count in a school's overall evaluation?

In the old principal performance review, the quality review accounted for 22 percent. And as we negotiated the new principal performance evaluation that just went into effect for next school year, the quality review’s score - or supervisory visit using the quality review rubric - will count for 60 percent. So that has nearly tripled for principal evaluations, because we really believe that that's an important element, and we worked with the state and the [principals union] to get that into the evaluation system.

Q: Even if it doesn't show up on the progress report, you're saying when crucial decisions are made you give equal weight to the qualitative data, the review that educators gave the school? Would you say it's 50 percent or more than 50 percent?

Yeah, it often has veto power in the sense that if there's something convincing in that quality review that says the school’s improving, then they would be taken off the [potential closure] list.

Q: And yet when I talk to parents or principals or teachers they never say to me, “Wow, the school got a really great quality review.” They always talk about the school grades. Why is that?

I think when there's a letter grade associated with it, it has a powerful public image associated with that letter grade. Especially if you're talking to someone from the press, that is foremost in your mind because you're thinking about what's the public image of my school, if I'm a B or D or whatever.

I think it says that there's still work to do to get to the right balance in the system. I don't think it's perfect. I think it's a work in progress to try to find the balance where folks actually look at multiple measures. I mean, if you look at the teacher evaluation system that we just put in place, there is 60 percent there as well on the qualitative side and 40 percent based on measures of student learning.

I want to be clear that testing is not a bad thing. The problem occurs when you make bad decisions as a result of the testing. And so there are places in the system where, because the test is coming up, the curriculum that's taught to kids is focused on test prep as opposed to a richer curriculum. And I think the number of those places is vastly overstated by a lot of the folks who are critical of testing, but it definitely does happen, and I would categorize that as a bad instructional decision in response to worries about how kids are going to do on the test. Because what we've learned is generally test prep doesn't actually help kids do well on tests; it is a shortcut for the adults. What really helps kids do well on tests is figuring out what kids' next steps are, where they're struggling, and providing them with a rich curriculum that's going to engage them. And that's why we've sort of launched so much work over the past three years also on trying to help people create rich, engaging curriculum.


Comments [2]

Iamsuperman from Manhattan

As a NYC public school teacher for over 14 years, I have witnessed firsthand the destruction of public education in general and the decimation of arts education in particular. Our nation's myopic, near-pathological obsession with standardized tests has driven the best teachers out and will keep the best candidates away from teaching. The oligarchs and the testing and publishing companies, in their mad dash to get as much money into their coffers as possible, have created a perverse incentive for principals to destroy any semblance of a real education. To offer bonuses to principals for high test scores is akin to offering bonuses in the business world for high quarterly statements, and we all saw how that worked out. If the goal of the education reform movement was to improve teacher quality through the use of tests to grade schools, teachers and students, they have failed miserably.

Jul. 25 2013 01:02 PM
Leonie Haimson from NYC

As has been shown over and over again, the progress report grades are statistically unreliable because one year's changes in test scores are highly variable and nearly random. The inventor of the progress reports, Jim Liebman, admitted that several times, and claimed the progress reports would take at least three years' worth of data into account; he was even quoted in your book, Beth, as saying this. Yet the DOE has never done this, even though they know full well that the fact that test scores are volatile and erratic from year to year renders their system -- as well as the teacher evaluation system -- inherently unreliable and unfair.

Jul. 23 2013 04:24 PM
