
City to Cancel CTB/McGraw-Hill's Contract Over Regents Problems

Friday, September 13, 2013 - 03:02 PM

The Department of Education announced Friday that it will discontinue its contract with CTB/McGraw-Hill for electronic scanning and scoring of Regents exams. The new system was tried out last June and came under fire after lengthy delays and other problems at the scoring centers.

Department of Education spokeswoman Erin Hughes said CTB/McGraw-Hill had "experienced significant delays" in its distributed scoring services, which involved scanning the essay portions of students' Regents exams and sending them to centralized scoring centers, where teachers graded them at computer terminals.

The shift was made so that the teachers would not grade the exams of any students from their schools. But teachers complained that they spent hours waiting for the exams to download, and scoring continued right through high school graduations.

The D.O.E. said that CTB/McGraw-Hill conducted two successful pilots of the process, but then "failed to properly prepare their document review, scanning and digital uploading processes for the contracted volume of Regents exams."

"These failures, combined with the aggressive timeline in which our students need results on these exams, necessitate that we cancel the remaining portion of the contract," said Hughes.

She added that it would be impractical for the city to use electronic scoring for Regents exams in 2014.

Instead, teachers will report to centralized scoring centers where all exams will be scored using traditional paper-and-pencil methods. Hughes said this process was successful for the Regents exams that were not scanned.

Brian Belardi, the director of media relations for CTB/McGraw-Hill, issued the following statement:

"We believe that online scoring technology has the potential to make assessment more efficient for students, teachers and administrators. We clearly saw the benefits of online scoring in our work with the New York City Department of Education on the Regents assessment in 2013, in which we processed nearly 2.8 million scores.

However, the transition from print to online scoring is complex – especially in a city with a student base as large as New York’s – and given the challenges experienced on both sides, we understand the Department of Education’s decision to step back from online scoring. We will use the knowledge and experiences gained in our work with New York City to refine our technology, and we look forward to using this technology to improve the assessment process in the future."

Teachers union president Michael Mulgrew blamed the problems on the D.O.E.'s decision to rely on an outside vendor. Although the state required districts to stop letting teachers score their own students' Regents exams this year, there was no prohibition on letting teachers score the exams of other students in the same school, which the D.O.E. had allowed in the past.

"The school system should never have 'outsourced' the work of teachers," he said. "The children of this city shouldn't have had to suffer though this to prove the point."

CTB/McGraw-Hill isn't the only testing company that ran into trouble in New York City this year. The D.O.E. also announced Friday that it will consider a new vendor to take over for Pearson in administering gifted and talented tests, which encountered scoring errors.

Pearson's three-year, $5.5 million contract is due to expire after the 2014-2015 school year. Earlier this year, Chancellor Dennis Walcott said he would consider terminating the contract, but on Friday Hughes said it was too soon to make the change this year.

"The time required to identify, procure, and implement a new set of G&T assessments made it unfeasible to complete for 2013-14," she said. "The D.O.E. has begun a process to review assessments for G&T eligibility and will release a Request for Proposal (RFP) this fall, for implementation in 2014-15."

This means the city could stick with Pearson for the final year of its contract if it doesn't find a superior proposal.

Pearson agreed to forgo the $2.1 million cost of the 2012-2013 exams. Hughes said the city "has worked extensively with Pearson to ensure that additional procedural safeguards will be implemented for the 2013-14 G&T testing process."

Meanwhile, the D.O.E. is also tinkering with the formula used to figure out a child's overall percentile ranking on the two G&T exams.

Last year, the D.O.E. replaced one of its two exams with the Naglieri Nonverbal Ability Test in an attempt to level the playing field for children who aren't exposed to as much reading at home or to test preparation programs. That exam counted for two thirds of a child's overall score.

The Otis Lennon School Abilities Test (OLSAT) counted for the remaining third. The city used only the verbal part of the test, known as the SAI.

Hughes told WNYC that the change was made because the city now has data on how its own students performed on the exams.

"Last year, the D.O.E. did not have data on the performance of New York City students" on the Naglieri test, she said.  "For this reason, the D.O.E. used data from the national samples for these tests in the 2012-13 scoring methodology."

This year, Hughes explained, the city is returning to its old methodology, the Normal Curve Equivalency, and will weight each exam equally.
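For illustration only: the article describes the weightings in prose, and the D.O.E.'s actual normalization and Normal Curve Equivalent conversion are not detailed here. A minimal sketch, assuming both exam scores have already been expressed on a common normalized scale, of how the two weighting schemes differ:

```python
# Hypothetical illustration of the two weighting schemes described above.
# Assumes each exam score is already normalized to a common scale; the
# D.O.E.'s actual normalization and percentile-ranking method is not
# spelled out in this article, so this shows only the weighting step.

def composite_2012_13(naglieri: float, olsat: float) -> float:
    """2012-13 scheme: Naglieri counted for two thirds, OLSAT for one third."""
    return (2 / 3) * naglieri + (1 / 3) * olsat

def composite_2013_14(naglieri: float, olsat: float) -> float:
    """2013-14 scheme: each exam weighted equally."""
    return 0.5 * naglieri + 0.5 * olsat

# A child with a lopsided profile shows why the weighting matters:
print(composite_2012_13(90, 75))  # 85.0 under the two-thirds weighting
print(composite_2013_14(90, 75))  # 82.5 under equal weighting
```

As the sketch suggests, children whose stronger score was on the Naglieri would rank somewhat lower under equal weighting, and vice versa.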

A parent sued the city last year, claiming its methodology was faulty because it didn't have enough data on the students' performance. He ultimately lost his case.

Last year, so many more children were eligible for gifted and talented programs that a smaller percentage were offered seats.


Comments [3]

Alexis Sargent from NYC

I think the commenters above are misconstruing the article. No one is arguing that teachers go back to grading their own students' exams. Distributive scoring can work, but as a scorer last year I can tell you countless hours were wasted in which teachers had to be paid while they simply waited around. Using paper and pencil to grade has worked just fine.

To be honest, there were teachers grading exams who weren't familiar with the content and didn't thoroughly understand the rubric. If they graded incorrectly, a student could fail because there is only one set of eyes on the exam. I don't think there was any implementation of back scoring.

Distributive scoring is fine but it has to be done in a way that is efficient.

Sep. 19 2013 07:18 AM
Testy from NJ

Richard,
It's English. If there is only one person who can interpret what the student has written, then we have a problem. That's exactly why teachers shouldn't be scoring their own students' tests. The tests should be scored the same way for everyone; with rubrics in place there should be no guesswork, varying interpretations, or bias. The idea of distributed scoring seems great; they should give the process a better chance and improve it, not ditch it.

Sep. 17 2013 12:22 PM
Richard Doherty

In any machine scoring of student responses, the teacher who taught them is the only one who can decide whether their answer is correct. Our students are not up to speed with the English language by the time they get to high school, and only their teacher can interpret what they have written. Another teacher means well, but we need to solve the teacher cheating problem before we add another variable of a teacher who does not know how these students were taught, and, forgive me, may not even understand the subject.
Sorry. If it ain't broke, don't fix it!
FYI I teach Physics.

Sep. 16 2013 06:39 PM
