Peer-review finding: California high speed rail projections unreliable


(Nathanael Johnson, KALW)

For months, watchdog groups and critics of the California high speed rail project have claimed that a study of projected ridership on the proposed super-train was wildly incorrect. The High Speed Rail Authority has acknowledged that one of its numbers was off by an order of magnitude, but has maintained that the model still produces valuable information. These statistical models are incredibly complex, and it’s impossible to assess these competing claims without considerable expertise and a lot of time. So California's Senate Transportation and Housing Committee commissioned a peer review from engineers at UC Berkeley and UC Irvine, to put an end to the debate once and for all. The California High Speed Rail Authority paid for the review.

Now this group has released its findings. In their report, the professors wrote: “we have found some significant problems that render the key demand forecasting models unreliable for policy analysis.” They go on to tear the study apart, shred by carefully-worded shred.


Why does this matter?

Well, the High Speed Rail Authority has already used this ridership study to make choices about where the proposed train would go and how frequently it would go there. In several cases the results of this study have been used to justify these choices and win political support. In other words, policymakers have been making decisions based on data “unreliable for policy analysis.”

The rail authority and Cambridge Systematics, the consulting company that built the model, contest the findings of the Senate review. Roelof van Ark, the rail authority’s new leader, wrote that the ridership model “has been and continues to be a sound tool for high-speed rail planning and environmental analysis.” Lance Neumann, the president of Cambridge Systematics wrote that the Senate report “focuses on academic viewpoints and ignores what it takes to create a model for real-world application.”

But if you wade into the report, which can be found here, it’s clear that much of the professors’ critique has to do with assumptions made by Cambridge Systematics (CS) which seem to defy real-world experience. For instance:

“travel forecasts will incur a sudden change as the trip distance increases from 99.9 miles to 100.1 miles, which is behaviorally unrealistic.”
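To see why that sentence bothers the reviewers, consider a toy sketch of the problem. The functional forms and coefficients below are invented for illustration and are not taken from the Cambridge Systematics model; the only idea borrowed from the report is that the model switches formulas at a hard 100-mile threshold.

```python
import math

# Hypothetical demand curve that changes formula at a hard 100-mile cutoff.
# The specific formulas and numbers here are made up for illustration;
# they are NOT the actual Cambridge Systematics specification.
def predicted_trips(distance_miles):
    """Toy trip forecast with a discontinuity at 100 miles."""
    if distance_miles < 100:
        # "short trip" segment of the model
        return 1000 * math.exp(-0.01 * distance_miles)
    else:
        # "long trip" segment with different parameters
        return 1000 * math.exp(-0.02 * distance_miles)

# Two trips differing by 0.2 miles get wildly different forecasts,
# even though travelers obviously don't behave that way:
print(predicted_trips(99.9))
print(predicted_trips(100.1))
```

A small change in trip distance across the threshold produces a large jump in the forecast, which is the “behaviorally unrealistic” sudden change the professors describe.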
A bit later their tone grows a little sharper:

“we do not believe that the method chosen, which contradicts both common sense and empirical evidence, was the appropriate one.”
Here’s a good example of the problems they are talking about. And this gets a little complicated, but it’s interesting: CS assumes that people will show up at train stations and wait for the next train to arrive. If that’s what people do, then the time between trains, known as the headway, will strongly affect how useful the service is. But when I make a long-distance trip I plan ahead, check the schedules, and arrive a little before the train or plane departs. Here’s what the professors have to say on this point:

“Regarding headway sensitivity, CS argues that HSR service “offers a new paradigm of interregional service” ... “comparable to the best urban rail services.” It is a matter of speculation whether, in this new paradigm, travelers will simply show up at rail stations and wait for the next available train, as the CS model implicitly assumes. However, it is highly implausible that air travelers will behave in this manner, as the model also assumes.”

CS responds:

“We disagree with the assertion that planned headways for California HSR are substantially different than for urban rail service. Accordingly, we believe that the treatment of sensitivity to wait times and headway is reasonable and does not introduce any biases.”

Okay, okay, so the gauge of waiting time sensitivity may be off. Does it matter? In short, yes. All these little details add up to big implications in the end. The ridership study was instrumental in building political support for a route that went south from San Jose, through the Pacheco Pass, rather than east through the Altamont Pass. The watchdog group Californians Advocating Responsible Rail Design has noted that precisely this issue of waiting time sensitivity produced results that may have biased policymakers against Altamont: planners assumed that trains would run less frequently on the Altamont option, which would increase wait times and decrease ridership. A lot.

“The sensitivity to train frequencies penalized the Altamont routing by 20 million riders per year. The entire ridership of the Northeast Corridor Amtrak service is approximately 10 million riders. The report suggests that the sensitivity may have been over-inflated by 4 to 5 times.”

The Northeast Corridor, which runs from Washington, D.C. through New York to Boston, is the busiest passenger rail line in the United States. Is it really reasonable to expect that having trains run twice as often would produce 20 million new riders?
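A toy mode-choice calculation shows how a single wait-time coefficient can swing a forecast by millions of riders. Everything below is an assumption made up for illustration: the binary logit form, both coefficient values, and the market size. The only premise borrowed from the critique above is the show-up-and-wait assumption, under which the average wait equals half the headway.

```python
import math

# Hypothetical sketch of how a wait-time coefficient drives a forecast.
# All numbers are invented; this is NOT the actual CS model.
def hsr_share(headway_min, wait_coef):
    """Binary logit share for HSR against a competing mode.

    Assumes travelers show up unscheduled and wait, on average,
    half the headway."""
    expected_wait = headway_min / 2.0
    u_hsr = -wait_coef * expected_wait  # disutility of waiting
    u_alt = 0.0                         # competing mode, normalized to zero
    return math.exp(u_hsr) / (math.exp(u_hsr) + math.exp(u_alt))

market = 50_000_000  # hypothetical annual interregional trips

# Compare a strong vs. a weak wait-time coefficient (per minute of wait):
for coef in (0.05, 0.01):
    gain = market * (hsr_share(30, coef) - hsr_share(60, coef))
    print(f"coef={coef}: halving headway adds {gain / 1e6:.1f}M riders")
```

With these made-up numbers, halving the headway wins several times more riders under the strong coefficient than under the weak one, which is the shape of the distortion CARRD alleges: an over-inflated sensitivity turns a modest frequency difference into a ridership penalty of tens of millions.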