Human Transit has a thought-provoking blog post on whether measuring “on-time performance” is really the best way to gauge how effectively public transport provides what its users want and need. Here are a couple of interesting paragraphs:
I have a great deal of sympathy for transit executives trying to deal with on-time performance, because many of the causes of delay are outside a transit agency’s control. Still, there are two major problems with the measures of “on-time performance” that prevail in the industry.
1. They are not customer-centered. They report the percentage of services that were on-time, not the percentage of riders who were. Because crowded services are more likely to be delayed, the percentage of customers who were served on-time is probably lower than the announced on-time performance figure.
2. For high-frequency, high-volume services, actual frequency matters more. Suppose that a transit line is supposed to run every 10 minutes, but every trip on the line is exactly 10 minutes late. A typical on-time performance metric (e.g. the percentage of trips that are 0-5 minutes late) will declare this situation to be total failure, 0% on-time performance. But to the customer, this situation is perfection.
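The first point can be made concrete with a toy calculation. This is a minimal sketch using invented figures (the rider counts and delays below are assumptions for illustration, not Auckland data); it shows how the trip-based and rider-weighted on-time percentages diverge when the crowded services are the late ones:

```python
# Trip-based vs rider-weighted on-time performance.
# Illustrative numbers only: the busiest trips are also the delayed
# ones, as the quoted post suggests tends to happen in practice.

# (riders, minutes_late) for each trip in a sample hour
trips = [
    (40, 0),    # quiet, on time
    (45, 2),    # quiet, on time
    (180, 8),   # crowded, late
    (200, 12),  # crowded, late
    (50, 3),    # quiet, on time
]

ON_TIME_THRESHOLD = 5  # "on time" = no more than five minutes late

on_time = [t for t in trips if t[1] <= ON_TIME_THRESHOLD]

trip_otp = len(on_time) / len(trips)
rider_otp = sum(r for r, _ in on_time) / sum(r for r, _ in trips)

print(f"Trip-based OTP:  {trip_otp:.0%}")   # 60%
print(f"Rider-based OTP: {rider_otp:.0%}")  # 26%
```

Three of five trips ran on time, so the agency reports 60%; but because the two late trips carried most of the passengers, only about a quarter of riders actually experienced an on-time service.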
For a while this year ARTA and Veolia placed significant emphasis on advertising their ‘on time’ stats, even though in the early part of the year these statistics were truly horrible for Auckland’s rail services. Yet at the very same time as rail performance was so terrible, rail patronage skyrocketed to record levels.
ARTA’s main response to the poor performance stats fell into the traps outlined above: too focused on the simple ‘statistic’ and not focused enough on the experience of the rider. Basically, they made the timetable slower. A greater percentage of trains now get to their destination “on time” (in the crazy world where on time means no more than five minutes late), but only because they are adhering to a slower timetable. Chances are that your average Western Line user in particular has a slower trip now than they did back when the trains were so unreliable, especially now that the Western Line’s express trains have been removed from the timetable.
I think the points made in the Human Transit blog post apply strongly to Auckland: what we really need to measure are the statistics that people using the rail system actually care about. How long are they likely to wait for their train? How likely is it that their train will get them where they’re going in the time it should? How long will their trip take? While reliability, which is really all that on-time performance stats measure, is very important, so are other aspects: keeping consistent spacing between services and getting the trains to travel as quickly as possible.
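The waiting-time question also shows why consistent spacing matters as much as the number of trains. A standard result for riders arriving at random is that the expected wait is E[h²] / (2·E[h]) over the headways h, which penalises uneven gaps. A minimal sketch with invented headways (the figures are assumptions, not Auckland timetable data):

```python
# Expected wait for a rider arriving at a random time, given the
# headways (gaps in minutes) between successive trains. Assumed
# formula: E[h^2] / (2 * E[h]) -- the "random incidence" result.

def expected_wait(headways):
    """Mean wait in minutes for a rider arriving uniformly at random."""
    return sum(h * h for h in headways) / (2 * sum(headways))

even = [10, 10, 10, 10, 10, 10]    # six trains/hour, evenly spaced
bunched = [2, 18, 2, 18, 2, 18]    # same six trains, bunched in pairs

print(expected_wait(even))     # 5.0 minutes
print(expected_wait(bunched))  # 8.2 minutes
```

Both timetables run six trains an hour, yet the bunched one makes the average rider wait over 60% longer, something a simple on-time percentage would never reveal.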
I suppose that the main problem I have with the current focus on “on-time performance” is that it encourages overly slow timetables. This may make the statistics look nice, as it’s pretty easy to keep to a very slow timetable, but in the end the slower you make the train trip, the more likely someone is to choose to drive instead. So perhaps we need to broaden our measurement of how good the rail service actually is. Perhaps we need to think more about what really matters to customers, rather than trying to find a simple measurement statistic that perversely makes our trains slower by encouraging an overly forgiving timetable.
On-time performance is useful, but not in isolation and not at the cost of everything else (like speed and frequency).