Know When To Run: The Story Behind The Xmas Kings Cross Problems

Delays and work overruns aren’t an uncommon part of the Christmas experience on London’s railways. Ultimately, maintenance and improvement need to be carried out, and from a utilitarian perspective the opportunity to do so in a period when passenger numbers are generally lower is simply too good to miss.

It is rare, however, to see quite the level of disruption and overcrowding that was witnessed at Finsbury Park and on the East Coast Main Line last Christmas as a result of overrunning works between there and Kings Cross. Indeed, whilst it was not quite the disaster that the media and some politicians seemed determined to make it, it was certainly extreme enough to warrant further investigation, and a full report into what happened was swiftly commissioned by Network Rail. That report is now out, and it makes interesting reading, for it provides a window into the events of that weekend.

Some serious works

This particular Christmas period was a busy one for railway work. With Christmas falling on a Thursday, Network Rail were presented with what they saw as a rare four day period in which to carry out engineering work. Closures for the entire period would, of course, need to be avoided wherever possible, but it was still an opportunity.

Part of the East Coast Main Line, the section of railway known as “Holloway Junction” (or just “Holloway”) is approximately 1½ miles long and lies north of King’s Cross station. Network Rail planned to take advantage of the festive period to replace two of the junctions there and 500m of the two railway lines between them. The work itself wasn’t unusual, but the scope was relatively ambitious. Ultimately over 6,000 tonnes of ballast would be replaced, along with the rails and sleepers that sat on top of it, making this a considerable engineering and logistical challenge.


The two junctions and stretch of track to be replaced

In truth, all four tracks at Holloway needed renewal works, and a full seven day blockade had been considered to allow exactly that, but in the end it was difficult to justify the amount of disruption this would cause. Instead it was decided to carry out the work in two four day blockades, one at Christmas 2014 and one a year later at Christmas 2015, each focusing on two of the four tracks.

Although work was focused on only two of the tracks, for logistical reasons Network Rail would actually have to take possession of all four lines at Holloway for the bulk of the work’s duration. This was partly due to the scale of the work, but also because works being undertaken elsewhere along the East Coast Main Line meant it would be impossible to bring the necessary engineering trains straight from their depots. Instead, all fourteen engineering trains required at Holloway had to be brought up before work began and parked up.


A typical engineering train, photo credit: Mike Rowland, Taunton Trains.

The end result was a worksite of considerable size, stretching over all four lines, running to almost nine miles in length and requiring more than a little choreography in both work plans and train movements.

That work site would, however, have to shrink on the 27th December, because the main junction at Watford on the West Coast Main Line was being renewed over the same period, leaving two of the key north/south routes closed simultaneously. In discussion with the Train Operating Companies (TOCs), Network Rail thus agreed to hand back two of the tracks at Holloway in time for trains to at least operate a reduced timetable on the 27th. In effect, Network Rail would have two days of their blockade with all four lines at their disposal, then two days with only two of them, and the work programme was configured around this. This meant a number of tasks having to be carried out in series rather than in parallel, but the likelihood of success was calculated to be 95%.

A success rate of 95% met Network Rail’s minimum requirements, but in order to help achieve it extra steps were taken to ensure that the risk of equipment failure was minimal. This was crucial because, if something failed, there would be limited ability to bring in replacement equipment due to all the work going on elsewhere.
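The effect of serialising tasks on schedule confidence is worth illustrating, since it sits at the heart of this calculation. The sketch below is purely hypothetical: the task names, durations and the crude serial-versus-parallel model are all invented, and the report does not describe how the 95% figure was actually derived. It simply shows, via a rough Monte Carlo simulation, how the chance of finishing within a fixed blockade falls once tasks that might have run side by side must instead run one after another.

```python
import random

# Purely illustrative figures: neither the tasks nor the durations come
# from the report, which does not say how the 95% figure was calculated.
TASKS = {
    "scrap out":     (6, 2),   # (expected hours, maximum extra overrun hours)
    "excavate":      (10, 4),
    "relay track":   (12, 4),
    "tamp and test": (8, 3),
}
BLOCKADE_HOURS = 44  # hypothetical working time available before hand-back


def sample_duration(expected, overrun):
    """Sample one task duration: the expected time plus a random overrun."""
    return expected + random.uniform(0, overrun)


def chance_of_finishing(serial, runs=100_000):
    """Estimate the probability of completing all tasks within the blockade."""
    successes = 0
    for _ in range(runs):
        durations = [sample_duration(e, o) for e, o in TASKS.values()]
        # In series the durations simply add up; with two work fronts running
        # side by side the elapsed time is (very crudely) halved.
        elapsed = sum(durations) if serial else sum(durations) / 2
        successes += elapsed <= BLOCKADE_HOURS
    return successes / runs


print(f"parallel working: {chance_of_finishing(serial=False):.0%} chance of on-time hand-back")
print(f"serial working:   {chance_of_finishing(serial=True):.0%} chance of on-time hand-back")
```

With these made-up numbers the parallel plan is all but certain to finish in time while the serial one is not, which is the general shape of the trade-off the planners were weighing, even if their real figures and methods were very different.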

Critical to the work were the Road Rail Vehicles (RRVs) that would be used and the log grabs they would be equipped with. Seven log grabs were required for the work and the plant supplier agreed to supply Network Rail with eight brand new ones, the intention being to minimise the risk of failure and to ensure that there was a spare on site. They also agreed to provide an on-site fitter so that any problems could be dealt with quickly.


Typical RRVs


Log grabs

The work – and the problems – begin

On paper everything seemed in order, but when work began on Christmas Day things started to go wrong almost immediately. An hour was lost waiting for the Overhead Line Equipment (OHLE) to be isolated and for the permits confirming this to be issued. This was due to the sheer number of projects on the route, all of which had been planned to start at the same time; despite Holloway being second on the priority list, it was still delayed.

Then came “scrapping out”, the process of cutting, dismantling and removing the 500m of old track to be replaced, which took three hours longer than planned. In part this was due to the scale of the work involved. Although sufficient operators were on site for all the machines, the number required meant that not all of them had much experience of this particular type of work, and it took them longer than anticipated to carry it out.

This wasn’t, however, the major problem. That was the plant equipment – most particularly the log grabs and the RRVs brought in to operate them. The fact that these represented a potential critical risk to the success of the possession was something that Network Rail had obviously anticipated – this was why brand new log grabs had been sourced for the work. What nobody had realised, however, was that by doing this they weren’t lowering the risk of problems occurring, they were simply trading one risk for another. That second risk reared its ugly head almost immediately. The new log grabs had never been used on these particular, or indeed any, RRVs before and it soon became apparent that – for reasons as yet unclear – the fittings didn’t fully match. Throughout the rest of the work, the fittings between the RRVs and log grabs spewed hydraulic fluid at a prodigious rate, losing pressure and failing to work. The on-site fitter worked hard to try and solve the problem but ultimately a specialist had to be called in from off-site.

Scrapping out finally finished at 09:00 on Christmas Day. By then all of the above had conspired to eat through the contingency time allowed for this stage of the work, and more besides. The project was now running three hours behind.

The first decision point

It was now decision time for the project team. With scrapping out complete the next step in the work was to excavate out and replace the ballast and here they had two options. The first was to stick to the original plan, despite the fact that the project was already running late, and excavate out as much of the ballast as possible (down to a depth of about 30cm). Alternatively, they could go with the quicker option of simply “skimming” the ballast. Ultimately the difference between the two options was one of quality. The deeper the excavation, the longer the new junctions and track would last – potentially up to 25 years. Skimming would be quicker, but it would drastically shorten that lifespan, meaning the work would have to be repeated in perhaps just ten years.

This was, in effect, the point of no return for the project team. Once full excavation had begun they would not be able to switch to skimming later on if it became apparent they had made the wrong call. This is because doing so would leave the resulting track on an inconsistent foundation, increasing the risk of future issues or even a derailment.

Stick to the original plan and risk an overrun, or compromise on the work? In the end the decision was made to proceed. This was not just to avoid future work. It was also because the team knew that the plan allowed for up to four hours of delay in completing the excavation stage of the work. This meant that, assuming nothing else went wrong, the three hours of delay already inherited from the scrapping out phase could be absorbed within it. Indeed they were even hopeful of making some of that time up. Given that technically they were not yet over time, they also decided not to officially declare an overrun at this point.

Hindsight, of course, is a wonderful thing. It would be easy to criticise the project team for failing either to alter the work or to declare an overrun that Christmas morning. It would also be wrong. Contingency is built into projects for precisely this situation and, although the delays in scrapping out meant that they were now desperately short of it, with the information at hand the team believed their overall goals were still achievable, assuming nothing else went wrong.

Unfortunately, shortly after the excavation phase began, something else did – although it would be many hours before those on site realised it.

The choreography begins to fail

As excavation work began, two trains full of scrap rails and sleepers due to leave the site were discovered to have been badly loaded. The scrap was not correctly positioned, and correcting this took time. By the time those corrections had been made, both trains’ drivers had been on shift too long to be able to take them back to New Barnet, where they were due to be emptied. Luckily (or at least so it seemed at the time) this had an easy solution. There were two more drivers on site – those waiting to take away the first two wagon trains to be filled with spoil from the excavation phase. These were “stepped forward” to take charge of the scrap trains instead.

Stepping drivers forward is not an unusual practice, and at the time it seemed like a reasonably risk-free solution to the problem. What no-one had considered, however, was the sheer scale of the Holloway worksite. Stepping every driver forward on a site nine miles long was not a time-free exercise, and each train movement added a little more delay to a project that had already lost most of its contingency. By 14:00 on Christmas Day what little contingency remained had gone. Work was now officially six hours behind schedule (an hour over the total contingency which had been available).

The second decision point

At this point the decision on whether to declare an overrun or not was escalated to senior Network Rail, Amey and Alliance staff. They looked again at the delays which had occurred so far and the minor delays still accruing with every driver change and reached a decision. They would still not declare an overrun.

If the project team’s initial decision to continue with the work as planned first thing Christmas Day morning is understandable, it is harder to understand why the same conclusion was reached later in the day. The official report explains that this decision was based on the belief that there was just about enough contingency built into the Boxing Day work schedule to allow the time to be made up. But at every stage so far, the assumption that the available contingency could absorb the problems occurring had proven optimistic. It is also tricky to see how the possibility of further unanticipated effects did not make more of an impact on the decision making process, especially as one of those effects was now itself an ongoing problem – the continued need to step forward drivers, which kept leeching time from the project and was now a potential cause of further unanticipated problems in its own right.

And this, in effect, was exactly what happened later that night, finally removing any hope of handing the line back as planned on the Saturday.

The unanticipated happens

Train drivers are a finite resource. Not just because there are only so many people trained (and indeed employed) to do it, but also because, like lorry drivers, there are tightly regulated limits on how many hours it is safe for them to work in one shift.

The work underway at Holloway that Christmas was not the only piece of engineering work going on which required freight train drivers to move equipment, material and spoil. Indeed the engineering work originally planned throughout the country that weekend had actually had to be scaled back by Network Rail after they realised that it would require more drivers than were available in the entire national supply. In the end, through careful timetabling and allocation, they’d worked out the maximum amount of work that could be carried out with the roughly 200 freight train drivers available to them and scheduled it in.

By now astute readers may have spotted the problem looming for those working at Holloway. For no one on site or at Network Rail had spotted that ultimately it didn’t matter how much contingency existed within the schedule for doing the physical work – that wasn’t what they were running out of. They were actually running out of drivers.

This was because the continued need to step drivers forward, and the time it took to do so, meant that their shift patterns and availability quickly fell out of sync with the needs of the work site. The departure of spoil trains, and even routine train movements required to support the planned pace of the work, were delayed because drivers were still being brought forward to drive them. Even when the original driver was available, the cascade of delays to movements and departures sometimes meant that they too now lacked sufficient hours to complete their planned trips. As day turned to night the effect began to snowball, made even worse when one of the ballast wagons failed and could not be moved for several hours.
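It helps to think of driver hours as a budget that every stepped-forward movement quietly spends. The toy simulation below is entirely hypothetical: the shift length, the time cost of reaching a train across a nine mile site and the round-trip times are invented rather than taken from the report. It simply illustrates how the overhead of repeatedly stepping drivers forward can leave trains with nobody in hours to move them.

```python
from dataclasses import dataclass

# All numbers here are invented for illustration; the report gives no such
# figures. The point is the mechanism, not the values.
SHIFT_HOURS = 12.0          # hypothetical maximum shift length
STEP_FORWARD_HOURS = 1.5    # time to reach and take over a train on a ~9 mile site
ROUND_TRIP_HOURS = 4.0      # time to take a loaded train out and return


@dataclass
class Driver:
    name: str
    hours_left: float = SHIFT_HOURS


def move_trains(drivers, trains_waiting):
    """Assign each waiting train to the first driver with enough hours left."""
    moved = 0
    for _ in range(trains_waiting):
        cost = STEP_FORWARD_HOURS + ROUND_TRIP_HOURS
        driver = next((d for d in drivers if d.hours_left >= cost), None)
        if driver is None:
            break  # nobody left with enough hours, so the trains stay put
        driver.hours_left -= cost
        moved += 1
    return moved


drivers = [Driver(f"driver {i + 1}") for i in range(4)]
moved = move_trains(drivers, trains_waiting=10)
print(f"trains moved: {moved} of 10")
for d in drivers:
    print(f"{d.name}: {d.hours_left:.1f} shift hours remaining")
```

With these invented numbers four drivers can cover only eight of the ten moves before running out of hours; add the real delays that kept pushing departures later and the same arithmetic ends, as it did at Holloway, with trains on site and almost no one legally able to drive them.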

By the early hours of Boxing Day morning the situation had degenerated to the point where there was only one driver available on the whole work site to move the five trains to be found there.

Even now, however, there were delays in declaring an overrun, coming up with a new plan and informing all of the operational companies and teams that would be affected. This was because those on site had become so focused on trying to deal with the rapidly deteriorating driver situation that they had failed to keep Network Rail’s Tactical Control team in Peterborough and the LNE Route Control in York in the loop.

The penny finally drops

By 11:00 on Boxing Day the work was officially 15 hours behind and it was clear that there was no chance of completing it on time. They were well beyond the point of no return. Still, it wasn’t until 13:00 that both Tactical and Route Controls had been brought fully into the loop and informed of the extent of the overrun.

The question now was whether they would be able to hand back two lines on Saturday 27th December at all. Part of the problem was that engineering trains and the rail-mounted crane for rail panels were occupying two lines themselves. Here Network Rail were caught between a rock and a hard place – moving the trains would free up those two lines for passenger use again, but without the equipment and storage capacity for scrap and ballast they provided there was no chance at all of the work being completed until well into the following week, meaning significant disruption on Monday when commuters would be returning to work.


Track panel installation at Holloway

The idea was discussed, but discarded. It was decided that it wasn’t possible to hand back any lines at all on Saturday. Instead they would hand back two lines by 05:30 on Sunday 28th, a full 24 hours later than planned. Until then, there could be no trains into Kings Cross.

Some frantic timetabling

For the train operating companies, who found out as Network Rail’s management began to disseminate the news, a full twenty-four hour delay was disastrous. Whilst the holiday period is quieter than regular travel times – the very reason that it is used for work such as this – it has its own peaks and troughs. One of those peaks always comes on the 27th, as people across the country travel from relatives’ houses to their own or vice-versa. Indeed East Coast already had 36,714 tickets booked in advance, and 85 services planned into and out of Kings Cross. First Hull Trains and Grand Central were also expecting to run nine trains each, with about 3,500 predicted passengers. Even combined, though, none of those matched GTR, which had 467 services scheduled with predicted loadings of at least the same level as East Coast.

It was here, with little time to determine a course of action, that another problem was discovered – for although contingency plans had been discussed between Network Rail and the TOCs for what should happen if there were delays to the handover, these had all assumed that any delay would be a matter of hours, not days. They had collectively failed to come up with an operational contingency plan which specifically anticipated all four lines remaining closed for the whole of the 27th.

Kings Cross Alpha One

It was 20:00 before the TOCs were able to fully mobilise their own planners to look at specifics with Network Rail. Short on time and needing to get the news out to prospective passengers as quickly as possible, it was decided that the only option was to adapt and implement a version of what is known as “Operational Kings Cross Alpha One” – the standard emergency plan used when a sudden blockage affects all lines into Kings Cross.

This plan will, at least subconsciously, be familiar to anyone who has found themselves travelling between the north, the Home Counties and Kings Cross in times of disruption. Trains depart and arrive from the likes of Finsbury Park, Stevenage and Peterborough as appropriate, based on what’s available and where the blockages lie. Indeed to a certain extent Network Rail and the TOCs were lucky that the problem had happened where it did, as Alpha One is an inherently flexible plan.

What it does require, however, is awareness and careful management of its potential bottlenecks. There is a reason, for example, that during intense disruption many trains terminate and start at Stevenage, about thirty miles north of London. It is a large station with multiple platforms, bi-directional junctions and a layout that is relatively friendly to large footfall, particularly when carefully managed. The disadvantage, of course, is that despite its proximity to the A1 and ample coach parking space for rail replacement services, it is obviously some distance from where many passengers are ultimately trying to go. Finsbury Park, on the other hand, is in London, on two Tube lines (the Victoria and Piccadilly) and within easy bus distance of more. Its layout, however, is far more restrictive, and thus Alpha One calls for it to be used carefully, particularly when it comes to balancing long distance and local services, which tend to carry very different kinds of passenger whose needs and movement patterns are not always compatible.

The need to be careful with Finsbury Park was doubly true on the 27th. A normal Alpha One assumes that a certain percentage of passengers will make use of other lines into London, but the works underway on the West Coast and Midland Main Lines that weekend meant this simply wasn’t possible.

With little time to spare, Network Rail and the TOCs hastily agreed a modified Alpha One based on all of the above. In particular, conscious of this problem, it was agreed that the frequent southbound and northbound GTR services would have near-exclusive use of the bi-directional platform 4 at Finsbury Park, with long distance services using platform 5. This was critical to avoiding overcrowding at the station, which due to the Christmas break would have little ability to call in additional staff if things became difficult to manage.

With a plan of action now agreed, news of the impending disruption was finally announced to the press, the British Transport Police, TfL and others at 17:10 on Boxing Day. Passengers were strongly advised not to travel unless absolutely necessary.

The snowball effect

As services began on the 27th it at first seemed like the hastily assembled contingency plan might hold together. By 10:00 over 100 GTR trains had already passed through platform 4 in both directions and services were only running with delays of about ten minutes. Suddenly, however, things began to fall apart.

The cause was the exact thing that those planning the timetable the night before had been worried about and had tried to avoid – mixing local and long distance services at Finsbury Park. The rush to assemble and communicate the plan had led to some elements of platform and path planning being left to local staff to determine, and in the process the instruction to keep platform 4 free of long distance services had been lost.

In fact, station staff at Finsbury Park and the Kings Cross signal box had agreed between themselves that some long distance trains would also use platform 4 to arrive and depart. As the first of these services started to arrive after 10:00 the service pattern quickly fell apart. Platform 4 became blocked with passengers unable to board or exit services and, although the problem was soon spotted and corrected (indeed only four long distance trains would actually use platform 4 that morning), Finsbury Park’s restrictive layout meant overcrowding rapidly snowballed from there. By 11:00 it had become so bad that staff were forced to seek the help of the British Transport Police to close the station, so that they could attempt to restore some order and put a passenger flow system in place. By 14:00 a one way system was finally operational and the crowds were moving, and by 17:00 the queues had largely disappeared.

For many passengers, however, this came as small consolation – some had found themselves outside the station for two or more hours and, when finally able to travel, had little chance of getting a seat due to the reduced timetable the TOCs had been forced to run. The delay in deciding a replacement timetable the day before also robbed travellers of one other thing – accurate travel information. For whilst the general news of prospective delays had been broadcast, there had been insufficient time to get the fully updated timetable into Customer Information Systems. This meant that station and platform signage as well as National Rail’s apps and website were lacking information and not always up to date, further adding to the confusion.

Learning lessons

What then are the lessons to be learned from the failures at Holloway? The report’s own conclusions on the matter are as follows:

  1. The overall structure and content of project and operational contingency plans will be improved to ensure that minimising passenger disruption is at the very heart of our planning.
  2. Contractors will be required to test any new equipment in an off-the-railway environment before it is used on live railway work.
  3. Recognising the risks that are introduced at times of peak project delivery, such as Christmas and Easter, consideration will be given to moving more work away from these peak times.
  4. A review will be undertaken of Network Rail processes for communicating operational train service contingency plans to our own and other staff at short notice.
  5. Engineering train crew and contingency at times of peak work will be treated with the same level of nationwide cross-project scrutiny and planning as other resources in short supply, such as signal testers and overhead line engineers.
  6. Network Rail will work with industry colleagues to improve service recovery and to provide better information to passengers.

Given that the work will need to be repeated at Christmas 2015 to renew the other two tracks, it must be hoped that Network Rail successfully take those lessons on board.

Written by John Bull