The summer of 2019 saw the disconnection of more than a million electricity customers in the biggest single electricity system disturbance in Britain in many years, comparable with those in 2008 and 2003.
Three different government or regulator investigations into the incident were initiated and all of them reported last week. This short overview and extended comment piece by one of UKERC’s Co-Directors, Keith Bell, summarises the reports’ findings and unpacks what the incident might teach us about the electricity sector and its readiness for further decarbonisation without compromising security of supply.
Ofgem, the UK Government’s Energy Emergencies Executive Committee (E3C) and the Office of Rail and Road (ORR) have all published their reports on the electricity system incident that happened on August 9th.
The Ofgem and E3C reports have confirmed the Electricity System Operator’s (ESO’s) description of the main sequence of events: a lightning strike on a transmission overhead line precipitated the expected loss of some distribution-connected generation (DG), plus the unexpected loss of significant amounts of power from the Hornsea offshore wind farm (owned by Ørsted) and the Little Barford combined cycle gas turbine (CCGT) station (owned by RWE). These losses then caused the loss of further DG. Together, the generation losses exceeded the amount of frequency containment reserve that had been scheduled by the ESO, and system frequency fell so far that the first stage of ‘low frequency demand disconnection’ (LFDD) was triggered, disconnecting 1.1 million electricity customers.
The E3C investigation found that a number of essential services, including a few hospitals, an airport, a water treatment works, supplies to a chemical works and both railway traction and signalling supplies were affected during the incident.
The ORR investigation found that power converters on two particular types of train operated by Govia Thameslink in the south-east of England tripped when system frequency dropped to 49.0 Hz. On a number of them, the equipment was locked out until a technician arrived with a laptop. These trains were therefore stranded and blocked key routes for a number of hours.
Ofgem reported that Ørsted and RWE have made payments of £4.5 million each to Ofgem’s “voluntary redress fund” in recognition of the impact that failure of their plant had on consumers. One of the Distribution Network Operators (DNOs), UKPN, also made a payment, of £1.5 million, in recognition of the adverse impact that its actions might have had when restoring disconnected demand before being authorised to do so by the ESO.
Ofgem has identified a number of issues with the ESO’s existing processes and procedures including the way that the need for frequency management services is identified and the way they are procured.
Ofgem’s investigations into the actions of the ESO are continuing. It is also working with the Department for Business, Energy and Industrial Strategy (BEIS) on a review of arrangements for electricity system governance and system operation.
Ofgem has recommended a number of actions including reviewing the following: the timetable for replacement of DG ‘loss of mains’ protection; various codes and standards; and arrangements for ensuring compliance of generation with relevant codes.
E3C made a number of the same recommendations as Ofgem. It also promised to clearly define what ‘essential services’ are and provide guidance to those services, and to develop a new incident response communications strategy. It also asked the DNOs and the Energy Networks Association to undertake a “fundamental review” of the LFDD scheme.
The ORR recommended that train operating companies should check the settings of train protection systems and that Network Rail should check the nature of their connections to DNOs’ networks.
What happened on August 9th: the investigations
On Friday January 3rd 2020, the electricity regulator, Ofgem, the UK Government’s Energy Emergencies Executive Committee (E3C) and the Office of Rail and Road (ORR) all published their reports on the electricity system incident of August 9th, when a single lightning strike on an electricity transmission overhead line led, within less than 80 seconds, to the disconnection of electricity supplies to 1.1 million customers and disruption to rail services in the south-east of England.
Ofgem’s report looks particularly at the actions of various licensed electricity sector actors, in particular the Electricity System Operator (ESO) part of National Grid, two generation companies – Ørsted and RWE – and the Distribution Network Operators (DNOs) on the day and leading up to it. The ORR’s report reviews what happened on the rail system while the E3C report takes a broad, high level look at the lessons that might be learned.
Ofgem’s report states that payments totalling £10.5 million have been made into Ofgem’s “voluntary redress” fund by three major industry actors in recognition of direct impacts on electricity users, or of impacts that could easily have arisen from departures from correct procedures. Ørsted and RWE – whose plant disconnected from the system when it shouldn’t have – each paid £4.5 million, and one of the DNOs – UK Power Networks – has paid £1.5 million in recognition of having restored demand before receiving confirmation from the ESO that it was safe to do so.
Ofgem’s investigation has confirmed the broad sequence of events outlined in the Electricity System Operator’s (ESO’s) report from September. However, Ofgem has stated that investigations into the ESO’s performance are still ongoing and that it is reviewing the ESO’s governance.
It is impossible for an electricity supply to be 100% reliable. Random events caused by the weather, equipment failure or human error are always possible. However, because of automatic control system responses made ready by the system operator, and the meshed nature of the transmission network, not all of them will lead to interruptions to customers’ supply.
On average, an electricity customer in Britain experiences one interruption per year. Almost all interruption incidents are very localised and are the result of faults on the distribution network.
Generally speaking, the causes of electricity supply interruptions in which a large amount of load is lost fall into one, or both, of two categories: extreme weather that damages many parts of the network at once; or a combination of discrete events, none of which would have been expected to cause any loss of supply on its own.
Although there was a lightning storm going on at the time, the weather was not extreme on August 9th last year and the incident that affected Britain fell into the second category. None of the contributing events on their own should normally have caused any interruptions to electricity users’ supply.
The initiating event was a short circuit on a 400 kV overhead line caused by a lightning strike. Such faults typically happen tens of times each year on the transmission network and do not cause losses of supply. On this occasion, however, the voltage depression caused by the short circuit provoked an incorrect response by the wind turbines’ control systems at Hornsea offshore wind farm, leading to large oscillations of reactive power, the triggering of the turbines’ own protection and the loss of almost all of the power being produced at Hornsea. This should not normally happen. We are told by both Ofgem and the ESO that Hornsea’s owner, Ørsted, and the wind turbine manufacturer, Siemens Gamesa, re-configured the wind turbines’ control software on August 10th so that the same problem should not arise at Hornsea again.
The voltage depression also caused a steam turbine at Little Barford combined cycle gas turbine power station, owned by RWE, to trip. Although both the ESO’s report from September and Ofgem’s from last week quote RWE’s explanation of a measurement error triggering the outage, it is still not completely clear why this arose or why, to compound the problem, a gas turbine there also tripped a minute later.
A further, known issue made the event worse. This concerned small scale generation – ‘distributed generation’ (DG) – connected within the distribution networks. Two types of ‘loss of mains’ protection, intended to safely shut down a portion of the distribution network when it becomes isolated from the rest, are known to be triggered by certain disturbances on the transmission system. ‘Vector shift’ protection is sensitive to short circuit faults and ‘rate of change of frequency’ (ROCOF) protection is triggered by a large enough instantaneous loss of generation. Although the phenomena are known, what the ESO and the DNOs have told Ofgem about how much DG was lost due to each of them on August 9th is based only on estimates. This is largely because, many years after significant volumes of DG started to be connected, the DNOs lack detailed monitoring of it.
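To make the ROCOF mechanism a little more concrete, here is a minimal sketch, in Python, of the kind of decision a rate-of-change-of-frequency relay makes. The threshold and time-delay values are assumptions chosen for illustration only (legacy GB settings are commonly quoted as being around 0.125 Hz/s), not the settings of any particular installation.

```python
# Illustrative ROCOF ('rate of change of frequency') loss-of-mains check.
# The threshold and time-delay values are assumptions for illustration only;
# real relay settings vary by installation and are being revised under the
# industry's loss of mains protection change programme.

def rocof_trip_time(freq_samples, dt, threshold_hz_per_s=0.125, hold_time_s=0.0):
    """Return the time (s) at which this illustrative relay would trip, or None.

    freq_samples: frequency measurements in Hz, sampled every dt seconds.
    threshold_hz_per_s: |df/dt| above which the relay starts timing.
    hold_time_s: how long the excursion must persist before tripping.
    """
    above_since = None
    for i in range(1, len(freq_samples)):
        rocof = abs(freq_samples[i] - freq_samples[i - 1]) / dt
        t = i * dt
        if rocof > threshold_hz_per_s:
            if above_since is None:
                above_since = t
            if t - above_since >= hold_time_s:
                return t  # relay operates and the DG disconnects
        else:
            above_since = None
    return None

# Example: a steady fall of 0.15 Hz/s from 50 Hz, sampled every 0.1 s,
# trips a 0.125 Hz/s relay almost immediately.
samples = [50.0 - 0.15 * 0.1 * i for i in range(50)]
print(rocof_trip_time(samples, dt=0.1))
```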
There is a programme of work within the industry to replace or re-set these two DG protection mechanisms so that they are less sensitive to transmission system disturbances. However, this loss of mains protection change programme is scheduled to be completed only in 2022.
The loss of DG due to ‘vector shift’, the drop in power at Hornsea and the disconnection of the steam unit at Little Barford combined to cause further DG to trip due to ROCOF. As a result, there was not enough generation to meet demand. This caused the system’s frequency to fall. ‘Frequency containment reserve’ is scheduled by the ESO, mostly from generators but also from batteries, to respond automatically and correct the imbalance. However, there was not enough response to prevent the system’s frequency from falling so far below statutory limits that it triggered the disconnection of 1.1 million customers by automatic ‘low frequency demand disconnection’ (LFDD) equipment.
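For readers who want a feel for why an uncovered imbalance pulls frequency down so quickly, here is a rough single-machine-equivalent (‘swing equation’) sketch. The stored kinetic energy and deficit figures below are assumptions chosen for illustration, not the actual values on August 9th.

```python
# Rough 'swing equation' illustration of how quickly frequency falls when
# generation suddenly drops below demand. The inertia and deficit figures are
# assumptions chosen for illustration, not the actual values on August 9th.

F_NOMINAL_HZ = 50.0
STORED_KINETIC_ENERGY_GWS = 200.0   # assumed system kinetic energy (GW*s)
UNCOVERED_DEFICIT_GW = 1.5          # assumed generation shortfall (GW)

# Initial rate of change of frequency, before any reserve has responded:
#   df/dt = -deficit * f_nominal / (2 * stored kinetic energy)
rocof_hz_per_s = -UNCOVERED_DEFICIT_GW * F_NOMINAL_HZ / (2 * STORED_KINETIC_ENERGY_GWS)
print(f"Initial rate of change of frequency: {rocof_hz_per_s:.3f} Hz/s")

# With no corrective response at all, frequency would reach the 48.8 Hz LFDD
# threshold in roughly:
print(f"Time to 48.8 Hz with no response: {(50.0 - 48.8) / abs(rocof_hz_per_s):.0f} s")
```

In practice, frequency containment reserve starts to act within seconds, which is why the actual excursion took tens of seconds to develop; the point of the sketch is only that, without enough fast response, the margin for error is small.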
The E3C report notes that some sites were affected that are commonly – but not consistently – referred to as ‘essential services’. Many of them are included on the National Risk Register. The ‘essential services’ disconnected by the action of LFDD included supplies to Newcastle airport, two hospitals, railway signalling at two sites, railway traction supplies at one site and a water treatment works. The E3C report also notes further losses during the incident: traction supplies at two locations, railway signalling supplies at six sites, and supplies at two hospitals, one water treatment works and an airport. The reasons for these further losses are as yet unexplained although, for the most part, backups (such as standby generation) operated successfully. However, supplies to an oil refinery and chemicals manufacturing plant were also reported by E3C to have been disconnected, with full operations taking several weeks to be restored. (Curiously, the ORR report says that “eight signalling power supplies were lost in rural locations”. These include “Norwood Fork, London”. I grew up near there and it’s definitely not rural).
Normally, the ESO schedules enough frequency containment reserve to cover for the loss of whatever the ‘single largest infeed’ happens to be at the time. Ofgem reports this as having been 969 MW. However, 1561 MW was lost within less than half a second. Nonetheless, it’s possible that the system frequency might not have dropped as low as the 48.8 Hz threshold for the first stage of LFDD if all the scheduled frequency containment reserve had delivered. Ofgem reports that “primary response providers (required to deliver a response within 10s) under-delivered by 17% and secondary response providers (required to deliver a response within 30s) under-delivered by 14%”. However, even the ESO seemed initially unsure quite what had been delivered, quoting one set of numbers in Appendix M of its September report and another set in the main body. Perhaps this forms part of what Ofgem has in mind when it says that “the ESO has been unable to demonstrate a robust process for monitoring and validating the performance of individual providers”. Further: “Our assessment of the level of inertia and frequency response held by the ESO prior to this event suggests that there was only a narrow margin for error in securing the system against transmission-connected generator losses alone” and there is a “high level of sensitivity to small changes in key assumptions”. (The latter is something we’ve seen in our research at University of Strathclyde).
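Putting Ofgem’s headline numbers side by side gives a sense of how tight things were. Note that the scheduled primary response volume in the second half of this snippet is a hypothetical placeholder – the report quotes only the percentage under-delivery – so the final figure is purely illustrative.

```python
# The headline numbers from Ofgem's report, side by side. The scheduled primary
# response volume below is a hypothetical placeholder (the report quotes only
# the percentage under-delivery), so the final figure is purely illustrative.

secured_largest_infeed_mw = 969   # the loss the ESO had secured against
actual_loss_mw = 1561             # generation lost within about half a second

print(f"Loss in excess of the secured infeed: "
      f"{actual_loss_mw - secured_largest_infeed_mw} MW")   # 592 MW

hypothetical_primary_response_mw = 1000   # assumed scheduled volume, for illustration
under_delivery_fraction = 0.17            # Ofgem: primary response under-delivered by 17%
print(f"Primary response missing at that under-delivery: "
      f"{hypothetical_primary_response_mw * under_delivery_fraction:.0f} MW")
```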
As has already been noted, a number of ‘essential services’ – which E3C has promised to define more clearly in future – were affected by the incident, some because they were disconnected by LFDD, some for other, unexplained reasons, perhaps due to the sites’ own protection operating when the system’s frequency dropped to 49.0 Hz. The general requirements of LFDD are set out in the Grid Code; responsibility for its implementation sits with the DNOs. Could they have configured it such that no essential supplies would be disconnected? At present, LFDD is achieved by opening circuit breakers at a 33 kV level. Ofgem notes that “isolating particular sites is generally unfeasible”. I’m no expert on modern communications equipment but I cannot believe that it is actually “generally unfeasible”. A more pertinent question might be how much consumers as a whole (since it is us who will pick up the bill) are willing to pay for a much better targeted system that is required to operate only occasionally. How much better targeted could LFDD be if it were implemented at an 11 kV level?
As it happens, the effect of LFDD on August 9th was much less than a reading of the Grid Code would, I think, lead most people to expect: only, according to Ofgem’s report, about 4% of demand (compared with the 5% specified in the Grid Code for the first tier), amounting to around 892 MW. There was wide variation between the different DNOs, but the overall effect in terms of reducing the mismatch between generation and demand was only around 330 MW (according to modelling done by my colleague at Strathclyde, Callum MacIver) or 350 MW (according to the ESO). Why was that? The most likely explanation is that around 540-560 MW of DG was also disconnected.
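A quick check of the arithmetic behind that explanation, using only the figures quoted above:

```python
# Quick check of the arithmetic above: if ~892 MW of demand was tripped but the
# net improvement in the generation-demand balance was only ~350 MW, the
# difference must be DG that was disconnected along with the demand.

demand_disconnected_mw = 892        # demand tripped by LFDD (Ofgem's figure)
net_balance_improvement_mw = 350    # net effect on the imbalance (ESO's figure)

implied_dg_lost_mw = demand_disconnected_mw - net_balance_improvement_mw
print(f"Implied DG disconnected along with the demand: ~{implied_dg_lost_mw} MW")
# ~542 MW, consistent with the 540-560 MW range suggested above
```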
By international standards, electricity supply in Britain is very reliable and the event on August 9th was small, largely because LFDD succeeded in preventing the situation from getting a lot worse. However, the impact on rail users in the south-east was significant with massive delays affecting thousands of passengers. This was largely due, so it seems, to the inadvertent operation of another set of protection equipment, this time on certain types of trains built by Siemens Mobility and operated by Govia Thameslink. As the ORR reports, these were subject to a software upgrade programme at the time: all of the class 700 and 717 trains stopped working when system frequency dropped to 49.0 Hz. However, only those with the older version of the software could be re-started by the driver. The others – 22 of them – required a technician to come out with a laptop.
It is clear from the ESO’s report in September, Ofgem’s new report and conditions on the system at the time that the variability of renewables was not the cause of the event. Also, one event cannot be taken as a sign of deteriorating system stability or of the complete inadequacy of procedures and conventions that have served us well for many years. However, there is no cause for complacency. Britain’s supplies of energy need to be progressively decarbonised and the technical characteristics of the electricity system continue to change. The costs of the transition need to be kept to a minimum but future electricity users will no doubt expect their supply to be, on average, as reliable as it has been up to now.
One particular thing that the August 9th event perhaps highlights is that equipment designed to operate only on rare occasions can still be triggered. This includes protection equipment on generators and trains, and ‘defence measures’ such as low frequency demand disconnection, of which E3C has called for a “fundamental review”.
Even though such equipment is rarely called upon, network operators and critical users of the network such as generators, railway signalling, communications hubs, water and chemicals processing plants, and hospitals need to be sure that their protection equipment and back-up facilities continue to work correctly and are set appropriately for the power system as it is in Britain. For example, although system frequency is normally supposed to be within the range of 50 ± 0.5 Hz, it can go outside that range. Generation connected to the system is supposed to keep operating with the system’s frequency as low as 47.5 Hz. The ORR reports the main train operating standard as stating that the lower limit to the frequency of AC supplies in Britain is 47.0 Hz. Why then does an accompanying guidance note permit trains’ power supplies to be disconnected at 49.0 Hz, and why was it only those particular Siemens trains that did so on August 9th?
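Laying the frequency limits quoted above side by side makes the mismatch obvious; the figures here are only those already mentioned in this piece and the standards it cites, nothing new is assumed.

```python
# The frequency limits quoted above, side by side. A 49.0 Hz trip setting sits
# well inside the band that connected equipment is supposed to ride through.

limits_hz = [
    (49.5, "lower edge of the normal 50 +/- 0.5 Hz operating band"),
    (49.0, "setting at which the affected trains' converters tripped"),
    (48.8, "first stage of low frequency demand disconnection"),
    (47.5, "Grid Code: generation must keep operating down to this frequency"),
    (47.0, "rail standard's stated lower limit for AC supplies"),
]

for frequency, description in limits_hz:
    print(f"{frequency:4.1f} Hz  {description}")
```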
The ORR has recommended that train operating companies should check the settings of train protection systems and that Network Rail should check the nature of their connections to DNOs’ networks.
The electricity system in Britain is one, very large, interconnected system with millions of interacting components. The network owners and operators – the ESO, the transmission owners and the DNOs – have a key role in making the system as a whole work correctly. However, they are forbidden from owning or operating generators and, on August 9th, it was generation equipment that, in the first instance, either behaved incorrectly or, in the case of DG, had settings that have been known for some time to be flawed.
One question raised by Friday’s reports and the ESO’s from September is what level of responsibility either the ESO or the DNOs should take in ensuring that equipment connected to the system – generators, interconnectors and loads – behaves in a way that contributes to system stability rather than putting the system at unreasonable risk. Rather pointedly, Ofgem says that “the ESO’s approach to following the procedures [for checking generators’ compliance with the Grid Code] is not sufficiently considered and proactive given the increased complexity of the system.”
The job of ensuring that the system hangs together is arguably becoming more difficult than in the past. The volume of small scale ‘distributed’ generation capacity has grown from an estimated 7 GW in 2009 to more than 37 GW today. This represents tens of thousands of individual installations in contrast to the hundreds of power stations that met demand for electricity up to recent years. (The peak demand for electricity in Britain over the Winter of 2019/20 is expected to be around 60 GW). Ofgem says that “the ESO could have been more proactive in understanding and addressing issues with distributed generation and its impact on system security” and that “the information DNOs collect and record on distributed generation is variable or severely limited”. I know from talking to colleagues in other countries that it is poor relative to, say, Germany and the island of Ireland. In my view, failure to adequately manage the operation of DG represents both a threat to the system and the missing of an opportunity to use the services that DG might provide.
Many of the DNOs have aspirations to become ‘distribution system operators’ (DSOs) that take a much more active role in managing power flows and utilising flexibility from generation, storage and flexible demand in real time than is done now. However, this requires much more observability and controllability of distributed resources. I said in UKERC’s 2019 Review of Energy Policy that “the DNOs’ readiness to become [DSOs] was not shown in a good light by what happened on August 9th.” Ofgem agrees, pointing to “the substantial improvements required in DNOs’ capabilities if they are to transition towards playing a more active network management role as DSOs”.
Another new challenge is the use by so many generators or interconnectors of power electronic converters. These allow hugely increased and very useful control flexibility relative to old style, directly connected electrical machines. However, given the wide variety of ways in which the thousands of lines of control software can be written and the intellectual property bound up in it, only the manufacturers know in detail how the converters behave. Unfortunately, they generally lack the network operators’ models of the wider systems to which they will connect and so can’t be totally sure what the converters will do under all conditions once connected and how they will interact with other equipment. Grid Code rules help to ensure that, normally, everything is ok, but the behaviour of the wind turbines at Hornsea on August 9th showed that software changes can make a big difference: the version of the software installed at the time (and subsequently replaced) caused unstable responses to the not unusual condition of a voltage depression on the network, to the ultimate detriment of the system as a whole. According to Ofgem, “The ESO relied significantly on self-certification by Hornsea 1 for the generator’s commissioning process as demonstration of the generator’s compliance with the Grid Code, despite the complexity of the connection … We would expect the ESO to review the adequacy of the procedures it carries out and flag potential compliance concerns to Ofgem.”
Should it fall to the network operators to ensure that power electronic converters behave only in ways that are benign for the system? How do they check? And when equipment connected to a DNO’s network impacts on the system as a whole, who takes final responsibility for ensuring that everything is ok?
A valid concern for the future, very low carbon electricity system is whether there will be sufficient resources to meet demand throughout periods when there is little wind or sunlight. However, that was not the issue on August 9th: it was about how the available resources are used. In particular, how much ‘frequency containment reserve’ needs to be scheduled that can respond quickly enough to a disturbance such as the tripping of a large generator or interconnector? The scheduling of such short-term reserve entails a cost, that of making sure there is enough ‘headroom’ on responsive plant relative to those units’ maximum output. A balance is struck between that cost and the benefit of avoiding unacceptable frequency deviations given the whole range of things that might happen.
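As a toy illustration of that balance, the sketch below weighs an assumed cost of holding headroom against an assumed probability and cost of the reserve being outrun. Every number is invented purely to show the shape of the trade-off; none comes from the SQSS or from the ESO’s actual methodology.

```python
# Toy illustration of the reserve-scheduling trade-off described above: holding
# more reserve costs more, but cuts the chance that a large loss outruns the
# response. Every number here is invented purely to show the shape of the
# trade-off; none comes from the SQSS or the ESO.

holding_cost_per_mw_per_hour = 5.0          # assumed cost of headroom (GBP/MW/h)
cost_if_reserve_is_exceeded = 5_000_000     # assumed consequence cost (GBP)

# Assumed probability, per hour, that the loss exceeds the reserve held.
prob_loss_exceeds = {800: 0.02, 1000: 0.008, 1200: 0.003, 1400: 0.0015, 1600: 0.0014}

def expected_hourly_cost(reserve_mw):
    return (reserve_mw * holding_cost_per_mw_per_hour
            + prob_loss_exceeds[reserve_mw] * cost_if_reserve_is_exceeded)

for reserve_mw in sorted(prob_loss_exceeds):
    print(f"hold {reserve_mw} MW: expected cost ~GBP {expected_hourly_cost(reserve_mw):,.0f}/h")

best = min(prob_loss_exceeds, key=expected_hourly_cost)
print(f"Cheapest option under these made-up numbers: hold {best} MW")
```

In reality the balance is struck across a whole distribution of possible losses and response behaviours, which is exactly why the inertia and under-delivery figures discussed earlier matter so much.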
The electricity system needs to be decarbonised, but there is also a need to get the engineering right. A power system is already a very complex piece of engineering but the growth of DG and of the use of power electronic converters introduces potential behaviours that are, as yet, only partially understood. As I noted in a blog from July, there is an urgent need for research and for people capable of engaging with the new challenges. This arises at a time when the UK Government’s main research funding agency decided to cut funding from its core ‘Centres for Doctoral Training’ programme for PhD students working on electricity system issues. Almost all of those students would have been expected to go on to work in the industry. It has also been suggested that one of the key companies in the power sector, the ESO, is encouraging experienced individuals out of the door.
The electricity sector also has complicated institutional arrangements. As was suggested in the 2019 UKERC Energy Policy Review, the August 9th incident shows, in my view, that responsibilities for ensuring electricity system resilience – preventing, containing and recovering from interruptions to supply arising from disturbances – need to be clarified and applied in a more rigorous way. As E3C’s report noted, essential services that use electricity also need to be helped to understand the extent to which they can depend on a supply from the system and how to survive interruptions.
Delivering a resilient system cost-effectively requires the right mix of operational decisions, control facilities, logistics and assets with the right specifications. Engineering standards, clearly defined roles for the sector’s various licence holders and codes governing the relationships between them are critical to getting both the engineering and the commercial relationships right among so many different actors. I and many others, such as those involved in the ‘Future Power System Architecture’ (FPSA) initiative, have been arguing for some time that the set of codes and standards needs to be kept fit for the energy system that is coming, and that this has not been happening. I therefore welcome the launch by the Department for Business, Energy and Industrial Strategy (BEIS) and Ofgem last year of a review of electrical engineering standards. To be fair, it was started before the August 9th incident though, based on what I’ve seen of its progress so far, I’m not sure how much it’s going to be able to deliver in the short term.
Ofgem, in its report from last week, has called for a number of specific reviews of standards and procedures, e.g.: “the ESO, as the party required to operate to [Security and Quality of Supply Standard (SQSS)], should carry out [a review of the SQSS requirements for holding reserve, response and system inertia] and raise modification proposals to the SQSS Panel by April 2020.” (The ESO, in its brief statement in response to Ofgem’s report, promised only that it would provide an “update” to the SQSS industry panel by April.)
One particular suggestion by Ofgem is that “it may be necessary to consider standards for assessing explicitly the risk-weighted costs and benefits of securing the system for certain events”. This is not a new idea. It is already a feature of the Security and Quality of Supply Standard, albeit in a very limited way, and I recall a colleague in National Grid arguing 15 years ago for a much more extensive explicit treatment of risk. My own feeling at the time was that we needed a much clearer conceptual framework and improved computing power before going down that road. A European research project, GARPUR, has recently put in place what I think are very useful theoretical underpinnings and carried out a practical demonstration on the Icelandic system. However, the GARPUR consortium also noted a key barrier to the application of what it called “probabilistic reliability management methods”: the lack of reliable statistical or other data collected by transmission system operators across Europe. (So it’s not just the GB network licensees, then?).
One of the most significant responses by the North American regulator to the blackout that affected the North-Eastern US and parts of Canada in August 2003, disconnecting more than 50 million people, was a new regulatory requirement on key actors in the sector. This obliged them to collect and process basic reliability data for components of the system and to publish annually a raft of what were called ‘vital signs’ so that trends could be seen. These include emergency alerts, transmission outage rates, protection system performance and reserve margins, and go far beyond the rather paltry information published by the GB transmission licensees with which Ofgem has seemed content for so long. If we are ever to get security standards that are more explicit in their treatment of risk, and to be able to keep an eye on how the network licensees are performing, I believe we need something similar to what is required in North America, and we need to start doing it soon.
I will note one final thing: no fines were levied last week. Ofgem has said that either agreement was reached with licensed parties so that there was no need to make a determination on licence compliance, or that it found no evidence of failure to comply with licences. However, it also said that investigation into the ESO will continue: “if we identify instances in which the ESO has failed to meet its requirements, we will take the necessary action.” It also said: “Given the changes which are required in the energy system to achieve Net Zero we believe that the core roles of the system operators are worthy of review. The concerns raised by our investigation into the events of 9 August 2019 and associated lessons learned will inform that work. We will also work closely with BEIS ahead of its position paper on system governance in 2020.” Is this a recognition that Ofgem’s existing procedures for regulating the sector are inadequate, or that something has changed beyond just the growth of renewable generation? So soon after the ESO came into being (in April 2019), might we see further major changes to institutional arrangements for planning and operating Britain’s power system?
Keith Bell is a Professor of Smart Grids at the University of Strathclyde and a Co-Director of the UK Energy Research Centre. He has worked on power system planning and operation in industry and academia for more than 25 years, including leading a major review of the SQSS in 2004-5. In the 4th phase of UKERC, he is leading Theme 4 on Energy Infrastructure Transitions.