Train-knackering software design blunder discovered after lightning sparked Thameslink megadelay

British electricity providers are paying £10.5m after a 2019 outage revealed a train-bricking software design flaw. Companies behind the Hornsea One wind farm, off the Yorkshire coast, and the Little Barford conventional power station in Bedfordshire have between them coughed up £9m in "redress" to UK 'leccy regulator Ofgem …

  1. batfink

    Bugger

    A problem hard to detect in advance, and basically impossible to test for.

    Who signed off the Requirements? If it wasn't in there, it would have taken a pretty sharp eye in the design/implementation reviews to spot the discrepancy between the rail standards and the Siemens design.

    1. Steve Davies 3 Silver badge

      Re: and basically impossible to test for.

      Errr No it isn't. That's what the Test Track at Velim is for. They can apply all the edge cases to ensure that a spec is met before the trains go into service. Many different manufacturers use the facility for their certification approval testing.

      1. Mark #255

        Re: and basically impossible to test for.

        How do they manage frequency variation? SFC?

        1. Richard 12 Silver badge
          FAIL

          Re: and basically impossible to test for.

          We have a Very Big Magic Power Supply that can play back almost any pattern of frequency excursions into around 400A of load. I forget the exact rating, but it's not the biggest one you can buy.

          Our regression testing involves running a large set of weird-as-heck supply events, most of which have been recorded from real power events over the last few decades.

          This is despite us being a lot smaller than Siemens, and our power control products being nowhere near as safety-critical.
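
          For anyone curious what that kind of regression run looks like, here's a rough sketch in Python - the names (traces/, the controller interface, its states) are all invented, since I'm obviously not posting our real harness. Replay each recorded excursion into the device under test and check it never latches into a permanent lockout for a dip it should ride through:

          # Rough illustration only - all names and file formats are invented.
          from pathlib import Path
          import csv

          RECOVERABLE_FLOOR_HZ = 48.5   # spec says ride out "short periods" down to here

          def load_trace(path):
              """Trace file: time_s,frequency_hz rows recorded from a real grid event."""
              with open(path) as f:
                  return [(float(t), float(hz)) for t, hz in csv.reader(f)]

          def replay(trace, controller):
              """Feed the excursion sample by sample, return the controller's final state."""
              for _, hz in trace:
                  controller.step(frequency_hz=hz)
              return controller.state()   # e.g. "RUNNING", "TRIPPED", "LOCKED_OUT"

          def test_no_permanent_lockout(make_controller):
              for path in sorted(Path("traces").glob("*.csv")):
                  trace = load_trace(path)
                  state = replay(trace, make_controller())   # fresh device under test per trace
                  worst = min(hz for _, hz in trace)
                  if worst >= RECOVERABLE_FLOOR_HZ:
                      assert state != "LOCKED_OUT", f"{path.name}: locked out at {worst} Hz"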

      2. Roland6 Silver badge

        Re: and basically impossible to test for.

        >Errr No it isn't. That's what the Test Track at Velim is for.

        This is something that could and should have been factory tested.

        However, as noted in the article, it is likely it was tested; it's just that no one really thought about the real-world conditions in which this event would be triggered, and thus no one correctly identified the appropriate level of lockout and reset.

        1. sanmigueelbeer

          Re: and basically impossible to test for.

          This is something that could and should have been factory tested

          Uhhhh, no. Manufacturers cannot be trusted to test their own products.

          An example such as VW's "diesel-gate" reinforces why testing (and verification) done by independent bodies (plural) is the way to go.

          1. Anonymous Coward
            Anonymous Coward

            Re: and basically impossible to test for.

            "Uhhhh, no. Manufacturers cannot be trusted to test their own products".

            Can we perhaps agree that manufacturers should test their own products to the best of their ability, but that independent testing is also required to detect problems that the manufacturer has missed (for whatever reason)?

            In other words, trust but verify.

            1. Dave 15

              Re: and basically impossible to test for.

              Manufacturers are under continual pressure to ship product to make profit; skipping testing, or ignoring inconvenient test findings, is an easy shortcut to take.

              This is becoming more of a problem with more complex products, but even basic ones aren't actually tested these days... after all, after 100 years of the motorcar it appears the modern designers are not capable of designing lights where I both know the light has malfunctioned AND can fix it by the side of the road. Hell, they can't even sort out a way of preventing my phone, keys and loose change coming out of my pocket and falling into a gap between the seat and centre console where it is impossible to ever reach them again! And it's not just car manufacturers (who want us to believe that despite these shortcomings they are capable of creating a self-driving car we can put our lives in the 'hands' of) but everyone else... how many laptops do YOU have festering in the corner because the power connector fell off the board? I have a collection waiting for the trip to the skip. Or mobile phones that can't answer calls and whose charge connector is such a poor fit you can't charge without hours of patient fiddling (my shiteberry running android manages both of those in one go, to the point I use a 10 year old Nokia for most things and the shiteberry ONLY for the google authenticator app I need for work).

              1. man_iii

                Re: and basically impossible to test for.

                I got some good news and bad news.... Chrome can run Google Authenticator app on the desktop without problems... But figuring out how to get it working and setup with the qr scan code is a task all on its own.

                1. JulieM Silver badge

                  Re: and basically impossible to test for.

                  I've written a Google Authenticator clone that runs to about one screenful of Perl, if you strip out delimiting line breaks and comments.

                  The same function (which looks like a fiendish version of a mind-reading parlour trick) is used in the authenticator app (which is air-gapped from the client side) and the server login process, to generate a stream of numbers from the timestamp divided by 30 (so the code is valid for long enough to type it in, send it over the network and check it; you can optionally check against the code from 30 seconds ago, in case it changed while in transit) and a pre-shared key (in the QR code; anyone who sees that QR code can generate the stream of numbers).
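
                  If anyone wants to see how little there is to it, here's roughly the same thing sketched in Python rather than my Perl (illustrative only - the pre-shared secret is the base32 blob from the QR code, and the previous-window check is the optional bit I mentioned):

                  import base64, hashlib, hmac, struct, time

                  def totp(b32_secret, at=None, step=30, digits=6):
                      key = base64.b32decode(b32_secret.upper() + "=" * (-len(b32_secret) % 8))
                      counter = int((time.time() if at is None else at) // step)   # timestamp divided by 30
                      mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
                      o = mac[-1] & 0x0F                                           # dynamic truncation
                      return f"{(struct.unpack('>I', mac[o:o+4])[0] & 0x7FFFFFFF) % 10**digits:0{digits}d}"

                  def verify(b32_secret, submitted, step=30):
                      now = time.time()                                            # accept current and previous window
                      return submitted in {totp(b32_secret, now), totp(b32_secret, now - step)}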

                2. phuzz Silver badge

                  Re: and basically impossible to test for.

                  OP just needs something that implements RFC 6238 (on whatever hardware they can bring themselves to carry around without complaining the whole time).

          2. Roland6 Silver badge

            Re: and basically impossible to test for.

            >Uhhhh, no. Manufacturers cannot be trusted to test their own products.

            Who said that Factory testing is something done exclusively by the manufacturer?

            Factory testing - performed by BR employees - was an integral part of BR's acceptance process; fail factory testing and nothing gets shipped.

            But as noted in the article, the system worked as per the requirements specification, just that the requirements specification was as expected: incomplete.

            >Example such as VW's "diesel-gate" reinforces why testing (and verification) done by independent bodies (plural) is/are the way to go.

            No, diesel-gate demonstrated that the US standard emissions test was not fit for purpose.

            Interestingly, the work of Emissions Analytics indicates that pre-Euro-5 diesel engines can be less polluting than both modern diesel and petrol engines, yet they fail the mandated emissions test - go figure. This counter-intuitive finding is also reflected in the real-world data, which shows an increase in ICE pollution when the opposite was expected. So it does look like Europe also has standards and mandated independent testing that aren't fit for purpose...

            1. Martin an gof Silver badge

              Re: and basically impossible to test for.

              It seems to have been even more subtle than that with the trains. According to this document both Siemens's Technical Specification for the trains and the national electrification specification stated that the trains should continue to operate "for short periods" with frequencies as low as 48.5Hz, but that their design submission merely stated compliance with EN50163, which allows (but presumably doesn't mandate) disconnection below 49.0Hz. The actual event had a lowest frequency of 48.8Hz, but only for a very short period of time (milliseconds) as 48.9Hz triggered some demand disconnection and 48.8Hz triggered more, on top of the extra generation already in place.

              Additionally,

              Siemens have also clarified that there should not have been a Permanent Lockout on the train following a protective shutdown caused by a supply voltage frequency drop. All trains should have been recoverable via Battery Reset whereas 30 trains were not recoverable. This was not the intended behaviour of the train.

              So firstly, the trains were built to a different standard to the one they said they were built to; secondly, it seems as if UK guidelines haven't been properly harmonised with EN guidelines; thirdly, the actual operation of the software didn't match its own specification after an update, whereas the pre-update software worked as expected. The figures are also interestingly slightly different to those given in the article (the article says 47Hz).
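
              To make the distinction concrete, here's a toy sketch using the figures above (48.5Hz / 49.0Hz); the logic and the "short period" duration are my reading of what the specs appear to require, not Siemens's actual code:

              SHORT_PERIOD_S = 10.0   # assumed duration for "short periods" - not from any spec

              def traction_response(frequency_hz, seconds_below_49):
                  """Ride through brief dips, shut down protectively on sustained ones,
                  and never lock out permanently on a frequency dip alone."""
                  if frequency_hz >= 49.0:
                      return "RUN"
                  if frequency_hz >= 48.5 and seconds_below_49 < SHORT_PERIOD_S:
                      return "RUN"                             # "continue for short periods" per the TS
                  return "PROTECTIVE_SHUTDOWN_RECOVERABLE"     # driver battery reset, never a permanent lockout

              print(traction_response(48.8, 0.01))             # the actual event: 48.8Hz for milliseconds -> RUN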

              Also interesting to note that the windfarm went offline due to "oscillations" in a control system causing overcurrent at the turbines - again, something that shouldn't have happened (insufficient damping) and seems to have been solvable by a software update.

              Conversely, the report from Little Barford is very sparse. They think the steam turbine tripped due to "a discrepancy in speed signals", which to me says "dud sensor", but they have no idea why the steam bypass valve failed, thus causing the automatic trip of the first gas turbine set and the subsequent manual trip of the second. Uniquely for this series of unfortunate events, I imagine it's most likely a mechanical or electrical failure, rather than software!

              M.

            2. keith_w

              Re: and basically impossible to test for.

              >Example such as VW's "diesel-gate" reinforces why testing (and verification) done by independent bodies (plural) is/are the way to go.

              >No, diesel-gate demonstrated that the US standard emissions test was not fit for purpose.

              No, diesel-gate demonstrated that manufacturers can write software designed specifically to evade clean air testing. There was nothing wrong with the tests themselves.

              1. Roland6 Silver badge

                Re: and basically impossible to test for.

                >There was nothing wrong with the tests themselves.

                Apart from not being "fit for purpose" - remember (independent) real world testing showed that real world emissions were very different to those recorded by the mandated tests...

                So if the purpose of the tests was to replicate the "real world" under controlled/repeatable conditions then they failed.

                Not saying that the individual tests and measurements mandated weren't good, just that the test suite failed to deliver against the objective and guard against cheating. When we wrote the test suites for networking protocols a few decades back, we ensured the test suite had sufficient coverage to make it uneconomic for vendors to simply build a responder rather than a full protocol state machine and the rest.

            3. sanmigueelbeer

              Re: and basically impossible to test for.

              No, diesel-gate demonstrated that the US standard emissions test was not fit for purpose.

              Then why was code written to cheat the system and deployed worldwide?

              Ok, how about another example? What about 737MAX MCAS?

              Boeing told customers that the MCAS was safe and took the FAA for a "ride". What independent body verified this? The FAA.

              The secret is out: Boeing staff "seconded" to the FAA are certifying their own work.

              I will repeat what I said previously: Independent bodies (plural) need to verify what the manufacturer claims.

              1. Roland6 Silver badge

                Re: and basically impossible to test for.

                >Then why was code written to cheat the system and deployed worldwide?

                Well the investigation revealed the two main reasons:

                1. The mandated tests could be readily cheated and much of the code already existed, so this was a quick and cheap option.

                2. VW needed a diesel engine ready to satisfy marketing demands, but didn't have one that could pass the mandated tests without cheating.

                >What about 737MAX MCAS?

                The two big problems here would seem to be:

                1. As you have already identified: "Boeing staff "seconded" to the FAA are certifying their own work."

                2. The independent body was placing too much trust in the manufacturer's testing claims - so everyone got complacent.

                So I would suggest that having "independent bodies", whilst necessary, isn't in itself sufficient, especially when dealing with safety-critical systems.

        2. John Smith 19 Gold badge
          FAIL

          "just that no one really thought about the real-world conditions"

          Because in Germany the regulation of mains frequency never gets that bad?

          OTOH if working below that limit was in the UK rail spec (which you'd expect suppliers would have glanced at as they might be tested for compliance with it) it should have stayed working.

      3. Len
        Headmaster

        Re: and basically impossible to test for.

        As not everyone is a train geek like me: Velim is an amazing facility to test trains at speeds of up to 230 km/h under all sorts of power and signalling systems. Want to know how your new train design holds up after 30,000 km? Just let it run circles on the outer circuit for a few hundred hours...

        Velim railway test circuit

        1. John Brown (no body) Silver badge

          Re: and basically impossible to test for.

          "under all sorts of power and signalling systems. "

          Someone forgot to test the combination of new Azumas and the older signalling and track-side systems on the northern part of the East Coast mainline though!

          1. hoola Silver badge

            Re: and basically impossible to test for.

            And how about the total morons who bought the trains and then discovered that the coaches were longer than standard for the UK so they would not go through Bristol Temple Meads.

            Perhaps if we had trains designed in the UK they might work. Of course we cannot do that, as stupid rules have meant that UK companies were essentially barred from bidding on the contracts. We used to have some of the best train designers and builders anywhere, but continual political stupidity has destroyed it (along with most of our innovative industry... GKN, Hawker, Cobham, and the list goes on).

            The APT with its tilting coaches was years ahead of anyone else; yes, it had some problems, but the press slated it saying that it made them sick, ignoring the fact they had all been on the piss and were either still drunk or hungover. The technology was cutting edge and then just dumped before being sold to Fiat for bugger all. It then comes back as the Virgin Pendolino trains and suddenly it is the best thing out there.

            The HST or Intercity 125 might be old now but they are still better to travel on than the stupid Meridian/Voyager units with a bus engine rattling away under every coach. There is no way that the modern stuff will ever run for as long as the HST.

            Like everything, technology appears to be used as a cheap way to add functionality because software is cheaper to update than hardware. The trouble is that the quality control and testing of software is totally shite and nobody ever accepts responsibility that something is wrong.

            Another example of technology gone mad is on the Cambrian Coast line: there has been some new (about 10 years old or older, that sort of new) signalling system that was supposed to enable more trains to be run. There are still only trains every 2 hours, and recently there were some incidents of speeding because the correct speed limit information had not been fed into the cab for the drivers. What the hell is wrong with a sign at the side of the track? It cannot display the wrong information, although it might fall over or the driver might not see it. Just how we can even consider driverless trains and cars when these sorts of basic information cannot be got right, I just don't know.

            That is better, end of rant, time for some more caffeine.

            1. The Commenter formally known as Matt
              Boffin

              Re: and basically impossible to test for.

              "It then comes back as the Virgin Pendelino trains and suddenly it is the best thing out their."

              Been on them a couple of times (sober and not hungover, thank you very much) and they make me really queasy; a quick google shows they still make lots of people sick and are pretty much despised.

              1. MJI Silver badge

                Re: and basically impossible to test for.

                According to someone I chatted to the Pendodildo system is not as advanced as the APT system.

                He was caught checking the tilt systems over in a Pendodildo.

                A previous job title of his was Tilt System Development Engineer APT-E.

                1. Stuclark

                  Re: and basically impossible to test for.

                  I know exactly who you were talking to, and yes, he has said that (and proven it) on record.

                  The lie though is that APT tech was sold to Fiat - it wasn't, which is probably why the Pendolino tilt system isn't as good as APT tilt was (and is - you can still experience it on a real APT in Crewe).

                  1. MJI Silver badge

                    Re: and basically impossible to test for.

                    It went to Bombardier I think.

                    The Pendodildo system is lineside sensor and screw jack.

            2. Anonymous Coward
              Anonymous Coward

              Re: and basically impossible to test for.

              Stand by for the same chaos on the tube when integrated train operation becomes the norm. Where once *Station to Station* working could be operated to keep the system running - basically an agreed section of road is operated on a one-train-at-a-time basis, with movements agreed via a closed circuit telephone line between an operating official at each of the affected stations - now we are likely to have all the spiffy shiny new trains sitting down and refusing to move because computer says NO! Nearly as dumb a move as decommissioning the tube's own power generation and relying solely on the National Grid - at least before, they had the grid as a back-up.

              And don't get me started on the over-complicated electronics. It's a Rapid Transport system, it needs minimal creature comforts...

        2. Anonymous Coward
          Anonymous Coward

          Re: and basically impossible to test for.

          "[...] facility to test trains at speeds of up to 230 km/h [...]"

          For an instant there I wondered what sort of fixed rolling road test rig could handle that.

          Do they have an 00 gauge facility? Several second-hand purchases make annoying noises - but only when running on the track. Someone must have devised an 00 gauge rolling test rig - or a dynamic track analyser.

          1. Martin an gof Silver badge

            Re: and basically impossible to test for.

            Someone must have devised an 00 gauge rolling test rig

            They certainly have though the analysis might require you to devise something yourself :-)

            M.

        3. MJI Silver badge

          Re: and basically impossible to test for.

          It's a giant train set!

          Brilliant!

          Did they use Setrack?

      4. Stuart Moore

        Re: and basically impossible to test for.

        I wonder if the different software versions are a hint here. On the test track it had the old version, and the driver needed to do a manual reset... fair enough. No one re-tested when the new version went out...

      5. vogon00

        Re: and basically impossible to test for.

        "That's what the Test Track at Velim is for"....methinks that modern managers, project managers and beancounters need reminding of this.

        My experience lately is that when folk of their ilk hear 'apply all the edge cases', what they hear is 'blahhhhh blahhhhh blahhhhhh un-necessary time and cost' and promptly chop those 'edge-ish' bits out of the test and/or approvals plan.

        I'm lucky enough to have had a diverse and very enjoyable career testing things for a living, in an organisation where you were *expected* to try and break the thing you were working on (within sensible limits)... the rationale being that if it went wrong in-service, it would be seriously inconvenient for users, if not downright Goddamn dangerous (think national-scale 'phone infrastructure - no 112/911 service = big problems!).

        We were well paid to have a negative attitude towards 'Product Whatever' in those days - actually a realistic attitude from a Systems point of view - which was endorsed by the C-Suite as necessary for product quality. The attitude was that if Joe Public doesn't have an issue then we've done the job right.

        Most of the time, we would end up fixing an issue, even the 'Very Low Probability, Medium Impact' ones on the (proven!) assumption that if Mr. Sod can stuff it up, he will.

        With this modern 'continuous delivery' way of working, I find the 'edge' cases get ignored as a fix is seen as 'only a software update away' - no matter that the poor sap trying to use the thing has to wait weeks/months/forever for a fix.

        Nobody wants to take the time and trouble to create a robust product any more, and it's hard to take pride in your work (About 50% of my output at the moment is crap, because 'timescales' and workload).

        The world is increasingly run/managed by people who have absolutely no idea of the technicalities and complexity of modern systems.

        Here endeth this rant.

        1. yoganmahew

          Re: and basically impossible to test for.

          Absolutely @vogon,

          here continueth the rant:

          and even if the edge-case tests are run, when the inevitable design errors are found, they are ignored because "we're too close to cutover and anyway that's an edge case". Fast forward two years and the product no longer works for so many edge cases that happen daily (when you hit volume, nothing is edge) and the agile prophet has moved on to leading the design of another key piece of product.

          Never mind not having any idea, they don't care that it isn't going to work.

          WAD has become WBAD - Working Badly As Designed.

          1. Anonymous Coward
            Anonymous Coward

            Re: and basically impossible to test for.

            I spent several years in the field trouble-shooting a new IT product. What I called boundary cases often had many different root causes that produced apparently the same symptoms.

            I was once drafted into a development team to assist with their testing. It turned out that many of them pressed a few obvious keys - and that was their testing done. My application of tests for obvious boundary conditions often broke their code. The customer users then came up with many less predictable boundary conditions for failures - usually timing interactions.

            One of my favourite tests was to backspace repeatedly after typing a few characters - and see if they stopped at the start of the buffer.
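
            In miniature, the sort of check I mean (Python here purely for illustration; LineBuffer is a stand-in for whatever input handling is under test):

            class LineBuffer:
                def __init__(self):
                    self.chars = []
                def type(self, s):
                    self.chars.extend(s)
                def backspace(self):
                    if self.chars:            # the guard that flaky code usually forgets
                        self.chars.pop()

            def test_backspace_stops_at_start():
                buf = LineBuffer()
                buf.type("abc")
                for _ in range(10):           # far more backspaces than characters
                    buf.backspace()
                assert buf.chars == []        # no exception, no negative index, no wrap-around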

            1. Gene Cash Silver badge

              Re: and basically impossible to test for.

              > My application of tests for obvious boundary conditions

              "STOP BREAKING OUR CODE!"

              "Um, that's what testing is **for**!"

            2. Amos1

              Re: and basically impossible to test for.

              My favorite was to enter an Alt+255 sequence on the numeric keypad while typing in an app's field or free form text field. That enters a NULL character. I used to use it in my passwords instead of a space.

              I once did that on a new database app and it was unrecoverable for some reason. I mean unrecoverable as in they had to restore the database from a backup. Once their app tried to read the high ASCII it barfed all over itself. Fun times. :)

              1. John Smith 19 Gold badge
                Thumb Up

                "Once their app tried to read the high ASCII it barfed all over itself. Fun times. :)"

                Excellent work.

                This is proper Black Team level testing. Thumbs up.

                In the 2nd decade of the 21st century (IOW this IT s**t has been going for a while now) any developer should be reading "user supplied input" as

                DANGER! User supplied input. Expect anything. Trust nothing. Suspect everything until proved safe.

                But apparently not. :-(

          2. Doctor Syntax Silver badge

            Re: and basically impossible to test for.

            "WAD has become WBAD"

            Or the difference between "just works" and "only just works".

      6. batfink

        Re: and basically impossible to test for.

        Ah! I hadn't realised there was actually a proper full-size test rig. I sit corrected.

    2. richardcox13

      Re: Bugger

      > basically impossible to test for.

      Not really.

      To test the hardware, surely trains are tested not on the "production" network, but on an isolated test line.

      Of course that costs...

      1. Jeffrey Nonken

        Re: Bugger

        "Of course that costs..."

        ...far less than live testing would.

        1. Steve Davies 3 Silver badge

          Re: Live Testing

          When a Design has passed the sort of testing that is done at Velim, it is tested on the real UK network, mostly in the middle of the night, so that when - as they surely will - things go wrong, they don't interfere with normal operations.

          None of that would catch the situation that happened with the Thameslink Class 700s because there was no test in the schedule for it.

          Having written more than my fair share of System Test procedures in the past, I was always under pressure to cut down the number of tests. "It is taking too [redacted] long. Can't you cut half of them out so we can get signoff this quarter?"

          I was even threatened with the sack if I rejected this pressure. I stood my ground and we did the tests as per the spec. On this system that took 4 days and a cast of more than 1000 people. You don't want the thing to crash at peak times now do you.

          We found a number of P1 problems with the stress test, which was one of the very tests I was under pressure to drop for the sake of the Q2 figures. The Q4 ones looked good though, and it was on time and budget. The PHBs who wanted me to drop the tests wanted the kudos of bringing it in early. If we had done that then we'd have been in the sh1t up to our necks at the next holiday period.

          As has been said, many managers (especially those who are MBAs) don't really have a grasp on how complex and interconnected modern systems are. Boeing is a clear case of that. The MCAS should never have been approved. It was a device to cut corners. That worked out well, didn't it...????

          1. Roland6 Silver badge

            Re: Live Testing

            >As has been said, many managers (especially those who are MBA's) don't really have a grasp on how complex and interconnected modern systems are. Boeing is a clear case of that. The MCAS should never have been approved.

            Looks like many didn't take the hint from NASA's very public and tragic cock-up. Probably thought it didn't have any applicability to them...

          2. low_resolution_foxxes

            Re: Live Testing

            The really effing annoying thing about the Boeing MCAS software glitch is that testing was done, testing highlighted that the MCAS was glitchy, and it raised the requirement for dual sensors and an alarm monitoring it, which did not happen. They were developing a software fix before the first crash, yet they still allowed it to happen twice because Airbus were pounding them commercially.

    3. Anonymous Coward
      Anonymous Coward

      Re: Bugger

      The technicians are only interested in performance these days, not items that will bugger performance.

      1. not.known@this.address

        Re: Bugger

        "The technicians are only interested in performance these days, not items that will bugger performance."

        Speak for yourself, AC. Technicians want stuff working as it should - which is preferably as the designer needs/wants* it to, not as some asshat in an office** thinks it should. You seem to be confusing Marketing and Accounting with technical teams.

        *Always assuming a realistic and achievable requirement and design.

        **often now including headline-grabbing politicians, neo-luddites (sorry, "green" campaigners) and the "won't somebody think of the children?!" brigade as often as marketing drones and Realworld-challenged management.

    4. Dave 15

      Re: Bugger

      I would have thought trivial to test.

      I would also have suggested that it is a stupid requirement. Sensibly designed, I refuse to believe that restarting a train would cause electrocution for anyone on board, however damaged individual components are... if you can show that some damage could lead to a short from power to the passengers and via them to earth, I think you need to have a little redesign of the hardware, not some horrendous and ill-conceived software hack.

      Leaving passengers stranded for hours is not a good idea. Recently had the same problem on a German train - the overhead wire came down - after 4 hours the emergency lighting had failed, and eventually rescue involved several fire crews with torches and floodlights, a spare train, and stopping all other lines nearby - a total palaver. It somehow reminds me that a few years ago it required Tornado... a new steam engine... to travel around Kent rescuing new-fangled electric trains from the snow.

      1. Martin an gof Silver badge

        Re: Bugger

        Restarting a stopped train is probably more a danger to other people / trains on the tracks than to the passengers. I'm sure there are very strict rules regarding getting clearance, but you don't want to endanger any workers on the track or risk running into the back of a stranded train that hasn't been able to restart.

        In the case of "line down", there's a possibility that the lines are still live, so you wouldn't want to drive through them. This is interesting as the new trains for the East Coast Main Line - Type 801 "electric multiple unit" trains based on the same Hitachi design as the 800 and 802 "bi mode" trains are also - technically - bi mode, as they have small Diesel engines "for emergency use", implying that in the event of a power failure they can rescue themselves.

        The ECML seems to have more than its fair share of problems, and there are (were) even engines known as Thunderbirds specifically kept aside for rescue work.

        As for Tornado, here's the story from the people themselves (twice) and here's the news from the BBC.

        M.

      2. This post has been deleted by its author

      3. Anonymous Coward
        Anonymous Coward

        Re: Bugger

        The idea, given that real electric trains aren't running on 12 volts, that there are circumstances where it's better not to turn the thing back on again after a failure seems eminently sensible to me.

        As usual we get the typical El Reg 'I would have designed it better' stuff. In any system there will always be 'edge' cases and you can never test them all (short of a literally infinite budget of time and money). So you always have to think 'what do I do if we go over the edge?', and programming 'STOP' seems quite sensible on a train. So actually you are arguing over decimal points. Maybe it shouldn't have stopped at 49 Hz? How about 48.5 Hz? Or 48.49 Hz?

      4. the hatter

        Re: Bugger

        Think of it like needing a manager to approve certain refunds in a shop. It's an unusual enough circumstance, something that shouldn't really happen, and thus they probably aren't going to be certain why it happened. Better to just cause a bit of inconvenience, and let someone on a better pay grade make the decision about the best way to proceed. Surely if you're in this business, you know users who click dialogues and warnings to continue, without considering why they are clicking them. Also I'd hope you're familiar with situations happening that 'can't' happen, and what look like obvious problems which aren't fixed by (repeatedly) making the obvious fix. All becomes routine, then something catastrophic happens because they weren't thoroughly investigated.

        For once, the cause of it reading a low frequency was exactly what you'd expect, but the network running at that frequency 'shouldn't happen'. I'd guess the power train, which needs to be very efficient, may well be massively damaged if drivers kept pushing the restart button, which would strand the passengers for longer, quite aside from operating such a high-energy system out of spec.

  2. Andy Non Silver badge
    Coat

    "technicians with laptops had to be dispatched to 22 stranded trains."

    Which took longer than expected as there were no trains running.

    1. Yet Another Anonymous coward Silver badge

      Don't you just want them to have been given handcarts

      1. John Sager

        Ah yes! Another one ticked off the bucket list. I actually operated one of those on a short track at a railway museum in Carson City, NV. Shades of Blazing Saddles...

  3. Will Godfrey Silver badge
    Unhappy

    50 50

    Neither Thameslink nor Siemens had done their homework. I'm not really surprised by Siemens, but can (dimly) remember a time when British Rail was red-hot on details - even though standards were arguably less stringent.

    1. Anonymous Coward
      Anonymous Coward

      Re: 50 50

      Yes....the good old days...

      *shudders in horror*

      I suspect that while British Rail may have had more clearly defined standards, the ability of the power grid to meet those standards was less, at least before the addition of the CCGTs in the '80s/'90s.

      Getting used to stable power (in terms of delivery, quantity, frequency, etc.) is a nice problem to have...

      1. Graham Dawson Silver badge

        Re: 50 50

        That stability is already a thing of the past. Regardless of one's opinions on wind power, the functional capacity being installed (or already present) is not nearly enough to replace the capacity that is due to be retired, or is already being retired, in conventional generators. There's simply not enough redundancy in the system. On top of that the grid itself has become... shall we say, less than optimal in terms of transmission capacity and maintenance.

        1. John Brown (no body) Silver badge

          Re: 50 50

          Don't worry about it, we have all those interconnects to the EU and all the huge spare generating capacity that won't have any import duties applied when our great and fantastic trade deal is in place </sarc>

          1. Yet Another Anonymous coward Silver badge

            Re: 50 50

            We don't need that French electricity that probably smells of garlic. We can have proper feet and inches electricity from the colonies

            1. Graham Dawson Silver badge

              Re: 50 50

              Should be measured in pounds per square inch, fluid ounces per second, and miniature horsepower (like regular horsepower, but smaller to fit down the wires) if you want imperial analogies.

            2. Rich 11 Silver badge

              Re: 50 50

              I, for one, am looking forward to the heady days when cheap chlorinated electricity will be shipped directly to us by the friendly orange man across the Atlantic.

          2. Graham Dawson Silver badge
            Pint

            Re: 50 50

            The French interconnect, which is where we draw most of our HVDC, is already being put under pressure by increasing mandates for "green" power in France (reducing grid reliability) and the retirement of nuclear generators there. It would have been unlikely to remain a net inward flow for much longer, regardless of our membership of the EU.

            That interconnect gave the politicos a nice buffer to ignore the growing problems of the generator grid in this country. It going away means they can't hide from their mistakes quite so easily.

            1. Roland6 Silver badge

              Re: 50 50

              > It going away means they can't hide from their mistakes quite so easily.

              Ha ha! It will be spun as yet another example of "the EU" punishing the UK...

    2. Nugry Horace

      Re: 50 50

      British Rail's own stock for the Thameslink line -- class 319 -- had their own teething problems. The one I remember reading about was that resetting after an overload would cause the pantograph to go up, resulting in some unwelcome consequences when it happened south of the river.

    3. MJI Silver badge

      Re: 50 50

      I saw all of them!

      One of my favourite locos!

  4. Ryan 7
    Boffin

    Class 700 and Class 717 are from the new-generation "Desiro City" family, not the late-'00s "Desiro".

  5. beast666

    When the wind blows

    Or not, as the case may be.

  6. Phil O'Sophical Silver badge

    Load shedding?

    In response to this drop, Hornsea One and Little Barford disconnected from the grid to protect their equipment.

    Wasn't that sort of reaction also a cause of the big US east coast blackout years ago? Aren't managed grids supposed to shed load in such circumstances, not self-disconnect generating capacity (which will only make the situation worse)?

    1. dinsdale54

      Re: Load shedding?

      Yup, pretty much.

      https://en.wikipedia.org/wiki/Northeast_blackout_of_1965

    2. Michael Wojcik Silver badge

      Re: Load shedding?

      The 2003 US Northeast blackout also involved generation capacity going offline to prevent damage, though that wasn't the initial or major cause.

      '03 was an interesting one because it involved a number of failures of different classes, from tree limbs on the lines to a race condition in controller software. I still remember the event well; I was at the Stately Manor in Michigan at the time (not an ideal place to be without power in August, but a hell of a lot better than, say, southern Ohio).

      There were a number of repercussions from the event outside the power industry. For example, Michigan passed a law requiring municipalities to have at least a 3-day supply of water (at normal usage rates) without pumping, so the small city I live in - and many others - had to build another water tower. Water isn't usually an issue in Michigan (which after all is a pair of low peninsulas into one of the largest bodies of fresh water on the planet, and gets a whole bunch of precipitation in the bargain), so even a small water emergency was an eye-opener for many people. It may have helped fuel the ongoing opposition to Nestle's water-theft program.

      1. Amos1

        Re: Load shedding?

        Let's not forget that FirstEnergy had a direct, unfirewalled T-1 connection into their internal network from a vendor who got a Blaster infection. Blaster spread across the T-1 to the FirstEnergy internal network and clobbered a bunch of systems, including monitoring systems.

        At the time I worked in the city next door to Akron, OH, where FirstEnergy's headquarters were located. When our data center cut over to UPS and the entire company headquarters went dark, we hurriedly patched what we could for Blaster. Yes, we had our own Blaster infection going on because our non-technical CIO banned most patching because it "interfered with getting work done."

        Why "what we could"? Because our non-technical CIO thought a 15-minute UPS capacity was good enough for anyone and we had no generator despite being multi-national corporation hosting SAP for worldwide operations. Four months later we had a generator.

        He would not let us buy LCD monitors or KVMs because they were too expensive. So we had a bunch of old CRT monitors on all of the servers. Turning those monitors off bought us another five minutes of battery power.

    3. Anonymous Coward
      Anonymous Coward

      Re: Load shedding?

      Well, yes, but this problem was essentially down to software.

      The initial lightning strikes on the Barford-Little Wymondley transmission line caused a trip; these systems reset and were back on line in about 20 seconds, and all should then have been well. However, the Hornsea wind farm's connector saw microsecond transients caused by the initial trip and reset, threw its toys out of the pram, and sulked. Little Barford went off line in various stages, and the 1.8GW of generation capacity that tripped out in total then caused the final load shed to prevent catastrophic frequency drops in the whole grid.

      Hornsea have re-configured their system to be less pernickety, and Newcastle Airport realised that they were not a protected customer and quickly asked to be so on the Monday after this happened.

      Old-fashioned steam-age electrical stuff is quite happy with things that take a few seconds to stabilise; clever, fast-sampling new wonderful stuff needs analogue and digital filtering to take account of transients, and people who can count to type the correct limit frequencies into the traction control systems.
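
      The filtering point is basically just debouncing. Something along these lines - figures plucked out of the air, purely to show the shape of it - don't act on a single fast sample, require the measurement to stay below the limit for a hold time first:

      from collections import deque

      class UnderFrequencyRelay:
          def __init__(self, limit_hz=49.0, hold_s=0.5, sample_rate_hz=1000):
              self.limit_hz = limit_hz
              self.window = deque(maxlen=int(hold_s * sample_rate_hz))

          def sample(self, frequency_hz):
              """Feed one measurement; only report a dip once it has persisted for the hold time."""
              self.window.append(frequency_hz)
              if len(self.window) < self.window.maxlen:
                  return False                                      # not enough history yet
              return all(f < self.limit_hz for f in self.window)    # brief transients never get this far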

      1. johnfbw

        Re: Load shedding?

        Should airports really be protected customers? The only safety-critical aspect is air traffic control (definitely safety-critical - but not always at airports); runways are only partially so - they should have generators for emergencies to keep the lights on for low-on-fuel aircraft (only); otherwise they are only buildings with lots of shops. Doubt you are allowed to land at an airport with no power under normal circumstances.

        1. Richard 12 Silver badge

          Re: Load shedding?

          Large public building full of thousands of people? Definitely needs to be protected.

          Especially as hundreds of those people can't legally leave either, as they're airside and checking that many passports, in the dark with no computers is...

        2. cdrcat

          Re: Load shedding?

          Presumably avoiding domino effects throughout the country is a good idea.

          Presumably some of the engineers to reset the trains took flights?

      2. Jellied Eel Silver badge

        Re: Load shedding? Embedded generation

        ..and the 1.8GW of generation capacity that tripped out in total then caused the final load shed to prevent catastrophic frequency drops in the whole grid.

        .. and the 2-3GW of 'embedded' generation that was also shed. I'm still pondering the E3C report, but that covers a lot of generating capacity ranging from domestic installs to mid/large commercial wind & solar. I'm assuming that was also due to the frequency drop, but it's something the system operator(s) can't manage very effectively. Not sure if the operator(s) are being told to check their frequency tolerances, but in our push for economic suicide.. I mean 'renewable' generation, as embedded generation increases, so does the potential risk of future supply disruption.

    4. Anonymous Coward
      Anonymous Coward

      Re: Load shedding?

      Same on the West Coast in 1999(?)... took out power from parts of western Canada down to Mexico and across as far as Texas. I lived in Cupertino between 1998 and 2000 and had more power outages there in that time than I've probably had in the subsequent 20 years back in the UK.

      1. Yet Another Anonymous coward Silver badge

        Re: Load shedding?

        Capitalism, any spare capacity is a waste of money - you want to have slightly less generating capacity than you can be sure you will sell. Having spare unused capacity that the government can call on is socialist

        1. Jr4162

          Re: Load shedding?

          With generating stations, the load and the power output have to match. You can't have more power on the grid than needed. If there is a surprise spike in demand, the grid managers can lower voltage temporarily while additional capacity is brought online or shed loads.

    5. swm

      Re: Load shedding?

      I was at college in New Hampshire when the blackout hit. As I remember Consolidated Edison was begging power companies not to disconnect in an effort to stabilize the grid. Granite State Electric finally had enough of this and disconnected later that evening. The most scary thing I remember was the street lights pulsing at about 1 cycle/second as the generators were slipping phase.

      The college had their own generator that could power about 1/4th of the campus. When power was lost it was decided to power the dinner hall until dinner was over then switch to the dormitories. The food hall was begging for electricity for their refrigerators but it was pointed out that the food would last just fine over night.

      The computer center (running time sharing) was on the same feed as the dinner hall so we got power back almost immediately. The field engineer came in ready to power up the computers but I stopped him until we ascertained what was going on. When we discovered that the East coast was without power we called it a day. (We lost power about 1/2 hour later when the feed to the dining hall was killed.) People in California were dialling in to the system and wondering why we were not up. I told them to read about it in the papers the next day.

      It took about a week for power to become stabilized.

      1. Martin an gof Silver badge

        Re: Load shedding?

        It took about a week for power to become stabilized.

        By which count the problems last August in the UK were completely insignificant.

        The thing which has struck me about the coverage is that the National Grid operated (more-or-less) as designed, and despite the loss of nearly twice the amount of generating capacity that had been planned for, the thing was back at a stable frequency within about five minutes, and loads were being reconnected shortly after that. We have longer outages than that at home on a regular basis and it is no big deal.

        Pretty much all the panic was because of the knock-on effects on the trains, and the causes of those - because power was never disconnected from the rail network - were purely and simply down to all those new-fangled engines switching off when they shouldn't have, and staying switched off when they should have been re-bootable by the driver.

        All the initial blame was heaped on the National Grid, and since then although the train problems have been reported, I haven't seen any reporting which attempts to address this issue. I suppose "a few trains broke" just isn't as much of a story as "nationwide blackout".

        M.

  7. jamesdagger

    Passing the bill for the redress, Ofgem commented "It hertz to be you".

    1. Laura Kerr

      Eh? You watt?

      1. The Oncoming Scorn Silver badge
        Coat

        Ohm my God

        1. Will Godfrey Silver badge
          Happy

          Must... resist...

          Damn!

          1. Arthur the cat Silver badge

            By induction there should be another comment.

            1. STOP_FORTH Silver badge

              There appears to be a certain amount of reluctance to join in.

            2. bazza Silver badge

              No one should react to that...

          2. Anonymous Coward
            Anonymous Coward

            Resistance is futile

            1. Ken Shabby Bronze badge
              Mushroom

              Hah

              The ampere strikes back!

        2. This post has been deleted by its author

    2. Michael Wojcik Silver badge

      There's always some bright spark who has to throw the switch on the pun fight.

      1. Inventor of the Marmite Laser Silver badge

        Just conduct yourself properly. That's all.

        1. TRT

          I lack that capacity.

    3. NetBlackOps

      That's El Reg. Come for the articles, stay for the punishment.

      1. Phil O'Sophical Silver badge

        Just current practice.

      2. Sgt_Oddball
        Coat

        Happens with a high...

        Frequency round these parts...

        Shockingly bad I know.

        Mine's the one with the 19th edition regs in the pocket...

    4. Morrie Wyatt
      Coat

      With penalties like that

      You certainly wouldn't want it happening with any frequency.

    5. Gene Cash Silver badge

      All these bad punners should be grounded.

  8. TRT

    I told you, Gareth.

    Software. Glad the details came out in the wash.

    1. John Brown (no body) Silver badge

      Re: I told you, Gareth.

      At least it wasn't DNS.

  9. This post has been deleted by its author

  10. Mike Shepherd
    Meh

    Progress

    Victorian train engineers had no electronics (no thermionic triodes, no semiconductors, let alone the power semiconductors used in trains today). They had no computers to help design or to run their trains, little of our nearly 200 years experience of railways, no software for simulation, no software on board or to help manage train movements. Yet they could get their trains running again faster than we can now, in the 21st century? Maybe ours should carry a load of coal for backup to these "advanced" systems that can become confused.

    1. Graham Dawson Silver badge

      Re: Progress

      There have been a number of occasions on which Tornado, and other steam locos, were used to rescue passengers on modern equipment that was stranded by cold weather, snow and electrical failure.

      Other side-effects of modernity: old locomotives were much heavier, and so tended to wear down the rail head more rapidly than modern locos. This might seem like a bad thing, but one of the side-effects was that stress microcracks in the rail surface didn't have a chance to expand before they were ground out by the loco wheels, meaning that tracks were less likely to catastrophically fail under load. Nowadays these microcracks have to be scanned for, and scoured away, by a special train that runs up and down the entire network.

      On the other hand, a badly stoked steam loco could occasionally just go bang and kill a bunch of people. So there's that.

      1. Anonymous Coward
        Anonymous Coward

        Re: Progress

        "On the other hand, a badly stoked steam loco could occasionally just go bang and kill a bunch of people. So there's that."

        In some cases the train driver had become annoyed at losing steam through venting of the safety valve - so they tied/weighted it down to fix the "problem".

        1. Mike Shepherd
          Meh

          Re: Progress

          As Rolt (1955) observed of an event in 1850, one man (annoyed by the prolonged noise of escaping steam) ...silenced it by the simple expedient of screwing down the safety valves. By a kind of rough justice the costive locomotive retaliated by slicing off one of his unduly sensitive ears with a flying fragment.

          1. paulf
            Alert

            Re: Progress

            I seem to recall my steam traction inspector telling me of a more pragmatic solution to steam crews that allowed their loco to blow off (allowing the pressure to get too high so that the safety valve vents excess pressure). In the BR days they, as firemen, were fined if they blew off in a station. Same if they produced black smoke.

            When I was learning to fire I was taught that every minute the safety valve is blowing off costs 10 lb of coal and 10 gallons (imp) of water (~5kg and 46L). That helps focus the mind!

            1. Anonymous Coward
              Anonymous Coward

              Re: Progress

              “... were fined if they blew off in a station...”

              You’d be up before a judge for that nowadays. So a friend tells me.

            2. Jamie Jones Silver badge

              Re: Progress

              If they blow off where they shouldn't, why don't they just blame the dog, like I do?

            3. The Real Tony Smith

              Re: Progress

              '...When I was learning to fire I was taught that every minute the safety valve is blowing off costs 10 lb of coal and 10 gallons (imp) of water (~5kg and 46L). That helps focus the mind!...'

              I remember being told that black smoke and a blowing off safety was a waste of money..... but also the sign of a bloody good fireman!

      2. paulf
        Flame

        Re: Progress

        @Graham Dawson, "old locomotives [...] tended to wear down the rail head more rapidly than modern locos"

        This might be great for preventing rail cracks, but it also suggests the rail wears out much quicker than it does now, so it requires much more infrastructure work to maintain the railhead in a usable condition. This will be a lot more expensive than paying for the Flying Banana New Measurement Train to scan the rails on a regular basis, plus any required grinding. Those infrastructure works usually require possessions (complete line closures) at night or at weekends (big works tend to require closure for several weeks), so, yes, in some respects heavier trains are very much a bad thing. Infrastructure is usually more costly to replace than the bits on the train that make contact with it - this is why wheel sets are made from softer steel than the rail (so the wheel wears out quicker than the rail) and pantographs connect to the contact wire of the OLE with a graphite block (which again wears out quicker than the contact wire).

        Weight is important - Regional Railways introduced lighter stock in the 1980s/90s for two clear reasons: lighter trains cause less wear overall on the infrastructure, even when you include extra monitoring (with the added benefit of lower fuel consumption), and they also have faster acceleration with the same size engine, which means stopping at a station doesn't take as long to complete (which improves line capacity).

        Steam may have a nice romantic feel but when I go to get a steam loco ready for service it usually takes at least 2 people 2-3 hours from the point you start putting in the fire to it being ready to go off shed (plus an hour before that removing the previous fire, and cleaning the smokebox+ashpan). On a diesel loco or DMU it takes one person just over half an hour to check the engine levels, then do a visual check after starting while getting air up for the brakes. It can then be driven by one person. It's easy to forget quite how filthy and labour intensive steam traction was!

        Unfortunately modern traction is suffering the same problem as cars - they are no longer mechanical machines and are now computers on wheels!

      3. Stuclark

        Re: Progress

        "old locomotives ... tended to wear down the rail head more rapidly ..."

        Guess which BR research project finally solved excess rail head wear?

        ... Oh yeah, that will have been the APT project!

    2. fidodogbreath

      Re: Progress

      Maybe ours should carry a load of coal for backup to these "advanced" systems that can become confused.

      To get the steam pressure back up in the CPUs?

    3. hammarbtyp

      Re: Progress

      "Yet they could get their trains running again faster than we can now, in the 21st century?"

      Steam trains are incredibly time-consuming to get going. Apart from the effort in cleaning ashes, it takes a long time to bring steam up to pressure. Nor were they particularly reliable, and when they went wrong, it tended to be a catastrophic failure.

      It's easy to look at the age of steam through rose-tinted glasses, but steam locos are inefficient and labour-intensive, and most of the drivers who drove them for a living were glad to see the back of them.

      1. Mike Shepherd
        Meh

        Re: Progress

        You're right, of course: the only people who look at steam locomotives with nostalgia are those who didn't have to operate them. But we're talking about how long it takes to re-start an engine that has no fault. Solving that doesn't need reversion to steam power, as you suggest. It needs more careful thought in design. In that, we seem to have gone backwards ("...371 trains to be cancelled, 220 to be part-cancelled and 873 to be delayed").

      2. Gene Cash Silver badge

        Re: Progress

        > Steam trains are incredibly time consuming to get going

        I watched Jay Leno fire up one of his steam cars once. Holy o'crap.

        As he said, "you're not getting your pregnant wife to the hospital to give birth in time"

        OTOH, 90% of it was "wait-until-then-do" procedure that could have been handled by a Raspberry Pi and some decent Python scripts.
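
        For what it's worth, most of that "wait-until-then-do" really is only a few lines of Python. A minimal sketch - the boiler object and its methods are hypothetical stand-ins, not any real steam-car interface:

        import time

        def wait_until(condition, poll_s=1.0):
            """Block until condition() returns True, checking every poll_s seconds."""
            while not condition():
                time.sleep(poll_s)

        def light_up(boiler):
            # 'boiler' is a hypothetical object wrapping the sensors and valves.
            boiler.open_fuel_valve()
            boiler.ignite_pilot()
            wait_until(lambda: boiler.pilot_lit())

            boiler.open_main_burner()
            wait_until(lambda: boiler.water_temp_c() >= 100)

            wait_until(lambda: boiler.pressure_psi() >= 200)  # assumed working pressure
            boiler.close_bypass()
            print("Ready to move off")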

        1. Anonymous Coward
          Anonymous Coward

          Re: Progress

          "I watched Jay Leno fire up one of his steam cars once."

          How was the steam produced? I thought that a "flash" boiler could produce steam very quickly - probably similar to modern instant-flow water heaters.

  11. Anonymous Coward
    IT Angle

    A feature, not a bug?

    Regardless of whether Siemens got the specs wrong or why nobody caught it, the first question should be why the update made the lockout permanent and why this either wasn't documented by Siemens or why it wasn't noticed by those who should be reviewing the update. It's the same philosophy as Boeing's decision to make the stall system in the 737 Max practically impossible to override. Programmers have always thought that they know better than users (I know I did) but locking the user out is not the answer.

    1. John Brown (no body) Silver badge

      Re: A feature, not a bug?

      From the article: "Permanent lock-outs, the ORR report explained, are a safety feature to prevent drivers from re-electrifying damaged components and exposing people to the risk of electric shock."

      Of course, what the real risks are and whether this is just overzealous precaution is another matter.

    2. Ian Johnston Silver badge

      Re: A feature, not a bug?

      Regardless of whether Siemens got the specs wrong or why nobody caught it, the first question should be why the update made the lockout permanent and why this either wasn't documented by Siemens or why it wasn't noticed by those who should be reviewing the update.

      All clearly covered in the ORR review. Basically they didn't want drivers rebooting trains which had locked up for good reason, but didn't check the list of good reasons carefully enough.

  12. Pangasinan Philippines

    Compensation?

    When the mains frequency changes in either direction, is there an adjustment period to bring the average frequency back to normal?

    I mention this because I built a digital clock in 1972 (ish) from TTL logic chips (14 pin and 16 pin) all housed in a small die-cast box.

    The input frequency was mains derived through a Schmitt trigger.

    The hardest part was the reset from 12 to 1.
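
    Simulated in software, the divider chain (and that awkward 12-to-1 rollover) comes out something like this - a sketch of the counting logic only, nothing like the original TTL:

    # Sketch of a mains-locked clock's counter chain: 50 Hz in, hours/minutes/seconds out.
    # The awkward bit is that a 12-hour display wraps 12 -> 1, not 12 -> 0.

    MAINS_HZ = 50
    cycles = seconds = minutes = 0
    hours = 12

    def tick():
        """Call once per mains cycle, e.g. from a Schmitt-triggered input."""
        global cycles, seconds, minutes, hours
        cycles += 1
        if cycles == MAINS_HZ:               # divide-by-50 gives one second
            cycles = 0
            seconds += 1
            if seconds == 60:
                seconds = 0
                minutes += 1
                if minutes == 60:
                    minutes = 0
                    hours = 1 if hours == 12 else hours + 1   # the 12 -> 1 reset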

    1. Mike Shepherd
      Meh

      Re: Compensation?

      This is done in the small hours, when demand is low, so it's easier to increase the frequency (if an increase is what's needed) to achieve a total of 50 x 60 x 60 x 24 cycles over 24 hours. If cycles need to be "lost" (by decreasing the frequency), this corresponds to reduced power, so could (in theory) be done at any time without extra load on the generators.

      Perhaps other readers can provide more detail or correct me on this.
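
      As a rough worked example of the bookkeeping (the excursion figures here are invented purely for illustration):

      # Target for a mains-locked clock: 50 cycles/s for a whole day.
      NOMINAL_HZ = 50
      TARGET_CYCLES = NOMINAL_HZ * 60 * 60 * 24      # 4,320,000 cycles per day

      # Suppose the grid averaged 49.95 Hz for two hours over the evening peak:
      deficit = (NOMINAL_HZ - 49.95) * 2 * 3600      # 360 cycles lost

      # Make them up by running slightly fast for four hours in the small hours:
      catch_up_hz = NOMINAL_HZ + deficit / (4 * 3600)
      print(f"Cycles to recover: {deficit:.0f}")                  # 360
      print(f"Catch-up frequency needed: {catch_up_hz:.3f} Hz")   # ~50.025 Hz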

  13. Unep Eurobats
    Unhappy

    Tell me about it

    I got stuck in Peterborough (with bike). Already by the evening word was coming through that there were trains further along the line that needed an engineer to come out and reset them. Fortunately there was a Premier Inn and they let you keep your bike in your room.

    1. Shady

      Re: Tell me about it

      You have my sincerest sympathies.

      I decamped to do a six month contract in Peterborough a few years back. The place was so dull I drove over to Milton Keynes in the evenings just to find something to do.

      1. Martin an gof Silver badge

        Re: Tell me about it

        Peterborough has other attractions...

        M.

      2. veti Silver badge

        Re: Tell me about it

        Peterborough to Milton Keynes is over an hour's drive, according to Google. You could have got to Leicester or Cambridge in the same time, which makes Milton Keynes seem an odd choice.

  14. terrythetech
    Coat

    low frequency

    Sooo, low frequency power causes even lower frequency trains

  15. Loatesy

    Wrong type of trains on the track.

  16. Anonymous Coward
    Anonymous Coward

    Power outage

    The real problem is not the train but the inadequate power supply.

    As noted in the article, there had been 1000MW (1GW) of spare capacity, but the total generating capacity we have is only 40GW; i.e. the country is running on just a 2.5% margin. Anyone familiar with queueing theory knows that at least 30% is necessary for safety. It makes no difference whether we are talking about power generation, road capacity, hospital beds, checkout queues, or any other system that expects random load.
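
    As a quick sanity check on those numbers (taking the figures quoted above at face value):

    # Back-of-envelope reserve margin, using the figures quoted in this comment.
    spare_gw = 1.0       # reported spare capacity at the time of the incident
    capacity_gw = 40.0   # quoted total generating capacity

    print(f"Operating margin: {spare_gw / capacity_gw:.1%}")              # 2.5%
    print(f"Spare needed for a 30% margin: {0.30 * capacity_gw:.0f} GW")  # 12 GW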

    Looking ahead, if we are to convert all cars (not vans, not buses, not lorries) to electric we need an extra 28GW just to cover the mileage being driven today.

  17. jmch Silver badge

    Versioning??

    "Helpfully, the trains were halfway through a new software deployment. Version 3.25.x went into "temporary lock-out", which the driver could reset, something that seven drivers did successfully. Trains running v3.27.x entered "permanent lock-out" needing a techie with a laptop to reset them. "

    If the newer version required a techie fix and the older version could be reset by the driver, that should be "unhelpfully", as the 'newer' software was the one that required more work to fix. Or are they counting version numbers in reverse order?

  18. RM Mynez-Arefzlash

    Genuine question

    Why is it such a big deal to a train whether the frequency is 50Hz, 49Hz or any other value?

    1. Patched Out

      Re: Genuine question

      I'm not a locomotive expert, but do know something about power electronics, so I'm going out on a limb here - the electrical motors that drive the wheels are like electrical transformers. As the AC frequency goes down, the current (amperage) goes up to deliver the same amount of power (electrical or motive power). It is probably a protective measure to keep the motors from burning out.
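
      One idealised way to picture the stress, treating the traction kit as a textbook transformer (a big simplification, and the ratings below are made up for illustration):

      # Toy illustration: in an idealised transformer the peak core flux follows
      # the EMF equation V = 4.44 * f * N * phi, so flux rises as frequency falls.
      V_SUPPLY = 25_000.0   # 25 kV AC overhead line voltage
      TURNS = 500           # assumed primary turns
      F_NOMINAL = 50.0

      def peak_flux(v_rms, freq_hz, turns=TURNS):
          return v_rms / (4.44 * freq_hz * turns)

      for f in (50.0, 49.0, 48.8):
          rise = peak_flux(V_SUPPLY, f) / peak_flux(V_SUPPLY, F_NOMINAL) - 1
          print(f"{f:4.1f} Hz: peak flux up {rise:.1%} vs nominal")

      A couple of per cent doesn't sound like much, but magnetising current rises very steeply once a core gets near saturation, which is one reason under-frequency protection tends to be set so tightly.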

      1. RM Mynez-Arefzlash

        Re: Genuine question

        Thanks Patched Out. That does make some sort of sense. I need to think it through but you've given me a starting point.

  19. Anonymous Coward
    Anonymous Coward

    Operating reserve

    While National Grid have largely been absolved of blame here, one must question the quantity of reserve generation being held, which is a politically motivated choice regarding funding of balancing services not entirely in NG's control. The 1000MW reserve is a very long-standing quantity. The widespread disconnection of generation in response to a frequency excursion has been known as a risk for a very long time, and known to be growing, especially in line with the uptake of solar plant.

    Contrary to some comments above, available generation in the UK is well in excess of 40GW, so there is plenty of reserve available "right now" if we choose to run it. As demand increases for EVs, of course, generation has to increase too. There are tens of GW of generation applying to connect right now, though how much will actually be built is anyone's guess (see Westinghouse/Toshiba for examples of what happens to many generator applications).

    Holding more in reserve means more CO2 and "dead" costs while the network is behaving normally, so it is a political decision as to whether you want higher reliability and costs, versus more risk but generally lower costs for 9,999 days out of 10,000.

    Balancing services are already very, very expensive even at that historical level of reserve (over £1bn/year), and the real cost of running a distributed network is finally coming home to roost. The business model designed for big static generators isn't really compatible with distributed generation here, there and everywhere. Dieter Helm's cost of energy review did recommend going as far as a COMPLETE re-write of the rulebook, rather than the sticking-plasters stuck on top of sticking-plasters that we currently have. Windmills in the meantime are banking crazy subsidies whether they are generating or not, while network operators effectively have to cover the costs of the windmills' inadequacies.

    The other really obvious tech to deploy (but still, more capex) is the widespread installation of flywheels and synchronous compensation. But if they only benefit the world on 1 day out of 10,000, are they worth the expense going on your bill?

    In the US the burden of reliability sits with the customer: if you need reliability, you buy a UPS and a generator. In the UK we've been very happy to let the reliability burden sit with the transmission network and generators for the last 60 years. Perhaps, in an era of distributed generation everywhere, we should be rethinking that policy.

    Too bad our politicians have been led down the road of debating utter garbage for the last 15-20 years rather than looking at real national interests.

  20. Anonymous Coward
    Anonymous Coward

    I thought that under the original UK rail specs, the onboard batteries should enable the train to "limp" home to a station within 4 miles. Perhaps the question should be asked as to why this wasn't used (or rather wasn't an option), as it would have expedited fixing the permanent lock-out by the technician (at a station rather than mid-track).

    I can only presume that the "limp home" mode is only available where there is no power at all, rather than in a low-frequency situation.

    1. Badbob

      Batteries?

      The batteries on most multiple unit trains are, like those in your combustion-engine car, designed to keep some of the lights on (for a short period) and to provide the bare amount of power required to restart the thing. There is no “limp home” facility for the overwhelming majority of electric multiple units. The main exception is the 80x series, most of which are bi-mode and some of which have at minimum a tiny diesel generator that will keep the AC on and provide a small amount of traction power for a slow limp (providing there aren’t any steep gradients). The exception among those is the units ordered by First for their new London to Edinburgh operation, which will be pure electric, with only a very basic battery for AC and lights.

  21. J.G.Harston Silver badge

    Nicholas Parsons

    Talking of turbines, this is the only way I could find an IT angle: Nicholas Parsons, RIP, age 96.

    There must be an IT angle that can get a Reg obit article.
