UK utility Severn Trent tests the waters with £4.8m for SCADA monitoring and management in the clouds

UK utility Severn Trent Water has offered £4.8m for a cloud-based platform to juggle its supervisory control and data acquisition (SCADA) systems. The £1.7bn-revenue firm aims to plug various data sources, from machine sensors to weather readings, into the deployment, which will be used to measure and manage the utility's …

  1. batfink

    SCADA in the Cloud?

    Ah, a new and bigger attack surface then....

    1. Keythong
      WTF?

      Re: SCADA in the Cloud?

      It also makes control and monitoring, possibly even local control and monitoring, vulnerable to internet-down or cloud-down events, which could be caused deliberately by DDoS attacks, even for encrypted communications.

      Water provision should be recognised as part of critical survival infrastructure, so it should have military-grade security and local control, and should only be linked via a private WAN, with secured internet access/messaging as a non-essential option via a separate WAN node. I suggest that water companies should be regarded as military contractors.

      1. fidodogbreath

        Re: SCADA in the Cloud?

        I work for a company that develops cloud-based SCADA for utilities (no involvement with this project though). There are ways to address these issues.

        1. In our system, equipment control (PLC) and alarm processing run at the site, not in the cloud. The cloud HMI sends control requests and receives status responses, alarms, performance data etc. If the cloud HMI goes down, the site will continue to do the last thing it was told, and record alarms and historical data values.

        2. Communication from the HMI to the site is completely separate from communication between the HMI and the users. Also, site comms automatically fail over between multiple connections. Typically, at least one of these does not entail tunneling over the public internet.

        3. If the HMI is down hard due to hack, crash, DoS, or whatever, there is a local HMI running at the site that a customer can either lay hands on (if at the site) or remote into (if not), which can also issue control requests to the PLC.

        4. Since point data is recorded in an onsite SQL DB, when HMI comms come back up the operator has a complete picture of everything that took place during the comm outage.
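
        To make points 1 and 4 concrete, here's a minimal sketch of the site-side "keep doing the last thing it was told" behaviour. It's illustrative only - the function names and transport are assumptions, not lifted from any real product:

        ```python
        import time

        # Safe defaults the site falls back to before any cloud contact.
        last_setpoints = {"pump_speed_pct": 60.0, "dosing_rate_lph": 2.5}

        def poll_cloud_hmi():
            """Fetch the latest control request from the cloud HMI.
            Returns a dict of setpoints, or None if the link is down.
            Stubbed for illustration; a real system uses an authenticated channel."""
            return None

        def apply_setpoints(sp):
            """Write setpoints to the local PLC (stubbed)."""
            print("PLC <-", sp)

        while True:
            requested = poll_cloud_hmi()
            if requested is not None:
                last_setpoints = requested   # accept the new control request
            apply_setpoints(last_setpoints)  # link down? keep doing the last thing told
            time.sleep(5)
        ```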

        We have other protections and redundancies in place as well. The point is, we're not idiots and we're not making $15 IoT light bulbs. We know that this is critical infrastructure, and we don't want it to be fragile or vulnerable.

        No security is perfect, obviously. Natanz and Russia's attacks on Ukraine's grid are just two examples of how even air-gapped, non-cloudy SCADA can be vulnerable to a well-resourced attacker.

        1. DS999 Silver badge

          I actually think this might be an improvement

          If only because legacy SCADA hardware never took security into account at all. When things were re-architected to make the data available from the cloud, they obviously had to consider security, given today's world and the rather serious SCADA attacks we've seen in recent years.

          Not that a lot still won't be missed, as is true for all software these days, but if information/alarms are kept as separate as possible from control/changes, then most of the security attention can be paid to the control/change part of the system. While I suppose certain information/alarms would be considered sensitive, someone gaining access to those is less likely to pose a problem than someone gaining the ability to change/control things.
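
          A hypothetical sketch of that separation - read path and write path as distinct endpoints, so the hardening effort can concentrate on the latter. Flask is used purely for illustration, and every name here is invented:

          ```python
          from flask import Flask, abort, jsonify, request

          app = Flask(__name__)

          READINGS = {"siteA": {"flow_m3h": 123.4, "tank_level_m": 2.1}}  # stand-in data
          CONTROL_TOKEN = "rotate-me"  # in reality: mTLS or a vaulted secret, not a constant

          # Read path: information/alarms. Lower sensitivity; broad read access is tolerable.
          @app.get("/telemetry/<site>")
          def telemetry(site):
              return jsonify(READINGS.get(site, {}))

          # Write path: control/changes. This is where the security attention goes:
          # separate credential, separate audit trail, ideally a separate network path.
          @app.post("/control/<site>")
          def control(site):
              if request.headers.get("X-Control-Token") != CONTROL_TOKEN:
                  abort(403)  # reject anything lacking the control credential
              print("control request for", site, request.get_json())  # stand-in for a PLC request
              return "", 202
          ```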

        2. jake Silver badge

          Re: SCADA in the Cloud?

          So a new and bigger attack surface, then.

          Thanks for the confirmation of the OP's initial assessment.

  2. Tim99 Silver badge
    Facepalm

    Please tell me

    That a highly trained human being has to sit in front of the controls of the treatment plants, and only the human is allowed to control the treatment plant.

    1. AddieM

      Re: Please tell me

      A well-trained but highly-insured human being has the passwords that let you change all the settings in the control system, and then the computer does all the busywork of actually running things. This frees up the human being to take laboratory samples, go look at things that have stopped responding, receive chemical deliveries, and all the other exciting jobs that water treatment entails. The plants don't quite run themselves, but can quite often be left to do their own things for weeks at a time without raising an alarm that needs a human being to go look at it *right now*.

      1. Tim99 Silver badge

        Re: Please tell me

        I'm a Chartered Chemist, and also have a strong background in IT. For 25 of the last 50 years I have specialized in water and environmental analysis, including for a national authority, and I am a volunteer technical assessor for ISO 17025. My comment still stands.

        1. Nunyabiznes
          Joke

          Re: Please tell me

          Homer Simpson is based on real people.

          1. Anonymous Coward
            Go

            Re: Please tell me

            Homer is real, but at least the donuts at his place of work are free, so there is still a faint glimmer of hope for society.

  3. Anonymous Coward
    IT Angle

    So you are taking a mission-critical system and hosting it in the cloud? Why? Something like SCADA, which doesn't see any real increase in resource demand as it runs (it's not like a ticketing system or an e-commerce platform, where transactions and resource demands fluctuate wildly, nor is it a temporary or seasonal need; SCADA is a year-round requirement), should be a perfect candidate for on-prem, where it can be secured more easily.

    1. fidodogbreath

      "So you are taking a mission-critical system and hosting it in the cloud? Why?"

      Several reasons.

      You have distributed resources that cover a large geographic area, but which need to be centrally managed and monitored.

      You need both local and op center users to be able to monitor and manage your plant(s).

      You are in the middle of a pandemic, and you need at least some of your operators and managers to be able to work remotely.

      You are rapidly expanding, and you need your control infrastructure to be highly scalable on demand.

      Just a few scenarios.

      1. jake Silver badge

        "You have distributed resources that cover a large geographic area, but which need to be centrally managed and monitored."

        We were doing that literally decades before "the cloud" became a marketing term.

        "You need both local and op center users to be able to monitor and manage your plant(s)."

        We were doing that literally decades before "the cloud" became a marketing term.

        "You are in the middle of a pandemic, and you need at least some of your operators and managers to be able to work remotely."

        We could have done that literally decades before "the cloud" became a marketing term. In some places, we were.

        "You are rapidly expanding, and you need your control infrastructure to be highly scalable on demand."

        For water needs? (Electricity, sewer ... ) Pull the other one ... it's not as if all of a sudden we need three or four new reservoirs and attendant plumbing to come online by the end of the week because 12 new subdivisions and a handful of heavy industry plants suddenly popped up without warning.

    2. Martin M

      See my comment below. *Analytics* computation requirements can indeed fluctuate wildly, and that seems to be what they're talking about here. Plus lots of historical data, which means cheap, reliable storage is highly desirable.

      1. jake Silver badge

        "*Analytics* computation requirements can indeed fluctuate wildly"

        For WATER needs? Really? That's not even true here in California, land of either flood or drought.

        "Plus lots of historical data, which means cheap, reliable storage is highly desirable."

        Why would this data library require the services of a second party? Storage, as you rightly point out, is cheap. Cheaper still if you own it, instead of paying rent on it. You can pack an awful lot of storage capacity into a closet-sized space these days ... at a cost low enough that you can mirror all corporate data in each of three geographically diverse offices. Maintenance on such a system, once properly set up, is virtually nonexistent.

        1. Martin M

          Analytics computation requirements are very high when someone is running a big ad-hoc analytical query (not infrequently, tens of large servers), and zero if no-one is. Typically, there's a small number of analysts/data scientists who do not query all day, which drives a very peaky workload. Traditionally, they've been provided with quite a large set of servers which are lightly loaded most of the time and run queries horribly slowly during peak workload.

          Instead, the 'serverless' (hate that term) analytics services allocate compute to individual jobs, and only charge for that. They are therefore typically cheaper, because there's no idling, and yet run queries at full speed when required, vastly reducing data-scientist thumb-twiddling (and have you seen what a good data scientist earns?).
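
          A back-of-envelope illustration (all numbers invented): the always-on cluster bills for every idle hour, while per-job allocation bills only for the busy ones, even when it bursts much wider:

          ```python
          # Peaky analytics workload: compare an always-on warehouse with per-job billing.
          hours_per_month = 730
          always_on_nodes = 10
          node_cost_per_hour = 2.00            # provisioned warehouse node

          query_hours_per_month = 40           # actual busy time across all analysts
          serverless_nodes_per_query = 50      # burst wide, finish fast
          serverless_node_cost_per_hour = 2.00

          always_on = hours_per_month * always_on_nodes * node_cost_per_hour
          per_job = query_hours_per_month * serverless_nodes_per_query * serverless_node_cost_per_hour

          print(f"always-on cluster: ${always_on:,.0f}/month")  # $14,600, mostly idle
          print(f"per-job billing:   ${per_job:,.0f}/month")    # $4,000, with 5x the compute when busy
          ```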

          The post by AddieM below suggests they have racks of Oracle to support their warehouse. I can guarantee you that that storage is not cheap. Could they rearchitect to a more cost-effective, perhaps open-source MPP data warehouse without forking out megabucks to an EDW vendor? In theory, but most plump for something as 'appliance-y' as possible to minimise complexity, and those are very spendy. Even equivalent cloud services with dedicated MPP compute (e.g. Redshift et al) tend to be a lot cheaper, and are fully managed.

  4. AddieM

    Hopefully the description here has been over-simplified for IT people? Severn Trent's water treatment plants have a hierarchy of control:

    - generally, each process area is controlled by a Programmable Logic Controller (PLC) - these are sat in a cabinet in a Motor Control Centre (MCC) and are responsible for monitoring instrument readings, starting and stopping drives, opening and closing valves, that kind of thing. They're generally hardwired to each device; slightly more modern plants would use a token-passing fieldbus called Profibus to connect everything up, but Severn Trent are old-school. PLCs are hard real-time controllers about the size of a fag packet that use specialised programming techniques, usually ladder logic. The PLCs will have a local Human Machine Interface (HMI) screen, which shows the status of all the kit and all the set-points in effect, in pretty-picture form, called a mimic. They're super simple and reliable; we've still got PLCs from the 60s that have just been ticking away every day.

    - the PLCs are monitored over the network using a Supervisory Control And Data Acquisition (SCADA) system - generally over ethernet, sometimes ethernet over fibre optics if they're a bit further apart. SCADAs are generally server blades running Windows. They'll have a copy of the mimics for every HMI, so you can supervise the status of the whole plant; will let you change any set-point anywhere, which saves having to wander around the site to do it, and they'll have a lot of trending information available - they record all of the PLC instrument readings, so you can check how deep a tank was a year ago, for instance. You wouldn't want to outsource them; running a plant without one changes it from a one-or-two man job to an all-hands-on-deck, 24/7 cover disaster, and so an outage would be very very bad. Many of these sites are in the arse end of nowhere, and have unreliable internet connections: local, network-isolated Windows running on redundant servers can actually have a very good uptime.

    - the SCADAs will report some telemetry info to STW's central database, so that the bods in head office can monitor the info. I think that's what's being proposed for changing over here? At the moment, STW only monitor a few key pieces of information (total plant flow, etc). United Utilities have been attempting to change over to their own system for the last several years; it monitors literally every piece of info for every asset they own, is costing them millions per year in huge Oracle racks, and is very, very slow - a prime candidate for moving to the cloud. STW are generally a bit more cautious and conservative, but can possibly leapfrog the 'do it on the premises' step - I think it would make a lot of sense. The SCADAs themselves generally keep years' worth of trending info stored, so they can buffer for a while if the networks go down and just update once the link comes back - there are hardwired emergency callouts for things like power failures in any case.
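
    For the IT people: the buffering behaviour in that last point is essentially store-and-forward. A minimal sketch, with sqlite3 standing in for the SCADA's local trend store and the uplink left as a hypothetical callable:

    ```python
    import sqlite3
    import time

    db = sqlite3.connect("trending.db")
    db.execute("CREATE TABLE IF NOT EXISTS readings"
               " (ts REAL, tag TEXT, value REAL, sent INTEGER DEFAULT 0)")

    def record(tag, value):
        """Log a trend point locally, whether or not head office is reachable."""
        db.execute("INSERT INTO readings (ts, tag, value) VALUES (?, ?, ?)",
                   (time.time(), tag, value))
        db.commit()

    def drain(upload):
        """Forward any unsent readings; `upload` is a hypothetical telemetry uplink."""
        rows = db.execute(
            "SELECT rowid, ts, tag, value FROM readings WHERE sent = 0").fetchall()
        for rowid, ts, tag, value in rows:
            try:
                upload(ts, tag, value)
            except OSError:
                break  # link still down: leave the backlog for next time
            db.execute("UPDATE readings SET sent = 1 WHERE rowid = ?", (rowid,))
        db.commit()
    ```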

    The three suppliers are Siemens, Allen Bradley, and Mitsubishi, I think. STW have been generally changing over from AB to Mitsi for all their control needs, but Siemens have been basically giving away the hardware and then stinging you for licenses and spares lately, and have been making inroads.

    1. Jimmy2Cows Silver badge
      Pint

      This is the level of insight I come here for. Great comment!

    2. Tim99 Silver badge

      Hi Addie, see my self-aggrandising comment above. I liked and upvoted your post, but I suspect that I am more cynical than most, and I believe that my cynicism has been earned - including being the senior chemist who did the analysis when ST was prosecuted by the relevant authority after THEY had polluted a major system. (Admittedly that was before privatisation in 1989; I suspect that things have not necessarily improved.)

      1. AddieM

        Hey Tim. My background is as senior commissioning engineer in the water industry for the last twenty years, working across Europe. Regarding our water companies: I would say that any cynicism in dealing with them is extremely well-deserved and completely justified. On the other hand, I've always found Severn Trent to be relatively open about what they do, and they don't take *every* opportunity to screw over their own staff, contractors and suppliers - I'd probably rate them 2 out of 5 for being water company bastards, there's a lot of them I've really enjoyed working with over the years. Now, travel a bit to the South-East off their patch (but not as far as the coast); phew, now that's a different matter...

    3. Martin M

      Makes a great deal of sense. Particularly if there is a very variable query workload, you could stream the information into Azure Data Lake Storage and run queries using Azure Data Lake Analytics. That would provide cost-effective storage as well as usage-priced analytics compute, instead of relying on provisioning loads of expensive traditional data warehouse nodes (and their associated licenses) that are probably lying fallow most of the time, and insufficient when you do get busy.

      This kind of analytical workload is normally a slam dunk for cloud over on-prem, and doesn't usually pose a direct threat to the integrity or availability of operational systems - though confidentiality may obviously still be very important, depending on the nature of the data. The data flow is from the sensitive operational network to the less sensitive cloud analytics one, and you can make going the reverse way very difficult (even data diodes etc. for very high assurance).
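
      As a sketch of that one-way direction (endpoint and payload shape invented for illustration): the plant side only ever opens outbound connections, so a firewall - or a data diode, for high assurance - can simply refuse everything inbound:

      ```python
      import json
      import time
      import urllib.request

      INGEST_URL = "https://example.invalid/ingest/telemetry"  # hypothetical endpoint

      def push_batch(readings):
          """Push a batch of readings outbound over HTTPS; nothing listens inbound."""
          body = json.dumps({"sent_at": time.time(), "readings": readings}).encode()
          req = urllib.request.Request(
              INGEST_URL, data=body,
              headers={"Content-Type": "application/json"}, method="POST")
          with urllib.request.urlopen(req, timeout=10) as resp:
              return resp.status == 202
      ```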

      The exception is possibly the monitoring side of things, where a DoS/compromise might slow some types of response. But it sounds like the biggest problem would be plain old non-malicious plant network reliability issues - any response would have to be resilient to that, and thus to more malicious attacks.

    4. diodesign (Written by Reg staff) Silver badge

      "Hopefully the description here has been over-simplified for IT people?"

      No doubt there's gear on site. As for where exactly it goes off-premises, we're relying on the tender document's vague wording -- what do you make of it?

      I've tweaked the article to reflect the fact the tender document leans more toward wanting a system that can perform analysis and predictions in the cloud by integrating with physical, on-location SCADA systems.

      C.

    5. Dan 55 Silver badge

      If they have Oracle racks I'm surprised they didn't move to Oracle cloud, which should be easier to do in theory. They must have a contract which is subsidising a lot of yachts.

  5. Terje

    What F'ing moron thinks that a SCADA system has any business being in the cloud? Any SCADA system for critical infrastructure such as water should be standalone and preferably air-gapped.

    I can just see the scenario play out in front of me: an "innocent" worker with his digger cuts a fibre-optic cable.

    In the control room:

    Dave: Hey Steve, did you just lose contact with the SCADA system?

    Steve: Yes, I lost contact with everything, and we can no longer talk directly to the systems because the firewalls reject every connection not coming directly from the SCADA system...

    What could possibly go wrong?

  6. stevebp

    They've been sold a dummy

    Whoever is giving them advice should be taken out and shot. No-one in their right mind would put an infrastructure control and monitoring tool outside of their own discrete management network, due to the (almost certain) risk that a hacker will gain access to it or disable it with DDoS attacks. I see a worrying trend with data centre DCIMs and BMSes too.

    1. ForthIsNotDead

      Re: They've been sold a dummy

      I don't read it like that. They are not going to replace their site SCADA systems. They use the Scope SCADA system from Servelec, with a mixture of PLCs on site to control processes and RTUs to bring selected parameters back into the central SCADA system. It's been built up over many years and they are not going to throw it out.

      This is about taking the end-point where all the *selected* site data ends up and migrating it to the cloud, where it can be stored in data lakes and mined/queried by data scientists. In that respect it makes sense, insofar as that is the solution that world + dog is adopting; however, I think the project cost savings often espoused by the cloud vendors are exaggerated.

      They could do the same thing with their rack of servers in a server room. They could even run FaaS/serverless etc. locally if they want to develop in a 'cloudy' way. It's all open source.
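
      The same sort of ad-hoc mining is pretty portable, too. As one open-source example (paths and column names invented), DuckDB will happily chew through a directory of Parquet files whether that directory lives in cloud object storage or in a server room:

      ```python
      import duckdb

      con = duckdb.connect()
      rows = con.execute("""
          SELECT tag,
                 date_trunc('day', to_timestamp(ts)) AS day,
                 avg(value) AS avg_value
          FROM read_parquet('telemetry/*.parquet')
          GROUP BY 1, 2
          ORDER BY 1, 2
      """).fetchall()
      for tag, day, avg_value in rows[:5]:
          print(tag, day, avg_value)
      ```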
