I believe there is only one word in answer to this:
"Duh!"
Carnegie Mellon University's Software Engineering Institute has nominated transport systems, machine learning, and smart robots as needing better cyber-security risk and threat analysis. That advice comes in the institute's third Emerging Technology Domains Risk Survey, a project it has handled for the US Department of …
This isn't a report meant for information security professionals. It's written for higher-level executives about the technology challenges ahead. Take the time to read through it all before you jump at the chance to publicly roll your eyes.
Also, the fact that you 'dismiss' anything coming from Carnegie Mellon University displays your absolute ignorance of information security. CMU is the #1 university in the world when it comes to information security and information technology research.
From my experience working on one "robotic" brain surgery product (one that has operated successfully for years), I can say that standing between development and deployment is, rightfully, a bewildering set of compliance mountains to overcome. Getting FDA permission for these machines is not easy, and I am thankful for the huge pressure these organizations exert on the development process.
I predict that getting certification will become harder and even more technical, possibly requiring machine validation of the code (formal proofs). At present, most human-safety software regulation focuses on the development process rather than on the software itself: the 'how', not the 'what'.
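To make the 'what' versus 'how' distinction concrete, here is a minimal, purely hypothetical sketch in C with an ACSL-style contract (the annotation notation used by the Frama-C toolset): a prover is asked to show that the code itself satisfies an explicit postcondition, rather than auditing the process that produced it. The function name and the DEMAND_MAX constant are invented for illustration.

    #include <stdint.h>

    /* Hypothetical illustration only: a demand clamp with an explicit,
       machine-checkable specification (ACSL-style contract, Frama-C
       notation). The tool is asked to prove the 'what' (the ensures
       clauses), not merely that a good process produced the code. */

    #define DEMAND_MAX 1000u   /* assumed full-scale actuator demand */

    /*@ requires \true;
        assigns  \nothing;
        ensures  \result <= DEMAND_MAX;
        ensures  raw <= DEMAND_MAX ==> \result == raw;
    */
    uint16_t clamp_brake_demand(uint16_t raw)
    {
        /* Saturate out-of-range input so callers can rely on the
           postcondition. */
        return (raw > DEMAND_MAX) ? (uint16_t)DEMAND_MAX : raw;
    }

The annotations live in comments, so the file still builds as ordinary C; the point is only that the specification is attached to the code and can be checked mechanically.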
My concern is that the areas now being automated, IoT devices and robotics entering new domains, will be lightly regulated, with overwhelming financial pressure on governments to permit them or to settle for 'light touch' standards. Self-driving cars are one such area. As an old-hand programmer I can't believe for a second that these machines are anywhere near safe, for a long list of reasons, and I have been shocked at the public's and governments' willingness to let software engineers give their shitty code near-full control of cars and trucks. It's like they are living in an alternative reality, definitely drinking far too much Kool-Aid. After 20 years of experience, neither Google nor Microsoft can secure a browser. What hope is there of developing safe software capable of autonomously driving a car?
The reasons why it is so technically difficult to write safe software? There are so many it really is difficult to list them. To sign off on the control system for an autonomous vehicle responsible for transporting humans in traffic? Fucking hell... that's scary difficult.
It's not about "it's better than humans". That's a popular fallacy, and the argument doesn't cut it when signing off on safety systems. You need to attest that the system meets an agreed specification, and I don't think you will get specifications worded like "must be better than most drivers" or "better than the average driver". Right or wrong, when little Johnny kills himself and his passengers through his own lack of skill and poor judgement, it's viewed as a tragedy. Watch what happens when Johnny and his passengers are killed because of, say, Ford's software... it won't be nice for Ford.
Writing even relatively simple software without bugs seems so difficult that most companies fail at it. This shouldn't need elaborating: writing software well is hard, and most companies don't even begin to try to do it right.
Read up on Toyota's unintended acceleration case for an insight into the reality of the throttle control software in a car (not even an autonomous one), software that may have been involved in fatalities. Look at the reports on the code and at how the company reacted. Now extrapolate from that to the complexity of an autonomous system responsible for the safety of all its passengers.
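For a flavour of the class of defect the public expert analyses in that case describe, here is a hedged C sketch, emphatically not Toyota's code and with every name invented: a safety-relevant global shared between task and interrupt context with no protection, so one side's update can be silently lost.

    #include <stdint.h>

    /* Illustrative only -- not Toyota's code. It sketches the general
       class of defect criticised in that litigation's expert reports:
       safety-relevant globals shared between task and interrupt context
       with no protection around read-modify-write sequences. */

    #define FAULT_PEDAL_RANGE  (1u << 0)   /* hypothetical fault bits */
    #define FAULT_WATCHDOG     (1u << 1)

    static volatile uint32_t fault_flags;  /* hypothetical shared fault word */

    /* Task context: record a pedal-sensor range fault. */
    void control_task_step(void)
    {
        /* Unprotected read-modify-write: if timer_isr() runs between the
           read and the write-back, its fault bit is silently lost. */
        fault_flags |= FAULT_PEDAL_RANGE;
    }

    /* Interrupt context: record a missed watchdog deadline. */
    void timer_isr(void)
    {
        fault_flags |= FAULT_WATCHDOG;
    }

That is one small, well-understood bug pattern; an autonomous driving stack multiplies the opportunities for this kind of mistake by orders of magnitude.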