£42 billion and 38,000 skilled jobs
Using the Cruise metrics, that equates to around 25,000 robotaxis at a net cost of £1.6M each. Truly the "potential benefits" of which Liz Truss would be proud.
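For anyone who wants to sanity-check the arithmetic, it's just the headline figure spread evenly over the implied fleet and nothing cleverer - a rough sketch (the even split is my assumption):

```python
# Crude sanity check: headline benefit divided evenly across the implied fleet.
claimed_benefit_gbp = 42e9   # the £42 billion headline figure
fleet_size = 25_000          # robotaxi count implied above

per_vehicle = claimed_benefit_gbp / fleet_size
print(f"Implied value per robotaxi: £{per_vehicle / 1e6:.2f}M")  # prints ≈ £1.68M
```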
The UK government has promised to "clarify and update" the law to allow the introduction of self-driving vehicles to the country's roads, but it is set to be a long, technical journey. In the King's Speech this week, in which the governing party sets out its legislative program, King Charles III said ministers would "introduce …
This has always been necessary.
At which point you're putting the liability on the software - which, if it is the "driver" by their definition, also means: INSURANCE.
So now although you might choose to insure your car as an asset, the "3rd-party" (main) component of the insurance should be on the system driving it.
Then you will discover that a) nobody wants to take that on as a car manufacturer and/or b) the cost of self-driving cars / subscriptions (yep, ongoing costs of insurance will require ongoing subscriptions) skyrockets to compensate.
And it's only at that point that, drunk as a skunk, you can get into a self-driving car and let it take you home. Until then, you are always the driver/responsible.
So, look forward to expensive subscriptions for self-driving, paying insurance for the vehicle AND 3rd-party insurance via the subscription, and companies being sued into oblivion and your car "decertified" if, for instance, something like the Dieselgate scandal comes out, or some AI is found to be terribly faulty as a knock-on effect of even one lawsuit involving the cars around the world. A recall will mean "no driver" until you update your software to a recertified version. Not to mention obsolescence when your car's software is not up-to-date, or is too old to support, and it's no longer legal to use on the road except if you're driving it yourself.
This stuff is all "just another 20 years away" again, because the above isn't going to happen overnight no matter how much business you throw at it. And when it does, Ford etc. are then basically a software / insurance company that happens to make cars.
So... in the event that the automation does something which would result in disqualification for a fleshy meatsack driver, is it then forbidden to move until the ban is completed - six months, a year, ten years? Leaving the owner with an unusable and depreciating asset? And if the offence is punishable with prison, will there be a big HM Car Park to which the vehicle is escorted and left at His Majesty's Pleasure?
Surely the onus - and the risk - is upon the company developing the vehicle to make sure it doesn't misbehave in these ways.
Why not? That's what humans get. And this self-driver is all the same entity, so yep.
Now imagine the insurance that the manufacturers will build into it once they realise that one car accident could cripple all their cars until the courts rule it's safe to drive again. Ouch.
The onus is entirely on whatever is in control of the vehicle. If that's me, it's on me. If that's not me, it's going to be the car.
Like I explain to bad bosses on a regular basis in my career - I can have both the power and the responsibility, or neither, but you can't mix and match.
That's why they're tested individually.
If the standard of testing of automated systems was similar to that of human drivers (even in those countries that have comparatively strong driving tests) it would be a very low bar.
Humans bring many strengths to the challenges posed by driving that are very difficult to achieve automatically, but also a great many weaknesses - and that is before you get to some of the abominations that some "designers" inflict with their road designs. This is evidenced by the statistics on road deaths/injuries. Even so, we do not insist upon periodic re-certification of drivers.
Automated driving still has a way to go before it is safe enough for common and widespread use - but you are not going to progress the technology without taking it onto the streets (in a well-planned and carefully controlled way). Part of the challenge is updating the regulatory framework that will oversee the use of those systems.
Insurance should just be handled by the insurance companies, same as now. They will eventually have data on how much damage was caused by a million self-driving cars in a year. And that may be more or less than the damage caused by a million drivers, so the insurance premium will be higher or lower.
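If you want the mechanics spelled out, the actuarial sum is about as simple as it gets - something like this sketch, where every number is made up purely to show the calculation:

```python
# Sketch of premium-setting from claims data; every figure below is illustrative.
human_claims = 120_000_000       # £ paid out per million human-driven cars per year (made up)
autonomous_claims = 90_000_000   # £ paid out per million self-driving cars per year (made up)
current_premium = 600            # £ per human-driven car per year (made up)

new_premium = current_premium * (autonomous_claims / human_claims)
print(f"Implied self-driving premium: £{new_premium:.0f}/year")  # £450 with these made-up numbers
```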
Yes, but there still has to be the ability to "ban" the vehicle.
Let's assume there is a bug. That makes all other vehicles vulnerable so they should all be "banned".
The company then has to take the hit providing taxis or whatever to the people who have bought the shite in the first place.
Just like an MOT, if it is not road worthy then it cannot be legally driven.
It will be easy enough as the vehicles are all connected to data anyway.
Just because this is software does not take away any of the responsibility. Software has a history of being bug-ridden detritus that is just about good enough. Those systems that are good are at the very top end and extremely specialised, not consumer solutions.
There is an acceptance that software can have bugs and it does not matter: it does not kill or injure, it is just a periodic inconvenience. Now you are putting the software in control of 1.5 tonnes of metal that is interacting with the general public without constraint.
Sorry, how is this different to today? If a car manufacturer designs something whose petrol tank explodes, there’s a product recall. The manufacturer replaces the component at their expense.
Recently, Toyota released a car where the wheel tended to fall off, because they’d forgotten to factor in that electric cars put more torque down, which strains the wheel more.
Mostly, they don’t release cars until they are confident that the design is damn-foolproof. And occasionally they get it wrong, lose money, and have to pay compensation. People are acting as though this is the first time anything like this has happened in human history, and it just isn’t.
It's a difference in kind of recall.
At the moment, at its worst a recall generally means the vehicle can be driven slowly to a registered garage for the faulty part to be replaced before it fails catastrophically.
It's also very unlikely that multiple vehicles will fail simultaneously as driving style and pothole counts vary.
It is probable that faults found in self-driving software will result in all vehicles using a particular version being immediately taken out of use until the software is fixed, verified and installed, because suddenly they're uninsurable, if not illegal.
The same as a driver who loses their sight isn't allowed to drive home from the optician.
“It is probable that faults found in self-driving software will result in all vehicles using a particular version being immediately taken out of use until the software is fixed, verified and installed, because suddenly they're uninsurable, if not illegal.”
Sorry, this is just total over-reaction and FUD. We’ve already seen early self-driving vehicles on the road. Unlike “I, Robot”, when there’s a bug the eyes don’t suddenly turn red and the car turns into a murderous beast. They are not “all going to fail simultaneously”. What happens is that under some rather rare condition, the car fails to handle it correctly. The idea that once a car is already shown to meet some safety standard (say “fewer than 5 fatalities per billion miles”), you’re suddenly going to have to take them all off the roads until the new fatal rare condition is handled, is just silly. Unless they all happen to meet the road-condition “man in orange sweatshirt with GO logo, holding the hand of child wearing flashing-light wheelies, near dusk”, they are no more dangerous than before it happens the first time.
That makes all other vehicles vulnerable so they should all be "banned".
Surely that should depend on the vulnerability and the actual risk it presents. For example - should there be a type ban if a very rare set of circumstances leads to a minor accident?
Even in the case of a serious accident (life-changing injury/death), if the circumstances are extremely unlikely to be repeated, would an instant ban be the best way forward? Such decisions are often not without their own cost in terms of consequential injury and death as people switch modes of transport.
Within the air sector, accident investigations frequently identify different degrees of remedial action. In some cases immediate grounding is needed, but often the risks can be addressed through scheduled updates to systems or scheduled maintenance.
It seems likely that as automation increases there will need to be an associated investigative body similar to AAIB or RAIB [in the UK - other countries = other bodies]. Certainly the AAIB has the power to ground unsafe systems.
Obviously a major difference between where car-automation is going and current-day air-systems is that the pilot/flight crew is still present to handle problems - though there is interest in fully autonomous flight (particularly for freight).
This has been my argument as well.
As soon as control of the vehicle passes to software, the people who created that software and own all the rights etc have to be responsible and culpable when it goes wrong.
That includes prison sentences if the failure merits it. Death by dangerous driving has to be the outcome if someone is killed by a self-driving vehicle and people at the company have to be responsible. That will be the QA team that sign off.
I would include the occupants of the vehicle as well. If the vehicle crashes when it is supposed to be self-driving and software or sensor failure is the cause, then they are culpable.
Currently there is no responsibility on the manufacturer; it is with the driver. The entire point of self-driving is that the occupants do not need to take over. The reality is that someone cannot, because they are unable to react fast enough anyway. If you need a "driver" in the vehicle to be responsible and able to take over, then the entire thing is a complete waste of time.
Based on what my latest Golf appears to be capable of doing we are years away from safe self driving. It is not capable of running on the road and we must not be in the situation where to have self driving vehicles, there have to be massive upgrades to roads. That is completely pointless. Spend the money on public transport instead.
"It is not capable of running on the road and we must not be in the situation where to have self driving vehicles, there have to be massive upgrades to roads. That is completely pointless. Spend the money on public transport instead."
Why would we waste the money on public transport instead? Cars are popular for the simple fact that they actually work. You get in the car, drive to where you are going and get out, the exceptions being in crowded cities. Public transport may or may not arrive, is not pleasant, and you have to get yourself to the pickup point, travel to where the transport is doing dropoffs, then get to where you are going. And that is the best-case scenario of local travel; anything further afield is usually much worse.
I am not convinced of automated transport on the smaller roads but on the main arteries I can see it being a good idea. Especially for trucking goods.
However well it works, it works on specific routes. If your journey is simply that route or a part of it, it's fine. If it involves parts of two or more routes, even if there are interchange points between the routes, there's likely to be time wasted changing and the journey is likely to be far from direct.
Nevertheless, even when the journey consumed excessive amounts of time, when I used to commute into central London from Wycombe by train I didn't envy the drivers on the crowded roads we passed.
By this logic 1000 software developers are driving 100 million cars simultaneously. If those cars cause 1 death, would you jail all the software developers?
Or divide the sentence by a billion?
Or give them a reward for saving so many lives compared to the bad old days when humans used to drive?
“Death by dangerous driving has to be the outcome if someone is killed by a self-driving vehicle and people at the company have to be responsible. That will be the QA team that sign off.”
No, because that’s not the case in the case of a human driver, for the offence of dangerous driving.
“when driving falls far below the minimum standard expected of a competent and careful driver”
If your human driving kills somebody, and yet you followed all best precautions, you *don’t* get prosecuted. Run over a 5yr old child outside a school at 3pm, expect to go to jail no matter “why” it happened, because you knew you were supposed to take special precautions. Do so at 3am, with the child wearing black pajamas, and it’s the parents’ fault.
Competent and careful drivers cause approximately 5 deaths per billion miles. We know this as a measured fact.
The equivalent for an *engineering team* is to keep the fatality rate below that rate. So long as their implementation does so, they don’t get prosecuted.
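To put a rough scale on that benchmark - reusing the 100-million-car fleet from the comment above and guessing at annual mileage, so treat the numbers as illustrative:

```python
# The 5-deaths-per-billion-miles benchmark applied to a large fleet.
# Fleet size is taken from the earlier comment; per-car mileage is an assumption.
benchmark = 5                      # deaths per billion miles (the quoted human rate)
fleet_size = 100_000_000           # cars
miles_per_car = 7_000              # assumed annual mileage per car

fleet_miles = fleet_size * miles_per_car            # 700 billion miles/year
threshold = benchmark * fleet_miles / 1e9
print(f"Fleet mileage: {fleet_miles / 1e9:.0f} billion miles/year")
print(f"Deaths/year at the human benchmark: {threshold:.0f}")   # prints 3500
# Below that line the system matches the "competent and careful" human driver;
# above it is where the negligence questions start.
```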
This is just the same as: it is still possible for engineers to design and build bridges. Occasionally those fail, and the design team is not necessarily prosecuted.
But if *in addition* the bridge architect was criminally negligent, then yes they would be. And that occurs when they fail to take the precautions and follow the design practices in line with professional practice. So it should be the same here.
If, to take a relevant example, the self-driving car fails to roll off an accident victim *because this is the first time anyone ever thought of it in engineering*, so it wasn’t in the test set. It may be bad, adds into the “does this technology kill more than 5 per billion” box, but is not in itself negligent.
But if the company fails to fix the bug, and to add that test case to the regression tests of all future SW versions, that’s a negligence and process problem.
I think people are letting their prejudice override standard engineering best practice.
It's the old hybrid-dilemma.
If you hedge your bets and try to make a device do two different things, chances are it will do both badly and cause you more problems than either.
As far as I see it, you would need to be buying a "self-driving car" (with a subscription because the software/insurance would be on the manufacturer of the car) or a "human-driving car".
At that point you can abandon all controls, steering, instruments, much of the dashboard, etc. and make the car's job so much easier.
But trying to do both in one is just a temporary solution that's never going to work well in terms of liability, insurance, etc. We're already seeing that with Tesla "Autopilot". All parties point fingers at the other and it becomes an expensive mess to sort out.
Whereas a dedicated self or human driving car - you know exactly who's liable immediately and can just deal with the collision (never "accident") straight away.
Self-driving cars will be a thing eventually, but they'll be a totally different thing. They'll be a personal transport unit that you hire or rent. The seats don't even need to face forward or even be seats - they could be beds! But the obsession with trying to make the car do the human's job but tolerating interference from the human, not to mention all the other humans around it, on a road built for humans and signs and signals readable by humans, plus handing back to a dead/inattentive human if it panics, and putting the onus on the human at all times... that's just a ridiculous mess of liability.
I would be happy to see a little automated pod zooming down an isolated lane of a motorway overtaking me, with kids lying on a bed reading a book on the back seat, and mum and dad making sandwiches in the front seat (which is turned to face the kids). I'd be in sci-fi heaven.
But what we have at the moment is idiot-hell where some twat thinks that their self-driving car is infallible, falls asleep at 70mph and kills a family, then tries to blame the manufacturer when it's not even clear if he ever turned on the self-driving at all.
what we have at the moment is idiot-hell where some twat thinks that their self-driving car is infallible, falls asleep at 70mph and kills a family, then tries to blame the manufacturer when it's not even clear if he ever turned on the self-driving at all.
At the moment a "self driving car" is legally speaking no more than cruise control; it is there to assist the driver and is never in control of the vehicle. The owner is always liable for a crash and it's very difficult to get the manufacturer for anything because there is no process built into law for holding the manufacturer responsible for anything beyond manufacturing faults. If the manufacturer is moving into driving however then clearly that is no longer fit for purpose.
I would be inclined to support a different method whereby a self driving car must pass a driving test. Additionally, at the moment nobody bothers legally crucifying a driver who kills themselves in an accident for obvious reasons. If a self driving car manages that then it needs to be prosecuted and banned for whatever period is appropriate; and that needs to remove every instance of that self driving vehicle from the roads for the protection of the public for the defined duration of the ban. And also; the manufacturer should supply insurance for the self driving portion of the car or self insure by agreeing to bear all costs and depositing the half million required in an escrow bank account as per the requirement in the highway code.
I would be happy to see a little automated pod zooming down an isolated lane of a motorway overtaking me, with kids lying on a bed reading a book
How do you wear a seatbelt lying on a bed? Surely you'd get your head smashed open against the rear of the vehicle in an accident, and then be catapulted through the front window? I suspect that you're always going to be stuck with seats and seatbelts.
That's a problem long solved by taxi ranks - and all other forms of public transport, actually.
If public transport meant a pod that turned up at your house within five minutes and took you exactly where you wanted to go with few to no diversions, why would you own a car?
Sitting in traffic and searching for a parking space is not my idea of fun.
If you flag down a taxi, and it smells of body odor, fentanyl smoke, urine, vomit, sex fluids, and/or old McDonald's Happy Meals, you're gonna say, "Sorry, Charlie ... I'm not getting in there. I'll get a different taxi."
If people are renting a non-personally-owned transport pod, are they going to have the chance to pass on a skanky one, or are they stuck with Hobson's choice? Because if it's Hobson's choice, people will just refuse to use the pods.
May not be fair, but to bureaucrats that won't matter. Somebody always has to be liable. In the UK it'll be the driver because the bureaucrats hate motorists. In the US it'll be the car maker or software provider, simply because that way there's plenty of money to be made by the lawyers (easier than wringing it out of individual driver's insurance companies on a slow case by case basis).
Looks like the UK is heading towards a totalitarian dystopia, and self-driving cars are just the thin end of the wedge.
Here is why:
The government could use the pretext of ensuring safety and efficiency in self-driving vehicles to install advanced surveillance systems. These systems could monitor not just traffic conditions but also keep track of citizens’ movements, associations, and routines.
With the advent of self-driving technology, the government could exert more control over where and when people travel. By requiring mandatory routes or restricting access to certain areas, the government could effectively control population movement under the guise of traffic management or environmental concerns.
Promoting reliance on automated vehicles could lead to a decline in driving skills among the populace, making them more dependent on government-controlled transportation systems. This dependency could be leveraged to control aspects of citizens' lives, such as limiting travel during certain times or to certain locations.
By highlighting the risks of cyber-attacks, the government could enforce strict cybersecurity measures that may include invasive monitoring of personal devices and communications under the pretext of national security.
The government could exploit the vast amounts of data collected through self-driving vehicles for commercial gain or political manipulation. This could include selling data to third parties (as they do with NHS now) or using it to manipulate public opinion and electoral outcomes.
By controlling the rollout and access to self-driving technology, the government could create a divide in society – those who have access to the latest technology and those who do not. This could lead to social stratification and increased control over the privileged class.
With self-driving vehicles being highly dependent on software and remote control systems, a government with nefarious intentions could theoretically gain access to these systems. This access could be used to remotely control vehicles, directing them to crash at high speeds, making it appear as an accident. Such a method of targeting opponents would offer the government plausible deniability, as vehicle crashes can often be attributed to mechanical failures or errors in the self-driving system, rather than foul play. This tactic could be used selectively and covertly against key political opponents, journalists, activists, or anyone deemed a threat to the regime. The randomness and apparent non-connection of these accidents could make it difficult to trace back to the government. People raising concerns could be dismissed as conspiracy theorists etc. using the usual methods.
Where do they get these arbitrary numbers from? Doing what, exactly?
Manufacturers already have their R&D teams, and if they solve the problems around self-driving, those teams definitely won't be growing, so it won't be that.
Manufacturers already have their building teams, and are progressively automating their build processes, so it won't be that.
Existing mechanics will mostly retrain to be qualified for work on autonomous cars, so it won't be that.
Taxi drivers will be displaced by autonomous cars, so it definitely won't be that.
I'm sorry, but what drugs are the researchers on? Neither 2 nor 40 seconds is in any way adequate. It needs *minutes*.
Either the time to handover is short (and 40 seconds is still short), in which case a self driving car is largely useless as the driver has to be continuously alert and in a position to take control.
Alternatively it's a self driving car that actually works within a given environment (let's say a motorway), in which case it is 'safe' to take a nap, have lunch, engross yourself in a book or computer game, where there is several minutes warning that the motorway turn off is arriving and control is needed on A/B roads. If the driver does not take control a few minutes prior to turn off, the car most probably should move to the hard shoulder and stop. That has implications both for smart motorways, and also for detection and penalties for not taking control in a timely manner.
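For a sense of scale, here is what those handover windows mean in road distance at the motorway limit - simple arithmetic, nothing more:

```python
# Distance covered during each handover window at 70 mph (the UK motorway limit).
MPH_TO_M_PER_S = 0.44704
speed = 70 * MPH_TO_M_PER_S            # ≈ 31.3 m/s

for seconds in (2, 40, 180):           # the 2 s and 40 s figures, plus a 3-minute case
    print(f"{seconds:>3} s handover ≈ {speed * seconds / 1000:.2f} km of road")
# prints ≈ 0.06 km, 1.25 km and 5.63 km respectively
```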
Can't say I'm confident or particularly enthused about the prospect of a self driving car in the foreseeable future, even if it's highly attractive to be able to tell the car to do 4-6 hours of motorway driving and chill out, or even the prospect of starting a long distance journey early in the morning and sleeping most of the way.
as the driver has to be continuously alert and in a position to take control.
That is probably going to be extremely frustrating for many people. It's like being a passenger supervising another driver. At least when driving you are doing something that is somewhat exciting and stimulating. Sitting in such a car would probably be the equivalent of watching paint dry, except that you need to jump in and remove any fly you notice.
We have two Mway junctions near town with pairs of giant roundabouts, four or five lanes. The road markings were pretty unclear to start with and are now (mostly) so faded that they're becoming a real hazard, esp for those who don't know the roundabouts. I am not sure how a self-driving car will negotiate this sort of thing... never mind who's responsible for any accidents.
it's embarrassing.
If a car wants to drive by itself (or its manufacturer does), then it sits a bog-standard UK driving test.
If it passes then - like us meat puppets - it has demonstrated a level of competence assumed to be "safe enough".
All of this hand wringing would make sense if human drivers never made mistakes or killed people.
Passing a driving test is utterly trivial. Current cars on the roads are capable of it, to the extent that any car would ever be - they can't be seen to check mirrors, eg, so the test isn't actually applicable.
We're way beyond that stage already. The hard stuff is basically untouched, but urban driving well enough to meet driving test standards is easy.
I think about when I am driving in traffic. I am doing stuff like establishing eye-contact with pedestrians, cyclists, other drivers. Assessing whether they know my intentions and I know theirs. Then there's classic scenarios like a football rolls across the road from between parked cars - probability that a kid chasing it might run out. Someone is in that parked car - are they going to open the door. If you've spent time on the road on a bicycle or motorbike and you're still alive, you've probably got used to assessing stuff like that.
Motorways are probably a better option for automation. But there's still stuff there - like if you've got cars both overtaking and undertaking a middle-lane dope, they might follow the manoeuvre with attempting to both occupy the same space in the middle-lane at the same time.
DEATH RACE 2030
Drivers start on the east coast with ten million dollars. Everybody that gets run over is a hundred thousand deduction. You forfeit all if the car crashes (especially if you die). What's left if/when you reach the west coast is yours to keep. Better hope it's a positive figure.
Televise the whole thing. You'll get idiots lining up to participate. It'll be a great way of testing the technology and thinning the crowd a little while entertaining the rest. As for payments and sponsorship? Well, we have the technology to put big OLED panels on the cars to push endless adverts, not to mention along the route like those god-awful boards around the edges of football fields these days. And if it distracts the robodriver, oh well...
I really don't understand why so many people, including you, are so totally confused about what are the easy parts of autonomy, and what the hard.
The bits about eye contact and making intentions clear are largely irrelevant - autonomous cars will be entirely predictable and very cautious and patient compared to human drivers. But it's a fair point to suggest we might want some additional signals added to autonomous vehicles for communicating with other road users.
The stuff about noting hazards and motorway driving are the (relatively) really easy bits. Sticking to rules instead of driving dangerously due to impatience is not an issue for computers.
The stuff that is hard for autonomous vehicles is things like unconventional road layouts, cones round roadworks, and similar stuff humans find very easy - basically, like Captchas, where we are much better at certain kinds of image recognition.
Uh...
autonomous cars will be entirely predictable
The issue is that you can't predict the behaviour of the environment. If the autonomous car is going to work on the basis of "if X then Y", well... good luck. It's not going to work.
we might want some additional signals added to autonomous vehicles for communicating with other road users
One thing a driving instructor tells you is that you should never trust the signals you see on a car; you have to use your instincts to figure out the intention.
A classic example is roundabouts. Drivers tend to give wrong signals all the time, take the exit at the last second, or take one they didn't signal they were going to take.
You could probably mitigate some of it if all cars on the roundabout were autonomous and could coordinate with each other, but it's never going to happen.
The stuff about noting hazards and motorway driving are the (relatively) really easy bits.
Nothing easy about motorways. People tend to picture them as a straight road with wide lanes and nothing going on - until you get an animal on the road, or your tire bursts, and many other scenarios you can't do "if X then Y" for.
The stuff that is hard for autonomous vehicles is things like unconventional road layouts, cones round roadworks, and similar stuff humans find very easy - basically, like Captchas, where we are much better at certain kinds of image recognition.
We currently don't have a system for autonomous driving, so probably the hardest part is to build one - such technology does not exist yet. Sure, you have some systems that pretend they drive autonomously, but it's like trying to have a conversation with an AI: you can quickly tell something is "off", mainly because current systems can't reason - contrary to what the brochures for investors say.
I don't think self-driving vehicles are equipped to deal with trees falling off buildings down on top of vehicles in the roadway. I mention this because I saw it once. The human driver stomped his gas pedal (the lane ahead of him was two carlengths clear), and the tree crashed through the cargo area of the delivery van he was driving. Death avoided! I don't think self-driving cars have sky-scanning sensors. (I think the human driver probably saw an unexpected shadow, looked up, then reacted.)
"But how often does something like that happen?", you ask.
Risk is the likelihood of something bad happening, multiplied by the negative consequences of it happening. I think we all set "human death" at an extremely high value of "negative consequences".
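As a minimal sketch of that definition - with deliberately made-up weights, just to show why the severity term dominates once a death is priced that highly:

```python
# Risk = likelihood of the event × severity assigned to its consequences.
# The probabilities and severity weights below are illustrative only.
def risk(probability_per_year: float, severity: float) -> float:
    return probability_per_year * severity

print(risk(1e-6, 1e8))   # freak falling-tree fatality -> 100.0
print(risk(0.5, 50.0))   # routine kerbed wheel        -> 25.0
```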
I think we all set "our own death" at an extremely high value of "negative consequences".
I think a majority of us set "human death" at an extremely high value of "negative consequences".
The suits in the C suite are more concerned about dividends.
There's been a lot of posts about this. A perfect self driving car would be brilliant.
Go to a concert, have a number of beers, as the encore plays ring your car. It starts from its cheap car park several miles out and picks you up at a pick up and drop location. You slump dozily into the car and it drives you home as you sleep. Total cost only a few pounds, rather than dozens by going in a taxi.
Decide to travel to the Isle of Skye. Stick bags in the car at 1am. Strap yourself in, fall asleep. Arrive prior to 9am for breakfast.
You decide to go to *Berlin*. Flights are inconvenient. Leave at lunchtime, pop on Eurostar, have dinner in Calais. Snooze until Berlin. (not actually the best example, train may be taken in the same time, but you take the point)
The Taxi of Mum or Dad is no longer a thing. Trip for your child is approved, car drives them there, drops off, returns home.
You have a hospital appointment and either can't drive, and/or the parking is appalling. Car drops you off and returns home whilst you have your appointment.
Need to send something to your friend in a hurry. Stick item in car, tell it to go to your friend. Once item is retrieved, car returns to you.
In a utopian future with perfect AI and no energy scarcity it would be fantastic. Unfortunately it's unlikely to happen soon.
Back in the world of self driving cars as they are I see minimal advantages. Possibly lane assistance might be useful, but as it's fundamentally necessary to remain alert I see the advantage of very few assistance features.
Auto headlights (that can be over-ridden) are one of the few enhancements I like. Reverse parking sensor and camera. Cruise control. I imagine I'd probably trust an auto parallel park as that seems doable, and I'll admit I do not always get the angle correct first time.
I've tried cars with other distance sensors and they're incredibly annoying, beeping at me when I'm 'too close' to concrete bollards on a motorway with narrow lanes and no room to safely move further away. Even rain sensitive wipers never get it right, I'd rather adjust things myself.
Current, grossly inadequate technology is _already_ safer per mile than the global average. About on a par with the US average. Still some way behind the average in countries with decent levels of driver training.
It really isn't that hard to be better than the bad drivers out there. Problem is, everyone thinks they're one of the good drivers.
This is brought to you by HMG... it's never going to happen. Just look at the news piece slapped in the middle of the article... Something about auto lane changing by 2021. This is all just gov word crap to make them feel groovy and hip and "to infinity and beyond". Anyway, one S. Braverman got binned today so not a total loss.
The problem is one of defining "safe enough", as the article points out.
We probably want self-driving cars to be safer, if not A LOT safer, than human drivers. Autonomous vehicles killing thousands of people a year is not going to be acceptable, even if we accept that death toll from fallible human drivers.
How many UK deaths per year caused by self-driving cars is acceptable? If zero, then self-driving cars are going to have to be VERY slow and cautious out on the roads, especially when mixing with human-driven traffic.
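To put numbers on that question, here is the ~5 deaths per billion miles figure quoted earlier applied to a ballpark UK annual mileage - my rough assumption, not an official figure:

```python
# Scale of the "acceptable deaths" question for the UK, using the quoted human
# baseline rate and an assumed ~300 billion vehicle miles driven per year.
deaths_per_billion_miles = 5         # quoted human-driver rate
uk_miles_per_year = 300e9            # assumption

baseline = deaths_per_billion_miles * uk_miles_per_year / 1e9
print(f"Human-driver baseline: ~{baseline:.0f} deaths/year")        # ~1500
print(f"If we demand 10x safer: ~{baseline / 10:.0f} deaths/year")  # ~150
```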