But... but... we are driving because we like it, right?
I mean, who doesn't enjoy being stuck in traffic for an hour every morning?
Keen to prove that automated cars are safer than human-driven vehicles, Waymo, the robotaxi biz spun out of Google in 2016, set out to simulate 72 fatal crashes that occurred from 2008 through 2017 in the vicinity of Chandler, Arizona. Chandler, a popular venue for testing automated cars in the United States due to its …
Some of us are old enough to remember when electric windows were a new-fangled gimmick! Others might be old enough to remember when heaters were new-fangled gimmicks. Or windscreen wipers. Or the witchcraft of screen *washers*. Never mind those complex and hard-to-repair electric starter motors. You can't beat a proper starting handle!
"Never mind those complex and hard to repair electric starter motors."
An argument can be made that electric starters are what killed off the first wave of electric cars. Ladies were not happy about stooping over to crank a car and there were a number of injuries and fatalities from handles that snapped back on people.
Eh, it's just tech. It'll start out costly, but if it can be made to work, eventually it'll be standard on all cars.
Is the self-driving hardware and software safety-related such that its failure can lead to deaths? That is a discussion that, amazingly, has yet to happen, presumably because there are no commercially available level 4 or 5 vehicles (high/full automation), which would be required for the driver not to be concerned about sitting in a traffic jam.
It is moot at the moment, because there is always supposed to be a driver sitting there, holding his breath, and ready to take over at a moment's notice. But if failure of the self-driving apparatus is likely to kill someone then, as with aircraft, the labour-intensive quality assurance processes and necessary redundancy are likely to keep the cost high.
This has been predicted for a while
Once actuaries show that self-driving cars are less likely to get into crashes than human-driven ones, the liability part of insurance premiums will reduce dramatically, and that accounts for about 75% of what's currently paid
We already see this in effect anyway - the premiums for inexperienced drivers are HIGH because they're far more likely to get into a crash where they're at fault
DidYouKnow - the SINGLE greatest number of motor insurance claims are related to reversing incidents in car parks. Claims in this area outnumber just about everything else put together (but they're cheap to fix). Second most common is "Other incident in car park" (mostly: "drove into another parked car whilst turning into parking spot")
Road crashes are spectacularly expensive by comparison but the sheer number of low speed incidents demonstrates that many people have no concept of spatial awareness
One of the more interesting pieces of fallout of the change in this direction will be that insurers are likely to require "advanced driving certificates"(*) before even offering cover for manual operation, and even the UK's driving tests will seem mild by comparison with the ability levels required to obtain a manual licence in the (near) future. PCV and HGV licensing requirements and restrictions are especially likely to tighten up
(*) Paradoxically right now a lot of them INCREASE premiums if drivers have these kinds of things as they're perceived to be on the road more and therefore a greater overall risk
"Once actuaries show that self-driving cars are less likely to get into crashes than human-driven ones, the liability part of insurance premiums will reduce dramatically"
I don't know about that, but I imagine it will change pedestrian and cyclist behaviour.
At the moment irresponsible (or malevolent) pedestrians and cyclists have to be fairly careful on the roads because they are dealing with a human being who might not have seen them or might not react in good time. Once cars reliably come to a screeching halt on a dime as soon as someone takes a single small step into the road, people will become much less cautious.
Pedestrians may come to regard all roads as a right of way for them because of this. If I want to walk down the middle of the carriageway, what is to stop me? Drivers might get very, very angry but they can't actually assault me since everything will be recorded by the ubiquitous self-driving car video cameras, and I bet the car won't let the driver intimidate me by driving three inches from my backside, or make some extraordinary manoeuvre to drive around me. Similarly, as a stupid and thoughtless cyclist I can be even more of a complete arse.
Molesting self driving cars for fun or profit might even become a thing - for example changing speed limit signs, or entrances to premises, in ways that would be transparently obvious to a human driver, but the machine intelligence can be more easily fooled.
I can very accurately predict that there will be a new "game" involving forcing computer-driven cars to perform an emergency stop or at worst swerve violently in order to avoid an accident.
It'll probably have a catchy name like AutoBaiting or something.
It will then be followed by badly drafted and rushed laws stipulating the punishments for such behaviour and in response the AutoBaiters will just modify how they do it.
It will then be followed by badly drafted and rushed laws stipulating the punishments for such behaviour and in response the
AutoBaiters will just modify how they do it and the police will complain that they simply don't have the resources to investigate yet another common low-level street crime. FTFY
"If I want to walk down the middle of the carriageway, what is to stop me?"
Possibly the policeman who comes knocking on your door when one of the hundreds of cameras pointing at you from all the self-driving vehicles uploads the offending image and video for facial/gait/whatever recognition and passes your address to the police, courtesy of Facebook :-)
It'll be such easy pickings for the Police that they'll go after all of them to get a nice clear up rate.
"This has been predicted for a while"
I remember a short story by Larry Niven where someone was sentenced to be dismantled for the organ banks for manually driving their car.
Not sure of the date, but I think I read it in The Long Arm Of Gil Hamilton back in the early 80s. It was a collection so the story will likely be at least a few years older.
"Do you think the wage slaves who have to be at work at spot on 8:59 will be able to afford driverless cars?"
Beyond the cars, road signage and markings will need to be kept up to standard. A patrol will need to make sure that any roadside advertising isn't misread by the cars. The cost to taxpayers will also increase, and the number of roads may have to be reduced to pay for maintenance.
Of course there's fewer incidents if everyone obeys the traffic rules.
It's like saying if everybody obeys the 'Murder is not nice, don't do it' rule then fewer people are killed......
But, People as a collective group are stupid, just look at the Darwin Awards.
"Of course there's fewer incidents if everyone obeys the traffic rules."
Just about every road crash involves serious failure of at least 3 parts of the safety equation
In a lot of cases, there's already been a failure on the part of road DESIGNERS and maintainers, which erodes the safety margins to the level that an unforced error elsewhere pushes things over the line
Blaming drivers is an easy way for local authorities to avoid criminal liability, but there's increasing pushback on this even in the land which made it illegal to cross the road ("jaywalking") so that cars could roam freely: https://nyc.streetsblog.org/2017/01/05/states-highest-court-holds-nyc-liable-for-injuries-on-streets-without-traffic-calming
There is also no ego involved. If you look at a lot of crash videos, many are caused by one driver making a mistake (or being an egotistical arsehole) and the one "in the right" not making allowances and forcing his right-of-way to the point it causes an accident.
When you get two big egos facing each other, the result is always going to be a mess, regardless of whether it is on the road, in business, in politics or wherever. Remove the ego, you remove the danger. A good, human, defensive driver would probably also have avoided many of the crashes.
Add in distractions, like smartphones, and you are just asking for trouble. I leave my 'phone in my bag or pocket when driving. If it is a message, it can wait until I get to my destination. If it is important, they will ring and I can either take it hands free or safely pull over and take the call.
"many are caused by one driver making a mistake (or being an egotistical arsehole) "
True, but there is a big target to pin the liability on. Who's to blame if the car was driving? The manufacturer? Road maintenance? The owner? Even in a situation with "no-fault" insurance, somebody will be looking to place the blame somewhere and get compensation. The problem is that investigations will be very technical and take a lot of resources. Now, some accidents presume a guilty party. If one car crashes into the back of another, the driver in the rear is presumed to be at fault unless a dash cam, CCTV or witnesses say that the person in front slammed on their brakes or blindly switched lanes right in front of the other car. With automated cars, it could just as likely be that the rear car's forward sensor was faulty as that the car in front saw a squirrel and did an emergency brake rather than just culling a very stupid squirrel.
Human driving depends on the fiction that people can continuously pay attention to the road ahead (which itself is known to be false) while at the same time taking note of traffic and direction signs, the rear-view mirror, the speedometer and so on. Learning to drive pushes a lot of this impossible workload into the subconscious mind. But when our own automated natural intelligence goes wrong, it can result in accidents where people apparently haven't seen a pedestrian or cyclist that was in plain sight, and they have no idea why.
AI doesn't suffer from this problem, and neither should it deliberately disobey traffic rules as most human drivers do from time to time. But it has two other problems. Firstly, current implementations seem to make more mistakes than humans in categorising what is in vision. This will presumably improve. But secondly, not actually being intelligent, it cannot grasp what is going on in a situation in terms of the intentions and likely actions of human participants. This would be a big step beyond current AI. I suspect that in complex situations such as inner-city driving we will need to rely on human intelligence for a long time yet, and this is where the accidents involving pedestrians and cyclists occur. For these situations some automated assistance to human driving is available, but that brings the problem that humans hate making effort, and the more help you give them, the less work they do themselves. We will need some good psychology to overcome this.
Know what you mean! I'm a Limey and I have to actually think when I drive in Spain rather than going by instinct. I feel like I'm driving in some weird alternative mirror universe.....
I find that having sat-nav helps me concentrate on the driving, although you have to be careful as there was a stretch of road I had to drive along where you had to do the opposite of what the sat-nav was telling you. E.g. "turn left", nope there's a no entry sign there, I'm going to have to turn right....
Also I'll find myself at roundabouts saying "on does trays"....
Citizen of Nowhere,
Judging by the number of cars that also have this problem when I am driving.
You must be doing an awful lot of driving in this country in *lots* of cars, or it is not just you who has this problem!!!
P.S. May I ask if you have 'Diplomatic Immunity'!!!
I left a French hotel at 5am to catch a ferry (near Le Mans)
Pulled right out of the car park onto the left lane.
Voice from the back "are you really trying to kill us"
Happily no-one in sight either way.
For the next 5 years small things took great delight in pointing out when I was trying to kill everyone.
(apart from that time, happily, incorrectly)
It happens. I got lucky.
I've always hired a left-hand drive when having to drive on the wrong side of the road. I find the different layout in the car (mainly having the gear stick on the right) is enough to compensate and I've never found the desire to drift back to the left.
I've frequently been in the front (rh) seat of taxis and been relaxed (even in a Cairo taxi - though perhaps not quite so relaxed, especially when the driver went into a tunnel, on the wrong side of the dual carriageway - but the oncoming traffic just parted to let us through, without honking or flashing lights, as though it was the natural thing to do). The scariest ride I had was as a front seat passenger in a lhd car in Cumbernauld (near Glasgow); sitting in what should have been the driver's seat with no control was definitely scary!
I'm not sure that intelligence is needed to predict the motion of most groups - particularly when you can substitute an enormous set of observations.
People, whether in cars, on foot, on motorbikes, pedal cycles, scooters or skates, are basically predictable - often due to simple physics: a vehicle in motion cannot suddenly translate laterally, it must adjust its angle and do so. Similarly people can't instantly disappear, much to the chagrin of many people who currently control heavy machinery on the roads.
Pedestrians are probably the least predictable group, since they can stop to concentrate on something unrelated.
"I'm not sure that intelligence is needed to predict"
Not 100% of the time, no.
But it massively helps on occasion - humans can see a person approaching a pedestrian crossing and predict a sudden 90 degree turn to cross the road.
Humans can see a person at the side of the road trying to flag down a taxi and predict it doing a u-turn in front of you.
Humans can see a car in front fish-tailing round a bend, and predict black ice.
And the most obvious one: humans realise that certain objects simply don't disappear into thin air, or other 'obvious' anomalies that AI simply don't care about.
"And the most obvious one: humans realise that certain objects simply don't disappear into thin air, or other 'obvious' anomalies that AI simply don't care about."
Another one is seeing a load of brake lights up ahead on the motorway and knowing you need to start slowing down or get off for a detour.
"And the most obvious one: humans realise that certain objects simply don't disappear into thin air, "
Really - I'd suggest there are quite a few people driving who don't realise that (although I'll give credit where it's due and estimate that 50% of them just don't bother looking, so aren't aware of anything there *to* disappear).
A pedestrian making a 90-degree turn at a dropped kerb or other pedestrian crossing is also easily predictable by ML, as is the concept that brake lights ahead mean you're going to have to slow down as well... Although that's something else that a significant minority of motorists seem unable to appreciate.
Computer learning doesn't involve intelligence, so understanding doesn't come into it. It's well-proven that computer learning systems can evolve the right rules. People generally assume the wrong parts of this are the hard ones. Rule-generation and decision-making isn't hard. Accurate location (and having sufficient location data) is the hard bit.
Preventing low-speed collisions is trivial. It's only at higher speeds where emergency braking takes too long without prediction. Autonomous cars can already deal just fine with urban environments filled with pedestrians, cyclists, and so-on. (They don't currently have the capability to automatically shoot cycle-rickshaw-riders in the head, but that's a legislative issue.)
"Autonomous cars can already deal just fine with urban environments filled with pedestrians, cyclists, and so-on."
Which (unsurprisingly) is where the greatest number of insurance claims (by number, not value) happen
Humans are lousy at dealing with multiple simultaneous inputs
"Which (unsurprisingly) is where the greatest number of insurance claims (by number, not value) happen"
There are already lots of cars that can parallel park, reverse into a parking spot, etc. BUT, if you look at the videos on YouTube of Teslas using the summon "feature", you can see that there are huge limitations in tight environments. If you insist that Tesla is the leader in machine driving, you should be rather scared about that.
" It's well-proven that computer learning systems can evolve the right rules. "
Is it? It seems that being able to determine which rules are being generated by deep learning remains elusive, never mind proving that they are right. This is highlighted by periodic revelations that the predictions are based on some otherwise unnoticed bias in the data. I recall a predictor that was supposed to differentiate wolves from huskies, which turned out to mainly recognize snow.
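To make the wolves-and-snow point concrete, here's a toy sketch (all data and feature names invented for illustration): a trivially simple learner that picks whichever single feature best separates the training labels. If a background feature like "snow" happens to correlate perfectly with the label in training, the learner latches onto it and then collapses on test images where the correlation breaks.

```python
# Toy "decision stump" learner: predict label == feature value, and
# pick the feature that scores best on the training set.

def stump_accuracy(samples, feature):
    """Fraction of samples where the feature value equals the label."""
    return sum(s[feature] == s["wolf"] for s in samples) / len(samples)

def pick_feature(samples, features):
    """Choose the single feature that best separates the training labels."""
    return max(features, key=lambda f: stump_accuracy(samples, f))

# Training photos: every wolf shot happens to contain snow, so "snow"
# separates the labels perfectly; the genuinely relevant feature doesn't.
train = [
    {"snow": 1, "pointy_ears": 1, "wolf": 1},
    {"snow": 1, "pointy_ears": 0, "wolf": 1},  # ears occluded in photo
    {"snow": 0, "pointy_ears": 1, "wolf": 0},  # husky with pointy ears
    {"snow": 0, "pointy_ears": 0, "wolf": 0},
]
chosen = pick_feature(train, ["snow", "pointy_ears"])
print(chosen)  # 'snow' -- 100% training accuracy, so it wins

# Holdout photos: a wolf on grass and a husky in snow expose the shortcut.
holdout = [
    {"snow": 0, "pointy_ears": 1, "wolf": 1},
    {"snow": 1, "pointy_ears": 1, "wolf": 0},
]
print(stump_accuracy(holdout, chosen))  # 0.0 -- the "snow detector" fails
```

The learner generated a rule, and the rule was even "right" on everything it was shown; the problem is that nothing in the process tells you it learned the background rather than the animal.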
"it can result in accidents where people apparently haven't seen a pedestrian or cyclist that was in plain sight, and they have no idea why."
I can give an example myself from 30 years ago, not far from New Zealand's Parliament house
Narrowish but extremely busy one-way road, cars parked both sides (various doors opening), slightly uphill, and I was behind a cyclist weaving all over the lane, because I needed to turn 50 metres past her and didn't want to jump in front/cut her off
I was concentrating so hard on looking for car doors flying open and being wary of the cyclist (who seemed on the verge of falling off) that I completely missed the zebra crossing (cars illegally parked hard up against it either side) and pedestrian who had walked out onto the road without looking, until she was in front of me (my three passengers were also startled. We were ALL worried about the cyclist)
Thankfully I was going slowly enough to stop even though completely startled. Although there were markings on the road for the crossing, they were extremely faded and essentially disappeared in wet weather (anyone who knows Wellington knows this is "most of the year").
It was very much a case of if you knew the crossing was there you paid more attention to the footpath, but this crossing was a unicorn (it was a VERY busy road and the signage was nearly invisible).
Later on I lived in that area and as a pedestrian on foot I _never_ went on that crossing without checking carefully as it was too easy to have to dodge 40mph cars not slowing down (particularly on weekdays)
A complaint to the council resulted in a safety assessment and the crossing being completely rebuilt/remarked and repositioned - apparently I wasn't the only driver who was surprised by the crossing, and safety officers observed a bucketload of near-misses (which gelled with my experience on that crossing as a pedestrian)
So: being sensible, you had clearly slowed down to a point where the various inputs didn't overwhelm your processing capacity, and likewise you recognised a risky environment, so were going slowly enough to stop for said pedestrian. That's the point which we'd like to see AI get to, obviously some time in the future.
However when AI gets there, it will outperform all those human numpties who aren't as good a driver as you, and drive too fast for the conditions - and we've all seen plenty of those...
I don't pay attention to the speedometer. I just peer at it every so often. I am not glued to the rear view mirror either. I am looking in my direction of travel as that's where problems are likely to crop up. The vast majority of the time I don't need to look at regular road signs as I know where I am going. When I am going somewhere new, I am much more attentive to all these things and probably a source of frustration for the people behind me that do know where they are going.
I can tell if somebody parked at the side of the road and standing outside their car is going to wait until I pass before they open their door or is just going to fling it open without looking. An AI may detect a person standing next to the car but will have to err on the side of safety and stop to allow them to open the door and get in to avoid an accident.
"its simulated AI driver isn't a careless, distracted jerk".
Nor in fact are most human drivers. The actual incidence of accidents per vehicle mile is very low, even including minor prangs. I'm also (with quite a bit of experience of them) very wary of simulations (particularly simulations by interested parties). Furthermore, until (unless) full autonomy is achieved, the "safety net" of a human driver taking over in emergency is utterly illusory. Reaction time from total inattention (as of an uninvolved passenger) is substantially longer than from partial attention (as of an inattentive but involved driver).
"Nor in fact are most human drivers. "
Bollocks. It's just that being a selfish, inattentive shitty driver doesn't cause accidents as often as one might think - the usual effect is just to create traffic and slow people down for no good reason.
ETA: I note that we have one person in this thread who thinks it's perfectly reasonable to keep driving after years of incidents of driving on the wrong side of the fucking road!
I keep pointing out that the shocking thing with Anne Sacoolas is that if she hadn't run away, she'd have got a community service order. The road laws in this country make it a slap-on-the-wrist offence _at most_ to kill a cyclist by wandering onto the wrong side of the road. People upset about extradition/immunity are completely missing the point: it'd be like extraditing over a parking ticket.
Does it also depend on the quality of the driving test? Looking at the difference between UK / European tests and those in the States (admittedly mainly on YouTube), I'm surprised that the death rate in the States is only twice and a bit that of the UK's (deaths per billion miles driven).
So what they have shown is that "When simulating the optimal conditions for a self driving car, the simulated self driving car performs optimally." - No shit. How does a human driver do in the same simulation, with no other traffic or distractions, or weather conditions.
Yep, come up to a rural area in the northern USA: no lane markers, road verges that drop into ditches, no streetlights, and enough snow falling to enable everyone to play "guess where the road is". In those cases, if I HAVE to drive, it's twenty mph on a road marked 55mph, and stick to the roads I really do know very well.
Or rural Somerset. Lots of steep hills, single-track roads as wide as one car. Reversing into a passing space if a car comes the other way. Negotiating a big van coming the other way by reversing right into a bush, making contact with the bush. Having no traction trying to hill-start on wet leaves. Avoid these roads like the plague when it is slightly icy out.
The road down to my waterwheel is so thin with overhanging bushes that they touch both sides of the car. I once went down in a car where someone had these new-fangled parking sensors. Gordon Bennett, you would have thought we were a fleet of lorries reversing over an alarm factory with the number of alarms that went off as we drove down the road.
"Or rural Somerset. Lots of steep hills, single track roads as wide as one car."
The roads in my area are much the same. The chances are zero that They will be doing anything about that in my lifetime. Given that, I'm not going to be able to rely on a car to deliver my over-the-limit carcass back to my front door. Not that it's a problem, but it's not going to be a selling point that does anything for me even if I were to take up alcoholism and wish to practice it outside of my home.
Being outside of a big city, I was able to afford to purchase my own home. Shit roads is one of the compromises that had to be made.
"even if I were to take up alcoholism and wish to practice it outside of my home." - I for one am looking forward to the reopening of pubs so I can practice my alcoholism outside my home. This way I need to stay sober enough to stumble 1.5 miles home up a hill, which also keeps me fit. Instead, during pandemic I only need to stumble up the stairs, and to be fair this is optional.
As any programmer knows, most and all are not the same thing. Most human drivers might pass but all cars with the same software would pass.
I like to drive and have zero interest in self driving cars but my latest car has a number of assistive technologies like cross traffic and blind spot warnings. I have no doubt that I would detect most of those on my own but I'm not sure about all of them.
that being a driverless car test pilot is a really bad idea. If a (fatal) problem occurs, your company will close ranks and use their financial might to prove how/why you were negligent, regardless of reality. They have a reputation to defend; your boss might care about you (maybe) but the corporate body wants a good PR outcome.
I don't know what kind of salary I'd have to earn to take that risk
That's complete nonsense. The driver is very heavily monitored. The only driver who got in trouble was sitting back, watching Netflix, and not doing their job at all. And when they crashed, it was their fault because they were watching fucking TV, and I don't think it's hard to know that would be dangerous driving.
Human beings don't actually work like that though. There was a study linked here, but there's probably more, on how automated systems affect human behavior. They not only lower our attentiveness & alertness (obvious), but the study had shown they lower it to the point that if something happened that required immediate human intervention, the delay it takes for our brains to get back into a "readiness" state necessary to make corrective action is so long, our ability to prevent or minimize catastrophic accidents is diminished. The proposed solution they gave is for the automation to simulate failures more often, to keep human workers more attentive. E.g. If a supply line automation actually fails 1/10,000 operations, change it to simulate failure once every 1,000 operations, to keep humans on their toes.
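The injected-failure proposal above can be sketched in a few lines (all the rates here are the illustrative ones from the comment, not from any real system): top up the real fault rate with simulated drills so the operator sees an alert often enough to stay in a "readiness" state.

```python
import random

REAL_FAILURE_RATE = 1 / 10_000   # actual fault probability per operation
TARGET_ALERT_RATE = 1 / 1_000    # how often we want the operator to intervene

# Inject just enough simulated faults to top real ones up to the target.
DRILL_RATE = TARGET_ALERT_RATE - REAL_FAILURE_RATE

def next_event(rng):
    """Classify one operation: real fault, injected drill, or normal."""
    r = rng.random()
    if r < REAL_FAILURE_RATE:
        return "real_fault"      # must be handled for real
    if r < REAL_FAILURE_RATE + DRILL_RATE:
        return "drill"           # looks identical to the operator
    return "ok"

rng = random.Random(42)
events = [next_event(rng) for _ in range(1_000_000)]
alerts = sum(e != "ok" for e in events)
print(alerts / len(events))  # roughly 0.001: ten times the real fault rate
```

The point isn't the arithmetic, it's that the operator can't tell a drill from a real fault, so every alert exercises the same "take over now" response.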
Being the test driver would most certainly suck. By definition your ability to take over an already-moving vehicle when the automation fails is significantly diminished compared to if you already had your hands on the wheel and turned the self-driving completely off. It's a perfect storm: you automatically will have severely delayed reactions compared to if you just did the driving yourself, BUT your job is not to drive but let the AI do as much of the driving for you, AND said AI is still dangerously incompetent at driving. Plus all the legal might those companies have if something does go wrong, like top commentor pointed out.
I also recommend reviewing the Tempe Uber crash footage if you haven't yet. Yes, the driver was on her phone, but the pedestrian was jaywalking/not using a crosswalk, it was late at night on a long empty road, the road was lit using streetlights AND the pedestrian was crossing in the dark between the streetlights. You can't see her until the last second. I'm not entirely convinced a human driver paying 100% attention to the road ahead could've done any better. The pedestrian seems to have a death wish. PLUS, auto-drive systems have an array of lidar, radar, and night-vision cameras specifically for situations like this, so they can potentially be better than human eyes. This is not a simple case by any means. If she had just not been looking at her phone, I think this case might've already been dropped.
"You can't see her until the last second."
When that story was first reported on El Reg, many people pointed out the poor lighting conditions. Then some commenters who were locals pointed out that the visibility to a human eye is far better than what it appears to be in that video.
"locals pointed out that the visibility to a human eye is far better than what it appears to be in that video."
Yes, video was shot using a much better camera some time later. What the accident demonstrates is that people are not going to pay attention if they don't need to. That was a test car with a driver that was specifically there to monitor the car. Now consider what happens when it's a production car with Joe Public warming the seat. There is a good argument for Nerve Attenuation Syndrome (NAS) from Johnny Mnemonic. With all of the distractions there are today, we often reach out to turn on something else if we have a tiny bit of attention span left. The problem is that humans don't multi-task. Some people are fast at switching tasks, but we can't do more than one thing at a time if there is an overlap in needed resources. I can occasionally sing while playing drums, but I have to have one of those pretty much on automatic. I couldn't sight read a new piece of music and track the lyrics at the same time.
"your boss might care about you (maybe) "
That's silly talk. Your boss is worried about their job and quarterly bonus and possible promotion. If you are having "relations" with them, they might be willing to give you a good reference on the quiet when you apply for a new job.
"In the eight per cent of simulations as a responding driver that Waymo's code couldn't improve on, each was the result of a rear-end collision – which human drivers also have a hard time avoiding. "
It is well-nigh impossible for a person waiting at a red light to avoid being rear-ended, but I'd argue that it should be entirely possible for an automated vehicle to avoid rear-ending a stopped car. I fail to see how Waymo finds this inevitable.
Also, I'd like to ask if they simulated the weather conditions at the time of the accident (of course not), or the state of the road signage (wear and tear - obviously not), or the position of the sun (nope). These are things a human driver has to cope with, and an automated vehicle will have to learn to cope with as well.
In short, their simulation simulated what would happen in an ideal world with ideal driving conditions.
So 10 out of 10 when everything's perfect. Great news.
Now try it again in the real world.
"It is well-nigh impossible for a person waiting at a red light to avoid being rear-ended, but I'd argue that it should be entirely possible for an automated vehicle to avoid rear-ending a stopped car. I fail to see how Waymo finds this inevitable."
Obviously, they meant (though said poorly, IMO) that the Waymo car was the one being rear-ended.
Equally obviously this is PR puff rather than science, but that doesn't mean the thing being highlighted is false. It's very clearly true that self-driving cars are already safer than human drivers, although still not as safe as a good human driver who's actually concentrating, not overtired, etc.
"It's very clearly true that self-driving cars are already safer than human drivers, although still not as safe as a good human driver who's actually concentrating, not overtired, etc"
I'm pretty sure that statement is incorrect, especially when you consider that self-driving cars are not really able to cope with navigating a car down (say) a busy town centre street on a Saturday. A self-driving car still can't make eye-contact with pedestrians and other road users, and use that to decide whose turn it is to move. Self-driving vehicles are being trained in environments that are much simpler than the average driver has to negotiate daily.
"I'm pretty sure that statement is incorrect, especially when you consider that self-driving cars are not really able to cope with navigating a car down (say) a busy town centre street on a Saturday"
I'm pretty sure you're wrong, and that part of self-driving cars is utterly trivial and has been solved for years.
You're not wrong about the inability to communicate with pedestrians etc (although there's no reason we couldn't add signals for that purpose, akin to indicators and brake lights). I make the assumption that there are going to be situations where self-driving cars are pretty slow compared to a human driver - but safe.
"It's very clearly true that self-driving cars are already safer than human drivers"
Citation needed. What is very clear is that AI has almost completely different failure modes from human drivers.
So of course, this test is completely flawed, as one can assume that the simulation never suffered from any of the sensor glitches / interpretation glitches that real self-driving cars do.
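The point about sensor glitches can be made concrete. A toy sketch (all names and numbers are hypothetical, not anything from Waymo's paper) of the difference between a perfect simulated range sensor and one with even mild noise and dropout:

```python
import random
from typing import Optional

def ideal_range(true_distance_m: float) -> float:
    """What a perfect simulated sensor reports: the ground truth, every frame."""
    return true_distance_m

def noisy_range(true_distance_m: float,
                noise_sd_m: float = 0.5,
                dropout_prob: float = 0.02) -> Optional[float]:
    """A slightly more honest model: Gaussian noise plus occasional
    dropped readings (rain, glare, dirt on the lens)."""
    if random.random() < dropout_prob:
        return None  # sensor returned nothing this frame
    return true_distance_m + random.gauss(0.0, noise_sd_m)
```

A planner fed `ideal_range` can brake at exactly the right instant; one fed `noisy_range` has to carry a safety margin and cope with missing frames. If the replayed crashes only ever saw the first kind of reading, "10 out of 10" tells you little about the second.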
Sort of supporting this viewpoint(ish)....
It was shown that ABS was not as effective as some drivers in some circumstances. This was qualified by pointing out that the drivers were professional racing drivers and the circumstances were very unusual. In the normal situation where ABS saves lives (Ice/Heavy Rain etc) it was leagues ahead of normal drivers with non-ABS braking.
If you cherry-pick your research it can come back to bite you, just like when I point out to my wife all the studies showing lots of black 'full-leaded' coffee is good for you... She then points to the mountain of studies which would beg to differ.
To be fair, cadence braking is better at slowing you down, racing driver or not. I drove cars without it for years, so I was expecting it and practiced at it; not sure how I'd do now, but my trained response isn't to mash the pedal as hard as possible in an emergency stop. I would note that the cars without ABS tended to have much better brake modulation.
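The ice/heavy-rain case above is easy to quantify with the standard braking-distance formula d = v²/(2μg). A quick sketch (the friction coefficients are illustrative textbook ballpark figures, not measurements):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    """Braking distance from the standard formula d = v^2 / (2 * mu * g)."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * mu * G)

# Illustrative coefficients of friction:
# dry asphalt ~0.7, wet ~0.4, ice with wheels locked ~0.10,
# ice with ABS holding the tyre near peak grip ~0.15.
for surface, mu in [("dry", 0.7), ("wet", 0.4),
                    ("ice, locked", 0.10), ("ice, ABS", 0.15)]:
    print(f"{surface:11s} at 50 km/h: {stopping_distance_m(50, mu):6.1f} m")
```

On those numbers, locked wheels on ice take roughly half again the distance that ABS does, which is why ABS wins in the conditions where it matters, whatever a racing driver can manage on a dry track.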
I'm not all that big a fan of vehicle autonomy as it is VERY difficult to do properly. I think that operations like Uber and Tesla should, based on their records, be banned from further participation in the technology. We'd all be a lot safer without companies that prioritize profits over safety. But, I must say that Waymo does seem to be taking safety seriously and has compiled an enviable record. Good for them.
There are classes of accidents where autonomous vehicles will surely outperform humans. For example, there are incidents labelled "Looked But Didn't See" (LBDS) or Looked But Failed To See (LBFTS) where a human driver -- often but not always elderly -- looks at oncoming traffic, presumably sees a vehicle or pedestrian, but somehow the information never makes it to the brain. That presumably won't happen with properly designed autonomous systems.
That said, I think that there is probably a class of accidents caused by autonomous vehicles failing to recognize or improperly categorizing entities. And another class caused by situations that humans recognize as problematic, but computers won't -- bad/confusing signage, non-standard traffic control devices (a blinking red arrow currently may not mean the same thing everywhere), humans attempting to direct traffic flow, livestock on the road. There's also the problem of undetected hardware failure. I don't know how you'd simulate things like that. We may not know how safe autonomous vehicles are until they are in general use.
Anyway -- autonomous vehicles are probably going to happen. Not as soon as many people think, but within a decade or two. Let's hope they are designed by people who take safety seriously and are regulated by entities that are not puppets of the car industry.
When a car can take me from Berlin to my mother's place in the middle of nowhere on the Isle of Skye, in the middle of winter, with no input from me, I'll start to think about believing in self-driving cars.
Until then? It's some flavour of cruise control and automated lane keeping (shortly to be mandated on all new cars in Europe, I believe, and required by that mandate to be turned on automatically, although the driver can still (so far) turn it off).
I am not a fan of driver assist systems. It seems to me they replace driver intelligence with hopeful hints... don't need to look in the mirror if the blind spot warning light isn't on; don't need to watch the car in front if the cruise control adjusts your speed to suit; don't need to learn to park if the car does it for you. No, rely on automated stability systems and anti-skid braking systems and the like and you're just waiting for the time that physics catches up with you...
"humans attempting to direct traffic flow"
You know, that's a really good point. Can a current autonomous car, like Waymo's, understand the directions that a police officer is giving to traffic - and know enough to ignore the traffic light that is giving different directions? (I'm betting no.)
"I think that there is probably a class of accidents caused by autonomous vehicles failing to recognize or improperly catagorizing entities"
The difference is that in one case you load the changes into your system, push it out to all systems, and that issue is no longer an issue on ALL such devices.
With humans, you need to instruct each one individually and there's no uniformity about how much attention they're actually paying.
There are compelling arguments for requiring mandatory periodic retests of driving ability. We wouldn't tolerate one test for life for any other class of heavy machinery.
"Chandler, a popular venue for testing automated cars in the United States due to its favorable weather..."
The article mentioned Waymo's issue with snow, but how about rain, fog, poor lighting (especially dawn and dusk) and blocked vision by large trucks (lorries for most of you)? These are more of the real-world conditions I want simulated and tested. Devil's in the details! ----->
The Univ of Michigan and the Center for Automotive Research (CAR) have a nice test facility in Ann Arbor. What do actual results from that say?
Amplified by other drivers who won't turn their lights on even though the sun hasn't yet risen or has already set. I've only owned cars -- the oldest being a 1998 Buick -- that are quite liberal with "daytime running lamps" (headlights on low beam, but taillights off) whenever in gear at a minimum; later ones (Chevys) whenever the engine is running. The Chevys could be overridden at any time for the current ignition cycle, but funny that the Buick couldn't -- lights on when in gear, period, no override.
 Go ahead, try to leave proper stopping/vision distance behind any vehicle, not just large ones. You'll get someone cut in Every. Damn. Time. Sure the AI responds by slowing to leave more distance, but how much time will that waste? (I know, if a crash happens because I didn't leave more distance to the person who cut me off then it doesn't matter if I saved 5 seconds up to that point. Statistically it's probably a wash in the long run.)
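The "how much time will that waste?" question has a quick first-order answer: dropping back to re-open a gap, then resuming cruise speed, costs you only the time needed to cover that extra distance once. A hypothetical helper (the numbers below are illustrative):

```python
def time_lost_s(gap_restored_m: float, cruise_kmh: float) -> float:
    """Time cost of falling back by `gap_restored_m` and then resuming
    the same cruise speed: you arrive later by the time it takes to
    cover that distance at cruise speed."""
    cruise_mps = cruise_kmh / 3.6  # km/h -> m/s
    return gap_restored_m / cruise_mps

# Re-opening a 40 m gap at 100 km/h costs well under two seconds.
print(f"{time_lost_s(40, 100):.2f} s")
```

Which rather supports the "statistically it's probably a wash" conclusion: each cut-in costs the careful follower a second or two, not minutes.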
I'm sure systems will eventually be good enough to cope with the vagaries of a wide range of roads.
It would mean you could remove all pedestrian crossings in towns; want to cross the road? Simply step out into traffic and watch it safely stop for you. Got a few hundred people crossing and re-crossing the road? You can see where this is going.
Anyone who has worked with computers long enough knows that it is the edge cases where computers screw up really really badly.
Simulations of 72 fatal crashes, probably in perfect daylight, with no animals, no pedestrians, no potholes, no glare blinding cameras/sensors, no advertisements that fool the AI. All the simulated sensors are working at 100% efficiency (and they are all "ideal": no nonlinearities, no lower bounds, no upper bounds), with no gradual build-up of dust, dirt or mud. So those 72 simulations are, to all intents and purposes, absolutely perfect.
"The automatic auto firm recreated 72 fatal crashes in 91 simulations, putting the Waymo Driver in the role of both the initiating driver and the responding driver for accidents involving two vehicles. And in the simulator at least, the company's software outperformed the humans involved in the recreated incidents."
This means precisely nothing. If you replaced me with someone who does not drink alcohol at times when I was about to go drinking (a friday), chances are that person would not get drunk as often as me.
What is omitted is that that replacement prefers class C drugs on Saturday.
You need to test the entire driving history. That is path-dependent to an extent you cannot reliably test. I hoped the el reg journo would pour a big smelly bucket of sarcasm all over this.
The last sentence helped - Waymo is still working on driving in snow. ®
Hoped for more though.
There's a lot of "evidence" that AI and machine learning are much better than most humans (LOL), so should we stop sending kids to college now and just send their laptops to class? At least that meets the current lockdown rules... but wait, I was writing a joke and I just thought: if machines are better at driving and learning than humans, then maybe Microsoft could create a new political party with an AMD Ryzen 5 5600X as Prime Minister?
"the paper notes that the simulations did not include any other vehicle traffic, so any effect such traffic might have on the Waymo Driver's sensors is not modeled."
So not comparing like with like. Or is there an assumption that the other vehicle traffic might not have had an effect on the human driver and wouldn't have left the human driver with nowhere else to go?
Obviously most human-driven crashes are due to human error, human stupidity (driving when drunk/tired/distracted/etc.) or the human ability to decide to defy traffic/speed laws. It is equally obvious that most computer-driven crashes will be due to computer/programmer error, like the Tesla crashes when autopilot is engaged.
We are years away from a true apples to apples comparison. With all the testing in Chandler, where they have the roads mapped to within a micron, know about road construction/detours before the cars have to find out the hard way, have nearly perfect weather year round, far better than average pavement conditions, nice straight wide roads, and far fewer cyclists and pedestrians than in many other locations it is about the simplest test possible for autonomous driving. Similar to Tesla owners mostly engaging autopilot in the easiest conditions (highways and stop/start driving conditions with good weather) it is easy for them to show good numbers.
Until autonomous cars are tasked with driving in ALL the same places/conditions/etc. as human drivers, comparing fatalities per mile - or worse, "we wouldn't have crashed where this human did" - is stupid. Because if you look at the fatal Tesla crashes, none of them would have happened with a human behind the wheel. Human error and computer error are different!
There also needs to be a standardized off-highway testing regime before any of these cars are allowed to test on public roads conducted by an organization unrelated to the company submitting the vehicle. I won't go anywhere near Chandler Arizona or San Francisco/Silicon Valley as it seems to be a free for all.
If we are lucky, it will mostly be ones and twos. Just don't anger anyone with the resources to digitally disconnect your brake line. And pray that we never go to war with China, and that Iran doesn't start redirecting its resources from nukes to roots.
We JUST TODAY have an opinion piece about the sorry state of affairs in the software industry. Connect the dots, people. The failure mode that I'm worried about is system compromise. Convince me on that before surrounding me with thousands of murder machines.