The machine hit the brakes unexpectedly.
Perhaps the AI went sentient and there was someone tailgating the bus so HAL decided to brake-check them.
Two driverless vehicle trials were temporarily halted this week after self-driving mini-buses encountered obstacles – or thought they did – resulting in minor injuries to a rider and a pedestrian. To play devil's advocate for a second, are these accidents a sign that machine-learning software is still unprepared for motoring in …
"The data shows it wasn't a false reading, but we aren't sure what caused the shuttle to initiate the emergency stop," he said. "We feel very confident that this is not an accident that will repeat itself."
So they aren't sure what caused the shuttle to decide it needed to stop, but are still "very confident" this won't happen again? BASED ON WHAT?! I can pretty much guarantee it WILL happen again... and again... and again until they understand what caused it to make that emergency stop. If a single sensor indicating something is in the path is enough to stop it, would a bird trigger it? Would an insect, if it is really close to the sensor? Or maybe the sensor is flawed, or the software is flawed.
Idiots like these, operating autonomous vehicles before they are ready, are going to set the industry back years, because they aren't willing to admit the technology is nowhere near ready to be deployed for real-world use. Having something driving around and gathering data is one thing; it can err on the side of caution without injuring anyone.
There's no way they should let the public ride this at this stage, but clearly they consider publicity more important than public safety.
Err, no.
The sensor sends a stream of data to some preprocessor, which turns that data into "object(s) occupying $sectors of FOV of sensor" and the stream into "object(s) closing in/moving across/moving away at $angularvelocity". Combining this with data from other sensors can turn this into "object of $size at $distance is moving towards/across/away from this vehicle at $speed" and from there the decision will be made to care or not. If you want to record what the sensors 'see' you have to record a video stream (with roughly the same FOV as its associated sensor) in parallel with the sensor data, so that an adequately trained neural net can decide whether that sensor and its processing algorithms correctly caused the action it took.
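In code, that pipeline might look something like this back-of-an-envelope Python sketch (all the names, types and thresholds here are made up for illustration, not anyone's real stack):

```python
from dataclasses import dataclass

# Hypothetical preprocessed output of one sensor: the FOV sectors an
# object occupies, plus its angular rate across the sensor's view.
@dataclass
class Detection:
    sectors: tuple           # FOV sectors the object occupies
    angular_velocity: float  # deg/s; sign encodes closing vs. crossing

# Fused track built by combining detections from several sensors.
@dataclass
class Track:
    size_m: float
    distance_m: float
    closing_speed_ms: float  # positive = moving towards the vehicle

def should_care(track: Track, stop_margin_m: float = 5.0) -> bool:
    """Decide whether a fused track warrants action: it matters if it
    is closing on the vehicle and would enter the stopping margin
    within a couple of seconds."""
    if track.closing_speed_ms <= 0:
        return False  # crossing or receding: don't care
    time_to_margin = (track.distance_m - stop_margin_m) / track.closing_speed_ms
    return time_to_margin < 2.0

# Example: a 0.5 m object 12 m ahead, closing at 4 m/s -> act.
print(should_care(Track(size_m=0.5, distance_m=12.0, closing_speed_ms=4.0)))
```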
The point is that during trials you would expect all of the raw sensor data, and a video stream from independent cameras, to be recorded precisely to allow investigation of any incidents or anomalies. If you don't do this then, unless the trial has perfect results or very clear major failings, you will have incidents which you can't adequately investigate, leaving it unclear whether the system is behaving adequately or not.
Bullshit. There are trivial ways to conclude an event was very likely objectively real without actually knowing what it was - such as having two or more sensors tripping at the same time. Obviously only they know what they base their confidence on, but it's entirely possible they have some kind of strong evidence it wasn't a sensor fault.
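For illustration only, a coincidence check along these lines (Python; the window and sensor names are invented) is enough to call an event "very likely real" without ever identifying it:

```python
def corroborated(events, window_s=0.05, min_sensors=2):
    """Return True if at least `min_sensors` distinct sensors fired
    within the same short window - evidence the trigger was a real
    external object rather than a single-sensor fault.

    `events` is a list of (timestamp_s, sensor_id) pairs.
    """
    events = sorted(events)
    for i, (t0, _) in enumerate(events):
        ids = {sid for t, sid in events[i:] if t - t0 <= window_s}
        if len(ids) >= min_sensors:
            return True
    return False

# LIDAR and radar both tripping within 20 ms looks objectively real;
# a lone LIDAR blip does not.
print(corroborated([(0.000, "lidar"), (0.020, "radar")]))  # True
print(corroborated([(0.000, "lidar")]))                    # False
```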
"I'm sorry, Dave, I'm afraid the speed limit on this road is 25 MPH and that's the LIMIT there's nothing that says that I need to be going that fast if I think that 22 MPH is a more sensible speed for the current road conditions and just where do you need to be right now that's so important that you think that
You just can't use seatbelts on an automated bus.
Even though it is a transport device which not only can but is likely to make sudden stops, you can't use seatbelts.
Even though they'd stop people falling out of their seats in the case that a sudden stop occurs, you can't use 'em.
And why? Because they're decades-old, mature technology that actually works. Not cool at all.
Or use non-slippery seat covers, with the backs of the seats in front somewhat cushioned. After all, the sudden stop "caused an elderly man to slip off his seat" and he "suffered facial injuries – bruising and laceration". Even if it didn't completely remove the likelihood of a similar incident, it would be a completely passive way of moderating such events.
The problem is more along the lines of 76 year old folks - even those in reasonable health - more often than not being made out of tissue paper wrapped around soap bubbles. Realistically, getting a single step out of bed is "hazardous" for them - there's no way around that. Arguably, you can try to tote them around in a Zorb ball and hope for the best, or you can accept that they're permanently at high risk of injury and there's absolutely nothing you can do to the rest of the world to change that.
Well, if they situate themselves correctly, they'll at least get a kiss before they pass on.
Even if that is likely to be a Liverpool kiss (that one was safe at time of writing, but YMMV).
Let's get real here - if a bus couldn't move for legal reasons unless everyone was belted up, bus services would grind to a halt. And if it's not mandatory then you might as well not bother having them, as almost no one would use them. And that's before we consider standing passengers.
> if a bus couldn't move for legal reasons unless everyone was belted up bus services would grind to a halt
You aren't thinking like OSHA/HSE.
The solution is to make everyone wear a crash helmet and fireproof race overalls on a bus - so forcing them to drive a car instead.
The AI is meant to be annoyed by smartphone zombies, or "smombies" as they're called here.
A rare case where the German news was more precise: the woman who walked into the bus was staring at her smartphone and had headphones on. A human driver, at least one with a dashcam to prove he had no chance, would have gotten a "was her mistake, drive on".
"The data shows it wasn't a false reading, but we aren't sure what caused the shuttle to initiate the emergency stop,"
That's a bit worrying, given that the bus is on a trial. Surely it should be adequately instrumented such that one can be sure?
I've no idea what data is being collected, but if I were in charge of giving permission for driverless vehicle trials on public roads, I'd insist at the very least on wrap-around video recording being added to the vehicles specifically for post-event human review. Wide-angle cameras front, sides, and rear, low down and high up. Perhaps also stereo cameras - again purely for human-review video recording.
Bosch sells this rig specifically for vehicles:
https://www.bosch-mobility-solutions.com/en/products-and-services/passenger-cars-and-light-commercial-vehicles/driver-assistance-systems/lane-departure-warning/stereo-video-camera/
If you had that, you'd most likely be pretty sure about anything that might have been in front of the vehicle. And you'd have a record of what really happened if a really nasty crash came about.
Maybe something like that had been required and it wasn't quite good enough in this case. I'd love to know.
Wraparound cameras would capture some but not all of the relevant data. You'd also need to examine the LIDAR logs, and rerun the decision algorithm to see if it thought that pigeon over on the left was in fact a small child who was about to run across the road...
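Something along these lines is what that rerun would look like (Python; the log format and the toy decision rule are made up for the sketch):

```python
def replay_incident(frames, decide):
    """Re-run the (deterministic) decision function over logged sensor
    frames and report wherever its output diverges from what the
    vehicle actually did at the time."""
    divergences = []
    for frame in frames:
        expected = decide(frame)
        if expected != frame["actual_action"]:
            divergences.append((frame["t"], expected, frame["actual_action"]))
    return divergences

# Toy decision rule: e-stop whenever LIDAR reports anything closer than 2 m.
def decide(frame):
    return "e_stop" if frame["lidar_min_m"] < 2.0 else "cruise"

log = [
    {"t": 12.40, "lidar_min_m": 8.0, "actual_action": "cruise"},
    {"t": 12.48, "lidar_min_m": 1.5, "actual_action": "e_stop"},
    {"t": 12.56, "lidar_min_m": 6.0, "actual_action": "e_stop"},  # unexplained stop
]
print(replay_incident(log, decide))  # [(12.56, 'cruise', 'e_stop')]
```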
At some stage, you start wondering "how much data is enough to analyse 95% of incidents, and how much more would we need to spend to figure out that last 5%?" And likely come up with numbers that make a city council look a bit thoughtfully at their budget.
Why on earth would you want to catch a bus that only travels at 9 mph? If you include the wait for the next bus along, and the delay at every stopping place, it would be quicker, and better for you, to walk. Unless, of course, the Americans of the area are incapable of walking at a walking pace. Or walking at all.
> Unless, of course, the Americans of the area are incapable of walking at a walking pace. Or walking at all.
In the US, walking (outside city centers where sidewalks do exist) is essentially signing your own death warrant as you have to intrude on the Domain of the Automobile for most if not all of your journey. And if you happen to survive that you run the risk of being shot for displaying Furrin Habits.
Autonomous trucks are being used on mine sites in Oz, not out of any happy progressive free-the-masses-from-having-to-think idealism, but for industrial/economic avoid-having-to-pay-and-feed-workers logistic reasons... and they have thus been operating for several years already.
These are controlled environments, limited in scope, with barricades preventing random vehicles from entering, all authorised vehicles inside the zone communicating their positions to central control, and roads engineered and constructed to make everything pleasing to our robot overlords.
And yet...
Crows have been swooping the trucks and triggering the collision-avoidance detectors, and tumbleweeds (or the Australian equivalent) will set them off.
All this plus being vulnerable to network, software and hardware hiccups.
OK, what else, save for the driver, makes complex decisions - that may be erroneous - on the road?
The statement in the title is irrelevant in the context. It says nothing more or less than "mechanical failures that can cause an accident are rare and roads are pretty decent, too". And it surely does not mean that autonomous vehicles will be - or can be - safer than human drivers. And that is before one realises that the software that is supposed to make complex decisions on the road will be written by humans, and I suspect that El Reg Commentariat at least does not expect those humans to be perfect, either.
Who is collecting real statistical data?
That was my first thought and I haven't found anything comprehensive. However, a useful benchmark might be the tram incident figures collected by the transport safety department of the Australian state of Victoria (which I suppose refers to the 250km network in Melbourne).
In 2018 they had 55 serious injuries (roughly equal numbers of people injured onboard and in collisions with pedestrians); there were two deaths.
Aged 17 I was driving along and passed a friend, I stopped and offered him a lift. Shortly afterwards a disabled woman stepped out in front of the car and I was forced to execute an emergency stop. I'm not sure who was more shocked, me or her. Shortly after that I let my friend out at his destination. It may have looked like he fled the car.
I don't doubt an autonomous vehicle would have stopped sooner but I reacted and executed in a sufficiently timely manner. I wasn't tested on emergency stops during my test but I was taught it by my instructor.
As a new driver I and a mate were out for a trip into Derbyshire when I executed an emergency stop to the surprise of my mate.
I'd seen the sheep balancing on top of the wall and gathering itself ready to jump down so hit the brakes before it took off. It landed unharmed just in front of us.
When I read about autonomous vehicles I often wonder whether one would (a) recognise a sheep, (b) still recognise one balancing on top of a wall and (c) recognise from its stance and minor movements that it was about to jump off. The last is the most difficult, as it requires an understanding that arises from having to manage the balance of a mammalian body, and not just a motor vehicle with its centre of gravity well within its wheelbase.
I drive in Derbyshire a lot and can confirm plenty of farm and wild animals cross roads there. I'm sure the likes of deer, cattle, badgers, sheep, foxes and alpaca (increasingly trendy) are big enough to trigger an AI obstacle detector (and be absent from the scene by the time investigations began).
Would be interested to know if the AI obstacle detectors are triggered by birds, as that would cause huge problems.
I'm about to take a medical test which can kill me by method X, but that was described in the pamphlet as "rare". I felt good about that until the next paragraph, when method Y was described as "very rare".
So, as always, the thought in my heart is "it's been good to know you all" fellow castard bommentards, but perhaps this time it's worth stating. I will post again, Very-Rarity-willing.
Thank you @Doctor Syntax, for your concern. This is the next post. Now that I've taken the test, and almost understand it, I see how the "rare" deadly side effect could be a better option than doing the test in the obvious way--which would probably bring into play the same side effect.
Nonetheless, I will remain vigilant around those who say "We feel confident ..."
We already know that small pixel changes to the sensor data can cause gross misinterpretations by the AI. You only need to stick a few pieces of white plastic on the ground to freak the vehicle out. Litter blowing in the wind, maybe just a chewing-gum wrapper, could easily trigger this kind of confused emergency stop, with nothing to explain it to the human minders unless the videos are hi-res and wide-angle enough to pick it up and the yoomans think to watch them really, really closely.
The current generation of AI vehicles will always be prone to such random pixel moments. It wouldn't surprise me if the outcome of all this testing was to discover that general intelligence is needed to safely drive a vehicle in public places.
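For the curious, the classic recipe for those small pixel changes is the Fast Gradient Sign Method. A minimal PyTorch sketch, on a toy model purely for illustration:

```python
import torch

def fgsm_perturb(model, image, label, eps=0.01):
    """Fast Gradient Sign Method: nudge every pixel by +/- eps in the
    direction that most increases the classifier's loss. Changes this
    small are often invisible to humans yet flip the model's output."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()

# Toy usage: a linear "classifier" over a 4-pixel image.
model = torch.nn.Linear(4, 2)
image = torch.rand(1, 4)
label = torch.tensor([0])
adv = fgsm_perturb(model, image, label)
print((adv - image).abs().max())  # each pixel moved by at most eps
```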
Yes, but half a second later I'd have gone "oh, **** it's only litter" and released the brake before my passengers nutted themselves. Current AI does not have that flexibility to overcome pixel moments.
Or, maybe you meet a person with blood on their face beckoning you the wrong way down a one-way street to get past an accident just ahead and allow the ambulance to reach it. An AI that blocks the road and whines for Mummy, or does not recognise the bloody object as a human head because the blood causes a pixel moment and it keeps going, is not going to be appreciated.
So in the Vienna one some dozy iSuicide[1] merchant walked straight into the side of a bus? I'm not sure what it could have done about that. The Time Warp[2]?
I have seen a woman drive into the side of a stationary bus.....
[1] The act of navigating an urban environment with headphones on and while staring fixedly downward at a phone or tablet screen.
[2] It's just a jump to the left...
In the Austrian case the pedestrian is definitely at fault. Until recently Navya had a bus operating where I work in La Défense near Paris. The buses are painfully slow. On the rare occasion that you see someone inside one, they appear trapped, regretting ever entering it as everyone else walks past them.
Yes. And once everyone gets over the idea that human drivers are rushing about trying to cause maximum mayhem, and accepts that they're actually doing their best to keep out of accidents, we'll realise that that's a very tough call.
TFA says "People get hit by human-driven vehicles all the time, of course." Actually, in terms of vehicle miles, they get hit very rarely.
Yes, humans are on average extremely good drivers, even in poor conditions. The issue is that the vast number of journeys made gives a false impression of the probability of a driver mistake leading to an accident. I remain sceptical about autonomous vehicles achieving similar performance anytime soon.
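To make the base-rate point concrete, with numbers that are purely illustrative and not real statistics:

```python
# Hypothetical figures, for the shape of the argument only: a driver
# who makes one serious mistake per 100,000 miles is "extremely good"
# per mile, yet across a fleet's annual mileage that still adds up.
miles_per_serious_mistake = 100_000   # invented per-driver rate
fleet_miles_per_year = 3_000_000_000  # invented aggregate exposure

mistakes_per_year = fleet_miles_per_year / miles_per_serious_mistake
print(mistakes_per_year)  # 30000.0 - rare per mile, common in aggregate
```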
"With AI supposed to be safer and better than us mere mortals when behind the wheel,"
Well ... maybe. At least for the very specific and carefully orchestrated sort of hazards that it was tested for. If those are a high enough proportion of all incidents then the AI may well come out ahead on average, by reacting faster to that sort of threat. For less specific threats it may do less well... but not often encounter them.
Lies, damned lies and statistics. And then marketing materials.
Replacing a human driver with an AI machine is only a partial solution, which is bound to fail. They need to replace the passengers and pedestrians too. That way the vehicle sensor data can be shared quickly and easily, using a simple short-range wireless data link, so that sudden stops can be anticipated and collisions avoided.
There would be many other potential advantages, such as pedestrians looking where they are going (rather than into their smartphones) and passengers avoiding turning up at the bus stop in groups larger than the capacity of the bus.
The thing is, a human driver would likely have seen the person walking along not paying attention to the road, anticipated a collision, and sounded their horn. They may even have applied the brakes gently as a precaution and to alert following traffic. The person might not have stepped out, in which case the driver's action would have cost nothing; but it could have added more time to avoid an accident and hopefully brought the bus to the person's attention. A robot is very likely to spot the human but would have trouble anticipating their actions or level of awareness. Lacking that empathy, a 'safe' option would be for the software to assume all humans were likely to jump out suddenly and keep slowing down and beeping, but this would be impractical. In the language of advanced driving: instead of relying on forward observation and planning, the robot driver is reacting to events, and that is more likely to lead to an incident.
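As a toy illustration of forward observation versus mere reaction (Python; the geometry and thresholds are invented and nothing like a real planner):

```python
def precautionary_action(ped_pos, ped_vel, bus_speed_ms,
                         lane_half_width_m=1.5, horizon_s=3.0):
    """Crude anticipation rule: if the pedestrian's current heading
    would carry them into the bus's path within the planning horizon,
    ease off and sound the horn now instead of waiting to react.

    Positions are (x, y) in metres in the bus frame: x ahead, y to the side.
    """
    x, y = ped_pos
    vx, vy = ped_vel
    steps = int(horizon_s / 0.5)
    for i in range(1, steps + 1):
        t = 0.5 * i
        px = x + (vx - bus_speed_ms) * t  # gap shrinks as the bus advances
        py = y + vy * t
        if 0.0 < px < 25.0 and abs(py) < lane_half_width_m:
            return "slow_and_sound_horn"
    return "proceed"

# Pedestrian 20 m ahead and 3 m to the side, drifting towards the lane:
print(precautionary_action((20.0, 3.0), (0.0, -1.0), bus_speed_ms=4.0))
```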
> They may even have applied the brakes gently as a precaution and to alert following traffic
If you did that here you would be stationary.
Would you drive past a bunch of schoolkids pushing each other around on the pavement?
Should the city totally grid lock because someone is walking looking at their phone?
Here's an idea:
Don't let pedestrians walk around unless they have their phone turned on and sending their location continuously to sensors in autonomous vehicles, which could then avoid them.
You could even add a flag to the data stream if the pedestrian was actively looking at the phone...
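Since we're designing dystopias anyway, the data stream might look like this (Python; entirely hypothetical fields, no actual V2X standard implied):

```python
import json, time

def pedestrian_beacon(lat, lon, heading_deg, speed_ms, looking_at_phone):
    """Build the hypothetical broadcast packet: position, motion, and
    the suggested 'actively looking at the phone' flag, so nearby
    vehicles can weight the distraction risk accordingly."""
    return json.dumps({
        "t": time.time(),
        "lat": lat,
        "lon": lon,
        "heading_deg": heading_deg,
        "speed_ms": speed_ms,
        "distracted": looking_at_phone,  # the proposed extra flag
    })

print(pedestrian_beacon(48.1840, 16.3130, 270.0, 1.4, True))
```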