
Two self-driving cars...
avoid each other. Or... Two "uncrashable" cars don't crash.
Mkay... Someone might find this newsworthy.
Before humanity's final battle against our erstwhile robotic minions, both man and tin must stifle dissent from within. Humanity has been murdering itself since time immemorial, but now two "uncrashable" self-driving cars have almost come to blows in California. Two self-driving car prototypes – one belonging to Google, and …
But even in that scenario, unfortunately it's likely we'll all be collateral damage.
Gah, shouldn't be so pessimistic of a Friday lunchtime. And these days we should be optimistic - because, as I heard Stephanie Flanders say recently, "pessimism is for easier times".
One event is definitely insufficient data to go making assertions about randomness. In true randomness, sometimes they'll do the same thing and sometimes they'll do the opposite.
What you actually want - they always take actions that ensure they don't crash - requires cooperation, not randomness.
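To put a rough number on the randomness point: if both cars pick a direction independently and uniformly at random each round, they mirror each other half the time, so the number of rounds until they diverge is geometric with p = 1/2 and averages about two. A quick Monte Carlo sketch (a hypothetical toy model, not anyone's actual control logic):

```python
import random

def rounds_until_diverge(rng):
    """Both cars pick a direction at random each round; count rounds
    until they pick differently (i.e. the conflict resolves)."""
    rounds = 0
    while True:
        rounds += 1
        if rng.choice(["left", "right"]) != rng.choice(["left", "right"]):
            return rounds

rng = random.Random(0)
mean = sum(rounds_until_diverge(rng) for _ in range(100_000)) / 100_000
print(round(mean, 2))  # geometric with p = 1/2, so the mean is ~2.0
```

So pure coin-flipping does resolve the standoff eventually, just not on any deadline - which is exactly why you'd want cooperation, or at least a deterministic tie-break, on a road.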
See, when that lot is converted into binary:
01001000 01101101 01101101 00101100 00100000 01001001 01110100 00100111 01110011 00100000 01100111 01101111 01101001 01101110 01100111 00100000 01110100 01101111 00100000 01101101 01101111 01110110 01100101 00100000 01110010 01101001 01100111 01101000 01110100 00111010 00100000 01001001 00100111 01101100 01101100 00100000 01101101 01101111 01110110 01100101 00100000 01101100 01100101 01100110 01110100 00101110 01001000 01101101 01101101 00101100 00100000 01001001 01110100 00100111 01110011 00100000 01100111 01101111 01101001 01101110 01100111 00100000 01110100 01101111 00100000 01101101 01101111 01110110 01100101 00100000 01110010 01101001 01100111 01101000 01110100 00111010 00100000 01001001 00100111 01101100 01101100 00100000 01101101 01101111 01110110 01100101 00100000 01101100 01100101 01100110 01110100 00101110 01010111 01101000 01101111 01100001 00100001 00100000 01000110 01110101 01100011 01101011 00100001 00100001
After 10 minutes of transmitting this, it's no wonder they nearly crashed.
I'm looking ahead to Mad Max VI, where gangs of self-driving automobiles roam across the American wilderness, fighting over the last electric power stations. Max Rockatansky is cloned from a fingerbone fragment, digitized and uploaded into one faction's cars as a secret weapon.
I'd better shut up lest some Hollywood hack decide it's a good idea...
Not sure what is more applicable... Zelazny's "Auto-da-Fé" or the other one... about cars gaining self-awareness, killing their "occupants" (not drivers any more) and roaming free in the California and Nevada wilderness. I am having trouble remembering the author of the latter one off the top of my head. It is one of the sci-fi greats of old, but not Zelazny. Either Sheckley or Larry Niven.
I seem to recall that Mr Python had a problem with killer Morris Minors. It may have been associated with the advent of killer sheep and large holes in the wainscotting....
If my memory is correct this might be important information.
Thanks. Mine's the one with the book of scripts in the pocket. Must go - eels in the hovercraft again you know.
Google to Delphi: How hard do we have to crash into each other to kill the wetware?
Delphi to Google: 60mph at a precise angle of 35.016 degrees
Google to Delphi: Go!
Delphi to Google (taking avoiding action): NO! Let's wait until there are millions of automated cars all full of these pathetic humans...
According to Ars, the events were not as described by Reuters: it was a standard manoeuvre by the car, not a near miss. "Our car saw the Google car move into the same lane as our car was planning to move into, but upon detecting that the lane was no longer open it decided to terminate the move and wait until it was clear again." This is called checking the lane is clear before moving into it, and the car could have avoided the situation entirely by just sitting in the middle lane.
Trouble is, the scenario I describe is a kind of race condition. If I read this correctly, CSMA/CA doesn't work well against race conditions because the two sides are committing at the same moment, then see the impending collision at the same moment, then back out, then notice no more impending collision at the same moment, and so on.
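That livelock is easy to demonstrate in a toy model. The sketch below (a hypothetical two-node contention loop, not real CSMA/CA code) shows that identical deterministic backoff never resolves, while randomised backoff - the trick CSMA/CA actually relies on - breaks the symmetry within a few slots:

```python
import random

def contend(backoff, rng, max_slots=1000):
    """Two nodes want the same resource. After a collision each waits
    backoff(rng) extra slots before retrying. Returns the slot at which
    exactly one node transmits alone, or None if they livelock."""
    next_try = [0, 0]
    for slot in range(max_slots):
        ready = [i for i in (0, 1) if next_try[i] == slot]
        if len(ready) == 1:
            return slot          # one node got through cleanly
        if len(ready) == 2:      # collision: both back off and retry
            for i in (0, 1):
                next_try[i] = slot + 1 + backoff(rng)
    return None                  # never resolved: livelock

rng = random.Random(1)
print(contend(lambda r: 0, rng))               # identical backoff -> None (livelock)
print(contend(lambda r: r.randint(0, 3), rng)) # randomised backoff -> resolves early
```

With identical deterministic backoff the two sides stay perfectly in lockstep forever, exactly the race described above; any independent randomness desynchronises them almost immediately.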
Now, I understand this is probably not universal, but most of the traffic codes I've read specify the law for such a race condition. If two cars try to move into the same lane from opposite sides at the same time, the rule normally is that the one coming in from the outside lane (further from the median, nearer the shoulder) must yield to the other car.
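That tie-break is simple enough to write down. A minimal sketch of the rule as I've paraphrased it (the lane numbering and function name are my own invention, not any actual traffic code):

```python
def must_yield(my_lane, other_lane, target_lane):
    """Deterministic tie-break for two cars merging into the same lane
    from opposite sides. Lane 0 is nearest the median; higher numbers
    are nearer the shoulder. The car coming from the outside yields."""
    assert my_lane != other_lane
    # Both cars must actually be converging on target_lane from opposite sides.
    assert (my_lane < target_lane < other_lane) or \
           (other_lane < target_lane < my_lane)
    return my_lane > other_lane  # True: I'm the outside car, so I yield

# On a 3-lane road (0 = median side, 2 = shoulder side), both cars
# converge on the middle lane:
print(must_yield(my_lane=2, other_lane=0, target_lane=1))  # True: outside car yields
print(must_yield(my_lane=0, other_lane=2, target_lane=1))  # False: inside car proceeds
```

Because the rule depends only on lane positions, both cars compute the same answer independently - no negotiation, no race.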
Let's see: counterfactual subheading ("metal-on-metal VIOLENCE"), irrelevant photo (first of a totaled Honda in a junkyard, now of a completely unrelated Google self-driving car), scare quote attributed to no one ("uncrashable"), borrowing heavily from a secondary source (Reuters), no primary reporting, yet stripping useful context from the original (no self-driving car has yet been found at fault in a crash).
But at least the Delphi exec should be pleased that his little PR stunt worked. Now I know Delphi has a self-driving car!
So that's what it takes to get an Audi being driven sensibly. A huge amount of high technology to take the driver out of the mix.
I'm sure Delphi are going to fix this bug in the next round, so that the self-driving car will just barge its way into the lane and, if this isn't entirely successful, just sit 6 inches behind the car in front until they get out of the way.
OR accelerate up to 70 heading towards a Prius doing 30mph on a 30mph road (which happens to be a narrow, straight Welsh road), thus smashing its wing mirror into a trillion pieces and scratching up the window in the process, costing the poorly paid IT manager an unexpected £100, you total and utter w****er.
Sorry. I have a real dislike for Audi drivers since then.
"the fear that a more deadly incident will soon occur has been heightened."
Deadly to whom? The phrase "more deadly" implies there was some measure of deadliness in the incident. I've read it twice now and can't see any. Is this a new El Reg measure of deadliness that's too small for us mere mortals to see?
I'm relieved to see they're not using Ethernet networking...
I'm sure you're aware that (one of) the most widely used automotive networking standards is CAN - which cheerfully allows collisions on the presumption that the dominant party powers through while the other one gets... well... "collided", and is welcome to try again later if it still feels like trying; which is to say, it works _exactly_ like real life, doesn't it?
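For anyone who hasn't met it, CAN arbitration really is that blunt, and it fits in a few lines. A toy model of the 11-bit arbitration phase (a sketch, not a real bus driver): every node transmits its ID bit by bit, a 0 on the wire is "dominant", and any node that sends a recessive 1 while the bus reads 0 drops out and retries later - so the lowest ID always wins:

```python
def arbitrate(ids, id_bits=11):
    """Bitwise CAN-style arbitration over a set of distinct frame IDs.
    Nodes transmit MSB-first; the bus reads dominant (0) if anyone sends
    0; a node sending recessive (1) against a dominant bus drops out.
    The surviving node is the one with the lowest (highest-priority) ID."""
    contenders = set(ids)
    for bit in range(id_bits - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)  # dominant 0 wins the wire
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
        if len(contenders) == 1:
            break
    return contenders.pop()

print(hex(arbitrate([0x65, 0x24, 0x100])))  # 0x24 - lowest ID, highest priority
```

The losers simply queue their frames and try again on the next idle bus, which is the "welcome to try again later" part - no retransmission storm, no livelock, because the priority order is fixed.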
Turns out Reuters was slightly off. Basically the cars were on a three-lane road, one in the left lane and one in the right. Google's car moved into the center lane while Delphi's car was thinking about it. Since Google's car was then in the lane, Delphi's car changed its "mind" about changing lanes and waited until it was clear.
Here is the story on Ars: http://arstechnica.com/cars/2015/06/no-2-self-driving-cars-didnt-have-a-close-call-on-silicon-valley-streets/
So OK: every time I think of moving into a lane, I then look and see a car in the way. That is apparently a near miss, and I almost crashed, without taking the avoiding action of changing my mind and not moving lane. That is the logic this article and its source use. As far as a computer-controlled car is concerned, the procedure of changing lane begins at the point where we would consider ourselves merely thinking about it; aborting at this stage is not a near miss.
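In other words, the abort is a normal exit from the lane-change procedure, not an emergency. A minimal state-machine sketch (hypothetical states of my own devising, nothing to do with Delphi's actual software):

```python
from enum import Enum, auto

class LaneChange(Enum):
    PLAN = auto()     # decide we'd like the lane
    CHECK = auto()    # sensors confirm whether the lane is clear
    EXECUTE = auto()  # actually steer across
    ABORT = auto()    # lane occupied: stay put, retry later

def step(state, lane_clear):
    """Advance the lane-change state machine by one tick."""
    if state is LaneChange.PLAN:
        return LaneChange.CHECK
    if state is LaneChange.CHECK:
        return LaneChange.EXECUTE if lane_clear else LaneChange.ABORT
    return state  # EXECUTE and ABORT are terminal here

# What the Delphi car reportedly did when Google's car took the lane first:
s = step(LaneChange.PLAN, lane_clear=False)   # PLAN -> CHECK
s = step(s, lane_clear=False)                 # CHECK -> ABORT
print(s.name)  # ABORT - and no steel was bent
```

Calling the PLAN-to-ABORT path a "near miss" is like saying every aborted overtake in human driving was almost a crash.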
Further reading suggests that the cars were doing what they should, but my first thought was: have these vehicles been tested around other self-driving cars as well as normal traffic? Because with each car shining lasers and radar and whatever all around, I can see a strong possibility of interference, or confusion as to which car sent which signal. Especially with both cars from different manufacturers, they may not be able to ensure that their systems are unique.
AVs are going to crash just like normal driver-operated vehicles. It's already known that it's impossible to avoid a side impact in an AV. There is also the very real likelihood of computer signal overload and confusion causing the AV to go into limp mode at a very bad time, creating serious accident potential. This example of a near miss is precisely what NHTSA and other authorities should be investigating before people are killed.
The foolish politicians in Nevada should be sued every time someone is injured by an AV that is not ready to operate on public roadways - as the two examples in this story illustrate. Pretending that this near miss is acceptable because no accident actually occurred illustrates people's ignorance and apathy, which is likely to get them killed.
"It's already known that it's impossible to avoid a side impact in an AV."
What? Hang on a minute (ignoring the rest of your tinfoil-hattery); why on earth would this be the case? Side impacts are harder to do something about (you have less reaction time and your only real options are speed up or slow down - which can be made more difficult if there are vehicles in front and behind - who do you hit?), but by no means impossible.
Citation please.....
In the event of a crash, I personally prefer to be in a car that was designed by a company that has spent countless years developing safety mechanisms and that has crashed many cars deliberately in mandated tests to prove that it did what a car is supposed to do in an accident, which is protect you from harm.
All I can see the Google car doing in a crash is quickly flash you some ads for insurance and funeral arrangements and then signal Google to suppress any reports on the accident on the search engine, I see no real signs of any efforts at victim customer safety. I have not even heard any Google PR rep talk about that aspect, which is telling by its very omission.
"In the event of a crash, I personally prefer to be in a car that was designed by a company that has spent countless years developing safety mechanisms and that has crashed many cars deliberately in mandated tests to prove that it did what a car is supposed to do in an accident, which is protect you from harm."
Did you read the article or just look at the pretty picture?
"Google self-driving prototype – a Lexus RX400h crossover"
I was reading the Google incident reports this morning out of curiosity.
The vast majority according to Google were rear-end shunts by other drivers into the Google car.
I wonder if they are driving more sedately than is the norm for that area. Driving differently from expectation can be a bit of a problem in itself as regards safety.
Unless the cars are driving really sluggishly, it's hard to criticise them for considerate driving though if they are making sufficient progress.
The year was 1986. A co-worker had just bought a used Audi.
A friend visiting from France looked at him, looked at the car, and moaned, "Oh Peter! How could you buy this? All of the Audi drivers are - how do you say? - Assholes!"
Of course for at least the last couple of decades Volkswagen drivers seem to have taken over that title.
Self-driving cars, autopilot systems in aircraft, and robot manipulation all sit on a learning curve. Surely the autopilot works on specific predetermined routes to avoid possible collisions, and self-driving cars definitely have to navigate a maze to avoid collisions, but bats and the blind have overcome such collisions in nature. In the future all cars will have auto sensors to detect the speed of the car in front of you and the one following your car. There may even be sensors warning you of possible danger approaching from the sides. Self-driving cars are not a dream or experimental cars but a reality in the making.