the crew trained a feed-forward neural network to whiz the Volkswagen around an oval-shaped track as fast as possible without going off course.
My kids' model railway can do that too, until centrifugal force intervenes... :)
Researchers claim to have trained an autonomous vehicle to drive as well as an amateur race-car driver, a skill set that could be used to build safer artificially intelligent motorists, in theory. Boffins at Stanford University in the US taught the computer brain of a driverless Volkswagen GTI to negotiate the Thunderhill …
Is not the hardest problem for self-driving cars. It isn't even the 500th hardest problem for self-driving cars. Driving on snow won't be difficult for autonomous vehicles, because increased stopping distances and slower speeds around curves aren't difficult for them to figure out (and you don't want one driving "near the edge" like a good amateur racer in any case!)
The hard part about driving on snow for autonomous vehicles will be the absence of any lane markers, or in some cases the difficulty of determining where the road is at all. There's a reason Phoenix is one of the favorite places to test autonomous vehicles (outside of Silicon Valley, of course): the roads are wide and very well marked, and it almost never rains really heavily (which causes problems for lidar etc.)
Testing autonomous cars in Phoenix is like designing lifeboats that will need to withstand the conditions of the North Sea, and testing them in a swimming pool.
Definitely worth an upvote for mentioning Phil Hill. The Millennials wouldn't have a clue who he was, thinking you really meant Damon, not Phil.
Perhaps they should add a Gilles Villeneuve mode too... Oh, imagine the carnage! But we'd get there much faster 50% of the time; the rest of the time the car would have crashed or broken down.
Probably only older Millennials (like me) will remember Damon Hill as a driver, as he retired in 1999 (20 years ago).
For historical context:
Phil Hill retired in 1967 - over 50 years ago, and a few years before Apollo 11.
I'm all for the standard 'kids today' grumblings, but when you use 'Millennials' to mean 'young people' when the eldest of us are pushing 40...
It's like listening to people complain about that newfangled 'Internet', or that rock music is far too modern.
I mean, it's amusing, but I don't think it's always the effect you're going for.
> So, they keep training and testing, making the environments increasingly more difficult. The system never "forgets".
To my mind, the fundamental problem with these kinds of learning AIs is that they need huge numbers of examples to learn from. And the important scenarios - for example, transitioning from water on tarmac to water on ice, or losing control and then regaining it without a crash - don't happen very often.
So rather than learn by having an AI drive around with a human ready to take over, it might make more sense to equip a fleet of cars with the full sensing kit and pay people to drive them as their normal car. There's no automatic driving, just recording of real-world driving, warts and all.
The first test of any new AI algorithm could be to process all the data and pick out the good and bad bits of driving on any single journey. If those decisions agree with human judges, then maybe it's ready to be let loose on the world?
@aaron_low
Thanks for the comma.ai link. The words on their website sound great and all make perfect sense. The kit they're selling doesn't match their aims.
In my previous post I meant that test drivers should be given regular cars fitted with the full kit needed for a future self-driving vehicle, so LIDAR, multiple cameras etc.
The system never "forgets"
I think you don't understand how neural networks work. If the system is "learning" by itself, then when it's exposed to new conditions the entire neural network changes. So yes, it may "forget" some of what it learned, in a way, since after learning from a new situation it may react differently to the same old situation than it used to.
It's like with a human: if I jumped out at you from behind the bushes every time you walked out your front door, eventually you would stop reacting to it. If one day I didn't show up but an axe murderer did, your fight-or-flight response is going to be all fucked up, because you will have come to expect (mostly) harmless me jumping out at you shouting "boo", and not Jason Voorhees.
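That "forgetting" effect can be shown with a toy model. The sketch below is a deliberately tiny illustration (a single weight trained by gradient descent; all the numbers and task names are invented): fit the weight to one situation, retrain it on a conflicting one, and the original response is overwritten.

```python
# Toy illustration of "catastrophic forgetting": one weight trained by
# gradient descent on one mapping, then retrained on a conflicting one.
# All values here are invented for illustration only.

def train(w, pairs, lr=0.1, epochs=200):
    """Fit y = w * x to (x, y) pairs with plain gradient descent."""
    for _ in range(epochs):
        for x, y in pairs:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # situation A: respond with y = 2x
task_b = [(1.0, -1.0), (2.0, -2.0)]  # situation B: respond with y = -x

w = train(0.0, task_a)
error_a_before = abs(w * 1.0 - 2.0)   # near zero: it "knows" situation A

w = train(w, task_b)                  # learn the new situation...
error_a_after = abs(w * 1.0 - 2.0)    # ...and the old response is gone

print(error_a_before, error_a_after)
```

A real network has millions of weights rather than one, but the mechanism is the same: the weights that encoded the old behaviour get repurposed for the new one unless training deliberately mixes old examples back in.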
I've always wondered how autonomous vehicles will handle temporary lane markings. I was driving south on the M6 through a section of roadwork (doesn't narrow it down much, does it) and there were markings for three different lane layouts and it was difficult to figure out which was the one to use especially with the sun low on the horizon.
Yeah, that car is just making an oval on the skid pad at Thunderhill Raceway. I thought something was amiss too when they said the vehicle could complete the course in 39s or some such. Thunderhill Raceway is composed of two tracks, a 3-mile and a 2-mile, that can be combined into one 5-mile track. On the three-mile track a good time is around 2 minutes - actually 2:06 or less for the most advanced run groups. I was expecting to see the car drive a lap on the actual race track. That would be impressive. Throw in some other cars on the track at the same time and I would really be amazed. Oh well.
https://www.thunderhill.com/track-info/track-maps
<quote>
Car crashes are, after all, mostly down to human error, Spielberg, a graduate student in mechanical engineering at Stanford, noted. The academics reckon 94 per cent of them are the result of “human recognition, decision, or performance error.” If an autonomous car can take over in extraordinary situations, such as when a car needs to suddenly swerve, speed up, or brake, crashes could be averted by taking humans out of the loop – assuming the machine-learning software can do better than people.
</quote>
WTF? I'd want the *human* to take over in an extraordinary situation, otherwise we'll have 100% of crashes due to software. We'll also have to rely on the software recognising an 'extraordinary' situation, as Uber can attest to (https://www.theregister.co.uk/2018/03/19/uber_self_driving_car_fatal_crash/)
...as largely caused by the driver disabling the automatic safety features
I believe it was Uber disabling the Volvo's existing features, not the (low-level) driver.
The driver was watching an unrelated video on their tablet according to reports, when they should have been observing.
That's not a racetrack, and it's about a second slower than a human driver. On a 39-second lap, that is horrendously terrible if you want to compare yourself to a human. It's also ONLY left turns. Not really confidence-inspiring when in the real world roughly 50% of corners and curves go the other way.
"Yes, but Americans only race on oval tracks. Admittedly their race tracks are banked, but nothing as difficult as Formula racing."
If you look at an online map, eg Google Maps, putting in the name of the race track, you might be a little surprised at the shape of the tracks there.
"Massive expanse of tarmac, a couple of cones, no other cars, cyclists, or pedestrians. This has absolutely no benefit to increasing safety of automated vehicles?"
It does have some benefit. It's teaching the system what road surfaces work at various speeds and conditions etc., so the system has an "awareness" of what is safe and what is not. Having said that, it's not new, but it is building up a large database that could be useful as a small part of a safer self-driving car.
Many years ago, when 8-bit home computers were all the rage (and colour displays were far from standard), I fitted Hall effect sensors to a Scalextric track and wrote software to do the same thing. It was quite interesting watching it "learn" by driving the car around the track, initially quite slowly, recording the speed input against each sensor passed. The software would then add a speed limit for each track section if the car failed to reach the next sensor in the expected time (i.e. it crashed off). Then I had to teach the program that the speed limit at a sharp curve could not be achieved if it went into the curve at full throttle from a straight, because my simple program wasn't aware of anything other than speed limits at each stage. My cars didn't have accelerometers or LIDAR :-)
Scalextric announces takeover of Formula 1! Teams rush to hire top driving talent from kindergarten.
From what I remember of Scalextric, it took a little maturity in order to learn the optimum strategy was not just pull the trigger on the controller fully and keep it there.
...unless you enjoyed watching the toy fly off the track at the first bend and hit the wall (and what small child doesn't find that amusing far more times in a row than any adult would?)
What am I saying...? It'll be a roaring success. Like that movie, Death Race.
WHAT is the point of it all ?
simple question, serious response needed
if it is JUST to get the human out of the loop, then is it going to be a supplement to a person doing the actual driving ?
is it going to do ALL the driving, and all of the neural training is to allow this to [eventually] occur
is there a REALISTIC date set down yet, when there will be real unmanned vehicles on the roads at all ?
if so, are we looking at separate roads for manned v unmanned at all ?
is ANY data looking towards motorcycles at all, NOT to ride them, but to be aware and acknowledge them ?
TL;DR - I am seriously wondering WHY we are bothering, as it appears to be a vanity project gone bad that no one is able to stop
a car crash in the making as it were :o(
Well, this AI managed to drive a car in a circle. My grandpa had a story about how he and his brother set their dad's Renault driving in circles and jumped out, back in the 1920s. This being on a square parking lot of just the right size, it gave them great pleasure to watch the other customers react when they realized their car was trapped by an unmanned moving obstacle. Well, that's teenagers for you.
Autonomous cars are being designed for an invariant environment, which is a fallacy.
Assumption: people don't step out in front of cars very often. The reason I don't step out in front of a car is because I don't trust that the driver is paying attention, has the skill to stop the car and won't get out after they do stop with a tire iron in their hand.
When the consequences go away, so does behavioral self-regulation. It's just like the Internet.