Mobileye's autonomous cars are heading to California. But they're not going to kill anyone. At least not on purpose

It's hard to know at what point in Amnon Shashua's presentation on autonomous cars I started fearing for my life. But it began in earnest when others started asking questions and he started answering them. As CEO of Mobileye, Shashua has been at the forefront of self-driving car technology for some time. He started the …

  1. Jay Lenovo

    A Question of Blame

    The Three (or Four) Laws of Self-Driving Cars

    1. Autonomous Cars may not injure a human being or, through inaction, allow a human being to come to harm.

    2. Autonomous Cars must obey the orders given to them by human beings except where such orders would conflict with the First Law.

    3. Autonomous Cars must protect their own existence as long as such protection does not conflict with the First or Second Laws

    *4. If humans still get hurt, it is the fault of anyone but the car manufacturer, because they only have good intentions.

  2. Anonymous Coward

    A scam that kills people

    Having watched these autonomous cars trying to drive up and down the freeways on the Peninsula for the last decade (both the 280 and the 101), and seen first hand how they totally screw up traffic when the freeway is not completely empty, I'm just waiting for the first class action lawsuit to be brought against one of the players, when I will gladly file an expert witness amicus for the class action plaintiffs. The software stacks are little more than hacked grad student projects, riddled with bugs, and making the most egregious and simplistic assumptions as to environmental conditions. In fact, typical grad student project code. They work 99% of the time. And the other 1% they will (and already have) kill people.

    The software and hardware should be held to the same regulatory standards as avionics. Only then should it be allowed on the open road. Otherwise keep it purely to closed circuits, which is the only place it actually belongs.

    But the dirty little secret of the whole autonomous car scam is that everyone with any real technical understanding of both the problems and the current (and likely practical) solutions knows that autonomous car hardware and software could never ever reach even the most basic quality standards laid down for avionics, both technical and operational. It's all just hand-waving and very, very contrived demos in very controlled environments. In the real world it not only fails, but fails often.

    It's just a matter of how many people it kills before the whole farrago is stopped. Given how much money the VCs have put into play in the current hype phase, it will probably need at least a dozen or two dead bodies before they are put out of business. Although a couple of dead children, which will eventually happen, should bring it to a conclusion quicker.

    1. Anonymous Coward

      Re: A scam that kills people

      I think the only responsible thing to do for the future of mankind, is to run autonomous vehicles off the road whenever you see them.

  3. Jason Hindle Silver badge

    Autonomous cars will be super safe

    Just as soon as they’re the only cars on the road. Puny AI can’t cope with the Human Factor.

  4. vtcodger Silver badge

    A Mad Max Moment

    The Mad Max moment will presumably occur when a Mobileye vehicle and an Uber vehicle decide to occupy the same space on a road. Two cars enter. One car leaves.

  5. Rebel Science

    Level 5 autonomy is coming but no thanks to Deep Learning

    Unlike the brain, a deep neural net cannot see an object it has never seen before. There will be no fully autonomous cars until we solve the AGI problem. We need breakthroughs in instant object detection, a common sense or causal understanding of the world, prediction and planning. Once you have that, you won't even need fancy sensors like Lidar, radar, infrared cameras and such. A simple movable binocular camera will do. Above all, you won't need to test the system for millions of miles. The machine will learn to drive just like humans do.

  6. Zog_but_not_the_first

    Days of the Space Race

    Reading this reminds me of the explanations and assertions that the Apollo missions were carefully designed to eliminate risk with failure events vanishingly rare etc. Privately, the astronauts reckoned the chances of coming home safely were about 50:50 (and I'm glad they made it).

    See also nuclear power's one-in-ten-thousand-years critical event record...

  7. Nick Kew

    Strawmen


    You tell us Shashua was full of straw. Then you use your own strawmen to argue against him.

    I don't recollect ever seeing that bouncing ball, in [mumble] years of life. You say I should be alert to the possibility of it being followed by a child. I say Shashua is right: that shouldn't affect behaviour. When in charge of a potentially-dangerous machine you should always be alert to risks. You really shouldn't need a bouncing ball to grab your attention.

    What grabs my attention is any broken line of sight. Anything from parked cars to a gate or hedge could conceal the next hazard, and I want to be prepared for it!

    1. Fred Dibnah Silver badge

      Re: Strawmen

      "I don't recollect ever seeing that bouncing ball, in [mumble] years of life."

      I have. The ball was mine, I was about six years old, and if the driver hadn't been alert I wouldn't be writing this.

    2. }{amis}{

      Re: Strawmen

      You're lucky, dude. I had the exact scenario happen to me less than 6 months after passing my driving test, and narrowly missed hitting a 5-year-old chasing his toy.

      I felt guilty about that incident for weeks even though I knew that I did nothing wrong and reacted to the best of my abilities.

    3. Keith Langmead

      Re: Strawmen

      It’s not a matter of relying on the ball to grab your attention, it’s being alert to risks, and when you see the ball being aware that it may indicate another risk which isn’t yet visible.

      ""You don't need to forecast what other vehicles will do," he states boldly."

      I'd argue that any competent driver is reading the road, and by extension forecasting what others around them are likely to do. For instance, you're in the middle lane approaching someone in the left lane, and you can see they are catching up with a slower vehicle ahead of them. Long before you reach them you should be aware that they will likely want to pull out shortly, so you have plenty of time to see if the right lane is clear and decide whether to pull into the right lane, or increase/decrease your speed as appropriate. You shouldn't reach that car and be surprised when they indicate that they want to pull into the middle lane. Autonomous cars should do no less.

      "...predicting what others will do saying it would simply create too much information to effectively compute..."

      What I think that means is it's too hard for them to code, so rather than find a way to do it they're going to ignore the problem.
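
      That sort of forecast is also cheap to compute. A minimal sketch, assuming 1-D positions in metres along the road and speeds in m/s, with an invented 8-second horizon (none of these values come from any real driving stack):

```python
# Hypothetical 1-D forecast: estimate when a car in the left lane will
# catch the slower vehicle ahead of it, and flag a likely pull-out.

def time_to_catch(pos_rear: float, v_rear: float,
                  pos_front: float, v_front: float) -> float:
    """Seconds until the rear car closes the gap, or inf if it never will."""
    closing_speed = v_rear - v_front
    if closing_speed <= 0:
        return float("inf")
    return (pos_front - pos_rear) / closing_speed

def likely_to_pull_out(pos_rear, v_rear, pos_front, v_front,
                       horizon_s: float = 8.0) -> bool:
    """Flag a probable lane change if the catch-up falls within the horizon."""
    return time_to_catch(pos_rear, v_rear, pos_front, v_front) < horizon_s

# Car doing 30 m/s, 60 m behind a lorry doing 25 m/s: catch-up in 12 s,
# outside the horizon, so no flag yet.
print(likely_to_pull_out(0.0, 30.0, 60.0, 25.0))  # False
# Same pair with the gap down to 30 m: catch-up in 6 s, expect a pull-out.
print(likely_to_pull_out(0.0, 30.0, 30.0, 25.0))  # True
```

      A constant-speed extrapolation like this is exactly the "reading the road" a human does long before the other car indicates.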

      1. Brangdon

        Re: Strawmen

        Does the bouncing ball actually change your car's movement? Or does it merely cause you to be more attentive? For me when I see a hazard I may only move my foot over the brake without actually slowing down, while I watch for developments. After the bouncing ball I'd be more ready to stop, and paying more attention to where the ball came from in case a child followed, but I wouldn't actually stop or even necessarily slow down. An autonomous car is presumably always paying attention, always alert, and always ready to stop, so doesn't need to predict as much.
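
        An autonomous version of that graded response (a cue raises readiness rather than braking) can be sketched as a tiny state machine; the state and cue names below are invented for illustration:

```python
# Hypothetical alertness states: a hazard cue never brakes the car by
# itself, it only raises readiness so a confirming cue stops it faster.
CRUISE, COVER_BRAKE, BRAKE = "cruise", "cover_brake", "brake"

def next_state(state: str, cue: str) -> str:
    if cue == "hazard_cue":        # e.g. a ball bouncing into the road
        return COVER_BRAKE if state == CRUISE else state
    if cue == "hazard_confirmed":  # e.g. a child appearing behind it
        return BRAKE
    if cue == "clear":             # hazard passed with nothing behind it
        return CRUISE
    return state

# The ball alone never changes the car's motion:
s = CRUISE
for cue in ["hazard_cue", "clear"]:
    s = next_state(s, cue)
print(s)  # cruise
```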

  8. Anonymous Coward

    "No city would accept autonomous cars if they create traffic jams,"

    The man obviously hasn't had any dealings with British local government, who actively seek to create traffic jams that they can use in their War on Cars, and for the promotion of their beloved buses.

    If crap-headed junctions don't create enough congestion, then they deliberately re-phase the traffic lights, or create more poorly utilised bus lanes and squeeze more cars onto less tarmac. Or they institute 20 mph speed limits that actually worsen safety, and when that becomes public knowledge claim they "can't afford to remove them". Or they simply don't repair potholes.

  9. ianmcca

    At its heart he's denying the value of predicting the actions of others. I know I predict while driving - constantly and unconsciously, using judgements of their position, speed and acceleration, but also the make and condition of the car, the age and sex of the driver, and where we are.

    On the other hand, I can see (by their road behaviour) that very many other drivers don't do this yet still drive safely, so I'm not sure what I gain. Perhaps better fuel economy, a smoother journey and a smug feeling of self-satisfaction, and not much else.

    So I think he's right that the design starting point has to be "don't cause the accident", and a basic level of that will work. But there must be a gradual extending of the predictive aspect of not causing an accident beyond simply that of not pulling the vehicle where another vehicle is inevitably going to be.

    1. muddysteve

      "At its heart he's denying the value of predicting the actions of others. I know I predict while driving - constantly and unconsciously, using judgements of their position speed and acceleration, but also the make and condition of the car, the age and sex of the driver, where we are."

      Of course you are. The central premise of driving is that you only drive into a space that will be empty when you get there, which involves the prediction of what everything around you is going to do. If you just drive by reacting to others, then you are an accident waiting to happen. In fact, I am not even sure that is possible.
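
      That premise can be written down almost verbatim. A toy sketch, where the positions, speeds and the 5 m margin are invented for illustration rather than taken from any production system:

```python
# "Only drive into a space that will be empty when you get there":
# extrapolate every other road user at constant velocity to the moment
# we would arrive, and require a clear margin. Positions are 1-D metres
# along the road.

def space_clear_on_arrival(target_pos, arrival_t, others, margin=5.0):
    """others: list of (position, velocity) pairs. True if no one is
    projected within `margin` metres of target_pos at arrival time."""
    for pos, vel in others:
        projected = pos + vel * arrival_t
        if abs(projected - target_pos) < margin:
            return False
    return True

# Merging into a gap 40 m ahead, arriving in 2 s.
# A car at 20 m doing 12 m/s projects to 44 m: inside the 5 m margin.
print(space_clear_on_arrival(40.0, 2.0, [(20.0, 12.0)]))  # False
# The same car doing 7 m/s projects to only 34 m: the gap stays clear.
print(space_clear_on_arrival(40.0, 2.0, [(20.0, 7.0)]))   # True
```

      Note that even this crude check is already a prediction of what everything around you is going to do, not a mere reaction.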

      1. Anonymous Coward

        well..

        "The central premise of driving is that you only drive into a space that will be empty when you get there"

        In my experience, the central premise of driving is "I'm a great driver, everyone else is terrible".

        1. Anonymous Coward

          Re: well..

          "In my experience, the central premise of driving is "I'm a great driver, everyone else is terrible"."

          But not in Sweden.

          Seriously, a study by psychologists found that in US trials a majority overrated their driving skills, while in Sweden a majority underrated them. US != the world.

  10. Anonymous Coward

    I rode a motorcycle.

    Seeing that image brought back a memory.

    I was playing the PC game 'Road Rash'. I had hit a car that swerved out, and the cut scene showed a motorcyclist hitting a car that stopped abruptly: the rider flew over the handlebars, over the car, and onto the roadway in front of the car. The vehicle's driver got out, dragged the immobile rider to one side, then returned to their vehicle and continued on their way. {Ouch} That hit home, much the same as what happened to me. I turned off the cut scenes from that point, but in-game I used all the nitro I could muster from then onward.

    Good intentions would include NOT using a technology before it has been tested to the point that it DOES NOT make mistakes, ever; instead, they're pushing ahead ASAP.

  11. }{amis}{

    Smart??


    This kind of rant just goes to show that idiocy and intelligence are not mutually exclusive.

    You only have to look at the history of car models that have caused deaths to know that people will not tolerate a machine that might kill them; it's an emotional response, and no amount of logic or stats will counter that kind of behaviour.

    1. Nick Kew

      Re: Smart??

      Nonsense! People are very insistent on their inalienable right to a machine that might kill them. Cars being an obvious case in point, killing thousands each year and terrorising whole populations into keeping children and vulnerable folks out of danger - and hence denying them freedom to develop.

      It's only looking from outside an obsessed culture that you see such obsession and its absurdity. Like when we in Blighty view the US and its guns.

  12. Wupspups

    Autonomous cages? Why the image of a motorcycle?

    Interesting article. I've often wondered how autonomous cars would cope in a mixed traffic situation with assorted styles of meatbag driving. I suppose the real test would be Paris in the rush hour round the Arc de Triomphe, or Milan at any time.

    And what's with the pic of an Andrea Dovizioso MotoGP crash? Doesn't make sense.

  13. stevewolves

    Too many "accidents"

    A shame that the motor industry press still use the word "accident" when surely the more accurate term would be "collision"?

    eg. "If an accident is inevitable", no, a collision could be inevitable, an "accident" could be avoided.

    Surprised this wasn't picked up on, especially on an article regarding risk.

    1. Paul

      Re: Too many "accidents"

      Here in the UK, the police say "incident". There are very few actual accidents, by which I mean a freak combination of wholly innocent steps which couldn't have been foreseen to lead to the crash.

      Most incidents are due to specific mistakes which could have been prevented and which led to an unavoidable collision.

    2. Anonymous Coward

      Re: Too many "accidents"

      In Russian, it's an "unfortunate event", with no assumption that it was some kind of inevitable thing that just happens. But then there are some very educational videos on YouTube that tell you that driving in Russia or Ukraine is not to be recommended unless you have an APC, and possibly not even then.

  14. handleoclast

    Engineering mindset?

    This typical engineering "I am right" mindset is often what leads to technological breakthroughs but, as has been proven time and time again, it is dangerous when applied to a larger context.

    All the competent (and better) engineers I have worked with do not have an "I am right" mindset. They query their assumptions. Although they might react instinctively and dismiss differences of opinion, later they query their assumptions. It is the bad engineers who insist they are right come what may.

    Nor is the "I am right" mindset confined to bad engineers. You can find it in bad managers, bad marketers, bad economists, bad presidents, etc. It is a dangerous mindset in any profession and it occurs in all professions.

    BTW, I am right about this and nobody can change my mind. :)

  15. strum

    When automobiles were first introduced in the US, people (pedestrians) were regularly injured or killed.

    So the auto industry invented the 'crime' of jaywalking - making it the pedestrians' fault.

    I suspect something will happen, someday, to the auto auto industry, making every collision the human driver's fault.

  16. BobC

    Avionics is not a good comparison.

    A prior comment stated that autonomous vehicle software should meet the same standards as avionics. This opinion is wrong on at least two separate levels.

    First, in the US, the FAA has two certification paths for avionics hardware and software. 1. Prove it is accurate and reliable (typically via formal methods), then test enough to validate that proof. 2. Test the hell out of it, at a level 5x to 10x that done for the more formal path.

    Small companies are pretty much forced to use the second path more than the first. Where I worked, we relied on the second path and aspired to the first. We had awesome lab and flight test regimens that the FAA frequently referred to as "best practices" for the second path.

    Second, the risk of death due to an avionics failure (per incident) is massively higher than it is in cars, especially given the ever-increasing level of passenger safety measures present in modern vehicles. The fact that aviation death counts are so low is due more to the relatively tiny number of vehicles involved compared to cars (on the order of ~100K cars to each plane).

    Autopilots are fundamentally simpler than autonomous driving: Fully functional autopilots have existed for well over half a century (the L-1011 was the first regular commercial aircraft to do an entire flight autonomously, including takeoff and landing). The primary reason for this achievement is the large distances between planes. In-air collisions simply don't happen outside of air shows.

    The massively greater complexity of the driving environment (separate from the vehicle itself) forces the use of statistical methods (machine learning), rather than relying solely on formal, provable rules. If it isn't clear already, this means that autonomous driving systems will be forced to use the second path to certification: Exhaustive testing.

    Most of that testing must occur in the real world, not in a simulator, because we simply don't yet know how to construct a good enough simulator. The simulator will always miss things that exist in the real world. One goal of ALL real-world self-driving tests MUST be to gather data for use by future simulators! Just because simulators are hard is no excuse to avoid building them. We just can't rely on them alone.

    That said, all such on-road tests must be done with a highly trained technician behind the wheel. It is VERY tough to remain vigilant while monitoring an autonomous system. Been there, done that, got the T-shirt, hated every minute. In my case it was operating a military nuclear reactor. Boring as hell. Terribly unforgiving of mistakes. Yet it is done every minute of the day with extreme safety.

    I'd focus on the test drivers more than the vehicles or their technology. Get that right, and we'll earn the trust needed to improve the technology under real world conditions.

  17. tk666

    Low Bar

    Luckily for the autonomous cars, they are competing with totally crap human drivers. It's difficult to imagine a system that could drive worse than 90% of humans.

    I live in California.

  18. imispgh2

    Stellar article. It is great to see someone in the press who has the courage not to echo the echo chamber.

    The 30 billion miles is actually low. RAND says 500B at 10X better than a human, and Toyota says 1 trillion.
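
    For a sense of where figures this large come from, the statisticians' "rule of three" gives a crude lower bound: to show with roughly 95% confidence that a failure rate is below p, you need about 3/p failure-free trials. The sketch below assumes the commonly cited US baseline of about one fatality per 100 million miles; it is far cruder than RAND's full analysis (which compares two uncertain rates, hence their larger figures):

```python
# Rule-of-three bound on the failure-free miles needed to claim a
# fatality rate some factor better than the human baseline. The
# baseline is an assumed round number, not a figure from RAND or Toyota.

HUMAN_FATALITY_RATE = 1 / 100e6   # fatalities per mile (approx. US baseline)

def miles_to_demonstrate(times_better: float) -> float:
    """Failure-free miles needed to bound the rate at baseline/times_better
    with ~95% confidence."""
    target_rate = HUMAN_FATALITY_RATE / times_better
    return 3 / target_rate

print(f"{miles_to_demonstrate(1):.0e}")   # 3e+08 miles just to match humans
print(f"{miles_to_demonstrate(10):.0e}")  # 3e+09 miles to show 10x better
```

    Even this optimistic bound runs to billions of fatality-free miles, which is the gap that simulation is supposed to close.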

    My name is Michael DeKort. I am a former systems engineer, engineering and program manager for Lockheed Martin. I worked in aircraft simulation, the software engineering manager for all of NORAD, the Aegis Weapon System, and on C4ISR for DHS. I also worked in Commercial IT and Cybersecurity.

    I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS Deepwater program post 9/11 -

    I am also a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force - (SAE asked me to join the group because of my POV on this area and my background)

    It is a myth that the use of public shadow driving to develop autonomous vehicles will ever come close to actually creating one. You can never drive the one trillion miles, spend over $300B, or harm as many people as this process would harm trying to do so. What happens when you move from benign and hyped scenarios to running thousands of accident scenarios thousands of times each? The answer is to leverage FAA practices and use aerospace/DoD-level simulation.

    More details in my articles here

    Impediments to Creating an Autonomous Vehicle

    Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI

    DoT, NHTSA and NTSB are Enabling Autonomous Vehicle Tragedies

    Tesla and Elon Musk are killing people for no reason and will never get close to producing a truly autonomous vehicle
