Remember the Uber self-driving car that killed a woman crossing the street? The AI had no clue about jaywalkers

The self-driving Uber car that hit and killed a woman walking her bike across a street wasn’t designed to detect “jaywalking pedestrians.” That's according to an official dossier published by the US National Transportation Safety Board (NTSB) on Tuesday. The March 2018 accident was the first recorded death by a fully …

  1. hitmouse

    Re: "Fall Creators Update"

    Cue Google updating its CAPTCHA images to detect jaywalkers.

    1. Sir Runcible Spoon

      Re: "Fall Creators Update"

      Couldn't the AI have at least (in cases of doubt) sounded the horn to alert the unknown object to its presence?

      1. Drew Scriver

        Re: "Fall Creators Update"

        It could have, but judging by the various classifications and corrections reflected in the timeline, doing so would probably mean that autonomous vehicles would sound their horns every couple of seconds in a busy environment. Traffic in the western world would resemble the cacophonous traffic in India and it's doubtful people would accept that.

        In addition, assuming that the vehicles would also respond to other vehicles sounding their horns, this would quickly lead to a stalemate.

        By the way, I highly doubt audible cues are currently taken into account...

        1. Loyal Commenter Silver badge

          Re: "Fall Creators Update"

          The more pertinent question should be: if it detected an unknown object that may or may not have been going into its path of travel, why didn't it slow down (unless it was already going much faster than the 39 mph it was doing when it hit)?

          If a human driver saw something in the road that they couldn't identify, ignored it, didn't slow down and then hit that something after it turned out to be a person at 39 mph, and killed them, they'd quite possibly be facing a charge of causing death by careless driving.

          Hazard perception is a major component of the driving theory test, and one of the most important skills when driving. Humans aren't actually that great at it, hence the need for driving lessons beyond those required to learn how to control a vehicle, and it's the sort of thing a computer should be better equipped to deal with.

          As I said, at the very least, you'd expect the response to an unidentified hazard to be to slow down until it has been better evaluated. If that results in a self-driving car erratically braking, then this suggests that the hazard identification technology isn't yet good enough, and shouldn't be on the road, not that the brakes should be disabled...

          I'm sorry to tell you, Mr Uber, but on this occasion, you have not passed your driving test.

          1. Loyal Commenter Silver badge

            Re: "Fall Creators Update"

            I think the long-and-short of it is: until "AI" can produce something that at least approximates theory of mind, it's going to be no use in applications that require theory-of-mind to work, such as anticipating what other sentient beings in the environment may do, be they people, animals, or, if self-driving-cars do end up with proper AI, other vehicles.

            1. Lusty

              Re: "Fall Creators Update"

              "approximates theory of mind"

              Nope. It's perfectly feasible to do this without AI: just slow the car to a stop in any and all circumstances when the lane is not clear, then let the human take over. It's not a requirement for cars to proceed at speed or at all, and until we get that out of the design we'll keep failing. It's the same reason humans keep crashing - we think it's necessary to go at speeds which are often too fast for conditions. It's not.

              1. Loyal Commenter Silver badge

                Re: "Fall Creators Update"

                I don't disagree with that at all. I can't see anyone buying a "self-driving-car" that doesn't drive itself though, because what you just described is something that is likely to spend most of its time stopped.

                1. Richocet

                  Re: "Fall Creators Update"

                  That is why they turned off the part of the program where the car slowed down if it spotted a potential danger it was unsure of, or braked to avoid a hazard.

                  They couldn't solve the actual problem of AI driving so they faked it. More reckless than what Boeing did with their MCAS on the 737 Max, and someone should be liable for the decision.

                  Potential solution: put the Uber test car and the safety driver in a tiny, frail vehicle that has the lowest legal crash protection rating. That minimizes damage to third parties in the event of a crash, and gives Uber and the safety driver a stronger incentive to avoid all crashes.

                  1. ICPurvis47
                    Childcatcher

                    Re: "Fall Creators Update"

                    Jeremy Clarkson's solution of having a twelve inch long spike protruding from the centre of the steering wheel might go some way to improving the safety of Uber's test vehicles as well.

                    1. Joe 35

                      Re: "Fall Creators Update"

                      I thought that was Ralph Nader?

              2. Sam not the Viking Silver badge

                Re: "Fall Creators Update"

                As reported elsewhere, 'Jaywalking' is known as 'crossing the road' almost everywhere except the USA. Is this not the great problem with 'AI' or 'self-driving' cars? Anyone can, and many will, just walk out in front of these vehicles to force a stop. It will become a sport. Cars will have to crawl in populated areas.

                1. Dazed and Confused

                  Re: "Fall Creators Update"

                  > 'Jaywalking' is known as 'crossing the road' almost everywhere except the USA.

                  Even in parts of the USA it is pretty common for people to walk across the road and expect drivers to stop.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: "Fall Creators Update"

                    "[...] for people to walk across the road and expect drivers to stop."

                    In a recent case in London a woman walked across a road while looking at her phone. A cyclist collided with her. The judge ruled that the law effectively says a pedestrian has the right of way.

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: "Fall Creators Update"

                      The judge said that the cyclist knew the road wasn't entirely clear and continued to ride on. He shouted at her to move out of the way, she didn't and he ended up hitting her. Wouldn't it be obvious for you to slow down, if you saw her?

                      Not sure if part of the reason was because he was "legally unrepresented at the initial stages and failed to launch a counter claim against the pedestrian". As in, because he didn't claim damages he was ultimately held liable.

                    2. Mike 137 Silver badge

                      Re: "Fall Creators Update"

                      "The judge ruled that the law effectively says a pedestrian has the right of way."

                      English law does say, and always has said, this explicitly, for very good reason.

                      Mobility and staying alive are basic human rights. Driving is not one. Plus, it's not up to the weaker party to protect themselves from the stronger - it's up to the stronger party to be careful of the weaker. That's called qualifying as a member of the human race, and anything else results in mayhem.

                      The concept that vehicles have precedence is a misunderstanding. The real concept is that the party with the power to hurt the other has precedence. Not the kind of world I'd choose to live in.

                      1. Kiwi
                        Boffin

                        Re: "Fall Creators Update"

                        The concept that vehicles have precedence is a misunderstanding. The real concept is that the party with the power to hurt the other has precedence. Not the kind of world I'd choose to live in.

                        I think many of us live under the basics of maritime law - essentially 'the bigger vessel has the right of way'. My understanding is that's based on the fact that a larger vessel is likely to have longer stopping distances and slower turning rates, and thus a much harder time avoiding a collision.

                        A truck stops a lot slower than a bike, and a bike can go places that'd require a team of people to remove a car from.

                        I'm not sure if under NZ law a pedestrian is given precedence or not, but unless they walk in front of a car (or other vehicle) leaving no room to stop, the driver will be guilty of 'failing to stop in the clear road ahead' and 'careless driving causing injury (or death)'. Personally, seeing the herds of mindless twits walking out onto the roads after a thugby game, I'd be all for a law allowing us to mow some of them down. 30 or 40 might walk as a group into a 70kph road without one of them checking for oncoming traffic. But it'd be the drivers who have to live with it, not the idiots who walked out.

                        1. Anonymous Coward
                          Anonymous Coward

                          Re: "Fall Creators Update"

                          @kiwi and 'the bigger vessel has the right of way'

                          I am thinking precedence is more the least maneuverable rather than size, however where it comes to pedestrians then they are more likely to be damaged than say the driver, which should be the main consideration. Two people can walk into each other and both are equally to blame, however if one is carrying something that makes the collision more dangerous and they hit anyone then they should be more responsible, since they increased the danger level but failed to be more cautious

                          The intention should never have been for the vehicle controller to run people over and yet this was the manufacturer's chosen behaviour

                          Yes people walking in front of you are annoying but the vehicle controller carries the onus to avoid collisions more than the pedestrian since the car. That the AI was intentionally configured to ignore its onus says IMHO that the manufacturer is responsible for the collision and should be prosecuted for murder, i.e. malice aforethought resulting in death

                          1. Kiwi
                            Pint

                            Re: "Fall Creators Update"

                            I am thinking precedence is more the least maneuverable rather than size,

                            That would also make sense :)

                            however where it comes to pedestrians then they are more likely to be damaged than say the driver, which should be the main consideration.

                            One of the things I've tried to get through to some people with walking/riding and driving. If a truck fails to give way, and you could've avoided the crash, will it matter who had the right of way or how culpable the truck driver was? If a car hits you while you're riding, does it matter that they should've given the ROW/not been tailgating etc? You'll be lucky if you can walk home, but your vehicle will be towed at best. And even if their insurance pays out in full next week, you still have to get to work tomorrow.

                            I often base my assessment of risk and road-rules not on what the law says but on 'how much could it hurt'. Even at low speed, a car hitting a person could hurt a lot. As the vulnerable party, if I wish to remain at my current state of pain, I must take care to avoid being in an accident even if I am completely innocent.

                            We have some very cold and wet weather here today (so much for the 'long hot summer' we were warned of just a week ago!), and I had to do some driving. A couple of instances where I had the ROW but I yielded to others for the simple reason that under the conditions they may not have seen me, and worse they may have been travelling too fast for the conditions. Doesn't matter to me if I had the ROW or they had great full-cover instant payout insurance, I'd still lose my car.

                            Not defending Uber, but I do say that we need to be responsible for our own safety and sometimes let someone else go even when we have the right (especially when on foot).

                            The intention should never have been for the vehicle controller to run people over and yet this was the manufacturer's chosen behaviour

                            I don't think they chose it from a deliberate choice but from some bad thinking, i.e. trying to determine the class of an object and forgetting about its movement. I do get that identifying something as a car or chunk of roadside furniture changes the threat landscape somewhat (cars likely to move out, furniture likely to stay put but could hide a kid), but tracking whether or not something is moving is still very important. The movement track should take absolute precedence.

                            Don't get me wrong, I think someone at Uber should be held very accountable for this, preferably someone higher up but especially anyone who was given a warning that such programming could be a problem and opted to ignore the warning. Prison time should be a very real possibility especially for those who signed off on stopping the emergency braking. As far as I'm concerned, the first thing you teach newbies is how to stop a vehicle.

                            Yes people walking in front of you are annoying but the vehicle controller carries the onus to avoid collisions more than the pedestrian since the car.

                            I'm guessing you were going to say something about the level of damage the car can do - yes I agree. As a driver it is my responsibility to do my best to avoid harming others and I share some responsibility for protecting them from their mistakes - if I can see someone doing something stupid I don't barrel into them at high speed, I back off or even brake. Even if my assessment of their actions is wrong, better to slow and be wrong than not slow and be wrong.

                            That the AI was intentionally configured to ignore its onus says IMHO that the manufacturer is responsible for the collision and should be prosecuted for murder, i.e. malice aforethought resulting in death

                            I still come back to thinking there were some messed up actions/mindsets, but they did it from what were the best intentions. OTOH, while writing that last sentence I did get a mental reminder of some of Uber's other past actions, so maybe they were straight-out trying to get to market quicker and were less concerned about the well-being of others. Certainly, putting a car on the road with some safety systems disabled and the human driver given some extra distractions increased the risks to what, to my mind at least, was an unacceptable level except on a controlled training ground. I would not be too upset if Uber was sued into oblivion, and many among the top-level managers/CxOs etc. had plenty of time to study the inside of a small cell.

                            1. Anonymous Coward
                              Anonymous Coward

                              Re: "Fall Creators Update"

                              @ kiwi again

                              I agree with you but would like to add that most human drivers believe they are safe to be around pedestrians, and this has prevented true segregation of human/vehicular traffic.

                              Now with road traffic there are already thousands of deaths each year due to the inherent issue of humans and machines sharing the same space; when you add in AI intentionally programmed to ignore behaviour that makes a human driver safer, the deaths can only go up.

                              Ultimately AI should be limited to routes where there is zero chance of causing injury to anyone but the passengers. I am envisioning flying or underground routes, but again we need only look at the drone/aircraft issue to see that there are no routes available, barring tunneling, that don't require some interaction with humans. So for AI to be safe, either it is over-safe, meaning AI-controlled travel would be necessarily slower than driving yourself, or all driving is AI and humans are prevented from straying onto routes.

                              I would not mind AI-only vehicular control, since it is obvious that even the best human drivers are gambling every time they get in the driver's seat. But since most believe, against all evidence, that they are safe, it is unlikely that AI, which could be safe at ridiculous cost, will ever replace the current situation, where it is accepted that travelling is inherently dangerous but the cost of safe human/machine segregation is prohibitive.

                        2. Lusty

                          Re: "Fall Creators Update"

                          "I think many of us live under the basics of maritime law - essentially 'the bigger vessel has the right of way'. "

                          Maritime law, or Colregs as they are known, don't say any such thing, and a supertanker or container ship will absolutely give way to a small sailing boat - I've seen it many times from the deck of my small sailing boat. The exception is that if either vessel is constrained in some way (draft, manoeuvrability etc.) then they need to make that known via signalling and will get precedence.

                          What the colregs also say, which is the smartest part, is that no vessel has "right of way" and that in collisions all parties share responsibility. To put this in car terms, I could be driving the wrong way up the M40 and you're still to blame if you hit me, because you should also have been looking where you're going and at a reasonable speed to be able to avoid collision. Works better at sea, but it's definitely words to live by. Personal responsibility is key.

                          1. Kiwi
                            Pint

                            Re: "Fall Creators Update"

                            Maritime law, or Colregs as they are known, don't say any such thing,

                            Thanks for the correction. I am not a sailor myself, nor have I bothered to look that up. I have been told it by various individuals (some with no obvious common link) who have significant seagoing experience - i.e. not on 'small sailing boats' but on larger cargo ships, medium-large passenger ferries (Aratiki, Arahura and Awatere to be specific) and other decently large ships (NZ's frigates are probably the smaller of these). That said, they're also bike riders so could've been simplifying :)

                            Then again, Part B S9 does state "Small vessels or sailing vessels must not impede (larger) vessels" (at least according to wikipedia), however that is related to narrow channels - given the 3 vessels named above, that's quite possibly the context intended.

                            I do see further down that a power-driven vessel must give way to a sailing vessel, but while a small power-driven boat has the clear ability to manoeuvre, a tanker takes a bit longer to stop or turn than your boat would!

                            Of course, sanity dictates that if it can do you great harm and might have trouble NOT hurting you, you keep out of its way! :)

                            But thanks for the correction.


                2. Fungus Bob

                  Re: "Fall Creators Update"

                  Cars already crawl in populated areas...

                  1. Spit The Dog

                    Re: "Fall Creators Update"

                    And so they should. Powered craft on water slow and give a wide berth to unpowered craft. There's no entitlement to road space just because you're driving a motor vehicle. This belief currently leads to 1800 fatalities annually in the UK.

                    1. Anonymous Coward
                      Anonymous Coward

                      Re: "Fall Creators Update"

                      "Powered craft on water slow and give a wide berth to unpowered craft."

                      IIRC the "steam gives way to sail" rule is no longer universal. Very large ships require a considerable forewarning to be able to slow or change course significantly.

                    2. Colin Wilson 2

                      Re: "Fall Creators Update"

                      According to https://www.statista.com/statistics/324006/international-and-uk-pedestrian-deaths/ in the UK there are 7.1 pedestrian deaths per million population per year. Given a population of 66 million, that's around 470 pedestrian fatalities.

                      That's 470 too many of course :(

          2. Kiwi
            Flame

            Re: "Fall Creators Update"

            The more pertinent question should be: if it detected an unknown object that may or may not have been going into its path of travel, why didn't it slow down (unless it was already going much faster than the 39 mph it was doing when it hit)?

            I came here pretty much to say the same thing.

            There's other options too. It could've alerted the operator, slowed, stopped... Lots of things.

            If you're playing solitaire and there's something on your screen you can't identify immediately, it's probably safe to ignore. If you're driving and there's something you cannot identify, it's not safe to ignore. You give it more attention until you can ID it, or at least tell if it's fixed/maybe moving/maybe heading your way. This is absolute basic safe driving, spot something, rate it as a hazard and respond accordingly.

            I don't know why the person looked away but there can be valid reasons for it while driving. They may also have been using a phone. But regardless, if Uber were stupid enough to make their car only spot peds at designated crossings then they're still at least 50% culpable. Have the designers never driven in an urban environment? Pedestrians are stupider than sheep, and certainly by and large don't respect designated crossings. How can they be designing cars/control systems and not know this?

            (I also wonder if the 33 vehicles that drove into a self-driving Uber were the reason that they turned the erratic auto-brakes off....)

            1. skeptical i
              Meh

              Re: "Fall Creators Update"

              re: "Pedestrians are stupider than sheep, and certainly by and large don't respect designated crossings." Hmmm, while many pedestrians (and drivers) may be dumb as dirt, most pedestrians in my observation and experience are acting like pedestrians on streets designed for cars. Streets that only have signals or "proper crosswalks" every quarter mile (forcing wheelchair users, pram pushers, and other pedestrians to go out of their way to cross at a signal then double back to their destinations), have shite or no sidewalks or proper lighting, encourage car speed by design (multiple wide lanes, median separation), have "free right" turns that don't require vehicles to even slow down to make right turns at intersections, and so on. It is not clear, given the utter lack of appropriate, safe, and convenient pedestrian facilities in many areas, what pedestrians should do other than do the best they can in a hostile environment. It would be helpful if more drivers were aware that they are only one class of road users -- i.e., not the only road users -- and showed a modicum of care.

              1. Kiwi

                Re: "Fall Creators Update"

                It would be helpful if more drivers were aware that they are only one class of road users -- i.e., not the only road users -- and showed a modicum of care.

                I agree with that statement, however..

                In Wellington there was a big problem with pedestrians being hit, especially by busses (which tend to be doing short distances between stops and thus not getting up to full speed). The speed limits in some areas were reduced to 30kph yet that didn't seem to help. Now in some areas there are physical barriers along some footpaths so peds are less likely to walk out into the street. The problem was people walking out without looking, some suddenly coming from a shop entrance and heading straight across a busy road, others turning suddenly, others breaking away from a group of people chatting by the roadside and so on. Some, maybe, where the drivers could see them and could've taken action, but most through their own fault.

                In NZ with the exception of designated motorways, it is IIRC somewhat legal to walk on the road should the footpath not be available. On motorways it's illegal to walk at all except in some exceptional circumstances. But even then, people are expected to follow the basic rules and not walk in front of traffic where they can avoid it, and to wait for a suitable gap in traffic. Given the number of traffic lights around, it's likely that such a gap will come (and as I said elsewhere, there is a distance at which you must use a pedestrian crossing or cross at traffic lights).

                It is also illegal to impede the flow of traffic unnecessarily, so a person walking out and forcing traffic to slow is committing an offence. More fines for that might also make people more sensible.

                Cars, bikes, and especially trucks and busses can mess up your day. Take some time and don't take the risk.

                We thankfully have few (if any) 'free turn' signs in service in NZ (I cannot recall seeing one in the last 20 years!), you can cross where you like (unless x metres from a crossing, IIRC no more than 20), we do have more crossings in areas where they're more needed but expect people not to be too lazy (and in case you don't know I am often a 'wheelchair pusher', being involved in elder care 'n all that). While traffic flows and how many cars are expected to pass an area is a consideration, many of our shopping areas have 30kph speed limits (just under 20mph), speed humps (often as part of pedestrian crossings) and road-narrowing features (parking, islands, markings, signs....).

    2. Geoff332

      Re: "Fall Creators Update"

      Why not go the whole hog and use CAPTCHA to crowd-source moral judgement. Google could quickly build up a whole library of trolley problems and put them to the wisdom of the internet and use that to train their AIs.

      I'm joking, but I'd bet at least a whole pint that someone has thought of this idea before, and taken it seriously.

      1. Kiwi
        Pint

        Re: "Fall Creators Update"

        I'm joking, but I'd bet at least a whole pint that someone has thought of this idea before, and taken it seriously.

        Look down.

        In the David Tennant "Dr Who" episode "Blink" the Dr makes the comment "Look to your left", to which Larry - busily writing stuff down - says "What does he mean by look to your left? I've written tons about that on the forums. I reckon it's a political statement."

        In this case.. Look down the screen a bit..

  2. Oengus

    Surely

    We know the radar and lidar detected an object multiple times. Surely the "AI" detected the movement and could predict the path and speed. That would have given rise to at least a high risk regardless of what the object was. Wouldn't the default state be: unknown object moving into path of SUV; do something to avoid the unknown object?
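
    Something like this toy sketch (all thresholds and numbers are hypothetical, nothing to do with Uber's actual code) is enough to act on an unknown object from two range fixes alone:

    ```python
    # Hypothetical sketch: two timestamped fixes on an unidentified object are
    # enough to estimate a velocity and ask whether it is heading into our lane.
    def collision_risk(fixes, lane_half_width=2.0):
        """fixes: (t, x, y) tuples in car-relative metres; x across, y ahead."""
        (t0, x0, y0), (t1, x1, y1) = fixes[-2], fixes[-1]
        dt = t1 - t0
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt  # crude constant-velocity estimate
        for s in (0.5, 1.0, 1.5, 2.0, 2.5, 3.0):  # project a few seconds ahead
            if abs(x1 + vx * s) < lane_half_width and y1 + vy * s > 0:
                return True  # projected into our path: slow down or brake
        return False

    # e.g. an object 5 m to the side drifting towards the lane at 2 m/s:
    # collision_risk([(0.0, 5.0, 30.0), (0.5, 4.0, 29.0)]) -> True
    ```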

    1. diodesign (Written by Reg staff) Silver badge

      Re: Surely

      If you look at the logs, you'll see the AI thought Elaine was, at times, a static object - it didn't expect her to move into its path of travel. She did, though, because she was a person crossing the street. Something the AI didn't take into account at all until the final second or so. Or so it seems.

      And don't call me Shirley.

      C.

      1. Anonymous Coward
        Anonymous Coward

        Re: Surely

        What worries me is that it's not sampling the environment fast enough to detect where an object is and in which direction it is moving, regardless of what it is, when it's in front of the car. Instead it attempts to classify an object and infer most of its position and path from that classification.

        I don't know if mimicking how a human driver's brain works is the right solution. We don't have ranging devices, so we have to do without. If we were like bats, we would use them to map the environment.

        I wonder what would happen if a truck loses its load and "strange" objects start to roll and bump along a road - will AI cars happily run into anything because it's unexpected? If a tree crashes onto a road, will the AI decide it should not be there, especially since it would have been trained with vertical trees, not horizontal ones?

        1. DaLo

          Re: Surely

          I think the biggest issue is false positives. These systems can detect these objects, and easily avoid them (as long as they have a radar reflection and/or a lidar map). However allowing the vehicle to do that would result in a horrendous ride that could be dangerous to other vehicles. It'd be jumping like a kangaroo at times.

          So it tries to risk profile detected objects - similar to how humans do (and we often get it wrong). So if we see a human on the side of the road we look for subtle clues in body language as well as whether they are looking at us to determine whether they are about to cross the road in front of us, or pull out at a junction. Just determining a path is not always enough of a clue as to the risk.
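
          One common way to square that circle (purely illustrative - I've no idea what Uber's stack actually does) is to react mildly to a single suspicious frame but only brake hard on sustained evidence:

          ```python
          # Illustrative only: ease off immediately on one suspicious frame, but
          # commit to hard braking only when the hazard score stays high across
          # several consecutive frames, which cuts kangaroo-style false positives.
          from collections import deque

          class HazardFilter:
              def __init__(self, frames=5, threshold=0.7):  # made-up tuning values
                  self.history = deque(maxlen=frames)
                  self.threshold = threshold

              def update(self, hazard_score):
                  """hazard_score: 0..1 per-frame estimate from the perception stack."""
                  self.history.append(hazard_score)
                  if hazard_score <= self.threshold:
                      return "continue"
                  full = len(self.history) == self.history.maxlen
                  if full and min(self.history) > self.threshold:
                      return "brake"   # sustained evidence: commit to stopping
                  return "coast"       # react mildly at once, without lurching
          ```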

          1. Doctor Syntax Silver badge

            Re: Surely

            "So if we see a human on the side of the road we look for subtle clues in body language"

            Not just humans. The same needs to be applied to other animals. In fact, if the animal is proceeding along the road it's as well to allow for an unexpected change of direction and avoid alarming it. Yes I do live where there are a number of horsey folk around - hazards on the road but useful if you grow veg.

            1. MacroRodent

              Re: Surely

              > The same needs to be applied to other animals. ...

              May be difficult. Round here the moose are infamous for jumping on the road just as a car is coming, even when they probably see the car. I guess the problem is that the moose brain cannot comprehend that any object can move as fast as a car. The only safe course is to seriously slow down when you see a moose anywhere near the road.

              1. Yet Another Anonymous coward Silver badge

                Re: Surely

                >The only safe course is to seriously slow down when you see a moose anywhere near the road.

                Same here with deer, the problem is that their strategy to escape wolves is to suddenly switch direction across the path of the pursuer because they can turn faster.

                We would have to train self-driving cars to be smarter predators than wolves - which might not have a happy ending

                1. TheRealRoland
                  Coat

                  Re: Surely

                  Coffee & Cars...

              2. Drew Scriver

                Re: Surely

                This essentially means that AI/ML will have to differentiate between moose, elk, deer, fawns, bucks, cows, etc - as well as their young and how this may affect their behavior.

                I highly doubt the city slickers who tend to program the systems are even aware that deer and elk are not the same animal. Or that a horse by itself should be categorized differently than a horse that's being ridden.

                They apparently already need traffic to be so predictable that a pedestrian who isn't crossing properly can't be categorized as a pedestrian. That's not going to work in rural areas, which are rather unstructured. Most roads don't even have lines here...

                Good luck with the elk. As flawed as I may be as a homo sapiens, I'll keep driving myself, thank you very much. Trouble is the other drivers, of course...

                Having said that, I am a strong proponent of assistive features like smart cruise control, corrective steering, and blind spot warning systems.

                1. Muscleguy

                  Re: Surely

                  And you don't have to be driving along a single-lane country road either. My wife has seen them on the offramp of the M90 in the middle of Fife*, hardly the Highlands. In winter the deer come down off the hills and you can see them beside the roads quite easily.

                  *Which goes straight into a 60mph road and is wide and sweeping so you are only slightly slowing down from the motorway.

                  1. Danny 2

                    Re: Surely

                    I was driving a company car on a country road near West Calder and there was a deer and a bambi in the middle of the road. The deer sprang off but the bambi slipped. I had no time to brake so I drove the car into a ditch. My reasoning was it was just a company car and it was just me.

                    I wish the software engineers behind autonomous cars would prioritise saving pedestrians over passengers.

                    The roads in the area were also filled with wabbits. The thing about wabbits is they don't run off to the side of the road, they run straight down the road expecting to out-run the danger. You have to pull over so they feel safe enough to run to the fields, or drive slowly along behind them.

                    1. ICL1900-G3

                      Re: Surely

                      Good for you, and boo to the downvoter.

                    2. Michael Wojcik Silver badge

                      Re: Surely

                      The deer sprang off but the bambi slipped. I had no time to brake so I drove the car into a ditch.

                      It's a tough choice, assuming you even spot it in time and have the presence of mind to make a conscious decision. Around here, where deer are the second or third most common cause of vehicular accidents (impairment is first; the statistics I've seen were from several years ago, and distraction may now have passed deer), the authorities and insurance companies frequently tell people not to swerve, just brake if possible and stay on the road, on the grounds that swerving is statistically more dangerous to the driver and passengers than hitting a deer.

                      I've never hit a deer with any of my cars, though I've had close calls, and once was hit by a deer. It jumped out from the woods into the side of the car. I wasn't going sideways, so that puts the deer entirely at fault. (Also I had the right of way. And it wasn't licensed or registered to operate on the public roads. Deer have little respect for the law.)

                      1. Danny 2

                        Re: Surely

                        @MW

                        It was a blind bend in the road so I had to choose in less than a second. I maybe would've chosen to hit the baby deer if there was a passenger I valued in the car. I hate to admit this but maybe a different choice if it was my car.

                        I think there are far too many wild deer in the UK today, but I also watched Bambi a lot when I was a bairn so, well, I'd trust a self-driving car made by Disney more than one made by Uber.

                  2. Michael Wojcik Silver badge

                    Re: Surely

                    My wife has seen them on the offramp

                    I was driving on Interstate 96 near Lansing, MI a couple of years ago. It was around 1 AM and snowing gently - typical mid-Michigan snow, big fluffy flakes spiraling down like something in a movie. Not yet accumulating on the road, so I was doing something close to the posted limit of 70 MPH; I-96 is a limited-access four-lane highway. The road was nearly deserted; I'd passed a semi a mile or two back, and there weren't any other vehicles in sight at the moment.

                    I'd been driving for about 14 hours, coming from Kansas.

                    If you've driven in snow showers at night, you know what it looks like - the hypnotic effect of flakes catching the light from the headlights and spinning past, while the segments of the dashed lane-separator line click by on the road. Flakes, dash, flakes, dash, flakes, deer standing in the left lane close enough that I could have slapped it had I stuck my arm out the window, dash, flakes, ...

                    By the time the headlights lit up the deer, I was maybe a car length away. I was just lucky it stood still. That truck I'd passed a little way back? Who knows. There wasn't any way to warn him about it.

                    I did swear vigorously for a few minutes, though. That helped.

                    And that's not the only time I've driven past a deer standing in the road on a highway at night.

              3. Dazed and Confused

                Re: Surely

                > I guess the problem is that the moose brain cannot comprehend that any object can move as fast as a car.

                I've often felt that anything doing over 40MPH is invisible to sheep; presumably they have no natural predator that travels at that speed, so why waste time handling it.

              4. Anonymous Coward
                Anonymous Coward

                " I guess the problem is that the moose brain cannot comprehend ..."

                "...that any object can move as fast as a car."

                They cannot comprehend that cars (or their content) are of any relevance. (Or have come to the conclusion that they are not.)

                I've met one standing calmly in the middle of the road (spotted him, and he me, from afar; it wasn't critical), and requiring approach to less than 10m and repeated toots of the horn to convince him to move on. Very large and self-assured, moose are.

                1. Yet Another Anonymous coward Silver badge

                  Re: " I guess the problem is that the moose brain cannot comprehend ..."

                  >Very large and self-assured, moose are.

                  They weigh the best part of a ton, are 12ft tall and their only predators are orcas (seriously - moose swim between arctic islands and are eaten by killer whales).

                  Frankly their attitude to a Fiat 500 is entirely appropriate.

                  1. Nicko

                    Re: " I guess the problem is that the moose brain cannot comprehend ..."

                    I used to live in Newfoundland.

                    In winter, Moose often use the roads at night as they are warmer (the tarmac heats up during the day) - as it can be -40C, any slight warmth is welcome. It's not uncommon to be driving and to pick out a moose ahead, however, when snowing, it can be difficult to pick them out in time, especially at night.

                    They can be dangerous & protective, especially if they have a calf with them - some have described them as "fifteen hundred pounds of angry beefsteak".

          2. Anonymous Coward
            Anonymous Coward

            "However allowing the vehicle to do that would result in a horrendous ride"

            It's obvious a vehicle can't react to every single movement of an object, but it should be able to calculate a probable path from previous positions - not from what an object looks like - especially if it applies overly broad categories.

            We may be able to recognize a sport cyclist versus a child who can barely ride a bicycle (although it could be hard to tell which one is more dangerous, as both won't follow the rules), and apply different kinds of probable behaviours - still, we can usually also work out the orientation of an object in 3D space and add it to the computation of probable next positions. If a bicycle is seen from its side at an angle close enough to 90°, nobody would think it's going along the road (unless it's fixed on the back of a car - what would an AI do?).

            That's because we understand perspective, while it looks like AIs are still obsessed with textures.

            And we can usually understand when an "object" is misbehaving regardless of its type, exactly because it's moving in an unexpected way - and that's without the power of precise all-weather ranging capabilities; our sight can be deceived more easily.

            Moreover, even our peripheral vision is more sensitive to movement than the high-precision imaging which happens only in the fovea - and we can react to movements well before we "categorize" the object. Often, what it is matters only after you've avoided it...
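
            For what it's worth, a probable path from previous positions - ignoring what the object looks like - can be had from something as dumb as an alpha-beta tracker. A made-up sketch (one coordinate shown for brevity, gains assumed), not anyone's real pipeline:

            ```python
            # Class-agnostic alpha-beta tracker (illustrative): smooths position
            # and velocity from noisy fixes, so the predicted path comes from the
            # motion history rather than from what the classifier says it is.
            class AlphaBetaTrack:
                def __init__(self, x0, alpha=0.85, beta=0.005):  # assumed gains
                    self.x, self.v = x0, 0.0
                    self.alpha, self.beta = alpha, beta

                def update(self, measured_x, dt):
                    predicted = self.x + self.v * dt
                    residual = measured_x - predicted
                    self.x = predicted + self.alpha * residual
                    self.v += (self.beta / dt) * residual
                    return self.x, self.v

                def predict(self, horizon):
                    return self.x + self.v * horizon  # position `horizon` seconds ahead
            ```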

            1. Kiwi
              Pint

              Re: "However allowing the vehicle to do that would result in a horrendous ride"

              Moreover, even our peripheral vision is more sensitive to movement than the high-precision imaging which happens only in the fovea - and we can react to movements well before we "categorize" the object. Often, what it is matters only after you've avoided it...

              Early AM on the way to work one day, I had a large black dog rush at me from the side of the road. Avoided it easily enough of course, but did take a look at it to make sure it wasn't continuing the chase (legal speed limit of 50kph but weather conditions meant I was going slower; I think some of the larger breeds could be a threat at those speeds). Wasn't a dog at all - it was rubbish day on that street and someone's rubbish bag had blown across the road.

              Dog or bag, it was on a collision course with me and if I'd not avoided it I'd have been eating gravel for breakfast. As you said, only after it was no longer a threat was the actual ID of any real interest :)

            2. Mark Ruit

              Re: "However allowing the vehicle to do that would result in a horrendous ride"

              If you look carefully at the timeline, you can see that every time the AI's inputs reclassified the object, the Artificial Ignorance dumped the history table of the previous 'classified object'. How the f*** can any program calculate a future path if it only has one point to work on?
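
              The obvious fix, sketched with made-up names in case it isn't clear what I mean: make the classification a mutable label on a persistent track, not the track's identity:

              ```python
              # Sketch: the label is just an annotation on a persistent track, so
              # "bicycle -> pedestrian -> unknown" never wipes the position
              # history the path predictor needs.
              class Track:
                  def __init__(self, track_id, first_fix):
                      self.track_id = track_id
                      self.label = "unknown"
                      self.history = [first_fix]  # (t, x, y) fixes survive relabelling

                  def add_fix(self, fix, new_label=None):
                      self.history.append(fix)    # history keeps growing regardless
                      if new_label is not None:
                          self.label = new_label  # reclassify WITHOUT touching history
              ```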

          3. Marcus Fil

            Re: Surely

            "It'd be jumping like a kangaroo at times." And it hasn't even met a kangaroo yet - an animal renowned for lacking any road sense whatsoever.

            Frankly all I am hearing from the auto-driving advocates is poor excuses for an inability to deal with harsh reality. Most useful piece of driving advice I ever received: "Never hurry into a situation you don't understand". Braking means giving yourself more time before you arrive at the scene which in turn gives you more time to make sense of what is happening; that could make the difference between life and death - yours, or someone else's. Sometimes the required reaction might even mean accelerating, but you need the OODA loop to complete before anything you do happens. Replace 'you' with AI but it all still applies - replicating myopic, intoxicated or other unsuitable human drivers is presumably not the goal of auto driving.

            In this instance, if an object is deemed to be a static feature then a fraction of a second later winks out of existence, something strange is happening and the brakes should have come on - this accident illustrates that Uber's AI folk were not fit to be let loose on public roads. Although obviously it was the USA, where profit comes before human life. Hopefully a few million-dollar lawsuits will redress the balance.

            1. Kiwi

              Re: Surely

              "Never hurry into a situation you don't understand". Braking means giving yourself more time before you arrive at the scene which in turn gives you more time to make sense of what is happening; that could make the difference between life and death - yours, or someone elses.

              When I've been teaching people to drive, or just chatting with others about driving (especially newbies), I've asked one question.

              What is your biggest asset?

              I've mentioned it applies to cars, trucks, bikes, spaceships, supertankers, aircraft, any form of vehicle. I'd usually give them a while before answering.

              Time.

              If you have enough time to assess a situation and react, even if you cannot identify all the elements or correctly predict their path, you're not going to be in an accident. If you're heading into an unknown situation and you slow down, you have more time to assess what is happening, more time to come up with a course of action, and more time to react. Go in too fast and you run out of time before you have a clue what hit you.

              If in doubt, stretch it out.

          4. David Hicklin Bronze badge

            Re: Surely

            "So if we see a human on the side of the road we look for subtle clues in body language as well as whether they are looking at us to determine whether they are about to cross the road in front of us, or pull out at a junction."

            Not just humans - it also applies to other car drivers, where from experience I can (nearly always) tell by the way they speed up/slow down or the car "twitches" that they are about to cut across you to the exit slip at the last moment. Something new drivers have to learn, which takes time, so until then they have to rely on faster reactions.

            1. Oneman2Many

              Re: Surely

              It was dark, but the sensors could see more than you could. It's just that the car made some stupid decisions.

        2. hplasm
          Paris Hilton

          Re: Surely

          "..I don't know it mimicking how a human driver brain works..."

          Disconnected from reality most of the time?

        3. Doctor Syntax Silver badge

          Re: Surely

          "We don't have ranging devices"

          But most of us have two ranging mechanisms.

          Binocular vision enables the brain to triangulate objects. That only requires that the two images of an object are recognised as the same thing, it's not necessary to recognise what the object is.

          A second mechanism, for which one eye suffices, depends on recognising what the object is, knowing its size and working out how far it is from the angular size it presents.
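
          Both mechanisms are a bit of trigonometry in the end - a rough illustration with assumed numbers, not a model of the visual system:

          ```python
          import math

          # 1. Binocular triangulation: baseline b between the eyes plus the
          #    vergence angle between the two lines of sight gives the range.
          def range_from_vergence(baseline_m, vergence_deg):
              return (baseline_m / 2) / math.tan(math.radians(vergence_deg) / 2)

          # 2. Known-size ranging (one eye suffices): an object of known height
          #    subtending angle theta is at roughly height / tan(theta).
          def range_from_size(known_height_m, angular_size_deg):
              return known_height_m / math.tan(math.radians(angular_size_deg))

          # e.g. a 1.7 m pedestrian subtending 1 degree is ~97 m away:
          # range_from_size(1.7, 1.0) -> 97.4
          ```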

          1. matt 83

            Re: Surely

            Binocular vision is only good for up to around 10 meters and at 40mph it only takes about half a second to travel 10m so we're really only using the second mechanism while driving.

            If you're a driver be careful of children riding Shetland ponies, they may be closer than you think!

            1. Rich 11

              Re: Surely

              they may be closer than you think!

              "Are they now, Ted?"

            2. Sir Runcible Spoon

              Re: Surely

              You can use the vanishing point on bends to judge approach speed vs distance from much greater distances

            3. Baldrickk
              Paris Hilton

              Re: Surely

              Only 10 metres?

              Some back of the napkin math... My ipd is 70.0mm, so eyes are 0.035m from the bridge of my nose.

              Angular resolution of the human eye is 0.02 degrees

              That triangle is just over 200m along the adjacent.

              That's the theoretical max. Even if we halve that, for a more realistic value it'll still be 100m. An order of magnitude greater than 10m.
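
              The same napkin in code, with the same assumed numbers:

              ```python
              import math

              # 70 mm interpupillary distance, 0.02 degree angular resolution:
              ipd = 0.070                  # metres
              half_angle = math.radians(0.02) / 2
              max_range = (ipd / 2) / math.tan(half_angle)
              print(round(max_range))      # ~200 m; halve for a realistic ~100 m
              ```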

              1. Yet Another Anonymous coward Silver badge

                Re: Surely

                That angular resolution is the closest two points can be before merging (and only at the fovea)

                This is not the same as an absolute angle measurement between the two eyes

                1. david 12 Silver badge

                  Re: Surely

                  But it's the absolute angle measurement between the two eyes that's used for Binocular Triangulation?

                  Not the angular resolution of a single eye?

                  I've heard both claims from different sources. Are there any one-eyed vultures here to report personal experience?

                  1. Kiwi
                    Boffin

                    Re: Surely

                    Not the angular resolution of a single eye?

                    I've heard both claims from different sources. Are there any one-eyed vultures here to report personal experience?

                      A few years back I'd had a minor[1] injury and had a patch on one eye for a couple of days. I'd expected a lot more trouble with judging distances but I found it wasn't really an issue; however, I was driving roads to/from work I'd been driving daily for some time, so I can't be certain if my sense of depth/distance was aided by local knowledge and experience. Certainly things like pedestrians were no issue - the person walking past #45 is smaller than the car parked outside, so therefore a kid. Now I think about it, the angle of declination (looking from a higher road to a lower one and objects along it) probably helped give me clues as to how far along the road the person was.

                      The depth perception problem is more acute at night from what I recall reading - a single red light on a push-bike can be indistinguishable from a pair of car tail lights much further in the distance, as we cannot see much else around us to aid us. Of course, with modern headlights and the mad placing of streetlights every .0001mm on hundreds of KM of motorways that hardly get used when the lights are on, you can see plenty. When I started out, car lights were politician-like; i.e. those rather dim yellow things. Hell, I remember the first time seeing 'cats eyes' at night when the high beams came up (they were just coming into my town in the late 80's, no idea when other towns got them).

                    [1] a couple of mm up and you'd have your 'one-eyed vulture'!

            4. Michael Wojcik Silver badge

              Re: Surely

              If you're a driver be careful of children riding Shetland ponies, they may be closer than you think!

              Really, this is good advice even if you're not a driver.

          2. Anonymous Coward
            Anonymous Coward

            "But most of us have two ranging mechanisms."

            I should have written "precision ranging devices". We do infer some information from the object type - i.e. we know a bicycle is within a given size range, so we can deduce its probable distance and direction from perceived size and perspective. We too can be deceived when the assumptions are wrong. But a car with a precision ranging device like a lidar should use it to take unambiguous data about an object's position and orientation in 3D space and calculate the trajectory. Maybe the echo of complex objects is not so easy to process.

            1. Doctor Syntax Silver badge

              Re: "But most of us have two ranging mechanisms."

              I'm not sure precision is a big factor. I think our inherited mechanisms as humans are more to do with predicting trajectories, including our own, and whether they're likely to intersect at roughly the same time. Roughly because we'll apply a safety factor which obviates the need for precision.

              It's not so much object position that matters, not even relative position, because its position relative to ourselves is constantly changing whilst we're in motion.

        4. LucreLout

          Re: Surely

          What worries me is that it's not sampling the environment fast enough to detect where an object is and in which direction it is moving, regardless of what it is, when it's in front of the car.

          What worries me most is the meat sack supposedly supervising the JohnnyCab seems to spend less than 10% of the time looking out the front window and near on all the time staring at their tech. If the car takes more than 5% of their time to supervise then they won't be driving safely and need a second occupant to fiddle with the tech while one of them focuses only on the road outside the car.

          1. Doctor Syntax Silver badge

            Re: Surely

            What you disparagingly call the meat sack has parallel processing capabilities that vastly outstrip whatever is driving your Johnny Cab.

            1. LucreLout

              Re: Surely

              What you disparagingly call the meat sack has parallel processing capabilities that vastly outstrip whatever is driving your Johnny Cab.

              They may well do, but if they aren't looking out of the window they won't see the hazard, they won't assess its risks, they won't form alternative action plans, and they won't take correct and timely action. They'll just plough them down, as was the case in the matter at hand.

        5. Flywheel
          Facepalm

          Re: Surely

          And "it categorized her variously" .. but if in doubt, don't stop - just keep going. What kind of idiot writes software like this?!

          1. John Brown (no body) Silver badge

            Re: Surely

            "And "it categorized her variously" .. but if in doubt, don't stop - just keep going. What kind of idiot writes software like this?!"

            Ones who are hopefully feeling very guilty at having dodged a criminal charge and looking very, very carefully both at the quality of the work they are doing and their future career path.

            1. Kiwi
              Coat

              Re: Surely

              and their future career path.

              If their code practices are anything to go by, I'm guessing "crash and burn" features quite strongly on their career path.

              1. Stork Silver badge

                Re: Surely

                Move fast and break things?

          2. Dazed and Confused

            Re: Surely

            > .. but if in doubt, don't stop - just keep going. What kind of idiot writes software like this?!

            A taxi company?

          3. Kiwi
            Coat

            Re: Surely

            What kind of idiot writes software like this?!

            Easy.

            An Uber-idiot!

      2. MonkeyCee

        Re: Surely

        @diodesign <pedant> The logs show the driving software thought she was either an unknown, a pedestrian or a bike. When it thought she was a bike, it thought she was moving, in the left-hand lane </pedant>

        Perhaps the changing characteristic is what made it unpredictable, since it seemed to lose the history each time it changed the classification.

        Jaywalking is an American thing anyway. Hopefully they'll have a "don't run down pedestrians" mode for the rest of the world :)

        1. Robert Helpmann??
          Childcatcher

          Re: Surely

          Jaywalking is an American thing anyway.

          People, especially kids, and animals running into the roadways is something that happens all over the world, though the circumstances vary. Autonomous driving vehicles have to be able to improve upon human ability in order to have any chance of uptake. This is a scenario a human would normally be expected to deal with. The software failed to do so, resulting in tragedy.

          There are a number of open source autonomous initiatives in the works. I would love to see those take off, especially as an alternative to anything having to do with Uber.

          1. vtcodger Silver badge

            Re: Surely

            FWIW - In Arizona as in most (all?) of the US, you aren't supposed to mow pedestrians down even if the pedestrians shouldn't be in the road.

            Relevant law: per https://activerain.com/blogsview/1497199/watch-where-you-re-walking-arizona-revised-statutes-pedestrian-right-of-way

            28-794. Drivers to exercise due care

            Notwithstanding the provisions of this chapter every driver of a vehicle shall:

            1. Exercise due care to avoid colliding with any pedestrian on any roadway.

            2. Give warning by sounding the horn when necessary.

            3. Exercise proper precaution on observing a child or a confused or incapacitated person on a roadway.

            It doesn't surprise me that Uber's "autonomous" vehicle doesn't seem to comply with relevant traffic laws. Bunch of greedy sociopaths if you ask me. Likewise Tesla. Waymo OTOH seems to be run by adults. If you ask me, Uber, Tesla, et al. vehicles should be required to be led by someone on foot waving a red flag or a lantern (see https://en.wikipedia.org/wiki/Red_flag_traffic_laws) until such time as they demonstrate reasonable concern for public safety.

            1. Stork Silver badge

              Re: Surely

              As my driving instructor phrased it, you don't have right of collision.

        2. Spanners Silver badge
          Boffin

          Re: Surely

          Jaywalking is an American thing anyway.

          Yes, the term was invented by US car manufacturers to put the blame for unacceptable levels of pedestrian casualties upon the victims.

          The rest of us just have pedestrians, some with more sense than others.

          1. Anonymous Coward
            Anonymous Coward

            Re: Surely

            The Germans used to come down hard on people who crossed the road away from designated crossings or where the Green Man wasn't flashing (oo err, Miss).

            I once had a German colleague who felt this went much too far. Once in Munich we were on one side of a completely empty Theresienhoehe and a group of people were standing on the crossing waiting for the Green Man. He marched into the middle of the road, turned about and shouted "You are all sheep!"

            It looks as if, as the technology stands, we are all going to need to become sheep. Good luck with that. Unlike the US*, much of the rest of the world values its freedom, and 20mph limits in towns are actually encouraging idiot crossing behaviour.

            *I know. Sarcasm.

          2. Michael Wojcik Silver badge

            Re: Surely

            the term was invented [by] US car manufacturers

            It was promoted by auto manufacturers, but I find no evidence that it was coined by them. This article, for example, does not indicate any such origin in the first three known appearances of the term. It seems more likely to be a popular neologism.

            1. Kiwi

              Re: Surely

              [jay-walking]

              It was promoted by auto manufacturers, but I find no evidence that it was coined by them.

              I always understood it was a reference to a "bluejay" or some other bird with "jay" in its name, a bird that has a preference for wandering haphazardly along roadways.

              In NZ we have the term, but it relates to crossing a road at an angle rather than straight across, although it can also cover not crossing at a pedestrian crossing/controlled crossing when you're within a short distance of it (can't recall if it's 10m or 20m, but a distance that'd take most people a few seconds to walk)

      3. Schultz
        Boffin

        "the AI thought Elaine was, at times, a static object"

        The obvious problem in the control system seems to be that it can 'forget' the history of an object if it is reclassified. Physical reality tells us that things don't magically disappear - so the system should somehow match the newly identified object to its past data.

        Our brain is very good at that - the moment we identify something our brain will convince us that we saw that object all along. And the brain will quite strongly resist recognizing the object as something else until the evidence becomes overwhelming. Look up optical tricks to learn more about this.

        1. Baldrickk

          Re: "the AI thought Elaine was, at times, a static object"

          The classification shouldn't affect the tracking of an object at all...

          1. Kiwi

            Re: "the AI thought Elaine was, at times, a static object"

            The classification shouldn't affect the tracking of an object at all...

              Yup. As another poster mentioned, trees fall over. As can lamp posts, power lines, fences etc. Hills can become landslides, and houses can move as well (usually on trucks, but sometimes with the aid of 'natural forces'). While the correct ID can help you predict an object's actions, slowing down when you know you don't have it ID'd is quite prudent. And if you're changing classification, you don't have its ID pinned.

    2. Joe W Silver badge

      Re: Surely

      Wouldn't the default state be "unknown object moving into path of SUV; do something to avoid the unknown object"?

      One should have hoped so. 'cept it ain't. Problem is, the "AI" is only able to detect and classify what it has been trained on - something I have to remind colleagues of when they think it is a marvellous, miraculous machine able to solve all our problems. Here, the system did not take into account that the "unknown object" might want to move across the street rather than along it. It did not have a concept of this happening, and thus failed to recognize it. Training data is garbage in, garbage out - or in this case, incomplete data in, incomplete classifications out. What I continue to wonder about is: why not use these systems to automate trains? At least on railroad tracks there is much less going on than on a busy high street, and the flow of traffic is steered externally anyway.

      Next problem: the system is either a simple decision tree, or too convoluted to understand what is happening...

      1. Anonymous Coward
        Anonymous Coward

        Re: Surely

        well, I thought (facepalm) the I-component in "AI" had something to do with "neural" mumbo-jumbo, i.e. learning and acting in new circumstances it wasn't provisioned for. Perhaps not going to the extremes and assuming that a stationary object on the roadside might suddenly take off to the skies or turn into a chest full of looted Spanish doubloons, but something... something that, kind of, happens surprisingly often when humans interact with the road system. Instead, stationary = tick, dismiss, who would have thought...

        1. Michael Wojcik Silver badge

          Re: Surely

          An ML-based classification algorithm might use a static model, or it might be able to update its model. Both designs are possible. I have no idea if Uber's system at the time enabled continuous learning.

          In this case, updating its model during this event would very likely not have been useful. Updating its model from prior similar events might have been - that is, the model could have been updated to recognize pedestrians crossing outside marked crosswalks, at least as objects likely to move into the vehicle's path even if not correctly tagged as pedestrians.

          In any case, the term "Artificial Intelligence" is sufficiently broadly used to include all sorts of things, the attempts of marketers, sensationalists, and curmudgeons to pin its meaning down notwithstanding. (As always, there are plenty of Reg commenters who insist "AI" means some specific thing, generally not any of the things it's commonly used for. Sorry, kids; you don't own the term.) So there's little point in wondering whether "AI" implies some particular capability.

          "Machine Learning" is a bit more specific, but still encompasses a huge range of approaches, architectures, algorithms, and implementations. And this is an extremely active area of research, with thousands of significant new papers every year.

      2. Anonymous Coward
        Anonymous Coward

        Re: Surely

        AI doesn't really exist in any form at the moment. Machine Learning exists and that has been forced into meaning the same/similar to AI by the marketing hype machines.

        However, due to generification of the term, AI is now synonymous with the features of a machine learning system and even systems that run a pretty basic decision algorithm.

        My favourite question to a supplier when they mention that their system uses "AI" is to ask them to describe how it uses AI and where the decisions are processed. When they start to explain it I often retort with "but isn't that just a pre-defined algorithm?" - as in pretty much every case that's all it is.

      3. Jens Goerke

        Re: Surely

        There's a saying here in Germany: "es kann nicht sein, was nicht sein darf" - "it can't be what isn't allowed to be", resulting in the faulty reasoning that pedestrians aren't allowed on the road, so whatever _is_ on the road can't be a pedestrian.

        Common sense and experience have shown that a young child will probably run after a ball, so any trained/experienced human driver will brake hard after seeing a ball rolling out into the street in front of the car, saving the child that's bound to run out shortly afterwards. Happened to me twice, once in each role.

        1. Cem Ayin

          Mostly OT: The Impossible Fact

          The saying "es kann nicht sein, was nicht sein darf" actually comes from the closing line of a poem by Christian Morgenstern. You can find the poem here, along with a congenial english translation:

          https://www.babelmatrix.org/works/de/Morgenstern%2C_Christian-1871/Die_unmögliche_Tatsache/en/32691-The_Impossible_Fact

        2. terrythetech
          Coat

          Re: Surely

          Once as a child and once as a ball?

          1. Anonymous Coward
            Anonymous Coward

            Re: Surely

            older siblings will use anything at their disposal when their preferred ball isn't readily available.

        3. David Roberts

          Re: Surely

          I learned a few tips when first learning to drive.

          Along with the "ball often followed by child" I also learned to look under vehicles parked at the side of the road. If you see feet there is usually also the rest of the person who may suddenly appear in the road.

          If you see the end of a pram or pushchair then the pusher is likely to move out until they can see. This leaves the captive child well into harm's way.

          Oh, and never trust indicators. Wait until the direction taken by the vehicle confirms the intentions of the driver. Although this is probably not part of the current AI design (looking at indicators, that is).

          Child running down a front path or drive? Well away from the road but may not have working brakes.

          I wonder if the machine learning has covered this sort of stuff yet?

          1. Ken Hagan Gold badge

            Re: Surely

            "If you see feet there is usually also the rest of the person who may suddenly appear in the road."

            I was taught this at age 10 by my teacher. I have never forgotten it, but it wasn't until I was an adult that it occurred to me to wonder where my teacher learned it. No matter, I have passed it on to my children, who with a bit of luck will have no more use for the factoid than I've had these past 30 years. I wonder if they will pass it on? Maybe we'll have self-driving cars by then.

      4. vtcodger Silver badge

        Re: Surely

        One should have hoped so. 'cept it ain't.

        The requirement should probably be to do what a normally cautious human driver would do -- slow down until the object and its trajectory are positively identified. If it can't be identified, try to creep around it in a safe fashion. In no case strike it.

        Will such behavior be unpopular with other drivers? Most likely. Especially when the object is a scrap of paper or a tumbleweed. OTOH, I doubt that violent arm and finger motions accompanied by verbal abuse from other drivers would bother autonomous vehicles one bit.
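
        For what it's worth, that default fits in a handful of lines. A toy sketch of the policy in Python, with every number invented for illustration - not anyone's actual control code:

        # Toy sketch of the cautious default described above - numbers invented.
        def target_speed_mps(identified: bool, in_path: bool, current_mps: float) -> float:
            if in_path:
                return 0.0                      # in no case strike it
            if not identified:
                return min(current_mps, 2.0)    # creep until positively identified
            return current_mps                  # identified and clear: carry on

        # An unidentified object near the kerb: creep at walking pace.
        print(target_speed_mps(identified=False, in_path=False, current_mps=17.0))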

        1. Michael Wojcik Silver badge

          Re: Surely

          It'd be miserable riding in such an autonomous vehicle through rural Colorado. Besides the tumbleweeds (which like to spring out from the ditch right in front of you), in the colder weather flocks of small birds will often settle on the edges of the road, presumably for the warmth. They take off as you approach, right into the path of the car. I've accidentally hit a couple over the years despite my best efforts.

    3. big_D Silver badge

      Re: Surely

      The problem is that it didn't know how to classify her, as it wasn't programmed for jaywalkers. Therefore each time she was re-classified, the old object was dropped and a new one created, and that new object had no motion history, so was static at the time of capture.

      You can see from the log, as the object moves, the AI takes that into consideration, based on its training - a vehicle or bike would usually stay in its lane, so the initial movements show it as moving in its lane.

      Then it is re-detected as something else; as it is a new object, the old one and its movement profile are dropped, and the new object has an empty movement profile. I think that is the problem with the model: instead of re-classifying the object, it deletes the old object and its movement history and creates a new object with no history. If it had recognised the re-classification, it could have worked out that she was moving across its path. This seems like a major failing in the software.

      If I see an object moving towards me, it is the movement that is the priority whilst I try to classify it. The movement continues, and continues to be the primary metric I concentrate on. I recognize it as a vehicle; it continues to move and I see it is heading directly towards me. I reclassify it as an SUV and try to avoid it. It is still moving towards me; I reclassify it not as an SUV but as a Transit as it gets close and a streetlight illuminates it, but I continue to try to avoid it.

      The Uber system works the other way around. Instead of concentrating on the movement and classifying what is moving, it classifies the object first, then looks to see if it moves.
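
      To make the difference concrete, here is a minimal sketch (hypothetical names, nothing to do with Uber's actual code) of a tracker that keeps its motion history across a relabel, next to the drop-and-recreate version described above:

      # Minimal sketch - hypothetical, not Uber's code.
      class Track:
          def __init__(self, position, label):
              self.label = label
              self.history = [position]            # motion history

          def update(self, position):
              self.history.append(position)

          def velocity(self):
              if len(self.history) < 2:
                  return (0.0, 0.0)                 # looks "static" until two sightings
              (x0, y0), (x1, y1) = self.history[-2:]
              return (x1 - x0, y1 - y0)

      # Drop-and-recreate: every relabel resets the history, so velocity() is zero again.
      def reclassify_broken(track, position, new_label):
          return Track(position, new_label)

      # Relabel in place: the crossing trajectory survives whatever the classifier calls it.
      def reclassify_sane(track, position, new_label):
          track.label = new_label
          track.update(position)
          return track

      With the first version every reclassification produces a brand-new "static" object; with the second, the object keeps its movement across the vehicle's path no matter what it is currently called.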

      1. Zack Mollusc

        Re: Surely

        Well put. I would add that the AI seems quite relaxed about various objects teleporting into existence near it while it is travelling at 44mph. "Pfft, that thing that I have only just become aware of will probably stay out of my way. Oh, I can't see it any more, maybe the road is haunted or something. Ooh, a different thing has appeared even closer to me, well it might stay where it is or something, oh good it seems to have vanished again so I can just forget about it, hey another thing has appeared even closer to me and even closer to my path, I assume it is nothing to worry about..."

        1. getHandle

          Re: Surely

          Wonder how it would cope with sofas...

          1. Anonymous Coward
            Anonymous Coward

            Re: Surely

            Or trash tornadoes...

            The problem behind the problem is that moving things can have different "appropriate responses", where answering wrong can be problematic. Sure, crashing into a pedestrian is tragic. So can getting rear-ended when your vehicle stops for a trash tornado in an overabundance of caution.

            1. Anonymous Coward
              Anonymous Coward

              Re: getting rear-ended

              Although very sudden stops should be avoided, as they might result in getting rear-ended, that is *not* a reason not to stop if it avoids you hitting something. Getting rear-ended is the other driver's fault - they should have been both paying attention and following at a safe distance.

              And I'm not convinced that the two scenarios here - hitting a pedestrian, and getting hit by a car - are all that comparable.

              1. I ain't Spartacus Gold badge
                Megaphone

                Re: getting rear-ended

                You don't have to slam on the brakes. You can just release the throttle. I wonder if that's why they disabled their auto-braking system? Because they were too stupid to program the machine so that, if it wasn't sure, it would slow down for a bit and wait for more data. Which is exactly what human drivers are supposed to do. If you see people ahead loitering by the side of the road, you don't slam on the anchors - but you should slow down, to give yourself better braking options if required.

                I remember watching a demo video from Google ages ago. It was a graphical representation of the decisions their system was making as it approached a junction. And it was tracking various objects on road and pavement, guessing where they might be in the future and assigning them risks. And they definitely talked about slowing down, rather than braking, as an option where the system wasn't sure.

                Obviously I'm assuming the worst about Uber here, and assuming they've not even thought of the most basic things. But then I feel justified in doing that, given they programmed a system that could only see pedestrians on designated crossings! And operated a system that was unable to do emergency braking - but distracted their "safety driver" with stupid buttons to click when they should be fucking driving.

                Oh and I also query the terminology. They don't have a self-driving car at all. If it can't be trusted to do emergency braking, then it's not self-driving. Obviously you have to have the back-up meat driver as this stuff is experimental. But their system, as designed, is unable to deal with normal driving, without input from the "safety driver" and therefore isn't self-driving at all! Self-crashing I'll accept. But braking to avoid obstacles is a fundamental part of driving - as is not braking to avoid obstacles of course. And they'd switched off one - to deal with their pisspoor implementation of the other!

                Fuckwits!

                1. xeroks

                  Re: getting rear-ended

                  also note that there is a one-second "do not react" delay in place, presumably to prevent the car's continual panic alerts from affecting the driving.

                  1. Kiwi
                    Boffin

                    Re: getting rear-ended

                    also note that there is a one-second "do not react" delay in place, presumably to prevent the car's continual panic alerts from affecting the driving.

                    An interesting thing. With most drivers (as I know anyway), when we see certain levels of threat we'll (if in a manual) move a foot to the clutch, a hand to the gear lever, and maybe start lifting off the gas. Sometimes we'll change down (eg if coming up on a farm tractor some distance up the road) to give us better acceleration helping us get around something quicker, and sometimes we'll move a foot over the brake pedal letting the car coast while we carry out further assessments/watch the situation (perhaps a bunch of kids playing with a ball near the side of the road, or a group of people coming out of a house from what appears to have been a party).

                    The process of slowing, even slightly, starts to transfer weight to the front of the car, starting to put more load on the tyres. This improves the braking performance considerably, in effect a practice known as "progressive braking" (rather than just 'slamming on the anchors'). You're much less likely to lock the wheels (skid) and thus have a much better chance of slowing, and of course, as said elsewhere, the mere act of slowing gives you more time to react - sometimes a millisecond can mean a huge difference in the outcome.

                    I wonder if any self-driving system will ever be given such simple bits of logic (and anyone who writes code for such things, please feel free to use what I said here - you can give me a decent car (by my standards[1]) in payment :) )

                    [1] Station wagon, able to take a normal sized fridge. Air-con. Stereo and cabin controls (including lights etc) are all physical controls that can be felt rather than having to be looked for. Multi-stage fresh/recirc air, prefer manual but could be sold on an automatic. Must not fight me when I make a decision but can act to protect me if for some reason I miss seeing something
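
                    A toy version of that staged response is easy enough to write down - all thresholds here are invented for illustration:

                    # Toy staged-braking ramp - every threshold is invented.
                    def brake_demand(threat: float, seconds_since_threat: float) -> float:
                        """Return brake demand in [0, 1] for a threat estimate in [0, 1]."""
                        if threat < 0.2:
                            return 0.0      # nothing worth reacting to
                        if seconds_since_threat < 0.3:
                            return 0.1      # lift off / pre-load: start the weight transfer
                        # Squeeze on progressively instead of slamming straight to maximum.
                        ramp = min(1.0, (seconds_since_threat - 0.3) / 0.7)
                        return min(1.0, 0.1 + ramp * threat)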

        2. headrush

          Re: Surely

          What's that big wide round thing rushing towards me? Very fast, very very fast.

          It needs a big wide round sounding name, round..... round..... ground!!!

          That's it, ground!

          I wonder if it'll be friends with me?

          1. OssianScotland
            Pint

            Re: Surely

            Alternatively:

            "Oh No, Not Again!"

            Icon: Raised to Douglas

        3. tin 2

          Re: Surely

          100% on both the above. I wonder - it might be in the detail I'm not about to read - how frequently the system updates what it "sees". Perhaps it's so infrequent - comparatively speaking - that creating "an object", tracking its movement and changing its category as it appears to be different things basically didn't work, and was worked around.

          Definitely in terms of bugs - or rather outright negligence on the part of the coders - chucking away an object's movement history on classification change is... well.... words actually fail me.

          The second is if a bicycle is *side on* and not strapped to something much bigger, it's absolutely NOT going to be traveling in the left lane in the same direction. Never mind AI here, that's a simple if...then...

          The hesitance that should be in there for stuff popping up out of nowhere is a great point. There's my (currently) 3 massive bugs in the system. Clearly not fit to be out on the streets ever with those kind of glaring errors.

          1. Doctor Syntax Silver badge

            Re: Surely

            "There's my (currrently) 3 massive bugs in the system."

            There's really one fundamental bug: the absence of the massive processing power of the brain. And not just the human brain. It's a common animal trait to judge trajectories, both of self and other objects. It helps predators hunt down prey and prey to avoid predators. There are millions of years of evolution behind the mechanism of the brain; "training" is just the commissioning process.

            1. tin 2

              Re: Surely

              Sure, but... let's say for a sec you wanted to try and see if you could replicate it somewhat with some code:

              Chucking movement history away when a thing appears to be something else.

              Not being alarmed at something big suddenly appearing.

              Bike can't travel in a direction it's not facing.

              Bugs. Huge whacking easy ones.

              1. jrd

                Re: Surely

                Even more serious bug: if you can't recognize something, assume it is safe to ignore. The whole reason to identify objects is so you can predict their behaviour. If you fail to identify them, sensible defensive programming practice says you should assume the worst case for now - say, as an object which can move quickly in any direction without warning. Which is entirely reasonable, if it is an animal the software hasn't encountered.

                Driving software has to Fail Safe, just like any other software controlling dangerous equipment. This is such a basic requirement that I can't believe Uber isn't facing criminal prosecution over this.

              2. Doctor Syntax Silver badge

                Re: Surely

                "Chucking movement history away when a thing appears to be something else."

                This is the fundamental problem.

                If tracking is only achieved by joining up intermittent sightings of what is recognised to be the same object, there is no movement history to be chucked away if the object can't be recognised consistently. Any unclassified object in a given sampling might be any unclassified object in a previous sampling.

                If we have objects at positions A and B, and then a second later we have objects at positions C and D, we may have something that's moved from A to C and something else that's moved from B to D - but we might equally well have something that's moved from A to D and something else that moved from B to C. If there's insufficient processing power to handle that - and there are probably a good deal more unclassified objects than just two (all moving relative to the vehicle due to the vehicle's own movement, if nothing else) - then the system cannot establish any trajectories.

                Unless there's sufficient processing power to track objects continuously the system will fail, and that's the problem with doing things in code: you're trying to use a few cores to do what the eye* and brain do with massive parallelism.

                * Processing starts in the eye before the brain even gets involved.
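
                In its simplest form that association step is nearest-neighbour matching, and the ambiguity is easy to demonstrate. A greedy sketch (hypothetical, with a made-up distance gate):

                import math

                # Greedy nearest-neighbour association of last frame's tracks to this
                # frame's detections. With identical unclassified blobs, A->D / B->C is
                # as plausible as A->C / B->D once the distances become comparable.
                def associate(tracks, detections, gate=5.0):
                    pairs, unmatched = [], set(range(len(detections)))
                    for ti, track_pos in enumerate(tracks):
                        best, best_dist = None, gate
                        for di in unmatched:
                            d = math.dist(track_pos, detections[di])
                            if d < best_dist:
                                best, best_dist = di, d
                        if best is not None:
                            pairs.append((ti, best))
                            unmatched.discard(best)
                    return pairs   # leftover detections become brand-new "static" tracks

                print(associate([(0, 0), (10, 0)], [(1, 1), (9, 1)]))   # [(0, 0), (1, 1)]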

        4. MrReynolds2U

          Re: Surely

          Sounds Heart of Gold-like... wonder how it would classify a bowl of petunias?

          1. big_D Silver badge

            Re: Surely

            Oh no, not again.

      2. Charlie Clark Silver badge

        Re: Surely

        I seem to recall video from one of Google's early drives where the car does lots of start/stop because of a pedestrian. Funny to look at and annoying if you're in the car but looked like a sensible default: if in doubt, stop.

        I think the classification/identification argument is largely a red herring in what is clearly a case of negligence leading to manslaughter because the emergency braking was disabled and the driver wasn't paying attention. There are probably consequences about how to keep supervising drivers actively involved in the process and whether these should be regulated.

      3. Doctor Syntax Silver badge

        Re: Surely

        "a vehicle or bike would usually stay in its lane"

        There are a couple of problems with that built-in assumption.

        One is that the only possible objects are vehicles and bikes; it doesn't allow for walkers, animals or even mobile parts of the environment (think landslip or large wave, e.g. https://www.bbc.co.uk/news/av/uk-england-hampshire-50266218/isle-of-wight-waves-almost-drag-man-and-child-into-sea).

        The other is that word "usually". I think most of us rate the probability as usually small but greater than zero and keep reassessing it on the basis of observation whilst the object is in view.

        1. big_D Silver badge

          Re: Surely

          I used "usually" for brevity. It is the initialisation of the object "car" or "bike", it would assume at the time of creation that, if it isn't side on, that it will stay in its lane, until it starts getting movement data - remember the initial status of a new object is "static", so a new vehicle object spotted in the next lane is assumed, by the model, to be moving in that lane, until its movement history indicates otherwise.

          Unfortunately, deleting the movement history every 1/10th of a second isn't going to help get it right!

          1. Anonymous Coward
            Anonymous Coward

            remember the initial status of a new object is "static"

            And that's a big assumption too... sure, most objects will be static, but it's the dynamic ones you have to care about, and I wonder at what distance a new object is detected, and how long it takes to assess whether it's static or dynamic. Anyway, I believe that any object that "appears" within a small enough radius should be set to a special status of "dangerous - assess what it's doing very quickly".

            1. big_D Silver badge

              Re: remember the initial status of a new object is "static"

              Agreed. The protocol is a complete head-slapper of incompetence.

            2. John Brown (no body) Silver badge

              Re: remember the initial status of a new object is "static"

              "but its the dynamic ones you have to care about"

              Static ones matter too. eg a pedestrian on the side of the road who, to a human at least, is clearly waiting to or about to step into the road. Possibly partially obscured by street furniture, parked cars or just carrying something that makes their shape appear something other than human, eg carrying or pushing a bike.

            3. Doctor Syntax Silver badge

              Re: remember the initial status of a new object is "static"

              "most objects will be static"

              Unless the vehicle's stationary they're all dynamic, because they're being observed from the viewpoint of the vehicle. Parallax will ensure that even objects static relative to each other will have the angles between them change from the PoV of the vehicle.

          2. Doctor Syntax Silver badge

            Re: Surely

            "Unfortunately, deleting the movement history every 1/10th of a second isn't going to help get it right!"

            The movement history is simply the tracking of what's recognised as the same object in 1/10th of a second intervals. If you can't reconcile some of the objects with the objects in the next sample you either have to discontinue your movement history or accept that there are N possible continuations based on the number of possible options to identify the unclassified objects with their candidates in the previous sample. Unless you can successfully identify objects from one sampling of a scene to the next you have a combinatorial explosion of possible trajectories to consider.

        2. John Brown (no body) Silver badge

          Re: Surely

          "The other is that word "usually". I think most of us rate the probability as usually small but greater than zero and keep reassessing it on the basis of observation whilst the object is in view."

          It's not unusual for computer-based systems to not account for edge cases. The thing is, though, the human brain WILL always account for those edge cases when they happen and deal with them in some way. The problem with computers and so-called AI is that they can't because, as we see from this case, they can't classify them. The fail-safe option is the only real solution, and the dev team and/or Uber senior people decided that edge cases should be risk-assessed beforehand and, below some threshold, discounted as too expensive to account for - and that the fail-safe option should not impede the vehicle from getting where it's going.

          1. Stoneshop
            Alert

            Re: Surely

            The thing is, though, the human brain WILL always account for those edge cases when they happen and deal with them in some way.

            The 'oh fuck' interrupt handler.

      4. LucreLout

        Re: Surely

        a vehicle or bike would usually stay in its lane

        ??? Come to London, watch the cyclists. Stay in lane? Not bloody likely.

        1. Doctor Syntax Silver badge

          Re: Surely

          Don't need to!

        2. Anonymous Coward
          Anonymous Coward

          Re: Surely

          Come to Bristol, they're mostly in "lane", it's just that that "lane" is the pavement...

          1. John Brown (no body) Silver badge

            Re: Surely

            Or the one I saw today riding in the marked off shared cycle lane (dashed, not solid line) coming towards me on my side of the road, ie in his terms riding on the wrong side of the road, instead of using the cycle lane on the other side of the road, again, in his terms, on the right side of the road. Of course, he stuck two fingers up at me when I flashed my lights and beeped the horn. Twat!

            1. Anonymous Coward
              Anonymous Coward

              Re: Surely

              On Bond Street by any chance?

              1. John Brown (no body) Silver badge

                Re: Surely

                Why? Was it you? :-p

                No, it was a local road in Tyneside :-)

      5. EverybodyNeedsaCRC

        Re: Surely

        Excellently put! Reading all this, I just keep thinking, it doesn't matter what is in front of me, identified or not, if the end result is 'wtf was that?', default action is still to avoid. It seems the AI version of human anticipation is far harder to implement.

      I also wonder how a human driver would have reacted in the same scenario... If only there was someone who could have stepped in, hmm. Although, to be fair, anyone can be distracted (I'm not aware of the full details of why the occupant was looking away).

        My Nest cam often fails to detect motion from large moving high contrast objects at certain times of the day, like people. Then happily alerts me to a moving cloud's reflection in a car window. Yeah chalk and cheese and a very crappy comparison at best, but makes you think about what this tech has to deal with in the real world.

      6. John Brown (no body) Silver badge

        Re: Surely

        "You can see from the log, as the object moves, the AI takes that into consideration, based on its training - a vehicle or bike would usually stay in its lane, so the initial movements show it as moving in its lane."

        I wonder if these cars are able to detect a cyclist giving legally correct hand signals to indicate a turn and act appropriately?

        1. Zack Mollusc

          Re: Surely

          You won't be able to train that. Where would you find any examples?

    4. Anonymous Coward
      Anonymous Coward

      Re: Surely

      What concerns me is how poorly the system must have been planned - you have an object at a roughly consistent position that changes classification rapidly - that should be a flag that an unknown or hard to classify object is on the road - at *that* point you alert the driver. I would also say you apply the brakes, but apparently the emergency brakes were turned off.
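
      Counting those classification changes is cheap, too. A sketch of the sort of churn flag meant here (window and threshold invented for illustration):

      from collections import deque

      # Flag a track whose classification keeps flip-flopping - thresholds invented.
      class ChurnMonitor:
          def __init__(self, window=10, max_changes=3):
              self.labels = deque(maxlen=window)     # last N classifications
              self.max_changes = max_changes

          def update(self, label) -> bool:
              self.labels.append(label)
              seq = list(self.labels)
              changes = sum(a != b for a, b in zip(seq, seq[1:]))
              return changes >= self.max_changes     # True => hazard: alert, slow down

      monitor = ChurnMonitor()
      for label in ["vehicle", "unknown", "bike", "unknown"]:
          hazard = monitor.update(label)
      print(hazard)   # True - three changes in four frames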

      That said, between 1.2 and 0.2s before impact (when the car finally detected a bike on a collision course) the car only slowed from 43 to 40 mph - so even then the car wasn't doing much to slow down.

      It was criminal - and I mean that - to allow this thing on the road.

      1. Carpet Deal 'em

        Re: Surely

        What concerns me is how poorly the system must have been planned - you have an object at a roughly consistent position that changes classification rapidly - that should be a flag that an unknown or hard to classify object is on the road - at *that* point you alert the driver.

        There's the rub: the NTSB breakdown shows the system had no object permanence after a reclassification. Thus, rather than a "something" it couldn't figure out but was definitely headed into its path, all it saw was a series of objects with no connection to each other. There's no excuse for this inability to handle classification errors: had it been able to see "something" was moving on a fixed course, things probably wouldn't have come to this.

        1. Doctor Syntax Silver badge

          Re: Surely

          There's no excuse for this inability to handle classification errors: had it been able to see "something" was moving on a fixed course, things probably wouldn't have come to this.

          There may be no excuse in terms of letting the thing loose on the road, but there's a likely reason. If there's more than one unclassified object in view then, from one sample to the next, it can't link these up to "know" that one was on a fixed course, because it can't keep close enough track of them when it can only do things sequentially.

      2. John Brown (no body) Silver badge

        Re: Surely

        "I would also say you apply the brakes, but apparently the emergency brakes were turned off."

        FWIW, it was the autobraking system supplied with the Volvo as standard equipment that was turned off because Uber said it "interfered" with their own detection and braking system. Read into that what you will.

    5. Joe 35

      Re: Surely

      The default seems to have been that every time the object classification changed it decided it must be static, since it hadn't seen it before, rather than being able to "understand" that object type A, then B, then C, D, C, A, X, etc. was a single thing that had a trajectory. This software seems to have some fundamental gaps in how to process new information and relate it to past information.

  3. T. F. M. Reader

    Reasonable defaults

    It seems to me - just from reading the article, though - that without classification the AI engine assumes that the object is "static" and does not assign "goals" to it. Wouldn't it be prudent/conservative to assume instead that anything unidentified may move into the vehicle's path, and get prepared by slowing down, etc.? What would a human driver do if he/she catches something unrecognizable in the corner of the eye? Would it not be some instinctive equivalent of "Wait a second, what the hell is it?"

    1. big_D Silver badge

      Re: Reasonable defaults

      That is the difference between (human) nature and the AI model.

      In reality, most creatures recognise a movement and react to that movement, whilst trying to classify what it is as a secondary task. The first response is a defensive action; working out whether it is friendly or not comes afterwards. The pot of petunias.

      The Uber AI on the other hand tries to classify the object, before working out if it is moving. The whale.

      The problem is, AI programmers have, over the years, worked with object recognition first and movement as a secondary consideration, because it is the "easier" part of programming the model. You work in objects, so you start with the object. If it isn't the object you thought it was, you drop it and start again. This is exactly the opposite of the behaviour that is required. But "movement" isn't an object, it is a metric or property of an object from a traditional programming perspective.

      The programmers didn't re-evaluate their logic to fit in with the real world, they just used what they had learnt in programming classes.

    2. Zack Mollusc

      Re: Reasonable defaults

      I very much doubt that the pedestrian was the only 'unclassified' object the car saw on that journey. I bet the 'AI' sees an enormous number of objects that it cannot classify or accurately position, and if it tried to drive conservatively it would get nowhere.

      1. Charlie Clark Silver badge

        Re: Reasonable defaults

        As in this example, this should always be the case because it's good engineering.

      2. Doctor Syntax Silver badge

        Re: Reasonable defaults

        " if it tried to drive conservatively would get nowhere"

        So much the better!

      3. Loyal Commenter Silver badge

        Re: Reasonable defaults

        In that case, it shouldn't be on the road. End of.

        If a human driver didn't slow down or stop because they couldn't identify hazards, and ploughed into them, they'd face prosecution. Software-controlled cars shouldn't be held to a lower standard.

    3. Colintd

      Re: Reasonable defaults

      I suspect the default is static as otherwise the car would constantly undertake emergency braking when it fails to identify an object. This fits with the comment that the braking was disabled due to erratic behaviour. I'm not saying this is the right default, I'm just speculating about why it was set that way.

      1. big_D Silver badge

        Re: Reasonable defaults

        Until the system has two consecutive sweeps of a particular area, it cannot know if something is moving or not, so the initial status is static. But once it has detected movement, it should use the movement profile as the master and try and classify what is moving, as opposed to reclassifying the object, which resets the motion profile to static.

        That is the problem. Either it needs to not drop the object when it is reclassified, but "rename" it, so that the movement profile stays with it, or it needs to use the movement profile as the primary "object" and classify what is moving. But their system just throws away its knowledge of the object and creates a new one.

        I expect that they use classes, so you have object, which has child objects vehicle, bike, pedestrian etc., and when it reclassifies, it has to drop the vehicle object and create an object object, then drop that and make a bike object, then drop the bike object and make an object object etc.

        Better would be to have an object object, which has a "type" classification and loads a movement rules collection based on the type. That way the object and its movement history would remain constant, but the type and its movement rules would be added and removed from the parent object.

        That way the movement history and a generic "it is moving to intercept my path" rule would be there even if it can't identify what it is, it can still take avoiding action. Stopping for or swerving around an unidentified piece of trash flying past is silly, but preferable to hitting an unidentified pedestrian.
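
        In class terms that is composition rather than inheritance - something like this sketch (hypothetical rule sets, invented numbers):

        # Hypothetical sketch of the design described above: one persistent object,
        # with a swappable "movement rules" strategy instead of a new class per type.
        MOVEMENT_RULES = {
            "vehicle":    {"stays_in_lane": True,  "max_speed_mps": 40.0},
            "bike":       {"stays_in_lane": True,  "max_speed_mps": 15.0},
            "pedestrian": {"stays_in_lane": False, "max_speed_mps": 3.0},
            "unknown":    {"stays_in_lane": False, "max_speed_mps": 15.0},  # worst case
        }

        class TrackedObject:
            def __init__(self, position):
                self.history = [position]       # never thrown away
                self.set_type("unknown")

            def set_type(self, obj_type):       # reclassify without losing history
                self.obj_type = obj_type
                self.rules = MOVEMENT_RULES.get(obj_type, MOVEMENT_RULES["unknown"])

            def update(self, position):
                self.history.append(position)

        The motion history - and with it the generic "it is moving to intercept my path" rule - survives every relabel.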

        1. Charles 9

          Re: Reasonable defaults

          "Stopping for or swerving around an unidentified piece of trash flying past is silly, but preferable to hitting an unidentified pedestrian."

          Unless it starts doing it a hundred times an hour and you end up late to that important meeting. Now it's turned from silly to a PITA.

          1. big_D Silver badge

            Re: Reasonable defaults

            They are currently only in testing. Until the model is reliable, this is the only acceptable behaviour.

            The driver's comfort and convenience in the test phase is irrelevant and they aren't in a hurry to get anywhere, they are being paid to be "driven round in circles". The driver is just sitting there monitoring the vehicle (so they can react if the vehicle makes a mistake), whilst it collects information that can be used to adjust the model to be more accurate.

            Safety is of the utmost importance. If the vehicle is in a situation where it isn't sure, it should stop as quickly and safely as possible and let the driver take over.

          2. Doctor Syntax Silver badge

            Re: Reasonable defaults

            "Unless it starts doing it a hundred times an hour and you end up late to that important meeting."

            And there's the problem. Self-driving is competing with massively parallel processing and yet its proponents claim it can do it better. And in any case it's better to end up late for a meeting than just end up.

          3. aks

            Re: Reasonable defaults

            If it were over-sensitive it would brake for flying leaves on a windy day in autumn.

            This crash was on a highway. How will it cope in cities where there are people everywhere? In London, people dodge through traffic frequently, especially if it's slow-moving. No solution that I can see.

          4. Adrian 4

            Re: Reasonable defaults

            We try to use laws to make hitting a pedestrian a PITA too, especially because some people think their meetings are more important.

        2. Francis Boyle Silver badge

          Re: Reasonable defaults

          "Until the system has two consecutive sweeps of a particular area, it cannot know if something is moving or not, so the initial status is static."

          Surely, that should be 'undefined'. Anything else is just asking for trouble.

          1. big_D Silver badge

            Re: Reasonable defaults

            You would have thought so, wouldn't you... But the protocol in the article definitely says "static" is the default condition.

          2. Loyal Commenter Silver badge

            Re: Reasonable defaults

            The thing is, the human vision system is very good at spotting moving things, especially in peripheral vision, and that's without using doppler radar/lidar, which presumably this could use (if it's sensitive enough). Admittedly, that's only good for things moving towards or away from you; the vision system involved needs to be a lot better at detecting transverse movement.

            1. rskurat

              Re: Reasonable defaults

              I wonder if they've thought ahead to a future situation where the road was full of these things: wouldn't the radar/lidar systems interfere with each other? Would every car have to have its own unique modulation?

              1. Ken Hagan Gold badge

                Re: Reasonable defaults

                It would be like driving at night with every on-coming muppet's headlights on full beam. (And if they tried to use modulation to separate the signals, it would be like driving home at night through a 1970s disco.)

        3. David Roberts

          Re: Reasonable defaults - instinctive swerve

          The instinctive swerve when encountering a fast moving unclassified object is more or less built into humans.

          I assume it is part of the evolutionary threat model.

          Advanced driving includes learning not to swerve to avoid a squirrel in the road which then results in hitting a vehicle in the other lane or a pedestrian beside the road. Or ending up in a ditch.

          I wonder if the programmers were aware of this and compensated too much?

          Nah.

          1. ICPurvis47
            Mushroom

            Re: Reasonable defaults - instinctive swerve

            I was once driving down the M5 just north of Bristol, family on board, caravan in tow. There was a hold-up, and we came to a standstill in the middle lane, all three lanes stationary. Some minutes later, the jam started to clear, and I accelerated along behind the vehicle in front. Suddenly, it swerved into the left hand lane, forcing the vehicle beside it to jump onto the hard shoulder. I was then faced with a Jimny, stationary in the middle lane, which I was approaching at approximately 40MPH. I instinctively swerved left to avoid it, forcing the Mini beside me onto the hard shoulder, after which I pulled back into the middle lane to allow the Mini to get back into the left lane. Some way farther on, the Mini pulled alongside my car and the driver began gesticulating and blowing his horn. He obviously hadn't seen the Jimny, and thought that I had entered his lane without looking (which was true, I hadn't had time to look in my left wing mirror to see if the lane was clear). I wonder to this day how many other vehicles managed to avoid the Jimny before someone piled into the back of it at 40 - 50 - 60 MPH, causing another huge pileup.

            1. Kiwi
              Pint

              Re: Reasonable defaults - instinctive swerve

              I wonder to this day how many other vehicles managed to avoid the Jimny before someone piled into the back of it at 40 - 50 - 60 MPH, causing another huge pileup.

              There's a lot of that sort of footage on YT. Just one of the reasons I am much more inclined to stretch my following distances out even more these days.

              (An internal debate - being back from the traffic I am a little more likely to be rear-ended; up with the traffic I am more likely to be hit from the side or involved in something like this. Where I can, I do a "get well ahead of them", which the cops seem to prefer riders do these days - a good bubble of emptiness around me (to complement the one I'm told is in my head! :) )

      2. Charles 9

        Re: Reasonable defaults

        What's being noted is that neither extreme is acceptable. Too conservative, you brake for trash tornadoes and end up going nowhere; too liberal, you crash into pedestrians. What's needed is to see where the happy medium is (if there IS a happy medium; if it's UNhappy, the market's inherently unstable and can't really be satisfied).

        1. big_D Silver badge

          Re: Reasonable defaults

          Except we are in the test phase, so the too conservative is the correct setting, with the model learning from its mistakes and becoming "less conservative", to use your terminology, as it gains experience of what is a genuine hazard and what isn't.

          1. Adrian 4

            Re: Reasonable defaults

            "so the too conservative is the correct setting, "

            For uncontrolled roads, yes. But in reality you can't test other strategies if low level collision avoidance is stopping you all the time.

            The proper answer is to present the system with only the variables you can handle at present, and only when you believe that to be representative of reality can you unleash it on the streets.

            To be fair to Uber, they were doing that by running at night in an area that was relatively quiet. But still had nowhere near enough control of the environment and should in any case have then used the more careful strategies (with the expectation that they wouldn't be invoked).

            There's some marketing in this too : clearly such a low-function prototype is a very long way from being safe to run on the streets. But 39MPH on an urban road looks far more progressive to your investors than 5MPH stop and go, so you'd want to demonstrate progress on that earlier rather than later.

            1. Kiwi
              Stop

              Re: Reasonable defaults

              To be fair to Uber, they were doing that by running at night in an area that was relatively quiet.

              Those are among the most dangerous roads around - everyone on them is more relaxed and less alert.

              Why can Uber not create a test track (old navy bases seem to do well!) and put in various things to test its reflexes?

              Hell... Uncouple the control module from the car, take it for a drive with a decent human driver at the wheel (no automatics), and monitor its responses to the sensors while it has no ability to do anything boneheaded. That way, you can see what it does and whether it tries to stop when it should (admittedly it may struggle a bit with situations where it wants to brake but the car doesn't do anything).

          2. Kiwi
            Pint

            Re: Reasonable defaults

            Except we are in the test phase, so the too conservative is the correct setting, with the model learning from its mistakes and becoming "less conservative", to use your terminology, as it gains experience of what is a genuine hazard and what isn't.

            That seems to be how human drivers do a lot of the learning. Start conservative, learn more about how things feel and what happens in certain situations, get better at assessing things and dealing with them (at least for those who don't have their noses glued to screens and not watching the world around them)

        2. Doctor Syntax Silver badge

          Re: Reasonable defaults

          "the market's inherently unstable and can't really be satisfied"

          Stuff the market. These are lives we're dealing with.

          If it were a human driver we may be looking at sentencing them to imprisonment for dangerous driving. Perhaps the appropriate way to deal with this is to imprison the CEO or a board member instead.

    4. ttlanhil

      Re: Reasonable defaults

      Yes, but defining the correct criteria is hard.

      Do you emergency brake when a bird swoops across the road? (It's close, so you'd need to brake hard to avoid it.) A large bird may be a similar size to a ball that a child is about to chase onto the road.

      An attentive human is pretty good at making those decisions (we're built to recognise moving 3D objects) - so providing training data from human-controlled cars can help (video as input, and whether the driver took action - even if too late - as output).

      Part of the problem is those cases are really rare - humans are good at noticing and remembering unusual things; but they make for sparse training data for AI

      If it had been able to track the same "object" even though the classification had changed (as opposed to "There's a new bicycle there", "The bicycle is gone, there's a new unknown object"), then it could have kept the movement history and made better decisions.

      1. big_D Silver badge

        Re: Reasonable defaults

        The movement is key and if the system can't classify whether it is harmless to hit it or not, it should always err on the side of caution and avoid the collision.

        If it had said that the pedestrian was a drifting paper bag, that would be one thing. But it just kept throwing away its knowledge that the unknown object was moving to intercept its path and starting from zero with a new static object.

        That is the problem. It doesn't track movement, it tracks known objects and if an object changes classification, it has to be dropped and a new one created, because it has different properties. That is an arse backwards way of solving the problem that can only lead to failure.

  4. Steve Davies 3 Silver badge
    Big Brother

    I hope...

    That the image used on the front page is an old stock image.

    Even holding your phone (if you are the driver) in a vehicle with the engine running can be enough to get you a ticket.

    That is one area of law that will need a huge upheaval IF self-driving is ever made legal in the UK. After all, the first thing most people will do after engaging their car's FSD system will be to look at their phones and their social media accounts. /s /s /s

    It will also mean a huge rise in Social Media Addiction.

    Perhaps we should ban FSD (Full Self Driving)???

    either way [see icon] will be watching our every move

    1. MooseMonkey

      Re: I hope...

      I think that yet again Volvo get it... Their new systems in development have cameras etc INSIDE the car to check what the driver is doing. If it looks like the driver is distracted, or nodding off, or drunk, or ill, or just driving like a knob, the car takes over and parks in a safe place. Distracted includes looking at phone, sat nav, pretty lady on the pavement etc.

      1. Anonymous Coward
        Anonymous Coward

        Re: I hope...

        In development - lots of companies have stuff in development; it doesn't mean it is possible. For instance, detecting someone is drunk is easier to do from their driving style than from a camera looking at them, I would suggest (unless they're absolutely hammered). Same as 'driving like a knob'. Looking at a sat nav isn't an issue as long as it is brief, nor is looking out of the side window for brief periods.

        However, the system would also have to cope with people wearing sunglasses and people who are disabled (in a way that may confuse such a system). These aren't easy problems to always solve, so analysing driving behaviour may be safer.

        This, however, is all part of the development of autonomous vehicles, where by the time you get the development right there may well be Level 3 autonomous vehicles (or higher), which makes driver monitoring fairly redundant - driver monitoring is only needed permanently in Level 1 and Level 2.

        An "are you still able to react" system will still be useful for levels under Level 5, but this could be as simple as requiring you to perform a task every now and then (such as how Tesla does it, by ensuring you have had a hand on the steering wheel within the last 20 seconds).

      2. Doctor Syntax Silver badge

        Re: I hope...

        "Distracted includes looking at phone, sat nav, pretty lady on the pavement etc."

        1. We're expected to keep the speedo under sufficient observation to adhere to arbitrary indications. If that's not a greater distraction than the sat nav I don't know what is.

        2. The pretty lady might move from the pavement into the road and requires the same degree of observation as the ugly gent.

        1. FrogsAndChips Silver badge

          Re: pretty lady vs. ugly gent

          The assumption is that you would only give the ugly gent a quick glance, just enough to assess if they might be a risk, whereas you could stare at the pretty lady long enough to get distracted by the rest of your environment and miss the kid crossing the road before you.

      3. Kiwi
        Pint

        Re: I hope...

        or just driving like a knob, the car takes over and parks in a safe place.

        So basically... the Volvos will be parked on the side of the road always, never moving?

        Oh wait.. That's Audi.

        Or BMW...

        Or Tesla...

  5. Zack Mollusc

    Don't forget the orientation

    The person was pushing a bicycle across the street, thus the bicycle and its shiny, hard metal surfaces were broadside on to the car's sensors, giving the best possible chance for detection.

    A cynic might think that the problem is data processing. If you give the car enough sensors to give a detailed and accurate picture of its environment, the computers needed to interpret all that information in real time won't fit in a car (or maybe cannot be built).

    1. big_D Silver badge

      Re: Don't forget the orientation

      The processing capability can fit into a car. But you have to do the right processing.

      It should detect movement and try and work out what is moving. Instead the system tries to classify what it sees and then work out if it is moving. If it can't classify it or it changes the classification, the movement history is reset to zero and the process starts again.

      If the programmers had worked the "natural" way and tracked the movement whilst trying to classify what is moving, the system would have seen much sooner that the object was moving across the path of the vehicle, because it would have tracked the movement and could predict where it was going, even while trying to work out WTF is moving.

      But the AI was programmed the other way around, it tries to work out what is there first, then tries to give it a movement profile. A re-classification deletes the old object and its movement profile, so you end up with a bunch of static objects that are not in the way.

      This is bad programming.
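
      A minimal sketch of the track-first approach described above (class name, thresholds and the lane geometry are all invented for illustration; this is not Uber's actual design):

        class Track:
            """A moving thing. Its identity is the motion history;
            the classification is just a mutable guess."""
            def __init__(self, position):
                self.history = [position]    # list of (x, y) observations
                self.label = "unknown"       # guess, revised freely

            def update(self, position):
                self.history.append(position)

            def reclassify(self, new_label):
                self.label = new_label       # crucially, the history survives

            def velocity(self):
                if len(self.history) < 2:
                    return (0.0, 0.0)
                (x0, y0), (x1, y1) = self.history[-2:]
                return (x1 - x0, y1 - y0)    # per-frame displacement

            def crossing_our_path(self, lane_half_width=2.0):
                # crude check: already inside our lane, or moving towards x = 0
                x, _ = self.history[-1]
                vx, _ = self.velocity()
                return abs(x) < lane_half_width or x * vx < 0

      Reclassifying a track changes only its label; the accumulated history, and with it the predicted path, is never thrown away.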

      1. Zack Mollusc

        Re: Don't forget the orientation

        I agree that the car was not trying to track possible threats before trying to identify them, but I contend that the sensory input has neither the resolution nor the accuracy to reveal the course and speed of the possible threats, and that sufficiently sensitive input would give too much data to process in real time with computational resources that would be practical to fit in a car.

        1. big_D Silver badge

          Re: Don't forget the orientation

          The existing systems on the car had the resolution to do this. The Uber systems that replaced it have better resolution, but crap programming.

      2. Doctor Syntax Silver badge

        Re: Don't forget the orientation

        "The processing capability can fit into a car. But you have to do the right processing."

        The right processing applies to every object in the visual field. I agree movement must be tracked but there needs to be some sort of classification of the objects to evaluate the probability of static objects starting to move and of moving objects changing speed and/or direction.

        That processing capability can certainly fit into a driver's head; the despised "wetware". Whether the required hardware can when it's only weakly parallelised is a different matter.

      3. EBG

        Re: Don't forget the orientation

        It may be that it requires the classification to be able to then calculate the movement? Otherwise, how does it define the boundaries of the object, or even know that there is an object?

        1. big_D Silver badge

          Re: Don't forget the orientation

          It does need the classification at some point. More important is the movement. Is it moving to intercept our path? If you know it is going to cross your path, you have to prioritise its classification. But you don't keep deleting the knowledge that you are going to collide with it if you change its classification, which is what the Uber software seems to do.

    2. Charlie Clark Silver badge

      Re: Don't forget the orientation

      Near realtime processing of video data is, at least according to Google, one of the main reasons for using LIDAR and RADAR which produce data that can be quickly vectorised and processed. But the uncertainty of getting stuff done on time is why the vehicle must have a system of fail safes such as emergency braking. Which was disabled.

  6. Filippo Silver badge

    From what I understand, it looks like each time an object gets re-classified, its tracking history is erased. Because of this, if an object's classification is ambiguous, there is a side effect of the system being unable to track its path. So it's always considered to be static.

    I'm sure the real problem goes deeper than that, but my programmer's gut feeling is that re-classification should not erase tracking history.

    That is, the problem isn't "the software wasn't designed to classify jaywalkers"; the fundamental requirement is "the software should avoid ambiguously-classified objects".

    1. big_D Silver badge

      Exactly. The movement should be the object the system works with and the actual "object" just a property of the movement object.

      But programmers are used to modelling physical objects with properties, such as movement. This is 100% wrong for such a threat model.

      1. Anonymous Coward
        Anonymous Coward

        Huh? "Programmers"? I'm super dense and basic, but:

        1) Create Object null

        2) Add "direction" add "velocity" add ["etc"]

        2.1) Track Object

        2.2) If collision detected, jump to 4

        3) Define object either "pedestrian" "bicycle" "car" ["etc"]

        3.1) If dangerous pedestrian, avoid [etc], if undefined, jump to 2 (thus keeping existing object data)

        4) avoid!!!

        but they instead did 3) 1) 2) or suchlike, and it kept failing at 3, so it never even got to 2. If going 1) 2) 3), you could still work with the data even if 3 fails.
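
        For what it's worth, a toy runnable version of that flow might look like this (sensor.detect(), sensor.classify(), car.collision_predicted() and car.avoid() are assumed interfaces, not any real API):

          def control_loop(sensor, car):
              tracks = {}                              # det_id -> {"history": [...], "label": str}
              while True:
                  for det_id, pos in sensor.detect():  # 1) create object, label unknown
                      t = tracks.setdefault(det_id, {"history": [], "label": "undefined"})
                      t["history"].append(pos)         # 2) accumulate direction/velocity data
                      if car.collision_predicted(t["history"]):
                          car.avoid()                  # 2.2 / 4) collision check beats classification
                          continue
                      label = sensor.classify(det_id)  # 3) try to define the object
                      if label:                        # 3.1) if undefined, just loop again -
                          t["label"] = label           #      the history is kept either way

        The point is that step 3 can fail forever and the loop still reacts, because the collision check only needs the history.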

        1. Anonymous Coward
          Anonymous Coward

          Nope, they probably did 2.2 fine. The problem is that if you detect a collision, 4 becomes impossible to execute and you have to go to 5.1) Lawsuit.

        2. big_D Silver badge

          You are, theoretically, correct, but...

          I suspect that the problem lies in the definition of the objects. They are not interchangeable. Instead of having one object with a "type" property and loading the appropriate behaviour/rule set, they probably have an object "vehicle", an object "pedestrian", an object "bike", an object "undefined" etc., and moving from one to another means dropping the old object of type "undefined" and creating a "vehicle" - nope, wrong, drop "vehicle" and create "bike" - nope, it is "undefined". Each time you lose all your movement history!

          The way they have done it is "logical" for a programmer, but arse-backwards to the way reality works.

          It does 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, BANG.

          1. Baldrickk

            I've worked on a few different tracking projects across multiple employers now - and I can confirm that in EVERY case, we track an object and only then classify afterwards.

            That is the logical way of approaching the problem for a programmer. Or it should be.

            It's an object first, with additional parameters that can be defined later. - You choose the right abstraction for the task at hand.

            The Uber dev responsible, however? He chose poorly.

        3. Charlie Clark Silver badge

          This isn't about the programming, which BTW isn't done like this for this kind of task, but about having fail safes, such as an emergency braking system. And using them. Uber disabled the emergency braking system.

          1. Anonymous Coward
            Anonymous Coward

            Failsafe

            Well TBF that’s what the human being is supposed to be there for. However it’s well known that staying alert on the lookout for a dangerous situation that rarely happens is something that humans are particularly poor at. On the other hand making constant semi-autonomous decisions in response to fast changing stimuli is something they are quite good at. Maybe we are trying to automate the wrong bit...

            1. Baldrickk

              Re: Failsafe

              Given the failure rate of the Uber vehicles (times needing human interaction), there isn't really an excuse.

              The onboard camera clearly shows the driver using their phone and not watching the road.

              If it was a WAYMO car, then I could understand it. Their average distance between human interactions was, iirc, over 40 miles at the time of the accident; Uber couldn't hit their target of 4.

        4. WolfFan Silver badge

          That’s what happens in military conditions. You’re on a ship, sitting at the Electronic Sensing Systems console, and the system detects something, it doesn’t know what. It checks for range, course, and speed; if the object is heading anywhere near you, alarms sound. It does not reclassify the object. It does not care. It yells ‘Vampire, vampire, unidentified object, range 90 nm, speed 1500 knots, course zero eight zero, closing!’ It’s up to the operator to make the ID. “Vampire, vampire, many vampires, evaluate as SS-N-19 Shipwreck, count is six and rising, range 80 nautical miles, speed Mach 4, closing!” And about there the Air Defence Officer has a brown trousers incident and the air defense ring of the battlegroup starts shooting, as you probably have an Oscar shooting at you, and Oscars carry 24 Shipwrecks, and one Shipwreck will kill anything smaller than a carrier. Two or three will kill a carrier.

          Don’t have the AI ID the object, tell the human allegedly in charge and have him do the ID. Problem done. Oh. Wait. Suddenly that’s not a fully automatic self driving car anymore... Pity, that.
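
          The shipboard logic reduces to a very small calculation - something like this sketch (the function and figures are invented; Mach 4 is taken as roughly 2,650 knots):

            import math

            def time_to_impact_min(range_nm, bearing_deg, course_deg, speed_kt):
                """Crude closing-rate check: no classification needed, just
                range, bearing, course and speed. Returns minutes to impact,
                or None if the contact is opening."""
                towards_us = (bearing_deg + 180.0) % 360.0   # reciprocal bearing
                closing_kt = speed_kt * math.cos(math.radians(course_deg - towards_us))
                if closing_kt <= 0:
                    return None                              # heading away; keep watching
                return range_nm / closing_kt * 60.0          # knots = nm/h

            # the example above: contact dead ahead at 80 nm, head-on at Mach 4
            eta = time_to_impact_min(80.0, 0.0, 180.0, 2650.0)   # ~1.8 minutes
            if eta is not None and eta < 10.0:
                print(f"Vampire, vampire! Impact in {eta:.1f} minutes")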

          1. Doctor Syntax Silver badge
            Happy

            Ninety nanometres is a bit close.

    2. Anonymous Coward
      Anonymous Coward

      I agree with that in principle, however there are a number of cases where this might not work. If a sensor detects a bike, then loses it, but redetects a bike a second later that is 10m further forward, it could assume the bike is travelling at 10m/s. However, is that the same bike or not? Was the initial classification actually a bike but the second a motorbike?

      A human has the instant ability in nearly all cases to know that the bike is the same one they saw a second ago, and also to understand its likely path and travel direction - and that if it is being ridden by a 5yo it is likely to be wobbling and to change course rapidly, whereas if it is being ridden by a sensible adult it is more likely to retain its course.

      So it may well be that the programming is intended to be more advanced and act like a human: able to determine that it is the same object and classify the likely path based upon object type and route, and, if it thinks it is a different object to what it saw before, to track this new object.

      If it was perfect then this would be a fine approach; as it isn't perfect - in reality, nowhere near - it fails, and fails hard. The fail-safe in this scenario is the 'safety driver': utilising something that removes the 'A' from 'AI'. However humans are also fallible, and in this case human behaviour failed in well-documented ways - complacency, tiredness, monotony, etc. When your fail-safe is more fallible than your machine, it isn't going to end well.

      1. druck Silver badge

        A human has the instant ability in nearly all cases to know that the bike is the same one they saw a second ago, and also to understand its likely path and travel direction - and that if it is being ridden by a 5yo it is likely to be wobbling and to change course rapidly, whereas if it is being ridden by a sensible adult it is more likely to retain its course.

        Ridden by a sensible adult? I'd like to see you come up with enough training data on that to teach an AI!

      2. Anonymous Coward
        Anonymous Coward

        True but... You could always go for both. As in, that's also generally what humans do. That "might" be the same bike, or a different one, so what do you do in each instance?

        Computers and sensors currently just seem to have far too little compute and/or sensor range to be useful in such a high-fidelity environment and workload.

        I've only played with an Arduino, an IR distance sensor and a sonar distance sensor, but even there I don't think I could get them to work with the fidelity required to be useful (I wanted them to work at the speed of a human response, if not at the accuracy, and plain old touch/hearing/sight is a million times faster than the ping of the sonar or the request chain for the IR chip).
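
        "Going for both" can be made concrete: when the association is ambiguous, keep every plausible hypothesis and plan against the most dangerous one. A sketch (the probabilities and thresholds are invented):

          def braking_response(hypotheses):
              """hypotheses: list of (probability, time_to_collision_s) pairs
              for the same detection - e.g. 'same bike, drifting across' vs
              'new, faster bike'. Act on the worst plausible one, not the
              likeliest one."""
              PLAUSIBLE = 0.05                   # discard only the far-fetched
              worst = min((ttc for p, ttc in hypotheses if p >= PLAUSIBLE),
                          default=None)
              if worst is None:
                  return "continue"
              if worst < 2.0:
                  return "brake hard"
              if worst < 5.0:
                  return "slow down"
              return "continue"

          # 70% it's the same bike (collision in 4s), 30% it's a different,
          # faster one (collision in 1.5s) -> brake hard
          print(braking_response([(0.7, 4.0), (0.3, 1.5)]))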

  7. Anonymous Coward
    Anonymous Coward

    "...ignorant cyber-Judge Dredds"

    The use of ignorant in the sub-head is very apt. Human learner-drivers would be similarly homicidal but for the fact that they have usually already experienced, as passengers (and pedestrians), that you can't trust that a person (particularly a child) near a road will stay off the road.

    "Expect the unexpected" is good advice for a driver. "Run over the unexpected" is not.

    1. tiggity Silver badge
      Thumb Up

      Re: "...ignorant cyber-Judge Dredds"

      Indeed. In the UK, where someone can cross the road anywhere (not just at designated points), you always assume the worst scenario with anyone approaching the pavement edge (& even more so with far more unpredictable kids).

      1. Anonymous Coward
        Anonymous Coward

        Re: "...ignorant cyber-Judge Dredds"

        Agreed, except the dreaded ‘phone-zombies’ are far less predictable than most little kids...

    2. Warm Braw

      Re: "...ignorant cyber-Judge Dredds"

      The problem with AI for this kind of task is that it only "knows" what it has been trained to recognise and can only act within the parameters of its knowledge. Humans, at least in some situations, are aware that things may occur that are outside their previous experience and, even if they are reckless about the safety of others, are going to avoid situations that may result in their own injury.

      Although not actively murderous, that kind of AI has an evolutionary tendency to eliminate the things that conflict with its "world view" - that's just the inevitable consequence of coupling "failure to recognise" with a heavy moving object. It's not just that the AI isn't sufficiently well trained: there are odd unexpected circumstances that will occur for which no reasonable amount of training data will ever be available. It seems to me to be the wrong sort of technology for a safety-critical system, except, perhaps as assistance ("it looks like a child behind that car") to a human operator.

      1. Charles 9

        Re: "...ignorant cyber-Judge Dredds"

        So what kind of tech do you use for a safety-critical system that's TOO dangerous for a human (say very high up or in an area full of toxic gases)?

        1. Doctor Syntax Silver badge

          Re: "...ignorant cyber-Judge Dredds"

          Remote control by human.

        2. Adrian 4

          Re: "...ignorant cyber-Judge Dredds"

          There's a difference between 'dangerous to someone else' and 'dangerous to (robot) me'. You can classify a risk as acceptable when it's the robot that gets damaged, but not another entity.

          See Asimov's laws !

  8. Wellyboot Silver badge

    Onboard automatic brakes switched off.

    >>> because when it was switched on, the vehicle would act erratically. <<<

    The car would not act erratically, it would behave exactly as Volvo intended.

    The mismatch between when the Volvo (millions of installations?) and Uber (prototype) systems would decide to apply the brakes should have been a giant flashing neon 'problem' sign to Uber, requiring immediate attention; the dozens of other crashes also point to the system being totally unready for use on public roads.

    1. Anonymous Coward
      Anonymous Coward

      Re: Onboard automatic brakes switched off.

      This is just rubbish; Volvo is mentioned a number of times as though it has perfected systems like this and does everything properly. Volvo struggles with automatic braking just like everyone else, especially with false positives. So most of the time it works, but other times it doesn't.

      Like this classic https://www.youtube.com/watch?v=_47utWAoupo

      1. Baldrickk

        Re: Onboard automatic brakes switched off.

        A false positive is vastly better than a false negative.

        I can only say that Volvo obviously errs in the correct direction.

    2. Anonymous Coward
      Anonymous Coward

      Re: Onboard automatic brakes switched off.

      "Action suppression begins" -- Is that the euphemism for "murderers disabled the brake system"?

    3. 2Nick3

      Re: Onboard automatic brakes switched off.

      When I was taking Drivers Ed, for the in-vehicle phase the instructor had their own brake pedal, and the ability to reach across to grab the steering wheel - the two options were Stop or Go Straight (as turning the wheel from the passenger seat was not really possible, but holding it steady was). The danger in that is the driver could decide to swerve around something when the instructor wants to stop, and you get something somewhere between, which isn't going to be a good solution. I was in the back seat of the car when a fellow student driver hit a squirrel - he tried to change lanes, while the instructor didn't think he saw the squirrel and tried to stop.

      Having two different systems, that don't have the ability to communicate in real time, attempting to control a few thousand pounds of metal going down the road is not a good situation. At least for the squirrel. Or a pedestrian.

      And yes, if the autonomous braking part of your self-driving system isn't as good as that delivered by the car manufacturers, it needs some more work before you put it on the road.

      1. Baldrickk

        Re: Onboard automatic brakes switched off.

        And I was taught not to brake for something like that... swerving or sudden braking is more likely to end up in an accident.

        Not that I wasn't taught to emergency stop when needed.

  9. Anonymous Coward
    Anonymous Coward

    I've never worked in AI in any form...

    ... So here's how I think they ought to do it.

    Treat unknown objects as a dispersed cloud of stuff. If the cloud impinges on the vehicle's path, respond. If the cloud grows toward the vehicle's path, assume it might continue to do so and respond. Etc etc.

    But it's probably not that easy.
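
    Probably not - but the core check is simple enough to sketch (the coordinate convention, corridor width and example numbers are all invented):

      def cloud_threatens_path(points, lane_half_width=1.5, lookahead_m=50.0):
          """points: (x, y) detections relative to the car - x is lateral
          offset in metres, y is distance ahead. Respond if any part of the
          unclassified 'cloud' sits in the corridor we are about to drive
          through."""
          return any(abs(x) <= lane_half_width and 0.0 <= y <= lookahead_m
                     for x, y in points)

      # a loose cluster of unknown returns drifting in from the left kerb
      cloud = [(-3.0, 22.0), (-1.2, 21.0), (-0.8, 20.5)]
      if cloud_threatens_path(cloud):
          print("unknown object in corridor: slow down")   # two points qualify

    The hard part is deciding which returns belong in the cloud at all.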

    1. Anonymous Coward
      Anonymous Coward

      Re: I've never worked in AI in any form...

      Leaves? Carrier bag? Smoke? Dust? Localised fog?

      1. jospanner

        Re: I've never worked in AI in any form...

        Tough shit. If the system can't manage to stop running over pedestrians then it shouldn't be on the road.

        1. Anonymous Coward
          Anonymous Coward

          Re: I've never worked in AI in any form...

          Very insightful. I don't think anyone disagrees that a vehicle should avoid running over pedestrians wherever possible; however, the response was to the idea that the car should stop for any and every object. That isn't a solution: the car shouldn't stop for every object, such as those I mentioned above, it should only stop for those that are a risk.

          1. Doctor Syntax Silver badge

            Re: I've never worked in AI in any form...

            And until it can draw the distinction reliably or fail safe it shouldn't be on the road.

            That should just be a basic requirement. I know "let's do a rough build and then let the users find the bugs" might be a fashionable development process right now but moving fast and breaking things isn't good enough when the things are human beings. There needs to be a constraint and not running down pedestrians is a good place to start.

          2. 2Nick3

            Re: I've never worked in AI in any form...

            "That isn't a solution as the car shouldn't stop for any object, such as those I mentioned above, it should only stop for those that are a risk."

            If you see something dart out into the street, as a driver your first instinct should be to avoid hitting it. If you then realize it's some leaves blowing in the wind you can then abort the avoidance and carry on. That's usually going to happen before your foot is all the way off of the gas pedal, and before you've committed to a swerve.

            Defensive Driving starts with everything being a threat and then reclassifying down. With training you are able to do that quickly for most things, but something new should always trigger that same response.

            1. Anonymous Coward
              Anonymous Coward

              Re: I've never worked in AI in any form...

              Of course not, on a windy day in autumn if you slammed on the brakes or rapidly steered away every time a leaf blew across the road you'd be pulled over on suspicion of drunk driving or be a liability to all other road users.

              Humans are very capable of using context to decide threats and are capable of determining whether there is a potential risk without need to slam on the brakes for everything as a first instinct.

          3. Baldrickk

            Re: I've never worked in AI in any form...

            If it can't make the distinction before it reaches the "object", then one of two things is apparent:

            1 - the system isn't capable enough to be in charge of the vehicle

            2 - the vehicle was driving in unsafe conditions

            This is why you slow down in fog / snow etc. You drive to the conditions.

      2. Anonymous Coward
        Anonymous Coward

        Re: I've never worked in AI in any form...

        Leaves: if you can't detect the size of an object you have an issue. Anyway even a small object may be, for example, a stone thrown by someone or something... would you like to impact it at 40mph?

        Bag: are you sure it is empty, and doesn't contain something that could damage the car?

        Smoke/dust/fog: if it's thick enough that the ranging systems can't detect what's within it and what that is doing, you have to slow down anyway.

        1. Anonymous Coward
          Anonymous Coward

          Re: I've never worked in AI in any form...

          So your recommendation would be for an AI to be trained that whenever it detected a leaf it should slam on the brakes?

          Every time a carrier bag blew across the road it should take evasive action?

          Every time there was smoke/dust/fog the car should slow down to a halt and wait for it to clear?

          Can you imagine driving in the car you have developed? No, because that is not needed: it needs to correctly identify a risk and then take appropriate action - such as driving straight through it, stopping, or taking evasive action. Hence you can't just say that any detected object must be a threat; that is not intelligent at all.

          It is actually easier to make a system react to all objects, however it doesn't work in real life - search for "phantom braking".

          1. Kiwi
            Facepalm

            Re: I've never worked in AI in any form...

            So your recommendation would be for an AI to be trained that whenever it detected a leaf it should slam on the brakes?

            No. But if there are enough of them to impair vision[1] or braking distances, then yes. Yep, wet leaves on the road are a significant threat to many vehicles, especially motorcyclists, and you cannot be sure a pile of dry leaves isn't covering a layer of wet ones. Where leaves accumulate also indicates areas where moss or decaying leaves are more likely. In the worst circumstances a single leaf[2] can cause a rider to crash (well, the cause was their riding over that leaf at that speed and lean angle; the leaf was the final factor causing a momentary loss of traction that the bike could not recover from).

            And as LDS said, it could be something more damaging.

            Every time a carrier bag blew across the road it should take evasive action?

            Yes, if it could hit the vehicle and impair vision or braking (unlikely a bag will impair braking but not impossible, especially on a MC - trapped between the pads and disks it could easily lessen the braking performance for just long enough to cause big problems)

            Every time there was smoke/dust/fog the car should slow down to a halt and wait for it to clear?

            Not necessarily to a stop (unless it is thick enough) but yes, you should slow down for smoke if it is thick enough to impair vision (and fog, by the definitions I know, is always something you slow down for as it always reduces your visibility, sometimes considerably)

            Can you imagine driving in the car you have developed? No because that is not needed, it needs to correctly identify a risk and then take appropriate action - such as driving straight through it or stopping or taking evasive action.

            I do, every day. As do a lot of my fellow drivers. It's called 'driving to the conditions'. See an item that is a threat (ie will impinge on your path, block your vision etc), start to take action while assessing it; if it's not a potential threat, resume, otherwise take further action to lessen the impact.

            The vehicle you design would often be involved in accidents, as it would leave taking evasive action until it was too late. Many would die. Mine would get you to your destination comfortably and safely, almost certainly never heavily applying the brakes (unless something changed suddenly, like a ball coming out from the side of the road - your threat model would see it as "ball, no threat, continue", mine sees "child coming, SLOW DOWN!!!" - yours kills a kid while mine means there's barely an incident). I'll take my design over yours any day.

            Hence you can't just say if an object is detected it must therefore be a threat, that is not intelligent at all.

            Your response is the thing that is not intelligent. Until you can be sure something is NOT a threat, you must treat it as such. That is the basis of defensive (ie proper) driving - treat everything as a threat until you can assess it and deal with it. Driving straight through a small amount of smoke or mist is usually perfectly fine, as you can see through it with enough clarity to know what's on the other side. Driving straight through fog is a criminal offence in many (if not most) countries, as you are not driving to the conditions and should be slowing down to a safe speed.

            This is all just basic common sense, but then you do seem to come up with some really silly excuses for not avoiding a crash, for dangerous drivers remaining on the road, etc. (Well, not necessarily you, but certain posters here anyway...)

            My annual driving miles tend to far exceed the average, yet my accident rate is far below the average as is my heavy braking rate (excluding practice braking) - because I see things and take precautions long before they become a real hazard. Yes, this includes slowing for plastic bags when they might cross my path while I'm riding.

            [1] by 'vision' I mean any sensors the controller is using to detect the environment - a human's eyes or a SDC's cameras/LIDAR etc

            [2] Not likely to be a leaf from a lemon tree, but perhaps something larger and thick enough

      3. Kiwi

        Re: I've never worked in AI in any form...

        Leaves? Carrier bag? Smoke? Dust? Localised fog?

        ALL of these are threats that could lead to a fatal incident, although of course the threat level varies.

        Just a couple of days ago I slowed down for something like an old shopping bag blowing across the road. If it'd hit me in the right place, chances are good I'd not have lived. Why? Motorway (100km/hr speed limit, my speed somewhere near that), me on the bike, lots of 'roadside furniture' in the form of barriers, as well as a narrow shoulder and an upcoming split into a straight-ahead lane and an exit (with dividing barriers). If the bag had blown across my helmet, ending my vision, I may not've had time to remove it (not so easy when wind pressure helps hold it in place and MC gloves won't necessarily grip it well - not that I could see where to grip it, so I'd have to reach vaguely in the area). Dust (and leaves, to a lesser extent) can get in the eyes, causing a loss of or restriction in vision (although my usual practice is to close the visor when this is a clear risk).

        Smoke is seldom much of a problem unless bad enough to impair visibility, same as fog. I do know of several fatalities on NZ roads from drivers not slowing down in fog, and maybe some where they couldn't see a curve so drove off the road at speed.

        All threats are to be dealt with appropriately. Sometimes the appropriate action is to continue as you were, sometimes it's to slow a bit or change lanes. If you cannot stop in the clear road you have ahead, you are driving too fast for the conditions; under NZ law you must be able to stop within the visibly clear road (or 1/2 the road on narrow roads). If smoke limits your clear visibility to 20 metres then you must drive at a speed that means you can stop within 20 metres.
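
        That last rule is straightforward arithmetic: solve clear_m = v*t_reaction + v^2/(2*decel) for v. A sketch (the reaction time and deceleration figures are illustrative, not anything from NZ law):

          import math

          def max_speed_kmh(clear_m, reaction_s=1.5, decel_ms2=5.0):
              """Highest speed at which you can still stop within the
              visibly clear road ahead."""
              a, t = decel_ms2, reaction_s
              v = -a * t + math.sqrt((a * t) ** 2 + 2 * a * clear_m)   # m/s
              return v * 3.6

          print(round(max_speed_kmh(20.0)))   # ~31 km/h for 20 m of visibility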

    2. Neil Barnes Silver badge

      Re: I've never worked in AI in any form...

      The other way round perhaps?

      Look for spaces. Track how the spaces grow and shrink and how fast they're moving. If a space isn't going to be big enough for long enough for you, once you get to where it is, go somewhere else or stop.

      1. Doctor Syntax Silver badge

        Re: I've never worked in AI in any form...

        I think that as humans we follow both processes.

  10. smudge
    FAIL

    How not to hold up autonomous cars

    Who remembers the predictions that there would be permanent traffic jams of autonomous cars - because pedestrians would learn that they could walk right out in front of the cars and the cars would stop?

    "You've got to ask yourself one question. "Do I feel lucky?" Well, do ya, punk?"

    1. AIBailey

      Re: How not to hold up autonomous cars

      I'm thinking that Deathrace 2000 should be reclassified as a documentary in order to understand what might eventually go on in the "mind" of an autonomous vehicle.

  11. Moldskred

    The safety culture at Uber is just atrocious and clearly directly to blame for the accident and death. I think the phrase I'm looking for is 'criminally negligent homicide.'

    1. Doctor Syntax Silver badge

      I'm not sure either "safety" or "culture" applies individually let alone in combination.

    2. Tom 38

      At the very least, corporate manslaughter - the system had a setting which would have stopped it earlier, but they turned it off because it was stopping in too many scenarios.

  12. Anonymous Coward
    Anonymous Coward

    in other news

    a robocop stood still while four law-abiding citizens, queuing to deposit their cash, unexpectedly pulled out guns and demanded cash from a bemused robo-cashier.

  13. quattroprorocked

    How about

    All objects are types of Initial Object.

    So, capture the info about the initial object - movement etc. - and while you try and make up your mind about what it is / its threat level etc. you can at least react to it if it looks like collision is possible. At least that way you always have the full data for the system to play with.

    But me, I think real life, safe, autonomous cars are ten years away, and will be for some time to come. (Maybe "sleep while it takes you up the motorway in formation" could be ready sooner, but "urban", long way off).

    1. Charles 9

      Re: How about

      Then you get rear-ended as it stops for a trash tornado. NOT a good idea.

      1. jospanner

        Re: How about

        1. I would rather be rear-ended than kill someone

        2. The person behind me should stop following so close

        1. Doctor Syntax Silver badge

          Re: How about

          And "up the motorway in formation" implies sufficient coordination to prevent that. So long as there's enough slack so that Charles's trash tornado at M1 junction 29 doesn't stop traffic right back to junction 19.

        2. Charles 9

          Re: How about

          "I would rather be rear-ended than kill someone"

          Not all rear-enders are survivable. So you're saying you'd rather die than kill.

          "The person behind me should stop following so close"

          You forget Murphy's Law of tailgating: the moment any space grows longer than a car length, it gets taken up. You should ALWAYS expect a car to be following too close to you because of this.

          1. Kiwi

            Re: How about

            "I would rather be rear-ended than kill someone"

            Not all rear-enders are survivable. So you're saying you'd rather die than kill.

            No, but for car occupants they do tend to be survivable. And yes, I would personally rather die than kill.

            "The person behind me should stop following so close"

            You forget Murphy's Law of tailgating: the moment any space grows longer than a car length, it gets taken up. You should ALWAYS expect a car to be following too close to you because of this.

            If someone is tailgating, slow down until the speed you're driving at is safe for the distance they're following at. It really is that simple.

            You forget that the majority of us don't drive in such a messed-up place as you imagine. Most drivers far exceed the level of skill you display in these forums, and the level you claim your locals manage. Perhaps you should be doing something to fix the exceptionally poor driving in your area instead of playing "whataboutism" and expecting the rest of us to lower our skills to the level of your stupidity?

    2. Anonymous Coward
      Anonymous Coward

      Re: How about

      Yes and no. I think there might not be a time in my lifetime, or my kids' lifetimes, where I could have a car drive autonomously in any part of the world and do it completely without issue.

      However, a car driving autonomously in a defined public urban area is available and doing it today - without a driver. See the recent video on here of a Waymo. On a highway/motorway a few cars can probably already achieve this.

      Teslas can also drive autonomously around a car park with surprisingly few incidents.

      So there is a big divide between what they can already do today and the situation I mentioned at the beginning. However this gap will be filled constantly, and there will be a point where they can do it well enough most of the time, under most situations, and have significantly fewer issues than a human. This won't be a switch where a vehicle is suddenly released that can hit Level 5 in every situation; there will be a gradual progression with some big leaps.

      A bit like voice recognition, which went from seeming impossible for a while, to being barely usable after many, many hours of training, to today, where for many people you can speak to a system out of the box and it will recognise most of what you say. (Yes, I know, not for everyone, and it is not perfect etc. etc., however it is at the point where most people no longer consider it unusable.)

      1. Primus Secundus Tertius

        Re: How about

        @anon

        1. "surprisingly few incidents" is today's euphemism of the day.

        2. Voice recognition is still very poor compared with OCR or a reasonable typist.

        1. Anonymous Coward
          Anonymous Coward

          Re: How about

          1. Not really - I meant it quite literally. With over a million Smart Summons performed, there were only a handful of incidents reported (all no more than a 'fender bender'), of which most were caused by the human driver of another car.

          I expected there to be many, many more incidents but was surprised at how few there were. Hence "surprisingly few".

          2. I wouldn't say 'very poor'. My point was that voice recognition isn't 'perfect', but for many people it would be no surprise anymore if a system sat there and recognised 99% of what they said. We can even be quite blasé about real-time language translation between two speakers of different languages, or the automated voice booking system that can understand and react to the business it is calling. I'm even surprised at how reasonable the system that keeps calling me up to tell me I've had a motor accident is, when I test it - it seems to get better every few months.

          I was making the point that it didn't go from rubbish to perfect in one step; it had many iterations to the point where it has become usable for many people and is used every day by some - Alexa, Siri, Google etc. I see similar for autonomous cars: they won't suddenly be 'perfect', they will just get to a point where, for your scenarios, they work exactly as expected.

          1. Doctor Syntax Silver badge

            Re: How about

            "My point was voice recognition isn't 'perfect' but for many people it would be no surprise anymore if a system sat there and could recognise 99% of what you said."

            Sometimes I can't avoid TVs in public places sitting there with the sound turned off and subtitles of attempted automated transcription. That the transcription can be attempted is admirable but the results are pretty dire. You certainly couldn't rely on them if your life depended on it - which is the case for autonomous cars.

            1. Anonymous Coward
              Anonymous Coward

              Re: How about

              You seem to be unable to understand that this isn't a direct analogy. The point was about progression towards a situation where it can become mainstream. So the fact that an automated subtitling system can produce results in real time that are understandable shows that it has gone into full use and can now improve over time. Heck, even on YouTube the results are pretty good for many videos; some are perfect.

              At some point people will be using autonomous vehicles that aren't perfect. You know the car you drive around in today isn't perfect, and faults in them can and do cause deaths. However, they get to the point where they are at an acceptable level and the risk is so low as not to be a concern. This will be a higher bar than voice recognition due to the safety implications, but it is still a progressive path.

              1. Anonymous Coward
                Anonymous Coward

                Re: How about

                Maybe opening another can of worms here, but most Tesla drivers understand the risks and the technology very well, and are confident enough in the autopilot system - when monitored and used properly - to turn it on. There is some controversy around the stats for sure, but it would seem that it is already much safer than the average human driver.

                Fully accept that there have been deaths caused when the system was relied on fully with no observation, and it shows it is/was far from perfect. But with 2 billion miles travelled using it, many owners feel the risk is small enough to trust it under some circumstances, and may feel it is safer than having it turned off - but also wouldn't trust it enough to go for a sleep on the back seat, if that option were available.

                1. 's water music

                  Re: How about

                  ... many owners feel the risk is small enough to trust it under some circumstances...

                  Even if you assume that these owners are accounting for the risks externalised by their being inside a vehicle, and for the benefits which accrue to themselves only, you are still relying on human judgement of risks from infrequent events, and that is not a register where humans perform well.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: How about

                    Humans already use their judgement to analyse risk when driving themselves every day. This is why there are so many fatalities. An L2 autonomous car can easily reduce the risk and supplement the human judgement of the observer. Hence the risk, and therefore the accident rate, is lower.

                    Humans already allow themselves to be driven or flown etc knowing that accidents have occurred but can effectively gauge the risk based upon the frequency of those negative events and a risk/benefit calculation.

                    Hence I would trust Autopilot to take me along a separated highway while keeping observation, but I would not yet trust the same vehicle to drive with me having long periods of inattention (e.g. sleeping). That is a reasonable risk calculation that, after 2 billion miles of driving, has not seemed inappropriate.

              2. Baldrickk

                Re: How about

                The funny thing is that live subtitles are done these days by a human watching the stream, and delivering the same lines themselves in a monotone.

                Ostensibly, this is to reduce the defect rate, but voice detection is now good enough that it could probably be done without now.

                The mistakes you see while watching the muted news feed? That's the human in the loop messing up.

      2. batfink

        Re: How about

        "Completely without issue" is part of the problem here.

        Too often the reaction to accidents with self-driving cars is "they shouldn't be on the road until there are zero accidents".

        What this should be is "they shouldn't be on the road until they have fewer accidents than humans".

        Humans are far from perfect drivers. Just have a look at the US fatality figures - 40,000pa, and that excludes injuries.

        Yes, AI is going to take a lot more work, but the target is never going to be zero.

        1. Anonymous Coward
          Anonymous Coward

          Re: How about

          I think the target will always be zero, but it may not be achievable. It will have to be many magnitudes better than humans (40,000 p/a) though. Humans can accept that they are fallible and that drivers cause crashes and kill people every day. They can also accept that they, themselves, are fallible and could potentially kill someone else or themselves on the road. They can even accept that someone driving them is fallible and may kill them on their journey - I mean, people will already get in cars with drivers who are drunk, or be in a taxi in Italy.

          However they will struggle to accept vehicles routinely killing their occupants or other road users. Even at 1/10th of the fatalities, news stories highlighting 12 deaths every day in the US caused by autonomous cars are not going to be acceptable to most people.

          1. batfink

            Re: How about

            Agreed - and that's the perception problem that needs to be changed.

            We're prepared to accept that humans are fallible but not to accept that machines are also fallible - even when the latter failure rate is lower.

            It is possible to change. We managed to make the change from human-operated lifts (elevators) to fully automated ones (yes I'm showing my age). It's overcoming the initial mental block that's going to be the problem.

            Of course we're nowhere near the self-driving systems being ready yet, but that's not an excuse to dismiss the idea as unworkable.

            1. Anonymous Coward
              Anonymous Coward

              Re: How about

              The problem with machine fatalities is that they are repeatable - in the same conditions different humans, or even the same human, could act differently, whereas all the same machines will act in the same way.

              1. Anonymous Coward
                Anonymous Coward

                Re: How about

                A million machines could also be 'taught' how to do something differently in seconds with the click of a mouse and an OTA update. A million humans would require a massive re-education program with limited reach.

                1. Charles 9

                  Re: How about

                  The other problem is that they fail in conditions in which no competent human would be expected to fail: such as not seeing the broad side of a twenty-foot-long container on a trailer smack across your field of view.

        2. Doctor Syntax Silver badge

          Re: How about

          "Humans are far from perfect drivers. Just have a look at the US fatality figures - 40,000pa, and that excludes injuries."

          That needs to be set against vehicle miles. I don't know how your figure compares to vehicle miles in the US but in the UK it's a huge number of miles per fatality. I doubt autonomous vehicles have got anywhere near it.

          1. Anonymous Coward
            Anonymous Coward

            Re: How about

            I would expect Level 2 autonomous vehicles have a lot fewer fatalities per mile than human drivers. But the sample size is still small and the data for exact comparisons doesn't exist.

            Tesla for instance show 1 accident (not death) for every 4.34 million miles driven, whereas the US average is 1 accident for every 498,000 miles driven.

            However it isn't directly comparable, as most Autopilot miles will be on highways and larger roads, whereas average driving is on all road types. There is also a large variation in the car types on the roads, of different ages and maintenance, driven by the whole spectrum of society; Tesla cars will be newer than average and driven by a narrower demographic. On the other hand, I would assume Tesla accidents on Autopilot are more likely to be reported (at least to Tesla) than the average road accident, especially one that doesn't result in a claim.

          2. batfink

            Re: How about

            Of course the number of miles of self-driving are low so far - it's still in the development stage. Had you been reading carefully you may have noticed that I was talking about *targets*.

      3. Doctor Syntax Silver badge

        Re: How about

        "However a car driving autonomously in a defined public urban area is available and doing it today"

        And they do it right up to the point where they hit somebody, as here.

        1. Anonymous Coward
          Anonymous Coward

          Re: How about

          There are cars doing it today that haven't hit anybody; that is the point.

          You seem to enjoy spreading FUD that every other car project is just waiting to hit someone and that it is inevitable, just because an Uber mowed someone down. It might well be that other projects are much safer; hopefully they are.

          I've never thought of Uber as being the shining light of perfection and safety.

  14. David Lewis 2
    FAIL

    tl;dr

    Summary: The software was not fit for purpose and should never have been in use on the public roads.

    It looks like they are using the modern trend of "testing in production". The problem is that when one of these "we didn't think of that" scenarios happens, someone could die.

    And please can we stop using AI (Artificial Intelligence) when we really mean ML (Machine Learning). Intelligent it wasn't.

    1. Charlie Clark Silver badge

      Re: tl;dr

      This isn't really about the software. If you don't trust the software you include fail-safes such as emergency braking (disabled) and a supervisor (not supervising).

      "Failure is not an option" is Uber's missing motto.

    2. David 18

      Re: tl;dr

      Looks like I have almost perfectly, and inadvertently, duplicated your post :D

    3. Kubla Cant

      Re: tl;dr

      And please can we stop using AI (Artificial Intelligence) when we really mean ML (Machine Learning). Intelligent it wasn't.

      Absolutely. And how long will it take for Machine Learning to replicate the result of several million years of evolution? In terms of spatial perception and ability to predict the path of objects in their surroundings, the rats and dogs driving cars on YouTube are far better equipped.

  15. MrKrotos

    Is time going backwards now?

    "involved in 37 smashes between September 2018 and March 2018"

    1. Anonymous Coward
      Anonymous Coward

      Re: Is time going backwards now?

      Was it a DeLorean, maybe?

  16. sbt
    Boffin

    A more ethical way of developing these autonomous systems ...

    ... would be to have a qualified, or better an expert, driver at the wheel, and compare the actions taken by the driver with what the autopilot system *would* have done given the inputs. That way, jaywalkers don't have to die before we find out they're not included in the code, and the lesson is learnt in the code review and not in the morgue.

    1. Anonymous Coward
      Anonymous Coward

      Re: A more ethical way of developing these autonomous systems ...

      Still a limited data set. You would need that scenario to play out thousands of times all slightly differently to start training the system.

      That would require hundreds of thousands of expert drivers and cars, unless you forced the scenarios (and then it would not be a real situation, as it wouldn't be random enough).

      I believe some manufacturers run a shadow system that makes decisions (but doesn't act on them) and then analyses what the driver did. The two are then compared to see who would have made the better decision, and to learn from the times when the computer wouldn't have.

      Sometimes you can just program it in. A computer in a car can already deal with an understeering or oversteering car, or other loss of traction, better than nearly all drivers on the road. However there may be other factors behind the loss of traction that the human can comprehend better than the computer.
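
      A shadow system of that sort is conceptually simple: the autonomy stack runs on live inputs, but its output is only logged, never actuated. A sketch with invented interfaces:

        def shadow_mode_step(sensors, planner, human, log):
            """The human's controls drive the car; the machine's decision
            is merely recorded for offline comparison."""
            state = sensors.read()
            machine_action = planner.decide(state)    # computed, never actuated
            human_action = human.current_controls()   # what actually happened
            log.append({
                "state": state,
                "machine": machine_action,
                "human": human_action,
                "disagree": machine_action != human_action,
            })

      Offline, the "disagree" records are the interesting training data: who braked first, who saw the pedestrian, who would have crashed?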

      1. Brewster's Angle Grinder Silver badge
        Joke

        Re: A more ethical way of developing these autonomous systems ...

        "However there may be other factors for the loss of traction that the human can better comprehend than the computer."

        Yeah, traction really suffers when you get pedestrians under the rear wheels.

      2. Richocet

        Re: A more ethical way of developing these autonomous systems ...

        Yes, there's some sort of Bosch-developed traction control system in my car that prevents and recovers from skids etc. It's about 5 years old and came standard. It sounds like what you describe.

    2. Charlie Clark Silver badge

      Re: A more ethical way of developing these autonomous systems ...

      The cars have already been trained on how humans drive. Having a supervisor obviates this because the supervisor's intervention is logged so that the algorithm can be retrained directly. But the supervisor wasn't paying attention. And Uber disabled the emergency braking system, so a woman was killed.

      We shouldn't be discussing the software as long as the company is so wilfully negligent. This needs to go to court and the sentence should be severe (big fine but maybe someone needs some jail time) enough to focus everybody's attention.

  17. mihares

    Worse than human

    Premise: sitting "at the wheel" of an autonomous car that sucks at exception handling is, for a human, a recipe for disaster: it must be so mind-bogglingly boring that, maybe, you'd be brought back to conscious life by a 737 MAX crashing in front of you because of a software problem (oops, sorry...).

    0) This is exactly what self-driving cars are supposed to avoid: delayed awareness of people (or animals) entering the path of the vehicle in a non-standard way.

    1) Although the sensor did pick up something (someone: Herzberg), the software was so badly thought out that it failed completely at object permanence, something that humans at a larval stage are able to handle.

    2) The software was so badly designed that it could not handle the standard safety equipment of the car.

    3) The software was so badly designed that it did not act conservatively (unknown object --> slow the f**k down)

    I don't think anybody would reach an employable age without killing themselves by sticking their tongues in the mains if they were so stupid. I am an idiot, and even I would think about pedestrians (probably drunk) walking in front of the car at any time.

    This stinks of management pressure. All over. Some Uber managers should be held responsible for this.

    1. batfink

      Re: Worse than human

      At a basic level here, it seems passing strange to me that the code effectively says "can't be a human crossing the road unless at a marked crossing".

      Eliminating the case that there might be humans crossing the road elsewhere looks like clear negligence on the part of the designers to me.

      1. mihares
        Boffin

        Re: Worse than human

        Well, yes, but I don't think this is the biggest problem with the software design - their training of the classifier stinks, granted, but an object was recognised, and even if unknown or changing in nature it should have been tracked.

        Problem is: tracking an object whose coordinates and dimensions may be noisy (very noisy: radars and lidars are not ophthalmic surgery equipment), while taking into account the speed of the vehicle and the parallel component of the object's velocity, is not trivial from a conceptual point of view; it can get computationally expensive and may require you to develop the system for years before it's roadworthy. As in: doesn't freak out at pigeons and leaves, plays nicely with the car's standard safety equipment and doesn't kill people.

        Uber didn't want to put in the time and effort to get it right: waiting that long for the publicity, and at such a high price? No way. So the push for a shortcut must have been huge. Hence the path prediction came to depend on the label the object has - and the necessity of dropping the previous tracking history once the label changes: if a lorry becomes a car, the distance calculations may be very off, leading to dodgy path predictions and discomfort for the autonomous vehicle's occupants.

        I think a mitigation for these self-driving cars is to have the specification, the code and the post-training parameters open for scrutiny - if not to the whole of humanity, at least to regulators. A solution, as things are now, is to not have them on the road.
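
        For what it's worth, the cheap textbook answer to noisy coordinates is a smoothing filter: even a fixed-gain alpha-beta filter - a poor man's Kalman - keeps a usable velocity estimate through noisy returns. A sketch with made-up gains and data:

          def alpha_beta_track(measurements, dt=0.1, alpha=0.5, beta=0.1):
              """Smooths noisy positions and maintains a velocity estimate,
              so a single bad return doesn't wreck the predicted path."""
              x, v = measurements[0], 0.0
              for z in measurements[1:]:
                  x_pred = x + v * dt              # predict one step ahead
                  residual = z - x_pred            # how wrong the prediction was
                  x = x_pred + alpha * residual    # nudge the position...
                  v = v + (beta / dt) * residual   # ...and the velocity towards it
              return x, v

          # lateral position of something drifting across the road, with noise
          zs = [5.0, 4.9, 4.45, 4.6, 4.1, 3.95, 3.6]
          x, v = alpha_beta_track(zs)
          print(x, v)   # the estimate is heading towards 0, i.e. into our path

        None of this is exotic; the point is that the reported design threw the state away instead of smoothing it.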

  18. John Savard

    Simple solution

    A human has died.

    The cost of the medical care she requires to stop being dead, and resume living, is greater than the market capitalization of Uber.

    Goodbye Uber.

    That's how it should work; then businesses would exert themselves mightily to do whatever is necessary to eliminate any risk whatsoever of people being killed or injured by their activities.

    1. Francis Boyle Silver badge

      Goodbye Uber

      would be ideal, but I would settle for them* being banned from anything to do with autonomous vehicles except as a customer.

      *Not just the company, anyone who was involved in the project.

    2. batfink

      Re: Simple solution

      Sounds great. Have you thought about how that would apply to all businesses?

      Good luck getting a cup of coffee.

      Having said that, yes those fuckers in Uber should be personally liable here. With glaring holes like those revealed here, this software shouldn't have been allowed out into the real world.

      1. Public Citizen

        Re: Simple solution

        It should not have been allowed anywhere but on a fenced in Uber Test Track with a herd of deer permanently in residence.

        For every deer killed or injured, add a full year of additional testing before the thing is certified to be turned loose on clearly defined city streets, and NO, you aren't allowed outside of clearly defined boundaries within highly urbanized areas.

        1. Stoneshop
          Devil

          Re: Simple solution

          It should not have been allowed anywhere but on a fenced in Uber Test Track with a herd of deer permanently in residence.

          Final test: the deer replaced with a herd of Uber CxO's on cocaine.

    3. dvhamme

      Re: Simple solution

      That is ridiculous. Does a human driver go bankrupt when he accidentally runs over a pedestrian? No he doesn't - in most countries not even when you can prove that he was deliberately inattentive (drunk, catching Pokémon).

      The problem with autonomous vehicles is that we want them to be 100 times safer than the average human driver. That is totally unreasonable. If we only demanded that they make the road safer (i.e. that they cause fewer accidents, or lower-speed accidents, than the average human), we would be zooming around in driverless cars RIGHT NOW.

      Why don't we let the insurance companies sort it out? You buy an autonomous car from brand X. Your insurance company looks at accident records for this model, technology used etc. and makes you a quote which you accept as it is offset by the time you can spend doing other things in the car. If your autonomous vehicle then has a fender bender or kills someone, your insurance company pays for damages. Vehicle manufacturers have incentive to make their cars safer so they are cheaper to insure than the competition, we don't need entirely new accountability laws.

      Maybe the idea is flawed, I just came up with it. Feel free to point that out to me.

      1. Public Citizen

        Re: Simple solution

        NO, it isn't entirely unreasonable. It isn't even partially unreasonable.

        We need to expect the same level of reliability from this ~software~ that we ~expect~ from the hardware it is ostensibly designed to operate correctly.

        1. dvhamme

          Re: Simple solution

          Why do we need to expect this? Why does a cost / benefit analysis not work in this case?

          If you have a system with many components, and you identify the worst-performing component (the driver) and can replace it with something that is better averaged over all circumstances, why would you not do it, and instead demand that the new component surpass the old one's performance by orders of magnitude?

  19. mr-slappy
    WTF?

    I don't understand...

    .. why software in aircraft (*) has to undergo years of rigorous design, testing and certification before a plane can fly, but the bar for self-driving cars seems to be "it compiled OK". (Or maybe "we did a load of really thorough testing, honest guv".)

    Why are self-driving cars even allowed on the roads? The technology doesn't even seem to be at alpha yet.

    (*) well not for Boeing obviously

  20. Mage Silver badge
    Devil

    It's a Cruise control!

    Calling it Self-Driving or Auto Pilot is a marketing gimmick. Some countries already insist it may only be marketed and described as a cruise control.

    Can we make it a criminal offence to hype AI? Or maybe to use "AI" at all in the description of a product? "AI" has become a meaningless marketing term too.

    Directors should be personally liable. A fine on a company is no deterrent.

  21. Anonymous Coward
    Anonymous Coward

    Not designed to detect jaywalkers == manslaughter through gross negligence

    > The self-driving Uber car that hit and killed a woman walking her bike across a street wasn’t designed to detect “jaywalking pedestrians.”

    This. I don't understand why this hasn't resulted in an automatic charge of manslaughter through gross negligence, or whatever the US equivalent is. Even if jaywalking is forbidden, people on the road are to be expected, if only because they may be fleeing a crash on the other side of the road. To assume there will never be any, and that the software therefore doesn't need to identify people, is criminal negligence.

    1. Richocet

      Re: Not designed to detect jaywalkers == manslaughter through gross negligence

      Well, because there's a loophole.

      If you take a process and make it much more complex and with multiple legal entities, it is possible to get away with more from a law and liability perspective.

      Examples:

      Tax avoidance through complex international company structures.

      Causing a financial crash through unnecessarily complex financial products created from home loans.

  22. Anonymous Coward
    Anonymous Coward

    Too soon?

    It was an Uber cab - the internal investigation team at Uber only wanted to know why it didn't stop to offer the woman a ride. A fare is a fare.

  23. David 18

    TL;DR

    TL;DR

    AI is still a myth and a marketeer's wet dream. It's not ready for prime time, and certainly nowhere near ready for anything involving safety in the real world.

  24. dvhamme

    I read the report and I disagree with the emphasis on jaywalking. The fatality is a result of 1) poor tracking code and 2) poor sensor fusion.

    For 1), they should obviously consider associating a previously found "unknown" object with a freshly detected bicycle at an overlapping or nearly overlapping position, to generate a motion hypothesis.

    For 2), at one point the system assumed the bicycle was following the normal direction of travel in the lane it was detected in, but this hypothesis could easily have been proven wrong by asking the radar what it thought: the Doppler radar would have picked up the fact that the object had near-zero velocity along the viewing direction.

    We don't need cars that explicitly consider jaywalking. It would have been sufficient if the system had been able to reconstruct, even with low accuracy, an estimated motion path, and to understand that this was abnormal for the road context. The system is designed to do that, but is apparently poor at it.
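
    Sketched in code, with invented structures throughout (nothing below is Uber's real interface), the two fixes might look like:

        import math

        def associate(unknown_track, detection, max_gap_m=1.5):
            """Fix 1: link a fresh 'bicycle' detection to a stale 'unknown' track
            when their positions (nearly) overlap, so the motion history carries over."""
            gap = math.hypot(detection["x"] - unknown_track["x"],
                             detection["y"] - unknown_track["y"])
            return gap <= max_gap_m

        def along_lane_hypothesis_holds(measured_radial_ms, lane_speed_ms, bearing_rad, tol_ms=1.0):
            """Fix 2: an object moving along the lane should show a Doppler (radial)
            velocity near lane_speed * cos(bearing); a crossing pedestrian shows
            near zero, which falsifies the along-lane hypothesis."""
            expected = lane_speed_ms * math.cos(bearing_rad)
            return abs(measured_radial_ms - expected) <= tol_ms

    Here a measured radial velocity of roughly zero would have rejected the "moving with traffic" hypothesis seconds before impact.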

    1. DuncanLarge Silver badge

      You forgot 3): the human driver NOT LOOKING AT THE ROAD AHEAD.

      also:

      >We don't need cars that explicitly consider jaywalking. It would have been sufficient if the system had been able to reconstruct, even with low accuracy, an estimated motion path and understood that this was abnormal for the road context

      What? Ever crossed the road in the UK? This "abnormal" condition takes on a whole different meaning here. The system has to construct a highly accurate motion path for every road user and road-crosser, no matter where they are at the time or what they are doing. That's what I must do when I drive, and if it can't match my abilities this system should be relegated to operating inside a warehouse, or reduced in size and turned into a delivery robot that moves at 4 mph on the footpath. Expecting anything less is ridiculous.

      Or give it a guided road where it operates like a tram. You can then keep the pedestrians off such a road.

  25. Electronics'R'Us
    FAIL

    The problem with discrete Object detection

    I live in south east Cornwall, and most of the local roads are single track lanes (with some reasonable passing places scattered along them).

    Take a typical scenario from a drive out to the nearest supermarket.

    Exit from my driveway onto a lane. Drive to a completely blind bend, slow down and hit the horn for a short burst just in case something is coming the other way. Drive slowly around the bend until a decent stretch of lane is visible and speed up a bit (assuming no oncoming traffic or people walking their dogs).

    Drive out through the village toward a main road (a term that takes on new meaning in the south west). There are a few points on this drive, before I reach a proper road, where I can see the end of a part of a lane at a turn but cannot see the middle of that lane itself; if I spot a vehicle entering the lane (visible for perhaps a couple of seconds at most), I can wait at my end until it passes, but bear in mind that I cannot see it for much of its approach.

    From what I see here, the Uber software might see it, but as soon as it disappears into the lane's blind zone it assumes it no longer exists, and would therefore merrily carry on when the correct thing to do is wait for the known approaching object to pass at a suitable passing place.
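
    For what it's worth, conventional trackers handle exactly this by "coasting" an unmatched track on its last motion estimate for a grace period instead of deleting it. A toy version, with field names invented for the sketch:

        def prune_and_coast(tracks, matched_ids, dt, max_coast_s=3.0):
            """Predict unseen tracks forward instead of forgetting them outright.
            Each track dict carries id, x, y, vx, vy and a coasted_s timer."""
            survivors = []
            for t in tracks:
                if t["id"] in matched_ids:
                    t["coasted_s"] = 0.0           # seen this frame; reset the timer
                else:
                    t["x"] += t["vx"] * dt         # dead-reckon through the blind zone
                    t["y"] += t["vy"] * dt
                    t["coasted_s"] += dt
                if t["coasted_s"] <= max_coast_s:  # drop only after a sensible timeout
                    survivors.append(t)
            return survivors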

    Sounds pretty fundamentally flawed to me.

    I don't expect self driving vehicles around these parts for a very long time.

    1. Charlie van Becelaere

      Re: The problem with discrete Object detection

      "I don't expect self driving vehicles around these parts for a very long time."

      As a drive-on-the-right-side-of-the-road driver who was recently driving in Devon, I totally concur with your expectation here.

      I was about to head up a road much like the one you describe, but decided discretion was the better part of valour, turned around and found a different route. Perhaps we'll get a bit of Robin Hood v. Little John action when two autonomous vehicles meet on a single-track lane?

      1. DuncanLarge Silver badge

        Re: The problem with discrete Object detection

        Robin Hood,

        Robin Hood,

        Driving down the lane,

        Robin Hood,

        Robin Hood,

        Automated for convenience,

        Steers like a cow,

        Brakes for no one,

        Robin Hood,

        Robin Hood,

        Robin Hood

  26. Anonymous Coward
    Anonymous Coward

    So much for "self-driving cars will eliminate traffic accidents". Does anyone still believe that?

    1. Charlie Clark Silver badge

      Not eliminate, but certainly reduce through defensive driving and the use of fail safes. Both of which were conspicuous by their absence in this instance.

      1. tin 2

        I wonder if they will reduce the number of accidents due to driver idiocy or fatigue, for example, but increase the number of entirely avoidable accidents caused by some processing problem with the data the car could "see".

        And the most interesting thing will be whether that figure comes out at fewer accidents overall, and if so, whether that's really a success...

        1. Charlie Clark Silver badge

          The counterfactuals will be very difficult to assess and lots of different outcomes are possible.

          I suspect there will be a period of adjustment, as normal drivers learn to deal with cautious, well-behaved self-driving cars that never run red lights, keep safe distances and park correctly, but they will quickly get used to them.

          The extensive trials being run are designed to make this as smooth as possible, so that the cars behave like good drivers in the area, but time will tell.

  27. Anonymous Coward
    Anonymous Coward

    Autonomous cars will need their own dedicated space

    to start with. Hence their use as taxis, and in city centres that can be (relatively) easily adapted for them.

    The problem is, that isn't what the marketeers want to promise in order to keep the money flowing.

    1. Anonymous Coward
      Anonymous Coward

      Re: Autonomous cars will need their own dedicated space

      Probably the easiest space to dedicate to them is highways, where there should be no pedestrians or crossroads. Still, there could be unexpected objects anyway.

      1. Ken Hagan Gold badge

        Re: Autonomous cars will need their own dedicated space

        Sounds like a railway.

  28. Il'Geller

    ...since there was no classification label for a person not using a proper crossing point...

    Uber marks patterns with timestamps and creates scenarios that contain cause-and-effect relationships.

  29. Sleep deprived
    Unhappy

    Jaywalking?

    Must pedestrians wear pants with large white bands to convince Uber's AI they're walking on a pedestrian crossing and not get mowed down? Maybe Uber could provide them, but they're already going broke.

  30. robert lindsay
    Unhappy

    I am the law!

    Judge Dredd doesn't run down perps unless they've committed a crime. You mean Judge Death.

  31. martinusher Silver badge

    What's the standard for success?

    From what I remember of this accident, the person crossed the street in front of the car, giving neither the AI nor the human in the car time to react. In theory we'd like the AI to be better than a human, but for now surely the criterion for whether the software works is "Does it work as well as a human?". It's possible that over time the AI might be made to work better than a person, but I don't think it's that good at "what if" scenarios; it just reacts to "what is", so if you give it no time to react it's just not going to work.

    It's a fun problem to solve but, seriously, if it doesn't give incremental improvements over a person, what's the point in deploying it? Uber et al don't pay their drivers that well, so it's not as if they're going to make a fortune putting people out of work (and thereby reducing the pool of people able to afford their services). Self-driving makes more sense for rapid transit: buses on reserved busways, and light rail.

    1. Zack Mollusc

      Re: What's the standard for success?

      The person had almost completed crossing a three-lane road before impact. The car was probably not in sight on the empty road when she began crossing. The published logs have the car first detecting her over five seconds before impact: plenty of time either to stop fully or to change lanes and pass behind her.
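
      Back-of-the-envelope, in Python (the deceleration figure is my assumption, not something from the NTSB report):

          v = 39 * 0.44704          # 39 mph ≈ 17.4 m/s
          a = 0.7 * 9.81            # ≈ 6.9 m/s², a typical dry-road emergency stop
          print(v / a)              # time to stop: ≈ 2.5 s
          print(v * v / (2 * a))    # stopping distance: ≈ 22 m

      Even allowing a generous second of reaction latency, five seconds gave the car roughly twice the time it needed to stop, never mind change lanes.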

    2. Kiwi

      Re: What's the standard for success?

      It's possible that over time the AI might be made to work better than a person, but I don't think it's that good at "what if" scenarios; it just reacts to "what is", so if you give it no time to react it's just not going to work.

      In a few incidents, "what if?" is the only reason I am alive. In many more, it's why they weren't more serious. It is a BIG part of "defensive driving", or even of basic prudent driving: see a threat, anticipate possible/probable actions, plan how to deal with them. It has meant I already had an escape route planned when other vehicles crashed near me; it has meant I held back when someone made a sudden lane change; it meant, a few days ago, that a bag blowing down the road didn't cover my face when it blew across. And it means I'm already slowing down (tyres and suspension already loaded) when I need to stop in a hurry (well, it lessens the need considerably).

      I accept that new drivers need time to learn some of those skills (and some courses seem to be working on that more and more these days), and that they need time to learn realistic levels of movement and risk, but they will learn.

      AI needs to be able to learn that as well. If it cannot project and anticipate so it has a plan of action ahead of time and is ready to react, it should not be on the road. This applies to humans as well (at least once out of the supervised learning stage).

    3. DuncanLarge Silver badge

      Re: What's the standard for success?

      > giving neither the AI nor the human in the car time to react.

      If you read the article you will see that you remember it wrong.

      The AI had loads of time and decided to ignore the object.

      The human driver on board wasn't even looking out of the window and didn't see it about to happen. They only knew it had happened when the car went bumpety-bump.

  32. Pontius

    Jaywalking in South Africa

    Back in the '80s my wife and I were in East London in SA. We'd split up and arranged to meet later, and at the designated time I arrived back at our meeting point. I found my wife in a very heated conversation (she is not the quiet, retiring type) with a policeman. She had safely crossed the road, but not at a crossing, and he was berating her for jaywalking. I arrived. "Is this your husband?" he asked, then turned his attention to me, saying I should have had more control of her and that ultimately I was to blame. This did not go down well with my dedicated feminist beloved, who waded (verbally) into him yet again. We escaped with a warning after I humbled myself and apologised profusely, seeing how this confrontation could possibly end up. I did not want my day in court. On another occasion I winced as I witnessed her fantastically effective verbal demolition of the manager of a company that had aggrieved her. She is not a woman to be messed with. For the sake of my dangly bits, she is now my ex-wife...

  33. HmYiss

    oops..

    when you make an autonomous car and forget to account for people crossing the road.

  34. Richocet

    This system is ready for deployment in Queensland Australia

    The behavior this vehicle exhibited mimics Queensland drivers very well. And they already have the system packaged into an SUV.

    In Queensland, running over objects is the general approach to driving. If unsure, run over it and hope for the best, with some exceptions if the object is as big as the vehicle.

    Future upgrades for the Queensland market:

    1) Fit a bullbar so that the vehicle can sustain more impacts before needing repairs.

    2) Re-program it to drive a minimum of 10 km/h above the speed limit at all times.

    3) More aggressive AI, to swerve towards pedestrians, cyclists and motorcyclists.

    4) Occasionally turn off the "give way" logic module.

  35. Public Citizen
    Flame

    There are many areas in the USA where the painted markings of a crossing area are not present, yet the area is a ~legal crosswalk~ because of the context, be it a roadway junction, adjacent sidewalks, or other defining characteristics that every licensed motor vehicle operator is expected to identify and act on accordingly. In addition, in large areas of the country one may expect wild animals, from the size of a rabbit all the way up to a moose, to be on the roadway, particularly at night. The indigenous deer population is responsible for a statistically notable number of accidents, often with injury, and sometimes death, to the human occupants of the vehicles.

    If this [allegedly] AI system is not capable of properly identifying a human with a bicycle crossing the roadway in the dark [even with radar, which works equally well regardless of ambient light conditions], how can it possibly recognize and respond properly to the not-infrequent encounter with [pardon the cliche] a Deer In The Headlights? From personal experience I can tell you that it's a random dice roll whether the animal will freeze on the spot, bolt off the road, run away down the road, or charge the approaching vehicle. Until these systems are capable of correctly analyzing and responding to these situations 100% Of The Time, they aren't safe to be certified even for test operation on the public highways.

  36. tim 13

    While I saw the same flaws in the object-detection process as the other commenters, and not wishing to victim-blame, it must be said that the car's avoidance logic was no worse than Ms Herzberg's, and the car had less to lose.

    1. Baldrickk

      There is a reasonable expectation that the car can take action to respond to changing conditions. If it can't, it's going too fast for the conditions.

      That's not the case for a pedestrian - they are speed limited. She wasn't going too fast to react.

  37. sbivol

    What about the driver?

    The car had a driver who was supposed to be responsible for the vehicle at all times. I'd blame the human sitting behind the wheel, not the wheel or the algorithms.

    Software is buggy. This is why you don't let a plane full of people fly unmanned. This car wasn't unmanned either.

    1. Danny 2

      Re: What about the driver?

      The driver was playing Frogger.

      A car either drives itself safely or it doesn't; a human in the driving seat is just a scapegoat.

      I used to drive 100,000+ miles a year as a tech support engineer and half the time I was on autopilot, sometimes with bad results. I never hurt anyone else but I could have. Being driven by a car that seems to know what it is doing will just lull the driver into a false sense of security and they'll lose attention.

      It'd be better the other way around, the human drives the car and the computer intervenes if the human makes a mistake.

      1. Dusty

        Re: What about the driver?

        "A car either drives itself safely or it doesn't, a human in a driving seat is just a scape goat."

        Indeed, the "driver is there to take control in an emergency" thing is a dangerous myth.

        The window for avoiding a motor accident is typically only a matter of seconds.

        I doubt a human operator would have a chance to take control in the event of something unexpected happening, even if they were holding the wheel, feet on the pedals, and devoting their full attention to the road ahead.

        You know, actually driving.

        Self-driving cars are pointless unless they can be given full responsibility for the task, allowing the "passenger" to get on with something else (up to and including sleeping it off on the back seat after a heavy session).

        This is not to say that some technologies might not actually be helpful to a human driver, such as adaptive cruise control, automatic emergency braking and so on. But even these will eventually reduce the driver's involvement, and with it their ability to react to unexpected events.

  38. anthonyhegedus Silver badge

    AI?

    It's not artificial intelligence. At best it's simulated intelligence, and it's not even that. It's an algorithm, plus lots and lots of statistics, that approximately mimics a model of a model of how we think intelligence works in our own brains. It's not even close.

    It's like painted scenery in a film studio. It LOOKS like scenery: the colours look right, but the focus is off, and when you walk into it to try and pick things up (in the scenery), you can't, because it's a painting. You can make it ever more sophisticated: maybe you have a simulated sun moving across it, and lights to make the shadows fall the right way. You could even have some objects from the scenery built into the set, which you can walk on, pick up, etc. But it's still not the real thing. Eventually you could build a whole set like EastEnders or Coronation Street: real houses, shops, market stalls and so on. But beyond that, it's still an approximation of the real world.

    Everything that's supposedly 'AI' is just an approximation.

  39. Dusty

    In other words, the AI behaved very much like a human driver.

    Cyclists and pedestrians are often regarded as static objects to be avoided, rather than dynamic objects with a very significant risk of moving into your path (or of you moving into theirs, in the case of cyclists at intersections).

    Like a human driver, the AI didn't expect her to step right out in front of it, and by the time it realised (again, just like a human driver) that she was actually in the vehicle's direct path, it was too late to stop.

    1. Dusty

      OK, down-thumbers:

      How, exactly, do you think an AI (or even a human driver) should be programmed/taught to deal with the fact that sometimes pedestrians will just randomly step out right in front of you, or that sometimes cyclists approaching from a minor road will just carry on without giving way?

      The only "safe" approach is to come to a screeching halt every single time the AI detects any pedestrian or cyclist in the vicinity.

      Obviously, this is no more practical for an AI than it is for a human driver.

      So accidents like this will continue to be a feature of the highway until the vehicular and cyclist/pedestrian aspects of travel are physically separated and the one cannot intrude on the other.

      1. Kiwi

        How, exactly, do you think an AI (or even a human driver) should be programmed/taught to deal with the fact that sometimes pedestrians will just randomly step out right in front of you, or that sometimes cyclists approaching from a minor road will just carry on without giving way?

        Well... in the circumstances of this case, I could take several seconds to gradually detect the hazard, several more to consider various outcomes, and then leisurely change lanes.

        The AI had at least 5 seconds to work with. I think I could bring even my crappy car to a complete stop in that time, and it brakes worse than an out-of-control supertanker in a hurricane!

        Five seconds is a bloody long time not to be able to see something right in front of you. No one legally driving a car would miss that.

        Those who pull out of, or pull through, intersections are another matter, but the only cyclist I've ever known to do anything like that was some stupid, messed-up kid playing chicken with a freight train; I'll give you one guess who that was. In all my years on the road I've known of one case and heard of a few more. Pedestrians walking out are a different matter, but that tends to be only the mindless drones leaving a thugby game, and only around certain areas. Oh, I have had the odd kid run out, twice I think (struggling to think of any more): one from behind a parked car (I saw his head through the parked car's windscreen, so stopped), another crossing the road to her friends (I didn't see her at first because I was watching her friends, expecting one of them might come out, but I still saw her in time).

        You can prepare people to watch for warning signs and potential hazards (any pedestrian, cyclist, animal or other motorist can stray into your path, even from a standstill; as can any tree, lamp-post, house etc. if the right conditions are met), but in the odd case something will happen too suddenly, and for those you can't. An AI should have a better chance: humans only have eyes and ears, but an AI has all sorts of sensors.

        Thing is, the Uber had at least five seconds to detect the person and respond, but it didn't, because it was programmed NOT to see pedestrians unless they were at a dedicated crossing. How to fix this? Train it to know that humans can be on any road at any time, legal or not, and that if it sees one it should pay attention to what that human is doing, so it has a chance of predicting what they will do next.

      2. DuncanLarge Silver badge

        > How, exactly, do you think an AI (or even a human driver) should be programmed/taught to deal with the fact that sometimes pedestrians will just randomly step out right in front of you, or that sometimes

        Again I ask: what did you score on your hazard perception test? Maybe you are not a UK driver, so I will wish you good luck trying to drive on UK roads, where you are expected to know how to handle exactly these events, and to prove it, before you are even able to hold a licence.

        Sure, many stupid drivers forget to care about following such rules and skills once they have their licence, but I can't excuse a machine with a human backup specifically placed in it to spot these exact issues and react to them; especially when it becomes evident that the machine was never programmed to handle the random events that WILL happen on the road, and the human backup was lazily not doing their job.

        I was a software tester, and if I had the job of testing this algorithm I would have tested for more than just people crossing the road. I would have trees falling over in front of it, ducks crossing, a ladder falling into the road, an (obviously simulated) child rushing off after a ball or balloon, maybe with a panicked parent following. Oh, and I'd certainly have it try out a flock of sheep or cows, or a combine harvester, tractor, etc.

        You know, the geek in me was beginning to get all excited about self-driving cars. Seeing as a child of driving age is expected to demonstrate more object-classification and collision-avoidance power than a computer that's trying to simulate such skills, I'd say there will be a long development roadmap before we see anything other than automated delivery vehicles trundling along slowly and stopping the moment they detect a bird.

        Jesus, could you imagine how this thing would handle driving in India? Or China?

        1. Charles 9

          Think beyond pedestrians: think of other drivers who give the rules the finger because they've been gridlocked for over an hour and they're late (with paying passengers inside, no less).

          You want a REAL AI driving test? Turn them loose in some overpopulated Asian metropolis in the middle of their rush hour (my personal experience is Metro Manila's Epifanio de los Santos Avenue during evening rush as the sun is setting). Guaranteed every square centimeter of the road will be packed to the gills: if not with cars then with motorcycles, bicycles, even street peddlers on foot taking advantage of the captive audience. And no, mass transit won't save you here. Buses and taxis take up a fair chunk of the traffic while the line to take the train often spills out into the street to further add to the chaos.

          If an AI car can successfully run the length of that road under those constant conditions without a ding, THEN I'll think it's ready.

    2. DuncanLarge Silver badge

      > Cyclists and pedestrians are often regarded as static objects to be avoided

      Er, do you know how to drive? What was your score on the hazard perception test?

      I can't imagine how a bike can be a static object unless it's not moving... which wasn't the case here, as it was moving.

  40. Anonymous Coward
    Anonymous Coward

    naive trust

    the car makers can't, or don't want to, create a secure keyless system, so why do we trust that autonomous driving will work?

    1. Oneman2Many

      Re: naive trust

      Drivers don't want the inconvenience of a secure system.

  41. KillingTime

    Where are the test results?

    I don't think I've seen an adequate explanation anywhere as to why self-driving cars aren't required to take a government-backed test before they're allowed out onto the road with other road users. I realise that self-driving technology may not yet be up to the task of driving as well as a person (when concentrating), but the same reasoning applies to people, does it not? If someone who is not up to the task, for whatever reason, takes a driving test and fails it, then they're not allowed behind the wheel on a public road. There are no exceptions, because lives are at stake.

    If governments are as serious as they say about self-driving, the test could be adapted to certify legal use of autonomy in different scenarios, such as public clearways only (red circle and cross on a blue background in the UK; people are not supposed to wander onto them). While this doesn't guarantee that someone won't wander on, it does remove the need to test for some types of pedestrian traffic.

    If governments were really serious, they'd create a framework for software modelling and testing of vehicles for certification. Once you have the capability of an object in software, you can run as many scenarios against it as time permits; the child chasing a ball onto the road could easily be simulated. Tech like this exists today, but it isn't used, because the driving test is firmly fixed in the human-examiner, human-pupil format. Any software modelling used by a manufacturer is proprietary, and no one in authority is forcing the use of an open test framework. This means the public is forced to rely on a marketing department to gauge the capability of any claimed autonomy.
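
    The core of such a framework is small. A sketch of the certification loop, with every interface invented for illustration:

        def run_scenario(planner, scenario, dt=0.05, horizon_s=30.0):
            """Step a simulated world and fail the run on any collision.
            'planner' and 'scenario' are hypothetical interfaces."""
            world = scenario.reset()                    # e.g. a child chases a ball into the road
            t = 0.0
            while t < horizon_s and not world.done:
                action = planner(world.sensor_frame())  # proposed brake/steer command
                world.step(action, dt)
                if world.collision:
                    return False                        # scenario failed
                t += dt
            return True

    A regulator could then demand a published pass rate over a fixed, open library of scenarios before granting road access.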

    1. Oneman2Many

      Re: Where are the test results?

      Not all tests cover all conditions, and ultimately, unless the car is driving in fully autonomous mode (level 4 or above), it is expected that there is a human guiding it. How else is it supposed to learn?

      1. Kiwi
        Devil

        Re: Where are the test results?

        How else is it supposed to learn?

        1) Computer simulation, with simulated sensor inputs. These can be gathered by someone driving a car equipped with such sensors, to gain as much real data as possible. This approach will initially have gaps, until someone sits down with the accident investigation boards, talks through the causes of crashes (including rarer ones that would be harder to think of or train an AI to see), and comes up with a data framework to help teach the AI.

        2) A closed test track, with cardboard cutouts/crash-test dummies etc. designed to mimic real vehicles/people; as close as possible, because you don't want the computer to see a pedestrian as "not a cutout of a pedestrian, therefore ignored". As Stoneshop suggested, said track should include a number of Uber CxOs in an inebriated state: if they can trust it then perhaps it's safe; if they won't trust it then it's clearly still not good enough.

        3) Just feeding it data from humans and watching their responses (bearing in mind the screwup with the AI in the movie "Stealth", which, having seen a pilot deliberately disobey orders, figured it could do the same; as far as I recall most of the plot was "Things go boom!"). Giving the systems some R/W data will help a lot, although again there will be big gaps, especially with a decent driver who sees potential issues starting and acts to keep them from occurring anywhere other than in their imagination. There's a rough sketch of this below.
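
        As a sketch of 3), a log-replay harness (field names entirely hypothetical) that flags the moments where the software disagrees with what the human actually did:

            def replay(frames, planner, brake_tol=0.3):
                """Run recorded sensor frames through the planner and collect the
                timestamps where it brakes much harder or softer than the human did."""
                disagreements = []
                for frame in frames:
                    planned = planner(frame["sensors"])  # proposed control output
                    human = frame["human_action"]        # logged from the real drive
                    if abs(planned["brake"] - human["brake"]) > brake_tol:
                        disagreements.append(frame["timestamp"])
                return disagreements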

        Icon coz of the way Uber trains their stuff: their cars might as well be demon-possessed.

  42. Wupspups

    Uber needs to read Roadcraft.

    Perhaps Uber's developers need to book a few hours with a plod advanced-driving instructor doing a commented drive. They may just get an idea of what hazard detection really is. I got the chance a few years ago, and it was bloody amazing what the plod was picking up: not hazards just a few seconds away, more like 10 to 15 seconds, and more on the motorway.

    Hell, just send Uber a copy of Roadcraft; that would probably give them more clue than they have at the moment.

  43. Cyril

    Why not just put big metal grates on the front of the automated cars? That way pedestrians couldn't damage them. And it would also encourage pedestrians not to step in front of the car.

    A speed of 39 mph (about 63 kph) indicates that the vehicle was traveling in an area where one could reasonably expect pedestrians not to enter the road. If the vehicle had been exceeding the speed limit, there would have been a serious issue over it. Areas that are likely to have pedestrians crossing the road are limited to 35 mph.

    If the pedestrian hadn't had a bike, the AI would most likely have recognized her and avoided impact. If she had been riding the bike, the AI would have been able to classify her correctly and act accordingly. With a human operator she could still have been hit, as the driver could have been distracted and not realized the danger until it was too late; happens all the time.

    As it is, the AI has a gap in its understanding that can now be remedied, which is the intent of the program. It's a shame that someone was killed, but it was advertised that there would be automated cars running around the area, so she should have been aware of the danger. My mother taught me from a young age not to step in front of moving vehicles. Never assume they will stop. Because sometimes, they will not.

    It was a terrible accident, but the only one who could definitely have prevented it was the deceased. She risked her life by stepping into the road between two controlled intersections, an infraction of a law that is there to keep her safe. Yes, Uber bears some liability, but not criminal responsibility for the incident. They are responsible for the next one, though, because now they know better.

    1. Danny 2

      It's a shame that someone was killed, but it was advertised
      Oh, Cyril, that is cold. If I advertise that I will be releasing a chlorine gas cloud in my neighbourhood, then who is to blame?

      It's scary how many roadkill anecdotes I have; this one is meta. I was a passenger when my friend ran down an Alsatian that ran out in front of him. He jumped out of his car to check whether it was dead, and it sprang up and bit his hand before running away. Served him right; well, I laughed.

      I told that story at a dinner party and it prompted confessionals from everyone in the room. The first person admitted to running down a rabbit. The next person admitted to running down a cat. And then the animals got bigger and the stories more gory. The last guy in the room said nothing, but he started crying. His fiancee said, "Last year Paul ran down and killed an OAP. He wasn't prosecuted, because the man just walked out between parked cars in front of him, but..."

      I burst out laughing myself. Literally by myself: nobody in the house saw the funny side.

      I was a schoolchild when I saw a girl my age flying through the air after being hit by a car. About 15 feet high, tumbling through the air; she survived with only a broken leg. She was screaming, but what I remember more is the frozen look on the driver's face.

      I used to hate car drivers and would terrify them with awful stunts I won't list now.

  44. FlippingGerman

    Uber

    Uber’s arrogance in everything they do is simply horrendous.

    I like the business model - ride-hailing is the logical modernisation of taxis, but the company is just awful.

  45. mIRCat

    All hail our robot overlords.

    As long as it correctly tags John Connor as the leader of the resistance, we'll be fine.

  46. Oneman2Many

    For those of you wondering what is required to conduct a trial on UK public roads, read the government guidelines:

    https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/776511/code-of-practice-automated-vehicle-trialling.pdf

    AFAIK there have been no tests of ADS on UK roads so far.

  47. CJN1946

    Arrogance

    Only the arrogance of man would build a system to try and duplicate the human brain.

  48. DuncanLarge Silver badge

    In the UK

    Imagine they bring this system to the UK.

    Imagine the carnage!

    In the UK there is no such thing as "jaywalking". I had no clue what the American actors on TV were talking about when I was a kid; nobody called it "jaywalking". I just knew it as crossing the road.

    Everyone simply crosses the road, even when there are crossings right next to them.

  49. Real Horrorshow

    Not a robot enviroment

    So, the world isn't one giant Amazon warehouse, but a place containing human beings doing human stuff. Who would have thought it? Well, everyone except Uber really. The assumption almost makes sense in terms of Uber's quest to run a service business without the tiresome requirement to pay anybody. People being people however means you need other people to deal with them. I suspect truly autonomous vehicles are a lot further off than their proponents would have us believe.

  50. PieMan666

    The driving test of the future

    My son has suggested that the driving test of the future will be four hours of total boredom followed by five seconds of absolute terror. Your pass/fail will be decided by whether you avoid the accident in the self-driving car...

    The other is what my father said when I first learnt to drive: treat every other person on the road (and next to it) as an idiot and you won't go far wrong.

    1. Charles 9

      Re: The driving test of the future

      You also won't go very far, period, because the mere act of driving is an inherent risk no matter what you do. All you can do is live with the risk, understand there WILL be people unintentionally out to kill you, and drive around them anyway, living with your Sword of Damocles, if you want to make a living.
