Before we lose our minds over sentient AI, what about self-driving cars that can't detect kids crossing the road?

US House reps on Wednesday grilled a panel of experts on the various impacts artificial intelligence is likely to have on society, privacy, ethics, and so forth, and what can be done about it, if anything. The range of questions from the House Committee on Science, Space, and Technology revealed the breadth and depth of the …

  1. Mark 85

    I was waiting for one of the CongressCritters to ask if AI could be used in elections for maybe steering voters or selecting candidates. Now that's scary.

  2. Danny Boyd

    I don't quite understand how the problem of unrepresentative learning data sets (which leads to AI systems discriminating by race, gender, etc.) became a problem of "The workforces of Google, Facebook, Microsoft, Amazon, and Apple lack women, people of color, and people with disabilities, relatively speaking."

    Adding women and people of color to AI development won't resolve the problem of unrepresentative learning data sets, because I don't think AI developers are so stupid they train their AI systems exclusively on their own faces and voices.

    I have nothing against women and people of color, I just fail to see logical connection between two problems.

    1. Psmo
      Facepalm

      I have nothing against women and people of color, I just fail to see logical connection between two problems.

      Questions get asked early if there's a team member for whom the tech never works.

      Otherwise, there's no issue before training against the full dataset starts.

      Your model might already be too badly tuned and unrecoverable at that point, or simply far better at one subset than another.

      So rather than fix it, you release it as 'beta' or sell it to government.

      1. Robert Carnegie Silver badge

        They also need, what's the current polite term for little people?

        Mind you, I bet if you test human drivers then they also are more likely to miss (or not miss) a child on the road than a full sized adult.

        1. LucreLout
          Joke

          They also need, what's the current polite term for little people?

          Taxpayers.

          1. Aussie Doc
            Coat

            I also noted the reference to needing more funds for further research.

            I'm available and have my own coat ---->

    2. Anonymous Coward
      Anonymous Coward

      I don't quite understand how the problem of unrepresentative learning data sets (which leads to AI systems discriminating by race, gender, etc.) became a problem of "The workforces of Google, Facebook, Microsoft, Amazon, and Apple lack women, people of color, and people with disabilities, relatively speaking."

      AI and machine learning datasets should be using an even representation of gender and ethnicity. The political machine does not. A much paler version of pot/kettle/black.

      As for skin tone, the system should probably make a quick decision on that and use one of several different networks for the initial feature location/detection. I would think training some of the lower layers (usually convolution) to work with significantly different image intensities would produce weaker layers. Once the features have been correctly located, common layers could then be used for matching individuals.
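
      A rough sketch of that routing idea, assuming PyTorch, might look like the following (the class, branch count and sizes here are hypothetical illustrations, not anyone's real system):

        import torch
        import torch.nn as nn

        class RoutedRecogniser(nn.Module):
            def __init__(self, n_branches=3, embed_dim=64):
                super().__init__()
                # One convolutional front-end per intensity band (the "lower layers").
                self.branches = nn.ModuleList([
                    nn.Sequential(
                        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                    )
                    for _ in range(n_branches)
                ])
                # Common layers, shared across branches, used for matching individuals.
                self.head = nn.Linear(16 * 8 * 8, embed_dim)
                self.n_branches = n_branches

            def route(self, img):
                # The "quick decision": pick a branch from mean image intensity.
                mean = img.mean().item()  # assumes inputs scaled to [0, 1]
                return min(int(mean * self.n_branches), self.n_branches - 1)

            def forward(self, img):
                branch = self.branches[self.route(img)]
                return self.head(branch(img.unsqueeze(0)))

        model = RoutedRecogniser()
        emb = model(torch.rand(3, 32, 32))  # one 32x32 RGB image
        print(emb.shape)                    # torch.Size([1, 64])

      Whether specialised front-ends actually beat one network trained on a properly balanced dataset is an empirical question, of course.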

      1. BGatez
      2. Anonymous Coward
        Anonymous Coward

        I'd like to see an analysis of how much of this is bias, how much is training, and how much is having fewer photons to work with generating an image.

        1. This post has been deleted by its author

        2. This post has been deleted by its author

    3. J.G.Harston Silver badge

      We're not French here. Adjectives go before nouns.

  3. Pascal Monett Silver badge
    Facepalm

    “Shouldn’t we worry about the emergence of consciousness in AI?”

    I love how that question really demonstrates how the term AI has been degraded. If it were AI, it would be conscious.

    So they know that they are just slapping a trendy moniker on a statistical analysis machine, then they smoke a joint and start believing that statistics can become sentient.

    Well, they are politicians, after all.

    1. Alan Bourke

      Re: “Shouldn’t we worry about the emergence of consciousness in AI?”

      No more than we should be worried about stargates or time travel.

      1. I.Geller Bronze badge

        Re: “Shouldn’t we worry about the emergence of consciousness in AI?”

        The only existing AI, which I'm presenting here for your admiration, is the answer to the NIST TREC call, which wanted a system capable of finding exact answers to any question. This AI was created as a relational blockchain database for structured texts. But NIST TREC didn't say a word about consciousness, and thus this AI does not have it.

  4. Starace

    Looking for 'discrimination' where none exists

    This is obviously a matter of interpretation; some will see a technical problem, others will see it as a deliberate slight that needs a fix matching their own personal beliefs.

    Image or audio recognition is always going to have different levels of performance for different scenarios. Lack of contrast or low input levels are always going to be issues. If this biases performance then it needs to be resolved but that doesn't magically mean politics will fix it. And just because it seems performance correlates with your ideas of discrimination it doesn't mean it's a political act.

    As for biases emerging in datasets, socioeconomic or otherwise: maybe the data is biased because it was somehow skewed, or maybe there really is a trend in there that some don't want to acknowledge. Or maybe it really was deliberate; a sales system, for example, will discriminate against poor people because it's looking for an outcome tied to desire and ability to pay. Not liking an outcome doesn't necessarily mean it's the wrong one.

    Keep everything calm, identify problems, fix them. Don't treat things as an excuse to rail against the system and try to work your pet belief into the world, especially by legislation; it might give you a warm feeling but *it won't work*.

    1. Killfalcon Silver badge

      Re: Looking for 'discrimination' where none exists

      In this case, however, they have the distinct problem that the reality in America is biased - by decades of Jim Crow laws, reinforced by (often accidentally) discriminatory practices from finance and tax policy.

      To be clear from the start: I'm not saying everyone involved in this is racist - just that the outcome of the systems is, and that's a hard form of racism to combat because there's no moustache-twirling villain to point at, just people making rational choices with loaded dice - dice that were loaded a surprisingly long time ago in many cases!

      Go back 50 years or so (if you're currently in your 40s, roughly when your parents were kids, 3-4 generations from the US Civil War) and much of the US had laws essentially saying "Black people can't live in this area, and it's legal to not hire them if you don't wanna". The folk with money (generally speaking, not people whose grandparents were slaves) didn't wanna because, generally speaking, their grandparents owned slaves.

      So you put all the white people and the money *here* and the black people and the no money *there*, and then say "schools are funded locally", so white kids get better schools. Then you base loans and insurance on zip code (because people in poor areas are statistically more likely to default on loans, right?), and the trap is set. It's often subtle stuff that adds up - lower loan caps force people onto payday loans that cost much more, for instance, or a loan refused for a shop that could have provided employment for local kids pushes just one or two folks into crime to make rent, which further degrades the local economy. Hell, just having more people renting means more money leaving the locality, reinforcing issues caused by, say, locally funded schools, police and healthcare.

      America's intergenerational social mobility is often worse than the UK's (not that ours is amazing). The poor's kids tend to stay poor and the rich's kids tend to stay rich, and this is in large part because of real, systemic, biases in the systems of finance.

      If we train AI on the 'real data' without thinking through the problems carefully, we risk codifying them - and worse, exporting them to other nations.

      1. LucreLout

        Re: Looking for 'discrimination' where none exists

        The folk with money (generally speaking, not people whose grandparents were slaves) didn't wanna because, generally speaking, their grandparents owned slaves. So you put all the white people and the money *here* and the black people and the no money *there*, and then say "schools are funded locally", so white kids get better schools.

        This utter fallacy that all, most, or even more than a small percentage of white people owned slaves is so far beyond reality that it's skewed your entire world view.

        Firstly, it wasn't only the whites that owned black slaves. Secondly, most white families didn't own slaves. Thirdly, in most states slavery was always illegal.

        https://en.wikipedia.org/wiki/Slavery_in_the_United_States#Distribution_of_slaveholders

        Any chance you could rethink your views using facts rather than just using emotions and prejudice please?

  5. macjules
    Devil

    “Who benefits from AI? Who gets harmed? And who gets to decide?”

    Über, You, Über

  6. Anonymous Coward
    Anonymous Coward

    "The systems have been shown to exhibit gender discrimination when placing job ads, racial discrimination in predictive policing, and socioeconomic discrimination when selecting zip codes for commercial products and services."

    That would be because they're going off raw statistical data as opposed to idealism.

    1. Jeff 11

      It could be that, but it's more likely that AI, or rather machine learning, simply mirrors the biases in the entrenched discriminatory systems from which its training data was obtained.

      1. I.Geller Bronze badge

        Yes, machine learning simply mirrors the biases because the patterns have them.

    2. Toltec

      I was thinking that if you wanted to train an AI to find ways to make people better off rather than maximise profits then you could.

      Then I thought about it a bit more.

      The AI would be looking for optimal ways to persuade people to change their habits and lifestyles to make them healthier and more productive.

      A little later we have the Matrix.

    3. Anonymous Coward
      Anonymous Coward

      "socioeconomic discrimination when selecting zip codes for commercial products and services"

      What does this even mean?

      Does it mean that more 'upscale' products and services are marketed more heavily where they are more likely to be purchased?

      If so, I have a hard time considering that to be 'discrimination' in the bad sense, but rather discrimination between futile activity and productive activity.

      1. Claptrap314 Silver badge

        Look up "redlining". But mostly, this is political grandstanding--the only competency of Congress.

  7. phuzz Silver badge
    Terminator

    All to plan

    "So, what should the US government do about this"

    Well, they can test the manufacturers systems, and when they find ones that kill children and/or minorities, they can find a highly priced defence contracthome for them in the US military.

    After all, one company's embarrassing snafu involving an SUV, a school bus full of orphans, a grandmother in a wheelchair and a basket full of kittens, is another firm's missile guidance algorithm.

  8. scrubber
    Devil

    self-driving cars that can't detect kids crossing the road

    Can't? Or won't?

    1. Phil W

      Re: self-driving cars that can't detect kids crossing the road

      Or can, and are deliberately trying to run them over to reduce the human population.

      1. Diez66

        Re: self-driving cars that can't detect kids crossing the road

        Yup, I wondered that, maybe Skynet starts small.

    2. I.Geller Bronze badge

      Re: self-driving cars that can't detect kids crossing the road

      To make such a system is not difficult.

      1. I.Geller Bronze badge

        Re: self-driving cars that can't detect kids crossing the road

        Not to mention the fact that training on your data (using dictionary/encyclopedic definitions) is trillions of times less expensive than training on the same data with millions of random texts - training by dictionary avoids the very costly and lengthy study of the patterns' surrounding context-and-subtexts when they're found. That is, the context-and-subtexts are reflected in the choice of dictionary definitions used to annotate the patterns' words. In conclusion, the patterns are uniquely indexed and can be found instantly, which can be crucial, for example, to Tesla.

        OpenAI indexes by patterns only. Indeed, how can OpenAI use random texts for the indexing?
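
        For what it's worth, the indexing claim can at least be illustrated with a toy in Python (the senses and patterns below are invented for illustration; this is not the patented method itself):

          # Annotate each pattern word with a chosen dictionary sense so the
          # pattern gets a unique index key and can be looked up instantly,
          # without re-reading its surrounding context.
          index = set()

          def add_pattern(*words_with_senses):
              index.add(tuple(words_with_senses))

          add_pattern(("car", "noun"), ("turn", "verb"), ("right", "direction"))

          # Sense-annotated lookup distinguishes readings that bare words can't.
          print((("car", "noun"), ("turn", "verb"), ("right", "direction")) in index)    # True
          print((("car", "noun"), ("turn", "verb"), ("right", "entitlement")) in index)  # False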

      2. Nolveys

        Re: self-driving cars that can't detect kids crossing the road

        "Thump-thump!" Child detected.

    3. I.Geller Bronze badge
      Mushroom

      Re: self-driving cars that can't detect kids crossing the road

      AI finds answers; that's what it does. If the car asks a question, its AI responds and initiates certain actions. For example, the sensors identify an object on the road which, after a lookup against the AI database, is defined (in patterns) as a kid.

      After that, the information (patterns) obtained is superimposed on the parameters of the vehicle, such as speed and direction. And then, after the car's question "What to do?", the AI's answer comes - "Brake sharply" - and is transmitted to the brake pads.

      AI? Why is it needed?

      To save on programmers. AI structures texts into patterns, which are the direct analog of programming-language commands. That is, instead of hiring a trillion lazy and stupid programmers, it is enough to structure texts describing all possible situations on the road.
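
      Taken at face value, the question-and-answer loop described above amounts to a lookup table. A toy Python sketch (every name here is a hypothetical illustration, not any vendor's actual API):

        # Patterns mined from "structured texts", mapped to driving commands.
        PATTERN_ANSWERS = {
            ("kid", "on road"): "brake sharply",
            ("plastic bag", "on road"): "slow down",
        }

        def classify(sensor_object):
            # Stand-in for the database lookup that labels the detected object.
            return (sensor_object, "on road")

        def decide(sensor_object, speed_mph):
            # Superimpose the answer on vehicle parameters before actuating.
            answer = PATTERN_ANSWERS.get(classify(sensor_object), "continue")
            if answer == "brake sharply" and speed_mph > 0:
                return "apply brakes"
            return answer

        print(decide("kid", speed_mph=30))  # -> apply brakes

      The hard part, of course, is the classify() step - which is exactly where the detection failures in the headline live.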

    4. I.Geller Bronze badge
      Mushroom

      Re: self-driving cars that can't detect kids crossing the road

      Now imagine a situation where there is a sentence from which Tesla mines an instruction for its car:

      --Speed up the car and turn right.

      But what if, by AI-structuring, the car's AI decides that the word "right" is both an adjective and a noun? Then, instead of two correct patterns:

      - car speed up

      - car turn right,

      four appear:

      - car speed up

      - car turn right

      - right speed up

      - right turn right.

      How could the car react to this linguistic garbage? Thus my checking by dictionary definitions is a must.

      PS OpenAI cannot allocate either two or four patterns because it does not use my patented AI-parsing. Instead, OpenAI uses the only alternative, n-gram parsing, which isn't good for Tesla at all, because it doesn't see the above two correct patterns.
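
      The two-versus-four pattern explosion is easy to reproduce as a cross product; a small Python illustration (a toy, not the patented parser):

        from itertools import product

        def patterns(subjects, verbs):
            return [f"{s} {v}" for s, v in product(subjects, verbs)]

        # Ambiguous reading: "right" also admitted as a noun subject.
        print(patterns(["car", "right"], ["speed up", "turn right"]))
        # ['car speed up', 'car turn right', 'right speed up', 'right turn right']

        # Disambiguated reading: "right" resolved to a direction only.
        print(patterns(["car"], ["speed up", "turn right"]))
        # ['car speed up', 'car turn right']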

      1. I.Geller Bronze badge

        Microsoft has again confirmed the relevance of my AI technology.

        "WNLI is critical to reach human performance on the overall average score on GLUE. The Microsoft team approached WNLI by a new method based on a novel deep learning model that frames the pronoun-resolution problem as computing the semantic similarity between the pronoun and its antecedent candidates."

        Remember a few weeks ago I explained to you all how Alice happily trains? Using pronouns? And constructing synonymous clusters? Well... Microsoft finally surpassed the estimate for human performance on the overall average score on GLUE (87.6 vs. 87.1) on June 6, 2019! Congrats!

        1. I.Geller Bronze badge

          Microsoft deletes lexical noise.

          The sentence: “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.” If the word “feared” is selected, then “they” refers to the city council. If “advocated” is selected, then “they” presumably refers to the demonstrators.

          Either

          - city councilmen feared

          or

          - demonstrators feared

          is lexical noise.

          Either

          - city councilmen advocated

          or

          - demonstrators advocated

          is lexical noise. Such lexical noise is typically superfluous predicative definitions that do not explain the central themes contained within the digital textual information; accordingly, removal of such noise often results in an improvement in the quality of the structured data.

          "The Microsoft team approached WNLI by a new method based on a novel deep learning model that frames the pronoun-resolution problem as computing the semantic similarity between the pronoun and its antecedent candidates."

          Microsoft deletes lexical noise.
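
          The similarity framing is easy to sketch. A minimal Python illustration, assuming word vectors from some trained model (the tiny hand-made vectors below are stand-ins, and the verb stands in for the pronoun's context; this is not Microsoft's actual model):

            import math

            # Hypothetical embeddings; in practice these come from a trained model.
            VEC = {
                "councilmen":    [0.9, 0.1, 0.3],
                "demonstrators": [0.2, 0.8, 0.5],
                "feared":        [0.8, 0.2, 0.4],
                "advocated":     [0.1, 0.9, 0.6],
            }

            def norm(v):
                return math.sqrt(sum(x * x for x in v))

            def cosine(a, b):
                return sum(x * y for x, y in zip(a, b)) / (norm(a) * norm(b))

            def resolve(verb, candidates=("councilmen", "demonstrators")):
                # "they" resolves to the candidate most similar to the verb context.
                return max(candidates, key=lambda c: cosine(VEC[c], VEC[verb]))

            print(resolve("feared"))     # -> councilmen
            print(resolve("advocated"))  # -> demonstrators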

          1. I.Geller Bronze badge

            Re: Microsoft deletes lexical noise.

            Do you all know WHAT that means? Do you all know that I have changed, for the second time, all of our Civilization, all of Humanity?

            Microsoft deletes lexical noise!

  9. J.G.Harston Silver badge

    All of this reinforces my opinion that autonomous vehicles have no place on the highway and should only be used on Special Roads, where traffic is restricted, regulated, controlled, streamed, with unexpected happenings minimised*. The problem is that most of the development research on auto-vecs is in the US, where they don't have the legal concept of Special Roads, so they are being bogged down with trying to cope with being on the public highway.

    *I have a strong memory of some early scifi having a throw-away sentence along the lines of: he pulled out onto the motorway, put the car in automatic, dropped the seat back and began to snooze. This is *exactly* the environment auto-vecs are suitable for.

    1. Daniel 18

      "All of this re-inforces my opinion that autonomous vehicles have no place on the highway and should only be used on Special Roads, where traffic is restricted, regulated, controlled, streamed, with unexpected happenings minimised*. The problem is that most of the development research on auto-vecs is in the US where they don't have the legal concept of Special Roads, so they are being bogged down with trying to cope with being on the public highway."

      This looks more like confirmation bias than logic to me.

      Expecting autonomous vehicles to be perfect before letting them out on the road is like expecting doctors to be perfect before allowing medical practices.

      You let the autonomous vehicles out when they are as good as a good human driver, across the usual mix of circumstances. Given that all drivers are not good drivers, this is a net safety win.

      And yes, children are smaller and therefore harder to see. That's reality biting back.

      Oh... and I am pretty sure that most countries don't have a lot of 'Special Roads', whatever those are. I've never heard of them. And no, I am not American.

      1. This post has been deleted by its author

      2. SundogUK Silver badge

        As far as I am aware no one has ever demonstrated an autonomous vehicle that was even close to being as safe as the average human driver. I doubt they ever will in an urban environment.

  10. Claptrap314 Silver badge
    Thumb Down

    Nothing is ever so bad

    That government cannot be expected to make it worse. Apparently, it's EBJ's time to grandstand.

    Sigh.

    From the reporting (no WAY am I going to watch the video), the security of autonomous systems was not even broached. Which should be a non-starter for autonomous vehicles, since the car manufacturers pioneered IoT security.

  11. Kev99 Silver badge

    I've seen beaucoup articles about how advanced and wonderful autonomous drone cars are. How they're the answer to traffic, pollution, fuel economy and diaper rash. One thing I haven't seen is a real-world test someplace other than Las Vegas or Los Angeles. You know, places where it rains so hard you can barely see the road, snowless days are rarer than an honest politician, and freezing road spray is the norm from November to April. I have a car that has adaptive cruise control and anti-collision sensors, and if any of the above conditions occur, the systems crap out. I can't believe Tesla, Cadillac and others have found a way to overcome the weather.

  12. Aussie Doc
    Headmaster

    I was surprised not to see or hear reports of a US rep asking the important question: "Yeah, I'm not sure I understand that, but how do I program my VCR to tape my favourite show?"

    Somebody could teach them ----->

  13. John Brown (no body) Silver badge

    The irony...

    “privileged ignorance” because they’re more likely to be white men:

    So, no one in other parts of the world, such as China or India, is working on facial or voice recognition? Are they having similar but different stereotyping issues?

    China, at least, appears to be rolling out CCTV-based facial recognition and fining people for various offences via automated systems.

  14. gfrevivus

    From memory, I think there was a survey of drivers done way back in the 70s or 80s which found that drivers, when driving, consistently overestimated their distance from children. Apparently this was because, in estimating distances, we use apparent size to work out how close something is. I also believe this was one reason, among others, that the Highway Code stressed more caution and reduced speed where children might be crossing unsupervised. As always, comment open to correction or better citation.
