Google calls in Women in Technology Hall of Famer to lead new Responsible AI group amid internal strife

Google has reshuffled the management team overseeing its Ethical AI group months after it ousted one of its star researchers, a controversial move that sparked public anger and internal revolt. Marian Croak, 66, an engineer who began her career at Bell Labs in 1982 and is currently VP of engineering at the Chocolate Factory, …

  1. Mark192

    Is "ethical" AI impossible?

    - Design a system that uses correlation, rather than causation, to predict results.

    - Feed in data from an unequal society.

    - Have it processed in a black box.

    - Get unethical results fed out.

    Is this a reasonable summary of how it works? (genuine question)

    1. Michael Wojcik Silver badge

      Re: Is "ethical" AI impossible?

      "AI" is not a meaningful term if we're talking about technical details. It's been applied very generally and in contradictory ways. Forget it.

      If we want to discuss Machine Learning specifically, that's a very large field. Considerable work has been done in explicable models (which are not used as black boxes) and interpretable models (which don't even begin as black boxes).

      There is also, of course, a tremendous amount of research into correcting bias in training, and into other techniques for identifying and compensating for undesirable traits that get incorporated into black-box models. How much fruit that has borne, and what it will achieve in the future, are open questions.

      ML is not limited to naive Deep Learning architectures that are just stacks of convolutional and fully-connected NN layers, trained via unsupervised learning on a huge wodge of dirty data. Even with just a cursory attempt to follow some interesting advancements in the field -- which is all I make -- you'll see that generalizations about it are rarely accurate.
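
      To make the explicable/interpretable distinction above concrete, here's a minimal sketch (assuming scikit-learn; the dataset and models are purely illustrative, not anything Google uses): a decision tree whose learned rules can be read off and audited directly, next to a black-box MLP that can only be probed from the outside.

      ```python
      # Illustrative sketch only (assumes scikit-learn); contrasts a model whose
      # decision logic can be read off directly with a black-box neural network.
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.tree import DecisionTreeClassifier, export_text

      data = load_breast_cancer()
      X_train, X_test, y_train, y_test = train_test_split(
          data.data, data.target, random_state=0)

      # Interpretable: the fitted tree is a handful of human-readable rules,
      # so you can see exactly which features drive each decision.
      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
      print(export_text(tree, feature_names=list(data.feature_names)))

      # Black box: the MLP's stacked weights offer no comparable explanation;
      # any audit for undesirable behaviour has to probe it from the outside.
      mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                          random_state=0).fit(X_train, y_train)

      print("tree accuracy:", round(tree.score(X_test, y_test), 3))
      print("MLP accuracy: ", round(mlp.score(X_test, y_test), 3))
      ```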

    2. CrackedNoggin Bronze badge

      Re: Is "ethical" AI impossible?

      I don't think it is broad enough, no. For example, with facial recognition, it is not only accuracy but also how the results are being used. Cf. "Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match" [NYT]. Consider also the ability of facial recognition to be used as another means of spying on people - the big one-way mirror in the sky.

  2. RM Myers
    FAIL

    Correlation versus causation

    The belief that correlation implies causation is the basis of much of the argumentation on social media and across the internet, provided it fits the arguer's bias. And guess where much of the data to feed these models comes from? So yes, the AI is going to give you biased results, as you stated.

    Okay, I'm going to show my age, but "garbage in, garbage out".
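
    A minimal sketch of that correlation-versus-causation point (assuming numpy; the "confounder" is purely illustrative): two variables driven by a shared hidden factor correlate strongly with no causal link between them, and a model trained on such data will happily learn the spurious pattern.

    ```python
    # Illustrative sketch only (assumes numpy): correlation without causation.
    # A hidden confounder drives both x and y, so they correlate strongly even
    # though neither causes the other -- garbage in, garbage out.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    confounder = rng.normal(size=n)                 # some shared societal factor
    x = confounder + rng.normal(scale=0.5, size=n)  # "cause" we think we measure
    y = confounder + rng.normal(scale=0.5, size=n)  # "effect" we think we predict

    print("raw correlation(x, y):", round(np.corrcoef(x, y)[0, 1], 2))       # ~0.8

    # Removing the confounder's contribution makes the apparent link vanish.
    print("with confounder removed:",
          round(np.corrcoef(x - confounder, y - confounder)[0, 1], 2))       # ~0.0
    ```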

  3. NoneSuch Silver badge
    Devil

    That whole "Don't be evil..." pledge is really inconvenient for how Google wants to operate today.

    Good luck to them finding the perfect volcano lair for Alphabet HQ.

  4. JWLong

    Ethical AI

    There's a joke there somewhere!

    1. needmorehare
      Pint

      Ethical Software

      is the same issue with the same problems.... The solution to the problems associated with AI/ML is freedom, not ethics. Freedom means having access to use, study and modify the algorithm and the associated training data used to produce the production model.

      The beer is a toast to the ongoing prosperity of free software and to a future of free-as-in-freedom AI/ML.

      1. yetanotheraoc Silver badge

        Re: Ethical Software

        The problem with freedom is there's no money in it.

  5. Mike 16

    Ethics and AI

    I guess I am just too darn old to consider the study of Ethics and AI to be "new".

    OTOH, when I took a course with (roughly) that title in the late 1960's, it was not so much about what ethics we should instill in AI (and whether we could), but what "human rights" should apply to AI.

    Note also the time, when not all biological humans were consistently deemed worthy of these "human rights". 50+ years later, it's not clear yet when (if?) they ever will be.

    The robots may have to stand in line.

    1. yetanotheraoc Silver badge

      Off topic

      "The robots may have to stand in line."

      That's a perfect use for robots. Go shopping, or to the pub, or whatever. When it gets to the front of the line it calls your mobile and you do a quick swap. What could go wrong?

  6. yetanotheraoc Silver badge

    References?

    "Tensions were rising between Gebru and management after she was told to retract her name from a paper she had co-authored on the dangers and risks of producing and running massive language-processing models – such as the ones Google uses to provide translation services and the like. In a memo to staff, Dean claimed the paper was not up to scratch, and that it didn’t include enough references to previous research."

    She should have just referenced herself! She is a top expert, after all. It sounds like her mistake was having an opinion, which is not why Google is hiring these women. They are supposed to be providing warm fuzzy feelings about Google and AI, *not* writing hard-nosed papers about risks.

    1. CrackedNoggin Bronze badge

      Re: References?

      "Google fired its Ethical AI team co-leads over a paper that was critical about massive language models."

      I partially read the paper and it was not about facial recognition - it was about the ethics of "massive language models". The numerous arguments were not explicitly prioritized, but the lengthiest seemed to be that running a language learning model used too much energy: as much as a cross-country passenger plane trip. OK - but does such a run happen every day? Every week? That wasn't discussed, but I would guess not. Are the results reusable for further learning? Not discussed. Are there advantages to scale, and new insights that can only be attained by R&D at scale? Not discussed.

      One section was entitled "Down the Garden Path" - and seemed to claim that the research was pretty much hopeless and not well directed. There is truth to that only because research is fundamentally full of dead ends and restarts - but that is the nature of all research isn't it? It rarely goes in a straight line. Of course we all agree that current language AI doesn't begin to approach "human understanding of language" - there is no shortage of well written articles pointing this out. However, I do not believe that makes current research "unethical".

      Compared to facial recognition, the current study of language models just doesn't have the same immediate potential for misuse. (Ironically, when AI gets past the garden-path stage of language modelling and starts thinking for itself, that could change dramatically!) It seemed to me that the paper struggled with trying to make the case that the research is unethical, and ended up throwing a lot of mud in the hope that something would stick.

      Notably most of the coverage I have seen in the mainstream press doesn't link to the paper, and the technical discussion focuses on facial recognition instead, with the political discussion focusing on the firings. Political people have their talking points and they are not going to change those to adapt to some pesky research ("massive language modelling" and Timnit's actual paper covering it) which is of secondary importance to the issue at hand (political talking points).

      I suppose ethics research, just like dry R&D, can take some wrong turns and end up in a dead end.
