"Google fired its Ethical AI team co-leads over a paper that was critical about massive language models."
I partially read the paper, and it was not about facial recognition - it was about the ethics of "massive language models". The numerous arguments were not explicitly prioritized, but the lengthiest seemed to be that training a large language model uses too much energy: as much as a cross-country passenger plane trip. OK - but does such a training run happen every day? Every week? That wasn't discussed, but I would guess not. Are the results reusable for further learning? Not discussed. Are there advantages to scale, and new insights that can only be attained by R&D at scale? Not discussed.
One section was titled "Down the Garden Path" and seemed to claim that the research was largely hopeless and poorly directed. There is truth to that only because research is fundamentally full of dead ends and restarts - but isn't that the nature of all research? It rarely proceeds in a straight line. Of course we all agree that current language AI doesn't begin to approach "human understanding of language" - there is no shortage of well-written articles pointing this out. However, I do not believe that makes the current research "unethical".
Compared to facial recognition, the current study of language models just doesn't have the same immediate potential for misuse. (Ironically, when AI gets past the garden-path stage of language modelling and starts thinking for itself, that could change dramatically!) It seemed to me that the paper struggled to make the case that this work is unethical, and ended up throwing a lot of mud in the hope that something would stick.
Notably, most of the coverage I have seen in the mainstream press doesn't link to the paper: the technical discussion focuses on facial recognition instead, and the political discussion focuses on the firings. Political people have their talking points, and they are not going to change them to accommodate some pesky research ("massive language modelling", the actual subject of Timnit's paper) that is of secondary importance to the issue at hand (the talking points).
I suppose ethics research, just like dry R&D, can take some wrong turns and end up in a dead end.