Big Tech bankrolling AI ethics research and events seems very familiar. Ah, yes, Big Tobacco all over again

Big tech's approach to avoiding AI regulation looks a lot like Big Tobacco's campaign to shape smoking rules, according to academics who say machine-learning ethics standards need to be developed outside of the influence of corporate sponsors. In a paper included in the Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics …

  1. Disgusted Of Tunbridge Wells

    Tobacco is enjoyable. Social media is mainly unpleasant.

    1. Eclectic Man Silver badge
      Pint

      Ummm

      I'm asthmatic and so dislike tobacco smoking (although ironically most of the people I fancy used to smoke). The Register comments sections count as social media and I quite enjoy them. (I confess to not being on Facebook, Twitter or TikTok).

      Let's have a drink and chat about it.

      1. Disgusted Of Tunbridge Wells
        Pint

        Re: Ummm

        TikTok is brilliant by the way. Just scroll past the rubbish and it starts showing you videos that you like (e.g. my 'for you' page is full of history stuff, etymologies and scantily clad women).

        Probably because it's not really social in the "My enemies must die" sense that Twitter is.

    2. Dan 55 Silver badge

      Both are enjoyable for yourself and harmful for other people.

    3. deadlockvictim Silver badge

      Both are highly addictive and anti-social?

  2. Eclectic Man Silver badge
    Unhappy

    Racial bias

    Many AI systems exhibit unintentional racial bias. Facial recognition systems that have been trained on white faces are very poor at recognising non-white people and have led to miscarriages of justice in the USA (as documented on El Reg and elsewhere). AI systems estimating re-offending likelihood have been shown to be biased because they take a person's home location and associate their chance of re-offending with the prevalence of crime in that area, without taking into account whether police target that area and so are likely to record more crime there.

    In states where law enforcement has a history of being more punitive towards black people, AI often serves to reinforce that discrimination. We are currently seeing the result of faulty programming and institutional authoritarianism in the ongoing Post Office Horizon scandal, where 'the computer says so' was taken as proof of criminal activity. Hopefully the AI industry will learn from that disaster but, if history is anything to go by, it won't.

    1. You aint sin me, roit Silver badge

      Re: Racial bias

      To be fair that's not AI.

      Stop and search more black youths and you will find more black youths carrying knives...

      So stop and search more black youths! And find more knives...

      The data suggests that black youths carry knives. Of course the preponderance of white youths carrying knives is unknown because they don't get stopped in the first place.

      Nothing to do with AI, just simple racism.

      1. Yet Another Anonymous coward Silver badge

        Re: Racial bias

        Same in other fields

        Analysis of our previous $PRESTIGE_COLLEGE students shows that the best students were white, male, privately schooled and children of alumni; therefore, mathematically, we pick these students.

        The admissions office never sees the gender/ethnicity etc. of the applicants and so can't be discriminating.
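
        This proxy effect can be sketched in a few lines of Python. The data and field names are entirely made up for illustration: the selector never consults ethnicity, yet selecting on a correlated proxy ("alumni parent") reproduces the historical skew.

        ```python
        # Hypothetical applicant pool: ethnicity is recorded here only so we
        # can measure the outcome; the selection rule never looks at it.
        applicants = [
            {"alumni_parent": True,  "ethnicity": "white"},
            {"alumni_parent": True,  "ethnicity": "white"},
            {"alumni_parent": True,  "ethnicity": "black"},
            {"alumni_parent": False, "ethnicity": "black"},
            {"alumni_parent": False, "ethnicity": "white"},
            {"alumni_parent": False, "ethnicity": "black"},
        ]

        # "Blind" rule: admit children of alumni. Ethnicity is never consulted...
        admitted = [a for a in applicants if a["alumni_parent"]]

        # ...yet the admitted cohort skews away from the applicant pool,
        # because the proxy correlates with the historical intake.
        share = sum(a["ethnicity"] == "white" for a in admitted) / len(admitted)
        print(f"white share of admitted: {share:.0%}")  # 67%, vs 50% of applicants
        ```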

      2. katrinab Silver badge

        Re: Racial bias

        I think you need to stop thinking about "AI" as some sort of intelligence, and start thinking of it as a type of compiler that compiles training data into a computer program.

        So your training data is your computer code, and if your code is racist, your computer program will be racist.

        Also, I feel that in many cases, trying to find correlations between input data used is no more scientific than examining a goat's entrails or examining the position of the stars.
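
        The "compiler" metaphor can be sketched in a few lines of Python (the data, the postcode labels and the trivial model are entirely made up): the training data is the source code, and the fitted model reproduces whatever bias the data encodes.

        ```python
        # A toy "compiler" view of machine learning: training data in,
        # predictor out. If the data encodes a bias, the "compiled"
        # program reproduces it faithfully.
        from collections import Counter

        def compile_model(training_data):
            """Fit a trivial majority-class predictor per postcode.

            training_data: list of (postcode, outcome) pairs (hypothetical).
            """
            by_postcode = {}
            for postcode, outcome in training_data:
                by_postcode.setdefault(postcode, Counter())[outcome] += 1
            # The "program": for each postcode, predict its most common outcome.
            return {pc: counts.most_common(1)[0][0]
                    for pc, counts in by_postcode.items()}

        # Biased "source code": area A was policed more heavily, so more
        # offences were *recorded* there, regardless of true offending rates.
        biased_data = ([("A", "reoffend")] * 8 + [("A", "ok")] * 2 +
                       [("B", "reoffend")] * 2 + [("B", "ok")] * 8)

        model = compile_model(biased_data)
        print(model)  # {'A': 'reoffend', 'B': 'ok'} - the bias, faithfully compiled
        ```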

      3. Nigel Sedgwick

        Count Versus Proportion

        "Stop and search more black youths and you will find more black youths carrying knives..."

        However, you will almost invariably find a smaller *proportion* of "black youths carrying knives". In fact, if a greater proportion were found, that would indicate (likely purposeful) initial targeting of a subset of black youths who were less prone to knife carrying than the later, wider subset.
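
        The arithmetic can be made concrete with entirely hypothetical numbers: if the initial stops target the subset most likely to carry, then widening the search raises the *count* of knives found while lowering the *proportion*.

        ```python
        # (group_size, carry_rate) for successively wider, less-targeted
        # subsets of stops. All numbers are invented for illustration.
        subsets = [(100, 0.10),   # narrow, heavily targeted subset
                   (900, 0.02)]   # the wider, less-targeted remainder

        stopped = knives = 0
        for size, rate in subsets:
            stopped += size
            knives += int(size * rate)
            print(f"stopped {stopped:4d}: {knives} knives, "
                  f"{knives / stopped:.1%} hit rate")
        # stopped  100: 10 knives, 10.0% hit rate
        # stopped 1000: 28 knives,  2.8% hit rate  <- count up, proportion down
        ```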

        Keep safe and best regards

      4. Anonymous Coward
        Anonymous Coward

        Re: Racial bias

        If you're referring to the UK, you don't know how stop searches work.

        In order to search someone, you must have reasonable suspicion. This could be in the form of live intelligence ('He has a knife'), the suspect's actions ('they moved their hand to their waistband and looked nervous'), etc. You can't just stop and search whoever you please (unless very specific, short-term and high-level laws are enacted).

        Also, guess what - if your data shows black youths are knife carrying, it's not racism, it's intelligence led policing if you make sure you engage with them more.

        Stop waving about the word 'racism' when you clearly don't know how to use it properly in this context.

    2. Anonymous Coward
      Anonymous Coward

      Re: Racial bias

      More facial recognition systems have been trained on Chinese faces than any other ethnicity.

      1. Eclectic Man Silver badge

        Re: Racial bias

        Indeed they have, but the ones used in the USA are predominantly trained on white faces (mostly male).

    3. Nigel Sedgwick

      Re: Racial Bias and Intent

      "Many AI systems exhibit unintentional racial bias."

      To the very best of my knowledge, no current non-biological system (labelled as AI or not) actually has the capacity for "intent" independent of its programmers and/or others. Accordingly, such non-biological systems cannot have racial bias, whether intentional or unintentional.

      Keep safe and best regards

      1. Eclectic Man Silver badge

        Re: Racial Bias and Intent

        Maybe I should have posted "Many AI systems exhibit the unintended and unrealised racial bias of their programmers", or possibly "Many AI systems exhibit the unintended racial bias of the data sets with which they are trained."

        For example, a post above states that the police in the UK need reasonable cause to stop and search someone. Which is true; however, black people driving 'nice' cars (Mercedes, Jaguars, BMWs, etc.) are more likely to be stopped than white people driving the same vehicles (see several recent examples in the UK press). (There is no crime of 'Driving while Black' on the UK statute book.)

        Similarly, if criminality is evenly spread across the population irrespective of skin colour, but one section of society determined by skin colour is more likely to be stopped and prosecuted, more crimes will be recorded against that section of society and so they will get an unfair reputation for criminality, and so be more likely to be targeted than otherwise, leading to more detection of crime in a vicious circle.
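
        A minimal simulation (all numbers invented) shows how self-sustaining this is: both groups offend at the same true rate, but if each year's stops are allocated in proportion to last year's *recorded* crime, an initial 60/40 targeting bias confirms itself indefinitely.

        ```python
        # Two groups with identical true offending rates; recorded crime
        # is just (stops made) x (true rate), so the allocation of stops
        # determines what the statistics "show".
        true_rate = 0.05                            # identical for both groups
        stops = {"group_A": 600, "group_B": 400}    # initial biased allocation

        for year in range(3):
            recorded = {g: n * true_rate for g, n in stops.items()}
            total = sum(recorded.values())
            # Next year's 1000 stops allocated in proportion to recorded crime
            stops = {g: round(1000 * recorded[g] / total) for g in recorded}
            print(year, stops)  # stays 600/400: the bias never corrects itself
        ```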

        I'm sure that the poster above (sorry, didn't write down the tag name) does not consider him/herself to be racist, but the attitude presented, that finding more people of a specific skin colour carrying knives means searching more of 'them' will find more knives and therefore detect more crime, lends itself to unintentional racial discrimination.

        I hope this explains my original comment concerning racial bias.

        (It's OK, I know I'm at risk of major downvoting.)

    4. This post has been deleted by its author

  3. don't you hate it when you lose your account Silver badge

    Oil, tobacco, sugar, brexit, AI

    Lobbyists should be electronically tagged and all their communications monitored. After all, if they have nothing to hide...

    1. don't you hate it when you lose your account Silver badge

      Re: Oil, tobacco, sugar, brexit, AI

      The lobbyists seem to have their trolls voting. Pity they didn't post a counter argument. That would have been fun.

  4. Warm Braw Silver badge

    Who knows whether algorithms really harm society?

    We can get some clue from looking at who benefits.

    And can we stop using the word "algorithm" to describe an indeterminate process?

  5. amanfromMars 1 Silver badge

    When Incest Rules IT can quickly Manifest Madness and Deformity by All Accounts

    Big Tech bankrolling AI ethics research and events and seeming very familiar to Big Tobacco all over again is ye olde favourite great parlour game that leads those following lost souls, rearranging deckchairs for the Titanic, Saints playing Sinners and Poachers turning Gamekeeper whenever bankrolling is directed to the unworthy ..... the silver tongued devil charlatan and the poisonous snake oiler.

    'Tis nothing new to be overly concerned about once one know what is to be expected and routinely dismissed and discounted as a viable future sustainable resource/font of common collective wisdom.

    Weirdly and spookily enough, presently there is an almost mirror carbon copy of the dilemma in a similar drama playing itself out currently in Westminster and at No 10 Downing Street.

  6. Howard Sway

    58 per cent of AI ethics faculties have received funding from Big Tech

    Big Tech can always be sure that when it comes to recruiting experts in ethics, there will be plenty of willing candidates who have no ethics whatsoever.

  7. deive

    Then there is the bigger picture: why is history repeating itself? Why have we not learned this lesson?

    1. Pascal Monett Silver badge
      Trollface

      It's the same thing as herd immunity: it requires thought and effort from 80% of the population.

      Right now, 60% of the population is just begging for football season to start.

    2. Psmo Silver badge
      Meh

      Because history repeats itself. Or do I repeat myself?

    3. jonathan keith

      Because, to make a sweeping generalisation, human beings are lazy idiots.

      1. Eclectic Man Silver badge

        Not all, some are industrious idiots.

        (In his opus "On War", von Clausewitz classified soldiers into four separate categories:

        Intelligent and lazy: these people should go into 'intelligence' or central high command, as they will work surprisingly hard and diligently to ensure that they do not have to get up at 2:30 in the morning to deal with unforeseen emergencies.

        Intelligent and industrious: These people are field commanders, they react quickly and decisively to events and make usually sensible decisions based on what they know at the time, they are good tacticians.

        Stupid and lazy: They are the foot soldiers, they will basically stay where you put them and do what they are told (if trained appropriately). They are too lazy to wander far, and too stupid to get into too much trouble on their own.

        Stupid and industrious: These people are a nightmare and will cause no end of trouble. They won't stay where they are put, they will get into scrapes and tinker with things if not given something to do. They should be got rid of at the earliest available opportunity.)

        1. Androgynous Cupboard Silver badge

          Nice. I’d observe that “intelligent and lazy” covers most engineers in that case. Who among us has not spent a week automating a task that takes 20 minutes a month?

          1. Dante Alighieri
            Boffin

            Obligatory

            https://xkcd.com/1319/

          2. Eclectic Man Silver badge
            Joke

            Ah, but I was learning a new skill, and the task that took 20 minutes each month was soooo boring that spending 10 working days perfecting the automation was entirely worthwhile.

            Now, what was it you actually wanted me to do in the last fortnight?

  8. JWLong

    We're Missing Two Things

    1). Ethics

    2). Truth in advertising.

    Haven't seen either one of them around in 30-40 years.

    And, that's NOT all folks.

    Now, back to my LOONEY TUNES.

    1. Chris G Silver badge

      Re: We're Missing Two Things

      That actually boils down to one missing thing; if all advertisers had ethics they would only tell the truth.

      I wonder if many in Big tech who bandy the word ethics around, know who Immanuel Kant was, or for that matter what deontology is?

      1. HildyJ Silver badge
        Devil

        Re: We're Missing Two Things

        He was a real pissant who was very rarely stable.

      2. JWLong

        Re: We're Missing Two Things

        @Chris G

        Point taken.

      3. Eclectic Man Silver badge
        Mushroom

        Re: We're Missing Two Things

        I got 220 pages (out of over 600) into Kant's "Critique of Pure Reason" before giving up. I told myself I was interested in whether his belief that Euclidean geometry was the only possible geometry affected the validity of his deductions. I was wrong, my interest was not that strong.

        What I actually learnt from CoPR was that Kant found a flaw in Descartes' "I think, therefore I am". In order for Descartes to deduce "I am" he needs to have the concept of existence first, and therefore to have experienced something other than himself thinking.

        I strongly believe that philosophers (apart from Nietzsche*) should be read in translation, as someone else has done the hard work of actually finding out what s/he meant, otherwise the translation would never be published. Bernard Williams (author of "Ethics and the Limits of Philosophy") needed a much better editor, IMHO.

        *In my experience Nietzsche is incomprehensible or annoying in any language.

    2. hoola Silver badge

      Re: We're Missing Two Things

      Ethics, isn't that a county in the SE of England?

  9. a_yank_lurker Silver badge

    Ethics and Corruption

    Funding research tends to corrupt it, as there is a bias to find what the funder wants or expects. The bias may not be overt, but it is there. Even if the researcher takes pains to lessen the bias, it is there subconsciously.

    A related issue is the ethics of the funder, and in the case of AI I have my doubts about the ethics of all the institutional funders, whether corporate or government. They want results that make them look good. So the conundrum is how to fund AI research so that it will be done more ethically.

    1. JohnSheeran

      Re: Ethics and Corruption

      Isn't this just a case of outside funding being the source of corruption? How is government any less corruptible than corporate sponsors? Isn't the very act of material exchange the source of the corrupting influence? It seems like ethics in general are required at every single level of that particular mechanism to "ensure" no corruption and that seems not very likely.

      What if funding were established through a double-blind mechanism where the funding source and the researchers were shielded from each other? Could that be a way to make it work?

      1. Eclectic Man Silver badge

        Re: Ethics and Corruption

        "What if funding were established through a double-blind mechanism where the funding source and the researchers were shielded from each other? Could that be a way to make it work?"

        Unfortunately that makes it very difficult to find out whether the funders are 'getting value for money'. In many countries, research by universities and other organisations is publicly funded from taxation, directed by semi-independent organisations (such as the UK's former Research Councils) or by dedicated research institutions (such as the Wellcome Trust).

        Ethically there is no reason why 'Big Tech' (or 'Big Corp' for that matter) should not fund their own research into ethics, or anything else. The issue is whether they try to influence the direction of the research or the published results, as the tobacco industry did and the polluting climate-change deniers do, while claiming it as 'independent' science. Much of the research on new drugs is never published, as the results are either inconclusive or do not fit with the sponsor's desire for a profitable, marketable product. There was a campaign a while ago to get every drug trial registered before it started, with a requirement that the results be published irrespective of outcome, but I don't think it came to anything.

        On the bright side, at least 'Big Tech' is aware they have to do something about 'ethics' in AI; I just wish they had done ethics for 'Big Corp' first.

        1. JohnSheeran

          Re: Ethics and Corruption

          All good points.

          I would say that there are organizations in the United States, called "think tanks", that do exactly the kind of thing we're talking about. However, there have been plenty of statements in the news questioning the validity of their work because of the potential influence of their sponsors.

          Government is often no better in this regard because of lobbyists, politicians, etc.

          This whole thing just feels like a much bigger social problem than anything else.

    2. jonathan keith
      Joke

      Re: Ethics and Corruption

      How about letting the AIs do the ethics research? I mean, what could possibly go wrong?

  10. Mr. Skeezix

    We don't need independent research, we need opposing research. Wisdom is more likely in the middle than on either extreme.

  11. Robert Grant Silver badge

    > The analogy "is not perfect," the two brothers acknowledge, but is intended to provide a historical touchstone and "to leverage the negative gut reaction to Big Tobacco’s funding of academia to enable a more critical examination of Big Tech."

    This sort of obvious shaming-by-irrelevant-connection reminds me of The Simple Truth (https://www.readthesequences.com/The-Simple-Truth). The relevant excerpt:

    > Mark calms himself down. “I suppose I can’t expect any better from mere shepherds. You probably believe that snow is white, don’t you.”

    > “Um… yes?” says Autrey.

    > “It doesn’t bother you that Joseph Stalin believed that snow is white?”

  12. ecofeco Silver badge

    Algorithms that trap people in poverty

    https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back/

    A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.

  13. steviebuk Silver badge

    We'll end up with this one day

    "Serve the public trust"

    "Protect the innocent"

    "Uphold the law"

    "Any attempt to arrest a senior member of Google, Amazon, Facebook, Microsoft, Apple, Nvidia, Intel, IBM, Huawei, Samsung, Uber, Alibaba, Element AI, and OpenAI results in shutdown"

  14. martinusher Silver badge

    These programs are mostly giant filters

    The term "Artificial Intelligence" is misleading. Many of these programs are just very complex filters that take a mass of data, munge it into an internal form and use it to get a near match to existing records. So 'facial recognition' is no different from fingerprint matching, ballistics matching or any of the other forensic techniques that have evolved over the years. Where they go wrong is in their application -- popular media has portrayed DNA matching as utterly foolproof, for example, so juries tend to believe it even in the absence of corroborating evidence.
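
    The "giant filter" view boils down to a few lines of Python (the internal form and the data are invented for illustration): munge a record into an internal representation, then return the nearest stored match.

    ```python
    # Sketch of a matcher-as-filter: records are munged into a crude
    # feature vector and the closest database entry wins. Note that it
    # always returns *something* -- a near match, never a certainty.

    def munge(record):
        # Internal form: a tuple of floats standing in for extracted features
        return tuple(float(x) for x in record)

    def nearest_match(query, database):
        q = munge(query)
        return min(database,
                   key=lambda r: sum((a - b) ** 2 for a, b in zip(munge(r), q)))

    db = [(1, 2), (5, 5), (9, 1)]
    print(nearest_match((4, 6), db))  # (5, 5) - closest record, not proof
    ```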

    So there really isn't any 'ethics' involved in AI. There's plenty of questions about specific applications but they're no different from the questions we should be asking ourselves since the dawn of time. For example, facial recognition has potential limitations due to training bias. But then, so do eye witnesses -- they're notoriously unreliable. People -- especially colored people in the US -- have been wrongly convicted of crimes based on poor witnesses for ever. The fault is the justice system that refuses to question imperfect data -- if he's black then he obviously did it, end of story.

    The other big issue with the application of large databases is reducing people to a quality index based on proprietary hashing algorithms. The index describes you as 'good' or 'bad' (and invariably in the US there are more 'bad' black people -- that is, poor people who don't qualify for credit), except that the rating goes on to decide all sorts of other things that affect people's lives.

    1. Robert D Bank

      Re: These programs are mostly giant filters

      A large part of the problem is that there is no easy way to check and if necessary correct/challenge the veracity of these data sets as they are often held by private corporations. For things like credit reference agencies this can have a huge impact on individuals as you may be filtered out very early in the scoring and never be eligible for funding that you should be eligible for.

      Another issue is the data sourcing. For example the aggregators that collect data from multiple sources may be making unjustified assumptions or connections that then trickle through the data sets of all who buy them. Even in cases where you're inputting your own data it is easy to make a mistake occasionally, especially on poorly designed forms, but can sometimes be very difficult to get that mistake corrected. If that data is in many companies and jurisdictions it is virtually impossible. Another even more concerning issue is where data may be collected from social media where people have their guard down and may brag about something that they never did for example, but if that is used in a profile of them it can be very damaging.

      And then there is the issue of filtering out choices to narrow your options based on your profile of income and interests etc. It's pretty obvious how embedded that is already in the likes of Google search or FB ads. Essentially, whoever is prepared to pay the most to be at the top of the search results or profiling game wins, and any other businesses are left floundering. So it affects both individual choice and the viability of smaller competitive businesses.
