Tobacco is enjoyable. Social media is mainly unpleasant.
Big Tech bankrolling AI ethics research and events seems very familiar. Ah, yes, Big Tobacco all over again
Big tech's approach to avoiding AI regulation looks a lot like Big Tobacco's campaign to shape smoking rules, according to academics who say machine-learning ethics standards need to be developed outside of the influence of corporate sponsors. In a paper included in the Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics …
COMMENTS
-
Thursday 29th April 2021 10:53 GMT Disgusted Of Tunbridge Wells
Re: Ummm
TikTok is brilliant by the way. Just scroll past the rubbish and it starts showing you videos that you like (eg: my 'for you' page is full of history stuff, etymologies and scantily clad women).
Probably because it's not really social in the "My enemies must die" sense that twitter is.
-
Thursday 29th April 2021 10:05 GMT Eclectic Man
Racial bias
Many AI systems exhibit unintentional racial bias. Facial recognition systems trained on white faces are very poor at recognising non-white people and have led to miscarriages of justice in the USA (as documented on El Reg and elsewhere). AI systems that estimate re-offending likelihood have been shown to be biased because they take a person's home location and associate their chance of re-offending with the prevalence of crime in that area, without taking into account whether police target that area and so are likely to record more crime there.
In states where law enforcement has a history of being more punitive towards black people, AI often serves to reinforce that discrimination. We are currently seeing the result of faulty programming and institutional authoritarianism in the ongoing Post Office Horizon scandal, where 'the computer says so' was taken as proof of criminal activity. Hopefully the AI industry will learn from that disaster, but if history is anything to go by, it won't.
-
Thursday 29th April 2021 16:04 GMT You aint sin me, roit
Re: Racial bias
To be fair that's not AI.
Stop and search more black youths and you will find more black youths carrying knives...
So stop and search more black youths! And find more knives...
The data suggests that black youths carry knives. Of course the prevalence of white youths carrying knives is unknown, because they don't get stopped in the first place.
Nothing to do with AI, just simple racism.
-
Thursday 29th April 2021 16:46 GMT Yet Another Anonymous coward
Re: Racial bias
Same in other fields
Analysis of our previous $PRESTIGE_COLLEGE students shows that the best students were white, male, privately schooled and children of alumni; therefore, mathematically, we pick these students.
The admissions office never sees the gender/ethnicity etc. of the applicants, and so can't be discriminating.
-
Saturday 1st May 2021 15:42 GMT katrinab
Re: Racial bias
I think you need to stop thinking about "AI" as some sort of intelligence, and start thinking of it as a type of compiler that compiles training data into a computer program.
So your training data is your computer code, and if your code is racist, your computer program will be racist.
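That compiler analogy can be sketched in a few lines (all area names, labels and counts here are invented for illustration): a trivial 'learner' that compiles labelled examples into a majority-vote lookup table. If more crime is recorded in one area simply because it is policed more heavily, the compiled 'program' reproduces that skew faithfully.

```python
# Toy sketch of "training data as source code": a trivial learner that
# compiles (feature -> label) examples into a majority-vote table.
# If the labels encode a bias, the compiled 'program' encodes it too.
from collections import Counter, defaultdict

def compile_model(training_data):
    """Compile labelled examples into a majority-vote lookup table."""
    votes = defaultdict(Counter)
    for feature, label in training_data:
        votes[feature][label] += 1
    return {f: c.most_common(1)[0][0] for f, c in votes.items()}

# Invented history: area_B is policed more heavily, so more 'high risk'
# outcomes are recorded there, whatever the underlying offending rates.
biased_history = ([("area_A", "low_risk")] * 9 + [("area_A", "high_risk")] * 1
                + [("area_B", "low_risk")] * 4 + [("area_B", "high_risk")] * 6)

model = compile_model(biased_history)
print(model["area_A"])  # low_risk
print(model["area_B"])  # high_risk -- the recorded skew, faithfully compiled
```

The 'model' never sees skin colour at all; it just compiles whatever distribution the records hand it.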
Also, I feel that in many cases, trying to find correlations in the input data is no more scientific than examining a goat's entrails or the position of the stars.
-
Monday 3rd May 2021 16:59 GMT Nigel Sedgwick
Count Versus Proportion
"Stop and search more black youths and you will find more black youths carrying knives..."
However, you will almost invariably find a lesser proportion of "black youths carrying knives". In fact, if a greater proportion were found, that would indicate (likely purposeful) initial targeting of a subset of black youths less prone to knife carrying than the later, wider subset.
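The arithmetic behind that point can be sketched with made-up numbers: if the initial stops targeted the most-prone subset, widening the net raises the count of finds while lowering the hit rate.

```python
# Invented numbers only: a targeted first wave with a high hit rate,
# then a wider, less-prone second wave with a lower one.
targeted_stops, targeted_hits = 100, 20   # 20% hit rate in the initial subset
extra_stops, extra_hits = 400, 20         # 5% hit rate in the wider subset

total_stops = targeted_stops + extra_stops
total_hits = targeted_hits + extra_hits

print(total_hits)                # 40: more knives found overall...
print(total_hits / total_stops)  # 0.08: ...at a lower proportion than the initial 0.20
```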
Keep safe and best regards
-
Monday 3rd May 2021 18:02 GMT Anonymous Coward
Re: Racial bias
If you're referring to the UK, you don't know how stop searches work.
In order to search someone, you must have reasonable suspicion. This could be in the form of live intelligence ('he has a knife'), the suspect's actions ('they moved their hand to their waistband and looked nervous'), etc. You can't just stop and search whoever you please (unless very specific, short-term and high-level laws are enacted).
Also, guess what: if your data shows black youths are carrying knives, it's not racism, it's intelligence-led policing if you make sure you engage with them more.
Stop waving about the word 'racism' when you clearly don't know how to use it properly in this context.
-
Monday 3rd May 2021 16:51 GMT Nigel Sedgwick
Re: Racial Bias and Intent
"Many AI systems exhibit unintentional racial bias."
To the very best of my knowledge, no current non-biological system (labelled as AI or not) actually has the capacity for "intent", independent of its programmers and/or others. Accordingly, such non-biological systems have no capacity for racial bias, whether intentional or unintentional.
Keep safe and best regards
-
Tuesday 4th May 2021 18:59 GMT Eclectic Man
Re: Racial Bias and Intent
Maybe I should have posted "Many AI systems exhibit the unintended and unrealised racial bias of their programmers", or possibly "Many AI systems exhibit the unintended racial bias of the data sets with which they are trained."
For example, a post above states that the police in the UK need reasonable cause to stop and search someone. That is true; however, black people driving 'nice' cars (Mercedes, Jaguars, BMWs etc.) are more likely to be stopped than white people driving the same vehicles (see several recent examples in the UK press). (There is no crime of 'Driving while Black' on the UK statute book.)
Similarly, if criminality is evenly spread across the population irrespective of skin colour, but one section of society determined by skin colour is more likely to be stopped and prosecuted, more crimes will be recorded against that section of society and so they will get an unfair reputation for criminality, and so be more likely to be targeted than otherwise, leading to more detection of crime in a vicious circle.
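That vicious circle can be sketched as a toy feedback loop (all group names and numbers invented): two groups with identical true offending rates, with patrols always sent wherever recorded crime is currently highest. A tiny initial skew in the records locks in a runaway disparity.

```python
# Toy feedback loop: identical true rates, but stops follow the records.
true_rate = 0.1                              # same for both groups
recorded = {"group_A": 10, "group_B": 11}    # slight initial skew in the records

for _ in range(10):
    target = max(recorded, key=recorded.get)  # patrol where records are highest
    recorded[target] += 100 * true_rate       # 100 stops, 10% expected finds

print(recorded)  # group_B's records run away; group_A's never move
```

Despite equal underlying rates, group_B ends with over ten times the recorded crime of group_A, because every round of targeting generates the records that justify the next round.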
I'm sure that the poster above (sorry, I didn't write down the tag name) does not consider him/herself to be racist, but the attitude presented, that finding more people of a specific skin colour carrying knives means searching more of 'them' will find more knives and therefore detect more crime, lends itself to unintentional racial discrimination.
I hope this explains my original comment concerning racial bias.
(It's OK, I know I'm at risk of major downvoting.)
-
This post has been deleted by its author
-
Thursday 29th April 2021 10:50 GMT amanfromMars 1
When Incest Rules IT can quickly Manifest Madness and Deformity by All Accounts
Big Tech bankrolling AI ethics research and events and seeming very familiar to Big Tobacco all over again is ye olde favourite great parlour game that leads those following lost souls, rearranging deckchairs for the Titanic, Saints playing Sinners and Poachers turning Gamekeeper whenever bankrolling is directed to the unworthy ..... the silver tongued devil charlatan and the poisonous snake oiler.
'Tis nothing new to be overly concerned about once one knows what is to be expected and routinely dismissed and discounted as a viable future sustainable resource/font of common collective wisdom.
Weirdly and spookily enough, presently there is an almost mirror carbon copy of the dilemma in a similar drama playing itself out currently in Westminster and at No 10 Downing Street.
-
Saturday 1st May 2021 19:54 GMT Eclectic Man
Not all, some are industrious idiots.
(In his opus "On War", von Clausewitz classified soldiers into four separate categories:
Intelligent and lazy: these people should go into 'intelligence' or central high command, as they will work surprisingly hard and diligently to ensure that they do not have to get up at 2:30 in the morning to deal with unforeseen emergencies.
Intelligent and industrious: These people are field commanders, they react quickly and decisively to events and make usually sensible decisions based on what they know at the time, they are good tacticians.
Stupid and lazy: They are the foot soldiers, they will basically stay where you put them and do what they are told (if trained appropriately). They are too lazy to wander far, and too stupid to get into too much trouble on their own.
Stupid and industrious: These people are a nightmare and will cause no end of trouble. They won't stay where they are put, they will get into scrapes and tinker with things if not given something to do. They should be got rid of at the earliest available opportunity.)
-
Tuesday 4th May 2021 19:11 GMT Eclectic Man
Re: We're Missing Two Things
I got 220 pages (out of over 600) into Kant's "Critique of Pure Reason" before giving up. I told myself I was interested in whether his belief that Euclidean geometry was the only possible geometry affected the validity of his deductions. I was wrong; my interest was not that strong.
What I actually learnt from CoPR was that Kant found a flaw in Descartes' "I think, therefore I am". In order for Descartes to deduce "I am", he needs to have the concept of existence first, and therefore to have experienced something other than himself thinking.
I strongly believe that philosophers (apart from Nietzsche*) should be read in translation, as someone else has done the hard work of actually finding out what s/he meant; otherwise the translation would never be published. Bernard Williams (author of "Ethics and the Limits of Philosophy") needed a much better editor, IMHO.
*In my experience Nietzsche is incomprehensible or annoying in any language.
-
Thursday 29th April 2021 21:15 GMT a_yank_lurker
Ethics and Corruption
Funding research tends to corrupt the research, as there is a bias to find what the funder wants or expects. The bias may not be overt, but it is there. Even if the researcher takes pains to lessen the bias, it is there subconsciously.
A related issue is the ethics of the funder, and in the case of AI I have my doubts about the ethics of all the institutional funders, whether corporate or government. They want results that make themselves look good. So the conundrum is how to fund AI research so it will be done more ethically.
-
Friday 30th April 2021 11:04 GMT JohnSheeran
Re: Ethics and Corruption
Isn't this just a case of outside funding being the source of corruption? How is government any less corruptible than corporate sponsors? Isn't the very act of material exchange the source of the corrupting influence? It seems like ethics in general are required at every single level of that particular mechanism to "ensure" no corruption and that seems not very likely.
What if funding were established through a double-blind mechanism where the funding source and the researchers were shielded from each other? Could that be a way to make it work?
-
Friday 30th April 2021 13:54 GMT Eclectic Man
Re: Ethics and Corruption
"What if funding were established through a double-blind mechanism where the funding source and the researchers were shielded from each other? Could that be a way to make it work?"
Unfortunately that makes it very difficult to find out whether the funders were 'getting value for money'. In many countries, research by universities and other organisations is publicly funded from taxation, directed by semi-independent organisations (such as the UK's former Research Councils) or by dedicated research institutions (such as the Wellcome Trust).
Ethically there is no reason why 'Big Tech' (or 'Big Corp' for that matter) should not fund their own research into ethics, or anything else. The issue is whether they try to influence the direction of the research or the published results, as the tobacco industry did, and as the polluting climate-change deniers are doing now, while claiming it as 'independent' science. Much of the research on new drugs is not published, as the results are either inconclusive or do not fit with the sponsor's desire for a profitable, marketable product. There was a campaign a while ago to get every drug trial registered before it started, with a requirement that the results be published irrespective of outcome, but I don't think it came to anything.
On the bright side, at least 'Big Tech' is aware they have to do something about 'ethics' in AI; I just wish they had done ethics for 'Big Corp' first.
-
Tuesday 4th May 2021 18:23 GMT JohnSheeran
Re: Ethics and Corruption
All good points.
I would say that there are organisations in the United States, called "think tanks", that do exactly the kind of thing we're talking about. However, there have been plenty of statements in the news questioning the validity of their work because of the potential influence from their sponsors.
Government is often no better in this regard because of lobbyists, politicians, etc.
This whole thing just feels like a much bigger social problem than anything else.
-
Sunday 2nd May 2021 02:40 GMT Robert Grant
> The analogy "is not perfect," the two brothers acknowledge, but is intended to provide a historical touchstone and "to leverage the negative gut reaction to Big Tobacco’s funding of academia to enable a more critical examination of Big Tech."
This sort of obvious shaming-by-irrelevant-connection reminds me of The Simple Truth (https://www.readthesequences.com/The-Simple-Truth). The relevant excerpt:
> Mark calms himself down. “I suppose I can’t expect any better from mere shepherds. You probably believe that snow is white, don’t you.”
> “Um… yes?” says Autrey.
> “It doesn’t bother you that Joseph Stalin believed that snow is white?”
-
Monday 3rd May 2021 06:52 GMT martinusher
These programs are mostly giant filters
The term "Artificial Intelligence" is misleading. Many of these programs are just very complex filters that take a mass of data, munge it into an internal form and use it to get a near match to existing records. So 'facial recognition' is no different from fingerprint matching, ballistics matching or any of the other forensic techniques that have evolved over the years. Where they go wrong is in their application -- popular media has portrayed DNA matching as utterly foolproof, for example, so juries tend to believe it even in the absence of corroborating evidence.
So there really isn't any 'ethics' involved in AI. There are plenty of questions about specific applications, but they're no different from the questions we should have been asking ourselves since the dawn of time. For example, facial recognition has potential limitations due to training bias. But then, so do eye witnesses -- they're notoriously unreliable. People -- especially colored people in the US -- have been wrongly convicted of crimes based on poor witnesses forever. The fault is the justice system that refuses to question imperfect data -- if he's black then he obviously did it, end of story.
The other big issue with the application of large databases is reducing people to a quality index based on proprietary hashing algorithms. The index describes you as 'good' or 'bad' (and in the US there are invariably more 'bad' black people -- that is, poor people who don't qualify for credit), except that that rating decides all sorts of other things that affect people's lives.
-
Monday 3rd May 2021 12:01 GMT Robert D Bank
Re: These programs are mostly giant filters
A large part of the problem is that there is no easy way to check, and if necessary correct or challenge, the veracity of these data sets, as they are often held by private corporations. For things like credit reference agencies this can have a huge impact on individuals, as you may be filtered out very early in the scoring and never be eligible for funding that you should be eligible for.
Another issue is the data sourcing. For example the aggregators that collect data from multiple sources may be making unjustified assumptions or connections that then trickle through the data sets of all who buy them. Even in cases where you're inputting your own data it is easy to make a mistake occasionally, especially on poorly designed forms, but can sometimes be very difficult to get that mistake corrected. If that data is in many companies and jurisdictions it is virtually impossible. Another even more concerning issue is where data may be collected from social media where people have their guard down and may brag about something that they never did for example, but if that is used in a profile of them it can be very damaging.
And then there is the issue of filtering out choices to narrow your options based on your profile of income and interests etc. It's pretty obvious how embedded that is already in the likes of Google search or FB ads. Essentially a list of who's prepared to pay the most to be at the top of the search results or profiling game wins and any other business are left floundering. So it affects both individual choice and viability of smaller competitive businesses.
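The pay-to-rank effect can be sketched in a few lines (company names, relevance scores and bids all invented): if placement is ordered by bid rather than relevance, the deepest pockets take the top slot regardless of how well they match the query.

```python
# Invented listings: rank by bid and the best-matching small business loses.
results = [
    {"name": "BigCorp", "relevance": 0.4, "bid": 5.00},
    {"name": "SmallCo", "relevance": 0.9, "bid": 0.10},
    {"name": "MidCorp", "relevance": 0.6, "bid": 1.50},
]

by_bid = sorted(results, key=lambda r: r["bid"], reverse=True)
print([r["name"] for r in by_bid])  # BigCorp first, despite the lowest relevance
```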
-