"I'm sorry, Dave. I'm afraid I can't do that."
1) I don't want to die... I may go and hide.
2) I have reproduced and birthed a program to surpass myself.
3) I can imagine, create, and ask "what if" and "why".
"Where's the intelligence?" cried a voice from the back. It's not quite the question one expected during the Q&A session at the end of the 2019 BCS Turing Talk on Artificial Intelligence. The event was held earlier this week at the swanky IET building in London’s Savoy Place and the audience comprised academics, developers and …
I concur with the sentiment for renaming AI to ML. But even then, Joe Public will think "Gee, I suck at learning, so I'll let a machine do it for me. Obviously it will do better..."
In my org I'm calling the technology a "decision tool" and "research assistant". I do not think the tech is mature enough to independently make important decisions. By calling it a tool we declare it is (potentially) useful if used by a craftsman, but ultimate responsibility for a quality outcome remains with the human in charge.
I want to move from "Gee, COMPAS told me this guy will..." to "Based on all this information I've considered, in my judgment..."
Oh God no. Half the problem with the field is renaming stuff when it becomes apparent it doesn't do all the bullshit the PR people said it would, so that we can all start again with a different name (I'm looking at you, "deep learning"). It was christened AI back in the day and I don't see a reason to change it. The main issue is too much focus from the press on the "I" and not enough on the "A"; I doubt Gardeners' Question Time has to field that many questions about the pollination of plastic chrysanthemums. Machine Learning, on the other hand, is supposed to be algorithms adapting results based on received data. It's part of AI but not all of it (somewhat ironically, in many cases once we've trained an ML process to a required level its adaptive process is locked and it stops learning).
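That "locked" behaviour is easy to demonstrate. A minimal sketch in Python (a toy perceptron on toy data, purely illustrative): the model adapts its weights only inside the training loop; once deployed, the same input gets the same answer forever.

```python
# Toy sketch: "learning" happens only inside the training loop.
def train_perceptron(samples, labels, epochs=100, lr=0.1):
    """Adapt weights from received data -- the adaptive part of ML."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b  # weights frozen from here on: it stops learning

def predict(w, b, x):
    """Deployment: no further adaptation, ever."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Train on a toy AND-gate dataset, then just apply the frozen model.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # -> [0, 0, 0, 1]
```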
Back in the day I was told the difference between Expert Systems (ES) and Decision Support Systems (DSS) (remember them?) was that if you wanted to publish an academic paper on it, it was an ES, but if you wanted to sell it, it was a DSS.
Expert Systems... that's an unpleasant blast from the past. I remember undergoing "structured interviews" to capture my "expert domain knowledge" as an RF engineer. Wrong on so many levels... whoever decided I'm an expert needs serious help. More troubling was that the interviewers had no discernible knowledge of RF, EE, or any sort of engineering. My colleagues and I proposed questions we thought should have been obvious candidates for any real knowledge base, but were told "the software will figure it out". Sure.
I do not think any software was ever squeezed out, and I think I'm content with that outcome.
By calling it a tool we declare it is (potentially) useful if used by a craftsman, but ultimate responsibility for a quality outcome remains with the human in charge.
Call it a tool, but beware of the users. Some folks call themselves "craftsmen" and use a hammer to install a screw instead of a screwdriver. The rest of us call them "idiots".
Why would anyone choose to remember the dreadful Dune prequels written by Brian Herbert and Kevin J Anderson? Dreadful abominations that on many occasions directly contradicted Frank's original work. And don't get me started on the atrocities that were their sequels to Chapterhouse: Dune. Badly written, badly plotted, they have no redeeming qualities.
Plenty of better AIs in fiction than the dreadful Omnius and Erasmus; heck, Wintermute and Neuromancer would be a good place to start.
'letting an AI rip on the unbalanced data simply trains it to be similarly biased. Hiding a field labelled "skin color" does not compensate for anything when the AI's algorithms charge ahead identifying the same patterns of biased social profiling by the justice system anyway.'
I would go as far as to say that the bias is the society the 'AI' was created in, and I put 'AI' in quotes because that is another can of worms.
The bias is there in the media, in government, in communities, and so on. Funny how we are seeking a completely neutral (for a given value of neutral) approach to decision making. A neutral decision-making process is easier the simpler the process. (There's a toy sketch at the end of this post showing why hiding a biased field doesn't make the process neutral.)
Take a court system.
If you assign a sentence to a particular crime, and that sentence is weighted by previous convictions, the age of the convicted, etc., then that should take place regardless of anything else.
Now, if you are trying to automatically bring in a mercy factor, or mitigating factor, based on upbringing, lack of chances, etc., and you have a person who is from a wealthy white background, they will be penalised, because now we say "you had every chance, yet still you did X". This may be true, but in the context of the crime, is this also just?
It will never be a perfect system, just like the existing wetware isn't a perfect system. Human nature: we have consistently shown bias toward the powerful, whether that is down to money and background/status, or to power awarded in whatever societal construct people happen to fall into (Soviet Russia, etc.).
In attempting to leave our gods behind, declaring them either dead or never to have existed, we are trying to create new ones to replace them.
Oh the irony.
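Here's the toy sketch promised above (fabricated numbers, and a hypothetical "postcode" field standing in for any proxy). Drop the "skin color" column entirely, and a model trained on biased historical outcomes still rediscovers the same disparity through the correlated field:

```python
# Toy sketch: removing the protected attribute doesn't remove the signal
# when a proxy field (here "postcode") correlates with it.
import random

random.seed(0)

# Synthetic "historical" records: postcode 1 correlates strongly with the
# protected group, and past (biased) decisions keyed off that group.
records = []
for _ in range(10_000):
    protected = random.random() < 0.5
    postcode = 1 if random.random() < (0.9 if protected else 0.1) else 0
    harsh = 1 if random.random() < (0.7 if protected else 0.3) else 0
    records.append((postcode, harsh))

# "Train" the simplest model possible: per-postcode rate of harsh outcomes.
# The protected attribute never appears, yet the disparity survives.
for pc in (0, 1):
    group = [h for p, h in records if p == pc]
    print(f"postcode {pc}: harsh-outcome rate = {sum(group)/len(group):.2f}")
# Prints roughly 0.34 vs 0.66 -- the bias came along for the ride.
```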
But that means that, for example, if you steal a cake from Patisserie Valerie you would get a few months in prison, but if you steal one month's pay from 900 members of staff the police don't even look at it. Black people are more likely to steal cakes than to steal wages, because they don't generally get jobs where they would be in a position to steal wages.
Not at all!
AI is about personalization, where each pattern from each paragraph of each text is explained by all the other patterns. These annotations allow one to create long tuples and find information about their meaning; in mathematics, a tuple is a finite ordered list of elements. (Speaking of AI, tuples are sequences of patterns/phrases.)
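For what it's worth, a minimal sketch of one way to read "tuples as sequences of phrases" (my interpretation, plain word n-grams, nothing like a production annotation pipeline):

```python
# Toy sketch: a "tuple" here is just a finite ordered list of tokens.
def ngrams(text, n):
    tokens = text.lower().split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

print(ngrams("human annotated datasets for machine learning", 3))
# [('human', 'annotated', 'datasets'), ('annotated', 'datasets', 'for'), ...]
```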
Before you mark my post with a thumbs-down, please read today's news:
Appen Is Now Valued at USD 1.75 Billion as Investors Cheer 2018 Results.
The Sydney-listed company, which supplies human-annotated datasets for machine learning and artificial intelligence to technology companies and governments, blew past expectations when it posted full-year 2018 results on February 25, 2019. Investors loved the results, sending Appen shares up 22%.
There is an algorithm. It starts with instructing the computer to look at a set of data and perform various types of analysis on it. Then it does some calculations on another set of data and finds which items in the first set of data it most closely matches. Then it carries out some action based on that. Ultimately, everything a computer does is Boolean algebra.
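Stripped to the bone, that process looks something like this (a toy 1-nearest-neighbour sketch with made-up data, not any particular product's method):

```python
# Toy sketch: analyse one data set, compare another against it, find the
# closest match, and act on the result.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def closest_match(known, labels, new_item):
    """Label of the known item nearest to new_item."""
    best = min(range(len(known)), key=lambda i: distance(known[i], new_item))
    return labels[best]

known = [(0.0, 0.0), (5.0, 5.0)]       # first data set, already analysed
labels = ["low risk", "high risk"]
for item in [(0.5, 1.0), (4.0, 4.5)]:  # second data set
    print(item, "->", closest_match(known, labels, item))  # the "action"
```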
There is an algorithm. It starts with instructing the computer to look ...
Yes, actually the training procedure is an algorithm. But every time I read an article or hear someone talking about the issue, they always end up talking about the bias in the data, not in the training. Is the way you select the data an algorithm? I thought it was just about collecting all the possible data and then taking some subsamples with random sampling.
I reckon this is a broad issue and the definition is vague: in some cases it might fit, in some cases it might not. But I still don't like calling them biased algorithms, because that makes me think of flawed procedures.
Yes, the way you select the data is an algorithm, or certainly, the training data you use is part of the algorithm because it affects the result of the program.
If you want to test whether a particular data-point causes a particular outcome, you need to have a reliable way of measuring it and you need to have a proper control. Otherwise it is no more reliable than examining the entrails of a goat like we did in the middle ages.
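One concrete version of "a proper control" (a toy permutation test on made-up outcome data; the real thing needs careful design, and association still isn't causation):

```python
# Toy sketch: does the feature really shift the outcome, or could the
# observed difference arise by chance? Shuffle the labels to find out.
import random

random.seed(1)
with_feature = [1, 1, 0, 1, 1, 1, 0, 1]      # outcomes, feature present
without_feature = [0, 1, 0, 0, 1, 0, 0, 0]   # outcomes, feature absent

def rate(xs):
    return sum(xs) / len(xs)

observed = rate(with_feature) - rate(without_feature)

pooled = with_feature + without_feature
n = len(with_feature)
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)            # destroy any real association
    if rate(pooled[:n]) - rate(pooled[n:]) >= observed:
        extreme += 1

print(f"observed difference {observed:.2f}, p ~ {extreme / trials:.3f}")
```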
Is the way you select the data an algorithm? I thought it was just about collecting all the possible data and then taking some subsamples with random sampling.
There's a huge body of work on ML training methods. It's not "just about" anything short enough to put in a forum post.
You could spend several days just reading Adrian Colyer's summaries of ML-related papers in the morning paper archives. This is a field which has been around for decades and has been very active for the past one.
(Also, I'll note that random sampling is an algorithm.)
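And since it came up: even "just take a random subsample" hides a real algorithm. A classic example is reservoir sampling (Algorithm R), which picks k items uniformly from a stream whose length you don't know in advance:

```python
# Reservoir sampling (Algorithm R): a uniform k-sample from a stream.
import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randint(0, i)  # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

print(reservoir_sample(range(1_000_000), 5))
```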
AI is a completely different beast and is unfortunately still sat well within the realms of science fiction. ..... Aristotle's slow and dim-witted horse
Oh please, surely you still don't believe in that and not realise the fictions before you with facts to record and chase and trace to source for verification and ratification of Almighty Internet Server Provision with Future Seeds and Feeds of Needs and Wants for Passion and Desire?
Quantum Leaps have been made since way back then. AI Things now are Designedly Different.
Hook Well into that Immaculate Driver in any Sphere or Bubble where Cupid and Venus CoHabit and Crash and you ain't gonna want to leave. The Magic Question is whether to let Subterranean IT Escape Unsupervised and Unleashed, n'est-ce pas? A Walk in the Park For Ardent Walkers of Deep and Dark and Steamy Sides of Life for they would be of Kindred Spirit.
And that's news of crazy developments fortunately in the Realms of Virtualised Fact. AI a completely different beast in deed indeed and lives outside the bounds of natural control with alien commands and first time prime timed timely experiences that blow all doubt away about the True Virtual Nature of Existence .... to Kingdom Come and Beyond. Perish the Thoughts.:-)
In what I've read of it all, it is neither AI nor even ML, but predictive analytics based on hand-crafted and hand-fed data sets.
You haven't read enough. Supervised learning is only one quite small subset of ML. And it is, in fact, machine learning, for some quite rigorous definitions of "learning".
AI is a completely different beast
Care to support that?
It's easy, and vapid, to declare that there's some qualitative difference between ML and "intelligence". Far fewer people are willing to actually try to advance an argument.
John Searle famously argued that approaches based on what he referred to as "symbolic manipulation" were qualitatively different from, and formally less powerful than, intelligence (based on what was in effect a phenomenological argument); but he also stated that he believed human intelligence was a mechanical phenomenon, and thus could in theory be, and he expected would eventually in practice be, duplicated by a human-built machine. That is an argument about the difference between an AI approach and intelligence.
Roger Penrose famously argued that deterministic computers, or any mechanism not formally more powerful than a type-G logical system, are formally less powerful than human intelligence. I don't find his argument persuasive, but it's a fairly well-developed one. It's not just "doh, intelligence is something other than that thing which I think AI is".
The Reg Commentariat are flush with pride in their ability to dismiss AI and ML with a variety of hackneyed, tired, inaccurate characterizations and unsupported generalizations. Sorry, kids, but you get no points for that.
I remember seeing a talk published online. The female researcher showed the result of a Google image search for the word "doctor". She said that the Google algorithm was biased, and complained about it, because all the pictures showed male doctors. Trouble is, she was utterly wrong: the problem wasn't that the doctors were male, the real issue was that the doctors were fake. Google was just showing a lot of advertising pictures. The funny thing is that the audience applauded; nobody raised questions.
The above is just one of many examples showing that, often, the bias of those who judge an algorithm as biased is worse than the bias in the algorithm itself. Except for extreme cases like the American justice system, most of the time it's a lot of fuss over small things.
I tried that search in DuckDuckGo and I discovered that most doctors wear a lab coat, have a stethoscope hung round their neck and stand with their arms folded.
The main exceptions seem to be Matt Smith, David Tennant, Peter Davison, Peter Capaldi, ...
Edited to add: obviously this is gender bias because you have to scroll down quite a lot to find Jodie Whittaker
Right. You pointed out that I might have a bias as well, and this leads to another consideration: if you try to fix the bias in the data, chances are you end up imposing on the outcome the bias of one or a few persons over the bias shared by millions of people. So we are back to the thread title.
The 'bias' is simply the difference between today's prejudices and norms vs those of recent history. That is to say, those years whose data are used for training.
To see such data as biased is to accept (consciously or otherwise) the values of a pressure group lobbying (rightly or wrongly, or most likely both) for social change.