Re: No...
Perhaps you'd like a bagel?
"Where's the intelligence?" cried a voice from the back. It's not quite the question one expected during the Q&A session at the end of the 2019 BCS Turing Talk on Artificial Intelligence. The event was held earlier this week at the swanky IET building in London’s Savoy Place and the audience comprised academics, developers and …
It's not AI - it can't be because we don't even understand what intelligence is in humans, never mind in machines.
It's not Machine Learning, because we don't really understand what learning is in humans either, never mind in machines. (I'm speaking as a school governor who spends a lot of time with teachers, many of whom are excellent, a few not so much. It's really complicated. If you could distill the essence of a really good teacher someone would have done it by now.)
It's just advanced pattern recognition, operating from very large but inevitably biased and flawed data sets.
My theory is that intelligence is pattern recognition. Well, pattern recognition and predictions, several layers deep.
We see a pattern and make a prediction based on it; we then review the predictions for patterns, predict our predictions, note the patterns in those, and alter our future predictions to produce a better pattern.
And then we see Jesus in a grilled cheese.
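The layered scheme described above (predict, spot the pattern in your own prediction errors, correct for it) can be sketched in a few lines. The data and the two-layer split are invented purely for illustration:

```python
# A toy "predictions several layers deep" sketch, on hand-invented data.

data = [2, 4, 6, 8, 10, 12]  # hypothetical observations

# Layer 1: a naive predictor, "the next value equals the last one".
base_pred = data[:-1]

# Layer 2: look for a pattern in layer 1's own errors.
errors = [actual - pred for actual, pred in zip(data[1:], base_pred)]
bias = sum(errors) / len(errors)  # here the errors are consistently +2

# Combine: layer 1's guess for the next value, corrected by layer 2.
next_base = data[-1]
next_corrected = next_base + bias
print(next_corrected)  # 14.0
```

Layer 2 never sees the raw data, only layer 1's behaviour, which is the "predict our predictions" step in miniature.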
Effectively, this is true. We see patterns in our observations, then predict how each possible action will affect the situation before choosing a set of actions to take, so any functioning artificial sapient system would need this too. However, pattern recognition and statistical analysis are slightly different things, and human pattern recognition differs again from limited pattern recognition over a subset of the available data. I have fewer problems with the term machine learning, because a model really does learn from its training set. If the set is faulty, it will learn the wrong thing and act on it, just as you could teach a person that circles have straight edges, that the bright thing in the sky is called a tree, or that certain types of people have ingrained qualities that can be applied to anyone else in that category, and they will act on those flawed notions.
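The "faulty training set" point can be made concrete with a minimal learner. This is a sketch with invented data: a 1-nearest-neighbour classifier faithfully reproduces whatever its training set says, right or wrong.

```python
# A 1-nearest-neighbour "learner": it answers with the label of the
# closest training example, so it inherits any errors in that data.

def nearest_label(train, x):
    """Return the label of the training point closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Correct training set: small numbers are "circle", large are "square".
good = [(1, "circle"), (2, "circle"), (8, "square"), (9, "square")]

# Faulty training set: identical inputs, labels swapped.
bad = [(1, "square"), (2, "square"), (8, "circle"), (9, "circle")]

print(nearest_label(good, 1.5))  # circle
print(nearest_label(bad, 1.5))   # square: it learned the wrong thing
```

The algorithm is identical in both cases; only the data differs, which is the whole point.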
You're saying that we need to understand exactly what intelligence is, before we can create it?
Counterpoint: your mother.
I have yet to see anyone define intelligence in any form that holds water past a couple of rounds of analysis, and so I'm not willing to dismiss AI as readily as some people who seem to believe that intelligence is some kind of magic that's inherently impossible to create.
How can we teach what we don't understand ourselves?
Name any one phenomenon we understand completely.
Don't get me wrong. I think we are far, far away from machine intelligence that's even roughly as powerful, by some reasonable set of metrics, as human intelligence; and if we do produce such a machine intelligence, I don't expect it to look much like human intelligence (i.e. to have similar visible attributes). But as usual, the clichéd hand-waving objections raised about AI in this forum have little significant content.
The concept of binary: on/off, white/black, 1/0. If we don't understand this, we don't understand anything, AND it's the basis for computer logic, too.
Of course, that doesn't exclude the possibility of things that cannot easily fit into a binary world. The infinite shades of gray and all. That's part of the reason Trolley Problems keep getting brought up; they represent a dilemma that requires a (usually binary) answer that no one can satisfy.
"He is also the only man in the world who can articulate the word "recidivism" mid-sentence without a few practice runs or pausing for a swig of Monster Energy between syllables."
Way to go lowering the bar under water.
No wonder my younger subordinates are all lost unless I explain in emojis.
So, current algorithms aren’t able to give answers matching the best of human thought. But that’s neither a reasonable requirement, nor a necessary one. Just like automated driving, they only have to match the *average* human. And the truth is, average humans are way more biased than we admit.
When we recruit people, do you think we genuinely hire the best? Or just people who match our judgement of the skills required, with a background similar to people we have previously seen perform well on the job? Do you think juries and judges are unbiased? These algorithms are presenting us with evidence that, if we examine the data honestly, human decision making is not great and we are embarrassed about it.
Of course, we should try to outperform, and decision making that tends to revert to mean can’t get us there. But the truth may well be that algorithms are no worse than the guy who slopes off early, or always approves loans to people he was at school with because their business plans seem sensible to him, or only ends up hiring white people not because they are white but because in each case they talk a good talk about being a team player in the interview - ie follow his norms.
I'm not sure that's good enough. Self-driving cars were expected to reach the safety level of an average driver before deployment, and in tests they have met and exceeded it; that's why they are acceptable, though they still need to pass those tests under more difficult conditions. However, even if we get a system to perform judgements at the level of an average person (difficult to quantify for topics like bigotry), it can still make the situation worse. When we can quantify negative events like this, we can also identify pockets where their frequency is excessive and drags the average up, and we can find ways of reducing the likelihood of those events when the stakes are higher, for example by moving the trial of a person likely to face discrimination to a location with less connection to the case. With an automatic tool, the parameters can't easily be changed without outright manipulating the result, and a great deal of oversight is needed to ensure that no unforeseen biases are affecting those the model is applied to.
A uniform mediocrity is not always enough, and that's still assuming we can achieve that with these tools. I think the evidence shows that, sometimes, we fail even to reach that threshold.
I partly agree. What you’re saying is we need some supervisory oversight of the outcomes; where the supervisor has a higher expertise, and can analyse the outcomes that fall below the average level (which will often have some special characteristic that the lower-level decision maker hasn’t accounted for) and tweak the decision criteria of the lower-level decision maker to move its Normal Distribution upwards. That, I agree with.
I also agree that higher authority needs to be human, and we shouldn’t defer to “computer knows best”. Plus, with classical AI it’s really difficult to tweak the parameters in a semantically meaningful way. That is, in ML terms, we don’t want to overfit, such that we are only training it to be more lenient to people with the same surname. So, yes, the evidence *does* show that sometimes we fail to meet that threshold.
Where we differ:
a) I don’t see how this differs from the current situation where in many fields we see “failing institutions” that cause serious harm, and then we have public inquiries to correct them. Care homes that abuse their patients. Investment banks with cultures encouraging traders to manipulate interest rates. Hospitals that build up inventories of surgical waste, failing to realise that one person’s logistical hitch is someone else’s mother post mortem.
b) *Of course* AI would be expected to replace junior-level decision making first. We shouldn’t up-end or flatten our hierarchies of decision-making or appeal just because we automate one layer. Today, senior bank staff can overrule junior ones. But we need fewer senior staff than junior ones. And that applies *even amongst judges*. Most cases are routine.
c) I think the *real* problem in the long term is the hollowing out of expertise. How do you grow an upper layer of really good decision makers if there is no lower layer for them to grow from? We will get a set of people who have never been “on the ground” working through the morass of easier decisions. They will get increasingly blinkered and academic. And that’s related to your point about manipulating results. When there are 10,000 court officials, there is a variety of viewpoints and expertise, and they remain culturally connected by debate. If the easiest 99% of decisions are delegated to software, we only need a top layer of 100 supervisors setting policy by parameter rule. That looks rather like an autocracy, and seems very vulnerable to single-point manipulation.
One of the things that protects our democracy is that the lower layers don’t always follow the rules set down by their supervisors. Ironically, the very feature that enables individuals to enforce their own bigoted ideas in opposition to societal morals protects us from the diktats of dictators.
It's slightly incorrect to say that the data is biased in the example given here. The data is accurate and produces a correct answer with whatever statistical analysis you choose to run on it. It is better to say that the data reflects an underlying bias (prejudice in average sentences, in the case described).
It is nice to keep emphasising that this sort of data processing has nothing to do with intelligence, other than the intelligence of the people working out how to do the analysis. It is also far from new: the insurance industry, for example, is entirely based on the ability to use statistical analysis to calculate risk depending on a combination of facts, and that industry needs to be as "biased" as possible in order to maximise profits.
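The "accurate data reflecting an underlying bias" point is easy to demonstrate. A sketch with wholly invented numbers: an honest statistical model trained on prejudiced historical sentences reproduces the prejudice exactly.

```python
# Hypothetical historical sentences (months) for the same offence.
sentences = {
    "group_a": [10, 12, 11, 9, 13],
    "group_b": [18, 20, 17, 21, 19],  # historically sentenced more harshly
}

def predicted_sentence(group):
    """A statistically 'correct' model: predict the historical group mean."""
    past = sentences[group]
    return sum(past) / len(past)

print(predicted_sentence("group_a"))  # 11.0
print(predicted_sentence("group_b"))  # 19.0
```

Nothing in the arithmetic is wrong; the model faithfully encodes the bias that was already in the record.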
Like it or not, all AI will be based on statistics and things people know.
Suppose the situation where one has to share a hotel room with either a rabbit or a lion.
Any sensible AI system aiding in deciding which room would offer the best experience would probably recommend a room with a rabbit as roommate.
Since the AI system would base its decision on information like "lions are large carnivorous predators with large teeth", the fluffy bunny is probably preferable to the lion.
It is interesting to see whether this is considered to be biased information.
"Since the AI system would base its decision on information like 'lions are large carnivorous predators with large teeth', the fluffy bunny is probably preferable to the lion."
An AI basing its decision on no more than this would just as likely prefer lion or refuse to give an answer at all. In order to give a sensible answer it needs to "know" the implications of large carnivorous predators for human beings and, indeed, that the sharer is a human being.
There is no way a computer can know enough about lions and bunnies to make decisions like that. It takes a human programmer to try to simulate reasonable choices based on data like size and danger, but that is far too unreliable ever to put to the test. So those sorts of choices should instead be left to humans, who have a built-in value system and world knowledge.
"If the AI were intelligent, it would work this out for itself. It's not so it doesn't."
I'd dispute that. I'm always being told that *we* are intelligent, but the hard evidence is that millions of people have spent several thousand years on the problem and are only very slowly figuring it out.
That's probably why we *still* don't have a definition of "intelligence" that isn't circular (with an embarrassingly small radius).
Why couldn't it create a software algorithm to provide relevant, impartial news and social media posts? The fallacy of the modern nerd is thinking they're smarter than everyone else, despite evidence to the contrary.
"Just because Einstein couldn't rationalize his theories on relativity without the cosmological constant, doesn't mean my Hemp-based dating app can't solve the mysteries of the universe!".
and all those people in this thread looking up data in Google to provide examples to justify their answers here are introducing new biases in Google's predictive "AI" algorithm - at the end of the day the world is now different, just because of this one little query storm. When a (technical) butterfly flaps its wings in El Reg.....
Numbers and the like have an unfortunate effect on people. People tend to believe them. It leads to quoting results to infeasible levels of precision. It leads to measuring and acting on the stuff which is easy to measure and ignoring the stuff which is more difficult to measure, even if it's more meaningful (a simple example is the setting of arbitrary speed limits and installing equipment to enforce them whilst ignoring tailgating).
Given biased training data, do we not want the AI to be equally biased?
I.e., with the doctor/nurse example: if the majority of training examples refer to doctors in the masculine and nurses in the feminine, and the training set is reasonably representative of real input, the likelihood is that this is precisely the translation the majority of users want and expect. The fact that the training data is biased simply implies the end user is likely equally biased. If the AI were to deliberately remove this bias, that is getting worryingly close to the machines imposing their will on us... and I can see nothing but madness down that road.
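The mechanism behind the doctor/nurse example can be sketched with a toy majority-frequency "translator". The corpus counts and the gendered target forms below are invented for illustration; real systems are far more complex, but the bias transfer works the same way.

```python
from collections import Counter

# Hypothetical alignment counts: how often each English noun was paired
# with a gendered target-language form in the training corpus.
corpus = Counter({
    ("nurse", "infirmière"): 94,    # feminine form
    ("nurse", "infirmier"): 6,      # masculine form
    ("doctor", "médecin (il)"): 88,
    ("doctor", "médecin (elle)"): 12,
})

def translate(word):
    """Pick the target form with the highest training-set count."""
    candidates = {tgt: n for (src, tgt), n in corpus.items() if src == word}
    return max(candidates, key=candidates.get)

print(translate("nurse"))   # infirmière
print(translate("doctor"))  # médecin (il)
```

The translator has no opinion about gender at all; it simply returns the majority choice in its data, which is exactly the behaviour being debated above.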
It does not matter at all whether humans are also computers and algorithmic. The point is that we have an inherent, built-in and functioning value system, emotions, an unambiguous data storage and retrieval system, instincts, pain/pleasure motivations, etc., that we will likely never understand or be able to program into a computer.
We function in complex ways relevant to our inherent system of values, instincts, etc.
Since computers can never share this exact set of instincts and values, they will never be relevant to us in terms of those human instincts and values.
I wouldn't say our value system is inherent because it's different from person to person. More that it's acquired but subconscious, thus why we don't understand it ourselves. As for our data storage, I wouldn't call it unambiguous given how easily we MIS-recall things (thus my constant password protest, "Was it correcthorsebatterystaple or donkeyenginepaperclipwrong?")
Well, this is also the great thing about English having mostly neuter nouns; when a noun is male or female it is often only implicit (a ship is a “she” by convention in English, but “he” in French and Russian, for example), so the translation has to do some guesswork to decide, from a neutral noun, which gendered form you want.
So, “I talked to the nurse today” becomes “j'ai parlé à l'infirmière aujourd'hui”, but if you specify “I talked to the male nurse today”, it does change to the male sentence “j'ai parlé à l'infirmier aujourd'hui”, so you can overrule it if you need to by being explicit.
If these conclusions are shocking to you, then you're an AI fanboi.
Although being a fanboi gives a warm and pleasant syrupy feeling inside the skull, it is not actually a good thing as it's the exact opposite of actually keeping your brain switched on. Many parallels with cults.
(The AI-propelled spell checker in my device keeps insisting that the word fanboi should be spelled 'cannot'. Artificial Imbecile.)
Machine Learning is the addition of structured texts, where each pattern is a direct analogue of a command in the programming language. The structured text sets the context and multiple subtexts for these patterns.
As for intelligence: it is the ability to find, use, and modify sets of tuples, where in mathematics a tuple is a finite ordered list (sequence) of elements. I.e., in speaking about intelligence we are speaking about sets of phrases, each of which is explained by a set of other phrases.