Google DeepMind
Beware of Cupid Hunts bearing Sweetheart Deals.
When British Prime Minister Theresa May bigged up AI at the World Economic Forum in Davos last month, it was as if she had nothing better to talk about. Name-dropping DeepMind, perhaps the only justification for her claim that the UK can be a "world leader in artificial intelligence", seemed a little desperate, especially as …
Can we please drop the "I" from "AI", because there's very little if any intelligence in any existing AI algorithm. I'd really love "AI" to be replaced with "automation" or "fuzzy algorithms", because that's what it is.
People are throwing "intelligence" around as if we understand what intelligence is. We don't. There isn't even a universally accepted definition of it.
Worst of all is, of course, the fact that people believe that the brain is a computing device which processes information. We do NOT know that. It's akin to medieval people saying that the Sun is burning. Yes, the Sun is "burning", but it's not a chemical reaction; in fact it has nothing to do with chemistry.
Likewise with the brain: we see what it does. We created computers roughly 30 years ago, and there's some resemblance between what we do and what computers do; therefore, the reasoning goes, the brain is a computational device. That inference is false.
OK, I should have been more specific: we created computers resembling modern PCs roughly 30 years ago. To be precise, the first PC was released in 1981. And, of course, the first digital computer was created shortly after WW2, but its performance and usability were so lacking that I wouldn't call it a "computer" ;-)
I nearly gave up on the article when it lauded the Google translation AI. It has improved from downright dangerous to laughable, but is that really "good"?
I do a lot of translations and I am nowhere near professional at it, but I can still run rings around Google's efforts, between German and English.
At least it has gone from falling off my chair laughing bad to sniggering bad.
As to the article, better an AI winter of discontent than a discontented Wintermute...
Nope, the translations are generally not good enough.
Automated transcription is not impressive and it certainly needs checking.
The last time I tried to translate a large piece of text, the Google Translate output was "so good" that I only re-wrote 95% of it, and I just needed a "good enough" translation for an email. I also did a short internship at a translation service, and the standards there are stricter still.
It works for some simple themes, as long as you don't need an accurate translation.
It may remove a lot of drudge work for someone bilingual in the relevant languages. Indeed, I know someone who does translations by running the text through Google Translate and then carefully fixing the result.
The problem is, if you don't know the original language, you can't tell when the automatic translation contains a conceptual error.
I feel like we sort of redefined AI downward until it matched what we already knew how to do, and then declared we'd conquered AI.
A lot of what's going on is statistical methods, like Bayesian classification. Calling it "intelligence" is a big stretch. Even calling it "learning" is a bit iffy.
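To make that concrete: the "learning" in something like Bayesian classification is mostly counting. Here is a minimal sketch of a naive Bayes text classifier in Python; the class labels and toy training texts are invented purely for illustration, not taken from the article.

import math
from collections import Counter, defaultdict

# Toy training data, invented for illustration.
training = [
    ("spam", "win money now"),
    ("spam", "cheap money offer"),
    ("ham",  "meeting agenda attached"),
    ("ham",  "lunch meeting tomorrow"),
]

class_counts = Counter()
word_counts = defaultdict(Counter)
vocab = set()

# "Training" is nothing more than tallying word frequencies per class.
for label, text in training:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def classify(text):
    words = text.split()
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # Log prior plus log likelihoods with Laplace smoothing.
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for word in words:
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("cheap meeting offer"))  # the counts decide; nothing "understands" the words

Whether you call a table of counts "learning" is, as the comment says, a matter of taste.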
People (and other animals) have been using their brains to good effect for millions of years. Very recently, a few intelligent people drew up rules for certain highly artificial abstractions that turned out to have practical applications - for example, geometry and trigonometry allow accurate measurement of fields for tax purposes.
Subsequently, the disciplines of mathematics and logic were identified and explored in some depth. Then machines were designed and built to implement a few mathematical and logical operations. The speed and small size of microelectronic circuits have allowed an increasing number of useful tasks to be performed by computers, often much faster - and sometimes better - than human brains could do them. (A goal mentioned by Blaise Pascal nearly 400 years ago).
Computers, then, perform a small subset of the functions of human brains - those that can be specified by very simple and limited abstractions such as logic and arithmetic. On the other hand, unlike all brains, they are not fundamentally driven by instincts to seek some outcomes and avoid others - their goals must be programmed in, and do not evolve. In consequence, they don't combine emotions and urges with logic, as brains do.
Currently the general population see AI as a method of superimposing one face onto another in a pr0n film.
Interesting comment about proprietary versus open. Personally I think open would be the best way forward, as closed would leave people at the mercy of algorithms which you can't see or control, and there has been plenty of noise about AI potentially being unintentionally discriminatory by design.
The problem is you're unlikely to find a smoking gun in the algorithm itself. What you really care about is the training data.
But to be honest we don't really need the algorithm to determine if something is discriminatory. For example, the algorithm for credit scoring is proprietary, making it a black box, but its disproportionate burden on certain groups when they try to not just borrow money, but also rent housing and land a job, is well-known.
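That point can be made concrete without ever opening the black box: given only the decisions and the group membership of the applicants, you can compute a disparate-impact ratio. A rough Python sketch follows; the toy data and the four-fifths (80%) rule of thumb are my own illustrative assumptions, not anything from the article or a legal standard.

# Measuring disparate impact from outcomes alone, without seeing the model.
decisions = [
    # (group, approved) - invented data for illustration
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")
rate_b = approval_rate("group_b")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rates: {rate_a:.2f} vs {rate_b:.2f}, ratio {ratio:.2f}")
if ratio < 0.8:
    print("disparate impact flagged -- and we never needed the algorithm itself")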
Lots of things are unintentionally discriminatory, including nature itself (think genetic lottery and generally the role of chance -- which people completely underestimate, clinging to the delusion they are in control). AI is one of them...
The problem with AI is that there is no appeal against its decisions. We are already neck deep in "this must be correct because the computer says so", but AI makes it orders of magnitude worse. You can -- in principle -- reason with people, and you can -- in principle -- point to bugs in code. You cannot do either with AI. So even discrimination that arises purely at random will likely become a self-fulfilling prophecy.
Intelligence is not a binary thing. Any creature with a complex body plan has sufficient "intelligence" to know its own spatial posture and to base decisions on that. Even a 1980s computer chess game had that level of intelligence. By the time animals get eyes and stuff, like say an insect, their level of "intelligence" grows to match the Big Data pouring in from their senses. Frogs, mice, crows, dogs, humans all represent an evolutionary chain of incremental advances in intelligence. Machines are no different. Currently they are probably around the frog level - once they level with the crow we can start crying "AGI".
No. What we are seeing is the beginning of what correlating and evaluating massive amounts of data can achieve with a bespoke program.
We are nowhere near AI and talking about improvements in translation, although impressive indeed, has nothing to do with AI and everything to do with better coding (meaning code that does the job better).
Besides, translation still has a ways to go before you can feed a text to Google Translate in English and get a proper German/French/Spanish/your-choice version that does not need to be almost completely rewritten by a competent linguist to be up to par with the original.
I reckon the translation thing is approximately going from "finding translations for each word in the original and returning them roughly in the order they were in the original text" (equal to Classic Chinglish) to "returning the most common phrase the resulting words tend to be found in, possibly also preferring the phrases typically found closest to the rest of the phrases in the text" (Modern Chinglish a.k.a. "just run it through Google"). True context awareness is nowhere in sight of course, seeing as how that would require actual proper Turing-resistant AI.
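To see why the word-for-word stage produces "Classic Chinglish", it helps to look at what a pure dictionary-lookup translator actually does. A toy Python sketch; the mini German-English dictionary is made up, and real systems are of course far more elaborate.

# Toy word-for-word "translator": look each word up and keep the original order.
dictionary = {
    "ich": "I",
    "habe": "have",
    "den": "the",
    "hund": "dog",
    "nicht": "not",
    "gesehen": "seen",
}

def word_for_word(sentence):
    # Unknown words are passed through unchanged.
    return " ".join(dictionary.get(w.lower(), w) for w in sentence.split())

# German puts the participle at the end, so the output order is wrong:
print(word_for_word("Ich habe den Hund nicht gesehen"))
# -> "I have the dog not seen"   (should be "I have not seen the dog")

Phrase-based systems patch over the worst of this by preferring common word sequences, which is roughly the "Modern Chinglish" stage described above.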
Processes that are largely not intelligent, but are able to make guesses and inferences about data given lots of training. Either that, or they use a lot of rules dreamed up by fallible meatbags (usually where "the computer says no...").
Identifying pictures as "cat picture" or "not a cat picture" is fine, but not that intelligent, given that it did not work out that one of them was actually a fire extinguisher with a hose wrapped round the bottom and two tags on the handle.
Using an algorithm generated by people will usually end up hitting scenarios that were not originally considered, which can make decisions appear arbitrary.
As the article says, though, the hype engine has worked and people are buying the concept.
I still remember the 80's when software like ELIZA was going to revolutionise interactions with computers and make them seem like people. I don't see a huge difference from that even to the expensively constructed digital assistants now proliferating.
The biggest danger in all this is that people end up trusting computers instead of their common sense...
It's an old idea that has regained momentum in the past few years, fuelling more hype...
It is not hype whenever Reading and Leading AI Gospels .... for they be Virtual Flight Travel Instruction Manuals.
For Per Ardua ad Astra AIMaster Pilot Programs .....Remotely Delivering Immaculate Missions.
cc .. Air Chief Marshal Sir Stephen Hillier, Chief of the Air Staff, in command of the Royal Air Force
El Regers and Commentards All,
I think it is high time that it should be made more widely known that SMARTR Machines are now Generating Future Raw Source for IntelAIgent Lead into New Cyber Terrain Territory ..... Live Operational Virtual Environments under Heavenly Protection and Control for an Almighty Command.
And such has been freely offered to home intelligence services and defence forces ... for their own trial beta servering, for the blazing of new trails.
How very odd that is not widely known. Is it residing and presiding in some Deep Google Minded NEUKlearer Silo?
Inquisitive Minds would have an Answer in Reply.
The first problem is defining what "intelligence" is. We have no solid technical definition of that. It's pretty hard to solve a problem that you haven't really defined. Or, it's easy, since you can do a "shoot and call what you hit the target" sort of "solution".
Alan Turing himself remarked (with what I think was very subtle irony) that as soon as we understand the mechanism behind any apparently "intelligent" behaviour, we insist that it isn't really intelligent. Thus, he argued, true intelligence must forever remain wrapped in mystery.
I very much wish I could have met and talked with him. People like that come along once in a very, very long time - and then the government kills them if it can. (The so-called "cyanide apple" was never analyzed for poison, and it seems likely on the whole that Turing was murdered for some reason comprehensible only to moronic apparatchiks).
For my money, Turing was as close as England has ever come to its own Leonardo da Vinci. Sadly, he only lived a bit more than half as long - and even then Leonardo's constant lament was "Di mi se mai fu fatta alcuna cosa". ("Tell me if anything was ever done/completed"). There is even more crushing irony in the thought that Leonardo survived for 67 years in the chaotic and violent 15th-16th centuries, whereas Turing only made it to 41 in supposedly "civilized" 20th-century England.
You don't need to be HAL to take out quite a bit of the middle of IT and accounting/exec jobs. With an AI baked into the Windows AD environment it would be a lot easier than the AI trying to administer by PowerShell scripts. The only thing saving us is that Microsoft's AI is dead last in the rankings. Can't drone on, so please use your imagination to fill in the gaps.
As noted, the same thing happens over and over: boffin-hobbits make a few baby steps which get over-hyped by companies or the media, and we end up with a lot of over-excited up-talking of what the technology is going to be capable of doing.
When this fails to manifest, it'll get quietly buried for another ten or twenty years to be taken out every couple of years so people can laugh at the naivety.
The same as earlier A.I., and the premature over-excitement over terms like VR in the 1990s.
I think it might actually be worse this time round, as Marketing have indeed latched onto it as another nebulous hype term to throw around, making previous exotic IT term usage look as naive as 1950s washing powder commercials.
Or roughly that, was what I heard Hubert Dreyfus say back in the early 1970s. Some of you cursed that name when you read it, some nodded in agreement, and most went "Whodat?".
Anyway, his point was that much of AI seemed to consist of grant proposals that were equivalent to someone stacking one brick atop another and noting that the height of the stack had doubled, thus "proving" that in only about 30 such steps the stack would reach the moon.
I am very cynical about AI, having experienced what happened in the 1980's. It was just the same as now, with all the hype but without the threat of AI controlled cars.
The idea of a driver-less car sounds great, until all the qualifications are spelled out - such as that these cars might be restricted to motorways, and would have to revert to manual control in other circumstances - so you can't be driven home after a boozy party, nor can you send your children somewhere alone by car, nor can you start a walk at point A and tell the car to meet you at point B!
A real driver-less car would have to figure out whether a child (or adult) was likely to blunder into the road, whether a lorry shedding some of its load was a danger or not, understand the difference between a horse with rider, and one without (which I encountered once on a motorway), recognise the sound of some piece of debris getting stuck under the car, etc etc.
The 1980's AI hype was dominated by the idea of Logic Programming, which is hardly ever mentioned nowadays. LP was great at figuring out family relationships, but only if nobody was adopted, changed sex, or whatever! There is no steady progress towards AI - just a series of hype-fuelled lurches.
I've just finished Bratko's 'Prolog Programming for Artificial Intelligence' and it's alive and kicking. It's way, way beyond family relationships. Get a copy from your local library.

I'm self-unemployed at the moment and ploughing through as much AI as possible, and compared with what I saw in the 80s it has come a very long way down an extremely long road. In the right hands I think it's capable of some remarkable things - but then so is software itself, in the right hands with the right people managing it.

I think we may be coming up to what will be called a winter when it is really a plateau - a bit like when you realise you need to completely re-factor something to make it go that bit further. Libraries will be consolidated, and people who know what they are doing will help other people who know what they want to do in an AI-ish way. Unlike the massive effort spent managing paper-shaped documents on computers - which has seriously wasted the output of some of the most intelligent people from our universities for the last 30 years - we will see some serious jumps forward.
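For anyone who never met Logic Programming, the family-relationships example mentioned above amounts to facts plus rules that an inference engine chains together. Below is a rough Python stand-in for what a few lines of Prolog would express; the names are invented for illustration, and in a real LP system the engine performs the search for you rather than it being spelled out.

# Prolog-style facts, sketched as a set of (parent, child) pairs.
parent = {
    ("tom", "bob"),
    ("bob", "ann"),
    ("bob", "pat"),
    ("liz", "tom"),
}

def grandparents_of(person):
    # The Prolog rule would read: grandparent(G, C) :- parent(G, P), parent(P, C).
    return {g for (g, p) in parent for (p2, c) in parent if p == p2 and c == person}

print(grandparents_of("ann"))  # {'tom'}
print(grandparents_of("pat"))  # {'tom'}

As the earlier comment notes, the rule is only as good as the facts: adoption, name changes and so on all have to be modelled explicitly or the answers quietly go wrong.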
I don't see us coming to a "winter" or a plateau. I see us coming to a fork, where businesses want to take "AI" in one direction, advancing the mechanisms that already exist; while researchers want to take AI in another direction, attempting actual AI and not just a simplification of the definition.
Siri, Cortana, Alexa, etc. are still as dumb as the annoying Office Assistant in M$ Office 97.
Meanwhile, back in 1997, we had Dragon NaturallySpeaking (now owned by Nuance) running on a Pentium 1 with a whopping 166 MHz. Now the speaking agent needs constant "cloud" web access, so the Nuance software that powers Siri, Cortana and Alexa runs on internet "cloud" servers instead of on a high-end 3 GHz octo-core smartphone that runs rings around the good ol' Pentium 1.
And of course Google shut down Freebase.com (which was the biggest open-source ontology) to prevent competition from smaller companies in software assistants.
We have had these expert systems - the hated telephone computers - for four decades. And guess what: an A.I. winter is coming again, because the progress just didn't happen. Well, except for semi-automated FakePorn, I guess.
I'm not saying mine are the right hands, but I have a Pi3/camera combination that currently recognises the difference between a chicken, a duck, a rat and a magpie with over 90% accuracy. This is quite useful when it comes to stopping the latter two nicking the food and eggs. If I can get it to 99.9% on the rat I might just automate a trap, and for the magpie a loud alarm (which would put the hens off lay). It's not much, but it would pay for itself quite quickly. (A rough sketch of the decision logic follows after this comment.)
A similarly trained device for insect recognition (with a laser!!!) could roam crops and do away with the need for pesticides, and weed recognition could likewise reduce the need for herbicides. Both solutions, in mass production, could work out cheaper than the chemicals, without the likelihood of resistant mutations cropping up (OK, you do get mirrored beetles and silvered plants).
Useful solutions will continue to develop - it's the twats trying to sell you shit you don't want that are going to have a winter, not AI.
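The bird-versus-rat setup described in the comment above boils down to running a trained classifier on each camera frame and only acting when the confidence clears a per-class threshold. Here is a minimal Python sketch of that decision logic; the classifier itself is treated as a black box, and the labels, thresholds and trigger actions are my own assumptions rather than the commenter's actual code.

# Decision logic for a camera-based pest deterrent.
TRAP_THRESHOLD = 0.999   # be very sure before springing a trap on a "rat"
ALARM_THRESHOLD = 0.90   # a false magpie alarm only risks annoying the hens

def handle_frame(frame, classify):
    # `classify` stands in for whatever trained model the Pi is running;
    # it is assumed to return a (label, confidence) pair.
    label, confidence = classify(frame)
    if label == "rat" and confidence >= TRAP_THRESHOLD:
        return "spring trap"
    if label == "magpie" and confidence >= ALARM_THRESHOLD:
        return "sound alarm"
    return "do nothing"   # chickens and ducks are left in peace

# Quick check with fake classifiers: 97% sure it's a rat is not sure enough.
print(handle_frame(None, lambda frame: ("rat", 0.97)))     # do nothing
print(handle_frame(None, lambda frame: ("magpie", 0.95)))  # sound alarm

The asymmetric thresholds reflect the costs the commenter describes: a missed magpie only loses an egg, while a wrongly sprung trap is a much worse outcome.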
Looking at things through Other Cloudy Lenses .....
amanfromMars Feb 9, 2018 3:21 AM [1802090821] ..... upping the ante on https://www.zerohedge.com/news/2018-02-07/china-developing-ai-enabled-nuclear-submarines-can-think-themselves

"The one question we ask: What piece of military hardware will China infuse an AI system on next?"

Another question for answering, and one which is much more difficult to realise is true and there is precious little that one can do to either halt or divert it from an already chosen path, is: what piece of hardware, military or civilian, will AI systems infuse for China next?