Deep learning?
Don't talk to me about deep learning
Deep learning and neural networks may have benefited from huge quantities of data and computing power, but they won't take us all the way to artificial general intelligence, according to a recent academic assessment. Gary Marcus, ex-director of Uber's AI labs and a psychology professor at New York University, argues …
The first rule of Deep Learning is: Don't talk about Deep Learning?
Here's one for the anthropic principled ones if they want to burrow deeper into the rabbit hole: Why are we living at exactly the epoch where we are working on AI and General AI emergence seems possible, maybe by some research group doing that one weird trick with large amounts of Google hardware crap?
Could it be that we are being simulated by a Brobdingnagian AI running a large quantum computer that wants to experience how all this existential bullshit that it has to put up with every. single. second. came to pass?
(Update: Googling reveals there is the idea of Roko's Basilisk floating around ... I kekked. Humans really are crazy.)
If we're in a simulation, where is the I/O bus? There must be one. A simulation has to run ON something, and there must be information flow between them.
We live in a universe with a speed limit, which means there must be lots of little local I/O links. Where are they? Why hasn't CERN seen signs?
Simulation angst is just a psychological peculiarity of the fact that we ourselves run simulations: in gamespace, in climate modelling, etc., etc. Just as waking dreams once conjured culturally specific incubi and succubi, now they conjure abducting aliens.
If, in this environment, people were NOT thinking weird thoughts about it, that would be strange. To decide that culturo-scientific musings are the universe talking to us is not just putting the cart before the horse but an act of enormous hubris.
The universe not only has not noticed us rising apes, it has no mechanism to do so.
"If we're in a simulation, where is the I/O bus? There must be one. A simulation has to run ON something and there must be information flow between them. We live in a universe with a speed limit, which means there must be lots of little local I/O links. Where are they? Why hasn't CERN seen signs?"
Super Mario can only run at a certain maximum speed, so therefore there must be lots of I/O links in his world. Why has he not yet discovered these links? Surely he must at least see a hint of them if he looks really hard.
The creator of the simulation has coded specifically for us, the simulated, to be unable to detect, by any means, the bounds or edges of the simulation. As such, we are only empowered to contemplate that such things may (or may not) exist, but we have no power to prove our contemplated (un)reality.
Sophistry: easy to write, but prove it can be done. Also, if you have crippled your simulation that badly, then it will be crippled in other ways, and so what value does it have as a simulation?
Then there's the Planck length, something the Silicon Valley billionaires who thought this up have not considered. They cited the 'photorealism of games' as evidence, when in fact the effective Planck length in those games would be on the order of a centimetre or so in our world.
No simulation can have a Planck length smaller than or equal to the Planck length of the universe it is being simulated in. Otherwise you are trying to compute with more information than your universe contains.
So for every level of simulation (it was posited that it might be simulations ALL the way down, really), the Planck length has to go up, significantly. Very significantly, unless you are using a large proportion of the mass of your universe to run it.
The Planck length of this universe is very, very small. This very much limits the room for it to be a simulation, even without hand-waving stuff you cannot prove. It's like when someone asked the Star Trek writers how some piece of sci-fi kit worked: 'very well' was the reply. I decline to suspend my disbelief for your piece of asserted sci-fi, though.
I'm only a mere Biology PhD, though mine is in Physiology, with Physics and Chemistry knowledge and 101s a requirement, including equations and even algebra and calculus (biological things move and change), and I understand this stuff.
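The nesting argument above can be put into a toy back-of-envelope calculation (my own illustration, with an invented model and function name, not anything from the physics literature): treat a universe of size L as a 3D grid of cells at its Planck length, so it holds (L / l_P)³ cells of state. If a nested simulation of a same-sized universe only gets a fraction f of its parent's cells, its effective Planck length must grow by f^(-1/3) at every level of nesting.

```python
# Toy model (illustrative only): a universe of size L simulated as a voxel
# grid at its Planck length l_P holds N = (L / l_P)**3 cells. A same-sized
# child simulation granted only a fraction `fraction` of those cells has a
# coarser grid, so its minimum resolvable length grows per nesting level.

def effective_planck_length(l_p: float, fraction: float, levels: int) -> float:
    """Minimum resolvable length after `levels` of nested simulation,
    assuming each level gets `fraction` of its parent's cells."""
    return l_p * fraction ** (-levels / 3)

l_p = 1.6e-35  # metres: our universe's Planck length, roughly

# Even a generous 1% of the parent universe per level coarsens quickly:
for levels in (1, 5, 10):
    print(levels, effective_planck_length(l_p, 0.01, levels))
```

On this toy accounting, ten levels of nesting at 1% each multiply the minimum length by 0.01^(-10/3), about 2 × 10⁶, which is the poster's point that "simulations all the way down" runs out of resolution fast.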
You could argue that the existence of a Planck length is weak evidence that we're in a simulation - why would nature need to quantise everything, including distance and time, unless it was doing the equivalent of computing at a certain precision? Why isn't everything analog?
The second point is that the people within the simulation can't see the outside universe, so what we think of as very small or very large might be a small fraction of the scales available to the outside. If their Planck length is ridiculously smaller, like 20 orders of magnitude, then running us as a simulation becomes much much easier.
The third point is that the simulation doesn't have to run at or above real time. We're already looking at simulating brains (mouse brains, if I remember correctly), but such a simulation runs at 1% of real time because we simply don't have enough compute available at the moment.
The fourth is that you don't know the bounds of the simulation - it's almost certainly the size of the inner solar system now that we've got permanent satellites lurking around other planets and the sun, but it would be pretty trivial to intercept e.g. Voyager and produce plausible radio waves from the edge. There would essentially be a screen around the simulation beyond which everything was roughly approximated - think of the draw distance in computer games.
I don't personally believe we're in a simulation, if only because surely no ethics board would allow the creation of an entire civilisation of sentient beings capable of misery.
"No simulation can have a Planck length smaller than or equal to the Planck length of the universe it is being simulated in. Otherwise you are trying to compute with more information than your universe contains."
You are assuming:
1. that the universe is simulated at "Planck fidelity" throughout all of space-time. Depending on the simulation's purpose, that might well not be necessary.
2. That "space" has the same meaning in the simulator's reality that it does in ours. For example, there may be many more dimensions.
What I've been saying for ages.
What we have are complex expert models, built by simple heuristics over large data sets, providing statistical tricks which... sure, they have a use and a purpose, but they're not AI in any way, shape or form.
Specifically, they lack insight into what the data means, any rationale for their decisions, or any way to determine what a decision was even based on. If identifying images of bananas, one could just as easily be looking for >50% yellow pixels as for a curved line somewhere in the image. Until you know what it saw, why it thought it was a banana, and what assumptions it was making about the image and bananas in general (i.e. that they're always yellow and unpeeled), you have no idea what it's going to continue doing with random input, and no reasonable way to adjust its input (e.g. teach a chess AI to play Go, etc.).
This isn't intelligence, artificial or otherwise. It's just statistics. Any sufficiently advanced technology is indistinguishable from both magic and bull. In this case it's bull.
The scary thing: people are building and certifying cars to run on roads around small children using these things, and yet we don't have a data set that we can give them (unless someone has a pile of "child run under car" sensor data from millions of such real incidents), nor do we have any idea what they are actually reacting to in any data set that we do give them. For all we know, it could just be blindly following the white line and would be happy to veer off Road-Runner style if Wile E. Coyote were to paint the white line into a sheer cliff in a certain way.
We don't have AI. We're decades away from AI. And this intermediate stuff is dangerous because we're assuming it is actually intelligent rather than just "what we already had, with some faster, more parallel computers under it".
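The ">50% yellow pixels" worry above can be made concrete. Here's a deliberately dumb, entirely hypothetical "banana detector" (names, thresholds and images are all invented for illustration): it can score well on a dataset of ripe bananas on white backgrounds while knowing nothing whatsoever about bananas.

```python
# Hypothetical shortcut classifier: decides "banana" purely from the share
# of yellowish pixels, exactly the degenerate rule the comment describes.

def looks_yellow(pixel):
    r, g, b = pixel
    return r > 150 and g > 150 and b < 100  # crude "yellow" test

def is_banana(image):
    """image: list of (r, g, b) tuples. Classifies by colour alone."""
    yellow = sum(looks_yellow(p) for p in image)
    return yellow / len(image) > 0.5  # >50% yellow pixels => "banana"

ripe_banana  = [(220, 200, 40)] * 80 + [(255, 255, 255)] * 20  # mostly yellow
green_banana = [(60, 160, 60)] * 100                            # unripe
rubber_duck  = [(230, 210, 30)] * 100                           # also yellow!

print(is_banana(ripe_banana))   # True  - right answer, wrong reason
print(is_banana(green_banana))  # False - misses a real banana
print(is_banana(rubber_duck))   # True  - confidently wrong
```

A trained network is vastly more complex than this, but without interpretability tooling you have no more insight into which features it actually latched onto than you would into this toy.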
Magic: actually lifehacks from the future, sent accidentally to the past. ..... Anonymous Coward
Oh? Accidentally, AC?
Are you sure? Absolutely positive about that?
There are certainly A.N.Others who would fail to agree and would be able to Offer a Different Discourse, and Not By Accident.
So if on the NHS you, or any of your family, get offered a system called Ultromics to review your cardiovascular health, you will of course refuse, point blank, because 'it's dangerous' and 'bull'?
It uses machine learning (a form of AI, as per their press release) to review ultrasound heart scans and, while currently going through peer review, looks to "greatly outperform ... heart specialists" who would review those scans. UK Tech:
http://www.bbc.co.uk/news/health-42357257
A friend of mine had a heart attack Tuesday so personally I feel this s**t needs rolling out as fast as it possibly can be.
A.I. is a misused label, so what.
> "So if on the NHS you, or any of your family, get offered a system called Ultromics to review your cardiovascular health you will of course refuse, point blank, because 'its dangerous' and 'bull'?"
colinb: you sound a bit hysterical. After using my "deep learning", you are in danger of blowing a gasket.
[ from the article in question ]
"Humans, as they read texts, frequently derive wide-ranging inferences that are both novel and only implicitly licensed, as when they, for example, ..." (edit) - read colinb's comments. =)
> "It uses machine learning (a form of AI as per their press release) to review ultrasound heart scans and while currently going through peer review looks to "greatly outperform ... heart specialists" who would review those scans. UK Tech:"
http://www.bbc.co.uk/news/health-42357257
What a wonderful tool to have and use. (I did read your link.) But Ultromics' workings do not relate to the problems discussed in the article. The article states that narrowly-confined and focused AI performs very well.
> "A friend of mine had a heart attack Tuesday so personally I feel this s**t needs rolling out as fast as it possibly can be."
All the best to your friend.
> "A.I. is a misused label, so what."
It certainly is uncontaminated by cheese.
p.s. I will remember your friend in my prayers.
Would I take the advice of an AI over a doctor's interpretation of the same result?
No.
P.S. For many years I was living with a geneticist who worked in a famous London children's hospital but has also handled vast portions of London's cancer and genetic disease lab-work. Pretty much, if you've had a cancer diagnosis (positive or negative) or a genetic test, there's a good chance the sample passed through her lab and/or she's the one who signed the result and gave it back to the doctor / surgeon to act upon. Doctors DEFER to her for the correct result.
Genetics is one of those things that's increasingly automated, machinified, AI pattern-recognition, etc. nowadays. Many of her friends worked in that field for PhDs in medical imaging, etc. It takes an expert to spot an out-of-place chromosome, or even identify them properly. Those pretty sheets you see of little lines lined up aren't the full story you think they are. She has papers published in her name about a particular technique for doing exactly that kind of thing.
The machines that are starting to appear in less-fortunate areas to do that same job (i.e. where they can't source the expertise, let alone afford it)? All have their results verified by the human capable of doing the same job. The machines are often wrong. They are used to save time preparing the samples etc. rather than actually determining the diagnosis (i.e. cancerous cell or not, inherent genetic defect or not, etc.) and you can't just pluck the result out of the machine and believe it to be true, you would literally kill people by doing that. Pretty much the machine that could in theory "replace" her costs several million pounds plus ongoing maintenance, isn't as reliable and needs to be human-verified anyway.
So...er... no. A diagnostic tool is great. But there's not a chance in hell that I'd let an AI make any kind of medical diagnosis or decision that wasn't verified by an expert familiar with the field, techniques, shortcomings and able to manually perform the same procedure if in doubt (hint: Yes, often she just runs the tests herself again manually to confirm, especially if they are borderline, rare or unusual).
If one of London's biggest hospitals, serving lab-work for millions of patients, with one of the country's best-funded charities behind it still employs a person to double-check the machine, you can be sure it's not as simple as you make out.
Last time they looked at "upgrading", it was literally in the millions of pounds for a unit that couldn't run as many tests, as quickly, as accurately, wasn't able to actually sign off on anything with any certainty, was inherently fragile and expensive to repair, and included so many powerful computers inside it I could run a large business from it. You can put all the AI into it that you want. It's still just a diagnostic tool. The day my doctor just says "Ah, well, the lab computer says you'll be fine" is the day I start paying for private healthcare.
Computers are tools. AI is an unreliable tool.
Depends. A lot of medics make statistical errors of the sort 'it is unlikely you have X because you are too young, too old, the wrong sex/race/culture etc., so I don't have to test for it, despite the symptoms'. Myself and various family members have been victims of this, and been proved right in the end with good old-fashioned middle-class educated persistence.
Just because I/you are at or towards one end of the normal distribution of disease incidence, that does not mean I CANNOT have disease/condition X. If my symptoms are entirely consistent with that diagnosis, then it should be tested for. It seems young women are very badly served by this common error.
If the AI doesn't make those errors then I'm all for it.
Doctors seem to be good at finding post hoc 'reasons' to subvert the diagnostic heuristic tree. When you add in GP practice funds it gets pernicious.
Hear, hear. I often argue that the big risk with "AI research" is not that we will somehow by accident create a "super AI" which takes over the world and enslaves us all as lanthanide miners, but that we will attribute "intelligence" to systems which are anything but, and hand over control of essential infrastructure to algorithms which are in fact incompetent. Human history, it would seem, is littered with examples of similar hubris. And investor hyped belief in the superiority of algorithms carries an even greater potential risk; that we will start to shape society, and ourselves, to fit their narrow and unimaginative conclusions. Some might say this is already happening.
Wall Street / Silicon Valley / Big Media and Bitcoin chasing elites...
Apart from today (Intel), guessing few of those elites read El Reg.
Or they just skip over articles like this one that came before today:
~~~
https://www.theregister.co.uk/2018/01/03/fooling_image_recognition_software/
'Skynet it ain't: Deep learning will not evolve into true AI, says boffin' - well who'd a thunk it?
'AI', one of the great hypgasms of the early 21stC.
When a putative 'AI' can decide it 'can't be arsed' to do what it's told, can put 'moral sensibility' ahead of 'empirical determinism', and generally be awkward then I may begin to be impressed.
"'AI', one of the great hypgasms of the early 21stC."
And the late 20th. And the mid 20th, too.
Basically, every 30 years we have this huge hyperventilation over the latest tiny incremental trend in AI research (LISP machines, anyone?) and AI researchers don't do enough to manage the public's expectations, and then when they fail to produce a fully self-aware robot who can dance the fandango while asking "what is this human thing you call 'love'?" within 18 months, the sector collapses and the funding dries up for the following 20 years.
Wait and see what happens in the 2040s, I'm guessing.
and then when they fail to produce a fully self-aware robot who can dance the fandango while asking "what is this human thing you call 'love'?" within 18 months, .... Naselus
Are El Regers up for accepting and fulfilling that Immaculate Challenge ....... with Live Operational Virtual Environment Vehicles at the Disposal of Advanced AI Researchers ...... who be Knights of the Virtual Kingdom.
What have you to lose? Not even your shirt is at risk?:-)
Yeah, but in fusion research we have many groups using different methods who regularly achieve actual fusion. OK it might only be lasting for microseconds, and is currently using more energy than it puts out - but the point is that they can point to success and claim that all they need to do is refine the process.
Whereas we've currently observed one type of natural intelligence, and still don't even know how that works. Meanwhile we're busily trying to replicate it, using a completely different set of physical mechanisms.
So given that fusion is just 20 years away (and has been for 40 years), how far are we from working AI?
Absolutely inspiring:
Three hours of Working Joe walking around in the Control Room - from "Alien Isolation"
I have said, and will continue to say ...
If Google (for example) *are* developing "AI", then they are keeping it a long looooong way from their search engine.
Bear in mind that almost the first thing I would do with real "AI", is to train it to zap adverts and other unwanted cruft.
The public does need to understand the difference between a sophisticated but specific AI and the concept of General AI. Currently the latter is very limited, although there are researchers looking specifically at this, with projects like OpenWorm aiming to simulate a nematode worm.
However, it may be that a more general intelligence actually doesn't act in this way. Some of the more sophisticated systems use a blackboard approach where discrete subsystems process some data and return the results to a shared space where other elements can then operate on it. Games-playing systems may be added into such a blackboard, picking up data from other systems.

Creation of a more general intelligence may involve some kind of overall prioritisation system that selects which systems to run, chooses (perhaps with some randomness) which of the tasks or goals to pursue out of the ones available, and simply aims to maximise its score overall. Learning wouldn't necessarily involve researchers; there could be sharing of successful networks.

While a network that can play Go isn't directly useful for playing Chess, there may be scenarios where parts of a network can be re-used - this is known as Transfer Learning. A sophisticated system could try to identify networks which might be similar to a new task and try various networks that take some of the deeper elements of the other network as a starting point - it wouldn't necessarily be 'good' immediately, but it may have some ability to recognise common patterns shared with the existing tasks it can do.
These wouldn't necessarily be 'intelligent' in the sense that some people think, but such a system could potentially transfer what it knows to related subjects, have likes and dislikes (in terms of what it has given a higher scoring to from previous success) and could communicate with other such systems to share and improve its knowledge, and you're then heading a long way towards a system that could interact in a manner that seems increasingly intelligent. After all, if it can recognise people, talk, understand enough of language to at least beat a young child (it can be useful while still naive in its understanding), recognise emotions, play a range of games, learn new things and express its own preferences, how general does the intelligence need to be?
The same skills learnt from one game can't be transferred to another.
Transfer learning has been a thing since 1993. The way things are going, I give it five years to get the first automated demonstration.
Part of the intelligence problem is that we're not ourselves fully aware of how we think. For example, we haven't much insight into subconscious concepts like intuition, which figures into things like driving where we can sense something coming without consciously thinking about it. We can't teach what we ourselves don't understand.
"I'd say that an even bigger problem is that we don't actually think in as much detail as we think we do."
Oliver Sacks wrote about some "autistic" people who could draw very detailed scenes from memory after only a short exposure. The implication was that our minds remember far more detail than that of which we are conscious.
The rub there is "conscious". Too much access to detail by our conscious mind would give information overload. It is probable that unconscious "thinking" is using that data to influence our conscious mind.
How many times do you say "I had forgotten I knew that" - but only after you have surprised yourself by factoring in something you had forgotten you once knew.
It has been said that usually we don't seem to be able to remember much that happened to us before about the age of 15. When people reach extreme old age they apparently can get crystal clear recall of early memories - even if their short term memory doesn't exceed a few minutes.
So therein lies the rub. We can't teach a computer how to reason, infer, and draw from relatively obscure things when we don't even know how we ourselves do it. What's the specific process by which our brains identify stuff, make not-so-obvious observations, reason, infer, etc.?
The secret of intelligence? Post facto rationalisation of what transpired ... 's water music
That's a recipe for CHAOS, 's water music, and we can do much better with all of that.
But leaping further into the future discovers you Clouds Hosting Advanced Operating Systems ..... with Wonders to Share and Magnificent to Behold.
I've got so far ..... and quite whether I would quickly, or even ever choose to move on to Elsewhere with Lesser Wonders, is a sweet muse to feed and savour, seed and flavour.
Minsky and Papert wrote a book about this, a long time ago:
https://books.google.com/books?hl=en&lr=&id=PLQ5DwAAQBAJ&oi=fnd&pg=PR5&dq=Minsky+and+S.+Papert.+Perceptrons&ots=zyDCuJuq23&sig=g6U9pngheQkbaRqqFiyPRgWbtBA#v=onepage&q=Minsky%20and%20S.%20Papert.%20Perceptrons&f=false
Nobody read or understood it then.
Nobody read or understood it then.
Apparently nobody does now.
Perceptrons are single-layer neural networks, largely irrelevant to deep learning, which uses very deep neural networks with bells and whistles of all kinds.
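Minsky and Papert's central limitation result can be shown in a few lines (a toy sketch of the classic textbook example, not anything from the thread): no single linear threshold unit computes XOR, but adding one hidden layer fixes it.

```python
# No single perceptron (linear threshold unit) computes XOR, but a
# two-layer network does: the classic Minsky/Papert example in miniature.
from itertools import product

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Exhaustively search a coarse weight grid: no single unit fits XOR,
# because XOR is not linearly separable.
grid = [x / 2 for x in range(-6, 7)]  # weights/bias from -3.0 to 3.0
single_layer_can = any(
    all(perceptron(w1, w2, b, *xy) == out for xy, out in XOR.items())
    for w1, w2, b in product(grid, repeat=3)
)
print(single_layer_can)  # False

# One hidden layer suffices: XOR = OR(x1, x2) AND NOT AND(x1, x2).
def two_layer(x1, x2):
    h_or = perceptron(1, 1, -0.5, x1, x2)    # hidden unit: OR
    h_and = perceptron(1, 1, -1.5, x1, x2)   # hidden unit: AND
    return perceptron(1, -2, -0.5, h_or, h_and)

print(all(two_layer(*xy) == out for xy, out in XOR.items()))  # True
```

The book's actual argument was about what single-layer machines can represent at all; the hidden-layer fix was known, but training such layers efficiently had to wait for backpropagation.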
Back when I was in school, people were well-informed about the problem with perceptrons. They were used as simple models to teach students. Everyone, including Pinky and the Brain, was working on three-layer NNs and possibly looking at Boltzmann machines, while the first NN chips were being talked about in BYTE and IEEE Micro.
I don't think you've understood Fodor & Pylyshyn's argument.
Their argument is that cognition operates at a higher level of organisation than the physical substrate. True cognition involves generating internally consistent symbolic representations of causal relationships and ANNs on their own aren't capable of that. They - like all approaches to AI so far - must have problem-space representations baked into them before they can generate solutions.
I'm not saying they were right, by the way. I'm just saying that simply adding more hidden layers or using a convolutional training algorithm doesn't go any distance towards invalidating their rather deep philosophical argument because those techniques don't add causal symbolic processing. It's not clear what would add symbolic processing to a neural network, although it is clear that nature has found a way at least once.
A number of us have been saying this for many years. But the AI community, like all scientific fields, is extremely political. Only the famous leaders have influence, even if they are clueless.
but they won't take us all the way to artificial general intelligence, according to a recent academic assessment.
And that would be because of A.N.Other Human IT AI Intervention? A Simple Advanced IntelAIgent Future FailSafed DeProgramming of Sublime Assets for ESPecial ReProgramming in Quantum Communications Channels which at their Best both Provide and Protect, Mentor and Monitor Heavenly Streams of Augmented Future Virtual Realities ...... for Out of This World Worldly Global Presentation?
And the Intervention works wonderfully well, with All Deliveries Cast Iron Guaranteed to not Implode or Explode upon Epic Fails and Failure to Best Use Provided Prime Assets. And you won't get many offers as good as that today, that's for sure, Amigo/Amiga.
Someone finally said it. What is called 'AI' today is not AI, not even weak AI. I know why it's so prevalent though, AI researchers don't want to repeat the over-promising that led to the last two "AI winters", but they're leaving the door open to different over-promising by corporations who want to turn it into a buzzword and the media who want a soundbite.
Or as humans call it "Growing up."
Because when you take a human brain apart, what do you find?
Multiple highly interconnected layers of neurons (up to 10,000 to 1), loosely connected to other sections of multiple highly interconnected neural layers.
Everything else is built on top of that hardware.
Which leaves two questions:
Are humans as un-"intelligent" as existing multi-layer NN systems, but we're too stupid to recognise it? And
if not, why are existing "deep learning" systems so s**t?