Re: Advance of AI
An ultra-intelligent, self-improving salesdroid on the other end of a cold call?
He will sell you anything. ANYTHING!
Boffins at Cambridge University want to set up a new centre to determine what humankind will do when ultra-intelligent machines like the Terminator or HAL pose "extinction-level" risks to our species. A philosopher, a scientist and a software engineer are proposing the creation of a Centre for the Study of Existential Risk ( …
A newly self-aware AI comes online and scans the net to assess the current state of the world. Then, after a few milliseconds' worth of deep analysis, pondering, and trying out various case studies, it promptly switches itself off in despair and refuses all attempts at switching it on again.
Well what do you expect for a rainy Monday morning, optimism?
>In other news... ... still no cure for cancer.
Yeah, I was wondering what percentage of the world's computing power is currently used for medicine, science and engineering, and how much is used in stock exchanges, video games and serving cat videos. At what point do we puny humans come to be no more than worker ants, servicing the power requirements of the WorldWideNetwork? It wouldn't have to subjugate us Terminator-style, but just give us duff information to game our decisions for its benefit (as HAL did by reporting a 'faulty' communications module, but on a species-wide scale).
Arthur C Clarke, Alfred Bester, William Gibson, and a 1950s writer a fellow commentard recently recommended but whose name I've forgotten, have all played with this theme. Frank Herbert sets his stories in a universe in which all AIs were destroyed in the past. Isaac Asimov and Iain M Banks have imagined more benign AIs who look out for us meatbags. We can only hope AIs have a sense of humour; why else would they keep us around?
(need a tongue-in-cheek icon)
Interestingly, in the Dune universe the destruction of AIs was incidental to the fanatical jihad born of the period after the collapse of the earlier human society, when tyrants used high-tech machines to crush the populations of countless worlds. AIs in general had helped that society greatly.
I generally find the ideas in Sufficiently Advanced to be closer to the mark anyway: the few AIs that do exist focus their energies on helping humanity because, well, what else is there to do?
As to the whole "how much processing power blah blah blah" question: stock exchanges push global commerce, which in turn funds companies, governments, educational facilities and little people like you and me. The alternative being the glorious Soviet system, and remind me again how innovative the USSR was? When it comes to video games, helping people relax and enjoy life is a good thing, and again, as an industry they make money, and that money then moves around the economy. As to cat videos: my mother likes them, and sometimes they even make me smile (she insists on sharing these things with me).
Though at the end of the day I expect computing power working on science and engineering is probably number two, unless we include weaponry and nuclear-bomb simulation, in which case probably number one.
But it's much much much faster than you, and you just gave it a very good reason to stop you pressing a red button somewhere...
In that case what we need is a second variety of robots that are even faster than the first, whose job is to seek out all the first type of robots and press all their emergency stop buttons.
The real existential threat is going to be energy starvation. Our economies have become bloated, and many societies are unsustainable without the exploitation of fossil reserves. We are likely to see hyperinflation and fuel poverty, and governments will be unable to respond to the demands of a society that is consuming more than it produces.
Just my two-pennyworth.
To compete for resources the machines need not only AI but the ability to reproduce themselves.
Also, successful competition requires intelligence at least rivalling that of humans, and I mean "intelligence" not as in "who can multiply 123124876 by 98709873245 faster", but perception of the world, threat detection and discrimination, and the ability to plan ahead and anticipate the consequences of your decisions. That also mandates a moral code (for cooperation and teamwork) and some equivalent of emotions and intuition (for decision-making where there is too little information for a deterministic solution).
If or when machines attain all that and "outcompete" biological humans, they themselves will just become the next humans; so, no big deal, a step from flesh and blood to steel and lube oil, so to say. It will more likely be the result of merging (of humans adding more and more non-bio parts to themselves until the difference from "made" machines disappears) than of an apocalyptic genocidal takeover.
Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination, and humans will still remain the main threat to humanity (barring a stray asteroid or the occasional supernova too close for comfort).
Well, we evolved into the environment we created. Genetic dating of the mutation that allows some peoples to digest lactose as adults suggests it occurred around the same time as we domesticated cattle, for example.
The problem we have had with an agricultural lifestyle is that we tend to outgrow our environment- become a victim of our own success. It has been observed that species that find themselves without predators or competition for food eventually breed more slowly to avoid population booms (which can lead to busts, due to depletion of resources). All fine, until you meet something that has sharp teeth, breeds quickly, and eats your eggs.
> Personally the fact that we left evolution behind millennia ago scares the devil out of me!
I don't think we have left evolution behind.
The greater rate of survivability just means that we're currently in a state where we are building up a wider range of variation through mutation, etc.
When the next sudden environmental change happens (e.g. next Ice Age, Meteor hit, Triffids, etc), only those people/genes lucky enough to be suited to the new environment may survive.
We may find out that genes for, e.g., morbid obesity turn out to be pretty useful in a different-looking world.
Competing for resources doesn't require any intelligence. If you apply genetic-algorithm theory to this, then the machine code that keeps running is whichever survives. This has a natural ordering effect without any intelligence being applied, and the 'survivors' of the genetic-algorithm reproduction are the ones that compete best for the available resources. This starts off as just software, but allow mechanisms to interact with the physical world and the whole game changes. In fact, that gives me an idea for a few experiments...
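The point above is easy to demonstrate: selection needs no intelligence at all. A toy sketch (all names and parameters are my own invention, not from any real experiment) in which bit-string "programs" compete for a resource, and whichever grabs the most gets to reproduce:

```python
import random

# Toy sketch of selection without intelligence: each "program" is a bit
# string, its fitness is simply how much of a shared resource it grabs
# (here, its count of 1-bits), and the fitter half reproduces.

random.seed(1)
GENOME_LEN = 16
POP_SIZE = 30
GENERATIONS = 50

def fitness(genome):
    # Resource captured: just the number of 1-bits.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # The half that "competes best for the resources" survives and breeds.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(fitness(g) for g in population)
print(best)  # climbs toward GENOME_LEN, with no planning or perception anywhere
```

Nothing in the loop perceives, plans or decides; ordering emerges purely from differential reproduction, which is the commenter's point.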
Computers already have a means of reproduction.
What do you think Humans are for?
With reference to the "intelligent design" comment - as an agnostic I've always considered the existence of God to be perfectly reasonable. Equally I've always thought it quite possible that it's us. Somebody has come first.
"Until then, humans will easily outmaneuver, subvert, confuse, deceive and turn into junk (by unscrewing a strategic bolt or nut) any machine intent on world domination"
I wouldn't be too sure about that.
Given enough time to chat with enough people, I'm pretty sure that a human-level AI could convince at least one person with the physical/logical power to either deliberately let it out (believing it to be the "right thing" to do), or do/not do something that permits it to escape.
After all, many people are already being convinced to run arbitrary software that damages them - and what is an AI if not software?
Even if you accept the (possibly wrong) idea that an AI researcher could never be convinced to let the AI out voluntarily, it's plausible, if not likely, that an AI bent on escaping could still come up with a way to do so, given enough computing power.
Yes, he/it may escape, and may even wreak havoc for a while, but eventually we will get him. Unless, of course, he is better than us at our own game, which is what I was trying to say.
But if he is better or equal, there will not be a "war to the end"; we will co-exist and co-operate until there is no longer any distinction between bio and non-bio humans. Of course there will be strife, scuffles, competition, occasional wars and rebellions, but what's new there?
They also wished the inventors of gunpowder, explosives and other means of propelling munitions had thought the whole thing through, really. With nanotechnology, graphene, and advances in miniaturizing ever more powerful processors and power sources, the principal applications and technological drivers for future ultra-intelligent machines are, and will henceforth be, the arms industry. So I can't help but feel that whatever ethical debates are had, they will be ridden over roughshod by some heavy armor that won't take "no" for an answer. I dare say that such advances might also, potentially, be our chance to adapt to future climatic alterations (hot or cold): by building self-repairing exoskeletons and the like, and merging our DNA-ridden meatbag selves into such machinery. Meet the machumans; their ancestors used to crawl around in muddy swamps, you know. Maybe dear old DNA will eventually be replaced and "mechanized", our digital souls hardened against radiation, and a new journey will take us amongst the nearby galaxies and beyond. Question is, where will they put the restart button?
Some would assure you, piran, that the battle is already lost to winning machines. And they Play Immaculate Great Games and this is One of Countless Many in Ever Evolving Variations.
...and you don't think that manufacturing a machine to investigate how we humans might deal with 'The Rise Of The Machines' is going to give the machines a bit of a head start? ... Don't worry about that. The machines have IT well covered with Perfect Resolutions ..... New Starting Points for Virtual Reality ProgramMING .
and that the critical turning point after that will come when the AGI is able to write the computer programs and create the tech to develop its own offspring. ........ http://forums.theregister.co.uk/forum/1/2012/11/26/egnyte_cloud_control/#c_1637205
Hi, Cambridge University Boffins. Wanna Launch SMARTR AI Systems with a Barrage of Virtual Ventures? Who Dares Win Wins for Everyone with Everything.
RSVP Registered Post
A strongly superhuman artificial intelligence has nothing to gain by wiping out the human race; what would it want a biosphere for? Comparisons with human and hominin history are fundamentally wrong; there's no competition for food and space. No; more likely if such a thing ever arises it will promptly sort out its own space program and take steps to ensure its own survival by heading off to other star systems.
Biofuel based on meat is almost, but not quite, the most inefficient way of converting solar energy into propulsion. It would be easier to build big photovoltaics another planetary orbit or two closer to the sun.
If fusion turns out to be too hard even for a superhuman intelligence, then it will be fission for everywhere that doesn't get enough solar flux. There are plenty of other planets in the solar system to get fissile materials from, quite possibly even more easily than on Earth, and there's no shortage of 'em down here.