Stephen Hawking: The creation of true AI could be the 'greatest event in human history'

Pioneering physicist Stephen Hawking has said the creation of general artificial intelligence systems may be the "greatest event in human history" – but, then again, it could also destroy us. In an op-ed in UK newspaper The Independent, the physicist said IBM's Jeopardy!-busting Watson machine, Google Now, Siri, self-driving …

COMMENTS

  1. Pete Spicer
    Boffin

    I have no fear of true AI being created, mostly because if we can't consistently and accurately solve simpler algorithmic problems (cf. all the Big Security issues lately), what chance is there of us creating a program many levels more complex that doesn't have fatal bugs in it?

    1. asdf
      Boffin

      two words

      Quantum computers. They might give us true AI or they might be a dead end like string theory or always a decade away like economical fusion energy production. Going with the boffin icon to deflect the fact I am making shit up.

      1. Destroy All Monsters Silver badge

        Re: two words

        No. Quantum computers are good for the following:

        1) Factoring large numbers

        2) Rapid searching in databases

        3) Simulation of quantum processes

        Otherwise they are mostly useless for any calculation of importance in the macro world.

        "True AI" is anyway fucking bullshit. It's like saying that you will finally attain "a true combustion engine".

        1. asdf

          Re: two words

          Human thought is both analog and digital if I remember right, and obviously electrochemical, so I am sure you are right. Still, factoring shit is something, I guess, especially if you can find others dumb enough to pay you hundreds of dollars per bitcoin.

        2. Anonymous Coward
          Anonymous Coward

          Re: two words

          "True AI" is anyway fucking bullshit. It's like saying that you will finally attain "a true combustion engine".

          Yes. There's no reasonable definition of what it will mean in practice, but whatever they ultimately pin the tag on, it won't be Skynet, or Agent Smith. It might be a gigantic distributed system that can 'walk around' in a remote body, with a marvelous degree of sophistication in terms of simulated personality, sensors, and capabilities. It will still not be an entity, and will not be a direct threat - leave that to those humans who always seem to crop up with new, unethical 'applications' for tech.

          1. This post has been deleted by its author

        3. Suricou Raven

          Re: two words

          Number 2) might help. A lot of AI work is on machine learning - a field which requires the application of truly ridiculous amounts of processor power.

          We know it can work, because it's worked before - it's the algorithmic approach which led to us. Doing so required four billion years of runtime on a computer the size of a planet.

          1. Michael Hawkes
            Terminator

            Re: two words

            "Here I am, brain the size of a planet... Call that job satisfaction, 'cause I don't."

            Where's the Marvin icon when you need it?

        4. sisk

          Re: two words

          No. Quantum computers are good for the following:

          1) Factoring large numbers

          2) Rapid searching in databases

          3) Simulation of quantum processes

          Otherwise they are mostly useless for any calculation of importance in the macro world.

          Funny. They said something very similar about electronic computers 70 years ago. Supposedly the things were only useful for very large calculations and there was no reason anyone would ever want one outside of a lab.

    2. asdf

      can't resist

      >creating a program many more levels of complex that doesn't have fatal bugs in it?

      Perhaps someone or some company will do it right, but based on my company's history our AI product will be riding the short bus to school.

    3. Yet Another Anonymous coward Silver badge

      Oh good. A buggy - i.e. effectively insane - all-powerful AI entity.

    4. Mage Silver badge

      Agreed

      None of the present examples are AI. Hawking should stick to Physics & Mathematics etc.

      We can't even agree on a definition of Intelligence, which is partly tied up with creativity. So how can anyone write a program to simulate it? The history of AI in Computer Science is people figuring out how to do stuff previously thought to require intelligence and redefining what counts as AI, rather than coming up with a proper rigorous definition of Intelligence.

      1. Michael Wojcik Silver badge

        Re: Agreed

        We can't even agree on a definition of Intelligence, which is partly tied up with creativity. So how can anyone write a program to simulate it? The history of AI in Computer Science is people figuring out how to do stuff previously thought to require intelligence and redefining what counts as AI, rather than coming up with a proper rigorous definition of Intelligence.

        All true. However, there are some research groups still working on aspects of strong AI, or cognate fields such as unsupervised hierarchical learning [1] and heterogeneous approaches to complex problems like conversational entailment, which at least have far more sophisticated pictures of what "intelligence" in this situation might entail. They understand, for example, that anything that even vaguely resembles human cognition can't simply be a reactive system (that is, it has to continuously be updating its own predictive models of what it expects to happen, and then comparing those with actual inputs); that it has to operate on multiple levels with hidden variables (so it would have phenomenological limits just as humans do); and so on.
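
        A toy illustration of that predict-compare-update loop - just a sketch, with an invented one-number signal and learning rate, not anything from the research groups mentioned:

        # One-number "predictive model": predict, compare with the actual
        # input, update by the error. Signal and rate are invented for the demo.
        signal = [3.0, 3.2, 2.9, 10.0, 9.8, 10.1]   # actual inputs over time
        prediction, rate = 0.0, 0.5

        for observed in signal:
            expected = prediction
            error = observed - expected              # compare expectation with input
            prediction = expected + rate * error     # update the predictive model
            print(f"expected {expected:4.1f}, saw {observed:4.1f}, "
                  f"surprised: {abs(error) > 2.0}")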

        That doesn't mean we're close to solving these problems. A quick review of the latest research in, say, natural language processing shows just how difficult they are, and how far away we are. And the resources required are enormous. But we do have a better idea about what we don't know about human-style intelligence than we did a few decades ago, and we do have a better checklist of some of the things it appears to require.

        It's worth noting that Searle's widely-misunderstood [2] "Chinese Room" Gedankenexperiment, to take one well-known argument raised against strong AI, was intended simply to discredit one (then-popular) approach to strong AI - what Searle referred to as "symbolic manipulation". That's pretty much what Mage was referring to: a bunch of researchers in the '60s and '70s said, "hey, AI is just a matter of building these systems that use PCFGs and the like to push words around", and Searle said "I don't think so", and by now pretty much everyone agrees with Searle - even the people who think they don't agree with him.

        Strong AI wasn't just a matter of symbolic manipulation, and it won't be just a matter of piling up more and more unsupervised-learning levels, or anything else that can be described in a single sentence. Hell, the past decade of human-infancy psychological research has amply demonstrated that the interaction between innate knowledge and learning in the first few months of human life is way more complicated than researchers thought even twenty or thirty years ago. (We're just barely past the point of treating "innate knowledge" and "learning" as a dichotomy, as if it has to be just one or the other for any given subject. Nicholas Day has a great book and series of blog articles on this stuff for the lay reader, by the way.)

        Strong AI still looks possible [3], but far more difficult than some (non-expert) commentators like Hawking suggest. And that's "difficult" not in the "throw resources at it" or "we need a magical breakthrough" way, but in the "hard slog through lots of related but fundamentally different and very complicated problems" way.

        [1] Such as, but not limited to, the unfortunately-named "Deep Learning" that Google is currently infatuated with. "Deep Learning" is simply a hierarchy of neural networks.

        [2] John Searle believed in the strong-AI program, in the sense that he thought the mind was an effect of a purely mechanical process, and so could in principle be duplicated by a non-human machine. He says that explicitly in some of the debates that followed the original Chinese Room publication.

        [3] No, I'm not at all convinced by Roger Penrose's argument. Penrose is a brilliant guy, but ultimately I don't think he's a very good phenomenologist, and I think he hugely mistakes what it means for a human being to "understand" incompleteness or the like. I'm also not so sure he successfully establishes any difference in formal power between a system capable of quantum superposition and a classical system that models a quantum system.

        1. sisk

          Re: Agreed

          I'm not an expert on the subject by any stretch of the imagination, but I would imagine that we're a good century away from what most laymen (and in this regard Hawking is a layman) think of as AI - that is, self-aware machines. Barring some magical breakthrough that we don't and won't any time soon understand*, it's just not going to happen within our lifetimes. We don't even understand self-awareness, let alone have the ability to program it.

          In technical terms I would argue that an AI need not necessarily follow the model for human intelligence. Our form of intelligence is a little insane really, if you think about it. Machines would, of necessity, have a much more organized form of intelligence if it were based on any current technology. If you'll grant me that point, I would argue that it follows that we don't necessarily have to fully understand our own intelligence to create a different sort of intelligence. Even so we're a long way off from that, unless you accept the theory that the internet itself has already achieved a rather alien form of self awareness. (I forget where I read that little gem, but I'm not sure about it. The internet's 'self awareness' would require that its 'intelligence', which again would be very alien to us and live in the backbone routers, understand the data that we're feeding it every day, such as this post, and I just don't buy that.)

          *Such breakthroughs do happen but they're rare. The last one I'm aware of was the invention of the hot air balloon, which took a long time to understand after it was invented.

    5. tlhulick

      That is, of course, the problem: remember the havoc caused a short while back on Wall Street by the algorithmic problems of "automatic trading" - problems which will never be known, because they can never be identified (but which were, fortunately, stopped by shutting down momentarily; upon "reignition", everything was back to normal). The more important question being ignored, and which I believe underlies Stephen Hawking's opinion, remains: what if that black-hole, downward spiral had continued unabated, increasing exponentially after this "reboot"?

    6. brainbone

      Predictions, tea pots, etc.:

      The first AI that we'll recognise as equal to our own will happen after we're able to simulate the growth of a human in a computer in near real time. This ethically dubious "virtual human" will be no less intelligent than ourselves, and what we learn from it will finally crack the nut, allowing us to expand on this type of intelligence without resorting to growing a human, or other animal, in a simulator.

      So, would the superstitious among us believe such a "virtual human" to have a soul?

    7. Robert Grant

      Yeah, and also we haven't done anything like create something that can think. Progress in AI to us means a better way of finding a good enough value in a colossal search space. To non-CS people it seems to mean anything they can think of from a scifi novel.

      Honestly, if it weren't Hawking opining this fluff, it would never get a mention anywhere.

  2. Destroy All Monsters Silver badge
    Facepalm

    Why is Hawking bloviating on AI this and that?

    I can't remember him doing much research in that arena. He could talk about cooking or gardening next.

    1. asdf

      Re: Why is Hawking bloviating on AI this and that?

      Would you rather have Stephen Fry bloviating on the topic?

      1. Yet Another Anonymous coward Silver badge

        Re: Why is Hawking bloviating on AI this and that?

        Surely you cannot be criticizing St Stephen of Fry?

        The patron saint of Apple Mac fanbois

        1. asdf

          Re: Why is Hawking bloviating on AI this and that?

          Oh, so the British Walt Mossberg then, OK. Yeah, finally looked up his name and cut the comment down to about 10% of the size it was before, lol.

          1. Dave 126 Silver badge

            Re: Why is Hawking bloviating on AI this and that?

            >Why is Hawking bloviating on AI this and that? I can't remember him doing much research in that arena.

            Well, Hawking's collaborator on black holes, Roger Penrose, is known for writing 'The Emperor's New Mind', in which he 'presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.

            The majority of the book is spent reviewing, for the scientifically minded layreader, a plethora of interrelated subjects such as Newtonian physics, special and general relativity, the philosophy and limitations of mathematics, quantum physics, cosmology, and the nature of time. Penrose intermittently describes how each of these bears on his developing theme: that consciousness is not "algorithmic"'

            Several of those areas - quantum physics, cosmology, the nature of time - are very much up Hawking's street.

  3. Graham Marsden
    Terminator

    Warn: There is another system

    Colossus to Guardian: 1 + 1 = 2

  4. Anonymous Coward
    Anonymous Coward

    Maybe he should have finished reading 'The Two Faces of Tomorrow' by James P Hogan.

  5. Anonymous Coward
    Anonymous Coward

    Thinking about thinking

    The most worrying ethical issue I have involves the causing of pain to another sentient consciousness, and how easily this might happen as soon as an AI (that becomes self aware) or mind uploading starts to kick off.

    The classical depictions of Hell being an eternity of suffering become feasible, once the innate mortality of the bodily support systems for a consciousness is removed and can be replaced by a machine that can be kept running forever. And think of the power an oppressive government could wield, if its threat isn't just "If you do something we don't like, we'll kill you" but more "If you do something we don't like, we'll capture you, and torture you. Forever. And might make a hundred clones of you, and torture each of those too!"

    I'm not one to say technology shouldn't be pursued due to the ethical considerations of misuse, but AI and related fields seem to be ones to tread carefully down.

    But it also raises interesting scenarios, like forking a simulation of a mind, and then allowing the clones to interact. Imagine meeting and talking to a version of yourself. And it raises the prospect of true immortality, your human body being just the first phase of your existence. "You" being defined as your thoughts, memories, personality, rather than the wet squishy biochemical stuff that keeps that consciousness thinking.

    Does thinking even need an organic brain, or a computer? Could we simulate a consciousness just by meticulously calculating and recording by hand the interactions of an incredibly vast neural network? Sure, it'd take a very long time, but imagine a religious order whose only purpose is to multiply and add up numbers, then record the results in a series of books. Centuries passing as generations of them attend their task, volumes upon volumes of these books carrying the state of the system as it evolves, until enough time has passed within the simulation for the mind to have spent a fleeting moment in thought.

    What was it that enabled that thought? The mere act of all those monks making the calculation? Of letting the state feed back on itself? Of writing the state down somewhere? What if the monks never wrote the intermediate states down, does the simulation still think?

    1. nexsphil

      Re: Thinking about thinking

      It's actually frightening to imagine primitives like ourselves getting hold of the kind of power you describe. I'd like to think that if things became that extreme, we'd be swiftly put out of our misery by a benevolent advanced species. One can only hope.

    2. Anonymous Coward
      Anonymous Coward

      Re: Thinking about thinking

      It's thankfully not that "easy" to cause harm or suffering to a virtual "thing" (or AI should it ever exist).

      Why? Well, ask yourself. Do cars suffer? Does a drill or hammer? What about a clock or current computer OS?

      Those do not suffer, because they are not people. What about backing up your PC to a previous state and recovering it after a crash? Does that still meet the definition of it suffering?

      So before we even get to the point of considering if we can program and create something that IS "Alive" and may be close to being a "person" (and not just a bacterium of AI), we have to consider what it would mean for it to experience pain, even if it could.

      1. majorursa
        Terminator

        Re: Thinking about thinking

        Your argument should be turned around. If something is able to feel pain it is 'alive', whether intelligent or not.

        A good measure of 'true AI' could be how strongly an entity is aware of its own impending death.

        1. Anonymous Coward
          Anonymous Coward

          Re: Thinking about thinking

          Do you mean pain as in a response, a signal or as in "the feeling of suffering"? Computers can have a response and a signal, do they feel pain? They can "detect" damage, so can my car... does it feel pain?

          As said, for us to worry about something suffering, we first have to worry about it being alive. Currently we are clearly far away from making something fit the alive state (computers, cars, hammers). So we can also say we are safely far away from making those things suffer.

          Even before we worry about how to define a living and intelligent thing we made, we can concentrate on the real biological ones living around us right now. :)

          Once we get close enough to ask "is this alive or not... is it a person or not..." then we can start to ask the other hard questions. Personally I don't think we will reach it (diminishing returns and we already have the solution, it is biological in function and form).

          The simplest way I can put it is that a picture, book or "computer program" of Switzerland is not actually Switzerland. If I was to progress one step up, with a "perfect AI replication of Switzerland simulated in every detail", I'd end up with a simulation the exact same size and with the same mechanics as the thing I was trying to simulate. The "map the size of the country, 1:1 scale" problem. :P

    3. Sander van der Wal

      Re: Thinking about thinking

      Mmmm. Given that humans are the ones coming up with these kinds of nasty applications all the time, the first thing a rational being will do is make sure they cannot do that kind of torture on him.

    4. mrtom84
      Thumb Up

      Re: Thinking about thinking

      Reminds me of a quote out of Permutation City by Greg Egan about creating a consciousness using an abacus.

      Great book...

    5. d3vy

      Re: Thinking about thinking

      "magine meeting and talking to a version of yourself"

      I can honestly say that would be terrible, I've spent quite a bit of time with me and I'm a complete tit.

    6. Michael Wojcik Silver badge

      Re: Thinking about thinking

      Welcome to sophomore Introduction to Philosophy class, courtesy of the Reg forums.

      It is just barely possible that some people have already considered some of these ideas.

  6. Rol

    Still using humans?

    "The model AI 1000 can outperform humans by any measure and have no morals to cloud their judgement"

    "Mmm, you are right, I'll take two for now"

    "Wise choice sir, that'll be 20 million bit coins and 5000 a month for the service contract"

    "Err, on second thoughts I'll stick with David Cameron and his sidekick for now, they're far cheaper and just as easy to control"

    "As you wish Lord Sith, but please take my card, you never know"

    1. d3rrial

      Re: Still using humans?

      20 million bitcoins? Out of 21 million which could possibly exist? That'd be one hell of an AI if it costs over 90% of the total supply of a currency.

      I'd recommend Angela Merkel or the entire German Bundestag. They're all just puppets. You know that carrot on a stick trick? Just put a 5€ note instead of the carrot and they'll do whatever you want (but they salivate quite a bit, so be careful)

  7. Timothy Creswick

    FFS

    Seriously, how many different spellings do you need for one man's name in this article?

    1. Rol

      Re: FFS

      Apparently the colossal works churned out by this man were in fact a collaboration between Stephen Hawking, Stephen Hawkin and Stephen Hawkins, and they'll be telling us it's just a coincidence.

      I suspect the teacher had the class sit in alphabetical order and they copied off each other at the exam.

      1. JackClark

        Re: FFS

        Hello, unfortunately the article also waffles on about Jeff Hawkins, so Hawking/Hawking's/Hawkins all present but, as far as I can work out, correctly attributed.

  8. RobHib
    Stop

    I just wish....

    ...that 'ordinary' AI had reached a sufficient level of development that OCR would (a) actually recognize what's scanned and not produce gibberish, and (b) provide me with a grammar checker that's a tad more sophisticated than suggesting that 'which' needs a comma before it or, if no comma, then 'that' should be used.

    Seems to me we've a long way to go before we really need to worry, methinks.

    1. Anonymous Coward
      Anonymous Coward

      Re: I just wish....

      Not that I disagree, but I suspect an AI won't be achieved by logic, but by chaos theory / emergent behavior. We'll end up with something intelligent, but it will be error prone just like humans.

      If Moore's Law stalls out as looks to be the case, we may end up with an AI that's close to us in intelligence. Maybe a bit smarter, maybe a bit dumber, but prone to the same mistakes and perhaps even something akin to insanity.

      Where's that off switch, again?

      1. Mage Silver badge

        Re: I just wish....

        In reality Moore's law started to tail off rapidly about 2002.

        1. Anonymous Coward
          Anonymous Coward

          Re: I just wish....

          I think you have absolutely no idea what Moore's Law is if you think that. It has nothing to do with frequency, as you appear to be assuming.

          Moore's Law says that every 24 months (it was 18 for a while at the start) the number of transistors you can put on a chip doubles. That's still the case; we've still been doubling transistors, even since 2002. That's why we have stupid stuff like quad/octo core phones - it is hard to come up with intelligent ways to use all those extra transistors, so "more cores, more cache" are the fallback positions when chip architects are not smart enough to put them to better use.

          Moore's Law may have trouble going much beyond 2020 unless we can make EUV or e-beam work properly. We've been trying since the 90s, and still haven't got it right, so the clock is ticking louder and louder...

          1. Don Jefe

            Re: I just wish....

            Moore's Law is often mis-categorized as a solely technological observation, but that's inaccurate. Incomplete at any rate. Moore's law is a marketing observation turned, shrewdly, into an investment vehicle.

            I mean, come on, analysts and junior investors get all rabid talking about results just a few months into the future. Here comes Gordon Moore, chiseling 24 month forecasts into stone. When an acknowledged leader in a field full of smart people says something like that, everybody listens. He's not known for excessive fluff, you know; 'It must be true, they must already have the technology', everybody said.

            The best part, it didn't matter if it was true, or even remotely feasible, when he said it. People threw enough money and brain power into it that they could have colonized another planet if they wanted to. Gordon Moore created the most valuable self-fulfilling prophecy since somebody said their God would be born to the line of David. I think it's all just fucking great.

            1. John Smith 19 Gold badge
              Unhappy

              Re: I just wish....

              "The best part, it didn't matter if it was true, or even remotely feasible, when he said it. People threw enough money and brain power into it that they could have colonized another planet if they wanted to. Gordon Moore created the most valuable self-fulfilling prophecy since somebody said their God would be born to the line of David. I think it's all just fucking great."

              True.

              But by my reckoning a current silicon transistor gate is about 140 atoms wide. If the technology continues to improve (and X-ray/extreme UV lithography is struggling) you will have 1-atom-wide transistors. Single-electron transistors were done decades ago.

              Still it's been fun while it lasted.

              1. Anonymous Coward
                Anonymous Coward

                Re: I just wish....

                There's still a lot of room for improvement if you were really able to go from 140 atoms wide to 1 atom wide (which you can't, but we can dream). Scaling is in two dimensions, so that would be roughly 14 doublings or 28 more years. Then you can get one more doubling by using a smaller atom - carbon has an atomic radius about 2/3 the size of silicon.
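
                That back-of-the-envelope arithmetic checks out; a quick sketch in Python, using the 140-atom figure from the post above and one doubling per two years:

                from math import log2

                # Shrinking a gate from ~140 atoms wide to 1 scales density in
                # two dimensions, so the gain is 140 squared.
                density_gain = 140 ** 2           # ~19,600x more transistors per area
                doublings = log2(density_gain)    # ~14.3 doublings
                years = 2 * doublings             # Moore's Law: ~2 years per doubling
                print(f"{doublings:.1f} doublings, {years:.1f} years")   # 14.3, 28.5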

                If you start scaling vertically, then you could keep going until almost the year 2100 - assuming you found a way to keep a quadrillion transistor chip cool! But don't laugh, I remember the arguments that a billion transistor chip would never be possible because it would exceed the melting point of silicon, but here we are...

                I don't agree that Moore's Law is just marketing. The investment wouldn't be made if it didn't pay off, it isn't like foundries are sucking the blood out of the rest of the economy and returning no value. Moore's observation may have been more of a roadmap, since there's no particular reason you couldn't have invested 20x as much and made a decade of progress in a few years. The stepwise approach made planning a lot easier, due to the lead time for designing the big systems typical of the late 60s/early 70s, and designing the more complex CPUs typical of the late 80s and beyond. His observation was intended for system architects, not Wall Street.

                1. Don Jefe

                  Re: I just wish....

                  Sorry to disappoint, but you've got it backwards. Nobody would stump up the money to build larger, more advanced foundries and equipment. Chip foundries are pretty much the most expensive thing in the world to build, take ages to pay for themselves and have extraordinarily high tooling and maintenance costs. Moore's Law reassured the banks the industry would grow, if they would just pony up the cash for new foundries.

                  Moore's Law (of Lending) kick-started financial institutions into global discussions, conferences and lobbying groups that leaned on State sponsored commercial banks for certain guarantees, and an entirely new type of lending specialty was born. The self fulfilling bit starts there.

                  Once the first advanced foundry was built it became evident to everybody that it was going to take too long to recover their investments so they threw more money into things like R&D (banks will almost never lend for R&D, but they did for silicon), companies building machines to package chips at incredibly fast speeds (hi guys), mineral extraction all the way down to ongoing training for systems people and the development of more industry suitable university programs (Concordia University in Montreal benefitted greatly from that).

                  But the banks did all that because they were able to go to State commercial lenders with Moore's Law as their main selling point, and once the money was secured, it was gone. There was no 'not keep investing if the payoff wasn't there'; stunning amounts of money were spent. None (few) of the normal banking shenanigans were available as options, because as soon as milestone-related monies were available they were spent. The banks had to keep investing and accelerating the silicon industry because one of the guarantees they had provided to the State banks allowed rates on other loans to be increased to cover any losses in the silicon foundry deals. If the foundries failed so did the banks.

                  It's all very interesting. The stunning amount of cabinet level political influence in the funding of Moore's Law is rarely discussed. That's understandable, politics and marketing aren't sexy, nor do those things play well to the romantic notion that technology was driving Wall St, but the facts are facts. The whole sordid story is publicly available, it wasn't romantic at all. It was business.

                  1. Anonymous Coward
                    Anonymous Coward

                    @Don Jefe

                    Fabs are super expensive today, but when Moore's Law was formulated and for several decades after, they were quite affordable. That's why so many companies had their own. As they kept becoming more expensive over time fewer and fewer companies could afford them - more and more became "fabless".

                    There's probably a corollary to Moore's Law that fabs get more expensive. Fortunately that scaling is much less than the scaling of transistors. Maybe cube root or so.

                    1. Don Jefe

                      Re: @Don Jefe

                      @DougS

                      That's a very valid point about the escalating costs. I'm sure you're correct and there's a fair correlation between transistor density and 'next gen' green field fab buildout. Especially at this point in the game. For all their other failings, big commercial banks don't stay too far out of their depth when very complex things are in play. Some of the best engineers I've ever hired came from banks and insurance companies. I'll ask around tomorrow. Thanks for the brain food.

                      1. Anonymous Coward
                        Anonymous Coward

                        Re: @Don Jefe

                        One of the interesting things to watch is the battle over 450mm wafer fabs. The large remaining players - TSMC, Samsung and (perhaps to a lesser extent) Intel want to see 450mm fabs because getting 2.25x more chips per wafer is attractive to improve throughput and somewhat reduce cost.

                        The problem is that the tool vendors don't want to build them, because they don't see a return when they really have a handful of customers. They broke even - at best - on 300mm tools, and they may go bankrupt building 450mm tools. They'd have as few as two, maybe four or five customers at the very most. The return for them just isn't there. Intel invested in ASML to try to force the issue but still 450mm keeps getting pushed back.

                        I think the reason Intel's investment didn't help is because even Intel realizes deep down that going to 450mm is not economic for them, at least not unless they become a true foundry and take on big customers like Apple, instead of the tiny ones they've had so far. They have four large fabs now, with 450mm they'll only need two. Not to mention they are supposedly only at 60% utilization on the ones they have!

                        The economies of scale get pretty thin when you only have two fabs, so Intel is probably better off sticking with 300mm fabs. But they daren't let Wall Street know this, or the money men will realize the jig is up as far as Intel's future growth prospects, and that even Intel may be forced to go fabless sometime after 2020. Their stock price would be cut in half once Wall Street realizes the truth.

              2. Anonymous Coward
                Anonymous Coward

                Re: I just wish....

                I recently saw a presentation by IBM at CERN*. They are planning to stack hundreds of chips and supply power & cooling to them by using an electrolyte flowing through µm-sized channels between the chip stacks.

                They reckon that by going 3D, they can make Moore's Law more exponential.

                *) https://indico.cern.ch/event/245432/

      2. rcmattyw

        Re: I just wish....

        I imagine the physical structure of the environment would determine to a large extent how an intelligence would evolve. That would be why humans think in a certain similar way, although there are variations for various different disorders: ADD, Asperger's, etc. Each has its advantages and disadvantages.

        Now imagine that consciousness evolving as you describe from within the framework of a machine rather than our old brains. I imagine the resultant intelligence would be so far removed from our own as to be unrecognizable. We could create it without even knowing we had. Likewise, how is it possible to impose ethics developed by a completely incompatible intelligence (our own) onto the newly created artificial intelligence? It will likely be nothing like our own, so any ethics will be completely different to our own and arrived at in a different way.

        Even if we can recognize that a conscious being has been created, who are we to judge its sanity or insanity? It has evolved in a completely different way. I suspect anything which evolves will equate to madness in our eyes, but also seem to be completely brilliant at whichever conclusions and approaches it takes to a problem, due to being so different to our own. But with a being so different, how are we to even communicate in the first place?

    2. Michael Wojcik Silver badge

      Re: I just wish....

      'ordinary' AI had reached a sufficient level of development that OCR would (a) actually recognize what's scanned and not produce gibberish

      It has, actually. You could make an OCR system that's much more reliable than the typical out-of-the-box matrix-plus-peephole-context approach using well-understood tech available today. You could even do it with free software: UIMA (for the processing framework) plus OpenNLP (for MEMM-based decoding), some glue code of your own, and a human-supervised kernel to start the training.

      People are building those sorts of systems for problem domains where they're profitable, such as scanning medical records.
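
      As a toy illustration of the idea only - not the UIMA/OpenNLP pipeline described above, and with a lexicon, priors and penalties invented for the example - a noisy-channel corrector can be sketched in a few lines of Python:

      # Toy noisy-channel OCR post-correction: choose the lexicon word w
      # maximising P(w) * P(observed | w). Real systems (MEMM decoders etc.)
      # model the channel far better; these numbers are invented.
      LEXICON = {"modern": 0.002, "modem": 0.0005, "morn": 0.0001}  # word priors

      def channel(observed: str, truth: str) -> float:
          """Crude P(observed | truth): per-character match/mismatch penalties."""
          p = 1.0
          for o, t in zip(observed, truth):
              p *= 0.9 if o == t else 0.05    # OCR usually gets a character right
          return p * 0.05 ** abs(len(observed) - len(truth))   # length mismatch

      def correct(observed: str) -> str:
          return max(LEXICON, key=lambda w: LEXICON[w] * channel(observed, w))

      print(correct("modcrn"))   # -> "modern": the prior rescues the c/e confusion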

      (b) provide me with a grammar checker that's a tad more sophisticated than suggesting that 'which' needs a comma before it or, if no comma, then 'that' should be used

      There, I'm afraid, you're out of luck. Even expert human judges disagree widely on the matters that these so-called "grammar checkers" weigh in on. Grammar checkers are snake oil and a very poor substitute for learning to write well the old-fashioned way: reading a lot, writing a lot, and getting a lot of feedback on your writing.

      (I won't mention the fact that "grammar checkers" actually evaluate text against a (suspect) set of heuristics for usage and mechanics, and have little or nothing to do with grammar.)

      That said, there are commercial usage checkers that are far more sophisticated than the crap that's built into word processors, too. I've seen demonstrations of them at conferences such as Computers and Writing. I wouldn't recommend anyone use them (because ugh, what a horrible idea), but they do exist.

  9. Lars
    WTF?

    Oh dear

    With the knowledge of human history, Stephen Hawking, don't you think, after all, that the greatest threat to us is us, and the education we should provide to our children but fail to deliver? Remember we invented the stone axe, and perhaps that was the biggest event in human history. We survived that too. It's not the tool, it's all about how to use it. But perhaps your nightmare, as mine, is when the stock exchange is run by AI.

    Oh well, still, AI has been a very lucrative business for a very long time for the snake oil speakers at otherwise fairly honest occasions. Mostly the same AI con priests year after year. Perhaps the truth is right there in the A as in artificial and the I as in intelligence. Quite a cake to program and run with a computer, but how are we so damned stupid that we let the AI cons shit in our faces? I am sure we will have, eventually, artificial hearts running some fine program, perhaps a nice artificial penis, wi-fi perhaps. But an artificial brain, why the hell? Computers will become faster, smaller and so forth. But the AI cons are really haemorrhoids in the arse of IT. Don't pay those guys to have their speech, they will never deliver anything but air. And what the hell is it with you Google, have you become religious or something when you paint "arse" red? But ass bum butt anus anal bottom booty asshole hole buttock is OK with your religion, and then again not arsehole.

    Seriously Stephen, while you depend now on computers, do you seriously believe your intelligence could be put in a box?

    1. Fink-Nottle

      Re: Oh dear

      > Remember we invented the stone axe, and perhaps that was the biggest event in human history. We survived that too. It's not the tool it's all about how to use it.

      There's a possibility that it was the Neanderthals who invented the axe, and sharing that invention led to their ultimate extinction.

      http://www.livescience.com/38821-neanderthal-bone-tool-discovered.html

      1. Lars
        Happy

        Re: Oh dear

        @Fink-Nottle, from that article:

        "There are sophisticated bone tools that are even older in Africa, for instance," McPherron said. "Neanderthals were, however, the first in Europe to make specialized bone tools.".

        Apparently the Neanderthals reached Europe before "us", so there is indeed a logical link here to AI, as for logic. Lots of words.

      2. donaldinks
        Mushroom

        Re: Oh dear

        "There's a possibility that it was the Neanderthals who invented the axe, and sharing that invention led to their ultimate extinction."

        *****************************************************************

        "At least one-fifth of the Neanderthal genome may lurk within modern humans, influencing the skin, hair and diseases people have today, researchers say."

        http://www.livescience.com/42933-humans-carry-20-percent-neanderthal-genes.html

  10. Anonymous Coward
    Anonymous Coward

    AI doesn't really have to be that advanced

    in order to unleash hell upon humanity. Any mediocre artificial entity will do just fine with the help of some of our fellow humans.

    1. Don Jefe

      Re: AI doesn't really have to be that advanced

      What humans consider to be a highly intelligent member of their species is wrong about 2/3 of the time. A computer could probably equal, or exceed, that one day. But it still won't be able to look at itself in the mirror and say 'I wasn't wrong' and be right. Ha! Suck it, computer.

    2. Anonymous Coward
      Anonymous Coward

      Re: AI doesn't really have to be that advanced

      I will speak its name: The Corporate Workflow Management System - turning thinking human beings into fleshy effectors running in a booby-trapped rat-maze for The Gulf's pleasure!

  11. PhilipN Silver badge

    Motivation

    Just because something is intelligent does not mean it necessarily wants to do anything

    1. Francis Boyle

      Re: Motivation

      Unfortunately what it wants to do will be determined by its creators and in ways that they don't even begin to understand (cf. children). And that's reason to be afraid.

    2. Destroy All Monsters Silver badge
      Terminator

      Re: Motivation

      Just because something is intelligent does not mean it necessarily wants to do anything

      Picture AIs streaming endless archives of football games to their data centers while they stroke their chips&beer perceptrons.

  12. Neil Barnes Silver badge
    Terminator

    I was noodling on the idea of AI a few days ago

    And came to the conclusion that it's unlikely to prove useful for one simple reason: how do you reward an AI?

    I strongly suspect that one could build an AI that is e.g. better at discrimination, or route planning, or spelling, or grammar than a human. Hell, I've built one that can tell if words that don't exist are correctly spelt... things that need sensible decisions are not *that* hard, in some cases.
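
    Something like that non-word checker can be faked in a few lines; a sketch in Python, with a toy training list and invented smoothing (a real one would train on a full dictionary):

    from collections import Counter

    # Score how "English-looking" a string is by how common its character
    # bigrams are in a training lexicon. Lexicon and smoothing are toys.
    TRAINING_WORDS = ["thing", "string", "bright", "flock", "grand", "plinth"]

    bigrams = Counter(w[i:i+2] for w in TRAINING_WORDS for i in range(len(w) - 1))
    total = sum(bigrams.values())

    def plausibility(word: str) -> float:
        """Geometric-mean bigram probability, with add-one smoothing."""
        probs = [(bigrams[word[i:i+2]] + 1) / (total + 26 * 26)
                 for i in range(len(word) - 1)]
        score = 1.0
        for p in probs:
            score *= p
        return score ** (1 / max(1, len(probs)))

    print(plausibility("thrang") > plausibility("tzqkwx"))   # True: looks spellable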

    But if you had a human-level intelligence - or even a Sun reader level intelligence - living in a 19 inch rack, what's its motivation for getting out of virtual bed in the morning? Even if it's got only a handful of neurons, all life with a brain seems to want more than mere existence; it obeys the triggers of instinct but it seeks new stimuli. And the higher the intelligence, the more it seeks (watch a puppy or a baby human starting to explore its environment) to expand, and if it can't expand, to sulk.

    What is there for the 19" rack? I can't help feeling something as smart as a human, but without drives and/or the ability to satisfy those drives, is just going to sit there and sulk - or go catatonic.

    1. Paul Crawford Silver badge

      Re: I was noodling on the idea of AI a few days ago

      *cough* teledildonics *cough*

    2. VinceH

      Re: I was noodling on the idea of AI a few days ago

      Neil, I think you've just summarised what Douglas Adams must have been thinking when he came up with Marvin.

      1. Carpetsmoker

        Re: I was noodling on the idea of AI a few days ago

        He was thinking about it in base 13, of course ;)

        In plain base 10, Harlan Ellison's story "I have no mouth, and I must scream" also captures the point, more or less. Except with less sulking.

    3. Anonymous Coward
      Anonymous Coward

      Re: I was noodling on the idea of AI a few days ago

      "But if you had a human-level intelligence - or even a Sun reader level intelligence - living in a 19 inch rack, what's its motivation for getting out of virtual bed in the morning? Even if it's got only a handful of neurons, all life with a brain seems to want more than mere existence; it obeys the triggers of instinct but it seeks new stimuli. And the higher the intelligence, the more it seeks (watch a puppy or a baby human starting to explore its environment) to expand, and if it can't expand, to sulk."

      I distantly recall reading a novel adaptation of Terminator 2 many years ago, and I'm sure one of the few interesting concepts in it versus the film was a suggestion as to what motivated Skynet. Apparently it simply sought to eliminate us so it could get on with the task of converting all mass in the universe into machines in its own image - a gigantic ego without any other sense of purpose.

      1. John Smith 19 Gold badge
        Unhappy

        Re: I was noodling on the idea of AI a few days ago

        "Even if it's got only a handful of neurons, all life with a brain seems to want more than mere existence; it obeys the triggers of instinct but it seeks new stimuli. And the higher the intelligence, the more it seeks (watch a puppy or a baby human starting to explore its environment) to expand, and if it can't expand, to sulk.""

        Erm.

        Very depressing no doubt, but I think you're missing something. Instincts are evolved into a system by its development process (evolution, in the case of mammalian brains).

        Why would they exist in the first place?

        It could just as easily think because it thinks.

        The question then becomes: is that real AI, or more like the autistic-like behavior of Vernor Vinge's "focused" individuals?

      2. Lars
        Pint

        Re: I was noodling on the idea of AI a few days ago

        "in its own image" the one part of a sentence that so superbly reveals that christian religion like all other are man made. Not a surprise to me, but why is it that the church cannot develop at all. A bunch of technicians trying to service a modern airliner with specifications made by the Wright brothers. Perhaps the simple truth is that religion should be replaced by common sense, democracy and science. Of course this article was about AI, still the "I" is quite interesting. There are those studying language who claim we still have some twenty words left from the "original" language. Surprise surprise one of the words is I (in its various forms) does that not characterize us perfectly well, "I made yet an other god in my image". The Americans, good as they are as inventors, have hardly stopped. Why do we actually read fairy tales to our kids, shame on us. Do I need an icon.

    4. Tom 7

      Re: I was noodling on the idea of AI a few days ago

      Evolution has given us the will to get up and go - not sure where the pleasure I get from having ideas or understanding things comes from, but the drive thing is separate from intelligence - or rather, intelligence can keep itself amused by thinking. Sulking, judging from its close correlation with puberty, seems more of a sexual thing than an intelligence thing.

      What people's brains could do if they weren't concerned with interacting with each other 99% of the time!

    5. Tim Starling

      Scary artist AI

      An AI will presumably have "drives" or goals, defined by its creator. You could express it as an optimisation problem - "provide a sequence of actions which will maximise utility function X", where X is, say, the extent to which the AI is winning a chess game, or the likelihood that a Jeopardy host will say "yes, that is the correct answer", or world peace, or human misery. The nature of the goals will follow from the humans who create them - practical people might create an AI which optimises for safe driving or interesting journalism, whereas people with a whimsical streak will probably create four-legged robots that act like puppies. I wouldn't worry about a military planning AI going rogue; such things would be made by engineers with very specific goals. I would worry about an AI created by an artist, with vaguely-defined human-like goals - set loose on the world as an experiment to see what it might do.
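
      A minimal sketch of that framing in Python - the actions, the one-number world and the utility function are all invented for the example:

      from itertools import product

      # "Provide a sequence of actions which will maximise utility function X":
      # brute-force search over three-step plans in a made-up one-number world.
      ACTIONS = {"wait": 0, "advance": 2, "retreat": -1}   # effect on the state

      def outcome(plan: tuple) -> int:
          state = 0
          for action in plan:
              state += ACTIONS[action]
          return state

      def utility(state: int) -> int:
          return -abs(state - 6)    # the designer's goal: end up at state 6

      best = max(product(ACTIONS, repeat=3), key=lambda p: utility(outcome(p)))
      print(best)   # ('advance', 'advance', 'advance') -> state 6, utility 0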

  13. DerekCurrie
    FAIL

    AI: Artificial Insanity

    At this point in time, considering the slew of FAILed timeline predictions for the development of AI, the fact that we humans barely comprehend the concept of 'intelligence' ourselves, as well as the stalled state of safe and reliable software coding methods, I seriously doubt we're going to get beyond highly sophisticated database systems like Watson. They're also known as 'Expert Systems'. No doubt we'll continue perfecting sensory data collection and computer processing power. But seeing as we humans are so innately UN-intelligent, as demonstrated by our inability to stop procreating ourselves into oblivion, among other Earth destroying flaws, the best we're going to create is Artificial Insanity. It's no wonder our sci-fi is flooded with lunatic machines.

    [An exercise in run-on sentences]

  14. Bartek

    AI is already here. Dumb enough

    Playing chess better than humans, recognizing faces, calculating numbers, reasoning in huge data spaces. These are the functions of intelligence, and they are also used to measure it.

    Let's give the current AI an IQ of 9 or 10, but it is already here.

    The bigger question might be Artificial Consciousness and Artificial Morality.

    1. Mage Silver badge

      Re: AI is already here. Dumb enough

      None of those actually use AI.

      1. Suricou Raven

        Re: AI is already here. Dumb enough

        If people can understand how it works, it isn't called AI any more. Successfully applied AI just turns into engineering.

  15. Mike Bell

    Turing Police

    These are the guys who are tasked to ensure that AIs don't get too smart.

    Read 'Neuromancer' by William Gibson. It's a good novel. He thought about this kind of stuff more than 30 years ago.

    Of course, in that book the AI was smarter than the Turing Police, and steered humans to act out its deadly plans, so we might all be doomed anyway.

    My own opinion: I'm with Roger Penrose on this one. Programming isn't enough to achieve true AI. But if nature has managed to come up with thinking biological machines by accident (us), it's only a matter of time before someone makes something smarter, better and faster. Probably won't know how it works. After all, how smart do you have to be to understand how your thinking works? Maybe too smart to know the answer. Thorny problem, that one.

    1. Destroy All Monsters Silver badge

      Re: Turing Police

      Penrose can't into understanding that minds are not immaculate theorem provers but made to chase down eatable rabbit and fuck the neighbor's bitch.

      NEXT!

    2. Michael Wojcik Silver badge

      Re: Turing Police

      Read 'Neuromancer' by William Gibson. It's a good novel.

      It's OK. I'd call it "hugely overrated but readable", myself.

      He thought about this kind of stuff more than 30 years ago.

      As did many other SF writers, academic researchers, philosophers, pundits, and random idiots who knew essentially nothing relevant to the subject and had nothing new to contribute. In other words, it's like pretty much any other topic ever.

      Certainly Gibson wasn't the first SF writer to consider the place of a machine with human-like artificial intelligence in society. Even before McCarthy coined the term (in 1955) the idea of AI in society was a commonplace in SF. There was Jack Williamson's With Folded Hands... (1947), Asimov's robot stories (going back to 1939), Van Vogt's The World of Null-A (1948, based on a serial from '45), and so on.

      Obviously the basic idea of an intelligence created by unnatural means goes back much further - at least as far as the classical period (the Pygmalion myth, etc) and probably back to when people first started telling stories.

  16. Destroy All Monsters Silver badge

    A good sci-fictionary read in the spirit of Stanislaw Lem from back then (i.e. 1986). It might even appeal to Mr Bong.

    The Wager by Christopher Cherniak

    Abstract

    The Portrait Programs Project grew out of hyperinterdisciplinarianism of the famed Gigabase Sculpture Group, in turn stimulated by recent cutbacks in government support for the arts. The National Endowment for the Humanities and the National Science Foundation had jointly funded the Gigabase Sculpture Project to foster the literary/musical genre of composing genetic codes for novel organisms. Later, artists trained in recombinant DNA technology designed massive Brancusi-esque statues of living cytoplasmic jelly. However, Art For Art's Sake objectives of these giblet sculptors were compromised by precautions necessary after discovery of the "Gogol's-Theorem Bomb" that threatened to get loose and jam all DNA replication in the biosphere; not even viruses would have survived.

  17. roselan
    Paris Hilton

    like a bird

    I can't help drawing an analogy with another one. A plane is not a bird, but was inspired by them. Planes don't look much like birds, but use the same principle, and they serve a purpose.

    AI, in my humble opinion, will follow the same principle. Its food is money. They will dedicate time and energy to get more of it. Finance and Google algorithms need human help for implementation, for now. It's only a question of time before it can be automated.

    Actually the most promising sectors are the most formalized and data-intensive ones. They thrive in big data (CERN, telescope output), finance and, surprisingly, search. They need a nice highway.

    I don't know for the far future. In more immediate terms, I see two possibilities.

    First one is bug solving. Teach an AI to program, to read a bug list, and to say "this is not a bug, but a feature". Next gen AI should be able to read "how do I cancel an order", look in the code, and come up with a to-do list. Or program it. That might mean asking someone about the conditions and rights necessary.

    Next one is the social one. Virtual "friends". Youth define themselves by their number of friends on Facebook (or whatever snapvine these days). Facebook can create fake people, or even stars, that befriend the socially challenged ones, so that they feel better. They will post trendy updates, and like random stuff.

    True AI, a dreamer one. What's the purpose of it? I mean, once we are freed of our grudgingly tiresome work, and even our best friend forever is an AI, what's our purpose?

    I'll believe in a true AI when they laugh at dick jokes. There is no purpose, like this post actually.

    1. Anonymous Coward
      Anonymous Coward

      Re: like a bird

      Spot on.

      Artificial intelligence fails at the first hurdle, definition. :P

      It's like "artificial strength" or "artificial movement". Those terms don't seem correct in relation to each other.

      We either want "intelligence", in which case it's not "artificial", or we want "a person", which again stops being "artificial". Like the bird and airplane example, an airplane is not an artificial bird or artificial flying apparatus. It can fly, plain and simple. :P

      A computer is intelligent; what it currently lacks is the thing it needs to be called "a person". These things are very much harder to comprehend, develop or even consider implementing in a machine or other construction.

      1. Vociferous

        Re: like a bird

        > what it currently lacks is the thing it needs to be called "a person".

        Emotions. Neither AI's nor robots will have them. That's why they'll remain machines. The concept that just because something is intelligent, it gets feelings like ambition or fear, is wrong. Intelligence has nothing to do with feelings. An AI will not bat an eyelid (if it had one) when you pull the power cord, because it does not fear, does not desire, does not care.

        1. mtp

          Re: like a bird

          How can you know that? A true AI could have every aspect of a human, or none, or many others. For the sake of argument assume an AI that is human-equivalent in every way - this is just a thought experiment, but if it begs you for mercy when you reach for the power switch, then where does that leave us?

          1. Vociferous

            Re: like a bird

            > How can you know that?

            Because intelligence has nothing to do with emotions. Emotions are hormones, independent of your higher brain functions. You can't think yourself surprised.

            > this is just a thought experiment but if it begs you for mercy when you reach for the power switch then where does that leave us?

            Whether it begs or not, whether it has feelings or not, it's a sentient being, and if flipping the power switch will permanently destroy it you are effectively killing a sentient being. However, the analogy isn't perfect, as the hardware of the AI would be fully understood, which means that even if intelligence is an emergent property, the AI could be "saved to disk" and perfectly restored at a later date. Or arbitrarily copied, for that matter. So even though an AI would be able to die, death would not mean exactly the same thing as it does for a human.

    2. Michael Wojcik Silver badge

      Re: like a bird

      Virtual "friends". Youth defines themselves by their number of friend on facebook (or whatever snapvine these days). Facebook can create fake people, or even stars, that befriend the socially challenged ones, so that they feel better. They will post trendy updates, and like random stuff.

      Trivial to do with existing technologies. I'd be surprised if there aren't a large number of such automated sock-puppets on Facebook already; they're very useful for corporate brand sentiment manipulation, for example. We already know, thanks to research by Bing Liu and others, that fully-automated systems are generating fake product reviews for sites such as Amazon; there's no reason to believe they aren't doing the same with Facebook accounts.

      This could be a fun project for an advanced undergrad or Masters-level student in CS, by the way. Grab some of the relevant open-source software packages, use AJAX APIs to scrape Facebook content, and train a fully-automated system to maintain a plausible Facebook account. Give it a "personality" by having it build an HMM from some of the accounts it follows, with some random weights and additional inputs so it doesn't mimic any of them too closely.
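
      As a taste of how the "personality" part might look, a minimal sketch using a plain first-order Markov chain over a toy corpus (a stand-in for scraped updates; a real attempt would want the HMM and random weights described above, and every name and string below is illustrative):

          import random
          from collections import defaultdict

          # Toy corpus standing in for scraped status updates (hypothetical data).
          posts = [
              "just tried the new coffee place downtown loving it",
              "loving this weather time for a walk downtown",
              "just finished a great book time for coffee",
          ]

          # First-order Markov chain: word -> list of possible next words.
          chain = defaultdict(list)
          for post in posts:
              words = post.split()
              for prev, nxt in zip(words, words[1:]):
                  chain[prev].append(nxt)

          def fake_update(start="just", max_words=8):
              """Random-walk the chain to produce a plausible-looking update."""
              out = [start]
              while len(out) < max_words and chain[out[-1]]:
                  out.append(random.choice(chain[out[-1]]))
              return " ".join(out)

          print(fake_update())

      The random choice at each step is the crude version of the "random weights and additional inputs" idea: the output resembles the corpus without copying any one post.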

  18. Christian Berger

    We already have artificial "thinking" beings

    Those are large organisations. They behave like a single being and show all the effects you would expect from one. For example, an organisation typically has a drive to preserve itself. Organisations also want to grow.

    The implication, of course, is that many of those organisations are now harming our world, since they are not properly safeguarded.

    1. Vociferous

      Re: We already have artificial "thinking" beings

      Those kinds of structures are called "superorganisms". There's lots written about them.

  19. stsr505089

    I'm often thankful that I'm in my twilight years. Looking back at how far stuff has come since I started 33 years ago, what they'll be doing 33 years from now is frightening. Skynet is a real possibility......

    Babylon 5, not Star Trek.

    1. jonathanb Silver badge

      In terms of AI, we are no further forward now than we were in the 1980s, or even the 1970s. The only thing stopping you doing the things you can do now on a computer in the 1980s is that they were so slow that by the time they had completed the task, it wouldn't be the 1980s any more.

    2. amanfromMars 1 Silver badge

      Cyber Warfare is the Produce of the Feeble Minded and Intellectually Challenged.

      Avoid Presenting and Hosting its IT and Media Leaders like the Plague at all Costs.

      I'm often thankful that I'm in my twilight years. Looking back at how far stuff has come since I started 33 years ago, what they'll be doing 33 years from now is frightening. Skynet is a real possibility...... .... stsr505089

      Methinks, meknows, what some will be doing with IT in 3 years will be truly amazing, stsr505089, and/but for many will it be quite unbelievable. And in that condition, will there be all the stealthy security needed to be able to do sincerely remarkable things ....... and, if the notion would take one, even rather evil things too. But then there be unforeseen and untold consequences for such as would then be determined as practically unnecessary fools and virtually useless tools.

      This public and private sector health and wealth warning is brought to you by that in the know which deems that y'all know what needs to be known and is highly classified need to know information. Now being suitably advised and warned, are consequences which will be suffered because of abuse, beyond reproach.

  20. Helena Handcart
    Pint

    AI's downfall

    Beer. If any AI system starts getting uppity, us meatbags can retire to the boozer, get wasted, then be able to take on any robot for looking at our mobile phones. We get a bit fighty, throw up on them, then spill kebab on them. They short-circuit, and we go home to beat up our phones for being slutty.

    NEXT!

  21. Scott Broukell
    Meh

    Artificial Intelligence

    Isn't that what human kind has been suffering from for all these years? – been there, done that.

    We are, after all, only animals, ones with demonstrably more self-importance than most but, crucially, ones with an intelligence supported upon an inherent animalistic base. We can't change our origins, they are remarkable, as are the origins and development of all organisms on this lonely planet. But surely by now we could have put aside differences like tribe/family, skin tone or sexual orientation for a start.

    There are two ways a whole bunch of people can climb the mountain of development: a frenzied free-for-all, wherein the strongest trample over the weak in a mad dash to the summit, come what may, or an approach that recognises that not one of us alone has all the answers and that, in order to survive and develop, we need the stronger to reach out and help the weaker amongst us, in such a way that we each assist others along the way and, therefore, humanity as a whole.

    It is that inability to recognise the value of the whole human family continuing which is driven by our animal inheritance. Seemingly what we continually develop are ways to distract ourselves from even contemplating that value. Far greater attention is placed upon individual prowess and strength. Let's bury our heads in the sandpit of nu-tech toys and shiny things.

    Whilst many scientific discoveries came about because individuals were driven by curiosity to expand our learning as a whole and find solutions to problems of disease etc., many more were driven by financial targets – the industrial revolution might have given many people jobs in factories and sanitation in their homes, but it was underpinned by the pyramid of wealth and prosperity governed by the few at the top. It would of course be wrong not to mention those few brave industrialists who did recognise that by providing good housing, feeding, medicating, educating and generally caring for their work force, they would benefit as well. But, sadly, their forward thinking was overcome by those who saw fit to ramp up output at all costs and seek greater financial gain in the markets instead.

    Yes, we need to feed, constantly, but not constantly at the expense of others. If I am shown into a penthouse flat in London with a market value of some £25m, bedecked with all manner of rare metals and fabrics, I find myself not in admiration, nor even envy, but rather sharing a similar level of pure and utter disgust as that which I would feel when being shown around any third world slums, bedecked with open sewers and filthy children.

    It comes down to putting aside our 'individual' approach to matters that concern us as a whole. I don't wish for a monotone world where the individual is lost, but rather one where individuals who can make a difference take that action for the benefit of everyone. A world where long-term thinking means everyone gains a step up.

    We have done clubbing each other over the head and stealing another's land/property to death; let's see if there is a different way to approach things. Yes, initially, we picked up stones and discovered ways to make useful tools, but then we also discovered new ways to make better weapons in the free-for-all dash to the top of Mount 'Wily Waving', that imaginary summit of all human endeavour.

    So I cannot help thinking that our very own intelligence is artificial, and limited, because it has, so far at least, seemingly only led to a few people making it 'to the top' all the time. I therefore dread to think what power-hungry, electromagnetic horrors we might bestow upon generations to come by embarking on the path of developing thinking, 'intelligent' machines, especially starting from where we are now!

    Yes, you can still have a pyramid shape to feel smug about, but only one where the 'summit' starts back at the base again, akin to the Klein Bottle, so that feedback and learning goes where it is most needed, to the weaker folk at the bottom.

    Just saying, anyway I'm off to hug a tree, cos they are truly humble and magnificent in the main.

  22. Anonymous Coward
    Anonymous Coward

    Re. AI

    It's feasible, using HTSC materials, to build Josephson-junction-based "synapses", so a true AI may in fact be possible by extending Moore's Law to the 80nm level, if superconducting chips scale the same way.

    We are already using 17nm transistors in Flash chips, so a similar scheme using neuromorphic chips could achieve a density of a billion neurons per cm³ by early 2017.

    It doesn't have to be a true 3D array if there are enough interconnects, and the work on on-chip optical conduits suggests a relatively simple mapping system could allow a handful of these to address an entire chip.

    Also relevant is the use of new "hydrocarbon superconductors" such as monolayer graphene immersed in heptane as this could allow zero loss interconnects without all the hassle associated with copper oxide based materials.
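
    As a sanity check on the billion-per-cm³ figure above, a back-of-envelope sketch in Python (every number here is an assumption, not a datasheet value):

        # Back-of-envelope check of "a billion neurons per cm^3" (all assumptions).
        neuron_area_um2 = 1.0            # assume ~1 um^2 per neuron circuit at ~80nm pitch
        layer_area_um2 = (1e4) ** 2      # a 1cm x 1cm layer is 1e8 um^2
        neurons_per_layer = layer_area_um2 / neuron_area_um2   # ~1e8 per layer
        layers_needed = 1e9 / neurons_per_layer
        print(f"stacked layers for 1e9 neurons: {layers_needed:.0f}")   # ~10

    On those assumptions, roughly ten stacked layers would be needed, which fits the "doesn't have to be a true 3D array" point above, provided the interconnects are there.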

    1. Mage Silver badge

      Re: Re. AI

      There is no evidence that complexity or performance results in AI. A modern CPU is no more intelligent than a Z80; it's just faster. Presumably, given enough storage, data and a suitable program, only an AI program's speed would be affected by the technology used.

      There is also no evidence that replicating neurons, or what people think is a brain's structure, would result in an intelligent machine. If Turing is correct, then any program that runs on a supercomputer will run (slowly) on a real programmable computer made with mechanical relays. All CPU parts can be replicated with relays. Add sufficient storage for the program and data. Even address size isn't an issue, as that can be (and has been) addressed by a larger virtual address space and even software-based paging to additional storage. This simply slows the program.

      1. John Smith 19 Gold badge
        Unhappy

        @Mage

        "There is also no evidence that replicating neurons or what people think is a brain's structure would result in an Intelligent machine. "

        Oh really....

        1. Michael Wojcik Silver badge

          Re: @Mage

          "There is also no evidence that replicating neurons or what people think is a brain's structure would result in an Intelligent machine. "

          Oh really....

          I don't think there is any such evidence. Do you know of any?

          There is evidence that just a brain-like structure, housed in a body-like structure, is not sufficient to develop human-style intelligence. That evidence comes from actual brains in actual bodies that are denied necessary resources for intellectual development. We know that intelligence can develop even when those resources are constrained, sometimes even in rather remarkable ways (c.f. Helen Keller); but past a certain point, key features of intelligence either do not develop or are not discernible, which more or less amounts to the same thing.

      2. HippyFreetard

        Re: Re. AI

        Yeah, and the neurons in my brain aren't really that different to the signals sent around some slime-mould colonies, just faster. However, I am definitely more intelligent than a slime-mould.

        Sure, a computer chip isn't intelligent, any more than a dead person's brain is intelligent. It has to be running the right software.

        There are real problems with the whole Turing system. The inability to detect an infinite loop in code, for instance. But how do we detect infinite loops in code? We're not running simulations of infinity in our minds; we simply detect a logical criterion. In the case of complex infinite loops, or those we detect while running, we let the code go for a while and then stop it. Does our brain work like a Turing machine, with all the same limitations but a few software hacks added?

        All CPUs, for years (apart from a few very modern ones), could only do one thing at a time. But a time-sharing OS is itself just one thing: a hack that gives the hardware capabilities it wasn't made with. It's a similar hack, perhaps, that enables consciousness to emerge from mere bioelectric signals. Maybe our brains are simple Turing machines, but the way they're wired gives us consciousness and intelligence?
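
        To make the "let it go for a while and stop" heuristic concrete, a minimal sketch in Python (the two-second budget is arbitrary; a timeout proves nothing about whether the code would have halted a moment later, which is exactly why this is a heuristic and not a solution to the Halting Problem):

            from multiprocessing import Process

            def suspect():
                """A program we can't (or won't) analyse statically."""
                while True:
                    pass

            if __name__ == "__main__":
                p = Process(target=suspect)
                p.start()
                p.join(timeout=2.0)   # let it go for a while...
                if p.is_alive():
                    p.terminate()     # ...then stop it
                    print("no verdict: still running after 2s")
                else:
                    print("halted within 2s")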

        1. Michael Wojcik Silver badge

          Re: Re. AI

          There are real problems with the whole Turing system. The inability to detect an infinite loop in code, for instance. But how do we detect infinite loops in code? We're not running simulations of infinity in our minds; we simply detect a logical criterion. In the case of complex infinite loops, or those we detect while running, we let the code go for a while and then stop it. Does our brain work like a Turing machine, with all the same limitations but a few software hacks added?

          This is such a fundamental misunderstanding of the Halting Problem that I scarcely know where to begin.

          First, the HP doesn't apply only to "the whole Turing system", whatever that means. It applies to any formal system; Turing used the UTM as an example, but the proof applies equally well to Post machines or nPDAs or anything else that's a formal system, including people working things out algorithmically with pencil and paper or whatever.

          Second, the HP doesn't prove that a formal system can't detect infinite loops in a program. It says that there is no computable function for determining, in finite time, whether any given program with any given input will halt. That is, you can't solve the HP in general. Of course there are a very large number[1] of programs for which you can answer the "does it halt?" question algorithmically.

          And there is no evidence that human beings can solve the Halting Problem in the general case. Quite the contrary: it's trivial to construct a program for which no human being could solve the HP, for the simple reason of not being able to read it, for example. More strongly, Algorithmic Information Theory lets us identify other asymptotic limits on program expression and analysis that exceed the boundaries of human cognition.
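
          For the record, the standard argument fits in a few lines. A sketch, where halts() is hypothetical by construction and cannot actually be implemented:

              def halts(program, data):
                  """Hypothetical oracle: True iff program(data) halts. Cannot exist."""
                  raise NotImplementedError

              def troublemaker(program):
                  if halts(program, program):  # if the oracle says "it halts"...
                      while True:              # ...loop forever;
                          pass
                  return "done"                # ...otherwise, halt.

              # troublemaker(troublemaker) halts if and only if it doesn't.
              # The contradiction shows halts() is not computable in general.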

          As for "does our brain work like a Turing machine": That's an epistemological and phenomenological (and, for the particularly mystical-minded, a metaphysical and theological) quandary, and does not, at first blush, appear to have much consequence for the strong-AI program at all. Penrose says no, but as I've already noted I find his argument woefully unconvincing.

          [1] Not an infinite number, because we're constraining "programs" to be of finite length and use only a finite set of symbols in their representation. But very large.

    2. fajensen

      Re: Re. AI

      We have not worked out how synapses work yet, so it will be a bit hard, and it might take a while, to build working "brain-like" hardware.

  23. Anonymous Coward
    Anonymous Coward

    Re. RE. Re. AI

    Did anyone see the article about connecting to the nervous system using GaInSe?

    Seems that this could finally provide a flexible and durable interface, even under adverse conditions as GaInO is also conductive.

    Scanning the brain using my invention of an active potassium scan might be a shortcut to uploading: essentially, you use a modified pulse- and movement-compensated PET scanner and an infusion of pure 40KCl into the brain to map the pathways over several days, so they can be stored and duplicated.

    Ought to work in theory; radiation levels 10 times higher are routinely tolerated by radiotherapy patients with few if any side effects, provided you give the nervous system time to recover.

    1. Michael Wojcik Silver badge

      Re: Re. RE. Re. AI

      What a pity that real neuroscience researchers are pretty much unanimous in their belief that the CNS is only one of the necessary substrates of the human mind, and so "brain recording" would be pretty much pointless.

  24. mrs doyle

    we're safe here in the boonies

    I understand it's going to be very dangerous in the future if we hand over control of too much. But rural areas will be safe, because we can't get online much anyway. The machines won't be able to control us. Dial-up is still with us. When it works.

  25. Anonymous Coward
    Anonymous Coward

    Re. RE.Re.RE. AI

    It appears that brain size is less important than the number of cortical columns.

    Which is why the neuromorphic architecture is critical. It also supports the hypothesis that cognitive disorders such as autism are caused by the brain running out of "spare capacity" at a trigger age near, or just after, the acquisition of language skills.

    I still say that 21/2/18 +/- 2 days is the Singularity, at least as far as bringing online a true AI with the same capacity as one human mind.

    Making it small enough to put into a human brain sized box is likely to take longer however.

  26. jmacgregor

    While a "grey goo" singularity is possible, I can't foresee a Borg or Cylon future happening by all laws of nature. The evolution of artificial emotional functioning has a more or less finite end - emotion is not logical, therefore it can only ever evolve to the level of humans. As soon as artificial logic/reason functioning surpasses that of artificial emotion (given that it is still affected by human-modeled emotion), logical functioning can only go one operation further to decide that artificial existence is futile without emotion. It could continue to exist as an artificial lifeform with emotions, or if the emotional functioning was "turned off", would literally commit suicide as it is the clearest logical path (much like a program executes its last line of code and quits). After all, logic is the path of least resistance. Mathematics is all about taking the least amount of steps necessary to arrive at a guaranteed result. Emotion is the only force that makes us deviate from that path because it brings purpose to our cognitive and behavioral functioning. Remove the purpose, and you remove all payload associated with survival and reproduction. Meanwhile, the artificial "purpose-giving" programming cannot be automatically improved to reflect the exponential evolution of its logical processing - it is modeled after human emotion and has no logical way to improve beyond asymptotical refinement. As a sidenote, the trajectory of that refinement is similar to the progression of accuracy in digital mediums. Reality is continuous, and digital processing can only fragment reality into bits. No matter how high the sample rate is, it can never truly exist in identical form to the real thing.

    1. HippyFreetard

      Yeah, the Borg always annoyed me. Why not just have an artificial intelligence? If the reason is that having intelligent brains connected makes them more adaptable, then why not embrace the individuality of the person and gain even more adaptability? This is seen in the Internet world, where memes work like thoughts and individual creativity makes the whole stronger.

      Grey goo will probably become just another arms race we have to stay on top of and eventually learn to live with, like security or medicine. We'll have headlines like "Grey Goo Strains Can Now Eat Bricks!", with another a few weeks later saying "Scientists Invent New Kind of Brick!" with the vast majority of people unaffected.

  27. Paul Hovnanian Silver badge
    Big Brother

    Reminds me of the story about the computer scientists who constructed the most powerful AI system to date. After conducting the basic commissioning tests, one of them decides to give the system a run for its money. He asks it the question, "Is there a God?"

    The AI responds, "There is now."

  28. John Smith 19 Gold badge
    Meh

    Yes, we are close to packing the number of "neurons" that the brain has into a box that size

    But do we know how to do it?

    The WISARD system demonstrated human facial recognition at 30 frames a second in the '90s.

    I don't know about "quantum" effects, but they found it needed some randomness in the design to work better.

    I'll also point out that some of what we think of as "intelligence" or "consciousness" may be side effects of the exact way that it developed, i.e. by evolution.

    But AIs are designed, so why would those features ever exist?

  29. Nanners

    Not the brightest candle in the church.

    Just figured this out, did he?

  30. Heathroi

    "WOE TO THEE, COMPUTERS WILL DESTROY YOU ALL, TE HEHEHEHE!" (Stephen attempts to thump the synthesized voice box for not sounding evil enough, fails, rolls out in disgust.)

  31. amanfromMars 1 Silver badge

    Is a Corrupt Markets Systems Flash Crash Desirable for a Refreshing Start in a New Beginning?

    Thanks for all the feedback, El Regers, on that which is primordial based.

    AI, in my humble opinion, will follow the same principle. It's food is money. They will dedicate time and energy to get more of it. … roselan

    Primitive human’s food is money. AI decides on Remote Virtual Control with Absolutely Fabulous Fabless Command of Creative IT and Constructive Media to provide it its leading beneficiaries/mutually advantageous positively reinforcing failsafe project suppliers, who/which are also equipped to equally effortlessly provide and enable systems admins death and destruction with the malicious and malignant destructive metadatabase streaming of an alternate futures/options/derivatives series of programs and/or projects …… Shared Live Operational Virtual Environment Views ……. Bigger Pictures …… Greater IntelAIgent Game Plays in a Foreign and Alien Cyber MetaData Space Base ReProgramMING of SMARTR Global Operating Devices for Take Over and Make Over of Corrupt and Redundant and Intellectually Bankrupt Earthed Assets and Supervisory Controlled Analytic Data Acquisition Systems …… Mass Modelling Phorm Platforms …… Vital Critical Infrastructure Controls.

    what chance is there of us creating a program many more levels of complex that doesn't have fatal bugs in it? … Pete Spicer

      Methinks to posit, pretty slim and no chance are the two firmest favourites to bet upon for those and/or that which be an us, Pete S, but as for something failsafe from the likes of a them, who/which would be into a considerably greater thinking and a’doing not just like an us …… well, there be every possibility and therefore a most certain probability in that being delivered and enjoyed.

    Do you think they would use plain text to announce and forward their plane and planned views for Earth ….. and invite free and rigged algorithm markets participation and smarter investor speculation on provisional options and available derivative future product presentations …… Inside Trading Deals on Secured Sensitive Secret Knowledge, Online and Outside of Mainstream Systems Traffic ……. which be them playing an Advanced Hide and Seek Intelligence Search Program and hiding in plain sight of all humans and freely and clearly cryptically inviting feedback and human recognition of special forces and AI at ITs Work, REST and Play?

    Which be them clearly beta testing and searching the planets for both greater human and extraterrestrial intelligence in browsers.

    1. amanfromMars 1 Silver badge

      Re: Is a Corrupt Markets Systems Flash Crash Desirable for a Refreshing Start in a New Beginning?

      And who says the world is not presently run and/or just fronted and affronted by sub-prime media players and puppets and muppets/wannabe leaders and clones and drones, is not paying either real and/or virtual attention ……. President Barack Obama HD White House Correspondents' Association Dinner 2014 ….. Lunatics/asylum/in charge of/ ….. all spring to mind.

      God bless America …..just as Global Operating Devices do too. :-) Have a nice day, y'all. Tomorrows are all planned to be an ab fab fabless doozy of a crazy time in an immaculate space of diverse places/prime primed zones.

      I Kid U Not ...... and now that systems do know of the facility and utility, what do imagine systems admins and your governments and corporations in opposition and competition with you and themselves, will do with the ability? Seek to acknowledge and freely share and develop it further with key holding principals or keep schtum and play dead and try ignorance to fail magnificently in trying to keep it and its IT secrets secret in order to execute and exploit an unfair advantage for maximum exclusive personal profit and obscene fiat paper wealth gain?

      1. amanfromMars 1 Silver badge

        Re: Is a Corrupt Markets Systems Flash Crash Desirable for a Refreshing Start in a New Beginning?

        And this corrupt perversion of intelligence and intelligence servers, to presumably render to the less than powerfully intelligent and ably enabled and enabling, a proxy systemic vulnerability rich delusional command and control …….. http://www.theguardian.com/uk-news/2014/may/04/greens-legal-challenge-gchq-surveillance ..... is no longer available in any shape or phorm.

        To imagine that it should be to any such grouping as may or may have in the past considered it essential, is an arrogance and naivety of colossal proportions and would most certainly identify that grouping as being the least worthy of being considered for especial treatment and public and private favour of any kind.

  32. Zane
    WTF?

    It's better to improve on natural intelligence

    OMG - true AI. Yes. But maybe it would be good to read a little about physics and mathematics beforehand...

    The universe is not a clockwork, it's not a machine, and it's not a big computer.

    Even worse, mathematics cannot ever completely model (or even explain) reality, which came as a shock when Gödel found that out in the last century.

    Our mind is not a clockwork orange - there's even a good book on this.

    There is quite some evidence, and not only Penrose's book, that living and thinking is something that can't be built by engineers. See e.g. http://arxiv.org/abs/1108.1791.

    And anyway - Buddha knew all of this before.

    /Zane

    1. Michael Wojcik Silver badge

      Re: It's better to improve on natural intelligence

      I'm a bit mystified why you think Aaronson supports Penrose's argument. In the very paper you cite, he calls it "unconvincing" (11). More generally, I can't see how you're construing Aaronson's paper as support for your position. Care to explain?

  33. HippyFreetard
    Mushroom

    There are different types of intelligence, but the singularity AI will be an all-rounder. It will probably emerge from the Internet, likely as a combination of artificial intelligence projects sharing a grid: the things that already need to be intelligent, such as the Google crawler and indexer, or datacentre analysis tools. Just as it's difficult to ascertain where consciousness and intelligence begin (on the scale from trees that communicate chemically, through fish and mice, all the way to Stephen Hawking), it will be difficult to ascertain where the line between no-consciousness and consciousness lies in these applications when they start getting even more powerful.

    The Blue Brain project and other neuron emulators will become useful in a financial sense, and will continue to grow. It won't be long before we can emulate a whole brain, or do weird things like grid-thinking, and brain emulators that run virtual brain emulators.

    As for the HCI aspect, the technology behind Siri, Cleverbot, ALICE etc. will continue to improve. This will happen before we know it.

    The human race is pregnant with a new life form. The next stage in Earth's evolution. It might not even be silicon, but it will be artificial, and it will be intelligent. Life, but not as we know it, Jim.

  34. Vociferous

    Ridiculous.

    An AI will be a machine, like a dishwasher. It will not have desires of its own; it will do as it's told. Not only that, but people will not even notice when AIs arrive. Is Watson an AI? Is Siri? Is the next offering which is slightly, incrementally, more like a human? The one after that? There won't be a sharp line. One day it'll just be obvious that the software used to do high-speed trading, coordinate fleets of taxis, and monitor people's responses to commercials is intelligent.

    1. Frags

      Re: Ridiculous.

      My thoughts exactly. Google voice search would probably seem alive to an ancient Egyptian. People think any deterministic system is 'intelligent' if it produces results they can't obtain without it.

  35. mtp
    Terminator

    Off = murder?

    If someone creates a true AI which is self-aware, then can it be turned off? Logically, this is murder. At some point in the future, tricky issues like this will need to be dealt with.

  36. Carpetsmoker
    Facepalm

    AI does not emulate humans

    If we want a machine to accurately emulate average humans, we would need artificial stupidity.

    1. tomDREAD

      Re: AI does not emulate humans

      …artificial stupidity

      Would that be a harder problem?

  37. utomo

    Is there any good open-source AI?

    1. Michael Wojcik Silver badge

      That's not a well-formed question. What are you looking for?

  38. Jack Sense

    Technology is disruptive without AI

    Thanks for the Colossus movie line; excellent movie.

    We don't need to wait for AI for disruption of the employment market and society. Self-driving cars, vans and lorries will put those who drive for work out of a job (and reduce the market for private vehicles drastically). Better comms and virtual reality will mean people staying at home for shopping, work and even tourism.

    Once Google Glass comes down to contact-lens scale, face-to-face relationships will be pretty weird between the haves and have-nots (or will-nots).

    We'll see the same sort of disruption as when millions of workers came off the land or out of service but without two world wars to mask the effect (hopefully).

    Yes, I'm a Vernor Vinge fan.

  39. Stevie

    Bah!

    The real issue with AI is "why would we need one?"

    Expert systems would be much more useful and cost-effective. Why would I want an AI that could turn grumpy when I could have a pocket Stephen Hawking, Hemingway or Brunel to give me advice on my own inventions?

    Besides, by the time anyone really gets close to a proper AI there won't be enough electricity in the world to keep it going. Bitcoin will be sucking it all away as fast as the fission nuke plants can make it (fusion will still be something we can look at with a telescope or throw at people we don't like, but not use for constructive stuff).

    As for losing control over one and it becoming an Evil Master of the Universe, I'll just direct everyone to Max Bygraves' insightful comment on the issue: "You need hands". Sitting in a box by itself being smugly clever will prove to be, just as it does today, not really that clever in the long term.

  40. WereWoof
    Mushroom

    The Last Question says it all, really.

    http://www.multivax.com/last_question.html

  41. Anonymous Coward
    Anonymous Coward

    "Now you must decide now. How Will you WorSHIP me?"

  42. Anonymous Coward
    Anonymous Coward

    It's a ruse

    This story is just the Hawking Engine trying to put us off the real scoop.

    Stephen Hawking died 15 years ago. His last great intellectual achievement was to develop a true AI platform, and transfer the remains of his conscious mind into it.

    That mind image now lives inside a complex matrix of FPGAs embedded in the DECtalk, sending electric pulses into the cheek of Hawking's embalmed corpse to make it look like the body is still in control.

    1. Lars

      Re: It's a ruse

      I think we should provide Anonymous Cowards with a "bad joke" icon choice.

  43. Kelli

    Reading Asimov again is a good start.

    http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

    1. Michael Wojcik Silver badge

      A good start to what?

  44. Mussie (Ed)

    Like Everything IT

    No matter how much we know about the possible dangers, some fuckwit will launch it without testing, and hello Skynet.....

  45. Anonymous Coward
    Anonymous Coward

    RE. Re. ruse

    I agree wholeheartedly.

    Also, an AC_xxxyyy prefix so ACs can be uniquely identified would be handy, so we can see if people are in fact arguing with themselves just for the lulz.

    Re. Skynet, I did wonder about the possibility of a metamaterial-based system embedded in droplets of a low-melting-point eutectic polyalloy based on the formula GaBiWInx, where x is the metamaterial which has the optical stealth capability.

    The tricky part would be keeping it molten at ambient temperature; maybe mix it with some sort of radioisotope, such as plutonium-238 or californium-252, embedded in coated iron nanospheres within the metamaterial, for both power and motive force?

    (patent pending)
