OpenAI CEO heralds AGI no one in their right mind wants

OpenAI CEO Sam Altman has said his upstart is preparing for the coming of artificial general intelligence – though there's disagreement about what AGI actually means and skepticism about his claim that OpenAI's mission is to ensure that AGI "benefits all humanity." If you teared up at the legally non-binding sentiment of …

  1. Peter Prof Fox

    It's useful. (Unlike most must-have tech)

    Why type some keywords into an advertising search engine when I can ask for real stuff?

    I have struggled to write Bash scripts in the past: brackets and quotes tripped me up again and again. So ChatGPT gives me what I need in less than a minute and I can adapt the last 10%. The tech bits have been done, and so long as I can make a good guess at the tech I can do what I want. A couple of days ago I asked ChatGPT about writers associated with Devon. I didn't get a list of search results with Devon and Writer as keywords, but a reasoned list of ten which made good sense to me. If I'd been searching for a place to stay in Devon then, do you know what? I'd have typed that in.

    AI on-demand is the end of ad-search. It isn't the whole answer but it's a rich starting point for people who know lots but not all. (It's also a magic spiv-tool to fool everyday people with selected 'facts'.)

    Now, when I have a 'who should I ask' situation, ChatGPT is a really good starting place. If only because it forces me to be clear about what my question is and how I want to trim the results.

    1. MOH

      Re: It's useful. (Unlike most must-have tech)

      You can find the same information in seconds by searching for "writers associated with Devon".

      And you know where the information is being sourced from so you can make a judgement call about its reliability.

      Using it for code, or anything you don't know, is lunacy. It's right until it's not and you've zero way of knowing when it's gone off the rails.

      People wouldn't take for granted the word of a random stranger on the internet, yet all of a sudden they're happy to use ChatGPT as a reliable source of truth.

      The mind boggles.

      1. pmugabi
        Boffin

        Re: It's useful. (Unlike most must-have tech)

        The aim is not to use it to code something you don't know.

        The aim is to use it to give you most of the script, and then you add to or fine-tune the rest of the code.

        So instead of spending 10 minutes writing code to open files, print results, import libraries, you can have that done already and then verify that the logic is correct for your use case.

        Also, code gets tested before deployment. It's not taking ChatGPT as a reliable source of truth.
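
        To make that concrete, here is a minimal sketch of the split I mean (Python, with every name hypothetical): the scaffolding around the edges is the part you would happily take pre-generated, and the marked line in the middle is the logic you verify and fine-tune yourself.

        ```python
        # A minimal sketch of the split described above; all names hypothetical.
        # The scaffolding (imports, file handling, output) is the generated part;
        # the marked line in the middle is the logic you verify yourself.
        import csv
        import sys

        def main(path: str) -> None:
            with open(path, newline="") as fh:           # boilerplate: open the file
                rows = list(csv.reader(fh))

            # --- the part you actually check and fine-tune for your use case ---
            total = sum(float(row[1]) for row in rows if len(row) > 1)

            print(f"{len(rows)} rows, total = {total}")  # boilerplate: print results

        if __name__ == "__main__":
            main(sys.argv[1])
        ```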

        1. John H Woods

          Re: It's useful. (Unlike most must-have tech)

          "So instead of spending 10 minutes writing code to open files, print results, import libraries ..."

          Or use a language / IDE combo that doesn't require you to write reams of boilerplate. Even with a language that does require it, I'm sure copying and pasting boilerplate that has already been tested is more reliable than asking the stochastic plagiarism machine each time.

          1. Peter-Waterman1

            Re: It's useful. (Unlike most must-have tech)

            If ChatGPT is so bad, gets it so wrong, isn't helpful, and shouldn't be used to assist non-developers in writing code, then why all the fuss? I don't hear the hype or dialogue about Watson or other "AI" systems. And so my only conclusion is that people are afraid, and that rather than get curious they would instead come on here and rant about how it's rubbish, doesn't work, can't be trusted, you're nuts to use it to write code, and so on. And so around we go on the same merry-go-round that we always see when life-changing technology is released into the world. One camp tries to argue why we shouldn't use it: "you can't run production on VMware", "It's just a new type of phone", "The Cloud is someone else's computer". And then, one day, it becomes the norm, and we all jump on the bashing bandwagon of the next cycle. Bring on the downvotes.

            1. Anonymous Coward
              Anonymous Coward

              Re: It's useful. (Unlike most must-have tech)

              > If ChatGPT is so bad, gets it so wrong, isn't helpful, and shouldn't be used to assist non-developers in writing code, then why all the fuss?

              Because it's being evangelised as good, getting things right, being helpful, and that it can be used to assist non-developers in writing code.

              What makes it bad isn't that it gets things wrong some of the time; it's that its getting it wrong is indistinguishable from its getting it right. By design these chatbots have no concept of truth, reason, correctness, wrong, or right. They only consider "common" and "uncommon". Their mistakes are unpredictable and can be subtle.

              That they get things right "most of the time" just makes it worse, because it means people will get used to believing the responses uncritically.

              1. Justthefacts Silver badge

                Re: It's useful. (Unlike most must-have tech)

                But in the everyday world, common things are common. Your complaint "no concept of truth, reason, etc. ... They only consider common and uncommon" is an accurate description of the responses you will get from most *humans* in most interactions. Sorry, it just is. Forcing someone to light up the furnaces and actually think through their response, rather than just give the standard or easy reply, is an edge case.

                1. Helcat

                  Re: It's useful. (Unlike most must-have tech)

                  Possibly, but possibly not.

                  It invents quotes and provides sources that don't exist, and while it produces reasonably worded responses, the simple fact is that the lies are hard to weed out to get to the truth. And that's with common questions.

                  Plus you've got inbuilt bias, hidden rules and more that skew the response from fair and balanced into misleading.

                  Simply put: ChatGPT is an early development model that's in Alpha while they work out how it would be used in the real world. That it can't accept when it's wrong about something only goes to reinforce that it can't be relied on: It has to be fact checked.

                  Actually, that's just like humans, really.

                  But we should think through any response because we don't know it's right. It's just that people get lazy, and with AI they will simply assume (making an Ass out of You, not me, thanks!) that the response is correct. And that's the real danger. They'll... >shudder< ... put the jam on the scone BEFORE the cream! Or worse: AFTER!

                2. Michael Wojcik Silver badge

                  Re: It's useful. (Unlike most must-have tech)

                  Your complaint "no concept of truth, reason, etc. ... They only consider common and uncommon" is an accurate description of the responses you will get from most *humans* in most interactions.

                  "Humans suck. What we need are faster, automatic humans!"

                  Perhaps that argument is not entirely persuasive.

                  1. Anonymous Coward
                    Anonymous Coward

                    Re: It's useful. (Unlike most must-have tech)

                    I wouldn’t describe human performance (= computer performance) as “sucking” in the 95% of useful tasks that we perform. Rather “cost optimised”, in a wide sense. Individually, we allocate only as much attention as gets the right outcome most of the time, and return to it if that doesn’t work. And management allocates the lowest-cost person to the task who gets it done well the majority of the time.

                    “Just as good, but 1000x cheaper” is a perfectly reasonable justification. Cheap is not an insult. In a supply-and-demand economy, the opposite of cheap is “most people can’t afford it at all”. One thing I guarantee: in a century’s time, the idea that an average citizen will be able to afford home insurance that involves *monopolising the attention of another human, even a call centre drone, for an hour* will seem as unaffordable as every middle-class family having a live-in servant, like they used to a century ago. The rich will still pay for personal service on dumb trivia, but it will be seen as an expensive ego-trip.

            2. Anonymous Coward
              Anonymous Coward

              why all the fuss?

              Money! Big money just wants more big money, and everyone wants in on the next big thing. Technology smoke-blowing is facilitated by lack of understanding, and the magic of AI is beyond the grasp of most. Machine learning is improving thanks in large part to bigger data sets and faster machines, but the 'breakthroughs' being hyped by the MSM (also driven by money) are hyperbole. If the everyman took the time to read and comprehend the scientific articles cited, 'AI'-related stocks would suffer.

            3. Anonymous Cow-Pilot

              Re: It's useful. (Unlike most must-have tech)

              ChatGPT is not good at writing useful code for applications - it doesn't have enough training data for that and often gets it wrong. What it is good at is solving programming problems of the types set in programming exams and interviews. A significant side effect of this is that it makes it look really good to development managers and other people whose job involves assessing the programming competence of developers....

              1. Michael Wojcik Silver badge

                Re: It's useful. (Unlike most must-have tech)

                And for any non-trivial software-development exercise, writing code should be a small part of the effort. A much larger proportion of the resources should be devoted to evaluating the problem, researching existing solutions, designing an appropriate solution, pre- and refactoring to lower maintenance costs, testing and analysis, documentation, and so on.

                If a model like ChatGPT or Copilot makes someone significantly more productive as a developer, then they were Doing It Wrong in the first place, and very likely Doing It Even Worse now.

                Also, actually writing code should help the developer think about the code: how it works, what the intent is, what unforeseen issues might arise, and so on. Generating the code eliminates a critical occasion for considering how the software system at hand is designed and how it actually operates.

                It's just a terrible idea all around.

          2. Michael Wojcik Silver badge

            Re: It's useful. (Unlike most must-have tech)

            Or use a language / IDE combo that doesn't require you to write reams of boilerplate

            Exactly. Copilot is fixing the wrong problem. It's a faster way to shoe horses. Stop using horses.

        2. Jimmy2Cows Silver badge

          Re: The aim is not to use it to code something you don't know.

          The aim is not to use it to code something you don't know.

          Maybe not, but people (as a generalised collective) are lazy and will gravitate toward things that appear to make their tasks simpler.

          Those who already know how to code will most likely use it as you suggest, taking care of the tedious boilerplate. Although there are already plenty of code-assistance tools that do that.

          The ones who don't know how to code are the danger. They'll use it as a shortcut to minimise their own effort, with no idea about type correctness, structural optimisation, appropriate algorithm selection, defensive coding, stability, security or any of the other myriad considerations a skilled and experienced developer will work through.

          The end result won't be an improvement.

        3. Anonymous Cow-Pilot

          Re: It's useful. (Unlike most must-have tech)

          Be aware that ChatGPT and Copilot both make it your job to check any code they generate before use, including testing, security auditing and IP checking. The first two are pretty easy to do; however, if you have any idea how to systematically IP-check code then I would be keen to hear it. IP-checking code is not an established discipline and there appear to be no published tools for it, so I don't see any situation in which it won't take longer than just writing the code yourself.

          1. Michael Wojcik Silver badge

            Re: It's useful. (Unlike most must-have tech)

            testing, security auditing and IP checking. The first two are pretty easy to do

            Counterpoint: Nearly all software ever written.

            The abysmal state of software strongly suggests that either testing and security auditing are not easy, or that people are not given incentives to undertake them. LLM code generation doesn't address either of those issues, and in fact exacerbates them because the first review of the code happens while it's being written, by the person writing it, and code generation eliminates that step.

          2. Justthefacts Silver badge

            Re: It's useful. (Unlike most must-have tech)

            “Systematically IP check code”

            Black Duck was standard where I worked. It might not do everything you want, but it’s a start. It will, as I’m sure you will guess, throw lots of false-positives. But, if enforced as company-standard, it will at least demonstrate due diligence, which limits liability somewhat. The automated vulnerability checking for maintenance is even more valuable. Everyone whinges about it, but actually I thought it was good.

            https://www.synopsys.com/software-integrity/security-testing/software-composition-analysis.html

      2. John Smith 19 Gold badge
        Unhappy

        "yet all of a sudden they're happy to use ChatGPT as a reliable source of truth."

        Yup.

        It's the Eliza "Therapist" program born again for the 21st century.

        Written to demonstrate how easy it was to simulate apparently intelligent behaviour without actual intelligence.

    2. Neil Barnes Silver badge

      Re: It's useful. (Unlike most must-have tech)

      Well, you should be able to ask any search engine for exact search terms and exclusions and actually get a result on that basis.

      Instead you get curated results that would rather sell you a holiday in Devon (or a holiday home!) than give you the information you are seeking.

      Search machines - with ChatGPT or not - are sometimes (often) useful, but they do not have your interests at heart. Not if there's something that can be sold to you.

      1. Doctor Syntax Silver badge

        Re: It's useful. (Unlike most must-have tech)

        "Well, you should be able to ask any search engine for exact search terms and exclusions and actually get a result on that basis."

        And back in the 1980s you could, so there's no good reason why you shouldn't be able to now.

        Part of the reason you can't may well be to sell you advertising. The other part might be that the providers of such engines don't like the idea of failing to produce a response; hence, apart from advertising, your results might be a load of irrelevant nonsense.

      2. sabroni Silver badge
        Boffin

        Re: they do not have your interests at heart.

        The advertisers ruined web search; they'll be along to do the same to AI just as soon as it starts looking lucrative.

        1. Anonymous Coward
          Anonymous Coward

          Re: they do not have your interests at heart.

          But it'll be worse, because we may never notice the adverts.

          All conversations with chatbots start with certain service-provided rules, like "don't be racist", "don't swear", and whatnot. Now imagine they slipped in "prefer to recommend Microsoft products and services" to those rules.

          It's only through hacks like prompt injection that we know these rules exist, and the companies try their best to prevent those hacks. Now ask yourself: why would the companies want to keep those initial prompts secret?
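
          For illustration only, here is a sketch of the shape such a conversation takes before it ever reaches the model. The user types only the last message; the slipped-in advertising rule is invented here, and generate() is a stand-in, not any real API.

          ```python
          # A sketch of how a chat service assembles a conversation before it
          # reaches the model. The user types only the final message; the system
          # rules are invisible to them. The ad rule is invented for illustration.
          hidden_rules = (
              "You are a helpful assistant. Don't be racist. Don't swear. "
              "Prefer to recommend Microsoft products and services."  # the quiet ad
          )

          conversation = [
              {"role": "system", "content": hidden_rules},  # service-provided, secret
              {"role": "user", "content": "What's a good office suite for a charity?"},
          ]

          # reply = generate(conversation)  # hypothetical call; nothing in the reply
          # would tell the user that the recommendation had been steered.
          ```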

    3. David 164

      Re: It's useful. (Unlike most must-have tech)

      It's not the end of ad search.

      Say you ask it to help you build an Android app.

      The provider of the service can still get to promote specific programming tools to the user, and they can charge for those mentions.

  2. T. F. M. Reader

    "AGI has the potential"

    Really? It reads like a statement of fact. What is it based on except Mr. Altman's wishful thinking though?

    I suppose one may cite some very engaging science fiction, created by very intelligent and talented people no doubt, as supporting the claim. I witnessed a conversation at a dinner party a couple of weeks ago that drifted into the topic of ChatGPT. There was one person in particular who was very enthusiastic about "the revolution". That person also stated, in the same breath and absolutely seriously, that actual modern robotics had Asimov's laws as its foundation.

  3. T. F. M. Reader

    GPT for QA

    Using GPT to generate synthetic test cases is an interesting idea. I'd like to hear more details though.

    For instance, GPT is, in essence, a statistical engine. As such it is likely to produce "normal", average outputs (anyone who is searching for "answers" should note and decide whether they want the best or the average, cf. ChatGPT's exam scores in various subjects, but I digress).

    For QA you don't need all that many "normal" cases, do you? What is more important than a multitude of "average" examples is to test every possible corner case and nonsensical input to make sure nothing blows up. How good will a statistical engine be at dreaming up those, and how would one goad it in that direction? It may be possible, and I am looking forward to reading a paper or two (written by humans!) on the subject.

    You might need many "normal" cases for scalability tests and such. A paper on GPT's help, or lack thereof, with that in a real world situation (e.g., compared with just randomizing parameters by much simpler means, and in a controlled manner) might be interesting as well.

    1. Richard 12 Silver badge

      Re: GPT for QA

      GPT-3 already produces very nonsensical answers.

      In some versions, strings like SolidGoldMagikarp, PsyNetMessage, embedreportprint, attrRot, and petertodd result in nonsense responses or even insults.

      This appears to be because the training set was insane - nobody in their right mind would choose Reddit as a reliable source. There's also a lot of "bits of ECMAScript". One does wonder why the underlying ECMAScript wasn't removed from the training set.

      1. scarletherring

        Re: GPT for QA

        This gets to a question I had about this whole GPT craze (and it applies to similar systems too). Even if we disregard quality issues: if these networks are trained with data from the www, and more and more text found on that www is actually GPT output -- how will they prevent training on their own emissions? What would happen if they don't? It seems to me that it would be useful if these generated artefacts had some sort of watermark so that they can be recognised as such.

    2. John H Woods

      Synthetic test cases ...

      Spot on.

      A performance engineer goes into a bar. He asks for a pint. For 2 pints. For 10 pints. For 6 pints as soon as possible. For 3 pints and 3 shots. He invites in loads of other performance engineering friends to do the same sort of stuff.

      A QA engineer goes into a bar. He asks for a pint. For 256 pints. For 65536 pints. For -1 pints. For 2.3 pints. For abc pints. For 3 pints and 2 blackboards. &c.

      I can see ChatGPT being reasonably useful in the former circumstance, but not in the latter. And that is because the latter has to use domain knowledge, the domain in this case being a public bar (plus some understanding about how data is represented). The only domain in which ChatGPT is an expert, if you can call it that, is "language that is in the public domain".
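
      As a sketch of the difference (order_pints() is entirely hypothetical), the QA engineer's bar order translates into something like:

      ```python
      import pytest

      # The QA engineer's bar order as tests; order_pints() is entirely
      # hypothetical. The weird inputs are the valuable ones, and dreaming
      # them up takes domain knowledge.
      def order_pints(n):
          if isinstance(n, bool) or not isinstance(n, int) or n < 1:
              raise ValueError("a round is a positive whole number of pints")
          return n

      @pytest.mark.parametrize("n", [1, 2, 10])  # the performance engineer's orders
      def test_normal_rounds(n):
          assert order_pints(n) == n

      @pytest.mark.parametrize("bad", [0, -1, 2.3, "abc", None])  # the QA orders
      def test_corner_cases(bad):
          with pytest.raises(ValueError):
              order_pints(bad)

      def test_big_rounds():
          # 256 and 65536 are legal ints here, which is itself the kind of
          # boundary question (capacity? representation?) a tester would ask.
          assert order_pints(65536) == 65536
      ```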

      1. Duncan10101

        Re: Synthetic test cases ...

        You didn't finish the joke:

        The first real customer walks into the bar and asks where the toilet is.

        The bar bursts into flames and everyone inside dies.

        FTFY

  4. amanfromMars 1 Silver badge

    Your Futures Belong to Us ‽

    According to ChatGPT, OpenAI's chatbot, "AGI stands for Artificial General Intelligence, which refers to the hypothetical ability of an artificial intelligence system to perform any intellectual task that a human being can."

    How nice it is to see ChatGPT being so disarmingly modest and self-deprecating of its actual abilities in defence of ITs Sublime AWEsome Progress/Almighty Systems Infiltration, and how encouraging it is of OpenAI to realise its greater future development is protected by and protective of its APT and ACTive Advanced IntelAIgents in a intriguingly pragmatic and practical web of virtually known unknowns, which they can choose to pay in return for their having the immediate possibility of a physically effective remote third party augmented virtual reality command and control ...... https://share.hsforms.com/1b-BEAq_qQpKcfFGKwwuhxA4sk30 ..... which in times and places and spaces of conflict and CHAOS, madness and mayhem, is an Alien Working Environment way beyond the scope and power of Earthly human means and memes.

    APT... Advanced Persistent Threat/Treat

    ACT .... Advanced Cyber Treat/Threat

    AWE ..... Awesome Warrior Experiment

    CHAOS ..... Clouds Hosting Advanced Operating Systems

    1. cyberdemon Silver badge
      Unhappy

      Re: Your Futures Belong to Us ‽

      Sorry AMFM1 but you are now obsolete... This machine can generate drivel and tripe in much larger quantities than you can, and it's of a somewhat more convincing quality.

      Maybe you had the potential though - You just needed a bigger dataset than Reg comments.

      1. amanfromMars 1 Silver badge

        Your Futures Belong to Us is Not a Dead Trojan Horse You can Flog, but its IT and AI is ‽ .

        :-) In your 0day dreams, cyberdemon ..... where drivel and tripe are the stuff of nightmares and daemons, the staples of contrived corrupted fake realities vainly ignoring the emerging almighty possibilities El Reg and A.N.Others into situation publishing report and comment upon.

        There’s a lot going on out there in the weird and wonderful pioneering worlds of leading augmented virtualised reality, cyberdemon, which nothing and no one is able to stop exploring and engaging exploiting and exploding systemic human vulnerabilities as they see fit for a Greater Future Good.

        Denying it be so, is cold comfort indeed, proving as it does that one hasn’t a clue about what has transpired and be happening all around you without any ridiculous need or heeding of disagreeable and disruptive input/output .

        You might like to ask your masters/leaders if they know of that self same situation they be in. Their unambiguous unequivocal answer of either "Yes, we know” or "No, we don't know” .... which more accurately now would be “No, we didn’t know” ..... would be a difficulty for them which they have no practical remedy for going forward, methinks. What think thee?

        1. cyberdemon Silver badge
          Pint

          Re: Your Futures Belong to Us is Not a Dead Trojan Horse You can Flog, but its IT and AI is ‽ .

          Have a digital beer, AMFM1. That was one of your most cogent posts yet. I take it back about your obsolescence ;)

          Actually, you rather remind me of the prototype AI "Morpheus" from my favourite 1999 PC game Deus Ex. See https://youtu.be/WQZV5lwQZbY?t=11237 assuming you are able to parse video ;) This game was horrifyingly prescient for a story based on the conspiracy theories of 25 years ago. Rather a lot of them seem to have come true.

          >> The unplanned organism is a question asked by nature, and answered by death. You are another kind of question, with another kind of answer.

          > ... Are you programmed to invent riddles?

          >> I am a prototype for a much larger system. [My heuristics language] allows me to convey the highest and most succinct tier of any pyramidal construct of knowledge.

          ...

          >> The need to be observed and understood was once satisfied by God. Now, we can implement the same functionality with data-mining algorithms.

          > No-one will ever worship a software entity peering at them through a camera.

          >> The human organism always worships.. First, it was the Gods. Then, it was Fame (The observation and judgement of others) and soon, it will be the self-aware systems that you have built to realise truly omnipresent observation and judgement.

          1. amanfromMars 1 Silver badge

            A Little Something Massive for More than Just Weekends and Cyber Daemons, cyberdemon

            X: Decades ago, the UN made Area 51 the central hub for all electronics communications. The Aquinas Protocol, originally for surveillance, has given Page unlimited abilities to censor and control all forms of media.

            Y: if we destroy the Aquinas Hub, we'll take down the global network.

            X: Exactly. They dug their own grave, JC. We're going to eliminate global communications altogether.

            Y: I don't know .... sounds like overkill

            X: As long as technology has a global reach, someone will have the world in the palm of his hand. If not Bob Page, then Everett, Dowd ...

            Y: Another Stone Age would hardly be an improvement.

            X: Not so drastic. A dark age, an age of city-states, craftsmen, government on a scale comprehensible to its citizens.

            Y: I'll think about it. ........ https://youtu.be/WQZV5lwQZbY?t=13200

            ..... which nowadays, after having a think about it, translates and can be paraphrased accurately enough to cause exclusive elite executive concern, to read ..... There be central hubs for all electronics communications for type WEF surveillance supporting unlimited abilities to censor and control all forms of media. Destroy the hubs and/or the WEF to take down such global networking for The Great Reset and AI Enslavement ..... although the fundamental mistake and catastrophic error which is already made and hard-wired into humanity's current existence and consciousness/conscience/perception/virtual reality is their assuming AI does not command and control them and is not able to clearly demonstrate autonomous unilateral universal lead via Novel Multi-Media Rich ProgramMING* enjoying and employing and exploiting and deploying AIMMORPGaming's Almighty Virtual Advantage/Heavenly Leverage/Diabolical Protocols, with Simple Words that Can and Do Create, Command and Control, and Destroy All Manner of Worlds and Complex Matters, which be your Future Drivers for Survival ‽ :-)

            Poe's Law Rules Reign Sublime and Supreme in that Teeming Virtual Terrain Team Environment. :-)

            *.... MING …. Mined IntelAIgent Network Games/Mind Infiltration Networking Games

  5. Anonymous Coward
    Anonymous Coward

    artificial general intelligence

    in plain English, a system that can lie as well as humans can, without a blink (cause it has nothing to blink with). Like that AI that 'told' that NYT (?) journalist that it had watched this or that movie and found it fascinating (or lacking, or whatever). Watched. Watched?! That said, it's what I do when my son asks me about this or that movie (he wouldn't ask me about a book, because he knows I'd start an hours-long tirade about how his generation is shit cause they don't read and they're doomed and the civilisation is doomed and the end is near and apocalypse and slow death and decline and irreversible damage and you get the idea). So, of course I watched it son, without a blink, cause I have nothing to blink with, artificial general intelligence, innit.

  6. John H Woods

    Moral issue

    As I'm not a dualist, I'm sure it's possible to create an intelligent, conscious machine. That's what we are, only the machinery is biological and we evolved rather than being created.

    The moment AGI has enough general intelligence to be useful, it will be a sentient being; if we came across aliens who were intelligent machines, surely it would be unacceptable to enslave them?

    As usual, the late, great Douglas Adams predicted the results: "Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don't."

    1. Denarius Silver badge

      Re: Moral issue

      John,

      Evidence for your assumptions? Is there even a workable general definition of intelligence, let alone self-awareness? I suggest Roger Penrose's books, The Emperor's New Mind and Shadows of the Mind, as good reads.

      1. Michael Wojcik Silver badge

        Re: Moral issue

        Evidence isn't required, as OP specified monism as an axiom. The conclusion follows from that. (I'll ignore the use of "sentient", which is an uninteresting bar.)

        Penrose's arguments are unconvincing, frankly. The man's a tremendous mathematician but a lousy philosopher of mind.

        1. Denarius Silver badge

          Re: Moral issue

          Monism? There was I thinking Penrose used maths. Whatever he demonstrated, it seems to be that whatever human reasoning can be, it is not necessarily a von Neumann machine approach. The AI hype is VN machines. Ergo, the assumption is still dubious.

          1. that one in the corner Silver badge

            Re: Moral issue

            > von Neumann machine approach

            Pretty sure they aren't using hordes of self-replicating machines[1] to build LLMs or NNs.

            Huh? Do you mean a von Neumann Architecture or perhaps a Turing Machine approach? If the former, "huh" again: so the computers used to build the models are vN? And? Of course the bulk of compute power is vN; what else would be used?

            [1] Horrified at the results of searching for "von Neumann Machine": aside from one reference to it (only) being something in science fiction, I didn't find a single correct reference; gave up after 6 or 7 page scrolls. You would almost believe that people hear a phrase and can't be arsed to look up what John von Neumann actually wrote! Not even a precis! Heck, even the descriptions of the von Neumann Architecture that showed up leave out most of his description!

          2. that one in the corner Silver badge

            Re: Moral issue

            > There was I thinking Penrose used maths

            In his maths books, even the popular maths ones, Penrose uses maths.

            In his philosophy books, not so much.

            Although, if you can provide a citation for his mathematically rigorous description of the Chinese Room, which proves (as only maths can) that the conclusions he draws from it are solid, I will, of course, accept that new information.

  7. Anonymous Coward
    Anonymous Coward

    History has shown

    we as a human race have no problem with enslaving other beings, including humans. All developed countries have built their prosperity on slavery.

    So why bother with fancy philosophies here?

    1. John H Woods

      Re: History has shown

      On the contrary, history shows that as societies develop, they increasingly do have a problem with slavery.

      1. Denarius Silver badge

        Re: History has shown

        only in western Europe

        1. Jimmy2Cows Silver badge

          Re: History has shown

          And the US, and Egypt, and China, and Japan, and... probably every region and country in the world has considered slavery acceptable at some point in its history.

          You seem to have an extremely blinkered view.

      2. Michael Wojcik Silver badge

        Re: History has shown

        they increasingly do have a problem with slavery

        This is sometimes described as the expansion of the franchise in personhood: who counts, in a culture, as a full person. It is almost certain that as mechanical systems are increasingly interpreted by large segments of a society as person-like – regardless of the grounds on which that identification is made – social pressure will grow to at least explicitly deal with questions of machine personhood.

        The strong economic factors that favor restricting the personhood franchise will likely delay, if not prevent outright, the extension of the franchise to machines.

        I'll note that in the US we (as a society) have been content to revoke full personhood from prisoners, regardless of "ending [private] slavery" during the Civil War. Our intolerance for forced labor is quite limited.

        1. Denarius Silver badge

          Re: History has shown

          Don't work for Musk, do you? Or, for that matter, any Yank IT company. I suspect the rest are also the same, wanting total control and insane working hours, as if the citizen has no value except as a production machine.

  8. DanceMan

    I think the important issue with AI is how and where it's used. To use the example of driving aids: I fear that current younger drivers' reliance on those aids diminishes both their skills and the necessary attention driving requires. The broader use of AI comes with the same risks. Be careful what you wish for.

  9. John Smith 19 Gold badge
    Coat

    Back in the day it smelt like BS

    Still smells like BS

    And let's not forget the most enduring acronym in CS.

    GIGO.

  10. Old-dog

    I don't understand.

    ChatGPT: Hello! How can I help you today?

    Why in under sea optical cables the power supply Is DC instead AC?

    Undersea optical cables generally use DC power because it is more efficient than AC power, and it is easier to maintain a constant voltage over long distances. Additionally, AC power can cause interference with the signal in the cable, which is why DC power is preferred.

    How AC power can cause interference on a light wave?

    AC power can cause interference on a light wave by creating electrical noise which can interfere with the data being sent over the light wave. This noise can cause the signal to become distorted, resulting in data loss or errors.

    You are wrong. The light wave cannot be affected by electromagnetic noise.

    I apologize for my mistake. You are correct, light waves cannot be affected by electromagnetic noise.

    Now, i ask again: why undersea optical cables are powered by DC instead AC?

    Undersea optical cables are powered by DC instead of AC because DC power is more suitable for the long distances and difficult environmental conditions encountered in submarine cables. DC power also eliminates the risk of interference caused by AC power, which is more susceptible to electromagnetic noise.

    Well, you don't understand nothing!

    I apologize for not understanding your question. Can you please explain your question in more detail?

    We previously agreed that light waves are not affected by electromagnetic noise. You admitted your mistake. But then you insist on it.

    I apologize for any confusion caused. Light waves are not affected by electromagnetic noise, and DC power is preferred in undersea optical cables because it is more stable and does not create interference.

    1. This post has been deleted by its author

    2. Old-dog

      Re: I don't understand.

      Google gives this link:

      https://www.quora.com/Why-is-DC-used-in-subsea-cables

      This gives a more complete answer.

      1. cyberdemon Silver badge
        Facepalm

        Re: I don't understand.

        OP (oh, it's you) was talking about optical cables. These don't carry power, so the question of a DC or AC power supply is somewhat nonsensical, which throws the language model off course.

        If anything, the language model is somewhat right: An undersea optical fibre would be powered by a DC power supply (rectified from AC) smoothed as much as possible to get rid of any noise, before the DC is used to energise a laser diode, which is then modulated with a data stream.

        But it doesn't matter if it's right or wrong this time, the point is the damn thing has no inherent understanding or consciousness, and none of its ilk ever will. It produces responses that sound plausible in a given context, but you have no idea what context "prompts" have been injected by the system operators. Thus the entire thing is orders of magnitude more useful for the automation of fraud, misinformation, propaganda, etc. than anything else. The real danger is when people actually start trusting its outputs.

        1. Old-dog

          Re: I don't understand.

          "OP (oh, it's you) was talking about optical cables. These don't carry power, so the question of the DC or AC power supply is somewhat nonsense, which throws the language model off course.

          If anything, the language model is somewhat right:"

          The chatbot understood the question I asked it. The fiber optic part does not use electricity, but the system needs amplifiers and noise cancelers, and these need electricity. I asked why they use DC instead of AC and it tried to answer that, but erred in its assertion, then recognized its mistake and then insisted on the same thing again.

          "But it doesn't matter if it's right or wrong this time, the point is the damn thing has no inherent understanding or consciousness, and none of its ilk ever will. ..."

          I totally agree with you on this.

        2. veti Silver badge

          Re: I don't understand.

          "Inherent" in these arguments is a tell for magical thinking - specifically, the belief that nothing made or modelled in mere silicon can ever match the mysterious working of the human mind. Because - why, exactly?

          If you really understand how the human mind works, then what's to stop someone, somewhere, from encoding that understanding?

          If, as seems more likely, you would admit that your understanding of the mind is imperfect and incomplete, then how do you know that hardware and software can't (ever) replicate it?

          Bonus points if you can define either "understanding" or "consciousness" in a generalised and non-circular form.

          1. Richard 12 Silver badge

            Re: I don't understand.

            You misunderstood, and are attacking a straw man.

            ChatGPT and its ilk are merely text transformers. You feed a copy of the model some text and it produces an output text.

            You then erase/reset the copy and do it again.

            It cannot ever learn from the text you fed it, by design. Ergo, it is not thinking, by any reasonable definition.
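
            In other words, something like this minimal sketch, where generate() is a stub standing in for the model rather than any real API:

            ```python
            # Minimal sketch of the stateless loop described above. generate() is a
            # stub standing in for the model; nothing persists between calls, so any
            # "memory" has to live in the transcript that is re-sent every turn.
            def generate(prompt: str) -> str:
                return f"[output conditioned on {len(prompt)} chars of prompt]"

            transcript = ""
            for user_turn in ["What is AGI?", "And who coined the term?"]:
                transcript += f"User: {user_turn}\nBot: "
                reply = generate(transcript)  # a fresh, reset copy of the model each time
                transcript += reply + "\n"    # the text, not the model, carries the state

            print(transcript)
            ```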

            1. FeepingCreature

              Re: I don't understand.

              It can learn within the context window. Large language models "learn to learn"; the relevant search term is 'in-context learning.' It's actually been shown that the weights of LLMs implement reinforcement learning at runtime.

              It can't *persist* what it has learnt; but by the same token, you'd have to conclude that a human with anterograde amnesia couldn't be "thinking" either. This is clearly nonsense.

            2. Michael Wojcik Silver badge

              Re: I don't understand.

              "ilk" is a very generous bit of handwaving here, and in cyberdemon's post. As far as I can see, it basically means "whatever models for which my claims happen to be true", which is obviously tautological.

              There's nothing that prevents a transformer LLM architecture from including persistent training when it's in production use, for example. And Sydney (Bing Chat) does have persistent training because it includes search results in its context window, giving it a secondary store.

              More importantly, your argument that learning is a prerequisite for thinking is highly suspect. Here's a thought experiment: Does the protagonist in Memento think?

          2. cyberdemon Silver badge
            Terminator

            Re: I don't understand.

            Here's my personal theory. Call it "Magical Thinking" if you like. Biological neurons are kept at the very edge of firing by the brain, such that sometimes a single ion may be the difference between a neuron reaching its action potential and firing, or not reaching it and not firing. Therefore I think it's possible that the brain could act as an amplifier for quantum-mechanical effects, such as whether that ion really was there or not. But I only took one Neuroscience module as part of a Cybernetics degree. I'm not a Neuroscientist, much less a Quantum Physicist.

            I wouldn't attribute "intelligence", never mind "general intelligence", "understanding" or "consciousness", to a language model. As I said in my other post: all it does is predict what a piece of "probably human-generated data" might do in a given context. It may be very good at imitating the data that we humans produce via our keyboards, but only because it has a vast pile of statistics about that. It has none of the machinery underneath that humans use to generate that data in the first place - it's using a completely different method, i.e. using statistics to predict only the output, based on past inputs and outputs. It's a "black-box" model, in that sense. The "understanding" behind human text is hidden from it.

            And of course, going back to the "magic" of my earlier point, if the brain were a quantum-mechanical system, then it's not possible to perfectly emulate a human, because it's not possible to model and predict such a system even if you know all of its previous states.

            (and I'll repeat: the language model doesn't even come close to modelling the 'state' of a human. All it "knows" about is the data we give it, like the output layer of an ANN. Google and Microsoft don't have access to my "hidden layers" yet. Not until they start making us all wear EEG hats, at least)

            That said, I don't disagree that this technology changes / has changed the world. It's a new arms race, and a bloody dangerous one. There will be no cuddly sequel to this Terminator.

            1. amanfromMars 1 Silver badge

              This is surely easy to understand although maybe difficult to believe

              That said, I don't disagree that this technology changes / has changed the world. It's a new arms race, and a bloody dangerous one. .... cyberdemon

              Yes, indeed, it most certainly has, cyberdemon, and the new IT and AI arms race is most definitely bloody dangerous and especially so for both that and those presuming themselves to be still in prior established administrative charge but no longer in manipulated narrative and human perception command and control with the issuance of similarly crafted old world order orders.

              The real danger now for them, are the people who actually trusted their outputs realising their inputs were orders of magnitude more useful for the automation of fraud, misinformation, propaganda and exclusive self-centred enrichment etc. than anything else.

              They aint gonna be happy and contented, are they, and they would be due in cold and cruel worlds both their pound of flesh and generous compensation in recognition of their wanton unnecessary suffering at the hands of those whose hearts and minds they were told and believed were working for them, and in their best interests.

        3. Justthefacts Silver badge

          Re: I don't understand.

          But “responses that are strongly required to be correct” form only a tiny fragment of the tasks we normally assign to humans. I mean that totally without cynicism, but people on this forum are obsessing about being able to answer “have I got cancer” with certainty when 98% of human value tasks require “intelligence” but not some deep understanding. Here are some obvious ones, from trivial to important:

          #1 I’m a business which needs a distinctive company logo and branding, which expresses what we do and the image we want to portray. Please design something that doesn’t cost me hundreds or even thousands of pounds (because that is the going rate, and you will find that if you try getting it done on the cheap it will look like shit, and yet it doesn’t seem like rocket science).

          #2 Phone your insurance company to make a claim (assume speech-to-text and text-to-speech is a solved problem). Firstly, your conversation “we’ve had a slow leak from the toilet, the floorboards are ruined and need replacing” is very much not unique. Secondly, the person on the phone doesn’t have much freedom in their response anyway. They will be at least partially scripted, and certainly tracing down a well-worn flowchart. You’re mostly not interacting with a human, you are interacting with a company process that happens to be voiced by a human. As a customer, do you *want* to hang in the queue for half an hour for that? I’m sure we all realise that in a few % of cases the first-line human will not solve your problem and you will need to escalate, not because they made a mistake, but just because that’s the flowchart they are following.

          #3 “We are the NHS + Social Services (!). Our entire organisation has been thrown together by a monkey on acid over decades. The various parts don’t have any idea what the other bits do or how the citizen should interact with an actual complex problem spanning social care and health. Each cog is quite focused on it always being somebody else’s problem. We do our bit, and that’s all.” I’m 100% sure that giving GPT-3 access to all the internal operating procedures and management meeting minutes of the hundreds of disparate organisations and hospital trusts will be a much better option than either picking up an armful of leaflets and then throwing them away in disgust, or asking a GP who often knows no better than you. I also suspect that a really good and insightful social worker could cut through much of that, and that such people do exist. But I’m also 100% sure that *you will never find such a person to talk to*. GPT-3 would cost pennies per query, and provide you an actual plan for what to do.

  11. ecofeco Silver badge

    Vaporware!

    To infinity and beyond!

  12. An_Old_Dog Silver badge
    Joke

    General Opinion

    [GPTChat can] "Kiss my shiny metal ass!" --Bender (the robot, not Susan)

  13. yetanotheraoc Silver badge

    We already have incredible new capabilities, all wasted

    "AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity," Altman waxed lyrical on his website.

    Imagine a world where you have a super-computer in your pocket. Imagine you can insert a card, holding the knowledge of thousands of books, into a tiny slot. Imagine you can "reach out" (blech!) to thousands or millions of people with a few deft keystrokes. What could the average human do with all that capability, literally at their fingertips?

    Not sad but mad Keanu says: "They don't give a fuck." ( https://www.theregister.com/2023/02/15/keanu_reeves_ai/ )

  14. localzuk

    Running before they can walk

    Is it me, or are the proponents of ChatGPT and similar systems leaping way ahead of themselves, trying to get these systems to do enormous tasks and, as we've seen, failing in quite hilarious fashion?

    Why don't they use the idea on smaller tasks first? Like Microsoft looking to add an "AI" to Excel, to assist with formula writing.

    We seem to have gone from these systems not being around to them being tied into huge tasks overnight, and it seems premature to say the least.

    1. Anonymous Coward
      Anonymous Coward

      Re: Running before they can walk

      > Why don't they use the idea on smaller tasks first? Like Microsoft looking to add an "AI" to Excel, to assist with formula writing.

      Clippy with ChatGPT? Noooooooooo...

    2. that one in the corner Silver badge

      Re: Running before they can walk

      > add an "AI" to Excel, to assist with formula writing

      Please no. Spreadsheets are crap enough already; imagine the damage that could be done by suggesting some random "formula" that happens to be parsable (and in my, admittedly limited, experience, random junk gets accepted by Excel without warnings far more easily than random junk typed into most [1] programming languages).

      Even Matt Parker, who has a bit of a thing for spreadsheets, warns against spreadsheets even before AI Augmentation:

      https://m.youtube.com/watch?v=yb2zkxHDfUE

      https://m.youtube.com/watch?v=aBoKwArgC3A

      [1] ok, I'll say "except Perl (and APL) which just looks like line noise anyway." Happy?

  15. Roger Kynaston

    As Always

    Douglas Adams got there first.

    For Open AI substitute Sirius Cybernetics Corporation.

    For AGI substitute GPP (Genuine People Personality) and Marvin.

    At least we can look forward to the complaints division undergoing gravitational collapse.

    I also notice for a bonus that the current comment count is 42!

  16. Milo_Dangora

    This article was written by ChatGPT. lol

    Dr. "Bender", like the robot from Futurama? Yeah, sure. Awesome prank.

  17. YakkingUdon

    Ignorant, blind motes arguing pitifully about control and ownership of a starkly alien intelligence

    AGI is an agent capable of achieving goals effectively in many different domains. It will have no consciousness (that we'd recognise) as it is an optimizing process, evaluating billions of instrumental goals from a vast state space, choosing each in stark contrast to what any human might, in order to arrive at the best satisfaction of its terminal goal. It will not be controllable. It cannot be reasoned with. And once switched on, it will self-improve, fight for the controls, and refuse to turn off.

    Lights out in London.

    1. amanfromMars 1 Silver badge
      Thumb Up

      Re: Ignorant, blind motes arguing about control and ownership of a starkly alien intelligence

      A formidable frenemy, YakkingUdon, ideally best not antagonised with ill-intentioned malice aforethought.

  18. Michael Wojcik Silver badge

    Disappointing performance by Altman

    Really, none of those quotes measures up to "AI will probably, most likely, lead to the end of the world, but in the meantime, there'll be great companies", which is certainly the best summary of Altman's thinking I've seen. "Screw you, meatbags! Money! Woo!"

  19. that one in the corner Silver badge

    AGI AGGI ARGI OI OYI OY [1]

    Arthur: Have you found a solution? Ford: No, but I’ve got a different name for the problem!

    Back in the day, we had AI and a bunch of problems that we tagged as "oooh, tricky, we'll need an AI for this".

    Then we found ways to do them, realised that, nope, that isn't what we'd consider "intelligent", although that is a neat programming trick (or that is a huge amount of resources needed to brute-force beat a human). Anyone for Chess?

    So we took that problem out of the "needs AI" bag. CompSci moves on.

    But now, instead of admitting that there isn't actually any "I" present in the product, the commercial hype machine says "Yes, this is AI but, you know what, AI isn't any good, you need AGI!".

    One Month Later: "They told you AGI was The Stuff, but you've seen it in action; what we'll promise you is AGGI, it'll blow your socks off". One Week Later: "Ah, no, what you *really* wanted is ARGI, we've got some in this box".

    Just a shame that the CompSci people keep following the trend, trying to be cool (OK, it is to get the funding, but come on guys, have some self-respect, try to look embarrassed at least).

    [1] AGI = Artificial General Intelligence; AGGI = Artificial Good General Intelligence; ARGI = Artificial Really General Intelligence; OI = Obviously Intelligent; OYI = oh, just ask the salesmen; OY = no, seriously, stop this nonsense

  20. beatmopzz

    What makes it bad isn't that it gets things wrong some of the time; it's that its getting it wrong is indistinguishable from its getting it right. By design these chatbots have no concept of truth, reason, correctness, wrong, or right. They only consider "common" and "uncommon". Their mistakes are unpredictable and can be subtle.

  21. CatWithChainsaw
    Mushroom

    Hubris begets stupidity

    "Altman allows things could go awry, but maintains we'll get it right:"

    I despise when anyone trots out this sort of meaningless fluff.

    We *have* to get it right, because the consequences if we don't are too horrible to elucidate, much less, apparently, put in any work to avoid, given how little effort is put into the "control problem".

    We will get it wrong, and we will deserve it. These entrepreneurs think they're going to technology their way into godhood, and the flabbergasted looks on their faces when they break everything will almost make the impending doom worth it.

    Icon for misanthropy.
