Forget the AI doom and hype, let's make computers useful

Full disclosure: I have a history with AI, having flirted with it in the 1980s (remember expert systems?) and then having safely avoided the AI winter of the late 1980s by veering off into formal verification before finally landing on networking as my specialty in 1988. And just as my Systems Approach colleague Larry Peterson …

  1. Pascal Monett Silver badge
    Thumb Up

    That is a quote I will keep

    "The neural network is unable to generalize from what it has 'learned' ”

    And I'm going to keep this article's URL to be able to show it to anyone who starts spouting off about how computers are now "intelligent".

    Unless they're in marketing. Then it would just be a waste of time.

    1. Ian Johnston Silver badge

      Re: That is a quote I will keep

      To be fair, computers are probably now more intelligent than people who work in marketing. Mind you, Fisher-Price have for years made computers more intelligent than anyone involved in HR.

      1. katrinab Silver badge
        Megaphone

        Re: That is a quote I will keep

        No they are not.

        Marketing people can reliably recognise everyday objects like door handles, beer glasses and so on. Computers can't.

    2. FeepingCreature Bronze badge

      Re: That is a quote I will keep

      It's just false though. Like, do actually read the papers. The whole point is that it generalizes.

      1. Gene Cash Silver badge

        Re: That is a quote I will keep

        No, the point is that it does NOT generalize. It accumulates a vast mass of training data, and is basically looking to see if the current situation matches any of that training data.

        Today's "AI" is glorified spellcheck (to borrow a recent comment)

        1. FeepingCreature Bronze badge
          Unhappy

          Re: That is a quote I will keep

          I don't know what to say. This just isn't true. People say this because they want it to be the case, but it really just isn't. There isn't even any evidence for this! There was this one paper that said you could phrase LLMs as a linear operator, but it used a *really* tortured operation and very obviously had little to do with how those networks actually work. Meanwhile, all the actual papers by OpenAI were "we built this network and it managed to generalize in interesting ways" and all the research in the field is iterations on this theme. Also if you actually use a current LLM, it is very obvious that it can do broadly novel work using generalized knowledge.

          I frequently get LLMs to write programs that have simply never existed, and I can literally watch it iteratively build the code up using its understanding of what the code is doing and what I am asking of it - which, just to emphasize again, are novel requests on novel code that fall into established, generalized patterns of understanding.

          Any claim that LLMs just copypaste from the corpus is simply not compatible with either the scientific publications of the field or observed reality. This is the machine learning equivalent of climate change denial.

          1. Michael Wojcik Silver badge

            Re: That is a quote I will keep

            Shouting into the void, I'm afraid. The denialists here refuse to look at the research or indeed pay any attention to anything that disagrees with their dogma.

            It's unfortunate because that effort could instead be spent on expounding on the reasons why "generative AI" is counterproductive.

          2. jmch Silver badge

            Re: That is a quote I will keep

            I'm not sure that's what 'generalise' means here.

            For example, if you take an LLM that has been trained on C code, then it can put novel C programs together, as you describe. But importantly, the building blocks of all code are always the same few simple ones - mathematical, string and date functions, data retrieval and assignment, conditional execution and iteration. Now, let's say someone invents a new language called D. The question is: if you take an LLM that has been trained exclusively on C code, could it be told in a prompt "these are the syntax rules of D; code me whatever program in D"?

            An experienced developer who has never encountered a specific language would be able, from reading a few syntactic rules, to make his way into coding something simple in a new language. Could an LLM do that, or would it need to be trained anew on a dataset of 'D' programs?

            And either way, I think that the questions of whether an LLM is an AI or not, whether it can generalise or not, etc. are simply the wrong questions. The right question is: I have this tool - can it help me to do what I want to get done quicker / easier / better?

            1. FeepingCreature Bronze badge

              Re: That is a quote I will keep

              I have in fact made a new language that the LLM has never seen (check it out!) and it has been pretty good at putting together programs in it. However, ironically, it is very close to D (the actual language D, ie. DigitalMars D), so the model may have been cribbing off its D skill.

              (If you really want to stump it, ask it to explain in detail how lambdas work in D. This is a bit unfair though because they work very differently from other languages, and also the way they work isn't really documented anywhere. But you can brutally confuse the poor thing by asking it how exactly a lambda call in D gets the runtime frame pointer, despite the lambda being clearly passed in a compiletime parameter...)

              That said, your question is already answered: "Not well, but yes." One of the OpenAI papers is about this, I don't remember which offhand. The models "learn to learn": one of the patterns they pick up is gradient-descent reinforcement learning; ie. they can complete patterns even if the first time they have ever seen the pattern is in their context window during evaluation. The one thing they can't do (yet!) that humans can is transfer those run-time "virtual" weights to more permanent memory. However, it's unclear how much this matters, considering context window size is getting bigger and bigger. And this seems like an engineering issue; for instance, if we can identify the circuits that implement this "runtime reinforcement learning" pattern, we may be able to track which tokens the network attends to, and store them for the next "proper" learning phase.
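              To give a feel for the "complete a pattern seen only in context" behaviour, here's a deliberately dumb toy in Python - emphatically not how a transformer does it internally (real induction heads are learned attention circuits), just an illustration of the match-the-suffix-and-copy idea:

```python
# Toy "induction head": predict the next token by finding where the
# current suffix last occurred earlier in the context and copying
# whatever followed it there.
def complete(context, max_suffix=8):
    for k in range(min(max_suffix, len(context) - 1), 0, -1):
        suffix = context[-k:]
        # scan backwards through the earlier context for the same run
        for i in range(len(context) - k - 1, -1, -1):
            if context[i:i + k] == suffix:
                return context[i + k]
    return None  # no repeated pattern to latch onto

# The pattern below was never "trained on"; it exists only in the
# context, yet the completion is still recovered.
print(complete(list("abcabcab")))  # 'c'
```

              The point is only that completing a never-before-seen pattern from context alone is a coherent, mechanical operation; what the papers claim is that trained networks discover far richer versions of it.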

              1. t245t Silver badge
                Boffin

                Re: That is a quote I will keep

                Q: Describe how lambdas work in D

                ChatGPT: “In D, lambdas are like mini-functions that you create on the fly right where you need them. They're anonymous, meaning they don't have a name, and you define them directly within your code. This feature makes them handy for tasks where you need a quick function but don't want to bother with formalities like naming it or defining it elsewhere. They're quite similar to lambdas in other popular languages like Python, JavaScript, and C++. Here's a straightforward example”:

                auto add = (int a, int b) => a + b;

                1. FeepingCreature Bronze badge

                  Re: That is a quote I will keep

                  Yep and that's wrong, the specified syntax creates a delegate closure, not a lambda. You can tell because the parameter types are fully specified.

                  An example of a lambda would be alias add = (a, b) => a + b;.

                  The reason is that lambdas are actually purely compile-time constants. They don't capture scope (how could they? they aren't even values); they merely act like they capture scope, because the effect of passing a lambda (at compiletime!) to a function is that the called function (not the lambda!) is turned into a child of the current function. Note that the lambda is not assigned to a variable, because lambdas are symbols, not values - alias add captures by name, not by value. That's why lambdas are passed to functions as a symbol template parameter: array.map!(a => a + 2); note the !() that marks template parameters in D. map gets the frame pointer of the caller to pass to the lambda because the compiler turns the instantiation into a child function.

                  (Is this a good way to implement lambdas? Not really! But we're stuck with them.)

                  IMO what this demonstrates is that LLMs are just as prone to jumping to ready but wrong conclusions as humans are.

          3. Anonymous Coward
            Anonymous Coward

            Re: That is a quote I will keep

            the only generalisation it does is make shit up based on probability.

            it's not fucking intelligence in any shape or fucking form.

      2. Anonymous Coward
        Anonymous Coward

        Re: That is a quote I will keep

        To be fair, I think it's the fault of the author for failing to sufficiently generalize the meaning of "generalize". It can even happen to a human.

        The DeepMind computer chess master does not perform an exact lookup on same-board configurations - there are too many possible configurations for that to be possible - it *does* perform generalization. However, that *generalization* is not qualitatively the same as that of a typical high-level chess-playing human (assuming said human is not cheating and wired to a computer).

        And obviously, and arguably more importantly, a human can do much more than play chess - that's REAL generalization.

        1. Ropewash

          Re: That is a quote I will keep

          I believe it has been established that said human is cheating by being wired to a sex toy.

    3. Michael Wojcik Silver badge

      Re: That is a quote I will keep

      Since that quote is wildly inaccurate and demonstrates a dangerous lack of knowledge about SotA DL models, that would be a foolish thing to do.

      While I'm highly critical of "AI" hype and the use of generative AI, I'm also capable of following the research, which is clearly something that eludes most of the Reg commentators and in this case, unfortunately, Mr Davie as well.

      LLMs most certainly generalize, and in fact that is a prominent area of research. For a while the popular explanation was broad basins in the manifold, but that's no longer widely believed, for reasons.

      And so do other model architectures. Davie misstates his own primary example. Pelrine's experiment was not against AlphaGo — it was against KataGo, which is "based on" AlphaGo Zero with computational optimizations. Get the basic facts right. And AlphaGo Zero is not SotA — the confusingly-named AlphaZero is an improvement on AlphaGo Zero. More importantly, the "zero" models definitely do generalize. That's all they do. They start with nothing but the game rules and the model self-trains using unsupervised learning on its own game play. There is no external training set.

      Pelrine's experiment is important because it demonstrates that it's possible to defeat an AGZ-architecture model at a particular training level by searching for adversarial inputs which aren't well-represented in its model. All that shows, though, is that generalization in the model is incomplete. It most definitely does not show there is no generalization at all.

      The amount of wilful ignorance on this topic continues to be surprising. Once again the myth that people in tech strive to be rational is disproved.

      1. TheBruce

        Re: That is a quote I will keep

        Reading along and started thinking what does pedantic mean...

  2. Mike 137 Silver badge

    Thank you!

    Superb informative article from someone who really knows their stuff. A breath of fresh air among the fog of commercial hype and misunderstandings that constitute the public impression that LLMs are the sum total of "AI".

    1. Michael Wojcik Silver badge

      Re: Thank you!

      Rubbish. Davie is way off base here, and the knowledge he demonstrates of current DL research is shallow and misleading.

      Davie may well have expertise in other ML approaches, which I certainly agree are far more appropriate for many applications (including pretty much anything to do with networking) and have other advantages, such as explicability and good performance without requiring vast resources. But what he writes here about DL is either elementary or wrong.

  3. Dave 126 Silver badge

    A layman such as myself always expected a computer to be good at calculus (just as I expect a pocket calculator to be better than me at arithmetic), yet really bad at 'human' (or indeed animal) things like speech recognition, image recognition, and knowing when to stop beeping before I throw it out of the window. Or rather, computers were bad at these things until a few years ago.

    A useful umbrella term for all these AI ML LLM NN approaches might be "Newish Techniques for Making Computers Less Rubbish at Doing Things That They Always Used To Be Pretty Rubbish At Doing"

    It doesn't roll off the tongue, I grant you. But I find it useful as a placeholder.

    1. Evil Auditor Silver badge
      Thumb Up

      "Newish Techniques for Making Computers Less Rubbish at Doing Things That They Always Used To Be Pretty Rubbish At Doing"

      Absolutely. And to make the results of an LLM less rubbish, we need Prompt Engineers - or so an acquaintance recently pitched it: AI will make everything better and more exciting, replace countless jobs, and create new jobs such as the aforementioned prompt engineers. I compulsively had to curb his enthusiasm by mentioning that prompt engineer will be one of the first functions to be fully replaced by AI.

      1. HuBo
        Joke

        Then again, prompt contortionists, prompt charmers, and prompt swallowers, will probably survive as rather unique arts showcased in roadshows of future traveling three-ring AI circuses ... with dancing robodogs in tutus and talking llamas!

        1. Evil Auditor Silver badge

          ...prompt contortionists, prompt charmers, and prompt swallowers... And make an LLM do things it's not supposed to do? Brilliant!

      2. Mike 137 Silver badge

        "I compulsively had to curb his enthusiasm with mentioning that prompt engineer will be one of the first function to be fully replaced by AI"

        It's already starting to do so.

    2. Scoured Frisbee

      In my Intro to AI class (graduate school, but still) our working definition was that AI was anything that [knowledgeable] people said a computer could never do. Obviously this is a rolling window, and at first I rebelled at its generality, but I've come to like this definition a lot; it seems to capture the popular usage very well.

      1. Gene Cash Silver badge

        Yep... "AI is winning at checkers" - nope, we have that and it's not AI

        "AI is winning at chess" - nope, we have that and it's not AI

        "AI is winning at Go" - nope, we (sort of) have that and it's not AI

        "AI is translating languages" - nope, we (sort of) have that and it's not AI

        1. katrinab Silver badge
          Meh

          Taking the translating languages one:

          If you are reading an Italian text, is "Macedonia" the former name of a country (now North Macedonia), the name of a region in Greece, or a fruit salad?

          Or, I guess if you are reading an English text, is "Turkey" the name of a country, a type of bird, or a type of white meat?

          I have recounted the tale many times of getting a computer to translate some Italian text which had a list of countries, including Macedonia, and it translated it to something along the lines of "Bosnia, Serbia, Fruit Salad, Albania, ...".

          1. captain veg Silver badge

            My favourites include Babelfish (remember that?) translating bus from French as "drunk", and, more recently, Google rendering carrière as "career" when it should have been "quarry".

            To be fair, bus in French is an informal shorthand for autobus, and the same spelling can also be the plural past participle of boire, to drink; and carrière is indeed a homograph which can mean either "career" or "quarry", depending on context. Still, a genuine intelligence would understand that contextual distinction.

            -A.

          2. Michael Wojcik Silver badge

            This particular example is just a context-window and model-size problem.

            Whether solving it by using a larger context window and model is a good use of resources is a separate question. But there's no plausible argument for why an LLM cannot in principle disambiguate natural-language inputs as successfully as a given human judge. And anyone who's spent any time studying language (whether through translation, or linguistics, or literature, or just paying attention) knows well that human judges are not very reliable at disambiguation. Entire careers are built on disagreements over it.

          3. jmch Silver badge
            Happy

            "is "Turkey" the name of a coutry, a type of bird, or a type of white meat?"

            Türkiye have helpfully rebranded to avoid being confused with Christmas/Thanksgiving roasts

            Incidentally, DeepL seems to have no problem distinguishing Macedonia from fruit salad depending on surrounding context.

            1. captain veg Silver badge

              Türkiye

              I've often wondered whether, should the EU make it a condition of membership to observe the Christmas holiday, they would vote for it.

              -A.

          4. mcswell

            Translation

            katrinab wrote:

            "Taking the translating languages one:

            If you are reading an Italian text, is "Macedonia" the former name of a country (now North Macedonia), the name of a region in Greece, or a fruit salad?

            Or, I guess if you are reading an English text, is "Turkey" the name of a country, a type of bird, or a type of white meat?"

            Oh good heavens, you clearly have not tried MT in decades. MT has long since stopped going along word-by-word translating, any more than a human does. Rather, it has the context, and usually that context is sufficient for the MT to "choose" the right translation. (It's choosing the translation for the word and its context at the same time.)

            "Usually", you say? Yes; but the same thing (insufficient context to disambiguate a polysemous word) can happen to human translators, too.

  4. theOtherJT Silver badge

    There are two kinds of fools...

    "...those that think religion is literally true, and those that think it is worthless."

    Can't remember where I heard that quote today, but replace "religion" with "AI" and I think we have a reasonable summary of the state of play. No, it's not actually "intelligent", but it does have its uses. We just need to remain realistic about what they are.

    1. druck Silver badge

      Re: There are two kinds of fools...

      Religion is useful to try to make the masses behave themselves most of the time, right up until you want them to do something wrong in its name. AI is probably quite the opposite: most of the time it will be used for something wrong (better scams), and the masses won't behave themselves if it has put them out of a job.

    2. Michael Wojcik Silver badge

      Re: There are two kinds of fools...

      This would be a more convincing position if you offered a definition for "actually intelligent".

      While I am very negative about the long-term value of gen-AI, personally, I am equally displeased by how most critics of the term "AI" are unwilling or unable to define their terms. It's anti-intellectual laziness.

  5. LionelB Silver badge
    Stop

    Tools for the job

    > If calculus was already solvable by computers in 1984, while basic arithmetic stumps the systems we view as today’s state of the art, perhaps the amount of progress in AI in the last 40 years isn’t quite as great as it first appears. (That said, there are even better calculus-tackling systems today, they just aren’t based on LLMs, and it’s unclear if anyone refers to them as AI.)

    Well, because that's not what LLMs are for - it's simply not what they're designed to do. Would you sneer at a symbolic maths system because it's rubbish at generating human-like text? Do you imagine the human brain uses the same processing pathways to solve those problems? (Hint: it doesn't*.)

    Can we just agree that AI -- whatever you'd like the term to mean -- is multi-faceted? A bit like human intelligence, really. And, of course, not pretend that LLMs are AGI -- although I suspect they may come to be seen as representing a facet of AGI (see footnote).

    *Although the human brain is clearly very, very good at integrating its subsystems; it will, for example, generally (but not always**) be able to generate speech or text which explains how it solved that calculus problem.

    **There are limits to introspective self-analysis; I am a mathematician and actually can't always articulate how I arrive at the solution to a maths problem. Similarly, a jazz musician could probably not articulate how they generated that elegant solo. And here's a fun exercise: try to articulate how you articulate...

    1. Michael Wojcik Silver badge

      Re: Tools for the job

      Indeed. It turns out gcc is terrible at responding to natural-language questions; it keeps complaining about syntax errors. No matter what I do, Microsoft Word won't talk to my PLC. And this Apache video game is rubbish.

      Sometimes a given piece of software doesn't do everything. Who knew?

      1. FeepingCreature Bronze badge

        Re: Tools for the job

        It is very funny to me though that with Copilot, you can *genuinely* solve problems sometimes by just writing a really detailed comment. All we need is a version of gcc that calls out to GPT-4 for corrections.
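        The gcc-that-phones-GPT-4 loop is easy enough to sketch, for what it's worth. A minimal Python version, using Python's own compile() as the stand-in compiler; ask_llm is a hypothetical placeholder for whatever model API you'd wire in:

```python
# Sketch of a compiler driver that feeds its error messages back to a
# language model until the source compiles. ask_llm is hypothetical.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to GPT-4 or a model of your choice")

def compile_with_retries(source: str, retries: int = 3) -> str:
    for _ in range(retries):
        try:
            compile(source, "<llm>", "exec")  # stand-in for gcc
            return source  # it compiles; ship it
        except SyntaxError as err:
            # hand the broken source plus the diagnostic back to the model
            source = ask_llm(f"Fix this code:\n{source}\nError: {err}")
    raise RuntimeError("model never produced compilable code")

print(compile_with_retries("x = 1 + 2"))  # valid code passes straight through
```

        Whether the model's "fix" converges is, of course, the whole argument of this thread.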

        1. Anonymous Coward
          Anonymous Coward

          Re: Tools for the job

          no ta, that shit would just make it worse.

          how much do you have invested in this shit? by the sounds of you, all your money

          1. FeepingCreature Bronze badge

            Re: Tools for the job

            Literally nothing lol. I hold this opinion cause I genuinely think it's true.

    2. mcswell

      Re: Tools for the job

      Re calculus (which some software tools are reportedly good at--I haven't used them, but have no reason to doubt this) vs. LLMs (which are reportedly no good at calculus, or even ordinary arithmetic): The last calculus course I took was so long ago we probably used Isaac Newton's textbook. Anyway, some of the problems in that text were phrased as word problems; for example, a problem about emptying a tank through a small spigot, where the flow rate varied depending on the head of water. So you had to go from an English statement of the problem (or maybe it was German, if we used Leibniz's textbook) to an abstract "algebraic" statement (algebraic in the sense that it used variables), and then you had to solve that equation.

      I assume an LLM could not deal with such a problem (unless it found a sufficiently close, solved analog in its training data, and then it might still flub the arithmetic). Can these calculus programs solve word problems like that? If not, is there a way to hook an LLM to a calculus problem solver to get the right answer? I'm guessing that constructing the intermediate algebraic statement would be the difficult part.

      1. LionelB Silver badge

        Re: Tools for the job

        > If not, is there a way to hook an LLM to a calculus problem solver to get the right answer? I'm guessing that constructing the intermediate algebraic statement would be the difficult part.

        I suspect that's challenging but doable, and wouldn't be too surprised if it turned up in future iterations of LLMs. The tricky part would be recognising a query as essentially a symbolic (or numerical) maths problem, parsing it, and transforming the parsed output as input to an appropriate solver.

        Actually, though, I think it more likely to happen the other way around: it may well be that maths suites like Mathematica (symbolic) and Matlab (numeric) are already developing human language front-ends.
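        The easy end of that routing is already sketchable. A toy Python router (names and structure entirely mine, purely illustrative): if the query parses as pure arithmetic, hand it to an exact evaluator; anything else falls through to the language model:

```python
import ast
import operator

# Exact evaluator for pure arithmetic, walking the parsed syntax tree.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(node):
    if isinstance(node, ast.Expression):
        return safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.operand))
    raise ValueError("not pure arithmetic")

def route(query: str):
    # Recognise-and-dispatch: maths goes to the exact solver; anything
    # else (represented here by returning None) would go to the LLM.
    try:
        return safe_eval(ast.parse(query, mode="eval"))
    except (SyntaxError, ValueError):
        return None

print(route("17 * (3 + 4) ** 2"))  # 833, computed exactly, no LLM involved
```

        Recognising a word problem and producing the intermediate algebraic statement is, as the previous poster says, the genuinely hard part - this only handles queries that are already formal.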

  6. LionelB Silver badge

    Statistical models

    "... if only the term AI hadn’t been so polluted by people marketing statistical models as AI."

    I'd be a little more circumspect about writing off statistical models in the context of (A)GI. There is a broadening body of research into the mechanisms of animal/human cognition which postulates that certain types of statistical models may well underpin some aspects of cognition, and by extension intelligence. See e.g. Predictive Coding, or more generally Active Inference. These are not just pie-in-the-sky theories - they are in principle (and, increasingly, in practice) testable.

    PS. Kudos for the Rodney Brooks mention - I was very much influenced by his work during my brief foray into AI and robotics. If you'll excuse the name-drop, it turned out that the "father of modern AI/ML", Geoffrey Hinton, was an old mate of my PhD supervisor, hence another big influence.
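    Re Predictive Coding: the core mechanism can be caricatured in a few lines - a belief is nudged to reduce precision-weighted prediction errors against both the incoming data and a prior. A one-level toy (all numbers are mine; nothing like a serious Active Inference model):

```python
# Minimal predictive-coding caricature: a single latent estimate mu is
# updated to minimise precision-weighted prediction errors.
def predictive_coding(observations, prior=0.0, lr=0.1,
                      prior_precision=1.0, sensory_precision=4.0):
    mu = prior
    for y in observations:
        eps_y = y - mu      # sensory prediction error
        eps_p = prior - mu  # error against the prior
        # gradient step: here the data is trusted 4x as much as the prior
        mu += lr * (sensory_precision * eps_y + prior_precision * eps_p)
    return mu

# Steady observations of 2.0 pull the belief from the prior (0.0) to a
# precision-weighted compromise: (4*2.0 + 1*0.0) / 5 = 1.6.
print(round(predictive_coding([2.0] * 200), 3))  # 1.6
```

    Real predictive-coding models stack many such levels, with each level's predictions explaining away the errors below - but the "minimise surprise" loop is the same shape.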

    1. _andrew

      Re: Statistical models

      To my mind the "Deep Learning" school of AI is the revenge of the "stamp collecting" side of science, where all previous approaches to computation had been of the physics (mathematics) side. The popular press likes to call them "algorithms", but really they are anti-algorithms. They operate on the basis of what is (or was), rather than what a designer of code supposes underlies all, axiomatically. They solve problems without requiring the problem to be understood. They optimize the scientist out of the solution.

      1. LionelB Silver badge

        Re: Statistical models

        And yet this is exactly how natural cognition and intelligence arose, and how it functions. Evolution does not need to understand a problem to come up with a design which can solve that problem - nor does the evolved design need to understand anything at all. "Understanding" is a top-down perspective from an evolved entity - us.

        Tellingly, the "Good Old Fashioned AI" (GOFAI) approach singularly failed on tasks which even simple organisms handle effortlessly, such as locomotion, navigation, visual perception and parsing of sensory input in general, etc. Attempts to reason your way through those fundamental cognitive tasks foundered on combinatorial explosions of complexity. Even in highly-structured, rule-based problems such as game-playing, statistical methods which are able to automatically abstract, filter and exploit task-relevant information -- albeit frequently in a manner opaque to the (human) observer -- proved far more effective.

        And yet it is more -- much more -- than "stamp collecting". The challenge for the human designer is to construct the right (possibly statistical) framework that works well for a specific task domain - e.g., deep convolutional networks for visual parsing, transformer architectures for human-like text generation, and so on. The scientist is most certainly not optimised out of the solution; rather, their task becomes that of a meta-designer - an architect of problem-solving designs, more akin to the role of evolution in the "design" of natural intelligence. This, of course, requires deep insight - and lots and lots of maths.

        1. _andrew

          Re: Statistical models

          Perhaps I was a little abrupt: I didn't mean the comparison as a put-down. Most computer systems "barely work", largely because they're all design-driven and have had precious little exposure to the real world and its teeming data. A bit of stamp collecting is no bad thing.

          I'm going to reserve judgement for a while on how much science is involved in the design of the DNNs. My experience, and what I get from reading the papers is very much of the flavour of "I tried this tweak to last month's best design and it got better results (on the usual published test case)". Mostly the dimensions and "hyperparameters" are dictated by the size of the hardware that can be afforded, rather than any particular insight into the information density or structure of the problem at hand. Meta's latest model was still "learning" (loss function decreasing) at the point where they said "ship it", mostly because they'd run out of internet to train it on. The last time a human had read everything that had ever been written (it's said) was in the 14th century. There's still a lot to actually be learned about the process of learning, IMO.

          1. LionelB Silver badge

            Re: Statistical models

            That's fair enough. A large section of the ML literature is exactly as you describe it. I'd be inclined to call that (software) engineering rather than science. Arguably the science part was in the design of those architectures in the first place; backpropagation, convolutional deep-learning networks, transformer models and so on did not spring out of nowhere - and they certainly do address the structure of the problem/data. There is also a very substantial (mathematical) literature on learning.

        2. katrinab Silver badge

          Re: Statistical models

          Evolution just tries a bunch of stuff at random to see what works.

          That can be a useful way to do things.

          1. LionelB Silver badge

            Re: Statistical models

            It's responsible for all of life as we know it, including cognition, intelligence and consciousness. I guess that counts as "useful" ;-)

            As an engineering principle for AI (or, less ambitiously, cognition, robotics, etc.), however, it has so far turned out to be less useful. The reasons are interesting; the principal one is simply scale: even with modern state-of-the-art computing resources, we are aeons away from the resources in time (billions of years) and richness of substrate (organic chemistry) available to natural evolution. Another issue (related to the first) has been our failure to engineer the "open-endedness" we see in natural evolution - its ability to continually bootstrap and extend itself to higher levels of complexity - to achieve "major transitions".

            Another point is that evolved "designs" are notoriously opaque in their functioning, which has strong implications for control and safety of artificially evolved systems. As the famous Orgel's Second Rule has it, "evolution is cleverer than you" (the Third Rule, according to some wag, is "Leslie Orgel is cleverer than you").

            We had a lovely illustration of this in my old research lab (my PhD was in mathematical evolution theory). One of our members was a pioneer in the field of hardware evolution. He would artificially evolve circuit designs on FPGAs (field-programmable gate arrays - programmable integrated circuits) to perform some computational task. One evolved design in particular puzzled the hell out of everyone. Although it achieved the computational task it had been evolved to perform, examination of the circuit logic seemed to suggest that the way it did so was "impossible". The requisite connections were just not there. It turned out that evolution had exploited some weird low-level electronic effect that was not officially part of the FPGA's capabilities. It was highly thermally sensitive; if you increased the temperature a couple of degrees the computation failed. After that, in order to achieve more robust performance, our man designed a Heath Robinson contraption which evolved designs concurrently on three FPGAs: one at room temperature, one in front of a small heater, and one in a mini-fridge. The "fitness" of a design was then taken as an average over the three FPGAs. It worked.
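            The heater/fridge trick boils down to averaging fitness across environments, so that fragile single-condition flukes get selected out. A toy re-run in Python (bitstrings standing in for FPGA configurations; every number invented):

```python
import random

def noisy_fitness(genome, rng):
    # task score plus environment-dependent noise (the "temperature")
    return sum(genome) + rng.gauss(0, 0.5)

def robust_score(genome, rng, rigs=3):
    # the lab trick: average the candidate's fitness over several rigs
    return sum(noisy_fitness(genome, rng) for _ in range(rigs)) / rigs

def evolve(length=24, generations=800, seed=1):
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = genome[:]
        child[rng.randrange(length)] ^= 1  # random point mutation
        if robust_score(child, rng) >= robust_score(genome, rng):
            genome = child  # keep whatever works; no understanding required
    return genome

print(sum(evolve()))  # climbs towards the all-ones optimum of 24
```

            Nothing in that loop "understands" the fitness function - which is exactly the point made above about evolution as a designer.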

  7. Gene Cash Silver badge

    "Hard AI" vs "Soft AI"

    There are two schools of thought:

    "Hard AI" folks, who think that yes, we'll eventually make computers as intelligent as people.

    "Soft AI" folks, who don't think that's possible, but think AI research is still valuable because it teaches us about the brain and intelligence in general.

    1. LionelB Silver badge

      Re: "Hard AI" vs "Soft AI"

      Broadly agreed, but there's a third category: those who think we'll eventually achieve AI, even AGI, but suspect that it won't necessarily be very human-like.

      1. Michael Wojcik Silver badge

        Re: "Hard AI" vs "Soft AI"

        Yes. In fact I think that's a majority opinion among serious researchers working with current models.

        AGI won't involve human-like cognition if it follows any of the current approaches except perhaps artificial neuromorphism. It might produce human-like ideation (i.e. a different mechanism producing similar effects), for some value of "like".

        ASI (artificial superintelligence), assuming it's possible (and I'm utterly unconvinced by anti-physicalist positions, which are necessary for it not to be), would very probably be quite unlike human ideation, and almost certainly quite unlike human cognition.

  8. Anonymous Coward

    AI and AGI are too high-level abstractions

    Most humans perform a very limited set of tasks, beyond the evolutionarily built-in ones: vision, physical dexterity, and sharing information through speech (itself already an advanced skill), plus typical daily activities. Even those skills vary with genetic makeup: colour blindness, for example. And not every human is capable of becoming an NBA player.

    An average Joe would interpret a speech by a quantum physicist as gibberish and, without a proper introduction, would consider the scientist an idiot. This has happened to me even within my own competence group, only to find out later I was right about something. Or remember fighting with a compiler over annoying "nonsense" messages? If I asked you to bake bread tonight, in most cases the result would be far from perfect. Baking bread is a skill acquired with time and practice.

    Humanity is a sum of highly diversely-skilled humans. And we have not even touched the issue of the tools available at a specific time, like a microscope or an LNG tanker. Those tools require special skills and tools to be made themselves. Knowledge is encoded in those tools, even when the humans who made them are no longer present.

    Thus the concept of AGI, typically envisioned as a copy of a human, is too restrictive. AGI, in the context of humanity, is a copy of the whole of humanity, environment, tools etc. Which is a pretty ambitious goal, to say the least.

    1. LionelB Silver badge
      Facepalm

      Re: AI and AGI are too high-level abstractions

      Yes... having said which, we're arguably not even close to the general intelligence levels of an extremely stupid, cack-handed and supremely unskilled human.

      1. matjaggard

        Re: AI and AGI are too high-level abstractions

        I disagree. Today's AI systems are very significantly better than humans at some tasks and completely unable to do the basics at others - to me that proves that AI will always be different but it's wrong to describe it as "stupid" just because it can't do something we can.

        1. LionelB Silver badge

          Re: AI and AGI are too high-level abstractions

          My comment was tongue-in-cheek. Having said which, the 'G' in AGI does imply that such a system should be a good all-rounder - let's say at least in the sense that human intelligence is.

  9. Locomotion69

    The objectives in 1984

    But he then goes on to state two goals of AI:

    1. To make computers more useful

    2. To understand the principles that make intelligence possible.

    I guess objective 1 is - to a certain extent - reached.

    Objective 2, however, remains unresolved to this day. As the article identifies, AI differs from HI (Human Intelligence), and the two do not compare.

    We learn by trial, failure, understanding why we failed, then adapt and retry. I believe that this ability could be called "intelligence". And this is not how AI works.

    1. LionelB Silver badge

      Re: The objectives in 1984

      > As the article identifies, AI differs from HI (Human Intelligence), and the two do not compare.

      Then again, who's to say A(G)I need necessarily be human-like intelligence?

      > We learn by trial, failure, understanding why we failed, then adapt and retry. I believe that this ability could be called "intelligence".

      I agree that adaptation feels like a key aspect of intelligence. And there's no intrinsic reason artificial intelligence couldn't do the same. Sure, current systems such as LLMs and DCNs are not set up to work that way (except in the limited sense that they may be retrained on new data).

      > And this is not how AI works.

      What AI? Unless you are referring to current AI systems, I'm impressed that you appear to know how AI works; I don't believe anyone else does.

    2. Falmari Silver badge

      Re: The objectives in 1984

      @Locomotion69 "We learn by trial, failure, understanding why we failed, then adapt and retry. I believe that this ability could be called "intelligence". And this is not how AI works."

      That's one way we learn, but not the only way. And the way some AI works could certainly be described as learning by trial and error.

      Take a neural network trained by supervised learning*. The training data set consists of input data paired with expected outputs. The training data is run through the network repeatedly, adjusting the weights until the outputs match the expected outputs. Outputs don't match the expected outputs (failure), adjust the weights (adapt), repeat the run (retry).

      Then there are game-playing AIs that start with just the knowledge of the rules and learn as they play. Could that not be likened to learning by trial, failure, then adapt and retry?

      But I have ignored 'understanding why we failed'; there is no understanding in AI learning. One of the ways we learn is by trial and error, which is what you are describing: in other words, trial, failure, then retry a different way, whether we understand why we failed or not.

      I don't believe that AI is like HI (Human Intelligence), or is Intelligence, but just because it differs does not mean the two can't be compared. It could be the differences that identify what Intelligence is.

      * I was writing neural networks to identify crop types from SPOT satellite images in the early nineties, only did that for 3 years, never done any AI since so knowledge is well dated. :)

      1. LionelB Silver badge

        Re: The objectives in 1984

        Basically agree with your comments, except...

        "... there is no understanding in AI learning."

        Although commonplace, I don't personally feel that "understanding" is a useful term in the context of intelligence, artificial or natural - simply because we don't understand what "understanding" means. Sure, it's an intuitive concept for humans, but one which is, I think, associated with consciousness rather than just intelligence. For humans, understanding seems to be recursive - we feel we understand something when we can rationalise it in terms of things we already feel we understand. So if you want your AI to "understand" you're going to have to assume some form of consciousness, which sets the bar (certainly at this stage) ludicrously high.

        I accept that some will disagree, but I don't personally see consciousness (or understanding) as a prerequisite for what might be consensually accepted as intelligence.

        1. Falmari Silver badge

          Re: The objectives in 1984

          @LionelB "simply because we don't understand what "understanding" means. "

          I agree. I was going to say something along those lines in my post, but it was starting to get too long and waffly. :)

          "... but I don't personally see consciousness (or understanding) as a prerequisite for what might be consensually accepted as intelligence."

          I agree.

  10. Plest Silver badge

    Hey rubber, meet the road

    The biggest problem right now is that proper, useful AI has a serious PR problem. You say AI and suddenly you have The Sun spouting off about robots taking away jobs and on the other side PHBs demanding ChatGPT calls be inserted into any and every piece of software. Need to copy a file? Make sure it runs the byte stream through AI.

    To borrow from Blackadder script.....

    "Cloud technology appeared and everyone was really excited, until someone pointed out that it was simply someone else's datacentre you rent from, and everyone was really disappointed."

    Next up AI...

    "AI appeared and everyone was super excited about self-aware robots taking all the shitty jobs, until someone pointed out that AI is just sets of data models you can apply to huge datasets, and everyone was really disappointed."

  11. ecofeco Silver badge
    Pirate

    Make computers useful again?

    So AI will get rid of marketing and the tech douche bros? Cool.

    Oh wait, that's never going to happen.

    So how will AI help anyone when it's feeding on an ocean of garbage? Maybe... I dunno, get rid of the garbage and cruft and the shitastic UIs first? Oh wait, there's no money to be made making things simple. Creating a problem and then selling the solution is far more profitable.
