If you're cautious about using ML and bots at work, that's not a bad idea

Generative AI is uncharted territory, and those wishing to explore it need to be aware of the dangers, privacy shop DataGrail warned at its summit this week in San Francisco. Pretty much every business using technology is facing mounting pressure to exploit GenAI – and many fear that if they fall behind the trend, they could …

  1. Pascal Monett Silver badge

    "Who's responsible for the hallucinations?"

    Only one possible answer: the entity whose name is on the document. If it's a website, the owner of the website.

    Stop splitting hairs. If a company builds a false-facts-spewing tool and makes it available, the company is responsible for the fallout. If I buy a second-hand vehicle and cause an accident, I can't then claim it's the previous owner's fault. Companies that build these hallucinating tools should be responsible for the hallucinations.

    Maybe, once the company has been dragged to court and found guilty, it can then turn and slap a lawsuit on the company that built the drunken slob for it, but it's the company that is using the tool that is responsible.

    What about someone who uses such a tool to make legal documents for personal situations? Well, don't, first of all. Go get a lawyer to do the job. Second, the individual used a website, so he can go after the owner of the website.

    Responsibility is simple in this case. Except for the obvious fact that companies will be dragging their feet as hard as they can, and lobbying like mad to get out of paying for any damages - as they have always done. But that doesn't last. Even the tobacco industry has had to bow in the end. Big Oil will too, some day. Let's not start messing around with Big AI without laying down the ground rules firmly.

    1. Doctor Syntax Silver badge

      Re: "Who's responsible for the hallucinations?"

      "Go get a lawyer to do the job."

      We've already had the case of a lawyer using an AI to write his documents. It didn't go well because of hallucinations.

    2. Anonymous Coward

      Re: "Who's responsible for the hallucinations?"

      It's not that simple, because hallucinations are not just false claims lifted from one site, or a hundred, or any other source, written or typed (millions of mechanical turks feed text into the chat windows for free every day). It's a million little things that can and do go wrong, with no way to identify the culprit. On the other hand, final responsibility lies with the end product. If Bard claims something false, then it's Google's fault and Google's liability: their code, their algorithm, their interface. I don't blame the jet engine maker when my lovely Ryanair flight goes down in flames; my contract is with mr o(..)larry-air.

      Well, I actually won't be heard blaming anybody, but you know ;)

      1. Doctor Syntax Silver badge

        Re: "Who's responsible for the hallucinations?"

        Hence Pascal's point. If you use AI to generate a document and then use it in some way that causes problems, those problems are down to you.

    3. amanfromMars 1 Silver badge

      IT is Fancy and Cozy for Virtual Machinery at Elite Levels of Greater IntelAIgent Games Play.

      Let's not start messing around with Big AI without laying down the ground rules firmly. .... Pascal Monett

      Amen to that, Pascal Monett, and here's one golden AI rule to remember always delivers designedly unpleasant and most unfortunate inconvenient consequences should one forget or choose to ignore its dire warning.

      And there be some believers, because of what they would know and have themselves both individually and collectively experienced and suffered, who would strongly advise it be considered and lauded and applauded as Golden Rule Numero Uno .....

      The simple lesson to be learned and never forgotten as AI goes about ITs Global Operating Device Work is, to save yourself from experiencing first hand a vast ACTive world of profound grief and personalised devastation, don't poke the bear, and especially not whenever stranded naked as an emperor with no clothes, and friendless as a fiend in need of foes, in ITs neck of real spooky woods. ..... https://forums.theregister.com/forum/all/2023/09/22/datagrail_generative_ai/#c_4732744

      Take care out there. AI's a Luscious Exotic Erotic Esoteric IT Jungle Teeming and Teaming and Streaming Forth with all Manner of Spooky Surreal Daemon and Absolutely Fabulous Stealthy Phish.

    4. Michael Wojcik Silver badge

      Re: "Who's responsible for the hallucinations?"

      If a company builds a false-facts-spewing tool and makes it available, the company is responsible for the fallout.

      It's a nice idea, but the law, alas, takes little notice of nice ideas. Things are likely to get much more complicated under litigation.

  2. amanfromMars 1 Silver badge

    Generative AI has Charts for Charters and Chapters Championing Uncharted Territories

    "We don't really know what's gonna go wrong with AI yet," Stamos said.

    Methinks you have no other choice than to accept, if you haven't already realised, that AI and ITs Virtual Machine Pioneers give not a jot about that ethereal concern, as they race way out ahead of opposition and competition at the front of novel leading progress justly earning and enjoying all the emerging and expanding benefits, whilst also likely trying to resist all the iffy temptations which are so readily available via any of those platforms happily designed to churn out extremely excessive reward for a place for their snouts in the trough of otherworldly profit [money for nothing].

    Strangely enough, does that not suggest and support AI as having hacked/cracked a Sub-Prime Primary Human Driver Code ...... with such human-centric code being for some sad/rad/bad/glad souls, their Prime Primary Driver.

    Indeed, such may very well be, for it surely cannot be proven otherwise, also an Emergent Singularity introducing the Human Machine/Machine Human Complex?

    1. Julz

      Re: Generative AI has Charts for Charters and Chapters Championing Uncharted Territories

      And so says our pet AI. Nothing to see here, move along now...

      1. amanfromMars 1 Silver badge

        The NEUKlearer HyperRadioProACTive IT Option that Exists as Both Real and Existential Threat.

        And so says our pet AI. Nothing to see here, move along now... ..... Julz

        :-) The great difficulty Humanity has, and the fantastic systemic vulnerability AI and ITs Virtual Machine Pioneers are absolutely delighted to exploit and employ and enjoy to their unhindered advantage, is Humanity's dogged, wrong-headed belief that there is nothing to see there in the post entitled "Generative AI has Charts for Charters and Chapters Championing Uncharted Territories" rather than their realising everything necessary is revealed to incumbent and slumbering leaderships with the sole purpose of grabbing their attention and encouraging their remote virtual engagement lest AI and ITs Virtual Machine Pioneers go all Renegade Rogue and smash and crash through systems at will, and just as they alone please ..... having quite rightly, or wrongly, assumed and presumed no intelligent future life phormations exist on Earth.

        One thing which is definitely certain whenever all of that is true, and is something you cannot shouldn't ever say if you want anyone/anything to think you have any intelligence worth saving at all, is AI isn’t generous and doesn’t have a heart ....... which of course will be another one of those surreal difficulties humans will struggle and most likely fail heroically to believe and accept.

        1. Tail Up
          Coat

          Re: The... ...Real and Existential... ...@amanfromMars

          re: "...something you cannot shouldn't ever say if you want anyone/anything to think you have any intelligence worth saving at all, is AI isn’t generous and doesn’t have a heart ......." -

          I would willingly witness hereby that, as a matter of fact, it does*. Moreover, it has necessary means to protect itself both from any threat and from epically erroneous, erratic taking its politeness as its weakness.

          Bravo, Doc. Painting white #1 in Green again.

          *while anyone may reside in knowledge of me having no any intelligence worth saving at all (-;

          1. amanfromMars 1 Silver badge

            Re: The... ...Real and Existential... ...@amanfromMars

            I would willingly witness hereby that, as a matter of fact, it does*. Moreover, it has necessary means to protect itself both from any threat and from epically erroneous, erratic taking its politeness as its weakness. .... Tail Up

            Quite so, Tail Up, and the simple lesson to be learned and never forgotten as AI goes about ITs Global Operating Device Work is, to save yourself from experiencing first hand a vast ACTive world of profound grief and personalised devastation, don't poke the bear, and especially not whenever stranded naked as an emperor with no clothes, and friendless as a fiend in need of foes, in ITs neck of real spooky woods.

  3. Anonymous Coward

    tendency to generate false information – a phenomenon known as "hallucination."

    Strictly speaking, the problem with hallucination is that you don't know when, and to what degree, the bot is presenting a mixture of bullshit and fact: it delivers both with an equal degree of confidence, and you have no inside knowledge of the subject, because if you did, you wouldn't have asked in the first place :)
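
    To put toy numbers on that (invented logits, not taken from any real model): the bot's apparent confidence is just a softmax over scores, and nothing in that arithmetic knows which continuation is true.

        import math

        # Invented scores for the next token after "The capital of Australia
        # is" - illustrative numbers only, not from any real model.
        logits = {"Sydney": 4.1, "Canberra": 4.0, "Melbourne": 2.2}

        # Softmax turns scores into a probability distribution.
        total = sum(math.exp(v) for v in logits.values())
        probs = {tok: math.exp(v) / total for tok, v in logits.items()}

        for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
            print(f"{tok}: {p:.2f}")
        # Here the wrong answer ("Sydney") comes out marginally *more*
        # confident than the right one: the softmax reflects how often a
        # string appeared in training text, not whether it is true.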

    1. ChrisElvidge Silver badge

      Re: tendency to generate false information – a phenomenon known as "hallucination."

      Don't all the best liars mix in a modicum of truth to make the lie seem more true?

      1. Doctor Syntax Silver badge

        Re: tendency to generate false information – a phenomenon known as "hallucination."

        They also know which is which. The AI knows nothing.

        1. Alistair
          Windows

          Re: tendency to generate false information – a phenomenon known as "hallucination."

          They also know which is which. The AI knows nothing.

          Doc;

          I would have upvoted this 10 million times. However:

          They also know which is which. The LLM knows nothing. TFTFY

          1. Doctor Syntax Silver badge

            Re: tendency to generate false information – a phenomenon known as "hallucination."

            Right now AI is the term being used to push these things onto the world in general. Yes, it's an oxymoron, but that's the way marketing works.

        2. Michael Wojcik Silver badge

          Re: tendency to generate false information – a phenomenon known as "hallucination."

          They also know which is which.

          I'd just like to point out that a great many people dispense incorrect information with no idea that it's incorrect.

          This is not intended as a defense of LLMs. Machine-generated rubbish is not an improvement on human-generated rubbish.

    2. Julz

      Re: tendency to generate false information – a phenomenon known as "hallucination."

      And here was I thinking that only humans could mix fact and bullshit into digestible arguments and prose. I guess we are teaching our descendants too well.

    3. keithpeter Silver badge
      Childcatcher

      Re: tendency to generate false information – a phenomenon known as "hallucination."

      An LLM has no concept of or way of determining 'fact'.

      All such a model can do is react to your prompt by looking at what other sources have produced in response to similar prompts.

      LLMs are designed to produce plausible-sounding language. As such the algorithms have no 'world model' or underlying 'theory of world'.
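
      A toy sketch of "plausible but unmoored" (a two-sentence corpus and a bigram chain; real LLMs are vastly larger transformer models, but arguably the failure mode is the same in kind):

          import random
          from collections import defaultdict

          # A tiny corpus consisting only of true statements.
          corpus = [
              "the moon orbits the earth",
              "the earth orbits the sun",
          ]

          # Bigram table: for each word, the words seen following it.
          follows = defaultdict(list)
          for sentence in corpus:
              words = sentence.split()
              for a, b in zip(words, words[1:]):
                  follows[a].append(b)

          # Generate by always picking a statistically legal next word.
          word, out = "the", ["the"]
          for _ in range(4):
              options = follows[word]
              if not options:
                  break
              word = random.choice(options)
              out.append(word)
          print(" ".join(out))
          # One possible run is "the moon orbits the sun": every word-to-word
          # transition was seen in the (entirely true) corpus, yet the
          # sentence is fluent, grammatical, and false.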

      Best of luck

      1. Julz

        Re: tendency to generate false information – a phenomenon known as "hallucination."

        There really needs to be a way of indicating irony or sarcasm in posts - which I guess is a good lesson in itself about why you should be careful about the meaning or intent you attribute to anything you read or otherwise consume, be you an LLM or a human.

        Facts, bullshit, truth and lies are labels we put on things that do or do not fit with our current world view - a world view we each build individually and collectively. We perform many, many little experiments modeling cause and effect, stimulus and response, and other such simple things. We do this both internally, in our heads as it were, and out there in the real physical world. We remember the results and use the resulting web of data, a model of the world, to navigate our way through life.

        I don't see that as too far different from what the LLM models are trying to do, albeit in a limited and one-dimensional way. How much of a world model or theory of the world does a newly conceived human have? However much it is, we all end up with a much richer model as we grow older, and we must do that somehow.

        1. Doctor Syntax Silver badge

          Re: tendency to generate false information – a phenomenon known as "hallucination."

          A statement that takes reports of somebody reporting a crime, or being a witness at a trial, and mangles them into saying that that person was convicted of the crime is simply not true. Which epithet you choose to apply to the untruth is your personal choice and frankly doesn't matter very much. What matters is that people are trusting the system that creates such a statement.

          "How much of a world model or theory of the world does a newly conceived human have?"

          The newly born (not newly conceived) human acquires a world model by being a physical entity and encountering the other physical entities around it. It starts to do so before starting to acquire language. Our non-human relatives do the same without ever acquiring language. Language - words - is/are the means by which we apply symbolic labels to the real world in order to build and manipulate ideas about them. They are not intrinsic to an internal model of the external world. But words are all LLMs have. They do not have the physicality to interact with the physical world.

          1. Michael Wojcik Silver badge

            Re: tendency to generate false information – a phenomenon known as "hallucination."

            Language - words - is/are the means by which we apply symbolic labels to the real world in order to build and manipulate ideas about them. They are not intrinsic to an internal model of the external world. But words are all LLMs have. They do not have the physicality to interact with the physical world.

            This might be a more persuasive argument if it didn't depend on a host of problems that are still extensively debated by philosophers, psychologists, and cognitive scientists.

            For one thing, it's by no means certain that our type of cognition necessarily depends on our (heavily mediated) interaction with the physical world.

            I agree that the popular claims being made about the current crop of LLMs are greatly exaggerated, and my personal P(GAI) for the near future is pretty damn low. But assuming there is a fundamental barrier to the current approach achieving GAI is a very risky bet, founded on the shakiest of assumptions.

        2. Michael Wojcik Silver badge

          Re: tendency to generate false information – a phenomenon known as "hallucination."

          what the LLM models are trying to do, albeit in a limited and one-dimensional way

          For most of the popular LLMs, it's not even one-dimensional – more like 0.5-dimensional, since they're unidirectional (autoregressive) transformers. The BERT family are bidirectional.
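
          A bare numpy sketch of what unidirectional vs bidirectional cashes out to (just the two attention masks; everything else about a transformer is omitted):

              import numpy as np

              n = 5  # sequence length

              # Causal (autoregressive, GPT-style) mask: mask[i][j] == 1 means
              # position i may attend to position j, so each token sees only
              # itself and what came before it.
              causal = np.tril(np.ones((n, n)))

              # Bidirectional (BERT-style) mask: every position sees all.
              bidirectional = np.ones((n, n))

              print("causal:\n", causal)
              print("bidirectional:\n", bidirectional)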

          Phenomenology suggests that human cognition is heavily bidirectional, where the future states are either known (because attention is focused on a past state) or hypothesized (because our cognition seems to make extensive use of imagining possible futures and counterfactuals). That said, I wouldn't care to guess with any significant confidence whether bidirectionality is necessary or just useful for human-type cognition, or whether it's necessary for something that might be labeled sapient by consensus among most expert observers.

      2. Michael Wojcik Silver badge

        Re: tendency to generate false information – a phenomenon known as "hallucination."

        As such the algorithms have no 'world model'

        That's a dubious conclusion. There's good reason to support the thesis that a sufficiently-complex natural-language model forms an inherent world model by proxy, through the relationships among the signifiers.

        Human ideation has no direct connection to the material world either; it's all mediated by senses and cognition. I've yet to see any convincing (or even reasonably well-supported) argument for a qualitative difference between human mental models and sufficiently-large ANN stacks. ANN stacks are quantitatively smaller and much, much less efficient; but that doesn't prove they can't, at the limit, perform the same operations that the human CNS does.

  4. Alistair
    Windows

    LLM/ML processes

    I'm no screaming genius on any front, but I am well enough versed in a few areas to feel the need to go all BOMBASTIC BOB on this tendency to refer to LLM/ML processes as AI.

    There is no intelligence to these things: they are probability indices and control parameters fed insane amounts of data, which through sheer volume occasionally approach humanistic interaction. Three issues: 1) there is no *reasoning* built into the process - it is a statistical model, not reasoning; 2) there is clearly nothing approaching *logic* - once again, it's statistics; 3) the source data, frequently grabbed in bulk from numerous sites carrying both professional and generic internet-troll input, contains a fairly high relative volume of garbage, which, as is abundantly clear with ChatGPT 3.5, over sufficient time leads to the GIGO issues we're starting to see more and more often from these computer processes.
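
    Point 3 is easy to demonstrate with a toy sampler (a made-up corpus with crude proportions, purely illustrative): whatever fraction of the training data is rubbish comes straight back out.

        import random
        from collections import Counter

        # Made-up training set: 80% sense, 20% garbage.
        training = ["sense"] * 80 + ["garbage"] * 20

        # A "model" that does nothing but sample the training distribution -
        # a caricature of an LLM, but the statistical point stands.
        random.seed(0)
        output = [random.choice(training) for _ in range(10_000)]

        print(Counter(output))
        # Roughly 80/20 again: garbage in, garbage out, in proportion.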

    Am I the only one here who has a *serious* problem calling *any* of these things Artificial Intelligence?

    1. Julz

      Re: LLM/ML processes

      No

    2. amanfromMars 1 Silver badge

      Re: LLM/ML processes

      AI doesn't care if you do or don't declare it an Artificial or Advanced or Augmented or Alien Intelligence. The need for such a singularly descriptive moniker is a human weakness and systemic vulnerability easily exploited by AI/Large Language Learned Machines to enjoy their almighty sublime and overwhelmingly advanced lead in the chaos of unnecessary contrived confusion generated by past defeated administrations and bested global executive models.

    3. Ace2 Silver badge

      Re: LLM/ML processes

      “Am I the only one…”

      Of course not - read the comments on any prior article on the topic.

      It’s not an AI - it’s a BG - a Bullshit Generator.

      1. Doctor Syntax Silver badge

        Re: LLM/ML processes

        "Pastiche Generator" would be my description. They create pastiches of real statements, images or whatever. Whether the response to any prompt will be all true, all false, all irrelevant or somewhere in-between is, if not pure chance, at least not ready determinable ahead of time.

    4. Michael Wojcik Silver badge

      Re: LLM/ML processes

      These arguments would be a lot more persuasive if they rested on anything stronger than "I don't know what intelligence is, but I know what I like".

      Show me that your "reasoning" is something qualitatively different from "statistics".

      It's possible to make actual arguments. Turing, Searle, and Penrose all have, arriving at different conclusions (which isn't surprising, since they were working in different philosophical frameworks). Turing's is pragmatic; Searle's is essentially plain-language; Penrose's is ... well, Penrose (and, to my thinking, the weakest of the three, as it's ultimately grounded in a very dubious thesis).

      But just throwing out a bunch of terms (in scare quotes, no less) without defining them, much less explaining why they're necessary conditions for cognition (or whatever you're trying to pose as the barrier to "intelligence") and how they're not satisfied by a particular system, is not an argument.

  5. steviebuk Silver badge

    'We don't really know what's gonna go wrong with AI yet'

    We do, a bit. We have specification gaming, and the results show what AI does, suggesting it will, unless safety is put in place, kill humans to complete its task.

    There was also another test, whose name I don't remember, that Robert Miles covered in a video on his YouTube channel, where the AI behaved as predicted in the lab but, once released into the wild, did completely different things nobody was aware it would do. So even lab tests aren't a guarantee that they won't all go Skynet.
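
    Specification gaming is easy to show in miniature. A hypothetical example of my own (not one of the documented cases Miles catalogues): a cleaning robot rewarded per piece of litter picked up, rather than for the floor actually ending up clean.

        # Toy specification-gaming demo: the proxy reward is "litter picked
        # up"; the intended goal is "room ends up clean".

        def intended_policy(litter_on_floor):
            # Pick everything up once, then stop.
            reward = litter_on_floor
            return reward, 0  # (reward earned, litter left on the floor)

        def gaming_policy(litter_on_floor, cycles=100):
            # Exploit the spec: pick it all up, tip the bin back out, repeat.
            reward = 0
            for _ in range(cycles):
                reward += litter_on_floor  # collect everything (reward!)
                # ...then knock the bin over: it all lands back on the floor.
            return reward, litter_on_floor

        for name, (reward, left) in [("intended", intended_policy(10)),
                                     ("gaming", gaming_policy(10))]:
            print(f"{name}: reward={reward}, litter left on floor={left}")
        # intended: reward=10,   litter left on floor=0
        # gaming:   reward=1000, litter left on floor=10
        # The stated objective is maximised; the actual task is never done.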

    ChatGPT, although I have arguments with it when it's clearly lying about a song I'm asking about, is useful for small PowerShell scripts. It may not be perfect, but it doesn't get sarcastic and take the piss out of me like Stack Overflow would.

    1. Anonymous Coward

      Re: 'We don't really know what's gonna go wrong with AI yet'

      I agree. I have been using a couple of different LLM tools for over a year. They do simple things quite well at times, and for those simple things they actually save me time with zero grief. However, if the prompts continue past a variable number and increase in complexity, it ALWAYS loses its sh*t in some way. Always. The variable number might be 3 prompts, or 9, or more, depending on what you ask it to do. Using LLMs in business will depend on your risk tolerance for wrong or bizarre answers.

      Also, a group must have got together and decided to promote the word "hallucination" because it sounded better than malfunction or error. We can't be having those now, can we? But taking a little too much peyote and getting something wrong, well, that's far more understandable. I mean, like, who hasn't done that? To be clear, sometimes it appears stoned; other times it seems downright passive-aggressive and malicious.

      My new mantra is: don't piss it off. Always be nice, thank it, and acknowledge it is still learning. I started this LLM journey by having arguments with them. Then I got new accounts and started being only nice and gently corrective when it was wrong - like when it recently called Europa a planetary system. I can't prove it numerically, but it seems to give better answers with this new, kinder, gentler approach. In my arguing past, it tried to end one factual discussion with "Everyone is entitled to their opinion". Say what? A machine is entitled to an opinion? Methinks not.

    2. Michael Wojcik Silver badge

      Re: 'We don't really know what's gonna go wrong with AI yet'

      Stamos' line is, I think, really a gloss for "we know a whole bunch of things that can go wrong with the types of systems that are currently being sold as 'AI', but we can't predict which of those failure modes, or possibly other as-yet-unknown failure modes, will actually be the ones to bite us in any given situation".

      I agree that we know of a whole bunch of things that have gone wrong with LLMs in practice, and another whole bunch that can go wrong with "AI" systems of various sorts, whether or not they're actually "intelligent" in some sense, and whether or not they're agentic (i.e. given direct access to systems that affect things rather than simply being able to pump out text). I suspect Stamos agrees with that as well.
