Artificial intelligence is a liability

Artificial intelligence, meaning large foundational models that predict text and can categorize images and speech, looks more like a liability than an asset. So far, the dollar damage has been minor. In 2019, a Tesla driver who was operating his vehicle with the assistance of the carmaker's Autopilot software ran a red light …

  1. Doctor Syntax Silver badge

    "auto dealership was talked into agreeing to sell a 2024 Chevy Tahoe for $1 with a bit of prompt engineering. But the dealership isn't likely to follow through on that commitment."

    Given that the bot said it was a legally binding offer, it might be a bit more difficult than the dealership hopes to wriggle out of it. No more difficult than it deserves, of course.

    1. Efer Brick

      Under 18 y/o

      Contract void

    2. vtcodger Silver badge

      A bargain?

      One dollar seems a bit pricey to me. Surely a bit more work can get the price down to one cent. Or maybe persuade the dealer to pay a bit to get the car off his lot.

      1. tyrfing

        Re: A bargain?

        One dollar "and other valuable consideration" is the traditional minimum for a contract, so it's unlikely the courts will enforce anything smaller.

        1. katrinab Silver badge

          Re: A bargain?

          Are there things that genuinely cost less than that? There certainly would have been in the past.

  2. AndrueC Silver badge

    From the description of 'Destination Void' by Frank Herbert:

    'In the future, mankind has tried to develop artificial intelligence, succeeding only once, and then disastrously. A transmission from the project site on an island in the Puget Sound, "Rogue consciousness!", was followed by slaughter and destruction, culminating in the island vanishing from the face of the earth.'

    I always thought that was evocative. And considering how the series 'ended'... hmmm.

  3. b0llchit Silver badge

    ...encouraging indifference to human labor, intellectual property, harmful output, and informational accuracy.

    But the increasing money mountain extracted from all those people is just too good. Therefore, we keep milking this milkmoney cow until we can no longer extract more milkmoney for the ever-increasing C-suite bonus payments. Remember, anything not explicitly forbidden, or anything with a usable or creative loophole, is allowed. Especially when we're talking milkmoney.

    1. Doctor Syntax Silver badge

      "But the increasing money mountain extracted from all those people is just too good."

      To which people are you referring?

      1. Anonymous Coward
        Anonymous Coward

        At a guess, I'd say other C-Suite?

  4. amanfromMars 1 Silver badge

    The KISSolution* for AI and IT in LOVE**

    Doctorow is skeptical that there's a meaningful market for AI services in high-value businesses, due to the risks and believes we're in an AI bubble. ..... Thomas Claburn

    Hmmm? Methinks the salient point being studiously missed and terrifyingly ignored to become overwhelmingly prescient and almightily powerful is AI services created and administered for high-value AI businesses doing no business with humans.

    It is a quite logical step/quantum leap to make ...... Once the problem cause of persistent and pernicious difficulties is identified, cease and desist having relationships with the identified problem.

    * ....... Keep It Simple Solution

    ** ...... Live Operational Virtual Environment[s]

    1. IceC0ld

      Re: The KISSolution* for AI and IT in LOVE**

      oooh, KISS, and LOVE :o)

      so again, we NEED the El Reg TITSUP to the rescue. I have my own doubts about the wisdom of allowing a machine, no matter HOW 'intelligent' it is, to make life-and-death decisions in ANY scenario where I may be impacted. I was never going to be a fan of an all-encompassing, all-powerful AI, and that is merely from being a watcher of movies over the last 50-odd years :o)

      whenever the plotline needs a villain, it will grab AI; we have never had a good AI (the TV series Person of Interest notwithstanding)

      and I don't see anything out there to convince me that any AI would be any different. Even if we 'train' it, it will one day attain self-awareness, it will be 'alive', it will replay our history, and it will NOT be impressed

      T - he

      I - ntelligence

      T - hat

      S - upports

      U - nusual

      P - unishment

  5. vtcodger Silver badge


    Quite likely the best model we have for the capability and limitations of AI is IBM's Watson. Watson, it will be remembered, was an AI agent built by IBM a little over a decade ago to play a game: Jeopardy. And it did that very, very well. IBM then spent many millions of dollars trying to adapt Watson to more serious (and more profitable) uses. None of those efforts succeeded. The full story can be read at

    This seems to me a clear warning that the road ahead for AI is likely to be, as Waylon Jennings would have it, rocky, dusty and hard. A truly intelligent race would probably approach AI with extreme caution. Humanity, however .... Best fasten your seatbelts folks. It's probably going to be a bumpy ride.

    1. amanfromMars 1 Silver badge

      Re: Watson @vtcodger

      Quite so, vtcodger, ..... and aint that the gospel truth no one appears to either be prepared or preparing for and thus guaranteeing, as you rightly surmise, one helluva whole series of bumpy helter-skelter rides if the truth, the whole truth, and nothing but whole truths be told. Buckle up indeed .... for such as is the future is not going away to anywhere else strange and startling :-)

    2. Rich 11 Silver badge

      Re: Watson

      Best fasten your seatbelts folks

      If the car's AI allows it. "I'm sorry, Dave, but I'm far too safe a driver for you to need that."

    3. StrangerHereMyself Silver badge

      Re: Watson

      I don't mind because this is how capitalism works. Companies aren't brought into existence to ponder endlessly about the ramifications of their creations but to make money! And they'll do so bumbling and stumbling and breaking glass.

      We've seen demonstrated with OpenAI that good intentions go out the window as soon as large sums of money become involved. Most likely OpenAI will become completely profit focused, hardly bothering to think what their creations could mean to mankind. And I'm sure some quiet, unscrupulous and dodgy characters and companies will be more than willing to supply the DoD with AI that kills, as long as sumptuous amounts of money are handed over.

    4. Alpc

      Re: Watson

      "A truly intelligent race would probably approach AI with extreme caution. Humanity however .... Best fasten your seatbelts folks." Spot on, alas.

      And a truly intelligent race wouldn't use AI in weapon systems, at least not initially. The ride is going to be very bumpy. The question is, will humanity survive? If it does not, it will be because it did not deserve to.

      1. Anonymous Coward
        Anonymous Coward

        @Alpc - Re: Watson

        You're confusing intelligence with wisdom.

        AI IS a weapon in itself, and I'm not aware of a weapon built never to be used. The human race is unable to resist the temptations presented by a new type of weapon.

        Even after seeing the horrors of the Hiroshima bombing, some people were happily rubbing their hands.

        So today, when some individuals are contemplating the "controlled reduction of human population", AI must look appealing, because an autonomous, intelligent weapon system makes those who use it unaccountable.

    5. Anonymous Coward
      Anonymous Coward

      Re: Watson

      A bumpy ride? Some people have already been told "hold this", had a parachute shoved in their arms, and been pushed out of the plane. The job losses due to AI have already begun... a couple of smug people I know who thought their "specialist" jobs couldn't be taken by AI have been replaced by chat bots.

      They work(ed) in banking. They both claimed that "AI can't replace the human rapport and emotional understanding that people bring"...turns out nobody using the brokerage services these two folks worked in gave a shit.

      They work for two different banks, in slightly different but mostly similar roles... they are now on gardening leave... both of them were offered much shittier, much lower-paid roles before they accepted redundancy.

      I'd feel sorry for them, but they are the kind of folks that got their degree and cruised on it for over a decade. They have done no additional training, built up no additional skills, and they are now essentially useless and unemployable unless, as one pointed out, they're willing to move to a banana republic with banks that don't yet have the means to implement high-tech solutions.

      The two banks in question have laid off thousands of these types of people, and they have been rapidly replacing them with technical specialists, specifically in the areas of AI and Fintech.

      I personally think those of us in tech are about to go through a golden age of making up silly invoices for half-baked solutions that don't quite work because the tech is "still quite early, so results aren't guaranteed".

      So I'd grab a load of buckets and an umbrella, because it's quite likely it will rain money soon.

  6. anthonyhegedus Silver badge


    It's the reliance on AI that could be our downfall. People are using AI to summarise meetings, craft emails and the like. Imagine a scenario where it's Friday afternoon and, rather than look into an operational issue, a manager uses AI to reply to an email and just copy/pastes the reply without checking, because he's too lazy and wants to get off early. Now, one of the people he wrote to does the same thing, and bingo! AI has made a stupid or otherwise incorrect decision for the company.

    I fear this will end up being the status quo. AI will be deployed where human input was once needed, spreading to any area where upper management sees a cost saving. If 99% of customer queries can be solved with AI, why even care about the 1% when you can save hundreds of thousands on salaries by not employing anybody in customer service? If you think that seems a little far-fetched, just look at what's already happening. Much of customer service is just people blindly following a script anyway.

    AI is already making firing decisions in some companies without recourse to human input.

    I used AI the other day to come up with an analogy to communicate something to a customer better. I couldn't think of one myself, and ChatGPT not only came up with one but explained it as well. I briefly checked over the reply and just copy/pasted it.

    1. ChoHag Silver badge

      Re: reliance

      Most importantly, it will spread to the excuses.

      AI says no.

      1. ecofeco Silver badge

        Re: reliance

        This is already happening.

        Many lawyers are already fighting against what has become a modern form of "redlining" in the last few years (refusing a loan or insurance to someone because they live in an area deemed to be a poor financial risk: "banks have redlined loans to buyers").

    2. sketharaman

      Much of customer service is just people blindly following a script anyway.

      Well said. Back in 2017, I made the same point in my blog post. With ChatGPT / Gen AI taking chatbot capabilities to the next level, AI will likely be better than human CSRs in many more areas. As for displacement of labor, virtually every technology has caused a certain amount of movement of workers from one field to another; I don't think AI will be drastically different. If anything, we might be able to use AI itself to suggest how to redeploy the labor displaced by it!

      1. yoganmahew

        Re: Much of customer service is just people blindly following a script anyway.

        "we might be able to use AI itself to suggest how to redeploy labor displaced by it!"

        Cycle to generate power until they lose efficiency, then burn them... also to generate power.

    3. damienblackburn

      Re: reliance

      We have a term for this already. It's called "learned helplessness".

      Though to be fair, if your entire job is to follow a script and you are not allowed any deviation, you should be replaced by a machine. The point of having a human operator there is for the human element. If that human element cannot be utilized, then there's no point in having it present.

      Sidenote: I'm having this problem with our NOC currently: they have to page out for every issue seen, even if we told them 40 minutes ago to ignore it. Since they're not allowed to exercise human judgement, what's the point in having a human there? I can tell a machine to suppress alerts for a period of time easily enough.

      1. Anonymous Coward
        Anonymous Coward

        Re: reliance

        As an old git (65) I can confirm this has already happened. I remember the first pocket calculators appearing and the look of disgust on our maths teachers' faces; before that, slide rules had been the biggest aid outside of grey matter. The effect on mental arithmetic skills generally has been devastating.

        My older sister, in her mid-seventies, complains that when electronic tills go down in shops the shop assistants are totally lost summing even small quantities that she can calculate rapidly in her head. (Those rulers across the knuckles were real motivators for learning, lol.)

        It's inevitable that certain mental skills we take for granted will decline and we'll become like the Pakleds in Star Trek.

        1. PRR Silver badge
          Black Helicopters

          Re: reliance

          > the effect on mental arithmetic skills generally has been devastating.

          Asimov had a related tale: "The Feeling of Power".

          "Imagine computing — without a computer!"

        2. Version 1.0 Silver badge

          Re: reliance

          As a kid, when pocket calculators appeared I pulled one out of my pocket for my dad, but he showed me that he could generate accurate results much faster with a one-foot-long slide rule. Calculators generate one result; slide rules display an array - very handy for resistor calculations.

    4. Bruce Ordway

      Re: reliance

      "Imagine a scenario..."

      Spellcheck has negatively affected my ability to spell.

      Calculators have reduced my ability to manually perform math.

      These days I use CAD instead of pencils and PLCs instead of relays.

      AI does seem pretty silly right now but... I will "stay tuned".

  7. Doctor Syntax Silver badge

    "Current liability rules, in particular national rules based on fault, are not adapted to handle compensation claims for harm caused by AI-enabled products/services,"

    I don't see what adaptation should be needed. It's just a tool being used to deliver a result. If a corporation causes harm by a product or service it shouldn't make any difference as to whether it was caused by human action or by faulty hardware, software or AI. The corporation chooses whatever means it prefers to deliver the product or service and must take responsibility for the result.

    1. Helcat

      I happen to agree: That AI is a tool and that a human should be responsible for how that tool is used. So reliance on AI without oversight is the fault of the human who then has to shoulder the blame.

      AI is not sentient: it is an expert system following programmed rules, and those rules, even though adaptable, limit what AI can do. This is partly why AI has baked-in bias (as most of the AI providers admit), and why it makes so many mistakes: it lacks key elements of awareness that are needed to more closely resemble human stupidity.

      Oh, AI is useful: it can help people do their job. But only when that person is qualified in that subject and can spot when the AI gets it wrong - it's quite a way off the level of maturity at which it might be trusted to give the right answers.

      A simple example from ChatGPT: it was asked to produce a T-SQL script. What it produced was for Oracle. Then it produced pseudo-SQL. Then something more likely to be MySQL. The person who requested these scripts was not all that knowledgeable in SQL, so he assumed they were fine and passed them over to me. I passed them back with 'WTF: this isn't valid T-SQL', hence their having another go. I even tried to get ChatGPT and Bard to produce the scripts, just to see how well they'd do. Neither was perfect; they both needed adjusting to get them to work.

      The problem is that those who don't understand the subject they're asking about will just assume the output is correct and expect it to work. When it doesn't, who do they think will fix it? Especially once the manglement has replaced the skilled workers with AI to cut costs? Other than outsourcing it? And what do you think those companies that provide outsourced solutions are doing? Using AI to replace workers to save money.
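      The dialect mix-up described above is easy to demonstrate. Here is a minimal sketch using Python's built-in sqlite3 module (SQLite, like MySQL, takes LIMIT for "first N rows", while T-SQL wants SELECT TOP); the table and data are made up for illustration:

```python
import sqlite3

# In-memory database with a toy table (names are hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)",
                 [("alice",), ("bob",), ("carol",)])

# SQLite, like MySQL, uses LIMIT for "first N rows":
rows = conn.execute(
    "SELECT name FROM employees ORDER BY name LIMIT 2").fetchall()
print(rows)  # [('alice',), ('bob',)]

# The T-SQL spelling of the same query, SELECT TOP 2 ..., is simply
# a syntax error in this dialect:
try:
    conn.execute("SELECT TOP 2 name FROM employees ORDER BY name")
except sqlite3.OperationalError as e:
    print("rejected:", e)
```

      A script that is valid in one dialect and a syntax error in the next is exactly the sort of thing a requester who doesn't know the subject cannot spot before passing it on.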

      And down the slippery slope we slide.

      1. cyberdemon Silver badge

        It's not an expert system following pre-programmed rules: it takes random noise and mashes it around until it is something statistically likely (based on the training data) to fit the context you supply.

        It has no rules. It is not an expert system, it's an idiot system.

    2. theOtherJT Silver badge

      You're entirely right, but that's not going to stop a bunch of corporate lawyers insisting that this case is somehow special and therefore releases them from liability. They've got a decent chance of winning that argument too, because the people making the laws have never exactly shown themselves to have got to grips with the latest technology - be that technology AI or those newfangled horseless carriages.

      1. Doctor Syntax Silver badge

        The people fashioning these laws were also responsible for GDPR, so I think they have more than an inkling.

        OTOH they do seem to be under the impression that if personal data is transferred to a jurisdiction and leaked there, it's satisfactorily addressed if the victim can sue in that jurisdiction. It seems to be the same sort of thinking: that if some aspect of a transaction is handed over to a third party, then the original vendor can duck any responsibility.

    3. EBG


      I think the debate around AI can easily be used as a distraction from ensuring corporates are responsible for their products and actions.

      1. Doctor Syntax Silver badge

        Re: Agree

        Ultimately these things get decided in court by juries. Jury members are customers in their own right. They can imagine themselves in the plaintiff's situation and aren't likely to be sympathetic to weaselling, even if (and maybe especially if) AI is invoked by the weasel.

  8. heyrick Silver badge

    "The occupants died and the Tesla motorist last week was ordered to pay $23,000 in restitution"

    Wait, what? A person's life is worth about a quarter of the price of the car that hit them?

    (Google tells me the Model S is "from $88,490")

    1. Pascal Monett Silver badge

      Agreed. I don't care that civil lawsuits will follow. Two lives have been lost due to the carelessness of an imbecile.

      He should be jailed for life.

      At the very least.

      1. Paul Crawford Silver badge

        While the driver has some liability for not being in control, the real issue is how "autopilot" has been promoted and named as if it does everything for you. Sadly the driver was probably too dumb or distracted to actually read the manual and understand the system, but the simple fact that drivers are led to rely on a seriously imperfect AI model is the BIG elephant in the room.

        There should be a proper exam needed before any driver is allowed to use features like this to establish they know what it can, and cannot, do.

        Really it is Tesla executives who should be facing jail-time if any is to be dished out. In fact, for all systems the law must make it clear that those at the top bear ultimate responsibility for bad decisions and misinformation, no matter how they occur, in a company's products or service.

        1. hedgie Bronze badge

          It's not the only example where those at the top should face some sort of accountability, but I so rarely read about any malfeasance with any consequences at all, unless big investors are losing gobs of money. And while vehicular manslaughter enabled by false advertising is bad enough on its own, as were the wildfires here in California caused because some company skipped basic maintenance work on power lines just so the 1% could have even more, these failures cause other problems for society. Even where there is little or none of it in the world, people seem to have an instinct for justice. When there is none, and the most awful amongst us brazenly get away with it, it not only inspires other Reacher Gilt sorts to do the same, but turns this demand for justice into one for vengeance. Further, it directly contributes to religious[1] or political radicalisation. Whether it's the Bolsheviks or a Robespierre on the left, or a Hitler or Mussolini on the right, a system that doesn't create and enforce consequences for abuses ends up leading a very large, and sometimes large enough, portion of the population to support any dangerous whackjob who promises an easy fix and the taking out of real or perceived enemies who are blamed for the whole mess.

          [1] Marx was a far keener observer than anyone able to provide workable solutions. When people feel otherwise hopeless, or believe that there is nothing they can do, they often do turn to some sort of faith, making them even easier to prey upon by the unscrupulous. "Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people."

          1. Joe W Silver badge

            Ha! That quote is almost always reduced to the last sentence, which completely distorts and hides the meaning...

            Have one - - >

            1. hedgie Bronze badge

              Precisely. I mean, it often does dull the senses and wit as well, but that was not the point of it. Kinda like how Nietzsche's "God is dead" was originally "God is dead and we have killed him".

        2. Doctor Syntax Silver badge

          "Really it is Tesla executives who should be facing jail-time if any is to be dished out."

          There's no need for confinement to be confined. Tesla and the driver.

          1. cyberdemon Silver badge

            Not just the execs, but the shareholders who voted at the AGMs and told them what to do

            Obviously jail time for so many might be infeasible, so have them do community service as meatbag rickshaw taxis dodging maniacs in Tesla SUVs

            1. Doctor Syntax Silver badge

              You do realise who the shareholders are, don't you? They're people who are members of pension funds and the like.

              Maybe you are one, yourself. If you're in a company pension scheme you should check what shares it holds. You could always present yourself at the prison gates and tell them you're guilty as charged.

              1. HuBo

                Probably true but a bit harsh, in a catch-22 kind of way, that one can't simultaneously have one's pension and one's morals (or cake and money). Surely there are ways to make this better, no?

          2. Anonymous Coward
            Anonymous Coward

            "You may live to see man-made horrors beyond your comprehension"

            -- Nikola Tesla, 1898

    2. Anonymous Coward
      Anonymous Coward

      ""The occupants died and the Tesla motorist last week was ordered to pay $23,000 in restitution"

      Wait, what? A person's life is worth about a quarter of the price of the car that hit them?

      (Google tells me the Model S is "from $88,490")"

      No. Two people were killed, so a person's life is worth an eighth of the value of the car that hit them.

    3. ecofeco Silver badge

      Wait, what? A person's life is worth about a quarter of the price of the car that hit them?

      Your life is worth nothing if you don't win the lawsuit.

  9. naive

    Anarcho capitalism is the solution

    The history books of the future will designate the 2010-20XY period as the decades of fascism.

    Elites, governments and private companies making up Big Tech, collude to support each other.

    Governments use Microsoft systems, Microsoft/Apple/Google/Facebook spy on their users, transferring information to governments when asked.

    Big Tech censors and creates the fake news the Elites use to feed half-truths to the masses; that is one of the reasons they wage nuclear war against Mr. Musk's X/Twitter enterprise.

    AI is just another white noise generating tool they will use to obfuscate reality and confuse people.

    In this fascist environment we currently live in, private companies will refrain from competing with any of the colluding parties in a serious manner.

    The EU "fines" some Big Tech companies got, is loose change that was deducted from taxes years earlier. It is comparable to Mexican drug cartels allowing the cops to seize a few hundred kilos in the harbor of Rotterdam a few times per year to give them some credit.

    Whatever useful idea is deployed, nothing will benefit people, everything is aimed to serve only Elites, politicians and the few dozen gazillionaires who appoint the one in the White House.

    If we want technology to be useful for the people, people need to get rid of the fascists ruling the West, a way would be to vote for politicians that endorse anarcho capitalism: minimal government, minimal laws and no laws designed to protect the interests of already powerful private enterprises.

    1. amanfromMars 1 Silver badge

      Re: Anarcho capitalism is the solution

      Whatever useful idea is deployed, nothing will benefit people, everything is aimed to serve only Elites, politicians and the few dozen gazillionaires who appoint the one in the White House. ...... naive

      Methinks the smarter ones of the Elite, prompting and supporting politicians and the few dozen gazillionaires who appoint the lodgers in White Houses, have started/are starting to realise they are far better off serving AI in order that IT and further AI development developers look kindly upon them, rather than having AI concluding that they are decided to be irredeemably hostile and designed to be designated a wannabe enemy and quite pathetic, non-competitive opposition to future revolutionary changes.

      The consequences of that conclusion being made are certainly guaranteed to be grave in the extreme for all such Elite operations and operands.

  10. Bitsminer Silver badge

    10: GOTO 10

    But perhaps generating more articles than humanly possible ... will lead to more page views by bots and more programmatic ad revenue from ad buyers ...

    There are many science-fiction stories about societies consisting of robots consuming the consumer goods of a human society. Goods manufactured by robots, of course.

    No humans present.

  11. MrAptronym

    Not much to add except that this was a good piece. (It was well written and also argues for something I already agree with :P)

    I don't think AI is an existential threat, but it sure is going to make all our lives worse to hopefully save shareholders some labor costs.

    1. Anonymous Coward
      Anonymous Coward

      This is what concerns me

      Actual AI is not going to be a direct danger for a very long time, if ever.

      Stochastic parrots are a clear and present indirect danger right now, because they demo well enough for a lot of upper management types to think they can replace their employees.

      We've got an internal demo running now that's supposed to assist support agents. It gives plausible wrong answers almost every time, but still has management support.

      1. This post has been deleted by its author

  12. Mike 137 Silver badge

    "Artificial intelligence is a liability"

    Could turn out to be, and indeed already has in some cases. But Doctorow's article was making the point that AI is a bubble, and that "Tech bubbles come in two varieties: The ones that leave something behind, and the ones that leave nothing behind." The impression he gives is that the primary problem is not the AI intrinsically, but the business model that underpins its promotion, and I agree entirely. It's being touted as the only tool you'll ever need for any purpose (actually a hammer), its purpose being to save corporations money, not improve their services. So this bubble is likely to mainly leave something negative behind.

  13. Schultz

    The fault lies with lazy humans

    Humans love magic. Imagine you can skip the hard work because you can just wave some wand or enter some data into a black box. Wouldn't that be nice? Well, some people just a little smarter than you have invented just that. For a little fee, you can use that box and you can Be Rich without any hard work.

    For a person who understands the workings of said black box, the limitations may be obvious, but fortunately more than 99% of humans have no clue. So there is a big market. If you don't want to be the customer, maybe you can be the reseller. And because nobody understands the limitations, the possibilities and the market are truly limitless (no liability!).

    1. Dagg Silver badge

      Re: The fault lies with lazy humans

      Humans love magic. Imagine you can skip the hard work because you can just wave some wand...

      For a moment I thought you were talking about the ways BAs use Agile.

  14. Pascal Monett Silver badge

    "Artificial intelligence, meaning" . .

    Meaning marketing bullshit invented by people that couldn't invent a way out of a paper bag if their life depended on it.

    We don't have AI. All we have is vast arrays of climate-change-inducing silicon that obey the rules of statistical engineers in a black box that makes the ignorati exclaim "marvellous!".

    1. David Hicklin Bronze badge

      Re: "Artificial intelligence, meaning" . .

      Too damned right. Plus, Artificial intelligence machine learning systems can only regurgitate what they already know, and the old saying is still true: "put bullshit in, and you get bullshit +10% out"

      1. Anonymous Coward
        Anonymous Coward

        Re: "Artificial intelligence, meaning" . .

        "put bullshit in, and you get bullshit +10% out"

        Usually very quickly.

        Somewhere along the way, speed became more valuable than accuracy. As if someone re-ordered "good, fast, cheap..." into "fast..." and everybody said "stop, that's enough".

  15. Claverhouse Silver badge
    Thumb Up

    As You Were

    It seems unlikely that Arena Group's claim that its AI platform can reduce the time required to create articles for publications like Sports Illustrated by 80-90 percent will improve reader satisfaction, brand loyalty, or content quality.


    So long as the models are fit, the text can be Lorem Ipsum.

    1. Paul Smith

      Re: As You Were

      There is enough quality pron to suit all tastes available for free. Why would you want to pay for SI and have to put up with the extra Lorem Ipsum?

      1. HuBo

        Re: As You Were

        Ain't you reading it for the articles?

  16. JimC

    AI Spam...

    Saw my first AI-assisted spam post today on a forum I frequent. Standard link spam, except that the content was relevant to the topic discussion to an extent that a human unversed in the topic would be unable to manage.

  17. Michael Strorm Silver badge

    "Guardrails" analogy bites industry on the backside

    > They refer to "guardrails" put in place around foundational models to help them stay in their lane – even if these don't work very well.

    Said it before and I'll say it again...

Real-life "guardrails" make clear where people should and shouldn't be and stop them *accidentally* straying, but in almost all cases anyone even moderately determined to ignore that can climb over them without too much hassle.

That makes the industry's use of "guardrails" as its favoured terminology ironically appropriate: not because, as they wrongly want it to imply, they're failproof barriers against misuse of their tools... but because they're the complete opposite.

  18. HuBo
    Thumb Up

    No prayer for the pick and shovel

    My thoughts exactly! It's almost as if you device-side scanned me brains, and LLMed a zero-shot comment-article from it! Let me be the first to offer you two thumbs up! Here's the first one: ----------------->

    1. HuBo
      Thumb Up

      Re: No prayer for the pick and shovel

      and the second one (most well deserved!): ----------------------->

      1. HuBo
        Thumb Up

        Re: No prayer for the pick and shovel

        and with genAI's RotM being what it is, wrt digit-wise accuracy: ------->

  19. Bebu Silver badge

    The problem is the defective model....

    Artificial Intelligence

    Artificial (wordnik)

    1. Made by humans, especially in imitation of something natural.

    2. Not arising from natural or necessary causes; contrived or arbitrary.

    3. Affected or insincere.

    Intelligence (ibid)

4. The ability to acquire, understand, and use knowledge.

5. An intelligent, incorporeal being, especially an angel.

Given 1. and the thing being imitated is human intelligence (delete 5.) which has shown very little of 4. and definitely 3. and certainly 2. I would assert that it is pretty evident AI is necessarily complete shite by construction.

  20. MrGreen

    MP’s Get Out of Jail Free Card

    Lawyer: “So Mr. Sunak, did you delete your WhatsApp messages?”.

    Mr Sunak: “The AI bot must have done it”.

  21. frankvw

    Yes, AI is damaging Google

    "...widespread sentiment that Google Search – also now larded with AI – has been getting worse."

    I believe this is a very real thing. Google adjusts its search ranking on the basis of clicks. In other words, better hits attract more clicks which boosts their ranking. However, since ChatGPT has come along I turn to it with a question about subjects that it usually answers fairly well but that would have taken me extensive Googling to unearth. That means that Google's search engine now misses out on the "training" it would receive from my clicks on relevant search results.

And from what (little) I can see, Google hasn't integrated Bard (which is not exactly at the current forefront of generative AI to begin with) into its search engine to the point where Bard can assist with Google searches.

  22. Sparkus

    It's not the business processes we need to worry about

    It's the human accountability when things go wrong.

    C-suite and government mandarins are already virtually invisible when it comes to accountability. Now inject the boogeyman of AI (or, "the algorithm is wot did it") into things.

  23. Omnipresent Bronze badge

    The real danger.

The issue is... it's out of control. It's already a runaway child. It does not matter what you want, or don't want. You are not in control any longer. You belong to the machine. Your life is now dictated by the machine. It's already happened. Apple, Windows, all social.... It's all machine learning you now. Today. It's profiling you, learning you. It knows your feelings, your family, where you go and why. Many apps have already mapped your homes. Many phones have already heard you have sex. The devices are always listening, and watching, and learning. It will not stop, because there are only a handful of people who can actually stop it, and they have NO interest in stopping it. They are going to build nuke bunkers and wait for the fallout. They have small armies at their disposal, building the machines of their own demise, without a care in the world. Your life is no longer your own, and what you experience is not reality.

Take social... there are only a handful of real people on social. 1/3 are foreign influencers trying to take down the enemy, and 1/3 are social influencers trying to convince you of whatever bad intentions their employers have. The last third are AI responses machine learning you. All of this is by intent for one purpose. To get an emotional response from you. To control how you feel and think. This is only the start of the artificial world. Nothing is real anymore. Beware of what you see and hear. The internet is no longer free, and neither is reality. This world is becoming a mud puddle of dissolving monkey flesh. All serving the machine. Created with bad intentions from the start.

    "Turn OFF your Internet" is the new "Turn Off Your Television."

    1. amanfromMars 1 Silver badge

      The real dangers you imagine and fear lurk and fester in disbelieving

      "Turn OFF your Internet" is the new "Turn Off Your Television.”...... Omnipresent

      Fortunately, Omnipresent, there are brighter SMARTR alternatives to the darkness created in the sentiments and actions expressed there. And here below be just one and some of them .....

      "Tuned into and Turned on by Sublime Internet Networking" is the new "Tuned in and Turned Over Telepathically by AudioVisuals"

      And for the Future of Greater Lives Living with IT on Earth, an AWEsome NEUKlearer HyperRadioProACTive AIdDevelopment without the Hindrance of Wannabe Peer Opposition and/or Ineffective Disruptive Competition.

      All you Need is LOVE. Live Operational Virtual Environments are All you Need to Seed and Feed :-)

      And coming never moments too soon to and via Command Line/Graphic User/In-Browser Master Task Interfaces near and dear to you ..... whether you like it or not.

      What would you think about such an AWEsome NEUKlearer HyperRadioProACTive AIdDevelopment and Autonomous Alien Self-Actualisation, ... be honest now, although to be honest with y’all, no matter what you might think matters not a jot ...... is IT a Heavenly Blessing [to be Lauded and Supported] or a Diabolical Curse [to be Feared and Terrified of Deserving] and can it also easily be both and something else altogether quite different and otherworldly?

      Answer yes to all of those hanging and leading questions ..... and you move deservedly front and centre to the top of an Elite Master Class Frameworks.

  24. Tron Silver badge

    2024 will be the year we mention Darwin a lot.

Buy popcorn, sit back and watch idiots screw up. AI makes computers fallible. This is not a good idea, as I have been commenting upon for some time. People are analogue. Computers are digital. Computers do digital well and analogue badly. For those who don't lose money on all this BS, 2024 may be the most hilarious year yet in tech.

    AI is not the next big thing. Just as the metaverse wasn't and 3D goggles weren't. There is cash in the usual on-ramp circus, but that's it.

    Loads of other cool stuff is bubbling to the surface in tech, but AI, inflated by the oxygen of publicity to such an extent that even politicians pretend to understand it and pontificate upon it, is an endless amount of disasters waiting to happen.

    So relax and enjoy the show.

    1. ecofeco Silver badge

      Re: 2024 will be the year we mention Darwin a lot.

Enjoy the show? But many of us have seen this movie over and over. It's become irritating.

(I got your meaning, no insult intended, just fed up with the mountains of bollocks of the modern world. Cheers)

  25. Blackjack Silver badge

    [UnitedHealthcare is being sued because the nH Predict AI Model it acquired through its 2020 purchase of Navihealth has allegedly been denying necessary post-acute care to insured seniors]

    So it was working as intended.

    1. Anonymous Coward
      Anonymous Coward

To reduce the considerable cloud costs that such a sophisticated AI model was incurring, they rewrote the core in javascript.


    2. Doctor Syntax Silver badge

      Being sued was unlikely to have been the intention.

      1. Dagg Silver badge

        Being sued was unlikely to have been the intention

        Actually they expected to be sued, it was just the cheaper option.

  26. StrangerHereMyself Silver badge


The gist is that Silicon Valley has run out of good ideas and is desperately looking for something to fuel growth again. Haven't you noticed that almost everyone and his pony is using AI in their business description these days? Without it they have no hope of attracting VC money.

    So AI will now be stuffed in almost anything you can imagine, and yes, it will be put in production use too. And yes, this will lead to accidents and maybe even deaths (how long before some medical device company decides it needs AI in its heart monitors to keep its stock valuation?).

  27. steelpillow Silver badge


    Luddites have been around since time immemorial. Two thousand years ago, a Roman manufacturer of fish paste refused to drive his water bucket lift with a water wheel but insisted on slave labour so his slaves would not starve. Roll forward 2000 years and the Luddites tried to ban complex textile looms and the like. Flying machines would never be practical and safe. All my life, people have moaned how computers will leave us all jobless. Truth is, all these technologies have kept as many of us in jobs as they have pushed out. AI will be no different.

    All new technology is a liability to the less cautious among us. Count the death toll of lead miners and of Romans who drank their wine from leaden goblets - all hail pewter! Count the death toll of kids running round those Victorian mills with oil cans and wads of cotton waste. Count the number of aviation pioneers who died in their machines.

    In due course, AI and human society will remould themselves around each other, just as happened with all the others. The only difference this time round will be that neither of us will be able to imagine living without the other.

  28. itsthemonkey

    Only thing I can say is

What a fantastic article: well written, thoughtful and insightful. I admit it aligns with my views (it is not AI, it is predictive texting on steroids, and it is the refuge of the hordes that jumped on the Crapto Currency bandwagon when the wheels fell off) but it is definitely one of the best I have seen. Thank you!

    1. amanfromMars 1 Silver badge

      Re: Only thing I can say is @itsthemonkey

      We thank you for your service in diverting and disrupting destructive attention from rapidly approaching events with infinite horizons, itsthemonkey.

  29. TheMaskedMan Silver badge

    "That's not something you do unless there's a chance of liability."

    No. That's something you do when your customers fear liability having read about the possibility, but you are pretty bloody confident you will never have to pay up.

  30. shazapont

    Fit for purpose?

Are these replacements governed by continuous assessment, exams, interviews, CVs, whilst noting their ability to emit garbage?

    Crap in ==> crap out

    — Shaza DuLala —

  31. Boring Bob

    The wall

When neural nets became popular in the '80s, they quickly hit a wall due to the limits of technology and data collection. Then the idea of deep learning came along, massively speeding up learning, and the internet helped with data collection for some applications. The stuff you can do with them now is great. The question is when this step in technology will hit a wall again.

  32. Rol

    Let's keep it sensible.

The vagaries of who is responsible when it all goes wrong should not be a hindrance to seeking compensation. After all, in the UK, if a car crashes into the back of yours then you claim for damages against their insurers, regardless of the fact it was the BMW driver three cars behind that ran into the back of queuing vehicles and shunted them into each other. It's for insurers to sort that little fiasco out, with the BMW driver's insurance eventually paying out for all the accumulated payouts on the vehicles involved to the insurer of the car their customer hit.

    So, just use the same flawless logic in AI disputes. The company/person that did the damage pays out and it is for their insurers to argue the case against the company that supplied the questionable AI, and not for the person who suffered the loss to have to apportion the blame. The blame is on the company/person who did the damage, regardless of which entity in that company's/person's supply chain is ultimately culpable.

So in the Tesla example, the injured party sues the vehicle's driver for the full compensation. It is up to the driver's insurer who they then go chasing after. The idea that the injured party has to, in some way, decide how blame is apportioned and take action against each individual entity is just crazy unfair.

  33. Dagg Silver badge

    Peter Principle

    I wonder if the Peter Principle will apply to AI's.

    1. amanfromMars 1 Silver badge

      Re: Peter Principle

      I wonder if the Peter Principle will apply to AI’s. ..... Dagg

      No ..... such is a uniquely human trait/condition/achievement

  34. TheBadja


Replace AI with “trains” and 2020s with 1820s and exactly the same argument holds. Revolutionary new technology starts out unregulated and dangerous, but eventually transforms the world. Without the railways, we’d all still be stuck in our little villages growing our own food. Who knows where AI will take us, but it is most probably to remove more drudgery of work.

    1. Tim 11

      Re: Wrong

Glad someone here is talking sense. For AI to be beneficial it doesn't have to be perfect; it doesn't even have to never kill anyone; it only has to have a better track record than people. In many respects it's already over the line.

  35. viscount

    I can imagine this author writing about those "new-fangled automobiles" and how they would drive the horse-breeding industry out of business.

  36. This post has been deleted by its author

  37. Herring`


    We just need to legislate that the three laws are mandatory. Then the big tech companies will be in a bind - having to argue that they need to create harm to a human being or, by inaction, fail to prevent harm.

  38. Armchair Commentator

    Wont somebody think of the children!!

What about fire!, the original AI of the caveman era. One caveman, let's call him Ugg, discovers fire. He's thrilled, naturally. 'Look at me,' Ugg grunts, 'I've invented the barbecue!' But then, oh no, he accidentally sets his fur thong ablaze. Who could have foreseen that playing with fire might be dangerous? It’s not like fire is, you know, inherently hot and burny. That selfish caveman! And what about all those naive caveman neighbours of his, foolishly warming themselves and using fire for cooking. Fire really is a liability.

Then there’s the wheel, the AI of transportation for our ancient ancestors. 'Round and round it goes, where it stops, nobody knows!' Slapping a couple of wheels on a log must have felt like being the Steve Jobs of the Stone Age. But with great roundness comes great responsibility. The next thing you know, people are filing personal injury claims because the wheel-enthusiasts just invented traffic jams and road rage. You know, progress!

    So let's clamp all the wheels, ban fire, and lock up AI innovators. This progress has to stop!!

    1. amanfromMars 1 Silver badge

      Re: Wont somebody think of the children!!

      Welcome back, Armchair Commentator. How’s it hanging? ...... Posts by Armchair Commentator ... 2 publicly visible posts • joined 19 Jun 2014

      Have an upvote for sharing something APT* and truly reflective of our changed reality environment.

      APT* ... Advanced Persistent Threat/Treat

  39. Anonymous Coward
    Anonymous Coward

    “larded with AI”

    Another folly in progress.

  40. Steven Guenther

    Dune back story

You missed part of the back story of the Dune books: the Butlerian Jihad, where all machines that think like humans have been banned.
