Are you ready to back up your AI chatbot's promises? You'd better be

I keep hearing about businesses that want to fire their call center employees and front-line staffers as fast as possible and replace them with AI. They're upfront about it. Meta CEO Mark Zuckerberg recently said the company behind Facebook was laying off employees "so we can invest in these long-term, ambitious visions around …

  1. Anonymous Coward
    Anonymous Coward

    AI LLMs often aren't right. They're not even close.

    Worse than that. They actively make shit up.

    1. amanfromMars 1 Silver badge
      Pirate

      Re: AI LLMs often aren't right. They're not even close.

      Worse than that. They actively make shit up. .... Anonymous Coward

      So virtually human and being modelled on the politically incorrect and serially incompetent, AC. ........ See Yes, Prime Minister ..... but it does require balls other than jugglers’ below

      1. Anonymous Coward
        Anonymous Coward

        Re: So virtually human and being modelled on the politically incorrect and serially incompetent

        and should, therefore, be terminated immediately. The only way to be sure.

    2. Lurko

      Re: AI LLMs often aren't right. They're not even close.

      "Worse than that. They actively make shit up."

      Yes, but for many years I was a customer of a well known UK cable company, and the cheaply offshored customer service agents routinely made shit up. A visit to their customer help forum shows they still do. So if you've got poorly trained humans making it up as they go along, often with poor language skills, then AI can at least improve on the language skills, and make shit up more cheaply. What's not to like for PHBs?

      1. DS999 Silver badge
        Trollface

        Re: AI LLMs often aren't right. They're not even close.

        I once heard Directv's CSRs called "random answer generators", and in my experience that was correct - and I dealt with the better class of CSRs you got with commercial accounts, not the residential ones (the "good" ones there got promoted to commercial).

      2. Excused Boots Silver badge

        Re: AI LLMs often aren't right. They're not even close.

        It’s Virgin Media isn’t it?

        In an attempt to cut costs they outsource their customer support to some offshore call centre employing people on whatever passes for minimum wage there, making the cheapest tender bid. In itself that’s fine (other than the minimum wage part), except they are the cheapest for a reason - they simply don’t train the staff at all, relying on blindly following a script. And this might well work for, say, 90% of calls, and when it doesn’t, they are incentivised to just make stuff up. Anything will do as long as it gets you off the phone and they chalk up a ‘successfully closed call’.

        Because the call centre owners/managers will have some sort of SLA based on call numbers and closures, which determines if they get paid or not. The incentive is to simply close calls, irrespective of whether the problem is solved or not.

        Not dissimilar to using an LLM for your support: it will probably be OK most of the time, except when it does go wrong, it'll go wrong spectacularly, and expensively!

  2. amanfromMars 1 Silver badge

    Yes, Prime Minister ..... but it does require balls other than jugglers'

    Air Canada discovered the hard way that when your AI chatbot makes a commitment, your company will be on the hook for it

    Are we then to reasonably expect political parties to be on the hook for promises and commitments to goals they and their leading cheer-leading chatboxes/ministers and constituent members came nowhere near to fulfilling?

    Yeah, why not? That seems perfectly fair and not at all crooked for a rigged game.

    1. Tron Silver badge

      Re: Yes, Prime Minister ..... but it does require balls other than jugglers'

      Political promises should come with stats, a timescale and be legally binding. Fail and you should be excluded from office, fined and imprisoned.

      At which point the lying hypocrites will all switch from 'commitments' to 'aspirations'.

      They have been doing this a long time and citizens don't get any less gullible. Brexit proved that beyond reasonable doubt. The most you can do is erase the current lot from power at the next election and be screwed over by different politicians for a bit. They don't do any of it for us and they are 'all in it together'. So insulate yourself from them as best you can.

      Computers fail when they try to be human. AI is unreliable. The mugs will throw money at it the way they did at the metaverse. We get to suffer from the failures and sometimes to laugh at it. Then politicians step in, tap them for free money in fines and then take control of it all.

      1. amanfromMars 1 Silver badge

        If you can’t stand the heat, get out of the IT and media kitchen for it is going to explode

        Changed days for those cooking more than the books with crooks in social media laboratories/politically incorrect parties ......... https://www.telegraph.co.uk/politics/2024/02/24/mps-given-bodyguards-as-extremism-threat-rises/ ..... and a sure sign of a great deal more accurate targeting of problems for radical fundamentalist solution yet to come.

        And which one should note is not shared freely here as a question whenever quite obviously determined and destined to be an undeniable fact.

        What is it about failed systems which has them constantly digging more holes for them to get buried in ...... apart from a complete lack of common sense and advanced astute artificial augmented anonymous autonomous alien intelligence, of course, plotting for them a totally different course for future self-actualisation/Maslowian hierarchical activation ‽ .

      2. Anonymous Coward
        Anonymous Coward

        Re: Yes, Prime Minister ..... but it does require balls other than jugglers'

        "Political promises should come with stats, a timescale and be legally binding."

        Most come with stats and timescale and fail, but even the ones with all three can fail when they ignore the 'legally binding' bit... "We make the laws and had our fingers crossed when we said that, so it doesn't apply"

  3. GoneFission

    >Air Canada replied, in effect, that "The chatbot is a separate legal entity that is responsible for its own actions."

    Imagine if this comes up again in a higher court and the ruling sides with the company. This would result in them cashing in on all of the cost savings of replacing humans with barely purpose-functional LLMs and none of the burden of associated risks and liabilities.

    1. Doctor Syntax Silver badge

      Can you realistically see that happening?

      OTOH would a small claims court be precedent-setting in Canada?

      1. Richard 12 Silver badge

        Civil cases can set precedents, so maybe?

        1. cyberdemon Silver badge
          Holmes

          It would set a precedent, for the few weeks it would take to be overturned by a higher court, shirley

          1. Yet Another Anonymous coward Silver badge

            The 737 exit root is its own legal entity and if it decides to leave the aircraft and 30,000ft that's nothing to do with the airline

            1. Yet Another Anonymous coward Silver badge

              root/door = my phone's autocomplete has become self aware but is an idiot

        2. Doctor Syntax Silver badge

          AFAIK in English law it's got to be a good deal higher than a small claims hearing. The High Court at least, I think.

          1. Anonymous Coward
            Anonymous Coward

            I think in England previous High Court rulings can be used to set guidance in subsequent cases, but it needs an Appeal Court ruling to set a legal precedent (i.e. something that subsequent cases have to follow).

    2. Anonymous Coward
      Anonymous Coward

      Last time I checked, I was a separate entity to the company who employs me - does that mean the company is not liable if someone phones the support line asking how to get rid of an error and I tell them how to format c: ?

      1. doublelayer Silver badge

        In a way, since your company would probably sue you for doing it. It wouldn't get them out of their liability, but you could still face consequences. I'm fine if Air Canada wants to try suing their chatbot provider for that to recover the costs. It probably won't work, though.

  4. elsergiovolador Silver badge

    Separation

    Does it mean you can train your LLM to say whatever you need it to say, connect it to e.g. Twatter, and if confronted say "not me, guv, it's the LLM, sue that, not me"?

    1. MiguelC Silver badge

      Re: Separation

      Thankfully, the court thought otherwise.

  5. Rafael #872397
    Headmaster

    I'm not a Luddite, but

    ... we need a word for the complete opposite of a Luddite, for people and companies that jump on unproven tech concepts and start planning around them before seeing whether they are going to work or be useful, sometimes just because others are thinking about maybe doing it.

    Tech bro, bandwagon jumper, "visionary", early adopter* and technophile* just aren't enough.

    Musketeer?

    *ChatGPT suggestions.

    1. StewartWhite Bronze badge
      Flame

      Re: I'm not a Luddite, but

      How about TechnoTwat?

      1. PB90210 Silver badge

        Re: I'm not a Luddite, but

        Already taken by His Muskiness

    2. amanfromMars 1 Silver badge

      Re: I'm not a Luddite, but @Rafael #872397

      Future Builder Pioneer? ...... Wild Wacky Westernised Cowboy?...... Exotic Erotic Eastern Imperialist? ......... Brave Heart? ....... Bold Leader?

    3. David-M

      Re: I'm not a Luddite, but

      Since Luddite is likely named after Mr. Lud, we'd maybe be looking for a word like Altmanite or AirCanadite...

      1. Doctor Syntax Silver badge

        Re: I'm not a Luddite, but

        Ned Ludd was a fictitious signatory of threatening messages. Alternatives were "Captain Swing" and the less imaginative "A friend".

    4. Doctor Syntax Silver badge

      Re: I'm not a Luddite, but

      Mug?

      1. Anonymous Coward
        Anonymous Coward

        Re: I'm not a Luddite, but

        fanboi?

    5. Sparkus

      Re: I'm not a Luddite, but

      https://en.wikipedia.org/wiki/Dune:_The_Butlerian_Jihad

    6. user555

      Re: I'm not a Luddite, but

      Fear-of-missing-out (FOMO)

    7. computing

      Cutting edge self-harmer?

      People daemonised and oppressed by technolust spirits.

    8. RedGreen925

      Re: I'm not a Luddite, but

      "or people and companies that jump on unproved tech concepts and start planning around it before seeing whether it is going to work or be useful, sometimes just because others are thinking about maybe doing it."

      I call them morons....

    9. C R Mudgeon

      Re: I'm not a Luddite, but

      Hype cyclist.

      1. Anonymous Coward
        Anonymous Coward

        Re: I'm not a Luddite, but

        Considerer (of) Unproven New Technologies

  6. John H Woods

    "In a few years, it will be a different story."

    I'm not sure I have seen any real evidence for that. Is the training going to get better? Are the LLMs going to get so much better the training can be the same? Or is some new form of AI chatbot that isn't really an LLM going to appear? Absent any of that, I'm sceptical.

    1. Anonymous Coward
      Anonymous Coward

      Re: "In a few years, it will be a different story."

      "Is the training going to get better?" - Well, Google just did a deal to use Reddit as training material, so there goes that hope.

    2. doublelayer Silver badge

      Re: "In a few years, it will be a different story."

      I wouldn't be surprised if OpenAI eventually tries. They've made it so far by having a bot that can print coherent sentences, and they've succeeded at convincing some companies to pay for it. I think that ascendance will eventually break when the inaccurate results become bad enough. If they can see this coming, they might start focusing on getting some accuracy out of it, not just hiding their training data. The money they've made from sales so far and from massive investments from Microsoft should allow them to contribute resources to the attempt. I don't know how easy that will be, and they may try and fail, but I do think they or someone like them will try.

      1. Ken Hagan Gold badge

        Re: "In a few years, it will be a different story."

        You could start by only using accurate training data, but that's much more expensive than simply hoovering up the easily accessible chunks of the internet.

        1. katrinab Silver badge
          Alert

          Re: "In a few years, it will be a different story."

          Even with "accurate" training data, it is still going to give you the "correct" answer to the wrong question.

          If an answer to a question given in the training text is correct in certain circumstances and not in others, and that is explained in the text, the AI isn't going to be able to understand this context.

      2. Richard 12 Silver badge

        Re: "In a few years, it will be a different story."

        I don't think there's any evidence that a generative LLM can ever be relied upon, and plenty that it cannot: the fundamental concept is stochastic, with low resolution, so the probability of unwanted results is always going to be pretty high.

        It's going to need some other type of system.

        1. doublelayer Silver badge

          Re: "In a few years, it will be a different story."

          I agree with you there. If they did it, it would have to be done with some other type of technology, and it's unlikely to be able to do it with perfect accuracy. However, I still think that one of these AI companies might eventually realize that they need it and try to build such a thing. They may fail to accomplish it, but for the moment, nobody is even trying. This is assuming that enough people rely on incorrect chatbot answers and suffer the consequences, so I'm hoping that rulings like this continue to happen. If users manage to find a way not to suffer when they use a chatbot's answer to screw over another, my assumptions could prove wrong.

        2. EveryTime

          Re: "In a few years, it will be a different story."

          Far worse than simply being wrong, LLMs are designed to provide an answer that is most likely to sound correct.

          A post filled with misspellings, typos and obviously wrong details can be immediately rejected as likely being ill-informed. One that appears well written and superficially what is expected is likely to be accepted as accurate.

  7. heyrick Silver badge

    found out the hard way

    I disagree. The hard way was the tortuous logic they tried in order to make their chatbot somehow not responsible for what it said.

  8. Gordon861

    Disclaimer

    Does this mean that all business chatbot/AIs will now have a disclaimer on the page saying that any answers should be confirmed and are not binding?

    And will these disclaimers be legally enforceable if the bot does give out false information.

    1. ComputerSays_noAbsolutelyNo Silver badge

      Re: Disclaimer

      A chatbot with a disclaimer, "Do not trust the chatbot, it may be full of the proverbial", means either no savings from employing AI as a means to cut back on actual people answering inquiries, or zeroing your customer-facing communication, i.e. no real-life people and an untrustworthy chatbot.

      1. doublelayer Silver badge

        Re: Disclaimer

        It depends if a court accepts it. If they basically say that the chatbot can say whatever it likes and they don't have to honor any of it, then it can still look like it's providing customer service and leave every user with the results.

        Such things are not uncommon already. I was reminded of this recently when I was asked to repair a phone with a broken charging port which had been purchased only months ago. The warranty attached to it had so many different reasons why something wouldn't be covered that, as far as I could tell, the damage was not covered. The customer service person said that it was not covered, but not why. What I can't figure out is what kind of damage, other than maybe the phone being broken before anyone touched it, would have been covered by the warranty. Yet a customer buying the device would think that they had some kind of protection anyway because the warranty existed, and surely they wouldn't have a document if it meant nothing.

        The average user will probably never see the disclaimer and assume that, when the chatbot the company chose to put there gives them some information, it is valid. It is possible that a court will overturn that and invalidate the disclaimer, the same way that I could probably have tried to challenge the warranty, but most users will not try because there is a good chance that it won't work and they'll end up wasting time and money in the attempt.

        1. DeathSquid

          Re: Disclaimer

          This is what statutory consumer rights are for. The retailer is on the hook if the item is not fit for purpose.

          1. doublelayer Silver badge

            Re: Disclaimer

            True, but it is where the legal ambiguity starts to come in and, when it does, the average consumer starts to worry about proving what could turn out to be a simple case. For example, in this case with the broken phone, the charging port was working when it was new, and now it's not. I don't know exactly why as it wasn't mine, but it probably wasn't someone pounding it with a hammer. I could try to suggest it's manufacturer's poor workmanship and they can try to prove that it was caused by user negligence. The typical user looks at all this and decides that, since this was a cheap phone, it will take so much effort that, even if they win, they've probably spent more than paying for a repair or replacement would cost and they're not confident they would win anyway. That is how companies can use disclaimers, even invalid ones, to blunt consequences. The only way around this is if the disclaimer is ruled invalid and they are forced to remove it entirely. If some company can find a wording that the court accepts, everyone will use something similar.

    2. katrinab Silver badge
      Unhappy

      Re: Disclaimer

      But then people are going to confirm the answer with a human, and then you don't get any savings from using it.

      1. doublelayer Silver badge

        Re: Disclaimer

        Of course you do if you can hide the disclaimer and have no humans. If they find some set of conditions under which a court allows them to lie via chatbot and not have any consequences, they can use those conditions. Anyone wanting information may try to call someone, but if they only give them the chatbot, then many customers will use it because it's the only option. The customers would be annoyed and some might try not to buy from them, but that doesn't seem to have stopped a lot of companies today who have the bare minimum of customer service.

  9. cornetman Silver badge

    Someone needs to come up with a way of pairing some kind of "fact script" with the part of Chat GPT that can hold a conversation.

    Think of a real person sitting with a company handbook on company rules and current promotional offers.

    Relying on ChatGPT to not only hold a conversation with context but also generate the factual basis for its responses is never going to be reliable.
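
    A minimal sketch of that "fact script" idea: answers are looked up in a vetted handbook, and the conversational layer is only allowed to rephrase what was retrieved, never to invent facts. The handbook entries and the query here are hypothetical, and a real system would use proper semantic search rather than crude keyword overlap.

    ```python
    from typing import Optional

    # Hypothetical company handbook: the only text the bot may state as fact.
    HANDBOOK = {
        "bereavement fares": "Bereavement fares must be requested before travel; "
                             "refunds cannot be claimed retroactively.",
        "baggage allowance": "Each passenger may check one bag of up to 23 kg.",
    }

    def retrieve_fact(query: str) -> Optional[str]:
        """Return the handbook entry whose topic words best overlap the query."""
        query_words = set(query.lower().split())
        best_topic, best_score = None, 0
        for topic in HANDBOOK:
            score = len(query_words & set(topic.split()))
            if score > best_score:
                best_topic, best_score = topic, score
        return HANDBOOK[best_topic] if best_topic else None

    # When nothing is retrieved, the bot should say "I don't know" and escalate
    # to a human, rather than let the language model improvise an answer.
    ```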

  10. Helcat Silver badge

    Can see a potential problem here:

    ChatGPT was asked what the VAT paid was on £500 where VAT was 20%. It came back with an answer of £100

    okay...

    so it was asked if it had the calculations correct: It admitted it did not. It then gave the corrected formula...£500 * 20 / 1+(20/100).

    So 10,000/ 1.2.

    oops... tax paid: £8,333.33

    Now, if you're claiming your tax back...

    (for clarity, the actual tax paid was £83.34 as tax is always rounded up :p )
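
    For the record, the arithmetic the bot mangled is simple. A quick sketch, assuming (as the poster does) a VAT-inclusive price of £500 at a 20% rate, with the tax rounded up to the next penny:

    ```python
    import math

    price_inc = 500.00                  # VAT-inclusive price
    rate = 0.20                         # 20% VAT

    net = price_inc / (1 + rate)        # 416.66... net of VAT
    vat = price_inc - net               # 83.33... VAT paid

    # Rounding the tax up to the next penny, per the poster's note:
    vat_rounded_up = math.ceil(vat * 100) / 100   # 83.34
    ```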

    1. doublelayer Silver badge

      Admittedly, its first answer could be due to the vague question. "the VAT paid on £500" could mean £500 total expenditure including tax or £500 before tax. If I were asked the question, I'd probably ask for clarification. If I couldn't have it, I'd use context to guess which was wanted, for example using a before tax amount if the person asking was a seller and after tax if it was a buyer, but that isn't a guarantee of anything.

      1. cornetman Silver badge

        I was reminded of something that happened with me the other day on a ferry when the announcements came over the speaker system: "There is no smoking anywhere on the vessel." It got me to thinking, why don't they just say that smoking is prohibited on the vessel?

        When I suggested this to my wife, she replied "Well they did say that smoking is not allowed", to which I replied, "Well, actually they didn't. They said that there is no smoking". That's only true until someone actually does smoke, in which case there *would* be smoking on board.

        I can guess what actually happened when they were coming up with the script for the announcement: someone suggested that what they came up with would be less "confrontational" than just saying that smoking is not allowed. It might be less confrontational, but it doesn't actually say what they intended.

        Perhaps when people become accustomed to hearing certain forms of speech, they stop thinking about what it actually, logically means. Like "what the VAT paid was on £500". I can envisage an accountant's office where this is such common parlance that the ambiguity of it becomes lost, such that they don't understand when someone points it out. I guess a form of the "curse of knowledge".

        Not ragging on the original poster, just an observation....

    2. John Miles

      Being pedantic

      I'd expect the computer to calculate "£500 * 20 / 1+(20/100)" to be £10,000.20
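
      Python agrees, since multiplication and division bind tighter than addition; the bot presumably meant to parenthesise the whole denominator:

      ```python
      literal  = 500 * 20 / 1 + (20 / 100)   # as written: 10000.0 + 0.2 = 10000.2
      intended = 500 * 20 / (1 + 20 / 100)   # 10000 / 1.2 = 8333.33...
      ```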

  11. gnasher729 Silver badge

    I don’t think it has anything to do with LLMs making promises. It’s all about what appeared on their website. How it got there (as long as the company is responsible and not hackers) doesn’t matter.

    And I don’t think anything was “legally binding”. They just had to pay for damages that their website caused. So they had to pay for giving the wrong information.

    1. Richard 12 Silver badge

      They didn't pay damages

      The court simply ruled that they had to honour the contract the chatbot created on their behalf.

      1. gnasher729 Silver badge

        Re: They didn't pay damages

        No contract was created. Their website provided misinformation which led to damages, and they are responsible for the damages. If you approach a bridge in your car, and ask me if the bridge is safe, and I lie to you and your car ends up in the water, I’m responsible for the damages. I didn’t enter a contract to make the bridge safe.

    2. Doctor Syntax Silver badge

      And I don’t think anything was “legally binding”

      It's just that the legal court bound them to pay up.

  12. Anonymous Coward
    Anonymous Coward

    and yet...

    all your valid points don't matter at all, the show must go on. Was it NOT pointed out that bitcoin etc are a (near) perfect way for the bad boys, girls and in-betweens to get their ransom money from digital high seas? Or that the techno bros' business model is ruinous? Or that drones + explosives kill more people? Did I mention the machine gun that was supposed to stop all future wars? Boy, didn't the pope mumble something about 'crossbow baaaad' shitting in the woods? Why would it be any different with 'AI'? Fuck you Jack, I'm alright, 'cause I'd better be the fucker than the fucked, that's the homo sapiens spirit.

    1. Doctor Syntax Silver badge

      Re: and yet...

      Is AMFM1 going A/C?

  13. Sparkus

    fixed the headline for you

    Are you ready to back up your AI chatbot's analysis, predictions, and promises?

  14. TeeCee Gold badge

    So then.

    If all customer service reps were replaced with AI chatbots, how would we know?

    1. doublelayer Silver badge

      Re: So then.

      All the scripts would get very long. Instead of a quick answer to your question (possibly wrong), you'd get a two-page answer to your question (possibly wrong). A lot like what you get if you try to find the information online.

  15. aerogems Silver badge
    FAIL

    Again we see how businesses say one thing, then act in a completely opposite way. They always talk about how customers are so important, but then at every single opportunity, they skimp on the customer service. When they aren't replacing it with chatbots, it's almost always outsourced to some call center where they're incentivized to get people off the phone as quickly as possible and training is sparse at best, which is why you can talk to three different CSRs and get five different answers to the same question.

    Customer service is seen purely as an expense, not the potential sales and customer loyalty driver that it can be if done well. As long as companies continue to skimp on the customer service, you can expect shit like this to keep happening. But of course, I am foolishly looking beyond the next quarter, which is why I would clearly make for a horrible CxO.

    1. Doctor Syntax Silver badge

      "Again we see how businesses say one thing, then act in a completely opposite way. They always talk about how customers are so important, but then at every single opportunity, they skimp on the customer service."

      It's not really saying things. It's just the standard PR process of joining together strings of words. They're not intended to have meaning. Just like generative AI. PR people will be the easiest to replace if they haven't already been.

      1. aerogems Silver badge
        Black Helicopters

        What about political speech writers? I saw someone suggest that Ron DeSantis was just using ChatGPT, and of course Trump literally sounds like what happens if you just press the middle autocorrect option every time. I'm reasonably sure Trump suffered a stroke at some point probably over a decade ago, and had to relearn to talk. Maybe he was using a really early model ChatGPT type system.

  16. Filippo Silver badge

    >A real-live Air Canada rep confirmed he could get the bereavement discount.

    I think this bit deserves more attention. It's not just the chatbot.

    >"The chatbot is a separate legal entity that is responsible for its own actions."

    I'm really glad to hear that the court did not fall for this. Claiming that a chatbot is a separate legal entity is insane, but sometimes you hear about judges falling for stupid arguments.

    1. Richard 12 Silver badge

      I believe it was the "90 days afterwards" part that the chatbot stated.

      The human appears to have said "yes, a dead grandma qualifies", but not mentioned any other conditions.

  17. T. F. M. Reader

    Was the chatbot even wrong in the case?

    There is an interesting bit in the article: "A real-live Air Canada rep confirmed" what the bot had told the customer.

    An interesting bit that is not in the article: it is not clear if that was considered pertinent by the Court. I, for one, am curious.

    1. aerogems Silver badge
      Boffin

      Re: Was the chatbot even wrong in the case?

      That really wasn't the point. The point was that Air Canada was trying to claim that the chatbot is essentially a legal person completely separate from the airline, so if they wanted to stiff a customer on some discount, they could, because they weren't bound by whatever the chatbot said. The judge in the case disagreed, so companies using chatbots (in Canada at least) will now be on the hook for anything their chatbot tells a customer. It wasn't really about whether this specific chatbot was giving incorrect info; it was about Canadian companies being held accountable for anything their chatbots might say.

  18. Groo The Wanderer

    Welcome to Canada where your publicly logged and trackable promises as a corporation are upheld in the courts in favour of the consumer.

    Desky Canada is about to learn the hard way about the difference between American and Canadian consumer rights if they don't fix their screw-up pronto on Monday morning...

    I'm tired of American shell companies thinking they can get away with the abuses they do in the US here in Canada. This time I'm putting my foot down and using this case as an example of case law that is NOT in Desky's favour. :D

  19. katrinab Silver badge
    Megaphone

    It will never[1] work

    This is not a case where a load of incremental improvements over the years will make this an eventually viable technology. The whole underlying premise of the technology is fundamentally flawed and cannot possibly ever work.

    [1] "Never" in this context means using improved versions of existing technology. It is possible that at some point in the future, there is a new discovery that makes actual AI possible. It is impossible to predict when or if that will ever happen, but we are not moving towards it at the moment.

  20. xyz123 Silver badge

    Please remember this is Canada, where they have already executed 13,000 people for "being poor" whilst claiming they "chose to end their lives with dignity".

    Canada is essentially a totalitarian regime now with quite-literal death camps if you are poor or elderly (they state you don't have sufficient quality of life, and so must be humanely destroyed).

    Seriously... google for Canada MAiD. It's frightening and horrific what's happened.

  21. Brewster's Angle Grinder Silver badge

    We need an icon for cynicism.

    "... your company "will end up spending more on legal fees and fines than they earn from productivity gains.""

    Yes, but that's spending money on the right kind of people (rich lawyers) rather than the wrong kind of people (poor people*). And, besides, the legal costs are mid-term - long after the person who fired the support staff has moved on; whereas the savings are in the short term and directly affect their bonus.

    (* I was going to say "unqualified" people. But chances are, they've got a degree. It's just that their degree isn't in law...)

  22. 0laf Silver badge

    Doesn't matter

    It doesn't matter now if AI works or not. The board is sold on the idea already; they see the dream of having no staff other than themselves and are being told by the AI salespeople that the dream is now possible.

    In reality the enshittification of services will continue. AI might get controlled on the customer facing side if mistakes cost money but internal helpdesks, you are all screwed. It doesn't matter if the machine that replaces you is useless and makes mistakes, there are no personal damages to claim.

    Got to wonder about the end game, though: if everyone outside every boardroom is replaced by a machine so no one is employed, then what value does the business have without customers?

    It's enough to make you consider conspiracy theories, only my opinion of the majority of the human race is so low that I don't think we're capable of running a conspiracy. Idiocracy, here we come.

    1. Herring` Silver badge

      Re: Doesn't matter

      "if everyone outside every boardroom is replaced by a machine so no one is employed then what value does the business have without customers?"

      Well, yes, capitalist greed will end up destroying the economy when nobody can afford to buy stuff. But in the meantime, the execs get bonuses for cutting costs. Have a nice day.
