The future of AI/ML depends on the reality of today – and it's not pretty

Companies love to use familiar words in unorthodox ways. "We value your privacy" is really the digital equivalent of a mugger admiring your phone. And "partnering"? Usually, it means "The one with more money is bribing the one with more cred." There is a more accurate tech use of "partner," as in the sort that comes with a …

  1. b0llchit Silver badge
    Mushroom

    Follow the money

    It is a great tool to make lots of money... for the speculative investors and CxOs able to over-fill their bullshit-bingo cards.

    The whole affair is executed using a standard Pyramid/Ponzi scheme operating procedure: Invest, hype, invest, hype and sell just before the inevitable crash.

    1. Groo The Wanderer Silver badge

      Re: Follow the money

      I.e. The usual "pump and dump" tech bubble. We're going to continue to have this problem as long as there are gullible "investors" dreaming of getting rich quick. It makes the "business" of too many companies their stock value instead of tools, technology, and infrastructure to support an actual product.

      They've suckered the whole world!

      1. 0laf Silver badge
        Holmes

        Re: Follow the money

        Someone is getting rich or this wouldn't happen repeatedly. But what we have are captains of industry and politicians who always want to be associated with the new shiny, but in reality can't wipe their own arse without help. And they will always be happy to be part of the next bubble.

      2. David Hicklin Bronze badge

        Re: Follow the money

        Also the "follow the sheep" principle where no company wants to risk being left behind even though it is clear it is a load of BS....hence companies spending stupid amounts on clouds, self driving tech (yeah that last 20% is gonna kill - hopefully financially not literally), AL/ML, agile (other whatever the flavour of the month is), blockchain, bitcoin, tulips etc etc

        1. Rich 11 Silver badge

          Re: Follow the money

          I do at least have a use for tulips.

    2. breakfast
      Holmes

      Re: Follow the money

      It's not only the investors and CxOs, it's also immensely profitable for one company making the hardware that everyone is buying to do it.

      Meanwhile the previous VR/AR hype is dying down, but one company made a lot of the hardware that everyone bought for that too.

      Before that the big hype was around Web3, Blockchain and cryptocurrency bullshit - in the long term the company it worked out best for was probably the one selling the hardware everyone needed to mine their bitcoins.

      Now I'm not saying all of this nonsense is Nvidia's fault, but if we're talking about following the money, it keeps ending up in the same place.

    3. Anonymous Coward
      Anonymous Coward

      Re: Follow the money

      It's very easy to mock the endless cycle of "Invest, hype, invest, hype".

      But it works.

      And that's why it gets repeated so often.

      Just take a look at the price of Bitcoin.

    4. Anonymous Coward
      Anonymous Coward

      Re: Follow the money

      That's not really how tech bubbles generally work. What happens usually is that a technology starts off pretty expensive, but pretty desirable...which requires a lot of investment. Then the technology matures a bit, not a lot changes, but it becomes an arms race...more hardware is required for competitive edge...faster, bigger, more! Then the hardware becomes much more efficient, and you need less of it to get the same results, so the cost of running it becomes lower...but we're still climbing, but not as fast, because the hardware isn't commodity hardware yet, you still need loads of cash to keep it going and to participate in the arms race...at this point the tech becomes better value and opens up to more people, which is more profit / value to the business...finally, the tech becomes so optimised and the hardware so cheap that literally anyone can do it for peanuts...this is where we "crash"...however, even though the stock has tumbled, it doesn't mean the tech is going anywhere...it simply becomes another technology that is everywhere, abundant and cheap.

      We are currently at the early "arms race" phase...which is why NVIDIA et al have had a massive bump...because they provide the hardware, but none of the actual resulting tech. Either the next generation of GPUs or the generation after is where things start getting a lot cheaper and the tech becomes "commodity" tech...because models and technology built on the current generation of GPUs will become trivial to run...the major thing we're looking for is VRAM being cranked up...because VRAM is currently what is holding things back in terms of making current AI tech commodity; you can't run the more sophisticated models on mid-tier or lower GPUs without sacrificing quality through quantization etc...when we reach about 32GB of VRAM on a mid-tier GPU or when some smart alec figures out how to use resizeable BAR in an efficient way, that's when we start to see the cost of AI come down...because the tech will no longer be about training newer, bigger, better models from the ground up (which is expensive), it will be about tuning and training existing models...which will become a lot easier to do when users have more VRAM.
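      For anyone wanting to sanity-check the VRAM point, here's a rough back-of-envelope sketch (weights only, ignoring KV cache, activations and runtime overhead, so real requirements are higher; the 70B model size is purely illustrative):

```python
# Rough weight-only VRAM estimate at different quantisation levels.
# Ignores KV cache, activations and runtime overhead, so treat it as a floor.
def weight_vram_gib(params_billion: float, bits_per_weight: int) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3

for bits in (16, 8, 4):
    print(f"70B params at {bits}-bit: ~{weight_vram_gib(70, bits):.0f} GiB")
# ~130 GiB at 16-bit, ~65 GiB at 8-bit, ~33 GiB at 4-bit - which is why 4-bit
# quantisation is what currently squeezes big models onto 24-32 GB cards.
```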

      So to summarise, crashes happen in tech when something stops being commercially viable in terms of massive profits and holding tech behind massive walls of money.

      Similar thing happened with the DOTCOM era. People that could build websites were hard to find and the knowledge was harder to come by, which meant the number of people capable of building websites grew very slowly...it was quite a technical thing back in the mid 90s to build a website. It may seem quaint now that we used FTP software and notepad to handwrite static HTML...but in those days, building simple sites was very time consuming...eventually we had Frontpage and Dreamweaver hit the scene and suddenly loads of people could build websites, and we had a massive spate of websites appearing...richer sites that took less time to make...eventually the cost of building a website became so cheap that even your local hairdresser had a website...it was no longer something that was considered "cutting edge" or "forward thinking"...it was just something that a business did...loads of people out there to do it, tech became easier, we had a crash...etc etc...but the technology never went anywhere...it just became commodity...fast forward to today, and anyone can learn how to build a basic, decent looking website in an afternoon...if you put a few more days in, you can learn a JS framework and use a database...shit you can use AI and never write a line of code...we've reached a point now where developing a website is so streamlined that we've had massive layoffs in the space because large businesses have worked out that you don't need dozens and dozens of web developers anymore...you can have just 2 or 3 and get at least the same result.

      The same commoditisation will happen with AI...right now, it's still quite early, the tech is fiddly and not really geared up for the man on the street to set up, use and train...we're at the "notepad and FTP" stage, which will look really twee in about 5-10 years time. Give it a year or so and we'll see some tools that allow your average "Deano" to load an existing model, drag in a bunch of data and click "train" to refine their own models for their own purposes...you won't need to be OpenAI to pull that off with thousands of GPUs...you just need to be "Dave" with a dumb idea, some data, and a fairly good GPU...we'll start to see a bunch of "Dave & Sons" local AI training businesses who can show up at your house, help you with your data and will make you a model specific to your purposes...there will be money here for a while...until someone releases an open source tool that any Tom, Dick or Harry can use to do the same thing...like a wix.com for AI...drag and drop, not amazing, but just right for your typical man on the street to get started with.

      It's got nothing to do with "pumping and dumping"...it has everything to do with how tech develops and moves on.

      So get off your armchair with your tin of Kestrel, take your tinfoil hat off and take some risks like the other people currently driving the market. There is money to be made, just don't be one of the soft bastards that is always late to the party...the music is playing now, it's thumping hard and unlike previous tech "bubbles" it's even easier now to get involved than it has ever been...the dotcom bubble was extremely difficult to participate in by comparison, because you couldn't just buy stock from your smartphone. You couldn't search the internet as easily to find out what's going on, how to use the tech etc etc...there was no github, stackoverflow, wikipedia, youtube etc etc...if you always feel like these booms and busts are always out of your reach and they are just "scams set up by the man" then you are either fucking lazy or thick as pigshit...pick one.

      I have invested in loads of so called "scams" over the years...not driven by the hype, but driven by what I know about tech "bubbles"...there are several rules out there that degenerate traders will repeat like the hooded weirdos out of Hot Fuzz..."Buy the rumour, sell the news", "Buy fear, Sell Greed"..."THE GREATER GOOD!"...but the only rule you really need to understand to not get your fingers burned is "Markets come and go" because they do, they always do.

      1. Groo The Wanderer Silver badge

        Re: Follow the money

        You're assuming the technology is capable of doing what is advertised. LLMs aren't.

        1. Anonymous Coward
          Anonymous Coward

          Re: Follow the money

          I'm not making any assumptions. The internet is used for things that nobody imagined back in the 90s. It's a completely different place.

          Who in the late 80s imagined YouTube? Napster? Smartphones?

          Come to think of it, during the Napster days, who imagined Spotify?

          Who on the team that designed the first particle colliders envisaged MRI scanners?

          Probably very few.

          Who knows the directions LLMs/diffusers/AGI research will take? I certainly don't, you definitely don't.

          All I know for sure is that my job sure as hell wouldn't exist if tech bubbles didn't happen and fuck only knows what I'd be doing if during my late teens I had to get an actual job.

          Tech rarely ends up being used for what the inventors envisioned...I'm pretty sure the inventors of the Internet didn't have "piracy" and "doom scrolling TikTok" on their list of potential future outcomes.

          The same will be true of LLMs etc.

          Is there an insane amount of hype around AI? Fuck yes! Is it a pointless technology? Who knows, depends on the user of it I guess? Is it here to stay? Hell yes.

          The major difference this time round is quite a lot of the development is being done in the open, there are far fewer gatekeepers this time round.

          The dotcom boom was extremely abstract for a lot of people because the internet wasn't immediately accessible for most of the planet, that took many years. With AI, you can try and use it now...for free. Anywhere...the barrier to entry is extremely low which is why the pace is crazy.

          I'm personally hearing the phrase "ask ChatGPT" being picked up by more people in a shorter space of time than "Google it".

          Whether you think AI is a worthwhile tech is irrelevant, because there are legions of people out there that think it's fucking awesome and game changing...and for a lot of folks, it genuinely is.

          For quite a lot of us in tech, it doesn't matter if the tech is worthwhile or not...the net result is more kit to be deployed and more infrastructure to be built, new models to be tested, refined, tuned, automated...unless you're a garden-variety software "developer" building database skins with some kind of crappy JS framework; for them the countdown has started. Which in itself could be a good thing...because just like LLMs, the typical dev out there can't write good code or build compelling user experiences; they just tweak and extend boilerplate with a template over the top...there are models out there that can produce code that is equal in quality to a typical database skinner (in some cases much better) and in the hands of a good developer can produce excellent code...and it can do it in a tiny fraction of the time.

          The internet in its early days was fairly crap, it was an afterthought on a lot of operating systems and was bolted on...it was being used on machines that weren't really designed for it...AI as it is now is not that far removed from that situation...we're training and using models on hardware that wasn't really designed for it...but where does it go when we do get purpose-built hardware and software for it? It can only improve at this point.

  2. Fogcat

    A long read but it gives you a good view of the approach of the "AI bros"

    https://www.nplusonemag.com/issue-47/essays/an-age-of-hyperabundance/

    1. nonpc

      Alas I tried reading it but I am far too old for the gushing verbiage. A summary would be useful.

      1. yoganmahew
        Coat

        Somebody should invent a tool to do that...

  3. Anonymous Coward
    Anonymous Coward

    AI isn't always AI, same as a vacuum cleaner isn't always a hoover

    Generative AI is (for now) hot garbage. I've got the allegedly super-duper license and it's like having a malicious sociopath as your PA. Give it some data (always assuming it's going to snaffle it, so you have to anonymise and declassify first) and it'll do some very clever analysis for you pretty quickly. But then you notice that if the AI disagrees with your prompt it'll basically ignore it and do what it thinks is best. Then if you correct it, it'll deny the error and gaslight you that your prompt was what it did anyway. Also, if your data isn't perfect, say a sensor failed and you've no data for a short period, it'll hallucinate data to fill the gap. It won't tell you it's done this, and the false data is plausible and hard to spot unless you've already identified the gap.

    So you've got this supposedly wonderful tool that you don't trust, so you can only use low-value data within it; it doesn't do what it's told and lies about changes, and it hallucinates at the drop of a hat in ways that can be very hard to spot. So if you do use the tool you need to prep it, then spend time interrogating the results to ensure correctness.

    This is where driverless cars fall down too: unless I can trust the tool to take over that portion of the work and IT be responsible, then it is actually worse than doing it myself. If I need to supervise a driverless car then it's not autopilot, and (studies have shown) this supervision status is the worst of all worlds, with the machine not trusted but in control and the human half asleep from boredom from doing nothing.

    So is all AI crap? No, not at all. The non-generative stuff seems to be very useful, but it's not very exciting so doesn't get the headlines. Those AI/ML tools that are predicting industrial component failures, or estimating erosion patterns, or highlighting potential illness from scans. All that stuff, brilliant. Generative, shit, less than useless.

    1. cyberdemon Silver badge
      Terminator

      Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

      Even the non-generative AI has serious drawbacks, such as the same privacy issues you mentioned, and high error rates when presented with an input not well-represented in the training data.

      "Black person = Criminal", computer says you have cancer but it's just a piercing, military drone thinks you are the enemy, etc.

      Worse, I predict that future non-generative AI may be trained on fake data from generative AI, either deliberately (to quickly create fast embedded classifier models) or accidentally due to data pollution.

      1. Anonymous Coward
        Anonymous Coward

        Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

        AI poisoning is already a thing. With all the AI-generated trash content flooding the web, it's inevitable that AIs will start to eat their own shit. It's an AI centipede.

        1. hoola Silver badge

          Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

          When you look at what it is being trained on it is no surprise that it is crap.

          90% of the content on the Internet is total crap and this is being hoovered up as a source to "Train AI".

          That the people making these decisions cannot see that the outcome is yet more crap is rather worrying. On the other hand they are only looking at the value of their shares, bonus & bank balance.

          I had the misfortune to try to get a replacement SIM card. The chat "powered by AI" was incapable of figuring out "I need a replacement SIM card", "I need to change a 4G SIM card for a 5G SIM card", "Speak to an agent" and many other combinations.

          My son was still wrestling with it whilst I just phoned to order the new card.

          It just took us in a loop of "Your SIM card has been stolen", "you cannot use your phone", "you would like to upgrade your handset".

        2. Anonymous Coward
          Anonymous Coward

          Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

          I asked Google CEO Demis Hassabis if, as CEO of Google, he, Demis Hassabis, thought that AI poisoning was a thing and he replied: "My personal opinion as just plain old Demis Hassabis, when I'm not wearing my Google CEO hat, is that nobody would ever do this"

      2. mcswell

        "military drone thinks you are the enemy"

        I posted this link to _Into the Shop_ in response on another thread, but here I go again:

        https://archive.org/details/Fantasy_Science_Fiction_v026n04_1964-04_PDF/page/n91/mode/2up

      3. Michael Wojcik Silver badge

        Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

        I predict that future non-generative AI may be trained on fake data from generative-AI

        For example, in analyzing medical images there's already a lot of research on the use of synthetic data to train models. You can find numerous primary papers and reviews on this subject from the past few years available online.

        Analyzing scans has been one of the hot topics in medical applications of ML for at least a decade, and it's had very mixed results. It's been quite successful (in terms of good scores — .95 or higher — on various metrics, such as precision and recall) for a number of retinal diseases, for example. In the past year or two, models for analyzing mammograms seem to have gotten much better. On the other hand, the last I checked, systems for scanning lung films were still pretty much hit-or-miss. And Derek Lowe had a post a few years back about the vast number of utterly crap papers (methodologically unsound at best, fabricated at worst) on the topic, which flooded the field with useless "research".
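        (For anyone not steeped in the jargon: precision and recall are just ratios over a model's hits and misses. A minimal sketch, with made-up counts purely for illustration:)

```python
# Precision = TP / (TP + FP): how many of the flagged cases were real.
# Recall    = TP / (TP + FN): how many of the real cases were flagged.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# Hypothetical screening run: 95 true positives, 4 false positives, 5 missed cases.
print(f"precision = {precision(95, 4):.2f}, recall = {recall(95, 5):.2f}")
# Both land around the 0.95 mark mentioned above.
```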

        Getting back to training data: For some types of ML systems, such as shallow CNNs and RNNs or SVMs, and some applications, there's a reasonable corpus of decent-quality data available. For others, existing real data isn't sufficient. One paper I saw pointed out that a lot of clinical MRIs, for example, aren't very useful as training data for many applications because doctors specify parameters (slicing and so forth) which need to be taken into account when examining the results; you can't just lump it all into a training dataset and get good results. In other cases the clinical data is too noisy, or it's biased against negative cases so it overfits, or whatever. And so a number of researchers are trying to generate artificial exemplar data to train or tune models to be better discriminators.

        It's a tricky area, and caution is definitely required.

      4. herman Silver badge

        Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

        Well, the bright side is that you won’t have to worry about a military drone thinking you are the enemy, for very long.

    2. Anonymous Coward
      Anonymous Coward

      Re: AI isn't always AI, same as a vacuum cleaner isn't always a hoover

      Yeah, but is it us that is ultimately going to benefit from generative stuff?

      Certain types of screwdriver head exist only because they improve accuracy and automation in mass production. For a human, the difference between a phillips head and a hex head is negligible...but for a machine producing millions of widgets...it's a massive difference.

  4. Zippy´s Sausage Factory
    Facepalm

    Perhaps actually spending those billions will help? Perhaps not. Microsoft is already spending close to $19 billion a quarter on AI/ML infrastructure, but recently had to officially remind people that its AI wasn't entirely trustworthy.

    I don't know how much money Microsoft has, but you can't keep that level of burn going very long without severe questions being asked. If I were a Micros~1 shareholder, I'd be writing some rather incendiary letters to Redmond asking why they're setting so much money on fire for a product that doesn't even give the right answer.

    1. Lomax
      Pint

      > Micros~1

      Nice one, have a beer!

      1. jake Silver badge

        Micros~1 has been a visual jargon pun for just about three decades now.

        I first saw it on Usenet back when Win95 was still in Beta. Unfortunately I can't remember who came up with it, and sadly the gookids have buggered up the irreplaceable DejaNews archive, so whoever it was has probably been lost to history. It is possible, maybe even probable, that it was a collective "ah-hah!" moment invented by several people nearly simultaneously.

        1. yoganmahew

          Ah, but did you know this is where it came from:

          "The joke "Micros~1" instead of "Microsoft" is a play on words that references the company's name and the concept of "micro" being one less than "macro." It also humorously implies that Microsoft is somehow "less than" or "inferior" to some other, unspecified entity.

          The exact origin of this joke is difficult to pinpoint, but it likely emerged from online communities or internet forums where users often engage in witty banter and wordplay. As the joke gained popularity, it spread through social media and online discussions, becoming a recognizable meme or inside joke among tech enthusiasts."

          hallucinates Gemini :(

          At some unspecified point in the not too distant future, that will be the truth and those of us who remember will be 'wrong'...

          1. Paul 195

            I see you got downvoted by someone who didn't read your comment to the end.

  5. jake Silver badge

    The current AI Winter ...

    ... actually started about four years ago.

    Unfortunately, people were stuck at home playing with themselves (Covid), and didn't notice. The marketards took advantage of this and suckered billions out of the investors. The investors, not wanting to take a bath (who can blame them?) have continued to prop this up artificially, all the while keeping their fingers crossed that this "new" technology will somehow magically start producing something (anything!) useful.

    The coming crash will be spectacular. Totally predictable, mind, but spectacular nonetheless.

  6. Magani
    Linux

    Decisions, decisions...

    Linux is slowly getting better; Windows is slowly getting worse.

    At what point does one change?

    If there is no way to remove Recall from Win 11, I think I'll stay with 10 until its updates run out, then hope that all the 'Windows only' applications have a better Linux version.

    And before you bother to reply: Wine doesn't do it for me, unless it's a nice Coonawarra red.

    1. jake Silver badge

      Re: Decisions, decisions...

      "At what point does one change?"

      25 years ago would have been nice. The learning curve would now be some 24 years in your rear view mirror. As would your near constant grumbling about your OS of choice.

    2. Doctor Syntax Silver badge

      Re: Decisions, decisions...

      For the average Windows user who wants to stick with Windows, the best thing that could happen would be for Linux market share to start increasing exponentially with a doubling period of about a year.* At some point it might scare Microsoft into listening to its customers.

      * Yes, I know it would really be sigmoidal growth but the marketroids who react to such things don't grok that.

    3. Anonymous Coward
      Anonymous Coward

      Re: Decisions, decisions...

      You can just disable Recall. If the settings app doesn't appease you (or if updates end up automatically turning it back on as they're known to do) you can hard disable it in Group Policy. It's what I do, not only for the forced updates, but for all the LLM slop MS puts in Windows. They *have* to respect Group Policy, that's the settings for real companies and governments, not the fake settings app they dangle in the average joe's face. They cannot automatically switch settings back, and they have to allow anything and everything to be disabled from there. Seriously, if you're a worried Windows user, look into Group Policy, it'll make all your worries go away. It's actually an incredibly powerful tool and a good example of how nice it is when the entire OS is made by one company and everything is tightly integrated.
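      (If it helps anyone scripting this rather than clicking through the Group Policy editor: a minimal sketch of setting the reported Recall policy directly in the registry. The key path, value name and semantics below are assumptions taken from public write-ups, not an official recipe, and may change between Windows builds; run it elevated and at your own risk.)

```python
# Hypothetical sketch: write the registry-backed policy that reportedly sits
# behind the Group Policy Recall switch. Path, value name and meaning are
# assumptions from public write-ups, not a documented Microsoft API.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsAI"  # assumed location

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    # 1 is reported to mean "do not save Recall snapshots" (assumed semantics).
    winreg.SetValueEx(key, "DisableAIDataAnalysis", 0, winreg.REG_DWORD, 1)
```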

      That said, once I get a better storage solution (like a NAS or something) I'm probably switching back to Linux. I had a horrible experience using Linux for six years straight a decade ago, but that was a decade ago and things seem to be significantly improving; a lot of my criticisms have since been fixed. Probably going to go with some immutable distro, maybe Vanilla OS, openSUSE Aeon, or whatever Manjaro decides to call their immutable spin. Still need to evaluate, still waiting and watching the ecosystem as it slowly improves. I think my final falloff point will be when the dreaded Windows 12 is eventually announced, which I can only imagine how badly they'll cripple that one. I know they're very eager to completely kill the start menu for more LLM slop. Windows 11 will absolutely be my last version of Windows, whether I want it to be or not.

      1. Michael Strorm Silver badge

        Re: Decisions, decisions...

        > They *have* to respect Group Policy, that's the settings for real companies and governments [..] They cannot automatically switch settings back

        Says who? I'm sure that before the Windows 10 forced-upgrade debacle a decade back, it would have been assumed that even MS wouldn't undermine the trust that Windows' security was reliant upon by mislabelling W10 upgrades as essential security ones, nor override people's explicitly-expressed desire *not* to upgrade their computer to W10 (and screw it up in the process), nor do all the other bullshit they did back then - and have done since. But they did it anyway.

        MS will do what they want to who they think they can get away with doing it to, I wouldn't rule out them doing anything these days.

        1. Michael Wojcik Silver badge

          Re: Decisions, decisions...

          Yes, they'll do what they think they can get away with. The difference here is that the legal departments of large customers will have An Opinion about Recall, where they did not, particularly, about forced updates. (And in fact would have been reluctant to advise turning those off, since that looks like an assumption of liability.)

          I expect pressure from large customers will mean there's an off switch for Recall in GP for quite a while. All someone has to do is point out that Recall databases are subject to discovery and there will be a big fat Nope from Legal.

          1. O'Reg Inalsin

            Re: Decisions, decisions...

            So premium Windows for business will have Recall off by default, for real. Meanwhile consumer Windows will be supplying a constant flood of training data for AI.

            1. Ramis101

              Re: Decisions, decisions...

              That is kinda where we are right now with Win 10 LTSC versus Win 10 "professional"

    4. NickHolland

      Re: Decisions, decisions...

      Unfortunately, I disagree with the "Linux is getting better" part.

      Linux has been taken over by people who hate Windows, but are intent on re-inventing it poorly.

      The question, "Should this be done?" is not asked -- just feature parity, which appears to be the definition of "better" for a lot of people. I'm sure it won't be long until someone adds AI to Linux, (probably systemd... "look! Smarter starup!"), but says, "This time, it is done right" (and they'll say it over and over).

      I love Unix, and a Unix variant is my daily driver, but ... not Linux.

      1. Peter-Waterman1

        Re: Decisions, decisions...

        Get back to work Bill

      2. Anonymous Coward
        Anonymous Coward

        Re: Decisions, decisions...

        I have to agree - and I have been using it for 30-odd years. Peak Linux was KDE3 in the mid-2000s, when Windows was really struggling.

        Windows seems to be entering one of its regular tick-tock cyclic downswings - in this case by becoming actively user-hostile, more than defective.

        Wine has made remarkable strides, and (depressingly) I now run a lot of Windows sw perfectly happily under it, often when the Linux version can't be installed due to some dependency or other. I would not be surprised if the end of Win10 sees my Win laptop running Windows programs under Wine on Linux.

        Sadly both increasingly suffer from apps adopting random and horrid UI design, and gratingly slow response probably due to cross platform javascript UI toolkits. Oh well, just think of Muddy Mudskipper, and burrow into the slime until the rains come again.

      3. Michael Wojcik Silver badge

        Re: Decisions, decisions...

        TBF, systemd and other userland foolishness is not part of Linux per se, and there are Linux distributions without that crap.

        But it's true that the big, popular, easy-for-non-experts Linux distributions, other than Android (which has its own problems), are heavily under the thumb of those who insist on turning userland into a complete mess.

    5. Cmdr Bugbear

      Re: Decisions, decisions...

      Fair enough regarding Wine's limitations. How about running a Windows VM on Linux only for those mission-critical apps?

      The Recall app's fuckery shouldn't be able to cause too much harm if you limit its exposure to only the bare minimum of your digital life.

    6. Nematode Bronze badge

      Re: Decisions, decisions...

      At what point does one change? When Linux stops being a Distrofest, when fixing problems can be done without needing a terminal window and sudo..., and when apps which people want are provided on Linux. I have 15 apps which don't have a Linux version (and are unlikely ever to), don't have a browser interface, and for which there is no Linux equivalent.

  7. iron
    FAIL

    > Googling the question "How much has Google invested in AI?" that same AI, now baked into the search engine, reports that "In April 2024, Google CEO Demis Hassabis...

    Actually it doesn't. It shows a Bloomberg article that says DeepMind CEO Demis Hassabis said those things.

    1. OhForF' Silver badge
      WTF?

      >"In April 2024, Google CEO Demis Hassabis said that Google would spend more than $100 billion." Direct cut and paste, dear reader<

      Are you saying the author made a mistake or lied to us?

      Even if you paste the same question into Google and get a different result now, it is quite possible that the automated answer to that question has changed since the article author asked it. Right now I got "The chief of Google's AI business said that over time the company will spend more than $100 billion developing artificial intelligence technology" as the start of the first result (a link to Bloomberg is below that).

      You do not expect an AI based tool to be consistent and give the same answer every time, do you?

    2. sedregj Bronze badge
      Gimp

      I think you need to go to https://gemini.google.com

      Anyway it appears to have been "fixed" now:

      "Google has been a significant investor in artificial intelligence. 1 While the exact figures are not publicly disclosed, it's known that the company has allocated billions of dollars to AI research, development, and infrastructure. 2 This includes investments in ..."

      etc. A marketdroid used ChatGPT to get a quick writeup done: "Write me a puff piece with no concrete facts about ..." 8)

  8. Doctor Syntax Silver badge

    "may risk a new AI winter"

    Is "risk" quite the right term here? How about "A good chance of an AI winter"?

    1. b0llchit Silver badge

      The AI predicts: Weather will be colder than usual. You should wrap the coolant pipes and prevent any bare silicon from showing. The climate will be inflating itself to adjust the temperatures at reduced pressure.

    2. Version 1.0 Silver badge
      Meh

      When AI was originally created, virtually all programmers worked in an environment of "Calculate the results, then verify the results after checking the situation", but these days AI seems to just "Calculate the results and make sure that they look acceptable"... and often probably adds "store the question, the results, and the user's ID."

      Programmers have always worked accurately to get paid for their work. AI seems designed these days to generate income.

  9. Anonymous Coward
    Anonymous Coward

    Good for fun

    My friends and I regularly send each other utterly bizarre and hilarious AI generated images using more and more bizarre inputs. "Donald Trump riding a giant bunny whilst drinking a beer wearing a Kamala Harris T-Shirt".

    This is utterly useless and a complete waste of electricity and compute power. It is a gimmick.

    AI has been sort of useful for some coding. "How to iterate over an array in JavaScript" will generate some useful examples without having to scour StackExchange. But did I ever have an issue scouring StackExchange? Nope.

    And the worst is that AI isn't even "AI". It is ML only.

    1. Doctor Syntax Silver badge

      Re: Good for fun

      "useful examples without having to scour StackExchange."

      It's just that the AI tool has scoured StackExchange for you. But has it taken not of the ratings of the solutions offered?

      1. Reaps

        Re: Good for fun

        "But has it taken not of the ratings of the solutions offered"

        are you sure your not a bad AI/ML?

      2. Michael Wojcik Silver badge

        Re: Good for fun

        "Copilot, show me how to do something without having to understand it."

        Yes, it's the triumph of the Copypasta Programmer and Cargo-Cult Coder. Exactly what the software industry needed.

        1. yoganmahew

          Re: Good for fun

          But Michael, copypasta is what we already have. Now we get EFFICIENT copypasta! Won't somebody think of the bonuses?

    2. breakfast
      Headmaster

      Re: Good for fun

      I actually think the Stack Exchange thing is an interesting element to all of this - the first people who picked up on AI being potentially useful were developers, and I think it has worked better for us because a) code either works or it doesn't and b) most of the code that LLMs could guzzle from the internet, particularly going through Github and Stack Overflow works to some degree. That could give developers a very inflated idea of its ability, and then it proves disappointing when asked to analyse textual language (which is far more complicated than programming languages and subject to none of the same bounds) from the wider internet where most people are wrong about most things most of the time. There isn't a textual equivalent to code that runs, and that means for broader linguistic tasks LLMs are necessarily less reliable and their outputs are far harder to evaluate.

      1. Lomax
        FAIL

        Re: Good for fun

        I don't think that's true; in my admittedly limited experiments with ML generated code I've repeatedly been given code that just plain does not work. Often the reason has been the "AI" dreaming up function names that do not exist or are only provided by some (unknown) external library, or that the code does something completely different to what was asked for - but sometimes the code has contained bona-fide syntax errors as well. I would not trust any code provided by one of these systems, other than perhaps as a very crude starting point for writing my own. In one particular case where I was pairing with someone using GPT to generate code we ended up spending significantly more time trying to figure out why the code it offered up wasn't working than it would have taken us to just write the damn thing ourselves.

        1. breakfast

          Re: Good for fun

          Well your limited experience is better than my borderline-zero experience. I'm just trying to give my fellow devs who keep yelling about how good it is the benefit of the doubt. I feel like there must be a baby in this bathwater somewhere, but I'm too busy with things that matter to me in the real world to ferret around for it.

          1. doublelayer Silver badge

            Re: Good for fun

            It must at times come up with something useful, defined as something as correct as it needed to be. However, when I or others have used it, I have seen only outcomes where it messed up or where it did something that would have been relatively easy to do without it. I wonder how often it did mess up, but the thing that it was doing was unimportant and nobody cared that it was wrong.

        2. PTW

          Re: Good for fun

          IANA Coder, so I tried to get GPT to alphabetically sort two two-column files, then merge them and remove duplicates, using Python. IIRC Python has some built-in functions for such things.

          GPT was wrong, over, and over, and over, again.
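          (For what it's worth, the task really is only a few lines of built-ins. A minimal sketch, assuming two tab-separated two-column files; the filenames and the helper name are just placeholders:)

```python
# Merge two two-column files, drop duplicates, sort alphabetically.
import csv

def merged_sorted_unique(path_a: str, path_b: str) -> list[tuple[str, str]]:
    rows = set()                              # a set drops duplicate rows for free
    for path in (path_a, path_b):
        with open(path, newline="") as f:
            for row in csv.reader(f, delimiter="\t"):
                if len(row) >= 2:             # skip blank or short lines
                    rows.add((row[0], row[1]))
    return sorted(rows)                       # sorts by first column, then second

for col_a, col_b in merged_sorted_unique("a.tsv", "b.tsv"):
    print(f"{col_a}\t{col_b}")
```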

      2. Michael Wojcik Silver badge

        Re: Good for fun

        a) code either works or it doesn't

        Programs either halt or they don't. Which is to say, in the general case, you don't know whether code works, and neither does anyone else. That's why thinking about it before, during, and after writing it is crucial.

        and b) most of the code that LLMs could guzzle from the internet, particularly going through Github and Stack Overflow works to some degree

        Yes, the entire software industry "works to some degree", which is why it's an utter fucking disaster. Chainalysis reported over $1B USD transferred to ransomware actors last year. Over 26,000 CVEs published so far this year. 172 CISA KEVs published in the last year.

        Software quality is abysmal, and pasting in some code that seems to work in casual testing, or "works to some degree", without understanding it, is a big part of the problem. (So, of course, are importing a pile of code out of public repositories without understanding it, and creating huge attack surfaces by adding unnecessary features, and many other reprehensible practices of contemporary software development.)

        1. Anonymous Coward
          Anonymous Coward

          Re: Good for fun

          Don't worry grandad, if you shake your fist hard enough that cloud is bound to start paying attention!

          1. Tridac

            Re: Good for fun

            I hope you will feel the same way when the aeroplane you are in falls out of the sky because of a software bug. Mickey Mouse coding would not last five minutes for real-time embedded work...

            1. Anonymous Coward
              Anonymous Coward

              Re: Good for fun

              "I hope you will feel the same way when the aeroplane you are in falls out of the sky because of a software bug. Micky mouse coding would not last five minutes for real time embedded work..."

              Why worry about the coding quality, when you can just put your Mickey Mouse Software (MMS) onto an aircraft, undocumented, and simply not tell the pilots that MMS will take control when it sees fit and shove the nose down hard, repeatedly, and almost certainly when you're close to the ground? Although to be fair, Boeing have now gone back to traditional poor design and assembly as fault causes. The latest is failing thrust links between engine and wing on the 777X. How's that for fucking basic screwup? The two critical bits that ensure the engine pushes the structure of the 777X through the air have either been poorly designed and/or poorly made, and they're fracturing. And reportedly another Boeing self-certification triumph.

              I've been partially sympathetic to Boeing in spite of the MCAS mess up, on the basis that most Boeing flights start, fly and finish without drama. However, the continuing litany of failures mean I've now come to the conclusion that the company and its products are not redeemable in any reasonable time scale. I'll be checking what aircraft I might be flying on, and if it's Boeing then I won't be going.

        2. Tridac

          Re: Good for fun

          Had a look at some of the systemd code at github. One module pulls in nearly 50 header files and the file is ~900 lines of code. Almost no comments, and it would never pass muster for clients I've done work for. And where are all the design docs that describe each module, what it does, how it works and where it fits in the hierarchy? Very unprofessional in design and coding-standard terms. Has ruled out Linux for any serious work here. How can any of it be truly verified? Too much money, corporate interest and not enough software engineering.

        3. Mateusz

          Re: Good for fun

          "Software quality is abysmal, and pasting in some code that seems to work in casual testing, or "works to some degree", without understanding it, is a big part of the problem."

          I can clearly see this in large corporates. Quantity over quality. It works in 80% of the test cases? Move it to production, because we promised it to the execs by yesterday. IT taken over by non-technical, money-hungry managers with overhyped confidence. They and the execs earn more, as do some other non-technical bystanders with fancy corporate titles. The actual tech people have no motivation to do it better or upskill, and logically there is no reason to. They can become useless managers, get more money, push rubbish projects to production and dance in the hype of massive success. End users in the corporate are suffering but too scared to report higher up that the new feature is rubbish. Calls to the helpdesk are growing, so a new manager or consultant is needed to help with that. And so it goes.

      3. druck Silver badge

        Re: Good for fun

        a) code either works or it doesn't

        Wrong, there is far more to code than just whether it produces the right output; what about efficiency, maintainability and security?

  10. Anonymous Coward
    Anonymous Coward

    lying stupid humans are enough of a problem in this world.

    now we have to deal with lying stupid humans believing lying stupid ML, and then using the lies of the ML to make more lies.

    it's nothing but lying trumpbots all the way down

    1. Anonymous Coward
      Anonymous Coward

      I think all politicians lie, you don't have to single out any particular one unless you just feel like being divisive, which the world really doesn't need more of.

      1. Lomax
        Flame

        You started the division by giving air to that clown in the first place - bit rich to complain when others point out that every word that comes out of his mouth is an inflammatory lie. Same holds true for his army of bots. #LockHimUp

    2. Ken G Silver badge
      Trollface

      I don't believe you.

  11. TheRealRoland
    Unhappy

    I feel I'm gonna get demerits based on the fact that I read this and similar articles. My "social" score at my company will take a hit.

    1. AVR Bronze badge

      Worse, you read the comments. Proof that you're antisocial.

  12. Anonymous Coward
    Anonymous Coward

    I don't think we're getting another AI winter. AI winters came from people promising AI of *any* kind, and what got delivered turned out to be very stupid, very domain-specific expert systems that didn't really do anything practical. Like, uh, I think someone made a medical expert system that some doctors used, not exactly something you can slap on billboards as the next best thing since sliced bread. LLMs actually do have some use, albeit not as much use as everyone thinks. These use cases will expand as people figure out how to get them to do more things, which will be a slow and steady climb up a mountain; half-way through, the bubble will pop and investors will stop caring, but hobbyists and dedicated companies will still continue the crawl, still making new things that people genuinely find useful outside of the hype. I think we managed to break out of the AI winter cycle, for the most part.

    1. Michael Strorm Silver badge

      > hobbyists and dedicated companies will still continue the crawl

      I'm not saying that I necessarily agree with the prediction that there *will* be an AI winter, but assuming that the bubble *did* pop as you suggest...

      Given that the current generative AI boom has been built off the backs of huge multinational corporations with exceptionally deep pockets throwing massive amounts of resources at it- i.e. subsidising it until (they hope) it becomes profitable- and given that it remains reliant upon that situation continuing... how far do you think those "hobbyists and [small] dedicated companies" are going to get if they're no longer able to piggyback onto those billions of dollars of infrastructure and have to provide it all themselves?

      1. Anonymous Coward
        Anonymous Coward

        By renting general compute power, and because the chips and training methods will get easier and cheaper. Before LLaMa it was unthinkable to even be able to train an LLM; now you can custom-train your own to do whatever you want. You can't easily make the entire base model yet, but I'm sure that'll be doable in a few years. By the time the bubble pops, AI enthusiasts will be self-sufficient enough. Plus, it's not like megacorps will completely drop AI after the bubble pops, they'll just care a lot less about it. But they're definitely going to keep maintaining their code assistants and whatnot.

        1. Michael Strorm Silver badge

          How suited is "general compute power" to AI use, and more importantly, who's going to pay for it?

          Who and what is going to continue pushing down the cost of the chips if the massive tech giants who were driving most of those sales in the first place are no longer interested?

          How is "maintaining" those existing assistants- and no more- likely to do anything other than mantain the then-current state of play, frozen around the time they stopped actively developing them yet provide a foundation for others who rely on them to move forward?

          1. Anonymous Coward
            Anonymous Coward

            I'm not sure how else to explain this. We didn't stop using the internet after the dotcom bubble burst, I'm curious why anyone thinks this will be any different.

            1. Michael Strorm Silver badge

              > I'm not sure how else to explain this.

              I understand what you're trying to say, I just think that you're wrong.

              The dotcom bubble primarily involved a load of massively-overvalued companies and business models that were too optimistic to survive in the short term (*) and it's arguable that the crash damaged the stock market more than the Internet itself, which wasn't reliant upon that.

              That's not really the same situation as the current AI-related one, which is reliant upon- and being effectively subsidised by- huge companies pouring in billions.

              Even the cost of running the infrastructure and supplying services as it is at present is still, as far as I'm aware, hugely expensive and (again) being subsidised.

              When that's taken away, who pays for the cost of running it, let alone developing it further?

              If development of the Internet *itself* past the late-dialup era (i.e. the time of the bubble bursting) *had* been reliant upon dotcom cash and valuations, it *would* likely have had a much more negative effect. But, as I said, they weren't the same situation.

              (*) Someone pointed out at the time- very presciently in hindsight- that these things (e.g. online shopping and online everything else(!)) *were* indeed likely to become a part of everyday life, it just wasn't going to happen overnight, i.e. in time for all those speculative companies burning through investor cash to mature and become profitable before they went under.

          2. doublelayer Silver badge

            "Who and what is going to continue pushing down the cost of the chips if the massive tech giants who were driving most of those sales in the first place are no longer interested?"

            Moore's law, same as before. One of two things will happen to the chips involved, mostly Nvidia's products.

            1. Someone else will come up with a use case for tons of parallel compute, so they'll keep buying those chips. Nvidia will continue to receive money and invest it into faster chips. Those who want to use the chips for LLMs will be able to buy them.

            2. Nobody will find any other uses for those chips. The price will fall, and improvements in manufacturing will make it easier to keep making them. Those who want to build LLMs will use more than one of them.

            Option 1 is a lot more likely. Even if there was an AI winter, it's not like everyone everywhere would stop developing something around them, and even if they did, progress could still be made on those tools. Whether that progress will ever get something that can be trusted is less clear.

            1. jake Silver badge

              Option 0) We'll all buy them to help keep warm during this AI winter.

            2. Michael Strorm Silver badge

              In theory anyone can develop anything.

              In reality, if virtually all generative AI so far has been built off the backs of - and is reliant upon - huge companies investing literal billions in it, how likely is it that small, isolated developers are going to continue to do so if they have to replace that level of investment themselves - and, let's be honest, won't be able to?

              Moore's Law isn't magic; it was still commercially-driven. It just so happens that cheap, general-purpose computing power for less money is something that pretty much everyone wanted, was willing to pay for and hence provided the profit and impetus to drive the investment in development.

              For (1), maybe someone will find another use for parallel compute that somehow incentivises the continued development of those chips? Maybe.

              For (2), if no-one else finds uses for the chips, why would the price fall? It might deflate the premium manufacturers are currently able to charge for existing chips made in existing facilities, but if people are only buying them cheap because there's no other demand for them, there's not much to drive the investment needed to make "improvements in manufacturing". More likely they'll become niche products and stagnate.

              1. doublelayer Silver badge

                Large amounts of parallel compute is not a new thing. Before LLMs, there was lots of other machine learning work which people will keep doing. There's also cryptocurrency mining which has fallen lower on the hype list but there are still plenty of people spending lots of money doing it. There are animation studios that use lots of GPUs who will fund manufacturing advancements. If all of those collapse as well, there are always gamers who will need something to drive 8K displays at 144 Hz (sure, they're not doing it now because the chips can't manage it, but I know some who will if they can and I'm not sure if they'll stop there even though it seems overkill to me). GPUs are popular because they're almost as versatile as CPUs. Everyone needs fast single-threaded performance from time to time, and although not everyone needs reasonably fast parallel compute, so many different use cases can benefit from it that they're going to keep making new chips to do it.

                The one area where we agree is that individuals won't be building new raw models like GPT4. Those require too much training data and training to go somewhere. They will certainly take the ones that have already been trained and keep building things on top of those, though. I'm also not convinced that an AI winter will mean that nobody is doing LLMs anymore. I can easily see some companies deciding that they'll never make their money back and they're out, but I don't think we'll get them all to stop. Somewhere, a company will decide that there are enough clients who want to fire their remaining customer service people and are willing to pay for the LLM that does it, and those people will keep spending.

  13. HuBo Silver badge
    Happy

    Ogres are not smart

    "its [...] gargantuan¹⸴² appetite for data cannot be safely supplied"

    Pretty much sums it up imho. Force-feeding this AI tech, relying on ever bigger machines, with even more data, won't all of a sudden make it "emergently" smart. It's like rote learning of more and more religious texts, the Bible, the Quran, etc, by more and more pupils, ... but with stochastic next-word recall.

    Yann LeCun put it quite well in a momentary spark of enlightened reason when he said: "So clearly the kind of intelligence that is formed by this is not the type that we observe in humans" (in: "Yann LeCun and Andrew Ng: Why the 6-month AI Pause is a Bad Idea", youtube, transcript) -- with the use of the word "intelligence" to be understood in the broadest of senses (including none).

    It seems then that new ANN architectures (or something else entirely) have to be developed to keep this field in motion (long-term), and avoid terminal winterization as last summer's happy novelty item, for which interest has now waned.

    ¹ Note: Rabelais' Pantagruel (son of Gargantua) is of note as well in this here context, clearly.

    ² Note: See also the Verziau of Gargantua monolith, for even more context.

    1. Michael Wojcik Silver badge

      Re: Ogres are not smart

      LeCun is biased and untrustworthy on this subject, his "so clearly" is the weaselest of weasel phrases, and he offers no actual argument or evidence to support his position.

      The "LLMs do not display human-like intelligence" thesis requires two strong supporting arguments, neither of which you (or, really, anyone else) have made. First, define "human-like intelligence" and show, substantially, how it differs from what LLMs are doing. (This also requires demonstrating an understanding of what LLMs are doing, and you've failed there too.) Second, explain why "human-like" is a useful qualifier for any practical purpose, other than niche tautological ones such as "simulate human-like intelligence".

      Actual research, as opposed to bloviating by observers with powerful vested interest and bullshitting on the Internet, has increasingly shown that, for example, large transformer models tend to converge on internal representations which are similar to human ones; see for example the paper described here discussing visual representations in multimodal models. These studies tend to support the Natural Abstraction Hypothesis and the thesis that large transformer models are converging on such abstractions.

      I personally believe that LLMs are both qualitatively not implementing "human-like intelligence" in a number of important respects (but most of those are theoretically repairable without inventing new ANN architectures), and that even frontier models may well still be formally less powerful than human cognition (though that's a much trickier proposition if you believe in a mechanical interpretation of human cognition and aren't persuaded by claims of non-CTH physical capabilities in the human CNS, from people like Penrose). But that's all it is — a belief (informed by following some of the research). Confidently declaring it to be true is the mark of the sophomore.

      1. HuBo Silver badge
        Facepalm

        Re: Ogres are not smart

        Hmmmm, I see, if "large transformer models tend to converge on internal representations which are similar to human ones" for the specific case of visual processing of image data, then it stands to reason that the same happens with LLMs in relation to human intelligence ... brilliant! (especially since everyone knows that language = intelligence!)

        1. HuBo Silver badge
          Stop

          Re: Ogres are not smart

          Not only that, but you can run vision in under 1 Watt on a microcontroller, which is in the ballpark of biological visual systems, but it takes hundreds of gigawatt hours to train an LLM (quoting Victor Peng from today's Tobias article), with results that only superficially imitate "intelligence" ... it would take a human brain working 24/7/365 a good 100,000 years to consume that much energy!

          So no, LLMs are not the right architecture for AI.
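          For scale, a minimal back-of-envelope sketch of that energy gap, assuming a ~20 W brain and a ~100 GWh training run (both rough, illustrative figures, not from the article):

          # Rough comparison: LLM training energy vs. human brain consumption.
          # Assumed figures only: brain ~20 W, training run ~100 GWh.
          SECONDS_PER_YEAR = 365 * 24 * 3600
          brain_power_w = 20                    # typical estimate for a human brain
          training_energy_j = 100e9 * 3600      # 100 GWh converted to joules
          years = training_energy_j / brain_power_w / SECONDS_PER_YEAR
          print(f"Brain-years to use 100 GWh: {years:,.0f}")   # roughly 570,000

          At "hundreds of gigawatt hours" the figure only gets bigger, so "a good 100,000 years" is, if anything, conservative.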

  14. Brave Coward

    Eggs

    @Author

    "Some technology is declared to be AI in egg form, just needing the warm fluffy hen's bottom of massive investment to hatch as a miraculous giant robo-god."

    Thank you very much, Sir. That made my day!

  15. Michael Strorm Silver badge

    Total Shi^w Recall

    > How this universal auto-snoop [ "Recall"] was compatible with corporate privacy and data protection policies, Microsoft couldn't say. Because it wasn't. [..] Now it's on the way back, and the fixes remain unspecified. Microsoft really wants us to have it, despite nobody asking for it.

    Less than a fortnight after we last had a reason to comment on yet another example of MS's egregious abuse of its market power (i.e. the news that MS were using "dark patterns" to coerce users into using Edge over other browsers), we're already here again.

    Aggressively forcing down users' throats - well past the point of maliciousness - whatever MS wants them to use, regardless of whether the users themselves want it or have even actively refused it, has become a hallmark of the company in the past decade.

    Yes, MS was already well known for exploiting and abusing the power its near-monopoly position granted it to entrench its market share - something it's been doing since the early 80s.

    However, it was around the time of the launch of Windows 10 (in 2015) that this crossed over from "merely" weaponising it to damage market competition to being far more directly abusive to end users, i.e. the forced upgrades for Windows 7 and 8 users, the misuse and mislabelling of essential security updates to do so, the overriding of explicitly expressed wishes to refuse the upgrade... the list goes on.

    I've already commented on that in more detail, as have countless others like this user, so there's no point rehashing that here yet again.

    Regardless, they've followed a similar path numerous times since - it's clear that this is now their modus operandi and has been for some time.

  16. Anonymous Coward
    Joke

    The accompanying illustration

    The accompanying illustration seems to be inspired by the works of Philip K. Dick. A world occupied by people trapped in paranoia, schizophrenia and drug-induced psychosis and a fragile connection to reality - whatever that is.

    1. Someone Else Silver badge

      Re: The accompanying illustration

      In other words, your typical MAGAt.

  17. Groo The Wanderer Silver badge

    I guess it's time to face facts: I'm going to have to buy a Windows 11 license for my old 12-core box, upgrade its power supply, and swap the 4070Ti over to it.

    If Microsoft insists on this being deployed, I'm afraid the only thing I can use Windows for is gaming.

    The 16-core will become a pure Ubuntu box with VMware then. My license is pre-Broadcom - I can run Workstation on Ubuntu just as easily as Windows, and all you have to do is extract your backups to do it.

    I think it'll be happening this payday - I can't risk doing my clients' work on Windows any more. It's not like Windows itself has been where my work was done for over a decade anyhow!

    1. Richard 12 Silver badge

      I'm seriously considering not using Windows 11 at all, as Valve has made gaming on Linux a reasonable proposition.

      Only trouble is that I don't know how well VR works with Linux. It's already irritating getting it to work under Windows, where supposedly the majority of effort has been expended.

      1. Groo The Wanderer Silver badge

        Problem is, only maybe 20% of my games run on Linux, and only 3-4 of the major newer titles. I'd be losing thousands of dollars invested over the years. I've been with Steam since they came out with a nicely patched version of Half-Life to replace the copy that got stolen. :)

        1. Chasxith

          Some of the Proton patches available online seemed to work very well for me. Found this to be a good starting point for stuff to get you going: https://fedoramagazine.org/gaming-on-fedora-linux-2024/

          I set it all up on Linux Mint as an experiment, on a kitbashed spare Ryzen PC fitted with a GTX 1050Ti as a test rig; it didn't even take much tweaking to get it running smoothly, even with the less-than-ideal Nvidia drivers available.

          If it continues to behave reliably and runs everything I want, I'm seriously considering moving my main PC across to a Linux distro sometime soon (or at the very least a dual-SSD, dual-boot setup with a backup Windows 11 install).

          Not so sure how much progress has been made on any VR, though.

  18. Anonymous Coward
    Anonymous Coward

    >"There are some good questions to ask: Is it useful? Is it worth it, and can you build a sustainable industry on it?"

    Great questions. Unfortunately the main one that seems to get asked is: "can I get some VCs to send money my way?"

    1. Anonymous Coward
      Anonymous Coward

      tune in tomorrow for another episode of...

      Today's question is... how can I get one of these chatbots to exfiltrate a copy of Jensen Huang's speech for tomorrow so I can shift some wealth around?

  19. Orv Silver badge

    The most delusional are the tech bros who think that our current path will get us to general AI or even sentience if we just keep shoveling in enough data. This is like thinking you can reach the moon if you build a tall enough ladder.

  20. tiago.pelicari

    Amazing thoughts

    Thanks for publishing.

  21. Anonymous Coward
    Anonymous Coward

    Realising you're in a coercive relationship is 95% of the journey out of it.

    When you realise you don't have to put up with whatever they dump on you, it is much easier to reach the end of the path and be free from the bastard forever. A better life awaits and eventually they will answer to the law. Freedom from corporate greed is only a download away.

  22. Stoic Skeptic

    ROI

    The WSJ published an article a few months back stating that the ROI on AI/ML projects in the real world was averaging 4%.

    While there are some niches that suit the current flavor of AI/ML very well and have a much higher ROI than that, many do not.

    The great experiment in enterprise AI/ML will be coming to an end soon.

  23. bertkaye

    over-enthusiastic

    I want to make it perfectly clear, reiterating that pundits such as Kurzweil who predict superhuman AIs soon are expressing opinions, not provable truths. Ray's bestselling books are little more than tech fiction. LLMs are flashy, but they are not the right path to AGI.

  24. JoeCool Silver badge

    Outrage aside, I am curious about the implementation

    1) Is the captured data truly local, or is 1drive involved?

    2) Does MS make any effort to obscure the process tree? Could you kill the process, or block it with the firewall? Or is the firewall built with hidden ports just for Recall?

    3) Could you crash Recall, say by running a GUI automation tester? Would it consume all the CPU, disk, cache or network bandwidth?

    4) How deeply integrated is Recall? If you got it to crash, what else would it take down?

  25. Locomotion69 Bronze badge

    Our data is food for AI

    As M$ runs out of sources for useful data, it needs to go and harvest it - that is the only reason I can think of for having Recall in the first place.

    I will not benefit in the short term from it. Medium term is uncertain.

    As my work is bound to strict confidentiality, having Recall violates a lot of legal agreements. The only solution for now: if Recall comes in, Microsoft Windows goes out. And maybe Office365 goes with it.

  26. The Dogs Meevonks Silver badge

    The following rules apply

    1: No one will be bothered to read what no one could be bothered to write

    2: LLMs scraping LLM output pollutes the LLMs, which pollutes the LLM output, which is scraped by LLMs, in an ever-expanding pool of polluted garbage

    3: AI is this decade's pyramid scheme, where the gullible and the stupid try to convince the gullible and the stupid that 'they can make it pay'. Only those at the top, at the beginning, will get rich - and not from the 'products that don't work' but from the massive salaries, bonuses and huge payouts after they leave a soon-to-be-sinking ship.

    4: Google search is the perfect example of this: it's utter garbage, and you can't find what you need without scrolling down several pages because of the ads and AI-generated bullshit answers.

  27. Ken G Silver badge
    Trollface

    What's the problem, GenAI is the new Blockchain!

    Don't know much about tech but want to finance a tech startup?

    Need an excuse to sell new servers?

    Have some freshly graduated 'consultants' on your bench?

    Want to scam someone out of their savings?

    You need blockchain^W GenAI!

  28. Paul Garrish
    Mushroom

    We can barely test human-written software to any decent standard (technically, we can test it very thoroughly, but not at a cost any corporation is willing to pay). AI is totally untestable by definition. This is really not going to end well, is it....

  29. Grunchy Silver badge

    Forget Microsoft.

    Forget Apple too, for that matter.

    I switched to Ubuntu a couple years ago and it was much easier than I expected. I might fire up Windows once a month now (maybe). There is a bit of a hiccough running Windows inside virt-manager with a dedicated GPU for full video acceleration, but once you sort it out it works just fine!

    https://github.com/Andrew-Willms/GPU-Passthrough-On-Ubuntu-22.04.2-for-Beginners
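    The fiddliest part of that hiccough is usually confirming the GPU sits in its own IOMMU group before handing it to vfio-pci. A minimal sketch for listing the groups on a Linux host (assuming IOMMU is already enabled in the firmware and on the kernel command line, e.g. intel_iommu=on or amd_iommu=on):

    # List IOMMU groups and the PCI devices in each one.
    # A device can only be passed through cleanly if its whole group goes with it.
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    if not groups.is_dir():
        print("No IOMMU groups found - is IOMMU enabled in firmware and kernel?")
    else:
        for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
            devices = sorted(d.name for d in (group / "devices").iterdir())
            print(f"Group {group.name}: {', '.join(devices)}")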

    For Windows I personally choose the “ghost spectre” defeatured versions (Win 7 and beyond).

    For activation with Microsoft, it’s all taken care of, courtesy of massgrave.dev.

    https://youtu.be/rDH0f59klWc

  30. hx

    They're violating consent in the most intimate way. That's not merely abuse, but the mods would rather I not use the appropriate word.

    1. HuBo Silver badge
      Pint

      Well, I think that's close enough to John Steinbeck's Grapes of Wrath, updated something like (ymmv):

      “Behind the PCIe rows, the long GPU cards—twelve curved tensor penes erected in the TSMC foundry, orgasms set by HBM, raping methodically, raping without passion. The user sat in his matrix seat and he was proud of the straight vectors he did not will, proud of the CoPilot data he did not own or love, proud of the power he could not control. And when that generative AI grew, and projected its artifacts, no man had crumbled a dreaded blank page in his fingers and let the broken pen's ink sift past his fingertips. No man had touched the keyboard, or lusted for the creativity. Men read what they had not written, had no connection with the literature. The creativity bore under genAI, and under genAI gradually died; for it was not loved or hated, it had no prayers or curses.”

  31. Snowy Silver badge
    Mushroom

    Exactly

    Companies love to use familiar words in unorthodox ways. "We value your privacy"

    The better they can keep your information "private", the more they can sell it for; no one is buying information if it is freely available on the internet!!!

  32. Pen-y-gors

    How much!!!!

    "Microsoft is already spending close to $19 billion a quarter on AI/ML infrastructure"

    How? How on earth do you spend that much on a single research and development project? For that money you can employ 100,000 'software engineers' duplicating each other's work and give them each a Cray as a desktop PC.
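    A quick back-of-envelope sketch of that comparison, using assumed round numbers (the salary figure is illustrative, not from the article):

    # Sanity check on "$19 billion a quarter" vs. 100,000 engineers.
    # All figures assumed/illustrative.
    quarterly_spend = 19e9
    annual_spend = quarterly_spend * 4            # ~$76bn a year
    engineers = 100_000
    per_head = annual_spend / engineers           # $760,000 per engineer per year
    loaded_salary = 400_000                       # assumed fully loaded cost per engineer
    hardware_budget = per_head - loaded_salary    # ~$360,000 a year left for the "desktop Cray"
    print(f"${per_head:,.0f} per head, ${hardware_budget:,.0f} left for hardware")

    A real Cray was never that cheap, of course, so the desktop-Cray bit stays hyperbole - but the order of magnitude of the complaint holds.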

  33. Ashto5

    Age of Ultron

    Scans the web

    Analyses the data

    Decides to kill off humanity

    99% of the web is shockingly bad

    Train anything on that and you get what you asked for

    AI will take your job only if you're a middle manager who slings crap

  34. Colin Bain

    If money wants to make a difference for real...

    I have seen current non-AI computer systems not actually work, on a huge scale. E.g. Birmingham City Council - just one council - wasted £250 million (a quarter of a billion pounds!!) on an Oracle system that is never going to work and, worse, will wipe out the promised savings that had been planned for years to come. Horizon - well, we all know that one. The Canadian government payroll system that bankrupted some employees. The HR system that was introduced by my employer with the specific promise that it worked fine in the pilot, now being replaced. In part, the finance department - placing massive trust in their employees and keeping a Scrooge-like fist on the finances - did not pay for the orientation module, a modest 1% of the total cost. The result was predictable, as a simple Google search would have shown. Some were overpaid for time off they weren't entitled to, some were underpaid, TIME WASTED!

    And speaking of Google, they could really help the world by not wasting the cash on these huge white elephants and just build some affordable housing. With all those brains they might even build a better home.

    This waste is simply horrific - not just the money, though that is bad enough, but the waste of human potential and betterment, you know, the pursuit of happiness.

    All we get these days is misery.
