Portable Large Language Models – not the iPhone 15 – are the future of the smartphone

Smartphone innovation has plateaued. The iPhone 15, launched overnight, has some nice additions. But my iPhone 13 will meet my needs for a while and I won't rush to replace it. My previous iPhone lasted four years. Before that phone I could justify grabbing Cupertino’s annual upgrade. These days, what do we get? The iPhone 15 …

  1. _andrew

    Sure, it's possible, but why would you want it?

    Just about every single "AI in the machine" story ever written has been a cautionary tale. From the Sirius Cybernetics Corporation onwards. No one (besides, perhaps, Iain Banks, or perhaps Asimov) has posited an AI sidekick actually doing something useful and productive, let alone not turning on the user. I have no use-case for such a thing. What would you do with it?

    1. computing

      Re: Sure, it's possible, but why would you want it?

      A use case? How about this: multiple advisors are better than fewer. And humans, even expert humans, make silly mistakes.

      I went to my doctor one day with chest pains - I was having a heart attack. He diagnosed indigestion and sent me home. I was hospitalised that night. After I had recovered, I mentioned the incident to a nurse at the same clinic. She was shocked. She said the doc should have done an ECG with the symptoms I was experiencing. She figured she would have done a better job on that day.

      I agree. An LLM with access to my conversations with the doctor on the day may have asked me to get a second opinion.

      1. Headley_Grange Silver badge

        Re: Sure, it's possible, but why would you want it?

        I'm sure all the support people who read the Reg are looking forward to callouts where they are constantly pestered by a semi-informed user armed with a LLM asking a barrage of pointless questions.

        However, I too would be disappointed with any doctor who prescribed indigestion.

      2. Handlebars

        Re: Sure, it's possible, but why would you want it?

        A simple flow chart is all that was needed there. An ECG and blood test is cheap compared to the risk of missing a cardiac problem.

        1. Nifty

          Re: Sure, it's possible, but why would you want it?

          "A simple flow chart is all that was needed there"

          And a personal LLM would have pointed you towards one.

          1. sabroni Silver badge
            Facepalm

            Re: And a personal LLM would have pointed you towards one.

            Hmm, or more likely pointed you towards one that looks like the others but is slightly changed in an arbitrary way.

            It's not like Truth is important to the LLM, it just wants to show you what it thinks you've asked for.

            I don't get it, am I wrong about how these things work or are all the fanboys fucking delusional?

      3. Roland6 Silver badge

        Re: Sure, it's possible, but why would you want it?

        You clearly don’t understand what a LLM is and thus what it is and isn’t capable of.

        Your doctor made a diagnostic and treatment error. If you had Googled “heart attack” or “chest pains” you would know from reliable medical sources the diagnostic steps a medical expert (with access to the right equipment) will take.

        Even asking a random person (i.e. not a medical expert) in the street would most probably have got you the answer: do you want me to call you an ambulance?

        An answer the LLM is highly unlikely to provide.

        1. This post has been deleted by its author

      4. StrangerHereMyself Silver badge

        Re: Sure, it's possible, but why would you want it?

        It would be extremely worrying if people started asking LLMs for medical advice.

        1. that one in the corner Silver badge

          Re: Sure, it's possible, but why would you want it?

          It is called "thinning the herd".

      5. Anonymous Coward

        Re: Sure, it's possible, but why would you want it?

        > multiple advisors are better than fewer!

        Yes, under the assumption that most of the advisors are sufficiently competent. If I ask 1000 people on the street how to change a tire, and apply the arithmetic mean of all their instructions, I'm pretty sure I will end up with a broken car.

        > An LLM with access to my conversations with the doctor on the day may have asked me to get a second opinion.

        Maybe.

        Or it may have hallucinated something medically very inadvisable. Or a sea shanty made of emojis.

        Which isn't that big a problem, if the user is a tech-savvy person who is aware of how stochastic parrots work, and what their problems are.

        But considering that there are people out there who will swallow bath soap tabs because someone on some asocial media platform told them to, I am not sure that putting this into everyone's smartphone is such a good thing.

      6. doublelayer Silver badge

        Re: Sure, it's possible, but why would you want it?

        We already have that, it's called entering symptoms into Google and reading every page that turns up. The pain you experienced could be a cardiac problem or indigestion, but if you read far enough, something nonspecific with no scans to go on could be almost anything else. You let an LLM pick from those and you'll get diagnoses like broken ribs plus radiation poisoning. People who act on that will find their problems are worse than if they consulted a doctor who actually knows this stuff, even with the risk that they occasionally misdiagnose. An LLM tells you what is possible and isn't even great at limiting itself to that. That's not a good way to make medical decisions. Otherwise, any time you felt slightly unwell, whether it was eating something that disagreed with you or a bit of overexertion, you can assume you've caught almost every parasitic disease in existence.

      7. YARR

        Re: Sure, it's possible, but why would you want it?

        LLMs also make stupid mistakes and are adamant they're right.

        Medical diagnosis is a well-defined logical process implemented in existing software such as expert systems. We only need AI for problems we don't know how to solve, and we must never trust them with our lives.

        1. that one in the corner Silver badge

          Re: Sure, it's possible, but why would you want it?

          > Medical diagnosis is a well-defined logical process implemented in existing software such as expert systems

          Agreed.

          > We only need AI for problems we don't know how to solve

          There it is again! Oh, the loneliness of the long-distance AI researcher!

          XPS are the children of the AI Lab, but, no, they are a "solved problem" and therefore are not AI :-(

    2. Doctor Syntax Silver badge

      Re: Sure, it's possible, but why would you want it?

      You and I might not want it but HMG would love it. Who needs to weaken encryption when you can get the subject's own device to analyse what they're up to?

    3. Norman Nescio

      Re: Sure, it's possible, but why would you want it?

      Just about every single "AI in the machine" story ever written has been a cautionary tale. From the Sirius Cybernetics Corporation onwards. No one (besides, perhaps, Iain Banks, or perhaps Asimov) has posited an AI sidekick actually doing something useful and productive, let alone not turning on the user. I have no use-case for such a thing. What would you do with it?

      How about Orac from Blake's 7?

      Or 'Box' from Star Cops, which the idea of an AI in a smartphone reminds me very much of.

      Possibly the ship's computer in Star Trek.

      But yes, I agree, most sci-fi AIs don't appear to have humanity's best interests at heart. AM in Harlan Ellison's I Have No Mouth, and I Must Scream is particularly noteworthy. A lot of people find that story rather disturbing. There are also Fred Saberhagen's Berserkers, which don't specifically dislike humanity, but organic life in general. (Sidenote: Fred Saberhagen was a Motorola electronics technician.)

      NN

      1. _andrew

        Re: Sure, it's possible, but why would you want it?

        Orac was more omniscience in a box than just AI, and even he/it convinced Blake and the crew to do things occasionally just because it wanted to see what would happen. But Orac also had infallible accuracy on his side: LLMs are just stochastic echoes in idea space: when they come up with the correct answer it's pure chance. That makes them much closer to useless, in my book.

        Never did see Star Cops, I'm afraid. Auntie must not have picked up the rights, here in the antipodes.

        1. TheMaskedMan Silver badge

          Re: Sure, it's possible, but why would you want it?

          "But Orac also had infallible accuracy on his side"

          It was Orac that concluded that Blake had become a bounty hunter ("My interpretation of the data leaves little room for doubt"), leading to an unfortunate incident on Gauda Prime shortly thereafter. I notice it wasn't around for the ensuing bloodbath, though.

          Not so very different from an LLM, in fact. "Too useful to destroy," as Dayna put it in an earlier episode, but not as infallible as it appears. Use one to your advantage by all means, but trust it without question at your peril.

      2. Evil Scot Bronze badge

        Re: Sure, it's possible, but why would you want it?

        I would add Mycroft. An AI that saved the humans under its care in "The moon is a harsh mistress."

    4. Tron Silver badge

      Re: Sure, it's possible, but why would you want it?

      I don't want it. I have zero interest in having AI-related products or services, would choose ones without it over ones with it, and would be extremely unhappy if I could not turn 'AI' features off, the way I turn most of the 'new' features off on almost every piece of tech I use, as they are annoying gimmicks.

      And I have no intention of paying extra for this. I would be happier to buy a second hand item that didn't have it.

      I may not be alone, as the media have been running scare stories about AI whenever they have a space between the scare stories about social media.

      Finally, there may not be any future for smartphones. The French government are taking down the iPhone 12. Probably more to follow. Governments want us to go back to the 70s and settle down for power cuts and a cold war, with no Web 2.0 or cross-border information sharing, the news being delivered by media groups that they can manipulate. The long term future of tech is starting to look like the long term future of tourism and the short term future of coal mining.

      1. Version 1.0 Silver badge
        Terminator

        Re: Sure, it's possible, but why would you want it?

        Everyone's main concerns about their "smart"phone are: is it infected, has it been hacked, has all my contact data been copied, is the battery dying? ... etc. I would love a "smart"phone update that would return me to the original phones that allowed me to talk to people and never worry about getting hacked or having all my data stolen. Certainly I like a few of the apps, but it would be so much safer to use them on a tablet that I have complete security control over.

        I suspect that smartphones were originally created to collect every users' data while offering users phone calls and photographs so users don't think about all their data being stolen ... but it's not "stolen" when you check the user agreement that we all accept to use everything.

        The AI environment (see icon) is that all of us smartphone users are effectively just chickens laying eggs, so that the manufacturers can have breakfast, lunch, and dinner every day.

    5. EricB123 Silver badge

      Re: Sure, it's possible, but why would you want it?

      Monitor your roughage intake?

    6. JimboSmith

      Re: Sure, it's possible, but why would you want it?

      A few years ago I was asked by a big boss why I didn’t use Siri, Alexa, Bixby, or whatever the Google one is called. I said that they were great for some people but personally I couldn’t see the point unless they worked on a closed-loop system. Once I had explained that I meant no connection to the Internet, they were a bit more informed and understood my objections better. I explained that I didn’t want a microphone in my house listening to my every word and having the ability to send it back to somebody else. They still thought I was a bit paranoid until I said I wouldn’t say something in front of one of these things that I wouldn’t write on a postcard and send through the Royal Mail. I could see the colour draining away as I said that if you’ve done a business call in front of one of these things then it could have heard the whole thing. “Even worse, have one in the bedroom when you’re getting jiggy with it.” That was not the right thing to say, according to my boss, who knew that the big boss had been given smart speakers by his wife on Valentine’s Day.

      1. JimboSmith

        Re: Sure, it's possible, but why would you want it?

        Similarly I wouldn’t use a LLM because it would likely hoover up more of my data than I liked/ make inferences about me based on what I inputted. I can see one use case of a LLLM (Local Large Language Model) for creating your children a bedtime story which your local automated assistant can then read to them. If the LLLM has only nice things/ mild peril to reference from, then it can’t devise a story that has mass murders in it etc. Other than that ……….

        1. Norman Nescio

          Re: Sure, it's possible, but why would you want it?

          I can see one use case of a LLLM (Local Large Language Model) for creating your children a bedtime story which your local automated assistant can then read to them. If the LLLM has only nice things/ mild peril to reference from, then it can’t devise a story that has mass murders in it etc. Other than that ……….

          I'd be careful that the Grimm brothers' tales didn't get into the training set, then. Hans Christian Andersen's tales are not exactly a laugh a minute, either.

          It appears that many, but not all, children are entertained by the gory bits in the full expectation that good will triumph over evil and everything will be fine. Which is a fine feeling to go to sleep by. Some kids get too disturbed by the gory bits (I was one of those); and some feel short-changed by anodyne stories (I was one of those, too. Never said I was easy to please.).

          As for having a home assistant read the story: I think part of the point is to have a trusted adult do it, which, if everything is working as it should, gives a safe and comfortable environment to go to sleep in. Some adults edit the story on the fly, depending on the child's mood. It's not just reading a story.

  2. Pascal Monett Silver badge

    "neither will they leak all our most personal data to the cloud"

    At this point in time, that sounds rather like wishful thinking to me.

    It appears as if everything everyone is doing at the moment is geared toward siphoning my personal life to The Cloud™. If those portable whatchamacallits are going to become pervasive, I'm willing to bet that they'll happily lap up everything they can and send it to the mothership ASAP.

    It's the contrary that would surprise me.

    1. Cris E

      Re: "neither will they leak all our most personal data to the cloud"

      The mothership could write an AI that watches for unauthorized sharing and helps you stay secure (while still collecting a fair amount of info that would be useful for its own needs). If the line was drawn clearly, that would be both fair and valuable to everyone. I suppose various settings for what was locked and what was not could be established, but having an AI pre-screen your emails and flag the dangerous ones would be great. Eventually it could be like putting the equivalent of email filters out in front of your browser to snoop for badness (i.e. "You're typing your SSN? Really, Mike?")
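
      The "snoop for badness" pre-screen above doesn't even need an AI to sketch - a plain pattern filter shows the shape of it. A minimal sketch, with a hypothetical `flag_sensitive` hook (nothing here is any real product's API):

```python
import re

# Toy outbound pre-screen: flag anything shaped like a US SSN before it
# leaves the device. A real assistant would hook the network layer;
# this shows only the pattern-matching core.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag_sensitive(text: str) -> list[str]:
    """Return any SSN-shaped substrings found in the outgoing text."""
    return SSN_RE.findall(text)
```

      An on-device LLM would generalise this beyond fixed patterns, but the privacy question - who gets to see the flagged text - stays exactly the same.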

      1. doublelayer Silver badge

        Re: "neither will they leak all our most personal data to the cloud"

        "The mothership could write an AI that watches for unauthorized sharing and helps you stay secure (while still collecting a fair amount of info that would be useful for its own needs.)"

        That clause on the end is really quite off-putting. The AI and the company do not "need" any of my data. Their needs will be the desire to mine my data and sell a package of users to someone else. I use software that doesn't collect any of it for any needs unless that collection is explicitly to do something I asked it to do. At least, I do so whenever possible.

  3. Headley_Grange Silver badge

    I read the article twice, but I'm still stuck. Once my phone's got this PLLM thing on it, constantly running, sucking battery power and reducing available memory, what will it actually do for me? Like most of my friends (we're old) I use the phone to listen to my music (stuff on shelves in my house), text and mail people, find out when the next train is, find out where the next pub is and make the odd foray onto the web to answer questions - usually about 70s and 80s rock music. All I see in my future is a life blighted by annoying pop-ups recommending stuff that I'm not interested in. There'd better be an off switch.

    1. doublelayer Silver badge

      As far as I can tell, the author's recommending effectively the same dream that advertisers like to sell. You know the one: when they're busy ignoring the privacy implications, they explain that seeing ads about things you're really interested in would be helpful to you. To be entirely honest (rather than entirely hostile, which is my first approach when it comes to advertisers), ads for things I'm actually interested in, providing the information I'm looking for, would be much more useful than generic ones, because adverts on their own are not evil. Advertisers just don't have any ability to actually provide those ads, and appear to have little interest in trying any mechanism other than stealing more and more of my personal data. The article seems to suggest that running an LLM over every bit of my data would allow that software to make recommendations that would be useful to me, and that, in the author's mind, this would somehow be a free service provided by some creator of AI models for which I don't pay with my data or a large subscription charge.

      That part isn't going to be true, but nor are any of the other parts. An LLM that's read all my communications and watched my actions is not going to be able to answer my questions, but a search engine can. The reasons are that an LLM doesn't have the recent information many of my searches are trying to get, that it makes things up when most of the time I need accuracy instead of overconfidence, and that what I want right now is often disconnected from the emails I've received today. We humans are pretty good at typing what we want into a search box. I don't think we'll get any benefit from having a program try to guess at that query, no matter how authoritative-sounding the produced essay is.

      1. _andrew

        The flaw in the targeted advertising theory

        I've always been liberal with Google and tracking cookies for that exact reason: the argument that they would show me useful stuff is superficially compelling. Doesn't work: all of the ads are still worse than useless. It turns out that all of the stuff that I'm interested in, I know where to find already. Ergo, no-one needs to pay to put it in front of me, so they don't. The only people who will pay to put things in front of me "by force", are those flogging things (or ideas) that I'm not interested in, and don't want. So IMO the answer isn't ever-better targeted advertising, it's paying for services. If the Reg had a reasonable "no ads" subscription option, I'd pay it.

        1. SundogUK Silver badge

          Re: The flaw in the targeted advertising theory

          "It turns out that all of the stuff that I'm interested in, I know where to find already."

          This.

      2. SundogUK Silver badge

        "...adverts on their own are not evil."

        Yes they are.

        1. sabroni Silver badge
          Happy

          re: Yes they are.

          Oh No They're Not!

          1. Norman Nescio

            Re: re: Yes they are.

            Well, I've put them behind me.

            ...Mainly by use of Firefox, uBlock Origin, and uMatrix.

            1. doublelayer Silver badge

              Re: re: Yes they are.

              So have I, and that's a completely reasonable thing to do and one I recommend that everybody does. I do it not only to escape at least some trackers, which actually are evil, but also because I don't much care for advertising and I prefer not to see it. My desire not to see it doesn't make it as evil as the tracking, or really evil at all. I don't much enjoy eating grapefruit either, but that doesn't make grapefruit evil.

        2. doublelayer Silver badge

          No, they've existed for a long time and, while they can often be annoying, they don't impinge on your rights the way that data collection does. We don't get to decide something's evil just because we don't like it. If we get a physical newspaper and someone's paid for their text to appear somewhere on it, that doesn't give that advertiser any power over us, as we're perfectly able to ignore that box and read only the parts we care about. The same is true when advertising appears in other media.

          Where that begins to restrict us is when those ads can take actions such as running malware on our systems or when it is used to collect our data. That is something that previous advertisements cannot do and where they're beginning to take from the reader. Passive advertising is not the same, and we should focus on the parts that do us the most harm when we fight against it.

      3. Anonymous Coward

        As someone who has spent a career designing IT-based solutions, I deliberately disable the focus selection as I want lots of input, because sometimes the eye will see and the brain will register something that either addresses a long-standing problem, or turns out to be useful in the next week or so.

        This approach recently led me to meet Ava and her Omeo:

        https://www.facebook.com/people/Disability-Pride-Wellingborough/100086794150469/

        We are now working on off-road (cx, mtb) bike tracks and pump tracks where (young) people with these marvellous wheelchairs can have fun. Plus we've given the designers a challenge: how not to confuse the gyros when the wheels lose contact with the ground.

        1. Anonymous Coward

          Wow, that's a job!

          Never ceases to amaze me the mad projects people are quietly nibbling away at.

    2. erst

      I think LLMs make for excellent interfaces to many of those services that you say you use your phone for. They can provide a predictive keyboard on steroids for your texting, read your mail and propose a plausible answer before you even start typing, or create a query for the train database from your natural-language question about when the next train will leave, …
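
      That last hand-off - natural language in, structured query out - can be sketched without a model at all; here a regex stands in for the LLM, and the query shape and `to_query` name are invented for the example:

```python
import re

# Toy "LLM as interface": map a natural-language question onto a
# structured query for some hypothetical train-times API. A regex
# stands in for the model, so only the shape of the hand-off shows.
def to_query(utterance: str) -> dict:
    m = re.search(r"next train to (\w+)", utterance.lower())
    if m:
        return {"type": "departures", "destination": m.group(1)}
    return {"type": "unknown"}
```

      The point of the pattern is that the model never answers from its own weights; it only fills in a query that a boring, reliable backend actually executes.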

      1. TheMaskedMan Silver badge

        "It can provide a predictive keyboard on steroids for your texting, read your mail and propose a plausible answer already before you type"

        And the recipient's AI will do the same. It's only a short step from there to the sender's AI extrapolating the likely reply to the mail it's proposing to send, and using that reply as the basis for its next action... the result will either be total, sterile efficiency or utter chaos.

        On the plus side, it could look up your train time, assume that it will be cancelled and draft a complaint email in one step, sending it before you even leave the house. That would save time.

        1. ModicumSuch

          It reminds me of the old joke where a husband goes into town to sell livestock and telegrams back to his wife to say that he made it there safely, sold the livestock, is returning, and loves her. Over successive revisions, he reasons away the need to include each of those things (he pays by the word, naturally), since they're all either self-evident or implied by sending the telegram.

          In the end, he just telegraphs back “Martha, Robert.”*

          *whichever names are used for the couple in the joke.

      2. Michael Wojcik Silver badge

        They certainly aren't a good interface for any of the "services" I use my phone for. Or for anything else I do, ever, with anything.

    3. Adrian 4

      I imagine it would be like that google thing that keeps popping up unasked and trying to push their services. Unwanted, always wrong, but won't go away.

  4. mark l 2 Silver badge

    I don't think the tech companies want us running our own LLM AI locally on our devices. No, they want them in the cloud so they can hoover up all our personal data, so I don't expect them to be pushing local AI any time soon.

    Yes, even Apple, whose ads make it look like they are squeaky clean and don't collect personal data from iPhone users, still vacuums up a large amount of it from iOS. Even if they aren't selling it to 3rd-party ad companies, they are using it to sell you ads in places like the App Store.

    1. ianbetteridge

      > Yes even Apple whose ads make it look like they are squeaky clean and don't collect personal data from iPhone users still vacuum up a large amount of it from iOS even if they aren't selling it to 3rd party ad companies,

      First, turning off personalised ads on Apple OSes really is pretty easy, as long as you're capable of tapping a button: https://fossbytes.com/apple-data-collection-explained/

      Second, Apple has continually moved away from the cloud for ML towards doing more on-device. It announced yesterday it's moved a bunch of voice recognition tasks for Siri from cloud to Apple Watch, because the processor on it now has a fast enough neural engine to support it. It's absolutely in its interests (and yours) to do more ML tasks on-device.

    2. Knightlie

      You sound like one of those people complaining that Apple "hoovers up" your appointments for the Calendar app. And none of the ads I see on the App Store are "personalised," so you're going to have to provide a citation for that.

      Never mind, you won't...

    3. Anonymous Coward

      I can see them wanting to run it locally.

      BUT - They'll insist on hoovering up the data to the cloud for "Training Purposes" or "Quality Control" etc. The local AI saves them money by moving processing out of their data centers and allows for clever marketing language that makes people think it's more private than cloud based AI.

  5. ianbetteridge

    I'm guessing that you missed the part about Apple moving voice recognition - a core ML tech - from the cloud on to the *Watch*?

    They're fully aware that ML is moving onto the device. Why do you think they have spent years building chips with a Neural Engine into all their devices? The problem with shifting LLMs onto the device, though, is that they're incredibly power-hungry, and that means the more you use them, the more you can watch your battery slide towards zero.

  6. hammarbtyp

    Hype Curve 2.0

    My gut feeling is that all the hype around AI will turn out to be another Siri, Cortana, or Google Assistant: fantastic in theory, but in practice never achieving the promises put forward by the early adopters.

    1. Ken G Silver badge

      Re: Hype Curve 2.0

      Not anytime soon, but I like the approach. As more is learned about tuning models and personal devices get smarter then I think there's a use for a 'personal assistant'. Even just running your home appliances or being a better search engine and email filter has benefit.

      1. Doctor Syntax Silver badge

        Re: Hype Curve 2.0

        A better search engine would be one that isn't smart and always trying to second-guess the user. Just go back to basics, such as respecting keywords like "and", "or" and "not", and understanding that when a series of words is in quotes, only hits which fully match the phrase are wanted.
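
        That back-to-basics behaviour is small enough to sketch. A minimal literal matcher, assuming a leading '-' for "not" and implicit "and" (the syntax and the `matches` function are invented here, not any engine's actual query language; "or" is left out for brevity):

```python
import shlex

# Literal matching: every quoted phrase and bare word must appear
# verbatim in the document; a leading '-' excludes a term. No guessing.
def matches(query: str, document: str) -> bool:
    doc = document.lower()
    required, excluded = [], []
    for token in shlex.split(query.lower()):  # keeps "quoted phrases" whole
        (excluded if token.startswith("-") else required).append(token.lstrip("-"))
    return all(t in doc for t in required) and not any(t in doc for t in excluded)
```

        shlex.split does the one clever bit - treating a quoted run of words as a single token - which is exactly the phrase-matching behaviour being asked for.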

  7. Knightlie

    The combination of "Meta" and "LLM" means I won't ever touch this, even if I thought there was anything remotely beneficial.

    Meta's entire business model is based on scooping up and selling personal data - why would they give out a product that doesn't do that?

  8. darkrookie28

    Yes. Let us stuff our phones with even more bloat that runs down battery.

  9. A. Coatsworth Silver badge
    Terminator

    Trusting every aspect of our lives to a giant computer was the smartest thing we ever did!

    - Homer Simpson, Treehouse of Horror XII

    Kind of funny in 2001, rather frightening in 2024

  10. Zippy´s Sausage Factory

    Typo?

    There seems to be a typo in some HTML formatting as the line

    Nor does the iPhone 15 - although Apple's spec sheets .

    falls short. Feels like there's supposed to be a link at the end?

  11. Ian Johnston Silver badge

    they will continuously improve themselves to represent more accurately our states of mind, body, and finances

    How in blue blazes is my phone supposed to represent my state of mind, body and finances? I strongly suspect whoever wrote that was getting very excited about NFTs a couple of years ago.

  12. Kevin McMurtrie Silver badge

    Skeptical me

    It's not Star Trek time yet. Training and maintaining AI datasets is incredibly expensive. Even when technology improves to make that easier, the same funding levels will be maintained to improve the quality. In other words, there are lots of bills to pay. AI products will be tainted to serve the large corporations that built them.

    It could be another 15 years before we have AI that serves only the user and can be trusted with personal data. Even so, we're doomed if AI data ingestion is tricked as easily as real humans.

  13. steelpillow Silver badge
    Holmes

    "but neither will they leak all our most personal data to the cloud."

    Oh, really? Doesn't that hope risk being just a touch naive?

    More likely they will be slaves - appendages - to their cloudy overlords.

  14. Filippo Silver badge

    >These personal large language models [...] will continuously improve themselves to represent more accurately our states of mind, body, and finances.

    No, they won't. LLMs cannot truly learn after the training phase.

    They can fit a lot of text in their context, which is why they can provide the illusion of learning during a specific conversation, but the context is still limited. It's not enough to ingest your life, not even close.

    Imagine a PA you've hired just a week ago. That's what you'll get, and it won't get better. And memory isn't the only limitation - other characteristics of LLMs, hallucinations for example, will make sure of that - but either way it's not what you are predicting.

  15. xyz Silver badge

    More Pavlov's dogshit....

    This is just a fancy version of Sheldon trying to train Penny with chocolate. Keep reinforcing the "message" to keep everyone happy (and controlled). I don't care if that's the intention or not but that'll be the result.

  16. Fr. Ted Crilly Silver badge

    So then.

    Nathan Spring's (Star Cops) 'Box' moves ever closer to reality.

  17. Martin
    WTF?

    Personal AI can redefine the handheld experience...

    Redefine the handheld experience? Come on, that's the sort of marketing bollocks I'd expect from a puff piece, not an actual article.

    1. Michael Wojcik Silver badge

      Check the author.

  18. Michael Wojcik Silver badge

    For negative values of "better", perhaps

    Yet smartphones are about to change for the better – thanks to the current wild streak of innovation around AI.

    Ugh. All the ugh. There is absolutely nothing I find appealing about this prospect – and I've worked in ML, and I follow a fair bit of the LLM-related research, so it's not like I'm simply ignoring all the supposedly wonderful crap and claims of utility from the AI superfans.

    Nor is this simply about my personal feeling of repugnance for LLMs and imprecise hallucination-prone UX. This is accelerating learned helplessness, and burning compute resources (which have actual real-world costs) in order to optimize human ignorance and foolishness. I've rarely agreed with Pesce's articles here on the Reg, but this one I think is particularly daft.

  19. StrangerHereMyself Silver badge

    Purpose

    For what purpose would I want a LLM to run on my smartphone? I can't think of any use since I'm not into generating useless text articles.

    Maybe as an imaginary friend for someone to talk with? That would be sad IMHO. I'd recommend people start taking up a team sport of their liking and meeting more people IRL.
