Open source AI makes modern PCs relevant, and subscriptions seem shabby

This time last year the latest trend in computing became impossible to ignore: huge slabs of silicon with hundreds of billions of transistors – the inevitable consequence of another set of workarounds that kept Moore's Law from oblivion. But slumping PC sales suggest we don't need these monster computers – and not just because …

  1. Pascal Monett Silver badge

    "ChatGPT looks to be losing another race"

    Doesn't matter though. Borkzilla has managed to graft ChatGPT into Office and, soon, everything else it makes.

    And that will be paid for by user subscriptions.

    So it's just the Boards of the Fortune 500 that will suddenly be asking themselves why they're paying for an inferior... no, wait, it's from Borkzilla, so they won't ask themselves anything.

    1. Ken Hagan Gold badge

      Re: "ChatGPT looks to be losing another race"

      They won't ask themselves that because the pricing will be arranged so that all the things MS want you to have are free (and "integral") with all the things that you wanted.

      It's leveraging a monopoly in one area to acquire a monopoly in another area. It's illegal, but they have always got away with it in the past.

    2. pluraquanta

      Re: "ChatGPT looks to be losing another race"

      Just because they added it doesn't mean people will use it though. They grafted Cortana onto Windows 10, look how that turned out.

      1. mark l 2 Silver badge

        Re: "ChatGPT looks to be losing another race"

        I went to try out the ChatGPT-powered Bing earlier today from my laptop running Mint/Firefox, and Microsoft insisted I HAD to download Microsoft Edge to be able to use the chat function; no matter what I tried, I was always sent back around in a loop to the Edge download page.

        As I have no intention of downloading another browser just to use that one feature, I instead went over to HuggingChat, the open source AI chatbot website, which was happy to let me do what I wanted from Firefox without any additional web browser being installed, accounts created, phone number verification, etc.

        After doing that, I decided to see if there was a technical reason why MS wouldn't let me use the Bing Chat feature from Firefox. So I changed the Firefox user agent to report as Edge on Windows 10, and lo and behold, it let me in and worked without an issue. So it's clear Microsoft are using the chat function as another way to try to get people to use Edge whether they want to or not, by making it difficult for alternative browsers to access the service.

        Sounds like the good old MS of the 90s back at work again.
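        For anyone who wants to replicate the experiment: in Firefox this is the `general.useragent.override` preference in about:config, but the same idea can be sketched in a few lines of Python. The UA string below is illustrative only (version numbers are made up for the example), and constructing the request object sends nothing over the network.

        ```python
        # Hypothetical sketch of the trick described above: presenting an
        # Edge-on-Windows User-Agent string so a server treats the client as Edge.
        # The exact version numbers here are illustrative, not a real Edge build.
        EDGE_UA = (
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.42"
        )

        def spoofed_headers(user_agent: str = EDGE_UA) -> dict:
            """Build HTTP request headers that report a different browser."""
            return {"User-Agent": user_agent}

        # Stdlib usage; merely constructing the Request sends no traffic:
        import urllib.request
        req = urllib.request.Request("https://www.bing.com/chat",
                                     headers=spoofed_headers())
        ```

        Whether a given site keys its gatekeeping purely off the User-Agent header, as Bing Chat apparently did here, is of course up to the site.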

      2. 43300 Silver badge

        Re: "ChatGPT looks to be losing another race"

        It's not always successful but it often is - Teams has been their most successful attempt at this strategy in recent years. Now, having got a large number hooked, they have introduced extra-cost 'Teams Premium'...

  2. Ken Hagan Gold badge

    Can't happen fast enough

    How long till we can have something like an Echo but all done locally so that you aren't spaffing your entire existence to some corporate data whore, and the damn thing eventually gets used to your accent and habits?

    Not long, I'm guessing, and it will be FOSS that does it coz none of the corporates have an incentive.

    1. Francis Boyle Silver badge

      Re: Can't happen fast enough

      I've finally got hold of a little board that can do Star Trek style voice recognition locally. It's not bad, though it has its limitations. If I had open source software that could run on my PC, I suspect those limitations would be blown away.

    2. Kevin McMurtrie Silver badge

      Re: Can't happen fast enough

      Absolutely this.

      I've been running a home server continuously since the late 1990s. Improvements in power efficiency and IPv6 simplify this to the point where the whole "cloud" can be fully personally-owned appliances. I haven't gotten any FOSS AI systems working yet (various software mismatches) but I'll keep trying periodically.

    3. Wayland

      Re: Can't happen fast enough

      That should be illegal. The government needs to hear you to make sure you're not shouting at your children or complaining about the King.

    4. Justthefacts Silver badge

      Re: Can't happen fast enough

      Well, maybe it can be produced by a non-profit, but one thing it can’t be is open-source. In the sense of “public code inspection, source code can be modified”.

      As we now understand, there needs to be a whole lot of gubbins in the background to make it safe (=non-Reddit-spewing, and =non-nuclear-bomb-making). If it were freely-editable, everyone would just take out the safeguards. And, the EU are making a law that those safety rails must be in place. In this case, definition of source code must be expanded to the training data, because that’s a large part of what is required to make the useful “object code” of an executable AI model.

      It’s fine while we are at the just-a-toy stage. But no, once this technology is powerful enough to be generally deployed, modifying it should be, and will be, legally prohibited. Inspection is TBD I think.

  3. b0llchit Silver badge

    Electrons being pushed

    While it is very impressive, all the improvements and so, it should also be interesting to look at the cost.

    All these monster machines are now running mostly idle, at maybe 1-10% energy consumption. When a portion of these new monster machines starts to run these large models, energy consumption can be expected to increase significantly. It is nice to say: "Computer, do my homework" and "Computer, email the neighbours a reminder to shut up.".

    When we run an audio model on the local computer, we are using a lot of energy. Add the local interactions with the newest language model to write that next "perfect" paragraph and email, and you will be using more energy. A lot more than before.

    Yes, yes, new devices will be less wasteful, but still, we are on track to push more electrons, not fewer. How effective is it to run all those transistors to come up with the phrase: "We all knew, but couldn't resist."?

    1. Snake Silver badge

      Re: costs

      What about the cost of TIME? The article makes it all sound so rosy, AI at home!, but I have a friend who is actually experimenting with Stable Diffusion as we speak.

      He's in his 3rd week of training and experimentation in getting the system to serve up the results he seeks.

      It's not snap-your-fingers-and-get-a-great-output. I tried off the shelf myself; the results were... disappointing, to the level that I decided it's probably not worth my limited time to continue the experiment. Either you jump in with both feet or you settle for almost worthless "results".

      So yes, a "few" people (tens of thousands out of hundreds of millions of computer owners) are bothering to experiment with local-based AI. IF, and that's a big if based upon my own experiments, your desires fit the existing models and DBs, you'll get some modest results. Everyone else will need to sink in hundreds of hours of training.

      1. that one in the corner Silver badge

        Re: TIME

        Playing with SD does indeed take up time - playing with Craiyon/DALL-E takes up an equal amount of time! Either you find image generators bleh or they'll sink their fingers into you.


        > to serve up the results he seeks

        You haven't said whether or not he can get the results *he* seeks "out of the box" with any of the cloudy options (and is fiddling with SD just because he can), or whether he has done all he can with "Prompt Engineering" and turned to something he *can* train himself, out of desperation. Has he spent those three weeks screaming at the Heavens and kicking the cat down the cobbles, or has he had three weeks of geek-gasm, twiddling the knobs with manic glee?

        > get a great output

        Especially with image generation, one man's "great output" is another man's "If thine eye offends thee, tear it out".

        > you'll get some modest results

        Or some terrific results that more than meet your requirements.

        > Either you jump in with both feet or you settle for almost worthless "results".

        Sorry, but again, judged by what criteria? Are we talking about how the horse always seems to have an extra bit of leg and the astronaut is clearly not side saddle like you asked? And are the online systems doing it better for you?

        > I decided it's probably not worth my limited time resources to continue the experiment

        Again, same question to you: do you find that the cloudy options, including the paid-for (trading money for time) give you results that you can - and do - make use of?

        > (tens of thousands out of hundreds of millions of computer owners) are bothering to experiment with the local-based AI

        Aside from probably underestimating the number of computer owners in the world, how does that compare with the number bothering to experiment with the cloud-based AI (and more than just once or twice for a giggle, a quick post about how it changed their life and then going back to Love Island)?

        1. Tom7

          Re: TIME

          I got all interested after this article and went and figured out how to install Stable Diffusion locally. About half an hour later, I had the GPU version installed, which immediately died because I have an AMD GPU and it will only work with NVIDIA. Okay.

          So I went for the CPU version. I've just run my first generation; ten and a quarter minutes to generate a 512x512 image that isn't really what I asked for.

          The difference between SD and the more commercial offerings is still large. They just work, they produce reasonable results and they're fast enough to just dabble with and adjust your prompt if the output isn't quite what you were after. SD takes quite a bit of nous to know how to get it to work at all, the results are a bit disappointing and it's slow enough that you'll have got bored by the time your first image is delivered.
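          For what it's worth, the AMD trap I fell into is avoidable if the front-end checks what hardware is actually present before assuming CUDA. A sketch of the selection logic (my own illustration, not from any particular SD install; the commented diffusers call at the end assumes that library is installed):

          ```python
          # Pure decision logic for choosing a compute device, kept separate
          # from torch so the preference order is easy to see and test.
          def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
              """Return the device string to use, fastest option first."""
              if cuda_ok:
                  return "cuda"   # NVIDIA CUDA (AMD ROCm builds also report "cuda")
              if mps_ok:
                  return "mps"    # Apple Silicon GPU
              return "cpu"        # works everywhere, but slowly, as noted above

          # In practice (assumes PyTorch and diffusers are installed):
          #   import torch
          #   from diffusers import StableDiffusionPipeline
          #   device = pick_device(torch.cuda.is_available(),
          #                        torch.backends.mps.is_available())
          #   pipe = StableDiffusionPipeline.from_pretrained(
          #       "runwayml/stable-diffusion-v1-5").to(device)
          print(pick_device(False, False))  # -> cpu
          ```

          On a plain AMD card without the ROCm build of PyTorch, that check lands on "cpu", which is exactly the ten-minutes-per-image experience described above.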

          1. mpi Silver badge

            Re: TIME

            > The difference between SD and the more commercial offerings is still large.

            Absolutely. SD is infinitely configurable, doesn't charge me for using it, doesn't rely on the goodwill of some corporation, people can and do freely adapt it to an enormous number of use cases, there are hundreds of models, hypernetworks, textual inversions and LoRAs to choose from, and there is absolutely zero chance of some dictatorship limiting use cases based on the insecurities of their beloved leaders.

            Whereas commercial offerings ... well ... ummm ... yeah. Okay, granted: They don't require me to have the hardware. Okay, I'll give em that.

            Oh, what's that? I already have that hardware, because I play videogames, and it sits idle most of the time anyway? Well, guess I'm not gonna be a customer then :D

            > They just work

            So does Stable Diffusion, if the hardware supports it. InvokeAI has an installer that's no more complicated than installing any old desktop PC software. And yes, one needs beefy enough hardware, and some things are not supported, and some of it works only with a good amount of technical expertise ... same as with videogames.

            > the results are a bit disappointing


            Sorry, what exactly is the definition of "disappointing" we're talking about here?

            > and it's slow enough that you'll have got bored by the time your first image is delivered.

            That depends entirely on the hardware. My current setup delivers 1024x1024 images in under 12 seconds, and that's with float32 full precision. Granted, that requires a somewhat expensive setup, but that too is the same as with videogames... I cannot expect VR games to run smoothly either if the hardware cannot back it up.

        2. Snake Silver badge

          Re: TIME

          He's made quite a substantial investment in researching and downloading / installing both models and LoRAs, and has also retrained using his own source images (of himself) to help improve the output. After that, he's been using prompt engineering to tune towards the output he's expecting. This week he's sent me more results and some are indeed fantastic, at least whilst viewing on my lowly cellphone screen; larger screens may expose issues.

          He's working on human image output and SD seems to do that reasonably well, with much better results the less true to life you seek in output complexity; he just sent me a fake animation magazine cover that looks fantastic. I, OTOH, am looking for fantasy morph output and it's a major fail without major retraining and custom model input (there are a few but mostly unusable for my purposes).

          So YMMV but expect an overall time sink here.

          1. that one in the corner Silver badge

            Re: TIME

            > He's working on human image output and SD seems to do that reasonably well,

            Thanks for fleshing that out (cough, sorry).

            Definitely a time sink. Had to rein myself in, a few nights back, from seeing what happened if you just feed it short prompts like "peculiar" - I claim this is serious "probing the tag space attached to the training set" and not "a waste of time making icky pictures".

        3. Anonymous Coward
          Anonymous Coward

          Re: TIME

          Especially with image generation, one man's "great output" is another man's "If thine eye offends thee, tear it out".

          I'm not an artist by any means, but I am a fan of humans. I am committed to going out of my way not to patronize any art based on AI. I won't read anything with what looks like an AI generated illustration.

          I'm OK with using it for technical work - I see it as yet another automation tool, and it seems to be supercharged code completion. I wouldn't want to go back to writing in machine code. I do worry about the trend of CEOs talking about replacing entry-level positions with AI. How will the next generation build up their higher-level knowledge from the bottom up? We can only hope the high Greed/IQ ratio of Private Capital kills off those idiots before us.

          Oddly, the author didn't mention "GitHub Copilot" alternatives at all. I think alternatives would be healthy for both the market and the culture. A small local system sounds great, especially when the internet is down. Are there any attempts at that?

          1. Anonymous Coward
            Anonymous Coward

            Re: TIME

            A small local system sounds great, especially when the internet is down. Are there any attempts at that?

            Ah yes. An AI that generates cat videos and memes while you wait. I think you have just come up with a monster: this will be the next site chatbot that will prevent you from actually talking to a human being for support.


          2. mpi Silver badge

            Re: TIME

            > How will the next generation build up their higher level knowledge from the bottom up?

            By fixing the mess these great CEO ideas will cause.

            LLMs are incredibly helpful tools for people who already know how to write code. They are incredibly bad at replacing people who write code.

      2. Francis Boyle Silver badge

        Re: costs

        I have a local installation and the results are indeed disappointing, though I'm not sure what I could do to improve them. The version I installed requires CUDA, and since I only have an ancient Quadro card (because I'm not willing to give money to Nvidia these days) it's pretty slow. But the main problem is that the images are just pretty naff, and I'm not prepared to put in the time trying to make them better when I can produce better images using more traditional means. (If you can call Blender traditional.) Maybe I'll give it a while and try again.

      3. seldom

        Re: costs

        I bought a bandsaw because I want to make some toys in wood for my niece's children.

        I'm in my fourth week of experimentation and the system is still not delivering the results I'm looking for.

        On the plus side, I still have all my fingers, and I am enjoying the learning process.

        Sometimes the journey is more valuable than the destination (Grasshopper).

        1. M. Poolman
          Thumb Up

          Re: costs

          Exactly what he said! This is still very new technology, and if people at home want to spend time generating prog rock covers - good for them.

          On the other hand, there must be many potential applications in science and engineering waiting to be discovered (big data is all very well, but it often begs the question of how it is then processed and interpreted). Putting these tools into the hands of researchers, for a fairly modest hardware budget, with the "freedom to tinker" and freedom from the hassles of licensing, subscriptions etc. can only be a good thing.

          Sure, not all problems can be solved with a generous sprinkling of magic AI pixie dust, but it strikes me that we may well be entering a new and exciting era of computing.

          1. Glenn Amspaugh

            Re: costs

            Heh, reminds me of the early '90s and getting into 3D rendering and animation. New kit (Quadra 650 with AT&T coprocessor card) was taking a week to render 30 seconds of 640x480 ugly starships. But then I learned about distributed computing and, shortly thereafter, was banned from using the art school's new computer lab.

            Tools and apps are pretty rough now but in 5-10 years children will be creating their own Star Wars films filled with Power Ranger characters (use whatever 'cool' films and tv shows kids will be into soon).

      4. Inkey

        Re: costs

        Snake ....

        Stable Diffusion works just fine ... way, way better than Midjourney and the Dreamscape online subscription clones. Way better.

        Locally I have a really old Intel 3.4 GHz chip... and while it took a while to get working, it did it well. With a 2nd-hand RTX 3070 it runs batches of 10 to 15 images in about 3 mins @ 640x720....

        Also I can make some primitive geometry in Blender, and run it.

    2. that one in the corner Silver badge

      Re: Electrons being pushed

      That raises the question: do we actually have any useful & reliable figures to work from?

      To start with, the comparative energy costs between pushing bits in the cloudy DCs versus at home? Including the cooling needed when you have all the kit in such close proximity - no aircon in Corner Corner[1] - and any other infrastructure that wouldn't be in place if the machinery wasn't there? Without a PC I'd still have a desk[2] and somewhere to put it. We're no doubt using different CPUs, RAM etc, what are their comparative operations per Watt?[3]

      Then, even though they may be "monster" machines compared to many a household, they aren't in the same league as the cloudy ones - which is the whole point, this is why the models being run at home are smaller. So we're swapping runs of a ginormous model for runs of a merely quite large model: fewer bits, quite a lot fewer bits in motion.

      Shall we just assume that the extra transport costs for the cloud (all that Javascript!) are trivial compared to the compute costs and are probably swamped by Netflix usage.

      [1] Although, if there was, maybe the window would be closed and I wouldn't have a wasp buzzing around my keyboard! Gerroff.

      [2] Only it'd be a good-looking desk, with a blotter and a heavy bound household ledger.

      [3] Sounds like the sort of thing one ought to know but ... reliable figures, which aren't triggering "marketing department" vibes?

    3. mpi Silver badge

      Re: Electrons being pushed

      > How effective is it to run all those transistors to come up with the phrase: "We all knew, but couldn't resist.".

      How effective is it to drive a 1.5t sedan around for 1h in each direction, alone, to do a job that can be done remotely? How effective is driving a 2t SUV for 30 minutes through stop-and-go traffic to transport half a liter of milk and a carton of eggs? How efficient are all these gazillions of heavy vehicles stuck in traffic jams on 12-lane superhighways for hours, just to transport an amount of people and materials that sane public transport could manage with a fraction of the energy and space required?

      Remind me, compared to what our species spends on transportation, computers account for how much of global energy consumption and greenhouse gas emissions again?

      How about we put out the forest fire, before we worry about the candle smoke?

      1. Wayland

        Re: Electrons being pushed

        PCs are not the cheapest hobby but they start quite cheap and scale up to not that expensive. You would spend more doing up your bathroom. The fact that the PC is still with us essentially the same as it was 40 years ago is wonderful. AI is yet another thing we can do with it and it encourages hardware upgrades. It really is a Personal Computer unlike a Smartphone which is a person tracking device.

  4. ThoughtFission

    Having a closer look at the PC specs required to run something close to GPT-4, the average user isn't going to be able to do it. You need a very expensive card to get similar results.

    1. sten2012 Bronze badge

      There is ongoing work in that area, though.

      Obviously it will still be slow, but there are efforts to overcome the memory wall (which is particularly brutal for consumer GPUs) by shifting data in and out of VRAM.

      It may soon reach the point where you can run it, albeit with worse-than-linear performance degradation.

    2. mpi Silver badge

      This isn't about running models with hundreds of billions of params. This is about running much smaller models (smaller by an order of magnitude) with comparable quality of output.

      We are talking about models in the 13B param range here.
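      A back-of-the-envelope sketch of why that parameter count matters for home hardware (my own rough arithmetic, not a benchmark: weights times bytes-per-weight, plus a guessed ~20% overhead for activations and cache):

      ```python
      def vram_gb(params_billion: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
          """Crude inference-time memory estimate for a model:
          parameter count * precision, padded ~20% for runtime overhead."""
          return params_billion * bytes_per_param * overhead

      # Same 13B model at three common precisions:
      for name, bpp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
          print(f"13B @ {name}: ~{vram_gb(13, bpp):.1f} GB")
      # fp16 lands around 31 GB (beyond any consumer card), while 4-bit
      # quantisation comes out near 8 GB, which is why quantised 13B
      # models are the ones people actually run at home.
      ```

      The overhead factor is a guess and real usage varies with context length, but the order of magnitude is the point.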

    3. Glenn Amspaugh

      I've seen folks on reddit post stuff they say is run on GTX 1060 cards and with the big N announcing RTX 4060 8GB cards for $299.00, guessing folks will be upgrading their render rigs soon.

  5. TheMaskedMan Silver badge

    Interesting. I haven't yet got around to experimenting with a local copy of anything, though it's on my list - a local ChatGPT that wasn't excruciatingly slow might have its uses.

    If these local variants work - or come to work eventually - as well as their cloudy brethren, it's going to really upset those screaming for regulation.

    1. doublelayer Silver badge

      Yes, it certainly will. I don't like the chances either, but I haven't wasted any breath asking for regulation because, by the time we get any, the ship will have sailed. There will certainly be a lot more spam on every network that accepts it now, and we'll just have to deal with that. I'm not sure any regulation at any time could have prevented that, but it's certainly too late now.

  6. well meaning but ultimately self defeating

    Standards dropping

    Historically I would have expected to see el Reg publish a much more detailed and considered version of this article about a month ago - this is late and derivative of about 100 similar commentaries. Do better guys!

  7. captain veg Silver badge


    "why the PC proliferated: by making it possible to bring home tools that were once only available in the office"

    Not really. Office managers could buy PCs on expenses without asking the IT department for permission, and, possibly more importantly, do useful stuff on them without getting the IT department involved.

    Agile, if you like.

