Fortytwo's decentralized AI has the answer to life, the universe, and everything

Fortytwo, a Silicon Valley startup, was founded last year based on the idea that a decentralized swarm of small AI models running on personal computers offers scaling and cost advantages over centralized AI services. On Friday, the company published benchmark results claiming that its swarm inference scheme outperformed OpenAI …

  1. Anonymous Coward
    Anonymous Coward

    The principle reminds me of when I used Super Compact on a VAX in the 1980s to design microstrip circuits. You could run an optimization routine which fiddled with the values of components in the circuit to hit the desired performance. Gradient optimization was the fastest - it went down the steepest local gradient in the direction of the desired performance - but the problem was that it could get trapped in a local minimum, which was not the overall best performance, and not get out because every way out is up. Random optimization was the other, slower, option. It just jiggled everything between the pre-set limits and, if you left it long enough, it would usually find the best answer. Gradient could be tens of times faster than random, which was important when you were paying for CPU time. In practice the preferred tactic was to run gradient to get a quick solution, then run random for a while to make sure you'd not hit a local minimum.
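
    (For readers who haven't met the two approaches: a minimal toy sketch, using a made-up 1-D cost curve with one local and one global minimum - nothing to do with Super Compact's actual algorithms.)

    ```python
    import random

    # Toy cost curve: local minimum near x = -1.5, global minimum near x = 3.6.
    def cost(x):
        return 0.1 * x**4 - 0.3 * x**3 - x**2 + 0.5 * x + 3

    def gradient_descent(x, lr=0.01, steps=2000):
        """Follow the steepest local slope: fast, but can stall in a local minimum."""
        for _ in range(steps):
            grad = (cost(x + 1e-6) - cost(x - 1e-6)) / 2e-6  # numerical derivative
            x -= lr * grad
        return x

    def random_search(lo=-4.0, hi=4.0, trials=20000):
        """Jiggle values between preset limits: slow, but escapes local minima."""
        return min((random.uniform(lo, hi) for _ in range(trials)), key=cost)

    # Started at x = -3, gradient descent settles in the local minimum (~ -1.5);
    # random search almost always lands near the true minimum (~ 3.6).
    print(gradient_descent(-3.0), random_search())
    ```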

    1. m4r35n357 Silver badge

      Heh, we ran one of the early PC versions of SC on an AT machine in the late '80s.

      For those who are not aware (I'm sure you don't need a lecture from me!), "global optimization" is hard! You need to try all options (e.g. gradient, random, nelder-mead*), and leave them running. It all looks easy at first, until you encounter the "curse of dimensionality".

      *Nelder-Mead is unpopular with some, but I have used it successfully even for "global" searches.

      1. Paul Crawford Silver badge

        As you say, it is hard when you want a global minimum but (typically) don't know roughly where it is. Sometimes I have used combinations of methods: an annealing style to get somewhere close, then a faster gradient style once the locale is known. But some problems are just really troublesome...
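
        (A rough sketch of that two-stage tactic using stock SciPy routines; the Rastrigin test function and the particular solvers are illustrative choices, not the commenter's original tooling.)

        ```python
        import numpy as np
        from scipy.optimize import dual_annealing, minimize

        # Rastrigin function: riddled with local minima, global minimum at the origin.
        def rastrigin(x):
            x = np.asarray(x)
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        bounds = [(-5.12, 5.12)] * 4

        # Stage 1: annealing-style global search to land in (or near) the right basin.
        coarse = dual_annealing(rastrigin, bounds, maxiter=200)

        # Stage 2: fast gradient-style local refinement from that point.
        fine = minimize(rastrigin, coarse.x, method="L-BFGS-B", bounds=bounds)

        print(coarse.x, fine.x, fine.fun)
        ```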

      2. Zolko Silver badge

        You need to try all options (e.g. gradient, random, nelder-mead)

        You forgot genetic algorithms. They converge very quickly to a very good global optimum and don't get trapped in local minima. They're also quite easy to code. For problems with a large number of parameters, this is the best algorithm I've seen.
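
        ("Easy to code" is fair: below is a minimal toy GA minimising a similarly bumpy test function. The population size, mutation rate and elitism scheme are arbitrary choices for illustration, not a recommendation.)

        ```python
        import math
        import random

        # Bumpy cost function (Rastrigin-style): many local minima, global minimum at all zeros.
        def fitness(genes):
            return sum(g * g - 10 * math.cos(2 * math.pi * g) + 10 for g in genes)

        def evolve(pop_size=60, n_genes=4, generations=200, mut_rate=0.2):
            pop = [[random.uniform(-5, 5) for _ in range(n_genes)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                elite = pop[: pop_size // 4]                 # selection: keep the best quarter
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
                    if random.random() < mut_rate:                       # occasional mutation
                        child[random.randrange(n_genes)] += random.gauss(0, 0.3)
                    children.append(child)
                pop = elite + children
            return min(pop, key=fitness)

        print(evolve())  # should land close to [0, 0, 0, 0]
        ```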

    2. UnknownUnknown Silver badge

      SETI@Home

      TBH it sounds like SETI@Home… but good luck getting people to run the agent on their PC these days. That seems a hard ask in these times of malware, data theft and enshittification.

  2. Anonymous Coward
    Anonymous Coward

    In the washup wasn't 42 == 6×9 ?

    Clearly an LLM on the job there.

    The reward system looks remarkably like crypto mining (proof of work). So the crypto mafia and the AI mafia have joined forces?

    If there is actually any (bit)coin to be made in this, the crypto bros will be dusting off their cryptojacking wares and folding in the #42 sauce.

    1. Wiretrip

      Re: In the washup wasn't 42 == 6×9 ?

      The crypto and AI mafia are one and the same, and have been for some time.

    2. Pascal Monett Silver badge
      Thumb Down

      Re: In the washup wasn't 42 == 6×9 ?

      And I note that you only get some coin if your results are in the top tier.

      Brilliant way to have a large number of people donate their electricity and CPU time for nothing.

      Nothing to see here, people. Move along . . .

    3. elsergiovolador Silver badge

      Re: In the washup wasn't 42 == 6×9 ?

      It is 420 x 69 = 666

      1. Snowy Silver badge
        Coat

        Re: In the washup wasn't 42 == 6×9 ?

        42 does equal 6*9, just not in base 10.

        1. Jedit Silver badge
          Thumb Up

          Re: In the washup wasn't 42 == 6×9 ?

          Specifically it's in base 13.

          But that's not the point. 6x9 is not the true question, but rather a distorted version produced because the Golgafrinchans replaced the intended inhabitants of Earth. There's an implication that the real question is "What do you get if you multiply six by seven?" - effectively saying the answer to Life, the Universe and Everything is "look, it just is, OK?"

  3. Claude Yeller

    AI@Home

    This is the same principle as Folding@Home, but here the nodes are diverse and deliver different services.

    Folding@home and SETI@home are just two such rather successful projects, so the idea is not that outlandish.

    But all these successful projects had an altruistic goal, not a commercial one. I suspect that setting up such a network as a commercial organization will be much more difficult.

    If it can work, I assume someone will try to set up an open community to do the same.

    1. BartyFartsLast Silver badge

      Re: AI@Home

      Indeed, I'm not donating my compute and the power to run it on the off chance I might get a few measly fractions of a bitcoin, which may or may not cover the cost depending on which crypto scammer has pumped and dumped on any given day.

      Plus, I'm not inclined to encourage or add to the AI slop and bullshit.

    2. inikitin

      Re: AI@Home

      SETI@home, BOINC, and Folding@home were massive inspirations (I personally participated in all of them).

      Fortytwo differs in one key way: it is not just about donating compute. It is a platform where people can contribute custom fine-tunes or even foundational models. Each contributed specialized model improves the intelligence of the entire network.

      In order for it to continue to outperform large monolithic models, it requires increasing model diversity. If someone builds the best legal, medical, coding, or chemistry model, we expect them to be rewarded. We don’t buy models from the community; we reward high-quality inferences, as determined through AI peer-ranking. Anyone can plug a model into the network without giving up the model or its data; both remain private. Unique models earn more.

      AI is expensive, from inference compute to the post-training costs that come with fine-tuning specialized models. With that in mind, we don’t expect grassroots contributions to be sustainable without incentives. If we want a broad, resilient, community-scale system, we need to reward contributors based on their impact rather than rely on altruism alone.
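
      (Fortytwo's actual peer-ranking and reward protocol isn't spelled out in this thread, but the general idea - nodes scoring one another's answers, with rewards following the consensus ranking - can be illustrated with a toy example. Every node name, score and payout rule below is made up.)

      ```python
      from statistics import mean

      # Hypothetical swarm: each node answers a query with its own specialized model.
      answers = {
          "node_a": "answer from a legal fine-tune",
          "node_b": "answer from a general-purpose 7B model",
          "node_c": "answer from a coding fine-tune",
      }

      # peer_scores[reviewer][candidate] = score the reviewer's model gave that answer.
      peer_scores = {
          "node_a": {"node_b": 0.6, "node_c": 0.9},
          "node_b": {"node_a": 0.8, "node_c": 0.7},
          "node_c": {"node_a": 0.7, "node_b": 0.5},
      }

      # Consensus score per candidate = mean of the scores from all other nodes.
      consensus = {
          cand: mean(scores[cand] for scores in peer_scores.values() if cand in scores)
          for cand in answers
      }

      # Only the top tier earns anything; here the reward pool is split pro rata
      # among the top half of the ranking (both choices are arbitrary).
      ranked = sorted(consensus, key=consensus.get, reverse=True)
      winners = ranked[: max(1, len(ranked) // 2)]
      pool = 10.0  # reward units for this query, purely hypothetical
      rewards = {w: pool * consensus[w] / sum(consensus[v] for v in winners) for w in winners}
      print(consensus, rewards)
      ```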

      1. CoyoteDen

        Re: AI@Home

        Not necessarily. AI is locally expensive when you have a single large model and compute platform. This shouldn't be any more expensive for any one node than the distributed efforts you did years ago. The total cost is spread out and people donate what they can.

        It IS all about altruism. Giving people cryptocurrency for doing this is just going to attract the wrong kind of participant. You donate compute, and you donate your models and tuning. The open source community has been doing this with code, documentation, project management, etc. for decades. The best part is that no one org owns what comes out of it; everyone owns it.

        This shouldn't be a Silicon Valley project, it should be a university one.

        1. Zolko Silver badge

          Re: AI@Home

          Did you check the names?

          Ivan Nikitin and co-founders Vladyslav Larin and Alexander Firsov

          don't sound very Silicon Valley-ish

    3. Mike VandeVelde
      Facepalm

      Re: AI@Home

      I can remember reading about bitcoin back when it started and thinking wow, that sounds cool, but as if "they" would let it get anywhere, so I carried on with SETI@home because it seemed plenty cool enough.

      I console myself with the fact that any coins I could have mined back in the beginning, when it was cheap to do so, I would absolutely have sold by the time they reached the undreamed-of value of $1,000. I would have had a severe mental health crisis when my already-sold coins got to $10,000. I don't know what might have happened when they reached $100,000.

      Not to mention that even if I had hung onto them, I probably would have had my wallet on something like a Zip drive, with a password no amount of hypnosis could recover by the time I came around to realizing that I could be a millionaire.

      I guess I can be thankful I didn't have to go through all that heartache.

      1. BartyFartsLast Silver badge

        Re: AI@Home

        I did mine a couple back before the Pizza was bought, out of curiosity. No idea where they went, but they're long gone and the hardware destroyed, with no way of proving they were mine, so meh - it's an amusing anecdote.

  4. Long John Silver Silver badge
    Pirate

    At first glance, some attractive ideas

    Ivan Nikitin's recognition of the importance of niche 'AI' models fine-tuned for specific tasks suggests good sense to be crystallising from the headlong rush to 'ever bigger', 'ever costlier', centrally controlled 'universal' 'AIs'. The idea that nodal models of differing construction may according to algorithmic rules self-combine to offer differing 'insights' on a problem is intriguing.

    It is already apparent that modern consumer-level PCs can host cut-down ('refined') versions of gargantuan 'AI' models to assist in various tasks. Networked small models of differing construction and training may enable a single instance of a modestly sized model to poll other models for confirmation/extension of its results. Latency should not bother routine uses of suitably chosen local models: searching for answers to 'big questions' must inevitably occasion delay, whether it is humans or their surrogates which are interrogated.

    Taking lessons from, for example, the distributed 'Freenet' suggests that anonymity could be factored in. Freenet has advantages in security and resilience over Tor, these resulting from 'content' not being localised on traditional servers; the downside is that person-to-person interactions among node operators occur at a pace akin to messages placed in bottles and chucked into an ocean - yet those messages do eventually reach their intended recipients.

    Manufacturing bespoke models suited for PCs is within the capabilities of small companies, academic institutions, and some individuals. If one's requirement is for aid in analysing medical images, one has no need for models trained upon general Internet slop. These models would gain greater power if, according to need, they could confer with models of similar intent hosted elsewhere. Perhaps models created by Google, Amazon, and OpenAI shall literally die out like the dim-witted dinosaurs they emulate. Not only that, but 'AIs' could be networked both according to commercial imperatives and according to sharing paradigms.

    1. This post has been deleted by its author

    2. Anonymous Coward
      Anonymous Coward

      Re: At first glance, some attractive ideas

      Yeah, "swarm" is a bit of a buzzword bingo staple in this one though (along with crypto) aimed at suggesting the potential for singularly emergent flocking sentience or suchlikes, which is highly dubious in this space imho. But networked mixtures of localized experts, or other genAIs, could have traction I guess, especially where individuals own their AIs and rent it out to folks in need of the skills therein, on the basis of a specific task to accomplish, or for some pre-specified amount of time or energy used.

      What we'll need Tobias to do for us then, so as to enable this, is "cobble" together some Hands-On on how to train our own 14-B model from scratch (at home?), on data that somehow represents our own unique (and outstanding) personal abilities ... so that, finally, we can rent-out our skillset virtually to multiple johns, janes, and whathaveyounots, simultaneously, even as we enjoy ourselves with a well-deserved cuppa, or a game of darts, at the local pub! ;)

  5. DarkwavePunk Silver badge

    Aha!

    I was reading the article on edge just waiting - then - "crypto". Of course. Bunch of shitgoblins.

    1. Paul Crawford Silver badge

      Re: Aha!

      Well, if you are going to "earn" money from computing effort, at least solving someone's AI query is better than pointless proof-of-work in the crypto world.

    2. inikitin

      Re: Aha!

      Unfortunately, in today’s world, crypto remains the only reliable global payment mechanism. The network’s architecture is designed so that even if the company behind it ceased to exist, the network could continue operating, maintained by individual contributors and node operators.

      Crypto makes it possible to ensure that node operators – regardless of their location, nationality, or banking access – can be paid fairly and immediately, starting from day one on the network.

      1. Pulled Tea
        Mushroom

        Re: Aha!

        crypto remains the only reliable global payment mechanism

        Oh, dear.

        Crypto makes it possible to ensure that node operators – regardless of their location, nationality, or banking access – can be paid fairly and immediately

        Oh, dear.

        I really don't want to start litigating whether cryptocurrencies (I honestly refuse to call it crypto; cryptographers got the term first) or the blockchain could be called “the only reliable global payment mechanism” at all, but like… to put it charitably, that is a controversial position to hold. We could go into it, but there are other places on the Internet where that discussion is… “robustly held”, if only because BTC just hit yet another ATH or whatever it is blockchain fans call 'em.

        I think the whole idea is interesting, in that you might be right that spreading the inference across multiple smaller (better-curated, more specialized) models might be the way to go, but like… if you're going to use cryptocurrencies, you're going to inherit cryptocurrency problems. And one of those problems is the immense amount of money and crime involved in the whole space, and the many, many, many blockchain-poisoned and -HODLing maniacs who want some kind of return on the investment they've made sinking all of their money into their mining rigs.

        It might start decentralized, but it will consolidate very quickly, and if history is any indication, you're going to relearn a lot of things that other, more centralized fields learned early on in their history (much like BTC HODLers learned very quickly why financial laws exist).

        That is… assuming that you lot aren't blockchain fans already. In which case… nice try.

    3. DS999 Silver badge

      I figured it was only a matter of time

      Before someone combined AI & crypto hoping to cash in from both sectors at once.

      I suppose these compute cycles at least have the possibility of doing something useful, unlike the complete waste that is bitcoin.

      1. iron

        Re: I figured it was only a matter of time

        Sam Alternative Man already went there a few years ago with his World Con, sorry World Coin.

        He even added retina scanning and human verification (Voight-Kampff test?) for extra dystopian points.

    4. breakfast Silver badge

      Re: Aha!

      You start reading the article and see "AI Startup" so you are immediately 96% certain that this is going to be some bullshit, then you see "crypto" and the top explodes off the certainty meter as the bar reaches for the sky.

  6. cookiecutter Silver badge

    legacy tech

    Just to annoy them, I'm referring to Microsoft, Amazon and Meta as legacy tech stuck in the '50s.

    1. sabroni Silver badge
      Facepalm

      Re: just to annoy them

      I'm sure they noticed and got really upset!!

  7. nobody who matters Silver badge

    Whatever answer to life, the universe, and everything Fortytwo's decentralised AI has come up with, it will almost certainly be the wrong one

  8. EricM Silver badge

    So the answer to "life, the universe, and everything" regarding AI now is: Don't only scam investors, also scam end users into paying for your energy and hardware bills?

    Saying that this scheme addresses "a practical issue: the shortage of centralized computing resources" is just another way of saying that running AI inference requires much more CPU/GPU and power than traditional IT - a very basic issue that makes AI impractical and too expensive for most tasks to be economically viable.

    Shifting the burden of providing CPU/GPU and power to decentralized end users will not solve this underlying, expensive problem.

    They are literally trying to cloak this problem behind an S.E.P. field (https://hitchhikers.fandom.com/wiki/Somebody_Else%27s_Problem_Field).

    1. sabroni Silver badge
      Meh

      That power isn't wasted

      You can't hallucinate an answer using normal computing!!

    2. inikitin

      It can solve the expensive problem if you build the system differently – focusing on small, specialized models and enabling a network of such models to act as a single, unified model where:

      - only idle compute is used, with nodes running in the background, so users’ daily workflows remain unaffected (node operators don't keep separate hardware running 24/7; the nodes run only while the computer is actually in use).

      - improvements in capability (inference accuracy, domain expertise) come from specialized model diversity, not from expensive large "god model" training runs.

      - incentives exist for AI enthusiasts to create custom fine-tunes, optimizing for higher rewards.

      As a result, you get a swarm of nodes running small models in the background on consumer hardware, but their combined capabilities (accuracy) remain competitive with, and can even surpass, the much more compute-intensive inference passes of large, centralized models.
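
      (A toy sketch of the general shape of such a system; the model names, the keyword router and the fan-out below are assumptions for illustration, not Fortytwo's published design.)

      ```python
      import random

      # Hypothetical registry of small specialized models available across the swarm.
      SWARM = {
          "legal":   ["legal-3b@node17", "legal-7b@node42"],
          "coding":  ["code-7b@node03", "code-3b@node88"],
          "general": ["general-7b@node09"],
      }

      def classify(query: str) -> str:
          """Crude keyword router; a real system would use a learned classifier."""
          q = query.lower()
          if "contract" in q or "liability" in q:
              return "legal"
          if "python" in q or "compile" in q:
              return "coding"
          return "general"

      def run_on_node(model: str, query: str) -> str:
          """Placeholder for a remote inference call to a node's idle compute."""
          return f"[{model}] draft answer to: {query}"

      def swarm_answer(query: str, fanout: int = 2) -> str:
          domain = classify(query)
          nodes = random.sample(SWARM[domain], k=min(fanout, len(SWARM[domain])))
          drafts = [run_on_node(m, query) for m in nodes]
          # In a real swarm the drafts would be peer-ranked; here we just take the first.
          return drafts[0]

      print(swarm_answer("Does this contract limit liability?"))
      ```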

  9. Khaptain Silver badge

    20 minutes for a response

    It's hard to imagine who has that much patience.

  10. Autonomous Mallard

    Energy Use

    On the whole, I'm not sure whether this actually addresses the energy demand issues involved in modern AI. While it does spread out the demand, the energy still has to be spent. Most consumer hardware will not be as energy efficient as datacenter kit (i.e. cycles per watt), and distributing the load across the grid could actually make meeting additional demand _more_ difficult. Upgrading grid capacity to every endpoint requires replacing or upgrading more equipment than constructing new generation at the point of use, and distribution losses in a grid are significant.

    I think coupling this with point-of-use generation (e.g. residential solar) and/or microgrids would work well. Distributing our computing and energy generation capacity would improve resilience to extreme weather events and other regional disruptions.

    1. RamenJunkie

      Re: Energy Use

      It "solves" the energy use problem for the giant companies losing billions of dollars on this crap.

  11. SimpleMan

    Is this a Bittensor Subnet?

    Cool idea. The incentive portion reminds me of Bittensor. Is this a Bittensor subnet?

  12. mevets

    I wonder if they will get miffed?

    I might be willing to accept that it is some form of actual intelligence if one of the nodes, Alice, gets in a snit because her answers are always getting ignored in favour of Hal's. The bitch.

  13. RamenJunkie

    So offset the expensive energy use, plus wear out people's phones and desktops faster.

    For what?

    Literally nothing. Some worthless WOW Tokens.
