The principle reminds me of when I used Super Compact on a VAX in the 1980s to design microstrip circuits. You could run an optimization routine which fiddled with the values of components in the circuit to hit the desired performance. Gradient optimization was the fastest: it went down the steepest local gradient towards the desired performance, but the problem was that it could get trapped in a local minimum - which was not the overall best performance - and never get out, because every way is up. Random optimization was the other, slower, option. It just jiggled everything between the pre-set limits, and if you left it long enough it would usually find the best answer. Gradient could be tens of times faster than random, which mattered when you were paying for CPU time. In practice the preferred tactic was to run gradient to get a quick solution, then run random for a while to make sure you hadn't hit a local minimum.
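That gradient-then-random tactic translates into a few lines of modern code. A minimal sketch, with a made-up bumpy "circuit response" standing in for the real microstrip objective (the function, step sizes, and limits here are all illustrative, not anything from Super Compact):

```python
import math
import random

def f(x):
    # A bumpy 1-D "circuit response" with many local minima;
    # the global minimum is at x = 0, where f(0) = 0.
    return x * x + 3.0 * (1.0 - math.cos(5.0 * x))

def gradient_descent(f, x, step=0.01, iters=2000, eps=1e-6):
    # Follow the steepest local slope; fast, but can get stuck in a dip.
    for _ in range(iters):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= step * grad
    return x

def random_search(f, lo, hi, best_x, samples=5000, seed=1):
    # Jiggle the value between preset limits, keeping the best answer seen.
    rng = random.Random(seed)
    best = f(best_x)
    for _ in range(samples):
        x = rng.uniform(lo, hi)
        if f(x) < best:
            best, best_x = f(x), x
    return best_x

x = gradient_descent(f, x=2.0)      # quick solution, possibly a local minimum
x = random_search(f, -3.0, 3.0, x)  # then check we didn't miss a deeper one
```

The random pass keeps the gradient result as its incumbent best, so it can only improve on it.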
Fortytwo's decentralized AI has the answer to life, the universe, and everything
Fortytwo, a Silicon Valley startup, was founded last year based on the idea that a decentralized swarm of small AI models running on personal computers offers scaling and cost advantages over centralized AI services. On Friday, the company published benchmark results claiming that its swarm inference scheme outperformed OpenAI …
COMMENTS
-
-
Sunday 2nd November 2025 11:06 GMT m4r35n357
Heh, we ran one of the early PC versions of SC on an AT machine in the late '80s.
For those who are not aware (I'm sure you don't need a lecture from me!), "global optimization" is hard! You need to try all the options (e.g. gradient, random, Nelder-Mead*) and leave them running. It all looks easy at first, until you encounter the "curse of dimensionality".
*Nelder-Mead is unpopular with some, but I have used it successfully even for "global" searches.
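A back-of-envelope illustration of that curse: if a random sample only counts as a "hit" when every parameter lands within ±0.1 of the optimum on a unit-sized range, the per-sample hit probability is 0.2 per axis, so 0.2**d jointly, and the expected sample count explodes with dimension d (the window width is an arbitrary choice here):

```python
# Expected number of uniform random samples needed for one "hit"
# when each of d parameters must land in a window of width 0.2
# on a unit range: the hit probability is 0.2**d.
for d in (1, 2, 5, 10):
    expected_samples = 1 / 0.2 ** d
    print(f"d={d:2d}: ~{expected_samples:,.0f} samples")
```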
-
Sunday 2nd November 2025 17:43 GMT Paul Crawford
As you say, it is hard to do when you want a global minimum but (typically) don't know roughly where it is. Sometimes I have used combinations of methods: an annealing style to get somewhere close, then a faster gradient style once the locale is known. But some problems are just really troublesome...
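That two-stage combination (anneal to find the right basin, then gradient to polish) might be sketched like this; the objective is a toy stand-in with many local minima, and the temperature schedule and step sizes are arbitrary:

```python
import math
import random

def f(x):
    # Bumpy toy objective with many local minima; global minimum at x = 0.
    return x * x + 3.0 * (1.0 - math.cos(5.0 * x))

def anneal(f, lo, hi, temp=2.0, cooling=0.995, steps=3000, seed=7):
    # Simulated annealing: accept uphill moves with probability exp(-df/T),
    # so the search can climb out of local dips while the temperature is high.
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best = x
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.3)))
        df = f(cand) - f(x)
        if df < 0 or rng.random() < math.exp(-df / temp):
            x = cand
            if f(x) < f(best):
                best = x
        temp *= cooling
    return best

def polish(f, x, step=0.005, iters=2000, eps=1e-6):
    # Plain gradient descent to refine once the right basin is found.
    for _ in range(iters):
        grad = (f(x + eps) - f(x - eps)) / (2 * eps)
        x -= step * grad
    return x

x = polish(f, anneal(f, -3.0, 3.0))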
-
Tuesday 4th November 2025 11:57 GMT Zolko
You need to try all options (e.g. gradient, random, nelder-mead)
you forgot genetic algorithms. They converge quickly to a very good global optimum and don't get trapped in local minima. They're also quite easy to code. For problems with a large number of parameters, this is the best algorithm I've seen.
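A bare-bones genetic algorithm really is short. A toy sketch on the classic OneMax problem (maximize the count of 1-bits in a bitstring), with assumed population and mutation settings:

```python
import random

def fitness(genome):
    # Toy problem (OneMax): fitness is the number of 1-bits.
    return sum(genome)

def evolve(bits=20, pop_size=30, generations=60, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: compare two random individuals.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, bits)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(bits):              # occasional bit-flip mutation
                if rng.random() < 0.01:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Selection, crossover, and mutation are the whole trick; for a real parameter-fitting problem the genome would be a vector of component values rather than bits.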
-
-
-
Sunday 2nd November 2025 09:30 GMT Anonymous Coward
In the washup wasn't 42 == 6×9 ?
Clearly LLM on the job there.
The reward system looks remarkably like crypto mining (proof of work). So the crypto mafia and AI mafia have joined forces?
If there is actually any (bit)coin to be made in this, the crypto bros will be dusting off their cryptojacking wares and folding in the #42 sauce.
-
-
-
Monday 3rd November 2025 13:29 GMT Jedit
Re: In the washup wasn't 42 == 6×9 ?
Specifically it's in base 13.
But that's not the point. 6x9 is not the true question, but rather a distorted version produced because the Golgafrinchans replaced the intended inhabitants of Earth. There's an implication that the real question is "What do you get if you multiply six by seven?" - effectively saying the answer to Life, the Universe and Everything is "look, it just is, OK?"
-
-
-
Sunday 2nd November 2025 11:07 GMT Claude Yeller
AI@Home
This is the same principle as Folding@Home. But now nodes are diverse and deliver different services.
Folding@home and SETI@home are just two such rather successful projects. So, the idea is not that outlandish.
But all these successful projects had an altruistic goal, not commercial. I suspect that setting up such a network as a commercial organization will be much more difficult.
If it can work, I assume someone will try to set up an open community to do the same.
-
Sunday 2nd November 2025 12:22 GMT BartyFartsLast
Re: AI@Home
Indeed, I'm not donating my compute and the power to run it on the off chance I might get a few measly fractions of a bitcoin which may, or may not, cover the cost, depending on which crypto scammer has pumped and dumped on any given day.
Plus, I'm not inclined to encourage or add to the AI slop and bullshit
-
Sunday 2nd November 2025 19:36 GMT inikitin
Re: AI@Home
SETI@home, BOINC, and Folding@home were massive inspirations (I personally participated in all of them).
Fortytwo differs in one key way: it is not just about donating compute. It is a platform where people can contribute custom fine-tunes or even foundational models. Each contributed specialized model improves the intelligence of the entire network.
For it to continue to outperform large monolithic models, the network requires increasing model diversity. If someone builds the best legal, medical, coding, or chemistry model, we expect them to be rewarded. We don't buy models from the community; we reward high-quality inferences, as determined through AI peer-ranking. Anyone can plug a model into the network without giving up the model or its data: both remain private. Unique models earn more.
AI is expensive, from inference compute to the post-training costs that come with fine-tuning specialized models. With that in mind, we don't expect grassroots contributions to be sustainable without incentives. If we want a broad, resilient, community-scale system, we need to reward contributors based on their impact rather than rely on altruism alone.
-
Monday 3rd November 2025 15:42 GMT CoyoteDen
Re: AI@Home
Not necessarily. AI is locally expensive when you have a single large model and compute platform. This shouldn't be any more expensive for any one node than the distributed efforts you did years ago. The total cost is spread out and people donate what they can.
It IS all about altruism. Giving people cryptocurrency for doing this is just going to attract the wrong kind of participant. You donate compute, and you donate your models and tuning. The open source community has been doing this with code, documentation, project management, etc. for decades. The best part about it is that no one org owns what comes out of it; everyone owns it.
This shouldn't be a silicon valley project, it should be a university one.
-
-
Monday 3rd November 2025 01:19 GMT Mike VandeVelde
Re: AI@Home
I can remember reading about bitcoin back when it started and thinking wow that sounds cool, but as if "they" will let it get anywhere, so I carried on with SETI@home because it seemed plenty cool enough.
I console myself with the fact that any coins I could have mined back in the beginning when it was cheap to do so, I would have absolutely sold them by the time they reached the undreamed of value of $1000. I would have had a severe mental health crisis when my already sold coins got to $10,000. I don't know what might have happened when they reached $100,000.
Not to mention that even if I'd hung onto them, I probably would have had my wallet on something like a Zip drive that no amount of hypnosis could recover the password to when I came around to realizing that I could be a millionaire.
I guess I can be thankful I didn't have to go through all that heart ache.
-
-
Sunday 2nd November 2025 11:36 GMT Long John Silver
At first glance, some attractive ideas
Ivan Nikitin's recognition of the importance of niche 'AI' models fine-tuned for specific tasks suggests good sense to be crystallising from the headlong rush to 'ever bigger', 'ever costlier', centrally controlled 'universal' 'AIs'. The idea that nodal models of differing construction may according to algorithmic rules self-combine to offer differing 'insights' on a problem is intriguing.
It is already apparent that modern consumer-level PCs can host cut-down ('refined') versions of gargantuan 'AI' models to assist in various tasks. Networked small models of differing construction and training may enable a single instance of a modestly sized model to poll other models for confirmation/extension of its results. Latency should not bother routine uses of suitably chosen local models: searching for answers to 'big questions' must inevitably occasion delay, whether it is humans or their surrogates which are interrogated.
Taking lessons from, for example, the distributed 'Freenet' suggests that anonymity could be factored in. Freenet has advantages in security and resilience over Tor, these resulting from 'content' not being localised on traditional servers; the downside is that person-to-person interactions among node operators occur at a pace akin to messages placed in bottles and chucked into an ocean, yet those messages do eventually reach their intended recipients.
Manufacturing bespoke models suited for PCs is within the capabilities of small companies, academic institutions, and some individuals. If one's requirement is for aid in analysing medical images, one has no need for models trained upon general Internet slop. These models would gain greater power if, according to need, they could confer with models of similar intent hosted elsewhere. Perhaps models created by Google, Amazon, and OpenAI shall literally die out like the dim-witted dinosaurs they emulate. Not only that, but 'AIs' could also be networked according to commercial imperatives or to sharing paradigms.
-
This post has been deleted by its author
-
Sunday 2nd November 2025 16:15 GMT Anonymous Coward
Re: At first glance, some attractive ideas
Yeah, "swarm" is a bit of a buzzword bingo staple in this one though (along with crypto) aimed at suggesting the potential for singularly emergent flocking sentience or suchlikes, which is highly dubious in this space imho. But networked mixtures of localized experts, or other genAIs, could have traction I guess, especially where individuals own their AIs and rent it out to folks in need of the skills therein, on the basis of a specific task to accomplish, or for some pre-specified amount of time or energy used.
What we'll need Tobias to do for us then, so as to enable this, is "cobble" together some Hands-On on how to train our own 14-B model from scratch (at home?), on data that somehow represents our own unique (and outstanding) personal abilities ... so that, finally, we can rent-out our skillset virtually to multiple johns, janes, and whathaveyounots, simultaneously, even as we enjoy ourselves with a well-deserved cuppa, or a game of darts, at the local pub! ;)
-
-
-
Sunday 2nd November 2025 19:36 GMT inikitin
Re: Aha!
Unfortunately, in today’s world, crypto remains the only reliable global payment mechanism. The network’s architecture is designed so that even if the company behind it ceased to exist, the network could continue operating, maintained by individual contributors and node operators.
Crypto makes it possible to ensure that node operators – regardless of their location, nationality, or banking access – can be paid fairly and immediately, starting from day one on the network.
-
Monday 3rd November 2025 03:56 GMT Pulled Tea
Re: Aha!
crypto remains the only reliable global payment mechanism
Oh, dear.
Crypto makes it possible to ensure that node operators – regardless of their location, nationality, or banking access – can be paid fairly and immediately
Oh, dear.
I really don't want to start litigating whether cryptocurrencies (I honestly refuse to call it crypto; cryptographers got the term first) or the blockchain could be called “the only reliable global payment mechanism” at all, but like… to put it charitably, that is a controversial position to hold. We could go into it, but there are other places on the Internet where that discussion is… “robustly held”, if only because BTC just got hit with yet another ATH or whatever it is blockchain fans call 'em.
I think the whole idea is interesting, in that you might be right that spreading the inference across multiple smaller (better-curated, more specialized) models is the way to go, but like… if you're going to use cryptocurrencies, you're going to inherit cryptocurrency problems. And one of those problems is the immense amount of money and crime involved in the whole space, and the many, many, many blockchain-poisoned and -HODLing maniacs who want some kind of return on the investment they made sinking all of their money into their mining rigs.
It might start decentralized, but it will consolidate very quickly, and if history is any indication, you're going to relearn a lot of things that other, more centralized fields learned early in their history (much like BTC HODLers learned very quickly why financial laws exist).
That is… assuming that you lot aren't blockchain fans already. In which case… nice try.
-
-
Sunday 2nd November 2025 21:21 GMT EricM
So the answer to "life, the universe, and everything" regarding AI now is: Don't only scam investors, also scam end users into paying for your energy and hardware bills?
Saying that this scheme addresses "a practical issue: the shortage of centralized computing resources" is just another way of saying that running AI inference requires much more CPU/GPU and power than traditional IT – a fundamental issue that makes AI impractical and too expensive for most tasks to be economically viable.
Shifting the burden of providing CPU/GPU and power to decentralized end users will not solve this underlying, expensive problem.
They are literally trying to cloak this problem with an S.E.P. field (https://hitchhikers.fandom.com/wiki/Somebody_Else%27s_Problem_Field).
-
Monday 3rd November 2025 11:53 GMT inikitin
It can solve the expense problem if you build the system differently – focusing on small, specialized models and enabling a network of such models to act as a single, unified model where:
- only idle compute is used, with nodes running in the background so users' daily workflows remain unaffected (node operators don't keep separate hardware running 24/7; nodes run only while the computer is actually in use).
- improvements in capability (inference accuracy, domain expertise) come from specialized model diversity, not from expensive large "god model" training runs.
- incentives exist for AI enthusiasts to create custom fine-tunes, optimizing for higher rewards.
As a result, you get a swarm of nodes running small models in the background on consumer hardware, but their combined capabilities (accuracy) remain competitive with, and can even surpass, the much more compute-intensive inference passes of large, centralized models.
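For illustration only, here is roughly how a peer-ranked swarm could be wired up, with the node models stubbed out as canned answers and a trivially simple scoring rule; the `swarm_answer` function and node layout are invented for this sketch and do not reflect Fortytwo's actual protocol:

```python
from collections import defaultdict

def swarm_answer(query, nodes):
    # Each node proposes an answer, then every node scores every other
    # node's proposal; the highest peer-ranked answer wins.
    proposals = {name: node["propose"](query) for name, node in nodes.items()}
    scores = defaultdict(float)
    for judge_name, judge in nodes.items():
        for name, answer in proposals.items():
            if name != judge_name:  # nodes don't score their own proposal
                scores[name] += judge["score"](query, answer)
    best = max(scores, key=scores.get)
    return proposals[best], best

# Toy nodes: each "model" is a canned answer, and each scorer naively
# prefers longer answers. Purely illustrative stand-ins for real models.
nodes = {
    "legal":  {"propose": lambda q: "short",
               "score": lambda q, a: len(a)},
    "coding": {"propose": lambda q: "a much more detailed answer",
               "score": lambda q, a: len(a)},
    "chem":   {"propose": lambda q: "medium answer",
               "score": lambda q, a: len(a)},
}
answer, winner = swarm_answer("what is 6 x 7?", nodes)
```

In a real network, the propose and score stubs would be each node's own model generating and judging candidate inferences, and the peer scores would feed the reward mechanism.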
-
Monday 3rd November 2025 17:25 GMT Autonomous Mallard
Energy Use
On the whole, I'm not sure whether this actually addresses the energy demand issues involved in modern AI. While it does spread out the demand, the energy still has to be spent. Most consumer hardware will not be as energy efficient as datacenter kit (i.e. cycles per watt), and distributing the load across the grid could actually make meeting additional demand _more_ difficult. Upgrading grid capacity to every endpoint requires replacing/upgrading more equipment than constructing new generation at the point of use, and distribution losses in a grid are significant.
I think coupling this with point-of-use generation (e.g residential solar) and/or microgrids would work well. Distributing our computing and energy generation capacity would improve resilience to extreme weather events and other regional disruptions.