* Posts by TheLLMLad

3 publicly visible posts • joined 10 Feb 2024

Encryption backdoor debate 'done and dusted,' former White House tech advisor says

TheLLMLad

Finally law enforcement agencies seem to have come around, and perhaps the politicians too. The NSA and its peers figured this out almost a decade ago, but law enforcement was resistant, since foreign matters are not their remit (FBI counterintelligence notwithstanding). The intelligence side would much rather the Chinese be unable to read the traffic than keep the ability to read it themselves. The era when a solution could let the NSA read everything while keeping everyone else out has passed.

Just how deep is Nvidia's CUDA moat really?

TheLLMLad

Ultimately CUDA is under assault from three vectors:

First, enthusiasts and hobbyists who can't afford Nvidia hardware with enough VRAM to do anything interesting.

Second, the rival chip manufacturers themselves, with AMD pushing ROCm and Intel pushing oneAPI as alternatives.

Third, and perhaps most importantly, all of the hyperscalers are investing heavily in their own accelerators. None of them can tolerate the status quo; most can't even secure enough Nvidia chips to do what they want.
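To put the first point in concrete terms: at the framework level the lock-in is already pretty thin. A rough sketch of device-agnostic PyTorch, assuming you've installed whichever build matches your hardware (the ROCm build answers to the same "cuda" device string); an illustration, nothing more:

import torch

def pick_device() -> torch.device:
    # CUDA and ROCm builds of PyTorch both report themselves as "cuda";
    # Apple silicon shows up as "mps"; everything else falls back to CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()

# The model code itself is identical regardless of whose silicon it lands on.
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
print(device, model(x).shape)

Point being, the moat lives in the kernels and tooling underneath, not in anything the average user ever types.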

I think the long-term prognosis for Nvidia remains dim, not in a "Nvidia will fail" way but in a "it'll go back to being an AMD-sized company rather than an Amazon-sized one" way. The moment right now feels a bit like the mid-2010s, when it appeared that Intel had a monopoly on x86 that nobody could challenge (eventually AMD came back to save the day, as it were, and Arm started being taken seriously as a result of that period too), but perhaps even more volatile. Nvidia does have a lot of the fab time booked and the HBM bought, which will make competition somewhat difficult, but you can already see Google and now Anthropic (via AWS Inferentia) migrating towards custom accelerators and away from GPUs.

And of course GPU architectures aren't purely optimized for matmul either; there's a lot to be said for, say, Cerebras' wafer-scale approach (which also handily doesn't rely on HBM).
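Back-of-envelope on the HBM point, using assumed, roughly H100-class figures (my own numbers, not from the article):

# Roofline-style estimate for a single bf16 matmul.
# PEAK_FLOPS and HBM_BW are assumed, approximate H100-class figures.
PEAK_FLOPS = 990e12      # ~990 TFLOPS dense bf16
HBM_BW = 3.35e12         # ~3.35 TB/s HBM3

def arithmetic_intensity(m, k, n, bytes_per_elem=2):
    flops = 2 * m * k * n                                # multiply-accumulates
    traffic = (m * k + k * n + m * n) * bytes_per_elem   # minimum bytes moved
    return flops / traffic

ridge = PEAK_FLOPS / HBM_BW   # ~295 FLOP/byte needed to stay compute-bound

for m in (1, 16, 512):        # tokens per step
    ai = arithmetic_intensity(m, 8192, 8192)
    bound = "compute-bound" if ai > ridge else "bandwidth-bound"
    print(f"batch={m:4d}  {ai:7.1f} FLOP/byte  -> {bound}")

At batch size 1 you're sat at roughly 1 FLOP per byte against a ridge of nearly 300, which is why small-batch inference lives and dies on memory bandwidth, and why keeping the weights in on-wafer SRAM is such an attractive dodge.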

Sam Altman's chip ambitions may be loonier than feared

TheLLMLad

OpenAI isn't so special

I tend to concur with the assessment that Altman is playing point man for hyping up OpenAI, because fundamentally there really isn't anything special about GPT other than it being bigger than everyone else's models and using a lot more fine-tuning. Their only real trick, mixture of experts, has already been done by the French (Mistral), and it took them what, six months?
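For anyone who hasn't looked at it, the "trick" is just a learned router sending each token through a couple of small feed-forward experts instead of one big block. A toy sketch of top-k routing (my own toy code and naming, nothing to do with anyone's production models):

import torch
import torch.nn.functional as F

class TinyMoE(torch.nn.Module):
    # Toy top-k mixture-of-experts layer: each token is routed to k experts.
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = torch.nn.Linear(d_model, n_experts)   # learned router
        self.experts = torch.nn.ModuleList(
            torch.nn.Sequential(
                torch.nn.Linear(d_model, 4 * d_model),
                torch.nn.GELU(),
                torch.nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x):                        # x: (tokens, d_model)
        scores = self.gate(x)                    # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)     # renormalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):               # only k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

print(TinyMoE()(torch.randn(10, 64)).shape)      # torch.Size([10, 64])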

OK, it's from Google, but the "we have no moat" memo was entirely on point. Unless OpenAI has a new model architecture hiding somewhere, they're nothing special at all, really.