... much more complex than ... more profits
Profits only if you ignore all costs except electricity, under the assumption of a final end state of electricity -> token production.
762 publicly visible posts • joined 16 Jul 2024
Buffy Jo Christina Wicks (born August 10, 1977) is an American politician who serves in the California State Assembly. A Democrat, she represents the 14th Assembly District, which includes the cities of Berkeley, Piedmont, Richmond, San Pablo, and El Cerrito in the East Bay. ... Wicks is a member of the California Legislative Progressive Caucus.
“California’s children are growing up in an online world with no guardrails, leaving them vulnerable to cyberbullying, sextortion, and mental health harms. This is simply unacceptable,” said Assemblymember Wicks. “AB 1043 offers a scalable, privacy-first approach that helps keep kids safe while holding tech companies accountable.”
The bill does nothing of the sort, but it does murder open software. I voted for her! Regret.
Greg Brockman, OpenAI’s cofounder and longtime president, and his wife Anna donated about 25 million dollars in September 2025 to MAGA Inc., a major pro‑Trump super PAC supporting President Donald Trump. Anthropic didn't. Here is Adam Smith describing what is NOT capitalism but rather the problem that capitalism is supposed to fix -
This scheme of making the administration of justice subservient to the purposes of revenue could scarce fail to be productive of several very gross abuses. The person who applied for justice with a large present in his hand was likely to get something more than justice, while he who applied for it with a small one was likely to get something less. Justice too might frequently be delayed in order that this present might be repeated. ... The amercement, besides, of the person complained of might frequently suggest a very strong reason for finding him in the wrong, even when he had not really been so. That such abuses were far from being uncommon, the ancient history of every country in Europe bears witness.
Apparently ~40% of their revenue was bitcoin related - fancy interface tools for shifting speculation money. That dried up this year.
But that bitcoin-related revenue had a low profit margin - ~15% - relative to the rest of their business. So by dropping the bitcoin-related work (and the employees who did it), their profit margin as a percentage of revenue does indeed go up.
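The arithmetic is easy to check with a quick sketch. The 40% revenue share and ~15% margin come from the figures above; the 40% margin for the rest of the business is purely an assumed number for illustration.

```python
# Hypothetical illustration: dropping a low-margin revenue line raises
# the blended margin percentage even though total revenue falls.
btc_rev, btc_margin = 40.0, 0.15    # ~40% of revenue at ~15% margin (from the post)
rest_rev, rest_margin = 60.0, 0.40  # margin of the remaining business is ASSUMED

blended = (btc_rev * btc_margin + rest_rev * rest_margin) / (btc_rev + rest_rev)
after_drop = rest_margin  # only the higher-margin business remains

print(f"blended margin: {blended:.0%}")     # 30%
print(f"after dropping bitcoin: {after_drop:.0%}")  # 40%
```

Margin as a percentage of revenue rises from 30% to 40%, with no AI involved at all.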
It's quite possible that is the total explanation and AI has nothing to do with it.
Both Amazon's and Nvidia's investments are structured in such a way as to guarantee a return on every dollar invested in OpenAI. The funding is essentially a discount on compute infrastructure that doesn't dilute their revenues while driving up OpenAI's valuation.
However, both are spending massive amounts of money on datacenters and compute (Trainiums, Vera Rubins) - isn't that true? I guess taxes are paid on revenue but not on investments.
It didn't actually state that AI shouldn't be used at all to write the tests. The only technical explanation was this:
"When the tests exist before the code, agents cannot cheat by writing a test that simply confirms whatever incorrect implementation they produced."
It would be better stated as: tests and code should be written independently, yet derived from the same specification - writing either the code or the tests with knowledge of the other is likely to introduce bias.
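A minimal sketch of what that looks like in practice. The `clamp` function and its one-line spec are entirely hypothetical, invented for illustration: the tests encode only the written specification and were drafted without looking at the implementation.

```python
# Hypothetical spec: "clamp(x, lo, hi) returns lo if x < lo,
# hi if x > hi, otherwise x."

def clamp(x, lo, hi):
    """Implementation under test - written separately from the tests."""
    return max(lo, min(x, hi))

def test_clamp_from_spec():
    # Each case comes straight from the spec text, not from reading clamp()
    assert clamp(5, 0, 10) == 5     # in range: unchanged
    assert clamp(-3, 0, 10) == 0    # below range: lo
    assert clamp(42, 0, 10) == 10   # above range: hi

test_clamp_from_spec()
print("spec-derived tests passed")
```

If the implementation were wrong, tests written this way would catch it, whereas tests generated from the broken code would simply ratify the bug.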
I read right over that with an internal spell corrector, so it really doesn't bother me. As for comments being AI generated, I wouldn't really care so long as they had been thoroughly read and checked for accuracy (of course), but also very importantly for brevity. Both in code and comments, AI can, and too frequently does, violate Occam's razor by many orders of magnitude, obscuring the gist completely, all while having perfect spelling and grammar.
The smart thing to do would be to copy it to a USB stick or other smallest possible device, take it to them, and hand it over politely (of course deleting your own copy as well). That lessens the probability of them seizing your main personal disk, and other things, which you would never see again.
Attempting to make it in the rugged individualist and capitalist competitive commercial market - rather than taking the easy way and hitching a ride on the government gravy train - gets big kudos in my book. Getting hooked on welfare is a hard habit to kick. Let the traditionally disadvantaged companies such as OpenAI, Google, MS, and Amazon pick up that easy DEI money, because "from each according to their abilities, to each according to their needs".
If they REALLY were doing that deliberately, they would make sure not to print the names of those private files. There was no claim of having seen such file names in the past, so it seems not pertinent to the topic.
Is it a risk? Yes. Therefore one should always run such software in an environment such as a container where no private files can be accessed, including no encryption key files.
At Anthropic—an AI lab building some of the world’s most advanced models—engineers are no longer writing the code that powers their products; they’re outsourcing it to AI. The head of Anthropic’s Claude Code, Boris Cherny, has announced he hasn’t written any code in more than two months.
In a post on X, Cherny said 100% of his code is now written by Anthropic’s Claude Code and Opus 4.5. Across the rest of the company, he says “pretty much 100%” of code is also AI-generated.
“For me personally, it has been 100% for two+ months now, I don’t even make small edits by hand,” Cherny wrote in a post on X responding to AI researcher Andrej Karpathy. “I shipped 22 PRs (pull requests) yesterday and 27 the day before, each one 100% written by Claude.”
The comments echo remarks made earlier this month by Anthropic CEO Dario Amodei at the World Economic Forum, where he noted that some engineers at his company have stopped writing code themselves and instead rely on AI models to generate it while they focus on editing. At Davos, Amodei predicted that the industry may be just six to twelve months away from AI handling most or all of software engineering work from start to finish.
Top engineers at Anthropic, OpenAI say AI now writes 100% of their code
LLM AI companies are competing on per-token price and token throughput - and once they have settled on that metric, it's in their interest to produce long streams of tokens with low average information value that fit a plausible Gaussian distribution - because it's cheaper per token.
The TypeScript compiler was originally written in TypeScript, but was recently rewritten in Go. (TypeScript was good for testing, but terrible for performance.) TypeScript 10x Faster: Why Microsoft Chose Go Over Rust for Compiler Rewrite
"Microsoft’s TypeScript 7.0 compiler, rewritten in Go, achieves 10x performance gains according to the December 2, 2025 progress update—compiling VSCode’s 1.5 million lines in 8.74 seconds instead of 89 seconds."
Chess is much more of a closed set problem than designing a new language.
I would say, however, that a new compiled memory-safe language could evolve, but it would likely be humans, not AI, that invent it - it's just that AI could make it possible to write at a higher level than Rust (say, Rust++), the compiler for which would run on a GPU and produce lower-level code equivalent to Rust. It wouldn't have to be perfect, just practically useful.
It's different because at work, other people can't escape. Gratuitously insulting people or baiting conflict could cause people to become unproductive or leave. There is no magic pre-formulated answer for what exactly is "right", but respectful behavior and dignity do actually exist.
After acquiring GitHub in 2018, Microsoft mostly let the developer platform run autonomously. But in recent months, that’s changed. With GitHub CEO Thomas Dohmke leaving the company this August, and GitHub being folded more deeply into Microsoft’s organizational structure, GitHub lost that independence. Now, according to internal GitHub documents The New Stack has seen, the next step of this deeper integration into the Microsoft structure is moving all of GitHub’s infrastructure to Azure, even at the cost of delaying work on new features.
In a message to GitHub’s staff, CTO Vladimir Fedorov notes that GitHub is constrained on capacity in its Virginia data center. “It’s existential for us to keep up with the demands of AI and Copilot, which are changing how people use GitHub,” he writes.
...
“We have to do this,” Fedorov writes. “It’s existential for GitHub to have the ability to scale to meet the demands of AI and Copilot, and Azure is our path forward. We have been incrementally using more Azure capacity in places like Actions, search, edge sites and Proxima, but the time has come to go all-in on this move and finish it.”
GitHub has recently seen more outages, in part because its central data center in Virginia is indeed resource-constrained and running into scaling issues. AI agents are part of the problem here. But it’s our understanding that some GitHub employees are concerned about this migration because GitHub’s MySQL clusters, which form the backbone of the service and run on bare metal servers, won’t easily make the move to Azure, and it may lead to even more outages going forward.
New Stack, "GitHub Will Prioritize Migrating to Azure Over Feature Development", Oct 8, 2025
Markdown is a general term describing a family of text formats that generally have these properties:
1. It's mostly already readable in its text form
2. Each MD format may have one or more converters to render HTML (or similar)
Generating HTML from external inputs is an inherently dangerous practice - not just for MS, as this GitHub page describes: Markdown's XSS Vulnerability (and how to mitigate it).
That page concludes: "So, is it all lost? Not really. The answer is not to filter the input, but rather the output. After the input text is converted into full-fledged HTML, you can then reliably apply the correct XSS filters to remove any dangerous or malicious content."
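A minimal stdlib-only sketch of that "filter the output, not the input" approach: sanitize the rendered HTML after Markdown conversion. The `ALLOWED_TAGS` list and the `sanitize` helper are my own illustration, not code from the linked page; a real deployment would use a maintained sanitizer library.

```python
# Sketch: sanitize HTML *after* Markdown conversion, keeping an
# allowlist of tags and stripping event handlers and javascript: URLs.
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "em", "strong", "a", "code", "pre", "ul", "ol", "li"}

class OutputSanitizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # depth inside <script>/<style> content

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1      # drop the tag AND its contents
            return
        if tag not in ALLOWED_TAGS:
            return                    # drop disallowed tags, keep their text
        safe = []
        for name, value in attrs:
            if name.startswith("on"):                     # onerror=, onclick=, ...
                continue
            if name == "href" and value and value.strip().lower().startswith("javascript:"):
                continue                                  # javascript: URLs
            safe.append((name, value))
        rendered = "".join(f' {n}="{v}"' for n, v in safe)
        self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip_depth = max(0, self.skip_depth - 1)
            return
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)

def sanitize(rendered_html):
    s = OutputSanitizer()
    s.feed(rendered_html)
    s.close()
    return "".join(s.out)

dirty = '<p>hi</p><script>steal()</script><a href="javascript:x()">x</a>'
print(sanitize(dirty))  # -> <p>hi</p><a>x</a>
```

Because it runs on the final HTML, it doesn't matter which Markdown dialect or converter produced the markup - the dangerous output is caught regardless of how it was encoded in the input.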
Yet the bug MS describes goes further - Attacker needs only to get an unwitting user to open a Markdown file in Notepad and click a malicious link embedded inside. According to Microsoft's explanation, a hacker can exploit the vulnerability to launch "unverified protocols" that load and execute files with the user's permissions. That's another level of permissiveness altogether.
I don't think any mainstream browser would allow execution in the host environment, or even allow saving a file without a confirmation dialog. Somebody must have built that feature into Notepad deliberately. It's either a 20th-century level of naivety, or a nation-state plot, or both.
The type of criminal resorting to home-manufacturing or to buying very cheap illicitly made firearms, would be drawn from among ill-educated, down-at-heel, hoodlums.
I think the ones MAKING the guns (plural) for sale are probably not ill-educated but rather have some native technical ability to read and learn the steps involved. Still hoodlums, though.
The attackers initially gained access by stealing valid test credentials from public Amazon S3 buckets.
Does that mean the default setting encrypting the S3 bucket was disabled, AND a user writing to that bucket wrote some of their own AWS credentials into it as well?
Leaving the bucket encrypted, with the slight bother of distributing keys to users, would at least thwart crawlers scanning every public AWS bucket looking for ones with credentials in them.
As for Notepad++, I think they could keep their source free and open yet charge a subscription for pre-compiled versions to cover the costs of safely managing and distributing them - or just not bother publishing a compiled version at all. An underfunded attempt is worse than no attempt at all.
Presumably your company's employees were using it as a productivity tool - to make profit for your company. They were not using it to make profit for Notepad++.
You probably have the wherewithal to download the Notepad++ source from GitHub and compile it yourself, and require your company's employees to use that compiled version only. Here are the build instructions - https://github.com/notepad-plus-plus/notepad-plus-plus/blob/master/BUILD.md.
I have heard that Google in-house uses a lot of open software, but they always inspect and maintain their own source version (while updating from the original), and therefore compile it in house, whitelisting only those versions. I haven't heard a lot of stories about Google being hacked internally - that may be why.
What editors do you whitelist, and are none of those "free"? Sublime is popular, though paid. MS VSCode is popular and free, but with VSCode perhaps you have no control over which extensions are installed? And some of the VSCode extensions in their "Marketplace" have been known to be malware.
Run the IDE+AI in a container but don't place the keys in the container's mapped filespace, while still sharing that mapped filespace between the container and the host. Then you can still run `git push` and other git commands from the host, using keys that are only visible on the host.
There is the slight inconvenience of not being able to use the IDE's integrated git interface for operations that require a connection to the remote repo, so it's a tradeoff.
AI or no AI, I run codium in a container without access to secret keys, because IDEs like codium are sprawling pieces of software, including various extensions by default, and there is definitely risk there.
20x over GPUs from the GHz, 16x from the 1K^2 cores (vs 256^2) - that's really impressive for a first stab. When the LLM bubble has to at least adjust its size to the reality of computing costs, that could actually spur more interest in this venture, hardware improvements being the bottleneck to progress.
The cost to do this, Lambeau said, has been a Claude Max subscription that he purchased in December for €180 a month. In that time, he says, he wrote Elo, completed Bmg.js, completed Bmg's documentation, and created the first version of the Try page.
Claude uses rolling 5‑hour “sessions” with a usage cap per session - once that limit is hit, ordinary users must wait for the clock to reset the limit. Some power users of Max have, on occasion, reported running into that wall ([BUG] Instantly hitting usage limits with Max subscription #16157), but Prof. Lambeau doesn't mention the topic at all. Of course those could just be glitches.
There is absolutely no evidence that Prof. Lambeau was given any extraordinary access to resources, implicitly or explicitly, to enable continuous and autonomous agent performance: "I've started by making sure the testing methodology was effective and scientifically sound. Claude writes the tests, executes them, discovers where it's wrong, and corrects itself. Impressive." It's also possible he did run into limits but just didn't mention it.
[Nicholas Malaya, an AMD fellow] "In our analysis the vast majority of real HPC workloads rely on vector FMA, not DGEMM," he said. "I wouldn't say it's a tiny fraction of the market, but it's actually a niche piece."
The definition of a DGEMM (dense general matrix multiply) operation:
C = α·A·B + β·C, where α and β are scalars, and A, B, C are matrices.
The definition of a vector FMA (fused multiply-add) operation:
FMA(a, b, c) = [(a₀×b₀ + c₀), (a₁×b₁ + c₁), (a₂×b₂ + c₂), (a₃×b₃ + c₃), ...], where aᵢ, bᵢ, cᵢ are scalars,
which is(*) a SIMD (Single Instruction, Multiple Data) operation. Whereas x86 AVX-512 registers could perform 8 of these in parallel, GPUs and TPUs can do thousands. (* SIMD with qualifications, because even CPUs may execute FMA instructions out of order.)
Here's the thing - DGEMM on GPUs/TPUs is performed by breaking the operation down into vector FMAs. So DGEMM and vector FMA are not mutually exclusive operations. However, optimizing DGEMM also includes things like handling memory for common use cases, so DGEMM is more specific. Presumably Malaya means many HPC applications can't make use of those AI-training-specific DGEMM optimizations, whether FP32 or emulated FP64, but they still use the common factor of vector FMA, which x86 can modestly parallelize (64 cores × 8 FP64 = 512 FP64 in hardware at once).
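The decomposition is easy to see in plain Python. This is an illustrative sketch (no BLAS, no blocking, names are my own): a full DGEMM, C = α·A·B + β·C, built entirely out of vector FMA steps of the form acc = a×b + acc.

```python
# Sketch: DGEMM expressed purely as a sequence of vector FMAs,
# showing the two operations are not mutually exclusive.
def vfma(a, b, c):
    """One vector FMA: elementwise a[i]*b[i] + c[i]."""
    return [ai * bi + ci for ai, bi, ci in zip(a, b, c)]

def dgemm_via_fma(alpha, A, B, beta, C):
    n = len(B[0])
    out = []
    for i, arow in enumerate(A):
        acc = [0.0] * n
        for k, aik in enumerate(arow):
            # broadcast the scalar A[i][k] across row k of B, then one FMA
            acc = vfma([aik] * n, B[k], acc)
        out.append([alpha * s + beta * c for s, c in zip(acc, C[i])])
    return out

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[1.0, 1.0], [1.0, 1.0]]
# alpha=2, beta=0.5: 2*(A@B) + 0.5*C
print(dgemm_via_fma(2.0, A, B, 0.5, C))  # [[38.5, 44.5], [86.5, 100.5]]
```

What a tuned DGEMM adds on top of these FMAs is the memory choreography - tiling, cache blocking, data layout - which is exactly the part a vector-FMA-bound HPC code can't reuse.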
Cnut (/kəˈnjuːt/ kə-NYOOT; Old Norse: Knútr; c. 990 – 12 November 1035), also known as Canute and with the epithet "the Great", was King of England from 1016, King of Denmark from 1018, and King of Norway from 1028 until his death in 1035. The three kingdoms united under Cnut's rule are referred to together as the North Sea Empire by historians.
"However, in children with high screen exposure, the networks controlling vision and cognition specialised faster, before they had developed the efficient connections needed for complex thinking." The report measured this specialization via MRI images and a software package, FMRIB's Software Library (FSL, v6.0).
To put it bluntly, I'm skeptical that any meaningful measurements of differences between normal children can be made that correlate with screen time, and furthermore that screen time results in earlier specialization. I would actually expect that less screen time meant more real-world interactive time, including with responsive parents, which would increase interactions in meaningful ways - and likely improve brain connectivity.
Amazon’s largest early losses were on the order of a few hundred million dollars per year, not billions. The single biggest annual loss in its early growth phase was about $720 million in 1999 on roughly $1.6 billion of sales. https://www.bbc.com/culture/article/20240628-a-36-year-old-jeff-bezos-talks-about-losing-money.
OpenAI had $8.7 billion of costs vs $3.7 billion of revenue in 2024.