Amazing what maths can do
> started with an assumption "that we would be spending time on architectural differences such as floating point drift, concurrency, intrinsics such as platform-specific operators, and performance." ... "it turns out modern compilers and tools like sanitizers have shaken out most of the surprises"
Sorry, what?
They planned to spend their time dealing with the minute details of instruction sets and opcode interactions? The things for which we've been building decades of maths - you know, the ONE area that can actually deal in proofs! - from formal theories of languages to the planning and optimisation of completely defined[1] and constrained operations in hardware? Variances in floating point accuracy and stability are the bread and butter of published mathematicians and computer scientists; published, as in they actually told everyone, and provided worked examples as compiler patches to present to the examining board?
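That floating-point drift is, indeed, well-trodden ground. A minimal illustration (mine, not from the post) of why results can differ across compilers and platforms: IEEE 754 addition is not associative, so any reordering of operations - exactly what optimising compilers do - can change the result in the last bits.

```python
# IEEE 754 double-precision addition is not associative:
# the same three values summed in two different orders
# produce results that differ in the last bit.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b)   # False: the two orderings disagree
print(abs(a - b))  # a tiny but nonzero difference
```

Which is precisely why this is a known, analysable problem rather than a surprise: the magnitude of such reordering error is bounded and has been characterised in the numerical-analysis literature for decades.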
But were then, apparently, surprised to find that their time was really needed on fixing things that were rather less rigorously created in the first place, like build and release systems?
Worse (!), having discounted the fine and detailed maths described above, they hadn't planned on tackling the bits of maths and stats they should have expected to find in their own systems, such as overfitting tests. Or the issues inherent in keeping existing, running, interacting systems stable (hint: keep the machine room clear of butterflies).
Ok, it wasn't my best subject, but being sent out of the terminal room, away from the blinking lights, and into the lecture halls of the Department of Mathematics, Statistics and Operations Research was once seen as basic to producing a well-rounded geek.
Let alone one with the title of "engineering fellow".
And then ... they patch it all up with "AI". Running a learning model, getting it to print out what it has discovered, sanity-checking that, and then engineering the results into your improved systems: good idea. Letting it run wild on its own...