Only 30%?
We've gotta have a competition to give a name to whatever hellchild is an amalgam of Word, Excel and PowerPoint... Jesus, WEPP?
Microsoft CEO Satya Nadella has claimed about thirty percent of code in at least some of the Windows titan's repositories was written by an AI. Nadella revealed that number during an interview with Meta boss Mark Zuckerberg at the latter mega-corp's LlamaCon event this week. A few minutes into their chat Zuck asked Nadella, “ …
It depends on what's meant by that. Individual LibreOffice applications are calls to a common executable with a flag to tell it which functionality the user wanted. He might simply be catching up with that, but it would likely be a complete rewrite. Something for Office users to look forward to.
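A minimal sketch of that, in Python for illustration (assuming a stock install with the soffice binary on the PATH; the helper function is mine, not anything LibreOffice ships):

import subprocess

def launch_libreoffice(component: str) -> None:
    # Writer, Calc and Impress are the same soffice executable;
    # only the startup flag differs.
    flags = {"writer": "--writer", "calc": "--calc", "impress": "--impress"}
    subprocess.run(["soffice", flags[component]])

launch_libreoffice("calc")  # same binary as Writer, just a different flag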
Yes. Exactly this. Thumbs up. I don't think I have a problem with some code being written with the help of an AI. That's just code monkeying. But tests? The formal declaration of the specification? The "do you really understand what the hell you're supposed to be developing?" part? Seems like a bad idea to me.
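A throwaway Python example of the worry (function and numbers invented for illustration): a test generated from the code rather than from the requirement just enshrines the bug.

# Requirement (from a human): a discount can never exceed 50%.
def apply_discount(price: float, pct: float) -> float:
    return price * (1 - pct)  # bug: nothing caps pct at 0.5

# A test derived from the code, not the requirement, passes happily:
def test_apply_discount():
    assert apply_discount(100.0, 0.75) == 25.0  # fine per the code, wrong per the spec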
This was my thought. It's been many decades since I wrote any software (first as a schoolboy and then as a skilled amateur - but in those days most of us were). The hard part wasn't writing the code. It was working out what I needed the code to do - and who for, and what they'd do with it, and why. That sort of human stuff.
"Why do they have to keep patching the rubbish product"
Somebody had a good idea for a product that management liked, and the MBAs filled in the publication date on their calendars. All hail the Calendar. It was released on that date since waiting any longer would "bankrupt the company", and some other company had caught wind of it and might be first to market if they didn't get it out.
Go buy a Tesla and it will need a patch much more frequently than once a year.
He says 30% of the code in the repos; he doesn't say where those repos sit relative to the product release.
It will be interesting to see how buggy the next release is.
He also doesn't indicate how that code gets into the repo, but implies the statistic is from committer status. So if he's got 30% of his repos coming from uncurated AI commits, the next release is going to be... interesting. I'd hope for his sake that all the commits were made by humans, after testing, and 30% is the proportion that used AI assistance on the changes.
LLM training is by its very nature lossy, and as a result LLM-generated code quality is significantly poorer than the average quality of the code used to train it.
This would normally be a concern; however, for Microsoft, below average probably represents a huge increase in code quality, going by the number of critical flaws patched every single month. I can see why they are so keen on it.
It's not having the LLM write code that's the problem.
The problem is how the hell are you testing or debugging that? Because tests can't ALSO be written by AI, and debugging something that another human has written is bad enough, or even something that YOU WROTE YOURSELF. Trying to debug some spam-churn out of an LLM? No thanks, I'd rather be out of work.
"Trying to debug some spam-churn out of an LLM? No thanks, I'd rather be out of work."
If you've got to the point where you are chasing the bugs around, there's been a person or team that knows what's gone into the software, what it's supposed to do, how it goes about doing that thing, and where an odd problem's source might be found. To polish up an AI-written thing, where do you start? What was the overall approach? Are there modules, or is it all one giant mish-mash?
If you've ever seen structures that were designed entirely by something non-human, you notice how oddly organic it looks. Yep, it's hit the design criteria with the least amount of material, but how do you make the thing in production? 3D printing is glacially slow and one has to factor in that the material doesn't have the same structural specs as the same stuff in a more homogeneous form. A big part of design can be optimizing the thing so it can be built at a price and at a pace. I've printed some really wild computer generated things on my filament printer that could never be turned into a product. As a hobby production, who cares about price/time if you can afford it?
"This would normally be a concern; however, for Microsoft, below average probably represents a huge increase in code quality, going by the number of critical flaws patched every single month. I can see why they are so keen on it."
I'm not so sure.
Given the, IMHO, rather dramatic decline in (remaining) quality of their latest releases, I'd argue they actually started this a while back, in which case Microsoft's promises that this would somehow improve have about the same value and persistence as one from Trump.
For instance, the new Outlook is so feature-deficient that it's almost unusable for people who have come to rely on those features. Personally, I loathe Microsoft deciding that urgent messages cannot simply be merged into the new date sort and will ALWAYS appear first. I can see why that is important for some, but not for everyone, and in my opinion it is becoming very clear that Microsoft has simply stopped listening to its users, because, let's face it, it's not like a majority of them even has a choice...
It all makes sense now. No real people coding, garbage churned out based on historic questionable code, no one understanding it or testing it internally.
"Ship it to the users, let them find the bugs"
As for an uber-app combining Word, Excel and electronic crayons: the users I've witnessed can barely operate the stand-alone applications. I dread to think of the support calls that will come from someone who can't format their table into columns like Word while making it fly in like PowerPoint.
I suppose eventually they'll shoe-horn Outlook into this abomination too. Programs aren't complete until they can email.
One ring to rule them all? No thanks...
I make data. I choose a program to generate it; I choose a program to process it. Those two programs may or may not be the same as each other; there may be more than one choice in either field. But _I_ choose...
The end point they're aiming at is you write a description of what you want done with the data, and it's done for you. You don't have to worry about the software. It may pick the package for you. Or it may even code it on the spot.
Whether we get there or not, I don't know. But that's the vision. My guess is it will work for low hanging fruit. Where the boundary between low and high is, I don't know.
That's how programming already works. We write a functional description (we call it the spec), then we write a procedural description (we call it the code), and then machines write a register-level description (we call it the executable).
In general, the functional description is incomplete and the programmer makes a bunch of judgement calls to turn it into a procedural description. This results in bugs. You can significantly reduce the number of bugs by having a more thorough functional description, but the people who write those aren't generally capable of doing better, which is why they don't.
Having AI do the procedural 'compilation' puts the onus on the functional spec writer to write a good spec. That's not going to happen.
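A toy Python illustration (the spec wording and the function are invented for the example): the functional description leaves something unsaid, and the procedural description has to fill the gap with a judgement call.

# Functional description (the "spec"): "return the average of the scores".
# It says nothing about an empty list, so the programmer - or the AI -
# has to decide, and that decision is where the bug usually lives.
def average(scores: list[float]) -> float:
    if not scores:
        return 0.0  # one possible judgement call; raising an error is another
    return sum(scores) / len(scores)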
I have to say that's what I use too. PowerPoint, or its LO equivalent, is something I have in the background when I'm doing my voluntary "Digital Champion" thing, because Local Authority corporate requires it, and which I pretty much ignore after I've done the "housekeeping" slides.
If you need a long flow of slides behind you to say what you are meant to be telling the audience, you aren't telling the audience properly.
That would make sense if the PP slides were helpful 6 months down the line. I've yet to be given any which were.
Most of my "residents" speak very limited English. And yes an occasional PP slide can be helpful, but mostly I control my language and use demonstrations of the actual work done. With the PP minimised.
"If you need a long flow of slides behind you to say what you are meant to be telling the audience, you aren't telling the audience properly."
That's bad. OTOH you might have a long flow of slides which you explain to the audience - actual photographic slides, not text. I used to have a carousel of slides illustrating all sorts of things - altered documents, comparison microscopy, footprints etc. I could pick that up, collect a projector and give different talks depending on the audience, anything from schools to detective training.
PowerPoint and Impress certainly make it too easy to put the text of the talk on the screen; what's on the screen should really be something to talk about.
"Carrying a USB stick beats lugging a Carousel and slides around."
It's just as easy to carry around a tablet with a couple of adapters to be able to plug into the AV system. I'd use my laptop, since I'm going to have that with me anyway, so it's pointless to also haul around a tablet. A backup on a USB stick is good practice, though it would mean using somebody else's computer, with Bog knows what on it and how it's been configured.
The AI would respond,
My code is perfect and does not need any error handling as nothing will go wrong with it.
But.... every system has defects? asks the humanoid.
Exterminate... Zap....Kill.
MS has clearly been corrupted by the AI hype. I'm so glad that I'm MS-free (and Google-free, for that matter).
"Isn't that excel?"
And if you need to take some photos for marketing, you can use your iPhone. A band wants to have a music video? Bam, iPhone. Etc., ad nauseam.
Yes, the above things 'could' be done with an iPhone, but you have to twist and contort your workflow to do it that way. A commercial photographer I know told me he doesn't "need" his 100 MP camera to make photos for clients. He could make the same images with an iPhone. The trouble would be that the files would be what they are. There wouldn't be the margin to do much pushing and pulling in editing. There wouldn't be the ability to just use "this little piece of the photo over here", since that selection by itself wouldn't have enough resolution to be useful, and that's often a big request after a photo session. It's more efficient to use good tools that were built to accomplish the job rather than driving nails with an adjustable spanner.
I can remember that in very early Windows the DDE, or early OLE, poster child was a spreadsheet embedded in a text document, with the magic of the embedded spreadsheet updating in real time as the user worked on the sheet in Excel (or whatever).†
All a bit like the Harry Potter movie's newspapers.
Problem was I have never encountered anyone that wanted this behaviour and most would find it a complete embuggerance if it were the default.
At the time I thought that implementing this stuff would be pretty much a dawdle in Smalltalk-80.
I heartily encourage both Nadella and Zuckerberg to bring on their AI transfiguration. :)
† I don't recall whether it worked both ways, viz. whether the embedded spreadsheet could be modified in Word (or whatever) with the changes propagating to Excel.
"Problem was I have never encountered anyone that wanted this behaviour and most would find it a complete embuggerance if it were the default."
Indeed. Nothing worse than the text of the report not matching the graph and the Excel snippet, because someone's been working on the spreadsheet after you finished the text.
Yes, you could do proper document control and the like, but for 99.9857% of reports that effort simply isn't justified.
>The Meta man said “the big one that we're focused on is building an AI and a machine learning engineer to advance Llama development itself.”<
Is Zuck really thinking a lossy statistical engine feeding on its own dog food will tune itself to perfection, or keep improving itself beyond any limit and thus become truly intelligent?
"The Microsoft CEO said plenty of the company’s code is still C++, which he rated not that great”. Microsoft maintains a lot of C++ too, and Nadella said it’s in “pretty good” condition"
Should the first reference be just C instead of C++?
Or should the first C++ remain "as is" and the second reference be C#?
....Was that part of the article written by "AI"?
I've read the following several times and still can't make sense of it, maybe it's too early in the day.
So is the C++ code not that great or is the C++ pretty good?
The Microsoft CEO said plenty of the company’s code is still C++, which he rated "not that great." Microsoft maintains a lot of C++ too, and Nadella said it’s in “pretty good” condition while more recent Python is “fantastic.”
The fact that MS is using AI to develop its products will come as no shock to anyone around here.
I wonder if they're eating their own vomit, i.e. using Copilot.
Having used a number of AI tools recently, I found Copilot significantly worse than a number of other bug-ridden tools.
I think I understand now why the W11 UX is as pleasant as standing behind a hippo in gastrointestinal distress.