Perhaps Copilot will start handling the M365 support requests?
Or perhaps, given how bad the support frequently is, that has already happened?
While blasting customers with the promise of AI, Microsoft is now telling its employees to tiptoe into the brave new world too. In a message seen by Business Insider, Microsoft is trying to persuade staff, including developers, that upping their use of AI is a good thing. A memo sent by Microsoft's Strategic Missions and …
Yeah, basically it is!
Got a case ongoing at the moment about a fairly complicated problem, which we've done some quite detailed testing on. Jesus, it's heavy going - they keep coming back with the sort of questions you'd ask a not very tech savvy home user. I've avoided the temptation to write "yes of course we've fucking tried that - it was one of the first things we tried (and you've asked three times already)" or "that's quite obviously irrelevant to the actual issue". In fact, one or other of those answers would be a meaningful reply to most Microsoft support suggestions.
A reprise of the South Sea Bubble (and of course the 'dot com boom')? Didn't Gartner publish a graph (the adoption curve) about these phenomena some time back? And yet the cycle repeats and repeats and repeats and ........
I wonder what will happen to the stock price when we all finally decide what the various versions of "AI" are actually good at and bad at and just start using them as tools where they really help us reliably.
Make it compulsory for MS employees then. They can let us know what percentage of Windows 12 has been written by this amazing new AI, or even get the AI to write all of it. The world will then be able to judge how good it is by how reliable the OS is.
I mean, if they're charging that much for it, it must be capable enough?
I really wonder if that is true.
I recently was exposed to my first real interaction with "AI", a helper on a website where I was trying to edit my profile to remove an old phone number. Stupidly, you can only add phone numbers but not remove them, so I clicked "Help" and it chat-connected me to an AI which, once I went through a few prompts...removed the unwanted phone number entry in my profile.
Which raises the question: was "AI" really necessary if you clueless idiots had put "Delete" as a valid option for phone numbers in the first place??
Given that "AI" has already been proven to give out incorrect information occasionally, what exactly is the purpose of something that either (a) does something that should / can be done already, or (b) does something incorrectly? Are they just putting lipstick on the pig of their ineptitude as a coverup for something they should have done correctly in the first place?
I'm sorry, Satya, I'm afraid I can't do that. Windows updates should be fully tested before release. This monthly release will take approximately 7 months to test. Press Enter to begin.
A few tips.
Don't subscribe to software, buy it. If you can't buy it, use something else that you can buy.
Don't use anything labelled 'AI'. It may introduce more legally expensive errors into your operations, more quickly than interns do.
Consider switching back to hybrid systems: working offline or on paper, and typing data into a networked system only when you have to submit it electronically to a third party. It's a cheaper, more resilient and more secure way of doing business.
Software is rarely actually purchased - you are actually purchasing a license to use it. With the rise in subscriptions, the concept of "owning" software has become even more tenuous - I guess the analogy would be that instead of purchasing a license to use it, you are now renting a license to use it!
What if I were an MS developer and honestly knew that I could produce better code (or whatever) than AI, even if I might not be able to churn it out as fast (because I'm a perfectionist and will make sure that code is good, but will have to spend more time to do that)? Are they going to force me to use AI in order to get more product out the door in less time despite it being lower quality or having more flaws? (I know the answer, of course. Plus, a developer like that probably wouldn't have lasted long as Microsoft anyway.)
Honestly, regardless of how "good" Copilot was, I wouldn't use it. I'm not keen to jump on the learned-helplessness bandwagon.
None of the arguments I've seen promoting the use of LLMs for programming assistance, including from people like Matt Welsh, have been at all persuasive. I believe they're wildly mistaken about the actual, long-term costs and benefits.
I was using it today (access comes with the MS stuff my employer signs up to; allegedly it's all private, with no details of the code you look at beamed back to the MS mothership, and we are being encouraged to try it out and see), adding a bit of extra error recording to some SQL Server procs.
It did basic "suggesting a completion" OK, e.g. I typed ERROR_L and it suggested ERROR_LINE(), similarly did OK for suggesting ERROR_MESSAGE() etc.
However, I decided to play about with it and type something where there was no corresponding system function.

I tried ERROR_R - instead of doing nothing, it suggested ERROR_ROUTINE() in autocomplete.

It was either hallucinating or the training data included someone's "custom" function of that name; either way it's fairly useless, as a novice coder would be expecting a valid system error-related function call. (And no, there was no "custom" ERROR_ROUTINE function in the codebase I was looking at, so it could not have grabbed it from analysis of open files.)
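For anyone wondering what the suggestion *should* have been: SQL Server's documented error functions, usable inside a CATCH block, are ERROR_NUMBER(), ERROR_SEVERITY(), ERROR_STATE(), ERROR_PROCEDURE(), ERROR_LINE() and ERROR_MESSAGE(). The closest real thing to the hallucinated ERROR_ROUTINE() is ERROR_PROCEDURE(). A minimal sketch (column aliases are just illustrative):

```sql
BEGIN TRY
    SELECT 1 / 0;  -- force a divide-by-zero error
END TRY
BEGIN CATCH
    -- The full set of documented error functions. Note there is no
    -- ERROR_ROUTINE(); the real name is ERROR_PROCEDURE().
    SELECT
        ERROR_NUMBER()    AS ErrNumber,
        ERROR_SEVERITY()  AS ErrSeverity,
        ERROR_STATE()     AS ErrState,
        ERROR_PROCEDURE() AS ErrProc,    -- NULL when the error is outside a proc/trigger
        ERROR_LINE()      AS ErrLine,
        ERROR_MESSAGE()   AS ErrMessage;
END CATCH;
```

All six return NULL when called outside a CATCH block, which is exactly why a plausible-but-invented name is worse than no suggestion at all.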
"Those are basic routines. wait until they start coding on top of "hallucinations". This is going to be CHAOS."
Microsoft already has that chaos market cornered anyways with their patch Tuesdays, where every month you get to find out what they have FUBAR'd this time when doing it.