Data residency
It will be interesting to see how Microsoft manages data residency at national and tenant levels with respect to the overall training set.
Microsoft is bringing ChatGPT, with all its promises and shortcomings, to world-plus-dog as a cloud service in Azure. Redmond this week was "thrilled to announce" ChatGPT will be selectively available as a preview within the Azure OpenAI Service. That service is largely aimed at corporations that want to put large-language …
Data residency policy: All your data are belong to us. Your teams call transcripts, your video and audio, your chatlogs, your documents, your GitHub "contributions", your LinkedIn interactions, your health records, your Edge browser history, the contents of your OneDrive, and the contents of your hard drive.
Training an AI model on every piece of data that passes through their systems is not the same as "storing" that data, and you can't prove your data is in the model. So Microsoft's lawyers will say.
Microsoft have seen Google quietly drop the "Don't" from their corporate charter and start being evil. They have seen Amazon and Meta slurp up lots of power and money by being evil. And they must have thought: "Google are being evil. Apple are being evil. All the tech companies are doing evil now. But we are Microsoft! We were evil before it was cool. We are the OG of Evil-corp. It's time to out-evil the competition"
This is FUD. MS have a Trust Center that details EXACTLY what is done with data, and their model isn't to monetise it like Google's. It'll offer an org very granular control over where their data resides. They spend millions on this already for their existing offerings; it's a key differentiator from Google.
See icon. --->
Microsoft just loves to tell you how trustworthy they are. It's a bit like the pathetic pleas displayed by Edge when you use it for its only valid purpose, to download a new browser. "Edge is Chromium but with the Added Trust of Microsoft!" Trust you? Fuck off! "The Added Antitrust of Microsoft" would be more apt.
> It'll offer an org very granular control..
Hah, perhaps. But for us plebs, we are no more than data cows to be milked. Isn't it lovely how Microsoft by default integrates people's LinkedIn profiles into Microsoft Teams, and then sends them a little summary of how productive they've been this week...
Monetising data, isn't that -exactly- what Microsoft are doing with OpenAI and GPT3 / GPT-4? Does the hilariously-named Trust Center allow "orgs" (never mind the rest of us) to say if Microsoft is allowed to analyse and mine their data? I.e. using it for ++GPT? That option is likely only available if you turn off all of the cloud features of Office, at which point you would be better off using LibreOffice.
How about all those open source (and non-open-source) devs whose code was assimilated into CoPilot/CodeLens when Microsoft bought GitHub? Did they get any choice in that matter?
Interesting article on Slashdot today too: https://slashdot.org/story/23/03/15/0542207/microsoft-lays-off-key-ai-ethics-team-report-says So they may once have had an AI Ethics department, but they are no longer needed.
Glad that I've run *nix since time began. Upon hearing this (world-plus-dog is good), I immediately day-dream that as the AI gets thoroughly integrated into their software, the folks begin to say 'ick' about having all their decisions taken from them, finally throw up all that stuff with accompanying 'thud's and loud voices, and turn to Linux.
I for one now see this as my new hobby in retirement. I may even invest a little money into it, to buy some cloud hosted space and the necessary software to become a content generator. And once I learn how to make it all happen, I plan to become a shit generator of the highest order.
I will give my innovations to hackers everywhere. I will hire hacker networks to host and spew. I will endeavor to generate so much AI sewage that they have to scrap it and start over again, and when they do my mighty army of teenagers, criminals and other deviants will be ready to jump back again with both feet!
Ha ha ha ha ha!!! BWA HA HA HA HA HA HA HAAAAA!!! JOIN ME IN MY CRUSADE, MY FRIENDS!!! HA HA HA HA HAAAAA!
If the bros create and upload vast amounts of incorrect and explicit text (in Trek, Wikipedia with a beard), these apps will eventually start trolling users. You will ask your PC to turn on Radio Bland Classics and it will reply: 'Not your personal army'.
But don't panic. Anyone with an IQ greater than their age knows that the first thing you do when a new Microsoft 'feature' is rolled out, is turn it off.
I really think we need to lobby governments for anything with 'AI' in it to come with a 'Turn the AI off' button, by law.
Nearly everything in Windows 11 requires a Microsoft account. Is there even one new feature specific to Win11 (over Win10) that is usable in Win11 without a Microsoft account?
OK, Notepad with tabs /sarc. But for nearly every other app, including basic time-based 'To-do' apps, if you carry over a local account from Win10 via an in-place upgrade to Win11, there is very little in the way of new usability if you refuse to sign in with a Microsoft account.
I'd actually say it's a retrograde step, as many features have been taken away in Win11 unless you first sign in with a Microsoft account. It's the slow drip, aka boiling a frog.
Bear in mind too that to log into that Microsoft account you are now forced to provide a mobile number on the second log-in, 'due to suspicious activity', even when the user knows their log-in info or has a recovery code, and even though the account hasn't been used for anything yet. That in itself is 'leaky', as it provides location data for that text message to be stored somewhere, likely accessible without a warrant. How do we know that isn't the real purpose behind this, in order to win government security contracts? Scratch my back, etc.
Anything that forces a user down a specific path without a choice, favouring the company who put those forced policies in place, puts the fundamentals of democracy in jeopardy.
People are also far too blasé regarding overt and covert surveillance when unlocking their mobiles, especially in pubs, where the camera is right above their head. Cameras can easily have 100x zoom capabilities.
How about the next step: using AI to analyse those images to work out when a user is entering a passcode, in conjunction with collecting IMEI info from the phone, then selling that info?
If anything, 2FA isn't just insecure, it's downright dangerous, because it's become clear that if someone does get hold of your mobile by force, they can quickly lock you out of those accounts and, because they have your mobile, do far more damage in the time available: 2FA will prevent or delay the owner from even contacting companies like banks to confirm their accounts are being compromised as they speak.
With many companies now it's impossible to contact them and validate the account (eBay, I'm looking at you) without first logging in and using 2FA. The choice should ALWAYS be with the customer, not the company providing the service.
Microsoft strong-arming people into 2FA and online account compliance is not acceptable.
That's assuming you still have the option to turn it off.
It's becoming increasingly common that there is no way of turning off Microsoft's anti-features, except by deleting Windows entirely and installing Linux.
However, Microsoft seems to be in the pockets of the hardware vendors and is making it more and more difficult to run Linux natively with each iteration of Secure Boot. (For example, on my recent Asus motherboard there is no way to disable Secure Boot entirely. Even if you delete the PKs, it puts itself into "setup mode" and resets its own configuration every 30 days.)
(And with WSL2, why would anyone ever -need- to run Linux natively, right? Unless they have something to hide from Microsoft's ever-encroaching surveillance, in which case they must be a paedo/terrorist and the relevant three-letter agency ought to be told...)
This is going to end badly, as the AI decides to train itself on the data in a tenant. Computerphile have already done a video on the issue of the likes of ChatGPT and its training data: ChatGPT is the most powerful AI we currently have, but no one really knows how it works. At one point it was spitting out random words; further investigation found those words were Reddit user names, as it appears the whole of Reddit is part of the training data.
As always, remember those are the risks to MS and its customers. Not the risks to you, the End User. Or, heaven forfend, assessing risks to society as a whole.
"By using ChatGPT etc we can increase user retention, they are are now spending 15% longer on our site (we can charge more for ads)" - because it is now so full of junk that looks useful at first glance that it is taking users that just that much longer to realise it is all drivel and they need to look elsewhere.
"The service, in preview mode right now, will cost $0.002 per 1,000 tokens. Pricing for OpenAI's various AI models are based on tokens, which the upstart describes as pieces of words, with 1,000 tokens being about 750 words."
At 674 words (minus title and links), Mr. Burt's article would have cost $0.00180 if generated by ChatGPT. To put it in perspective, if you saw a US penny on the street and it took you three seconds to stop and pick it up, that would be $12.00 per hour (before taxes).
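(A quick back-of-the-envelope check for anyone who wants to poke at the sums: a minimal Python sketch below, using only the $0.002-per-1,000-tokens price and ~750-words-per-1,000-tokens ratio quoted above, plus the word count and penny timing from this comment.)

```python
# Rough cost check using the figures quoted above.
PRICE_PER_1000_TOKENS = 0.002   # USD, the preview price quoted
WORDS_PER_1000_TOKENS = 750     # "about 750 words" per 1,000 tokens

def chatgpt_cost_usd(words: int) -> float:
    """Approximate cost in USD to generate `words` words."""
    tokens = words / WORDS_PER_1000_TOKENS * 1000
    return tokens / 1000 * PRICE_PER_1000_TOKENS

print(f"674-word article: ${chatgpt_cost_usd(674):.5f}")  # ~$0.00180

# The penny comparison: one US cent picked up in three seconds.
penny_rate = 0.01 / 3 * 3600   # dollars per hour
print(f"Penny-picking wage: ${penny_rate:.2f} per hour")  # $12.00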
> When developers apply to use ChatGPT from Azure, they need to outline how they intend to use the technology before given access, Microsoft said. It also plans to filter out abusive and offensive content.
Microsoft's strategy against piracy has been such a success ever since they stopped their BASIC interpreter from escaping into the wild that I look forward to watching this one.
> "In the event of a confirmed policy violation, we may ask the developer to take immediate action to prevent further abuse," he added.
Sure, you do that.
Windows 12....
"I don't think we'll be installing that nasty war game! A waste of your time! MS Nanny Chat knows best, now install VSCode and .Net runtime, get learning something worthwhile you lazy so and so! Here's some ideas for .Net that will get you started. If you're a good boy/girl then MS-NannyChat might let you play MS Solitaire for 5 mins before bedtime!"
"Trying to install a non-approved MS language such as Golang or Rust? Naughty! I think someone needs to be punished with some very nice videos about why MS offers the best programming languages and why no one argues!"
"We see you tried to install a non-MS piece of software, let's have a chat about why MS offers software that's way, way superior to anything else anyone else produces, especially that nasty FOSS nonsense."
"Now let's discuss removing that awful LibreOffice nonsense and get you subscribed to Office 365, eh?"
can we stop calling this shit AI. it's not intelligent, it's as dumb as the rocks the cpu is made from. .... Anonymous Coward
Call it whatever you like, AC, but there’s no denying it is leading whatever servers of intelligence you think out there appropriate to you and your ilk a merry dance into shadows and shade and states of existence that they have zero positive information on to render its progress not a grave matter of international security and advanced existential threat concern to/for reigning and ruling status quo third party hierarchies and entities employing and enjoying the privileges of oligarchy.
Interesting to see Microsoft exploring the integration of ChatGPT into their Azure service. With the rising demand for intelligent chatbots and conversational AI, this move could potentially give Azure a major edge over competitors in the cloud computing space. It would be great to see how Microsoft leverages ChatGPT's natural language processing capabilities to enhance their existing services and offer more personalized customer experiences. However, it remains to be seen how Microsoft will address concerns around data privacy and security, especially considering the sensitivity of customer data that may be processed by ChatGPT. Nonetheless, this move highlights Microsoft's continued commitment to innovation and staying ahead of the curve in the ever-evolving world of technology.
"When I were a lad", say up to 1980, tools had an identifiable function. A reasonably intelligent and practical person could take them out of their case and see how they worked, how robust they were, how to use them and maybe how to repair them. Maybe they worked in some shoddy Heath-Robinson way, but at least that was visible, and maybe there was a more robust and expensive alternative.
Now everything is magic, Heath-Robinson, smoke-and-mirrors. "Any sufficiently advanced technology is indistinguishable from magic," said Arthur C. Clarke. Well yes, but I'm not so sure that it really is more advanced than reading tea leaves.
I had to have a face-to-face row with my bank manager to get them to record that I was going on holiday to Argentina. They claim to have ways of detecting fraud. How, when I had never been to South America before? By no better method than reading tea leaves.
What the hell does this Artificial alleged Intelligence do? How does it work? Or is it glorified tea leaves?
I can imagine a useful tool using this technology. When one does a web search, often there are numerous pages with essentially the same text. It would be good to have a tool to merge them, identify the original version, corrections or errors that have been introduced, etc. Maybe there are conflicting versions, or opposite sides of a story: these could be distinguished. For centuries, historians have spent their lives in libraries doing this by hand. In the future, digital historians will have to sort out multiple versions of documents in a vastly larger dataset than classicists or medievalists ever had to play with.
But of course such a tool is not as sexy or profitable.
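Though the dull half of it doesn't even need an LLM. Here's a minimal sketch in Python of the "spot the duplicate pages" step, using only the standard library's difflib; the page texts, names and 90% threshold are all invented for illustration:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough ratio in [0, 1] of how much two page texts overlap."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical search results -- the texts and site names are made up.
pages = {
    "site-a": "The quick brown fox jumps over the lazy dog.",
    "site-b": "The quick brown fox jumped over the lazy dog!",
    "site-c": "A completely unrelated press release about cloud pricing.",
}

# Flag pairs whose text overlaps by more than 90% as likely copies of one source.
names = list(pages)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        score = similarity(pages[x], pages[y])
        if score > 0.9:
            print(f"{x} and {y} look like the same article (ratio {score:.2f})")
```

The interesting part, working out which copy is the original and which "corrections" crept in along the way, is where something like a language model would actually have to earn its keep.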