Microsoft rolls out safety tools for Azure AI. Hint: More models

Microsoft has introduced a set of tools that will allegedly help make AI models safer to use in Azure. Since the cloud-and-code biz started shoveling funds into OpenAI and infusing its software empire with chatbot capabilities – a drama enacted with equal fervor by rivals amid grandiose promises about productivity – Microsoft has had …

  1. Anonymous Coward

    AI can turn any rushed oversight into a lethal footgun

    There has always been a grey area beyond hustle where volume of production overtakes the capacity to confirm quality - now AI makes that grey area easier to reach and more dangerous than before. Furthermore, employees will be under the gun to prove their 10x improvement in productivity.

    If tech and other companies had any technical sense, they wouldn't be firing knowledgeable non-AI-specific employees to replace them with AI; they would be using those knowledgeable employees to dogfood the AI into the workflow. That would take time - it would even be slower than normal at first - but it would pay off over years in developing AI tooling that is actually safe, useful, and marketable. Yet I've never seen that concept expressed in any AI hype anywhere.

    1. Doctor Syntax Silver badge

      Re: AI can turn any rushed oversight into a lethal footgun

      In other words, having the experts train their replacements. I don't think that would turn out well once the trainers cottoned on to what they were being asked to do.

  2. Doctor Syntax Silver badge

    "a custom language model that evaluates unsubstantiated claims based on source documents"

    Which source documents are these? The source documents that we really positively never touched because they're customers' property? The source documents that really are not in the model because the model is simply a statistical summary of lots of documents so it can't be a copyright infringement? Or the source documents which form a carefully curated, reliable training set as opposed to blindly sucking everything in sight into the general training set?

    1. 43300 Silver badge

      And what about when these multiple source documents state opposite positions on something? Which does the AI choose?
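
      The feature the article quotes appears to be what Azure markets as groundedness detection. As a rough illustration of why both questions above bite, here is a deliberately naive, bag-of-words version of such a check (a toy sketch in Python - the `grounded` function, stopword list, and threshold are all invented for illustration, not Microsoft's actual custom model): an overlap score can tell you that a claim shares words with a source, but it cannot tell support from contradiction, and it cannot arbitrate between sources that disagree.

```python
# A deliberately naive, bag-of-words "groundedness" check -- an invented
# illustration, NOT Microsoft's custom model. A claim counts as "grounded"
# if any single source document covers enough of its content words.

STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "that"}

def content_words(text: str) -> set:
    """Lowercase words with surrounding punctuation and stopwords removed."""
    words = (w.strip(".,;:!?\"'") for w in text.lower().split())
    return {w for w in words if w and w not in STOPWORDS}

def grounded(claim: str, sources: list, threshold: float = 0.5) -> bool:
    """Pure overlap test: it cannot tell whether a source supports or
    contradicts the claim, and it has no way to choose between sources
    that state opposite positions."""
    claim_words = content_words(claim)
    if not claim_words:
        return True  # nothing checkable in the claim
    return any(
        len(claim_words & content_words(src)) / len(claim_words) >= threshold
        for src in sources
    )

# Two sources that flatly contradict each other:
docs = [
    "The outage was caused by a faulty config push.",
    "The outage was caused by a DNS failure.",
]
print(grounded("The outage was caused by a faulty config push.", docs))  # True
print(grounded("The outage was caused by sabotage.", docs))              # True (!)
print(grounded("Quarterly revenue doubled overnight.", docs))            # False
```

      Note that the fabricated "sabotage" claim sails through, because four of its five content words appear in a source; and when two sources disagree, as above, an overlap score has no way to pick one - which is presumably exactly why a "custom language model" is being sold for the job.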

  3. Anonymous Coward

    Off Switch?

    Can I just have an off switch for the AI features in my software please?

    If it is off, then I don't need to worry about it doing something stupid.

    I would rather Word just stayed dumb.

    1. 43300 Silver badge

      Re: Off Switch?

      M$ are making it extremely difficult to opt out of a lot of this shit, and they are clearly instructing their partner network to push it hard too - I've had loads of emails from one of the consultancies we use, with breathtaking claims about 'Copilot' and exhortations to attend their exciting presentations on various aspects of it!

      1. navarac Bronze badge

        Re: Off Switch?

        Yes, we need a totally off, no-BS switch. M$ cannot even get the Windows GUI consistent, or the base code safe. I have no confidence whatsoever in them making AI safe. They are far too busy concentrating on making the next $bn and getting ahead of the equally stupid competition, who are ALL hyping this crap.

  4. pip25

    Hallucinations about hallucinations

    I can hardly wait to see them introduce a third layer of "protection", which will no doubt involve an AI checking whether the hallucination-checking AI hallucinated about the hallucinations. (And I guess it's all recursive from here.)

  5. amanfromMars 1 Silver badge

    The Next Small Step[s] for Giant Quantum Communication Leaps

    Microsoft has had to acknowledge that generative AI comes with risks

    No shit, Sherlock.

    And if the truth be told ...... The emergence of large language models that can hallucinate and offer incorrect or harmful responses has led to revolutionary funding for savvy postmodern tech industrialists promising traditional historical and hysterical elitist hoi polloi they are able to tame feral virtual models [Autonomous Intangibles] whenever they are clearly proving themselves to be able to run amok and render status quo systems administrations catastrophically compromised and vulnerable to totally unexpected and surprising indefensible attack/and increasing invasive and pervasive series of expanding prime fundamentalist 0day exploits ....... undeniably honest observations.

    And not so much the state of future things to come but rather more the way it currently is with/in IT and AI and just the most recent and disruptive of engaging and enlightening delights to savour and flavour with your favour should you wish to steer its direction into productions which present your greater wishes for promotion into realities.

    The Future of Reality is a Virtual Feast Trailing Trials and Tall Tales with and for Autonomous Anonymised Intangibles Internetworking .... AI²

    And one does well to note that is not shared as a question of fiction whenever engaging and exercising of novel emerging and quite naturally disturbing facts.

  6. ecofeco Silver badge

    More MS genius

    MS: "You know what would be fun? Making the very software you rely on to create secure access to your network vulnerable!"

    Also MS: "You know what would be even more fun? Using half-baked AI to pretend to make it secure!"

    Also MS: "Fix our code? LOL! Suck it, plebes!"
