Microsoft is changing how it handles device diagnostic data to keep EU sweet

Microsoft is continuing to change how diagnostic data from Windows devices is processed and controlled to keep its place in the European market amid stringent privacy and security regulations. IT administrators enrolling devices in the Windows diagnostic data processor configuration option had been able to use a range of …

  1. captain veg Silver badge

    good grief

    "Microsoft is continuing to change how diagnostic data from Windows devices is processed and controlled to keep its place in the European market amid stringent privacy and security regulations."

    Call me weird, but I would like Microsoft to use diagnostic data only for diagnosing problems.

    They can't know that such problems exist without me telling them, so this would be, of necessity, strictly by my permission.

    "IT administrators enrolling devices in the Windows diagnostic data processor configuration option had been able to use a range of policies for each system, such as allowing for a commercial data pipeline and for desktop analytics processing."

    I really don't know what those things might be, nor why IT administrators would want to enroll devices in them. Are they getting backhanders?

    "As part of a larger effort announced in May 2021 to enable European entities to process and store their data in Europe, the software giant is ending the use of policies to configure the processor option and instead is offering a configuration for an entire organization based on Azure Active Directory to set Microsoft's role in processing data."

    How about just turning the whole thing off by default? If the justification is that it makes Windows better, then clearly it has failed.

    -A.

    1. Pascal Monett Silver badge

      Re: good grief

      Diagnostics are not telemetry (at least, I hope Borkzilla still maintains a distinction between the two).

      As far as diagnostics are concerned, I can explain the why, because I have been a developer since 1996 (TL;DR: I like diagnostics).

      Whenever I write a script that must execute automatically without human intervention (i.e. at set time intervals), or that executes in the background without pestering the user with error messages (because they're just ignored anyway), I want a log of that code's activity. I want to know the start environment, the input data, the path that the code took and why, and, if relevant, what the code sent back as a response.
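
      To give an idea, here is a minimal sketch of the sort of log I mean, in Python purely for illustration (the names, from process_batch to input.json, are made up and not any real customer's code): record the start environment, the input, the path taken, and the response, in a local file.

```python
# Minimal sketch of the diagnostic log described above (Python for
# illustration only; process_batch, input.json and batch_job.log are made up).
import json
import logging
import os
import platform
import sys
from datetime import datetime, timezone

logging.basicConfig(
    filename="batch_job.log",   # a local file, not The Cloud(TM)
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("batch_job")


def process_batch(records):
    """Placeholder for the real work; returns a summary of what it did."""
    processed = [r for r in records if r.get("active")]
    return {"seen": len(records), "processed": len(processed)}


def main():
    # 1. The start environment: when, where, with what.
    log.info("start host=%s python=%s cwd=%s argv=%s",
             platform.node(), sys.version.split()[0], os.getcwd(), sys.argv)

    # 2. The input data (or at least its shape).
    with open("input.json", encoding="utf-8") as fh:
        records = json.load(fh)
    log.info("input records=%d first_keys=%s",
             len(records), sorted(records[0]) if records else [])

    # 3. The path the code took, and why.
    if not records:
        log.warning("nothing to do, exiting early")
        return
    summary = process_batch(records)

    # 4. What the code sent back as a response.
    log.info("done at %s summary=%s",
             datetime.now(timezone.utc).isoformat(), summary)


if __name__ == "__main__":
    main()
```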

      I want that information stored in a repository so I can consult it when (not if) there's a problem in production, because invariably, weeks, months, or even years after I wrote that code, I'm going to get a call to tell me that my code doesn't work anymore and could I fix that. Invariably, I ask what changed and, almost invariably, the answer is "nothing changed, your script is broken".

      Yeah, sure, because I wrote chameleon code that overwrites itself. Pull the other one, etc. But you don't say that to the customer, do you? Not when you're a freelance developer, in any case.

      So I know I have my logs. I ask permission to go on site and get access to the application. In the customer's environment, I access said logs and trace the activity back to where it was working properly, then I take the next log and find out, normally rather quickly, where the issue is.

      Correcting the issue may be easy, or it may be hard, but I can print out that log and point to it as to why the code isn't working anymore (because you changed the date format of the server, doofus).

      That, to me, is diagnostic data. Since I do not go and post it in The Cloud™, it is only accessible on-site and, therefore, as well protected as the client's server is (aka security is not my problem).

      What happens after varies and is irrelevant to my point, which is: with logs, I spend at most 15 minutes finding out what went wrong. Without logs, it would take hours, if not days, just to find out what the issue is - especially when the customer doesn't want me accessing production data.

      So I like diagnostics. They've saved my bacon (and my time) more times than I care to count, and they make me more efficient.

      I'm just hoping that the term means the same thing for Borkzilla.

      1. Anonymous Coward

        Re: good grief

        That's the approach some developers take because they don't trust the code they write and there is no proper error management in place. It's the classic "printf()" approach, in languages that fear exceptions and their proper management (there's a reason C and Go are lame languages), and because proficient use of debuggers and profilers is unheard of. So you just wait for the application to do something wrong and then peruse the logs. I've seen applications spending more time logging what they do than doing something useful, with so many log statements within the code that it was quite difficult to follow the useful code path. Also, logging, if not done asynchronously, can slow everything down a lot. Writing robust code is far different from logging everything continuously, especially since logs can also capture sensitive data and store it in unsafe places. Sure, there may be a need to run an app in "debug" or "verbose" mode sometimes, but that should not be the default. When an error occurs, capture the application state and store it.
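
        Something along these lines, as a sketch (Python, purely illustrative; run_pipeline and its data are invented, not anyone's real code): let exceptions propagate, and only when one reaches the top-level boundary, serialise a single snapshot of the relevant state.

```python
# Sketch of "capture the state when an error occurs" instead of logging
# everything (Python for illustration; run_pipeline and its data are invented).
import json
import traceback
from datetime import datetime, timezone


def run_pipeline(config, records):
    """Placeholder for the real work; raises on failure instead of logging."""
    return [r["value"] * config["scale"] for r in records]


def capture_failure_state(exc, **context):
    """Write one snapshot of what mattered at the moment of failure."""
    snapshot = {
        "when": datetime.now(timezone.utc).isoformat(),
        "error": repr(exc),
        "traceback": traceback.format_exc(),
        # Only what is needed for diagnosis, truncated, not raw sensitive data.
        "context": {k: repr(v)[:200] for k, v in context.items()},
    }
    with open("failure_snapshot.json", "w", encoding="utf-8") as fh:
        json.dump(snapshot, fh, indent=2)


def main():
    config = {"scale": 2}
    records = [{"value": 1}, {"value": 2}, {}]   # the last record will blow up
    try:
        result = run_pipeline(config, records)
    except Exception as exc:   # top-level boundary: capture once, then fail loudly
        capture_failure_state(exc, config=config, record_count=len(records))
        raise
    print(result)


if __name__ == "__main__":
    main()
```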

        Moreover, while applications today log and track everything, they don't really look any more performant or any more robust for it.

        1. Pascal Monett Silver badge

          I trust the code I write. The customer agrees it works in the acceptance phase.

          It's the inevitable changes that happen afterwards that I don't trust.

          "I've seen application spending more time logging what they do instead of doing something useful"

          You haven't seen my code.

        2. Jadith

          Re: good grief

          Your code isn't perfect or perfectly resilient. It needs to keep logs so issues can be identified and sorted in a timely manner.

          OFC, if you lack confidence in your code, not logging is obviously the way to go. It lets you maintain the idea that your code is perfect and that any issues must come down to hardware, network, or OS, because there is no proof the problem is in the code. QED.

        3. Filippo Silver badge

          Re: good grief

          The problem with that is that, in complex systems, the root cause of a problem frequently lies quite far from where the actual exception condition occurs, and is also something that looks innocent at the time. By the time you can detect there's a problem, the state that would have shown you the root cause is long gone. If you're lucky, you might be able to replicate the problem in debug mode, but that's not at all a given, and even if it were - well, if you had logged the innocent stuff a few minutes before, you wouldn't need to.
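
          One way to square that, as a sketch (Python, with invented names, not anything I'd claim is the answer): keep a small rolling buffer of the "innocent" events and only write it out when an exception finally surfaces, so the earlier context isn't long gone by then.

```python
# Sketch: remember recent "innocent" events in a bounded buffer and dump them
# only when something actually breaks (Python; all names here are invented).
from collections import deque
from datetime import datetime, timezone


class RollingDiagnostics:
    """Keeps the last N events so the root-cause context is still around
    when the exception finally shows up somewhere else."""

    def __init__(self, capacity=200):
        self._events = deque(maxlen=capacity)

    def note(self, message, **details):
        self._events.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "message": message,
            "details": details,
        })

    def dump(self):
        return list(self._events)


diag = RollingDiagnostics()


def load_config(path):
    diag.note("loading config", path=path)
    # The innocent-looking root cause: a default date format silently kicks in.
    config = {"date_format": "%d/%m/%Y"}
    diag.note("config loaded", config=config)
    return config


def parse_order_date(text, config):
    diag.note("parsing order date", text=text)
    return datetime.strptime(text, config["date_format"])  # fails much later


if __name__ == "__main__":
    config = load_config("app.ini")
    try:
        parse_order_date("2024-06-01", config)   # upstream changed the format
    except ValueError:
        # Only now do we look back at what happened earlier.
        for event in diag.dump():
            print(event)
        raise
```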

          Also, there are cases where the system is behaving exactly as it's supposed to, the customer is complaining anyway, and you have to prove he used it wrong. Again, if you only ever record diagnostics when an error condition occurs, or when the system is explicitly run in debug mode, you won't solve those situations easily.

  2. Anonymous Coward

    Good news like this one

    only amplifies the good feeling I have after erasing Windows 11 from my shiny new laptop just yesterday. What do I really care what the guys at Microsoft think about human rights, the right to privacy, fairness to customers, or responsible thinking about the world's future? Right now the only outcome of Windows development is a total spying tool aimed at unsuspecting consumers. Corporations, like people, grow old, and sometimes they have to be taken care of. For the good of everyone concerned.

  3. eldakka Silver badge

    "That means, in compliance terms, that Microsoft will process the Windows diagnostic data but the organization will control it, with their IT admins being the ones responding to their end users' data questions."

    WTF? I'm the data controller for something that Microsoft demands be collected? Fuck off. That's what I'd tell my workplace too. They'd need to assign a business unit - a legal one - to oversee that at the very least, not the IT administrators. IT admins are neither the controllers nor the processors; they're just the ones following orders to configure it a certain way. Whoever is issuing those orders is the controller.

    Just disable telemetry/diagnostics, problem solved.

    1. Missing Semicolon Silver badge

      "Microsoft will process"

      Where? And, if the data is resident on servers that Microsoft controls, it is subject to the CLOUD Act, wherever the machine may physically be.

      This is a fig-leaf intended to get around the legal framework in the EU, and it will last only until Mr Schrems gets going...

      1. Strahd Ivarius Silver badge

        Re: "Microsoft will process"

        and it doesn't address the concerns of EU subsidiaries of non-EU companies, since data will be stored at a location related to the parent company, violating both the spirit and the letter of the GDPR.

        Microsoft Germany's data needs to be stored in Germany and not be accessible by Microsoft US, nor by any spook they're obliged to give access to their systems under US law!
