Re: Depressing
"data can be assembled at a local node, that can in turn notify and request an admin if it is okay to send the report to the provider"
I hate to have to ask, but have you ever operated a service that scales to hundreds of thousands or millions of servers? Yes, that data gets aggregated into a control plane and data archival service (which is itself almost always some kind of scale-out service running on hundreds or thousands of servers). No, it is not realistic to ask a human to sit there and tick thousands of boxes every day to indicate whether it's OK to send off crash dumps ranging from a few MB to tens of GB each, never mind expecting that human to manually review the contents before doing so. Once you scale beyond a couple dozen servers, there are only two options: an automated process or no process. That is what "possibly sanitised in some well-specified way" means: it is possible to limit what is collected in some manner satisfactory to the customer.
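To make that concrete, here is a rough sketch of what the node-local step could look like. Every name in it (ALLOWED_SECTIONS, queue_for_upload, the section names) is made up for illustration, not any real product's interface:

    # Hypothetical sketch: sanitise a crash dump according to a
    # customer-defined allowlist, then queue it for upload. No human
    # in the loop per dump; the policy decision is made once.
    import json

    # The customer decides, in policy, which dump sections may ever
    # leave the machine.
    ALLOWED_SECTIONS = {"backtrace", "registers", "loaded_modules", "os_version"}

    def sanitise(dump: dict) -> dict:
        """Keep only the sections the customer's policy permits."""
        return {k: v for k, v in dump.items() if k in ALLOWED_SECTIONS}

    def queue_for_upload(report: dict) -> None:
        # Stand-in for handing the report to the aggregation pipeline.
        print(json.dumps(report))

    raw = {
        "backtrace": ["frame0", "frame1"],
        "registers": {"rip": "0x4005d0"},
        "heap_snapshot": "<potentially sensitive>",  # stripped by policy
        "os_version": "example-os 1.2",
    }
    queue_for_upload(sanitise(raw))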
"There is never a valid reason to just automatically track what is happening on other peoples devices."
I agree, and you need to go back and read what I wrote instead of reflexively assuming I'm saying it's fine for software providers to spy on you.
First: We aren't talking about people's devices; the case I'm making for automated collection of crash dumps applies only when the operator is a corporation. The "devices" in question are not your phone or laptop; they are corporate-owned servers sitting in data centres.
Second: I already agreed that this kind of service should be opt-in with informed consent. That is not "just automatically track[ing]" anything. If you don't want it, don't turn it on. The customers who buy and operate the kind of software I'm referring to definitely do want it.
Third: I already agreed that this method is neither necessary nor appropriate to Audacity and other interactive applications used by individual humans. Did you even read?
Fourth: The case I was making was not for "tracking what is happening" in general. You're imagining that, for example, every keypress and mouse click is being recorded, which is a thing some evil corporations do with interactive software (and in fact what I fully expect Muse want to do with Audacity), but that is not what I'm talking about. The software for which this kind of automated collection makes sense is NON-INTERACTIVE, which means there are no keyboards or mice, and the kind of data I'm suggesting should be collected is limited to errors and crashes. At this scale, collecting every individual normal event is extremely expensive, and while the operator may choose to do that (and aggregate it) for its own purposes, it would never make sense to want that sent to the software supplier. That's not what customers want; even if they were comfortable having their own or their customers' data sent out at that level of detail (they aren't), it would be prohibitively expensive.
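For the avoidance of doubt, here's a toy illustration of that boundary. The severity levels and names are invented for the example; the point is only that routine events never become report candidates:

    # Minimal sketch: only error- and crash-level events are ever
    # eligible for vendor reporting; everything below the threshold
    # stays in the operator's own aggregation.
    from enum import IntEnum

    class Severity(IntEnum):
        DEBUG = 0
        INFO = 1
        ERROR = 2
        CRASH = 3

    REPORTABLE = Severity.ERROR  # nothing below this is eligible

    def eligible_for_vendor_report(severity: Severity) -> bool:
        return severity >= REPORTABLE

    events = [
        (Severity.INFO, "request served"),       # stays local
        (Severity.ERROR, "write failed: EIO"),   # eligible
        (Severity.CRASH, "segfault in worker"),  # eligible
    ]
    for sev, msg in events:
        if eligible_for_vendor_report(sev):
            print("report candidate:", msg)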
Fifth: To the extent that this data can contain personally identifiable information or information that is confidential to individuals, GDPR and other data protection laws apply and must be followed. The SCC (Standard Contractual Clauses) dodge is nonsense in this case, as anyone knows perfectly well that data sent to Russia cannot be adequately protected by any private contract. At no time did I suggest ignoring the law to collect crash dumps *without informed consent*, *without an opt-in*, or *without appropriate processes in place to protect and manage that data*.
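And to be clear about what "appropriate processes" can mean in practice, here is a deliberately simplified sketch of one such measure, redaction before anything is stored or transmitted. The field list and pattern here are illustrative only; a real deployment would use vetted tooling and a field list approved by whoever owns compliance:

    # Hedged sketch: scrub likely-personal data from a report before
    # it leaves the operator's control.
    import re

    PII_FIELDS = {"username", "hostname", "client_ip"}  # assumed names
    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def redact(report: dict) -> dict:
        clean = {}
        for key, value in report.items():
            if key in PII_FIELDS:
                clean[key] = "[REDACTED]"
            elif isinstance(value, str):
                clean[key] = EMAIL_RE.sub("[REDACTED]", value)
            else:
                clean[key] = value
        return clean

    print(redact({
        "error": "mail to ops@example.com bounced",
        "client_ip": "192.0.2.10",
        "exit_code": 139,
    }))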
Get a grip. I am advocating for *opt-in* automated collection of *error reporting data* for *non-interactive software* run only by *corporations* and then only *at scale*, performed *in a manner compliant with the law*. If your position is seriously that no software vendor should ever be allowed to have its software send data about its own erroneous behaviour back to itself, we'll simply have to agree to disagree, because you are wrong and that position does the customer no favours.