Re: What can the numbers tell us?
"Given the long and well recorded history of patches for Windows (of all or a particular version), can statistical analysis (and other maths) tell us roughly how many vulnerabilities there are that still need patching? I have a feeling that it would be a scary number."
Unfortunately, I'd err on the side of "no", because there are too many variables to allow for a useful estimate:
* lack of knowledge about security/testing standards and whether these have been enforced consistently across releases
* significant difference in scope across different versions (number of architectures supported natively by the OS, the degree to which security is a focus, the degree to which network connectivity is ingrained in the OS, the development lifetime, etc)
* significantly, a lack of proof that the distribution of vulnerabilities is uniform throughout the code
* lack of knowledge as to whether the introduction of patches introduces other vulnerabilities
* changes in the approach to service packs skewing the numbers (NT4 got 6 SPs, Win2K got 4, XP got 3, Vista got 2, 7 got 1, 8 and 8.1 didn't get any, and 10 looks set to change the whole approach anyway).
You could maybe get some sort of average values for:
* how many vulnerabilities (possibly even grouped by broad categories) had affected previous releases at an equivalent point in time after RTM
* how many vulnerabilities have been found in total over its supported lifespan
and use these to make very crude estimates about the relative security of the current release. But there's no mathematically sound basis for giving those estimates any more weight than a number someone makes up...
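To make the crudeness of that approach concrete, here's a minimal sketch of the calculation being described. All the release names and vulnerability counts below are hypothetical placeholders, not real CVE data, and (as argued above) the result carries no statistical weight:

```python
# Crude "average rate, then extrapolate" estimate, as described above.
# WARNING: all counts here are made-up illustration values, NOT real data.

def vulns_per_year(counts_by_release: dict) -> dict:
    """Map release -> (vuln count, years since RTM) to a per-year rate."""
    return {name: count / years for name, (count, years) in counts_by_release.items()}

def crude_estimate(rates: dict, years: float) -> float:
    """Average the per-release rates and extrapolate over `years` years."""
    avg_rate = sum(rates.values()) / len(rates)
    return avg_rate * years

# Hypothetical historical releases (count of vulns, years of support):
past_releases = {
    "Release A": (400, 10.0),  # -> 40 per year
    "Release B": (300, 6.0),   # -> 50 per year
}
rates = vulns_per_year(past_releases)
print(crude_estimate(rates, 5.0))  # (40 + 50) / 2 * 5 = 225.0
```

Note how the sketch silently assumes exactly what the bullet points above dispute: a uniform discovery rate over time, comparable scope between releases, and patches that never introduce new bugs.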