So because M$ built Flash into Edge, I am still vulnerable until the end of 2020. Nice going, Satya!
https://support.microsoft.com/en-us/help/4520411/adobe-flash-end-of-support
It's going to be a busy month for IT administrators as Microsoft, Intel, Adobe, and SAP have teamed up to deliver a bumper crop of security fixes for Patch Tuesday.

Redmond weighs in just under the century mark

Microsoft had one of its largest patch bundles in recent memory, as the Windows giant released fixes for 99 CVE- …
Since I looked it up, I might as well cut and paste it.
1. Click the menu button in Edge. It's the three dots in the upper right corner.
2. Select Settings from the menu.
3. Click the "View advanced settings" button. You'll have to scroll down a little bit to find it.
4. Toggle "Use Adobe Flash Player" to off.
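If you'd rather not click through the UI on every machine, the same setting can apparently also be driven through the "Allow Adobe Flash" Group Policy value for legacy Edge. A minimal Python sketch, assuming the usual HKLM policy path and the value name FlashPlayerEnabled (both worth double-checking against Microsoft's policy documentation; needs an elevated prompt):

import winreg

# Policy key for legacy (EdgeHTML) Edge add-ons; path and value name are
# assumed from the "Allow Adobe Flash" policy -- verify before deploying.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\MicrosoftEdge\Addons"

# Create (or open) the key under HKLM and set the DWORD to 0 (= disabled).
with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    winreg.SetValueEx(key, "FlashPlayerEnabled", 0, winreg.REG_DWORD, 0)

print("Flash policy set to off for legacy Edge; restart the browser.")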
Well, that's weird. I'm running the latest Chromium-based Edge, which has Flash permanently turned off. As far as I know, nothing else is using Flash: I certainly didn't install the damn piece of crap myself, it's definitely not listed under Apps in Settings or in the older Control Panel Programs and Features, and there's no Adobe folder under Program Files either.
So why is Microsoft still offering me the Flash update (KB4537789)? Any ideas?
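For what it's worth, the copy that KB4537789 services appears to be the Flash ActiveX control that ships inside Windows itself, under the Macromed folders in System32/SysWOW64 rather than Program Files, which would explain why it never shows up in Apps or Programs and Features. A quick sketch to check (the paths are the usual ones, but treat them as an assumption):

import os

# Locations where Windows bundles the Flash ActiveX control (64- and
# 32-bit); these are serviced by Windows Update, not by an installer.
candidates = [
    r"C:\Windows\System32\Macromed\Flash",
    r"C:\Windows\SysWOW64\Macromed\Flash",
]

for path in candidates:
    if os.path.isdir(path):
        print(path, "->", os.listdir(path))
    else:
        print(path, "-> not present")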
"We’ve all enjoyed using sites with it over two decades."
Speak for yourself. In my experience, Flash was used to produce sites that were slow to load and content-free when they arrived. I decided years ago that disabling Flash was actually a smart move because it flagged up all the sites created by that mindset, which I could then avoid.
Perhaps El Reg would care to commission, from a suitable expert, an article intended to put code vulnerabilities and non-trivial bugs into perspective. The flow of singular reports (i.e. with no obvious connections between them) in technical and general news media is hard to assess: one may ask to what extent it corrects a hitherto ascertainment bias (lack of interest in the topic), and to what degree it reflects a growing problem; in particular, there is the matter of whether avoidable commonalities underlie these events.
Specific questions to be posed include the following.
1. Has computer science come up with a workable and measurable conception of complexity in computer code? Obviously, sheer length of code is an inadequate measure, because the interconnectedness of code segments and the possible pathways through them ought to be taken into account. (A toy example of one such measure is sketched at the end of this comment.)
2. Has any such measure been established as strongly (putatively causally) correlated with rates of occurrence of errors in released/deployed code?
3. How is any such complexity measure influenced by efforts, over the decades since digital computing was introduced, to wall off (e.g. modularise) sections of code? Are lessons being ignored?
4. Is there a 'Tower of Babel' effect when sections of code in a complicated set of interrelating code-segments/programs are compiled from differing high-level languages?
5. Is there insufficient separation between core operating system code and that of applications running on it? Similarly, are applications bundled as part of an operating system (e.g. tasks mediated by the human interface) becoming too interconnected to be of predictable behaviour?
6. Is there too much reliance upon accretions around 'legacy' code, with consequent issues of backwards compatibility? For instance, during the past couple of decades coding options for developers and expectations among end-users have grown apace. Moreover, hardware capabilities are increasing so rapidly that 'legacy' code, which may have entailed compromises and workarounds for hardware inadequacies, now impedes the reliability and security of newly added code.
7. Are proprietary software vendors, through excessive concern for their 'intellectual property' (IP), obstructing progress toward helpful common standards and the use, by themselves and others, of code known to be trustworthy? Might there be a better way of conducting business and protecting rights/attribution? For instance, why must IP be protected at code level rather than just at end-product level? Trademark law offers redress when a company passes itself off as another of established reputation. What does it matter if the ABC operating system or office suite DEF (each compiled from source), vended by the long-standing company XYZ, starts to be distributed by another company ZYX, also compiling from source but with possible variations and enhancements? Company ZYX would be in the wrong if it claimed its version of the software was ABC or DEF: this is because software is generally not sold as a one-off but as part of a brand package which includes customer support and other add-on features. At the high end of the market, e.g. large businesses and government institutions, proven reliability in support, fixes, and updates will win against a cheaper identical (but not in name) version of less secure provenance.
It follows that major proprietary software vendors, many of international reputation, place themselves at little risk of sustaining losses outweighing advantages from working under a more liberal regimen. Newcomers, even if drawn from other major software houses, have a long uphill trek establishing themselves as trustworthy and reliable alternatives for products and associated services of long standing. Meanwhile, originators of successful software (plus services) can entice their customer bases with appealing innovations.
In a nutshell, it could be that software reliability overall would be enhanced by a combination of coding practices drawn from the best currently known, together with openness about established code, so that attempts to reproduce its functionality with different code (in the hope of avoiding copyright and patent disputes) do not introduce new errors.
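On question 1, one long-established candidate measure is McCabe's cyclomatic complexity, which counts independent paths through a routine rather than raw length. A toy sketch of the idea, my own illustration using Python's standard ast module (real tools are far more thorough):

import ast

# AST node types that open an extra path through the code. (A strict
# McCabe count would weigh boolean operators per operand; this is a toy.)
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Return an approximate complexity score per function in `source`."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Base complexity is 1; each decision point adds a path.
            branches = sum(isinstance(child, DECISION_NODES)
                           for child in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

example = '''
def f(x):
    if x > 0 and x < 10:
        for i in range(x):
            print(i)
    return x
'''
print(cyclomatic_complexity(example))  # {'f': 4}: base 1 + if + and + for

As to question 2, measures of this family have been studied against defect rates since the 1970s, with decidedly mixed conclusions, which rather underlines the point of asking.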