It's 2017... And Windows PCs can be pwned via DNS, webpages, Office docs, fonts – and some TPM keys are fscked too
Microsoft today released patches for more than 60 CVE-listed vulnerabilities in its software. Meanwhile, Adobe is skipping October's Patch Tuesday altogether. Among the latest holes that need papering over via Windows Update are three vulnerabilities already publicly disclosed – with one being exploited right now by hackers to …
COMMENTS
-
Wednesday 11th October 2017 10:23 GMT oiseau
Future headlines
You don't have to be a future-telling savant to know that one of the prominent Microsoft-related headlines appearing in The Register in the next few years will continue to be:
"It's 20XX... And Windows PCs can be pwned via DNS, webpages, Office docs, fonts – and some TPM keys are fscked too"
It *will* save a lot of work for the editor, mind you.
Just has to copy/paste and then replace XX with the proper number.
In any case, just like with this one headline, absolutely no one will be surprised and (uncanny beyond belief) quite a few members of the El Reg readership will be wondering why it takes so long to patch their (still) vulnerable MS install.
Cheers.
-
Wednesday 11th October 2017 17:39 GMT Ken Hagan
Re: 2XXX
No way will we need that third X. Microsoft have no new products that look capable of sustaining their historic position within the industry. They've given up on "devices" and they've largely lost on servers. They survive on desktops on the strength of their ability to run programs from a decade or so ago, but the result of *that* is that the current version of Windows is almost crushed under its own weight of back-compat crap.
They aren't dead yet, but in 2025 we may look back at 2017 and say "Yeah, the signs were already there.".
And to the naysayers who point to the cash pile, I say only that it is all virtual money, and that another company (probably not Apple, Google or Amazon, although they are probably big enough) will eventually have a big enough pile of its own to *buy* Microsoft for its IP and promptly shut down the day-to-day operation as an act of mercy.
-
Wednesday 11th October 2017 07:53 GMT Anonymous Coward
By the time I read this, it was waiting to be installed.
Yeah, it's always nice when I can start the day doing paperwork because my work system is wasting power and bandwidth performing yet-another-f*cking-update. Thank God we haven't managed the paperless office yet, or I'd be sitting there for a couple of hours.
I can recall the days when someone was selling us a Windows update with the statement that it would increase our productivity. I note with interest that they're very careful to avoid that argument now.
-
Tuesday 10th October 2017 23:20 GMT Anonymous Coward
Disgorge
Microsoft spew again into the bucket that is Windows. No wonder it smells: 30 (THIRTY) years of vomit needs one big malodorous bucket.
Running Windows is negligent, maybe the next big hack "victim" will sue MS as their crap no doubt will feature prominently in the corporate rampage.
Feel free to downvote, lemmings, but be sure to open your eyes lest you press the upvote icon.
-
Sunday 15th October 2017 01:59 GMT Kiwi
Re: Disgorge
Soooooooooooooooooooooooooo Tedious
Not as tedious as the MS fanbois etc defending something that should be indefensible.
In 1995, when the net was just starting to come to life, this stuff might have been forgivable. But by now any decent SW company knows not to write code that allows the opening of a document to take control of a computer.
"you need to up the dosage mate"
Would love to know what you're taking that lets you sleep at night while you're defending this garbage.
(Oh, seems to set you off, so: mickey$oft, windoze, losedoze, microsucks, M$ etc etc etc... Hope you have a padded cell nearby...)
-
Thursday 12th October 2017 06:22 GMT bombastic bob
Re: Old vs New Bugs
"I get the feeling that they do not test as thoroughly as they used to."
they don't test at all. they fired their testing staff 2 years ago, during the insider program for Win-10-nic. They're entirely relying on 'insiders' and people who get the first run of patches. that's why there are forced updates, to make SURE they get their patches tested by the unfortunate saps who risk bricking their new, shiny machines that came with Win-10-nic.
-
Wednesday 11th October 2017 06:17 GMT wallaby
Re: Fonts and Windows..
"I had the error message "a TrueType font caused a general protection fault in the module setup.exe" when installing Windows once. Must have been Win98 (SE?)."
I had something similar to that installing Win 98 last year ..... after a failed hard drive trashed the disk and the backup of it had been lost under a deluge of sea water two years previously..... it was the fish in the open backup safe that did it.
-
Wednesday 11th October 2017 11:48 GMT Anonymous Coward
Re: Who designed this then?
"Why the hell did They design an OS that lets programmers embed code in a FONT?"
The TrueType engine contains an interpreter that executes its own instruction set to adjust how fonts are rendered at different sizes on different resolutions. It's a complex process. The 8 x 8 grid of bits is long gone!
https://www.microsoft.com/en-us/Typography/SpecificationsOverview.aspx
☐☐☐☐☐☐☐☐
☐☐◼︎◼︎◼︎◼︎☐☐
☐◼︎◼︎☐☐◼︎◼︎☐
☐☐☐☐◼︎◼︎☐☐
☐☐☐◼︎◼︎☐☐☐
☐☐☐☐☐☐☐☐
☐☐☐◼︎◼︎☐☐☐
☐☐☐☐☐☐☐☐
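For the curious, here is a minimal sketch of what "code in a font" means in practice. The opcodes and their semantics below are invented for illustration (the real TrueType instruction set is far richer), but the shape is the same: glyph data drives a little stack machine inside the renderer, and every missing bounds check in that loop is a potential hole.
```c
/* Hypothetical sketch of a stack-based hinting interpreter, loosely in the
 * spirit of TrueType's instruction engine (NOT the real opcode set).
 * The point: glyph "data" drives a little virtual machine, so a missing
 * bounds check here becomes a read/write primitive for a malicious font. */
#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH = 0x00, OP_ADD = 0x01, OP_MOVE_POINT = 0x02, OP_END = 0xFF };

#define STACK_MAX 32

static void run_hints(const uint8_t *prog, size_t len, int32_t *points, size_t npoints)
{
    int32_t stack[STACK_MAX];
    size_t sp = 0, pc = 0;

    while (pc < len) {
        switch (prog[pc++]) {
        case OP_PUSH:                                    /* push a signed byte operand */
            if (pc >= len || sp >= STACK_MAX) return;    /* bounds checks... */
            stack[sp++] = (int8_t)prog[pc++];
            break;
        case OP_ADD:
            if (sp < 2) return;
            stack[sp - 2] += stack[sp - 1];
            sp--;
            break;
        case OP_MOVE_POINT:                              /* adjust point[index] by delta */
            if (sp < 2) return;
            /* ...that attackers probe: forget this one and a crafted font
             * writes outside the points array. */
            if ((uint32_t)stack[sp - 2] >= npoints) return;
            points[stack[sp - 2]] += stack[sp - 1];
            sp -= 2;
            break;
        case OP_END:
            return;
        default:
            return;                                      /* unknown opcode: bail out */
        }
    }
}

int main(void)
{
    int32_t points[4] = { 10, 20, 30, 40 };
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 5, OP_MOVE_POINT, OP_END };
    run_hints(prog, sizeof prog, points, 4);
    printf("point 2 is now %d\n", points[2]);            /* prints 35 */
    return 0;
}
```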
-
Wednesday 11th October 2017 11:47 GMT Anonymous Coward
Re: Who designed this then?
That's what's needed to have nice-looking fonts able to scale on any output device, unfortunately. People would complain about bitmap fonts enlarged for their 4K display, I'm afraid.
Anyway, in fonts like TrueType the culprits are both Apple and Microsoft - actually the hinting engine was an Apple patent. But other font rendering engines are not that different.
The real issue is not that fonts have code inside - it's how safe the rendering engine processing that code is. And still, the rendering pipeline must be very fast, or people will complain that font rendering is slow.
-
Wednesday 11th October 2017 21:06 GMT Anonymous Coward
"if computers had totally separate data and executable storage"
Actually, x86 protected mode can define what memory segments are for (executable, readable, writable....) - it's just that no mainstream OS ever used that, because of the complexity. AMD even removed the feature in x64. Yet it's exactly what would be needed now to write secure systems.
Anyway, fonts today are a sort of program - their execution should be strictly controlled, though. Again, the Intel four-ring model would allow for better separation of privileges, but again nobody used it, because most other CPUs had only two rings (and anyway, ring transitions are costly).
In a four-ring model you could have the true kernel running at ring 0, for example, while I/O could work at ring 1. It could still directly access the hardware (with a proper IOPL setting), but would not be able to access and modify ring 0 data.
One day those features will be sold as a new, great breakthrough in computer security.... just like the cloud - the mainframe model reborn - looks so "disruptive"...
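As a rough sketch of where those rights live, here is the access byte of a protected-mode segment descriptor (heavily simplified: the real descriptor is 8 bytes, and base, limit and granularity are ignored here). The point is that the privilege ring (DPL) and what the segment may be used for are encoded together:
```c
/* Sketch: how an x86 protected-mode segment descriptor's access byte
 * encodes both the privilege ring (DPL) and what the segment may be
 * used for. Simplified -- only the access byte of the descriptor is shown. */
#include <stdint.h>
#include <stdio.h>

#define SEG_PRESENT   0x80u               /* bit 7: segment is present                */
#define SEG_DPL(r)    (((r) & 3u) << 5)   /* bits 6-5: descriptor privilege level     */
#define SEG_CODEDATA  0x10u               /* bit 4: code/data (not a system segment)  */
#define SEG_EXEC      0x08u               /* bit 3: executable (code segment)         */
#define SEG_RW        0x02u               /* bit 1: readable (code) / writable (data) */

static uint8_t access_byte(unsigned ring, int executable, int rw)
{
    return (uint8_t)(SEG_PRESENT | SEG_DPL(ring) | SEG_CODEDATA |
                     (executable ? SEG_EXEC : 0) | (rw ? SEG_RW : 0));
}

int main(void)
{
    printf("ring-0 kernel code (exec+read) : 0x%02X\n", (unsigned)access_byte(0, 1, 1)); /* 0x9A */
    printf("ring-1 driver code (exec+read) : 0x%02X\n", (unsigned)access_byte(1, 1, 1)); /* 0xBA */
    printf("ring-0 execute-only code       : 0x%02X\n", (unsigned)access_byte(0, 1, 0)); /* 0x98 */
    printf("ring-3 user data  (read/write) : 0x%02X\n", (unsigned)access_byte(3, 0, 1)); /* 0xF2 */
    return 0;
}
```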
-
Thursday 12th October 2017 06:30 GMT bombastic bob
Re: "if computers had totally separate data and executable storage"
"AMD even removed the feature in x64"
you sure about that? I'm pretty certain that x64 has executable and non-executable page flags...
edit: found this quote on wikipedia
"The No-Execute bit or NX bit (bit 63 of the page table entry) allows the operating system to specify which pages of virtual address space can contain executable code and which cannot. An attempt to execute code from a page tagged "no execute" will result in a memory access violation, similar to an attempt to write to a read-only page. This should make it more difficult for malicious code to take control of the system via "buffer overrun" or "unchecked buffer" attacks. A similar feature has been available on x86 processors since the 80286 as an attribute of segment descriptors; however, this works only on an entire segment at a time."
https://en.wikipedia.org/wiki/X86-64
thought so
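A small demonstration of that page flag in action, assuming a Linux/x86-64 box and plain POSIX calls: the same bytes cannot be jumped to until mprotect() clears the no-execute bit on their page.
```c
/* Sketch (Linux/x86-64): the NX bit in action. A page mapped without
 * PROT_EXEC cannot be jumped to; flipping the permission via mprotect()
 * is what makes the same bytes runnable. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42; ret */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Writable but NOT executable: the NX bit (bit 63) is set in the PTE. */
    unsigned char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;
    memcpy(page, code, sizeof code);

    /* Calling (int (*)(void))page HERE would die with SIGSEGV on any
     * NX-capable machine. Clear NX (and drop write, W^X style) first: */
    if (mprotect(page, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    int (*fn)(void) = (int (*)(void))page;
    printf("returned %d\n", fn());   /* prints 42 */
    return 0;
}
```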
-
Thursday 12th October 2017 07:37 GMT Anonymous Coward
" I'm pretty certain that x64 has executable and non-executable page flags.."
It was a stopgap, introduced when it became clear the flat memory model was risky, but it is much inferior to the segment protection model. All it can do is mark some memory pages as non-executable; nothing stops a page from being both writable and executable, and executable memory can still be read from everywhere.
A segment could be executable without even being readable. That means the CPU can load and execute the instructions, but a process - without high privileges to create an alias segment - can't read (or worse, write) the segment contents, e.g. to set up a ROP chain. You could have read-only segments - no way for a process to modify their contents.
One issue is that compilers have the bad habit of mixing instructions and some static data (and sometimes even non-static), while using the segmentation model properly would require segments for code, segments for constants, and segments for variable data. Also, because segments have a size limit - which is checked on every access - any buffer overrun or the like could be contained.
It may be a model not suited to interpreters (especially for highly dynamic languages) and VMs, where code is delivered as data - these should be sandboxed anyway, because they are inherently less secure than compiled code.
The price to pay was a far more complex model, and loading segments and calling across them was slow precisely because of all the security checks involved.
Instead of removing all the protection layers to speed up the CPU, the path should have been to speed up the protection mechanism. We'll return there, because it's one of the few ways to make systems more robust.
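To make the contrast concrete, here is a sketch of the entire per-page rights vocabulary that x86-64 paging offers (present, writable, user, no-execute). There is no combination that means "may be executed but not read", which is exactly what an execute-only segment could express:
```c
/* Sketch: the whole per-page "rights vocabulary" on x86-64. A page-table
 * entry gives you present / writable / user / no-execute -- and that's it.
 * A present page is always readable. */
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT  (1ULL << 0)    /* page is mapped                 */
#define PTE_WRITABLE (1ULL << 1)    /* writes allowed                 */
#define PTE_USER     (1ULL << 2)    /* accessible from ring 3         */
#define PTE_NX       (1ULL << 63)   /* AMD64 no-execute bit           */

static void describe(uint64_t pte)
{
    if (!(pte & PTE_PRESENT)) { puts("not mapped"); return; }
    printf("read: yes  write: %s  exec: %s  user: %s\n",
           (pte & PTE_WRITABLE) ? "yes" : "no",
           (pte & PTE_NX)       ? "no"  : "yes",
           (pte & PTE_USER)     ? "yes" : "no");
}

int main(void)
{
    describe(PTE_PRESENT | PTE_USER);                         /* typical .text page: readable AND executable */
    describe(PTE_PRESENT | PTE_USER | PTE_NX);                /* .rodata page: readable, not executable      */
    describe(PTE_PRESENT | PTE_USER | PTE_WRITABLE | PTE_NX); /* heap/stack page                             */
    /* Note: an executable page is always also readable -- convenient for ROP gadget hunting. */
    return 0;
}
```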
-
Thursday 12th October 2017 11:27 GMT Roland6
Re: " I'm pretty certain that x64 has executable and non-executable page flags.."
"One issue is that compilers have the bad habit of mixing instructions and some static data (and sometimes even non-static)"
Don't remember having that problem with PL/M, but then PL/M did require the programmer to have some knowledge of segmentation, so it was the programmer's decision whether to mix instructions, static data and dynamic data.
I suspect the compiler problem is down to people wanting to use high-level languages and hence their compilers to solve everything, rather than accept that there are times where assembler (and hence some understanding of machine/platform architecture) is the right choice.
-
Wednesday 11th October 2017 22:19 GMT Mike 16
VN Blaming.
Von Neumann gets both too much credit ("accidentally" circulating a group report with only his name on it) and too much blame, for two reasons:
1) The machine described in the (in)famous paper was to an extent "tagged". That is, each word had a bit (the setting of which was left as an exercise to the reader, but was part of the program loading process) to distinguish instructions from data. Not some modern sort of "throw an exception far enough up that the code that finally catches it knows sod-all about the context", but "If you store to an instruction, only allow the address part to be modified" and "If you execute data, treat it as a 'load immediate'". Instruction modification was needed because B-Boxes had not yet been invented in the UK, nor (as index registers) patented by IBM in the US.
2) Even doing a stronger separation of code and data (e.g. the NX bit fixing the elision of segment-based control on the way to pages) gets you only so far. Your JVM may be immutable code, but it will be interpreting "data" (byte-codes) from who knows where, manipulating other data, probably all in one bit-soup "for efficiency".
If you want _real_ separation, look into the Fairchild Symbol Computer. Even the compiler was "hardware".
-
Thursday 12th October 2017 06:28 GMT bombastic bob
Re: Who designed this then?
"if computers had totally separate data and executable storage this wouldn't be a problem"
harvard architecture. common for microcontrollers.
in the x86 world, proper memory management would prevent writing anything that's executable. there are flags for that. I guess Win-10-nic isn't using them enough.
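As a sketch of what "using the flags" looks like from user land on Windows (standard Win32 calls, nothing Win-10-specific assumed), the usual discipline is that memory starts writable but not executable, and only gets flipped to executable - and no longer writable - once the code is in place:
```c
/* Sketch (Win32): the "never writable and executable at the same time"
 * pattern. Pages start PAGE_READWRITE while code is copied in, then get
 * flipped to PAGE_EXECUTE_READ before they are ever jumped to. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42; ret */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    unsigned char *buf = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE,
                                      PAGE_READWRITE);        /* writable, NOT executable */
    if (buf == NULL) return 1;
    memcpy(buf, code, sizeof code);

    DWORD old;
    /* Flip to executable and simultaneously drop write access (W^X). */
    if (!VirtualProtect(buf, 4096, PAGE_EXECUTE_READ, &old)) return 1;
    FlushInstructionCache(GetCurrentProcess(), buf, sizeof code);

    int (*fn)(void) = (int (*)(void))buf;
    printf("returned %d\n", fn());   /* 42; calling before the flip would trip DEP */
    return 0;
}
```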
-