Bad news: Google drops macOS zero-day after Apple misses bug deadline. Good news: It's fiddly to exploit

Google has publicly disclosed a zero-day flaw in Apple's macOS after the Cupertino mobe-maker failed to fix the security shortcoming within the ad giant's 90-day deadline. The vulnerability itself is relatively minor in terms of danger: it allows malware already running on your Mac, or a rogue logged-in user, to potentially …

  1. LDS Silver badge

    Retaliation for the certificate suspension...

    Once again, Google is weaponizing its Project Zero findings...

    1. gnasher729 Silver badge

      Re: Retaliation for the certificate suspension...

On a typical Mac there is one user. If malware breaks into that user account, everything of interest is lost to the attacker. Escalating to the OS doesn't gain the attacker anything.

      People with more than one user on their Mac must make sure the other users are careful of course.

    2. Charlie Clark Silver badge

      Re: Retaliation for the certificate suspension...

I don't agree: I think Google's security team is given the licence to do the research and is required to treat all results the same, not least because lots of Googlers use Macs for work and should all be considered potential targets.

      But the real test will be when someone else is able to do the same with some of Google's own software such as Android or GMail.

      1. LDS Silver badge

        "required to treat all results the same,"

        Still, Spectre/Meltdown findings were kept secret for well over 90 days until ElReg disclosed them - because they could potentially impact all Google systems as well...

Moreover, most Google software - and the most important of it - runs only inside Google, so it's difficult for outsiders to examine it the way Google can examine MS or Apple commercial software.

Sure, you can try a pen test on GMail, but they can easily lock you out if they spot it. With Android it's a little easier, though most of its code is not Google's anyway.

Sure, Google uses Macs as well, so they aren't as free with Apple as they are with Microsoft, and it's no surprise this vulnerability is not one you can easily exploit remotely - so the perfect one for a little retaliation...

    3. Anonymous Coward
      Anonymous Coward

      Re: Retaliation for the certificate suspension...

      In the same way Microsoft did it to Google, in the same way Apple did it to Microsoft and Google, in the same way Facebook did it to Microsoft...

      Let's not pretend any of these companies are any better or worse than the other... If anything, you should be directing your anger at Apple for not fixing their leaking (closed source) bug ridden platforms.

      1. LDS Silver badge

        "Apple for not fixing their leaking (closed source) bug ridden platforms."

        Most Google software is closed source as well - as it runs on its systems only and you have no clue about what bugs it has and how long they take to fix them. Or you mean you have the code of its search engine? Or its advertisement system?

An arbitrary deadline is simply stupid (and of course they didn't use it for Meltdown/Spectre... for fear of being vulnerable themselves) because some bugs are easy to fix, while others take longer because they require bigger changes and more testing.

  2. Franco Silver badge

    Regardless of whether or not these exploits are low risk or hard to exploit, it is still a dick move by Google to disclose them if their own arbitrary deadline for a patch isn't met.

    1. Charlie Clark Silver badge

      In many respects 90 days is too long. If Google can find the exploit, then it's reasonable to assume that the various even better resourced national security agencies and organised online gangs will have found it, too.

Security exploits will no doubt in time be considered product defects much like, say, parts in a car. Should someone discover a fault with them, should they hold back and expect the manufacturer to fix the problem at some point? Currently, most software is exempt from strict product liability as long as it can provide software updates. Such disclosures are, in my opinion, essential should the exemption continue.

      1. Franco Silver badge

        Fine, then publicly disclose that they have found an exploit and it has not been patched. DON'T post the exploit on the internet.

        Using your own example, what if a car manufacturer had a flaw that allowed cars to be stolen and that was publicly revealed by the person that discovered it?

        1. Charlie Clark Silver badge

          Using your own example, what if a car manufacturer had a flaw that allowed cars to be stolen and that was publicly revealed by the person that discovered it?

          Easy: under the terms of strict liability the manufacturer is liable for all product defects, at least in America. But note that this is also one of the reasons for Chapter 11, see the recent "bankruptcy" of a power utility in California.

          1. Franco Silver badge

            "at least in America"

There are 1 or 2 other countries in the world. Sticking with your example again though, the US is known to be somewhat litigious. If Google keep disclosing other companies' exploits and someone manages a successful use of one in the wild, you can guarantee there will be lawyers hammering at the chocolate factory's door.

            1. Charlie Clark Silver badge

              I deliberately chose the US because of strict liability and class action suits. These along with some rather comical rulings (warnings on hot drinks cups, or that sharp knives are dangerous) have repeatedly forced companies to change their behaviour. Apart from a few (mainly DMCA) exceptions, the manufacturer is always liable, so your example will never lead to Google being sued by customers.

      2. Anonymous Coward
        Anonymous Coward

        If Google can find the exploit, then it's reasonable to assume that the various even better resourced national security agencies and organised online gangs will have found it, too.

        There is also the possibility that they don't have to find it themselves - Google could just sell them their discovery. Nice little extra earner besides the global mass surveillance service they already have going.

        1. Charlie Clark Silver badge

Google's reputation for security is bound to be worth much more than what it could earn from these kinds of exploits. In other areas, cf. Dragonfly, it has already demonstrated its willingness to work with the authorities.

    2. Anonymous Coward
      Anonymous Coward

      it's an industry deadline, and Apple have done exactly the same several times in the past.... So I guess they are dicks too by that measure? of course not, Apple are lovely people and Google is this advertising machine, because the internet told you that's what you should think. #moron

      1. CrazyOldCatMan

        what you should think. #moron

        "quod loquor, et vobis".

(Apologies to Latin scholars - what little Latin I possessed wasn't up to the task so I had to use G-Translate..)

    3. Carpet Deal 'em

      The reason for the deadline is that companies have proven time and again they won't patch security flaws even if made aware of them. Threatening to take the matter public lights a fire under their ass and dramatically increases the odds it'll be fixed.

  3. Peter Gathercole Silver badge

    Is this a problem?

    I'm struggling to see why this is a problem, if it is working exactly as described and affecting memory-mapped files.

Setting aside the memory-mapped element of this exploit: if two processes have the same file open, one for read and one for write or read/write, and the writing process writes to the file, then a subsequent read of that region by the other process should return the new data (at least on a Posix-compliant file system; for cluster filesystems like Lustre this may not hold).
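That read/write coherence is easy to see in a few lines of Python (a minimal sketch of my own, standard library only; the scratch file is illustrative):

```python
import os
import tempfile

# Scratch file shared by two independent descriptors.
fd, path = tempfile.mkstemp()
os.close(fd)

with open(path, "wb") as f:
    f.write(b"old data")

writer = open(path, "r+b")   # opened for read/write
reader = open(path, "rb")    # opened for read only

assert reader.read(8) == b"old data"

# The writer updates the region the reader has already seen...
writer.seek(0)
writer.write(b"new data")
writer.flush()

# ...and a re-read through the other descriptor observes the new
# bytes, as Posix read/write coherence requires on a local filesystem.
reader.seek(0)
final = reader.read(8)
print(final)  # b'new data'

writer.close()
reader.close()
os.unlink(path)
```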

Even taking the memory-mapped nature of the file into consideration, if the underlying file on disk is altered, I would expect the cached pages of the mapped file to be invalidated when the corresponding block is written to disk, so that if a page is referenced again it is fetched fresh from disk and picks up the change.
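On a typical modern Unix that expectation holds because a MAP_SHARED mapping and ordinary reads and writes go through one unified page cache. A Python sketch under that assumption (the reported bug is precisely a case where this bookkeeping can be subverted):

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"AAAA")

# A MAP_SHARED read-only mapping of the same four bytes.
m = mmap.mmap(fd, 4, flags=mmap.MAP_SHARED, prot=mmap.PROT_READ)
assert bytes(m[:4]) == b"AAAA"

# Rewrite the file through the ordinary write(2) path; because the
# mapping and the file share the unified page cache, the mapped view
# picks up the change without any explicit re-read.
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, b"BBBB")

after = bytes(m[:4])
print(after)  # b'BBBB'

m.close()
os.close(fd)
os.unlink(path)
```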

    I suspect that the problem is not to do with memory-mapped files, but more to do with demand-paging (or swapping on non-paging Unixes) of the text space of executable binaries. In this case, if a page of the text of a process is aged or pushed out (or never loaded in the first place), rather than being copied to the swap space, it is assumed that the text image on disk will not change, and can thus be re-loaded from the disk.

I know that Mac OS does not run a Unix kernel, so it may actually handle paging differently, but the Unix model for this has been well understood since before Mac OS X existed. Memory-mapped files and the mmap() call were first described (but not implemented) in the BSD 4.2 documentation, first implemented in SunOS 3.2, and made it mainstream via SVR4 into Posix. Mach did implement mmap() before Apple used it in Mac OS X.

For archaic Unix systems (I saw this behavior on Unix Edition 7), the file on disk would not be allowed to be changed while it was currently executing (you got an ETXTBSY error even if you were root and had write permission).

On more current Unixes, the currently executing old file is unlinked from the filesystem, and a new copy of the file is created (with a new i-node, and possibly using copy-on-write) to hold the modified contents. The old, unlinked file remains physically on disk, just no longer linked into any directory, so that pages of the running process that have not yet been loaded can still be read back in exactly as they were when it started. The reference count in the in-core copy of the i-node describing the executable is incremented each time the executable is run, and decremented when a copy of the process finishes. When that count drops to zero as the last process exits, the file is finally deleted from the filesystem and the space released back to the free map. A new invocation of the program will pick up a fresh copy of the file from disk. (I'm ignoring sticky text; that is so old nobody uses it any more.)
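That unlink-but-keep-the-i-node behavior is observable from user space with an ordinary open file descriptor (a Python sketch; the tempfile path is illustrative):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"still here")

# unlink() removes only the directory entry; the i-node and data
# blocks survive while any descriptor (or executing image) still
# references them -- which is how a replaced binary keeps its old
# pages available for demand paging.
os.unlink(path)
assert not os.path.exists(path)

os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 10)
print(data)  # b'still here'

os.close(fd)  # last reference gone: the kernel now frees the space
```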

    I suspect that it is this latter behavior with the text of executable files that is the problem, not memory mapped files opened for read or write.

When it comes to scripts or other interpreted files, I'm not sure the situation is the same. I would expect that the script itself would be read off disk in its entirety and held in the data segment, and paged out to the paging disk as data if the real memory were needed, but I could be wrong here, and I've not got the time to read any source. I would welcome comments from anybody with information about how the contents of interpreted scripts are held in memory during execution.

    1. Jim Mitchell

      Re: Is this a problem?

      @Peter Gathercole

"I know that Mac OS does not run a Unix kernel" - that might be true, but MacOS is "UNIX 03" certified. Is the behavior you describe as happening on "more current Unixes" part of any standard, or just what was implemented?

      Also, this issue is rather similar to the recent Docker issue, where the binary is overwritten.

      1. Peter Gathercole Silver badge

        Re: Is this a problem?

        "More recent Unixes" is a relative term compared to Unix Edition 7, which is ancient.

        The un-linking of running executables is probably the behavior of Unixes with demand paging (as opposed to swapping), so SVR3 (SVID issue 2) or later for AT&T Unixes, and probably BSD 4.2. I'm talking mid-1980s. It worked like I described it purely to prevent the type of flaw that appears to be reported here.

        The behavior of changing running executables is unlikely to be documented in any of the SVIDs or Posix standards, as these tend to document interfaces, not implementations. I no longer have access to any internal System V design documentation, so I don't know whether there was any documented design for this area of Unix.

        What I described was what I've observed in SVR2 and SVR3, and later in AIX and SunOS. I found the behavior curious, so looked into exactly what was going on (and at the time I had access to the System V source as part of my job at AT&T).

        Some of the behavior I describe is documented on the unlink(2) and exec(2) man pages, although I have not found the behavior for running files documented. I did find this stack exchange question, which describes what I've said in other words.

I also remember the behavior of mmap() being described in quite some detail at the SVR4 developer conference in 1988, which went into concurrent access to memory-mapped files. I do still have my notes, so I could look that up, but I think there should be some SunOS and/or BSD documentation around on the 'net somewhere.

        But I was positing that the vulnerability was not with mmap(), so the mmap() documentation is not really appropriate here.

        Hope this is of some interest,
