Epyc fail? We can defeat AMD's virtual machine encryption, say boffins

German researchers reckon they have devised a method to thwart the security mechanisms AMD's Epyc server chips use to automatically encrypt virtual machines in memory. So much so that, they said, they can exfiltrate plaintext data from an encrypted guest via a hijacked hypervisor and simple HTTP or HTTPS requests. AMD's data-center …

  1. YetAnotherJoeBlow

    Here we go again

    Since manufacturers all look at each other's patents and reverse engineer each other's chips (I personally know that two companies do), one would think that someone would say: wait a minute, why don't we throw this out in a repo and see what becomes of it?

    Encryption is not IP anymore. It's a commodity. It's really time to stop all this foolishness before consumer rage catches up with those manufacturers. Let's get it right and stop this embarrassment.

    Companies can no longer hide their failures in microcode.

    1. Brian Miller

      Re: Here we go again

      why don't we throw this out in a repo and see what becomes of it.

      Hello? Hardware? Built into the chip??

      All of this stuff is on the CPU die. Nobody is going to go and fab up a monster server chip for grins and giggles and have a go at it.

      It's quite possible that some of these security problems were reviewed by their teams, and they figured that they'd have to make a trade-off. Perhaps AMD can do something with a microcode update, perhaps not.

      1. YetAnotherJoeBlow

        Yes, hardware.

        @ Brian Miller

        Put the microcode up for public comment. Eventually, the code will get to a good starting point. The next time silicon is etched, burn this new code in. No trade secret there. Like I said, they pretty much all use similar tactics - and engineers do jump ship. Perhaps one of the reasons why Meltdown impacted all the major chips in very similar fashion, no?

        1. Mike Pellatt

          Re: Yes, hardware.

          Perhaps one of the reasons why Meltdown impacted all the major chips in very similar fashion, no?

          Indeed not. It's because the Meltdown vuln and similar are an inevitable result of the execution-time optimisations common across the x86 arch (and likely also to show up in any CISC execution-time optimisation in some form or another - were there any other CISC arch left around...)

          1. RandSec

            Re: Yes, hardware.

            "the meltdown vuln and similar is an inevitable result"

            Hardly "inevitable": Current AMD processors are not vulnerable to Meltdown, and strongly resist Spectre. Running an updated OS and new microcode should increase that resistance.

        2. Brian Miller

          Re: Yes, hardware.

          @ YetAnotherJoeBlow

          But it has to be executed to be tested! That's the problem with code: a lot of errors only manifest at run time. And from poking my nose into the research paper, this seems like it's the result of some kind of race condition. Something like this is unlikely to be evident from reading the source.

          Also, I don't know of either AMD or Intel ever publishing their microcode, for anything.

          1. Pascal Monett Silver badge

            Publishing microcode ? What are you smoking ?

            That will simply never happen, no more than Google will publish its Page Rank system or FaceBook will publish anything at all.

            1) Microcode is how everything can actually happen, and it is the equivalent of the Crown Jewels. It is the reason a processor does what it does, and you do not want your competition to see what you are doing or how you are doing it.

            2) Microcode is difficult to grep, and there are not all that many eyes available to check it out - plus, most of those eyes are working for the competition anyway.

            1. teknopaul Silver badge

              Re: Publishing microcode ? What are you smoking ?

              This reads like a rant about C source code in 1980.

              "Given enough eyes, all bugs are shallow" even applies to hardware.

              1. Peter2 Silver badge

                Re: Publishing microcode ? What are you smoking ?

                "Given enough eyes, all bugs are shallow" even applies to hardware.

                Heartbleed et al have shown quite dramatically that the number of eyes is irrelevant; it's the quality of the brain behind the eyes that matters.

      2. P. Lee

        Re: Here we go again

        >Hello? Hardware? Built into the chip??

        It would have been prudent to put the code out there in an emulator format before they baked it in.

        But regardless, put your $%^#^ VM on-prem, not on someone else's kit.

        It is far cheaper and far safer than everything you need to do to mitigate the stupid cloud decision.

      3. Anonymous Coward
        Anonymous Coward

        Re: Here we go again

        "All of this stuff is on the CPU die. Nobody is going to go and fab up a monster server chip for grins and giggles and have a go at it."

        Surely chip development tools include detailed virtual emulation of a proposed chip. While it might be slower than silicon, you could run a thousand test cases in parallel.

    2. John Smith 19 Gold badge

      Companies can no longer hide their failures in microcode.

      I think you'll find companies have been doing exactly that for a very long time.

  2. CheesyTheClown

    The attack can only be partially mitigated

    So long as there's a means to provide plaintext memory access to virtual machines for communication with something other than the virtual machine itself (the hardware or the hypervisor, for example), it will always be possible to alter the SLAT to choose which memory to encrypt and which to leave in the clear.

    I hadn't considered this attack vector earlier, but now that it's in the open, it's obvious that there is no possible way to create a walled garden suited to this, as there will always have to be gates.

    Let's not overlook that an additional attack vector would be to pause scheduling of the VM, allocate a new virtual page, inject it into the SLAT marked as clear text, then push code into that page and find a means to trigger it - through the VM network driver, for example.
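The remapping trick described above can be shown with a toy model. This is plain Python, not a real hypervisor interface; the page table, addresses and page contents are all made up for illustration:

```python
# Toy model of the SLAT-remapping attack sketched above. The hypervisor
# controls the guest-physical -> host-physical mapping, so it can silently
# swap which host page backs a guest address. Every name and address here
# is illustrative; no real hypervisor API is being used.

# Host memory: encrypted guest pages live alongside hypervisor-owned pages.
host_memory = {
    0x1000: "ciphertext-A",   # victim guest page (encrypted in RAM)
    0x2000: "ciphertext-B",
    0x9000: "attacker page",  # page the hypervisor controls
}

# Second Level Address Translation: guest-physical -> host-physical.
slat = {0x0: 0x1000, 0x1: 0x2000}

def guest_read(gpa):
    """What the guest sees when it reads one of its own pages."""
    return host_memory[slat[gpa]]

# Normal operation: guest page 0x1 is backed by host page 0x2000.
assert guest_read(0x1) == "ciphertext-B"

# A malicious hypervisor remaps guest page 0x1 to its own page, so a
# service inside the guest (say, a web server answering HTTP requests)
# unknowingly serves hypervisor-chosen memory instead of its own.
slat[0x1] = 0x9000
assert guest_read(0x1) == "attacker page"
```

The point of the model is only that whoever edits the mapping decides what the guest's "own" memory contains, encryption notwithstanding.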

    There's that attack vector too... it should be possible to exploit the VM virtual NIC driver. VMXNET3 is a famously bad driver. After doing a code audit on Linux of VMware's kernel drivers, I transitioned from VMware because there were so many completely obvious security holes that I couldn't run my servers in good faith on the platform. There was that and the $800,000 in licenses I was paying for it... which everyone else just gives away for free now.

    So, the real trick would be to inject a VIB on VMware which would allow code injection through VMXNET3, or the video driver, which is even better, as there's a wide-open window to inject shaders into OpenGL or DirectX, which is almost certainly being run as a MesaGL software rasterizer or WARP.

    This would be perfect... create a clear text page, trigger a window size change to trigger resolution change. Provide the clear text page as the frame buffer to the guest... and voila, there's a clear path to start uploading code for graphics rendering. This will likely not work well with NVidia Grid, but there are like 5 people in the world using that.

    haha... this article was great.... now that I know that it counts as an attack if you attack the guest from the host, it opens an endless barrel of worms.

    I need to update my CV to say "Security Researcher" and hack some VIBs together. It's not even a challenge.

    1. Warm Braw Silver badge

      Re: The attack can only be partially mitigated

      If the implementation of Secure Virtual Machines depends in any way on AMD's memory encryption support (and the API documentation linked from the article doesn't make this clear), I suspect it may in principle be vulnerable to side channel attacks (like Spectre/Meltdown).

      The last time I checked the specs, AMD's memory encryption applies only to main memory and not to the caches. If there were a side channel attack based on cache timing, you could potentially use it as a way to bypass the encryption unless the cache contents were somehow tied to the VM ID.

      It may well not be possible in practice - and there's been a lot of work done on preventing side channel attacks in recent months - but each feature you add to a CPU also increases the potential attack surface.
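The cache-timing worry above can be illustrated with a toy simulation. The latencies and addresses are invented; real attacks (Flush+Reload and friends) time actual loads on hardware, but the principle that "cached or not" leaks access patterns survives memory encryption:

```python
# Toy illustration of the cache-timing concern: memory encryption protects
# data sitting in RAM, but whether an address is *cached* still leaks.
# Latencies here are made up for the simulation.

CACHE_HIT_NS, CACHE_MISS_NS = 10, 100

cache = set()

def access(addr):
    """Return a simulated access latency, then cache the line."""
    latency = CACHE_HIT_NS if addr in cache else CACHE_MISS_NS
    cache.add(addr)
    return latency

# Victim (inside the encrypted VM) touches a secret-dependent address.
secret = 3
candidate_addrs = [0x1000 * i for i in range(8)]
access(candidate_addrs[secret])

# Attacker probes every candidate address and times each access; the lone
# fast hit reveals which index the victim used, without reading any data.
timings = {a: access(a) for a in candidate_addrs}
recovered = min(timings, key=timings.get)
assert recovered == candidate_addrs[secret]
```

This is why tying cache contents to the VM ID (or flushing on world switch) matters: the side channel is orthogonal to the encryption itself.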

    2. Ken Hagan Gold badge

      Re: The attack can only be partially mitigated

      Whether this attack can be mitigated or not is not really the point. If you don't trust your host, then relying on this protection means assuming that no-one will ever figure out an attack that works. That's a big assumption. The risk is small, but presumably any successful attack can be automated and rolled out in industrial fashion. If you have sensitive data then your options are "trust your VM host" or "buy your own iron".

      Using Amazon is certainly very convenient (and flexible) for folks like me who don't really care (and whose actual needs for processing and capacity vary wildly from day to day). However, if you are running a major business operation and hope to continue doing so over the long-term, you probably ought to be running your own data centres. Either the cost of computing is significant, in which case the security of that computing is probably a life-or-death issue for the whole business, or it isn't significant, in which case you can afford to pay a little over the odds and call it insurance.

      1. Charles 9 Silver badge

        Re: The attack can only be partially mitigated

        So what if it's significant but inconsistent? Coming in surges too infrequent to house iron yet too important when they do come to trust anyone else?

      2. CheesyTheClown

        Re: The attack can only be partially mitigated

        Not really about the host.

        If there's an attack vector available to a VM from the host (which I'm confident there always must be, given the thought process I followed above), then the issue is whether it's possible to always mitigate attacks from the guest to the host. And they should be, by employing the old dynamic recompilation support that was used in hypervisors to trap things like legacy inb/outb instructions.

        As such, it’s whether someone can hop contexts and read memory of other guests on the same host.

        I make a huge effort to encrypt sensitive data (like keychains) in TPM when I’m coding. But so far as I know, there is still no solid TPM virtualization tech.

      3. registered-on-register

        Re: The attack can only be partially mitigated

        or use a dedicated host?

      4. StargateSg7

        Re: The attack can only be partially mitigated

        I agree! You buy your OWN BIG IRON !!! We've got custom servers out the wazoo to ensure we NEVER have to rely upon OTHER people to safeguard our data. To mitigate this sort of attack, ALWAYS run your "special application", be it a database, a web server, etc., on a SEPARATE MACHINE that is itself firewalled from even the INTERNAL network (we use BOTH a network firewall appliance AND a second multi-homed server setup, aka two network cards on separate IPV6 addresses) on dual 40 Gigabit Ethernet connections, which allows TWO devices in serial to deep-packet-inspect ALL incoming AND outgoing packets before the main application, web server or other internally networked LAN machine even GETS to process data!

        The OUTSIDE ROUTER and GATEWAY NEVER even see the internal server because the internal server is itself firewalled off from the rest of the INTERNAL LAN by the internal and separated network appliance firewall AND a multi-homed in-between server! This gives TWO EXTRA LEVELS of protection. At 80 gigabits, the slowdown during the two-layer deep packet inspection/multi-homed firewall is a measly 2% of overall network speed, and we can mitigate that by ALWAYS using DUAL 40 gigabit network cards on each internal LAN machine to create a SUMMED ETHERNET communications pipeline for each machine.

        YES! This is a bit extreme... BUT... when you work in a company where our video file processing and data transfers end up being in the MULTI-PETABYTES PER DAY, it makes sense to ENSURE you have 100% uptime by using multiple layers of data protection AND extra communications lines (i.e. two Ethernet ports aggregated together to get 80 Gbits/sec) for each internal machine.

        1. CheesyTheClown

          Re: The attack can only be partially mitigated

          Deep packet inspection is generally not worth much. Unless your deep packet inspection engine can sandbox all code and all data that passes through it, it will never be able to provide better security than proper endpoint protection.

          Deep packet inspection doesn't offer anything more than rate-limiting the nonsense traffic. For that, it's certainly worth it. Whether you're using Snort-based Cisco products or pfSense... or whatever, there is value.

          That said, I actually come from a broadcast video background. I spent last evening speaking about SDI forward error correction and non-return-to-zero with a fellow engineer and my 14-year-old daughter. The other guy and I worked together for years developing chips and firmware for those things.

          I'd be pretty hard pressed to see any circumstance where there would be any value in an IPS on video content delivery channels. I certainly could never identify a circumstance where there's any value in 40Gb/s networking, unless you're buying into the looney-tunes nonsense Cisco started by trying to sucker their customers into buying 10Gb/s networking for delivering content that could be delivered at 800Mb/s with almost no compression (as in 1.5Gb/s SDI, which carries about 1.1Gb/s of actual data and can easily compress below 1Gb/s without loss or latency issues).

          If you're a CDN, you're scaling up when you should be scaling out. That's putting a lot of eggs in one basket. It's a very 1990's-2000's way of thinking. It didn't scale then, it doesn't scale now.

          Of course, I'm purely speculating on your design, but even if you're a big production studio handling lots of multi-camera ingest, you are probably way too over-provisioned. Also, if you're doing layered security, you should never be in a circumstance where you'd need to inspect more than a few megabytes a second of traffic.

          But again, I'm speculating. Every design usually has a reason other than "we like to spend money"... but these days, with the advent of all the SMPTE members pushing for uncompressed (idiots) because it allows them to make A LOT MORE MONEY, a lot of people are falling for it.

          1. StargateSg7

            Re: The attack can only be partially mitigated

            The so-called over-provisioning was done on purpose!

            Our boss is a techo-gearhead who has the money to do it "Because We Can"!

            We are using Audio/Video-over-IPV6 packets exclusively, so SDI ingest is only on the cameras and decks (multiple RED Monstros, 8x8 and 16x16 camera arrays of 4K full-frame cameras running at 60 fps uncompressed RGBA32, a few Sony XDcams, F65s/F55s, Arri Alexas, and about 50 other systems). With editing and camera work, we are doing Exabytes per day over the internal network and will double that within 6 months! 80 Gbits/sec is actually on the low end (too slow for our needs) because our daily INTERNAL/EXTERNAL connections are UNCOMPRESSED RAW or LITE-RAW files, which is getting into the multi-Exabyte range.

            We also have a server farm in Northern British Columbia, Canada, which is fed every few minutes with geophysics data at about 15cm per pixel (about 6 inches per pixel) resolution, so our satellite datasets are on the order of 64k by 64k image tiles, and that is already maxing out the multi-line leased fibre.

            When you're dealing with up-to-two-minutes-long video datasets that are 65536 pixels wide by 65536 pixels long by 2048 pixels high of RGBA-32, that is 35,184,372,088,832 BYTES PER TILE (35 terabytes!) and multiply that by 60 fps, you are looking at 2,111,062,325,329,920 bytes (TWO+ PETABYTES)! PER SECOND !!! It's a good thing we have PARALLEL fibre pipelines because NO SDI connection or video-oriented copper connection can do that sort of data transfer bandwidth!
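For what it's worth, the arithmetic quoted above does check out, and a few lines of Python make the scale concrete (the 80 Gbit/s comparison link is taken from the earlier post in this thread):

```python
# Sanity-checking the figures above: a 65536 x 65536 x 2048 volume of
# 4-byte RGBA-32 voxels, streamed at 60 fps.
bytes_per_tile = 65536 * 65536 * 2048 * 4
assert bytes_per_tile == 35_184_372_088_832          # ~35 TB per tile

bytes_per_second = bytes_per_tile * 60
assert bytes_per_second == 2_111_062_325_329_920     # ~2.1 PB per second

# For comparison, an 80 Gbit/s aggregated link moves 10 GB/s,
# so a single such link falls short by roughly five orders of magnitude.
link_bytes_per_second = 80e9 / 8
shortfall = bytes_per_second / link_bytes_per_second
print(f"one 80Gb link is ~{shortfall:,.0f}x too slow")
```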

            If I remember correctly, our admins have said we are on par with, and may even exceed, some of the large telecom operators in terms of overall bandwidth being gobbled up by our server systems! We assign one graphics card to each sub-tile of a 2048 by 2048 by 32 RGBA-32 pixel 3D image dataset and then process in parallel, so I think you can do the math on how many graphics cards we have in our newest Northern BC server farm (64k+).

            The point is...that even 80 Gbits/sec per connection is NOT ENOUGH BANDWIDTH for some applications!

            It seems that every year, our LAN/WAN network communication bandwidth gains get eaten up by SQUARED increases in dataset sizes!


            P.S. YES we are COMPLETELY UNDER THE RADAR, so no-one knows about us!

    3. Aodhhan

      Re: The attack can only be partially mitigated

      I call BS on your claim of being a security researcher.

      Exploiting the virtual NIC. Do you understand the concept of targeting the resources in memory? If you did, you'd laugh at what you are saying.

      Also, this isn't an attack, per se. It's a peephole which isn't plugged.

  3. KD_

    Off topic, kinda. It seems there is a tendency not only to make bigger and smarter machines but also more resistant to admins. Just saying

  4. rav

    Complete security is a MYTH; there is ALWAYS the human element to bugger up the works.

    "...rogue host-level administrator,"


    If a "rogue host-level administrator" is in charge of your network then you have bigger problems. Why did you hire the wanker in the first place?

    There is NO SUCH THING as total security and there never will be.

    1. Snowy Silver badge

      Re: Complete security is a MYTH; there is ALWAYS the human element to bugger up the works.

      Indeed, if you have that level of access, security becomes a fig leaf. Sure, it seems to cover up the private stuff, but with a poke in the right place it is all laid bare.

    2. Brewster's Angle Grinder Silver badge

      Re: Complete security is a MYTH; there is ALWAYS the human element to bugger up the works.

      "Why did you hire the wanker in the first place?"

      Because the company psychic was off sick that day.

    3. Adam 1

      Re: Complete security is a MYTH; there is ALWAYS the human element to bugger up the works.

      > If a "rogue host-level administrator" is in charge of your network then you have bigger problems.

      So where do AWS or Azure sit in your threat model here?

      If it helps, imagine there is a country out there; let's call this place Murika, which believes that its laws apply to all other countries. Let's call the other mysterious places Notmerika, and let's pretend that they have their own governments, laws and legal frameworks. Notmerika has certain laws that govern the treatment of data of its citizens and companies. These laws restrict what data an organisation may collect and with whom it may be shared, including how law enforcement can, through legal mechanisms like subpoenas, force the organisation to hand over data.

      If the host can access the guest's memory in a decrypted state, then it becomes practically certain that they will be subpoenaed by a Murikan court to produce contents from the guest which would otherwise have required the appropriate paperwork be passed to the Notmerikian authorities.

      Two classes of people should care about this:

      1. Murikans who hope to sell their cloud services in Notmerika; and

      2. Notmerikians who want to run services for other Notmerikians whilst complying with Notmerikian law.

      1. Claptrap314 Silver badge

        National Security Boundaries

        The rule is always: If you don't want laws from country X to affect your business, don't do business with entities from country X. See: EU vs Brexit. See: US vs World. See: China vs World. See: Russia vs World.

        If you do business with AWS, then American law applies to the custodians of your code & data. What exactly is supposed to happen next?

        1. Adam 1

          Re: National Security Boundaries

          Really!? I must have missed Google bowing to New Zealand law and suppressing that name.

          What does Apple Maps call the Spratly Islands? How is China with that call? What about the Philippines or Vietnam?

          How very quaint of you to think that these companies structure their legal entities and technical responsibilities such that those outposts have no capability to comply with demands made by those companies.

          Let's not even get into whether China accepts your right to publish certain political commentary, or whether YouTube should depict women driving cars, as prohibited in some backwaters from which a lot of your oil comes.

          If AWS has a bunch of bit barns across western Europe that become illegal to use for servicing European citizens due to GDPR or something, they will have no choice but to sell the bricks and mortar to some European company who isn't subject to American law. This was my very first point.

          1. Claptrap314 Silver badge

            Re: National Security Boundaries

            You appear to be in violent agreement with me. Do business in country X, abide by the rules of country X. Do business in countries X, Y, and Z, better figure out how to abide by the rules of all. If you can't, well, that's life.

            I don't like non-state actors becoming more powerful than major states. I like them being broken up along national lines.

  5. Temmokan

    So once again there was no scrupulous study of the aforementioned encryption/protection before pushing it into production...

  6. ExpatZ

    None issue.

    Notice that the exploit is a hijacked HYPERVISOR, that's really not an issue.

    Anyone can own a machine once they own the root or underlying architecture, having physical access is a ticket to pretty much anything given enough time.

    What it doesn't allow is one pwned guest to hijack another, and THAT is the line of security that matters.

    That the AMD chip makes it HARDER for the BOFH to mess with your stuff is a bonus; the BOFH was never actually locked out, as he/she has control at the root level and, given enough time and skill, would pwn it all anyway if they were bothered to, which most of us really aren't.

    So this is another nothingburger, although knowing for sure means I can lock my hypervisor's system down a little harder to make pwning it that much more difficult for an outsider.

    1. diodesign (Written by Reg staff) Silver badge

      Re: None issue

      "Notice that the exploit is a hijacked HYPERVISOR, that's really not an issue."

      The whole selling point of SEV is to thwart hijacked hypervisors and evil administrators. It was a selling point AMD pushed for cloud and off-prem platforms. According to this research, it may not live up to the marketing.

      AMD made a big deal out of SEV in its Epyc and Ryzen Pro marketing and advertising. It's only right that it is scrutinized, just like Intel's SGX was.


      1. Anonymous Coward
        Anonymous Coward

        Re: None issue

        Which does raise the question of where the researchers' allegiance lies and who paid them. We can't have AMD looking too good compared to Intel now, can we?

        1. Anonymous Coward
          Anonymous Coward

          Re: None issue

          "Which does raise the question of where the researchers' allegiance lies and who paid them?"

          As the article says, it's the Fraunhofer Institute. Paranoia?

          1. Bryan_S

            Re: None issue

            Is the research, which looks oddly similar to what Intel researchers published, paranoia or due diligence?

          2. RandSec

            Re: None issue

            "Intel, Fraunhofer cooperate in embedded systems" March 24, 2011


            1. Peter2 Silver badge

              Re: None issue

              That is a very good bit of detective work gentlemen, congratulations.

              One might also observe that the naming of the exploit (SEVered) seems rather similar to another set of vulnerabilities published under the name "RyzenFall".

              Almost as if a competitor is trying to damage a rival's brand name to prevent them from gaining market share, isn't it? Also strange how difficult it is to get business desktops with AMD chippery in them from the companies that were found to have been paid by Intel to exclude AMD from the market last time around...

      2. Gideon 1

        Re: None issue

        " It was a selling point AMD pushed for cloud and off-prem platforms."

        Meh. Cloud customers often don't know what iron they are running on, or even whether security protections are enabled. You have to assume cloud is insecure, because encryption (other than one-time pads) only ever slows down an attacker.
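The one-time-pad exception mentioned above is easy to demonstrate; this toy sketch uses Python's standard `secrets` module for the pad, and its security rests entirely on the pad being truly random, as long as the message, and never reused:

```python
# The one-time-pad exception: XOR with a truly random, never-reused pad
# as long as the message is information-theoretically secure. Every
# plaintext of the same length is equally consistent with the ciphertext,
# so extra attacker time buys nothing.
import secrets

def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) == len(data), "pad must be exactly as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = secrets.token_bytes(len(msg))

ct = otp_xor(msg, pad)
assert otp_xor(ct, pad) == msg   # decryption is the same XOR

# Reusing a pad is fatal: XORing two ciphertexts cancels the pad and
# leaks the XOR of the two plaintexts, hence "one-time".
```

The impracticality, of course, is that the pad must be distributed securely and is as big as the data, which is why everything else settles for merely slowing attackers down.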

  7. Anonymous Coward
    Anonymous Coward

    Congratulations to the person who picked the name

    This is the day the joke kicks in.

  8. Anonymous Coward

    So we have to change what we say

    Encryption brings a false sense of security and allows people to believe they are safe in an ever-increasing epidemic of hacking.

    Well, all matter is hackable; learn from that. Some things provide a temporary impediment and slow hackers down a bit, holding them off until after D-Day.

    For anything that stores data, connects to anything else, or is accessible to anyone:

    Do not say that encryption will protect you, your data or access.


  9. Anonymous Coward
    Anonymous Coward

    Snake-oil anyway

    Shrug. IBM, interestingly, did prove a while ago, as an esoteric but surprising computer science result, that it is possible for an untrusted third party to compute stuff on your behalf securely: so-called "fully homomorphic encryption". It was shown to be possible, but efficient implementation remains an area of active research; only rather inefficient implementations exist so far.
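The homomorphic idea can at least be glimpsed with a classroom example: textbook (unpadded) RSA is multiplicatively homomorphic, so someone holding only ciphertexts can compute an encrypted product. To be clear, this is a toy with deliberately tiny numbers, unpadded RSA is insecure in practice, and full FHE (arbitrary computation) is a far more involved construction:

```python
# Toy multiplicative homomorphism with textbook RSA: multiplying two
# ciphertexts yields a ciphertext of the product of the plaintexts.
# Tiny classroom parameters -- NOT secure, purely illustrative.
n, e = 3233, 17    # n = 61 * 53; public exponent e
d = 2753           # private exponent: (17 * 2753) % 3120 == 1

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
c = (enc(a) * enc(b)) % n     # the computing party touches only ciphertexts
assert dec(c) == a * b        # ...yet the plaintexts were multiplied: 42
```

This works because (m1^e * m2^e) mod n = (m1*m2)^e mod n; the gap to full FHE is supporting addition *and* multiplication, arbitrarily deep, without the noise blowing up.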

    What AMD are doing *is very, very far away from that*, it's almost DRM-style nonsense.

    1. Julian Bradfield

      Re: Snake-oil anyway

      It was Craig Gentry who developed fully homomorphic encryption. He may work for IBM, but that doesn't mean they take all the credit!

  10. Sil

    Need a malicious hypervisor

    You need a malicious hypervisor for this to work.

    How hard is it to make a hypervisor malicious? What is needed? Do we have studies or examples of it?

    1. Roland6 Silver badge

      Re: Need a malicious hypervisor

      >How hard is it to make a hypervisor malicious? What is needed?

      Well, given the widespread usage of the PC architecture, we have Rutkowska's Blue Pill; the only challenge is slipping it onto a system before the hypervisor boots. But then we have the wonders of UEFI firmware and its ability to take updates over the network outside of the hypervisor/OS.

      The only question is whether 'servers' suffer from the same vulnerabilities as desktop PCs with respect to the above attack vector...

  11. dnicholas

    Sounds like their plan requires the attacker to have pretty well pawnd the host anyway...

    1. Alistair


      Certainly if you pawn your laptop and leave your VMs on the disk someone's gonna figure out what you were up to with that thing..,

  12. Claptrap314 Silver badge

    This was always a bad joke

    How the relationship between the hypervisor and a guest OS today differs from the usual relationship between an OS and an application is beyond my ability to understand. No one I ever heard of thought you could really run an application securely if the OS was untrusted. How in the world do you expect to be able to secure an OS against an untrusted hypervisor?

    I love AMD, (worked there a long time ago) but this "technology" always looked like garbage to me.

    1. Charles 9 Silver badge

      Re: This was always a bad joke

      OK, riddle me this, Batman: how can you run anything securely if you can't trust the hardware? And note, you have a budget, a deadline, AND a culpability axe over your head.

      1. Claptrap314 Silver badge

        Re: This was always a bad joke

        The hardware here is not being compromised. The hardware was supposedly providing a facility that could be used by software in a lower privileged environment to protect itself from software in a higher privileged environment.

        Don't bet on it working.
