* Posts by larsk

25 posts • joined 8 Dec 2014

Xen turns it up to 4.11 and shrinks itself to contain containers

larsk

Re: I must be getting old

> So the good part of Xen which was always PV is now bad due to specter.

No, Spectre impacts HVM/PVH and PV equally, and the same mitigations apply to each. You could argue that Meltdown has made PV less performant for some workloads. But the reality is that on most modern hardware, HVM and PVH have higher performance than PV for almost all workloads, which is why many hosting/cloud providers only offer HVM/PVH guests for new instances. Note that PVHVM, despite its name, is actually an HVM guest.
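If you want to check which of these modes a given Linux guest is actually running in, recent kernels expose the guest type under sysfs. A minimal sketch, assuming a reasonably recent kernel with Xen support (the sysfs path is the kernel's, not something Xen-specific we ship):

```shell
# Hedged sketch: detect the Xen virtualisation mode from inside a Linux guest.
# /sys/hypervisor/guest_type is only present when running under Xen on a
# recent kernel, so fall back gracefully elsewhere.
if [ -r /sys/hypervisor/guest_type ]; then
    guest_type=$(cat /sys/hypervisor/guest_type)   # e.g. PV, HVM or PVH
else
    guest_type="not-a-xen-guest"
fi
echo "guest type: $guest_type"
```

On a PVHVM instance this reports HVM, which is the point being made above.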

What hosting providers will gain is the option to support old PV guests, unmodified, running inside a PVH/HVM container, on a hypervisor that is half the size (which is significant from a security perspective).

Conversely, you will also have the option of building a PV-only Xen, if that is what you want. And there are use-cases where that makes sense.

> Reading the article I felt was a jargon spin cycle with a container thrown in.

Have a look at https://www.slideshare.net/xen_com_mgr/xpdss17-keynote-secure-containers-with-xen-and-coreos-rkt-stefano-stabellini-aporeto ... development has moved on somewhat since then, but this gives you the gist.

larsk

New version turns Meltdown mitigation into a feature

It was actually the other way round: a feature we were already working on happened to be useful as a Meltdown mitigation. But the effect is the same.

Xen fixes guest privilege escape and plenty more

larsk

Re: Fail Panda

"If they'd put half the effort into making the stuff work properly instead of drawing stupid fucking pandas maybe we wouldn't be seeing today's batch of patches."

As one of the few people who do some marketing for the project, I wanted to clarify that none of the developers has ever been involved in drawing pandas. In fact, we never had a plan to do this: a community member stepped up and did the first version. After that, wives, friends and other people with connections to the developers who like drawing came up with the panda in different poses. There was no real funding or marketing campaign behind this at all.

In fact, if you compare Xen with, say, KVM and container projects, we hardly do any marketing. The same is true for many vendors that use Xen; you can't say that for other virtualisation technologies (FOSS or not).

Xen warns of nine embargo-worthy bugs

larsk

Re: Moving to KVM because of Xen's XSAs...

In addition, if anyone is concerned about the raw number of vulnerabilities, check out:

* Linux: http://www.cvedetails.com/product/47/Linux-Linux-Kernel.html?vendor_id=33

* QEMU/KVM: http://www.cvedetails.com/product/12657/Qemu-Qemu.html?vendor_id=7506 and http://wiki.qemu.org/SecurityProcess - the interesting point about the process is that "at times, what appears to be a genuine security flaw, might not be considered so".

In other words, QEMU/KVM already has a policy similar to the one Xen proposed publicly and was criticised for. And as an aside, the Xen Project has not introduced the proposed process change.

* Xen: http://www.cvedetails.com/product/23463/XEN-XEN.html?vendor_id=6276 (not yet including those 9)

I think the figures speak for themselves.

Xen bends own embargo rules to unbork risky Cirrus video emulation

larsk

Xen bends own embargo rules - not true

> However, what led to this one being shipped ahead of the normal embargo cycle is that

> the developers couldn't rule out a more serious exploit emerging.

The XSA states that "the embargo period is much shorter than our standard two-week period. This is at the request of the discoverer." This is entirely in line with the policy; see https://www.xenproject.org/security-policy.html, section "Embargo and disclosure schedule". In particular: "When a discoverer reports a problem to us and requests longer delays than we would consider ideal, we will honour such a request if reasonable. If a discoverer wants an accelerated disclosure compared to what we would prefer, we naturally do not have the power to insist that a discoverer waits for us to be ready and will honour the date specified by the discoverer."

Xen Project wants permission to reveal fewer vulnerabilities

larsk

Re: Maybe a cup of tea will help

"However, the onus is on the Xen project team to prove that. It's not enough to say, "well, of course we do!" They need to show by their actions that this is the case. For example: a security audit of the code; participation in CTFs/bounty programs; partnership with major cloud folks to review Xen security. And so on."

Thank you for the suggestions. On the topic of bug bounties, basically every large vendor and contributor to the Xen Project runs their own bug bounty program as can be seen on https://firebounty.com. There are also regular security audits of sections of the code by various vendors. The last batch of XSAs in late Nov came out of such an audit. We are also integrating fuzzing into our CI infrastructure, etc.

But maybe the project does need to take more of an active role in CTFs/bounty programs and such things and/or communicate better what we do.

larsk

Re: @ Xen Project Team

> I will first assume that the current security advisories included every single bug

Currently, security advisories are only issued for bugs that have been identified as security vulnerabilities in code that is not experimental or in preview. So whether the answer to your question is yes or no depends on what you mean by "every single bug".

> When your team cherry pick bugs to be disclose, you're basically hiding issues and creating a

> trust issue with your client and other supporters

That would certainly be true if we changed the policy without consultation, or consulted but introduced different criteria without reaching consensus.

The proposal does not suggest that we would stop fixing bugs. What it does suggest is that we would fix some security bugs without pre-disclosure and without issuing an XSA. That is not an unusual approach: OpenStack, for example, does this (see https://security.openstack.org/vmt-process.html#incident-report-taxonomy - it only issues an advisory for some issues and a security note for others, using a more lightweight process).

> But no, your team should never try to hide bugs.

Well, that is not what we were asking or trying to do. Although I admit that it may have come across like it.

larsk

"Lack of disclosure isn't going to win many, if any, friends, especially when it comes to a hypervisor that's been riddled with vulnerabilities as of late. It just makes me wonder what they're trying to hide and for what reason."

You may wonder that, but the reality looks quite different. If you look at the facts (i.e. https://www.cvedetails.com/vendor/6276/XEN.html), you will find that the number of issues reported has been going down over the last two years (2014: 44, 2016: 28). This has happened while the project has been actively taking measures to find more bugs. Also, our security team has more participating members than at any time in the past.

If you look at Linux (https://www.cvedetails.com/vendor/33/Linux.html) and QEMU (https://www.cvedetails.com/vendor/7506/Qemu.html) the opposite is true. The big difference is the media attention that Xen Project vulnerabilities get compared to other projects: so yes, it looks as if we had more vulnerabilities than others, when in reality we are actually doing OK.

And it is not because we issue fewer CVE numbers or handle fewer issues. When you look at the data, you will also find that the average CVSS score of the issues we handled has fallen as well: it used to be around 5.1 to 5.4, but last year it was around 3.3.

"Transparency is not the enemy, especially not in an open source software project."

I do believe that we are one of the most transparent projects in how we handle security issues, which is why we made a public proposal to get feedback. It is also worth mentioning that there is a trade-off to transparency: every time we issue an XSA for something that could be a vulnerability in some theoretical circumstances but may not really be one, we create work for our downstreams and users. We were criticised for this in the past, and this proposal tries to address some of that, alongside some other issues we have come across since we last revised our policy.

It is of course also good that El Reg is giving the proposal visibility. If you have an opinion, feel free to vote on the El Reg survey, but I did want to point out that a reply to our proposal on xen-devel at lists dot xenproject dot org is more helpful. You can also use the Reply button at https://www.mail-archive.com/xen-devel@lists.xen.org/msg96571.html (but make sure you CC xen-devel at lists dot xenproject dot org)

Citrix, Bitdefender in Xen-only virtual security double-team

larsk

Re: Hypervisor introspection

Actually, it doesn't run in the hypervisor. It runs in a dedicated privileged VM which can access the VMI interface (or, in XenServer speak, the Direct Inspect APIs). Obviously access to these APIs needs to be very tightly controlled, and no VM other than the BitDefender VM can access them.

Get patching: Xen bug blows hypervisor security to bits – literally

larsk

Alternatively, LivePatching could be used, which means these patches can also be applied without rebooting. But this needs a fix that has not yet been upstreamed to the LivePatch build tools: see https://lists.xenproject.org/archives/html/xen-devel/2016-11/threads.html#02058
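For those unfamiliar with LivePatching, applying a patch on a running host is a short command sequence. A hedged sketch (the `.livepatch` file name is a placeholder, and the exact `xen-livepatch` subcommands should be checked against your Xen version), guarded so it degrades gracefully where the tool is absent:

```shell
# Hedged sketch: applying a live patch on a Xen host without rebooting.
# Requires root on a Xen system; "xsa.livepatch" is a placeholder name.
if command -v xen-livepatch >/dev/null 2>&1; then
    xen-livepatch load xsa.livepatch   # upload and apply the patch payload
    xen-livepatch list                 # confirm the patch is now applied
    lp="applied"
else
    lp="tool-missing"
fi
echo "livepatch status: $lp"
```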

Xen hypervisor to gain non-disruptive patching in June

larsk

I'm not quite sure what you are saying here: is there little practical use for non-disruptive patching in general, or only if you already use HA?

Xen forgets recent patches in new maintenance release

larsk

We'd like to offer a comment from the Xen Project on this story, for background on our process and thinking. We detected the missing patches before the official release, but towards the end of the release process. We then had to decide whether to re-tag (and thus re-number) the release, which would have forced us to skip a release number (i.e. move from 4.6.1 to 4.6.1.1 or 4.6.2), or to release 4.6.1 with two incomplete security patches and document what was missing. A similar issue happened in 2013, when we released Xen 4.1.6.1 instead of Xen 4.1.6. At that time it became clear that many consumers of Xen have difficulties with a version number that does not fit the normal numbering pattern, which led to Xen 4.1.6.1 not being widely used.

Because we documented the missing patches in the release notes and release announcement, nobody who downloads the signed source tarballs from our official download page should be unaware of the missing security patches. In addition, the tarballs from our official download pages are primarily consumed by Linux distros, which will apply the missing patches and any additional security fixes that turn up between Xen 4.6.1 and their own release. Most of our users consume binaries from those distros. The other group of users who consume the tarballs are power users, who are used to applying security patches.

This leaves the question of why we cannot re-spin a release without changing the version number when issues are discovered late in the release process. Making a release involves extensive testing and also has a security dimension. Normally, after testing succeeds, we create a signed tag in the git tree, which gives a secure way of accounting for where the tarball came from. We then rebuild and do additional testing, write the release notes, do some more checking and sign the tarballs. The missing patches were discovered on Thursday, well before the official release on Monday, but after we created the signed tag. Signed tags cannot be removed, as they have to be tamper-proof.
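The signed-tag step described above can be sketched with plain git. An annotated tag stands in for a signed one here so the example runs without a GPG key; a real release would use `git tag -s`, and the repo, tag name and identity below are illustrative only:

```shell
# Minimal sketch: tagging a release in a throwaway repository.
# A real release would use `git tag -s` with the release manager's key.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.name=rel -c user.email=rel@example.org \
    commit -q --allow-empty -m "Xen 4.6.1 release candidate"
git -c user.name=rel -c user.email=rel@example.org \
    tag -a RELEASE-4.6.1 -m "Xen 4.6.1"
# The tag object records the commit hash, tagger and message, so later
# tampering with the tagged tree is detectable; a signature also proves origin.
git describe --tags
```

Once published, deleting or moving such a tag would be visible to everyone who has fetched it, which is why the project documents late discoveries rather than re-tagging.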

There has been some confusion about the Xen Project and our security processes because we do things differently from many other projects. We handle and document every single security bug raised with the project, to ensure that even low-severity bugs are fixed and known. Many projects and commercial software products do not do this, and only handle bugs with severe security implications in recommended configurations, rather than handling even minor bugs in non-recommended configurations as we do. We go through this process because it makes everyone more secure and provides more control to those who use Xen.

How can it possibly be time to patch Xen again?

larsk

Re: wah wah wah...

I guess you are referring to my comment. I am just getting annoyed at the constant barrage of news stories related to Xen. I mean, it's nice in some way, because writing stories about Xen obviously leads to traffic at The Register, which means that the project is important enough to get coverage. But sometimes there just is no story, as in this case.

larsk

As I pointed out in a comment on an earlier Register article about maintenance releases: we *always* make maintenance releases every 3-4 months, and they contain bug fixes (including security fixes). This release does not contain any new security fixes, only those which had already been publicly disclosed. In addition, the Xen Project does not make binary releases. It makes source releases, which are consumed by distros and commercial products and services. The vast majority of Xen users do not use Xen directly, but use a distro, a commercial variant of Xen or a Xen-based service. Only a small number of users build and use Xen from source.

Some of the commits highlighted in the article are of course XSAs: for example, "xl: Sane handling of extra config file arguments" includes XSA-137. This is also an excellent example of how we review older code after an XSA is discovered and harden it. The number of fixes in a maintenance release is well within the normal range for similarly sized projects or products. Where it is higher, this reflects the needs of different vendors and distributions who request backports of specific bug fixes for their own convenience, to avoid having to carry large patch queues, in accordance with our maintenance release policy at http://wiki.xenproject.org/wiki/Xen_Project_Maintenance_Releases

No, we're not sorry for Xen security SNAFUs says Ian Jackson

larsk

Ian Jackson said in the original post:

> ... over the last few years the Xen Project’s code review process has become a

> lot more rigorous. As a result, the quality of code being newly introduced into

> Xen has improved a lot.

This is certainly true. In fact, we have had complaints from various vendors that it is now too hard to contribute to the Xen Project (compared to Linux and KVM). To look into this, the project commissioned a study by Bitergia to model and data-mine our review process. Phase 1 of that study completed a few weeks ago, and it shows that the primary reason for the increased elapsed time to get code into Xen is significantly higher quality expectations.

If you look at some of the data: in 2008, when the bug behind XSA 148 was introduced, the average number of sets of review comments per patch was 4. A set of review comments is an email that comments on a submitted patch (not a patch series) piece by piece and contains several review comments (typically between 1 and 15, depending on the size of the patch). This number has steadily grown to around 13.5 sets of review comments per patch, which has noticeably increased the time it takes to get a patch into Xen. We do not know how this compares to Linux, KVM or even proprietary software. But given the complaints by vendors who contribute to Linux, KVM and Xen, it appears that Xen is more rigorous than most other projects.

To say that the project does not care about quality, and thus security, is simply unfair. We do care, which is also why a) we run a rigorous security vulnerability process and b) we resist the temptation to sacrifice transparency, even if this sometimes leads to negative press coverage.

larsk

Re: Bugs

> "And this includes shipping a large testing base btw."

And that code is indeed available: both the code used to test Xen and the code used to test XenServer are published. Of course, there are other test code bases from vendors that use Xen which are not. Unfortunately, you will need a rather large hardware installation with many different machine types to set it up.

However, I did want to point out that traditional unit and functional testing does not normally pick up security issues. For that, you need to run fuzzing and other tools designed to find security issues. Such tools are in fact run regularly on the Xen codebase. But understandably, such code cannot be published without giving blackhats the tools to run it themselves.

> You still need to recruit enough eyes and make what the code does visible in the first place.

I guess that's an argument for open source?

Devote Thursday to Xen and the art of hypervisor maintenance

larsk

> The patch run looks to be a tricky one, because the Project added XSA 143 and

> 144 to the list, but now lists them as unassigned.

This has no real meaning: a bug report may initially have been classified as a security issue and may either have been discounted as such, or have been patched/merged with another vulnerability. Another case where this can happen is when a third-party vulnerability (e.g. in QEMU, Linux, ...) is first reported to the Xen Project and we later agree with the reporter that the issue is better handled by that third-party open source project. We retain the entries in our own records to ensure we do not lose history.

Xen 4.6 lands, complete with contributions from the NSA

larsk

NSA and vTPM 2.0

I followed up on the actual vTPM log: the NSA has mistakenly been credited as a co-contributor to vTPM 2.0 in the Xen Project blog. The bulk of the work was contributed by Intel, with a smaller set of changes added by BitDefender.

Xen urges another upgrade to get OpenStack humming

larsk

> Users of the open source hypervisor could be excused for having upgrade fatigue of late, ...

Well, we have *always* released maintenance releases every 3-4 months. We used to coordinate these and do all the maintenance releases in sync, but our users felt that coordinating maintenance releases across versions is not that important. So the only thing that has changed is that there are more separate release announcements today, as 4.y.x updates no longer happen on the same days.

It is also worth pointing out that most users consume Xen via distros, and that distros will pick up our releases, which then get pushed out in the usual way via a package manager or a distro maintenance release.

> as the effort asked them to upgrade the 4.5 version of the code in late June and the project has also

> suffered a lengthy string of nasty bugs.

Most of the security bugs that affected Xen this year were actually QEMU bugs, which also affect KVM, VirtualBox and other users of QEMU.

QEMU may be fro-Xen out after two new bugs emerge

larsk

The problem with stubdoms is that they affect the bottom line of hosting/cloud providers. So even if Xen were to enable them by default, hosting/cloud providers would probably disable them. That's why an alternative is being worked on.
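For deployments that do accept the overhead, stub domains can be requested per guest in the xl configuration. A sketch of the relevant xl.cfg option (hedged: option availability and the requirement for the traditional device model depend on your Xen version):

```
# Guest config (xl.cfg) sketch: run the device model in a stub domain
# instead of dom0, so a QEMU compromise is contained in its own VM.
# Historically this required the qemu-xen-traditional device model.
builder = "hvm"
device_model_stubdomain_override = 1
```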

larsk

> The advisory for the flaw also offers the following contentious text:

>

> “We will encourage the community to have a conversation, when this advisory is

> released, about the continuing security support status of qemu-xen-traditional in

> non-stub-dm configurations.”

I can see how this text could be seen as contentious, but it is anything but. Xen ships with two versions of QEMU (see https://blog.xenproject.org/2013/09/20/qemu-vs-qemu-traditional-update/). So what this statement really means is that the Xen Project should have a discussion about deprecating the qemu-xen-traditional fork in favour of upstream QEMU. This would mean that security fixes no longer have to be backported from upstream QEMU to qemu-xen-traditional.
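Concretely, which of the two device models a guest uses is selected per guest in its xl configuration. A sketch of the relevant xl.cfg knob:

```
# Guest config (xl.cfg) sketch: choosing the device model.
# "qemu-xen" is the upstream-based default; "qemu-xen-traditional"
# is the old fork whose deprecation is being discussed.
builder = "hvm"
device_model_version = "qemu-xen"
```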

> With another two similar flaws now revealed, little wonder the Xen community is

> pondering its future relationship with QEMU.

Although split-ups do make a nice story, the opposite is actually the case. The Xen Project is working more closely than ever with the QEMU folks, and the deprecation of our old qemu-xen-traditional fork is a sign of that growing closeness. We are also working more closely on the security front: the fact that Xen is handling more QEMU issues jointly with the QEMU security team reflects that. We also have some joint activities planned at LinuxCon.

Granted, the recent run of QEMU issues has been painful, but after Venom it was to be expected. Many of these issues also affect other consumers of QEMU (e.g. KVM). To mitigate such issues in the future, work is underway to sandbox QEMU within Xen (google for "xsrestrict QEMU", http://lists.xen.org/archives/html/xen-devel/2015-07/msg04501.html, ...) such that the impact of future QEMU issues can be contained.

Guest-host escape bug sees Xen project urge rapid upgrade

larsk

I wanted to point out that there is nothing unusual about this point release. We have always bundled bug fixes, improvements and security updates into point releases. What we have done, though, and will do in future, is list all the changes that went into a release on the download page.

Every Xen point release has historically contained bug fixes, improvements and other backports, in the order of 100 changes. Our users and developers can request bugs to be backported to the last two releases, which they routinely do. It is also worth pointing out that Linux distributions, BSD distros and other direct users of the Xen Project consume Xen from source and routinely apply patches for security issues that affect them before those issues are fully disclosed. The majority of Xen users consume Xen indirectly and benefit from the work of their distro's security team. The 4.x.y point releases are mainly there for the convenience of our users.

It is also worth mentioning that XSA 135 is a QEMU security vulnerability affecting both Xen and KVM (see https://rhn.redhat.com/errata/RHSA-2015-1087.html), which the Xen Project merely re-published on its XSA list to make it easier for our users to identify security issues that may affect them. In the case of XSA 135, the vast majority of Xen users are not affected, as the affected QEMU controller is rarely used by Xen users and is not part of a default configuration.

Linus Torvalds releases Linux 3.18 as 3.17 wobbles

larsk

Locking issue

The Xen-related problem had been diagnosed, and there was a patch from Juergen Gross (SUSE) that fixed it. But the generic lockup issue is still present, and it has not yet been narrowed down to what is triggering it.
