Re: it's only tepid when the emissions test cheat device is enabled.
or when buying covfefe
I do not run just one OS; I use several at the same time:
Linux when I need to do actual work, running both as a hypervisor and as a guest OS within the libvirt/qemu/kvm stack; that is what I use for writing and debugging code. My personal OS is Windows 10 (always running as a guest) for the sake of the few genuinely good non-work programs which are not available on Linux, and for when my children want to play games.
On put options: the current price is $11.35, so a put option at $11.50 is "in the money". However, the price has been climbing from today's low of $11.28, so those who bought these options when the shares were cheap will not make a profit unless the price falls again. It might, or it might not - and if it does, it will not be on the "strength" of the security "discovery" discussed here.
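To make the "in the money" arithmetic above concrete: a put's intrinsic value is simply the strike price minus the spot price, floored at zero. A minimal sketch (prices taken from the comment; the function name is my own):

```python
# Toy sketch: intrinsic value of a put option.
# A put is "in the money" when strike > spot; its intrinsic value
# is the difference, floored at zero.
def put_intrinsic_value(strike: float, spot: float) -> float:
    return max(strike - spot, 0.0)

# The $11.50 put against the current $11.35 price is in the money by $0.15;
# against the day's low of $11.28 it would have been worth $0.22.
in_the_money_now = put_intrinsic_value(11.50, 11.35)
at_days_low = put_intrinsic_value(11.50, 11.28)
```

Of course intrinsic value ignores the premium paid; that is why buyers who paid up when the shares were cheap can still lose money overall.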
When people find that your products suffer from meltdown, do you:
1) focus on fixing the problem, or
2) put on large spectacles, a wig and a fake moustache, point at a rodent passing near a competitor's factory, and shout "oh look, squirrel!"
Credit to Torvalds for naming these guys for what they are.
Actually, IIRC Intel AMT flaws are worse, because to exploit those you do not need:
1) root access
2) any local access at all
The only unusual quality of these new AMD attacks is that they can remain under the radar for a very long time, making "evil maid attack" particularly dangerous.
Actually, having multiple distros use the same desktop (MATE, in your example) means that those who maintain the desktop code receive more help and input from distribution maintainers.
Do not forget these are different people with different interests. Someone with expertise in writing good UX code might not feel at home maintaining a distribution but will welcome input (and vice versa - a good maintainer might not necessarily write good UX code).
Similarly, having too many people working on any single piece of code directly (rather than via a trickle of contributions from distributions) brings to mind the saying "too many cooks spoil the broth". You do not achieve more progress by cramming more developers onto a project, so having all of these distributions work on the desktop code directly (rather than making it better for their own distribution) would not make it better. More likely the opposite.
Here is an interesting discussion on power caps in SSDs. They are the reason I only buy "enterprise grade" SSDs, and only after double-checking the specs for capacitors and their function. Both Intel and Crucial make some good enterprise-grade SSDs, but you have to make your choices wisely. I would definitely not trust OCZ, though.
At the end of that wiki page is a good example.
Imagine you have generated a keypair (private key "a" and public key "A"). Your public key "A" is not memorable at all, but you want a memorable one. Another person takes your public key "A" and generates a keypair (private "b" and public "B") such that the operation "A + B" yields a "vanity" public key "C", i.e. one which is easier to remember (where "+" does not necessarily denote addition; it is just some group operation). You buy the keypair "b" and "B" from them and apply the corresponding operation to the private keys, "a + b", which gives you a private key "c" matching the public key "C" (which, to remind you, is the result of "A + B"). This of course assumes the scheme is key-homomorphic, i.e. that combining private keys corresponds to combining public keys - otherwise this won't work.
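The trick can be sketched with plain modular exponentiation instead of the elliptic-curve group such schemes actually use; the homomorphic property has the same shape. All the numbers below are made up for illustration:

```python
# Toy sketch of vanity-key combination: public = g^priv mod p, so
# multiplying public keys corresponds to adding private keys.
# Toy parameters, not secure; p = 2^64 - 59 is prime.
p = 2**64 - 59
g = 5

def public_key(priv: int) -> int:
    return pow(g, priv, p)

a = 123456789          # your private key
A = public_key(a)      # your (unmemorable) public key "A"

b = 987654321          # seller's private key, searched for so that
B = public_key(b)      # "A + B" below looks like the vanity key you want

C = (A * B) % p        # the vanity public key ("A + B" in the text)
c = (a + b) % (p - 1)  # the combined private key ("a + b" in the text)

assert public_key(c) == C  # the combined private key really matches C
```

The seller never learns "a" and you never reveal "c", which is what makes such a vanity-key market workable at all.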
There is also this little thing called "non-functional requirements". Things like scalability, logging and monitoring subsystems, deployment and test tools, etc. They are rarely, if ever, mentioned by the project stakeholders, but if they are ignored during design and development, the final product is guaranteed to fail.
AMD CPUs are not affected by Meltdown, only by Spectre. The workaround for the former is really expensive in terms of CPU overhead on Intel (and costs exactly 0 on AMD), especially if your workloads involve lots of IO, which is why I am planning my next upgrade (from Xeon Ivy Bridge) to be AMD Epyc. The workarounds for Spectre are still appearing, but so far all are pretty cheap.
I recently installed a new server and migrated a friend's website from Ubuntu 14.04. While doing so, I also installed a Let's Encrypt certificate, and it was very easy thanks to "apt-get install letsencrypt". A bit of learning of nginx configuration was required, but learning is what I do. Setting up a timer to refresh the certificate bimonthly was trivial, too. One point of note: the certificate will store all alternative host names from the -d parameter(s) passed to letsencrypt, but the first -d parameter is also set as the CN= record of the certificate, so make sure you pass the right name first.
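The renewal timer mentioned above could look something like this as a systemd service/timer pair. This is a sketch under my own assumptions: the unit names and schedule are my choice, and distributions may ship their own equivalents.

```ini
# Hypothetical /etc/systemd/system/letsencrypt-renew.service
[Unit]
Description=Renew Let's Encrypt certificates

[Service]
Type=oneshot
ExecStart=/usr/bin/letsencrypt renew
ExecStartPost=/bin/systemctl reload nginx

# Hypothetical /etc/systemd/system/letsencrypt-renew.timer
[Unit]
Description=Bimonthly certificate renewal

[Timer]
OnCalendar=*-*/2-01 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Reloading nginx after renewal matters because nginx only reads the certificate files at startup or reload.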
Why the boasting? Just to show there is absolutely no reason to stick with a dinosaur CA like Symantec. If the only thing you want the certificate to attest is the host name (rather than the organization name), then you do not need expensive verification and Let's Encrypt is your friend. If you do need verification, there is plenty of competition to choose from.
To be truly pedantic about such things, you get 0 years (rounded down, as per the rules of conversion from floating point to integral types) until the midnight before the first anniversary of whatever event you are counting from. So, 1023 years would either be forever (if the counter does not have enough bits and hence rolls over at some smaller value, like 255 + 1) or 1024 years less some arbitrary, usually small, quantum of time.
vCPU pinning is well known, but it makes load balancing difficult. Regular load balancing is based on the assumptions that you can pin more than one vCPU to a single core, and that you can pin vCPUs from multiple VMs to cores on one physical CPU. These assumptions now need to go out of the window.
I expect Amazon AWS, Azure etc. will start offering a new tier of services where they do guarantee that only your VMs run on any single physical CPU, but this is going to be expensive (you pay for more vCPUs than strictly needed), or slow (poor load balancing), or both.
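For anyone unfamiliar with how the pinning is expressed: in libvirt it goes into the domain XML. A minimal sketch, using the real `<cputune>`/`<vcpupin>` elements but made-up host core numbers:

```xml
<!-- Sketch: pinning a 4-vCPU guest to dedicated host cores 2-5.
     Element names are real libvirt syntax; the cpuset values are
     illustrative and depend on the host topology. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>
```

Note how this bakes a placement decision into each guest's definition, which is exactly why it fights with any scheduler that wants to move vCPUs around.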
I am not living under the impression that my computer is not vulnerable to Spectre v1. But there is very little I can do about it. I am simply happy that living on the bleeding edge of both kernel and compiler development has, at least once, given me some real benefits. Few distributions make this easy, and most are lagging behind, sometimes quite significantly.
Thanks to a really, really small patchset in the distribution itself, it is trivially simple to build my own Linux kernel straight from www.kernel.org with my own configuration. Even better, thanks to following GCC releases really quickly, my kernel now reports this: "Mitigation: Full generic retpoline". And I am running 4.14.15 - but I bet the distribution will make kernel 4.15 available in the next few weeks (or I could just roll my own - not tempted, though, just yet).
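For reference, that mitigation string is read from the kernel's sysfs vulnerability files (`/sys/devices/system/cpu/vulnerabilities/`). A small sketch that parses those one-line reports; the file path is real, while `parse_status` is my own helper name:

```python
# Sketch: reading the kernel's Spectre v2 status report.
# Files under /sys/devices/system/cpu/vulnerabilities/ hold one line each,
# e.g. "Mitigation: Full generic retpoline" or "Not affected".
from pathlib import Path

SPECTRE_V2 = Path("/sys/devices/system/cpu/vulnerabilities/spectre_v2")

def parse_status(line: str) -> tuple:
    """Split e.g. 'Mitigation: Full generic retpoline' into (state, detail)."""
    state, _, detail = line.strip().partition(": ")
    return state, detail

if SPECTRE_V2.exists():  # these files appeared around kernel 4.15 (backported to 4.14.x)
    print(parse_status(SPECTRE_V2.read_text()))
```

On unpatched kernels the file simply does not exist, which is itself a useful signal.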
Well, let's see... processors from the competition are catching up - better on some benchmarks and worse on others, but clearly getting there. Then comes the news about the Meltdown bug (let's put Spectre aside - everyone is vulnerable to that one), which only affects Intel processors, and the mitigation comes at a cost of at least 5% of performance, sometimes more than 20%. Surely that is going to impact the benchmarks, and hence sales figures. The problem is systemic, and it will take Intel a long time to fix it in hardware - time which the competition can use to improve their designs, which are not impacted by the Meltdown bug.
Judging by the moves of the INTC and AMD share prices right after the El Reg article, shareholders think similarly.
This, but what should we expect when everyone - including headhunters, regulators and journalists - thinks that CODERS is a synonym for ENGINEERS.
Sometimes it feels as if these two were seriously considered to be engineers, and the whole profession held responsible for the inevitable mayhem.
I think you got it right here "If something goes wrong, we don't know how to fix it, but we may still be held responsible. Most IT practitioners simply won't be able to get over this."
The thing is, IT practitioners make the decisions. Their job is to control the risks and take the responsibility. Until "serverless" evangelists find a way to put that in a black-box, too, it will not take off. And we are far from it.
It started its life this way, but that was long ago. Since then it has become a central authentication authority for the local network, based on standard Kerberos (now with both MIT and Heimdal implementations available), with integrated directory services for both humans and machines, based on standard LDAP. Also, it is a go-to solution for making enterprise-scale distributed filesystems available to Windows machines, thanks to CTDB - for example, see page 12 of the Lustre Architecture whitepaper. Not everyone needs a distributed filesystem; I will grant you that. But that does not make Samba any less useful as an authentication authority or directory service.
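To illustrate that "authentication authority" role: the domain-controller setup boils down to a few lines of smb.conf (normally generated by `samba-tool domain provision`). A sketch with made-up realm and network names:

```ini
# Hypothetical minimal smb.conf for Samba acting as an AD domain
# controller (Kerberos KDC + LDAP directory in one). The realm,
# workgroup and forwarder address are illustrative placeholders.
[global]
    server role = active directory domain controller
    realm = EXAMPLE.LAN
    workgroup = EXAMPLE
    dns forwarder = 192.168.1.1

[sysvol]
    path = /var/lib/samba/sysvol
    read only = no
```

That single `server role` line is what flips Samba from "file sharing" into the Kerberos/LDAP authority described above.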
@ST you are right and I am right - I was referring to Meltdown (not to Spectre), so we agree on this. Speculative execution on its own can also cause the Spectre "class" of bugs, which are cheaper (performance-wise) to work around than the Meltdown one. The numbers I keep seeing on lwn.net for Spectre are consistently under 5% (usually around 1%), but the numbers for Meltdown easily exceed 10% if your system does more than a little IO or other kernel-related activity. This is why I believe that AMD (not being affected by Meltdown) now has a huge performance win over Intel, which is not reflected by old benchmarks at all. On a related note - I wonder if GPU-intensive applications (i.e. games) need context switches to communicate with the GPU. If so, then gaming benchmarks are going to be affected a lot, too.
You are also right in explaining that the speculative execution issue is not just an "implementation" issue; it is more of a design issue. Just let me have that simplification, OK?
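The reason kernel-heavy workloads pay the most is that the Meltdown workaround (KPTI) adds a page-table switch to every kernel entry. A rough, hedged micro-benchmark of that entry rate - it only shows relative cost on whatever box it runs on, and the function name is mine:

```python
# Rough illustration: every os.getpid() call below crosses into the
# kernel, and KPTI taxes each of those crossings. Comparing the rate
# with KPTI on vs. off (pti=off boot parameter) shows the overhead.
# Note: some libc versions cache getpid(), which would mute the effect.
import os
import time

def syscalls_per_second(duration: float = 0.2) -> float:
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        os.getpid()  # a cheap syscall, so the crossing cost dominates
        count += 1
    return count / duration

rate = syscalls_per_second()
print(f"~{rate:,.0f} getpid() calls/sec")
```

A compute-bound loop with no syscalls would show essentially no difference, which is why the published overhead numbers vary so wildly by workload.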
AMD is known not to be affected by the Meltdown bug, which was also the most expensive one (in terms of performance penalty) to fix. This means that, suddenly, all the performance benchmarks comparing Intel processors (with their crappy, buggy speculative execution, which allows user code to cross the kernel boundary) against AMD ones (which would not allow such a violation in their own, slightly less buggy, implementation of speculative execution) are no longer valid.
(typo, should be 4.14.14 - that's what I'm running today). Always follow kernel.org, that's where upstream is.
Honestly, I am very annoyed at distributions refusing to use something closer to the LTS upstream and insisting on applying hand-picked patches to old kernels instead. I can understand the reasoning for Red Hat doing that, but everyone else?
I read elsewhere that MS has added an ssh server (not production-ready yet) to Windows. I am trying to guess which shell that ssh server is going to offer its users - could it be PS rather than cmd? If a Linux admin tried to do some remote administration of a Windows machine over ssh and landed at a PS prompt, then perhaps it would help to be able to run (and learn the basics of) PS on a Linux machine? It wouldn't be great, and it would not convince anyone to switch from bash or zsh, but it would serve an educational purpose, I think.