Re: One time pad - with a twist
I downvoted for talking about downvotes. You're welcome.
True, but they represent a real thing - hardware no longer being useful and presumably being withdrawn from service. With the E2 series they can basically give people whatever they have to hand, and the only question is whether it's power-efficient and compatible with whatever service they want to add on.
I wanted to try to use IBM Cloud's free 200MB Db2 instance for something, but nothing I care for seems to use it. I guess I'm not really their target customer anyway, but I think they have missed the boat with respect to application developer mindshare.
The same goes for Oracle and their two 20GB free cloud databases. MediaWiki used to support it and MS SQL, but hasn't since MW 1.34 - and didn't support them well even before that: https://mediawiki.org/wiki/Core_Platform_Team/Decisions_Architecture_Research_Documentation/Dropping_Abstract_Schema_Support_For_Oracle_and_MSSQL
If you're on desktop you can use Ripcord. It's a Qt-based Discord and Slack client all in one.
Mobile might be trickier. Personally I'd ultimately lay the blame at Apple's feet for not providing public APIs that meet developer needs. (This is at times a problem for Windows as well, of course. But perhaps less so due to anti-trust remedies.)
It's a little sad, but I knew it was Linus even before I looked (and yes, it was).
The video's nothing special, but he did point out the useful tip that you have to be careful which header you plug the CPU fan into, after initially putting it into a case-fan header - apparently Supermicro server boards don't have them labelled all that well?
POSReady got the updates to provide later versions of TLS up to 1.2 and support of SHA-256, etc. Just about the same time parts of the U.S. government started mandating TLS 1.2, as I recall.
One thing they did not get is architecturally-disruptive fixes for Meltdown and Spectre.
Sure. But they also shipped it as a service pack for Vista. Likewise, Vista SP2 was effectively Win7 SP1. There was a feature pack bringing many Win7 low-level features over too.
Once patched and on hardware that properly supported either OS, they were largely the same. That's what made it so frustrating when people started dropping support for Vista at the same time as XP, rather than alongside Win7.
Servers are increasingly being offered and bought with all-SSD storage nowadays, even at the lower ends of the market. Not saying spinning hard drives are dead, but it's gotten to the point where they're becoming the thing you add, rather than SSDs. The performance edge is undeniable, and the labour costs are the same (maybe less since they're arguably more reliable). Less heat and power, too, which can be big factors.
True, but if it can "race to idle" the average power draw may be close to the idle power rather than active. Even the time to resume from full device sleep is lower than the average access time of a spinning disk.
The question becomes how fast it can recognize that it will be idle, and what proportion of the time it spends active. Hard disks may remain a good choice for steady but not huge workloads over large contiguous data. Which includes many file serving operations.
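To put made-up but plausible numbers on that (a minimal sketch - the wattages and duty cycle below are assumptions for illustration, not measurements of any real drive):

#include <stdio.h>

/* Illustrative race-to-idle arithmetic: if the drive spends only a small
 * fraction of its time active, the time-weighted average draw ends up
 * close to the idle figure. All values here are assumed, not measured. */
int main(void)
{
    double active_w = 5.0;    /* assumed active power, watts */
    double idle_w   = 0.05;   /* assumed slumber/idle power, watts */
    double duty     = 0.02;   /* assumed fraction of time spent active */

    double avg_w = duty * active_w + (1.0 - duty) * idle_w;
    printf("average draw: %.3f W (vs %.2f W active)\n", avg_w, active_w);
    return 0;
}

With those figures the average lands around 0.15W - much closer to idle than to active, which is the whole point of racing to idle.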
For a lot of it the fix is "find some way to stop the CPU predicting past this point", though. It didn't resolve the underlying issues - it just worked around them by selectively disabling that feature, at an often-severe cost to performance.
Meltdown and Spectre reveal information through a side-effect: how quickly an access fails (or succeeds) depends on whether the data has already been brought into the cache by a speculative fetch. It is not branching that is the issue so much as the RAM-to-cache fetches, which are a large part of the point of speculative execution.
The variable speed of memory operations, depending on whether or not data is in the cache, was not considered a security issue, just a performance one. It is that core assumption which is the biggest problem.
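For anyone curious, here's a minimal sketch of the primitive involved (x86, GCC/Clang; my own illustration, not lifted from any exploit) - it only shows the cache-hit versus cache-miss timing gap, not Meltdown or Spectre themselves:

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp (x86 GCC/Clang) */

static uint8_t probe[4096];

/* Time a single read in TSC cycles. A real attack is fussier about fences
 * and noise; this rough version is just enough to show the hit/miss gap. */
static uint64_t time_read(volatile uint8_t *p)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    probe[0] = 1;          /* make sure the page is mapped in */
    _mm_clflush(probe);    /* evict the line: the next read must go to RAM */
    _mm_mfence();

    uint64_t miss = time_read(probe);
    uint64_t hit  = time_read(probe);   /* now cached: this one is fast */

    printf("miss: %llu cycles, hit: %llu cycles\n",
           (unsigned long long)miss, (unsigned long long)hit);
    return 0;
}

The attacks amount to getting the CPU to speculatively touch one of many such lines based on data it shouldn't be reading, then using that timing gap to work out which line it was.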
The Register needs to do some fact checking before blandly reproducing corporate assertions - or just making up figures themselves from rough sums scrawled on the back of a packet of crisps.
Current Xbox One models ship with Gigabit Ethernet and 802.11ac 2x2 MIMO, immediately cutting the maximum possible speed to a tenth of what the claim would need. So it's literally impossible for a 25GB download to take 20 seconds - three and a half minutes is more feasible.
Looking around at people who actually have Gigabit suggests that the servers (or possibly the hard drive) currently limit it to around 230Mbps, so it's more like fourteen and a half minutes.
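Back-of-the-envelope, for anyone who wants to check my sums (decimal units, protocol overhead ignored, so these are best cases):

#include <stdio.h>

/* Time to move 25GB at the speeds mentioned above. Decimal gigabytes,
 * no allowance for protocol overhead, so these are best-case figures. */
int main(void)
{
    double size_bits = 25e9 * 8.0;             /* 25 GB in bits */
    double rates_bps[] = { 10e9, 1e9, 230e6 };
    const char *labels[] = { "10 Gbps (the implied claim)",
                             "1 Gbps (Gigabit Ethernet)",
                             "230 Mbps (observed)" };

    for (int i = 0; i < 3; i++) {
        double seconds = size_bits / rates_bps[i];
        printf("%-28s %6.0f s (%.1f min)\n", labels[i], seconds, seconds / 60.0);
    }
    return 0;
}

Which comes out to 20 seconds, a shade over three minutes, and roughly fourteen and a half minutes respectively.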
It probably is Oracle SPARC, though, because not every SPARC processor has this kind of architecture based on speculation.
It may also be other SPARC, yet still not SPARC in general. It's only x86 in general because speculative execution has been a feature of the platform for such a long time that almost everyone adopted it.
Debian has backports, which for the kernel is usually only one or two releases behind the tip. Ubuntu goes one further by building point-release kernels off of the mainline which you can use at your own risk.
Fedora, too. If you want to mix and match, you can make it happen.
Their job is to provide a working distribution of software. If you want to install the latest and greatest kernel on it, you probably can, but you lose their guarantee that it'll work (or that they'll try to fix it if it doesn't).
Sometimes they do rebase on a later LTS kernel, but only because it is the most time-efficient way of achieving the original goal.
As long as laptops and desktops provide a service, they'll continue to be sold - and in enough volume to maintain reasonable prices, even if the premium buyers stick with phones and tablets/convertibles.
There has been some consolidation in the market, but part of it has been that there's no super-huge reason to get new devices. After all, they all run Windows 7, and if they do that, they can likely run 10, too. Well, now there may be a good reason for businesses to upgrade.
Recompilation introduces changes designed to frustrate speculative loads and execution, which otherwise might improve performance. There is, therefore, an impact. How big depends on the precise mechanism of the protection and the software being run.
But without new microcode, the defence is inadequate, and so will not have the full performance impact. I've seen several graphs of performance diving after the microcode was also applied.
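To make that concrete, here's a simplified sketch (mine, not the kernel's actual code) of one such source-level change for Spectre v1: masking an attacker-influenced index so that even a mispredicted bounds check can't speculatively read out of range. The kernel's array_index_nospec does this more carefully, but the idea - and the extra data dependency you pay for - is the same.

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

#define TABLE_SIZE 256               /* power of two, so a simple mask works */
static uint8_t table[TABLE_SIZE];

/* Illustrative Spectre-v1 hardening: even if the CPU speculates past the
 * bounds check with an out-of-range idx, the mask keeps the speculative
 * load inside the table. The added data dependency is part of the cost. */
uint8_t read_entry(size_t idx)
{
    if (idx < TABLE_SIZE)
        return table[idx & (TABLE_SIZE - 1)];
    return 0;
}

int main(void)
{
    table[7] = 42;
    printf("%u\n", read_entry(7));      /* in range: normal read */
    printf("%u\n", read_entry(10000));  /* out of range: returns 0, and any
                                           speculative access stays in bounds */
    return 0;
}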
They have been doing that, yes - the Meltdown stuff isn't applied to AMD at all. There's more that they could be doing w.r.t. PCID when INVPCID is not available (most 3xxx/4xxx/G32xx/G1xxx Intel chips), and maybe they will in due course, but that's an optimization that can be done a little later.
I believe it's speculative execution within the kernel, resulting in information disclosure to user mode due to a timing attack on the shared processor cache which can undermine KASLR.
So they split the user/kernel page table set, which had always been shared before for performance (and only split to provide 4GB/4GB space for both on x86-32, which suffered the same kind of impact).
One interesting way to approach this might be to limit the cache allocated to certain processes, but that's an advanced feature found only on recent Xeons, and I don't think anyone's actually planning to do that - it might have an even worse impact.
Reminds me of the Bugzilla entry requesting the inclusion of the U.S. Federal Government's root certificate, coming up to its tenth year.
It looks like they're still working towards a solution, but it's a slow grind. Of course, they're still doing better than Brazil.
I'm not quite sure what problem it is you're hoping to handle. Most sites make it clear in their terms that entering content into the site acts as a license to redisplay that content to others.
The EU's e-Commerce Directive (as implemented in many laws, including those you're thinking about) gives providers of "information society services" such as Facebook an out from such laws. They have to do something about illegal content once informed of it, but are not considered responsible for it unless they exert meaningful editorial control over the substance of the content.
Nobody's being paid to push them as "solutions" at trade shows, despite their demonstrated ability to solve many problems. They just keep doing what they do. Munin's graphs alone have helped me diagnose countless problems. Email notification of out-of-range values is a bonus.
Firefox developers are very keen to avoid crashes, but in doing so they have a tendency to disable things like hardware acceleration which are crucial for performance on many systems.
It has been broken for a while on my Radeon 6970 under Vista and while there were suggestions about what changes might have caused it, it was never fixed. Perhaps more importantly, I regularly have the same problem on my AMD Brazos-based x120e netbook. That's a critical fail, because the CPU is so anaemic that I have to open Chrome to watch any video.
As always, it's a case of "you get what you measure". Crashes are bad, so reducing crashes is good. But if you do so by disabling an important feature (rather than fix or work around the problem), that's not so good.