The Spectre design flaws in modern CPUs can be exploited to punch holes through the walls of Intel's SGX secure environments, researchers claim. SGX – short for Software Guard eXtensions – is a mechanism that normal applications can use to ring-fence sections of memory that neither the operating system nor a hypervisor can …
Who owns the tin, owns the data.
The way it has always been, the way it will always be ... seven of five
Oh? Is one so sure?
Does not the tale tell that whoever supplies the data, pwns the tin? And is that not the way it has always been, the way it will always be?
If you own the tin, in a BOFH scenario, can you provide a broken SGX API anyway? With or without Spectre?... teknopaul
Quite so, teknopaul, .... Own Command and Control with IT and their AIs. Can you Imagine to Realise their Future Programs for Picture ReProgramming with AIMedia Presentations from Colossal and Titanic Studio Organs ........ Birthing Areas for Alien Landers on Earth ........ where TS/SCI Scripts are Enlivened and AIRealised with Universal Broad Band Casting ........ so you get to see the Future Arriving just as IT and Media are showing you with talking pictures/selected program actor.
Just checking Wikipedia https://en.wikipedia.org/wiki/Software_Guard_Extensions#cite_note-14 we see that
a) There was a Prime+Probe attack which used "certain CPU instructions in lieu of a fine-grained timer to exploit cache DRAM side-channels", and a countermeasure was published
b) The LSDS group at Imperial College London showed a proof of concept that the Spectre speculative-execution vulnerability can be adapted to attack the secure enclave, and the code was published two months ago.
I wonder if the "compiler-based tool, DR.SGX", which was a countermeasure for Prime+Probe, could be extended to handle Spectre?
It finally comes to light, as I predicted a couple of months back, that trusted-zone memory would be up for grabs, see:
It has taken the white hats a couple of months (maybe a little longer, as it was known about before we, the great unwashed, got to know about it). Historically they have always trailed the black hats in exploit finding, though maybe not so far behind now there are bug bounties.
All stored-key protection schemes are now totally insecure. By this I mean just about every DRM scheme in place up to now. I reiterate: watch the sue-balls fly from the fallout of this.
It's not just DRM; it's anything that uses a password. Most software was written with poor security, i.e., passwords in the clear all the time, etc. If they even bothered with rot13 it would be an improvement.
Keep a password encrypted until use, and then overwrite the plain text after you're done with it. Sorted.
"Keep a password encrypted until use, and then overwrite the plain text after you're done with it. Sorted."
You need the keys in memory too, though, so this is a form of security by obscurity. A relatively tough one, but someone motivated enough to get into an application's secured memory area is also going to be able to deal with that. Unless you're doing fully homomorphic computation (I'm not sure it's actually possible and still decodable at a low cost), control of the hardware always offers the potential to compromise a system running on it. Encryption at rest is a different thing, which you can achieve.
From what I understand, the practicality of the attack versus its possibility is a completely different thing.
As much as I'd like to see the Blu-Ray equivalent of 'libdecss' with all of the possible decryption keys built-in, and easily downloadable software to convert Blu-Ray contents into h.265 media files on a computer, phone, or slab [for personal use, of course, I'm not interested in pirating content, just convenience], I don't see this happening any time soon, even with the ability to do a side-channel attack on the DRM code.
It just seems that there's a lot of cost for such a tiny payoff that it probably won't happen outside of specific kinds of "spear" attacks [perhaps by the NSA?].
In the meantime, every time I read into the technical details of these things, my mind boggles. Spectre is such a confusing mess to try and wrap my mind around, I can't see how any *SANE* person could actually make this work without an extreme amount of time and effort...
And the 'ret-poline' seems to be an adequate defense against at least SOME of it, by not using the speculative execution thingy in the first place.
Now, here's a thought: what if we could just flip a bit to turn branch prediction OFF for code that needs extra security? Or, better still, make it an inherent part of the TASK STATE so other CPU tasks can't pollute the branch prediction cache like that.
Yeah, that means a complete redesign of the chip's internals. I'll have to wait for a new CPU architecture before upgrading hardware, then... hopefully withOUT a mandatory management engine, too!
A back-of-the-envelope computation suggests that branch prediction might be doubling performance in processor-bound applications. If you think that microprocessor designs are complicated today, just wait until you see what comes out to try to claw back the performance losses.
But of course, it's pretty rare to have processor-bound code. The real bottleneck is almost always memory or IO. Caches are there to help with that, but it is the speculative loads & stores that really win here, and they are creating most of these vulnerabilities. So we're talking pre- and/or post-buffers on the caches.
The problem is if there's a way to do it then one sufficiently clever person may be able to automate it.
For Blu-ray I suppose the difficulty (a number of different schemes, and not as trivially flawed as CSS was) combined with the potential for becoming a criminal for breaking it outweigh the fairly small rewards of developing and open-sourcing a general solution. Additionally, newer versions apparently have corrupt data on the disc and rely on online retrieval of correction tables, which sounds compromisable but also designed to get you into hot water if you tried it. (I think there may be some approaches that work for some cases, but generally you still can't expect to watch a legal Blu-ray in Linux without using a commercial player.)
"And the 'ret-poline' seems to be an adequate defense against at least SOME of it, by not using the speculative execution thingy in the first place."
Retpoline doesn't get in the way of most speculative execution; that would make the penalty of the Meltdown mitigation look positively light-weight in comparison. Instead, it tricks the processor into treating simple jumps as function calls, which are handled differently (though our buddies at Intel have managed to screw up that bit of security: in the right circumstances, newer models can start reading from the vulnerable branch buffer rather than the secure return buffer).
This post from Stack Overflow explains it a lot better than I ever could.
Maybe it's the death of *cheap* DRM ?
You could have expensive DRM, but that would be predicated on the thing being "managed" being of enough value to make it worthwhile. Our old friend Laffer is at work here, as elsewhere, methinks.
That's before we get to the fact that - disingenuously - it can suit big companies to have slightly leaky DRM. They'll get their whack from the merchandising and tie-in stuff.
You need only have a vulnerability that lets you inject your code. This can be triggered when you, for example, visit a malicious website or open a spearphishing email, to name just two examples. Moreover, most trusted enclaves run code in the processor's internal static RAM and reference data (including keys) in that static RAM itself. In theory, external code can't see the internals of this static RAM, and that section of static RAM is not cached out to the general CPU cache. These researchers found a way to cross the wall because of speculative instruction execution.
Analysis Supermicro launched a wave of edge appliances using Intel's newly refreshed Xeon-D processors last week. The launch itself was nothing to write home about, but a thought occurred: with all the hype surrounding the outer reaches of computing that we call the edge, you'd think there would be more competition from chipmakers in this arena.
So where are all the AMD and Arm-based edge appliances?
A glance through the catalogs of the major OEMs – Dell, HPE, Lenovo, Inspur, Supermicro – returned plenty of results for AMD servers, but few, if any, validated for edge deployments. In fact, Supermicro was the only one of the five vendors that even offered an AMD-based edge appliance – which used an ageing Epyc processor. Hardly a great showing from AMD. Meanwhile, just one appliance from Inspur used an Arm-based chip from Nvidia.
In yet another sign of how fortunes have changed in the semiconductor industry, Taiwanese foundry giant TSMC is expected to surpass Intel in quarterly revenue for the first time.
Wall Street analysts estimate TSMC will grow second-quarter revenue 43 percent quarter-over-quarter to $18.1 billion. Intel, on the other hand, is expected to see sales decline 2 percent sequentially to $17.98 billion in the same period, according to estimates collected by Yahoo Finance.
The potential for TSMC to surpass Intel in quarterly revenue is indicative of how demand has grown for contract chip manufacturing, fueled by companies like Qualcomm, Nvidia, AMD, and Apple who design their own chips and outsource manufacturing to foundries like TSMC.
Intel has found a new way to voice its displeasure over Congress' inability to pass $52 billion in subsidies to expand US semiconductor manufacturing: withholding a planned groundbreaking ceremony for its $20 billion fab mega-site in Ohio that stands to benefit from the federal funding.
The Wall Street Journal reported that Intel was tentatively scheduled to hold a groundbreaking ceremony for the Ohio manufacturing site with state and federal bigwigs on July 22. But, in an email seen by the newspaper, the x86 giant told officials Wednesday it was indefinitely delaying the festivities "due in part to uncertainty around" the stalled Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act.
That proposed law authorizes the aforementioned subsidies for Intel and others, and so its delay is holding back funding for the chipmakers.
Having successfully appealed Europe's €1.06bn ($1.2bn) antitrust fine, Intel now wants €593m ($623.5m) in interest charges.
In January, after years of contesting the fine, the x86 chip giant finally overturned the penalty, and was told it didn't have to pay up after all. The US tech titan isn't stopping there, however, and now says it is effectively seeking damages for being screwed around by Brussels.
According to official documents [PDF] published on Monday, Intel has gone to the EU General Court for "payment of compensation and consequential interest for the damage sustained because of the European Commission's refusal to pay Intel default interest."
By now, you likely know the story: Intel made major manufacturing missteps over the past several years, giving rivals like AMD a major advantage, and now the x86 giant is in the midst of an ambitious five-year plan to regain its chip-making mojo.
This week, Intel is expected to detail just how it's going to make chips in the near future that are faster, less costly and more reliable from a manufacturing standpoint at the 2022 IEEE Symposium on VLSI Technology and Circuits, which begins on Monday. The Register and other media outlets were given a sneak peek in a briefing last week.
The details surround Intel 4, the manufacturing node previously known as the chipmaker's 7nm process. Intel plans to use the node for products entering the market next year, which includes the compute tiles for the Meteor Lake CPUs for PCs and the Granite Rapids server chips.
Updated Intel has said its first discrete Arc desktop GPUs will, as planned, go on sale this month. But only in China.
The x86 giant's foray into discrete graphics processors has been difficult. Intel has baked 2D and 3D acceleration into its chipsets for years but watched as AMD and Nvidia swept the market with more powerful discrete GPU cards.
Intel announced it would offer discrete GPUs of its own in 2018 and promised shipments would start in 2020. But it was not until 2021 that Intel launched the Arc brand for its GPU efforts and promised discrete graphics silicon for desktops and laptops would appear in Q1 2022.
The Linux Foundation wants to make data processing units (DPUs) easier to deploy, with the launch of the Open Programmable Infrastructure (OPI) project this week.
The program has already garnered support from several leading chipmakers, systems builders, and software vendors – Nvidia, Intel, Marvell, F5, Keysight, Dell Tech, and Red Hat to name a few – and promises to build an open ecosystem of common software frameworks that can run on any DPU or smartNIC.
SmartNICs, DPUs, IPUs – whatever you prefer to call them – have been used in cloud and hyperscale datacenters for years now. The devices typically feature onboard networking in a PCIe card form factor and are designed to offload and accelerate I/O-intensive processes and virtualization functions that would otherwise consume valuable host CPU resources.
A drought of AMD's latest Threadripper workstation processors is finally coming to an end for PC makers who faced shortages earlier this year all while Hong Kong giant Lenovo enjoyed an exclusive supply of the chips.
AMD announced on Monday it will expand availability of its Ryzen Threadripper Pro 5000 CPUs to "leading" system integrators in July and to DIY builders through retailers later this year. This announcement came nearly two weeks after Dell announced it would release a workstation with Threadripper Pro 5000 in the summer.
The coming wave of Threadripper Pro 5000 workstations will mark an end to the exclusivity window Lenovo had with the high-performance chips since they launched in April.
Lenovo has unveiled a small desktop workstation in a new physical format that's smaller than previous compact designs, but which it claims still has the type of performance professional users require.
Available from the end of this month, the ThinkStation P360 Ultra comes in a chassis that is less than 4 liters in total volume, but packs in 12th Gen Intel Core processors – that's the latest Alder Lake generation with up to 16 cores, but not the Xeon chips that we would expect to see in a workstation – and an Nvidia RTX A5000 GPU.
Other specifications include up to 128GB of DDR5 memory, two PCIe 4.0 slots, up to 8TB of storage using plug-in M.2 cards, plus dual Ethernet and Thunderbolt 4 ports, and support for up to eight displays, the latter of which will please many professional users. Pricing is expected to start at $1,299 in the US.
AMD's processors have come out on top in terms of cloud CPU performance across AWS, Microsoft Azure, and Google Cloud Platform, according to a recently published study.
The multi-core x86-64 microprocessors Milan and Rome beat Intel Cascade Lake and Ice Lake instances in tests of performance in the three most popular cloud providers, research from database company CockroachDB found.
Using the CoreMark version 1.0 benchmark – which can be limited to run on a single vCPU or execute workloads on multiple vCPUs – the researchers showed AMD's Milan processors outperformed those of Intel in many cases, and at worst statistically tied with Intel's latest-gen Ice Lake processors across both the OLTP and CPU benchmarks.
Biting the hand that feeds IT © 1998–2022