What kind of opt-in was it?
I didn't follow the description of the opt-in. From the article it seems it is not a choice made by the system administrator, but something set by software. I hope I misunderstood, otherwise it would be a terrible solution.
Linus Torvalds has removed a patch from the next release of the Linux kernel that was intended to provide additional opt-in mitigation of attacks against the L1 data (L1D) CPU cache. The patch, from AWS engineer Balbir Singh, was to provide "an opt-in (prctl driven) mechanism to flush the L1D cache on context switch. The goal is to allow …
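For context, here is roughly what a prctl-driven opt-in looks like from userspace; a minimal Python/ctypes sketch, where the L1D-flush constant values are assumptions taken from the patch discussion (PR_SET_SPECULATION_CTRL is the real speculation-control prctl), and a kernel without the feature simply rejects the call:

```python
import ctypes
import ctypes.util

# PR_SET_SPECULATION_CTRL is the existing speculation-control prctl;
# the L1D-flush control values below are assumptions from the proposed
# patch, and a kernel without the feature rejects the call with EINVAL.
PR_SET_SPECULATION_CTRL = 53
PR_SPEC_L1D_FLUSH = 2
PR_SPEC_ENABLE = 2

def opt_in_l1d_flush():
    """Ask the kernel to flush the L1D cache whenever this task is
    switched out. Returns True only if the kernel accepted the opt-in."""
    try:
        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
        ret = libc.prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH,
                         PR_SPEC_ENABLE, 0, 0)
        return ret == 0
    except Exception:
        return False  # non-Linux, or libc not loadable

if __name__ == "__main__":
    print("L1D flush opt-in accepted:", opt_in_l1d_flush())
```

The point of the per-task prctl design is that only processes which ask for it pay the flush cost; everyone else keeps full cache performance.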
This post has been deleted by a moderator
By that definition, even that isn't really secure, as one could probably find a way to send energy into the computer through the vault and vacuum and then read information back wirelessly.
Meaning, one of the qualifications of a secure computer should be that a person can actually put it to practical use. Otherwise, one should provide a Turing-style proof that the only secure computer is one destroyed beyond reconstruction.
The tri-core POWER-based CPU in the Xbox 360 was relatively simple, and it was noted to be somewhat of a lightweight compared to the Cell CPU of the PS3 (both ran at 3.2GHz IIRC).
Seems to me simple isn't going to cut it with modern workloads; its versatility will be too limited.
Yeah. Opt-in, as in a certain distribution that tries to flush caches as a mitigation *all the time*, seems OK if the user/admin can switch it on for especially sensitive systems (if you're air-gapped or trust your software, you may not care for the mitigation being there and losing performance).
But if the program flushes it in a not so helpful way, then I can agree with the experts this is less than ideal.
I think there is an option to flush some cache or use special types of execution in Windows for some of the spectre mitigations. So that passwords etc are never cached/susceptible. But I don't know if you could use the same code to try and slow down a Windows OS that way too?
Funny, as every sec engineer back in the 90s didn't trust VLANs to be secure enough to separate different security zones ...
It turned out, they were, and no compromise was ever shown ... Now, no-one would even require physical LAN security ...
Now, in the realm of CPUs, it turns out everyone is fighting side-channel CPU attacks on consolidated workloads, because Intel decided to compromise on security.
There are a lot of switches out there, and I'm not talking about *home* smart ones which suck (*cough* netgear *cough*), that will accept a VLAN-tagged frame from anywhere; they only apply a tag to UNTAGGED frames coming in on the port.
I suppose that's more injecting than leaking though, right? There are only 409...5? (you know what I mean) possible VLAN IDs; just fire a packet at each, and you're bound to find the thing you want to talk to.
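For reference, the VLAN ID is a 12-bit field in the 802.1Q tag, so it ranges 0–4095 (with 0 and 4095 reserved), which is why brute-forcing it is trivial; a quick sketch of pulling the VID out of a tagged Ethernet frame:

```python
def vlan_id(frame):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged.

    Layout: dst MAC (6) + src MAC (6) + TPID (2) + TCI (2) + payload.
    The VID is the low 12 bits of the TCI, so it ranges 0..4095,
    with 0 and 4095 reserved.
    """
    if len(frame) < 16:
        return None
    tpid = int.from_bytes(frame[12:14], "big")
    if tpid != 0x8100:          # not an 802.1Q-tagged frame
        return None
    tci = int.from_bytes(frame[14:16], "big")
    return tci & 0x0FFF         # mask off PCP/DEI, keep the 12-bit VID

# A tagged frame carrying VID 42 (priority 0):
frame = b"\xff" * 6 + b"\xaa" * 6 + bytes([0x81, 0x00, 0x00, 0x2A]) + b"payload"
print(vlan_id(frame))  # → 42
```

A switch that trusts this tag on any ingress port lets an attacker pick the VLAN, which is exactly the injection concern above.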
It is news to me they're supposed to be secure? I thought it was a convenient/helpful thing
Both "huck" and "chuck" were used in Australia, though I think 'chuck' is more common.
Interestingly, WikiDiff's article on the differences lists a more extensive set of meanings for "chuck" vs "huck", and notes that one of the meanings of "huck" is:
(informal) to throw or chuck
Which implies to me that, since "chuck" has many more potential meanings, and one of the meanings of "huck" is "chuck", "huck" derived from "chuck" by just dropping the 'c' to give a word covering a more specific subset of "chuck"'s meanings.
e.g. chuck steak (steak from the shoulder), chuck steak (throw/toss some steak)
Whereas "huck steak" really has only one meaning (I think, IANALinguist), to throw some steak.
Hmmm, I see you are using "de" as "to reduce" rather than "to remove", as in "devalue"
In that case, I'd use "defriend" in both cases.
"Un" is the absense of something.
"De" is to remove something (and also, I concede, to reduce something)
Unknown, unavailable, undetermined etc.
de-escalate, derobe, deacidify, deactivate, ...
Now the fact that you'll be able to find just as many examples that contradict what I wrote above, will be conveniently ignored! :-)
I note FreeBSD "deinstalls" packages rather than "uninstalling" them. So... erm QED. Checkmate, scientists!
What about undo?
Seems to be "un" can be a bit broader: not just the absence of something (as part of an adjective) but also to create that absence (as part of a verb). Meaning uninstalling, undressing, etc. make sense as you're removing (creating the absence) of the installation, clothes, etc.
"Finland doesn't have school buses so why would one be on the blackboard?"
Because it's a Google Captcha, and everyone knows that Google thinks the whole world can solve US-based street scene Captchas 'cos we all use the term "crosswalk" and school buses are ALWAYS yellow.
And everyone in the world is familiar with 'fire hydrants' * and has a deep fascination with traffic lights
That reminds me of a non-USA centric moan:
Traffic lights... Do they mean just the bulbs/globes? Or also the box the lights are in? And what about the poles the lights are mounted on?
Kinda sad seeing a key Intel customer, AWS, flounder around trying to fix Intel's bugs instead of leaning on Intel to actually fix them. AWS are between a rock and a hard place - they either replace the hardware - with the unavoidable chance that the new hardware will also be broken, or they implement performance killing hacks on their heavily utilized shared boxes... Gee, maybe putting all your eggs in one basket was a dumb idea after all...
Why not have a key generated at the start of a program, so that as the context switches out, the cache is encrypted and can't be read without the key?
Then as the context is switched back in, the memory is decrypted. The OS would manage the single-use keys for the applications. There are some additional layers to this, but the idea is to make it more difficult and expensive to get at a program.
Yes, there will be a performance hit but at least your system is secure. (Until someone breaks your key management system within the OS)
Xor cache address input with a per-process key? Xor should be fast...
This would somewhat degrade cache performance as the switched-to process overwrites the addresses of the switched-from process, but two processes shouldn't share all that much cache anyways. And if we switch back quickly, a lot of the cache should still be intact.
This seems equivalent to per-address layout randomization (with 64 bits).
Though if we're screwing with the silicon anyway, might as well tag cache entries per process.
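The XOR idea above can be sketched with a toy direct-mapped cache: each process XORs its own random key into the set index, so the same address lands in different sets for different processes, and a cross-process observation of "which set got evicted" no longer maps back to an address. A simulation (this is a model of the idea, not real silicon; the keys are made up):

```python
SETS = 64  # toy direct-mapped cache: 64 sets of 64-byte lines

def set_index(addr, key):
    """Cache set selection with a per-process key XORed into the index."""
    return ((addr >> 6) & (SETS - 1)) ^ key   # key is in range(SETS)

# Hypothetical per-process keys, picked at process start:
key_a, key_b = 0b101010, 0b010101

addr = 0x1040
print(set_index(addr, key_a), set_index(addr, key_b))  # → 43 20
```

As noted above, the switched-in process still evicts the switched-from process's lines, so performance degrades a bit, but the *mapping* from set to address is now private to each process.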
AFAIK AMD is working on full encryption across the CPU. The problem is that if the software has access at all, it's just a waiting game before you can recover the key, and I don't think 256-bit AES encryption is going to be blazingly fast on the CPU for every access. But some simple encryption in memory does mitigate it in part.
1) Spectre-class bugs CANNOT be mitigated in current hardware.
2) The entire point of caches is to speed process execution. Therefore any process with fine-grained access to the clock is going to be able to derive information about the addresses of data held in the cache. With Spectre-class attacks, one can derive information about the contents of the data.
The only way around this is to ensure that all code running inside the same cache has the same security context.
So, for Amazon, you are sharing your data with everyone else on the box. Dedicated boxes are required for anything handling PII.
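The clock argument in point 2 is essentially prime+probe: the attacker fills every cache set, lets the victim run, then re-touches its own data and sees which set got slow (evicted). A toy model with a direct-mapped cache, using hit/miss as a stand-in for access latency (a real attack measures timing instead):

```python
SETS = 8

class ToyCache:
    """Direct-mapped cache: each set holds one (owner, address) tag."""
    def __init__(self):
        self.sets = [None] * SETS

    def access(self, owner, addr):
        """Touch a line; return True on hit, False on miss.
        A miss is what a real attacker would see as a slow access."""
        idx = addr % SETS
        hit = self.sets[idx] == (owner, addr)
        self.sets[idx] = (owner, addr)
        return hit

cache = ToyCache()

# Prime: attacker touches one line per set.
for s in range(SETS):
    cache.access("attacker", s)

# Victim touches a single secret-dependent address.
secret = 5
cache.access("victim", secret)

# Probe: the one set that now misses reveals the victim's set index.
leaked = [s for s in range(SETS) if not cache.access("attacker", s)]
print(leaked)  # → [5]
```

This only leaks *addresses* of victim accesses; it takes a Spectre-class gadget on top to turn address leakage into data leakage, which is the distinction the comment draws.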
> I wonder if it would cause any major issues to simply limit user space access to timers (and I guess Linux jiffy counter?) to like 1/10,000 of a second accuracy or so, instead of the nanosecond accuracy it is now.
There's a whole world out there of high-frequency stock trading that seems to require high-precision timing for everything, including network transit times and clock coordination accuracy in the sub-microsecond precision range.
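The quoted proposal is easy enough to model: quantize whatever clock userspace sees down to 1/10,000 of a second. A sketch of the degraded clock (the granularity value is the one suggested above):

```python
import time

GRANULARITY_NS = 100_000  # 100 µs, i.e. 1/10,000 of a second

def coarse_clock_ns():
    """A hypothetical degraded clock: real time rounded down to 100 µs,
    so no sub-granularity timing information survives for a probe."""
    return (time.perf_counter_ns() // GRANULARITY_NS) * GRANULARITY_NS

print(coarse_clock_ns() % GRANULARITY_NS)  # → 0
```

The objection stands, though: anything that legitimately needs sub-microsecond timestamps (HFT, network measurement) breaks immediately, and attackers can often rebuild a fine-grained clock from a counting thread anyway.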
I'm probably misunderstanding this:
Singh replied: "I am not so sure. A user can host multiple tasks and if one of them was compromised, it would be bad to let it allow the leak to happen. For example if the plugin in a browser could leak a security key of a secure session, that would be bad."
But as a user, I can run a debugger or dtrace or something and read the memory of any process running under my userid.
Therefore, couldn't one process running under my ID, if it was being deliberately malicious, just exec a debugger or dtrace (or include that functionality within its codebase) and hook into and read the memory of any other process I own anyway?
The process to be infiltrated may not necessarily be "owned" by you, per se. A higher-security context may not be directly accessible to you, but only available through syscalls or other means. Cache leaks like this mean you can still read its "private" contents even though it's not a process under your direct control.
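The debugger point above can be shown without even attaching to another process: on Linux, /proc/&lt;pid&gt;/mem is readable for processes running under your own uid, and in CPython, id() happens to be an object's address. A sketch reading a bytes object's payload straight out of memory (assumptions: Linux, CPython, the payload sits within the first few words of the object header):

```python
import os

def peek(pid, addr, length):
    """Read raw bytes out of a process's address space via /proc/<pid>/mem.
    Works for any process running under your own uid (here: ourselves)."""
    with open(f"/proc/{pid}/mem", "rb") as mem:
        mem.seek(addr)
        return mem.read(length)

secret = b"hunter2-session-key"

# id() is the object's address in CPython; the byte payload lives a few
# words into the PyBytesObject header, so scan a small window from the start.
window = peek(os.getpid(), id(secret), 64)
print(secret in window)  # → True
```

Which is exactly why the mitigation only buys anything across privilege or user boundaries: same-uid processes can already read each other without any side channel.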
After nuking (flushing) L1, there's really not much room for "more security". Next up: disable the cpu clock!
Let's face it, Intel CPUs are broken by design (cheating on correctness & security for more performance). They want to get home. Let's send them there and demand the money back. ;-)
"broken by design"
The relationships between USA, Israel, NSA, ISNU, Intel corp. in Israel and USA are somewhat opaque.
Misdirection of public to have concern about security at higher protocols when backdoors are already built in at chip level.
Do AWS have an account type that can specify non-Intel hardware? Or perhaps just certain "regions" for particularly sensitive operations...
Just assume everything can be compromised.
1. There is a high risk population (of processes).
2. They become paranoid about sanitizing (the cache).
3. As a result, everybody is driven into lockdown (poor performance) and frequent (cache) flushing, etc., regardless of risk.
4. In the end, Linus (who is a Swedish-speaking Finn) applies "the Swedish model" and rejects said lockdown, citing the lack of compelling data in the process.
Am I overthinking it?
Sweden, unlike the rest of the Nordic countries, has just caught up with Spain on death rate and may even aspire to the heights of the UK - currently holder of the highest death rate of the G7.
But it isn't relevant in the slightest. People can choose how they host. They can, if they want, walk away from containerisation and all that stuff and have their own dedicated real servers in a data centre. If you live in a developed country, there really is no way to opt out of possible exposure to coronavirus. And CPUs will change to mitigate the threats anyway. In a few years that mod would be like some of the junk DNA we possess that may relate to dealing with ancient viruses.
It is not clear what you are calling a myth, but Swedish epidemiologists seem to be saying they got it wrong.
... and becoming immune.
As coronaviruses tend to mutate rather quickly, it is noteworthy that the level of immunity achieved this way is MUCH higher than that of a potential vaccine. These people are, henceforward, immune to this disease; they can't spread it either.
Immunity via exposure is just as prone to failure by mutation as immunity via vaccine. All a vaccine does is trigger immune responses in the body without necessarily making you sick first.
That said, the structure of the COVID-19 virus does not lend itself well to significant rearrangement, much as how measles (a notoriously stubborn virus otherwise) can't seem to find a way around our current level of vaccine tech. Influenza is much easier to rearrange, which is why we can't seem to peg down a universal vaccine for it just yet.
The catch right now is how long the immunity effect lasts. There are hints that, like common cold coronavirii, the effect may not be as long as we'd like (months versus a year or two).
This post has been deleted by a moderator
"Incorrect. A vaccine makes you sick and triggers an immune response."
How can a killed virus make you sick? The trick is that an immune response doesn't necessarily trigger on illness but on the presence of intruders. Otherwise, a killed virus wouldn't work. Now, it's not always possible to use a killed virus, which is why you then have to use an attenuated virus instead, but they DO caveat that such a virus has the potential to make you sick.
This post has been deleted by a moderator
Linus is right, in that it's not up to the Linux kernel to solve security problems of the type that AWS is trying to mitigate. If the data you're processing is so sensitive what the hell are you doing running on something in the cloud? It's not just when you're processing the data that you're vulnerable - the input and output data has to be stored in the cloud too. But cloud providers have never been hacked - right?
I'm just amazed at the number of companies and government/semi-government organisations that are using cloud-based email and data processing/storage. Has no-one done a risk assessment of what could go wrong? But it seems the beancounters have taken over and cheaper beats secure every time - until an excavator or a fire takes out Internet connectivity and the business is left without any access to their corporate data for hours or even days. Yeah - much cheaper!