This is actually useful
I have seen a problem where the startup of a bunch of VMs was delayed far too long because a program that was part of the VM startup wanted randomness, and it took too long to "collect" it.
Linux v4.19-rc1, release candidate code published on Sunday, allows those building their own kernel or Linux distribution to choose whether or not to trust the CPU hardware random number generator, a decision that has become complicated in the wake of the revelations about government surveillance over the past five years. When …
No, not really that useful.
At issue is that for most tasks the regular random number generator is random enough.
For tasks that require a better random number generator, they will use alternatives that are already available.
So sure, it moves the issue to the vendor (e.g. RH, SUSE, etc.), but it really is attempting to solve a problem that is already solved through other means. I guess you could say it reduces the burden on other apps to supply their own random number generator. That's about it...
The alien icon, because one of the best sources of entropy (randomness) is listening to background radiation/noise from space...
> You have a Yes/No decision at kernel build time.
Only if you build your own kernel.
> Why would you want to disable it?
Because you are using a kernel provided by a Linux distro, not compiling your own. In this case, the distro may want to choose a default value for this flag, but the user may want to change the flag.
There are usually good reasons for using a distro rather than building your own Linux distro from scratch: it's a lot less hassle, it gives you software binaries that have been tested, and it makes it easy to get security patches. All those arguments apply to the kernel as much as the rest of the software in the distro. Of course, there are times when people have specific non-standard requirements, and have to compile a kernel themselves, but those are rare.
And yes, most of the people reading this thread are likely part of the rare group with non-standard requirements, but that's because this is a thread about a kernel patch on a tech news website...
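For reference, the build-time knob under discussion is CONFIG_RANDOM_TRUST_CPU, which is what 4.19 adds; later kernels also grew a random.trust_cpu= boot parameter, so a user can override whatever default the distro picked without recompiling. A rough sketch of how the two fit together (names from memory, so check the documentation for your kernel version before relying on them):

    # Build time: the distro or self-builder picks the default in the kernel .config
    CONFIG_RANDOM_TRUST_CPU=y    # or =n to distrust the CPU's RNG by default

    # Boot time, on kernels that support it: the user overrides that default
    # from the kernel command line in the bootloader
    random.trust_cpu=off         # or =on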
"no - not every system has a need for it - my media server as example."
What makes you think a malcontent can't usurp your media server and use it as a springboard to other parts of your network...or even as part of a botnet to attack the greater Internet (in which case it doesn't matter if it has secrets or not, just oomph and access).
Because if you can't trust the CPU's RNG, you can't trust ANY RNG. There's no telling where it's been, certification or no, plus the CPU or mobo can undo any effort you make by tampering with the communications channels. The main reason you want a hardware RNG is because you need a high-throughput TRNG, such as running a key-generating server.
As for trusting the CPU's RNG, this is usually mitigated by employing multiple entropy sources so that the worst case is that a bad source adds no entropy. AFAIK, there's no practical way for the CPU to know enough about any alternate sources to actually negate entropy.
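To make the mixing idea concrete, here is a minimal toy in C, not the kernel's actual mixer: it assumes an x86-64 CPU with RDRAND (built with -mrdrnd) and uses /dev/urandom purely as a stand-in for "some other source". XORing two independent streams can't come out less random than the better of the two.

    /* Toy illustration of combining two independent entropy sources so a
     * bad source cannot subtract entropy: the XOR of an unknown-quality
     * stream with an independent good stream is at least as unpredictable
     * as the good stream alone. Not the Linux kernel's real mixing code. */
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Source A: the CPU's RDRAND instruction (the possibly untrusted one). */
    static uint64_t rdrand64(void)
    {
        unsigned long long v = 0;
        while (!_rdrand64_step(&v))   /* retry on the rare transient failure */
            ;
        return v;
    }

    /* Source B: a stand-in for "another entropy source" (here /dev/urandom,
     * purely for demonstration purposes). */
    static uint64_t other_source64(void)
    {
        uint64_t v = 0;
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f || fread(&v, sizeof v, 1, f) != 1) {
            perror("reading /dev/urandom");
            exit(1);
        }
        fclose(f);
        return v;
    }

    int main(void)
    {
        /* Mix by XOR: if either input is truly random and independent of
         * the other, the combined value is too. */
        uint64_t mixed = rdrand64() ^ other_source64();
        printf("mixed sample: %016llx\n", (unsigned long long)mixed);
        return 0;
    }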
There's one place where the CPU and ONLY the CPU can be used: bootstrap. At that point, no other buses are open, including those you'd need to access another RNG. How does one propose to secure the bootstrap procedure without access to any other RNG?
"Because if you can't trust the CPU's RNG, you can't trust ANY RNG."
I don't follow that logic. Can you explain?
"The main reason you want a hardware RNG is because you need a high-throughput TRNG, such as running a key-generating server."
Absolutely. I wasn't arguing against hardware RNGs. I was talking about the RNGs that are included in some CPUs.
"How does one propose to secure the bootstrap procedure without access to any other RNG?"
There are a few ways to do this, depending on the CPU in question, but that's a discussion that can't be effectively had in a comment section. But I wasn't addressing securing the bootstrap procedure, I was really talking about using it for crypto in the more general case. If you're stuck with the CPU RNG for boot-time, then that's what you use. But that doesn't mean you should keep using it for crypto after the boot process completes.
"Because if you can't trust the CPU's RNG, you can't trust ANY RNG."I don't follow that logic. Can you explain?
Because any other RNG source would have to be accessed via a communications interface in the same computer that has the compromised CPU RNG. The PCIe, USB, Thunderbolt, serial, parallel, PS/2, or any other communications interface is controlled by the same source as the CPU. Therefore, if the manufacturer of the CPU is going to compromise the CPU's RNG, they are fully capable of intercepting, and modifying, any other data traffic in the computer.
That hardware RNG you plugged into the USB port? Pity the number being used by the encryption software running on the CPU isn't the one from that USB-attached RNG, as the CPU substituted its own dodgy output for the numbers coming from the USB port.
@aldakka: purely in theory, yes
but there are quite a few people who de-lid and de-cap Intel CPUs to look at what they actually do at the silicon level, and then there's the fact that a CPU has very well-defined outputs for given inputs, again something that quite a few people verify before using the CPU in question
and then we have the RNG, a circuit that *by design* produces inscrutable and unpredictable outputs for all inputs. Does it do that by encrypting a counter with AES and a key a TLA knows? Or does it do that by getting the data from some quantum process? Can't really tell (and believe me, people have looked at it; there are plenty of papers on the topic)
so while, yes, one single person can't be certain that the CPU doesn't substitute completely predictable bytes in place of the USB-provided random bytes, as a community we can be reasonably sure it doesn't do that; we can't say the same of the built-in RNG
"so while, yes, one single person can't be certain that the CPU doesn't switch completely predictable bytes in place where the USB provided random bytes should be, as a community we can be reasonably sure it doesn't do that; can't say the same of the built-in RNG"
HOW? Particularly against something with state-level resources like a TLA? If they can hide corrupt RNGs in a CPU beyond the ability to detect even via things like x-rays, can't the same technique be used to corrupt any other I/O stream? After all, things like Heartbleed and Shellshock got past "the community" for a long time, too. For all we know, something like this has been a black project since before it was even a concern to us.
"If they can hide corrupt RNGs in a CPU beyond the ability to detect even via things like x-rays, can't the same technique be used to corrupt any other I/O stream?"
because turning the RNG into a biased one requires changing the amount of doping in a single transistor (oh, and that counter mode for AES? that's how the Intel design document says its RNG works, which means very little needs to change to make the counter or the key (and thus the RNG's output) predictable to certain people while remaining completely unpredictable to me and you; there's a toy sketch of that counter-mode idea at the end of this comment)
detecting whether the USB dongle connected is a custom RNG, or just an RS232 bridge, or an LHC muon detector, would likely require hundreds of transistors or hundreds of cycles
and sure, it's technically possible for a TLA to create such a CPU and plant it in your computer, but if they are interested in you to that degree, the RNG of the Intel CPU would be the last thing on my mind
I don't know why you bring up Shellshock – it was a documented feature with unintended consequences. Regarding Heartbleed – we know that the RNG is the important part, we know that Intel sometimes screws up an implementation (the FDIV bug being the best-known example), and people are specifically looking for problems in it. Nobody was looking for bugs in the heartbeat implementation before Heartbleed.
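To make the counter-mode point concrete (the sketch mentioned a couple of paragraphs up): a generator of that shape is only unpredictable to people who don't hold the key. This is a toy of the general CTR idea using OpenSSL's EVP API, an illustration only and not Intel's actual circuit; build it with something like cc ctr_toy.c -lcrypto.

    /* Toy CTR-style generator: the "random" output is just AES applied to
     * an incrementing counter, so anyone who knows the key and the counter
     * can reproduce every byte. Illustration of the principle only. */
    #include <openssl/evp.h>
    #include <stdio.h>

    int main(void)
    {
        /* A key known to "certain people" makes the whole stream predictable. */
        unsigned char key[16] = "not-very-secret";
        unsigned char ctr[16] = {0};      /* the counter, used as the IV */
        unsigned char zeros[32] = {0};    /* encrypting zeros exposes the raw keystream */
        unsigned char out[32];
        int len = 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (!ctx ||
            EVP_EncryptInit_ex(ctx, EVP_aes_128_ctr(), NULL, key, ctr) != 1 ||
            EVP_EncryptUpdate(ctx, out, &len, zeros, sizeof zeros) != 1) {
            fprintf(stderr, "OpenSSL error\n");
            return 1;
        }
        EVP_CIPHER_CTX_free(ctx);

        /* These 32 "random-looking" bytes are fully determined by key + counter. */
        for (int i = 0; i < len; i++)
            printf("%02x", out[i]);
        printf("\n");
        return 0;
    }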
"why is there even a random number generator in a cpu's microcode?"
Convenience. It's cheaper and easier to have it there than to have to include RNG hardware externally.
"It would make more sense to me for OS or better yet the security software to have an RNG."
Software cannot produce random numbers, only pseudorandom numbers. In practice, with the proper pRNG algorithm, that can be good enough -- but you still want at least one actual random number to seed the pRNG.
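A minimal sketch of that "one real seed, then a pRNG" pattern, assuming Linux's getrandom(2) as the source of the genuinely random value and using splitmix64 purely for brevity; anything security-relevant should use a vetted CSPRNG instead.

    /* Seed a small deterministic generator with one genuinely random value
     * from the OS, then expand it. splitmix64 is used only because it is
     * short; it is not a cryptographic generator. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/random.h>

    static uint64_t state;

    static uint64_t splitmix64(void)
    {
        uint64_t z = (state += 0x9e3779b97f4a7c15ULL);
        z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
        z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
        return z ^ (z >> 31);
    }

    int main(void)
    {
        /* One genuinely random 64-bit seed from the kernel... */
        if (getrandom(&state, sizeof state, 0) != (ssize_t)sizeof state) {
            perror("getrandom");
            return 1;
        }
        /* ...then as many pseudorandom values as we like. */
        for (int i = 0; i < 4; i++)
            printf("%016llx\n", (unsigned long long)splitmix64());
        return 0;
    }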
"This could tend to make it more difficult for unwanteds to gain access to the device."
That would make it easier, really.
A few reasons:
1) The CPU's random number generator can be genuinely random, based upon provably random physical phenomena rather than pseudo-random numbers produced by mathematical manipulation.
2) There are some sources of actually-random data in a computer, although they are usually not of the same strength as "provably random". An example is the jitter from disk drive events. But these sources are rapidly disappearing as physical devices give way to silicon. This is the operational problem of not enough 'entropy' (aka real randomness) being available as a machine starts.
3) It's "too easy" for these actually-random sources of data in a computer to be influenced from outside the computer. Since they are not built as cryptographic devices. Whereas the random instructions within the CPU can include tamper detectors (such as for high EM fields).
4) Timing and other covert-channel attacks are simpler against software than against hardware. Those attacks are also simpler against hardware not intended to be cryptographic devices than against hardware designed with covert channels in mind. It is easier in hardware to build a black box where all instances of the instruction take the same time to complete, use the same power, and so on. (As an aside, the current issue with CPUs is that the care in design needed to defeat covert channels, which was done for the RDRAND instruction, needs to be repeated throughout the CPU design for other instructions.)
These reasons explain the last line of Ted's LKML e-mail: "Note: I trust [Intel's hardware instruction] RDRAND more than I do Jitter Entropy [from the computer's hardware devices]".
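For anyone wondering what "Jitter Entropy" refers to there: the idea is to harvest the unpredictable low-order bits of timing measurements taken around small CPU and memory workloads. A toy sketch of the principle only; the kernel's real jitter-entropy collector does far more conditioning and health testing.

    /* Toy illustration of timing-jitter harvesting: time a small noisy
     * workload many times and fold the low bits of each delta into a pool.
     * This shows the principle only, not the kernel's jitterentropy code. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }

    int main(void)
    {
        volatile uint64_t sink = 0;   /* keeps the workload from being optimised away */
        uint64_t pool = 0;

        for (int i = 0; i < 1024; i++) {
            uint64_t t0 = now_ns();
            for (int j = 0; j < 100; j++)       /* small memory/ALU workload */
                sink += (sink << 13) ^ (sink >> 7) ^ (uint64_t)j;
            uint64_t delta = now_ns() - t0;

            /* Rotate the pool and fold in the noisy low bits of the delta. */
            pool = (pool << 5) | (pool >> 59);
            pool ^= delta;
        }
        printf("pool sample: %016llx\n", (unsigned long long)pool);
        return 0;
    }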
"(As an aside the current issue with CPUs is that the care of design needed to defeat covert channels done for the RDRAND instruction needs to be repeated throughout the CPU design for other instructions.)"
Well, one problem with that approach is that these CPUs are more often being put into portable applications where power isn't a given, in which case efficiency trumps security unless you can achieve both (which, last I checked, you can't; efficiency inevitably leaves tells).
Cisco has alerted customers to another four vulnerabilities in its products, including a high-severity flaw in its email and web security appliances.
The networking giant has issued a patch for that bug, tracked as CVE-2022-20664. The flaw is present in the web management interface of Cisco's Secure Email and Web Manager and Email Security Appliance in both the virtual and hardware appliances. Some earlier versions of both products, we note, have reached end of life, and so the manufacturer won't release fixes; it instead told customers to migrate to a newer version and dump the old.
This bug received a 7.7 out of 10 CVSS severity score, and Cisco noted that its security team is not aware of any in-the-wild exploitation, so far. That said, given the speed of reverse engineering, that day is likely to come.
Microsoft is flagging up a security hole in its Service Fabric technology when using containerized Linux workloads, and urging customers to upgrade their clusters to the most recent release.
The flaw is tracked as CVE-2022-30137, an elevation-of-privilege vulnerability in Microsoft's Service Fabric. An attacker would need read/write access to the cluster as well as the ability to execute code within a Linux container granted access to the Service Fabric runtime in order to wreak havoc.
Through a compromised container, for instance, a miscreant could gain control of the resource's host Service Fabric node and potentially the entire cluster.
The latest version of OpenSSL v3, a widely used open-source library for secure networking using the Transport Layer Security (TLS) protocol, contains a memory corruption vulnerability that imperils x64 systems with Intel's Advanced Vector Extensions 512 (AVX512).
OpenSSL 3.0.4 was released on June 21 to address a command-injection vulnerability (CVE-2022-2068) that was not fully addressed with a previous patch (CVE-2022-1292).
But this release itself needs further fixing. OpenSSL 3.0.4 "is susceptible to remote memory corruption which can be triggered trivially by an attacker," according to security researcher Guido Vranken. We're imagining two devices establishing a secure connection between themselves using OpenSSL and this flaw being exploited to run arbitrary malicious code on one of them.
At The Linux Foundation's Open Source Summit in Austin, Texas on Tuesday, Linus Torvalds said he expects support for Rust code in the Linux kernel to be merged soon, possibly with the next release, 5.20.
At least since last December, when a patch added support for Rust as a second language for kernel code, the Linux community has been anticipating this transition, in the hope it leads to greater stability and security.
In a conversation with Dirk Hohndel, chief open source officer at Cardano, Torvalds said the patches to integrate Rust have not yet been merged because there's far more caution among Linux kernel maintainers than there was 30 years ago.
EndeavourOS is a rolling-release Linux distro based on Arch Linux. Although the project is relatively new, having started in 2019, it's the successor to an earlier Arch-based distro called Antergos, so it's not quite as immature as its youth might imply. It's a little more vanilla than Antergos was – for instance, it uses the Calamares cross-distro installer.
EndeavourOS hews more closely to its parent distro than, for example, Manjaro, which we looked at very recently. Unlike Manjaro, it doesn't have its own staging repositories or releases. It installs packages directly from the upstream Arch repositories, using the standard Arch package manager, pacman. It also bundles yay to easily fetch packages from the Arch User Repository, AUR. The yay command takes the same switches as pacman does, so if you wanted to install, say, Google Chrome, it's as simple as yay -S google-chrome, and a few seconds later it's done.
1Password, the Toronto-based maker of the identically named password manager, is adding a security analysis and advice tool called Insights from 1Password to its business-oriented product.
Available to 1Password Business customers, Insights takes the form of a menu addition to the right-hand column of the application window. Clicking on the "Insights" option presents a dashboard for checking on data breaches, password health, and team usage of 1Password throughout an organization.
"We designed Insights from 1Password to give IT and security admins broader visibility into potential security risks so businesses improve their understanding of the threats posed by employee behavior, and have clear steps to mitigate those issues," said Jeff Shiner, CEO of 1Password, in a statement.
Interview In June, Purism began shipping a privacy-focused smartphone called Librem 5 USA that runs on a version of Linux called PureOS rather than Android or iOS. As the name suggests, it's made in America – all the electronics are assembled in its Carlsbad, California facility, using as many US-fabricated parts as possible.
While past privacy-focused phones, such as Silent Circle's Android-based Blackphone, failed to win much market share, the political situation is different now than it was seven years ago.
Supply-chain provenance has become more important in recent years, thanks to concerns about the national security implications of foreign-made tech gear. The Librem 5 USA comes at a cost, starting at $1,999, though there are now US government agencies willing to pay that price for homegrown hardware they can trust – and evidently tech enthusiasts, too.
Analysis Toxic discussions on open-source GitHub projects tend to involve entitlement, subtle insults, and arrogance, according to an academic study. That contrasts with the toxic behavior – typically bad language, hate speech, and harassment – found on other corners of the web.
Whether that seems obvious or not, it's an interesting point to consider because, for one thing, it means technical and non-technical methods to detect and curb toxic behavior on one part of the internet may not therefore work well on GitHub, and if you're involved in communities on the code-hosting giant, you may find this research useful in combating trolls and unacceptable conduct.
It may also mean systems intended to automatically detect and report toxicity in open-source projects, or at least ones on GitHub, may need to be developed specifically for that task due to their unique nature.
Blockchain venture Harmony offers bridge services for transferring crypto coins across different blockchains, but something has gone badly wrong.
The Horizon Ethereum Bridge, one of the firm's ostensibly secure bridges, was compromised on Thursday, resulting in the loss of 85,867 ETH tokens optimistically worth more than $100 million, the organization said via Twitter.
"Our secure bridges offer cross-chain transfers with Ethereum, Binance and three other chains," the cryptocurrency entity explained on its website. Not so, it seems.
E-paper display startup Modos wants to make laptops, but is starting out with a standalone high-refresh-rate monitor first.
The initial plan is for the "Modos Paper Monitor," which the company describes as: "An open-hardware standalone portable monitor made for reading and writing, especially for people who need to stare at the display for a long time."
The listed specifications sound good: a 13.3", 1600×1200 e-ink panel, with a DisplayPort 1.2 input, powered via MicroUSB because it only draws 1.5-2W.