"the most corrupt nation in Europe."
That would be Russia, right? Or do we still count it in Europe?
AFAIK, USB-C video signalling uses Alternate Mode, which repurposes the USB 3 data lanes as a video link. In the case of HDMI, this is specified for HDMI 1.4b, which would have the effect you describe. You'd probably get better results with USB-C --> DisplayPort.
This also means that the data speed falls back to USB 2, which is mandatory and runs over separate wires.
Thunderbolt multiplexes the video data with regular data so the effective available data rate depends on the video data rate. It's a much better system and I'm hopeful that all USB4 implementations will support Thunderbolt, which should not be a big problem since they use the same PHY. Thunderbolt is trademarked by Intel who only allow its use for certified (by Intel, of course) devices but it can work without being formally certified, as in some Ryzen 6800U laptops.
On the amd64 architecture, 64-bit applications generally perform better because the number of general-purpose registers doubles. There may be a small hit from the larger pointer type, but it's usually negligible.
Also, with a 64-bit address space there's more room for ASLR, which is a security benefit.
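For the curious, here's a quick sketch (Python, assuming a reasonably recent Linux kernel) that prints how many bits of entropy the kernel applies to mmap randomisation for 64-bit and 32-bit (compat) processes; the 64-bit value is typically much larger:

from pathlib import Path

def read_vm_sysctl(name: str) -> str:
    """Read a /proc/sys/vm value, returning 'n/a' if it isn't exposed."""
    try:
        return Path("/proc/sys/vm/" + name).read_text().strip()
    except OSError:
        return "n/a"

# Bits of randomness used for mmap ASLR on this kernel.
print("64-bit mmap ASLR bits :", read_vm_sysctl("mmap_rnd_bits"))
print("32-bit mmap ASLR bits :", read_vm_sysctl("mmap_rnd_compat_bits"))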
And if we can get rid of all 32-bit applications on 64-bit Windows, we can probably kick WOW64 into the gutter. Although I'm sure MS will still keep it around for some obscure reason that isn't immediately obvious.
Actually, as another reply to your post points out, it's always a good idea to have a couple of GBs as swap. Long gone are the days when the rule of thumb for swap size was twice the RAM. But Linux can swap out seldom-used pages to free space for the disk cache, if nothing else.
Personally, I prefer block device swap rather than file swap. I use LVM and I'm usually in a position to create a logical volume for additional swap if the going gets tough.
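If it helps, here's a minimal sketch of what "create a logical volume for additional swap" looks like in practice (Python wrapping the usual LVM tools; the volume group name vg0, the LV name and the 4G size are just placeholders):

# Carve out an extra swap LV and enable it. Assumes an existing
# volume group called "vg0" -- adjust names and sizes to taste.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["lvcreate", "-L", "4G", "-n", "swap_extra", "vg0"])  # new logical volume
run(["mkswap", "/dev/vg0/swap_extra"])                    # format it as swap
run(["swapon", "/dev/vg0/swap_extra"])                    # enable it right away
# For persistence you'd also add a line to /etc/fstab, e.g.:
#   /dev/vg0/swap_extra  none  swap  sw  0  0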
However, the 3100 has two CCXs with half the resources each, whereas the 3300 has a single complete CCX. The latter is better because core-to-core latency is much higher across CCXs. So the difference is bigger than what the frequencies alone would suggest. Should still be enough for a mum's PC though.
"Paint3D is an effing awful regression."
Well, you can uninstall it. And reclaim 16KB if Apps & features is to be believed. So what do you expect from a 16KB software package? OTOH, maybe it's a miracle of software engineering and that would be the justification to silently reinstall it on the next major update. Resistance is futile.
Thinking about it, maybe they should rename it to Pain 3D.
I am a huge KVM fan (haven't really tried Xen) but I assume VMware has features that are missing from KVM. I am trying to make use of the VMDq feature of Intel NICs, which is supposed to work out of the box on ESXi under the NetQueue moniker. But I cannot find any information on how to do this. I found some info that Xen 3.3 "will" support this (quite an old post, obviously) but I couldn't find anything in the release notes of any Xen version that mentions VMDq.
As much as I appreciate it, the FOSS world is not all roses, you know.
"they develop it themselves and don't make it open, as the license doesn't require it"
Now whose fault is that? Maybe the FSF was on to something? Developing software with BSD/MIT type license is charity work, plain and simple.
Open source (and software libre) do not exclude the profit motive. But the profit comes from the services one offers around the software. Like, someone wants a new feature, he can pay you to develop it. Or pay you to set it up on premises, things like that. After all, companies that provide SaaS with OSS do pay their developers and admins. It's just that there's no artificial scarcity from which to extract exorbitant profits.
At least that's how the theory goes. I'm not taking sides in this, just noting.
Imagine you have a multithreaded FP-heavy workload. With Bulldozer, if you run it with 4 threads or with 8 threads you'd get roughly similar performance. The 8-thread run would probably be slightly faster because it would allow better utilization of the FPUs, but OTOH it could thrash the shared L2 cache. Still, going from 4 threads to 8 will not result in anything akin to 2x the performance (disregarding Amdahl's law for the time being).
Now, there's always the possibility that for the 4-thread scenario a shitty OS would schedule the threads on both "cores" of each of 2 modules, rather than on a single "core" in each of 4 modules. That would suck on its own, although it would provide the 2x boost you'd expect - but it has nothing to do with the point that I'm trying to make. Bulldozer was not an 8-core processor for any meaningful definition of the term. And I say this as an AMD fanboy. I hated them because they made me buy Intel CPUs. Only in the last 2 years have I returned to considering AMD for CPUs, and I have advised the purchase of several Ryzen systems and built a ThreadRipper for myself. AMD's comeback is even more impressive because of the Bulldozer (and to a large extent the later iterations) fiasco. Part of which is the fake core-ness.
"The chips did have eight cores, and each core was able to do floating-point arithmetic at full speed."
This is patently not true. Full speed would mean using both FMAC units. When two "cores" issue FP operations they effectively (i.e. on a rough average) use only one each. Sure, in most cases you won't be hammering both cores with FP workloads, but if your workloads are indeed FP-heavy, then it performs as if every module were a single core.
BTW, I personally wouldn't get too worked up about the FPUs. In a comment above I clarify that, in my opinion, what turns BD into a 4-core architecture is the shared instruction decoder. It makes BD modules glorified HyperThreading cores. Only in later designs was this rectified, and those can reasonably (but not completely) be considered 2x cores.
"AMD were completely accurate and precise using the normal meaning of the word core."
I disagree. Not because of the shared FPU but because of the shared instruction decoder. That is what turns Bulldozer's module implementation into a glorified HyperThreading core. Later designs with separate instruction decoders could reasonably be considered separate cores. Not Bulldozer.
I use ZFS on Linux but my rationale might be the same - I use ZFS because I value my data and I value my money. The first should go without saying but the second requires some explanation.
I need to be able to use RAID5 - RAID1 doesn't cut it in terms of how many drives I need. This means Btrfs is not an option. I am a huge fan of Btrfs because it offers some features that ZFS doesn't (and also the other way around, of course) but I don't trust its RAID 5/6 implementation. Whereas ZFS is rock solid.
Then there's the caching support - I can configure an SSD as an L2ARC cache without much hassle and have my huge (relatively) HDD array perform as well as an SSD once the cache has warmed up (caveat emptor - on Linux the cache starts from zero after a reboot). And with ZFS you can configure the caching behaviour per dataset (whether the ARC and/or L2ARC cache metadata only, metadata + data, or nothing).
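To give a flavour of what that looks like in practice, here's a sketch (the pool name "tank", the dataset "tank/media" and /dev/nvme0n1 are placeholders for whatever you actually have):

# Attach an L2ARC device and tune per-dataset caching - driven from
# Python purely for illustration; these are ordinary zpool/zfs commands.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Add an SSD/NVMe device as L2ARC (read cache) to an existing pool.
run("zpool", "add", "tank", "cache", "/dev/nvme0n1")

# Per-dataset caching policy: all (metadata + data), metadata, or none.
run("zfs", "set", "primarycache=all",        "tank/media")   # ARC (RAM)
run("zfs", "set", "secondarycache=metadata", "tank/media")   # L2ARC (SSD)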
I have a friend who runs an 8-disk RAID-Z1 array with a 9th as hot spare and 6 Optane drives for cache. I did set it up for him and it only took like 5 min. Generally, ZFS is very easy to set up and maintain once you've wrapped your head around it. And the flexibility is great. As long as you don't have to shrink your zpool...
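For reference, the pool layout his setup boils down to can be created in a single command - a sketch with made-up device names (his actual drives and Optane devices obviously differ, and he has more cache devices than shown here):

# Sketch: 8-disk RAID-Z1 pool with a hot spare and L2ARC cache devices.
import subprocess

cmd = [
    "zpool", "create", "bigpool",
    "raidz1", "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd",
              "/dev/sde", "/dev/sdf", "/dev/sdg", "/dev/sdh",
    "spare",  "/dev/sdi",                      # hot spare
    "cache",  "/dev/nvme0n1", "/dev/nvme1n1",  # L2ARC devices
]
print("+", " ".join(cmd))
subprocess.run(cmd, check=True)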
BTW, there are many important ZFS feature additions in the pipeline. I think RAID-Z expansion (adding a drive to an existing RAID-Z vdev) is one of them.
Well, I already switched to SSD for my bulk storage (4x1TB in RAID-Z1). I have bought about 8TB of SSD storage for the last year or so and the prices keep falling.
However, you can combine the best of both worlds (on the HDD side that's nothing more than the price, of course) by using ZFS with one or more additional SSDs as L2ARC devices. Once the cache has warmed up it performs admirably. The only problem is that with ZFS on Linux the cache is empty after a reboot, but if you boot no more than once a month that's pretty much negligible.
"and swapping between boxen/virtuals is a PITA"
What do you mean, "switch"? I run several VirtualBox VMs and ssh into them from 'screen' sessions on Cygwin. Which also gives me a decent (if a bit sluggish) Linux command line on Windows too. I can, of course, use WSL to ssh to the VMs, but I have problems running 'screen' on it. Then "switching" is simply Ctrl-A+<window number>.
In any case, Cygwin is always the second thing I install on a fresh Windows (the first is Firefox which I then use to download everything else). I really don't understand the lack of love for Cygwin. It's not very efficient but it works very reliably. You can even use it to set up ssh server on your Windows box.
I was about to write something similar but you beat me to it. The poster above reminds me of a fellow student of mine at an oral exam. He complained that he told the professor everything yet he still failed him. I got the same ticket later that day and got an A. I may have known something more than said student thought was "everything"...
The problem is, people like the OP will say - yeah, you told the professor what he wanted to hear. Obviously, regular people on the street know more than any given professor in a University.
Well, you can also use the text-based minimalistic installer and select which desktop you want (Caveat emptor - I think it doesn't do EFI boot):
You can also install several desktop environments simultaneously and change them from the login manager.
For a KVM virtual machine there's an even better way - dig down a little deeper in the directories:
And download the kernel and initrd image of the installer. You can pass those to a KVM virtual machine to boot from.
I love the flexibility of this thing. I think that some time ago, if you started the installer kernel over a serial console, it would even add the serial console option to the Grub defaults so you could access the virtual machine over the serial console right after installation. This doesn't work with 18.04, so you'd better do that manually before rebooting the newly installed system. But I digress...
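To make the "pass those to a KVM virtual machine" bit concrete, here's a rough sketch (file names and the disk path are placeholders; the kernel and initrd are the installer images downloaded as described above):

# Boot a KVM guest straight into the downloaded installer kernel/initrd,
# with the installer on the serial console. Paths are placeholders.
import subprocess

qemu = [
    "qemu-system-x86_64",
    "-enable-kvm", "-m", "2048",
    "-kernel", "linux",                 # installer kernel downloaded earlier
    "-initrd", "initrd.gz",             # matching installer initrd
    "-append", "console=ttyS0",         # talk to the installer over serial
    "-drive",  "file=disk.qcow2,format=qcow2,if=virtio",
    "-nographic",                       # serial console on your terminal
]
subprocess.run(qemu, check=True)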
"Basically the manufacturers state that if the overall workload is kept below this threshold then the reliability of the drive will be as advertised. Exceeding the WRL rating reduces the reliability conferred."
It's probably just an excuse to not honor the warranty.
"avoiding leaking timing info generally means every instruction running for the worst-case time."
Not necessarily, IMO. You could have a flag in the cache marking that a certain line was loaded speculatively, with the flag cleared once the load is confirmed. While a line sits in the cache with the speculative flag set, it behaves as if it's not there, until the speculation is resolved. Obviously that would require hardware support, but I think it's doable. I'm not a CPU designer though.
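A toy model of the idea (pure Python, nothing to do with how a real cache is built) might look like this: lines loaded speculatively are tagged, invisible to anyone probing timing, and either promoted or dropped when the speculation resolves:

# Toy sketch of a cache where speculatively-loaded lines are tagged and
# treated as "not present" until the speculation is confirmed.
class ToyCache:
    def __init__(self):
        self.lines = {}  # address -> {"data": ..., "speculative": bool}

    def fill(self, addr, data, speculative=False):
        self.lines[addr] = {"data": data, "speculative": speculative}

    def lookup(self, addr):
        """A speculative line behaves like a miss for timing purposes."""
        line = self.lines.get(addr)
        if line is None or line["speculative"]:
            return None               # architecturally invisible
        return line["data"]

    def resolve(self, addr, confirmed):
        """Called when the speculation retires (confirmed) or is squashed."""
        line = self.lines.get(addr)
        if line is None or not line["speculative"]:
            return
        if confirmed:
            line["speculative"] = False   # now a normal, visible line
        else:
            del self.lines[addr]          # squashed: throw the line away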
If your Wi-Fi sits on a mini-PCIe card, as is often the case, I'd rather replace it with something well supported under Linux, like the Intel ones. They go for about 10-20 quid on Amazon. Unfortunately, Wi-Fi is sometimes soldered onto the mainboard. There are some pretty small USB Wi-Fi controllers, but their usable range might not be as good as with the antennas inside the laptop.
Just under the article (in the Whitepapers section) the first link is "Understanding the depth of the global ransomware problem". How appropriate.
Edit: Actually, I now see a completely different list of whitepapers. Still, in light of the article, Java SE seems not much different from ransomware.
Reminds me of when Microsoft refused to localize Windows for my country saying that it would take them > $1million. And a local dude did it for free by replacing the strings in the binaries. It was only for a specific version of Windows (Win98 IIRC) but I'd never trust MS on such things again.
For UEFI+Linux the indispensable resource is Roderick Smith's page:
As for the firmware, I'd only ask for two things: (1) be entirely accessible via an RS-232 interface and (2) offer a way for you to be in complete control of your machine. Like getting rid of SMM, which can be used to shaft your machine without any chance of you noticing. And no - running it in a virtual machine (!), as Intel suggests as a remedy for the vulns (an SMM vuln can screw even a TXT setup), doesn't quite cut it. Because who's controlling the SMM VM...?