The SMT discussions
SMT is almost always a good thing, and the situations where it isn't ought to be few and far between. Most of us here (except one really weird comment above - WTF m8?) get that the problem case is a really tight loop that doesn't benefit from SMT, when there are more of those loops running than physical cores. Linux's scheduler (and I'm sure many others) has been aware of hyper-threading for eons; the only difficulty is that it's bloody hard for a program to look around and go "ah, this is Intel - I'll just halve that reported core count".
The difficulty is in software knowing it's dealing with SMT and adjusting itself accordingly; once it knows, it can go all the way with processor affinities easily enough, and it can spin up as few or as many threads as needed.
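On Linux the kernel does expose the SMT topology through sysfs (each logical CPU has a `/sys/devices/system/cpu/cpuN/topology/thread_siblings_list` file), so a program *can* work out the physical core count for itself rather than halving blindly. A minimal sketch of the parsing - run on a canned example here rather than on real hardware:

```python
# Sketch: count physical cores by collapsing SMT sibling sets, as read
# from Linux's thread_siblings_list files ("0,4" or "0-1" style).
# The example data below is made up (a typical 4-core/8-thread part).

def physical_core_count(sibling_lists):
    """sibling_lists: one thread_siblings_list string per logical CPU."""
    cores = set()
    for s in sibling_lists:
        ids = set()
        for part in s.strip().split(","):
            if "-" in part:
                lo, hi = part.split("-")
                ids.update(range(int(lo), int(hi) + 1))
            else:
                ids.add(int(part))
        cores.add(frozenset(ids))  # each unique sibling set = one core
    return len(cores)

example = ["0,4", "1,5", "2,6", "3,7", "0,4", "1,5", "2,6", "3,7"]
print(physical_core_count(example))  # → 4 physical cores behind 8 logical CPUs
```

On real hardware you'd read those files with a glob over `/sys/devices/system/cpu/cpu*/topology/`; newer kernels also expose `core_cpus_list`, which means the same thing.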
As long as we can pass a non-default thread count to these programs - via an environment variable, command-line argument, config file, whatever - leave SMT on. The people who should turn it off are those running binaries from others - maybe some interpreted languages too - but that's another issue for another time; maybe they should use affinities instead - again, another time.
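Something like this is all it takes on the program's side - a sketch where `SMT_DEMO_THREADS` is a made-up variable name for illustration, in the spirit of OpenMP's real `OMP_NUM_THREADS`:

```python
import os

def worker_threads(default=None):
    """Pick a thread count: an explicit override wins, otherwise fall
    back to the caller's default or to every logical CPU the OS reports.
    SMT_DEMO_THREADS is a hypothetical variable name for this sketch."""
    override = os.environ.get("SMT_DEMO_THREADS")
    if override:
        return max(1, int(override))
    if default is not None:
        return default
    return os.cpu_count() or 1

os.environ["SMT_DEMO_THREADS"] = "4"
print(worker_threads())  # → 4, the override wins
```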
The modern cores from Intel and AMD (ignoring the Piledriver-grade pounding they gave me, kinda) are very, very super-scalar - so wide that they're extremely hard to keep busy even close to half the time. They're built so that software which leans heavily on various areas can run fast; no one instruction stream will use *everything*. Someone above mentioned registers. You're looking at around 168 integer registers now (since Sandy Bridge/2011, which might be 154... maybe...), and this is another area with lots of resources that one weird bit of software might exhaust (needing lots of live values, huge spilling or something) - but that same software won't need all of some other area of the execution engine. I've not proved the claim "forall instruction sequences [ that sequence uses the full resources of at most one arbitrary partition of execution resources ]", but there are only around 150 to 180 ops in flight at any one time *max*, so using all those registers leaves you with like a dozen operations to do anything else - you get my point, I hope.
Anyway, that's why SMT is good. If you use perf and read the manuals you can confirm this, see (roughly) what's going on, and get the most out of it. I've written some device drivers that abuse ioctl to expose model-specific registers - there's all kinds of things those can do, but they're so model-specific.... A lot of my jobs involve squeezing performance out of Sandy Bridge chips, so trust me on this - there's a lot there. It's just so model-specific that I can see why perf et al. went "screw that" WRT supporting it.
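You don't even need the MSRs for a first-order view: the cycles and instructions counts that plain `perf stat` prints already tell you what fraction of a core's issue slots a program fills. A toy calculation with made-up counter values, assuming a 4-wide issue width (roughly right for Sandy Bridge-era cores):

```python
def issue_utilization(instructions, cycles, issue_width=4):
    """Fraction of peak issue slots actually used.
    issue_width=4 is an assumption (typical for Sandy Bridge-era cores);
    the instruction/cycle counts are what `perf stat` reports."""
    ipc = instructions / cycles
    return ipc / issue_width

# Made-up numbers for a memory-bound workload: 1.5e9 instructions over
# 2e9 cycles is an IPC of 0.75 - under a fifth of a 4-wide core's peak,
# which is exactly the slack an SMT sibling can soak up.
util = issue_utilization(1_500_000_000, 2_000_000_000)
print(f"{util:.4f}")  # → 0.1875
```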
I actually don't like that it's just 1 extra thread. I think POWER or SPARC - one of them - runs like 8 or 16 threads per core. Not even they fully utilise it most of the time (you could make it SMT512 if you like, but if nothing you're running is hammering the SIMD floating-point units, those units are going to sit idle regardless...).
It's a good thing. Dare I say "for as long as the execution units are there to do the work - it won't bottleneck", but this is the problem with hard real-time systems: you're just launching programs that aren't coordinating with each other, so that's impossible to guarantee or measure. Still, we can all see that it means "don't run that floating-point-heavy stuff with more threads than there are FPUs in total on this system", ish.
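That rule of thumb is easy to sketch in code - cap FP-heavy workers at one per physical core, since SMT siblings share their core's SIMD/FP units. The topology numbers here are assumed to be known already rather than probed:

```python
def fp_worker_count(logical_cpus, threads_per_core):
    """One FP-heavy worker per physical core: SMT siblings share the
    core's SIMD/FP units, so extra threads would just queue for them."""
    return max(1, logical_cpus // threads_per_core)

print(fp_worker_count(8, 2))  # → 4: 8 logical CPUs at SMT2 = 4 sets of FPUs
```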
I'm one of those die-hard hippies that trusts his computer, though. If I ran Windows (not a dig, but all those things bring their own DLLs, phone home, etc.) I can see why you'd be worried. Or if you sell CPU time - you get my point. I trust the software I run, and I don't run software I wouldn't trust without some restrictions - and I'd say I pay a price for that. However, how this affects a database (a great use of SMT, there)....
You see my point. Generally a very good thing. Spectre is such an issue (see my comment here https://forums.theregister.co.uk/forum/1/2018/07/26/netspectre_network_leak/ - you can't just "jitter the clocks") that any CPU fixed against it (i.e. lying about the current time - no more rdtsc, etc.) could still do SMT and be safe. I've long been thinking about this, but as it's not my job (sadly) I don't know if my "mitigated system" would be practical (it involves lying about the time - pretending everything is deterministic and isolated, yet doing it much like today) or whether it'd be way too slow. But it can easily be shown that if you can do that safely, you can make the SMT system running on top of it safe.