Re: Practice what you preach?
And it was absolutely equivalent?
Hypocrisy is not a moral argument, but an aesthetic one.
why not a free-floating constellation? I mean, if we were already good at going to the moon and building there, sure.
suppose you had a bunch of fairly cheap (perhaps even non-maneuvering) probes in a constellation some distance from Earth - would shielding against Earth's RF really be a problem? such a constellation might even be able to do additional science (would something LIGO-like be out of reach?)
UASP is not at all new - it's found on pretty much any USB-to-SATA or -NVMe adapter.
But the USB-stick market isn't sophisticated enough to care, so you won't find it there: no small/cheap/mass-market sticks support it, and it's surprisingly hard to find any that do.
Yes, if it supports UASP, it's almost certainly also got TRIM and SMART. There's no such thing for USB storage that only supports BOT (bulk-only) transfers.
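For anyone who wants to check on Linux, the giveaway is which kernel driver binds the mass-storage interface: uas means UASP, usb-storage means plain BOT. A rough sketch, assuming the usual sysfs layout:

#!/usr/bin/env python3
"""Rough check for which USB storage devices are using UASP (the 'uas'
kernel driver) versus plain bulk-only transport ('usb-storage').
A sketch for Linux sysfs; paths assume a typical modern kernel."""

import os
from pathlib import Path

SYS_USB = Path("/sys/bus/usb/devices")

def usb_storage_interfaces():
    """Yield (interface, driver) for every USB mass-storage interface."""
    for intf in SYS_USB.iterdir():
        cls_file = intf / "bInterfaceClass"
        if not cls_file.is_file():
            continue  # not an interface node
        if cls_file.read_text().strip() != "08":
            continue  # 08 = mass storage class
        driver_link = intf / "driver"
        driver = os.path.basename(os.readlink(driver_link)) if driver_link.exists() else "none"
        yield intf.name, driver

if __name__ == "__main__":
    for name, driver in usb_storage_interfaces():
        mode = "UASP" if driver == "uas" else "BOT (bulk-only)"
        print(f"{name}: driver={driver} -> {mode}")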
LEOs so often confuse two very different things: their own convenience versus the privacy of citizens.
He says that getting a warrant would be too hard and slow. But that's exactly what the 4th Amendment is for!
They make the same argument against e2e encryption: that they wouldn't be able to mass-surveil
and then query at their convenience, without getting out of their comfortable chair. But of course, e2e is still
subject to surveillance - just at the endpoints, not conveniently in between.
Justice should be clean and accurate, not sloppy. Even if sloppy is easier and cheaper.
Closed source depends on vendor trust. Which is foolish, regardless of where your vendor is headquartered.
Open source is not just about the ability to modify or repurpose, but your ability to audit. Whether you, personally, do audit the code is less important than the possibility.
just curious why you think Power is so amazing.
to me, it's impressive what engineering IBM can still bring to bear, but the results are distinctly meh. sure, they occasionally get out in front on some micro-architectural metric, but differences that matter to real systems?
the only thing I can think of is Power's tight integration with Nvidia - really just a political thing. And who cares much about it? It is at best a marginal benefit for a very niche market (gold-plated HPC clusters).
real SMT, sure, but show me a widespread use-case where that's critical.
Power is still a failure.
Power never managed to escape the stink of single-source. Yes, that's still a major issue - just look at all the teeth-gnashing that results from depending too much on Intel.
Has Power ever suffered from anything but "why bother"? Are there actual OS/software gaps in the existing environment that cause problems? Buying RH might well fix them, but afaik, Power is unexceptional as just another one of the gazillion arches that Linux supports.
Intel's promise with Optane has been that it's NV and doesn't wear like flash (that is, it doesn't require a block erase whose endurance is a few hundred cycles.)
This product is pointlessly small, and certainly no faster than the many NVMe flash products on the market. But if its write endurance is extremely high, I guess that's a good sign - in the sense that, assuming Intel manages to make it 100x denser, it would at least have a write-endurance advantage over flash, if nothing else.
Pretty scummy of them to provide no real info, though. For instance, does it provide standard NVMe, or is it some other one-off interface? Obviously, being M.2 it's just a PCIe device, but perhaps only the Intel chipset recognizes it, and only uses it for caching.
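FWIW, if it does present standard NVMe, it should show up under the kernel's nvme class like any other controller; if it's chipset-only caching magic, it probably won't. A rough way to look (Linux, standard nvme sysfs attributes assumed):

#!/usr/bin/env python3
"""List NVMe controllers the kernel sees, as one crude way to answer
'does it present standard NVMe, or some one-off interface?'.
A sketch for Linux; attribute names are the standard nvme sysfs ones."""

from pathlib import Path

def list_nvme_controllers():
    root = Path("/sys/class/nvme")
    if not root.is_dir():
        return []
    ctrls = []
    for ctrl in sorted(root.iterdir()):
        model = (ctrl / "model").read_text().strip() if (ctrl / "model").exists() else "?"
        fw = (ctrl / "firmware_rev").read_text().strip() if (ctrl / "firmware_rev").exists() else "?"
        ctrls.append((ctrl.name, model, fw))
    return ctrls

if __name__ == "__main__":
    found = list_nvme_controllers()
    if not found:
        print("no standard NVMe controllers visible")  # a one-off interface wouldn't appear here
    for name, model, fw in found:
        print(f"{name}: model={model!r} firmware={fw!r}")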
I wonder why you think that - have you perhaps not been around servers much, especially real datacenters with decent power density?
It's routine to dissipate 300 W in a 1U server, so given the same airflow, a 5U box has a 1500 W budget, and the drives shown dissipate about 5 W when active...
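Back-of-envelope (the 300 W/U and ~5 W/drive figures are as above; the 60-drive count is just an assumed example for a dense 5U box):

# Back-of-envelope thermal budget for a dense drive chassis.
# 300 W per U and ~5 W per active drive are from the comment above;
# the 60-drive count is an assumed example, not a spec.
watts_per_u = 300
chassis_u = 5
drive_watts = 5
assumed_drives = 60

budget = watts_per_u * chassis_u           # 1500 W if airflow scales with height
drive_load = assumed_drives * drive_watts  # 300 W for the assumed drive count
print(f"budget {budget} W, drives {drive_load} W, headroom {budget - drive_load} W")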
the entire point of NVMe is latency and concurrency. how does mixing FC into the picture help with either? NVMe latency is currently in the <50 µs range, which is still pretty slow by IB standards, but what's the latency of FC fabrics? I have a hard time believing that FC, traditionally the domain of fat, slow enterprise setups, is suddenly going to drop 2-3 orders of magnitude in delivered latency.
although fat old enterprise bods might be comfortable with FC, it's completely obsolete: it has no advantages (cost, performance) over IB. I'd be much more interested if Mellanox (the only IB vendor) or Intel (the only IB-like vendor) started letting you tunnel PCIe over IB, so you could have a dumb PCIe backplane stuffed with commodity NVMe cards and one IB card, connecting to your existing IB fabric. That would require some added cleverness in the cards, but would actually deliver the kind of latency and concurrency (and scalability) that we require from flash.
The article doesn't make clear what's actually new about this: it appears to be just another blade chassis with the expected built-in san/lan networking.
What really puzzles me is why this sort of thing persistently appeals to vendors, when it's not at all clear that customers actually need it (let alone want it).
Obviously camp followers of the industry (like the Reg) need something to write about, but dis-aggregation of servers is, at this point, laughable. QPI is the fastest coherent fabric achievable right now, and it's not clear that Si photonics will change that in any way: latency is what matters, not bandwidth, and Si-p doesn't help there. PCIe is the fastest you can make a socket-oriented non-coherent fabric, and again, its main problem is latency, not bandwidth (though a blade chassis whose backplane was a giant PCIe switch might be interesting - and wouldn't require Si-p). 100Gb IB or Eth are the fastest scalable fabrics, but they don't really enter into this picture (they're certainly not fast enough to connect dis-aggregated cpus/memory/storage.)
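To put rough numbers on the latency-vs-bandwidth point (the figures below are assumptions for illustration, not measurements):

# Why more bandwidth doesn't rescue disaggregated memory: for a 64-byte
# cache line, the hop latency dominates, not the serialization time.
# Latency and rate numbers are rough assumptions, purely illustrative.
line_bytes = 64

def transfer_ns(latency_ns, gbit_per_s):
    serialize_ns = line_bytes * 8 / gbit_per_s  # ns, since 1 Gb/s == 1 bit/ns
    return latency_ns + serialize_ns

for label, lat_ns, bw in [("fabric hop, ~1 us, 100 Gb/s", 1000, 100),
                          ("same hop, 4x the bandwidth", 1000, 400),
                          ("10x lower latency, 100 Gb/s", 100, 100)]:
    print(f"{label:30s}: {transfer_ns(lat_ns, bw):8.1f} ns")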
Are Kinetic drives even available anywhere? If Seagate were smart, they'd be making them widely available to capture mindshare. I'd probably buy one, personally, just to have a chance to test it. Building a real facility from them would be fun. And there's a significant market: the server-based object-storage types still struggle to make the results fast and cheap (which is always the goal, after all.)
Seagate also needs to provide two Gb ports. A dual-port model not only matches the disk bandwidth better, it lets us design out single points of failure. It would be interesting to know whether a commodity 48-port Gb switch (with 2-4 10G uplinks) would deliver better performance than the usual SAS/expander backplane - even cheap switch hardware delivers line rate and impressively low latency.
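Rough numbers, assuming nominal line rates, 4 uplinks and a 4-lane SAS-2 wide port (the usual configs; a real test would be about latency and contention as much as raw bandwidth):

# Rough aggregate-bandwidth comparison: a commodity 48-port GbE switch
# (with 10G uplinks) versus a typical SAS-2 expander backplane.
gbe_ports, gbe_rate = 48, 1.0          # Gb/s per drive-facing port
uplinks, uplink_rate = 4, 10.0         # Gb/s per uplink (2-4 typical)
sas_lanes, sas_rate = 4, 6.0           # a 4-lane SAS-2 wide port, 6 Gb/s/lane

edge_bw = gbe_ports * gbe_rate         # 48 Gb/s at the drives
uplink_bw = uplinks * uplink_rate      # 40 Gb/s leaving the switch
sas_bw = sas_lanes * sas_rate          # 24 Gb/s shared by the whole expander

print(f"GbE edge: {edge_bw:.0f} Gb/s, uplink cap: {uplink_bw:.0f} Gb/s, "
      f"SAS wide port: {sas_bw:.0f} Gb/s")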
A Kinetic SSD would be pretty silly, though, unless the fabric were IB - and that wouldn't work well price-wise.
Buying COTS like Supermicro is a good idea, since it means you can replace/upgrade parts more easily (standard PSUs, standard boards, etc). But this post seems to be advocating that a bigger chassis is better, and that's just not true: you want to move air past your devices and out of the case, and bigger is not better. (It's also true that disks still don't dissipate much heat compared to CPUs.)
This work makes a lot of sense, because Flash is not going to challenge magnetic recording any time soon (in $/TB). Given that most data is quite cold, HAMR's emphasis on improving the write density is what the industry needs.
If, on the other hand, you live in a world where you only have modest amounts of hot data, you can simply ignore this.
I wonder who buys these damned things. Their price is astronomical, but you'll still need a cluster of them to avoid SPoF. How many companies need those kinds of IOPS and bandwidth? Sure, Amazon would, but they're smart enough to engineer distributed systems that scale and don't cost much. Something like NYSE or Visa/MasterCard? The latter would almost certainly follow the standard path like Amazon and others.
I was hoping you might discuss object storage for smaller *objects* - that would be interesting. As for an article about timid, half-hearted implementations of only a hundred disks or fewer - who cares?
It's easy to see how some workloads fit object storage well. It's much harder to see how it'll challenge the prevalence of normal filesystems, where files are often tiny. After all, object storage is just a filesystem that can't efficiently handle large files, and refuses to manage your metadata/namespace for you!
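To illustrate that point, here's a toy "object store" that is literally just a flat namespace hashed onto a filesystem - everything below is made up purely for illustration:

"""Toy illustration of the 'object store = flat namespace on a filesystem'
point: keys are hashed into a couple of fan-out directories so no single
directory gets huge, and any richer namespace/metadata is the caller's
problem. Purely illustrative; all names here are made up."""

import hashlib
from pathlib import Path

class ToyObjectStore:
    def __init__(self, root="/tmp/toy-objstore"):
        self.root = Path(root)

    def _path(self, key: str) -> Path:
        h = hashlib.sha256(key.encode()).hexdigest()
        # two levels of fan-out, then the full hash as the file name
        return self.root / h[:2] / h[2:4] / h

    def put(self, key: str, data: bytes) -> None:
        p = self._path(key)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_bytes(data)          # whole-object write: no partial updates

    def get(self, key: str) -> bytes:
        return self._path(key).read_bytes()

if __name__ == "__main__":
    store = ToyObjectStore()
    store.put("invoices/2015/04/0001", b"hello")
    print(store.get("invoices/2015/04/0001"))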
it's funny that people often go on about humidity control for datacenters. but the fact is that they are easy to keep at modest numbers (say 15-35%), which also happens to let you avoid both humidification and dehumidification. in most countries, you'd have to put some effort into driving the humidity down so low that static was an issue.
Integrity is easy - Paxos, Raft, etc - and it's not like you have to give up sensible, cheap, commodity features like ECC. It's only worth paying for "Enterprise" features if you can't do it the modern way for some reason: corporate culture, not being smart enough, superstition, etc. The only surprising thing here is how long it's taken the Enterprise culture to start withering away.
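For flavor, a toy read/write-quorum sketch - emphatically not Paxos or Raft, just majority quorums over commodity "nodes" to show that integrity is a software problem:

"""Toy read/write-quorum replication over commodity 'nodes'. This is NOT
Paxos or Raft - just majority quorums with per-key versions, for flavor."""

class Node:
    def __init__(self):
        self.store = {}              # key -> (version, value)
        self.up = True

class QuorumStore:
    def __init__(self, n=3):
        self.nodes = [Node() for _ in range(n)]
        self.quorum = n // 2 + 1     # majority

    def write(self, key, value):
        # naive version bump: one past the highest version any live node has
        version = 1 + max((n.store.get(key, (0, None))[0]
                           for n in self.nodes if n.up), default=0)
        acks = 0
        for n in self.nodes:
            if n.up:
                n.store[key] = (version, value)
                acks += 1
        if acks < self.quorum:
            raise RuntimeError("not enough live replicas for a quorum write")

    def read(self, key):
        replies = [n.store[key] for n in self.nodes if n.up and key in n.store]
        if len(replies) < self.quorum:
            raise RuntimeError("not enough live replicas for a quorum read")
        return max(replies)[1]       # highest version wins

if __name__ == "__main__":
    qs = QuorumStore(n=3)
    qs.write("config", "v1")
    qs.nodes[0].up = False           # lose a box; a majority still answers
    print(qs.read("config"))         # -> 'v1'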
When will we get the important performance numbers, such as rates and latency? A variant of IB with 100Gb is only incrementally interesting, but if it's lower latency, or cheaper, or can do cache coherency, that would be news. Similarly, putting 60 cores on a chip is not exactly news unless it's substantially different (remote cacheline put instruction? threads in the ISA proper?)
But that's actually not true: cloud systems require sysadmins, too. Basically, your sysadmin needs will always be proportional to your IT needs, regardless of whether you outsource the physical datacenter (which is all IaaS is...). If you think going Cloud means cutting staff, you're wrong. You might get rid of some box-monkeys when you outsource boxes, but they probably made minimum wage anyway (and each looked after hundreds of servers, so you had very few of them.)
I'm really curious what you think is difficult about HPC. Sure, there are a lot of details that contribute to a good cluster, but they're nothing magic. Manage reliability while containing cost. Choose enough but not too much cpu/memory/net/disk. Keep packages up-to-date but don't upset users with too much churn. These are all very straightforward ops things, nothing exotic.
tape-ism is a worldview. for instance, many people will say that it's not a real backup or archive if it's not offline (usually their justification is that mistake or malice can more easily kill an online "backup".) if you rarely recover from archive, that colors your expectations as well: you are rarely exercising the tape, so may have an unrealistic estimate of the actual, silent failure rate. obviously if you more frequently recover from archive, you'll be pained by tape's latency (probably offsite, but even libraries are slow relative to disk seeks.)
in reality, people who take tape seriously write two copies. once you plug that in - the price, the data rate, the space - and factor in environment-controlled storage (offsite, of course), plus the fact that tape drives are expensive, don't last very long, and normally need a separate spooling facility... wow, the costs do pile up.
it can probably still work well for very large, very sparsely-accessed storage. most people don't bite, though, and online, spinning storage for backup and archive really is the norm. simply being able to verify all your data is a powerful argument.
hmm, flash is rated for much less than a million writes per bit (3k for common MLC, for instance). of course, an ssd virtualizes that and covers the early failures using spare blocks. but it's completely mistaken to think that you can write an ssd a million times (fully, with incompressible/non-dedupable data).
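back-of-envelope, using the 3k MLC figure above and an assumed 1 TB drive with a write amplification of 2 (both assumptions, just for illustration):

# Back-of-envelope: how many *full-drive* writes an SSD can take.
# 3000 P/E cycles is the MLC figure from the comment; the 1 TB capacity
# and write amplification of 2 are assumed for illustration.
pe_cycles = 3000
capacity_tb = 1.0
write_amplification = 2.0   # incompressible, non-dedupable data

full_drive_writes = pe_cycles / write_amplification
total_tb_written = full_drive_writes * capacity_tb
print(f"~{full_drive_writes:.0f} full-drive writes, ~{total_tb_written:.0f} TB written")
# nowhere near "writing the ssd a million times"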
flash retention depends not only on erase-based wear of cells, but also on crosstalk-like degradation from operations on nearby cells (even reads). in principle, if you wrote data once to flash (archival, like most tape use), it would last on the order of 10 years. documentation of this seems fairly sparse, though, probably because that's not the main market. (all flash uses quite powerful ECC, which is fundamentally different from checksums...)
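to see the correction-vs-detection difference, a toy repetition code (real flash ECC is BCH/LDPC, not this - purely illustrative):

"""Minimal illustration of why ECC differs from a checksum: the toy
repetition code below can *correct* a single flipped bit, while a checksum
can only *detect* that something changed."""

import zlib

def ecc_encode(bits):
    return [b for b in bits for _ in range(3)]      # repeat each bit 3x

def ecc_decode(coded):
    out = []
    for i in range(0, len(coded), 3):
        votes = coded[i:i+3]
        out.append(1 if sum(votes) >= 2 else 0)     # majority vote corrects 1 flip
    return out

data = [1, 0, 1, 1]
coded = ecc_encode(data)
coded[4] ^= 1                                       # flip one stored bit
assert ecc_decode(coded) == data                    # the "ECC" recovers the data

payload = bytes(data)
crc_before = zlib.crc32(payload)
corrupted = bytes([payload[0] ^ 1]) + payload[1:]
print("checksum detects corruption:", zlib.crc32(corrupted) != crc_before)
print("but it cannot tell you which bit to fix, let alone fix it")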
many people would not share your confidence in tape's retention rate. it could be that we've all been warped by the horrible performance of old generations of tape, but then again, that was always the explanation. (verify-after-write was a game-changing tape technology, for instance.)
don't read gamer reviews of Intel vs AMD power consumption and then draw conclusions about either HPC or webscale applications. these are throughput boxes, where the workload is embarrassingly parallel and (for webscale at least) not flops-heavy. such servers are simply never idle, for instance (or they're being used wrong).