Posts by Mark Hahn

59 publicly visible posts • joined 14 Apr 2007


Square Kilometre Array precursor looks to filter out satellite interference

Mark Hahn

Re: By the time SKA is finished

why not a free-floating constellation? if we were already good at going to the moon and building there, sure, a lunar site would make sense - but we're not.

suppose you had a bunch of fairly cheap (perhaps even non-maneuvering) probes in a constellation some distance from Earth - would screening out terrestrial RF really be a problem? such a constellation might even be able to do additional science (would LIGO-style interferometry be out of reach?)

Mark Hahn

Re: 41k loc in bash scripts

really? I've never heard of a site with less than a 4h job limit (and as a greybeard sysadmin, I'd call even that crazy low - even 1d is).

You're not imagining things – USB memory sticks are getting worse

Mark Hahn

Re: ValiDrive

UASP is very much not new. It's found on pretty much any usb-to-sata or -nvme adapter.

But the usb-stick market isn't sophisticated enough to care, so you mostly won't find it there: no small/cheap/mass-market sticks support it, and it's surprisingly hard to find any that do.

Yes, if it supports UASP, it's almost certainly also got trim and smart. There's no such thing for usb storage that only supports BOT transfers.
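On Linux you can tell which protocol a stick actually negotiated by checking which kernel driver bound it - a minimal sketch, assuming the usual sysfs layout:

```python
# Minimal sketch (Linux-only; assumes the usual sysfs layout): list USB
# storage interfaces and whether the kernel bound the "uas" (UASP)
# driver or the legacy "usb-storage" (BOT) driver to each one.
import glob
import os

for iface in glob.glob("/sys/bus/usb/devices/*:*"):
    driver_link = os.path.join(iface, "driver")
    if not os.path.islink(driver_link):
        continue
    driver = os.path.basename(os.readlink(driver_link))
    if driver in ("uas", "usb-storage"):
        mode = "UASP" if driver == "uas" else "BOT only"
        print(f"{os.path.basename(iface)}: {driver} ({mode})")
```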

FBI Director: FISA Section 702 warrant requirement a 'de facto ban'

Mark Hahn

LEOs so often confuse two very different things: their own convenience and the privacy of citizens.

He says that getting a warrant would be too hard and slow. But that's exactly what the 4th Amendment is for!

They make the same argument against e2e encryption: that they wouldn't be able to mass-surveil and then query at their convenience, without getting out of their comfortable chair. But of course, e2e is still subject to surveillance - just at the endpoints, not conveniently in between.

Justice should be clean and accurate, not sloppy. Even if sloppy is easier and cheaper.

Micron joins the CXL 2.0 party with a 256GB memory expander

Mark Hahn

latency, latency, latency

Yet another article about CXL that doesn't mention latency.

You know, the reason you have RAM in the first place. To quote some random yoohoo, bandwidth is easy but latency is hard.
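A back-of-envelope sketch of the cost (both latency figures are assumptions, not vendor numbers):

```python
# back-of-envelope only; both latencies are assumed:
# local DDR ~100 ns, CXL-attached ~300 ns load-to-use.
LOCAL_NS = 100.0
CXL_NS = 300.0

for cxl_frac in (0.0, 0.10, 0.25, 0.50):
    avg = (1 - cxl_frac) * LOCAL_NS + cxl_frac * CXL_NS
    print(f"{cxl_frac:4.0%} of accesses on CXL -> "
          f"avg {avg:.0f} ns ({avg / LOCAL_NS:.2f}x local DRAM)")
```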

Petaflops help scientists understand why some COVID-19 variants are more contagious

Mark Hahn

Doesn't take much to bring out the conspiracy kooks, especially considering how common this sort of simulation study is.

Mark Hahn

Re: Hu Knows

Just curious what country you're talking about.

This startup reckons its chiplet interconnect tech can best Intel, TSMC

Mark Hahn

Amazing

This article manages to include a lot of words, but somehow completely avoids defining what's different about the startup's tech.

Dump these small-biz routers, says Cisco, because we won't patch their flawed VPN

Mark Hahn

Re: Hard-/Software expiration date

There's nothing edgy about this kind of hardware: off-the-shelf microcontrollers and commodity switch chips.

Can you compose memory across a HPC cluster? Yes. Yes you can

Mark Hahn

Wake me when composable memory is 50 ns, maybe 100. If it's microseconds, you're better off with explicit messaging.

US distrust of Huawei linked in part to malicious software update in 2012

Mark Hahn

Open source is the only answer

Closed source depends on vendor trust. Which is foolish, regardless of where your vendor is headquartered.

Open source is not just about the ability to modify or repurpose, but your ability to audit. Whether you, personally, do audit the code is less important than the possibility.

Official: IBM to gobble Red Hat for $34bn – yes, the enterprise Linux biz

Mark Hahn

just curious why you think Power is so amazing.

to me, the engineering IBM can still bring to bear is impressive, but the results are distinctly meh. sure, they occasionally get out in front on some micro-architectural metric. but differences that matter to real systems?

the only thing I can think of is Power's tight integration with Nvidia - really just a political thing. and who cares? it's at best a marginal benefit for a very niche market (gold-plated HPC clusters).

real SMT, sure, but show me a widespread use-case where that's critical.

Mark Hahn

Power is still a failure.

Power never managed to escape the stink of single-source. Yes, that's still a major issue - just look at all the teeth-gnashing that results from depending too much on Intel.

Has Power ever suffered from anything but "why bother"? Are there actual OS/software gaps in the existing environment that cause problems? Buying RH might well fix them, but afaik, Power is unexceptional - just another of the gazillion arches that Linux supports.

Do Optane's prospects look DIMM? Chip chap has questions for Intel

Mark Hahn

Re: Loads on memory bus still a concern?

if it's a bus, it slows down with more loads.

Lazy parent Intel dumps Lustre assets on HPC storage bods DDN

Mark Hahn

I wonder if DDN has the wisdom to keep Lustre open. If they try to squeeze revenue from it, they'll just kill the project entirely...

Industry whispers: Qualcomm mulls Arm server processor exit

Mark Hahn

you understand that spectre/meltdown are about speculative execution, which happens on high-performance ARM cores as well, right?

Mark Hahn

Re: Why should ARM Holdings help?

idle cores are a failure of system design, architecture, sales, marketing.

it's a fine idea to shut down embedded cores (phones, laptops, desktops, IoT), but the WHOLE point of cloud is to pool customers at such scale that all cores stay utilized.

Regular or premium? Intel pumps out Optane memory at CES

Mark Hahn

Intel's promise with Optane has been that it's NV and doesn't wear like flash (that is, it doesn't require a block erase whose endurance is a few hundred cycles.)

This product is pointlessly small, and certainly no faster than the many NVMe flash products on the market. But if its write endurance is extremely high, I guess that's a good sign - in the sense that, assuming Intel manages to make it 100x denser, it would have at least a write-endurance advantage over flash.

Pretty scummy of them to provide no real info, though. For instance, does it provide standard NVMe, or is it some other one-off interface? Obviously, being M.2 it's just a PCIe device, but perhaps only the Intel chipset recognizes it, and only uses it for caching.

Grab an ARMful: OpenIO's Scale-out storage with disk drive feel

Mark Hahn

Re: Cooling?

I wonder why you think that - have you perhaps not been around servers much, especially real datacenters with decent power density?

It's routine to dissipate 300W in a 1u server, so given the same airflow, a 5u box has a 1500W budget, and the drives shown dissipate about 5W when active...
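Spelling out that arithmetic (the assumption being the same per-U airflow, and thus power budget, as a typical 1U server):

```python
# the arithmetic from above; the per-U budget is the assumption.
watts_per_u = 300        # routine 1U dissipation
chassis_u = 5
drive_active_w = 5       # per-drive active dissipation

budget_w = watts_per_u * chassis_u     # 1500 W for the 5U box
print(budget_w, "W budget ->", budget_w // drive_active_w, "drives' worth")
```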

Well, FC-NVMe. Did this lightning-fast protocol just get faster?

Mark Hahn

show us the numbers, not the marketing slobber

the entire point of NVMe is latency and concurrency. how does mixing FC into the picture help? NVMe latency is currently in the <50 us range, which is still pretty slow by IB standards, but what's the latency of FC fabrics? I have a hard time believing that FC, traditionally the domain of fat, slow enterprise setups, is suddenly going to drop 2-3 orders of magnitude in its delivered latency.

although fat old enterprise bods might be comfortable with FC, it's completely obsolete: it has no advantages (cost, performance) over IB. I'd be much more interested if Mellanox (the only IB vendor) or Intel (the only IB-like vendor) started letting you tunnel PCIe over IB, so you could have a dumb PCIe backplane stuffed with commodity NVMe cards and one IB card, connecting to your existing IB fabric. That would require some added cleverness in the cards, but would actually deliver the kind of latency and concurrency (and scalability) that we require from flash.
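a quick sketch of why the fabric term dominates - every figure below is an assumed round trip, not a measurement:

```python
# sketch only: all latencies assumed for illustration.
device_us = 20.0                       # assumed NVMe flash device latency
fabrics_us = {
    "local PCIe": 1.0,
    "IB/RDMA": 2.0,
    "legacy FC array path": 500.0,
}

for name, fabric in fabrics_us.items():
    total = device_us + fabric
    print(f"{name:>21}: {total:6.1f} us total "
          f"({fabric / total:.0%} of it in the fabric)")
```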

Roll over Beethoven: HPE Synergy compositions oughta get Meg singing for joy

Mark Hahn

Just another blade chassis, no?

The article doesn't make clear what's actually new about this: it appears to be just another blade chassis with the expected built-in san/lan networking.

What really puzzles me is why this sort of thing persistently appeals to vendors, when it's not at all clear that customers actually need it (let alone want it).

Obviously camp followers of the industry (like the Reg) need something to write about, but dis-aggregation of servers is, at this point, laughable.

QPI is the fastest coherent fabric achievable right now, and it's not clear that Si photonics will change that in any way: latency is what matters, not bandwidth, and Si-p doesn't help there. PCIe is the fastest you can make a socket-oriented non-coherent fabric, and again, its main problem is latency, not bandwidth (though a blade chassis whose backplane was a giant PCIe switch might be interesting - and wouldn't require Si-p). 100Gb IB or Eth are the fastest scalable fabrics, but they don't really enter into this picture: they're certainly not fast enough to connect dis-aggregated cpus/memory/storage.
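A rough latency ladder makes the point (order-of-magnitude assumptions, not measurements):

```python
# order-of-magnitude assumptions for one remote access; the point is
# the gap between coherent fabrics and anything that actually scales.
ladder_ns = {
    "local DRAM": 100,
    "QPI (remote socket)": 150,
    "PCIe round trip": 1_000,
    "100Gb IB/Eth RDMA": 2_000,
}

for fabric, ns in ladder_ns.items():
    print(f"{fabric:>19}: {ns:6d} ns ({ns / 100:.0f}x local DRAM)")
```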

Mark Hahn

Re: Definitely wrong

"like HPC"!?! HPC is precisely where the ideal is every node connected to a fully non-blocking fabric: a megachassis like this would need a big bundle of uplinks.

OpenIO wants to turn your spinning rust into object storage nodes

Mark Hahn

Any Kinetic drives in the wild?

Are Kinetic drives even available anywhere? If Seagate were smart, they'd be making them widely available to capture mindshare. I'd probably buy one, personally, just to have a chance to test it. Building a real facility from them would be fun. And there's a significant market: the server-based object-storage types still struggle to make the results fast and cheap (which is always the goal, after all.)

Seagate also needs to provide two Gb ports. A dual-port model not only matches the disk bandwidth better, it lets us design out single points of failure. It would be interesting to know whether a commodity 48-port Gb switch (with 2-4 10G uplinks) would deliver better performance than the usual SAS/expander backplane. Even cheap switch hardware delivers line rate and impressively low latency.
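The bandwidth matching is simple arithmetic; the streaming rate below is an assumption for a modern capacity drive:

```python
# why two gigabit ports roughly match one disk (figures assumed).
port_mbs = 1000 / 8          # 1 Gb/s ~ 125 MB/s, ignoring protocol overhead
ports = 2
disk_stream_mbs = 180        # assumed outer-track streaming rate

net_mbs = ports * port_mbs   # 250 MB/s aggregate
print(net_mbs, ">=", disk_stream_mbs, "->", net_mbs >= disk_stream_mbs)
```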

Kinetic SSDs would be pretty silly, though, unless the fabric were IB - and that wouldn't work well price-wise.

Sick of storage vendors? Me too. Let's build the darn stuff ourselves

Mark Hahn

Re: Two reasons for buying

Buying COTS like Supermicro is a good idea, since it means you can replace/upgrade parts more easily (standard PSUs, standard boards, etc). However, this post seems to advocate that bigger chassis are better, and that's just not true: you want to move air past your devices and out of the case, and bigger doesn't help. (It's also true that disks still don't dissipate much heat compared to CPUs.)

Seagate ready for the HAMR blow: First drives out in 2017

Mark Hahn

This work makes a lot of sense, because Flash is not going to challenge magnetic recording any time soon (in $/TB). Given that most data is quite cold, HAMR's emphasis on improving the write density is what the industry needs.

If, on the other hand, you live in a world where you only have modest amounts of hot data, you can simply ignore this.

Cisco should get serious about storage and Chuck some cash about

Mark Hahn

Cisco and Oracle would be perfect for each other. Both companies cater to the "it costs more so it's better" segment of the PHB/enterprise market.

Whip out your blades: All-flash Isilon scale-out bruiser coming

Mark Hahn

I wonder who buys these damned things. Their price is astronomical, but you'll still need a cluster of them to avoid SPoF. How many companies need those kinds of IOPS and bandwidth? Sure, Amazon would, but they're smart enough to engineer distributed systems that scale and don't cost much. Something like NYSE or Visa/MasterCard? The latter would almost certainly follow the standard path like Amazon and others.

No objections to object stores: Everyone's going smaller and faster

Mark Hahn

but why?

I was hoping you might discuss object storage for smaller *objects* - that would be interesting. But an article about timid, half-hearted deployments of only a hundred disks or less - who cares?

It's easy to see how some workloads fit object storage well. It's much harder to see how it'll challenge the prevalence of normal filesystems, where files are often tiny. After all, object storage is just a filesystem that can't efficiently handle large files, and refuses to manage your metadata/namespace for you!

Toshiba and Samsung both ponder opening new 3D flash fabs

Mark Hahn

Your explanation of 3d flash is exactly wrong: it's not just layers of planar flash. 3d NAND is built with much coarser lithography and vertical strings of cells, which is precisely why it wins on cost.

Muted HAMR blow from Seagate: damp squib drive coming in 2016

Mark Hahn

right. people who rave about flash being the death of hdd always seem to forget that litho and hd platters are both 2d, and therefore follow the same moore's-like law: shrinks give exponential effect.

The data centre design that lets you cool down – and save electrons

Mark Hahn

Re: Sooo out of date!

it's funny that people often go on about humidity control for datacenters. but the fact is that humidity is easy to keep at modest levels (say 15-35% RH), which also happens to let you avoid both humidification and dehumidification. in most climates, you'd have to put some effort into driving the humidity low enough for static to become an issue.

Mark Hahn

Re: Three pages

and say silly things like "every DC has a PUE of >= 2" (penultimate paragraph).

Mark Hahn

Re: Just wondering

no, 12-15 kW/rack is no problem with air.

HP's great cloud server cattle roundup with Foxconn begins

Mark Hahn

Re: So uptime sometimes doesn't matter. Nor does data integrity. Sometimes.

Integrity is easy - paxos, raft etc: it's not like you have to give up sensible, cheap, commodity features like ECC. It's only worth paying for "Enterprise" features if you can't do it the modern way for some reason: corporate culture, not smart enough, superstition, etc. The only surprising thing here is how long it's taken the Enterprise culture to start withering away.

Chipzilla spawns 60-core, six-teraflop Xeon Phi MONSTER CHIP

Mark Hahn

When will we get the important performance numbers, such as rates and latency? A variant of IB with 100Gb is only incrementally interesting, but if it's lower latency, or cheaper, or can do cache coherency, that would be news. Similarly, putting 60 cores on a chip is not exactly news unless it's substantially different (remote cacheline put instruction? threads in the ISA proper?)

HDS embiggens its object array by feeding it more spinning rust

Mark Hahn

I cannot understand who gives a damn about this stuff unless it achieves a reasonable price. The basic hardware costs $100-200/TB, so what does this cost? Or is it just another phallic substitute to enhappify the costs-a-lot-so-must-be-good crowd?

Dumping gear in the public cloud: It's about ease of use, stupid

Mark Hahn

Re: We're doomed I tell you....

But that's actually not true: cloud systems require sysadmins, too. Basically, your sysadmin needs will always be proportional to your IT needs, regardless of whether you outsource the physical datacenter (which is all IaaS is...). If you think going Cloud means cutting staff, you're wrong. You might get rid of some box-monkeys when you outsource boxes, but they probably make minimum wage anyway (and each looked after hundreds of servers, so you had very few of them.)

My other supercomputer is a Lenovo: What IBM System x sale means for HPC

Mark Hahn

Re: IBM will slide further down

Why is the mustard so hard to cut? Do you mean "that customer" is just pathologically risk-averse?

Mark Hahn

Re: Skills?

I'm really curious what you think is difficult about HPC. Sure, there are a lot of details that contribute to a good cluster, but they're nothing magic. Manage reliability while containing cost. Choose enough but not too much cpu/memory/net/disk. Keep packages up-to-date but don't upset users with too much churn. These are all very straightforward ops things, nothing exotic.

Tape rocks for storage - if you don't need to, um, access your data

Mark Hahn

tape-ism is a worldview. for instance, many people will say that it's not a real backup or archive if it's not offline (usually the justification is that a mistake or malice can more easily kill an online "backup"). if you rarely recover from archive, that colors your expectations as well: you're rarely exercising the tape, so you may have an unrealistic estimate of the actual, silent failure rate. obviously, if you recover from archive more often, you'll be pained by tape's latency (probably offsite, but even libraries are slow relative to disk seeks.)

in reality, people who take tape seriously write two copies. once you plug that in - the price, the data rate, the space - and factor in environment-controlled storage (offsite, of course), plus the fact that tape drives are expensive, don't last very long, and normally need a separate spooling facility... wow, the costs pile up.

it can probably still work well for very large, very sparsely-accessed storage. most people don't bite, though, and online, spinning storage for backup and archive really is the norm. simply being able to verify all your data is a powerful argument.

Mark Hahn

Re: Longevity of SSD as a medium

hmm, flash is rated for much less than a million writes per bit (3k for common MLC, for instance). of course, the ssd virtualizes that and covers early failures using spare blocks. but it's completely mistaken to think that you can write an ssd a million times (fully, with incompressible/non-dupe data).
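the arithmetic, with assumed figures (3k-cycle MLC, a write amplification of 2):

```python
# rough sketch: how many *full* writes an ssd can absorb. all inputs
# assumed: 3k-cycle MLC, write amplification of 2 on incompressible data.
pe_cycles = 3_000
write_amp = 2.0

full_drive_writes = pe_cycles / write_amp
print(full_drive_writes)     # ~1500 full writes - nowhere near a million
```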

Mark Hahn

Re: Longevity of SSD as a medium

flash retention depends not only on erase-based wear of cells, but also on crosstalk-like degradation from operations on nearby cells (even reads). in principle, if you wrote data once to flash (archival, like most tape use), it would last on the order of 10 years. documentation of this seems fairly sparse, though, probably because that's not the main market. (all flash uses quite powerful ECC, which is fundamentally different from checksums...)

many people would not share your confidence of the retention rate for tape. it could be that we've all been warped by horrible performance of old generations of tape, but then again, that was always the explanation. (verify-after-write was a game-changing tape technology, for instance.)

SeaMicro acquisition: A game-changer for AMD

Mark Hahn

Re: What is AMD up to?

don't read gamer reviews of intel vs amd power consumption and then draw conclusions about either HPC or webscale applications. these are throughput boxes, where the workload is embarrassingly parallel and (for webscale at least) not flops-heavy. such servers are simply never idle, for instance (or they're being used wrong).

WD fattens up S25 with third juicy platter

Mark Hahn

Re: fuzzy math?

that's correct: "enterprise" disks only ever use quite narrow bands of the outer part of the disk, since that gives the lowest latency. these disks are sold on iops, not bandwidth. (which is why, more than ever, they sell to a shrinking niche market. think SSD...)
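a toy model of the seek benefit, all figures assumed:

```python
# toy model: over a uniformly accessed band the average seek distance
# is ~1/3 of the band width, so shrinking the band shrinks the average
# seek proportionally. figures below are assumed.
for band in (1.0, 0.2):      # full stroke vs outer 20% only
    avg_seek = band / 3
    print(f"band {band:.0%}: avg seek distance {avg_seek:.1%} of full stroke")
```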

SUPERCOMPUTER vs your computer in bang-for-buck battle

Mark Hahn

uh, cloud is expensive

you know Amazon's profit margin is HUGE, right?

Mark Hahn

Re: Accuracy of results

whohasthefastestcomputer.com is just a flash plugin - it bears very little relationship to the true speed of the computer it runs on, and is totally unrelated to HPL.

Japanese boffins fire up 802 teraflops ceepie-geepie

Mark Hahn

Re: Let me overclock it plz :D

HPC doesn't generally overclock for two main reasons. first, overclocking is, by definition, running the system outside of spec. unless the specs were stupid, that means less reliable or robust - higher FIT, etc. second, overclocking dramatically increases power dissipation, and operating at scale means optimizing for performance/power, which means a strong preference for lower clocks.
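the power half is easy to quantify: dynamic power scales roughly with f*V^2, and higher clocks need higher voltage. illustrative numbers only:

```python
# dynamic power ~ C * V^2 * f, and higher f needs higher V;
# the ratios below are purely illustrative.
def rel_power(f_ratio: float, v_ratio: float) -> float:
    return f_ratio * v_ratio ** 2

f, v = 1.2, 1.1              # +20% clock, +10% voltage (assumed)
p = rel_power(f, v)
print(f"power x{p:.2f} for at best x{f:.2f} performance "
      f"-> perf/W x{f / p:.2f}")
```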

Amazon cloud double fluffs in 2011

Mark Hahn

speculate on AWS margins?

I was looking at AWS prices recently, and even comparing to retail prices for servers, space, power, networking, I don't see how AWS could run at less than 20x markup. that's pretty amazing, even compared to, oh, say Apple. could it be that AWS gives incredibly steep discounts to large customers? or could they have some kind of exorbitant hidden costs?

AWS costs between $250 and $700 per year per ECU; purchasing your own servers, running them for 3 years, and throwing them away will cost you somewhere around $50/ECU-year. if you get hardware at wholesale and build/operate your own datacenters, the cost is probably close to half that.
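spelling out the markup arithmetic with the rough estimates above:

```python
# markup arithmetic using the rough estimates above (none measured).
aws_per_ecu_year = (250, 700)     # $/ECU-year, AWS list
diy_retail = 50                   # $/ECU-year, retail servers run 3 years
diy_wholesale = 25                # ~half that at wholesale, own datacenter

for aws in aws_per_ecu_year:
    print(f"${aws}/ECU-yr: {aws / diy_retail:.0f}x retail DIY, "
          f"{aws / diy_wholesale:.0f}x wholesale DIY")
```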

Hot Intel teraflops MIC coprocessor action in a hotel

Mark Hahn

yuck

Hazra needs to work on his rhetoric. simply claiming pcie3 is "necessary" makes him laughable - a simple appeal to authority. _why_ is it necessary? show us the numbers demonstrating realistic cases where it helps.

the best examples I can think of are high-end IB and some kinds of IO-intensive GP-GPU codes. failing to provide an actual example, he looks like a marketing weasel.

Google: SSL alternative won't be added to Chrome

Mark Hahn

exactly (not)

if an attacker so 0wns your network that they control DNS and can MITM all traffic, you're basically screwed. but this doesn't mean you need to cache everything - just the root certs. and those should be updated via your OS's standard update mechanism (after all, you have to trust them just as much as you have to trust your kernel, tcp stack, etc)

this is really the way it should always have been - separating ssl from domain mechanisms was just a historical oddity.

the big change here is that the current nasty, parasitic SSL-cert industry goes away. lots of them won't be happy. no customers will regret this though.
