DataSlide reinvents hard drive

BillT

@Charles:

Let me see if I now understand your product at least reasonably well:

80 GB is about 160 million sectors at the currently standard sector size of 512 bytes (granted, that may change to 4 KB before long, but if you're sampling now that's what you're stuck with in most environments). At your stated one head per sector, that's 160 million heads (80 million per side if you're using two-sided media). Given the cost of current conventional disk heads that would be a show-stopper in itself, so you must obviously have decreased this cost by many orders of magnitude (certainly by being able to manufacture heads in huge batches per chip, and possibly aided by the significantly different mechanical environment in which yours operate).
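
For concreteness, here's the back-of-the-envelope arithmetic behind that head count (a rough sketch in Python; the 80 GB capacity, 512-byte sectors, and one-head-per-sector layout are as discussed above, and treating 80 GB as a decimal 80e9 bytes is my assumption):

    # Head-count estimate: 80 GB, 512-byte sectors, one head per sector (per the discussion above)
    capacity_bytes = 80e9
    sector_bytes = 512
    sectors = capacity_bytes / sector_bytes      # ~1.56e8, i.e. roughly 160 million
    heads_total = sectors                        # one head per sector
    heads_per_side = heads_total / 2             # ~80 million per side for two-sided media
    print(f"{sectors / 1e6:.0f} million sectors, {heads_per_side / 1e6:.0f} million heads per side")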

You don't need perfect 2 in x 2.5 in head plates or media surfaces, since it really won't matter much if even many thousands of heads per plate (and/or many thousands of sectors per surface) are defective as long as you detect them and map them out before shipping the product.

If the amplitude of your oscillation is 100 microns as you stated, and you must fit 5K - 10K bits (one 512-byte sector plus overhead and gaps) into that space, that's a linear density of 10 - 20 nm per bit, which is comparable to linear densities on the densest contemporary conventional drives and approaches the size of a single magnetic grain on the media (perhaps your different mechanical environment makes this density easier to achieve). You could fit 660 such sectors along the 2.5-inch long dimension of your plate and would then need about 120,000 'tracks' across the 2-inch dimension (60,000 tpi, which is only about 1/4 that of the densest contemporary conventional drives), resulting in track spacing of about 0.40 microns - perhaps consistent with your statement above that the heads are manufactured 'at micron feature size'.
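
Spelling out that density arithmetic (again a rough sketch; the 100-micron stroke, 2 in x 2.5 in plate, and ~660 sectors per track are the figures used above, while the 5K - 10K bits per sector is my own estimate of data plus overhead):

    # Linear density and track pitch implied by the figures above
    stroke_um = 100
    for bits_per_sector in (5_000, 10_000):          # 512 data bytes plus encoding overhead and gaps
        print(f"{stroke_um * 1000 / bits_per_sector:.0f} nm per bit")   # 20 nm and 10 nm
    sectors_per_side = 80e6                          # half of the ~160 million total
    sectors_per_track = 660                          # ~63,500 um of plate length / ~100 um per sector
    tracks = sectors_per_side / sectors_per_track    # ~121,000 across the 2-inch dimension
    tpi = tracks / 2.0                               # ~60,000 tracks per inch
    track_pitch_um = 2 * 25_400 / tracks             # ~0.42 microns
    print(f"{tracks:.0f} tracks, {tpi:.0f} tpi, {track_pitch_um:.2f} um pitch")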

What the above (particularly the relationship to media grain size) seems to imply is that you'll need to get any short-term density improvements almost wholly from tpi increases and will be lucky to approach a factor of 10 there even with significant improvements in lithography (in contrast to the factor of 25 which you hope for). This means that you may remain at a significant rack density disadvantage when compared with conventional disks (especially when comparing against 2.5" conventional disks, where your power advantages become less significant as well).
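
To put numbers on that (a sketch only; the factor-of-10 tpi headroom and the hoped-for factor of 25 are the figures from the paragraph above):

    # If linear density is grain-limited, capacity scales roughly with tpi alone
    current_capacity_gb = 80
    tpi_headroom = 10                 # optimistic lithography-driven improvement
    hoped_for_factor = 25             # the overall improvement you're hoping for
    print(current_capacity_gb * tpi_headroom)        # ~800 GB achievable from tpi increases alone
    print(current_capacity_gb * hoped_for_factor)    # ~2,000 GB hoped for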

As long as your side bearings don't allow side-to-side movement beyond a few dozen nm you don't need head alignment perfection in the head plates either, since each head sees only the portion of the media surface which belongs to it (but you do need to keep the heads adequately separated - in both dimensions). The side-to-side tolerance becomes tighter commensurately with the tpi increases mentioned above.

On the IOPS front, with 64 heads accessible in parallel on each of two surfaces, you can achieve 128,000 random sector transfers per second at 1000 oscillations per second. Does the 160,000 IOPS claimed in the article mean that you can transfer in both stroke directions of a cycle?
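
The raw numbers, for reference (a sketch assuming the 64 parallel heads per surface and 1000 Hz oscillation discussed above):

    # Raw sector-transfer rate per second
    heads_parallel = 64 * 2                       # both surfaces
    oscillations_per_sec = 1000
    iops_one_direction = heads_parallel * oscillations_per_sec     # 128,000
    iops_both_directions = 2 * iops_one_direction                  # 256,000 if both strokes transfer
    print(iops_one_direction, iops_both_directions)                # the claimed 160,000 falls between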

The IOPS that you describe are not, however, directly comparable to conventional disk IOPS:

1. Disk IOPS are normally measured using 4 KB transfers. If you used 4 KB transfers, that would decrease your IOPS by a factor of 8 (see the scaling sketch after this list).

2. Real-world disk IOPS often involve even larger random transfers (e.g., 8 KB or 32 KB for older Oracle databases, and quite possibly larger for newer versions or other contemporary applications). Again, that would cut your real-world IOPS commensurately for such applications.

3. Disk IOPS improve with queuing depth with only a sub-linear increase in latency. Your IOPS improve far less with queuing depth and your latency increases linearly.
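
Here's roughly how those raw sector transfers scale down at conventional transfer sizes (a sketch; the 128,000 single-direction figure is from above, and the transfer sizes are typical values, not anything you've published):

    # Scaling the raw sector IOPS to conventional transfer sizes
    raw_sector_iops = 128_000
    for xfer_kb in (4, 8, 32):
        sectors_per_xfer = xfer_kb * 1024 // 512
        print(f"{xfer_kb} KB transfers: ~{raw_sector_iops // sectors_per_xfer} IOPS")
    # 4 KB: ~16,000   8 KB: ~8,000   32 KB: ~2,000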

In other words, you do have a dramatic advantage over a conventional disk in the IOPS area but you're currently overstating it significantly.

At 1000 oscillations per second and 64 heads in parallel on each of two surfaces you can transfer 64 MB/sec using 512-byte sectors (or 128 MB/sec if you can read/write on both stroke directions of the cycle). So how do you achieve the 500 MB/sec figure that the article claims?
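
The bandwidth arithmetic, using the same assumed parameters as above (decimal megabytes here, versus the binary-megabyte 64/128 MB/sec figures in the text):

    # Sequential bandwidth implied by the oscillation rate and parallel head count
    heads_parallel = 64 * 2
    oscillations_per_sec = 1000
    sector_bytes = 512
    bw_one_direction = heads_parallel * oscillations_per_sec * sector_bytes   # ~65.5 MB/s
    bw_both_directions = 2 * bw_one_direction                                 # ~131 MB/s
    print(bw_one_direction / 1e6, bw_both_directions / 1e6)                   # vs. the claimed 500 MB/s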

You quote a latency of 0.5 ms., but it's not clear what that's supposed to mean when compared to what latency means for conventional disks. Your best possible access time for a request is clearly 1/2 cycle (0.5 ms.). Your worst possible access time for a request (in the absence of queuing) would be 1.5 cycles (1.5 ms.) if a) you can read/write in only one direction, b) you can start the transfer only at the start of the sector, and c) the request hits you just after you've passed the sector start, resulting in an average access time of 1 cycle (1 ms.). If you can read/write in both directions *and* start a transfer anywhere within the sector, the worst-case access time drops to 1 cycle (1 ms.), resulting in an average access time of 0.75 cycle (0.75 ms.).
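
The access-time cases laid out above, for a 1 ms oscillation period (a sketch of my own reasoning, not anything you've published; the averages assume requests arrive uniformly over the cycle):

    # Access-time cases at 1000 oscillations per second (1 ms period)
    period_ms = 1.0
    best_ms = 0.5 * period_ms                     # sector arrives as soon as it possibly can
    worst_one_dir_ms = 1.5 * period_ms            # one direction, sector-start-only, just missed it
    avg_one_dir_ms = (best_ms + worst_one_dir_ms) / 2       # 1.0 ms
    worst_both_dir_ms = 1.0 * period_ms           # both directions usable, start anywhere in the sector
    avg_both_dir_ms = (best_ms + worst_both_dir_ms) / 2     # 0.75 ms
    print(best_ms, worst_one_dir_ms, avg_one_dir_ms, worst_both_dir_ms, avg_both_dir_ms)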

Incidentally, there's nothing that MRAM can do for that latency, and any effect that it has on *perceived* average latency can be applied to disks as well - more effectively, in fact, since their actual latency is so much higher.

And your suggestion that your two-dimensional medium layout is somehow 'architecturally useful' to SOL (SQL?) and the relational calculus sounds like pure poppycock, but I'd be happy to listen to you explain why that might not be the case.

The bottom line appears to be that your product offers about 1/10th the random-access latency and 10 to 100 times the practical IOPS of a conventional enterprise drive (or 1/20 the random-access latency and 20 - 200 times the practical IOPS of a conventional SATA drive) with comparable bandwidth, rack density, and power consumption (especially when compared with 2.5" conventional drives).
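
For what it's worth, here's roughly how I arrive at those ratios (the conventional-drive figures are my assumed typical values - about 5 ms and 200 IOPS for an enterprise drive, 10 ms and 80 IOPS for a 7200 rpm SATA drive - not anything from the article):

    # Reconstructing the latency and practical-IOPS ratios with assumed drive figures
    dataslide_latency_ms = 0.5                     # the quoted figure
    dataslide_practical_iops = (2_000, 16_000)     # 32 KB down to 4 KB transfers, from the sketch above
    for name, latency_ms, iops in (("enterprise", 5.0, 200), ("SATA", 10.0, 80)):
        print(name,
              f"latency ratio ~1/{latency_ms / dataslide_latency_ms:.0f},",
              f"IOPS ratio ~{dataslide_practical_iops[0] / iops:.0f}x to {dataslide_practical_iops[1] / iops:.0f}x")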

That means that for capacity- and bandwidth-driven applications you'd need to sell your current 80 GB units for well under $10 apiece to compete with SATA drives (which are currently running about $80/TB at Newegg) - and that doesn't strike me as providing the profit margin that you'll likely be seeking.
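
The price point implied by that comparison (using the $80/TB SATA figure quoted above):

    # $/GB parity with SATA for an 80 GB unit
    sata_price_per_tb = 80.0
    unit_capacity_tb = 0.08                       # 80 GB
    print(sata_price_per_tb * unit_capacity_tb)   # ~$6.40 per unit to match SATA on capacity cost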

You would, however, appear to serve a niche of high-IOPS applications that don't require large amounts of storage more cost-effectively than conventional disks can, even if your units are priced at well over $1000 apiece (at least as long as those applications can't share their storage with a great deal of 'cold' data and thereby reap the benefit of the many disk arms that would otherwise sit largely idle). In that environment, however, you also need to compete with flash storage that offers comparable bandwidth, IOPS, and latency (far better read latency, in fact - and probably better shock-resistance too) at a far more attractive price point.

So while your technical approach is really neat, unless you can make it inexpensive as well, it's not clear that it will fly competitively. But I'd be delighted to be convinced otherwise.

Thanks,

- bill