* Posts by cloudguy

75 publicly visible posts • joined 24 Apr 2013


Turing Award goes to Robert Metcalfe, co-inventor of the Ethernet


Re: Ethernet has stood the test of time

The significant expense in data cable wiring is labor, not the cable itself. With many vendors building Ethernet cards, pricing became very competitive. Ethernet cards were also considerably less complicated to make than Token-Ring cards. That said, Olof Soderblom, from Sweden, was busy asserting his patent claim on token-passing networks. He had once worked for IBM. The patent licensing fees paid by Token-Ring card manufacturers contributed to the higher pricing of Token-Ring cards. There were also far fewer Token-Ring card manufacturers than Ethernet card manufacturers. Curiously, Madge Networks in England refused to pay for a Token-Ring license, fought Mr. Soderblom in the English courts, and was upheld in its refusal to pay for a license to make Token-Ring cards.


Ethernet has stood the test of time

You could say Robert Metcalfe was at the right place (Xerox PARC) at the right time, but no one would have predicted that Ethernet would become the universal network technology on LANs. The 1980s had many vendors providing LAN technology: Corvus Omninet, SMC ARCnet, 3Com Ethernet, Proteon ProNET 10, IBM PC Network Baseband, IBM PC Network Broadband, IBM Token-Ring, AppleTalk, Gateway G-Net, plus even more forgettable LAN networking schemes from Allen-Bradley and others.

That said, the ONE thing that propelled Ethernet into the future was running it over unshielded twisted-pair (UTP) cable and the introduction of standards-based cabling systems. SynOptics pioneered Ethernet over UTP cable in 1988/89 with LattisNet, which jumped the gun on the IEEE 802.3 standard. Still, the concept was proved, and it took off like gangbusters once enough network techs were convinced you could run 10Mbps Ethernet over UTP cabling. The second thing was the EIA/TIA standards for structured building cabling, which made it easy to design and deploy cabling systems for LANs. Before that, almost every LAN vendor had its own cabling system: IBM had the IBM Cabling System, and DEC had its own as well.

The 1990s settled all that, including the demise of Ethernet's primary alternative, the Token-Ring network. It is pretty remarkable that in the space of 20 years, Ethernet over UTP cable became the winner. Now every mainboard has an embedded Ethernet controller. Congratulations to Robert Metcalfe on receiving his Turing Award as the co-inventor of Ethernet.

Teeth marks yield clue to widespread internet outage in Canada


Re: Not really novel

In the mid-1990s, I was an IT contractor at Simplex Cable, which built undersea fiber-optic cables at its plant in Newington, NH. Simplex made an armored cable to resist shark bites in shallow-water areas. The issue is that undersea fiber-optic cables use repeaters, which require electricity. As a cable comes ashore in shallow water, sharks can sense the electric current on the cable and snap at it as if it were prey. Beavers, while not ocean-going rodents, do live in water, and a beaver's front teeth are tough enough to chew down trees. I suspect that where fiber-optic cables run through beaver habitat, in the water or close to the ground, they should be armored as well.

Voyager 1 space probe producing ‘anomalous telemetry data’


Signal is very weak, distance is very far, data is not making very much sense

Voyager 1 and Voyager 2 are our first interstellar space vehicles. Both have survived 45 years, which is a testament to their engineering and to luck. Voyager 2 is not producing the same "incoherent" data as Voyager 1, and this redundancy is why two Voyager space vehicles were launched rather than one. Hopefully, some entity will capture one or the other Voyager and play the "golden record" onboard to discover something about who sent it into space. SNL had a comedy sketch about Voyager back in the day: a message received on Earth from outer space indicated that an alien life form had listened to the Voyager "golden record" and replied, "Send more Chuck Berry."

Co-inventor of Ethernet David Boggs dies aged 71


Re: Ethernet turned out to become the network winner

The history of Token-Ring networks is more complicated and has nothing to do, per se, with IBM's market dominance in the mid-1980s. First, Token-Ring network adapters were more "intelligent" than Ethernet adapters, which also made them more expensive to manufacture because they required more components. Second, a Swedish citizen, Olof Soderblom, was awarded a U.S. patent on token-passing network technology in the early 1980s. He proceeded to trot around the world, licensing his patent to manufacturers like IBM. His royalty fees contributed to making Token-Ring network adapters more expensive than Ethernet adapters. Madge Networks in the U.K. refused to pay Mr. Soderblom a license fee for manufacturing its Token-Ring adapters, and the English courts, both the lower and appeals courts, sided with Madge Networks by ruling that it had not infringed on Mr. Soderblom's patent in the U.K. There were other manufacturers of Token-Ring network adapters, like SMC/Western Digital, but overall there were far fewer of them than there were manufacturers of Ethernet adapters. Today, you cannot buy a PC that does not have an Ethernet interface embedded on the mainboard.


Ethernet turned out to become the network winner

Back in the mid-1980s, it was not clear which Local Area Network design/technology would become dominant. I worked with over a dozen different networking technologies, including Corvus Omninet, Gateway G-Net, SMC ARCnet, Proteon ProNET 10, 3Com Ethernet, IBM PC Network Broadband, IBM PC Network Baseband, IBM Token-Ring Network, and Orchid PCnet. What clicked for Ethernet was SynOptics LattisNet running Ethernet over unshielded twisted-pair cable. After the emergence of structured cabling systems in the early 1990s, you have what you see today in wiring closets. Everyone has heard of Bob Metcalfe as one of the inventors of Ethernet, but far fewer people have heard of David Boggs, its co-inventor.

Got a few spare terabytes of storage sitting around unused? Tardigrade can turn that into crypto-bucks


Re: Only the data owner has the key(s) to decrypt the dispersed erasure-coded data

A note on my previous comment: As mentioned in the article, Storj can remove data from the Tardigrade platform, but it cannot produce the data in response to a law enforcement warrant. Only the person with the encryption key(s) can produce the data. If Storj is informed of a copyright infringement concerning certain data, it can remove the data in question, but any demand or warrant to produce the data has to be directed to the person who holds the key(s) to decrypt it.


Only the data owner has the key(s) to decrypt the dispersed erasure-coded data

If anyone is going to be served a warrant for their data, it will be the person who stored the data using the Tardigrade platform. The person who stored the data is the only one with the ability to decrypt it, and therefore the only one who can be compelled to hand over data law enforcement has a warrant to obtain. Neither Storj, the Node Operators, nor the Satellite Operators has any ability to produce data in response to a warrant. The storage network is a globally dispersed network of decentralized nodes, with no coordination among the storage nodes that hold erasure-coded shards of the encrypted data in question. If law enforcement seizes a Node or a Satellite, they will not get any data from it: the Satellites store only metadata, and the Nodes store only erasure-coded, encrypted shards of the data. The Tardigrade platform can survive the removal of Satellites and Nodes, and law enforcement will never know which nodes contain the encrypted shards of the data they are trying to obtain under a warrant. Law enforcement can only focus its action on the person who stored the data, because that person is the only one who can produce a decrypted version of it.

No merry Christmas for SwiftStack staff: Enterprise cloud storage biz axes workers amid strategy shift


SwiftStack Retreats to Niche Storage Player

Well, the number of independent mass-market object storage software vendors has gotten smaller with SwiftStack retreating to niche-player status. Going five years without a new funding round, with only $24M raised, means you can no longer afford to run with the big dogs. That leaves Cloudian and Scality as the independent survivors in mass-market object storage software. Both companies have raised approx. $200M each over the past eight years, and both still entertain the notion that an IPO could be in their future despite the fact that no object-storage startup has managed one. SwiftStack's long-term future looks like it will need to be bought to survive.

Seagate, WD mull 10-platter HDDs as pitstop before HAMR, MAMR time


HDDs Limited by Form Factor

Well, the effort to cram more bits into a square inch on an HDD platter has been going on for a long time. HAMR and MAMR HDDs are the next generations as PMR HDDs reach the limits imposed by physics. SMR HDDs are an error-prone aberration, and helium-filled HDDs are more of a gimmick, since helium atoms are very hard to contain for long. The real problem is that HDDs are constrained by their form factor. NAND flash, in all its various form factors, is more flexible and extensible: caging electrons is going to prove more productive than manipulating magnetic domains. NAND flash already dominates the 3TB-and-under storage drive market, with prices at or slightly below $0.10 per GB. HDDs are living on borrowed time. They weigh too much, use too much space, and consume too much electricity.

Backup bods Backblaze: Disk drive reliability improving


Three HDD makers left when there were dozens in the 80s

Well, the consensus has been that HGST manufactured better HDDs. WDs were usually OK, but go with the higher-performance Blue and Black models and stay away from the Green and Red models. Toshiba was also OK, although it had neither the reputation of HGST nor the popularity of WD. Seagate, apart from certain models, always seemed like a crapshoot. Personally, I mostly run HGST refurbs in my storage servers. They may be a little noisy, but they are in the basement, where it is cool year-round, and I don't care about the noise.

The thing about HDDs is that nobody will pay what it would cost to buy an HDD guaranteed to last 10 years. So manufacturers build HDDs to run close to the edge of failure in order to cram as much storage as they can (up to 16TB) into a 3.5-inch form factor, using tricks like helium fill and SMR. Areal density of 1.x terabits per square inch is the limitation of current HDDs. When HAMR and MAMR HDDs appear, they will reach even higher terabits per square inch but will probably be more expensive to produce, which means they will cost more per GB. Meanwhile, SSDs are getting larger and cheaper. SSDs rule the 3TB-and-under storage market. HDDs will make their stand in the capacity storage market until they are eventually challenged there by SSDs.

Keeping up with the kollect-kash-ians: Data manager Komprise more than doubles funding


Komprise makes steady progress

Back in the 90s, Hierarchical Storage Management (HSM) was supposed to move your data around for better management. You don't hear much about HSM now. Komprise has helped change the name of the game from HSM to Enterprise Data Management by checking your Windows and Linux shares for candidate files to be moved to an on-premises object store or a public cloud object store. The Komprise control plane resides on AWS, although it can also run on-premises. Komprise leaves symbolic links behind when it moves files to deep-and-cheap storage, so users and their applications can continue to access their data without being retrained to look for it somewhere new. I beta-tested Komprise before it went GA a couple of years ago, and in my demo/test lab I was able to move over a million files from Windows server shares to a Cloudian object store. Glad to see Komprise receive another round of funding to continue product development and expand its marketing programs.
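The "move the file, leave a link" trick can be sketched in a few lines. This is my own illustration of the general technique, not Komprise's proprietary implementation, which may not use plain POSIX symlinks:

```python
# Minimal sketch of HSM-style tiering: relocate a cold file to cheap
# storage and leave a symlink at the original path, so applications
# keep working without knowing the data moved.
import os
import shutil

def tier_file(hot_path: str, cold_dir: str) -> str:
    """Move a file to cold storage and leave a symlink behind.
    Returns the file's new location."""
    os.makedirs(cold_dir, exist_ok=True)
    cold_path = os.path.join(cold_dir, os.path.basename(hot_path))
    shutil.move(hot_path, cold_path)     # relocate the data
    os.symlink(cold_path, hot_path)      # transparent stand-in
    return cold_path
```

Opening the original path after tiering still returns the file's contents, since the OS follows the link; the trade-off is that the link breaks if the cold target ever disappears.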

Big Cable tells US government: Now's not the time to talk about internet speeds – just give us the money


FCC captured by cable and telco oligopoly

Well, instead of regulating the cable and telco oligopoly, the Republican-controlled FCC and its Chairman have been captured by it. Now the FCC wants to hand over billions in funding to that same oligopoly to improve broadband service in underserved rural areas. The FCC has bad data on rural broadband speed and availability because it relies on the same cable and telco oligopoly to report those numbers. Local governments in cities and towns are better informed about who the underserved are and where they live. So rather than fund cities and towns to plan for their broadband future and build publicly owned broadband networks, the FCC wants to shovel Federal funds into the coffers of the cable and telco oligopoly so they can increase the value of their assets for their shareholders at taxpayer expense. This is how the USA gets the worst Internet service at the highest prices in the world.

Seagate passes gassy 14TB whopper: He He He, one for each of you


Large HDDs are not meant to be backed up

Well, 8TB, 10TB, 12TB, and now 14TB HDDs are not meant to be backed up, and they are not appropriate for traditional RAID schemes due to incredibly long rebuild times, although some smarter RAID controllers have learned a few tricks about rebuilding very large HDDs. These HDDs are destined for scale-out object storage clusters, which protect data objects using replication (making multiple copies of data objects) or erasure coding (sharding data objects and calculating parity fragments). In object storage clusters, data protection focuses on the number of cluster nodes, not the HDDs in them: replication and erasure coding are spread over cluster nodes. No one cares if HDDs here and there fail, which they inevitably will if you have enough of them, and immediately replacing failed HDDs is not a priority. Replicated and erasure-coded data objects on failed HDDs will have their "missing" replicas and shards/fragments re-created on other HDDs on other cluster nodes by the object storage software running on each node.
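The trade-off between replication and erasure coding comes down to raw-capacity overhead, which is easy to quantify. The 10+4 layout below is a common example, not anything cited in the article:

```python
# Raw-capacity overhead of the two protection schemes described above.
def overhead(data_fragments: int, parity_fragments: int) -> float:
    """Raw bytes stored per byte of usable data."""
    return (data_fragments + parity_fragments) / data_fragments

# 3-way replication = 1 original + 2 copies: 3.0x raw capacity,
# and any 2 of the 3 nodes holding copies can be lost.
print(overhead(1, 2))   # 3.0
# A 10+4 erasure code tolerates any 4 lost fragments at only 1.4x.
print(overhead(10, 4))  # 1.4
```

That capacity saving is why erasure coding dominates at scale, at the cost of more CPU and network traffic during reads and repairs.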

Object storage sweetheart Cloudian bags another $94m funding in E round


Re: Cloudian on way to IPO?

Well, here is a correction and a couple of additional details to my post. Digital Capital should be Digital Alpha. Cloudian received $25M in equity funding and $100M in debt financing from Digital Alpha in March 2018. Cisco is one of the limited partners in Digital Alpha, which explains why Cloudian will offer Cisco storage servers as part of its previously announced pay-for-what-you-use on-premises storage consumption model. Lenovo is a Cloudian investor, so it could have an interest in acquiring Cloudian, because it lacks an object storage solution to call its own.


Cloudian on way to IPO?

Well, Mike Tso, CEO of Cloudian, hinted that it could happen in 24 months. Jerome Lecat, CEO at Scality, has made similar comments in the past about Scality going public. The obstacle to their IPOs would be annual revenue: the conventional notion is that $100M to $200M in annual revenue is where you want to be when you consider going public, and neither company is close to $100M. Scality has approx. $30M in annual revenue compared to Cloudian's approx. $25M. So, the question is, can either of them get to $100M in annual revenue in 24 months? If the investors are "long-term greedy," they might wait for an IPO. If not, then one or both of them will be sold, but who would buy them? HPE has some familiarity with Scality but hasn't shown any interest yet. Digital Capital partners have ties to Cisco, so that's a possibility for Cloudian. Lenovo could buy Cloudian if it wants an enterprise solution for capacity data storage. The number of potential buyers is not that large: IBM, Western Digital, and Red Hat have already made their acquisitions in this market.

Tintri rescued by DDN just hours after filing for Chapter 11


The flash storage gold rush is over...

Well, this is reminiscent of the 1980s, when everyone was starting a disk drive manufacturing business. There were too many entrants to survive, and today only three remain (Seagate, Toshiba, and Western Digital) after having acquired many of their competitors over the years. The flash array market is in a similar situation: too many manufacturers with too little to distinguish themselves from the competition. Some have gone public by taking on loads of debt to grab market share, some have been acquired because it was the only way the VCs would see a payday, and some have failed. DDN is likely looking at Tintri as a relatively cheap way to get access to its flash technology and apply it to DDN's own high-capacity storage lineup. For a while, DDN will have to support existing Tintri customers, but the plan would be to move them to DDN-branded storage products over time. The VCs may have made some bad bets on Tintri, but that's the way it goes.

WOS going on? DDN ejected from IDC object storage marketscape


What's wrong with this graphic?

Well, if there is a $20M annual revenue bar to get over, then Caringo, Ceph (Red Hat & SUSE), and SwiftStack could all be sneaking under it. What seems likely, given IDC's rule changes, is that DDN doesn't have enough touch points in the object-storage market, since its major strength is storage for high-performance computing.

Another quarter, another record-breaking Tesla loss: Let's take a question from YouTube, eh, Mr Musk?


Is Tesla the new DeLorean Motor Company?

Well, back in 1975, automotive executive and engineer John Z. DeLorean attempted to build and sell an original new car (the DMC-12) and take on the rest of the automotive industry. The company failed in 1982, when it went into receivership and bankruptcy. Making and selling cars is a tough business to start from scratch, even if you have an army of nerds working on the project. There is an overcapacity of car-making worldwide, and Tesla occupies a tiny segment of a soon-to-be-large EV market. The fundamental problem is that Tesla cannot possibly scale quickly enough to be successful against all of the excess car-making capacity available to manufacture EVs. Mr. Musk would be smart to stick to making batteries and rocket boosters.

Scality swallows $60m to tame the multi-cloud data management beast


Scality bags $60M for Zenko?

Well, Scality needs more funding, but I don't think the $60M is for Zenko. Mr. Lecat enumerated who the company is competitively engaged with but conveniently omitted Scality's win rate in those engagements; it seems meaningless to mention them at all if you are not going to use them to brag about Scality. Mr. Lecat removed the COO and CMO at Scality this year, claiming those C-suite positions were no longer necessary. Then we learned about HPE cutting a side deal with Cloudian in EMEA, which is right on Scality's home turf. HPE did that after it dropped $10M on Scality a couple of years ago to find out if the company was worth buying.

Along with the $60M in new funding, Mr. Lecat is now hoping for an IPO in 2023, when he had been looking to do that IPO right about now, or in 2019. The truth is Scality's revenue is not what it should be to do an IPO. So the most reasonable use for the $60M is to build a worldwide sales and marketing organization to gain market share, close sales (win rate again), and boost revenue enough to reach an IPO so the investors can have their payday.

By comparison, Cleversafe had received approx. $128M in funding when IBM wrote a check for $1.3B to buy it in November 2015. With $150M in funding, Scality could be bought for $1.5B if there were a buyer, and it won't be HPE. That leaves Cisco and Lenovo in the enterprise-class storage market without object-based storage software to call their own. So far, none of the established object-based storage software companies has been able to do an IPO; there are only the ones that are still private and the ones that got bought by public companies. $60M for Zenko doesn't sound logical, even if "multi-cloud" really is the next "big thing" in data storage management.

Seagate's HAMR to drop in 2020: Multi-actuator disk drives on the way


HAMR, MAMR and mechanical HDD trickery

Well, you can't blame the HDD manufacturers for trying to stay in the game. HAMR HDDs have been in development for more than a decade, and you still can't buy one. MAMR HDDs are exotic, and you won't see them for sale for a long time either. HDDs with dual actuators have been tried before and abandoned; maybe it can be made to work, but adding electro-mechanical complexity to HDDs can only be a source of new failure modes. In the meantime, caging electrons in NAND flash is a quicker way to increase capacity dramatically. Niche markets for HDDs will hang on for maybe another ten years, but when those use cases can be challenged by NAND flash storage, it will be the end for HDDs. The storage imperative in the 21st century is maximum capacity with minimum power requirements, and anything that does not deliver that will not be around.

He He He: Seagate's gasbag Exos spinner surges up to 14TB


Re: Limits?

Well, last year IBM scientists demonstrated the ability to store 1 bit of data on a single atom and read it back under laboratory conditions. Currently, it takes 100K atoms to store 1 bit of data.

IBM thinks Notes and Domino can rise again


Notes was worth the trouble back in the day.

Well, Notes was the crowning achievement in software development for Ray Ozzie and his crew of developers at Iris Associates. Ray had worked at Lotus, where he also created Symphony. At one point, Lotus could not keep track of all the licenses it sold for Notes and cut a deal with Ray Ozzie for approx. $186M in royalties. Later on, IBM paid one-third of its cash on hand, which was $3B, to buy Lotus. By that time, Lotus had acquired Iris Associates.

It was the pre-internet "groupware" era, and the only serious contenders were Novell with GroupWise, which is alive and well at Micro Focus with its current GroupWise 18 release, and Microsoft Exchange with its Outlook client, which is still widely deployed and sits alongside Microsoft's cloud offerings.

One of the issues with Notes was that it did not have all the functionality needed for a development platform. There were lots of third parties with Notes apps, but Notes as a development environment only gradually came into being, and by then the Internet was destined to become the new platform for groupware applications like Gmail and Google Apps. As for a Notes revival, I would not bet on it happening. Too much time has passed between the heyday of Notes and today.

Tonight on IPO, Bought or Binned: Cloudian and Scality collide as object storage endgame nears


Cloudian pulls ahead...

Well, the analysis was interesting and similar to some of the comments I've recently made here on what's up with Scality. Both companies are private, so metrics and data are hard to get. Scality does have a higher headcount than Cloudian. Revenue is about the same, but Cloudian's smaller headcount gives it a better revenue-per-employee ratio than Scality, and on that basis Cloudian appears more efficient with its funding. Scality recently eliminated its COO and CMO positions, which seems odd for a healthy company. It probably means revenue is not where it should be and the runway is getting shorter; Scality looks like it is under pressure to do something. Neither company looks like an IPO candidate, as their annual revenues are still well below $100M. HPE got a good look at Scality for its $10M two years ago and has not pushed the buy button, and HPE's side deal with Cloudian in EMEA is probably a sign that some of HPE's customers don't want Scality and HPE doesn't want to lose the business. Scality has worked more closely with HPE and Cisco than Cloudian has, but Cloudian may be moving closer to Cisco with the $100M line of credit it just got from Digital Alpha Capital. Cloudian stated this line of credit will be used to set up customers with either Cloudian-branded QCT storage servers or Cisco UCS storage servers. That leaves me wondering why Cloudian did not leverage its existing partner relationship with Lenovo, which sells a storage server with Cloudian's software pre-installed. Neither Cisco nor Lenovo has object storage software to call its own. If Lenovo has ambitions in the enterprise data storage market, and I think it does, then it would be smart to acquire Cloudian. Cisco has bought dozens of companies since its founding in 1984 and did a terrible job with some of them. Remember Whiptail? That $415M mess was shoved in a hole in 2015, just two years after Cisco bought it.
In the end, money talks and if Cloudian can get the multiple it wants from Lenovo or Cisco, a deal will get done and the funders will have a payday. Cleversafe minted quite a few millionaires when it was acquired by IBM for $1.3B in October 2015.

Oh, Bucket! AWS in S3 status-checking tool free-for-all


Haven't we seen this computing security conundrum before?

Well, there has always been a conflict between ease of use and security in computing. Cloud computing providers sold their model as a way to escape dealing with those nasty on-premises computing environments. And when you combine cloud computing with self-service access for anyone with a credit card, what could go wrong? Consumers of cloud computing don't pay attention to securing their information assets in the cloud, and when they don't, everyone on the public internet winds up with access to their data sitting in an improperly configured S3 bucket. Surprise, surprise! At least AWS is trying to make it clear when you are doing it wrong. Better late than never.

We all hate Word docs and PDFs, but have they ever led you to being hit with 32 indictments?


Information in a criminal investigation is asymmetric

Well, Special Prosecutor Mueller knows more than the people who receive subpoenas. Persons of interest lawyer up before being questioned. Lawyers tell their clients not to lie to the FBI and not to lie before a Grand Jury, because lying is a slam dunk for prosecutors and you will go to jail. People facing indictment who think they have long lives ahead of them don't want to spend the next 30 years behind bars. When charges come, the number of counts is usually high enough to convince them to plead guilty to lesser charges in exchange for their complete and truthful testimony. With every "plea deal," Special Prosecutor Mueller increases his overall information, putting the remaining persons of interest at an even greater disadvantage, because they don't know what he knows now that he didn't know before.

Scality CEO: About that C-suite throttling...


Mr. Lecat says we don't need a stinking C-suite at Scality

Well, I know Paul Turner, who until recently was the CMO at Scality. Prior to that, Paul was CMO at Cloudian. You will not find a more hands-on CMO than Paul. That said, when his predecessor lasted only four months at Scality, it should have been a warning sign about the company's prospects.

It wasn't too long ago that Mr. Lecat was talking about a Scality IPO, and then HPE invested $10M in Scality. Well, the IPO never happened, probably because Scality has not broken through the $100M level in annual revenue. HPE got a good look at Scality for its $10M and apparently decided not to buy the company, even though HPE has no object-based storage product to call its own. More recently, HPE cut a deal with Cloudian to sell Cloudian HyperStore in EMEA. Why would HPE do this after putting $10M into Scality? The easy answer is that HPE was not closing deals with Scality in EMEA.

The C-suite headcount at Scality is likely being reduced because Scality is not closing the business deals it needs to support the number of employees on its payroll. Funding provides a runway for business development, but cash "burn" can shorten the runway if it is too high compared to the revenue being booked. It looks like Scality's revenue per employee is too low to stay in business without cutting headcount.

Deputy lord of the Scality RING parts ways with object storage firm


Big Payday or IPO for Scality?

Well, the interesting piece of the story is who provided Scality with the recent $35M in additional funding. With approx. 219 employees and approx. $31M in annual revenue, Scality's revenue per employee is low at approx. $142K. Scality's revenue may be less than what it needs to operate, hence the unattributed $35M in additional funding. Scality's competitors, IBM Cloud Object Storage (Cleversafe) and Cloudian, have approx. $35M and $24M respectively in annual revenue, with 214 and 123 employees respectively, for revenue per employee of approx. $164K and $195K. Both numbers are better than Scality's, with Cloudian's being much better. On the surface, this would seem to indicate that Scality is not closing enough sales relative to the number of employees on its payroll. Back in the day, when Novell was riding high, CEO Ray Noorda used a revenue-per-employee metric of $250K to determine whether the company had too many employees. So the question is: how much longer can Scality keep increasing headcount without a commensurate increase in revenue before it burns through its funding? Scality could be acquired for some multiple of its funding, or it could go public, which CEO Lecat has mentioned in the past. The safer bet might be an acquisition by HPE, since it already has $10M invested in Scality and hasn't bought an object-based storage software vendor yet.
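The revenue-per-employee comparison above is simple division; here is a quick sketch using the approximate figures quoted in the comment (they are rough estimates, not audited numbers):

```python
# Revenue per employee, using the approximate figures quoted above.
def revenue_per_employee(revenue: float, headcount: int) -> float:
    return revenue / headcount

companies = {
    "Scality":                  (31_000_000, 219),
    "IBM Cloud Object Storage": (35_000_000, 214),
    "Cloudian":                 (24_000_000, 123),
}

# Scality ~$142K, IBM COS ~$164K, Cloudian ~$195K -- all well short
# of Ray Noorda's $250K rule of thumb.
for name, (revenue, staff) in companies.items():
    print(f"{name}: ~${revenue_per_employee(revenue, staff) / 1000:.0f}K per employee")
```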

Western Dig's MAMR is so phat, it'll store 100TB on a hard drive by 2032


Re: Space and energy will make flash the winner...

Well, probably when the new NAND flash fabs scheduled for completion come online in the next several years. Larger-capacity new HDDs are not coming to market at a lower per-GB price than the smaller-capacity HDDs already in the market. The price per GB of HDDs has fallen very little over the past 3 or 4 years, while the price of SSDs has fallen from $0.60 per GB to $0.25 per GB. When the price per GB of SSDs comes within $0.10 of HDDs, the market will flip in favor of SSDs because of the additional savings in floor space and energy costs. So I'd say a price of around $0.15 per GB for SSDs will be the point at which HDDs start losing significant market share in capacity storage.
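The crossover argument above can be reduced to a one-line rule. The $0.10/GB delta is the comment's own estimate, and the sample prices below are illustrative, not market data:

```python
# Sketch of the price-crossover rule: once SSD $/GB comes within the
# delta threshold of HDD $/GB, floor-space and energy savings tip
# capacity purchases toward SSDs.
def ssd_wins(hdd_price_per_gb: float, ssd_price_per_gb: float,
             delta_threshold: float = 0.10) -> bool:
    return (ssd_price_per_gb - hdd_price_per_gb) <= delta_threshold

print(ssd_wins(0.05, 0.25))  # SSDs still $0.20/GB more: False
print(ssd_wins(0.05, 0.15))  # within the $0.10/GB delta: True
```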


Space and energy will make flash the winner...

Well, HDDs cost less than SSDs and will survive for the next 5-7 years in scale-out storage clusters running object-based storage software. But if any of the projections for unstructured data growth come reasonably close to reality, then HDDs will simply require too much energy and take up too much space to store all of the unstructured data being ingested. SSDs have already won the capacity competition; all that remains is to make more of them and drop the price low enough to push HDDs out of the market. I think when the price difference between the two reaches $0.10 per GB, it will be the death knell for HDDs.

Whoosh, there it is: Toshiba bods say 14TB helium-filled disk is coming soon


Re: No RAID rebuilds on large HDDs

Well, not rebuild in the RAID controller sense. In an object storage cluster, data is protected using replication (copies) of data objects or erasure coding (data fragments + parity fragments) of data objects to achieve the desired level of data durability. In most object storage clusters, replication and erasure coding policies can be specified at the "bucket" level. Replication typically defaults to three replicas, with one replica stored on each of three different nodes in the cluster. Erasure coding schemes can vary considerably in their combination of data fragments and parity fragments, but the fragments are dispersed across enough nodes in the cluster that no node holds more than a single fragment (data or parity). HDD failure in a given node means the replicas and fragments stored on the failed HDD will be re-created by the object-based storage software on other HDDs in the cluster. At no time will the replicated or erasure-coded data become inaccessible while this happens.
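A minimal sketch of that one-fragment-per-node placement; the 4+2 scheme, node names, and hash-based spread are illustrative assumptions (real systems use ring- or CRUSH-style placement maps):

```python
# Minimal sketch of erasure-coded fragment placement in an object storage
# cluster: no node holds more than one fragment of a given object.
# Scheme (k=4, m=2) and node count are illustrative assumptions.

def place_fragments(object_id: str, nodes: list, k: int, m: int) -> dict:
    """Map k data + m parity fragments onto k+m distinct nodes."""
    total = k + m
    if len(nodes) < total:
        raise ValueError("need at least k+m nodes for one fragment per node")
    # Deterministic spread for the sketch only; production systems use
    # a consistent-hashing ring or CRUSH-style map instead.
    start = hash(object_id) % len(nodes)
    chosen = [nodes[(start + i) % len(nodes)] for i in range(total)]
    return {f"frag-{i}": node for i, node in enumerate(chosen)}

nodes = [f"node-{n}" for n in range(8)]
placement = place_fragments("bucket/photo.jpg", nodes, k=4, m=2)
assert len(set(placement.values())) == 6  # six distinct nodes, one fragment each
```

Losing one HDD then means re-creating at most one fragment per affected object elsewhere in the cluster, not rebuilding a whole drive image.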


No RAID rebuilds on large HDDs

The first cry you hear with the announcement of an even larger capacity HDD is that it will be impossible to use them in a RAID array due to the almost infinite amount of time needed for a RAID rebuild. Get a clue. These helium-filled HDDs are destined to be deployed in object-based storage clusters, where single or multiple drive failures have no effect on the operational status of the cluster. Failed or disabled HDDs in object-based storage clusters are simply pulled and replaced, hopefully under warranty.

StorONE 're-invents' storage stack to set hardware free


Who are those guys?

Well, in a crowded software-defined storage world, being mysterious about how you actually do what you claim to be doing will generate some curiosity and buzz. That said, 35 smart people being paid $150K per person per year could burn through $30M in less than six years. The 50 patents awarded or pending could be researched for information about what StorONE is actually doing. Patents represent a reduction to practice and not just an idea. One patent is typically not enough and is usually accompanied by related and/or nuisance patents to protect the valuable IP. By comparison, Cleversafe, an object-based storage software and hardware startup acquired by IBM for $1.3B in 2015, had amassed 300 patents.

The suggestion that StorONE is looking for additional funding or a potential acquisition could be plausible, but it only has beta customers at this point. By comparison, Cleversafe had over 100 production customers, including a three-letter U.S. government intelligence agency, when it was acquired by IBM. At some point, StorONE will have to come clean about what it is doing and how it does it. A few years ago, the OpenStack startup Nebula burned through $25M in funding in a couple of years but was unable to attract additional funding despite having investors who were A-list players. Oracle picked up the company for a bargain price, basically to hire the people working at Nebula.

An object failure: All in all, it's just another... file system component?


Old story, but OBS vendors are doing more to accommodate legacy file protocols

Well, the potential use cases of OBS include working as backend storage to NAS heads (Cloudian HyperFile) for SMB and NFS file access. They might not be as fast as NAS filers, but fast enough for most NAS use cases. Virtually every OBS vendor supports SMB and NFS file access methods today, and some have supported them for years. Various NFS and SMB gateways and caching appliances have also been around for years (Panzura) that can handle file locking and manage a global namespace. OBS vendors like Caringo, Cloudian, Scality, and SwiftStack are improving their support for legacy file access methods, which is to be expected because not every application has been re-written to use a RESTful API to access an object store. That said, every OBS vendor supports the AWS S3 API, which is the most popular RESTful API used to access cloud storage. OBS is not about object storage per se; it is about supporting the hundreds of data access, data storage, data analytics, and data management solutions that can use OBS.

Teen Pennsylvania HPC storage pusher Panasas: Small files, fat nodes, sharp blades


Data storage has been a conservative business

Well, Panasas has been around for a long time. Garth A. Gibson, one of the founders of Panasas, is credited with being in the group of three Ph.D. computer science graduate students at UC Berkeley whose research was instrumental in the development of RAID back in 1987. The others in the group were David Patterson and Randy Katz. Panasas has had its niche in data storage and was not competing with emergent object-based storage vendors like Caringo. Now there is competition, with companies like Qumulo and Weka.io getting some traction, so Panasas is upping its game. That said, data storage has been a conservative technology business built on proprietary hardware and proprietary firmware. Errors in storage hardware and firmware can lead to the loss or corruption of the data being stored, so moving fast and breaking stuff was never an acceptable approach to doing business.

Now we are in the era of software-defined storage based on COTS (commodity off-the-shelf) hardware. Storage software vendors can now move faster because they are not dependent on building and testing proprietary hardware and firmware in their storage systems. Panasas is making this transition and will have to do it successfully to remain a viable player. Curiously, companies like Pure Storage have at least partially abandoned the use of software-defined storage in favor of using proprietary hardware. This would seem to run counter to the storage industry trend of relying on COTS hardware and doing everything of value in software.

HPE inks object storage reseller deal in EMEA – with Cloudian


Reading the fine print on one side and no comment from the other raises questions

Well, Paul Turner from Scality was quick to point out that the HPE "deal" with Cloudian was not a general resale agreement. His statement is probably accurate, and it looks like a narrowly focused deal that only applies to HPE's professional services organization in EMEA. It does raise the question of why HPE was not able to accomplish the same objective with Scality. The deal looks like a situation where HPE was determined not to lose the business opportunity and brought in Cloudian. Cloudian does have a strong presence in EMEA and appears to be closing more business than Scality. Cloudian's no-comment was probably part of the arrangement with HPE to bring Cloudian into this limited-purpose deal. If Cloudian were permitted to tout this as a win against its competitor, it would tarnish HPE's existing resale agreement with Scality. There could be more to it, but it is not apparent right now.


So much for the old HP "Invent" moniker...

Well, HPE did invest $10M in Scality in January 2016, yet the company did not use the investment to come to any decision about acquiring Scality. This announcement is more likely a sign that Cloudian is closing more business in EMEA than Scality. Perhaps HPE decided it needed to partner with Cloudian in this market rather than lose the business. It could also be a tactic to maintain customer account control by supplying customers with what they want rather than trying to convince them to use Scality. The Register has pointed out that HPE is also partnering with Qumulo and now Weka.IO on HPC customer projects. All of this reinforces the notion that HPE is partnering with third parties who can help them get the business. Nothing wrong with that except at some point companies like Qumulo, Weka.IO, Cloudian, and Scality could all be acquired by others. It all looks like HPE is executing a tactical game plan when they should be acting more strategically.

HPE and WekaIO sitting in a tree, k-i-s-s-i-n-g


Follow the money too...

Well, HPE not only runs Scality's OBS software on HPE hardware, it has invested $10M in Scality. Some pundits considered this investment a prelude to buying Scality, but the investment took place almost two years ago. Seeing how HPE needs to develop a portfolio of HPC and OBS solutions, why has it not made a move to acquire Scality outright? Two years ago this month, IBM pulled out its checkbook and paid $1.3B to acquire Scality competitor Cleversafe. HPE needs to play in the OBS and HPC markets with solutions it owns. Partnering with Scality, WekaIO and Qumulo is different from buying them outright, as it did with SimpliVity and Nimble. Ms. Whitman's hesitation will be HPE's loss.

Tailored SwiftStack update should help get your GDPRse in gear


SwiftStack is making progress...

But so is everyone else in the OBS software business. SwiftStack has had a reputation for being hard to configure and requiring lots of "tuning" based on your performance requirements. That said, the vendor universe for OBS software has been stable, with just a few acquisitions over the past three years. HGST acquired Amplidata. Red Hat acquired Inktank (Ceph). IBM acquired Cleversafe. Caringo, Cloudian, and Scality have achieved traction in the OBS market. The enterprise vendors Dell-EMC, Hitachi, and IBM are playing a long game. And several smaller vendors like OpenIO and Minio have received additional funding rounds. SwiftStack falls in with the pure-play OBS software vendors like Caringo, Cloudian, and Scality, but it is also the major code contributor to the OpenStack Swift project. Its marketing efforts seem weak compared to the others in this group, and it has not received any additional funding for several years. GDPR is mostly an EU consideration, but all of these vendors sell in the global market, so they do need to be able to offer their customers GDPR-compliant OBS software. Data storage, just like networking, has its jargon. If you work in it, you learn it.

Has Nexenta's growth stalled?


Is there really such a thing as a G round?

Well, I never see Nexenta show up in any list of object-based storage software vendors even though the company launched NexentaEdge back in 2014 to catch up with that emerging storage market. Today, you hardly hear anything about Nexenta worth noting. They had a reputation for working well as a storage solution for VMware back in the day, but what have they done lately? They do have a reasonably sized paying customer base so they are generating revenue, but apparently not enough to stem the need for additional cash. Since their IPO plans never materialized and a co-founder has left Nexenta, the next logical step would be an acquisition. With $120M invested over multiple funding rounds, there have to be some anxious investors still waiting for a payday. By comparison, Cleversafe had about $127M in funding before IBM paid $1.3B for it in November 2015. Somehow I don't think Nexenta will command nearly that much in an acquisition. If the coming announcement doesn't lead to an acquisition, why would anyone invest millions more in Nexenta?

In 2012 China vowed 'OpenStack will smash the monopoly of western cloud providers!'


OpenStack will not "smash" western cloud providers

Well, it was probably a hope early on in OpenStack's development that it would emerge to challenge public cloud services from AWS, Google, and Microsoft. The efforts of Cisco, HPE, and Rackspace to use OpenStack to compete with the oligopoly of "western" public cloud computing providers appear to have failed. In the public cloud computing market, there is little chance that anyone will be able to harness OpenStack to compete with AWS, Google, and Microsoft at scale. OpenStack may have a future as a private managed cloud service from providers like ZeroStack and Platform9, or from one-off builders of private clouds like Red Hat or SUSE. The lingering question is whether OpenStack will be able to keep pace with the service capabilities of AWS, Google, and Microsoft.

Enterprise IT storage – where being fat and very dense is, um, a good thing. Right, Cloudian?


Ultra-density storage server revisited

Well, Cloudian released a multi-node storage server (Supermicro) with JBOD chassis (QCT) less than two years ago called the FL3000. All traces of it seem to have been removed from the Cloudian website. The HyperStore 4000 is a combined two-node storage server with 35 HDDs per node in a 4U chassis from QCT (QuantaPlex T21P-4U). The HyperStore FL3000, while expandable and highly modular, didn't offer a lot more than stacking 1U "pizza box" servers like the current HyperStore 1500 storage server. If you need PB-plus storage from the get-go, then the ultra-dense HyperStore 4000 looks useful. If you are starting with a sub-PB storage cluster, the HyperStore 1500 will be more useful because smaller clusters benefit from having more storage server nodes when it comes to using replication and/or erasure coding to protect data.
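The smaller-cluster point can be made concrete with raw-capacity overhead arithmetic; the scheme parameters below are illustrative assumptions:

```python
# Raw-capacity overhead of the two protection schemes, and why node
# count matters: erasure coding is cheaper per usable GB but needs
# more nodes than replication. Scheme parameters are illustrative.

def replication_overhead(copies: int) -> float:
    """n replicas -> n.0x raw capacity, requiring only n nodes."""
    return float(copies)

def ec_overhead(k: int, m: int) -> float:
    """k data + m parity fragments -> (k+m)/k raw capacity, but at
    least k+m nodes for one-fragment-per-node placement."""
    return (k + m) / k

print(replication_overhead(3))  # 3.0x raw capacity on as few as 3 nodes
print(ec_overhead(4, 2))        # 1.5x raw capacity, but needs >= 6 nodes
```

So a sub-PB cluster built from more, smaller nodes (HyperStore 1500 style) can use efficient erasure coding schemes that a two-node ultra-dense box cannot.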

Small but perfectly formed: Dailymotion's object storage odyssey


OpenIO uses its own hardware add-on. Is this progress?

Well, if OpenIO adds a hardware attachment (ARM + Ethernet add-on board) to HDDs and plugs them into a custom chassis, is this still software-defined storage on COTS hardware? Scality, AFAIK, makes no use of proprietary hardware with its RING OBS software. In any event, it will be interesting to see how this hardware add-on approach from OpenIO actually performs at scale. We already know that Scality RING can perform at scale.

Did somebody say object storage? 9 ways to tell if there's a point


Re: Metadata is where its at.

Well, Caringo has already implemented Elasticsearch in Swarm and Cloudian promises to have Elasticsearch implemented in its next release of HyperStore along with Kibana, which is a data visualization plugin for Elasticsearch. So it is apparent to these OBS software vendors that being able to search for objects using metadata and display that data visually is an important aspect of running an OBS cluster. They have undoubtedly heard about the need for this from their customers.
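As a rough illustration of what metadata search enables, here is a hedged sketch of an Elasticsearch-style query body; the field names and index schema are assumptions for illustration, not Swarm's or HyperStore's actual schema:

```python
# Sketch of the kind of metadata query Elasticsearch enables over an
# object store's metadata index. Field names ("content_type",
# "size_bytes") are illustrative assumptions, not a vendor schema.

def objects_by_metadata(content_type: str, min_size_bytes: int) -> dict:
    """Build an Elasticsearch bool-filter query body matching objects
    by custom metadata: a given content type above a size threshold."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"content_type": content_type}},
                    {"range": {"size_bytes": {"gte": min_size_bytes}}},
                ]
            }
        }
    }

body = objects_by_metadata("image/tiff", 10 * 1024**2)
# Against a live cluster this would be sent via the Elasticsearch
# client's search call, e.g. es.search(index="objects", body=body),
# with Kibana visualizing the results.
```

The point is that the cluster can answer "which objects look like X" questions from metadata alone, without touching object payloads.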

NooBaa wraps AWS S3 wool around Microsoft Azure Blob storage


Sounds more like a Rube Goldberg Virtual S3 Machine

Well, with just a couple of million in funding, NooBaa isn't going to upset the object-based storage market anytime soon. Storage takes a lot of time to get right and to get traction because storage is a foundational computing technology that tends to be very conservative. Apps can and do crash all the time, but your storage better not. Some of the current group of object-based storage vendors like Amplidata (HGST), Caringo, Cleversafe (IBM), Cloudian, Scality, and SwiftStack have been working at it for more than a couple of years and with multiples of the funding NooBaa has received.

The idea of scavenging around for underutilized storage on desktop computers and servers and presenting this as a secure, stable and reliable way to do object storage takes a leap of faith. Symform, founded by several ex-Microsoft employees six years ago, tried something like this: backup using underutilized storage on computers anywhere in the world, with a control plane on AWS to manage it. Symform was bought by Quantum a couple of years ago, and Quantum recently shuttered the Symform business unit. So much for the "innovation" of coming up with ways to leverage underutilized storage on desktop computers and servers.

Scality developing way to stream objects to tape and the cloud


Tiering to AWS S3 or Glacier or Google Coldline or tape...

Well, I think Paul Turner did a fine job at Cloudian as its CMO for two years before he moved to Scality. Cloudian currently tiers data to AWS S3 or Glacier, other S3-compliant object storage clusters, and now Google Coldline. Apparently, Scality is just getting around to doing it now that Mr. Turner is there. Coincidence or not?

Spectra Logic can tier data to tape libraries using their Black Pearl DS3 appliance. Spectra Logic added a couple of extensions to the S3 command set to deal with tape drives. You can use their SDK to write clients that will work with the Black Pearl DS3 to move data objects to LTFS formatted tapes in a library. So how is this "news" that Scality is planning to stream data objects to tape?

HPE has a $10M equity investment in Scality, so I'm sure whatever Scality is doing to potentially broaden the market for using Scality RING will be welcomed by HPE, which may eventually wind up buying the company.

HyperStore gets Coldline for tired old objects


Why not support all three...S3, Coldline and Azure Blob?

Well, Cloudian invested a great deal in being able to track the AWS S3 API very closely. This allowed Cloudian to tier data to S3 and to Glacier, although Glacier has a separate API. It makes sense for Cloudian to support tiering from Cloudian clusters to Google Coldline because it has some advantages over AWS Glacier in terms of data availability. So, if Cloudian is now able to tier data to AWS S3/Glacier and Google Coldline, why not Azure Blob? This would give Cloudian customers a real choice of destinations when it comes to tiering data.
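Tiering policies of this kind are usually expressed as lifecycle rules; the sketch below follows the AWS S3 lifecycle configuration shape (how Cloudian maps such a rule to Coldline internally is an assumption here):

```python
# Sketch of an S3-style lifecycle rule for moving cold objects to a
# cheaper tier after N days. The rule shape and "GLACIER" storage-class
# string follow the AWS S3 lifecycle configuration format; the prefix
# and day count are illustrative assumptions.

def cold_tier_rule(prefix: str, days: int,
                   storage_class: str = "GLACIER") -> dict:
    """Build a lifecycle configuration transitioning objects under
    `prefix` to `storage_class` after `days` days."""
    return {
        "Rules": [{
            "ID": f"tier-{prefix or 'all'}-after-{days}d",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": days, "StorageClass": storage_class}],
        }]
    }

policy = cold_tier_rule("logs/", 90)
# With boto3 this would be applied via
# s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=policy)
```

Supporting S3, Coldline, and Azure Blob as transition targets would then be mostly a matter of mapping the storage-class name to each cloud's cold tier.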

OpenIO wants to turn your spinning rust into object storage nodes


Kinetic will always have a brilliant future...

Well, aside from OpenIO and its current testing of Seagate Kinetic HDDs, just who has written and deployed any production applications using Seagate Kinetic HDDs?

Three years ago, Mr. James Hughes from Seagate gave the first public presentation and "demo" of a Seagate Kinetic HDD at Basho's technical conference in San Francisco. Mr. Hughes was on a mission with Kinetic to rid the storage world of the evils of POSIX and of storage servers with their disk controllers.

After the Kinetic announcement, there was the usual rush of supportive quotes from object-based storage vendors and storage hardware OEMs. SwiftStack, Scality, and Cleversafe said they were interested in Kinetic. Caringo indicated in a private message that they saw no advantage in Kinetic over their current technology. Cloudian said in a private conversation that Kinetic would require a "split brain" software development effort and the Seagate Kinetic code was not up to production quality. So what has happened after the initial enthusiasm and some cautious comments regarding Seagate Kinetic? The answer is not much. Seagate Kinetic remains a storage technology in search of applications using it in production level deployments. Seagate Kinetic will always have a brilliant future.

Behold this golden era of storage startups. It's coming to an end


Re: Next major advance...

Well, I agree, and we have seen this "too cheap to meter" argument in other areas. Remember when the proponents of nuclear power for generating electricity said it would be too cheap to meter? How many people actually got free electricity generated by a nuclear reactor? Today, people think cloud storage will essentially be free and that you will have as much as you could ever possibly want. Somehow I just don't think it will work out that way. There is always a cost involved in storing and preserving data. The question that needs to be answered is how much of the existing stored data will have any social, economic, scientific or cultural value in 10-20 years? The answer is probably only a small fraction of it. The mountains of data being generated by people tapping their fingers on their smartphones will have a pretty short half-life before they are not worth the storage they occupy in some cloud bit barn. Data storage is not an infinite resource. People will eventually need to determine what data is worth keeping and for how long.

Haters gonna hate, hate, hate: Cisco to tailor SwiftStack for UCS object storage cramming


The product is going to fly...

Will it fly the way Whiptail did when it blew a $400M hole in the ground after Cisco acquired it?