Guess who's developing storage class memory kit? And cooking the chips on the side...

Huawei has made a move towards storage class memory that suggests it believes infrastructure driven by the tech is not only possible but inevitable. The firm believes the speeds of three storage elements are converging to create a workable storage-class-memory-driven storage infrastructure. It thinks that …

  1. FrankAlphaXII

    Um...

    You know it's pronounced "hwa-wei", right? I know that tones are hard, and probably even harder for Brits since you have like 50 different accents based on geography or even socioeconomic class, but our future Chinese overlords really appreciate it when you don't mangle easy words.

    Huawei South Africa has a video that might help, here

    1. AndyinSwindon

      Re: Um...

      My Huawei account manager tells me it's pronounced "Wah-wey". He made a point of spelling it out for me and told me to tell everyone in my company.

  2. kibbster

    Apocalypse USA Tech

    Next year will be one big consolidation for the Yanks and the me-too resellers. It'll be fun, dude, haha.

  3. CheesyTheClown

    Perpetuating the problem?

    Most enterprise storage today is devoted to virtualization waste. Using virtual machines to solve problems in a SAN-oriented environment has been an absolute planetary disaster. Many gigawatts of power are wasted 24/7 hosting systems which run at about 1/10,000th of the efficiency they should operate at. NVMe shouldn't even have a business case in today's storage world.

    This announcement was interesting because it appears the solution presented focuses on database and object storage. VMs and containers have their own section, but VMs and containers are yesterday's news. Companies deploying on VMs or containers obviously have absolutely no clue what they're doing. They're letting IT people build platforms for systems without the slightest understanding of what they're actually deploying. They're just focusing all their time on building VM infrastructures which are simply crap for business systems in general.

    Let's make things simple. Businesses need the following:

    - Identity

    - Collaboration (e-mail, voice, video, Microsoft Teams/Slack, etc...)

    - Accounting

    - CRM

    - Business logic and reporting

    Identity can be hosted anywhere, but for the purpose of accounting (a key component of identity) it should be cloud hosted. In fact, there should be laws requiring that identity be cloud hosted, as it is a means of eliminating questions about the authenticity of logs submitted to the courts.

    Collaboration is generally something which should work over the Internet between colleagues and B2B... but again, for the ability to provide records to courts upon subpoena, cloud hosted is best for data authenticity's sake. In addition, given the insane security issues related to collaboration technologies like e-mail servers, using a service like Google Mail, Microsoft Azure, etc... is far more sensible than hosting at home. No group of 10-20 people working at a bank or government agency will ever be able to harden their collaboration solutions as well as a team of people working at Google or Microsoft.

    Accounting... accounting should never, ever be hosted on a SAN or NAS to begin with. There are 10,000 reasons why this is just plain stupid. It should only ever be hosted on a proper database infrastructure employing sharded and transactional storage, with proper transactional backup systems in place. Large banks can manage this in-house, but most companies run software designed to meet their national financial accounting requirements. Those systems need to be constantly updated to stay in sync with the latest financial regulations. To do this, SaaS solutions from the vendors of those systems are the only reliable means of supporting accounting systems today. Consider that if the new U.S. tax code makes it through Congress, there will probably be millions of accounting systems being patched soon. If this is done in the cloud and there's a glitch, it will be corrected by the vendor. If there are glitches doing so in-house (and there often are), data loss as well as many other problems will occur. Using systems which log data transactionally in the cloud, as well as logging the individual REST calls, allows data loss or corruption to be completely mitigated. The same can't be said of on-site solutions.
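    The transactional-logging idea above can be sketched in miniature. This is a toy Python example, not any vendor's actual API: the `TransactionLog` class and the REST methods it replays are made up for illustration. The point is that if every mutating call is appended to a log, the current state can always be rebuilt by replay, so corruption of the materialized state is recoverable.

```python
class TransactionLog:
    """Append-only log of REST-style mutations; state is rebuilt by replay."""

    def __init__(self):
        self.entries = []

    def append(self, method, path, body):
        # Every mutating call is recorded, never edited in place.
        self.entries.append({"method": method, "path": path, "body": body})

    def replay(self):
        # Rebuild current state from scratch by applying the log in order.
        state = {}
        for e in self.entries:
            if e["method"] == "PUT":
                state[e["path"]] = e["body"]
            elif e["method"] == "DELETE":
                state.pop(e["path"], None)
        return state


log = TransactionLog()
log.append("PUT", "/accounts/42", {"balance": 100})
log.append("PUT", "/accounts/42", {"balance": 250})
log.append("DELETE", "/accounts/42", None)
print(log.replay())  # {} -- the account was created, updated, then deleted
```

    If the rebuilt state ever disagrees with the live database, the log wins; that is the mitigation the commenter is pointing at.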

    CRM is a database. Every single piece of data stored in a CRM is either database records or objects associated with database records. There is absolutely no intelligent reason why anyone would ever run a SAN to store this information. Databases and object storage are far more reliable. Using systems like those offered by NetApp, EMC, etc... is insanely stupid, as they don't store data logically for this type of media. They've added APIs with absolutely no regard for application requirements. Consider that databases and object storage employ sharding, which inherently provides highly granular storage tiering and data redundancy. The average company could probably invest less than $2,000 and have a stable all-flash system with 3- to 10-fold resiliency, and performance able to shake the earth an EMC, NetApp or 3PAR stands on. We are doing this now with 240GB drives mounted to Raspberry Pis. Our database performance is many times faster than the fastest NetApp on the market today. We have far more resiliency and a far more intelligent backup strategy, as all of our data is entirely transactional.

    Then there are business systems. If you need to understand how these should work, I highly recommend you read the Wikipedia entry on the AS/400. Modern FaaS platforms operate on the exact same premises as the System/36, System/38 and AS/400. You can run the absolute biggest enterprises on a few thousand bucks of hardware these days, with massive redundancy, without the need for expensive networks or heavy CPUs. The cost is in the platform and maintaining the platform. Pick one and settle on it. Once you do, build a team of people who learn its ins and outs and keep it running for 30+ years.

    As for Big Data: the only reason you need "storage class" anything here is that companies put too much on a single node. If you use smaller and lower-powered nodes, you can build an in-house Google-style big data solution that far outstrips most systems available today in performance, using purely consumer- or IoT-grade equipment. If you need this kind of storage, you have an IT team who hasn't the slightest idea how things like Map/Reduce work. Map/Reduce doesn't need 100GbE or NVMe. It works pretty well over 100Mb and mSATA. Just add more nodes.

    1. Anonymous Coward

      Re: Perpetuating the problem?

      So you're saying: remove this overpriced yet centralised and easy-to-use system, and provide a bespoke, complex yet more optimised solution for every aspect of your environment.

      So what's your DR plan?


Biting the hand that feeds IT © 1998–2021