Database consolidation is a server gain. Storage vendors should butt out

Welcome to The Register Debate in which we pitch our writers against each other on contentious topics in IT and enterprise tech, and you – the reader – decide the winning side. The format is simple: a motion is proposed, for and against arguments are published today, then another round of arguments on Wednesday, and a concluding …

  1. alain williams Silver badge

    Consolidation is putting all your eggs in one basket

    Any breakage and nothing works.

    It also means that the one database/cluster/... does all the work, i.e. a higher workload than when it is distributed. One humongous machine might be more costly than several smaller ones -- maybe.

    1. J. Cook Silver badge

      Re: Consolidation is putting all your eggs in one basket

      It also means that a rogue app that manages to get SA rights on the shared database server (or a rogue vendor) can really, really ruin the admin's day/week/month/career.

  2. Mike the FlyingRat
    Boffin

    Have to agree with Chris...

    Here's a guy who actually knows what he's talking about.

    But here's the thing... I don't know if the question is being framed properly.

    When you say 'database consolidation', what do you mean exactly?

    Yes, it's a strange question, but think about it.

    You have databases that are OLTP transaction processing systems of truth. Then you have Data Warehouses (OLAP) that are used to drive analytics.

    Then you have Data Lakes, which are themselves a Data Warehouse consolidation, removing the silos. (Here the number of DWs goes down, but the storage requirements go up.)

    And it's not just the CPUs getting better, or storage, but also networking. 40GbE is becoming Cisco's norm. 100GbE is also there...

    But at 40GbE you can start to consider data fabric as your storage layer. The issue is cost versus density and performance has to be evaluated on a case by case basis.
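
    A rough sketch of why those link speeds matter for a storage layer: converting raw line rate to bytes puts 40GbE in the same ballpark as a local NVMe drive. The figures below are illustrative line rates only, not benchmarks.

```python
# Back-of-the-envelope conversion of Ethernet line rates to bytes/sec,
# to show why 40GbE makes a network data fabric plausible as a storage
# layer. Raw line rate only; real throughput is lower after protocol
# overhead.

def gbe_to_gbytes_per_sec(gigabits_per_sec):
    """Convert a link's line rate in Gbit/s to GB/s."""
    return gigabits_per_sec / 8

for link in (10, 40, 100):
    print(f"{link}GbE ~ {gbe_to_gbytes_per_sec(link):.1f} GB/s raw")

# For comparison, a PCIe 3.0 x4 NVMe SSD tops out around ~3.9 GB/s,
# so a 40GbE link (~5 GB/s raw) is in the same ballpark.
```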

    The networking also allows for a segregation of COTS and specialty hardware to get the most bang for your buck. You can weave a GPU appliance into your data fabric and then consolidate compute servers using K8s to allow distributed OLTP RDBMSs to take better advantage of the hardware. (This is where the network can be a bottleneck.)

    What's interesting, and a side note... when you look at this... it's in *YOUR* Data Center. Not in the cloud. (Although it could be in the Cloud too.)

    These advances will spell a downturn in the cloud over the next 5 years. That's not to say that there won't be a reason for cloud, but it will be more of a hybrid approach.

    Just some random thoughts from someone who's been around this for far too long but is too broke to retire. ;-)

  3. chivo243 Silver badge

    OK, but factor in the Redmond tax

    You need fewer per-server instance licenses for the database, but as MS has also made this calculation, it costs more to run Windows on these cost-saving servers. Save in one column, pay in another.

    1. Mike the FlyingRat
      Boffin

      @Chivo Re: OK, but factor in the Redmond tax

      I think you're stuck in a windows world.

      But if you look at Oracle, DB2 and others... including Hadoop vendors... they've all gone to disk size and vCPU counts for licensing their databases. So they will try to maximize their profits... (Just like mainframe MIPS?)

      So they all get you one way or another, and they've stopped with the perpetual licensing too. Most new license contracts are annual licenses.

  4. PeterCorless

    Database consolidation? More like a Cambrian explosion.

    I'll agree with Mike the FlyingRat that "I don't know if the question is being framed properly." This whole question seems fuzzy enough that "it depends" is the only right answer.

    For specialized processing of data you are seeing a practical Cambrian explosion of different databases. Sure, you have your standard RDBMS for back-office and operations work (ERP). But aside from that, a number of different SQL and/or NoSQL systems are being spun out into a constellation of special-purpose transactions and analytics platforms.

    Just an example:

    1. An IIoT NoSQL time-series database (OLTP) that tracks all of a company's products in the field. You have that just because the RDBMS cannot deal with the raw rate of ingestion.

    2. An Apache Spark analytics cluster that draws quality assurance results (OLAP) from the above time series database.

    Now, this second database *might* be consolidated with #1 above if you use something like ScyllaDB, which allows workload prioritization. It requires a slightly larger consolidated cluster. It also means that you have to use the right tools to be drawing information and inferences out of your NoSQL database. Many data scientists would still rather get the data from a Spark cluster than do CQL queries. It's just more natural for them.

    Then again, rather than Spark, you might have an Elastic cluster to do free searches. What *sort* of queries are you trying to do? Set, repeatable batch analytics, streaming analytics, or free text ad hoc queries?

    Lacking methods for consolidation, you need to fall back on lambda architectures where you have a speed layer for OLTP (#1 above) and a serving layer for OLAP.

    And also, as a reminder, there's no way your standard ERP system is keeping up with the raw rate of ingestion and analytics of IIoT. And no way the CFO is going to let quarter close be impacted because someone's trying to run an ad hoc data query on the ERP system. It is going to be a second, and maybe a third or even fourth, system, though information can flow between them all if and as needed.

    There can and will be multiple other systems out there. Real-time adtech/martech systems. These might have their own RDBMS, separate from your central corporate ERP, with a subset of user data.

    Another part might be a graph data model, totally orthogonal to your RDBMS, to track 360º total touch of users across multiple social networks with your brand. The size of that graph alone would make your typical ERP DBAs grimace if you suggested trying to store those nodes and edges in your ERP. ("How?!?" they'll ask.) Hint: SQL ≠ Gremlin/Tinkerpop.

    Yet another system might be a shopping cart system for your website. Sure, it might tie in closely to your ERP system, but for the purposes of keeping timeouts and abandonment low, it's built on a separate system to remain closer to your website than your back office. It will, of course, need to be able to traverse the firewall to get those orders into your ERP system for fulfillment. It may be SQL or NoSQL, depending on your use case.

    Another database, or even a cluster of databases, may be used specifically for your R&D processes that are *not* the same as your corporate ERP system. There is NO WAY engineering wants to have to rely on a "please-mother-may-I" game with IT. They're going to do their own. You cannot stop them. The rest of the corporation won't even know what the acronyms and code words for these servers stand for. ("Good," the engineers will say.)

    Other database(s) will track your own internal infrastructure. Every server and network device on your net will consolidate its logs for your CIO's needs. (Think Kafka feeding a NoSQL key-value store for log consolidation. For real-time uptime, latencies and throughput, Prometheus or Datadog for metrics, etc.)

    And so on.

    All of this is just your *on-prem* infrastructure. But what percentage of your infrastructure is now in the cloud? How much do you really know about the back-end you are running on when you are getting the equivalent of a monthly time-share in a virtual server, or running APIs that are completely serverless anyway?

    Every time someone says that there will be "one database to rule them all," I keep thinking, "Okay Sauron. That's just not how corporations work. Not even in 2000. Certainly not 2020." If anything, there's a reason for the explosion of database types and classes, and they all run on their own hardware because trying to make them all "one thing" is utterly absurd. Each has its own design patterns for consistency vs. availability, data distribution vs. consolidation, and so on.

    The good news is that each of the database systems your corporation does run will get faster and, as hardware generations roll out, cheaper overall to run the same workload. But don't be fooled entirely. You may save more money if you are being licensed "per server," but frankly many licenses are calculated "per core," in which case your higher densities will actually cost *more.*
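
    To make the per-server vs per-core point concrete, here's a toy calculation with entirely made-up prices and server counts, assuming the consolidated boxes carry more total cores than the old estate:

```python
# Toy comparison (entirely made-up prices) of per-server vs per-core
# licensing when consolidating 10 old 16-core servers onto 4 new
# 64-core servers. Note the new estate has MORE total cores.

PER_SERVER_FEE = 10_000   # hypothetical per-server instance license
PER_CORE_FEE = 1_500      # hypothetical per-core license

def server_licensed(n_servers):
    return n_servers * PER_SERVER_FEE

def core_licensed(n_servers, cores_each):
    return n_servers * cores_each * PER_CORE_FEE

old = (10, 16)  # servers, cores each
new = (4, 64)

print("per-server:", server_licensed(old[0]), "->", server_licensed(new[0]))
print("per-core:  ", core_licensed(*old), "->", core_licensed(*new))
# Per-server licensing gets cheaper; per-core gets dearer, because
# 4 x 64 = 256 cores versus the old 10 x 16 = 160.
```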

    Though, of course, if your database software is open source, you will not have to care one whit about your database cost as you scale.

    Lastly, churning your on-prem servers just to get to higher densities is a capital expenditure which is never free. You might need to wait until you've reached the end of the depreciation schedule on your current servers before funding opens up for new on-prem hardware. And then your boss might ask, with a hairy eyeball and a big spreadsheet, "So how does this compare to a public cloud, in terms of monthly spend?"

    Disclosure: I work for ScyllaDB (scylladb.com). We make a monstrously fast NoSQL database, but we have to play well with an infinite number of other great systems.

  5. Mike_in_the_house

    If you really dig in deep, is a database not just a smart file system? Conceptually the lines are blurred between a unit of storage and a database; both are just containers of 1s and 0s.

    The belief is that there is no benefit from storage per se (i.e. if your data has many instances of itself, then there is an opportunity to gain from data reduction, which is a measurable unit of saving), but in the end storage is the slowest denominator in the equation, so your CPU IRQs will only free themselves once you finalize the entire IO traversal. In this case, the right storage performance will mitigate the bottlenecks in the data access path. (How many times has the DBA used that as a legitimate excuse?) This has a major impact on savings: a 2ms access is not the same as a 0.5ms access.

    Consider that a complete data ingest/read/validation ACID cycle on most modern databases creates roughly 5 IOPS per transaction (not exact science, but let's say that's the case regardless) and the database is using 4KB blocks... OLTP-type data, even when you do IO coalescing, 2x or 8x into a larger database write... you are still subject to the IO wait time. One transaction could take 300usecs to complete on the CPU, but you still need to move it off to storage on one side. I'm not going to do the math here, but you get the idea: integrate, and you will see how much CPU waiting the WRONG storage produces.

    So the question begs answering: does DB consolidation have a storage implication to achieve the cost savings? IMHO, yes. Better do your research, and test, man, test!
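
    The integration hinted at above can be sketched quickly. The numbers below are the comment's own illustrative figures (300usec of CPU per transaction, 5 I/Os), assuming fully synchronous I/O with no overlap -- a worst case, not a benchmark:

```python
# Fraction of each transaction's wall-clock time spent waiting on
# storage, using the illustrative numbers from the comment above:
# ~300 us of CPU work and ~5 I/Os per transaction. Assumes fully
# synchronous I/O with no overlap -- a deliberate worst case.

CPU_US_PER_TXN = 300   # CPU time per transaction, microseconds
IOS_PER_TXN = 5        # I/O operations per transaction

def wait_fraction(storage_latency_us):
    """Share of a transaction's total time spent in I/O wait."""
    io_wait = IOS_PER_TXN * storage_latency_us
    return io_wait / (CPU_US_PER_TXN + io_wait)

for latency_us in (500, 2000):   # 0.5 ms vs 2 ms access times
    print(f"{latency_us / 1000:.1f} ms storage: "
          f"{wait_fraction(latency_us):.0%} of transaction time is I/O wait")
```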

    1. Mike the FlyingRat
      Boffin

      @Mike

      No, it's not.

      There are aspects of a distributed file system that are important but there's more to it.

    2. logicalextreme Silver badge

      It's very easy to hit a bottleneck on storage if you've got okay code running on a decent CPU with a sensible amount of RAM. But "consolidation" doesn't really help that. If you were on SQL Server, say, you could just use row or page compression to do the "many instances" sort of "consolidation" up front (and there's probably something similar in any decent DBMS you'd care to think of). Normalisation and good design certainly help, as does storage tuning, but I think the question's woolly (possibly on purpose).

      A DBMS is certainly another way of storing data just like a file system is, but it's a very particular type of data that's often stored and accessed in very particular ways. We don't (I hope) store every file in a company on one humongous file system for "consolidation", just because they're files and files can go on file systems.

  6. Sr Mad Science

    Consolidation? Here be Dragons!

    I have to question whether the cost of disk storage should be the driving factor here. If the hardware is CAPEX and has already been paid for, surely the only opportunity to save money will be when it reaches the end of its life, unless a radically cheaper alternative appears? In the cloud this argument makes more sense, as we're talking OPEX and annoying monthly bills for storing stuff...

    As for consolidating legacy databases:

    Excluding gratuitous clones used for testing and development, IMHO the vast majority of 'overlapping' databases have some pretty good reasons for existing, usually involving office politics or hard-to-do or obscure functionality that breaks database schemas when you try to merge them. The people who understood the problem in detail will probably have moved on, which means you run the risk of getting involved in a project which delivers no new functionality, might save some money at some point in the future when (and 'if'!) you purchase more enterprise storage, but also has a very high probability of breaking long-stable functionality in exciting ways.

  7. remi9898

    Oracle Database is licensed by core, not server, so in your example there is no reduction of license cost. Same total number of cores before and after the server consolidation. Power and cooling are non-linear savings, as the new servers are working harder than the old servers. Rack space is saved and sys admin burden reduced. The DBA's job is going to be equal or harder post-consolidation.

    Storage is part of a server and should always be considered when upgrading or replacing a server. Your points about what differentiates server and storage consolidation are valid, but not exclusive. True, they can be done independently, but they can also be done together.

    Storage advances often fuel server migration or consolidation. For example, transactional databases are notorious for wasting CPU cycles while waiting for i/o to complete, and replacing legacy storage with modern low latency storage can dramatically cut CPU utilization and allow database servers to be consolidated. There are no savings here in terms of database licenses, however, because database vendors like Oracle and Microsoft don't pay refunds.

  8. LDS Silver badge

    Beware of the hidden issues....

    Running the same workloads on fewer servers, even with the same core count and amount of RAM and storage, requires understanding what resource contentions may arise. There will be more "shared" resources than before, and it could be more difficult to optimize them for different requirements. Sure, some newer, faster (and expensive) technologies can offset some of these issues, but not always completely.

    There are also security implications, as more users, DBAs and administrators need access to the same server - and, without some form of clustering, taking down one machine will now impact more applications. Change management becomes more complex. One application may need a patch impacting another. Machines closer to full hw utilization may have less room to expand if needed.

    As said elsewhere, software licensing costs may not be reduced much, as vendors work hard to cope with more powerful hardware.

    Consolidation is a multi-dimensional affair; each dimension needs to be carefully examined - if you look just at the dimension "less hardware = better" you may find there's more under the surface.

  9. katrinab Silver badge
    Meh

    Virtual machines and clustering are things. Does anyone install stuff on bare metal systems these days?

    Faster hardware means you can install the same stuff on fewer machines, but is that the same thing as database consolidation?

    There may be operational reasons why having data spread across different systems is a bad idea, but that's a different argument - sometimes a good one, sometimes a very bad one.

    1. jtaylor

      "Virtual machines an clustering are things. Does anyone install stuff on barer metal systems these days?"

      Yes indeed. If a database server needs all the memory, CPU, or I/O of a physical server, why add a virtualization layer?

      Physical servers also make it easier to show license compliance. You can change the number of cores in a VM much more easily and quietly than you can in a physical server.

      1. katrinab Silver badge

        why add a virtualization layer?

        It is easier to take snapshots before changing things, in case you need to roll back the changes.

        It is easier to move it to a different machine.

        It is not impossible to do these things on bare metal, but virtualisation does make it easier. The performance penalty from virtualisation is not that high.

        1. jtaylor

          It is easier to take snapshots...It is easier to move it to a different machine....The performance penalty from virtualisation is not that high.

          Certainly, and if architecture, I/O, and latency permit, those are fine things.

          Maybe it's a matter of scale. The large clusters I work with require low-latency storage (thus close to the database host) so "move to another physical server" cannot be trivial. I don't know if we could snapshot a single node without knocking it out of the running cluster. Even virtualized, that procedure might require shutting down all databases.

          Also, although I agree that the CPU penalty for virtualization is now quite low, if you're licensing a database for 400 cores, nothing is cheap.

  10. J. Cook Silver badge
    Pirate

    My spin on this...

    Background: the environment we have consists of multiple database servers, most of them shared, and for the large part not using native clustering. We have a couple of silo'd apps that use their own database servers because they are very transaction-heavy, and the vendor's coding on the client side is... less than optimal, to put it politely*.

    All of the SQL servers are virtualized, using a mostly standard virtual machine, but with the data, log, and tempDB files on not only separate 'disks', but separate virtual SAS/SCSI controllers within the VM (i.e., the OS sees four controllers, with one or two disks on each). We've also done some shenanigans with the cluster size on the disks at the OS level. These are stored on the same flash/SSD-accelerated rotating storage arrays. For the large part, the apps on the shared SQL servers don't notice or care. The few apps that we do see performance issues on are the LOB apps that have their own silo of SQL servers, and we spent a couple of months going around with the vendor before coming to the conclusion that their queries are crap, along with their database design and lack of good index placement.

    Consolidation of databases onto shared servers is good, but care MUST be taken with what's put on them, and a competent DBA and system admin need to oversee and maintain them.

    * This is the same vendor that I may have mentioned had a habit of going "our code is fine, your hardware SUCKS- throw more hardware at it!!!" only to find out that their databases had no indexes on the tables that were getting pounded on, and their queries were being structured as "SELECT * from *" and using WHERE clauses to filter the results for the data they wanted. They've gotten better over the last five years as we've called them out on rather a lot of their crappy practices and they've gotten competent staff in who are fixing a lot of the design problems we've pointed out to them. But I'm still going to make fun of them, because. :)


Biting the hand that feeds IT © 1998–2020