MongoDB 4.4 aims to be a dev crowd-pleaser, but analysts say it's still short of 'general-purpose' database territory

Against a backdrop of mounting losses, NoSQL document store database MongoDB has pushed out its 4.4 iteration with a slew of new features that it expects to improve analytics, ease scaling and smooth performance. At the same time, the database-wrangler reckons it has sped up query access to data across current and historical …

  1. Charlie Clark Silver badge

    Clueless analysts

    greater for speed, simplicity, and the agility of schema-less operation

    Assuming we can forget about reliability, we still have a trilemma: fast and agile, simple and fast, or simple and agile. Document databases push the complexity from the database management system to the application code: find out every time what the current schema is, work with it, and hand it over to some kind of map/reduce environment for processing. But relational databases haven't stood still either. Postgres will let you dump volatile documents in a binary-JSON store, or even let you plug in external data sources as if they were tables (with some overhead, obviously).
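    The "schema lives in the application code" point can be sketched in a few lines of plain Python. This is a hypothetical illustration, not MongoDB driver code: the document shapes and the `extract_email` helper are made up to show how the app, not the database, has to cope with every schema variant ever written.

```python
import json

# Three "documents" from the same notional collection, written at
# different times, each with a slightly different shape -- the schema
# exists only in the application's head.
docs = [
    '{"name": "Alice", "email": "a@example.com"}',
    '{"name": "Bob", "contact": {"email": "b@example.com"}}',
    '{"full_name": "Carol"}',
]

def extract_email(raw):
    """Probe each document for whichever schema version it carries."""
    doc = json.loads(raw)
    if "email" in doc:
        return doc["email"]
    if "contact" in doc and "email" in doc.get("contact", {}):
        return doc["contact"]["email"]
    return None  # an older or newer shape: the database never told us

emails = [extract_email(d) for d in docs]
print(emails)  # ['a@example.com', 'b@example.com', None]
```

    Every consumer of the collection ends up carrying a version of this conditional ladder, which is exactly the complexity a declared schema would have centralised.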

    But the money is always going to be in transactional data.

    1. teknopaul Silver badge

      Re: Clueless analysts

      Schemaless can reduce the complexity. new_att might be null, but if you add it to an existing relational DB the column will be nullable too.
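      A minimal sketch of that point, using Python's bundled SQLite (the table and `new_att` column are illustrative, not from any real system): a column added to a table with existing rows is NULL for those rows, whether you wanted nullability or not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO users (name) VALUES ('Alice')")

# Adding new_att to a table that already holds data: the existing row
# gets NULL, so the column is effectively nullable regardless of intent.
conn.execute("ALTER TABLE users ADD COLUMN new_att TEXT")
row = conn.execute("SELECT name, new_att FROM users").fetchone()
print(row)  # ('Alice', None)
```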

      We have a nightmare changing the schema every go-live. It takes downtime.

      I agree with your statement about transactional data. I find many systems have a couple of core requirements, where the money is, that need ACID transactions, and almost all the rest is locked into a complex multi-table schema because of that.

      "As the number and type of databases and data sources incorporated into modern applications has exploded over the last decade,"

      I wish. We have one big iron/host and a fat db2 for everything we do.

      However, even at 4.4 MongoDB has gaps in reliability and write locking that make it hard to justify even for less important data. Maybe in 5.0?

      1. Charlie Clark Silver badge

        Re: Clueless analysts

        Schemaless can reduce the complexity.

        No, it just defers it. Volatile key-value stuff can be dumped in a key-value store or, with more recent versions of Postgres, in a binary JSON column: it gets indexed, but you have no other guarantees, like integrity. And there's plenty of stuff where this is fine.
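        The "indexed but no integrity" trade-off can be shown with a toy in-memory blob store (the keys and documents here are invented for illustration): lookups work, but nothing checks that references between documents actually resolve.

```python
import json

# A toy key-value store of JSON blobs: key lookup is fast (the "index"),
# but no constraint checks that cross-document references resolve.
store = {
    "order:1": json.dumps({"item": "widget", "customer": "cust:42"}),
}

order = json.loads(store["order:1"])
# The order points at cust:42, but no such key exists. The store happily
# accepted the dangling reference -- integrity is the application's job,
# where a relational FOREIGN KEY would have rejected the insert.
dangling = order["customer"] not in store
print(dangling)  # True
```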

        I understand the pain associated with regularly redoing the schema – this isn't supposed to happen a lot, but then there is the real world – but the M in RDBMS requires it to work. There will always be stuff, like logs, that you can keep "opaque" from the data management part. Just as long as you know that you'll be the one writing the code that does that management!

  2. Anonymous Coward

    Iffy stability claims.

    In this link from three weeks ago you'll read some claims made by MongoDB which are lies; there's also an extremely odd example of a "retrocausal" read.

    https://www.infoq.com/news/2020/05/Jepsen-MongoDB-4-2-6/

    When you pretend you're solid, you're soft.

  3. Anonymous Coward

    "We submit read requests to multiple nodes in that cluster simultaneously"

    I hope that in the technical detail beneath the marketing speak there's a smarter approach to choosing which nodes to query than blasting all of them and seeing who wins, wasting the other nodes' cycles in the meantime.
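    The naive "blast everyone, take the first answer" version of that idea is easy to sketch with Python threads. To be clear, this is a guess at the pattern, not MongoDB's actual hedged-read implementation; `query_node` and its simulated latencies are entirely made up.

```python
import concurrent.futures
import random
import time

def query_node(node_id):
    """Stand-in for one replica answering a read; latency varies per node."""
    time.sleep(random.uniform(0.01, 0.05))
    return f"result-from-node-{node_id}"

# Fire the same read at every node and take whichever answers first.
# The losing nodes still burn cycles producing answers nobody reads.
with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(query_node, n) for n in range(3)]
    done, _ = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED
    )
    first = next(iter(done)).result()

print(first)
```

    A smarter scheme would hedge selectively, e.g. send a second request only after the first has exceeded some latency percentile, rather than fanning out unconditionally.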

    Now 'union'; what next? It looks like the feature set will grow and grow until they arrive at an ACID-compliant RDBMS from the other end, in about 20 years. Maybe; or not, if their attitude to consistency issues continues. Meanwhile the current crop of mature RDBMSs will have quietly and slowly added the latest fads/features, some of which may not last.

  4. SecretSonOfHG

    The hype wave is over

    And all that is left from the NoSQL wave are the few use cases where it makes sense: environments where data integrity is secondary (such as web-site usage analysis), where access is read-only (so you don't have to worry about concurrency or transactions), and where there is little, if any, structure in the data (because JSON: only strings, no monetary values, no relationships between entities).

    In particular, I've always wondered who in their right mind would build a financial analysis system on top of something that has only "double" as its numeric data type. The loss of precision is unacceptable in most environments.
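    The precision point is easy to demonstrate in any language with IEEE-754 doubles; here is a short Python sketch (Python floats are doubles), using the standard library's fixed-point `Decimal` type as the contrast.

```python
from decimal import Decimal

# IEEE-754 doubles cannot represent most decimal fractions exactly --
# precisely the wrong property for money.
print(0.10 + 0.20 == 0.30)  # False: each binary float is already rounded

naive = sum(0.10 for _ in range(100))           # error accumulates
exact = sum(Decimal("0.10") for _ in range(100))
print(naive)   # close to, but not exactly, 10.0
print(exact)   # Decimal('10.00'), exact to the cent
```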

    Less and less room for growth, as relational engines gain some NoSQL features while retaining their beloved ACIDity and much richer data types.

    1. Tom 7 Silver badge

      Re: The hype wave is over

      I've sat down with people who claim NoSQL is better for some things, but after much discussion and analysis of use cases I decided the "NoSQL" was really about their knowledge. Perhaps we were both limited in our experience, but it always seemed to come out that they thought it was better to do something in NoSQL because they couldn't work out how to do it in an RDBMS.
