30 years of MySQL, the database that changed the world

Before Donald Trump became US president and the UK left the EU – both arguably the result of a new kind of online politics – a rather nervous-looking Mark Zuckerberg shuffled out onto a Harvard University lecture hall floor to offer some insight into the inner workings of a website he had created less than two years earlier. …

  1. Anonymous Coward
    Anonymous Coward

    Thirty years, and it's still a pretty awful RDBMS. I've never forgiven Monty and his colleagues for claiming an RDBMS didn't need referential integrity, saying it could be implemented easily in the application layer. This was a bit of misdirection to make one of the MyISAM table format's worst weaknesses seem unimportant. That led to a generation of misinformed graduates thinking MySQL was a good choice for any RDBMS purpose, and to thousands of systems with terrible data issues, as well as their own convoluted attempts at enforcing referential integrity and some approximation of transactions. Of course, as soon as MySQL adopted the third-party InnoDB table format, which did offer referential integrity and some other key features, Monty et al were claiming it was the best thing since sliced bread.
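The engine-vs-application argument can be sketched in a few lines. This is a minimal illustration (using SQLite purely because it runs anywhere; the tables are invented): with a declared FOREIGN KEY, the engine itself refuses orphaned rows, so no forgotten code path in the application can quietly corrupt the data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opting in per connection
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL REFERENCES users(id))""")

conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO orders VALUES (10, 1)")       # parent exists: accepted
try:
    conn.execute("INSERT INTO orders VALUES (11, 99)")  # no such user: rejected
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # FOREIGN KEY constraint failed
```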

    I've replaced MySQL (and more recently MariaDB) with PostgreSQL in a number of systems. The overall performance and data integrity were greatly improved, as once you start doing anything involving a join, MySQL's performance plummets. It's OK performance-wise for reads on a single table, but everything else sucks. The query optimiser is still very primitive as well, but it's ultimately limited by the poor implementation of InnoDB itself.

    I've also had MySQL do weird things when tables became fairly big. Like silently dropping all the indexes. That was the final straw for me, and I now refuse to work on MySQL/MariaDB based systems unless there is a clear commitment to moving onto a better database in the near future.

    1. Charlie Clark Silver badge
      Pint

      Yep, MySQL and its predecessor mSQL were fine in the telecommunications market, where integrity wasn't an issue but write performance was; they were shit for any kind of OLTP or OLAP, which is where you need an RDBMS. The rest was clever marketing and the luck that the MySQL driver for PHP was written first.

      But pretending to be an RDBMS while actually just paying lip service to the "R" and "M" parts – that's no relational integrity and no management – also ruined a generation of programmers. How often have we seen duplicate keys? I've just taken over the migration of a website from Typo3 to (probably) Wagtail and there are no foreign keys in the schema! Though I'm also no fan of the Active Record pattern still used by Django. InnoDB is, as you say, a disaster as schema changes involve table locks…

      1. logicalextreme

        Yep, I've been complaining about it for years. The default storage engines for MariaDB and afaik MySQL now support RI out of the box, but the damage was already done. You can't move these days for code-first stuff with questionable ORMs, attempts to hamfist everything with document databases and the resultant loss of any semblance of truth that goes along with it.

        I know the most about SQL Server and it's a very competent RDBMS but I'd always advocate for Postgres based on license cost, love of FOSS and general de-MICROS~1ification.

    2. kmorwath

      Datatabases, or datadumps?

      Since a lot of web developers used - and still use - this kind of software as a datadump and not a database, things like referential integrity, constraints, permissions, etc. are utterly useless to them.

      And after all, since the application always accesses the database with superuser permissions, the nephew of Bobby Tables can still wreak havoc at the first SQL injection vulnerability....
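The standard defence against Bobby Tables' nephew, for the record, is parameterised queries: the driver ships the value as data, never as SQL text. A minimal sketch (SQLite for portability; the table is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES ('Alice')")

payload = "Robert'); DROP TABLE students;--"
# Parameter binding stores the payload as a harmless literal string.
conn.execute("INSERT INTO students VALUES (?)", (payload,))

print(conn.execute("SELECT COUNT(*) FROM students").fetchone()[0])  # 2
```

The table survives and the injection attempt is just another row. (Superuser database accounts make the blast radius worse, but binding is what stops the injection itself.)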

      1. Stevie Silver badge

        Re: Datatabases, or datadumps?

        I got persuaded to show my bosses this in '04 and was unpleasantly surprised to find that many of the features not only did not live up to the FOSS Claque's hype, they did not live up to the official documentation either.

        Lesson learned. Toy database.

        1. alcachofas

          OTT

          Oh boy, the Reg commentariat can never just not like anything, can they? Where does the hyperbolic whining come from?

          It’s not a ‘toy database’. It’s not ‘awful’. You may disagree with some key decisions in it but eeesh… grow up.

          And the person who “refuses to work” with MySQL? I’d rather have data integrity issues than have someone like that on my team

          1. Charlie Clark Silver badge
            Coat

            Re: OTT

            I believe this is yours… Close the door on your way out.

            1. alcachofas

              Re: OTT

              Oh you’re adorable.

          2. frankvw Bronze badge

            Re: OTT

            To call mySQL a 'toy' is indeed unjustified, IMO. That said, it has serious shortcomings and limitations that make it unsuitable for large scale mission-critical applications. And yes, some of that is related to myISAM which was the default until 2009 when it was replaced with InnoDB, but still - there's a lot of room for improvement here.

            As it stands, mySQL is a free, quick-enough and good-enough go-to for most low- and medium-end websites that call for a fairly simple schema with lots of reads and only a few writes. I've been making a living with these on LAMP stacks for the past 25 years and it works fine, provided you are aware of the limitations and don't try to use it for something that has to be scalable into high-end territory.

            At least (to the best of my knowledge) 45% of the world's websites use mySQL. That's not a toy.

            1. Charlie Clark Silver badge

              Re: OTT

              Actually, MySQL's main advantage over Postgres is better write performance in most situations. Of course, this is done by paying only lip service to data integrity. So I would argue that for some time now, it would make sense to go with Postgres anyway.

              1. Alan Brown Silver badge

                Re: OTT

                "Actually, MySQL's main advantage over Postgres is better write performance in most situations."

                That used to be the case, but Postgres usually outstrips MySQL by a couple of orders of magnitude when you're handling large datasets, and for smaller ones the write differences are too small to matter. This isn't the 1990s anymore and 2GB is a SMALL amount of memory in most cases. (Remember, MySQL's big claim to fame was running on small non-dedicated systems with 4-16MB total RAM.)

                Besides, PgSQL has native handling of a lot of data types that you need to run through external sanity checking before feeding into MySQL (eg: timestamps and IP addresses), AND it's fully POSIX-compliant, unlike MySQL.

                When converting datasets from MySQL to PgSQL, queries usually only need tweaking if you've used keywords specific to MySQL - and if any of your queries contain joins you'll see substantial savings in both time and memory consumption.

                I've dragged a number of people kicking and screaming off mySQL onto PgSQL. After a few months none of them have ever wanted to revert. Fear of the unknown is the most prevalent reason for not even trying

            2. Alan Brown Silver badge

              Re: OTT

              MySQL was good enough and fast enough back in the days of 128MB systems

                It simply doesn't scale well, regardless of whether the backend is InnoDB or MyISAM.

              Once you're up to several million records it's struggling and the extra base memory/cpu requirements of PostgreSQL really don't matter post-2008 or so

              At 50+ million records, you'll find that Postgres uses 1/4 the memory of MySQL with queries running in seconds when MySQL takes minutes

              MySQL is good as a starting point, but people fixate on it, thinking other databases are "too hard" - resulting in massive kludgefests being written to handle cases that PostgreSQL frequently handles natively

                FWIW the primary reason that MySQL was 3x faster than PgSQL for thousands of queries/inserts back in the 1990s was that PgSQL fsynced after every transaction whilst MySQL relied on disk buffers. The risks to data and database integrity should be obvious - and PgSQL solved that issue with WAL in the early 2000s, making it MUCH faster than mySQL if you need to add 10 million entries and add indexing.

              Calling MySQL a "toy" database isn't fair, but on the other hand with the ease of using PgSQL from the outset it makes little sense to use MySQL unless you KNOW with absolute certainty that it won't be asked to scale

              Having spent decades tuning MySQL for best performance/memory consumption, the fact that most of this is done automagically in PgSQL is a godsend that lets you concentrate on the actual task rather than trying to deal with the system growing increasingly unstable as the load piles up

      2. Charlie Clark Silver badge

        Re: Datatabases, or datadumps?

        The LAMP stack bears most of the responsibility for this; MySQL's excellent performance for individual tables, combined with its awful performance with joins, helped convince developers that joins, and thus the relational model, were the problem and that it was far better to implement the logic at the application layer. Couple this with the ease in PHP of getting a connection directly to the database and sending it queries directly, as opposed to using some kind of connection pool and prepared statements, and you get completely idiotic ideas about scalability and sanitising inputs. We saw this in the recently publicised exploit – the kind of code that you only write in PHP (you don't have to, but the path is so well-trodden).

        I don't think it's a coincidence that we're seeing the decline of this co-dependency (PHP is getting expensive to run as commercial licensing grows) as a new generation of programmers comes along.

        1. frankvw Bronze badge

          Re: Datatabases, or datadumps?

          "MySQL's excellent performance for individual tables, combined with its awful performance with joins, helped convince developers that joins and thus the relational model were the problem and that it was far better to implement the logic at the application layer."

          I've tried that a few times. I was young, stupid and deadlined. I soon learned that trying to solve this programmatically in PHP comes with even worse performance penalties.

          1. Charlie Clark Silver badge

            Re: Datatabases, or datadumps?

            And you simply cannot implement ACID integrity in the application layer.
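The atomicity half of that is easy to illustrate. In the sketch below (SQLite, with an invented accounts table), the engine rolls back both halves of a failed transfer in one place; application code trying to undo its own partial writes has to get every failure path right, every time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

try:
    with conn:  # one transaction: commit on success, rollback on exception
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE id = 1")
        (bal,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        if bal < 0:
            raise ValueError("insufficient funds")  # aborts the whole transfer
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE id = 2")
except ValueError:
    pass

print(conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 100), (2, 0)] -- the debit was rolled back along with everything else
```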

            1. captain veg Silver badge

              Re: Datatabases, or datadumps?

              > you simply cannot implement ACID integrity in the application layer.

              Sure you can. The application layer is just software. So is the database manager.

              I grew up in Pick, which is, in brief, a hash-based key-data store of semi-structured data on disk. It has dictionaries, and you can use them to specify expected relationships, which aids reporting but in no way constrains the actual table content. Just about the only enforced integrity rule is that keys are unique.

              This taught me pretty quickly how to write application code which ensured data integrity*.

              Many years later I'm walking through some code and the boss notices that I haven't set the database (er, Microsoft Access) to enforce referential integrity. I said that I didn't need that overhead. He said let's see and turned it on, expecting a flood of errors. There were none.

              -A.

              * Maybe I'm thick, but I fail to see how writing application code that is aware of the integrity rules is any better than catching and dealing with the exceptions that might result from breaching them.

  2. alain williams Silver badge

    Naming of MariaDB

    I believe that it was named after Widenius's other daughter: Maria.

    1. Lon24

      Re: Naming of MariaDB

      And that goes well with Deborah & Ian's thingy.

      1. Phil O'Sophical Silver badge

        Re: Naming of MariaDB

        And Mike and Terry Lawnmowers, back in 1973...

      2. ICL1900-G3 Silver badge

        Re: Naming of MariaDB

        A story with a sad ending, unfortunately. What an achievement all the same.

      3. captain veg Silver badge

        Re: Naming of MariaDB

        I wasn't aware that Ian's thingy had anything to do with it.

        -A.

  3. Anonymous Coward
    Anonymous Coward

    The bit in the article about the early history of MySQL is wrong. If you check the book "MySQL and mSQL" (O'Reilly, 1999), Monty Widenius is quoted as saying that he can't remember where the name "my" came from:

    "It is not perfectly clear where the name MySQL derives from. TcX's base directory and a large amount of their libraries and tools have had the prefix 'my' for well over 10 years. However, my daughter (some years younger) is also named My. So which of the two gave its name to MySQL is still a mystery."

    So the name's use in the codebase precedes the birth of Widenius's daughter by a number of years.

    Also, MySQL was not initially built on top of mSQL as the article implies, but was compatible at the C API level. Widenius contacted the author of mSQL, David Hughes, about using the MyISAM engine as a backend to mSQL (which had started life as an SQL translator to the PostQUEL language used in early versions of Postgres). By that point a new version of mSQL was already nearing completion with its own storage engine, so Hughes declined.

  4. IanW

    Did some profiling on it before the Sun days

    I did some work with the company before the Sun takeover, paying to have their user base profiled. Mix of folks downloading code from MySQL themselves and also some stats on use with LAMP stacks out in the open web. That’s the best we could do given MySQL featured on most Linux distributions back then.

    Never seen anything like it with any other vendor. Around 30% of the base was from system integrators. The other 70% was as flat as a pancake over every SIC code known to man. Literally everywhere.

    That’s the reach open source gave to the industry.

  5. fg_swe Silver badge

    Avoid If Possible

    MySQL can be used for Cat-picture Distribution Systems such as Facebook. Or for brochureware websites.

    NEVER use it in case

    1.) Records are valuable and should not be lost. Think of banking, accounting, critical records keeping, policing, personnel records, source code management and the like.

    2.) Join performance is critical

    PostgreSQL is the obvious proper alternative. Oracle and DB/2 are also very serious candidates, if you have a fat budget.

    I am speaking from experience with a database holding 3 million entities (a song metadata database). It would always lose records and never knew the exact number of songs. It was selected because my manager was reading too many IT newspapers.

    1. fg_swe Silver badge

      Re: Avoid If Possible

      In other jobs, I worked with Oracle and DB/2, and they would never lose records or not know the exact number of records. They have their share of quirks and problems, but you can manage to make them work properly. For example, NEVER expose an Oracle listener port to your intranet; always lock it behind a server process which generates the SQL statements.

      DB/2 needs an IBM engineer to get going, but that does not hurt too much on top of DB/2 license fees.

      So, Oracle and DB/2 are expensive workhorses, while MySQL is something like a venomous spider you get for free.

    2. Charlie Clark Silver badge
      Thumb Up

      Re: Avoid If Possible

      I'd add that, as with many systems, the "free" part of the software should never be the only reason for choosing it, though stupid licence fees and conditions have made it more important than it should be. For any system that a company depends upon, you should expect to pay properly for professional support, from modelling to system design, replication and scaling to migration. Fortunately, the various fumblings with MySQL provided an opportunity for companies to promote exactly these skills along with Postgres' superiority.

    3. David Harper 1

      Re: Avoid If Possible

      "I am speaking of experience with a database holding 3 million entities (a song metadata database). It would always lose records and never knew the exact number of songs."

      Is it possible, do you suppose, that your software developers weren't very good at their jobs, and the reason why the records went missing is because the application was defective?

      I ask because I've worked with MySQL for 25 years, as an application developer and as a DBA, supporting databases with tables holding hundreds of millions of rows, and I've never seen the kind of behaviour that you describe.

      1. fg_swe Silver badge

        No

        It clearly was the MySQL server itself, which lost records. I heard the same story from more than one group of developers.

        1. This post has been deleted by its author

        2. David Harper 1

          Re: No

          Well, the developers *would* say that, wouldn't they. After all, every developer will tell you that their code is perfect, and if weird sh*t happens, it must be the database, right? :-)

          Seriously, if there were a problem like the one you describe -- MySQL randomly changing data -- then support forums like Stack Overflow would be full of threads on this subject, and it would be front-page news here at The Register. Googling for reports of MySQL changing users' data unexpectedly, I found just one Stack Overflow thread (https://stackoverflow.com/questions/49594025/mysql-loss-of-data) where a user claimed that MySQL was changing their data, and the leading reply was as skeptical of the claim as I am.

          Occam's Razor says that the simplest explanation is the most likely: the bug is in the application, not the database.

          1. Charlie Clark Silver badge

            Re: No

            Occam's Razor says both posts are subject to anecdotal bias…

            I think you have a point that sloppy development is probably the direct cause of the problem but this may be from expecting the database to do something that it didn't. It is a long time ago but I remember having to fix a website where a single user had two different ids… and the root cause was the lack of data integrity across tables that a FOREIGN KEY would have ensured. And I'm currently looking at the TYPO3 schema where there is not a foreign key in sight, inexplicable given that all tables have been set to use InnoDB and frankly inexcusable for well over a decade.

            I spent some time porting the old httparchive database from LAMP to Postgres and Python and was able to make all queries faster and more reliable. The original work was done by Steve Souders and it's clear that neither databases nor PHP were his domain, but he did work hard to produce a very useful resource. It's since been moved to Google's BigQuery, but I'm pretty sure that most things would work fine on Postgres, though you do need a lot of disk.

            1. David Harper 1

              Re: No

              "It is a long time ago but I remember having to fix a website where a single user had two different ids… and the root cause was the lack of data integrity across tables that a FOREIGN KEY would have ensured."

              I've seen this myself, back when Ruby-on-Rails was (briefly) the must-have framework that all the cool kids were using. I was told, by someone whose job title was "senior developer", that they had no need for foreign key constraints, because ActiveRecord (RoR's ORM layer) took care of data integrity. A couple of months later, the team leader asked me to look at the possibility of adding foreign keys. If you know MySQL, then you'll know that it won't allow you to add a foreign key if it would be violated by existing data. That was the case with several of the critical tables, so I had to tell the team leader that the data were already inconsistent and would need to be cleaned up first. Thankfully, that wasn't my problem.
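For anyone facing the same clean-up, the usual first step is an anti-join to list the rows that would violate the new constraint. A sketch (SQLite for convenience; the table and column names here are invented, not the ones from the story above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER);
    INSERT INTO users  VALUES (1), (2);
    INSERT INTO orders VALUES (10, 1), (11, 2), (12, 99);  -- 99 has no parent
""")

# LEFT JOIN + IS NULL finds children whose parent row is missing.
orphans = conn.execute("""
    SELECT o.id, o.user_id
    FROM orders o
    LEFT JOIN users u ON u.id = o.user_id
    WHERE u.id IS NULL
""").fetchall()
print(orphans)  # [(12, 99)] -- fix or delete these before adding the key
```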

              And that's why, whenever I hear a developer say "MySQL has changed my data!", I roll my eyes.

              1. Charlie Clark Silver badge

                Re: No

                I know MySQL and I know that only when InnoDB tables are used does it enforce referential integrity. 25 years ago this wasn't an option but you could still declare FOREIGN KEYS and be blissfully ignorant that the statement would be ignored. If you're in a situation where you have to backfit this, you'll feel the full pain of the poor design when the table locks bite as you fix the schema. And this still seems to be common practice. This is from the Typo3 schema:

                CREATE TABLE `fe_sessions` (
                  `ses_id` varchar(190) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
                  `ses_iplock` varchar(39) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
                  `ses_userid` int UNSIGNED NOT NULL DEFAULT '0',
                  `ses_tstamp` int UNSIGNED NOT NULL DEFAULT '0',
                  `ses_data` mediumblob,
                  `ses_permanent` smallint UNSIGNED NOT NULL DEFAULT '0'
                ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

                ALTER TABLE `fe_sessions`
                  ADD PRIMARY KEY (`ses_id`),
                  ADD KEY `ses_tstamp` (`ses_tstamp`);

                I'm guessing that ses_userid relates to a user somewhere but nothing will enforce this.
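For illustration, the missing constraint might look like the following. The referenced table (`fe_users`) and column (`uid`) are my assumption here, not taken from the actual Typo3 schema, and the column types on both sides would have to match exactly:

```sql
-- Hypothetical: assumes a `fe_users` table keyed on `uid` of a matching type.
ALTER TABLE `fe_sessions`
  ADD CONSTRAINT `fk_fe_sessions_userid`
  FOREIGN KEY (`ses_userid`) REFERENCES `fe_users` (`uid`);
```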

                1. David Harper 1

                  Re: No

                  Ah, you're talking about MyISAM, the bane of every MySQL DBA's life. Yes, legacy applications that rely on MyISAM are a world-class PITA. I used to work with a team that refused to switch to InnoDB because they liked to move entire schemas between instances using filesystem-level copying of the data files. You could (with care!) do that with MyISAM. The manual says that you can now do it with InnoDB using the "Transportable Tablespace" feature, but any user who comes to me and asks about that will get the Paddington Bear Hard Stare.

                  1. Charlie Clark Silver badge

                    Re: No

                    Well, as you can see this schema is not using MyISAM tables… Why it doesn't have relational integrity in the DB is something I will probably never understand.

                    FWIW you can move Postgres files from one machine to another and, as long as the version is the same, things should "work", though you might want to reindex. I thought InnoDB might be as robust as Postgres by now, but guess not. AFAIK Oracle bought InnoDB before they bought MySQL.

                    Like I say, I can't really see any reason now for not going straight with Postgres. MySQL used to have better GUI tools but PGadmin will be good for many DBAs and I like Valentina DB as a client, though I'd prefer slightly less steep licensing.

    4. frankvw Bronze badge

      Re: Avoid If Possible

      "Think of banking, accounting, critical records keeping, policing, personell records..."

      With all due respect: nobody needed to be told that. ;-)

      MySQL is most often used in the FOSS world, for websites and for DIY applications (i.e. software that's been/being developed in-house). That's fine, but how many banks, accounting firms/departments, critical archiving organisations/departments, HR departments etc. actually contemplate using FOSS or DIY software for that sort of thing? If I suggested using LAMP to a bank for critical applications I'd be booted out the door so fast I'd cause a sonic boom.

      1. Charlie Clark Silver badge

        Re: Avoid If Possible

        Oh, I think there are lots of e-commerce sites running on it and probably fine. But I wouldn't want to see what they've had to write themselves to work around MySQL's limitations.

        1. Alan Brown Silver badge

          Re: Avoid If Possible

          Not just e-commerce sites

          Extremely useful stuff like GLPI is written using MySQL and there's a LOT of kludge in the code to try and work around issues that simply don't occur with other DBs

          Nagging devs who've been working on a single database for 15 years tends to fall on deaf ears. Nothing's going to happen until they're FORCED to change horses

  6. BinkyTheMagicPaperclip Silver badge

    Currently exhibiting 'surprised pikachu face'

    I was under the impression that MySQL previously wasn't suitable for valuable records, but that had improved, so it's quite the surprise to see the comments above and do a Google search that basically seems to confirm join performance is STILL a problem. A SQL server without decent join performance isn't a SQL server, it's a toy.

    Mostly I've used MS SQL which has been solid since SQL server 2000 (with a few exceptions[1]). For open source I always chose PostgreSQL[2], even if in past years getting Perl to talk to a PostgreSQL backend under Windows was troublesome (largely solved by Strawberry Perl). PostgreSQL's backup facilities are to say the least primitive, but they can be made to work, and it *is* free.

    [1] Constraints and referential integrity are important, but I have seen corruption in MS SQL databases. It's been a rarity in some quite sizeable databases, but it has happened. However, given the age of the systems I can't say for certain whether constraints were added after the data were imported, or exactly which SQL Server version the data originated in; it will have migrated between at least 3-4 major SQL releases. I've also seen some oddities in SQL Server 2019 which seem to indicate undefined behaviour changed in that release after years of stability, but it's a bit off to blame Microsoft for that when the offending query looks like the developers were on crack that day, and the development team can't explain the logic either, the reason being lost in the mists of time.

    [2] or SQLite for embedded. Being able to easily link in an in memory database in either open source or Powershell code is incredibly useful.

    1. David Harper 1

      Re: Currently exhibiting 'surprised pikachu face'

      "PostgreSQL's backup facilities are to say the least primitive"

      That may have been true ten years ago, but the pg_basebackup tool has been part of the standard PostgreSQL distribution for more than a decade now. It allows you to make hot backups of both local and remote PostgreSQL clusters, and re-building a working database cluster from the resulting backup fileset is so easy that even an intern could be trusted not to screw it up. The capability to backup a remote database cluster also makes pg_basebackup the perfect tool for setting up standby clusters quickly and easily. I'd hardly characterise those kinds of capabilities as "primitive".

    2. Charlie Clark Silver badge

      Re: Currently exhibiting 'surprised pikachu face'

      To clarify: JOIN performance is largely dependent upon the existence of indices, which most DBMSes will either automatically add or require when a FOREIGN KEY is declared. This avoids the need for costly table scans. MySQL broke this by accepting the FOREIGN KEY syntax but ignoring it, and using table scans whenever tables were joined, causing many developers to think, not unreasonably, that JOINs were the problem. The use of autoincrement instead of sequences can also frequently lead to problems. Really, nowadays Postgres is a great go-to system, or SQLite for embedded systems.

      1. David Harper 1

        Re: Currently exhibiting 'surprised pikachu face'

        "MySQL broke this by supporting, but ignoring the syntax, and using table scans whenever tables were joined, causing many developers to think, not unreasonably, that JOINs were the problem."

        I've read and re-read this several times, but I still don't understand what you're saying. MySQL has always used indexes to perform joins, if there are suitable indexes. Its query analyser may not always pick the best index for the job, but it does have the EXPLAIN command, and an experienced developer or DBA will always run a new JOIN query through EXPLAIN to find out what indexes the query analyser is planning to use. MySQL also has a handy extension called index hints, which allow you to say to MySQL "no, don't use that index, use this index instead", when EXPLAIN shows you that it's picking the wrong index itself.
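For readers who haven't used them, the two features mentioned look like this in MySQL (table and index names invented for the sketch):

```sql
-- Ask the query analyser what it plans to do:
EXPLAIN SELECT o.* FROM orders o JOIN users u ON u.id = o.user_id;

-- Override its index choice if EXPLAIN shows it picking badly:
SELECT o.*
FROM orders o FORCE INDEX (idx_orders_user_id)
JOIN users u ON u.id = o.user_id;
```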

        Occasionally, the query analyser will determine that none of the available indexes will give better performance than a full table scan. That can happen if you have an index on a column that has only a couple of values, and the data are split 50:50 between them. That index is pretty useless, and a full table scan will in fact be faster. That's not MySQL's fault, of course, but trying to explain that to some developers is like trying to explain quantum mechanics to my cat. Pointless and annoying for both of us.

        1. Charlie Clark Silver badge

          Re: Currently exhibiting 'surprised pikachu face'

          Let me rephrase that slightly: on MySQL you're expected to implement the indices yourself manually and hope the query planner takes advantage of them. If they exist, then this will indeed boost performance. But on other RDBMSes, and the one I'm most familiar with, FOREIGN KEYs are essential for referential integrity and cannot be added if the relevant unique indices do not exist. This check must occur as soon as the key is declared. This is a fundamental problem with the use of MySQL for anything where data integrity is important: relational integrity must be guaranteed by the RDBMS, otherwise it simply isn't relational; the performance benefits that accrue because of the indices are not the reason for their existence.

          Tooling has got better under Oracle's stewardship, but comparing the output from EXPLAIN with that from Postgres shows the real gulf between systems. MySQL may have introduced many developers to DBMS but I think most of those who subsequently discovered and moved to Postgres or other systems will never have looked back.

  7. Joe Burmeister

    Linux land

    When talking about MariaDB, this article should really point out that MariaDB replaced MySQL in most Linux distros. You install MySQL and you get MariaDB. That's a lot of server installs.

  8. Stephen Booth

    Horses for courses

    Gosh so much hate in the comments.

    Clearly some people have found MySQL/MariaDB does not work for their use-cases, but its popularity also suggests it's quite good for others.

    My own experience is that we have been pretty happy using it for a long time now. (Our current database seems to have first started in Dec 2006, but it's had various version/hosting updates.)

    Interesting to hear that people think PostgreSQL is faster at joins. I use joins a lot for aggregation queries that are hugely faster in SQL than trying to do the equivalent operation at the application layer. I don't see us putting in the effort to migrate though.
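That pattern, pushing the aggregation into the database rather than looping in application code, is worth a concrete sketch (SQLite here for portability; the tables are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (10, 1, 5.0), (11, 1, 7.5), (12, 2, 3.0);
""")

# One round trip: the join and the aggregation both happen in the engine.
rows = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    ORDER BY u.name
""").fetchall()
print(rows)  # [('Alice', 12.5), ('Bob', 3.0)]
```

The application-layer equivalent fetches every order over the wire and sums in a dict; with millions of rows, that difference is exactly the one described above.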

    We started before foreign keys worked properly, but it was never really much of an issue because the majority of our data is fairly static once created, records that do experience lots of updates never change their reference fields after their initial creation, and it's very rare for us to delete records.

    If I have any niggle, it's that table metadata operations are a bit slow. Nothing that's a problem in production, but it's noticeable in unit tests, where each test tends to re-create all the tables it uses.

  9. Anonymous Coward
    Anonymous Coward

    A Challenge ...

    Reading the comments, I am doubly glad I haven't had anything to do with databases since the early 90s, when I was writing some stuff in C using the Oracle V† C call interface and some Pro*C embedded SQL (surprisingly, the former was actually a lot easier).

    The relational model has taken a fair bit of stick in the last three decades mostly undeserved. Codd's original 1970 paper is still worth reading to be clear about the relational part. I have always valued CJ Date's writings on databases - they are remarkable for both their clarity and readability.

    My challenge would be for the Rust developer fraternity to build a true rdbms from the ground up. I can see that some of the concerns Rust addresses are also mirrored in those found in databases.

    † Vax/Ultrix
