> It may be for a very good department-size database; just good enough.
What on earth is a department-sized database?
A German IT services outfit specialising in the insurance market has migrated 500 IBM Db2 databases to the EnterpriseDB (EDB) iteration of Postgres in a sign of life after proprietary wares. BG-Phoenics joins Indian credit reference agency TransUnion CIBIL, which shifted Oracle workloads to EDB, providing evidence for the …
<<What on earth is a department-sized database?>>
Whatever has evolved from a simple prototype by your local Access "guru" into something that is crash-prone, corrupts itself so often, and is so fragile and unresponsive that everyone somehow thinks moving to a "proper" SQL database instance on its own dedicated machine will magically solve all the problems (hint: only the tendency to crash and corrupt itself is solved; the rest of the issues remain)
Most databases deployed on Oracle would be fine or better on SQL Server, and a very large proportion of those would be fine on Postgres. You can understand people who are emotionally attached: years of experience and training give a certain comfort. But embrace something new, or be the legacy guy.
There's also a big difference in the user experience: users are usually far less sensitive to the OS or server software than they are to a new brand (or even just a new version) of their desktop software. And, much as I like OpenOffice myself, it's difficult to argue that Microsoft hasn't learned some lessons and focussed on better versions of its office software.
When it comes to databases, I'm pretty sure the 80/20 rule would apply: 80% of any company's databases could run on pretty much any RDBMS; the other 20% might take some, or even a lot, of work.
Full disclosure: that's a guess. But I've done IT in the German healthcare sector for some time and have a grasp of the methods (I hope...)
Phoenics runs the software for some of the players under the umbrella of statutory accident insurance. Those are organised by branch (one so-called 'Berufsgenossenschaft' per industrial sector) and by region (often one org per sector per state). One database per insurer is likely the minimum (the easiest way to safeguard the privacy rules around patients' data), and I assume they open a new database per year or even per quarter -> the number of DBs will explode rather fast.
The standard industry-wide joke-but-not-a-joke traditionally was that the best platform to run Oracle on was a slide projector.
I used to be an R&D coder at an Oracle (engine) competitor in the early 90s. Their marketing was fierce$ and frankly amazing$$, but their engine (and its usability) was crap. Put it this way: Oracle still couldn't do SQL92 Transactions ("Isolation") last time I checked (~5yrs ago), except at the granularity of a Table. Yes, if you want to guarantee data Consistency in a non-single-user database, you have to table-lock. Everything. Every time. They fudge it for marketing/snowing purposes by redefining Isolation.
Not a joke.
Now, this IS documented in their documentation, but the obfuscation and scattering of it is truly a work of art. (Summary: if you want a multi-user database and you don't want data corruption, you have to toggle a setting (isolation=true, iirc), then restart the server, and then all non-unsafe, non-read-only locks are Table locks.) No one does this in practice: multi-user Oracle DBs are the data version of Security By Obscurity: "you're mostly all right, and if it DOES go wrong, no one's going to notice. Right?"
Sybase/SQLServer had the same design/architecture issues but competed on speed, which it achieved by having separate codebases for each query plan.
Not a joke.
BIG problem: many nontrivial query plans' codebases were buggy. And you could only discover this by experimentation, then result-set data-crawling. I distinctly remember discovering this on first use: a simplistic 3 (4?) table join produced Cartesian Products unless you re-ordered the table list. Only one of the list orders worked. I banned Sybase use in the Australian office immediately and switched to SAS's SQL. In current-$ terms, I was quant trading A$2bn and joint head of research on half a US$ trillion, so I was more than happy to pay away 10-20% performance for 100% Correctness.
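For readers who haven't been bitten by this: what a Cartesian product "bug" looks like in a result set is easy to sanity-check by row counts. A minimal sketch, using SQLite via Python and an entirely hypothetical three-table schema (the Sybase bug above happened on a *correct* query; here the product is triggered deliberately by dropping one join predicate, which produces the same symptom):

```python
import sqlite3

# Hypothetical schema, for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE trades(id INTEGER, instrument_id INTEGER);
    CREATE TABLE instruments(id INTEGER, ccy_id INTEGER);
    CREATE TABLE currencies(id INTEGER, code TEXT);
    INSERT INTO trades VALUES (1, 10), (2, 10), (3, 11);
    INSERT INTO instruments VALUES (10, 100), (11, 101);
    INSERT INTO currencies VALUES (100, 'AUD'), (101, 'USD');
""")

# Correct 3-table join: exactly one row per trade.
good = con.execute("""
    SELECT t.id, c.code
    FROM trades t
    JOIN instruments i ON i.id = t.instrument_id
    JOIN currencies  c ON c.id = i.ccy_id
""").fetchall()

# Omit the last join predicate (SQLite treats a bare JOIN as a cross
# join) and the result silently becomes a partial Cartesian product:
# 3 trades x 2 currencies = 6 rows instead of 3.
bad = con.execute("""
    SELECT t.id, c.code
    FROM trades t
    JOIN instruments i ON i.id = t.instrument_id
    JOIN currencies  c
""").fetchall()

print(len(good), len(bad))  # 3 6
```

The row-count comparison is the same "result-set data-crawling" check described above: if a join over N parent rows returns a multiple of N with no aggregation in sight, suspect a product.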
Microsoft bought Sybase, slapped the SQLServer badge on it, then the coders hit the horror of the code. And proceeded to rewrite the living shit out of it.
I hammered SQLServer ~3yrs ago and was quite happy with it. Found no bugs, predictable behaviour, and effortlessly tunable optimisation in the very rare cases you needed it (anyone "requiring" Hint syntax does not understand Indexes, and from what I've seen tends to need a better understanding of their specific Engine than those who do, which rather defeats Hints' notional purpose). And, delightfully, it had finally implemented result-set calculations, so you can now do running subtotals etc. in SQL, just like _WE_ used to casually do in our SQL engine in the late 80s/early 90s (...) (progress!)
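The "indexes, not hints" point generalises across engines: a planner picks an index when a usable one exists, so the first lever is the schema, not hint syntax. A minimal sketch in SQLite (not SQL Server, and with a hypothetical table) using `EXPLAIN QUERY PLAN` to watch the plan flip from a scan to an index search once the index exists:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical table, for illustration only.
con.executescript("""
    CREATE TABLE deals(id INTEGER PRIMARY KEY, account TEXT, amount REAL);
    INSERT INTO deals VALUES (1, 'cash', 100.0), (2, 'fees', -5.0);
""")

QUERY = "SELECT * FROM deals WHERE account = 'cash'"

# Without an index on the filtered column, the plan is a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + QUERY).fetchall()

con.execute("CREATE INDEX idx_deals_account ON deals(account)")

# With the index in place, the planner switches to an index search
# with no hint required.
plan_after = con.execute("EXPLAIN QUERY PLAN " + QUERY).fetchall()

print(plan_before[0][-1])  # e.g. "SCAN deals"
print(plan_after[0][-1])   # e.g. "SEARCH deals USING INDEX idx_deals_account (account=?)"
```

The exact plan wording varies between SQLite versions, but the scan-vs-index distinction is stable, and the same "read the plan before reaching for hints" workflow applies to SQL Server's execution plans.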
Speed-wise, it seemed OK. The $value of the portfolio was A$488bn (nearly half a trillion), and the feeder system (investment management and accounting) had 3GL code that took ~20mins to produce a single day's Trial Balance. That annoyed me because it was only a single day, so I wrote a longish query which produced a whole-of-history Trial Balance with running totals for each day, for each account, for each deal/instrument/trade, for each currency (the first time in years I've deliberately included a Cartesian Product). It ran in 10-15 seconds.
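The "result-set calculations" mentioned above are what modern SQL calls window functions. A minimal sketch of a running balance per account, using SQLite via Python (requires a build with window-function support, SQLite 3.25+); the ledger table and values are hypothetical, not the trial-balance system described:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical mini-ledger, for illustration only.
con.executescript("""
    CREATE TABLE ledger(account TEXT, day INTEGER, amount REAL);
    INSERT INTO ledger VALUES
        ('cash', 1, 100.0), ('cash', 2, -40.0), ('cash', 3, 15.0),
        ('fees', 1,  -5.0), ('fees', 3,  -5.0);
""")

# SUM() OVER (...) computes a running total per account, ordered by day,
# without collapsing the detail rows the way GROUP BY would.
rows = con.execute("""
    SELECT account, day, amount,
           SUM(amount) OVER (
               PARTITION BY account
               ORDER BY day
               ROWS UNBOUNDED PRECEDING
           ) AS running_total
    FROM ledger
    ORDER BY account, day
""").fetchall()

for r in rows:
    print(r)
# ('cash', 1, 100.0, 100.0)
# ('cash', 2, -40.0, 60.0)
# ('cash', 3, 15.0, 75.0)
# ('fees', 1, -5.0, -5.0)
# ('fees', 3, -5.0, -10.0)
```

The whole-of-history query described above is the same construction scaled up: partition by account/instrument/currency, order by day, and let one pass over the history carry the running totals.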
OK, I was there and still have the scars from the first versions. Microsoft certainly did not buy Sybase. It was a joint venture between Ashton-Tate (dBASE), Sybase, and Microsoft. If you were nearly as cynical as I am/was, you might well have thought from the start that it would not end well for the other two companies. MS tended to have an "unfortunate" history with competitors/partners, but MS usually did OK :-)
MS will tell you that they bought a software developer/distributor licence from Sybase, which *was* basically true. The early versions of SQL Server (4.x-6.x) were written in C and somewhat limited, but worked well enough, except for things like cascading deletes/updates, which were normally written as triggers. As I recall, the database plan could be structured to show that cascades were there, but the DDL structures didn't actually work. MS rewrote it in C++ for V7, and that was the first of the MS-only products.
Sybase continued to develop the product independently, and it was still a fast, viable system. Sybase bought PowerBuilder, which was used by some large corporates but seemed to drift until 2010, when SAP "merged" with them (SAP "bought" them, and killed the name in 2014).
Ashton-Tate had their own problems. The story is that some of the original product was based on code written for JPL (federally funded and not copyrightable), which caused problems when A-T tried to limit the spread of clones and dBASE compilers (like Clipper). dBASE IV was slow and buggy, and on our Novell networks could not be loaded without removing the network functionality (they both tried to load into the same expanded/extended memory). These well-publicised problems effectively crippled it in the market. Borland bought A-T, but favoured their own Paradox product. MS bought the dBASE work-alike FoxBASE/FoxPro and started to dominate that market. Eventually Borland sold the product range to Novell...
I suppose that I was "lucky" in avoiding many of the dBASE problems, as I had decided to write almost all of my/our small networked PC database systems (at a very large public utility) with MicroRim's R:BASE, which had menus, an ad-hoc command generator, forms and reports, and could be programmed in ANSI SQL. Some of these grew to hold a few hundred thousand rows, with ~20 concurrent users on one network and ~6 on each of another 4 sites; we merged the data by shipping floppy disks around. Originally MS sold R:BASE under licence. MicroRim started to have their own problems (they spent a lot of time/money porting to a DEC-based server product, just when DEC was struggling), and then MS brought in some R:BASE developers to help write MS Access, which obviously changed the market again.
These days I'm retired and the only database I use (for my own stuff) is SQLite...
As of the conferences last year, at least, the biggest limitation of Postgres in terms of performance is that it has a single-threaded writer: you can have multithreaded readers, just not writers. For databases with a lot of writes, this is a huge bottleneck compared to databases like Oracle. However, for the majority of databases, it is not an issue.