Life after proprietary wares: German support biz flees IBM Db2 databases for something more Postgres-shaped

A German IT services outfit specialising in the insurance market has migrated 500 IBM Db2 databases to the EnterpriseDB (EDB) iteration of Postgres in a sign of life after proprietary wares. BG-Phoenics joins Indian credit reference agency TransUnion CIBIL, which shifted Oracle workloads to EDB, providing evidence for the …

  1. Robert Grant

    > It may be for a very good department-size database; just good enough.

    What on earth is a department-sized database?

    1. SecretSonOfHG

      <<What on earth is a department-sized database?>>

      Whatever has evolved from a simple prototype by your local Access "guru" into something so crash-prone, so frequently self-corrupting, and so fragile and unresponsive that everyone somehow thinks moving it to a "proper" SQL database instance on its own dedicated machine will magically solve all the problems (hint: only the tendency to crash and corrupt itself is solved; the rest of the issues remain).

      1. Robert Grant

        So they think Postgres is just about good enough for that?

    2. werdsmith Silver badge

      Most databases deployed on Oracle would be fine or better on SQL Server, and a very large proportion of those would be fine on Postgres. People who are emotionally attached, you can understand that: they have years of experience and training which give a certain comfort. But embrace something new, or be the legacy guy.

      1. veti Silver badge

        For "emotionally attached", I read "I spent ten years learning all the undocumented tricks of this beast, I know how to make it do stuff that I'm never going to tell another living soul about, that's my job security. And you're throwing it out?"

        1. Blackjack Silver badge

          Well, technology moves on... or you end up using a VM.

          My BASIC programming skills have got really rusty, but besides updating a very old DOS BASIC library database program so it supports the next hundred years, what could I do with BASIC nowadays?

          1. deater

            enter the demoscene of course. see the recent winner: https://www.pouet.net/prod.php?which=87462

        2. This post has been deleted by its author

    3. Marcelo Rodrigues

      "

      > It may be for a very good department-size database; just good enough.

      What on earth is a department-sized database?"

      Sirius Cybernetics Headquarters.

  2. Anonymous Coward

    Good for them. I just hope that we don't see a repeat of Microsoft and Munich... where IBM happens to move in next door and "persuades" them to switch back.

    But I guess that was public funds so it's a little different.

    1. Charlie Clark Silver badge

      There's also a big difference in the user experience: users are usually far less sensitive to the OS or server software than they are to a new brand (or even just a new version) of their desktop software. And, much as I like OpenOffice myself, it's difficult to argue that Microsoft hasn't learned some lessons and focussed on better versions of its office software.

      When it comes to databases, I'm pretty sure the 80/20 rule applies: 80% of any company's databases could run on pretty much any RDBMS; the other 20% might take some, or even a lot, of work.

      1. Blackjack Silver badge

        OpenOffice has not been updated in ages; please use LibreOffice instead.

        1. Charlie Clark Silver badge

          Thanks for the advice, but repeated tests of LibreOffice have led to crashes, so I'll stick with the more stable OpenOffice, which has recently had a few patch releases.

          1. Blackjack Silver badge

            Just use the stable version, aka whatever version is current on Debian stable.

            Or you could risk using 6.4.7, which is the "Still" version.

            It's a shame AbiWord is dead; I liked that thing.

  3. SecretSonOfHG

    1,500 operational databases supporting 8,500 IT users?

    So in BG Phoenics the average user count for each database was less than six users? I'm sure I'm missing something. Perhaps they had 10 DBs with 700 users each and then 375 DBs with four users per DB?

    1. renke

      Re: 1,500 operational databases supporting 8,500 IT users?

      Full disclosure: that's a guess. But I've done IT in the German healthcare sector for some time and have a grasp of the methods (I hope...).

      Phoenics runs the software for some of the players under the umbrella of the accident insurance system - those are organised by branch (one so-called 'Berufsgenossenschaft' per industrial sector) and by region (often one organisation per sector per state). One database per insurer is likely the minimum (the easiest way to safeguard the privacy rules around patients' data), and I assume they open a new database per year or even per quarter, so the number of DBs will explode rather fast.

  4. W.S.Gosset

    Oracle

    The standard industry-wide joke-but-not-a-joke traditionally was that the best platform to run Oracle on was a slide projector.

    I used to be an R&D coder at an Oracle (engine) competitor in the early 90s. Their marketing was fierce$ and frankly amazing$$, but their engine (and its usability) was crap. Put it this way: Oracle still couldn't do SQL92 Transactions ("Isolation") last time I checked (~5 yrs ago), except at the granularity of a table. Yes, if you want to guarantee data Consistency in a non-single-user database, you have to table-lock. Everything. Every time. They fudge it for marketing/snowing purposes by redefining Isolation.

    Not a joke.

    Now, this IS documented in their documentation, but the obfuscation and scattering of it is truly a work of art. (Summary: if you want a multi-user database and you don't want data corruption, you have to toggle a setting (isolation=true iirc), then restart the server, and then all non-unsafe, non-read-only locks are table locks.) No one does this in practice: multiuser Oracle DBs are the data version of Security By Obscurity: "you're mostly all right, and if it DOES go wrong, no one's going to notice. Right?"
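
    To make the terminology concrete: this is roughly what asking for SQL-92 serializable isolation looks like in standard SQL (Postgres-flavoured here; the accounts table and values are invented for illustration). What an engine actually does once you have asked for it is exactly what the paragraph above is arguing about.

    ```sql
    -- Minimal sketch: request SQL-92 SERIALIZABLE isolation for one transaction.
    -- Table and values are hypothetical; the SET TRANSACTION line is the point.
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    UPDATE accounts
       SET balance = balance - 100
     WHERE account_id = 42;

    COMMIT;
    ```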

    Sybase/SQLServer had the same design/architecture issues but competed on speed, which it achieved by having separate codebases for each query plan.

    Not a joke.

    BIG problem: many nontrivial query plans' codebases were buggy. And you could only discover this by experimentation and then result-set data-crawling. I distinctly remember discovering this on first use: a simplistic 3 (4?) table join produced Cartesian Products unless you re-ordered the table list. Only one of the list orders worked. I banned Sybase use in the Australian office immediately and switched to SAS's SQL. In current-$ terms, I was quant trading A$2bn and joint head of research on half a US$ trillion, so I was more than happy to pay away 10-20% performance for 100% Correctness.

    1. W.S.Gosset

      Sybase / SQLServer

      Microsoft bought Sybase, slapped the SQLServer badge on it, then the coders hit the horror of the code. And proceeded to rewrite the living shit out of it.

      I hammered SQLServer ~3 yrs ago and was quite happy with it. Found no bugs, predictable behaviour, effortlessly tunable optimisation in the very rare cases you needed it (anyone "requiring" Hint syntax does not understand Indexes and, from what I've seen, tends to need a better understanding of their specific Engine than those who do, which rather defeats Hints' notional purpose), and it had delightfully, finally implemented result-set calculations. So you can now do running subtotals etc. in SQL, just like _WE_ used to casually do in our SQL engine in the late 80s/early 90s (...) (progress!)

      Speed-wise, it seemed OK. The portfolio value was A$488bn, nearly half a trillion; the feeder system (investment management and accounting) had 3GL code to produce a single day's Trial Balance in ~20 mins, which annoyed me because it was only a single day. So I wrote a longish query which produced a whole-of-history Trial Balance with running totals each day for each account, for each deal/instrument/trade, for each currency (first time in years I've deliberately included a Cartesian Product), and it ran in 10-15 seconds.
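
      For the curious: the "result-set calculations" above are what most dialects now call window functions. A minimal sketch of a per-account running total, with an invented ledger table standing in for the real trial-balance data:

      ```sql
      -- Hypothetical ledger table; the OVER (...) clause produces the running total.
      SELECT account_id,
             trade_date,
             amount,
             SUM(amount) OVER (PARTITION BY account_id
                               ORDER BY trade_date
                               ROWS UNBOUNDED PRECEDING) AS running_balance
        FROM ledger
       ORDER BY account_id, trade_date;
      ```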

      1. Tim99 Silver badge

        Re: Sybase / SQLServer

        OK, I was there and still have the scars from the first versions. Microsoft certainly did not buy Sybase. It was a joint venture between Ashton-Tate (dBase), Sybase and Microsoft. If you were nearly as cynical as I am/was, you might well have thought from the start that it would not end well for the other two companies. MS tended to have an "unfortunate" history with competitors/partners, but MS usually did OK :-)

        MS will tell you that they bought a software developer distributor licence from Sybase, which *was* basically true. The early versions of SQL Server (4.x-6.x) were written in C and somewhat limited, but worked well enough, except for things like cascading deletes/updates, which were normally written as triggers. As I recall, the database plan could be structured to show that cascades were there, but the DDL structures didn't actually work. MS rewrote it in C++ for V7, and that was the first of the MS-only products.
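
        To illustrate the two approaches being contrasted (all object names here are invented): today the cascade is declared on the foreign key itself, whereas the workaround of that era was a hand-written delete trigger, sketched below in T-SQL-style syntax.

        ```sql
        -- Parent table, purely for illustration.
        CREATE TABLE customers (
            customer_id INT PRIMARY KEY
        );

        -- What you would declare today: the cascade lives on the foreign key.
        CREATE TABLE orders (
            order_id    INT PRIMARY KEY,
            customer_id INT NOT NULL
                REFERENCES customers (customer_id) ON DELETE CASCADE
        );

        -- The era's workaround: no cascading key, just a trigger doing the delete by hand.
        CREATE TRIGGER trg_customers_delete ON customers
        FOR DELETE
        AS
            DELETE FROM orders
            WHERE customer_id IN (SELECT customer_id FROM deleted);
        ```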

        Sybase continued to develop the product independently, and it was still a fast, viable system. Sybase bought PowerBuilder, which was used by some large corporates but seemed to drift until 2010, when SAP "merged" with them (SAP "bought" them, and killed the name in 2014).

        Ashton-Tate had their own problems - the story is that some of the original product was based on code written for JPL (federally funded and not copyrightable). This caused problems when A-T tried to limit the spread of clones and dBASE compilers (like Clipper). dBASE IV was slow and buggy, and on our Novell networks could not be loaded without removing the network functionality (they both tried to load into the same expanded/extended memory). These well-publicized problems effectively crippled it in the market. Borland bought A-T, but favoured their own Paradox product. MS bought the dBASE work-alike FoxBASE/FoxPro and started to dominate that market. Eventually Borland sold the product range to Novell...

        I suppose that I was "lucky" in avoiding many of the dBASE problems, as I had decided to write almost all of my/our small networked PC database systems (at a very large public utility) with MicroRim's R:BASE, which had menus, an ad-hoc command generator, forms and reports, and could be programmed in ANSI SQL. Some of these grew to hold a few hundred thousand rows and had ~20 concurrent users on one network, and ~6 on each of another 4 sites - we merged the data by shipping floppy disks around. Originally MS sold R:BASE under licence. MicroRim started to have their own problems (they spent a lot of time/money porting to a DEC-based server product, just when DEC was struggling), and then MS brought in some R:BASE developers to help write MS Access, which obviously changed the market again.

        These days I'm retired and the only database I use (for my own stuff) is SQLite...

  5. trevorde Silver badge

    "Nothing against Oracle"

    Said no-one ever

  6. Bryan Hall

    Single threaded writer

    As of the conferences last year at least, the biggest limitation to Postgres in terms of performance is that it has a single-threaded writer. You can have multithreaded readers, just not writers. For databases that have a lot of writes, this is a huge bottleneck compared to databases like Oracle. However, for the majority of databases, this is not an issue.

    1. Charlie Clark Silver badge

      Re: Single threaded writer

      I'm sure pgloader runs copying in parallel, though this might of course be using processes rather than threads. Throughput is in any case pretty impressive.
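
      For a feel of what that parallelism amounts to at the SQL level (file paths and table names invented): each COPY streams one file into one table, and the concurrency comes from running several such streams over separate connections at once, whether those connections belong to threads or to processes.

      ```sql
      -- Session 1: bulk-load one table over Postgres' COPY path.
      COPY policies FROM '/data/policies.csv' WITH (FORMAT csv, HEADER true);

      -- Session 2, on its own connection at the same time: load another table.
      COPY claims   FROM '/data/claims.csv'   WITH (FORMAT csv, HEADER true);
      ```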
