"We have asked Oracle to comment"
Financial services software specialist FNI has dodged a $2m licensing bullet by jumping off Oracle databases and lifting its MariaDB environment to the cloud. The banking technology services company is moving its loan risk management system to MariaDB's DBaaS SkySQL, with a heritage in the open-source MySQL family, hosted on …
Yes but when the choice is between "in the cloud" or "down a deep well / mine, wherein it is guarded by a cabal of hooded dwarf cultists who will demand sacrifices by way of burning a virgin on a hill and throwing fifty bags of gold pieces into their well if you want access to your data", I'd choose the cloud, even though I detest the notion of keeping my data "on someone else's computer".
At least with MariaDB I should be able to migrate back down to earth later on if I choose, e.g. into a dedicated Postgres cluster.
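If that day ever comes, tools like pgloader can do most of the schema-and-data move in one shot. A minimal sketch of a pgloader load file; the hostnames, credentials and database name are hypothetical:

```
LOAD DATABASE
     FROM mysql://app:secret@maria-host/riskdb
     INTO postgresql://app:secret@pg-host/riskdb
 WITH include drop, create tables, create indexes, reset sequences;
```

pgloader speaks MySQL's wire protocol, so it works against MariaDB too; type mappings and any stored procedures still need manual review afterwards.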
For any running system, the state of success is always "so far." So far, they've saved $2M in licensing costs, substantially improved their understanding of the code they're running, and gotten out from under the Oracle boot, all of which I would classify as successes. If data leaks, that eventuality will be classified as a failure.
This is not actually true.
Before the Chinese entrants arrived, Oracle dominated the TPC-C benchmark with SPARC and 11g. Yes, SPARC.
"OceanBase" has put an end to that.
I don't speak Chinese, and I can't read the documentation. However, second-best is still what you have to use when you really, really need it.
"I'm not sure why anyone is using Oracle for anything but legacy databases at this point. The competition is so much cheaper and is in most ways faster and easier to use. Additionally, Oracle is terrible to deal with, I'd rather pull out my own fingernails than talk to Oracle support"
...plus there's that call/letter that comes later: "Dear Customer, we have reviewed your account and we think that you are in a license underpayment situation..."
You can have reliability, features and low cost. Pick 2.
I've dealt with Oracle as a company and I can't stand them, but - and it pains me to say it - their RDBMS is top notch, and their RDBMS devs and engineers really know what they're doing. I'm afraid comparing MariaDB to Oracle is like comparing a Ford Fiesta to a Rolls-Royce in both features and reliability. There is no way in hell I'd bet the company on something derived from MySQL, which is little better than an SME toy DB.
There are a vast number of features in Oracle RDBMS, but how many of them actually get used in a typical application, and how many are used just because they're the latest thing? It's a bit like MS Word: an un$(deity) number of features, but how many actually get used?
The car comparison is actually really appropriate: both have a wheel on each corner and an engine, but you can buy 14 Fiestas for your one Rolls-Royce (new 2020 versions). The Rolls may be much nicer in every comparison, but one Rolls will not be able to do the work of 14 Fiestas, and the maintenance cost of one Rolls is higher than that of 14 Fiestas.
If you really need the Rolls then by all means you should get one, but don't diss the humble Fiesta when it can still do a decent job.
I remember working for Ingres in the mid-80's, when Ingres and Oracle were the same size. Oracle had a soundex (sp?) function that would search for words that sounded like others. There was almost never any use for it, but Ingres didn't have it. Thus Oracle told all the prospects that you had to have it, and so of course you couldn't possibly use Ingres.
Some things don't change, I guess.
There's actually a US government standard for how the SOUNDEX function works; I once wrote one in an afternoon.
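For the curious, it really is an afternoon job. A minimal sketch of American Soundex as described in the US National Archives standard (the function name and exact edge-case handling here are my own reading of it):

```python
def soundex(name: str) -> str:
    """American Soundex: keep the first letter, map the remaining
    consonants to digits, collapse runs, pad/truncate to 4 chars."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    result = name[0]
    prev = codes.get(name[0], "")
    for c in name[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            result += code
        # H and W don't break a run of same-coded letters;
        # vowels do, so a repeated code after a vowel is kept.
        if c not in "HW":
            prev = code
    return (result + "000")[:4]
```

So "Robert" and "Rupert" both code to R163, which is the whole point of the thing: matching names that sound alike despite spelling differences.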
I suspect that was a good selling point for US government customers to be honest. Another box they can tick to support their decision "well it supports this US government standard here - I mean, we'll probably never use it, but just in case we ever need to..."
In the old days (certainly in the previous millennium), when you wanted to sell any computer system to Uncle Sam it had to come with a COBOL compiler, even if the system's purpose had nothing to do with that language.
Today that would imply that any shiny tablet or smartphone would also have to include such a feature...
And of course there was a validation procedure to see whether it was working as expected, with a lot of source code provided that had to be compiled and produce the expected results, including a lot of different errors.
So what did the manufacturers do?
They provided a compiler that was able to pass the test, but was usually totally unable to compile any real-world application...
(VW was not involved in this)
Future plans come into it. If you're running an old, static project, pick what does what you need and save some budget. But if you're using it for a range of products, if you're developing and expanding, and you picked the least-featured option (for speed, simplicity, whatever), then each time you're doing a databasey thing you rule out doing a new thing the easy way, because that other feature doesn't exist for you. Either you write it code-side (expensive), change database (expensive), or, worst case, simply don't do the thing there's a business case for doing (potentially fatally expensive long-term, if you have competitors).
Yes, obviously the company here was not really using much of the feature set. Otherwise migrating to MariaDB wouldn't really be an option.
I'd actually like to see Oracle (and SAP for that matter) competing on quality rather than relying on lock-in contracts and inertia.
I'd probably compare it as a fleet of fiestas vs. a segmented heavy goods vehicle.
You can nip lots of stuff rapidly all over the place with the fiestas, though over time, their reliability may well mean targets are missed, but it's not that important if you're using them for the right jobs.
The HGV lacks some of the flexibilities of using large amounts of small vehicles, but does the heavy lifting extremely well.
I did DBA work for a couple of decades, and always thought that part of the job was picking the right tool for the job. The more heterogeneous the environment, the more people you needed to support it properly, but the better the long-term results you got if you designed well.
On Oracle's features, I always found that they'd invest in something, often incredibly useful, and immediately pitch it as a feature you could only use if you paid more. Quite a bit of the time, people found that they could do without and use a different product (either due to price or lockin concerns or both). When Oracle realised they weren't making money out of it, they slipstreamed it to be a feature of a lesser version of a license (effectively making it 'free' if you bought that tier of license for other uses).
Oracle, SQL Server, Maria, PostgreSQL et. al. all have their places, and strengths and weaknesses; it's why I never understood the "holy wars" between them.
Kinda like saying the fiesta is better than the HGV because it's got a better acceleration, or the HGV is better because it carries tons (literally). Both _may_ be true in certain frames of reference, but aren't globally true (as I tell people in SQL classes I run, especially when I throw curveballs in the exercises in it: "An answer is easy to get if you go looking for it. _The_ answer is often quite elusive").
It's not so much the application side that matters - SQL is SQL, with a few differences here and there, though Oracle probably has the most extensive version of the language. However, it's the options available to DBAs that differentiate it from the wannabes. Plus, if you're running a multinational you want something that can guarantee 24/7 99.999% uptime, because any downtime would cost you far more than the licensing costs of the DB itself, so going for the Fiesta would be a false economy: you need the Roller. And no, the cloud is not an option when you're dealing with highly confidential data that often has to stay inside the organisation's firewall by law.
This was forms/risk management/assessment.
You do not need top-notch real-time transactions with recovery unless your LIVE operations are critical.
So what you do is partition the real-time-critical work to Oracle, then move the rest to Mongo or whatever.
OR use MQ properly and not use Oracle at all.
Look at the mainframe SAS customers who did the two-step to get off hostage contracts. Also, get your DBAs to present at technical conferences: "How we moved off Oracle". Those who do often get rock-bottom-priced contracts to stop doing so.
At least that shop rejected the Hotel California mantra, rather than bilk their customer more because of vendor greed - where no value was added.
Risk assessment is not live. It's not as if it's an irreversible transaction needing transaction recovery.
Secondly, it sounds like the old system and fluff/link was in legacy territory. Thirdly, vendors who charge more for ZERO added value need to be given the boot. Sadly, the other 90% just sit still and take it up the a**s. The CLOUD was supposed to be portable - you could move at the drop of a hat to another, based on merit. Vendors reacted by selling you a Hotel California cloud, then started adding time-of-day smart-meter access charges, cable-TV WTF charges, and licence audits.
Having to bilk your clients more, when they are getting nothing extra in this day and age is a business risk. Thus one must act and move when formerly acceptable platform deals go sour.
I've been talking to a database supplier whose USP appears to be "we're not as bad as Oracle".
However, that's only in terms of price per core (just) and virtualised licensing (at least that is by VM use). Sales techniques, not so good.
Going to move away from them and to the opensource product with consultancy support where/when required.
I don't know how many years of experience you have using Oracle databases. Cheaper is not always better. Oracle has been introducing very rapid change in its database, adding new features and flexibility to use different data models, which databases like MariaDB are far from matching. If they are not able to use Oracle DB properly, they need to look deep into their application structure, programming practices and data models. Unless they sort out those problems, they will run into the same problems with MariaDB as they had with Oracle. Any database migration motivated by cost alone will eventually fail.
I did it back in about the 2007-2008 time frame. When I was hired, the company was undergoing an Oracle audit. They were licensed for Oracle SE One; however, their DBA consultants had installed Oracle EE. I pushed to move to SE, but my manager didn't trust me yet, and we were assured that after we paid the fees everything was all cleared up. The following year we had another audit, and we failed again. This time I was able to lead the charge to move to Oracle SE, which at the time (and I assume still now) had a per-socket charge (max 4 sockets in a system) rather than per core.
So of course moving to Oracle SE was huge. As it turned out, really the only reason the consultants had installed EE was that their monitoring system used partitioning, which of course hit us with another license breach. I recall buying new CPUs for our Oracle systems, going from high-speed dual-core (better licensing for EE) to lower-speed quad-core (better licensing for SE). I ran into a limitation on the DL380G5 systems we had: our system boards were too old to run quad-core, which not even HP knew at the time (they later updated the spec sheets to reflect it), so they replaced the boards and installed the new chips.
From a VMware perspective we did two things: we limited where we used Oracle (didn't allow it to run on just any VM host), and we also ran our DL380G5s with a single socket, cutting licensing further. This officially wasn't supported by VMware (ESX 3.5) at the time, though I believe their intention was that they didn't support running a single CPU; I had no doubt a single quad-core CPU would work fine. VMware sold licenses at the time in pairs of sockets, so we used one socket license on one system and one on another, saving costs even more (in hindsight perhaps that was a VMware license violation; I never looked into that aspect). We never ended up needing VMware support, and Oracle had no issues with our new setup. I remember specifically having to educate our Oracle reps on Oracle SE's licensing being per socket. Our production OLTP servers ran on physical single-socket DL360G5s, partly because I wanted no performance impact from VMware, but also because Oracle's support policy at the time was "reproduce on physical hardware before we support you in a VM", and we didn't need that extra risk for the production OLTP. That, and we just had VMware Standard, so no VMotion, no HA etc. - just the basic hypervisor.
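The economics of that socket-and-core game are easy to sketch. The list prices and the 0.5 core factor below are placeholder assumptions for illustration, not Oracle's actual price list:

```python
# Hypothetical list prices - placeholders, not real Oracle pricing.
EE_PER_PROC = 47_500    # per-"processor" price under per-core EE licensing
CORE_FACTOR = 0.5       # typical x86 core factor in old EE price lists
SE_PER_SOCKET = 17_500  # per-socket price under SE licensing

def ee_cost(sockets: int, cores_per_socket: int) -> float:
    # EE: every core counts, scaled by the core factor
    return sockets * cores_per_socket * CORE_FACTOR * EE_PER_PROC

def se_cost(sockets: int, cores_per_socket: int) -> int:
    # SE: only sockets count (max 4 per system); the core count is
    # irrelevant, which is why swapping dual-core chips for quad-core paid off
    assert sockets <= 4, "SE was limited to 4-socket systems"
    return sockets * SE_PER_SOCKET

# Two-socket box: dual-core under EE vs quad-core under SE
print(ee_cost(2, 2))  # EE on 2x dual-core
print(se_cost(2, 4))  # SE on 2x quad-core - twice the cores, far cheaper
```

Under these made-up numbers, SE on quad-cores delivers double the cores at roughly a third of the EE bill, which is the whole shape of the trade described above.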
Obviously Oracle SE has far fewer capabilities than Oracle EE; the biggest one we missed at the time was Oracle Enterprise Manager with its query reports. Though at the time (10gR2, I think?) you were still able to install OEM on SE. It was easy to install and delete in case they came to audit again (they did not). From what I could tell, newer versions of Oracle made it impossible to install OEM on SE (or at least it wasn't dead easy like it was back then).
Though I'm sure that Oracle SE is very cost effective and far easier to migrate to than MariaDB - and not only that, Oracle SE has far more capability than MariaDB, even.
Unfortunately there is nothing in the MySQL world in 2020 that comes close to those Oracle query reports I had access to in 2008. Our DBA does log many queries and can run query reports, but it is a very time-consuming process and can carry quite a lot of overhead (if you're not careful, your query logs can easily run to multiple GB per hour, which means query reports can take hours to return results). I have used tools like ScaleArc and Heimdall, which are MySQL-aware proxies that come with real-time analytics, though they are limited in that they can only see the queries and the response times; they can't get in-depth metrics on what is going on inside the DB for a given query.
I do get tons of internal metrics on each MariaDB server we have, even query response times (I can't see WHAT the queries are, just the number of queries at given response-time thresholds) - probably 200-250 metrics per server per minute or something. But it still pales in comparison to the internal query-level metrics available in Oracle in real time.
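For what it's worth, a crude version of that kind of query report can be scraped out of the slow query log yourself. A minimal sketch, assuming the default MariaDB/MySQL slow-log format with "# Query_time:" header lines; the normalisation is deliberately naive, and a real tool like pt-query-digest does far more:

```python
import re
from collections import defaultdict

def digest(lines):
    """Aggregate [count, total_time] per normalised query from a
    MariaDB/MySQL slow query log."""
    stats = defaultdict(lambda: [0, 0.0])  # query -> [count, total_time]
    qtime = None
    for line in lines:
        m = re.match(r"# Query_time: ([\d.]+)", line)
        if m:
            qtime = float(m.group(1))
        elif qtime is not None and not line.startswith("#"):
            # crude normalisation: replace numeric literals so that
            # similar queries group together under one key
            key = re.sub(r"\b\d+\b", "?", line.strip().rstrip(";"))
            stats[key][0] += 1
            stats[key][1] += qtime
            qtime = None
    return dict(stats)
```

Feed it an open file handle on the slow log and sort the result by total time to get a poor man's "top queries" report; the overhead caveat above still applies, since you pay for the logging either way.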
In a more modern world, if I needed to run Oracle at larger scale in VMware I'd build a dedicated VMware cluster for nothing but Oracle DB: run the apps on other servers, limit the licensing impact. I did run a very small Oracle system at my current org for many years as the backend database for vCenter 5. Given that the choice was MSSQL, Oracle, or IBM DB2 (the internal vCenter DB cannot scale very high at all), for me the choice was obvious: Oracle on Linux. I used Named User Plus licensing (so not per CPU or per core) on Oracle SE, and it was dirt cheap, just a couple hundred a year or something. I eventually retired it almost 2 years ago after finishing the migration to vCenter 6.5 on VCSA, which uses an internal Postgres database.
I had many conversations with Oracle over the years; they never cared about auditing us (I was ready regardless). Though we did undergo our first VMware audit last year, and we failed. I wasn't aware we couldn't move 3 nodes of Essentials Plus licensing from our UK office, which had permanently shut down, to our California office. The licensing was only in use for a few months, but we still had to pay the fees (which meant buying a new Essentials license that we used for one month before transitioning to Enterprise Plus licensing, as I retired some older systems from our primary datacenter and gave them to our HQ office with the VM licensing intact).
Fortunately we didn't get hit with a VMware audit a couple of years earlier: the VAR that sold us our licensing for our datacenter kit in the EU bought everything in the U.S. (through HP) and shipped it to the EU. We would have had to pay probably more than 10x the fees for that, though that location was permanently closed 2 years ago (we moved systems to the U.S. for EU customers), and then the business decided to close all EU operations entirely last year.
Having a region lock on a license code is just so stupid. I can understand region locks for things like on-site support (we had to jump through a lot of hoops - too many, and it took too long - to ship our HP gear from Amsterdam to the U.S. and get it under support again), but otherwise it makes no sense to say you can't use a license code you bought in the EU on a U.S. system, even though the EU system is permanently shut down. It's not as if this was a site license, just a one-off license purchase.
You can create all the clusters you like, but if you have a single vCenter, or indeed two vCenters that are linked or managed by the same systems, it makes no difference to Oracle: you will have to license your entire VMware estate, as you may just move the VMs somewhere. This may or may not include any DR facility.
This senseless and arbitrary licensing is why I would fight tooth and nail against any Oracle software. Unfortunately we already pay the Java tax, as we use SAP. Oracle will keep buying in software and applying its stupid licensing until it bleeds everyone dry.
Sound testimony for avoiding Oracle altogether - it's a distraction for an IT shop, having to deal with plumber tactics.
For this reason, base your operation in Germany, where resale of licenses is legal, or in China or India, where this nonsense does not exist.
Databases are a commodity now. It's difficult to think of a good reason to pay for one. Think twice before putting it into a public cloud, though. Their performance can be unpredictable. Private cloud would be the appropriate place for a core workload like this.
I've claimed for years (if not decades) that most Oracle customers don't need it and can safely transition to MySQL, PostgreSQL or some cheaper proprietary product (such as MS SQL), saving them a bundle. Most Oracle customers are in the banking and financial sector, where cost wasn't really a concern as long as they had the bragging rights of having the fastest database.
However, now that licensing starts to move into the MILLIONS per year territory, even the financials are starting to scratch their heads wondering if Oracle is really worth these outrageous fees.
They pay for the engineer support. With some problems, you do get access to the engineers and those guys are seriously good.
If it's a bug that affects only you, then often in the Open Source world your edge case won't receive the attention that lots of others do. There isn't the dedicated resource with the specific knowledge to work at that scale for free and quickly.
You can buy support via contract, but that puts you more into the spend category. And you're still left with the problem that there are still limitations to the responsiveness (unless you contract for hefty sums to the people that are actively developing the whole product, which again moves into the big spend category).
Then you get things like Oracle, which do have the resources to work on the codebase at scale. But that's expensive. I've submitted bugs in the past that were definitely edge cases but got in the way of something I expected to work. Resolutions were rolled into the next set of hotfixes released.
I was pretty impressed. I've made bug reports to other DB vendors (including MS) and had nowhere near that response.
However, if I'm going to be putting that spend out there, it needs to be worth it. As with everything, know what you need, what's essential and what's merely wanted. Design using the right tools and spend covering essentials appropriately.
The article does not provide any clue whatsoever as to why they migrated to MariaDB. The issues talked about may not even need Oracle support at all. Most issues pertain to application SQL, bad data modelling and poor programming practices. Soon those practices will pop up in MariaDB as well, and they will blame MariaDB! We have been using Oracle on Exadata for quite a while now and it serves our needs quite well, without any need for a cheap migration. So despite the rants made in the comment box, Oracle is still the No. 1 DB.
Biting the hand that feeds IT © 1998–2021